The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
A New Satisfiability Algorithm With Applications To Max-Cut
Joachim Kneis, Peter Rossmanith
AIB 2005-08
We prove an upper bound of m/5.217+3 on the treewidth of a graph with
m edges. Moreover, we can always efficiently find a set of no more
than m/5.217 nodes whose removal yields a very simple graph. As an
application, we immediately get simple algorithms for several problems,
including Max-Cut, Max-2-SAT and Max-2-XSAT. The resulting algorithms
have running times of O*(2^(t/5.217)), where t is the number of distinct
clause types. In particular, this implies a record-breaking time
complexity of O*(2^(m/5.217)).
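The abstract does not spell out the algorithm itself, but the bound can be put in context against a naive baseline: exact Max-Cut by enumerating all 2^n bipartitions runs in O*(2^n), so the reported O*(2^(m/5.217)) is an improvement exactly when m < 5.217n. A minimal sketch of that baseline (names are illustrative, not from the report):

```python
from itertools import product

def max_cut_bruteforce(n, edges):
    """Exact Max-Cut by enumerating all 2^n bipartitions -- the O*(2^n)
    baseline that the report's O*(2^(m/5.217)) bound improves on
    whenever the graph has fewer than 5.217*n edges."""
    best = 0
    for side in product((0, 1), repeat=n):
        # count edges crossing the bipartition induced by `side`
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best
```

For a triangle the maximum cut has size 2; for an even cycle every edge can be cut.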
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Felix C. Freiling, Thorsten Holz, Georg Wicherski
Botnet Tracking: Exploring a Root-Cause Methodology to Prevent
Distributed Denial-of-Service Attacks
AIB 2005-07
Abstract:
Denial-of-Service (DoS) attacks pose a significant threat to the
Internet today, especially if they are distributed, i.e., launched
simultaneously from a large number of systems. Reactive techniques that
try to detect such an attack and throttle down malicious traffic prevail
today, but they usually require an additional infrastructure to be truly
effective. In this paper we show that preventive mechanisms can be just as
effective with much less effort: We present an approach to (distributed)
DoS attack prevention that is based on the observation that coordinated
automated activity by many hosts needs a mechanism to remotely control
them. To prevent such attacks, it is therefore possible to identify,
infiltrate and analyze this remote control mechanism and to stop it in
an automated fashion. We show that this method can be realized in the
Internet by describing how we infiltrated and tracked IRC-based botnets,
which are the main DoS technology used by attackers today.
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Adaptive Routing with Stale Information
Simon Fischer, Berthold Vöcking
AIB 2005-06
We investigate adaptive routing policies for large networks in which
agents reroute traffic based on old information. It is a well known and
practically relevant problem that old information can lead to undesirable
oscillation effects resulting in poor performance. We investigate how
adaptive routing policies should be designed such that these effects can
be avoided.
In our theoretical model, the network is represented by a general graph
with latency functions on the edges. Traffic is managed by a large number
of agents each of which is responsible for a negligible amount of traffic.
Initially the agents' routing paths are chosen in an arbitrary fashion.
From time to time each agent revises her routing strategy by sampling
another path and switching with positive probability to this path if it
promises smaller latencies. As the information on which the agent bases
her decision might be stale, however, this does not necessarily lead to
an improvement. The points of time at which agents revise their strategy
are generated by a Poisson process. Stale information is modelled
in the form of a bulletin board that is updated periodically and lists the
latencies on all edges.
We analyze such a distributed routing process in the so-called fluid
limit, that is, we use differential equations describing the fractions
of traffic on different paths over time. In our model, we can show the
following effects. Simple routing policies that always switch to the
better alternative lead to oscillation, regardless of how frequently
the bulletin board is updated. Oscillation effects can be avoided,
however, when using smooth adaption policies that do not always switch
to better alternatives but only with a probability depending on the
advantage in the latency. In fact, such policies have dynamics that
converge to a fixed point corresponding to a Nash equilibrium for the
underlying routing game, provided the update periods are not too large.
In addition, we analyze the speed of convergence towards approximate
equilibria for two specific variants of smooth adaptive routing policies,
among them a replication policy adopted from evolutionary game theory.
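The contrast between hard switching and smooth adaption can be illustrated with a toy fluid-limit simulation. This is a hedged sketch under assumed latency functions (two parallel links with l1(x) = x and l2(x) = 1.1 - x) and an assumed migration rule, not the paper's exact model; the bulletin board is refreshed only every `update_period` steps to model stale information:

```python
def simulate(policy_rate=0.5, update_period=5, steps=2000, dt=0.01):
    """Fluid-limit sketch of a smooth adaptive routing policy on two
    parallel links, where x is the fraction of traffic on link 1 and the
    latencies are l1(x) = x and l2(x) = 1.1 - x (assumed for illustration).
    Agents read latencies from a bulletin board refreshed only every
    `update_period` steps (stale information) and migrate at a rate
    proportional to the latency advantage -- a smooth policy, unlike a
    best-response policy that always switches to the better link."""
    x = 0.9                       # initial fraction of traffic on link 1
    board = (x, 1.1 - x)          # possibly stale latencies (l1, l2)
    for t in range(steps):
        if t % update_period == 0:
            board = (x, 1.1 - x)  # periodic bulletin-board update
        l1, l2 = board
        if l1 > l2:               # some agents on link 1 move to link 2
            x -= policy_rate * x * (l1 - l2) * dt
        elif l2 > l1:             # some agents on link 2 move to link 1
            x += policy_rate * (1 - x) * (l2 - l1) * dt
    return x
```

The dynamics settle at the equilibrium point l1 = l2, i.e. x = 0.55; replacing the probabilistic migration with a hard switch of all agents would instead make x jump back and forth across the equilibrium once per update period.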
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Fabien Pouget, Thorsten Holz:
A Pointillist Approach for Comparing Honeypots
AIB 2005-05
The concept of electronic decoys ("honeypots"), which are network
resources that are deployed to be probed, attacked, and eventually
compromised, is used in the area of IT security to learn more about
attack patterns and attackers' behavior in real-world networks. Our
research focuses on gathering detailed statistics on the threats over a
long period of time in order to get a better understanding of their
characteristics. From this perspective, we are deploying honeypots of
different interaction levels in various locations. At first glance,
these honeypots can be considered permanent sensors that gather
statistical information over the long term.
Honeypots are commonly classified by their level of
interaction. For instance, it is widely accepted that a high interaction
approach is suited for recording hacker shell commands, while a low
interaction approach provides limited information on the attackers'
activities. So far, however, no rigorous comparison of the level of
information the two approaches provide has been undertaken. Thanks to the
environment that we are deploying, we are able to provide a rigorous
comparison between the two approaches, both qualitatively and
quantitatively. The proposed analysis leads to an interesting study of
malicious activities hidden by the noise of less interesting ones.
Furthermore, it shows the complementarity of the two approaches: a
high interaction honeypot makes it possible to assess the relevance of low
interaction honeypot configurations. Thus, both interaction levels are
required to build an efficient network of distributed honeypots.
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Jürgen Giesl, René Thiemann, Peter Schneider-Kamp:
Proving and Disproving Termination of Higher-Order Functions
AIB 2005-03
The dependency pair technique is one of the most powerful methods for
automated termination proofs of term rewrite systems (TRSs). We present
two important extensions of this technique: First, we show how to prove
termination of higher-order functions with dependency pairs. To this
end, the dependency pair technique is extended to handle (untyped)
applicative TRSs. Second, we introduce a method to prove
non-termination within the dependency pair framework, whereas up to now
dependency pairs had only been used to verify termination. We implemented
and evaluated
our results in the automated termination prover AProVE.
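For readers unfamiliar with the technique, the classical first-order dependency pair construction (the starting point that the report extends to applicative, higher-order systems) can be sketched in a few lines; the term representation below is an assumption for illustration:

```python
def subterms(t):
    """All subterms of a term; terms are tuples (symbol, arg1, ...),
    variables are plain strings."""
    if isinstance(t, str):
        return
    yield t
    for arg in t[1:]:
        yield from subterms(arg)

def dependency_pairs(rules):
    """Classical first-order dependency pairs: for each rule l -> r and
    each subterm s of r whose root is a defined symbol, emit the pair
    (l#, s#), where '#' marks the root symbol.  (A sketch of the base
    technique only -- not the report's applicative/higher-order extension.)"""
    defined = {lhs[0] for lhs, _ in rules}
    mark = lambda t: (t[0] + '#',) + t[1:]
    return [(mark(lhs), mark(s))
            for lhs, rhs in rules
            for s in subterms(rhs) if s[0] in defined]
```

For the rules minus(x, 0) -> x and minus(s(x), s(y)) -> minus(x, y), the only dependency pair is minus#(s(x), s(y)) -> minus#(x, y); termination then follows if no infinite chain of such pairs exists.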
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Daniel Mölle, Stefan Richter, Peter Rossmanith:
A Faster Algorithm for the Steiner Tree Problem
AIB 2005-04
The best known algorithm for the Steiner tree problem, due to Dreyfus
and Wagner, is 33 years old. We celebrate this occasion and present a
variation on this theme: a new algorithm that improves the running time
from O(3^k n^3) to O((2+epsilon)^k poly(n)).
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Maximillian Dornseif, Felix C. Gärtner, Thorsten Holz, Martin Mink
An Offensive Approach to Teaching Information Security: "Aachen Summer School Applied IT Security"
AIB 2005-02
Abstract:
There is a general consensus that courses on data security at university
degree level should be research-oriented and teach fundamentals of the
field, i.e., items of long-term knowledge in contrast to
technology-oriented system knowledge. Unfortunately, this consensus
often results in courses that are either too theoretical or outdated
with respect to current developments in security technology. To
understand the importance of information security, students should have
the opportunity to gain practical experience of how security systems fail,
using offensive techniques.
In this article, we give an overview of a three-week intensive course
on applied computer security held at RWTH Aachen University. It
brought together students from various countries and with different
previous knowledge. We describe in detail the course outline, course
contents and the lessons learned.
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Klaus Indermark, Thomas Noll:
Algebraic Correctness Proofs for Compiling Recursive Function
Definitions with Strictness Information
AIB 2004-08
Abstract:
By adding appropriate strictness information to recursive function
definitions, we achieve a uniform treatment of lazy and eager evaluation
strategies. By restriction to first-order functions over basic types we
develop a pure stack implementation that avoids a heap even for lazy
arguments. We present algebraic definitions of denotational,
operational, and stack-machine semantics and prove their equivalence
by means of structural induction.
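The lazy/eager distinction driven by strictness annotations can be illustrated with a small sketch (heap-based closures, unlike the report's pure stack implementation): non-strict arguments are passed as memoized thunks and forced only on use. All names here are illustrative:

```python
forced = []  # records which thunks were actually evaluated

def thunk(name, compute):
    """Delay `compute` until first use and memoize the result
    (call-by-need); `name` only serves to observe evaluation."""
    cell = {}
    def force():
        if 'value' not in cell:
            forced.append(name)
            cell['value'] = compute()
        return cell['value']
    return force

def f(x, y):
    """f(x, y) = if x = 0 then 1 else f(x - 1, y), with the second
    argument declared non-strict: y is received as a thunk and, since
    the body never forces it, is never evaluated.  Under an eager
    calling convention, f(3, 1/0) would fail before the body runs."""
    if x == 0:
        return 1
    return f(x - 1, y)
```

Calling f(3, thunk('y', lambda: 1/0)) returns 1 without raising, whereas a strict second argument would evaluate 1/0 at the call site; the strictness annotation thus selects between the two semantics uniformly, in the spirit of the report.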
Regards,
Volker
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Zinaida Benenson, Felix C. Gärtner, Dogan Kesdogan
Secure Multi-Party Computation with Security Modules
AIB 2004-10
We consider the problem of secure multi-party computation (SMC) in a
new model where individual processes contain a tamper-proof security
module. Security modules can be trusted by other processes and can
establish secure channels between each other. However, their
availability is restricted by their host, i.e., a corrupted party
can stop the computation of its own security module as well as drop
any message sent by or to its security module. In this model we show
that SMC is solvable if and only if a majority of processes is
correct. We prove this by relating SMC to the problem of Uniform
Interactive Consistency among security modules (a variant
of the Byzantine Generals Problem from the area of
fault-tolerance). The obtained solutions to SMC allow, for the first
time, any function to be computed securely with a complexity that is
polynomial in the number of processes (i.e., the complexity
does not depend on the function being computed). We conclude
that adding secure hardware does not improve the resilience of SMC
but can effectively improve the efficiency.
The following technical report is available from
http://aib.informatik.rwth-aachen.de/:
Joachim Kneis, Daniel Mölle, Stefan Richter, Peter Rossmanith
Parameterized Power Domination Complexity
AIB 2004-09
Abstract:
The optimization problem of measuring all nodes in an electrical network
by placing as few phase measurement units (PMUs) as possible is known as
Power Dominating Set. Nodes can be measured indirectly according to
Kirchhoff's law. We show that this problem can be solved in linear time
for graphs of bounded treewidth, and we establish bounds on its
parameterized complexity when the number of PMUs is the parameter.
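The measurement rules can be made concrete with a small brute-force sketch (a common formalization of Power Dominating Set; the report's linear-time algorithm for bounded treewidth is far more involved). The graph encoding and function names are assumptions for illustration:

```python
from itertools import combinations

def power_dominates(adj, pmus):
    """A PMU measures its own node and all neighbors; afterwards the
    Kirchhoff-style propagation rule applies: whenever a measured node
    has exactly one unmeasured neighbor, that neighbor becomes measured."""
    measured = set(pmus)
    for v in pmus:
        measured |= adj[v]
    changed = True
    while changed:
        changed = False
        for v in list(measured):
            unmeasured = adj[v] - measured
            if len(unmeasured) == 1:   # propagation rule fires
                measured |= unmeasured
                changed = True
    return len(measured) == len(adj)

def min_power_dominating_set(adj):
    """Exhaustive search over all PMU placements, smallest first --
    exponential in general, in contrast to the report's linear-time
    algorithm for graphs of bounded treewidth."""
    nodes = sorted(adj)
    for k in range(1, len(nodes) + 1):
        for pmus in combinations(nodes, k):
            if power_dominates(adj, pmus):
                return set(pmus)
```

A single PMU suffices on a path, since propagation walks down the line one node at a time.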