The following technical report is available from
http://aib.informatik.rwth-aachen.de:
An L-Attributed Grammar for Adjoint Code
Uwe Naumann
AIB 2007-12
Gradients of high-dimensional functions can be computed efficiently
and with machine accuracy by so-called adjoint codes. We present an
L-attributed grammar for the single-pass generation of intraprocedural
adjoint code for a simple imperative language (a subset of C). Our ideas
can easily be applied to any programming language that is suitable for
syntax-directed translation. Moreover, the conceptual insights are useful
in the context of multi-pass generation of adjoint code. Our focus is
on correctness. The necessary domain-specific code optimizations are
beyond the scope of this paper. We give references to corresponding work
in this area.
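The core transformation that generated adjoint code performs can be illustrated by a minimal hand-written sketch for a single assignment. The function and variable names below are our own illustration, not taken from the report:

```python
import math

def f(x1, x2):
    # forward sweep: compute the result and record ("tape") the
    # intermediate value needed later by the adjoint sweep
    v = math.sin(x1)
    y = v * x2
    return y, v

def f_adj(x1, x2, y_bar=1.0):
    # adjoint (reverse) sweep: propagate the output adjoint y_bar
    # back through each assignment, yielding the gradient of y
    # with respect to (x1, x2) at machine accuracy
    y, v = f(x1, x2)
    v_bar = x2 * y_bar             # from y = v * x2
    x2_bar = v * y_bar
    x1_bar = math.cos(x1) * v_bar  # from v = sin(x1)
    return x1_bar, x2_bar

g1, g2 = f_adj(0.5, 3.0)
```

A single-pass, syntax-directed generator as described in the report would emit the forward and reverse statements above mechanically while parsing the source assignment.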
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Achieving Highly Reliable Embedded Software: An empirical evaluation of
different approaches
Falk Salewski and Stefan Kowalewski
AIB 2007-08
Designing highly reliable embedded software is a challenge and several
approaches are known to improve the reliability of this software.
However, each approach has its own advantages and disadvantages, which
makes empirical evaluations of their potential necessary. In
this paper, different approaches to software reliability improvement for
embedded systems were compared on the basis of experiments conducted at our
institute. The first approach is an instance of N-version programming
based on forced diversity. Two fundamentally diverse hardware platforms
(microcontroller and CPLD/FPGA) were used to force diversity. Another
experiment was conducted in which participants designed their software
on one hardware platform only. The second half of this experiment was
used for review and testing. Based on our experiments, the potentials
of our application of N-version programming, review and testing are
compared with respect to different fault categories (specification,
implementation, application) identified during evaluation.
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
A Probabilistic Justification of the Combining Calculus under the Uniform
Scheduler Assumption
Tina Kraußer, Heiko Mantel, and Henning Sudbrock
AIB 2007-09
The combining calculus provides a framework for analyzing the information
flow of multi-threaded programs. The calculus incorporates so-called
plug-in rules for integrating several previously existing analysis
techniques. By applying a plug-in rule to a subprogram, one decides
to analyze this subprogram with the given analysis technique, and not
with the rules of the combining calculus. The novelty of the combining
calculus is that one can analyze the information flow security of a
given program by using multiple analysis techniques in combination. It
was demonstrated that this flexibility leads to a more precise analysis,
allowing one to successfully verify the security of some programs
that cannot be verified with any of the existing analysis techniques
in isolation.
In MSK07, the soundness of the combining calculus is proved for a
possibilistic characterization of information flow security. This
characterization assumes a purely nondeterministic scheduling of
concurrent threads. In this report, we demonstrate that the combining
calculus is also sound for a probabilistic characterization of security
that assumes a uniform scheduler. This result further increases the
confidence in the combining calculus as a reliable and flexible tool
for formally analyzing the information flow security of multi-threaded
programs.
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
MeDUSA - MethoD for UML2-based Design of Embedded Software Applications
Alexander Nyßen, Horst Lichter
AIB 2007-07
MeDUSA (Method for UML2-based Design of Embedded Software Applications)
is a model-driven software design method targeting the domain of small
embedded systems, especially field devices.
Being Use Case-driven, MeDUSA systematically covers the software
development lifecycle from the early requirements up to the late detailed
design modelling. Models are successively developed and employed throughout
all activities.
By enforcing an object-based rather than an object-oriented design,
a smooth transition of the resulting detailed design model towards an
implementation in a procedural programming language is facilitated. This
is essential, as procedural programming languages such as C are
still state of the art in this domain.
By leading to a component-based architectural design, MeDUSA explicitly
addresses the reuse of components, something that is the prerequisite for
the application of the method in a product-line setting. This has gained
significant importance in industrial practice in recent years.
MeDUSA was developed by the Research Group Software Construction of the
RWTH Aachen University in close cooperation with the German ABB Research
Centre in Ladenburg. It incorporates various practical experiences gained
during the industrial development of embedded software in ABB Business
Unit Instrumentation.
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Verifying Concurrent List-Manipulating Programs by LTL Model Checking
Joost-Pieter Katoen, Thomas Noll, and Stefan Rieger
AIB 2007-06
We present a novel approach to the verification of concurrent
pointer-manipulating programs which operate on singly-linked lists. By
abstracting from chains (i.e., non-interrupted sublists) in the heap,
we obtain a finite-state representation of all possible executions
of a given program. The combination of a simple pointer logic for
expressing heap properties and of temporal operators then allows us to
employ standard LTL model checking techniques. The usability of this
approach is demonstrated by establishing correctness properties of a
producer/consumer system and of a concurrent garbage collector.
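The chain abstraction can be sketched roughly as follows. The representation (a successor map plus the set of cells directly referenced by program variables) and the summary value "2+" are our own illustrative choices, not the paper's exact formalism:

```python
def abstract_heap(next_ptr, var_cells):
    """Collapse non-interrupted sublists (chains) between cells directly
    pointed to by program variables into single abstract edges.
    next_ptr: dict mapping cell -> successor cell (or None);
    var_cells: cells held by pointer variables.
    Returns abstract edges (source, target, length abstraction)."""
    edges = []
    for src in var_cells:
        cur, length = next_ptr.get(src), 1
        # follow the chain until another variable-referenced cell or nil
        while cur is not None and cur not in var_cells:
            cur, length = next_ptr.get(cur), length + 1
        # abstracting all chain lengths >= 2 to the summary value "2+"
        # is what makes the set of abstract heaps finite
        edges.append((src, cur, "1" if length == 1 else "2+"))
    return edges

# two lists with different interior lengths map to the same abstraction
short = abstract_heap({"a": "n1", "n1": "b", "b": None}, {"a", "b"})
long_ = abstract_heap({"a": "n1", "n1": "n2", "n2": "b", "b": None}, {"a", "b"})
```

Since the abstract state space is finite, all executions of the program can be enumerated and checked against an LTL formula with standard model checking machinery.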
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
On Optimal DAG Reversal
Uwe Naumann
AIB 2007-05
Runs of numerical computer programs can be visualized as directed acyclic
graphs (DAGs). We consider the problem of restoring the intermediate
values computed by such a program (the vertices in the DAG) in reverse
order for a given upper bound on the available memory. The minimization
of the associated computational cost in terms of the number of performed
arithmetic operations is shown to be NP-complete. The reversal of the
data-flow finds application, for example, in the efficient evaluation
of adjoint numerical programs. We derive special cases of numerical
programs that require the intermediate values exactly in reverse order,
thus establishing the NP-completeness of the optimal adjoint computation
problem. Last but not least, we review some state-of-the-art approaches
to efficient data-flow reversal taken by existing software tools for
automatic differentiation.
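The store/recompute trade-off behind data-flow reversal under a memory bound can be sketched for a simple iterative program. The uniform-stride checkpointing below is a simplified illustration, not the optimal strategy whose complexity the report studies:

```python
def reverse_values(step, x0, n, max_stored):
    """Yield the intermediate values x_1..x_n of the iteration
    x_{i+1} = step(x_i) in REVERSE order, storing only about
    `max_stored` checkpoints and recomputing the rest from the
    nearest checkpoint (a store/recompute trade-off)."""
    stride = max(1, -(-n // max_stored))  # ceil(n / max_stored)
    ckpt = {0: x0}
    x = x0
    # forward sweep: store every `stride`-th value as a checkpoint
    for i in range(1, n + 1):
        x = step(x)
        if i % stride == 0:
            ckpt[i] = x
    # reverse sweep: recompute each value from its nearest checkpoint
    for i in range(n, 0, -1):
        base = max(k for k in ckpt if k <= i)
        x = ckpt[base]
        for _ in range(i - base):
            x = step(x)
        yield x

# e.g. the values of x -> 2x + 1 starting from 1, restored in reverse
rev = list(reverse_values(lambda x: 2 * x + 1, 1, 6, 2))
```

Choosing where to place the checkpoints so as to minimize the recomputation cost is exactly the optimization problem whose NP-completeness the report establishes for general DAGs.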
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
coJIVE: A System to Support Collaborative Jazz Improvisation
Jan Buchholz, Eric Lee, Jonathan Klein, and Jan Borchers
AIB 2007-04
Jazz improvisation is a complex and demanding art of on-the-fly
composition and performance. We present coJIVE, a system that allows
musically inexperienced people to improvise jazz by compensating for
missing musical knowledge and experience. Using well-established musical
theory, it aids users in creating a harmonic performance, and also
helps coordinate a collaborative session amongst the participants.
Remarks from users during evaluation showed an overall positive response
to the support provided by the system.
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Proving Termination by Bounded Increase
Jürgen Giesl, René Thiemann, Stephan Swiderski, and Peter Schneider-Kamp
AIB 2007-03
Most methods and tools for termination analysis of term rewrite systems
(TRSs) essentially try to find arguments of functions that decrease
in recursive calls. However, they fail if the reason for termination
is that an argument is increased in recursive calls repeatedly until
it reaches a bound. In this paper, we solve that problem and present a
method to prove termination of TRSs with bounded increase automatically.
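A tiny illustration of the termination pattern the abstract describes, written as a recursive function rather than a TRS (the example is ours, not from the paper):

```python
# This recursion terminates because i INCREASES towards the bound n:
# classical methods looking for a DECREASING argument fail here, but
# the measure n - i decreases by 1 on every recursive call and is
# bounded below by 0, which is the "bounded increase" argument.
def count_up(i, n):
    if i >= n:                    # bound reached: stop
        return i
    return count_up(i + 1, n)     # i increases in the recursive call
```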
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
SAT Solving for Termination Analysis with Polynomial Interpretations
Carsten Fuhs, Jürgen Giesl, Aart Middeldorp, Peter Schneider-Kamp,
René Thiemann, and Harald Zankl
AIB 2007-02
Polynomial interpretations are one of the most popular techniques
for automated termination analysis. Therefore, the search for such
interpretations is a main bottleneck in most termination provers. We
show that one can obtain speedups in orders of magnitude by encoding
this task as a SAT problem and by applying modern SAT solvers.
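For illustration, a polynomial interpretation orienting a single rewrite rule; the rule and the coefficients are our own example, and a real prover verifies the decrease symbolically rather than by sampling. Finding coefficients like these automatically is the search task the paper encodes as SAT:

```python
# Interpretation [s](x) = x + 1, [double](x) = 3*x orients the rule
#   double(s(x)) -> s(s(double(x)))
# because [lhs](x) = 3*(x + 1) = 3x + 3 and [rhs](x) = 3x + 2,
# so the left-hand side is strictly greater for all x >= 0.
def s(x):
    return x + 1

def double(x):
    return 3 * x

lhs = lambda x: double(s(x))      # interpretation of double(s(x))
rhs = lambda x: s(s(double(x)))   # interpretation of s(s(double(x)))

decrease_holds = all(lhs(x) > rhs(x) for x in range(1000))
```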
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Parallel Algorithms for Verification of Large Systems
Michael Weber
AIB 2006-02
The model-checking problem is the question whether a given system model
satisfies a given property. The property is usually given as a formula of a
temporal logic, and the system model as a labelled transition system.
However, the well-known state-space explosion effect yields transition
systems of exponential size compared to their description, and common
sequential algorithms are often unable to solve the model-checking
problem with the resources available on a single computer.
In this thesis, we develop parallel and, in particular, distributed
algorithms which exploit the combined resources of a network of commodity
workstations to solve problem instances which are beyond the capabilities
of today's sequential algorithms.
In a second part, we investigate ways to efficiently generate (low-level)
transition systems suitable for many verification tools from compact
high-level descriptions of the input model. We propose a virtual-machine
based approach, which uses an intermediate format to break the translation
from high-level to low-level representations of a model into two steps.
This well-known compiler technique simplifies the translation and still
is very fast in practice.
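The idea of spreading the state space over several workers can be sketched sequentially. The hash-based ownership scheme below is a common design in distributed model checking, shown here as an assumption rather than the thesis' exact algorithm:

```python
from collections import deque

def partitioned_reachability(initial, successors, n_workers):
    """Sketch of distributed state-space exploration: every state is
    owned by worker hash(state) % n_workers; newly generated successors
    are forwarded to their owner's queue, so each worker stores only
    its own partition of the visited set."""
    visited = [set() for _ in range(n_workers)]   # per-worker state store
    queues = [deque() for _ in range(n_workers)]  # per-worker work queue
    queues[hash(initial) % n_workers].append(initial)
    while any(queues):
        for w in range(n_workers):
            while queues[w]:
                state = queues[w].popleft()
                if state in visited[w]:
                    continue
                visited[w].add(state)
                for succ in successors(state):
                    # forward each successor to its owning worker
                    queues[hash(succ) % n_workers].append(succ)
    return set().union(*visited)

# toy transition system on the states 0..9
reach = partitioned_reachability(0, lambda s: [(2 * s) % 10, (s + 3) % 10], 4)
```

In an actual distributed implementation the inner forwarding step becomes message passing between workstations, and the combined memory of the network holds state spaces no single machine could.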