The following technical report is available from
http://aib.informatik.rwth-aachen.de:
HotBox: Testing Temperature Effects in Sensor Networks
Florian Schmidt, Matteo Ceriotti, Niklas Hauser, and Klaus Wehrle
AIB 2014-14
Low-power wireless networks, especially in outside deployments, are
exposed to a wide range of temperatures. The detrimental effect of high
temperatures on communication quality is well known. To investigate
these influences under controlled conditions, we present HotBox, a
solution with the following properties: (1) It allows exposure of sensor
motes to a wide range of temperatures with a high degree of accuracy.
(2) It supports specifying the exact spatial orientation of motes,
which, if not controlled, interferes with repeatable experiment setups. (3)
It is reasonably easy to assemble by following the information (code,
PCB schematics, hardware list and crafting instructions) available
online, facilitating further use of the platforms by other researchers.
After presenting HotBox, we evaluate its performance and demonstrate its
suitability as an evaluation platform by conducting several experiments.
These experiments additionally provide further insight into the
influence of temperature effects on communication performance in
low-power wireless networks.
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Verifying Probabilistic Systems: New Algorithms and Complexity Results
Hongfei Fu
AIB 2014-16
The content of the dissertation falls in the area of formal verification
of probabilistic systems.
It comprises four parts listed below:
1. the decision problem of (probabilistic) simulation preorder between
probabilistic pushdown automata (pPDAs) and finite probabilistic
automata (fPAs);
2. the decision problem of a bisimilarity metric on finite probabilistic
automata (fPAs);
3. the approximation problem of acceptance probability of
deterministic timed automata (DTA) objectives on continuous-time Markov
chains (CTMCs);
4. the approximation problem of cost-bounded reachability probability on
continuous-time Markov decision processes (CTMDPs).
The first two parts are concerned with equivalence checking on
probabilistic automata, where probabilistic automata (PAs) are an
analogue of discrete-time Markov decision processes that involves both
non-determinism and discrete-time stochastic transitions. The last two
parts are concerned with numerical algorithms on Markov jump processes.
In Part 1 and Part 2, we mainly focus on complexity issues; as for Part
3 and Part 4, we mainly focus on numerical approximation algorithms.
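To make the CTMC part concrete: approximating transient probabilities, a standard ingredient when computing acceptance probabilities of timed objectives, is classically done by uniformization. The following Python sketch is illustrative only and is not the algorithm of the dissertation; the CTMC is assumed to be given as a generator matrix Q (row sums zero, non-negative off-diagonal rates):

```python
import math

def uniformize(Q, pi0, t, eps=1e-9):
    """Approximate the transient distribution pi0 * exp(Q t) of a CTMC
    with generator matrix Q via uniformization."""
    n = len(Q)
    q = max(-Q[i][i] for i in range(n)) or 1.0  # uniformization rate
    # Discretized transition matrix P = I + Q / q
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    pi = list(pi0)                    # pi0 * P^k, updated in place
    result = [0.0] * n
    k, poisson, acc = 0, math.exp(-q * t), 0.0  # Poisson(k; q t) weight
    while acc < 1.0 - eps:            # stop once enough Poisson mass is summed
        for i in range(n):
            result[i] += poisson * pi[i]
        acc += poisson
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        k += 1
        poisson *= q * t / k
    return result
```

The truncation point is chosen adaptively: the loop stops once the accumulated Poisson weights cover all but eps of the probability mass.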
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Proceedings of the International Joint Workshop on Implementation of
Constraint and Logic Programming Systems and Logic-based Methods in
Programming Environments 2014
Thomas Ströder and Terrance Swift (Editors)
AIB 2014-09
Software plays a crucial role in modern society. While the continuous
advent of faster, smaller and more powerful computing devices makes the
development of new and interesting applications feasible, it puts even
more demands on the software developer. Indeed, while software keeps on
growing in size and complexity, it is more than ever required to be
delivered on time, free of error and to meet the most stringent
efficiency requirements. Consequently, it is widely recognized that
there is a need for methods and tools that support the programmer in
every aspect of the software development process.
Having logic as the underlying formalism means that logic-based analysis
techniques are often successfully used for program verification and
optimization. Emerging programming paradigms and growing complexity of
the properties to be verified pose new challenges for the community,
while emerging reasoning techniques can be exploited.
The International Colloquium on Implementation of Constraint and LOgic
Programming Systems (CICLOPS) provides a forum to discuss the design,
implementation, and optimization of logic, constraint (logic)
programming systems, and other systems based on logic as a means of
expressing computations. Experience backed up by real implementations
and their evaluation is given preference, as well as descriptions of
work in progress in that direction.
The aim of the Workshop on Logic-based methods in Programming
Environments (WLPE) is to provide an informal meeting for researchers
working on logic-based methods and tools that support program
development and analysis. As in recent years, these topics include not
only environmental tools for logic programming, but increasingly also
logic-based environmental tools for programming in general and
frameworks and resources for sharing in the logic programming community.
The combination of these two areas of interest in this year’s joint
workshop provides a forum to jointly discuss the state of the art for
using logic both in the evaluation of programs and in meta-reasoning
about programs.
Due to the strong overlap between the CICLOPS-WLPE community and several
FLoC communities (in particular logic (programming), verification,
automated reasoning, rewriting techniques, and SAT solving), the
workshop is affiliated to several conferences:
- 30th International Conference on Logic Programming (ICLP)
- 26th International Conference on Computer Aided Verification (CAV)
- 7th International Joint Conference on Automated Reasoning (IJCAR)
- Joint meeting of the 23rd EACSL Annual Conference on Computer Science
Logic (CSL) and the 9th ACM/IEEE Symposium on Logic in Computer Science
(LICS)
- 25th International Conference on Rewriting Techniques and Applications
(RTA) joined with the 12th International Conference on Typed Lambda
Calculi and Applications (TLCA)
- 17th International Conference on Theory and Applications of
Satisfiability Testing (SAT)
In 2014, CICLOPS-WLPE also joins its program with the 11th International
Workshop on Constraint Handling Rules (CHR) and has one joint session
together with the Workshop on Probabilistic Logic Programming (PLP).
The International Joint Workshop on Implementation of Constraint and
Logic Programming Systems and Logic-based Methods in Programming
Environments 2014 consists of nine regular submissions and two invited
talks. These informal proceedings contain the regular papers and the
abstracts of the two invited talks.
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Generating Inductive Predicates for Symbolic Execution of
Pointer-Manipulating Programs
Christina Jansen, Florian Göbe, and Thomas Noll
AIB 2014-08
Separation Logic (SL) is an extension of Hoare Logic that supports
reasoning about pointer-manipulating programs. It employs
inductively-defined predicates for specifying the (dynamic) data
structures maintained at runtime, such as lists or trees. To support
symbolic execution, SL introduces abstraction rules that yield symbolic
representations of concrete heap states. Whenever concrete program
instructions are to be applied, so-called unrolling rules are used to
locally concretise the symbolic heap. To enable automatic generation of
a complete set of unrolling rules, however, predicate definitions have
to meet certain requirements, which are currently only implicitly
specified and manually established.
To tackle this problem, we exploit the relationship between SL and
hyperedge replacement grammars (HRGs). The latter represent (abstracted)
heaps by hypergraphs containing placeholders whose interpretation is
specified by grammar rules. Earlier work has shown that the correctness
of heap abstraction using HRGs requires certain structural properties,
such as increasingness, which can automatically be established. We show
that it is exactly the Separation Logic counterparts of those properties
that enable a direct generation of abstraction and unrolling rules from
predicate definitions for symbolic execution.
Technically, this result is achieved by first providing formalisations
of the structural properties in SL. We then extend an existing
translation between SL and HRGs such that it covers all HRGs describing
data structures, and show that it preserves these structural properties.
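To illustrate what an unrolling rule does, here is a toy Python sketch of unrolling the standard list-segment predicate ls(x, y) := (x = y /\ emp) \/ (exists z. x |-> z * ls(z, y)). The heap encoding and all names are invented for illustration and do not reflect the paper's SL or HRG formalism:

```python
# Hypothetical mini-encoding of symbolic heaps: a heap is a list of atoms,
# an atom is ("pts", x, y) for x |-> y, ("ls", x, y) for a list-segment
# predicate call, or ("eq", x, y) for a pure equality.

fresh_counter = 0

def fresh():
    """Return a fresh logical variable name."""
    global fresh_counter
    fresh_counter += 1
    return f"_v{fresh_counter}"

def unroll_ls(heap, idx):
    """Unroll the ls atom at position idx, returning one symbolic heap per
    rule of the predicate definition (local concretisation)."""
    kind, x, y = heap[idx]
    assert kind == "ls"
    rest = heap[:idx] + heap[idx + 1:]
    base = rest + [("eq", x, y)]                 # empty-segment case
    z = fresh()
    step = rest + [("pts", x, z), ("ls", z, y)]  # non-empty case: x |-> z * ls(z, y)
    return [base, step]
```

Applying unroll_ls to a heap containing ls(x, y) yields two symbolic heaps, one per disjunct of the predicate, so a concrete instruction accessing x can then be executed on the partially concretised heap.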
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Algorithmic Differentiation of Numerical Methods: Second-Order Tangent
and Adjoint Solvers for Systems of Parametrized Nonlinear Equations
Niloofar Safiran, Johannes Lotz, and Uwe Naumann
AIB 2014-07
Forward and reverse modes of algorithmic differentiation (AD) transform
implementations of multivariate vector functions as computer programs
into tangent and adjoint code, respectively. The reapplication of the
same ideas yields higher derivative code. In particular, second
derivatives play an important role in nonlinear programming.
Second-order methods based on Newton's algorithm promise faster
convergence in the neighbourhood of the minimum by taking into account
second derivative information. The adjoint mode is of particular
interest in large-scale nonlinear optimization because its computational
cost is independent of the number of free variables. We consider solvers
for a parametrized system of n equations embedded into the evaluation of
the objective function of a (without loss of generality) unconstrained
nonlinear optimization problem. Computing the Hessian of the objective
with respect to the free variables then requires second derivatives of
the nonlinear solver.
The local computational overhead as well as the additional memory
requirement for the computation of second-order tangents/adjoints of the
solution vector with respect to parameters by a fully discrete method
(derived by AD) can quickly become prohibitive for large values of n.
Both can be reduced drastically by the second-order continuous approach to
differentiation of the underlying numerical method to be discussed in
this paper.
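The contrast between discrete and continuous differentiation of a solver can be sketched for the scalar case: instead of differentiating every Newton iteration, the continuous approach differentiates the solved equation F(x(p), p) = 0 itself. A minimal Python illustration (assuming F is smooth and dF/dx is nonzero at the solution):

```python
def newton_solve(F, dFdx, x0, p, tol=1e-12):
    """Solve F(x, p) = 0 for x by Newton's method (scalar case)."""
    x = x0
    for _ in range(100):
        fx = F(x, p)
        if abs(fx) < tol:
            break
        x -= fx / dFdx(x, p)
    return x

def tangent(dFdx, dFdp, x, p):
    """First-order tangent dx/dp via the implicit function theorem:
       F(x(p), p) = 0  =>  dx/dp = -F_p / F_x."""
    return -dFdp(x, p) / dFdx(x, p)

def second_tangent(dFdx, d2Fdxx, d2Fdxp, d2Fdpp, x, p, dxdp):
    """Second-order tangent from differentiating F(x(p), p) = 0 twice:
       F_xx (x')^2 + 2 F_xp x' + F_pp + F_x x'' = 0."""
    num = d2Fdxx(x, p) * dxdp ** 2 + 2.0 * d2Fdxp(x, p) * dxdp + d2Fdpp(x, p)
    return -num / dFdx(x, p)

# Example: F(x, p) = x^2 - p, so x(p) = sqrt(p),
# x'(p) = 1 / (2 sqrt(p)), x''(p) = -1 / (4 p^(3/2)).
F   = lambda x, p: x * x - p
Fx  = lambda x, p: 2.0 * x
Fp  = lambda x, p: -1.0
Fxx = lambda x, p: 2.0
Fxp = lambda x, p: 0.0
Fpp = lambda x, p: 0.0

x   = newton_solve(F, Fx, 1.0, 4.0)                  # x = 2.0
dx  = tangent(Fx, Fp, x, 4.0)                        # 0.25
d2x = second_tangent(Fx, Fxx, Fxp, Fpp, x, 4.0, dx)  # -1/32
```

The cost of the tangents here is independent of the number of Newton iterations, which is the essence of the advantage over the fully discrete (differentiate-the-iteration) method.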
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
dco/c++ User Guide
Uwe Naumann, Klaus Leppkes, and Johannes Lotz
AIB 2014-03
dco/c++ is a highly flexible and efficient implementation of first- and
higher-order tangent-linear and adjoint Algorithmic Differentiation (AD)
by operator overloading in C++. It combines a cache-optimized internal
representation based on C++ expression templates with an intuitive and
powerful application programmer interface (API). Support for externally
differentiated functions and for optimized data flow reversal through
checkpointing is provided. dco/c++ has been applied successfully to a
number of numerical simulations in the context of, for example,
large-scale parameter estimation and shape optimization.
Starting with an introduction to the fundamentals of AD, this user guide
describes the various modes of dco/c++ in terms of their APIs and with
the help of very simple examples.
dco/c++ is actively developed, resulting in a steady evolution of the
associated user guide.
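dco/c++ itself is a C++ library; to keep all examples here in one language, the operator-overloading idea behind a first-order tangent-linear mode can be sketched in Python as follows. This is a generic forward-mode illustration, not the dco/c++ API:

```python
class Dual:
    """Minimal forward-mode AD value: carries a value v and its tangent dv."""
    def __init__(self, v, dv=0.0):
        self.v, self.dv = v, dv

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.dv + o.dv)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule propagates the tangent
        return Dual(self.v * o.v, self.dv * o.v + self.v * o.dv)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6 x + 2

x = Dual(2.0, 1.0)   # seed the tangent direction dx = 1
y = f(x)             # y.v = f(2) = 17, y.dv = f'(2) = 14
```

Overloading the arithmetic operators makes the unmodified function f propagate derivatives alongside values, which is the same mechanism dco/c++ implements (far more efficiently) via C++ expression templates.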
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Sequence Level Salient Object Proposals for Generic Object Detection in
Video
Esther Horbert, Germán Martín García, Simone Frintrop, and Bastian Leibe
AIB 2014-06
In this paper, we propose a novel approach for generating generic object
proposals for object discovery and recognition in continuous monocular
video. Such proposals have recently become a popular alternative to
exhaustive window-based search as a basis for classification. Contrary to
previous approaches, we address the proposal generation problem at the
level of entire video sequences instead of at the single image level. We
propose a processing pipeline that starts from individual region
proposals and tracks them over time. This makes it possible to group
proposals for similar objects and to automatically filter out
inconsistent regions.
For generating the per-frame proposals, we introduce a novel multi-scale
saliency approach that achieves a higher per-frame recall with fewer
proposals than current state-of-the-art methods. Taken together, those
two components result in a significant reduction of the number of object
candidates compared to frame level methods, while keeping a consistently
high recall.
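One plausible ingredient of tracking proposals over time is linking per-frame boxes by spatial overlap. The following Python sketch uses greedy intersection-over-union linking; it is an illustration only, and the paper's tracker is more involved:

```python
def iou(a, b):
    """Intersection over union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_proposals(frames, thresh=0.5):
    """Greedily chain per-frame box proposals into tracks; a box that finds
    no match above thresh in the previous frame starts a new track."""
    tracks = []
    for t, boxes in enumerate(frames):
        for box in boxes:
            best, best_iou = None, thresh
            for tr in tracks:
                last_t, last_box = tr[-1]
                if last_t == t - 1 and iou(last_box, box) > best_iou:
                    best, best_iou = tr, iou(last_box, box)
            if best is not None:
                best.append((t, box))
            else:
                tracks.append([(t, box)])
    return tracks
```

Grouping proposals into tracks like this is what allows isolated, inconsistent regions to be filtered out: they end up as short tracks that can be discarded.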
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Integration und Analyse von Artefakten in der modellbasierten
Entwicklung eingebetteter Software
Daniel Merschen
AIB 2014-02
To meet today's functionality and safety requirements for automobiles,
embedded software is indispensable, since a purely electronic
realization would, on the one hand, be very complex and would, on the
other hand, result in a large number of electrical and electronic
components. This would make the automobile heavier and consequently
increase fuel consumption. For the development of embedded software,
model-based software development with MATLAB/Simulink has established
itself in this industry. To exploit the potential for reuse as far as
possible, development follows the software product line approach.
Development now faces new challenges in managing the complexity and
evolution of the software. Dependencies within a Simulink model are
often not obvious. The relationship between artefacts of the development
process, e.g. between the Simulink model and the requirements
specification, is also frequently unclear. This makes it particularly
difficult to incorporate later changes.
This dissertation addresses such challenges by first developing a
general solution concept for artefact integration and analysis, which is
then realized in two ways that follow different paradigms.
The model-based approach parses the original artefacts into models of
the Eclipse Modeling Framework (EMF) and then prepares them for later
analyses by means of model transformations. This yields one model per
artefact, the so-called representative. Artefact integration is achieved
via interlinked metamodels of these representatives. Artefact analyses
operate on the representatives and are likewise realized with model
transformations. The result is therefore again an EMF model, which can
be visualized appropriately.
The second approach integrates artefacts via a central database. To this
end, the artefacts are transferred into Java class models using a
combination of tool-specific and Java functionality; these models are
then stored in the database, where they can be linked with other
artefacts. Analyses are likewise implemented in Java here and operate on
the Java class models and the database.
Finally, the suitability of the two approaches for artefact integration
and analysis is evaluated with respect to several criteria. These
include, first, runtime efficiency and scalability for large Simulink
models. Furthermore, practical usability is assessed on the basis of
case studies and developer feedback. Since the approaches must be
dynamically extensible with further analyses in order to be adapted to
new or company-specific needs, the difficulty of implementing analyses
and the required prior knowledge are also included in the evaluation.
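The database-centred idea, artefacts stored centrally, linked, and analysed by traversing the links, can be sketched as follows. All class and method names are invented for illustration; the dissertation's implementation uses Java class models and a real database, not this toy Python store:

```python
class ArtefactStore:
    """Toy central store: artefacts as records, links as typed edges."""
    def __init__(self):
        self.artefacts = {}   # id -> payload (e.g. a parsed artefact)
        self.links = []       # (source_id, target_id, kind)

    def add(self, aid, payload):
        self.artefacts[aid] = payload

    def link(self, src, dst, kind):
        self.links.append((src, dst, kind))

    def impacted_by(self, aid):
        """Transitively follow links from aid: a simple change-impact
        analysis realized as a graph traversal over the store."""
        seen, stack = set(), [aid]
        while stack:
            cur = stack.pop()
            for s, d, _ in self.links:
                if s == cur and d not in seen:
                    seen.add(d)
                    stack.append(d)
        return seen
```

Once artefacts such as a requirement, a Simulink block, and a test case are linked in one store, questions like "what is affected if this requirement changes?" reduce to traversals over the link graph.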
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Time-Continuous Behaviour Comparison Based on Abstract Models
Jacob Palczynski
AIB 2013-20
Model-based approaches in the development of embedded software become
more and more widely spread. In these approaches, different models of
one target system are the central elements. Recently, a growing number
of algorithms and functions of embedded software have been designed
using such model-based approaches; a prominent one is Rapid Control
Prototyping. Its main advantage is the possibility to develop
functionality of both controlling embedded software and controlled
system in one modelling environment. Additionally, the developer can
simulate the whole system and improve its performance, debug algorithms,
etc. at an early development stage. Important test phases are Software in
the Loop and Hardware in the Loop. In this work, we discuss two problems
that may occur during these phases.
The first question is: Taking into account the continuous
characteristics of system variables, how can we safeguard a consistent
evolution of the development artefacts? To deal with this problem, we
compare test results. Since different artefacts were tested at different
stages of the development under different conditions, we cannot expect
the results of each test case to be exactly equal, but at most similar.
This similarity check of continuous systems is
addressed in this work.
The methodology presented here is based on behavioural black-box models
of our system in the time domain on different levels of abstraction. A
behaviour is represented by the input and output signals of a system and
their interrelationship. On each level of abstraction, a system’s model
is built up from a set of behaviours, each of which consists of input
signals and the corresponding output signals created by the system. The
description of the signals themselves varies strongly, depending on the
level of abstraction.
For the comparison of two systems or artefacts, we have to introduce a
similarity relation with respect to an abstract model. Two systems are
similar with respect to an abstract model A, when their behaviours
conform to this abstract model A. The central question is how we can
find properties in measured signal data. To find a characteristic
property in a set of measured signal data, we compute the cross
correlation of the interpolated measured data and a template signal. By
this, we find on which time interval of the measured data one property
potentially occurs. Repeating this for all properties yields all
occurrences of the properties in the system’s abstract behaviour model.
In the end, we know that these systems conform to the abstract model and
therefore are similar with respect to the abstract model.
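The cross-correlation step can be sketched in a few lines of Python; this is a plain sliding dot product on uniformly sampled data, ignoring the interpolation and normalization a real implementation would need:

```python
def cross_correlation(signal, template):
    """Sliding dot product of template against signal; returns one
    correlation score per admissible offset."""
    n, m = len(signal), len(template)
    return [sum(signal[k + i] * template[i] for i in range(m))
            for k in range(n - m + 1)]

def locate(signal, template):
    """Offset at which the template property matches the signal best."""
    scores = cross_correlation(signal, template)
    return max(range(len(scores)), key=scores.__getitem__)
```

Repeating locate for every property template yields the time intervals on which each property potentially occurs, from which the abstract behaviour can be assembled. Note that the plain dot product favours high-energy signal regions; normalized cross-correlation would avoid this bias.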
The second problem we address is: How can we correctly analyse test
results obtained from real-time test devices that do not have certified
clocks? The motivation here is possible differences in sample timing
due to imprecise hardware timers. As a consequence, a drift occurs
in the sampled data which leads to distorted signals and seemingly wrong
timings of events. We compare such a signal to one obtained with a
certified device by searching for certain properties, and obtaining the
points in time of the occurrences. Using this timing information, we can
estimate the difference of the sample times, i.e. the drift. For the
property search we use the cross-correlation approach already used for
regression tests.
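Estimating the drift from matched occurrence times amounts to fitting a line: if the device clock runs fast or slow, occurrence times on the device grow with a slope different from 1 against the reference. A least-squares sketch in Python (illustrative, not the method of the thesis):

```python
def estimate_drift(ref_times, dev_times):
    """Least-squares slope of dev_times against ref_times for matched
    property occurrences; a slope different from 1 indicates clock drift
    between the uncertified device and the certified reference."""
    n = len(ref_times)
    mr = sum(ref_times) / n
    md = sum(dev_times) / n
    num = sum((r - mr) * (d - md) for r, d in zip(ref_times, dev_times))
    den = sum((r - mr) ** 2 for r in ref_times)
    return num / den
```

With the drift estimated, event timings from the uncertified device can be rescaled before the comparison against the reference signal.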
The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Automatic Abstraction for Bit-Vectors using Decision Procedures
Jörg Brauer
AIB 2013-14
This dissertation is concerned with abstract interpretation of programs
whose semantics is defined over finite machine words. Most notably, the
considered class of programs contains executable binary code, the
analysis of which turns out demanding due to the complexity and the
sheer number of involved operations. Challenging for correct yet precise
abstract interpretation of binary code are transfer functions, which
simulate the execution of any concrete operation in a program in an
abstract domain. Crucially for correctness, over- and underflows need to
be handled faithfully.
This dissertation argues that transfer functions and abstractions for
sequences of operations over finite machine words can precisely and
efficiently be generated, which contrasts with classical methods that
depend on handcrafted transfer functions. To support this statement, we
present an approach that eliminates the time-consuming process of
manually deriving transfer functions altogether. At the core of our
methods are specifications of the concrete semantics of sequences of
operations, which are given in propositional Boolean logic. By utilizing
SAT and SMT
which are given in propositional Boolean logic. By utilizing SAT and SMT
solvers, which can determine satisfiability of Boolean formulae, we show
how to automatically synthesize optimal abstractions from such semantic
specifications. The practicality of our method is highlighted using
abstractions generated for a variety of numerical domains that are
frequently used in abstract interpretation. The abstract domains
considered in this dissertation are, most notably, intervals, value
sets, octagons, convex polyhedra, arithmetical congruences, affine
equalities, and polynomials of bounded degree. Importantly, all
presented techniques automatically handle finiteness of machine words,
which manifests itself in over- and underflows.
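The flavour of synthesizing optimal abstractions can be conveyed without a SAT solver: for small bit widths one can enumerate the concrete wraparound semantics and take the tightest interval over all outputs. Brute force stands in here for the SAT/SMT-based synthesis described above:

```python
MASK = 0xFF  # 8-bit machine words; arithmetic wraps around

def optimal_interval(op, lo1, hi1, lo2, hi2):
    """Tightest interval covering op(x, y) for x in [lo1, hi1] and
    y in [lo2, hi2] under wraparound semantics. The enumeration plays
    the role that SAT/SMT solving plays in the dissertation."""
    outs = [op(x, y) & MASK
            for x in range(lo1, hi1 + 1)
            for y in range(lo2, hi2 + 1)]
    return min(outs), max(outs)

# Wraparound makes naive interval addition unsound:
# [200, 210] + [100, 110] wraps past 255, so the optimal
# result interval is [44, 64], not [300, 320].
lo, hi = optimal_interval(lambda x, y: x + y, 200, 210, 100, 110)
```

The enumeration is exponential in the bit width, which is exactly why the dissertation resorts to SAT and SMT solvers for realistic word sizes; the example only shows what "optimal transfer function" means.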
Once the analysis of a program has terminated, an abstract interpreter
often emits a warning that highlights a potential error in the analyzed
program. Since abstract interpretation computes an over-approximation of
the states reachable in a concrete execution, such a warning may be
spurious. For this setting, we present variations of our methods, which
compute complete abstractions. Then, it is possible to provide
guarantees about actually reachable states, which allows us either to
identify a warning as spurious or to generate a legitimate
counterexample trace.