The following technical report is available from
http://aib.informatik.rwth-aachen.de:
Cylindrical Algebraic Decomposition for Nonlinear Arithmetic Problems
Gereon Kremer
AIB 2020-04
Satisfiability modulo theories solving is a technology for solving logically encoded problems in many applications such as verification, testing, or planning. Among the many theories that are considered within this logical framework, nonlinear real arithmetic stands out as particularly challenging, yet decidable and sufficiently well understood from a mathematical perspective. The most prominent approach that can decide nonlinear real arithmetic questions in a complete way is the cylindrical algebraic decomposition method.
We explore the usage of the cylindrical algebraic decomposition method for satisfiability modulo theories solving, both theoretically and experimentally. This method is commonly understood as an almost atomic procedure that gathers information about an algebraic problem and then makes it possible to answer all kinds of questions about this problem afterward. We essentially break up this method into smaller components that we can then process in varying order to derive the particular piece of information we need - whether the problem is satisfiable or unsatisfiable - allowing us to avoid some of the computations. As this method often exhibits doubly exponential running time, these savings can be very significant in practice.
We furthermore embed this method in the regular satisfiability modulo theories framework, where the cylindrical algebraic decomposition method is faced with a sequence of problems that are "related" in the sense that they usually share large parts of their problem statements. We devise different approaches to retain information from a previous run so that it can be reused when the problem is only "extended", as well as to purge now-obsolete information when the problem is "reduced". These variants differ in how much information can be reused, in the granularity of the information that is removed, and in how much bookkeeping needs to be done.
This integration is then enhanced with techniques that are more or less well-known in the computer algebra community, for example, different projection operators, equational constraints, or employing the so-called resultant rule. Furthermore, we present novel features necessary for an efficient embedding in the satisfiability modulo theories framework like infeasible subset computations and early termination as well as extensions to integer problems and optimization problems.
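As a toy illustration of the ideas above, the following sketch decides a two-variable conjunction the CAD way: project to one dimension via a resultant, decompose the real line into cells, and lift sample points back up. The input constraints and the projection polynomial (the resultant of x^2 + y^2 - 1 and x - y with respect to x, worked out by hand as 2y^2 - 1) are invented for this example; real implementations isolate algebraic roots exactly rather than with floating point.

```python
import math

# Toy CAD-style decision procedure for:  x^2 + y^2 < 1  and  x > y.
# Projection: the roots of the hand-computed resultant 2*y^2 - 1 split the
# y-axis into cells. Lifting: over a sample point of each y-cell, the
# critical x-values split the x-axis again; one sample per cell decides.

def cells(roots):
    """One sample point per cell: the roots themselves and interval midpoints."""
    roots = sorted(roots)
    pts = [roots[0] - 1]
    for a, b in zip(roots, roots[1:]):
        pts += [a, (a + b) / 2]
    pts += [roots[-1], roots[-1] + 1]
    return pts

def satisfiable():
    y_roots = [-math.sqrt(0.5), math.sqrt(0.5)]  # roots of 2*y^2 - 1
    for y in cells(y_roots):
        x_roots = [y]                            # root of x - y
        if 1 - y * y >= 0:                       # real roots of x^2 + y^2 - 1
            r = math.sqrt(1 - y * y)
            x_roots += [-r, r]
        for x in cells(x_roots):
            if x * x + y * y < 1 and x > y:
                return (x, y)                    # satisfying sample point
    return None

print(satisfiable() is not None)  # -> True
```

The decomposition into "projection" and "lifting" is exactly what the thesis breaks into smaller, reorderable components: a satisfying sample can be reported as soon as one cell succeeds, without completing the full decomposition.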
We then turn to an alternative approach to satisfiability modulo theories solving that is commonly called model-constructing satisfiability calculus. The core idea of this framework is to integrate the theory reasoning, in particular the construction of a theory model, tightly with the Boolean reasoning. The most popular theory reasoning engine is again based on the cylindrical algebraic decomposition method, though we focus on the overall framework here.
We start with our own variant of the model-constructing satisfiability calculus and discuss some general insights and changes compared to current implementations. We then proceed to present a whole series of reasoning engines for arithmetic problems and show how a proper (though still naive) combination of those serves to significantly improve a practical solver. We also show how the tight integration into the Boolean reasoning allows for novel strategies for notoriously hard problems like the theory variable ordering or expedient cooperation between the Boolean and the theory reasoning.
Finally, we consider the theoretical relation of the model-constructing satisfiability calculus to other proof systems, in particular, the aforementioned regular satisfiability modulo theories solving. Under certain assumptions - that turn out to be instructive in and of themselves - we show that they are equivalent with respect to their proof complexity and even establish what we call "algorithmic equivalency" afterward.
Pre-Study on the Usefulness of Difference Operators for Modeling Languages in Software Development
Imke Drave, Oliver Kautz, Judith Michael, and Bernhard Rumpe
AIB 2020-05
Models are the primary development artifacts in model-driven software engineering, making model change management crucial for developers. In our work, we investigate whether semantic differencing improves developers' understanding of differences between model versions. Current research in this field focuses on purely syntactic differences between models, which might not reveal the real-world impact of a syntactic change. Thus, the semantic difference of models is an open field to investigate. We propose differencing operators for model comparison for four modeling languages (Class Diagrams, Activity Diagrams, Statecharts, and Feature Diagrams). This technical report describes the fundamentals of the semantic differencing operators for the four modeling languages and the results of a pre-study on the usefulness of the differencing operators for software engineers. In the study, we asked how well aspects such as syntactic differences, semantic differences, a combination of syntactic and semantic differences, and the abstraction as well as the summarization of semantic differences helped participants understand the differences between the shown models in each of the four modeling languages. The pre-study showed that a combination of the syntactic and semantic difference is the most suitable alternative for providing intuitive explanations that take semantic differences into consideration. However, a larger study with real-size models is needed to obtain more meaningful results.
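To give a flavour of what a semantic difference is, the following sketch (not taken from the report) searches two deterministic state machines for a diff witness: an event sequence that one model version accepts and the other does not. The door-statechart variants are invented for illustration.

```python
from collections import deque

# BFS over the product of two deterministic machines for a "diff witness":
# an event sequence accepted by exactly one of them.
def diff_witness(delta1, acc1, delta2, acc2, init1, init2, alphabet):
    seen = {(init1, init2)}
    queue = deque([(init1, init2, "")])
    while queue:
        s1, s2, word = queue.popleft()
        if (s1 in acc1) != (s2 in acc2):
            return word                    # accepted by exactly one version
        for a in alphabet:
            nxt = (delta1[s1, a], delta2[s2, a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt[0], nxt[1], word + " " + a if word else a))
    return None                            # same language: no semantic difference

# Two versions of a door statechart; version 2 lost the locking behaviour.
d1 = {("closed", "open"): "opened", ("closed", "lock"): "locked",
      ("opened", "open"): "opened", ("opened", "lock"): "opened",
      ("locked", "open"): "locked", ("locked", "lock"): "locked"}
d2 = {("closed", "open"): "opened", ("closed", "lock"): "closed",
      ("opened", "open"): "opened", ("opened", "lock"): "opened"}
print(diff_witness(d1, {"locked"}, d2, {"locked"}, "closed", "closed",
                   ["open", "lock"]))  # -> lock
```

A purely syntactic diff would only report a changed transition entry; the witness word explains what the change means for observable behaviour, which is the kind of explanation the pre-study evaluates.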
Adapting the Timing Behaviour of Non-Real-Time Software for Use in Time-Heterogeneous Networks
John F. Schommer
AIB 2020-03
When real-time systems are networked, time-critical requirements propagate to other network nodes. If such nodes do not meet these requirements, the real-time systems in turn can no longer guarantee that the time-critical requirements placed on them are fulfilled. Such networks are called time-heterogeneous. This work presents a procedure for evaluating, and then adapting, software that was not designed to be real-time capable. The goal is to improve the software so that the real-time systems are not hindered in their main function. To this end, an evaluation method was defined and a series of design patterns was developed; these are presented in the context of the respective case studies and industry cooperation projects in which they were devised.
Towards an Isabelle Theory for distributed, interactive systems — the untimed case
Jens Christoph Bürger, Hendrik Kausch, Deni Raco, Jan Oliver Ringert, Bernhard Rumpe, Sebastian Stüber, and Marc Wiartalla
AIB 2020-02
This report describes a specification and verification framework for distributed interactive systems. The framework encodes the untimed part of the formal methodology FOCUS in the proof assistant Isabelle using domain-theoretical concepts. The key concept of FOCUS, the stream data type, together with the corresponding prefix order, is formalized as a pointed complete partial order. Furthermore, a high-level API is provided to hide the explicit use of domain-theoretical concepts from the user in typical proofs. Realizability constraints for modeling component networks with potential feedback loops are implemented. Moreover, a set of commonly used functions on streams is defined via least fixed points of the corresponding functionals, and these functions are proven to be prefix-continuous.
As a second key concept, the stream processing function (SPF) is introduced, describing the stateful, deterministic behavior of a message-passing component. The denotational semantics of a component in this work is a defined set of stream processing functions, each of which maps input streams to output streams.
Furthermore, an extension of the framework is presented that uses an isomorphic transformation of tuples of streams to model component interfaces and to allow composition. The structures for modeling component networks are implemented by giving names to channels and defining composition operators. This is motivated by the advantage that modular modeling of component networks offers: based on the correctness of the components of the decomposed system and using proper composition operators, the correctness of the whole system is automatically derived by construction.
To facilitate automated reasoning, a set of theorems is proven covering the main properties of these structures. Moreover, essential proof methods such as stream induction are introduced and supported by further theorems. This demonstrates the principal usability of the modeling concepts of FOCUS and of the realized verification framework for distributed systems with security and safety concerns, such as cars, airplanes, etc. Finally, a running example extracted from a controller in a car is realized to demonstrate and validate the framework.
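The fixed-point view of feedback loops can be mimicked outside Isabelle. The following Python sketch (an illustration only, with an invented component) runs the Kleene iteration for a prefix-continuous stream function on finite prefixes, starting from the empty stream as the bottom element.

```python
# Kleene iteration for the least fixed point of a prefix-continuous stream
# function, approximated on finite prefixes. F models a feedback component
# that emits 0 and then echoes its own output incremented by one, so its
# least fixed point is the stream of natural numbers.

def F(s):
    return [0] + [x + 1 for x in s]

def lfp_prefix(f, depth):
    """Iterate f from the empty stream (bottom); each step extends the prefix."""
    s = []
    for _ in range(depth):
        s = f(s)
    return s[:depth]

print(lfp_prefix(F, 5))  # -> [0, 1, 2, 3, 4]
```

Each iterate is a prefix of the next, mirroring the chain under the prefix order whose least upper bound the framework takes in the pointed cpo of streams.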
Runtime Supervision of PLC Programs Using Discrete-Event Systems
Florian Göbe
AIB 2019-05
The supervisory control theory (SCT) introduced by Ramadge and Wonham is one of the most noted formalisms for the synthesis of solutions in discrete event control. In this dissertation, an approach is elaborated which applies the SCT framework for the supervision of arbitrary existing PLC controller programs. The latter are assumed to be provided manually by the user and hence are not formally guaranteed to respect certain constraints, such as demands on safety, in all possible situations. With the presented approach, these conditions can be formulated in the form of discrete-event systems. Using adaptations of the Ramadge and Wonham framework, a supervision layer is generated from these models. It prevents the controller from executing critical actions at runtime that could eventually lead to a violation of the specified requirements.
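The runtime effect of such a supervision layer can be sketched as follows. The specification automaton and the press/light-barrier events are invented for illustration; a real deployment sits between the PLC program and its outputs and only intercepts controllable actions.

```python
# Minimal sketch of Ramadge-Wonham-style runtime supervision: before the PLC
# program executes a controllable action, the supervisor simulates it on the
# specification automaton and blocks it if the specification has no
# corresponding transition (i.e. the action would violate the requirements).

class Supervisor:
    def __init__(self, delta, initial):
        self.delta = delta      # (state, event) -> next state; missing = illegal
        self.state = initial

    def allow(self, event):
        """Permit a controllable event only if the specification allows it."""
        return (self.state, event) in self.delta

    def step(self, event):
        if not self.allow(event):
            return False        # supervisor blocks the critical action
        self.state = self.delta[(self.state, event)]
        return True

# Invented specification: the press may only close while the area is clear.
spec = {("clear", "close_press"): "pressing",
        ("clear", "blocked"): "occupied",
        ("pressing", "open_press"): "clear",
        ("occupied", "cleared"): "clear"}
sup = Supervisor(spec, "clear")
print(sup.step("close_press"))  # -> True  (allowed in state "clear")
print(sup.step("close_press"))  # -> False (blocked while already pressing)
```

Uncontrollable events (sensor inputs such as "blocked") must always be accepted; restricting only controllable events is what makes the generated layer a supervisor rather than a replacement controller.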
In order to address a wide range of realistic use cases, several adaptations and extensions have been introduced to the original framework, such as conditional transitions, templates and event enforcement for the preemption of undesired incidents. A concept for an end-to-end solution from the creation of the requirement models to a ready-to-use safety layer is presented and has been implemented in a tool.
The suitability of the concept has been evaluated in several case studies, some on industrial hardware. Furthermore, the usability of the approach as a whole, the introduced modifications and the tool implementation have been evaluated in a user study.
Formal Methods for the Development of Embedded Software in Small and Medium-Sized Enterprises
Sebastian Patrick Grobosch
AIB 2019-03
The number of electronic control units in luxury-class vehicles has risen steadily over the last 15 years and currently stands at around 100. The majority of all innovations in a vehicle now stem from electronics and software. On the one hand, this makes software one of the most important drivers of innovation for companies in the automotive industry; on the other hand, it carries a high risk potential: program errors.
International standards and stricter requirements on software quality attempt to counter this risk. However, in order to satisfy every end-customer wish individually, the complexity of the systems grows with the increasing number of variants. Testing such software systems can only show the presence of errors, never their absence. A guarantee that a system fulfils its requirements can be given by formal methods.
The approaches presented in this work help to improve and support software development in small and medium-sized enterprises through methods of formal verification. In doing so, the advantages of small companies over large ones are exploited and extended: close customer contact as well as a high degree of flexibility and economic efficiency. The system complexity of most projects and the process structures can contribute positively to the acceptance of introducing formal methods into the respective development process.
The first approach deals with the analysis of timing requirements for embedded systems based on the formal method of model checking. For an existing variant system for electronic control units, a task system is modelled with Uppaal, and a schedulability analysis based on timed automata is presented. To manage the variants, a framework based on pure::variants was designed, and an existing software platform was evolutionarily transformed into a product line. This allows companies to focus more strongly on individual customer wishes and to reuse existing components efficiently and with high quality.
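The schedulability analysis in this work is based on timed automata in Uppaal; as a much simpler point of comparison, the following sketch implements classical fixed-priority response-time analysis instead. The task set is invented for illustration, and deadlines are assumed equal to periods.

```python
import math

# Classical response-time analysis for fixed-priority scheduling (a simpler
# alternative to the timed-automata analysis described above). A task's
# response time is its own worst-case execution time plus the interference
# of all higher-priority tasks, solved as a fixed point.

def response_time(tasks, i):
    """tasks sorted by priority (highest first); tasks[i] = (period, wcet)."""
    period, wcet = tasks[i]
    r = wcet
    while True:
        interference = sum(math.ceil(r / t) * c for t, c in tasks[:i])
        r_next = wcet + interference
        if r_next == r:
            return r if r <= period else None  # None: deadline (= period) missed
        r = r_next

tasks = [(10, 3), (20, 4), (40, 8)]  # invented (period, wcet) pairs
print([response_time(tasks, i) for i in range(len(tasks))])  # -> [3, 7, 18]
```

The timed-automata formulation is strictly more expressive (offsets, jitter, variant-specific behaviour), which is why the thesis uses model checking rather than this closed-form analysis.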
The second approach to improving software quality deals with the verification of embedded-systems program code using the model checker Arcade. Here, the formulation of formal requirements and the applicability in an industrial environment were examined in particular. With this approach, errors in program code could be localized, and compliance with requirements could be shown. The use of binary-code verification can reduce the testing effort, but not replace it. The advantage for companies, however, is that this method can prove the absence of errors, which is not possible with conventional testing.
Overall, an approach for integrating formal methods into the development process of a small or medium-sized enterprise was presented, successfully implemented with appropriate tool support, and evaluated. With the help of the methods shown, it is possible to reduce the testing effort and to increase the quality of automotive control systems early on, in the important phases of development.
IC3 Software Model Checking
Tim Felix Lange
AIB 2019-02
In times when computers are becoming ever smaller and more powerful and software becomes more complex and advances ever deeper into every aspect of our lives, the risk of software misbehaviour and the resulting damage grows dramatically. To prevent such erroneous behaviour, model checking, a formal verification technique for determining functional properties of information and communication systems, has proven to be highly useful.
For proving mathematical properties, one of the first methods taught in schools is induction. With its concept of proving a concrete induction base and a general induction step, it is considered a simple and intuitive, yet powerful proof method. However, for difficult properties, finding an inductive formulation can be extremely hard. When humans try to solve this problem, they naturally produce a set of smaller, simpler lemmas that together imply the desired property. Each of these lemmas holds relative to some subset of previously established lemmas, invoking that knowledge to prove the new lemma.
This incremental approach to proving complex properties using sets of small inductive lemmas was first applied to model checking of hardware systems in the IC3 algorithm and has proven to outperform all known approaches to hardware model checking.
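The central query behind this idea is whether a candidate lemma is inductive relative to a frame of already-established knowledge. Real IC3 poses this as a SAT query over symbolic transition relations; the toy explicit-state system below is invented purely to make the check concrete.

```python
# Toy relative-inductiveness check on an explicit-state transition system.
# IC3's core question: does frame & lemma, after one transition step,
# always land back inside lemma (and does lemma contain the initial states)?

def relatively_inductive(frame, lemma, transitions, init):
    """lemma must contain the initial states and be closed under one step
    taken from inside frame & lemma."""
    if not init <= lemma:
        return False
    return all(t in lemma
               for s in frame & lemma
               for t in transitions.get(s, ()))

# Invented 4-state system: 0 <-> 1 and 2 <-> 3; only state 0 is initial.
trans = {0: [1], 1: [0], 2: [3], 3: [2]}
init = {0}
prop = {0, 1, 2}                  # safety property: never reach state 3
print(relatively_inductive(prop, prop, trans, init))    # -> False (2 -> 3 escapes)
print(relatively_inductive(prop, {0, 1}, trans, init))  # -> True
```

The property itself is not inductive, but the stronger lemma {0, 1} is, and it implies the property; discovering such strengthening lemmas incrementally is exactly what IC3 automates.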
This thesis aims at applying the principles of incremental, inductive verification laid down by the IC3 algorithm to software model checking for industrial control software, with special attention to the control flow induced by the program under consideration. For this purpose, basic concepts are introduced and an in-depth explanation of the IC3 algorithm and its different building blocks (search phase, generalization, and propagation) is given. Based on these prerequisites, the novel IC3CFA algorithm is presented. In this algorithm, the control flow of the program is explicitly modelled as an automaton, while the variable valuations are handled symbolically, thus using the best of both worlds. Following the search phase of IC3CFA, solutions for applying generalization to IC3CFA are presented and problems arising with propagation are discussed. Finally, the performance of the IC3CFA algorithm and all proposed improvements is extensively evaluated on a set of well-recognised benchmarks. To put these results into perspective, a comparison with other available IC3 software model checking implementations concludes this thesis and underlines the strong potential of the IC3CFA model checking algorithm.
The Probabilistic Model Checker Storm — Symbolic Methods for Probabilistic Model Checking
Christian Hensel
AIB 2018-06
In a world in which we increasingly rely on safety critical systems that simultaneously are becoming ever more complex, formal methods provide a means to mathematically rigorously prove systems correct. Model checking is a fully automated technique that is successfully applied in the verification of software and hardware circuits. However, in practice many systems exhibit stochastic behavior, for instance when components may fail or randomization is used as a key element to improve efficiency. Probabilistic model checking extends traditional (qualitative) model checking to deal with such systems. As model checking can be (simplistically) viewed as an exhaustive exploration of the state space of the model under consideration, it suffers from the curse of dimensionality: State spaces grow exponentially in the number of components and variables and they quickly become too large to be effectively manageable, a problem that is typically referred to as state space explosion.
Symbolic methods have helped to alleviate this problem substantially. Rather than considering states and transitions of the system individually, they exploit structure in the model and treat sets of states and transitions simultaneously. Model checkers based on symbolic techniques dominate the landscape of hardware and software model checking. In the probabilistic setting, symbolic methods show potential as well, but are arguably not on par with their qualitative counterparts. This thesis is concerned with advances in the field of symbolic techniques in the context of probabilistic model checking. The major contributions are fivefold:
(1) We propose the JANI modeling language to unify the multitude of input languages of probabilistic model checkers. It covers a large range of models involving randomization and timing aspects and offers well-defined points for future extensions.
(2) We show how counterexamples on the level of JANI specifications can be synthesized. For this, we develop a method based on the connection of standard probabilistic model checking and a smart enumeration of the solutions of a satisfiability problem.
(3) We combine the strengths of decision diagrams for the representation of gigantic systems and bisimulation minimization, a technique that reduces systems by factoring out symmetry. Our implementation is shown to drastically reduce the sizes of models involving probabilities, continuous time, nondeterminism and rewards.
(4) We summarize, extend and implement a game-based abstraction-refinement framework that is able to treat infinite-state probabilistic automata. In contrast to other implementations, ours is openly available and computes strictly sound bounds through the use of rational arithmetic.
(5) We present Storm, a new high-performance probabilistic model checker. It goes beyond the standard verification tasks through numerous features and in particular integrates the items (1) to (4) above. We show that it outperforms other state-of-the-art model checkers on the majority of instances of the PRISM benchmark suite.
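Contribution (3) can be illustrated in miniature: bisimulation minimization by partition refinement merges states that carry the same label and the same probability of moving into each block of the current partition. The small Markov chain below is invented for illustration; Storm performs this symbolically, with decision diagrams, at vastly larger scale.

```python
from fractions import Fraction

# Partition-refinement sketch of strong probabilistic bisimulation
# minimisation for a discrete-time Markov chain. Start from the labelling
# partition and split blocks until every state in a block has the same
# probability of jumping into each block.

def bisim_partition(states, label, prob):
    blocks = {}
    for s in states:
        blocks.setdefault(label[s], []).append(s)
    partition = list(blocks.values())
    while True:
        def signature(s):
            return tuple(sum(prob[s].get(t, Fraction(0)) for t in block)
                         for block in partition)
        refined = {}
        for idx, block in enumerate(partition):
            for s in block:
                refined.setdefault((idx, signature(s)), []).append(s)
        new_partition = list(refined.values())
        if len(new_partition) == len(partition):
            return new_partition       # stable: no block was split
        partition = new_partition

# Invented DTMC: b1 and b2 behave identically and should be merged.
half = Fraction(1, 2)
states = ["a", "b1", "b2", "c"]
label = {"a": "init", "b1": "work", "b2": "work", "c": "goal"}
prob = {"a": {"b1": half, "b2": half},
        "b1": {"c": Fraction(1)},
        "b2": {"c": Fraction(1)},
        "c": {"c": Fraction(1)}}
parts = bisim_partition(states, label, prob)
print(sorted(sorted(b) for b in parts))  # -> [['a'], ['b1', 'b2'], ['c']]
```

Exact rational arithmetic is used deliberately, echoing contribution (4): comparing floating-point signatures could merge or separate states spuriously.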
Real-World Deployment and Evaluation of Synchronous Programming in Reactive Embedded Systems
Matthias Terber
AIB 2018-05
Pervasive smart devices link embedded concerns to information technology in a single, resource-constrained system. Both domains have quite opposite computation characteristics - reactive versus transformational. Due to C's dominance in industry, the reactive part is usually based on conventional sequential programming, which lacks domain-specific support, making the solution hard to program, comprehend, and maintain. Synchronous languages might be a promising solution for facilitating software engineering and improving software quality. However, to date, they are restricted to very specific industrial niches; real-world deployments and evaluations are rarely reported in the literature.
This thesis conducts a case study that examines the feasibility and suitability of the synchronous approach based on a real-life smart device. Focusing on its reactive concerns, it elaborates the engineering challenges and quality issues of the existing production code. By taking advantage of the synchronous language Céu, it outlines a synchronous reimplementation, thereby illustrating the deployment of synchronous programming and how to reconcile it with the transformational part of the system. Architectural considerations and best practices are provided for developers in order to effectively apply the synchronous language concepts. Furthermore, it shows the applicability of established object-oriented software design patterns and how to implement reproducible unit tests for reactive code.
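One reason reactive code becomes unit-testable in this style is the synchronous hypothesis: each reaction is a deterministic, conceptually zero-time step. The following sketch (Python rather than Céu, with an invented debouncing component) mimics such a tick-based component and its reproducible test trace.

```python
# A tick-based reactive component in the synchronous style: the environment
# is sampled once per tick, the reaction is a pure state update, and the
# same input trace therefore always yields the same output trace.

class Debounce:
    """Emit 'pressed' only after the input is high for 3 consecutive ticks."""
    def __init__(self):
        self.count = 0

    def tick(self, raw_high):
        self.count = self.count + 1 if raw_high else 0
        return "pressed" if self.count == 3 else None

dev = Debounce()
outputs = [dev.tick(x) for x in [1, 1, 0, 1, 1, 1, 1]]
print(outputs)  # -> [None, None, None, None, None, 'pressed', None]
```

Because the reaction is deterministic and free of wall-clock dependencies, a unit test is simply a recorded input trace and its expected output trace, which is what makes reproducible tests for reactive code feasible.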
Several qualitative discussions treat the software engineering and quality benefits gained by the synchronous reimplementation compared to the existing production code. A code analysis uses the separation of concerns, the scattering of interfaces and the code size as performance indicators in order to quantitatively substantiate the results. A user study confirms that reactive behavior is easier to implement and comprehend using the synchronous approach.
This thesis represents a proof of concept which demonstrates the feasibility and suitability of synchronous programming in resource-constrained, real-life industrial embedded applications that are exposed to reactive and transformational concerns alike. By using synchronous programming, we were able to recover fundamental software engineering principles while, at the same time, meeting the strict resource limitations - a combination that is known to be hard to achieve. Finally, we believe that our work suggests a generally practicable way of improving embedded software quality in industrial applications.
Operation-Based Model Recommenders
Andreas Ganser
AIB 2018-04
For higher-level reuse, i.e., for the activities from the preservation to the reutilization of knowledge, it must first be said that these activities do not pay off immediately, but only in the long run. Even worse, they are generally considered tedious, because they expose no immediate benefit. Thus, from harvesting knowledge and storing it for later reuse, through looking for suitable harvested knowledge, i.e., retrieving it, to finally reusing (or reutilizing) it, these activities are perceived as rather unappealing. Fortunately, integrated development environments for lower-level reuse, e.g., of source code, have already demonstrated how to approach this with completion mechanisms that foster enhanced querying and with recommender systems. These place the experience of whole communities at the fingertips of every programmer. Yet, there is no such support for modeling.
Hence, we address the challenges of representation, harvesting, evolution, and retrieval. We contribute an approach tailored to modeling with UML, or with models akin to class diagrams, which turns out to be a knowledge-based recommender system built on property graphs and metagraphs and suitable for a broader scope. Further, based on our deployment experiences with HERMES, we provide a cookbook for developing such a system, including schemas for producing model recommendations in operation-based model recommenders. Since it takes contextual information monitored in the form of modeling operations into account, the system could also be called an operation-and-knowledge-based recommender system that (semi-)automates these tedious activities.
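The operation-based idea can be sketched in a few lines: harvested modeling sessions are mined for which operation typically follows which, and the current editing context is matched against those counts. The session data and operation names below are invented; HERMES itself works on property graphs and metagraphs with far richer context than this bigram model.

```python
from collections import Counter, defaultdict

# Miniature operation-based model recommender: count which modeling operation
# follows which in harvested sessions, then recommend the most frequent
# successors of the user's last operation.

def train(sessions):
    follows = defaultdict(Counter)
    for ops in sessions:
        for prev, nxt in zip(ops, ops[1:]):
            follows[prev][nxt] += 1
    return follows

def recommend(follows, last_op, k=2):
    return [op for op, _ in follows[last_op].most_common(k)]

# Invented harvested sessions of class-diagram editing operations.
sessions = [["add:Customer", "add:Order", "add:Invoice"],
            ["add:Customer", "add:Order", "add:Shipment"],
            ["add:Customer", "add:Account"]]
model = train(sessions)
print(recommend(model, "add:Customer"))  # -> ['add:Order', 'add:Account']
```

This is the source-code-completion analogy made concrete for models: the counts encode community experience, and the editor surfaces it at exactly the moment a tedious reuse lookup would otherwise be needed.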