**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Thursday, May 4, 2023, 13:30
Location: Seminar room 2202, Hauptbau, Ahornstr. 55
Speaker: Hendrik Simon, M.Sc.
Lehrstuhl Informatik 11
Topic: Automatic Test Case Generation for PLC Software
Abstract:
Automatic test case generation for the purpose of bug finding or achieving coverage goals has recently evolved into a scalable technique that is nowadays used to find highly critical security bugs, e.g., by Microsoft.
However, in the domain of Programmable Logic Controllers (PLCs), applications of this technique are rare and usually rely on tools and mechanisms that were not initially designed for this domain.
In fact, a discussion of how to design such techniques with the peculiarities of PLC software in mind is missing.
At the same time, PLC software is typically used in safety-critical environments where software errors pose significant threats to the environment or humans and may additionally result in significant financial losses.
Mature automatic testing techniques for the PLC domain would, thus, be highly beneficial to further support software quality in this area.
PLC software typically follows a cyclic execution scheme: a repeated process of reading input values, executing an (often state-machine-based) control program that relies on local variables, and writing the computed values to outputs.
Although the cyclic execution represents only a small change in the execution semantics, the impact on automatic testing techniques is significant.
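The cyclic execution scheme described above can be sketched as follows. This is a didactic illustration only; the input trace, the start/stop signals, and the toy state machine are hypothetical and not taken from any PLC runtime:

```python
from enum import Enum

class State(Enum):
    IDLE = 0
    RUNNING = 1

def control_program(inputs: dict, state: State) -> tuple[dict, State]:
    """A toy state-machine control program: start on 'start', stop on 'stop'."""
    if state is State.IDLE and inputs["start"]:
        state = State.RUNNING
    elif state is State.RUNNING and inputs["stop"]:
        state = State.IDLE
    outputs = {"motor_on": state is State.RUNNING}
    return outputs, state

def scan_cycle(input_trace):
    """Repeated scan cycle: read inputs, run the program, write outputs."""
    state = State.IDLE            # local state persists across cycles
    output_trace = []
    for inputs in input_trace:    # each element models one cycle's input image
        outputs, state = control_program(inputs, state)
        output_trace.append(outputs)
    return output_trace

trace = scan_cycle([
    {"start": True,  "stop": False},
    {"start": False, "stop": False},
    {"start": False, "stop": True},
])
# the motor turns on, stays on, then turns off
```

A test case generator for such software must reason about sequences of input images rather than a single call, which is exactly the change in semantics the abstract refers to.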
This dissertation provides insights and mechanisms to transfer automatic test case generation into the domain of PLC software. We conduct an in-depth discussion on related approaches and point out strengths and weaknesses in order to provide baseline knowledge that can be utilised in future developments in this field of research.
Further, we introduce our own automatic test case generation approaches and exemplify their effectiveness on PLC software. We are able to show that the generation of branch coverage tests can be achieved significantly faster than with existing techniques, rendering our approaches more applicable for larger software.
The focus of our techniques lies in the exploitation of state-machine-based execution behaviour and the preservation of structural information in Sequential Function Charts.
For the latter, our presented algorithm can achieve full coverage in a few seconds for programs that could only partly be covered within an hour by related approaches.
Invited by: the lecturers of Computer Science
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Friday, March 10, 2023, 11:00
Location: Room 9222, Building E3, Ahornstr. 55
Zoom:
https://rwth.zoom.us/j/95679484581?pwd=Q0N4SUFZQ21jVERZMDAwc2cyQXpzZz09
Speaker: Steffen van Bergerem, M.Sc.
Lehrstuhl für Informatik 7
Topic: Descriptive Complexity of Learning
Abstract:
Supervised learning is a field in machine learning that strives to
classify data based on labelled training examples. In the Boolean
setting, each input is to be assigned to one of two classes, and there
are several fruitful machine-learning methods to obtain a classifier.
However, different algorithms usually come with different types of
classifiers, e.g. decision trees, support-vector machines, or neural
networks, and this is cumbersome for a unified study of the intrinsic
complexity of learning tasks.
This thesis aims at strengthening the theoretical foundations of
machine learning in a consistent framework. In the setting due to
Grohe and Turán (2004), the inputs for the classification are tuples
from a relational structure and the search space for the classifiers
consists of logical formulas. The framework separates the definition
of the class of potential classifiers (the hypothesis class) from the
precise machine-learning algorithm that returns a classifier. This
facilitates an information-theoretic analysis of hypothesis classes
as well as a study of the computational complexity of learning
hypotheses from a specific hypothesis class.
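The separation of hypothesis class and learning algorithm can be illustrated with a deliberately simple sketch. This is not the sublinear-time algorithm of the thesis: the hypothesis class here is a small hand-written set of formula-like predicates over the nodes of a graph, and the learner returns any hypothesis consistent with the labelled examples. All names and the example structure are illustrative:

```python
# A tiny relational structure: an undirected graph plus a unary relation R.
graph = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
R = {2, 3}

# Hypothesis class: a fixed set of "formulas" phi(x), each a predicate on nodes.
hypotheses = {
    "R(x)":                    lambda v: v in R,
    "exists y: E(x,y) & R(y)": lambda v: any(u in R for u in graph[v]),
    "not R(x)":                lambda v: v not in R,
}

def learn(examples):
    """Return the name of any hypothesis consistent with all labelled examples."""
    for name, phi in hypotheses.items():
        if all(phi(v) == label for v, label in examples):
            return name
    return None  # no consistent hypothesis in the class
```

For instance, the labelled examples (1, True), (2, True), (4, True) rule out "R(x)" (node 1 is not in R) but are consistent with "exists y: E(x,y) & R(y)". The information-theoretic and complexity questions mentioned above then concern how large the class is and how fast such a consistent hypothesis can be found.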
As a first step, Grohe and Ritzert (2017) proved that hypotheses
definable in first-order logic (FO) can be learned in sublinear time
over structures of small degree. We generalise this result to two
extensions of FO that provide data-aggregation methods similar to
those in commonly used relational database systems. First, we study
the extension FOCN of FO with counting quantifiers. Then, we analyse
logics that operate on weighted structures, which can model relational
databases with numerical values. For that, we introduce the new logic
FOWA, which extends FO by weight aggregation. We provide locality
results and prove that hypotheses definable in a fragment of the logic
can be learned in sublinear time over structures of small degree.
To better understand the complexity of machine-learning tasks on
richer classes of structures, we then study the parameterised
complexity of these problems. On arbitrary relational structures and
under common complexity-theoretic assumptions, learning hypotheses
definable in pure first-order logic turns out to be intractable.
In contrast to this, we show that the problem is fixed-parameter
tractable if the structures come from a nowhere dense class.
This subsumes numerous classes of sparse graphs. In particular,
we obtain fixed-parameter tractability for planar graphs, graphs of
bounded treewidth, and classes of graphs excluding a minor.
Invited by: the lecturers of Computer Science
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Monday, January 30, 2023, 13:00-14:00
Location: Informatikzentrum, Ahornstraße 55, Room 5053.2 (B-IT lecture hall)
Speaker: Aleksandar Mitrevski, M.Sc.
LuFG Informatik 5
Topic: Skill Generalisation and Experience Acquisition for Predicting and Avoiding Execution Failures
Abstract:
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures.
The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are evaluated in multiple experiments performed with a Toyota Human Support Robot.
The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
Invited by: the lecturers of Computer Science
_______________________________
Leany Maaßen
RWTH Aachen University
Lehrstuhl Informatik 5, LuFG Informatik 5
Prof. Dr. Stefan Decker, Prof. Dr. Matthias Jarke,
Prof. Gerhard Lakemeyer Ph.D., JunProf. Dr. Sandra Geisler
Ahornstrasse 55
D-52074 Aachen
Tel: 0241-80-21509
Fax: 0241-80-22321
E-Mail: maassen(a)dbis.rwth-aachen.de
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Monday, March 6, 2023, 14:00
Location: Room 2202 (Hauptbau, 2nd floor), Ahornstr. 55
Hybrid via Zoom:
https://rwth.zoom.us/j/99712637671?pwd=U0UzV1JiL1hwWnlzNW5pUC9hVDdOUT09
Speaker: Marcus Völker, M.Sc. RWTH
Informatik 11 Embedded Software
Topic: Policy Iteration for Value Set Analysis of PLC Programs
Abstract:
Ensuring the correct behaviour of computing systems is an important task
for preventing danger to their users. To this end, many analysis
techniques have been developed that can find bugs in software or prove
that it complies with some specification of correct behaviour.
Among these techniques, a fundamental one is value set analysis (VSA),
which determines an approximation of the program variables' values at
each point of the program. This information is very valuable, as many
faulty behaviours, such as division by zero, access to uninitialised
memory or outside a buffer, or unreachable code, can be traced back to
variables taking unexpected values.
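The flavour of such an analysis can be illustrated with a hand-rolled interval analysis of a single loop. This is a didactic sketch using Kleene iteration with widening and one narrowing pass, not any particular tool; the analysed program (x := 0; while x < 10: x := x + 1) is hard-coded:

```python
# Interval abstract domain: (lo, hi) pairs; None encodes "bottom" (unreachable).
def join(a, b):
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Jump unstable bounds to infinity to guarantee termination."""
    if old is None: return new
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def meet_upper(iv, bound):
    """Intersect an interval with x <= bound (guard x < 10 gives bound 9)."""
    if iv is None or iv[0] > bound: return None
    return (iv[0], min(iv[1], bound))

def analyse():
    # Program: x := 0; while x < 10: x := x + 1
    head = None                       # interval of x at the loop head
    while True:
        body = meet_upper(head, 9)    # values entering the loop body
        after_inc = None if body is None else (body[0] + 1, body[1] + 1)
        new_head = join((0, 0), after_inc)   # entry edge joined with back edge
        widened = widen(head, new_head)
        if widened == head:           # fixed point reached
            break
        head = widened
    # one narrowing pass refines the widened upper bound back down
    body = meet_upper(head, 9)
    after_inc = None if body is None else (body[0] + 1, body[1] + 1)
    return join((0, 0), after_inc)    # interval of x at the loop head
```

Widening first overshoots to (0, inf) at the loop head; the narrowing pass recovers the precise interval (0, 10).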
Classically, value set analysis is performed by Kleene iteration; in
recent years, an alternative approach called policy iteration has been
developed, with the potential of finding similar or better results than
Kleene iteration in less time.
Policy iteration works by using a heuristic to simplify the program in a
certain way, finding the value sets of that simplified program, and then
checking whether the result is applicable to the original program. If so,
the results are used; otherwise, different simplifications are checked
until a usable result is found.
As policy iteration is a heuristic algorithm, it makes certain assumptions
about program behaviour in order to achieve good results. It turns out,
however, that these assumptions are not guaranteed to hold if the program
contains errors which cause it to behave differently than expected. Since
program analysis is used precisely to find such errors, assuming an
error-free program is not a safe assumption.
In this thesis, we show several ways to improve the original heuristic by
focusing on program loops. First, we present a way to use a pre-analysis to
determine some aspect of the loops' behaviours, and use this information in
order to build a heuristic that leads to more accurate solutions than the
standard heuristic in many cases, at the cost of additional running time
necessary to perform this pre-analysis.
Then, we show a way to reinterpret branches as loops if they occur in
cyclical code, which is typical for programs in reactive systems, such as
the systems used for factory automation. This allows us to use our loop
heuristic on a wider variety of programs, even though the cost becomes even
greater, and it is useful only in specific cases.
Afterwards, we show how to remove the pre-analysis to regain the lost
time while still retaining results similar to those of the expensive
version introduced before. This has the additional benefit of allowing
the algorithms to be used on branches as in the second approach, without
incurring any additional cost. With an extensive evaluation on generated
programs, we motivate that this is the version of policy iteration that
should be used in general.
Finally, we show a way to analyse polynomial inequalities with value set
analysis by reinterpreting them as conjunctions of simpler inequalities.
This not only allows us to improve value set analysis results on programs
that feature such inequalities, but also makes these programs accessible to
policy iteration.
Invited by: the lecturers of Computer Science
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Thursday, January 12, 2023, 10:00
Location: Room 9222 (Building E3, Informatikzentrum)
Speaker: Sascha Müller, M.Sc.
DLR Braunschweig
Topic: Synthesizing FDIR Recovery Strategies for Space Systems
Abstract:
This talk proposes an inherently non-deterministic model for Dynamic Fault Trees (DFTs) to analyze Fault Detection, Isolation and Recovery (FDIR) concepts with a particular focus on the needs of space systems. Deterministic recovery strategies are synthesized by transforming these non-deterministic DFTs into Markov automata. From the corresponding scheduler, optimized to maximize a given RAMS metric, an optimal recovery strategy can then be derived and represented by a model we call a recovery automaton. We discuss dedicated techniques for reducing the state space of this recovery automaton and investigate lifting the approach to a partially observable setting.
Invited by: the lecturers of Computer Science
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Monday, January 23, 2023, 15:30
Location: Online (Zoom: https://umu.zoom.us/my/pauldj)
Speaker:
Christos Psarras, M.Sc.
International Research Training Group (IRTG-2379)
Topic:
Beyond the Rigid Interfaces of Super-Optimized Building-Block Libraries:
Our Experiences in Chemometrics
Abstract:
The efficient computation of linear algebra expressions is a challenging
task faced by many practitioners in scientific fields, such as engineering,
image processing, and computational chemistry, to name a few. For most
applications, mapping a target expression into a sequence of
highly-optimized library routines (often referred to as "building-block"
libraries, e.g., BLAS, LAPACK), is an approach that offers good
computational performance as well as accuracy. However, in other
applications, this approach inherently results in a vast under-utilization
of the available computational resources, and thus reduced performance. In
this talk, we emphasize on these, latter, applications, showcasing two
occurrences that routinely arise in Chemometrics: the Canonical Polyadic
Decomposition (CP) and Jackknife resampling of CP models. For the first
occurrence, we describe the limitations of "mapping to building-blocks"
when computing multiple, low-rank CP decompositions. After close
collaboration with Chemometrics practitioners, we present a method (and
algorithm), CP-CALS, which leverages information about their workflow, to
overcome said limitations and achieve better performance. For the second
occurrence, we describe the unique challenge of Jackknife resampling. We
present a solution that addresses this challenge by making it possible to
use CP-CALS to significantly increase performance, at the cost of slightly
increasing the total amount of required computation. Through extensive
experimentation with synthetic and real datasets on single-threaded and
multi-threaded architectures, as well as on accelerators, we illustrate the
improved efficiency and performance of our methods.
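CP-CALS itself is not reproduced here. As background for readers unfamiliar with the decomposition, a minimal alternating-least-squares (ALS) loop for a single rank-R CP decomposition of a 3-way tensor can be sketched with NumPy; all function names and dimensions are illustrative:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: rows indexed by (row of U, row of V)."""
    r = U.shape[1]
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, r)

def cp_als(X, rank, iters=100, seed=0):
    """Plain CP-ALS for a 3-way tensor X; returns factor matrices A, B, C."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, J * K)                      # mode-0 unfolding (C order)
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-2 unfolding
    for _ in range(iters):
        # each update solves a linear least-squares problem for one factor
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The per-iteration work is dominated by a handful of matrix products, which is precisely why fitting many small CP models one at a time under-utilises the hardware, and why batching them (as CP-CALS does) pays off.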
Invited by: the lecturers of Computer Science
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Monday, November 28, 2022, 14:30
Location: Room 2222, Ahornstr. 55
Speaker: Christian Cherek, M.Sc.
Lehrstuhl Informatik 10
Topic: The Impact of Tangible Interaction Techniques on Higher Cognitive Processes
Abstract:
Multitouch interaction brought incredible advancements to our everyday life.
The success of smartphones is unprecedented in modern history for a good reason.
On multitouch displays, input and output are collocated at the tip of our fingers.
This enables immediate feedback, highly flexible utilization of the available space,
updatability of interfaces, and new accessibility features. However, a touchscreen's
flat surface lacks haptic features, neglecting a big part of our sensory capabilities.
This thesis contributes to the tangible-interaction research community by presenting
novel ways to create tangibles for capacitive screens and a software framework for
developing tangible applications with Apple's native APIs. We developed the Design
Space of Tangible Interaction, a taxonomy that helps researchers and designers compare
tangible designs and find new ways to interact with tangibles. In this spirit, we
evaluated tangibles in novel ways beyond their well-established usability benefits.
We found them to contribute to users' way of thinking, awareness of collaborators,
and intuitiveness of highly complex input tasks.
Invited by: the lecturers of Computer Science
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Thursday, November 24, 2022, 10:00
Zoom:
https://rwth.zoom.us/j/93616983610?pwd=UnkwdWd4azRSNzVVUEt4WW1FdHNJUT09
Meeting ID: 936 1698 3610
Passcode: 382088
Speaker: Parnia Bahar, M.Sc.
Topic: Neural Sequence-to-Sequence Modeling for Language and Speech
Translation
Abstract:
In recent years, various fields in human language technology have been
advanced by the success of neural sequence-to-sequence modeling. The
application of attention models to automatic speech recognition, text,
and speech machine translation has become dominant and well-established.
Although the effectiveness of such models has been documented in
scientific papers, not all aspects of attention sequence-to-sequence
models have been explored. Therefore, the main contribution of this
thesis centers around redesigning attention models by proposing novel
alternative architectures.
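For readers less familiar with the attention mechanism these models build on, a minimal scaled dot-product attention step can be sketched in NumPy. This is a generic textbook construction, not one of the thesis' architectures, and the dimensions are arbitrary:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stabilised
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight the values V by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (num_queries, num_keys)
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V, weights

# One decoder position attending over three encoder positions.
rng = np.random.default_rng(0)
Q = rng.standard_normal((1, 4))   # query for the current output position
K = rng.standard_normal((3, 4))   # keys for the input positions
V = rng.standard_normal((3, 4))   # values for the input positions
context, weights = attention(Q, K, V)
```

The soft weights over input positions are exactly the quantity that, as discussed below for the alignment question, only loosely corresponds to a traditional hard alignment.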
From a modeling perspective, this research goes beyond current
sequence-to-sequence backbone models to directly incorporate input and
output sequences in a two-dimensional structure where an attention
mechanism is no longer required. This model distinguishes itself from
attention models in which inputs and outputs are treated as
one-dimensional sequences over time.
Current state-of-the-art attention models also lack an explicit
alignment, a core component of traditional systems. Such a gross
simplification of a complex process complicates the extraction of
alignments between input and output positions. To enable the
explainability of attention models and more controllable output, the
next part of this study integrates the attention model into the hidden
Markov model formulation by introducing alignments as a sequence of
hidden variables.
Finally, an exciting research direction is combining speech recognition
with text machine translation for speech-to-text translation. Besides
advancing a cascade of independently trained speech recognition and
machine translation systems, this thesis sheds light on different
end-to-end models to directly translate speech into a target text and
shows that such end-to-end models can practically translate speech
utterances, serving as a substitute for cascaded speech translation.
Invited by: the lecturers of Computer Science
--
Stephanie Jansen
Faculty of Mathematics, Computer Science and Natural Sciences
HLTPR - Human Language Technology and Pattern Recognition
RWTH Aachen University
Theaterstraße 35-39
D-52062 Aachen
Tel: +49 241 80-21601
sek(a)hltpr.rwth-aachen.de
www.hltpr.rwth-aachen.de
**********************************************************************
*
* Invitation
*
* Informatik-Oberseminar
*
**********************************************************************
Time: Monday, November 21, 2022, 14:00-15:00
The public talk will take place in hybrid form:
Room: Room 5053.2 (B-IT lecture hall), Informatikzentrum, Ahornstraße 55
Zoom: https://rwth.zoom.us/j/96565981989?pwd=b3B3aEVmSnJ1VFJhUDYwSlorbTcvQT09
Meeting ID: 965 6598 1989
Passcode: 346594
Speaker: Daxin Liu, M.Sc.
LuFG Informatik 5
Topic: Projection in a Probabilistic Epistemic Logic and Its Application to Belief-Based Program Verification
Abstract:
Rich representation of knowledge and actions has been a goal that many AI researchers pursue. Among all proposals, perhaps the situation calculus by Reiter is the most widely studied; there, actions are treated as logical terms and the agent's knowledge is represented by logical formulas. The language has been extended to incorporate many features such as time, concurrency, and procedures.
Most recently, Belle and Lakemeyer proposed a modal logic DS which deals with degrees of belief and noisy sensing. The logic has many appealing properties, such as full introspection; however, it also has some shortcomings. Perhaps the main one is the lack of expressiveness when it comes to degrees of belief: currently, the language allows expressing degrees of belief only as constants, making it impossible to express belief distributions. Another important problem is that it lacks projection reasoning mechanisms. Projection is the task of determining whether a query about the future is entailed by an initial knowledge base. Two solutions to projection exist: regression and progression.
While regression transforms a query about the future into a query about the initial state and evaluates it there, progression transforms the whole initial knowledge base into a future one.
In this thesis, we first lift the expressiveness of the logic DS by modifying both the syntax and semantics. Moreover, we investigate the projection problem in DS.
In particular, we propose a regression operator which can handle queries with nested beliefs and beliefs with quantifying-in. For progression, we show that classical progression is first-order definable for a fragment of the logic and provide our solution for the progression of belief in terms of only-believing after actions.
Moreover, we explore how to apply the proposed methods in a more practical scenario: the verification of belief programs, a probabilistic extension of Golog programs, where every action and sensing result may be noisy and every test refers to the agent's subjective beliefs. We show that the verification problem is undecidable even in very restrictive settings. We also show a special case where the problem is decidable.
Invited by: the lecturers of Computer Science
Dear colleagues and students,
as a reminder ...
We invite you to join a guest talk by our visiting professor and
Alexander von Humboldt awardee Salil Kanhere of UNSW Sydney this afternoon.
Best Regards
Klaus
When? Monday, October 24, 15:30
Where? Room 9222, E3 building, Ahornstraße 55
The title of the talk will be:
Practical and Extensible Decentralised Identity Management
Abstract:
Self-Sovereign Identity (SSI) is an emerging, user-centric,
decentralized identity approach affording entities greater control over
their identity and data flow during digital interactions. For digital
credentials to be widely accepted, there is a need for an end-to-end
system that provides secure verification of the participant identities
and credentials to increase trust, and a data minimisation mechanism to
reduce the risk of oversharing the credential data. In this talk, we
first introduce CredChain, a blockchain-based SSI platform that allows
secure creation, sharing and verification of credentials. Beyond the
verification of identities and credentials, the self-sovereign identity
architecture allows users to have full control over their credential
data using a digital wallet, including the ability to selectively
disclose part of the credential data as necessary. Current SSI solutions
assume the issuers to be "official" entities (e.g., government agencies)
who must follow a stringent process to vet their credentials. However,
there is no systematic support for establishing the same level of trust
for individual users who may issue credentials (e.g., for a
delegation of access or a consent letter) in the context of business
processes. A verifier who relies on user-issued credentials to complete
a business process (e.g., a postal worker handing over a parcel to someone
other than the addressee) bears the risk of accepting these credentials
without reliance on a trust agency. The second part of the talk presents
CredTrust, a blockchain-based SSI framework that allows individual users
to be “onboarded” to the platform as a verifiable issuer via the
establishment of a "chain of trust". The talk will end with an overview
of TradeChain, an architecture for decoupling identities and trade
activities on blockchain enabled supply chains. TradeChain incorporates
two separate ledgers: a public permissioned blockchain for maintaining
identities and a permissioned blockchain for recording trade flows.
Traders use Zero Knowledge Proofs (ZKPs) on their private credentials to
prove multiple identities on the trade ledger. Traders can define
dynamic access rules for verifying traceability information from the
trade ledger using access tokens and Ciphertext Policy Attribute-Based
Encryption (CP-ABE).
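The selective-disclosure idea mentioned above can be illustrated with a salted-hash commitment sketch. This is a generic textbook construction, not the actual CredChain/CredTrust protocol; the attribute names and the signing step (elided here) are illustrative:

```python
import hashlib
import os

def commit(attributes):
    """Issuer: commit to each attribute with a fresh salt; the digests
    (not the values) would then be signed by the issuer."""
    salts = {k: os.urandom(16) for k in attributes}
    digests = {k: hashlib.sha256(salts[k] + v.encode()).hexdigest()
               for k, v in attributes.items()}
    return salts, digests

def disclose(attributes, salts, keys):
    """Holder: reveal only selected attributes together with their salts."""
    return {k: (attributes[k], salts[k]) for k in keys}

def verify(disclosed, digests):
    """Verifier: recompute digests for the revealed attributes only."""
    return all(hashlib.sha256(salt + value.encode()).hexdigest() == digests[k]
               for k, (value, salt) in disclosed.items())

credential = {"name": "Alice", "date_of_birth": "1990-01-01", "degree": "M.Sc."}
salts, digests = commit(credential)
shown = disclose(credential, salts, ["degree"])   # hide name and birth date
assert verify(shown, digests)
```

The salts prevent a verifier from brute-forcing the hidden attributes from their digests; production systems add signatures over the digest set and, as in the talk, zero-knowledge proofs for statements about undisclosed values.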