**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Monday, 6 March 2023, 14:00
Location: Room 2202 (HBau, 2nd floor), Ahornstr. 55
Hybrid via Zoom:
https://rwth.zoom.us/j/99712637671?pwd=U0UzV1JiL1hwWnlzNW5pUC9hVDdOUT09
Speaker: Marcus Völker, M.Sc. RWTH
Informatik 11 (Embedded Software)
Topic: Policy Iteration for Value Set Analysis of PLC Programs
Abstract:
Ensuring the correct behaviour of computing systems is an important task to
prevent danger to the users of such systems. To this end, many analysis
techniques have been developed that can find bugs in software, or prove that
it complies with some specification of correct behaviour.
Among these techniques, a fundamental analysis is value set analysis (VSA),
which can determine an approximation of the program variables' values at
each point of the program. This information is very valuable, as many
faulty behaviours, such as division by zero, access to uninitialised memory
or outside a buffer, and unreachable code, can be traced back to variables
taking unexpected values.
Classically, value set analysis is performed via Kleene iteration. In
recent years, however, an alternative approach called policy iteration has
been developed, with the potential of finding similar or better results
than Kleene iteration in less time.
Policy iteration works by using a heuristic to simplify the program in a
certain way, finding the value sets of that program, and then checking
whether the result is applicable to the original program. If so, the
results are used; otherwise, different simplifications are checked until a
usable result is found.
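For readers unfamiliar with the baseline that policy iteration competes with, Kleene iteration can be sketched on a toy one-variable counting loop; the interval domain, the `join` and `analyse` helper names, and the hard-coded program below are illustrative assumptions, not code from the thesis (real analysers also add widening to guarantee termination on unbounded loops):

```python
# Toy value set analysis with the interval domain.
# Program analysed:   x := 0; while x < 10: x := x + 1
# Kleene iteration re-applies the transfer functions until the interval
# at the loop head stabilises (a fixed point is reached).

def join(a, b):
    """Least upper bound of two intervals (None = bottom/unreachable)."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def analyse(limit=10):
    loop_head = None          # interval of x at the loop head
    state = (0, 0)            # after x := 0
    while True:
        new_head = join(loop_head, state)
        if new_head == loop_head:
            break             # fixed point reached
        loop_head = new_head
        # filter by the loop guard x < limit, then apply x := x + 1
        lo, hi = loop_head
        guarded = (lo, min(hi, limit - 1))
        state = (guarded[0] + 1, guarded[1] + 1)
    # on exit the guard is negated: x >= limit
    return (max(loop_head[0], limit), loop_head[1])

print(analyse())  # → (10, 10): the analysis proves x == 10 on exit
```

Note that this bounded loop needs eleven iterations to stabilise; policy iteration aims to reach comparable results with fewer, cheaper solver runs.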
As policy iteration is a heuristic algorithm, it makes certain assumptions
about program behaviour in order to achieve good results. It turns out,
however, that these assumptions are not guaranteed if the program contains
errors that cause it to behave differently than expected. Since we use
program analysis precisely to find such errors, assuming an error-free
program is not necessarily justified.
In this thesis, we show several ways to improve the original heuristic by
focusing on program loops. First, we present a way to use a pre-analysis to
determine some aspect of the loops' behaviours, and use this information in
order to build a heuristic that leads to more accurate solutions than the
standard heuristic in many cases, at the cost of additional running time
necessary to perform this pre-analysis.
Then, we show a way to reinterpret branches as loops if they occur in
cyclical code, which is typical for programs in reactive systems, such as
the systems used for factory automation. This allows us to use our loop
heuristic on a wider variety of programs, although the cost grows even
further and the approach pays off only in specific cases.
Afterwards, we show how to remove the pre-analysis to gain back the lost
time, while still retaining similarly good results to the expensive version
introduced before. This has the additional benefit of allowing usage of the
algorithms on branches as in the second approach, without incurring any
additional costs. With an extensive evaluation on generated programs, we
argue that this is the version of policy iteration that should be used in
general.
Finally, we show a way to analyse polynomial inequalities with value set
analysis by reinterpreting them as conjunctions of simpler inequalities.
This not only allows us to improve value set analysis results on programs
that feature such inequalities, but also makes these programs accessible to
policy iteration.
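The role of value sets in bounding polynomial expressions can be illustrated with a minimal interval-arithmetic sketch; the `imul`/`iadd` helper names are assumptions, and this shows only generic interval evaluation, not the thesis's actual decomposition of polynomial inequalities into conjunctions:

```python
# Interval bounds for a polynomial guard such as x*x + y <= 10.
# Each monomial is bounded separately, so the guard becomes a
# conjunction of simpler per-term constraints.

def imul(a, b):
    """Interval multiplication [a] * [b]."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def iadd(a, b):
    """Interval addition [a] + [b]."""
    return (a[0] + b[0], a[1] + b[1])

x = (-2, 3)
y = (0, 4)
# Naive interval multiplication ignores that both factors are the same
# variable: x*x yields (-6, 9) instead of the tighter (0, 9).
poly = iadd(imul(x, x), y)
print(poly)  # → (-6, 13): the guard x*x + y <= 10 may or may not hold
```

The over-approximation visible in `x*x` is exactly the kind of imprecision that dedicated handling of polynomial inequalities can reduce.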
The lecturers of Computer Science cordially invite you.
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Thursday, 12 January 2023, 10:00
Location: Room 9222 (Building E3, Informatikzentrum)
Speaker: Sascha Müller, M.Sc.
DLR Braunschweig
Topic: Synthesizing FDIR Recovery Strategies for Space Systems
Abstract:
This talk proposes an inherently non-deterministic model for Dynamic Fault Trees (DFTs) to analyze Fault Detection, Isolation and Recovery (FDIR) concepts with a particular focus on the needs of space systems. Deterministic recovery strategies are synthesized by transforming these non-deterministic DFTs into Markov automata. From the corresponding scheduler, optimized to maximize a given RAMS metric, an optimal recovery strategy can then be derived and represented by a model we call a recovery automaton. We discuss dedicated techniques for reducing the state space of this recovery automaton and investigate lifting the approach to a partially observable setting.
The lecturers of Computer Science cordially invite you.
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Monday, 23 January 2023, 15:30
Location: Online (Zoom: https://umu.zoom.us/my/pauldj)
Speaker:
Christos Psarras, M.Sc.
International Research Training Group (IRTG-2379)
Topic:
Beyond the rigid interfaces of super-optimized building-block libraries:
our experiences in Chemometrics
Abstract:
The efficient computation of linear algebra expressions is a challenging
task faced by many practitioners in scientific fields, such as engineering,
image processing, and computational chemistry, to name a few. For most
applications, mapping a target expression onto a sequence of routines from
highly optimized "building-block" libraries (e.g., BLAS, LAPACK) is an
approach that offers good computational performance as well as accuracy.
However, in other applications, this approach inherently results in a vast
under-utilization of the available computational resources, and thus
reduced performance. In this talk, we focus on the latter applications,
showcasing two
occurrences that routinely arise in Chemometrics: the Canonical Polyadic
Decomposition (CP) and Jackknife resampling of CP models. For the first
occurrence, we describe the limitations of "mapping to building-blocks"
when computing multiple low-rank CP decompositions. Following close
collaboration with Chemometrics practitioners, we present a method (and
algorithm), CP-CALS, which leverages information about their workflow to
overcome these limitations and achieve better performance. For the second
occurrence, we describe the unique challenge of Jackknife resampling. We
present a solution that addresses this challenge by making it possible to
use CP-CALS to significantly increase performance, at the cost of slightly
increasing the total amount of required computation. Through extensive
experimentation with synthetic and real datasets on single-threaded and
multi-threaded architectures, as well as on accelerators, we illustrate the
improved efficiency and performance of our methods.
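As background for the second occurrence, the leave-one-out jackknife resampling pattern can be sketched in a generic scalar form; the `jackknife` helper below is an assumed name and only illustrates the resampling idea, not its application to CP models or the CP-CALS algorithm:

```python
# Leave-one-out jackknife: recompute an estimator n times, each time
# dropping one sample, to estimate its bias and standard error.

def jackknife(data, estimator):
    n = len(data)
    theta_hat = estimator(data)
    # estimator on each leave-one-out subsample
    thetas = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    theta_bar = sum(thetas) / n
    bias = (n - 1) * (theta_bar - theta_hat)
    var = (n - 1) / n * sum((t - theta_bar) ** 2 for t in thetas)
    return theta_hat - bias, var ** 0.5  # bias-corrected estimate, std. error

mean = lambda xs: sum(xs) / len(xs)
est, se = jackknife([2.0, 4.0, 6.0, 8.0], mean)
print(est, se)  # est is close to 5.0: the mean is unbiased
```

The talk's challenge is that each "leave-one-out" recomputation is itself an expensive CP decomposition, which is where batching them via CP-CALS pays off.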
The lecturers of Computer Science cordially invite you.
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Monday, 28 November 2022, 14:30
Location: Room 2222, Ahornstr. 55
Speaker: Christian Cherek, M.Sc.
Chair of Computer Science 10 (Informatik 10)
Topic: The Impact of Tangible Interaction Techniques on Higher Cognitive Processes
Abstract:
Multitouch interaction brought incredible advancements to our everyday life.
The success of smartphones is unprecedented in modern history for a good reason.
On multitouch displays, input and output are collocated at the tip of our fingers.
This enables immediate feedback, highly flexible utilization of the available space,
updatability of interfaces, and new accessibility features. However, a touchscreen's
flat surface lacks haptic features, neglecting a big part of our sensory capabilities.
This thesis contributes to the tangibles research community by presenting novel
ways to create tangibles for capacitive screens and presenting a software framework to
develop tangible applications with Apple's native APIs. We developed the Design Space
of Tangible Interaction, a taxonomy that helps researchers and designers
compare tangible designs and find new ways to interact with tangibles. In
this spirit, we evaluated tangibles in novel ways beyond their
well-established usability benefits. We found them to contribute to users'
way of thinking, their awareness of collaborators, and the intuitiveness of
highly complex input tasks.
The lecturers of Computer Science cordially invite you.
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Thursday, 24 November 2022, 10:00
Zoom:
https://rwth.zoom.us/j/93616983610?pwd=UnkwdWd4azRSNzVVUEt4WW1FdHNJUT09
Meeting ID: 936 1698 3610
Passcode: 382088
Speaker: Parnia Bahar, M.Sc.
Topic: Neural Sequence-to-Sequence Modeling for Language and Speech
Translation
Abstract:
In recent years, various fields in human language technology have been
advanced by the success of neural sequence-to-sequence modeling. The
application of attention models to automatic speech recognition, text,
and speech machine translation has become dominant and well-established.
Although the effectiveness of such models has been documented in
scientific papers, not all aspects of attention sequence-to-sequence
models have been explored. Therefore, the main contribution of this
thesis centers around redesigning attention models by proposing novel
alternative architectures.
From a modeling perspective, this research goes beyond current
sequence-to-sequence backbone models to directly incorporate input and
output sequences in a two-dimensional structure where an attention
mechanism is no longer required. This model distinguishes itself from
attention models in which inputs and outputs are treated as
one-dimensional sequences over time.
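The one-dimensional attention baseline that the two-dimensional model departs from can be sketched minimally; this pure-Python `attention` function (its name, toy dimensions, and example vectors are assumptions) shows only the generic dot-product mechanism, not the thesis's architectures:

```python
# Minimal dot-product attention over a 1-D encoder sequence: each
# decoder step attends to all encoder positions via a softmax over
# query-key scores and returns a weighted sum of the values.
import math

def attention(query, keys, values):
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
# A query aligned with the first key concentrates nearly all weight there.
print(attention([5.0, 0.0], keys, values))
```

Note that the alignment here is only implicit in the softmax weights, which is precisely the lack of an explicit alignment that the next part of the thesis addresses.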
Current state-of-the-art attention models also lack an explicit
alignment, a core component of traditional systems. Such a gross
simplification of a complex process complicates the extraction of
alignments between input and output positions. To enable the
explainability of attention models and more controllable output, the
next part of this study integrates the attention model into the hidden
Markov model formulation by introducing alignments as a sequence of
hidden variables.
Finally, an exciting research direction is combining speech recognition
with text machine translation for speech-to-text translation. Besides
advancing a cascade of independently trained speech recognition and
machine translation systems, this thesis sheds light on different
end-to-end models to directly translate speech into a target text and
shows that such end-to-end models can translate speech utterances in
practice, serving as a substitute for cascaded speech translation.
The lecturers of Computer Science cordially invite you.
--
Stephanie Jansen
Faculty of Mathematics, Computer Science and Natural Sciences
HLTPR - Human Language Technology and Pattern Recognition
RWTH Aachen University
Theaterstraße 35-39
D-52062 Aachen
Tel: +49 241 80-21601
sek(a)hltpr.rwth-aachen.de
www.hltpr.rwth-aachen.de
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Monday, 21 November 2022, 14:00-15:00
The public talk will take place in hybrid form:
Room: 5053.2 (B-IT lecture hall), Informatikzentrum, Ahornstraße 55
Zoom: https://rwth.zoom.us/j/96565981989?pwd=b3B3aEVmSnJ1VFJhUDYwSlorbTcvQT09
Meeting ID: 965 6598 1989
Passcode: 346594
Speaker: Daxin Liu, M.Sc.
LuFG Informatik 5
Topic: Projection in a Probabilistic Epistemic Logic and Its Application to Belief-Based Program Verification
Abstract:
Rich representation of knowledge and actions has been a goal pursued by many AI researchers. Among all proposals, the situation calculus by Reiter is perhaps the most widely studied: actions are treated as logical terms, and the agent's knowledge is represented by logical formulas. The language has been extended to incorporate many features like time, concurrency, and procedures.
Most recently, Belle and Lakemeyer proposed a modal logic, DS, which deals with degrees of belief and noisy sensing. The logic has many appealing properties, like full introspection; however, it also has some shortcomings. Perhaps the main one is a lack of expressiveness when it comes to degrees of belief: currently, the language allows expressing degrees of belief only as constants, making it impossible to express belief distributions. Another important problem is that it lacks projection reasoning mechanisms. Projection is the task of determining whether a query about the future is entailed by an initial knowledge base. Two solutions to the projection problem exist: regression and progression.
While regression transforms the query about the future into a query about the initial state and evaluates it there, progression transforms the whole initial knowledge base into a future one.
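The regression idea can be illustrated with a drastically simplified propositional toy; the `regress` helper, its toggle rule, and the fluent names are assumptions standing in for Reiter-style successor state axioms, far removed from the probabilistic epistemic setting of the thesis:

```python
# Toy regression: rewrite a query about the state after a sequence of
# actions into a query about the initial situation, using a simple
# successor-state rule per (action, fluent) pair.

def regress(query_fluent, actions, rules):
    """Walk the action sequence backwards, rewriting the fluent."""
    negated = False
    for action in reversed(actions):
        effect = rules.get((action, query_fluent))
        if effect == "toggle":
            negated = not negated
        # actions without a rule leave the fluent unchanged (frame)
    return query_fluent, negated

rules = {("flip", "light_on"): "toggle"}
fluent, neg = regress("light_on", ["flip", "wait", "flip", "flip"], rules)
initial = {"light_on": False}
holds = initial[fluent] != neg
print(holds)  # → True: flipped three times from off, the light is on
```

Progression would instead update `initial` itself after each action, so that later queries can be evaluated directly against the progressed knowledge base.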
In this thesis, we first lift the expressiveness of the logic DS by modifying both the syntax and semantics. Moreover, we investigate the projection problem in DS.
In particular, we propose a regression operator which can handle queries with nested beliefs and beliefs with quantifying-in. For progression, we show that classical progression is first-order definable for a fragment of the logic and provide our solution for the progression of belief in terms of only-believing after actions.
Moreover, we explore how to apply the proposed methods in a more practical scenario: the verification of belief programs, a probabilistic extension of Golog programs, in which every action and sensing result may be noisy and every test refers to the agent's subjective beliefs. We show that the verification problem is undecidable even in very restrictive settings, and we identify a special case in which it is decidable.
The lecturers of Computer Science cordially invite you.
_______________________________
Leany Maaßen
RWTH Aachen University
Lehrstuhl Informatik 5, LuFG Informatik 5
Prof. Dr. Stefan Decker, Prof. Dr. Matthias Jarke,
Prof. Gerhard Lakemeyer Ph.D., JunProf. Dr. Sandra Geisler
Ahornstrasse 55
D-52074 Aachen
Tel: 0241-80-21509
Fax: 0241-80-22321
E-Mail: maassen(a)dbis.rwth-aachen.de
Dear colleagues and students,
as a reminder ...
We invite you to join a guest talk by our visiting professor and Alexander
von Humboldt awardee Salil Kanhere of UNSW Sydney this afternoon.
Best Regards
klaus
When? Monday, October 24, 15:30
Where? Room 9222, E3 building, Ahornstraße 55
The title of the talk will be:
Practical and Extensible Decentralised Identity Management
Abstract:
Self-Sovereign Identity (SSI) is an emerging, user-centric,
decentralized identity approach affording entities greater control over
their identity and data flow during digital interactions. For digital
credentials to be widely accepted, there is a need for an end-to-end
system that provides secure verification of the participant identities
and credentials to increase trust, and a data minimisation mechanism to
reduce the risk of oversharing the credential data. In this talk, we
first introduce CredChain, a blockchain-based SSI platform that allows
secure creation, sharing and verification of credentials. Beyond the
verification of identities and credentials, the self-sovereign identity
architecture allows users to have full control over their credential
data using a digital wallet, including the ability to selectively
disclose part of the credential data as necessary. Current SSI solutions
assume the issuers to be “official” entities (e.g., government agencies)
who must follow a stringent process to vet their credentials. However,
there is no systematic support for extending the same level of trust to
individual users who may issue credentials (e.g., delegation of access, a
consent letter) in the context of business processes. A verifier who
relies on user-issued credentials to complete a business process (e.g., a
postal worker handing over a parcel to someone
other than the addressee) bears the risk of accepting these credentials
without reliance on a trust agency. The second part of the talk presents
CredTrust, a blockchain-based SSI framework that allows individual users
to be “onboarded” to the platform as a verifiable issuer via the
establishment of a "chain of trust". The talk will end with an overview
of TradeChain, an architecture for decoupling identities and trade
activities on blockchain-enabled supply chains. TradeChain incorporates
two separate ledgers: a public permissioned blockchain for maintaining
identities and a permissioned blockchain for recording trade flows.
Traders use Zero Knowledge Proofs (ZKPs) on their private credentials to
prove multiple identities on the trade ledger. Traders can define
dynamic access rules for verifying traceability information from the
trade ledger using access tokens and Ciphertext Policy Attribute-Based
Encryption (CP-ABE).
Dear colleagues,
We invite you to join a guest talk by our visiting professor Salil
Kanhere of UNSW Sydney next Monday afternoon.
When? Monday, October 24, 15:30
Where? Room 9222, E3 building, Ahornstraße 55
The title of the talk will be:
Practical and Extensible Decentralised Identity Management
We are looking forward to seeing you on Monday!
Best regards,
Roman Matzutt
--
Roman Matzutt, M.Sc., Ph.D. Student
Chair of Communication and Distributed Systems
RWTH Aachen University, Germany
tel: +49 241 80 21412
web: https://www.roman-matzutt.de
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Friday, 11 November 2022, 10:00
Location: Room 9222 (2359|222), E3, Informatikzentrum, Ahornstr. 55
Speaker: Jörg Christian Kirchhof, M.Sc. RWTH (Software Engineering)
Topic: Model-driven Development, Deployment, and Analysis of
Internet of Things Applications
Kurzfassung (English version below):
Das Internet der Dinge (IoT) beschreibt die Idee, mit Sensoren und
Aktuatoren ausgestattete Gegenstände untereinander und mit dem Internet
zu verbinden. Die Entwicklung von IoT-Anwendungen ist aus verschiedenen
Gründen komplex. Dazu gehören die Heterogenität der IoT-Geräte, die
Tatsache, dass es sich bei IoT-Anwendungen normalerweise um verteilte
Anwendungen handelt, und die Fehleranfälligkeit der Hardware und der
Netzwerkverbindung. Modellgetriebene Methoden versprechen, die komplexe
Entwicklung von IoT-Anwendungen durch die Anhebung des
Abstraktionsniveaus handhabbar zu machen. In verwandten Arbeiten wurde
eine Vielzahl von Komponenten-und-Konnektor (C&C)
Architekturbeschreibungssprachen (ADLs) zur Entwicklung von
IoT-Anwendungen vorgestellt. Diese konzentrieren sich jedoch
hauptsächlich auf die frühen Entwicklungsphasen und vernachlässigen
weitgehend Zuverlässigkeitsaspekte.
Dementsprechend konzentriert sich diese Arbeit auf die modellgetriebene
Entwicklung von IoT-Anwendungen über ihren gesamten Lebenszyklus
hinweg. Wir stellen MontiThings vor, ein Ökosystem zur
modellgetriebenen Entwicklung von IoT-Anwendungen. Basierend auf
bestehenden Ansätzen spezifiziert das MontiThings-Ökosystem eine
IoT-fokussierte C&C ADL unter Verwendung der MontiCore Language
Workbench. MontiThings zielt darauf ab, ein Ökosystem anzubieten, das
den Lebenszyklus von IoT-Anwendungen abdeckt, angefangen bei den ersten
Architekturkonzepten bis hin zur Bereitstellung der Anwendung und der
Analyse der Anwendung während der Laufzeit. In allen Phasen dieses
Prozesses bietet MontiThings dabei Zuverlässigkeitsmechanismen, die
dabei helfen können, robuste Anwendungen zu spezifizieren.
Beim Deployment von Anwendungen ist die anforderungsbasierte
Deployment-Methode von MontiThings in der Lage, nicht nur eine
Verteilung der Komponenten auf die IoT-Geräte zu berechnen, sondern
aktiv Änderungen vorzuschlagen, sollten Anforderungen nicht erfüllbar
sein. Fallen Geräte zur Laufzeit aus, kann MontiThings das Deployment
automatisch an die geänderte Situation anpassen (sofern es im Rahmen
der Anforderungen möglich ist) und den vorherigen Softwarestand der
ausgefallenen Geräte wiederherstellen. Zum Verständnis
unvorhergesehener Situationen zur Laufzeit stellt MontiThings
Analysedienste zur Verfügung. Insgesamt demonstriert MontiThings eine
durchgängig modellgetriebene Methode zur Entwicklung von
IoT-Anwendungen.
The lecturers of Computer Science cordially invite you.
---
Abstract:
The Internet of Things (IoT) describes the idea of connecting objects
equipped with sensors and actuators to each other and to the Internet.
IoT applications are complex to develop for a variety of reasons,
including the heterogeneity of the IoT devices, diverse software
stacks, the fact that IoT applications are usually distributed
applications, and the fragility of the hardware and network connection.
Model-driven methods promise to make the complex development of IoT
applications manageable by raising the level of abstraction. Related
work has proposed a variety of component and connector (C&C)
architecture description languages (ADLs) for developing IoT
applications. However, these mainly focus on the early development
phases and largely neglect reliability aspects.
Accordingly, this work focuses on the model-driven development of IoT
applications throughout their lifecycle. We present MontiThings, an
ecosystem for model-driven IoT applications. Based on existing
approaches, the MontiThings ecosystem specifies an IoT-focused C&C ADL
using the MontiCore language workbench. MontiThings aims at offering an
ecosystem that covers the lifecycle of IoT applications starting from
the first architecture concepts up to the eventual deployment of the
application and its analysis during runtime. At all stages of this
process, MontiThings offers reliability mechanisms that can help
to specify resilient applications.
When deploying applications, MontiThings’ requirements-based deployment
method can not only calculate a distribution of components to IoT devices
but also actively propose changes should the requirements be
unsatisfiable. If devices fail at runtime, MontiThings
can automatically adapt the deployment to the changed situation (if
possible within the requirements) and restore the previous software
state of failed devices. To understand unforeseen situations that may
arise at runtime, MontiThings provides model-driven analysis services.
Overall, MontiThings demonstrates an end-to-end model-driven approach
for designing IoT applications.
**********************************************************************
*                                                                    *
*                            Invitation                              *
*                                                                    *
*                      Informatik-Oberseminar                        *
*                                                                    *
**********************************************************************
Time: Thursday, 6 October 2022, 15:00
Location: Room 9222, Ahornstr. 55
Speaker: Tim Hartmann, M.Sc.
Chair of Computer Science 1 (Informatik 1)
Topic: Facility Location on Graphs
Abstract:
We study two closely related Facility Location problems on graphs where
all edges have unit length and where the facilities may also be
positioned in the interior of the edges. For delta-Dispersion the goal
is to position as many facilities as possible subject to the condition
that any two facilities have at least distance delta from each other.
For delta-Covering the goal is to cover the entire graph with the
minimum number of facilities; that is, we want to position as few
facilities as possible subject to the condition that every point on
every edge is at distance at most delta from one of these facilities.
We investigate the algorithmic complexity of these problems for every
real distance value delta. Further, we explore the complexity of these
problems with the solution size as a parameter. Finally, we study
delta-Dispersion with parameters that measure the complexity of the
input graph, such as treewidth, treedepth and neighborhood diversity.
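Restricted to facilities placed at vertices, the delta-Covering condition can be checked by brute force; the helper names and the example path graph below are assumptions, and the sketch deliberately ignores the thesis's key feature of positioning facilities (and covering points) in the interiors of edges:

```python
# Brute-force check of delta-Covering restricted to vertex positions:
# do the chosen facilities cover every vertex within distance delta?
# All edges have unit length, so BFS yields shortest path distances.
from collections import deque

def bfs_dist(adj, src):
    """Shortest path distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1   # unit edge lengths
                queue.append(v)
    return dist

def covers(adj, facilities, delta):
    """True iff every vertex is within delta of some facility."""
    dist = {v: float("inf") for v in adj}
    for f in facilities:
        for v, d in bfs_dist(adj, f).items():
            dist[v] = min(dist[v], d)
    return all(d <= delta for d in dist.values())

# Path a - b - c - d: one facility at b suffices for delta = 2.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(covers(path, ["b"], 2))  # → True
print(covers(path, ["b"], 1))  # → False (d is at distance 2)
```

Allowing facilities in edge interiors and non-integer delta is exactly what makes the problems studied in the thesis subtler than this vertex-only check.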
The lecturers of Computer Science cordially invite you.