Dear all,
this is a reminder for Erika Ábrahám's talk "Relating Stochastic Hybrid Petri Nets and Stochastic Hybrid Automata", taking place today at 12:30 in B-IT room 5053.2. Please find the details below.
--- Abstract ---
Hybrid systems are systems with mixed discrete-continuous behavior; typical examples are continuously
evolving physical systems controlled by discrete controllers. If such systems also possess some stochastic
behavior, we speak of stochastic hybrid systems. In this talk we focus on the question of how to model
a group of such systems that evolve concurrently.
Different modeling formalisms offer different views on concurrent stochastic hybrid systems. On the one hand,
(discrete) Petri nets, which model concurrency in a deeply inherent way, have been extended with continuous
and stochastic components. On the other hand, finite automata have been extended with continuous evolution
to hybrid automata, and different approaches have been proposed to also integrate stochastic
components into hybrid automata.
In the first part of this talk, we will take a closer look at the modeling of hybrid systems within these formalisms
and discuss an open problem that sounds simple but seems hard to solve.
In the second part, we will turn our attention to adding stochasticity to these modeling formalisms. We will discuss
and relate existing approaches, attempt to find motivations for different design choices,
and conclude with some general observations.
----------------
Part of the programme of the research training group UnRAVeL is a series of lectures on the topics of UnRAVeL’s research thrusts (algorithms and complexity, verification, logic and languages) and their application scenarios. Each lecture is given by one of the researchers involved in UnRAVeL.
This year's topic is "UnRAVeL - New Ideas!". In these lectures, UnRAVeL professors will discuss current research, highlight open problems, and offer a perspective on potential future directions.
All interested doctoral researchers and master's students are invited to attend the UnRAVeL lecture series 2024 and engage in discussions with researchers and doctoral students.
We are looking forward to seeing you at the lectures.
Kind regards,
Jan-Christoph for the organisation committee
Dear all,
this is a reminder for Joost-Pieter Katoen's talk "Facing Uncertainty in AI: From Verification to Synthesis", taking place today at 12:30 in B-IT room 5053.2. Please find the details below.
--- Abstract ---
Uncertainties occur in different forms: data may be noisy, mechanisms
may be inherently randomised, the visibility (of e.g. a robot) may not
be optimal, and the environment in which a system needs to operate may
behave in an unknown manner. The central question that we will address
is "Can we guarantee that AI systems are safe and dependable in the
presence of such uncertainty?" We advocate using model-based, formal
verification and synthesis with a particular focus on automation. We
will present techniques to verify uncertainty aspects modeled as
randomness and to use formal synthesis to complete partial designs.
Several example AI systems will illustrate the capabilities of these approaches.
----------------
Part of the programme of the research training group UnRAVeL is a series of lectures on the topics of UnRAVeL’s research thrusts (algorithms and complexity, verification, logic and languages) and their application scenarios. Each lecture is given by one of the researchers involved in UnRAVeL.
This year's topic is "UnRAVeL - New Ideas!". In these lectures, UnRAVeL professors will discuss current research, highlight open problems, and offer a perspective on potential future directions.
All interested doctoral researchers and master's students are invited to attend the UnRAVeL lecture series 2024 and engage in discussions with researchers and doctoral students.
We are looking forward to seeing you at the lectures.
Kind regards,
Jan-Christoph for the organisation committee
Dear all,
part of the programme of the research training group UnRAVeL is a series of lectures on the topics of UnRAVeL’s research thrusts (algorithms and complexity, verification, logic and languages) and their application scenarios. Each lecture is given by one of the researchers involved in UnRAVeL.
This year's topic is "UnRAVeL - New Ideas!". In these lectures, UnRAVeL professors will discuss current research, highlight open problems, and offer a perspective on potential future directions.
All interested doctoral researchers and master's students are invited to attend the UnRAVeL lecture series 2024 and engage in discussions with researchers and doctoral students.
All events take place on Thursdays, 12:30 to 14:00, in the Computer Science Center, Building E2, ground floor, B-IT room 5053.2. The schedule is as follows:
11.04 - Joost-Pieter Katoen: Facing Uncertainty in AI: From Verification to Synthesis
18.04 - Erika Ábrahám: Relating Stochastic Hybrid Petri Nets and Stochastic Hybrid Automata
25.04 - Nils Nießen: Open (Research) Problems in Railways. What to do?
02.05 - Jürgen Giesl: Termination and Complexity Analysis of (Probabilistic) Programs: Results and Future Work
16.05 - Martin Grohe: The Complexity of Constraint Satisfaction
06.06 - Sebastian Trimpe: Bayesian Optimization for High-Dimensional, Adaptive, and Safe Controller Learning
13.06 - Christina Büsing: Robust Optimization in Health Care
27.06 - Michael Schaub: How can algebraic topology help with data analysis?
04.07 - Gerhard Lakemeyer: Challenges in Cognitive Robotics
11.07 - Christopher Morris: Understanding the Generalization Abilities of Graph Neural Networks: Current Results and Future Directions
18.07 - Britta Peis: Future Research in Submodular Function Optimization
We are looking forward to seeing you at the first lecture this Thursday!
Kind regards,
Jan-Christoph for the organisation committee
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Monday, 8 April 2024, 13:30
Location: Seminarraum 3, Kopernikusstraße 6
Speaker: Benedikt Heinrichs M.Sc.
IT Center, RWTH Aachen University
Topic: Asynchronous Tracking and Description of Research Data Changes in Distributed Systems with Interoperable Metadata
Abstract:
With the digital revolution, the way research is approached has fundamentally
changed.
Suddenly, research processes created digital research data that needed to be
stored.
Initially, no standards for this existed, so practices diverged wildly.
Consequently, data was produced that was not findable without a management
system.
For this reason, initiatives entered the picture intending to standardize
these processes and define how research data should be managed.
One recommendation is the FAIR Guiding Principles, which state that
research data should be findable, accessible, interoperable, and reusable.
While these principles have set goals, no implementation guideline is
provided since the different research areas are too diverse.
Therefore, research data management (RDM) teams around the globe have
created numerous implementations.
Some of them are platforms like Coscine, which can manage research data and
try to adhere to parts of the FAIR principles.
However, such platforms face the issue that researchers want to store their
research data with an enterprise-ready and openly accessible storage
provider.
Therefore, research data often does not move through these platforms but
directly through the storage providers.
This circumstance contradicts the aim of following the FAIR principles
because the platforms cannot account for the research data movement and miss
critical provenance information.
The presented thesis aims to close that gap by providing a method to
calculate the missing provenance information after changes occur.
This so-called asynchronous data provenance is produced by comparing
representations of research data.
If the representations have changed, a new version or variant of the
research data has likely been created.
Representations can range from a generated hash to interoperable metadata
about the research data.
This interoperable metadata is created by running a pipeline that receives
research data and extracts valuable information about its content.
This information is annotated as interoperable metadata by following
existing application profiles and ontologies.
Interoperable metadata can be used to compute the similarity of research
data with a method called FSS Jaccard.
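The comparison steps described above can be sketched as follows; the hash-based fingerprint and the metadata term sets are illustrative stand-ins, and plain Jaccard similarity is used in place of the thesis's FSS Jaccard method, whose details are not reproduced here:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A cheap representation of a research-data object: a content hash."""
    return hashlib.sha256(data).hexdigest()

def has_changed(old: bytes, new: bytes) -> bool:
    """Asynchronous provenance check: differing fingerprints suggest that
    a new version or variant of the research data has been created."""
    return fingerprint(old) != fingerprint(new)

def jaccard(a: set, b: set) -> float:
    """Plain Jaccard similarity of two metadata term sets (a simplified
    stand-in for the FSS Jaccard method named in the abstract)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical metadata terms extracted from two versions of a dataset:
v1 = {"dcterms:title", "dcterms:creator", "dcterms:created"}
v2 = {"dcterms:title", "dcterms:creator", "dcterms:modified"}
similarity = jaccard(v1, v2)  # 2 shared terms out of 4 distinct -> 0.5
```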
The created methods are integrated into a standards-based RDM system (RDMS),
defined in this thesis, to show their applicability.
For this standards-based RDMS, Coscine is used as a use case.
This thesis thus presents a method that can provide additional
information about research data and close the presented gap for any
standards-based RDMS.
By using this method, RDM teams can come closer to supporting the
implementation of the FAIR principles and improving the processes for
researchers.
The lecturers of Computer Science cordially invite you.
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Tuesday, 6 March 2024, 13:00
Location: Room 025, Mies-van-der-Rohe Str. 15 (UMIC building)
Speaker: Jonathon Luiten, M.Sc.
Chair of Computer Science 13
Topic: Dynamic 3D Representations and Robust Evaluation for Visual Tracking
Abstract:
Visual tracking is a core task within computer vision, one that involves understanding the motion and persistence of the dynamic world when observed in video. Building performant tracking algorithms is critical for many applications such as robotics; self-driving vehicles; virtual and augmented reality; scene-analysis for sporting, retail and construction scenarios; and content creation and editing. While other areas of computer vision such as recognition and detection have recently reached outstanding performance due to deep learning and extremely large training datasets, tracking has remained an incredibly difficult task where such approaches have not been able to achieve similar success. We argue that tracking is inherently different from these tasks and that simply scaling up compute and data is not going to be enough. In this thesis we develop what we believe to be the missing piece holding tracking back from similar success: the use of dynamic 3D representations that can be used to model the underlying scene. Furthermore, we find that the second thing holding back the field of visual tracking was the lack of adequate evaluation metrics and benchmark settings. We address these limitations by introducing novel metrics and benchmarks, which are crucial for measuring the performance of algorithms and guiding the field toward making meaningful progress.
The first half of this thesis deals with approaches to lift representations for tracking to 3D, both at the level of whole objects (MOTSFusion) and at the level of infinitesimal 3D scene elements (Dynamic 3D Gaussians). Traditionally, tracking involves finding correspondences between static 2D representations in each timestep, such as pixels or bounding-boxes. Instead, we represent the world as a set of dynamic 3D representations that move around over time in order to consistently represent the same physical location in space as it moves.
We reformulate tracking from a correspondence estimation problem, to an analysis-by-synthesis problem of fitting an underlying dynamic 3D model, whose motion explains changes in image content across timesteps. By using 3D representations we can better model appearance changes due to the 3D motion of the scene and the motion of the camera through the world, while also making use of intuitive physics knowledge about how objects move through the 3D world. This enables us to both obtain better tracking results, while also resulting in consistent dynamic 3D representations that are directly useful for many downstream tasks.
The second half of this thesis deals with building robust metrics and benchmarks for evaluating the performance of visual tracking algorithms. For the task of Multi-Object Tracking (MOT), previous evaluation metrics have been sorely lacking, focusing only on particular aspects of tracking performance (e.g. detection or association) but not able to holistically measure improvements in tracking performance. Furthermore, tracking evaluation has been limited to settings where only a small number of fixed object classes were evaluated. We address both of these evaluation limitations by proposing the HOTA metrics for evaluating tracking performance in a fair and holistic way, and by introducing the task of Open-World Tracking, which extends tracking evaluation to an open-world setting where a potentially unlimited set of object classes needs to be tracked, even if they were not previously seen during training. Together, these mark a step-change in how tracking methods are evaluated and benchmarked, and allow the tracking community to make meaningful progress towards more performant and useful tracking algorithms.
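The combination at the heart of HOTA can be sketched at a single localization threshold; the published metric additionally derives its inputs from matched trajectories and averages over a range of thresholds, which this sketch omits:

```python
import math

def det_a(tp: int, fn: int, fp: int) -> float:
    """Detection accuracy: fraction of correct detections among all
    predicted and ground-truth detections (a Jaccard-style ratio)."""
    return tp / (tp + fn + fp)

def ass_a(match_counts) -> float:
    """Association accuracy: mean, over matched detections, of the
    Jaccard overlap of their predicted and ground-truth tracks.
    `match_counts` holds (TPA, FNA, FPA) triples, one per match."""
    scores = [tpa / (tpa + fna + fpa) for tpa, fna, fpa in match_counts]
    return sum(scores) / len(scores)

def hota_alpha(det: float, ass: float) -> float:
    """HOTA at one localization threshold: the geometric mean of
    detection and association accuracy, so neither aspect can dominate."""
    return math.sqrt(det * ass)
```

The geometric mean is the design choice that makes the metric "holistic": a tracker cannot score well by excelling at detection while ignoring association, or vice versa.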
Overall, by developing both dynamic 3D representations for tracking and a novel set of evaluation metrics and benchmarks, this thesis provides a number of crucial missing pieces that are needed to move towards truly useful and performant tracking algorithms, and thus toward the success of the multitude of applications for which tracking is a core component.
The lecturers of Computer Science cordially invite you.
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Tuesday, 5 March 2024, 10:00
Location: Room 025, Mies-van-der-Rohe Str. 15 (UMIC building)
Speaker: Ali Athar, M.Sc.
Chair of Computer Science 13
Topic: Segmenting and Tracking Objects in Video
Abstract:
Research to develop methods that can accurately localize and track objects
in video has been ongoing for decades. Approaches capable of accomplishing
this are highly sought after for a variety of applications including
autonomous robots, self-driving vehicles, sports analytics, video editing,
etc. Despite significant progress in recent times, the task is far from
solved, in particular for challenging scenarios involving occlusions,
motion blur, and camera ego-motion. In this thesis, we present a series of
works that advance the state of research in this domain in various ways, as
outlined below.
Our first work, STEm-Seg, is an end-to-end trainable method for instance
segmentation that models the input video as a single 3D space-time volume
and relies on clustering per-pixel embeddings to segment and track objects.
This differs from existing approaches, which largely follow the
tracking-by-detection paradigm. Our novel formulation for these embeddings
enables us to cluster the embeddings in an efficient and end-to-end learned
fashion. The second work, called HODOR, is aimed at mitigating the need for
densely annotated data for training video tracking methods. Specifically,
it tackles the task of Video Object Segmentation (VOS) in a weakly
supervised manner where it can be trained using static images or sparsely
annotated video. To this end, we adopt a novel approach that encodes
objects into concise descriptors. This is in contrast to existing
approaches that predominantly learn space-time correspondences, which makes
it challenging to train them in such a setting.
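The clustering idea behind STEm-Seg can be illustrated with a toy grouping of per-pixel embedding vectors; the actual method learns the embeddings and clusters them in an end-to-end fashion, which this greedy distance-based sketch does not reproduce:

```python
def cluster_embeddings(embeddings, radius=1.0):
    """Toy grouping of per-pixel embedding vectors: each embedding joins
    the first cluster whose founding center lies within `radius`,
    otherwise it starts a new cluster. Pixels of the same object are
    assumed to have been pushed to nearby embeddings during training."""
    centers, labels = [], []
    for e in embeddings:
        for i, c in enumerate(centers):
            dist = sum((a - b) ** 2 for a, b in zip(e, c)) ** 0.5
            if dist <= radius:
                labels.append(i)
                break
        else:
            centers.append(e)
            labels.append(len(centers) - 1)
    return labels

# Two nearby embeddings fall into one cluster, a distant one is separate:
labels = cluster_embeddings([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)])
```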
Whereas the two aforementioned works propose network architectures, our
third project proposes a dataset and benchmark called BURST that aims to
unify the current, fragmented landscape of datasets in video segmentation
research. BURST includes a benchmark suite that evaluates multiple tasks
related to object segmentation in video with shared data and consistent
evaluation metrics. The idea behind this is to facilitate knowledge
exchange between the research sub-communities tackling these tasks and also
to encourage the development of methods with multi-task capability.
Finally, our fourth work, TarViS, can be seen as a logical continuation of
the above in that it is a method that can tackle multiple video
segmentation tasks. To achieve this, we decouple the task definition from
the core network architecture and use a set of dynamic query inputs to
specify the task-specific segmentation targets. This formulation enables us
to train a single model jointly on a collection of datasets spanning
multiple tasks (Video Instance/Object/Panoptic Segmentation). During
inference, the model can switch between tasks by simply hot-swapping the
input queries accordingly.
The lecturers of Computer Science cordially invite you.
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Thursday, 22 February 2024, 14:00
Location: Room 9222, E3, Informatikzentrum
Zoom: https://rwth.zoom-x.de/j/67896121061?pwd=RlJTNUw5RFJYU0NwMVRWNEhvQ0EwZz09
Meeting ID: 678 9612 1061
Passcode: 493665
Speaker: Jan Rosendahl, M.Sc.; Chair of Computer Science 6
Topic: Attention-Based Machine Translation Using Monolingual Data
Abstract:
Neural networks present a major advance in modeling for statistical
machine translation systems. In this dissertation, we focus on two
central aspects of neural machine translation systems, namely the
training data and the attention layer that connects the encoder and
decoder. The parameters of a neural machine translation system are
determined by minimizing the cross-entropy loss on a corpus of bilingual
training data, i.e. a set of sentence pairs where one is the translation
of the other. Since such sentence-aligned bilingual data is a scarce
resource and availability depends on the language pair, we investigate
using monolingual data to improve the performance of the machine
translation system, via language model integration, monolingual
pre-training, and back-translation. Inspired by existing
work on alignment models, we also incorporate a first-order dependency
in the encoder-decoder attention layer. In contrast with previous
machine translation models, the transformer is a pure feed-forward model
without any recurrent layers. That means that no information about the
previous attention decision is input to the computation of the attention
layer. Modeling attention with first-order dependencies allows the
attention layer to access previous attention decisions, which is a
prerequisite for expressing, e.g., source coverage.
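The first-order idea can be illustrated with a toy attention step whose scores are biased by the previous attention decision; the interpolation weight `lam` and the exact form of the bias are illustrative assumptions, not the dissertation's actual parameterization:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def first_order_attention(scores, prev_attn, lam=0.5):
    """Attention with a first-order dependency: the content-based score
    for each source position j is shifted by how much attention its
    predecessor j-1 received at the previous target step, a soft bias
    toward monotone coverage of the source sentence."""
    biased = [
        s + lam * (prev_attn[j - 1] if j > 0 else 0.0)
        for j, s in enumerate(scores)
    ]
    return softmax(biased)

# With uniform content scores, the previous decision breaks the tie:
# mass that sat on position 0 biases the next step toward position 1.
out = first_order_attention([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

A plain transformer attention layer sees no such signal, since no information about the previous attention decision enters the computation; this sketch shows the minimal way such a dependency can be wired in.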
The lecturers of Computer Science cordially invite you.
--
Stephanie Jansen
Faculty of Mathematics, Computer Science and Natural Sciences
Chair of Computer Science 6
ML - Machine Learning and Reasoning
RWTH Aachen University
Theaterstraße 35-39
D-52062 Aachen
Tel: +49 241 80-21601
sek(a)ml.rwth-aachen.de
www.hltpr.rwth-aachen.de
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Tuesday, 30 January 2024, 14:00
Location: Room 9222, E3, Ahornstr. 55, and hybrid via Zoom (https://rwth.zoom-x.de/j/64937773189?pwd=eGttNUMzSElnQUVkc3FrYzBqK2F4UT09)
Speaker: Lubna Ali M.Sc. RWTH
Teaching and Research Area Computer Science 9 (Learning Technologies)
Topic: convOERter: A Technical Assistance Tool to Support Semi-Automatic Conversion of Images in Educational Materials as OER
Abstract:
Open Educational Resources (OER) are seen as an important element in the process of digitizing higher education teaching and as essential building blocks for openness in education. They can be defined as teaching, learning, and research materials that have been made openly available, shareable, and modifiable. OER include different types of resources, such as full courses, textbooks, videos, presentations, tests, and images, which are usually published under open Creative Commons licences. OER can play an important role in improving education by facilitating access to high-quality digital educational materials. Accordingly, higher education institutions increasingly participate in the so-called "open movement" in general and utilize OER in particular. Nevertheless, many challenges still face the deployment of OER in the educational context. One of the main challenges is the production of new OER materials and the conversion of already existing materials into OER, which can be made viable by qualifying educators through training courses and/or supporting them with specific tools.
There are many platforms and tools that support the creation of new OER content. However, to our knowledge, there are no tools that perform fully or semi-automatic conversion of already existing educational materials. This gap was the basis for the design and implementation of the OER conversion tool (convOERter). The tool supports the user by semi-automatically converting educational materials containing images into OER-compliant materials. Its main functionality is based on reading a file, extracting all images as well as all available metadata, and substituting the extracted images with OER elements in a semi-automated way. The retrieved OER images are referenced and licenced properly according to the known TASLL rule. Finally, the entire file is automatically licenced under Creative Commons, excluding specific elements such as logos from the licence. To evaluate the effectiveness of the tool in promoting the use of OER, a comprehensive user study was conducted with educators and OER enthusiasts at different universities. The study was carried out by offering a series of OER evaluation workshops to compare the conversion efficiency of the tool with manual conversion. The results show that using the conversion tool improves the conversion process in terms of speed, licence quality, and total efficiency. These results highlight that the tool can be a valuable addition to the community, especially for users less experienced with OER. As future work, it is intended to further develop the tool and improve its functionality. Additionally, a long-term study could be conducted to assess the impact of the tool in facilitating and enhancing the production of OER on a larger scale.
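The referencing step can be illustrated with a hypothetical attribution builder. The abstract names the TASLL rule; the expansion assumed here (Title, Author, Source, Licence, Link) and the output format are an illustrative reading, not convOERter's actual implementation:

```python
def tasll_attribution(title, author, source, licence, link):
    """Build an attribution line for a substituted OER image following a
    TASLL-style rule, assumed here to expand to Title, Author, Source,
    Licence, Link. Field names and format are illustrative."""
    return f'"{title}" by {author}, {source}, licensed under {licence} ({link})'

# Hypothetical metadata of a retrieved replacement image:
line = tasll_attribution(
    "Graph", "J. Doe", "Wikimedia Commons", "CC BY 4.0",
    "https://example.org/img",
)
```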
The lecturers of Computer Science cordially invite you.
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Monday, 5 February 2024, 16:00-17:00
Location: Room 5053.2 (large B-IT lecture hall), Informatikzentrum, Geb. E2, Ahornstraße 55
The talk will take place in hybrid mode:
https://rwth.zoom-x.de/j/67154623701?pwd=NnNIWWl0VnkzV0RiL3RyaWF3ZFlJQT09
Speaker: Tim Niemueller, Dipl.-Inform.
Teaching and Research Area Computer Science 5
Topic: Planning and Execution for Mobile Robots Using Distributed Persistent Memory
Abstract:
Robots are expected to help humans in ever more capacities, in household environments as well as in industry. This demands increasing autonomy and resilience in order to reduce the need for human intervention and support to a minimum. Robotic systems are composed of a wide range of software components from different areas such as perception, self-localization, navigation, decision making, and task execution. This easily leads to compartmentalized development, where the whole robot system becomes an afterthought when only best-in-class benchmarks are considered per component.
Three particular problems motivated this thesis. First, data in robot systems is most often volatile: it briefly exists, is transferred, processed, used to make some decision, and eventually discarded, all within a short time window. If data is recorded at all, it is often done in a component-specific way and is not easily accessible. In this thesis, we tackle this problem with a document-oriented database that hooks into a robot system's middleware to collect as much data as possible. By using MongoDB as the data store and adding the ability to map middleware messages into document structures, we gain query capabilities. We have used the database in several applications such as perception, performance and failure analysis, and action macro extraction.
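The message-to-document mapping can be sketched as follows; the field names and the example query are illustrative assumptions, not the thesis's actual schema:

```python
import time

def message_to_document(topic: str, msg: dict) -> dict:
    """Map a middleware message into a document structure for a
    MongoDB-style store, keeping the original fields queryable and
    stamping each record with its topic and arrival time."""
    return {"topic": topic, "timestamp": time.time(), **msg}

# Once messages are stored as documents, questions about past robot
# state become ordinary document queries, e.g. in MongoDB syntax,
# "all pose messages where x exceeded 1.0":
example_query = {"topic": "pose", "x": {"$gt": 1.0}}

doc = message_to_document("pose", {"x": 2.0, "y": 0.5})
```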
Second, the data exchange and coordination of multi-robot systems usually require custom approaches. However, with the robot database already in place, its distributed nature, augmented with triggers and computable elements, allows it to fill this role as well, providing a unified robot memory.
Third, robot task execution systems typically decouple domain modeling from execution flow specification, that is, the specific details about an application the robot needs to know from the way it goes about choosing goals (which are typically few and human-defined). We have developed the CLIPS-based Executive, which builds on goal reasoning to explicitly model the flow according to goals: there may be many of them, and they can be considered, selectively chosen for expansion, e.g., by invoking task planning, and then executed. Furthermore, by distributing the robot memory, it can easily share data among a fleet of robots and aid in their coordination.
Demonstrations of these systems in two mobile robot domains, domestic service robotics with a single robot and industrial factory logistics with groups of robots, show the applicability and versatility of the approaches developed.
The lecturers of Computer Science cordially invite you.
_______________________________
Leany Maaßen
RWTH Aachen University
Lehrstuhl Informatik 5, LuFG Informatik 5
Prof. Dr. Stefan Decker, Prof. Dr. Matthias Jarke,
Prof. Gerhard Lakemeyer Ph.D., JunProf. Dr. Sandra Geisler
Ahornstrasse 55
D-52074 Aachen
Tel: 0241-80-21509
Fax: 0241-80-22321
E-Mail: maassen(a)dbis.rwth-aachen.de
+**********************************************************************
*
*                   Invitation: Informatik-Oberseminar
*
+**********************************************************************
Time: Friday, 22 December 2023, 11:00
Location: Building E3, Seminarraum 118, Ahornstr. 55
Speaker: Moritz Ibing M.Sc.
Chair of Computer Science 8
Topic: Localized Control over the Latent Space of Neural Networks
Abstract:
Neural networks (NNs) are prevalent today when it comes to analyzing (classifying,
segmenting, detecting, etc.) or generating data in all kinds of modalities (text,
images, 3D shapes, etc.). They are so useful in these areas because they have
great representational power while being easy to optimize and generalizing well to
unseen data. However, their complexity makes them hard to interpret and modify.
Neural networks are usually used to compute a mapping between the data space
and a so-called latent space. Often we are interested in local properties of such a
mapping. For example, we might want to slightly change the embedding of a data
point to achieve a different classification. Such local modifications however are
difficult, as NNs usually have globally entangled properties. In this work we
propose ideas for dealing with this problem.
Local control is especially important for shape representations. It has been
shown that NNs are well suited to represent these, e.g., as parametric or implicit
functions. However, when a global function is used, local supervision is hard to
model. We therefore impose additional structure on the latent space of functional
representations, making them easier to work with and more expressive.
Such a structured representation makes downstream tasks easier, as we are more
versatile regarding the shapes we can represent, we can make use of its regularity
for the network design, and it allows a compressed encoding that can help to reduce
memory consumption. Our focus will be on general shape generation, but we will
also present more specific applications like shape completion or super-resolution
among others. Our approaches set the state of the art among generative models,
both on previously used metrics and on a newly introduced measure we adapt for
this purpose.
The lecturers of Computer Science cordially invite you.