+**********************************************************************
*
*
* Einladung
*
*
*
* Informatik-Oberseminar
*
*
*
+**********************************************************************
Zeit: Freitag, 12. Juli 2019, 10.00 Uhr
Ort: Informatikzentrum, E3, Raum 9222
Referent: Dipl.-Inform. Malte Nuhn
Thema: Unsupervised Training with Applications in Natural Language Processing
Abstract:
The state-of-the-art algorithms for various natural language processing
tasks require large amounts of labeled training data. At the same time,
obtaining labeled data of high quality is often the most costly step in
setting up natural language processing systems. In contrast, unlabeled
data is much cheaper to obtain and available in larger amounts.
Currently, only a few training algorithms make use of unlabeled data,
and in practice, training with only unlabeled data is not performed at
all. In this thesis, we study how unlabeled data can be used to train a
variety of models used in natural language processing. In particular, we
study models applicable to solving substitution ciphers, spelling
correction, and machine translation. This thesis lays the groundwork for
unsupervised training by presenting and analyzing the corresponding
models and unsupervised training problems in a consistent manner.

We show that the unsupervised training problem that occurs when breaking
one-to-one substitution ciphers is equivalent to the quadratic
assignment problem (QAP) if a bigram language model is incorporated, and
is therefore NP-hard. Based on this analysis, we present an effective
algorithm for unsupervised training for deterministic substitutions. In
the case of English one-to-one substitution ciphers, we show that our
novel algorithm achieves results close to human performance, as
presented in [Shannon 49]. With this algorithm, we also present, to the
best of our knowledge, the first automatic decipherment of the second
part of the Beale ciphers.

Further, for the task of spelling correction, we work out the details of
the EM algorithm [Dempster & Laird+ 77] and experimentally show that the
error rates achieved using purely unsupervised training reach those of
supervised training. For handling large vocabularies, we introduce a
novel model initialization as well as multiple training procedures that
significantly speed up training without noticeably hurting the
performance of the resulting models.

By incorporating an alignment model, we further extend this model such
that it can be applied to the task of machine translation. We show that
the true lexical and alignment model parameters can be learned without
any labeled data: we experimentally show that the corresponding
likelihood function attains its maximum for the true model parameters if
a sufficient amount of unlabeled data is available. For the problem of
spelling correction with symbol substitutions and local swaps, we also
show experimentally that the performance achieved with purely
unsupervised EM training reaches that of supervised training. Finally,
using the methods developed in this thesis, we present results on an
unsupervised training task for machine translation with a vocabulary ten
times larger than that of tasks investigated in previous work.
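The combinatorial nature of the decipherment problem can be illustrated with a toy sketch (an illustration only, not the algorithm from the thesis; the three-letter alphabet and sample text below are invented). For a one-to-one substitution cipher, decipherment amounts to searching over permutations of the alphabet for the mapping whose decoded text scores highest under a bigram language model; exhausting the permutations is feasible only for tiny alphabets, which is exactly the hardness obstacle the QAP equivalence formalizes.

```python
import itertools
import math
from collections import Counter

def bigram_logprobs(text, alphabet):
    """Add-one-smoothed bigram log-probabilities estimated from sample text."""
    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    v = len(alphabet)
    return {(a, b): math.log((pairs[(a, b)] + 1) / (firsts[a] + v))
            for a in alphabet for b in alphabet}

def score(cipher, mapping, lp):
    """Bigram log-likelihood of the decipherment induced by `mapping`."""
    plain = [mapping[c] for c in cipher]
    return sum(lp[(x, y)] for x, y in zip(plain, plain[1:]))

def decipher(cipher, sample, alphabet):
    """Exhaustive maximum-likelihood search over all 1:1 substitutions;
    feasible only for tiny alphabets (the general problem is a QAP)."""
    lp = bigram_logprobs(sample, alphabet)
    best = max(itertools.permutations(alphabet),
               key=lambda p: score(cipher, dict(zip(alphabet, p)), lp))
    return dict(zip(alphabet, best))
```

With alphabet {a, b, n}, sample text "bananabanananabanana", and ciphertext "anbnbn", the search recovers the mapping that decodes the ciphertext to "banana".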
Es laden ein: die Dozentinnen und Dozenten der Informatik
_______________________________________________
--
Stephanie Jansen
Faculty of Mathematics, Computer Science and Natural Sciences
HLTPR - Human Language Technology and Pattern Recognition
RWTH Aachen University
Ahornstraße 55
D-52074 Aachen
Tel. Frau Jansen: +49 241 80-216 06
Tel. Frau Andersen: +49 241 80-216 01
Fax: +49 241 80-22219
sek(a)i6.informatik.rwth-aachen.de
www.hltpr.rwth-aachen.de
+**********************************************************************
*
*
* Einladung
*
*
*
* Informatik-Oberseminar
*
*
*
+**********************************************************************
Zeit: Mittwoch, 23. März 2022, 15:00 Uhr
Ort: Online (Zoom: https://umu.zoom.us/my/pauldj)
Referent:
Henrik Barthels, M.Sc.
High-Performance and Automatic Computing Group, AICES.
Thema:
Linnea: A Compiler for Mapping Linear Algebra Problems onto High-Performance Kernel Libraries
Abstract:
The translation of linear algebra computations into efficient sequences
of library calls is a non-trivial task that requires expertise in both
linear algebra and high-performance computing. Almost all high-level
languages and libraries for matrix computations (e.g., Matlab, Eigen)
internally use optimized kernels such as those provided by BLAS and
LAPACK; however, their translation algorithms are often too simplistic
and thus lead to a suboptimal use of said kernels, resulting in
significant performance losses. In order to combine the productivity
offered by high-level languages with the performance of low-level
kernels, we are developing Linnea, a code generator for linear algebra
problems. As input, Linnea takes a high-level description of a linear
algebra problem; as output, it returns an efficient sequence of calls to
high-performance kernels. Linnea uses a custom best-first search
algorithm to find a first solution in less than a second, and
increasingly better solutions when given more time. On 125 test
problems, the code generated by Linnea almost always outperforms Matlab,
Julia, Eigen, and Armadillo, with speedups up to and exceeding 10x.
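The search idea can be illustrated with a deliberately tiny sketch (the kernel names, costs, and matching rules below are invented for illustration and are not Linnea's actual rule set or cost model): expressions are trees, each applicable kernel rewrites a subtree into a temporary, and a priority queue always expands the cheapest partial solution first.

```python
import heapq

# Expressions are nested tuples, e.g. ('+', ('*', 'A', 'B'), 'C') for A*B + C.
# A toy three-kernel "library" with invented costs.
COST = {'gemm': 1.0, 'mul': 1.0, 'add': 0.8}

def atomic_args(e):
    return all(not isinstance(a, tuple) for a in e[1:])

def matches(kernel, e):
    if not isinstance(e, tuple):
        return False
    if kernel == 'gemm':   # fused A*B + C in a single call
        return (e[0] == '+' and isinstance(e[1], tuple) and e[1][0] == '*'
                and atomic_args(e[1]) and not isinstance(e[2], tuple))
    if kernel == 'mul':
        return e[0] == '*' and atomic_args(e)
    if kernel == 'add':
        return e[0] == '+' and atomic_args(e)
    return False

def sites(expr, path=()):
    """All subtrees, each with the path leading to it."""
    if isinstance(expr, tuple):
        yield path, expr
        for i, sub in enumerate(expr[1:], start=1):
            yield from sites(sub, path + (i,))

def replace(expr, path, value):
    if not path:
        return value
    i = path[0]
    return expr[:i] + (replace(expr[i], path[1:], value),) + expr[i + 1:]

def generate(expr):
    """Best-first search for the cheapest kernel-call sequence that
    reduces `expr` to a single operand."""
    frontier = [(0.0, 0, expr, [])]   # (cost, tiebreak, expression, calls)
    tie, seen = 1, set()
    while frontier:
        cost, _, e, calls = heapq.heappop(frontier)
        if not isinstance(e, tuple):
            return calls, cost        # fully computed
        if e in seen:
            continue
        seen.add(e)
        for path, sub in sites(e):
            for kernel in COST:
                if matches(kernel, sub):
                    heapq.heappush(frontier,
                                   (cost + COST[kernel], tie,
                                    replace(e, path, 'T'), calls + [kernel]))
                    tie += 1
    return None
```

For ('+', ('*', 'A', 'B'), 'C'), the search prefers the single fused gemm call (cost 1.0) over mul followed by add (cost 1.8), mirroring how mapping to the right kernel avoids suboptimal call sequences.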
Es laden ein: die Dozentinnen und Dozenten der Informatik
-----Original Message-----
On Tuesday, 18 May 2021, 06:06 wrote:
> Dear all, this is a reminder for the next UnRAVeL survey lecture, which
> takes place this Thursday, May 20, at 4:30pm. Martin Grohe will talk about
> "The Logic of Graph Neural Networks". Following the talk, UnRAVeL PhD
> student Tim Seppelt will give an informal summary of their doctoral
> studies within UnRAVeL.
>
> Abstract:
> Graph neural networks (GNNs) are a deep learning architecture for
> graph-structured data that has developed into a method of choice for many
> graph learning problems in recent years. It is therefore important that we
> understand their power. One aspect of this is their expressiveness: which
> functions on graphs can be expressed by a GNN model? Surprisingly, this
> question has a precise answer in terms of logic and a combinatorial
> algorithm known as the Weisfeiler-Leman algorithm. In my lecture, I will
> introduce the basic GNN architecture and also some extensions, and I will
> explain the logical characterisations of their expressiveness.
>
> Further information can be found on
> https://www.unravel.rwth-aachen.de/go/id/mxjrr?lidx=1#aaaaaaaaaamxjvb
> and below. The event takes place on Zoom:
> https://rwth.zoom.us/j/96043715437?pwd=U0dRczkyQjRCY21abW13TDNmUHlhUT09
> Meeting ID: 960 4371 5437
> Passcode: 039217
>
> Since the event is also open to master's students, who may not receive
> this email, we would appreciate it if you could pass this invitation on.
> We are looking forward to seeing many of you at the survey lecture.
>
> Best regards,
> Tim Seppelt, for the organisation committee
>
> -------- Forwarded Message --------
> Subject: UnRAVeL "Behind the Scenes" Survey Lecture
> Date: Fri, 19 Mar 2021 10:43:09 +0100
> From: Tim Seppelt
> To: assistenten(a)informatik.rwth-aachen.de, vortraege(a)informatik.rwth-aachen.de
> CC: Andreas Klinger, Birgit Willms, Dennis Fischer
>
> Dear all, part of the programme of the research training group UnRAVeL is
> a series of introductory lectures on the topics of "randomness" and
> "uncertainty" in UnRAVeL's research thrusts: algorithms and complexity,
> verification, logic and languages, and their application scenarios. Each
> lecture is delivered by one of the researchers involved in UnRAVeL. The
> main aim is to provide doctoral researchers as well as master students
> with a broad overview of the subjects of UnRAVeL.
>
> This year, 12 UnRAVeL professors will answer the following questions,
> based on one of their recent scientific results:
> * How did you get to this result?
> * How did you come up with certain key ideas?
> * How did you cope with obstacles on the way? Which ideas you had did not
>   work out?
>
> Following these talks, PhD students will give an informal summary of their
> doctoral studies within UnRAVeL. All interested doctoral researchers and
> master students are invited to attend the UnRAVeL lecture series 2021 and
> engage in discussions with researchers and doctoral students. Detailed
> information can be found on
> https://www.unravel.rwth-aachen.de/cms/UnRAVeL/Studium/~pzix/Ringvorlesung-…
>
> All events take place on Thursdays from 16:30 to 18:00 on Zoom:
> https://rwth.zoom.us/j/96043715437?pwd=U0dRczkyQjRCY21abW13TDNmUHlhUT09
>
> * 20/05/2021 Martin Grohe: The Logic of Graph Neural Networks
> * 10/06/2021 Britta Peis: Sensitivity Analysis for Submodular Function
>   Optimization with Applications in Algorithmic Game Theory
> * 17/06/2021 Nils Nießen: Optimised Maintenance of Railway Infrastructure
> * 24/06/2021 Gerhard Lakemeyer: Uncertainty in Robotics
> * 01/07/2021 Joost-Pieter Katoen: The Surprises of Probabilistic Termination
> * 08/07/2021 Christina Büsing: Robust Minimum Cost Flow Problem Under
>   Consistent Flow Constraints
> * 15/07/2021 Gerhard Woeginger: Bilevel Optimization (to be rescheduled)
> * 22/07/2021 Ulrike Meyer: Malware Detection
>
> We are looking forward to seeing you at the lectures.
>
> Best regards,
> Tim Seppelt, for the organisation committee
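The Weisfeiler-Leman algorithm mentioned in the abstract above can be sketched as iterated colour refinement. The snippet below is a minimal illustration of the 1-dimensional variant, not code from the lecture:

```python
from collections import Counter

def wl_colours(adj):
    """1-dimensional Weisfeiler-Leman (colour refinement) on a graph given
    as an adjacency list {node: [neighbours]}. Iterates until the colour
    partition is stable and returns the colour histogram. Graphs with
    different histograms are certainly non-isomorphic; the converse does
    not hold."""
    colour = {v: 0 for v in adj}                    # uniform start
    for _ in range(len(adj)):
        # new colour = own colour plus the multiset of neighbour colours
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        refined = {v: palette[sig[v]] for v in adj}
        if refined == colour:                       # stable partition
            break
        colour = refined
    return Counter(colour.values())
```

For example, colour refinement produces identical histograms for the 6-cycle and for two disjoint triangles, one of the classic examples of the limits of 1-WL expressiveness, and hence of the basic GNN architecture.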
+**********************************************************************
*
*
* Einladung
*
*
*
* Informatik-Oberseminar
*
*
*
+**********************************************************************
Zeit: Dienstag, 15.02.2022, 13:00-14:00 Uhr
Zoom:
https://rwth.zoom.us/j/95759728127?pwd=RFhzRTh1STJYTXZyanVIdWYweVkwZz09
Meeting-ID: 957 5972 8127
Kenncode: 112136
Referent: Herr Vinoth Sermuga Pandian, M.Sc.
Lehrstuhl Informatik 5
Thema: BlackBox Toolkit: Intelligent Assistance to UI Design
Abstract:
This dissertation conducts systematic research using a human-centred
approach to provide Artificial Intelligence (AI) assistance to User
Interface (UI) designers before, during, and after the traditional
low-fidelity (LoFi) prototyping process. It aims to provide coherent AI
assistance throughout the repetitive and arduous LoFi prototyping task
without sacrificing the autonomy of UI designers. To this end, we
contribute the BlackBox Toolkit: four large-scale, diverse, open-access
benchmark datasets and three AI tools that assist UI designers
throughout the LoFi prototyping process. The quantitative and
qualitative evaluation of the AI tools shows that UI designers perceive
utilising AI for UI design as a novel and helpful approach and express
their willingness to adopt it. An After-Scenario Questionnaire study
measuring designer satisfaction shows an above-average satisfaction
level for all three AI assistance tools. This research aims to
understand the impact of AI tools in the UI designer workflow and to
assess designer satisfaction when using these tools. Further, it sets a
baseline for future research on UI wireframe generation, refinement, and
transformation.
Es laden ein: die Dozentinnen und Dozenten der Informatik
_______________________________
Leany Maaßen
RWTH Aachen University
Lehrstuhl Informatik 5, LuFG Informatik 5
Prof. Dr. Stefan Decker, Prof. Dr. Matthias Jarke,
Prof. Gerhard Lakemeyer Ph.D.
Ahornstrasse 55
D-52074 Aachen
Tel: 0241-80-21509
Fax: 0241-80-22321
E-Mail: maassen(a)dbis.rwth-aachen.de