**********************************************************************
*
*
*                          Invitation
*
*
*
*                     Informatik-Oberseminar
*
*
*
**********************************************************************
 
Time:        Tuesday, 28.06.2022, 10:00-11:00
 
The public talk will take place in hybrid form:
 
Room:     Room 5053.1 (small B-IT room)/Informatikzentrum, Ahornstraße 55

Zoom:    https://zoom.us/j/94726175294?pwd=QXJVeGVvZ09wV000OU53QkdrU0RRdz09

  

Speaker:     Mr. Md Rezaul Karim, Master of Engineering
                      Lehrstuhl Informatik 5
 

Topic:        Interpreting Black-Box Machine Learning Models with Decision Rules and Knowledge Graph Reasoning

 
Abstract:

 

Machine learning (ML) algorithms are increasingly used to solve complex problems. However, due to highly non-linear and higher-order interactions between features, complex ML models become black boxes: it is not known how individual predictions are made. This may be unacceptable in many situations, e.g., in clinical settings where AI may significantly impact human lives. With the EU GDPR, explainability has become not only a desirable property of AI but also a legal requirement. An interpretable ML model can outline how input instances are mapped to certain outputs by identifying statistically significant features. The literature points out that complex ML models tend to be less interpretable, indicating a trade-off between accuracy and interpretability. This thesis aims to improve the interpretability and explainability of black-box ML models without sacrificing significant predictive accuracy.

As a starting point, representation learning is performed on multimodal data with a black-box multimodal neural network, and the learned representation is used for the classification task. To improve the interpretability of the learned black-box model, different interpretable ML methods such as probing, perturbing, and model surrogation are applied: an interpretable surrogate model is trained to approximate the behavior of the black-box model, and the surrogate is then used to generate explanations in the form of decision rules and counterfactuals. To add symbolic reasoning capability to the black-box model, a domain-specific knowledge graph (KG) is constructed by integrating knowledge and facts from the scientific literature. A semantic reasoner then validates the association of significant features with different classes, based on the relations learned from the KG. Evidence-based decision rules are generated by combining this reasoning with the predictions of the black-box model.

The quantitative evaluation shows that the proposed approach achieves an average accuracy of 96.25% on the test dataset. It can also provide human-interpretable explanations of its decisions in the form of counterfactual rules and evidence-based decision rules. The quality of the explanations is evaluated in terms of comprehensiveness and sufficiency.
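For readers unfamiliar with model surrogation, the step described above can be illustrated with a minimal, hypothetical sketch (assuming Python and scikit-learn with synthetic data; the thesis itself uses a multimodal neural network and a domain-specific KG, neither of which is reproduced here). A black-box classifier is trained first, and an interpretable decision tree is then fitted to the black box's predictions, so that its paths can be read off as decision rules:

    # Minimal sketch of global model surrogation (illustrative only,
    # not the thesis code): a decision tree approximates a black-box MLP.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for the multimodal features used in the talk.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # 1) Train the black-box model.
    black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                              random_state=42).fit(X_train, y_train)

    # 2) Fit an interpretable surrogate on the black box's *predictions*,
    #    not on the ground-truth labels.
    surrogate = DecisionTreeClassifier(max_depth=4, random_state=42)
    surrogate.fit(X_train, black_box.predict(X_train))

    # 3) Fidelity: how often the surrogate agrees with the black box.
    fidelity = accuracy_score(black_box.predict(X_test),
                              surrogate.predict(X_test))
    print(f"Surrogate fidelity to black box: {fidelity:.2%}")

    # 4) Each root-to-leaf path of the tree is a human-readable decision rule.
    print(export_text(surrogate, feature_names=[f"f{i}" for i in range(10)]))

Note that the surrogate is judged by its fidelity to the black box rather than by its accuracy on the true labels; the counterfactual and KG-based reasoning components described in the abstract build on top of such extracted rules.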

 
You are cordially invited by the lecturers of Computer Science.
 

--

Romina Reddig

 

_______________________________

Romina Reddig

RWTH Aachen University

Lehrstuhl Informatik 5, LuFG Informatik 5

Prof. Dr. Stefan Decker, Prof. Dr. Matthias Jarke,

Prof. Gerhard Lakemeyer Ph.D.

Ahornstrasse 55

D-52074 Aachen

 

Tel: 0241-80-21501

Fax: 0241-80-22321

E-Mail: reddig@dbis.rwth-aachen.de