André F. T. Martins


Title: Towards Explainable and Uncertainty-Aware NLP

Abstract:

Natural language processing systems are becoming increasingly accurate and powerful. However, to take full advantage of these advances, humans need new capabilities for understanding model predictions and for knowing when to question or bypass them. In this talk, I will present recent work from our group in two directions.

In the first part, I will describe how sparse modeling techniques can be extended and adapted to facilitate sparse communication in neural models. The building block is a family of sparse transformations called alpha-entmax, a drop-in replacement for softmax which contains sparsemax as a particular case. Entmax transformations are differentiable and, unlike softmax, can return sparse probability distributions, which is useful for explainability. Structured variants of these sparse transformations, SparseMAP and LP-SparseMAP, can handle constrained factor graphs as differentiable layers, and they have been applied successfully to obtain deterministic and structured rationalizers with favorable properties relative to previous approaches in terms of predictive power, explanation quality, and model variability.
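
To make the contrast with softmax concrete, below is a minimal NumPy sketch of sparsemax (the alpha=2 member of the entmax family), following the sorting-based algorithm of Martins & Astudillo (2016). It is an illustration only; the authors' entmax package provides differentiable PyTorch implementations, which this sketch does not reproduce.

    import numpy as np

    def sparsemax(z):
        # Euclidean projection of the score vector z onto the probability
        # simplex; unlike softmax, the result can contain exact zeros.
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]            # scores in descending order
        cumsum = np.cumsum(z_sorted)
        k = np.arange(1, len(z) + 1)
        support = 1 + k * z_sorted > cumsum    # prefix of supported entries
        k_z = k[support][-1]                   # size of the support set
        tau = (cumsum[k_z - 1] - 1) / k_z      # threshold to subtract
        return np.maximum(z - tau, 0.0)

    z = np.array([2.0, 1.5, 0.1, -1.0])
    print(sparsemax(z))                  # [0.75 0.25 0.   0.  ] -> sparse
    print(np.exp(z) / np.exp(z).sum())   # softmax: all entries positive

The exact zeros are what make the resulting attention or communication patterns directly inspectable: probability mass is concentrated on a small support set rather than smeared over every candidate.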

In the second part, I will present an uncertainty-aware approach to machine translation evaluation. Recent neural metrics for translation quality such as COMET or BLEURT produce point estimates, which provide limited information at the segment level and can be unreliable due to noisy, biased, and scarce human judgements. We combine the COMET framework with two uncertainty estimation methods, Monte Carlo dropout and deep ensembles, to obtain quality scores along with confidence intervals. We experiment with varying numbers of references and further discuss the usefulness of uncertainty-aware quality estimation (without references) to flag potentially critical translation mistakes.
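
As an illustration of the Monte Carlo dropout half of this recipe, here is a minimal PyTorch sketch. The toy regressor stands in for a COMET-style quality estimation model; none of the names below are the actual COMET API.

    import torch
    import torch.nn as nn

    # Toy stand-in for a sentence-level quality regressor with dropout.
    toy_model = nn.Sequential(
        nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.1), nn.Linear(32, 1))

    def mc_dropout_interval(model, x, n_samples=100):
        # Keep dropout active at inference time and aggregate several
        # stochastic forward passes into a mean score and its spread.
        model.train()  # enables dropout (no gradients are computed below)
        with torch.no_grad():
            samples = torch.stack(
                [model(x).squeeze(-1) for _ in range(n_samples)])
        mean, std = samples.mean(dim=0), samples.std(dim=0)
        # ~95% confidence interval under a Gaussian assumption
        return mean, mean - 1.96 * std, mean + 1.96 * std

    x = torch.randn(4, 8)  # four dummy "segments"
    print(mc_dropout_interval(toy_model, x))

Deep ensembles follow the same pattern, except that the samples come from independently trained models rather than from stochastic forward passes through a single one.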

This is joint work with Taya Glushkova, Nuno Guerreiro, Vlad Niculae, Ben Peters, Marcos Treviso, and Chryssa Zerva in the scope of the DeepSPIN and MAIA projects.

Bio:

André Martins (PhD 2012, Carnegie Mellon University and University of Lisbon) is an Associate Professor at Instituto Superior Técnico, University of Lisbon, a researcher at Instituto de Telecomunicações, and the VP of AI Research at Unbabel. His research, funded by an ERC Starting Grant (DeepSPIN) and other grants (the P2020 project Unbabel4EU and the CMU-Portugal project MAIA), includes machine translation, quality estimation, and structure and interpretability in deep learning systems for NLP. His work has received best paper awards at ACL 2009 (long paper) and ACL 2019 (system demonstration paper). He co-founded and co-organizes the Lisbon Machine Learning School (LxMLS), and he is a Fellow of the ELLIS society.

More information at his web page


Barbara Plank


Title: Is Human Label Variation Really So Bad for AI?

Abstract:

Human variation in labeling is typically considered noise. Annotation projects in computer vision and natural language processing usually aim to minimize human label variation in order to maximize data quality and, in turn, machine learning metrics. But is human variation in labeling really noise, or can we turn it into signal for machine learning? In this talk, I will first illustrate the problem and then discuss approaches to tackle this fundamental issue.
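
To make one such approach concrete: instead of collapsing annotations into a single majority-vote label, a model can be trained against the empirical distribution of annotator labels. The PyTorch sketch below illustrates this soft-label loss; it is one possible instantiation, not necessarily the method presented in the talk, and all names are placeholders.

    import torch
    import torch.nn.functional as F

    def soft_label_loss(logits, label_counts):
        # Cross-entropy against the annotator label distribution instead
        # of a single one-hot "gold" label.
        targets = label_counts / label_counts.sum(dim=-1, keepdim=True)
        log_probs = F.log_softmax(logits, dim=-1)
        return -(targets * log_probs).sum(dim=-1).mean()

    # e.g. 3 annotators chose class 0, 1 chose class 1, none chose class 2
    counts = torch.tensor([[3.0, 1.0, 0.0]])
    logits = torch.tensor([[2.0, 0.5, -1.0]])
    print(soft_label_loss(logits, counts))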

Bio:

Barbara Plank is Chair (Full Professor) of AI and Computational Linguistics at LMU Munich, where she leads a research lab in Natural Language Processing (NLP) at the Center for Information and Language Processing (CIS). She is also a part-time professor at the Computer Science Department of the IT University of Copenhagen, where she co-leads the NLP North research unit. She holds a BSc and an MSc in Computer Science and received her PhD in Computational Linguistics in 2011. Originally from South Tyrol, Italy, she has worked and lived in the Netherlands, Italy, Denmark, and Germany. Prof. Plank is interested in robust language technology, in particular cross-domain and cross-language learning, learning under annotation bias, and, more generally, semi-supervised and weakly supervised machine learning for a broad range of NLP applications, including syntactic parsing, author profiling, opinion mining, and information and relation extraction. She has received several prestigious grants, such as an Amazon Research Award and a Sapere Aude grant from the Independent Research Fund Denmark. She is also an ELLIS NLP Scholar and received an outstanding paper award at the EACL 2021 demo track.

More information at her web page