André F. T. Martins

Talk title: Towards Explainable and Uncertainty-Aware NLP

Abstract:

Natural language processing systems are becoming increasingly accurate and powerful. However, to take full advantage of these advances, humans need new capabilities to understand model predictions and to know when to question or bypass them. In this talk, I will present recent work from our group in two directions.

In the first part, I will describe how sparse modeling techniques can be extended and adapted to facilitate sparse communication in neural models. The building block is a family of sparse transformations called alpha-entmax, a drop-in replacement for softmax which contains sparsemax as a particular case. Entmax transformations are differentiable and, unlike softmax, they can return sparse probability distributions, which is useful for explainability. Structured variants of these sparse transformations, SparseMAP and LP-SparseMAP, can handle constrained factor graphs as differentiable layers, and they have been applied successfully to obtain deterministic, structured rationalizers with favorable properties relative to previous approaches in terms of predictive power, quality of the explanations, and model variability.
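To make the contrast with softmax concrete, the following minimal NumPy sketch implements sparsemax (the alpha = 2 member of the entmax family, with alpha = 1 recovering softmax) as the Euclidean projection of the scores onto the probability simplex; the function names and example scores are illustrative and are not the DeepSPIN implementation.

import numpy as np

def sparsemax(z):
    # Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the score
    # vector z onto the probability simplex. Unlike softmax, the output can
    # contain exact zeros, which is what makes the resulting attention sparse.
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = k * z_sorted > cumsum - 1      # coordinates that keep nonzero mass
    k_z = k[support][-1]                     # size of the support
    tau = (cumsum[k_z - 1] - 1) / k_z        # threshold subtracted from every score
    return np.maximum(z - tau, 0.0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = [1.2, 0.9, -0.4, 0.1]
print(softmax(scores))    # all four entries strictly positive
print(sparsemax(scores))  # [0.65, 0.35, 0.0, 0.0]: low-scoring entries are exactly zero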

In the second part, I will present an uncertainty-aware approach to machine translation evaluation. Recent neural-based metrics for translation quality such as COMET or BLEURT resort to point estimates, which provide limited information at segment level and can be unreliable due to noisy, biased, and scarce human judgements. We combine the COMET framework with two uncertainty estimation methods, Monte Carlo dropout and deep ensembles, to obtain quality scores along with confidence intervals. We experiment with varying numbers of references and further discuss the usefulness of uncertainty-aware quality estimation (without references) to flag possibly critical translation mistakes.
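As a rough illustration of the second part, the sketch below shows how Monte Carlo dropout can turn a point-estimate quality metric into a score with a confidence interval. Here score_segment is a hypothetical stand-in for one stochastic forward pass of a COMET-like regressor with dropout kept active at inference time; it is not the actual COMET API.

import numpy as np

def mc_dropout_interval(score_segment, segment, n_samples=100, alpha=0.05):
    # Hypothetical sketch: score_segment is assumed to return a single quality
    # score for `segment` from one dropout-perturbed forward pass. Replacing the
    # repeated passes with scores from independently trained models would give
    # the deep-ensemble variant instead.
    samples = np.array([score_segment(segment) for _ in range(n_samples)])
    mean = samples.mean()
    lo, hi = np.quantile(samples, [alpha / 2, 1 - alpha / 2])
    return mean, (lo, hi)   # point estimate plus a (1 - alpha) confidence interval

A wide interval, or one that crosses a critical-error threshold, can be used to flag segments whose automatic score should not be trusted on its own.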

This is joint work with Taya Glushkova, Nuno Guerreiro, Vlad Niculae, Ben Peters, Marcos Treviso, and Chryssa Zerva in the scope of the DeepSPIN and MAIA projects.

Biography:

André F. T. Martins is a professor at Instituto Superior Técnico, a senior researcher at Instituto de Telecomunicações, and Vice President of AI Research at Unbabel in Lisbon, Portugal. He also does scientific consulting for Priberam Labs. His main research areas are natural language processing and machine learning. Until 2012 he was a PhD student in the joint CMU-Portugal programme in Language Technologies, at Carnegie Mellon University and Instituto Superior Técnico, under the supervision of Mário Figueiredo, Noah Smith, Pedro Aguiar, and Eric Xing. Since 2018 he has been the principal investigator of the ERC Starting Grant DeepSPIN (Deep Structured Prediction in Natural Language Processing) at Instituto de Telecomunicações, Instituto Superior Técnico, and Unbabel.

More information on his website


Barbara Plank


Talk title: Is Human Label Variation Really So Bad for AI?

Abstract:

Human variation in labeling is typically considered noise. Annotation projects in computer vision and natural language processing typically aim to minimize human label variation, in order to maximize data quality and, in turn, machine learning metrics. However, is human variation in labeling really noise, or can we turn such information into signal for machine learning? In this talk, I will first illustrate the problem and then discuss approaches to tackle this fundamental issue.

Biography:

Barbara Plank is Professor of Artificial Intelligence and Computational Linguistics at LMU Munich, where she leads the Natural Language Processing Lab at the Center for Information and Language Processing (CIS). She is also a part-time professor in the Computer Science Department at ITU (IT University of Copenhagen), where she co-heads the NLPnorth research unit. She holds Bachelor's and Master's degrees in computer science and received her PhD in computational linguistics in 2011. Originally from South Tyrol, Italy, she has worked and lived in the Netherlands, Italy, Denmark, and Germany. Professor Plank's research focuses on robust language technology, in particular multilingual and cross-domain learning, learning under annotation bias, and semi-supervised and weakly supervised machine learning in general, for a broad range of NLP applications including parsing, syntax, author profiling, opinion and information mining, and relation extraction. She has received an Amazon Research Award and a Sapere Aude Grant from the Independent Research Fund Denmark, is an ELLIS NLP Scholar, and in 2021 received an outstanding paper award in the EACL 2021 demo track.

More information on her website