Corentin Kervadec

Postdoctoral researcher @ UPF || PhD in Machine Learning.

Interested in improving our understanding of how artificial networks work.

Follow me on Twitter

Send me an email

Google Scholar

CV

👽 I am a postdoctoral member of the ALiEN research program led by Prof. Marco Baroni in the COLT group at UPF (Barcelona, Spain). I conduct research on unnatural language processing, trying to understand how artificial networks share information.

👨‍🎓 Previously, I completed my PhD, defended in December 2021, under the direction of Christian Wolf and co-supervised by Grigory Antipov and Moez Baccouche at Orange Labs (Rennes, France).

🧠 My PhD thesis is titled Bias and Reasoning in Visual Question Answering and focuses on Deep Learning applied to Vision and Language. I investigated how decisions made by a neural network trained on the Visual Question Answering (VQA) task are impacted by biases found in the training data.

News

👽 November 2022: Starting a postdoc on the ALiEN project in the COLT group at UPF (Barcelona, Spain).

🥇 October 2022: I have been selected as an outstanding reviewer for ECCV’22!

🥇 May 2022: I have been selected as an outstanding reviewer for CVPR’22!

👨‍🎓 December 2021: I successfully defended my PhD titled Bias and Reasoning in Visual Question Answering!

📜 September 2021: One paper accepted at NeurIPS 2021! Supervising the Transfer of Reasoning Patterns in VQA

🥇 September 2021: I have been selected as an outstanding reviewer for ICCV’21 (top 5% of student reviewers)!

📜 July 2021: One paper accepted at IEEE VIS 2021! VisQA: X-raying Vision and Language Reasoning in Transformers

👨‍🏫 June 2021: I presented a poster about biases and reasoning at the VQA workshop at CVPR’21. Watch the video and check out the poster!

👨‍🏫 May 2021: I was invited to give a talk about biases and reasoning in VQA at “Devil is in the Deeptails” (slides and video).

👨‍🏫 April 2021: I gave a talk about VQA and visual reasoning at the GdR ISIS “Explicabilité et Interprétabilité des méthodes d’Intelligence Artificielle pour la classification et compréhension des scènes visuelles” meeting. Slides are available here.

📜 March 2021: Two papers accepted at CVPR 2021! “Roses Are Red, Violets Are Blue… but Should VQA Expect Them To?” and “How Transferable are Reasoning Patterns in VQA?” (check out our online demo here!)

📜 June 2020: New paper on arXiv! “Estimating semantic structure for the VQA answer space”

📜 January 2020: One paper accepted at ECAI 2020! “Weak Supervision helps Emergence of Word-Object Alignment and improves Vision-Language Tasks.”

📜 May 2019: One paper accepted at IEEE FG 2019! “The Many Variations of Emotion.”

👨‍🎓 October 2018: Starting my PhD at INSA Lyon & Orange Labs under the direction of Christian Wolf, co-supervised by Grigory Antipov and Moez Baccouche.

📜 July 2018: One paper accepted at the IAHFAR workshop hosted at BMVC 2018! “CAKE: Compact and Accurate K-dimensional representation of Emotion.”

🥉 June 2018: Ranked 3rd at the Emotion in the Wild 2018 challenge hosted at ICMI 2018! “An Occam’s Razor View on Learning Audiovisual Emotion Recognition with Small Training Sets.”

👨‍🎓 March 2018: Starting a Master’s internship at Orange Labs.

Publications

Bias and Reasoning in Visual Question Answering

Corentin Kervadec,
PhD, INSA Lyon, 2021  
PDF

Despite the impressive improvements achieved by deep learning approaches, VQA models are notorious for their tendency to rely on dataset biases. In this thesis, we address the VQA task through the prism of biases and reasoning, following the motto: evaluate, analyse, and improve.

Supervising the Transfer of Reasoning Patterns in VQA

Corentin Kervadec*, Christian Wolf*, Grigory Antipov, Moez Baccouche, Madiha Nadri,
NeurIPS, 2021  
PDF / OpenReview

We propose a method for knowledge transfer in VQA based on a regularization term in our loss function, supervising the sequence of required reasoning operations. We provide a theoretical analysis based on PAC-learning, showing that such program prediction can lead to decreased sample complexity under mild hypotheses.

VisQA: X-raying Vision and Language Reasoning in Transformers

Theo Jaunet, Corentin Kervadec, Grigory Antipov, Moez Baccouche, Romain Vuillemot, Christian Wolf
IEEE VIS, 2021  
PDF / arXiv / GitHub / Online Demo!

We introduce VisQA, a visual analytics tool that explores the question of reasoning vs. bias exploitation in Visual Question Answering systems. Try our interactive tool here!

How Transferable are Reasoning Patterns in VQA?

Corentin Kervadec*, Theo Jaunet*, Grigory Antipov, Moez Baccouche, Romain Vuillemot, Christian Wolf
CVPR, 2021  
PDF / arXiv / Video / Poster / Online Demo!

Noise and uncertainty in visual inputs are the main bottleneck in VQA, preventing successful learning of reasoning capacities. In a deep analysis, we show that oracle models trained on noiseless visual data tend to depend significantly less on bias exploitation (check out our interactive tool). In this paper, we demonstrate the feasibility and effectiveness of transferring learned reasoning patterns from an oracle to models trained on real data.

Roses Are Red, Violets Are Blue... but Should VQA Expect Them To?

Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf
CVPR, 2021  
PDF / arXiv / Code / Benchmark / Video / Poster

We propose GQA-OOD, a new benchmark for evaluating VQA in out-of-distribution settings. It reorganizes the GQA dataset with distribution shifts tailored to each sample (question group), targeting research on bias reduction in VQA.

Estimating semantic structure for the VQA answer space

Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf
arXiv, 2020
PDF / arXiv

We propose a semantic loss for VQA that adds structure to the answer space, estimated from redundancy in annotations, questioning the standard classification approach to VQA.

Weak Supervision helps Emergence of Word-Object Alignment and improves Vision-Language Tasks

Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf
ECAI, 2020  
PDF / arXiv / Video / bibtex

We introduce a weakly supervised word-object alignment inside BERT-like Vision-Language encoders, allowing the model to capture fine-grained entity relations and improving visual reasoning capabilities.

The Many Variations of Emotion

Valentin Vielzeuf, Corentin Kervadec, Stéphane Pateux, Frederic Jurie
IEEE FG, 2019  
PDF / bibtex

We present a novel approach for changing the facial expression in images using a continuous latent space of emotion.

CAKE: Compact and Accurate K-dimensional representation of Emotion

Corentin Kervadec*, Valentin Vielzeuf*, Stéphane Pateux, Alexis Lechervy, Frederic Jurie
IAHFAR workshop (BMVC), 2018  
PDF / arXiv / bibtex

We propose CAKE, a 3-dimensional representation of emotion learned in a multi-domain fashion, achieving accurate emotion recognition on several public datasets.

An Occam's Razor View on Learning Audiovisual Emotion Recognition with Small Training Sets

Valentin Vielzeuf, Corentin Kervadec, Stéphane Pateux, Alexis Lechervy, Frederic Jurie
EmotiW challenge (ICMI), 2018  
PDF / bibtex

A lightweight and accurate deep neural model for audiovisual emotion recognition. We ranked 3rd at the Emotion in the Wild 2018 challenge.