Welcome!
I am a Lead Research Scientist at Salesforce AI Research working on Natural Language Processing, Explainable AI, and Human-Computer Interaction. My research explores diverse topics in NLP, from developing models for text summarization and simplification to studying novel methods for interpreting language models. I also enjoy creating visualization tools that enhance understanding of AI models, such as the open-source BertViz library for visualizing attention in Transformers. My research has been recognized with a Best Paper award at the Intelligent User Interfaces conference and was featured in The Batch. I am an avid blogger and enjoy creating user-facing applications such as the award-winning 100 Years Ago.
Select Publications
- BERTology Meets Biology: Interpreting Attention in Protein Language Models
  Jesse Vig, Ali Madani, Lav R. Varshney, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani
  ICLR 2021
  [Paper] [Blog] [Visualization Demo] [Papers Explained]
- Investigating Gender Bias in Language Models Using Causal Mediation Analysis
  Jesse Vig*, Sebastian Gehrmann*, Yonatan Belinkov*, Sharon Qian, Daniel Nevo, Yaron Singer, Stuart Shieber (*equal contribution)
  NeurIPS 2020, Spotlight
  [Paper] [Poster]
- SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization
  Jesse Vig, Wojciech Kryscinski, Karan Goel, Nazneen Fatema Rajani
  ACL System Demonstrations 2021
  [Paper] [Demo Video]
- A Multiscale Visualization of Attention in the Transformer Model
  Jesse Vig
  ACL System Demonstrations 2019
  [Paper] [Blog] [Poster]
- Analyzing the Structure of Attention in a Transformer Language Model
  Jesse Vig, Yonatan Belinkov
  ACL BlackboxNLP Workshop 2019
  [Paper] [Poster]