I am an assistant professor in the Computer Science Department at The University of Texas at Austin. Before UT, I was a researcher at Google AI in NYC and a Ph.D. student at UW, advised by Luke Zettlemoyer and Yejin Choi.
I enjoy studying real-world language use with simple and generalizable models. I also build benchmarks that allow us to evaluate NLP models, conduct model analysis, and bring the progress of English NLP to a wider range of languages. Here are the research topics I am currently interested in:
- Continual Learning and Knowledge Editing: While LMs retain vast amounts of world knowledge seen during pretraining, such knowledge can become outdated. I am interested in retrieval augmentation and in updating the parametric knowledge in LMs.
- Long-form Question Answering: Enabling systems to produce paragraph-level answers opens up possibilities for handling more complicated questions and providing more comprehensive answers. LFQA combines two challenging research areas -- information retrieval and text generation -- and further requires synthesizing information from multiple documents.
- Human-LM Interaction: NLP systems are being deployed quickly and widely. I am interested in improving human interactions with LMs: for example, how should we present information so that users are not misled by plausible yet imperfect model predictions? The deployment of models also creates opportunities to learn from interactions with users.
- Spoken Language Processing: Spoken language exhibits rich prosodic features that are absent in written text. Can we build textless NLP systems that operate directly on speech signals, opening doors to handling languages without written scripts?