Language Technologies Ph.D. Thesis Defense
Speaker
DHEERAJ RAJAGOPAL
Ph.D. Student
Language Technologies Institute
Carnegie Mellon University
When
-
Where
Virtual Presentation - ET
Description
Understanding our reasoning process through explanations is spontaneous, ubiquitous, and fundamental to how we perceive the world around us. Scientific progress often relies on explanations to facilitate the discovery of hypotheses, identify applications, and surface and correct systematic errors. As NLP systems are deployed widely, explanations are central to how various stakeholders interact with a system and understand its decision making.
Current NLP systems, despite significant advances, are usually treated as black boxes with little to no insight into how they reason. Understanding explanations remains an under-explored area in the natural language processing literature due to the lack of a unified theory. In this thesis, we address these challenges by first proposing goals for an explainable NLP system. Next, we show (i) data-based and (ii) model-based approaches to building explainable NLP systems. Finally, we present our "recursive descent" theory of explanation and an instantiation of the theory via natural language templates using text-to-text generation models.
Thesis Committee:
Eduard Hovy (Chair)
Yulia Tsvetkov
Yonatan Bisk
Sebastian Riedel (University College London)
Additional Information
Zoom Participation. See announcement.