Machine Learning Speaking Skills Talk
Speaker
INI OGUNTOLA
Ph.D. Student
Machine Learning Department
Carnegie Mellon University
When
-
Where
Virtual Presentation - ET
Description
When developing AI systems that interact with humans, it is essential to design both a system that can understand humans and a system that humans can understand. Most deep-network-based agent-modeling approaches (1) are not interpretable and (2) model only external behavior, ignoring internal mental states, which potentially limits their usefulness for assistance, intervention, discovering false beliefs, etc. This talk discusses an interpretable modular neural framework for modeling the intentions of other observed entities. The efficacy of this approach is demonstrated with experiments on data from human participants performing a search-and-rescue task in Minecraft, which show that incorporating interpretability can significantly increase predictive performance under the right conditions.
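To make the idea of an "interpretable modular" agent model concrete, here is a minimal sketch, not the speaker's actual framework: one module maps observed behavior features to a distribution over human-readable named intentions (the interpretable bottleneck), and a second module predicts the agent's next action from that distribution. All module names, intention labels, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative label sets for a search-and-rescue-style task (assumed, not
# taken from the talk).
INTENTIONS = ["rescue_victim", "explore_room", "clear_rubble"]
ACTIONS = ["move", "dig", "triage"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class IntentionModule:
    """Maps observed behavior features to a distribution over named intentions."""
    def __init__(self, n_features, n_intentions):
        self.W = rng.normal(size=(n_features, n_intentions))

    def __call__(self, x):
        return softmax(x @ self.W)

class PolicyModule:
    """Predicts next-action probabilities conditioned on the intention distribution."""
    def __init__(self, n_intentions, n_actions):
        self.W = rng.normal(size=(n_intentions, n_actions))

    def __call__(self, p_intent):
        return softmax(p_intent @ self.W)

intent_net = IntentionModule(n_features=4, n_intentions=len(INTENTIONS))
policy_net = PolicyModule(len(INTENTIONS), len(ACTIONS))

x = rng.normal(size=(1, 4))       # observed behavior features for one agent
p_intent = intent_net(x)          # interpretable: a readable belief per named goal
p_action = policy_net(p_intent)   # external behavior prediction

print(dict(zip(INTENTIONS, p_intent[0].round(3))))
print(dict(zip(ACTIONS, p_action[0].round(3))))
```

The point of the modular split is that the intermediate `p_intent` vector can be inspected directly (e.g., "the model believes this player is trying to rescue a victim"), rather than being an opaque hidden state, while the downstream policy module still predicts external behavior.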
Zoom Participation. See announcement.