Carnegie Mellon University’s Swartz Center for Entrepreneurship has named Ph.D. candidates Anhong Guo and Gierad Laput and postdoctoral researcher Brandon Taylor to this year’s class of innovators, making the Human-Computer Interaction Institute (HCII) home to half of the 2018-19 Innovation Fellows.
One of the primary goals of the Innovation Fellows program is to foster entrepreneurship among graduate students, postdoctoral fellows, and research assistants, while encouraging commercialization of university research to benefit our communities.
"The HCII is very excited to see applications of our research in the real world. I am sure Brandon, Gierad, and Anhong will benefit greatly from this generous support," said Jodi Forlizzi, Geschke Director of the HCI Institute and Professor.
It is easy to see how the work of these three HCI scholars will benefit the community.
Anhong Guo is a fifth-year Ph.D. student in the HCII, advised by Jeff Bigham. He designs, develops, and deploys hybrid crowd-AI interactive systems that provide access to visual information in the real world, applying this work to two domains: accessibility and the Internet of Things. His projects include VizLens, a first-of-its-kind iOS application that combines on-demand crowdsourcing with real-time computer vision to let blind users interactively explore physical interfaces, and Zensors, a camera- and crowdsourcing-based approach to detecting environmental states.
Gierad Laput is a sixth-year Ph.D. student, advised by Chris Harrison. His research explores the intersection of interactive systems, applied machine learning, and sensing, with a focus on creating sensing capabilities that do not require special-purpose hardware and on unlocking unexplored sensing opportunities. Laput is the originator of Zensors and currently serves as Editor-in-Chief of XRDS, ACM’s flagship magazine for students.
Brandon Taylor recently defended his thesis and is now a postdoctoral researcher with the HCII. As a Ph.D. student, Taylor developed a real-time, depth-based, generative hand-tracking system for recognizing American Sign Language (ASL). While other languages have benefited from years of speech recognition and machine translation research, ASL’s lack of aural and written components makes automatic translation very difficult. With expanded data collection, Taylor hopes to both improve automated ASL recognition and cultivate a database for exploring automated ASL-to-English translation.
Learn more about all six of the 2018-19 Innovation Fellows.