HCI PhD Thesis Proposal: Nathan DeVrio
Addressing the Input Gap on Mobile Devices
HCII PhD Thesis Proposal
Date & Time: Monday, November 3rd @ 12:45 PM ET
Location: Newell-Simon Hall (NSH) 3305
Remote: Zoom link (Meeting ID: 831 764 8393, Passcode: 726487)
Committee:
Chris Harrison (chair), Carnegie Mellon University
Scott Hudson, Carnegie Mellon University
Nikolas Martelaro, Carnegie Mellon University
Hrvoje Benko, Meta Reality Labs
Abstract:
Computing devices have historically maintained a balance of input and output capabilities to support a positive and productive user experience. However, new and upcoming mobile devices, particularly extended reality (XR) headsets and glasses, have made trade-offs that prioritize mobility and slim form factors, enabling always-available content display at the cost of usable input affordances. This has created an input gap that threatens to increase frustration, confusion, and inefficiency when users are not given enough input capability to interact with the interfaces presented to them. In this thesis, I propose two ways to address this input-output gap: explicit input and implicit input. Explicit input addresses mismatches on present-day devices, such as XR headsets that can render virtual information on the physical world but offer users few ways to interact with it. Looking to the future, increasingly intelligent interfaces enabled by advances in AI can leverage implicit user signals, preempting explicit input and enabling more proactive features.
My research aims to use novel sensing techniques to enable input methods that were previously impossible or impractical due to concerns about power consumption and privacy invasiveness. In my existing work, I have demonstrated through several proof-of-concept systems the effectiveness of novel sensing for enabling new forms of explicit input, along with initial explorations of implicit input. With the growing prominence of devices with their own reasoning capabilities, my proposed work will focus further on implicit input and new ways to automatically sense users’ behaviors and context. The ability to recognize user context allows a system to detect activities, such as drinking a cup of coffee, going for a run, or cooking a meal, that could benefit from logging or specialized feedback. Performing this type of implicit input automatically can help estimate user intent and obviate the need for additional explicit input. In my proposed project, I plan to conduct an end-to-end evaluation of a constellation of devices (a smartwatch and glasses) that work together with a personal agent to maximize power savings and privacy while providing automated context detection and feedback to users.
Proposal Document: PDF link
