Synthetic Sensors Project Presented at CHI 2017
HCII Professor Chris Harrison and PhD students Gierad Laput and Yang Zhang have unveiled Synthetic Sensors, a sensing-abstraction project that can turn everyday environments into smart environments without the use of cameras. The project grew out of Zensors, which used camera-based sensing and crowdsourcing to detect environmental states and changes.
Because of the invasive nature of cameras and the privacy concerns they raise, a camera is impractical in many kinds of environments, such as homes, schools, healthcare facilities, and workplaces. "If we want to make this practical in the real world, we need to see if there is a way to take the camera out of the equation, which is a much more difficult problem," Harrison says.
[[{"fid":"3197","view_mode":"default","type":"media","link_text":null,"fields":{},"attributes":{"class":"panopoly-image-original media-element file-default"}}]]
Developing the sensor
Sensing an entire room containing dozens of devices and appliances, without attaching a sensor directly to each object, was the team's biggest challenge. So they took a kitchen-sink approach, developing a prototype sensor that could indirectly sense multiple state and event changes without being coupled to any one object. "We basically threw every sensor known under the sun onto a little board and then let the pattern recognition of machine learning do the rest," Harrison says. The prototype contains 19 different sensor channels, covering sound, vibration, motion, color, light intensity, speed, and direction.
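To make the kitchen-sink idea concrete, here is a minimal sketch of how readings from many channels might be pooled into a single feature vector for machine learning. The channel count, window size, and the `read_channels()` stub are illustrative assumptions, not details from the paper:

```python
# A sketch of multi-channel featurization. read_channels() simulates the
# sensor board; real hardware access would replace it.
import numpy as np

NUM_CHANNELS = 19   # e.g., microphone, accelerometer, EMI, light, ...
WINDOW = 256        # raw samples per channel per feature window

def read_channels():
    """Stand-in for the sensor board: one simulated reading per channel."""
    return np.random.randn(NUM_CHANNELS)

def featurize(window):
    """Turn a (WINDOW, NUM_CHANNELS) buffer into one feature vector:
    per-channel spectral magnitudes plus simple summary statistics."""
    spectra = np.abs(np.fft.rfft(window, axis=0))         # frequency content
    stats = np.vstack([
        window.mean(axis=0),                              # per-channel mean
        window.std(axis=0),                               # per-channel spread
        window.max(axis=0) - window.min(axis=0),          # per-channel range
    ])
    return np.concatenate([spectra.ravel(), stats.ravel()])

# Collect one window of raw readings, then reduce it to features.
buffer = np.array([read_channels() for _ in range(WINDOW)])
features = featurize(buffer)
```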
In their paper, "Synthetic Sensors: Towards General-Purpose Sensing," Harrison explains that these sensors are "synthetic" because they are synthesized from lower-level data. "Our system might expose a synthetic sensor like 'dishwasher is running,' but since no sensor like that exists, we had to form one from lower-level signals like the sound of the dishwasher running, and the vibration from the pipes carrying water, etc."
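As a loose illustration of that synthesis, the sketch below fuses two hypothetical lower-level signal scores into a single "dishwasher is running" virtual sensor. The signal names and thresholds are invented for the example; the team's actual synthetic sensors are learned with machine learning rather than hand-thresholded:

```python
# Fuse lower-level signal scores (0.0-1.0) into one virtual sensor.
# The signal names and thresholds here are illustrative assumptions.
def dishwasher_is_running(low_level):
    motor_sound = low_level.get("motor_sound", 0.0)
    pipe_vibration = low_level.get("pipe_vibration", 0.0)
    # Require corroborating evidence from more than one channel
    # before the synthetic sensor fires.
    return motor_sound > 0.6 and pipe_vibration > 0.5

print(dishwasher_is_running({"motor_sound": 0.8, "pipe_vibration": 0.7}))  # True
```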
The sensor board can simply be plugged into a wall outlet, eliminating the need for batteries, and is more aesthetically pleasing than dozens of sensors attached to various objects. "Even though we don't have a picture of the scene, we have enough other dimensions of the scene that we can make confident guesses about what's going on in a particular environment," Harrison says. By drawing on all of these sensor channels, the system can make powerful inferences about an object even without a camera.
It’s all about patterns
After developing the prototype, the team "trained" the sensor by providing it with hundreds of real-world examples of what 38 different devices and appliances sound like when in use. Once the sensor has "learned" that data, it can be presented with a new object and pick out what it is, based on the 38 patterns it has already learned. "It's like a little AI," says Harrison. "It only knows about the things we've trained it on. Every time we give it something new, it gives a best guess based on the data we've given it." In initial testing, the system performed at or near 100% accuracy.
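A rough sketch of that train-then-guess loop, using scikit-learn's RandomForestClassifier as a stand-in for whatever classifier the team actually used; the feature vectors and labels below are random placeholders, just to make the example self-contained:

```python
# Train on labeled examples of 38 device classes, then produce a
# best guess (with confidence) for a new, unlabeled observation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(380, 64))     # placeholder feature vectors
y_train = np.repeat(np.arange(38), 10)   # ten examples per device class

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# For a new observation, the model returns its best guess among the
# 38 classes it has seen, along with a confidence score.
x_new = rng.normal(size=(1, 64))
guess = model.predict(x_new)[0]
confidence = model.predict_proba(x_new).max()
```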
[[{"fid":"3193","view_mode":"default","type":"media","link_text":null,"fields":{},"attributes":{"class":"panopoly-image-original media-element file-default"}}]]
Making “smart” environments actually smart
According to Harrison, today's "smart" environments aren't actually that smart, because they don't deliver what users want. "We've been promised these smart buildings of the future – offices, homes, hospitals. But even the smartest environment is actually pretty dumb," Harrison says. "People don't want to spend thousands of dollars to upgrade their homes. The idea is to do this in a way where you don't have to upgrade your entire home or business." The sensor can be plugged in anywhere and immediately makes a room "smart" without replacing everything in it, all for around $100.
The team is already working on the next phase of research, including studying how sensors in different rooms can work together to achieve higher accuracy. The team must also work out how a sensor should notify users about an environmental or state change in a room; for example, a sensor might report that a paper towel dispenser is empty through a text message or an app (see the sketch below). Once these questions are answered, Harrison envisions the system being deployed in classrooms, hospitals, healthcare facilities, workshops, and beyond. "Right now we know nothing about what is happening in these environments. If there is a way to help assist and automate certain tasks and improve efficiency and quality of life, there is value in that."
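As a sketch of that open notification question, the snippet below fires an alert only when a tracked state actually changes. The `notify()` stub is a placeholder for whatever text-message or app channel a deployment would actually use:

```python
def notify(message):
    """Stand-in for a real text-message or app-push channel."""
    print(f"ALERT: {message}")

last_state = {}

def on_sensor_update(sensor, state):
    """Fire a notification only when a tracked state actually changes."""
    if last_state.get(sensor) != state:
        last_state[sensor] = state
        notify(f"{sensor} is now {state}")

on_sensor_update("paper_towel_dispenser", "full")    # first report, one alert
on_sensor_update("paper_towel_dispenser", "full")    # no change, no alert
on_sensor_update("paper_towel_dispenser", "empty")   # change, alert fires
```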
Harrison sees high potential for good in the development of synthetic sensors. "From an academic perspective, we asked ourselves, can we still achieve the same level of sensing fidelity without the camera? The question was not that we didn't think we could do it. The question for us was, how far can we push this? In the end, it turned out pretty far. We didn't know until we tried."
Synthetic Sensors is part of GIoTTO, a Google-sponsored project, and will be presented this week at CHI 2017. The complete paper, "Synthetic Sensors: Towards General-Purpose Sensing," is available online.