For decades, “intelligent” environments have promised to improve our lives by inferring context, activity, and events across diverse settings, ranging from public spaces, offices, and labs, to homes and healthcare facilities. To achieve this vision, smart environments require sensors, and lots of them. However, installation continues to be expensive, special-purpose, and often invasive (e.g., requiring power to be run).
An even more challenging problem is that sensor output rarely matches the types of questions humans wish to ask. For example, a door open/closed sensor may not answer the user’s true question: “Are my children home from school?” A restaurateur may want to know how many patrons need their beverages refilled, and graduate students may want to know, “Is there free food in the kitchenette?” Unfortunately, these sophisticated, multidimensional, and often contextual questions are not easily answered by the simple sensors we deploy today. Although advances in sensing, computer vision (CV), and machine learning (ML) have brought us closer, systems that generalize across these broad and dynamic contexts do not yet exist.
In this research, we introduce Zensors, a new sensing approach that requires minimal, non-permanent sensor installation and provides human-centered, actionable sensor output. To achieve this, we fuse answers from crowd workers with automatic approaches to provide instant, human-intelligent sensors, which end users can set up in under one minute.
Laput, G., Lasecki, W., Wiese, J., Xiao, R., Bigham, J. and Harrison, C. 2015. Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds. In Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems (Seoul, Korea, April 18 – 23, 2015). CHI ’15. ACM, New York, NY. 1935-1944.