Watches are unique among computing devices in that they are worn, offering great potential to transform arms and hands into expressive input and sensing platforms. As people use their hands, tiny micro-vibrations propagate through the arm, carrying information about the objects they interact with and the activities they perform throughout the day. Smartwatches are ideally situated to capture these vibrations. Although all modern smartwatches contain accelerometers, their APIs generally limit the sampling rate to around 100 Hz (Figure 1, top purple lines). This is sufficient for their main use: detecting the orientation of the watch (e.g., to automatically activate the screen when raised). Some smartwatches also track step count (~2 Hz), which is also easily captured with 100 Hz sampling.
In this work, we use an off-the-shelf smartwatch with a modified OS kernel to capture accelerometer data at 4000 Hz. This fast sampling allows the smartwatch to capture not only coarse motions, but also rich bio-acoustic signals. For example, in Figure 1B, the sinusoidal oscillations of the toothbrush’s motor are clearly visible. In Figure 1C (fingers rubbing) and 1D (pressing a stapler), the 100 Hz signal captures the coarse impulse, but no useful spectral information is available.
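The value of the higher sampling rate follows directly from the Nyquist limit: at 100 Hz, any vibration above 50 Hz is aliased or lost, while 4 kHz sampling preserves content up to 2 kHz. The sketch below illustrates this with a synthetic 210 Hz "motor" tone (an illustrative stand-in, not the authors' data); the signal names and frequencies are our own assumptions for the example.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the strongest non-DC frequency component via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Synthetic 210 Hz "motor" vibration, one second long.
fs_fast, fs_slow = 4000, 100
t = np.arange(fs_fast) / fs_fast
vibration = np.sin(2 * np.pi * 210 * t)

# At 4 kHz, 210 Hz is well below Nyquist (2 kHz) and is recovered exactly.
print(dominant_frequency(vibration, fs_fast))  # → 210.0

# Sampling the same vibration at 100 Hz (Nyquist = 50 Hz) aliases the
# tone down to 10 Hz -- the true motor frequency is unrecoverable.
aliased = vibration[:: fs_fast // fs_slow]
print(dominant_frequency(aliased, fs_slow))    # → 10.0
```

This is the same effect visible in Figure 1: the 100 Hz trace retains only the coarse envelope of an interaction, whereas the 4 kHz trace retains the spectral signature that distinguishes, say, one motor-powered object from another.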
Most smartwatches include microphones, which provide even higher sampling rates (typically 44.1 kHz). However, microphones are specifically designed to capture airborne vibrations, not contact vibrations, which means purposeful signals must be segmented from background environmental noise. In contrast, our bio-acoustic approach only captures signals that are physically coupled to the body (Figures 1A and B, and Video Figure), making our technique naturally resistant to environmental noise.
As we discuss in our paper, this approach can be applied to a wide array of use domains; we selected three that we found to be particularly compelling. First, we use bio-acoustic data to classify hand gestures, which we combine with on-device motion tracking to enable a wide range of expressive input modalities. Second, we detect and classify vibrations of grasped mechanical or motor-powered objects, enabling un-instrumented object recognition. Finally, we explore structured vibrations and demonstrate reliable data transmission through the human body.
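For the classification domains above, a common recipe for vibration signals is to summarize each sensor window as log power in a set of frequency bands, then feed those features to a classifier. The sketch below is a minimal illustration of that recipe under our own assumptions (a nearest-centroid classifier over FFT band-power features, with synthetic signals standing in for real accelerometer data); it is not the paper's actual pipeline, whose details are described in the full text.

```python
import numpy as np

def band_power_features(window, fs=4000, n_bands=16):
    """Summarize a bio-acoustic window as log power in equal-width
    frequency bands -- a simple feature scheme for vibration signals."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

class NearestCentroidGestureClassifier:
    """Toy classifier: label a window by its closest class centroid
    in band-power feature space."""
    def fit(self, windows, labels):
        feats = np.array([band_power_features(w) for w in windows])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[np.array(labels) == c].mean(axis=0)
             for c in self.labels_])
        return self

    def predict(self, window):
        f = band_power_features(window)
        dists = np.linalg.norm(self.centroids_ - f, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Synthetic training data: a "rub" gesture rich in high frequencies
# versus a low-frequency "flick" impulse (stand-ins for real sensor data).
rng = np.random.default_rng(0)
t = np.arange(1024) / 4000
def rub():   return np.sin(2*np.pi*900*t) + 0.1*rng.standard_normal(t.size)
def flick(): return np.sin(2*np.pi*40*t)  + 0.1*rng.standard_normal(t.size)

clf = NearestCentroidGestureClassifier().fit(
    [rub() for _ in range(5)] + [flick() for _ in range(5)],
    ["rub"]*5 + ["flick"]*5)
print(clf.predict(rub()))  # classifies an unseen window
```

Because the two synthetic gestures concentrate energy in different bands, even this toy classifier separates them reliably; richer feature sets and classifiers follow the same structure.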
Our evaluations show that our sensing technique is accurate, robust to noise, relatively consistent across users, and independent of location or environment. Our system, which we call ViBand, makes the following contributions: 1) a system that performs bio-acoustic sensing using commodity accelerometers already present in modern smartwatches; 2) a set of example use domains enabled by our technique, including gesture detection, grasped object sensing, and data transmission; 3) a series of user studies evaluating the feasibility and accuracy of our sensing technique; and 4) a series of example applications for wrist-worn bio-acoustic sensing that illustrate the potential of our approach. Collectively, these contributions enable novel and rich functionality for smartwatches, expanding their envelope of possible interactions.
Laput, G., Xiao, R. and Harrison, C. 2016. ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (Tokyo, Japan, October 16–19, 2016). UIST ’16. ACM, New York, NY.
Best Paper Award