AI Summary • Published on Dec 3, 2025
Interactive surface technologies have evolved substantially, yet the field lacks a comprehensive account of the available sensing modalities, their technical progression, inherent trade-offs, and the challenges of real-world deployment. The paper synthesizes this diverse landscape to explain how interactive surfaces have moved from basic two-dimensional touch input to richer, spatially expressive forms of human-computer interaction, and to identify the factors that will shape their continued advancement.
The authors conduct a thorough survey tracing the evolution of interactive surface sensing technologies. They begin with early infrared vision-based techniques such as Frustrated Total Internal Reflection (FTIR) and Diffuse Illumination (DI), and the subsequent rise of capacitive touch as the dominant technology in modern devices. The review then turns to contemporary modalities, including computer vision and acoustic sensing, before discussing emerging technologies such as mmWave radar and vibration-based techniques. For each sensing technique, the paper analyzes operating principles, spatial resolution, scalability, typical applications, and trade-offs, alongside considerations for multimodal integration.
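In FTIR sensing, a touching finger frustrates the total internal reflection of infrared light inside the surface, scattering light toward a camera behind it; in software, touch detection then reduces to finding bright blobs in the camera frame. As a minimal illustrative sketch (not the paper's implementation; the function name, threshold, and frame format are assumptions), thresholding followed by connected-component flood fill yields one centroid per touch:

```python
def detect_touches(frame, threshold=128):
    """Return (x, y) centroids of bright blobs in a 2D brightness grid."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] < threshold or seen[y][x]:
                continue
            # Flood-fill the connected bright region (one touch blob).
            stack, pts = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pts.append((cy, cx))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and frame[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            # The blob centroid approximates the finger's contact point.
            touches.append((sum(p[1] for p in pts) / len(pts),
                            sum(p[0] for p in pts) / len(pts)))
    return touches

# Synthetic 8x8 IR frame with one bright 2x2 touch blob.
frame = [[0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (2, 3):
        frame[y][x] = 255
print(detect_touches(frame))  # → [(2.5, 2.5)]
```

A production pipeline would add background subtraction and temporal tracking, which is precisely where the environmental sensitivity noted in the survey enters: ambient IR shifts the background level that this fixed threshold assumes away.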
The survey finds that early optical methods, while scalable and low-cost, were limited by environmental sensitivity and precision. Capacitive touch became the mainstream solution owing to its high precision, low latency, and robust multi-touch support, despite its limited expressiveness for force and shear and its susceptibility to environmental factors. More recent advances in computer vision and acoustic sensing, driven by AI, have extended interactive surfaces to spatial interaction and touch inference, but still face challenges in generalization, noise robustness, and power consumption. Emerging modalities such as mmWave radar offer millimeter-level precision for gesture tracking and multi-user movement, along with enhanced privacy, while vibration sensing enables touch localization and force estimation on ordinary surfaces without line of sight. Across all modalities, persistent challenges include maintaining sensing accuracy across diverse users and environments, managing power constraints for computationally intensive processing, and addressing user privacy and data security.
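Vibration-based touch localization typically compares when a tap's vibration arrives at several sensors attached to the surface; the time differences of arrival (TDOA) constrain the touch point. The sketch below is an illustrative simplification, not a method from the paper: the corner sensor layout, the assumed wave speed, and the brute-force grid search are all assumptions made for clarity.

```python
import math

def locate_tap(sensors, arrival_times, speed, size=1.0, step=0.01):
    """Grid-search a square surface for the point whose predicted
    time-differences of arrival best match the measured ones.

    sensors:       list of (x, y) sensor positions in meters
    arrival_times: measured arrival time at each sensor, in seconds
    speed:         vibration propagation speed in the surface, m/s
    """
    ref = arrival_times[0]
    measured = [t - ref for t in arrival_times]  # TDOA vs. sensor 0
    best, best_err = None, float("inf")
    n = int(size / step) + 1
    for i in range(n):
        for j in range(n):
            x, y = i * step, j * step
            # Predicted travel time from candidate point to each sensor.
            d = [math.hypot(x - sx, y - sy) / speed for sx, sy in sensors]
            pred = [t - d[0] for t in d]
            err = sum((m - p) ** 2 for m, p in zip(measured, pred))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate a tap at (0.30, 0.70) on a 1 m square with corner sensors.
sensors = [(0, 0), (1, 0), (0, 1), (1, 1)]
speed = 500.0  # assumed wave speed for the surface material
tap = (0.30, 0.70)
times = [math.hypot(tap[0] - sx, tap[1] - sy) / speed for sx, sy in sensors]
print(locate_tap(sensors, times, speed))  # recovers roughly (0.30, 0.70)
```

Real deployments replace the grid search with closed-form or least-squares TDOA solvers and must first estimate arrival times from noisy, dispersive waveforms, which is where the noise-robustness challenges noted above arise.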
The paper argues that overcoming the remaining challenges in sensing accuracy, power efficiency, and privacy will be essential for widespread adoption of interactive surface technologies. The future of interactive surfaces lies in multimodal sensing, which combines the strengths of different technologies to achieve more reliable, expressive, and context-aware interaction. Continued development of lightweight, scalable, and seamlessly integrated sensing solutions is crucial for realizing truly ubiquitous interactive environments, in which everyday surfaces become active, intelligent participants in human-computer interaction and fundamentally transform how users engage with digital content.