Our XR team just published its third XR report. The report shows how combining gesture-control cameras with neural wearable interfaces overcomes line-of-sight and field-of-view limits, highlighting applications with Meta Ray-Ban and Lenovo ThinkReality glasses.
The report opens with concepts that illustrate how the pace of technological advancement has always been dictated by user interfaces. It traces interface development from the pre-digital era to the modern day and the strong correlation between the GUI and input methods. It then covers user input and feedback, direct and indirect manipulation, input device types, and the relationship between screen size and pointing-device functionality, arguing that certain input functionalities, including specific gesture types, are optimized for specific display types.
The report then explores the origins, evolution, technologies, and boundaries of gesture control. The field began with a wearable approach, shifted to camera-based technologies, returned to wearables, and is now firmly established as a built-in solution for smart glasses. The report analyzes the line-of-sight and field-of-view limitations of camera-based solutions, and their equivalents in neural interface technology.
The report introduces the work we’ve achieved in collaboration with Qualcomm on the Lenovo ThinkReality AR smart glasses. It demonstrates how we’ve taken gesture control and interaction beyond the boundaries of gesture recognition, with graduated fingertip-pressure control, interactions beyond field-of-view limits, and laser-pointer functionality.