Shmuel Barel

What Provides the Best Experience to Interact with Smart Glasses?

Our exploration of smart glasses input methods led us through the landscape of Human-Computer Interaction and the measures of a good user experience. While Apple impressed with gesture controls and Meta with its wrist neural interface research, neither currently offers a neural interface wristband. Enter the Mudra Band: with familiar gestures, ergonomic design, and an established user base, it's poised to redefine input standards in extended reality experiences.


Extended Reality. Spatial Computing. Ambient Computing. Augmented Reality. Virtual Reality. Mixed Reality. Metaverse. Did we miss something? Everybody is trying to name and claim dominance in the “next big thing” computer product category. Whoever wins the customer’s heart in this category will conquer the most desired user real estate: the face.


Juliet says it best: “What's in a name? That which we call a rose by any other name would smell just as sweet.” Names are arbitrary labels; the essence of a thing remains the same regardless of what it is called.


Simply put, any face-worn device, from lean, stylish consumer glasses all the way to a bulky, large-visor headset, lets the user do two things: display various types of digital elements in the user's environment, and interact with those elements. The complexity of the digital overlays and the richness of the interaction determine the device’s utility, be it a practical tool for everyday use that remains on the face, or novelty hardware used for specific use cases and then sent to collect dust on a shelf.


Some consumer-level glasses only display short lines of text, with minimal occlusion of the real-world environment wherever the user is looking, and allow no interaction at all. Other high-end headsets display various screens, applications and widgets, allow the user to pin a certain display to a specific location, and support interaction with multiple elements. With highly advanced passthrough cameras and processing, users can now control how much of their physical surroundings they care to see. And that is even before we’ve considered bionic eyes, in the form factor of smart contact lenses and direct occipital lobe implants.


Early headset user input was achieved using hand-held controllers with buttons, triggers, sticks and touchpads. For augmented experiences, input was through mid-air gestures or a simple hand-held clicker. As the technology evolved, more devices were introduced across the wide spectrum from fully virtual to entirely real, including temple-area trackpads, voice commands, and hand tracking.


Thus, instead of delving into the benefits and drawbacks of input devices, we’ve decided to focus our attention on finding out what input method provides the best user experience for face-worn devices.


Part 1 of the blog post will introduce fundamental concepts of Human-Computer Interaction (HCI), Graphical User Interface (GUI), User Interface (UI) and User Experience (UX). We will discuss the user’s jobs, tasks and environment, and briefly point out the three dominant pointing device products.


In Part 2 we shall review the UX/UI of the Apple Vision Pro, the future Meta glasses, and the recent Neuralink human implant test. In this section we will clarify what constitutes the optimal user experience, form factor, and technology for an input device that delivers a wonderful user experience.


In Part 3, we introduce the Mudra Band, a neural input wristband to control devices using finger and hand gestures. We’ll describe its Air-Touch functionality, its cross-ecosystem capabilities, why we believe a wearable wristband is the right input solution for extended reality experiences, and how you can experience it right now on the devices you already use every day.


Let's dive in.


HCI, GUI, UI and UX


Human-Computer Interaction (HCI) is conveyed [1] through a user's input and computer output, by means of a user interface and an interaction device. The user forms an intent, expressed by selecting and executing an input action. The computer interprets the input command and presents the output result, which the user perceives to evaluate the outcome.


 Human-Computer Interaction
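
To make this loop concrete, here is a minimal Python sketch of the cycle described above. Every name in it is hypothetical and purely illustrative; it is not a real device API.

    # A minimal, purely illustrative sketch of the interaction cycle above.
    # Every name here is hypothetical; this is not a real device API.

    def form_intent():
        return "open_settings"                       # the user decides what to do

    def execute_input(intent):
        return {"gesture": "tap", "target": intent}  # the user performs an input action

    def interpret(action):
        return "launch:" + action["target"]          # the computer maps input to a command

    def present_output(command):
        return "displayed " + command                # the computer renders the outcome

    def evaluate(result, intent):
        return intent in result                      # the user compares the result to the intent

    intent = form_intent()
    result = present_output(interpret(execute_input(intent)))
    print("goal achieved:", evaluate(result, intent))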


The goal is to make computer interaction feel as easy and natural as doing things in real life. This means connecting what the user intends to do with what the computer perceives the user is doing, in a simple and intuitive way. Now what does that mean exactly?


  • Intuitive means you perform input using familiar and common methods. An intuitive gesture is one that binds the “same functionality” with the “same body movement” for any device.

  • Natural means you perform the input using a comfortable and relaxed body posture. You are at ease, and your body rests in a natural stance.

The basic computer mouse capabilities are navigation and pointing: the ability to navigate in 2-dimensional space, and to manipulate - select or interact with - certain elements. Navigation and pointing. Keep these two functions in mind, as they are the fundamental elements of HCI.


The evolution of computers has brought about exciting opportunities for designing new ways of interacting, leading to fresh and unique methods for engaging with technology. Typically, while using a personal computer, an individual tends to lean forward, in contrast to interacting with a smartphone, where a more relaxed posture is adopted. With face-worn devices, however, diverse interaction scenarios typically require quick, on-the-go commands performed in a variety of body postures.


Various body postures when using computers


A Graphical User Interface (GUI) allows users to interact with electronic devices using graphical icons and visual indicators. It typically includes elements such as windows, buttons, menus, and other graphical elements that users can click, tap, or manipulate to perform tasks. GUIs make it easier for users to navigate and interact with software applications by providing a more intuitive and visually appealing interface, in comparison to text-based commands.


The GUI design for face-worn devices is quite versatile, and depends mostly on the display size and resolution, which in turn shape the device’s form factor and size. That implies a strong correlation between HCI and GUI. Interacting with a GUI is easier to picture with the metaphor of driving a car.


Some ways of interacting, e.g. in-line text editing, need a lot of thinking and careful movements, just like parking your car in a crowded garage demands a good degree of focus. Other interactions, like browsing through icons, are easier and can be compared to driving on city roads. Finally, there are the really easy interactions, like selecting a big icon, which feel like driving on a highway - only a slight nudge is required to get back on course. So, different interactions need different amounts of thinking and effort, just like different driving situations.


In the context of Fitts's law [2], selecting a big icon requires the lowest level of cognitive load compared with in-line text editing and icon browsing. GUI design considerations should therefore favor decreasing cognitive load, dictating that the input should be simple, with a low index of difficulty.
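
As a rough back-of-the-envelope illustration, the index of difficulty can be estimated with the common Shannon formulation of Fitts's law, ID = log2(D/W + 1), where D is the distance to the target and W is its width. The target sizes below are made-up values chosen only to show the trend:

    import math

    def index_of_difficulty(distance, width):
        # Shannon formulation of Fitts's index of difficulty: ID = log2(D/W + 1)
        return math.log2(distance / width + 1)

    # Made-up target distances and widths (arbitrary screen units), for illustration only.
    targets = {
        "large home-view icon": (300, 120),   # short reach, big target -> low ID
        "icon browsing":        (400, 60),
        "in-line text caret":   (500, 4),     # long reach, tiny target -> high ID
    }

    for name, (distance, width) in targets.items():
        print(f"{name:22s} ID = {index_of_difficulty(distance, width):4.1f} bits")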


Qualitative evaluation of Index of Difficulty. Source: Wearable Devices Ltd.


In the context of User Interface (UI) input, a Human Interface Device (HID) is a type of hardware device that enables interaction between humans and computers. These include keyboards, mice, game controllers, touchscreens, gesture recognition, and voice commands. These devices allow users to input data or commands into a computer.


Input devices are traditionally categorized as pointing devices or character devices: the former input position, motion, and pressure, while the latter input text. Newer technologies, such as voice and gesture recognition, stretch this split; finger-tracking technology, for example, follows the position of fingers and hands both to generate a 3D representation and to provide discrete input.


We traditionally use three methods to input commands via a device: we either hold a device, touch a device, or gesture to a device.


The most common pointing device products are the computer mouse, the digital pad, and the gaming controller. When thinking about the user’s input tasks of navigation and pointing, a computer mouse is most beneficial with in-line text editing, a digital pad for skipping or advancing menu options, and a controller for low Index-of-Difficulty tasks. And while the mouse is very accurate, it is difficult to use comfortably with face-worn devices.


User experience (UX) in the realm of computer input encompasses the entirety of a user’s interaction with a computer system. It incorporates a range of elements, including usability, accessibility, efficiency, satisfaction, and emotional response.


A positive user experience is characterized by several key factors. Firstly, the system should be highly usable, meaning it is intuitive and easy to navigate, enabling users to complete tasks efficiently and effectively. Additionally, accessibility is paramount, ensuring that the system caters to users with diverse needs and abilities.


Efficiency is another crucial aspect, with users being able to achieve their objectives swiftly and with minimal effort. This is achieved through streamlined workflows and well-designed interfaces. Furthermore, a positive user experience leads to user satisfaction, as individuals feel fulfilled and content after interacting with the system.


Lastly, emotional response plays a significant role in user experience, with the system evoking positive emotions such as joy, delight, or confidence. These emotions contribute to an overall positive perception of the system and can foster increased user loyalty.


For a deeper understanding of the user experience in extended reality (XR) environments, we invite you to download our comprehensive whitepaper on the subject.


Part 1 conclusion


A wonderful user experience should consider both what the user perceives and the user’s intent. The user experience should be functional and accurate. As experiences and interactions extend beyond the office desk or living room sofa, the user input method and device should support navigation and pointing, whether indoors or outdoors, at rest or on-the-go, and in various bodily postures.


APPLE VISION PRO, META GLASSES, and NEURALINK IMPLANT


Apple Vision Pro - The User Experience


Regardless of your opinion of the Apple Vision Pro, one of its most celebrated features is its ability to use hand gestures to interact with visionOS. It provides a smooth, continuous and intuitive user experience, which allows using gestures in unprecedented ways. The reason it feels great is simple: it uses familiar gestures, and it supports comfortable body postures.


Interaction with the Apple Vision Pro involves the use of hands and eyes, along with the Digital Crown and top button. The hands are used for Pointing, to input gestures such as tap, pinch and drag, pinch and flick, and virtual touch. These actions allow users to select an item, move items around, move or scroll content, and type on the virtual keyboard.


This is achieved by the outward-facing cameras, which track hand gestures; with a large enough field of view and line of sight, the user is not required to hold the arm up in mid-air when performing a gesture. The hand can rest comfortably on a desk or by the waist when making most gestures. The eyes are used for Navigating - you simply gaze at an element and it slightly changes its contrast or texture to hint that it is selectable.


Comparing this input method with the first-generation HoloLens immediately illuminates the lessons Apple implemented for its gesture recognition. On the HoloLens, navigation was controlled by neck movements, with a fixed pointer in the middle of the display. The HoloLens gesture camera’s field of view was in the middle of the user’s view, so to perform a gesture the user needed to place the palm of the hand in front of their nose, blocking the real-world view. Another AVP achievement is resolution and algorithms that support very delicate and natural gestures, instead of the Bloom and Air Tap gestures used with HoloLens. Thus, Apple achieved both familiar, intuitive input gestures and comfortable, natural body postures.


It's evident that the Apple Vision Pro inputs are commendable features that merit acknowledgement. However, akin to a well-constructed narrative with its layers waiting to be uncovered, there are aspects of the AVP interaction that warrant further exploration and refinement.


Using the eyes as an input interface creates multiple barriers. While the AVP Home View displays a dozen large and well-placed icons, making gazing easy, navigating through its Settings menu requires full attention and very minute eye movements. These two GUI scenarios highlight the importance of matching the input method to the Index of Difficulty, as we’ve elaborated in the context of GUI design and Fitts’s law. The eyes are not a natural human pointing mechanism: they are used for numerous sensing and cognitive tasks with complex human physiology, which humans cannot control at high enough resolution. In addition, users who need prescription lenses require customized optical inserts for proper navigation. Furthermore, the eye and hand setup must be redone each time a new user puts the device on, which can become tedious, especially when the device is shared among multiple users.




Apple Vision Pro home view (top), settings screen (bottom). Image source: Apple.com 


Another major issue is comfort - weight and face fitment. The AVP comes with two types of headbands, to provide cushioning, breathability, and stretch. It has approximately 30 shapes and sizes of Light Seal cushions to deliver a precise fit while blocking out stray light. Weighing around 650 grams and tethered to a two-hour battery pack, it has been widely reported to cause headaches, discomfort, and excessive sweating. According to the media, many Apple customers who returned their devices cited discomfort as a major issue. A device is only wearable if people are willing to wear it.


Why is it so bulky and heavy, with a relatively short battery life? That is partly because the gesture camera hardware and its algorithms require a lot of processing power and specific locations on the headset. Now, what if you could get the same Apple Vision Pro input experience, but without the deadweight and the drag?


Additionally, what about the fact that you still need to use two additional controls which are not visible at all and are placed on top of the device? Users find it difficult to locate and use these buttons because they fall outside the reach of proprioception - the body's sense of its own movement, action, and location. Without proprioception, you wouldn't be able to move without thinking about your next step, which makes using the Digital Crown and top button a relatively high cognitive load task.


As with any gesture recognition device, some challenges remain. The hands need to be visible to Apple Vision Pro, and not hidden under a desk or a blanket. Gloves, long sleeves, or large jewelry that covers a significant part or all of your hands can affect how Apple Vision Pro tracks your gestures. If you cross your hands, cover your gesturing hand with the other hand, or have hand or finger disabilities, using Apple Vision Pro gesture control can be quite challenging.


So, as for Apple Vision Pro, we believe that Apple hit the bullseye with the User Experience of Pointing - they use the proper gestures which bind with the proper functions to provide a seamless, comfortable, and familiar input.


However, when developing UX/UI for wearables, there is always a tradeoff between functionality, accuracy, and design:

  • Functionality – determining the scope of features and capabilities and the input types it offers

  • Accuracy – an input method that is reliable and provides high accuracy of the intended functionality for all types of user physiology

  • Design – an interface that is comfortable, durable, stylish, and fits a user's daily routine

With the Apple Vision Pro, Apple has clearly prioritized the human interface - how the product works and its intuitiveness, over the product design - how it looks and feels.


Meta Glasses - The Product Form Factor


In February 2024 it was rumored that Meta was working on a pair of augmented reality smart glasses, a.k.a. code name “Orion”, and that the company plans to demo a pair of true AR smart glasses later this fall at Meta Connect. The glasses are a separate product from the Ray-Ban Meta Smart Glasses and the Meta Quest headsets. What makes these glasses “true” AR devices is that they’re supposed to be more technologically advanced, with a visual element, and we don’t know much else yet.


In a recent interview with The Morning Brew Daily [3], Mark Zuckerberg was asked about the purpose and future of smart glasses and headsets, and what he would show people to blow their minds with the possibilities of AI.


As for the future of smart glasses, Mr. Zuckerberg says that glasses will be the next computing generation’s phone, used as an on-the-go computing platform. If Quest is the equivalent of your home TV screen, then for most people the phone is probably the more important device in their life; in the next generation, however, the glasses are probably going to be the more important and ubiquitous device.


Mr. Zuckerberg explains that the Ray-Ban Meta glasses, launched by Meta in cooperation with EssilorLuxottica, prove that there is a broader appeal for face computers. Users can talk to them, ask questions, and perform functions, even without an overlay display, in an affordable and stylish product. So the glasses will not necessarily replace the phone, but will rather be used in situations where certain tasks can be performed more naturally and in a socially preferable manner. With glasses, users can remain engaged without having to hold something in their hand, or divert their gaze away from their current interaction to look down at a phone, according to Mr. Zuckerberg.


And what is the most mind-blowing AI tech, as per Mr. Zuckerberg? Besides the Ray-Ban Meta glasses, one of the wilder things Meta is working on, according to Mr. Zuckerberg, is a neural interface in the form of a wristband. As he explains it, the brain communicates with the body by sending neural signals through the nervous system. The EMG wristband picks up the signals being sent to the muscles of the hand. It turns out that there is a lot of extra bandwidth in this channel, and in the future you’ll essentially be able to type and control devices by just thinking.


It won't even require big motions. You can simply sit, type something to an AI, and receive notifications on the display. So he thinks you'll have this completely private and discreet interface where you can walk around throughout your day, text different things and get responses back in real time, all powered by AI. Mr. Zuckerberg thinks it is going to be insane. Meta has been working on it for a while and is actually close to having a product in the next few years.


Meta neural interface (Source: tech.facebook.com) , Ray-Ban Meta glasses (Source: ray-ban.com)


Analyzing Mr. Zuckerberg and Meta’s vision for interacting with smart glasses, it is clear that Meta has chosen the correct technology and form-factor for inputting commands into face-worn computers.


However, a deeper inspection into the nuts and bolts of Meta's current status may reveal that it will take quite some time to get your hands on their neural input wristband. Mr. Zuckerberg has stated that "we're actually kind of close to having something here that we're going to have in a product in the next few years". [4]


As for the gestures and user experience, not much is known about which gestures Meta will use. It may choose AVP-familiar gestures, or introduce other innovative micro-gestures such as the ones Google's Soli radar presented a few years back, which track minute finger twitches. Alternatively, it can skip directly to what the company has described as “neuro-control”, which allows the user to input gestures without even moving the fingers, but rather only thinking about the movement.


In terms of form factor and style, the latest videos and images of the Meta neural interface wristband show something that is not quite a wristband, but rather an arm band, or “a clunky iPod on a strap”, as one reporter suggested. Nailing the form factor is crucial for successful adoption of a wrist wearable. People have been adorning their wrists for centuries, and the wrist can be considered the second-best body real estate after the face. Providing a rich and fascinating input gesture scheme through a non-aesthetic product is not a promising direction.


For interaction with smart glasses, Meta is prioritizing a face-worn light and stylish form factor - by removing all the superfluous hardware related to input from the device and moving it to the wrist.


Neuralink Implant - The Red Planet Endeavor


Neuralink develops an implantable brain–computer interface. The brainchild of Elon Musk, Neuralink’s mission is to create a generalized brain interface to restore autonomy to those with unmet medical needs today and to unlock human potential tomorrow. The company aims to restore lost capabilities such as vision, motor function and speech. On the wilder side of things, Neuralink’s brain implants could help protect humanity from the risks of artificial intelligence, by increasing the synergy between humans and machines.


The Neuralink product is a fully implantable, cosmetically invisible, brain-computer interface, designed to let the user control a computer or mobile device anywhere they go. The electrode threads can capture the brain signals, and instantaneously translate movement intent into digital command.


Recently, Mr. Musk announced that the first Neuralink patient can control a computer mouse by thinking. [5] How does that work? On its website, Neuralink published its “Seamless BCI Experience”, which appears to be part of the onboarding process that enables fast and reliable computer control and prioritizes ease of use.


The user navigates to certain locations with the eyes using a cursor. Blue circle targets are click targets, and the user is asked to imagine pressing down the index finger to tap on them. Orange circles are dwell targets, and the user is asked to hold as still as possible.


Neuralink Seamless BCI Experience. Source: neuralink.com



What Neuralink can potentially achieve is what we termed in the whitepaper “HCI Level 5”. Our simplified framework and taxonomy defines levels of interaction via four parameters:

  1. handheld versus hands-free

  2. hands-on versus touchless

  3. big versus small physical movements

  4. command input and feedback return time

The Neuralink device offers hands-free, touchless, non-visible physical movements, at almost instantaneous speeds. Humans may benefit immensely from being able to continuously communicate with computers without the time lags that physical motion entails. However, this comes at a hefty price: surgery using a specialized robot and an implant into the skull.
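
To make such comparisons concrete, here is a small Python sketch that encodes a few input devices along the four parameters above. The entries and latency figures are our own rough assumptions for illustration, not measurements or whitepaper data:

    from dataclasses import dataclass

    @dataclass
    class InputDevice:
        # Illustrative encoding of the four interaction parameters listed above.
        name: str
        hands_free: bool      # 1. handheld vs. hands-free
        touchless: bool       # 2. hands-on vs. touchless
        movement: str         # 3. "big", "small", or "none" (non-visible)
        latency_ms: int       # 4. command input to feedback return time (rough guess)

    # Example entries; the latency figures are rough assumptions, not measurements.
    devices = [
        InputDevice("hand-held controller", hands_free=False, touchless=False, movement="big",   latency_ms=50),
        InputDevice("camera hand tracking", hands_free=True,  touchless=True,  movement="big",   latency_ms=80),
        InputDevice("neural wristband",     hands_free=True,  touchless=True,  movement="small", latency_ms=30),
        InputDevice("brain implant",        hands_free=True,  touchless=True,  movement="none",  latency_ms=10),
    ]

    for device in devices:
        print(device)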


For a neural input device specifically, such as those used to control prosthetics or assistive technologies, there are some additional pros and cons to consider for wearables vs. implants:


Wearables:


Pros:

  • Non-invasive, meaning they do not require surgery or implantation into the body

  • Can be easily removed or replaced if needed

  • Generally less expensive than implants

Cons:

  • Limited in their ability to detect neural signals with high precision and accuracy

  • May be subject to interference from external factors, such as movement or electromagnetic fields

  • Limited in terms of the types of signals they can detect and interpret

Implants:


Pros:

  • Able to detect neural signals with high precision and accuracy

  • Not subject to interference from external factors

  • Can provide a more permanent and reliable neural input

Cons:

  • Invasive, requiring surgery or implantation into the body

  • May pose risks such as infection, rejection, or damage to surrounding tissue

  • Expensive

So while Neuralink’s premise is appealing to many users, its practicality and the social acceptance it will require will probably keep it in the niche market of assistive technology for people with severe disabilities, before it helps us overthrow the future AI warlords.


Part 2 conclusion


Apple Vision Pro has adopted the right user experience - familiar gestures bound with the right functions, and Meta is wise in choosing a neural interface wristband to support lighter headsets. Now, how can a user get the chance to experience gestures using a neural wristband without waiting a few years? That is where we present the Mudra Band.


THE MUDRA BAND


The Mudra Band for Apple Watch


Mudra Band is the world’s first neural input wristband. It translates movement intent into digital commands to control digital devices using subtle finger and hand gestures. It connects to the Apple Watch just like any regular watch band, and lets you control Apple ecosystem devices using simple gestures. Your iPhone, iPad, Apple TV, Mac computer, Vision Pro, and additional Bluetooth controlled devices can be paired with the Mudra Band and be operated using Touchless Gestures. Mudra is Sanskrit for “Gesture”.


The Mudra Band app and customized watch face companions are extensions of its neural technology and provide additional features and benefits. The Mudra Band app features a technological novelty! For the first time, you can see your wrist neural signals live and in real-time on your smart device! Using the app, you can also pair your Mudra Band device with the iPhone, complete the quick onboarding process, and adjust personal settings such as speed and sensitivity.


The Mudra Band watch face can be customized with shortcuts to present icons of specific devices you may want to choose to control. It also offers a quick on-off switch to easily initiate and disengage the gesture control.


You would barely notice someone gracing their wrist with the Mudra Band - with its stylish design and ergonomic comfort, it feels secure and grants you neural powers without drawing attention. You can easily adjust it for a snug or loose fit, so you feel comfortable whether you’re resting or on the go.


With its LED indicator colors, you constantly stay informed about your Mudra Band’s functionality, connectivity, and battery status. You can double the power by simultaneously charging your watch and your band, indoors or elsewhere. The Mudra Band is built to evolve - it works on Apple Watch Series 3 all the way up to the Ultra! You can always enhance your experience as you switch to a newer generation watch.

The Mudra Band was a CES 2021 Innovation Award Honoree, won Best Wearable at CES 2021 from Engadget, and Best of CES 2024 from SlashGear.


Mudra Band product video


Welcome to the Era of Human Control!


Mudra Surface Nerve Conductance Technology


The Mudra Band is equipped with three proprietary Surface Nerve Conductance (SNC) sensors. These sensors are located on the inside face of the band and keep constant contact with the skin surface. The sensors sit approximately above the ulnar, median, and radial nerves, which control hand and finger movement.


When you intend to perform a gesture - move a finger or tap fingers together - your brain sends a signal through your body to make it happen. This signal travels down the spinal cord and through the peripheral nervous system, which in turn triggers the appropriate muscle groups to execute the movement. To issue digital commands, you’d normally interact with an interface or device, such as a keyboard, mouse, touchscreen, pointing device, gesture recognition system, or character device. Eventually, your command is sent to the digital device.

The command flow from movement intent to digital command



The Mudra Band’s SNC sensors capture the neural signals at your wrist, and using machine learning algorithms the band deciphers your movement intent - did you just perform a discrete tap of the index finger on the thumb, or did you continue to apply fingertip pressure? This is how Mudra captures fingertip pressure gradations from your wrist.


The Mudra Band also uses an IMU to track your wrist movement and speed. If you’ve moved your wrist up, down, left or right, inwards or outwards - the IMU captures the motion.


Using sensor fusion, our algorithms integrate fingertip pressure and wrist motion to determine the type of gesture you’ve performed. It can be a mere navigation function using only wrist movement, or it can also incorporate any type of fingertip pressure for pointing. Combining the two readings, motion and pressure, manifests in the magical experience of Air-Touch: performing simple gestures such as tap, pinch and glide, using a neural wristband.
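
As a simplified picture of the idea - not the actual Mudra algorithms, which run machine learning models on the raw SNC and IMU signals - sensor fusion can be sketched as a rule that combines a fingertip pressure estimate with wrist motion to name a gesture:

    def classify_gesture(pressure, wrist_speed):
        # Toy fusion rule: combine a fingertip-pressure estimate (0..1, from the
        # SNC sensors) with wrist speed (from the IMU) to name a gesture.
        # All thresholds are arbitrary and for illustration only.
        moving = wrist_speed > 0.2
        if pressure > 0.6 and not moving:
            return "tap / select"
        if pressure > 0.6 and moving:
            return "pinch and drag"
        if pressure > 0.2 and moving:
            return "swipe / scroll"
        if moving:
            return "navigate (pointer move)"
        return "idle"

    print(classify_gesture(pressure=0.7, wrist_speed=0.05))   # tap / select
    print(classify_gesture(pressure=0.3, wrist_speed=0.50))   # swipe / scroll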



How the SNC sensors create biopotential signals


Mudra Band Air-Touch Gestures


Air-Touch allows you to touchlessly control digital devices using simple gestures. You can swipe screens, launch applications, browse between apps, adjust controls, and interact with digital devices using familiar gestures in comfortable body postures.


You use a tap gesture to select an item, pinch and hold to grab an item, and combined with a wrist movement you can drag items around. By applying fingertip pressure and gliding your wrist, you can perform swipe and scroll gestures.
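
A minimal sketch of such a gesture-to-action binding might look like this (our own illustrative mapping, not the shipped Mudra Band configuration):

    # Illustrative gesture-to-action binding (our own sketch, not the shipped
    # Mudra Band configuration).
    GESTURE_ACTIONS = {
        "tap":                         "select item",
        "pinch and hold":              "grab item",
        "pinch and hold + wrist move": "drag item",
        "press + wrist glide":         "swipe / scroll",
    }

    def handle(gesture):
        return GESTURE_ACTIONS.get(gesture, "no action")

    print(handle("tap"))                   # select item
    print(handle("press + wrist glide"))   # swipe / scroll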


The interaction is hands-free and touchless. You can rest your hand when you don’t need to intervene. When you need action, you simply perform the gesture and achieve your task. This is true spatial computing interaction, but without the headset.


With the Mudra Band, enjoy interaction beyond boundaries. Use simple gestures to control it all, touch nothing, yet control everything!


Mudra Band Cross-Device Control


With the Mudra Band you don't just control multiple devices - you hop from controlling one device to another using the custom watch face. The watch face contains complication shortcuts to your selected devices. Once you’ve completed a short setup, you can seamlessly toggle and switch control between your devices by simply tapping the desired device icon on the watch face.


You can also add additional Bluetooth controlled devices to your list by following the same procedure as adding Apple ecosystem devices.


With the Mudra Band, you can streamline your interactions with Apple products while maintaining a harmonized consistency of interactions.


Enhance your mouse, touchscreen, remote, and trackpad interactions with gesture control!

Hop from gaming to video streaming, and between your favorite applications.


Control your iPhone, Apple TV, iPad and Mac computer using hands-free and touchless gestures, and expand your reach by pairing additional BT mouse-controlled products.


Mudra - Simply control it All.


Part 3 conclusion


The Mudra Band for Apple Watch is a readily available neural interface wristband to control digital devices using gestures. You can view live, real-time neural signals on the app, and perform simple gestures such as tap, and pinch and glide to control applications and devices. You can easily switch between devices using the customized watch face.


You can get the spatial gesture input using a neural interface wristband, today!


Oh, and we’ve also got a dev-kit, to integrate any element of our technology into your product or solution.


CONCLUSION


We set out to find the best experience for inputting commands into smart glasses by exploring the most dominant market players. We started by presenting the correlation between HCI, GUI, UX and UI. We learned that a pointing device’s user experience can be tested by two parameters, navigation and pointing, and introduced the Index-of-Difficulty parameter, which can be used to evaluate how natural and intuitive an input interface is.


We then moved on to analyzing the interface input methods of the Apple Vision Pro, the Meta glasses and, briefly, the Neuralink implant. We pointed out that Apple has mastered the optimal pointing user experience with its gestures, and that Meta has chosen wisely with its approach of a wrist-worn interface for lightweight glasses. However, neither currently offers a neural interface which can input gesture commands into any device.


And finally we presented our input solution for face-worn devices, the Mudra Band. With familiar gestures performed in comfortable body postures, a sleek, stylish and ergonomic design, and thousands of customers who already use the product, we believe the Mudra Band is setting the input standard for extended reality experiences.



If you’ve liked what you’ve read, we welcome you to Start a Movement and Join the Band at www.mudra-band.com


[1] Norman, D. A. (1984). Stages and levels in human-machine interaction. International Journal of Man-Machine Studies, 21, 365-375.

[2] Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381-391.




