New devices and UI software that change the UI game decisively are starting to come to market. They are referred to as Natural User Interfaces (NUIs) or PostWIMP, succeeding GUI interfaces that use WIMP (Windows, Icons, Menus, and Pointers). There is a whole community of different Natural User Interface designs, from the Tangible UI work that inspired Microsoft Surface development to NUIGroup, which does Open Source research on a number of Natural User Interface designs. One can see the cross-fertilization engendered by the academic, art, and Computer Science communities in such organizations looking for more productive Human-Computer Interactions. Out of this 6-10 year PostWIMP mix have emerged the multi-touch interface ideas seen in Apple iOS devices and Google's combination of voice and touch in Android. Some NUIs, such as Nintendo's Wii and Microsoft Kinect, have already appeared in the marketplace.
Here are four NUI (Natural User Interface) technologies that will impact computing massively in the next 1-3 years. They will change yet again how we interact with computing power, perhaps even supplanting the newfound touch+gesture UI and the spoken commands of Apple Siri and other voice-activated devices. They also could break the stranglehold on UIs that the major OS vendors currently hold, since two very promising NUIs are outside the major OS vendors' control. Think of Apple's refusal to allow touchscreen+gestures on its Mac OS/X, or the bipolar split between Windows 8's Metro Style UI and Desktop UI.
The NUIs are ordered by closeness to market and potential impact.
NUI - Natural User Interfaces Coming to Market
LeapMotion is a $70, half-brick-sized device that will revolutionize gaming and 3D touch screens.
[iframe width="560" height="315" src="http://www.youtube.com/embed/_d6KuiuteIA" frameborder="0" allowfullscreen]
The Leap creates an 8-cubic-foot space above the device in which users' fingers and other pointers are tracked with uncanny speed and accuracy, then mapped to the screen as seen in the videos. So, like Kinect or Wii, the Leap is able to track hand movement in space, but with much greater acuity. It also does not require the user to touch the screen and mar it with streaks.
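To make that mapping concrete, here is a minimal Python sketch of how a fingertip tracked in a 3D interaction box might be normalized onto 2D screen pixels. The box dimensions, axis conventions, and function name are assumptions for illustration only, not the Leap SDK's actual API.

```python
# Hypothetical sketch: project a tracked fingertip from a 3D interaction
# box (like the space above the Leap) onto 2D screen coordinates.
# Box size, axes, and names are illustrative assumptions, not Leap's SDK.

def fingertip_to_screen(x, y, z, box=(400.0, 400.0, 400.0),
                        screen=(1920, 1080)):
    """Map a fingertip position in millimeters (origin at the box
    center on the desk) to a screen pixel. x runs left/right, y runs
    upward from the device, z runs toward/away from the user; z is
    ignored for simple 2D pointing."""
    bw, bh, bd = box
    sw, sh = screen
    # Normalize x from [-bw/2, +bw/2] into [0, 1], clamped to box edges.
    nx = min(max((x + bw / 2) / bw, 0.0), 1.0)
    # Normalize y from [0, bh] into [0, 1]; screen y grows downward.
    ny = min(max(y / bh, 0.0), 1.0)
    return (round(nx * (sw - 1)), round((1.0 - ny) * (sh - 1)))

# A fingertip at the center of the box lands at mid-screen.
print(fingertip_to_screen(0.0, 200.0, 0.0))  # (960, 540)
```

The clamping step matters in practice: without it, a hand drifting past the tracked volume would fling the cursor off-screen instead of pinning it to the nearest edge.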
Oblong's g-speak SDK and hardware enable Minority Report-style interactions with whiteboards and/or computer screens.
[iframe src="http://player.vimeo.com/video/2229299" width="500" height="281" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen]
g-speak is a framework for building applications that offer synchronous multi-participant access, remote collaboration, and arbitrary-scale data and media handling. Using gestures provides fluid, dextrous navigation, which doubles the value of data visualization. Oblong also offers Mezzanine, a new collaboration, whiteboarding, and presentation system whose triptych of high-definition displays forms a shared workspace. Multiple participants simultaneously manipulate elements on Mezzanine's displays, working via the system's intuitive spatial wands, via a fluid browser-based client, and via their own portable devices. This is an innovative meeting room with g-speak software as the underlying driver. As more OEMs become savvy in g-speak, expect tailored dashboards and meeting-room "whiteboards" to become more sophisticated and productive.
Microsoft Xbox SmartGlass may be the sleeper here.
Launching this Fall as an app on major mobile devices - read iOS and Android as well as Windows Phone - it allows the Xbox console to share multi-screen info back to the mobile device while the mobile acts as the Xbox controller for that video/game/browser session. This is in part a response to Apple's AirPlay and partly a way to bring the Xbox gaming and TV control franchise to a wider array of users. Notable is the fact that Windows Phone 7/8 has not been given an exclusive. Open question: is this the interface that will blunt the rumored Apple TV and its UI, also coming in the Fall?
Google's Project Glass has the longest time horizon for introduction - and a high propensity to be parodied.
[iframe src="http://player.vimeo.com/video/42592801" width="500" height="281" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen]
But Google co-founder Sergey Brin insists this project is on tap for introduction in 2013. The basic idea is to create a smartphone that you wear as glasses. Users control the GoogGlasses primarily by means of voice commands. The glasses project images and messages onto the field of view. The GoogGlasses can take pictures, play music, issue alerts on incoming calls, even allow simple browsing. Of course, the GoogGlasses have attracted some concern over loss of attention and verbal annoyance to others in public spaces. And of course this has attracted a number of parodies like the one below:
There are many more NUIs lurking in academia and the Maker community. Just browse around here for a lively series of discussions. However, LeapMotion has this viewer's vote for the NUI most likely to shake up Computing in the next year. Leap has all the bases covered: convenient size, no smearing the monitor, ability to expand the coverage with networked Leaps, easy link through a USB port, winning price, and better accuracy and speed of operation than either Wii or Kinect. And most of all, they have connected Leap to all the major operating systems. Curiously, a critical factor will be whether the Leap lets users zero in quickly on small targets like window edges, scroll buttons, thin graphic lines, or other small objects/icons.
Nonetheless, I suspect a few killer games will lead the charge, but some artistic apps could also leap into action. If I were Corel or Magix, I would be working triple overtime to bring the Leap to my graphics software to get a lead over Adobe and AutoCAD, which may be too clumsy to respond quickly. Finally, will Leap solve the no-touchscreen exile that Apple has forced on Mac users, or unite the great divide that is the Windows 8 Metro UI versus Desktop UI? I suspect that the Leap could easily make both of those jumps.