Multimodal Interfaces

23:30 § Leave a comment

Using speech and gestures simultaneously is a relatively new area of research. Sharon Oviatt has studied the combined use of speech and gesture in a pen-based geographic map application. When given the choice to issue commands with a pen, voice, or both, users preferred to convey locatives (i.e. points, lines, and areas) with the pen, while they preferred speech for describing objects and giving commands. For intelligent environments, this suggests that the most important gesture to detect is pointing, while other commands should, initially at least, be left to voice.
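Oviatt's finding suggests a simple fusion rule for an intelligent environment: pair a detected pointing gesture (the locative) with a spoken command that occurs close to it in time. Here is a minimal sketch in Python; the event schemas, field names, and two-second window are assumptions for illustration, not any particular system's API:

```python
from dataclasses import dataclass

@dataclass
class PenEvent:
    """A locative from the pen: a point on the map (hypothetical schema)."""
    x: float
    y: float
    timestamp: float

@dataclass
class SpeechEvent:
    """A recognized spoken command (hypothetical schema)."""
    command: str
    timestamp: float

def fuse(pen: PenEvent, speech: SpeechEvent, window: float = 2.0):
    """Pair a pointing gesture with a spoken command if the two
    occur within `window` seconds of each other; otherwise treat
    them as unrelated inputs."""
    if abs(pen.timestamp - speech.timestamp) <= window:
        return {"command": speech.command, "target": (pen.x, pen.y)}
    return None

# Example: the user points at (12.5, 48.2) and says "zoom here".
result = fuse(PenEvent(12.5, 48.2, 10.1), SpeechEvent("zoom here", 10.8))
```

The time window is the crude part: real multimodal systems weigh gesture and speech hypotheses jointly rather than gating on a fixed interval.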

Ambient Control through Cognitive Data

The Emotiv headset lets users wirelessly control objects through expressions, emotions, and cognitive data. Based on EEG technology, Emotiv has transformed cognitive control patterns into a wearable remote control. The headset streams real-time data that can directly drive a UI or an environment. For example, your mood could drive the metadata relationships in a user interface so that only particular images from your Flickr account are shown, or it could directly adjust the physical geometry of your seat as you read a book.
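To make the Flickr example concrete, here is a minimal sketch of mood-driven filtering. The mood labels, tag mapping, and photo schema are all assumptions invented for illustration; they are not Emotiv's actual API or data model:

```python
# Hypothetical mapping from a headset's affect reading to photo tags.
MOOD_TO_TAGS = {
    "calm": {"landscape", "sea"},
    "excited": {"party", "sports"},
    "focused": {"architecture", "macro"},
}

def filter_photos(photos, mood):
    """Keep only photos whose tags overlap the tag set for the
    current mood; an unknown mood leaves the stream unfiltered."""
    wanted = MOOD_TO_TAGS.get(mood)
    if wanted is None:
        return list(photos)
    return [p for p in photos if wanted & set(p["tags"])]

photos = [
    {"title": "Beach", "tags": ["sea", "summer"]},
    {"title": "Concert", "tags": ["party", "night"]},
]
calm_view = filter_photos(photos, "calm")  # only "Beach" survives the filter
```

The same shape of mapping, from a continuous affect signal to a discrete UI state, would apply to the reading-chair example as well.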

New Spatial Thinking

New spatial thinking projects help us understand the new relationships being developed between the information in interfaces and the users who control or interact with those interfaces.
Real-time customization is being applied to a growing number of new UI projects. It creates a unique experience because it lets users dynamically rearrange information without refreshing the page, giving the user an uninterrupted experience.
The picture shows the ability to move modules on Facebook, giving the user freedom of control.

Intelligent Social User-Interfaces

ISUIs encompass interfaces that create a perceptive computer environment rather than one that relies solely on active and comprehensive user input. ISUIs can be grouped into five categories:

  • Visual recognition (e.g. face, 3D gesture, and location) and output
  • Sound recognition (e.g. speech, melody) and output
  • Scent recognition and output
  • Tactile recognition and output
  • Other sensor technologies

Here, technologies like Easy Access are emerging: Easy Access recognizes the hook line of a hummed tune, automatically compares it against a song database, and plays the matching song on the room's stereo equipment.
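Query-by-humming systems of this kind often reduce a melody to its up/down/repeat contour (the Parsons code) before matching, since hummed pitch is rarely accurate. A rough sketch of that idea; the song database and the longest-common-prefix similarity measure here are invented for illustration, not how Easy Access specifically works:

```python
def parsons_code(pitches):
    """Reduce a pitch sequence to its contour: 'u' (up), 'd' (down),
    or 'r' (repeat) for each step between consecutive notes."""
    code = ""
    for prev, cur in zip(pitches, pitches[1:]):
        code += "u" if cur > prev else "d" if cur < prev else "r"
    return code

def best_match(hummed_pitches, database):
    """Return the song whose stored contour shares the longest
    common prefix with the hummed contour (a crude similarity)."""
    query = parsons_code(hummed_pitches)

    def prefix_len(code):
        n = 0
        for a, b in zip(query, code):
            if a != b:
                break
            n += 1
        return n

    return max(database, key=lambda song: prefix_len(database[song]))

# Hypothetical database of hook-line contours.
songs = {"Ode to Joy": "ruurddd", "Happy Birthday": "rududud"}
# MIDI pitches for a hummed E E F G G F E D.
match = best_match([64, 64, 65, 67, 67, 65, 64, 62], songs)
```

A production system would use a fuzzier measure such as edit distance over the contours, but the pipeline, pitch extraction, contour reduction, then database lookup, stays the same.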
