Multimodal Interfaces

The simultaneous use of speech and gestures is a relatively new area of research. Sharon Oviatt has studied the combination of speech and gesture in a pen-based geographic map application. When given the choice to issue commands with pen, voice, or both, users preferred to convey locatives (i.e. points, lines, and areas) with the pen, while they preferred speech for describing objects and giving commands. For intelligent environments, this suggests that the most important gesture to detect is pointing, while other commands should, initially at least, be left to voice.
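
A minimal sketch of how such a speech-plus-pointing command might be fused is shown below; speech and gesture recognition are represented only by placeholder values, and all names are invented for illustration rather than taken from Oviatt's system.

    # Hypothetical fusion of a spoken command with a pointing gesture.
    # Recognition is assumed to happen upstream; only the temporal binding
    # of the two inputs is illustrated here.
    from dataclasses import dataclass

    @dataclass
    class PointEvent:
        x: float          # pointed-at map coordinates
        y: float
        timestamp: float  # seconds since session start

    def fuse(command, point, spoken_at, max_gap=1.5):
        """Bind a spoken command to a pointing gesture if they occur
        within max_gap seconds of each other, else ignore the pair."""
        if abs(spoken_at - point.timestamp) > max_gap:
            return None
        return {"action": command, "target": (point.x, point.y)}

    # Example: the user says "zoom" while pointing at map position (12.4, 7.9)
    print(fuse("zoom", PointEvent(12.4, 7.9, timestamp=3.2), spoken_at=3.5))
    # -> {'action': 'zoom', 'target': (12.4, 7.9)}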

Classic Panel

The CLASSIC PANEL wall panel is the recipient of the 2003 ROEDER AWARD as well as the European REDDOT AWARD. In TONE mode, the panel changes colour in response to voice and the rhythm of music.
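
The panel's actual TONE-mode algorithm is not published; the following is only a rough sketch of the idea of sound-reactive colour, mapping the loudness of an audio frame to a hue and brightness.

    # Illustrative sound-reactive colour, not the CLASSIC PANEL's algorithm:
    # louder audio frames produce a different hue and a brighter colour.
    import colorsys
    import numpy as np

    def frame_to_rgb(samples):
        """Map one audio frame (floats in [-1, 1]) to an RGB colour."""
        rms = float(np.sqrt(np.mean(samples ** 2)))   # loudness, roughly 0..1
        hue = min(rms * 2.0, 1.0)                     # quiet = red, loud = violet
        value = 0.3 + 0.7 * min(rms * 3.0, 1.0)       # brightness floor of 0.3
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
        return int(r * 255), int(g * 255), int(b * 255)

    # Example with a synthetic 440 Hz tone at half amplitude
    t = np.linspace(0.0, 0.05, 2205)
    print(frame_to_rgb(0.5 * np.sin(2 * np.pi * 440 * t)))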

ULife South Korea

Under its ULife vision, South Korea plans to spend $25 billion on New Songdo, the world’s largest “ubiquitous city,” with computers linking home life and life on its streets. Construction, 40 miles from Seoul, is due to be completed in 2014.

Ambient Control through Cognitive Data

The Emotiv headset allows users to wirelessly control objects through expression, emotion, and cognitive data. Based on EEG technology, Emotiv has turned these cognitive control patterns into a wearable remote control. The headset provides real-time data that can directly drive a user interface or an environment. For example, your mood state could drive the metadata relationships in a user interface so that only particular images from your Flickr account are shown, or it could directly affect the physical geometry of your seat as you read a book.
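
As a rough illustration of the Flickr example, the sketch below filters a set of tagged photos by the current mood; read_mood() is a placeholder for an affective reading rather than a call into the actual Emotiv SDK, and the tag sets and photo data are invented.

    # Hypothetical sketch: show only photos whose tags match the user's mood.
    # read_mood() stands in for a live affective signal (e.g. from a headset);
    # the real Emotiv SDK and its API are deliberately not modelled here.
    MOOD_TAGS = {
        "calm":      {"landscape", "sea", "sky"},
        "excited":   {"party", "concert", "sport"},
        "nostalgic": {"family", "childhood", "vintage"},
    }

    def read_mood():
        return "calm"  # placeholder for a live headset reading

    def filter_photos(photos, mood):
        """Keep only photos whose tags overlap the tag set for this mood."""
        wanted = MOOD_TAGS.get(mood, set())
        return [p for p in photos if wanted & set(p["tags"])]

    photos = [
        {"title": "Beach at dusk", "tags": ["sea", "sunset"]},
        {"title": "Gig",           "tags": ["concert", "friends"]},
    ]
    print(filter_photos(photos, read_mood()))  # -> only "Beach at dusk"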

Gesture Control Systems

Gesture control systems are one of many systems being developed to control interfaces. Their success depends on the quality of the input, which is hard to control because users vary in how they interact. One way to overcome this is to encourage a shared gesture language that we will all eventually become accustomed to and therefore use in a similar way. Gesture-controlled interfaces are significant to architecture because they allow a user to control objects directly from a distance and to control their environment from wherever they are standing, since the system can be placed around the user.
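
One way to picture such a shared gesture language is as a simple lookup from recognised gestures to environment commands; the gesture names and commands below are invented for illustration, with recognition itself assumed to happen upstream.

    # Illustrative shared gesture vocabulary: recognised gesture -> command.
    # Camera tracking and gesture classification are assumed to exist upstream;
    # only the common mapping that users would learn is shown.
    GESTURE_VOCABULARY = {
        "point":       "select_object",    # deictic selection, cf. the entry above
        "swipe_left":  "previous_scene",
        "swipe_right": "next_scene",
        "palm_up":     "lights_brighter",
        "palm_down":   "lights_dimmer",
    }

    def dispatch(gesture):
        """Translate a recognised gesture into a command, ignoring unknown ones."""
        return GESTURE_VOCABULARY.get(gesture, "no_op")

    for g in ["point", "wave", "palm_down"]:
        print(g, "->", dispatch(g))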

Photosynth

Photosynth, developed by Microsoft Live Labs, creates realistic 3D scenes by combining a user's own photos with photos sourced from the internet into a spatially arranged collage, allowing users to visualise environments in a way that is more akin to real life.
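
Photosynth's own pipeline is not reproduced here; the sketch below only illustrates the first step of relating two overlapping photos of the same scene, using OpenCV feature matching with assumed input file names.

    # Sketch of the first step in relating two overlapping photos: detect local
    # features and match them. This illustrates the general idea behind arranging
    # photo collections spatially; it is not Photosynth's own code.
    import cv2

    img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)  # assumed input files
    img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Keep only matches clearly better than the runner-up (Lowe's ratio test)
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    print(len(good), "reliable correspondences between the two photos")
    # From such correspondences relative camera poses can be estimated, which is
    # what allows a photo collection to be browsed as a coherent 3D scene.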

Intelligent Social User-Interfaces

ISUIs encompass interfaces that create a perceptive computer environment rather than one that relies solely on active and comprehensive user input. ISUIs can be grouped into five categories:

  • Visual recognition (e.g. face, 3D gesture, and location) and output
  • Sound recognition (e.g. speech, melody) and output
  • Scent recognition and output
  • Tactile recognition and output
  • Other sensor technologies

Here, technologies like Easy Access are emerging. Easy Access recognizes a hook line from somebody humming, automatically compares it with a song database and plays the song on the room’s stereo equipment.
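
A very reduced sketch of how a hummed hook line can be matched against a database is given below: melodies are reduced to up/down/repeat pitch contours (the Parsons code) and compared by similarity. The song entries and the matching rule are purely illustrative and not Easy Access's actual method.

    # Reduced query-by-humming sketch: reduce melodies to a pitch contour
    # (Parsons code: U = up, D = down, R = repeat) and return the closest song.
    # Database entries and the similarity measure are illustrative only.
    from difflib import SequenceMatcher

    def contour(pitches):
        """Turn a sequence of pitch estimates (e.g. Hz per note) into U/D/R steps."""
        return "".join(
            "U" if b > a else "D" if b < a else "R"
            for a, b in zip(pitches, pitches[1:])
        )

    SONG_DB = {
        "Ode to Joy":     "RUURDDDD",
        "Happy Birthday": "RUDUD",
    }

    def best_match(hummed_pitches):
        query = contour(hummed_pitches)
        return max(SONG_DB,
                   key=lambda s: SequenceMatcher(None, query, SONG_DB[s]).ratio())

    # A rough hum of the opening of "Ode to Joy"
    print(best_match([330, 330, 349, 392, 392, 349, 330, 294, 262]))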
