Smart Environments and Smart Objects


Smart environments sense objects as well as their occupants, producing data that enables a contextualised response. Sensing context means that an environment can detect what goes on within it and adjust itself to assist its occupants' needs. Features such as person recognition, person location, person activity and person expression may all be sensed by smart architecture; likewise, objects can be followed through object tracking and object recognition.


Questions to Consider

So, what happens to architectural design as environments become smarter? How will the user interface design of architectural features look and feel? What will happen to interior design and architecture as ubiquitous computing becomes more widespread?


Occupants will begin to communicate with their environments more and more. They will gesture and move intuitively, subconsciously sending signals to their surroundings, allowing the brain of the system to absorb these behaviours and adapt its state accordingly.


Smart devices are the first step towards creating habitats that sense. These smart objects are our tools for communication and productivity, providing a smart interface between us and our intended outcome. Combining smart devices with smart architecture will enable a two-way perceptual process: from occupant to environment and from environment to occupant.


Ambient Control through Cognitive Data


The Emotiv headset allows users to wirelessly control objects through expression, emotion and cognitive data. Based on EEG technology, Emotiv has transformed cognitive control patterns into a wearable remote control. The headset produces real-time data that can directly drive a UI or an environment. For example, your mood could drive the metadata relationships in a user interface to show you only particular images from your Flickr account, or directly affect the physical geometry of your seat as you read a book.
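
As a rough illustration of mood driving a UI's metadata relationships, the sketch below filters a photo stream by the wearer's dominant affect. The readAffectiveState stub and the mood-to-tag mapping are invented stand-ins, not the actual Emotiv SDK:

```typescript
// Mood-driven filtering of a photo stream. The affective readings
// and the tag mapping are invented stand-ins for illustration.
interface AffectiveState {
  excitement: number;  // 0..1
  meditation: number;  // 0..1
  frustration: number; // 0..1
}

// Stub for a headset read; a real app would poll the vendor SDK here.
function readAffectiveState(): AffectiveState {
  return { excitement: 0.2, meditation: 0.7, frustration: 0.1 };
}

// Map the dominant affect to a photo tag.
function moodToTag(s: AffectiveState): string {
  const ranked: [string, number][] = [
    ["energetic", s.excitement],
    ["calm", s.meditation],
    ["soothing", s.frustration], // counter frustration with calm images
  ];
  ranked.sort((a, b) => b[1] - a[1]);
  return ranked[0][0];
}

// Re-filter the stream as the user's state changes -- the interface's
// metadata relationships are driven directly by mood.
function visiblePhotos(photos: { url: string; tags: string[] }[]) {
  const tag = moodToTag(readAffectiveState());
  return photos.filter((p) => p.tags.includes(tag));
}
```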

Gesture Control Systems


Gesture control systems are one of the many systems being developed to control interfaces. Their success depends on the quality of the input, which is sometimes unpredictable because users vary in how they interact. The trick for overcoming this is to encourage a shared gestural vocabulary that we will all eventually become used to and therefore perform in a similar way. Gesture-controlled interfaces are significant to architecture because they let users directly control objects at a distance and control their environment from wherever they are standing – the system can be placed around the user.
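
A minimal sketch of such a shared vocabulary follows; the gesture names, commands and confidence threshold are all hypothetical:

```typescript
// A shared gesture vocabulary: every recognizer output is normalised
// to one of these names before it reaches the environment.
type Gesture = "swipe-left" | "swipe-right" | "raise-hand" | "push";

interface Recognition {
  gesture: Gesture;
  confidence: number; // 0..1; recognisers vary between users
}

// The common vocabulary: gesture -> environment command.
const vocabulary: Record<Gesture, () => void> = {
  "swipe-left":  () => console.log("previous scene"),
  "swipe-right": () => console.log("next scene"),
  "raise-hand":  () => console.log("lights up"),
  "push":        () => console.log("dismiss panel"),
};

// Ignore low-confidence readings so user-to-user variation
// does not trigger spurious commands.
function dispatch(r: Recognition, threshold = 0.8): void {
  if (r.confidence >= threshold) vocabulary[r.gesture]();
}

dispatch({ gesture: "raise-hand", confidence: 0.93 }); // "lights up"
```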

New Spatial Thinking


New spatial thinking projects are very useful in understanding how new relationships are being developed between the information in interfaces and the users controlling or interacting with those interfaces. Real-time customization is being applied to an increasing number of new UI projects; it creates a unique experience because it allows users to dynamically manipulate the organization of information without refreshing the page, giving the user an uninterrupted experience.
The picture shows the ability to move modules on Facebook – giving the user freedom of control.
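
A minimal sketch of the client-side reordering behind such a feature is below – the module names and render stub are illustrative, not Facebook's actual code. Only the in-memory order and the visible layout change; the page itself never reloads:

```typescript
// Reorder dashboard modules in place: the page never refreshes,
// only the in-memory order and the rendered layout change.
let modules = ["news-feed", "photos", "events", "notes"];

function moveModule(name: string, toIndex: number): void {
  const from = modules.indexOf(name);
  if (from === -1) return;
  modules.splice(from, 1);          // remove from old slot
  modules.splice(toIndex, 0, name); // insert at new slot
  render();                         // repaint just this region
}

function render(): void {
  // In a browser this would re-append the module elements in order;
  // logging stands in for the DOM update here.
  console.log("layout:", modules.join(" | "));
}

moveModule("events", 0); // layout: events | news-feed | photos | notes
```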

Daylight Linking


Jean Nouvel's Institut du Monde Arabe in Paris: visibility and ambience are controlled by actuators. These diaphragms operate like camera lenses to control the sun's penetration into the interior of the building. The changes to the irises are dramatically revealed internally, while externally a subtle density pattern can be observed.

Detail of the façade of the Institut du Monde Arabe, showing an actuator that controls the openness of the façade.
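
A simple proportional control loop captures the daylight-linking idea: each diaphragm opens or closes so that interior illuminance tracks a setpoint. The setpoint, gain and sensor values below are invented for illustration, not the building's real control system:

```typescript
// Proportional daylight-linking loop for one diaphragm.
const TARGET_LUX = 500; // desired interior illuminance (assumed)
const GAIN = 0.0005;    // aperture change per lux of error (assumed)

let aperture = 0.5;     // 0 = iris closed, 1 = fully open

function step(interiorLux: number): number {
  const error = TARGET_LUX - interiorLux; // too dark -> positive error
  aperture = Math.min(1, Math.max(0, aperture + GAIN * error));
  return aperture;
}

// Bright afternoon sun: the iris closes down like a camera lens.
console.log(step(2000).toFixed(2)); // aperture shrinks
// Overcast sky: the iris opens to admit more light.
console.log(step(150).toFixed(2));  // aperture grows
```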

Intelligent Social User-Interfaces


ISUIs encompass interfaces that create a perceptive computer environment rather than one that relies solely on active and comprehensive user input. ISUIs can be grouped into five categories:

  • Visual recognition (e.g. face, 3D gesture, and location) and output
  • Sound recognition (e.g. speech, melody) and output
  • Scent recognition and output
  • Tactile recognition and output
  • Other sensor technologies

Here, technologies like Easy Access are emerging. Easy Access recognizes a hook line from somebody humming, automatically compares it with a song database and plays the song on the room’s stereo equipment.
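
Hook-line matching of this kind is often done with pitch-contour (Parsons code) comparison; whether Easy Access actually works this way is an assumption. The sketch below reduces hummed pitches to Up/Down/Repeat symbols and searches an invented contour database:

```typescript
// Reduce a pitch sequence (MIDI note numbers) to a Parsons code:
// U = up, D = down, R = repeat relative to the previous note.
function parsonsCode(pitches: number[]): string {
  let code = "";
  for (let i = 1; i < pitches.length; i++) {
    code += pitches[i] > pitches[i - 1] ? "U"
          : pitches[i] < pitches[i - 1] ? "D" : "R";
  }
  return code;
}

// Illustrative contour database, not a real song index.
const songs: Record<string, string> = {
  "Ode to Joy":     "RUURDDDDRUURDR",
  "Happy Birthday": "RUDUDDRUDUD",
};

function identify(hummedPitches: number[]): string | undefined {
  const hook = parsonsCode(hummedPitches);
  // A real system would use fuzzy matching; substring search
  // keeps the sketch short.
  return Object.keys(songs).find((t) => songs[t].includes(hook));
}

console.log(identify([64, 64, 65, 67, 67, 65, 64, 62])); // "Ode to Joy"
```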

Personalised Targeting for Shoppers


New advertising techniques use facial recognition software to identify a shopper's gender (with 85-90% accuracy), ethnicity and approximate age. The attraction for marketers is obvious: shoppers can then be targeted with ads for appropriate products – perfumes for women, for example.

Manufacturers in Tokyo are also producing camera-equipped vending machines that suggest drinks to consumers according to their age and gender. Weather conditions and the temperature are taken into account too.
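
A decision-table sketch of such a machine is below; every rule and threshold is invented for illustration, not the vendors' actual logic:

```typescript
// Rule-based drink suggestion from camera demographics plus weather.
interface Shopper { gender: "male" | "female"; age: number; }
interface Conditions { temperatureC: number; raining: boolean; }

function suggestDrink(s: Shopper, c: Conditions): string {
  if (c.temperatureC >= 28) return "iced tea";
  if (c.temperatureC <= 10) return s.age >= 40 ? "hot green tea" : "hot cocoa";
  if (c.raining) return "hot coffee";
  return s.gender === "female" ? "fruit smoothie" : "sports drink";
}

console.log(suggestDrink({ gender: "male", age: 25 },
                         { temperatureC: 8, raining: false })); // hot cocoa
```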

Interfaces Beyond Multitouch


The future of interface development may include using neurotransmitters to help translate thoughts into computing actions, face detection combined with eye tracking and speech recognition, and haptics technology that uses the sense of touch to communicate with the user.

For instance, the Nintendo Wii made popular the idea of using natural, gestural actions and translating them into movements on screen. The Wii controller takes the swinging motion made by a user and translates it into a golf swing, or it takes a thrust with a remote and turns it into a punch on the screen. At Drexel University’s RePlay Lab, students are working on taking that idea to the next level. They are trying to measure the level of neurotransmitters in a subject’s brain to create games where mere thought controls gameplay.

The lab created a 3-D game called Lazybrains that connects a neuro-monitoring device and a gaming engine. At its core is the Functional Near-Infrared Imaging Device, which shines infrared light into the user’s forehead. It then records the amount of light that gets transmitted back and looks for changes to deduce information about the amount of oxygen in the blood. When a user concentrates, his or her frontal lobe needs more oxygen and the device can detect the change. That means a gamer’s concentration level can be used to manipulate the height of platforms in the game, slow down the motion of objects, or even change the color of virtual puzzle pieces.
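
A sketch of how an oxygenation reading might be folded into gameplay is below. The linear concentration model and the parameter mappings are guesses made for illustration, not Drexel's actual pipeline:

```typescript
// Estimate concentration from frontal-lobe blood oxygen (assumed
// linear model) and map it onto game parameters.
function concentration(baselineO2: number, currentO2: number): number {
  // More oxygen than baseline -> higher concentration.
  const delta = (currentO2 - baselineO2) / baselineO2;
  return Math.min(1, Math.max(0, delta * 5)); // clamp to 0..1
}

function applyToGame(focus: number) {
  return {
    platformHeight: 1 + 3 * focus,      // platforms rise with focus
    objectSpeed: 1 - 0.5 * focus,       // objects slow when concentrating
    puzzleHue: Math.round(240 * focus), // puzzle colour shifts with focus
  };
}

console.log(applyToGame(concentration(100, 112)));
// { platformHeight: 2.8, objectSpeed: 0.7, puzzleHue: 144 }
```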

Social Interfacing


In social interface theory, Long (1989, 2001) defined the term as follows:

‘.. a social interface is a critical point of intersection between different lifeworlds, social fields or levels of social organization, where social discontinuities based upon discrepancies in values, interests, knowledges and power, are most likely to be located.’

The basic thesis of social interface design is that a computer interface can be made more akin to human gestures and so facilitate correct responses from users during human-to-computer interaction. Software that provides such humanising cues often does so by creating an interface with human-like qualities, such as giving the software a recognisable gender.

CASE STUDY

An example of this has been conceived by designer John Villarreal. The “e-mote” is a remote electronic user interface for controlling any number of electronics with minimal fuss. The e-mote connects to your mobile phone using Bluetooth, while internal bio-sensors map its lighting to physical state, such as heartbeat, blood pressure and body temperature. The lighting on the device indicates a high stress level, and this emotion level can be posted to your social network.
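
A sketch of the e-mote's sense-light-post loop is below; the vital-sign thresholds, LED colours and the posting stub are all invented for illustration:

```typescript
// Classify stress from bio-sensor readings, drive the device
// lighting, and post the level to a social feed (stubbed).
interface Vitals { heartRate: number; systolicBP: number; bodyTempC: number; }

type Stress = "low" | "medium" | "high";

function classifyStress(v: Vitals): Stress {
  let score = 0; // count elevated vital signs (thresholds assumed)
  if (v.heartRate > 100) score++;
  if (v.systolicBP > 140) score++;
  if (v.bodyTempC > 37.5) score++;
  return score >= 2 ? "high" : score === 1 ? "medium" : "low";
}

const LED_COLOUR: Record<Stress, string> = {
  low: "green", medium: "amber", high: "red",
};

function update(v: Vitals): void {
  const stress = classifyStress(v);
  console.log(`LED -> ${LED_COLOUR[stress]}`);       // device lighting
  console.log(`posting "feeling ${stress} stress"`); // social network stub
}

update({ heartRate: 112, systolicBP: 150, bodyTempC: 36.8 }); // high
```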
