“eeyee” is an extended version of Optical Handlers that employs stereoscopic visual illusion to create a pair of mobile eyes with a three-dimensional sense. The new device consists of four LCDs and four cameras, rather than the two of each in the first version. The four LCDs are arranged in two pairs, placed side by side in front of the right and left eye respectively. The screens are aligned precisely at binocular distance, so that your brain overlaps the four images into two. The LCDs’ video is sourced from two sets of stereoscopic cameras, one set on each hand. Mobile stereoscopic vision thus becomes possible as long as the brain functions as normal under this setting.
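The camera-to-screen arrangement above can be sketched as a simple routing function. The text does not specify the exact wiring, so the mapping below is one plausible assumption (each hand's stereo pair feeding the two LCDs of the eye on the same side); all names are illustrative, and dummy strings stand in for live video frames.

```python
from typing import Dict, Tuple

Frame = str  # stand-in for an image buffer from a camera


def route_frames(right_hand: Tuple[Frame, Frame],
                 left_hand: Tuple[Frame, Frame]) -> Dict[str, Frame]:
    """Assign the four camera frames to the four eye-facing LCDs.

    Each hand carries a (left_cam, right_cam) stereoscopic pair; each
    eye faces two LCDs side by side at binocular distance. Assumed
    wiring: the right hand's pair fills the right eye's LCDs and the
    left hand's pair fills the left eye's, so each eye receives one
    stereo pair and the brain fuses four images into two.
    """
    return {
        "right_eye_lcd_left":  right_hand[0],
        "right_eye_lcd_right": right_hand[1],
        "left_eye_lcd_left":   left_hand[0],
        "left_eye_lcd_right":  left_hand[1],
    }


# Dummy frames in place of live capture:
lcds = route_frames(("R-hand cam L", "R-hand cam R"),
                    ("L-hand cam L", "L-hand cam R"))
print(lcds["right_eye_lcd_left"])  # -> R-hand cam L
```

In a working build, the four dummy frames would be replaced by live capture (e.g. one video stream per camera), with the routing itself unchanged.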

Artist Statement

A stereoscopic, surrealistic sense of depth and an augmented reality/body.

In the first version of Optical Handlers, there was only one camera on each hand and one LCD in front of each eye. The device successfully transforms the vision but not the brain. Hence, the brain functions as usual, combining the images from both eyes, which gives a hybrid spatial experience while you mobilize your vision with your hands. “eeyee” takes advantage of the brain’s overlapping function to finally give two clear stereoscopic visions. As the users are seeing two “reasonable” images, they are able to observe the space in a more rational way and find it easier to register themselves within it. However, they are still seeing through a mediated vision, and the perception is still being transformed in sensational ways.

The stereoscopy of “eeyee” provides you a sense of depth, but one extended to your hands. Normally, you are used to the distances between your head and objects, which allows you to move your hands to touch things. With “eeyee”, however, something as simple as touching what is in front of you becomes a high challenge, as you are getting “POV” shots from your hands: seeing and hand movement are combined. You no longer perceive the situation from your skull and then command your hand to touch things. In this sense, perception and body are highly integrated, and consequently you have to adjust your own body in an intimate way to deal with the environment. “The sense of depth with movement in distance” completely transforms the relationship between your body and your vision.

Mediated vision and a “new way of seeing – environment-driven selective vision”. Thus, cyber-real space??

The mediation of the cameras limits your vision in terms of frame space, field of view, focus and exposure, all of which shape your body experience. You no longer have a wide-open spatial perception, but rely on the 4:3 frame space of the cameras. Hence, you see only a small portion of space and lose most of the spatial information that secures your personal safety along the way, which naturally makes you highly cautious about every single thing around you (even a single step forward). The only way to obtain spatial information is to constantly “point at” the directions around you. This generates a very interesting behavioral phenomenon – “selective vision”.

This selective vision is highly driven by the spatial perception of the other senses – for instance sound and touch – which draw your attention and make you aim your eyes/hands at the attractions. However, there is a significant delay of response when it comes to action. The selective vision and the delayed responses constantly challenge your body manipulation, leading to dislocation and uncertainty that demand intense body adjustment from the users to carry on the movement. Eventually, you follow the flow of “the creation with your body” and accustom yourself to the situation, which may advance to reflexive interaction.

Moreover, the limited frame space of the cameras not only suggests the “selective vision”, but also transforms the sense of depth through its stereoscopic feature. The frame space and field of view of the cameras are small and narrow. In other words, they scale down the images and consequently scale down your vision. The proportion between vision, experience and the concept of space/objects becomes blurry. Users must adapt to the new “size” of vision, and the new sense of depth associated with the body movement generates a conception of space that is unique to each user.

The selective vision and the new sense of depth dissect the relationship between body, space and the representation of reality (via media) and create mismatches of body movement, which opens the possibility of a “new” existence of the human being in (cyber) reality.

Profound applications of dual mobile vision? Game space for real?

Once you have accustomed yourself to the new mobile vision, it becomes possible to find advanced applications of the device. The multiplicity of the dual, dimension-free vision provides expanded access to seeing along the extended axes of the limbs. The flexibility of the limbs lets you mobilize your seeing in whatever direction, something that never happens to the vision that sits in your skull. These features create an augmented body experience that transforms spatial perception. When perception comes to execution, the separated dual vision encourages a dual reception, which overwhelmingly interacts with your mind-set. Eventually, you engage in a stream of “attention surfing” between the right and left visions. Or, more profoundly, you apply both. The ability to manipulate the dual vision – to point the hands in different directions, look at somebody behind you, look at your feet and the crossroad simultaneously, or look at yourself – strongly triggers the desire for “alternative seeing” and generates the freedom to observe and interact with the environment in a creative way. It transcends oneself into an amusing being, worth toying and exploring with.
As the vision is mediated into a playful experience filled with explorations, a very interesting observation emerges – its similarity to video gaming. In video games, particularly first-person games (e.g. shooting games), we use our hands on the controller to move the character in space, integrating hand (finger) movement with the virtual space (seeing). In an even deeper integration, “eeyee” combines hands and eyes, reproducing the video-gaming experience in reality and suggesting an exclusive body experience in terms of control and perception.

Social interaction!

The eeyee goggles have a set of lenses equipped with identical LCDs facing outward toward the bystanders, so that people can see, as closely as they want and in real time, exactly what “eeyee” is seeing. The intention of including the out-facing LCDs is to encourage another level of interaction with the public. “eeyee” constantly goes through an intimate interaction with himself and the device, and at the same time interacts with the people around him, embracing both “inter-interaction” and “outer-interaction”. The reactions of people are fruitful research material collecting social responses, and how they move “eeyee’s” hands and perform in front of him becomes a very interesting phenomenon to study. “eeyee”, as a moving attraction, successfully provides an amusing intervention into social life that encourages curiosity, creativity and communication.