Lately I have been getting requests that show quite some confusion about my apps: Project IRIS and the KinesicMouse. I will try to answer the question here so that I have a source for copy & paste in the future ;) So let's start with a short explanation of the two:
The KinesicMouse is an application that lets you control mouse, keyboard and joystick inputs using head rotations and facial expressions. With the KinesicMouse it is possible to replace the physical mouse with hands-free face gesture control, so it is capable of fully controlling any application hands-free. Surfing the internet, writing e-mails, gaming or drawing in creative apps can all be done with the KinesicMouse.
While you can assign certain keys to facial expressions (e.g. keyboard shortcuts that are needed often, or keys for video game control), it is not possible to map a full keyboard to facial expressions. So you would write any text with an on-screen keyboard of your choice.
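To illustrate the idea of binding keys to expressions, here is a minimal sketch. The expression names and the lookup function are purely illustrative assumptions, not the KinesicMouse's actual configuration format:

```python
# Hypothetical sketch: mapping facial expressions to key inputs,
# the way the KinesicMouse binds shortcuts or game controls.
# Expression names and structure are illustrative only.

EXPRESSION_TO_KEY = {
    "mouth_open": "space",   # e.g. jump in a game
    "brow_raise": "ctrl+c",  # a frequently needed shortcut
    "smile_left": "r",       # e.g. reload
}

def key_for_expression(expression):
    """Return the key bound to a detected expression, or None."""
    return EXPRESSION_TO_KEY.get(expression)
```

Since only a handful of expressions are available, a small mapping like this covers shortcuts and game controls, but not a full keyboard — hence the on-screen keyboard for free text.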
The neat thing about the KinesicMouse is that it is very flexible. You can combine it with any other input devices you may already have and simply add to the controls you already use. It works alongside physical input devices, voice control and even eye/head tracking devices.
For detecting facial expressions the KinesicMouse requires a 3D camera. These 3D cameras need to be purchased separately and are available from different manufacturers.
For more information about the supported sensors you can check the following sources:
- KinesicMouse website
- KinesicMouse forums: differences between the sensors
- KinesicMouse forums: Kinect Hardware Requirements
Project IRIS is an application that is based on eye tracking technology, so the software knows where on your screen you are looking. IRIS implements a very powerful concept called "interactors". An interactor is a region on your screen that reacts when being looked at. These interactors are fully customizable in size, position, color, transparency and the actions they trigger. These actions can be mouse and keyboard inputs and various other commands.
As an example: in your favorite game you can define an interactor placed over the region where the ammo for your gun is displayed, and configure it to trigger the "R" key when looked at. You can then reload your weapon simply by glancing at your ammo display.
Interactors currently can:
- trigger keyboard keys (press once, hold, release)
- rotate the game camera to where you are looking (fps mode)
- switch the profile
- trigger programmable macros including keys, mouse actions, sounds and shell commands
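The interactor concept above can be sketched roughly like this. This is a hypothetical illustration of the idea, not IRIS's actual API or data model:

```python
# Hypothetical sketch of an "interactor": a screen region that
# fires an action when the gaze point falls inside it.
# Names and structure are illustrative, not IRIS's real API.
from dataclasses import dataclass

@dataclass
class Interactor:
    x: int       # top-left corner of the region, in pixels
    y: int
    width: int
    height: int
    key: str     # key to trigger, e.g. "r" for reload

    def contains(self, gaze_x, gaze_y):
        """True if the gaze point lies inside this region."""
        return (self.x <= gaze_x < self.x + self.width and
                self.y <= gaze_y < self.y + self.height)

def triggered_keys(interactors, gaze_x, gaze_y):
    """Collect the keys of all interactors currently looked at."""
    return [i.key for i in interactors if i.contains(gaze_x, gaze_y)]
```

For the reload example, you would place one interactor over the ammo display with `key="r"`; each time the tracker reports a gaze point inside that rectangle, the key is triggered.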
IRIS currently only works with the Tobii EyeX eye tracking device. The EyeX is available as an SDK and as a consumer product called the "Steelseries Sentry". Although the hardware device is the same, there is a difference in the licensing options. The Tobii EyeX is provided only by Tobii and must not be used for retail, and Tobii will not sell large numbers of EyeX devices to a single individual.
So from the explanations above we can conclude the following differences:
- The KinesicMouse and IRIS are built upon different technologies, so you cannot use the same hardware sensor to run both applications.
- The KinesicMouse is a complete solution, which means it can fully control any application on your PC.
- The KinesicMouse requires you to be able to control your head movement and/or facial expressions. With IRIS you just need controlled eye gaze.
- IRIS is not (yet) a full mouse replacement. It is best for controlling simple point and click eye tracking apps (Accessible Games, Sensory GRID 2, GRID 3, ...) or supporting your controls in productivity apps and gaming.
- Eyes have the benefit of responding very quickly, so with IRIS you can make some very fast inputs, which is especially useful for high-end gaming.
I hope this helps to explain the differences. Reply to this topic if you think I missed something that is important.