I received a good collection of feature requests from a user and would like to publish them here in the forums along with my answers, as they might be interesting for others too.
"The most important thing for me: when using an expression to toggle a key between held down and released, and nothing visibly happens on screen, there is no way to know whether it has been triggered or not. For example, I use an eyebrow raise to hold the Shift key down so that I can queue commands in RTS games. While the Shift key is held down nothing visibly happens on screen, but whether it is held or not affects me and what I do on screen. For instance, I cannot use jump to gaze point by puffing out my cheeks, because while Shift is held Project IRIS detects Shift+F7 instead of just F7 and it doesn't work. And if I perform an eyebrow raise but the KinesicMouse does not detect it, I cannot queue commands and will not realise it until after I have tried. This causes constant delays and frustration, as I never know whether the Shift key is held down, whether my eyebrow raise was detected, or whether an unintentional one was. I am wondering if it would be possible to add audio cues. The user would select from different hold and release sounds, or perhaps even have an option to continually play a sound softly (user volume control possible?) while the key is held down. I believe this would make things much easier and less frustrating."

You are right about that. User feedback is essential precisely because of the lack of haptic feedback, which leaves two senses we can use: audio and visual. Some kind of visual feedback has been on my mind for quite a long time, but I was never able to come up with a good concept of what it could look like. Now that I have made IRIS, the answer is already there: Interactors. Interactors are incredibly flexible and provide great visual feedback, so my goal is to include them in the KM; instead of gaze, they will be triggered by expressions.
For audio feedback I will include a new macro command that plays a sound file.
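The idea is easy to prototype outside the KM. Below is a minimal sketch, assuming Windows and two short WAV files (key_hold.wav and key_release.wav are made-up names); the two functions would be called by whatever detects the expression, which is not shown here.

```python
import winsound

HOLD_SOUND = "key_hold.wav"        # hypothetical cue file names
RELEASE_SOUND = "key_release.wav"

def on_key_hold(loop: bool = False) -> None:
    """Play the hold cue; with loop=True it repeats until stopped
    (the 'continually play a sound while the key is held' option)."""
    flags = winsound.SND_FILENAME | winsound.SND_ASYNC
    if loop:
        flags |= winsound.SND_LOOP   # SND_LOOP requires SND_ASYNC
    winsound.PlaySound(HOLD_SOUND, flags)

def on_key_release() -> None:
    """Stop any looping cue, then play the release cue."""
    winsound.PlaySound(None, winsound.SND_ASYNC)  # None stops current playback
    winsound.PlaySound(RELEASE_SOUND, winsound.SND_FILENAME | winsound.SND_ASYNC)
```

Note that winsound has no volume control, so the "play softly" part would need a richer audio backend.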
"I asked before whether it might be possible to train facial expressions in the future, as the way I perform some of them may be slightly different. I don't think I saw a response?"

Well, this is actually a good idea, but also a very difficult one to realize. The signals of the KM are currently the smallest units you can detect in a human face. A trainable expression would be a certain combination of the signals that are already available, and there should be machine learning algorithms that can learn such combinations quite well. So this feature is technically possible, but it requires resources to implement that I currently do not have. Who knows, we might see this in the future.
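To sketch what "a combination of the already available signals" could look like in practice: treat each camera frame as a vector of signal activations and train a small classifier on examples of the custom expression. Everything below (the signal values, the three-signal layout) is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one frame of KM signal
# activations (e.g. [cheek_puff_left, cheek_puff_right, brow_raise])
# recorded while the user holds (1) or relaxes (0) the custom expression.
X = np.array([
    [0.9, 0.8, 0.1],   # custom expression held
    [0.8, 0.9, 0.0],
    [0.1, 0.2, 0.7],   # some other expression
    [0.0, 0.1, 0.1],   # neutral face
])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# At runtime, each new frame of signals is classified; the predicted
# probability doubles as an adjustable activation threshold.
frame = np.array([[0.85, 0.9, 0.05]])
print(model.predict_proba(frame)[0, 1])  # e.g. > 0.5 -> expression active
```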
"Would it be possible to print out profiles? I have several different profiles for several different games… My memory can fail me on which expressions perform which task in which game."

Hmm, currently this is not planned, and you are the first one to suggest this feature. If your setups are fixed, you could manually make such lists. You can use the settings.xml file for reference; you can find it in "C:\Users\[**YOUR_USERNAME**]\AppData\Local\Xcessity Software Solutions\KinesicMouse".
Just open this file in a text editor and you will find your key bindings at the end of a <SETTINGS> entry. The format is pretty readable except for the key codes, which you would have to replace with the actual keys you assigned.
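If you want the printable list without doing it by hand, a few lines of scripting can dump the bindings. This is only a sketch: the KEYBINDING element and its signal/key attribute names are guesses, so check the real structure in a text editor first.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Default location mentioned above; adjust if your install differs.
settings = Path.home() / "AppData/Local/Xcessity Software Solutions/KinesicMouse/settings.xml"

root = ET.parse(settings).getroot()

# Element and attribute names below are guesses for illustration.
for binding in root.iter("KEYBINDING"):
    signal = binding.get("signal")
    keycode = binding.get("key")
    # Key codes would still need translating to readable key names.
    print(f"{signal} -> key code {keycode}")
```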
Or you could maybe just take a few screenshots of the Keyboard tab in the settings. Since the UI is mostly white, a print should not look too shabby.
"When binding keys in the keyboard settings there is a threshold slider. Would it be possible to have some kind of test so you can see, right there in the KinesicMouse, how well your expression is being tracked? What I have started doing is opening WordPad, binding the expression to a key and practising there to see how accurate it is with the threshold. It might be easier if you could do something like that in the program."

Again, a very good suggestion. There should be an indicator constantly displaying the activation state of the signal. Currently you would need to check this in the Signals view, and there you can only estimate where the threshold line would be. So this is a nice addition I would love to implement.
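Until such an indicator exists, you can picture it like the console sketch below: a live bar for the raw signal value next to its on/off state relative to the threshold. read_signal() is a stand-in (a slow sine wave here), since the KM does not expose a public API for its signals.

```python
import math
import time

THRESHOLD = 0.6  # the value set on the threshold slider

def read_signal(t: float) -> float:
    """Stand-in for reading one KM signal's activation (0..1);
    a sine wave here, just to make the sketch runnable."""
    return 0.5 + 0.5 * math.sin(t)

start = time.time()
while True:
    value = read_signal(time.time() - start)
    bar = "#" * int(value * 40)
    state = "ACTIVE" if value >= THRESHOLD else "idle"
    print(f"{value:4.2f} |{bar:<40}| {state}")
    time.sleep(1 / 30)  # roughly the 30 fps camera rate
```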
"Will the mouse movement get smoother? Even in your videos I've noticed that the cursor movement is quite jerky. I'm currently using FaceTrackNoIR in relative mode, as it is much smoother."

Yes, this is also a top-priority task. I did some research on signal filters and actually built a pretty nifty filter for IRIS. Currently, in the KM, you can get good results with 30+ fps and settings tweaked for a good balance between stability and responsiveness, yet there is room for improvement. The smoothing in the KM is done with a plain mean filter that does not distinguish between focusing on a small item and moving fast across the screen. This is something I will change in the future to get even better results.
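For anyone curious about the difference, here is a rough sketch of both ideas: the plain moving average the KM uses now, and a speed-adaptive exponential filter in the spirit of the One Euro filter, which smooths heavily while you hover over a small item but follows quickly during fast sweeps. The constants are made up and would need tuning; one filter instance would run per cursor axis.

```python
from collections import deque

class MeanFilter:
    """Plain moving average: identical smoothing whether you are
    hovering over a small item or sweeping across the screen."""
    def __init__(self, size: int = 10):
        self.window = deque(maxlen=size)

    def update(self, x: float) -> float:
        self.window.append(x)
        return sum(self.window) / len(self.window)


class AdaptiveFilter:
    """Exponential smoothing whose strength eases off as the input
    speeds up, in the spirit of the One Euro filter."""
    def __init__(self, min_alpha: float = 0.05, gain: float = 0.5):
        self.min_alpha = min_alpha  # heavy smoothing when nearly still
        self.gain = gain            # how quickly alpha ramps with speed
        self.prev = None

    def update(self, x: float) -> float:
        if self.prev is None:
            self.prev = x
            return x
        speed = abs(x - self.prev)             # per-frame movement
        alpha = min(1.0, self.min_alpha + self.gain * speed)
        self.prev += alpha * (x - self.prev)   # low alpha = stable, high = responsive
        return self.prev
```

The key property of the adaptive version is that alpha grows with cursor speed, trading smoothness for responsiveness exactly when responsiveness matters.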