Multimodal user input is an interesting subject because it can fundamentally change how we interact with computers. I think that implementing my own multimodal control system is an excellent challenge (not to mention very cool).
Eye tracking in particular interests me because I believe it is one of the most intuitive forms of input: we are constantly visually assessing the information presented to us on screen.
My university has agreed to lend me a Gazepoint GP3 eye tracker, and I managed to acquire a second-hand Leap Motion. The GP3 provides raw gaze data, which will require some signal processing followed by a fixation-detection algorithm, while the Leap Motion already exposes plenty of useful data through its SDK.
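For the fixation step, one common starting point is a dispersion-threshold (I-DT) algorithm. Here is a minimal Python sketch under some assumptions on my part: gaze samples arrive as (x, y) screen coordinates at a fixed sample rate, and the threshold values are purely illustrative — they would need calibrating against the GP3's actual output.

```python
# Minimal sketch of a dispersion-threshold (I-DT) fixation detector.
# Assumption: `samples` is a list of (x, y) gaze points at a fixed rate.
# The thresholds are illustrative placeholders, not tuned for the GP3.

def _dispersion(window):
    """Dispersion of a window: (max x - min x) + (max y - min y)."""
    xs = [x for x, _ in window]
    ys = [y for _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, dispersion_threshold=0.05, min_samples=6):
    """Return fixations as (start_index, end_index, centroid) tuples."""
    fixations = []
    i, n = 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if _dispersion(samples[i:j]) <= dispersion_threshold:
            # Grow the window while dispersion stays under the threshold.
            while j < n and _dispersion(samples[i:j + 1]) <= dispersion_threshold:
                j += 1
            window = samples[i:j]
            cx = sum(x for x, _ in window) / len(window)
            cy = sum(y for _, y in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations
```

For example, a run of points near one location, a saccade-like outlier, then a run near a second location should yield two fixations. A real version would also smooth the raw signal first (e.g. a median filter) and convert `min_samples` from a fixed count into a minimum duration based on the tracker's sample rate.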
The initial input system will be based on human-computer interaction research (Wachs, J. P., Kölsch, M., Stern, H., & Edan, Y. (2011). Vision-Based Hand-Gesture Applications. Communications of the ACM, 54(2), 60-71.) and will later involve a heavy element of user testing.
The user testing will be done through one or two minigames and will measure user metrics, such as accuracy and time spent on tasks, which will be combined with subjective user feedback to benchmark the overall effectiveness of my input system.