Sunday, August 31, 2008

Specifying Gestures by Example

Author: Dean Rubine
Comment:
1. Daniel's blog
Summary:
This paper describes GRANDMA, a toolkit for adding gestures to click-and-drag interfaces, and the single-stroke gesture recognizer the toolkit uses. Single-stroke recognition is chosen to avoid the segmentation problems of multi-stroke gesture systems and to enable eager recognition. Single-stroke gestures also contribute to better usability of the UI.

Designing Gestures with GRANDMA
GRANDMA is a Model/View/Controller (MVC) framework. The gesture designer must first determine the view classes and their associated gestures. A controller is associated with a view class, and thus with all its instances and the instances of all its subclasses. To add a gesture to the system, a gesture handler is created; then a new gesture class is created, with about 15 training examples per class.
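A rough sketch of that setup might look like the following. The class and function names here are illustrative placeholders, not GRANDMA's actual API:

```python
# Hypothetical sketch of GRANDMA-style handler registration; names are
# made up for illustration, not taken from the paper's implementation.
class View:
    """Base view class; a handler attached here applies to all instances
    of this class and of every subclass."""
    handlers = []

class GestureHandler:
    def __init__(self):
        self.gesture_classes = {}  # gesture name -> list of training examples

    def add_gesture_class(self, name, training_examples):
        # Rubine suggests roughly 15 training examples per gesture class.
        self.gesture_classes[name] = list(training_examples)

def attach_handler(view_class, handler):
    # Attaching to the view class (rather than an instance) is what makes
    # the gestures available to all instances and subclass instances.
    view_class.handlers.append(handler)

handler = GestureHandler()
handler.add_gesture_class("delete", training_examples=[])  # examples elided
attach_handler(View, handler)
```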
The semantics of each gesture consist of three actions: recog, manip, and done. Recog is evaluated when the gesture is recognized (when the mouse stops moving, after a 0.2-second timeout), manip is evaluated on each subsequent mouse point, and done is evaluated when the mouse button is released. A gesture drawn over multiple gesture-handling views is recognized with priority given to the topmost view.
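The three-phase dispatch can be sketched like this (a minimal illustration, assuming the caller supplies how long the mouse has been idle; the class and callback names are invented):

```python
# Sketch of the recog / manip / done semantics: "recog" fires once when
# the mouse pauses for ~0.2 s, "manip" on each later point, and "done"
# when the button is released. Not GRANDMA's real event API.
MOUSE_STILL_TIMEOUT = 0.2  # seconds

class GestureInteraction:
    def __init__(self, on_recog, on_manip, on_done):
        self.on_recog, self.on_manip, self.on_done = on_recog, on_manip, on_done
        self.points = []
        self.recognized = False

    def mouse_move(self, x, y, idle_time):
        self.points.append((x, y))
        if not self.recognized and idle_time >= MOUSE_STILL_TIMEOUT:
            self.recognized = True
            self.on_recog(self.points)   # classify the stroke collected so far
        elif self.recognized:
            self.on_manip((x, y))        # later points feed the manip action

    def mouse_up(self):
        self.on_done(self.points)        # button release ends the interaction
```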
Recognition is based on statistical analysis of a vector of features extracted from the input gesture. Features are chosen for constant computation time per input point, the ability to work on both large and small gestures, and the ability to differentiate between gestures. The author suggests not using rejection when 'undo' is available. Eager recognition allows the system to recognize a gesture even before it is completed, without any explicit indication from the user. Multi-finger gesture recognition can be done by applying the single-stroke recognizer to each stroke and using global features to discriminate between different multi-path gestures.
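The feature-vector-plus-linear-classifier idea can be sketched as below. Only three example features are shown (Rubine's paper uses thirteen), and the class weights are made-up placeholders rather than trained values:

```python
import math

# Sketch of Rubine-style recognition: extract a feature vector from the
# stroke, score each gesture class with a linear function of the features,
# and pick the class with the highest score.
def features(stroke):
    (x0, y0), (xn, yn) = stroke[0], stroke[-1]
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in zip(stroke, stroke[1:]))
    return [
        math.hypot(xn - x0, yn - y0),  # distance between first and last point
        length,                        # total path length
        math.atan2(yn - y0, xn - x0),  # angle of the start-to-end vector
    ]

def classify(stroke, classes):
    # classes: name -> (bias, weights); score = bias + weights . features
    f = features(stroke)
    scores = {name: b + sum(w_i * f_i for w_i, f_i in zip(w, f))
              for name, (b, w) in classes.items()}
    return max(scores, key=scores.get)
```

Because every feature is a simple running sum or endpoint function, each new mouse point updates the vector in constant time, which is what makes eager recognition practical.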

Discussion:
The first thing I do not understand is the purpose of views. The second is the significance of associating views with gestures.

Note: Rubine's features are not invariant to scaling or stroke direction, so a large gesture may not be recognized if the training examples were small.
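The scale sensitivity is easy to see with a length-based feature (a hypothetical helper, not Rubine's code): a uniformly scaled copy of a stroke produces a proportionally larger value, so a classifier trained only on small gestures sees very different numbers for a large one.

```python
import math

# Total path length, one of the scale-dependent features: scaling the
# stroke by a factor k scales this feature by k as well.
def path_length(stroke):
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(stroke, stroke[1:]))

stroke = [(0, 0), (3, 4), (6, 8)]
scaled = [(3 * x, 3 * y) for (x, y) in stroke]
# path_length(scaled) is exactly 3x path_length(stroke).
```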

1 comment:

Anonymous said...

I would say not to get caught up in the concept of views for this paper. You can learn more about the MVC paradigm here.