Authors:
Qingshan, Liu
Xiaoou, Tang
Hongliang, Jin
Hanqing, Lu
Songde, Ma
Summary:
This method uses locally linear embedding (LLE) to match a probe sketch against pseudo-sketches generated from the photos. The idea behind LLE is to compute a neighbor-preserving mapping from high-dimensional data to a low-dimensional feature space (usually justified by simple geometric intuition; here the mapping is complex). For faces, a patch-based strategy is followed: the photos are divided into overlapping patches. The pseudo-sketches and artist sketches are normalized by fixing the position of the eyes. K-nearest neighbors (KNN) is used to calculate the reconstruction weights of the patches; k = 5 is chosen to give a smooth transition between patches. KNDA, a nonlinear version of LDA, is then used to classify the sketch; KNDA achieves a better classification rate than LDA and PCA.
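The LLE weight step can be sketched as follows: for one photo patch, find its k = 5 nearest training photo patches, solve for reconstruction weights that sum to 1, and reuse those weights on the paired training sketch patches. This is a minimal illustration, not the paper's implementation; the patch shapes, regularizer, and random data are assumptions.

```python
import numpy as np

def lle_patch_weights(patch, train_patches, k=5, reg=1e-3):
    """Solve for the k-NN reconstruction weights of one photo patch.

    patch: (d,) flattened test patch; train_patches: (n, d) training photo
    patches. Returns neighbor indices and weights summing to 1; the same
    weights applied to the paired training *sketch* patches synthesize the
    pseudo-sketch patch.
    """
    dists = np.linalg.norm(train_patches - patch, axis=1)
    idx = np.argsort(dists)[:k]                # k nearest photo patches
    N = train_patches[idx] - patch             # center neighbors on the patch
    C = N @ N.T                                # local Gram matrix (k x k)
    C += reg * np.trace(C) * np.eye(k)         # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    return idx, w / w.sum()                    # normalize: weights sum to 1

# Illustrative synthesis of one pseudo-sketch patch (random stand-in data)
rng = np.random.default_rng(0)
photo_patches = rng.normal(size=(100, 64))     # 100 flattened 8x8 patches
sketch_patches = rng.normal(size=(100, 64))    # paired sketch patches
idx, w = lle_patch_weights(photo_patches[0] + 0.01, photo_patches)
pseudo = w @ sketch_patches[idx]               # weighted sketch combination
```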
Discussion:
Need to read the paper in detail to understand how the patch-based transformation works, and how KNN helps in the transformation.
Wednesday, July 15, 2009
Face Sketch Recognition System to Support Security Investigation
Authors:
Setiawan Hadi,
Iping Supriana Suwardi,
Farid Wazdi
Summary:
There are three ideas behind component-based face feature detection. First, some object classes can be described well by a few characteristic object parts and their geometrical relation. Second, the patterns of some object parts may vary less under pose changes than the pattern of the whole object. Third, a component-based approach may be more robust against partial occlusion than a global approach. The method runs in two stages. In the first stage, component classifiers independently detect parts of the face; in the example shown, these are the eyes, the nose, and the mouth. In the second stage, a geometrical-configuration classifier performs the final face-sketch detection by linearly combining the results of the component classifiers. SVMs are then used to classify the faces based on the components in the sketch.
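The second-stage decision described above can be sketched as a linear combination of the first-stage component scores. The component names, weights, and scores below are purely illustrative, not values from the paper:

```python
# Hedged sketch of the two-stage decision: first-stage SVMs give one
# decision value per facial component; a second-stage linear classifier
# combines them into the final face/non-face decision.

def combine_components(scores, weights, bias=0.0):
    """Linearly combine component decision values into one final score.

    scores: dict of component name -> first-stage SVM decision value.
    weights: dict of component name -> learned combination weight.
    """
    return sum(weights[name] * s for name, s in scores.items()) + bias

# Example: eyes detected confidently, mouth barely (occlusion, say)
scores = {"eyes": 1.2, "nose": 0.4, "mouth": -0.1}
weights = {"eyes": 0.5, "nose": 0.3, "mouth": 0.2}
decision = combine_components(scores, weights)
is_face = decision > 0
```

Because each part is scored independently, a weak or occluded component only lowers the combined score rather than breaking the whole detection, which is exactly the robustness argument made in the summary.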
Discussion:
The interesting idea here is to divide the face into component features and compare them with the features of the sketch, which is a kind of middle ground between the photos and the sketches.
Face sketch synthesis and recognition
Authors:
Xiaoou, Tang
Xiaogang, Wang
Summary:
The algorithm applies the eigenface method to sketches of the faces to find the weight vectors. It assumes that the transformation from a face image to its sketch is linear. This matters because the mean face representation can then be converted into the mean sketch using the linear transformation matrix. The linear assumption is also reasonable because a high-pass filter already yields a good sketch approximation.
The fiducial points in a sketch may not match those of the face, since the artist drawing the sketch may have exaggerated some features. This makes it difficult to find a linear transformation that maps both the features and the gray areas around the fiducial points. To remove this exaggeration and align the differing fiducial points, the images are warped to the mean image: an affine transformation is applied to the face images and sketches to fix the fiducial points. The eigenface method is then applied to the reduced sketch. To minimize the effect of transformation errors on classification, the artist-drawn sketch is classified with a Bayesian classifier using the Mahalanobis distance from the artist-sketch vector to the eigenfaces of the reduced sketches of the photos. The eigenfaces + Bayes method works better than PCA alone and than PCA + eigenfaces.
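The eigenface + Mahalanobis matching step can be sketched as below: fit eigenfaces on the gallery of pseudo-sketches, project the probe sketch, and pick the gallery identity with the smallest variance-normalized (Mahalanobis) distance in eigenspace. The data, dimensions, and component count are illustrative, assuming already-warped, flattened sketches:

```python
import numpy as np

def eigenface_fit(X, n_components=5):
    """Fit eigenfaces on training sketches (one flattened image per row)."""
    mean = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    evals = S[:n_components] ** 2 / (len(X) - 1)   # variance per eigenface
    return mean, Vt[:n_components], evals

def mahalanobis_match(probe, gallery, mean, eigfaces, evals):
    """Classify a probe sketch by Mahalanobis distance in eigenface space."""
    w = eigfaces @ (probe - mean)                  # probe weight vector
    G = (gallery - mean) @ eigfaces.T              # gallery weight vectors
    d2 = (((G - w) ** 2) / evals).sum(axis=1)      # per-axis variance scaling
    return int(np.argmin(d2))

# Illustrative run on random stand-in data
rng = np.random.default_rng(1)
gallery = rng.normal(size=(20, 256))               # 20 pseudo-sketches
mean, eigfaces, evals = eigenface_fit(gallery)
probe = gallery[7] + 0.01 * rng.normal(size=256)   # noisy sketch of subject 7
match = mahalanobis_match(probe, gallery, mean, eigfaces, evals)
```

Dividing each eigenspace axis by its eigenvalue is what makes this the Mahalanobis (Bayesian, under a Gaussian assumption) distance rather than plain Euclidean distance.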
Discussion:
This method again needs training data to find the eigenfaces.
Face sketch recognition
Authors:
Xiaoou, Tang
Xiaogang, Wang
Summary:
To compare geometric features under ideal conditions, a fiducial grid model with 35 fiducial points is designed.
The eigenface method uses the Karhunen-Loeve transform (KLT) to map images to vectors. This gives a highly compressed representation of the images, but when those vectors are compared with sketches, the distance between vectors of the same person may be larger than the distance between vectors of different people. To overcome this, the images are first transformed into sketches and the KLT is performed on those. Once a face image has been converted to a sketch, the comparison with the query sketch is done using the eigenface method. This sketch-to-sketch comparison works better than comparing the face image directly with the sketch (the ordinary eigenface method) and than the geometry-based method.
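The key move, converting photos to sketches before comparing, can be illustrated with a crude high-pass filter standing in for the learned photo-to-sketch transform (the paper's transform is learned; the box blur, image sizes, and random data here are assumptions):

```python
import numpy as np

def pseudo_sketch(img, k=5):
    """Crude sketch approximation: high-pass = image minus local box mean."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blur = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            blur[i, j] = padded[i:i + k, j:j + k].mean()
    return img - blur

# Match a query sketch against sketches of the gallery photos, so both
# sides of the comparison live in the same (sketch) domain.
rng = np.random.default_rng(2)
photos = rng.normal(size=(4, 16, 16))              # 4 stand-in face images
sketches = np.stack([pseudo_sketch(p) for p in photos])
query = pseudo_sketch(photos[2])                   # stands in for the sketch
d = np.linalg.norm(sketches.reshape(4, -1) - query.ravel(), axis=1)
best = int(np.argmin(d))
```

In the paper the distances would then be computed in KLT (eigenface) space rather than raw pixel space; the point illustrated here is only the domain conversion before comparison.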
Discussion:
This method needs training data to calculate the average and the weight vectors of each image. We cannot afford this in IcanDraw, since we have just one sketch drawn by the user and the photo.
Face Recognition by Expression-Driven Sketch Graph Matching
Authors:
Zijian, Xu
Jiebo, Luo
Abstract:
We present a novel face recognition method using automatically extracted sketch by a multi-layer grammatical face model. First, the observed face is parsed into a 3-layer (face, parts and sketch) graph. In the sketch layer, the nodes not only capture the local features (strength, orientation and profile of the edge), but also remember the global information inherited from the upper layers (i.e. the facial part they belong to and status of the part). Next, a sketch graph matching is performed between the parsed graph and a pre-built reference graph database, in which each individual has a parsed sketch graph. Similar to the other successful edge-based methods in the literature, the use of sketch increases the robustness of recognition under varying lighting conditions. Furthermore, with high-level semantic understanding of the face, we are able to perform an intelligent recognition process driven by the status of the face, i.e. changes in expressions and poses. As shown in the experiment, our method overcomes the significant drop in accuracy under expression changes suffered by other edge-based methods
Summary:
The face features are divided into three layers: the face layer (the whole face, a high-level feature), the part layer (features like the mouth, eyes, ...), and the sketch layer (features like line segments, blobs, ...). The sketch layer, which holds the low-level features, carries knowledge inherited from its higher-level parent (position, length, orientation, ...). This enables differential weighting of the features: the mouth can be given a lower weight (to minimize the difference between open and closed mouths) while the eyebrows are given a greater weight. The paper also provides a similarity measure between the sketches and photos.
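The expression-driven weighting idea can be sketched as a status-dependent weighted combination of per-part distances. The part names, weights, and distances below are invented for illustration, not the paper's values:

```python
# Hedged sketch of expression-driven differential weighting: per-part
# graph distances are combined with weights that depend on the detected
# face status (e.g. an open mouth is down-weighted).

def weighted_similarity(part_dists, status):
    """Combine per-part distances; smaller result means better match."""
    weights = {"eyebrows": 0.4, "eyes": 0.3, "nose": 0.2, "mouth": 0.1}
    if status.get("mouth") == "open":
        weights["mouth"] = 0.02            # expression-driven down-weight
    total = sum(weights.values())
    return sum(weights[p] * d for p, d in part_dists.items()) / total

dists = {"eyebrows": 0.1, "eyes": 0.2, "nose": 0.15, "mouth": 0.9}
closed = weighted_similarity(dists, {"mouth": "closed"})
opened = weighted_similarity(dists, {"mouth": "open"})
# with the mouth down-weighted, the large mouth distance (an expression
# difference, not an identity difference) barely affects the score
```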
Accurate Dynamic Sketching of Faces from Video
Authors:
Zijian, Xu
Jiebo, Luo
Abstract:
A sketch captures the most informative part of an object, in a much more concise and potentially robust representation (e.g., for face recognition or new capabilities of manipulating faces). We have previously developed a framework for generating face sketches from still images. A more interesting question is can we generate an animated sketch from video? We adopt the same hierarchical compositional graph model originally developed for still images for face representation, where each graph node corresponds to a multimodal model of a certain facial feature (e.g., close mouth, open mouth, and wide-open mouth). To enforce temporal-spatial consistency and improve tracking efficiency, we constrain the transition of a graph node to be only between immediate neighboring modes (e.g. from closed mouth to open mouth but not to wide-open mouth), as well as by its corresponding parts in the neighboring frames. To improve the matching accuracy, we model the local structure of a given mode as a shape-constrained Markov network (SCMN) of image patches. The preliminary results show accurate sketching results from video clips.
Summary:
Similar to the face-sketch recognition method above, this one uses three layers: face -> parts -> sketch. As the algorithm moves from one layer to the next, it identifies progressively finer details. First, moving to the parts layer, the face is decomposed into its parts (eyes, mouth, ...) and their orientation, length, position, and other features are found. Then, moving to the sketch layer, the line segments, intersections, and so on are identified, and the information from the parts layer is embedded in them.
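The temporal constraint from the abstract, that a part may only move between immediately neighboring modes across frames, can be sketched as below. The mode names and their ordering are illustrative assumptions:

```python
# Minimal sketch of the neighboring-mode transition constraint used when
# tracking a part across video frames: from any mode, the next frame may
# only stay put or move to an adjacent mode in the ordered list.

MOUTH_MODES = ["closed", "open", "wide-open"]

def allowed_transitions(mode):
    """Modes reachable from `mode` in the next frame (itself + neighbors)."""
    i = MOUTH_MODES.index(mode)
    return MOUTH_MODES[max(0, i - 1):i + 2]
```

So a closed mouth may stay closed or open, but may not jump straight to wide-open, which is what keeps the per-frame sketch temporally consistent and shrinks the search space.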
Pen based interface for a notepad - Patent 7032187
Summary:
Selection / highlighting - based on the proximity of the pointer to the object and the orientation of the pen.
Tools / icons - located on the side of the sheet; they fade away when the pointer moves away and come to the front when focused.
Two types of page-based navigation:
Dog-ear pop-up for previous/next-page navigation - a folded-page interface is provided on the bottom-right corner of the current page; the shadow of the fold is the previous page and the fold itself is the next page.
Multiple-page stack - the pages are listed vertically on the bottom right, similar to the alphabet tabs in a telephone directory.
Discussion:
The mode switch seems like a nice idea, but there will be conflicting cases where the mode switch is recognized wrongly. How would the interface react to such misrecognitions?
How intuitive is a mode switch based on the orientation of the pen, and how would the user discover this feature? I believe this interface depends on the user's adaptability here.