Twelve Anchor Points Detection by Direct Point Calculation
Abstract
Facial feature representations can be categorized into three approaches: region approaches, anchor point (landmark) approaches and contour approaches. Anchor point approaches generally provide a more accurate and consistent representation, and for this reason the anchor point approach is adopted here. However, as experimental data sets have grown larger, algorithms have become more sophisticated, even though the reported recognition rates are not always as high as in some earlier works. This causes higher complexity and computational burden, and indirectly affects the processing time of real-time face recognition systems. Here, an approach is proposed that calculates the points directly from the model's text file to detect twelve anchor points (including the nose tip, mouth centre, right eye centre, left eye centre, upper nose and chin). To obtain the anchor points, the nose tip is detected first; then the upper nose and face points are localized; finally, the outer and inner eye corners are localized. An experiment has been carried out on 420 models taken from GavabDB, covering frontal views as well as variations in expression and pose. Our results are compared with those of three similar studies and show that better results are obtained, with a median error of around 5.53 mm over the eight points.
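The abstract states only that the anchor points are calculated directly from the model's text file, starting from the nose tip. The sketch below is an illustration of that idea, not the authors' algorithm: it assumes the text file holds one "x y z" point per line (as could be exported from a GavabDB mesh) and that, for a roughly frontal pose, the nose tip is the point of maximum depth. The file name, the millimetre band and the upper_nose helper are hypothetical.

```python
# Hypothetical sketch of direct point calculation from a point-cloud text file.
import numpy as np

def load_points(path):
    """Read a whitespace-separated x y z point cloud from a text file."""
    return np.loadtxt(path)                      # shape: (N, 3)

def nose_tip(points):
    """Assumption: for a frontal pose, the nose tip has the largest depth (z)."""
    return points[np.argmax(points[:, 2])]

def upper_nose(points, tip, band_mm=30.0):
    """Rough illustration: search for the upper-nose point in a vertical band
    just above the nose tip and take the most protruding point there."""
    band = points[(points[:, 1] > tip[1]) & (points[:, 1] < tip[1] + band_mm)]
    return band[np.argmax(band[:, 2])] if len(band) else tip

if __name__ == "__main__":
    pts = load_points("face_model.txt")          # hypothetical file name
    tip = nose_tip(pts)
    print("nose tip:", tip, "upper nose:", upper_nose(pts, tip))
```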
Related papers
IJCER, 2012
This paper proposes the automatic extraction of geometric and texture features of the face from the frontal view. A cumulative histogram approach is used to extract the geometric features, and co-occurrence matrices are used to extract the texture features. From the input image, the face location is detected using the Viola-Jones algorithm, and, based on the structure of the human face, four relevant regions (left eye, right eye, nose and mouth) are cropped, each referred to as an object. For the geometric features, the histogram of each object is computed and its cumulative histogram values are used with varying threshold values to create a new filtered binary image of each object; the corner end-points of each object (binary image) are then detected using a linear search technique. For the texture features, the co-occurrence matrix of each object is computed, and from this matrix the angular second moment, entropy, maximum probability of occurrence of pixels, inverse difference, inverse difference moment, mean and contrast of each object are extracted.
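As a hedged illustration of the texture-feature step only (not the paper's code), the snippet below computes co-occurrence (GLCM) statistics for one cropped face region with scikit-image; the distance and angle choices are assumptions, and entropy is computed directly from the normalized matrix.

```python
# Illustrative GLCM texture features for one cropped face region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region_u8):
    """region_u8: 2-D uint8 grayscale crop (e.g. an eye, nose or mouth box)."""
    glcm = graycomatrix(region_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p)[0, 0]
             for p in ("ASM", "contrast", "homogeneity")}   # ASM = angular second moment
    p = glcm[:, :, 0, 0]
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy
    feats["max_probability"] = float(p.max())
    feats["mean"] = float(np.mean(region_u8))
    return feats
```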
International Journal of Engineering and Advanced Technology, 2019
This paper addresses human facial landmark point detection, which is very important in image processing for face detection, face identification, face reconstruction, face alignment, head pose estimation and facial expression analysis. Facial landmarks are essential points for face processing operations ranging from biometric recognition to mental state analysis. In this paper, Haar cascade face detection is used for face detection and tracking. A Histogram of Oriented Gradients (HOG) descriptor together with a support vector machine (SVM) classifier is used to detect 68 landmark points covering the right and left eyebrows, left and right eyes, nose, lips, chin and jaw. Existing methods work effectively, but many issues occur in detection due to different head poses, facial expressions, facial occlusion, illumination, colour, shadowing, self-shadowing, etc. The performance of the experimental results shows the advant...
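A practical stand-in (not this paper's implementation) for 68-point landmark detection is dlib: its frontal face detector is itself HOG plus a linear SVM, while the 68-point shape predictor it ships is a regression-tree model rather than an SVM, so the snippet only illustrates the landmark layout. The model file path and input image name are assumptions.

```python
# 68-point facial landmark sketch using dlib (model file downloaded separately).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()                       # HOG + linear SVM
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                                       # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 1):                                     # upsample once
    shape = predictor(gray, rect)
    for i in range(68):                                            # eyebrows, eyes, nose, lips, jaw
        cv2.circle(img, (shape.part(i).x, shape.part(i).y), 2, (0, 255, 0), -1)
cv2.imwrite("landmarks.jpg", img)
```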
Proceedings of the 1st International Workshop on Bio-inspired Human-Machine Interfaces and Healthcare Applications, 2010
Facial feature localization is an important part of various applications such as face recognition, facial expression detection and human-computer interaction. It plays an essential role in human face analysis, especially in searching for facial features (mouth, nose and eyes) once the face region has been located within the image. Most of these applications require face and facial feature detection algorithms. In this paper, a new method is proposed to locate facial features. A morphological operation is used to locate the pupils of the eyes and to estimate the mouth position relative to them. Once the features are located, their boundaries are computed. The results obtained from this work indicate that the algorithm is very successful in recognising different types of facial expressions.
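A minimal sketch of this idea, assuming a grayscale frontal face crop: dark-blob search with morphological opening to locate the pupils, then the mouth position estimated from the eye midpoint. The threshold value and the vertical offset ratio are assumptions, not the paper's parameters.

```python
# Pupil localization by morphology, with a heuristic mouth estimate.
import cv2
import numpy as np

def locate_pupils_and_mouth(face_gray):
    h, w = face_gray.shape
    upper = face_gray[: h // 2, :]                           # eyes lie in the upper half
    _, dark = cv2.threshold(upper, 60, 255, cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    dark = cv2.morphologyEx(dark, cv2.MORPH_OPEN, kernel)    # remove small dark noise
    n, _, stats, centroids = cv2.connectedComponentsWithStats(dark)
    blobs = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA],
                   reverse=True)[:2]                          # two largest dark blobs
    eyes = sorted(tuple(centroids[i]) for i in blobs)         # left then right pupil
    if len(eyes) < 2:
        return None
    mid_x = (eyes[0][0] + eyes[1][0]) / 2
    eye_y = (eyes[0][1] + eyes[1][1]) / 2
    mouth = (mid_x, eye_y + 1.1 * abs(eyes[1][0] - eyes[0][0]))  # heuristic offset
    return eyes, mouth
```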
International Journal for Scientific Research and Development, 2015
Image feature detection is a fundamental issue in many intermediate-level vision problems such as stereo, motion correspondence, image registration and object recognition. In this paper we present an approach to feature detection and provide extensive experimental results to demonstrate its potential application to several image analysis problems. Automatic recognition of people is a challenging problem which has received much attention in recent years due to its many applications in fields such as law enforcement, security and video indexing. Face recognition is a very challenging problem and, to date, no technique provides a robust solution to all situations and applications that face recognition may encounter.
Image Information Processing (ICIIP), 2011 IEEE International Conference on , 2011
Detection and location of the face, as well as extraction of facial features from images, is an important stage for numerous facial image interpretation tasks. Detection of facial feature points, such as the corners of the eyes, lip corners and nostrils, is crucial. In this paper a method for automatic facial feature point detection in image sequences is introduced. The method uses image normalization and thresholding techniques to detect 14 facial feature points. The algorithm proposed by Wolf Kienzle is used for face recognition. The detected face...
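As a rough illustration only: the snippet below normalizes a face crop and picks corner-like candidate points (eye corners, lip corners, nostrils). The paper's 14-point method is threshold-based; Shi-Tomasi corner detection is used here purely as a stand-in, and the file name and quality parameters are assumptions.

```python
# Normalization plus corner-like candidate points on a face crop.
import cv2

face = cv2.imread("face_crop.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical crop
norm = cv2.equalizeHist(face)                               # illumination normalization
corners = cv2.goodFeaturesToTrack(norm, maxCorners=14,
                                  qualityLevel=0.05, minDistance=10)
print([tuple(map(int, c.ravel())) for c in corners])
```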
2002
An algorithm for the automatic detection of features in 2D color images of human faces is presented. It first identifies the eyes by means of a template matching technique, a neural network classifier and a distance measure. It then localizes the lips and nose using a non-linear edge detector and color information. The method is scale-independent, works on images of frontal, rotated or slightly tilted faces, and does not require any manual setting or operator intervention.
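A hedged sketch of the eye-localization idea via template matching with OpenCV; the neural-network verification and the distance measure described in the paper are omitted, and the template and image file names are assumptions.

```python
# Eye candidate localization by normalized cross-correlation template matching.
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
tmpl = cv2.imread("eye_template.png", cv2.IMREAD_GRAYSCALE)
res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)    # similarity map
_, score, _, top_left = cv2.minMaxLoc(res)                  # best-match location
print("eye candidate at", top_left, "score", round(score, 3))
```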
2011
Research on human face detection plays an important role in the world of biometrics, and many studies have been carried out over the last decades. Face detection is a very easy task for humans, who can differentiate faces from non-faces at a glance; nevertheless, face detection is very challenging for computers and the digital world, because faces have a wide range of variability in size, texture, color and structure. Numerous studies have been done in the field of face detection, and some of them have been analysed for operation under varying conditions such as facial expression, occlusion, illumination and head rotation. In this paper, a new method is implemented that uses a unique template of the eyes and nose for detection, which makes the system fast, simple and suitable for real-time face detection. The experiments conducted indicate that the proposed technique has encouraging performance compared to benchmark methods.
2009
This project presents a facial feature extraction system and a face recognition system. The test images used for this project are of various types: there are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, the facial expression (open or closed eyes, smiling or not smiling) and facial details (glasses or no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position, with tolerance for some side movement. The facial feature extraction system focuses mainly on eye extraction: the eyes are extracted from the face by finding the centroid of the eye region using a thresholding technique. For the recognition system, Principal Component Analysis (PCA) is used to match the test image with the database images; the system finds which database image has the maximum percentage of similarity with the pattern of the test image.
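A minimal eigenface-style sketch of the PCA matching step, assuming a small gallery of equally sized grayscale face images flattened to vectors; the cosine similarity measure and the component count are assumptions rather than this project's settings.

```python
# PCA (eigenface) projection and nearest-gallery matching.
import numpy as np
from sklearn.decomposition import PCA

def build_pca(gallery):                        # gallery: (n_images, n_pixels) float array
    pca = PCA(n_components=min(40, len(gallery) - 1))
    proj = pca.fit_transform(gallery)          # gallery faces in eigenface space
    return pca, proj

def match(pca, proj, probe):
    """Project the probe face and return the closest gallery index and its score."""
    p = pca.transform(probe.reshape(1, -1))
    sims = (proj @ p.T).ravel() / (np.linalg.norm(proj, axis=1) * np.linalg.norm(p) + 1e-9)
    return int(np.argmax(sims)), float(np.max(sims))
```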
2004
In this paper a completely automatic face recognition system is presented. It consists of two main modules: in the first, the facial fiducial points are localized, and in the second the face is characterized by applying a bank of Gabor filters at the localized fiducial points. This method is an evolution of the one we presented previously: the fiducial point estimation is more efficient and self-correcting, and the face characterization has been modified.
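An illustrative sketch of the Gabor characterization step: a small filter bank sampled at given fiducial points to build a feature vector ("jet"), in the spirit of the paper; the scales, orientations and remaining kernel parameters here are arbitrary assumptions.

```python
# Gabor filter bank responses sampled at fiducial points.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigmas=(4, 8), orientations=4):
    kernels = []
    for sigma in sigmas:
        for t in range(orientations):
            theta = t * np.pi / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                              lambd=10.0, gamma=0.5, psi=0))
    return kernels

def jets(gray, points, kernels):
    """Magnitude of each filter response at each fiducial point (x, y)."""
    responses = [cv2.filter2D(gray.astype(np.float32), -1, k) for k in kernels]
    return np.array([[abs(r[y, x]) for r in responses] for (x, y) in points])
```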
