Clement Creusot, PhD


PhD Webpage Archive


This is my PhD webpage. I submitted my PhD thesis on automatic 3D face landmarking in December 2011 and passed my viva in March 2012. This PhD project, in the Department of Computer Science at the University of York, was supervised by Dr. Nick Pears and Prof. Jim Austin. My work was to evaluate new techniques for automatic face landmarking and face recognition, focusing mainly on machine-learning-based 3D-shape-analysis techniques for facial landmarking. Landmarking using only 3D data is a difficult problem, especially in real-world non-cooperative cases where facial features have to be detected despite occlusions, pose variations, and spurious data in the captured 3D scans. This page provides information about this three-year research project, with links to publications and downloadable source code. For information beyond the scope of this page, please contact me directly by email.

Research Interests

The first expected outcome of my research is a face recognition technique more robust to face orientation than the current state of the art. The applications of this kind of research are mainly security (e.g. 3D CCTV) and human-machine interaction (e.g. vision in robotics). Automatic landmark detection techniques can also help in any domain where face labelling is needed on large databases, from computer vision to psychology.

Automatic landmark detection can be used in face recognition for two purposes: it can help to find correspondences between two faces before matching, and it can help to extract discriminative information about the face being processed. This information can be featural, supported by the neighbourhood of each landmark (e.g. the local shape around the landmarks), or configural, linked to the relationships between them (e.g. the distances between landmarks).
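As a toy illustration of configural information, the sketch below computes pairwise distances between a few landmarks. The landmark names and coordinates are invented for the example, not taken from any dataset:

```python
import numpy as np

# Hypothetical landmark positions (x, y, z), in millimetres, for illustration only.
landmarks = {
    "nose_tip":         np.array([0.0,   0.0, 50.0]),
    "left_eye_corner":  np.array([-30.0, 40.0, 20.0]),
    "right_eye_corner": np.array([30.0,  40.0, 20.0]),
}

def configural_features(points):
    """Pairwise Euclidean distances between landmarks (configural information)."""
    names = sorted(points)
    feats = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            feats[(a, b)] = float(np.linalg.norm(points[a] - points[b]))
    return feats

feats = configural_features(landmarks)
```

Featural information would instead be computed from the surface patch around each landmark; the configural part above only uses the landmark positions themselves.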

For landmark detection, I combine sets of simple fields, for example several types of curvature and volumetric information, as well as crest lines and isolines on the surface, to detect points. The repeatability of those landmarks is tested using manually landmarked datasets such as the Face Recognition Grand Challenge database (FRGC) and the Bosphorus database.

Fig: Example of meshes from the FRGC (left) and Bosphorus (right) databases.
Examples - FRGC    Examples - Bosphorus
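The combination of fields can be pictured as a weighted sum of normalized per-vertex descriptor maps. This is a minimal sketch under that assumption, not the actual detector; the field names and weights are invented:

```python
import numpy as np

def combine_fields(fields, weights):
    """Linearly combine per-vertex descriptor maps into one saliency score.

    fields:  dict name -> (n_vertices,) array (e.g. a curvature map)
    weights: dict name -> float
    """
    names = sorted(fields)
    score = np.zeros_like(fields[names[0]], dtype=float)
    for name in names:
        f = fields[name]
        # Normalize each field to [0, 1] so the weights are comparable.
        rng = f.max() - f.min()
        norm = (f - f.min()) / rng if rng > 0 else np.zeros_like(f)
        score += weights.get(name, 0.0) * norm
    return score

# Two made-up fields over a three-vertex mesh.
fields = {"mean_curvature": np.array([0.0, 1.0, 2.0]),
          "volume":         np.array([2.0, 0.0, 2.0])}
score = combine_fields(fields, {"mean_curvature": 0.5, "volume": 0.5})
```

In practice the weights would come from training rather than being chosen by hand.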

For both landmark labelling and face matching, we construct hypergraphs upon the detected landmarks and match them onto models using hypergraph-matching techniques. The hypergraph structure in our code allows us to store and match relational information of any degree. In the case of complete hypergraphs (all-to-all connectivity), we usually do not go above degree 3 for speed reasons.

Fig: Global Process Workflow
Global Process Workflow
Fig: Detailed Landmarking Workflow
Detailed Landmarking Workflow
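A minimal sketch of such a hypergraph container, storing relational features keyed by sets of nodes and capped at degree 3. The class and method names are illustrative and do not correspond to the actual C++ code:

```python
import itertools

class Hypergraph:
    """Minimal hypergraph storing relational features of degree 1 to 3."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.edges = {}  # frozenset of nodes -> relational feature

    def add_relation(self, nodes, feature):
        assert 1 <= len(nodes) <= 3, "degree capped at 3 for speed"
        self.edges[frozenset(nodes)] = feature

    def complete(self, degree, feature_fn):
        """All-to-all connectivity up to the given degree."""
        for d in range(1, degree + 1):
            for combo in itertools.combinations(self.nodes, d):
                self.add_relation(combo, feature_fn(combo))

# Complete hypergraph of degree 2 on four landmark candidates:
hg = Hypergraph(["l1", "l2", "l3", "l4"])
hg.complete(2, lambda combo: len(combo))  # dummy feature: the relation's arity
```

A degree-2 relation could store a distance, a degree-3 relation a triangle shape; here a dummy feature stands in for both.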

Our keypoint-detection system evaluates, for every vertex, a score indicating how likely it is to be a point of interest, using a pre-computed statistical dictionary of local shapes. The correlation between the input vertices and the learnt features is computed using a large number of local shape descriptors (mainly based on discrete-differential-geometric properties) of the local surface patch around each vertex. The number and nature of the local descriptors, as well as the size of the neighbourhoods on which they are computed and the way they are combined, can be optimized using basic machine-learning techniques such as LDA (linear discriminant analysis) or AdaBoost (adaptive boosting).

Fig: Example of keypoint detection using a dictionary of 14 local shapes.
Keypoint detection
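The dictionary-based scoring step can be sketched as follows: each vertex descriptor is compared against every learnt local shape by normalized correlation, and the best correlation becomes the vertex score. This is a simplified stand-in for the actual system, with made-up descriptor dimensions:

```python
import numpy as np

def keypoint_scores(descriptors, dictionary):
    """Score each vertex by its best correlation with any learnt local shape.

    descriptors: (n_vertices, d) array of per-vertex descriptor vectors
    dictionary:  (n_shapes, d) array of learnt local-shape prototypes
    """
    def normalize(m):
        m = m - m.mean(axis=1, keepdims=True)
        norms = np.linalg.norm(m, axis=1, keepdims=True)
        return m / np.where(norms == 0, 1, norms)

    # (n_vertices, n_shapes) matrix of normalized correlations.
    corr = normalize(descriptors) @ normalize(dictionary).T
    return corr.max(axis=1)
```

Vertices whose score exceeds a threshold would then be retained as keypoints; the threshold and the descriptor set are exactly what the learning stage (LDA, AdaBoost) would tune.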

Here are examples of landmarking results obtained in October 2011 on the two databases using our keypoint-detection system coupled with a RANSAC geometric-registration technique.

Fig: Example of landmarking results for the FRGC (left) and Bosphorus (right) databases. In the right picture, the green points represent the ground truth and the blue points our landmarks.
Examples - FRGC    Examples - Bosphorus
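The RANSAC registration step can be pictured as repeatedly fitting a rigid transform to a random triple of candidate-model correspondences and keeping the transform with the most inliers. The sketch below uses the standard Kabsch/SVD solution for the rigid fit; it is a simplified illustration, not the published implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def ransac_register(candidates, model, n_iter=200, tol=5.0, rng=None):
    """Register candidate keypoints to model landmarks via RANSAC sampling."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers, best_transform = 0, None
    for _ in range(n_iter):
        ci = rng.choice(len(candidates), 3, replace=False)
        mi = rng.choice(len(model), 3, replace=False)
        R, t = rigid_transform(candidates[ci], model[mi])
        moved = candidates @ R.T + t
        # Inlier: a transformed candidate lies within tol of some model landmark.
        dist = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
        inliers = int((dist.min(axis=1) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_transform = inliers, (R, t)
    return best_inliers, best_transform
```

The consensus transform can then be used to transfer the model's landmark labels onto the matched keypoints.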

For more details, please refer to the corresponding publications.


Learning Landmarks and Their Detector Functions for a Points-based Sparse 3D Face Model
Clement Creusot and Nick Pears
Under Review

A Machine-learning Approach to Keypoint Detection and Landmarking on 3D Meshes
Clement Creusot, Nick Pears, and Jim Austin
Special Issue on 3D Imaging, Processing and Modeling Techniques - International Journal of Computer Vision (IJCV), 2013

3D Landmark Model Discovery from a Registered Set of Organic Shapes
Clement Creusot, Nick Pears, and Jim Austin
Point Cloud Processing (PCP) Workshop at the Computer Vision and Pattern Recognition Conference (CVPR) 2012, Providence, Rhode Island.

Automatic Keypoint Detection on 3D Faces Using a Dictionary of Local Shapes
Clement Creusot, Nick Pears, and Jim Austin
3DIMPVT 2011, pp. 204-211, Hangzhou, China.

3D face landmark labelling
Clement Creusot, Nick Pears, and Jim Austin
In Proc. ACM Workshop on 3D Object Retrieval, pp. 27-32, Firenze, Italy, Oct. 2010.


All the scripts and applications provided here have only been tested on Linux (Ubuntu and Linux Mint). If you try them on other operating systems, please keep me informed of any problems or fixes you find related to interoperability.
All the programs provided here are released under the GPL v3, unless specified otherwise in the headers.

Where is the interesting stuff? In order to comply with university regulations, I cannot provide the source of programs that might be sensitive in terms of intellectual property. This applies to most of my C++ code. However, if you have a request about a specific program I have developed (feature detector, hypergraph matcher, ...), please contact the head of the ACA group or myself for more information.


This research project has been partly supported by the European Union FP6 Marie Curie Actions MRTN-CT-2005-019564 "EVAN".