PIE Database, CMU. A database of 41,368 images of 68 people, each person under 13 different poses, 43 different illumination conditions, and with 4 different expressions.
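These condition counts multiply out to far more than the reported number of images, which is a quick way to see that the database does not cover every combination. A small sanity check in Python (the counts are taken from the description above; the interpretation in the final comment is an inference, not a documented fact):

```python
# Capture conditions reported for the CMU PIE database.
people, poses, illuminations, expressions = 68, 13, 43, 4

full_cross = people * poses * illuminations * expressions
print(f"full cross product: {full_cross:,}")  # 152,048
print("reported images:    41,368")

# 41,368 is well below 152,048, which suggests the variations were
# collected in separate subsets (e.g., pose x illumination and
# pose x expression) rather than in all combinations at once.
```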

Project - Face In Action (FIA) Face Video Database, AMP, CMU. The capture scenario mimics real-world applications, for example a person going through an airport check-in point.

Usage restrictions apply: the material may not be used as a complete program, in sequence with other BBC clips, in a way that implies unofficial association with the BBC, or in a manner that brings the BBC into disrepute.

The SCface database is freely available to the research community.

To address these issues, researchers at Carnegie Mellon University collected the Multi-PIE database.

It contains 337 subjects, captured under 15 viewpoints and 19 illumination conditions in four recording sessions, for a total of more than 750,000 images.
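Assuming the 15 viewpoints and 19 illumination conditions are fully crossed within a single recording, one capture of one subject already yields 285 images, which shows how the total climbs past 750,000 over four sessions. The attendance and expression counts below are illustrative placeholders, not the actual Multi-PIE session breakdown:

```python
views, illuminations = 15, 19
images_per_recording = views * illuminations  # 285 images per subject per capture

# Hypothetical scenario: 337 subjects, each attending all four sessions
# and recorded with two expressions per session (placeholder numbers).
subjects, sessions, expressions = 337, 4, 2
total = subjects * sessions * expressions * images_per_recording
print(f"{total:,}")  # 768,360 -- the same order as the reported 750,000+
```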

The database contains 4,160 static images (in the visible and infrared spectra) of 130 subjects.

Images from cameras of varying quality mimic real-world conditions and enable robust testing of face recognition algorithms, with an emphasis on different law enforcement and surveillance use-case scenarios.
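Mixing cameras of different quality is what enables cross-quality evaluation protocols, for example matching a high-quality gallery image against degraded surveillance probes. Below is a minimal sketch of how such a split might be organized; the record layout and camera labels are hypothetical, not SCface's actual file-naming scheme:

```python
from collections import defaultdict

# Hypothetical metadata records: (subject_id, camera_label, file_path).
records = [
    (1, "mugshot", "001_frontal.jpg"),
    (1, "cam1", "001_cam1.jpg"),
    (1, "cam5", "001_cam5.jpg"),
    (2, "mugshot", "002_frontal.jpg"),
    (2, "cam1", "002_cam1.jpg"),
]

# Cross-quality protocol: high-quality gallery vs. surveillance probes.
gallery = {}
probes = defaultdict(list)
for subject, camera, path in records:
    if camera == "mugshot":
        gallery[subject] = path
    else:
        probes[subject].append(path)

print(gallery)       # {1: '001_frontal.jpg', 2: '002_frontal.jpg'}
print(dict(probes))  # surveillance images grouped per subject
```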

Images were taken in an uncontrolled indoor environment using five video surveillance cameras of varying quality.

To the best of our knowledge, this is the first available benchmark that directly assesses how accurately algorithms can automatically verify the compliance of face images with the ISO standard, in an attempt to semi-automate the document-issuing process.
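In practice, such compliance checking reduces to verifying measurable geometric and photometric properties of the portrait. The sketch below illustrates the flavor of a geometric check; the function and its thresholds are illustrative placeholders, not the normative ISO/IEC 19794-5 requirements used by the benchmark:

```python
def check_portrait_geometry(img_w, img_h, left_eye, right_eye, head_top_y, chin_y):
    """Toy ISO-style geometric check; all thresholds are illustrative."""
    problems = []

    # Eyes should be roughly level (small in-plane rotation).
    dx = abs(right_eye[0] - left_eye[0])
    dy = abs(right_eye[1] - left_eye[1])
    if dx == 0 or dy / dx > 0.05:
        problems.append("head roll too large")

    # Face should be horizontally centered in the frame.
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2
    if abs(eye_mid_x - img_w / 2) > 0.05 * img_w:
        problems.append("face not centered")

    # Head should occupy a plausible fraction of the image height.
    head_h = chin_y - head_top_y
    if not 0.5 <= head_h / img_h <= 0.9:
        problems.append("head size out of range")

    return problems  # empty list means the image passes these toy checks


# A 600x800 portrait with level, centered eyes passes:
print(check_portrait_geometry(600, 800, (240, 300), (360, 300), 150, 650))  # []
```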

Because the equipment had to be reassembled for each session, there was some minor variation in images collected on different dates.

The PIE database, collected at Carnegie Mellon University in 2000, has been very influential in advancing research in face recognition across pose and illumination.

Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session, and only a few captured expressions.

User-dependent pose and expression variations are expected in the video sequences.