In industries such as finance, telecommunications, and healthcare, online identity verification is a critical process. It is often implemented as a multi-step digital identity workflow; one common step is face search, which matches an end user's face against the faces already associated with an existing account. Building an accurate face search system involves several steps: detecting human faces in images, extracting each face into a vector representation, storing those vectors in a database, and comparing new faces against the stored entries.
Amazon Rekognition simplifies building such a system by providing pre-trained models that you invoke through simple API calls. In addition, Amazon Rekognition can combine multiple images of the same person's face into a user vector, improving accuracy even further.
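As an illustration of how these pre-trained models are called, the following minimal sketch uses boto3 to run the DetectFaces API on an image stored in Amazon S3. The bucket and object names are hypothetical placeholders, and it assumes AWS credentials are already configured.

```python
import boto3

# Create a Rekognition client (assumes AWS credentials are configured)
rekognition = boto3.client("rekognition")

# Detect faces in an image stored in S3; bucket and key are placeholders
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-input-bucket", "Name": "photos/applicant.jpg"}},
    Attributes=["DEFAULT"],
)

# Each FaceDetail includes a bounding box and a detection confidence score
for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"Face at {box} with confidence {face['Confidence']:.1f}%")
```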
This post demonstrates how to use the Amazon Rekognition Face Search APIs with user vectors, comparing the results of face matching with and without user vectors. Along the way, it uses Amazon Rekognition to create a collection and populate it with faces extracted from images through API calls.
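As a sketch of those collection calls, the snippet below creates a collection and indexes a face from an S3 image into it. The collection ID, bucket, and key are placeholders rather than values from this post.

```python
import boto3

rekognition = boto3.client("rekognition")

# Create a collection to hold face vectors (the ID is a placeholder)
rekognition.create_collection(CollectionId="face-search-demo")

# Index a face: Rekognition detects the face, extracts a face vector,
# and stores the vector (not the image) in the collection
response = rekognition.index_faces(
    CollectionId="face-search-demo",
    Image={"S3Object": {"Bucket": "my-input-bucket", "Name": "photos/enrollment.jpg"}},
    MaxFaces=1,            # index only the most prominent face in the image
    QualityFilter="AUTO",  # skip low-quality detections
)

for record in response["FaceRecords"]:
    print("Indexed face with FaceId:", record["Face"]["FaceId"])
```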
When you add a face to a collection, Amazon Rekognition does not store the actual face image. Instead, it stores a face vector, a numerical representation of the face. In June 2023, AWS launched user vectors, a new feature that significantly improves face search accuracy by using multiple face images of a user.
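A minimal sketch of the user vector workflow follows, assuming faces have already been indexed into the collection as above. The user ID and face IDs are hypothetical placeholders; in practice the face IDs come from the IndexFaces responses.

```python
import boto3

rekognition = boto3.client("rekognition")

COLLECTION_ID = "face-search-demo"  # same placeholder collection as above

# Create a user entry in the collection (the user ID is a placeholder)
rekognition.create_user(CollectionId=COLLECTION_ID, UserId="user-001")

# Associate several previously indexed face IDs with the user;
# Rekognition aggregates them into a single user vector
rekognition.associate_faces(
    CollectionId=COLLECTION_ID,
    UserId="user-001",
    FaceIds=["face-id-1", "face-id-2", "face-id-3"],  # placeholder FaceIds
)

# Search against user vectors with a new image, instead of matching
# against individual face vectors
response = rekognition.search_users_by_image(
    CollectionId=COLLECTION_ID,
    Image={"S3Object": {"Bucket": "my-input-bucket", "Name": "photos/login.jpg"}},
    UserMatchThreshold=80,  # minimum similarity (percent) to return a match
)

for match in response["UserMatches"]:
    print(match["User"]["UserId"], f"{match['Similarity']:.1f}%")
```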
In conclusion, this post describes how to use Amazon Rekognition user vectors to implement face search against a collection of users' faces. It shows how using multiple face images per user improves face search accuracy, and compares the results against matching with individual face vectors. The example code provides a solid foundation for building a working face search system.