Build Facial Recognition with Python, OpenCV, OpenAI CLIP and pgvector

Facial recognition technology has taken the world by storm, revolutionizing industries such as security, social media, and even our smartphones. With Python, OpenCV, and PostgreSQL at our disposal, we embark on a journey to develop a powerful facial recognition system.

This blog will walk you through the process of creating a facial recognition system in four parts. We will explore the key components and step-by-step implementation of this exciting project.

Flow of the program:

  1. Download all the pictures into a local folder.

  2. Identify and extract faces from the pictures.

  3. Calculate facial embeddings from the extracted faces.

  4. Store these facial embeddings in a PostgreSQL database using the pgvector data type.

  5. Obtain a colleague’s picture for identification purposes.

  6. Recognize the face within the provided picture.

  7. Calculate embeddings for the identified face in the provided picture.

  8. Utilize the pgvector distance function to retrieve the closest matching faces and corresponding photos from the database.

Part 1: The Foundation

Before we dive into the code, let’s grasp the fundamental building blocks of our project:

Face Detection: OpenCV, a widely-used computer vision library, will help us identify faces within images. Leveraging pre-trained models like the Haar Cascade Classifier, we’ll pinpoint the location of faces in our images.

Embeddings: To recognize faces, we need numerical representations of facial features. We will delve into calculating embeddings for detected faces using the imgbeddings library.

Database: PostgreSQL, a robust open-source relational database, will serve as the backbone of our system. It’s the perfect choice for storing image data and associated embeddings, thanks to its support for custom data types.

Similarity Matching: The final piece of the puzzle involves querying the database to find similar faces. We’ll rely on PostgreSQL’s vector extension to perform efficient similarity matching.

The Code Unveiled

We will cover the initial steps, including loading an image, detecting faces, and saving cropped face regions as individual images. These steps lay the groundwork for our facial recognition project.

# importing the cv2 library
import cv2

# storing the Haar Cascade file name in the alg variable
# (this file ships with OpenCV; it can also be found under cv2.data.haarcascades)
alg = "haarcascade_frontalface_default.xml"

# passing the algorithm to OpenCV
haar_cascade = cv2.CascadeClassifier(alg)

# loading the image path into file_name variable
file_name = '<INSERT YOUR IMAGE NAME HERE>'  # for example: X1.jpg

# reading the image
img = cv2.imread(file_name)

# creating a grayscale version of the image (Haar Cascades operate on grayscale input)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detecting the faces
faces = haar_cascade.detectMultiScale(gray_img, scaleFactor=1.05, minNeighbors=2, minSize=(100, 100))

# for each face detected
for i, (x, y, w, h) in enumerate(faces):
    # crop the image to select only the face
    cropped_image = img[y : y + h, x : x + w]

    # saving each face to its own file; the index keeps faces from overwriting each other
    target_file_name = '<INSERT YOUR OUTPUT FACE IMAGE NAME HERE>' + str(i) + '.jpg'
    cv2.imwrite(target_file_name, cropped_image)

In this code snippet, we utilize OpenCV to detect faces in an input image, crop the detected face regions, and save them as separate image files. This is the initial step towards building our facial recognition system.

Part 2

In the second part of building our facial recognition system, we'll explore the fascinating world of embeddings. These numerical representations of facial features are essential for recognizing and comparing faces efficiently.

Understanding Embeddings

Embeddings are vectors that capture essential characteristics of a face, such as the position of facial landmarks and unique features. By converting facial images into embeddings, we can perform similarity matching and identify individuals accurately.
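To make this concrete, here is a minimal, pure-Python sketch using tiny made-up 4-dimensional vectors (the real face embeddings we'll compute later are 768-dimensional). Embeddings of the same person end up close together, while different people sit far apart:

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance: the same metric pgvector's <-> operator uses
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# toy 4-dimensional "embeddings" (real ones are 768-dimensional)
face_a = [0.10, 0.80, 0.30, 0.50]
face_b = [0.12, 0.79, 0.31, 0.48]  # a slightly different shot of the same face
face_c = [0.90, 0.10, 0.70, 0.20]  # a different person

# the two shots of the same face are much closer together
print(l2_distance(face_a, face_b) < l2_distance(face_a, face_c))  # True
```

This nearest-is-most-similar property is exactly what we will exploit when querying the database in part 4.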

Calculating Face Embeddings

Let’s dive into the code to calculate embeddings for the faces we detected in part 1. We will leverage the imgbeddings library to perform this task.

# importing the required libraries
import numpy as np
from imgbeddings import imgbeddings
from PIL import Image

# loading the face image path into file_name variable
file_name = '<INSERT YOUR FACE FILE NAME>'  # for example: X2.jpg

# opening the image
img = Image.open(file_name)

# loading the `imgbeddings`
ibed = imgbeddings()

# calculating the embeddings
embedding = ibed.to_embeddings(img)[0]

In this code snippet, we use the imgbeddings library to calculate embeddings for a detected face. The resulting embedding is a 768-dimensional numerical representation of the face's features.

Part 3

We’ll dive into the database aspect of our project. First of all, we need PostgreSQL up and running: navigate to the Aiven Console, create a new PostgreSQL service, and select your favourite cloud provider and region. The pgvector extension is available in all plans. Once all the settings are in order, click Create Service.

Once the service is up and running (it can take a couple of minutes), navigate to the service Overview and copy the Service URI parameter. We’ll use it to connect to PostgreSQL via psql .

Setting Up the Database

Let’s start by setting up our database. Assuming you have PostgreSQL installed, follow these steps:

  1. Open your terminal or command prompt.

  2. Connect to PostgreSQL using your service URI:

psql "<SERVICE_URI>"

Replace <SERVICE_URI> with your actual PostgreSQL service URI.

Now, within the PostgreSQL command-line interface, run the following SQL commands to create the necessary extension and table:

CREATE EXTENSION vector;
CREATE TABLE pictures (picture text PRIMARY KEY, embedding vector(768));

These commands create a PostgreSQL extension for vectors and a "pictures" table with columns for image filenames and embeddings.
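As the pictures table grows, scanning every row on each lookup can become slow. pgvector also supports approximate-nearest-neighbour indexes; as an optional sketch (the lists value below is a tunable assumption, not part of this project's setup):

```sql
-- optional: an IVFFlat index for faster approximate nearest-neighbour search,
-- built on the same L2 distance that the <-> operator uses
CREATE INDEX ON pictures USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);
```

For a small demo like ours the index is unnecessary; exact scans are fast enough.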

Storing Image Data and Embeddings

Now that our database is ready, we can proceed to store image data and their corresponding embeddings. Let's take a look at the code to achieve this:

# importing the required library
import psycopg2

# connecting to the PostgreSQL database
conn = psycopg2.connect('<SERVICE_URI>')
cur = conn.cursor()

# inserting the file name and its embedding into the pictures table
cur.execute('INSERT INTO pictures values (%s,%s)', (file_name, embedding.tolist()))

# committing the transaction so the row is persisted
conn.commit()

In this code snippet, we establish a connection to our PostgreSQL database, provide the file name and corresponding embedding, and insert this data into the "pictures" table. This step allows us to build a repository of image data and their embeddings for future facial recognition.

With the database in place, we've laid the groundwork to store image data and embeddings efficiently.

Note: we will repeat the process outlined through Part 3 for different images of multiple individuals. This will result in a collection of face embedding vectors stored within our database, ready for future face-matching endeavors.

Part 4

Matching a New Image

In the final phase of our facial recognition system, we take a new image as input and set out to find the closest matches within our database.

Understanding Similarity Matching

Similarity matching involves comparing the embedding of a queried face with the embeddings of known faces in our database. By finding the closest matches, we can identify individuals accurately.
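Before touching the database, the idea can be sketched in plain Python with an in-memory stand-in for the pictures table (the file names and 3-dimensional vectors below are made up purely for illustration):

```python
import math

def l2(a, b):
    # Euclidean distance, the metric behind pgvector's <-> operator
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# hypothetical stand-in for the pictures table
database = {
    "alice.jpg": [0.1, 0.8, 0.3],
    "bob.jpg":   [0.9, 0.1, 0.7],
    "carol.jpg": [0.3, 0.6, 0.5],
}

# embedding of the face we want to identify
query = [0.15, 0.75, 0.35]

# rank stored faces by distance, closest first
# (the in-memory equivalent of ORDER BY embedding <-> %s LIMIT 2)
matches = sorted(database, key=lambda name: l2(database[name], query))[:2]
print(matches)  # ['alice.jpg', 'carol.jpg']
```

The SQL query in Step 3 below does exactly this ranking, but inside PostgreSQL, where the index can do the heavy lifting.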

Step 1: Image Preprocessing

# loading the image path into file_name variable
file_name = '<INSERT THE IMAGE TO MATCH HERE>'

# reading the image
img = cv2.imread(file_name)

# creating a grayscale version of the image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detecting the faces
faces = haar_cascade.detectMultiScale(gray_img, scaleFactor=1.05, minNeighbors=2, minSize=(100, 100))

Here, we begin by loading the image we wish to match into our script. We then convert it to grayscale and employ a face detection algorithm to locate any faces within the image.

Step 2: Extracting and Embedding Faces

# for each face detected in the image
for x, y, w, h in faces:
    # crop the image to select only the face
    cropped_image = img[y : y + h, x : x + w]

    # convert the cropped NumPy array (BGR) to an RGB PIL image
    pil_image = Image.fromarray(cv2.cvtColor(cropped_image, cv2.COLOR_BGR2RGB))

    # loading the `imgbeddings`
    ibed = imgbeddings()

    # calculating the embeddings
    slack_img_embedding = ibed.to_embeddings(pil_image)[0]

For each face detected in the image, we isolate the face and convert it into a format suitable for analysis. Using image embeddings, we calculate a unique numerical representation for this face.

Step 3: Querying the Database

# connecting to the PostgreSQL database
conn = psycopg2.connect('<SERVICE_URI>')
cur = conn.cursor()

# converting the embedding into pgvector's string representation, e.g. "[0.12,0.56,...]"
string_rep = "[" + ",".join(str(x) for x in slack_img_embedding.tolist()) + "]"

# retrieving the 5 closest faces using the <-> distance operator
cur.execute("SELECT picture FROM pictures ORDER BY embedding <-> %s LIMIT 5;", (string_rep,))
rows = cur.fetchall()
for row in rows:
    print(row[0])
In this code snippet, we connect to our PostgreSQL database, provide the embedding of the queried face, and convert it to a PostgreSQL vector. We then query the database for similar faces using the <-> operator, which calculates the distance between vectors. The LIMIT 5 clause ensures we retrieve the top 5 closest matches.

With this final piece of code, our facial recognition system is complete.🙌🥳

We can now query our database with a new face’s embedding and retrieve the closest matches from our repository. This capability opens up a world of possibilities, from enhancing security systems to simplifying photo organization.

Thank you for joining us on this exciting journey, and I encourage you to explore and expand upon what you’ve learned here. The possibilities are endless when it comes to the world of facial recognition.

For any inquiries or further discussions, feel free to reach out to me via email or connect with me on LinkedIn. Your feedback and questions are always welcome.