Teaching a GAN About Fairness

A Project on Mitigating Racial Bias in Face Aging Models

Authors: Cédric Caruzzo, Juchul Shin, Minshik Choi [GitHub Repository]

The Challenge: AI Sees a Biased World

Facial Age Progression (FAP), the task of predicting what someone will look like as they age, is a fascinating frontier in computer vision. But many state-of-the-art models share a critical flaw: they are trained on datasets that don't represent the diversity of the real world. Datasets like FFHQ, while high-quality, are heavily skewed towards Caucasian faces.

[Figure: racial distribution in the popular FFHQ dataset. Truncation, a common sampling technique in GANs, further amplifies this bias, increasing the proportion of generated white faces.]
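To see why truncation makes the skew worse, recall that it pulls sampled latents toward the dataset's average latent. Here is a minimal sketch of the truncation trick; the names (`w`, `w_avg`, `psi`) follow the usual StyleGAN convention and are illustrative, not tied to any particular codebase:

```python
import torch

def truncate(w: torch.Tensor, w_avg: torch.Tensor, psi: float = 0.7) -> torch.Tensor:
    """StyleGAN truncation trick: interpolate sampled latents toward the mean.

    With psi < 1, samples are pulled toward w_avg. If the training data is
    majority-white, w_avg encodes a "typical" (white-skewed) face, so
    truncation increases the share of white faces among generated samples.
    """
    return w_avg + psi * (w - w_avg)
```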

This imbalance means that models trained on this data often fail to accurately capture the unique aging patterns of non-white individuals. An age progression model that changes a person's perceived race is not just inaccurate; it's a failure of fairness. This project tackles that problem head-on.

The Tool: Style-Based Age Manipulation (SAM)

Our starting point was SAM (Style-based Age Manipulation), a powerful image-to-image translation model built on StyleGAN, a well-known generative network. It works by encoding a person's face into a latent space (a compressed representation of facial features) and then manipulating that representation to change the apparent age while preserving identity.

[Figure: the baseline SAM architecture, which maps an input image and a target age to a set of style vectors used to generate the aged face.]
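For intuition, here is a rough sketch of SAM-style inference. The function and call signatures are illustrative assumptions, not SAM's actual API; the point is that the target age is fed to the encoder alongside the image (SAM appends it as an extra input channel), and the resulting style vectors drive a pretrained StyleGAN generator.

```python
import torch

def age_transform(encoder, generator, image: torch.Tensor, target_age: float) -> torch.Tensor:
    """Conceptual SAM-style forward pass (illustrative names, not the repo's API).

    image: (B, 3, H, W) batch of face images; target_age: desired age in years.
    """
    b, _, h, w = image.shape
    # Condition the encoder on the target age by appending it as a 4th channel.
    age_channel = torch.full((b, 1, h, w), target_age / 100.0, device=image.device)
    x = torch.cat([image, age_channel], dim=1)  # (B, 4, H, W)
    styles = encoder(x)       # per-layer style vectors in StyleGAN's latent space
    return generator(styles)  # decoded face at the target age
```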

Our Idea: A "Fairness" Penalty

The core of our idea was simple: what if we could penalize the model every time it generated a face whose perceived race differed from the input's? To do this, we integrated a pre-trained race classifier from the DeepFace library directly into the training loop.
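As a standalone illustration, this is roughly how DeepFace's attribute analysis reports race predictions. The image paths are placeholders, and depending on the DeepFace version, `analyze` returns a dict or a list of dicts:

```python
from deepface import DeepFace

# Compare the predicted race of an input face and its aged counterpart.
# "input.jpg" and "aged.jpg" are placeholder paths.
inp = DeepFace.analyze(img_path="input.jpg", actions=["race"])
out = DeepFace.analyze(img_path="aged.jpg", actions=["race"])

# Recent DeepFace versions return a list with one result per detected face.
print(inp[0]["dominant_race"], "->", out[0]["dominant_race"])
print(out[0]["race"])  # per-class confidence scores
```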

During training, after SAM generated an aged face, DeepFace would analyze both the original and the new face. If it detected a change in racial characteristics, it would send a "race loss" signal back to the main model, nudging its parameters to correct the mistake. In essence, we taught the model to preserve racial identity as a core objective, alongside preserving personal identity and accurately portraying age.
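Below is a minimal sketch of how such a penalty can enter the objective, assuming `race_classifier` stands in for a differentiable version of DeepFace's race model (the library is not differentiable out of the box, so in practice the underlying network has to be wrapped or ported):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a differentiable race classifier; in our setup,
# DeepFace's race model plays this role. Any network mapping an image batch
# to per-class race logits fits here.
race_classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 6))

def race_loss(input_img: torch.Tensor, aged_img: torch.Tensor) -> torch.Tensor:
    """Penalize a shift in predicted race between input and generated face."""
    p_in = F.softmax(race_classifier(input_img), dim=1).detach()  # fixed target
    log_p_out = F.log_softmax(race_classifier(aged_img), dim=1)
    # KL divergence between the two predicted distributions; any distance
    # between them would serve the same purpose.
    return F.kl_div(log_p_out, p_in, reduction="batchmean")

# Usage sketch: weight the penalty and add it to SAM's existing losses.
x = torch.rand(4, 3, 64, 64)      # input faces (toy resolution)
y_hat = torch.rand(4, 3, 64, 64)  # aged faces from the generator
lambda_race = 15.0                # the weight we used in our experiments
fairness_term = lambda_race * race_loss(x, y_hat)  # + identity/age losses
```

Detaching the input's prediction makes it a fixed target, so the gradient flows only through the generated face, nudging the generator rather than the classifier.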

[Figure: our proposed architecture. A DeepFace classifier compares the input and output images, computing a "race loss" that guides the model towards fairer results.]

The Results: Seeing is Believing

While quantitative metrics are still a work in progress, the qualitative results, even from early training, are striking. The most powerful demonstration is a direct video comparison.

[Video: the key result. First half: the baseline model. Second half: our fairness-aware model. Notice how our model consistently preserves the subject's racial identity during age progression, while the baseline often drifts towards Caucasian features.]

Side-by-Side Image Comparisons

The effect is also clear in still images. We compared generations from our model (trained from scratch with a race-loss weight lambda of 15) against the original, vanilla SAM.

Young Age

[Figure: comparison at a younger age. Our model (left) vs. the baseline model (right).]

Mature Age

[Figure: comparison at a mature age. Our model retains racial characteristics more faithfully.]

Elderly Age

[Figure: comparison at an elderly age. The baseline model shows a significant shift in features.]

What We Learned & Future Directions

This project, while challenging due to computational constraints, shows significant promise. Our key takeaway is that incorporating auxiliary information—like a race classifier—directly into the loss function can be a powerful strategy for mitigating dataset bias without needing to curate a perfectly balanced dataset.

Ultimately, this work is a step towards building AI systems that are not only technologically advanced but also socially conscious and equitable for everyone.

Try It Yourself

You can experiment with our fairness-enhanced model yourself using these Google Colab notebooks. No setup is required.
