Inspiration

In light of recent events highlighting the systemic biases deeply ingrained in our society, we were inspired to create a product that could combat this issue in machine learning models. Biased models can have vast consequences and further feed a cycle of social bias, and they are everywhere: in healthcare systems, policing, hiring, and more.

What it does

Our API provides metrics for evaluating the fairness of a model's predictions during and after training. It includes a monitor class that can be instantiated during training to plot fairness metrics in real time, as well as an adversarial wrapper class that helps reduce bias in the model itself (as opposed to only evaluating the model's results).
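As a hedged sketch of the kind of metric such an API might expose, here is a minimal demographic parity check; the function name `demographic_parity_difference` and the 0/1 group encoding are our assumptions for illustration, not the project's actual interface:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred : binary predictions (0/1), array-like
    group  : binary protected-attribute labels (0/1), array-like

    A value of 0 means both groups receive positive predictions at the
    same rate; larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Group 0 gets a positive prediction 75% of the time, group 1 only 25%.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(preds, groups))  # 0.5
```

A metric like this could be computed once on held-out predictions, or repeatedly during training to drive real-time monitoring.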

How we built it

Our metrics are standard fairness metrics made compatible with both PyTorch tensors and NumPy arrays. The monitor works by continually updating a Matplotlib pyplot figure with the performance of a PyTorch model. The adversarial wrapper class augments a pre-trained PyTorch model with a feed-forward adversarial network. We generated auto-built documentation using Sphinx.
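One common pattern for making metrics accept both PyTorch tensors and NumPy arrays is a small coercion helper; the sketch below shows that general pattern under our own naming (`as_numpy` and the `accuracy` example are inventions for illustration, not the project's actual code):

```python
import numpy as np

try:
    import torch
except ImportError:  # torch is optional; NumPy inputs still work
    torch = None

def as_numpy(x):
    """Coerce a PyTorch tensor or array-like to a NumPy array."""
    if torch is not None and isinstance(x, torch.Tensor):
        # Detach from the autograd graph and move to host memory first.
        return x.detach().cpu().numpy()
    return np.asarray(x)

def accuracy(y_pred, y_true):
    """Example metric that works on tensors and arrays alike."""
    y_pred, y_true = as_numpy(y_pred), as_numpy(y_true)
    return float((y_pred == y_true).mean())
```

With torch installed, `accuracy(torch.tensor([1, 0]), torch.tensor([1, 1]))` and `accuracy(np.array([1, 0]), np.array([1, 1]))` go through the same code path, which keeps each fairness metric implementation framework-agnostic.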

Challenges we ran into

Collaborating virtually is, of course, a struggle for any team. We decided to use tools such as Asana, Slack, and Zoom to facilitate our collaboration, and becoming familiar with these platforms will serve us well in future group projects and company settings.

Accomplishments that we're proud of

Learning to integrate Sphinx with our API was rewarding, as it is such a useful and elegant documentation tool. It was also rewarding to see our tools integrated into a real classifier that predicts gender from images of faces across a range of races.

What we learned

Two of our team members, Nadine and Joyce, had no prior experience with machine learning, so they gained a basic understanding of the entire machine learning pipeline. Michelle used PyTorch for the first time, and Max learned Sphinx.

What's next for FairTorch

We would like to add more sophisticated adversarial wrapper classes, particularly for image classification tasks, since our current implementation is a basic model intended for general use. We would also like to implement more visualizations.
