Facebook AI Research
Building Responsible AI to Connect People

This summer, I interned at Facebook AI on a small team of talented designers under the guidance of Margaret Stewart. I had the chance to work with multiple core ML teams and design ML interpretability solutions for responsible AI.

Model Interpretability

At Facebook, machine learning models are the backbone of a variety of products and services. Interpretability is crucial for understanding errors and biases in models that serve millions of people. During my time in the AI Infra org, I worked with engineers to explore multiple ways to visualize what models "think" when making predictions. The team is working towards an open-source library for this work that could benefit ML practitioners around the world.
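To give a flavor of what "visualizing what a model thinks" can mean, here is a minimal sketch of integrated gradients, one common attribution technique that assigns each input feature a share of the prediction. This is a hypothetical illustration with a toy model, not the team's actual library or code.

```python
def integrated_gradients(f, x, baseline, steps=50):
    """Approximate integrated-gradients attributions for each input feature of f.

    Averages numerical gradients along the straight-line path from `baseline`
    to `x`, then scales by the input delta (toy illustration, not production code).
    """
    n = len(x)
    avg_grads = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        # Point on the path from baseline to the actual input.
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            # Central-difference estimate of dF/dx_i at this point.
            eps = 1e-5
            up, dn = point[:], point[:]
            up[i] += eps
            dn[i] -= eps
            avg_grads[i] += (f(up) - f(dn)) / (2 * eps) / steps
    return [(x[i] - baseline[i]) * avg_grads[i] for i in range(n)]

# Toy "model": a fixed linear scorer standing in for a real predictor.
def model(x):
    return 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]

attrs = integrated_gradients(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

A useful sanity check for this method is "completeness": the attributions sum to the difference between the model's output at the input and at the baseline, which is what makes them readable as a per-feature breakdown of a single prediction.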

Expanding on my interpretability work, I organized a team of interns to work on ML transparency and control, and we participated in the MPK company hackathon. Our project won the Judges' Choice award out of 140+ participating teams.


Get in touch for more details.