
👨🏻‍💻 📊 🐶  🤖

Han Fang is a Senior Research Scientist Manager in GenAI at Meta, leading applied research and core modeling on the Llama models behind Meta AI, which is deployed across Meta's family of apps and Ray-Ban Meta.

Han's team is responsible for post-training of Meta AI's production Llama model, spanning SFT, RLHF, and RLUF (reinforcement learning from user feedback). He also bootstrapped and leads Meta AI's data flywheel, which draws on billions of conversations. Han is now making Meta AI agentic, enabling multi-step planning and tool use.

Han holds a PhD in Applied Mathematics and Statistics and has published in top-tier conferences and journals, with 6,000+ citations. He is a recipient of the President's Award to Distinguished Doctoral Students, the Woo-Jong Kim Dissertation Award, and the Excellence in Research Award. Han is a strong believer in giving back to the community; in his spare time, he serves as an industry advisory board member for the University of Washington.


AI @ Meta


Meta AI on Llama 3

Meta AI is our most advanced LLM-powered AI assistant. Built on Llama 3, our most capable model to date, Meta AI can handle complex reasoning, follow instructions, visualize ideas, and solve nuanced problems.

Meta AI is now available within our family of apps and at meta.ai, where you can learn more, imagine anything, and get more done.

 

See my talk at Meta's Connect Conference.


Harmful content can evolve quickly. Our new AI system adapts to tackle it

We built and deployed Few-Shot Learner (FSL), a system that can adapt to new or evolving types of harmful content within days instead of months. It works in more than 100 languages, learns from different kinds of data, such as images and text, and can strengthen existing AI models that are already deployed to detect other types of harmful content.


Training AI to detect hate speech in the real world

We’ve built and deployed an innovative system called Reinforcement Integrity Optimizer (RIO). RIO is an end-to-end optimized reinforcement learning (RL) framework. It’s now used to optimize hate speech classifiers that automatically review all content uploaded to Facebook and Instagram.


AI advances to better detect hate speech

We have a responsibility to keep the people on our platforms safe, and dealing with hate speech is one of the most complex and important components of this work. To better protect people, we have AI tools to quickly — and often proactively — detect this content.

Contact me

Email: hanfang.info[at]gmail.com

Threads: @han_fang_

Twitter: @han_fang_
