👨🏻💻 📊 🐶 🤖
Han Fang is a Senior Engineering Manager in Generative AI at Meta, leading the applied research and engineering of Meta AI, the AI assistant launched across Meta's family of apps and smart glasses. He drives research and technical innovation for the Llama family of models in production.
Han's team is responsible for fine-tuning Meta AI's production LLM. He leads the development of Reinforcement Learning with User Feedback (RLUF) and the data flywheel system. He also leads the development of the Orchestrator system, which empowers AI agents with plugins, and the direction of Tool-Llama for few-shot function calling.
Han holds a PhD in Applied Mathematics and Statistics and has published in top-tier conferences and journals with 4600+ citations. He is a recipient of the President's Award to Distinguished Doctoral Students, the Woo-Jong Kim Dissertation Award, and the Excellence in Research Award. Han is a strong believer in giving back to the community. In his spare time, he serves as an industry advisory board member for the University of Washington.
AI @ Meta
Meta AI - the most advanced LLM-powered AI
Meta AI is our most advanced LLM-powered AI assistant. It offers a natural texting experience like chatting with a friend, assists you with many daily tasks, and is equipped with powerful plugins. Built on the Llama foundation models, it has evolved and expanded its capabilities into an intelligent reasoning engine. See my talk at Meta's Connect Conference.
Harmful content can evolve quickly. Our new AI system adapts to tackle it
We built and deployed Few-Shot Learner (FSL) that can adapt to new or evolving types of harmful content within days instead of months. It not only works in more than 100 languages, but also learns from different kinds of data, such as images and text, and it can strengthen existing AI models that are already deployed to detect other types of harmful content.
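The internals of FSL aren't described here, but the core few-shot idea — classifying new or evolving content from just a handful of labeled examples — can be sketched with a nearest-centroid classifier over text embeddings. Everything below (the toy `embed` function, the example texts) is hypothetical; a production system would use a learned multilingual, multimodal encoder rather than character trigrams.

```python
import numpy as np

def embed(text: str, dim: int = 32) -> np.ndarray:
    """Stand-in embedding: hash character trigrams into a fixed-size vector.
    (A real system would use a learned multilingual encoder.)"""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i : i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def few_shot_classify(query: str, support: dict) -> str:
    """Nearest-centroid few-shot classification: average the embeddings of
    the few labeled examples per class, then assign the query to the class
    whose centroid is most similar (dot product of normalized vectors)."""
    centroids = {
        label: np.mean([embed(t) for t in texts], axis=0)
        for label, texts in support.items()
    }
    q = embed(query)
    return max(centroids, key=lambda lbl: float(q @ centroids[lbl]))

# A "new policy" defined by only two examples per class (all text hypothetical):
support = {
    "violating": ["example of a new harmful claim", "another harmful claim variant"],
    "benign": ["a friendly greeting", "a normal product question"],
}
print(few_shot_classify("example of a harmful claim", support))
```

Because the classifier is defined entirely by its support examples, adapting it to a new type of content is a matter of swapping in a few new examples — no retraining pipeline — which is the property the paragraph above highlights.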
Training AI to detect hate speech in the real world
We’ve built and deployed an innovative system called Reinforcement Integrity Optimizer (RIO). RIO is an end-to-end optimized reinforcement learning (RL) framework. It’s now used to optimize hate speech classifiers that automatically review all content uploaded to Facebook and Instagram.
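RIO's actual architecture isn't detailed here; as a toy illustration of the underlying idea — optimizing a classifier directly from decision-level rewards rather than a differentiable supervised loss — here is a minimal REINFORCE-style sketch on synthetic data. The data generator, learning rate, and reward scheme are all hypothetical.

```python
import math
import random

random.seed(0)  # make the toy run reproducible

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sample_item():
    """Synthetic content stream (hypothetical): each item has a score that is
    only loosely correlated with whether it actually violates policy."""
    label = random.random() < 0.3  # ~30% of items are violating
    score = random.gauss(1.0 if label else -1.0, 1.0)
    return score, label

w, b = 0.0, 0.0  # classifier parameters to optimize
lr = 0.05

for step in range(5000):
    score, label = sample_item()
    p = sigmoid(w * score + b)        # policy: P(flag this item)
    action = random.random() < p      # stochastic decision: flag or not
    # The learning signal is a reward on the *decision outcome*,
    # not a gradient from a labeled loss: +1 correct, -1 incorrect.
    reward = 1.0 if action == label else -1.0
    # REINFORCE: d/dtheta log pi(action) = (action - p) * input
    grad = (1.0 if action else 0.0) - p
    w += lr * reward * grad * score
    b += lr * reward * grad
```

After training, `w` ends up positive, meaning higher-scoring items are flagged more often — the policy has learned the score-label correlation purely from rewards. The end-to-end framing matters because it lets the training objective be the metric you actually care about (e.g., a precision/recall trade-off measured in production) rather than a proxy loss.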
AI advances to better detect hate speech
We have a responsibility to keep the people on our platforms safe, and dealing with hate speech is one of the most complex and important components of this work. To better protect people, we have AI tools to quickly — and often proactively — detect this content.