👨🏻💻 📊 🐶 🤖
Han Fang is currently a Senior Engineering Manager in Generative AI at Meta, leading the development of Meta AI, driving technical innovation on Llama, and productionizing it at Meta scale.
Previously, Han led a team of 30+ research scientists and machine learning engineers and drove step-function improvements in recommendation outcomes across Instagram and Facebook, including Feed and Reels. His team developed AI technologies for a world-class recommendation system, including Meta User Graph for billion-scale graph learning, efficient Transformer architectures, and multi-domain user modeling.
At Meta, Han's team developed state-of-the-art AI services, including the industry-first scalable Few-Shot Learner (FSL), a reinforcement learning-based deep learning model optimizer known as RIO, the highly efficient Linformer Transformer architecture, and multi-modal, multi-lingual content understanding.
Han holds a PhD in Applied Mathematics and Statistics and has published in top-tier conferences and journals with 4000+ citations. He is a recipient of the President’s Award to Distinguished Doctoral Students, the Woo-Jong Kim Dissertation Award, and the Excellence in Research Award. Han is a strong believer in giving back to the community. In his spare time, he serves as an industry advisory board member for the University of Washington.
AI @ Meta
Meta AI - the most advanced LLM-powered AI
Meta AI is our most advanced LLM-powered AI assistant. It offers a natural texting experience, like chatting with a friend, assists you with many daily tasks, and is equipped with powerful plugins. Built on the Llama foundation models, it has evolved and expanded its capabilities into an intelligent reasoning engine.
Harmful content can evolve quickly. Our new AI system adapts to tackle it
We built and deployed Few-Shot Learner (FSL), a system that can adapt to new or evolving types of harmful content within days instead of months. It not only works in more than 100 languages but also learns from different kinds of data, such as images and text, and it can strengthen existing AI models already deployed to detect other types of harmful content.
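FSL's internals aren't detailed here, but the core idea of adapting to a new content type from a handful of labeled examples can be sketched with a prototype-based few-shot classifier. Everything below is a hypothetical toy: the bag-of-words `embed` stands in for a pretrained multilingual encoder, and the example texts and labels are invented.

```python
# Minimal few-shot classification sketch (not Meta's FSL): build one
# prototype vector per class from a few examples, then classify new
# text by nearest prototype.
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a pretrained multilingual text encoder:
    # an L2-normalized bag-of-words vector.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {w: c / norm for w, c in counts.items()}

def similarity(a, b):
    # Dot-product similarity over sparse vectors.
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def centroid(vectors):
    # Average the sparse vectors to get a class prototype.
    total = Counter()
    for v in vectors:
        total.update(v)
    return {w: s / len(vectors) for w, s in total.items()}

def few_shot_classifier(support):
    # support: {label: [a few example texts]} -- the "few shots".
    prototypes = {label: centroid([embed(t) for t in texts])
                  for label, texts in support.items()}
    def classify(text):
        v = embed(text)
        return max(prototypes, key=lambda label: similarity(v, prototypes[label]))
    return classify

# A new, hypothetical violation type defined by only two examples per class:
clf = few_shot_classifier({
    "violating": ["buy fake vaccine cards now", "fake vaccine card for sale"],
    "benign": ["get your vaccine at the clinic", "clinic hours for vaccines"],
})
print(clf("fake cards for sale here"))  # -> violating
```

A production system would use a strong pretrained encoder and far more robust similarity scoring, but the shape is the same: a few labeled examples define the policy, and no full retraining cycle is needed.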
Training AI to detect hate speech in the real world
We’ve built and deployed an innovative system called Reinforcement Integrity Optimizer (RIO). RIO is an end-to-end optimized reinforcement learning (RL) framework. It’s now used to optimize hate speech classifiers that automatically review all content uploaded to Facebook and Instagram.
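RIO's actual design isn't described in this blurb, but the core contrast with offline training can be sketched: instead of tuning a classifier against a fixed proxy metric, a loop adjusts its decision policy based on a reward computed from outcomes. The sketch below is entirely hypothetical (simulated scores, an invented reward, and a finite-difference update standing in for a real policy-gradient step); it only tunes a decision threshold, not the whole model.

```python
# Illustrative reward-driven threshold tuning (not the actual RIO system).
import random

random.seed(0)

def make_stream(n=2000):
    # Simulated (score, is_violating) pairs: a hypothetical classifier
    # gives violating content higher scores on average.
    stream = []
    for _ in range(n):
        violating = random.random() < 0.3
        score = random.gauss(0.7 if violating else 0.3, 0.15)
        stream.append((score, violating))
    return stream

STREAM = make_stream()

def reward(threshold, batch):
    # Hypothetical online reward: credit correctly removed violations,
    # penalize false positives (benign posts wrongly flagged) more heavily.
    r = 0.0
    for score, violating in batch:
        if score >= threshold:
            r += 1.0 if violating else -2.0
    return r

def optimize_threshold(steps=200, lr=0.02, eps=0.05):
    # Finite-difference hill climbing on the reward signal: a crude
    # stand-in for the policy-gradient update a real RL loop would use.
    threshold = 0.5
    for _ in range(steps):
        batch = random.sample(STREAM, 200)
        grad = (reward(threshold + eps, batch)
                - reward(threshold - eps, batch)) / (2 * eps)
        threshold = min(max(threshold + lr * grad / len(batch), 0.0), 1.0)
    return threshold

best = optimize_threshold()
```

The point of optimizing end to end against the reward, rather than a fixed offline metric, is that the decision policy tracks what actually matters in production as content distributions shift.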
AI advances to better detect hate speech
We have a responsibility to keep the people on our platforms safe, and dealing with hate speech is one of the most complex and important components of this work. To better protect people, we have AI tools to quickly — and often proactively — detect this content.