Hi, I'm Han
👨🏻💻 📊 🐶 🧬
Han Fang is currently a Research Scientist Manager at Meta AI, building Generative AI services that enable brand-new experiences for billions of users across the Meta family of apps. He leads the applied research efforts on building and productionizing large language models at scale.
Previously, Han led a team of 30 research scientists and machine learning engineers and drove step-function improvements in recommendation outcomes across Instagram and Facebook, including Feed and Reels. His team developed AI technologies for a world-class recommendation system, including Meta User Graph for billion-scale graph learning, efficient Transformers, and multi-domain multi-task learning for user modeling.
Prior to that, his team developed state-of-the-art AI services across the Meta family of apps, including the industry-first scalable Few-Shot Learner (FSL), a reinforcement learning based deep learning model optimizer (known as RIO), a highly efficient Transformer architecture called Linformer, and billion-scale multi-modal, multi-lingual content understanding.
Han holds a PhD in Applied Mathematics and Statistics and has published highly influential papers in top-tier conferences and journals, with 3,700+ citations. He is a recipient of the President’s Award to Distinguished Doctoral Students, the Woo-Jong Kim Dissertation Award, and the Excellence in Research Award. Han is a strong believer in giving back to the community; in his spare time, he serves on an industry advisory board for the University of Washington.
Research @ Facebook
Harmful content can evolve quickly. Our new AI system adapts to tackle it
We’ve built and recently deployed a new AI technology called Few-Shot Learner (FSL) that can adapt to take action on new or evolving types of harmful content within weeks instead of months. It not only works in more than 100 languages, but also learns from different kinds of data, such as images and text, and it can strengthen existing AI models that are already deployed to detect other types of harmful content.
Training AI to detect hate speech in the real world
We’ve built and deployed an innovative system called Reinforcement Integrity Optimizer (RIO). RIO is an end-to-end optimized reinforcement learning (RL) framework. It’s now used to optimize hate speech classifiers that automatically review all content uploaded to Facebook and Instagram.
Supporting economic recovery and vaccine distribution across the world
We developed the first micro-estimates of wealth and poverty covering the populated surface of nearly 100 low- and middle-income countries. Together with UC Berkeley, Facebook’s Data for Good team released the Relative Wealth Index data to support COVID-19 economic recovery and equitable vaccine distribution across the world using AI (paper).
AI advances to better detect hate speech
We have a responsibility to keep the people on our platforms safe, and dealing with hate speech is one of the most complex and important components of this work. To better protect people, we have AI tools to quickly — and often proactively — detect this content.
Tetris Planner: Optimizing Facebook Data Warehouse Data Placement
Facebook's data warehouse spans 9+ data centers around the world and hosts exabytes of data for analytics and machine learning. Tetris Planner was deployed to all of Facebook's data centers and successfully rebalances petabytes of data daily.
Research from PhD
Scikit-ribo enables accurate estimation and robust modeling of translation dynamics at codon resolution
Scikit-ribo is an open-source analysis package for accurate genome-wide A-site prediction and translation efficiency (TE) estimation from Ribo-seq and RNA sequencing data.
Indel variant analysis of short-read sequencing data with Scalpel
Scalpel is open-source software for reliable indel detection based on the microassembly technique.