Jacqueline King
AI quality, curation, alignment
I work on making AI systems better. At YouTube, I build evaluation frameworks and curate preference data that shape how models surface content for younger audiences. Interested in RLHF, model behavior, and the messy human side of alignment.
Experience
YouTube
Programming Specialist, Youth & Learning / 2022 - Present
- Design quality evaluation frameworks for recommendations serving 10M+ weekly users
- Curate preference data that shapes how models surface content
- Conduct safety evaluation for YouTutor, an unreleased conversational AI
- Impact: 50%+ reduction in inappropriate content surfaced, 45% increase in quality watch time
UC Berkeley Human Rights Center
Open Source Investigator / 2020 - 2021
- Verified open-source evidence for human rights documentation in partnership with Amnesty International and The Washington Post
Education
UC Berkeley, BA Global Studies / Sciences Po Paris, Exchange
Looking For
Exploring roles in AI alignment and safety, particularly at labs working on model behavior, RLHF, or trust & safety.
Based in NYC. Open to SF, Paris, or remote.