Job Details

Job Information

Sr. Research Manager, Evaluation Science
AWM-4098-Sr. Research Manager, Evaluation Science
5/10/2026
5/15/2026
Negotiable
Permanent

Other Information

www.apple.com
Seattle, WA, 98194, USA

Job Description


Role Number: 200661946-3337

Summary

AI systems are only as trustworthy as the methods used to evaluate them. At Apple, where AI powers experiences for billions of people, getting evaluation right is not a support function. It is a foundational science. As these systems grow in complexity, the quality of our products is increasingly constrained by the quality of our evaluation methods. Our team is building the scientific foundation and self-service tools for how AI evaluation is done at scale, spanning LLMs, agentic systems, and human-AI interaction. We don’t just publish methods; we productionize them. We are looking for a Sr. Research Manager to lead an ML research team that advances the state of the art in evaluation methods that can be shipped as production tools for Apple developers and published in top venues.

Description

We are looking for a Sr. Research Manager to lead an ML research team advancing the frontier of evaluation methods. The team works in close collaboration with applied scientists and measurement scientists to build evaluation methodology and systems that are human-centered, psychometrically rigorous, and technically frontier. You will set the research agenda, direct the team's portfolio across near-term and long-term bets, and ensure that novel methods are designed from the outset for productionization into evaluation SDKs and APIs. The team has active projects across multiple research areas; your most immediate contribution will be bringing strategic focus to this portfolio, leading a research lifecycle that turns your team’s work into high-impact internal applications, and positioning work for external impact at top-tier venues. You will have a strong ML background and a track record of leading research teams that publish at venues like NeurIPS, ICML, and ICLR while simultaneously shipping methods into production tools.
What makes this team unusual is its interdisciplinary core. You will lead ML researchers working alongside measurement scientists and applied scientists, bringing together frontier ML research, psychometric rigor, and production engineering. What unites the strongest candidates is depth of thinking about evaluation as a research problem and the conviction that how we measure AI systems is as important as how we build them.

Minimum Qualifications

  • Ph.D. in Computer Science, Machine Learning, Statistics, or a closely related field

  • 5+ years of experience managing or leading research teams in an industry setting, with demonstrated ability to attract and retain strong research talent

  • Experience publishing research at top-tier AI/ML venues (NeurIPS, ICML, ICLR, ACL, EMNLP)

  • Experience partnering with applied science and engineering teams to translate research into production systems, tools, or capabilities adopted by others

  • Technical depth in AI evaluation, with the ability to critically assess and advance methods for measuring AI system behavior, whether through automated judgment, benchmark design, synthetic data, human evaluation, or other approaches

  • Demonstrated ability to set research strategy, manage a research portfolio with competing priorities, and make disciplined investment decisions across near-term and long-term work

  • Excellent communication skills, including the ability to represent research to executive leadership, partner teams, and the external research community

Preferred Qualifications

  • Ability to bridge ML research and measurement science. This could mean a machine learning background with genuine familiarity with validity and evaluation design, or a measurement science background with strong technical depth in ML methods

  • Publications or demonstrated expertise specifically in evaluation methodology (papers about how to evaluate, not just papers that use evaluation)

  • Demonstrated ability to coach researchers toward higher-impact publications: improving framing, identifying contribution clarity issues, and helping position work for acceptance at top-tier venues

  • Strong opinions about how evaluation methods should be implemented in user-facing tools: what defaults, abstractions, and guardrails make the difference between a generic SDK and a world-class evaluation platform

  • Experience designing research with self-service adoption as a first-class constraint, where the end goal is not a bespoke system your team operates but a method or tool that others can apply correctly without deep knowledge of the underlying research

  • Track record of personally recruiting research talent in competitive hiring markets, including sourcing candidates who would not have applied through standard channels


About Organization
