Job Description
Role Number: 200652910-0836
Summary
We are looking for a Research Scientist or Engineer to join our Foundation Model Evaluation
team. In this role, you will design and build evaluation methodology that measures what matters
- how well our models perform at the frontier of key capabilities, and how well they serve real
users across Apple products on billions of active devices. You will turn evaluation insights into
signals that make models better.
Description
This is a hands-on role focused on the models that power Apple products used daily by over a
billion people. You will design evaluation systems where the outcome is not just a score, but an
actionable signal - one that drives model improvement and predicts real user experience.
Working alongside model training and product teams, you will close the loop between evaluation
and improvement.
Our work spans three areas:
• Frontier capability assessment: benchmarking against the state of the art in reasoning,
code, knowledge, and agentic workflows
• Product-aligned evaluation: measuring model quality in ways that reflect real user
experience
• Evaluation-to-training integration: feeding actionable insights back into the model
development cycle
You may focus on one area or work across multiple, depending on your background and
interests.
Minimum Qualifications
• 3+ years of experience in AI model evaluation, NLP, or a related area (e.g., natural language generation, information retrieval, or conversational AI)
• Strong fundamentals in machine learning, natural language processing, and statistical analysis
• Proficiency in Python and experience with ML frameworks (PyTorch, JAX, or equivalent)
• Demonstrated ability to translate research insights into practical implementations
• Strong experimental design skills: ability to design rigorous comparisons and draw valid conclusions from results
• Clear technical communication: ability to distill evaluation results into actionable recommendations for cross-functional partners
• MS or PhD in Computer Science, Machine Learning, Natural Language Processing, or a related technical field; equivalent practical experience will be considered
Preferred Qualifications
• PhD in Computer Science, Machine Learning, NLP, or a related field
• Direct experience evaluating large language models, e.g., benchmark design or model-based judging
• Track record of collaborating with model training and data teams to turn evaluation findings into training improvements
• Experience building reusable evaluation tooling or analysis frameworks adopted across teams
• Familiarity with human evaluation methodology and experience partnering with annotation teams or vendors to assess model quality
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).