Job Description
Role Number: 200653150-3956
Summary
We're starting to see the incredible potential of multimodal foundation and large language models, and many applications in computer vision and machine learning that previously appeared infeasible are now within reach. We are looking for highly motivated and skilled Machine Learning Platform Engineers to join our team in the VCV group and help us realize that potential for real-time human understanding on Apple devices.
The VCV org has pioneered human-centric, real-time features such as Face ID, FaceKit, and gaze and hand gesture control, which have changed the way millions of users interact with their devices. We balance research and product requirements to deliver Apple-quality, pioneering experiences, innovating across the full stack and partnering with hardware, software, and AI teams to shape Apple's products and bring our vision to life.
Join us to build the infrastructure, MLOps platforms, and deployment systems that power Apple's next generation of intelligent products and experiences.
Description
As part of the VCV team, you will build and maintain the critical infrastructure that enables machine learning at scale across Apple's products. You will work on infrastructure, MLOps, cloud and on-device deployment systems, and data engineering platforms that support our ML development lifecycle.
You will be responsible for building and maintaining scalable machine learning infrastructure for training, evaluation, and deployment of computer vision and multimodal models. You will develop MLOps platforms and tools that streamline the ML development lifecycle from data ingestion to model deployment, and create robust data pipelines for large-scale data collection, curation, preprocessing, and management. You will also implement on-device ML integration systems that deploy state-of-the-art algorithms to Apple devices.
Working closely with ML algorithms engineers, data scientists, and quality assurance teams, you'll help deploy state-of-the-art computer vision technologies on Apple devices, balancing performance with the compute and power constraints of on-device inference.
Minimum Qualifications
Bachelor's degree in Computer Science, Software Engineering, or related technical field, or equivalent practical experience
2+ years of relevant industry experience in software engineering, machine learning infrastructure, or related fields
Strong programming skills in Python, C++, and/or Swift
Experience with machine learning frameworks such as PyTorch, TensorFlow, or JAX
Knowledge of machine learning model development lifecycle, including data preprocessing, model training, evaluation, and deployment
Experience with distributed systems, cloud computing, or large-scale data processing
Strong foundational knowledge in Computer Science and software engineering principles
Preferred Qualifications
Master's degree in Computer Science, Machine Learning, or related technical field
2+ years of experience in ML infrastructure, platform engineering, or production ML systems
Experience with Apple's frameworks including CoreFoundation, RealityKit, and CoreML
Hands-on experience with CI/CD pipelines, DevOps practices, and infrastructure as code
Experience with containerization technologies (Docker, Kubernetes) and orchestration systems
Knowledge of cloud platforms (AWS, GCP, Azure) and distributed computing frameworks (Spark, Ray, etc.)
Experience with GPU programming and hardware acceleration (Metal, CUDA, OpenCL)
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).