Job Description
Role Number: 200651026-3543
Summary
Are you passionate about shaping the future of computational photography and video for billions of users worldwide? Apple's Camera Algorithm team is seeking an extraordinary and highly experienced Machine Learning Engineer to drive groundbreaking machine-learning-based technologies that define the photographic and cinematic real-time auto-focus (AF) capture experience across all of Apple's camera products.
Description
As a technical leader within the team, you will own the end-to-end architecture, rapid prototyping, and productization of advanced machine-learning-based auto-focus algorithms. You will navigate ambiguity independently to drive the long-term ML roadmap for AF, spearheading the design of novel learning-based systems integrated on Apple camera platforms that achieve a seamless auto-focus user experience in any scene condition, in both bright and low light.
You will determine methods and procedures on complex projects, frequently acting as the representative for your area while leading cross-functional work to deploy enhanced machine-learning-based AF features. This includes coordinating the activities of sub-teams to create sophisticated architectures, training pipelines, and tooling for machine-learning-based auto-focus development.
You will partner deeply with the SoC architecture team to influence future silicon designs, with the hardware team to evaluate new camera components impacting auto-focus, and with the firmware team to optimize system-level flows for machine learning algorithms.
The ideal candidate is a visionary problem-solver who thinks originally, resolves highly complex issues in creative ways, and is a proven mentor capable of inspiring innovation among others.
Minimum Qualifications
MS in Computer Science, Machine Learning, Electrical Engineering, or a related field.
Experience defining datasets for training machine-learning networks on low-level vision tasks, including dataset curation and data-augmentation strategies for robust training.
Expertise in modern machine learning (ML) frameworks and libraries, specifically PyTorch or TensorFlow/TFLite/LiteRT.
Strong software engineering and architecture skills; highly proficient at coding in Python and C.
Preferred Qualifications
Experience applying machine learning to practical low-level computer vision applications in one or more of the following areas: auto-focus, stereo disparity/depth, depth estimation, defocus/blur estimation, optical flow estimation, sensor fusion.
Experience with defining datasets for training temporal networks.
Good knowledge of optics (Point Spread Functions, Depth of Field, etc.) and of image quality metrics relevant to critical image sharpness evaluation (Modulation Transfer Function, Spatial Frequency Response, Acutance, Blur/Defocus Estimation, etc.).
Track record of pioneering innovation, demonstrated by publications in top-tier computer vision conferences (e.g., CVPR, ICCV, ECCV) and/or patents.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).