Job Details

Job Information

Data Solution Engineer, Data Quality and Governance
AWM-472-Data Solution Engineer, Data Quality and Governance
1/21/2026
1/26/2026
Negotiable
Permanent

Other Information

www.apple.com
Cupertino, CA, 95015, USA

Job Description


Weekly Hours: 40

Role Number: 200642585-0836

Summary

Imagine what you could do here. The people here at Apple don't just create products — they create the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it.

Apple's Sales organization generates the revenue needed to fuel our ongoing development of products and services. This, in turn, enriches the lives of hundreds of millions of people around the world. Our sales team is, in many ways, the face of Apple to our largest customers. We are here to grow Apple's business by partnering with our enterprise, education, and channel sales teams, resellers, partners, and carriers.

We are seeking a Data Solution Engineer to lead the design and implementation of enterprise data quality solutions enhanced by generative AI and machine learning capabilities. This role bridges traditional data quality disciplines with emerging AI technologies to create intelligent, self-improving data quality frameworks that scale across global sales operations. The ideal candidate combines deep expertise in data quality architecture with hands-on experience applying large language models, semantic technologies, and automated reasoning to solve complex data challenges.

Description

In this role, you will:

  • Design end-to-end data quality architectures that integrate GenAI capabilities for automated profiling, anomaly detection, root cause analysis, and remediation recommendations

  • Develop reference architectures for AI-augmented data quality platforms, including integration patterns with enterprise data catalogs, knowledge graphs, and metadata management systems

  • Create technical blueprints for real-time data quality monitoring using ML-based pattern recognition and predictive quality scoring

  • Design and implement LLM-powered solutions for automated data quality rule generation, business glossary enrichment, and natural language data profiling interfaces

  • Create ML pipelines for automated data classification, sensitivity detection, and quality dimension scoring

  • Evaluate and integrate emerging AI technologies (foundation models, agents, multimodal capabilities) into the data quality technology stack

  • Lead technical design for data quality platforms built on modern data stack technologies (DataHub, Great Expectations, dbt, etc.)

  • Design API-first architectures enabling self-service data quality capabilities for business users and data product teams

  • Architect observability and lineage solutions providing end-to-end visibility into data quality across the data lifecycle

  • Partner with Data Stewards, Data Product Owners, and Business Analysts to translate quality requirements into technical solutions

  • Collaborate with enterprise architecture, information security, and infrastructure teams on platform decisions

  • Mentor engineers and analysts on data quality best practices and AI/ML implementation patterns

  • Present technical recommendations to senior leadership

Minimum Qualifications

  • Typically requires 8+ years of related experience in software, data architecture, data engineering, or data quality roles

  • Expert-level knowledge of data quality dimensions, profiling techniques, and remediation patterns

  • Strong understanding of master data management, data cataloging, and metadata management principles

  • Deep understanding of data modeling, dimensional modeling, and architectural patterns for analytical and operational data stores

  • Proficiency in the design and development of custom ETL pipelines using SQL and scripting languages (Python, etc.)

  • Experience with data pipeline orchestration (Airflow, Dagster, Prefect) and streaming technologies (Kafka, Spark)

  • Proficiency in major cloud platforms (AWS, Azure, or GCP) including data services and AI/ML offerings

  • Solid understanding and practical application of API design principles (REST, GraphQL) and microservices patterns

  • Demonstrated experience with version control systems (Git), CI/CD pipelines, and Infrastructure-as-Code (IaC) practices

  • Ability to thrive in a fast-paced environment while balancing speed, accuracy, and long-term design principles

  • Strong communication skills and comfort working with both technical and business stakeholders

  • Bachelor's degree in Computer Science, Data Science, Information Systems, or related field

Preferred Qualifications

  • Hands-on experience with LLMs such as GPT-4, Claude, or Llama, including expertise in prompt engineering, fine-tuning, and RAG implementations

  • Proficiency with data visualization and BI tools like Superset or Tableau

  • Strong understanding of authentication and authorization patterns

  • Master’s degree in a related field

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf) .
