
Optum

Senior Data Engineering Lead

Posted 3 Days Ago
In-Office
Bangalore, Bengaluru Urban, Karnataka
Senior level
Lead design, build, and operation of reliable batch and streaming data pipelines and AI training/evaluation datasets. Implement data quality, observability, governance for PII/PHI, and performance/cost optimizations. Partner with ML engineers on embedding/vector pipelines and RAG/LLM systems, define data contracts, create reusable frameworks, measure impact, and mentor engineers while contributing to platform architecture decisions.
The summary above was generated by AI
Requisition Number: 2340860
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
  • Design, build, and operate reliable data pipelines (batch/stream) that power analytics and AI/ML use cases end-to-end
  • Develop and curate high-quality datasets for AI features (e.g., prompt/RAG inputs, training data, evaluation sets) with clear contracts, SLAs, and lineage
  • Implement/use data quality, observability, and incident response practices (tests, anomaly detection, monitoring, runbooks, on-call readiness)
  • Build and/or maintain data models and semantic layers that make data easy and safe for internal customers (analytics, operations, product)
  • Partner with ML/AI engineers and product teams to productionize AI systems (feature stores where appropriate, embedding pipelines, vector index refresh, offline/online consistency)
  • Establish and adhere to governance for sensitive data used in AI (PII/PHI handling, access controls, retention, auditability) and ensure compliance requirements are met
  • Optimize performance and cost across the data platform (query tuning, partitioning/clustering, workload management, storage lifecycle)
  • Define and enforce data contracts with upstream/downstream teams; lead discovery to clarify requirements, value, and adoption readiness
  • Create reusable frameworks/templates for pipelines, validation, and AI data preparation to reduce friction and increase consistency
  • Measure impact: instrument usage, quality, and business outcomes; support experimentation and A/B testing where relevant
  • Mentor engineers, drive technical standards, and lead design reviews with a focus on long-term maintainability
  • Contribute to architecture decisions: cloud data platform, orchestration, streaming, metadata/lineage, and AI-enablement tools
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
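As one illustration of the data-quality and data-contract work described in the responsibilities above, a minimal contract check might look like the following sketch. The column names and types here are invented for illustration only, not an actual Optum schema:

```python
# Hypothetical data contract: required columns and expected Python types.
# These names are illustrative only.
CONTRACT = {
    "member_id": str,
    "claim_amount": float,
}

def validate_rows(rows):
    """Split rows into (valid, errors) against CONTRACT.

    A row fails if a contracted column is absent/None or has the wrong type.
    """
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [c for c in CONTRACT if row.get(c) is None]
        bad_type = [
            c for c, t in CONTRACT.items()
            if row.get(c) is not None and not isinstance(row[c], t)
        ]
        if missing or bad_type:
            errors.append({"row": i, "missing": missing, "bad_type": bad_type})
        else:
            valid.append(row)
    return valid, errors
```

In practice, checks like these typically run inside a framework such as Great Expectations or dbt tests; the sketch only shows the contract idea itself.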

Required Qualifications:
  • Bachelor's degree in Fine Arts, Industrial Design, or related field. Concentration in UCD, HCI, or similar highly desired
  • 8+ years of production data engineering experience delivering and operating data pipelines/datasets used by multiple teams
  • Experience building training data pipelines and evaluation frameworks: ground truth management, reproducibility, data leakage prevention, and regression testing
  • Advanced SQL plus solid Python skills (or Scala/Java) for building robust data pipelines, libraries, and automation
  • Hands-on experience with Snowflake and Databricks (Delta/Unity concepts helpful), including modeling, performance tuning, and operational best practices
  • Orchestration experience with Apache Airflow and/or Azure Data Factory/Data Lake Flow; solid understanding of scheduling, retries, idempotency, and backfills
  • Solid Azure experience (storage patterns, networking basics, identity/access patterns); Infrastructure-as-Code experience (Terraform/Bicep) preferred
  • Demonstrated experience supporting RAG and/or LLM applications: embedding pipelines, document preprocessing, metadata strategies, vector stores, and retrieval evaluation
  • Proven ability to implement data quality/observability (Great Expectations/dbt tests/monitoring equivalents) and operate on-call with clear runbooks
  • Solid security/governance mindset, especially for regulated domains (PII/PHI), including auditing and least-privilege access
  • Proven excellent cross-team communication: can turn ambiguous asks into scoped work with clear success metrics and adoption plans
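To ground the RAG/vector-store qualification above, here is a deliberately minimal sketch of cosine-similarity retrieval over an in-memory index. The document IDs and vectors are made up; a real system would use an embedding model and a vector database rather than hand-written vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """index is a list of (doc_id, vector); return the k closest doc_ids."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

Retrieval evaluation, as mentioned in the qualifications, then amounts to checking whether the known-relevant documents appear in the top-k results for a labeled set of queries.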

Preferred Qualifications:
  • Experience with ML tooling (MLflow/model registries), vector database operations, prompt/version management, and drift monitoring signals for LLM systems
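The drift-monitoring item above can be illustrated with one very crude signal: the cosine distance between the mean embedding of a baseline window and of a current window. This is purely a sketch; production drift detection would use richer statistics and proper alerting thresholds:

```python
import math

def _mean_vector(vectors):
    """Element-wise mean of a non-empty list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def embedding_drift(baseline, current):
    """Cosine distance (1 - similarity) between window mean embeddings.

    Near 0.0 means the current window resembles the baseline; values
    toward 1.0 suggest the input distribution has shifted.
    """
    a, b = _mean_vector(baseline), _mean_vector(current)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a)) or 1.0
    norm_b = math.sqrt(sum(y * y for y in b)) or 1.0
    return 1.0 - dot / (norm_a * norm_b)
```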

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Top Skills

SQL, Python, Scala, Java, Snowflake, Databricks, Delta, Unity, Apache Airflow, Azure Data Factory, Data Lake Flow, Azure, Terraform, Bicep, Great Expectations, dbt, Embedding Pipelines, Vector Stores, LLM, RAG

