Greenway Health

Manager, Product Development – Data Engineering/AWS/AI

Posted 2 Days Ago
Remote – Hiring Remotely in India
Expert/Leader

Job Description

Job Title

Manager, Product Development – Data Engineering/AWS/AI

Date

2/02/2026

Reports To

VP, Product Development

EEO Category

First/Mid-Level Officials & Managers

Organization

Product Development

FLSA

Exempt


Job Summary

The Engineering Manager, Data Lakehouse & AI Engineering is responsible for leading the architecture, technical design, engineering execution, and operational excellence of Greenway’s AWS-based Data Lakehouse platform.

This role provides technical leadership across multiple scrum pods (2–4 teams of 4–5 engineers each) and establishes engineering best practices across all phases of the software development life cycle. The incumbent will drive high-quality, scalable, secure, and cost-efficient data solutions while leveraging AI-powered engineering tools to improve productivity, automation, and code quality.

This position combines deep technical expertise with strong people leadership, stakeholder management, and strategic vision to ensure alignment between engineering execution and business objectives.

Essential Duties & Responsibilities

•  Provide architectural leadership and technical oversight for the AWS-based Data Lakehouse platform, ensuring scalability, security, reliability, and cost optimization.

•  Develop and enhance architectural design frameworks to ensure high-quality, compliant, and performant data systems aligned with business objectives.

•  Lead technical design reviews, architecture reviews, and code quality initiatives (Gerrit-based workflows).

•  Establish and enforce best practices across:

o    Infrastructure as Code (Terraform)

o    CI/CD pipelines

o    Automated testing and quality engineering

o    Data governance and security standards

o    Non-Functional Requirements (NFRs): scalability, availability, resiliency, observability, and performance

•  Ensure effective deployment of AWS technologies including but not limited to S3, Glue, EMR, Redshift, Athena, Lambda, ECS/EKS, IAM, VPC, CloudWatch, and relevant AI services such as Bedrock and SageMaker.

•  Drive AI-assisted engineering practices using tools such as GitHub Copilot, Claude, ChatGPT, MCP, and other emerging technologies to improve development efficiency, automation, and documentation quality.

•  Implement guardrails and governance for responsible AI usage within engineering teams.

•  Provide leadership, vision, and strategy to ensure daily operations of development teams align with both present and long-term business goals.

•  Manage technically focused scrum teams across multiple locations, ensuring predictable and high-quality delivery.

•  Partner closely with Product, Analytics, Data Science, Security, and business stakeholders to translate requirements into scalable technical solutions.

•  Drive cloud cost optimization and efficiency initiatives (FinOps mindset).

•  Mentor, coach, and develop engineering talent; build succession plans and foster a high-performance culture.
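By way of illustration (not part of the formal duties), the cloud cost optimization responsibility above might be supported by tooling along these lines. This is a minimal Python sketch under assumed inputs: the row layout, the `cost_center` tag name, and the budget figure are all hypothetical, standing in for what would in practice come from an AWS Cost and Usage Report export.

```python
# Hypothetical FinOps helper: roll up AWS spend by a cost-allocation tag and
# flag tag values that exceed a budget. All rows and figures are illustrative.
from collections import defaultdict

def cost_by_tag(rows, tag="cost_center"):
    """Sum each row's 'cost_usd' under its tag value; untagged spend is kept visible."""
    totals = defaultdict(float)
    for row in rows:
        totals[row.get(tag, "untagged")] += row["cost_usd"]
    return dict(totals)

def over_budget(totals, budget_usd):
    """Return the tag values whose total spend exceeds the budget, sorted."""
    return sorted(t for t, c in totals.items() if c > budget_usd)

rows = [
    {"service": "AmazonS3", "cost_center": "lakehouse", "cost_usd": 1200.0},
    {"service": "AmazonRedshift", "cost_center": "lakehouse", "cost_usd": 4300.0},
    {"service": "AWSGlue", "cost_center": "ingestion", "cost_usd": 800.0},
    {"service": "AmazonEC2", "cost_usd": 150.0},  # untagged spend surfaces separately
]

totals = cost_by_tag(rows)
print(totals)                        # {'lakehouse': 5500.0, 'ingestion': 800.0, 'untagged': 150.0}
print(over_budget(totals, 1000.0))  # ['lakehouse']
```

Surfacing untagged spend explicitly, rather than dropping it, is what makes a tagging policy enforceable.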



Experience

·         Minimum 10 years of progressive experience in software or data engineering.

·         3–5+ years of experience leading engineering teams in a managerial capacity.

·         Proven experience designing and implementing Data Lakehouse architecture on AWS.

·         Experience managing multiple scrum teams (2–4 pods preferred).

·         Experience driving technical strategy and engineering standards across distributed teams.

Education

·         Bachelor’s degree in computer science or related field required.

·         Master’s degree preferred.

Minimum Qualifications

·         Strong understanding of distributed systems, large-scale data platforms, and Data Lakehouse architecture.

·         Deep experience with AWS cloud architecture and services.

·         Strong hands-on experience with Infrastructure as Code (Terraform).

·         Experience with Gerrit or comparable enterprise code review systems.

·         Experience implementing CI/CD, automated testing, and DevOps best practices.

·         Demonstrated ability to define and enforce NFRs across enterprise systems.

·         Experience driving engineering productivity improvements through AI tools.

·         Strong stakeholder management and executive communication skills.
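As one concrete illustration of the "CI/CD, automated testing, and DevOps best practices" qualification above, a data-quality gate of this kind might run in a pipeline before a merge. This is a hedged sketch: the required fields and record shapes are hypothetical, not part of the actual role or platform.

```python
# Hypothetical CI data-quality gate: validate that ingested records satisfy
# basic schema and null constraints. Field names are illustrative only.
REQUIRED_FIELDS = {"patient_id": str, "visit_date": str, "charge_usd": float}

def validate_record(record):
    """Return a list of human-readable violations for one record."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record or record[field] is None:
            errors.append(f"{field}: missing")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

def validate_batch(records):
    """Map record index -> violations, keeping only the failing records."""
    report = {}
    for i, record in enumerate(records):
        errors = validate_record(record)
        if errors:
            report[i] = errors
    return report

batch = [
    {"patient_id": "p-100", "visit_date": "2026-01-15", "charge_usd": 250.0},
    {"patient_id": "p-101", "visit_date": None, "charge_usd": "250"},
]
print(validate_batch(batch))
# {1: ['visit_date: missing', 'charge_usd: expected float']}
```

Wiring a check like this into the pipeline turns the NFR "enforcement" bullet from a policy statement into a failing build.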


Skills/Knowledge


·         Strategic thinker and proven leader with strong communication and collaboration skills.

·         Strong technical depth in:

o    Data Lakehouse technologies (e.g., Delta Lake, Iceberg, Hudi)

o    ETL/ELT pipelines

o    Streaming frameworks (Kafka/Kinesis)

o    Data modeling and governance

·         Experience with cloud architecture and DevOps practices.

·         Understanding of AI concepts including LLMs, Agentic AI, and RAG.

·         Experience using AI-assisted development tools such as GitHub Copilot, Claude, ChatGPT, MCP.

·         Experience with AWS AI services (e.g., Bedrock, SageMaker) preferred.

·         Ability to determine clear prioritization and manage trade-offs across roadmap, resources, and delivery timelines.

·         Strong people management skills with demonstrated ability to mentor and grow high-performing engineering teams in a fast-paced environment.

·         Strong executive presence and ability to communicate complex technical concepts to non-technical stakeholders.

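To make the ETL/ELT and data-modeling skills above concrete, an ELT-style transform might look like the following. This is an illustrative sketch only: the bronze/silver layering convention, field names, and payload shape are assumptions, not a description of Greenway's platform.

```python
# Hypothetical ELT transform: normalize a raw event payload (as it might land
# in a lakehouse "bronze" layer) into a flat, typed "silver" shape.
from datetime import datetime, timezone

def to_silver(raw_event):
    """Flatten one raw event into the curated schema, coercing types."""
    return {
        "event_id": str(raw_event["id"]),
        "event_type": raw_event.get("type", "unknown").lower(),
        "occurred_at": datetime.fromtimestamp(raw_event["ts"], tz=timezone.utc).isoformat(),
        "source": raw_event.get("meta", {}).get("source", "unspecified"),
    }

raw = {"id": 42, "type": "ChartUpdated", "ts": 1760000000, "meta": {"source": "ehr"}}
print(to_silver(raw))
```

Keeping the transform a pure function of the raw record is what makes it easy to unit-test and to replay over historical data.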

Work Environment/Physical Demands

·         Office environment

·         Ability to use a computer, phone, and other office equipment

·         Ability to sit, stand, or walk for long periods of time

·         Ability to travel as needed




Disclaimer: This Job Summary indicates the general nature and level of work expected of the incumbent(s). It is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities required of the incumbent. Incumbent(s) may be asked to perform other duties as requested. Greenway Health, LLC is an Equal Opportunity Employer. We do not discriminate based on race, religion, age, gender, national origin, sexual orientation, disability, or veteran status.


Top Skills

Amazon S3, AWS Glue, Amazon EMR, Amazon Redshift, Amazon Athena, AWS Lambda, Amazon ECS, Amazon EKS, AWS IAM, Amazon VPC, Amazon CloudWatch, Amazon Bedrock, Amazon SageMaker, Terraform, Gerrit, GitHub Copilot, Claude, ChatGPT, MCP, Delta Lake, Apache Iceberg, Apache Hudi, Kafka, Amazon Kinesis, Data Lakehouse, ETL/ELT, CI/CD, DevOps, FinOps

