
Cardinal Health

DATA ENGINEER (SRE)

IND
Mid level

Headquartered in Dublin, Ohio, Cardinal Health, Inc. (NYSE: CAH) is a global, integrated healthcare services and products company connecting patients, providers, payers, pharmacists and manufacturers for integrated care coordination and better patient management. Backed by nearly 100 years of experience, with more than 50,000 employees in nearly 60 countries, Cardinal Health ranks among the top 20 on the Fortune 500.
Department Overview
Augmented Intelligence (AugIntel) builds automation, analytics and artificial intelligence solutions that drive success for Cardinal Health by creating material savings, efficiencies, and revenue growth opportunities. The team drives business innovation by leveraging emerging technologies and turning them into differentiating business capabilities.

We are seeking an experienced support analyst to join the Cloud Data Engineering team, an agile team focused on building platform engineering and data ingestion solutions in Google Cloud.

High-level team responsibilities include:

• Process analysis and opportunity assessment, creating a roadmap for execution

• Troubleshoot data pipelines and automate frameworks

• Idea Incubation, Design Thinking, Conducting POCs / POVs

• Developing and fostering a culture of “Create”, “Consult”, and “Cultivate”

• Keep user experience at the core of development

• Ensure the right levels of logging, monitoring, and alerting in solutions to support operations
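As a minimal sketch of the logging/monitoring/alerting bullet (the threshold and all names here are illustrative, not from any Cardinal Health system): a check that logs a pipeline's error rate and decides whether it should alert.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.monitor")

# Hypothetical threshold; real values would come from the team's SLOs.
ERROR_RATE_ALERT = 0.05

def check_error_rate(total_rows: int, failed_rows: int) -> bool:
    """Log the pipeline's error rate and return True when it should alert."""
    rate = failed_rows / total_rows if total_rows else 0.0
    if rate > ERROR_RATE_ALERT:
        log.error("error rate %.1f%% exceeds threshold", rate * 100)
        return True
    log.info("error rate %.1f%% within threshold", rate * 100)
    return False

print(check_error_rate(1000, 120))  # True: 12% exceeds the 5% threshold
```

In practice the alert would be wired into the monitoring stack rather than returned as a boolean, but the shape (measure, log, compare against an agreed threshold) is the same.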

Responsibilities for this role:

• GCP BigQuery Operations – Support ingestion jobs, dataset access management, dataset authorization, and troubleshooting

• Airflow DAGs – Design, maintain, and optimize DAGs for data pipeline orchestration

• AtScale Management – Work on data virtualization for business intelligence

• Data Pipeline Automation – Develop automated workflows using Python, Shell scripting, and CI/CD tools (Concourse, Jenkins).

• Support the global 24x7 operations team by proactively overseeing, researching, and resolving production issues in automation solutions and platforms

• Support efforts to define monitoring metrics and practices for data platforms

• Analyze system monitoring data and make corrections and enhancements to the Data and ML Ops Platform

• Git & Version Control – Work extensively on Git repositories for version control, collaboration, and CI/CD

• Incident Management & Support – Work on ServiceNow (SNOW) tickets, resolve issues efficiently, and support production environments

• Coordinate with developers and support resources to meet the SLAs defined in the support governance document

• Complex SQL & Procedures – Write and optimize complex SQL queries and stored procedures

• Assist in improving existing support processes

• Contribute to continuous improvement initiatives for the Data and ML Ops Platform

• CI/CD Implementation – Design and maintain continuous integration and deployment pipelines
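The pipeline-troubleshooting and automation bullets above can be sketched with a small retry helper (a hypothetical illustration, not an actual Cardinal Health framework): it logs every attempt and re-raises the final failure so alerting can surface it.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retry(task, retries=3, delay_s=0.1):
    """Run a zero-argument pipeline task, retrying on failure.

    All names are illustrative; a production framework would add
    backoff, metrics, and alert hooks.
    """
    for attempt in range(1, retries + 1):
        try:
            result = task()
            log.info("task %s succeeded on attempt %d", task.__name__, attempt)
            return result
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == retries:
                raise  # surface the final failure for alerting
            time.sleep(delay_s)

# Usage: wrap a flaky ingestion step that succeeds on the third call.
calls = {"n": 0}
def flaky_ingest():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "loaded"

print(run_with_retry(flaky_ingest))  # prints "loaded" after two retries
```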

Qualifications:

• Bachelor's degree in Computer Science, Information Systems, a related technical degree, or equivalent industry experience, and a minimum of three years of IT experience

• 3 to 5 years of experience in software development and support

• 3+ years of hands-on experience in Big Data systems, data analytics, and data integration

• 3 to 5 years of experience in IT, with expertise in DevOps, Cloud Engineering, or Data Engineering.

• Working experience/knowledge in infrastructure/application architecture

• Working experience/knowledge in public cloud infrastructure (GCP preferred)

• Hands-on experience with GCP (BigQuery, GCS, VPC, IAM, Cloud Functions)

• Strong Python and Shell scripting skills for automation

• Proficiency in Airflow DAG support and scheduling

• Experience with CI/CD pipelines such as Concourse, Jenkins

• Google Cloud Platform certification is a plus

• Excellent analytical ability to troubleshoot issues in production environments

• Hands-on experience with Kubernetes, Terraform, and cloud security practices

• Experience working in Git workflows, branching strategies, and repository management

• Excellent verbal and written communication skills with ability to clearly articulate status of requests and issues both with IT and business partners

• Experience in troubleshooting Java and Python code and in SQL query optimization
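To make the SQL-optimization qualifications concrete, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for a warehouse (BigQuery tunes queries through partitioning and clustering rather than indexes, but the workflow of inspecting a plan, adjusting, and re-checking is the same; the table and index names are invented):

```python
import sqlite3

# Hypothetical example: a small in-memory table with a filterable column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, "east" if i % 2 else "west", i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM orders WHERE region = ?"

# 1. Inspect the plan before tuning: a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("east",)).fetchall()

# 2. Add an index on the filtered column.
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")

# 3. Re-check: the planner now searches via the index.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("east",)).fetchall()

print(plan_before[-1][3])  # a SCAN over orders
print(plan_after[-1][3])   # a SEARCH using idx_orders_region
```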

Beware of online recruitment scams: Recruitment scams are being run by individuals who are not affiliated with Cardinal Health. These scams attempt to utilize the Cardinal Health brand name and logo in emails or false job postings. Victims of these scams are often asked to provide sensitive personal data, provide banking information, or send fees via check or wire transfer prior to receiving a job offer. During the application process, Cardinal Health will never request or solicit money, bank or credit card information, or tax forms; require applicants to purchase equipment; or communicate via online chat rooms, Google Hangout, or social email accounts like Yahoo, Hotmail or Gmail. All legitimate job opportunities are posted on the Cardinal Health careers site. If you have any questions about whether a job advertisement or solicitation is a legitimate Cardinal Health job opening, please email us.

Candidates who are back-to-work, people with disabilities, without a college degree, and Veterans are encouraged to apply.

Cardinal Health supports an inclusive workplace that values diversity of thought, experience and background. We celebrate the power of our differences to create better solutions for our customers by ensuring employees can be their authentic selves each day. Cardinal Health is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, ancestry, age, physical or mental disability, sex, sexual orientation, gender identity/expression, pregnancy, veteran status, marital status, creed, status with regard to public assistance, genetic status or any other status protected by federal, state or local law.


Top Skills

Airflow
BigQuery
CI/CD
Cloud Functions
Concourse
GCP
Git
Jenkins
Kubernetes
Python
Shell scripting
SQL
Terraform
