
Beghou Consulting

Senior DevOps Engineer

Hybrid
Hyderabad, Telangana
Senior level
Beghou brings over three decades of experience helping life sciences companies optimize their commercialization through strategic insight, advanced analytics, and technology. From developing go-to-market strategies and building foundational data analytics infrastructures to leveraging artificial intelligence to improve customer insights and engagement, Beghou helps life sciences companies maximize performance across their portfolios. Beghou also deploys proprietary and third-party technology solutions to help companies forecast performance, design territories, manage customer data, organize and report on medical and commercial data, and more. Headquartered in Evanston, Illinois, we have 10 global offices.

Our mission is to bring together analytical minds and innovative technology to help life sciences companies navigate the complexity of health care and improve patient outcomes.

Purpose of Job

  • Beghou Consulting is seeking a hands-on, proactive Site Reliability / DevOps Engineer to strengthen and scale our cloud foundation across Azure and AWS. In this role, you will be responsible for designing, automating, and maintaining mission‑critical infrastructure that powers our internal platforms, data engineering pipelines, and client-facing solutions. You will own Infrastructure as Code practices, build secure and reliable CI/CD workflows, and ensure our cloud environments meet high standards of performance, security, and resilience.

  • As a key technical contributor, you’ll collaborate closely with development, data engineering, and client teams to deliver cloud solutions that are repeatable, cost‑efficient, and aligned to best practices. You will play a central role in enabling modern data workloads through Databricks, driving automation through Python-based tooling, and strengthening our disaster recovery and operational readiness. This is an opportunity to have real ownership, influence architectural decisions, and help evolve a rapidly growing cloud ecosystem.

We'll trust you to:

  • Infrastructure as Code 
  • Develop, maintain, and review Terraform modules to provision and manage cloud resources. 
  • Apply best practices for Terraform state management, module design, and testing. 
  • Leverage CDK for Terraform (CDKTF) or Terragrunt patterns where appropriate (see the sketch below).
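
A rough illustration of the CDKTF pattern mentioned above, in Python. This is a sketch only: it assumes the prebuilt cdktf-cdktf-provider-aws package, and the stack name, bucket name, tags, and region are placeholders rather than Beghou conventions.

```python
# Minimal CDKTF (CDK for Terraform) sketch: one stack that provisions a
# tagged S3 bucket. `app.synth()` emits Terraform JSON that the usual
# plan/apply workflow (or a pipeline) can consume.
from constructs import Construct
from cdktf import App, TerraformStack, TerraformOutput
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.s3_bucket import S3Bucket


class DataLandingStack(TerraformStack):
    def __init__(self, scope: Construct, stack_id: str) -> None:
        super().__init__(scope, stack_id)

        # Provider configuration; the region is illustrative.
        AwsProvider(self, "aws", region="ap-south-1")

        # A tagged bucket, following the tagging conventions the role calls for.
        bucket = S3Bucket(
            self,
            "landing_bucket",
            bucket="example-landing-bucket",  # placeholder name
            tags={"owner": "devops", "cost-center": "data-platform"},
        )

        # Expose the bucket name so downstream stacks or pipelines can read it.
        TerraformOutput(self, "landing_bucket_name", value=bucket.bucket)


app = App()
DataLandingStack(app, "data-landing")
app.synth()  # writes cdktf.out/ for `cdktf deploy` or plain `terraform apply`
```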

  • Cloud Platform & Data Platform Management 
  • Architect and operate services in Azure and AWS (compute, storage, networking, databases). 
  • Provision, configure, and manage Databricks workspaces, clusters, and jobs to support data pipelines. 
  • Implement tagging, cost optimization, and security controls across both clouds and Databricks environments (see the sketch below).
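
A minimal sketch of the kind of tagging and cost guardrail this covers, using the Databricks SDK for Python. Assumptions: databricks-sdk is installed, workspace credentials are already configured (environment variables or ~/.databrickscfg), and the required tag keys and autotermination threshold are placeholders.

```python
# Audit Databricks clusters for required tags and an autotermination limit.
from databricks.sdk import WorkspaceClient

REQUIRED_TAGS = {"owner", "cost-center"}   # illustrative tag policy
MAX_AUTOTERMINATION_MINUTES = 120          # illustrative cost guardrail

w = WorkspaceClient()  # picks up auth from the environment or config file

for cluster in w.clusters.list():
    tags = cluster.custom_tags or {}
    missing = REQUIRED_TAGS - set(tags)
    if missing:
        print(f"{cluster.cluster_name}: missing tags {sorted(missing)}")

    auto_term = cluster.autotermination_minutes or 0
    if auto_term == 0 or auto_term > MAX_AUTOTERMINATION_MINUTES:
        print(f"{cluster.cluster_name}: autotermination set to {auto_term} minutes")
```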

  • CI/CD for Infrastructure 
  • Build, test, and maintain Azure DevOps Pipelines or GitHub Actions to automate infrastructure provisioning and application deployment. 
  • Manage source code in Azure DevOps or GitHub repos; enforce branching strategies, pull-request workflows, and policy-as-code checks (see the sketch below).
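
A sketch of the sort of policy-as-code check a pipeline step might run: fail the build if a Terraform plan creates or updates resources without required tags. Assumptions: the input is the JSON emitted by `terraform show -json`, and the required tag keys and resource-type prefixes are placeholders.

```python
#!/usr/bin/env python3
# Read a Terraform plan (JSON) from stdin or a file argument and report
# taggable resources that are being created/updated without required tags.
import json
import sys

REQUIRED_TAGS = {"owner", "cost-center"}
TAGGABLE_PREFIXES = ("aws_", "azurerm_")  # crude filter, adjust per estate


def main() -> int:
    source = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
    plan = json.load(source)

    violations = []
    for change in plan.get("resource_changes", []):
        actions = change.get("change", {}).get("actions", [])
        if not ({"create", "update"} & set(actions)):
            continue
        if not change.get("type", "").startswith(TAGGABLE_PREFIXES):
            continue
        after = change.get("change", {}).get("after") or {}
        tags = after.get("tags") or {}
        missing = REQUIRED_TAGS - set(tags)
        if missing:
            violations.append(f"{change['address']}: missing {sorted(missing)}")

    for line in violations:
        print(f"POLICY VIOLATION: {line}")
    return 1 if violations else 0  # non-zero exit fails the pipeline step


if __name__ == "__main__":
    sys.exit(main())
```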

  • Collaboration & Documentation 
  • Partner with client and data engineering teams to integrate infrastructure changes safely into development workflows. 
  • Produce clear architecture diagrams, runbooks, and “how-to” guides for both technical and non-technical stakeholders. 

  • Programming & Automation 
  • Write and maintain automation scripts and CLI tools in Python to streamline operational tasks (see the sketch below). 
  • Contribute to internal SDKs or libraries for shared tooling and infrastructure utilities. 
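
A small illustration of the kind of internal CLI tooling this describes: check (or fix) Terraform formatting across every module directory in a repository. This is a sketch, not an existing tool; it uses only the Python standard library and assumes the terraform binary is on PATH.

```python
#!/usr/bin/env python3
# Run `terraform fmt` (check mode by default) in every directory that
# contains .tf files under a given root, and report the results.
import argparse
import pathlib
import subprocess
import sys


def module_dirs(root: pathlib.Path) -> list[pathlib.Path]:
    """Return every directory that directly contains at least one .tf file."""
    return sorted({tf.parent for tf in root.rglob("*.tf")})


def main() -> int:
    parser = argparse.ArgumentParser(description="Terraform formatting helper")
    parser.add_argument("root", type=pathlib.Path, help="repo or modules root")
    parser.add_argument("--fix", action="store_true", help="rewrite files in place")
    args = parser.parse_args()

    modules = module_dirs(args.root)
    failures = []
    for module in modules:
        cmd = ["terraform", "fmt"] + ([] if args.fix else ["-check"])
        result = subprocess.run(cmd, cwd=module, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(module)
            print(f"fmt issues in {module}:\n{result.stdout}{result.stderr}")

    print(f"checked {len(modules)} module(s), {len(failures)} with issues")
    return 1 if failures and not args.fix else 0


if __name__ == "__main__":
    sys.exit(main())
```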

You'll need to have:

  • Experience: 3+ years in a DevOps or SRE role with production workloads in Azure and AWS. 
  • IaC Proficiency: Deep hands-on Terraform experience, including Terraform Cloud, Terragrunt, module design, and testing. 
  • CI/CD Expertise: Proven ability to build and maintain Azure DevOps Pipelines and to manage code in either Azure DevOps or GitHub repos. 
  • Programming Skills: Strong Python scripting skills; able to write clean, testable, and reusable code. 
  • Data Platform Understanding: Familiarity with provisioning and managing Databricks workspaces, clusters, and jobs. 
  • Disaster Recovery: Experience designing DR plans, automated backups, and conducting restore drills. 
  • Cloud Services: Solid understanding of core Azure and AWS services (VMs, networking, storage, IAM, RDS/SQL). 
  • Problem-Solving: Excellent troubleshooting abilities and a calm, methodical approach to incident response. 
  • Communication: Clear written and verbal communication; adept at documenting complex systems simply. 

Preferred Qualifications:

  • Advanced IaC Tools: Experience with CDK for Terraform (CDKTF) or Terragrunt for composing infrastructure patterns. 
  • Languages: PowerShell, Bicep, Python, Terraform 
  • Security & Compliance: Familiarity with cloud security best practices (CIS benchmarks, Azure Policy, AWS IAM policies). 
  • Networking: In-depth knowledge of VPCs, VNets, load balancers, VPNs and cross-region connectivity. 
  • Serverless & Containers: Experience with serverless platforms (Azure Functions, AWS Lambda) and container registries. 

At Beghou Consulting, you'll join a highly collaborative, values-driven team where technical excellence, analytical rigor, and personal growth converge. Whether you're passionate about AI innovation, building commercialization strategies, or shaping the next generation of data-first solutions in life sciences, this is a place to make an impact!

Top Skills

AWS
Azure
Azure DevOps
Bicep
Databricks
GitHub Actions
PowerShell
Python
Terraform
