
Fugro

Kafka - DevOps Engineer

In-Office
Navi Mumbai, Thane, Maharashtra
Senior level

Job Description

Who we are: Do you want to join our Geo-data revolution? Fugro’s global reach and unique know-how will put the world at your fingertips. Our love of exploration and technical expertise help us to provide our clients with invaluable insights. We source and make sense of the most relevant Geo-data for their needs, so they can design, build and operate their assets more safely, sustainably and efficiently. But we’re always looking for new talent to take the next step with us. For bright minds who enjoy meaningful work and want to push our pioneering spirit further. For individuals who can take the initiative, but work well within a team.
Job Purpose: We are building the Common Data Backbone (CDB), Fugro’s strategic data platform, which enables discovery, governance, and integration across our global geospatial data ecosystem. The CDB connects multiple cloud services and end-user applications through Apache Kafka, which serves as the CDB’s layer for event orchestration and integration services.
To further develop and deploy our CDB, we want to strengthen the team with an experienced Kafka DevOps Engineer who will expand and mature the Kafka infrastructure on AWS. This role focuses on secure cluster setup, lifecycle management, performance tuning, DevOps automation, and reliability engineering to ensure Kafka runs at enterprise-grade standards.
Key Responsibilities
• Design, deploy, and maintain secure, highly available Kafka clusters in AWS (MSK or self-managed).
• Perform capacity planning, performance tuning, and proactive scaling.
• Automate infrastructure and configuration using Terraform and GitOps principles.
• Implement observability: metrics, Grafana dashboards, CloudWatch alarms.
• Develop runbooks for incident response, disaster recovery, and rolling upgrades.
• Ensure compliance with security and audit requirements (ISO 27001).
• Collaborate with development teams to provide Kafka best practices for .NET microservices and Databricks streaming jobs.
• Conduct resilience testing and maintain documented RPO/RTO strategies.
• Drive continuous improvement in cost optimization, reliability, and operational maturity.
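To give a flavour of the capacity-planning work described above, here is a minimal, hedged sketch of a common back-of-the-envelope rule for sizing a topic's partition count from target and per-partition throughput. The function name and all figures are illustrative assumptions, not Fugro's actual numbers or tooling.

```python
import math

def min_partitions(target_mb_s: float,
                   producer_mb_s_per_partition: float,
                   consumer_mb_s_per_partition: float) -> int:
    """Estimate the minimum partition count needed to sustain a target
    throughput, taking the stricter of the producer-side and
    consumer-side per-partition limits: max(T/p, T/c), rounded up."""
    needed = max(target_mb_s / producer_mb_s_per_partition,
                 target_mb_s / consumer_mb_s_per_partition)
    return math.ceil(needed)

# Hypothetical example: a 50 MB/s target, with measured per-partition
# throughput of 10 MB/s (produce) and 5 MB/s (consume).
print(min_partitions(50, 10, 5))  # -> 10
```

In practice the measured per-partition figures come from load tests on the actual cluster, and the result is usually padded with headroom for growth and rebalancing.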
Required Skills & Experience
• 5-6 years in DevOps/SRE roles, with 4 years of hands-on Kafka operations at scale.
• Strong knowledge of Kafka internals: partitions, replication, ISR, controller quorum, KRaft.
• Expertise in AWS services: VPC, EC2, MSK, IAM, Secrets Manager, networking.
• Proven experience with TLS/mTLS, SASL/SCRAM, ACLs, and secure cluster design.
• Proficiency in Infrastructure as Code (Terraform preferred).
• Familiarity with CI/CD pipelines for cluster and topic configuration.
• Monitoring and alerting using Grafana, CloudWatch, and log aggregation.
• Disaster recovery strategies.
• Strong scripting skills (Bash, Python) for automation and tooling.
• Excellent documentation and communication skills.
• Kafka certification.
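As an illustration of the scripting and CI/CD skills listed above, a hedged sketch: a small Python policy check that a pipeline might run against GitOps-declared topic configurations before applying them to a cluster. The topic names, naming convention, and thresholds are all invented for the example.

```python
import re

# Hypothetical GitOps-style topic declarations, as a CI job might load
# them from a YAML/JSON file in the cluster-configuration repository.
TOPICS = [
    {"name": "cdb.geodata.ingest.v1", "replication_factor": 3,
     "config": {"min.insync.replicas": "2"}},
    {"name": "Temp_Topic", "replication_factor": 1,
     "config": {"min.insync.replicas": "1"}},
]

# Assumed convention: dot-separated lowercase segments, versioned suffix.
NAME_PATTERN = re.compile(r"^[a-z0-9]+(\.[a-z0-9]+)*\.v\d+$")

def validate_topic(topic: dict) -> list[str]:
    """Return a list of policy violations for one declared topic."""
    errors = []
    if not NAME_PATTERN.match(topic["name"]):
        errors.append(f"{topic['name']}: name violates convention")
    if topic["replication_factor"] < 3:
        errors.append(f"{topic['name']}: replication factor below 3")
    if int(topic["config"].get("min.insync.replicas", "1")) < 2:
        errors.append(f"{topic['name']}: min.insync.replicas below 2")
    return errors

for t in TOPICS:
    for err in validate_topic(t):
        print(err)
```

Failing the pipeline on any violation keeps topic configuration auditable and consistent, which is the point of managing it through CI/CD rather than ad-hoc CLI changes.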
Nice-to-Have
• Experience with AWS MSK advanced features.
• Knowledge of Schema Registry (Protobuf) and schema governance.
• Familiarity with Databricks Structured Streaming and Kafka Connect.
• Certifications: Confluent Certified Administrator, AWS SysOps/Architect Professional.
• Databricks data analyst/engineering certification.
• Geo-data experience.
What we offer:
Fugro provides a positive work environment as well as projects that will satisfy the most curious minds. We also offer great opportunities to stretch and develop yourself. By giving you the freedom to grow faster, we think you’ll be able to do what you do best, better. Which should help us to find fresh ways to get to know the earth better. We encourage you to be yourself at Fugro. So bring your energy and enthusiasm, your keen eye and can-do attitude. But bring your questions and opinions too. Because to be the world’s leading Geo-data specialist, we need the strength in depth that comes from a diverse, driven team.
Our view on diversity, equity and inclusion:
At Fugro, our people are our superpower. Their variety of viewpoints, experiences, knowledge and talents give us collective strength. Distinctive beliefs and diverse backgrounds are therefore welcome, but discrimination, harassment, inappropriate behavior and unfair treatment are not. Everybody is to be well-supported and treated fairly. And everyone must be valued and have their voice heard. Crucially, we believe that getting this right brings a sense of belonging, of safety and acceptance, that makes us feel more connected to Fugro’s purpose ‘together create a safe and livable world’ – and to each other.
HSE Responsibilities:
Responsible for ensuring the safety of self and others at site.
• Prevent damage to equipment and assets.
• Follow all safety signs, procedures, and safe working practices.
• Use appropriate PPE.
• Participate in mock drills.
• Entitled to refuse to undertake any activity considered unsafe.
• Fill in a hazard observation card wherever a hazard is noticed at site.
• Maintain safe housekeeping of the workplace.
• Stop any operation that is deemed unsafe.
• Be able to operate a fire extinguisher in case of fire.
• Report any incident as soon as possible to the immediate supervisor and HSE manager.
• Complete HSE trainings as instructed.

Disclaimer for recruitment agencies:

Fugro does not accept unsolicited applications from recruitment agencies. Unsolicited approaches to Fugro Recruitment or any Fugro employee are not appreciated.

Top Skills

.NET
Apache Kafka
AWS
Bash
CloudWatch
Databricks
Grafana
Python
Terraform


