The Data Engineer role involves server-side development, AI/ML integration, debugging services, and collaboration on data projects.
We’re Bringg, a delivery management leader serving 800+ customers globally. Leading enterprise retailers and brands use Bringg to grow their delivery capacity, reduce costs, and improve customer experiences. Every year, we process over 200 million orders through our smart, automated omnichannel platform.
We are seeking a forward-thinking Data Engineer to join our team during a pivotal time of transformation. As we redesign our data pipeline solutions, you will play a key role in shaping and executing our next-generation data infrastructure. This is a hands-on, architecture-driven role ideal for someone ready to help lead this evolution.
In this role, you will:
- Drive the design and architecture of scalable, efficient, and resilient batch and streaming data pipelines.
- Shape the implementation of modern, distributed systems to support high-throughput data processing and real-time analytics.
- Collaborate cross-functionally with data scientists, engineers, and product stakeholders to deliver end-to-end data-driven capabilities.
- Optimize legacy systems during the migration phase, ensuring a seamless transition with minimal disruption.
- Contribute to DevOps and MLOps processes and enhance the reliability, monitoring, and automation of data infrastructure.
- Support the integration and deployment of AI/ML models within the evolving data platform.
What you’ll bring:
- 4+ years of experience building and maintaining data pipelines using tools such as Flink, Spark, Kafka, and Airflow.
- Deep understanding of SQL and NoSQL ecosystems (e.g., Postgres, Redis, Elastic, Delta Lake).
- Solid backend development experience, with a strong command of OOP/OOD principles and design patterns.
- Demonstrated experience designing and implementing new data architectures, especially in fast-paced or transitioning environments.
- Exposure to MLOps and the full lifecycle of AI/ML model deployment in production.
- Passion for learning and applying new technologies, whether a new stack or a paradigm shift.
- Experience in DevOps and asynchronous systems; familiarity with RabbitMQ, Docker, WebSockets, and Linux environments is a plus.
- Comfortable taking initiative and working independently with minimal structure.
- Experience with routing and navigation algorithms is a plus.
Top Skills
Docker
Druid
Hadoop
Kafka
NoSQL
Postgres
Python
RabbitMQ
Redis
Spark
SQL
TypeScript