
Synechron

Enterprise Data Engineer | Cloud (AWS, Azure) | Big Data (Spark, Hadoop) | ETL & Data Pipelines | SQL & NoSQL

Posted 8 Days Ago
Remote
Hiring Remotely in Hinjawadi, Pune, Mahārāshtra, IND
Senior level

Job Summary

Synechron is seeking a proficient Data Engineer to support the design, development, and maintenance of scalable, efficient data pipelines and enterprise data solutions. The role involves collaborating with cross-functional teams to gather requirements, implement data management strategies, and ensure data quality, security, and availability. The Data Engineer will leverage experience in cloud platforms, big data tools, and modern development practices to enable data-driven decision-making and operational excellence across the organization.

Software Requirements

Required:

  • Strong understanding of data management concepts, cloud platforms (preferably AWS or Azure), and scalable architectures.

  • Hands-on experience with programming languages such as Python, Java, or JavaScript (Node.js).

  • Practical experience with big data tools like Apache Spark, Hadoop, Flink, or similar frameworks.

  • Working knowledge of relational SQL databases (MySQL, SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB, DynamoDB).

  • Experience with data orchestration and pipeline tools such as Apache Airflow, Luigi, or comparable frameworks.

  • Familiarity with version control systems such as Git and collaboration tools like JIRA and Confluence.

Preferred:

  • Knowledge of containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).

  • Experience in deploying and managing data pipelines on cloud platforms like AWS Glue, Azure Data Factory, or GCP Dataflow.

  • Familiarity with stream processing tools like Kafka or Kinesis.

  • Exposure to data security protocols and compliance standards (GDPR, HIPAA, etc.).
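To give a flavor of the pipeline work these requirements describe, here is a minimal extract-transform-load sketch in Python. It uses the standard-library sqlite3 module as a stand-in for MySQL/PostgreSQL, and all table and column names are hypothetical, not part of any Synechron system:

```python
import sqlite3

# In-memory SQLite stands in for a production MySQL/PostgreSQL instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "EU"), (2, None, "US"), (3, 80.5, "EU")],
)

# Transform + load: drop rows failing a basic quality check (NULL amounts),
# then load a per-region aggregate into a reporting table.
conn.execute("CREATE TABLE region_totals (region TEXT, total REAL)")
conn.execute(
    """
    INSERT INTO region_totals
    SELECT region, SUM(amount) FROM raw_orders
    WHERE amount IS NOT NULL
    GROUP BY region
    """
)

print(dict(conn.execute("SELECT region, total FROM region_totals")))  # → {'EU': 200.5}
```

In a real deployment the same extract/transform/load steps would typically run as tasks in an orchestrator such as Apache Airflow or Luigi, which is what the orchestration requirement above refers to.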

Overall Responsibilities

  • Design, develop, and maintain large-scale data pipelines, ETL workflows, and data integrations to support analytics, reporting, and operational needs.

  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable solutions.

  • Optimize and monitor data pipelines for performance, scalability, and data quality.

  • Implement data governance, validation, and cataloging processes to ensure data integrity and security.

  • Automate deployment, testing, and data infrastructure changes using CI/CD practices.

  • Participate in architecture discussions, technical reviews, and documentation to support data ecosystem growth.

  • Stay informed of emerging data technologies, industry standards, and best practices, and incorporate relevant innovations.

Expected outcomes:
Reliable, scalable, secure, and high-performing data pipelines that support organizational analytics and business intelligence initiatives.
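The data-validation responsibility above can be sketched as a simple quality gate that a pipeline might run before loading records. The field names and rules below are illustrative assumptions, not Synechron's actual standards:

```python
# Hypothetical record-validation step run before loading data downstream.
REQUIRED_FIELDS = {"id", "amount", "region"}

def validate(record: dict) -> list[str]:
    """Return a list of data-quality violations for one record."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        errors.append("amount must be numeric")
    return errors

good = {"id": 1, "amount": 9.5, "region": "EU"}
bad = {"id": 2, "amount": "n/a"}
assert validate(good) == []
assert sorted(validate(bad)) == ["amount must be numeric", "missing field: region"]
```

Records that fail such a gate would typically be routed to a quarantine table for review rather than silently dropped, which keeps the pipeline's data-quality guarantees auditable.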

Technical Skills (By Category)

Programming Languages:

  • Essential: Python, Java, or Node.js

  • Preferred: Spark (PySpark, Spark Scala), SQL for data manipulation

Databases/Data Management:

  • Essential: SQL database management (MySQL, PostgreSQL, SQL Server)

  • Preferred: NoSQL databases (MongoDB, DynamoDB)

Cloud Technologies:

  • Preferred: AWS (Glue, S3, EMR), Azure Data Factory, GCP Dataflow

Frameworks & Libraries:

  • Essential: Apache Spark, Kafka, Hadoop ecosystem components

  • Preferred: Dask, Flink

Development Tools & Methodologies:

  • Essential: Git, Jenkins, CI/CD pipelines, Agile/Scrum practices

  • Preferred: Terraform, Docker, Kubernetes, DataOps tools

Security & Compliance:

  • Awareness of data encryption, access controls, data masking best practices, and compliance frameworks such as GDPR and HIPAA.
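Data masking, mentioned above, can be illustrated with a tiny sketch. The masking rule is a made-up example for illustration, not a compliance-approved scheme:

```python
def mask_email(email: str) -> str:
    """Mask the local part of an email, keeping the first character and the domain."""
    local, _, domain = email.partition("@")
    return (local[:1] + "***@" + domain) if domain else "***"

print(mask_email("jane.doe@example.com"))  # → j***@example.com
```

Production masking is usually applied at the warehouse or query layer (e.g., via column-level policies) so that unmasked PII never reaches analysts who are not authorized to see it.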

Experience Requirements

  • 5+ years of experience developing and maintaining enterprise data pipelines and big data solutions.

  • Proven experience in designing scalable ETL workflows, integrating cloud data services, and optimizing data processes.

  • Demonstrable success in deploying data solutions that support reporting, analytics, and machine learning initiatives.

  • Industry experience in finance, healthcare, retail, or enterprise sectors highly desirable; relevant open-source or academic projects also acceptable.

Day-to-Day Activities

  • Develop, test, and deploy scalable data pipelines and ETL workflows.

  • Collaborate with business and data science teams to gather requirements and deliver data solutions.

  • Monitor data pipelines and optimize for performance, reliability, and security.

  • Troubleshoot technical issues, perform root cause analysis, and apply fixes.

  • Automate deployment and infrastructure provisioning procedures.

  • Maintain detailed documentation of data architecture, workflows, and operational guidelines.

  • Proactively research emerging data tools and platforms to recommend innovation.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related disciplines.

  • 5+ years of experience supporting enterprise data ecosystems, especially on cloud platforms.

  • Experience with big data frameworks, cloud data services, and automation tools.

  • Certifications in cloud platforms (AWS Data Analytics, Azure Data Engineer, GCP Data Engineer) are advantageous.

  • Strong problem-solving, analytical thinking, and communication skills.

Professional Competencies

  • Critical thinking to design innovative and scalable data architectures.

  • Leadership skills to mentor junior staff and guide data projects.

  • Effective stakeholder management for cross-team collaboration.

  • Adaptability to rapidly evolving data technologies and organizational needs.

  • Ownership of data quality, security, and compliance standards.

  • Time management skills to effectively prioritize tasks and meet project deadlines.

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. We strongly believe that, as a global company, a diverse workforce helps us build stronger, more successful businesses. We encourage applicants across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.



