
Synechron

Senior PySpark Developer (Data Engineering)

Posted 17 Days Ago
2 Locations
Senior level
Develop and maintain data processing workflows using PySpark, optimize Spark jobs, deploy data pipelines, and troubleshoot data processing issues in collaboration with cross-functional teams.

Software Requirements:

  • Proficiency in PySpark and Spark development (see the sketch after this list).
  • Experience with Unix and HDFS (Hadoop Distributed File System).
  • Familiarity with libraries such as PyArrow.
  • Working knowledge of SQL and databases.
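
A rough, hypothetical sketch of the stack these requirements describe (PySpark reading from HDFS, with SQL used alongside the DataFrame API). All paths, view names, and columns below are illustrative assumptions, not part of the role.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("hdfs-sql-sketch").getOrCreate()

  # Read raw data landed on HDFS (path assumed for illustration).
  orders = spark.read.parquet("hdfs:///data/raw/orders")

  # Expose the DataFrame to plain SQL through a temporary view.
  orders.createOrReplaceTempView("orders")

  daily_totals = spark.sql("""
      SELECT order_date, SUM(amount) AS total_amount
      FROM orders
      GROUP BY order_date
  """)

  # Write the aggregated result back to HDFS as Parquet.
  daily_totals.write.mode("overwrite").parquet("hdfs:///data/curated/daily_totals")

  spark.stop()

In practice, a job like this would typically be packaged and launched with spark-submit against the Hadoop cluster and scheduled by whatever orchestrator the team uses.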

Overall Responsibilities:

  • Develop, optimize, and maintain data processing workflows using PySpark.
  • Collaborate with cross-functional teams to understand data requirements and provide data engineering solutions.
  • Implement and support data pipelines for large-scale data processing.
  • Ensure the reliability, efficiency, and performance of data processing systems.
  • Participate in code reviews and ensure adherence to best practices and standards.

Technical Skills:

PySpark and Spark Development:

  • Proficiency in PySpark and Spark for data processing.
  • Experience with Spark modules and optimization techniques (a sketch of common techniques follows).
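
For illustration only, a few of the techniques usually meant by "optimization" here: broadcasting small lookup tables, caching reused DataFrames, and partition-aware writes. Table names and columns are assumptions.

  from pyspark.sql import SparkSession
  from pyspark.sql.functions import broadcast

  spark = SparkSession.builder.appName("optimization-sketch").getOrCreate()

  facts = spark.read.parquet("hdfs:///data/raw/transactions")  # large fact table (assumed)
  dims = spark.read.parquet("hdfs:///data/raw/merchants")      # small lookup table (assumed)

  # Broadcast the small dimension table to avoid a shuffle-heavy sort-merge join.
  enriched = facts.join(broadcast(dims), on="merchant_id", how="left")

  # Cache a DataFrame that several downstream actions reuse.
  enriched.cache()

  # Partition output by a date column so downstream reads can prune files.
  (
      enriched.repartition("txn_date")
      .write.mode("overwrite")
      .partitionBy("txn_date")
      .parquet("hdfs:///data/curated/transactions_enriched")
  )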

Unix and HDFS:

  • Basic experience with Unix operating systems.
  • Knowledge of Hadoop Distributed File System (HDFS).

Libraries and Tools:

  • Familiarity with PyArrow and other related libraries (illustrated in the sketch below).
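
For context, PyArrow most often enters PySpark work through Arrow-backed pandas UDFs and Arrow-accelerated conversions to pandas. The sketch below assumes Spark 3.x; the data and the transformation are invented for illustration.

  import pandas as pd
  from pyspark.sql import SparkSession
  from pyspark.sql.functions import pandas_udf

  spark = SparkSession.builder.appName("pyarrow-sketch").getOrCreate()
  # Enable Arrow for Spark <-> pandas conversions (Spark 3.x configuration key).
  spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

  df = spark.createDataFrame([(1, 10.0), (2, 12.5), (3, 7.25)], ["id", "amount"])

  @pandas_udf("double")
  def to_cents(amount: pd.Series) -> pd.Series:
      # Vectorized logic; batches move between the JVM and Python over Arrow.
      return amount * 100

  df.withColumn("amount_cents", to_cents("amount")).show()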

Databases and SQL:

  • Working knowledge of SQL and database management.
  • Ability to write complex SQL queries for data extraction and manipulation (see the example after this list).
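
As a hypothetical example of the kind of query this involves, the snippet below ranks orders per customer with a window function and runs it through Spark SQL; the table and columns are assumptions.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("sql-sketch").getOrCreate()
  spark.read.parquet("hdfs:///data/raw/orders").createOrReplaceTempView("orders")

  # Keep each customer's three largest orders using a window function.
  top_orders = spark.sql("""
      SELECT customer_id, order_id, amount,
             ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
      FROM orders
  """).where("rnk <= 3")

  top_orders.show()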

Experience:

  • At least 6 years of professional experience in data engineering.
  • Proven experience with PySpark and Spark development, particularly for data processing.
  • Experience working with Unix, HDFS, and associated libraries.
  • Demonstrated experience with SQL and database management.

Day-to-Day Activities:

  • Develop and maintain data processing workflows using PySpark.
  • Optimize Spark jobs for performance and efficiency.
  • Collaborate with data engineers, data scientists, and other stakeholders to understand data requirements.
  • Deploy and monitor data pipelines in a Hadoop and Spark environment.
  • Perform data extraction, transformation, and loading (ETL) tasks (a sketch follows this list).
  • Troubleshoot and resolve issues related to data processing and pipeline failures.
  • Participate in code reviews and contribute to the improvement of coding standards and practices.
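
To make the ETL bullet concrete, here is a minimal, hypothetical extract-transform-load pass: read raw CSV from an HDFS landing zone, clean and type the data, and write partitioned Parquet for downstream consumers. Paths, schema, and column names are invented for illustration.

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

  # Extract: raw CSV files landed on HDFS.
  raw = (
      spark.read.option("header", "true").option("inferSchema", "true")
      .csv("hdfs:///data/landing/events/*.csv")
  )

  # Transform: drop incomplete rows, normalize the timestamp, derive a partition column.
  clean = (
      raw.dropna(subset=["event_id", "event_ts"])
      .withColumn("event_ts", F.to_timestamp("event_ts"))
      .withColumn("event_date", F.to_date("event_ts"))
  )

  # Load: partitioned Parquet that downstream jobs and analysts can query.
  clean.write.mode("overwrite").partitionBy("event_date").parquet("hdfs:///data/curated/events")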

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Relevant certifications in data engineering or big data technologies are a plus.

Soft Skills:

  • Excellent communication and interpersonal skills.
  • Strong problem-solving and analytical skills.
  • Ability to work effectively in a fast-paced, dynamic environment.
  • Attention to detail and a commitment to delivering high-quality work.
  • Ability to prioritize tasks and manage time effectively.
  • Collaborative and team-oriented mindset.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture by promoting equality, diversity, and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disability statuses to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.


Top Skills

Hadoop
HDFS
PyArrow
PySpark
Spark
SQL
Unix
