Synechron
Senior QA Engineer (SQL, Big Data (Spark, Hive), Automation & ETL Support)
Synechron is seeking a detail-oriented and experienced Senior Quality Assurance Engineer to join our team supporting data engineering, ETL, and analytics initiatives. In this role, you will oversee data validation processes, develop and optimize complex data pipelines, and collaborate with cross-functional teams to ensure data integrity and reliability. Your expertise in big data tools, SQL, and automation frameworks will play a key role in improving data accuracy, operational efficiency, and supporting strategic data-driven decision-making across the organization.
This position offers an opportunity to lead quality efforts in a fast-paced, cloud-enabled environment, contributing directly to the organization’s success in managing large datasets with precision.
Software Requirements
Required Skills:
- Proficiency in writing, optimizing, and troubleshooting SQL queries (including complex joins, aggregations, and filters)
- Strong familiarity with big data tools such as Spark, Hive, Hadoop, and Kafka
- Experience designing, developing, and maintaining ETL pipelines to support data ingestion and transformation
- Hands-on experience in creating automation frameworks using tools like Selenium and Cucumber
- Programming knowledge in Java and Shell scripting (Python optional), plus experience with Behavior-Driven Development (BDD)
- Familiarity with data warehousing platforms, especially Teradata and Oracle
Preferred Skills:
- Exposure to cloud data services such as AWS (Redshift, S3, Athena, EMR) and their integration with data pipelines
- Knowledge of cloud-based tools for data processing and storage
Key Responsibilities:
- Develop, implement, and optimize complex SQL queries for data extraction, validation, and transformation
- Design and manage scalable ETL workflows ensuring data quality, accuracy, and completeness
- Validate data through comprehensive testing and automation scripting to improve efficiency
- Write and execute detailed test plans, test scenarios, and automate testing using Selenium and Cucumber
- Collaborate with data engineers, business analysts, and QA teams to identify gaps and improve data pipelines
- Support data ingestion, processing, and transformation activities in Hadoop ecosystems and cloud environments
- Monitor, troubleshoot, and resolve issues in data workflows, pipelines, and related infrastructure
- Document processes, testing procedures, and incident resolutions to promote knowledge sharing
- Drive process improvements, best practices, and automation initiatives to enhance data quality and operational efficiency
Strategic objectives:
- Ensure high data quality and integrity across all critical data pipelines
- Automate routine validation and testing activities to reduce manual effort
- Support reliable, timely, and scalable data delivery for analytics and reporting
Performance outcomes:
- Minimal data discrepancies and pipeline failures
- Faster turnaround for data validation and issue resolution
- Clear documentation and proactive communication with stakeholders
Technical Skills:
Data Processing & ETL (Essential):
- Deep experience building and maintaining large-scale ETL pipelines and data workflows
- Proficiency with Spark, Hive, Hadoop, and related big data tools
- SQL query development and optimization for diverse data sources
Programming & Scripting (Essential):
- Shell scripting (Bash/sh) for automation tasks and troubleshooting
- Experience with Java (and optionally Python) for automation and testing frameworks
- Knowledge of BDD tools like Cucumber and automation frameworks such as Selenium
Data Warehousing & Databases (Essential):
- Experience working with Teradata and Oracle data warehouses
- Data validation, reconciliation, and quality assurance practices
Tools & Methodologies (Preferred):
- Agile methodologies for software development, testing, and deployment
- Use of collaboration tools like Jira and Confluence for managing requirements, test cases, and documentation
Cloud Technologies (Preferred):
- Basic experience with AWS services such as Redshift, S3, Athena, and EMR
Experience:
- 6-8 years supporting data engineering, data validation, or ETL processes in production environments
- Extensive hands-on experience in designing and testing data pipelines, especially in big data ecosystems
- Proven experience with test automation and data validation scripting
- Prior experience in financial services or data-intensive industries is a plus
- Experience working within Agile teams and supporting cloud deployment models
Day-to-Day Activities:
- Develop and optimize SQL queries and scripts for data validation and testing
- Build, maintain, and monitor ETL workflows in Hadoop and cloud environments
- Automate routine testing, validation, and data quality checks
- Collaborate with data engineers, business teams, and QA to address data discrepancies and pipeline issues
- Perform root cause analysis on data pipeline failures and recommend improvements
- Document testing procedures, incidents, and best practices
- Support deployment activities, ensuring minimal disruption during upgrades or changes
- Participate in daily stand-ups, planning sessions, and review meetings
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field
- Strong proficiency in SQL and big data tools such as Spark, Hive, and Hadoop
- Hands-on scripting experience in Shell and programming languages like Java and Python (preferred)
- Experience in test automation frameworks such as Selenium and Cucumber (preferred)
- Knowledge of cloud data services (AWS, Redshift, S3, Athena, EMR) is a plus
- Excellent analytical, troubleshooting, and communication skills
- Ability to work independently and collaboratively in an agile environment
Soft Skills:
- Strong analytical and problem-solving skills focused on data integrity
- Effective communication with technical teams and non-technical stakeholders
- Leadership qualities to support knowledge sharing and process improvements
- Organizational and time management skills to prioritize tasks efficiently
- Adaptability and eagerness to learn emerging data technologies and tools
- Capacity to work in fast-paced, evolving environments supporting continuous delivery
SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.