Xebia is a trusted advisor in the modern era of digital transformation, serving hundreds of leading brands worldwide with end-to-end IT solutions. The company has experts specializing in technology consulting, software engineering, AI, digital products and platforms, data, cloud, intelligent automation, agile transformation, and industry digitization. In addition to providing high-quality digital consulting and state-of-the-art software development, Xebia has a host of standardized solutions that substantially reduce the time-to-market for businesses.
Xebia also offers a diverse portfolio of training courses to support forward-thinking organizations as they look to upskill and educate their workforce to capitalize on the latest digital capabilities. The company has a strong presence across 16 countries, with development centers across the US, Latin America, Western Europe, Poland, the Nordics, the Middle East, and Asia Pacific.
JD: Position 1
Sr Data Engineer
Summary
At Levi Strauss & Co., we are revolutionizing the apparel business and redefining the way denim is made.
We are taking one of the world's most iconic brands into the next century: from creating machine learning-powered denim finishes to using blockchain for our factory workers' wellbeing, to building algorithms to better meet the needs of our consumers and optimize our supply chain.
Be a pioneer in the fashion industry by joining our Digital Technology organization where you will have the chance to build exciting solutions that will impact our Americas business and at the same time be part of a bigger, across-continents, data community.
The Data, Analytics & AI/ML Engineering team at Levi is on a mission to build and deliver a modern, scalable data platform that will accelerate the conversion of data into insights. As a Senior Data Engineer within this team, you will work in the Operations Data Domain, focusing on ingesting, cleansing, and delivering high-quality data products using Google Cloud Platform (GCP). You will play a key role in architecting robust data pipelines, enabling seamless data integration, and ensuring our data ecosystem supports advanced analytics. We need someone who will bring thoughtful solutions, perspective, empathy, creativity, and a positive attitude to solve tough data problems.
If you are a proactive, self-driven engineer who thrives on solving complex data challenges, enjoys ownership, reasons carefully when balancing solutions against risks, and values collaboration, this is the perfect opportunity for you.
Key Responsibilities:
Design, develop, optimize, and maintain scalable and reliable big data solutions, including data pipelines, data warehouses, and data lakes.
Collaborate with cross-functional teams including data product managers, data scientists, analysts, and software engineers to understand business data requirements and deliver efficient solutions.
Architect and optimize data storage, processing, and retrieval mechanisms for large-scale datasets.
Establish scalable, efficient, automated processes for data analyses, model development, validation, and implementation.
Implement and maintain data governance and security best practices to ensure data integrity and compliance with regulatory standards.
Write efficient and well-organized software to ship products in an iterative, continual-release environment.
Report key insight trends, using statistical rigor to simplify findings and inform the larger team of noteworthy storylines that impact the business.
Troubleshoot and resolve performance issues, bottlenecks, and data quality issues in the big data infrastructure.
Guide and mentor junior engineers, fostering a culture of continuous learning and technical excellence.
Communicate clearly and effectively to technical and non-technical audiences.
Contribute to internal best practices, frameworks, and reusable components to enhance the efficiency of the data engineering team.
Embody the values and passions that characterize Levi Strauss & Co., with empathy to engage with colleagues from multiple backgrounds.
Qualifications & Skills
Ability to lead an offshore team of 8+ engineers.
University or advanced degree in engineering, computer science, mathematics, or a related field.
8+ years' experience developing and deploying both batch and streaming data pipelines into production.
Strong experience working with a variety of relational SQL and NoSQL databases.
Extensive experience with cloud-native data services, preferably Google Cloud Platform (BigQuery, Vertex AI, Pub/Sub, Cloud Functions, etc.).
Deep expertise in a popular data warehousing tool such as Snowflake, BigQuery, or Redshift.
Hands-on experience with dbt (data build tool) for data transformation.
Experience working with big data tools and frameworks such as Hadoop, Spark, Kafka, etc. Familiarity with Databricks is a plus.
Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
Hands-on experience across the data engineering spectrum, e.g. developing metadata-driven, framework-based solutions for ingestion and processing, and building data lake/lakehouse solutions.
Strong knowledge of Apache Airflow for orchestration and workflow management.
Working knowledge of Git and GitHub.
Experience providing operational support to stakeholders.
Expertise in standard software engineering methodology, e.g. unit testing, test automation, continuous integration, code reviews, design documentation.
Experience working with CI/CD pipelines using Jenkins and GitHub Actions.
Experience with data visualization using Tableau, Power BI, Looker, or similar tools is a plus.
JD: Position 2
• University or advanced degree in engineering, computer science, mathematics, or a related field.
• Beginner to intermediate experience with SAP BODS data flow development, testing, and deployment is required. Experience developing SAP BODS jobs with BW/HANA as the source and SQL Server as the target is preferred.
• Strong experience working with a variety of relational SQL and NoSQL databases.
• Strong experience with the big data tech stack (Hadoop, Spark, Kafka, etc.).
• Experience with at least one cloud provider solution (AWS, Azure, or GCP; GCP preferred).
• Strong experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
• Strong hands-on experience in Databricks (Unity Catalog, workflows, optimization techniques).
• Working knowledge of data transformation tools, dbt preferred.
• Ability to work on the Linux platform.
• Strong knowledge of data pipeline and workflow management tools.
• Expertise in standard software engineering methodology, e.g. unit testing, code reviews, design documentation.
• Experience creating data pipelines that appropriately prepare data for ingestion and consumption.
• Experience setting up, maintaining, and optimizing databases/filesystems for production use in reporting and analytics.
• Experience with workflow orchestration (Airflow, Tivoli, etc.).
• Working knowledge of Git and GitHub.
• Ability to work in a collaborative environment and interact effectively with both technical and non-technical team members (good verbal and written English).
• Relevant working experience with containerization (Docker and Kubernetes) preferred.
• Experience working with APIs (Data as a Service) preferred.
Some useful links:
Xebia | Creating Digital Leaders.
https://www.linkedin.com/company/xebia/mycompany/
http://twitter.com/xebiaindia
https://www.instagram.com/life_at_xebia/
http://www.youtube.com/XebiaIndia
Xebia Pune, Maharashtra, India Office
S. No. AP81, Xebia, 83, N Main Rd, Koregaon Park Annexe, Mundhwa, Pune, Maharashtra, India, 411036