
Takeda

Platform Engineer V - Databricks

Hybrid
Bengaluru, Bengaluru Urban, Karnataka
Senior level
Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet.
Bengaluru, India's epicenter of innovation, has been selected as the home of Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that contributes to global impact and improvement.
At Takeda's ICC we Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
The Opportunity
As a Platform Engineer V - Databricks, you will design, implement, and manage Databricks platforms to ensure optimal performance and scalability. You will define platform best practices, evaluate new features, and implement security and governance requirements. In addition, you will monitor and analyze platform metrics and troubleshoot issues related to clusters, jobs, and configurations. You will work closely with engineering, DevOps, and product teams to improve platform capabilities and enhance data management across the organization.
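A minimal sketch, assuming only the public Databricks REST API (Clusters API 2.0, Jobs API 2.1) and placeholder workspace credentials, of how this kind of cluster and job health monitoring might be scripted:

    # Hypothetical illustration: health checks against the Databricks REST API.
    # DATABRICKS_HOST / DATABRICKS_TOKEN are placeholders for a real workspace.
    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.cloud.databricks.com
    headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

    # Flag clusters that are neither running nor cleanly terminated.
    clusters = requests.get(f"{host}/api/2.0/clusters/list", headers=headers).json()
    for c in clusters.get("clusters", []):
        if c["state"] not in ("RUNNING", "TERMINATED"):
            print(f"cluster {c['cluster_id']} ({c['cluster_name']}): {c['state']}")

    # Surface recent job runs that finished in a non-success state.
    runs = requests.get(f"{host}/api/2.1/jobs/runs/list",
                        headers=headers, params={"limit": 25}).json()
    for r in runs.get("runs", []):
        result = r.get("state", {}).get("result_state")
        if result and result != "SUCCESS":
            print(f"run {r['run_id']}: {result}")
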
Responsibilities
  • Design, develop, and maintain data pipelines using Databricks, Apache Spark, and Delta Lake (a brief sketch follows this list).
  • Collaborate with data scientists and analysts to deploy machine learning models and analytics workflows on the Databricks platform.
  • Optimize Spark jobs for performance and scalability, including partitioning, caching, and tuning resource allocation.
  • Implement both batch and real-time data processing pipelines using Spark Streaming, Databricks Workflows, and Delta Live Tables (DLT).
  • Develop and manage Databricks notebooks for interactive data analysis, prototyping, and documentation.
  • Utilize SQL, Python (PySpark), or Scala to work with large datasets and optimize queries.
  • Manage and monitor Databricks clusters, jobs, and workflows for operational efficiency and cost control.
  • Assist in the development of data lakes using Delta Lake to ensure efficient storage and querying of large datasets.
  • Ensure data pipelines meet data quality, reliability, and compliance standards.
  • Write well-documented, efficient, and maintainable code in collaboration with the wider data engineering team.
  • Troubleshoot and resolve issues in data workflows and machine learning models.
  • Keep up with the latest advancements in big data technologies, Databricks, and Apache Spark.
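
A minimal PySpark sketch, not Takeda's actual code, illustrating the Delta Lake pipeline work described in the list above; all paths, column names, and the checkpoint location are hypothetical placeholders:

    # Minimal batch + streaming Delta Lake pipeline sketch. Paths, columns,
    # and checkpoint location are hypothetical; assumes a Databricks runtime
    # (or any Spark session) with Delta Lake available.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-pipeline-sketch").getOrCreate()

    # Batch: ingest raw JSON, apply basic data-quality rules, and write a
    # Delta table partitioned by date so queries can prune partitions.
    raw = spark.read.json("/mnt/raw/orders/")
    clean = (raw.dropDuplicates(["order_id"])
                .withColumn("order_date", F.to_date("order_ts"))
                .filter(F.col("amount") > 0))
    (clean.write.format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .save("/mnt/curated/orders"))

    # Streaming: the same transformation applied incrementally; the checkpoint
    # gives Structured Streaming exactly-once writes into Delta.
    stream = (spark.readStream.format("delta").load("/mnt/raw/orders_stream")
                   .withColumn("order_date", F.to_date("order_ts")))
    (stream.writeStream.format("delta")
           .option("checkpointLocation", "/mnt/chk/orders")
           .outputMode("append")
           .start("/mnt/curated/orders_stream"))

In practice, logic like this would typically be scheduled as a Databricks job or expressed as a Delta Live Tables pipeline rather than run ad hoc.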

Skills and Qualifications
  • Bachelor's degree in Computer Science, Information Technology, or a related field
  • 10+ years of overall experience in Data Engineering, with at least 5 years of experience in Databricks.
  • Experience as a developer in Databricks platform with a strong understanding of Lakehouse concepts and technologies.
  • Hands-on experience with Databricks Notebooks, jobs, clusters, and workflows.
  • Proven experience in Apache Spark and Databricks development.
  • Strong proficiency in Python (PySpark), SQL or Scala.
  • Experience with cloud platforms such as AWS (preferred), Azure, or Google Cloud is a plus.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Detail-oriented mindset with a focus on data integrity, confidentiality, and compliance.
  • Databricks- and AWS-related certifications
  • Experience with APIs and CI/CD tools such as GitHub

BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Among our benefits are:
  • Competitive Salary + Performance Annual Bonus
  • Flexible work environment, including hybrid working
  • Comprehensive Healthcare Insurance Plans for self, spouse, and children
  • Group Term Life Insurance and Group Accident Insurance programs
  • Health & Wellness programs, including annual health screenings and weekly health sessions for employees
  • Employee Assistance Program
  • 3 days of leave every year for Voluntary Service, in addition to Humanitarian Leave
  • Broad Variety of learning platforms
  • Diversity, Equity, and Inclusion Programs
  • Reimbursements - Home Internet & Mobile Phone
  • Employee Referral Program
  • Leaves - Paternity Leave (4 Weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days)

ABOUT ICC IN TAKEDA:
  • Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day.
  • As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations
IND - Bengaluru
Worker Type
Employee
Worker Sub-Type
Regular
Time Type
Full time

Top Skills

Spark
AWS
Azure
Databricks
Delta Lake
GCP
PySpark
Python
Scala
SQL

