
Optasia

Data Engineer, Fintech

Posted 18 Days Ago
In-Office or Remote
Hiring Remotely in Athens
Junior
Join Optasia as a Data Engineer to develop and optimize big data processing pipelines, collaborating with diverse teams on innovative solutions.
Description

Optasia is a fully integrated B2B2X financial technology platform covering scoring, financial decisioning, disbursement and collection. We provide a versatile AI Platform powering financial inclusion, delivering responsible financing decisions and driving a superior business model and strong customer experience, with a presence in 30 countries anchored by 7 regional offices.

We are seeking enthusiastic, results-driven professionals with energy and a can-do attitude, who want to be part of a team of like-minded individuals delivering solutions in an innovative and exciting environment.

Data is at the core of Optasia's growth plan, and the Data Engineering team is a significant contributor to Optasia's success and growth, achieved through data-driven insights and decision making. We currently ingest data from multiple sources into our large-scale big data clusters and develop and run multiple analytical pipelines over a state-of-the-art big data technology stack.
We are looking for a Data Engineer to join our growing Data Engineering team. As part of our team, you will enrich and further evolve the company's big data infrastructure and its highly scalable end-to-end batch and streaming data pipelines, contributing to Optasia's success.

What you will do

  • Work closely with Solution Architects, Data Architects, ML Engineers and Data Analysts on the design, delivery and support of the company’s end-to-end data processing workflows.
  • Develop, maintain and optimize our core libraries for batch processing and ingestion of large volumes of data to the big data infrastructure.
  • Design and develop new libraries for real-time streaming of data from multiple sources.
  • Design and develop end-to-end data processing pipelines (a minimal, illustrative sketch follows this list).
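
To give a flavour of the kind of work described above, here is a minimal, hypothetical PySpark batch ingestion step. The paths, column names and partitioning scheme are placeholder assumptions for illustration, not Optasia's actual pipeline:

```python
# Minimal sketch of a daily batch ingestion step with PySpark.
# All paths, columns and layouts below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-ingestion-sketch").getOrCreate()

# Read one day of raw events (illustrative path and format).
raw = spark.read.json("/data/raw/events/dt=2024-01-01/")

# Light cleaning and enrichment before landing in the curated layer.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
       .filter(F.col("amount").isNotNull())
)

# Write partitioned Parquet for downstream analytical pipelines.
clean.write.mode("overwrite").partitionBy("country").parquet("/data/curated/events/")
```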

What you will bring

  • Bachelor's or Master's degree in Computer Science or Informatics.
  • Working experience coding in at least one programming language, preferably Python (with PySpark) or Scala.
  • Working experience with Apache Spark.
  • Working experience with relational and NoSQL technologies.
  • Experience developing batch and/or real time data processing pipelines.

Optional requirements (will be considered a plus):

  • Experience with the Apache Hadoop ecosystem (YARN, HDFS, HBase, MapReduce).
  • Experience with data and ML workflow engines and tools, e.g. Apache Airflow or Apache NiFi (see the orchestration sketch after this list).
  • Passion for learning new technologies and eagerness to collaborate with other creative minds.
  • Experience working with secure code development guidelines and coding practices (e.g. OWASP, NIST).
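
For context on the orchestration tooling mentioned above, here is a minimal Airflow (2.4+) DAG that schedules a daily Spark batch job. The DAG id, schedule and spark-submit command are illustrative assumptions, not anything from Optasia's stack:

```python
# Minimal sketch of an Airflow DAG that triggers a daily Spark job.
# DAG id, schedule and command are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_ingestion",
        # {{ ds }} is Airflow's built-in macro for the logical date.
        bash_command="spark-submit /opt/jobs/ingest_events.py --date {{ ds }}",
    )
```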

Why you should apply
What we offer:
💸 Competitive remuneration package
🏝 Extra day off on your birthday
💰 Performance-based bonus scheme
👩🏽‍⚕️ Comprehensive private healthcare insurance
📲 💻 All the tech gear you need to work smart

Optasia’s Perks:
🎌 Be a part of a multicultural working environment
🎯 Get to know a unique and promising business and industry
🌌 🌠 Gain insights into tomorrow’s markets
🎓 A solid career path within our working family is ready for you
📚 Continuous training and access to online training platforms
🥳 CSR activities and festive events on every possible occasion
🍜 Enjoy a comfortable open-space restaurant with varied meal options every day
🎾 🧘‍Access to wellbeing activities such as free on-site yoga classes, plus a squash court on our premises

Optasia’s Values 🌟

#1 Drive to Thrive: Fully dedicated to evolving. We welcome all challenges and learning opportunities.
#2 Customer-First Mindset: We go above and beyond to meet our partners’ and clients’ expectations.
#3 Bridge the Gap: Knowledge is shared, information is exchanged and every opinion counts.
#4 Go-Getter Spirit: We are results oriented. We identify any shortcomings that hold us back and step up to do what’s needed.
#5 Together we will do it: We are committed to supporting one another and to understanding and respecting different perspectives, as we aim to reach our common goals.

Top Skills

Apache Airflow
Apache Hadoop
Apache NiFi
Spark
PySpark
Python
Scala
