
Census

Data and AI Engineer (Remote Worldwide)

Posted 4 Days Ago
Remote
Hiring Remotely in Greece
Senior level
Design and implement data pipelines, optimize integrations, support AI features, and ensure scalability and reliability of data solutions in cybersecurity.

About CENSUS 

CENSUS LABS is a cybersecurity engineering powerhouse specializing in building secure and resilient systems. Our work is research-driven and engineering-focused, enabling us to deliver bespoke development and custom solutions at the intersection of cybersecurity and emerging technologies. By addressing complex product challenges, we help our partners evolve their platforms across domains such as secure communications, IoT, AI-powered systems, and enterprise applications. 

Learn more at CENSUS-Labs.com

We are seeking a Data & AI Engineer to design, implement, and optimize data pipelines and backend integrations that support next-generation cybersecurity and data-intensive platforms. The role is primarily data-centric, with emphasis on scalable pipeline design, enrichment and annotation workflows, schema modeling, and performance-driven analytics on large datasets. 

As part of cross-functional teams, you will also contribute to the integration of AI-enabled features (such as natural language querying, contextual enrichment, and intelligent analytics) where they intersect with data engineering. Our bespoke projects span data engineering, applied AI, and cybersecurity, addressing complex challenges such as large-scale security processing and autonomous decision engines. In this role you will have the opportunity to shape architectures that are secure, scalable, and intelligent.
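As a rough illustration of the retrieval step behind such AI-enabled features (e.g. the lookup stage of a RAG pipeline), the sketch below runs a brute-force cosine-similarity search over toy document vectors in NumPy. In production, real model embeddings would be indexed with a library such as FAISS; all names and vectors here are hypothetical.

```python
import numpy as np

# Toy "embeddings" for three documents; in practice these would come
# from an open-source embedding model, and FAISS would index them.
docs = ["firewall alert", "phishing email", "disk usage report"]
doc_vecs = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.1, 0.9],
])

# Hypothetical embedding of a natural-language query.
query_vec = np.array([0.2, 0.95, 0.0])

def top_match(q, vecs):
    # Cosine similarity of the query against every document vector,
    # then return the index of the best-scoring document.
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    return int(np.argmax(sims))

best = docs[top_match(query_vec, doc_vecs)]
```

FAISS replaces the exhaustive scan above with an approximate nearest-neighbor index, which is what makes the same idea workable at millions of vectors.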

Key Responsibilities 

Data Pipelines & Processing
  • Design and build high-performance data ingestion and processing pipelines using frameworks such as Spark and Iceberg
  • Implement workflows for data correlation, enrichment, and annotation
  • Ensure data quality, lineage, and reproducibility across ingestion and transformation stages
  • Support analytics and reporting through platforms such as Superset, Presto/Trino, and Spark SQL

Database & Storage Design
  • Design and maintain schemas for large-scale structured and semi-structured datasets
  • Optimize storage strategies for performance and cost (SQL/NoSQL, MPP, object storage, distributed file systems)
  • Apply indexing, partitioning, and tuning techniques for efficient big data analytics

AI/ML Integration
  • Integrate open-source models and embeddings into data workflows where applicable (e.g., RAG pipelines, FAISS)
  • Build services that enable natural language querying and contextual enrichment of datasets
  • Collaborate on fine-tuning pipelines to support security-driven use cases

Scalability & Reliability
  • Ensure pipeline and storage solutions scale to handle high-volume data
  • Implement monitoring, error handling, and resilience patterns in production
  • Work with DevOps teams to containerize and deploy data services efficiently

Collaboration & Delivery
  • Translate requirements from product/security teams into robust data engineering solutions
  • Contribute to PoCs, demos, and integration pilots
  • Document solutions and participate in knowledge transfer with internal and partner teams
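To make the enrichment-and-annotation idea above concrete, here is a minimal Pandas sketch: left-join a threat-intel lookup onto raw events, then flag high-volume transfers. All column names and data are hypothetical; a production pipeline at this scale would use PySpark over Iceberg tables rather than in-memory DataFrames.

```python
import pandas as pd

# Raw security events, as they might arrive from an ingestion stage.
events = pd.DataFrame({
    "src_ip": ["10.0.0.1", "10.0.0.2", "10.0.0.1"],
    "bytes":  [1200, 540000, 800],
})

# Enrichment lookup (hypothetical threat-intel feed).
intel = pd.DataFrame({
    "src_ip": ["10.0.0.2"],
    "reputation": ["malicious"],
})

# Enrichment: left-join the lookup onto the events, defaulting to "unknown".
enriched = events.merge(intel, on="src_ip", how="left")
enriched["reputation"] = enriched["reputation"].fillna("unknown")

# Annotation: flag high-volume transfers for downstream review.
enriched["high_volume"] = enriched["bytes"] > 100_000
```

The same join/flag pattern translates almost line-for-line to Spark SQL or the PySpark DataFrame API.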

Minimum Qualifications 

  • BSc/MSc in Computer Science, Data Engineering, or a related field (or equivalent practical experience) 
  • 5+ years of experience in data engineering or big data analytics, with exposure to AI/ML integration 
  • Strong proficiency in Python (Pandas, PySpark, FastAPI) and familiarity with Java/Scala for Spark 
  • Solid understanding of data pipeline design, schema modeling, and lifecycle management 
  • Experience with big data ecosystems: Apache Spark, Iceberg, Hive, Presto/Trino, Superset, or equivalents 
  • Hands-on experience with SQL (query optimization, tuning) and at least one NoSQL or distributed storage technology 
  • Practical experience building and deploying APIs and services in cloud or on-prem environments 
  • Strong problem-solving and debugging skills 
  • Proficiency in English and excellent communication skills 

Preferred / Nice-to-Have Skills 

  • Experience with retrieval-augmented generation (RAG) pipelines and LLM-based applications 
  • Knowledge of security concepts and experience with network and software cybersecurity domains 
  • Experience with GPU acceleration, model optimization (quantization, distillation), and performance tuning 
  • Familiarity with containerization (Docker, Kubernetes) and DevOps workflows 
  • Exposure to BI / analytics platforms and visualization integration 

This role offers the opportunity to work on cutting-edge data and AI initiatives in cybersecurity, contributing to solutions that address real-world technology challenges. You will be part of a multidisciplinary team that blends security research, data engineering, and AI innovation, shaping systems that are not only secure but also intelligent and future-ready. 

Top Skills

Spark
Docker
FastAPI
Hive
Iceberg
Java
Kubernetes
NoSQL
Pandas
Presto
PySpark
Python
Scala
SQL
Superset
Trino
