
Cognyte

Data Engineer

Reposted 4 Days Ago
In-Office or Remote
Hiring Remotely in Pune, Maharashtra, IND
Senior level
Design, build, and maintain data infrastructure and pipelines to support machine learning workflows. Implement and optimize data storage and databases, integrate ML models into pipelines, ensure data quality and metadata management, document data assets, troubleshoot performance issues, and support research team data needs while proposing improvements and staying current with data engineering best practices.

Today’s world is crime-riddled. Criminals are everywhere, invisible, virtual and sophisticated. Traditional ways to prevent and investigate crime and terror are no longer enough… 

Technology is changing incredibly fast. The criminals know it, and they are taking advantage. We know it too.  

For nearly 30 years, the incredible minds at Cognyte around the world have worked closely together and put their expertise to work, to keep up with constantly evolving technological and criminal trends, and help make the world a safer place with leading investigative analytics software solutions. 

We are defined by our dedication to doing good, and this translates into business success, meaningful work friendships, a can-do attitude, and deep curiosity.

If you want to be part of a powerful team, we are looking for a talented Data Engineer to join our research group at Cognyte. As a Data Engineer, you will be responsible for designing, building, and maintaining the data infrastructure and pipelines that support the development and deployment of advanced machine learning models. This is an excellent opportunity for professionals who are passionate about data engineering and want to contribute to cutting-edge AI technologies.

As a Cognyter you will:

  • Collaborate with senior data engineers, data scientists and research scientists to design, develop, and maintain data pipelines and infrastructure to support machine learning workflows.
  • Implement data storage solutions and optimize database systems for efficient data retrieval and storage.
  • Work closely within the research team to integrate machine learning models into data processing pipelines, ensuring data quality and consistency.
  • Contribute to the development and maintenance of data documentation, including data dictionaries and metadata.
  • Support data-related needs of the research team, such as data analysis, troubleshooting, and performance optimization.
  • Stay up to date with the latest data engineering technologies and practices, and propose innovative solutions to improve data processes and workflows.

Your toolbox

  • BSc or MSc degree in Computer Science, Statistics, Informatics, Information Systems, or a related field.
  • 5+ years in infrastructure / data engineering / platform engineering roles (or equivalent experience).
  • Strong hands-on experience with Linux (administration, troubleshooting, performance analysis).
  • Strong coding skills in Python and SQL (automation, tooling, pipelines).
  • Experience with big data and distributed processing technologies such as Spark, the Hadoop ecosystem, and Kafka.
  • Experience in designing and maintaining data storage solutions: relational + NoSQL systems, data modeling, schemas, metadata, ETL processes and performance considerations.
  • Strong collaboration skills with demonstrated ability to work effectively across teams.
  • Strong problem-solving skills and the ability to handle and analyze data efficiently.
  • Excellent communication skills.
  • Passion for data engineering and a desire to develop expertise in machine learning and AI technologies.
  • Willingness to travel abroad (10%).

Top Skills

Data Modeling
Distributed Processing
ETL
Hadoop
Kafka
Linux
NoSQL
Python
Relational Databases
Spark
SQL


