Senior Software Engineer – Backend and Inferencing – Technology (Maersk)
This position will be based in India – Bangalore/Pune
A.P. Moller - Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains.
Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue building an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners.
The Brief
We are seeking a Senior Software Engineer with deep backend expertise to lead the development of scalable infrastructure for LLM inferencing, Model Context Protocol (MCP) integration, Agent-to-Agent (A2A) communication, prompt engineering, and robust API platforms. This role sits at the core of our AI systems stack — enabling structured, contextual, and intelligent communication between models, agents, and services. You'll design modular backend services that interface seamlessly with inferencing engines, orchestrate model contexts, and expose capabilities via APIs for downstream products and agents.
What you'll be doing – your accountabilities
- Architect and implement backend services that support dynamic model context management via MCP for LLM-based systems.
- Build scalable and token-efficient inference pipelines with support for streaming, context merging, memory, and retrieval.
- Enable Agent-to-Agent (A2A) messaging and task coordination through contextual protocols, message contracts, and execution chains.
- Design and maintain developer-friendly, secure, and versioned APIs for agents, tools, memory, context providers, and prompt libraries.
- Lead efforts in prompt engineering workflows including templating, contextual overrides, and programmatic prompt generation.
- Collaborate across engineering, ML, and product teams to define and implement context-aware agent systems and inter-agent communication standards, enabling closed-loop AI services ready for enterprise consumption.
- Own end-to-end delivery of infrastructure, inferencing, backend, API, and communication management in multi-agent systems.
- Ensure models are modular, extensible, and easily integrated with external services/platforms (e.g., dashboards, analytics, AI agents).
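To give a flavour of the prompt engineering workflows described above (templating, contextual overrides, programmatic prompt generation), here is a minimal, hypothetical sketch using only the Python standard library. All names (`PromptTemplate`, the analyst template) are illustrative, not an existing internal API:

```python
from dataclasses import dataclass, field
from string import Template

@dataclass
class PromptTemplate:
    """A reusable prompt with named slots and per-call contextual overrides."""
    text: str
    defaults: dict = field(default_factory=dict)

    def render(self, **overrides) -> str:
        # Per-call contextual overrides take precedence over template defaults.
        context = {**self.defaults, **overrides}
        return Template(self.text).substitute(context)

# Illustrative template for a summarisation agent.
summarise = PromptTemplate(
    text="You are a $role. Summarise the following $doc_type:\n$body",
    defaults={"role": "logistics analyst", "doc_type": "shipment report"},
)

prompt = summarise.render(body="Container MSKU123 delayed at Rotterdam.")
```

In a production system the defaults would come from a versioned prompt library and the overrides from the live model context, but the precedence rule (call-site context beats stored defaults) is the core idea.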
Foundational / Must Have Skills
- Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or a related technical field.
- 8+ years of experience in backend systems design and development — ideally in AI/ML or data infrastructure domains.
- Strong proficiency in Python (FastAPI preferred); additional experience with Node.js, Go, or Rust is a plus.
- Experience with LLM inferencing pipelines, context windowing, and chaining prompts with memory/state persistence.
- Familiarity with or active experience implementing Model Context Protocol (MCP) or similar abstraction layers for context-driven model orchestration.
- Strong understanding of REST/GraphQL API design, OAuth2/JWT-based auth, and event-driven backend architectures.
- Practical knowledge of Redis, PostgreSQL, and one or more vector databases (e.g., Weaviate, Qdrant).
- Comfortable working with containerized applications, CI/CD pipelines, and cloud-native deployments (AWS/GCP/Azure).
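As an illustration of the JWT-based auth mentioned above, the sketch below verifies an HS256-signed token using only the standard library. This is a teaching sketch, not a recommendation to hand-roll auth in production; a vetted JWT library would be used in a real service:

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(data: bytes) -> str:
    # JWT segments are base64url-encoded with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    # Restore the padding that encoding removed.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT from a claims dict (illustrative helper)."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url_encode(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    """Check the HS256 signature and return the claims, or raise ValueError."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

The same signing-input convention (`base64url(header).base64url(payload)`) is what OAuth2 bearer tokens carry, which is why API gateways can verify them statelessly.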
Preferred to Have
- Experience building or contributing to agent frameworks (e.g., LangGraph, CrewAI, AutoGen, Agno).
- Background in multi-agent systems, dialogue orchestration, or synthetic workflows.
- Familiarity with OpenAI, Anthropic, HuggingFace, or open-weight model APIs and tool-calling protocols.
- Strong grasp of software security, observability (OpenTelemetry, Prometheus), and system performance optimization.
- Experience designing abstraction layers for LLM orchestration across different provider APIs (OpenAI, Claude, local inference).
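The last point above is the classic ports-and-adapters shape: callers code against one interface while concrete adapters wrap each provider SDK. A minimal sketch, with a dummy local provider standing in for real OpenAI/Anthropic/local-inference adapters (all names are illustrative):

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Uniform completion interface over heterogeneous LLM provider APIs."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class LocalEchoProvider(ChatProvider):
    # Stand-in for a local inference engine; real adapters would wrap
    # a provider SDK call behind this same interface.
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def run(provider: ChatProvider, prompt: str) -> str:
    # Orchestration code depends only on the abstraction, never on a
    # concrete SDK, so providers can be swapped via configuration.
    return provider.complete(prompt)
```

Swapping providers then becomes a wiring decision rather than a code change, which is what makes cross-provider routing, fallback, and A/B evaluation tractable.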
What you can expect
- Opportunity to lead backend architecture for cutting-edge, LLM-native systems.
- High-impact role in shaping the future of context-aware AI agent communication.
- Autonomy to drive backend standards, protocols, and platform capabilities across the org.
- Collaborative, remote-friendly culture with deep technical peers.
As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams.
Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.
We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing [email protected].