CAPCO POLAND
DATA ARCHITECT
Location: Warsaw, Poland
Preferred work model: 3 days per week from the office
We are:
• Trailblazers in banking, payments, capital markets, wealth, and asset management
• Champions of an agile, nimble, and innovative work environment
• Dedicated to building a team of top-notch professionals who share our drive and vision
ROLE OVERVIEW
We're hiring a Data Architect who will be responsible for designing, developing, and overseeing the data layer within the Knowledge Management (KM) ecosystem. This covers documents, metadata, taxonomies, indexes, PII policies, and ingestion processes.
The mission of the role is to ensure the completeness, consistency, security, and consistently high quality of the data used by the KM Master Agent and domain agents.
The Data Architect owns the information model and defines data integration standards across SharePoint Online, Azure AI Search / Knowledge Bases, and GCP (BigQuery/Looker), ensuring a well-structured and reliable data foundation for the entire KM platform.
Fluency in Polish is mandatory.
RESPONSIBILITIES
- Designing information models for unstructured documents, including content, metadata, and related artifacts.
- Defining taxonomies and dictionaries: keywords, ontologies, business categories, and knowledge cataloging rules.
- Designing index and Knowledge Base strategies in Azure AI Search, including chunking, filtering, scoring, and scope policies.
- Defining data retention and lifecycle policies: storage, archiving, anonymization, deletion, and versioning.
- Designing Data Contracts for AI agents to ensure consistency between the Master Agent and domain agents.
- Building ingestion pipelines (automated and ad-hoc): processing PL/EN documents, OCR, multimodal inputs, table extraction, confidence scoring, and metadata enrichment.
- Ensuring PII compliance: masking, anonymization, and exclusion policies, with decisions at ingestion vs. retrieval stage.
- Designing authorization models: document/fragment-level access control integrated with Entra ID (roles, user/group scopes).
- Managing data sources: integration with SharePoint Online, Azure, GCP BigQuery/Looker, and potentially graph databases for document relationships.
- Monitoring data quality: metadata completeness, language variants, OCR accuracy, missing permissions, and anomaly detection.
- Designing repositories for processing artifacts (e.g., OCR tables, confidence scores, extracted insights) and defining their indexing rules.
SKILLS & EXPERIENCES TO GET THE JOB DONE
- Strong experience in data and metadata modeling, including unstructured documents and analytical datasets.
- Very good knowledge of SharePoint Online / M365 (data structures, metadata, integrations).
- Hands-on experience with Azure AI Search / Knowledge Bases / Dataverse: index creation, scoring profiles, filtering, and semantic search.
- Experience with GCP BigQuery / Looker (or equivalent such as Microsoft Fabric or Synapse): data analysis, aggregations, semantic models.
- Experience with PII, banking regulations, compliance, access control, and data masking.
- Experience with Graph API / REST API integrations, including metadata retrieval and updates.
- Hands-on experience with OCR, Document Intelligence, or Vision AI solutions.
- Knowledge of ingestion standards, chunking strategies, data extraction, and ETL/ELT pipelines.
- Experience working with multilingual document environments (PL/EN).
- Understanding of RBAC/ABAC models, Entra ID, and user/group-based access scopes.
- Knowledge of data quality and Data Governance practices (profiling, lineage, data contracts).
- Ability to design data architectures for multi-agent AI systems within a Knowledge Management environment.
Nice to have:
- Experience with Graph databases (Neo4j, Cosmos DB Graph, Amazon Neptune).
- Experience with DataHub, Apache Atlas, or Microsoft Purview.
- Experience using GitHub Actions / CI/CD for metadata schemas and data model deployments.
- Knowledge of the Model Context Protocol (MCP) for data tools and agent-to-data pipelines.
- Experience with Azure Functions, Durable Functions, or Logic Apps.
- Experience working with Lakehouse architectures (Microsoft Fabric, Databricks).
IMPORTANT
- Fluent Polish (spoken and written) – mandatory
- Good command of English for documentation and collaboration.
- Availability for hybrid work: 3 days per week on-site in the office, with partial remote work.
ONLINE RECRUITMENT PROCESS STEPS
- Screening call with the Recruiter
- Hiring Manager Technical Interview
- Client stage
- Feedback/Offer
We offer a flexible collaboration model based on a B2B contract, with the opportunity to work on diverse projects.

