About this role:
Wells Fargo is seeking a Lead Software Engineer - Data Engineering to join the CALM (Corporate Asset and Liability Management) Data Engineering team within the Enterprise Functions Technology (EFT) organization. In this role, you will be responsible for designing, developing, optimizing, and maintaining metadata-driven, scalable, high-performance data engineering frameworks that power critical financial risk processes across Corporate Treasury.
You will work independently to build resilient data pipelines, APIs, wrappers, and supporting components to enable reliable data ingestion, transformation, validation, and delivery across cloud and on-prem ecosystems. This position plays a key role in Data Center exit migrations, DPC onboarding, and enterprise-wide modernization initiatives.
The role requires deep technical expertise, hands-on problem-solving, and technical leadership in distributed data engineering, cloud platforms, data quality, and performance engineering.
Key Responsibilities
- Architect, develop, and optimize batch and streaming data pipelines using Python, SQL, and Apache Spark, deployed across both cloud and on-premises environments for flexibility and scalability.
- Lead lakehouse engineering using open table formats such as Iceberg, Delta, or Hudi, including Medallion architectures that govern data ingestion, transformation, and consumption (a sketch follows this list).
- Establish comprehensive frameworks for data quality, observability, lineage, and service-level agreements (SLAs), ensuring data reliability and auditability at scale.
- Design secure REST and metadata APIs to facilitate governed data access and seamless integration with downstream applications.
- Collaborate closely with cross-functional product, architecture, and business teams, providing technical leadership through design reviews, mentoring, and contributions to the platform roadmap.
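To make the pipeline and lakehouse responsibilities concrete, here is a minimal PySpark sketch of a bronze-to-silver Medallion step that applies basic validation and writes to an Iceberg table. It is illustrative only: the path, table, and column names are hypothetical, and it assumes Spark 3.x with an Iceberg catalog named `lake` configured on the session.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("calm-bronze-to-silver").getOrCreate()

# Bronze layer: raw landed files (hypothetical path and columns).
bronze = spark.read.parquet("s3://calm-lake/bronze/positions/")

# Silver layer: deduplicated, validated, conformed records.
silver = (
    bronze
    .dropDuplicates(["position_id", "as_of_date"])
    .filter(F.col("notional").isNotNull())          # basic data-quality gate
    .withColumn("load_ts", F.current_timestamp())   # audit/lineage column
)

# Create or replace the Iceberg table; assumes the `lake` catalog
# is registered in the Spark session configuration.
silver.writeTo("lake.silver.positions").createOrReplace()
```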
Required Qualifications
- 8+ years of hands-on experience with large-scale data engineering using Python, SQL, and Apache Spark.
- Expertise in designing and implementing metadata-driven ETL/ELT pipelines and robust data ingestion and validation frameworks.
- Skilled in optimizing distributed Spark workloads, storage layouts, and schema evolution within lakehouse environments.
- Proficient with enterprise orchestration tools such as Autosys or Airflow; able to operate production-grade data pipelines that meet SLAs with effective alerting (see the sketch after this list).
- Strong knowledge of data modeling, API development, and CI/CD or infrastructure automation processes.
- Familiarity with major cloud platforms, data governance tools, and secure engineering best practices.
- Preferred: Experience with financial data, risk or treasury systems, and large-scale migration programs.
- Preferred: Experience applying GenAI for metadata extraction, data anomaly detection, automated documentation, or pipeline optimization.
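As an illustration of the orchestration expectation above, a minimal Airflow sketch (assuming Airflow 2.4+) showing retries, a task SLA, and failure alerting. The DAG id, schedule, task body, and alert address are all hypothetical.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_ingest():
    # Placeholder for the actual ingestion and validation logic.
    print("ingesting daily positions feed")


with DAG(
    dag_id="calm_daily_ingest",      # hypothetical DAG name
    schedule="0 2 * * *",            # daily 02:00 batch window
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=10),
        "sla": timedelta(hours=2),   # breach raises an SLA-miss notification
        "email_on_failure": True,
        "email": ["calm-data-oncall@example.com"],  # hypothetical alert address
    },
) as dag:
    PythonOperator(task_id="ingest_and_validate", python_callable=run_ingest)
```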
Job Expectations
- Deliver high-quality engineering outcomes during Data Center exit migrations and DPC onboarding, ensuring validations, automation, and production readiness.
- Collaborate with cross-functional teams to build scalable, high-performance data solutions using Python, SQL, Spark, Iceberg, Dremio, and Autosys.
Top Skills
Airflow
Spark
Autosys
Delta
Dremio
Hudi
Iceberg
Python
SQL