
Job Title: Principal AI Engineer, Agentic Models and Data Platforms

Company: RBC

Location: Calgary, Alberta

Created: 2026-02-06

Job Type: Full Time

Job Description

Drive the development of scalable, high-performance data pipelines within the Enterprise Data Platforms, enabling seamless access to and analysis of data across the organization. This role requires a strong technical background, a strategic mindset, and excellent leadership capabilities.

What will you do?

This role encompasses an end-to-end data integration view, from data sourcing, lineage, and transformation to storage, in support of complex advanced analytics and AI, and requires extensive collaboration with business architecture, system architecture, business SMEs, and data stewards.

- Architect and implement agentic systems, including tool-using agents, workflow orchestrators, and multi-step reasoning pipelines that reliably execute business tasks.
- Design and deliver Retrieval-Augmented Generation (RAG) solutions, including document ingestion, chunking, indexing, vector search, hybrid search, reranking, and grounding strategies over curated data products.
- Build evaluation harnesses and quality gates, including offline test sets, golden datasets, regression suites, and metrics for factuality, safety, latency, cost, and business outcomes.
- Implement observability for AI systems, including tracing across prompts and tool calls, telemetry, drift detection, and runbooks for production operations.
- Lead the build of batch and real-time data pipelines, including inbound, outbound, and event-driven flows that power analytics and AI use cases.
- Design governed data products with clear contracts, documentation, lineage, and SLAs, enabling consistent consumption across domains.
- Establish high-quality ingestion, transformation, and serving patterns using lakehouse and warehouse paradigms, plus streaming where appropriate.
- Partner with data stewards and domain teams to define data standards, quality controls, and metadata that ensure trust and reusability.
- Design and build backend services and APIs that expose data products, agent capabilities, and AI workflows as reliable, secure services.
- Apply rigorous engineering practices, including code quality, automated testing, CI/CD, performance engineering, and secure-by-default design.
- Build scalable runtime patterns for AI systems, including caching, rate limiting, concurrency control, idempotency, and graceful degradation.
- Contribute to reference architectures, reusable libraries, and platform components that accelerate delivery across teams.

Must have:

- Bachelor's degree in computer science or a related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience.
- 10+ years of professional software engineering experience with strong Python and SQL; Spark and Databricks SQL are a plus.
- Demonstrated experience designing and operating scalable data architectures, including schema design, dimensional modeling, and data lifecycle management.
- Strong knowledge of algorithms and data structures, plus systems engineering fundamentals: reliability, performance, and debugging.
- Hands-on experience with data engineering platforms and tools, commonly including Python, PySpark, Databricks, Airflow, Kafka, Snowflake, and modern data integration patterns.
- Experience building production services and APIs, including service design, authentication and authorization, and integration patterns; Node.js and Apigee are a plus.
- Practical experience delivering AI-powered systems, including one or more of:
  - RAG systems and vector search, embeddings, reranking, and grounding strategies
  - LLM application development, structured outputs, prompt and tool calling, orchestration patterns
  - AI evaluation, test harnesses, regression testing, and lifecycle management for prompts and models
  - Observability for AI systems: tracing, monitoring, alerting, and cost controls
- Working knowledge of security and identity frameworks such as OAuth 2.0, LDAP, Kerberos, and Vault integration, with experience operating in regulated environments.

Nice to have:

- Master's degree in computer science or equivalent experience.
- Experience with agent frameworks and workflow patterns, such as graph-based orchestration, tool routing, plan-and-execute loops, and human-in-the-loop designs.
- MLOps and LLMOps experience, including CI/CD for ML and LLM applications, model registries, feature stores, experiment tracking, and safe rollout patterns.
- Automation and DevOps experience, such as GitHub Actions, infrastructure as code, and automated QA.
- Experience working in Agile or SAFe environments.
- Experience with frontend or portal integration for AI experiences, for example Angular-based portals, analytics integration, or enterprise enablement tooling.

What's in it for you?

- A comprehensive Total Rewards Program including bonuses and flexible benefits, competitive compensation, commissions, and stock where applicable.
- Leaders who support your development through coaching and managing opportunities.
- Ability to make a difference and lasting impact.
- Work in a dynamic, collaborative, progressive, and high-performing team.
- A world-class training program in financial services.
- Opportunities to do challenging work.

Equal Opportunity Employment

At RBC, we believe an inclusive workplace that has diverse perspectives is core to our continued growth as one of the largest and most successful banks in the world.
Maintaining a workplace where our employees feel supported to perform at their best, effectively collaborate, drive innovation, and grow professionally helps to bring our Purpose to life and create value for our clients and communities. RBC strives to deliver this through policies and programs intended to foster a workplace based on respect, belonging, and opportunity for all.