
Job Title


Big Data DBT/Redshift Engineer


Company: Two Circles


Location: Vancouver, British Columbia


Created: 2026-03-03


Job Type: Full Time


Job Description

We are Two Circles. We are a Sports & Entertainment Marketing business. We grow audiences and revenues. We do that by knowing fans best. We work with clients to help them understand & influence what their fans are doing: the way fans spend their money, the events fans attend, the channels fans respond to, the content fans watch and more. And we use the understanding this gives us to help our clients grow their audiences and their revenues, both direct-to-consumer and business-to-business. Our platforms and services are trusted by over 1,000 clients globally, including the English Premier League, Red Bull, UEFA, VISA, the NFL, Nike and Amazon. We are over 1,000 people, based out of 15 offices, and we deliver work for sports and entertainment businesses of all shapes and sizes all over the world.

12-month contract
Streaming experience preferred
$450 - $500 CAD per day

ROLE OVERVIEW

We are seeking a Lead Data Engineer to join a client-focused data pod delivering large-scale data engineering solutions within a cloud-native AWS environment. This is a delivery-first role requiring deep hands-on expertise in streaming data architectures, big data systems, and modern data warehousing practices. You will own the streaming architectural direction while remaining actively involved in implementation. The environment is AWS-centric (Redshift, S3, Glue, Step Functions, Lambda, EMR), with DBT as the transformation framework, and we are actively integrating streaming data from GCP sources into our AWS data platform. You will define engineering standards across data modeling, DBT implementation, testing, CI/CD, and production resiliency while collaborating directly with the client's data team.
WHAT YOU'LL BE DOING

Streaming Architecture & Distributed Systems
- Own the architectural direction for streaming data ingestion from GCP into AWS
- Design resilient ingestion frameworks including error handling, retry strategies, monitoring, and failure isolation
- Implement distributed processing pipelines using Spark / PySpark or similar frameworks

Data Warehousing & DBT Leadership
- Create and maintain scalable data warehouses and associated ETL/ELT processes using DBT models in Amazon Redshift
- Design and implement DBT projects including macros, tests, documentation, and reusable modeling patterns
- Conduct Redshift query and DBT performance tuning to optimize warehouse efficiency and cost

Engineering Standards & Quality
- Define and enforce best practices for data modeling, version control (Git-based workflows), CI/CD pipelines for DBT deployments, and automated testing at the model, transformation, and pipeline levels
- Ensure robust testing is embedded in every DBT model (schema tests, custom tests, data validation checks)
- Lead code reviews and architectural design reviews

AWS Platform & Big Data Tooling
- Work with AWS services including Redshift, S3, Glue, Step Functions, Lambda (Python), Athena, and EMR

REQUIREMENTS

Experience
- 6+ years of data engineering experience in big data environments
- Proven experience designing and implementing streaming architectures
- Extensive hands-on DBT experience (models, macros, tests, documentation)
- Strong Amazon Redshift architecture and performance optimization expertise
- Experience building CI/CD pipelines for data platforms
- Experience working in client-facing delivery contexts

Technical Skills
- AWS: Redshift, S3, Glue, Step Functions, Lambda (Python), Athena, EMR
- Strong SQL and Redshift performance tuning expertise
- Python and PySpark (or equivalent distributed processing frameworks)
- Git-based version control workflows
- Deep understanding of data warehousing, modeling, and big data systems