
Job Title : Senior Data Engineer


Company : Terawatt Infrastructure


Location : Toronto, Ontario


Created : 2026-05-07


Job Type : Full Time


Job Description

About Terawatt Infrastructure

The once-in-a-century transition to autonomous and electric vehicles is underway and will require a multi-trillion-dollar investment in energy and charging infrastructure, and in the real estate to site it on. Terawatt is the leader in delivering large-scale, turnkey charging solutions for companies rapidly deploying AV and EV fleets. Whether it's an urban mobility hub or a carefully located multi-fleet hub for semi-trucks, Terawatt brings the talent, capabilities, and capital to create reliable, cost-effective solutions for customers on the leading edge of the transition to the next generation of transport. With a growing portfolio of sites across the US, in urban hubs and along key logistics and transportation corridors, Terawatt is building the permanent transportation and logistics infrastructure of tomorrow through a robust combination of capital, real estate, development, and site operations solutions. The company develops, finances, owns, and operates charging solutions that take the cost and complexity out of electrifying fleets. At Terawatt, we execute humbly and with urgency to provide tailored solutions for fleets that delight our clients and support the transition of transportation.

Role Description

We are seeking a highly skilled Senior Data Engineer to join our growing team. In this role, you will design and implement scalable, efficient data architectures to support our business needs. You will collaborate closely with data scientists, analysts, and other cross-functional teams to build and optimize data pipelines, ensuring that data is accessible, secure, and well-structured for analytics and reporting. A key part of this role involves developing and maintaining data models, databases, and data lakes while implementing robust data governance and quality assurance practices. You will drive the development of scalable data infrastructure aligned with company architecture standards and best practices.
This role also requires curiosity and a commitment to building and maintaining production data lake pipelines that transform raw time-series data into ML-ready features, training datasets, and batch predictions. This includes ensuring data quality, reproducibility, and reliable retraining so that ML outputs, such as forecasts and risk scores, can be trusted by downstream systems.

Problems You Will Solve

- Turn messy operational data into reliable signals by building pipelines that transform noisy, incomplete, high-volume time-series data into trusted datasets for analytics, product features, and ML workflows.
- Design a resilient lakehouse platform by architecting a scalable Databricks-based platform that supports both streaming and batch workloads while ensuring governance, observability, and reliability.
- Enable production-ready ML pipelines by creating reproducible workflows, reliable feature datasets, and batch prediction pipelines that downstream systems can depend on.
- Enable self-service analytics and ML by building infrastructure and abstractions that allow analysts, engineers, and data scientists to independently explore and use data.
- Scale a platform for product and analytics by designing systems that support operational product features, internal reporting, and ML use cases without compromising performance or data quality.

Core Responsibilities

- Architect and evolve a Databricks-based data platform that serves as the scalable foundation for product features, internal reporting, and ML workflows.
- Set technical standards for modeling raw data into clean, reliable datasets, ensuring high integrity and point-in-time accuracy for both BI and ML applications.
- Build and maintain self-service tooling and infrastructure abstractions that improve the developer experience for data producers, analysts, and data scientists.
- Design and optimize high-performance ETL/ELT pipelines using Delta Live Tables and Structured Streaming to handle seamless ingestion from diverse data sources.
- Own platform observability, testing, and proactive monitoring to ensure the performance and reliability of critical data delivery and pipeline health.
- Architect and enforce data security, compliance, and access controls by implementing Unity Catalog and IAM (Identity and Access Management) best practices across the enterprise.
- Build and maintain production-grade pipelines that transform raw data into ML-ready features, training datasets, and reliable batch predictions.
- Lead Infrastructure as Code (IaC) initiatives using Terraform and improve team productivity by identifying technical debt and automating complex deployment workflows.
- Partner with Engineering, Product, and Business teams to resolve ambiguities and ensure shipped data features are impactful, reliable, and aligned with business outcomes.
- Build and maintain a self-service data lake environment, empowering non-data engineers and stakeholders to discover, explore, and analyze data independently.
- Promote engineering excellence through code reviews, documentation, and technical standards for orchestration and testing.

Minimum Qualifications

- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 6+ years in data engineering, platform development, or large-scale data systems.
- Hands-on experience with Databricks or modern lakehouse platforms and cloud platforms (AWS, GCP, or Azure).
- Experience building scalable ETL/ELT pipelines using Spark and SQL.
- Proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Strong understanding of data modeling, schema design, and performance optimization.
- Experience building reliable, production-grade data pipelines with a focus on data quality and observability.
- Experience supporting analytics and/or ML workflows, including preparing ML-ready datasets.
- Working knowledge of data governance, security, and access control frameworks.
- Familiarity with Infrastructure as Code (IaC) and automated deployment workflows (e.g., Terraform).
- Proven ability to collaborate across teams and contribute to technical direction.

Preferred Qualifications

- Experience working with time-series, IoT, or high-volume telemetry data systems.
- Familiarity with EV charging ecosystems, including OCPP (Open Charge Point Protocol).
- Domain experience in electric vehicles (EVs), energy systems, or distributed energy resources (DERs).
- Experience building ML feature pipelines, training datasets, or batch inference workflows.
- Experience designing self-service data platforms for analysts and data scientists.
- Background in event-driven or real-time data architectures.
- Solid software engineering experience, including writing maintainable production code, testing, and applying engineering best practices to data systems.
- Proven ability to influence technical direction and collaborate across teams.

$110,000 - $135,000 a year

Compensation for this role is determined by several factors, including the cost of labor in specific geographic markets, and these ranges are intended to provide a helpful reference. The actual compensation offer will be based on the candidate's location, skills, level of expertise and experience, and internal equity considerations. In addition to base salary, we offer a comprehensive benefits package and, where applicable, performance-based incentives.

We are building a team that represents a variety of backgrounds, perspectives, and skills. At Terawatt, we continuously strive to foster inclusion, humility, energizing relationships, and belonging, and we welcome new ideas. We're growing and want you to grow with us. We encourage people from all backgrounds to apply. If a reasonable accommodation is required to fully participate in the job application or interview process, or to perform the essential functions of the position, please contact . Terawatt Infrastructure is an equal-opportunity employer.