
Job Title


Manager, Data Science


Company : Y-Axis


Location : Toronto, Ontario


Created : 2026-03-20


Job Type : Full Time


Job Description

Manager, Data Science at Whirlpool Corporation - Canada (Full Time)

Start Date : Immediate
Expiry Date : 05 Apr, 26
Posted On : 05 Jan, 26
Experience : 5 year(s) or above
Remote Job : Yes
Telecommute : Yes
Sponsor Visa : No

Skills : Data Engineering, GCP, SQL, Python, ETL, Data Warehousing, Data Governance, Agile, Cloud Services, Data Pipelines, Streaming Technologies, CI/CD, Problem Solving, Communication, Teamwork, Data Modeling

Description

Whirlpool Corporation (NYSE: WHR) is a leading home appliance company, in constant pursuit of improving life at home. As the only major U.S.-based manufacturer of kitchen and laundry appliances, the company is driving meaningful innovation to meet the evolving needs of consumers through its iconic brand portfolio, including Whirlpool, KitchenAid, JennAir, Maytag, Amana, Brastemp, Consul, and InSinkErator. In 2024, the company reported approximately $17 billion in annual sales (close to 90% of which were in the Americas), 44,000 employees, and 40 manufacturing and technology research centers. Additional information about the company can be found at WhirlpoolCorp.com.

The Data Science team is responsible for modeling complex business problems, discovering business insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques. In addition to advanced analytics skills, this role requires proficiency in integrating and preparing large, varied datasets, architecting specialized database and computing environments, and communicating results.

We are seeking a skilled Data Engineer to design, build, and maintain the data infrastructure that powers our analytics, products, and decision-making processes. You will be responsible for developing scalable data pipelines, optimizing data workflows, and ensuring high-quality, accessible data across the organization.
This role requires strong technical expertise, problem-solving skills, and the ability to collaborate with cross-functional teams including data scientists, analysts, and software engineers. This role is also responsible for setting up Whirlpool Data Assets and Data Products. The individual will provide leadership through mentoring and partnership with other architects and data stewards within the organization. This role is part of the Global Value Streams roadmap, focusing on designing and implementing next-generation, flexible tech products to enable business growth.

Responsibilities

- Understand business priorities and success measures to design and implement appropriate data solutions.
- Possess strong hands-on experience in Data Engineering and demonstrate expertise in modern data architectures across the organization.
- Create robust, automated pipelines in GCP to ingest and process structured and unstructured data from source systems into analytical platforms, using batch and streaming mechanisms that leverage cloud-native toolsets and DBT.
- Collaborate with data architects, ETL developers, engineers, BI developers/data scientists, and information designers to identify and define required data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Provide thought leadership and new perspectives on how to leverage GCP cloud services and capabilities.
- Maintain technical skills and knowledge of market trends and competitive insights, collaborating and sharing with the technical community.
- Design, develop, and maintain scalable ETL/ELT pipelines to support data ingestion, transformation, and integration from various sources.
- Build and manage data warehouses, data lakes, and streaming platforms.
- Ensure data quality, integrity, and governance through validation, monitoring, and automation.
- Optimize data workflows for performance, scalability, and cost efficiency.
- Collaborate with stakeholders to understand business requirements and translate them into data solutions.
- Implement best practices for data security, compliance, and privacy.
- Troubleshoot data issues and support downstream analytics and reporting needs.
- Stay current with emerging data technologies and recommend improvements to existing infrastructure.

Minimum Requirements

- Experience in product and/or data architecture and data engineering.
- Bachelor's degree or higher in a STEM field such as Computer Science, Information Management, or Big Data & Analytics, or equivalent work experience.
- Hands-on experience designing and implementing creative data solutions using GCP (BigQuery, Dataflow, Dataproc, Pub/Sub, Spark/PySpark, Python, SQL, etc.).
- Strong background in architecting solutions for data extraction, transformation, and loading from a variety of structured, unstructured, and semi-structured sources using SQL, NoSQL, and data pipelines for real-time, streaming, batch, and on-demand workloads.
- Solid understanding of Agile SDLC implementation in public cloud ecosystems, including environment management, test automation, CI/CD, and resource optimization.
- Experience in analytics/data management strategy, architectural blueprinting, business case development, and effort estimation for GCP-based analytics solutions.
- Knowledge of GCP services (BigQuery, Dataflow, Cloud Functions, Cloud Run, GCS) is a plus.
- Experience with software configuration management tools such as JIRA, Git, Jenkins, Bitbucket, and Confluence.
- Familiarity with data analytics tools such as Tableau and Looker.
- Proven experience as a Data Engineer or in a similar role.
- Proficiency in SQL and at least one programming language (Python, Java, Scala, etc.).
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and data services (Redshift, BigQuery, Snowflake, Databricks, etc.).
- Strong knowledge of data modeling, data warehousing concepts, and distributed systems.
- Experience with workflow orchestration tools (Airflow, Prefect, Luigi, etc.).
- Familiarity with streaming technologies (Kafka, Kinesis, Spark Streaming, Flink, etc.).
- Understanding of CI/CD pipelines, DevOps practices, and version control systems (Git).
- Excellent problem-solving, communication, and teamwork skills.

Preferred Skills and Experience

- Background in CPG/Retail or eCommerce.
- GCP Data Engineering or Cloud Architect certification.
- Full-stack development experience is a plus.
- Experience with machine learning pipelines or supporting data science workflows.
- Knowledge of data governance frameworks (GDPR, HIPAA, etc.).
- Familiarity with containerization and orchestration (Docker, Kubernetes).

What We Offer

- Flexible schedule
- No dress code
- Gympass
- Transportation voucher, shuttle buses, or free parking at the company
- Meals on site
- Benefits such as payment-deducted or social loans, health plan, dental plan, life insurance, and a private pension plan compatible with the market
- Employee Support Program, with 24-hour assistance from legal, social, and financial advisors, social workers, and psychologists
- Daycare assistance or nursery at the company
- On-site services: beauty and aesthetics salon, internal bank agency, laundry, cafeteria, restaurant, and lactation room
- Two weeks of remote work from anywhere
- After 5 years with the company, eligible employees can take four weeks of paid leave
- Discount on products through Compra Certa
- Discount on insurance (pet, auto, home, bike, travel, and more)
- Extended maternity/paternity leave

How To Apply

In case you would like to apply to this job directly from the source, please click here.

Equal Employment Opportunity

Whirlpool Corporation is committed to equal employment opportunity and prohibits any discrimination on the basis of race or ethnicity, religion, sex, pregnancy, gender expression or identity, sexual orientation, age, physical or mental disability, veteran status, or any other category protected by applicable law.