Job Title


Cloud Data Warehouse Engineer


Company : VLink Inc


Location : Montreal, Montreal (administrative region)


Created : 2025-10-17


Job Type : Full Time


Job Description

Job Title: Senior Cloud Data Warehouse Engineer
Location: Montreal, QC (Hybrid, 3 days onsite every week)
Employment: Full-time opportunity
Experience: 5+ years

About VLink: Founded in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.

Job Description: We are looking for an experienced Cloud Data Warehouse Engineer to become a key member of our vibrant Data Warehouse team. In this role, the candidate will be essential in developing our next-generation data platform, which consolidates, sources, and manages data from multiple technology systems across the organization. This platform will enable sophisticated reporting and analytics solutions, specifically designed to support the Technology Risk functions.

The candidate's focus will be on building and refining our Cloud Data Warehouse using Snowflake and Python-based tools. The candidate's expertise will help create reliable data models, utilizing Snowflake features such as data sharing, time travel, Snowpark, workload management, and the ingestion of both structured and unstructured data. Additionally, the candidate will work on integrating Snowflake with internal platforms for purposes such as data quality management, cataloging, discovery, and real-time monitoring. By collaborating closely with data engineers, analysts, ETL developers, infrastructure teams, and business stakeholders, the candidate will help develop a high-performance, scalable data environment that supports advanced analytics and AI initiatives.

Responsibilities:
- Design, develop, and manage scalable Snowflake data warehouse solutions.
- Establish and promote best practices for efficient Snowflake usage, integrating tools such as Airflow, DBT, and Spark.
- Assist in testing and deploying data pipelines using standard frameworks and CI/CD processes.
- Monitor, tune, and optimize query performance and data loads.
- Support QA and UAT processes to confirm data integrity and troubleshoot issues.
- Collaborate with cross-functional teams to ensure seamless integration of data solutions.
- Contribute to documentation, data governance, and operational procedures to sustain system health and security.

Requirements:
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field.
- Minimum of 7 years' experience in complex data environments, managing large data volumes.
- At least 7 years of hands-on SQL/PL-SQL experience with complex data analysis.
- 5+ years of experience developing data solutions on Snowflake.
- 3+ years of building data pipelines and warehousing solutions using Python (libraries such as Pandas, NumPy, PySpark).
- 3+ years of experience working in hybrid data environments (on-prem and cloud).
- Proven hands-on experience with Python is mandatory.
- Extensive experience with Airflow or similar orchestration tools (e.g., Dagster).
- Certification: Snowflake SnowPro Core is required; SnowPro Advanced Architect/Data Engineer is a plus.
- Experience with DBT is advantageous.
- Skill in performance-tuning SQL queries, Spark jobs, and stored procedures.
- Solid understanding of E-R data models, data warehouse architecture, and advanced modeling concepts.
- Strong analytical capabilities to translate complex requirements into technical solutions.
- Excellent verbal and written communication skills.
- Proven ability to manage multiple projects with minimal supervision, adaptable to changing priorities.
- Strong problem-solving skills with a focus on clarity and business impact.

Preferred, but not required:
- Knowledge of advanced data warehouse concepts such as factless fact tables and temporal/bi-temporal models.
- Experience with data cataloging, data quality, and incident management platforms.
- Familiarity with AI/ML data pipeline integration.
Contact: D: (289) 633-4046
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting