About the Role

We are seeking a highly experienced Data Engineer to develop and optimize our modern cloud data platform infrastructure. This role is ideal for someone who thrives in a fast-paced environment, is passionate about data architecture, and has a deep understanding of data transformation, modeling, and orchestration using modern tools such as dbt-core, Snowflake, and Python.

Key Responsibilities

- Develop and implement scalable data pipelines using dbt-core, Python, and SQL to support analytics, reporting, and data science initiatives.
- Optimize data models in Snowflake for efficient querying and storage.
- Develop and maintain our data warehouse, ensuring data quality, governance, and performance.
- Collaborate with cross-functional teams, including data analysts, data architects, data scientists, and business stakeholders, to understand data needs and deliver robust solutions.
- Develop and support best-practice SOPs for version control (Git), CI/CD pipelines, and data pipeline monitoring.
- Take ownership of tasks and work in a self-driven manner within a culture of technical excellence and continuous improvement.
- Independently execute proofs of concept (POCs) to recommend new tools and technologies that enhance the data platform.
- Provide ongoing support for existing ELT/ETL processes and procedures.
- Perform other duties as assigned, including but not limited to possible reallocation of effort to other organizations per business need and management request.

Required Qualifications

- Bachelor's degree in computer science or a related field (16 years of formal education related to engineering).
- Around 5 years of experience in data engineering, with a minimum of 1 year of experience in Snowflake and Python.
- Expert-level proficiency in SQL for data transformation and automation.
- Experience with dbt-core for data modeling and transformation.
- Strong hands-on experience with cloud platforms (Microsoft Azure) and cloud data platforms (Snowflake).
- Proficiency with Git and collaborative development workflows.
- Familiarity with Microsoft VS Code or similar IDEs.
- Knowledge of Azure DevOps or GitLab development operations and job scheduling tools.
- Solid understanding of modern data warehousing architecture, dimensional modeling, and ELT/ETL frameworks.
- Excellent communication skills and the ability to translate complex technical concepts for non-technical stakeholders.
- Proven expertise in designing and implementing batch and streaming data pipelines to support near-real-time and large-scale data processing needs.

Preferred Qualifications

- Experience working in a cloud-native environment (AWS, Azure, or GCP).
- Familiarity with data governance, security, and compliance standards.
- Prior experience with Apache Kafka (Confluent), DataOps.Live, or Atlan.
- Hands-on experience with orchestration tools (e.g., ActiveBatch, Airflow, Prefect) is a plus.
Job Title
Data Engineer