Job Description: Data Engineer

About BizAcuity

BizAcuity is on a mission to help enterprises get the most out of their data by providing Business Intelligence and Data Analytics services, product development, and consulting for clients across the globe in a range of domains and verticals. Established in 2011, with a strong leadership team and 200+ engineers, we have made our mark as a world-class service provider and compete with large service providers to win business. BizAcuity has developed and delivered high-quality enterprise solutions to many medium and large clients using modern, best-in-class technologies from the data engineering and analytics world. Our services include Business Intelligence Consulting, Advanced Analytics, Managed Services, Data Management, Cloud Services, Technology Consulting, Application Development, and Product Engineering. For more info on BizAcuity, log on to -

We are seeking a highly skilled Data Engineer to join our team. You will be responsible for designing, developing, and maintaining scalable data pipelines and architectures.
The ideal candidate will bridge the gap between traditional data warehousing and modern cloud-native solutions, with expertise in diverse technology stacks including AWS, Snowflake, and Microsoft Azure.

Key Responsibilities

- ETL/ELT Development: Design and implement robust data integration workflows using modern tools such as AWS Glue, dbt, Airflow, SSIS, or cloud-native pipelines (e.g., Azure Data Factory, Snowpipe) to ingest and transform data.
- Database Engineering: Write complex, high-performance SQL queries (T-SQL, PL/SQL, or Snowflake SQL) and optimize database performance through advanced indexing, clustering, and execution plan analysis.
- Modern Data Architecture: Build and manage data workloads across cloud platforms, leveraging Data Lakes (e.g., Amazon S3, Azure Data Lake) and Data Warehouses (e.g., Snowflake, Redshift) to support analytics.
- Data Modelling: Develop and maintain dimensional models (Star/Snowflake schemas) and modern data architectures (e.g., Data Mesh, Medallion Architecture).
- Maintenance & Optimization: Monitor data pipelines, troubleshoot failures, and ensure data integrity and quality across Production and Test environments.
- Collaboration: Work closely with cross-functional teams to translate business requirements into technical specifications.

Technical Qualifications

Core Requirements

- Expert SQL Skills: Mastery of DML/DDL, window functions, and performance tuning (experience with Snowflake, Redshift, or SQL Server preferred).
- ETL/Pipeline Orchestration: Extensive experience building, deploying, and managing data pipelines using tools like AWS Glue, Apache Airflow, dbt, or SSIS.
- Cloud Data Platforms: Hands-on experience with cloud ecosystems, specifically:
  - AWS: S3, Lambda, Glue, Redshift, EMR.
  - Snowflake: Warehouses, Snowpipe, Time Travel, Data Sharing.
  - Optional: Azure (Synapse/Fabric) or GCP (BigQuery).

Preferred Skills

- Big Data & Scripting: Proficiency in Python or Scala for data processing (PySpark, Pandas) and scripting.
- DevOps & CI/CD: Familiarity with version control (Git/GitHub/GitLab) and CI/CD pipelines for automated deployment of data infrastructure (Terraform, Jenkins, etc.).
- Data Governance: Knowledge of data cataloguing, lineage, and security best practices within cloud environments.

Education & Experience

- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in data engineering.