Senior Data Engineer – Microsoft Fabric & PySpark

We’re hiring a Senior Data Engineer (4–7 years of experience) to help us transition our legacy data systems (Azure Synapse, SQL Server) to a modern Microsoft Fabric lakehouse platform. If you’re skilled in building robust, scalable data solutions and excited about the latest Azure data technologies, we’d love to hear from you!

Key Responsibilities:
- Lead the migration of data pipelines and stored procedures to Microsoft Fabric using PySpark, Delta Lake, and OneLake.
- Build and orchestrate workflows with Apache Airflow and Azure Data Factory (ADF).
- Redesign legacy ADF pipelines for optimal Fabric integration.
- Design star schema (dimensional) models to enhance reporting and analytics.
- Implement data governance, version control (Git), and CI/CD practices.
- Collaborate with analysts, architects, and business stakeholders to deliver high-quality, trusted data solutions.
- Support Power BI integration, semantic modeling, and ongoing performance tuning.

Requirements:
- 4+ years of hands-on data engineering experience with PySpark.
- Proficiency with Azure Synapse, SQL Server, ADF, and Microsoft Fabric.
- Strong experience with Apache Airflow, Delta Lake, OneLake, T-SQL, and KQL.
- In-depth knowledge of dimensional modeling, data governance, and Azure data security best practices.
- Experience with DevOps, Git, and automated deployment.

Nice to Have:
- Experience modernizing ETL/ELT workloads to Fabric and lakehouse architectures.
- Microsoft Azure certification (e.g., DP-203).
- Advanced Power BI skills: DAX, MDX, Direct Lake, Tabular Editor, ALM Toolkit, DAX Studio, etc.