Job Title : Data Engineer

Company : Synechron

Location : Pune, Maharashtra

Created : 2026-02-23

Job Type : Full Time

Job Description

Job Title: Data Engineer
Location: Pune (Hinjewadi)
Experience Required: 7-15 years
Notice Period: Serving notice period or immediate joiners

About Company:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, serving an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honoured with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 13,950+ across 52 offices in 20 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.

Diversity, Equity, and Inclusion
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative-action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Role Overview
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize data pipelines and data marts on the Azure ecosystem. The ideal candidate will have deep hands-on experience with Python programming, PySpark, and SQL; experience with Azure Databricks or any other cloud platform is a plus. You will work closely with data architects, SMEs, and business stakeholders to deliver scalable, secure, and high-quality data solutions.

Required Skills
- 7+ years of experience in Data Engineering.
- Strong expertise in Python, PySpark, and advanced SQL.
- Hands-on experience with Azure Databricks.
- Hands-on experience building large-scale data pipelines, data lakes, and data marts.
- Excellent understanding of Delta Lake, distributed computing, and performance optimization.
- Experience with Azure Data Factory, ADLS, Synapse, Key Vault, and CI/CD pipelines.
- Deep understanding of relational and dimensional modeling, star schemas, and slowly changing dimensions (SCDs).
- Strong data quality, metadata, and governance practices.
- Knowledge of banking/financial services data and processes.

Key Responsibilities

Data Engineering & Development
- Design and build data marts, curated layers, and reusable data models for analytical and reporting needs.
- Develop high-performance ETL/ELT pipelines using Azure Databricks, PySpark, Python, and SQL.
- Work with large-scale structured and unstructured datasets using distributed computing techniques.
- Implement data transformation workflows aligned to the medallion architecture (Bronze → Silver → Gold).

Azure Platform & Databricks
- Develop and optimize notebooks, jobs, Delta Lake tables, and data workflows within Azure Databricks.
- Implement Delta Lake best practices: ACID transactions, schema enforcement, and time travel.
- Utilize Azure services such as ADLS Gen2, Azure Data Factory, Azure Synapse, Key Vault, API integrations, etc.
- Monitor and optimize cluster configurations, job performance, and cost management.

Data Governance & Quality
- Ensure data quality, lineage, security, and compliance with enterprise standards.
- Implement validation frameworks, unit tests, and automated QA processes.
- Work with data governance teams and tools (Collibra, Purview, Securiti.ai, etc.) to ensure compliance.

If you find this opportunity interesting, kindly share your updated profile at Shweta.patil@ with the below details (mandatory):

Total IT experience:
Total experience as a Data Engineer (PySpark):
Total experience in Python programming: