Job Title


Senior Data Engineering Specialist (Databricks / Spark)


Company : Sky Systems, Inc. (SkySys)


Location : Pune, Maharashtra


Created : 2025-12-17


Job Type : Full Time


Job Description

Role: Senior Data Engineer
Position Type: Full-Time Contract (40 hrs/week)
Contract Duration: Long Term
Work Schedule: 8 hours/day (Mon-Fri)
Location: Hybrid (3 days onsite) - Pune, India

We are seeking a highly skilled Senior Data Engineer with deep expertise in modern data technologies and a strong ability to design, build, and optimize large-scale data pipelines. The ideal candidate will partner closely with Product Managers, Data Scientists, and cross-functional engineering teams to turn ideas into production-ready data solutions.

Key Responsibilities

- Design, develop, and operationalize scalable, distributed data pipelines for both batch and real-time processing.
- Collaborate end-to-end with Product Managers and Data Scientists to understand requirements, build prototypes, and deliver production-quality solutions.
- Write clean, high-quality code following engineering best practices and contribute to setting new coding standards as needed.
- Perform peer code reviews to maintain and improve quality across engineering teams.
- Facilitate diagnosis and resolution of technical or functional data issues.
- Advocate for data engineering best practices, process improvements, and developer experience enhancements.
- Build strong relationships across teams and participate in Communities of Practice.
- Engage in continuous learning to stay current with emerging technologies and business domain knowledge.
- Contribute as an active member of an agile engineering team, participating in all aspects of the workflow.

Required Skills & Experience

- 5+ years of experience working with big data technologies in enterprise environments.
- Strong expertise in designing and optimizing distributed, scalable, and reliable data pipelines.
- Hands-on experience with Databricks, Apache Spark, and Spark Structured Streaming for real-time ingestion.
- Proficiency working with Kafka in event-driven architectures.
- Strong SQL skills and practical experience with cloud data warehouses such as Snowflake.
- Experience with ETL development, data modeling, performance tuning, and data governance.
- Programming/scripting experience; Java exposure is a plus.
- Ability to translate business requirements into technical solutions and work effectively in cross-functional teams.