Job Title: Spark Scala Architect / Big Data
Experience: 12 to 16 Years
Location: Bangalore, Pune, Hyderabad, Mumbai
Notice Period: Immediate to 15 Days Maximum
Employment Type: Full-Time
Interview Mode: Virtual

Job Description

We are looking for an experienced Databricks + PySpark Architect to lead the design and implementation of advanced data processing solutions on the cloud. The ideal candidate will have a strong background in big data architecture, Databricks, and PySpark, with a solid understanding of AWS services.

Core Roles & Responsibilities:
- Architect and implement scalable data pipelines using Databricks and PySpark
- Lead end-to-end architecture and solution design for large-scale data platforms
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions
- Optimize the performance and scalability of data engineering workflows
- Integrate and deploy solutions on the AWS cloud using services such as S3, Glue, EMR, and Lambda
- Ensure best practices for data security, governance, and compliance
- Guide and mentor development teams in big data technologies and architecture

Primary Skills:
- Expertise in Databricks and PySpark
- Strong hands-on experience with data engineering on cloud platforms

Secondary Skills:
- Proficiency with AWS services for data processing and storage
- Familiarity with DevOps practices and CI/CD pipelines on the cloud