
Job Title


Databricks Architect


Company : Decision Minds


Location : Chennai, Tamil Nadu


Created : 2025-12-16


Job Type : Full Time


Job Description

Role Description

This is a full-time hybrid role (multiple openings) for a Databricks Lead and Architect based in Chennai and Pondicherry, with the flexibility to work remotely on occasion. The Databricks Architect will be responsible for designing scalable architectures, developing software solutions, integrating Databricks with other platforms, and creating efficient architectural designs. The role also involves collaborating with cross-functional teams, managing projects to ensure timely delivery, and supporting the implementation of Databricks solutions that align with business goals.

Key Responsibilities:

- Be the go-to person for everything related to solutioning and architecture, and lead the team.
- Lead the migration of the Oracle Data Warehouse to Databricks, ensuring minimal downtime and data integrity.
- Build data pipelines using Databricks best practices for performance, cost, and reliability, while guiding junior engineers.
- Optimize Spark/PySpark jobs for performance and efficiency when processing large datasets.
- Implement data quality checks, validation, and reconciliation processes during migration.
- Work with the Hadoop ecosystem (HDFS, Hive, Sqoop) for legacy data integration where needed.
- Leverage AWS cloud services (S3, EMR, EC2, Lambda, RDS, Glue) for data storage and processing.
- Implement CI/CD pipelines for automated deployment of Databricks workflows.
- Ensure data governance, security, and compliance in the cloud environment.

Mandatory Skills & Experience:

- 5+ years in Data Engineering, with strong expertise in Databricks, Oracle, and Data Warehousing.
- Hands-on experience migrating on-prem data warehouses (Oracle) to Databricks.
- Hands-on expertise with key Databricks features (Lakeflow Connect, Lakebridge, Auto Loader, Unity Catalog, Spark Declarative Pipelines, Lakehouse Architecture).
- Proficiency in PySpark, SQL, and ETL/ELT frameworks.
- Strong knowledge of the Hadoop ecosystem (HDFS, Hive, Sqoop).
- Experience with AWS cloud services (S3, EMR, Glue, Lambda, RDS).
- Familiarity with data modeling, CDC, and performance tuning in Databricks.
- Good understanding of orchestration tools (Airflow, Control-M) and CI/CD pipelines.
- Proficiency in architecture, architectural design, and integration.
- Strong project management skills to lead projects and ensure milestones are met.
- Ability to work with cloud-based platforms and familiarity with AI/ML and Big Data technologies.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills to work effectively in a hybrid environment.
- Experience with Databricks and end-to-end deployment of data solutions is highly desirable.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.