
Job Title : Senior Data Architect


Company : COMPUNNEL TECHNOLOGY INDIA PRIVATE LIMITED


Location : Tiruchirappalli


Created : 2025-07-20


Job Type : Full Time


Job Description

Work Location : Remote
Lead Time : Immediate
Key Skills : Databricks experience is required; excellent communication skills are a must.

About the Team:
We strongly believe that data and analytics are strategic drivers of future success. We are building a world-class advanced analytics team that will solve some of the most complex strategic problems and deliver top-line growth and operational efficiencies across our business. The Analytics team is part of the Organization and is responsible for driving organic growth by leveraging big data and advanced analytics. The team reports to the VP and Chief Data Officer at TEIS, works closely with the SVP of Corporate Strategy, and interacts regularly with the company's C-suite.

About the Role:
We are on an exciting journey to build and scale our advanced analytics practice, and we are looking for a Senior Data Architect with experience building data lake and data warehouse architectures using on-premises and cloud technologies. The ideal candidate will have a strong background in Big Data technologies, Databricks solutioning, and SAP integration within the manufacturing industry, along with a proven track record of leading data teams, architecting scalable data platforms, and optimizing cloud infrastructure costs. This role requires deep hands-on expertise in Apache Spark, Python, SQL, and cloud platforms (Azure/AWS/GCP).

Key Responsibilities:
- Design and implement scalable, secure, and high-performance Big Data architectures using Databricks, Apache Spark, and cloud-native services.
- Lead the end-to-end data architecture lifecycle, from requirements gathering through deployment and optimization.
- Design repeatable, reusable data ingestion pipelines that bring in data from ERP and other source systems such as SAP, Salesforce, HR, factory, and marketing systems (a brief sketch follows this list).
- Collaborate with cross-functional teams to integrate SAP data sources into modern data platforms.
- Drive cloud cost optimization strategies and ensure efficient resource utilization.
- Provide technical leadership and mentorship to a team of data engineers and developers.
- Develop and enforce data governance, data quality, and security standards.
- Translate complex business requirements into technical solutions and data models.
- Stay current with emerging technologies and industry trends in data architecture and analytics.
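For illustration only, here is a minimal sketch of the kind of reusable ingestion pipeline described above: a parameterized PySpark job that lands raw source extracts into a Delta Lake table. It assumes a Databricks runtime (where Delta Lake is available by default); the function name, paths, and table names are hypothetical and are not part of this posting.

    from pyspark.sql import SparkSession

    def ingest_to_delta(spark: SparkSession, source_path: str,
                        target_table: str, file_format: str = "parquet") -> None:
        # Read a raw extract from the landing zone; format and path are placeholders.
        df = spark.read.format(file_format).load(source_path)
        # Append into a bronze-layer Delta table, tolerating additive schema
        # drift from the upstream source system.
        (df.write.format("delta")
           .mode("append")
           .option("mergeSchema", "true")
           .saveAsTable(target_table))

    spark = SparkSession.builder.appName("erp_ingestion").getOrCreate()
    # Hypothetical call: land an SAP finance extract into a bronze Delta table.
    ingest_to_delta(spark, "s3://raw-zone/sap/bkpf/", "bronze.sap_bkpf")

The same function can be reused across source systems by varying only its parameters, which is what makes the pipeline repeatable.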
Required Skills & Qualifications:
- 6+ years of experience in Big Data architecture, data engineering, and AI-assisted BI solutions using Databricks and AWS technologies.
- 3+ years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, and DMS.
- 3+ years of experience building Delta Lakes and open table formats using technologies such as Databricks and AWS analytics services.
- Bachelor's degree in computer science, information technology, data science, data analytics, or a related field.
- Proven expertise in Databricks, Apache Spark, Delta Lake, and MLflow (a brief MLflow sketch appears at the end of this description).
- Strong programming skills in Python, SQL, and PySpark.
- Experience with SAP data extraction and integration (e.g., SAP BW, S/4HANA, BODS).
- Hands-on experience with cloud platforms (Azure, AWS, or GCP), especially in cost optimization and data lakehouse architectures.
- Solid understanding of data modeling, ETL/ELT pipelines, and data warehousing.
- Demonstrated team leadership and project management capabilities.
- Excellent communication, problem-solving, and stakeholder management skills.

Preferred Qualifications:
- Experience in the manufacturing domain, with knowledge of production, supply chain, and quality data.
- Certifications in Databricks, cloud platforms, or data architecture.
- Familiarity with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform).
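Because the role names MLflow explicitly, here is an equally minimal, hypothetical sketch of MLflow experiment tracking; the experiment path, parameter, and metric values are placeholders, not part of this posting.

    import mlflow

    mlflow.set_experiment("/Shared/demand-forecast")  # hypothetical experiment path

    with mlflow.start_run(run_name="baseline"):
        # Log one hyperparameter and one result metric; values are illustrative.
        mlflow.log_param("max_depth", 8)
        mlflow.log_metric("rmse", 0.42)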