Job Summary:
We are seeking a Lead Data Engineer to design, develop, and maintain data pipelines and ETL workflows for processing large-scale structured and unstructured data. The ideal candidate has expertise in AWS data services (S3) and Databricks (Workflows, SQL), along with big data processing, real-time analytics, cloud data integration, and team leadership experience.

Key Responsibilities:
- Redesign ETL pipelines for scalability and performance using Spark, Python, SQL, and UDFs.
- Implement ETL/ELT workflows in Databricks for structured data processing.
- Analyze and resolve pipeline issues quickly.
- Create data quality checks using Unity Catalog.
- Create datastreams in Adverity.
- Drive daily status calls and sprint planning meetings.
- Ensure the security, quality, and compliance of data pipelines.
- Contribute to CI/CD integration, observability, and documentation.
- Collaborate with data architects and analysts to meet business requirements.

Qualifications:
- 8+ years of experience in data engineering, including 2+ years working with AWS services.
- Hands-on experience with tools such as S3, Databricks, and Databricks Workflows.
- Strong SQL and data processing skills (e.g., PySpark, Python).
- Nice to have: knowledge of Adverity.
- Nice to have: experience with a ticketing tool such as Asana or Jira.
- Nice to have: data analysis experience.
Job Title: Lead Data Engineer
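As a rough illustration of the data quality checks this role involves, here is a minimal sketch in plain Python. The schema, field names, and rules are hypothetical; in practice these checks would run on Spark/Databricks with Unity Catalog as described above, not in plain Python.

```python
# Minimal sketch of a row-level data quality check (hypothetical schema:
# each row is expected to carry a non-null "id" and a non-negative
# numeric "amount"). Real pipelines would express these rules in
# Spark/Unity Catalog rather than plain Python.

def quality_check(rows):
    """Split rows into valid and rejected, attaching a reason to each rejection."""
    valid, rejected = [], []
    for row in rows:
        if row.get("id") is None:
            rejected.append((row, "missing id"))
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejected.append((row, "invalid amount"))
        else:
            valid.append(row)
    return valid, rejected

good, bad = quality_check([
    {"id": 1, "amount": 10.5},
    {"id": None, "amount": 3.0},
    {"id": 2, "amount": -1},
])
```

Here `good` keeps the one clean row, while `bad` pairs each rejected row with the rule it violated, which makes failures easy to report downstream.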