Job Title: Data Engineer – AWS & Databricks

Job Summary:
We are looking for a results-driven Data Engineer with strong expertise in Amazon Web Services (AWS) and Databricks to build scalable, efficient data solutions. The candidate will be responsible for developing robust data pipelines, optimizing data workflows, and enabling data-driven decision-making across the organization.

Key Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines using AWS and Databricks.
- Develop and optimize data processing using PySpark and SQL.
- Implement and manage Delta Tables and Delta Live Tables (DLT) pipelines.
- Orchestrate workflows using Apache Airflow (MWAA) and Databricks Workflows.
- Work extensively with Amazon S3 for data lake architecture.
- Integrate data from multiple sources such as RDS, APIs, and streaming systems.
- Monitor, troubleshoot, and optimize pipelines using AWS CloudWatch and logging frameworks.
- Ensure data quality, governance, and security best practices are followed.
- Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
- Manage code repositories and deployments using GitHub and CI/CD pipelines.

Core Technical Skills:

AWS Expertise: Hands-on experience with:
- AWS Glue (ETL, Data Catalog)
- Amazon S3
- Amazon EMR
- Amazon MWAA (Airflow)
- Amazon RDS
- AWS Lambda
- Amazon DynamoDB
- Amazon EC2
- AWS CloudWatch & CloudTrail
- AWS IAM & Certificate Manager

Databricks Skills:
- Strong working knowledge of the Databricks platform
- Delta Lake & Delta Tables
- Delta Live Tables (DLT)
- Databricks Workflows / job scheduling
- YAML-based pipeline configurations

Programming & Data Skills:
- Strong proficiency in PySpark
- Advanced SQL skills
- Data modeling and warehousing concepts
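To illustrate the "YAML-based pipeline configurations" item above, here is a minimal sketch of what a Databricks job definition could look like. All names, notebook paths, and schedule values are hypothetical, and the exact fields depend on the Databricks Asset Bundles / Jobs API version in use.

```yaml
# Hypothetical Databricks job definition (Asset Bundle style).
# Job name, notebook paths, and cron schedule are illustrative only.
resources:
  jobs:
    daily_sales_etl:
      name: daily-sales-etl
      schedule:
        quartz_cron_expression: "0 0 2 * * ?"   # run daily at 02:00
        timezone_id: UTC
      tasks:
        - task_key: ingest_raw
          notebook_task:
            notebook_path: /Workspace/pipelines/ingest_raw
        - task_key: transform_silver
          depends_on:
            - task_key: ingest_raw   # run only after ingestion succeeds
          notebook_task:
            notebook_path: /Workspace/pipelines/transform_silver
```

A configuration like this is typically versioned in GitHub alongside the pipeline code and deployed through the same CI/CD process the role describes.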
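To make the data-quality responsibility concrete, here is a minimal, hypothetical sketch of the kind of completeness gate a pipeline step might run before writing to a Delta table. It is pure Python with no Spark dependency; in practice the same rules would usually be expressed as PySpark filters or DLT expectations, and `validate_records` and its rules are illustrative assumptions, not part of any specific stack.

```python
def validate_records(records, required_fields=("id", "event_ts")):
    """Split records into (valid, rejected) based on simple completeness rules."""
    valid, rejected = [], []
    for rec in records:
        # Reject any record that is missing a required field or carries a null.
        if all(rec.get(f) is not None for f in required_fields):
            valid.append(rec)
        else:
            rejected.append(rec)
    return valid, rejected

if __name__ == "__main__":
    sample = [
        {"id": 1, "event_ts": "2024-01-01T00:00:00Z"},
        {"id": 2, "event_ts": None},           # null timestamp -> rejected
        {"event_ts": "2024-01-02T00:00:00Z"},  # missing id -> rejected
    ]
    good, bad = validate_records(sample)
    print(len(good), len(bad))  # 1 2
```

Splitting records into valid and rejected sets (rather than simply dropping bad rows) keeps rejects available for monitoring and reprocessing, which is the usual quarantine pattern in ETL pipelines.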