Job Title
AWS Big Data Engineer

Role Summary
We are looking for a highly experienced Senior AWS Big Data Technical Lead to spearhead end-to-end delivery of complex, large-scale AWS Data Lake and Big Data projects. The ideal candidate has deep hands-on expertise in AWS data engineering, data pipeline development, PySpark, and AWS native services. This individual will serve as the technical leader, driving architectural discussions, solution design, stakeholder collaboration, and successful execution of regulatory projects (such as MHHS or similar).

Key Responsibilities

Project Leadership & Delivery
- Own and lead end-to-end project delivery for large and complex AWS Big Data engagements, including regulatory projects such as MHHS.
- Drive requirement understanding, clarification, and technical solutioning in partnership with Solution Architects.
- Translate business requirements into high-level and low-level designs, ensuring accurate documentation and customer alignment.
- Lead a team of Data Engineers and oversee project governance, quality, and technical delivery standards.
- Independently manage end-to-end technical delivery for AWS Data Lake initiatives.
- Ensure timely delivery, risk mitigation, and alignment with project milestones.

Technical Solutioning & Architecture
- Evaluate technical feasibility and propose optimal AWS-based data engineering solutions.
- Conduct detailed design reviews with customer stakeholders and internal teams.
- Drive architectural discussions and provide expert guidance on AWS data engineering best practices.

Stakeholder & Third-Party Collaboration
- Collaborate closely with customer technical teams, business SMEs, and architecture groups.
- Manage engagement with multiple external vendor organizations involved in the project delivery lifecycle.

Required Technical Skills

Core AWS Big Data Engineering
- 7+ years of total experience with strong leadership in AWS-based data engineering.
- Deep expertise in building, orchestrating, and deploying data pipelines on AWS Cloud.
- Hands-on working experience with:
  - Amazon EMR
  - AWS Lambda
  - Amazon S3
  - AWS Glue
  - Amazon Athena
  - AWS Data Lake architectures

Programming & Data Processing
- Strong coding proficiency in PySpark (Python + Spark).
- Ability to design, develop, and optimize PySpark-based ETL/ELT pipelines.

Tools & Platforms
- Experience with CI/CD pipelines (CodePipeline, CodeBuild, GitHub, GitLab, etc.).
- Working knowledge of Airflow (Cloud Composer or self-managed) is a strong advantage.
- Familiarity with DevOps processes, automation, and version control (Git).