Primary Responsibilities:
- Architect, design, and manage scalable, secure, and reliable data solutions leveraging the AWS ecosystem (e.g., Redshift, S3, Glue, Lambda).
- Build and optimize robust data pipelines for real-time and batch processing.
- Design and implement highly performant data warehouses and marts using Amazon Redshift and other AWS data services.
- Develop and optimize complex SQL queries to support large-scale data processing and analytics workloads.
- Collaborate with stakeholders across engineering, analytics, and business teams to define data requirements and deliver actionable insights.
- Establish and enforce best practices for data architecture, data governance, and data quality.
- Monitor system performance and ensure high availability, scalability, and security of data platforms.
- Mentor and guide junior data engineers and team members, fostering a culture of continuous learning and improvement.
- Stay up to date with the latest advancements in data engineering and AWS technologies, and recommend their adoption as appropriate.

Experience & Qualifications

Mandatory (Critical for the Role):
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5–7 years of experience in data engineering or related roles, with a strong focus on cloud-based solutions.
- Advanced expertise in the AWS ecosystem, including services such as Redshift, S3, Glue, Lambda, RDS, and Athena.
- Deep knowledge of SQL, including query optimization and performance tuning for large datasets.
- Extensive experience with ETL/ELT frameworks and processes.
- Proven ability to design and implement scalable, high-performance data architectures and data pipelines.
- Excellent problem-solving, analytical, and communication skills.
- Strong leadership and mentoring experience.

Skills (Technical, Business, Leadership):
- AWS Services: Amazon Redshift, S3, Glue, Lambda, RDS, Athena, Kinesis, EMR
- Programming/Scripting: Advanced SQL, Python, Scala
- ETL/ELT Tools: AWS Glue, Apache Airflow, Talend, Informatica
- Data Modelling: Dimensional Modelling, Star and Snowflake Schemas
- Big Data Frameworks: Apache Spark, Hadoop
- Version Control: Git, GitHub, Bitbucket
- CI/CD: Jenkins, AWS CodePipeline, Terraform
- Monitoring & Logging: CloudWatch
- Data Governance: Knowledge of compliance standards (e.g., GDPR, HIPAA) and metadata management

Desirable:
- AWS Certified Data Analytics or AWS Solutions Architect certification.
- Experience with data lake architectures and advanced data integration techniques.
- Familiarity with machine learning pipelines and AI integrations.
- Exposure to real-time data processing tools (e.g., Kafka, Flink, Spark Streaming).
- Experience in production support/BAU.
- Working knowledge of change control processes and their impacts.
- Ability to evaluate and prioritize tasks effectively.
- Involvement in a mixture of new application development, maintenance, and technical support.
- Ability to liaise effectively with internal customers at all levels within the organisation and, in some cases, with external parties, including development organisations and specialist consultants.
Job Title: Senior Data Engineer [T500-17989]