
Job Title : Data Engineer


Company : Water Corporation


Location : WA, Australia


Created : 2026-04-25


Job Type : Full Time


Job Description

We're entering a new chapter for Information Technology at Water Corporation as we deliver bold, future-focussed solutions. With the launch of our new Information and Technology Group Operating Model, we're transforming how we deliver technology services, making them faster, more responsive, and more aligned with the needs of our business and customers. As part of this transformation, we're hiring for multiple new positions. These roles are key to shaping a vibrant, forward-thinking Information Technology Group - one that adds real value, delivers better outcomes, and works in smarter, more agile ways. It's a strategic shift aligned with our organisational strategies, designed to modernise systems, embrace innovation, and create a more flexible, collaborative, and customer-centric IT environment. If you're equally excited about innovation, transformation, and making a meaningful impact, now is the perfect time to join us.

About the role

We are seeking a Data Engineer to join our Data & Analytics team within the Information & Technology Group. This technical specialist role is responsible for the implementation, optimisation and support of reliable, performant and secure data platforms and pipelines that enable analytics, insights and data-driven decision-making across the business. In this role, you will work at scale with the enterprise data and analytics platform, building production-grade frameworks and data pipelines that ingest, transform, model and serve high-quality data to analysts, data scientists and business stakeholders, using Databricks and AWS cloud-native technologies.

Real benefits that matter

- Real flexibility with options to work from home, a 9-day fortnight and flexible work hours
- An additional 2 wellbeing days each year
- Access to long service leave pro rata after 3 years of service
- Generous co-contribution superannuation scheme, which offers up to 16%.
This includes the standard 12% employer contribution, plus an additional 2% employer co-contribution that matches your own 2% contribution.
- Purchase additional leave of up to 12 weeks, or work 4 years at a reduced salary and take the fifth year off as paid leave

Discover more benefits we offer to support the unique and individual ways our employees live.

What the role will involve

- Design, build and optimise data pipelines, datasets and data products using Databricks and AWS services.
- Develop ELT pipelines using Spark, Delta Lake, Python and SQL, following approved patterns.
- Implement data modelling, data quality checks, validation and observability as part of production-ready solutions.
- Deploy and support data pipelines across Dev, Test and Prod environments using CI/CD practices.
- Execute testing and validation to ensure the accuracy, reliability and performance of data transformations.
- Monitor and support production data products, conducting incident investigation, root cause analysis and remediation.
- Participate in code reviews and apply software engineering standards, patterns and best practices.
- Work closely with senior data engineers, analysts and stakeholders to understand requirements and implement fit-for-purpose data solutions.
- Operate within Agile delivery, DataOps, DevOps and IT Service Management frameworks.

Key skills and experience

Qualifications & Experience

- Degree in Computer Science, Information Systems or a related discipline, or equivalent practical experience
- Considerable professional experience in data engineering roles
- Practical experience designing and building:
  - Data ingestion and transformation pipelines
  - Analytical data models
  - Data quality checks and validation logic
- Considerable experience with cloud-based data and analytics platforms, particularly Databricks and AWS
- Strong experience using Python and SQL for data engineering workloads
- Experience working in Agile and DevOps environments, using Git-based version control.
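For candidates unfamiliar with the "data quality checks and validation logic" mentioned above, the sketch below shows the general idea in plain Python. It is illustrative only - in this role such checks would typically run inside a Databricks/Spark pipeline, and every name here (the record fields, the `validate_reading` function) is a hypothetical example, not part of Water Corporation's actual codebase.

```python
# Minimal, framework-free sketch of record-level data quality validation.
# All field names and thresholds are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    passed: bool
    errors: list = field(default_factory=list)

def validate_reading(record: dict) -> ValidationResult:
    """Validate one (hypothetical) water-meter reading before loading it downstream."""
    errors = []

    # Completeness check: required fields must be present and non-null.
    for key in ("meter_id", "timestamp", "litres"):
        if record.get(key) is None:
            errors.append(f"missing field: {key}")

    # Validity check: consumption must be a non-negative number.
    litres = record.get("litres")
    if litres is not None and (not isinstance(litres, (int, float)) or litres < 0):
        errors.append(f"invalid litres value: {litres!r}")

    return ValidationResult(passed=not errors, errors=errors)
```

In a production Lakehouse pipeline the same intent is usually expressed declaratively (for example as Delta Lake table constraints or expectations) rather than hand-rolled per record, but the underlying checks - completeness, validity, range - are the same.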
Relevant AWS and Databricks certifications are highly regarded.

Technical Skills

Databricks Lakehouse
- Apache Spark
- Delta Lake
- Structured Streaming (desirable)

AWS Services
- S3, Glue, Lambda
- RDS, DynamoDB, API Gateway

Orchestration
- Databricks Lakeflow Jobs
- Airflow or similar orchestration tools

DataOps and CI/CD
- Git-based workflows
- CI/CD pipelines
- Automated testing

Strong Python, SQL and scripting skills for analytics and data transformations.

Personal Attributes

- Strong analytical mindset with a structured approach to problem solving
- Eager to learn and grow as a data engineer in a modern Lakehouse environment
- Collaborative and able to work effectively within cross-functional teams
- Open to feedback and keen to continuously improve engineering practices
- Demonstrates behaviours aligned with a high-performance, values-driven culture.

Our commitment to a diverse and inclusive workplace

Diversity and inclusion are more than words. They guide us in building a thriving workforce that reflects the diversity of our customers and our community. We encourage applications from every background, including Aboriginal and Torres Strait Islander people, people with disability, women, youth, LGBTQIA+ folks and people from culturally and linguistically diverse backgrounds. Applicants with disability who require adjustments, or alternative methods of communication in the recruitment process, can contact a Recruitment Officer at [email protected] or 9420 2000. To read our diversity and inclusion statement, please visit our website.