ANSR is hiring for one of its clients.

About Visy:
Visy is a family-owned Australian business and a global pioneer in sustainable packaging, recycling and logistics. It operates across 150 sites globally, including operations in Asia, Europe, and the USA, supported by a dedicated workforce of over 7,000 employees. Visy is Australia and New Zealand's largest manufacturer of food and beverage packaging made from household recycling, and as Australia's largest recycler it processes 40% of Australian households' recycling. Visy also supports customers with logistics, packaging supplies, point-of-sale displays and more. Visy India, the company's technology hub in Hyderabad, is expanding its technical capabilities to support the global business.

Primary Responsibilities:
- Data Pipeline Development: Design and implement scalable data pipelines using AWS services such as Glue, Lambda, and Kinesis (an illustrative Lambda sketch follows the lists below).
- Redshift Management: Optimize and maintain Redshift clusters, including query tuning and performance optimization.
- Data Integration: Automate the ingestion, transformation, and integration of diverse data sources into data lakes and warehouses (see the Glue job sketch after the lists below).
- Data Modelling: Develop efficient data models and schemas for analytical and operational use cases.
- Collaboration: Work closely with data analysts, scientists, and stakeholders to understand requirements and deliver reliable solutions.
- Data Governance: Ensure data quality, security, and compliance using AWS tools such as IAM and Lake Formation.
- Documentation: Create and maintain comprehensive documentation for workflows, pipelines, and infrastructure.

Experience & Qualifications:
Mandatory (critical for the role):
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3 to 5 years of experience in data engineering or related roles.
- Proficiency in AWS services, including Redshift, Glue, S3, Lambda, and Athena.
- Advanced SQL skills with experience in query optimization.
- Strong understanding of ETL/ELT processes, data modelling, and data warehousing.
- Familiarity with CI/CD pipelines and Infrastructure as Code (e.g., CloudFormation, Terraform).
- Excellent problem-solving skills and the ability to manage multiple tasks.

Skills (Technical, Business, Leadership):
- AWS Services: Amazon Redshift, S3, Lambda, Glue, RDS, Athena, Kinesis
- Databases: Amazon Redshift, PostgreSQL, MySQL
- Languages: SQL, Python, R
- ETL Tools: AWS Glue, Apache Spark, SAP Data Services
- Data Modelling: Star/Snowflake Schema, Data Warehousing
- Version Control: Git, GitHub, Bitbucket
- CI/CD Tools: AWS CodePipeline
- Monitoring/Logging: CloudWatch, AWS X-Ray
- Big Data: Experience with large datasets and distributed computing
- AWS certification (e.g., AWS Certified Data Engineer – Specialty)
- Experience with Python or other programming languages for data processing
- Knowledge of big data frameworks and machine learning workflows
- Knowledge of SAP MM, SAP Sales, and SAP Finance is desirable
- Solid understanding of leading and contemporary practices and capabilities in information management, data governance, reporting and analytics
- Experience in production support/BAU
- Working knowledge of change control processes and their impacts
- A high degree of problem-solving and technical skill
- Ability to evaluate and prioritize tasks
- Involvement in a mixture of new application development, maintenance, and technical support
- Ability to liaise effectively with internal customers at all levels of the organization and, where required, with external parties, including development organizations and specialist consultants
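
To illustrate the pipeline work described above, here is a minimal sketch (not part of the job requirements) of a Python Lambda handler that loads newly arrived S3 objects into Redshift via the boto3 Redshift Data API. The bucket, cluster, database, user, table, and IAM role names are hypothetical placeholders.

    # Illustrative sketch only: a Lambda handler that ingests S3 drops into
    # Redshift via the Redshift Data API. All resource names are hypothetical.
    import json
    import urllib.parse

    import boto3

    redshift_data = boto3.client("redshift-data")

    CLUSTER_ID = "analytics-cluster"   # hypothetical cluster identifier
    DATABASE = "warehouse"             # hypothetical database name
    DB_USER = "etl_user"               # hypothetical database user
    COPY_ROLE = "arn:aws:iam::123456789012:role/redshift-copy"  # hypothetical

    def handler(event, context):
        """Triggered by an S3 put event; issues a COPY for each new object."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            sql = (
                "COPY staging.events "          # hypothetical target table
                f"FROM 's3://{bucket}/{key}' "
                f"IAM_ROLE '{COPY_ROLE}' "
                "FORMAT AS PARQUET;"
            )
            # The Data API runs the statement asynchronously; the returned Id
            # can be polled with describe_statement for completion status.
            response = redshift_data.execute_statement(
                ClusterIdentifier=CLUSTER_ID,
                Database=DATABASE,
                DbUser=DB_USER,
                Sql=sql,
            )
            print(json.dumps({"key": key, "statement_id": response["Id"]}))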
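
Likewise, a minimal AWS Glue (PySpark) job skeleton of the kind the data integration bullet describes might look like the following; the catalog database, table, column mappings, and output path are hypothetical.

    # Illustrative AWS Glue (PySpark) job skeleton; names are hypothetical.
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a source table registered in the Glue Data Catalog.
    source = glue_context.create_dynamic_frame.from_catalog(
        database="raw_zone", table_name="orders"
    )

    # Rename and cast columns for the curated layer.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[
            ("order_id", "string", "order_id", "string"),
            ("order_ts", "string", "order_ts", "timestamp"),
            ("amount", "double", "amount_aud", "double"),
        ],
    )

    # Write Parquet to the curated S3 zone.
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://curated-zone/orders/"},
        format="parquet",
    )

    job.commit()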
Job Title: Data Engineer [T500-17988]