At Wave, we help small businesses thrive so the heart of our communities beats stronger. We work in an environment buzzing with creative energy and inspiration. No matter where you are or how you get the job done, you have what you need to be successful and connected. The mark of true success at Wave is the ability to be bold, learn quickly, and share your knowledge generously.

Reporting to the Manager, Data Engineering, as a Senior Data Engineer you will build tools and infrastructure to support the efforts of the Data Products and Insights & Innovation teams, and the business as a whole. We're looking for a talented, curious self-starter who is driven to solve complex problems and can juggle multiple domains and stakeholders. This highly technical individual will collaborate with all levels of the Data and AI team, as well as various engineering teams, to develop data solutions, scale our data infrastructure, and advance Wave to the next stage in our transformation as a data-centric organization. This role is for someone with proven experience in complex product environments. Strong communication skills are a must to bridge the gap between technical and non-technical audiences across a spectrum of data maturity.

At Wave, you'll have the chance to grow and thrive by building scalable data infrastructure, enhancing a modern data stack, and contributing to high-impact projects that empower insights and innovation across the company.

Here's How You Make an Impact:

You're a builder. You will design, build, and deploy components of a modern data platform, including CDC-based ingestion using Debezium and Kafka, a centralized Hudi-based data lake, and a mix of batch, incremental, and streaming data pipelines.

You ensure continuity while driving modernization.
You will maintain and enhance the existing Amazon Redshift data warehouse and legacy Python ELT pipelines, ensuring stability and reliability, while accelerating the transition to a brand-new Databricks-based analytics and processing environment. This platform, integrated with dbt, will progressively replace the existing data environment.

You balance innovation with operational excellence. You enjoy building fault-tolerant, scalable, and cost-efficient data systems, and you continuously improve observability, performance, and reliability across both legacy and modern platforms.

You collaborate to deliver impact. You will work closely with cross-functional partners to plan and roll out data infrastructure and processing pipelines that support analytics, machine learning, and GenAI use cases. You enjoy enabling teams across Wave by ensuring data and insights are delivered accurately and on time.

You thrive in ambiguity and take ownership. You are self-motivated and comfortable working autonomously, identifying opportunities to optimize pipelines and improve data workflows, even under tight timelines and evolving requirements.

You keep the platform reliable. You will respond to PagerDuty alerts, troubleshoot incidents, and proactively implement monitoring and alerting to minimize incidents and maintain high availability.

You're a strong communicator. Colleagues rely on you for technical guidance. Your ability to clearly explain complex concepts and actively listen helps build trust and resolve issues efficiently.

You're customer-minded. You will assess existing systems, improve data accessibility, and deliver practical solutions that enable internal teams to generate actionable insights and enhance our external customers' experience.

You Thrive Here by Possessing the Following:

Data Engineering Expertise: Bring 6+ years of experience building data pipelines and managing a secure, modern data stack.
This includes CDC streaming ingestion using tools like Debezium into a data warehouse that supports AI/ML workloads.

AWS Cloud Proficiency: At least 3 years of experience working with AWS cloud infrastructure, including Kafka (MSK), Spark / AWS Glue, and infrastructure as code (IaC) using Terraform.

Data Modelling and SQL: Fluency in SQL and a strong understanding of data modelling principles and data storage structures for both OLTP and OLAP.

Databricks Experience: Experience developing or maintaining a production data system on Databricks.

Strong Coding Skills: Write and review high-quality, maintainable code that enhances the reliability and scalability of our data platform. We use Python, SQL, and dbt extensively, and you should be comfortable leveraging third-party frameworks to accelerate development.

Data Lake Development: Prior experience building data lakes on S3 using Apache Hudi with Parquet, Avro, JSON, and CSV file formats.

CI/CD Best Practices: Experience developing and deploying data pipeline solutions using CI/CD best practices to ensure reliability and scalability.

Bonus:

Data Governance Knowledge: Familiarity with data governance practices, including data quality, lineage, and privacy, as well as experience using cataloging tools to enhance discoverability and compliance.

Data Integration Tools: Working knowledge of tools such as Stitch and Segment CDP for integrating diverse data sources into a cohesive ecosystem is a plus.

Analytical and ML Tools Expertise: Knowledge and practical experience with Looker, Power BI, Athena, Redshift, or SageMaker Feature Store to support analytical and machine learning workflows is a definite bonus!

Salary: $145,000 - $154,000 a year. Final compensation is determined based on experience, expertise, and role alignment. Most candidates are hired within the middle of the range, with the upper end reserved for those bringing exceptional depth, impact, and immediate autonomy.
We also offer:

Bonus Structure
Employer-paid Benefits Plan
Health & Wellness Flex Account
Professional Development Account
Wellness Days
Holiday Shutdown
Wave Days (extra vacation days in the summer)
Get A Wave Program (work from anywhere in the world up to 90 days)

At Wave, we value diversity of perspective. Your unique experience enriches our organization. We welcome applicants from all backgrounds. Let's talk about how you can thrive here!

Wave is committed to providing an inclusive and accessible candidate experience. If you require accommodations during the recruitment process, please let us know by emailing . We will work with you to meet your needs.

Please note that we use AI-assisted note-taking in interviews for transcription purposes only. This helps ensure interviewers can remain fully present and engaged throughout the discussion.

This advertised posting is a current vacancy.