
Job Title : Data Engineer


Company : Ownly


Location : Bangalore, Karnataka


Created : 2026-04-16


Job Type : Full Time


Job Description

About Ownly

At OWNLY (backed by Rapido), we're on a mission to make food delivery simple, fair, and exactly what you ordered — nothing hidden, nothing extra. Our team is made up of problem-solvers, innovators, and doers, all working to make OWNLY the most trusted way to get your food without hidden charges or inflated prices, and we're just getting started. We connect customers to their favourite restaurants through our reliable delivery network, while helping restaurant partners keep more of what they earn. As we grow, our focus remains the same: fair pricing, honest service, and a platform that respects everyone involved in the journey from kitchen to doorstep.

What You'll Do

- Build the Data Stack: Design, set up, and maintain a modern, scalable data infrastructure using cloud-native tools (AWS/GCP/Azure)
- Pipeline Development: Build robust ETL/ELT pipelines to collect and process data from multiple sources — app events, transactions, logistics, etc.
- Data Modeling: Define clean, efficient data models and schemas that support real-time dashboards and analytics
- Cross-Team Enablement: Collaborate with product, growth, ops, and engineering teams to understand data needs and deliver reliable pipelines
- Data Warehousing: Set up and maintain the data warehouse (e.g., BigQuery, Redshift, Snowflake)
- Monitoring & Quality: Implement tools to monitor pipeline health, ensure data accuracy, and prevent duplication or drift
- Tooling & Automation: Build internal tools for easier data access, self-serve analytics, and automated reporting
- Scalability: Design for growth, ensuring the system scales as new data sources and higher volumes come in

What We're Looking For

- 2–4 years of experience as a Data Engineer, Backend Engineer (with a data focus), or in a similar role
- Proficiency in Python or another scripting language used for ETL
- Hands-on experience with cloud data platforms (AWS/GCP/Azure), especially services like S3, Lambda, Pub/Sub, BigQuery, Redshift, etc.
- Strong SQL skills and experience working with large-scale structured and semi-structured data
- Experience with tools like Airflow, dbt, Kafka, or similar is a big plus
- Comfort with early-stage environments — willing to build fast, iterate, and own systems end-to-end
- Bonus: Exposure to analytics tools (Looker, Metabase, Tableau), product event tracking (Mixpanel, Segment), or ML pipelines