FinOpsly is an AI-native Value-Control™ platform for cloud, data, and AI economics, built to help enterprises move beyond passive cost visibility to active, outcome-driven control. The platform unifies technology spend across cloud infrastructure (AWS, Azure, GCP), data platforms (Snowflake, Databricks), and AI workloads into a single system of action, combining planning, optimization, automation, and financial operations.

Role Description

We are looking for a highly skilled Product Engineer to build cost and usage optimization for Databricks that provides deep insight into cost buckets in the application. This role involves developing systems that can programmatically analyze spend, generate insights and recommendations, and alert on anomalies. The ideal candidate has run Databricks pipelines at scale and felt the bill: someone who has made real architectural decisions driven by cost constraints and can now encode that experience into a product that delivers the same insight for others.

What You'll Do

You will be the technical owner and team lead for a FinOps module that helps customers understand, control, and reduce their Databricks spend on AWS. This is not just an IC role: you will lead a small team of engineers, mentor them through complex data-modeling challenges, and serve as the bridge between technical depth and product clarity. The work spans three layers: discovering what data and APIs exist inside a customer's Databricks account, modeling that raw data into meaningful cost buckets, and translating those insights into concrete optimization recommendations grounded in real-world Databricks expertise.

Map cost-relevant data across a customer's Databricks account on AWS. Identify which System Tables, Unity Catalog metadata, REST APIs, and AWS billing constructs (CUR, CloudTrail, S3 access logs) contain the signals needed to reconstruct total spend; the first sketch after this section shows the flavor of query involved. Define the access model and schema contracts that downstream modeling depends on. Guide your team to implement and test ingestion reliably across diverse customer environments.

Lead the design of the analytical data model that decomposes Databricks spend into attributable cost buckets: by workload, cluster tier, team, job, query, and AWS resource. Own the architecture of the Delta/dbt/DLT layer powering dashboards, trend analysis, and forecasting. Ensure the model accounts for both DBU charges and underlying AWS infrastructure (EC2, EBS, networking, S3). Review and guide team members' implementation of individual model components.

Draw on your hands-on Databricks expertise across analytics, data warehousing, and lakehouse patterns to define the recommendation logic your team encodes into the product; the second sketch after this section illustrates the evidence behind one such check. Proposals span rate optimization (Spot, reserved capacity, instance pool sizing) and usage optimization (right-sizing, idle cluster elimination, job consolidation, Photon tuning, caching). Each recommendation must be evidence-backed, tied to actual patterns in the customer's data, not generic best-practice advice.

Act as the technical voice in conversations with product, data, and customer-facing teams. Communicate tradeoffs clearly (scope, accuracy, latency) without losing non-technical stakeholders.
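To make the cost-bucket idea concrete, here is a minimal sketch of the kind of decomposition query this module would produce, assuming the customer has Unity Catalog system tables enabled. The 'team' tag key is a hypothetical customer convention, and pricing.default is the undiscounted list price, so negotiated rates and the AWS-side EC2/EBS/networking costs from the CUR would be layered on separately:

```sql
-- Sketch: decompose the last 30 days of DBU spend into cost buckets
-- using Databricks System Tables. Assumes system.billing.usage and
-- system.billing.list_prices are enabled in the account.
SELECT
  u.usage_date,
  u.sku_name,                                    -- jobs vs. SQL vs. all-purpose compute
  u.usage_metadata.cluster_id     AS cluster_id,
  u.usage_metadata.job_id         AS job_id,
  u.custom_tags['team']           AS team,       -- assumes a 'team' tagging convention
  SUM(u.usage_quantity)           AS dbus,
  SUM(u.usage_quantity * p.pricing.default) AS est_list_cost_usd  -- list price, pre-discount
FROM system.billing.usage u
JOIN system.billing.list_prices p
  ON  u.sku_name   = p.sku_name
  AND u.cloud      = p.cloud
  AND u.usage_unit = p.usage_unit
  AND u.usage_end_time >= p.price_start_time
  AND (p.price_end_time IS NULL OR u.usage_end_time < p.price_end_time)
WHERE u.usage_date >= current_date() - INTERVAL 30 DAYS
GROUP BY ALL
ORDER BY est_list_cost_usd DESC;
```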
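And a sketch of the evidence-gathering behind one recommendation, idle-cluster elimination, assuming the system.compute.node_timeline hardware-metrics table is enabled. The 10% CPU threshold and 7-day window here are illustrative knobs, not product-validated defaults:

```sql
-- Sketch: surface idle / oversized cluster candidates from node-level
-- hardware metrics, so the recommendation is backed by observed
-- utilization rather than generic best practice.
WITH cluster_util AS (
  SELECT
    workspace_id,
    cluster_id,
    AVG(cpu_user_percent + cpu_system_percent) AS avg_cpu_pct,
    AVG(mem_used_percent)                      AS avg_mem_pct,
    COUNT(DISTINCT instance_id)                AS node_count
  FROM system.compute.node_timeline
  WHERE start_time >= current_timestamp() - INTERVAL 7 DAYS
  GROUP BY workspace_id, cluster_id
)
SELECT *,
       CASE WHEN avg_cpu_pct < 10               -- illustrative threshold
            THEN 'candidate: idle or oversized'
            ELSE 'ok'
       END AS recommendation
FROM cluster_util
ORDER BY avg_cpu_pct ASC;
```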
What You'll Bring

· 5+ years of experience with Databricks in production: analytics, data warehousing, or lakehouse architectures.
· Prior experience leading or tech-leading a small engineering team, with demonstrated ability to mentor and grow others.
· Deep familiarity with Databricks System Tables, Unity Catalog metadata, and REST APIs.
· Strong SQL and PySpark skills; proven ability to model cost data from raw billing exports into clean analytical layers.
· Understanding of how AWS infrastructure costs map to Databricks workload patterns.
· Ability to translate ambiguous business requirements into crisp technical specifications, and to hold the team to them.
· Clear, confident communicator who can present technical tradeoffs to non-engineers without dumbing them down.
· Startup mindset: hands-on, automation-driven, customer-value obsessed.

Preferred:

· Experience with transformation frameworks for cost modeling.
· Prior work designing chargeback or showback models for platform teams.
· Familiarity with Databricks pricing tiers: Standard, Premium, and Enterprise DBU multipliers.
· Knowledge of Photon, serverless compute, and their cost tradeoffs versus classic clusters.
· FinOps Foundation practitioner certification or equivalent cloud cost background.
· Experience building internal tools or platforms adopted by cross-functional teams.
· Experience with multi-cloud environments.

Why Join Us

Exposure to real-world multi-account, multi-tenant cloud environments. If you've spent years optimizing Databricks environments (debugging slow jobs, redesigning cluster configs, fighting egress costs), this is the role where that knowledge stops being tribal and starts being scalable. Every optimization insight you've accumulated gets encoded into a recommendation engine that delivers the same outcome for dozens of customers. Your expertise has a multiplier here that it simply doesn't have as an individual practitioner.

We're deliberate about who we hire, not just for skills but for how people think, collaborate, and raise each other's work. You'll be surrounded by people who care about doing things well: clear technical thinking, honest feedback, and a shared intolerance for cutting corners that creates future pain. Early teams set the culture for everything that follows; we're building one where senior engineers mentor rather than gatekeep, and where good ideas win regardless of seniority.
Job Title
Product Engineer - Databricks