Job Title: MLOps Engineer — Databricks
Client: A large global enterprise (name not disclosed)
Location: India
Work Model: 100% Remote
Contract: 6 months (initial), with possibility of extension
Start Date: ASAP
Engagement: Full-time / long-term contract

Role Overview
We are seeking an experienced Databricks MLOps Engineer to design, build, and manage scalable machine learning operations on the Databricks Lakehouse Platform. The role involves automating ML workflows, operationalizing models, enabling reproducible pipelines, and ensuring governance and monitoring across the ML lifecycle.

Key Responsibilities

1. Develop Scalable MLOps Pipelines
- Build automated ML pipelines for training, validation, deployment, and batch/real-time inference.
- Use Databricks Workflows, Jobs, Repos, and Delta Live Tables where applicable.
- Implement distributed training and inference pipelines using MLflow and PySpark.

2. Model Lifecycle Management
- Manage model versioning and promotion across dev → staging → production using the MLflow Model Registry (an illustrative sketch appears at the end of this posting).
- Create reproducible workflows for model packaging, deployment, and rollback.

3. CI/CD Integration
- Build and integrate ML pipelines with CI/CD using Azure DevOps, GitHub Actions, or Jenkins.
- Automate testing, validation, and deployment for ML artifacts, notebooks, and infrastructure.

4. Feature Engineering & Data Pipelines
- Collaborate with Data Engineering teams to build optimized Delta Lake pipelines (Bronze/Silver/Gold architecture).
- Implement feature engineering workflows and support feature reuse at scale.

5. Monitoring & Governance
- Set up model monitoring for performance, drift, data quality, and lineage.
- Use Databricks-native tools, MLflow metrics, and cloud monitoring services (Azure/AWS).
- Ensure compliance through logging, auditing, permissions, and environment governance.

6. Cross-Functional Collaboration
- Work closely with Data Scientists, Data Engineers, Cloud teams, and Product teams.
- Document workflows, best practices, and reusable MLOps components.

Required Skills & Qualifications
- Strong hands-on experience with Databricks (Workflows, Repos, Jobs, Compute)
- Proficiency with MLflow (Tracking, Model Registry, model deployment)
- Expertise in Delta Lake, PySpark, and distributed data pipelines
- Solid programming skills in Python and SQL
- Experience with CI/CD tools: Azure DevOps, GitHub Actions, Jenkins
- Familiarity with cloud platforms: Azure, AWS, or GCP
- Understanding of containerization (Docker) and orchestration (Kubernetes)
- Background in ML model training, serving, and observability

Preferred Qualifications
- Databricks certifications:
  - Databricks Certified Machine Learning Professional
  - Databricks Certified Data Engineer Associate/Professional
- Experience with Unity Catalog for governance
- Experience implementing feature stores
- Knowledge of ML observability tools (WhyLabs, Monte Carlo, Arize AI, etc.)
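
Illustrative example (for context only)
The Model Lifecycle Management responsibility above centres on promoting model versions with the MLflow Model Registry. The minimal sketch below, in Python, shows what that promotion step can look like; the model name ("demo-churn-model"), the toy training data, and the single-script setup are hypothetical placeholders for illustration, not requirements of this role.

```python
# Minimal, self-contained sketch of MLflow Model Registry promotion.
# All names and data here are hypothetical placeholders.
import mlflow
import mlflow.sklearn
import numpy as np
from mlflow.tracking import MlflowClient
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "demo-churn-model"  # hypothetical registered-model name

# Train and log a trivial model so the example runs end to end.
X = np.random.rand(20, 3)
y = np.array([0, 1] * 10)
with mlflow.start_run():
    model = LogisticRegression().fit(X, y)
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name=MODEL_NAME,  # registers a new model version
    )

# Promote the newly registered version once validation passes.
client = MlflowClient()
version = client.get_latest_versions(MODEL_NAME, stages=["None"])[0].version
client.transition_model_version_stage(MODEL_NAME, version, stage="Staging")
client.transition_model_version_stage(
    MODEL_NAME, version, stage="Production", archive_existing_versions=True
)
```

In a Databricks workspace, the same calls would typically run against the workspace-hosted registry (or Unity Catalog-managed models) and be wired into a Databricks Workflows job or a CI/CD pipeline rather than executed by hand.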