Job Title


MLOps Developer


Company : BIG Language Solutions


Location : Tirunelveli, Tamil Nadu


Created : 2026-02-08


Job Type : Full Time


Job Description

Role: MLOps Developer
Location: Hybrid / Remote
Team: AI & Innovation
Reports to: VP of Artificial Intelligence
Compensation: 28-32 LPA (based on experience and interview)

About BIG Language Solutions

BIG Language Solutions is a global Language Service Provider (LSP) delivering world-class translation and interpretation services for clients across industries. We combine human linguistic expertise with cutting-edge AI to make multilingual communication faster, more accurate, and more accessible. Our innovation spans both written and spoken language solutions, helping organizations break barriers in real time and at scale.

Job Summary

We are looking for an MLOps Developer to own the deployment, scaling, and reliability of machine learning systems in production. You will be responsible for building containerized ML services, operating CI/CD pipelines, and running ML workloads on Azure Kubernetes Service (AKS). In this role, you'll work closely with ML engineers and platform teams to take models from experimentation to high-performance, observable, and scalable production systems.

This is a hands-on role for someone who enjoys working at the intersection of machine learning, cloud infrastructure, and distributed systems.

Must-Have Skills

Docker & Containerization
- Strong experience writing and maintaining Dockerfiles for ML training and inference workloads

CI/CD Pipelines
- Hands-on experience building and operating CI/CD pipelines for ML systems (model build, test, deploy, rollback)

Azure Kubernetes Service (AKS)
- Production experience deploying, scaling, and operating ML services on AKS, including monitoring and troubleshooting

MLOps & Model Lifecycle
- Experience operationalizing ML models end to end: training → deployment → monitoring
- Strong understanding of model versioning, promotion, and rollback

Model Serving & Inference
- Experience with production inference pipelines and model serving
- Hands-on experience with NVIDIA Triton Inference Server
- Familiarity with ONNX, TensorRT, PyTorch, or TensorFlow

Python & Systems
- Advanced Python skills for production ML systems
- Experience debugging performance issues across CPU/GPU, memory, and distributed systems

Nice-to-Have
- Kubernetes tooling (Helm, GitOps)
- CUDA / TensorRT optimization
- Feature stores or vector databases
- Streaming systems (Kafka, Redis, RabbitMQ)

What We're Looking For
- Owns ML systems in production end to end
- Strong debugging and problem-solving mindset
- Comfortable working with ML, platform, and product teams
- Experience taking ML systems from prototype to production at scale

Think global. Think BIG. Visit us: