Job Description – AI Engineer

Location: Delhi / Mumbai (work from office)
Experience: 5+ years of hands-on experience in AI/ML engineering and data systems.

About the Role
We are looking for a seasoned AI Engineer who can design, build, and scale intelligent systems end to end. This role requires strong depth in machine learning, cloud platforms (GCP and AWS), and large-scale ETL data pipelines. You will work closely with product, data, and engineering teams to convert data into production-grade AI solutions.

Key Responsibilities

AI & Machine Learning
- Design, train, fine-tune, and deploy machine learning, deep learning, and generative AI models.
- Work on NLP, embeddings, recommendation systems, and multimodal AI use cases.
- Build and maintain RAG pipelines, similarity engines, and ranking models.
- Translate business and product requirements into scalable AI architectures.

Data Engineering & ETL Pipelines
- Design and build robust ETL pipelines for high-volume data ingestion and processing.
- Handle structured and unstructured data, including events, text, audio, video, and metadata.
- Ensure data reliability, quality checks, and monitoring across pipelines.
- Create ML-ready datasets, feature stores, and analytics layers.
- Optimize pipelines for cost, latency, and scalability.

Cloud & Infrastructure (GCP + AWS)
- Architect and deploy AI and data workloads on Google Cloud Platform (GCP) and Amazon Web Services (AWS).
- Hands-on experience with:
  - GCP: BigQuery, Dataflow, Pub/Sub, Vertex AI, Cloud Storage, Terraform, Python scripting
  - AWS: S3, EC2, Lambda, SageMaker, Glue, Redshift
- Manage training and inference workloads on CPU and GPU infrastructure.
- Implement secure, scalable, and fault-tolerant cloud architectures.

MLOps & Production Systems
- Build and maintain end-to-end MLOps pipelines for model training, deployment, and monitoring.
- Use tools such as MLflow, Kubeflow, Airflow, and Weights & Biases.
- Containerize models using Docker and orchestrate with Kubernetes.
- Monitor model drift, performance, and retraining cycles in production.

Collaboration & Ownership
- Work closely with product, backend, and analytics teams.
- Communicate AI insights and system designs to technical and non-technical stakeholders.
- Own AI systems from design through production rollout.

Required Skills & Technologies

Programming & Frameworks
- Strong proficiency in Python.
- ML frameworks: PyTorch, TensorFlow, Hugging Face.
- API development: FastAPI, REST, gRPC.

Data & Pipelines
- ETL orchestration: Apache Airflow, Dataflow, AWS Glue.
- Streaming systems: Kafka, Pub/Sub.
- Databases: SQL and NoSQL (PostgreSQL, BigQuery, MongoDB).
- Vector databases: Pinecone, FAISS, Weaviate.

Cloud & DevOps
- Deep production experience with GCP and AWS.
- Infrastructure as Code: Terraform or CloudFormation (preferred).
- CI/CD pipelines and Git-based workflows.

Nice to Have
- Experience with media-tech, content platforms, or consumer-scale products.
- Exposure to LLMs, generative AI, and personalization systems.
- Experience handling millions of events per day.

Why Join Us
- Work on real-world AI systems with clear business impact.
- High-ownership role in a fast-growing, product-led environment.
- Opportunity to architect AI and data systems from the ground up.
- Competitive compensation and long-term growth opportunities.
Job Title
AI Engineer