Job Description – AI Engineer

Location: Delhi / Mumbai (work from office)
Experience: 5+ years of hands-on experience in AI/ML engineering and data systems.

About the Role
We are looking for a seasoned AI Engineer who can design, build, and scale intelligent systems end to end. This role requires strong depth in machine learning, cloud platforms (GCP and AWS), and large-scale ETL data pipelines. You will work closely with product, data, and engineering teams to turn data into production-grade AI solutions.

Key Responsibilities

AI & Machine Learning
- Design, train, fine-tune, and deploy machine learning, deep learning, and generative AI models.
- Work on NLP, embeddings, recommendation systems, and multimodal AI use cases.
- Build and maintain RAG pipelines, similarity engines, and ranking models.
- Translate business and product requirements into scalable AI architectures.

Data Engineering & ETL Pipelines
- Design and build robust ETL pipelines for high-volume data ingestion and processing.
- Handle structured and unstructured data, including events, text, audio, video, and metadata.
- Ensure data reliability, quality checks, and monitoring across pipelines.
- Create ML-ready datasets, feature stores, and analytics layers.
- Optimize pipelines for cost, latency, and scalability.

Cloud & Infrastructure (GCP + AWS)
- Architect and deploy AI and data workloads on Google Cloud Platform (GCP) and Amazon Web Services (AWS).
- Hands-on experience with:
  - GCP: BigQuery, Dataflow, Pub/Sub, Vertex AI, Cloud Storage, Terraform, Python scripting
  - AWS: S3, EC2, Lambda, SageMaker, Glue, Redshift
- Manage training and inference workloads on CPU and GPU infrastructure.
- Implement secure, scalable, and fault-tolerant cloud architectures.

MLOps & Production Systems
- Build and maintain end-to-end MLOps pipelines for model training, deployment, and monitoring.
- Use tools such as MLflow, Kubeflow, Airflow, and Weights & Biases.
- Containerize models with Docker and orchestrate them with Kubernetes.
- Monitor model drift, performance, and retraining cycles in production.

Collaboration & Ownership
- Work closely with product, backend, and analytics teams.
- Communicate AI insights and system designs to technical and non-technical stakeholders.
- Own AI systems from design through production rollout.

Required Skills & Technologies

Programming & Frameworks
- Strong proficiency in Python.
- ML frameworks: PyTorch, TensorFlow, Hugging Face.
- API development: FastAPI, REST, gRPC.

Data & Pipelines
- ETL orchestration: Apache Airflow, Dataflow, AWS Glue.
- Streaming systems: Kafka, Pub/Sub.
- Databases: SQL and NoSQL (PostgreSQL, BigQuery, MongoDB).
- Vector databases: Pinecone, FAISS, Weaviate.

Cloud & DevOps
- Deep production experience with GCP and AWS.
- Infrastructure as Code: Terraform or CloudFormation (preferred).
- CI/CD pipelines and Git-based workflows.

Nice to Have
- Experience with media-tech, content platforms, or consumer-scale products.
- Exposure to LLMs, generative AI, and personalization systems.
- Experience handling millions of events per day.

Why Join Us
- Work on real-world AI systems with clear business impact.
- High-ownership role in a fast-growing, product-led environment.
- Opportunity to architect AI and data systems from the ground up.
- Competitive compensation and long-term growth opportunities.
Job Title
AI Engineer