Company Description

Loyyal is a loyalty and payments innovation company that offers an Enterprise SaaS Suite powered by patented blockchain technology. We focus on disrupting the loyalty industry by delivering efficiency, security, and scalability at a low cost. Our platform is designed to reduce operational complexity and boost revenue for loyalty programs, driving customer engagement and loyalty in a competitive marketplace.

About the Role

We're looking for a seasoned AI Engineer who thrives on solving complex challenges and building intelligent systems that scale. This role is ideal for someone passionate about deep learning, GenAI, and production-grade AI systems. You'll work closely with our data, engineering, and product teams to design, build, and deploy advanced AI models across a variety of real-world use cases.

As a Senior AI Engineer, you'll play a key role in architecting, developing, and optimizing our AI systems, from fine-tuning large language models to building robust MLOps pipelines. This is an opportunity to be part of a high-impact team shaping next-generation AI experiences.

Key Responsibilities

- Design, build, and deploy scalable AI models, with a focus on NLP, LLMs, and Generative AI use cases
- Fine-tune open-source or proprietary LLMs (e.g., LLaMA, Mistral, GPT-J) for domain-specific tasks
- Collaborate with product and engineering teams to integrate AI models into user-facing applications
- Develop MLOps pipelines using tools like MLflow, Kubeflow, or Vertex AI for model versioning, monitoring, and deployment
- Optimize inference performance, memory usage, and cost efficiency in production environments
- Apply prompt engineering, retrieval-augmented generation (RAG), and few-shot techniques where appropriate
- Conduct experiments, A/B testing, and evaluations to continuously improve model accuracy and reliability
- Stay up to date with the latest developments in AI/ML research, especially in the LLM and GenAI domains
- Write clean, modular, and well-documented code and contribute to technical design reviews
- Mentor junior team members and collaborate in agile sprint cycles

Requirements

- 6+ years of experience in machine learning or AI engineering
- 2+ years working with LLMs, Transformers, or Generative AI models
- Proficiency in Python and deep learning frameworks (e.g., PyTorch, TensorFlow, Hugging Face Transformers)
- Experience deploying AI models in production (cloud-native or on-prem)
- Strong grasp of model fine-tuning, quantization, and serving at scale
- Familiarity with MLOps, including experiment tracking, CI/CD, and containerization (Docker, Kubernetes)
- Experience integrating AI with REST APIs, cloud services (AWS/GCP), and vector databases (e.g., Pinecone, Weaviate, FAISS)
- Understanding of ethical AI, data privacy, and fairness in model outcomes
- Strong debugging, problem-solving, and communication skills
- Experience working in agile teams with code review and version control (Git)

Nice to Have

- Hands-on experience with Retrieval-Augmented Generation (RAG) pipelines
- Familiarity with OpenAI, Anthropic, or Cohere APIs and embedding models
- Knowledge of LangChain, LlamaIndex, or Haystack for AI application orchestration
- Experience with streaming data and real-time inference systems
- Understanding of multi-modal models (e.g., combining text, image, and audio inputs)
- Prior experience in a startup, product-focused, or fast-paced R&D environment

What We Offer

- Competitive compensation (base + performance-based bonuses or token equity)
- Fully remote and flexible work culture
- A front-row seat to build next-gen AI experiences in a high-growth environment
- Opportunity to shape AI strategy, tools, and infrastructure from the ground up
- Access to high-end GPU infrastructure and compute resources

How to Apply

Send your resume and a short cover letter highlighting:

- Your experience with LLMs, GenAI, and deployed AI systems
- Links to AI/ML projects, GitHub repos, or research (if public)
- Why you're interested in this role and how you envision contributing
Job Title
Artificial Intelligence (AI) Engineer