Job Title : Head of Artificial Intelligence (Agentic AI Platform)

Company : NorthStar HR Consultants

Location : Pune, Maharashtra

Created : 2026-01-31

Job Type : Full Time

Job Description

Role - Director / Head of Agentic AI Platform & Digital Workers
Job Location - Pune, Maharashtra
Experience - 12+ years
Salary Budget - INR 60 lacs

Our client is hiring a Senior Engineering Leader to build from scratch, own, and scale an enterprise-grade Agentic AI Platform ("Agentic AI Fabric") and a portfolio of production Digital Workers (agents) that execute real workflows safely across enterprise systems.

What this leader will own

1) Build and own the platform (Agentic AI Fabric)
- Agent runtime/orchestration patterns for multi-step workflows (state, retries, queues/backpressure, approvals/HITL)
- Tool/action gateway and registry (governed connectors, allow lists, parameter validation, rate limits, rollback)
- Knowledge/RAG layer (ingestion pipelines, provenance, access controls, grounding and traceability)
- LLMOps/evaluations (versioned prompts/tools/policies, regression gates, safe rollouts, rollback)
- Observability and cost governance (telemetry, budgets/alerts, cost-per-transaction, SLOs/runbooks)
- Multi-cloud-ready architecture patterns (AWS first; extend to Azure later)

2) Build and scale agents as products
- Deliver a production-grade MVP agent and expand a portfolio of Digital Workers
- Establish a "Next Agent Kit" to accelerate subsequent agents (templates, tool patterns, eval harness, rollout checklists)

3) Enterprise agent security baseline (must-have)
This leader must set and enforce controls for:
- PII/PHI boundary controls and safe logging/data minimization
- Prompt/tool injection defenses
- Knowledge base poisoning protections (provenance, approvals/versioning, trust filters)
- Tool supply-chain integrity (registry governance, version pinning, scanning/SBOM)
- Privilege escalation controls and step-up approvals / separation of duties

Why this is a senior role

This is not an innovation lab position. It requires:
- shipping production systems under enterprise constraints,
- handling security/compliance scrutiny,
- orchestrating cross-functional teams (engineering, security, SMEs, delivery),
- and building something reusable and scalable (platform + agents).

Ideal candidate profile (what "good" looks like)

Must have
- 12-18+ years in engineering/product/platform development
- 5+ years leading teams and complex delivery programs
- Track record building enterprise products/platforms (not just projects): multi-tenant SaaS, internal developer platforms, automation/workflow platforms, data/AI platforms
- Strong cloud architecture experience (AWS and/or Azure), including IAM/security, networking, and observability
- Hands-on familiarity with LLM applications (RAG + tool calling + evals/monitoring), and the realities of production deployment

Strongly preferred
- Agent frameworks & orchestration: hands-on experience with agent/workflow orchestration frameworks such as LangGraph, CrewAI, AutoGen, and/or similar (e.g., LlamaIndex agents), including multi-step tool calling and human-in-the-loop patterns.
- Automation/workflow platforms: experience with workflow automation / integration platforms such as n8n (or comparable low-code workflow engines) to rapidly compose actions and integrate enterprise systems.
- Tool integration standards (optional): familiarity with MCP-style tool servers/tool gateways and agent interoperability concepts (A2A-ready patterns), even if not fully implemented end-to-end.
- Has built or scaled a platform where customers/users rely on it daily (SLOs, on-call, release governance)
- Has shipped AI-assisted or autonomous workflow systems that integrate with enterprise apps (ITSM/CRM/ERP)
- Has worked in regulated contexts (BFSI/healthcare) or delivered audit-grade systems

Nice-to-have
- Familiarity with agent frameworks, interoperability concepts (A2A/MCP patterns), and secure tool execution
- Experience packaging offerings / enabling GTM narratives for enterprise stakeholders
- Inference engineering and cost/performance optimization
- Experience building and operationalizing evaluation frameworks for LLM/agent workflows