Job Title : Senior Data & AI Systems Engineer

Company : Wellborn Technologies

Location : Kolhapur, Maharashtra

Created : 2025-05-24

Job Type : Full Time

Job Description

Senior Data & AI Systems Engineer

Location : Bengaluru & Hyderabad
Experience Level : 5–12 years
Job Type : Full-time
Industry : Technology / AI / Data Engineering

Job Summary

We are seeking an experienced and hands-on Senior Data & AI Systems Engineer to design, develop, and deploy intelligent, real-time data systems and AI applications. This role combines expertise in LLM frameworks, data streaming, and time-series analytics with large-scale NoSQL systems to build next-generation enterprise tools and platforms. You'll be at the forefront of building agentic AI applications, real-time event pipelines, and intelligent data orchestration systems using technologies such as LangGraph, LangChain, Google Bigtable, Kafka/Pub/Sub, and time-series processing frameworks.

Key Responsibilities

- Design and implement scalable AI-powered applications using LangChain and LangGraph, enabling agent orchestration and tool integration.
- Develop stateful and memory-driven LLM workflows for document intelligence, research agents, or enterprise support automation.
- Architect and maintain real-time data pipelines using Kafka or Google Pub/Sub.
- Build and optimize high-throughput NoSQL data models using Google Bigtable for time-sensitive applications.
- Work with time-series data for monitoring, forecasting, anomaly detection, and predictive analytics use cases.
- Collaborate with cross-functional teams, including ML engineers, product managers, and DevOps, for end-to-end solution delivery.
- Ensure data reliability, scalability, and low-latency performance in distributed systems.
- Mentor junior engineers and contribute to architectural decisions and best practices.

Required Skills and Experience

- 5–12 years of experience in backend, data, or AI systems engineering.
- Strong proficiency in Python (preferred), Go, or Java.
- Deep understanding and hands-on experience with:
  - LangChain and LangGraph for building LLM applications and agents.
  - Pub/Sub (Google Cloud) or Apache Kafka for real-time messaging and stream processing.
  - Google Bigtable or similar NoSQL systems for high-performance data storage.
  - Time-series data analysis using tools such as InfluxDB, Prometheus, or custom solutions.
- Experience building scalable, distributed systems on GCP, AWS, or Azure.
- Solid knowledge of data modeling, microservices, and event-driven architectures.
- Familiarity with MLOps pipelines, vector databases, or RAG pipelines is a plus.

Preferred Qualifications

- Experience working with LLMs (OpenAI, Anthropic, Cohere, etc.) and vector search tools (e.g., Pinecone, FAISS, Weaviate).
- Exposure to containerization and orchestration tools (e.g., Docker, Kubernetes).
- Understanding of observability tools (e.g., Grafana, Datadog) for time-series and metric monitoring.
- Experience with AI agent frameworks or autonomous task-planning systems.

What We Offer

- Opportunity to work on cutting-edge AI systems.
- A collaborative environment with a deep-tech focus.
- Flexible work options and competitive compensation.
- Access to high-impact projects in enterprise AI and data systems.