
Job Title : Kafka Engineer


Company : Response Informatics


Location : Panipat, Haryana


Created : 2025-05-25


Job Type : Full Time


Job Description

Job Title: Kafka Engineer
Location: Offshore (Remote)
Experience: 4+ Years
Working Hours: 9 hours, with a 2-hour overlap with US working hours
Employment Type: Permanent with TechM

Project Context:
The organization is building a Near Real-Time (NRT) data service offering from its ODP (Data Platform). This involves:
- NRT Outbound: Replicating source data from the ODP data hub to outbound systems in near real-time.
- NRT Inbound: Replicating source data into ODP in near real-time.
HVR is the primary tool, with Kafka/MSK playing a supporting role. The project includes design pattern development, POCs, and policy creation.

Job Summary:
The Kafka Engineer will play a crucial role in designing, implementing, and supporting the Kafka infrastructure required for near real-time data replication within the ODP environment. This individual will work closely with the data engineering team to integrate Kafka with HVR and other systems, enabling efficient and reliable data streaming for both inbound and outbound data flows.

Responsibilities:

Design and Implementation:
- Design, develop, and implement Kafka-based solutions for NRT data replication (inbound and outbound).
- Develop modular solution patterns for handling source/target differences.
- Build and configure Kafka clusters, topics, producers, and consumers (an illustrative producer sketch follows the Responsibilities section).
- Integrate Kafka with HVR for data capture and delivery.
- Implement data serialization formats (e.g., Avro, JSON).

Optimization and Performance:
- Tune Kafka for high throughput and low latency.
- Monitor Kafka cluster health and performance metrics.
- Identify and resolve Kafka-related issues, ensuring optimal system performance.
- Optimize Kafka configurations at the producer, broker, and consumer levels.

Integration and Collaboration:
- Collaborate with data engineers, application developers, and other stakeholders to integrate Kafka into the overall data architecture.
- Work with various data sources (e.g., Oracle, RDS, APIs, files) and destinations (e.g., Kafka, Redshift, databases).
- Facilitate the synchronization of data inflows and outflows.

Documentation and Standards:
- Create and maintain technical documentation, including design patterns, usability guides, and build checklists.
- Contribute to the development of data standards and best practices.
- Document Kafka environments, configurations, and processes.

POC and Pilot Support:
- Participate in POC execution, including tuning and due diligence.
- Support pilot programs, including execution, results reporting, and audit metadata collection.
- Assist in productionizing use case POCs.

Security and Governance:
- Implement security measures for Kafka clusters, including access control and data encryption.
- Adhere to data access guidelines and guardrails.
- Contribute to cost and capacity planning, including storage, pricing tiers, and cost control.

Automation and DevOps:
- Implement CI/CD pipelines for Kafka deployments.
- Automate Kafka cluster operations.
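For illustration, a minimal sketch of the producer side of the outbound flow described above, assuming a plain Java client. The broker address, topic name ("odp.nrt.outbound"), key, and payload are hypothetical placeholders, and the tuning values (linger.ms, batch.size, compression) are examples of the producer-level settings this role would adjust, not prescribed values.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NrtOutboundProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability: wait for all in-sync replicas before acknowledging.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Throughput: batch records for up to 5 ms and compress each batch.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        // Safety: idempotence avoids duplicate writes on retry.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("odp.nrt.outbound", "order-42", "{\"status\":\"shipped\"}");
            // Asynchronous send; the callback reports where the record landed.
            producer.send(record, (metadata, e) -> {
                if (e != null) e.printStackTrace();
                else System.out.printf("wrote to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
            });
        }
    }
}

The acks/linger/batch trade-off is exactly the producer-level tuning the Optimization and Performance duties refer to: larger batches and longer linger raise throughput at some cost in latency.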
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience working with Apache Kafka in a production environment.
- Strong understanding of Kafka architecture, including brokers, topics, partitions, and replication.
- Experience with Kafka Connect, Kafka Streams, and Schema Registry.
- Proficiency in Java or Scala for Kafka development.
- Experience with distributed systems, data streaming, and messaging patterns.
- Solid understanding of data replication concepts.
- Experience with various data sources and destinations, including databases (e.g., Oracle, PostgreSQL), cloud storage (e.g., AWS S3), and APIs (e.g., Salesforce).
- Experience with HVR is highly desirable.
- Familiarity with cloud platforms (e.g., AWS, Azure) and related services.
- Experience with DevOps practices, including CI/CD, monitoring, and logging.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills.

Preferred Skills:
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Knowledge of data serialization formats (e.g., Avro, Protobuf).
- Experience with monitoring tools (e.g., Prometheus, Grafana).
- Knowledge of security best practices for Kafka.

Key Skills and Experience:
Based on the project description, the ideal candidate should possess the following:
- Apache Kafka: Deep understanding of Kafka architecture, configuration, and administration.
- HVR: Experience with HVR for data replication.
- Data Integration: Ability to integrate Kafka with various data sources and destinations.
- Near Real-Time Data Processing: Experience building and optimizing systems for NRT data delivery.
- Distributed Systems: Knowledge of distributed system concepts, including fault tolerance, consistency, and scalability.
- Data Modeling: Familiarity with data modeling principles.
- Performance Tuning: Ability to optimize Kafka for high throughput and low latency.
- Cloud Platforms: Experience with cloud-based Kafka deployments (AWS, Azure).
- DevOps: Experience with CI/CD pipelines and automation for Kafka.
- Problem-Solving: Strong analytical and troubleshooting skills.
- Communication: Effective communication and collaboration abilities.
- Security: Knowledge of Kafka security best practices.
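As a counterpart to the producer sketch above, a minimal consumer for the inbound flow, again illustrative only: the topic ("odp.nrt.inbound"), group id, and broker address are hypothetical, and manual offset commits are shown as one common way to get at-least-once delivery in a replication pipeline.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NrtInboundConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "odp-nrt-inbound");       // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Start from the earliest offset when no committed offset exists.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Commit offsets manually, only after records are processed.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("odp.nrt.inbound"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // commit after processing: at-least-once delivery
            }
        }
    }
}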