
Job Title: Data Engineer – Process Intelligence (Contract)

Company: Blue Boy Consulting LLP

Location: Patna, Bihar

Created: 2025-08-01

Job Type: Full Time


Job Description

Location: Remote, preferably Bangalore, with occasional travel for collaboration and client meetings
Engagement Type: Contract (initial 3 months, with potential for extension based on project needs and fitment)

About Optron
At Optron (a venture of Blue Boy Consulting LLP), we are at the forefront of leveraging cutting-edge AI to transform how enterprises interact with and derive insights from their data. We believe in building intelligent, autonomous systems that drive unprecedented efficiency and innovation for our clients. Our culture is one of continuous learning, fearless exploration, and solving complex, real-world challenges with elegant, intelligent solutions. We are a lean, agile team passionate about pushing the boundaries of what's possible with AI. Our leadership team combines extensive global top-tier strategy consulting experience with deep technical acumen. This unique blend means we don't just build technology; we build solutions that truly impact global businesses, and you'll have the freedom to shape the future direction of the company and its offerings.

The Opportunity: Accelerate Enterprise Transformation with Data & Process Mining
Are you a bright, driven data engineer with a passion for crafting robust data solutions and a knack for quickly mastering new technologies? Do you thrive in environments where your direct impact is tangible and your innovative ideas can genuinely shape the future of enterprise data strategy? If so, we're looking for you! We're not just seeking a data engineer; we're seeking a highly intelligent, exceptionally quick-learning problem-solver eager to delve into the intricate world of enterprise processes. This role is pivotal in building accelerators and tools that empower our consultants to deliver best-in-class process mining and intelligent process execution solutions for our global enterprise clients. You'll bridge the gap between raw process data and actionable insights by building robust data models that automate the discovery, analysis, and optimization of complex business processes. This is not about maintaining legacy systems; it's about pioneering the next generation of data interaction and automation through intelligent data models. We are looking for a smart, foundational developer who thrives on intellectual challenge, possesses an insatiable curiosity, and is eager to dive deep into sophisticated data environments. We value raw talent, a sharp mind, and the ability to rapidly acquire and apply new knowledge. If you're a problem-solver at heart, passionate about data, and want to build solutions that redefine industry standards, this is your chance to make a significant impact.

What You'll Be Doing (Key Responsibilities & Goals)
As a Data Engineer, you'll drive the data backbone of our process intelligence initiatives, specifically:
Architecting Process Mining Data Models: Designing, developing, and optimizing highly efficient data models to capture and prepare event data for process mining analysis. This involves deep engagement with complex datasets from critical enterprise IT systems like SAP ERP, SAP S/4HANA, Salesforce, and other bespoke client applications.
Databricks & PySpark Development: Leveraging your experience (2-5 years preferred) with Databricks and PySpark (with occasional Spark SQL) to create scalable, robust, and efficient data ingestion and transformation pipelines. This includes working with core Databricks features such as Delta Lake and optimizing data processing through techniques like Z-ordering and partitioning.
End-to-End Data Pipeline Ownership: Implementing core data engineering concepts such as Change Data Capture (CDC) to build real-time data ingestion and transformation pipelines from various sources (a brief illustrative sketch follows this list).
Storage Management: Working with data storage solutions such as Azure Data Lake, Unity Catalog, and Delta Lake for efficient data storage.
Cloud & DevOps Setup: Taking ownership of setting up cloud environments, establishing robust CI/CD pipelines, and managing code repositories to ensure seamless, modular, and version-controlled development. This includes leveraging Git / Databricks Repos and Databricks Workflows for streamlined development and orchestration.
Data Governance & Security: Implementing and maintaining data governance, privacy, and security best practices within the Databricks environment to handle sensitive enterprise data.
Synthetic Data Generation: Developing sophisticated synthetic training datasets that accurately emulate the complex data structures, event logs, and behaviours found within diverse enterprise IT systems, crucial for our analytical models.
Staying Updated: Keeping up to date with the latest Databricks features, best practices, and industry trends to continuously enhance our solutions.
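
To give candidates a concrete flavour of the pipeline work described above, the sketch below shows a minimal CDC-style upsert of process event data into a partitioned Delta table using PySpark. It is purely illustrative rather than a description of our actual codebase: the paths, table, and column names (order_events, case_id, activity, event_ts) are hypothetical, and it assumes a Databricks runtime (or a local Spark session with the delta-spark package) where Delta Lake and OPTIMIZE ... ZORDER BY are available.

# Illustrative only: CDC-style upsert of order events into a partitioned
# Delta table. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Incremental change feed landed by an upstream extractor (hypothetical path).
changes = (
    spark.read.format("json")
    .load("/mnt/raw/sap/order_events/")          # e.g. CDC extracts from SAP
    .withColumn("event_date", F.to_date("event_ts"))
)

target_path = "/mnt/curated/process_mining/order_events"

if DeltaTable.isDeltaTable(spark, target_path):
    # Merge (upsert) the changes into the existing Delta table: update rows
    # that already exist for a case/activity/timestamp, insert the new ones.
    target = DeltaTable.forPath(spark, target_path)
    (
        target.alias("t")
        .merge(
            changes.alias("s"),
            "t.case_id = s.case_id AND t.activity = s.activity AND t.event_ts = s.event_ts",
        )
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First load: write the table partitioned by event_date so downstream
    # process-mining queries can prune partitions efficiently.
    (
        changes.write.format("delta")
        .partitionBy("event_date")
        .save(target_path)
    )

# Optional maintenance: Z-order frequently filtered columns to co-locate data.
spark.sql(f"OPTIMIZE delta.`{target_path}` ZORDER BY (case_id, activity)")

In practice, the merge keys, partitioning column, and Z-order columns would be chosen to match the structure of the client system's event log and the dominant query patterns.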

What We're Looking For (Required & Preferred Qualifications)
We prioritize a sharp mind and a strong foundation. While specific experience is valuable, your ability to learn and adapt quickly is paramount.
Educational Background: A Bachelor of Engineering (B.E.) / Bachelor of Technology (B.Tech) in Computer Science, Information Technology, or a closely related engineering discipline is preferred.
Core Data Engineering Acumen: Demonstrated understanding of fundamental data engineering principles, including data warehousing, ETL/ELT methodologies, data quality, and data governance.
Databricks & Spark Exposure: 2-5 years of practical experience with Databricks, with a focus on building pipelines and data solutions using PySpark.
Conceptual Depth: A clear grasp of concepts like CDC, data pipeline creation, efficient data ingestion, optimization strategies, efficient cloud cost management, and modular code development.
Problem-Solving & Adaptability: A proven track record of tackling complex technical challenges with innovative solutions and a genuine eagerness to quickly master new tools and paradigms.
Enterprise Data Context (Preferred): While not mandatory, prior exposure to or understanding of data structures and IT workloads within large enterprise environments (e.g., SAP, Salesforce) would be advantageous.

Why Join Us?
Join a team where your contributions are celebrated and your growth is prioritized:
Groundbreaking Work: Be at the forefront of data innovation, building solutions that don't just optimize but fundamentally transform how enterprises operate.
Intellectual Challenge: Work on complex, unsolved problems that will stretch your abilities and foster rapid personal and professional growth.
Learning-Centric Environment & 20% Time: We deeply value continuous learning. You'll receive 20% dedicated time to explore new technologies, learn new skills, or pursue personal pet projects that spark your interest and contribute to your growth.
Global Exposure: Gain invaluable experience working with diverse global clients and collaborating with colleagues from various backgrounds, expanding your professional network and worldview.
High Impact & Shaping the Future: Your contributions will directly influence our clients' success and, critically, you'll have the freedom to shape the future direction of the company, contributing directly to product strategy, the technical roadmap, and innovative service offerings, working closely with our visionary IIM alumni leadership.
Autonomy & Trust: We trust our team members to take ownership, innovate, and deliver high-quality results.
Collaborative & Supportive Team: Work alongside other bright, passionate individuals who are eager to learn and build together.
Competitive Compensation: We offer attractive contractor rates commensurate with your skills and potential.

Ready to Redefine Enterprise Intelligence with Data?
If you're a brilliant problem-solver with a strong technical foundation and a burning desire to master the art of data engineering for enterprise transformation, we encourage you to apply. This is more than a contract; it's an opportunity to build something truly revolutionary.

To Apply: Click on Easy Apply and submit your latest resume. Ensure at least one key relevant project is described in detail on your resume.