
Job Title: Senior Data Engineer

Company: Giant Eagle GCC

Location: Bangalore, Karnataka

Created: 2026-04-15

Job Type: Full Time


Job Description

Grow. Learn. Thrive. Giant Eagle is where your career can soar high. At Giant Eagle, you are so much more than an employee. You are part of the Family.

About the Company

Since our founding in 1931, Giant Eagle, Inc. has evolved into one of the top 40 largest private corporations in the U.S. and one of the country’s largest food retailers and distributors. With more than 37,000 Team Members and $9.7 billion in revenue, we are committed to investing in people, technology, and data to elevate our customers’ experience across multiple touchpoints. This helps us deliver on our commitment to serving others and improving our communities.

About Giant Eagle Bangalore

The Giant Eagle GCC in Bangalore is our global capability center. Our team of more than 350 members at the GCC enables us to expand internal capabilities in areas such as data analytics, merchandising and eCommerce, quality engineering, and automation to generate insights for faster decision-making and to accelerate our business strategy. Our team in India plays a pivotal role in helping the company transition to new ways of working by redefining the food and grocery shopping experience for over 4.6 million customers.

Job Description Summary

We are looking for a Senior Data Engineer who combines deep technical expertise with strong architectural judgment and leadership influence. This person will design and optimize large-scale data platforms, guide engineering best practices, mentor other data engineers, and help shape the future-state data architecture for analytics, marketing technology, and AI-driven use cases.

The ideal candidate brings hands-on experience with Databricks, Apache Spark, Python, and modern orchestration/ETL tools such as Azure Data Factory (ADF) or Airflow, and is comfortable working with high-volume, complex datasets, improving performance and cost efficiency, and collaborating across product, analytics, and engineering teams to translate business needs into scalable technical solutions.

Experience supporting or enabling agentic AI workflows is highly valued, and familiarity with the MarTech ecosystem (such as customer activation, audience platforms, personalization, campaign systems, or downstream marketing integrations) is a strong plus.

Key Responsibilities

- Lead the design, development, and optimization of scalable data pipelines and data products for analytics, operational, and AI-driven use cases.
- Influence architectural decisions across the data platform, including ingestion, transformation, orchestration, storage, governance, and consumption patterns.
- Partner with engineering leaders, architects, analysts, product managers, and business stakeholders to define technical direction and implementation strategies.
- Build and maintain robust batch and/or near-real-time pipelines using Databricks, Spark, Python, and modern orchestration tools such as ADF or Airflow.
- Drive best practices in data engineering, including modular design, observability, testing, version control, CI/CD, and release management.
- Guide and mentor other data engineers through code reviews, technical design discussions, troubleshooting, and standards adoption.
- Optimize large-scale data workloads for performance, reliability, scalability, and cost efficiency.
- Design and improve data models and storage patterns that support downstream reporting, advanced analytics, personalization, and machine learning/AI applications.
- Contribute to platform modernization efforts, including migration of legacy pipelines or workflows to modern cloud-native and lakehouse architectures.
- Support data governance, lineage, privacy, and secure handling of sensitive data across the pipeline lifecycle.
- Collaborate on AI-enablement initiatives, including data foundations for agentic AI, intelligent automation, recommendation systems, or decision-support capabilities.
- Work closely with cross-functional teams to enable data consumption across analytics, operational systems, and marketing technology platforms.

Required Qualifications

- Bachelor’s degree in Computer Science, Engineering, Mathematics, Information Systems, or a related technical field (or equivalent practical experience).
- 7+ years of experience in data engineering, software engineering, or related technical roles.
- Strong hands-on experience with Databricks in a production environment.
- Strong proficiency in Python and Apache Spark for large-scale data processing.
- Experience with enterprise ETL/orchestration tools such as Azure Data Factory (ADF), Airflow, or similar workflow orchestration platforms.
- Proven experience building and supporting data pipelines for large, complex, high-volume datasets.
- Experience in data optimization, including query tuning, Spark performance tuning, partitioning strategies, job design, cost optimization, and efficient data storage patterns.
- Strong knowledge of modern data architecture concepts, including lakehouse/data lake/warehouse patterns, ELT/ETL frameworks, and scalable data platform design.
- Experience with SQL, relational and analytical data modeling, and schema design for downstream consumption.
- Ability to influence technical direction and make sound architecture recommendations across teams.
- Strong communication skills, with the ability to explain technical tradeoffs to both engineering and business stakeholders.
- Demonstrated experience mentoring or guiding other engineers in a senior or lead capacity.

Preferred Qualifications

- Experience building or supporting agentic AI or AI/ML-enabled data workflows.
- Familiarity with LLM-enablement patterns, vector-ready data preparation, prompt/input data orchestration, or event-driven data support for AI agents.
- Experience in the MarTech ecosystem, including customer data platforms, campaign systems, personalization platforms, audience activation, customer behavior data, or marketing analytics pipelines.
- Experience with cloud-native platform services in Azure and enterprise data governance capabilities.
- Experience with streaming/event-based architectures and APIs.
- Familiarity with DevOps/DataOps practices, including CI/CD, automated testing, infrastructure-as-code, and monitoring.
- Exposure to privacy-sensitive data domains and secure processing patterns for regulated or customer-related data.