Data Engineer - BigQuery, GCP

About the Company
BrightEdge is a global leader in AI-powered enterprise performance marketing and SEO solutions. We’re building scalable, intelligent, cloud-native data platforms to power real-time insights and decision-making across our customer ecosystem. As part of our growth, we’re hiring experienced data engineers to join our high-impact Professional Services team.

About the Role
As an SDE III – Data Engineer, you will play a key role in designing, building, and optimizing scalable data solutions on Google Cloud Platform (GCP) using BigQuery, Python, and modern orchestration tools. You will ingest billions of rows of structured and semi-structured data while ensuring performance, reliability, and automation across the data stack.

Responsibilities:
- Design, build, and maintain high-performance data pipelines for batch and streaming ingestion across diverse sources (APIs, Pub/Sub, external databases, file stores).
- Optimize BigQuery queries and architecture for cost efficiency, scalability, and reliability.
- Build modular, reusable Python-based transformation logic for data wrangling, validation, and loading.
- Architect and manage data orchestration workflows using Airflow, Cloud Composer, or similar tools.
- Implement CI/CD, testing, and observability for data pipelines using tools such as Terraform, GitHub Actions, and Datadog.
- Partner with analytics, ML, and product engineering teams to design scalable data models that power reporting and AI/ML use cases.
- Own SLAs and quality metrics for mission-critical pipelines that support internal analytics and external customer-facing platforms.

Qualifications:
- 6+ years of hands-on data engineering experience, including at least 3 years in the GCP ecosystem.
- Strong command of BigQuery, including partitioning, clustering, query tuning, and data lifecycle management.
- Advanced proficiency in Python for building production-grade ETL/ELT pipelines and data tooling.
- Solid understanding of large semi-structured JSON datasets, such as web crawler output.
- Experience with data orchestration frameworks (Airflow, Cloud Composer, or Dagster).
- Deep knowledge of performance optimization across query logic, storage formats (Parquet, Avro), and compute resources.
- Solid understanding of data architecture, including dimensional modeling, change data capture (CDC), and lakehouse patterns.
- Experience with infrastructure as code (Terraform or equivalent) for managing GCP resources.

Preferred Skills:
- Familiarity with dbt, Looker, or data catalog tools.
- Experience with Kafka, Pub/Sub, or event-driven data pipelines.
- Exposure to data quality, lineage, and governance tools (e.g., Great Expectations, OpenLineage).
- Understanding of cost governance and resource optimization in a cloud-native data stack.