Good day,

We have an opportunity for a Data Engineer.

Job Role: Data Engineer
Job Location: Synechron (Bengaluru BCIT)
Experience: 7 to 15 years
Notice Period: Immediate joiner to 15 days

About Synechron:
Synechron is a global technology consulting firm that helps leading organizations accelerate digital transformation through innovation, expertise, and agility. With more than 16,500 professionals across around 60 offices in over 20 countries, we combine deep industry knowledge with advanced capabilities in AI, cloud, cybersecurity, and data engineering. Our regional teams, supported by strategic delivery centers, provide scalable, cost-efficient solutions tailored to local markets. Through our award-winning Synechron FinLabs accelerators and strategic partnerships with AWS, Microsoft, Databricks, Salesforce, and ServiceNow, we enable clients to innovate fast and lead with confidence. For more information on the company, please visit our website or LinkedIn community.

Job Description

Job Title: Data Engineer – AWS + Hadoop
Location: Bangalore
Experience: 7+ Years

About the Role
We're looking for a seasoned Data Engineer with hands-on expertise in AWS data services and the Hadoop ecosystem.
You will design, build, and optimize batch/streaming data pipelines, enable reliable data ingestion and processing, and support analytics, ML, and BI use cases at scale.

Key Responsibilities
- Design and implement scalable ETL/ELT pipelines for batch and streaming workloads.
- Build data ingestion frameworks using Kafka/Kinesis, and process data with Spark (PySpark/Scala).
- Develop and optimize data lakes and data warehouses on AWS (S3, Glue, EMR, Athena, Redshift).
- Manage and tune Hadoop ecosystem components (HDFS, Hive, Spark, Oozie/Airflow, Sqoop).
- Model data (star/snowflake schemas); manage schemas, partitioning, and metadata; ensure data quality (DQ checks).
- Implement data governance, security, and access controls (IAM, Lake Formation, encryption, key management).
- Set up orchestration and CI/CD for data jobs (Airflow/AWS Step Functions, Jenkins/GitHub Actions).
- Monitor pipelines and optimize cost, performance, and reliability (CloudWatch, logs, metrics).
- Collaborate with Analytics/ML/BI teams; provide high-quality curated datasets and APIs/views.
- Document solutions, conduct code reviews, and enforce engineering best practices.

Required Skills & Qualifications
- 7+ years in Data Engineering with large-scale distributed data systems.
- Strong experience with the AWS data stack: S3, Glue, EMR, Athena, Lambda, Redshift, IAM, CloudWatch.
- Hands-on with the Hadoop ecosystem: HDFS, Hive, Spark (PySpark/Scala), Kafka, Oozie/Airflow.
- Expertise in SQL (complex queries, performance tuning) and data modeling.
- Practical knowledge of streaming (Kafka/Kinesis, Spark Streaming/Structured Streaming).
- Experience with Python or Scala for data pipelines; shell scripting.
- Familiarity with orchestration (Airflow/AWS Step Functions) and CI/CD for data jobs.
- Strong understanding of security and governance (encryption, PII handling, RBAC, Lake Formation).
- Proficiency with version control (Git) and containers (Docker) for reproducible jobs.
- Excellent problem-solving, communication, and collaboration skills.

To expedite the application
process, please share the following details at your earliest convenience:
- Tentative date of joining (if selected)
- Current salary
- Expected salary
- Total experience
- Relevant experience
- Official email confirmation of notice period or last working day
- Primary skills (hands-on)
- Secondary skills
- Reason for change
- Current location
- Preferred location (e.g., Pune, Bengaluru, etc.)

For more information, contact: 9322922764