Job Title : Cloud Data Engineer - Scala / Databricks


Company : Bison Global Technology Search


Location : Ghaziabad, Uttar Pradesh


Created : 2025-05-15


Job Type : Full Time


Job Description

Cloud Data Engineer - Scala / Databricks is required. 100% remote working. IMMEDIATE JOINER REQUIRED.

A Cloud Data Engineer is required ASAP by our global, market-leading IT consultancy client! With a strong background in AWS, Azure, and GCP, the ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow, as well as other ETL tools such as Informatica and SAP Data Intelligence.

Key responsibilities

Your main responsibility will be designing, implementing, and maintaining robust data pipelines and building scalable data lakes, broken down into the following:

Design and Development:
- Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.).
- Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP).
- Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery, and Azure Synapse.
- Implement ETL processes using tools such as Informatica and SAP Data Intelligence.
- Develop and optimize data processing jobs using Spark Scala (an illustrative sketch follows this description).

Data Integration and Management:
- Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake.
- Ensure data quality and integrity through rigorous testing and validation.
- Perform data extraction from SAP or ERP systems when necessary.

Performance Optimization:
- Monitor and optimize the performance of data pipelines and ETL processes.
- Implement best practices for data management, including data governance, security, and compliance.

Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Collaborate with cross-functional teams to design and implement data solutions that meet business needs.

Documentation and Maintenance:
- Document technical solutions, processes, and workflows.
- Maintain and troubleshoot existing ETL pipelines and data integrations.

Essential previous experience must include
- 7+ years of experience as a Data Engineer in a similar role.
- A minimum of 3 years of experience working specifically with Databricks on AWS.
- MUST HAVE: strong hands-on coding and platform development in Apache Spark / Scala / Databricks.
- Experience with data extraction from SAP or ERP systems.
- Experience with data platforms such as Amazon Redshift, Snowflake, and Synapse.
- Proficiency in SQL and query optimization techniques.
- Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts.
- Knowledge of data governance, security, and compliance best practices.
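
For illustration only, the following is a minimal sketch of the kind of Spark Scala batch job referenced above: ingesting raw files from cloud storage, applying basic cleansing, and loading the result into a lake table. It is not part of the client's specification; the storage path, column names, and table name are hypothetical.

// Illustrative sketch only: a minimal Spark Scala ETL job (ingest, transform, load).
// All paths, column names, and the target table name are hypothetical examples.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtlJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Ingest: read raw CSV files landed in cloud object storage (hypothetical path).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://example-landing-zone/orders/")

    // Transform: basic cleansing and typing before loading into the data lake.
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_ts", to_timestamp(col("order_ts")))
      .withColumn("amount", col("amount").cast("decimal(18,2)"))
      .dropDuplicates("order_id")

    // Load: append into a Delta table (Databricks-style lakehouse target).
    cleaned.write
      .format("delta")
      .mode("append")
      .saveAsTable("analytics.orders_clean")

    spark.stop()
  }
}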