
Job Title : Databricks


Company : Labcorp


Location : Bangalore, Karnataka


Created : 2025-05-03


Job Type : Full Time


Job Description

The ideal candidate will be responsible for designing and developing high-performance, secure Databricks solutions using Python, Spark, PySpark, Delta tables, UDP and Kafka, and for designing and implementing testable, scalable code.

Responsibilities of the Role

The Senior Databricks Developer will be responsible for implementing and maintaining solutions on the AWS Databricks platform. You will coordinate data requests from the various teams, reviewing and approving efficient approaches to ingesting, extracting, transforming and maintaining data in multi-hop models. In addition, you will work with team members to mentor other developers and grow their knowledge and expertise. You will be working in a fast-paced, high-volume processing environment where quality and attention to detail are vital.

Primary Responsibilities:

- Design and develop high-performance, secure Databricks solutions using Python, Spark, PySpark, Delta tables, UDP and Kafka
- Create high-quality technical documents, including data mappings, data processes and operational support guides
- Translate business requirements into data model designs and technical solutions
- Develop data-ingest pipelines using Python, Spark and PySpark to support near real-time and batch ingestion processes (see the illustrative sketch after this posting)
- Maintain data lake and pipeline processes, including troubleshooting issues, tuning performance and improving data quality
- Work closely with technical leaders, product managers and the reporting team to gather functional and system requirements
- Perform effectively in a fast-paced, agile development environment

Required Skills/Experience

- Bachelor's degree in computer science, information systems or an equivalent field
- Must have 5+ years of experience developing applications using Python, Spark, PySpark, Java, JUnit, Maven and their ecosystem
- Must have 4+ years of hands-on experience with AWS Databricks and related technologies such as MapReduce, Spark, Hive, Parquet and Avro
- Good experience with end-to-end implementation of DW/BI projects, especially data warehouse and data mart development
- Extensive hands-on experience with RDD, DataFrame and Dataset operations in Spark 3.x
- Experience designing and implementing ETL/ELT frameworks for complex warehouses/marts
- Knowledge of large data sets and experience with performance tuning and troubleshooting
- AWS cloud analytics experience (Lambda, Athena, S3, EMR, Redshift, Redshift Spectrum) is a plus
- Must have RDBMS experience: Microsoft SQL Server, Oracle, MySQL
- Familiarity with Linux OS
- Understanding of data architecture, replication and administration
- Experience with real-time data ingestion using any streaming tool
- Strong debugging skills for troubleshooting production issues
- Comfortable working in a team environment
- Hands-on experience with shell scripting, Java and SQL
- Ability to identify problems and effectively communicate solutions to peers and management
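For candidates unfamiliar with the "multi-hop" ingestion pattern the posting refers to, below is a minimal PySpark sketch of a bronze-to-silver Delta pipeline fed from Kafka. It is an illustration only: the broker address, topic name, payload schema and storage paths are invented placeholders, not details from this role, and Databricks clusters already bundle the Kafka and Delta connectors the code relies on.

```python
# Minimal sketch of a multi-hop (bronze -> silver) Delta pipeline.
# All names below (broker, topic, paths, schema) are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("multi-hop-ingest").getOrCreate()

# Bronze hop: land raw Kafka events unmodified so they can be replayed later.
bronze = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .selectExpr(
        "CAST(key AS STRING) AS key",
        "CAST(value AS STRING) AS raw_payload",
        "timestamp AS ingest_ts",
    )
)
(bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/bronze/_chk/events")
    .outputMode("append")
    .start("/mnt/bronze/events"))

# Silver hop: parse, validate and de-duplicate into an analytics-ready table.
payload_schema = StructType([
    StructField("record_id", StringType()),
    StructField("value", DoubleType()),
])
silver = (
    spark.readStream.format("delta").load("/mnt/bronze/events")
    .withColumn("parsed", F.from_json("raw_payload", payload_schema))
    .select("parsed.*", "ingest_ts")
    .where(F.col("record_id").isNotNull())
    # Watermark bounds the de-duplication state instead of growing it forever.
    .withWatermark("ingest_ts", "1 hour")
    .dropDuplicates(["record_id", "ingest_ts"])
)
(silver.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/silver/_chk/events")
    .outputMode("append")
    .start("/mnt/silver/events"))

spark.streams.awaitAnyTermination()
```

The bronze table keeps the raw payload for replay and debugging, while the silver table holds parsed, de-duplicated records; a further gold hop would typically aggregate silver data for reporting.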