Job Title


PySpark Developer


Company : Tata Consultancy Services


Location : Kollam, Kerala


Created : 2025-12-19


Job Type : Full Time


Job Description

Role : PySpark Developer

Required Technical Skill Set : PySpark, Redshift, PostgreSQL

Years of Experience : 4 to 8

Location : Chennai / Hyderabad

Desired Competencies (Technical/Behavioral Competency)

Must-Have
- 5+ years of experience in data engineering, with a strong focus on PySpark/Spark for big data processing.
- Expertise in building data pipelines and ingestion frameworks from relational, semi-structured (JSON, XML), and unstructured sources (logs, PDFs).
- Proficiency in Python with strong knowledge of data processing libraries.
- Strong SQL skills for querying and validating data in platforms like Amazon Redshift, PostgreSQL, or similar.
- Experience with distributed computing frameworks (e.g., Spark on EMR, Databricks).
- Familiarity with workflow orchestration tools (e.g., AWS Step Functions or similar).
- Solid understanding of data lake / data warehouse architectures and data modeling basics.

Good-to-Have
- Experience with AWS data services: Glue, S3, Redshift, Lambda, CloudWatch, etc.
- Familiarity with Delta Lake or similar for large-scale data storage.
- Exposure to real-time streaming frameworks (e.g., Spark Structured Streaming, Kafka).
- Knowledge of data governance, lineage, and cataloging tools (e.g., AWS Glue Catalog, Apache Atlas).
- Understanding of DevOps/CI-CD pipelines for data projects using Git, Jenkins, or similar tools.