Job Title: AWS Data Senior
Total IT Experience (in Yrs.): 7 to 11
Location: Pune
Keywords to search in resume: AWS, Glue, S3, ETL Developer, PySpark Developer

Technical/Functional Skills (Must Have):
- Knowledge of a broad range of AWS services.
- Experience in backend development/maintenance, preferably using PySpark/Python.
- Awareness of AWS cost calculations.
- Troubleshooting abilities.

Secondary Skills:
- Data management solutions with capabilities such as metadata and catalog, performance and capacity optimization, data security, data modeling, data wrangling, DevOps, and basic Agile awareness.

Responsibilities:
- Write AWS Glue/Lambda jobs to source data from multiple source systems into the data lake.
- Unit-test the data ingested into the landing/staging/curated layers.
- Understand source system analysis documents and configure the environments for Dev/QA/UAT/Prod deployments.
- Troubleshoot any production issues arising from our builds.
- Create detailed deployment plans and work with the infrastructure provisioning team (Terraform) on code/config deployment.
- Hands-on experience installing, configuring, and using AWS Glue, Lambda, S3, IAM roles, etc., and Hadoop ecosystem components such as DBFS, Parquet, Delta tables, HDFS, and MapReduce programming.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib.
- Hands-on experience with scripting languages such as Python/PySpark.
- Hands-on experience in the analysis, design, coding, and testing phases of the SDLC, following best practices.
- Expertise in using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs.
- Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala.
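The landing/staging/curated layers mentioned in the responsibilities are commonly expressed as S3 key prefixes that an ingestion job writes to. The sketch below shows one such convention; the layer names come from this posting, while the function name, the source-system/dataset parameters, and the `ingest_date=` partition convention are illustrative assumptions, not a prescribed standard.

```python
from datetime import date

# Data lake layers named in the responsibilities above.
LAYERS = ("landing", "staging", "curated")


def layer_key(layer: str, source_system: str, dataset: str, run_date: date) -> str:
    """Build the S3 key prefix for one dataset in one layer.

    The ingest_date= segment gives a Hive-style partition that Glue
    crawlers and Spark can discover automatically (assumed convention).
    """
    if layer not in LAYERS:
        raise ValueError(f"unknown layer {layer!r}, expected one of {LAYERS}")
    return f"{layer}/{source_system}/{dataset}/ingest_date={run_date.isoformat()}/"


# Example: where a CRM 'orders' extract would land on 2024-01-15.
print(layer_key("landing", "crm", "orders", date(2024, 1, 15)))
# landing/crm/orders/ingest_date=2024-01-15/
```

A Glue or Lambda ingestion job would compute these prefixes once per run and write to the landing layer first, promoting data to staging and curated only after unit checks pass.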
Job Title: AWS Data Lead