
Job Title


Junior Data Analyst


Company : Information Tech Consultants


Location : London, London


Created : 2026-01-09


Job Type : Full Time


Job Description

!! IMMEDIATE JOINERS !! You could be just the right applicant for this job. Read all associated information and make sure to apply.

Junior Big Data Developer (Python & SQL Focus)

We're looking for an enthusiastic and detail-oriented Junior Big Data Developer to join our data engineering team. This role is ideal for an early-career professional with foundational knowledge of data processing, strong proficiency in Python, and expert skills in SQL. You'll focus on building, testing, and maintaining data pipelines and ensuring data quality across our scalable Big Data platforms.

Key Responsibilities

- Data Pipeline Development: Assist in the design, construction, and maintenance of robust ETL/ELT pipelines to integrate data from various sources into our data warehouse or data lake.
- Data Transformation with Python: Write, optimize, and maintain production-grade Python scripts to clean, transform, aggregate, and process large volumes of data.
- Database Interaction (SQL): Develop complex, high-performance SQL queries (DDL/DML) for data extraction, manipulation, and validation within relational and data warehousing environments.
- Quality Assurance: Implement data quality checks and monitoring across pipelines, identifying discrepancies and ensuring the accuracy and reliability of data.
- Collaboration: Work closely with Data Scientists, Data Analysts, and other Engineers to understand data requirements and translate business needs into technical data solutions.
- Tooling & Automation: Use version control tools like Git and contribute to the automation of data workflows and recurring processes.
- Documentation: Create and maintain technical documentation for data mappings, processes, and pipelines.

Required Skills and Qualifications

Core Technical Skills

- Programming: Strong proficiency in Python for data manipulation and scripting. Familiarity with standard Python data libraries (e.g., Pandas, NumPy).
- Database: Expert-level proficiency in SQL (Structured Query Language). Experience writing complex joins, stored procedures, and performing performance tuning.
- Big Data Concepts: Foundational understanding of Big Data architecture (Data Lakes, Data Warehouses) and distributed processing concepts (e.g., MapReduce).
- ETL/ELT: Basic knowledge of ETL principles and data modeling (star schema, snowflake schema).
- Version Control: Practical experience with Git (branching, merging, pull requests).

Preferred Qualifications (A Plus)

- Experience with a distributed computing framework like Apache Spark (using PySpark).
- Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage).
- Exposure to workflow orchestration tools (Apache Airflow, Prefect, or Dagster).
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
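For candidates unsure what "clean, transform, aggregate" Python work and pipeline quality checks look like in practice, the following is a minimal, illustrative Pandas sketch. It is not taken from the employer's codebase: the file name orders.csv and the columns order_id, customer_id, order_date, and amount are hypothetical.

```python
# Illustrative sketch only: the input file and column names are hypothetical,
# not part of the job posting.
import pandas as pd


def load_and_clean(path: str) -> pd.DataFrame:
    """Load raw order data and apply basic cleaning rules."""
    df = pd.read_csv(path, parse_dates=["order_date"])
    df = df.dropna(subset=["order_id", "customer_id"])          # drop rows missing keys
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce").fillna(0.0)
    return df


def run_quality_checks(df: pd.DataFrame) -> None:
    """Simple data-quality checks of the kind a pipeline might enforce."""
    assert df["order_id"].is_unique, "duplicate order_id values found"
    assert (df["amount"] >= 0).all(), "negative order amounts found"


def daily_revenue(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate cleaned orders into daily revenue per customer."""
    grouped = df.groupby([df["order_date"].dt.date, "customer_id"])["amount"].sum()
    return grouped.reset_index().rename(columns={"order_date": "day", "amount": "revenue"})


if __name__ == "__main__":
    orders = load_and_clean("orders.csv")    # hypothetical input file
    run_quality_checks(orders)
    print(daily_revenue(orders).head())
```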
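Likewise, the SQL requirement (complex joins, star-schema data modeling) might look something like the sketch below, here run from Python against SQLite purely for illustration. The fact and dimension table names, columns, and the warehouse.db file are hypothetical, not part of the posting.

```python
# Illustrative sketch only: the schema (fact_orders, dim_date, dim_customer)
# and the database file are hypothetical, not part of the job posting.
import sqlite3

STAR_SCHEMA_QUERY = """
SELECT d.day,
       c.region,
       SUM(f.amount) AS revenue
FROM   fact_orders  AS f
JOIN   dim_date     AS d ON d.date_key     = f.date_key
JOIN   dim_customer AS c ON c.customer_key = f.customer_key
GROUP  BY d.day, c.region
ORDER  BY d.day, c.region;
"""


def run_report(db_path: str) -> list:
    """Execute the star-schema aggregation query and return the result rows."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(STAR_SCHEMA_QUERY).fetchall()


if __name__ == "__main__":
    for row in run_report("warehouse.db"):   # hypothetical warehouse database
        print(row)
```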