We are assembling an A-team of highly skilled, autonomous, and AI-first engineers, and we are seeking an exceptional Full Stack Data Engineer to join our high-performing, co-located squads in Canada. This role is for a hands-on engineer who is passionate about leveraging data, proficient in building end-to-end data solutions, and deeply committed to using AI tools to maximize productivity. The ideal candidate will be instrumental in designing, developing, and optimizing robust data pipelines, from ingestion to consumption, using Python, PySpark, and other big data technologies. We are looking for an AI-first thinker who can deeply understand the functional domains our work impacts and significantly contribute to our data strategy and culture.

Responsibilities

- Operate end-to-end in the design, development, and implementation of full-stack data solutions, ensuring optimal performance, scalability, data quality, security, and compliance across the data lifecycle.
- Collaborate closely within small, co-located squads (4-7 person teams), fostering an environment of high communication and minimal coordination overhead, to deliver impactful data products.
- Develop, maintain, and optimize highly efficient and resilient data ingestion, processing, and transformation pipelines using advanced Python and PySpark techniques for large-scale datasets.
- Implement sophisticated data storage solutions leveraging a diverse set of big data technologies, including Hive, distributed file systems (e.g., HDFS, S3), and enterprise-grade NoSQL databases (e.g., Cassandra, MongoDB).
- Design and implement scalable data models and schemas that support advanced analytics, machine learning, and critical reporting needs, ensuring data integrity, accessibility, and discoverability.
- Engage effectively with data consumers, data scientists, and business stakeholders to deeply understand their requirements, translating them into robust data solutions and providing expert guidance on data utilization and interpretation.
- Implement real-time data streaming and complex event-driven architectures using technologies like Apache Kafka, ensuring low-latency data availability for critical business functions.
- Adhere to and contribute to best practices in data engineering and software development, participating in rigorous code reviews, implementing comprehensive automated testing strategies, and supporting robust CI/CD pipelines within a DevOps culture.
- Exhibit high autonomy and agency, taking ownership of technical challenges, making well-reasoned architectural decisions, and proactively identifying and implementing continuous improvements across the data landscape.
- Innovate with AI-powered development, actively leveraging, integrating, and contributing to AI coding tools (e.g., internal Citi AI tools, Copilot, Claude Code, Codex, Antigravity) to significantly enhance productivity, code quality, and development velocity, and inspiring others to do the same.
- Participate in technical discussions and contribute to the evolution of our big data technology stack, evaluating new technologies and making strategic recommendations that align with business objectives and architectural vision.
- Expertly troubleshoot and resolve challenging technical issues within complex, distributed big data environments, applying advanced analytical and problem-solving methodologies.

Required Skills & Experience

- Experience: 4+ years of progressive, hands-on experience as a Data Engineer, with a proven track record of delivering complex, large-scale data solutions.
- Programming Languages:
  - Expert-level proficiency in Python, with deep expertise in developing highly optimized, scalable, and production-grade PySpark applications for mission-critical data processing.
- Big Data Frameworks/Technologies:
  - Deep understanding and extensive hands-on experience with the entire Apache Spark ecosystem (Spark Core, Spark SQL, Spark Streaming).
  - Advanced proficiency with Hive for enterprise data warehousing, including optimization techniques for large and complex queries.
  - Expert knowledge of distributed computing fundamentals, HDFS, and other components of the Hadoop ecosystem.
- Data Storage & Management:
  - Proficiency in SQL, complex query optimization, and advanced data warehousing concepts (e.g., dimensional modeling, data vault, data lakes).
  - Extensive experience with various data storage formats (e.g., Parquet, ORC, Avro) and leading data lake solutions (e.g., Delta Lake, Iceberg).
  - Proven experience with enterprise-grade NoSQL databases (e.g., Cassandra, MongoDB, HBase) and an understanding of their architectural trade-offs.
- Messaging & Event Streaming:
  - Expert-level experience with Apache Kafka, including design and implementation of high-throughput, low-latency real-time data pipelines and event-driven architectures.
- Cloud Platforms:
  - Extensive experience with big data services on major cloud platforms (e.g., AWS EMR/Glue/Redshift/Kinesis, Azure Databricks/Data Factory/Synapse/Event Hubs, GCP Dataflow/Dataproc/BigQuery/Pub/Sub), including cloud-native architectural patterns.
- AI-Powered Development & Productivity:
  - Mandatory: Demonstrated mastery and innovative application of AI coding tools (e.g., Claude Code, Codex, Antigravity) to significantly enhance the development lifecycle.
  - A proactive, 'AI-first thinker' mindset, with a proven ability to evaluate, integrate, and evangelize new AI tools and methodologies within the team to drive continuous improvement and innovation.
- Domain Understanding:
  - Expert ability to articulate the intricacies of the functional domain, proactively identifying business challenges and opportunities and translating them into impactful, data-driven solutions.
- Other Essential Skills:
  - Advanced understanding of software engineering principles, design patterns, data structures, algorithms, and performance engineering for distributed systems.
  - Extensive experience with RESTful API design, development, and integration for data services.
  - Strong expertise in containerization technologies (e.g., Docker, Kubernetes) and orchestration for deploying and managing scalable data applications.
  - Master-level proficiency with version control systems, especially Git, including advanced branching, merging, and code review strategies.
  - Exceptional problem-solving, analytical, and debugging skills applied to highly complex, distributed big data ecosystems.
  - Superior communication, presentation, and interpersonal skills, with the ability to articulate complex technical concepts to diverse audiences and influence strategic decisions.
  - Demonstrated high autonomy and agency in driving strategic initiatives and delivering impactful, innovative data solutions.

Education

- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related quantitative field is required. Equivalent advanced practical experience with a demonstrable track record of architecting and delivering major data initiatives will also be considered.

This job opening is for an existing job vacancy. Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Job Title
Full Stack Data Engineer - Assistant Vice President