Job Purpose

The Senior Data Engineer will design, build, and maintain scalable data management systems using Azure Databricks. This role involves supervising the upkeep of existing data infrastructure workflows and creating data processing pipelines using Databricks Notebooks, Spark SQL, Python, and other Databricks tools. The Senior Data Engineer will oversee and lead the module through planning, estimation, implementation, monitoring, and tracking.

Desired Skills and Experience

• 7+ years of experience in software development with a focus on data projects, using strong Python coding skills, PySpark, and associated frameworks.
• Hands-on SQL coding skills with RDBMS or NoSQL databases.
• Hands-on experience with IBM MQ administration in enterprise environments.
• Strong expertise in MQ clustering and distributed messaging, MQ object management, SSL/TLS configuration, and messaging patterns (pub/sub, point-to-point, request/response).
• Good understanding of broker-dealer business and financial services industry concepts.
• Proven experience as a Data Engineer, including experience in the Azure cloud.
• Experience implementing solutions using Azure cloud services: Azure Data Factory, Azure Data Lake Gen2, Azure databases, Azure Data Fabric, API gateway management, and Azure Functions.
• Experience developing APIs using FastAPI or similar Python frameworks.
• Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes.
• Good understanding of ETL/ELT processes.
• Ability to assist stakeholders with data-related technical issues and support their data infrastructure needs.
• Ability to develop and maintain documentation for data pipeline architecture, development processes, and data governance.
• In-depth knowledge of data warehousing concepts, architecture, and implementation.
• Extremely strong organizational and analytical skills with strong attention to detail.
• Strong track record of excellent results delivered to internal and external clients.
• Excellent problem-solving skills, with the ability to work independently or as part of a team.
• Strong communication and interpersonal skills, with the ability to engage effectively with both technical and non-technical stakeholders.
• Able to work independently without close supervision and collaboratively as part of cross-team efforts.

Key Responsibilities

• Interpret business requirements and work with internal resources as well as application vendors.
• Design, develop, and maintain Databricks solutions and relevant data quality rules.
• Troubleshoot and resolve data-related issues.
• Configure and create data models and data quality rules to meet customer needs.
• Work across multiple database platforms, such as Microsoft SQL Server and Oracle.
• Review and analyze data from multiple internal and external sources.
• Analyze existing complex Python/PySpark code and identify areas for optimization.
• Write new, optimized SQL queries or Python scripts to improve performance and reduce processing time.
• Identify opportunities for efficiency and innovative approaches to completing the scope of work.
• Write clean, efficient, well-documented code that adheres to best practices and Council IT coding standards.
• Maintain and operate existing custom code processes.
• Participate in team problem-solving efforts and offer ideas to solve client issues.
• Write queries, with the ability to understand and implement changes to SQL functions and stored procedures.
• Communicate effectively with business and technology partners, peers, and stakeholders.
• Deliver results on real-world business problems under demanding timelines.
• Work independently and multi-task effectively.
• Configure system settings and options, and execute unit/integration testing.
• Develop end-user release notes and training materials, and deliver training to a broad user base.
• Identify and communicate areas for improvement.
• Perform quality checks and adhere to the agreed Service Level Agreement (SLA) / turnaround time (TAT).
Job Title
Senior Data Engineer