Job Description

eDNA Explorer is expanding through a partnership with Dr. Caren Helbing's laboratory at the University of Victoria to create eDNA Explorer Canada! We are building a cutting-edge platform for processing and analyzing environmental DNA (eDNA) data. Our system processes biological samples to identify species based on their genetic material, integrates environmental data, and provides insights into biodiversity and ecological patterns. We're using modern cloud-native data engineering principles to build robust, scalable pipelines for scientific data analysis.

We're seeking a Full-Stack Engineer to enhance and maintain our comprehensive eDNA Explorer Canada platform, which includes both cutting-edge web applications and scientific data processing systems. This role involves building sophisticated data visualization components, implementing complex user workflows, developing type-safe APIs, and maintaining Python-based data processing pipelines and report generation services. The ideal candidate will have strong React/TypeScript experience and a passion for creating intuitive interfaces for complex scientific data, combined with solid Python backend development skills for data-intensive applications.
Our platform consists of:
- Frontend Web Applications: Modern React-based interfaces for scientific data analysis and research collaboration
- Python Data Processing Services: Flask-based APIs and report generation systems handling large-scale scientific datasets
- Data Pipeline Infrastructure: Dagster-based workflows for processing genomic and environmental data

Requirements

Core Experience (Required)
- 4+ years of full-stack web development experience
- Strong experience with React 18+ and TypeScript
- Solid understanding of Next.js (App Router and Pages Router)
- Experience with Python web development using Flask or FastAPI
- Knowledge of modern database technologies (PostgreSQL, SQLAlchemy)
- Experience with tRPC for type-safe APIs
- Familiarity with modern testing frameworks (Vitest, Playwright, React Testing Library, pytest)

Preferred Experience
- Component-driven development and design systems
- Understanding of monorepo architecture with Turborepo (for TypeScript) and Poetry (for Python)
- Knowledge of cloud services and deployment pipelines (Google Cloud Platform preferred)
- Experience with data visualization libraries and scientific applications
- Background in Redis/RQ for job queuing systems
- Experience with scientific data processing or bioinformatics applications
- Knowledge of containerization (Docker) and orchestration (Kubernetes)
- Experience with AI-powered development tools like Claude Code, GitHub Copilot, or similar agentic coding assistants
- Familiarity with AI frameworks such as Google AI SDK or PydanticAI (a plus)

Technology Stack

Frontend Technologies
- React & Next.js: React 19 with functional components and hooks, Next.js 15 with both App Router and Pages Router patterns
- TypeScript: Comprehensive type safety across the entire application
- React 19 compatibility: With React Compiler integration
- UI & Styling: Custom component library (@caledna/ui) with Storybook documentation, Tailwind CSS for utility-first styling
- State Management: Zustand for client state, tRPC for server state management
- Data Fetching: tRPC for type-safe API calls with automatic TypeScript generation
- Forms: React Hook Form with Zod validation for type-safe form handling
- Testing: Vitest for unit testing, Playwright for E2E testing, React Testing Library for component testing

Backend Technologies
- Python Web Frameworks: Flask 3.0+ for API services, with potential FastAPI integration
- Database: PostgreSQL with SQLAlchemy 2.0+ ORM for robust data modeling
- Job Processing: Redis with RQ (Redis Queue) for background job processing
- Authentication: Experience with JWT-based authentication
- Cloud Services: Google Cloud Platform (BigQuery, Cloud Storage, Secret Manager)
- Data Visualization: Plotly for interactive scientific visualizations
- Containerization: Docker with Kubernetes deployment
- Data Processing: Polars for scientific data manipulation
- Scientific Computing: SciPy, scikit-bio, scikit-learn for data analysis

Development & Infrastructure
- Monorepo Architecture: Turborepo for efficient builds and dependency management
- Package Management: Yarn for frontend, Poetry for Python
- Version Control: Git with conventional commits
- CI/CD: GitHub Actions with automated testing and deployment
- Code Quality: ESLint, Prettier, Ruff (Python), pre-commit hooks
- Documentation: Storybook for component documentation, comprehensive API documentation

Data Processing Pipeline
- Workflow Orchestration: Dagster for data pipeline management
- Data Storage: Google Cloud Storage, BigQuery for large-scale data analytics
- Data Formats: Support for scientific data formats (FASTA, TSV, compressed formats)
- Performance Optimization: Polars for high-performance data processing

Key Responsibilities

Frontend Development
- Build and maintain React applications for scientific data visualization and analysis
- Develop reusable UI components following design system principles
- Implement complex data visualization dashboards using modern charting libraries
- Create intuitive user workflows for researchers and scientists
- Ensure type safety across the entire frontend application stack
- Optimize application performance for large scientific datasets

Backend Development
- Design and implement Flask APIs for data processing and report generation
- Manage database operations using SQLAlchemy for complex scientific data models
- Develop background job processing systems using Redis and RQ
- Build report generation services that process large-scale genomic and environmental data
- Integrate with Google Cloud services for scalable data processing
- Implement robust authentication and authorization systems

System Integration
- Connect frontend applications with Python backend services via tRPC
- Maintain data consistency across web applications and processing pipelines
- Optimize system performance for handling large scientific datasets
- Implement monitoring and logging for both web and data processing components
- Ensure security best practices across the entire platform

Data & Analytics
- Work with scientific datasets including genomic sequences, environmental data, and biodiversity information
- Implement data validation and quality assurance processes
- Build interactive dashboards for scientific data exploration
- Create data export and download functionality for researchers

What You'll Build

Web Applications
- Interactive data visualization dashboards for biodiversity analysis
- Real-time data processing interfaces with progress tracking
- Complex form systems for scientific metadata collection
- Responsive data tables with advanced filtering and sorting
- Map-based visualizations for geographic species distribution

Backend Services
- Report generation APIs that process terabytes of scientific data
- Background job systems for long-running data processing tasks
- Data validation services for scientific metadata
- Authentication and user management systems
- File processing and storage services for scientific datasets

Integration Features
- Real-time updates between web interfaces and data processing jobs
- Type-safe API contracts between frontend and backend systems
- Scalable file upload and processing workflows
- Advanced search and filtering across scientific datasets

Technical Challenges
- Performance optimization for applications handling large scientific datasets
- Complex state management across multiple interconnected applications
- Real-time updates for long-running scientific computations
- Type safety across full-stack applications with complex data models
- Scientific data visualization with interactive and responsive charts
- Scalable architecture supporting a growing research community

Team & Culture
- AI-native development leveraging modern coding assistants and tools for enhanced productivity
- Code quality and testing with comprehensive test coverage
- Type safety and robust error handling across all systems
- Performance and scalability for scientific computing workloads
- Documentation and knowledge sharing for complex scientific processes
- Collaborative problem-solving with domain experts and researchers
- Continuous learning and adoption of cutting-edge development tools and practices

Growth Opportunities
- Scientific domain expertise in environmental biology and genomics
- Advanced data engineering and pipeline optimization
- Cloud architecture and distributed systems design
- Open-source contributions to scientific computing tools
- Research collaboration with academic institutions and environmental organizations

Benefits
This is a grant-funded position with the possibility of future hiring as an employee at the end of the grant. eDNA Explorer Canada is committed to building a diverse team, and we encourage applications from candidates of all backgrounds. This position is available as remote within Canada, with preference for candidates who can occasionally visit our offices at the University of Victoria on Vancouver Island in beautiful British Columbia. Applicants must be Canadian citizens or have a valid permit to work in Canada.
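The pipeline work above includes handling scientific data formats such as FASTA. As a flavour of that kind of task, here is a minimal, dependency-free Python sketch of FASTA parsing; it is illustrative only (the actual eDNA Explorer pipeline is built on Dagster and Polars, and the sequence IDs below are made up):

```python
def parse_fasta(text: str) -> dict[str, str]:
    """Parse FASTA-formatted text into {sequence_id: sequence}.

    Minimal illustrative parser: a production pipeline would stream
    records from Cloud Storage and handle compressed inputs.
    """
    sequences: dict[str, str] = {}
    current_id = None
    parts: list[str] = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            # Header line: flush the previous record, then take the
            # first whitespace-delimited token as the sequence ID.
            if current_id is not None:
                sequences[current_id] = "".join(parts)
            current_id = line[1:].split()[0]
            parts = []
        elif current_id is not None:
            # Sequence lines may be wrapped; concatenate them.
            parts.append(line)
    if current_id is not None:
        sequences[current_id] = "".join(parts)
    return sequences


# Hypothetical two-record input for demonstration.
example = """>seq1 example COI fragment
ACGTACGT
ACGT
>seq2
TTGGCC
"""
print(parse_fasta(example))  # → {'seq1': 'ACGTACGTACGT', 'seq2': 'TTGGCC'}
```

Real eDNA workloads add wrinkles this sketch ignores (quality metadata, gzip-compressed files, millions of reads), which is where Polars and the Dagster orchestration come in.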
The Helbing lab is situated in the Department of Biochemistry & Microbiology at the University of Victoria. The eDNA Explorer platform can be viewed here: .

We're looking for engineers who are excited about building tools that enable groundbreaking environmental research that can truly change the world. If you're passionate about creating robust, scalable applications that help scientists understand and protect biodiversity, we'd love to hear from you. This role offers the unique opportunity to work at the intersection of modern web development and cutting-edge environmental science, building tools that have real impact on our understanding of the natural world.
Job Title: Senior Software Engineer