Job Title:
SwarmBench Task Engineer - 75243

About Turing:
Turing is one of the world's fastest-growing AI companies, accelerating the advancement and deployment of powerful AI systems. Turing helps customers in two ways: working with the world's leading AI labs to advance frontier model capabilities in thinking, reasoning, coding, agentic behavior, multimodality, multilinguality, STEM, and frontier knowledge; and leveraging that work to build real-world AI systems that solve mission-critical priorities for companies.

Role Overview:
We are looking for experienced SwarmBench Task Engineers - Code / SWE to design and build high-quality multi-agent benchmark tasks based on real-world software engineering workflows.

In this role, you will create tasks grounded in real open-source code changes such as bug fixes, migrations, and refactors. These tasks are used to evaluate how effectively AI agents can understand large codebases, apply precise modifications, and produce correct, testable outputs.

You will work within a structured evaluation framework (Harbor), define clear task instructions, design verification logic, and decompose complex engineering problems across multiple specialized agents.

What the day-to-day looks like:
- Build multi-agent benchmark tasks based on real-world open-source code changes (bug fixes, migrations, refactors)
- Work with the Harbor evaluation framework to run and validate tasks inside Docker environments
- Write clear, precise task instructions specifying file paths, function signatures, expected behavior, and constraints
- Design and implement Python-based verification scripts to validate the correctness of agent-generated code changes
- Create decomposition strategies that split complex code changes across multiple independent sub-agents
- Run, debug, and refine tasks within containerized environments to ensure reproducibility and determinism
- Evaluate task performance signals and improve task quality, clarity, and difficulty

Requirements:
- 5+ years of experience in Python and JavaScript development
- Experience with AI coding benchmarks (e.g., SWE-bench, Terminal-Bench)
- Strong experience reading and navigating large open-source codebases (e.g., Django, Flask, FastAPI, Node.js, or similar)
- Familiarity with Git workflows, including pull requests, diffs, cherry-picking, and working with specific commits
- Comfortable working with Docker (writing Dockerfiles, building images, debugging container issues)
- Experience writing test scripts (pytest, unittest, or custom assertion-based testing)
- Ability to write clear, precise, and unambiguous technical specifications

Perks of Freelancing With Turing:
- Work on cutting-edge AI projects with leading foundation model companies
- Collaborate on high-impact work at the frontier of LLM evaluation and reasoning
- Remote, flexible opportunities with global teams

Offer Details:
- Commitment required: 8 hours per day, with a 4-hour overlap with PST.
- Employment type: Contractor position (note: this role does not include medical or paid leave).
- Duration of contract: 4 weeks; expected start date is next week.
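To give a flavor of the assertion-based verification scripts this role involves, here is a minimal, hypothetical sketch. It is not taken from the Harbor framework; the function name (normalize_path) and the expected behaviors are invented for illustration. In practice a verifier would import the agent-patched code from the task repository and exit nonzero on failure so the harness can score the run.

```python
# Illustrative verification script (all names hypothetical, not from Harbor).
# A task verifier typically spells out expected behavior from the task spec
# as plain assertions, then signals pass/fail via the process exit code.

def normalize_path(path: str) -> str:
    """Stand-in for an agent-patched function under verification."""
    # Drop empty segments and "." segments produced by "//" or "/./".
    return "/".join(part for part in path.split("/") if part not in ("", "."))

def verify() -> bool:
    # Each check mirrors one expected behavior stated in the task instructions.
    checks = [
        normalize_path("a//b/./c") == "a/b/c",
        normalize_path("./x") == "x",
        normalize_path("") == "",
    ]
    return all(checks)

if __name__ == "__main__":
    # Exit code 0 signals success to the evaluation harness; nonzero signals failure.
    raise SystemExit(0 if verify() else 1)
```

Keeping verifiers deterministic and self-contained like this is what makes task runs reproducible inside the Docker environments described above.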