Job Title


Software Engineer, Cloud Inference Safeguards


Company : Anthropic PBC


Location : Seattle, WA


Created : 2026-04-04


Job Type : Full Time


Job Description

Responsibilities:

- Build, deploy, and operate real-time safeguards infrastructure (classifiers, rate limits, enforcement actions, and intervention hooks) embedded directly in the third-party CSP inference serving path
- Design and maintain the data residency and privacy architecture for safeguards signals on CSP platforms, ensuring we can detect abuse and monitor model behavior while honoring regionalization boundaries and enterprise contractual commitments
- Develop telemetry, logging, and evaluation pipelines that give Safeguards, Policy, and T&S operational teams situational awareness over CSP traffic and close the visibility gap between third-party and first-party serving
- Dive into the CSP serving stack to identify the lowest-impact points at which to gather signals or introduce interventions without degrading latency, stability, or overall architecture
- Hold a high operational bar: own on-call, drive root-cause analyses and postmortems for safeguards incidents on CSP platforms, and build systems that reduce the human intervention required to keep Claude safe
- Work closely with Safeguards research, Policy & Enforcement, the Cloud Inference team, and CSP partner contacts to turn detection research and policy decisions into production enforcement that works inside a partner's cloud

You may be a good fit if you:

- Have a Bachelor's degree in Computer Science, Software Engineering, or comparable experience
- Have 4-10+ years of experience in high-scale, high-reliability software development, ideally with exposure to trust & safety, anti-abuse, fraud, or integrity systems
- Are proficient in Python and comfortable working across the stack, from request-path services to data pipelines to internal tooling
- Think adversarially: you can see a system from a bad actor's perspective, anticipate how they will respond to countermeasures, and design defenses in depth rather than single points of enforcement
- Have experience scaling infrastructure to accommodate rapid traffic growth while keeping latency and reliability within tight budgets
- Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development
- Have strong communication skills and can explain complex technical and risk tradeoffs to non-technical stakeholders across Policy, Legal, and partner organizations
- Enjoy working in a fast-paced, early environment and are comfortable adapting priorities as the AI space rapidly evolves

Strong candidates may also have:

- Experience building trust and safety, anti-spam, fraud, or abuse detection and mitigation mechanisms for AI/ML systems, or the infrastructure to support these systems at scale
- Experience with machine learning serving infrastructure (GPUs/TPUs, inference servers, load balancing) and the operational realities of running models in production
- Familiarity with major cloud platform internals (IAM, network/service perimeter controls, regional resource constraints, cloud-native logging/monitoring), or experience shipping software that runs inside a partner's cloud rather than your own
- Experience with data residency, privacy engineering, or compliance-constrained architectures, particularly where telemetry has to stay within regional or contractual boundaries
- Experience working closely with operational and human-review teams to build custom internal tooling, admin UX, and alerting
- An adversarial mindset: has shipped defenses against motivated attackers before, knows what it feels like when they adapt, and can sprint to close a gap before it becomes an incident
- Comfort operating at the intersection of platform/infra engineering and trust & safety: neither a pure infra engineer nor a pure T&S engineer, but someone who can credibly do both
- Has shipped software tha