About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the team
The Tool Use Team within Research is responsible for making Claude the world's most capable, safe, reliable, and efficient model for tool use and agentic applications. The team focuses on the foundational layer, solving core problems such as tool use safety (e.g. prompt injection robustness), tool call accuracy, long-horizon and complex tool use workflows, large-scale and dynamic tool sets, and tool use efficiency. These foundations serve the majority of Anthropic's customers as well as internal teams building specific agentic applications such as Claude for Chrome, Computer Use, Claude Code, and Search.
About the role
We’re looking for Research Engineers/Scientists to help us advance the frontier of safe tool use. With tool use adoption accelerating rapidly across our platform, the next generation of models requires breakthrough research to scale responsibly: for example, training Claude to be extremely robust against sophisticated prompt injection, preventing data exfiltration attempts through tool misuse, defending against adversarial attacks in realistic multi-turn agent conversations, and ensuring safety when agents operate autonomously over longer horizons with access to a large number of tools. You’ll collaborate with a diverse group of researchers and engineers to advance safe tool use in Claude, and you’ll own the full research lifecycle, from identifying fundamental limitations to implementing solutions that ship in production models. This work is critical for de-risking our models’ increasing capabilities and empowering Claude to assist users more autonomously. Note: for this role, we conduct all interviews in Python.
Key Responsibilities
- Design and implement novel and scalable reinforcement learning methodologies that push the state of the art of tool use safety
- Define and pursue research agendas that push the boundaries of what’s possible
- Build rigorous, realistic evaluations that capture the complexity of real-world tool use safety challenges
- Ship research advances that directly impact and protect millions of users
- Collaborate with other safety research teams (e.g. Safeguards, Alignment Science), capabilities research teams, and product teams to drive fundamental breakthroughs in safety and ship them into production
- Design, implement, and debug code across our research and production ML stacks
- Contribute to our collaborative research culture through pair programming, technical discussions, and team problem-solving
Requirements
- Are passionate about our safety mission
- Are driven by real-world impact and excited to see research ship in production
- Have strong machine learning skills
To apply for this job, please visit job-boards.greenhouse.io.