DeepMind welcomes applications from all sections of society. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, maternity or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at DeepMind investigates questions related to objective specification, robustness, interpretability, and trust in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems.
Research on technical AI safety draws on expertise in deep learning, reinforcement learning, statistics, and foundations of agent models. Research Scientists work on the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating possible long-term risks, in close collaboration with other AI research groups within and outside of DeepMind.
DeepMind is active within the wider research community through publications and partnerships with many of the world’s top academics and academic institutions. We have built a hardworking, engaging culture, combining the best of academia with product-led environments, providing an ambitious balance of structure and flexibility.
Our approach encourages collaboration across all groups within the Research team, creating the scope for ambitious, creative breakthroughs at the forefront of research.
- Identify and investigate possible failure modes for current and future AI systems, and proactively develop solutions to address them
- Conduct empirical or theoretical research into technical safety mechanisms for AI systems in coordination with the team’s broader technical agenda
- Collaborate with research teams internally and externally to ensure that AI capabilities research is informed by and adheres to the most advanced safety research and protocols
- Report and present research findings and developments to internal and external collaborators through effective written and verbal communication
- PhD in a technical field (such as machine learning, computer science, statistics, computational neuroscience, mathematics, or physics) or equivalent practical experience
- Relevant research experience in deep learning, machine learning, reinforcement learning, statistics, or computational neuroscience
- A real passion for AI
Competitive salary applies.