Research Scientist, Safety at DeepMind

Full-time
London
a month ago

DeepMind welcomes applications from all sections of society. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, maternity or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About us 

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at DeepMind investigates questions related to objective specification, robustness, interpretability, and trust in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems.

Research on technical AI safety draws on expertise in deep learning, reinforcement learning, statistics, and foundations of agent models. Research Scientists work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating possible long-term risks, in close collaboration with other AI research groups within and outside of DeepMind.

Snapshot 

DeepMind is active within the wider research community through publications and partnerships with many of the world’s top academics and academic institutions. We have built a hardworking, engaging culture, combining the best of academia with product-led environments, and providing a balance of structure and flexibility.

Our approach encourages collaboration across all groups within the Research team, creating scope for ambitious creative breakthroughs at the forefront of research.

The role 

Key responsibilities:

  • Identify and investigate possible failure modes for current and future AI systems, and proactively develop solutions to address them
  • Conduct empirical or theoretical research into technical safety mechanisms for AI systems in coordination with the team’s broader technical agenda
  • Collaborate with research teams internally and externally to ensure that AI capabilities research is informed by, and adheres to, the most advanced safety research and protocols
  • Report and present research findings and developments to internal and external collaborators, with clear and effective written and verbal communication

About you 

Minimum qualifications:

  • PhD in a technical field or equivalent practical experience

Preferred qualifications:

  • PhD in machine learning, computer science, statistics, computational neuroscience, mathematics, or physics.
  • Relevant research experience in deep learning, machine learning, reinforcement learning, statistics, or computational neuroscience.
  • A real passion for AI.

Competitive salary applies.

Your application

Apply directly on the company listing page, or fill out the form and we will forward it to the company contact.

This information will be shared with the company contact only. Review your information carefully, as the application cannot be edited after submission.

Why are you a great fit for this job?