AI Safety...Ok Doomer: with Anca Dragan

Google DeepMind: The Podcast · 2024-08-28
38:11

Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to:

Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind

Please leave us a review on Spotify or Apple Podcasts if you enjoyed this episode. We always want to hear from our audience, whether that's feedback, a new idea, or a guest recommendation!

Google DeepMind: The Podcast

Join mathematician and broadcaster Professor Hannah Fry as she goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype. No spin. Just compelling discussions and grand scientific ambition.

Where can you listen?

Apple Podcasts · Spotify · Podtail · Google Podcasts · RSS