AI Safety Resources
As I continue to discuss AI safety publicly, I'm maintaining an evolving list of resources for those who are interested. This page may move elsewhere in the future; if it does, I'll provide the updated link.
Curated by Nicole Myrie
Start here
A compelling video exploring Eliezer Yudkowsky's foundational arguments about AI risk – why some of the world's most serious thinkers believe advanced AI could be the most dangerous thing humanity has ever built.
Watch
Tim Urban's legendary two-part explainer that made AI existential risk accessible to millions. Long, illustrated, and surprisingly funny – the best place to send anyone who's brand new to these ideas.
Read
An interactive explainer showing just how rapidly AI capabilities have advanced – and where dangerous thresholds may lie. Includes live demonstrations comparing models across time. Eye-opening and data-driven.
Explore
A thorough, honest primer on AGI timelines, what the research community believes, and what individuals can actually do about it. Written for a general audience with no technical background required.
Read
A detailed breakdown of why AI safety ranks among the most important challenges of our time – covering the arguments, the uncertainties, and why this problem is so difficult to solve. Essential reading.
Read
A landmark long-form essay series by a former OpenAI researcher on the geopolitical, national security, and civilizational stakes of the race to AGI. Dense but deeply important for anyone who wants to understand the big picture.
Read
A thought-provoking long-form essay exploring what happens to economies, power structures, and social contracts when AI surpasses human intelligence – and who gets left behind. A must-read for anyone thinking about AI and society.
Read
A rigorous paper by philosopher William MacAskill and Fin Moorhouse on what an intelligence explosion might actually look like – and what society, institutions, and individuals need to do to prepare. Sobering and thorough.
Read
A striking research paper examining how advanced AI could be used to concentrate political and economic power in the hands of a very small group – and what safeguards are needed to prevent it.
Read
Ongoing writing from BlueDot Impact – the organization behind some of the leading AI safety and governance courses. A great source to stay informed and follow the field as it develops.
Visit