Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.
Nov 14, 2024
"I think one of the reasons I took [shutting down my charity] so hard is because entrepreneurship is all about this bets-based mindset. So you say, “I’m going to take a bunch of bets. I’m going to take some risky bets that have really high upside.” And this is a winning strategy in life, but maybe it’s not a winning strategy for any given hand. So the fact of the matter is that I believe that intellectually, but I do not believe that emotionally. And I have now met a bunch of people who are really good at doing that emotionally, and I’ve realised I’m just not one of those people. I think I’m more entrepreneurial than your average person; I don’t think I’m the maximally entrepreneurial person. And I also think it’s just human nature to not like failing." —Sarah Eustis-Guthrie

In today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative, a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected.

Links to learn more, highlights, and full transcript.

They cover:
- The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective.
- Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention.
- All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect.
- Dealing with the emotional side of putting so much time and effort into a project that ultimately failed.
- Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience.
- The misaligned incentives that discourage charities from shutting down ineffective programmes.
- The movement of trust-based philanthropy, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations.
- The pros and cons of exploring and pivoting in careers.
- What it’s like to participate in the Charity Entrepreneurship Incubation Program, and how listeners can assess if they might be a good fit.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:00:58)
The interview begins (00:03:43)
The case for postpartum family planning as an impactful intervention (00:05:37)
Deciding where to start the charity (00:11:34)
How do you even start implementing a charity programme? (00:18:33)
Early yellow and red flags (00:22:56)
Proof-of-concept tests and pilot programme in Ghana (00:34:10)
Dealing with disappointing pilot results (00:53:34)
The ups and downs of founding an organisation (01:01:09)
Post-pilot research and reflection (01:05:40)
Is family planning still a promising intervention? (01:22:59)
Deciding to shut down MHI (01:34:10)
The surprising community response to news of the shutdown (01:41:12)
Mistakes and what Sarah could have done differently (01:48:54)
Sharing results in the space of postpartum family planning (02:00:54)
Should more charities scale back or shut down? (02:08:33)
Trust-based philanthropy (02:11:15)
Empowering the beneficiaries of charities’ work (02:18:04)
The tough ask of getting nonprofits to act when a programme isn’t working (02:21:18)
Exploring and pivoting in careers (02:27:01)
Reevaluation points (02:29:55)
PlayPumps were even worse than you might’ve heard (02:33:25)
Charity Entrepreneurship (02:38:30)
The mistake of counting yourself out too early (02:52:37)
Luisa’s outro (02:57:50)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Nov 08, 2024
With kids very much on the team's mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

Links to learn more and full transcript.

After hearing 8 former guests’ insights, Luisa and Rob chat about:
- Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
- What have been the biggest surprises for Rob in becoming a parent.
- How Rob's dealt with work and parenting tradeoffs, and his advice for other would-be parents.
- Rob's list of recommended purchases for new or upcoming parents.

This bonus episode includes excerpts from:
- Ezra Klein on parenting yourself as well as your children (from episode #157)
- Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
- Parenting expert Emily Oster on how having kids affects relationships and careers, and what actually makes a difference in young kids’ lives (#178)
- Russ Roberts on empirical research when deciding whether to have kids (#87)
- Spencer Greenberg on his surveys of parents (#183)
- Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
- Bryan Caplan on homeschooling (#172)
- Nita Farahany on thinking about life and the world differently with kids (#174)

Chapters:
Cold open (00:00:00)
Rob & Luisa’s intro (00:00:19)
Ezra Klein on parenting yourself as well as your children (00:03:34)
Holden Karnofsky on preparing for a kid and freezing embryos (00:07:41)
Emily Oster on the impact of kids on relationships (00:09:22)
Russ Roberts on empirical research when deciding whether to have kids (00:14:44)
Spencer Greenberg on parent surveys (00:23:58)
Elie Hassenfeld on how having children reframes his relationship to solving pressing problems (00:27:40)
Emily Oster on careers and kids (00:31:44)
Holden Karnofsky on the experience of having kids (00:38:44)
Bryan Caplan on homeschooling (00:40:30)
Emily Oster on what actually makes a difference in young kids' lives (00:46:02)
Nita Farahany on thinking about life and the world differently (00:51:16)
Rob’s first impressions of parenthood (00:52:59)
How Rob has changed his views about parenthood (00:58:04)
Can the pros and cons of parenthood be studied? (01:01:49)
Do people have skewed impressions of what parenthood is like? (01:09:24)
Work and parenting tradeoffs (01:15:26)
Tough decisions about screen time (01:25:11)
Rob’s advice to future parents (01:30:04)
Coda: Rob’s updated experience at nine months (01:32:09)
Emily Oster on her amazing nanny (01:35:01)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Nov 01, 2024
"In that famous example of the dress, half of the people in the world saw [blue and black], half saw [white and gold]. It turns out there are individual differences in how brains take into account ambient light. Colour is one example where it’s pretty clear that what we experience is a kind of inference: it’s the brain’s best guess about what’s going on in some way out there in the world. And that’s the claim that I’ve taken on board as a general hypothesis for consciousness: that all our perceptual experiences are inferences about something we don’t and cannot have direct access to." —Anil Seth

In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.

Links to learn more, highlights, and full transcript.

They cover:
- What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.
- Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.
- Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.
- Whether our theories of human consciousness can be applied to nonhuman animals.
- Anil’s thoughts on whether machines could ever be conscious.
- Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.
- And much more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:01:02)
The interview begins (00:02:42)
How expectations and perception affect consciousness (00:03:05)
How the brain makes sense of the body it’s within (00:21:33)
Psychedelics and predictive processing (00:32:06)
Blindsight and visual consciousness (00:36:45)
Split-brain patients (00:54:56)
Overflow experiments (01:05:28)
How much can we learn about consciousness from empirical research? (01:14:23)
Which parts of the brain are responsible for conscious experiences? (01:27:37)
Current state and disagreements in the study of consciousness (01:38:36)
Digital consciousness (01:55:55)
Consciousness in nonhuman animals (02:18:11)
What’s next for Anil (02:30:18)
Luisa’s outro (02:32:46)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Oct 28, 2024
If you care about social impact, is voting important? In this piece, Rob investigates the two key things that determine the impact of your vote:
- The chances of your vote changing an election’s outcome.
- How much better some candidates are for the world as a whole, compared to others.

He then discusses a couple of the best arguments against voting in important elections, namely:
- If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake.
- While voting itself doesn’t take long, knowing enough to accurately pick which candidate is better for the world actually does take substantial effort — effort that could be better allocated elsewhere.

Finally, Rob covers the impact of donating to campaigns or working to "get out the vote," which can be effective ways to generate additional votes for your preferred candidate.

We last released this article in October 2020, but we think it largely still stands up today.

Chapters:
Rob's intro (00:00:00)
Introduction (00:01:12)
What's coming up (00:02:35)
The probability of one vote changing an election (00:03:58)
How much does it matter who wins? (00:09:29)
What if you’re wrong? (00:16:38)
Is deciding how to vote too much effort? (00:21:47)
How much does it cost to drive one extra vote? (00:25:13)
Overall, is it altruistic to vote? (00:29:38)
Rob's outro (00:31:19)

Producer: Keiran Harris
Oct 23, 2024
"You have a tank split in two parts: if the fish gets in the compartment with a red circle, it will receive food, and food will be delivered in the other tank as well. If the fish takes the blue triangle, this fish will receive food, but nothing will be delivered in the other tank. So we have a prosocial choice and antisocial choice. When there is no one in the other part of the tank, the male is choosing randomly. If there is a male, a possible rival: antisocial — almost 100% of the time. Now, if there is his wife — his female, this is a prosocial choice all the time.

"And now a question: Is it just because this is a female or is it just for their female? Well, when they're bringing a new female, it’s the antisocial choice all the time. Now, if there is not the female of the male, it will depend on how long he's been separated from his female. At first it will be antisocial, and after a while he will start to switch to prosocial choices." —Sébastien Moro

In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience.

Links to learn more, highlights, and full transcript.

They cover:
- The insane capabilities of fish in tests of memory, learning, and problem-solving.
- Examples of fish that can beat primates on cognitive tests and recognise individual human faces.
- Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.
- Whether fish can experience emotions, and how this is even studied.
- The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.
- How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.
- Ethical issues raised by evidence that fish may be conscious and experience suffering.
- And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Oct 16, 2024
Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

Links to learn more, highlights, video, and full transcript.

On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome.

Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:
- How would Nate spend $10 billion differently than today’s philanthropists influenced by EA?
- Is anyone else competitive with EA in terms of impact per dollar?
- Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
- Is EA too big a tent to function?
- What global problems could EA be ignoring?
- Should EA be more willing to court controversy?
- Does EA’s niceness leave it vulnerable to exploitation?
- What moral philosophy would he have modelled EA on?

Rob and Nate also talk about:
- Nate’s theory of Sam Bankman-Fried’s psychology.
- Whether we had to “raise or fold” on COVID.
- Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
- “Winners’ tilt.”
- Whether it’s selfish to slow down AI progress.
- The ridiculous 13 Keys to the White House.
- Whether prediction markets are now overrated.
- Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
- And plenty more.

Chapters:
Cold open (00:00:00)
Rob's intro (00:01:03)
The interview begins (00:03:08)
Sam Bankman-Fried and trust in the effective altruism community (00:04:09)
Expected value (00:19:06)
Similarities and differences between Sam Altman and SBF (00:24:45)
How would Nate do EA differently? (00:31:54)
Reservations about utilitarianism (00:44:37)
Game theory equilibrium (00:48:51)
Differences between EA culture and rationalist culture (00:52:55)
What would Nate do with $10 billion to donate? (00:57:07)
COVID strategies and tradeoffs (01:06:52)
Is it selfish to slow down AI progress? (01:10:02)
Democratic legitimacy of AI progress (01:18:33)
Dubious election forecasting (01:22:40)
Assessing how reliable election forecasting models are (01:29:58)
Are prediction markets overrated? (01:41:01)
Venture capitalists and risk (01:48:48)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
Oct 03, 2024
"In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it.

"The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process." —Peter Godfrey-Smith

In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.

Links to learn more, highlights, and full transcript.

They cover:
- Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
- How the role of culture has been crucial in enabling human technological progress.
- Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
- Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
- Whether we can and should avoid death by uploading human minds.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:00:57)
The interview begins (00:02:12)
Wild animal suffering and rewilding (00:04:09)
Thinking about death (00:32:50)
Uploads of ourselves (00:38:04)
Culture and how minds make things happen (00:54:05)
Challenges for water-based animals (01:01:37)
The importance of sea-to-land transitions in animal life (01:10:09)
Luisa's outro (01:23:43)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Sep 27, 2024
In this episode from our second show, 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.

Links to learn more, highlights, and full transcript.

They cover:
- Keiran’s views on free will, and how he came to hold them
- What it’s like not experiencing sustained guilt, shame, and anger
- Whether Luisa would become a worse person if she felt less guilt and shame — specifically whether she’d work fewer hours, or donate less money, or become a worse friend
- Whether giving up guilt and shame also means giving up pride
- The implications for love
- The neurological condition ‘Jerk Syndrome’
- And some practical advice on feeling less guilt, shame, and anger

Who this episode is for:
- People sympathetic to the idea that free will is an illusion
- People who experience tons of guilt, shame, or anger
- People worried about what would happen if they stopped feeling tonnes of guilt, shame, or anger

Who this episode isn’t for:
- People strongly in favour of retributive justice
- Philosophers who can’t stand random non-philosophers talking about philosophy
- Non-philosophers who can’t stand random non-philosophers talking about philosophy

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:16)
The chat begins (00:03:15)
Keiran's origin story (00:06:30)
Charles Whitman (00:11:00)
Luisa's origin story (00:16:41)
It's unlucky to be a bad person (00:19:57)
Doubts about whether free will is an illusion (00:23:09)
Acting this way just for other people (00:34:57)
Feeling shame over not working enough (00:37:26)
First person / third person distinction (00:39:42)
Would Luisa become a worse person if she felt less guilt? (00:44:09)
Feeling bad about not being a different person (00:48:18)
Would Luisa donate less money? (00:55:14)
Would Luisa become a worse friend? (01:01:07)
Pride (01:08:02)
Love (01:15:35)
Bears and hurricanes (01:19:53)
Jerk Syndrome (01:24:24)
Keiran's outro (01:34:47)

Get more episodes like this by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type "80k After Hours" into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore
Sep 19, 2024
"For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki Ramakrishnan

In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.

Links to learn more, highlights, and full transcript.

They cover:
- What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
- Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
- Why eliminating major age-related diseases might only extend average lifespan by 15 years.
- The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:04)
The interview begins (00:02:21)
Reasons to explore why we age and die (00:02:35)
Evolutionary pressures and animals that don't biologically age (00:06:55)
Why does ageing cause us to die? (00:12:24)
Is there a hard limit to the human lifespan? (00:17:11)
Evolutionary tradeoffs between fitness and longevity (00:21:01)
How ageing resets with every generation, and what we can learn from clones (00:23:48)
Younger blood (00:31:20)
Freezing cells, organs, and bodies (00:36:47)
Are the goals of anti-ageing research even realistic? (00:43:44)
Dementia (00:49:52)
Senescence (01:01:58)
Caloric restriction and metabolic pathways (01:11:45)
Yamanaka factors (01:34:07)
Cancer (01:47:44)
Mitochondrial dysfunction (01:58:40)
Population effects of extended lifespan (02:06:12)
Could increased longevity increase inequality? (02:11:48)
What’s surprised Venki about this research (02:16:06)
Luisa's outro (02:19:26)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Sep 13, 2024
"Perception is quite difficult with cameras: even if you have a stereo camera, you still can’t really build a map of where everything is in space. It’s just very difficult. And I know that sounds surprising, because humans are very good at this. In fact, even with one eye, we can navigate and we can clear the dinner table. But it seems that we’re building in a lot of understanding and intuition about what’s happening in the world and where objects are and how they behave. For robots, it’s very difficult to get a perfectly accurate model of the world and where things are. So if you’re going to go manipulate or grasp an object, a small error in that position will maybe have your robot crash into the object, a delicate wine glass, and probably break it. So the perception and the control are both problems." —Ken Goldberg

In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies.

Links to learn more, highlights, and full transcript.

They cover:
- Why training robots is harder than training large language models like ChatGPT.
- The biggest engineering challenges that still remain before robots can be widely useful in the real world.
- The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.
- Whether we should be worried about robot labour affecting human employment.
- Recent breakthroughs in robotics, and what cutting-edge robots can do today.
- Ken’s work as an artist, where he explores the complex relationship between humans and technology.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:19)
General purpose robots and the “robotics bubble” (00:03:11)
How training robots is different than training large language models (00:14:01)
What can robots do today? (00:34:35)
Challenges for progress: fault tolerance, multidimensionality, and perception (00:41:00)
Recent breakthroughs in robotics (00:52:32)
Barriers to making better robots: hardware, software, and physics (01:03:13)
Future robots in home care, logistics, food production, and medicine (01:16:35)
How might robot labour affect the job market? (01:44:27)
Robotics and art (01:51:28)
Luisa's outro (02:00:55)

Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Sep 04, 2024
"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks.

Links to learn more, highlights, and full transcript.

They cover:
- How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
- What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
- The challenges of predicting low-probability, high-impact events.
- Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
- The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
- Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
- Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
- Whether large language models could help or outperform human forecasters.
- How people can improve their calibration and start making better forecasts personally.
- Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:01:07)
The interview begins (00:02:54)
The Existential Risk Persuasion Tournament (00:05:13)
Why is this project important? (00:12:34)
How was the tournament set up? (00:17:54)
Results from the tournament (00:22:38)
Risk from artificial intelligence (00:30:59)
How to think about these numbers (00:46:50)
Should we trust experts or superforecasters more? (00:49:16)
The effect of debate and persuasion (01:02:10)
Forecasts from the general public (01:08:33)
How can we improve people’s forecasts? (01:18:59)
Incentives and recruitment (01:26:30)
Criticisms of the tournament (01:33:51)
AI adversarial collaboration (01:46:20)
Hypotheses about stark differences in views of AI risk (01:51:41)
Cruxes and different worldviews (02:17:15)
Ezra’s experience as a superforecaster (02:28:57)
Forecasting as a research field (02:31:00)
Can large language models help or outperform human forecasters? (02:35:01)
Is forecasting valuable in the real world? (02:39:11)
Ezra’s book recommendations (02:45:29)
Luisa's outro (02:47:54)

Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Aug 29, 2024
"I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going.

"I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan Calvin

In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

Links to learn more, highlights, and full transcript.

They cover:
- What’s actually in SB 1047, and which AI models it would apply to.
- The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
- What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
- Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
- How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
- Why California is taking state-level action rather than waiting for federal regulation.
- How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:00:57)
The interview begins (00:02:30)
What risks from AI does SB 1047 try to address? (00:03:10)
Supporters and critics of the bill (00:11:03)
Misunderstandings about the bill (00:24:07)
Competition, open source, and liability concerns (00:30:56)
Model size thresholds (00:46:24)
How is SB 1047 different from the executive order? (00:55:36)
Objections Nathan is sympathetic to (00:58:31)
Current status of the bill (01:02:57)
How can listeners get involved in work like this? (01:05:00)
Luisa's outro (01:11:52)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Aug 26, 2024
"This is a group of animals I think people are particularly unfamiliar with. They are especially poorly covered in our science curriculum; they are especially poorly understood, because people don’t spend as much time learning about them at museums; and they’re just harder to spend time with in a lot of ways, I think, for people. So people have pets that are vertebrates that they take care of across the taxonomic groups, and people get familiar with those from going to zoos and watching their behaviours there, and watching nature documentaries and more. But I think the insects are still really underappreciated, and that means that our intuitions are probably more likely to be wrong than with those other groups." —Meghan Barrett

In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects. If you're interested in getting involved with this work, check out Meghan's recent blog post: I’m into insect welfare! What’s next?

Links to learn more, highlights, and full transcript.

They cover:
- The scale of potential insect suffering in the wild, on farms, and in labs.
- Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
- How size bias might help explain why many people assume insects can’t feel pain.
- Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
- Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
- Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
- And much more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:01:02)
The interview begins (00:03:06)
What is an insect? (00:03:22)
Size diversity (00:07:24)
How important is brain size for sentience? (00:11:27)
Offspring, parental investment, and lifespan (00:19:00)
Cognition and behaviour (00:23:23)
The scale of insect suffering (00:27:01)
Capacity to suffer (00:35:56)
The empirical evidence for whether insects can feel pain (00:47:18)
Nociceptors (01:00:02)
Integrated nociception (01:08:39)
Response to analgesia (01:16:17)
Analgesia preference (01:25:57)
Flexible self-protective behaviour (01:31:19)
Motivational tradeoffs and associative learning (01:38:45)
Results (01:43:31)
Reasons to be sceptical (01:47:18)
Meghan’s probability of sentience in insects (02:10:20)
Views of the broader entomologist community (02:18:18)
Insect farming (02:26:52)
How much to worry about insect farming (02:40:56)
Inhumane slaughter and disease in insect farms (02:44:45)
Inadequate nutrition, density, and photophobia (02:53:50)
Most humane ways to kill insects at home (03:01:33)
Challenges in researching this (03:07:53)
Most promising reforms (03:18:44)
Why Meghan is hopeful about working with the industry (03:22:17)
Careers (03:34:08)
Insect Welfare Research Society (03:37:16)
Luisa’s outro (03:47:01)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Aug 22, 2024
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

Links to learn more, highlights, video, and full transcript.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he’s found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:
- What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
- What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
- What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org.

Chapters:
Cold open (00:00:00)
Rob’s intro (00:01:00)
The interview begins (00:03:44)
Scaling laws (00:04:12)
Bottlenecks to further progress in making AIs helpful (00:08:36)
Anthropic’s responsible scaling policies (00:14:21)
Pros and cons of the RSP approach for AI safety (00:34:09)
Alternatives to RSPs (00:46:44)
Is an internal audit really the best approach? (00:51:56)
Making promises about things that are currently technically impossible (01:07:54)
Nick’s biggest reservations about the RSP approach (01:16:05)
Communicating “acceptable” risk (01:19:27)
Should Anthropic’s RSP have wider safety buffers? (01:26:13)
Other impacts on society and future work on RSPs (01:34:01)
Working at Anthropic (01:36:28)
Engineering vs research (01:41:04)
AI safety roles at Anthropic (01:48:31)
Should concerned people be willing to take capabilities roles? (01:58:20)
Recent safety work at Anthropic (02:10:05)
Anthropic culture (02:14:35)
Overrated and underrated AI applications (02:22:06)
Rob’s outro (02:26:36)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
Aug 15, 2024
"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan Birch

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

Links to learn more, highlights, and full transcript.

They cover:
- Candidates for sentience, such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
- Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.
- Chilling tales about overconfident policies that probably caused significant suffering for decades.
- How policymakers can act ethically given real uncertainty.
- Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.
- How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
- Why Jonathan is so excited about citizens’ assemblies.
- Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:01:20)
The interview begins (00:03:04)
Why does sentience matter? (00:03:31)
Inescapable uncertainty about other minds (00:05:43)
The “zone of reasonable disagreement” in sentience research (00:10:31)
Disorders of consciousness: comas and minimally conscious states (00:17:06)
Foetuses and the cautionary tale of newborn pain (00:43:23)
Neural organoids (00:55:49)
AI sentience and whole brain emulation (01:06:17)
Policymaking at the edge of sentience (01:28:09)
Citizens’ assemblies (01:31:13)
The UK’s Sentience Act (01:39:45)
Ways Jonathan has changed his mind (01:47:26)
Careers (01:54:54)
Discussing animal sentience with the Dalai Lama (01:59:08)
Luisa’s outro (02:01:04)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Aug 01, 2024
"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

Links to learn more, highlights, and full transcript.

They cover:
- Real-world examples of sophisticated security breaches, and what we can learn from them.
- Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
- The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
- The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
- New security measures that Sella hopes can mitigate the growing risks.
- Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
- And plenty more.

Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field!

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:00:56)
The interview begins (00:02:30)
The importance of securing the model weights of frontier AI models (00:03:01)
The most sophisticated and surprising security breaches (00:10:22)
AI models being leaked (00:25:52)
Researching for the RAND report (00:30:11)
Who tries to steal model weights? (00:32:21)
Malicious code and exploiting zero-days (00:42:06)
Human insiders (00:53:20)
Side-channel attacks (01:04:11)
Getting access to air-gapped networks (01:10:52)
Model extraction (01:19:47)
Reducing and hardening authorised access (01:38:52)
Confidential computing (01:48:05)
Red-teaming and security testing (01:53:42)
Careers in information security (01:59:54)
Sella’s work on flood forecasting systems (02:01:57)
Luisa’s outro (02:04:51)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Jul 26, 2024
"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:
- AI regulation disagreements being less about AI in particular, and more whether you’re typically more scared of anarchy or totalitarianism.
- Vitalik’s updated p(doom).
- Whether the social impact of blockchain and crypto has been a disappointment.
- Whether humans can merge with AI, and if that’s even desirable.
- The most valuable defensive technologies to accelerate.
- How to trustlessly identify what everyone will agree is misinformation.
- Whether AGI is offence-dominant or defence-dominant.
- Vitalik’s updated take on effective altruism.
- Plenty more.

Chapters:
Cold open (00:00:00)
Rob’s intro (00:00:56)
The interview begins (00:04:47)
Three different views on technology (00:05:46)
Vitalik’s updated probability of doom (00:09:25)
Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
Fear of totalitarianism and finding middle ground (00:22:44)
Should AI be more centralised or more decentralised? (00:42:20)
Humans merging with AIs to remain relevant (01:06:59)
Vitalik’s “d/acc” alternative (01:18:48)
Biodefence (01:24:01)
Pushback on Vitalik’s vision (01:37:09)
How much do people actually disagree? (01:42:14)
Cybersecurity (01:47:28)
Information defence (02:01:44)
Is AI more offence-dominant or defence-dominant? (02:21:00)
How Vitalik communicates among different camps (02:25:44)
Blockchain applications with social impact (02:34:37)
Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Jul 18, 2024
"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang

In today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

Links to learn more, highlights, video, and full transcript.

They cover:
- Whether the US and China are in an AI race, and the global implications if they are.
- The state of the art of AI in China.
- China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
- How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
- Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
- How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:01:02)
The interview begins (00:02:06)
Is China in an AI race with the West? (00:03:20)
How advanced is Chinese AI? (00:15:21)
Bottlenecks in Chinese AI development (00:22:30)
China and AI risks (00:27:41)
Information control and censorship (00:31:32)
AI safety research in China (00:36:31)
Could China be a source of catastrophic AI risk? (00:41:58)
AI enabling human rights abuses and undermining democracy (00:50:10)
China’s semiconductor industry (00:59:47)
China’s domestic AI governance landscape (01:29:22)
China’s international AI governance strategy (01:49:56)
Coordination (01:53:56)
Track two dialogues (02:03:04)
Misunderstandings Western actors have about Chinese approaches (02:07:34)
Complexity thinking (02:14:40)
Sihao’s pet bacteria hobby (02:20:34)
Luisa’s outro (02:22:47)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Jul 12, 2024
"Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns among almost everyone. You are talking about people who may have gone down into the secret tunnels beneath Washington, DC, escaped from the Capitol and such: people are now broiling to death; people are dying from carbon monoxide poisoning; people who followed instructions and went into their basement are dying of suffocation. Everywhere there is death, everywhere there is fire.

"That iconic mushroom stem and cap that represents a nuclear blast — when a nuclear weapon has been exploded on a city — that stem and cap is made up of people. What is left over of people and of human civilisation." —Annie Jacobsen

In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.

Links to learn more, highlights, and full transcript.

They cover:
- The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
- What happens during the window that the US president would have to decide about nuclear retaliation after hearing news of a possible nuclear attack.
- The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
- The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
- How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
- And plenty more.

Chapters:
Cold open (00:00:00)
Luisa’s intro (00:01:03)
The interview begins (00:02:28)
The first 24 minutes (00:02:59)
The Black Book and presidential advisors (00:13:35)
False alarms (00:40:43)
Russian misperception of US counterattack (00:44:50)
A narcissistic madman with a nuclear arsenal (01:00:13)
Is escalation inevitable? (01:02:53)
Firestorms and rings of annihilation (01:12:56)
Nuclear electromagnetic pulses (01:27:34)
Continuity of government (01:36:35)
Rays of hope (01:41:07)
Where we’re headed (01:43:52)
Avoiding politics (01:50:34)
Luisa’s outro (01:52:29)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Jul 05, 2024
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

Links to learn more, highlights, and full transcript.

As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" — without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest.

In the past we've usually found it easier to predict how hard technologies like planes or factories will change the world than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:
- The risk of society using AI to lock in its values.
- The difficulty of preventing coups once AI is key to the military and police.
- What international treaties we need to make this go well.
- How to make AI superhuman at forecasting the future.
- Whether AI will be able to help us with intractable philosophical questions.
- Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
- Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'
- Opportunities for listeners to contribute to making the future go well.

Chapters:
Cold open (00:00:00)
Rob’s intro (00:01:16)
The interview begins (00:03:24)
COVID-19 concrete example (00:11:18)
Sceptical arguments against the effect of AI advisors (00:24:16)
Value lock-in (00:33:59)
How democracies avoid coups (00:48:08)
Where AI could most easily help (01:00:25)
AI forecasting (01:04:30)
Application to the most challenging topics (01:24:03)
How to make it happen (01:37:50)
International negotiations and coordination and auditing (01:43:54)
Opportunities for listeners (02:00:09)
Why Carl doesn't support enforced pauses on AI research (02:03:58)
How Carl is feeling about the future (02:15:47)
Rob’s outro (02:17:37)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore