{"id":40132,"date":"2023-03-28T00:00:52","date_gmt":"2023-03-28T00:00:52","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=article&#038;p=40132"},"modified":"2024-11-29T13:01:53","modified_gmt":"2024-11-29T13:01:53","slug":"future-generations","status":"publish","type":"article","link":"https:\/\/80000hours.org\/articles\/future-generations\/","title":{"rendered":"Longtermism: a call to protect future generations"},"content":{"rendered":"<p>When the 19th-century amateur scientist Eunice Newton Foote filled glass cylinders with different gases and exposed them to sunlight, she uncovered a curious fact. Carbon dioxide became hotter than regular air and took longer to cool down.<\/p>\n<p>Remarkably, Foote saw what this momentous discovery meant.<\/p>\n<p>&#8220;An atmosphere of that gas would give our earth a high temperature,&#8221; she wrote in 1857.<\/p>\n<p>Though Foote could hardly have been aware at the time, the potential for global warming due to carbon dioxide would have massive implications for the generations that came after her.<\/p>\n<p>If we ran history over again from that moment, we might hope that this key discovery about carbon&#8217;s role in the atmosphere would inform governments&#8217; and industries&#8217; choices in the coming century. 
They probably couldn&#8217;t have avoided carbon emissions altogether, but they could have prioritised the development of alternatives to fossil fuels much sooner in the 20th century, and we might have prevented much of the destructive climate change that present people are already beginning to live through \u2014 which will affect future generations as well.<\/p>\n<p>We believe it would&#8217;ve been much better if previous generations had acted on Foote&#8217;s discovery, especially by the 1970s, when climate models were beginning to reliably show the future course of global warming.<\/p>\n<p>If this seems right, it&#8217;s because of a commonsense idea: <em>to the extent that we are able to, we have strong reasons to consider the interests and promote the welfare of future generations.<\/em><\/p>\n<p>That was true in the 1850s, it was true in the 1970s, and it&#8217;s true now.<\/p>\n<p>But despite the intuitive appeal of this moral idea, its implications have been underexplored. For instance, if we care about generations 100 years in the future, it&#8217;s not clear why we should stop there.<\/p>\n<p>And when we consider how many future generations there might be, and how much better the future could go if we make good decisions in the present, our descendants&#8217; chances to flourish take on great importance. 
In particular, we think this idea suggests that <strong>improving the prospects for <em>all<\/em> future generations is among the most morally important things we can do.<\/strong><\/p>\n<p>This article will lay out the argument for this view, which goes by the name <em>longtermism<\/em>.<\/p>\n<p>We&#8217;ll say where we think the argument is strongest and weakest, respond to common objections, and say a bit about what we think this all means for what we should do.<\/p>\n<div class=\"well visible-if-not-newsletter-subscriber margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h2 class=\"no-toc\"><small>Prefer a book?<\/small> Enter your email and we&#8217;ll mail you a book<\/h2>\n<p>Sign up to our newsletter, and we&#8217;ll mail you a free copy of <em>The Precipice<\/em> by philosopher Toby Ord. It gives an overview of the moral importance of future generations, and what we can do to help them today.<\/p>\n<form data-80k-object-id=\"\" data-80k-form-action=\"newsletter__subscribe\" action=\"\/\" method=\"post\" class=\"form-newsletter-signup form-newsletter-signup-step-1 margin-bottom-smaller\">\n<div class=\"mc-field-group input-group compact-input-group \"> <input type=\"email\" value=\"\" name=\"email\" required class=\"form-control email\" placeholder=\"Email address\" id=\"input_email\"> <span class=\"submit input-group-btn input-group-btn-right\"> <input type=\"submit\" id=\"mc-embedded-subscribe\" value=\"GET THE FREE BOOK\" class=\"btn btn-primary \" \/> <\/span> <\/div>\n<div> <input name=\"_eightyk_action\" value=\"mailchimp_add_subscriber\" type=\"hidden\"> <input name=\"redirect_path_after_step_2\" value=\"\/newsletter\/welcome\/\" type=\"hidden\"> <\/div>\n<div style=\"position: absolute; left: -5000px;\"> <input type=\"text\" name=\"b_abc12f58bbe8075560abdc5b7_43bc1ae55c\" tabindex=\"-1\" value=\"\"> <\/div>\n<\/form>\n<p class=\"smallest\">You&#8217;ll be joining over 300,000 people who receive weekly updates on our research and 
job opportunities. <a href=\"https:\/\/80000hours.org\/free-book\/#giveaway-terms\">T&#038;Cs here<\/a>. You can unsubscribe in one click.  <\/p>\n<\/div>\n<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#the-case-for-longtermism\"><span class=\"toc_number toc_depth_1\">1<\/span> The case for longtermism<\/a><ul><li><a href=\"#1-we-should-care-about-how-the-lives-of-future-individuals-go\"><span class=\"toc_number toc_depth_2\">1.1<\/span> 1. We should care about how the lives of future individuals go<\/a><\/li><li><a href=\"#2-the-number-of-future-individuals-whose-lives-matter-could-be-vast\"><span class=\"toc_number toc_depth_2\">1.2<\/span> 2. The number of future individuals whose lives matter could be vast.<\/a><\/li><li><a href=\"#opportunity\"><span class=\"toc_number toc_depth_2\">1.3<\/span> 3. We have an opportunity to affect how the long-run future goes<\/a><\/li><li><a href=\"#summing-up-the-arguments\"><span class=\"toc_number toc_depth_2\">1.4<\/span> Summing up the arguments<\/a><\/li><\/ul><\/li><li><a href=\"#objections\"><span class=\"toc_number toc_depth_1\">2<\/span> Objections to longtermism<\/a><\/li><li><a href=\"#if-i-dont-agree-with-80000-hours-about-longtermism-can-i-still-benefit-from-your-advice\"><span class=\"toc_number toc_depth_1\">3<\/span> If I don&#8217;t agree with 80,000 Hours about longtermism, can I still benefit from your advice?<\/a><\/li><li><a href=\"#what-are-the-best-ways-to-help-future-generations-right-now\"><span class=\"toc_number toc_depth_1\">4<\/span> What are the best ways to help future generations right now?<\/a><\/li><li><a href=\"#learn-more\"><span class=\"toc_number toc_depth_1\">5<\/span> Learn more<\/a><\/li><li><a href=\"#read-next\"><span class=\"toc_number toc_depth_1\">6<\/span> Read next<\/a><\/li><\/ul><\/div>\n<em>We&#8217;d like to give special thanks to Ben Todd, who wrote a <a 
href=\"https:\/\/web.archive.org\/web\/20230314130427\/https:\/\/80000hours.org\/articles\/future-generations\/\">previous version of this essay<\/a>, and Fin Moorhouse, who gave insightful comments on an early draft.<\/em><br \/>\n<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/FallPando02-modified.png\" alt=\"\" width=\"1200\" height=\"795\" class=\"alignnone size-full wp-image-81479\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/FallPando02-modified.png 1200w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/FallPando02-modified-300x199.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/FallPando02-modified-1024x678.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/FallPando02-modified-768x509.png 768w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" \/><br \/>\n<center><i><small style=\"font-size: x-small;\">By J Zapell &#8211; <a rel=\"nofollow\" class=\"external text\" href=\"https:\/\/web.archive.org\/web\/20131109023828\/http:\/\/www.fs.usda.gov\/photogallery\/fishlake\/home\/gallery\/?cid=3823&amp;position=Promo\">Public Domain<\/a>, <a href=\"https:\/\/commons.wikimedia.org\/w\/index.php?curid=27865175\">CC0<\/a><\/small><\/i><\/center><\/p>\n<h2><span id=\"the-case-for-longtermism\" class=\"toc-anchor\"><\/span>The case for longtermism<\/h2>\n<p>While most recognise that future generations matter morally to some degree, there are two other key premises in the case for longtermism that we believe are true and underappreciated. 
All together, the premises are:<\/p>\n<ol>\n<li><strong>We should care about how the lives of future individuals go.<\/strong><\/li>\n<li><strong>The number of future individuals whose lives matter could be <em>vast<\/em>.<\/strong><\/li>\n<li><strong>We have an opportunity to affect how the long-run future goes<\/strong> \u2014 whether there may be many flourishing individuals in the future, many suffering individuals, or perhaps no one at all.<\/li>\n<\/ol>\n<p>In the rest of this article, we&#8217;ll explain and defend each of these premises. Because the stakes are so high, this argument suggests that improving the prospects for all future generations should be a top moral priority of our time. If we&#8217;re able to make an exceptionally big impact, positively influencing many lives with enduring consequences, it&#8217;s incumbent upon us to take this seriously.<\/p>\n<p>This doesn&#8217;t mean it&#8217;s the <em>only<\/em> morally important thing \u2014 or that the interests of future generations matter to the total exclusion of the present generation. We disagree with both of those claims.<\/p>\n<p>There&#8217;s also a good chance this argument is flawed in some way, so much of this article discusses <a href=\"\/articles\/future-generations\/#objections\">objections<\/a> to longtermism. While on the whole we don&#8217;t find them convincing, some of them do reduce our confidence in the argument in significant ways.<\/p>\n<blockquote class=\"pullquote--right pullquote huge italics serif\"><p>\n      If we&#8217;re able to make an exceptionally big impact, positively influencing many lives with enduring consequences, it&#8217;s incumbent upon us to take this seriously.<\/p>\n<\/blockquote>\n<p>However, we think it&#8217;s clear that our society generally neglects the interests of future generations. 
Philosopher Toby Ord, an advisor to 80,000 Hours, has argued that at least by some measures, the world spends more money on ice cream each year than it does on reducing the risks to future generations.<\/p>\n<p>Since we believe the argument for longtermism is generally compelling, we should do a lot more <em>compared to the status quo<\/em> to make sure the future goes well rather than badly.<\/p>\n<p>It&#8217;s also crucial to recognise that longtermism by itself doesn&#8217;t say anything about how best to help the future <em>in practice<\/em>, and this is a nascent area of research. Longtermism is often confused with the idea that we should do more long-term planning. But we think the primary upshot is that it makes it more important to urgently address extinction risks in the present \u2014 such as <a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\">catastrophic pandemics<\/a>, an <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">AI disaster<\/a>, <a href=\"https:\/\/80000hours.org\/problem-profiles\/nuclear-security\/\">nuclear war<\/a>, or extreme <a href=\"https:\/\/80000hours.org\/problem-profiles\/climate-change\/\">climate change<\/a>. We discuss the possible implications in the final section.<\/p>\n<p>But first, why do we think the three premises above are true?<\/p>\n<h3><span id=\"1-we-should-care-about-how-the-lives-of-future-individuals-go\" class=\"toc-anchor\"><\/span>1. We should care about how the lives of future individuals go<\/h3>\n<p>Should we actually care about people who don&#8217;t exist yet?<\/p>\n<p>The discussion of climate change in the introduction is meant to draw out the common intuition that we do have reason to care about future generations. 
But sometimes, especially when considering the implications of longtermism, people doubt that future generations matter at all.<\/p>\n<p>Derek Parfit, an influential moral philosopher, offered a simple thought experiment to illustrate why it&#8217;s plausible that future people matter:<\/p>\n<blockquote><p>\n  Suppose that I leave some broken glass in the undergrowth of a wood. A hundred years later this glass wounds a child. My act harms this child. If I had safely buried the glass, this child would have walked through the wood unharmed.<\/p>\n<p>  Does it make a moral difference that the child whom I harm does not now exist?\n<\/p><\/blockquote>\n<p>We agree it would be wrong to dispose of broken glass in a way that is likely to harm someone. It&#8217;s still wrong if the harm is unlikely to occur until 5 or 10 years have passed \u2014 or in another century, to someone who isn&#8217;t born yet. And if someone else happens to be walking along the same path, they too would have good reason to pick up the glass and protect any child who might get harmed at any point in the future.<\/p>\n<p>But Parfit also saw that thinking about these issues raised surprisingly tricky philosophical questions, some of which have yet to be answered satisfactorily. One central issue is called the <a href=\"https:\/\/plato.stanford.edu\/entries\/nonidentity-problem\/\">&#8216;non-identity problem&#8217;<\/a>, which we&#8217;ll discuss in the <a href=\"\/articles\/future-generations\/#objections\">objections section<\/a> below. However, these issues can get complex and technical, and not everyone will be interested in reading through the details.<\/p>\n<p>Despite these puzzles, there are many cases similar to Parfit&#8217;s example of the broken glass in the woods in which it&#8217;s clearly right to care about the lives of future people. For instance, parents-to-be rightly make plans based around the interests of their future children even prior to conception. 
Governments are correct to plan for the coming generations not yet born. And if it is reasonably within our power to prevent a totalitarian regime from arising 100 years from now, or to avoid using up resources our descendants may depend on, then we ought to do so.<\/p>\n<p>While longtermism may seem to some like abstract, obscure philosophy, it in fact would be much more bizarre and contrary to common sense to believe we shouldn&#8217;t care about people who don&#8217;t yet exist.<br \/>\n<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1614px-Pinturas_rupestres_-_Manos-modified.png\" alt=\"\" width=\"1050\" height=\"703\" class=\"alignnone size-full wp-image-81480\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1614px-Pinturas_rupestres_-_Manos-modified.png 1050w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1614px-Pinturas_rupestres_-_Manos-modified-300x201.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1614px-Pinturas_rupestres_-_Manos-modified-1024x686.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1614px-Pinturas_rupestres_-_Manos-modified-768x514.png 768w\" sizes=\"(max-width: 1050px) 100vw, 1050px\" \/><\/p>\n<h3><span id=\"2-the-number-of-future-individuals-whose-lives-matter-could-be-vast\" class=\"toc-anchor\"><\/span>2. The number of future individuals whose lives matter could be <em>vast<\/em>.<\/h3>\n<p>Humans have been around for hundreds of thousands of years. It seems like we <em>could<\/em> persist in some form for at least a few hundred thousand more.<\/p>\n<p>There is, though, serious risk that we&#8217;ll cause ourselves to go extinct \u2014 as we&#8217;ll discuss more below. But absent that, humans have proven that they are extremely inventive and resilient. 
We survive in a wide range of circumstances, due in part to our ability to use technology to adjust our bodies and our environments as needed.<\/p>\n<p>How long can we reasonably expect the human species to survive?<\/p>\n<p>That&#8217;s harder to say. More than 99 percent of Earth&#8217;s species have gone extinct over the planet&#8217;s lifetime, often within a few million years or less.<\/p>\n<blockquote class=\"pullquote--right pullquote huge italics serif\"><p>\n      It&#8217;s possible our own inventiveness could prove to be our downfall.<\/p>\n<\/blockquote>\n<p>But if you look around, it seems clear humans <em>aren&#8217;t<\/em> the average Earth species. It&#8217;s not <a href=\"https:\/\/en.wikipedia.org\/wiki\/Speciesism\">&#8216;speciesist&#8217;<\/a> \u2014 unfairly discriminatory on the basis of species membership \u2014 to say that humans have achieved remarkable feats for an animal: conquering many diseases through invention, spreading across the globe and even into orbit, expanding our life expectancy, and splitting the atom.<\/p>\n<p>It&#8217;s possible our own inventiveness could prove to be our downfall. But if we avoid that fate, our intelligence may let us navigate the challenges that typically bring species to their ends.<\/p>\n<p>For example, we may be able to <a href=\"https:\/\/www.nasa.gov\/press-release\/nasa-s-dart-mission-hits-asteroid-in-first-ever-planetary-defense-test\">detect and deflect comets and asteroids<\/a>, which have been implicated in past mass extinction events.<\/p>\n<p>If we can forestall extinction indefinitely, we may be able to thrive on Earth for as long as it&#8217;s habitable \u2014 which could be another 500 million years, <a href=\"https:\/\/www.researchgate.net\/publication\/253162292_Boundaries_of_life_estimating_the_life_span_of_the_biosphere\">perhaps more<\/a>.<\/p>\n<p>As of now, there are about 8 billion humans alive. In total, there have been around 100 billion humans who ever lived. 
If we survive to the end of Earth&#8217;s habitable period, all those who have existed so far will have been the first raindrops in a hurricane.<\/p>\n<p><iframe src=\"https:\/\/ourworldindata.org\/grapher\/population?time=-1000..latest\" loading=\"lazy\" style=\"width: 100%; height: 600px; border: 0px none;\"><\/iframe><\/p>\n<p>If we&#8217;re just asking about what seems <em>possible<\/em> for the future population of humanity, the numbers are breathtakingly large. Assuming for simplicity that there will be 8 billion people for each century of the next 500 million years, our total population would be on the order of <em>forty quadrillion<\/em>. We think this clearly demonstrates the importance of the long-run future.<\/p>\n<p>And even that might not be the end. While it remains speculative, space settlement may point the way toward outliving our time on planet Earth. And once we&#8217;re no longer planet-bound, the potential number of people worth caring about really starts getting big.<\/p>\n<p>In <a href=\"https:\/\/80000hours.org\/what-we-owe-the-future\/\"><em>What We Owe the Future<\/em><\/a>, philosopher and 80,000 Hours co-founder Will MacAskill wrote:<\/p>\n<blockquote><p>\n  \u2026if humanity ultimately takes to the stars, the timescales become literally astronomical. The sun will keep burning for five billion years; the last conventional star formations will occur in over a trillion years; and, due to a small but steady stream of collisions between brown dwarfs, a few stars will still shine a million trillion years from now.<\/p>\n<p>  The real possibility that civilisation will last such a long time gives humanity an enormous life expectancy.\n<\/p><\/blockquote>\n<p>Some of this discussion may sound speculative and fantastical \u2014 which it is! 
But if you consider how fantastical our lives and world would seem to humans 100,000 years ago, you should expect that the far future could seem at least as alien to us now.<\/p>\n<p>And it&#8217;s important not to get bogged down in the exact numbers. What matters is that there&#8217;s a reasonable possibility that the future is very long, and it could contain a much greater number of individuals. So how it goes could matter enormously.<\/p>\n<p>There&#8217;s another factor that expands the scope of our moral concern for the future even further. Should we care about individuals who aren&#8217;t even human?<\/p>\n<p>It seems true to us that the lives of non-human animals in the present day matter morally \u2014 which is why <a href=\"https:\/\/80000hours.org\/problem-profiles\/factory-farming\/\">factory farming<\/a>, in which billions of farmed animals suffer every day, is such a moral disaster. The suffering and wellbeing of future non-human animals matters no less.<\/p>\n<p>And if the far-future descendants of humanity evolve into a different species, we should probably care about their wellbeing as well. We think we should even potentially care about possible digital beings in the future, as long as they meet the criteria for <a href=\"https:\/\/www.openphilanthropy.org\/research\/2017-report-on-consciousness-and-moral-patienthood\/\">moral patienthood<\/a> \u2014 such as, for example, being able to feel pleasure and pain.<\/p>\n<p>We&#8217;re highly uncertain about what kinds of beings will inhabit the future, but we think humanity and its descendants have the potential to play a huge role. And we want to have a wide scope of moral concern to encompass all those for whom life can go well or badly.<\/p>\n<p>When we think about the possible scale of the future ahead of us, we feel humbled. 
But we also believe these possibilities present a gigantic opportunity to have a positive impact for those of us who have appeared so early in this story.<\/p>\n<p>The immense stakes involved strongly suggest that, <em>if there&#8217;s something we can do to have a significant and predictably positive impact on the future<\/em>, we have good reason to try.<br \/>\n<br \/>\n<center><i><small style=\"font-size: x-small;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Coal_power_plant_Datteln_2-modified-modified.png\" alt=\"\" width=\"1000\" height=\"884\" class=\"alignnone size-full wp-image-81486\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Coal_power_plant_Datteln_2-modified-modified.png 1000w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Coal_power_plant_Datteln_2-modified-modified-300x265.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Coal_power_plant_Datteln_2-modified-modified-768x679.png 768w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" \/> <a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Coal_power_plant_Datteln_2.jpg\">Arnold Paul<\/a>, <a href=\"https:\/\/creativecommons.org\/licenses\/by-sa\/2.5\">CC BY-SA 2.5<\/a> (cropped)<\/small><\/i><\/center><\/p>\n<h3><span id=\"opportunity\" class=\"toc-anchor\"><\/span>3. 
We have an opportunity to affect how the long-run future goes<\/h3>\n<p>When Foote discovered the mechanism of climate change, she couldn&#8217;t have foreseen how the future demand for fossil fuels would trigger a consequential global rise in temperatures.<\/p>\n<p>So even if we have good reason to care about how the future unfolds, and we acknowledge that the future could contain immense numbers of individuals whose lives matter morally, we might still wonder: can anyone actually do anything to improve the prospects of the coming generations?<\/p>\n<blockquote class=\"pullquote--right pullquote huge italics serif\"><p>\n      It&#8217;d be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.<\/p>\n<\/blockquote>\n<p>Many things we do affect the future in <em>some<\/em> way. If you have a child or contribute to compounding economic growth, the effects of these actions ripple out over time, and to some extent, change the course of history. But these effects are very hard to assess. The question is whether we can <em>predictably<\/em> have a positive impact over the long term.<\/p>\n<p>We think we can. For example, we believe that it&#8217;d be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.<\/p>\n<p>We&#8217;re never going to be totally sure our decisions are for the best \u2014 but often we have to make decisions under uncertainty, whether we&#8217;re thinking about the long-term future or not. 
And we think there are reasons to be optimistic about our ability to make a positive difference.<\/p>\n<p>The following subsections discuss four primary approaches to improving the long-run future:<\/p>\n<ul>\n<li><a href=\"\/articles\/future-generations\/#extinction\">Reducing extinction risk<\/a><\/li>\n<li><a href=\"\/articles\/future-generations\/#trajectory\">Positive trajectory changes<\/a><\/li>\n<li><a href=\"\/articles\/future-generations\/#research\">Longtermist research<\/a><\/li>\n<li><a href=\"\/articles\/future-generations\/#capacity\">Capacity building<\/a><\/li>\n<\/ul>\n<h4 id=\"extinction\">Reducing extinction risk<\/h4>\n<p>One plausible tactic for improving the prospects of future generations is to increase the chance that they get to exist at all.<\/p>\n<p>Of course, if there were a nuclear war or an asteroid strike that ended civilisation, most people would agree it would be an unparalleled calamity.<\/p>\n<p>Longtermism suggests, though, that the stakes involved could be <em>even higher<\/em> than they first seem. Sudden human extinction wouldn&#8217;t just end the lives of the billions currently alive \u2014 it would cut off the entire potential of our species. As the previous section discussed, this would represent an enormous loss.<\/p>\n<p>And it seems plausible that at least some people can meaningfully reduce the risks of extinction. We can, for example, create safeguards to reduce the risk of accidental launches of nuclear weapons, which might trigger a cataclysmic escalatory cycle that brings on nuclear winter. And NASA has been testing technology to potentially <a href=\"https:\/\/www.jpl.nasa.gov\/edu\/news\/2022\/9\/22\/the-science-behind-nasas-first-attempt-at-redirecting-an-asteroid\/\">deflect large near-Earth objects<\/a> on dangerous trajectories. 
Our efforts to detect asteroids that could pose an extinction threat have arguably already proven extremely cost-effective.<br \/>\n<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1077px-Barringer_Crater_aerial_photo_by_USGS-modified.png\" alt=\"\" width=\"1077\" height=\"720\" class=\"alignnone size-full wp-image-81481\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1077px-Barringer_Crater_aerial_photo_by_USGS-modified.png 1077w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1077px-Barringer_Crater_aerial_photo_by_USGS-modified-300x201.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1077px-Barringer_Crater_aerial_photo_by_USGS-modified-1024x685.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1077px-Barringer_Crater_aerial_photo_by_USGS-modified-768x513.png 768w\" sizes=\"(max-width: 1077px) 100vw, 1077px\" \/><br \/>\n<br \/>\nSo if it&#8217;s true that reducing the risk of extinction is possible, then people today <em>can<\/em> plausibly have a far-reaching impact on the long-run future. At 80,000 Hours, our current understanding is that the biggest risks of extinction we face come from <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">advanced artificial intelligence<\/a>, <a href=\"https:\/\/80000hours.org\/problem-profiles\/nuclear-security\/\">nuclear war<\/a>, and <a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\">engineered pandemics<\/a>. 
<a id=\"reducing-risk\" class=\"link-anchor\"><\/a><\/p>\n<p>And there are real things we can do to reduce these risks, such as:<\/p>\n<ul>\n<li>Developing broad-spectrum vaccines that protect against a wide range of pandemic pathogens<\/li>\n<li>Enacting policies that restrict dangerous practices in biomedical research<\/li>\n<li>Inventing more effective personal protective equipment<\/li>\n<li>Increasing our knowledge of the internal workings of AI systems, to better understand when and if they could pose a threat<\/li>\n<li>Developing technical innovations to ensure that AI systems behave how we want them to<\/li>\n<li>Increasing oversight of private development of AI technology<\/li>\n<li>Facilitating cooperation between powerful nations to reduce threats from nuclear war, AI, and pandemics<\/li>\n<\/ul>\n<p>We will never know with certainty how effective any given approach has been in reducing the risk of extinction, since you can&#8217;t run a randomised controlled trial with the end of the world. But the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Expected_value\">expected value<\/a> of these interventions can still be quite high, even with significant uncertainty.<\/p>\n<p>One response to the importance of reducing extinction risk is to note that it&#8217;s only positive if the future is more likely to be good than bad on balance. That brings us on to the next way to help improve the prospects of future generations.<\/p>\n<h4 id=\"trajectory\">Positive trajectory changes<\/h4>\n<p>Preventing humanity&#8217;s extinction is perhaps the clearest way to have a long-term impact, but other possibilities may be available. If we&#8217;re able to take actions that influence whether our future is full of value or is comparatively bad, we would have the opportunity to make an extremely big difference from a longtermist perspective. We can call these <em>trajectory changes<\/em>.<\/p>\n<p>Climate change, for example, could potentially cause a devastating trajectory shift. 
Even if we believe it <a href=\"https:\/\/80000hours.org\/problem-profiles\/climate-change\/\">probably won&#8217;t lead to humanity&#8217;s extinction<\/a>, extreme climate change could radically reshape civilisation for the worse, possibly curtailing our viable opportunities to thrive over the long term.<\/p>\n<p>Some potential trajectories could be worse still. For example, humanity might get stuck with a value system that undermines general wellbeing and leads to vast amounts of unnecessary suffering.<\/p>\n<p>How could this happen? One way this kind of value &#8216;lock-in&#8217; could occur is if a <a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-of-stable-totalitarianism\/\">totalitarian regime<\/a> establishes itself as a world government and uses advanced technology to sustain its rule indefinitely. If such a thing is possible, it could snuff out opposition and re-orient society away from what we have most reason to value.<\/p>\n<p>We might also end up stagnating morally such that, for instance, the horrors of poverty or <a href=\"https:\/\/80000hours.org\/problem-profiles\/factory-farming\/\">mass factory farming<\/a> are never mitigated and are indeed replicated on even larger scales.<\/p>\n<p>It&#8217;s hard to say exactly what could be done now to reduce the risks of these terrible outcomes. We&#8217;re generally less confident in efforts to influence trajectory changes compared to preventing extinction. But if such work is feasible, it would be <em>extremely<\/em> important.<\/p>\n<p>Trying to <a href=\"https:\/\/80000hours.org\/problem-profiles\/liberal-democracy\/\">strengthen liberal democracy<\/a> and <a href=\"https:\/\/80000hours.org\/problem-profiles\/promoting-positive-values\/\">promote positive values<\/a>, such as by advocating on behalf of farm animals, could be valuable to this end. 
But many questions remain open about what kinds of interventions would be most likely to have an enduring impact on these issues over the long run.<\/p>\n<p>Grappling with these issues and ensuring we have the wisdom to handle them appropriately will take a lot of work, and starting this work now could be extremely valuable.<br \/>\n<br \/>\n<center><i><small style=\"font-size: x-small;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1024px-Olba_ancient_city_Roman_aqueduct_ruins_2-modified.png\" alt=\"\" width=\"1024\" height=\"641\" class=\"alignnone size-full wp-image-81482\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1024px-Olba_ancient_city_Roman_aqueduct_ruins_2-modified.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1024px-Olba_ancient_city_Roman_aqueduct_ruins_2-modified-300x188.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1024px-Olba_ancient_city_Roman_aqueduct_ruins_2-modified-768x481.png 768w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/1024px-Olba_ancient_city_Roman_aqueduct_ruins_2-modified-360x224.png 360w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Olba_ancient_city_Roman_aqueduct_ruins_2.JPG\">Cobija<\/a>, <a href=\"https:\/\/creativecommons.org\/licenses\/by-sa\/3.0\">CC BY-SA 3.0<\/a>, via Wikimedia Commons<\/small><\/i><\/center><\/p>\n<h4 id=\"research\">Longtermist research<\/h4>\n<p>This brings us to the third approach to longtermist work: further research.<\/p>\n<p>Asking these types of questions in a systematic way is a relatively recent phenomenon. So we&#8217;re confident that we&#8217;re pretty seriously wrong about at least some parts of our understanding of these issues. 
There are probably several suggestions in this article that are completely wrong \u2014 the trouble is figuring out which.<\/p>\n<p>So we believe much more research into whether the arguments for longtermism are sound, as well as potential avenues for having an impact on future generations, is called for. This is one reason why we include <a href=\"https:\/\/80000hours.org\/problem-profiles\/global-priorities-research\/\">&#8216;global priorities research&#8217;<\/a> among <a href=\"https:\/\/80000hours.org\/problem-profiles\/#most-pressing-world-problems\">the most pressing problems<\/a> for people to work on.<\/p>\n<h4 id=\"capacity\">Capacity building<\/h4>\n<p>The fourth category of longtermist approaches is capacity building \u2014 that is, investing in resources that may be valuable to put toward longtermist interventions down the line.<\/p>\n<p>In practice, this can take a range of forms. At 80,000 Hours, we&#8217;ve played a part in building the <a href=\"https:\/\/80000hours.org\/problem-profiles\/promoting-effective-altruism\/\">effective altruism community<\/a>, which is generally aimed at finding and understanding the world&#8217;s most pressing problems and how to solve them. 
Longtermism is in part an offshoot of effective altruism, and having this kind of community may be an important resource for addressing the kinds of challenges longtermism raises.<\/p>\n<p>There are also more straightforward ways to build resources, such as <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/phil-trammell-patient-philanthropy\/\">investing funds now<\/a> so they can grow over time, potentially to be spent at a more pivotal time when they&#8217;re most needed.<\/p>\n<p>You can also invest in capacity building by supporting institutions, such as government agencies or international bodies, that have the mission of stewarding efforts to improve the prospects of the long-term future.<\/p>\n<h3><span id=\"summing-up-the-arguments\" class=\"toc-anchor\"><\/span>Summing up the arguments<\/h3>\n<p>To sum up: there&#8217;s a lot on the line.<\/p>\n<p>The number and size of future generations could be vast. We have reason to care about them all.<\/p>\n<blockquote class=\"pullquote--right pullquote huge italics serif\"><p>\n      Those who come after us will have to live with the choices we make now. If they look back, we hope they&#8217;ll think we did right by them.<\/p>\n<\/blockquote>\n<p>But the course of the future is uncertain. Humanity&#8217;s choices now can shape how events unfold. Our choices today could lead to a prosperous future for our descendants, or the end of intelligent life on Earth \u2014 or perhaps the rise of an enduring, oppressive regime.<\/p>\n<p>We feel we can&#8217;t just turn away from these possibilities. 
Because so few of humanity&#8217;s resources have been devoted to making the future go well, those of us who have the means should figure out whether and how we can improve the chances of the best outcomes and decrease the chances of the worst.<\/p>\n<p>We can&#8217;t \u2014 and don&#8217;t want to \u2014 set our descendants down a predetermined path that we choose for them now; we want to do what we can to ensure they have the chance to make a better world for themselves.<\/p>\n<p>Those who come after us will have to live with the choices we make now. If they look back, we hope they&#8217;ll think we did right by them.<br \/>\n<br \/>\n<center><i><small style=\"font-size: x-small;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/STScI-01GGWD12YEES5K5163RJFYQT20-modified.png\" alt=\"\" width=\"1100\" height=\"599\" class=\"alignnone size-full wp-image-81508\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/STScI-01GGWD12YEES5K5163RJFYQT20-modified.png 1100w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/STScI-01GGWD12YEES5K5163RJFYQT20-modified-300x163.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/STScI-01GGWD12YEES5K5163RJFYQT20-modified-1024x558.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/STScI-01GGWD12YEES5K5163RJFYQT20-modified-768x418.png 768w\" sizes=\"(max-width: 1100px) 100vw, 1100px\" \/>A protostar is embedded within a cloud of material feeding its growth. Credit: NASA, ESA, CSA, STScI<\/small><\/i><\/center><\/p>\n<h2><span id=\"objections\" class=\"toc-anchor\"><\/span>Objections to longtermism<\/h2>\n<p>In what follows, we&#8217;ll discuss a series of common objections that people make to the argument for longtermism.<\/p>\n<p>Some of them point to important philosophical considerations that are complex but that nonetheless seem to have solid responses. 
Others raise important reasons to doubt longtermism that we take seriously and that we think are worth investigating further. And still others are misunderstandings or misrepresentations of longtermism that we think should be corrected. (Note: though long, this list doesn&#8217;t cover all objections!)<\/p>\n<div class=\"panel-group\" id=\"custom-collapse-0\">\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"prioritisation\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-0\">Does longtermism mean we should focus on helping future people rather than people who need help today?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-0\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Does longtermism mean we should focus on helping future people rather than people who need help today?\">\n<div class=\"panel-body\">\n<p>Making moral decisions always involves tradeoffs. We have limited resources, so spending on one issue means we have less to spend on another. And there are many deserving causes we could devote our efforts to. If we focus on helping future generations, we will necessarily give lower priority to many of the urgent needs in the present.<\/p>\n<p>But we don&#8217;t think this is as troubling an objection to longtermism as it may initially sound, for at least three reasons:<\/p>\n<p><strong>1. Most importantly, many longtermist priorities, especially reducing extinction risk, are also incredibly important for people alive today.<\/strong> For example, we believe <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">preventing an AI-related catastrophe<\/a> and <a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\">preventing a cataclysmic pandemic<\/a> are two of the top priorities, in large part because of their implications for future generations. 
But these risks could materialise in the coming decades, so if our efforts succeed <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/carl-shulman-common-sense-case-existential-risks\/\">most people alive today would benefit<\/a>. Some argue that preventing global catastrophes could actually be the single most effective way to save the lives of people in the present.<\/p>\n<p><strong>2. If we all took moral impartiality more seriously, there would be a lot more resources going to help the worst-off today \u2014 not just the far future.<\/strong> Impartiality is the idea that we should care about the interests of individuals equally, regardless of their nationality, gender, race, or other characteristics that are morally irrelevant. This impartiality is part of what motivates longtermism \u2014 we think the interests of future individuals are often unjustifiably undervalued.<\/p>\n<p>We think if impartiality were taken more seriously in general, we&#8217;d live in a much better world that would commit many more resources than it currently does toward alleviating all kinds of suffering, including for the present generation. For example, we&#8217;d love to see more resources go toward fighting diseases, improving mental health, reducing poverty, and protecting the interests of animals.<\/p>\n<p><strong>3. Advocating for any moral priority means time and resources are not going to another cause that may also be quite worthy of attention.<\/strong> Advocates for farmed animals&#8217; or prisoners&#8217; rights are in effect deprioritising the interests of alternative potential beneficiaries, such as the global poor. 
So this is not just an objection to longtermism \u2014 it&#8217;s an objection to any kind of prioritisation.<\/p>\n<p>Ultimately, this objection hinges on the question of whether future generations are really worth caring about \u2014 which is what the rest of this article is about.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-1\">Should we systematically discount future value?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-1\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Should we systematically discount future value?\">\n<div class=\"panel-body\">\n<p>Some people, especially those trained in economics, claim that we shouldn&#8217;t treat individual lives in the future equally to lives today. Instead, they argue, we should systematically <em>discount the value<\/em> of future lives and generations by a fixed percentage.<\/p>\n<p>(We&#8217;re not talking here about discounting the future due to uncertainty, which we cover <a href=\"https:\/\/80000hours.org\/articles\/future-generations\/#uncertainty\">below<\/a>.)<\/p>\n<p>When economists compare benefits in the future to benefits in the present, they typically reduce the value of the future benefits by some amount called the &#8220;discount factor.&#8221; A typical rate might be 1% per year, which means that benefits in 100 years are only worth 36% as much as benefits today, and benefits in 1,000 years are worth almost nothing.<\/p>\n<p>This may seem like an appealing way to preserve the basic intuition we began with \u2014 that we have strong reasons to care about the wellbeing of future generations \u2014 while avoiding the more counterintuitive longtermist claims that arise from considering the potentially astronomical amounts of value that our universe might one day hold. 
On this view, we would care about future generations, but not as much as the present generation, and mostly only the generations that will come soon after us.<\/p>\n<p>We agree there are good reasons to discount <em>economic<\/em> benefits. One reason is that if you receive money now, you can invest it and earn a return each year. This means it&#8217;s better to receive money now rather than later. People in the future might also be wealthier, which means that money is less valuable to them.<\/p>\n<p>However, these reasons don&#8217;t seem to apply to welfare \u2014 people having good lives. You can&#8217;t directly &#8216;invest&#8217; welfare today and get more welfare later, like you can with money. The same seems true for other intrinsic values, such as justice. And longtermism is about reasons to care about the <em>interests<\/em> of future generations, rather than wealth.<\/p>\n<p>As far as we know, most philosophers who have worked on the issue don&#8217;t think we should discount the <em>intrinsic value<\/em> of future lives \u2014 even while they strongly disagree about other questions in population ethics. It&#8217;s a simple principle that is easy to accept: one person&#8217;s happiness is worth just the same amount no matter when it occurs.<\/p>\n<p>Indeed, if you suppose we can discount lives in the far future, we can easily end up with conclusions that sound absurd. For instance, a 3% discount rate would imply that the suffering of one person today is morally equal to the suffering of nearly 7 trillion people in 1,000 years. This seems like a truly horrific conclusion to accept.<\/p>\n<p>And any discount rate will mean that, if we found some reliable way to save 1 million lives from intense suffering in either 1,000 years or 10,000 years, it would be <em>astronomically more important<\/em> to choose the sooner option. 
This, too, seems very hard to accept.<\/p>\n<p>If we reject the discounting of the value of future lives, then the many potential generations that could come after us are still worthy of moral concern. And this doesn&#8217;t stand in tension with the economic practice of discounting monetary benefits.<\/p>\n<p>If you&#8217;d like to see a more technical discussion of these issues, see <a href=\"http:\/\/users.ox.ac.uk\/~mert2255\/papers\/discounting.pdf\"><em>Discounting for Climate Change<\/em><\/a> by Hilary Greaves. There is a more accessible discussion at 1h00m50s in <a href=\"https:\/\/80000hours.org\/articles\/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it\/\">our podcast with Toby Ord<\/a> and in Chapter 4 of <a href=\"https:\/\/web.archive.org\/web\/20170808184525\/medium.com\/stubborn-attachments\/stubborn-attachments-full-text-8fc946b694d\"><em>Stubborn Attachments<\/em><\/a> by Tyler Cowen.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"uncertainty\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-2\">How does uncertainty about the future factor in to longtermism?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-2\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"How does uncertainty about the future factor in to longtermism?\">\n<div class=\"panel-body\">\n<p>There are some practical, rather than intrinsic, reasons to discount the value of the future. In particular, our uncertainty about how the future will unfold makes it much harder to influence than the present, and even the effects of relatively near-term actions can be exceedingly difficult to forecast.<\/p>\n<p>And because of the possibility of extinction, we can&#8217;t even be confident that the future lives we think are so potentially valuable will come into existence. 
As we&#8217;ve argued, that gives us reason to reduce extinction risks when it&#8217;s feasible \u2014 but it also gives us reason to be less confident these lives will exist and thus to weight them somewhat less in our deliberations.<\/p>\n<p>In the same way, a doctor performing triage may choose to prioritise caring for a patient who has a good chance of surviving their injuries over one whose likelihood of survival is much less clear regardless of the medical care they receive.<\/p>\n<p>This uncertainty \u2014 along with the extreme level of difficulty in trying to predict the long-term impacts of our actions \u2014 certainly makes it much harder to help future generations, all else equal. And in effect, this point lowers the value of working to benefit future generations.<\/p>\n<p>So even if we can affect how things unfold for future generations, we&#8217;re generally going to be very far from certain that we are actually making things better. And arguably, the further away in time the outcomes of our actions are, the less sure we can be that they will come about. Trying to improve the future will never be straightforward.<\/p>\n<p>Still, even given the difficulty and uncertainty, we think the potential value at stake for the future means that many uncertain projects are still well worth the effort.<\/p>\n<p>You might disagree with this conclusion if you believe that human extinction is <em>so likely<\/em> and practically unavoidable in the future that the chance that our descendants will still be around rapidly declines as we look a few centuries down the line. 
We don&#8217;t think it&#8217;s <em>that<\/em> likely \u2014 though we are worried about it.<\/p>\n<p>Journalist Kelsey Piper critiqued MacAskill&#8217;s argument for longtermist interventions focused on positive trajectory changes (as opposed to extinction risks) in <em><a href=\"https:\/\/asteriskmag.com\/issues\/01\/review-what-we-owe-the-future\">Asterisk<\/a><\/em>, writing:<\/p>\n<blockquote>\n<p>\n  What share of people who tried to affect the long-term future succeeded, and what share failed? How many others successfully founded institutions that outlived them\u2009\u2014\u2009but which developed values that had little to do with their own?<br \/>\n  \u2026<br \/>\n  Most well-intentioned, well-conceived plans falter on contact with reality. Every simple problem splinters, on closer examination, into dozens of sub-problems with their own complexities. It has taken exhaustive trial and error and volumes of empirical research to establish even the most basic things about what works and what doesn&#8217;t to improve peoples&#8217; lives.\n<\/p>\n<\/blockquote>\n<p>Piper does still endorse working on extinction reduction, which she thinks is a more tractable course of action. Her doubts about the possibility of reliably anticipating our impact on the trajectory of the future, outside of extinction scenarios, are worth taking very seriously.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"cluelessness\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-3\">Aren't we just totally clueless about our effects on the future?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-3\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Aren't we just totally clueless about our effects on the future?\">\n<div class=\"panel-body\">\n<p>You might have a worry about longtermism that goes deeper than just uncertainty. 
We act under conditions of uncertainty all the time, and we find ways to manage it.<\/p>\n<p>There is a deeper problem known as <em><a href=\"https:\/\/80000hours.org\/podcast\/episodes\/hilary-greaves-global-priorities-institute\/\">cluelessness<\/a><\/em>. While <em>uncertainty<\/em> is about having incomplete knowledge, <em>cluelessness<\/em> refers to the state of having essentially no basis of knowledge at all.<\/p>\n<p>Some people believe we&#8217;re essentially clueless about the long-term effects of our actions. This is because virtually every action we take may have extremely far-reaching, unpredictable consequences. In time travel stories, this is sometimes referred to as the &#8220;butterfly effect&#8221; \u2014 because something as small as a butterfly flapping its wings might influence air currents just enough to cause a monsoon on the other side of the world (at least for illustrative purposes).<\/p>\n<p>If you think your decision of whether to go to the grocery store on Thursday or Friday might determine whether the next Gandhi or Stalin is born, you might conclude that actively trying to make the future go well is a hopeless task.<\/p>\n<p>Like some other important issues discussed here, cluelessness remains an active area of philosophical debate, so we don&#8217;t think there&#8217;s necessarily a decisive answer to these worries. But there is a plausible argument, advanced by philosopher and advisor to 80,000 Hours <a href=\"https:\/\/users.ox.ac.uk\/~mert2255\/\">Hilary Greaves<\/a>, that longtermism is, in fact, <a href=\"https:\/\/forum.effectivealtruism.org\/posts\/LdZcit8zX89rofZf3\/evidence-cluelessness-and-the-long-term-hilary-greaves#Response_five___Go_longtermist_\">the <em>best response<\/em> to the issue of cluelessness<\/a>.<\/p>\n<p>This is because cluelessness hangs over the impact of <em>all<\/em> of our actions. 
Work trying to improve the lives of current generations, such as direct cash transfers, may predictably benefit a family in the foreseeable future. But the long-term consequences of the transfer are a complete mystery.<\/p>\n<p>Successful longtermist interventions, though, may not have this quality \u2014 particularly interventions to prevent human extinction. If we, say, divert an asteroid that would otherwise have caused the extinction of humanity, we are <em>not<\/em> clueless about the long-term consequences. Humanity will at least have the chance to continue existing into the far future, which it wouldn&#8217;t have otherwise had.<\/p>\n<p>There&#8217;s still <em>uncertainty<\/em>, of course, in preventing extinction. The long-term consequences of such an action aren&#8217;t fully knowable. But we&#8217;re not clueless about them either.<\/p>\n<p>If it&#8217;s correct that the problem of cluelessness bites harder for some near-term interventions than longtermist ones, and perhaps least of all for preventing extinction, then this apparent objection doesn&#8217;t actually count against longtermism.<\/p>\n<p>For an alternative perspective, though, check out <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/alexander-berger-improving-global-health-wellbeing-clear-direct-ways\/\"><em>The 80,000 Hours Podcast<\/em> interview with Alexander Berger<\/a>.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"non-identity-problem\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-4\">What if my actions change the identities of individuals who are born in the future? (The non-identity problem)<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-4\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"What if my actions change the identities of individuals who are born in the future? 
(The non-identity problem)\">\n<div class=\"panel-body\">\n<p>Because of the nature of human reproduction, the identity of who gets to be born is highly contingent. Any individual is the result of the combination of one sperm and one egg, and a different combination of sperm and egg would&#8217;ve created a different person. Delaying the act of conception at all \u2014 for example, by getting stuck at a red light on your way home \u2014 can easily result in a different sperm fertilising the egg, which means another person with a different combination of genes will be born.<\/p>\n<p>This means \u2014 somewhat surprisingly \u2014 that pretty much all our actions have the potential to impact the future by changing which individuals get born in the future.<\/p>\n<p>If you care about affecting the future in a positive way, this creates a perplexing problem. Many actions undertaken to improve the future, such as trying to reduce the harmful effects of climate change or developing a new technology to improve people&#8217;s lives, may deliver the vast majority of their benefits to people who wouldn&#8217;t have existed had the course of action never been taken.<\/p>\n<p>So while it seems obviously good to improve the world in this way, it may be impossible to ever point to specific people in the future and say they were made better off by these actions. 
You can make the future better overall, but you may not make it better <em>for<\/em> anyone in particular.<\/p>\n<p>Of course, the reverse is also true: you may take some action that makes the future much worse, but all the people who experience the consequences of your actions may never have existed had you chosen a different course of action.<\/p>\n<p><strong>This is known as the &#8216;non-identity problem.&#8217;<\/strong> Even when you can make the far future better with a particular course of action, you will almost certainly never make any particular individuals in the far future better off than they otherwise would be.<\/p>\n<p>Should this problem cause us to abandon longtermism? We don&#8217;t think so.<\/p>\n<p>While the issue is perplexing, accepting it as a refutation of longtermism would prove too much. It would, for example, undermine much of the very plausible case that policymakers should in the past have taken significant steps to limit the effects of climate change (since those policy changes can be expected to, in the long run, lead to different people being born).<\/p>\n<p>Or consider a hypothetical case of a society that is deciding what to do with its nuclear waste. Suppose there are two ways of storing it: one way is cheap, but it means that in 200 years&#8217; time, the waste will overheat and expose 10,000,000 people to sickening radiation that dramatically shortens their lives. The other storage method guarantees it will never hurt anyone, but it is significantly more expensive, and it means currently living people will have to pay marginally higher taxes.<\/p>\n<p>Assuming this tax policy alters behaviour just enough to start changing the identities of the children being born, it&#8217;s entirely plausible that, in 200 years&#8217; time, no one would exist who would&#8217;ve existed if the cheap, dangerous policy had been implemented. 
This means that none of the 10,000,000 people who have their lives cut short can say they would have been better off had their ancestors chosen the safer storage method.<\/p>\n<p>Still, it seems intuitively and philosophically unacceptable to believe that a society wouldn&#8217;t have very strong reasons to adopt the safe policy over the cheap, dangerous policy. If you agree with this conclusion, then you agree that the non-identity problem does not mean we should abandon longtermism. (You may still object to longtermism on other grounds!)<\/p>\n<p>Nevertheless, this puzzle raises pressing philosophical questions that continue to generate debate, and we think better understanding these issues is an important project.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-5\">But should I care that future generations come to exist in the first place, rather than not?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-5\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"But should I care that future generations come to exist in the first place, rather than not?\">\n<div class=\"panel-body\">\n<p>We said that we thought it would be very bad if humanity was extinguished, in part because future individuals who might have otherwise been able to live full and flourishing lives wouldn&#8217;t ever get the chance.<\/p>\n<p>But this raises some issues related to the &#8216;non-identity problem.&#8217; Should we actually care whether future generations come into existence, rather than not?<\/p>\n<p>Some people argue that perhaps we don&#8217;t actually have moral reasons to do things that affect <em>whether<\/em> individuals exist \u2014 in which case ensuring that future generations get to exist, or increasing the chance that humanity&#8217;s future is long and expansive, would be morally neutral in 
itself.<\/p>\n<p>This issue is very tricky from a philosophical perspective; indeed, a minor subfield of moral philosophy called <a href=\"https:\/\/plato.stanford.edu\/entries\/repugnant-conclusion\/\">population ethics<\/a> sets out to answer this and related questions.<\/p>\n<p>So we can&#8217;t expect to fully address the question here. But we can give a sense of why we think working to ensure humanity survives and that the future is filled with flourishing lives is a high moral priority.<\/p>\n<p>Consider first a scenario in which you, while travelling the galaxy in a spaceship, come across a planet filled with an intelligent species leading happy, moral, fulfilled lives. They haven&#8217;t achieved spaceflight, and may never do so, but they appear likely to have a long future ahead of them on their planet.<\/p>\n<p>Would it not seem like a major tragedy if, say, an asteroid were on course to destroy their civilisation? Of course, any plausible moral view would advise saving the species for their own sakes. But it also seems like it&#8217;s an unalloyed good that, if you divert the asteroid, this species will be able to continue on for many future generations, flourishing in their corner of the universe.<\/p>\n<p>If we have that view about that hypothetical alien world, we should probably have the same view of our own planet. Humans, of course, aren&#8217;t necessarily that happy, moral, and fulfilled in their lives. But the vast majority of us want to keep living \u2014 and it seems at least possible that our descendants could have lives many times more flourishing than we have. They might even ensure that all other sentient beings have joyous lives well worth living. 
This seems to give us strong reasons to make this potential a reality.<\/p>\n<p>For a different kind of argument along these lines, you can read Joe Carlsmith&#8217;s <a href=\"https:\/\/joecarlsmith.com\/2021\/03\/14\/against-neutrality-about-creating-happy-lives\">&#8220;Against neutrality about creating happy lives.&#8221;<\/a><\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"person-affecting\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-6\">Do 'person-affecting views' undermine the case for longtermism?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-6\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Do 'person-affecting views' undermine the case for longtermism?\">\n<div class=\"panel-body\">\n<p>Some people advocate a &#8216;person-affecting&#8217; view of ethics. This view is sometimes summed up with the quip: &#8220;ethics is about helping make people happy, not making happy people.&#8221;<\/p>\n<p>In practice, this means we only have moral obligations to help those who are already alive \u2014 not to enable more people to exist with good lives. For people who hold such views, it may be permissible to create a happy person, but doing so is morally neutral.<\/p>\n<p>This view has some plausibility, and we don&#8217;t think it can be totally ignored. However, philosophers have uncovered a number of problems with it.<\/p>\n<p>Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life.<\/p>\n<p>But if creating a happy life is neither good nor bad, then we have to conclude that both options are neither good nor bad. 
This implies the options are equal, and you have no reason to do one or the other, which seems bizarre.<\/p>\n<p>And if we accepted a person-affecting view, it might be hard to make sense of many of our common moral beliefs around issues like climate change. For example, it would imply that policymakers in the 20th century might have had little reason to mitigate the impact of CO<sub>2<\/sub> emissions on the atmosphere if the negative effects would only affect people who would be born several decades in the future. (This issue is discussed more <a href=\"https:\/\/80000hours.org\/articles\/future-generations\/#non-identity-problem\">above<\/a>.)<\/p>\n<p>This is a complex debate, and rejecting the person-affecting view also has counterintuitive conclusions. In particular, Parfit showed that if you agree that it&#8217;s good to create people whose lives are more good than bad, there is a strong argument for the conclusion that we could have a better world filled with a huge number of people whose lives are just barely worth living. He called this the <a href=\"https:\/\/plato.stanford.edu\/entries\/repugnant-conclusion\/\">&#8220;repugnant conclusion&#8221;<\/a>.<\/p>\n<p>Both sides make important points in this debate. You can see a summary of the arguments in this <a href=\"https:\/\/www.youtube.com\/watch?v=0cHT4yWUEaA\">public lecture by Hilary Greaves<\/a> (based on <a href=\"http:\/\/users.ox.ac.uk\/~mert2255\/papers\/population_axiology.pdf\">this paper<\/a>). It&#8217;s also discussed in <a href=\"https:\/\/80000hours.org\/articles\/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it\">our podcast with Toby Ord<\/a>.<\/p>\n<p>We&#8217;re uncertain about what the right position is, but we&#8217;re inclined to reject person-affecting views. 
Since many people hold something like the person-affecting view, though, we think it deserves some weight, and that means we should act as if we have somewhat greater obligations to help someone who&#8217;s already alive compared to someone who doesn&#8217;t exist yet. (This is an application of <a href=\"https:\/\/80000hours.org\/articles\/moral-uncertainty\/\">moral uncertainty<\/a>).<\/p>\n<p>One note, however: even people who otherwise embrace a person-affecting view often think that it is morally <em>bad<\/em> to do something that brings someone into existence who has a life full of suffering and who wishes they&#8217;d never been born. If that&#8217;s right, you should still think that we have strong moral reasons to care about the far future, because there&#8217;s the possibility it could be horrendously bad as well as very good for a large number of individuals. On any plausible view, there&#8217;s a forceful case to be made for working to avert <a href=\"https:\/\/80000hours.org\/problem-profiles\/s-risks\/\">astronomical amounts of suffering<\/a>. So even someone who believes strongly in a person-affecting view of ethics might have reason to embrace a form of longtermism that prioritises averting large-scale suffering in the future.<\/p>\n<p>Trying to weigh this up, we think society should have far greater concern for the future than it does now, and that as with climate change, it often makes sense to prioritise making things go well for future individuals. 
In the case of climate change, for example, society likely should have accepted, long ago, the non-trivial costs of developing highly reliable clean energy and transitioning away from a carbon-intensive economy.<\/p>\n<p>Because of <a href=\"https:\/\/80000hours.org\/articles\/moral-uncertainty\/\">moral uncertainty<\/a>, though, we care more about the present generation than we would if we naively weighed up the numbers.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"arrogant\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-7\">Isn't it arrogant to think we'll know what will happen in hundreds, thousands, or millions of years?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-7\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't it arrogant to think we'll know what will happen in hundreds, thousands, or millions of years?\">\n<div class=\"panel-body\">\n<p>Yes, it would be arrogant. But longtermism doesn&#8217;t require us to know the future.<\/p>\n<p>Instead, the practical implication of longtermism is that we take steps that are likely to be good over the wide range of possible futures. We think it&#8217;s likely better for the future if, as we said <a href=\"https:\/\/80000hours.org\/articles\/future-generations\/#opportunity\">above<\/a>, we avoid extinction, we manage our resources carefully, we foster institutions that promote cooperation rather than violent conflict, and we responsibly develop powerful technology. 
None of these strategies requires us to know what the future will look like.<\/p>\n<p>We talk more about the importance of all this uncertainty in the sections above.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"obvious\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-8\">Isn't it just obvious that we should prevent extinction?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-8\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't it just obvious that we should prevent extinction?\">\n<div class=\"panel-body\">\n<p>This isn&#8217;t exactly an objection, but one response to longtermism asserts not that the view is badly off track but that it&#8217;s <em>superfluous<\/em>.<\/p>\n<p>This may seem plausible if longtermism primarily inspires us to prioritise reducing extinction risks. As discussed above, doing so could benefit existing people \u2014 so why even bother talking about the benefits to future generations?<\/p>\n<p>One reply is: we agree that you don&#8217;t need to embrace longtermism to support these causes! 
And we&#8217;re happy if people do good work whether or not they agree with us on the philosophy.<\/p>\n<p>But we still think the argument for longtermism is true, and we think it&#8217;s worth talking about.<\/p>\n<p>Firstly, when we actually try to compare the importance of work in certain cause areas \u2014 such as <a href=\"https:\/\/80000hours.org\/problem-profiles\/health-in-poor-countries\/\">global health<\/a> or mitigating the <a href=\"https:\/\/80000hours.org\/problem-profiles\/nuclear-security\/\">risk of extinction from nuclear war<\/a> \u2014 whether and how much we weigh the interests of future generations may play a decisive role in our conclusions about prioritisation.<\/p>\n<p>Moreover, some longtermist priorities, such as ensuring that we avoid <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/will-macaskill-ambition-longtermism-mental-health\/#what-we-owe-the-future-preview-000923\">the lock-in of bad values<\/a> or developing a promising framework for <a href=\"https:\/\/80000hours.org\/problem-profiles\/space-governance\/\">space governance<\/a>, may be entirely ignored if we don&#8217;t consider the interests of future generations.<\/p>\n<p>Finally, if it&#8217;s right that future generations deserve much more moral concern than they currently get, it just seems good for people to know that. 
Maybe issues will come up in the future that aren&#8217;t extinction threats but which could still predictably affect the long-run future \u2013 we&#8217;d want people to take those issues seriously.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"total-utilitarianism\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-9\">Does longtermism depend on 'total utilitarianism'?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-9\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Does longtermism depend on 'total utilitarianism'?\">\n<div class=\"panel-body\">\n<p>In short, no. <a href=\"https:\/\/www.utilitarianism.net\/types-of-utilitarianism\/#population-ethics-the-total-view\">Total utilitarianism<\/a> is the view that we are obligated to maximise the <em>total<\/em> amount of positive experiences over negative experiences, typically by weighting for intensity and duration.<\/p>\n<p>This is one specific moral view, and many of its proponents and sympathisers advocate for longtermism. But you can easily reject utilitarianism <em>of any kind<\/em> and still embrace longtermism.<\/p>\n<p>For example, you might believe in &#8216;side constraints&#8217; \u2014 <a href=\"https:\/\/plato.stanford.edu\/entries\/ethics-deontological\/\">moral rules<\/a> about what kinds of actions are impermissible, regardless of the consequences. So you might believe that you have strong reasons to promote the wellbeing of individuals in the far future, so long as doing so doesn&#8217;t require violating anyone&#8217;s moral rights. This would be one kind of non-utilitarian longtermist view.<\/p>\n<p>You might also be a pluralist about value, in contrast to utilitarians who think <a href=\"https:\/\/www.utilitarianism.net\/theories-of-wellbeing\/\">a singular notion of wellbeing<\/a> is the sole true value. 
A non-utilitarian might intrinsically value, for instance, art, beauty, achievement, good character, knowledge, and personal relationships, quite separately from their impact on wellbeing.<\/p>\n<p>(See our <a href=\"https:\/\/80000hours.org\/articles\/what-is-social-impact-definition\/\">definition of social impact<\/a> for how we incorporate these moral values into our worldview.)<\/p>\n<p>So you might be a longtermist precisely because you believe the future is likely to contain vast amounts of all the many things you value, so it&#8217;s really important that we protect this potential.<\/p>\n<p>You could also think we have an obligation to improve the world for future generations because we owe it to humanity to &#8220;pass the torch&#8221;, rather than squander everything people have done to build up civilisation. This would be another way of understanding moral longtermism that doesn&#8217;t rely on total utilitarianism.<\/p>\n<p>Finally, you can reject the &#8220;total&#8221; part of utilitarianism and still believe in longtermism. That is, you might believe it&#8217;s important to make sure the future goes well in a generally utilitarian sense without thinking that means we&#8217;ll need to keep increasing the population size in order to maximise total wellbeing.  
You can read more about different kinds of views in population ethics <a href=\"https:\/\/utilitarianism.net\/population-ethics\/\">here<\/a>.<\/p>\n<p>As we discussed <a href=\"#person-affecting\">above<\/a>, people who don&#8217;t think it&#8217;s morally good to bring a flourishing population into existence usually think it&#8217;s still important to prevent future suffering \u2014 in which case you might support a longtermism focused on guarding against the worst outcomes for future generations.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"extremism\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-10\">Does longtermism justify taking extremist or unethical actions to help future generations?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-10\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Does longtermism justify taking extremist or unethical actions to help future generations?\">\n<div class=\"panel-body\">\n<p>No.<\/p>\n<p>We believe, for instance, that you shouldn&#8217;t have a <a href=\"https:\/\/80000hours.org\/articles\/harmful-career\/\">harmful career<\/a> just because you think you can do more good than bad with the money you&#8217;ll earn. There are practical, epistemic, and moral reasons that justify this stance.<\/p>\n<p>And as a general matter, we think it&#8217;s highly unlikely to be the case that working in a harmful career will be the path that has the best consequences overall.<\/p>\n<p>Some critics of longtermism say the view can be used to justify all kinds of egregious acts in the name of a glorious future. We do not believe this, in part because there are plenty of plausible intrinsic reasons to object to egregious acts on their own, <em>even if<\/em> you think they&#8217;ll have good consequences. 
As we explained in our article on the definition of <a href=\"https:\/\/80000hours.org\/articles\/what-is-social-impact-definition\/\">&#8216;social impact&#8217;<\/a>:<\/p>\n<blockquote>\n<p>\n  We don&#8217;t think social impact is all that matters. Rather, we think people should aim to have a greater social impact within the constraints of not sacrificing other important values \u2013 in particular, while building good character, respecting rights and attending to other important personal values. We don&#8217;t endorse doing something that seems very wrong from a commonsense perspective in order to have a greater social impact.\n<\/p>\n<\/blockquote>\n<p>Perhaps even more importantly, it&#8217;s bizarrely pessimistic to believe that the best way to make the future go well is to do horrible things now. This is very likely false, and there&#8217;s little reason anyone should be tempted by this view.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"sci-fi\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-11\">Isn't this all just science fiction?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-11\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't this all just science fiction?\">\n<div class=\"panel-body\">\n<p>Some of the claims in this article may sound like science fiction. We&#8217;re aware this can be off-putting to some readers, but we think it&#8217;s important to be upfront about our thinking.<\/p>\n<p>And the fact that a claim <em>sounds<\/em> like science fiction is not, on its own, a good reason to dismiss it. 
Many speculative claims about the future have sounded like science fiction until technological developments made them a reality.<\/p>\n<p>From Eunice Newton Foote&#8217;s perspective in the 19th century, the idea that the global climate would actually be transformed based on a principle she discovered in a glass cylinder may have sounded like science fiction. But climate change is now our reality.<\/p>\n<p>Similarly, the idea of the &#8220;atomic bomb&#8221; had literally been science fiction before Leo Szilard discovered the possibility of the nuclear chain reaction in 1933. Szilard first read about such weapons in H.G. Wells&#8217; <em>The World Set Free<\/em>. As W. Warren Wagar explained in <em>The Virginia Quarterly Review<\/em>:<\/p>\n<blockquote>\n<p>\n  Unlike most scientists then doing research into radioactivity, Szilard perceived at once that a nuclear chain reaction could produce weapons as well as engines. After further research, he took his ideas for a chain reaction to the British War Office and later the Admiralty, assigning his patent to the Admiralty to keep the news from reaching the notice of the scientific community at large. &#8220;Knowing what this [a chain reaction] would mean,&#8221; he wrote, &#8220;\u2014and I knew it because I had read H.G. Wells\u2014I did not want this patent to become public.&#8221;\n<\/p>\n<\/blockquote>\n<p>This doesn&#8217;t mean we should accept any idea without criticism. 
And indeed, you can reject many of the more &#8216;sci-fi&#8217; claims of some people who are concerned with future generations \u2014 such as the possibility of space settlement or the risks from artificial intelligence \u2014 and still find longtermism compelling.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-12\">Isn't this like Pascal's wager?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-12\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't this like Pascal's wager?\">\n<div class=\"panel-body\">\n<p>One worry about longtermism some people have is that it seems to rely on having a very small chance of achieving a very good outcome.<\/p>\n<p>Some people think this sounds suspiciously like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Pascal%27s_wager\">Pascal&#8217;s wager<\/a>, a highly contentious argument for believing in God \u2014 or a variant of this idea, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Pascal%27s_mugging\">&#8220;Pascal&#8217;s mugging.&#8221;<\/a> The concern is that this type of argument may be used to imply an apparent obligation to do absurd or objectionable things. It&#8217;s based on a thought experiment, as we described in <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#is-this-a-form-of-pascals-mugging-taking-a-big-bet-on-tiny-probabilities\">a different article<\/a>:<\/p>\n<blockquote>\n<p>\n  A random mugger stops you on the street and says, &#8220;Give me your wallet or I&#8217;ll cast a spell of torture on you and everyone who has ever lived.&#8221; You can&#8217;t rule out with 100% probability that he won&#8217;t \u2014 after all, nothing&#8217;s 100% for sure. And torturing everyone who&#8217;s ever lived is so bad that surely even avoiding a tiny, tiny probability of that is worth the $40 in your wallet? 
But intuitively, it seems like you shouldn&#8217;t give your wallet to someone just because they threaten you with something completely implausible.\n<\/p>\n<\/blockquote>\n<p>This deceptively simple problem raises tricky issues in expected value theory, and it&#8217;s not clear how they should be resolved \u2014 but it&#8217;s typically assumed that we should reject arguments that rely on this type of reasoning.<\/p>\n<p>The argument for longtermism given above may look like a form of this argument because it relies in part on the premise that the number of individuals in the future could be so large. Since it&#8217;s a relatively novel, unconventional argument, it may sound suspiciously like the mugger&#8217;s (presumably hollow) threat in the thought experiment.<\/p>\n<p>But there are some key differences. To start, the risks to the long-term future may be far from negligible. Toby Ord <a href=\"https:\/\/80000hours.org\/articles\/existential-risks\/#whats-the-total-risk-of-human-extinction-if-we-add-everything-together\">estimated<\/a> the chance of an existential catastrophe that effectively curtails the potential of future generations in the next century at 1 in 6.<\/p>\n<p>Now, it may be true that any individual&#8217;s chance of meaningfully reducing these kinds of threats is much, much smaller. But we accept small chances of doing good all the time \u2014 that&#8217;s why you might wear a seatbelt in a car, even though in any given drive your chances of being in a serious accident are minuscule. Many people buy life insurance to guarantee that their family members will have financial support in the unlikely scenario that they die young.<\/p>\n<p>And while an individual is unlikely to be solely responsible for driving down the risk of human extinction by any significant amount (in the same way no one individual could stop climate change), it does seem plausible that a large group of people working diligently and carefully might be able to do it. 
And if such a large group can achieve this laudable end, then taking part in the collective effort isn&#8217;t comparable to Pascal&#8217;s mugging.<\/p>\n<p>But if we did conclude the chance to reduce the risks humanity faces <em>is<\/em> truly negligible, then we would want to look much more seriously into other priorities, especially since there are so many other <a href=\"https:\/\/80000hours.org\/problem-profiles\/\">pressing problems<\/a>. As long as it&#8217;s true, though, that there are genuine opportunities to have a significant impact on improving the prospects for the future, longtermism does not rely on suspect and extreme expected value reasoning.<\/p>\n<\/div><\/div><\/div>\n<\/div>\n<p>This is a lot to think about. So what are our bottom lines on how we think we&#8217;re most likely to be wrong about longtermism?<\/p>\n<p>Here are a few possibilities we think are worth taking seriously, even though they don&#8217;t totally undermine the case from our perspective:<\/p>\n<ul>\n<li><strong>Morality may require a strong preference for the present:<\/strong> There might be strong moral reasons to give preference to existing people and individuals over future generations. This might be because something like a person-affecting view is true (<a href=\"https:\/\/80000hours.org\/articles\/future-generations\/#person-affecting\">described above<\/a>) or maybe even because we should systematically discount the value of future beings.\n<ul>\n<li>We don&#8217;t think the arguments for such a strong preference are very compelling, but given the high levels of uncertainty in our moral beliefs, we can&#8217;t confidently rule it out.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Reliably affecting the future may be infeasible.<\/strong> It&#8217;s possible that further research will ultimately conclude that the opportunities for impacting the far future are essentially non-existent or extremely limited. 
It&#8217;s hard to believe we could ever entirely close the question \u2014 researchers who come to this conclusion in the future could themselves be mistaken \u2014 but it might dramatically reduce our confidence that pursuing a longtermist agenda is worthwhile and thus leave the project as a pretty marginal endeavour.<\/p>\n<\/li>\n<li>\n<p><strong>Reducing extinction risk may be intractable beyond a certain point.<\/strong> It&#8217;s possible that there&#8217;s a base level of extinction risk that humans will have to accept at some point and that we can&#8217;t reduce any further. And if, for instance, there were an irreducible risk of an extinction catastrophe at 10 percent every century, then the future, in expectation, would be much less significant than we think. This would dramatically reduce the pull of longtermism.<\/p>\n<\/li>\n<li>\n<p><strong>A <a href=\"https:\/\/forum.effectivealtruism.org\/topics\/crucial-consideration\/\">crucial consideration<\/a> could change our assessment in ways we can&#8217;t predict.<\/strong> This falls into the general category of &#8216;unknown unknowns,&#8217; which are always important to be on the watch for.<\/p>\n<\/li>\n<\/ul>\n<p>You could also read the following essays criticising longtermism that we have found interesting:<\/p>\n<ul>\n<li>A <a href=\"https:\/\/ndpr.nd.edu\/reviews\/the-precipice-existential-risk-and-the-future-of-humanity\/\">review of <em>The Precipice<\/em><\/a> written by Theron Pummer<\/li>\n<li>A blog post called &#8220;<a href=\"https:\/\/schwitzsplinters.blogspot.com\/2022\/01\/against-longtermism.html\">Against Longtermism<\/a>&#8221; by Eric Schwitzgebel<\/li>\n<li>A post on the Effective Altruism Forum by Denise Melchin called &#8220;<a href=\"https:\/\/forum.effectivealtruism.org\/posts\/Jxfq6xCP9ZoTBFewA\/why-i-am-probably-not-a-longtermist\">Why I am probably not a longtermist<\/a>&#8220;<\/li>\n<\/ul>\n<h2><span 
id=\"if-i-dont-agree-with-80000-hours-about-longtermism-can-i-still-benefit-from-your-advice\" class=\"toc-anchor\"><\/span>If I don&#8217;t agree with 80,000 Hours about longtermism, can I still benefit from your advice?<\/h2>\n<p>Yes!<\/p>\n<p>We want to be candid about what we believe and what our priorities are, but we don&#8217;t think everyone needs to agree with us.<\/p>\n<p>And we have lots of advice and tools that are broadly useful for people thinking about their careers, regardless of what they think about longtermism.<\/p>\n<p>There are also many places where longtermist projects converge with other approaches to thinking about having a positive impact with your career. For example, working to prevent pandemics seems robustly good whether you prioritise near- or long-term benefits.<\/p>\n<p>Though we focus as an organisation on issues that may affect all future generations, we would generally be really happy to also see more people working for the benefit of the global poor and farmed animals, two tractable causes that we think are unduly neglected in the near term. 
We also discuss these issues on our podcast and list jobs for them on our job board.<br \/>\n<br \/>\n<center><i><small style=\"font-size: x-small;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/9187256378_01c3a3940a_k-modified.png\" alt=\"\" width=\"1000\" height=\"667\" class=\"alignnone size-full wp-image-81485\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/9187256378_01c3a3940a_k-modified.png 1000w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/9187256378_01c3a3940a_k-modified-300x200.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/9187256378_01c3a3940a_k-modified-768x512.png 768w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" \/> Credit: <a href=\"https:\/\/www.flickr.com\/photos\/yenchao\/9187256378\/in\/photostream\/\">Yen Chao<\/a> CC2.0<\/small><\/i><\/center><\/p>\n<h2><span id=\"what-are-the-best-ways-to-help-future-generations-right-now\" class=\"toc-anchor\"><\/span>What are the best ways to help future generations right now?<\/h2>\n<p>While answering this question satisfactorily would require a sweeping research agenda in itself, we do have some general thoughts about what longtermism means for our practical decision making. And we&#8217;d be excited to see more attention paid to this question.<\/p>\n<p>Some people may be motivated by these arguments to find opportunities to donate to longtermist projects or cause areas. We believe <a href=\"https:\/\/www.openphilanthropy.org\/\">Open Philanthropy<\/a> \u2014 which is a major funder of 80,000 Hours \u2014 does important work in this area.<\/p>\n<p>But our primary aim is to help people have impactful careers. Informed by longtermism, we have created a list of what we believe are the <a href=\"https:\/\/80000hours.org\/problem-profiles\/\">most pressing problems<\/a> to work on in the world. 
These problems are <a href=\"https:\/\/80000hours.org\/articles\/problem-framework\/\">important, neglected, and tractable<\/a>.<\/p>\n<p>As of this writing, the top eight problem areas are:<\/p>\n<ol>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">Risks from artificial intelligence<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\">Catastrophic pandemics<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/promoting-effective-altruism\/\">Building effective altruism<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/global-priorities-research\/\">Global priorities research<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/nuclear-security\/\">Nuclear war<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/improving-institutional-decision-making\/\">Improving decision making (especially in important institutions)<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/climate-change\/\">Climate change<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/80000hours.org\/problem-profiles\/great-power-conflict\/\">Great power conflict<\/a><\/strong><\/li>\n<\/ol>\n<p>We&#8217;ve already given a few examples of concrete ways to tackle these issues <a href=\"\/future-generations\/#reducing-risk\">above<\/a>.<\/p>\n<p>The above list is provisional, and it is likely to change as we learn more. We also list many other pressing problems that we believe are highly important from a longtermist point of view, as well as a few that would be high priorities if we rejected longtermism.<\/p>\n<blockquote class=\"pullquote--right pullquote huge italics serif\"><p>\n      We hope more people will challenge our ideas and help us think more clearly about them. 
As we have argued, the stakes are incredibly high.<\/p>\n<\/blockquote>\n<p>We have a related list of <a href=\"https:\/\/80000hours.org\/career-reviews\/\">high-impact careers<\/a> that we believe are appealing options for people who want to work to address these and related problems and to help the long-term future go well.<\/p>\n<p>But we don&#8217;t have all the answers. Research in this area could reveal <a href=\"https:\/\/forum.effectivealtruism.org\/topics\/crucial-consideration\/\">crucial considerations<\/a> that might overturn longtermism or cast it in a very different light. There are likely pressing cause areas we haven&#8217;t thought of yet.<\/p>\n<p>We hope more people will challenge our ideas and help us think more clearly about them. As we have argued, the stakes are incredibly high. So it&#8217;s paramount that, as much as is feasible, we get this right.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h2 class=\"no_toc\">Want to focus your career on the long-run future?<\/h2>\n<p>If you want to work on ensuring the future goes well, such as controlling nuclear weapons or shaping the development of artificial intelligence or biotechnology, you can speak to our team one-on-one.<\/p>\n<p>We&#8217;ve helped hundreds of people choose an area to focus on, make connections, and then find jobs and funding in these areas. 
If you&#8217;re already in one of these areas, we can help you increase your impact within it.<\/p>\n<p><a href=\"https:\/\/80000hours.org\/speak-with-us\/?int_campaign=article__long-term-future\" title=\"\" class=\"btn btn-primary\">Speak to us<\/a><\/p>\n<\/div>\n<h2><span id=\"learn-more\" class=\"toc-anchor\"><\/span>Learn more<\/h2>\n<ul>\n<li>Toby Ord discussed these arguments in his book, <a href=\"https:\/\/theprecipice.com\/\"><em>The Precipice<\/em><\/a>, and he discussed the ideas with us on <a href=\"https:\/\/80000hours.org\/articles\/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it\/\">our podcast<\/a>.<\/li>\n<li>Will MacAskill also made the argument in his book, <a href=\"https:\/\/whatweowethefuture.com\/\"><em>What We Owe the Future<\/em><\/a>, and we interviewed him about it on our <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/will-macaskill-what-we-owe-the-future\/\">podcast<\/a>. <\/li>\n<li>Benjamin Todd and Arden Koehler discussed varieties of longtermism in <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/ben-todd-on-varieties-of-longtermism\/\">this podcast<\/a>.<\/li>\n<li>Hilary Greaves presented <a href=\"https:\/\/www.youtube.com\/watch?v=Wz8lgjBLTpI\">the case for longtermism<\/a> at Oxford University.<\/li>\n<li>In this podcast, Holden Karnofsky talked about the case that we&#8217;re living in <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/holden-karnofsky-most-important-century\/\">the most important century<\/a>.<\/li>\n<li>Article: <a href=\"https:\/\/80000hours.org\/articles\/extinction-risk\/\">The case for reducing existential risks<\/a><\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/carl-shulman-common-sense-case-existential-risks\/\">Carl Shulman on the common-sense case for existential risk work and its practical implications<\/a><\/li>\n<li>Podcast: <a 
href=\"https:\/\/80000hours.org\/podcast\/episodes\/anders-sandberg-best-things-possible-in-our-universe\/\">Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe<\/a><\/li>\n<\/ul>\n<h2><span id=\"read-next\" class=\"toc-anchor\"><\/span>Read next<\/h2>\n<p>This article is part of our advanced series. See the <a href=\"\/advanced-series\">full series<\/a>, or keep reading:<\/p>\n<ul class=\"list-cards list-no-bullet row display-flex !tw--mb-0 margin-top\">\n<li class=\"col-sm-12 padding-bottom-small\">\n<div class=\"card card--horizontal row \">\n<div class=\"col-sm-4 col--card-image sm:!tw--pr-0\">          <a href=\"https:\/\/80000hours.org\/articles\/harmful-career\/\" class=\"card__anchor no-visited-styling\">\n<div class=\"card__image bg-gray-light\">            <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2017\/06\/chris-leboutillier-TUJud0AWAPI-unsplash-720x448.jpg\" alt=\"Decorative post preview\"                       class=\"tw--w-full \" style=\"\"                      width=\"720\" height=\"448\">            <\/div>\n<p>          <\/a>        <\/div>\n<div class=\"col-sm-8\">\n<div class=\"card__title\">\n<h3  class=\"no-toc\"><a href=\"https:\/\/80000hours.org\/articles\/harmful-career\/\" class=\"card__anchor no-visited-styling\">Is it ever OK to take a harmful job in order to do more good? 
An in-depth analysis<\/a><\/h3>\n<\/div>\n<div class=\"card__actions\">      <a href=\"https:\/\/80000hours.org\/articles\/harmful-career\/\" class=\"card__action no-visited-styling\">Read more<\/a>    <\/div><\/div><\/div>\n<\/li>\n<li class=\"col-sm-12 padding-bottom-small\">\n<div class=\"card card--horizontal row \">\n<div class=\"col-sm-4 col--card-image sm:!tw--pr-0\">          <a href=\"https:\/\/80000hours.org\/articles\/your-choice-of-problem-is-crucial\/\" class=\"card__anchor no-visited-styling\">\n<div class=\"card__image bg-gray-light\">            <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2021\/10\/joshua-earle-87JyMb9ZfU-unsplash-720x448.jpg\" alt=\"Decorative post preview\"                       class=\"tw--w-full \" style=\"\"                      width=\"720\" height=\"448\">            <\/div>\n<p>          <\/a>        <\/div>\n<div class=\"col-sm-8\">\n<div class=\"card__title\">\n<h3  class=\"no-toc\"><a href=\"https:\/\/80000hours.org\/articles\/your-choice-of-problem-is-crucial\/\" class=\"card__anchor no-visited-styling\">Why the problem you work on is the biggest driver of your impact<\/a><\/h3>\n<\/div>\n<div class=\"card__actions\">      <a href=\"https:\/\/80000hours.org\/articles\/your-choice-of-problem-is-crucial\/\" class=\"card__action no-visited-styling\">Read more<\/a>    <\/div><\/div><\/div>\n<\/li>\n<li class=\"col-sm-12 padding-bottom-small\">\n<div class=\"card card--horizontal row \">\n<div class=\"col-sm-4 col--card-image sm:!tw--pr-0\">          <a href=\"https:\/\/80000hours.org\/articles\/existential-risks\/\" class=\"card__anchor no-visited-styling\">\n<div class=\"card__image bg-gray-light\">            <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2017\/10\/rio_large-720x448.jpeg\" alt=\"Decorative post preview\"                       class=\"tw--w-full \" style=\"\"                      width=\"720\" height=\"448\">    
        <\/div>\n<p>          <\/a>        <\/div>\n<div class=\"col-sm-8\">\n<div class=\"card__title\">\n<h3  class=\"no-toc\"><a href=\"https:\/\/80000hours.org\/articles\/existential-risks\/\" class=\"card__anchor no-visited-styling\">The case for reducing existential risks<\/a><\/h3>\n<\/div>\n<div class=\"card__actions\">      <a href=\"https:\/\/80000hours.org\/articles\/existential-risks\/\" class=\"card__action no-visited-styling\">Read more<\/a>    <\/div><\/div><\/div>\n<\/li>\n<\/ul>\n<div class=\"well visible-if-not-newsletter-subscriber margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h3 class=\"no-toc\">Plus, join our newsletter and we&#8217;ll mail you a free book<\/h3>\n<p>Join our newsletter and we&#8217;ll send you a free copy of <em>The Precipice<\/em> \u2014 a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. <a href=\"https:\/\/80000hours.org\/free-book\/#giveaway-terms\">T&#038;Cs here<\/a>.<\/p>\n<form data-80k-object-id=\"\" data-80k-form-action=\"newsletter__subscribe\" action=\"\/\" method=\"post\" class=\"form-newsletter-signup form-newsletter-signup-step-1 margin-bottom-smaller\">\n<div class=\"mc-field-group input-group compact-input-group \"> <input type=\"email\" value=\"\" name=\"email\" required class=\"form-control email\" placeholder=\"Email address\" id=\"input_email\"> <span class=\"submit input-group-btn input-group-btn-right\"> <input type=\"submit\" id=\"mc-embedded-subscribe\" value=\"GET THE BOOK\" class=\"btn btn-primary \" \/> <\/span> <\/div>\n<div> <input name=\"_eightyk_action\" value=\"mailchimp_add_subscriber\" type=\"hidden\"> <input name=\"redirect_path_after_step_2\" value=\"\/newsletter\/welcome\/\" type=\"hidden\"> <\/div>\n<div style=\"position: absolute; left: -5000px;\"> <input type=\"text\" name=\"b_abc12f58bbe8075560abdc5b7_43bc1ae55c\" tabindex=\"-1\" value=\"\"> <\/div>\n<\/form>\n<\/div>\n<style>\nbi { \n  font-family: \"Helvetica Neue\", 
Helvetica, Arial, sans-serif; \n  font-style: italic;\n  font-weight: 700;\n  font-size: 0.9rem;\n}\n<\/style>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":435,"featured_media":81500,"parent":0,"menu_order":0,"template":"","meta":{"_acf_changed":false,"footnotes":"[fn 1] This discovery was discussed in an article by Clive Thompson in [JSTOR Daily](https:\/\/daily.jstor.org\/how-19th-century-scientists-predicted-global-warming\/). [\/fn]\r\n\r\n[fn 2] She added that \"if as some suppose, at one period of its history the air had mixed with it a larger proportion than at present, an increased temperature from its own action as well as from increased weight must have necessarily resulted.\"[\/fn]\r\n\r\n[fn 3] \"While some models showed too much warming and a few showed too little, most models examined showed warming consistent with observations, particularly when mismatches between projected and observationally informed estimates of forcing were taken into account. We find no evidence that the climate models evaluated in this paper have systematically overestimated or underestimated warming over their projection period. The projection skill of the 1970s models is particularly impressive given the limited observational evidence of warming at the time, as the world was thought to have been cooling for the past few decades.\" ['Evaluating the Performance of Past Climate Model Projections.'](https:\/\/agupubs.onlinelibrary.wiley.com\/doi\/10.1029\/2019GL085378) [\/fn]\r\n\r\n[fn 4] In his book [*What We Owe the Future*](https:\/\/80000hours.org\/what-we-owe-the-future\/), Will MacAskill (a co-founder and trustee of 80,000 Hours) is even more succinct: \"Future people count. There could be a lot of them. We can make their lives go better.\" (pg. 9) [\/fn]\r\n\r\n[fn 5] \"Setting aside climate change, all spending on biosecurity, natural risks and risks from AI and nuclear war is still substantially less than we spend on ice cream. 
And I'm confident that the spending actually focused on existential risk is less than one-tenth of this.\" [*The Precipice*](https:\/\/80000hours.org\/the-precipice\/) (pg. 313) [\/fn]\r\n\r\n[fn 6] Derek Parfit in [*Reasons and Persons*](https:\/\/academic.oup.com\/book\/12484) on pages 356-357 [\/fn]\r\n\r\n[fn 7] John Adams, the second president of the United States who laid some of the intellectual foundations for the US Constitution, pointed to the importance of enduring governmental structures in his own [writing](https:\/\/oll.libertyfund.org\/title\/adams-the-works-of-john-adams-vol-4): \"The institutions now made in America will not wholly wear out for thousands of years. It is of the last importance, then, that they should begin right. If they set out wrong, they will never be able to return, unless it be by accident, to the right path.\" Quoted in MacAskill's [*What We Owe the Future.*](https:\/\/80000hours.org\/what-we-owe-the-future\/) [\/fn]\r\n\r\n[fn 8] See *[The Biology of Rarity](https:\/\/books.google.co.uk\/books?id=4LHnCAAAQBAJ&pg=PA110&redir_esc=y#v=onepage&q=99&f=false)* edited by W.E. Kunin, K.J. Gaston [\/fn]\r\n\r\n[fn 9] \"The average lifespan of a species varies according to taxonomic group. It is as long as tens of millions of years for ants and trees, and as short as half a million years for mammals. The average span across all groups combined appears to be (very roughly) a million years.\" \u2014 [Professor Edward O. Wilson](https:\/\/www.harvardmagazine.com\/2016\/03\/the-mammalian-life-span) [\/fn]\r\n\r\n[fn 10] Some might find it entirely implausible that humans could be around for another 500 million years.
But consider, as Toby Ord pointed out in [*The Precipice*](https:\/\/80000hours.org\/the-precipice\/), that [the fossil record indicates](https:\/\/www.sciencedaily.com\/releases\/2008\/02\/080207135801.htm) that horseshoe crabs have existed essentially unchanged on the planet for at least 445 million years. Of course, horseshoe crabs undoubtedly have features that make them particularly resilient as a species. But humans, too, have undeniably unique characteristics, and it's arguable that these features could confer comparable or even superior survival advantages. [\/fn]\r\n\r\n[fn 11] See Chapter 8 of Toby Ord's [*The Precipice*](https:\/\/80000hours.org\/the-precipice\/) for a detailed discussion of the prospects for space settlement.[\/fn]\r\n\r\n[fn 12] Saulius \u0160im\u010dikas of Rethink Priorities in 2020 researched [the numbers of vertebrate animals](https:\/\/rethinkpriorities.org\/publications\/estimates-of-global-captive-vertebrate-numbers) in captivity. The report found that there were between 9.5 and 16.2 billion chickens, bred for meat in captivity, on any given day. There are also 1.5 billion cattle, 978 million pigs, and 103 billion farmed fish, among many other types of farmed animals. [\/fn]\r\n\r\n[fn 13] Altogether, this means there are many, many lives at stake in the way the future unfolds. A conservative estimate of the upper bound (assuming just Earth-bound humans) is 10<sup>16<\/sup>. But estimates using different approaches put the figure as high as 10<sup>35<\/sup>, or even \u2014 *very* speculatively \u2014 10<sup>58<\/sup>.
These figures and other estimates are discussed in [\"How many lives does the future hold?\"](https:\/\/globalprioritiesinstitute.org\/wp-content\/uploads\/Toby-Newberry_How-many-lives-does-the-future-hold.pdf) by Toby Newberry.[\/fn]\r\n\r\n[fn 14] Note that we think the near-term risk from natural threats tends to be much lower than that from human-made threats.\r\n\r\nToby Ord explained on *[The 80,000 Hours Podcast](https:\/\/80000hours.org\/podcast\/episodes\/toby-ord-the-precipice-existential-risk-future-humanity\/#estimating-total-natural-risk-003634)* why he believes extinction risk from natural causes is relatively low: \"[We've] been around for about 2,000 centuries: homo sapiens. Longer, if you think about the homo genus. And, suppose the existential risk per century were 1%. Well, what's the chance that you would get through 2,000 centuries of 1% risk? It turns out to be really low because of how exponentials work, and you have almost no chance of surviving that. So this gives us a kind of argument that the risk from natural causes, assuming it hasn't been increasing over time, that this risk must be quite low.\" [\/fn]\r\n\r\n[fn 15] We're also very concerned about [mitigating climate change](https:\/\/80000hours.org\/problem-profiles\/climate-change\/), though at this point, we believe it's much less likely to cause human extinction on its own. [\/fn]\r\n\r\n[fn 16] Note that while reducing extinction risks and trajectory changes are split up in this explanation, they may, in practice, imply similar courses of action. Work to prevent, say, a catastrophic pandemic that kills all humans could likely also be effective at preventing a pandemic that allows some humans to survive but causes society to irreversibly collapse.[\/fn]\r\n\r\n[fn 17] It seems plausible that reducing the risk of this outcome could be the *most important cause to work on*.
However, it's not clear to us what steps are available at this time to meaningfully do so.[\/fn]\r\n\r\n[fn 18] It's possible we'd prefer to act to prevent the suffering in 1,000 years rather than 10,000 years, because we feel *less confident we can predict what will happen* in 10,000 years. It seems plausible, for instance, that the greater length of time would make it more likely that someone else will find a way to prevent the harm. But if we assume that our uncertainty about the likelihood of the suffering in each case is the same, there seems to be no reason at all to prefer to prevent the sooner suffering rather than the later.[\/fn]\r\n\r\n[fn 19] If the radiation sickness is so bad that it makes their lives worse than nonexistence, they might be able to object to choosing any policy that allowed them to be born. But we can ignore this possibility for the point being made here. [\/fn]\r\n\r\n[fn 20] Some person-affecting views do assert that we have obligations to future individuals if a given individual or set of individuals will exist regardless of our actions. (Because of the extreme contingency in much of animal reproduction, the identity of future individuals is often not fixed.) For more information on this, see the entry on the non-identity problem in the [Stanford Encyclopedia of Philosophy](https:\/\/plato.stanford.edu\/entries\/nonidentity-problem\/#Prob). [\/fn]\r\n\r\n[fn 21] For an example of this view, read Leopold Aschenbrenner's blog post on [\"Burkean Longtermism.\"](https:\/\/www.forourposterity.com\/burkean-longtermism\/) [\/fn]\r\n\r\n[fn 22] Some researchers estimate that the chance of extinction is significantly lower; others believe it's much higher. But it seems hard to be confident the risks are extremely low. Assessing the level of risk we face is plausibly a top longtermist priority. [\/fn]\r\n\r\n[fn animals] What about non-human animals?
One might wonder whether this emphasis on the extinction of our own species is overly human-centric.\r\n\r\nThere might be some scenarios in which humanity goes extinct, but many other animal species continue to live for the rest of Earth's habitable period. Does that mean that avoiding human extinction is much less important than we thought, since we believe non-human lives have value?\r\n\r\nProbably not, for at least three reasons:\r\n\r\n**1. Without the ability to migrate to the stars, Earth-derived life may fall well below its apparent potential.** It's possible another species on Earth would evolve human-level intelligence and capacities, but we shouldn't bet on it. As far as we can tell, it took around 3.5 billion years from when life first emerged on Earth for human intelligence to reach its current state. It's possible that animals with human-like intelligence would emerge on our planet again more quickly if we went extinct, but we shouldn't rely on the idea that the planet has enough time left to pull off the same trick twice.\r\n\r\n**2. Wild animals may face extreme amounts of suffering, and it's not clear how often their lives are worth living.** If it's true that many wild animal lives are full of pain and suffering, we should hope humans are around in the future \u2014 if nothing else so that we can [consider mitigating those harms](https:\/\/80000hours.org\/problem-profiles\/wild-animal-welfare\/). It could even be best from the perspective of wild animals themselves if humans did not go extinct.\r\n\r\n**3. We still have a lot of uncertainty about what a valuable future should look like, and it's important to preserve the one species we know of that is at least somewhat capable of seriously deliberating about what matters and acting on its conclusions.** We may yet fail to secure a valuable future, but it's much more likely that we'll get there by trying than if we leave it up to random chance or natural processes.
If the course of the future were decided by random or natural processes, we might expect it to fall short of almost all its potential. [\/fn]\r\n\r\n\r\n[fn expected] In a previous version of this article, Ben Todd explained how simple expected value calculations can give a sense of how significantly the future can weigh in our deliberations:\r\n\r\n>If there's a 5% chance that civilisation lasts for 10 million years, then in expectation, there are over 5,000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains 10 billion people, that would be 280 billion additional individuals who get to live flourishing lives. If there's a chance civilisation lasts longer than 10 million years, or that there are more than 10 billion people in each future generation, then the argument is strengthened even further.\r\n\r\nThis is just a toy model, and it doesn't actually capture all the ways we should think about value. But it shows why we should care about future generations, even if we're not sure they'll come into existence. 
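The arithmetic in the toy model above can be sketched in a few lines (a minimal illustration using only the figures quoted in the passage; the 100-year generation length and the variable names are our assumptions):

```python
# Toy expected-value sketch of Ben Todd's illustration (not a definitive
# model; all inputs are the illustrative figures from the quoted passage).
p_survival = 0.05                  # chance civilisation lasts 10 million years
years_if_survives = 10_000_000
years_per_generation = 100         # assumed generation length

# Expected future generations: 0.05 * (10,000,000 / 100) = 5,000
expected_generations = p_survival * years_if_survives / years_per_generation

p_success = 0.55                   # chance the concerted effort succeeds
risk_reduction = 0.01              # extinction risk cut by 1 percentage point

# Generations saved in expectation: 0.55 * 0.01 * 5,000 = 27.5 (roughly 28)
generations_saved = p_success * risk_reduction * expected_generations

people_per_generation = 10_000_000_000
# Additional flourishing lives in expectation (roughly 280 billion)
additional_lives = generations_saved * people_per_generation
```

As the quote notes, longer civilisational lifespans or larger generations only strengthen the conclusion, since every input scales the result linearly.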
[\/fn]\r\n"},"categories":[1195,1216,330],"class_list":["post-40132","article","type-article","status-publish","has-post-thumbnail","hentry","category-future-generations-longtermism","category-moral-patients","category-moral-philosophy"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v23.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Longtermism: a call to protect future generations - 80,000 Hours<\/title>\n<meta name=\"description\" content=\"It would be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/articles\/future-generations\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Longtermism: a call to protect future generations\" \/>\n<meta property=\"og:description\" content=\"The course of the future is uncertain. 
But humanity\u2019s choices now can shape how events unfold.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/articles\/future-generations\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:modified_time\" content=\"2024-11-29T13:01:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2048\" \/>\n\t<meta property=\"og:image:height\" content=\"1391\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"Longtermism: a call to protect future generations\" \/>\n<meta name=\"twitter:description\" content=\"The course of the future is uncertain. But humanity\u2019s choices now can shape how events unfold.\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"54 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/80000hours.org\/articles\/future-generations\/\",\"url\":\"https:\/\/80000hours.org\/articles\/future-generations\/\",\"name\":\"Longtermism: a call to protect future generations - 80,000 Hours\",\"isPartOf\":{\"@id\":\"https:\/\/80000hours.org\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/80000hours.org\/articles\/future-generations\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/80000hours.org\/articles\/future-generations\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg\",\"datePublished\":\"2023-03-28T00:00:52+00:00\",\"dateModified\":\"2024-11-29T13:01:53+00:00\",\"description\":\"It would be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.\",\"breadcrumb\":{\"@id\":\"https:\/\/80000hours.org\/articles\/future-generations\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/80000hours.org\/articles\/future-generations\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/80000hours.org\/articles\/future-generations\/#primaryimage\",\"url\":\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg\",\"contentUrl\":\"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg\",\"width\":2048,\"height\":1391,\"caption\":\"Benjamin Inouye, CC BY 4.0, via Wikimedia 
Commons\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/80000hours.org\/articles\/future-generations\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/80000hours.org\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Advanced series\",\"item\":\"https:\/\/80000hours.org\/advanced-series\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Future generations\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/80000hours.org\/#website\",\"url\":\"https:\/\/80000hours.org\/\",\"name\":\"80,000 Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/80000hours.org\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/80000hours.org\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/80000hours.org\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\/\/80000hours.org\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/80000hours.org\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png\",\"contentUrl\":\"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\/\/80000hours.org\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/80000Hours\",\"https:\/\/x.com\/80000hours\",\"https:\/\/www.youtube.com\/user\/eightythousandhours\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Longtermism: a call to protect future generations - 80,000 Hours","description":"It would be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/80000hours.org\/articles\/future-generations\/","og_locale":"en_US","og_type":"article","og_title":"Longtermism: a call to protect future generations","og_description":"The course of the future is uncertain. But humanity\u2019s choices now can shape how events unfold.","og_url":"https:\/\/80000hours.org\/articles\/future-generations\/","og_site_name":"80,000 Hours","article_publisher":"https:\/\/www.facebook.com\/80000Hours","article_modified_time":"2024-11-29T13:01:53+00:00","og_image":[{"width":2048,"height":1391,"url":"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_title":"Longtermism: a call to protect future generations","twitter_description":"The course of the future is uncertain. But humanity\u2019s choices now can shape how events unfold.","twitter_site":"@80000hours","twitter_misc":{"Est. 
reading time":"54 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/80000hours.org\/articles\/future-generations\/","url":"https:\/\/80000hours.org\/articles\/future-generations\/","name":"Longtermism: a call to protect future generations - 80,000 Hours","isPartOf":{"@id":"https:\/\/80000hours.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/80000hours.org\/articles\/future-generations\/#primaryimage"},"image":{"@id":"https:\/\/80000hours.org\/articles\/future-generations\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg","datePublished":"2023-03-28T00:00:52+00:00","dateModified":"2024-11-29T13:01:53+00:00","description":"It would be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.","breadcrumb":{"@id":"https:\/\/80000hours.org\/articles\/future-generations\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/80000hours.org\/articles\/future-generations\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/articles\/future-generations\/#primaryimage","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2023\/03\/Joshua_Tree_Milky_Way.jpg","width":2048,"height":1391,"caption":"Benjamin Inouye, CC BY 4.0, via Wikimedia Commons"},{"@type":"BreadcrumbList","@id":"https:\/\/80000hours.org\/articles\/future-generations\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/80000hours.org\/"},{"@type":"ListItem","position":2,"name":"Advanced series","item":"https:\/\/80000hours.org\/advanced-series\/"},{"@type":"ListItem","position":3,"name":"Future 
generations"}]},{"@type":"WebSite","@id":"https:\/\/80000hours.org\/#website","url":"https:\/\/80000hours.org\/","name":"80,000 Hours","description":"","publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/80000hours.org\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/80000hours.org\/#organization","name":"80,000 Hours","url":"https:\/\/80000hours.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","width":1500,"height":785,"caption":"80,000 Hours"},"image":{"@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/80000Hours","https:\/\/x.com\/80000hours","https:\/\/www.youtube.com\/user\/eightythousandhours"]}]}},"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/article\/40132"}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/article"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/article"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/435"}],"version-history":[{"count":1,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/article\/40132\/revisions"}],"predecessor-version":[{"id":88220,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/article\/40132\/revisions\/88220"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/81500"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=40132"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=40132"}],"curies":[{
"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}