{"id":77583,"date":"2022-09-16T11:49:54","date_gmt":"2022-09-16T11:49:54","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=problem_profile&#038;p=77583"},"modified":"2024-01-10T12:03:29","modified_gmt":"2024-01-10T12:03:29","slug":"s-risks","status":"publish","type":"problem_profile","link":"https:\/\/80000hours.org\/problem-profiles\/s-risks\/","title":{"rendered":"\u2018S-risks\u2019"},"content":{"rendered":"<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#why-might-s-risks-be-an-especially-pressing-problem\"><span class=\"toc_number toc_depth_1\">1<\/span> Why might s-risks be an especially pressing problem?<\/a><ul><li><a href=\"#types-of-s-risks\"><span class=\"toc_number toc_depth_2\">1.1<\/span> Types of s-risks<\/a><\/li><li><a href=\"#how-likely-are-these-risks\"><span class=\"toc_number toc_depth_2\">1.2<\/span> How likely are these risks?<\/a><\/li><\/ul><\/li><li><a href=\"#what-can-you-do-to-help\"><span class=\"toc_number toc_depth_1\">2<\/span> What can you do to help?<\/a><ul><li><a href=\"#key-organisations-in-this-space\"><span class=\"toc_number toc_depth_2\">2.1<\/span> Key organisations in this space<\/a><\/li><\/ul><\/li><li><a href=\"#learn-more-about-s-risks\"><span class=\"toc_number toc_depth_1\">3<\/span> Learn more about s-risks<\/a><\/li><\/ul><\/div>\n<h2><span id=\"why-might-s-risks-be-an-especially-pressing-problem\" class=\"toc-anchor\"><\/span>Why might s-risks be an especially pressing problem?<\/h2>\n<p>We&#8217;re concerned about impacts on <a href=\"https:\/\/80000hours.org\/articles\/future-generations\/\">future generations<\/a>, such as from <a href=\"https:\/\/80000hours.org\/articles\/existential-risks\/\">existential threats<\/a> from <a href=\"https:\/\/80000hours.org\/problem-profiles\/global-catastrophic-biological-risks\/\">pandemics<\/a> or <a 
href=\"https:\/\/80000hours.org\/problem-profiles\/positively-shaping-artificial-intelligence\/\">artificial intelligence<\/a>.<\/p>\n<p>But these are primarily risks of extinction or of humanity&#8217;s potential being permanently curtailed &#8212; they don&#8217;t put special emphasis on avoiding the chance of extreme amounts of suffering, in particular.<\/p>\n<p>Research into <em>suffering risks<\/em> or <em>s-risks<\/em> attempts to fill this gap.<\/p>\n<p>New technology, for example the development of <a href=\"https:\/\/80000hours.org\/problem-profiles\/positively-shaping-artificial-intelligence\/\">artificial intelligence<\/a> or improved surveillance technology, but also new <a href=\"https:\/\/80000hours.org\/problem-profiles\/nuclear-security\/\">nuclear<\/a> or <a href=\"https:\/\/80000hours.org\/problem-profiles\/global-catastrophic-biological-risks\/\">biological<\/a> weapons, may well concentrate power in the hands of those that develop and control the technology. As a result, one possible outcome worse than extinction could be a <a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-of-stable-totalitarianism\/\">perpetual totalitarian dictatorship<\/a>, where people suffer indefinitely. But researchers on s-risks are often concerned with outcomes even worse than this.<\/p>\n<p>For example, what would happen if such a dictatorship developed the technology to settle space? And if we care about nonhuman animals or even <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-sentience\/\">digital minds<\/a>, the possible scale of future suffering seems astronomical. After all, right now humanity is almost completely insensitive to the welfare of nonhuman animals, let alone potential future digital consciousness.<\/p>\n<p>We don&#8217;t know how likely s-risks are.<\/p>\n<p>In large part this depends on how we define the term (we&#8217;ve seen various possible definitions). 
We think it’s very likely that there will be at least some suffering in the future, and potentially on very large scales — vastly more suffering than has existed on Earth so far, especially if there are many, many more individuals in the future and they live a variety of lives. But often when people talk about s-risks, they are talking about the risk of outcomes so bad that they are <em>worse than the extinction of humanity</em>. Our guess is that the likelihood of such risks is very low, much lower than risks of human extinction — which is part of why we focus more on the latter.</p>
<p>However, research on s-risks is so <a href="https://80000hours.org/articles/problem-framework/#how-to-assess-how-neglected-a-problem-is">neglected</a> that it’s hard to know. We think there are fewer than 50 people worldwide working explicitly on reducing s-risks.</p>
<h3><span id="types-of-s-risks" class="toc-anchor"></span>Types of s-risks</h3>
<p>While research in this area is in its early stages, the <a href="https://web.archive.org/web/20221022153323/https://centerforreducingsuffering.org/research/intro/">Center for Reducing Suffering</a> has identified three possible kinds of s-risks:</p>
<ul>
<li><em>Agential</em> s-risks come from actors intentionally causing harm. This could happen because some powerful actor <a href="https://80000hours.org/problem-profiles/risks-from-malevolent-actors/">actively wants to cause harm</a>, or feels hatred or indifference towards other groups (whether other ethnic groups, other species, or other forms of sentient life), or because of negative-sum strategic interactions.</li>
<li><em>Incidental</em> s-risks arise as a side effect of some other process. For example, suffering could result from some kinds of economic productivity (as we currently see with <a href="https://80000hours.org/problem-profiles/factory-farming/">factory farming</a>), from attempts to gain information (like animal testing, or simulating conscious beings), or from violent entertainment (think gladiator fights).</li>
<li><em>Natural</em> s-risks involve suffering that occurs naturally, without intervention from any agent. It’s possible that things like <a href="https://80000hours.org/podcast/episodes/persis-eskander-wild-animal-welfare/">wild animal suffering</a> could someday exist on a huge scale across the universe (or might already).</li>
</ul>
<h3><span id="how-likely-are-these-risks" class="toc-anchor"></span>How likely are these risks?</h3>
<p>It’s plausible that the risks of such suffering are sufficiently low that we shouldn’t focus on them. For example, perhaps we should expect strong incentives to ensure that these sorts of risks never materialise. And since agents in general seem to strive towards happiness and away from suffering, we might think this deep asymmetry will keep s-risks low — although it’s unclear <a href="https://web.archive.org/web/20221022153631/https://www.cold-takes.com/has-life-gotten-better/">whether life has in fact improved over time</a>.</p>
<p>That said, there are a few reasons why we might be more concerned:</p>
<ul>
<li>If humans don’t go <a href="https://80000hours.org/articles/existential-risks/">extinct</a>, it seems pretty plausible that technological progress will continue.
As a result, at some point it seems likely we’ll, in some sense, settle space (as discussed in our <a href="https://80000hours.org/problem-profiles/space-governance/">profile on space governance</a>), meaning that our future could hold positive or negative value on astronomical scales.</li>
<li>In general, advanced technology will make it possible to do all sorts of things — and the more advanced our technology, the wider the scope of what could be achieved. If there’s motivation to create suffering, this means there’s a reasonable possibility that such suffering will be created.</li>
<li>There are precedents, like <a href="https://80000hours.org/problem-profiles/factory-farming/">factory farming</a>, <a href="/problem-profiles/wild-animal-welfare">wild animal suffering</a>, and slavery.</li>
</ul>
<h2><span id="what-can-you-do-to-help" class="toc-anchor"></span>What can you do to help?</h2>
<p>There are two main ways of reducing s-risks:</p>
<ul>
<li><strong>Narrow interventions:</strong> focusing on the safe development and deployment of specific new technologies, like <a href="https://80000hours.org/problem-profiles/artificial-intelligence/">transformative AI</a>, that could produce s-risks.</li>
<li><strong>Broad interventions:</strong> for example, promoting international cooperation (which would reduce incentives for things like war, hostage-taking, and torture).</li>
</ul>
<p>Since our information on s-risks is so uncertain at this stage, current work tends to focus either on research into these risks and ways to reduce them, or on movement building that encourages others to spend their time reducing these risks.</p>
<p>As a result of this uncertainty, we think it’s particularly important that people working on s-risks understand the area well before trying to achieve substantive goals.</p>
<p>It could also help reduce s-risks to work on related issues like:</p>
<ul>
<li><a href="https://80000hours.org/problem-profiles/great-power-conflict/">Great power conflict</a></li>
<li><a href="https://80000hours.org/problem-profiles/risks-from-malevolent-actors/">Risks from malevolent actors</a></li>
<li><a href="https://80000hours.org/problem-profiles/artificial-intelligence/">Risks from artificial intelligence</a></li>
</ul>
<p>However, it’s important to note that only <em>some</em> work in these areas is likely to be among the best ways to reduce s-risks in particular (rather than achieving other goals, like reducing <a href="/articles/existential-risks">existential risks</a>).</p>
<p>For example, out of the many ways we recommend working to <a href="/problem-profiles/artificial-intelligence">reduce the risk of AI-related catastrophes</a>, only some seem directly relevant to s-risk reduction. S-risk-related AI work often focuses on the <em>interaction</em> of AI systems with each other (or with humans), and on ensuring that mistakes in the design or operation of AI systems don’t cause extremely bad outcomes.
This includes work to build <a href="https://www.cooperativeai.com/">cooperative AI</a> (finding ways to ensure that even if individual AI systems seem safe, they don’t produce bad outcomes through interacting with other human or AI systems), as well as <a href="https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems">other work on multi-agent AI systems</a>.</p>
<p>Read more about possible ways to avert s-risks <a href="https://web.archive.org/web/20221022153323/https://centerforreducingsuffering.org/research/intro/">here</a>.</p>
<h3><span id="key-organisations-in-this-space" class="toc-anchor"></span>Key organisations in this space</h3>
<ul>
<li>The <a href="https://centerforreducingsuffering.org/">Center for Reducing Suffering</a> researches the ethical views that might put more weight on s-risks, and considers practical approaches to reducing them.</li>
<li>The <a href="https://jobs.80000hours.org/organisations/center-on-long-term-risk">Center on Long-Term Risk</a> focuses specifically on reducing s-risks that could arise from the development of AI, alongside community building and grantmaking to support work on reducing these risks.</li>
</ul>
<h2><span id="learn-more-about-s-risks" class="toc-anchor"></span>Learn more about s-risks</h2>
<ul>
<li><a href="https://web.archive.org/web/20221022153839/https://longtermrisk.org/s-risks-talk-eag-boston-2017/">S-risks: Why they are the worst existential risks, and how to prevent them</a> — an introductory talk by Max Daniel at EA Global Boston in 2017</li>
<li><a href="https://web.archive.org/web/20221022153323/https://centerforreducingsuffering.org/research/intro/">S-risks: An introduction</a> by Tobias Baumann</li>
<li><a href="https://web.archive.org/web/20221022153847/https://centerforreducingsuffering.org/research/a-typology-of-s-risks/">A typology of s-risks</a> by Tobias Baumann</li>
<li><a href="https://web.archive.org/web/20221105162123/https://centerforreducingsuffering.org/wp-content/uploads/2022/10/Avoiding_The_Worst_final.pdf"><em>Avoiding the Worst: How to Prevent a Moral Catastrophe</em></a> by Tobias Baumann</li>
<li><a href="https://web.archive.org/web/20221022153925/https://longtermrisk.org/risks-of-astronomical-future-suffering/">Risks of astronomical future suffering</a> by Brian Tomasik</li>
<li>Podcast: <a href="https://80000hours.org/podcast/episodes/jeff-sebo-ethics-digital-minds/">Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe</a></li>
<li>Our article on <a href="https://80000hours.org/articles/existential-risks/">the case for reducing existential risks</a></li>
</ul>
<h3 class="no-toc">Footnotes</h3>
<ol>
<li>It’s also possible that the sorts of technological progress required for s-risks could take place even if humans are extinct. For example, this could happen if an advanced AI system causes human extinction (a possibility we discuss in our article on <a href="/problem-profiles/artificial-intelligence/">preventing AI-related catastrophes</a>), the AI system continues technological progress, and there are still things that we care about that could be involved in extremely bad outcomes (such as <a href="/problem-profiles/factory-farming">animals</a> or <a href="/problem-profiles/artificial-sentience">digital minds</a>).</li>
<li>There are some reasons to think there is asymmetry in the opposite direction. For example, while widespread and extreme suffering seems bad under many possible worldviews, the things needed to bring about a flourishing future may be more complex (including things beyond just happiness, such as justice or beauty). Anthony DiGiovanni discusses the idea that disvalue is not as complex as value <a href="https://forum.effectivealtruism.org/posts/RkPK8rWigSAybgGPe/a-longtermist-critique-of-the-expected-value-of-extinction-2#2_2__Disvalue_is_not_complex">here</a>.</li>
</ol>