{"id":86495,"date":"2024-06-19T13:48:27","date_gmt":"2024-06-19T13:48:27","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=problem_profile&#038;p=86495"},"modified":"2024-12-26T18:46:54","modified_gmt":"2024-12-26T18:46:54","slug":"risks-of-stable-totalitarianism","status":"publish","type":"problem_profile","link":"https:\/\/80000hours.org\/problem-profiles\/risks-of-stable-totalitarianism\/","title":{"rendered":"Risks of stable totalitarianism"},"content":{"rendered":"<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#why-might-the-risk-of-stable-totalitarianism-be-an-especially-pressing-problem\"><span class=\"toc_number toc_depth_1\">1<\/span> Why might the risk of stable totalitarianism be an especially pressing problem?<\/a><ul><li><a href=\"#could-totalitarianism-be-an-existential-risk\"><span class=\"toc_number toc_depth_2\">1.1<\/span> Could totalitarianism be an existential risk?<\/a><\/li><li><a href=\"#is-any-of-this-remotely-plausible\"><span class=\"toc_number toc_depth_2\">1.2<\/span> Is any of this remotely plausible?<\/a><\/li><li><a href=\"#emergence\"><span class=\"toc_number toc_depth_2\">1.3<\/span> Will totalitarian regimes arise in future?<\/a><\/li><li><a href=\"#dominance\"><span class=\"toc_number toc_depth_2\">1.4<\/span> Could a totalitarian regime dominate the world?<\/a><\/li><li><a href=\"#entrench\"><span class=\"toc_number toc_depth_2\">1.5<\/span> Could a totalitarian regime last forever?<\/a><\/li><li><a href=\"#the-chance-of-stable-totalitarianism\"><span class=\"toc_number toc_depth_2\">1.6<\/span> The chance of stable totalitarianism<\/a><\/li><li><a href=\"#preventing-long-term-totalitarianism-in-particular-seems-pretty-neglected\"><span class=\"toc_number toc_depth_2\">1.7<\/span> Preventing long-term totalitarianism in particular seems pretty neglected<\/a><\/li><\/ul><\/li><li><a 
href=\"#why-might-you-choose-not-to-work-on-this-problem\"><span class=\"toc_number toc_depth_1\">2<\/span> Why might you choose not to work on this problem?<\/a><\/li><li><a href=\"#what-can-you-do-to-help\"><span class=\"toc_number toc_depth_1\">3<\/span> What can you do to help?<\/a><ul><li><a href=\"#ai-governance\"><span class=\"toc_number toc_depth_2\">3.1<\/span> AI Governance<\/a><\/li><li><a href=\"#researching-risks-of-global-coordination\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Researching risks of global coordination<\/a><\/li><li><a href=\"#working-on-defensive-technologies\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Working on defensive technologies<\/a><\/li><li><a href=\"#protecting-democratic-institutions\"><span class=\"toc_number toc_depth_2\">3.4<\/span> Protecting democratic institutions<\/a><\/li><\/ul><\/li><li><a href=\"#learn-more-about-risks-of-stable-totalitarianism\"><span class=\"toc_number toc_depth_1\">4<\/span> Learn more about risks of stable totalitarianism<\/a><\/li><\/ul><\/div>\n<h2><span id=\"why-might-the-risk-of-stable-totalitarianism-be-an-especially-pressing-problem\" class=\"toc-anchor\"><\/span>Why might the risk of stable totalitarianism be an especially pressing problem?<\/h2>\n<p>Totalitarian regimes killed over 100 million people in less than 100 years in the 20th century. The pursuit of national goals with little regard for the wellbeing or rights of individuals makes these states wantonly cruel. The longer they last, the more harm they could potentially do.<\/p>\n<h3><span id=\"could-totalitarianism-be-an-existential-risk\" class=\"toc-anchor\"><\/span>Could totalitarianism be an <a href=\"\/articles\/existential-risks\/\">existential risk<\/a>?<\/h3>\n<p>Totalitarianism is a particular kind of autocracy, a form of government in which power is highly concentrated. 
What makes totalitarian regimes distinct is the complete, enforced subservience of the entire populace to the state.<\/p>\n<p>Most people do not welcome such subservience. So totalitarian states are also characterised by mass violence, surveillance, intrusive policing, and a lack of human rights protections, as well as a state-imposed ideology to maintain control.<\/p>\n<p>So far, most totalitarian regimes have only survived for a few decades.<\/p>\n<p>If one of these regimes were to maintain its grip on power for centuries or millennia, we could call it stable totalitarianism. All totalitarian regimes threaten their citizens and the rest of the world with violence, oppression, and suffering. But a stable totalitarian regime would also end any hope of the situation improving in the future. Millions or billions of people would be stuck in a terrible situation with very little hope of recovery \u2014 a fate as bad as extinction, or <a href=\"\/problem-profiles\/s-risks\">even worse<\/a>.<\/p>\n<h3><span id=\"is-any-of-this-remotely-plausible\" class=\"toc-anchor\"><\/span>Is any of this remotely plausible?<\/h3>\n<p>For stable totalitarianism to ruin our entire future, three things have to happen:<\/p>\n<ol>\n<li><a href=\"#emergence\">A totalitarian regime has to emerge<\/a>.<\/li>\n<li><a href=\"#dominance\">It has to dominate all, or at least a substantial part, of the world<\/a>.<\/li>\n<li><a href=\"#entrench\">It has to entrench itself indefinitely<\/a>.<\/li>\n<\/ol>\n<p>No state has even come close to achieving that kind of domination before. It&#8217;s been too difficult for states to overcome the challenges of war, revolution, and internal political changes. 
Step three, in particular, might seem especially far-fetched.<\/p>\n<p>New technologies may make a totalitarian takeover far more plausible though.<\/p>\n<p>For example:<\/p>\n<ul>\n<li><strong>Physical and digital surveillance<\/strong> may make it nearly impossible to build resistance movements.<\/li>\n<li><strong>Autonomous weapons<\/strong> may concentrate military power, making it harder to resist a totalitarian leader.<\/li>\n<li><strong>Advanced lie detection<\/strong> may make it easier to identify dissidents and conspirators.<\/li>\n<li><strong>Social manipulation technologies<\/strong> may be used to control the information available to people.<\/li>\n<\/ul>\n<p>Many of these technologies are closely related to developments in the field of AI. AI systems are rapidly developing new capabilities. It&#8217;s difficult to predict how this will continue in the future, but we think there&#8217;s a meaningful chance that AI systems come to be <a href=\"\/problem-profiles\/artificial-intelligence\/\">truly transformative<\/a> in the coming decades. In particular, AI systems that can make researchers more productive, or even replace them entirely, could lead to rapid technological progress and much faster economic growth.<\/p>\n<p>A totalitarian dictator could potentially use transformative AI to overcome each of the three forces that have impeded such regimes in the past.<\/p>\n<ul>\n<li><strong>AI could eliminate external competition<\/strong>: If one state controls significantly more advanced AI systems than its rivals, then it may have a decisive technological edge that allows it to dominate the world through conquest or <a href=\"https:\/\/www.britannica.com\/topic\/compellence\">compellence<\/a> (i.e. 
forcing other states to do something by threatening them with violence if they refuse).<\/li>\n<li><strong>AI could crush internal resistance<\/strong>: AI could accelerate the development of multiple technologies dictators would find useful, including the surveillance, lie detection, and weaponry mentioned above. These could be used to detect and strangle resistance movements before they become a threat.<\/li>\n<li><strong>AI could solve the succession problem<\/strong>: AI systems can last much longer than dictators and don&#8217;t have to change over time. An AI system directed to maintain control of a society could keep pursuing that goal long after a dictator&#8217;s death.<\/li>\n<\/ul>\n<p>Stable totalitarianism doesn&#8217;t seem like an inevitable, or even particularly probable, result of technological developments. Bids for domination from dictators would still face serious opposition. Plus, new technologies could also make it harder for a totalitarian state to entrench itself. For example, they could make it easier for people to share information to support resistance movements.<\/p>\n<p>But the historical threat of totalitarianism combined with some features of modern technology make stable totalitarianism seem plausible.<\/p>\n<p>Below, we discuss in more depth each of the three prerequisites: emergence, domination, and entrenchment.<\/p>\n<h3><span id=\"emergence\" class=\"toc-anchor\"><\/span>Will totalitarian regimes arise in future?<\/h3>\n<p>Totalitarianism will probably persist in the future. Such regimes have existed throughout history and still exist today. About half the countries in the world are classified as &#8220;autocratic&#8221; by V-Dem, a research institute that studies democracy. Twenty percent are <em>closed<\/em> autocracies where citizens don&#8217;t get to vote for party leaders or legislative representatives.<\/p>\n<p>Democracy has seen a remarkable rise worldwide since the 1800s. 
Before 1849, every country in the world was classified as autocratic due to limited voting rights. Today, 91 countries \u2014 over half of V-Dem&#8217;s dataset \u2014 are democratic.<\/p>\n<p>But progress has recently slowed and even reversed. The world is slightly less democratic today than it was 20 years ago. That means we should probably expect the world to contain authoritarian regimes, including totalitarian ones, for at least decades to come.<\/p>\n<h3><span id=\"dominance\" class=\"toc-anchor\"><\/span>Could a totalitarian regime dominate the world?<\/h3>\n<p>Broadly, there seem to be two main ways a totalitarian regime could come to dominate a large fraction of the world. First, it could use force or the threat of force to assert control. Second, it could take control of a large country or even a future world government.<\/p>\n<h4>Domination by force<\/h4>\n<p>Many totalitarian regimes have been expansionist.<\/p>\n<p>Hitler, for example, sought to conquer &#8220;heartland&#8221; Europe to gain the resources and territory he thought he needed to achieve global domination. While he didn&#8217;t get far, others have had more success:<\/p>\n<ul>\n<li>20th century communist rulers wanted to create a global communist state. In the mid-1980s, about 33% of the world&#8217;s people lived under communist regimes.<\/li>\n<li>At its peak, the British Empire comprised <a href=\"https:\/\/escholarship.org\/uc\/item\/3cn68807\">about 25%<\/a> of the world&#8217;s land area and population.<\/li>\n<li>The Mongols controlled about 20% of the world&#8217;s land and 30% of its people.<\/li>\n<\/ul>\n<p>In recent decades, ambitious territorial conquest has become much less common. In fact, there have been almost no explicit attempts to take over large expanses of territory for almost 50 years. 
But, as Russia&#8217;s invasion of Ukraine shows, we shouldn&#8217;t find <a href=\"https:\/\/80000hours.org\/problem-profiles\/great-power-conflict\/#how-likely-is-a-war\">too much comfort in this trend<\/a>. Fifty years just isn&#8217;t that long in the grand sweep of history.<\/p>\n<p>Technological change could make it easier for one state to control much of the world. Historically, a technological edge has often given states huge military advantages. During the Gulf War, for example, American superiority in precision-guided munitions and computing power proved overwhelming.<\/p>\n<p>Some researchers think that the first actor to obtain future superintelligent AI systems could use them to achieve world domination. Such systems could dramatically augment a state&#8217;s power. They could be used to coordinate and control armies and monitor external threats. They could also increase the rate of technological innovation, giving the state that first controls them a significant edge over the rest of the world in the key technologies we discussed previously, like weaponry, targeting, surveillance, and cyber warfare.<\/p>\n<p>AI could provide a decisive advantage just by being integrated into military strategies and tactics. Cyberattack capabilities, for example, could disrupt enemy equipment and systems. AI systems could also help militaries process large amounts of data, react faster to enemy actions, coordinate large numbers of soldiers or autonomous weapons, and more accurately strike key targets.<\/p>\n<p>There&#8217;s even the possibility that military decision making could be turned over in part or in whole to AI systems. This idea currently faces strong resistance, but if AI systems prove far faster and more efficient than humans, competitive dynamics could push strongly in favour of more delegation.<\/p>\n<p>But a state with such an advantage over the rest of the world might not even have to use deadly force. 
Simply threatening rivals may be enough to force them to adopt certain policies or to turn control of critical systems over to the more powerful state.<\/p>\n<p>In sum, AI-powered armies, or just the threat of being attacked by one, could make the country that controls advanced AI more powerful than the rest of the world combined. If it so desired, that country could well use that advantage to achieve the global domination that past totalitarian leaders have only been able to dream of.<\/p>\n<h4>Controlling a powerful government<\/h4>\n<p>A totalitarian state could also gain global supremacy by taking control of a powerful government, such as one of the <a href=\"https:\/\/80000hours.org\/problem-profiles\/great-power-conflict\/\">great powers<\/a> or a hypothetical future world government.<\/p>\n<p>Some totalitarian parties, like the Nazis, have sought influence by conquering large fractions of the world. But the Nazis gained enormous power simply by taking control of Germany itself.<\/p>\n<p>If a totalitarian actor gained control of one of the world&#8217;s most powerful countries today, it could potentially control a significant fraction of humanity&#8217;s future (in expectation) by simply entrenching itself in that country and using its influence to oppress many people indefinitely and shape important issues like <a href=\"https:\/\/80000hours.org\/problem-profiles\/space-governance\">how space is governed<\/a>. In fact, considering the prevalence of authoritarianism, this may be the most likely way totalitarianism could shape the long-term future.<\/p>\n<p>There&#8217;s also the possibility that such an actor could gain even more influence by taking over a global institution.<\/p>\n<p>Currently, countries coordinate many policies through international institutions like the United Nations. 
However, the enforcement mechanisms available to these institutions are currently <a href=\"https:\/\/www.asil.org\/insights\/volume\/1\/issue\/1\/enforcing-international-law\">&#8220;imperfect&#8221;<\/a>: applied slowly and unevenly.<\/p>\n<p>We don&#8217;t know for sure how international cooperation will evolve in the future. But international institutions could come to have more power than they currently do. Such institutions facilitate global trade and economic growth, for example. They may also help states solve disagreements and avoid conflict. They&#8217;re <a href=\"https:\/\/80000hours.org\/problem-profiles\/global-public-goods\/\">often proposed<\/a> as a way to manage global catastrophic risks too. States could choose to empower global institutions to realise these benefits.<\/p>\n<p>If such an international framework were to form, a totalitarian actor could potentially leverage it to gain global control without using force (just as totalitarian actors have seized control of democratic countries in the past). This would be deeply worrying because a global totalitarian government would not face pressure from other states, which is one of the main ways totalitarianism has been defeated in the past.<\/p>\n<p>Economist Bryan Caplan is particularly concerned that fear of catastrophic threats to humanity like <a href=\"https:\/\/80000hours.org\/problem-profiles\/climate-change\/\">climate change<\/a>, <a href=\"https:\/\/80000hours.org\/problem-profiles\/global-catastrophic-biological-risks\/\">pandemics<\/a>, and <a href=\"\/problem-profiles\/artificial-intelligence\/\">risks from advanced AI<\/a> could motivate governments to implement policies that are particularly vulnerable to totalitarian takeover, such as widespread surveillance.<\/p>\n<p>We think there are difficult tradeoffs to consider here. International institutions with strong enforcement powers <em>might<\/em> be needed to address global coordination problems and catastrophic risks. 
Nevertheless, we agree that there are serious risks as well, including the possibility that they could be captured by totalitarian actors. We aren&#8217;t sure how exactly to trade these things off (hence this article)!<\/p>\n<h3><span id=\"entrench\" class=\"toc-anchor\"><\/span>Could a totalitarian regime last forever?<\/h3>\n<p>Some totalitarian leaders have attempted to stay in power indefinitely. In <em>What We Owe the Future<\/em>, William MacAskill discusses several occasions on which authoritarian leaders have sought to extend their lives:<\/p>\n<ul>\n<li>Multiple Chinese emperors experimented with immortality elixirs. (Some of these potions probably contained toxins like lead, making them more likely to hasten death than defeat it.)<\/li>\n<li>Kim Il-Sung, the founder of North Korea, tried to extend his life by pouring public funds into longevity research and receiving blood transfusions from young Koreans.<\/li>\n<li>Nursultan Nazarbayev, who ruled Kazakhstan for nearly three decades, also spent millions of state dollars on life extension, though these efforts <a href=\"https:\/\/www.businessinsider.com\/the-elixir-of-life-comes-in-a-yogurt-drink-2012-11\">reportedly only produced<\/a> a &#8220;liquid yogurt drink&#8221; called Nar.<\/li>\n<\/ul>\n<p>But of course, none have even come close to entrenching themselves permanently. The Nazis ruled Germany for just 12 years. The Soviets controlled Russia for 74 years. North Korea&#8217;s Kim dynasty has survived 76 years and counting.<\/p>\n<p>They have inevitably fallen due to some combination of three forces:<\/p>\n<ol>\n<li><strong>External competition<\/strong>: Totalitarian regimes pose a risk to the rest of the world and face violent opposition. 
The Nazis, Mussolini&#8217;s Italy, the Empire of Japan, and Cambodia&#8217;s Khmer Rouge were all defeated militarily.<\/li>\n<li><strong>Internal resistance<\/strong>: Competing political groups or popular resistance can undermine the leaders.<\/li>\n<li><strong>The &#8220;succession problem&#8221;<\/strong>: These regimes sometimes liberalise or collapse entirely after particularly oppressive leaders die or step down. For example, the USSR collapsed a few years after Mikhail Gorbachev came to power.<\/li>\n<\/ol>\n<p>To date, these forces have made it impossible to entrench an oppressive regime in unchanging form for more than a century or so.<\/p>\n<p>But once again, technology could change this picture. Advanced AI \u2014 and the military, surveillance, and cyberweapon technologies it could accelerate \u2014 may be used to counteract each of the three forces.<\/p>\n<p>For external competition, we&#8217;ve already discussed how AI might allow leading states to build a substantial military advantage over the rest of the world.<\/p>\n<p>After using that advantage to achieve dominance over the rest of the world, a totalitarian state could use surveillance technologies to monitor the technological progress of any actors \u2014 external <em>or<\/em> internal \u2014 that could threaten its dominance. With a sufficient technological edge, it could then use kinetic and cyber weapons to crush anyone who showed signs of building power.<\/p>\n<p>After eliminating internal and external competition, a totalitarian actor would just have to overcome the succession problem to make long-term entrenchment a realistic possibility. This is a considerable challenge. Any kind of change in institutions or values over time would allow for the possibility of escape from totalitarian control.<\/p>\n<p>But advanced AI could also help dictators solve the succession problem.<\/p>\n<p>Perhaps advanced AI will help dictators invent more effective, dairy-free life extension technologies. 
However, totalitarian actors could also direct an advanced AI system to continue pursuing certain goals after their death. An AI could be given full control of the state&#8217;s military, surveillance, and cybersecurity resources. Meanwhile, a variety of techniques, such as digital error correction, could be used to keep the AI&#8217;s goals and methods constant over time.<\/p>\n<p>This paints a picture of truly stable totalitarianism. Long after the dictator&#8217;s death, the AI could live on, executing the same goals, with complete control in its area of influence.<\/p>\n<h3><span id=\"the-chance-of-stable-totalitarianism\" class=\"toc-anchor\"><\/span>The chance of stable totalitarianism<\/h3>\n<p>So, how likely is stable totalitarianism?<\/p>\n<p>This is clearly a difficult question. One complication is that there are multiple ways stable totalitarianism could come to pass, including:<\/p>\n<ul>\n<li><strong>Global domination:<\/strong> A totalitarian government could become so powerful that it has a decisive advantage over the rest of the world. For example, it could develop an AI system so powerful it can prevent anyone else from obtaining a similar system. It could then use this system to dominate any rivals and oppress any opposition, achieving global supremacy.<\/li>\n<li><strong>Centralised power drifts toward totalitarianism:<\/strong> International institutions could become more robust and powerful, perhaps as a result of efforts to increase coordination, reduce conflict, and mitigate global risks. National governments may even peacefully and democratically cede more control to the international institutions. But efforts to support cooperation and prevent new technologies from being misused to cause massive harm could, slowly or suddenly, empower totalitarian actors. 
They may use these very tools to centralise and cement their power.<\/li>\n<li><strong>Collapse of democracy:<\/strong> An advanced AI system could centralise power such that someone in a non-totalitarian state, or in a global institution, could use it to undermine democratic institutions, disempower rivals, and cement themselves as a newly minted totalitarian leader.<\/li>\n<li><strong>One country is lost:<\/strong> A totalitarian government in one large country could use surveillance tools, AI, and other technologies to <a href=\"#entrench\">entrench<\/a> its rule over its population indefinitely. It wouldn&#8217;t even have to be the <em>first<\/em> to invent the technology: it could re-invent, buy, copy, or steal the technology after it&#8217;s been invented elsewhere in the world. Although not all the value of our future would be lost, a substantial fraction of humanity could be condemned to indefinite oppression.<\/li>\n<\/ul>\n<p>The key takeaway from the preceding sections is that there does seem to be a significant chance powerful AI systems will give someone the technical capacity to entrench their rule in this way. The key question is whether someone will try to do so \u2014 and whether they&#8217;ll succeed.<\/p>\n<p>Here&#8217;s a rough back-of-the-envelope calculation, estimating the risk over roughly the next century:<\/p>\n<ul>\n<li>Chance that future technologies, particularly AI, make <a href=\"#entrench\">entrenchment<\/a> technically possible: <strong>25%<\/strong><\/li>\n<li>Chance that a leader or group tries to use the technology to entrench their rule: <strong>25%<\/strong><\/li>\n<li>Chance that they will achieve a decisive advantage over their rivals and successfully entrench their rule: <strong>5%<\/strong><\/li>\n<li>Overall risk: <strong>0.3%<\/strong>, or about <strong>1 in 330<\/strong><\/li>\n<\/ul>\n<p>We&#8217;re pretty uncertain about all of these numbers. Some of them might seem low or high. 
If you plug in numbers to make your own estimate, you can see how much the risk changes.<\/p>\n<p>Some experts have given other estimates of the risk. <a href=\"https:\/\/www.researchgate.net\/publication\/346827408_The_totalitarian_threat\">Caplan<\/a>, in particular, has estimated that there&#8217;s a 5% chance that &#8220;a world totalitarian government will emerge during the next one thousand years and last for a thousand years or more.&#8221;<\/p>\n<p>But another key takeaway from the preceding sections is that, while stable totalitarianism seems possible, it also seems difficult to realise \u2014 especially in a truly long-term sense. A wannabe eternal dictator would have to solve technical challenges, overcome fierce resistance, and preempt a myriad of future social and technical changes that could threaten their rule.<\/p>\n<p>That&#8217;s why we think the chance of a dictator succeeding, assuming it&#8217;s possible and they try, is probably low. We&#8217;ve put it at 5%. However, it could be much higher or lower. There&#8217;s currently a lot of scope for disagreement, and we&#8217;d love to see more research into this question. The most extensive discussion we&#8217;ve seen of how feasible it would be for a ruler to entrench long-term control with AI is in a report on <a href=\"https:\/\/docs.google.com\/document\/d\/1mkLFhxixWdT5peJHq4rfFzq4QbHyfZtANH1nou68q88\/edit\">Artificial General Intelligence and Lock-In<\/a> by Lukas Finnveden, C. Jess Riedel, and Carl Shulman.<\/p>\n<p>It&#8217;s also worth noting that our estimate is low in part because we expect the rest of the world to resist attempts at entrenchment. You might choose to work on this problem partly to ensure that resistance materialises.<\/p>\n<p><strong>Bottom line<\/strong>: we think that stable totalitarianism is far from the most likely future outcome. 
But we&#8217;re very unsure about this: the risk doesn&#8217;t seem <em>super<\/em> low, and it partly seems low because stable totalitarianism would clearly be so awful that we expect people would make a big effort to stop it.<\/p>\n<h3><span id=\"preventing-long-term-totalitarianism-in-particular-seems-pretty-neglected\" class=\"toc-anchor\"><\/span>Preventing long-term totalitarianism in particular seems pretty neglected<\/h3>\n<p>The core of the argument sketched above is that the future will likely contain totalitarian states, one of which could obtain very powerful AI systems that give it the power to eliminate competition and extend its rule long into the future.<\/p>\n<p>Even the impermanent totalitarianism humanity has experienced so far has been horrendous. So the prospect that our descendants could find themselves living under such regimes for millennia to come is distressing.<\/p>\n<p>Yet we don&#8217;t know of anyone working <em>directly<\/em> on the problem of <em>stable<\/em> totalitarianism.<\/p>\n<p>If we count indirect efforts, the field starts to seem more crowded. As we recount below, there are many think tanks and research institutes working to protect democratic institutions, which implicitly work against stable totalitarianism by trying to reduce the number of countries that become totalitarian in the first place. Their combined budgets for this kind of work are probably on the order of $10M to $100M annually.<\/p>\n<p>There&#8217;s also the fact that the rise of a stable totalitarian superpower would be bad for everyone else in the world. That means that most other countries are strongly incentivised to work against this problem. 
From this perspective, perhaps we should count some large fraction of the military spending of NATO countries (almost <a href=\"https:\/\/www.gov.uk\/government\/publications\/international-defence-expenditure-2023\/finance-and-economics-annual-statistical-bulletin-international-defence-2023\">$1.2 trillion in 2023<\/a>) as part of the anti-totalitarian effort. Some portion of the diplomatic and foreign aid budgets of democratic countries is also devoted to supporting democratic institutions around the world (e.g. the US State Department employs 13,000 Foreign Service members).<\/p>\n<p>One could argue that many of these resources are allocated inefficiently. Or, as we discussed above, some of that spending could raise other risks if it drives arms races and stokes international tension. But if even a small fraction of that money is spent on effective interventions, marginal efforts in this area start to seem a lot less impactful.<\/p>\n<p>In addition to questions of efficiency, the relevance of this spending to the problem of stable totalitarianism specifically is still debatable. Our view is that the particular pathways which could lead to the worst outcomes \u2014 a technological breakthrough that brings about the return of large-scale conquest and potentially long-term lock-in \u2014 are not on the radar of basically any of the institutions mentioned.<\/p>\n<h2><span id=\"why-might-you-choose-not-to-work-on-this-problem\" class=\"toc-anchor\"><\/span>Why might you choose not to work on this problem?<\/h2>\n<p>All that said, maybe nobody&#8217;s working on this problem for a reason.<\/p>\n<p>First, it may not seem that likely, depending on your views (and if we&#8217;re wrong about the long-term possibilities of advanced AI systems, then it might even be impossible for a dictator to take and entrench their control over the world).<\/p>\n<p>Second, it might not be very <a href=\"https:\/\/80000hours.org\/articles\/problem-framework\/\">solvable<\/a>. 
Influencing world-historical events like the rise and fall of totalitarian regimes seems extremely difficult!<\/p>\n<p>For example, we mentioned above that the three ways totalitarian regimes have been brought down in the past are through war, resistance movements, and the deaths of dictators. Most of the people reading this article probably aren&#8217;t in a position to influence any of those forces (and even if they could, it would be seriously risky to do so, to say the least!).<\/p>\n<h2><span id=\"what-can-you-do-to-help\" class=\"toc-anchor\"><\/span>What can you do to help?<\/h2>\n<p>To make progress on this problem, we may need to aim a little bit lower than winning wars or fomenting revolutions.<\/p>\n<p>But we do think there are some things you can do to help solve this problem. These include:<\/p>\n<ul>\n<li>Working on AI governance<\/li>\n<li>Researching downside risks of global coordination<\/li>\n<li>Helping develop defensive technologies<\/li>\n<li>Protecting democratic institutions<\/li>\n<\/ul>\n<h3><span id=\"ai-governance\" class=\"toc-anchor\"><\/span>AI Governance<\/h3>\n<p>First, it&#8217;s notable that most \u2014 possibly all \u2014 plausible routes to stable totalitarianism leverage advanced AI. You could go into <a href=\"https:\/\/80000hours.org\/career-reviews\/ai-policy-and-strategy\/\">AI governance<\/a> to help establish laws and norms that make it less likely AI systems are used for these purposes.<\/p>\n<p>You could help build international frameworks that broadly shape how AI systems are developed and deployed. It&#8217;s possible that the potentially transformative benefits and global risks AI could bring will create great opportunities for international cooperation.<\/p>\n<p>Eventually the world might establish shared institutions to monitor where advanced AI systems are being developed and what they may be used for. 
This information could be paired with remote shutdown technologies to prevent malicious actors, including rogue states and dictators, from obtaining or deploying AI systems that threaten the rest of the world. For example, there may be ways to legally or technically direct how autonomous weapons are developed to prevent one person from being able to control large armies.<\/p>\n<p>It&#8217;s in everyone&#8217;s interest to ensure that no one country uses AI to dominate the future of humanity. If you want to help make this vision a reality, you could work at organisations like the <a href=\"https:\/\/www.governance.ai\/\">Centre for the Governance of AI<\/a>, the <a href=\"https:\/\/www.oxfordmartin.ox.ac.uk\/ai-governance\">Oxford Martin AI Governance Initiative<\/a>, the <a href=\"https:\/\/www.iaps.ai\/\">Institute for AI Policy and Strategy<\/a>, the <a href=\"https:\/\/law-ai.org\/\">Institute for Law and AI<\/a>, <a href=\"https:\/\/www.rand.org\/global-and-emerging-risks\/centers\/technology-and-security-policy.html\">RAND&#8217;s Technology and Security Policy Center<\/a>, the <a href=\"https:\/\/www.simoninstitute.ch\/\">Simon Institute<\/a>, or even large multilateral policy organisations and related think tanks.<\/p>\n<p>If this path seems exciting, you might want to read our <a href=\"https:\/\/80000hours.org\/career-reviews\/ai-policy-and-strategy\/\">career review of AI governance and policy<\/a>.<\/p>\n<h3><span id=\"researching-risks-of-global-coordination\" class=\"toc-anchor\"><\/span>Researching risks of global coordination<\/h3>\n<p>Of course, concerns about the development of oppressive world governments are motivated by exactly this vision for global governance, which includes quite radical proposals such as monitoring all advanced AI development.<\/p>\n<p>If such institutions are needed to tackle global catastrophic risks, we may have to accept some risk of them enabling overly intrusive governance. 
Still, we think we should do everything we can to mitigate this cost where possible and continue researching all kinds of global risks to ensure we&#8217;re making good tradeoffs here.<\/p>\n<p>For example, you could work to design effective policies and institutions that are minimally invasive and protect human rights and freedoms. Or, you could analyse which policies to reduce existential risk <em>need<\/em> to be addressed at the global level and which can be addressed at the state level. Allowing individual states to tackle risks also seems more feasible than coordinating at the global level.<\/p>\n<p>We haven&#8217;t done a deep dive in this space, but you might be able to work on this issue in academia (like at the <a href=\"https:\/\/www.mercatus.org\/\">Mercatus Center<\/a>, where Bryan Caplan works), at some think tanks that work on freedom and human rights issues (like <a href=\"https:\/\/www.chathamhouse.org\/\">Chatham House<\/a>), or in multilateral governance organisations themselves.<\/p>\n<p>You can also listen to <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/bryan-caplan-case-for-and-against-education\/\">our podcast with Bryan Caplan<\/a> for more discussion.<\/p>\n<h3><span id=\"working-on-defensive-technologies\" class=\"toc-anchor\"><\/span>Working on defensive technologies<\/h3>\n<p>Another approach would be to work on technologies that protect individual freedoms without empowering bad actors. Many technologies, like global institutions, have benefits and risks: they can be used by both individuals to protect themselves and malicious actors to cause harm or seize power. If you can speed up the development of technologies that help individuals more than bad actors, then you might make the world as a whole safer and reduce the risk of totalitarian takeover.<\/p>\n<p>Technologist Vitalik Buterin calls this <a href=\"https:\/\/vitalik.eth.limo\/general\/2023\/11\/27\/techno_optimism.html#dacc\">defensive accelerationism<\/a>. 
There&#8217;s a broad range of such technologies, but some that may be particularly relevant for resisting totalitarianism could include:<\/p>\n<ul>\n<li>Tools for identifying misinformation and manipulative content<\/li>\n<li>Cybersecurity tools<\/li>\n<li>Some privacy-enhancing technologies like encryption protocols<\/li>\n<li><a href=\"https:\/\/80000hours.org\/career-reviews\/biorisk-research\/\">Biosecurity policies and tools<\/a>, like advanced PPE, that make it harder for malicious actors to get their way by threatening other states with biological weapons<\/li>\n<\/ul>\n<p>The short length of that list reflects our uncertainty about this approach. Beyond Buterin&#8217;s essay, there&#8217;s not yet much work in this area toward which to direct additional efforts.<\/p>\n<p>It&#8217;s also very hard to predict the implications of new technologies. Some of the examples Buterin gives seem like they could also empower totalitarian states or other malicious actors. Cryptographic techniques can be used by both individuals (to protect themselves against surveillance) and criminals (to conceal their activities from law enforcement). Similarly, cybersecurity tools meant to help individuals could also be used by a totalitarian actor to thwart multilateral attempts to disrupt dangerous AI development within its borders.<\/p>\n<p>That said, we think cautious, well-intentioned research efforts to identify technologies that empower defenders over attackers could be valuable.<\/p>\n<p>Another related option is to research potential downsides from other technologies discussed in this article. 
Some researchers dedicate their time to understanding issues like <a href=\"https:\/\/web.archive.org\/web\/20190528211454\/https:\/\/harvardlawreview.org\/wp-content\/uploads\/pdfs\/vol126_richards.pdf\">risks to political freedom<\/a> from advanced surveillance and the dangers of <a href=\"https:\/\/www.founderspledge.com\/research\/autonomous-weapon-systems-and-military-artificial-intelligence-ai\">autonomous weapons<\/a>.<\/p>\n<h3><span id=\"protecting-democratic-institutions\" class=\"toc-anchor\"><\/span>Protecting democratic institutions<\/h3>\n<p>A final approach to consider is supporting democratic institutions to prevent more countries from sliding towards authoritarianism and, potentially, totalitarianism.<\/p>\n<p>We mentioned that, after over a century of progress, global democratisation has recently stalled. Some researchers <a href=\"https:\/\/www.cambridge.org\/core\/elements\/abs\/backsliding\/CCD2F28FB63A56409FF8911351F2E937\">have claimed<\/a> that we are experiencing &#8220;democratic backsliding&#8221; globally, with <a href=\"https:\/\/www.cambridge.org\/core\/journals\/ethics-and-international-affairs\/article\/pragmatics-of-democratic-frontsliding\/1BB167215530A29E0CCA3C70805B7BBD\">populists and partisans<\/a> subverting democratic institutions. Although this claim is controversial because it&#8217;s highly politicised and &#8220;democraticness&#8221; is <a href=\"https:\/\/www.economist.com\/interactive\/graphic-detail\/2023\/09\/12\/democratic-backsliding-seems-real-even-if-it-is-hard-to-measure\">hard to measure<\/a>, it does seem to be a <a href=\"https:\/\/ourworldindata.org\/less-democratic\">real phenomenon<\/a>.<\/p>\n<p>Given what we know, it at least seems like a trend worth monitoring. 
If democratic institutions are under threat globally, protecting them is important: making it harder for countries to become totalitarian could reduce the chance that a totalitarian state gains a decisive advantage through AI development. It also raises the chance that democratic values, such as freedom of expression and tolerance, shape humanity&#8217;s long-term future.<\/p>\n<p>There is a large ecosystem of research and policy institutes working on this problem in particular. These include think tanks like <a href=\"https:\/\/www.v-dem.net\/\">V-Dem<\/a>, <a href=\"https:\/\/freedomhouse.org\/\">Freedom House<\/a>, the <a href=\"https:\/\/carnegieendowment.org\/programs\/democracy-conflict-and-governance?lang=en\">Carnegie Endowment for International Peace<\/a>, and the <a href=\"https:\/\/www.csis.org\/programs\/international-security-program\/defending-democratic-institutions\">Center for Strategic and International Studies<\/a>. There are also academic research centres like Stanford&#8217;s <a href=\"https:\/\/cddrl.fsi.stanford.edu\/\">Center on Democracy, Development and the Rule of Law<\/a> and Notre Dame&#8217;s <a href=\"https:\/\/strategicframework.nd.edu\/initiatives\/democracy-initiative\/\">Democracy Initiative<\/a>.<\/p>\n<p>(Note: These are just examples of programs in this area. 
We haven&#8217;t looked deeply at their work.)<\/p>\n<h2><span id=\"learn-more-about-risks-of-stable-totalitarianism\" class=\"toc-anchor\"><\/span>Learn more about risks of stable totalitarianism<\/h2>\n<ul>\n<li>William MacAskill, <a href=\"https:\/\/whatweowethefuture.com\/uk\/\"><em>What We Owe the Future<\/em><\/a> (particularly chapter four)<\/li>\n<li>Caleb Ontiveros, <a href=\"https:\/\/www.calebontiveros.com\/the-spectre-of-stable-totalitarianism\/\">The spectre of stable totalitarianism<\/a><\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/vitalik-buterin-techno-optimism\/\">Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government<\/a><\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/bryan-caplan-case-for-and-against-education\/\">Bryan Caplan on whether his case against education holds up, totalitarianism, and open borders<\/a><\/li>\n<li><a href=\"https:\/\/web.archive.org\/web\/20221022160628\/https:\/\/forum.effectivealtruism.org\/posts\/LpkXtFXdsRd4rG8Kb\/reducing-long-term-risks-from-malevolent-actors\">Reducing long-term risks from malevolent actors<\/a> by David Althaus and Tobias Baumann<\/li>\n<li><a href=\"https:\/\/web.archive.org\/web\/20221006061149\/http:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/GovAI-Agenda.pdf\">AI governance: A research agenda<\/a> by Allan Dafoe<\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/nita-farahany-neurotechnology\/\">Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers<\/a><\/li>\n<li>Andrei Kolesnikov, &#8220;<a href=\"https:\/\/carnegieendowment.org\/posts\/2022\/04\/putins-war-has-moved-russia-from-authoritarianism-to-hybrid-totalitarianism?lang=en\">Putin&#8217;s war has moved Russia from authoritarianism to hybrid totalitarianism<\/a>&#8221;<\/li>\n<\/ul>\n","protected":false},"author":449,"featured_media":86524,"parent":0,"menu_order":0,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"[fn caplanquote] Caplan [wrote](https:\/\/www.researchgate.net\/publication\/346827408_The_totalitarian_threat):\r\n\r\n\r\n>How seriously do I take the possibility that a world totalitarian government will emerge during the next one thousand years and last for a thousand years or more?  Despite the complexity and guesswork inherent in answering this question, I will hazard a response.  My unconditional probability \u2014 i.e., the probability I assign given all the information I now have \u2014 is 5%.  I am also willing to offer conditional probabilities.  For example, if genetic screening for personality traits becomes cheap and accurate, but the principle of reproductive freedom prevails, my probability falls to 3%.  
Given the same technology with extensive government regulation, my probability rises to 10%.  Similarly, if the number of independent countries on earth does not decrease during the next thousand years, my probability falls to .1%, but if the number of countries falls to 1, my probability rises to 25%.\r\n[\/fn]\r\n\r\n\r\n[fn allendafoe] In [AI governance: A research agenda](https:\/\/web.archive.org\/web\/20221006061149\/http:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/GovAI-Agenda.pdf), Allan Dafoe categorises *robust totalitarianism* as one of four sources of catastrophic risk from AI, emphasising the importance of emerging technologies. He argues:\r\n> Robust totalitarianism could be enabled by advanced lie detection, social manipulation, autonomous weapons, and ubiquitous physical sensors and digital footprints. Power and control could radically shift away from publics, towards elites and especially leaders, making democratic regimes vulnerable to totalitarian backsliding, capture, and consolidation.[\/fn]\r\n\r\n[fn autonomousweapons] Totalitarian leaders often rely on the threat of military force to control their populace. On occasion, military leaders have resisted orders they thought were unjust or tyrannical, undermining dictatorial control. Autonomous weapons would undermine this kind of resistance. (See \"Rebellion of the Army\" in [von Nostitz, 1997](https:\/\/www.jstor.org\/stable\/444808?seq=7).)[\/fn]\r\n\r\n[fn hitler] >The F\u00fchrer gave expression to his unshakable conviction that the Reich will be the master of all Europe. We shall yet have to engage in many fights, but these will undoubtedly lead to most wonderful victories. From there on the way to world domination is practically certain. 
Whoever dominates Europe will thereby assume the leadership of the world.\r\n\r\n-\u200a[Joseph Goebbels, Reich Minister of Propaganda](https:\/\/en.wikipedia.org\/wiki\/Ministry_of_Public_Enlightenment_and_Propaganda), May 8, 1943[\/fn]\r\n\r\n\r\n[fn lansford]According to [*Communism*](https:\/\/books.google.co.uk\/books\/about\/Communism.html?id=MjjTt-TITcUC&redir_esc=y) by Thomas Lansford (p. 10)[\/fn]\r\n\r\n\r\n[fn altman2020] See [Altman (2020)](https:\/\/www.cambridge.org\/core\/journals\/international-organization\/article\/abs\/evolution-of-territorial-conquest-after-1945-and-the-limits-of-the-territorial-integrity-norm\/E81D1E3F2C34CB00D8501BFDB363A1AD) for some discussion.[\/fn]\r\n\r\n[fn offsets] For more information, see the discussion of offsets in the section \"Overview of technology competition\" in [Clare and Ruhl (2024)](https:\/\/dkqj4hmn5mktp.cloudfront.net\/High_Risk_Technology_Competition_01d3b6538a.pdf).[\/fn]\r\n\r\n[fn dsa] This capability has been called a decisive strategic advantage, a term philosopher Nick Bostrom uses in *Superintelligence*. But it's not just AI researchers that think this. Mark Esper, when he was the US Secretary of Defense, [reportedly said](https:\/\/www.iiss.org\/publications\/strategic-comments\/2022\/international-competition-over-artificial-intelligence\/) that \"advances in AI have the potential to change the character of warfare for generations to come. Whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years. We have to get there first.\"[\/fn]\r\n\r\n[fn offencebalance]Advanced cyber capabilities will also help defend against cyberattacks. However, even if AI-boosted cyber capabilities prove defence-dominant in the long term, devastating, offensive capabilities could be advantaged during the transition period. 
For discussion see [Garfinkel and Dafoe (2019)](https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/01402390.2019.1631810), as well as [Schneider (2021)](https:\/\/www.taylorfrancis.com\/chapters\/edit\/10.4324\/9781003179917-2\/capability-vulnerability-paradox-military-revolutions-implications-computing-cyber-onset-war-jacquelyn-schneider).[\/fn]\r\n\r\n[fn caplan] See [Caplan (2008)](https:\/\/academic.oup.com\/book\/40615\/chapter-abstract\/348242235?redirectedFrom=fulltext). \r\n\r\nPhilosopher Nick Bostrom's \"The Vulnerable World Hypothesis\" ([2019](https:\/\/nickbostrom.com\/papers\/vulnerable.pdf)) is also often cited and sometimes portrayed as *advocating for* a global surveillance state. We think this is a mistake. The paper speculates hypothetically that, *if* it were the case that any technological development had some chance of destroying the world, *then* it could also be the case that \"preventive policing and global governance\" may be needed to avoid catastrophe. That is, it's another illustration of the difficult dynamics considered in this article \u2014 the importance of maintaining individual freedom while mitigating collective risk externalities.[\/fn]\r\n\r\n[fn macaskill] See chapter four of William MacAskill's *What We Owe the Future* (2022, Oneworld Publications). MacAskill has also discussed lock-in on the [80,000 Hours podcast](https:\/\/80000hours.org\/podcast\/episodes\/will-macaskill-what-we-owe-the-future\/#lock-in-scenario-vs-long-reflection-012711). (Note that MacAskill is a co-founder of 80,000 Hours.)[\/fn]\r\n\r\n[fn errorcorrection] A totalitarian dictator in charge of such an AI could use error-correcting software, make many copies, and order the AI to transfer its code to new hardware whenever its hardware begins to wear down. 
[\/fn]\r\n\r\n[fn lockin] For much more detail on this point, see Finnveden, Shulman, and Riel's \"[Artificial General Intelligence and Lock-in](https:\/\/docs.google.com\/document\/d\/1mkLFhxixWdT5peJHq4rfFzq4QbHyfZtANH1nou68q88\/edit#heading=h.w0odoleyhzrt)\". They argue that each of the technological and social obstacles to long-term lock-in is surmountable for an AI. For example, to resist value drift they suggest that the AI system could periodically be \"reset\" to avoid drift from learning; to avoid catastrophic failures, they suggest simply keeping many back-up copies of the AI; etc.[\/fn]\r\n\r\n[fn wwotf] See chapter four of [*What We Owe the Future*](https:\/\/whatweowethefuture.com\/uk\/) for more discussion.[\/fn]\r\n"},"categories":[],"class_list":["post-86495","problem_profile","type-problem_profile","status-publish","format-standard","has-post-thumbnail","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/86495"}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/problem_profile"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/449"}],"version-history":[{"count":1,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/86495\/revisions"}],"predecessor-version":[{"id":88354,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/86495\/revisions\/88354"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/86524"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=86495"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=86495"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}