How good would a CCP-dominated AI future be?
Not ideal, but not terrible
I am not American. But compared to China, I have plenty of friends from the U.S., Australia is more culturally similar to the U.S., and I have spent more time there. So, is the fact that I support the U.S. over China in the AI race just selfish and downstream of my personal circumstances?
I would like to think not. But it is worth seriously considering what the future would be like if the CCP becomes the dominant player in AI development. Dominance in AI development could (but would not necessarily) give China a decisive strategic advantage and control over the galactic future. Here, I think through what a CCP-led future would mean for human flourishing, and avoiding risks from misalignment and war.1
Present-day harms
I care about creating a Better Future – not just today’s world minus poverty and disease, but a utopian future qualitatively different from today. How likely are we to achieve such a future if the CCP is in charge?
First, I think some common reasons to dislike the CCP aren’t as decisive in the long term as they might seem. I take a fairly utilitarian perspective here; placing intrinsic value on diversity, democracy, or human rights regardless of welfare outcomes makes the picture look considerably worse.
Persecuting minorities: China is (in)famously not a nice place for ethnic or religious minorities (e.g. Tibetans, Uyghurs). But in a CCP-controlled future, the vast majority of people (whether biological or digital) will likely be culturally Han Chinese. This is because minorities may be successfully (if brutally) assimilated, or they could simply be underrepresented in space colonization and digital minds programs. If we’re scope-sensitive and thinking about trillions of future beings, the persecution of minorities in the 21st century, while deeply tragic, will not feature prominently in the overall (dis)value of the future.
Repressing freedom: But it isn’t just minorities that have a hard time in China – everyone’s speech and action is closely policed, which is arguably incompatible with a flourishing society. However, this too might change: you only need to rule with an iron fist if you are scared of losing your grip on power. In an ASI-enabled CCP dictatorship, there could be common knowledge that overthrowing the government is impossible, so leaders might not have as much to fear from protest and dissident speech. For instance, miniaturized drones could simply incapacitate anyone attempting serious violence, reducing the need for pre-emptive thought-policing. Of course, such a society would still be meaningfully unfree in some senses, but narrowing the set of impermissible activities to violent rebellion alone could be a vast improvement on today’s repression of free speech.
That said, there are strong counterarguments here. Historical authoritarian regimes have rarely relaxed control even once firmly entrenched. And free speech that is permitted only because it cannot change anything arguably doesn’t contribute much to flourishing. So while I think some loosening of day-to-day repression is possible, I’m far from confident; I tentatively think an ASI-powered CCP might allow somewhat more personal freedom than exists today.
Economic stasis: Centrally planned economies have historically not produced innovations at the same rate as liberal democracies. Restricting the free market is putatively the road to serfdom. However, ASI could for the first time allow centralised information processing to be competitive with the distributed information processing of the market. It may not be fully efficient, but an ASI-powered centralised economy is likely to avoid catastrophic blunders like the Great Leap Forward.
Based on these considerations, I tentatively expect that the average welfare of individual subjects in a CCP-led future would be fairly high—perhaps better than many pessimistic portrayals suggest. However, I think this still misses out on most possible value.
A Flourishing Future
Moral innovation: The truly best futures may require substantial moral reflection and innovation, ending up very different from today. Recent centuries have seen enormous moral progress: increasing consideration of the interests of peasants, women, ethnic minorities, animals, and future people. My impression is that most of this innovation has originated in the West and been exported later, if at all, to China and other authoritarian states.2 Moral philosophy research also seems far stronger in the West than in China. The ethical schools of thought I’m most aligned with—longtermism, sentientism, effective altruism, and utilitarianism—are far more prominent in the West (though still very niche).3
Western countries appear more likely to expand the moral circle to include animals.4 If the far future contains vast numbers of animals (or especially digital minds), the ruling culture being more pro-animal might matter greatly. Of course, the U.S. has awful factory farming too, so perhaps isn’t that much better.
It is also interesting that China ranked last out of 24 major countries on charitable giving as a percentage of GDP, with 0.03%, compared to the U.S. at 1.44%. But I don’t put much weight on this, given the very different cultures and economies of the two countries.5
Pluralism, liberalism, and the long reflection: Despite my tentative prediction that China might become less repressive if it controlled the future, I don’t expect China to become a liberal democracy. Power will likely remain immensely concentrated in one or a few CCP leaders. And for all their faults, liberal democracies still seem far better at dynamism and taking new ideas seriously. If something like “the moral truth” exists to be discovered, it will probably look quite weird and different from any current ideology. A pluralistic, liberal society has a better chance of progressing towards the moral truth; Xi Jinping Thought surely isn’t the last word on moral truths in the universe. Even under moral anti-realism, a more pluralistic moral reflection process may produce better outcomes by most people’s lights.
It’s worth noting that Taiwan, which shares Chinese cultural heritage but developed democratic institutions, scores much better on liberalism, pluralism, and moral/institutional innovation than the mainland. This suggests the issue is less about “Chinese values” and more about the governance system the CCP has imposed.
So, even if a CCP-run future delivers reasonable welfare for most beings, I expect it to miss out on the vastly greater value that could be unlocked through continued moral progress and liberal dynamism. The difference between a “pretty good” future and a truly excellent one could be astronomical in a universe-spanning civilization.
Avoiding AI catastrophe
But before we even get to designing utopia, humanity needs to safely navigate the acute risks associated with developing ASI. How would a Chinese lead in AI affect our chances of avoiding misaligned AI takeover and war?
Misalignment: Historically, most work outlining risks from misaligned AI and potential solutions has come from the West. Some safety work is emerging from China, but my impression is that there are still far fewer people who deeply grasp the risks from misaligned ASI. Part of this simply reflects that the West leads in AI research generally, not some deep cultural difference. Still, by default, I expect a Chinese lead in AI development to mean less effort from the leading AI project in preventing AI takeover.
Moreover, given that the US is currently ahead, if China has a lead, it will likely be a narrow one, with both the US and China racing recklessly to avoid falling behind. This would be terrible for doing deep alignment work. Conversely, if the US leads, its lead is more likely to be a large one, allowing it to slow down and invest more in safety work at the crucial moment (though whether it actually would is another question).6
One countervailing consideration: conditional on a Chinese lead, China’s AI developers have probably been centralized under state control, which could reduce within-country racing between projects and potentially allow for more safety work. But this effect seems relatively weak, and the centralization itself creates other problems. Overall, I expect a Chinese lead to significantly harm our chances of solving alignment in time.
War: Forecasting which AI development pathways are more likely to lead to a US-China war is extremely difficult. As I’ve argued previously, the commitment problems created by the possibility of decisive strategic advantage make rational war more likely than in typical geopolitical contexts.
One side (likely the US) having a large lead could reduce the chance of war, as the laggard would recognize its low chances of success (whereas in a close race the laggard has a realistic shot at catching up, and so keeps racing). Conversely, the laggard might grow desperate if it is far behind, or be unwilling to “lose face” by accepting a lopsided bargain. Overall, the interplay between the size and direction of an AI lead and the risk of war seems murky.
Conclusion
So, I have reaffirmed the traditional conclusion that a US lead is good. What should we do about this? Probably nothing new – I think this validates the AI governance community’s focus on denying China access to AI compute, and on making the US government take AI more seriously. Still, given the possibility of a Chinese lead in AI (and, thereafter, maybe domination of space futures), an increase in people thinking about AI safety and moral innovation in China seems great.
1. I focus less on concentration of power as a distinct risk here because, as discussed in the flourishing section, power is already highly concentrated in China. The question is more “what do they do with it?” than “will they accumulate it?”

2. A partial counterexample is that Soviet gender norms were arguably more egalitarian than those in the West from earlier on, for instance in women’s labor-force participation rates.

3. Interestingly, one could argue that the CCP’s willingness to sacrifice individuals for collective goals reflects a kind of crude utilitarianism. But I don’t think the CCP is particularly utilitarian in the philosophical sense; it is more that they don’t value individuals much.

4. An alternative hypothesis is that animal-friendliness is a ‘luxury belief’ associated with living in a rich society, and that China hasn’t been rich for long enough for the cultural effects to flow through.

5. I don’t think this ranking is a great guide to the moral fiber of a country or anything (e.g. the Nordics are also relatively low). It is just one small piece of evidence.

6. The longer the US’s lead, the more likely it becomes that key decision-makers choose to slow down and do more safety work. If the US is one year ahead, taking a three-month safety pause seems more feasible than if it is only one month ahead.