69 Comments
Michael A Alexander:

What I don't get is how economic growth continues. Who are these AI-powered companies selling to? If people are unemployed and on some sort of basic UBI, they have essentially no buying power. So sales will collapse, and the world will enter a permanent depression.

Abram Pafford:

I agree. The question of what is being made and who is buying it seems to be step 2 in the 3-step Minion plan for economic dominance: 1. Steal underwear. 2. ????? 3. Profit!

Ahead:

I genuinely can’t tell if this is some psychotic triumphalist pro-AI piece or not. This future is one of death. We will all be killed by the elites if this comes to pass. All AI and any computer more advanced than a calculator needs to be dismantled. Wash the entire world in atomic fire if necessary.

If your predictions come to pass, it is the end of humanity.

The Elder of Vicksburg:

Butlerian Jihad NOW

Andy:

We were all going to die from climate change anyhow. This will only speed up humanity's decline, because amoral sociopaths will be the ones who control the AIs. This is like rearranging the deck chairs on the Titanic as it slowly sinks.

Aaron Weiss:

This is wonderful and helped me solidify my thinking.

It seems strange to me that being a year or more behind in tech with tiny doubling times wouldn't give the US (or another actor) a massive advantage and allow it to suppress other groups, the same way the CCP did inside China.

If one group gets absolute power, they could then dedicate even 50% of their resources to human welfare.

These articles are especially interesting to me as they are almost fully focused on the impact on humans, with AI progress only being mentioned where it impacts humans.

I think Neuralink and similar (including VR) will have massive influences on how the story pans out.

Rudolf Laine:

> It seems strange to me that being a year or more behind in tech with tiny doubling times wouldn't give the US (or another actor) a massive advantage and allow it to suppress other groups, the same way the CCP did inside China.

I agree that short doubling times make it plausible for one actor to outpace all others. Some factors that make this dynamic less extreme here, though, are:

(1) The gap between the US & China in this scenario is not very large (the US has some lead in AI, especially diffusion, but China has an initial lead in non-general-purpose robotics and sheer manufacturing capacity, power, minerals, etc.)

(2) Both the US and China have significant leverage over the other (e.g. classic MAD, but also other avenues of destructive retaliation even as missile shields are built), lots of covert ways to sabotage or slow down the other, and both care existentially about not being squashed during the build out. A combination of overt threats, covert sabotage, and negotiation could lead to a situation where neither gambles on decisively outrunning the other (and potentially provoking retaliation), and instead tensions are managed and both continue as relevant entities. (Even if one is permanently a few times larger due to starting the buildout a bit sooner—but also note that if the robotics doubling time is roughly 6 months as in the above scenario, a 1-year head start from a comparable base means you're only 4x smaller)

Consider the US and USSR during the Cold War. The US did not do a pre-emptive nuclear strike on the Soviet Union to prevent them from developing nukes. Once the Soviet Union had nukes, but only a small number, the US defense establishment mistakenly feared the Soviets were far ahead. The peak (post-1949) moment the US could've gone for a strike and unilateral domination would've been in 1961, when in short succession the Soviets threw a major provocation by making noises about taking West Berlin and then putting up the Berlin Wall, and a new US intelligence estimate corrected the "missile gap" fears and showed the USSR had exactly 4 operational ICBMs. The Kennedy administration commissioned a study, including on "flexible" nuclear reactions. It concluded a counterforce first-strike was feasible for the US. In The Wizards of Armageddon (great book), Fred Kaplan summarises the reaction as:

> “Now, in the early autumn of 1961, when the United States had preponderant nuclear superiority over the Soviet Union, when a virtually disarming counterforce strike appeared technically feasible and when it looked like the United States might have to bring atomic weapons into play, Paul Nitze balked. What if things didn’t go according to plan? What if the surviving Soviet weapons happened to be aimed at New York, Washington, Chicago—in which case, even under the best of circumstances, far more than a few million would die? There were just too many things that could go wrong. And even if they went right, two or three million were a couple of million too many.

> [...]

> If ever in the history of the nuclear arms race, before or since, one side had unquestionable superiority over the other, one side truly had the ability to devastate the other side’s strategic forces, one side could execute the RAND counterforce/no-cities option with fairly high confidence, the autumn of 1961 was that time. Yet approaching the height of the gravest crisis that had faced the West since the onset of the Cold War, everyone said, “No.”

Aaron Weiss:

In which case they could dedicate 50% of their fleet once to caring for humanity, and it wouldn't really matter would it?

Robert Höglund:

This is by far the best and most detailed attempt I've seen at predicting the next 10 years. Thank you so much for writing it. Extremely impressive. It's a depressing scenario, of course, but I think you are broadly right.

Zeta:

Reminds me of 10 years ago when autonomous cars were going to radically overhaul transportation.

We’re still waiting.

Like general AI, it’s always 10 years away.

Olivia Haim:

A fun story, but to me the timeline seems silly. AI will control the world in just 5 years, but right now these raggedy LLMs can’t count the Rs in strawberry, do basic math, or be trusted to tell the truth.

Dave Foulkes:

This is imaginative even if I don’t agree with all the premises (the future is speculative - but we have to speculate so thank you!).

But I work in the marketing world, and from my vantage point the AGI hype - where AI takes on strategic decisions as depicted here - is a form of content marketing by big tech. Keep promising that an AGI-style takeover is around the corner, and who wants to miss that investment train?

Of course AI is not nothing, but it will come unstuck for the same reasons the dot-com bubble burst. The physical world of stuff and flesh and bone is not as ready for it. Pets.com didn’t fail because people weren’t online - it failed because the shipping logistics weren’t there at the scale needed. The costs destroyed them.

The promise of AGI has caused / is causing a bubble that will burst - especially because ‘it’s different this time’ (it always is).

The real innovation will come after that. Meanwhile humans have rather a lot of civics type stuff to sort out - I wrote a similar future sequence mixing in AI hype and the politics of our time : https://open.substack.com/pub/beyondsurvival/p/what-happened-next?r=40ir&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

eg:

I don't think the analogy of pets.com is especially comforting, given that it was followed by Amazon. So if we get a pets.com-style bust first, that mostly just delays the depressing scenario by the couple of years spent recovering from it.

Dave Foulkes:

Fair point - wasn’t so much an endorsement of what comes next but just that the level of investment is getting ahead of itself and into a bubble like the dot com one. The fallout from bubbles bursting effects everyone

Kristupas:

What the hell are you even talking about

Mike Dobbles:

All three parts were fantastic. Thank you so much for writing this! For your 2040s+ scenario, you envisioned a world dominated by elites and powerful nation-states - totally plausible. Did you consider alternate futures? For example, one that goes the opposite way from your imagined centralized power and instead toward societal fragmentation? (Robot-powered, semi-self-sufficient homesteads defended by drones???) I think you had a bigger range of possibilities to choose from in part 3. I’d love to hear what other possibilities you considered.

eg:

This is eerily similar to my own thinking over the last decade. I don't think techno-optimists pay enough attention to how terrible things are likely to turn out even in the "AI is inherently easy to align" scenario.

And the doomers are too busy worrying about the "AI will kill us all scenario" to bother advocating pumping the brakes using much more politically pitchable concerns like those in this series.

Ultimately, human societies thrive (or are even tolerable) when humans need each other. And AI makes it so they don't.

Jax:

Interesting, well-considered, and well-imagined futures here. It reads somewhat as a warning about what we might feel is an inevitable eventuality. However, the future isn’t something that just happens to us. It’s something we shape through governance, education, imagination, and action.

If we don’t like the trajectory, we must ask: what are we doing today to change it?

Hemlock Hobo:

This was really good.

The one snag is that I don't think people like AI. I predict a new faction of neo-Luddites who swear off most technology.

Also, this scenario seems to lead to mass suicides and depression.

PADDY1000:

Absurd. I can only interpret this as an entertaining SciFi or satire.

It's amusing that people find this account believable; the petroleum empire has climaxed and we are on the downslope.

We will hit resource limitations this decade that will keep AI models in their infancy only for them to be abandoned/dismantled as infrastructure falls into disrepair.

Mining is now done with enormous quantities of toxic chemicals in order to maintain the current level of metal production; swathes of the earth are left barren and lifeless for a few tonnes of metal, a quantity that cannot justify the ecological destruction and resource depletion it necessitates.

The era of supposed endless surplus is in fact ending, but those who have feasted upon it their whole lives cannot comprehend such cataclysm, so they seek comfort in pathetic narratives of endless growth and modernity.

eg:

You can always justify any amount of ecological destruction, and this scenario does not suppose or require endless surplus. It ends much the same way presuming highly constrained resources, because a sharp reduction in demand is always just a couple of genocides away.

Which is to say, if you have robots, most humans and most economic activity become redundant, and you only need whatever resources are required for the rich to run their robots and protect their assets.

PADDY1000:

I see what you're getting at.

Troubling viewpoint.

I guess once you have the necessary technology to make robots that can actually do things without human assistance, AND you have a group of people in control of this technology who are all equally callous, this becomes plausible.

Paul LaFontaine:

Very imaginative and a bold attempt to put details on something that is hard to grasp through all the hype and current ridiculous use cases for the technology. All three parts were excellent reads and gave me the opportunity to think in broader strokes. I appreciate the effort.

Of course, there are missing pieces that we all can see from our respective domains. There will have to be several kinetic military and cyber struggles added to the timeline. You have some, and I suspect those "wars" will impact the propagation of the technology.

It seems religion will play a resurgent role, and not just small, cultlike gatherings. We could see an entirely new faith emerge on the back of this cultural shift. As humanity is moved away from being a means of production, it will turn to sports and religion as sources of challenge and meaning. We see it today. Some form of control structure (elites or AI) will at some point have to face a call for cleansing à la the Butlerian Jihad, as mentioned in other posts here. That will be another kinetic event.

Terrific work. Thanks for sharing.

Roger Ison:

It seems to me that as profits accrue more to capital than labor, in a positive feedback loop, there will inevitably be pressure to broaden ownership - in effect, to reserve a portion of equity in every enterprise for the general public, or for a sovereign wealth fund. And if, after all, most people are making a living by taking in each other's laundry, then public ownership becomes inevitable.

Oli G.:

Well, ecological overshoot is not taken into account, right? How could GDP still multiply without running into resource shortages?
