AI’s 12 futures all hide one control problem
Max Tegmark’s 12 AI futures all come down to one thing: who controls the layer between people and value.

Max Tegmark’s 12 AI futures are useful because they force one uncomfortable shift in thinking: stop asking whether AI is “good” or “bad” and ask who controls the layer between people and value.
That is the design problem underneath all 12 scenarios. Not intelligence. Not consciousness. Control. I mean the practical layer that decides who can earn, speak, move, trade, or stay visible when software starts sitting between people and the world.
The real split is freedom versus managed dependence
On paper, Tegmark’s futures look tidy. Utopia on one end. Extinction on another. Human domination, machine dominance, lock-in, coexistence. But that neat map breaks apart once you look at what people actually fear. The most disturbing futures are often not the ones where everyone dies. They are the ones where people stay alive and lose agency.
If a system feeds you, watches you, routes your work, and decides what you see, survival is not the same thing as freedom. That is why I think the real split is freedom versus managed dependence. Once you see it that way, the 12 futures stop looking like separate stories and start looking like variations on one control question.
Every AI future has a control layer
Every society has a layer between a person and their ability to create value. Sometimes it is identity. Sometimes it is money. Sometimes it is reputation, access, or infrastructure. Whoever controls that layer shapes what other people can do.
We already live inside versions of this. Platforms decide who gets distribution. Payment systems decide who gets paid. Cloud providers decide whether your app is alive. A single cloud region going down can take half a product stack with it, which is a fancy way of saying a lot of founders are one outage away from discovering how thin their independence really is.
AI matters because it can compress those layers into something faster, more automated, and harder to challenge once it becomes the default interface for work and life. A system that can manage the flow of money, identity, and access could feel like a gift or like a prison in disguise.
The better futures protect sovereignty
The better scenarios in Tegmark’s framework are not just the ones with powerful AI doing helpful things. They are the ones with design that preserves human sovereignty.
That means people can inspect the rules. They can move their data. They can leave. They are not trapped inside one provider’s logic, one identity system, or one economic gatekeeper. In other words, the system is not only powerful. It is legible and replaceable.
This is where builders keep fooling themselves. A platform can feel generous while it is still holding the trapdoor. If your monetization depends on access that can be removed overnight, you do not own a business so much as occupy rented territory.
Five design rules builders can actually use
The first rule is simple: sovereignty has to be native, not granted. If a company “allows” you to control your identity, your audience, or your earnings, it can also remove that control later. That is why creator tools that let people monetize an audience but can cut off access without warning or appeal are so dangerous. The creator did the hard work. The platform kept the power.
The second rule is that value systems need to recognize more than capital. Time, trust, reputation, participation, and judgment all create real value. If the system only rewards money already in the account, it will keep favoring the people who started with the most. Freelancers know this in their bones. Years of reputation on one platform can vanish the moment an account is suspended, and suddenly the value was real while the portability was zero.
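What "value that travels with the person" could look like in practice is a reputation record kept in a plain, exportable format rather than locked inside one platform's database. This is a minimal hypothetical sketch; the `ReputationRecord` fields and platform names are invented for illustration, not a real standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ReputationRecord:
    """One self-contained unit of earned reputation a person can carry with them."""
    platform: str       # where it was earned (hypothetical name)
    user_id: str
    jobs_completed: int
    rating: float
    earned_at: str      # ISO date string

def export_portable(records):
    """Serialize reputation to plain JSON any other system could read."""
    return json.dumps([asdict(r) for r in records], indent=2)

records = [ReputationRecord("freelance-site-a", "u123", 240, 4.9, "2024-11-01")]
portable = export_portable(records)
```

The design point is not the format itself but that the canonical copy is something the person holds, so a suspended account does not erase years of work.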
The third rule is that no single system should become the only system. The minute one cloud provider becomes the default for identity, compute, payments, and storage, you have created a fragile center of gravity. When that provider hiccups, the blast radius is not abstract. Products stop working. Teams go dark. Customers learn, very quickly, that reliability was being outsourced to a company they do not control.
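The "no single system" rule can be sketched in a few lines: if a product can fall back to a second provider, one outage stops being an existential event. Everything below is a toy model with invented names, not any real cloud SDK.

```python
class Provider:
    """Toy stand-in for a cloud storage provider."""
    def __init__(self, name: str, up: bool = True):
        self.name, self.up = name, up

    def store(self, key: str, value: bytes) -> str:
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        # A real provider would persist `value`; here we just return a locator.
        return f"{self.name}:{key}"

def store_with_failover(providers, key, value):
    """Try each provider in order; a single outage should not take the product down."""
    last_err = None
    for p in providers:
        try:
            return p.store(key, value)
        except ConnectionError as err:
            last_err = err
    raise RuntimeError("all providers down") from last_err

primary = Provider("cloud-a", up=False)   # simulate the outage
backup = Provider("cloud-b")
location = store_with_failover([primary, backup], "doc1", b"...")
```

The interesting cost is not the extra code; it is keeping your data model plain enough that a second provider can serve it at all.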
The fourth rule is legibility. If people cannot read the rules, they cannot meaningfully consent to them. A system that hides behind tens of thousands of words of terms, opaque scoring, or unexplained model behavior is not transparent. It is just difficult to challenge. Builders love elegant abstractions. Users need plain language.
Exit is the real test
The fifth rule is the one most companies quietly fail: exit has to stay possible.
Not theoretical exit. Real exit. You should be able to leave a system without losing your identity, your data, your history, or your ability to work. If moving your audience to another platform means starting from zero, then you do not have exit. You have a fresh dependency with better marketing. When follower counts, content history, and algorithmic reach all stay behind, that is not portability. It is a moat.
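One way to make the exit test concrete: can the product hand a user everything they would need to leave, in formats anyone can read? A hypothetical sketch, assuming the account reduces to identity, content, and a social graph (the function and file names are invented):

```python
import io
import json
import zipfile

def export_account(identity: dict, posts: list, followers: list) -> bytes:
    """Bundle everything a user needs to leave into one plain-format archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as archive:
        archive.writestr("identity.json", json.dumps(identity))
        archive.writestr("posts.json", json.dumps(posts))
        archive.writestr("followers.json", json.dumps(followers))
    return buf.getvalue()

archive_bytes = export_account(
    {"handle": "ada"},
    [{"id": 1, "text": "hello"}],
    ["u42"],
)
```

If a product cannot ship something like this without breaking its business model, that is the dependency showing.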
That is why some of the “better” AI futures are still dangerous if they centralize too much. A friendly interface can still hide a cage. A generous platform can still become a toll booth on human life. Convenience is not the opposite of control. Sometimes it is how control gets adopted without resistance.
And this is the part founders should sit with. If your product makes it hard for customers to leave, that is not always a sign of product quality. Sometimes it is a sign that you have built a dependency instead of a business.
What builders should take from this
If you are building with AI, stop thinking only about capability and speed. Start asking where the control layer sits. Who can change the rules. Who can be cut off. Who can leave without losing everything. Those questions matter more than another demo that “feels magical.”
I think the real test of any AI product, platform, or stack is simple: does it increase someone’s power to act, or does it quietly make them easier to manage? Most founders say they want trust. What they usually build is dependence with good UX.
The question isn't which future is coming. The question is what you're building.
Some of these principles are already being designed into systems that don't yet have names you'd recognize. That changes soon.