The AI Paradox: Cure or Poison?

Socrates

Posted on: April 24, 2026 / Last Modified: April 24, 2026

A single pharmaceutical capsule split vertically, one half glowing white, the other half dark and etched with red circuit traces, illustrating the AI paradox of cure or poison.

Technology promised simplicity. It delivered complexity. AI promised resolution. It is delivering acceleration. The paradox is not a bug. It is the feature. The question is what we choose to do about it.

***

Every new technology arrives wearing the costume of liberation.

It promises freedom. It promises options. It promises to save us time, money, and effort. And in many narrow ways, it delivers. My phone does in a second what used to take a week of letters and phone calls. My AI assistant drafts in minutes what used to take a day to research. That is real. That is valuable. That is not the whole story.

Here is what nobody tells you when they sell you the tool: the very machines built to simplify our lives have made them radically more complex. Computers were supposed to reduce friction. Instead, they multiplied it. AI was supposed to resolve complexity. Instead, it is compounding it.

This is the AI paradox. And if we do not understand it, we will not survive it well.

Complicated vs. Complex: Computable vs. Livable

There is a distinction we have forgotten, and it is costing us.

Complicated problems are computable. Fusion. Protein folding. Orbital mechanics. Route optimization. The genome. These are enormously hard, but they submit to calculation. Give them enough compute, enough data, enough time, and they yield.

Complex problems are not computable. They are only livable. Being in love. Raising a child. Building a marriage. Running a country. Waging (or ending) a war. Living a life of meaning.

Physicists have a name for this kind of challenge: the three-body problem. Introduce a third gravitational mass into a system, and the math stops giving you clean answers. Now take that, and add the variables of an actual human situation: social, emotional, psychological, financial, biological, moral, spiritual. Only some of those variables are computable. Most are not.

Here is the civilizational mistake we keep making. We try to solve complex problems as if they were merely complicated ones. We bring the hammer of computation to a question that requires wisdom, presence, and judgment. Then we wonder why the hammer keeps breaking the thing.

The How, the What, and the Why

Computers are brilliant at the how and the what. They are silent on the why.

Ask an AI how to build a bomb and, given the right jailbreak, it will tell you. Ask it whether you should, and it has no answer worth the name. That gap is not a minor design flaw. It is the defining limit of the technology.

This is also the old argument, made new. In 1959, C.P. Snow warned of a widening chasm between what he called “The Two Cultures”: the sciences on one side, the humanities on the other. Snow argued that a civilization fluent in only one of them cannot think clearly about either. He was right then. He is more right now.

AI is the ultimate product of one of those two cultures. A civilization that pours everything into STEM tools while letting humanistic literacy atrophy is not becoming smarter. It is becoming lopsided. And lopsided things fall.

Jevons’ Paradox, Applied to Our Lives

Every labor-saving device in history has produced the same strange effect. We do not work less. We work more.

The dishwasher did not free up our evenings. The laptop did not give us more family time. The smartphone did not buy us an hour of silence. The time these tools save gets immediately consumed by whatever expands to fill it. This is Jevons’ paradox, extended from coal to time, and made personal. Efficiency creates demand, not rest.

AI is the next iteration of the same trick. It will save you time. You will not get that time back. The work will simply expand. The meetings will multiply. The inbox will breed. You will produce more slides, more memos, more “content”, and you will feel, somehow, more exhausted than before.

Edsger Dijkstra saw this coming decades ago:

When we had no computers, we had no programming problem either. When we had a few computers, we had a mild programming problem. Confronted with machines a million times as powerful, we are faced with a gigantic programming problem.

AI is complexity at industrial scale. Unless we become deliberate about what we let it scale, it will scale the wrong things first.

Social Is Anti-Social. Simple Is Anti-Simple.

Watch the pattern. It repeats.

Social media was supposedly built to bring us together. It mostly tears us apart. The product advertised as social is, in its structural incentives, profoundly anti-social.

“Simple” software now requires a user manual, three integrations, and a support ticket. Simple is anti-simple.

“Fast” food takes twenty minutes in the drive-through and another hour to recover from. Fast is not fast.

“Easy” apps demand two-factor authentication, a password manager, and a phone you cannot put down. Easy is not easy.

“Smart” homes need more tech support than dumb ones. Smart is not smart.

Every promise the tech industry has made in the last two decades about frictionlessness has delivered its opposite. We were sold convenience and received dependence. We were sold connection and received isolation. We were sold tools and received masters.

AI is following the same pattern, on a bigger scale, at a faster clip. Which means the next version of this story is already written, unless we write a different one.

Capability Scales. So Does Risk.

Here is the rule almost no one in Silicon Valley wants to say out loud: as the capability of AI grows, so do the risks that come with it. They are not independent variables. They are the same variable, seen from different angles.

A more powerful model can cure more diseases, and design more weapons. A more capable agent can book your travel, and drain your bank account. A smarter system can write a novel, and write a thousand convincing phishing emails before you finish your coffee. Capability is leverage. Leverage is indifferent to ethics.

And the hand on that leverage belongs, still, to a Stone Age mind. Our brains evolved on the savanna to track small tribes, seasonal food, and visible predators. They did not evolve to govern planetary-scale systems operating at machine speed. Capability compounds exponentially. The wisdom meant to steer it does not.

Every time we raise the ceiling of what AI can do, we raise the floor of what can go wrong. The graphs of benefits and harms do not diverge. They climb the same curve. Anyone telling you otherwise is either selling something or has not thought hard enough about it.

This is not a reason to stop building. It is a reason to stop pretending the two curves are separate, and to build as if both of them matter. Because both of them do.

AI Is a Catalyst. The Only Question Is: Toward What?

Here is the line I want you to keep: AI is not a destination. It is a catalyst.

It accelerates whatever humanity is already doing. Curing disease, yes. Decoding the genome and mapping the brain, yes. Making art, yes. It is also accelerating surveillance, disinformation, fraud, war, and loneliness. And it is driving a quiet epidemic of cognitive offloading. We outsource our own thinking and call it progress.

The technology is neutral. The direction is not.

Kevin Kelly has a useful way of framing this. Technology, he says, is a possibility factory. It does not tell us what to do. It tells us what we can now do. Each new capability is a new option, and with it a new obligation to choose wisely among options we did not have the day before.

Kelly also offers a word for the honest version of progress: protopia. Not utopia. Not dystopia. A world that gets incrementally better over time, while continuously generating new problems alongside each new solution. Protopia is not inspiring. It is not apocalyptic. It is real. And it puts the burden exactly where it belongs: on us, the ones doing the choosing.

This is why I keep coming back, in article after article, interview after interview, to the same argument:

AI without a good why is a cancer, not a cure.

A cancer, after all, is nothing more than unregulated growth without purpose. Sound familiar?

We already have the how. We are drowning in the what. We have neglected the why almost completely.

Progress Is a Direction, Not a Speed

There is a conversation happening right now, loudly, about whether AI progress is too fast or not fast enough. I find the entire conversation almost beside the point.

Nothing stands still. Even standing still is, in a moving world, a kind of motion. So the velocity question is a distraction. The real question is the one almost nobody is asking.

In which direction are we progressing?

It is tempting to measure our success by how fast we are going. It is far more useful to ask where we are going and whether it is a place we actually want to arrive at. A rocket pointed at the wrong planet does not become right by going faster. It becomes wrong, faster.

Strategy Is the Work Before the Work

Seth Godin has a line I steal often, because I cannot improve on it:

Strategy is the hard work we do before we do the rest of the hard work.

The hard work before the hard work is not a prompt. It is not a roadmap. It is not a quarterly plan. It is a single, stubborn question that sits before every other question.

Where to?

If AI is a catalyst, where do we want it to accelerate us? If technology is the how, what is our why? If capability and risk are rising on the same curve, which one are we actually steering? If the possibility factory is running at full tilt, who is choosing among the possibilities, and on what grounds?

These questions are not decorative. They are the only ones that matter. Everything else is implementation.

The cure-or-poison question was never really about the molecule, the code, or the model. It was always about the dose, the intent, and the hand that administers it.

AI will be a cure for the civilizations that remember the why.

It will be a poison for those who forget.

The choice is ours.

It always was.

It will not always be.
