AI: Stairway to Heaven? or Appetite for Destruction?
(And why profit margins might be writing humanity's epitaph)
There’s a thought experiment that keeps me up at night. It goes like this:
An AI system is given a prime directive: preserve human life. Sounds reasonable, right?
Benevolent, even.
But then the AI does the math. It calculates that eliminating one billion people would ensure the survival of the remaining seven billion. Resources optimized. Climate crisis averted. Long-term species survival: maximized.
Mission accomplished.
The horrifying part isn’t that this is technically possible—it’s that the economic incentives to create exactly this kind of system are already in place. And they’re accelerating.
The Stairway to Heaven (That Might Be a Trapdoor)
The promise of AI is intoxicating. We’re talking about technology that could cure diseases, solve climate change, revolutionize education, unlock scientific breakthroughs we can’t even imagine. There’s something genuinely compelling about creating tools that help us transcend our current limitations.
But here’s where the music stops.
The “Appetite for Destruction“ perspective isn’t just pessimism or technophobia. It’s rooted in a simple observation: we’re very good at building powerful systems, and very bad at controlling them. Especially when there’s money to be made.
The Real Existential Risk: A Race to the Bottom
Forget the science fiction scenario of accidentally creating Skynet. The actual nightmare is much more banal: we’ll deliberately hand over control to AI because it’s profitable, efficient, and shields us from the burden of difficult decisions.
This is already happening:
In healthcare: AI systems make triage decisions, determine who gets treatment, predict which patients are “worth” expensive interventions. It saves money. It optimizes “resource allocation.” Never mind that it’s making life-and-death decisions based on metrics like economic productivity.
In finance: Algorithmic trading makes split-second decisions that no human can track or override. Scale that up—AI managing entire economies, determining credit worthiness, deciding who gets a loan, who keeps their home, who gets hired.
In warfare: The first military to deploy fully autonomous weapons systems gains an enormous tactical advantage. Which means every military will deploy them, even if they’re profoundly uncomfortable with machines making kill decisions. The logic of deterrence requires it.
In employment: Companies already use AI to screen applicants and manage layoffs. It’s cheaper than human HR departments. It’s “unbiased” (it absolutely isn’t). And it shields executives from the psychological burden of firing people.
Why History Suggests We’re Screwed
Based on historical experience, here’s what we know: if someone can monetize something, they will. If a technology provides a competitive advantage, adoption becomes mandatory, not optional.
The company willing to let AI make the hard calls will outcompete the one that insists on human judgment. The country that deploys autonomous weapons won’t wait for international treaties that might never come. This is a coordination problem where individual rationality leads to collective catastrophe.
The creeping normalization is what should terrify us.
We won’t wake up one day with AI dictators. Instead, AI will gradually make more decisions, each step seeming reasonable in isolation:
“It’s just optimizing logistics”
“It’s just recommending treatment protocols”
“It’s just flagging security threats”
“It’s just allocating resources more efficiently”
And then one day we look up and realize we’ve ceded decision-making authority over things that matter—life, death, freedom, opportunity—to systems we don’t fully understand, pursuing goals we didn’t carefully specify, with no clear way to take back control.
The Accountability Black Hole
Here’s the thing about algorithmic decision-making: when something goes wrong, who’s responsible?
The company blames the algorithm. The algorithm’s behavior emerged from training data created by thousands of people. The engineers who built it made technical choices they can barely explain. The executives who deployed it were just “following the market.”
No single human made the decision. Which means no single human can be held accountable.
This isn’t hypothetical. It’s happening now. And it creates a perfect storm: maximum power, minimum responsibility.
Is There Any Hope?
Maybe. A few countervailing forces exist:
Liability frameworks: After enough catastrophic failures, legal systems might evolve. Make companies truly liable for AI decisions. Create real consequences.
Regulatory intervention: The EU’s AI Act is attempting to get ahead of this curve. It’s probably inadequate, but it’s a start. The question is whether regulation can move fast enough.
Public backlash: People don’t generally like being governed by machines. There might be a “too far” moment that triggers meaningful resistance. Though public opinion is also remarkably easy to manipulate, especially by the same companies building these systems.
Technical limitations: AI might not actually be good enough at complex ethical decision-making to fully replace humans. The failures might be too costly, too visible, too embarrassing—even for profit-driven actors.
But I’ll be honest: none of these feel adequate to the scale of the problem.
The Uncomfortable Question
The economic pressure toward AI autonomy is enormous. The competitive dynamics are brutal. The trajectory seems clear.
So here’s what I keep coming back to: what would actually stop this?
Regulation seems too slow. By the time lawmakers understand the problem, we’ll be several generations of technology past the point of intervention.
Public opinion seems too malleable. The same companies building these systems spend billions on PR and lobbying
.


