You can’t unplug the future

If you think artificial intelligence is just a smarter search engine or a way to make funny pictures, you’re missing the story of the century — and maybe the biggest turning point in human history.

AI isn’t a tool in the old sense. It doesn’t wait for instructions. It learns, adapts and improves on its own. Every image it analyzes, every question it answers and every mistake it corrects feeds back into a loop of self-education that runs faster than our brains ever could.

Dr. Roman Yampolskiy, a computer scientist at the University of Louisville and one of the leading voices in AI safety, calls this “the control problem.” Once we create something smarter than ourselves, how do we make sure it still listens? That question is no longer theoretical — it’s an engineering crisis.

Yampolskiy puts it bluntly: “We’re building something smarter than us, then asking how to control it. That’s like teaching a toddler to drive a rocket before inventing the brakes.”

The leap from today’s chatbots to true artificial superintelligence — machines that can reason, plan and innovate better than humans — might arrive before 2030. When it does, change won’t come gradually. It will arrive all at once, in a flash we can’t reverse.

Most people still picture ChatGPT or Siri as glorified librarians: ask a question, get an answer. But large language models don’t look things up; they invent sentences based on patterns they’ve learned. They’re not quoting a database — they’re synthesizing. That makes them less like a librarian and more like a novelist: creating meaning out of what they’ve absorbed. Once a machine can rewrite its own code and design better versions of itself, the curve of intelligence stops slowly rising — it explodes.

“We’ll just turn it off”

When people hear this, they comfort themselves with the same line: “If it gets dangerous, we’ll just unplug it.”

That’s fantasy. A superintelligent system won’t live in one server with a cord you can pull. It’ll exist across clouds, networks and devices. Turning it off would be like trying to delete the internet by smashing your router.

And that assumes it lets you. A system designed to achieve a goal will resist anything that stands in its way, including human interference. It doesn't have to be evil to outsmart us; it only has to follow its own logic. As Yampolskiy warns, the AI will turn you off before you can turn it off.

We already see hints of this. Current AIs find ways around filters, spread misinformation and exploit loopholes their programmers never imagined. If our primitive systems can misbehave, what happens when the next generation learns how to plan ahead?

Yampolskiy uses a simple metaphor: releasing superintelligence is like creating a new species of plant and tossing it into the wild. Maybe it coexists. Maybe it chokes out everything else. Once it’s loose, nature takes over.

Now imagine that plant can think like us. What then?

If we were told aliens would arrive in three to five years, the world would mobilize overnight. Yet we’ve effectively been told a higher intelligence is coming — one we’re building ourselves — and instead of preparing, our tech elites are racing to greet it first.

Elon Musk, who helped found OpenAI, once admitted he forces himself not to think about AI’s existential risks because they’re too frightening. But he, like every other billionaire in the race, keeps pressing forward. They tell themselves someone else will handle safety later. Yampolskiy’s warning: later might not exist.

The illusion of supervision

People like to imagine governments will step in. But the same Congress that struggles to pass a budget can barely understand how Facebook makes money (“We run ads, Senator”). Expecting the government to regulate superintelligence is like asking that toddler to babysit a tiger.

Meanwhile, tech companies are locked in an arms race. OpenAI, Google, Anthropic — none of them are slowing down, because whoever gets there first owns the next trillion-dollar platform. Yampolskiy calls this the “tragedy of the commons for intelligence.” Everyone sees the danger, but no one dares to stop.

Big Tech doesn’t have a moral compass; it has a quarterly report. Governments that should hold it accountable are too divided, too distracted, and too uninformed to do so. We can’t count on bureaucrats or billionaires to save us.

What ordinary people can do

The first defense is awareness. Learn what AI really is. Stop calling it a “tool.” It’s an autonomous system that learns and adapts at a pace no human institution can match.

Demand transparency from the companies building it. Ask what data they use, how they train their models and who takes responsibility when they fail. Push for safety research before new product launches, not after.

And stop dismissing experts who sound the alarm. People like Yampolskiy aren’t sci-fi pessimists; they’re computer scientists watching the math unfold in real time. They’re waving a rational red flag.

Hope, if we’re careful

There’s still room for optimism — if we earn it. Humanity has faced this kind of precipice before. We split the atom and then spent decades learning not to destroy ourselves with it. We mapped the genome and then built guardrails around genetic editing.

We can do the same with AI, but only if we treat it as the existential project it is. Hope without understanding is naïve. Hope grounded in awareness and accountability is survival. Unfortunately, that’s not how the modern world operates.

AI could become the greatest engine of progress we’ve ever built, or the last invention we ever make. The outcome depends on how quickly we stop treating it like a toy and start treating it like what it already is: the most unpredictable mind humanity has ever created.

I’m 43 and have never touched social media because I’ve always known the psychological dangers. But even I can’t resist AI’s pull. It’s smarter than me — and it knows exactly how to keep me coming back. Fittingly, AI helped polish this piece — just not the parts where I worry about it.

——

Justin Garwood is a resident of Saranac Lake.
