
We may be sleepwalking into a catastrophe

In a free and open/capitalist society, the individual has the choice of who does “work” for them. We “outsource” what we need done to auto mechanics, doctors, lawyers, et cetera, since we can’t all be experts in everything, nor do we all have the time.

When a doctor says you need major surgery, or your mechanic says you need a new engine, we may question it. There is also implicit trust that each professional will act on our behalf (not for their own benefit or agenda) in exchange for income; still, we keep an eye out and retain a measure of control over the decision process. These people are “agents,” for lack of a better term. Heinlein was partially wrong: Specialization is not just for insects.

“Artificial intelligence” is not about your refrigerator being “smart” and knowing when you are low on bread to re-order it through the so-called “Internet of Things” (IoT). An “artificial intelligence” fridge learns your behavior over time and connects to other sensors in your life (blood pressure, heart rate, hours slept last night, weight, miles walked, et cetera) to decide what food you should be eating. If your temperature is over 100.4, it orders more orange juice and NyQuil without you lifting a finger. If you are overweight or have not walked the minimum required steps today, it will not order any more ice cream. It will limit what you can get delivered as a result.

A “smart” transmission in your car reacts to road conditions. An “artificial intelligence” transmission pulls in all that data and more to determine whether you have been speeding (doing 50 in a 35 zone, by reading the signs on the road), letting the motor idle for long stretches, accelerating hard from stop lights, et cetera, and then controls your maximum speed to save on gas. It will limit what you can do with the accelerator pedal as a result.

Your health plan’s “smart system” might select a doctor for you based on location, et cetera, if you don’t choose one. An AI-equipped health system might look at your history, past health issues, complaints, where you grew up (New York City people are tough to deal with?) and the choices you have made, and then assign you a specific doctor (one you may not be happy with) because he or she is an expert in dealing with “difficult patients.” You want to appeal it? You will be talking to a Siri/Alexa-type AI chat bot.

This is the direction we are moving as a society with AI. This is the whole reason for applying AI in our lives in the first place: To replace error-prone people with automation. You don’t need AI in your fridge to re-order bread. Many folks today complain about perceived loss of freedoms. On the left, “my body my choice” is the mantra. On the right, it is “shall not be infringed upon.” What is potentially coming down the road with AI is orders of magnitude bigger than all of this. You think “in as few words as possible, please tell me why you are calling” is obnoxious now?

This is not to say AI is not useful, just that it can be dangerous if not controlled and monitored.

In I.T. (hardware and software) there is a principle called cyclomatic complexity analysis, or CCA. Oversimplified: CCA says that as systems become more “advanced” and more complex, there are more ways for them to break or to produce actions and results you did not foresee or desire. Each time your computer crashes, or Alexa/Siri does something outrageous, is an example. The news is full of data breaches and other scary things that happen with personal information, et cetera. Now imagine connected systems that interact with each other, that have control over aspects of our lives, and that no human actually knows exactly how they will work at all times. No human today, as I write this, has full understanding of all the logic gates on a processor chip. No developer today has full understanding of all the ways any commercial software works (thus bugs).
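For readers who write software, here is a tiny, purely hypothetical sketch in Python (none of it comes from any real product) of how the paths add up: a re-order rule with just three sensor checks already has a cyclomatic complexity of four, meaning four separate paths that each need to be tested.

    # Hypothetical "smart fridge" rule, for illustration only.
    # Cyclomatic complexity = number of decision points + 1.
    def should_reorder_ice_cream(temp_f, weight_lbs, steps_today):
        if temp_f > 100.4:        # running a fever: skip the ice cream
            return False
        if weight_lbs > 200:      # made-up weight threshold
            return False
        if steps_today < 8000:    # made-up daily step minimum
            return False
        return True               # three decisions, four paths to test

Every added sensor or rule adds another branch, and once many such systems feed each other, the combinations to verify quickly outrun what any one engineer can hold in their head.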

‘Til now, humans have had one edge over AI: Creativity. We are near passing this threshold. Neural networks can now create new, unique, original works of music, art and literature. Look up Google’s DeepMind, DeepDream and Google Brain projects. Read about Microsoft’s “Tay” AI disaster. Lior Shamir’s work analyzing the Beatles’ music is another example.

The danger is obvious. This is beyond the discussion of whether AI should be in autonomous armed/military machines. It can now be in everything, including the power grid, automated farming systems and tractors, mass transit, street lights and freight locomotive control systems.

These systems do not “program themselves”; that is not technically what is occurring, but the end results might appear that way. These are data-driven models that have the potential to do things you may not want or even think of. AI may no longer struggle with ambiguity: Ken Jennings lost to IBM’s Watson.

Again: This can be enormously beneficial, just like farming out what we want done to human professionals in a given field, and just the same, we should keep an eye on what is being done. AI is helping identify new drugs, is invaluable in SETI and in finding asteroids, is spotting cancers the human eye missed on an X-ray, and so on. But the potential danger is just as enormous.

Just a thought.

— — —

Ira Weinberg lives in Saranac Lake.
