Justin Garwood made some good points.
When you mention “true artificial superintelligence,” do you mean AGI (Artificial General Intelligence)? That is orders of magnitude harder to build than any AI that exists today.
Rudimentary, non-learning “AI” has been around for centuries. A toilet, a fuse, the front door lock and an automatic transmission are all AI, freeing up humans by having a machine make the decisions/do the tasks. Your fridge or heater cycles on or off. It makes a decision, based on a single input, on its own. You don’t sit there with a thermometer and your thumb on the on/off switch. The carburetor and the centrifugal/vacuum distributor in old cars are mechanical (analog) computers doing AI. A mousetrap is AI. Then what’s the difference between “AI” and my Excel spreadsheet? A lot … but the philosophical principle is the same: free up humans from making decisions/doing (usually repetitive, tedious) tasks. Program Evaluation and Review Technique, linear programming, the simplex method and others are pure math “machines.” Today’s AI takes it to the next level by using new tools and looking at historical data.
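To make the thermostat point concrete: the whole “decision” fits in a few lines. This is a sketch of my own, with made-up temperatures and a made-up dead band, not any real appliance’s firmware:

def thermostat(temp_f, setpoint_f=68.0, band_f=2.0):
    # Bang-bang control: one input (temperature), one decision.
    if temp_f < setpoint_f - band_f:
        return "heater on"
    if temp_f > setpoint_f + band_f:
        return "heater off"
    return "no change"  # inside the dead band, leave it alone

for temp in (60.0, 68.0, 75.0):
    print(temp, "->", thermostat(temp))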
Unlike the mechanical examples I mentioned, today’s AI learns by taking pools of data and applying math and statistics (with feedback). It finds relationships and trends that it calculates are there, and uses them to make decisions and predictions.
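A tiny example of the statistics underneath, with numbers I made up: the Pearson correlation, one standard way a program “finds a relationship” in pooled data.

from math import sqrt

def pearson(xs, ys):
    # Correlation coefficient: +1 is a perfect upward trend, 0 is none.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

monthly_temp = [40, 55, 70, 85, 90]      # invented numbers
ice_cream_sales = [10, 25, 50, 80, 95]   # invented numbers
print(round(pearson(monthly_temp, ice_cream_sales), 3))  # close to 1.0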
In a prior life, I taught (informally) some folks about learning/decision-making computing and how it would impact the technical work we were doing. “For 20 years, I set my alarm clock to ring 15 minutes before sunrise every day, except for a few days a year when I am sick. Based on the data, my alarm clock causes the sun to rise,” was the make-believe example I gave. Imagine you ask a friend if she liked a movie. She says, with a smile, “Yeah, that sh*t was bad!” You know she means the movie was great. An AI system (LLMs/deep learning), if not properly trained, may come to the opposite conclusion (the words “sh*t” and “bad”). AI does not understand; it tries to correlate. Every summer, ice cream sales boom in June and July, followed by an increase in shark attacks in August. Save lives: ban ice cream.
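Here is how that movie review goes wrong, as a toy sketch of my own devising; the word lists and scoring are invented, but the failure mode is real: count the words, miss the tone.

NEGATIVE = {"bad", "sh*t", "awful"}
POSITIVE = {"great", "loved", "wonderful"}

def naive_sentiment(review):
    # Bag-of-words scoring: no sarcasm detector anywhere in sight.
    words = review.lower().replace("!", "").replace(",", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "liked it" if score > 0 else "disliked it"

print(naive_sentiment("Yeah, that sh*t was bad!"))  # "disliked it" -- wrong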
Recently, ChatGPT, Microsoft’s AI and other AI models were defeated in three moves by a simple, ancient chess opponent: a 1979 Atari home video game running on an 8-bit processor at roughly 1 MHz.
You mention “self-writing code.” More important is refactoring and code maintenance: AI can do it … sort of. For small, limited and specifically constrained programs, it is a time saver. Ask it to create an application based on user requirements. Go ahead, I’ll wait. Coders are already using AI to generate black-box modules right now. The problem comes when a coder has to fix or modify that code. Try jumping into someone else’s code, no matter the program or language. So we may one day have programs that only AI can maintain. What if an enemy inserts a “back door” or other malicious logic into the AI engine that generates the code? Would we ever know? The movie “Telefon” comes to mind. How do you QA such a system?
Today’s AI cannot reason and has no intuition. It takes existing data and finds correlations and relationships via involved math. Nothing more. Pools of data and algorithms are all it knows. It tries to interpolate and extrapolate. There are loads of examples in the news of GIGO (garbage in, garbage out) disasters with AI. If the Beatles were still around today and got into Native American chants and songs, there is no way for AI to make such music, because the Beatles never did it in the past; there is nothing to extrapolate from.
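Extrapolation is where this bites hardest. A quick sketch with invented daylight numbers: fit a straight line to January through June and the math happily predicts an impossible December, because the trend is all it has.

def fit_line(xs, ys):
    # Ordinary least squares: slope and intercept of the best-fit line.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

months = [1, 2, 3, 4, 5, 6]                     # Jan through Jun
daylight = [9.0, 10.5, 12.0, 13.5, 15.0, 15.5]  # rough, made-up hours

slope, intercept = fit_line(months, daylight)
print(slope * 12 + intercept)  # December: about 24 hours of daylight -- nonsense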
To the best of my knowledge, the first large-scale, corporate-wide implementation of AI was a cobbled-together homemade system by a guy who never took a computer class in his life. There were no YouTube videos back then. In 1997, NYCHSRO/MedReview in NYC had an AI system that learned. It was crude and slow, a rat’s nest of crazy wiring, but it ran reliably (when in doubt, rebuild indexes!). They were the largest medical peer-review organization in the nation, headquartered then on 23rd Street in Manhattan, with hundreds of doctors and nurses on the staff. Their first director of management information systems (a lifelong friend of mine) built this kludge by himself. It used weighted feedback similar to today’s AI models. The Franken-ware lived in a farm of Pentium 133 machines (custom-built to his specifications) meshed together by NetWare 4.0. The firm increased its profitability eightfold in one year. The Rube Goldberg contraption learned as data came in, on EBCDIC (Extended Binary Coded Decimal Interchange Code) tapes. But it was not Artificial General Intelligence (AGI). It didn’t get up one day and nuke Putin (… sigh …).
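I don’t have his code, of course. But “weighted feedback” fits in a few lines; here is an invented, perceptron-style sketch of the general idea, nothing more:

def update_weights(weights, inputs, target, rate=0.1):
    # Predict, measure the error, nudge each weight to shrink it.
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = target - prediction
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):                       # feedback loop on one example
    weights = update_weights(weights, [1.0, 2.0], target=1.0)
print(weights)  # settles near [0.2, 0.4]; the prediction is now ~1.0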
What I am trying to say is that AGI is possible by 2030 … but at low probability, based on the current state of the art. The comet 3I/ATLAS has been strongly suggested by an eminent astrophysicist (Avi Loeb) to be an alien spaceship; no science supports that claim to date. Fewer than 5% of computer scientists think bad outcomes will come from AGI (a 2023 survey). Ninety percent think AGI will come in 100 years. Not five.
I agree with you: AI is a serious thing, and I have a few past comments in this newspaper about it. It is a dangerous tool and should be treated as such, like a firearm. Again: one of the key dangers is dependency. Like Chinese-made goods, “pulling the plug” today is possible … but it gets harder and harder as we become more reliant on it.
Just a thought … and … “I’ll be back …”
Ira Weinberg is a resident of Saranac Lake.

