Why the Vaunted “Artificial Intelligence Boom” is a Bubble
About three times a month I go to Delaware to avoid sales tax on certain items. When the trip falls on a Saturday, there’s no ‘Cluck & Buck Show’ to keep me awake on the open highway, so I listen to Kim Komando on WILM radio. She’s impressive. So impressive I should probably have included her on my NIMH list of significant female voices. She really knows her stuff about the complicated world of phone and desktop applications, communications, and security products, as well as the broader business world in which they operate. I’ve heard her give would-be entrepreneurs outlines of business plans for computer-based services that could be started and run from home. Off the top of her head, mind. She likes to help. I’m getting somewhere with this, believe me.
Last time I heard her she did a segment on her show about ways to earn money on your computer without getting a job in an office. She mentioned one opportunity in particular, at Amazon, whereby it’s possible to make a living income doing things a computer can’t do. Huh? You do it and convey the result by computer. What might that be? The example she cited was recognizing what color things are… You can get paid for looking at things Amazon wants to use in AI development projects and identifying the colors of those things. Because computers can’t do that…
*******************
Here’s what happened since I wrote that intro. I spent a couple of very intense hours writing a post about the real issues associated with developing proficient Artificial Intelligence systems. The result was about 2,000 words long and went deeper into the subject than I’d gone in several other posts about AI. I finished the post and went to grab a couple of links to the older essays. When I returned, everything I’d written after the introduction above was gone. I was pretty upset about it. Losing work to the anomalies of computer dysfunction is nothing new, and I usually just rewrite what is lost, but I’m taking a different approach this time.
Lately, I’ve been in the mode of trusting the universe more. In small ways and big ways, I’ve been paying closer attention to what seems to occur or pop up randomly, even in my choice of entertainments to watch in background as I work on other stuff. Even losses like this one, which hurt. I really liked what I had written but… But what? It wasn’t good enough? I’d missed something? What was the universe (or more palatable to many of you, my subconscious mind) trying to tell me? Was there something in the stream of questions I routinely send into the ether that was in fact being answered with a two-by-four of sorts? Maybe. May be…
What had I written in the missing post?
Overall I was suggesting that what the world is calling Artificial Intelligence is a misnomer. The body of applications and programming design strategies being peddled to us is better described as “Imitation Intelligence,” which is a very different thing.
The specific impetus for my essay was Kim Komando’s nugget of info about color, which offered a proof of the difference, one that makes it more visible and definable.
A computer can perform calculations about color, specifically exact shades of color as defined by a quantitative identification system called RGB, which specifies each shade as three numeric values for red, green, and blue.
In other words, the computer can follow instructions about color in great detail. What it can’t do is actually recognize a color. You can’t point at something and ask the computer, “What color is that?” It has no basis for answering that simple question.
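The distinction can be made concrete with a minimal sketch. In the snippet below, the machine does exactly what the paragraph describes: it performs arithmetic on RGB triples and follows an instruction to find the numerically nearest match. The color names and values here are illustrative, not any official palette, and the matching rule (simple Euclidean distance) is just one assumed convention. At no point does the machine recognize red; it only computes distances between numbers that humans have already labeled.

```python
# Illustrative named colors (not an official palette); each is an RGB triple.
NAMED_COLORS = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
}

def nearest_color_name(rgb):
    """Return the name whose RGB triple is numerically closest.

    This is pure calculation (squared Euclidean distance), not
    recognition: the labels were supplied by humans in advance.
    """
    return min(
        NAMED_COLORS,
        key=lambda name: sum(
            (a - b) ** 2 for a, b in zip(rgb, NAMED_COLORS[name])
        ),
    )

print(nearest_color_name((250, 20, 10)))  # a reddish triple maps to "red"
```

The point of the exercise: strip away the dictionary of human-assigned names and the program has nothing left to say about color at all.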
It can’t see. It can’t hear. It can’t taste. It can’t feel sensation or emotion. It can’t experience the sensation of living through time. Because it’s not physical and it’s not alive.
My post began from this point and made reference to an academic philosophical article I’d encountered years ago, which was all about the problems posed by the human question “What is Red?” This wasn’t a technological treatment of the question but a far deeper plunge into the nature of consciousness itself.
The philosopher went to great lengths to make distinctions between the words “red” and “redness,” and terms like “things that are red,” “the essence of redness,” and what he called, finally, “the sense of redness.”
You’ll have to trust me when I tell you the article’s author did not really solve the problems of definition he was attaching labels to. The net effect for me as a reader was observing evidence of what had killed philosophy as an academic discipline many decades ago: the collapse of words and language alone as a means of addressing existential questions. This is why I had given up such readings after unpleasant bouts with Immanuel Kant as a youth. He destroys the meanings of words and then pretends that he has better words to fill in the blanks he’s created. Which he doesn’t.
The article on redness, however, did have one takeaway worth remembering. It asked readers to examine what came to mind when the word red is thought of. Not as a thing but a concept. Generally there is a shapeless smear in some shade of red, not a memory, not an answer, just a symbolic image made up to meet a mind/brain request. This raises the consciousness problem, which is that mental experience is not strictly physical but metaphysical. It functions by giving us a transcendent version of the physical experience of the body. Not the imitative version of reality cobbled together by digital algorithms but something higher than the mental, yes, even conscious experience of great apes, whales, elephants, and dogs. Human consciousness includes the “sense of redness” but also much more than that.
My next step in the post text was to reference a memorable nugget from one of Rex Stout’s novels about his iconic armchair detective Nero Wolfe. In the course of hunting down some troublesome detail in a murder investigation, Wolfe asks a young woman, “Do you know the difference between Invention and Imagination?” (I forget the specific context…) She cocked her pretty head and replied, “I can’t define it for you, but I know which is which when I see it.” Wolfe liked her answer. (Critics came to call this woman Wolfe’s Irene Adler, one of many comparisons to the Holmes oeuvre.)
“Imagination” is the great leap ahead from “Redness” that exposes the problem with “Artificial Intelligence.” Through long practice (deliberate?) we have used the words “intelligent” and “smart” almost interchangeably when, in fact, “smart” is only a subset of the real meaning of intelligence, which is “understanding.” Understanding is not something IQ tests measure, for example, except in the narrowest and shallowest of ways. Reading comprehension, for example, is not synonymous with understanding what has been read. In the testing universe it is confined to remembering the common definitions of words used in a sentence and what the logic of the sentence might be. The questions Why? and Who?, for example, never appear in IQ tests, which measure skills and cultural data sets by means of timed, context-free exercises with clearly identifiable “right” and “wrong” answers. An IQ score is not a measure of any one mind’s powers of understanding, only its “smarts” in terms that make sense to decision makers looking to winnow out the dull and unacceptably ignorant.
No one anywhere has ever demonstrated that a computer program or system has powers of Imagination that transcend mere mechanical Invention. I made clear in my essay that in using the word “mechanical” I was not talking about physical cogs and gears but the step-by-step 1-to-‘n’ progressions we see in literal and figurative assembly lines. Best, quickest way to see what I’m talking about? The Rube Goldberg Machine: