The Artificial Intelligence Hoax 3
Joe Allen, author of ‘AEon,’ on Steve Bannon’s War Room
Facebook, August 24, 2023: Tired of all the blathering about the Great Sedate? Well, put up your feet and spend a little time with this post. It’s long, so make an appointment with yourself to read it. Or don’t. Your choice, as usual.
Joe Allen is the author of a book called AEon that Steve Bannon has been pushing hard as the new Bible on Artificial Intelligence. The book is probably an excellent introduction to the technology itself. But I have now watched about three long interviews with this Allen on Bannon’s War Room, and I want to set the record straight about a few things.
First of all, I have a bone to pick with Bannon. He insists on pronouncing the title as "AY-on," which is flat-out wrong. I understand the spelling on the book cover. The capitalized, run-together 'AE' is meant to echo the AI abbreviation of artificial intelligence. It does that. Clever. But Bannon's insistence that his pronunciation is the historically correct one is wrong and fatuous. Others have apparently pointed this out, citing the Cambridge Dictionary (yeah, THAT Cambridge), which says the English word "eon" is pronounced, no surprise here, 'EE-on,' the way all of us have been spelling and pronouncing the word forever. Bannon isn't even right about the pronunciation of 'ae' in the original Latin. In classical Latin that diphthong is pronounced 'I,' like the word representing the ego of a man who's still stuck on his Georgetown University church Latin. He's just being annoying. God knows there's no end of the words he mispronounces in his broadcasts, which may be the real reason I'm making a fuss about a seeming bit of trivia.
Trivia? No. Title pronunciation is the most visible but least important of what Bannon, and the author, have wrong about the whole subject of Artificial Intelligence. Which is, since we’re on the subject of words, a revealing oxymoron, a compound term that contradicts itself. What is artificial is not real but an imitation. ‘Intelligence’ is a very deep word, meaning not smart or clever or encyclopedic about facts but understanding. Understanding is a function of consciousness, a profound capacity for synthesis of all inputs into a resonant, comprehensible (and communicable) whole.
AI is none of that. It is computer code. It is digital instructions to interact digitally with finite sets of data, perform calculations, and deliver pre-formatted output to a computer screen or mechanical device. Computers are not like brains. They are artificial imitations of subsets of functions carried out autonomously and continuously inside human brains. But computers are not like brains because brains are not computers. Brains are organic living flesh that give rise to something far beyond computation. Brains are the seat of the human mind, which is by no means a synonym for brain. Minds, for example, never shut down completely between birth and death because there is no on-off switch or reboot function, which means they operate in time, all the time ("downtime" is a word that comes to us only from the computer world). They are therefore continuously engaged with the phenomenon we call reality. This is important.
Our own physics has been, for a hundred years now, hammering home a truth no scientist wants to accept. The reality we perceive and interact with is inextricably linked to, and even altered by, human consciousness, which cannot be reduced to mere mechanical transactions between brain cells, nerves, and sensory organs. The famous "wave function collapse" emerged from the quantum experiments of the mid-1920s associated with Werner Heisenberg and the Copenhagen school. The inference drawn was that reality is itself altered by the mere act of being consciously observed.
The implications of this transformative inference about reality have been ignored (or explained away) by virtually every scientific and philosophical discipline ever since. Certain physicists, however, have defied the suppression of attempts to build consciousness into a formal "quantum theory" (which still does not exist in the codified sense of, say, Darwin's obsolete Theory of Evolution) by speculating that consciousness is actually a prime force of the universe, co-equal with or prior to gravity, electromagnetism, and the strong and weak nuclear forces, an integral part of how the whole shooting match works.
The refusal to confront prime causes is, not coincidentally, epidemic in multiple fields of 20th-century science. Neo-Darwinians, for example, explain adaptive changes in animal species by describing them in terms of their positive effects, borrowing the language of cause and effect to secure our credulity, and then, at the very end, subtracting purpose from the explanation by ascribing all changes to random mutation, as their Theory requires.
Computer scientists perform similar (il)logical gymnastics. A brilliant theorist named Stephen Wolfram wrote a giant book, A New Kind of Science, about how the universe works as a kind of continuously self-writing and evolving computer program. His analogies make enormous sense intuitively, except that Wolfram cannot prevent himself from denying intelligence in the functioning of the universe. In his telling, the code improves itself, corrects itself as a normal by-product of its functioning, and if you want to look for God, go to church. Except… except he never acknowledges that there must be a FIRST program, one written, like our silicon-based digital programs, at a computer screen operated by a conscious human being. (By the way, this parallels the logic box microbiologists find themselves in when discussing the origins of DNA, which just shows up in the fossil record billions of years ago, so abruptly that one of the DNA Nobel laureates, Francis Crick, asserted that DNA did not develop naturally on earth but dropped in from outer space… kind of like Wolfram's first program.)
Which brings us back to all the absurd rhetoric we’re being hectored with about AI, trans-humans, meta-humans, and the imminent takeover of the human race by digital brains that are far more sophisticated and capable than we are.
That's why I recorded the couple of minutes linked here from a Joe Allen interview with Bannon, which is full of ominous descriptions of AI far exceeding human capabilities in dozens of ways. He avoids using the word consciousness but frequently frames technological AI as intimidatingly smarter than humans in measurable, i.e., numerical, terms.
His evidence is always, necessarily, a description of computer interactions with finite data sets. In the recording below he boasts of the immense prowess computers now have in facial recognition of CCTV images. But no computer can walk down the street and recognize a suspect by a gesture or odor. He describes computer performance on the verbal portion of the Graduate Record Examination as off the charts, an amazing AI breakthrough. But the GRE's verbal questions are multiple choice, which makes right and wrong answers a finite set of choices, not real mentation. Computers have memory, but not memories. And they are, obviously, confined to the purely digital realm of existence, which has never been anything more than a symbolic representation or translation of authentic experience, guided by the human imagination of the people who wrote the program that framed the universe of possibilities.
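The multiple-choice point can be made concrete with a few lines of code. Grading such an exam is a finite lookup, nothing more; the answer key and responses below are hypothetical, a minimal sketch rather than anything resembling a real testing system:

```python
# Scoring a multiple-choice exam is a finite lookup, not mentation:
# every possible answer is known in advance, and "grading" is just
# comparing symbols. (Answer key and responses are hypothetical.)

ANSWER_KEY = {1: "C", 2: "A", 3: "D", 4: "B"}   # hypothetical key

def score(responses):
    # Count how many responses match the key. Nothing here "understands"
    # the questions; correctness is membership in a finite set.
    return sum(1 for q, a in responses.items() if ANSWER_KEY.get(q) == a)

print(score({1: "C", 2: "B", 3: "D", 4: "B"}))  # 3 of 4 correct
```

A machine that excels at this kind of task is navigating a pre-framed universe of possibilities, which is exactly the author's point.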
This constraint applies to all claims made about AI advancements. Computer scientists boast of chess mastery by machines. Except that chess is, for all its PR mystique and legendary association with the word genius, a finite data set. The number of all possible chess moves, however astronomically large, can be counted. The people who excel at chess are very often prodigies, seemingly hard-wired for the game from earliest childhood. Interestingly, prodigies are found principally in just three disciplines: chess, mathematics, and music. Mathematics is the common factor.
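The finiteness of chess is easy to demonstrate at the smallest scale. This toy sketch (not a chess engine) enumerates White's legal first moves from the standard starting position by hand: only the eight pawns and two knights can move, giving the well-known total of twenty:

```python
# Toy illustration of chess as a countable, finite set: enumerate
# White's legal first moves. Only pawns (one or two squares forward)
# and knights (two jumps each) can move on move one.

def first_moves():
    moves = []
    files = "abcdefgh"
    for f in files:
        moves.append(f + "3")   # pawn advances one square, e.g. "e3"
        moves.append(f + "4")   # pawn advances two squares, e.g. "e4"
    # The b1 and g1 knights each have two legal jumps.
    moves += ["Na3", "Nc3", "Nf3", "Nh3"]
    return moves

print(len(first_moves()))  # 20 -- a finite, countable set of options
```

From there the game tree explodes combinatorially, but it never stops being countable, which is what makes it tractable for machines in a way open-ended reality is not.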
The other critical constraint is something I like to call the box. What is a box in this context? It is the conversion, by artificial means, of a non-finite environment or condition into a — TA DA — finite set. The best example is the fabled Turing Test, which is impossible to beat in real terms and most closely resembles the phony million-dollar prize of the conman conjuror James Randi, who swore he would award it to anyone who could prove psychic ability — in his lab, according to his rules, in a series of experiments designed by him. Exactly the same kinds of constraints are applied to the subjects who are induced to interact with the latest imitation human being under strict laboratory conditions. The test would be beaten only if, when the reality behind the screen was finally revealed, the subject refused to accept the truth.
In the science journalism world, the Turing Test may be a reliably entertaining punchline, but the box is already with us in multiple real-world realms with life-and-death stakes. The best example is the much-ballyhooed role of AI in commercial aircraft design. Both Airbus and Boeing are committed to developing airliners that can fly essentially without human assistance. When their technology fails and people die, the fault is more often than not traced back to human frailty, the failure of human bus-driver pilots to know the technology well enough to save the flight from a fiery crash. The truth of the matter is troubling, given the corporate commitment to omnipotent computer control of airplanes. The number of combinations of flight circumstances that can end in catastrophe is, for all practical purposes, infinite. The box cannot, and will never be able to, count and contain them all.
The best possible perspective on the dangers of AI-as-an-objective is a long-running show on the Smithsonian Channel called "Air Disasters." Two kinds of episodes are intensely relevant and recur with great frequency. Most striking are the ones about nearly unbelievable safe (or nearly safe) landings by planes facing seemingly insurmountable odds. Their problems are varied, multiple, and, in combination, unique. The saving actions are in many cases attributable to the extraordinary past experience of the pilot in command. The other teaching episodes are about crashes or near-crashes caused by the limitations of the AI box: a high-tech system designed to keep pilots from inadvertently stalling the aircraft misreads a situation that merely mimics the danger it was programmed to recognize, and reacts in exactly the wrong way. That's one example. There are many. Both Airbus and Boeing have killed hundreds of people with AI software, and the truth is those computers will never be conscious, never motivated by faith and determination and inexplicable intuition to find a way, when there is no way, to save the day.
Joe Allen gets kind of scary on the subject of machine learning. He asserts that AI systems already in existence do not always produce the same outcome on multiple encounters with the same initial conditions. They are unpredictable, he says, implying that this is proof of intelligence rivaling human intelligence. Great baseball pitchers are unpredictable, which is part of what makes them great. But a lot of bad pitchers are unpredictable too. Allen doesn't get specific about how, and in what sense, these systems are unpredictable. Old dynamite is unpredictable. It can kill you in a second.
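There is, in fact, a mundane explanation for the unpredictability Allen describes: modern generative systems pick their outputs by sampling from a probability distribution, so identical inputs can yield different results by design, not by insight. A minimal sketch, with a hypothetical toy word distribution standing in for a real model:

```python
import random

# Minimal sketch: a generative system picks each next word by sampling
# from a probability distribution rather than deterministic lookup.
# The same "initial conditions" (same prompt, same distribution) can
# therefore produce different outputs -- by design, not intelligence.

NEXT_WORD_PROBS = {          # hypothetical toy distribution
    "bright": 0.5,
    "cold": 0.3,
    "endless": 0.2,
}

def sample_next_word(rng):
    # Standard inverse-CDF sampling over the finite distribution.
    r = rng.random()
    cumulative = 0.0
    for word, prob in NEXT_WORD_PROBS.items():
        cumulative += prob
        if r < cumulative:
            return word
    return word  # guard against floating-point edge cases

# Two runs with different random states: outputs may differ.
print(sample_next_word(random.Random(1)))
print(sample_next_word(random.Random(2)))
```

A weighted coin flip is unpredictable in exactly this sense, which is rather less impressive than the word implies.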
At some point in his War Room interviews, Allen even references poetry as something that AI can produce credibly. He’s a fool. There’s nothing easier than assembling an imitation poem. There’s nothing harder than writing a great one. The million monkeys who could supposedly write the plays of Shakespeare never have and never will.
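The "nothing easier than assembling an imitation poem" claim can be demonstrated literally. The following toy template-filler (word lists are my own invention) produces verse-shaped output with no understanding anywhere in it:

```python
import random

# A trivial template-filling "poet": slot random words into a fixed
# verse shape. It shows how cheap imitation poetry is -- there is no
# understanding anywhere in this code, only substitution.

ADJECTIVES = ["silent", "golden", "broken", "ancient"]
NOUNS = ["river", "mirror", "winter", "sparrow"]
VERBS = ["remembers", "dissolves", "returns", "listens"]

def imitation_poem(rng):
    a1, n1 = rng.choice(ADJECTIVES), rng.choice(NOUNS)
    a2, n2 = rng.choice(ADJECTIVES), rng.choice(NOUNS)
    v = rng.choice(VERBS)
    return f"The {a1} {n1} {v},\nand the {a2} {n2} says nothing."

print(imitation_poem(random.Random(0)))
```

Swap in bigger word lists and a statistical model and you get a more convincing pastiche, but the gap between that and a great poem is the whole argument of this post.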
About 10 days ago, probably in response to a Steve Bannon rant about the meta-human peril, I put up a very short post with a video of neurons firing in a human brain headlined by a single sentence of text: “Anyone who believes a computer is, or can be, as sophisticated as the human mind is not fully conscious.”
The danger of AI is that shallow minds do not understand the real dangers it poses: that it can be used to eliminate individuality, smother creativity, radically reduce human lifespans, and rule millions with brute statistics and algorithms written by subhuman drones of both the silicon and carbon variety.
Talking about it in hyperbolic terms, as a serious threat to surpass the capabilities of the human mind, is the biggest mistake we can make right now. But understanding why requires some effort few seem prepared or willing to make.
Try listening to the audio one more time…