The Coming AI-mageddon

Exhibit 1. The Myth of Scientific Omniscience

Where to start. There are so many places to start, all with consistently alarming intimations of one deep problem. The inference is seditious of rationality as we have deified it in science. I’m going to show you some of the more accessibly evocative starting points, not to drag you into the weeds of higher mathematics, but to stress some elements of simplicity that overthrow the fallacy called Artificial Intelligence.

Bear with me. Just look for now. I’ll make connections as we proceed after viewing the items below.


Exhibit 2. Turing Test Fallacy


Exhibit 3. The Proof of Digital Inferiority


Exhibit 4. The Implementation Overreach


Exhibit 5. The Implications of “Sensitive Dependence”



Exhibit 6. The Measurement Problem


Exhibit 7. The Illusion of Control



Exhibit 8. The Hammer/Nail Delusion



Exhibit 9. The Impossibility of Artificial Intelligence



Exhibit 10. The Oversimplification Problem



Exhibit 11. The Consciousness Problem



Exhibit 12. The Implications of Natural Variability


Exhibit 13. The Modeling Problem


Exhibit 14. The Arrogance Problem


Exhibit 15. The Recklessness Problem


Exhibit 16. The Secret Seduction Strategy

Did that last one get your attention? It’s a kind of Summary Status Report. Where are we in the implementation of so-called Artificial Intelligence? We are mired in a process of seduction that conceals as much as it demonstrates about the dangers that have already seduced the masters of math and science. Exhibit 16 is a lie being perpetrated via the reliable tool of sex. The photos are supposedly candid snaps of Walmart shoppers caught by chance. They are not candid, of course. The pics are just too good to be true. Too sharply focused, too compelling in their composition, too posed, too similar to one another to be credible. The girls are also too much of everything. These are AI photographs. These women don’t exist at all. Their pics are included in a much longer sequence that includes true candids, off angles, less spectacular bodies, and various accidental properties. The AI component is an emergent phenomenon in a well-established genre of teaser sidebar photo medleys. Historically, they have been bait-and-switch affairs, teasing an irresistible photo that never appears in a very long sequence filled with the pop-up ads that are the real point.

The computer app market is currently being flooded with opportunities for Internet users to experience the easy accessibility and power of AI entertainment tools featuring breakthrough capabilities of various kinds. Answers to direct, specific questions asked of once-cumbersome browsers. The ability to idealize selfies into beautiful, unreal, poster-quality versions of the end user. The ability to turn still pictures, even old ones from scrapbooks of dead relatives, into videos complete with dubbed dialogue and dance maneuvers. It’s not an accident that such apps are presented as 3- and 7-day trials that supposedly expire, when in fact their termination can easily be circumvented by means of rudimentary trial and error.

All of this is the yellow brick road the TekLords are building for you. You’re not supposed to see the issues represented by the first 15 exhibits above. In particular, you’re not supposed to see the two walls we’re headed for at high speed. The first will be a short-term crash in the AI industry due to the over-optimistic bubble of usage we’re seeing now. That will give way to inevitable recovery as ambitious new implementations already funded and in development are brought online. The real crash will occur at some unknown moment in the future. All that we can say about it is that it will be cataclysmic, largely because of the inevitable occurrence of a phase transition shown in Exhibit 1 above and implicit in the definition provided by Exhibit 5.

The smarter pioneers are already warning about both crashes. The big money backers of AI don’t know enough about the subject to believe the warnings. Or they know just enough to have committed themselves to a suicidal course. Interestingly, the brains behind the big money guys — the academics, technologists, and math/science professionals generally — are equally committed to the same showdown with nature they’ve been courting for 150+ years now. 

Math and science are close partners in this enterprise. They both have a cosmology, not identical between them but complementary, that has been historically useful and productive in ways we can measure and far less successful in ways we can’t or won’t measure in specific terms.

Their cosmologies are wrong. Provably so. They know it but argue successfully to themselves that the differences between their cosmologies and reality don’t matter, except insofar as they have come to believe that their cosmologies are preferable to reality and can be imposed upon reality to positive effect.

How did this happen? Math first. As it was historically. Best example? Euclid. His mathematics of geometry, a largely self-contained whole, lays out principles of rational thought that have persisted for thousands of years, right up to today. His world of polygons and postulates and theorems is a brilliant thought experiment that illustrates the power and precision of disciplined logic. In this world there is such a thing as a proof, indeed absolute proof, which never changes any more than the shape of a triangle changes in different weather conditions.

On top of Euclid comes Pythagoras, who introduces measurement to the polygon world. He gives us one of the most memorable equations in history, relating the legs of a right triangle to its hypotenuse.

a² + b² = c²
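(A quick check, in case you want numbers rather than symbols: the classic 3-4-5 right triangle satisfies the relation exactly. The few lines of Python below are my own illustration, not anything from Pythagoras or from the exhibits.)

import math

# Classic 3-4-5 right triangle: 3^2 + 4^2 = 9 + 16 = 25 = 5^2
a, b = 3.0, 4.0
c = math.hypot(a, b)                     # sqrt(a^2 + b^2)
print(c)                                 # 5.0
print(math.isclose(a**2 + b**2, c**2))   # True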

Equations are the life blood of mathematics. They are always the proof of something. This bias toward stating/describing/defining variables in terms of equalities via logical reformulations came to rule science as it was practiced after Isaac Newton laid down his Scientific Method for the study and documentation of the physical universe crafted by the Creator. Observation, experimentation, and measurement [rinse, repeat “n” times], followed by equations demonstrating proof of something.

Reality (that is, real reality) was a problem even in the earliest days. The only perfect polygons are the ones created by man in the products of art, architecture, and engineering. Even right angles and perfect circles are nowhere to be found in the real physical world we occupy. This means that equations are all fictitious in real-world terms. They are ideals we have inferred to be inherent in the world we observe and measure, and even where actual calculations can be taken and seem to confirm pretty precisely the equations that inspired them, they cannot be absolutely right.

How do we know this? Exhibit 6. The video shows the Mandelbrot set, a famous example of a fractal, “a geometric shape with a self-similar, infinitely complex boundary.” Self-similar means that it repeats across scales, at irregular intervals, no matter how deep you go in your observations. What makes it infinite. What makes it impossible, for example, to measure the absolute length of a coastline on earth. The measurement we assign to it for the sake of simplicity and convenience is always an approximation because the real length is infinite.
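If you want to see just how little machinery generates all that infinite complexity, here is a minimal sketch (mine, purely illustrative, not taken from the exhibit) of the standard escape-time test for membership in the Mandelbrot set: the iteration z = z*z + c, repeated until the value either escapes or runs out of patience.

def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    """Escape-time test: c is treated as inside if z = z*z + c never escapes."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| exceeds 2, the orbit flies off to infinity
            return False
    return True                 # stayed bounded for max_iter steps

print(in_mandelbrot(0))     # True  (the origin never escapes)
print(in_mandelbrot(-1))    # True  (settles into the cycle 0, -1, 0, -1, ...)
print(in_mandelbrot(1))     # False (escapes after a few steps)
print(in_mandelbrot(0.5))   # False (escapes, a bit more slowly)

Zoom in anywhere along the boundary between the Trues and the Falses and the same kinds of shapes keep reappearing, all the way down.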

Does this sound like getting into the weeds? It isn’t. It’s a simple demonstration of the fact that what we call hard science is itself only an approximation in all specific instances. Reducing the complexity of calculations to a degree that makes them understandable/teachable is a built-in practical objective of all scientific endeavor. Why there is rounding of repeating and nonrepeating decimals, logarithms (Exhibit 10), an imaginary number system, random number tables, and yes, a mathematical grammar called the equation, based on its verbal equivalent, the grammatically correct sentence. And one further step of tremendous importance, the binary code used to write computer instructions called programs, which also format and manipulate binary information. Which information necessarily consists solely of symbolic representations strikingly similar to Euclid’s polygons, existing entirely outside the universe of human senses, contemplation, and understanding.

Simplicity has important virtues. Without it, the descent into chaos is inevitable. We live in a real world of weeds. Science bravely attempted to clear a path through the weeds by measuring what could be measured and laying everything that couldn’t be measured aside. We have an outstanding example in present-day culture of how quickly the weeds can take over if certain traditional simplicities are examined too closely. The sudden proliferation of genders is its own descent into the finger trap of the Mandelbrot set (Exhibit 6).


Descending through scales

Two sexes (gender was originally a grammatical classification of nouns) were an obvious and simple reduction of evident complexity, based on the primary distinction between the fertilizing sex and the childbearing sex. The decision to expand the definition of gender ran into the insuperable problem inherent in Exhibit 12, Dr. Deming’s heuristic about natural variability. No two things, however defined, are exactly the same. Variability can be reduced by externally imposed constraints but never eliminated. The process of abandoning the simple model that’s governed human social contracts for tens of thousands of years has in a very short time led to this:

Click the graphic to descend another scale

All of which brings us to Exhibit 1 at the top of the page. This graphic shows us four different states of water flow. The third is the one with an orderly scheme: the smooth flow of water from a tap, lovely, clear, symmetrical, and useful. The flow state to its right is what happens when we increase the rate of flow until we reach a state of turbulence, in which the path of no single drop can be predicted and usefulness is reduced by splashing and a widening of the flow’s diameter in unmeasurable ways. Importantly, the change from order to turbulence is called a phase transition because it occurs instantaneously when a tipping point is reached.
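Engineers track that tipping point with a single number, the Reynolds number: density times velocity times diameter, divided by viscosity. Below roughly 2300 (for flow in a pipe), the flow stays laminar; above it, turbulence takes over. The sketch below is my own back-of-the-envelope illustration; the faucet diameter and velocities are assumptions, not measurements from the exhibit.

def reynolds_number(velocity_m_s: float, diameter_m: float,
                    density_kg_m3: float = 998.0, viscosity_pa_s: float = 1.0e-3) -> float:
    """Re = density * velocity * diameter / viscosity, defaults for water at about 20 C."""
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

tap_diameter = 0.01                 # assume a 1 cm faucet opening
for v in (0.1, 0.3, 1.0):           # trickle, moderate flow, wide open
    re = reynolds_number(v, tap_diameter)
    regime = "laminar" if re < 2300 else "turbulent (past the tipping point)"
    print(f"v = {v:.1f} m/s -> Re = {re:.0f} ({regime})")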

The new branch of math/science called Chaos Theory (Exhibit 5) was brought about by thinkers who realized that almost all of science was focused on the two percent (or so) of cases in nature that come out even, that is, in a state of controlled and controllable order. The rest is the weeds, long left to the side to look out for themselves. Their particular concern was the potentially calamitous nature of tipping points, in which a seemingly stable system suddenly — and often unexpectedly — plunges into a chaos indistinguishable from random noise. Unless a new science could learn something useful about the behavior of randomness.

Their early researches resulted in several important discoveries and formulations, hypotheses if you will. First, thanks to innovative concepts like the Mandelbrot set, they learned that there is order even in randomness, which suggests that the natural state of nature is in fact more orderly than anyone had bothered to imagine. Why certain shapes and configurations repeat in varying but recognizably consistent types: leaves, seashells, flock behavior, erosion patterns, and coastlines, for example. Something in there was building things, countering entropy from a set of templates less cautious scientists would call design archetypes. There were indications, in fact, that changes observed through time exhibited signs of built-in programming code that could reuse, amend, and rewrite itself on the fly.

For some of us who dwell outside the orthodoxies of science, academia, and a determinedly dualistic view of reality, there were additional intimations in the historical human record. What preexisted the formality of equations? The poetic figure of speech called metaphor. This is like that. This part of this is like that part of that.

What if the real purpose of time, and of humankind’s obsession with counting it and remembering its past epochs and eras, was to let us learn more about ourselves by continuously revising and updating the metaphors on which the narrative of human history is based and interpreted? What if the history of metaphor and its uses was also a record of the development and refinement of human imagination, the creativity that has given rise to the explosive speedup in the rate of human societal advancements in matters physical and intellectual?

Hard to find a better example of human imagination than Leonardo da Vinci (Exhibit 4). He was a polymath and a genius, not just a Renaissance man but in many respects the Renaissance man. Painter of the most famous portrait in history, he was also a technologist so far ahead of his time that he has been more an anomaly than a turning point in scientific history. He foresaw world-changing inventions that could not be implemented in his time, and so they sat for hundreds of years hidden in notebooks, behind layers of code meant to keep him safe from rogue inquisitors. The helicopter design shown above was aerodynamically sound but for the lack of a powerful enough engine to drive it. All he had available was a human-powered mill wheel to turn his rotor. Why does this matter? How did he get the idea in the first place? Pattern recognition transformed by metaphor. Where all breakthrough ideas come from. A specific capability no contemplated version of Artificial Intelligence possesses.

In all likelihood, Da Vinci made his breakthrough by observing a child’s toy, the spinning top, and learned from its ability to hold itself upright by the force of its rotation, which defies gravity, a force he understood inherently long before Newton formalized its definition. Defying gravity is the key mission of flight. He surmised that if the rotation could be made to supply a source of lift, he would have a flying machine. So he made drawings that would have worked in reality if he had had an internal combustion engine to drive them. He did not. Nobody seized on his idea and ran with it, because there is always a shortage of Renaissance men.

Metaphor, this is like that, is the fundamental driver of creativity in every human endeavor. New ideas do not come from thin air. They come from the continuous generation of new metaphors that enable improved understanding of the way everything works, from pencil sharpeners to human cerebration, and every form and function along the way, from poetry to interstellar travel, and incredibly sophisticated labor-saving devices that transform everyday life and the people who live it.

This is the real value of time in the human experience. Human beings have a native, collective sense of the importance of enshrining certain forms in physical creations that will survive for many generations of inspiration to future creative minds. The pyramids of Egypt are a potent cosmological metaphor, as are the great cathedrals of Europe, the onion domes of Moscow, the Eiffel Tower, the Brooklyn Bridge, the Empire State Building, and the cross of Christ. All of these point heavenward and make an esthetic demand for inspiration of those sensitive to the ineffable attractions of aspiration. 

And, yes, a metaphor embedded in time can also be an idea. Historically, this was the most valuable role played by math and science in the building of progressively more complex and prosperous civilizations. These two closely related disciplines created sets of ideals that led to exploitation of simple metaphors into increasingly complex traditions of investigation, documentation, and physical, mechanical manifestations of observed principles of physics and biology. All the “ology” words come from the original Newtonian mission of employing rational processes to learn as much as possible about the universe bequeathed us by the Creator of all things.

It was in the set of rules required to maintain strictly rational applications of human curiosity that a new problem was introduced into the mix, a problem that would lead eventually but inevitably to a major power struggle between science as an institution and the original mission of science as laid out by Newton. This is a struggle that must result in a decisive confrontation between the two. We have been building toward that confrontation for close to two centuries now. And what is called Artificial Intelligence will be the battleground for that confrontation.

The search mission of science has had two principal preoccupations, two cardinal questions to answer. What is the real history of human beings? And how does the greatest creation of all creations we know of, the human brain, do what it does? We have reached the point at which science, as an institution, believes it knows the answers to both questions, or enough of the answers that what is not yet known is not necessary for human survival. This is incorrect. 

The most dramatic scientific and technological developments in human civilization have occurred since the Industrial Revolution, when machinery became the driving metaphor in all its forms in place of monumental architecture. In the last two centuries the dominant metaphors of most importance to the two big questions have been the pocket watch and the computer.

Really? Yes, really. The pocket watch became a standard accessory of the professional classes by the early 1800s. It was the smartphone of its day. All men in the various professions carried them on their persons. You could pull one from your vest pocket and watch time pass. You could hear the tick of seconds passing. You could count the ticks and the hours and calculate elapsed times, schedule appointments and deadlines, all ordered by the power of the watch.

The smartphone of its day

The two most important developments in the era of the pocket watch were Darwinian Evolution and the manufacturing assembly line, each contributing in a different but complementary way to a feedback loop.

As the sciences of paleontology, anthropology, and archaeology grew from the isolated expeditionary luxuries of rich men into established academic fields of study and systematic research, time was the great quarry to be hunted down and rooted out of its concealing lairs. Fossils buried in rock and layers of ancient sediment. Dinosaurs. No more gargoyles and griffins; these were real creatures whose bones long predated those of men. No more Seventh Day as you’d count it on a watch. Biblical creation relegated to mere metaphor and poetry and blowhard oral tradition. What’s more, the fossil record indicates that species rise and fall, seemingly at random, and some successful species slowly transform from one configuration into one that’s almost entirely different. The physiological bearers of pocket watches discover that the ruling force is genes, which, like everything subject to variability, change for no particular reason and thereby introduce new features into the anatomical superset. Man is not a solitary burst of divine genius but a function of very long, very slow, random changes that add up to a sea change in body configuration and capability.

Somewhere down the hall at the university, archaeologists are making their dramatic discoveries of ancient civilizations, their great accomplishments, deeds, and constructions, as well as the inevitable decay and collapse that befalls all of them, without exception. Things fall apart. Everything falls apart. The problem of how new things are also always being created is covered over by the convenient excuse of time, the great grinder and crusher and accidental creator of changes that look positive only to those with a vested (pun intended) interest. Entropy is the ruling force of the universe, an ironic destroyer whose deadly engine of variability makes it the great and self-sufficient replacement for God. The inherent contradiction of this institutional postulate got filed in a box on a shelf that could be ignored for a good long time.

The cat was locked in the box long before Schrödinger was born.

One of the most potent aids to concealment of a relevant but unrecognized physics problem was the success of the extraordinarily productive and bountiful assembly line, also inspired by the pocket watch. Physical products could be made the same way a watch churns out seconds as ticks, via gears and cogs precisely synchronized and powered by the slow, steady (entropic!) unwinding of a coiled spring, known in manufacturing as lead time. The watch face was the schedule. The workers were the increments of motion in the process, interchangeable and effectively faceless, component units of a machine.

A machine that made the watch owners rich and even the component units collectively more prosperous than they had ever been before.

Science and engineering were a marriage made in rational heaven. Between them they embedded two new controlling metaphors that would change life very dramatically in the 19th, 20th, and 21st centuries: geology and machinery.

Geology came to prominence from the Darwinian side of the equation. Knowledge is available to us as layers of rock and sediment and fossils that must be dug for, sifted through, and analyzed by technical experts until they deliver an end product called information. Call it an assembly line of thought rather than Model T’s. The concept of layers showed up everywhere. In geology itself, of course, where it inspired the creation of names and dates for different stages of dominant flora and fauna, even though dating rocks is a chancy business for many reasons, chiefly the huge variability between the sediments created by massive water events versus those associated with extremely lengthy arid epochs. Judgment is called for. Dating can be facilitated by radio-carbon degradation up to a point. File any remaining questions in the box. (You know the one.)

Physiologists working on the question of how the human brain works applied the layer concept to the brain, discovering a reptilian layer, a mammalian layer, and a human layer, which interact with one another in ways we’re still(!) investigating, but we know basically how it works. Haven’t heard much (ever!) about the missing avian layer that would account for this pernicious human obsession with flight, but it’s time to move on.

While the physiologists were still grappling with the physical how-to’s of the human brain, the emerging science of psychology came to noisy birth late in the 19th century and discovered that what we call the mind is also constructed of — TA DA — layers we can identify as the ego, the superego, and the id, unless you prefer the alternate configuration consisting of the conscious mind, the subconscious mind, and the unconscious or autonomic mind. There is also a layered hierarchy of “drives” (survival, sex, pleasure, etc.) that affects the communications among the more architectural psychological layers described above.

On top of all this imaginative psychology there was a perspective on the brain itself that approached it in machine terms. Its function is to propel its human animal through a lifetime of youth, maturity, and entropic decay. It operates the mechanisms for seeing, hearing, tasting, touching, moving, thinking, and communicating. When one of these functions is impaired, it can in many cases be fixed by chemicals and/or physical surgery. Eventually, like all machines, it wears out and stops functioning altogether, which we poetically term death or “passing.”

The dominion of the machine metaphor began to fail in the second half of the twentieth century and has been almost entirely replaced by the computer metaphor. That’s how metaphors work. They also have lifetimes, which end when a better metaphor is found. Sports, dance, drama, and all the other creative arts are metaphors for aspects of life, and all of them embody the principles of holography, which suggests that every piece of the whole in fact contains the whole. Everything is in everything, a uniquely elegant expression of unity if we perceive it with open eyes. But not all expressions characterize everything. Why there is variation, and evolution, and phase transition. Metaphors are the means of continuing the eternal debate about the two big questions referenced above.

There is no perfect metaphor. Why we keep searching for new ones. For every instance of ‘this is like that’ there is likely another instance where ‘this is not like that.’ 

They can’t show you the machinery inside that mechanical head.

Brains are not machines. They are like machines in some respects, but their physical, part-to-part intricacy and consequent performance boundaries are inconsistent with the powers of the brain and the human mind, which is capable of far more than repetitive tasks with minor accidental variations. 

Computers were born during World War II as an outgrowth of the effort to decode elaborate machine-based German cryptography. The machines of the day were capable of formatting and communicating information. They could not perform operations on the information itself and report the results of those operations. New technology was needed to model the enemy machines’ behavior and analyze it for the purpose of deconstructing it. This was achieved by Alan Turing. Like Samuel Morse before him, he used electricity as a physical medium for translating alphanumeric characters into code — electrical signals symbolizing yes/no, bit/no bit — and he designed the means to translate logical and mathematical operands into code the same way.
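To make the “yes/no, bit/no bit” idea concrete, here is a minimal sketch using today’s ASCII convention of eight bits per character. It is my own illustration of the general idea, not a reconstruction of Morse’s code or of anything Turing actually built.

def to_bits(text: str) -> str:
    """Render each character as its 8-bit pattern of yes/no impulses."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits: str) -> str:
    """Reverse the translation: each group of 8 bits back to a character."""
    return "".join(chr(int(group, 2)) for group in bits.split())

encoded = to_bits("AI")
print(encoded)              # 01000001 01001001
print(from_bits(encoded))   # AI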

Computers embody a rudimentary subset of the capabilities human beings use in thinking. Because that subset exists within a tightly controlled physical environment, it can execute instructions at very high speed. Its power grows with the amount of storage available for performing sequential operations and holding the intermediate and final results. The extremely sharp focus this provides to designers of the physical media employed to manufacture computers has produced a rate of development equivalent to, or greater than, that experienced by the aeronautical industry after the first airplane flight.

The incredible speed and flexibility of computer applications led almost immediately to adoption of the computer as the best ever metaphor for the human brain. Which it is. What it does it does immaculately well, faster than humans can do it. Memory for large volumes of input, computation, storage, communication, annunciation, and a large subset of logic-based decision making. This is like that or better.

Like all good metaphors, computer technology inspired new human insights into the ways things work and don’t. Computers can be used to create working metaphorical models of all kinds of processes, and the models can project repeated iterations across as much elapsed time as you want at, again, very high speed. The complexities of such modeling exercises led to the development of specialized programs capable of detecting and correcting errors in code and programs autonomously. Which is when the first inklings of “artificial intelligence” were introduced into the brain/computer metaphor.

[This hubris was aggravated by the early appearance of attempts at “artificial life,” which were jumped up versions of Pong and Space Invaders tweaked with flourishes like programmed predation, mutation, reproduction, and evolution. Ding ding ding ding, you win!]

How to show you the problem with all this? A brain is not like a computer because a computer is not like a brain. They just share certain finite capabilities. The value of the metaphor ends when it is extended into areas where it does not, cannot apply. See Exhibits 2, 3, 5, 8, 9, 14, and 15 above for starters.

Exhibit 2 shows us an artist’s depiction of a self-flying plane, promoted as being available to the mass consumer market in the near term. Never mind that the artificial pilot won’t look like this. That kind of robot is as far in the future as Da Vinci’s helicopter was in the 15th century. 

Self-driving cars are already fraught with problems that won’t go away anytime soon. Yes, AI applications can physically operate an automobile, given the right array of warning sensors and algorithms governing evasive behavior and speed modulation. But they will never be able to match the performance you can experience virtually in Exhibit 3. At the Formula 1 level, driving is closely akin to flying, with huge changes in vehicle behavior associated with tiny changes in manual inputs. The human performance on display in the video is not strictly computation and sequencing at high speed. It is decision making based on input from the eyes, ears, nose, skin, musculature, and emotional response, which no computer system on earth can duplicate or replace because it has none of those organic capabilities.

Computers are not alive. They see nothing, hear nothing, smell nothing, feel nothing, and can’t move a muscle, not having any. Their minds, if you want to call them that, are operationally equivalent to Aldous Huxley’s gammas, able to perform rote tasks in specialized kinds of behavior. Unlike gammas, they also have no sex drive, no hunger, no thirst, and no need for the sleep and dreaming that are integral to the human experience of time. Computers have no experience of time, no matter how perfectly they can count it, because their activities are discontinuous. They also have no means of knowing just how infinite is the amount of information between the electrical ‘yes’ impulse and the ‘no’ impulse of which computer ‘data’ are composed. Without a human programmer at the beginning of the process to design and guide it, every AI system is inert and impotent. The software can’t actually ‘know’ anything, any more than owning the Library of Congress and a copy of the Dewey Decimal System makes you a genius; it makes you a neophyte librarian who probably sells plagiarized term papers on the side. Worst of all, the absence of emotions and sensory inputs deprives the computer of any capacity to deal effectively with matters of morality. The word has no meaning in any terms a computer can ‘comprehend.’ Why would any rational scientist believe that the majority of human systems can be turned over to computer control by so-called ‘Artificial Intelligence’?

Answer? Because they are rational scientists. They made a huge trade when they committed the entire discipline of science to a definition of reality as consisting only of those phenomena which can be counted, measured, and documented under laboratory conditions in accordance with their own rules regarding experimentation and proof.

A computer can calculate and depict the complexities of the Mandelbrot set, but it cannot see or understand them. Nor can it understand or compensate for the infinite effects of universal variability, which also expands the universe of variables required to make any computer model successful. Climate change models all fail because there are variables not yet identified which account for effects that invalidate the models’ predictions. Always. Why the most famous demonstration of meta-human supercomputer ‘intelligence’ is victory over human grand masters at chess. Convenient that chess is a finite game that responds to trying all possible variations, which cannot be done when the possible variations are infinite.

Science defies all kinds of self-evident reality because it has no mathematics devoted to summing the probability of realities based on anecdotal evidence, no matter how numerous the accounts, how fundamentally consistent, or how many witnesses perceived them, even unto the thousands at a single instance. When anecdotal evidence conflicts with quantitative authenticity standards, anecdotal witnesses are all deluded, drunk, hoaxers, or dupes of conjuring tricks.

Why are they so obsessive about this part in particular? Because they plighted their troth to rationalism during the machine age, and their mental models have become obsolete in the computer age. They believe they can control AI because they view it as a kind of advanced and beautifully intricate machine of their own devising. Which represents a world-changing opportunity to complete the philosophical process of replacing the follies of religion and superstition with a perfectly logical machine possessed of intelligence far superior to the human brain.

Except that they are mistaken. Profoundly, tragically mistaken. The computer as a metaphor may be pure in concept, but it’s tainted by the original sin of computer-industry history: it conforms to the layering conventions we’ve already seen used and abused in other disciplines. The new AI systems software being rolled out daily now is operating on top of institutional computer infrastructures established well over 50 years ago and still ruling the roost. Underneath all the newfangled stuff are system and network architectures/protocols dating back to the era of IBM’s monopoly on mainframe computers. These built themselves up in layers of new communications and hardware upgrades for decades until, ironically, IBM’s own Personal Computer product wrought a revolution that made computers universally available in ever more powerful forms. However… the layers are still there, impeding transparency, efficiency, and performance. The original IBM PC architecture allowed just 640KB of conventional memory, and the subsequent transformation of the product from a standalone machine without communications into nodes in a vast distributed communicating environment was achieved by add-ons to the original operating system. For years, you could start up your brand-new hotshot Windows miracle worker and it booted up the same old way, beginning with the annunciation of that same 640KB of memory. The Model T was still there, idling chunkily along underneath the Formula 1 track layered on top of it.

It’s still there now. But we are not supposed to look back for warning signs from the past about our future. Everyone has forgotten the close call western civilization had with the Y2K bug, which got fixed by the extraordinary last-minute effort of thousands of unheralded programmers around the world. Indeed, the whole trauma was almost instantly dismissed as a joke, and those whose warnings saved us were lumped in with the loons who report seeing UFOs, Bigfoot, and ghosts in every developed nation on earth.

The TekLords don’t want us to see hidden dangers, because this is their best chance to be the gods of the new technological order that will make most human beings superfluous in every activity but menial jobs involving muscles and bleeding.

Artificial Intelligence is a Big Lie. Another absurd, wildly unrealistic promise of order-of-magnitude improvement in our lives. In the 1980s, businesses spent millions and then billions on desktop computers that were going to create the “paperless office.” Anyone can see how that worked out. Airbus and Boeing committed huge chunks of their R&D budgets to designing computer-operated commercial airliners in which pilots would become obedient system operators rather than ship captains. Both have paid for their premature ambition with deadly and costly failures in which software encountered a unique emergency not anticipated by the programmers adding AI flourishes on top of older layers of software.

Every one of us on social networks has a clue about the headaches to come from AI algorithms imposed on top of established systems. Autocorrect (Exhibit 8). If you haven’t noticed yet, you will. A supposed breakthrough in AI called Large Language Models (LLMs) is turning apps loose to expand their fill-in capabilities by beating typists to the punch with spellings and corrections of common errors, even to the extent of completing sentences that seem to have predictable ending words. As someone who types a lot, I’ve experienced a 10-20 percent increase in the amount of time it takes to undo fallacious AutoCorrect replacements, even having to fight them about the spelling of a name that isn’t famous yet, because the app knows better than I do what I’m trying to say. On FB it’s so bad that any word which corresponds in part to the name of a FB ‘friend’ is immediately filled in with that name and its associated hyperlink. There’s no rhyme or, uh, reason to this. It’s just junk software in the current AI craze. Explorations of graphical apps reveal the same kinds of gaps in capability. Graphic programs that turn photos into animations don’t actually know what a face is. If the mapping algorithm of light and dark doesn’t find the right cues, it’s not a face. Find another pic.
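For the record, you don’t need anything as grand as an LLM to get the “predictable ending words” effect; a crude word-frequency table will do it, badly, which is roughly how it feels on the receiving end. The toy corpus below is invented purely for illustration and resembles nothing that any real autocorrect actually ships.

from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Tally which word most often follows each word (a bigram table).
next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1

def suggest(word: str) -> str:
    """Guess the next word from raw frequency; no grammar, no meaning, no context."""
    counts = next_word.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(suggest("the"))   # 'cat'  (it followed 'the' most often in the toy corpus)
print(suggest("mat"))   # 'and'  (the only thing that ever followed 'mat')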

A little test I put to Google. A search for houses that look like faces. Proved me wrong? 
Look closer. No. Google searched for the wording in captions, not the pics themselves. 

My iPad image app, even after constant AI-infused point releases, can’t even recognize the color blue as a search term, only the word. Hard to remember that none of these devices can see, hear, etc., and the only world they know is the one you see in Exhibit 9. This, needless to say, is the world in which Exhibit 11 is a nonexistent anecdote. The God spot has been appropriated by the TekLords. And thanks to their Large Language Models, all the computer memory and storage capacity will be devoured in the gods-given AI-mageddon to come. A real whopper of a Big Lie this time, ain’t it?

It’s not their first Big Lie. Not even the paperless office was that. No, the first Big Lie of the computer age was a ridiculous utopian objective called the Turing Test, which some genius or other claims to have passed every year or so with a new model of human consciousness that can fool a human into thinking he/she is communicating with another real human being.

It’s a joke. It always occurs in a laboratory, one keyboard to another with a wall in between. The whole exercise is a setup. To pass a real Turing Test, the computer would have to open a door in the wall, step through it, shake hands, sit down for a cup of coffee, and chew the fat about what they’d done this morning or back when they were six in Syracuse. Not going to happen. Not now, not ever. It can’t succeed because robots are tinker toys compared to human anatomy. It doesn’t work even when the programmers have only one test to pass: providing a two-dimensional image of a real person by artificial means, so convincingly that the observer does not suspect he/she is being fooled. People invariably do detect it, just as you can detect it in Exhibit 16 above.

It’s called the ‘Uncanny Valley.’ People know. It gives them the creeps. Why this lavishly expensive and ambitious children’s movie was a box office disappointment:




The Uncanny Valley is expanding day by day. I know that’s not Samuel L. Jackson in all those Capital One ads. They give me the creeps. We are also breaking like a wave into the kind of danger zone the nuclear scientists trespassed into when they thought they knew more about radiation than God. When AI has consumed all the juice, the tipping point can be reached suddenly indeed.

Me, I can’t wait till William Blake’s wielder of the divine compass gets his own AI television series and gets busy punishing the programming zombies who’ve been messing up his creation with a duller, dumber, more deadly creation of their own. Guest starring AI cast members might include uncanny avatars of Arnold Schwarzenegger, Scarlett Johansson, and Keanu Reeves in the pilot episode. Something to watch till the moment of the Big Crash that turns off all the lights.
