If you work on the Hill, then you know that Artificial Intelligence (AI), machine learning and 5G are all "next big things" in technology, and that, when they are combined in the near future, computers will rule our world by functioning in ways that seem almost human. Well, just not quite human.
That little bit of “just not quite” is the critical difference between a strictly material world in which there are only objective measures and processes, and one with time, space and matter, plus subjective characteristics like will and choice among alternatives. And in an ultimate sense, it points to the existence of God.
The reason is simple: Computers simply can’t be as smart as human beings.
The pointing-to-God part is mine. That computers can't be as smart as you and I is from Gary N. Smith, an economics professor at Pomona College and the author of "The AI Delusion."
Smith is one smart guy. His academic research focuses on things like “stock market anomalies, statistical fallacies, and misuse of data,” according to his recent post on Mind Matters. The post is intriguingly titled “Computers’ Stupidity Makes Them Dangerous.”
Noting the successes some years ago of the programs AlphaGo and Deep Blue in board-game contests with humans, Smith observes:
“Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking.
“Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don’t understand to make decisions for us.”
Take, for example, the sentence “I can’t cut that tree down with that axe; it is too [thick/small].” To what does the “it” refer in the sentence? If it refers to the tree, then the last word of the sentence should be “thick.” If it refers to the axe, then the last word ought to be “small.”
That's an example of the sentences posited by Stanford computer science professor Terry Winograd, which have come to be known as Winograd schemas, according to Smith.
“Sentences like these are understood immediately by humans but are very difficult for computers because they do not have the real-world experience to place words in context,” Smith writes.
“Paraphrasing Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, how can machines take over the world when they can’t even figure out what ‘it’ refers to in a simple sentence?”
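To make the difficulty concrete, here is a minimal sketch, my own illustration rather than Smith's or Winograd's, of why a purely mechanical rule stumbles on the tree-and-axe pair. The toy "resolver" below simply binds "it" to the most recently mentioned noun, which is about as much context as a rule without real-world experience can draw on; it answers both versions of the sentence the same way, so it must get one of them wrong.

```python
# A toy Winograd-schema pair: the two sentences differ by a single word,
# yet the referent of "it" flips. A resolver with no real-world knowledge,
# here a "bind 'it' to the most recently mentioned noun" rule, gives the
# same answer for both and therefore gets one of them wrong.
schema = {
    "I can't cut that tree down with that axe; it is too thick.": "tree",
    "I can't cut that tree down with that axe; it is too small.": "axe",
}

def nearest_noun(sentence: str) -> str:
    """Naive resolver: pick whichever candidate noun appears last."""
    candidates = ["tree", "axe"]
    return max(candidates, key=sentence.rfind)

for sentence, human_answer in schema.items():
    print(f"{sentence}\n  machine guess: {nearest_noun(sentence)}  |  human answer: {human_answer}")
```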
People understand intangibles, the subjective elements in the workings of the material world that computers are incapable of recognizing and cannot duplicate, no matter how the 1s and 0s are sequenced by a human coder.
It’s this inability of computers that makes them dangerous. To illustrate, Smith points to a statistical exercise he did:
“To demonstrate the dangers of relying on computer algorithms to make real-world decisions, consider an investigation of risk factors for fatal heart attacks.
“I made up some household spending data for 1,000 imaginary people, of whom half had suffered heart attacks and half had not. For each such person, I used a random number generator to create fictitious data in 100 spending categories.
“These data were entirely random. There were no real people, no real spending, and no real heart attacks. It was just a bunch of random numbers. But the thing about random numbers is that coincidental patterns inevitably appear.”
In Smith’s example, the coincidental patterns would prompt computers to conclude that people who have heart attacks spend less on small appliances and household paper products.
So if you want to avoid a heart attack, buy that new Keurig!
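Smith's exercise is easy to reproduce in a few lines. The sketch below is my own rough rendering of it, not his code: it fabricates random "spending" for 1,000 imaginary people, labels half of them as heart-attack victims, and then screens all 100 categories for a statistical link (the t-test and the conventional 0.05 cutoff are my assumptions, not details Smith gives). Run it and a handful of categories will look "significant" purely by coincidence, which is exactly his point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_people, n_categories = 1000, 100
heart_attack = np.array([1] * 500 + [0] * 500)        # half "had" heart attacks, half did not
spending = rng.normal(size=(n_people, n_categories))  # pure noise: no real people, no real spending

# Screen every spending category for a "link" to heart attacks.
false_positives = []
for j in range(n_categories):
    with_attack = spending[heart_attack == 1, j]
    without_attack = spending[heart_attack == 0, j]
    t, p = stats.ttest_ind(with_attack, without_attack)
    if p < 0.05:                                      # conventional significance cutoff
        false_positives.append((j, p))

# With 100 tests run on random noise, roughly five "significant" risk
# factors are expected to surface by coincidence alone.
print(f"Coincidental 'risk factors' found: {len(false_positives)}")
```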
But where does the God factor enter this discussion? Subjectivity is a descriptor for choice. Choice requires the exercise of will. Will requires a mind. Who created the mind?
And unless you can account for the origin of the forces in the original vacuum from which something supposedly sprang from "nothing," quantum mechanics and multiverses cannot account for the mind of man, except, we might speculate, as the result of an infinitesimally unlikely chance occurrence.
And if that's all you and I are … then we had all better go for all the gusto we can get, because we only go around once.