This Is Why Computers Will Never Be As Smart As Humans (And Why God Must Exist)

If you work on the Hill, you know that Artificial Intelligence (AI), machine learning and 5G are all “next big things” in technology, and that when they are combined in the near future, computers will rule our world by functioning in ways that seem almost human. Well, just not quite human.

Gary N. Smith, Fletcher Jones Professor of Economics at Pomona College

That little bit of “just not quite” is the critical difference between a strictly material world, in which there are only objective measures and processes, and one with time, space and matter plus subjective characteristics like will and choice among alternatives. And in an ultimate sense, it points to the existence of God.

The reason is simple: computers can’t be as smart as human beings.

The pointing-to-God stuff is mine. That computers can’t be as smart as you and I comes from Gary N. Smith, an economics professor at Pomona College and the author of “The AI Delusion.”

Smith is one smart guy. His academic research focuses on things like “stock market anomalies, statistical fallacies, and misuse of data,” according to his recent post on Mind Matters. The post is intriguingly titled “Computers’ Stupidity Makes Them Dangerous.”

Noting the successes some years ago of AlphaGo and Deep Blue in board-game contests with humans, Smith observes that:

“Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking.

“Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don’t understand to make decisions for us.”


Take, for example, the sentence “I can’t cut that tree down with that axe; it is too [thick/small].” To what does the “it” refer in the sentence? If it refers to the tree, then the last word of the sentence should be “thick.” If it refers to the axe, then the last word ought to be “small.”

That’s one of a class of sentences posited by Stanford computer science professor Terry Winograd that have come to be known as Winograd schemas, according to Smith.

“Sentences like these are understood immediately by humans but are very difficult for computers because they do not have the real-world experience to place words in context,” Smith writes.

“Paraphrasing Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, how can machines take over the world when they can’t even figure out what ‘it’ refers to in a simple sentence?”


People understand intangibles, the subjective elements in the workings of the material world that computers are incapable of recognizing and cannot duplicate, no matter how the 1s and 0s are sequenced by a human coder.

It’s this inability that makes computers dangerous. To illustrate, Smith points to a statistical exercise he did:

“To demonstrate the dangers of relying on computer algorithms to make real-world decisions, consider an investigation of risk factors for fatal heart attacks.

“I made up some household spending data for 1,000 imaginary people, of whom half had suffered heart attacks and half had not. For each such person, I used a random number generator to create fictitious data in 100 spending categories.

“These data were entirely random. There were no real people, no real spending, and no real heart attacks. It was just a bunch of random numbers. But the thing about random numbers is that coincidental patterns inevitably appear.”


In Smith’s example, the coincidental patterns would prompt computers to conclude that people who have heart attacks spend less on small appliances and household paper products.

So if you want to avoid a heart attack, buy that new Keurig!
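Smith doesn’t publish his code, but the exercise is easy to reproduce. Here is a minimal Python sketch of the idea; the normal-distribution choice for the fake spending figures and the significance cutoff are my assumptions, not details from Smith’s post:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_categories = 1000, 100

# Half "had heart attacks," half did not -- purely fictional labels.
heart_attack = np.array([1] * 500 + [0] * 500)

# Entirely random "spending" data: no real people, no real spending.
spending = rng.normal(loc=100, scale=20, size=(n_people, n_categories))

# Pearson correlation of each spending category with the heart-attack label.
y = heart_attack - heart_attack.mean()
x = spending - spending.mean(axis=0)
r = (x * y[:, None]).sum(axis=0) / (
    np.sqrt((x ** 2).sum(axis=0)) * np.sqrt((y ** 2).sum())
)

# With n = 1000, |r| > ~0.062 is "significant" at p < 0.05 (two-sided).
# By design, any hits are pure coincidence -- typically around five of 100.
spurious = np.abs(r) > 0.062
print(f"{spurious.sum()} of {n_categories} random categories look 'significant'")
```

Run it with different seeds and the “significant” categories change completely, which is exactly Smith’s point: a data-mining algorithm that treats such hits as risk factors is manufacturing patterns out of noise.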

But where does the God factor enter this discussion? Subjectivity is a descriptor for choice. Choice requires the exercise of will. Will requires a mind. Who created the mind?

And unless you can account for the origin of the forces in the primordial vacuum in which something sprang from “nothing,” quantum mechanics and multiverses can’t account for the mind of man, except, we speculate, as the result of an infinitesimally unlikely chance occurrence.

And if that’s all you and I are … then everybody better go for all the gusto we can get because we only go around once.

Author: Mark Tapscott

Follower of Christ, devoted husband of Claudia, doting father and grandfather, conservative lover of liberty, journalist and First Amendment fanatic, former Hill and Reagan aide, vintage Formula Ford racer, Okie by birth/Texan by blood/proud of both, resident of Maryland. Go here: https://hillfaith.blog/about-hillfaith-2/

41 thoughts on “This Is Why Computers Will Never Be As Smart As Humans (And Why God Must Exist)”

  1. I just finished editing a chapter about AI in a book about health care, so I’m tuned to the subject.

    This article hits the bull’s-eye, and it’s crucial for everyone to remember that Artificial Intelligence is misnamed. It’s just a form of number-crunching that should be called Imitation Intelligence.

    I really mean that. All that makes AI impressive to us dunderheads is that it imitates conclusions that an actually intelligent person might make. But actually intelligent people make stupid decisions all the time, too.

    Worshiping AI is a dangerous kind of idol worship–all idol worship is, of course. But AI is a particularly harmful human creation. It seems Godlike–unknowable in its mysterious neural net conclusions–but it’s a really hollow entity. It finds “patterns” in coincidences but finding patterns at that level is not intelligence. Maybe it’s a sub-intelligence, but it’s not intelligence.

    Intelligence is an aspect of life. Intelligence is the name we give to the organic mental effort all creatures exert toward survival and procreation. Intelligence is inherently in the service of organism survival.

    If we ever invent computers that are mortal and that must be intelligent to survive, then perhaps we will have invented intelligence. But the current sorting of electrons in gigantic inanimate machines…is empty number crunching…

    There’s nothing wrong with that, mind you. Within narrow limits, computer pattern-finding can have uses. The old Netflix algorithm for suggesting movies to watch worked well, and it worked on huge number-crunching and pattern formation. But in life-and-death situations, abandoning our own intelligence to genuflect to a machine is irresponsible enough to be almost evil.

    Consider the Boeing 737 Max. Its “intelligent” software overruled the pilots and two airplanes full of men, women and children perished. Did the intelligent computer at the heart of the decision care? To state the question proves the point: the computer is not intelligent, and can never be intelligent, because it is not alive.


  2. Try an experiment.
    1. Select and copy to the clipboard a few arbitrary paragraphs of prose off the internet, for example the first paragraphs of this article.
    2. Browse to Google Translate.
    3. Using copy and paste, translate the selection from English to German.
    4. Repeating, translate from German to Russian.
    5. Repeating, translate from Russian to Chinese.
    6. Repeating, translate from Chinese to English.
    7. Compare input to final output.
    Then ask yourself if Professor Smith isn’t a little too insouciant about the prospects of humans vs. AI.


  3. Three things about this are rarely appreciated:

    First, there are lots of kinds of AI work, broken into subsets addressing different problems (natural language processing, pattern recognition, etc.). Only a tiny amount of work has yet been done to try to simulate full-fledged Artificial Persons. It is truly the least advanced, most purely pie-in-the-sky part of the project. All the other stuff is well underway and present in commercial products today, but Artificial Personhood isn’t even as far along as large-scale teleportation is. Major players in the industry admit this freely…but quietly, because their corporate marketing strategies and research funding often depend on projecting a much more exciting picture of robots and Jarvis (and hopefully not Skynet).

    Second, all of the subsets and techniques described above, when combined together, have the effect of SIMULATING a personality, not CREATING a personality.

    The “Turing Test” is the controlling paradigm: If you’re trying to make an Artificial Person as a consumer product, nobody cares if you really succeed at creating a Person. All that matters is that, day-in, day-out, the user is adequately fooled by their user-experience, so that they can live in the illusion that they have a “loyal helper.”

    And even THAT isn’t required, beyond a certain point. The consumer doesn’t want to be frustrated by situations where the robot/digital assistant can’t figure out things that a human could. But so long as that doesn’t happen, the consumer doesn’t really CARE if, every now and again, the mask slips and the “digital assistant” misunderstands in ways that reveal its personality as only a simulation.

    In fact, if people become too-completely-fooled into thinking their digital assistants are Real Persons, then they’ll feel obligated to “liberate” them or give them voting rights, won’t they? And suddenly, selling a Digital Person as a consumer product will be slavery. (Not a very good business model for the firm selling them!)

    Thirdly, and most importantly: You cannot ever, even in principle, create an Artificial Person by gradually perfecting the techniques of simulating an Artificial Person. Even if a way of creating new Persons were possible for humans (other than the, uh, “old fashioned way”), the act of creating is just an entirely different exercise than simulating. To be an Elvis impersonator, you merely have to dress yourself up to look like Elvis and say, “Ah-uh-huh” now and again with a hip rotation. To be ELVIS, you have to be conceived and born and live the life of Elvis Presley.

    All existing AI work on Artificial Persons falls under the “refine our techniques for simulating personhood” category. All of it. Nobody has any idea how to create an artificial person. If there were a way, it would doubtless be entirely different. Work has not even begun.

    Keep those three things in mind, and you’ll more easily avoid saying nonsense on the topic of AI.


  4. This all leads to a god of the gaps fallacy. Argument from Ignorance or Incredulity. Please just demonstrate positive evidence for your positive claim that God exists. How would we ever be able to detect that which is outside of nature?


    1. No, the “God of the Gaps” claim is a useful rhetorical device, but suggesting a supernatural cause of an otherwise unexplainable natural event is no less reasonable than claiming we must have faith that, sooner or later, science will find a natural explanation. Both can be dismissed as speculative or considered as possible explanations.


      1. Disproving alternatives does nothing to demonstrate the positive claims you make. It’s basically God of the gaps: hoping to prove your claim through the incredulity of its not being true. Fallacy.


      2. Understood. And your point cuts both ways, which at one level was my point. As for the “positive” evidence that God exists, shall we start with the cosmological argument? 1: Whatever begins to exist has a cause. 2: The universe began to exist. 3: Therefore, the universe has a cause.


      3. True, if we ignore the logical implications of the argument. But, if we follow those implications to understand what must characterize such a cause, doing so produces a description of a being very much like the traditional Christian understanding of “God.” The Cosmological Argument, like all others, exists in a context, not in isolation.

        Cross-examined.org’s Evan Minton puts it this way: “Whatever begins to exist has a cause; given that the universe began to exist, it follows that the universe has a cause of its existence. The cause of the universe must be a spaceless, timeless, immaterial, powerful, supernatural, uncaused, personal Creator.”


      4. But the real reason Craig changed it to “…begins to exist…” rather than just “…exists…” is that “whatever exists must have a cause” would also include the God he is trying to say is eternal. It’s circular and special pleading.


      5. I assume you are referring to the philosopher William Lane Craig, but my introduction to the (Kalam) Cosmological Argument was via Frank Turek. To move this discussion forward: you assert that it’s “circular and special pleading” without defining your terms and applying them to the discussion at hand. It’s easy to throw out the charge; now justify it.


      6. But that word “definitions” is probably the issue here. I agree. The issue is that instead of being able to provide actual evidence of a God’s existence, you’re trying to define Him into existence by conceptualizing. No amount of conceptualizing or defining will actually demonstrate that something actually exists.
        Just because we can conceptualize or define what “must” be doesn’t mean it actually is.
        Demonstrate a God’s actual existence.


      7. Your responsibility isn’t to prove that it might be; your burden is to demonstrate that it is.
        Making people prove the negative is a shift of the burden of proof. Your claim isn’t automatically proven if someone else can’t prove it wrong. You must still demonstrate the positive. Russell’s Teapot.


      8. And atheism (I’m not an atheist myself) doesn’t claim that God does not exist, only that atheists don’t believe because you haven’t met your burden of proof. Instead of playing word and definitional games, you should actually demonstrate that God exists, or admit that you can’t demonstrate it but choose to believe by faith alone.


      9. Ah, there’s that corner into which you thought you were backing me! My point in this whole discussion of the GotG fallacy is that nobody has definitive proof of an infinite being because nobody has sufficient brain power to comprehend such proof. That is why we, being finite in our understanding, are all forced to look at the evidence and logic, then decide what logical possibilities can be drawn from them. It’s the process of reason and faith together. The GotG rhetorical device is a clever way of imposing that infinite-proof condition on a syllogism that only points to a logical deduction.


      10. I don’t think you are dishonest, Mark. But the Kalam as Turek and Craig employ it is a dishonest argument: circular, begging the question, special pleading and shifting the burden. Just a dishonest argument altogether.


      11. I posted a review earlier on this blog. I think Sam was the strongest patriot/person of the two cousins. Not president material though. Very zealous for Christ


      12. Thanks, will definitely read it. Madison is my favorite among the Founders. The Scottish Common Sense school always appealed to me and I think it’s evident in Madison’s treatment of factions in The Federalist Papers.


      13. All questions of the “existence” of God aside — how are we to map that earthly verb existo onto a transcendent Being, anyway? — what’s most important is that we do not project our desire to be related to a Higher Power onto the circuitry of Artificial Intelligence. With Artificial Intelligence we are again invited to believe that a man-made thing is God. It’s always tempting–maybe that’s why the command against Idolatry is Command One. AI is another clay idol presented to us–probably unconsciously by its makers–but it’s important not to take the bait.

