Has the Turing Test deceived us?
A reader writes:
I liked your essay about ChatGPT. You're right that it just mixes and matches human-created content and lacks any understanding of what the content means. It provides a simulacrum of human intelligence, but actually is not intelligent. This is why I think of ChatGPT as a "Fake AI," and why I think we've entered "The Age of Fake AI" - a time period in which people will, based on their interactions with machines, claim they are as smart as humans when they actually aren't at all. However, ChatGPT also shows that a machine doesn't necessarily need any conceptual understanding of the data it is being fed to generate an output that is useful to humans. It's enormously valuable in certain applications.
This is an interesting thought and puts into words what I have been trying to express. The "AI" that is bandied about these days isn't intelligent, nor was it designed to be intelligent; it is just a learning program that farms text (or images) to create a mashup of data.
I question our reader's second idea, though - that ChatGPT will have value. If it gives wrong answers (as, indeed, humans often do), is it really useful? I suppose as a chatbot to tell you your bank balance, it might be helpful to put a "human" face on your business. But as a bot to read bedtime stories to your children? I think you are better off with drag queens. ChatGPT can go off the rails.
So far, the only use I can see is in writing Trump speeches. Not only does it nail the simple vocabulary and cadence, it is chock full of lies - just like the real thing. Telling that Trump is the human most easily replaced by a robot.
But I digress.
Alan Turing created what is called the "Turing Test" back in 1950, anticipating that someday computers might be programmed to imitate humans. He called it the "imitation game":
The Turing test, originally called the imitation game by Alan Turing in 1950,[2] is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech.[3] If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give.
The problem with this test, besides the fact that it can barf up a Trump speech (not "intelligent behavior"), is that, as the name implies, it only tests the ability to mimic, not to actually think. A trained parrot could arguably pass the test, under certain circumstances.
Yet the press and certain narcissistic billionaires are all up in arms about it, claiming it is actually a "thinking machine" and not merely a program that scans copy from the Internet and produces facile responses to thoughtful questions.
It is akin to "Deep Blue" - the chess-playing computer. A human being, playing chess, works through a number of moves and scenarios, but is incapable of considering more than a handful. A computer, by contrast, can search millions of positions many moves ahead and pick the move that is optimal - i.e., has the highest probability of winning. This is not "thinking" so much as it is brute-force calculation.
We had a professor in school teaching semiconductor design. He would go through these convoluted equations on how to calculate the current through a semiconductor transistor (including something called "hole current" or current produced by lack of electrons). The final exam was one question - calculate the current through a semiconductor transistor, using this convoluted equation.
The brute force approach was to plug in all the numbers and spend an hour calculating the answer. Provided you didn't miss a single digit, you might finish the equation before the bell rang. It was like that movie Mad Max, where they give the guy a hacksaw - he can try cutting off the handcuffs, or cut off his leg instead. But he only has so much time!
The correct answer was knowing that of the 15 elements in the equation, only four were necessary to calculate the current to three significant digits. You could ignore the other elements as trivial. Smart people who understood the overall concept got the test done in ten minutes. People like me, who were too literal, cranked out a wrong answer after sweating for an hour, and got a B-.
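The professor's point about dominant terms is easy to show numerically. The sum below is a made-up stand-in for the transistor equation (the numbers are invented, not real device physics): four terms dominate, and dropping the other eleven changes nothing in the first three significant digits.

```python
# Hypothetical 15-term sum standing in for the transistor-current
# equation (invented numbers, not the real formula). Four terms
# dominate; the other eleven are orders of magnitude smaller.
dominant = [4.70e-3, 1.20e-3, 3.10e-4, 9.00e-5]
trivial  = [2.0e-9, 1.5e-9, 8.0e-10, 3.0e-10, 1.1e-10, 9.0e-11,
            4.0e-11, 2.5e-11, 1.0e-11, 7.0e-12, 3.0e-12]

full  = sum(dominant) + sum(trivial)  # the hour-long brute-force answer
quick = sum(dominant)                 # the ten-minute answer

# Both round to the same three significant figures:
print(f"{full:.3g}")   # 0.0063
print(f"{quick:.3g}")  # 0.0063
```

That is the whole "trick": knowing which terms matter is understanding; grinding through all fifteen is just arithmetic.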
But like with AI, some people cheated. Well, they didn't cheat, exactly, but went to the fraternity's test archives, realized there was a "trick" to the test, and then followed the trick. So they appeared smart, but actually just cut corners. In the real world, they would likely get a lot of wrong answers - unless they had an endless supply of crib sheets. And life doesn't come with many crib sheets.
So why all the paranoia and talk about "AI" when in reality it is just a chatty chatbot and a shitty artist who draws people with six fingers? Well, for starters, I think it is another example of hype being used to gin up the price of various stocks. Maybe that is why Musk wants a six-month "moratorium" (which he can call for, being a rich guy and all) - he hasn't had a chance to buy in yet! Stop the train! I haven't gotten on yet!
I think we will see a lot of hype about this in the near future. We already have. "AI passes entrance exam to law school!" one screams. "AI passed medical test!" Surely, it is only a matter of time before a Chatbot replaces our doctors and lawyers. "If you cannot afford a real lawyer, a chatbot will be appointed for you!" - the Miranda warnings will have to be updated.
But then again, I doubt it. Can a chatbot bribe a judge or sway a jury with subtle hints of racism? After all, could we really survive a justice system that is truly blind and gives rich kids from the suburbs the same sentences as black kids from the ghetto? No, no, we can't have that, can we? Next thing you'll tell me is that billionaires would be held accountable to the law the same as poor people. Geez, no wonder Musk is scared.
But seriously, I am not sure that ChatGPT is ready for prime-time or is going to "replace humans" in many (or any) roles. We already have chatbots online - your bank or utility company website helpfully offers these all the time. Do you click on them? Of course not. Because when you have an issue with your bank account or credit card or utility service, the chatbot can only cough up facile, servile answers that are not really helpful. Better versions of a chatbot aren't the answer.
In a way, they are like overseas "help lines" where the person on the other end of the phone barely speaks English, has an incomprehensibly thick foreign accent, and whose only tools in the toolbox are to tell you your balance or say, "Have you tried unplugging it and plugging it back in again?" We appreciate your efforts, Sanjay, but what do I have to do to escalate this call to a US call center? Thanks.
Fear of AI makes for great headlines (and accompanying graphics). It creates a "news story" when there is no news. But as our reader notes, it isn't really intelligent; it's more of a parlor trick.
I guess time will tell. But I am not sure how we can "create" a real intelligence in silicon and software, any more than we can become gods. It's just a machine that does what we tell it.