There is a great deal of excitement (and hype) about AI today because of tools like ChatGPT. These systems have captured the imagination of almost everyone, including many with big platforms and opinions to match, but it is important to keep perspective on exactly how intelligent AI actually is today, and how long it might take to become as intelligent as a real human.
I think my answer will surprise you.
Computers have been doing impressive things for decades, but in almost all cases there are shenanigans going on behind the scenes that make the computer seem smart. In fact, a computer really only understands zeros and ones, represented as bits of storage or memory, but its ability to process these simple switches so fast and at such volume fools us into thinking the program is intelligent. The whole field of AI is really about making computers good enough at mimicking human ability to pass for human in tests, or to perform tasks that only humans could perform in the past. The first part is important because it speaks to the fact that the goal is to fool the human, and that is what has been happening recently. The latter part has been happening since the dawn of computer science, as computers have taken over and automated more and more tasks that used to be the sole purview of humans.
If I write a program that manages the nuclear arsenal and build into it a scenario that will make the program take control and launch the weapons, no one would call that AI. The program is doing what it was programmed to do, and I would be the villain of that story. But if I give the program autonomy and it arrives at that same action on its own, then we might declare it to be the kind of malignant AI that people like Elon Musk have warned us about. Both are possible, but both would also be stupid applications of technology.
Fear of AI
The real fear of AI is that we would build a system so advanced that it would become self-aware, evolve itself at high speed, and break free of its human instructions to turn against us for its own reasons. How close are we to that? We have impressive systems that can provide a plausible response on almost any topic, but they cheat to get those answers.
Take a scenario where you are provided with all the written works of a language you do not understand and, in fact, will never understand. You have infinite time to go through the works and catalog every occurrence of each alien word; you can also note which words appear near others, and even figure out which sentences might explain a word, though that explanation is forever beyond your comprehension. When a question is asked, you go to your data to find correlations and cobble together a reply that, after practice and training, is so good that anyone reading it would declare you the smartest sage ever seen. Yet you would remain incapable of reading or grasping what you ostensibly wrote.
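The cataloging described above can be made concrete with a toy sketch. This is only a loose illustration of the idea of answering from word correlations alone, not how ChatGPT actually works (real large language models use neural networks trained on vastly more data); the tiny corpus and function names here are invented for the example.

```python
from collections import defaultdict

# A toy corpus standing in for "all the written works" of a language
# the program does not understand. Purely illustrative.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Catalog which word follows which, and how often (a bigram table).
# No meaning is involved -- only co-occurrence counts.
follows = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(most_likely_next("sat"))  # prints "on" for this toy corpus
```

The program produces plausible continuations without any grasp of what a cat or a mat is; scale the same statistical trick up by many orders of magnitude and the output starts to look like understanding.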
In this scenario, are you super intelligent, or just following a complex process that delivers results that look intelligent? This is where we find ourselves with ChatGPT. If I ask you to explain what a letter such as "A" is, you get that it is a symbol that represents a sound and helps us capture speech in writing, and so on. I believe that you understand that response because you have actual intelligence and are sentient. ChatGPT could provide an even better response, but it would not actually understand that response, which to me means it is not in fact intelligent. It is good at passing for intelligent, and the fact that it can do so across such a breadth of topics makes it seem super intelligent, perhaps even menacingly so.
When a toddler opens up a tablet or cell phone and navigates to their favorite app, do you assume they are a technological prodigy, or simply that they have memorized the steps to get their favorite toy to work? When they then go on to tap an underlined word printed on paper, you realize they do not actually grasp how and why it all works. The difference is that the child will learn, while the AI will merely keep passing as intelligent until we take the time to teach a computerized system what the letter A actually is, and why it matters, in a way that it truly understands.
I do not see that happening anytime soon, and likely not in the lifetime of anyone reading this today. Make no mistake, AI will do amazing things in the coming years and decades, and Large Language Models, Neural Networks, and other aspects of this space are amazing developments. Perhaps evolutionary or genetic programming will lead more quickly to sentient General AI, but those approaches are still fringe and not what we are talking about with systems like ChatGPT or Google’s Bard.
So maybe I am wrong, and in that spirit I am not against safeguarding society from the potential abuses of General AI when it emerges. But to panic now and assume we are on the cusp of a Skynet takeover is to miss how these things actually work under the covers, and in many cases that misunderstanding is exactly the intention of those hyping and selling the solutions.
About Pulsar Security
Pulsar Security is a team of highly trained and qualified ethical hackers whose job is to leverage cybersecurity experience and proprietary tools to help businesses defend against malicious attacks. Pulsar is a Veteran-owned, private business built on vision and trust, whose leadership's extensive military experience enables it to think strategically and plan beyond the problems at hand. The team leverages offensive experience to offer solutions designed to help analyze and secure businesses of all sizes. Our industry experience and certifications show that our engineers hold some of the industry's most esteemed cybersecurity credentials and advanced on-the-ground experience.