In the Information Age, the impact of Artificial Intelligence can be felt in even the most mundane of tasks. From the song we instruct Amazon's Alexa to play when we wake up in the morning to the chatbots we converse with frequently, our modern lives are wholly immersed in machines that learn for us, think for us, and help us make better decisions.
With that said, however, the ever-evolving cybersecurity threat landscape sees cybercriminals turning to AI and using it as a tool to wreak havoc on individuals and organizations. Fortunately, AI can also be utilized in the fight against cybercrime, a belief shared by a majority of executives and cybersecurity specialists as well.
Regardless of the deep-rooted impact that AI and Machine Learning have had on daily life, the primary goal that Artificial Intelligence seeks to fulfill is this: to create a machine powerful and smart enough to handle any level of cognitive difficulty in a variety of settings, quite similar to how the human brain functions.
Generally speaking, the AI ecosystem divides AI programs into two tiers, namely, weak and strong AI. However, as time and technology progress, we need to rethink our approach to AI and its classifications, along with the significance (if any) that they bear.
Before we can get into that, however, let’s dive into what the terms mean.
Usually, when a conversation revolves around Artificial Intelligence, people throw examples such as Apple's Siri and Amazon's Alexa into the mix, without realizing that these represent a profoundly limited view of the AI ecosystem.
As we’ve mentioned before, there are two base classifications when it comes to AI, known as ‘Strong’ and ‘Weak’ AI. Put simply, weak, or narrow, AI refers to any AI program that focuses on a single task and works from pre-programmed algorithms. The mechanism behind weak AI is limited to organizing and matching information, rather than genuinely comprehending the command being given.
An example of this is when you ask Siri for the nearest restaurant. The algorithm recognizes keywords such as “nearest” and “restaurant” and responds accordingly, based on the information it has already been fed. The pre-fed algorithms make weak AI appear intelligent without it actually being smart.
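The keyword matching described above can be sketched in a few lines of Python. This is a hypothetical illustration of how a narrow AI assistant might map an utterance to a pre-programmed response; the intents and keyword sets are invented for this example, not Siri's actual implementation:

```python
import re

# Hypothetical intent table: each "intent" is just a set of trigger keywords.
# (Illustrative only -- not how any real assistant is implemented.)
INTENTS = {
    "find_restaurant": {"nearest", "restaurant", "food"},
    "play_music": {"play", "song", "music"},
}

def match_intent(utterance: str) -> str:
    """Pick the intent whose keyword set overlaps the utterance the most."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(match_intent("Where is the nearest restaurant?"))  # find_restaurant
```

Note that the program never "understands" the request: any phrase containing the right trigger words produces the same response, which is exactly the limitation of narrow AI.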
On the other end of the spectrum, a strong, or general, AI replicates the cognitive functions of the human brain. Instead of relying on pre-programmed responses, a strong AI program, machine, or software relies on clustering and association to process data. Just as with a human brain, it’s hard to predict how a strong AI would respond to specific keywords; we only know that it would make an independent decision based on how it evaluates and reacts to the data provided to it.
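The clustering mentioned above can be illustrated with a toy k-means routine in pure Python, which groups data points by similarity rather than looking answers up in a fixed table. The data points and parameters are invented for illustration; this is a minimal sketch of one clustering technique, not a real strong-AI system:

```python
import random

def kmeans_1d(points, k, iterations=20, seed=0):
    """Cluster 1-D points into k groups by iteratively refining centroids."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups in the data: values near 1 and values near 10.
print(kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], k=2))  # ~[1.0, 10.0]
```

The structure here was never pre-programmed: the algorithm discovers the two groups from the data itself, which is the association-driven style of processing that strong AI is meant to take much further.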
With the ‘basic’ definitions out of the way, we must analyze how they hold up in the current age of technology, particularly when we take into account the burgeoning influence that AI has had on modern cybersecurity solutions.
When it comes to understanding the differences between strong and weak AI, cybersecurity specialists often tend to overlook the underlying meaning behind the definitions mentioned above. To gain a more in-depth understanding of Strong vs. Weak AI, we need to dig deeper than what those fundamental definitions represent.
To get our points across quickly, we’ll analyze both definitions of AI separately, starting with Strong, or general, AI.
As we’ve mentioned above, the conventional definition of strong AI states that the term refers to a form of artificial intelligence meant to replicate human intelligence, or cognitive functions: the ability to think and decide.
Broad or Strong AI refers to artificial general intelligence (AGI), which focuses on the creation of machines that can mimic and successfully perform any intellectual task as well as a human being can. The intelligence of such a system can be broken down into the following components:
- The ability to transfer knowledge across domains and apply it in different settings.
- The capability to plan for the future, based on prior knowledge and experience.
- The capacity to adapt to any changes being made in the surrounding environment.
In addition to the components mentioned, other hallmarks of a strong, or general, AI include the ability to solve complex puzzles and reason, along with the capacity to display intelligence and common sense.
With that said, several specialists and researchers tend to disagree with this somewhat ‘limiting’ definition of Strong AI, doubting whether an AI should be classified as intelligent simply because it can mimic the cognitive abilities of the human brain, such as the ability to communicate.
Central to the debate over the legitimacy of how we define Strong AI is the Turing test, in which human beings attempt to distinguish between the cognitive abilities of a human and a machine. The Turing test puts a machine, an interrogator, and a human in a conversational setting. If the interrogator fails to distinguish between the machine and the human, the Turing test is passed.
Keeping the Turing test in mind, the vagueness surrounding the definition of Strong AI becomes blatantly apparent. For starters, as the test demonstrates, if an AI can successfully replicate the ability to communicate, can we classify it as smart?
The dilemma surrounding the age-old question of what classifies as intelligence only gets harder to answer when you take into account the Chinese Room argument, which asks whether a machine understands Chinese or merely simulates the ability to understand Chinese. John Searle, the creator of the Chinese Room argument, states that without genuine understanding, it’s impossible to determine whether a strong AI is anything more than a glorified simulation.
Furthermore, several researchers and philosophers consider the capacity to experience consciousness a requirement for a Strong AI system, which sets the bar high for artificial intelligence in general.
As we mentioned in the run-of-the-mill conventional definition above, Weak AI refers to any AI that responds to situations based on a pre-fed set of information. However, by the varying definitions of Strong AI, it’s quite easy to conclude that all the AI technology we’ve created up until this point is technically Weak AI.
With that being said, the phrase “Weak AI” creates an image of AI systems that are ineffective at intellectual tasks, which isn’t the case. Instead, weak, or the much better term, narrow AI refers to technologies that progress gradually in complexity as times change.
Today, Weak AI is represented by speech recognition, chatbots, and even self-driving cars such as Tesla’s. A couple of years ago, however, the situation was completely different, as modern technologies of the time, such as OCR (Optical Character Recognition), were considered artificial intelligence.
Keeping this vagueness surrounding artificial intelligence in mind, cybersecurity analysts and researchers must come up with a gradual system of classification, rather than grouping AI technologies into two distinct but vague categories.