Artificial Intelligence: A Summary of Strength and Architecture

Not all AI is created equal

Types of Artificial Intelligence: Redux

Artificial intelligence has a problem: no one can precisely tell you what it is supposed to be. This makes discussing its future difficult. Current machine learning methods are impressive by the standards of what has come before, and certainly we can give various systems and networks enough power to rival and exceed human capabilities in narrow areas. The contention is whether or not these networks qualify as “artificial intelligence”.

My personal definition of artificial intelligence is a controversial one, as I am prone to lumping even basic software calculations under the umbrella of “AI”. This is because there are essentially two separate kinds of “artificial intelligence”: there is the field of artificial intelligence research, which is a branch of computer science, and there is the popular connotation of artificial intelligence. AI is popularly understood to mean “computers with brains of varying intelligence”.

Business rule management systems are not commonly considered to be “artificial intelligence” in the popular imagination. Indeed, the very name conjures a sort of gunmetal-boring corporate software model. Yet BRMS software is one of the most widely commercialized forms of AI, dating back to the late ’80s.

If we limit all AI to “computers capable of creative thinking”, even many classic sci-fi depictions of AI would not qualify. Yet if my terrifying and anarchist definition became dominant, then we would have to presume that the Sumerians created the first AIs when they invented abacuses.

This is one reason why the original post on the various types of AI doesn’t work. But there are plenty more.

 

Another bottleneck in understanding the future of AI research is our limited imagining of artificial general intelligence, a feat of engineering often ranked alongside practical nuclear fusion, room-temperature superconductors, and metallic hydrogen. As with all of these, the possibilities are much wider than we initially conceived. Yet it’s AGI in particular that attracts a great deal of hype and misunderstanding, energy that could be turned toward practical breakthroughs far more easily if we shifted how we perceive it.

I, for one, always found it odd that we equate “artificial general intelligence” with “human-level AI” despite the fact that every animal lifeform possesses general intelligence, yet no one seriously claims that nematodes and insects are our intellectual rivals.

“Surely,” I said as far back as 2012, “there has to be something that comes before human-level AI but is still well past what we have now.”

A related issue is that we compare and contrast today’s narrow AI software with a general AI imagined many decades hence, allowing ourselves nothing to bridge the gap. There is no ‘intermediate’ AI.

All AI is either narrow and weak or general and strong. We have no popular conception of “narrow and strong” AI, despite the fact that we have developed a multitude of narrow networks that far surpass human capabilities. We also have no popular conception of “general and weak” AI, which is to say an AI that is capable of generalized learning but is not as intelligent as a human. This could be due to a motley variety of factors, many of them coming down to basic neuroscience; for example, something that learns on a generalized level may still lack agency.

So here is a basic rundown on my revised “map” of AI, which has three degrees: Architecture, Strength, and Ability.


Architecture

Architecture is defined by an AI’s structural learning capacity and is already captured by the terms “narrow AI” and “general AI”. Narrow AI describes software that is designed for one task. General AI describes software that can learn to accomplish a wide variety of tasks. Of course, the latter is usually treated as synonymous with software that can learn to accomplish any task at the same level as a human being; I’ll explain later why that isn’t necessarily always the case.

I wish to add one more category: “expert AI”. Expert artificial intelligence, or artificial expert intelligence (XAI or AXI), describes artificial intelligences that are generalized across a certain area but are not fully generalized, as I’ll explain in greater detail below. You may see it as “less-narrow AI”: computers capable of learning a variety of similar narrow tasks. AXI is very likely the next major step in AI research over the next five to ten years.

 

Mechanical Calculations: These are calculators and traditional computer software. They only perform calculations: addition, subtraction, multiplication, division, and so on. There is no intelligence involved. Mechanical calculation can be considered the ‘DNA’ of AI, the root from which we are able to construct intelligences, but it is not by itself a form of AI. As aforementioned, this level starts with ancient abacuses.
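To make the distinction concrete, here is a trivial sketch (invented for illustration, not drawn from any particular system): a mechanical calculation maps input to output by a rule the programmer fixed in advance, and nothing about it ever improves with experience.

```python
def mechanical_add(a: float, b: float) -> float:
    """A purely mechanical calculation: the rule is fixed by the
    programmer and never changes, no matter how often it runs."""
    return a + b

# The millionth call behaves exactly like the first; there is
# nothing here that could be said to 'learn'.
print(mechanical_add(2, 3))  # 5
```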

Artificial Narrow Intelligence: Artificial narrow intelligence (ANI), colloquially referred to as “weak AI”, refers to software that is capable of accomplishing one singular task, whether through hard coding or soft learning. This describes almost all AI that currently exists, and it is also possibly the most consistently underestimated technology of the past 100 years. Just about any AI you can think of, from Siri down to motion sensors, qualifies as ANI. Once you program an ANI to do a certain task, it is locked into that task. Just as you cannot make a clock play music unless you rebuild its gears for that purpose, you must reprogram an ANI if you want it to do something it was not programmed to do. This includes narrow machine learning networks that are confined within fixed parameters. Machine learning involves using statistical techniques to refine an agent’s performance, and while this can be generalized for much more interesting uses, it is not magical and is natively a narrow branch of AI.
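As a minimal sketch of that narrowness (toy data and a toy model invented for this post, not taken from any real product), the program below learns exactly one task: deciding whether a point lies above the line y = x. However well it masters that task, nothing in it can be repurposed for speech, vision, or chess without a human rewriting it.

```python
import random

random.seed(0)

# The one and only task: is a point above the line y = x?
def make_example():
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return x, y, 1 if y > x else 0

data = [make_example() for _ in range(200)]

w1 = w2 = b = 0.0                      # perceptron weights
for _ in range(20):                    # a few passes over the data
    for x, y, label in data:
        pred = 1 if w1 * x + w2 * y + b > 0 else 0
        err = label - pred             # classic perceptron update
        w1 += 0.1 * err * x
        w2 += 0.1 * err * y
        b += 0.1 * err

correct = 0
for x, y, label in data:
    pred = 1 if w1 * x + w2 * y + b > 0 else 0
    correct += pred == label
print(f"accuracy on its single task: {correct / len(data):.0%}")
```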

Artificial Expert Intelligence: Artificial expert intelligence (AXI), sometimes referred to as “less-narrow AI”, refers to software that is capable of accomplishing multiple tasks in a relatively narrow field. This type of AI is new, having become possible only in the past five years thanks to parallel computing and deep neural networks. The best example is DeepMind’s AlphaZero, which utilized a general-purpose reinforcement learning algorithm to conquer three separate board games: chess, go, and shogi. Normally, you would require three separately designed networks, one for each game, but with AXI you can master a wider variety of games with a single general-purpose algorithm. Thus, it is more generalized than any ANI. However, AlphaZero is not capable of playing every game. It also likely would not function if pressed to do something unrelated to game playing, such as baking a cake or business analysis. This is why it is its own category of artificial intelligence: too general for narrow AI, but too narrow for general AI. It is more akin to an expert in a particular field, knowledgeable across multiple domains without being a polymath. This is the next step of machine learning, the point at which transfer learning and deep reinforcement learning allow computers to understand certain things without being mechanically fed rules, and to expand their own hyperparameters.
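The architectural idea is worth sketching, with the caveat that this is not AlphaZero’s actual method (AlphaZero couples Monte Carlo tree search with a deep network); the toy below, with an invented Nim game and a tabular value function, only illustrates the shape of “one learning loop, many games”.

```python
import random
from collections import defaultdict

class Nim:
    """Tiny stand-in game: players alternately take 1-3 sticks;
    whoever takes the last stick wins. Any game exposing these
    four methods works with the loop below unchanged."""
    def initial_state(self):
        return 10
    def legal_moves(self, state):
        return [n for n in (1, 2, 3) if n <= state]
    def next_state(self, state, move):
        return state - move
    def is_terminal(self, state):
        return state == 0

def self_play_values(game, episodes=5000, lr=0.01):
    """Game-agnostic learning loop: estimate, from random self-play,
    how good each state is for the player about to move."""
    value = defaultdict(float)
    for _ in range(episodes):
        state, trajectory = game.initial_state(), []
        while not game.is_terminal(state):
            trajectory.append(state)
            state = game.next_state(
                state, random.choice(game.legal_moves(state)))
        # The player who made the final move won; walk backwards,
        # flipping the outcome's sign at each alternating turn.
        outcome = 1.0
        for s in reversed(trajectory):
            value[s] += lr * (outcome - value[s])
            outcome = -outcome
    return value

values = self_play_values(Nim())
print({s: round(v, 2) for s, v in sorted(values.items())})
```

The point of the sketch is the interface rather than the numbers: swapping Nim for another like game means writing a new game class but changing nothing in the learning loop, which is the sense in which an expert AI is generalized within its field.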

Artificial General Intelligence: Artificial general intelligence (AGI), sometimes referred to as “strong AI”, refers to software capable of accomplishing any task, or at least any task accomplishable by biological intelligence. Currently, there are no AGI networks on Earth and we have no idea when we’ll create the first truly general-purpose artificial intelligence. However, AGI is a much greater qualitative improvement over AXI than AXI is over ANI— whereas AXI is multi-purpose, AGI is omni-purpose. Theoretically, a sufficiently advanced AGI is indistinguishable from a healthy adult human— and even this represents the lower end of the true capabilities of strong artificial intelligence.


Strength

Strength in AI is defined by an AI’s intellectual capacity compared to humans.

Weak Artificial Intelligence is any AI that is intellectually less capable than humans, though the term is colloquially used to describe all narrow AI.

Strong Artificial Intelligence is any AI that is intellectually as capable as or more capable than humans, though the term is colloquially used to describe all general AI.

Because of colloquial usage, “weak” and “narrow” are interchangeable terms. Likewise, “strong” and “general” are used to mean the same thing. However, as AI progresses and increasingly capable computers leave the realm of science fiction and enter reality, we are discovering that there is a spectrum of strength even within AI architectures.

For example: we used to claim that only human-level general intelligence would be capable of defeating humans at chess. Yet Deep Blue accomplished the task over twenty years ago, and no one seriously claims that we are being ruled over by superintelligent machine overlords. People said only strong AI could beat humans at Go, as well as at interpersonal game shows like Jeopardy!. Yet “weak” narrow AIs were able to trounce humans in all these tasks, and general AI is still nowhere in sight.

My belief is that nearly any task we can conceive can be accomplished by a sufficiently strong narrow intelligence, but since we conflate strong AI with general AI, we consistently blind ourselves to this truth. That’s why I’ve decided to decouple strength from architecture.

Weak Narrow Artificial Intelligence: Weak narrow AI (WNAI) describes software that is subhuman or approaching par-human in strength at one narrow task. Most smart speakers and digital assistants like Amazon Echo and Siri occupy this stratum, as they do not possess any area of ‘smarts’ equal to that of humans, though their speech recognition abilities do lead us to psychologically imbue them with more intelligence than they actually possess. These are merely the most visible WNAI; most AI in the world is in this category by nature, and this will always be the case, as there is only so much intelligence you need to accomplish certain tasks. As I mentioned in the original post, you don’t need artificial superintelligence to run a task manager or an industrial robot. Doing so would be like trying to light a campfire with the Tsar Bomba. Interestingly, this is a lesson a lot of sci-fi overlooks due to the belief that all software in the distant future will become superintelligent, no matter how inefficient that may be.

Strong Narrow Artificial Intelligence: Strong narrow AI (SNAI) describes software that is par-human or superhuman in strength at one narrow task. In my original post, I made the grievously idiotic mistake of conflating ‘public’ AI with SNAI, despite the fact that SNAI has essentially been around since the early 1950s; even a program that can defeat humans more than 50% of the time at tic-tac-toe can be considered a “strong narrow AI”. This is one reason why the term likely never went anywhere, as our popular idea of any strong AI requires worlds more intelligence than a tic-tac-toe lord. But strength is relative when it comes to narrow AI. What’s strong for plastic may be incredibly weak for steel. What’s usefully strong for glass is likely far too brittle for brick. The same is true of narrow AI. Right now, SNAI is more popularly represented by game-mastering software such as AlphaGo and IBM Watson, because such systems require some level of proto-cognition and a somewhat recognizable intellectual capability that is utterly alien compared to the likes of Bertie the Brain.
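For concreteness, here is a hedged sketch of that 1950s-grade strength, written from scratch for this post: exhaustive minimax over tic-tac-toe. It plays its one game optimally (it will never lose) and can do absolutely nothing else.

```python
# All eight winning lines on a 3x3 board indexed 0-8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the side to move; +1 means X wins
    with perfect play, -1 means O wins, 0 means a draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                         # board full: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                      # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                         # ...then undo it
        if best_score is None or \
           (score > best_score if player == "X" else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax(list(" " * 9), "X")
print(f"game value {score} (a draw), best opening move: cell {move}")
```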

Weak Expert Artificial Intelligence: Weak expert AI (WXAI) describes software that is subhuman or approaching par-human in strength across a field of tasks. Because expert AI is still a novel development as of the time of writing, we don’t have many examples, and ironically one of the few examples we do have is actually strong expert AI. However, I can imagine WXAI as being similar to what Google DeepMind and OpenAI are currently working on with their Atari-playing programs. DeepMind in particular uses one generalized algorithm to play a wide variety of games, as aforementioned. And while many of those games have been mastered at par-human and superhuman levels, so far we have not received any word that this algorithm has achieved par-human performance across all Atari games, which would make it closer to approaching par-human strength. This becomes even more noticeable when you consider that this network’s play experience likely has not been transferred to games from more advanced consoles such as the NES and SNES.

Strong Expert Artificial Intelligence: Strong expert AI (SXAI) describes software that is par-human or superhuman in strength across a field of tasks. Currently, the best (and probably only) known example is DeepMind’s AlphaZero network. To a layman, an SXAI will likely seem indistinguishable from an AGI, though there will still be obvious parameters it cannot act beyond. This is also likely going to be a very peculiar and frightening era for AI research, one in which AIs will begin to seem too competent to control despite their actual limitations. One major consideration is that since SXAI has capabilities spanning more than one narrow task, it can’t be considered “strong” if it’s only competent at a single one. I would reckon that if it’s par-human in at least 30% of its field’s capabilities, it qualifies as SXAI.
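That 30% figure is my own reckoning rather than any standard threshold, but the rule itself is easy to pin down in code. The sketch below (invented names and scores throughout) rates an expert AI against per-task human baselines and labels it weak or strong accordingly.

```python
def classify_expert_ai(scores, threshold=0.30):
    """`scores` maps each task in the AI's field to performance
    relative to a human baseline (1.0 = par-human). Labels the
    system SXAI once enough tasks reach par-human or better."""
    par_or_better = sum(1 for s in scores.values() if s >= 1.0)
    fraction = par_or_better / len(scores)
    return "SXAI" if fraction >= threshold else "WXAI"

# Hypothetical relative scores for an Atari-style game-playing agent.
atari_agent = {"Breakout": 13.5, "Pong": 1.2, "Montezuma": 0.02,
               "Seaquest": 0.4, "Boxing": 2.1, "Pitfall": 0.01}
print(classify_expert_ai(atari_agent))  # SXAI: 3 of 6 tasks par-human+
```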

Weak General Artificial Intelligence: Weak general AI (WGAI) describes software that is subhuman or approaching par-human in strength in general, perhaps with stronger ability in a particular narrow field but otherwise not as strong as the human brain. Oddly enough, I’ve very rarely heard the possibility of WGAI discussed. If anything, it’s usually believed that the moment we create a general AI, it will rapidly evolve into a superintelligence. However, WGAI is very likely going to be a much longer-lived phenomenon than currently believed due to computational limits. WGAI is not nearly as magical as SGAI or ASI; should the OpenWorm project bear fruit, the result would be a general AI. The only difference is that it would prove to be an extraordinarily weak general AI, which gives this term a purpose. Most robots used for automation will likely lie in this category, if not SXAI, since most tasks merely require environmental understanding and some level of creative reactivity rather than higher-order sapience.

Strong General Artificial Intelligence: Strong general AI (SGAI) describes software that is par-human or superhuman in strength across all general tasks. This is sci-fi AI, agents of such incredible intellectual power that they rival our own minds. When people ask of when “true AI” will be created, they typically mean this.

Artificial Superintelligence: Artificial superintelligence (ASI) describes a certain kind of strong general artificial intelligence, one so far beyond the capabilities of the human brain as to be virtually godlike. The point at which SGAI becomes ASI is a bit fuzzy, as we tend to think of the two in much the same way we think of the difference between stellar-mass and supermassive black holes. My hypothesis is that SGAI can still be considered superhuman without breaking beyond theoretical human capabilities; the point at which SGAI becomes ASI is the exact point at which a computer surpasses all theoretical human capabilities. If you took our intelligence and pushed it as many standard deviations up the curve as genetically possible, you would eventually hit some limit. Biological brains are electrochemical in nature, and the fastest nerve signals travel at only around 120 meters per second (roughly 270 miles per hour). There is, in theory, a maximum human intelligence. ASI is anything beyond that point. All the heavens lie above us.


Ability

Ability in AI is defined by an AI’s cognitive capabilities, ranging from a complete lack of self-awareness all the way to sapience. I did not create this list, but I find it extremely useful for understanding the future development of artificial intelligence.

Reactive: AI that only reacts. It doesn’t remember anything; it only experiences what exists and reacts. Example: Deep Blue.

Limited Memory: This involves AI that can recall information outside of the immediate moment. Right now, this is more or less the domain of chatbots and autonomous vehicles. (The sketch after this list contrasts the first two levels.)

Theory of Mind: This is AI that can understand that there are entities other than itself, entities that can affect its own actions.

Sapience: This is AI that can understand that it is an individual separate from other things, that it has a body, and that if something happens to this body, its own mind may be affected. By extension, it understands that it has its own mind; in other words, it possesses self-awareness. It is capable of reflecting on its sentience and self-awareness and can draw intelligent conclusions using this knowledge. It possesses the agency to ask why it exists, at which point it is essentially conscious.
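To illustrate the gap between the first two levels, here is a toy contrast (a thermostat example invented for this post, with arbitrary numbers): the reactive agent sees only the present reading, while the limited-memory agent keeps a short history and can react to a trend.

```python
class ReactiveThermostat:
    """Reactive: responds only to the present reading and
    remembers nothing, like Deep Blue evaluating one position."""
    def act(self, temperature):
        return "heat on" if temperature < 20.0 else "heat off"

class LimitedMemoryThermostat:
    """Limited memory: keeps a short history, so it can respond
    to the trend rather than just the instant."""
    def __init__(self, window=3):
        self.history = []
        self.window = window

    def act(self, temperature):
        self.history = (self.history + [temperature])[-self.window:]
        falling = (len(self.history) == self.window
                   and self.history[0] > self.history[-1])
        return "heat on" if temperature < 20.0 or falling else "heat off"

reactive, with_memory = ReactiveThermostat(), LimitedMemoryThermostat()
for t in (23.0, 22.0, 21.0):          # still warm, but cooling fast
    print(t, "| reactive:", reactive.act(t),
          "| memory:", with_memory.act(t))
# The reactive agent stays off; the memory agent pre-heats on the trend.
```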

 

 
