Grades of Automation

  • Grade-I is tool usage in general, from hunter-gatherer/scavenger tech all the way up to the pre-industrial age. There are few to no complex moving parts.
  • Grade-II is the usage of physical automation, such as looms, spinning jennies, and tractors. This is what the Luddites feared. There are many complex moving parts, many of which require specialized craftsmen to engineer.
  • Grade-III is the usage of digital automation, such as personal computers, calculators, robots, and basically anything we in the modern age take for granted. This age will last a bit longer into the future, though the latter ends of it have spooked quite a few people. Tools have become so complex that it’s impossible for any one person to create all necessary parts for a machine that resides in this tier.
  • Grade-IV is the usage of mental automation, and this is where things truly change. This is where we finally see artificial general intelligence, meaning that one of our tools has become capable of creating new tools on its own. AI also becomes capable of learning new tasks far more quickly than humans can, and it can instantly share its newfound knowledge with any number of other AI-capable machines connected to its network. Tools, thus, have become so complex that only the tools themselves can create newer and better tools.

Grades I and IV are only tenuously “automation”: the former implies that the only way to not live in an automated society is to use your hands and nothing else; the latter implies that intelligence itself is a form of automation. However, for the sake of argument, let’s stick with it.

Note: this isn’t necessarily a “timeline of technological development.” We still actively use technologies from Grades I and II in our daily lives.

Grade-I automation began the day the first animal picked up a stone and used it to crush a nut. By this definition, there are many creatures on Earth that have managed to achieve Grade-I automation. Grade-I lacks complex machinery. There are virtually no moving parts, and any individual person could create the whole range of tools that can be found in this tier. Tools are easy to make and easy to repair, allowing for self-sufficiency. Grade-I automation is best represented by hammers and wheels.

A purely Grade-I society would be agricultural, with the vast majority of the population ranging from subsistence farmers to hunter-gatherer-scavengers. The lack of machinery means there is no need for specialization; societal complexity instead derives from other roles.

Grade-II automation introduces complex bits and moving parts, things that would take considerably more skill and brainpower to create. As far as we know, only humans have reached this tier— and only one species of humans at that (i.e. Homo sapiens sapiens). Grade-II is best represented by cogwheels and steam engines, as it’s the tier of mechanisms. One bit enables another, and they work together to form a whole machine. As with Grade-I, there’s a wide range of Grade-II technologies, with the most complex ends of Grade-II becoming electrically powered.

A society that has reached and mastered Grade-II automation would resemble our world as it was in the 19th century. Specialization rapidly expands: though a polymath may be able to design, construct, and maintain Grade-II technologies on their own, the vast majority of tools require multiple hands throughout their lifespan. One man may design a tool; another will be tasked with building and repairing it. Generally, however, one person can grasp all facets of such tools. Using Grade-II automation, a single person can do much more work than they could with Grade-I technologies. In summary, Grade-II automation is the mark of an industrial revolution. Machines are complex, but they can only be run by humans.

Grade-III automation introduces electronic technology, including programmable digital computers. It is at this point that tool-making outgrows the capacity of individuals and requires collectives to pool their talents. However, this pays off through vastly enhanced productivity and efficiency: computers dedicate all of their resources to crunching numbers, greatly increasing the amount of work a single person can accomplish. It is at this point that a true global economy becomes possible and even necessary, as total self-sufficiency becomes nearly impossible. While automation displaces many workers as computational machines take over brute-force jobs that once belonged to humans, the specialization it brings is monumental, creating billions of new jobs compared to previous grades. Quality of life takes enormous strides upward.

A society that has reached and mastered Grade-III automation would resemble the world of many near-future science fiction stories. Robotics and artificial intelligence have greatly progressed, but not to the point of a Singularitarian society. Instead, a Grade-III dominant society will be post-industrial. Even the study of such a society will be multilayered and involve specialized fields of knowledge. Different grades can overlap, and this continues to be true with Grade-III automation. Computers have begun replacing many of the cognitive tasks that were once the sole domain of humans. However, computers and robots remain tools for completing tasks whose responsibility still falls on humans. Computers do not create new tools to complete new tasks, nor are they generally intelligent enough to complete tasks they were not designed to perform. The symbols of Grade-III are the personal computer and the industrial robot.

Grade-IV automation is a fundamental sea change in the nature of technology. Indeed, it’s a sea change in the nature of life itself, for it’s the point at which computers themselves enter the fray of creating technology. This is only possible by creating an artificial brain, one that can automate even higher-order skills. Here, creating tools of this tier is beyond the capability of any human, whether individuals or collectives, just as creating a computer is beyond the capability of any chimpanzee. Instead, artificial intelligences are responsible for sustaining the global economy and creating newer, improved versions of themselves. Because AI matches and exceeds the cognitive capabilities of humans, there is a civilization-wide upheaval in which the jobs remaining from the era of late Grade-III domination are taken by agents of Grade-IV automation, leaving humans almost completely jobless. This is because our tools are no longer limited to singular tasks but can take on a wide array of problems, even problems they were not built to handle. If the tools find a problem that is beyond their limits, they simply improve themselves to overcome their limitations.

It is possible, even probable, that humans alone cannot reach this point— ironically, we may need computers to make the leap to Grade-IV automation.

A society that has reached Grade-IV automation will likely resemble slave societies the closest, with an owner class composed of humans and the highest-order AIs profiting from the labor of trillions, perhaps quadrillions, of ever-laboring technotarians. The sapient will trade among themselves whatever proves scarce, and the highest functions of society will be understood only by those with superhuman intelligence. Societal complexity reaches its maximal state, the point of maximum alienation. However, specialization rapidly contracts as the intellectual capabilities of individuals, particularly individual AIs and posthumans, expand to the point that they understand every facet of modern society. Unaugmented humans will have virtually no place in a Grade-IV dominant society besides being masters over anadigital slaves and subservient to hyperintelligent techno-ultraterrestrials. What few jobs remain for them will, ironically, hearken back to the days of Grade I and II automation, where the comparative advantage persists only due to artificial limitations (i.e. “human-only labor”).

Grade-IV automation is alien to us because we’ve never dealt with anything like it. The closest analog is biological sapience, something we have only barely begun to understand. In a future post, however, I’ll take a crack at predicting a day in the life of a person in a Grade-IV society. Not just a person, but also society at large.

Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than other narrow AI. Here, I redefine the terms, decoupling “weak” from “narrow” and “strong” from “general.”

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI, and strong AI is the same thing as general AI. But I’ve mentioned elsewhere on this wide, wild Internet that there must surely be such a thing as ‘less-narrow AI’: AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL 9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. Ever notice how, whenever there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword]”? The problem being, it is AI. It’s just not AGI.
I didn’t mention analog mechanics in passing: this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar-store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you’d instead use it to factor binomials. And I don’t need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply “very” weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom tier of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.
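
To make the “programmed, not learned” point concrete, here’s a minimal sketch of WNAI-grade logic in Python. It’s purely illustrative (the operation table is my own toy example, not any real calculator’s firmware): every behavior is enumerated by a programmer in advance, and running it a million times leaves it no better at its job.

```python
# Weak narrow AI, sketched: every capability is hand-enumerated by a
# programmer in advance. Nothing here learns; nothing improves with use.

OPERATIONS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def calculate(a: float, op: str, b: float) -> float:
    """Does exactly what the programmer defined, and nothing else."""
    if op not in OPERATIONS:
        # A WNAI can't invent a new behavior; it can only fail.
        raise ValueError(f"unsupported operation: {op}")
    return OPERATIONS[op](a, b)

print(calculate(3, "+", 4))  # 7, today and forever
```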

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning is happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart, for example, but it’s also easy to find its limits because it’s an AI meant to be a personal virtual assistant, not your digital waifu à la Her. Siri can recognize speech, but it can’t deeply understand it, and it lacks the life experiences to make meaningful conversation anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can’t write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it’s “not an AI,” it’s only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.
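
For contrast, here’s a toy sketch of what puts the “strong” in strong narrow AI: a model that genuinely improves at its single task from data. This is a bare-bones linear fit trained by gradient descent, not the code behind any real assistant; the task and numbers are made up for illustration.

```python
# Strong narrow AI, sketched: the system improves at its single task from
# examples, but the learned parameters mean nothing outside that task.

data = [(x, 2 * x + 1) for x in range(10)]  # the only domain it will ever know

w, b = 0.0, 0.0  # model: y_hat = w * x + b
lr = 0.01        # learning rate

for epoch in range(1000):
    for x, y in data:
        err = (w * x + b) - y  # prediction error on this example
        w -= lr * err * x      # nudge the weight downhill
        b -= lr * err          # nudge the bias downhill

print(round(w, 2), round(b, 2))  # ~2.0, ~1.0: better at its one trick, useless elsewhere
```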

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained human professionals. Basically, it’s as intelligent as a certain creature, maybe a worm or even a mouse, but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind: DeepMind’s core algorithm can learn to do just about anything, but it’s not nearly as intelligent as a human being. In fact, DeepMind wasn’t even in this category until they began using the differentiable neural computer (DNC), because their earlier systems could not retain previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until just a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind has yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems identical to all the others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits, or, should I say, its lack thereof. I ask it to generate a recipe for baking a cake. It learns from the Internet, but it doesn’t actually pull up any particular article; it generates its own recipe entirely, using logic to deduce which steps should be followed and in what order. That’s nice. Now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
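
That cake-to-brownie test can be phrased as a simple sketch. The classes below are hypothetical stand-ins of my own, not DeepMind’s architecture; the point is only whether previously learned skills persist and transfer.

```python
# The cake/brownie test, sketched: does learning carry over to related tasks?

class NarrowLearner:
    """Strong narrow AI: every new task starts from a blank slate."""
    def learn(self, task: str) -> str:
        return f"{task}: learned from scratch"

class GeneralLearner:
    """Weak general AI: skills persist and get reused on related tasks."""
    def __init__(self):
        self.skills = {}  # memory that survives across tasks

    def learn(self, task: str, related_to=None) -> str:
        if related_to in self.skills:
            result = f"{task}: adapted from '{related_to}'"  # transfer, not relearning
        else:
            result = f"{task}: learned from scratch"
        self.skills[task] = result
        return result

snai = NarrowLearner()
print(snai.learn("bake brownies"))                            # always from scratch

wgai = GeneralLearner()
print(wgai.learn("bake a cake"))                              # from scratch (first task)
print(wgai.learn("bake brownies", related_to="bake a cake"))  # adapted, not relearned
```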
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. Even that limit should be impressive, because it signals that we’re right on the cusp of strong general AI and that the only things we lack are the proper computing power and training.

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor in the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite the seemingly simple differences between the two. WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code. SGAI is us building a whole digital brain. Then there’s the problem of embodied cognition: without a body, any AI would be detached from nearly all the experiences that we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class, only to find out he’s made five unique dishes in his life and never left his house. For AI to truly make the leap from WGAI to SGAI, it would need some way to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body; it could replicate all those life experiences in a week, if need be, given enough bodies. But having sensory experiences helps deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember here, however, is that it’s actually quite easy to create ASI if you can already create SGAI. Why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it comes down to the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
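
To put rough numbers on that advantage (every figure below is an arbitrary assumption, just to show the shape of the argument):

```python
# Back-of-the-envelope: why a "human-level" digital mind is already
# superhuman. All numbers are made up; only the multiplication matters.

human_bodies = 1
human_speed = 1.0    # baseline cognitive throughput

agi_copies = 1_000   # assumption: copying a digital mind is cheap
agi_speed = 10.0     # assumption: silicon thinks faster serially

advantage = (agi_copies * agi_speed) / (human_bodies * human_speed)
print(advantage)  # 10000.0: the same "level" of mind, vastly more capability
```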
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


Recap:

“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn’t mean it’s now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can, including creatively coming up with solutions to problems as well as or better than any human. It has sapience. SGAI may be very humanlike, but it’s ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It’s fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.