GPT-3 as Proto-AGI (or AXI)

I recently came across this brief LessWrong discussion:

What should we expect from GPT-3?

When it will appear? (My guess is 2020).

Will it be created by OpenAI and will it be advertised? (My guess is that it will not be publicly known until 2021, but other companies may create open versions before it.)

How much data will be used for its training and what type of data? (My guess is 400 GB of text plus illustrating pictures, but not audio and video.)

What it will be able to do? (My guess: translation, picture generation based on text, text generation based on pictures – with 70 per cent of human performance.)

How many parameters will be in the model? (My guess is 100 billion to trillion.)

How much compute will be used for training? (No idea.)

Ordinarily, I’d have been skeptical. But then this was brought to my attention:

GPT-2 trained on ASCII-art appears to have learned how to draw Pokemon characters— and perhaps it has even acquired some rudimentary visual/spatial understanding

The guy behind this, /u/JonathanFly, actually commented on the /r/MediaSynthesis post:

OMG I forgot I never did do a blog writeup for this. But this person almost did it for me lol.

https://iforcedabot.com/how-to-use-the-most-advanced-language-model-neural-network-in-the-world-to-draw-pokemon/ just links to my tweets. Need more time in my life.

This whole thing started because I wanted to make movies with GPT-2, but I really wanted color and full pictures, so I figured I should start with pictures and see if it did anything at all. I wanted the movie ‘frames’ to have the subtitles in the frame, and I really wanted the same model to draw both the text and the picture so that they could at least in theory be related to each other. I’m still not sure how to go about turning it into a full movie, but it’s on the list of things to try if I get time.

I think for movies, I would need a much smaller and more abstract ASCII representation, which makes it hard to get training material. It would have to be like, a few single ASCII letters moving across the screen. I could convert every frame from a movie like I did the pokemon but it would be absolutely huge — a single Pokemon can use a LOT of tokens, many use up more than the 1024 token limit even (generated over multiple samples, by feeding the output back in as the prompt.)

Finally, I’ve also heard that GPT-2 is easily capable of generating code or anything text-based, really. It’s NLP’s ImageNet moment.

This made me think.

“Could GPT-2 be used to write music?”

If it were trained on enough data, it would gain a rough understanding of how melodies work and could then be used to generate the skeleton for music. It already knows how to generate lyrics and poems, so the “songwriting” aspect is not beyond it. But if I fed enough sheet music into it, then theoretically it ought to create new music as well. It would even be able to generate that music, at least in the form of MIDI files (though generating a waveform is also possible, if far beyond it).
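
To make the idea concrete, here is a minimal sketch of how MIDI could be flattened into plain text that a GPT-2-style model might be fine-tuned on. It assumes the third-party mido library, and the token format is invented purely for illustration; it is not what OpenAI used for MuseNet.

```python
# Sketch: flatten a MIDI file into a space-separated token string that a
# text-only language model could train on. Token names are made up here.
import mido

def midi_to_tokens(path):
    """Convert a MIDI file into a rough text-token sequence."""
    tokens = []
    for msg in mido.MidiFile(path):
        if msg.is_meta:
            continue
        # msg.time is the delay (in seconds) before this message plays.
        if msg.time > 0:
            tokens.append(f"WAIT_{int(msg.time * 100)}")
        if msg.type == "note_on" and msg.velocity > 0:
            tokens.append(f"ON_{msg.note}")
        elif msg.type in ("note_off", "note_on"):  # note_on with velocity 0 acts as note_off
            tokens.append(f"OFF_{msg.note}")
    return " ".join(tokens)

if __name__ == "__main__":
    # "example.mid" is a placeholder; any MIDI file would do.
    print(midi_to_tokens("example.mid")[:200])
```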

Surely, if a person like me could figure this out, someone far more capable must have realized it already?

Lo and behold, those far more capable people at OpenAI preempted me with MuseNet.

MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.

And with this, I realized that GPT-2 is essentially a very, very rudimentary proto-AGI. It’s just a language model, yes, but that brings quite a bit with it. If you understand natural language, you can meaningfully create data, and data and mathematics are just other languages. If GPT-2 can generate binary well enough, it can theoretically generate anything that can be seen on the internet.

Scott Alexander of Slate Star Codex also realized this:

Why do I believe this? Because GPT-2 works more or less the same way the brain does, the brain learns all sorts of things without anybody telling it to, so we shouldn’t be surprised to see GPT-2 has learned all sorts of things without anybody telling it to – and we should expect a version with more brain-level resources to produce more brain-level results. Prediction is the golden key that opens any lock; whatever it can learn from the data being thrown at it, it will learn, limited by its computational resources and its sense-organs and so on but not by any inherent task-specificity.

I don’t want to claim this is anywhere near a true AGI. “This could do cool stuff with infinite training data and limitless computing resources” is true of a lot of things, most of which are useless and irrelevant; scaling that down to realistic levels is most of the problem. A true AGI will have to be much better at learning from limited datasets with limited computational resources. It will have to investigate the physical world with the same skill that GPT investigates text; text is naturally machine-readable, the physical world is naturally obscure. It will have to have a model of what it means to act in the world, to do something besides sitting around predicting all day. And it will have to just be better than GPT, on the level of raw power and computational ability. It will probably need other things besides. Maybe it will take a hundred or a thousand years to manage all this, I don’t know.

But this should be a wake-up call to people who think AGI is impossible, or totally unrelated to current work, or couldn’t happen by accident. In the context of performing their expected tasks, AIs already pick up other abilities that nobody expected them to learn. Sometimes they will pick up abilities they seemingly shouldn’t have been able to learn, like English-to-French translation without any French texts in their training corpus. Sometimes they will use those abilities unexpectedly in the course of doing other things. All that stuff you hear about “AIs can only do one thing” or “AIs only learn what you program them to learn” or “Nobody has any idea what an AGI would even look like” are now obsolete.

But GPT-2 is too weak. Even GPT-2 Large. What we’d need to put this theory to the test is the next generation: GPT-3.

This theoretical GPT-3 is GPT-2 plus much more data. Far more than even GPT-2 Large uses, and for reference, no one has actually publicly used GPT-2 Large. Grover (a 1.5-billion-parameter model built on the same architecture) is specialized for generating fake news articles, not general text generation. GPT-2 Large is already far beyond what we are playing with, and GPT-3 (and further iterations of GPT-X) will have to be much larger still.

Text generation apps like Talk to Transformer are actually not state-of-the-art (SOTA) compared to the full 1.5B parameter network. If you were shocked by public GPT-2 applications, you were effectively shocked by an already outdated system.

And while it’s impressive that GPT-2 is a simple language model fed ridiculous amounts of data, GPT-3 will only impress me if it comes close to matching the MT-DNN or XLNet in terms of commonsense reasoning. Of course, the MT-DNN and XLNet are roughly par-human at the Winograd Schema Challenge, around 20 percentage points ahead of GPT-2. Passing the challenge at that level means a system has human-like reading comprehension, and if coupled with text generation, we’d get a system capable of continuing any story or answering any question about a text passage in depth, as well as achieving near-perfect coherence with what it creates. If GPT-3 is anywhere near that strong, then there’s no doubt that it will be considered a proto-AGI even by the most diehard skeptics.
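
For a sense of how a language model is even scored on a Winograd-style schema, here is a rough sketch of the usual trick: substitute each candidate referent into the sentence and keep whichever version the model finds more probable. It assumes the Hugging Face transformers library and the small public “gpt2” checkpoint, and the trophy/suitcase schema is the classic textbook example rather than an item from any official test set.

```python
# Sketch: score a Winograd-style pair by comparing language-model losses.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_loss(sentence):
    """Average negative log-likelihood the model assigns to a sentence."""
    input_ids = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(input_ids, labels=input_ids)
    return outputs[0].item()  # first element is the LM loss

template = "The trophy doesn't fit into the brown suitcase because the {} is too large."
candidates = ["trophy", "suitcase"]
losses = {c: sentence_loss(template.format(c)) for c in candidates}
print(losses)
print("model prefers:", min(losses, key=losses.get))
```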

Now, when I say that it’s a proto-AGI, I don’t mean that it’s part of a spectrum that will lead to AGI with enough data. I only use “proto-AGI” because the term I coined, “artificial expert intelligence”, never took off, and thus most people have no idea what that is.

But “artificial expert intelligence” or AXI is exactly what GPT-2 is and a theoretical GPT-3 would be.

Artificial Expert Intelligence: Artificial expert intelligence (AXI), sometimes referred to as “less-narrow AI”, refers to software that is capable of accomplishing multiple tasks in a relatively narrow field. This type of AI is new, having become possible only in the past five years due to parallel computing and deep neural networks.

At the time I wrote that, the only AI I could think of that qualified was DeepMind’s AlphaZero, a choice I was never fully comfortable with; but the more I learn about GPT-2, the more it feels like the “real deal.”

An AXI would be a network that works much like GPT-2/GPT-3, using a root capability (like NLP) to do a variety of tasks. GPT-3 may be able to generate images and MIDI files, something it wasn’t explicitly designed to do and which sounds like an expansion beyond merely predicting the next token in a sequence (even though that’s still fundamentally what it does). More importantly, there ought to still be limitations. You couldn’t use GPT-2 for tasks completely unrelated to natural language processing, like predicting protein folding or driving cars, and it will never gain its own agency. In that regard, it’s not AGI and never will be— AGI is something even further beyond it. But it’s virtually alien compared to ANI, which can only do one thing and must be reprogrammed to do anything else. It’s a kind of AI that lies in between the two, a type that doesn’t really have a name because we never thought much about its existence. We assumed that once AI could do more than one specific thing, we’d have AGI.

It’s like the difference between a line (ANI), a square (AXI), and a tesseract (AGI). Or, if AGI is 1,000 and ANI is a 1, AXI would be something closer to a 10 up to even 100.

GPT-2 would be considered a fairly weak AXI under this designation, since nothing it does comes close to human-level competence (not even the full version). GPT-3 might become par-human at a few things, like holding short conversations or generating passages of text. It will be so convincing that it will start freaking people out and make some wonder if OpenAI has actually done it. A /r/SubSimulatorGPT3 would be virtually indistinguishable from an actual subreddit, with very few oddities and glitches. It will be the first time the magic seems to come from the neural network itself rather than from the amazing competence of the programmers behind it. And it may even be the first time that some seriously consider AGI as a possibility for the near future.

Who knows! Maybe if GPT-2 had the entire internet for its parameters, it would be AGI, and the internet itself would effectively have become intelligent. But at the moment, I’ll stick to what we know it can do and its likely abilities in the near future. And there’s nothing suggesting GPT-2 is that generalized.

I suppose one reason why it’s also hard to gauge just how capable GPT-2 Large is comes down to the fact that so few people have access to it. One guy remade it, but he decided not to release it. As far as I can tell, that’s just because he talked with OpenAI and some others and decided to respect their decision, rather than anything more romantic (i.e. “he saw just how powerful GPT-2 really was”). And even if he had released it, it was apparently “significantly worse” than OpenAI’s original network (his 1.5 billion parameter version was apparently weaker than OpenAI’s 117 million parameter version). So for right now, only OpenAI and whomever they shared the original network with know the full scope of GPT-2’s abilities, however far or limited they really are. We can only guess based on GPT-2 Small and GPT-2 Medium, and as aforementioned, they are quite limited compared to the full thing.

Nevertheless, I can at least confidently state that GPT-2 is the most general AI on the planet at the moment (as far as we know). There are very good reasons for people to be afraid of it, though they’re all because of humans rather than the AI itself. And I, for one, am extremely excited to see where this goes while also being amazed that we’ve come this far.


What exactly should GPT-3 be able to do? That, I cannot answer in full because I’m not fully aware of the breadth of GPT-2’s abilities, but the fact that it and MuseNet are fundamentally the same architecture trained on different data sets suggests to me that a theoretical 100B-parameter version ought to be able to do at least the following:

  • Reach roughly 90% accuracy on either the Winograd Schema Challenge or the WNLI
  • Generate upwards of 1,000 to 2,000 words of coherent, logical text based on a short prompt
  • Increase the accuracy of its output by adding linked resources from which it can immediately draw/spin/summarize
  • Generate extended musical pieces
  • Generate low-resolution images, perhaps even short gifs
  • Translate between languages, perhaps even figuring out context better than Google Translate
  • Understand basic arithmetic
  • Generate usable code
  • Caption images based on the data presented
  • Generate waveforms rather than just MIDIs
  • Gain a rudimentary understanding of narrative (i.e. A > B > C)

All this and perhaps even more from a single network. Though it’s probable we’ll get more specialized versions (like MuseNet), the base model will be a real treat.
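
For a concrete reference point on the text-generation items above, this is how the public GPT-2 checkpoint is already prompted and sampled today with the Hugging Face transformers library; a hypothetical GPT-3-class model would presumably be driven the same way, just with far more parameters behind it.

```python
# Sketch: prompt the small public GPT-2 checkpoint and sample a continuation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The old lighthouse keeper had one rule:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; larger models should stay coherent for far longer.
output_ids = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```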

I myself don’t understand the specifics, so I can’t say that GPT-X will be able to use language modeling to learn how to play an Atari video game, but I can predict that it may be able to create an Atari-tier video game some time next decade. Any data-based task can be automated by an agent such as GPT-X, and this includes things like entertainment and news. It’s the purest form of “synthetic media”.

Character Creator: The Game | Possibly coming as soon as next year?

Several years ago, a neat article in Forbes appeared:

The Case For ‘Character Creator: The Game’

I found it only because of my perpetual desire to find a way to design fictional characters and personas for various story ideas. I’ve been searching since 2013 for the perfect tool, but they all have some shortcoming.

The most technically robust character creators were part of dedicated games, but of course you actually needed those games to get the experience. More than that, since they were parts of games, the base character creators very often did not have all possible customization items from the start— you’d have to buy and unlock more items and accessories as you played the game. What’s more, since games are often thematic, you may not be able to create the exact kind of character you want if you have specific details in mind that aren’t available in the creation system. But all in all, retail games have the best graphics and you can usually do a lot more with them.

Free online character creators come in two packages: graphical programs and dress-up games. The former, which includes tools like Mixamo, certainly offer more ways to pose but have very few customization options, since you either need to download extra packs from a store or design the assets yourself.

The latter has always been the easiest. Dress-up games are basically just flash games where you dress up an avatar, typically designed like a doll, superhero, or anime character.

The big problem with dress-up games is that they are very often thematic and the art is dodgy— considering they’re made for flash game sites and offer little to no financial return for their creators, art assets are typically ready-made and low quality. There’s usually only one perspective— full-frontal or, less often, quarter-turn. And save for the best ones, you can’t edit any aspect of your character’s body beyond token masculine or feminine features, since these are indeed dress-up games. In other words, you get what you pay for. You decided to go the free route instead of commissioning an artist, so you can’t complain that your character looks cheap.


With the rise of GANs, this may change in very short order. Indeed, it is entirely possible that we are within a year or two of a true “character creator: the game”. What’s more, its capabilities will be far beyond anything we see today, even in the highest-quality character creation systems.

This is due to three important factors:

  1. Text-to-image synthesis. In this theoretical game, you won’t necessarily need to fiddle with sliders, attachable items, or presets. Instead, you could type a description into a box and near-instantly get your design as an output. Say I want to design an anime-style girl with jet-black hair, blue skin, pink eyes, and wearing such-and-such clothes with a devil tail and steampunk wings. Normally, I’d have to go through a series of different menus starting with the basic body type, then the hair, then the face, and so on and so forth. Here, that simple description alone will generate an image. If it’s not the one I want, I can keep generating them until I find one that’s at least close enough and then go in to edit the finer details if need be.
  2. Variable artstyle or graphics. If I want to create a character in the style of the Simpsons, I either need to commission an artist who draws in that style, find a flash game that allows me to edit a character and hope they have what I want, or learn to draw myself. And what if I want another character in 3D, but with janky, Sega Saturn or PS1-style polygonal graphics? With this theoretical game, this won’t be much of a problem either. As long as you give the GAN something from your preferred style, of course, it could conceivably give you a character that’s minimalist, blocky, cel-shaded, photorealistic, and everything in between. For example, if I want to generate a character that looks as if it were drawn by the mangaka Akira Toriyama, I could. If I wanted a character “drawn” by cartoonist Tex Avery, I could very well get one. If I wanted a photorealistic avatar, I could have that as well. This could be used to create that generated comic I talked about before, and it could also theoretically be used to create character models that modders can insert into old games.
  3. Unlimited flexibility. Because of the aforementioned aspects, there’s no limit to what you can create. You wouldn’t need to worry about whether or not certain assets are in the engine— as long as you can provide the GAN with some representation of that asset, it’ll be able to translate it onto your character. For example: almost all character creators don’t have wild, electrical, Super Saiyan-esque hair. But if you can give the GAN images like this, it will remember that design and even be able to fill in the blanks if it’s not exactly what you want. What if your character is supposed to have neon glow-in-the-dark hair like this? You’re welcome, I’ve just given you all you really need for your character in the future.

The possibilities are endless. And of course, if this GAN can create a character, it can obviously create other things. But this is what I’m focusing on at the moment.

I think we’ll see early variants of it this year, building off the recently released StyleGAN network. We’ll even see some text-to-image synthesis, which is well within the powers of current-day AI.
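
To illustrate the mechanics of conditioning image generation on a text prompt, here is a bare-bones sketch of a text-conditioned generator written in PyTorch. It is a toy conditional-GAN generator, not StyleGAN or any shipped system, and the vocabulary size, dimensions, and fake token ids are invented purely for illustration.

```python
# Sketch: a generator that takes a noise vector plus an embedding of the
# prompt tokens and outputs an image tensor. Toy architecture only.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=64, noise_dim=128, img_size=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, text_dim)  # crude prompt encoder
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size),
            nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, noise, token_ids):
        text = self.embed(token_ids)         # (batch, text_dim)
        z = torch.cat([noise, text], dim=1)  # condition generation on the prompt
        out = self.net(z)
        return out.view(-1, 3, self.img_size, self.img_size)

# Usage: fake token ids standing in for "anime girl, jet-black hair, devil tail".
gen = TextConditionedGenerator()
noise = torch.randn(1, 128)
tokens = torch.tensor([[12, 405, 77, 901]])
image = gen(noise, tokens)
print(image.shape)  # torch.Size([1, 3, 64, 64])
```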

Artificial Intelligence: A Summary of Strength and Architecture

Not all AI is created equal

Types of Artificial Intelligence: Redux

Artificial intelligence has a problem: no one can precisely tell you what it is supposed to be. This makes discussing its future difficult. Current machine learning methods are impressive by the standards of what has come before, and certainly we can give various systems and networks enough power to rival and exceed human capabilities in narrow areas. The contention is whether or not these networks qualify as “artificial intelligence”.

My personal definition of artificial intelligence is a controversial one, as I am prone to lumping even basic software calculations under the umbrella of “AI”. This is because there are essentially two separate kinds of “artificial intelligence”— there is the field of artificial intelligence research, which is a branch of computer science, and there is the popular connotation of artificial intelligence. AI is popularly understood as “computers with brains of varying intelligence”.

Business rule management systems (BRMS) are not commonly considered to be “artificial intelligence” in the popular imagination. Indeed, the very name conjures a sort of gunmetal-boring corporate software model. Yet BRMS software is one of the most widely commercialized forms of AI, dating back to the late ’80s.

If we limit all AI to “computers capable of creative thinking”, even many classic sci-fi depictions of AI would not qualify. Yet if my terrifying and anarchist definition became dominant, then we would have to presume that the Sumerians created the first AIs when they invented abacuses.

This is one reason why the original post on the various types of AI doesn’t work. But there are plenty more.

 

Another bottleneck in understanding the future of AI research is our limited imagining of artificial general intelligence— a feat of engineering considered equal to the creation of practical nuclear fusion, room-temperature superconductors, or metallic hydrogen. As with all of these, the possibilities are much wider than we initially conceived. Yet it’s with AGI that I feel there is a great deal of hype and misunderstanding, which could more easily be turned into practical breakthroughs if there were a shift in how we perceive it.

I, for one, always found it odd that we equate “artificial general intelligence” with “human-level AI” despite the fact that every animal life-form possesses general intelligence— yet no one seriously claims that nematodes and insects are our intellectual rivals.

“Surely,” I said as far back as 2012, “there has to be something that comes before human-level AI but is still well past what we have now.”

A related issue is that we compare and contrast today’s narrow AI software with general AI imagined many decades hence, allowing ourselves nothing to bridge the gap. There is no ‘intermediate’ AI.

All AI is either narrow and weak or general and strong. We have no popular ideas for “narrow and strong” AI, despite the fact that we have developed a multitude of narrow networks that have far surpassed human capabilities. We also have no popular ideas for “general and weak” AI, which is to say an AI that is capable of generalized learning but is not as intelligent as a human. This could be due to a variety of factors, many of them coming down to basic neuroscience— for example, something that learns on a generalized level may still lack agency.

So here is a basic rundown on my revised “map” of AI, which has three degrees: Architecture, Strength, and Ability.


Architecture

Architecture is defined by an AI’s structural learning capacity and is already known by the terms “narrow AI” and “general AI”. Narrow AI describes software that is designed for one task. General AI describes software that can learn to accomplish a wide variety of tasks. Of course, this is usually treated as synonymous with software that can learn to accomplish any task at the same level as a human being, though, as I’ll explain later, that isn’t necessarily always the case.

I wish to add one more category: “expert AI”. Expert artificial intelligence, or artificial expert intelligence (XAI or AXI), describes artificial intelligences that are generalized across a certain area but are not fully generalized, as I’ll explain in greater detail below. You may see it as “less-narrow AI”: computers capable of learning a variety of related narrow tasks. AXI is very likely the next major step in AI research over the next five to ten years.

 

Mechanical Calculations: These are calculators and traditional computer software. They only do calculations: addition, subtraction, multiplication, division, and so on. There is no intelligence involved. Mechanical calculation can be considered the ‘DNA’ of AI, the root from which we are able to construct intelligences, but it is not by itself a form of AI. As aforementioned, this level starts with ancient abacuses.

Artificial Narrow Intelligence: Artificial narrow intelligence (ANI), colloquially referred to as “weak AI”, refers to software that is capable of accomplishing one singular task, whether that be through hard coding or soft learning. This describes almost all AI that currently exists, and is also possibly the most consistently underestimated technology of the past 100 years. Just about any AI you can think of, from Siri down to motion sensors, qualifies as ANI. Once you program an ANI to do a certain task, it is locked into that task. Just as you cannot make a clock play music unless you reformat its gears for that purpose, you must reprogram an ANI if you want it to do something it was not programmed to do. This includes narrow machine learning networks that are limited to cohesive parameters. Machine learning involves using statistical techniques to refine an agent’s performance, and while this can be generalized for much more interesting uses, it is not magical and is natively a narrow field of AI.

Artificial Expert Intelligence: Artificial expert intelligence (AXI), sometimes referred to as “less-narrow AI”, refers to software that is capable of accomplishing multiple tasks in a relatively narrow field. This type of AI is new, having become possible only in the past five years due to parallel computing and deep neural networks. The best example is DeepMind’s AlphaZero, which utilized a general-purpose reinforcement learning algorithm to conquer three separate board games— chess, go, and shogi. Normally, you would require three separate networks, one for each game, but with AXI, you are able to play a wider variety of games with a single network. Thus, it is more generalized than any ANI. However, AlphaZero is not capable of playing any game. It also likely would not function if pressed to do something unrelated to game playing, such as baking a cake or business analysis. This is why it is its own category of artificial intelligence— too general for narrow AI, but too narrow for general AI. It is more akin to an expert in a particular field, knowledgeable across multiple domains without being a polymath. This is the next step of machine learning, the point at which transfer learning and deep reinforcement learning allow computers to understand certain things without being mechanically fed rules, and even to expand their own hyperparameters.

Artificial General Intelligence: Artificial general intelligence (AGI), sometimes referred to as “strong AI”, refers to software capable of accomplishing any task, or at least any task accomplishable by biological intelligence. Currently, there are no AGI networks on Earth and we have no idea when we’ll create the first truly general-purpose artificial intelligence. However, AGI is a much greater qualitative improvement over AXI than AXI is over ANI— whereas AXI is multi-purpose, AGI is omni-purpose. Theoretically, a sufficiently advanced AGI is indistinguishable from a healthy adult human— and even this represents the lower end of the true capabilities of strong artificial intelligence.


Strength

Strength in AI is defined by an AI’s intellectual capacity compared to humans.

Weak Artificial Intelligence properly refers to any AI that is intellectually less capable than humans, though the term is colloquially used to describe all narrow AI.

Strong Artificial Intelligence properly refers to any AI that is intellectually as capable as or more capable than humans, though the term is colloquially used to describe all general AI.

Because of colloquial usage, “weak” and “narrow” are interchangeable terms. Likewise, “strong” and “general” are used to mean the same thing. However, as AI progresses and increasingly capable computers leave the realm of science fiction and enter reality, we are discovering that there is a spectrum of strength even within AI architectures.

For example: we used to claim that only human-level general intelligence would be capable of defeating humans at chess. Yet Deep Blue accomplished the task over twenty years ago, and no one seriously claims that we are being ruled over by superintelligent machine overlords. People said only strong AI could beat humans at Go, as well as at game shows like Jeopardy!. Yet “weak” narrow AIs were able to trounce humans in all these tasks, and general AI is still nowhere in sight.

My belief is that nearly any task we can conceive can be accomplished by a sufficiently strong narrow intelligence, but since we conflate strong AI with general AI, we consistently blind ourselves to this truth. That’s why I’ve decided to decouple strength from architecture.
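
To make the decoupling concrete, here is a small data-structure sketch that treats architecture and strength as independent axes; the example classifications simply restate ones from the sections below, and the code itself is only illustrative.

```python
# Sketch: strength and architecture as independent axes of one taxonomy.
from dataclasses import dataclass
from enum import Enum

class Architecture(Enum):
    NARROW = "narrow"
    EXPERT = "expert"
    GENERAL = "general"

class Strength(Enum):
    WEAK = "weak"      # subhuman or approaching par-human
    STRONG = "strong"  # par-human or superhuman

@dataclass
class AISystem:
    name: str
    architecture: Architecture
    strength: Strength

examples = [
    AISystem("Siri", Architecture.NARROW, Strength.WEAK),        # WNAI
    AISystem("Deep Blue", Architecture.NARROW, Strength.STRONG), # SNAI
    AISystem("AlphaZero", Architecture.EXPERT, Strength.STRONG), # SXAI
]

for s in examples:
    print(f"{s.name}: {s.strength.value} {s.architecture.value} AI")
```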

Weak Narrow Artificial Intelligence: Weak Narrow AI (WNAI) describes software that is subhuman or approaching par-human in strength in one narrow task. Most smart speakers/digital assistants like Amazon Echo and Siri occupy this stratum, as they do not possess any area of ‘smarts’ that is equal to that of humans, though their speech recognition abilities do lead us to psychologically imbue them with more intelligence than they actually possess. These are merely the most visible WNAI— most AI in the world is in this category by nature, and this will always be the truth, as there is only so much intelligence you need to accomplish certain tasks. As I mentioned in the original post, you don’t need artificial superintelligence to run a task manager or an industrial robot. Doing so would be like trying to light a campfire with the Tsar Bomba. Interestingly, this is a lesson a lot of sci-fi overlooks due to the belief that all software in the distant future will become superintelligent, no matter how inefficient it may be.

Strong Narrow Artificial Intelligence: Strong Narrow AI (SNAI) describes software that is par-human or superhuman in strength in one narrow task. In my original post, I made the grievously idiotic mistake of conflating ‘public’ AI with SNAI, despite the fact that SNAI have essentially been around since the early 1950s— even a program that can defeat humans more than 50% of the time at tic-tac-toe can be considered a “strong narrow AI”. This is one reason why the term likely never went anywhere, as our popular idea of any strong AI requires worlds more intelligence than a tic-tac-toe lord. But strength is subjective when it comes to narrow AI. What’s strong for plastic may be incredibly weak for steel. What’s usefully strong for glass is likely far too brittle for brick. This is still true for narrow AI. Right now, SNAIs are more popularly represented by game-mastering software such as AlphaGo and IBM Watson because they require some level of proto-cognition and somewhat recognizable intellectual capability that is utterly alien compared to the likes of Bertie the Brain.

Weak Expert Artificial Intelligence: Weak expert AI (WXAI) describes software that is subhuman or approaching par-human in strength in a field of tasks. Due to expert AI still being a novel development as of the time of writing, we don’t have many examples, and ironically one of the few examples we have is actually strong expert AI. However, I can imagine WXAI as being similar to what Google DeepMind and OpenAI are currently working on with their Atari-playing programs. DeepMind in particular uses one generalized network to play a wide variety of games, as aforementioned. And while many of them have reached par-human and superhuman levels of playing, so far we have not received any word that this algorithm has achieved par-human performance across all Atari games. This would make it closer to approaching par-human strength. This becomes even more noticeable when taking into consideration that this network’s play experience likely has not been transferred to games from more advanced consoles such as the NES and SNES.

Strong Expert Artificial Intelligence: Strong expert AI (SXAI) describes software that is par-human or superhuman in strength in a field of tasks. Currently, the best (and probably only) known example is DeepMind’s AlphaZero network. To a layman, an SXAI will likely seem indistinguishable from an AGI, though there will still be obvious parameters it cannot act beyond. This is also likely going to be a very peculiar and frightening place for AI research, an era where AIs will begin to seem too competent to control despite their actual limitations. One major consideration is that since SXAI will have capabilities beyond one narrow field, it can’t be considered “strong” if it’s only competent in a single field. I would reckon that if it’s par-human in 30% of all capabilities, it qualifies as SXAI.

Weak General Artificial Intelligence: Weak general AI (WGAI) describes software that is subhuman or approaching par-human in strength in general, perhaps with a stronger ability in a particular narrow field but otherwise not as strong as the human brain. Oddly enough, I’ve very rarely heard of the possibility of WGAI. If anything, it’s usually believed that the moment we create a general AI, it will rapidly evolve into a superintelligence. However, WGAI is very likely going to be a much longer-lived phenomenon than currently believed due to computational limits. WGAI is not nearly as magical as SGAI or ASI— should the OpenWorm project bear fruit, the result would be a general AI. The only difference is that it would prove to be an extraordinarily weak general AI, which gives this term a purpose. Most robots used for automation will likely lie in this category, if not SXAI, since most tasks merely require environmental understanding and some level of creative reactivity rather than higher-order sapience.

Strong General Artificial Intelligence: Strong general AI (SGAI) describes software that is par-human or superhuman in strength across all general tasks. This is sci-fi AI, agents of such incredible intellectual power that they rival our own minds. When people ask of when “true AI” will be created, they typically mean this.

Artificial Superintelligence: Artificial superintelligence (ASI) describes a certain kind of strong general artificial intelligence, one that is so far beyond the capabilities of the human brain as to be virtually godlike. The point at which SGAI becomes ASI is a bit fuzzy, as we tend to think of the two in much the same way we think of the difference between stellar-mass and supermassive black holes. My hypothesis is that SGAI can still be considered superhuman and not break beyond theoretical human capabilities— the point at which SGAI becomes ASI is the exact point at which a computer surpasses all theoretical human capabilities. If you took our intelligence and pushed it as many standard deviations up the curve as genetically possible, you would eventually come across some limit. Biological brains are electrochemical in nature, and the fastest brain signals travel at around 120 meters per second (roughly 270 miles per hour). There is, in theory, a maximum human intelligence. ASI is anything beyond that point. All the heavens lie above us.


Ability

Ability in AI is defined by an AI’s cognitive capabilities, ranging from a complete lack of self-awareness all the way to sapience. I did not create this list, but I find it extremely useful for understanding the future development of artificial intelligence.

Reactive: AI that only reacts. It doesn’t remember anything; it only experiences what exists and reacts. Example: Deep Blue.

Limited Memory: This involves AI that can recall information outside of the immediate moment. Right now, this is more or less the domain of chatbots and autonomous vehicles.

Theory of Mind: This is AI that can understand the concept that there are other entities than itself, entities that can affect its own actions.

Sapience: This is AI that can understand the concept that it is an individual separate from other things, that it has a body and that if something happens to this body, its own mind may be affected. By extension, it understands that it has its own mind. In other words, it possesses self-awareness. It is capable of reflecting on its sentience and self-awareness and can draw intelligent conclusions using this knowledge. It possesses the agency to ask why it exists. At that point, it is essentially conscious.

 

 

Artificial Intelligence: The How

Meet the Sensory Orb. It’s a flesh orb that possesses a powerful synthetic brain. There are several questions one must ask about the Sensory Orb.

Why is the Sensory Orb important? Because it’s the key to artificial general intelligence of the human variety.

You see, there are multiple strains of thought as to how to achieve AGI. Most serious computer scientists and neuroscientists know that it’s not something we’re likely to achieve anytime soon, but their reasoning is different from what most people might assume.

We don’t understand how intelligence or consciousness works, first and foremost. However, we can try our best at mimicking what we see. Perhaps one of these methods will work. After all, we don’t need to understand every single facet of something in order to make it work. We also expect the final leap to AGI to be accomplished by AI itself. So why is getting there so hard in the first place?

For one, we are still limited by computing power. It seems ridiculous considering how stupidly powerful computers today really are, but it’s true— while the most powerful supercomputers have exceeded the estimated operations per second performed by the brain, these machines still cost hundreds of millions of dollars. We need to bring that cost down if we want to make AI research practical.

But forget about the cost for a moment. Let’s pretend DeepMind had TaihuLight in its possession and could utilize every last FLOPS for its own purposes. Would we see major breakthroughs in AI? Of course. But would we see human-level AI? Not even close.

“But they’re DeepMind! Their AI has beaten the human champion at Go a decade before the experts said it could be done! How do they still lack AGI?”

For one, that’s not entirely true— experts said a computer could become the world champion at Go by 2016 if there were sufficient funding put into the problem. And sufficient funding did indeed arrive.

But more importantly, while DeepMind’s accomplishments cannot be overstated, they haven’t actually brought us any closer to human-level AGI.

I want you to marvel at the human brain. It’s a fine thing.

Here is a metal table. On top of this metal table are two brains. One is a newborn baby’s brain, and next to it is the brain of Stephen Hawking. Don’t worry, we’ll return the brains to their rightful owners after this blog post. But I want you to think about what these brains are capable of.

The newborn brain is already a powerful computer that’s learning every single second, forming new neural pathways as it experiences life. Mr. Hawking’s brain is a triple-A machine of cosmic proportions, always thinking and never resting.

Except these two facts are dirty lies. The brains before you aren’t doing anything of the sort. The newborn baby’s brain is not forming any new connections. Hawking’s brain isn’t thinking. And why? Because they are disembodied. They are no longer experiencing any senses, and the senses necessary to make thoughts even work are no longer there. They’re both equal in terms of active intelligence— zero.

If you asked the newborn baby’s brain to add two and two, you’d just look like a fool because you’re talking to a tiny little blob of fat. Even if you asked Hawking’s brain the same question, you’d never get an answer. They can’t answer that question— they’re just brains. They don’t have ears to hear you. They don’t have eyes to see you. They don’t have mouths or hands to respond to you. You do not exist to them.

Despite what fiction may proclaim, brains are not actually ‘sentient’ without their bodies. A brain can’t “see” you or “respond” to you if you ask it a question, even if you stick it into a jar full of culture fluids.

If you hook up a screen and a keyboard to that brain, would you then have a proper sensory input in order to get the outputs of the newborn and Mr. Hawking? Of course not— brains did not evolve to be literal computers. You can’t just stick a plug into a brain and expect it to behave just like your desktop. In order to bring these two brains back to life, you’d need to construct whole bodies around their functions. And not just one or two of their functions— all of them.

So the point is: you can’t just take a human brain, set it out on a desk, and treat it like a fully-intelligent person. If you had Descartes’ Evil Demon or the Brain in a Vat, you could develop the brain until it possessed intelligence in a simulated reality, but the brain itself can do nothing for you. It sounds utterly insane to even contemplate.

Yet, for whatever reason, this is how we treat computers. We think that, if we had a computer with deep reinforcement recurrent spiked progressive neural networks and 3D graphene quantum memristors (insert more buzzwords here), we’d have AGI. In fact, you could have the servers running Skynet brought into real life, and you’d still not see AGI if your idea of making it intelligent is simply to feed it internet data.

Without sensory experiences, that computer will never achieve human-level intelligence. That’s not to say that we could achieve human-level AI today if we took ASIMO and decked it out with sensors, but the gist is that it would be foolish to expect a synthetic intelligence to surpass humans while giving the computer the furthest thing from human experience.

And so we return to the Sensory Orb. The Orb itself is not natively intelligent. It’s no more intelligent than your desktop computer (circa 2027). But, unlike your desktop, it is fitted with a whole body of sensory inputs. The more it experiences, the more its body ‘evolves’ sensory outputs.

It is programmed to like being touched and tickled. Thus, if you tickle it, it will grow to like you. If you pinch its skin, it will roll away from you. Of course, it has to learn how to roll away first, but it quickly learns. If you keep pleasing it or abusing it, its visual senses will recognize you and either run to or away from you on sight. It has many preprogrammed instincts, including knowledge of “eating”. It knows how to find its charger, but if you bring it to its charger, it will grow to like you even more.
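
As a rough sketch of the kind of learning at work here: the orb is never told what “rolling away” means. It only receives positive reward for the touch it likes and negative reward for being pinched, and something as simple as tabular Q-learning shapes the rest. The one-dimensional world, reward values, and training loop below are invented purely for illustration.

```python
# Sketch: a toy orb in a 1-D room learns to roll toward reward and away from pain.
import random

STATES = range(5)    # positions 0..4; the "pincher" sits at 0, the "tickler" at 4
ACTIONS = [-1, +1]   # roll left or roll right
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state):
    if state == 4:
        return 1.0   # being tickled feels good
    if state == 0:
        return -1.0  # being pinched hurts
    return 0.0

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for episode in range(500):
    s = 2  # start in the middle of the room
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), 4)
        r = reward(s2)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the learned policy "rolls toward" the tickler from most positions.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES})
```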

And if you teach the orb how to communicate with you through speech, you can teach it various commands. With enough training, the orb will learn to ask about itself. It can learn about other Sensory Orbs, learn about computers and flesh, learn about sensory experiences, and learn that it has its own body that allows it to ‘live’. So one day, you may be surprised if it asks about itself.

Is this human-level intelligence? Not necessarily, but it’s far closer than anything we have today. And we don’t necessarily need a real-life Sensory Orb to achieve this— a good-enough virtual simulation can also suffice. But nevertheless, the point remains: in order to achieve AGI, computers need to experience things.

Yuli’s Law: On Domestic Utility Robots

The advancement of computer technology has allowed many sci-tech miracles to occur in the past 70 years, and yet it still seems as if we’ve hit a plateau. As I’ve explained in the post on Yuli’s Law, this is a fallacy— the illusion of stagnation appears only because computing power is still too weak to accomplish the goals of long-standing challenges. That, or we already accomplished said goals a long time ago.

The perfect example of this can be seen with personal computing devices, including PCs, laptops, smartphones— and calculators.

The necessary computing power to run a decent college-ready calculator has long been achieved, and miniaturization has allowed calculators to be sold for pennies.  There is no major quantum leap between calculators and early computer programs.

Calculating the trajectory of a rocket requires far less computing power than some might think, and this is because of the task required: guiding an object using simple algorithms. A second grader could conceivably create a program that guides a bottle rocket in a particular direction.
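
For a sense of just how simple that kind of math is, here is a sketch of basic projectile motion stepped forward with Euler integration. The launch numbers are arbitrary, and a real guidance program would add thrust and steering, but nothing here needs more than a trivial amount of compute.

```python
# Sketch: step a simple ballistic launch forward in time and report the result.
import math

def trajectory(speed, angle_deg, dt=0.01, g=9.81):
    """Return (range, peak height) for a simple ballistic launch."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = peak = 0.0
    while y >= 0.0:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
        peak = max(peak, y)
    return x, peak

print(trajectory(speed=30.0, angle_deg=45.0))  # roughly 91 m range, 23 m peak
```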

This is still a step up from purely mechanical systems that give the illusion of programming, but there are obvious limits.

I’ll explain these limits by using a particular example, an example that is the focus of this post: a domestic robot.  Particularly, a Roomba.


An analog domestic robot has no digital programming, so it is beholden to its mechanics. If it is designed to move in a particular direction, it will never move in another direction. In essence, it’s exactly like a wind-up toy.

I will wind up this robot and set it off to clean my floors. Thirty seconds later, it makes a left turn. After it makes this left turn, it will move for twenty seconds before making another left turn. And so on and so forth until it returns to its original spot or runs out of energy.

There are many problems with this. For one, if the Roomba runs into an obstacle, it will not move around it. It will make no attempt to avoid it a second time through. It only moves along a preset path, a path you can perfectly predict the moment you set it off. There is a way to get around this— by adding sensors. Little triggers that will force a turn early if it hits an object.
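
In code, those “little triggers” amount to something like the loop below: drive straight until the bump switch fires, then turn. The sensor, drive, and turn functions are stand-ins for real hardware calls, invented here for illustration.

```python
# Sketch: bump-and-turn control, the simplest possible cleaning "strategy".
import random

def bump_sensor_hit():
    """Stand-in for reading the physical bump switch."""
    return random.random() < 0.1

def drive_forward():
    print("driving forward")

def turn_left(degrees):
    print(f"turning left {degrees} degrees")

def clean_floor(steps=20):
    for _ in range(steps):
        if bump_sensor_hit():
            turn_left(90)    # react to the obstacle...
        else:
            drive_forward()  # ...otherwise keep following the preset path

clean_floor()
```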

 

Let’s bring in a digitally programmed Roomba, something akin to a robot you could have gotten in 2005. Despite having a digital computer for a brain, it seems to act no differently from the mechanical Roomba. It still gets around by bumping into things. Even though the mechanical Roomba could have been created by someone in Ancient Greece, yours doesn’t seem any more impressive on a practical level.

Thus, the robot seems more novel than practical. And that’s the perception of Roombas today— cat taxis that clean your floor as a bonus rather than legitimate domestic robots.

Yet this is no longer a fair perception, as the creators of the Roomba, iRobot, have added much-needed intelligence to their machines. This has only been possible thanks to increases in computing power allowing the proper algorithms to run in real time.

For example, a 2017-era Roomba 980 can actually “see” in advance when it’s about to run into something and avoid it. It can also remember where it’s been, recognize certain objects, among other things (though Neato’s been able to do this for a long time). Much more impressive, though still not quite what we’re looking for.

What’s going on? Why are robots so weak in an age of reusable space rockets, terabyte smartphones, and popular drone ownership?

We need that last big push. We need computers to be able to understand 3D space.

Imagine a Roomba 2000 from the year 2025. It’s connected to the Cloud and it utilizes the latest in artificial intelligence in order to do a better job than any of its predecessors. I set it down, and the first thing it begins doing is mapping out my home. It recognizes any obstacle as well as any stain— that means if it detects dog poop, it’ll either avoid it or switch to a different suction to pick it up. Once it has mapped my house, it is able to get a good feel for where things are and should be. Of course, I could also send it a picture of another room, and it will still be able to get a feel for what it will need to do even if it’s never roamed around inside before.
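
At its simplest, “mapping out my home” boils down to an occupancy grid that the robot fills in as it drives, so it can plan routes instead of blindly bumping. The grid size, cell values, and the pretend observations below are invented purely for illustration.

```python
# Sketch: an occupancy grid the robot fills in as it explores a room.
FREE, OBSTACLE, UNKNOWN = 0, 1, -1
WIDTH, HEIGHT = 10, 8
grid = [[UNKNOWN] * WIDTH for _ in range(HEIGHT)]

def record_observation(x, y, blocked):
    """Mark a visited or sensed cell as free space or an obstacle."""
    grid[y][x] = OBSTACLE if blocked else FREE

# Pretend the robot drove along the top row and bumped into a chair at (4, 0).
for x in range(WIDTH):
    record_observation(x, 0, blocked=(x == 4))

for row in grid:
    print("".join("#" if c == OBSTACLE else "." if c == FREE else "?" for c in row))
```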

The same thing applies to other domestic robots such as robotic lawn mowers— you’d rather have a lawn mower that knows when to stop cutting, whether because it’s moving over new terrain or because it’s approaching your child’s Slip n’ Slide. Without the ability to comprehend 3D space or remember where it’s been and where it needs to go, it’ll be stuck operating within a pre-set invisible fence.

On top of all of this, there’s the promise of bipedal and wheeled humanoid robots working in the home. After all, homes are designed around the needs of humans, so it makes sense to design tools modeled after humans. But the same rules apply— no comprehension of 3D space, no dice.

In fact, a universal utility robot like a future model of Atlas or ASIMO will require greater advancements than specialized utility robots like a Roomba or Neato. They must be capable of utilizing tools, including tools they may never have used before. They must be capable of depth perception— a robot that makes the motions of mopping a floor is only useful when you make sure the floor isn’t too close or too far away, but a robot that genuinely knows how to mop is universally useful. They must be capable of understanding natural language so you can give them orders. They must be flexible, in that they can come across new and unknown situations and react to them accordingly. A mechanical robot would come across a small obstacle, fall over, and continue moving its legs. A proper universal utility robot will avoid the obstacle entirely, or at least pick itself up and know to avoid the obstacle and things like it. These are all amazingly difficult problems to overcome at our current technological level.

All these things and more require further improvements in computing power. Improvements we are, indeed, still seeing.

Mother Jones – “Welcome Robot Overlords. Please Don’t Fire Us?”

Evolution of Automation: A Technist Perspective

Futuristic technology has always been defined by being more efficient than previous tools. Where did this evolution begin, and where will it end?

A previous article of mine laid out the basics of my theory on the different grades of automation and technology at large. A topic as complex as this one (no pun intended!) requires much deeper explanation and a more in-depth expression of thought. Thus, I will dedicate this particular post towards expanding upon these concepts.

Technist thought dictates that all human history can be summarized as “humans seeking increased productivity with less energy”. Reduced energy expenditure and increased efficiency drive evolution— the “fittest” Herbert Spencer mentioned in 1864 as being the key to survival is not defined by intelligence or strength, but by efficiency. Evolution as a semi-random phenomenon leads to life-forms that expend the least amount of energy in order to maximize their chances at reproduction in a particular environment. This is usually why species go extinct— their methods of reproduction are not as efficient as they could be, meaning they’re wasting too much energy for too little profit. When a new predator or existential threat arises, what may have been the most efficient model before becomes obsolete. If an animal does not adapt and evolve quickly enough— finding a new way to survive and becoming able to do so efficiently enough so as not to use up all its food too quickly— its genes die off permanently.

The universe itself seeks the lowest-energy state at all possible opportunities, from subatomic particles all the way to the largest structures known to science.

If we were to abandon the chase for greater efficiency, we’d effectively damn ourselves to utter failure. This isn’t because things are inevitable, but because of the nature of this chase. It’s like running across a non-Newtonian liquid— you need to keep running because the quick succession of shocks causes the liquid to act as a solid and, thus, you can keep moving forward. If you at any point slow or stop your progression, the liquid loses its solid characteristics and you sink.

This is how real life works. If you’re scared of sinking, the time to second-guess crossing the pool of non-Newtonian liquid was before you stepped on it. Except with life, we don’t have that option— we have to keep moving forward. If we regressed, the foundations of our society would explode apart. Even if we were to slow ourselves and be more deliberate in our progress, the consequences could be extremely dire. So dire that they threaten to undo what we’ve done. This is one reason why I’ve never given up being a Singularitarian, despite my belief that it will not be an excessively magical turning point in our evolution, and despite the words of those who claim that we should avoid the Singularity— it’s too late for that. If you didn’t want to experience the Singularity, then curse your forefathers for creating digital technology and mechanical tools. Curse your distant siblings for reproducing at such a high rate and necessitating more efficient machines to care for them. Curse evolution itself for being so insidious as to always follow the path of least resistance.

Efficiency. That’s the word of the day. That’s what futuristic sci-tech really entails— greater efficiency. Things are “futuristic” because they’re, in some way, more efficient than what we had in the past. We approach the Singularity because it’s a more efficient paradigm.

For us humans, our evolution towards maximum efficiency began before we were even human. Humanity evolved due to circumstances that led to a species of hominid finding an incredibly efficient way to perpetuate its genes— tool usage. Though we are a force of nature, with only our bare bodies and without our tools we are just another species of ape. Tools allowed us to hunt prey more efficiently. Evidence abounds that australopithecines and Paranthropus were likely scavengers who seldom used what we’d recognize as stone-age tools. They were prey— and in the savannas of southeast Africa, they were forced to evolve bipedalism to more efficiently escape predators and use their primitive tools.

With the arrival of the first humans, Homo habilis and Homo naledi, we made the transition from prey to predator ourselves. Our tools became vastly more complex due to our hands developing finer motor skills (resulting in increased brain size). To the untrained eye today, the difference between Homo habilis tools and Australopithecus afarensis tools is negligible. Where it matters is how they made these tools. So far, there’s little evidence to suggest that australopithecines ever widely made their own tools; they found stubble and rocks that looked useful and used them. Through millions of years of further development (perhaps validating Terence McKenna’s Stoned Ape theory?), we humans managed to actively fashion our own tools. If a particular rock wasn’t useful for us, we would make it useful by turning it into a flint head or a blunt hammer. We altered natural objects to fit our own needs.

This is how we made the transition from animal of prey to master predator and eventually reached the top of the food chain.

However, evolution did not end with the arrival of Homo habilis and early manufacturing. Our tool usage allowed us to do much more with much less energy, and as a result of our improving diets, our bodies kept becoming more efficient. Our brains grew so that we’d be able to develop ever-more advanced tools. The species with the best tools worked the least and thus needed the least amount of food to survive— one well-aimed spear could drop a mammoth. The archaic species who used simpler tools had to do more work, requiring greater amounts of food across smaller populations. Australopithecines couldn’t keep up with their human cousins and went extinct not long after we arrived. Their methods of hunting were primitive even by the standards of the day— as aforementioned, they were a genus of scavengers more than they were hunters. They lacked the brainpower to create exceedingly complex tools, meaning that they were essentially forced to choose between throwing rocks at mammoths or waiting for them to die of other causes— sometimes that cause being humans killing one and losing track of it.

Human species diverged, with some evolving to meet the requirements of their new environments— Neanderthals and Denisovans evolving to sustain themselves in the harsher climates of Eurasia, while the remaining Erectus and Heidelbergensis/proto-Sapiens populations remained in Africa. We all developed sapience, but circumstances doomed all other species besides ourselves, the Sapiens. We still don’t quite understand all the circumstances that led to the demise of our brother and sister humans, but it’s most likely due to increased competition with us as we spread out from Africa. Neanderthals lasted the longest, and all paleoarchaeology points to the idea that they were actually more advanced tool creators than we were at the time. Alas, the environments in which they evolved damned them to more difficult childbirth and, thus, lower birthrates, which proved fatal when they were finally forced to face us. Sapiens evolved in warm, sunny, tropical Africa, which had plentiful food and easy prey. Childbirth became easier for us as our children were born with smaller brains that grew with age. Neanderthals evolved in cold, dark Eurasia, where food was much more difficult to find. This meant that their populations had to be smaller than our own just so they could survive, lest they overpopulate and consume all possible prey too soon and doom themselves to a starved extinction. Of course, this also meant that they had to be more creative than we were, since their prey were often more difficult to kill and harder to come across.

Though we interbred over the years, they finally died out around 30,000 years ago, leaving only us and one mysterious, soon-to-be-extinct species, Homo floresiensis. We had no competition but ourselves, and our brains had reached a critical mass, allowing us to create tools of such high complexity that we were soon able to begin affecting the planet itself through the rise of agriculture.

Again, to us, these tools seem cartoonishly primitive, but if a trained eye compared a Sapiens tool from 10,000 BC to an Australopithecus tool from 2.7 million years ago, they would find the former incomparably more skillfully made.

When the last ice age ended, all possible threats to our development faded, and our abilities as a species skyrocketed.

Yet it still took another 7,000 years for us to begin transitioning to the next grade of automation.

All this time, through all our evolutionary twists and turns, each and every species and genus mentioned above only ever used Grade-I automation.

You only need one person to create a Grade-I tool, though societal memetics and cultural transmission can assist with developing further complexity— that is, learning how to create a tool using methods passed down over generations of previous experimentation.

Let’s use myself as an example. If you threw me out into the African savanna to reconnect me with my proto-human ancestors, you would watch me struggle to survive using tools that are squarely Grade-I in nature. People often joke that, if they were sent back in time, they would become living gods because they would recreate our magic-like modern technology. As I will explain in my discussion of Grade-III automation, that’s bullshit. I could live in the savanna for the rest of my days and never be able to recreate electric lights or my Android phone. I will, however, be capable of creating hunting tools and basic farming equipment. I will be able to create wheels and sustain fire, and I will be able to create shelter.

These things are examples of Grade-I automation. I don’t use my hands to farm maize; I use farm tools. I don’t use my hands to kill animals; I use weapons. If I spend my life practicing, I could create some impressive tools to ease the burden of labor. All the energy needed to create the tools I need to survive comes from food. The most advanced tool I create requires no energy beyond what I expend to make it work. Society, if it exists, needs little more than food and sunlight to fuel itself.

That’s Grade-I automation in a nutshell: I am all I need. Others can assist, but my hands fill my stomach. I create and understand all my tools. I understand that, when I create a scythe, it’s to cut grass. When I create a wheel, it’s to aid in transporting items or myself. When I create clothes, it’s just for me to wear.

At the end of this evolution, Grade-I automation allows one to create an entire agrarian civilization. However, while our tools became vastly more complex, they were still in the same grade as tools used by monkeys, birds, and cephalopods. As our societies became ever more complex, our old tools were no longer efficient enough to support our need for increased productivity. Our populations kept rising, and civilizations became connected by ever more threads of trade. You couldn’t support these societies with hand-pushed plows, spears, and sickles alone. And because of this, society required tools that took more than just one hand and one mind to create.


Grade-II automation finally arrives when we require and create complex machines to keep society running. Here, cultural transmission begins to diffuse: no single mind carries all the know-how. My society began with just myself, but now there are multiple people living in the little city of mud huts we’ve created. Over time, our agrarian collectives begin producing more than enough food for us to subsist upon. The population of my personal civilization creeps upward. We begin considering new ways to produce more food with fewer hands to support this higher population— simply putting seeds in the ground and slaughtering cattle isn’t good enough. Those who generate the biggest surpluses are able to trade their goods for others to use, transactions that result in the creation of money as a medium of exchange to make the whole system more efficient. There’s an incentive to generate even bigger surpluses to sell, and this requires more labor than society can provide, despite our increasing population. We need more labor, but if we increase our population, we’ll need more goods, which means we’ll need more labor. Without Grade-II automation, we’ll become trapped in a cycle of perpetual poverty. But we will always seek out increased efficiency and productivity, because we naturally seek to expend as little energy as possible. If we were to keep to our traditional ways, we’d be acting irrationally and endangering our own survival as a species.

In order to create labor-saving devices for workers to use, we needed specialized labor. Not everyone could create these tools— the agrarian society would collapse without peasants and farmers— and even if they could, there’s a new problem: these new tools require several hands to create. Certain materials are better to use than others. Iron is superior to wood; bronze is more useful for various items than stone. However, if I were tasked with creating these new, futuristic tools, I’d be stumped. I was raised to be a farmer. If I were trained to create a mechanical plow, I’d still be stumped— how on Earth do I create steel, exactly? Where does one get steel? How does a clockwork analog computer work? How did the Greeks create the Antikythera mechanism? I don’t know! How does one create a steam engine? I don’t know! I could learn, but I couldn’t be responsible for all of it myself. I need help. I could create the skeleton of a farming mechanism, but I need someone else to machine the steel teeth of this beautiful plow. I need someone to refine the iron needed to create steel. I need someone to mine the iron.

In a society that’s beginning to create early Grade-II technologies, the need for specialization fast becomes a major problem that needs rectifying. The way to rectify it is with mercantilism and globalism. Naturally, the “global” economy of my society isn’t very global in practice. There are multiple countries that bring me what I need, but usually what I need can be created in my own nation by native hands. I just need to train those native hands and let some of them practice these new trades to figure out how to better create the tools and gadgets they need to use and sell.

This paleoanthropological discussion became unexpectedly socioeconomic in nature, but that’s the nature of our evolution. The evolution of automation and tool usage is directly related to the evolution of humanity just as it is directly related to the evolution of social orders and economic systems.

In my basic article introducing the graded concept, I mentioned what a properly advanced Grade-II society would look like: something akin to the 1800s, right up to and including the point when our tools become electrically powered.

Grade-II tools are too complex for any one person to create and fully understand, but if you had a small team’s worth of people, it becomes more than possible. Thus, you’re able to employ more people while also producing a surplus of goods. It takes only one hand to craft a hoe (don’t start), but it takes many hands in many places to construct a tractor. Productivity skyrockets, and one becomes capable of supporting exponentially larger populations as our systems of agriculture, industry, and economic activity become more efficient. I have more surpluses I use to employ others, and I can give surpluses back to those I employ, allowing more surpluses to be made all around.

Millions of jobs are made as machines require specialized labor to oversee different parts of their usage— refinement of basic materials, construction of the tool itself, maintenance of the tool, discarding broken parts, etc.

But there is one basic factor to remember in all this— every machine requires a human brain to work, even if machine brawn can do the work of 50 men. Even if I have a proper and practical Rube Goldberg machine as a tool, it still requires me to run it.

In the 1700s and 1800s, machines underwent an explosion of complexity thanks to radically new manufacturing methods and, eventually, the usage of electricity. Ever since the early days of civilization, we had learned to harness mechanical energy for our machines— energy greater than what a single person could put out. By the time the Industrial Revolution exploded onto the scene, we had begun using electrical generation to do what even simple mechanical power could never achieve. Electricity allowed us to move past the limits of purely mechanical power and push industrial production far beyond the break-even levels of earlier eras.

It used to be that 50 people produced enough goods for 50-55 people to consume— essentially making everything subsistence-based. Over time, this slowly increased as more efficient production methods came about, but there was never any quantum leap in productivity. But with the Industrial Revolution, all of a sudden 50 people could create enough goods to meet the needs of 500 or more.

More than that, we began creating tools that were so easy to use that unskilled laborers could outproduce the most skilled craftsmen of generations prior. This is what gave rise to the Luddites— contrary to popular belief, the Luddites feared the weakening of organized skilled labor and the depression of wages; it just happened that machines were the reason skilled laborers faced such an existential threat. After all, while specialization was needed to create these new tools, one didn’t actually need to be a genius to operate them. Thus, the Luddites saw only one solution to the problem: destroy the machines. No machines, no surplus of unskilled labor, no low wages.

The Luddites’ train of thought was on the right path, but they completely overlooked the possibility that the increased number of low-skill, low-wage laborers would lead to a higher demand for high-skill, high-wage laborers to maintain these machines and create new ones. Overall, productivity would continue increasing all around, and even more people would become employed.

The Luddites’ unfounded fears have historically been codified as what economists have come to refer to as the Luddite Fallacy— the belief that new technology will lead to mass unemployment. Throughout history, the exact opposite has always proven true, and yet we keep falling for it.

Surely it will always prove true, right? Well, times did begin to change as society’s increasing complexity required even more specialized tools, but in the end the feared mass unemployment of all humans has not occurred, not even at the moment some first expected it. That moment was the arrival of Grade-III automation.


Grade-III automation is not defined by being physical, as Grade-II was. In fact, it is with this grade that cognitive processes began being automated. This was a sea change in the nature of tool usage, as for the first time, we began creating tools that could, in some arcane way, “think.”

Not that “think” is the best word to use. A better word might be “compute”. And that’s the symbol of Grade-III automation— computers. Machines that compute, crunching huge numbers to accomplish cognitive tasks. Just by running some electricity through these machines, we are able to carry out calculations that would stump even the most well-trained humans.

Computers aren’t necessarily a modern innovation— abacuses have existed since Antiquity, and analog computing was known even to the Greeks, as aforementioned. Looms used guiding patterns to automate weaving. But despite this, none of these machines could run programs— humans were still required to actively exert energy to use them. Later electrical analog computers were somewhat capable of general computation, but for the most part they were nowhere near as capable as their digital counterparts because they could not be freely reprogrammed. (The first design for a Turing-complete machine, Babbage’s Analytical Engine, was conceived in the 1830s but never built.)

Digital computers lacked the drawbacks of analog computers and were so incredibly versatile that even the first creators could not fathom all their uses.

With the rise of computers, we could program algorithms to run automatically, without supervision. This meant that there were tools we could allow computers to control, tools that were previously only capable of being run by humans. For most tools, we didn’t digitally automate every process. Cars are an example: while the earliest cars were purely mechanical in nature and required the utmost attention for every action, more recent automobiles possess features that allow them to stay in their lanes, hold a set speed (cruise control), drive themselves in certain situations (autopilot-style systems), and even operate with full autonomy (though this remains experimental). Nevertheless, all commercial cars still require human drivers. And even when we do create fully autonomous commercial vehicles, their production won’t be fully automated. Nor will their maintenance.
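To make the partial-automation point concrete, here’s a toy sketch, in Python, of the kind of feedback loop a basic cruise control runs. Everything in it (the bare-bones vehicle model, the gain, the numbers) is invented for illustration rather than taken from any real car’s systems; the point is simply that the machine automates one narrow task, holding a set speed, while the human still supplies the goal and everything else about driving.

    # A toy sketch of the feedback loop behind cruise control. Everything here is
    # illustrative: the "vehicle" is a bare integrator and the gain is made up.

    KP = 0.1          # proportional gain on the speed error (assumed tuning value)
    TARGET = 65.0     # mph; the human driver still chooses the set-point
    speed = 40.0      # current speed of the simulated car

    for tick in range(100):
        error = TARGET - speed
        # Command an acceleration proportional to the error, capped at 1 mph per tick.
        accel = max(-1.0, min(1.0, KP * error))
        speed += accel

    print(f"speed after 100 ticks: {speed:.1f} mph")  # settles at the 65 mph set-point

Steering, braking for obstacles, and judgment all stay with the driver; the loop above only ever touches one variable.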

And here’s where specialization simultaneously becomes more and less important than ever before.

Grade-III automation requires more than just a small group of people to create. Even advanced engineers and veritable geniuses cannot fully understand every facet of a single computer. The low-skilled workers fabricating computer chips in Thailand can’t begin to understand how the parts of the chips they’re creating work together to form personal computers. All the many parts of a computer come together to form the apex of technological complexity.

In my personal civilization, I can’t create a microprocessor in my bedroom. I don’t have the technology, and I don’t know how to create that technology. I need others to do that for me; no single person that I employ will know how to create all parts of a computer either. Those who design transistors don’t know how to refine petroleum to create the computer tower, and the programmer who designs the many programs the computer runs won’t know how to create the coolant to keep the computer running smoothly. Not to mention, the programmer is not the only programmer— there are dozens of programmers working together just to get singular programs to run, let alone the whole operating system.

Here is where globalism becomes necessary for society to function. Before, you needed more than a single person to create highly complex tools and machines, but to create Grade-III automation, it truly takes a planetary effort just to get an iPad on your lap. You need more than just engineers— you need the various scientists to actually come up with the concepts necessary to understand how to create all these many technologies.

Once it all comes together, however, the payoff is extraordinary, even by the standards of previous eras. Single people are able to produce enough to satisfy the needs of thousands, and businesses can attain greater wealth than whole nations. The amount of labor needed to create these tools is immense, but these machines also begin taking on larger and larger shares of that labor. And because of the sheer amount of surplus created, billions of jobs are created, with billions more possible. We can afford to employ all these people because we’ve created that much wealth.

I don’t need to understand the product I sell, nor do I need to create it; I just need to organize a collective of people to see to its production and sales. We call these collectives “businesses”— corporations, enterprises, cooperatives, what have you.

Society becomes incredibly complicated, so complicated that whole fields of study are created just to understand a single facet of our civilization. Naturally, this leads to alienation. People feel as if they are just a cog in the machine, working for the Man and getting nothing out of it. And true, many business owners and government types are far, far less than altruistic, often funding conflicts and strife in order to profit from the natural resources needed to create tools to sell more goods and services. Exploitation is not just a Marxist conspiracy; it’s definitely real. Whether it’s avoidable is another debate entirely— socialist experiments and regimes across the world have been tried, and they’ve only exacerbated the same abuses they claimed to be fighting. Merely changing who owns the means of production (who owns the machines) doesn’t change the fact that the complex nature of society will always lead back to extreme alienation.

I buy potato chips for a salty snack. I had absolutely nothing to do with the creation of these chips. Even if I were part of a worker-owned and managed commune that specialized in the production of salty snacks, I didn’t grow the potatoes, nor did I make the corn flour, the plastic bags, or the flavoring. And I especially had nothing to do with the computerized assembly line.

I own the means of production collectively alongside my fellow workers and the members of my community (essentially meaning everyone and no one actually owns the machines), but I still feel alienated. The only way to end alienation would be to create absolutely every tool I use, grow everything I need to eat, and create my own dwelling. If I didn’t want to feel any alienation whatsoever, that means I cannot use anything that I (or my community) did not create. The assembly line uses steel that was created thousands of miles away, meaning I cannot use it. The hammer I use to fix the machine is made out of so many different materials— metals, composites, etc.— that I don’t even want to begin to try to understand all the labor that went into creating it, just that it was probably made in China. The chips? I might purchase one batch of spuds, but after that, I want nothing to do with other communities whose goods and services were not the result of my own labor— otherwise I’d just feel alienated from life. Salt cannot be used if we cannot find it; same deal with the flavoring. And if I can make bags from animal skin or plants, only then will I have a bag to hold these chips.

This is an artificial return to using Grade-I and maybe a few Grade-II tools. Grade-III is simply too global. Of course, while this is a utopian ideal that’s popular with eco-socialists and fundamentalists, the big issue (which I discussed earlier) is that we no longer exclusively use Grade-I and II tools for a specific reason— our population is too large and our old methods of production were too inefficient. The only way to successfully manage a return to an eco-socialist utopia would be to decrease the human population by upwards of 75-80%. Otherwise, if you think our current society is wasteful and damaging to Earth, prepare to be utterly horrified by how casually 7.5 billion subsistence farmers would ravage the planet. And if we increased efficiency enough to support the population we’ve already forced upon ourselves, we’d have to scrap any plan to end alienation and return to creating at least the more complex parts of Grade-II automation.

If you’re willing to accept alienation, then we will continue onwards from where we are now.

We will continue seeking efficiency. We will continue seeking more productivity from less labor. As Grade-III technologies become more efficient, workers need less and less skill to utilize the machines, which further opens up an immeasurable number of jobs to be filled.

I feel I should pause here to finally address energy production and consumption. This is what drives the ever-increasing complexity of our society, as without greater amounts of energy at its disposal, even a society of supergeniuses could not kickstart an industrial revolution.

Our tools require ever more power, and the creation of the means of generating this power in turn results in us requiring more power.
Once upon a time, all of human society generated little more than a few megawatts globally. As aforementioned, Grade-I relied purely on human and animal muscle, with virtually nothing else beyond fire and the direct effects of solar power.

From EnergyBC: A Brief History of Energy Use

For all but a tiny sliver of mankind’s 50,000 year history, the use of energy has been severely limited. For most of it the only source of energy humans could draw upon was the most basic: human muscle. The discovery of fire and the burning of wood, animal dung and charcoal helped things along by providing an immediate source of heat. Next came domestication, about 12,000 years ago, when humans learned to harness the power of oxen and horses to plough their fields and drive up crop yields.

The only other readily accessible sources of power were the forces of wind and water. Sails were erected on ships during the Bronze Age, allowing people to move and trade across bodies of water. Windmills and water-wheels came later, in the first millennium BCE, grinding grain and pumping water. These provided an important source of power in ancient times. They remained the most powerful and reliable means to utilize energy for thousands of years, until the invention of the steam engine. Measured in modern terms, these powerful pre-industrial water-wheels couldn’t easily generate more than 4 kW of power. Wind mills could do 1 to 2 kW. This state of affairs persisted for a very long time:

“Human exertions… changed little between antiquity and the centuries immediately preceding industrialization. Average body weights hardly increased. All the essential devices providing humans with a mechanical advantage have been with us since the time of the ancient empires, or even before that.”

With less energy use, the world was only able to support a small population, perhaps as little as 200 million at 1 CE, gradually climbing to ~800 million in 1750 at the beginning of the industrial revolution.

Near the end of the 18th century, in a wave of unprecedented innovation and advancement, Europeans began to unlock the potential of fossil fuels. It began with coal. Though the value of coal for its heating properties had been known for thousands of years, it was not until James Watt’s enhancement of the steam engine that coal’s power as a prime mover was unleashed.

The steam engine was first used to pump water out of coal mines in 1769. These first steam pumps were crude and inefficient. Nevertheless by 1800 these designs managed a blistering output of 20 kW, rendering water-wheels and wind-mills obsolete.

Some historians regard this moment as the most important in human history since the domestication of animals. The energy intensity of coal and the other fossil fuels (oil and natural gas) absolutely dwarfed anything mankind had ever used before. Many at the time failed to realize the significance of fossil fuels. Napoleon Bonaparte, when first told of steam-ships, scoffed at the idea, saying “What, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense.”

Nevertheless, the genie was now out of the bottle and there was no going back. The remainder of the 19th Century saw a cascade of inventions and innovations hot on the steam engine’s heels. These resulted from the higher amounts of energy available, as well as to improved metalworking (through the newly-discovered technique of coking coal).

In agrarian societies, untouched by industrialization, the population growth rate remains essentially zero. However, in the 1700s and 1800s, these new energy-harnessing technologies brought about a farming revolution as well as an industrial one, profoundly changing man’s relation to the world around him. Manufactured metal farm implements, nitrogen fertilizers, pesticides and farm tractors all brought crop yields to previously unbelievable levels. Population growth rates soared and these developments enabled a population explosion in all industrialized states.

Grade-II’s final stage begot the energy-hungry electrodigital gadgets of Grade-III technology, and ever-greater efficiency has brought us to a point in history where we have come close to wringing everything we can out of this current automation grade.

A society that has mastered the creation and usage of Grade-III automation will resemble a world we’d consider to be “near-future science fiction.” It’s still beyond us, but not by much time.

Computers will possess great levels of intelligence and autonomy— some will even be capable of “weak-general artificial intelligence”. Nevertheless, it won’t yet be the right time to start falling back on your basic income. Jobs will still be plentiful, and new jobs will still be created at a very high rate. We will have essentially closed in on the ultimate point in economics, something I’ve come to dub “the Event Horizon”.

This is the point where productivity reaches its maximum possible level, where a single person can satisfy the needs of many thousands of others through the use of advanced technology. Workers are innumerable, and one’s role in society is extremely narrowly defined.

It seems like we’re on the cusp of creating a society straight out of Star Trek. We wonder about what future careers will be like— will our grandkids have job titles like “asteroid miner” or “robot repairman?” Will your progeny become known in the history books as legendary starship captains or infamous computer hackers? What kind of skills will be taught in colleges around the world; what kind of degrees will there be? Will STEM types become a new elite class of worker? Will we begin creating digital historians?

Well, right as we expect a sci-fi version of our world to appear, it all collapses.


Grade-IV automation is such an alien concept that even I have a difficult time fully understanding it. However, there is a very basic concept behind it: it’s the point where one of our tools becomes so stupidly complex that no human— not even the largest collective of supergeniuses man has ever known— could ever create it. It’s cognitively beyond our abilities, just as it’s beyond the capability of Capuchin monkeys to create and deeply understand an iPhone. This machine is more than just a machine— it is artificial intelligence. Strong-general artificial intelligence, capable of creating artificial superintelligence.

It takes the best of each previous grade to reach the next one. We couldn’t reach Grade-II without creating supremely complex versions of Grade-I tools. We couldn’t hope to reach Grade-III automation without mastering the construction of so many Grade-II tools.

As with all other grades (though it will feel most obvious here), there’s absolutely no way to reach Grade-IV technology without reaching the peak of Grade-III technology. At our current point of existence, attempting to create ASI would be the equivalent of a person in early-medieval Europe attempting to create a digital supercomputer. Of course, this may be the wrong attitude to take— it took billions of years to reach Grade-II, and less than four thousand to reach Grade-III. Grade-IV could arrive in as few as five years, or as distant as a century from now, but few believe it’s any further off than that. These beliefs often follow a pattern. Some believe it’ll arrive right around the time they’re expected to graduate college, so that they will never have to work a day in their lives; they’d simply collect a basic income and have no obligations to society at large beyond some vague expectation to be “creative”. For others, ASI will not appear until conveniently long after they’ve died and no longer have to deal with the consequences of such a radical change in society, usually on the grounds that “there’s no historical evidence that such a thing is possible”. One would think that argument carries no weight at all, considering how many things had no historical precedent before their own invention, but it naturally seems perfectly reasonable in the minds of technoskeptics. The discourse between these two sides has degenerated into little more than schadenfreude-investment: those desiring a basic income treat automation as the only plausible cause of large-scale unemployment, while those holding onto conservative-libertarianism insist automation is not and may never be an actual issue.

Nevertheless, all evidence points to the fact our machines are still growing more complex and will reach a point where they themselves will become capable of creating tools. This point will not be magical— it’s mere extrapolation. At some point, humanity will finally complete our technological evolution and create a tool that creates better tools.

This is the ultimate in efficiency and productivity gains. It’s the technoeconomic wet dream for every entrepreneur: a 0:1 mode of production, where humans need not apply for a job in order to produce goods and services. And this is not in any one specific field, as in how autonomous vehicles will affect certain jobs— this is across the board. At no point in the production of a good or service will a human be necessary. We are not needed to mine or refine basic resources; we are not needed to construct or program these machines; we are not needed to maintain or sell these machines; we are not needed to discard these machines either. We simply turn them on, sit back, and profit from their labor. We’d be volunteers at most, adding our own labor to global productivity but no longer being responsible for keeping the global economy alive.

Of course, Grade-IV machines will need humans in some capacity for some time, and in the early days, strong-general AI will maximize efficiency by guiding humans throughout society far more effectively than any human leader could. However, this arrangement will not last particularly long, as robotics will also make massive strides thanks to the capabilities of these super machines.

Most likely, each robot will not be superintelligent, though undoubtedly intelligence will be shared. Instead, they will act as drones under the guidance of their masters— whether that’s humanity or artificial superintelligence. This is because it would simply be too inefficient to have each and every unit possess its own superintelligence instead of having a central computer to which many other drones are connected. This central computer would be capable of aggregating the experiences of all its drones, further increasing its intelligence. When one drone experiences something, all do.
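Here’s a minimal sketch of what that hub-and-drone arrangement might look like in software. The class names and structure are mine, invented purely to illustrate the idea rather than describing any real system; the point is only that whatever one drone learns is instantly available to every other drone on the network.

    # A minimal sketch of the hub-and-drone idea: one central store of experience
    # that every drone reads from and writes to. All names here are hypothetical.

    class CentralMind:
        """Aggregates whatever every connected drone has learned."""
        def __init__(self):
            self.knowledge = {}          # situation -> best known response

        def report(self, situation, response):
            self.knowledge[situation] = response   # one drone's lesson becomes everyone's

        def advise(self, situation):
            return self.knowledge.get(situation, "explore")

    class Drone:
        """A non-superintelligent unit that defers to the central mind."""
        def __init__(self, name, mind):
            self.name, self.mind = name, mind

        def encounter(self, situation):
            action = self.mind.advise(situation)
            if action == "explore":
                # Pretend the drone works out a response locally, then shares it.
                action = f"handled {situation}"
                self.mind.report(situation, action)
            return action

    mind = CentralMind()
    scout, hauler = Drone("scout", mind), Drone("hauler", mind)
    scout.encounter("blocked path")          # the scout learns something new...
    print(hauler.encounter("blocked path"))  # ...and the hauler already knows it

That is the argument for centralization in miniature: the learning happens once, and the whole fleet benefits.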

Humanity will have a shot at keeping up with the super machines in the form of transhumanism and, eventually, posthumanism. Of course, this ultimately means that humanity must merge with said superintelligences. Labor in this era will seem strange— even though posthumans may still participate in the labor force, they will not participate in ways we can imagine.  That is, there won’t be legions of posthuman engineers working on advanced starships— instead, it’s much more likely that posthumans will behave in much the same way as artificial superintelligences, remotely controlling drones that also act as distant extensions of their own consciousness.

All of this is speculation into the most likely scenario, and all guesses completely break down into an utter lack of certainty once posthuman and synthetic superintelligences begin further acting on their own to create constructs of unimaginable complexity.

I, as a fleshy Sapiens, exist in a state of maximum alienation in a society that has achieved Grade-IV automation. As always, there are items I can craft with my own hands, and I can always opt to unplug and live as the Amish do should I wish to regain greater autonomy. I can opt to keep purely Grade-II or Grade-III technology alive with others, or create mock-antemillennialist nations that combine the labor of humans and machines so as to maintain some level of personal autonomy.
However, for society at large, economics, social orders, political systems, and technology have become unfathomable. There’s no hope of ever beginning to understand what I’m seeing. Even if the whole planet attempted to enter a field of study to understand the current system, we would find it too far beyond us.

This is the Chimpanzee In A Lunar Colony scenario. A chimpanzee brought to a lunar colony cannot understand where it is, how it got there, the physics behind how it got there, or how the machines that surround it work. It may not even understand that the blue ball hanging in the sky above is its home world. Everything is far too unfathomable. As I mentioned above much earlier, it’s also akin to a Capuchin monkey trying to create an iPad. It doesn’t matter how many monkeys you get together. They will never create an iPad or anything resembling it. It’s not even that they’re too stupid— their brains are simply not developed enough to understand how such a tool works, let alone attempt to create it. Capuchin monkeys can’t come up with the concept of lasers— the concept even eluded humans until Albert Einstein hypothesized their existence in 1917 (and no, magnifying glasses and ancient death rays don’t count). Monkeys can’t understand the existence of electrons. They can’t understand the existence of micro and nanotechnology, which is responsible for us being able to create the chips used to power iPads. An iPad, a piece of technology that’s almost a joke to us nowadays, is a piece of technology so impossibly alien to a Capuchin monkey that it’s not wrong to say it’s an example of technology “several million years more advanced” than anything they could create, even though most of the necessary components only came into existence over the course of the last few thousand years.

This is what we’re going to see between ourselves and superintelligences in the coming decades and centuries.  This is why Grade-IV automation is considered “Grade-IV” and not simply a special, advanced tier of Grade-III like, say, weak-general artificial intelligence— no human can create ASI. No engineer, no scientist, no mathematician, no skilled or unskilled worker, no college student or garage-genius, no prodigy, no man, no woman will ever grace the world with ASI through their own hands. No collective of these people will do so. No nation will do so. No corporation will do so.

The only way to do so is to direct weak-general AI in order to create strong-general AI, and from there let the AI develop superior versions of itself. In other words, only AI can beget improved versions of itself. We can build weaker variants— that’s certainly within our power— but the growth becomes asymptotic the moment we ourselves try to imbue true life into our creation. Even today, when our most advanced AI are still very much narrow, we don’t fully understand how our own algorithms work. DeepMind is baffled by their creation, AlphaGo, and can only guess at how it manages to overwhelm its opponents, despite being its designers.

This is what I mean when I say alienation will reach its maximal state. Our creations will be beyond our understanding, and we won’t understand why they do what they do. We will be forced to study their behaviors much as we study humans and animals, just to try to understand them. But to these machines, understanding will be simple. They will have the time and patience to break themselves down and fill every transistor and memristor with the knowledge of how they are who they are.

This, too, I mentioned. Though alienation will reach its maximal state, we will also return to a point where individuals will be capable of understanding all facets of a society. This is not because society is simpler— the opposite; it’s too complex for unaugmented humans to understand— but because these individuals will have infinitely enhanced intelligence.


For them, it’s almost like returning to Grade-I. For them, creating a supercivilization and synthetic superintelligences will seem no more difficult than creating a plow would seem to a Stone Age farmer.

And thus, one major aspect of human evolution will be complete. Humans won’t stop evolving— evolution doesn’t just “stop” because we’re comfy— but the reason our evolution followed such a radical path will have come full circle. We evolved to more efficiently use tools. Now we’ve created tools so efficient that we don’t even have to create them— they create themselves, and their creations will improve upon their own design for the next generation, and so on. Tools will actively begin evolving intelligently.

This is one reason why I’m uneasy using the term “automation” when discussing  Grade-IV technologies— automation implies machinery. Is an AI “automation”? Would you say using slaves counts as “automation”? It’s a philosophical conundrum that perhaps only AI themselves can solve. I wouldn’t put it past them to try.


Human history has seen many geniuses come and go. History’s most famous are the likes of Plato, Sir Isaac Newton, and Albert Einstein. The most famous living genius today is Stephen Hawking, a man who has sounded the alarm on our rapid AI progress— though pop-futurology blogs tend to spice up his message and claim he’s against all AI. The question is: who will be next?

Ironically, it will likely not be a human, but a computer. So many of our scientific advancements are the result of our incredibly powerful computers that we often take them for granted. I’ve made it clear a few times before that computers will be what enable so many of our sci-fi fantasies— space colonies, domestic robots, virtual and augmented reality, advanced cybernetics, fusion and antimatter power generation, and so much more. The reason it seems as though there hasn’t been a real “moonshot” in generations is that we reached the peak of what we could do without the assistance of artificially intelligent computers. The Large Hadron Collider, for example, would be virtually useless without computers to sift through the titanic mountains of data it generates. Without the algorithms necessary to navigate 3D space and draw upon memory, as well as the computing power needed to run those algorithms in real time, sci-fi-tier robots would be useless. That’s why the likes of Atlas and ASIMO have become so impressive so recently, but were little more than toys a decade ago. That’s why autonomous vehicles are progressing so rapidly when, for nearly a century, they were novelties only found near university laboratories. And without the algorithms needed to decode brain signals, brain-computer interfaces will be worthless and, thus, cybernetics and digital telepathy will never meaningfully advance.

Grade-IV goes beyond all of that. Such accomplishments will seem as routine as creating operating systems is today. We will do much more with less— so much more that many may confuse our advancements with magic.

There’s no point trying to foresee what a society that has mastered Grade-IV technology will look like, other than that any explanation I give will only ever fall back upon that one word: “unfathomable”. Even the beginnings of it will be difficult to understand.

It’s rather humbling to think we’re on the cusp of crushing the universe, and yet we came from a species that amounted to little more than being bipedal bonobos who scavenged for food, whose use of tools was limited to doing little more than picking up rocks and pruning tree branches. Maybe our superintelligent descendants will be able to resurrect our ancestors so we can watch them together and see how we arrived at the present.

Paratechnology

Unexplained Mysteries of the Future

I am only a half-believer in the paranormal, so taking mysteries of the unexplained at face value smacks of the ridiculous. Yet I can never shake those doubts, hanging onto my mind like burrs.
The mammalian brain fears and seeks the unknown. That’s all I want— to know. The chance any one particular paranormal or supernatural happening is real is infinitesimal. Cryptids are usually another story, save for the most outlandish, but what likelihood is there that evolution wrought a lizard man or a moth man? Or that certain dolls are cursed?
However, I won’t cast off these reports completely until I can know for sure that they either are or are not true, as unlikely as they may be.

So here are a few words on the subject of paratechnology.


Self-Driving Cars Have Ruined The Creepiness of Self-Driving Cars

Imagine it’s a cool summer evening in 1969. You’re hanging with your mates out in the woods, minding your own business. All of a sudden, as you pass near a road, you see an Impala roll on by, creaking to a stop right as it closes in on your feet. Everything about the scene seems normal— until you realize that’s your Impala. You just saw your own car drive up to you. But that’s not what stops your heart. When you walk up to the window to see who’s the fool who tried to scare you, horror grips your heart as you realize the car was driving itself.

Needless to say, when your grandson finds the burned out shell of the car 50 years later, he doesn’t believe you when you doggedly claim that you saw the car acting on its own.

Except he would believe you if your story happened in the present day.

Phantom vehicles are a special kind of strange, precisely because you’d never expect a car to be a ghost. After all, aren’t ghosts the souls of the deceased?

(ADD moment: this is easy to rectify if you’re a Shintoist)

Nevertheless, throughout history, there have been reports of vehicles that move on their own, with no apparent driver or means of starting. The nature of these reports is always suspect— extraordinary claims require extraordinary evidence— but there’s undeniably something creepy about the idea of a self-driving vehicle.

Unless, of course, you’re talking about self-driving vehicles. You know, the robotic kind. Today, walking out in the woods and seeing your car drive up to you is still a creepy sight to behold, but as time passes, it grows less ‘creepy’ and more ‘awesome’ as we imbue artificial intelligence into our vehicles.

This does raise a good question— what happens if an autonomous car became haunted?

O.o


The Truth About Haunted Smarthouses

For thousands of years, people have spoken of seeing spectres— ghosts, phantoms, spirits, what have you. Hauntings would occur at any time of day, but everyone knows the primal fear of things that go bump in the night. It’s a leftover from the days when proto-humans were always at risk of being ambushed by hungry nocturnal predators, one that now best serves the entertainment industry.

Ghosts are scary because they represent a threat we cannot actively resist. A lion can kill you, but at least you can physically fight back. Ghosts are ethereal, and their abilities have never been properly understood. This is because we’ve never been fully sure if they’re real at all. Science tells us they’re all in our heads, but science also tells us that everything is all in our heads. Remember: ghosts are ethereal, meaning they cannot actually be caught. Thus, they cannot be studied, rendering them completely useless to science. Anything that cannot be physically examined might as well not exist. Because ghosts are so fleeting, we never even get a chance to study them, instead leaving the work to pseudoscientific “ghost hunters”.  By the time anyone has even noticed a ghost, they’ve already vanished.

Even today, in the era of ubiquitous cameras and surveillance, there’s been no definitive proof of ghosts. No spectral analysis, no tangible evidence, nothing. Why can’t we just set up a laboratory in the world’s most haunted house and be done with it? We’ve tried, but the nature of ghosts (according to those who believe) means that even actively watching out for a ghost doesn’t mean you’ll actually find one, nor will you capture usable data. Our technology is too limited and ghosts are too ghostly.

So what if we put the burden onto AI?

Imagine converting a known haunted house into a smarthouse, where sensors exist everywhere and a central computer always watches. No ghost should escape its notice, no matter how fleeting.

Imagine converting damn near every house into a smarthouse. If paranormal happenings continue evading smarthouse AIs, that casts near irrefutable doubt onto the larger ghost phenomenon. It would mean ghosts cannot actually be meaningfully measured.
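As a thought experiment, here’s roughly what the always-watching part amounts to in code: keep a running baseline for each sensor and flag anything that strays far from it. The sensor name, readings, and threshold below are all fabricated for illustration; a real smarthouse would use far richer models, but the logic of continuous recording plus automatic flagging is the whole point.

    # A rough sketch of an always-watching smarthouse: learn a baseline per sensor
    # and flag readings far outside it. The readings below are fabricated.

    from statistics import mean, stdev

    baseline = {"hallway_temp_c": [20.1, 20.3, 19.9, 20.0, 20.2, 20.1]}  # learned normal readings

    def is_anomalous(sensor, value, sigmas=4.0):
        history = baseline[sensor]
        mu, sd = mean(history), stdev(history)
        return abs(value - mu) > sigmas * max(sd, 0.1)   # floor the spread so a flat history isn't oversensitive

    # Simulated overnight feed: one reading is a sudden, unexplained cold spot.
    for reading in [20.0, 20.2, 12.4, 20.1]:
        if is_anomalous("hallway_temp_c", reading):
            print(f"flagged: hallway_temp_c = {reading} (save the timestamp, the clip, everything)")
        else:
            baseline["hallway_temp_c"].append(reading)   # normal readings refine the baseline

Whether a cold spot in the hallway ever turns out to be more than a drafty window is exactly the question the recorded data would finally let us answer.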

Once you bring in transhumanism, the ghost question should already be settled. A posthuman encountering a spectre at all would be proof in and of itself— and if it never happens, if ghosts remain the domain of fearful, fleshy biological humans, then we will properly know once and for all that the larger phenomenon truly is all in our heads.


Bigfoot Can Run, But He Can’t Hide Forever

For the same reasons listed above, cryptids will no longer be able to hide. There’s little tangible evidence suggesting Bigfoot is real, but if there’s any benefit of the doubt we can give, it’s that there’s been very little real effort to find him. If we were serious about finding Bigfoot, we wouldn’t create ‘Bigfoot whistles’ or dedicate hour-long, two hundred episode reality shows to searching for scant evidence. We would hook up the Pacific Northwest with cameras and watch them all.

Except we can’t. INGSOC could never be watching you at all times for as long as the Party lacked artificial intelligence to do the grunt work for them. That’s as true in reality as it is in fiction— if you have a million cameras and only a hundred people watching them, you’ll never be able to catch everything that goes on. You’d need to be able to watch every feed at every moment of every day, without fail. Otherwise, video camera #429,133 may capture a very clear image of Bigfoot, and you’d never know.

AI could meet the challenge. And if you need any additional help, call in the robots. Whether you go for drones, microdrones, or ground-traversing models, they will happily and thanklessly search for your spooky creatures of the night.

If, in the year 2077, when we have legions of super-ASIMOs and drones haunting the world’s forests, we still have no definitive proof of a variety of our more outlandish cryptids, we’ll know for sure that they truly were all stories.

Grades of Automation

  • Grade-I is tool usage in general, from hunter-gatherer/scavenger tech all the way up to the pre-industrial age. There are few to no complex moving parts.
  • Grade-II is the usage of physical automation, such as looms, spinning jennies, and tractors. This is what the Luddites feared. There are many complex moving parts, many of which require specialized craftsmen to engineer.
  • Grade-III is the usage of digital automation, such as personal computers, calculators, robots, and basically anything we in the modern age take for granted. This age will last a bit longer into the future, though the latter ends of it have spooked quite a few people. Tools have become so complex that it’s impossible for any one person to create all necessary parts for a machine that resides in this tier.
  • Grade-IV is the usage of mental automation, and this is where things truly change. This is where we finally see artificial general intelligence, meaning that one of our tools has become capable of creating new tools on its own. AI will also become capable of learning new tasks much more quickly than humans and can instantly share its newfound knowledge with any number of other AI-capable machines connected to its network. Tools, thus, have become so infinitely complex that it’s only possible for the tools themselves to create newer and better tools.

Grades I and IV are only tenuously “automation”— the former implies that the only way to not live in an automated society is to use your hands and nothing else; the latter implies that intelligence itself is a form of automation. However, for the sake of argument, let’s keep with it.

Note: this isn’t necessarily a “timeline of technological development.” We still actively use technologies from Grades I and II in our daily lives.

Grade-I automation began the day the first animal picked up a stone and used it to crush a nut. By this definition, there are many creatures on Earth that have managed to achieve Grade-I automation. Grade-I lacks complex machinery. There are virtually no moving parts, and any individual person could create the whole range of tools that can be found in this tier. Tools are easy to make and easy to repair, allowing for self-sufficiency. Grade-I automation is best represented by hammers and wheels.

A purely Grade-I society would be agricultural, with the vast majority of the population ranging from subsistence farmers to hunter-gatherer-scavengers. The lack of machinery means there is no need for specialization; societal complexity instead derives from other roles.

Grade-II automation introduces complex bits and moving parts, things that would take considerably more skill and brainpower to create. As far as we know, only humans have reached this tier— and only one species of humans at that (i.e. Homo sapiens sapiens). Grade-II is best represented by cogwheels and steam engines, as it’s the tier of mechanisms. One bit enables another, and they work together to form a whole machine. As with Grade-I, there’s a wide range of Grade-II technologies, with the most complex ends of Grade-II becoming electrically powered.

A society that has reached and mastered Grade-II automation would resemble our world as it was in the 19th century. Specialization rapidly expands— though polymaths may be able to design, construct, and maintain Grade-II technologies through their own devices, the vast majority of tools require multiple hands throughout their lifespan. One man may design a tool; another will be tasked with building and repairing it. However, generally, one person can grasp all facets of such tools. Using Grade-II automation, a single person can do much more work than they could with Grade-I technologies. In summary, Grade-II automation is the mark of an industrial revolution. Machines are complex, but can only be run by humans.

Grade-III automation introduces electronic technology, which includes programmable digital computers. It is at this point that the ability to create tools escapes the ability of individuals and requires collectives to pool their talents. However, this pays off through vastly enhanced productivity and efficiency. Computers dedicate all their resources to crunching numbers, greatly increasing the amount of work a single person can achieve. It is at this point that a true global economy becomes possible and even necessary, as total self-sufficiency becomes near impossible. While automation puts many out of work as computational machines take over brute-force jobs that once belonged to humans, the specialization wrought is monumental, creating billions of new jobs compared to previous grades. The quality of life for everyone takes enormous strides upwards.

A society that has reached and mastered Grade-III automation would resemble the world of many near-future science fiction stories. Robotics and artificial intelligence have greatly progressed, but not to the point of a Singularitarian society. Instead, a Grade-III dominant society will be post-industrial. Even the study of such a society will be multilayered and involve specialized fields of knowledge. Different grades can overlap, and this continues to be true with Grade-III automation. Computers have begun replacing many of the cognitive tasks that were once the sole domain of humans. However, computers and robots remain tools to complete tasks that fall upon the responsibility of humans. Computers do not create new tools to complete new tasks, nor are they generally intelligent enough to complete any task they were not designed to perform. The symbol of Grade-III is a personal computer and industrial robot.

Grade-IV automation is a fundamental sea change in the nature of technology. Indeed, it’s a sea change in the nature of life itself, for it’s the point at which computers themselves enter the fray of creating technology. This is only possible by creating an artificial brain, one that may automate even higher-order skills. Here, it is beyond the capability of any human— individuals or collectives— to create any tool, just as it is beyond the capability of any chimpanzee to create a computer. Instead, artificial intelligences are responsible for sustaining the global economy and creating newer, improved versions of themselves. Because AI matches and exceeds the cognitive capabilities of humans, there is a civilization-wide upheaval where what jobs remain from the era of late Grade-III domination are then taken by agents of Grade-IV automation, leaving humans almost completely jobless. This is because our tools are no longer limited to singular tasks, but can take on a wide array of problems, even problems they were not built to handle. If the tools find a problem that is beyond their limits, they simply improve themselves to overcome their limitations.

It is possible, even probable, that humans alone cannot reach this point— ironically, we may need computers to make the leap to Grade-IV automation.

A society that has reached Grade-IV automation will likely resemble slave societies the closest, with an owner class composed of humans and the highest order AIs profiting from the labor of trillions, perhaps quadrillions of ever-laboring technotarians. The sapient will trade among themselves whatever proves scarce, and the highest functions of society will be understood only by those with superhuman intelligence. Societal complexity reaches its maximal state, the point of maximum alienation. However, specialization rapidly contracts as the intellectual capabilities of individuals— particularly individual AI and posthumans— expands to the point they understand every facet of modern society. Unaugmented humans will have virtually no place in a Grade-IV dominant society besides being masters over anadigital slaves and subservient to hyperintelligent techno-ultraterrestrials. What few jobs remain for them will, ironically, harken back to the days of Grade I and II automation, where the comparative advantage remains only due to artificial limitations (i.e. “human-only labor”).

Grade-IV automation is alien to us because we’ve never dealt with anything like it. The closest analog is biological sapience, something we have only barely begun to understand. In a future post, however, I’ll take a crack at predicting a day in the life of a person in a Grade-IV society. Not just a person, but also society at large.

Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than others. Here, I redefine the terms, breaking the usual “weak=narrow” and “strong=general” equivalence.

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on this wide, wild Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not AGI.
I didn’t mention analog mechanics merely in passing— this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you’d instead use it to factor binomials. And I don’t need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply very weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom tier of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.
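To make the “everything is hand-programmed” point concrete, here’s a toy sketch in Python (purely illustrative, not any real product’s code) of what WNAI amounts to: a fixed table of rules written by a human, with no learning anywhere in the loop.

```python
# A toy "weak narrow AI": every behavior is a rule a programmer wrote by hand.
# Nothing here learns, so it can never get better at arithmetic on its own.
def calculator(a: float, op: str, b: float) -> float:
    """A dollar-store calculator: a fixed lookup table of hand-written rules."""
    operations = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    if op not in operations:
        # Anything the programmer didn't define simply doesn't exist for it.
        raise ValueError(f"unsupported operation: {op}")
    return operations[op](a, b)

print(calculator(6, "*", 7))  # 42 -- and that's all it will ever do
```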

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning is happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart, for example, but it’s also easy to find its limits, because it’s an AI meant to be a personal virtual assistant, not your digital waifu à la Her. Siri can recognize speech, but it can’t deeply understand it, and it lacks the life experiences to make meaningful conversation anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can’t write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it’s “not an AI,” it’s only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.
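Here’s what that looks like in miniature: a hypothetical, self-contained Python sketch (not any real assistant’s code) of a single-task learner. It genuinely improves from data, which is what separates it from WNAI, but the only thing it can ever get better at is the one mapping it was built for.

```python
# A toy "strong narrow AI": a single-purpose classifier that learns from data,
# but only for the one task it was designed around (here, a made-up spam flag).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single task: decide "spam" from a 5-word vocabulary count vector.
X = rng.integers(0, 3, size=(200, 5)).astype(float)
true_w = np.array([2.0, 1.5, -1.0, -0.5, 0.0])  # hidden rule it must discover
y = (X @ true_w + rng.normal(0, 0.5, 200) > 1.0).astype(float)

# Logistic regression trained by gradient descent: the "learning" in this SNAI.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted spam probability
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

# It gets better at this one mapping as data accumulates, but it has no way to
# carry anything it learned over to a different task (chess, recipes, speech...).
acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy on its single task: {acc:.2f}")
```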

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it’s as intelligent as a certain creature, maybe a worm or even a mouse, but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind: its core algorithms can learn to do just about anything, but they’re nowhere near as intelligent as a human being. In fact, DeepMind wasn’t even in this category until it began using the differentiable neural computer (DNC), because its earlier systems could not retain previously learned information. Because they could not do something so basic, they were squarely strong narrow AI until literally a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind has yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all the others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits, or, should I say, its lack thereof. I ask it to generate a recipe for baking a cake. It learns from the Internet, but it doesn’t actually pull up any particular article; it completely generates its own recipe, using logic to deduce which steps should be followed and in what order. That’s nice. Now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. Even that limit is impressive, because it signals that we’re right on the cusp of strong general AI, and the only things we lack are sufficient computing power and training.
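The cake-versus-brownie distinction is essentially transfer learning. Below is a minimal, hypothetical PyTorch sketch (synthetic stand-in data, not recipes) of the two behaviors: relearning a related task from scratch, versus reusing what was already learned and only fitting the new part.

```python
# Transfer learning in miniature: reuse features learned on "cakes" for the
# related "brownies" task, instead of relearning everything from zero.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift, n):
    """Toy binary classification whose label depends on a shifted linear boundary."""
    x = torch.randn(n, 2)
    y = ((x[:, 0] + x[:, 1] + shift) > 0).float().unsqueeze(1)
    return x, y

def train(model, x, y, steps=300, lr=0.05):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return ((model(x) > 0).float() == y).float().mean().item()

# "Cakes": plenty of data. "Brownies": a related task with only a little data.
x_cake, y_cake = make_task(shift=0.0, n=1000)
x_brownie, y_brownie = make_task(shift=0.7, n=30)
x_test, y_test = make_task(shift=0.7, n=1000)

# Learn the cake task with a shared feature extractor plus a task-specific head.
features = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
cake_model = nn.Sequential(features, nn.Linear(16, 1))
train(cake_model, x_cake, y_cake)

# "Weak general" behavior: keep the learned features frozen, fit only a new head.
for p in features.parameters():
    p.requires_grad = False
transfer_model = nn.Sequential(features, nn.Linear(16, 1))
train(transfer_model, x_brownie, y_brownie)

# "Strong narrow" behavior: relearn everything from scratch on the same small data.
scratch_model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
train(scratch_model, x_brownie, y_brownie)

print("reusing cake features:", accuracy(transfer_model, x_test, y_test))
print("learning from scratch:", accuracy(scratch_model, x_test, y_test))
```

Whether the reused features actually win on a toy problem this small will vary run to run; the point is structural: the second learner starts from what the first already knows rather than from nothing.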

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor in the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite the seemingly simple differences between the two. WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code. SGAI is us building a whole digital brain. Then there’s the problem of embodied cognition: without a body, any AI would be detached from nearly all the experiences we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class, only to find out he’s only ever made five unique dishes and has never left his house. For AI to truly make the leap from WGAI to SGAI, it’d need some way to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body, and it could replicate a lifetime of experiences in a week if need be, given enough bodies, but having sensory experiences helps to deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


Recap:

“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn’t mean it’s the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat a human can, including creatively coming up with solutions to problems as well as or better than any human. It has sapience. SGAI may be very humanlike, but it’s ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many human brains you pool together. It’s fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.