Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than others. Here, I redefine the terms, decoupling the usual “weak=narrow” and “strong=general” equivalence.

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on this wide, wild Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not AGI.
I didn’t mention analog mechanics in passing— this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar-store calculator were an artificial superintelligence, what would you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you instead used it to factor binomials. And I don’t need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply “very” weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom tier of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning’s happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart, for example, but it’s also easy to find its limits, because it’s an AI meant to be a personal virtual assistant, not your digital waifu à la Her. Siri can recognize speech, but it can’t deeply understand it, and it lacks the life experiences to make meaningful small talk anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can’t write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it’s “not an AI,” it’s only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it’s as intelligent as a certain creature— maybe a worm or even a mouse— but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind— DeepMind’s basic algorithm can learn to do just about anything, but it’s nowhere near as intelligent as a human being. In fact, DeepMind wasn’t even in this category until they began using the differentiable neural computer (DNC), because before that it could not retain its previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until literally a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind’s yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe on how to bake a cake. It learns from the Internet, but it doesn’t actually pull up any particular article— it completely generates its own recipe, using logic to deduce what particular steps should be followed and in what order. That’s nice— now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
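To make that cake-to-brownies test concrete, here is a toy sketch in Python— entirely my own illustration, not DeepMind’s actual architecture, and the “training” here is just bookkeeping— of the difference between an agent that relearns every task from scratch and one that retains prior skills and notices when a new task overlaps with them:

```python
# Toy illustration of the cake/brownie test above -- not any real ML system.

def train(steps):
    # Stand-in for actual learning: we just memorize the ordered steps.
    return list(steps)

class StrongNarrowAI:
    """One task at a time; learning brownies wipes out the cake skill."""
    def __init__(self):
        self.skill = None
    def learn(self, task, steps):
        self.skill = (task, train(steps))      # overwrites whatever it knew before
    def perform(self, task):
        return self.skill[1] if self.skill and self.skill[0] == task else None

class WeakGeneralAI:
    """Skills accumulate, and overlapping steps are recognized as already known."""
    def __init__(self):
        self.skills = {}
    def learn(self, task, steps):
        known = {s for prior in self.skills.values() for s in prior}
        reused = [s for s in steps if s in known]   # crude stand-in for "transfer"
        print(f"{task}: reused {len(reused)} known steps, learned {len(steps) - len(reused)} new ones")
        self.skills[task] = train(steps)
    def perform(self, task):
        return self.skills.get(task)

cake = ["mix batter", "pour into pan", "bake 30 min", "frost"]
brownies = ["mix batter", "pour into pan", "bake 25 min", "cut into squares"]

narrow, general = StrongNarrowAI(), WeakGeneralAI()
for agent in (narrow, general):
    agent.learn("cake", cake)
    agent.learn("brownies", brownies)

print(narrow.perform("cake"))    # None -- the cake skill no longer exists
print(general.perform("cake"))   # ['mix batter', 'pour into pan', 'bake 30 min', 'frost']
```

The point isn’t the code; it’s the bookkeeping. If knowledge from task A survives and shortens task B, you’re looking at something at least weakly general.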
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That this is its main remaining limit is itself impressive, because it signals that we’re right on the cusp of strong AGI and that the only things we lack are the proper power and training.

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor towards the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite the seemingly simple differences between the two. WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code. SGAI is us building a whole digital brain. Then there’s the problem of embodied cognition— without a body, any AI would be detached from nearly all the experiences that we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class only to find out he’s made five unique dishes in his life and has never left his house. For AI to truly make the leap from WGAI to SGAI, it’d need some way to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body— it could replicate a lifetime of experiences in a week if need be, given enough bodies— but having sensory experiences helps to deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


Recap:

“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn’t mean it’s now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can do, including creatively coming up with solutions to problems just as good or better than any human. It has sapience. SGAI may be very humanlike, but it’s ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It’s fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

What Is Futuristic Realism?

Definitive Explanations, Breakdowns, and Examples of Futuristic Realism, Sci-Fi Realism, Slice of Tomorrow, and Science Non-Fiction

I get asked a lot, “Yuli, what is futuristic realism?”

And that’s a bad thing. I’ve explained what futuristic realism is around five hundred times now, and the fact people still ask me what it means suggests that I, as usual, have failed to give the world a concise definition. That makes sense— I am a legendary rambler.

So I’m here to finally put to bed these questions.

Note: there will be a short version where I get right to the point, and afterwards, there’ll be a long version where I allow myself to ramble and go in depth with what I mean.


Short Version

Sci-Fi Realism is a visual style that attempts to fool the viewer into thinking fantastic technologies are actually real and well-used, giving such tech a sort of photographic authenticity. 

Futuristic Realism is a subgenre of both science fiction and literary fiction that draws from science fiction and uses the structure of literary and realistic fiction in order to tell a story that feels familiar and contemporary.

Slice of Tomorrow is the fusion of science fiction and slice of life fiction.

Science Non-Fiction describes fantastic technologies, happenings, stories, and narratives that have already occurred and cause the person to say “I’m living in the future!”


Long Version

Let’s start with slice of tomorrow. Slice of tomorrow fiction is what you get when you take science fiction and mix it with slice of life. In order to understand what that means, you first need to know what “slice of life” is.

Slice of life is mundane realism depicting everyday experiences in art and entertainment.

There’s no grand plot.

There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets. That’s not to say high-intensity events can’t happen— they just aren’t the focus of the story. Slice of life does not necessarily have to be “literary”— it doesn’t have to focus on incredibly deep themes of human relationships. It doesn’t necessarily have to be about anything at all, other than showing one’s daily life.

Slice of tomorrow is mundane realism depicting everyday experiences, with the twist being that the events take place in an otherwise “sci-fi” or “cyberpunk” environment. The intention is in the name of the genre— “slice of tomorrow.” Show the world how humanity would react to futuristic technologies, tomorrow’s social mores, and perhaps even different conditions and modes of existence. However, slice of tomorrow does not have to be relatable, nor does one have to intertwine a deeper narrative into a story that identifies as “slice of tomorrow.”

 

Adding depth and length to mundanity brings you futuristic realism. Futuristic realism carries with it more of a ‘literary’ swagger. And in order to understand what that means, you must define literary and realistic fiction.

Literary fiction comprises fictional works that hold literary merit; that is, they involve social commentary, or political criticism, or focus on the human condition. Literary fiction is deliberately written in dialogue with existing works, created with the above aims in mind and is focused more on themes than on plot.


Realistic fiction is fiction that uses imagined characters in situations that either actually happened in real life or are very likely to happen. It further extends to characters reacting in realistic ways to real-life type situations. The definition is sometimes combined with contemporary realism, which shows realistic characters dealing with realistic social issues such as divorce, drug abuse, teenage pregnancy and more.

Put together, literary realistic fiction is a style of realism depicting real people in realistic situations, often as a means of exploring the human condition. Here, simply showing a different mode of existence isn’t enough— you have to thoroughly explore it. There is a humongous opportunity to be had in science fiction when it comes to exploring foreign and alien modes of existence, and many sci-fi authors have exploited that opportunity. One fine example of futuristic realism would have to be the Sprawl Trilogy by William Gibson— in fact, the literary work that gave birth to cyberpunk.

Indeed, futuristic realism and cyberpunk’s origins overlap heavily, and there’s no better way to illustrate this than by telling you how cyberpunk began in the first place, as well as describing what it’s become.

Cyberpunk was born when Gibson felt dissatisfied with the increasingly stagnant Utopian sci-fi, such as Star Trek. Gene Roddenberry’s Star Trek gave us a nearly-utopian world where advanced technology solved all of humanity’s problems and men lived in egalitarian harmony and prosperity; the only sources of conflict came from either other species or the occasional disagreement.
Gibson looked at the world around himself and concluded that, even if we had starships and communicators, there would still be drug dealers and prostitutes. If anything, the acceleration of technology would most likely only greatly benefit a rich few, leaving the rest to get by with whatever scraps were left over. This wasn’t a completely baseless extrapolation, precisely because that’s what had been happening up to that point— the developed nations, and in particular the rich, were able to enjoy high-tech consumer goods such as cable television, personal computers, video games, and credit cards, while the poor in many parts of the planet lived in nations that may very well have never experienced the Industrial Revolution. And even in developed nations, the poor were getting shafted by the system at large, especially as corporations grew in power and influence and enacted their will upon the governments of the world. Thus Neuromancer— and subsequently cyberpunk and futuristic realism— was born.

Cyberpunk and futuristic realism quickly branched off into different paths, however, as cyberpunk began becoming “genre” fiction itself— nowadays, in an almost ironic fashion considering how it started, when one thinks of ‘cyberpunk’, they think of ‘aggressively cynical dystopian action science fiction’, with the actual ‘punk’ aspect added in as an afterthought.

 

[Image: Bringing in elves and orcs sextuples the action! Source: Shadowrun]

 

To truly get a feel for futuristic realism, try to follow this one: it’s the genre Ernest Hemingway or Cormac McCarthy would write if they lived in the 2050s.

I have long said that the easiest way to achieve futuristic realism would be to take Sarah, Plain and Tall and add humanoid robots, drones, and smartglasses into the mix. And why? Because there is a very intense disconnect. I even said as much in a previous article:

That’s why I say it’s easiest to pull off futuristic realism with a rustic or suburban setting— it’s already much closer to individual people doing their own thing, without being able to fall back on the glittering neon cyberscapes of a city or the cold interiors of a space station to show off how sci-fi/cyberpunk it is. It makes the writer have to actually work. Also, there’s a much larger clash. A glittering neon cyberscape of a megalopolis is already very sci-fi (and realistic); adding sexbot prostitutes and a cyber-augmented population fitted with smartglasses doesn’t really add to what already exists. Add sexbot prostitutes and cyber-augments with smartglasses to Smalltown, USA, however, and you have a jarring disconnect that needs to be rectified or at least expanded upon. That doesn’t mean you can’t have a futuristic realist story in a cyberpunk city, or in space, etc. It’s just much easier to tell one in Smalltown, USA because of the very nature of rural and suburban communities. They’re synonymous with tradition and conformity, with nostalgic older years and pleasantness, and with a certain quietness you can’t find in a city.

Last but not least, there is sci-fi realism. This spawned futuristic realism and slice of tomorrow, and once upon a time, it was the catch-all term for the style. However, once I decoupled literary content from visual aesthetics, sci-fi realism became its own thing, and the best way to describe sci-fi realism would be to understand “visual photo-authenticity.”

This is my own term (because I just love making up jargon), and it refers to a visual style that attempts to recreate the feel of a photograph. This doesn’t just mean “ultra-realistic graphics”— it can be 8-bit as long as it looks like something you snapped with your smartphone camera. Of course, ultra-realism does greatly help.

Sci-fi realism is perhaps simultaneously the easiest and hardest to understand because of the nature of photography. After all, don’t many photographs attempt to capture as much artistic merit as paintings and renders? What qualifies as “photographic?”

And I won’t lie that it is, indeed, a subjective matter. However, there is one basic rule of thumb I’ll throw out there.

Sci-fi realism follows the rules of mundanity, even if it’s capturing something abnormal. There are few intentional poses and very little Romanticizing of subjects. It’s supposed to look as if you took a photograph in the future and brought it back to the past.

[Image: Source: Vitaly Bulgarov (and his dogs)]

Most photographs are taken from ground or eye level, maybe even at bad angles and with poor lighting. Very few of them ever manage to capture wide-open scenes— it’s nearly impossible to get both a shady alleyway and towering skyscrapers in the background from a realistic perspective. There are very few vistas or wide-shots. 

As aforementioned, hyper-realism comes in handy when dealing with sci-fi realism, and even wide shots can be made to feel “realistic” from a sci-fi perspective.

[Image: Future Dubai, by Thomas Galad]


And, also as aforementioned, it doesn’t necessarily have to be photorealistic as long as it carries a photographic quality.

[Image: “Burned” by Simon Stålenhag]

It was watching movies like Real Steel, Chappie, District 9, and Star Wars: A New Hope that really got me interested in this “what if” style. Those movies possessed ‘visual authenticity.’ When I watched Real Steel, I was amazed by how seamlessly the CGI mixed with live action. Normally, CGI is blatantly obvious; it feels fake. It doesn’t look real. But Real Steel took a different route. It fused CGI with practical props, and it was amazing to see. For the first time, I felt like I was watching a movie sent back from the future rather than a science fiction film. Other films came close, but Real Steel was where I first really noticed it.

 


The Bait And Switch

All of this refers to fiction. Slice of tomorrow is about slice of life science fiction. Futuristic realism is about literary science fiction. Sci-fi realism is about photographic science fiction.
However, with the obvious exception of slice of tomorrow, these can also fit non-fiction.

I mentioned quite a bit ago the concept of “science non-fiction.” This is a very new genre that has only become possible in the most recent years, and can best be described as “science fiction meets creative non-fiction.”

In recent years, many facets of science fiction have crossed over into reality. Things are changing faster than ever before, and what’s contemporary this decade would be considered science fiction last decade. As time goes on, this will only grow even more extreme, until each next year could be considered “sci-fi” compared to the previous one. At some point, people’s ability to take for granted this rapidly accelerating rate of technological advancement will wane, and there will be medically diagnosed cases of acute future shock. When we reach that point, even things that may have been on the market for years or decades will still be seen as “science fiction.”

We are already seeing a rudimentary version of this with smartphones— smartphones have been a staple of mass consumer culture for well over a decade. Despite this, people still experience future shock when they take time to think about these immensely powerful gadgets. As smartphones grew more powerful and ubiquitous, the effect did not fade but in fact became more intense. This inability to accept the existence of a new technology is virtually unprecedented— we grew used to airplanes, atomic energy, space exploration, personal computers, and the internet faster than we have smartphones. Virtual reality is poised to push this future shock to an even more precarious level, as now we’re beginning to actually infringe upon concepts and technologies with which science fiction has been teasing us for nearly a century.

Space exploration had a bit of an Antiquity moment in the 1960s— we proved we could do it but found no practical way to expand on our accomplishments, much like the ancient Greeks working with analog computers and steam engines— and the actual space revolution remains beyond us, lying at an undetermined point in the future. To prove this point, we still see things like space stations and landing on other celestial bodies as being “science fiction.” This raises a conundrum— a story where a man lands on the moon qualifies as “science fiction”, but we already took that leap roughly 50 years ago. Does that mean Neil Armstrong and Buzz Aldrin actually experienced science fiction? It can’t because of the very definition of the word ‘fiction.’

That’s where this new term— science non-fiction— comes in. When real life crosses over into territories usually only seen in science fiction, you get science non-fiction.

Science fiction has many tropes, and even as we invent and commercialize the technologies behind these tropes, they don’t leave science fiction. Space exploration, artificial intelligence, hyper-information technology, advanced robotics, genetic engineering, virtual and augmented reality, human enhancement, experimental material science, unorthodox transportation— these are staples of science fiction, and merely making them real doesn’t make them any less sci-fi. From a technical perspective, virtual reality and smartphones are no longer sci-fi. However, from a cultural perspective, they’ll never be able to escape the label.

Science non-fiction is extremely subjective precisely because it’s based on the cultural definition of sci-fi. Some people may think smartphones, smartwatches, and VR are sci-fi, but others might have already grown too used to them to see them as anything other than more tech gadgets. Even when we have people and synths on Mars, there will be those who say that missions to Mars no longer qualify as science fiction.

And it’s this disconnect that helps make science non-fiction work.

There’s that word again— disconnect.

Reading about events in real life that seem ripped from sci-fi is one thing. Actually seeing them is another altogether.

[Image: Photograph of Pepper, 2016]

We’re back to sci-fi realism. I am reusing the term “science non-fiction” here, but to discuss its visual form. I admit, sometimes I call it ‘sci-fi realism’, but I’ve begun moving away from that (to the detriment of the Sci-Fi Realism subreddit and to the benefit of the Futuristic Realism subreddit). As mentioned, this is what science non-fiction looks like: pictures, gifs, videos, and movies of real events that happen to feature science non-fiction technologies.

Science non-fiction is not necessarily slice of life or mundane, though it can be (and often is, due to the nature of everyday life). In this case, science non-fiction can actually be everything slice of tomorrow and futuristic realism isn’t— including the likes of cyberpunk, military sci-fi, and space operas. The only prerequisite is that the events have to be real.

For example: glittery cyberpunk-esque cityscapes already exist. There isn’t even a shortage of them— off the top of my head, there’s Dubai, Moscow, Hong Kong, Shanghai, Guangzhou, Tokyo, Singapore, Seoul, and Bangkok. Posting pictures of them can net you thousands of upvotes on /r/Cyberpunk. The vistas may lack flying cars, but who knows how much longer that’ll be the case?

[Image: That moment when Dubai starts looking like Coruscant]

If I bought a Pepper and brought it into my home, that would also qualify as science non-fiction. Domestic artificially intelligent utility robots are a major staple of science fiction, and them simply existing doesn’t change the fact sci-fi literature, films, and video games will continue utilizing them.

[Image: This is an actual Japanese showroom in 2016]

Likewise, if I donned a TALOS exosuit fitted with a BCI-powered augmented reality visor, and picked up a 25 KW pulse-laser Gauss rifle, and then got flown into Syria where I could also pilot semi-autonomous drones and command killer Atlas robots, that too would be science non-fiction.

[Image: The TALOS suit, one of the coolest things I’ve ever seen]

Funny thing is, both these examples are already possible. Not fully— ASIMO has yet to see a commercial release, Atlas hasn’t finished its transformation into a Terminator, and no one has yet constructed a handheld laser gun stronger than 500 watts. But none of it is beyond us.

And that’s the gist behind all of this. Science non-fiction is based on what we have done.

“So why did you create all this uber-pretentious sci-fi tripe?”

1- Because I wanted to.

2- Because I noticed a delightful trend occurring over and over again online. Even outside of sci-fi forums, I was repeatedly reading stories and anecdotes of people being amazed at how technologically advanced our present society really is— but they then lamented that they didn’t “feel” like they were really living in a sci-fi story.

I am a fantastic example of that myself. I live out in the sticks— I even counted the seconds: if you drive at sixty miles per hour for one minute and twenty-eight seconds, you will come across literally bucolic farmland straight out of a Hallmark Channel movie. The tallest building in my town (and for many miles around it) is the local theatre, which comes in at seven stories. It’s the kind of town where, if you drive down any particular road too late at night, you’ll get abducted by aliens and/or the CIA. I live behind some trees on the very outskirts of this town. And despite that, I still own a drone, several smartphones, a VR headset, and a dead Roomba. If I saved up, I could even potentially buy an artificially intelligent social droid— Aldebaran’s Pepper. It feels so mundane, but my life truly is science non-fiction. A while ago, I lamented that I wasn’t living in one of the aforementioned proto-cyberpunk cities precisely because I thought I had too much technology to be living in the country.

I’ve since decided to bring science fiction to me, and that requires quite a few changes. I’m no revolutionary street urchin. I have no coding skills whatsoever. I can count on a broken hand how many times in my life I’ve held a gun. There’s nothing thrilling about me, my past, or my future. And yet I still feel like I live in a world that’s fast becoming sci-fi. So I needed to find a way to express that. A way to tell a story I— in my unfit, very much kung-fu-challenged world— could relate with. I’m no hero, nor am I an anti-hero, nor am I a villain. I’m basically an NPC, a background character. Yet I still feel I have stories to tell.


Futuristic Realism and Transrealism

So what about transrealism? Isn’t it futuristic realism? In fact, it is. However, it’s a situation where “X is Y, but Y isn’t always X.” Transrealism is futuristic realism, but not all futuristic realism is transrealism. And the best way to understand this is by looking at the definition of transrealism.

Transrealism is a literary mode that mixes the techniques of incorporating fantastic elements used in science fiction with the techniques of describing immediate perceptions from naturalistic realism. While combining the strengths of the two approaches, it is largely a reaction to their perceived weaknesses. Transrealism addresses the escapism and disconnect with reality of science fiction by providing for superior characterization through autobiographical features and simulation of the author’s acquaintances. It addresses the tiredness and boundaries of realism by using fantastic elements to create new metaphors for psychological change and to incorporate the author’s perception of a higher reality in which life is embedded. One possible source for this higher reality is the increasingly strange models of the universe put forward in theoretical astrophysics.


Some final words on the subject, starting with Kovacs from the Cyberpunk forums:

Well… the only real way that sci-fi realism works – for me – is if the science fiction is invisible and ubiquitous.
Today, I could write a fully non-fiction or ‘legit literature’ fiction (e.g. non-genre) story using tech that, a decade or two ago, would have been cyberpunk. For example: 20 years ago, if you wrote a murder mystery about a detective that could track a victim’s every thought and action on the day they were murdered, all within 5 minutes or so, that would be sci-fi or even ‘magic’. Today, you just access the victim’s phone and scroll through their various social media profiles. Same with having a non-static-y video conference with someone halfway around the world; it used to be Star Trek, now it’s Skype. So how would this prog rock of sci-fi work? I suppose you tell a tale where the tech… doesn’t matter. It’s all about human relationships.
Ooooh I bet you think that’s boring, don’t you? Well, maybe. But we can cheat by playing with the definition of ‘human’.

I’m thinking about the movie Her. Artificial intelligence is available and there’s no paradigm shift. A romantic relationship with an AI is seen as odd… but not unimaginable, or perverse. There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets [OP: do these sentences look familiar?]. Robot and Frank is another good example; it’s a story where the robot isn’t exactly needed, but it makes the story make more sense than if it were, say, a college student, Scent of a Woman style.
(huh… Scent of a Robot, anyone? Al Pacino piloting ASIMO?)
So I guess what I’m leading to is: take the action-adventure component out of sci-fi. Take the dystopia out of cyberpunk. Take out the power fantasy elements. Take out the body horror. What are you left with? Something a little less juvenile? In order to develop this you’d have to have a really good dramatic story as a basis and sneak in the sci-fi elements. You can’t, by definition, rest on them.
Which is tough for me to approach, because I really like my space katanas.

Finally, what is futuristic realism not? Here, “X can be Y, but Y isn’t X.” Futuristic realism can use these things, but these things aren’t futuristic realism by themselves.

  • Hyper-realistic science fiction. As I said, visual authenticity started futuristic realism, but that’s not what it is anymore. Nowadays, that’s just straight ‘sci-fi realism.’
  • Hard science fiction. Futuristic realism can be hard or soft or anything in between; it’s the story that matters. Hell, you can write fantastic realism if you want to.
  • Military science fiction. Some people kept thinking sci-fi realism meant ‘hard military sci-fi’, which is why I rebranded the style ‘futuristic realism’. Military sci-fi can be futuristic realism, but a story simply being military sci-fi isn’t enough.
  • Rustic science fiction. After the whole spiel on /r/SciFiRealism when a whole bunch of people were angry that I kept posting images of robots in homes and hover cars instead of really gritty battle scenes and dystopian fiction, the pendulum swung way too far in the other direction. I have said that ‘the best way to write futuristic realism is to take Sarah, Plain and Tall and add robots’, but I didn’t say ‘the only way to write futuristic realism is… yadayada.’
  • Dark ‘n gritty science fiction. As aforementioned, some thought ‘sci-fi realism’ meant ‘dark and gritty science fiction’. And I won’t lie, it is easy for a realistic story to be dark and even gritty and edgy. But see above, I had to hit the reset button. 
  • Actionless science fiction. You’d think that, after all this bureaucratic bullshit, I’m trying to force people to write happy science fiction about neighborhood kids with robots. Not at all. In fact, you can have a hyper-realistic, dark and gritty hard military science fiction story that’s pure, raw futuristic realism. It depends on what the story’s about. A story about a space marine genociding Covenant scum, fighting to destroy an ancient superweapon, can indeed be futuristic realism. It just depends on what part of the story you focus on and how you portray it. Novelizing Halo isn’t how you do it. In fact, there’s a futuristic realist story I desperately want to read— a space-age War and Peace. Something of that caliber. If you want to attempt that, then I think the first thing you’d have to do before writing is ask yourself whether you can pull it off without turning it into a space opera. Take myself for example: fuck that noise. I’m not even going to try it. I know it would fast become an emo Gears of War if I tried to write it. It’s not supposed to be Call of Duty in Space; it’s a space-age War and Peace. There are twenty trillion ways you can fuck that up.

Try to think back to the last major sci-fi film, video game, book, or short that didn’t have one of the following—

  • Someone brandishing a weapon
  • A chase sequence
  • Fight sequence
  • Military tech wank
  • Paramilitary tech wank
  • Wide shots over either a city, alien planet, or space vehicle
  • Over-exposed mechanics or cybernetics
  • Romance between lead character and designated lover, usually as a result of the two working together to overcome the Big Bad and realizing they have feelings for each other
  • High-octane stakes, where the life of the protagonist or someone the protagonist cares about is at risk
  • Death of the antagonist, someone close to the protagonist, or the protagonist him/herself
  • Actions causing death in the first place
  • Bands of mooks for someone to mow down
  • Stakes where one side (e.g. space navy, evil megacorporation, warlord, etc.) has to suffer a total, epic defeat in order for the plot to be resolved, usually in the form of a climactic and tense battle

 

I’m not trying to be a creativity fascist; I’m merely attempting to define what futuristic realism and slice of tomorrow fiction aren’t. Hell, I’ve even said that you can have a whole bunch of these things and still come off as futuristic realism. It’s all about execution and perspective.

I suppose, what I’m trying to get at is that if you want to write futuristic realism and slice of tomorrow fiction, you have to ask yourself a very basic question: “Can the central plot be resolved with a gun battle without any major consequences?” Replace ‘gun’ with any weapon of your choice— space katana, quark bomb, logic bomb, giant mecha— the point remains the same. If the answer is no, you may have futuristic realism.

 

You can resolve just about any plot with a good shot from a Lawgiver; the key phrase is “without any major consequences”. Filling a flatmate’s skull with a magnetically-pressurized ionic plasma bolt because he’s not happy over how many sloppy sounds you make with your “sexbot sexpot” is going to have wildly different consequences from gunning down Locust filth in an interstellar war— unless, of course, you go deep into the psychological profile of someone who’s spent his life killing aliens, has never before contemplated why he’s doing it, and suddenly gains a keen interest in understanding the other side, particularly those not directly participating in the war.

 

It’s easy to say your story’s about the human condition more than it is about the science and technology, and I suppose that would make it more highbrow than a lot of other sci-fi. But futuristic realism/slice of tomorrow doesn’t have to be highbrow either. 

 

 

So let me use a story instead of just similes, analogies, and overbloated rules of thumb.

 

 

You have three characters: Phil, Daria, and Edward. Phil and Daria live in New York City in 2189. A war for independence has just broken out between Earth forces and Martian colonists. A Martian separatist has masterminded a terrorist attack in New York (what else is new?). What neither Daria nor Phil knows is that their Martian penpal, Edward, is the terrorist who masterminded the attack. This sounds like a traditional sci-fi plotline in the making.

How do you make it into a traditional military sci-fi story? Simple— Phil and Daria sign up for military service, get their own mech suits, and start rolling across Cydonia, where they fight communist Martian droids at the now terraformed, statue-like Face on Mars. The climax involves them facing down Edward and realizing their friendship has been put to the ultimate test as a result of a war. That’s a story that’s definitely character-driven and engaging— but it’s not necessarily “slice of tomorrow” fiction.

How do you turn it into a slice of tomorrow story? You don’t have to change a damn thing about the setting— only what you focus on. For example, Phil and Daria, in the short period of time after the attack and before they join the military, may be utterly shellshocked by the terrorist attack. They’ve seen dead and injured people, and a major landmark has been destroyed. They just want a moment to be thankful for the fact they’re alive. They may want to contact Edward to get his opinion on events, considering he’s a Martian and Martians are implicated in the attack. They’re just keeping up with the news to find out more about what just happened, and they grow ever more angry as time goes on. The climax could be them actually joining the military, or maybe something else entirely— something not involving the military at all. The terrorist attack was just a background event to their daily lives— a pretty big and impactful event, but a background event nonetheless. The real drama lies elsewhere. It’s drama you can’t just shoot at to make it go away, either. Thus, the story’s ultimately resolved well before the first mech suit ever gets to fire a shot at separatists.

 

Even writing that mini-blurb proved my point, because I was going to write something after “the real drama lies elsewhere”. Something more specific than “it’s a drama you can’t just shoot at to make it go away, either.” But as I typed it out, I could actually hear the groans of boredom in my head— “if this were an actual sci-fi story,” I thought, “having that plotline would just evoke nothing but frustration.” And what was that plotline?

Phil or Daria calling their parents. That’s it! The actual conversation would follow recent events, yes, but that’s the climax. When I wrote that out, I thought “That’s the dumbest/gayest thing I’ve ever heard” because it sounded a bit like a waste. I have this nice, big universe filled with juicy potential sci-fi action— I even have a fantastic trigger that present-day readers can relate to in the form of a traumatic terrorist attack— and I spent it by having one of the lead characters calling Mommy to wish her a tearful Merry Christmas?

That doesn’t sound sci-fi at all.

 

And that’s the point! Because even though it doesn’t sound like sci-fi, it still is sci-fi.


 

TL;DR:
Sci-Fi Realism: Candid, prosaic, and/or photographic sci-fi
Futuristic Realism: Science fiction as told by F. Scott Fitzgerald
Slice of Tomorrow: Science fiction as told by the Hallmark Channel.
Science Non-Fiction: Neil Armstrong’s autobiography

Debating Basic Income

Why I Think UBI Will Actually Be Social Credit-Based Income

While I’m not one of the reactionary Luddites who claims AI suddenly isn’t capable of doing anything, or that it’s only as capable as looms and tractors ever were, and while I’m not going to bother using the same an!capistan arguments against basic income that clearly aren’t swaying anyone (I don’t know why anarcho-capitalists and libertarians even bother), I will say that we’re giving basic income too much credit.

Keyword: credit. That’s what I’m leading into. Whenever I promote Vyrdism, I also mention why I don’t trust basic income— the State, the agency that will distribute said income, is not and never has been altruistic. They’re not going to give out a basic income unconditionally, and if you believe they will, you’re wrong. I know it’s your opinion, but your opinion is wrong. Literally 8,000+ years of experience with the ruling owner class proves you’re wrong— there will be conditions, even if the elite says there won’t be.

And China gave me the idea as to what that condition would actually be.

China is allegedly bringing out a social credit system, and your social credit score determines your ability to function in modern society. That sounds to me like the perfect opportunity to bring about a basic income— your social credit score determines the amount of your income. Lose too much social credit and you might be cut off from the basic income, and the justification will be “you’ve proven that you can’t be helped, even with a basic income.” So yes, you’ll get a basic income, and you’ll allegedly be allowed to do whatever you please with it— but those in power are closely watching what you’re spending it on, as well as your actions in other parts of your life.

Let’s say that there’s a baseline that everyone receives a month— $1,000— which supposedly cannot be altered. The State is promoting a ‘healthy’ lifestyle. In other words, if you buy too many greasy foods and sugary snacks, your social credit takes a hit and you might get less income. It’s not going to be overt— the easiest way to take money away from you while also keeping up with the “unconditional” basic income would be to penalize you elsewhere, such as with higher taxes and fees for goods. You may still receive $1,000 a month, but your expenses jump from $800 to $1,000.

That’s still manageable, and your basic income can still cover most of it. However, if you subscribe to anyone the ruling elite doesn’t like on Facebook, that’s more of a hit. Hell, if the ruling elite decides you can only use certain social media sites or search engines or only use certain ISPs and you defy them, you might get a big hit to your social credit score. Your $1,000 income becomes worthless as your expenses reach $2,000 or more a month. And I don’t even think I need to say what would happen if you protested against the government or its corporate-bourgeois masters. And by that point, it’s too late, because artificially intelligent technotarians have already rendered human workers utterly obsolete, meaning there’s no other way to improve your social credit score again other than to accept whatever the State demands.

Of course, it works in the other direction as well. If the State tells you to jump, you ask “How high?” You become their drone, doing absolutely anything and everything you can to be a Model Citizen™. You may be rewarded with relaxed expenses, effectively increasing your basic income every month from $1,000 to $1,200.
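To put hypothetical numbers on that (these figures are just the ones from the scenario above, and the credit multiplier is my own invented knob— no real proposal works exactly this way), the “unconditional” $1,000 never changes; the conditionality hides in your expenses:

```python
# Toy arithmetic for the hypothetical scenario above -- not any real UBI or
# social-credit scheme. The stipend is fixed; the cost of living is not.

def effective_income(stipend=1000, base_expenses=800, credit_multiplier=1.0):
    """What's actually left over once social credit quietly adjusts your cost of living."""
    return stipend - base_expenses * credit_multiplier

print(effective_income())                        #   200.0 -- the advertised baseline
print(effective_income(credit_multiplier=1.25))  #     0.0 -- junk-food penalty: expenses go $800 -> $1,000
print(effective_income(credit_multiplier=2.50))  # -1000.0 -- wrong subscriptions: expenses hit $2,000
print(effective_income(credit_multiplier=0.75))  #   400.0 -- Model Citizen(TM): feels like a $1,200 stipend
```

On paper, every row receives the same $1,000; the income that actually matters is the last column.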

Now if you ask me, we are going to see a universal basic income in our lifetimes. Not even just in our lifetimes, but very soon. And it’s not going to hit the ground running as a totalitarian social regimen.

I’m not against basic income. I just recognize the potential for abuse. Basic income-esque schemes have been tried throughout history, even if they’ve never been called basic income. And always, they’ve been part of a “deal” rather than being unconditional. For example, under feudalism, you need only work for the local lord and you get free protection. It’s just that feudalism also gave us serfdom, and basic income could very well lead us to a dystopian existence— one that few proponents seem to believe possible, because they opt for a false dichotomy in which anything other than basic income is a dystopia as well.

And if you’re alright with this, or already accepted that basic income was never going to be unconditional, then fine; I’m not talking to you. I’m talking to the wide-eyed idealists who still believe it’s an end in and of itself instead of a means to an end.

“But Yuli,” one might ask, “isn’t this more of a critique against a social credit score?”

Yes of course. My point is that, at least in our current mode of existence, the two will likely be intertwined. We won’t see UBI without a social credit score— it might even prove to be one of the compromises that must be made!

So in summary, I don’t blindly trust basic income. There’s been no proper debate on it, because the opposing argument is almost nothing but an!cap whinging about how taxation is theft, welfare is Stalinism, and the very thinly veiled “Tyrone’s just going to buy crack and beer and play Call of Duty all day on my paycheck”— which backfires and results in more people accepting basic income, by making it seem as if only an!caps and closeted fascists oppose it. This, in turn, makes the Left look even more like the Statist Sheep the Right so often claims they are.
A legitimate concern is that the ruling elite won’t make it unconditional because there is literally no evidence in history of them being altruistic in such a way. China’s social credit score is almost certainly what basic income is going to be tied to.

One last word: I’m not against basic income. I know I’m repeating myself, and I know most people are smart, but I’ve long since become cynical enough to realize that I must keep repeating this, as there will always be someone who decides that I’m actually a denizen of the aforementioned An!Capistan all because I dared to say anything against UBI.

If you want a true alternative to the current mode of existence, look to Vyrdism. Maybe read this: OPINION: Why I am pro-Vyrdism and not pro-Universal Basic Income (UBI).

The Coming Madness

I dedicate this post to the late Alvin Toffler, who helped to popularize the phrase “future shock”. By the end of this post, you will either despise or adore the phrase.

What is “future shock“? In simple terms, it’s what happens to a person when the rate of sci-tech development outstrips their mental ability to handle it. Mr. Toffler defined it as “too much change in too little time”. Though that is a fine description, I feel it also presents a tinge of vagueness.  If I were to change houses fifty times in a month, would the weariness and anxiety of all the change be considered future shock? Not at all. As a phrase, it’s always been used to describe our response to sci-tech, and that’s how Mr. Toffler meant for it to be regarded.

The damnedest irony of it is that Mr. Toffler popularized the phrase in 1970, a phrase that had already been floating around for roughly a decade by that point. Yet when I think of that time period, I think of an almost laughably primitive state of technology— with all its cathode ray-tube TVs, Kodak cameras, and rotary phones. This stems from my privilege of living in the Future™*, having reasonably fast internet in my pocket and all. With this in mind, it becomes amazing to think that people in the 1960s— the early ’60s at that— were experiencing future shock.

What happened in the early ’60s that beckoned sci-tech anxiety? We experienced the Cuban Missile Crisis, which threatened all of human civilization thanks to the existence of atomic weaponry…. The world’s first industrial robot, Unimate, was unveiled….  We created the first supercomputer, the CDC 6600 (whose top performance was 500 kiloFLOPS)— my IoT-capable washing machine is thousands of times more powerful than that thing…

So from my perspective, the 1960s were a hellish time to be a futurist. The idea that you could be overwhelmed by the technology of the day sounds comical, and yet as I mentioned, I’m coming from an era where washing machines are orders of magnitude more powerful than the era’s top supercomputer.

I want to go on about this, about how my imagining of the ’60s and ’70s paint them as the last ‘Luddite’ decades due to them possessing so little computing power across so few computers. But I won’t. That isn’t what this post is about.

No. I want to raise the point that the fact people were shocked by sci-tech in the early ’60s only means we are going to be in for a hell of a time in the coming years.

It used to be that a person experienced such great changes in their life only every so often, even as late as the ’70s. Nowadays, we’ve turned it into a meme. The iPhone 30SE -1000 is the latest hot thing— now it’s the iPhone 95DR006. You blinked, and now we’re all using the Samsung Omniverse ∞. Last year, we were into 3D TVs. This year, the Oculus Rift is the hot new thing. But you’d better hurry, because cortical modems are on their way. And now you’re a ball of pure super-sapient energy.

This shocking rate of change can prove to be too much for some people.

I know people who are still living in 2006. No, no, they haven’t invented time travel— they just don’t care enough about the latest gadgets to keep up with them. While futurists like myself are jizzing over augmented reality glasses and domestic robots, they’re not even aware that 3D TVs are a thing. Some may have only recently upgraded to Blu-ray, and that’s if they’ve even noticed that Blockbuster and Hollywood Video are extinct.

Should you present them with something like, say, the Guinness Book of World Records: 2006, and flip to its technology section, they would be impressed by developments from over 10 years ago.

The last time I was impressed by ASIMO was in 2014, and that was only because I was a Born-Again Singularitarian.

Such technologically retarded people have been sheltered by the relative inability of truly futuristic sci-tech to penetrate the mainstream. They’re not shocked by the iPhone nor are they exceptionally interested in social media. These things came gradually, more as conveniences than shocks.

This won’t last. We’ve begun seeing virtual reality headsets infiltrate the shelves of warehouse stores across America, and Aldebaran’s social droid, Pepper, is about to go on sale across the world. Passenger drones and hyperloops are being teased by companies and governments, while augmented reality glasses are drowning in investor money. All of these things are coming all at once, and they’re merely the first wave of a massive sea change in the mainstream.

Not to mention the stupidly fast progress in the field of artificial intelligence. Billions are being poured into this industry, keeping it in a perpetual AI Spring. Once upon a time, the best AI could not so much as navigate a 2D maze. Now, they’re defeating humans in games we’ve dominated for millennia.

The biggest limiting factor for utility droids has been the ability to navigate 3D space autonomously. This is why ASIMO rediscovered gravity back in 2008 while attempting to walk up stairs, despite all the progress the robot had made in its previous 20 years. Now that we have the proper algorithms and sufficient computing power, this isn’t a problem anymore— all we have left to do is fuse the likes of Google DeepMind with a robot like Boston Dynamics’ Atlas or Honda’s ASIMO.

What happens when all of these things converge on the common man, something that most expect to occur sometime between 2018 and 2022?

Future shock. The world’s most sweeping and intense case of future shock is upon us. Not only that, but I feel that there will be a trigger for this grand cybernetic anxiety, and it involves the world’s biggest sporting event.

 

In 2020, Tokyo will hold the Olympics. As you may know, Japan is commonly considered to be the most technophilic country on Earth, and their love for the synthetic and digital is not hampered by Western notions of creepiness— hence why Japanese news sites never have to bring up the likes of Terminator even when discussing humanoid droids, whereas American ones will eagerly call a medieval suit of armor a ‘Terminator.’ Prime Minister Abe isn’t pulling any punches— his exact words were “I want a Robot Olympics.” Likewise, Tokyo 2020 is fast developing into a spectacle of advanced technology rather than a mere contest between the world’s finest athletes. It is where and when most average people will first see autonomous vehicles and personal robots in action.

If that isn’t enough, also consider that World Expo 2020 will be held in Dubai, perhaps the world’s most futurism-obsessed city— one that has based its whole tourism industry on being our closest replica of a cyberpunk cityscape.

Never mind the very sound of the year being futuristic— “2020” is usually seen as being far in the future, not three and a half years away. One of my favorite video games, 2000’s Perfect Dark for the Nintendo 64, was set in 2023. We’re now closer to the year it’s set in than to its release! As time continues its relentless forward march, as children grow into adults and adults into geriatrics, we will be reminded ever more of how quickly things are changing.

One day— one day soon— we will grow used to the sight of utility droids, passenger drones, and cyborgs, and then some of us will wonder “When did the Future™ arrive?”

Such changes will have come so quickly that there will be people in need of medical attention in order to cope. People who need to be institutionalized, or at the very least need a counselor. Some will desire the old world, a world before all of these changes. Some will experience an intense hiraeth for the old days, not understanding change was always happening even when they weren’t paying attention.

We’re transitioning from a Post-Industrial Society to a Singularity Society. This period we’re living in right now, defined by the existence of strong narrow AI and social media, can best be described as “Pre-Singularity Society”. All of these societal shifts bring with them psychological upheaval. However, never have these changes been so rapid. They’re coming so fast that we’re becoming blind to them, or perhaps we’re coping by entering a delusional state where we believe nothing has changed in years. Clearly life now and life in 1996 are not the same, but we’re willfully blind as to how and why, thus inflicting upon ourselves the illusion of stagnation.

It won’t last. It won’t last at all. As I’ve said, future shock will smack us all sometime within the next 5 years, and I will put money down that it will be in 2020. After 2020, the concept of people needing mental help to cope with rapid sci-tech change will become more and more common.


*The Future™ is a term describing the commonly accepted tropes of what a sci-fi future is supposed to look like: flying cars, robot butlers, AI, techno music, space colonies, starscrapers, etc.

Decentralized Democracy

Whenever you get into an argument with a socialist over what socialism means, they always claim that it means “worker ownership of the means of production.” Yet when the argument is over and everyone’s back where they were beforehand, the socialist will frequently claim that it’s the State— not the working class— that should possess the means of production.

I’ve noted this many times and it’s been a bit hilarious to keep seeing socialists flip back and forth over what actually qualifies as socialism. That’s not to say that all socialists behave this way— there are some who never claim it’s anything other than State ownership of the means of production, and there are others who never claim it’s anything other than worker ownership of the means of production. Those in the latter category have largely been forgotten in popular discourse because of how socialism has come to mean “any form of Big Government.”

Naturally, I’m keen on wondering what’s so great about Big Government. It’s said that the government needs to regulate industry in order to prevent abuses, and without this regulation, the working class would be a downtrodden, abused, and impoverished underclass without any rights. Yet whenever I look to nations that have the most oppressed working classes, it’s always those with authoritarian or totalitarian governments that attempt to control every facet of the economy.

So is this a damning condemnation of government? Not at all. I can’t say I’d like more privatized prisons, after all. However, there is an aspect of this that I’m starting to realize may prove the socialists right— of course, it proves them right in a manner that works against them.

Businesses do need to minimize expenses and maximize profits. That’s just how it works. And often, that means the workers get the short end of the stick. Not always, but that’s how it usually happens, and once most businesses manage to lower workers’ wages, they don’t want anything happening to disturb that hegemony. Just look at what happened with Henry Ford— some of his rivals called him a socialist all because he paid his workers so well.

But that’s not what I’m getting at. No, my point is that socialists are very much right when they call socialist countries “State Capitalist.” And why? Because, as they say, the State takes the role of the capitalist enterprise. Most businesses are run by the State, after all.

However, I’m going deeper than that. It’s not just that the State runs most businesses— it’s also that the State itself has become a business. In order for it to be successful, it needs to be run like a business, like a corporation. However, whenever socialists overthrow the bourgeoisie and implement the Dictatorship of the Proletariat, they see themselves as revolutionary Marxist heroes, not bourgeois businessmen. That’s one reason why socialist nations always fail— those running it fail to realize they’ve essentially turned their parent nation into one giant corporation.

Imagine if wide-eyed idealists tried running Microsoft. Rather than engaging in traditional business practices, they ran everything according to some outdated pamphlet or religious document with no bearing on modern society. Would Microsoft last long? No, it wouldn’t. It would suffer yearly deficits that got worse and worse, with the workers going unpaid and the higher-ups reaping any and all money that could be made.

So it seems like you have a choice between one corporation or several corporations. Is there any way out of this matrix? To be blunt, not really. I’m not going to try to sugarcoat anything, because no matter what happens, we’re going to return to a somewhat similar setup in society. However, I do have one hypothesis.

The hypothesis goes like this: there is a conflict between authoritarianism and democracy. Democracy is inherently more successful than authoritarianism, as all examples of authoritarianism will eventually collapse in on themselves due to the centralization of power. However, authoritarianism is a dominant gene, whereas democracy is a recessive gene.

Anyone who has ever sat through an 8th grade biology class knows the terms “dominant” and “recessive.” A dominant allele expresses itself even when inherited from only one parent; a recessive allele only shows itself when inherited from both parents.

This holds true for sociopolitics and economics. You can’t have both authoritarianism and democracy at once. That’s one reason why I feel anarcho-capitalism and Chavismo are doomed ideologies— one claims to respect sociopolitical democracy, and yet all but demands economic authoritarianism. The other claims to pursue economic democracy, but pursued it through sociopolitical authoritarianism— and as we’re seeing in Venezuela, that has led to disastrous results.

This is because you need both to be democratic if you want success. If one is authoritarian, soon enough both will be authoritarian. An authoritarian government will never keep its hands off the economy, and authoritarian business structures will always want to corrupt government. You need government in order to create a monopoly, and you need a business powerful enough to get government to create a monopoly in its favor. That’s why the argument over whether monopolies are the result of Big Business or Big Government is a pointless one, very obviously divided along political lines— it takes two to tango. If there’s a monopoly, breaking up the business with bigger government won’t solve anything. Likewise, shrinking the government won’t solve anything either. You’d need to do both if you wanted to prevent it from happening again. However, as long as both are authoritarian structures, it will happen again. It’s just something we’d have to deal with time and time again.

That’s one reason why I’ve been extolling the virtues of worker cooperatives, worker self-management, decentralized business models, and fully automated businesses (technates).

We probably won’t see the rise of decentralized democracy anytime soon, not unless there were an aggressive move towards it. Digital technologies can aid this movement, as we’ve seen with the likes of the DAO, but it’s still too early to see which way we’re heading.

 

Yuli’s Law

I want to coin a new term: “Yuli’s Law.” Yuli’s Law states that any attempt at discussing futurology or emerging technologies will always result in someone expressing skepticism or pessimism based on past developments and failures.

For example: most discussions about the capabilities of future robots always fall back on the limited abilities of robots in the past. How often have you read a news story about artificial intelligence only to find in the comments someone saying something to the effect of “You still need to program every action, and that’s why AI will never happen”? Or perhaps when you try discussing technostism— the point when the world is fully or nearly fully automated? What’s the standard rebuttal? “It didn’t happen with looms, spinning jennies, tractors, and computers, so it won’t happen now.”

Another example, one I’ll go into in more depth: flying cars. Flying cars have been a staple of futurist optimism for nearly a century now, and yet they’ve never materialized. We’ve had planes for over a century, and helicopters have been around for almost as long. Fuel isn’t a problem— there is a plethora of fuels to choose from, even if some aren’t as savory as others (e.g. nuclear-powered cars). We’ve even seen an electric plane circumnavigate the globe. So what is keeping a flying family sedan out of your driveway? You are. Not you in particular, but humans in general. We humans evolved to navigate a 2D plane— we move forwards, backwards, side to side, diagonally, and little else. We didn’t evolve to move up and down. The limits of our 3D movements involve standing up, sitting down, squatting, falling down, and the like— not flying through the heavens at lightning speed. And even then, we were never meant to traverse 2D planes at high speeds either, hence why car accidents claim the lives of over a million people each year globally.
Flying cars just aren’t going to happen beyond novelties like the Avrocar unless you address the pilot problem. So what’s the innovation that addresses it?

The Ehang 184 passenger drone cruising through Dubai. (Image: Imgur)

 

Passenger drones. At CES 2016, a Chinese drone company, Ehang, unveiled the Ehang 184. It’s not the first passenger drone, but it’s arguably the most famous.

This technology is nascent, but it’s already proving itself— Ehang tested their passenger drone in Nevada during the summer of 2016, apparently with success. They may not be the first to bring passenger drones to the masses, however, as Alphabet’s Larry Page is investing in the development of flying cars. Surely he’s well aware of the crippling limitations of flying cars (limitations that turn them into little more than roadable planes). After all, his company is leading the way in the field of autonomous vehicles. Adding a third dimension to the Google driverless car won’t be a problem, because it will be computers that have to deal with it.

And that brings me to my next example. Some people might still say that flying cars won’t ever appear because they haven’t appeared on the market yet, but once you introduce them to passenger drones, an interesting thing happens— they begin to ponder why we didn’t create passenger drones before now.

One thing that every futurist knows (or should know) is that many of our beloved and desired technologies will only be possible with greater computing power. This hasn’t always been the case— we didn’t need computers to create airplanes or automobiles or even atomic bombs and space-faring rockets. However, once we did create these things, we hit a plateau. It’s a plateau that’s been the bane of futurists, sci-fi fans, Singularitarians, transhumanists, everyone with an interest in technology in general. All the low-hanging post-industrial technology fruits have been plucked, and in order to progress further (and to use video game terminology), we need to unlock the AI upgrade. We can still develop futuristic technologies without AI, but it will take exponentially longer timescales to do so, especially considering we aren’t funding sci-tech at anywhere near the levels some people think we are (e.g. we’re funding fusion energy at a “Fusion Never” level, and NASA’s 2016 budget is one of its ten lowest-funded years ever).

Even then, however, we still won’t be able to develop some technologies, such as domestic robotics and augmented reality. These two technologies are wholly dependent on algorithms that can decipher the incredible amounts of data the world at large feeds them, and without sufficiently powerful algorithms, they will never take off the way we imagined.

When I use the term “AI”, I’m not necessarily referring to artificial superintelligence (ASI) or even artificial general intelligence (AGI). I mean any algorithm, no matter how narrow. With sufficient computing power, even narrow AI becomes impressively capable.
And they need to be capable if we want the futuristic fantasies we’ve always desired.
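To make that concrete, here’s a quick back-of-the-envelope sketch— my own toy illustration, not any real system— of why raw computing power matters even to a completely narrow, brute-force game-playing algorithm. The branching factor and the hardware budgets below are made-up numbers:

```python
import math

def lookahead_depth(nodes_per_second, seconds_per_move, branching_factor):
    """How many moves ahead a brute-force player can look, given that
    searching to depth d visits roughly branching_factor**d positions."""
    node_budget = nodes_per_second * seconds_per_move
    return int(math.log(node_budget, branching_factor))

# Hypothetical hardware budgets: the same narrow algorithm, different eras.
for era, nodes_per_second in [("1970s minicomputer", 1e4),
                              ("1990s desktop", 1e7),
                              ("2016 GPU rig", 1e10)]:
    depth = lookahead_depth(nodes_per_second, seconds_per_move=10, branching_factor=30)
    print(f"{era:18s} -> sees about {depth} moves ahead")
```

Same dumb algorithm, three different node budgets: it goes from seeing about three moves ahead to about seven, and in most games that’s the difference between a pushover and a monster.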

In the early days of science fiction, we were amused by visions of robot butlers. So amused that we tried making them ourselves. However, every attempt thus far has failed. Does it stand to reason that every attempt will fail, or will there come a day when every middle-class family possesses their own automaton slave?

I won’t even let you answer that question— of course that day will come.

There’s a wonderful infogif from Mother Jones that shows why we’re about to get our own domestic droids sooner than many think.

Mother Jones – “Welcome Robot Overlords. Please Don’t Fire Us?”


In the 1960s, our computers had so little computing power available that you might have gotten better results with a wind-up toy. Things didn’t improve much in the ’70s, though we were doing the best we could with what we had. It was depressing, to say the least. That there were proto-Singularitarians even in that era is remarkable; I can’t imagine how little hope they must have had.

I say this from the comfortably robotic year of 2016. I’ve talked about my late Roomba, though I don’t believe I’ve mentioned how frustrating the little bugger really was. Even a 2010-model Roomba felt like a glorified McDonald’s toy. Thus, it makes sense why people high on the Jetsons and Star Wars back in those vintage decades would be disillusioned by the seemingly stagnant rate of progress in robotics.

But things are changing.

Now that we’re developing computers powerful enough to run the algorithms necessary for a robot to navigate a real life space— as well as wireless networks fast and sturdy enough to drive Cloud computing— we are witnessing a robotics boom time.
How is it that ASIMO went from falling over trying to navigate stairs to being able to hop on one leg stably? Better algorithms that could process more information at a much quicker pace. How is it that Boston Dynamics went from the awkward PETMAN to the creepily impressive Atlas 2.0? Better algorithms that could process more information at a much quicker pace. That’s not to forget more efficient robot design, of course— design that those algorithms still have to take advantage of.

We may have had the necessary algorithms in the ’70s and ’80s, but computers were far, far too weak to exploit them. Thus, if you brought home something marketed as a home robot butler for Christmas ’78, you were going to be sorely disappointed. And if you were a six-year-old who had high hopes for robot butlers, the failings of one would scar you for life. Imagine living the rest of your life, working at a decent job and starting a family, and then in 2018, you hear that Honda is selling its first home-ready ASIMOs while Google is readying a passenger drone. Chances are you wouldn’t believe them. Sure, technology’s gotten better, but there’s no way it could have gotten that much better, right…?

And yet it has.

The World of 2029: 1000-Man Algorithms

It’s late Friday, November 16th, 2029.

Samantha Jones wraps up what she was working on and goes into the kitchen for a glass of water. Nothing happened today that really shocked her, and there was little to write about.

Then again, her computer continues to type even though she’s away from the keyboard.

When she sits back down, she rolls her chair around the corner to check up on Miranda. She’s watching cartoons on their 8k TV. The TV itself is as flat as paper and sticks to the wall, making it appear as if it were magic wallpaper.

As the sun sets on the crisp autumn day, the wind blows with vigor and the sky darkens.

Beautiful, Samantha thinks. There’s nothing better than a rain-cooled night.

Immediately, those words appear on the computer screen. Along with them, a stock photo of such a cloudy, cool, and dreary evening.

Chui speaks, “You love rain, right?”

“Better believe it, honey.” Then she begins typing.

Chui sees that these inputs come from the keyboard. “Isn’t it easier to use the iMind?”

“Meh, I grew up typing. Old habits die hard.”

“I thought you loved new technology.”

“When it suits me.” She turns off the speaker and types in the next response. ‘I just don’t like playing cyborg all the time.’

‘I understand.’

“So how far along is the game?”

“79% finished.”

Samantha brings up a new window and sees a message box that displays multiple lines of code. The code generates itself and fills whole pages. On top of the box are the words ‘Sam’s Game.’

So she logs off from her blog and brings up a social media website. Her eyes glow from all the information thrown at her, and she hastens to put on her glasses.

Instantly, she sees a new world around her, one more vibrant than any she’s ever known. She’s in the website, experiencing its cybernetic wonders without any middleman.

In fact, the more she works with this, the less she uses the traditional computer and keyboard. If anything, they’re vestigial. Yes, the tower is necessary, but she wears the screen as glasses, and she uses her mind as the keyboard.

Along with her glasses and cyberkinetic headband, she also wears wireless earbuds. From here, she can listen to any of her 120,000 downloaded songs. Don’t tell anyone, but she used a YouTube-MP3 converter for almost all of it. It’s not like anyone can do anything about it anyway. The last lawsuit over pirated music was nearly a decade ago, and now the music industry doesn’t bother.

And an avatar of Chui is smiling at her. That little thing is one of the reasons why.

Chui, and on a larger scale artificial intelligence in general, has become what the media has dubbed ‘supercapable.’

Supercapable AI as a term was first used in 2016 after a Google AI beat the world champion at Go, a game whose very function requires some form of generalized intelligence. It has come to bridge the gap between ‘narrow’ AI and ‘general’ AI. For a refresher, narrow AI refers to any programming that can complete a specific task. Computers from the 1960s, thus, relied on narrow AI.

General AI has proven to be a tougher nut to crack, as it requires an algorithm that can learn any task and operate on a human level.

Up until the late 2010s, they were seen as separate worlds. After computer intelligence’s domination of Go, however, the term ‘supercapable AI’ entered parlance to describe AI that was capable of some level of generalized learning, even if it was not general intelligence in and of itself.

“Your game is almost ready,” Chui says. “95%.”

Samantha can’t keep her jaw off the desk. “That fast?”

“Yep.” Chui gives an ‘XD’ smiley. “Just putting the finishing touches on the lighting engine. All 18 levels are done, and the game’s AI is working properly.”

Supercapable AI has been the dream of nerds and dreamers— and the nightmare of wage laborers. Chui isn’t the one building the game; it’s only reporting on the progress of its construction. Nor is Sam the head of a game design studio— another AI entirely is doing the building. Sam wrote out instructions and descriptions of what she wanted from the game and guided the AI in its early design phases, but otherwise she (and every other human) has not put in a single line of code.

That’s not to say this has killed entertainment.

One of the sites Samantha opens up next is a hub for such games to be shared and sold. There are thousands of such games, uploaded by people across the world. Human-developed games are specially marked, though algorithm-developed games dominate the site.

She checks her account. $542.99 made in the past week off game downloads. She’s among the top 500 ‘developers’, as well as one of the site’s oldest accounts.

Samantha is a technophile who keeps her finger on the pulse of the tech world. For years, she lauded the coming of decentralized game development. Indie games have grown in complexity thanks to algorithms, to the point they are indistinguishable from ‘professionally developed games.’

In fact, in the site’s newsfeed is a headline that reads ‘EA Closes Doors— Millions Cheer’. This has been the somber reality of the gaming industry ever since the algorithms first hit the market in 2018. It took a while for them to be noticed. In fact, as late as 2021, many in the games industry claimed that these “silly algorithms would never present a threat to the millions of manhours put in by the industry’s best”, since the best the algorithms seemed able to produce were basic, utilitarian stages with little creativity.

By 2024, this delusion had shattered when an algorithm-designed game became the biggest-selling title of the year. Google and its ilk had warned the game industry years prior, saying that top-of-the-line algorithms from 2020 were already capable of ‘creative design’ and that it was only a matter of time before regular consumers got their hands on them. For the industry, it’s only gone downhill since. A few algorithms can outdo a team of 500 skilled programmers and designers at a millionth of a percent of the cost, so what’s the point of having the latter outside of ‘human cred’?

“Aaaaand 100%. Game’s finished, Sam.”

“Alright, cool, I’ll check it out in a sec,” she thinks.

A small preview opens in the lower right of her vision, and she sends a mental note to close it and send it to the house’s main computer.

Ben and Miranda run into the living room and sit in front of the TV. The game opens, and they get a screen full of cartoony graphics and bright colors. If one didn’t know better, they’d swear this came from Nintendo.

This seemingly simple layout was the intention. Samantha knows that, if she wanted to, she could’ve created a sprawling epic featuring the most realistic graphics possible.

And there’s another point of contention. Ever since the early 2020s, computer graphics have been photorealistic. Video games oft feature CG cutscenes. It doesn’t take a rocket scientist to figure out that one can use these game algorithms to create movies and serials.

To Samantha, decoupling power from a centralized few is the greatest thing to happen to the entertainment industry. Billions of minds hold quadrillions of ideas, but only a few thousand ever get the privilege of seeing them brought to light.

It just helps that she was an early adopter. She’s been a blogger for decades now, and using AI to write her content has made her life easier (and wealthier). And it’s not like her readers don’t know— she’s one of the Internet’s most outspoken technophiles, openly praising the neverending progression of artificial intelligence and robotics.

This is what makes her opposition to Vyrdism so strange. She profits off of automation and AI, yet she claims that others should rely on the State to pay them benefits and not worry about owning anything. Hence her last article, ‘Giving Vyrd the Bird.’ It’s not been one of her more popular articles, with many Vyrdists attacking it as ‘bourgeois apologism.’

“What do you think about it?” she asks Chui.

“I think it’s a fine article that raises many good points,” it responds.

The rain falls.

The World of 2016

Exponential growth is for Luddites

I dedicate this post to my Roomba, who served me well for four and a half great years. R.I.P., 2011-2016.

Okay, maybe not. But there’s a reason why I trashed my Roomba— it’s outdated. It was outdated when I bought it and it’s worlds outdated today. Every second that passes inches it closer to the Roomba Obsolescence Singularity, the point at which all Roombas are obsolete the moment they’re created.

But I don’t mind, because I didn’t need another Roomba for half of a decade. That one served me well. The same applies to a lot of technology— I recently blogged about how I have a smartphone from 2013, and how I’ll probably keep it until I buy an iPhone 8s Plus, which I’ll then keep until 2026.

It feels good to experience exponential growth. It’s hard to experience it when you’re constantly riding the curve, so making a stop at one point and picking back up some time later is a joyride that can’t be beat.

The same will still be true 13 years from now. Imagine 13 years of exponential growth from where we’re at now. There’s a reason why the World of 2029 posts are increasingly ‘out there’— the more I consider the real nature of the future, the more I realize that I’m badly underestimating the rate of change. When I went into writing The World of 2029: Part One, I was still thinking linearly.

Imagine if this year were 2003, and I were writing about the year 2016. Yeah, there are some things I’d have gotten wrong, wrong and terribly wrong. But there are other things I might have gotten partially right, failing only because I wasn’t being creative enough.

Back in 2003, I thought about the future a lot. I oft consider that period in 2003 to be my ‘proto-futurist’ phase of youth. The years that really interested me were 2015 and 3000. Why 2015? I dunno, it just sounded so futuristic to me. What’s more, it’s cute how little difference there was between my visions of 2015 and 3000— the year 3000 AD was brighter, taller, and had flying cars, but it was recognizable. The year 2015 had a lot of robots, jetpacks, neon, and some “primitive flying cars.” Mind you, I was 9 years old, so I wasn’t totally aware of all the great changes that had occurred and could occur.

Still, it’s interesting to return to those times and try to wrap my mind around the idea of just how much had actually changed between then and 2015.

The biggest thing was access to the Future. A lot of futurist thinking is predicated upon the idea that the Future will be mostly-evenly distributed. To an extent, this is actually correct: think of smartphones and their prevalence in society. The rich might have snazzy covers and larger storage sizes, but for the most part, a millionaire’s iPhone isn’t very different from my own.

To another extent, it’s totally wrong. Were there robots, jetpacks, and primitive flying cars in 2015? Absolutely. Did everyone have them? Not at all. In fact, only a handful of people altogether had any of the above.

Things get cheaper, as smartphones have, so I’m confident the Future will arrive in the lap of the less fortunate. It’s just that, for now, we have to watch and imagine.

The amount of exponential growth between 2003 and 2015 didn’t seem to be all that great. Towards the end, there was a noticeable curve upwards in progress, but it took a while to get going. Between 2003 and 2010, not much in my daily life changed. I had an iPhone and 7th generation video game consoles, but that was just about it. Never mind the more subtle changes, such as the rise of social media and YouTube.

Compare 2010 to 2015, and I’ll definitely say there was a change. For one, I got a Roomba. I also got a more powerful computer, a much better smartphone, a brand new video game console, and I started using Siri and Cortana. Oh, and then there was this “totally nothing” drone I got in 2014.

I’ll always use the drone as an example of when the Future hit me and my mother in the face. The thing’s a flying robot. I got a freaking flying robot for Christmas. In 2003, that was solely the realm of science fiction. My mother? She still can’t get over it. It looks like a flying saucer, which just drives the point home even further.

Now it’s 2016 and I’m already impressed with what I’m seeing, whether it be autonomous manned drones or heavily expanded IoT services. I’ve become used to the overwhelming amount of change because I’ve accepted and embraced it. That makes it easier to see just how much change we’re undergoing and predict how much will occur in the future. Yet I still made the linearist mistake.

So I feel I should spend time talking about where information technology will be in 13 to 14 years. I can talk about where it’ll be in 4 years all the same. If I use an exponential growth model, things begin making sense.
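For anyone wondering what I mean by an “exponential growth model,” it’s nothing fancier than repeated doubling. Here’s a minimal sketch, assuming a made-up doubling period of two years (a Moore’s-law-flavored guess on my part, not a measurement):

```python
def capability(years_from_now, doubling_period=2.0):
    """Relative capability after `years_from_now` years, if it doubles
    every `doubling_period` years (a stand-in, Moore's-law-style guess)."""
    return 2 ** (years_from_now / doubling_period)

for horizon in (4, 13, 26):
    print(f"{horizon:2d} years out: ~{capability(horizon):,.0f}x today")
```

The exact numbers are meaningless; the shape is the point. Under that assumption, the 13-year jump isn’t three times the 4-year jump, it’s over twenty times bigger, which is exactly why my linear guesses about 2029 keep coming up short.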

So expect the World of 2029 posts to get exponentially weirder.

 

National Networked Federation of Worker Cooperatives

It’s a staple of Vyrdist philosophy— that we need a national network of worker cooperatives.

Why do we need them? Because as a Vyrdist, I believe that owning automation will be more fruitful than simply getting a check every month, ala universal basic income.

This is Vyrd’s best idea, IMO. According to him, one reason why income inequality is so dangerously high is because workers have so little power. It’s at the point that workers are relying on hope that bourgeois bureaucrats will tax themselves to pay for welfare to ease their pain.

Thanks to Conservative Leftism, the liberalist ideals embodied by the Democrats, the worker has been bamboozled into thinking reliance on bourgeois welfare is empowerment. So complete is this brainwashing that some workers who wish for expanded welfare actually oppose programs dedicated toward cooperativization.

The nuvo-left knows better. The only way to empower the workers is through empowerment. This sounds nonsensical and silly, but think about it for a second: what does “empowerment” really mean? Economically, it means owning the means of production, being business leaders, and controlling the distribution of wealth at personal enterprises.

Vyrd was smart. He realized that a decentralized market is the best way to allocate resources in a technostist society. Centralized planning has failed both socialism and capitalism. To those confused by the latter, recognize that a traditional capitalist enterprise involves workers creating wealth and a central authority distributing that wealth. It is beholden to market realities, yes, but this is the gist of any business.

Decentralized State power coupled with decentralized economic power is the best way forward. The only way to decentralize economic power is to create a national networked federation of worker cooperatives. Vyrd said that the best way to establish technostism is to create a mixed economy. Not one in the traditional sense, but an economy that features a number of worker cooperatives and traditional enterprises. He then bet on labor flight, where low-skilled and mid-skilled workers at traditional enterprises flee to the worker cooperatives for their vastly higher wages, leaving the high-skilled with the capitalists and spurring capitalists to invest heavily in automation to make up for their lost labor base. The worker cooperatives profit from this automation along with them as the cost of automation plummets. Ownership is extended to whole communities via helotism, until eventually all enterprises— worker-owned or capitalist-owned— are automated and ownership of automation is either fully or quasi-common.
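Because labor flight is a feedback loop rather than a one-off event, it’s easier to see in a deliberately crude toy simulation. Everything below (the wage premium, the migration rate, the automation response) is an invented parameter of my own, not anything Vyrd actually specified:

```python
# Toy model of the "labor flight" loop -- all parameters invented, not data.
coop_share = 0.05        # fraction of the workforce in cooperatives
automation_level = 0.0   # fraction of traditional-firm jobs automated
automation_cost = 1.00   # relative cost of automating one job

for year in range(2020, 2041, 5):
    wage_premium = 0.5 * (1 - automation_level)          # co-op wage edge shrinks as firms automate
    coop_share += (1 - coop_share) * wage_premium * 0.2  # some workers defect to co-ops
    automation_level = min(1.0, automation_level + coop_share * 0.15)  # firms replace lost labor
    automation_cost *= 0.85                              # adoption drives the cost down
    print(f"{year}: co-op share {coop_share:.0%}, automation {automation_level:.0%}, "
          f"relative cost {automation_cost:.2f}")
```

Tweak the numbers however you like; the qualitative story being bet on stays the same: cooperative membership, automation, and falling automation costs all feed one another.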

A strong worker federation means an empowered working class.

Cyberkinesis

Cyberkinesis: The manipulation of digital and robotic apparatuses through one’s mind. Also known as technokinesis, technopathy, and psychotronics.

Which one is technically correct? I don’t believe it matters, though I’ve heard ‘technopathy’ used more often to describe a superpower in which one controls machines with one’s unaided mind, while ‘cyberkinesis’ describes augmentation that allows a person to do the same. Thus, I tend toward ‘cyberkinesis.’

Cyberkinesis is a fun little thing; I remember a cyberkinetic toy I played with back in 2010.

 

There are also other cyberkinetic products one can purchase right now, such as Emotiv’s Insight.

So it’s not science fiction, but the applications are still rather limited. Fast forward a decade, when algorithms will be much more capable of deciphering our brain waves, and you’ll begin to notice that our phones have become ‘telepathy machines.’
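If “deciphering our brain waves” sounds mystical, it mostly isn’t. Consumer headsets like the Insight largely boil down to estimating how much power sits in each EEG frequency band and mapping that onto a command. Here’s a minimal sketch of that idea; the synthetic signal, the threshold, and the “command” are all invented for illustration and have nothing to do with Emotiv’s actual software:

```python
import numpy as np

FS = 128  # sample rate in Hz, typical for consumer EEG headsets

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= low) & (freqs < high)
    return spectrum[band].mean()

# Two seconds of synthetic "EEG": a strong 10 Hz alpha rhythm buried in noise.
t = np.arange(0, 2, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))

alpha = band_power(eeg, FS, 8, 13)   # relaxed, eyes-closed rhythm
beta = band_power(eeg, FS, 13, 30)   # active-concentration rhythm

# Hypothetical mapping from a band-power ratio to a "mental command."
print("command:", "relax-toggle" if alpha > 2 * beta else "no-op")
```

The hard part, and the part that better algorithms and more computing power will actually change, is telling apart dozens of subtle mental states from signals far noisier than this toy example.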