Paratechnology

Unexplained Mysteries of the Future

I am only a half-believer in the paranormal, so taking mysteries of the unexplained at face value smacks of the ridiculous. Yet I can never shake those doubts; they cling to my mind like burrs.
The mammalian brain fears and seeks the unknown. That’s all I want— to know. The chance any one particular paranormal or supernatural happening is real is infinitesimal. Cryptids are usually another story, save for the most outlandish, but what likelihood is there that evolution wrought a lizard man or a moth man? Or that certain dolls are cursed?
However, I won’t cast off these reports completely until I can know for sure that they either are or are not true, as unlikely as they may be.

So here are a few words on the subject of paratechnology.


Self-Driving Cars Have Ruined The Creepiness of Self-Driving Cars

Imagine it’s a cool summer evening in 1969. You’re hanging with your mates out in the woods, minding your own business. All of a sudden, as you pass near a road, you see an Impala roll on by, creaking to a stop right as it closes in on your feet. Everything about the scene seems normal— until you realize that’s your Impala. You just saw your own car drive up to you. But that’s not what stops your heart. When you walk up to the window to see which fool tried to scare you, horror grips you as you realize the car is driving itself.

Needless to say, when your grandson finds the burned-out shell of the car 50 years later, he doesn’t believe you when you doggedly claim that you saw the car acting on its own.

Except he would believe you if your story happened in the present day.

Phantom vehicles are a special kind of strange, precisely because you’d never expect a car to be a ghost. After all, aren’t ghosts the souls of the deceased?

(ADD moment: this is easy to rectify if you’re a Shintoist)

Nevertheless, throughout history, there have been reports of vehicles that move on their own, with no apparent driver or means of starting. The nature of these reports is always suspect— extraordinary claims require extraordinary evidence— but there’s undeniably something creepy about the idea of a self-driving vehicle.

Unless, of course, you’re talking about self-driving vehicles. You know, the robotic kind. Today, walking out in the woods and seeing your car drive up to you is still a creepy sight to behold, but as time passes, it grows less ‘creepy’ and more ‘awesome’ as we imbue artificial intelligence into our vehicles.

This does raise a good question— what would happen if an autonomous car became haunted?

O.o


The Truth About Haunted Smarthouses

For thousands of years, people have spoken of seeing spectres— ghosts, phantoms, spirits, whathaveyou. Hauntings would occur at any time of day, but everyone knows of the primal fear of things that go bump in the night. It’s a leftover of the days when proto-humans were always at risk of being ambushed by hungry nocturnal predators, one that now best serves the entertainment industry.

Ghosts are scary because they represent a threat we cannot actively resist. A lion can kill you, but at least you can physically fight back. Ghosts are ethereal, and their abilities have never been properly understood, because we’ve never been fully sure they’re real at all. Science tells us they’re all in our heads, but science also tells us that everything is all in our heads. Remember: ghosts are ethereal, meaning they cannot actually be caught, and what cannot be caught cannot be studied— anything that cannot be physically examined might as well not exist as far as science is concerned. Because ghosts are so fleeting, we never even get a chance to study them, instead leaving the work to pseudoscientific “ghost hunters”. By the time anyone has even noticed a ghost, it’s already vanished.

Even today, in the era of ubiquitous cameras and surveillance, there’s been no definitive proof of ghosts. No spectral analysis, no tangible evidence, nothing. Why can’t we just set up a laboratory in the world’s most haunted house and be done with it? We’ve tried, but the nature of ghosts (according to those who believe) means that even actively watching out for a ghost doesn’t mean you’ll actually find one, nor will you capture usable data. Our technology is too limited and ghosts are too ghostly.

So what if we put the burden onto AI?

Imagine converting a known haunted house into a smarthouse, where sensors exist everywhere and a central computer always watches. No ghost should escape its notice, no matter how fleeting.

Imagine converting damn near every house into a smarthouse. If paranormal happenings continue evading smarthouse AIs, that casts near-irrefutable doubt on the larger ghost phenomenon. It would mean ghosts cannot be meaningfully measured at all.
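
To make the “always watching” part concrete, here is a minimal sketch of what the central computer’s job might boil down to, in Python. Everything in it is a stand-in I made up for illustration: the window size, the threshold, the sensor stream. All it really does is flag readings that sit far outside a sensor’s recent baseline and ask for the surrounding footage to be archived, which is about the most a smarthouse could honestly promise to do.

    import statistics
    from collections import deque

    WINDOW = 500  # recent readings kept per sensor (arbitrary choice)
    history = {}  # sensor name -> deque of recent readings

    def is_anomalous(sensor, value, sigmas=4.0):
        """Flag a reading that sits far outside this sensor's recent baseline."""
        window = history.setdefault(sensor, deque(maxlen=WINDOW))
        flagged = False
        if len(window) > 30:  # wait for a baseline before judging anything
            mean = statistics.fmean(window)
            spread = statistics.pstdev(window)
            flagged = abs(value - mean) > sigmas * spread  # a dead-flat baseline flags any change
        window.append(value)
        return flagged

    def watch(sensor_stream):
        """sensor_stream yields (timestamp, sensor, value) tuples from the house."""
        for ts, sensor, value in sensor_stream:
            if is_anomalous(sensor, value):
                print(f"[{ts}] anomaly on {sensor}: {value} (archive the surrounding footage)")

If something is physical enough to trip a sensor, a loop like this never blinks and never gets spooked; and if, across millions of houses, nothing ever trips it, that silence is data too.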

Once you bring in transhumanism, the ghost question should settle itself. A posthuman encountering a spectre at all would be proof in and of itself— and if it never happens, if ghosts remain the domain of fearful, fleshy biological humans, then we will know once and for all that the larger phenomenon truly is all in our heads.


Bigfoot Can Run, But He Can’t Hide Forever

For the same reasons listed above, cryptids will no longer be able to hide. There’s little tangible evidence suggesting Bigfoot is real, but if there’s any benefit of the doubt we can give, it’s that there’s been very little real effort to find him. If we were serious about finding Bigfoot, we wouldn’t create ‘Bigfoot whistles’ or dedicate hour-long, two hundred episode reality shows to searching for scant evidence. We would hook up the Pacific Northwest with cameras and watch them all.

Except we can’t. INGSOC could never have watched you at all times so long as the Party lacked artificial intelligence to do the grunt-work for it. That’s as true in reality as it is in fiction— if you have a million cameras and only a hundred people watching them, you’ll never catch everything that goes on. You’d need to be able to watch every feed at every moment of every day, without fail. Otherwise, video camera #429,133 may capture a very clear image of Bigfoot, but you’d never know.

AI could meet the challenge. And if you need any additional help, call in the robots. Whether you go for drones, microdrones, or ground-traversing models, they will happily and thanklessly search for your spooky creatures of the night.
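
The grunt-work half of that is already fairly ordinary engineering. Here is a rough sketch of the idea, assuming you wrap whatever detection model you trust in a detect(frame) function of your own; that function, the “large_biped” label, and the confidence cutoff are all hypothetical placeholders, not anything off a shelf.

    import cv2  # OpenCV, used here only to pull frames from a video source

    def scan_feed(source, detect, every_n=30):
        """Run a detector over every Nth frame of one camera feed.

        `detect` takes a frame and returns a list of (label, confidence) pairs;
        which model backs it is up to you.
        """
        cap = cv2.VideoCapture(source)
        frame_idx = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % every_n == 0:
                for label, confidence in detect(frame):
                    if label == "large_biped" and confidence > 0.9:
                        print(f"camera {source}, frame {frame_idx}: possible sighting ({confidence:.2f})")
            frame_idx += 1
        cap.release()

Run something like that over every feed, around the clock, and camera #429,133 stops being a problem: the footage still gets looked at, just not by a person.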

If, in the year 2077, when we have legions of super-ASIMOs and drones haunting the world’s forests, we still have no definitive proof of any of our more outlandish cryptids, we’ll know for sure that they truly were all stories.

What Is Futuristic Realism?

Definitive Explanations, Breakdowns, and Examples of Futuristic Realism, Sci-Fi Realism, Slice of Tomorrow, and Science Non-Fiction

I get asked a lot, “Yuli, what is futuristic realism?”

And that’s a bad thing. I’ve explained what futuristic realism is around five hundred times now, and the fact people still ask me what it means suggests that I, as usual, have failed to give the world a concise definition. That makes sense— I am a legendary rambler.

So I’m here to finally put to bed these questions.

Note: there will be a short version where I get right to the point, and afterwards, there’ll be a long version where I allow myself to ramble and go in depth on what I mean.


Short Version

Sci-Fi Realism is a visual style that attempts to fool the viewer into thinking fantastic technologies are actually real and well-used, giving such tech a sort of photographic authenticity. 

Futuristic Realism is a subgenre of both science fiction and literary fiction— it takes science fiction’s subject matter and uses the structure of literary and realistic fiction to tell a story that feels familiar and contemporary.

Slice of Tomorrow is the fusion of science fiction and slice of life fiction.

Science Non-Fiction describes fantastic technologies, happenings, stories, and narratives that have already occurred and make a person say “I’m living in the future!”


Long Version

Let’s start with slice of tomorrow. Slice of tomorrow fiction is what you get when you take science fiction and mix it with slice of life. In order to understand what that means, you first need to know what “slice of life” is.

Slice of life is mundane realism depicting everyday experiences in art and entertainment.

There’s no grand plot.

There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets. That’s not to say high-intensity events can’t happen— they just aren’t the focus of the story. Slice of life does not necessarily have to be “literary”— it doesn’t have to focus on incredibly deep themes of human relationships. It doesn’t necessarily have to be about anything at all, other than showing one’s daily life.

Slice of tomorrow is mundane realism depicting everyday experiences, with the twist being that the events take place in an otherwise “sci-fi” or “cyberpunk” environment. The intention is in the name of the genre— “slice of tomorrow.” Show the world how humanity would react to futuristic technologies, tomorrow’s social mores, and perhaps even different conditions and modes of existence. However, slice of tomorrow does not have to be relatable, nor does one have to weave a deeper narrative into a story for it to qualify as “slice of tomorrow.”

 

Adding depth and length to mundanity brings you futuristic realism. Futuristic realism carries with it more of a ‘literary’ swagger. And in order to understand what that means, you must define literary and realistic fiction.

Literary fiction comprises fictional works that hold literary merit; that is, they involve social commentary, or political criticism, or focus on the human condition. Literary fiction is deliberately written in dialogue with existing works, created with the above aims in mind and is focused more on themes than on plot.


Realistic fiction is fiction that uses imagined characters in situations that either actually happened in real life or are very likely to happen. It further extends to characters reacting in realistic ways to real-life type situations. The definition is sometimes combined with contemporary realism, which shows realistic characters dealing with realistic social issues such as divorce, drug abuse, teenage pregnancy and more.

Put the two together and you get a style of realism depicting real people in realistic situations, often as a means of exploring the human condition. Here, simply showing a different mode of existence isn’t enough— you have to thoroughly explore it. There is a humongous opportunity to be had in science fiction when it comes to exploring foreign and alien modes of existence, and many sci-fi authors have exploited that opportunity. One fine example of futuristic realism would have to be the Sprawl Trilogy by William Gibson— in fact, the literary work that gave birth to cyberpunk.

Indeed, futuristic realism and cyberpunk’s origins overlap heavily, and there’s no better way to illustrate this than by telling you how cyberpunk began in the first place, as well as describing what it’s become.

Cyberpunk was born when Gibson felt dissatisfied with the increasingly stagnant Utopian sci-fi, such as Star Trek. Gene Roddenberry’s Star Trek gave us a nearly-utopian world where advanced technology solved all of humanity’s problems and men lived in egalitarian harmony and prosperity; the only sources of conflict came from either other species or the occasional disagreement.
Gibson looked at the world around him and concluded that, even if we had starships and communicators, there would still be drug dealers and prostitutes. If anything, the acceleration of technology would most likely only greatly benefit a rich few, leaving the rest to get by on whatever scraps were left over. This wasn’t a completely baseless extrapolation, precisely because that’s what had been happening up to that point— the developed nations, and in particular the rich, were able to enjoy high-tech consumer goods such as cable television, personal computers, video games, and credit cards, while the poor in many parts of the planet lived in nations that might as well have never experienced the Industrial Revolution. And even in developed nations, the poor were getting shafted by the system at large, especially as corporations grew in power and influence and enacted their will upon the governments of the world. Thus Neuromancer— and subsequently cyberpunk and futuristic realism— was born.

Cyberpunk and futuristic realism quickly branched off into different paths, however, as cyberpunk began becoming “genre” fiction itself— nowadays, in an almost ironic fashion considering how it started, when one thinks of ‘cyberpunk’, they think of ‘aggressively cynical dystopian action science fiction’, with the actual ‘punk’ aspect added in as an afterthought.

 

Bringing in elves and orcs sextuples the action! Source: Shadowrun

 

To truly get a feel for futuristic realism, try to follow this one: it’s the genre Ernest Hemingway or Cormac McCarthy would write if they lived in the 2050s.

I have long said that the easiest way to achieve futuristic realism would be to take Sarah, Plain and Tall and add humanoid robots, drones, and smartglasses into the mix. And why? Because there is a very intense disconnect. I even said as much in a previous article:

That’s why I say it’s easiest to pull off futuristic realism with a rustic or suburban setting— it’s already much closer to individual people doing their own thing, without being able to fall back on the glittering neon cyberscapes of a city or the cold interiors of a space station to show off how sci-fi/cyberpunk it is. It makes the writer have to actually work. Also, there’s a much larger clash. A glittering neon cyberscape of a megalopolis is already very sci-fi (and realistic); adding sexbot prostitutes and a cyber-augmented population fitted with smartglasses doesn’t really add to what already exists. Add sexbot prostitutes and cyber-augments with smartglasses to Smalltown, USA, however, and you have a jarring disconnect that needs to be rectified or at least expanded upon. That doesn’t mean you can’t have a futuristic realist story in a cyberpunk city, or in space, etc. It’s just much easier to tell one in Smalltown, USA because of the very nature of rural and suburban communities. They’re synonymous with tradition and conformity, with nostalgic older years and pleasantness, with a certain quietness you can’t find in a city.

Last but not least, there is sci-fi realism. This spawned futuristic realism and slice of tomorrow, and once upon a time, it was the catch-all term for the style. However, once I decoupled literary content from visual aesthetics, sci-fi realism became its own thing, and the best way to describe sci-fi realism would be to understand “visual photo-authenticity.”

This is my own term (because I just love making up jargon), and it refers to a visual style that attempts to recreate the feel of a photograph. This doesn’t just mean “ultra-realistic graphics”— it can be 8-bit as long as it looks like something you snapped with your smartphone camera. Of course, ultra-realism does greatly help.

Sci-fi realism is perhaps simultaneously the easiest and hardest to understand because of the nature of photography. After all, don’t many photographs attempt to capture as much artistic merit as paintings and renders? What qualifies as “photographic?”

I won’t lie: it is, indeed, a subjective matter. However, there is one basic rule of thumb I’ll throw out there.

Sci-fi realism follows the rules of mundanity, even if it’s capturing something abnormal. There are few intentional poses and very little Romanticizing of subjects. It’s supposed to look as if you took a photograph in the future and brought it back to the past.

Source: Vitaly Bulgarov (and his dogs)

Most photographs are taken from ground or eye level, maybe even at bad angles and with poor lighting. Very few of them ever manage to capture wide-open scenes— it’s nearly impossible to get both a shady alleyway and towering skyscrapers in the background from a realistic perspective. There are very few vistas or wide-shots. 

As aforementioned, hyper-realism comes in handy when dealing with sci-fi realism, and wide shots can still be made to read as “realistic” from a sci-fi perspective.

Future Dubai, by Thomas Galad


And, also as aforementioned, it doesn’t necessarily have to be photorealistic as long as it carries a photographic quality.

“Burned” by Simon Stålenhag

It was watching movies like Real Steel, Chappie, District 9, and Star Wars: A New Hope that really got me interested in this “what if” style. Those movies possessed ‘visual authenticity.’ When I watched Real Steel, I was amazed by how seamlessly the CGI mixed with live action. Normally, CGI is blatantly obvious; it feels fake, it doesn’t look real. But Real Steel took a different route. It fused CGI with practical props, and it was amazing to see. For the first time, I felt like I was watching a movie sent back from the future rather than a science fiction film. Other films came close, but Real Steel was where I first really noticed it.

 


The Bait And Switch

All of this refers to fiction. Slice of tomorrow is about slice of life science fiction. Futuristic realism is about literary science fiction. Sci-fi realism is about photographic science fiction.
However, with the obvious exception of slice of tomorrow, these can also fit non-fiction.

I mentioned quite a bit ago the concept of “science non-fiction.” This is a very new genre that has only become possible in the most recent years, and can best be described as “science fiction meets creative non-fiction.”

In recent years, many facets of science fiction have crossed over into reality. Things are changing faster than ever before, and what’s contemporary this decade would be considered science fiction last decade. As time goes on, this will only grow even more extreme, until each next year could be considered “sci-fi” compared to the previous one. At some point, people’s ability to take for granted this rapidly accelerating rate of technological advancement will wane, and there will be medically diagnosed cases of acute future shock. When we reach that point, even things that may have been on the market for years or decades will still be seen as “science fiction.”

We are already seeing a rudimentary form of this in the form of smartphones— smartphones have been a staple of mass consumer culture for well over a decade. Despite this, people still experience future shock when they take time to think about these immensely powerful gadgets. As smartphones grew more powerful and ubiquitous, the effect did not fade but in fact became more intense. This inability to accept the existence of a new technology is virtually unprecedented— we grew used to airplanes, atomic energy, space exploration, personal computers, and the internet faster than we have smartphones. Virtual reality is poised to push this future shock into an even more precarious level, as now we’re beginning to actually infringe upon concepts and technologies with which science fiction has been teasing us for nearly a century.

Space exploration had a bit of an Antiquity moment in the 1960s— we proved we could do it but found no practical way to expand on our accomplishments, much like the ancient Greeks working with analog computers and steam engines— and the actual space revolution remains beyond us, lying at an undetermined point in the future. To prove this point, we still see things like space stations and landing on other celestial bodies as being “science fiction.” This raises a conundrum— a story where a man lands on the moon qualifies as “science fiction”, but we already took that leap roughly 50 years ago. Does that mean Neil Armstrong and Buzz Aldrin actually experienced science fiction? It can’t be, by the very definition of the word ‘fiction.’

That’s where this new term— science non-fiction— comes in. When real life crosses over into territories usually only seen in science fiction, you get science non-fiction.

Science fiction has many tropes, and even as we invent and commercialize the technologies behind these tropes, they don’t leave science fiction. Space exploration, artificial intelligence, hyper-information technology, advanced robotics, genetic engineering, virtual and augmented reality, human enhancement, experimental material science, unorthodox transportation— these are staples of science fiction, and merely making them real doesn’t make them any less sci-fi. From a technical perspective, virtual reality and smartphones are no longer sci-fi. However, from a cultural perspective, they’ll never be able to escape the label.

Science non-fiction is extremely subjective precisely because it’s based on the cultural definition of sci-fi. Some people may think smartphones, smartwatches, and VR are sci-fi, but others might have already grown too used to them to see them as anything other than more tech gadgets. Even when we have people and synths on Mars, there will be those who say that missions to Mars no longer qualify as science fiction.

And it’s this disconnect that helps make science non-fiction work.

There’s that word again— disconnect.

Reading about events in real life that seem ripped from sci-fi is one thing. Actually seeing them is another altogether.

Photograph of Pepper, 2016

We’re back to sci-fi realism. I am reusing the term “science non-fiction”, but here I’m discussing its visual form. I admit, sometimes I call it ‘sci-fi realism’, but I’ve begun moving away from that (to the detriment of the Sci-Fi Realism subreddit and to the benefit of the Futuristic Realism subreddit). As mentioned, this is what science non-fiction looks like— pictures, gifs, videos, and movies of real events that happen to feature science non-fiction technologies.

Science non-fiction is not necessarily slice of life or mundane, though it can be (and often is, due to the nature of everyday life). In this case, science non-fiction can actually be everything slice of tomorrow and futuristic realism isn’t— including things we’d consider cyberpunk, military sci-fi, and space opera. The only prerequisite is that the events have to be real.

For example: glittery cyberpunk-esque cityscapes already exist. There isn’t even a shortage of them— off the top of my head, there’s Dubai, Moscow, Hong Kong, Shanghai, Guangzhou, Tokyo, Singapore, Seoul, and Bangkok. Posting pictures of them can net you thousands of upvotes on /r/Cyberpunk. The vistas may lack flying cars, but who knows how much longer that’ll be the case?

That moment when Dubai starts looking like Coruscant

If I bought a Pepper and brought it into my home, that would also qualify as science non-fiction. Domestic artificially intelligent utility robots are a major staple of science fiction, and them simply existing doesn’t change the fact sci-fi literature, films, and video games will continue utilizing them.

This is an actual Japanese showroom in 2016

Likewise, if I donned a TALOS exosuit fitted with a BCI-powered augmented reality visor, and picked up a 25 KW pulse-laser Gauss rifle, and then got flown into Syria where I could also pilot semi-autonomous drones and command killer Atlas robots, that too would be science non-fiction.

The TALOS suit, one of the coolest things I’ve ever seen

Funny thing is, both of these examples are already possible. Not fully— ASIMO has yet to see a commercial release, Atlas has not yet finished its transformation into a Terminator, and no one has yet constructed a handheld laser gun stronger than 500 watts. But none of it is beyond us.

And that’s the gist behind all of this. Science non-fiction is based on what we have done.

“So why did you create all this uber-pretentious sci-fi tripe?”

1- Because I wanted to.

2- Because I noticed a delightful trend occurring over and over again online. Even outside of sci-fi forums, I was repeatedly reading stories and anecdotes of people being amazed at how technologically advanced our present society really is— but they then lamented that they didn’t “feel” like they were really living in a sci-fi story.

I am a fantastic example of that myself. I live out in the sticks— I even counted the seconds: if you drive at sixty miles per hour for one minute and twenty-eight seconds, you will come across literally bucolic farmland straight out of a Hallmark Channel movie. The tallest building in my town (and for many miles around it) is the local theatre, which comes in at seven stories. It’s the kind of town where, if you drive down any particular road too late at night, you’ll get abducted by aliens and/or the CIA. I live behind some trees on the very outskirts of this town. And despite that, I still own a drone, several smartphones, a VR headset, and a dead Roomba. If I saved up, I could even potentially buy an artificially intelligent social droid— Aldebaran’s Pepper. It feels so mundane, but my life truly is science non-fiction. A while ago, I lamented that I wasn’t living in one of the aforementioned proto-cyberpunk cities precisely because I thought I had too much technology to be living in the country.

I’ve since decided to bring science fiction to me, and that requires quite a few changes. I’m no revolutionary street urchin. I have no coding skills whatsoever. I can count on a broken hand how many times in my life I’ve held a gun. There’s nothing thrilling about me, my past, or my future. And yet I still feel like I live in a world that’s fast becoming sci-fi. So I needed to find a way to express that. A way to tell a story I— in my unfit, very much kung-fu-challenged world— could relate with. I’m no hero, nor am I an anti-hero, nor am I a villain. I’m basically an NPC, a background character. Yet I still feel I have stories to tell.


Futuristic Realism and Transrealism

So what about transrealism? Isn’t it futuristic realism? In fact, it is. However, it’s a situation where “X is Y, but Y isn’t always X.” Transrealism is futuristic realism, but not all futuristic realism is transrealism. And the best way to understand this is by looking at the definition of transrealism.

Transrealism is a literary mode that mixes the techniques of incorporating fantastic elements used in science fiction with the techniques of describing immediate perceptions from naturalistic realism. While combining the strengths of the two approaches, it is largely a reaction to their perceived weaknesses. Transrealism addresses the escapism and disconnect with reality of science fiction by providing for superior characterization through autobiographical features and simulation of the author’s acquaintances. It addresses the tiredness and boundaries of realism by using fantastic elements to create new metaphors for psychological change and to incorporate the author’s perception of a higher reality in which life is embedded. One possible source for this higher reality is the increasingly strange models of the universe put forward in theoretical astrophysics.


Some final words on the subject, starting with Kovacs from the Cyberpunk forums:

Well… the only real way that sci-fi realism works – for me – is if the science fiction is invisible and ubiquitous.
Today, I could write a fully non-fiction or ‘legit literature’ fiction (e.g. non-genre) story using tech that, a decade or two ago, would have been cyberpunk. For example: 20 years ago, if you wrote a murder mystery about a detective who could track a victim’s every thought and action on the day they were murdered, all within 5 minutes or so, that would be sci-fi or even ‘magic’. Today, you just access the victim’s phone and scroll through their various social media profiles. Same with having a non-static-y video conference with someone halfway around the world; it used to be Star Trek, now it’s Skype. So how would this prog rock of sci-fi work? I suppose you tell a tale where the tech… doesn’t matter. It’s all about human relationships.
Ooooh I bet you think that’s boring, don’t you? Well, maybe. But we can cheat by playing with the definition of ‘human’.

I’m thinking about the movie Her. Artificial intelligence is available and there’s no paradigm shift. A romantic relationship with an AI is seen as odd… but not unimaginable, or perverse. There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets [OP: do these sentences look familiar?]. Robot and Frank is another good example; it’s a story where the robot isn’t exactly needed, but it makes the story make more sense than if it were, say, a college student, Scent of a Woman style.
(huh… Scent of a Robot, anyone? Al Pacino piloting ASIMO?)
So I guess what I’m leading to is: take the action-adventure component out of sci-fi. Take the dystopia out of cyberpunk. Take out the power fantasy elements. Take out the body horror. What are you left with? Something a little less juvenile? In order to develop this you’d have to have a really good dramatic story as a basis and sneak in the sci-fi elements. You can’t, by definition, rest on them.
Which is tough for me to approach, because I really like my space katanas.

Finally, what is futuristic realism not? This time it’s “X can be Y, but Y isn’t X.” Futuristic realism can use these things, but these things aren’t futuristic realism by themselves.

  • Hyper-realistic science fiction. As I said, visual authenticity started futuristic realism, but that’s not what it is anymore. Nowadays, that’s just straight ‘sci-fi realism.’
  • Hard science fiction. Futuristic realism can be hard or soft or anything in between; it’s the story that matters. Hell, you can write fantastic realism if you want to.
  • Military science fiction. Some people kept thinking sci-fi realism meant ‘hard military sci-fi’, which is why I rebranded the style ‘futuristic realism’. Military sci-fi can be futuristic realism, but a story simply being military sci-fi isn’t enough.
  • Rustic science fiction. After the whole spiel on /r/SciFiRealism when a whole bunch of people were angry that I kept posting images of robots in homes and hover cars instead of really gritty battle scenes and dystopian fiction, the pendulum swung way too far in the other direction. I have said that ‘the best way to write futuristic realism is to take Sarah, Plain and Tall and add robots’, but I didn’t say ‘the only way to write futuristic realism is… yadayada.’
  • Dark ‘n gritty science fiction. As aforementioned, some thought ‘sci-fi realism’ meant ‘dark and gritty science fiction’. And I won’t lie, it is easy for a realistic story to be dark and even gritty and edgy. But see above, I had to hit the reset button. 
  • Actionless science fiction. You’d think that, after all this bureaucratic bullshit, I’m trying to force people to write happy science fiction about neighborhood kids with robots. Not at all. In fact, you can have a hyper-realistic, dark and gritty hard military science fiction story that’s pure, raw futuristic realism. It depends on what the story’s about. A story about a space marine genociding Covenant scum, fighting to destroy an ancient superweapon, can indeed be futuristic realism. It just depends on what part of the story you focus on and how you portray it. Novelizing Halo isn’t how you do it. In fact, there’s a futuristic realist story I desperately want to read— a space-age War and Peace. Something of that caliber. If you want to attempt that, then I think the first thing you’d have to do before writing is decide whether you can pull it off without turning it into a space opera. Take myself for example: fuck that noise. I’m not even going to try it. I know it would fast become an emo Gears of War if I tried to write it. It’s not supposed to be Call of Duty in Space, it’s a space-age War and Peace. There are twenty trillion ways you can fuck that up.

Try to think back to the last major sci-fi film, video game, book, or short that didn’t have one of the following—

  • Someone brandishing a weapon
  • A chase sequence
  • Fight sequence
  • Military tech wank
  • Paramilitary tech wank
  • Wide shots over either a city, alien planet, or space vehicle
  • Over-exposed mechanics or cybernetics
  • Romance between lead character and designated lover, usually as a result of the two working together to overcome the Big Bad and realizing they have feelings for each other
  • High-octane stakes, where the life of the protagonist or someone the protagonist cares about is at risk
  • Death of the antagonist, someone close to the protagonist, or the protagonist him/herself
  • Actions causing death in the first place
  • Bands of mooks for someone to mow down
  • Stakes where one side (e.g. space navy; evil megacorporation, warlord, etc.) has to suffer a total, epic defeat in order for the plot to be resolved, usually in the form of a climactic and tense battle

 

I’m not trying to be a creativity fascist; I’m merely attempting to define what futuristic realism and slice of tomorrow fiction aren’t. Hell, I’ve even said that you can have a whole bunch of these things and still come off as futuristic realism. It’s all about execution and perspective.

I suppose, what I’m trying to get at is that if you want to write futuristic realism and slice of tomorrow fiction, you have to ask yourself a very basic question: “Can the central plot be resolved with a gun battle without any major consequences?” Replace ‘gun’ with any weapon of your choice— space katana, quark bomb, logic bomb, giant mecha— the point remains the same. If the answer is no, you may have futuristic realism.
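
Tongue firmly in cheek, the rule of thumb compresses down to a one-liner; the attribute names here are mine, not any real taxonomy:

    def might_be_futuristic_realism(plot) -> bool:
        """The litmus test above, as a heuristic.

        `plot` is anything with two (made-up) boolean attributes:
          resolvable_by_violence    - could a gun battle / space katana / giant mecha end it?
          violence_has_consequences - would that resolution carry major consequences?
        """
        return (not plot.resolvable_by_violence) or plot.violence_has_consequences

If it comes back True, you may have futuristic realism on your hands; if it comes back False, you probably have an action plot wearing realism’s clothes.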

 

You can resolve just about any plot with a good shot from a Lawgiver; the key phrase is “without any major consequences”. Filling a flatmate’s skull with a magnetically-pressurized ionic plasma bolt because he’s not happy over how many sloppy sounds you make with your “sexbot sexpot” is going to have worlds-different consequences from gunning down Locust filth in an interstellar war— unless, of course, you go deep into the psychological profile of someone who’s spent their life killing aliens and has never before contemplated why he’s doing this and suddenly gains a keen interest in understanding the other side, particularly those not directly participating in the war.

 

It’s easy to say your story’s about the human condition more than it is about the science and technology, and I suppose that would make it more highbrow than a lot of other sci-fi. But futuristic realism/slice of tomorrow doesn’t have to be highbrow either. 

 

 

So let me use a story instead of just similes, analogies, and overbloated rules of thumb.

 

 

You have three characters: Phil, Daria, and Edward. Phil and Daria live in New York City in 2189. A war for independence has just broken out between Earth forces and Martian colonists. A Martian separatist has masterminded a terrorist attack in New York (what else is new?). What neither Daria nor Phil knows is that their Martian penpal, Edward, is also the terrorist who masterminded the attack. This sounds like a traditional sci-fi plotline in the making.

How do you make it into a traditional military sci-fi story? Simple— Phil and Daria sign up for military service, get their own mech suits, and start rolling across Cydonia, where they fight communist Martian droids at the now terraformed, statue-like Face on Mars. The climax involves them facing down Edward and realizing their friendship has been put to the ultimate test as a result of a war. That’s a story that’s definitely character driven and engaging— but it’s not necessarily “slice of tomorrow” fiction.

How do you turn it into a slice of tomorrow story? You don’t have to change a damn thing, except focus on where the story’s set. For example, Phil and Daria, in the short period of time after the attack and before they join the military, may be utterly shellshocked by the terrorist attack. They’ve seen dead and injured people, and a major landmark has been destroyed. They just want a moment to be thankful for the fact that they’re alive. They may want to contact Edward to get his opinion on events, considering he’s a Martian and Martians are implicated in the attack. They’re just keeping up with the news to find out more about what just happened, and they grow ever more angry as time goes on. The climax could be them actually joining the military, or maybe something else entirely. Something not involved with the military at all. The terrorist attack was just a background event to their daily lives— a pretty big and impactful event, but a background event nonetheless. The real drama lies elsewhere. It’s drama you can’t just shoot at to make it go away, either. Thus, the story’s ultimately resolved well before the first mech suit ever gets to fire a shot at separatists.

 

Even writing that mini-blurb proved my point, because I was going to write something after “the real drama lies elsewhere”. Something more specific than “it’s a drama you can’t just shoot at to make it go away, either.” But as I typed it out, I could actually hear the groans of boredom in my head— “if this were an actual sci-fi story,” I thought, “having that plotline would just evoke nothing but frustration.” And what was that plotline?

Phil or Daria calling their parents. That’s it! The actual conversation would follow recent events, yes, but that’s the climax. When I wrote that out, I thought “That’s the dumbest thing I’ve ever heard” because it sounded a bit like a waste. I have this nice, big universe filled with juicy potential sci-fi action— I even have a fantastic trigger that present-day readers can relate to in the form of a traumatic terrorist attack— and I spent it by having one of the lead characters call Mommy to wish her a tearful Merry Christmas?

That doesn’t sound sci-fi at all.

 

And that’s the point! Because even though it doesn’t sound like sci-fi, it still is sci-fi.


 

TL;DR:
Sci-Fi Realism: Candid, prosaic, and/or photographic sci-fi
Futuristic Realism: Science fiction as told by F. Scott Fitzgerald
Slice of Tomorrow: Science fiction as told by the Hallmark Channel.
Science Non-Fiction: Neil Armstrong’s autobiography

The Coming Madness

I dedicate this post to the late Alvin Toffler, who helped to popularize the phrase “future shock”. By the end of this post, you will either despise or adore the phrase.

What is “future shock”? In simple terms, it’s what happens to a person when the rate of sci-tech development outstrips their mental ability to handle it. Mr. Toffler defined it as “too much change in too little time”. Though that is a fine description, I feel it’s also a touch vague. If I were to change houses fifty times in a month, would the weariness and anxiety of all that change be considered future shock? Not at all. As a phrase, it’s always been used to describe our response to sci-tech, and that’s how Mr. Toffler meant for it to be regarded.

The damnedest irony of it is that Mr. Toffler popularized the phrase in 1970, a phrase that had already been floating around for roughly a decade by that point. Yet when I think of that time period, I think of an almost laughably primitive state of technology— with all its cathode ray-tube TVs, Kodak cameras, and rotary phones. This stems from my privilege of living in the Future™*, having reasonably fast internet in my pocket and all. With this in mind, it becomes amazing to think that people in the 1960s— the early ’60s at that— were experiencing future shock.

What happened in the early ’60s to provoke such sci-tech anxiety? We experienced the Cuban Missile Crisis, which threatened all of human civilization thanks to the existence of atomic weaponry…. The world’s first industrial robot, Unimate, was unveiled…. We created the first supercomputer, the CDC 6600 (whose top performance was 500 kiloFLOPS)— my IoT-capable washing machine is thousands of times more powerful than that thing…
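
For a rough sense of that scale: the washing-machine figure is my own ballpark, not a spec sheet, but a modern embedded chip of the kind that ends up in “smart” appliances can plausibly manage on the order of a gigaFLOPS.

    cdc_6600_flops = 500e3   # the 500 kiloFLOPS figure quoted above
    washer_soc_flops = 1e9   # assumption: roughly 1 GFLOPS for a modern embedded SoC

    print(f"{washer_soc_flops / cdc_6600_flops:,.0f}x")  # -> 2,000x: thousands of times faster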

So from my perspective, the 1960s were a hellish time to be a futurist. The idea that you could be overwhelmed by the technology of the day sounds comical, and yet as I mentioned, I’m coming from an era where washing machines are orders of magnitude more powerful than the era’s top supercomputer.

I want to go on about this, about how my imagining of the ’60s and ’70s paints them as the last ‘Luddite’ decades, with so little computing power spread across so few computers. But I won’t. That isn’t what this post is about.

No. I want to raise the point that if people were being shocked by sci-tech in the early ’60s, then we are going to be in for a hell of a time in the coming years.

It used to be that a person experienced such great changes in their life only every so often, even as late as the ’70s. Nowadays, we’ve turned it into a meme. The iPhone 30SE -1000 is the latest hot thing— now it’s the iPhone 95DR006. You blinked, and now we’re all using the Samsung Omniverse ∞. Last year, we were into 3D TVs. This year, the Oculus Rift is the hot new thing. But you’d better hurry, because cortical modems are on their way. And now you’re a ball of pure super sapient energy.

This shocking rate of change can prove to be too much for some people.

I know people who are still living in 2006. No, no, they haven’t invented time travel— they just don’t care enough about the latest gadgets to keep up with them. While futurists like myself are jizzing over augmented reality glasses and domestic robots, they’re not even aware that 3D TVs are a thing. Some may have only recently upgraded to Blu-Ray, and that’s despite not yet having noticed that Blockbuster and Hollywood Video are extinct.

Should you present them with something like, say, the Guinness Book of World Records: 2006, and flip to its technology section, they would be impressed by developments from over 10 years ago.

The last time I was impressed by ASIMO was in 2014, and that was only because I was a Born-Again Singularitarian.

Such technologically sheltered people have been protected by the relative inability of truly futuristic sci-tech to penetrate the mainstream. They’re not shocked by the iPhone, nor are they exceptionally interested in social media. These things came gradually, more as conveniences than shocks.

This won’t last. We’ve begun seeing virtual reality headsets infiltrate the shelves of warehouse stores across America, and Aldebaran’s social droid, Pepper, is about to go on sale across the world. Passenger drones and hyperloops are being teased by companies and governments, while augmented reality glasses are drowning in investor money. All of these things are coming all at once, and they’re merely the first wave of a massive sea change in the mainstream.

Not to mention the stupidly fast progress in the field of artificial intelligence. Billions are being poured into this industry, keeping it in a perpetual AI Spring. Once upon a time, the best AI could not so much as navigate a 2D maze. Now, they’re defeating humans in games we’ve dominated for millennia.

The biggest limiting factor for utility droids has been the ability to navigate 3D space autonomously. This is why ASIMO rediscovered gravity back in 2008 while attempting to walk up stairs, despite all the progress the robot had made in its previous 20 years. Now that we have the proper algorithms and sufficient computing power, this isn’t a problem anymore— all we have left to do is fuse the likes of Google DeepMind with a robot like Boston Dynamics’ Atlas or Honda’s ASIMO.

What happens when all of these things converge on the common man, something that most expect to occur sometime between 2018 and 2022?

Future shock. The world’s most sweeping and intense case of future shock is upon us. Not only that, but I feel that there will be a trigger for this grand cybernetic anxiety, and it involves the world’s biggest sporting event.

 

In 2020, Tokyo will hold the Olympics. As you may know, Japan is commonly considered the most technophilic country on Earth, and its love for the synthetic and digital is not hampered by Western notions of creepiness— hence why Japanese news sites never have to bring up the likes of Terminator even when discussing humanoid droids, whereas American ones will eagerly call a medieval suit of armor a ‘Terminator.’ Prime Minister Abe isn’t pulling any punches— his exact words were “I want a Robot Olympics.” Likewise, Tokyo 2020 is fast developing into a spectacle of advanced technology rather than a mere contest between the world’s finest athletes. It is where and when most average people will first see autonomous vehicles and personal robots in action.

If that isn’t enough, also consider that World Expo 2020 will be held in Dubai, perhaps the world’s most futurism-obsessed city, one that has based its whole tourism industry on being our closest replica of a cyberpunk cityscape.

Never mind the very sound of the year being futuristic— “2020” is usually seen as being far in the future, not three and a half years away. One of my favorite video games, 2000’s Perfect Dark for the Nintendo 64, was set in 2023. We’re now closer to the year it’s set in than to its release! As time continues its relentless forward march, as children grow into adults and adults into geriatrics, we will be reminded ever more of how quickly things are changing.

One day— one day soon— we will grow used to the sight of utility droids, passenger drones, and cyborgs, and then some of us will wonder “When did the Future™ arrive?”

Such changes will have come so quickly that there will be people in need of medical attention in order to cope. People who need to be institutionalized, or at the very least need a counselor. Some will desire the old world, a world before all of these changes. Some will experience an intense hiraeth for the old days, not understanding change was always happening even when they weren’t paying attention.

We’re transitioning from a Post-Industrial Society to a Singularity Society. This period we’re living in right now, codified by the existence of strong-Narrow AI and social media, can best be described as “Pre-Singularity Society”. All of these societal shifts bring with them psychological upheaval. However, never have these changes been so rapid. They’re coming so fast that we’re becoming blind to them, or perhaps we’re coping by entering a delusional state where we believe nothing has changed in years. Clearly life now and life in 1996 are not the same, but we’re willfully blind as to how and why, thus inflicting upon ourselves the illusion of stagnation.

It won’t last. It won’t last at all. As I’ve said, future shock will smack us all sometime within the next 5 years, and I will put money down that it will be in 2020. After 2020, the concept of people needing mental help to cope with rapid sci-tech change will become more and more common.


*The Future™ is a term describing the commonly accepted tropes of what a sci-fi future is supposed to look like; i.e. flying cars, robot butlers, AI, techno music, space colonies, starscrapers, etc.

Decentralized Democracy

Whenever you get into an argument with a socialist over what socialism means, they always claim that it means “worker ownership of the means of production.” Yet when the argument is over and everyone’s back where they were beforehand, the socialist will frequently claim that it’s the State— not the working class— that should possess the means of production.

I’ve noted this many times, and it’s been a bit hilarious to keep seeing socialists flip back and forth over what actually qualifies as socialism. That’s not to say that all socialists behave this way— there are some who never claim it’s anything other than State ownership of the means of production, and there are others who never claim it’s anything other than worker ownership of the means of production. Those in the latter category have largely been forgotten in popular discourse because of how socialism has come to mean “any form of Big Government.”

Naturally, I’m keen on wondering what’s so great about Big Government. It’s said that the government needs to regulate industry in order to prevent abuses, and without this regulation, the working class would be a downtrodden, abused, and impoverished underclass without any rights. Yet whenever I look to nations that have the most oppressed working classes, it’s always those with authoritarian or totalitarian governments that attempt to control every facet of the economy.

Is this a damning condemnation of government, though? Not at all. I can’t say I’d like more privatized prisons, after all. However, there is an aspect of this that I’m starting to realize may prove the socialists right— of course, it’s proving them right in a manner that works against them.

Businesses do need to minimize expenses and maximize profits. That’s just how it works. And often, that will mean the workers get the short end of the stick. Not always, but that’s how it usually happens, and once businesses manage to lower workers’ wages, they won’t want anything coming along to threaten that arrangement. Just look at what happened with Henry Ford— some of his rivals called him a socialist all because he paid his workers so well.

But that’s not what I’m getting at. No, my point is that socialists are very much right when they call socialist countries “State Capitalist.” And why? Because, as they say, the State takes the role of the capitalist enterprise. Most businesses are run by the State, after all.

However, I’m going deeper than that. It’s not just that the State runs most businesses— it’s also that the State itself has become a business. In order for it to be successful, it needs to be run like a business, like a corporation. However, whenever socialists overthrow the bourgeoisie and implement the Dictatorship of the Proletariat, they see themselves as revolutionary Marxist heroes, not bourgeois businessmen. That’s one reason why socialist nations always fail— those running it fail to realize they’ve essentially turned their parent nation into one giant corporation.

Imagine if wide-eyed idealists tried running Microsoft— if, rather than engaging in traditional business practices, they did everything according to some outdated pamphlet or religious document with no bearing on modern society. Would Microsoft last long? No, it wouldn’t. It would suffer yearly deficits that got worse and worse, with the workers going unpaid and the higher-ups reaping any and all money that could be made.

So it seems like you have a choice between one corporation or several corporations. Is there any way out of this matrix? To be blunt, not really. I’m not going to try to sugarcoat anything, because no matter what happens, we’re going to return to a somewhat similar set up in society. However, I do have one hypothesis.

It goes that there is a conflict between authoritarianism and democracy. Democracy is inherently more successful than authoritarianism, as all examples of authoritarianism will eventually collapse in on themselves due to the centralization of power. However, authoritarianism is a dominant gene, whereas democracy is a recessive gene.

Anyone who has ever gone through an 8th grade biology class knows these terms, “dominant” and “recessive.” A dominant allele can express itself even when inherited from only one parent; a recessive allele only shows itself when inherited from both parents.

This holds true for sociopolitics and economics. You can’t have authoritarianism in one and democracy in the other. That’s one reason why I feel anarcho-capitalism and Chavismo are doomed ideologies— one claims to respect sociopolitical democracy, and yet all but demands economic authoritarianism. The other claims to pursue economic democracy, but did so by abusing sociopolitical authoritarianism— and as we’re seeing in Venezuela, it’s led to disastrous results.

This is because you need both to be democracies if you want success. If one is authoritarian, soon enough both will be authoritarian. An authoritarian government will never keep its hands off the economy, and authoritarian business structures will always want to corrupt government. You need government in order to create a monopoly, and you need a business powerful enough to get government to create a monopoly in its favor. That’s why the argument over whether monopolies are the result of Big Business or Big Government is a pointless argument that’s very obviously divided along political lines— it takes two to tango. If there’s a monopoly, breaking up the business with bigger government won’t solve anything. Likewise, shrinking the government wouldn’t solve anything either. You’d need to do both if you wanted to prevent it from happening again. However, as long as both are authoritarian structures, it will happen again. It’s just something we’d have to deal with time and time again.

That’s one reason why I’ve been extolling the virtues of worker cooperatives, worker self management, decentralized business models, and fully automated businesses (technates).

We probably won’t see the rise of decentralized democracy anytime soon, not unless there were an aggressive move towards it. Digital technologies can aid this movement, as we’ve seen with the likes of the DAO, but it’s still too early to see which way we’re heading.

 

Yuli’s Law

I want to coin a new term: “Yuli’s Law“. Yuli’s Law states that any attempt at discussing futurology or emerging technologies will always result in someone expressing skepticism or pessimism based on past developments and failures.

For example: most discussions about the capabilities of robots in the future always fall back on the limited abilities of robots in the past. How often have you read a news story about artificial intelligence only to find in the comments someone saying something to the effect of “You still need to program every action, and that’s why AI will never happen”? Or perhaps when you try discussing technostism— when the world is fully or nearly-fully automated? What’s the standard rebuttal? “It didn’t happen with looms, spinning jennies, tractors, and computers, so it won’t happen now.”

Another example, one I’ll use to go in depth: flying cars. Flying cars have been a staple of futurist optimism for nearly a century now, and yet they’ve never materialized. We’ve had planes for over a century, and helicopters have been around for almost as long. Fuel isn’t a problem— there are a plethora of fuels to use, even if some aren’t as savory as others (i.e. nuclear-powered cars). We’ve even seen an electric plane circumnavigate the globe. So what is keeping a flying family sedan out of your driveway? You are. Not you in particular, but humans in general. We humans evolved to navigate a 2D plane— we move forwards, backwards, side to side, diagonally, and little else. We didn’t evolve to move up and down. The limits of our 3D movements involve standing up, sitting down, squatting, falling down, and the like— not flying through the heavens at lightning speed. And even then, we were never meant to traverse 2D planes at high speeds either, hence why car accidents claim the lives of over a million people each year globally.
Flying cars just aren’t going to happen beyond those novelties like the Avrocar unless you address the pilot problem. What’s an innovation in that field?

Ehang 184 cruising through Dubai.

 

Passenger drones. At CES 2016, a Chinese drone company, Ehang, unveiled the Ehang 184. It’s not the first passenger drone, but it’s arguably the most famous.

This technology is nascent, but it’s already proving itself— Ehang tested their passenger drone in Nevada during the summer of 2016, apparently with success. They may not be the first to usher in passenger drones to the masses, however, as Alphabet’s Larry Page is investing in the development of flying cars. Surely he’s well aware of the crippling limitations of flying cars (which turn them into roadable planes). After all, his company is leading the way in the field of autonomous vehicles. Adding a third dimension to the Google driverless car won’t be a problem, because it will be computers that have to deal with it.

And that brings me to my next example. Some people might still say that flying cars won’t ever appear because they haven’t appeared on the market yet, but once you introduce them to passenger drones, an interesting thing happens— they begin to ponder why we didn’t create passenger drones before now.

One thing that every futurist knows (or should know) is that many of our beloved and desired technologies will only be possible with greater computing power. This hasn’t always been the case— we didn’t need computers to create airplanes or automobiles or even atomic bombs and space-faring rockets. However, once we did create these things, we hit a plateau. It’s a plateau that’s been the bane of futurists, sci-fi fans, Singularitarians, transhumanists, and everyone with an interest in technology in general. All the low-hanging post-industrial technology fruits have been plucked, and in order to progress further (and to use video game terminology), we need to unlock the AI upgrade. We can still develop futuristic technologies without AI, but it will take exponentially longer timescales to do so, especially considering we aren’t funding sci-tech at anywhere near the levels some people think we are (e.g., we’re funding fusion energy at a “Fusion Never” level, and NASA’s budget for 2016 is one of its ten lowest-funded years ever).

However, even then we still won’t be able to develop some technologies such as domestic robotics and augmented reality. These two technologies are wholly dependent on algorithms that can decipher the incredible amounts of data fed to them by the world at large, and without sufficiently powerful algorithms, they will never take off like we imagined.

When I use the term “AI”, I’m not necessarily referring to artificial superintelligence (ASI) or even artificial general intelligence (AGI). I mean any algorithm, no matter how narrow. With sufficient computing power, even narrow AI becomes impressively capable.
And they need to be capable if we want the futuristic fantasies we’ve always desired.
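
Since I keep leaning on this claim, here is a toy sketch of what I mean (a made-up Python illustration of mine, not anything from a real robotics or AI stack): the same narrow algorithm, a depth-limited minimax search playing tic-tac-toe, makes strictly better decisions as its compute budget grows. The code never gets “smarter”; it only gets to look further ahead.

# Toy illustration: one narrow algorithm (depth-limited minimax on tic-tac-toe)
# becomes more capable purely by being given a bigger compute budget.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, depth):
    # Returns (score, move): 'X' maximizes, 'O' minimizes; score is +1, 0, or -1.
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves or depth == 0:
        return 0, None  # draw, or we ran out of compute budget
    best_score, best_move = (-2, None) if player == 'X' else (2, None)
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X', depth - 1)
        board[m] = ' '
        if (player == 'X' and score > best_score) or (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Same position, same algorithm, three different compute budgets.
position = list("X O  O X ")
for budget in (1, 2, 9):
    score, move = minimax(position, 'X', budget)
    print("depth", budget, "-> X plays square", move, "expecting outcome", score)

Run it and the depth-1 search blunders while the full-depth search finds a forced win from the same position. The point is not the game; it is that the capability scaled with nothing but the amount of computation allowed.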

In the early days of science fiction, we were amused by visions of robot butlers. So amused that we tried making them ourselves. However, every attempt thus far has failed. Does it stand to reason that every attempt will fail, or will there come a day when every middle class family possesses their own automaton slave?

I won’t even let you answer that question— of course that day will come.

There’s a wonderful infogif from Mother Jones that shows why we’re about to get our own domestic droids sooner than many think.

Mother Jones – “Welcome Robot Overlords. Please Don’t Fire Us?”

For a slower version, click here.

In the 1960s, our computers had so little computing power available that you might have gotten better results with a wind-up toy. Things didn’t improve much in the ’70s, though we were doing the best we could with what we had. It was depressing, to say the least. That there were proto-Singularitarians in that era is remarkable, as I can’t imagine how little hope they must have had.

I say this from the comfortably robotic year of 2016. I’ve talked about my late Roomba, though I don’t believe I’ve mentioned how frustrating the little bugger really was. Even a 2010-model Roomba felt like a glorified McDonald’s toy. Thus, it makes sense that people raised on The Jetsons and Star Wars in those vintage decades would be disillusioned by the seemingly stagnant rate of progress in robotics.

But things are changing.

Now that we’re developing computers powerful enough to run the algorithms necessary for a robot to navigate a real life space— as well as wireless networks fast and sturdy enough to drive Cloud computing— we are witnessing a robotics boom time.
How is it that ASIMO went from falling over while trying to navigate stairs to hopping stably on one leg? Better algorithms that could process more information at a much quicker pace. How is it that Boston Dynamics went from the awkward PETMAN to the creepily impressive Atlas 2.0? Better algorithms that could process more information at a much quicker pace. Not forgetting more efficient robotic design, of course— design that still needs to be utilized by said algorithms.

We may have had the necessary algorithms in the ’70s and ’80s, but computers were far, far too weak to exploit them. Thus, if you brought home something that was marketed as a home robot butler for Christmas ’78, you were going to be sorely disappointed. And if you were a 6-year-old who had high hopes for robot butlers, the failings of one would scar you for life. Imagine living the rest of your life, working at a decent job and starting a family, and then in 2018, you hear that Honda is selling its first home-ready ASIMOs while Google is readying a passenger drone. Chances are you wouldn’t believe them. Sure, technology’s gotten better, but there’s no way it could have gotten that much better, right…?

And yet it has.

The World of 2029: 1000-Man Algorithms

It’s late Friday, November 16th, 2029.

Samantha Jones wraps up what she was working on and goes into the kitchen for a glass of water. Nothing happened today that really shocked her, and there was little to write about.

Then again, her computer continues to type even though she’s away from the keyboard.

When she sits back down, she rolls her chair around the corner to check up on Miranda. She’s watching cartoons on their 8k TV. The TV itself is as flat as paper and sticks to the wall, making it appear as if it were magic wallpaper.

As the sun sets on the crisp autumn day, the wind blows with vigor and the sky darkens.

Beautiful, Samantha thinks. There’s nothing better than a rain-cooled night.

Immediately, those words appear on the computer screen. Along with them, a stock photo of such a cloudy, cool, and dreary evening.

Chui speaks, “You love rain, right?”

“Better believe it, honey.” Then she begins typing.

Chui sees that these inputs come from the keyboard. “Isn’t it easier to use the iMind?”

“Meh, I grew up typing. Old habits die hard.”

“I thought you loved new technology.”

“When it suits me.” She turns off the speaker and types in the next response. ‘I just don’t like playing cyborg all the time.’

‘I understand.’

“So how far along is the game?”

“79% finished.”

Samantha brings up a new window and sees a message box that displays multiple lines of code. The code generates itself and fills whole pages. On top of the box are the words ‘Sam’s Game.’

So she logs off from her blog and brings up a social media website. Her eyes glow from all the information thrown at her, and she hastens to put on her glasses.

Instantly, she sees a new world around her, one more vibrant than any she’s ever known. She’s in the website, experiencing its cybernetic wonders without any middleman.

In fact, the more she works with this, the less she uses the traditional computer and keyboard. If anything, they’re vestigial. Yes, the tower is necessary, but she wears the screen as glasses, and she uses her mind as the keyboard.

Along with her glasses and cyberkinetic headband, she also wears wireless earbuds. From here, she can listen to any of her 120,000 downloaded songs. Don’t tell anyone, but she used a YouTube-MP3 converter for almost all of it. It’s not like anyone can do anything about it anyway. The last lawsuit over pirated music was nearly a decade ago, and now the music industry doesn’t bother.

And an avatar of Chui is smiling at her. That little thing is one of the reasons why.

Chui, and on a larger scale artificial intelligence in general, has become what the media has dubbed ‘supercapable.’

Supercapable AI as a term was first used in 2016 after a Google AI beat the world champion at Go, a game whose very function requires some form of generalized intelligence. It has come to bridge the gap between ‘narrow’ AI and ‘general’ AI. For a refresher, narrow AI refers to any programming that can complete a specific task. Computers from the 1960s, thus, relied on narrow AI.

General AI has proven to be a tougher nut to crack, as it requires an algorithm that can learn any task and operate on a human level.

Up until the late 2010s, they were seen as separate worlds. After computer intelligence’s domination of Go, however, the term ‘supercapable AI’ entered parlance to describe AI that was capable of some level of generalized learning, even if it was not general intelligence in and of itself.

“Your game is almost ready,” Chui says. “95%.”

Samantha can’t keep her jaw off the desk. “That fast?”

“Yep.” Chui gives an ‘XD’ smiley. “Just putting the finishing touches on the lighting engine. All 18 levels are done, and the game’s AI is working properly.”

Supercapable AI has been the dream of nerds and dreamers— and the nightmare of wage laborers. Chui isn’t the one who built the game, but it is telling her about the progress of its construction. However, Sam isn’t the head of a game design studio— it is another AI that is building the game. Sam wrote instructions and descriptions of what she wanted from the game and guided the AI in its early design phases, but otherwise neither she nor any other human has put in a single line of code.

That’s not to say this has killed entertainment.

One of the sites Samantha opens up next is a hub for such games to be shared and sold. There are thousands of such games, uploaded by people across the world. Human-developed games are specially marked, though algorithm-developed games dominate the site.

She checks her account. $542.99 made in the past week off game downloads. She’s among the top 500 ‘developers’, as well as one of the site’s oldest accounts.

Samantha is a technophile who keeps her finger on the pulse of the tech world. For years, she lauded the coming of decentralized game development. Indie games have grown in complexity thanks to algorithms, to the point they are indistinguishable from ‘professionally developed games.’

In fact, in the site’s newsfeed is a headline that reads ‘EA Closes Doors— Millions Cheer’. This has been the somber reality of the gaming industry ever since the algorithms first hit the market in 2018. It took a while for them to be noticed. In fact, as late as 2021, many in the games industry claimed that these “silly algorithms would never present a threat to the millions of manhours put in by the industry’s best”, since the best the algorithms seemed to manage was basic stages built with uncreative utilitarianism.

By 2024, this delusion had shattered when an algorithm-designed game became the biggest selling title of the year. Google and their ilk had warned the game industry years prior, saying that top-of-the-line algorithms from 2020 were already capable of ‘creative design’ and that it was only a matter of time before regular consumers got their hands on them. For the industry, it’s only gone downhill since. A few algorithms can outdo a team of 500 skilled programmers and designers for a millionth of a percent of the cost, so what’s the point of having the latter outside of ‘human cred’?

“Aaaaand 100%. Game’s finished, Sam.”

“Alright, cool, I’ll check it out in a sec,” she thinks.

A small preview opens in the lower right of her vision, and she sends a mental note to close it and send it to the house’s main computer.

Ben and Miranda run into the living room and sit in front of the TV. The game opens, and they get a screen full of cartoony graphics and bright colors. If one didn’t know better, they’d say this was from Nintendo.

This seemingly simple layout was the intention. Samantha knows that, if she wanted to, she could’ve created a sprawling epic featuring the most realistic graphics possible.

And there’s another point of contention. Ever since the early 2020s, computer graphics have been photorealistic. Video games oft feature CG cutscenes. It doesn’t take a rocket scientist to figure out that one can use these game algorithms to create movies and serials.

To Samantha, decoupling power from a centralized few is the greatest thing to happen to the entertainment industry. Billions of minds hold quadrillions of ideas, but only a few thousand ever get the privilege of bringing them to light.

It just helps that she was an early adopter. She’s been a blogger for decades now, and using AI to write her content has made her life easier (and wealthier). And it’s not like her readers don’t know— she’s one of the Internet’s most outspoken technophiles, openly praising the neverending progression of artificial intelligence and robotics.

This is what makes her opposition to Vyrdism so strange. She profits off of automation and AI, yet she claims that others should rely on the State to pay them benefits and not worry about owning anything. Hence her last article, ‘Giving Vyrd the Bird.’ It’s not been one of her more popular articles, with many Vyrdists attacking it as ‘bourgeois apologism.’

“What do you think about it?” she asks Chui.

“I think it’s a fine article that raises many good points,” it responds.

The rain falls.

The World of 2016

Exponential growth is for Luddites

I dedicate this post to my Roomba, who served me well for four and a half great years. R.I.P., 2011-2016.

Okay, maybe not. But there’s a reason why I trashed my Roomba— it’s outdated. It was outdated when I bought it and it’s worlds outdated today. Every second that passes inches it closer to the Roomba Obsolescence Singularity, the point at which all Roombas are obsolete the moment they’re created.

But I don’t mind, because I didn’t need another Roomba for half of a decade. That one served me well. The same applies to a lot of technology— I recently blogged about how I have a smartphone from 2013, and how I’ll probably keep it until I buy an iPhone 8s Plus, which I’ll then keep until 2026.

It feels good to experience exponential growth. It’s hard to experience it when you’re constantly riding the curve, so making a stop at one point and picking back up some time later is a joyride that can’t be beat.

The same will still be true 13 years from now. Imagine 13 years of exponential growth from where we’re at now. There’s a reason why the World of 2029 posts are increasingly ‘out there’— the more I consider the real nature of the future, the more I realize that I’m badly underestimating the rate of change. When I went into writing The World of 2029: Part One, I was still thinking linearly.

Imagine if this year were 2003, and I were writing about the year 2016. Yes, there are some things I’d get wrong, terribly wrong. But there are other things I might have gotten only partially right, because I wasn’t being creative enough.

Back in 2003, I thought about the future a lot. I oft consider that period in 2003 to be my ‘proto-futurist’ phase of youth. The years that really interested me were 2015 and 3000. Why 2015? I dunno, it just sounded so futuristic to me. What’s more, it’s cute how little difference there was between 2015 and 3000— the year 3000 AD was brighter, taller, and had flying cars, but it was recognizable. The year 2015 had a lot of robots, jetpacks, neon, and some “primitive flying cars.” Mind you, I was 9 years old, so I wasn’t totally aware of all the great changes that had occurred and could occur.

Still, it’s interesting to return to those times and try to wrap my mind around the idea of just how much had actually changed between then and 2015.

The biggest thing was access to the Future. A lot of futurist thinking is predicated upon the idea that the Future will be mostly-evenly distributed. To an extent, this is actually correct: think of smartphones and their prevalence in society. The rich might have snazzy covers and larger storage sizes, but for the most part, a millionaire’s iPhone isn’t very different from my own.

To another extent, it’s totally wrong. Were there robots, jetpacks, and primitive flying cars in 2015? Absolutely. Did everyone have them? Not at all. In fact, only a handful of people altogether had any of the above.

Things get cheaper, as smartphones have, so I’m confident the Future will arrive in the lap of the less fortunate. It’s just that, for now, we have to watch and imagine.

The amount of exponential growth between 2003 and 2015 didn’t seem to be all that great. Towards the end, there was a noticeable curve upwards in progress, but it took a while to get going. Between 2003 and 2010, not much in my daily life changed. I had an iPhone and 7th generation video game consoles, but that was just about it. Never mind the more subtle changes, such as the rise of social media and YouTube.

Compare 2010 to 2015, and I’ll definitely say there was a change. For one, I got a Roomba. I also got a more powerful computer, a much better smartphone, a brand new video game console, and I started using Siri and Cortana. Oh, and then there was that ‘totally nothing’ drone I got in 2014.

I’ll always use the drone as an example of when the Future hit me and my mother in the face. The thing’s a flying robot. I got a freaking flying robot for Christmas. In 2003, that was solely the realm of science fiction. My mother? She still can’t get over it. It looks like a flying saucer, which just drives the point home even further.

Now it’s 2016 and I’m already impressed with what I’m seeing, whether it be autonomous manned drones or heavily expanded IoT services. I’ve become used to the overwhelming amount of change because I’ve accepted and embraced it. That makes it easier to see just how much change we’re undergoing and predict how much will occur in the future. Yet I still made the linearist mistake.

So I feel I should spend time talking about where information technology will be in 13 to 14 years. I can talk about where it’ll be in 4 years all the same. If I use an exponential growth model, things begin making sense.
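
For what it’s worth, here’s the kind of napkin math I mean, with a 2-year doubling period assumed purely for illustration (not a measured figure):

# Napkin math: linear vs. exponential projections over 13 years,
# assuming (purely for illustration) that capability doubles every 2 years.

YEARS = 13
DOUBLING_PERIOD = 2.0                    # assumed doubling time, in years
periods = YEARS / DOUBLING_PERIOD

linear_projection = 1 + periods          # tack on one doubling's worth per period
exponential_projection = 2 ** periods    # compound it instead

print("Linear thinking says 2029 is about %.1fx today." % linear_projection)
print("Exponential growth says 2029 is about %.0fx today." % exponential_projection)

The linear guess lands around 7.5x today; compounding lands around 90x. Whether the doubling period is 18 months or 3 years matters far less than which of those two curves you project along.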

So expect the World of 2029 posts to get exponentially weirder.

 

The World of 2029: Oliver’s Workplace

The future of work is bleak— for workers. Owners, on the other hand…

Lower Manhattan, New York. Friday, November 16th, 2029.


 

Oliver’s Model N Tesla pulls into a parking lot and he steps out. He walks into a restaurant, 야끼만두 (Yakimandu), and heads into his office. Along the way, he says “Hi” to his co-worker and good friend, Hyun Ryu.

“Sup, man?”

Ryu waves and says, “Just got another couple’a orders. We’re on a roll to pull in $800 an hour.”

“Falkener’s?”

“Stuart Bentley, the guy over at ULF’s. He’s sending a droid over to pick it up for us.”

“How are ours doing?”

“Pretty good. Dawn’s got the updates, so just check in with her when you get the chance.”

“Alright, cool.”

Ryu fades away, and Oliver sits in his swivel chair. He taps his desk, which lights up and sends a holographic display to his eyes. It’s not actually there; it’s his own bionic lenses that let him see this wonderful magic.

He thinks out, “Dawn, what’s the situation with the workforce?”

Dawn replies, “All units are working properly. Your enterprise is ready to open for the day.”

“Alright, hit the lights.”

And with that, the signs outside shine bright. The sun is screaming for a good day, and the city is alive. Patrons come and go throughout the day, and the place fills to its peak by evening.

This is what Oliver loves most about the job: hearing all these voices, all the laughter, all the drama, just the general sense of humanity among the crowd.

Ryu calls back again, reappearing as a hologram. It hasn’t even been five years since Oliver got the hologram system installed, and he still can only barely believe it works. What creeps him out the most is how ‘unhologramish’ holograms actually are— when he was growing up, holograms were usually portrayed as staticky, monocolored images suspended in the air. Real holograms never flicker, and they’re so finely colored that it strains the brain to understand the image isn’t really there.

After Oliver answers the call, he goes back to checking on his workers.

None of them are human.

Yakimandu is a breed of service that’s become increasingly common— automated enterprises, but not ‘too’ automated. Some chains, like McDonald’s, made the transition well enough, acting more as automats than robotic restaurants. But it didn’t work everywhere, because older people’s Luddite sensibilities were overpowering. Yakimandu and its kind found a happy medium, adopting a shokkenki model of business.

Automats and shokkenki technically describe the same thing: a fully/near-fully automated eatery. However, automats have become known as ‘McVending Machines’, as most of their services come from selecting your order from a screen. Shokkenki eateries, however, maintain the traditional roles of servers and waiters, with the exception that said servers and waiters are robots.

It surprised Oliver, as it did everyone, that shokkenki became as popular as they did.

When asked about his opinion on it, Oliver said to his wife, “The world isn’t ready for automats. The kids, they’re okay with it, but you have the older generations who still value human interaction, even on such a fleeting and insignificant level.”

Yet Oliver works at Yakimandu knowing that very few of his patrons are workers. Sure, it’s a sort of trendy line of club-esque restaurants, but even those tend to be populated by the middle class.

Whatever happened to the middle class?

Every day, Oliver relearns this truth. Just across the street is another restaurant, Double V’s, and it wears a similar trendster veneer. Though they compete for patrons, Yakimandu and Double V’s seem to be different themes for the same business.

Except Double V’s is more successful, more well known, and has a larger number of restaurants under construction.

Oliver knows why.

“They’re Vyrdists.”

Don’t get him wrong, Oliver’s a staunch Democrat. He pays his taxes, and his taxes help fund the state’s guaranteed basic income, something he supported a decade ago. But the Vyrdist movement feels too radical for him, and he fears the potential consequences of being targeted for ‘Vyrdist expropriation.’

A basic income is an amount of money paid out just for being a citizen. When Oliver first heard of the concept in the 2010s, he asked why no one had ever thought to try it before. When he went into business, the world had already changed greatly, and he was one of many millions across the globe who successfully petitioned their respective nations to, at the very least, consider implementing a basic income system. This was because, even as he managed to build a successful shokkenki business, he felt concerned about those who couldn’t adapt to the changes in time.

The argument against this was that people should learn new marketable skills if they want money so badly. Perhaps it was because Oliver’s a Democrat, or perhaps because some small part of him simply knew, but Oliver thought this counterargument insanely short-sighted.

“Pay a basic income so people can actually survive to learn new skills,” he said. “If they can’t find a job to begin with, how on Earth are they gonna afford the education in the first place?” To him, basic income was the best idea in the world. Those displaced by automation should be granted some way to survive, and the government should provide it.

Not everyone agreed. Sometime around his senior year at uni, right when he met his sweetheart Samantha, Oliver’s unshakable optimism about a basic income was shaken when he first heard news of a radical new movement popping up on college campuses and in industrial fields across America and China— people who rejected the idea of a basic income on the basis that it created dependency upon the ‘bourgeois-run government.’

According to these people, who called themselves ‘Vyrdists’, after an elusive and potentially mythical man known as John Henry Vyrd, the only true solution to technological unemployment was for the workers to obtain ownership of automation, and that anything less was tantamount to slavery— basic income included.

Though it remained underground throughout the decade, Vyrdism seemed to explode this past year.

Double V’s is a worker cooperative, a sort of enterprise owned and managed by the people who work it. Except it’s not a traditional worker enterprise. Vyrdists use the term ‘technate’ to describe a fully-automated business run in a cooperative fashion. As they do with everything, they appropriated the term from the old technocracy movement.

On the surface, it seems radical. Workers owning the means of production? Where have we heard that one before?

But Vyrdists rarely describe themselves as Marxists. If anything, they subscribe to the phrase “free market socialism,” saying that they don’t want a fully cooperative-run society, only one where workers have a choice and a chance at ownership. And it makes sense— if workers owned automation directly, they wouldn’t have to worry about a government middleman and would have much more power over their lives.

Maybe Oliver could’ve supported something like this if he weren’t a business owner himself. Vyrdists have not been afforded much power to start their own businesses, so they’ve been forced to expropriate them from other, failed businesses and work from there. This Great Expropriation began right as the recession hit.

“I can’t hate them,” Oliver said to Ryu. “They’re where I get most of my money from.”

“That’s the whole point of Vyrdism. Basic income relies on wealth redistribution. Vyrdism relies on egalitarian wealth creation. Haven’t you heard the Word of Vyrd?” Ryu laughs.

“Yeah, yeah.” Oliver can’t quite explain what it is about Vyrdism that gets to him. The recession’s over, and the dollar is stronger than it’s ever been. On top of this, millions of Americans have become Vyrdists and have joined the National Worker’s Federation, and have seen their wages rise by extreme amounts because of it. This means they have more money to spend, which should mean people like Oliver benefit.

Yet all it’s caused are tensions between business and labor.

“I can’t say I’m too mad,” Ryu adds. Oliver knows Ryu is a Vyrdist. God, it’s ridiculous how sci-fi Ryu’s life reads. He’s a cyborg— fully cybernetic arms and legs— who’s being beamed into his office space via hologram. Ryu is careful about these things, too. South Korea has surpassed China and Dubai in recent years in terms of notoriety.

“That’s because you come from a corporate hellhole.”

“If they had Vyrdism in Korea, the place would be a thousand times better. At least the States aren’t so bad.” With that, he sounds pained. Both men know what South Korea’s like. Once upon a time, it was seen as being the antithesis of North Korea: a capitalist oasis opposed to a communist dystopia. Nowadays, it seems like four legs are good and two legs are better.
If cyberpunk ever existed anywhere, it exists in Korea. Seoul is a glittering cyberscape filled with mile-high neon-lined skyscrapers, but this shiny glory came at the cost of a near-totalitarian corporate dictatorship, one that does not tolerate dissent or complaining. One that thrives on the division of classes. Ryu only escaped because he sold his soul to fight the devil, becoming part of the business class.

To him, it’s shocking how far America has moved in the other direction. Vyrdism is just the latest in an extended trend of greater power in the hands of the People. The idea that a nation as conservative as America, oft seen as 50 years behind the rest of the first world, has such a powerful labor movement seems unbelievable.

To Oliver, maybe he feels unease because of his father.

“You know, my dad is probably why I feel this way. I told you about ‘im, right? That bastard was the most classist asshole around. He tried raising me to believe that, if you can’t work for any reason, you deserve to rot, and if you’re poor, you deserve to be poor.”

Ryu laughs, “Sounds Korean!”

“I know, right? He was just really mean about it, though. Like, if you got rich in a way he didn’t approve, he’d still say you’re poor. So all these co-ops and technates? He’d just call ’em all commies and say they should be forced outta business.”

“But they’re capitalists!”

“Yeah? So? They still ‘share.’ And a lotta them only work the bare minimum and let robots do all the rest. Oh man, if he ever heard of that? Hoo boy.”

“So what I’m hearing is ‘your father is a hypocrite’, is that it?”

“Probably. He’d probably be very happy letting robots work for him, but damn you if you tried it yourself. He’d call you a lazy leech who should actively have your money taken away from you.”

Both of them start laughing. “So wait, wait, wait. He supported wealth redistribution?”

“Don’t call it that, and only if it’s from the poor to the rich. If the government takes from the poor and gives to the rich, that’s just the free market working the way it should. But if it’s the opposite, it’s Stalinism. So I never got him, really. I still liked him as a father, but I’m almost relieved, if that’s the right word, that he passed away before things got to this point. His veins couldn’t take the blood pressure if he read half a page of today’s news.”

Oliver and Ryu walk through the restaurant and interact with the many patrons. No one bats an eye when Ryu passes through them or the robot workers.

The robots are generally humanoid, though some take different and more generalized forms. Each and every one is powered by the Dawn system.

The people are generally chatting, though some do not speak. Instead, they seem entranced and detached. Detached they are not— in fact, they are engaged in telepathic communication.

Oliver wears the same technology to talk to Dawn and his phone contacts. All it takes is a little headband, one that reads brain signals and translates them into words and symbols.

It was the Apple iMind that brought it into the mainstream. When that product was announced in 2023, it was hailed as an invention on par with the discovery of fire and the wheel. This, despite psychotronics and cyberkinesis having been in development for well over a decade prior.

Actually, for Oliver, that was the moment he realized just how futuristic the world was becoming. He was one of the early adopters, and was amazed by the features. These days, it’s just like texting.

That’s the nature of the game these days, isn’t it? You’re given something unbelievably amazing and futuristic, and yet you’re not given time to take it for granted before something even more amazing comes along.

This past decade has been one big ‘How on Earth did scientists create this’ sort of festival of technology. For a man like Oliver, it’s been a game changer. When he started the decade as an intern, it was still a given that people had to work for a living, that robots were decades away, that telepathy was impossible.

Now here he is, wearing cybernetic contact lenses he controls with his mind, talking to a hologram of his cyborg friend, owner of a business that exclusively employs robots.

What a difference a decade makes.

And he gets to enjoy the fruits of technology because he made the right choices in life. When he was in college, his father was brutally hard on him, telling him that unless he became an electrical engineer, he would never succeed in life. In fact, in the late 2010s and early 2020s, the media kept hyping up how the STEM field was the only place to go if you wanted to make any money. His decision to major in Business seemed shortsighted. He watched with great concern as all his friends became STEMgineers and seemed to be set up with six-figure jobs upon graduation.

And yet guess who makes the most money these days. Somewhere along the line, around the middle of the decade, the STEM bubble burst. It didn’t burst because it got too big. No.

It burst because something popped it. And that something was the very same thing the STEMgineers were being paid to create— artificial intelligence. No one knows when the ‘spark’ flew, other than that the world hasn’t been the same since. Indeed, the early 2020s seemed to be such a simpler time, but maybe this is just his rosy memory of the days before he had to become so involved with AI.

And this is why the issues of basic income and Vyrdism are so prominent now. It’s why Oliver has been arguing with Samantha over the future of their 3-year-old daughter, Miranda. AI has become capable of STEM tasks, even the creative ones. The belief that the STEM field would supply humanity with jobs for hundreds of years repairing and maintaining automation collapsed before it could become entrenched, and the only ones still fielding this argument are those most out of touch with the reality on the ground.

Now his STEM-educated friends are desperate for jobs. His degree in Business paid off because he wasn’t being paid to fix automation— he was being paid because he owned it.

It’s tragic, actually, how ill-prepared the workforce was. But one can’t blame them. In less than a generation, the very nature of labor and business has undergone multiple otherwise century-defining shifts. The children of the 2000s were taught like the children of earlier decades. Then the STEM field became of chief importance in the 2010s, so the children of the 2010s and early 2020s were taught almost exclusively in either the STEM field or the arts. Then artificial intelligence steamrolled employment at a rate faster than anyone could have possibly predicted (or, perhaps, wanted to predict).

This is what Oliver respects about the Vyrdist movement— that they are attacking the problem at its source. But the means by which the Vyrdists are going about it trouble him.

Capitalists support basic income. Without consumers, they become subject to expropriation by masses of former workers. Maybe that’s why… Maybe he’s scared of being expropriated, and it’s in his best interest to see a concession like basic income become the standard.

God Christ, 10 years ago, this wasn’t even an issue. How has so much happened to the world in so little time?

It’s that damned quote he keeps on his desk. It’s a curse.

“May you live in interesting times.”

If I Had a Robot

If I had a domestic robot of my own, like say a Pepper or an ASIMO, what would I do with it?

There are many things I’d “do” with it, because I swing that way, but let’s keep it PG. After all, Aldebaran preempted me on that front.

I’d rediscover nature

The first thing I’d do would be to act on my futuristic realist principles and take the robot to the last place robots are ever usually seen— the great outdoors. Being an asocial asshat means I’d rather go on a nature walk with a robot than a human, because I’m the kind of person who’d do that.

What will we do out there? Should I also possess smartglasses, we’ll be on an expedition through an outdoor Wikipedia, looking at various animals and plants and pulling up factoids about them.

More than that, I’ll be using that robot as protection. The wilderness is home to many wonderful beasts and species, diverse and beautiful.

I hope we spot Bigfoot in these woods… or at least something paranormal

But there are some things that cannot be explained. Things that escape science. Things that are unknown.

Throwing a piece of ultra-high tech like ASIMO out into cryptid-infested woodlands is exactly the kind of thing I’d do and be proud of doing. In fact, what better segue into the second thing I’d do than this sort of high strangeness?


Using a robot to find ghosts is apparently a rare topic, seldom considered by those in the field of ghost hunting. While I’ve found a few instances of the idea, it remains fleeting. This means I can jump on the bandwagon first. Get rich.

All I hafta do is send my droid to the Myrtles Plantation, which is about an hour’s drive from my home. Better yet, I could send two droids to the plantation.

And I guess it would be ironic that I would be sending robots to a place known for using slaves. After all, the third thing I’d do with my robot is nothing less than technoslavery.


I’ll exploit my robot’s labor by having it work at a fast food joint, where it’ll earn my paycheck. I’ve already labeled using technology in place of a worker ‘technostism’, so of course I’d get to be the pioneer of the movement. How wonderful would that be, to have this sort of passive income?

Alas, there are still kinks to work out before any such robot will be ready to do any of these things, and good god I can’t stop putting in innuendos everywhere. Maybe that’s because I’m a fucking robosexual, and the first thing I really want to do is bed the sexy thing. I dunno. That’s just me.

If you got your own robot, what’s the first thing you would do with it?