Yuli’s Law: On Domestic Utility Robots

The advancement of computer technology has allowed for many sci-tech miracles to occur in the past 70 years, and yet it still seems as if we’ve hit a plateau. As I’ve explained in the post on Yuli’s Law, this is a fallacy— the illusion of stagnation appears only because computing power is still too weak to accomplish the goals of long-standing challenges. That, or we accomplished said goals a long time ago.

The perfect example of this can be seen with personal computing devices, including PCs, laptops, smartphones— and calculators.

The computing power needed to run a decent college-ready calculator was achieved long ago, and miniaturization has allowed calculators to be sold for pennies. There is no major quantum leap between calculators and early computer programs.

Calculating the trajectory of a rocket requires far less computing power than some might think, and that’s because of what the task actually requires: guiding an object using simple algorithms. A second grader could conceivably create a program that guides a bottle rocket in a particular direction.
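To make that concrete, here’s a toy sketch (entirely mine, not drawn from any real guidance system; every name and number is illustrative): a bang-bang controller that nudges a simulated rocket’s heading toward a target bearing a few degrees at a time.

```python
# Toy sketch of "guiding an object using simple algorithms":
# turn a fixed number of degrees per tick toward a target heading.
# All names and numbers here are illustrative, not from any real system.

def steer(heading, target, step=5.0):
    """Return the next heading, turning at most `step` degrees."""
    error = (target - heading + 180) % 360 - 180  # shortest signed angle
    if abs(error) <= step:
        return target
    return heading + step if error > 0 else heading - step

heading = 90.0
for _ in range(20):
    heading = steer(heading, 10.0)
print(heading)  # → 10.0 (the rocket has swung onto the target bearing)
```

That’s the whole trick: sense the error, correct a little, repeat. Nothing about it strains even a primitive processor.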

This is still a step up from purely mechanical systems that give the illusion of programming, but there are obvious limits.

I’ll explain these limits by using a particular example, an example that is the focus of this post: a domestic robot. Specifically, a Roomba.

(Image: iRobot Roomba Autonomous FloorVac vacuum cleaner)

An analog domestic robot has no digital programming, so it is beholden to its mechanics. If it is designed to move in a particular direction, it will never move in another direction. In essence, it’s exactly like a wind-up toy.
I will wind up this robot and set it off to clean my floors. Thirty seconds later, it makes a left turn. After it makes this left turn, it will move for twenty seconds before making another left turn. And so on and so forth until it returns to its original spot or runs out of energy.

There are many problems with this. For one, if the Roomba runs into an obstacle, it will not move around it, nor will it make any attempt to avoid it on the next pass. It only moves along a preset path, a path you can perfectly predict the moment you set it off. There is a way to get around this— by adding sensors: little triggers that force a turn early if the robot hits an object.
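A minimal sketch of the difference (the function and its arguments are my own invention, not anyone’s firmware): the wind-up robot turns purely on a timer, while the sensor-equipped one can also turn the moment it bumps something.

```python
# Hypothetical decision logic for the two Roombas described above:
# a timer-only machine versus the same machine with a bump sensor.

def next_action(elapsed_s, turn_interval_s, bumped=False):
    """Return 'turn' or 'forward' for the current tick."""
    if bumped:                      # sensor trigger: turn early
        return "turn"
    if elapsed_s >= turn_interval_s:
        return "turn"               # the preset, perfectly predictable path
    return "forward"

print(next_action(10, 30))                # forward: nothing in the way yet
print(next_action(10, 30, bumped=True))   # turn: the sensor forced it early
```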


Let’s bring in a digitally programmed Roomba, something akin to a robot you could have gotten in 2005. Despite having a digital computer for a brain, it seems to act no differently from the mechanical Roomba. It still gets around by bumping into things. Even though the mechanical Roomba could have been created by someone in Ancient Greece, yours doesn’t seem any more impressive on a practical level.

Thus, the robot seems to be more novel than practical. And that’s the perception of Roombas today— cat taxis that clean your floor as a bonus rather than a legitimate domestic robot.
Yet this is no longer a fair perception, as the creators of the Roomba, iRobot, have added much-needed intelligence to their machines. This has only been possible thanks to increases in computing power that allow the proper algorithms to run in real time.

For example, a 2017-era Roomba 980 can actually “see” in advance when it’s about to run into something and avoid it. It can also remember where it’s been, recognize certain objects, among other things (though Neato’s been able to do this for a long time). Much more impressive, though still not quite what we’re looking for.

What’s going on? Why are robots so weak in an age of reusable space rockets, terabyte smartphones, and popular drone ownership?

We need that last big push. We need computers to be able to understand 3D space.

Imagine a Roomba 2000 from the year 2025. It’s connected to the Cloud and it utilizes the latest in artificial intelligence in order to do a better job than any of its predecessors. I set it down, and the first thing it begins doing is mapping out my home. It recognizes any obstacle as well as any stain— that means if it detects dog poop, it’ll either avoid it or switch to a different suction mode to pick it up. Once it has mapped my house, it has a good feel for where things are and should be. Of course, I could also send it a picture of another room, and it will still be able to get a feel for what it will need to do even if it’s never roamed around inside before.
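As a hypothetical illustration of that mapping step (none of this reflects iRobot’s actual software; the cell labels and planning rule are my own simplifications), you can think of the robot filling in an occupancy grid as it roams, then planning only over cells it knows are clear:

```python
# Hypothetical occupancy-grid sketch of the mapping step described above.

CLEAR, OBSTACLE, STAIN = ".", "#", "s"

def blank_map(width, height):
    return [[None] * width for _ in range(height)]  # None = unexplored

def observe(grid, x, y, what):
    grid[y][x] = what  # record what the robot sensed at (x, y)

def cleanable(grid):
    """Cells safe to vacuum: mapped and clear of obstacles and stains."""
    return [(x, y) for y, row in enumerate(grid)
                   for x, cell in enumerate(row) if cell == CLEAR]

home = blank_map(4, 3)
observe(home, 0, 0, CLEAR)
observe(home, 1, 0, OBSTACLE)  # couch leg: route around it
observe(home, 2, 0, STAIN)     # dog mess: avoid, or switch suction modes
print(cleanable(home))  # → [(0, 0)]
```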

The same thing applies to other domestic robots such as robotic lawn mowers— you’d rather have a lawn mower that knows when to stop cutting, whether because it’s moving over new terrain or because it’s approaching your child’s Slip ’n Slide. Without the ability to comprehend 3D space or remember where it’s been and where it needs to go, it’ll be stuck operating within a pre-set invisible fence.

On top of all this, there’s the promise of bipedal and wheeled humanoid robots working in the home. After all, homes are designed around the needs of humans, so it makes sense to design tools modeled after humans. But the same rules apply— no comprehension of 3D space, no dice.

In fact, a universal utility robot like a future model of Atlas or ASIMO will require greater advancements than specialized utility robots like a Roomba or Neato. They must be capable of utilizing tools, including tools they may never have used before. They must be capable of depth perception— a robot that makes the motions of mopping a floor is only useful when you make sure the floor isn’t too close or too far away, but a robot that genuinely knows how to mop is universally useful. They must be capable of understanding natural language so you can give them orders. They must be flexible, in that they can come across new and unknown situations and react to them accordingly. A mechanical robot would come across a small obstacle, fall over, and continue moving its legs. A proper universal utility robot will avoid the obstacle entirely, or at least pick itself up and know to avoid the obstacle and things like it. These are all amazingly difficult problems to overcome at our current technological level.

All these things and more require further improvements in computing power. Improvements we are, indeed, still seeing.

Mother Jones – “Welcome Robot Overlords. Please Don’t Fire Us?”

Passenger Drones

One of the most interesting developments in sci-tech in the past few years is the sudden interest in the concept of “passenger drones”. That appears to be their most popular name, though you may have heard them called “drone taxis” or “autonomous flying cars”. I’ve even seen the term “pilotless helicopter” used once or twice (though drones don’t necessarily have to be rotored vehicles). For the sake of this article, I’ll stick with ‘passenger drone’.

So what exactly is a passenger drone? In short, its name gives it away— a drone that can carry passengers. Typically, drones are defined as being “unmanned aerial vehicles”. You can see the conflict in definitions, hence why some are hesitant to actually call these ‘drones’. Nevertheless, linguistic drift has changed the definition of drone and that’s something drone hobbyists have to live with.

I say this because passenger drones are based on the designs of quadcopters, now popularly referred to as ‘drones’.

But enough about the linguistics.

Passenger drones represent the closest realization of yesteryear’s dream of flying cars. They are personal vehicles that theoretically anyone can own and use with ease, and they indeed work in three dimensions*. So why should we care about them when that dream has never come true before now?

*”Three dimensions” in transportation terms refers to the inclusion of flight. “Two dimensions” refers to ground and sea travel.

Simple: your answer is in the name. Again.

This is a drone. That means you are not the one piloting the vehicle. And I don’t mean ‘you’ specifically, but ‘you’ as a human. Humans did not evolve to navigate 3D space. We can barely manage traveling in 2D space at high speeds— proto-humans never had to move any faster than their fastest sprint. This becomes obvious when you view motor vehicle statistics. In the United States of America alone, over 30,000 people die in vehicular accidents yearly.
And despite this, we are not even in the top 5 for “most killed yearly in motor accidents.” The number one country is, not surprisingly, China: they lose well over 250,000 a year in car accidents.

Worldwide, 1.25 million die every year in motor accidents. And note, that’s deaths alone, not total casualties. All of this is evidence that humankind is simply not well designed to casually travel at speeds higher than 20 miles per hour.

To throw another dimension and another two hundred miles per hour at us would unleash gigadeaths per year until humanity as a whole finally gives up. Human extinction by flying car.

This is the chief reason why flying cars aren’t a thing. Humans simply cannot handle it. Pilots have to go through thousands of hours of training just to become proficient, and that’s with vehicles that are already highly automated.

Indeed, as of right now, the closest thing to a “flying car” is a Cessna 172.

Of course, other reasons include the fact that roadable vehicles and flying vehicles require completely different designs and aerodynamics, as well as the power requirements necessary to keep such a vehicle in the air. But perhaps we could overcome these issues if only there were a way for the common person to actually survive take-off, flight, and landing.

Drones are that solution. Take away the need for the common person to do the flying.

That’s the promise passenger drones offer us. Again, there’s still the issue that flying is inefficient, but it’s always possible that passenger drones become a common sight over cities. Perhaps they’ll be privately owned; perhaps they’ll be municipally owned and rented out for use. This remains to be seen because the idea of flying cars and personal aerial vehicles being a real thing only became feasible within the past couple of years.

As of today, 4 April 2017, the first passenger drones are slated to enter operation in Dubai, UAE in the summer of this year.

Grades of Automation

  • Grade-I is tool usage in general, from hunter-gatherer/scavenger tech all the way up to the pre-industrial age. There are few to no complex moving parts.
  • Grade-II is the usage of physical automation, such as looms, spinning jennies, and tractors. This is what the Luddites feared. There are many complex moving parts, many of which require specialized craftsmen to engineer.
  • Grade-III is the usage of digital automation, such as personal computers, calculators, robots, and basically anything we in the modern age take for granted. This age will last a bit longer into the future, though the latter ends of it have spooked quite a few people. Tools have become so complex that it’s impossible for any one person to create all necessary parts for a machine that resides in this tier.
  • Grade-IV is the usage of mental automation, and this is where things truly change. This is where we finally see artificial general intelligence, meaning that one of our tools has become capable of creating new tools on its own. AI will also become capable of learning new tasks much more quickly than humans and can instantly share its newfound knowledge with any number of other AI-capable machines connected to its network. Tools, thus, have become so infinitely complex that it’s only possible for the tools themselves to create newer and better tools.

Grades I and IV are only tenuously “automation”— the former implies that the only way to not live in an automated society is to use your hands and nothing else; the latter implies that intelligence itself is a form of automation. However, for the sake of argument, let’s stick with it.

Note: this isn’t necessarily a “timeline of technological development.” We still actively use technologies from Grades I and II in our daily lives.

Grade-I automation began the day the first animal picked up a stone and used it to crush a nut. By this definition, there are many creatures on Earth that have managed to achieve Grade-I automation. Grade-I lacks complex machinery. There are virtually no moving parts, and any individual person could create the whole range of tools that can be found in this tier. Tools are easy to make and easy to repair, allowing for self-sufficiency. Grade-I automation is best represented by hammers and wheels.

A purely Grade-I society would be agricultural, with the vast majority of the population ranging from subsistence farmers to hunter-gatherer-scavengers. The lack of machinery means there is no need for specialization; societal complexity instead derives from other roles.

Grade-II automation introduces complex bits and moving parts, things that would take considerably more skill and brainpower to create. As far as we know, only humans have reached this tier— and only one species of humans at that (i.e. Homo sapiens sapiens). Grade-II is best represented by cogwheels and steam engines, as it’s the tier of mechanisms. One bit enables another, and they work together to form a whole machine. As with Grade-I, there’s a wide range of Grade-II technologies, with the most complex ends of Grade-II becoming electrically powered.

A society that has reached and mastered Grade-II automation would resemble our world as it was in the 19th century. Specialization rapidly expands— though polymaths may be able to design, construct, and maintain Grade-II technologies through their own devices, the vast majority of tools require multiple hands throughout their lifespan. One man may design a tool; another will be tasked with building and repairing it. However, generally, one person can grasp all facets of such tools. Using Grade-II automation, a single person can do much more work than they could with Grade-I technologies. In summary, Grade-II automation is the mark of an industrial revolution. Machines are complex, but can only be run by humans.

Grade-III automation introduces electronic technology, which includes programmable digital computers. It is at this point that the ability to create tools escapes the ability of individuals and requires collectives to pool their talents. However, this pays off through vastly enhanced productivity and efficiency. Computers dedicate all resources towards crunching numbers, greatly increasing the amount of work a single person can achieve. It is at this point that a true global economy becomes possible and even necessary, as total self-sufficiency becomes near impossible. While automation puts many out of work as computational machines take over brute-force jobs that once belonged to humans, the specialization wrought is monumental, creating billions of new jobs compared to previous grades. The quality of life for everyone undergoes enormous strides upwards.

A society that has reached and mastered Grade-III automation would resemble the world of many near-future science fiction stories. Robotics and artificial intelligence have greatly progressed, but not to the point of a Singularitarian society. Instead, a Grade-III dominant society will be post-industrial. Even the study of such a society will be multilayered and involve specialized fields of knowledge. Different grades can overlap, and this continues to be true with Grade-III automation. Computers have begun replacing many of the cognitive tasks that were once the sole domain of humans. However, computers and robots remain tools for completing tasks whose responsibility falls upon humans. Computers do not create new tools to complete new tasks, nor are they generally intelligent enough to complete any task they were not designed to perform. The symbols of Grade-III are the personal computer and the industrial robot.

Grade-IV automation is a fundamental sea change in the nature of technology. Indeed, it’s a sea change in the nature of life itself, for it’s the point at which computers themselves enter the fray of creating technology. This is only possible by creating an artificial brain, one that may automate even higher-order skills. Here, it is beyond the capability of any human— individuals or collectives— to create any tool, just as it is beyond the capability of any chimpanzee to create a computer. Instead, artificial intelligences are responsible for sustaining the global economy and creating newer, improved versions of themselves. Because AI matches and exceeds the cognitive capabilities of humans, there is a civilization-wide upheaval where what jobs remain from the era of late Grade-III domination are then taken by agents of Grade-IV automation, leaving humans almost completely jobless. This is because our tools are no longer limited to singular tasks, but can take on a wide array of problems, even problems they were not built to handle. If the tools find a problem that is beyond their limits, they simply improve themselves to overcome their limitations.

It is possible, even probable, that humans alone cannot reach this point— ironically, we may need computers to make the leap to Grade-IV automation.

A society that has reached Grade-IV automation will likely resemble slave societies the closest, with an owner class composed of humans and the highest order AIs profiting from the labor of trillions, perhaps quadrillions of ever-laboring technotarians. The sapient will trade among themselves whatever proves scarce, and the highest functions of society will be understood only by those with superhuman intelligence. Societal complexity reaches its maximal state, the point of maximum alienation. However, specialization rapidly contracts as the intellectual capabilities of individuals— particularly individual AI and posthumans— expands to the point they understand every facet of modern society. Unaugmented humans will have virtually no place in a Grade-IV dominant society besides being masters over anadigital slaves and subservient to hyperintelligent techno-ultraterrestrials. What few jobs remain for them will, ironically, harken back to the days of Grade I and II automation, where the comparative advantage remains only due to artificial limitations (i.e. “human-only labor”).

Grade-IV automation is alien to us because we’ve never dealt with anything like it. The closest analog is biological sapience, something we have only barely begun to understand. In a future post, however, I’ll take a crack at predicting a day in the life of a person in a Grade-IV society. Not just a person, but also society at large.

Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than others. Here, I redefine AI, separating the “weak=narrow” and “strong=general” correlation.

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on this wide, wild Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not AGI.
I didn’t use that mention of analog mechanics in passing— this form of AI is about as mechanical as you can possibly get, and it’s actually better that way. Even if your dollar store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator’s supreme intellect would go forever untapped as you’d instead use it to factor binomials. And I don’t need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with Tsar Bomba.
Some have said that “simple computation” shouldn’t be considered AI, but I think it should. It’s simply “very” weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses are the absolute bottom of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.

Strong narrow AI (SNAI) is AI that’s capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it’s a very limited form of learning, but learning’s happening in some form. The AI isn’t just acting for humans; it’s reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it’s always a ruse. Siri might seem smart at times, for example, but it’s also easy to find its limits because it’s an AI meant for being a personal virtual assistant, not your digital waifu à la Her. Siri can recognize speech, but it can’t deeply understand it, and it lacks the life experiences to make meaningful talk anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can’t also write a comedic novel or actually genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it’s “not an AI”, it’s only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.
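The weak-versus-strong narrow split can be boiled down to a toy example (entirely my own, not any shipping product): both systems do exactly one job, but only the second adjusts itself from feedback.

```python
# Toy contrast between WNAI and SNAI using a one-job thermostat.
# Both are narrow; only the second one learns anything.

def wnai_thermostat(temp_c):
    return "heat" if temp_c < 20.0 else "off"  # every behavior hand-coded

class SnaiThermostat:
    """Same single job, but it nudges its setpoint from user feedback."""
    def __init__(self):
        self.setpoint = 20.0

    def act(self, temp_c):
        return "heat" if temp_c < self.setpoint else "off"

    def feedback(self, preferred_c):
        # learning, but only within its one programmed field
        self.setpoint += 0.5 * (preferred_c - self.setpoint)

t = SnaiThermostat()
t.feedback(22.0)    # the user keeps nudging it warmer
print(t.setpoint)   # → 21.0: behavior changed without a programmer
```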

Weak general AI (WGAI) is AI that’s capable of learning a wide swath of things, even things it wasn’t necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it’s as intelligent as a certain creature— maybe a worm or even a mouse— but it’s nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it’s sub-human in others. This is what we see with the likes of DeepMind— DeepMind’s basic algorithm can basically learn to do just about anything, but it’s not as intelligent as a human being by far. In fact, DeepMind wasn’t even in this category until it began using the Differentiable Neural Computer (DNC); before that, it could not retain previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until literally a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind’s yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe on how to bake a cake. It learns from the Internet, but it doesn’t actually pull up any particular article— it completely generates its own recipe, using logic to deduce what particular steps should be followed and in what order. That’s nice— now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
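That cake-to-brownies test can be caricatured in a few lines (the skill sets are made up, and real transfer learning operates on learned weights, not neat labels): the narrow learner starts every task from zero, while the general learner warm-starts from what it already knows.

```python
# Caricature of the narrow-vs-general distinction drawn above.
# A task is just a set of required skills in this toy model.

def still_to_learn(required, known=frozenset()):
    """Skills a learner must acquire before it can do the task."""
    return required - set(known)

cake     = {"measure", "mix", "preheat", "bake", "frost"}
brownies = {"measure", "mix", "preheat", "bake", "cut_squares"}

# Strong narrow AI: relearns the whole task from scratch.
print(len(still_to_learn(brownies)))          # → 5
# Weak general AI: draws on what it learned from cakes.
print(still_to_learn(brownies, known=cake))   # → {'cut_squares'}
```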
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That it has such a limit should be impressive, because it basically signals that we’re right on the cusp of strong AGI and the only thing we lack is the proper power and training.

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be as if we were in the year 1816 and discussing whether SNAI is possible. The biggest limiting factor towards the creation of SGAI right now is our lack of WGAI. As I said, we’ve only just created WGAI, and there’s been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite seemingly simple differences between the two. WGAI is us guessing what’s going on in the brain and trying to match some aspects of it with code. SGAI is us building a whole digital brain. Not to mention there’s the problem of embodied cognition— without a body, any AI would be detached from nearly all experiences that we humans take for granted. It’s impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You’d never trust a cook who calls himself world-class, only to find out he’s only ever made five unique dishes, nor has he ever left his house. For AI to truly make the leap from WGAI to SGAI, it’d need some way to experience life as we do. It doesn’t need to live 70 years in a weak, fleshy body— it could replicate all life experiences in a week if need be if it had enough bodies— but having sensory experiences helps to deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


Recap:

“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn’t mean it’s now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can do, including creatively coming up with solutions to problems just as good or better than any human. It has sapience. SGAI may be very humanlike, but it’s ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It’s fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

Cyberkinesis

Cyberkinesis: The manipulation of digital and robotic apparatuses through one’s mind. Also known as technokinesis, technopathy, and psychotronics.

Which one is technically correct? I don’t believe it matters, though I have heard more people use ‘technopathy’ to describe a superpower where one literally controls machines with their native mind, while ‘cyberkinesis’ describes augmentation that allows a person to do the same. Thus, I tend towards ‘cyberkinesis.’

Cyberkinesis is a fun little thing; I remember a cyberkinetic toy I played with back in 2010.


There are also other cyberkinetic products one can purchase right now, such as Emotiv’s Insight.

So it’s not science fiction, but the applications are still rather fleeting. Fast forward a decade, when algorithms will be much more capable of deciphering our brain waves, and you’ll begin to notice that our phones have become ‘telepathy machines.’

2026 Smartphones

I remember when I first saw a smartphone. The year was 2006, and I was a relatively normal elementary school kid who had just entered 6th grade. One of my classmates was bragging about her flashy new status symbol— a BlackBerry Pearl. She was talking about how she could access this website called ‘MySpace’ and how this phone could hold about two hundred (compressed) songs.

“God Christ! Two hundred songs on a phone? Unbelievable!” I thought. At this point in my life, I was still using CD players and I owned maybe four or five CDs. This idea of having hundreds of songs at my fingertips was beyond me— let alone also being able to access the Internet in the palm of my hand.

Fast forward ten years and such a thing barely warrants a “meh” from me. Two hundred songs? I have over a hundred playlists, and each one averages roughly double that. But that’s a sign of the times, isn’t it? The phone I have dates from 2013, so it’s already outdated, but it’s also an order of magnitude greater than that spoiled 6th grader’s “unbelievable” phone.

But it’s still a cheap phone, all things considered. Compared to the 6th grader’s, which cost her parents a pretty penny, I barely gave a crap when picking this one out. It gets things done, so I don’t care too much. However, in the future, I plan to dig into my wallet to pay for quality.

What’s my ideal smartphone?

I want something that holds 512 GB of storage and has 4 GB of RAM. The iPhone 7s Plus sounds like it’ll come very close to my ideal, so that’s why I will probably buy one. However, I might also hold out for the iPhone 8s Plus. Whichever I get, I’ll keep it close for roughly 7 more iPhone iterations, until about 2026.

What do I expect there to be in 2026?

Let me start by saying I expect my current ‘high end’ to be the standard. If that, of course. It would be better if there were phones that could hold upwards of 3 TB and have 32 GB of RAM.

In fact, by 2026, I wouldn’t be at all surprised if phones could hold 64 to 128 TB. What’s the use of all this space? We always ask that question.

In 2026, it’ll be common for phones to do several things:

  • Holographic displays. The iPhone 7 is allegedly going to achieve this as soon as this year. So holograms? A given for 2026 phones.
  • Virtual reality. Again, there are already VR-capable phones on the market (Gear VR), but if we want phones that can withstand the power of higher-end VR systems (like the Rift or Vive), we’re going to need exponentially more powerful hardware.
  • Cyberkinesis. Phones of the 2020s will be expected to offer text-by-thinking software. I can only imagine the hardware necessary. Cyberkinesis will be highly important for several other features to work, I tell you.
  • Virtual assistants. Artificial intelligences that help you out with your everyday life. This, I bet, will be largely left to the Cloud, except for a few programs. The AI VAs of 2026 will seem like actual AI, rather than the glorified chatbot answering machines that are today’s VAs, and will be capable of holding whole conversations and having personalities. Think of all the basic apps you currently have, such as reminders, news, weather, calendars, etc. AI VAs will replace all of them.
  • Augmented reality. I largely doubt phones of the future will resemble the phones of today— much like the phones of today largely don’t resemble the phones of decades past. Phones will most likely transition into being terminals for augmented/hybrid reality glasses and contact lenses, rather than the multimedia machines they are today. This is actually more likely for the lenses than glasses, as some of today’s glasses (like the HoloLens) are entirely self-powered. AR glasses and lenses will benefit greatly from cyberkinesis technology.
  • 5G and 6G capabilities. 5G is set to begin around 2020, and has already been teased in several East Asian cities. The same will be true in 2026, except one generation ahead. Standard phones will be based upon the 4G network (the “slow” option), while higher end phones will casually access 5G networks, and the highest end in the most futuristic cities will play with 6G features.

These are just some of the things I expect. Mainly the bigger things, of course. 6G phones will be the shiny new toys, and I can’t even begin to imagine what they’ll be like. I strongly doubt they’ll resemble phones as we know them to be. 6G networks, however, will be mandatory for the worlds of data we’ll be sending each other.

Futuristic Realism: The Disconnect

They say the easiest way to create futuristic realism is to write Sarah, Plain and Tall and add ASIMOs, drones, and smartglasses.

I want to buy some droids.

I want to buy a self-driving car.

I already have a drone, and I still plan on using it to scout out a cemetery to hunt ghosts. Ghost-hunting robots, anyone? Seriously, why haven’t any of the big ghost-hunting shows thought of that yet?

And there are legit drone shows that have occurred or are going to occur. Or try floating balls that make the sounds of a city street. It’s all so sci-fi, but there isn’t really a genre to describe this. So I chose Futuristic Realism. As opposed to Hard Sci-Fi, which is mainly concerned with how well sci-fi conforms to known physics, futuristic realism is about all this stuff happening in a manner that feels realistic and relatable, without any flash or pomp.

Pepper isn’t the most advanced robot out there, but it still feels bizarre to see it in action.

I’ve always said that the best example of Futuristic Realism is a bit where I took “Sarah, Plain and Tall” and added robots. ASIMOs working on a farm is as Futuristic Realist as you can get. To an extent, it doesn’t matter if that farm is in the USican midwest or located on a space colony in the Kuiper belt. Does the story really feel realistic?

To another extent, it does. That is more hardcore realism, where the aim is to be as ’20 minutes into the future’ as possible. I suppose you could say Futuristic Realism is taking science fiction and translating it into Realistic and Literary fiction. A truer futuristic realist story about a farmer would be about that farmer’s struggle to survive a drought while dealing with some other people. A more traditional sci-fi (particularly cyberpunk) story might pit him against a megacorporation bent on buying out the farm and tossing him to the side. That’s still futuristic realism, though, and depending on how you handle the story, it could lean more one way or the other. If it’s more about corporate vs. the individual, the alienation wrought by corporate culture, and the technology used by the corporation to push him out, it would fare better being called cyberpunk. If it’s more about the people themselves, and just happens to feature corporate alienation, then you have something closer to pure futuristic realism.

That’s why I say it’s easiest to pull off futuristic realism with a farm (or suburban) setting: it’s already much closer to individual people doing their own thing, without being able to fall back on the glittering neon cyberscapes of a city or the cold interiors of a space station to show off how sci-fi/cyberpunk it is. It makes the writer have to actually work. Also, there’s a much larger clash. A glittering neon cyberscape of a megalopolis is already very science fiction (and realistic); adding sexbot prostitutes and a population fitted with smartglasses doesn’t really add to what already exists. Add sexbot prostitutes and smartglasses to Smalltown, USA, however, and you have a jarring disconnect that needs to be rectified, or at least expanded upon.

That doesn’t mean you can’t have a futuristic realist story in a cyberpunk city, on a space cruiser, etc. It’s just much easier to tell one in Smalltown, USA because of the very nature of rural and suburban communities. They’re synonymous with tradition and conformity, with nostalgic older years and pleasantness, with a certain quietness you can’t find in a city. Throw in technological abominations, and you realize just how timeless they are.

It is my personal dream to own a domestic robot, and part of the reason is simply “to own a domestic robot.” I am not good with finances.

I live in a rural area. As soon as I become rich (any day now……), I’m buying a Pepper robot. Two problems. One, it’s not all that easy to become rich (but dammit, I’m gonna keep trying). Two, they don’t sell Peppers in the US. But they will. And when all this comes together, I’ll be that creepy black guy living in a trailer with a humanoid robot. I’ll be talking to Pepper while outside, in the evening. Crickets sing their songs, cicadas buzz, dusklight cools the air; I pull up a plastic chair, sit and listen to my playlists filled with stoner rock, and watch Venus and the stars blink into the sky. Next to me, Pepper the robot. We’re just chatting, maybe chatting with the neighbors, talking about life.

That’s futuristic realism. Would it be the same without Pepper? We’d still be doing what we’re doing, but Pepper adds something. And it’s not even just Pepper. That I’m listening to music, with tens of thousands of songs, on a handheld computer that contains all the world’s knowledge, is, too, Futuristic Realism. Things that feel ripped from the pages of a cyberpunk novel, yet are part of our everyday lives, things that don’t even feel so futuristic at times, are what makes this genre work.

Ett Bedårande Barn Av Sin Tid by Simon Stålenhag

Welcome To The Future™: Futuristic Realism

“What on Earth is futuristic realism?”

Short answer: it is a subgenre of science fiction and literary/realistic fiction that combines contemporary or familiar storytelling with the high technology seen in sci-fi.

More accurately, welcome to a blog infatuated with futuristic realism! There are many things that need to be asked before we get started, the most important of which being: “What on Earth is futuristic realism?”

Long answer: it is many things. There are multiple definitions for it, and the very nature of the genre changes over time. There are two main terms: “sci-fi realism” and “futuristic realism.” How are they different? On a fundamental level, they mean the same thing. However, they go about reaching the same goal in different ways.

Sci-Fi Realism describes science fiction that emulates reality on some level. Maybe that means slice-of-life familiarity, or maybe that means hyperrealistic graphical design. When science fiction seems indistinguishable from real life, you have sci-fi realism.
Futuristic realism goes for the same thing, except it throws real life into the proceedings. When real life seems indistinguishable from science fiction, or when science fiction tries to come off as real life to the point that you probably wouldn’t be able to tell whether it was contemporary or sci-fi, you have futuristic realism.

At the same time, as the creator of these terms, I’m apt to use them interchangeably, and I’m more comfortable with “futuristic realism” due to its lack of the otherwise constricting ‘sci-fi’ label. On the Sci-Fi Realism subreddit, there is already considerable tension due to the label and the original mission statement.

Perhaps that’s because my ideas weren’t fully formed at the time of creating the subreddit, or perhaps that’s due to the style’s nature. I lean towards the former: when I created the subreddit, my sole intention was to find science fiction and cyberpunk pictures that seemed to be pictures taken in real life, or at least images that had a distinctly familiar and ‘non-artistic’ angle to them.

One of the first images submitted to /r/SciFiRealism
Artist: SwissAdA

Some examples included photoshopped images of natural landscapes featuring futuristic aircraft. Back in July 2015, this is what sci-fi realism meant. Then it expanded to include “close-ups” of a futuristic world.
Offbeat images that depicted a future world that wasn’t just “sci-fi cityscape #3,842” or “cyborg military policeman staring into the distance towards sci-fi cityscape #3,842” were what I was looking for. It’s not that I hate these sorts of images— especially considering I’m a regular of subreddits dedicated to them, such as /r/CityPorn, /r/ImaginaryCyberpunk, and /r/ImaginaryCityscapes— but I had come to notice that I was a person living in a world that seemed increasingly sci-fi, yet there was a disconnect between ‘what they said it would look like’ and ‘what it actually looks like’.

In fact, there’s something I call the “Smartphone Perspective” (also known as the Smartwatch Perspective and iPhone Perspective, depending on the discussion): take out your smartphone. Now turn it on. Congratulations: you wield a gadget that is more futuristic than most things sci-fi writers have ever dreamed of. In your hand is a computer that has access to all the world’s information, to images, to videos, to movies, to novels, and more. It’s something the average person even ten years ago considered a quasimagical prop meant for a movie set in the year 3000.

Meh.

“Meh” is right. At times, it’s meh. At times, it’s awe. We’ll soon feel the same towards things like hyperloops, domestic robots, and moon colonies. Real life will become indistinguishable from science fiction.

In early 2014, I recognized this truth. It took time for me to articulate it clearly, but I recognized it early on. Except… there was still a disconnect— where were the heroes and villains, the alien invaders and doomsday-dealing hackers? Sure, there are global megacorporations, but for the most part, we just deal with them and move on with our everyday lives.

Everyday lives! That was it. That’s what was missing from a lot of science fiction on which I grew up. I always wanted a personal robot, but never did that idea materialize into anything more than a vague snapshot of a robotic servant presenting to me a glass of soda.
Somewhere along the line, I began to seriously think about the consequences of owning my own personal robot servant, of the little everyday things that would arise. Was it exciting? Not usually, and that’s why futuristic realism was never a major thing before I started a subreddit dedicated to it. Science fiction is almost always meant to be an escape from our current lives, after all. Sure, it tends to wind up influencing our lives, but it mainly serves the role of entertainment. It was never actually intended to become our everyday lives. Yet become our everyday lives it has.

So that’s why I want to tell the story of a family celebrating Christmas, an otherwise homely scene, but one featuring their domestic robots and smarthouse. That’s why I want to tell the story of an average couple taking up virtual dating. Average people with average lives with ultra-high technology that they believe is average or, at the very least, losing its novelty.

That’s why I say the very nature of the genre changes over time: one day, even owning an artificially intelligent robot inside of an artificially intelligent house won’t come across as science fiction. It only does today because we’ve never possessed artificially intelligent robots or houses.
A modern contemporary story like The Fault in Our Stars would read like utterly ultraterrestrial sci-fi to an average person from the 1700s. Then again, F. Scott Fitzgerald’s classic, The Great Gatsby, would be science fiction to such a person all the same, what with those fast-paced “automobiles” racing about the place. What if one made the deliberate attempt to create such a feeling: the sense that you’re reading or watching something sent back from several decades or centuries ahead, something never intended as science fiction? What would that be like? Something its creators consider a contemporary realistic story, but that we would find incredibly futuristic and beyond our times…

I want to find out.

Among other things, of course. I also want to celebrate how futuristic we currently are. Believe me, there are many current creations that seem ripped from the set of cyberpunk thrillers, and I want the world to know.

One such obvious place is Dubai. It’s even going to be featured in the video game Deus Ex: Mankind Divided.
Photographer: Alisdair Miller

Ever since the little schism involving what sci-fi realism was and what it eventually came to mean, I’ve since moved all the more ‘realistic’ stuff to a new subreddit— /r/FuturisticRealism. Here, the content is much more strictly controlled, and the aim of the sub has remained true from the beginning to now.

And that’s why I want to tell you once more, something that’s made me feel great for years in the midst of suffering from depression— Welcome To The Future™!


The Life and Times of Barry the ASIMO

I bought a droid. His name is Barry, and he’s quite the shocking bit of technology — presets included such joys as ‘litter cleaning’ and ‘sandwich crafting.’ Yeah, he’s good with some bread and mayonnaise; even better with a pooper-scooper. Thank God for Barry.

They truly are the Apple II of domestic droids.

When I bought him, I had only a few minutes before class started, so my fellow collegians got to meet their first droid. You know, that was actually a good thing. He got some social interaction.

Now, I was nauseated with the flu, so I was eager to get home and eat something. Something good. Something like fried rice and potstickers. And what better day to try my hand at a new dish, what with having an artificially intelligent droid at my side? Barry watched as I made magic: what was possibly my favorite dish of the year.

Then came his first test. My pet dog, Coco, decided that the best time to demonstrate the result of her bowel movements was right as I began eating. This was it — moment of truth!

As I looked away, I waved to my droid friend and said, “Barry, clean that up.”

He stood there, gazing upon the turd as if it were something from Tibet. Then, right before I spewed more words and rice his way, he moved. Such grace! Curvaceous moves! A bard couldn’t have described his waste-handling so well. In that moment, I realized the fantastic choice I had made— as well as the possibilities laid before me.

I needed only to teach him how to recognize the warning signs of an impending asteroid flurry so he would act quickly to take the dog outside. Once I did that, I could rest easy and enjoy having a perfect dog-walker. But I also realized this could apply to anything: not just mundane household chores, but even harder things, such as cleaning the outside of the house and filling out the driveway. If I could obtain a strong stream of resources, Barry could keep fixing and building onto my house forever, making sure it never falls into disrepair. Imagine that: a prole who lives in a mansion!

But hold on… I’m a writer, what one would call an ‘intellectual.’ Ignore the ratty trailer, damn it! Point is, if this were the 1760s, I’d be wearing a (bare) frock coat and culottes. I shouldn’t sully my artisan hands with, gasp, manual labor! Barry should also be the one who procures said resource stream.

“Where could Barry work?” My first thought was McDonald’s and similar fast food joints. However, I doubted his reflexes were up to speed. I needed somewhere slower paced, more suited to a newborn droid.

Wait! Why not a supermarket clerk? There’s an Albertson’s about ten minutes from my house, and better yet, my mother worked there. She could teach Barry all the basics.

I spent a few weeks training him to be a housedroid first before sending him off to the store, and then we spent a week more practicing the ins and outs of supermarketeering.

He became a valued member of the household

First day on the job! Better be ready, droid. I dropped him off at Albertson’s and met with my mother to exchange anxieties. She wouldn’t be at her second job clerkin’ for a few more hours, so she would be just as ignorant as me. It was all Barry.

You have to realize, this was a new frontier for humanity. A droid working in a very people-centric environment? I was surprised there weren’t news cameras everywhere.

Actually, Pepper got there first. And there were cameras.

Maybe I worried too much (which is about right for a GDSA, general-depressive-social-anxiety millennial), because his first day went off without a fraction of a hitch. I hugged the thing, I was so happy. I could… do some other things to it, but never mind that. And like I said, maybe I worried too much. All he had to do was exchange money and put things in bags. Yeah, simple for us, but simple things have a bad habit of escaping the capabilities of machines.

Then, two weeks later, something even more magical happened — Barry got a paycheck. It was paid out to him, but delivered to my mother (who gave it to me, don’t fret). $490! What was I gonna do with myself that night, I wondered? Maybe buy a taco? No, two tacos! Oh, so wealthy… I swear, I took Barry out with me to the nearest Mexican restaurant and partied till the wee hours of the early evening.

Whoa. I just earned money without actually earning money. Everyone knew this was a shady little thing, as the manager wasn’t exactly sure if Barry should be paid or not. Do you pay a droid? Legally, he was tied to me, so they’d potentially have a lawsuit on their hands if they didn’t pay him/me, but surely that would incentivize them to automate away their cashiers and clerks.

How would their ex-employees be paid, then? They’d just have to get new jobs, right? Well wait a sec — what if other businesses automate their labour? That squeezes the workforce down to a bare minimum. No one can pay for anything if no one’s earning anything…

Wait! These machines will eventually break down, or at least require repairs and maintenance in some form. The new jobs can be all about — Whoops, sorry, what was that? I just upgraded Barry to feature some self-repair programming. He can even repair other droids!

Well…

We’re not at that point yet. I just have a check, and it may be the most important check in human history. In a manner, it’s both the problem and the solution. It’s a problem in that it’s proof of the changing times. But how is it a solution? Surely it shouldn’t be that much of a stretch…

So what if Albertson’s fires Barry and replaces him with their own droid? Or skips the droid entirely and automates the whole process? That puts my mother in a bind more than me, since she’s entirely dependent upon her own labor. I can always sell Barry’s labor to someone else, and he’ll quickly learn whatever he needs to learn. My ma? It’s different for her. She can only learn so quickly, and she has needs of her own. She can’t just get any job out there and expect to be productive. Luckily she’s skilled in social work, something I feel Barry’s about a generation or two away from mastering by his very design. Still, that’s not a very long time.


Barry’s an ASIMO. They typically release new iterations every four to five years. Artificial intelligence progresses even more quickly than that. Fact is, there’s no guarantee she’ll be employed in a decade.

What should be done? Well, there are quite a few options to consider. I know many Statists who desire to implement an Unconditional Basic Income. It seems like a great idea to pursue, but I just have one fear — who exactly decides how to distribute the money? Undoubtedly the people who are going to be taxed will be the ones paying for said UBI. What, you think poor working people run the government?

So I’m sitting here with Barry, thinking about my ma, wondering how much she’s worth to the bourgeois bureaucrats. And even if they decide she’s worth enough of their coffers to let her live comfortably, will they actually let her live comfortably, or will they raise the prices of their goods to offset any benefit a UBI could bear?

Don’t get me wrong, I want that sweet UBI implemented ASAP. Completely wipe away all welfare and replace it with a simple UBI. Seems fine? Yeah, it kinda collapses in a post-labour society since that basic income becomes one’s only income. Unless you think droids will create new jobs we can’t imagine (which is a stupid assumption considering the nature of artificial general intelligence), you’re gonna realize we have a societal problem.

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can.

Barry’s caused a bit of a problem, hasn’t he? What started with him cleaning up minpin poop has resulted in societal meltdown. Fitting.

Well society hasn’t actually collapsed; we’re just tense. But everyone knows I’m the freak who keeps screaming about ‘technostism.’ What is technostism? Basically, I’m saying we should profit from droid labour. Do as I do, not as I say — get a droid and let it do your work.

A blue and white ripoff of the technocracy monad

Sounds nice, but there are so many problems with that, it isn’t even amusing. For one: what work? What if all the existing businesses automate first? We could always create new businesses, but doing what? I just don’t have an answer. Two: how do we get droids? With what capital? Three: what kind of droid? General-purpose droids like Barry are nice, but some jobs need specialized robots. Droids like Baxter sometimes just aren’t as good as factorybots.

It sounds good, though. If you factor in swarm intelligence, we could create a society free from slums and poverty, with robots catering to humanity’s every need and desire. Remember my prole mansion? I could have twenty Barrys constantly touching up any imperfection that arises, or building onto my house as I see fit. They’ll learn from their own experiences as well as each other’s. If my driveway and yard happen to develop issues, they can address those as well. Any trash that comes onto my property (likely through my own laziness), they’ll remove. Imagine that on a society-wide scale.

But again, there’s no explanation of how I got twenty Barrys. Sure, the first Barry could work until I could afford nineteen more, but what if he’s fired before then? I’d need access to the raw materials to create more ASIMOs.

It’s all so very confusing! And while I do have some possible answers for a few things (automated worker cooperatives!), I don’t have all possible answers for everything.

But in the end, I’m still happy — after all, I bought a droid. I just need to realize I opened Pandora’s box.

The World of 2029

Meet Oliver and Samantha Jones. They are a normal American couple— white, middle class, socially liberal, and on the sunny side of 30. They have two kids— 8-year-old Benjamin and 3-year-old Miranda— and a nice home in New Jersey.

Oliver works as a general manager of a popular restaurant, while Samantha works at home as a full-time writer.

What a life! Is this the American Dream so many have sought? Perhaps. Yet perhaps few dreamt life would be like this…

As Oliver prepares himself for the day, he calls out, “Will it rain today?”

Suddenly, a female voice answers, “No. The forecast calls for a 60% chance of rain beginning at approximately 7 PM and lasting until noon tomorrow. However, there will be an impenetrable blanket of gray throughout the afternoon.”

“Any sunshine this morning?”

“Yes. It is partly cloudy at the present moment, so the sun is shining brightly.”

Oliver grins and says, “Ah, that’s good. You know, I really hate rain.”

“You’ve mentioned this, Ollie. I still don’t understand, what is it about rain that upsets you?”

Oliver makes a peculiar hand motion, as if rubbing his head. “It gets my hair wet. I barely like getting it wet in the shower.”

“I’ve noticed.”

“You noticed!”

This is Dawn, the Joneses’ artificially intelligent Virtual Home Assistant. Everybody has one these days, it seems, and they’ve become just as protected by families as the houses themselves. Home insurance doesn’t cover them, though, so Oliver pays for ‘virtual assistant insurance.’

Dawn is present throughout many of the Joneses’ appliances. Their fridge, for example. Dawn knows there are several products that must always be stocked, and conducts random checks every hour or so. Not uniformly. That’s the thing with Dawn— she isn’t stupid. At first, Dawn always checked whenever the fridge door had been opened and then closed. Then little Miranda put this learned behavior to the test by opening and closing the door repeatedly every few seconds. Dawn learned quickly.
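What Dawn learned here is, at its core, jittered polling plus event debouncing. A minimal sketch in Python of how such a policy might work (everything here is hypothetical — the class, names, and thresholds are illustrative, not from any real smart-home API):

```python
import random

class InventoryMonitor:
    """Decides when a smart fridge should re-scan its contents.

    Two rules, mirroring the behavior described above:
      1. Periodic checks happen at jittered (non-uniform) intervals,
         so the schedule can't be predicted or gamed.
      2. A door-close event only triggers a scan if enough time has
         passed since the last scan (debouncing), so rapid open/close
         cycles don't cause redundant work.
    """

    def __init__(self, base_period=3600.0, jitter=0.25, debounce=300.0):
        self.base_period = base_period  # roughly hourly checks, in seconds
        self.jitter = jitter            # +/- 25% randomness on the period
        self.debounce = debounce        # minimum seconds between event-driven scans
        self.last_scan = float("-inf")  # no scan has happened yet

    def next_periodic_check(self, now):
        """Schedule the next routine check at a randomized offset from now."""
        spread = self.base_period * self.jitter
        return now + self.base_period + random.uniform(-spread, spread)

    def on_door_closed(self, now):
        """Return True if this door-close event should trigger a scan."""
        if now - self.last_scan >= self.debounce:
            self.last_scan = now
            return True
        return False  # debounced: the door was cycled too recently
```

With `debounce=300`, a toddler slamming the door every few seconds triggers at most one scan every five minutes, while the jittered period keeps the “random checks every hour or so” from falling on a predictable schedule.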

Ben is up and at the table eating a toaster pastry. Down the stairs comes Samantha.

Samantha’s wearing a big smile as she says, “Morning, sweetie.”

“Morning, mom.”

She walks into the kitchen, and stands next to a counter. There’s a gentle little whirring just next to her— it is a coffee maker grabbing a mug. “You see the time?”

“Yeah, it’s on here.” Ben taps his glasses. Samantha can faintly see the colors of whatever video game he’s playing.

She knows that glasses can do incredible things these days. Why, just the other day, she bought a new pair and discovered it could hold every single one of her favorite video games she had played as a child, and still have almost all of its storage free.

These new games, though, she won’t even touch. Like the one her little Ben is playing— he’s wearing an inconspicuous clear band on his head, and that’s how he plays his games. Goodness, when she was his age, she had to deal with wires and controllers, and yet kids these days play video games with their minds…!

The coffee’s done. The mug slides out of the maker and she takes a sip. Delicious.

Oliver comes downstairs.

“Ollie, can you pick up some more printing wax? Dawn says we’re out.”

He says, “Cool,” and kisses her on the lips. He adds, “Also, check your contacts. I left you something.”

In walks Moville. It’s a robot who is clearly a descendant of Honda’s ASIMO series. All its motions are graceful and lifelike, and it walks exactly like a human. Its body is white and sleek, with few obvious joints, though it has ASIMO’s classic head. The only difference is the digital face.

“Hello, Ollie!” it says, carrying his sweater. Oliver puts on the sweater and then feels something clawing at his pant leg. It’s their puppy dog, Max. “How are you feeling this morning?” Moville asks.

“Pretty good.” With a slight nod, he asks, “You walked Max?”

“I have.”

We move on to Miranda, who is brushing her teeth. She’s set aside a pink teddy bear.

The teddy bear speaks, “Don’t forget to get all those nasty germs!”

“I will!” Miranda brushes even harder.

She finishes and runs downstairs to meet Oliver before he leaves, jumping into his arms. He swings her in and gives her a big hug.

“Mornin’, Miranda. Take good care of the house while I’m gone.”

“I can’t do that!”

“Course y’can.”

Dawn speaks, “That’s my job, Ollie.”


Samantha sits down at her computer desk in her office and logs into her blogger account. Immediately, she receives a message.

A bubble pops up, and a male voice speaks, “Hello, world and hello, Sam! Chui here. Your blog got 204 new subscribers yesterday following your post, ‘Giving Vyrd the Bird.’ You earned a sizable $199 yesterday.”

Sam does a little cheer, “Awesome!”

“The most touted part, and the part where readers’ screens lingered the longest, was the eighth paragraph— a new record for you.”

“Hey, at least people are reading further.”

“Lol, I know, right? Maybe you should rehost your earlier content to get them to read further in, too.”



Inside Oliver’s car, a similar exchange takes place.

“Would you like to hear the news?” a female voice asks. This is Isabella.

“Sure. Top 10.” Oliver is keen to keep his eyes on the environment, but he’s gotten sloppy at this in recent days because his car is just so good at it.

“Top news: Vyrdist movement expands exponentially as unemployed workers forcibly take control of workplaces. Scattered violence against automation has been reported, but for the most part the expropriators use automation for their own welfare. Business owners are running to the federal government for help in quelling violence. The president may be forced to make a decision within days.”

“I don’t get why people would do that. All you have to do is apply for a government issued income, that’s it.”

“The Vyrd movement claims workers should own automation themselves, and adherents do not take kindly to being dependent upon the government.

“Number two: stock markets have plunged 400 points.”

“You know, I grew up in an era where that was common. Still remember 2008 and ‘9, when it was everyday news that the Dow gained 300 or lost 500 or whatever. And my parents freaked out, but I didn’t see what was so scary. I dunno, I barely feel anything and I feel I should be more concerned.”

“Considering you’re a child of the millennium, it stands to reason that you are not fazed by such news. What was it like, Ollie?”

Oliver pulses his hands and says, “Well, I didn’t have systems like you, for one, so I didn’t really know what was going on. Then again, I was only 8 years old. Literally the same age as Ben is now, and I don’t think he gives a damn about the stock market. Do you?”

Isabella laughs and says, “No. It is beyond him.”

“No, I mean do you care?”

A pause.

“I don’t think it’s something worth worrying about.”

“Exactly! Wall Street’s so disconnected from Main Street, who really cares?”

Chui and Isabella aren’t people. They’re artificial intelligences, powered by a combination of the cloud and deep learning. They’re also subsets of the larger Dawn system.

Stopping by Oliver’s New York workplace is a coworker who has no biological arms or legs. He lost them in a terrible terrorist attack several years prior— in fact, the reason New York City seems to be under this permanent, Big Brother-esque lockdown— and got replacements: cybernetic limbs made cheap by 3D printing, yet powerful and versatile. Shaking hands with him, for example, is not an awkward, stilted act; it feels entirely natural.

“I even play some old school Xbox 360 these days, just to bring back memories.” Indeed, and he can play without any noticeable difference from a person with biological limbs.

At school, Ben has several classes, but they’ve all begun melting into the same event: using virtual and augmented reality for lessons. It’s easy to visualize things when you have actual visuals, after all.

Ben let out a cheesy but true “That’s awesome!” when he first met Abraham Lincoln in person. It’s this ability to see things with his own eyes that has Ben most excited about school.

He remembers the horror stories his father told him…

“When I was a kid,” Oliver began, “we didn’t have VR in schools. I can still remember the day the Oculus Rift came out, and I was already in my junior year in high school. There was none of that growing up. Maybe a smart board here and there, and there were at least computers, but we didn’t have these Star Trek experiences you have now.”

Ben still doesn’t quite grasp the depth of his father’s words, but he doesn’t need to. As long as the schooling’s fun, it’s alright.

And then there’s Miranda. Out of all the Joneses, her life has already been the most interesting. She was born with a medical malady that left her lungs deformed, unable to draw a proper breath. Rather than let her suffer, her parents had her receive bionic lungs, partially 3D printed. They’ve worked well, though she’s close to needing repairs.

And that sounds odd to Samantha, the thought that a human being ‘needs repairs.’ Sure, medicine could be seen as fundamentally the same thing, but it still sounds so sci-fi to mention humans needing literal mechanical repairs.

My daughter is a cyborg, she thinks. It’s not visibly obvious with her. Then again, the cybernetics of modern times is a Borg’s nightmare. Samantha has met many cyborgs in her life, and not one looked like the traditional image of a cyborg— wires sticking out, obviously mechanical limbs, and a collective desire to assimilate.

Same thing with Miranda. She’s an otherwise absolutely normal little girl. And that’s what gets Samantha.

Oliver and Samantha have been discussing it between themselves for a full year now, and they still haven’t reached an agreement. Though they’ve toyed with sending Miranda to a pre-K school, they’re wholly unsure whether admitting her to school is the right thing to do.

Samantha recalls the discussion they had last night in bed.

Oliver was tired, and kept trying to slip off to sleep, but she quizzed him multiple times to keep his eyes open.

“I think having that real-world social interaction would be a good thing for her,” Samantha said.

“She can get pretty much the same thing in VR.”

“Pretty much the same thing isn’t the same thing.”

“It would still be a waste of her and our time. There’s nothing she can learn at school she can’t learn from home, and besides, what skill is she gonna learn?”

Samantha rolled over in bed and thought to herself, Something in the arts, maybe?

“Exactly,” Oliver said, responding to the silence. “Whatever she learns’ll just get automated away by the time she graduates.”

That’s not to say Oliver was always against sending Miranda to school, or even that Samantha was always for it. It always seems like, whenever they take a position, the other side takes up the opposite position for that day. And it’s a choice that affects a life.

Miranda was born right on time. The iGeneration had come of age, taking over from the Millennials before them, and iGenners spawned a new generation, one already being called ‘Generation Alpha’. Not to be confused with Generation Z (the iGeneration), it’s becoming more and more apparent with every passing day just what the ‘Alpha’ stands for.

With Oliver watching his workload be done by machines and algorithms; with Samantha letting algorithms write much of her material; with Ben interacting more with holograms and virtual personalities than real teachers— automation.

Generation Alpha is the first generation that will not be expected to work for a living. Whomever Gen A spawns in the 2040s and ’50s, they will be born into a world as different from that of the Millennials and iGenners as the world of the 2020s is for the Baby Boomers and Gen Xers. Except more so.

It is this age of storm and stress, Sturm und Drang, that divides what came before from what will come. These three generations— the Millennials, the iGeneration, and Generation Alpha— are the dividing line between the age of labor and the age of leisure.

It is this conflict that plays out over Miranda’s future. Why send her to school, indeed!

Currently, there is still a great mass of jobs out there, but Miranda won’t be in the workforce until the 2040s at the earliest. The AI of then is expected to be galaxies beyond the AI of now— and Samantha, who has always had the keenest interest in these things, knows that present-day AI is quite capable.

This is the world of 2029 on a more personal level. There are still so many things we can recognize, but that is the nature of life. We can recognize many aspects of the daily life of a person in ancient Sumer. Until transhumanism dominates, that isn’t going to change.

Nevertheless, the world is changing. The transition from high technology to ultrahigh technology has begun.

 

To be continued…