Paratechnology

Unexplained Mysteries of the Future

I am only a half-believer in the paranormal, so taking mysteries of the unexplained at face value smacks of the ridiculous. Yet I can never shake those doubts; they hang onto my mind like burrs.
The mammalian brain fears and seeks the unknown. That’s all I want— to know. The chance any one particular paranormal or supernatural happening is real is infinitesimal. Cryptids are usually another story, save for the most outlandish, but what likelihood is there that evolution wrought a lizard man or a moth man? Or that certain dolls are cursed?
However, I won’t cast off these reports completely until I can know for sure that they either are or are not true, as unlikely as they may be.

So here are a few words on the subject of paratechnology.


Self-Driving Cars Have Ruined The Creepiness of Self-Driving Cars

Imagine it's a cool summer evening in 1969. You're hanging with your mates out in the woods, minding your own business. All of a sudden, as you pass near a road, you see an Impala roll on by, creaking to a stop right as it closes in on your feet. Everything about the scene seems normal— until you realize that's your Impala. You just saw your own car drive up to you. But that's not what stops your heart. When you walk up to the window to see what fool tried to scare you, horror grips you as you realize the car is driving itself.

Needless to say, when your grandson finds the burned-out shell of the car 50 years later, he doesn't believe you when you doggedly claim that you saw the car acting on its own.

Except he would believe you if your story happened in the present day.

Phantom vehicles are a special kind of strange, precisely because you’d never expect a car to be a ghost. After all, aren’t ghosts the souls of the deceased?

(ADD moment: this is easy to rectify if you’re a Shintoist)

Nevertheless, throughout history, there have been reports of vehicles that move on their own, with no apparent driver or means of starting. The nature of these reports is always suspect— extraordinary claims require extraordinary evidence— but there’s undeniably something creepy about the idea of a self-driving vehicle.

Unless, of course, you’re talking about self-driving vehicles. You know, the robotic kind. Today, walking out in the woods and seeing your car drive up to you is still a creepy sight to behold, but as time passes, it grows less ‘creepy’ and more ‘awesome’ as we imbue artificial intelligence into our vehicles.

This does raise a good question— what happens if an autonomous car becomes haunted?

O.o


The Truth About Haunted Smarthouses

For thousands of years, people have spoken of seeing spectres— ghosts, phantoms, spirits, what have you. Hauntings can occur at any time of day, but everyone knows the primal fear of things that go bump in the night. It's a leftover of the days when proto-humans were always at risk of being ambushed by hungry nocturnal predators, one that now best serves the entertainment industry.

Ghosts are scary because they represent a threat we cannot actively resist. A lion can kill you, but at least you can physically fight back. Ghosts are ethereal, and their abilities have never been properly understood, because we've never been fully sure they're real at all. Science tells us they're all in our heads, but science also tells us that everything is all in our heads. Remember: ghosts are ethereal, meaning they cannot actually be caught. What cannot be caught cannot be studied, and anything that cannot be physically examined might as well not exist to science. Ghosts are so fleeting that we never get the chance to study them anyway, instead leaving the work to pseudoscientific "ghost hunters". By the time anyone has even noticed a ghost, it has already vanished.

Even today, in the era of ubiquitous cameras and surveillance, there's been no definitive proof of ghosts. No spectral analysis, no tangible evidence, nothing. Why can't we just set up a laboratory in the world's most haunted house and be done with it? We've tried, but the nature of ghosts (according to those who believe) means that actively watching for a ghost is no guarantee you'll find one, let alone capture usable data. Our technology is too limited and ghosts are too ghostly.

So what if we put the burden onto AI?

Imagine converting a known haunted house into a smarthouse, where sensors exist everywhere and a central computer always watches. No ghost should escape its notice, no matter how fleeting.

Imagine converting damn near every house into a smarthouse. If paranormal happenings continue evading smarthouse AIs, that casts near-irrefutable doubt on the larger ghost phenomenon. It would mean ghosts cannot be meaningfully measured at all.
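To make that concrete, here's a minimal sketch of what such a central watcher might do. This is a hypothetical illustration only: read_sensors() stands in for whatever the house's sensor bus exposes, and the four-sigma threshold and rolling window are invented numbers, not a real home-automation API.

    import time
    from statistics import mean, stdev

    BASELINE_WINDOW = 1000   # rolling window of readings used to model "normal"

    def is_anomalous(value, history):
        """Flag a reading more than four standard deviations from baseline."""
        if len(history) < 30:                 # too little data to judge yet
            return False
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and abs(value - mu) > 4 * sigma

    def watch(read_sensors, log):
        histories = {}                        # sensor name -> recent readings
        while True:
            for name, value in read_sensors().items():
                history = histories.setdefault(name, [])
                if is_anomalous(value, history):
                    log(f"{time.time()}: anomaly on {name}: {value}")
                history.append(value)
                del history[:-BASELINE_WINDOW]   # keep only the rolling window
            time.sleep(0.1)                   # the house never blinks

Unlike a human investigator, a loop like this never gets tired, never looks away, and timestamps everything— which is precisely the property ghost hunting has always lacked.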

Once you bring in transhumanism, the ghost question should settle itself. A posthuman encountering a spectre at all would be proof in and of itself— and if it never happens, if ghosts remain the domain of fearful, fleshy biological humans, then we will know once and for all that the larger phenomenon truly is all in our heads.


Bigfoot Can Run, But He Can’t Hide Forever

For the same reasons listed above, cryptids will no longer be able to hide. There's little tangible evidence suggesting Bigfoot is real, but if there's any benefit of the doubt we can give, it's that there's been very little real effort to find him. If we were serious about finding Bigfoot, we wouldn't create 'Bigfoot whistles' or dedicate hour-long, two-hundred-episode reality shows to searching for scant evidence. We would hook up the Pacific Northwest with cameras and watch them all.

Except we can't. INGSOC could never watch you at all times so long as the Party lacked artificial intelligence to do the grunt work for it. That's as true in reality as it is in fiction— if you have a million cameras and only a hundred people watching them, you'll never catch everything that goes on. You'd need to be able to watch every feed at every moment of every day, without fail. Otherwise, video camera #429,133 may capture a very clear image of Bigfoot, and you'd never know.

AI could meet the challenge. And if you need any additional help, call in the robots. Whether you go for drones, microdrones, or ground-traversing models, they will happily and thanklessly search for your spooky creatures of the night.
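As a sketch of that grunt work, imagine one process tirelessly sweeping every feed. Everything here is hypothetical except the frame-grabbing: detect_creature() stands in for whatever vision model you trust, and the feed URLs and "unknown_biped" label are invented for illustration.

    import cv2  # OpenCV, used here only to pull frames from camera feeds

    def sweep(feed_urls, detect_creature, report):
        for url in feed_urls:                 # camera #429,133 included
            capture = cv2.VideoCapture(url)
            ok, frame = capture.read()
            capture.release()
            if not ok:
                continue                      # dead feed; skip it
            label, confidence = detect_creature(frame)
            if label == "unknown_biped" and confidence > 0.9:
                report(url, frame, confidence)

    # Run it forever; unlike a hundred human reviewers, it never blinks:
    #     while True: sweep(pacific_northwest_feeds, model, notify_team)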

If, in the year 2077, with legions of super-ASIMOs and drones haunting the world's forests, we still have no definitive proof of our more outlandish cryptids, we'll know for sure that they truly were all stories.

Grades of Automation

  • Grade-I is tool usage in general, from hunter-gatherer/scavenger tech all the way up to the pre-industrial age. There are few to no complex moving parts.
  • Grade-II is the usage of physical automation, such as looms, spinning jennies, and tractors. This is what the Luddites feared. There are many complex moving parts, many of which require specialized craftsmen to engineer.
  • Grade-III is the usage of digital automation, such as personal computers, calculators, robots, and basically anything we in the modern age take for granted. This age will last a bit longer into the future, though the latter ends of it have spooked quite a few people. Tools have become so complex that it’s impossible for any one person to create all necessary parts for a machine that resides in this tier.
  • Grade-IV is the usage of mental automation, and this is where things truly change. This is where we finally see artificial general intelligence, meaning that one of our tools has become capable of creating new tools on its own. AI will also become capable of learning new tasks much more quickly than humans and can instantly share its newfound knowledge with any number of other AI-capable machines connected to its network. Tools, thus, have become so infinitely complex that it’s only possible for the tools themselves to create newer and better tools.

Grades I and IV are only tenuously "automation"— the former implies that the only way not to live in an automated society is to use your hands and nothing else; the latter implies that intelligence itself is a form of automation. However, for the sake of argument, let's run with it.

Note: this isn’t necessarily a “timeline of technological development.” We still actively use technologies from Grades I and II in our daily lives.
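For the programmatically inclined, the ladder compresses into a small data structure. This is just a restatement of the bullets above; the field names and one-line summaries are mine, invented for this sketch.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grade:
        number: int
        automates: str     # what the grade hands off to tools
        who_builds: str    # who is still capable of creating the tools
        emblem: str

    GRADES = [
        Grade(1, "almost nothing; bare tool use", "any individual", "hammer and wheel"),
        Grade(2, "physical work", "specialized craftsmen", "cogwheel and steam engine"),
        Grade(3, "digital work", "collectives pooling talent", "personal computer and industrial robot"),
        Grade(4, "mental work", "the tools themselves", "artificial general intelligence"),
    ]

    for g in GRADES:
        print(f"Grade-{g.number}: automates {g.automates} ({g.emblem})")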

Grade-I automation began the day the first animal picked up a stone and used it to crush a nut. By this definition, there are many creatures on Earth that have managed to achieve Grade-I automation. Grade-I lacks complex machinery. There are virtually no moving parts, and any individual person could create the whole range of tools that can be found in this tier. Tools are easy to make and easy to repair, allowing for self-sufficiency. Grade-I automation is best represented by hammers and wheels.

A purely Grade-I society would be agricultural, with the vast majority of the population ranging from subsistence farmers to hunter-gatherer-scavengers. The lack of machinery means there is no need for specialization; societal complexity instead derives from other roles.

Grade-II automation introduces complex bits and moving parts, things that would take considerably more skill and brainpower to create. As far as we know, only humans have reached this tier— and only one subspecies of human at that (Homo sapiens sapiens). Grade-II is best represented by cogwheels and steam engines, as it's the tier of mechanisms. One bit enables another, and they work together to form a whole machine. As with Grade-I, there's a wide range of Grade-II technologies, with the most complex ends of Grade-II becoming electrically powered.

A society that has reached and mastered Grade-II automation would resemble our world as it was in the 19th century. Specialization rapidly expands— though polymaths may be able to design, construct, and maintain Grade-II technologies on their own, the vast majority of tools require multiple hands throughout their lifespans. One man may design a tool; another will be tasked with building and repairing it. Generally, though, one person can grasp all facets of such a tool. Using Grade-II automation, a single person can do much more work than they could with Grade-I technologies. In summary, Grade-II automation is the mark of an industrial revolution. Machines are complex, but can only be run by humans.

Grade-III automation introduces electronic technology, including programmable digital computers. It is at this point that the ability to create tools escapes individuals and requires collectives to pool their talents. However, this pays off through vastly enhanced productivity and efficiency: computers dedicate all their resources to crunching numbers, greatly increasing the amount of work a single person can achieve. It is at this point that a true global economy becomes possible and even necessary, as total self-sufficiency becomes near impossible. While automation puts many out of work as computational machines take over brute-force jobs that once belonged to humans, the specialization wrought is monumental, creating billions of new jobs compared to previous grades. Quality of life undergoes enormous strides upward for everyone.

A society that has reached and mastered Grade-III automation would resemble the world of many near-future science fiction stories. Robotics and artificial intelligence have greatly progressed, but not to the point of a Singularitarian society. Instead, a Grade-III dominant society will be post-industrial. Even the study of such a society will be multilayered and involve specialized fields of knowledge. Different grades can overlap, and this continues to be true with Grade-III automation. Computers have begun replacing many of the cognitive tasks that were once the sole domain of humans. However, computers and robots remain tools for completing tasks whose responsibility still falls upon humans. Computers do not create new tools to complete new tasks, nor are they generally intelligent enough to complete any task they were not designed to perform. The symbols of Grade-III are the personal computer and the industrial robot.

Grade-IV automation is a fundamental sea change in the nature of technology. Indeed, it's a sea change in the nature of life itself, for it's the point at which computers themselves enter the fray of creating technology. This is only possible by creating an artificial brain, one that may automate even higher-order skills. Here, it is beyond the capability of any human, individual or collective, to create tools of this tier, just as it is beyond the capability of any chimpanzee to create a computer. Instead, artificial intelligences are responsible for sustaining the global economy and creating newer, improved versions of themselves. Because AI matches and exceeds the cognitive capabilities of humans, there is a civilization-wide upheaval in which the jobs remaining from the era of late Grade-III dominance are taken by agents of Grade-IV automation, leaving humans almost completely jobless. This is because our tools are no longer limited to singular tasks, but can take on a wide array of problems, even problems they were not built to handle. If the tools find a problem that is beyond their limits, they simply improve themselves to overcome their limitations.

It is possible, even probable, that humans alone cannot reach this point— ironically, we may need computers to make the leap to Grade-IV automation.

A society that has reached Grade-IV automation will likely resemble slave societies the closest, with an owner class composed of humans and the highest-order AIs profiting from the labor of trillions, perhaps quadrillions, of ever-laboring technotarians. The sapient will trade among themselves whatever proves scarce, and the highest functions of society will be understood only by those with superhuman intelligence. Societal complexity reaches its maximal state, the point of maximum alienation. However, specialization rapidly contracts as the intellectual capabilities of individuals— particularly individual AIs and posthumans— expand to the point that they understand every facet of modern society. Unaugmented humans will have virtually no place in a Grade-IV dominant society besides being masters over anadigital slaves and subservient to hyperintelligent techno-ultraterrestrials. What few jobs remain for them will, ironically, harken back to the days of Grade-I and Grade-II automation, where the comparative advantage persists only due to artificial limitations (i.e. "human-only labor").

Grade-IV automation is alien to us because we’ve never dealt with anything like it. The closest analog is biological sapience, something we have only barely begun to understand. In a future post, however, I’ll take a crack at predicting a day in the life of a person in a Grade-IV society. Not just a person, but also society at large.

Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than others. Here, I redefine AI by decoupling "weak" from "narrow" and "strong" from "general."

Let’s talk about AI. I’ve decided to use the terms ‘narrow and general’ and ‘weak and strong’ as modifiers in and of themselves. Normally, weak AI is the same thing as narrow AI; strong AI is the same thing as general AI. But I mentioned elsewhere on this wide, wild Internet that there certainly must be such a thing as ‘less-narrow AI.’ AI that’s more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not AGI.
That mention of analog mechanics wasn't made in passing— this form of AI is about as mechanical as you can possibly get, and it's actually better that way. Even if your dollar store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator's supreme intellect would go forever untapped as you'd instead use it to factor binomials. And I don't need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that "simple computation" shouldn't be considered AI, but I think it should. It's simply "very" weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.

Strong narrow AI (SNAI) is AI that's capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it's a very limited form of learning, but learning is happening in some form. The AI isn't just acting for humans; it's reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it's always a ruse. Siri might seem smart, for example, but it's also easy to find its limits, because it's an AI meant to be a personal virtual assistant, not your digital waifu à la Her. Siri can recognize speech, but it can't deeply understand it, and it lacks the life experiences to make meaningful talk anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can't also write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it's "not an AI", it's only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.

Weak general AI (WGAI) is AI that's capable of learning a wide swath of things, even things it wasn't necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it's as intelligent as a certain creature— maybe a worm or even a mouse— but it's nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it's sub-human in others. This is what we see with the likes of DeepMind— DeepMind's basic algorithm can basically learn to do just about anything, but it's not as intelligent as a human being by far. In fact, DeepMind wasn't even in this category until they began using the differentiable neural computer (DNC), because their earlier systems could not retain previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until literally a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind has yet to show off the DNC in any meaningful way, but let's say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it's a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe for baking a cake. It learns from the Internet, but it doesn't actually pull up any particular article— it completely generates its own recipe, using logic to deduce which steps should be followed and in what order. That's nice— now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
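If it helps, here's the cake-and-brownies distinction as a toy sketch. The skills, tasks, and effort units are all invented for illustration; real systems are nothing this tidy.

    # Toy illustration of the SNAI/WGAI line; skills and costs are made up.
    skills = set()   # what the agent has already mastered

    def narrow_learn(required):
        """SNAI-style: starts from scratch on every new task."""
        return 100 * len(required)            # effort units spent

    def general_learn(required):
        """WGAI-style: pays only for the skills it doesn't have yet."""
        new = required - skills
        skills.update(new)
        return 100 * len(new)

    cake     = {"measure", "mix", "bake", "frost"}
    brownies = {"measure", "mix", "bake", "cut"}

    print(general_learn(cake))      # 400: everything is new the first time
    print(general_learn(brownies))  # 100: only "cut" is new; the rest carries over
    print(narrow_learn(brownies))   # 400: a narrow AI relearns it all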
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That it has such a limit should be impressive, because it basically signals that we’re right on the cusp of strong AGI and the only thing we lack is the proper power and training.

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor towards the creation of SGAI right now is our lack of WGAI. As I said, we've only just created WGAI, and there's been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite the seemingly simple differences between the two. WGAI is us guessing what's going on in the brain and trying to match some aspects of it with code. SGAI is us building a whole digital brain. Then there's the problem of embodied cognition— without a body, any AI would be detached from nearly all experiences that we humans take for granted. It's impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You'd never trust a cook who calls himself world-class, only to find out he's only ever made five unique dishes and has never left his house. For AI to truly make the leap from WGAI to SGAI, it'd need some way to experience life as we do. It doesn't need to live 70 years in a weak, fleshy body— it could replicate all life experiences in a week if need be, given enough bodies— but having sensory experiences helps to deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


Recap:

“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn’t mean it’s now the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can do, including creatively coming up with solutions to problems just as good or better than any human. It has sapience. SGAI may be very humanlike, but it’s ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It’s fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

What Is Futuristic Realism?

Definitive Explanations, Breakdowns, and Examples of Futuristic Realism, Sci-Fi Realism, Slice of Tomorrow, and Science Non-Fiction

I get asked a lot, “Yuli, what is futuristic realism?”

And that’s a bad thing. I’ve explained what futuristic realism is around five hundred times now, and the fact people still ask me what it means suggests that I, as usual, have failed to give the world a concise definition. That makes sense— I am a legendary rambler.

So I’m here to finally put to bed these questions.

Note: there will be a short version where I get right to the point, and afterwards, there'll be a long version where I allow myself to ramble and go in depth with what I mean.


Short Version

Sci-Fi Realism is a visual style that attempts to fool the viewer into thinking fantastic technologies are actually real and well-used, giving such tech a sort of photographic authenticity. 

Futuristic Realism is a subgenre of both science fiction and literary fiction: it takes the content of science fiction and the structure of literary and realistic fiction in order to tell a story that feels familiar and contemporary.

Slice of Tomorrow is the fusion of science fiction and slice of life fiction.

Science Non-Fiction describes fantastic technologies, happenings, stories, and narratives that have already occurred and cause a person to say "I'm living in the future!"


Long Version

Let’s start with slice of tomorrow. Slice of tomorrow fiction is what you get when you take science fiction and mix it with slice of life. In order to understand what that means, you first need to know what “slice of life” is.

Slice of life is mundane realism depicting everyday experiences in art and entertainment.

There’s no grand plot.

There’s no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets. That’s not to say high-intensity events can’t happen— they just aren’t the focus of the story. Slice of life does not necessarily have to be “literary”— it doesn’t have to focus on incredibly deep themes of human relationships. It doesn’t necessarily have to be about anything at all, other than showing one’s daily life.

Slice of tomorrow is mundane realism depicting everyday experiences, with the twist being that the events take place in an otherwise "sci-fi" or "cyberpunk" environment. The intention is in the name of the genre— "slice of tomorrow." Show the world how humanity would react to futuristic technologies, tomorrow's social mores, and perhaps even different conditions and modes of existence. However, slice of tomorrow does not have to be relatable, nor does a story that identifies as "slice of tomorrow" have to intertwine a deeper narrative.


Adding depth and length to mundanity brings you futuristic realism. Futuristic realism carries with it more of a ‘literary’ swagger. And in order to understand what that means, you must define literary and realistic fiction.

Literary fiction comprises fictional works that hold literary merit; that is, they involve social commentary, or political criticism, or focus on the human condition. Literary fiction is deliberately written in dialogue with existing works, created with the above aims in mind, and focused more on themes than on plot.


Realistic fiction is fiction that uses imagined characters in situations that either actually happened in real life or are very likely to happen. It further extends to characters reacting in realistic ways to real-life type situations. The definition is sometimes combined with contemporary realism, which shows realistic characters dealing with realistic social issues such as divorce, drug abuse, teenage pregnancy and more.

Futuristic realism combines the two: realism depicting real people in realistic situations, often as a means of exploring the human condition, but set in the future. Here, simply showing a different mode of existence isn't enough— you have to thoroughly explore it. There is a humongous opportunity to be had in science fiction when it comes to exploring foreign and alien modes of existence, and many sci-fi authors have exploited that opportunity. One fine example of futuristic realism would have to be the Sprawl Trilogy, by William Gibson— in fact, the literary work that gave birth to cyberpunk.

Indeed, futuristic realism and cyberpunk’s origins overlap heavily, and there’s no better way to illustrate this than by telling you how cyberpunk began in the first place, as well as describing what it’s become.

Cyberpunk was born when Gibson grew dissatisfied with increasingly stagnant utopian sci-fi, such as Star Trek. Gene Roddenberry's Star Trek gave us a nearly utopian world where advanced technology solved all of humanity's problems and people lived in egalitarian harmony and prosperity; the only sources of conflict came from either other species or the occasional disagreement.
Gibson looked at the world around himself and concluded that, even if we had starships and communicators, there would still be drug dealers and prostitutes. If anything, the acceleration of technology would most likely only greatly benefit a rich few, leaving the rest to get by with whatever scraps are left over. This wasn't a completely baseless extrapolation, precisely because that's what had been occurring up to the present moment— the developed nations, and in particular the rich, were able to enjoy high-tech consumer goods such as cable television, personal computers, video games, and credit cards, while the poor in many parts of the planet lived in nations that may very well have never experienced the Industrial Revolution. And even in developed nations, the poor were getting shafted by the system at large, especially as corporations grew in power and influence and enacted their will upon the governments of the world. Thus Neuromancer, and subsequently cyberpunk and futuristic realism, was born.

Cyberpunk and futuristic realism quickly branched off into different paths, however, as cyberpunk began becoming “genre” fiction itself— nowadays, in an almost ironic fashion considering how it started, when one thinks of ‘cyberpunk’, they think of ‘aggressively cynical dystopian action science fiction’, with the actual ‘punk’ aspect added in as an afterthought.


Bringing in elves and orcs sextuples the action! Source: Shadowrun


To truly get a feel for futuristic realism, try to follow this one: it’s the genre Ernest Hemingway or Cormac McCarthy would write if they lived in the 2050s.

I have long said that the easiest way to achieve futuristic realism would be to take Sarah, Plain and Tall and add humanoid robots, drones, and smartglasses into the mix. And why? Because there is a very intense disconnect. I even said as much in a previous article:

That's why I say it's easiest to pull off futuristic realism with a rustic or suburban setting— it's already much closer to individual people doing their own thing, without being able to fall back on the glittering neon cyberscapes of a city or the cold interiors of a space station to show off how sci-fi/cyberpunk it is. It makes the writer have to actually work. Also, there's a much larger clash. A glittering neon cyberscape of a megalopolis is already very sci-fi (and realistic); adding sexbot prostitutes and a cyber-augmented population fitted with smartglasses doesn't really add to what already exists. Add sexbot prostitutes and cyber-augments with smartglasses to Smalltown, USA, however, and you have a jarring disconnect that needs to be rectified or at least expanded upon. That doesn't mean you can't have a futuristic realist story in a cyberpunk city, or in space, etc. It's just much easier to tell one in Smalltown, USA because of the very nature of rural and suburban communities. They're synonymous with tradition and conformity, with nostalgic older years and pleasantness, with a certain quietness you can't find in a city.

Last but not least, there is sci-fi realism. This spawned futuristic realism and slice of tomorrow, and once upon a time, it was the catch-all term for the style. However, once I decoupled literary content from visual aesthetics, sci-fi realism became its own thing, and the best way to describe sci-fi realism would be to understand “visual photo-authenticity.”

This is my own term (because I just love making up jargon), and it refers to a visual style that attempts to recreate the feel of a photograph. This doesn’t just mean “ultra-realistic graphics”— it can be 8-bit as long as it looks like something you snapped with your smartphone camera. Of course, ultra-realism does greatly help.

Sci-fi realism is perhaps simultaneously the easiest and hardest to understand because of the nature of photography. After all, don’t many photographs attempt to capture as much artistic merit as paintings and renders? What qualifies as “photographic?”

And I won't lie: it is, indeed, a subjective matter. However, there is one basic rule of thumb I'll throw out there.

Sci-fi realism follows the rules of mundanity, even if it’s capturing something abnormal. There are few intentional poses and very little Romanticizing of subjects. It’s supposed to look as if you took a photograph in the future and brought it back to the past.

Source: Vitaly Bulgarov (and his dogs)

Most photographs are taken from ground or eye level, maybe even at bad angles and with poor lighting. Very few of them ever manage to capture wide-open scenes— it’s nearly impossible to get both a shady alleyway and towering skyscrapers in the background from a realistic perspective. There are very few vistas or wide-shots. 

As aforementioned, hyper-realism comes in handy when dealing with sci-fi realism, and wide shots can be composed to feel "realistic" from a sci-fi perspective.

Future Dubai, by Thomas Galad


And, also as aforementioned, it doesn’t necessarily have to be photorealistic as long as it carries a photographic quality.

“Burned” by Simon Stålenhag

It was watching movies like Real Steel, Chappie, District 9, and Star Wars: A New Hope that really got me interested in this "what if" style. Those movies possessed 'visual authenticity.' When I watched Real Steel, I was amazed by how seamlessly the CGI mixed with live action. Normally, CGI is blatantly obvious; it feels fake. It doesn't look real. But Real Steel took a different route: it fused CGI with practical props, and it was amazing to see. For the first time, I felt like I was watching a movie sent back from the future rather than a science fiction film. Other films came close, but Real Steel was where I first really noticed it.



The Bait And Switch

All of this refers to fiction. Slice of tomorrow is about slice of life science fiction. Futuristic realism is about literary science fiction. Sci-fi realism is about photographic science fiction.
However, with the obvious exception of slice of tomorrow, these can also fit non-fiction.

I mentioned quite a bit ago the concept of “science non-fiction.” This is a very new genre that has only become possible in the most recent years, and can best be described as “science fiction meets creative non-fiction.”

In recent years, many facets of science fiction have crossed over into reality. Things are changing faster than ever before, and what’s contemporary this decade would be considered science fiction last decade. As time goes on, this will only grow even more extreme, until each next year could be considered “sci-fi” compared to the previous one. At some point, people’s ability to take for granted this rapidly accelerating rate of technological advancement will wane, and there will be medically diagnosed cases of acute future shock. When we reach that point, even things that may have been on the market for years or decades will still be seen as “science fiction.”

We are already seeing a rudimentary form of this in the form of smartphones— smartphones have been a staple of mass consumer culture for well over a decade. Despite this, people still experience future shock when they take time to think about these immensely powerful gadgets. As smartphones grew more powerful and ubiquitous, the effect did not fade but in fact became more intense. This inability to accept the existence of a new technology is virtually unprecedented— we grew used to airplanes, atomic energy, space exploration, personal computers, and the internet faster than we have smartphones. Virtual reality is poised to push this future shock into an even more precarious level, as now we’re beginning to actually infringe upon concepts and technologies with which science fiction has been teasing us for nearly a century.

Space exploration had a bit of an Antiquity moment in the 1960s— we proved we could do it but found no practical way to expand on our accomplishments, much like the ancient Greeks working with analog computers and steam engines— and the actual space revolution remains beyond us, lying at an undetermined point in the future. To prove this point, we still see things like space stations and landings on other celestial bodies as "science fiction." This raises a conundrum— a story where a man lands on the moon qualifies as "science fiction", but we already took that leap roughly 50 years ago. Does that mean Neil Armstrong and Buzz Aldrin actually experienced science fiction? It can't be, by the very definition of the word 'fiction.'

That’s where this new term— science non-fiction— comes in. When real life crosses over into territories usually only seen in science fiction, you get science non-fiction.

Science fiction has many tropes, and even as we invent and commercialize the technologies behind these tropes, they don’t leave science fiction. Space exploration, artificial intelligence, hyper-information technology, advanced robotics, genetic engineering, virtual and augmented reality, human enhancement, experimental material science, unorthodox transportation— these are staples of science fiction, and merely making them real doesn’t make them any less sci-fi. From a technical perspective, virtual reality and smartphones are no longer sci-fi. However, from a cultural perspective, they’ll never be able to escape the label.

Science non-fiction is extremely subjective precisely because it’s based on the cultural definition of sci-fi. Some people may think smartphones, smartwatches, and VR are sci-fi, but others might have already grown too used to them to see them as anything other than more tech gadgets. Even when we have people and synths on Mars, there will be those who say that missions to Mars no longer qualify as science fiction.

And it’s this disconnect that helps make science non-fiction work.

There’s that word again— disconnect.

Reading about events in real life that seem ripped from sci-fi is one thing. Actually seeing them is another altogether.

Photograph of Pepper, 2016

We're back to sci-fi realism. I am reusing the term "science non-fiction", but here I'm discussing its visual form. I admit, sometimes I call it 'sci-fi realism', but I've begun moving away from that (to the detriment of the Sci-Fi Realism subreddit and to the benefit of the Futuristic Realism subreddit). As mentioned, this is what science non-fiction looks like: pictures, gifs, videos, and movies of real events that happen to feature science non-fiction technologies.

Science non-fiction is not necessarily slice of life or mundane, though it can be (and often is, due to the nature of everyday life). In this case, science non-fiction can actually be everything slice of tomorrow and futuristic realism isn't— including what we'd consider cyberpunk, military sci-fi, and space opera. The only prerequisite is that the events have to be real.

For example: glittery cyberpunk-esque cityscapes already exist. There isn't even a shortage of them— off the top of my head, there's Dubai, Moscow, Hong Kong, Shanghai, Guangzhou, Tokyo, Singapore, Seoul, and Bangkok. Posting pictures of them can net you thousands of upvotes on /r/Cyberpunk. The vistas may lack flying cars, but who knows how much longer that'll be the case?

That moment when Dubai starts looking like Coruscant

If I bought a Pepper and brought it into my home, that would also qualify as science non-fiction. Domestic artificially intelligent utility robots are a major staple of science fiction, and them simply existing doesn’t change the fact sci-fi literature, films, and video games will continue utilizing them.

This is an actual Japanese showroom in 2016

Likewise, if I donned a TALOS exosuit fitted with a BCI-powered augmented reality visor, picked up a 25 kW pulse-laser Gauss rifle, and then got flown into Syria where I could also pilot semi-autonomous drones and command killer Atlas robots, that too would be science non-fiction.

The TALOS suit, one of the coolest things I’ve ever seen

Funny thing is, both these examples are already possible. Not fully— ASIMO has yet to see a commercial release, Atlas has not finished its transformation into a Terminator, and no one has yet constructed a handheld laser gun stronger than 500 watts. But none of it is beyond us.

And that’s the gist behind all of this. Science non-fiction is based on what we have done.

“So why did you create all this uber-pretentious sci-fi tripe?”

1- Because I wanted to.

2- Because I noticed a delightful trend occurring over and over again online. Even outside of sci-fi forums, I was repeatedly reading stories and anecdotes of people being amazed at how technologically advanced our present society really is— but they then lamented that they didn’t “feel” like they were really living in a sci-fi story.

I am a fantastic example of that myself. I live out in the sticks— I even counted the seconds: if you drive at sixty miles per hour for one minute and twenty-eight seconds, you will come across literally bucolic farmland straight out of a Hallmark Channel movie. The tallest building in my town (and for many miles around it) is the local theatre, which comes in at seven stories. It’s the kind of town where, if you drive down any particular road too late at night, you’ll get abducted by aliens and/or the CIA. I live behind some trees on the very outskirts of this town. And despite that, I still own a drone, several smartphones, a VR headset, and a dead Roomba. If I saved up, I could even potentially buy an artificially intelligent social droid— Aldebaran’s Pepper. It feels so mundane, but my life truly is science non-fiction. A while ago, I lamented that I wasn’t living in one of the aforementioned proto-cyberpunk cities precisely because I thought I had too much technology to be living in the country.

I've since decided to bring science fiction to me, and that requires quite a few changes. I'm no revolutionary street urchin. I have no coding skills whatsoever. I can count on a broken hand how many times in my life I've held a gun. There's nothing thrilling about me, my past, or my future. And yet I still feel like I live in a world that's fast becoming sci-fi. So I needed to find a way to express that. A way to tell a story I— in my unfit, very much kung-fu-challenged world— could relate to. I'm no hero, nor am I an anti-hero, nor am I a villain. I'm basically an NPC, a background character. Yet I still feel I have stories to tell.


Futuristic Realism and Transrealism

So what about transrealism? Isn’t it futuristic realism? In fact, it is. However, it’s a situation where “X is Y, but Y isn’t always X.” Transrealism is futuristic realism, but not all futuristic realism is transrealism. And the best way to understand this is by looking at the definition of transrealism.

Transrealism is a literary mode that mixes the techniques of incorporating fantastic elements used in science fiction with the techniques of describing immediate perceptions from naturalistic realism. While combining the strengths of the two approaches, it is largely a reaction to their perceived weaknesses. Transrealism addresses the escapism and disconnect with reality of science fiction by providing for superior characterization through autobiographical features and simulation of the author’s acquaintances. It addresses the tiredness and boundaries of realism by using fantastic elements to create new metaphors for psychological change and to incorporate the author’s perception of a higher reality in which life is embedded. One possible source for this higher reality is the increasingly strange models of the universe put forward in theoretical astrophysics.


Some final words on the subject, starting with Kovacs from the Cyberpunk forums:

Well… the only real way that sci-fi realism works – for me – is if the science fiction is invisible and ubiquitous.
Today, I could write a fully non-fiction or 'legit literature' fiction (e.g. non-genre) story using tech that, a decade or two ago, would have been cyberpunk. For example: 20 years ago, if you wrote a murder mystery about a detective who could track a victim's every thought and action on the day they were murdered, all within 5 minutes or so, that would be sci-fi or even 'magic'. Today, you just get access to the victim's phone and scroll through their various social media profiles. Same with having a non-static-y video conference with someone halfway around the world; it used to be Star Trek, now it's Skype. So how would this prog rock of sci-fi work? I suppose you tell a tale where the tech… doesn't matter. It's all about human relationships.
Ooooh I bet you think that’s boring, don’t you? Well, maybe. But we can cheat by playing with the definition of ‘human’.

I'm thinking about the movie Her. Artificial intelligence is available and there's no paradigm shift. A romantic relationship with an AI is seen as odd… but not unimaginable, or perverse. There's no quest, no corporate spooks, no governments overthrown, no countdown timer, no running from an explosion. The climax of the story is as soft as it gets [OP: do these sentences look familiar?]. Robot and Frank is another good example; it's a story where the robot isn't exactly needed, but it makes the story make more sense than if it were, say, a college student, Scent of a Woman style.
(huh… Scent of a Robot, anyone? Al Pacino piloting ASIMO?)
So I guess what I'm leading to is: take the action-adventure component out of sci-fi. Take the dystopia out of cyberpunk. Take out the power fantasy elements. Take out the body horror. What are you left with? Something a little less juvenile? In order to develop this, you'd have to have a really good dramatic story as a basis and sneak in the sci-fi elements. You can't, by definition, rest on them.
Which is tough for me to approach, because I really like my space katanas.

Finally, what is futuristic realism not? Here, "X can be Y, but Y isn't X." Futuristic realism can use these things, but these things aren't futuristic realism by themselves.

  • Hyper-realistic science fiction. As I said, visual authenticity started futuristic realism, but that’s not what it is anymore. Nowadays, that’s just straight ‘sci-fi realism.’
  • Hard science fiction. Futuristic realism can be hard or soft or anything in between; it’s the story that matters. Hell, you can write fantastic realism if you want to.
  • Military science fiction. Some people kept thinking sci-fi realism meant ‘hard military sci-fi’, which is why I rebranded the style ‘futuristic realism’. Military sci-fi can be futuristic realism, but a story simply being military sci-fi isn’t enough.
  • Rustic science fiction. After the whole spiel on /r/SciFiRealism when a whole bunch of people were angry that I kept posting images of robots in homes and hover cars instead of really gritty battle scenes and dystopian fiction, the pendulum swung way too far in the other direction. I have said that ‘the best way to write futuristic realism is to take Sarah, Plain and Tall and add robots’, but I didn’t say ‘the only way to write futuristic realism is… yadayada.’
  • Dark ‘n gritty science fiction. As aforementioned, some thought ‘sci-fi realism’ meant ‘dark and gritty science fiction’. And I won’t lie, it is easy for a realistic story to be dark and even gritty and edgy. But see above, I had to hit the reset button. 
  • Actionless science fiction. You'd think that, after all this bureaucratic bullshit, I'm trying to force people to write happy science fiction about neighborhood kids with robots. Not at all. In fact, you can have a hyper-realistic, dark and gritty hard military science fiction story that's pure, raw futuristic realism. It depends on what the story's about. A story about a space marine genociding Covenant scum, fighting to destroy an ancient superweapon, can indeed be futuristic realism. It just depends on what part of the story you focus on and how you portray it. Novelizing Halo isn't how you do it. In fact, there's a futuristic realist story I desperately want to read— a space-age War and Peace. Something of that caliber. If you want to attempt that, then I think the first thing you'd have to do before writing is decide whether you can pull it off without turning it into a space opera. Take myself, for example: fuck that noise. I'm not even going to try it. I know it would fast become an emo Gears of War if I tried to write it. It's not supposed to be Call of Duty in Space; it's a space-age War and Peace. There are twenty trillion ways you can fuck that up.

Try to think back to the last major sci-fi film, video game, book, or short that didn’t have one of the following—

  • Someone brandishing a weapon
  • A chase sequence
  • Fight sequence
  • Military tech wank
  • Paramilitary tech wank
  • Wide shots over either a city, alien planet, or space vehicle
  • Over-exposed mechanics or cybernetics
  • Romance between lead character and designated lover, usually as a result of the two working together to overcome the Big Bad and realizing they have feelings for each other
  • High-octane stakes, where the life of the protagonist or someone the protagonist cares about is at risk
  • Death of the antagonist, someone close to the protagonist, or the protagonist him/herself
  • Actions causing death in the first place
  • Bands of mooks for someone to mow down
  • Stakes where one side (e.g. space navy; evil megacorporation, warlord, etc.) has to suffer a total, epic defeat in order for the plot to be resolved, usually in the form of a climactic and tense battle


I’m not trying to be a creativity fascist; I’m merely attempting to define what futuristic realism and slice of tomorrow fiction aren’t. Hell, I’ve even said that you can have a whole bunch of these things and still come off as futuristic realism. It’s all about execution and perspective.

I suppose, what I’m trying to get at is that if you want to write futuristic realism and slice of tomorrow fiction, you have to ask yourself a very basic question: “Can the central plot be resolved with a gun battle without any major consequences?” Replace ‘gun’ with any weapon of your choice— space katana, quark bomb, logic bomb, giant mecha— the point remains the same. If the answer is no, you may have futuristic realism.


You can resolve just about any plot with a good shot from a Lawgiver; the key phrase is "without any major consequences". Filling a flatmate's skull with a magnetically-pressurized ionic plasma bolt because he's not happy over how many sloppy sounds you make with your "sexbot sexpot" is going to have worlds-different consequences from gunning down Locust filth in an interstellar war— unless, of course, you go deep into the psychological profile of someone who's spent their life killing aliens, has never before contemplated why, and suddenly gains a keen interest in understanding the other side, particularly those not directly participating in the war.


It’s easy to say your story’s about the human condition more than it is about the science and technology, and I suppose that would make it more highbrow than a lot of other sci-fi. But futuristic realism/slice of tomorrow doesn’t have to be highbrow either. 


So let me use a story instead of just similes, analogies, and overbloated rules of thumb.


You have three characters: Phil, Daria, and Edward. Phil and Daria live in New York City in 2189. A war for independence has just broken out between Earth forces and Martian colonists, and a Martian separatist has masterminded a terrorist attack in New York (what else is new?). What neither Daria nor Phil knows is that their Martian penpal, Edward, is the terrorist who masterminded the attack. This sounds like a traditional sci-fi plotline in the making.

How do you make it into a traditional military sci-fi story? Simple— Phil and Daria sign up for military service, get their own mech suits, and start rolling across Cydonia, where they fight communist Martian droids at the now-terraformed, statue-like Face on Mars. The climax involves them facing down Edward and realizing their friendship has been put to the ultimate test by the war. That's a story that's definitely character-driven and engaging— but it's not necessarily "slice of tomorrow" fiction.

How do you turn it into a slice of tomorrow story? You don't have to change a damn thing, except where the story focuses. For example, Phil and Daria, in the short period after the attack and before they join the military, may be utterly shellshocked. They've seen dead and injured people, and a major landmark has been destroyed. They just want a moment to be thankful for the fact that they're alive. They may want to contact Edward to get his opinion on events, considering he's a Martian and Martians are implicated in the attack. They're just keeping up with the news to find out more about what happened, and they grow ever more angry as time goes on. The climax could be them actually joining the military, or maybe something else entirely, something not involving the military at all. The terrorist attack was just a background event to their daily lives— a pretty big and impactful event, but a background event nonetheless. The real drama lies elsewhere. It's drama you can't just shoot at to make it go away, either. Thus, the story's ultimately resolved well before the first mech suit ever fires a shot at separatists.


Even writing that mini-blurb proved my point, because I was going to write something after “the real drama lies elsewhere”. Something more specific than “it’s a drama you can’t just shoot at to make it go away, either.” But as I typed it out, I could actually hear the groans of boredom in my head— “if this were an actual sci-fi story,” I thought, “having that plotline would just evoke nothing but frustration.” And what was that plotline?

Phil or Daria calling their parents. That’s it! The actual conversation would touch on recent events, yes, but that phone call is the climax. When I wrote that out, I thought “That’s the dumbest thing I’ve ever heard,” because it sounded like a waste. I have this nice, big universe filled with juicy potential sci-fi action (I even have a fantastic trigger that present-day readers can relate to in the form of a traumatic terrorist attack), and I spent it by having one of the lead characters call Mommy to wish her a tearful Merry Christmas?

That doesn’t sound sci-fi at all.

And that’s the point! Because even though it doesn’t sound like sci-fi, it still is sci-fi.

TL;DR:
Sci-Fi Realism: Candid, prosaic, and/or photographic sci-fi.
Futuristic Realism: Science fiction as told by F. Scott Fitzgerald.
Slice of Tomorrow: Science fiction as told by the Hallmark Channel.
Science Non-Fiction: Neil Armstrong’s autobiography.

Debating Basic Income

Why I Think UBI Will Actually Be Social Credit-Based Income

I’m not one of the reactionary Luddites who claim AI is suddenly incapable of doing anything, or that it’s only ever going to be as capable as looms and tractors were. Nor am I going to bother with the same An!Capistan arguments against basic income that clearly aren’t swaying anyone (I don’t know why anarcho-capitalists and libertarians even bother). But I will say that we’re giving basic income too much credit.

Keyword: credit. That’s what I’m leading into. Whenever I promote Vyrdism, I also mention why I don’t trust basic income: the State, the agency that will distribute said income, is not and never has been altruistic. It’s not going to give out a basic income unconditionally, and if you believe it will, you’re wrong. I know it’s your opinion, but your opinion is wrong. Literally 8,000+ years of experience with the ruling owner class proves you’re wrong: there will be conditions, even if the elite says there won’t be.

And China gave me the idea as to what that condition would actually be.

China is allegedly rolling out a social credit system in which your social credit score determines your ability to function in modern society. That sounds to me like the perfect vehicle for a basic income: your social credit score determines the amount of your income. Lose too much social credit and you might be cut off from the basic income entirely, and the justification will be “you’ve proven that you can’t be helped, even with a basic income.” So yes, you’ll get a basic income, and you’ll allegedly be allowed to do whatever you please with it, but those in power will be closely watching what you spend it on, as well as your actions in other parts of your life.

Let’s say there’s a baseline that everyone receives each month ($1,000) which supposedly cannot be altered. The State is promoting a ‘healthy’ lifestyle. In other words, if you buy too many greasy foods and sugary snacks, your social credit takes a hit and you might effectively get less income. It won’t be overt: the easiest way to take money away from you while keeping up the appearance of an “unconditional” basic income is to penalize you elsewhere, such as with higher taxes and fees on goods. You may still receive $1,000 a month, but your expenses jump from $800 to $1,000.

That’s still manageable; your basic income can still cover it. However, if you subscribe to anyone the ruling elite doesn’t like on Facebook, that’s a bigger hit. Hell, if the ruling elite decides you may only use certain social media sites, search engines, or ISPs and you defy them, you might take a massive hit to your social credit score. Your $1,000 income becomes worthless as your expenses reach $2,000 or more a month. I don’t even need to say what would happen if you protested against the government or its corporate-bourgeois masters. And by that point it’s too late, because artificially intelligent technotarians have already rendered human workers utterly obsolete, meaning there’s no way to improve your social credit score other than to accept whatever the State demands.

Of course, it works in the other direction as well. If the State tells you to jump, you ask “How high?” You become their drone, doing absolutely anything and everything you can to be a Model Citizen™. You may be rewarded with relaxed expenses, effectively increasing your basic income from $1,000 to $1,200 a month.
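
To make the arithmetic concrete, here’s a minimal sketch in Python of the scheme I’m describing. Every number, threshold, and tier in it is hypothetical, lifted from the illustrative figures above: the income never changes on paper, while a social-credit-keyed multiplier quietly moves your cost of living.

# Hypothetical model of a social-credit-adjusted basic income.
# The tiers and multipliers are invented for illustration only.
TIERS = [
    (0.9, 0.75),  # Model Citizen(tm): fees waived, expenses shrink
    (0.5, 1.00),  # neutral: expenses stay at the nominal $800
    (0.3, 1.25),  # minor infractions: expenses creep up to $1,000
    (0.0, 2.50),  # dissident: expenses balloon to $2,000
]

def expense_multiplier(score):
    """Map a social credit score in [0, 1] to a cost-of-living multiplier."""
    for threshold, multiplier in TIERS:
        if score >= threshold:
            return multiplier
    return TIERS[-1][1]

def monthly_outcome(score, baseline=1000.0, nominal_expenses=800.0):
    """The 'unconditional' income never changes; the screws turn elsewhere."""
    expenses = nominal_expenses * expense_multiplier(score)
    return baseline, expenses, baseline - expenses

for score in (0.95, 0.60, 0.35, 0.10):
    income, expenses, net = monthly_outcome(score)
    print(f"score {score:.2f}: income ${income:,.0f}, "
          f"expenses ${expenses:,.0f}, net ${net:,.0f}")

Run it and the Model Citizen nets $400 a month (as if his income were $1,200), the neutral citizen nets $200, the minor offender nets nothing, and the dissident goes $1,000 into the hole, all while every one of them “receives” the same unconditional $1,000.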

Now, if you ask me, we are going to see a universal basic income in our lifetimes. Not just in our lifetimes, but very soon. And it’s not going to hit the ground running as a totalitarian social regimen; that part comes later.

I’m not against basic income. I just recognize the potential for abuse. Basic income-esque schemes have been tried throughout history, even if they were never called basic income, and they’ve always been part of a “deal” rather than unconditional. Under feudalism, for example, you need only work for the local lord and you get protection for free. It’s just that feudalism also gave us serfdom, and basic income could very well lead us somewhere equally dystopian. Few proponents seem to believe it could, because they’ve bought into a false dichotomy in which anything other than basic income is a dystopia as well.

And if you’re alright with this, or you’ve already accepted that basic income was never going to be unconditional, then fine; I’m not talking to you. I’m talking to the wide-eyed idealists who still believe it’s an end in and of itself instead of a means to an end.

“But Yuli,” one might ask, “isn’t this more of a critique against a social credit score?”

Yes, of course. My point is that, at least in our current mode of existence, the two will likely be intertwined. We won’t see UBI without a social credit score; it might even prove to be one of the compromises that must be made!

So, in summary, I don’t blindly trust basic income. There’s been no proper debate on it, because the opposing argument is almost nothing but an!cap whinging about how taxation is theft, welfare is Stalinism, and the very-thinly-veiled “Tyrone’s just going to buy crack and beer and play Call of Duty all day on my paycheck”. That line of attack backfires: it makes it seem as if only an!caps and closeted fascists oppose basic income, which wins basic income more converts and, in turn, makes the Left look even more like the Statist Sheep the Right so often claims they are.
A legitimate concern is that the ruling elite won’t make it unconditional, because there is literally no evidence in history of them being altruistic in such a way. China’s social credit score is almost certainly what basic income is going to be tied to.

One last word: I’m not against basic income. I know I’m repeating myself, and I know most people are smart, but I’ve long since become cynical enough to realize that I must keep repeating this, as there will always be someone who decides that I’m actually a denizen of the aforementioned An!Capistan all because I dared to say anything against UBI.

If you want a true alternative to the current mode of existence, look to Vyrdism. Maybe read this: OPINION: Why I am pro-Vyrdism and not pro-Universal Basic Income (UBI).