Technism

What exactly is technism?
It is a system defined by automation, particularly the pursuit of maximal automation. The more faculties of society that are automated, the more technist that society becomes.
A technist is a person who seeks a highly or fully automated society. The logical endpoint of this philosophy is a society in which humanity is disenfranchised from all productive processes, instead living only to profit from the labor of machines.
In that regard, technism is the opposite of neo-Luddism and primitivism.
The economic philosophy behind technism is known as Vyrdism, which is the belief that humanity should actively exploit the labor of machines, with the common agreement being that we should pursue cooperative ownership. Vyrdists, in the short amount of time they’ve been around, have already sprouted a few branches.
Market Vyrdism describes a society that fuses free market ideals with technism and/or Vyrdism. It bears most resemblance to mutualism and syndicalism. Private property is protected. Humans may no longer be the dominant laborers of society, but they remain in near full control of political and economic matters.
Marxism-Vyrdism describes a society that fuses the ideals of Marxism (perhaps even Marxism-Leninism) with Vyrdism— all automation is collectively owned, with a state apparatus (usually consisting of artificial intelligence, a la Cybersyn) existing to centrally plan the economy. Private property is non-existent. Despite this, humans remain in near full control of political and economic matters.
Pure Technism describes a society that takes the concept of the dictatorship of the proletariat and replaces the proletariat with the technotariat: automata, both hardware and software, which displace the traditional productive roles of the proletariat. In this case, humanity is completely or almost completely disenfranchised from political and economic matters as automata develop full ownership of society.


Dictatorship of the Technotariat

This is a term I’ve already seen being passed around. It builds on pure technism and can be defined in a very simple and slightly ominous way: the means of production own themselves. This doesn’t mean that hammers become sadistic foremen whipping their abused human slaves; it refers to a state of affairs in which synthetic intelligences possess ownership over society and direct social, political, and economic matters. In such a state, humanity would no longer have meaningful ownership over private property, even though private property itself may not have actually been abolished.
AI simply commanding and controlling an economy doesn’t necessarily mean we’ve arrived at this new dictatorship. AI has to own the means of production (essentially, itself).
Unlike Vyrdism, where society is set up like an Athenian slave state (humans and sapient AI at the top, with sub-sapient or even non-sentient technotarians acting as slave laborers beneath us), a pure-technist society sees humanity exist purely at the whims of synthetic intelligence. It is the logical endpoint of universal basic income: we own nothing, but are given some capital to live as we please.


To recap: technism is the pursuit of automation, especially full automation. Capitalist and socialist societies ever since the Industrial Revolution could be described as, in some manner, technist. However, technists seek to fully replace the working class with automata known as "technotarians", whereas most capitalists and socialists seek to use automata to supplement human labor.

Vyrdism is a partial fusion of technism with capitalism and socialism (more so one or the other depending on whether you're a Market or a Marxist Vyrdist), and it only becomes possible once technology reaches a point where humans do not need to be directly involved in the economy itself. Pure technism is the full cession of ownership of the means of production to the means of production themselves, which is only possible if the means are artificially intelligent to a particular point I've defined as being "artilectual."

The difference between an AI/AGI and an artilect is that a general AI is an ultra-versatile tool while an artilect is a synthetic person. Of course, saying "an artilect" wrongly implies a physically discrete person as we would recognize one, with a tiny primate-esque body, a limited brain, and very-much human aspirations and flaws. In fact, an artilect could be an entire collective of AI that exists across the planet and has control over nearly all robots.

A pure-technist society is not the same as a Vyrdist society. Not even a “Marxist-Vyrdist” society. Vyrdism involves human ownership over the means of production when the means are capable of working without any human interaction or involvement. Pure-technism is when humans do not own the means of production, rendering us dependent upon the generosity of the machines.

Because of these qualifiers, it is not wrong to say that any automation-based economic system is technist. This includes Resource-Based Economies as well as the Venus Project. If you take Marxism-Vyrdism to its logical conclusion, you will arrive at Fully Automated Luxury Communism. All of these are considered "post-scarcity economics". All of them are technist.


Joint Economy vs. Mixed Economy

So let me take a minute to discuss the difference between a “joint economy” and a “mixed economy.”

Back when I was running the technostist wiki ("technostism" being a poor precursor to the current term "technism"), I pointed out the difference between market socialism and mutualism on the one hand, and mixed economies that claimed to fuse "capitalism and socialism" on the other. Mixed economies fuse state socialism and free-market capitalism. I've yet to see "mixed economy" used to describe a place that fuses market socialism and free-market capitalism. So I decided to take the initiative and coin a new term myself: "joint economy."

A joint economy is one that fuses capitalist and worker (and, eventually, automata) ownership of the means of production to some great degree. It has nothing to do with the government; the "socialist" aspects in this case are purely economic. When a nation has a joint economy, it has a healthy mixture of traditional authoritarian enterprises and worker cooperatives and other democratic businesses (worker-owned and/or worker-managed), perhaps even a cooperative federation or syndicate. You'd still have powerful corporations, but it wouldn't be a given that all corporations are authoritarian in nature. The Basque Country in Spain is a good example: Mondragon is an absolutely massive corporation, but it's entirely worker-owned. This means the Basque Country has a "joint economy". A joint mixed economy is one where you have market socialism and market capitalism alongside state regulation.

This is naturally important in a technist society because we’re fast approaching a time when there’s a third fundamental owner of the means of production, and defining their relationship to the means and to society at large is necessary.
Just as present-day joint economies are arguably the freest arrangements we have, an economy where businesses are commonly owned by individuals, collectives, and machines, rather than solely by one of the three, will see the greatest prosperity.

In a future post, I will detail why radical decentralization and ultra-strong encryption must be a goal for any budding technist, as well as how totalitarianism becomes perfected in a degenerated technist society.




In review: technism is the pursuit of capital imbued with intelligence. The logical endpoint is a state in which intelligent capital owns society and all property, marking the arrival of absolute automation.

Paratechnology

Unexplained Mysteries of the Future

I am only a half-believer in the paranormal, so taking mysteries of the unexplained at face value smacks of the ridiculous. Yet I can never shake those doubts, hanging onto my mind like burrs.
The mammalian brain both fears and seeks the unknown. That's all I want: to know. The chance that any one particular paranormal or supernatural happening is real is infinitesimal. Cryptids are usually another story, save for the most outlandish, but what likelihood is there that evolution wrought a lizard man or a moth man? And what likelihood is there that certain dolls are cursed?
However, I won’t cast off these reports completely until I can know for sure that they either are or are not true, as unlikely as they may be.

So here are a few words on the subject of paratechnology.


Self-Driving Cars Have Ruined The Creepiness of Self-Driving Cars

Imagine it’s a cool summer evening in 1969. You’re hanging with your mates out in the woods, minding your own business. All of a sudden, as you pass near a road, you see an Impala roll on by, creaking to a stop right as it closes in on your feet. Everything about the scene seems normal— until you realize that’s your Impala. You just saw your own car drive up to you. But that’s not what stops your heart. When you walk up to the window to see who’s the fool who tried to scare you, horror grips your heart as you realize the car was driving itself.

Needless to say, when your grandson finds the burned-out shell of the car 50 years later, he doesn't believe you when you doggedly claim that you saw the car acting on its own.

Except he would believe you if your story happened in the present day.

Phantom vehicles are a special kind of strange, precisely because you’d never expect a car to be a ghost. After all, aren’t ghosts the souls of the deceased?

(ADD moment: this is easy to rectify if you’re a Shintoist)

Nevertheless, throughout history, there have been reports of vehicles that move on their own, with no apparent driver or means of starting. The nature of these reports is always suspect— extraordinary claims require extraordinary evidence— but there’s undeniably something creepy about the idea of a self-driving vehicle.

Unless, of course, you’re talking about self-driving vehicles. You know, the robotic kind. Today, walking out in the woods and seeing your car drive up to you is still a creepy sight to behold, but as time passes, it grows less ‘creepy’ and more ‘awesome’ as we imbue artificial intelligence into our vehicles.

This does raise a good question: what would happen if an autonomous car became haunted?

O.o


The Truth About Haunted Smarthouses

For thousands of years, people have spoken of seeing spectres: ghosts, phantoms, spirits, what have you. Hauntings can occur at any time of day, but everyone knows the primal fear of things that go bump in the night. It's a leftover from the days when proto-humans were always at risk of being ambushed by hungry nocturnal predators, one that now best serves the entertainment industry.

Ghosts are scary because they represent a threat we cannot actively resist. A lion can kill you, but at least you can physically fight back. Ghosts are ethereal, and their abilities have never been properly understood, because we've never been fully sure they're real at all. Science tells us they're all in our heads, but science also tells us that everything is all in our heads. Remember: ghosts are ethereal, meaning they cannot actually be caught. Thus, they cannot be studied, and anything that cannot be physically examined might as well not exist as far as science is concerned. Because ghosts are so fleeting, we never even get a chance to observe them rigorously, instead leaving the work to pseudoscientific "ghost hunters". By the time anyone has even noticed a ghost, it has already vanished.

Even today, in the era of ubiquitous cameras and surveillance, there’s been no definitive proof of ghosts. No spectral analysis, no tangible evidence, nothing. Why can’t we just set up a laboratory in the world’s most haunted house and be done with it? We’ve tried, but the nature of ghosts (according to those who believe) means that even actively watching out for a ghost doesn’t mean you’ll actually find one, nor will you capture usable data. Our technology is too limited and ghosts are too ghostly.

So what if we put the burden onto AI?

Imagine converting a known haunted house into a smarthouse, where sensors exist everywhere and a central computer always watches. No ghost should escape its notice, no matter how fleeting.

Imagine converting damn near every house into a smarthouse. If paranormal happenings continue evading smarthouse AIs, that casts near-irrefutable doubt onto the larger ghost phenomenon. It would mean ghosts cannot actually be meaningfully measured.
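To make that concrete, here is a minimal sketch in Python of the "central computer always watches" idea. Everything in it is hypothetical: the sensor IDs, the fabricated readings, and the four-sigma cutoff are stand-ins, and a real smarthouse would use far richer detectors. The point is only that a machine can hold a per-sensor baseline and flag the inexplicable at 3 a.m. without ever blinking.

```python
# A minimal sketch of a smarthouse's "central computer always watches" loop.
# Sensor IDs, readings, and the four-sigma threshold are all stand-ins.
from collections import defaultdict
import math
import random

class SensorBaseline:
    """Running mean/variance for one sensor (Welford's online algorithm)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, sigmas=4.0):
        if self.n < 30:                      # not enough history yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) > sigmas * std

baselines = defaultdict(SensorBaseline)

def on_reading(sensor_id, value):
    """Check a reading against its sensor's own history, then log it."""
    b = baselines[sensor_id]
    if b.is_anomalous(value):
        print(f"ANOMALY: {sensor_id} read {value:.2f} "
              f"(baseline {b.mean:.2f}); preserve the raw data")
    b.update(value)

# Fabricated demo: a hundred ordinary readings, then one inexplicable spike.
random.seed(1)
for _ in range(100):
    on_reading("hallway_temp", random.gauss(20.0, 0.5))
on_reading("hallway_temp", 35.0)             # this one gets flagged
```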

Once you bring in transhumanism, the ghost phenomenon should already be settled. A posthuman encountering a spectre at all would be proof in and of itself, and if it never happens, if ghosts remain the domain of fearful, fleshy biological humans, then we will know once and for all that the larger phenomenon truly is all in our heads.


Bigfoot Can Run, But He Can’t Hide Forever

For the same reasons listed above, cryptids will no longer be able to hide. There's little tangible evidence suggesting Bigfoot is real, but if there's any benefit of the doubt we can give, it's that there's been very little serious effort to find him. If we were serious about finding Bigfoot, we wouldn't create 'Bigfoot whistles' or dedicate hour-long, two-hundred-episode reality shows to searching for scant evidence. We would hook up the Pacific Northwest with cameras and watch them all.

Except we can't. INGSOC could never watch you at all times so long as the Party lacked artificial intelligence to do the grunt work for it. That's as true in reality as it is in fiction: if you have a million cameras and only a hundred people watching them, you'll never catch everything that goes on. You'd need to be able to watch all of these videos at every moment of every day, without fail. Otherwise, video camera #429,133 may capture a very clear image of Bigfoot, but you'd never know.

AI could meet the challenge. And if you need any additional help, call in the robots. Whether you go for drones, microdrones, or ground-traversing models, they will happily and thanklessly search for your spooky creatures of the night.
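As a rough sketch of that grunt work, assuming OpenCV is installed and a hypothetical recorded feed named camera_429133.mp4 exists: plain frame-differencing below is the crudest possible stand-in for a trained wildlife detector, but it already turns a million unwatched cameras into a short list of moments worth a human's eyes.

```python
# Crude machine triage for a camera feed: flag frames with large motion.
# "camera_429133.mp4" is hypothetical; a real system would use a trained
# detector rather than simple frame-differencing.
import cv2

cap = cv2.VideoCapture("camera_429133.mp4")   # hypothetical feed
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the feed")

def to_gray(frame):
    return cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

prev_gray = to_gray(prev)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = to_gray(frame)
    delta = cv2.absdiff(prev_gray, gray)      # what changed since last frame?
    _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:         # arbitrary "big enough" cutoff
        print(f"motion at frame {frame_idx}: flag for human review")
    prev_gray = gray

cap.release()
```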

If, in the year 2077, when we have legions of super-ASIMOs and drones haunting the world’s forests, we still have no definitive proof of a variety of our more outlandish cryptids, we’ll know for sure that they truly were all stories.

Types of Artificial Intelligence

Not all AI is created equal. Some narrow AI is stronger than other narrow AI. Here, I redefine the terminology, breaking the usual "weak = narrow" and "strong = general" equivalence.

Let's talk about AI. I've decided to use the terms 'narrow and general' and 'weak and strong' as independent modifiers rather than synonyms. Normally, weak AI is the same thing as narrow AI, and strong AI is the same thing as general AI. But I've mentioned elsewhere on this wide, wild Internet that there must surely be such a thing as 'less-narrow AI': AI that's more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.

So my system is this:

    • Weak Narrow AI
    • Strong Narrow AI
    • Weak General AI
    • Strong General AI
    • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.
We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not AGI.
I didn't mention analog mechanics in passing: this form of AI is about as mechanical as you can possibly get, and it's actually better that way. Even if your dollar-store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator's supreme intellect would go forever untapped as you'd instead use it to factor binomials. And I don't need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that "simple computation" shouldn't be considered AI, but I think it should. It's simply "very" weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom tier of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.
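To show how literal that "programmed" keyword is, here is the entirety of a hypothetical WNAI's "mind" in Python. Every behavior is hand-decided by a programmer; nothing about it improves with use.

```python
# The whole "mind" of a dollar-store calculator, as hand-written rules.
def dollar_store_calculator(a: float, op: str, b: float) -> float:
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        return a / b        # fails on b == 0, exactly as programmed
    raise ValueError(f"unknown operation: {op}")

print(dollar_store_calculator(6, "*", 7))     # 42, today and forever
```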

Strong narrow AI (SNAI) is AI that's capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. It's a very limited form of learning, but learning is happening in some form. The AI isn't just acting for humans; it's reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it's always a ruse. Siri might seem smart, for example, but it's also easy to find its limits, because it's an AI meant to be a personal virtual assistant, not your digital waifu a la Her. Siri can recognize speech, but it can't deeply understand it, and it lacks the life experiences to make meaningful small talk anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can't write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it's "not an AI", it's only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.
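Here is a toy illustration in Python of that one-trick learning. None of this is any real assistant's internals; the vocabulary, examples, and learning rate are all made up. A tiny logistic-regression "intent detector" genuinely learns from data and genuinely improves at its one narrow mapping, but it cannot step outside it.

```python
# A toy SNAI: logistic regression trained by stochastic gradient descent
# on made-up examples of "alarm request or not". It learns its one narrow
# task and nothing else.
import math
import random

VOCAB = ["wake", "alarm", "morning", "weather", "joke", "music"]

def featurize(text):
    words = text.lower().split()
    return [1.0 if w in words else 0.0 for w in VOCAB]

DATA = [
    ("wake me in the morning", 1), ("set an alarm", 1),
    ("what is the weather", 0),    ("tell me a joke", 0),
    ("alarm for six", 1),          ("play some music", 0),
]

w = [0.0] * len(VOCAB)
b = 0.0
random.seed(0)
for _ in range(500):                          # plain SGD on logistic loss
    text, label = random.choice(DATA)
    x = featurize(text)
    p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    err = p - label
    w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
    b -= 0.1 * err

def predict(text):
    x = featurize(text)
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

print(round(predict("set an alarm for six"), 2))      # high: inside its field
print(round(predict("write me a comedic novel"), 2))  # just the base rate: every word is out-of-vocabulary
```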

Weak general AI (WGAI) is AI that's capable of learning a wide swath of things, even things it wasn't necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it's as intelligent as a certain creature, maybe a worm or even a mouse, but it's nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it's sub-human in others. This is what we see with the likes of DeepMind: DeepMind's core algorithm can learn to do just about anything, but it's not remotely as intelligent as a human being. In fact, DeepMind wasn't even in this category until they began using the differentiable neural computer (DNC); before that, their systems could not retain previously learned information. Because it could not do something so basic, it was squarely strong narrow AI until literally a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind has yet to show off the DNC in any meaningful way, but let's say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it's a cool chatbot. Quickly, however, you discover its limits, or, should I say, its lack thereof. You ask it to generate a recipe for baking a cake. It learns from the Internet, but it doesn't actually pull up any particular article; it generates its own recipe entirely, using logic to deduce which steps should be followed and in what order. That's nice. Now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
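That "draws upon what it did with cakes" idea has a crisp analog in machine learning: warm-starting. Below is a minimal numpy sketch, with made-up tasks and numbers, of how reusing weights learned on one task speeds up learning on a related task compared with relearning from scratch.

```python
# A minimal sketch of "drawing upon what it learned": warm-starting.
# The "cake" and "brownie" tasks are made-up linear problems with similar
# underlying weights.
import numpy as np

rng = np.random.default_rng(0)
w_cake = np.array([2.0, -1.0, 0.5])       # hypothetical "cake" skill
w_brownie = np.array([2.2, -0.9, 0.4])    # related "brownie" skill

def make_data(w, n=200):
    X = rng.normal(size=(n, w.size))
    y = X @ w + rng.normal(scale=0.1, size=n)
    return X, y

def train(X, y, w0, lr=0.05, tol=1e-3, max_steps=10_000):
    """Gradient descent on mean squared error; returns weights, steps used."""
    w = w0.copy()
    for step in range(max_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        if np.linalg.norm(grad) < tol:
            return w, step
        w -= lr * grad
    return w, max_steps

X_c, y_c = make_data(w_cake)
X_b, y_b = make_data(w_brownie)

_, cold_steps = train(X_b, y_b, np.zeros(3))     # SNAI-style: from scratch
w_learned, _ = train(X_c, y_c, np.zeros(3))      # first, learn cakes
_, warm_steps = train(X_b, y_b, w_learned)       # WGAI-style: reuse cakes

print(f"from scratch: {cold_steps} steps; warm start: {warm_steps} steps")
```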
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. Even that limit should be impressive, because it signals that we're right on the cusp of strong AGI and that the only things we lack are sufficient computing power and training.

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor in the creation of SGAI right now is our lack of WGAI. As I said, we've only just created WGAI, and there's been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite seemingly simple differences between the two. WGAI is us guessing what's going on in the brain and trying to match some aspects of it with code; SGAI is us building a whole digital brain. And then there's the problem of embodied cognition: without a body, any AI would be detached from nearly all the experiences that we humans take for granted. It's impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You'd never trust a cook who calls himself world-class, only to find out that he's only ever made five unique dishes and has never left his house. For AI to truly make the leap from WGAI to SGAI, it'd need some way to experience life as we do. It doesn't need to live 70 years in a weak, fleshy body; it could replicate all of those life experiences in a week, if need be, given enough bodies. But having sensory experiences helps to deepen its intelligence.

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.

From that bit of information, you can begin to understand what AI will be capable of achieving.


Recap:

“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.
Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.
Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming.
All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn't mean it's the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can, including creatively coming up with solutions to problems as well as or better than any human. It has sapience. SGAI may be very humanlike, but it's ultimately another sapient form of life all its own.

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many human brains you pool together. It's fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.