Artificial Intelligence: The How

Meet the Sensory Orb. It’s a flesh orb that possesses a powerful synthetic brain. There are several questions one must ask about the Sensory Orb.

Why is the Sensory Orb important? Because it’s the key to artificial general intelligence of the human variety.

You see, there are multiple schools of thought as to how to achieve AGI. Most serious computer scientists and neuroscientists know that it’s not something we’re likely to achieve anytime soon, but their reasoning is different from what most people might assume.

First and foremost, we don’t understand how intelligence or consciousness works. However, we can try our best to mimic what we see; perhaps one of these methods will work. After all, we don’t need to understand every single facet of something in order to make it work. We also expect the final leap to AGI to be accomplished by AI itself. So why is getting there so hard in the first place?

For one, we are still limited by computing power. It seems ridiculous considering how stupidly powerful computers today really are, but it’s true: while the most powerful supercomputers have exceeded the estimated operations per second performed by the human brain, those machines still cost hundreds of millions of dollars. We need to bring that cost down if we want to make AI research practical.

But forget about the cost for a moment. Let’s pretend DeepMind had TaihuLight in its possession and could put every last FLOP to work for its own purposes. Would we see major breakthroughs in AI? Of course. But would we see human-level AI? Not even close.

“But they’re DeepMind! Their AI beat the human champion at Go a decade before the experts said it could be done! How can they still lack AGI?”

For one, that’s not entirely true: experts said a computer could become the world champion at Go by 2016 if sufficient funding were put into the problem. And sufficient funding did indeed arrive.

But more importantly, while DeepMind’s accomplishments cannot be overstated, they haven’t actually brought us any closer to human-level AGI.

I want you to marvel at the human brain. It’s a fine thing.

Here is a metal table. On top of this metal table are two brains. One is a newborn baby’s brain, and next to it is the brain of Stephen Hawking. Don’t worry, we’ll return the brains to their rightful owners after this blog post. But I want you to think about what these brains are capable of.

The newborn brain is already a powerful computer that’s learning every single second, forming new neural pathways as it experiences life. Mr. Hawking’s brain is a triple-A machine of cosmic proportions, always thinking and never resting.

Except these two facts are dirty lies. The brains before you aren’t doing anything of the sort. The newborn baby’s brain is not forming any new connections. Hawking’s brain isn’t thinking. And why? Because they are disembodied. They are no longer experiencing any senses, and the senses necessary to make thoughts even work are no longer there. They’re both equal in terms of active intelligence— zero.

If you asked the newborn baby’s brain to add two and two, you’d just look like a fool because you’re talking to a tiny little blob of fat. Even if you asked Hawking’s brain the same question, you’d never get an answer. They can’t answer that question— they’re just brains. They don’t have ears to hear you. They don’t have eyes to see you. They don’t have mouths or hands to respond to you. You do not exist to them.

Despite what fiction may proclaim, brains are not actually ‘sentient’ without their bodies. A brain can’t “see” you or “respond” to you if you ask it a question, even if you stick it into a jar full of culture fluids.

If you hooked up a screen and a keyboard to those brains, would you then have proper sensory inputs and a way to get outputs from the newborn and Mr. Hawking? Of course not: brains did not evolve to be literal computers. You can’t just stick a plug into a brain and expect it to behave just like your desktop. In order to bring these two brains back to life, you’d need to construct whole bodies around their functions. And not just one or two of their functions, but all of them.

So the point is: you can’t just take a human brain, set it out on a desk, and treat it like a fully-intelligent person. If you had Descartes’ Evil Demon or the Brain in a Vat, you could develop the brain until it possessed intelligence in a simulated reality, but the brain itself can do nothing for you. It sounds utterly insane to even contemplate.

Yet, for whatever reason, this is how we treat computers. We think that, if we had a computer with deep reinforcement recurrent spiked progressive neural networks and 3D graphene quantum memristors (insert more buzzwords here), we’d have AGI. In fact, you could have the servers running Skynet brought into real life, and you’d still not see AGI if your idea of making it intelligent is simply to feed it internet data.

Without sensory experiences, that computer will never achieve human-level intelligence. That’s not to say we could achieve human-level AI today just by taking ASIMO and decking it out with sensors, but the gist is that it would be foolish to expect synthetic intelligence to surpass humans while feeding a computer experiences that are the furthest thing from human ones.

And so we return to the Sensory Orb. The Orb itself is not natively intelligent. It’s no more intelligent than your desktop computer (circa 2027). But, unlike your desktop, it is fitted with a whole body of sensory inputs. The more it experiences, the more its body ‘evolves’ sensory outputs.

It is programmed to like being touched and tickled. Thus, if you tickle it, it will grow to like you. If you pinch its skin, it will roll away from you. Of course, it has to learn how to roll away first, but it learns quickly. If you keep pleasing it or abusing it, it will come to recognize you by sight and either roll toward you or away from you. It has many preprogrammed instincts, including a sense of “eating”: it knows how to find its charger, but if you bring it to its charger, it will grow to like you even more.
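To make that concrete, here’s a rough sketch of the kind of reward-driven association I have in mind. Everything in it is hypothetical: the SensoryOrb class, the stimulus names, and the numbers are my own invention, not a description of any real system.

```python
# Illustrative sketch only: a toy model of how the orb could associate people
# it "sees" with the pleasure or pain it has experienced from them.
# All names and numbers here are hypothetical, not a real Sensory Orb API.

class SensoryOrb:
    def __init__(self):
        self.affinity = {}      # person_id -> running reward estimate
        self.learning_rate = 0.2

    def feel(self, person_id, stimulus):
        # Pre-programmed "instincts": tickling feels good, pinching hurts,
        # being carried to the charger feels very good.
        reward = {"tickle": +1.0, "pinch": -1.0, "brought_to_charger": +2.0}[stimulus]
        old = self.affinity.get(person_id, 0.0)
        # Nudge the association a little each time (a simple running average).
        self.affinity[person_id] = old + self.learning_rate * (reward - old)

    def on_sight(self, person_id):
        # Behaviour emerges from the learned association, not from a fixed rule.
        score = self.affinity.get(person_id, 0.0)
        if score > 0.3:
            return "roll toward"
        if score < -0.3:
            return "roll away"
        return "ignore"

orb = SensoryOrb()
for _ in range(5):
    orb.feel("alice", "tickle")
for _ in range(2):
    orb.feel("bob", "pinch")
print(orb.on_sight("alice"))  # -> "roll toward"
print(orb.on_sight("bob"))    # -> "roll away"
```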

And if you teach the orb how to communicate with you through speech, you can teach it various commands. With enough training, the orb can even learn to ask questions of its own. It can learn about other Sensory Orbs, learn about computers and flesh, learn about sensory experiences, and learn that it has its own body that allows it to ‘live’. So one day, you may be surprised when it asks about itself.

Is this human-level intelligence? Not necessarily, but it’s far closer than anything we have today. And we don’t necessarily need a real-life Sensory Orb to achieve this; a good-enough virtual simulation can also suffice. But nevertheless, the point remains: in order to achieve AGI, computers need to experience things.

Yuli’s Law: On Domestic Utility Robots

The advancement of computer technology has allowed for many sci-tech miracles to occur in the past 70 years, and yet it still seems as if we’ve hit a plateau. As I’ve explained in the post on Yuli’s Law, this is a fallacy: the only reason an illusion of stagnation appears is that computing power is still too weak to accomplish the goals of long-standing challenges. That, or we already accomplished those goals a long time ago.

The perfect example of this can be seen with personal computing devices, including PCs, laptops, smartphones— and calculators.

The computing power necessary to run a decent college-ready calculator was achieved long ago, and miniaturization has allowed calculators to be sold for pennies. There is no great leap between calculators and early computer programs.

Calculating the trajectory of a rocket requires far less computing power than some might think, because of the nature of the task: guiding an object with simple algorithms. A second grader could conceivably create a program that guides a bottle rocket in a particular direction.
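As a purely illustrative example of how simple that guidance can be, here’s a toy proportional-steering loop. The numbers and function names are made up, and this is obviously not real flight software.

```python
# Toy illustration of simple "guidance": steer toward a target bearing with a
# proportional correction. A made-up example, not actual rocket software.

def steering_correction(current_heading_deg, target_heading_deg, gain=0.5):
    # Error is the smallest signed angle between where we point and where we want to point.
    error = (target_heading_deg - current_heading_deg + 180) % 360 - 180
    # Correct a fraction of the error on each control step.
    return gain * error

heading = 30.0
for step in range(10):
    heading += steering_correction(heading, target_heading_deg=90.0)
print(round(heading, 1))  # creeps toward 90.0 a little more each step
```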

This is still a step up from purely mechanical systems that give the illusion of programming, but there are obvious limits.

I’ll explain these limits using a particular example, one that is the focus of this post: a domestic robot. Specifically, a Roomba.

iRobot Roomba Autonomous FloorVac vacuum cleaner

An analog domestic robot has no digital programming, so it is beholden to its mechanics. If it is designed to move in a particular direction, it will never move in another direction. In essence, it’s exactly like a wind-up toy.
I will wind up this robot and set it off to clean my floors. Thirty seconds later, it makes a left turn. After it makes this left turn, it will move for twenty seconds before making another left turn. And so on and so forth until it returns to its original spot or runs out of energy.

There are many problems with this. For one, if the Roomba runs into an obstacle, it will not move around it, nor will it make any attempt to avoid it on the next pass. It only moves along a preset path, a path you can perfectly predict the moment you set it off. There is a way to get around this: adding sensors, little triggers that force a turn early whenever it hits an object.
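Here’s a toy simulation of that bump-and-turn idea, with a made-up grid and made-up numbers. It isn’t real Roomba firmware, just a sketch of the logic.

```python
# Toy simulation of "drive until you bump, then turn". The grid, obstacles,
# and turn rule are invented for illustration; this is not real firmware.

import random

WIDTH, HEIGHT = 10, 6
walls = {(3, 2), (3, 3), (6, 1)}            # hypothetical obstacles
x, y, heading = 0, 0, 0                     # heading: 0=E, 1=N, 2=W, 3=S
moves = [(1, 0), (0, 1), (-1, 0), (0, -1)]
visited = {(0, 0)}

for _ in range(200):                        # a fixed "battery life"
    dx, dy = moves[heading]
    nx, ny = x + dx, y + dy
    blocked = (nx, ny) in walls or not (0 <= nx < WIDTH and 0 <= ny < HEIGHT)
    if blocked:
        # The bump trigger fires: turn instead of pushing into the obstacle.
        heading = (heading + random.choice([1, 3])) % 4
    else:
        x, y = nx, ny
        visited.add((x, y))

print(f"covered {len(visited)} of {WIDTH * HEIGHT - len(walls)} open tiles")
```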


Let’s bring in a digitally programmed Roomba, something akin to a robot you could have gotten in 2005. Despite having a digital computer for a brain, it seems to act no differently from the mechanical Roomba. It still gets around by bumping into things. Even though the mechanical Roomba could have been created by someone in Ancient Greece, yours doesn’t seem any more impressive on a practical level.

Thus, the robot seems more novel than practical. And that’s the perception of Roombas today: cat taxis that clean your floor as a bonus rather than legitimate domestic robots.

Yet this is no longer a fair perception, as the creators of the Roomba, iRobot, have added much-needed intelligence to their machines. This has only been possible thanks to increases in computing power that allow the proper algorithms to run in real time.

For example, a 2017-era Roomba 980 can actually “see” in advance when it’s about to run into something and avoid it. It can also remember where it’s been and recognize certain objects, among other things (though Neato’s been able to do this for a long time). Much more impressive, though still not quite what we’re looking for.

What’s going on? Why are robots so weak in an age of reusable space rockets, terabyte smartphones, and popular drone ownership?

We need that last big push. We need computers to be able to understand 3D space.

Imagine a Roomba 2000 from the year 2025. It’s connected to the Cloud and it utilizes the latest in artificial intelligence in order to do a better job than any of its predecessors. I set it down, and the first thing it does is map out my home. It recognizes any obstacle as well as any stain; that means if it detects dog poop, it’ll either avoid it or switch to a different suction mode to pick it up. Once it has mapped my house, it has a good feel for where things are and where they should be. Of course, I could also send it a picture of another room, and it will still get a feel for what it needs to do even if it has never roamed around inside before.
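To sketch what that “map first, then decide” logic might look like, here’s a hypothetical snippet. The labels, suction modes, and class names are invented for illustration and aren’t any real robot’s software.

```python
# Rough, hypothetical sketch of "build a map, then decide what to do at each
# spot". The sensor labels and modes are made up for illustration.

class MappingVacuum:
    def __init__(self):
        self.occupancy = {}   # (x, y) -> "open", "obstacle", or a stain label

    def observe(self, cell, label):
        # Build the map as the robot roams, or from a photo of a room it has
        # never visited, provided the perception model can label that image too.
        self.occupancy[cell] = label

    def plan_action(self, cell):
        label = self.occupancy.get(cell, "unknown")
        if label == "obstacle":
            return "route around"
        if label == "dog_poop":
            return "avoid"                      # do NOT try to vacuum this
        if label == "wet_stain":
            return "switch suction mode"        # hypothetical mode switch
        return "normal vacuum"

robot = MappingVacuum()
robot.observe((2, 5), "obstacle")
robot.observe((4, 1), "dog_poop")
robot.observe((7, 3), "wet_stain")
print(robot.plan_action((4, 1)))  # -> "avoid"
print(robot.plan_action((0, 0)))  # -> "normal vacuum"
```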

The same thing applies to other domestic robots such as robotic lawn mowers: you’d rather have a lawn mower that knows when to stop cutting, whether because it’s moving over new terrain or because it’s approaching your child’s Slip n’ Slide. Without the ability to comprehend 3D space or remember where it’s been and where it needs to go, it’ll be stuck operating within a pre-set invisible fence.

On top of all of this, there’s the promise of bipedal and wheeled humanoid robots working in the home. After all, homes are designed around the needs of humans, so it makes sense to design tools modeled after humans. But the same rules apply: no comprehension of 3D space, no dice.

In fact, a universal utility robot like a future model of Atlas or ASIMO will require greater advancements than specialized utility robots like a Roomba or Neato. It must be capable of utilizing tools, including tools it has never used before. It must be capable of depth perception: a robot that merely makes the motions of mopping a floor is only useful if you make sure the floor is neither too close nor too far away, but a robot that genuinely knows how to mop is universally useful. It must be capable of understanding natural language so you can give it orders. It must be flexible, able to come across new and unknown situations and react to them accordingly. A mechanical robot would come across a small obstacle, fall over, and continue moving its legs; a proper universal utility robot will avoid the obstacle entirely, or at least pick itself up and know to avoid that obstacle and things like it. These are all amazingly difficult problems to overcome at our current technological level.

All these things and more require further improvements in computing power. Improvements we are, indeed, still seeing.

Mother Jones – “Welcome Robot Overlords. Please Don’t Fire Us?”