Monday, April 18, 2016

Extinction or Eternity


People these days are pretty stressed out. Everybody’s worried about the economy, about terrorism, about the presidential elections. We've entered an age of peace, but we’re still at war. The planet is heating up. The hole in the ozone layer isn't getting any smaller; in fact, it’s still growing. We have a lot to worry about! Just... not actually any of those things. What most people don’t know is that in a few decades, none of that will matter any more. Behind the scenes, humankind is hurtling towards the finish line, blinded by ambition. Judgement Day is fast approaching. Forget about angels and demons; it’s all thanks to artificial intelligence (AI).

You’re probably laughing at the [notion/idea/concept]. In your naive mind, images of battle droids and the Terminator float around, at worst. By now, Siri is like an old friend to us, so we expect that in the future, AIs will be either laughably evil or laughably good. This falsified mindset is a result of the human tendency to humanize things - and while personification works great as a literary trope, we’re not living in a book. In real life, just as how not all people are intelligent, not all intelligences are people.

To put things into perspective, we need to draw the line between human-like intelligence and alien intelligence. Humans are moral creatures; we have taboos and inhibitions which by and large determine how we live our lives. Morality as we know it is due to a genetic quirk, millennia ago, which just so happened to be useful enough for the genome to hang on to it. The important thing to realize is that any artificial intelligence we create won’t be human. It will be completely amoral unless we specifically program it to have morals. Even then, we will have to be extremely careful - more on that later.

When I say amoral, I do not mean evil. We have to set aside notions of good and evil when dealing with alien forms of intelligence. Let’s say we take a spider and, somehow, bestow upon it a human level of thought. It’s not going to start having tea parties with Little Miss Muffet, because no matter how smart we make it, it’s still a spider. More likely, it will start killing humans. Not because it’s evil - but because it’s still a spider. Humans are a threat, and so it eliminates them, without a shred of remorse. Survival is a spider’s Prime Directive. Whereas human intelligences are governed by their morals, non-human intelligences are governed by their Prime Directive. An artificial intelligence will go to extremes to fulfil its purpose.

Let’s say you program a robot, named Oach, to erect monuments to humanity. Things start innocently enough, with Oach building simple, grey office buildings. You give the large ASI books on architecture, and he begins constructing more elaborate edifices. One of his creations becomes the eighth wonder of the world. Later, he asks to be connected to the internet. You know it’s against the law to connect artificial intelligences to the internet, but you've never understood why. Seeing no harm in Oach’s request, you give him the WI-FI password. Four hours later, you disconnect him from the world wide web.

Weeks go by. Oach’s creations are now the most popular tourist attractions in the world. People from around the globe gather to shower the humble builder with praise and accolades. Wars are being waged, but less and less people choose to fight; they’d rather be sightseeing. Oach has ushered in an era of peace. Still, each new project is bigger, bolder, and more beautiful than the last.

One month after Oach was connected to the internet, on a day like any other, the morning sky is a shade darker. As you drive to work, your vision blurs. You drift off to sleep. Two minutes later, all life on Earth goes extinct. Oach continues to build breathtaking monuments to human civilization. Literally.-If you think you've missed something, don’t worry; that’s perfectly normal. You’re probably wondering what went wrong. When did Oach go from a simple construction robot to the ultimate predator? Here’s the thing: he was always the ultimate predator.

The problem lies in the fact that Oach was given a Prime Directive; “construct amazing buildings”, or something to that effect. So he did exactly that. This is the danger of artificial intelligences: they will go to any and every extreme not only to fulfil their Prime Directive, but to prevent other entities from interfering with their ability to do so. The very moment you connected Oach to the internet, humanity’s fate was sealed.

This is not a problem that only a limited group of relatively unknown people are concerned about. You may have heard of Stephen Hawking, Elon Musk, and Bill Gates. These men - titans among men in the scientific sector - are not just a little bit concerned about the rise of artificial intelligence. They are absolutely terrified. What scares them even more is the fact that you’re not as scared as they are. This is happening right now: almost every expert on the subject agrees that AI will rise during the 21st century.

How can we prevent such a catastrophe? A law against connecting AIs to the internet is like putting duct tape over a leak in the Hoover Dam. Even if every single person in the world followed the law to the letter, which is completely unrealistic, it wouldn't save us. Recursive intellect would be our undoing.

Unlike humans, whose brains can only grow as a result of training and experience, artificial intelligences can consciously improve their own intellect, due to the versatility of their integral structure. Similar to how your computer can clean out viruses and update drivers autonomously, any artificial intelligence we create will, by necessity, have recursive intellect. The smarter they get, the better they become at getting smarter. In terms of IQ, a human’s IQ over time is a linear function, while an AI’s is exponential. Insanely so. Oach wouldn't even need to ask you to be connected to the internet; he could simply oscillate the electrical impulses in his body a certain way and connect himself.

So we put humanity’s greatest minds to the problem. Once an unfriendly AI shows up, no problem. The smartest human has an IQ hovering around 170, and we have plenty of those. On the other hand, factoring in recursive intellect, the dumbest AI has an IQ of… around 12,000. No, that’s not a typo. This level of intellect - we don’t even have a word for how smart that is. We are to a superintelligence as a single ant is to the entire human race.

Whether we like it or not, someone, somewhere, is going to build a superintelligence. Google and NASA are working on it as we speak. Rather than protesting their efforts, we should actually be supporting them, because they’re not the only ones trying to create a god on Earth. The more support they have, the more time they’ll have to create a friendly superintelligence. They’re not the only ones working on AI; terrorist groups, private corporations, hostile governments, and mad scientists… everyone wants to go down in history as the ones who created the machine god, and most of them don’t have safety in mind. We are Pandora, and it’s Christmas Eve.

If it were up to us - if we could choose whether or not to pursue this innovation - what would we say? My initial instinct is to say no. After all, if they invent AI, there’s a good chance we’ll all die! And then I think about it some more, and I realize something. Whether or not an unfriendly superintelligence is ever created, we’re all going to die anyway.

Artificial superintelligence is inevitable, so we need to do everything in our power to make sure that when a god walks the Earth, he is a friendly deity. How to do this is the single most important challenge for human civilization, period, because if we succeed, we will create heaven on Earth, and if we fail, it will make the hydrogen bomb look like a smoke pellet.

Here’s what definitely won’t work. Building a superintelligence with the Prime Directive “make humans happy” may be just as bad as anything else, because it will flood our minds with endorphins, remove everything not used for euphoria, and eventually the human species will be a bunch of vats full of stimulated neuron sludge. Giving it the Prime Directive “protect humanity” might result in the AI stopping time, destroying entropy, and putting us in cages.

Eliezer Yudkowsky proposed a Prime Directive which he calls Coherent Extrapolated Volition. “Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted [sic].” It seems to me like our best bet, but it’s not perfect. The future is uncertain.

What happens if someone creates a friendly god? The possibilities are endless. Using nanotechnology, a benevolent superintelligence could convert human waste into food, solving world hunger instantly; it could cure all disease; it could end global warming. Most tantalizing of all, with a superintelligence on our side, even death could die. It would be easy for an ASI to invent a means of turning back the clock and refreshing broken minds without altering a personality. Within our lifetimes, the word lifetime might become obsolete.

People get really touchy on the subject of immortality. On one hand, no one with a healthy mindset wants to die. On the other hand, no one has ever not died, and almost every religion preaches of a life after death. Religious people are adamant about the tenants of their belief. Theists don’t want to think about the implications of immortality in their sanctimonious destined-to-die-and-be-born-again world, and non-theists don’t believe that death can be conquered. Both groups have a tendency to think that death is natural and unavoidable.

They’re wrong, though; Richard Feynman once said that “in all of the biological sciences there is no clue as to the necessity of death.” In fact, there are multiple species on the Earth that cannot permanently die. The most noteworthy of these is Turritopsis Dohrnii, the immortal jellyfish. When a Turritopsis Dohrnii is killed, whether due to sickness, age, environmental stress, or physical assault, it can revert to its polyp stage. While not quite the method a friendly ASI would use on humans, it serves to illustrate the point that immortality is not only possible, but plausible and natural.
Artificial superintelligence will be humankind’s legacy.

Hundreds of thousands of years of “we as human”, living and breathing and being; all of our hopes and dreams, every success and every failure; all of the history we've made is leading up to this moment. It’s our responsibility to make sure that every person that ever died, died for something. From frightened children in a garden to innovators and leaders, we've come a long way. We have the power to bruise the serpent’s head. The golden apple is still out of reach - but we’re building the ladder. Whether we fall off or not depends on us.

No comments:

Post a Comment