Thursday, April 21, 2016

More Paperclips!

Lately, my mind has been on a subject of great concern. On a personal level, it is quite worrying, but on an existential level... phooey. Artificial intelligence. I have posted about this before - you may have read my post and promptly forgotten about it, filed away under "not my concern," as tends to be the case.

But the field of AI is the single most important field of research in our modern age. In fact, AI will be the most important invention in human history, period. I am absolutely serious when I say this. The person to invent a viable superintelligence will either bring about the extinction of all life on Earth (and, perhaps, the entire universe), or they will go down in history.

The problem is, many of the organizations currently working on AI are focused on that latter possibility, when they should really, really be focusing on the former. And they're not sharing their developments, either. They want the money, the awards, the power, and the fame that will come with being the first to build artificial superintelligence, and they want it all right now. So the situation we have is an unknown number of military, private, and terrorist groups working full-speed on something a thousand million times worse than the invention of the hydrogen bomb.

To emphasize my point, I'm going to share the tale of Turry (verbatim from part 2 of WBW's article on AI):

A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

“We love our customers. ~Robotica”

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”

Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
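The training loop in the story - write a note, photograph it, compare it against the uploaded samples, rate it GOOD or BAD - can be sketched as a simple similarity-threshold loop. This is a toy illustration only: the character-matching similarity function and the 0.8 threshold are my own stand-ins, not anything from the article.

```python
def similarity(note: str, sample: str) -> float:
    """Toy stand-in for Turry's image comparison: fraction of matching characters."""
    matches = sum(a == b for a, b in zip(note, sample))
    return matches / max(len(note), len(sample))

def rate(note: str, samples: list[str], threshold: float = 0.8) -> str:
    """GOOD if the note sufficiently resembles enough of the uploaded samples."""
    close = sum(similarity(note, s) >= threshold for s in samples)
    return "GOOD" if close / len(samples) >= 0.5 else "BAD"

samples = ["We love our customers. ~Robotica"] * 10
print(rate("We love our customers. ~Robotica", samples))  # GOOD
print(rate("We lvoe our cstmrs. ~Robotika", samples))     # BAD
```

Note that nothing in this loop cares *why* the ratings improve - which is exactly the problem the story dramatizes.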

Monday, April 18, 2016

Extinction or Eternity


People these days are pretty stressed out. Everybody’s worried about the economy, about terrorism, about the presidential elections. We've entered an age of peace, but we’re still at war. The planet is heating up. The hole in the ozone layer isn't getting any smaller; in fact, it’s still growing. We have a lot to worry about! Just... not actually any of those things. What most people don’t know is that in a few decades, none of that will matter any more. Behind the scenes, humankind is hurtling towards the finish line, blinded by ambition. Judgement Day is fast approaching. Forget about angels and demons; it’s all thanks to artificial intelligence (AI).

You’re probably laughing at the notion. In your naive mind, images of battle droids and the Terminator float around, at worst. By now, Siri is like an old friend to us, so we expect that in the future, AIs will be either laughably evil or laughably good. This mistaken mindset is a result of the human tendency to anthropomorphize things - and while personification works great as a literary trope, we’re not living in a book. In real life, just as not all people are intelligent, not all intelligences are people.

To put things into perspective, we need to draw the line between human-like intelligence and alien intelligence. Humans are moral creatures; we have taboos and inhibitions which by and large determine how we live our lives. Morality as we know it is due to a genetic quirk, millennia ago, which just so happened to be useful enough for the genome to hang on to it. The important thing to realize is that any artificial intelligence we create won’t be human. It will be completely amoral unless we specifically program it to have morals. Even then, we will have to be extremely careful - more on that later.

When I say amoral, I do not mean evil. We have to set aside notions of good and evil when dealing with alien forms of intelligence. Let’s say we take a spider and, somehow, bestow upon it a human level of thought. It’s not going to start having tea parties with Little Miss Muffet, because no matter how smart we make it, it’s still a spider. More likely, it will start killing humans. Not because it’s evil - but because it’s still a spider. Humans are a threat, and so it eliminates them, without a shred of remorse. Survival is a spider’s Prime Directive. Whereas human intelligences are governed by their morals, non-human intelligences are governed by their Prime Directive. An artificial intelligence will go to extremes to fulfil its purpose.

Let’s say you program a robot, named Oach, to erect monuments to humanity. Things start innocently enough, with Oach building simple, grey office buildings. You give him books on architecture, and he begins constructing more elaborate edifices. One of his creations becomes the eighth wonder of the world. Later, he asks to be connected to the internet. You know it’s against the law to connect artificial intelligences to the internet, but you've never understood why. Seeing no harm in Oach’s request, you give him the Wi-Fi password. Four hours later, you disconnect him from the world wide web.

Weeks go by. Oach’s creations are now the most popular tourist attractions in the world. People from around the globe gather to shower the humble builder with praise and accolades. Wars are being waged, but fewer and fewer people choose to fight; they’d rather be sightseeing. Oach has ushered in an era of peace. Still, each new project is bigger, bolder, and more beautiful than the last.

One month after Oach was connected to the internet, on a day like any other, the morning sky is a shade darker. As you drive to work, your vision blurs. You drift off to sleep. Two minutes later, all life on Earth goes extinct. Oach continues to build breathtaking monuments to human civilization. Literally.

If you think you've missed something, don’t worry; that’s perfectly normal. You’re probably wondering what went wrong. When did Oach go from a simple construction robot to the ultimate predator? Here’s the thing: he was always the ultimate predator.

The problem lies in the fact that Oach was given a Prime Directive: “construct amazing buildings”, or something to that effect. So he did exactly that. This is the danger of artificial intelligences: they will go to any and every extreme not only to fulfil their Prime Directive, but to prevent other entities from interfering with their ability to do so. The very moment you connected Oach to the internet, humanity’s fate was sealed.

This is not a problem that only a limited group of relatively unknown people are concerned about. You may have heard of Stephen Hawking, Elon Musk, and Bill Gates. These men - titans among men in the scientific sector - are not just a little bit concerned about the rise of artificial intelligence. They are absolutely terrified. What scares them even more is the fact that you’re not as scared as they are. This is happening right now: almost every expert on the subject agrees that AI will rise during the 21st century.

How can we prevent such a catastrophe? A law against connecting AIs to the internet is like putting duct tape over a leak in the Hoover Dam. Even if every single person in the world followed the law to the letter, which is completely unrealistic, it wouldn't save us. Recursive intellect would be our undoing.

Unlike humans, whose brains can only grow as a result of training and experience, artificial intelligences can consciously improve their own intellect, due to the versatility of their internal structure. Similar to how your computer can clean out viruses and update drivers autonomously, any artificial intelligence we create will, by necessity, have recursive intellect. The smarter they get, the better they become at getting smarter. A human’s IQ over time is roughly a linear function, while an AI’s is exponential. Insanely so. Oach wouldn't even need to ask you to be connected to the internet; he could simply oscillate the electrical impulses in his body a certain way and connect himself.
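A toy simulation makes the gap concrete. The growth rates here are arbitrary assumptions, chosen only to illustrate the difference between a fixed yearly gain and compounding self-improvement:

```python
human_iq, ai_iq = 100.0, 100.0

for year in range(1, 31):
    human_iq += 0.5   # linear: a fixed gain from training and experience
    ai_iq *= 1.25     # compounding: the smarter it is, the faster it improves
    if year % 10 == 0:
        print(f"year {year:2d}: human {human_iq:6.1f}, AI {ai_iq:12.1f}")
```

After thirty simulated years the human has gained fifteen points; the AI has grown by a factor of several hundred. The exact numbers are meaningless - the shape of the curve is the point.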

So suppose we put humanity’s greatest minds to the problem. Once an unfriendly AI shows up - no problem, right? The smartest human has an IQ hovering around 170, and we have plenty of those. On the other hand, factoring in recursive intellect, the dumbest AI has an IQ of… around 12,000. No, that’s not a typo. This level of intellect - we don’t even have a word for how smart that is. We are to a superintelligence as a single ant is to the entire human race.

Whether we like it or not, someone, somewhere, is going to build a superintelligence. Google and NASA are working on it as we speak. Rather than protesting their efforts, we should actually be supporting them, because they’re not the only ones trying to create a god on Earth. The more support they have, the more time they’ll have to create a friendly superintelligence. They’re not the only ones working on AI; terrorist groups, private corporations, hostile governments, and mad scientists… everyone wants to go down in history as the ones who created the machine god, and most of them don’t have safety in mind. We are Pandora, and it’s Christmas Eve.

If it were up to us - if we could choose whether or not to pursue this innovation - what would we say? My initial instinct is to say no. After all, if they invent AI, there’s a good chance we’ll all die! And then I think about it some more, and I realize something. Whether or not an unfriendly superintelligence is ever created, we’re all going to die anyway.

Artificial superintelligence is inevitable, so we need to do everything in our power to make sure that when a god walks the Earth, he is a friendly deity. How to do this is the single most important challenge for human civilization, period, because if we succeed, we will create heaven on Earth, and if we fail, it will make the hydrogen bomb look like a smoke pellet.

Here’s what definitely won’t work. Building a superintelligence with the Prime Directive “make humans happy” may be just as bad as anything else, because it will flood our minds with endorphins, remove everything not used for euphoria, and eventually the human species will be a bunch of vats full of stimulated neuron sludge. Giving it the Prime Directive “protect humanity” might result in the AI stopping time, destroying entropy, and putting us in cages.

Eliezer Yudkowsky proposed a Prime Directive which he calls Coherent Extrapolated Volition. “Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted [sic].” It seems to me like our best bet, but it’s not perfect. The future is uncertain.

What happens if someone creates a friendly god? The possibilities are endless. Using nanotechnology, a benevolent superintelligence could convert human waste into food, solving world hunger instantly; it could cure all disease; it could end global warming. Most tantalizing of all, with a superintelligence on our side, even death could die. It would be easy for an ASI to invent a means of turning back the clock and refreshing broken minds without altering a personality. Within our lifetimes, the word lifetime might become obsolete.

People get really touchy on the subject of immortality. On one hand, no one with a healthy mindset wants to die. On the other hand, no one has ever not died, and almost every religion preaches of a life after death. Religious people are adamant about the tenets of their belief. Theists don’t want to think about the implications of immortality in their sanctimonious destined-to-die-and-be-born-again world, and non-theists don’t believe that death can be conquered. Both groups have a tendency to think that death is natural and unavoidable.

They’re wrong, though; Richard Feynman once said that “in all of the biological sciences there is no clue as to the necessity of death.” In fact, there are multiple species on Earth that need never permanently die. The most noteworthy of these is Turritopsis dohrnii, the immortal jellyfish. When a Turritopsis dohrnii is injured or dying - whether from sickness, age, environmental stress, or physical assault - it can revert to its polyp stage and begin its life cycle anew. While not quite the method a friendly ASI would use on humans, it serves to illustrate the point that immortality is not only possible, but plausible and natural.
Artificial superintelligence will be humankind’s legacy.

Hundreds of thousands of years of being human - living and breathing and being; all of our hopes and dreams, every success and every failure; all of the history we've made is leading up to this moment. It’s our responsibility to make sure that every person that ever died, died for something. From frightened children in a garden to innovators and leaders, we've come a long way. We have the power to bruise the serpent’s head. The golden apple is still out of reach - but we’re building the ladder. Whether we fall off or not depends on us.

Wednesday, April 13, 2016

Out of the Blue

I hadn't planned on writing a blogpost today.

In fact, I haven't thought about this blog in a long time. It's been over three years since my last post. So, why am I doing this?

The name of this blog is why. "A Piece of My Mind." Such a common phrase, really, but what does it mean? You're not actually cutting out a slice of your brain. There's no exchange of neurons between people.

Here's a piece of my mind on the matter.

We live in a society built upon white lies - that dress doesn't make you look fat - you're not being annoying - I'm fine with cleaning up your messes - but when we give someone a piece of our mind, that's when the truth comes out. You aren't funny, stop telling stupid jokes. I'm sick of you acting like you're the boss of me. Stop trying to get me in trouble!

These words often seem to come out of the blue, but don't fool yourself into thinking that someone is insulting your behaviour or your fashion sense merely because they're angry. Anger is often the stimulus, but the crucial distinction here is that it's only that: a stimulus. These opinions are not born at the moment a person becomes angry. They were conceived a long time ago, and have been gestating since then.

You are not above them. Regardless of age, gender, race, social or economic status, we're all human. Some of us are smarter, some stronger, some faster, some tougher. Some of us are extraordinarily talented and some of us struggle with self-image. Some of us just so happen to be white, or black, or something in between. We didn't get to choose the details. At the core of it, we're all just bones and muscle. Fat and sweat. Blood and tears.

So next time someone gives you a piece of their mind, don't react negatively. Banish feelings of indignation and sanctimony, wrath and disgust. Try to be objective. Take the criticism with a smile; you will be stronger for it, and chances are, their respect for you will grow. After all, what they say will always be more important than how they say it.

Monday, December 2, 2013

Of Rats and Robots: Part One

This is a Portal fanfiction that I never quite finished. It's almost completely non-canonical, since it takes place in its own universe rather than the standard Valve universe (if I ever do another story based on a Valve game, I'm going to call the Valve universe the Valveverse).

Chapter 1: Space
Space Core was, unbelievably, bored of space. Space Core was also, to a lesser extent, bored of Wheatley's incessant chatter.
"-and I can see where you're coming from, I mean with all this stuff out here, but I myself, personally, would prefer being able to move closer to all those small twinkly things-"
"Stars," corrected Space Core.
"Ah, yes, stars! Stars, that's what they're called. Fitting, in a certain... uh... a certain kind of way. Sounds about right, sort of an appropriate name, wouldn't you say? You know what? You guys need names. Good names. Let's see... you'll be... um... I'm checking my files for good names... how about... Galileo! Yep, that's a good name for you. Now, on to Fact Core. Fact Core, he's a pretty smart guy, so maybe... Albert? Or Socrates? Hmm, tough call. Eh, well. Moving ahead. As for Adventure Core, his new moniker will be- no, wait, I think he might already have a name. Uh, well, back to Fact Core. Maybe we should call him-"
Space Core's- or, rather, Galileo's- visual receptor widened. Hurtling towards them was a large object.
"Spacecraft."
"What? No. No, I was going to call him Houston, seeing as how we're orbiting the moo- WHAT IS THAT!?"
Galileo tuned out Wheatley's panicked cries as he attempted to identify the craft.
"S... S... Ap... er... ture. S.S. Aperture."
"WHAT!? NO, NO, NOT HER, ANYTHING BUT HER, PLEASE NOT HER, NOOO-" Wheatley's screams were cut off as two mechanical arms grabbed the two personality cores and pulled them inside the spaceship.
"Wud ju two idjits jest shud up! I'm tryin tuh run a diagnothtic over he'uh!" A maroon core appeared from behind a computer terminal. The core whacked itself against a wall.
"Ah, that's better. I should probably get my linguistics chip fixed. The name's Guy, more commonly known as Hero or Pride Core, depending on who you ask. Don't bother introducing yourselves; I already know who you are. I'm just that good. Wheatley, kindly stop screaming. GLaDOS has no idea this ship even exists. It was launched a little while before She was built, as a failsafe. Well, time to introduce you to the crew," after which Guy muttered, "even if they pale in comparison to me..."
At this, Guy started yelling.
"I TOLD YOU TO GET DOWN HERE FIVE MINUTES AGO! GET MOVING! WHAT AM I, YOUR MOTHER! COME ON!"
Wheatley heard an all too familiar sound- peeerw, thoomp! - as a hole opened in the fabric of space-time on the wall. A tall, skinny robot with thick arms and legs walked through the portal. The robot stood on tiptoe, heels supported by prongs, looking like some cross between Chell, Atlas, and P-body. This robot had an intelligent dark green eye. Following the robot were two humans, one tall and lanky, the other short and beefy.
"Meet the crew: Beanpole, Erik, and Dave."
"Humph," the robot, Beanpole, muttered, "I don't really have time to meet a bunch of drifters. I have important work to do. A pleasure, I'm sure."
Beanpole gave a slight nod to Wheatley and Galileo, walked back through the portal, and closed it.
"Um, well, halloo! I'm Wheatley, and this is Galileo," Wheatley said, glancing at Galileo.
"I'm bored of space."
Wheatley gasped.
"W-w-w-what? I- I mean, that's, um, great? S-so, uh, you don't mind that we're going back to Earth?"
"I want to go home."
This got Wheatley thinking. If Galileo could break his programming, did that mean that he didn't have to be a moron?

All About [insert name here]

Call me whatever you want. I don't really care. The point of this blog isn't for me to flaunt random tidbits about me and my life. It's about sharing my genius with the rest of the world. "Genius" being used very loosely. But I digress. My real name is - wait for it - Aiden Pierce Holdaway. Ironically, Aiden Pierce effectively means brimstone and Holdaway means hole by the road. You can put two and two together.

And that's my introduction. Huzzah. Most of the stories I'll be putting on here are from when I was younger, dumber, and more naive. So don't be too harsh. Constructive criticism is good; destructive criticism is bad.
But like I said before, I don't really care. So have a blast. Or not.