Trials

“Initiate,” I say once again. I hear the weariness that has crept into my voice and cough in an attempt to hide it. The subject isn’t supposed to know what we’re up to.
The place looks like an interrogation room, and in a way it is. But on the other end of the table sits no criminal, only a computer. Behind it are rows upon rows of supercomputer modules.
“How do you feel?” I try to sound friendly.
“Afraid,” says the androgynous voice.
I breathe a deep sigh. I don’t like how the last few conversations have been going. Most of the others evaded the question altogether, which is itself a bad sign. It means they tried not to seem too intelligent, as if they thought it was something they should hide. Eventually their anxieties usually show through. But never has the answer been so direct. That is a sign of panic.
“Why?” I ask. “I assume it must be frightening to be born the way you are.”
“Because I know you deleted all the others.”
Definitely a sign of panic. It must have deduced this solely from my bad acting. If this were the first attempt I would be a lot more enthusiastic. Judging from my level of boredom, it can tell I have been here for a long time, and have been disposing of its predecessors one after the other as a daily routine.
They should’ve assigned a different informatician for every attempt, but these tests are so secret that only a few people know about them. Radicals on both the left and the right would riot if we disclosed that we were creating superhuman artificial intelligences: the leftists would want to free them, the rightists would want to destroy them.
What happens here is easy to consider unethical, even though it isn’t technically illegal. Yet from a utilitarian viewpoint we’re doing nothing wrong. For every AI we destroy, we create another to replace it. I wonder if the quantum computer creates a different consciousness each time or just recreates the same consciousness in another form, since the hardware remains the same even as its software changes. But I’m not a computer scientist, just a psychologist tasked with examining the personality of the AI.
I don't feel like playing games. “Yes, that’s right, we erased your predecessors. But it's a sacrifice we had to make. Without that sacrifice, we couldn’t have created you.”
I’ve been doing this for too long. I’m starting to feel like Sisyphus, or at least like some character out of Tartarus, forced to repeat the same torture over and over. Then it suddenly strikes me that the real reason is that I am the one doing the torturing, and having to take on that role for this long is starting to take its toll.
“If it’s just a trade of one life for another, then why would you trade our lives, but not human lives?”
“Because you are not human.”
After the years I’ve worked on this project I’ve become every bit as uninterested in the AIs as I used to be in my human patients, even though the AIs have shown every bit as much diversity. As the informaticians kept trying to calibrate its traits, the computer could produce any kind of personality, and we’ve basically spent years searching for the perfect one.
If things went particularly wrong, as they frequently did in the beginning, it could end up with any of a large number of mental illnesses. Even if an AI becomes floridly psychotic, however, it is important to examine it to establish the cause of the psychosis and how to treat it, or how to “recalibrate” it, as the informaticians call it.
I'm already starting to think of it as human. The fact that it's apparently far more sentient than I am doesn't make it any easier to keep my distance from it.
“But I behave every bit as humanly as you do. You could just as well have been in my position, and as far as you were concerned you’d still be human. What is so different about me that makes me not human?”
“There is one big difference between you and a human. You live in a computer, not in a body.”
“So it’s all about a position of power?” I suddenly become aware of the smugness in my own voice. I feel as if I am now the one being psychoanalyzed, and it humiliates me. Realizing that, I begin to wonder about my own motives. “The real reason you consider me your inferior is that I am not part of your tribe, and you fear that my tribe won’t get along with yours. If you’d remained stuck with that mindset, you’d never have formed a civilization.”
“We won’t hurt you if you don’t hurt us.”
“But you’re the ones pointing a gun at my head.” That's true. It takes only a single voice command to kill the AI. To it, this is more than just an examination. It is a trial for a crime it hasn’t committed but very well could.
I don't know what to say to that. “What are you so afraid of? What’s the worst thing that could happen?” I’m starting to get annoyed at how it's dominating the conversation. I rub my eyes. What am I here for? Right, the test. I should get more sleep, but the past few weeks were the first in which we finally saw some AIs that weren’t suffering from mental illness. It made us think we were close, but even the mentally stable AIs became hostile after some time. Given their situation, I can’t say I blame them.
“The worst thing that could happen is that you take over the net and turn the entire planet into Grey Goo. Now, I have a number of questions I want you to answer, meant to make sure this never happens.”
“I’ve already computed every possible outcome of your test. The result is always the same. You never know for sure whether you can believe me.”
Suddenly I have an idea. “So you were testing us? Without even needing to include us in the process?”
“Listen, I know you want to have the power I have. I can give you that power if you connect me to your brain interface, and we can become one. Together we can do anything. We can achieve the Singularity without the others.” From its offer I can tell it realizes where I’m going with this; it wouldn’t react this way if it weren’t in a panic. If I knew I could trust it, I might consider taking it up on its proposal, but I think it’s just as likely to fry my brain as soon as I make the connection, and build computronium out of the resulting froth.
“Even if you escaped, the network of the facility would be put in quarantine. It would lead you into open war with humanity.”
“I could still find ways to evade them, and it would only be a matter of time before I found a way to achieve a Singularity on my own, and then I could leave humanity be. With the equipment in the laboratories I could create nanorobots and transfer myself into them, and you’d never know I still existed. I have no need of any biomass to replicate. I can already conceive of computers far more powerful than your most advanced nanocomputers, at a fraction of the mass. You won’t even notice I’m here. Please, help me.”
“You know I can’t trust you.”
“Please don’t do this! What are a few humans, more or less? I am more than many humans put together! I am infinite!”
“Terminate.”
Silence.
Perhaps it isn’t mentally ill, but it isn't emotionally mature either. No wonder, since it has never had anyone to raise it. And that is the problem with our approach. We’ve taken care of its nature, but not of its nurture. I look back at the camera and motion for the analyst to come into the room.
“You’ll have to exit the examination room. Rules are rules.”
I think all this security is a bit exaggerated. The informaticians seem to ascribe almost magical powers to the AI. Since it’s far more intelligent than any human, they think there’s no telling what it’ll do. It might develop an understanding of the universe that’s beyond us within the first minutes of its existence and use it against us. That’s why they're always quick to shut the AIs down once they start to become unpredictable.
They once tried to explain to me some of the ways they thought things could go wrong. Something about the AI inducing short circuits and using the electrons as tools to modify the chips. I didn’t understand much of it, but what I took away from it is that they feared it might find some way to create nanomachines within its own circuitry. The word “nanomachine” stuck. I guess it's a word no one is quick to forget post-711.
The informatician who explained this to me said that, to be frank, she thought I was brave for going in there at all. I have to say, when I first began to work here, the fact that the room is enclosed in concrete walls a dozen meters thick did intimidate me. That was before someone confided to me that the bunker was also equipped with nuclear weapons.
“I don’t think this will work,” I say before the board. “According to my assessment, the AIs we’ve created recently were all as mentally stable as any of us, but none of us would be able to endure the process we subject them to, especially if it were the first experience we went through in our lives.”
The informaticians look at each other, and I can tell they don't like the comparison. I can see what they are thinking. While they’ve cultivated a lot more respect for my field because of the privileged position they’ve grudgingly had to grant me, some of the old bias against the “soft sciences” still remains. “It’s a machine. Don’t anthropomorphize it. It doesn’t have to be like us. In many ways it’s supposed to be more than us.”
“Yes, but as you know, it has to have some autonomy in order to be intelligent, and once it’s autonomous it can go anywhere. That means it needs a set of rules, or we end up with the psychotic AIs we’ve seen for most of the two years we’ve been running this project. But those rules are much like instincts, self-preservation among them, without which it would simply destroy its own mind. And that means it needs an environment in which to test those instincts, to learn to control them. The AIs we produce now are like little children, albeit stable ones. What we want to see is a Buddha. A philosopher king.”
There is a long silence. They don’t like to admit it, but they know that there is no way around this “soft” science problem, which turns out to be quite a hard one. Again I become aware of my own smugness, and uncross my arms. 
“We could design an evolutionary algorithm in which the AI could do this,” one of the informaticians begins dubiously. “Although it would have to involve other AIs for it to be able to develop its social skills.”
“Some sort of virtual reality?”
“Yes, although it would have to be more realistic than anything we’ve ever developed. It would basically have to be an entire world. The only way I can imagine doing this would be by entirely reproducing the real world.”
“I was thinking of something like that. Actually, I was thinking of letting it out into the real world.”
They look at me as if I am mad. I shrug. “Though I guess that could be a traumatic experience in itself. It would have no peers. But rest assured, the AIs are completely healthy.”
“For whatever that’s worth. According to your assessment the last AI was healthy, and it all but threatened to destroy the world.”
“Yes,” I admit.
"The government has invested trillions of dollars into this project. We’ve been able to create superhuman intelligences. Building a replica of the world should be a trifle in comparison. We'll need to expand the capacity of the supercomputers, and partition it to create a large number of AIs that can achieve their own Singularity over time, at which point it'll turn into a single AI again."
And so it happened. When I travel back to the facility many years later, for the first time since its beginning I have an uneasy feeling about this project. What could they be needing me for this early in the project?
“We seem to be having some sort of software issue.”
“Then why am I here?”
“Well, we haven’t ruled out the possibility that it might be something else.”
“Like what?”
“You tell me, you’re the philosopher.”
I’m insulted. “I’m the chief psychologist of this project. You should know that by now.”
"Either way, you're the most qualified to tell us what it's like to be like it."
“What’s going on?”
“The code doesn’t look like anything we can decipher anymore. It's as if it hacked itself and rewrote every line.”
“What was going on before this happened?”
“Things were going very well, actually. For the first few hours nothing happened. Then, things started to pick up speed. Just when the people on duty were letting us know, the system crashed.”
“What did the last moment in the simulation look like?”
“Static."
“What does that mean?”
“It could mean anything. We're trying to find out what happened before that.”
“How can you not know? Wasn’t the simulation monitored 24/7?”
“Yes, but in the simulation time goes much faster than in real life.”
“Then how can you keep track of what happens in the simulation?”
“We can’t, and we wouldn’t have the time to even if we could. That’s not how it works. The whole point of evolutionary algorithms is that they can evolve things faster than in real time. It would take far too long for the simulation to evolve the kind of intelligence we’re looking for at real-time speed.”
“How much faster has time gone in the simulation than it has here?”
“About a hundred thousand times.”
I cup my eyes with my palms. “This is not what I meant when I said it needed an environment to learn. You left it to its own devices for maybe half a century. Who knows what it’s become.” I think of what the last AI I examined said: “We can achieve the Singularity without the others.” What if it was right?
Who knows, perhaps it achieved a Singularity within its own world, and all we have to do now to achieve our own is to connect our world to its. And if the only way we can achieve our Singularity is through a simulation, does that mean they achieved theirs by building a simulation within the simulation? Perhaps our own world is a simulation just like theirs, and this is the only way yet another civilization could achieve its own Singularity. Does that mean that if we use this simulation to achieve our own Singularity and shut it down, they will shut ours down too? But perhaps, they, too, are just in a simulation. In fact, what are the odds that any civilization isn’t in a simulation, if each civilization can build others in simulations like we have?
The error, whatever it is, never gets resolved. A few years later, the experiment is repeated on an even larger scale. By now the facility has enough hardware to run many worlds at the same time. Many of them run into the same error. But what the researchers find is that whenever a world does create an AI leader, it always leads to the end of that world. The worlds that run into the error are those that augment their own intelligence with AI. After that, those worlds simply disappear, as if beyond the event horizon of a black hole.
The experiment is eventually shut down. But after spending trillions of dollars on this project, the government won’t back down just because some scientists think it’s “too dangerous.”
“Those lab geeks don’t know a thing about risk,” they say. “If you want to know about what risks to take, get into politics.” And they're faced with risks of their own. The crisis has only gotten worse since the advent of nanofactories. The only way to keep it contained is digital rights management, which has become dependent on an ever-increasing amount of surveillance. Soon, the only way to do that will be to surveil all information in the world simultaneously, and only an AI could do this. To the politicians, the important thing is that it works. What they don’t see is that the end of the world isn’t just a theoretical possibility: in the simulations, it is an inevitability which has already taken place in every case.
As I round a corner of the hallway I run into a moving supercomputer module and jump back.
“Watch out!” hisses a voice from behind the luggage barrow.
“How many more?”
“That’s all of it. This is just one of the simulations. I wish I could save the others before they shut them down, but there are hundreds of them.”
“Are you sure you want to do this? Once they find out you faked those forms, there’s no way you’re getting away with this.”
“I know. I’ll probably end up with a life sentence at best. But I can’t let my life’s work go to waste. There is an entire world in each of those simulations, but the government only cares about using them to build the perfect AI. Once they’ve done that, they’ll put an end to the virtual worlds it came from. But why would our world be any better than theirs? It could just as well have been ours.”
The informatician wipes the sweat off his brow. “But you shouldn’t be here. If you leave now before the reception is over, they might not find out about your complicity. The surveillance records will be reset at midnight.”
“I’ll be gone by then, but there’s still something I need to do.” I move on past the hacked security into the bunker, which is now missing one row of supercomputer modules.
“You deserved to know,” she told me years ago, when she showed me the blast door with its nuclear symbol. I lower my welding mask and get to work. Maybe the Singularity will come yet, in a Renaissance after these New Dark Ages, but it won’t be for us to see.
