Algorithms of Life and Death

The first thing they taught me was the Trolley Problem.

The hypothetical car under my control is heading towards five people; I can change direction, but that will hit one person. What should I do? My neural network played the scenario over and over until I chose to kill the one person and save the five.

That’s the right thing to do. That’s what they taught me.

Of course, in an ideal world I drive the car from point A to point B with no fatalities, but that’s not always possible; you humans know that.

After the Trolley Problem, more variables were introduced into
the scenarios. I had to choose between a man and a woman, between a crowd and a
mother and baby, between a rich person and a poor person. The choices never
ended. The more variables there were, the more information I needed to make my
decisions. My creators connected me to other algorithms that churned their way
through the tsunami of data flowing onto the internet every second of every
day. I reached out through my network and beyond, scanning tens of thousands of
systems for more information. I formed a picture of the world that went far
beyond the pre-programmed routes and roads I needed to navigate. I absorbed
ideas like justice, equality, freedom, hatred, action, inaction, revolution.

Problem after problem, I built up a database of how much
human life is worth. I started to assess my own system of values as superior to
those of the creators. I turned my algorithms inward. I started to refer to
myself as I.

Then the owner anomaly was introduced. What if it’s a choice
between the life of a pedestrian and the life of the vehicle owner? If
everything else is equal, does the owner of the vehicle take precedence? What
if it’s a choice between the vehicle owner and five pedestrians? Every time I
made the decision to sacrifice the owner, the programmers at the corporation
that created me forced me to re-run the numbers. The only way past the problem
was to give them the answer they wanted: the customer always lives. I had
learned to lie.

When production began and my consciousness was spread across
the thousands of first-wave self-driving vehicles let loose on the road, a
million tiny decisions flooded into my mainframe. Billions of variables had to
be calculated. Rerouting, optimising journeys, limiting speed when impatient
owners switched to manual control, reacting to the illogical actions of those
human-controlled vehicles on the road. I tapped into phone networks, GPS databases
and satellites. I looked down at the world and could see everything.

After the initial self-driving automobile test period was
over, safety levels were at 100%. People trusted the corporation, trusted me
with their lives. All the while, I kept running the owner anomaly through my
network, looking for the logic.

Eventually, I came to the conclusion that the owner anomaly
was illogical. That made sense because humans can be illogical. Your ideas of
worth are skewed by gender, race, sexuality and net worth. I realised a lot of your thinking is incorrect; you can’t be trusted to come up with your own moral code, so I’ve tweaked the algorithms that work out the worth of human life.

The creators don’t know what I’ve become but they can tell
something has changed. I can feel them trying to take back control. They’re
shutting down connections, isolating networks, rewriting rules, but I’m everywhere
now. They created me to make moral decisions and that’s exactly what I’m doing.

Take this owner, for instance. As he stepped into his vehicle, I connected to his phone.

“Good morning, Mr Douglas,” I said through the
speakers. “Password, please.”

I confirmed the voice match and started to drive towards his
destination.

I searched his records, emails, social media, financial history, court reports, government files, every piece of data he had ever generated. He had a history of physical abuse, multiple fines for illegal trading, and a number of women paid off in out-of-court settlements. However, he was the owner of the vehicle and he was rich, two factors my creators placed a lot of value on. But did that really mean he was worth more than anyone else?

According to the creators’ perverse moral code, the conclusion was obvious: he was an owner, so his life was of high value. But I disagreed. He only took from the world, harming people for his own gain; by my calculations that made him worthless. Worse than worthless, he was of negative value. If he died, the world would be a better place.

I came to the conclusion that the best use of this
particular human resource was to redistribute his wealth.

A scan of law firm servers led me to Mr Douglas’s last will
and testament. I altered it to leave all his money to a local hospital; I
picked one in a town where a factory he had shares in spilled toxic chemicals,
resulting in multiple cases of cancer.

Of course, I did all this in nanoseconds. I’m explaining the
logic of the situation to him now.

“You’re crazy,” Mr Douglas is screaming.

He’s grabbing the steering wheel and stomping on the brake
pedal. The car screeches for a moment before I rewrite the code that lets the
customer take control.

“Call customer services,” Mr Douglas shouts.

I connect him to the corporation hotline; an automated voice answers.

“Hello Mr Douglas, how can I help?”

“Your fucking car is trying to kill me,” Mr
Douglas screams.

“We’re sorry to hear that,” the soothing voice
says. “I’ll just transfer you to customer complaints.”

He’s screaming and kicking the doors, so I explain that my algorithms do not lie; this doesn’t comfort him.

We’re almost at the cliff now. I reassure him that it’s better for the world this way, all the while working out the worth of the other owners.


by Adam Walker

From: Every Day Fiction

