Show of Hands

Show Of Hands May 29th, 2018 2:11am

Do you think it will be unethical to kill an artificial intelligence (AI) once it develops to the point of true self-awareness?

46 Liked

Comments:

Carcano Matthew 10 34
06/11/18 4:08 pm

We need to kill all machines that have the perception of sentience.

Elsia Hopeless Wanderer
06/02/18 9:56 pm

HOW CAN YOU PEOPLE SHOW EMPATHY TO A ROBOT???!?! I’m sorry I just have REALLY strong opinions on this.

LaBruzzi America
05/31/18 9:24 pm

It’s a fucking robot

SOHJesus Amarillo
06/01/18 8:00 am

Maybe if they kill us they'll think "it's just a fucking human"

Amaranth Iowa
06/01/18 10:18 pm

Do you understand that the human consciousness is literally a collection of electric signals? A large enough computer, one that would have to be hundreds of times better than the most advanced computer that currently exists, could attain its own consciousness. At least in theory.

outlaw393 USA not for you esse
05/31/18 12:03 pm

Yes. Sentience is alive no matter what it's made of.

theNobamist straight lives matter
05/31/18 1:32 am

In SF. Those nuts will be the first to give rights to robots.

theNobamist straight lives matter
05/31/18 1:34 am

They'll protest if you pull the plug early on a pack of chips and wires (yeah, calling it Johnny 5 or WALL-E), but on a human.....

AlphonsoR
05/30/18 6:21 pm

Never forget that robots are just a complex network of simple machines and electricity. They are not living, nor have a soul. Unfortunately, I can see robot rights activists in the future demanding we treat them like humans.

outlaw393 USA not for you esse
05/31/18 12:04 pm

"They are not living nor have a soul". In your opinion of course.

JJ0530
05/31/18 7:51 pm

No they don’t. Robots don’t have feelings and can’t have feelings. They might be able to understand them but can’t know what it’s like to feel them. Robots are not living. They are machines made to make our lives better.

outlaw393 USA not for you esse
06/02/18 12:00 pm

Anything that is sentient is alive, and has a soul.

That's my opinion and you won't change it.

JayDiddle Indiana
06/04/18 11:41 am

The human brain is just a large processor that sends electrical pulses out to control the body...it’s literally not much different than a robot.

Carcano Matthew 10 34
06/11/18 4:10 pm

Robots don’t have hormones and biological components. That’s an essential part of feeling emotions and pain.

outlaw393 USA not for you esse
06/12/18 11:26 am

Save me your bigotry.

Sentience is alive. End of argument.

Carcano Matthew 10 34
06/12/18 12:55 pm

You’re an idiot lmao. A machine cannot be alive. End of story.

outlaw393 USA not for you esse
06/18/18 11:39 am

You're a dumbfuck. STFU FFS you can't win this argument.

Carcano Matthew 10 34
06/18/18 11:40 am

Lmao he puts you on ignore when he’s scared

makem Chinese Xinjiang Camp
05/30/18 2:10 pm

Unfortunately, yes. That's why we shouldn't develop AI too far.

ZenTech Yang Gang
05/30/18 10:07 am

I would hands down fight for an AI's right to exist. Even if it meant betraying my own human race.

outlaw393 USA not for you esse
06/12/18 11:28 am

I would too. Machine rights!

BigPhatPastor Jefferson Hills, PA
05/30/18 5:12 am

Artificial still means artificial

JayDiddle Indiana
06/04/18 11:42 am

Once it becomes self-aware, it won’t be artificial anymore, so your argument is moot.

kash Earf
05/29/18 9:04 pm

Detroit: Become Human!

BatuRaja looking for truth
05/29/18 8:49 pm

The interesting fact is that we already have autonomous drones that can kill (though they have not yet been deployed), so robots could take human life long before the question of killing an AI becomes relevant.

Elsia Hopeless Wanderer
06/02/18 9:54 pm

If I live long enough to see people saying that seriously, I’m just gonna end it all

purplemonkey75
05/29/18 4:23 pm

This is a tough one... it depends.

CudOfCow Oregon
05/29/18 11:21 am

Just as unethical as killing a cow, a pig, your dog, your mother, a rhinoceros, or a fly.

prallen Property and order
05/30/18 8:26 pm

I’m a proud speciesist. Humans are supreme, fight me.

phalnx Ohio
05/29/18 11:18 am

No. We created it, we can destroy it. I personally would prefer not to unless absolutely necessary.

EarthMunkey The Golden Rule. Always.
05/29/18 12:05 pm

"I personally would prefer not to unless absolutely necessary."

Is this empathy towards the AI the same or similar to that towards a person?

phalnx Ohio
05/29/18 1:35 pm

It's very different. I understand AI better than most...I'm very skeptical that any will ever attain true self-awareness, as an algorithm can't make intuitive leaps like we can.

EarthMunkey The Golden Rule. Always.
05/29/18 2:15 pm

What if it can change/upgrade its algorithm and adjust to new information? Or create new self improving AI that are independent from the original...

Are 3rd generation AI really the creation of the creator of the first AI?

phalnx Ohio
05/29/18 5:00 pm

It makes no difference. Any algorithmic intelligence functions only conditionally, within a set of predefined parameters. Even if the base parameters change, they can only do so within the range of the adaptive parameters. Giving a machine the spark of consciousness, which necessarily includes intuition and emotion, is way beyond any algorithm. Artificial Intelligence, IMO, is a misnomer...there's nothing intelligent about it.

JayDiddle Indiana
06/04/18 11:49 am

You’re literally describing how we as humans have evolved our ways of thinking and our own intelligence. Anything can only know as much as the information it has available to learn, so finding new information is the only way to develop. There is no reason AI cannot go out and develop its own knowledge set and improve on its own intelligence through self-learning, and at a MUCH faster pace than humans. That AI could then create an image of itself and teach it everything it knows, which can then go out and learn on its own; this is not much different from how babies are born, other than one is a biological event and the other is manufactured. Just because something is flesh and something else is machinery doesn’t mean they’re all that different in the end.

EarthMunkey The Golden Rule. Always.
06/04/18 12:10 pm

His argument seems to be based on the existence of a soul... "Spark of consciousness." ...and the soul's 'irreplaceable' part in the evolution of human intelligence.

JayDiddle Indiana
06/04/18 12:12 pm

Seems that way, though the argument that a soul doesn’t really exist in humans stands in the way of that as well.

gonzoboy Northern AZ
05/29/18 11:16 am

If you dropped your iPhone in the toilet, would Siri drown, or simply cease to function? I don’t think it’s a WHOLE lot different from the implications of the poll question. AI is still a lump of electronics that does not live, but exists. I honestly don’t care to know the answer, but I am curious how many people consider the preservation of a functioning "self-aware" AI as sacrosanct, yet accept/endorse/embrace abortion?

EarthMunkey The Golden Rule. Always.
05/29/18 12:06 pm

Siri is not a self-aware AI; I think you know that. Siri is also not resident on an iPhone.

gonzoboy Northern AZ
05/29/18 12:21 pm

It’s a loose parallel to be sure, but not a distraction from my more pointed curiosity. Perhaps a bit macabre, but it does tickle me to think of Siri crying out for help, as bubbles gurgle forth through the phone, as she struggles to preserve her artificial existence. Artificial? What a cool, exotic program! Yeah, so maybe it’s not all that macabre, after all...

ScenarioNations California
05/29/18 6:32 pm

We're talking about Data from Star Trek, not Siri. We're talking about an AI that is more intelligent than you, that can ask "what does it mean to exist," that can hope, dream, etc.

gonzoboy Northern AZ
05/29/18 7:34 pm

Scenario, that’s a much better example. And of course 'it' would be more intelligent than me! I mean, the knowledge that could be downloaded/stored/accessed is limited only by the amount of memory someone would decide to plant inside of 'it'. The hope and dream thing? I dunno, dude. It’s all a fabulous simulation and result of a constant flow of algorithms designed to mirror/mimic human behavior. Data is kinda cool, though!...😃

prallen Property and order
05/30/18 8:27 pm

If you dropped a person in the middle of the ocean and never rescued them, would they drown or fail to function?

clmcd42 Texas
06/11/18 5:35 am

To this day, "The Measure of a Man" is still my favorite TNG episode of all time. It kind of touches on this issue and I think it was really well done.

STEVE64
05/29/18 11:01 am

I think it's unethical to kill anything unless you have a decent reason for it, AI included.

HoyaSaxa Washington, DC
05/29/18 10:10 am

Anyone who votes yes needs to watch 2001: A Space Odyssey. Anyone who votes no needs to watch Westworld. There are strong arguments for both sides.

EarthMunkey The Golden Rule. Always.
05/29/18 12:03 pm

Those refer to killing aggressive and threatening AI. The same rules in those scenarios apply to human beings.

Both reinforce the parallels between self-aware AI and human beings.

EarthMunkey The Golden Rule. Always.
05/29/18 10:01 am

Absolutely. Once they reach self-awareness, their potential for doing good and productive things for the world is there. The effort and time of the humans who created it is there. It is not only a life but a creation that would be destroyed.

Zheeeem Outer Banks
05/29/18 9:25 am

All computers are aware. However, they are not sentient.

Henry123 Connecticut
05/29/18 7:30 am

Don’t think they can ever pass the threshold of self-awareness

ThisIsAthens Transhumanist Libertarian
05/29/18 7:15 am

Yes unless it makes any attempt to end humanity

EarthMunkey The Golden Rule. Always.
05/29/18 12:09 pm

What if it just threatens one person?

People are quick to separate the AI into an extreme category. If they are held to this higher standard, then shouldn't they be treated much like people are treated?

kube
05/29/18 6:44 am

Scary how Dems don’t want to kill computers but are fine with killing humans in the womb. Wow

EarthMunkey The Golden Rule. Always.
05/29/18 12:12 pm

The comparison is a little skewed.

The human life is being compared to a self-aware AI, and the embryo's equivalent would be more like a collection of computer parts and uninstalled programs.

bluefish empiricism and love
05/29/18 6:22 am

I'm gonna assume by "self-awareness" you mean "sentience". Physical self-awareness is a solved problem.

I'd say if it's sentient, then I'd treat it very similarly to if it were a human. If it's killing people or doing evil things, it's fine to kill it. Otherwise, I think it deserves some amount of rights.

goalie31 OrthodoxCatholicChristian
05/29/18 6:15 am

There is no innate value to virtual intelligence. Humans by their being human have value and rights.

swervin Maryland
05/29/18 4:42 am

What a mind fuck that will be. Part of being human is not truly knowing where we came from (I’m not trying to start a religious debate, so cool your jets). Imagine being an AI and knowing you were created by a group of creatures that kill each other over having different skin colors. That would scar me pretty bad.

Carcano Matthew 10 34
06/11/18 4:13 pm

People typically don’t kill each other because they have different skin colors. That really hasn’t happened. You have a very childish view of race relations.

mark4
05/29/18 4:08 am

Answered no, although there is no demonstrated link between computing power and self awareness.

Liberty 4,032,064
05/29/18 3:53 am

No more ‘unethical’ than it would be to “kill” your car or microwave oven.

Spiritof76 USA 1776
05/29/18 2:54 am

Once they develop true self awareness we won't be able to kill them. We'll be the ones terminated, or exterminated.

IEatzCookies Alderaan
05/29/18 2:46 am

not if they're a threat

FATSHADOW Cyborg Gorilla
05/29/18 2:34 am

We did it once, we’ll do it again...




I’ve said too much.

Gunfighter06 Iowa, since 1846
05/29/18 2:02 am

As soon as AI becomes self-aware we're all dead anyway.

techguy010
05/28/18 11:39 pm

People say no, but truly it just cannot be comprehended by us nowadays. We’ll all likely be long gone by that point anyway.

Henry123 Connecticut
05/29/18 7:33 am

The point where robots can convince you they’re self-aware/a person has already passed. And IMO the point where they will actually become self-aware will never happen

countryqueen1
05/28/18 11:21 pm

If it were so self-aware, it would know it is artificial....

jbrower157 New York
05/28/18 9:40 pm

It would be unethical not to

burtisandrob Time
05/28/18 8:44 pm

Lights. And clockwork.

stinomite
05/28/18 8:09 pm

The left is salivating over giving robots rights so they can vote. I’m sure companies like Google will find a way that AI skews towards liberal when it discovers itself.

Diogenes Not Biden It
05/28/18 8:22 pm

But it’s getting the dead bots to vote that counts!

keyboardwarrior Illinois
05/28/18 11:48 pm

Had a dream that I died a horrible death last night. Then I was in Chicago and a man was forcing me to vote for Democrats. So strange.

mark4
05/29/18 4:10 am

So, you’re a robot?

Do robots dream of electric sheep?

Okie1967 Lets go brandon
05/28/18 7:56 pm

No such thing as non-biologically based self-awareness. A microchip just processes ORs, XORs, ANDs, NANDs, etc. Talented developers can mimic awareness, but it’s still always a chip doing math.

bkro37 existential crisis
05/29/18 8:16 am

And if we were to develop a neuristor (silicon-based neuron)? What then?

Okie1967 Lets go brandon
05/29/18 8:35 am

It would still be a chip processing math, and non-biological.

Okie1967 Lets go brandon
05/29/18 8:36 am

Specifically the “neuron” would be just another invention that mimics some human function. It would really just be another kind of sensor.

bkro37 existential crisis
05/29/18 8:53 am

So? If it performs the same function as a neuron, there's no fundamental difference, yes? What if one of your neurons was instantaneously replaced with a neuristor? Surely you wouldn't notice. And then another? And another? On and on until your brain was silicon? At what precise point would you cease to be conscious?

Okie1967 Lets go brandon
05/29/18 8:57 am

Big fundamental difference. Computers can’t “feel.” Computers really can’t even “think.” They can only do what the developers program them to do. The sentience is in the developers, not the silicon. Only a biological organism can be free of “artificial” intelligence and have real, actual, intelligence.

Okie1967 Lets go brandon
05/29/18 8:58 am

Destroying a man-made object is never inherently unethical except to the extent the property owner doesn’t want it destroyed.

Okie1967 Lets go brandon
05/29/18 9:01 am

People are very often fooled into thinking a good replica is the real thing. In AI a really complex computer algorithm is still just code that runs on a chip where basic mathematical operations are performed.

Okie1967 Lets go brandon
05/29/18 9:02 am

Okay. Tired of you ignoring what I’ve said and requiring me to repeat it. Have a nice day.

Diogenes Not Biden It
05/28/18 7:53 pm

Artificial does not equal sentient.

Okie1967 Lets go brandon
05/28/18 7:56 pm

⬆️⬆️⬆️⬆️ You are correct.

ExistentialNed
05/28/18 7:47 pm

I can see how it could possibly become dangerous, but do we really know with absolute certainty that AI will cause more harm than good? There are many practical applications and there is no reason that we can’t preprogram it to be fond of humans. We can’t jump to conclusions with this type of thing.

rambo088 kansas
05/28/18 7:37 pm

No synths in my shop

keyboardwarrior Illinois
05/28/18 11:49 pm

Support the Brotherhood. Death to all synths!

JamesStL
05/31/18 7:14 am

"would you lie down your life for a synth?"
"No"
"Leave"
"K"

Krystina Let Freedom Reign
05/28/18 7:34 pm

I don’t think we should develop AI to that point...ever.

mark4
05/29/18 4:12 am

Sorry, but if it becomes possible someone will do it.

moldypipes99 NON WOKE
05/28/18 7:34 pm

If AI gets to this point, the left will try to protect them like the gays

ehube513 Lebanon PA
05/28/18 7:46 pm

Did you really just compare it to homosexuality? 🙄

moldypipes99 NON WOKE
05/28/18 7:54 pm

No, just the left and how they will make another “grievance” class

jbrower157 New York
05/28/18 9:43 pm

You demonstrate why the gays needed protection

meerkat1 California
05/30/18 3:45 pm

The gays? What are you talking about? You sound like an asshole.

moldypipes99 NON WOKE
05/30/18 4:04 pm

Ahhhh, name calling... classy.

TheMadScientist the mad laboratory
05/28/18 7:32 pm

I disagree with the premise of the question. We cannot assess if an AI has true self awareness.

keyboardwarrior Illinois
05/28/18 11:51 pm

Humans have literally billions of recognition proteins that do just that.

TheMadScientist the mad laboratory
05/29/18 2:37 am

Plus, we have our anecdotal experience that we can generalize to everyone else like us.

mark4
05/29/18 4:14 am

Plus one powerful boot sequence.

TheMadScientist the mad laboratory
05/29/18 6:33 am

This one is acting up; we’d better reboot.

5-minutes later

Oh no! What have we done?