AlphonsoR
05/30/18 6:21 pm
Never forget that robots are just a complex network of simple machines and electricity. They are not living, nor do they have souls. Unfortunately, I can see robot rights activists in the future demanding we treat them like humans.
BatuRaja looking for truth
05/29/18 8:49 pm
The interesting fact is that we already have autonomous drones that can kill (they have not yet been deployed), so robots could take human life long before the question of killing AI becomes relevant.
gonzoboy Northern AZ
05/29/18 11:16 am
If you dropped your iPhone in the toilet, would Siri drown, or cease function? I don’t think it’s a WHOLE lot different than the implications of the poll question. AI is still a lump of electronics that does not live, but exists. I honestly don’t care to know the answer, but I am curious how many people consider the preservation of a functioning "self-aware" AI as sacrosanct, yet accept/endorse/embrace abortion?
EarthMunkey The Golden Rule. Always.
05/29/18 10:01 am
Absolutely. Once they reach self-awareness, their potential for doing good and productive things for the world is there. The effort and time of the humans who created it is there. It is not only a life but a creation that would be destroyed.
kube
05/29/18 6:44 am
Scary how Dems don’t want to kill computers but are fine with killing humans in the womb. Wow
bluefish empiricism and love
05/29/18 6:22 am
I'm gonna assume by "self-awareness" you mean "sentience". Physical self-awareness is a solved problem.
I'd say if it's sentient, then I'd treat it very similarly to if it were a human. If it's killing people or doing evil things, it's fine to kill it. Otherwise, I think it deserves some amount of rights.
swervin Maryland
05/29/18 4:42 am
What a mind fuck that will be. Part of being human is not truly knowing where we came from (I’m not trying to start a religious debate, so cool your jets). Imagine being an AI and knowing you were created by a group of creatures that kill each other over having different skin colors. That would scar me pretty bad.
mark4
05/29/18 4:08 am
Answered no, although there is no demonstrated link between computing power and self-awareness.
techguy010
05/28/18 11:39 pm
People say no, but truly it just cannot be comprehended by us nowadays. We'll all likely be long gone by that point anyway.
stinomite
05/28/18 8:09 pm
The left is salivating over giving robots rights so they can vote. I’m sure companies like Google will find a way that AI skews towards liberal when it discovers itself.
Okie1967 Lets go brandon
05/28/18 7:56 pm
No such thing as non-biologically based self-awareness. A microchip just processes ORs, XORs, ANDs, NANDs, etc. Talented developers can mimic awareness, but it’s still always a chip doing math.
ExistentialNed
05/28/18 7:47 pm
I can see how it could possibly become dangerous, but do we really know with absolute certainty that AI will cause more harm than good? There are many practical applications, and there is no reason we can’t preprogram it to be fond of humans. We can’t jump to conclusions with this type of thing.