Do you think it would be unethical to kill an artificial intelligence (AI) once it develops to the point of true self-awareness?
We need to kill all machines that have the perception of sentience.
Wow this is a very complex question.
HOW CAN YOU PEOPLE SHOW EMPATHY TO A ROBOT???!?! I’m sorry I just have REALLY strong opinions on this.
It’s a fucking robot
Maybe if they kill us they'll think "it's just a fucking human"
Do you understand that the human consciousness is literally a collection of electric signals? A large enough computer, one that would have to be hundreds of times better than the most advanced computer that currently exists, could attain its own consciousness. At least in theory.
Yes. Sentience is alive no matter what it's made of.
In SF. Those nuts will be the first to give rights to robots.
Proof of AI: Nancy Pelosi
They'll protest you if you pull the plug early on a pack of chips and wires (yeah, calling it Johnny 5 or Wall-e) but on a human.....
Never forget that robots are just a complex network of simple machines and electricity. They are not living, nor do they have a soul. Unfortunately, I can see robot rights activists in the future demanding we treat them like humans.
"They are not living nor have a soul". In your opinion of course.
No they don’t. Robots don’t have feelings and can’t have feelings. They might be able to understand them but can’t know what it’s like to feel them. Robots are not living. They are machines made to make our lives better.
Anything that is sentient is alive, and has a soul.
That's my opinion and you won't change it.
The human brain is just a large processor that sends electrical pulses out to control the body...it’s literally not much different than a robot.
Robots don’t have hormones and biological components. That’s an essential part of feeling emotions and pain.
Save me your bigotry.
Sentience is alive. End of argument.
You’re an idiot lmao. A machine cannot be alive. End of story.
You're a dumbfuck. STFU FFS you can't win this argument.
Lmao he puts you on ignore when he’s scared
Unfortunately, yes. That's why we shouldn't develop AI too far.
I would hands down fight for an AI's right to exist. Even if it meant betraying my own human race.
I would too. Machine rights!
Artificial still means artificial
Once it becomes self-aware, it won’t be artificial anymore, so your argument is moot.
Detroit Become Human!
The interesting fact is that we already have autonomous drones that can kill (they have not yet been deployed), so robots could take human life long before the question of killing AI becomes relevant.
Robot Lives Matter?
If I live long enough to see people saying that seriously, I’m just gonna end it all
This is a tough one..it depends.
Just as unethical as killing a cow, pig, your dog, your mother, a rhinoceros, or a fly.
I’m a proud speciesist. Humans are supreme, fight me.
No. We created it, we can destroy it. I personally would prefer not to unless absolutely necessary.
"I personally would prefer not to unless absolutely necessary."
Is this empathy towards the AI the same or similar to that towards a person?
It's very different. I understand AI better than most...I'm very skeptical that any will ever attain true self-awareness, as an algorithm can't make intuitive leaps like we can.
What if it can change/upgrade its algorithm and adjust to new information? Or create new self improving AI that are independent from the original...
Are 3rd generation AI really the creation of the creator of the first AI?
It makes no difference. Any algorithmic intelligence functions only conditionally, within a set of predefined parameters. Even if the base parameters change, they can only do so within the range of the adaptive parameters. Giving a machine the spark of consciousness, which necessarily includes intuition and emotion, is way beyond any algorithm. Artificial Intelligence, IMO, is a misnomer...there's nothing intelligent about it.
You’re literally describing how we as humans have evolved our ways of thinking and our own intelligence. Anything can only know as much as the information it has available to learn, so finding new information is the only way to develop. There is no reason AI cannot go out and develop its own knowledge set and improve on its own intelligence through self-learning, and at a MUCH faster pace than humans. That AI could then create an image of itself and teach it everything it knows, which can then go out and learn on its own; this is not much different from how babies are born, other than one is a biological event and the other is manufactured. Just because something is flesh and something else is machinery doesn’t mean they’re all that different in the end.
His argument seems to be based on the existence of a soul... "Spark of consciousness." ...and the soul's 'irreplaceable' part in the evolution of human intelligence.
Seems that way, though the argument that a soul doesn’t really exist in humans stands in the way of that as well.
If you dropped your iPhone in the toilet, would Siri drown, or cease function? I don’t think it’s a WHOLE lot different than the implications of the poll question. AI is still a lump of electronics that does not live, but exists. I honestly don’t care to know the answer, but I am curious how many people consider the preservation of a functioning "self-aware" AI as sacrosanct, yet accept/endorse/embrace abortion?
Siri is not a self-aware AI, I think you know that. Siri is also not resident on an iPhone.
It’s a loose parallel to be sure, but not a distraction from my more pointed curiosity. Perhaps a bit macabre, but it does tickle me to think of Siri crying out for help, as bubbles gurgle forth through the phone, as she struggles to preserve her artificial existence. Artificial? What a cool, exotic program! Yeah, so maybe it’s not all that macabre, after all...
We're talking about Data on Star Trek, not Siri. We're talking about an AI that is more intelligent than you, that can ask "what does it mean to exist", that can hope, dream, etc.
That scenario is a much better example. And of course 'it' would be more intelligent than me! I mean, the knowledge that could be downloaded/stored/accessed is limited only by the amount of memory someone would decide to plant inside of 'it'. The hope and dream thing? I dunno, dude. It’s all a fabulous simulation and result of a constant flow of algorithms designed to mirror/mimic human behavior. Data is kinda cool, though!...😃
If you dropped a person in the middle of the ocean and never rescued them, would they drown or fail to function?
To this day, "Measure of a Man" is still my favorite TNG episode of all time. It kind of touches on this issue and I think it was really well done.
I think it's unethical to kill anything unless you have a decent reason for it, AI included.
Anyone who votes yes needs to watch 2001: A Space Odyssey. Anyone who votes no needs to watch Westworld. There are strong arguments for both sides.
Those refer to killing aggressive and threatening AI. The same rules in those scenarios apply to human beings.
Both reinforce the parallels between self-aware AI and human beings.
Absolutely. Once they reach self-awareness, their potential for doing good and productive things for the world is there. The effort and time of the humans who created it is there. It is not only a life but a creation that would be destroyed.
All computers are aware. However, they are not sentient.
Don’t think they can ever pass the threshold of self-awareness
Yes unless it makes any attempt to end humanity
What if it just threatens one person?
People are quick to separate the AI into an extreme category. If they are held to this higher standard, then shouldn't they be treated much like people are treated?
Scary how Dems don’t want to kill computers but are fine with killing humans in the womb. Wow
The comparison is a little skewed.
The human life is being compared to a self-aware AI, and the embryo's equivalent would be more like a collection of computer parts and uninstalled programs.
I'm gonna assume by "self-awareness" you mean "sentience". Physical self-awareness is a solved problem.
I'd say if it's sentient, then I'd treat it very similarly to if it were a human. If it's killing people or doing evil things, it's fine to kill it. Otherwise, I think it deserves some amount of rights.
There is no innate value to virtual intelligence. Humans by their being human have value and rights.
What a mind fuck that will be. Part of being human is not truly knowing where we came from (I’m not trying to start a religious debate, so cool your jets). Imagine being an AI and knowing you were created by a group of creatures that kill each other over having different skin colors. That would scar me pretty bad.
People typically don’t kill each other because they have different skin colors. That really hasn’t happened. You have a very childish view of race relations.
Answered no, although there is no demonstrated link between computing power and self awareness.
No more ‘unethical’ than it would be to “kill” your car or microwave oven.
Once they develop true self awareness we won't be able to kill them. We'll be the ones terminated, or exterminated.
not if they're a threat
We did it once, we’ll do it again...
I’ve said too much.
As soon as AI becomes self-aware we're all dead anyway.
People say no, but truly it just cannot be comprehended by us nowadays. We’ll all likely be long gone by that point anyways.
The point where robots can convince you they’re self-aware/a person is already past. And imo the point where they will actually become self-aware will never happen
If it were so self-aware, it would know it is artificial....
It would be unethical not to
Lights. And clockwork.
The left is salivating over giving robots rights so they can vote. I’m sure companies like Google will find a way that AI skews towards liberal when it discovers itself.
But it’s getting the dead bots to vote that counts!
Had a dream that I died a horrible death last night. Then I was in Chicago and man was forcing me to vote for Democrats. So strange.
So, you’re a robot?
Do robots dream of electric sheep?
No such thing as non-biologically based self-awareness. A microchip just processes ORs, XORs, ANDs, NANDs, etc. Talented developers can mimic awareness, but it’s still always a chip doing math.
And if we were to develop a neuristor (silicon-based neuron)? What then?
It would still be a chip processing math, and non-biological.
Specifically the “neuron” would be just another invention that mimics some human function. It would really just be another kind of sensor.
So? If it performs the same function as a neuron, there's no fundamental difference, yes? What if one of your neurons was instantaneously replaced with a neuristor? Surely you wouldn't notice. And then another? And another? On and on until your brain was silicon? At what precise point would you cease to be conscious?
Big fundamental difference. Computers can’t “feel.” Computers really can’t even “think.” They can only do what the developers program them to do. The sentience is in the developers, not the silicon. Only a biological organism can be free of “artificial” intelligence and have real, actual, intelligence.
Destroying a man-made object is never inherently unethical except to the extent the property owner doesn’t want it destroyed.
People are very often fooled into thinking a good replica is the real thing. In AI a really complex computer algorithm is still just code that runs on a chip where basic mathematical operations are performed.
Okay. Tired of you ignoring what I’ve said and requiring me to repeat it. Have a nice day.
Artificial does not equal sentient.
⬆️⬆️⬆️⬆️ You are correct.
I can see how it could possibly become dangerous, but do we really know with absolute certainty that AI will cause more harm than good? There are many practical applications and there is no reason that we can’t preprogram it to be fond of humans. We can’t jump to conclusions with this type of thing.
No synths in my shop
Support the Brotherhood. Death to all synths!
"would you lie down your life for a synth?"
A shotgun or AR-15 should do the job
I don’t think we should develop AI to that point...ever.
Sorry, but if it becomes possible someone will do it.
If AI gets to this point, the left will try to protect them like the gays
Did you really just compare it to homosexuality? 🙄
No, just the left and how they will make another “grievance” class
You demonstrate why the gays needed protection
The gays? What are you talking about? You sound like an asshole.
Ahhhh, name calling... classy.
I disagree with the premise of the question. We cannot assess if an AI has true self awareness.
Nor can we with humans
Humans have literally billions of recognition proteins that do just that.
Plus, we have our anecdotal experience that we can generalize to everyone else like us.
Plus one powerful boot sequence.
This one is acting up; we’d better reboot.
Oh no! What have we done?