Comic Discussion > QUESTIONABLE CONTENT
A question regarding the post-singularity
Morituri:
To paraphrase Howard Tayler, I wouldn't want to be the reason for the extinction of humanity, but I'm the best reason I can think of.
At some level, Momo (and other AI) know that fear is at least potentially justified. And I think *THAT* is why she's so uncomfortable with Bubbles' military career. Getting involved in wars means getting perilously close to becoming a potential exterminator of humanity.
And try as she might to promote peaceful nonviolent interaction with humans and emotional health among AI, she's well aware that full sentience means the ability to go fully crazy, and that sooner or later there are going to be AI just as prejudiced and hateful as that minister she was talking about.
This is something that AI researchers think about. A lot. I'm doing a paper for an upcoming conference about making artificial neural networks more stable with respect to poor initializations and badly chosen training parameters. That's purely a technical improvement in how easy it is to make things learn, not an imminent danger to humanity. But every one of these technical advances removes another barrier, and sooner or later, when somebody figures out how to make a full-on strong AI that poses an existential threat to humanity, none of those relatively trivial barriers will be left to stop her.
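Morituri doesn't describe the paper's actual method, but the initialization sensitivity being discussed is easy to sketch. Here is a minimal NumPy illustration (my own construction, not anything from the paper) of why poorly scaled weights make deep networks fragile: activations either saturate or collapse as they pass through many layers, and training stalls before it starts.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_stddev(depth, width, scale):
    """Std. dev. of activations after `depth` tanh layers,
    with weights drawn as N(0, 1) * scale."""
    x = rng.standard_normal(width)
    for _ in range(depth):
        w = rng.standard_normal((width, width)) * scale
        x = np.tanh(w @ x)
    return x.std()

# Variance-preserving ("Xavier"-style) scale: 1 / sqrt(fan_in).
xavier = 1.0 / np.sqrt(500)

print(activation_stddev(20, 500, xavier))  # stays in a healthy range
print(activation_stddev(20, 500, 5.0))     # tanh saturates near +/-1
print(activation_stddev(20, 500, 0.01))    # signal collapses toward 0
```

With the variance-preserving scale, the signal survives twenty layers; with a scale even modestly too large or too small, it saturates or dies exponentially fast, which is the kind of fragility that stability research tries to engineer away.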
And I'm one of those weirdos. When I go home from work where we train neural networks to do translation of natural language, I work on my own time on the origins of will and desire and subjective experience.
Sanity is a refinement. It's not likely to be an attribute of any of the early AI. Until we know something from experience about how and why they go insane, or why their starting state isn't anywhere close to sanity, we can't even get our hands around the problems. And we go on even so, because we curious monkeys can't resist the tremendously interesting problem.
Is it cold in here?:
There will be abused AIs, and there will be AIs radicalized by abuse.
Momo and those who think like her can take away the fear of the unknown, but that will leave large problems unsolved.
Carl-E:
--- Quote from: Morituri on 07 Feb 2016, 18:02 ---This is something that AI researchers think about. A lot. I'm doing a paper for an upcoming conference about making artificial neural networks more stable with respect to poor initializations and badly chosen training parameters.
--- End quote ---
Sounds like an ideal cauldron for chaotic behaviour (mathematically speaking). Despite the everyday connotations of the word, instability in a chaotic system usually means some parameter or another runs off to infinity, which may cause the system to lock up or hit limiters of some sort, changing its behaviour unpredictably and adding to the chaos. :-P
The "locking up" part is probably the best outcome in such a situation. At least, it's "mostly harmless".
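The "runs off to infinity, then hits a limiter" behaviour Carl-E describes shows up in even the simplest training loop. A toy illustration (my own, not from the thread): gradient descent on f(w) = w², which is stable for small step sizes, diverges exponentially once the learning rate crosses the stability threshold (here, lr > 1.0), and, with a clamp standing in for a "limiter", gets pinned at the boundary instead.

```python
def gradient_descent(lr, steps=50, clip=None):
    """Minimize f(w) = w**2 from w = 1.0; optionally clamp w to [-clip, clip]."""
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w              # gradient of w**2 is 2w
        if clip is not None:         # "limiter": clamp the runaway parameter
            w = max(-clip, min(clip, w))
    return w

print(gradient_descent(0.1))             # converges toward 0
print(gradient_descent(1.5))             # |w| blows up exponentially
print(gradient_descent(1.5, clip=10.0))  # limiter pins it at the boundary
```

With lr = 1.5 each step multiplies w by (1 - 2*1.5) = -2, so the unclamped run reaches magnitude 2^50; the clamped run just bounces between the limits, which is "mostly harmless" in exactly the sense above.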
Is it cold in here?:
Oh joy. When we invented AIs in the QC world, we may have invented new forms of mental illness.
Mad Cat:
What might be the symptoms of the AI psychopathy known as Kugai Syndrome?