Something bothering me a lot
ckridge:
We have a million-odd years of practice in sorting out that, say, the cop with an uncanny knack for telling whether someone is carrying a gun doesn't like black people, or can't quite ever believe that a woman could have a gun, or that the genius teacher is a genius only with boys and with girls she can convince herself are boys. Part of it is that we know how humans glitch, and part of it is that language is full of connotations, double meanings, imagery, emotive phrases, and emotional tones, all of which we use unconsciously to express ourselves, and all of which we are exquisitely attuned to. If you listen carefully to the undercurrent of their speech, most people will eventually inform you of their prejudices and blind spots. There are jargons specifically designed to remove that undercurrent, and some of them are built for lying in, but one can tell instantly when someone has retreated into a jargon.
AIs - expert systems or algorithms might be better words for what we actually have now - are harder to figure out than that. They don't talk, and they don't glitch in the ways we do.
If they could talk, really talk, it would be different. Talk to someone long enough, and you can figure them out.
ckridge:
>Take, for example, an episode recently reported by machine learning researcher Rich Caruana and his colleagues. They described the experiences of a team at the University of Pittsburgh Medical Center who were using machine learning to predict whether pneumonia patients might develop severe complications. The goal was to send patients at low risk for complications to outpatient treatment, preserving hospital beds and the attention of medical staff. The team tried several different methods, including various kinds of neural networks, as well as software-generated decision trees that produced clear, human-readable rules.
The neural networks were right more often than any of the other methods. But when the researchers and doctors took a look at the human-readable rules, they noticed something disturbing: One of the rules instructed doctors to send home pneumonia patients who already had asthma, despite the fact that asthma sufferers are known to be extremely vulnerable to complications.
The model did what it was told to do: Discover a true pattern in the data. The poor advice it produced was the result of a quirk in that data. It was hospital policy to send asthma sufferers with pneumonia to intensive care, and this policy worked so well that asthma sufferers almost never developed severe complications. Without the extra care that had shaped the hospital’s patient records, outcomes could have been dramatically different.
The hospital anecdote makes clear the practical value of interpretability. “If the rule-based system had learned that asthma lowers risk, certainly the neural nets had learned it, too,” wrote Caruana and colleagues—but the neural net wasn’t human-interpretable, and its bizarre conclusions about asthma patients might have been difficult to diagnose. If there hadn’t been an interpretable model, Malioutov cautions, “you could accidentally kill people.”<
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
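To make the asthma anecdote concrete: below is a minimal sketch in Python, with invented numbers and a plain logistic regression standing in for both the rule-based system and the neural net (this is not the Pittsburgh team's data or code). The policy of sending asthma patients to intensive care acts as a hidden confounder, so any model fit to the records learns that asthma lowers risk; an interpretable model at least puts that bogus rule where a doctor can see it.

--- Code: ---
# Toy sketch (invented numbers): a hospital policy teaches a model that
# asthma *lowers* pneumonia risk. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

asthma = rng.random(n) < 0.15              # 15% of patients have asthma
age = rng.integers(20, 90, size=n)         # a second, ordinary risk factor

# True biological risk: asthma makes complications MORE likely.
risk = 0.05 + 0.002 * (age - 20) + 0.10 * asthma
# Hidden confounder: policy routes asthma patients to the ICU, and the
# extra care slashes their *recorded* complication rate.
risk = np.where(asthma, risk * 0.2, risk)
complication = rng.random(n) < risk

X = np.column_stack([asthma.astype(float), (age - age.mean()) / age.std()])
model = LogisticRegression().fit(X, complication)

# A negative coefficient means the model "advises" that asthma reduces risk.
print("asthma coefficient:", round(model.coef_[0][0], 2))
print("observed rate, asthma:   ", round(complication[asthma].mean(), 3))
print("observed rate, no asthma:", round(complication[~asthma].mean(), 3))
--- End code ---

A black-box model fit to the same records picks up the same pattern; it just never prints it out for anyone to notice.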
JoeCovenant:
--- Quote from: ckridge on 07 Mar 2018, 10:01 ---... If there hadn’t been an interpretable model, Malioutov cautions, “you could accidentally kill people.”<
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
--- End quote ---
Well, I don’t think there is any question about it.
It can only be attributable to human error.
This sort of thing has cropped up before, and it has always been due to human error.
Thrudd:
There's also the premise that "a little learning is a dangerous thing."
Overlook one little detail, factor, or assumption and things can get dangerous rather quickly.
For example:
Collect all of the pertinent data, as far as you know.
Feed it into a computer.
Run your data analysis.
The answer is 42.
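Thrudd's "overlook one factor" point has a classic worked form: Simpson's paradox. The sketch below uses illustrative numbers in the style of the well-known kidney-stone example (plain Python, nothing beyond the standard library). Pooled, treatment B looks better; split by severity, A is better in every group, because A was mostly given to the severe cases that the pooled view ignores.

--- Code: ---
# Simpson's paradox in miniature: omit "severity" from the pertinent data
# and the analysis confidently picks the worse treatment. Numbers are illustrative.
groups = {
    # (treatment, severity): (recovered, total)
    ("A", "mild"):   (81, 87),
    ("A", "severe"): (192, 263),
    ("B", "mild"):   (234, 270),
    ("B", "severe"): (55, 80),
}

def rate(pairs):
    recovered = sum(r for r, t in pairs)
    total = sum(t for r, t in pairs)
    return recovered / total

for treatment in ("A", "B"):
    overall = rate([v for (t, s), v in groups.items() if t == treatment])
    print(f"{treatment} overall: {overall:.0%}")          # B "wins" pooled
    for severity in ("mild", "severe"):
        r, t = groups[(treatment, severity)]
        print(f"  {treatment} {severity}: {r / t:.0%}")   # A wins in each group
--- End code ---

Pooled, B recovers 83% to A's 78%; within each severity group A comes out ahead (93% vs 87% mild, 73% vs 69% severe).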
SpanielBear:
--- Quote from: Thrudd on 08 Mar 2018, 06:45 ---There's also the premise that "a little learning is a dangerous thing."
Overlook one little detail, factor, or assumption and things can get dangerous rather quickly.
For example:
Collect all of the pertinent data, as far as you know.
Feed it into a computer.
Run your data analysis.
The answer is 42.
--- End quote ---
"What do you get if you multiply seven by nine?"
In other words, if you ask a computer a question, make sure you asked the right one.
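For what it's worth, the long-standing fan reading of that gag is that 6 × 9 = 54, which is written "42" in base 13: the computer's arithmetic was fine, and it was the assumptions baked into the question that were off. A quick check (Python assumed):

--- Code: ---
# 6 * 9 is 54 in base 10, which reads as "42" only if you assume base 13.
print(6 * 9)                          # 54
print(int("42", base=13))             # 54 -- "42" parsed as a base-13 numeral
print(6 * 9 == int("42", base=13))    # True: right answer, wrong question
--- End code ---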