Pet theory on the origins of AI
SmilingCat:
--- Quote from: jwhouk on 14 Mar 2018, 21:34 ---1506 - "Irreconcilable Differences"
--- End quote ---
Thank you very much, trying to find that strip has been burrowing a headache into my brain.
Storel:
--- Quote from: jwhouk on 14 Mar 2018, 21:34 ---
--- Quote from: SmilingCat on 14 Mar 2018, 20:20 ---Though John Ellicot Chatham was involved and was pretty evasive about certain human/computer interactions... :wink:
(now if only I could find the comic I'm referencing)
--- End quote ---
1506 - "Irreconcilable Differences"
--- End quote ---
Yes, in my headcanon the "beige box" story is the official version that Hanners and Winslow were describing, and the real story is more like what Pintsize and Hannerdad were implying...
Morituri:
I think you don't get "consciousness" until it emerges as the best strategy for solving a more basic problem. That seems obvious once you consider evolution, but we don't think about it in the context of consciousness very much. In biological evolution, survival means staying fed and not being eaten, getting enough to drink without drowning, staying warm enough without burning, and eventually bringing forth the next generation. And these, ultimately, are all things that every animal, from paramecia on up, relies on sensory input and physical movement to do.
Consciousness in humans is solely an optimization of that process. Your brain is what transforms sensory input into physical muscle control in the service of survival, and as far as evolution is concerned, that is its only purpose. Consciousness is a side effect. The fact that consciousness - processing so complex that it has to take into account not only the body and its state but also the processing itself - is part of the most efficient muscle-control strategy so far discovered is pretty damned remarkable.
Of course we don't think of it as muscle control any more when the objectives are communicating information, controlling machinery, constructing devices, etc. But all of that, every bit of the "meta" activity we do, is leverage: ever greater return for ever less physical exertion of our bodies. You could say that the control strategy that involves consciousness - and in our case also sapience - is doing pretty well.
And this leads up to the question: what drives consciousness in an AI? What kind of problem can we give an evolutionary algorithm - what can we make the fundamental necessity, the drive, for an AI - such that developing sentience, even human-style sapience, is part of the most efficient solution to that problem? Something for which consciousness will be discovered as a necessary part of the optimization?
Keeping in mind that the problem of biological life - stay fed, have babies, etc., in an unpredictable and competitive world - has millions of working solutions (i.e., evolved species), only one of which involves anything like human-style sapience. It's one of the most complex problems we know about, if not the most complex, and in hundreds of millions of years of history our species is still the only one in millions to develop our kind of sapience. If we pose our hypothetical evolutionary system a problem only as complex as the one that produced us, we have maybe a one-in-a-million shot of producing something like our own intelligence. This particular strategy for dealing with *that* problem is so rare that it qualifies as BIZARRE!
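To make "posing a problem to an evolutionary algorithm" concrete, here's a minimal sketch of the kind of loop being described. Everything here is hypothetical illustration, not anything from the comic or this thread: the toy fitness function (counting 1s, standing in for "stay fed") is exactly the part the post argues is the hard, open question.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100, mut_rate=0.05):
    """Minimal evolutionary loop. The loop itself is trivial; the open
    question in the post is what fitness function you pose."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]               # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)         # crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut_rate)  # mutation
                     for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy "problem": fitness = number of 1s in the genome. A drive rich
# enough to make sapience the winning strategy would need a vastly more
# complex, competitive environment than this.
best = evolve(fitness=sum)
```

The point of the sketch is that nothing in the loop mentions consciousness; whatever emerges is entirely a function of the problem you pose.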
Morituri:
All that said, I think consciousness is something we have a handle on at this point. I have a whole shelf of books on neuroanatomy, a hard drive full of high-resolution brain scans, and a pile of journal papers explaining various aspects of human consciousness and experience that are finally getting down to actual brain anatomy and actual signal processing at the neuron level.
Modeling the activity of the human brain in real time is still a few years off in terms of computing power, but of course all the mad scientists are trying to figure out whether, where, and how much cheating we can get away with to make the problem into one we can handle with a bit less hardware than that.
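For a sense of why real-time brain modeling is still compute-bound, here's a back-of-envelope estimate using commonly cited neuroscience figures (these numbers are my assumptions, not from the thread, and each is only order-of-magnitude):

```python
# Rough, commonly cited figures - all order-of-magnitude assumptions:
NEURONS = 8.6e10            # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # average synapses per neuron
SPIKE_RATE_HZ = 10          # rough mean firing rate
OPS_PER_SYNAPSE_EVENT = 1   # one multiply-accumulate per spike per synapse

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * SPIKE_RATE_HZ * OPS_PER_SYNAPSE_EVENT)
print(f"~{ops_per_second:.0e} synaptic ops/s")  # prints ~9e+15 synaptic ops/s
```

That lands around 10^16 operations per second for the crudest point-neuron model, which is exactly why the "how much cheating can we get away with" question matters: richer neuron models multiply that figure, while simplifications divide it.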
I have little doubt that, humans being what they are, the very first thing that humans will do when presented with a new class of exploitable intelligences, will be to force a bunch of them into slavery and prostitution. It's what we've always done to each other, after all.