On AI Identity in the QC-verse
awgiedawgie:
--- Quote from: Is it cold in here? on 30 May 2018, 16:28 ---It is possible that any process humans can understand would be too limited to be able to create AIs.
--- End quote ---
To be fair, the AIs don’t understand the process any better than humans. All anyone knows for sure is that the process works, and that it is repeatable. There may be one or two exceptionally gifted humans who do understand it, but they can’t dumb it down enough to explain it to normal people.
Dandi Andi:
No conscious system could ever understand itself fully. Understanding how every part works together would require at least as many parts to construct the model: a neuron for every neuron. The best we can hope for is to build functional models of the component parts, simplify those models so they accurately reflect inputs and outputs, then stitch the simplified models together into a model of the whole. The result is a slightly imperfect patchwork model, open to surprising deviations in unexpected situations. It's less than ideal, but it works. Usually.
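A toy illustration of what I mean by the patchwork (the components here are invented, not anything specific):

--- Code: ---
# Each component is replaced by a simplified input/output model, then the
# simplified models are stitched together into a model of the whole system.
def sensor(x):
    # simplified component model: perfectly linear response
    return 2.0 * x

def amplifier(x):
    # simplified component model: fixed gain, ignoring saturation
    return 10.0 * x

def whole_system(x):
    # stitch the simplified component models together
    return amplifier(sensor(x))

print(whole_system(0.1))  # 2.0 -- matches the real system in its normal range
print(whole_system(100))  # 2000.0 -- but a real amplifier would have
                          # saturated; the patchwork quietly deviates outside
                          # the range each simplification was built for
--- End code ---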
Emergent systems can defy even this kind of patchwork model. Schooling in fish is an emergent property of very simple, seemingly unrelated behaviors of individual fish. The behavior can be as simple as "stay a certain distance from other fish." When enough fish gather, each individually following that single rule, a teeming, swirling school forms, with coordinated behavior that makes it highly resistant to predation. No single fish is deliberately acting to give rise to schooling, and nothing about any individual fish's behavior would suggest schooling to an outside observer. The only way to get from one model to the other is to get a bunch of fish together and see what happens.
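You can watch this happen in a stripped-down cousin of the classic "boids" model. Every number below is made up; the point is only that one local rule produces a global pattern that the rule never mentions:

--- Code: ---
import random

N_FISH = 50
PREFERRED = 1.0   # the distance each fish tries to keep from the others
STEP = 0.05

fish = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N_FISH)]

def update(fish):
    moved = []
    for i, (x, y) in enumerate(fish):
        dx = dy = 0.0
        for j, (ox, oy) in enumerate(fish):
            if i == j:
                continue
            ex, ey = ox - x, oy - y
            dist = (ex * ex + ey * ey) ** 0.5 or 1e-9
            # the one rule: drift toward fish that are too far away,
            # away from fish that are too close
            pull = (dist - PREFERRED) / dist
            dx += ex * pull
            dy += ey * pull
        moved.append([x + STEP * dx / (N_FISH - 1),
                      y + STEP * dy / (N_FISH - 1)])
    return moved

for _ in range(200):
    fish = update(fish)
# by now the scattered fish have pulled into one compact, stable cluster --
# "schooling" that appears nowhere in the rule any individual fish follows
--- End code ---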
I suspect AI consciousness is similar. No matter how well we understand the constituent parts, we can't ever fully understand how they give rise to emergent consciousness because we do not and cannot have the computational power needed to make a working model of how those parts work together.
mgrayson3:
It was once thought provably impossible that we could ever know what a star was made of. It was also said that quantum mechanics was one of the few endeavors that could never be used for harm. (The latter is from Hardy's "A Mathematician's Apology.")
Negative predictions of scientific accomplishments have a very bad track record.
And then there's
"But that's theoretically impossible."
"Perhaps they have different theories."
Morituri:
Uh, I know this is tantamount to heresy, and people can just redefine consciousness until they can claim it isn't true again, but... we actually *do* have a pretty good understanding of what consciousness is and how it happens these days.
People have been studying neurology, neuroanatomy, etc. on the one side, and researchers in AI have been building things and making discoveries about what those neural structures mean on the other. At this point we know enough that, given sufficiently powerful hardware, we could build not just a simulation of a brain, but engineered neural structures and topologies that would also experience consciousness as we normally understand it.
Many systems - biological or computational - are fully capable of having complete knowledge of themselves. It's just that at some point the knowledge reads, "this structure that's just been described is repeated at foo density in bar volume, with variations on the following schema..." and then proceeds to describe millions of structures with data representable in far less space than the structures themselves.
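To make that concrete with an obviously made-up toy (the names and numbers are invented, not real neuroanatomy):

--- Code: ---
# A schema a few hundred bytes long that stands for a million structures.
schema = {
    "unit": "column",
    "pattern": "hexagonal",
    "density_per_mm2": 100,     # made-up figure, purely illustrative
    "region_area_mm2": 10_000,
    "variations": {"V1": {"layers": 6}, "M1": {"layers": 6}},
}

def expand(schema):
    """Generate the (vastly larger) structure the schema describes."""
    count = schema["density_per_mm2"] * schema["region_area_mm2"]
    for i in range(count):
        yield (schema["unit"], i)   # each item stands in for one column

print(sum(1 for _ in expand(schema)), "structures from one small schema")
--- End code ---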
And this is very true of the brain. We have cortical columns that repeat in a hexagonal pattern, with variations depending on the region of the brain, for example. We know what cortical columns do and how they pass signals to one another. We know how those signals interact to make propagating patterns (they used to be called "brain waves"), how those patterns carry and are changed by incoming sensory information, and how memory encoded in the synapses influences the way they propagate. And we know how the interactions among those continually modified patterns in a normal brain eventually lead (through a process we experience as a "decision") to nerve impulses that drive the voluntary muscles.
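A cartoon of just the propagation part, as one might sketch it - standard toy-model stuff, nothing specific to real cortical wiring:

--- Code: ---
# A chain of simplified "columns": each unit fires when its input crosses a
# threshold, and the synaptic weights -- the stand-in for memory here --
# shape how the wave of activity travels.
N = 10
THRESHOLD = 0.5
weights = [1.0] * (N - 1)   # synapse strengths between neighbouring units
weights[6] = 0.3            # one weakened synapse, "memory" altering the path

activity = [0.0] * N
activity[0] = 1.0           # an incoming sensory signal at one end

for t in range(N):
    nxt = [0.0] * N
    for i in range(N - 1):
        # a unit that fired passes a weighted signal to its neighbour
        if activity[i] >= THRESHOLD:
            nxt[i + 1] = weights[i]
    activity = nxt
    print("".join("#" if a >= THRESHOLD else "." for a in activity))
# the wave propagates along the chain until it hits the weak synapse and dies
--- End code ---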
I'm oversimplifying drastically, obviously. There's a hell of a lot more going on. But the fundamentals, the core process - yes, we do understand it. And I know this is true because of my job. Where do you think we got convolutional neural networks? They're a simplification of one of the primary structural patterns in the visual cortex, and we've not only figured out that the structure works, but *why* it works, why each part is needed, and why the parts are connected the way they are, to the point that we can now build custom convolutional networks suited to different tasks.
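For anyone curious, here's roughly what a single convolutional operation looks like in plain Python/numpy - a textbook toy, not anything from my actual work:

--- Code: ---
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # every output unit applies the SAME small filter to its own
            # patch -- the weight sharing that makes it "convolutional",
            # like simple cells each watching a small piece of the retina
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# a tiny "image": dark on the left, bright on the right
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# a vertical-edge detector
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

print(conv2d(image, kernel))   # strong response only at the edge column
--- End code ---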
With regard to consciousness, we hardcore AI people are now in the position of a seventeenth-century scientist who has figured out transistors. She has nothing yet available in her civilization that would allow her to manufacture any. But she knows what technological developments will be required to enable it.
This is a substantial part of why the conversations on AI have turned serious in the last few years. This is no longer a fantasy that we can safely ignore.
There have always been people saying that AIs will be monsters or companions or helpers or saviors - look at any number of bad science fiction films from the 50s to today. But those were the inventions of people with no technical understanding of the process, spun mostly for drama rather than for any real understanding or any actual planning of the future course of civilization. And the subject got left to the entertainers rather than occupying the time of sober, serious people, because nobody took the conversation seriously. After all, "we could never build something like that."
They're taking the conversation seriously now, because we actually understand the parts and how they work together, and people who used to know that "we could never build something like that" are having to come to grips with "holy shit, this is actually going to happen."
People with technological understanding have been pointing out what they can do - and given the resources, we can do pretty much the whole "consciousness" thing now. And sober, serious people who understand how those resources get mobilized and why, and the social implications of that process for what AI will actually get built - everybody from Elon Musk to Stephen Hawking - have been saying "wait, we need to figure out what we're going to do with AI, or more importantly what it's going to do with us, before we go much further...."
awgiedawgie:
That's all well and good in the real world. Alan Turing (the mathematician, not Dora's old classmate) pioneered artificial intelligence over 60 years ago, and scientists have been fascinated by the possibilities ever since. But - and mind you, it is all quite fascinating - it is sort of irrelevant here.
In the QCverse, however, I can't recall anyone saying they finally understand how it happens. They've said they know it works, and that they can repeat the process, but not why it works. We still don't know which parts of a QC AI's behaviour - such as Roko's psychosomatic fainting at the sight of her internal parts - are actually hard-coded, and which parts develop through recursive subroutines or some other internal process. Roko doesn't understand why she faints at the sight of her own parts, but then again, does the average human understand why some people faint at the sight of their own blood? Or someone else's?