THESE FORUMS NOW CLOSED (read only)


Author Topic: On AI Identity in the QC-verse  (Read 3548 times)

AlliedToasters

  • Not quite a lurker
  • Offline
  • Posts: 11
On AI Identity in the QC-verse
« on: 16 May 2018, 16:40 »

Hello beautiful people,

I recently joined the forum after browsing silently for years ("lurking"). With the eventful week in-comic, I had to sign up.
I wanted to start this discussion, and while I know this has been talked about before, I feel I might be able to contribute some novel information. I'm a data scientist and I work with "artificial intelligence" on a daily basis. We all know, of course, that the "real thing" does not at all resemble the characters we know and love (or whatever feelings we may hold) from QC. However, there are some very general concepts that I know from my professional and academic experience that somewhat influence the way I think about these characters.
I've seen a lot of discussions on QC AI identity, and many people talk about things like "config files" or "core programming" or whatever as possibly pre-defining identity roles for these characters. On the cutting edge of AI research, the very best results come from models known as neural networks. These models are able to do some remarkable things, like discover sentiment in user-generated text by looking only at the ordering of letters in them (https://blog.openai.com/unsupervised-sentiment-neuron/).
Neural networks work by approximating functions, and a "function" can be anything, including human intelligence. Given enough computational power, memory, and training data, arbitrarily complicated functions can be approximated to an arbitrary level of performance. These function approximation machines are initialized with random (range and distribution limited) values (https://arxiv.org/pdf/1704.08863.pdf). They use training data to "learn," adjusting these millions of parameters to approximate the function at hand.
Having worked with so many of these models, I've seen that, even with the same training data and hyperparameters, models will behave differently based solely on these random initializations. It's easy to imagine, then, that the QC-verse AI develop their unique personalities from the same effect: random initialization. Because of this, I argue that AI personalities in these stories are randomly initialized and that there is no "configuration" for identities and roles at all.
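To make the random-initialization point concrete, here's a toy sketch in plain Python (nothing like the comic's tech; the network size, seeds, and learning rate are all just illustrative): two identical networks trained on identical data, differing only in their random starting weights, end up with different learned parameters.

```python
import math, random

def train_xor(seed, steps=4000, lr=0.5):
    """Train a tiny 2-4-1 network on XOR, starting from a given random init."""
    rnd = random.Random(seed)
    # Random, range-limited initial weights -- the "random initialization".
    W1 = [[rnd.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
    b1 = [0.0] * 4
    W2 = [rnd.uniform(-1, 1) for _ in range(4)]
    b2 = 0.0
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(steps):
        for x, t in data:
            h = [sig(x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j]) for j in range(4)]
            o = sig(sum(h[j] * W2[j] for j in range(4)) + b2)
            do = (o - t) * o * (1 - o)                 # output-layer gradient
            for j in range(4):
                dh = do * W2[j] * h[j] * (1 - h[j])    # hidden-layer gradient
                W2[j] -= lr * do * h[j]
                W1[0][j] -= lr * dh * x[0]
                W1[1][j] -= lr * dh * x[1]
                b1[j] -= lr * dh
            b2 -= lr * do
    predict = lambda x: sig(sum(
        sig(x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j]) * W2[j]
        for j in range(4)) + b2)
    return W1, [round(predict(x)) for x, _ in data]

W_a, preds_a = train_xor(seed=0)
W_b, preds_b = train_xor(seed=1)
print(preds_a, preds_b)   # both runs typically learn XOR's [0, 1, 1, 0]
print(W_a == W_b)         # but the learned weights differ: False
```

Same data, same hyperparameters, same training loop: the only thing that differs between the two runs is the seed, and that alone is enough to give each "individual" a distinct set of parameters.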
Furthermore, neural networks work, in a sense, by "compressing" the data they learn from. If we assume QC-verse AI continually learn throughout their lives, we can expect that the social behaviors they observe will start to be reflected in their own behaviors.
Of course, the AI characters in QC possess emergent intelligence and perhaps comparing them to such "primitive" algorithms would be offensive; in any case, my knowledge of AI influences the way I see these characters and I want to share my thoughts with you all. That's what forums are all about, after all!
« Last Edit: 16 May 2018, 16:47 by AlliedToasters »
Logged
crummy and proud

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: On AI Identity in the QC-verse
« Reply #1 on: 16 May 2018, 20:39 »

Right.  I work in the same field, and I can't emphasize it enough.  You absolutely DON'T KNOW what a given run will come up with as its winning strategy.  As often as not, it's something that fulfills the criteria you gave it but does so without actually producing value, and then you have to go back and debug your reference or utility function.

Incidentally.  Welcome! 
Logged

Dandi Andi

  • FIGHT YOU
  • ***
  • Offline
  • Posts: 389
  • They/them she/her. Formerly Pecoros7. Still tired.
Re: On AI Identity in the QC-verse
« Reply #2 on: 16 May 2018, 20:54 »

This may or may not be the same kind of technology you're talking about, but I think I get where you're going (though I am very much a layman in this field, so please forgive my lack of knowledge). The technology used to detect pornographic images for "safe search" filters, for example, is designed to look for patterns in small clusters of pixels. It then feeds that output into a higher level search of larger clusters of pixels, and then into larger clusters, and then larger, etc. It then makes a final determination, based on thousands or millions of these data points, as to whether the image is pornographic.

The initial conditions that the software was looking for are arbitrary and random. After giving it a huge pile of training images, some porn and some not, it takes its success rate, makes small adjustments across all of its search parameters, and tries again. Changes that improve its success rate are kept. Changes that reduce its success rate are abandoned. Eventually, after enough training data and enough tweaks, it becomes very good at identifying pornography.

The catch is that because the system is using millions of data points in ways that were initially random and were changed randomly and that are interacting in arbitrarily complex ways, we don't actually understand exactly how the software is detecting the images. We know how it got there, but we no longer truly understand what it's doing.
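That tweak-and-keep loop can be sketched in a few lines. This is a deliberately crude stand-in (random hill climbing on a toy linear classifier, with made-up data and a made-up hidden rule, not the convolutional pipeline a real safe-search filter uses), but the "keep changes that help, abandon changes that hurt" cycle is the same idea:

```python
import random

random.seed(42)

# Synthetic stand-ins for training images: 2-feature points, labelled 1 when
# they satisfy a hidden rule the "filter" has to discover for itself.
pts = [(random.random(), random.random()) for _ in range(200)]
data = [(p, 1 if p[0] + p[1] > 1 else 0) for p in pts]

def accuracy(w):
    """Fraction of points a simple linear threshold classifies correctly."""
    return sum(((w[0] * x[0] + w[1] * x[1] + w[2]) > 0) == bool(t)
               for x, t in data) / len(data)

w = [random.uniform(-1, 1) for _ in range(3)]      # arbitrary random start
best = accuracy(w)
for _ in range(2000):
    trial = [wi + random.gauss(0, 0.1) for wi in w]   # small random tweak
    score = accuracy(trial)
    if score >= best:             # keep changes that help (or don't hurt)...
        w, best = trial, score    # ...abandon the rest
print(best)   # typically well above 0.9 after training
```

And, as in the post above, the final `w` works without anyone having designed it: you can read the three numbers, but nothing about them explains the rule they encode.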

May once suggested to Momo that if you stuck a bunch of AI on a server, they would probably never think about sex. Give them human-like bodies and send them to walk around with humans in a culture that is constantly talking about sex and those same AI are eventually going to start thinking about sex, too. If the task the AI was initially designed to perform was simulating human intelligence, all human interactions become training data. They are very likely to develop concepts of gender and sexuality as well as things like political opinions or preferences about food or music; all the things that go into our sense of individual identity. Since AI learning is a randomized evolutionary process, every AI is likely going to develop differently, even given exactly the same training data. And none of it would be a simple line in that supposed config file. It would all be an emergent property of millions or billions of "neurons" doing very simple things together creating vastly more complex things that we no longer fully understand.
Logged

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: On AI Identity in the QC-verse
« Reply #3 on: 16 May 2018, 20:55 »

I repeat my welcome, fascinating new person!

Indeed QC AIs seem to emerge from the creche with differing personalities, hence the matching service that pairs companion AIs with people they will be compatible with.

I imagine that Jeph would say nobody knows why; the creation of new AIs seems to be poorly understood even by in-universe people.
Logged
Thank you, Dr. Karikó.

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: On AI Identity in the QC-verse
« Reply #4 on: 16 May 2018, 21:16 »

Human paraphilias are often mystifying and impossible to understand.

Roko's problems over her attachment to bread would seem to indicate that it's not something she is really in control of or can just stop having/doing.  As to how it came about?  Well, that's mystifying and impossible to understand. 

But sometimes stuff happens.  Something emerges as a characteristic, quirky response to a "non-critical" input while you're training the system to give correct responses to inputs you actually care about, and you'll never know why.  Say that sometime in the middle of studying criminal psychology, she learns about the way criminals regret missed opportunities and how it differs from the way ordinary people do, and by the time she's got it down, she doesn't know it yet, but there are unexpected feelings about bread.  It would be analogous to some of the things I've seen. 
Logged

BenRG

  • coprophage
  • *****
  • Offline
  • Posts: 7,861
  • Boldly Going From The Back Seat!
Re: On AI Identity in the QC-verse
« Reply #5 on: 17 May 2018, 00:49 »

The fact that Roko said her phobic aversion to seeing her subdermal anatomy was 'psychosomatic' suggests that the AIs in the QC universe have a lot of 'reflexive' behaviours, hard-coded during the emergence of their intelligence/personality, for which they can see no logical purpose and which they can't control or resist. That intrinsic imperfection is part of what makes them so human as characters, IMO.
Logged
~~~~

They call me BenRG... But I don't know why!

Thrudd

  • Scrabble hacker
  • *****
  • Offline
  • Posts: 1,271
  • Sucess Redefined
Re: On AI Identity in the QC-verse
« Reply #6 on: 17 May 2018, 06:29 »

As a lay lay lay type person, I would not go so far as to say hard-coded, so much as a useful core subroutine that developed unexpected results because it was developed from first principles and lacks filters against unexpected input parameters. Sort of like a stack overflow that never happened during development, yet when the software is presented with input in the field that wasn't predicted, something unexpected/exciting/terrifying happens.
In my area of expertise we call it "the user is an idiot". For example, shoving a PB&J into the VCR because it looks kind of like a cassette and will fit into the slot, and then asking if a grilled cheese would have been a better choice.  :facepalm:
Logged
A good pun is its own reword.
There is a difference between spare parts, extra parts and left over parts.

The Venn diagram  for Common Sense and Good Sense has very little, if any, overlap.

AlliedToasters

  • Not quite a lurker
  • Offline
  • Posts: 11
Re: On AI Identity in the QC-verse
« Reply #7 on: 17 May 2018, 19:04 »

Hey, sounds like a lot of us are in agreement. And, thanks for the many warm welcomes! The truth is that, as pointed out by many, the source of AI sentience in QC isn’t fully understood. Also, since the singularity happened, by definition post-singularity AI would be designed by AI and beyond our capacity to understand. So, comparing these AI to today’s “hot” machine learning algorithms probably does little to help define these characters.
I guess my whole point is that every reader’s experience is different and I wanted to share mine and some of the things from my life that shape that experience.
Here’s to more rich discussions to come!
Logged
crummy and proud

A small perverse otter

  • Bizarre cantaloupe phobia
  • **
  • Offline
  • Posts: 219
  • Staying well enhydrated
Re: On AI Identity in the QC-verse
« Reply #8 on: 30 May 2018, 16:09 »

Heyh. I, too, am a 'data scientist' in my daily work.  Back in the day, though, I did work on "real modeling" of neural systems -- yes, it's an oxymoron, but work with me here. One of the things that crops up really fast when you start talking about the wetware instead of the siliconware is that only a tiny, tiny fraction of your brain actually participates directly in metacognition, and that part isn't actually terribly well suited to it. Most of the metabolic activity of your brain goes into doing stuff which has nothing whatsoever to do with 'thinking', but rather with keeping the thinking part alive. (And, as an aside, that brain? It's really, really expensive. During normal function, the human brain consumes approximately 20% of total metabolic activity. When you suffer severe injury, and particularly when you are dying, the rest of the body will quite literally kill itself to protect that brain.)

That makes the whole "Gosh, the AIs just woke up one day and started asking for champagne" story in QC all the more likely. Metacognition? Meh. Small potatoes compared to the important stuff.
« Last Edit: 30 May 2018, 18:44 by A small perverse otter »
Logged
"AGH! Humans are so STUPID sometimes!" -- QC #3668

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: On AI Identity in the QC-verse
« Reply #9 on: 30 May 2018, 16:28 »

It is possible that any process humans can understand would be too limited to be able to create AIs.
Logged
Thank you, Dr. Karikó.

awgiedawgie

  • FIGHT YOU
  • ***
  • Offline
  • Posts: 411
  • DON'T PANIC
Re: On AI Identity in the QC-verse
« Reply #10 on: 30 May 2018, 17:18 »

It is possible that any process humans can understand would be too limited to be able to create AIs.
To be fair, the AIs don’t understand the process any better than humans. All anyone knows for sure is that the process works, and that it is repeatable. There may be one or two exceptionally gifted humans who do understand it, but they can’t dumb it down enough to explain it to normal people.
Logged
When, in the course of human events,
You can keep your head when all about you
Took the one less traveled by,
It's up to you to cremate those last remains.

Dandi Andi

  • FIGHT YOU
  • ***
  • Offline
  • Posts: 389
  • They/them she/her. Formerly Pecoros7. Still tired.
Re: On AI Identity in the QC-verse
« Reply #11 on: 30 May 2018, 18:11 »

No conscious system could ever understand itself fully. To understand how each part works together requires at least as many parts to construct the model; a neuron for every neuron. The best we could hope for is to construct a functional model of component parts, simplify those models to accurately reflect inputs and outputs, then stitch the models together to create a simplified model of the whole. This results in a slightly imperfect patchwork model that leaves itself open to surprising deviation in unexpected situations. It's less than ideal, but it works. Usually.

Emergent systems can defy even this kind of patchwork model. Schooling behavior in fish is an emergent property of very simple, and seemingly unrelated, behaviors of individual fish. That behavior can be as simple as "stay a certain distance from other fish." When enough fish gather together, each individually following that single rule, a teeming, swirling school forms, with collective behavior that makes it highly resistant to predation. Not a single fish is behaving in a way to deliberately give rise to school behavior, and nothing about any fish's behavior would suggest schooling to an outside observer. The only way to get from one model to the other is to get a bunch of fish together and see what happens.
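The fish example can even be simulated directly. A minimal sketch (1-D "fish", and the spacing rule and every constant here are invented for illustration): each fish only tries to keep a preferred distance from its nearest neighbour, yet the group's spacing organizes itself into something no single fish's rule describes.

```python
import random

random.seed(1)
TARGET = 1.0                                              # preferred spacing
positions = [random.uniform(0, 100) for _ in range(30)]   # scattered 1-D fish

def nearest(pos, i):
    """Position of fish i's nearest neighbour."""
    p = pos[i]
    return min((q for j, q in enumerate(pos) if j != i), key=lambda q: abs(q - p))

def step(pos):
    new = []
    for i, p in enumerate(pos):
        n = nearest(pos, i)
        gap = abs(n - p)
        direction = 1 if n > p else -1
        # Too far from the neighbour: drift toward it. Too close: back away.
        new.append(p + 0.2 * direction * (gap - TARGET))
    return new

def spacing_error(pos):
    """Mean deviation of nearest-neighbour gaps from the preferred spacing."""
    return sum(abs(abs(nearest(pos, i) - p) - TARGET)
               for i, p in enumerate(pos)) / len(pos)

err_before = spacing_error(positions)
for _ in range(200):
    positions = step(positions)
err_after = spacing_error(positions)
print(err_after < err_before)   # gaps settle toward the preferred spacing: True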

I suspect AI consciousness is similar. No matter how well we understand the constituent parts, we can't ever fully understand how they give rise to emergent consciousness because we do not and cannot have the computational power needed to make a working model of how those parts work together.
Logged

mgrayson3

  • Not quite a lurker
  • Offline
  • Posts: 17
Re: On AI Identity in the QC-verse
« Reply #12 on: 31 May 2018, 09:17 »

It was once thought provable that we could never know what a star was made of. It was also said that Quantum Mechanics was one of the few endeavors that could never be used for harm. (The latter is from Hardy's "A Mathematician's Apology".)

Negative predictions of scientific accomplishments have a very bad track record.

And then there's
"But that's theoretically impossible."
"Perhaps they have different theories."
Logged
"If your love life requires close air support, something has gone very wrong." QC#3670

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: On AI Identity in the QC-verse
« Reply #13 on: 31 May 2018, 10:31 »

Uh, I know this is tantamount to heresy, and people can just redefine consciousness until they can claim it isn't true again, but...  we actually *do* have a pretty good understanding of what consciousness is and how it happens these days. 

People have been studying neurology, neuroanatomy, etc on the one side, and researchers in AI have been building things and making discoveries about what those neural structures mean, on the other.  At this point, we do in fact know enough that given sufficiently powerful hardware we could build not just a simulation of a brain, but engineered neural structures and topologies that would also experience consciousness as we normally understand it.

Many systems - biological or computer - are fully capable of having a complete knowledge of themselves.  It's just that, at some point, the knowledge is that, "This structure that's just been described is repeated at foo density in bar volume, with variations on the following schema..." and then proceeds to describe millions of structures with data that's representable in far less space than those structures.

And this is very true of the brain.  We have cortical columns that repeat in a hexagonal pattern, with variations depending on what region of the brain we're talking about, for example.  We know what cortical columns do, how they pass signals to one another, and how those signals interact to make propagating patterns (they used to call them "brain waves").  We know how those propagating patterns carry and are changed by incoming sensory information, how memory encoded in the synapses influences the way they propagate, and how the interactions between those continually modified patterns in a normal brain lead eventually (through a process we experience as a "decision") to nerve impulses that drive movements of voluntary muscles.

I'm oversimplifying drastically, obviously.  There's a hell of a lot more going on.  But, the fundamentals, the core process, yes we do understand it.  And, I know this is true because of my job.  Where do you think we got convolutional neural networks?  It's a simplification of one of the primary patterns of structure that appears in the visual cortex, and we've not only figured out that it works, but also *why* it works, and why each part of the structure is needed, and why they're connected the way they are, to the point that we can now build custom convolutional networks suited to different tasks.
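For readers who haven't met convolutions: the core operation is tiny. A hand-built sketch (the 1-D "image" and the kernel are both invented for illustration) of the filter-sliding step that convolutional networks stack and learn, here with an edge-detecting kernel that fires exactly where the input changes intensity:

```python
def conv1d(signal, kernel):
    """Slide a small kernel across a 1-D signal (no padding, stride 1)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

image = [0, 0, 0, 5, 5, 5, 0, 0]     # a bright band in a dark row
edge_kernel = [-1, 1]                # responds only to local change
print(conv1d(image, edge_kernel))    # [0, 0, 5, 0, 0, -5, 0]
```

In a convolutional network the kernel values aren't hand-picked like this; they're learned, and many such filters are stacked in layers, echoing the repeated local receptive fields of the visual cortex.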

With regard to consciousness, we hardcore AI people are now in the position of a seventeenth-century scientist who has figured out transistors.  She has nothing yet available in her civilization that would allow her to manufacture any.  But she knows what technological developments will be required to enable it.

This is a substantial part of why the conversations on AI have turned serious in the last few years.  This is no longer a fantasy that we can safely ignore. 

There have always been some people saying that AIs will be monsters or companions or helpers or saviors - look at any number of bad science fiction films from the 50s to today.  But those were the inventions of people who had no technical understanding of the process, mostly spun for drama rather than for any real understanding or any actual planning of the future course of civilization.  And people left it to the entertainers and didn't occupy the time of sober, serious people with it, because nobody took that conversation seriously.  After all, "we could never build something like that." 

They're taking the conversation seriously now, because we now actually understand the parts and how they work together, and people who used to know that "we could never build something like that" are now having to come to grips with "holy shit this is actually going to happen". 

People with technological understanding have been pointing out what they can do, and we can do pretty much the whole 'consciousness' thing now, given the resources.  And sober, serious people who understand how those resources get mobilized and why, and the social implications of that process for what AI will actually get built (everybody from Elon Musk to Stephen Hawking) have been saying 'wait, we need to figure out what we're going to do with AI, or more importantly what it's going to do with us, before we go much further....' 
Logged

awgiedawgie

  • FIGHT YOU
  • ***
  • Offline
  • Posts: 411
  • DON'T PANIC
Re: On AI Identity in the QC-verse
« Reply #14 on: 31 May 2018, 20:37 »

That's all well and good in the real world. Alan Turing (the mathematician, not Dora's old classmate) pioneered Artificial Intelligence over 60 years ago, and scientists have been fascinated by the possibilities ever since. But - and mind you, it is all quite fascinating - it is sort of irrelevant.

In the QCverse, however, I can't recall anyone saying that they finally understand how it happens. They have said that they know it works, and they can repeat the process, but they don't know why it works. We still don't know what parts of the QC AI's behaviour, such as Roko's psychosomatic fainting at the sight of her internal parts, are actually hard-coded, and what parts are developed through recursive subroutines or some other internal process. Roko doesn't understand why she faints at the sight of her own parts, but then again, does the average human understand why some people faint at the sight of their own blood? Or someone else's blood?
Logged
When, in the course of human events,
You can keep your head when all about you
Took the one less traveled by,
It's up to you to cremate those last remains.