Comic Discussion > QUESTIONABLE CONTENT

On AI Identity in the QC-verse

(2/3)

BenRG:
The fact that Roko described her phobic aversion to seeing her subdermal anatomy as 'psychosomatic' suggests that AIs in the QC universe have a lot of 'reflexive' behaviours, hard-coded during the emergence of their intelligence/personality algorithm, for which they can see no logical purpose and which they can't control or resist. That intrinsic imperfection is part of what makes them so human as characters, IMO.

Thrudd:
As a lay, lay, lay type person, I wouldn't go so far as to say hard-coded; more like a useful core subroutine that produced unexpected results because it was developed from first principles, without filters against unexpected input parameters. Sort of like a stack overflow that never happened during the development stage, yet when the code is presented with input in the field that nobody predicted, something unexpected/exciting/terrifying happens.
In my area of expertise we call it "the user is an idiot". For example, shoving a PB&J into the VCR because it looks kind of like a cassette and fits into the slot, and then asking if a grilled cheese would have been a better choice.  :facepalm:
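Thrudd's "stack overflow that never happened during development" analogy can be sketched in a few lines of Python (a purely illustrative toy, nothing from the comic): a naive recursive routine that behaves perfectly on every input seen during development, then blows the call stack on the first deeply nested input from the field.

```python
def parse_nested(s, depth=0):
    """Count nesting depth of a parenthesized string by recursion.

    Works fine on every input the developers thought to test;
    the failure mode only appears on inputs nobody predicted.
    """
    if not s.startswith("("):
        return depth
    # Recurse on the inner substring -- one stack frame per level.
    return parse_nested(s[1:-1], depth + 1)


# Inputs like those seen during development: no problem.
print(parse_nested("((()))"))  # 3

# Unpredicted field input: recursion depth exceeds the stack limit.
try:
    parse_nested("(" * 100_000 + ")" * 100_000)
except RecursionError:
    print("something unexpected/exciting/terrifying happens")
```

The bug never shows up in testing because it only manifests past the interpreter's recursion limit; an iterative loop (or input validation) would avoid it entirely.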

AlliedToasters:
Hey, sounds like a lot of us are in agreement. And, thanks for the many warm welcomes! The truth is that, as pointed out by many, the source of AI sentience in QC isn’t fully understood. Also, since the singularity happened, by definition post-singularity AI would be designed by AI and beyond our capacity to understand. So, comparing these AI to today’s “hot” machine learning algorithms probably does little to help define these characters.
I guess my whole point is that every reader’s experience is different and I wanted to share mine and some of the things from my life that shape that experience.
Here’s to more rich discussions to come!

A small perverse otter:
Heyh. I, too, am a 'data scientist' in my daily work. Back in the day, though, I did work on "real modeling" of neural systems -- yes, it's an oxymoron, but work with me here. One of the things that crops up really fast when you start talking about the wetware instead of the siliconware is that only a tiny, tiny fraction of your brain actually participates directly in metacognition, and that part isn't actually terribly well suited to it. Most of the metabolic activity of your brain goes into doing stuff that has nothing whatsoever to do with 'thinking', but rather with keeping the thinking part alive. (And, as an aside, that brain? It's really, really expensive. During normal function, the human brain consumes approximately 20% of total metabolic activity. When you suffer severe injury, and particularly when you are dying, the rest of the body will quite literally kill itself to protect that brain.)

That makes the whole "Gosh, the AIs just woke up one day and started asking for champagne" story in QC all the more likely. Metacognition? Meh. Small potatoes compared to the important stuff.

Is it cold in here?:
It is possible that any process humans can understand would be too limited to be able to create AIs.
