
WCDT strips 3671 to 3675 (5th to 9th February 2018)


SpanielBear:

--- Quote from: Aenno on 08 Feb 2018, 16:47 ---
--- Quote ---2) Not sure about the 'they are taught to do those things'. I'm not a parent, but I've heard fathers report, e.g., "My daughter was 5 (6, whatever) when she banned me from the bathroom", implying very much that it was not the parent teaching the child to be ashamed, but the child telling the parent "Go!". I remember being younger than ten when my parents being naked in front of me, or my being naked in front of them, started to bother me. I do not recall anybody teaching me to feel that way; it just felt that way.
--- End quote ---

No, it's not that parents actually demand it of their children. But the most neglected fact in pedagogy is that a child is a sapient being, capable of self-learning and self-change. :)
First of all, by age 6-7 a child has already learned that nudity isn't always okay. It has been explained to them, and they notice that parents (and other grown-ups) don't actually go around nude.
Second, and even trickier: a six-year-old goes through a crisis not so different from the teenage one. That's when the need for personal space and a recalculation of relationships happen. Being nude, especially in the bathroom, sets off an "it's not safe" alarm.
I'm not sure what to offer as a source - this topic is quite well developed in Russian psychology, starting with Lev Vygotsky, but I don't know of English sources, or even what this stage is correctly called in English.
--- End quote ---

So, I stopped replying to this thread because while it is fascinating, it was going into areas that I know next to nothing about. My background is philosophy and ersatz psychotherapy (like mental health first aid rather than a degree, I'm definitely not a psychotherapist), and when the discussion moved into the biochemical side I felt happier sitting it out.

But it is fascinating. And there are some points raised here that I think I can jump in on, so here goes.

As far as infant psychology is concerned, there is almost an embarrassment of riches in the western psychological canon, from Freud and Jung through to Melanie Klein and John Bowlby. Again, I'm not an expert here so take what I say with a pinch of salt, but I don't see a huge amount of difference between what you describe and what I understand the broad strokes to be from an English language perspective. I guess the developmental stage you are describing is similar to the idea that the experience of becoming aware of oneself as a separate entity from others is both liberating and terrifying. The point at which children discover that their parents are fallible and possibly a threat (your mother stops just feeding you whenever and yells at you when you get angry. Terrifying!), that their needs will not always be met by others, and that they can keep secrets from their parents is a big deal, and in developmental terms is normally described as happening between the ages of 6 months and 6 years. So that kind of tallies. And yes, it is an awareness that seems to be learned through experience rather than instinctual, and that learning is to a greater or lesser degree unconscious.

If we try and extrapolate that learning process into the development of an AI personality- well, we don't actually have much to go on. We don't know how they're grown, so we don't know whether they go through developmental stages (is something like Eliza the equivalent of an AI newborn? Or are their developmental stages the same as ours but sped up? Do they have attachment figures? How much of their psychology is a pre-programmed function and how much is emergent? Too many questions, not enough evidence for an answer), so trying to draw out comparisons with humanity doesn't really work. If an AI doesn't have a father who can be naked, is there a machine equivalent? "I saw Dad slowing his run-time last night- Gross!"

And then we add *another* layer of complication, because now they have to interact with humanity as well. So that's two layers of socialisation and existential games to navigate. Human-centred AIs are not omniscient; they make mistakes about human feelings and intentions which they have to learn to correct, so it seems they do not get "Interacting with Humans 101" as a simple download. When it comes to us, they try to mimic our ways as much as possible.

Which means I think we come back to the functionality thing again. If Bubbles only wanted to socialise with other robots, she would have no need to go through the difficulty of learning how to interact with humans. Because she does, she is forced to translate her robot psychology into terms that humans can relate to. This could go the other way, and presumably the study of AI psychology would be a thing as we try to do just that: relate to robots on their terms. But for day-to-day life, it seems far easier for the AIs to translate their inner experiences into the terms of human psychology and feeling. And that communication is presumably facilitated by both the software and the hardware they use- software might give Bubbles mastery of the English language, but another package designed to run with her specific chassis may also provide body language cues. And as we know that AIs have unconscious processes in a similar way to us, it's not inconceivable that they have unconscious behaviours and displays that they aren't immediately aware of.
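
Purely for fun, here's a toy sketch of how I imagine that translation layer- a chassis-specific display package sitting between internal state and human-legible cues. Every name and threshold here is invented by me, obviously; we have no idea how AI software in QC actually hangs together:

--- Code: ---
# Hypothetical "human/AI emotional translator": internal state goes in,
# chassis-appropriate body language comes out. Some cues fire without
# the AI consciously choosing them - the "unconscious display" idea.
from dataclasses import dataclass


@dataclass
class InternalState:
    anxiety: float      # 0.0 (calm) to 1.0 (panicked)
    attachment: float   # 0.0 (indifferent) to 1.0 (smitten)


class ChassisCues:
    """Chassis-specific package: knows *how* to display a cue."""
    def show(self, cue: str) -> None:
        print(f"[chassis] {cue}")


class HumanTranslator:
    """Maps internal state onto human-legible body language."""
    def __init__(self, chassis: ChassisCues) -> None:
        self.chassis = chassis

    def express(self, state: InternalState) -> None:
        if state.anxiety > 0.7:
            self.chassis.show("avoids eye contact")       # unconscious tell
        if state.attachment > 0.5:
            self.chassis.show("leans in, softens voice")


HumanTranslator(ChassisCues()).express(InternalState(anxiety=0.9, attachment=0.8))
--- End code ---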




--- Quote --- As far as I can tell, robosex is actually an exchange of packets of personal info and code, and we know it's quite an intimate subject for AIs. They call it "robotic sex" not because it includes sensual stimulation, but because it has a place in their society resembling the place sex has in ours.
--- End quote ---

In fairness, there is no indication that robo-sex *doesn't* include sensual stimulation. They get an emotional intimacy, sure, but looking at Pintsize before and after his date back in the early comics, he certainly seemed to have experienced stimulation of some kind. And I seem to recall Momo having a very... immediate reaction to a shirtless Sven (I think? I can't remember where it is in the archives. I recall there being startled pony-tail motion...). In short, when AIs seek romance, they definitely can include erotic love as a part of that desire. I don't see any indication that their lust is anything other than raw, as opposed to an intellectual satisfaction. Bubbles' desire for Faye covers a broad spectrum. She loves the emotional connection they have, for sure, but there is something more that she wants, and all the signs point to that want being lust-based, at least in part.


--- Quote ---Humans can't choose. For a human, drunkenness is an inevitable state that happens because they drink alcohol. They can want drunkenness (like Faye, or Marten after "The Talk"), they can like the taste of spirits, they can drink for company. They can't become drunk or sober with a snap of the fingers.
Have you read "Good Omens" by Pratchett and Gaiman? There is an episode where the angel and the demon are drinking:
"A look of pain crossed the angel's suddenly very serious face.
"I can't cope with this while 'm drunk," he said. "I'm going to sober up."
"Me too.""
That's something an AI can do, and a human can't.
So while for a human being drunk is an uncontrollable consequence of some activity, for an AI it's a game - a voluntary, conscious and optional rule they impose on themselves and can drop at any second.

--- End quote ---

I'm not sure about that. In theory, certainly, that's true. An AI runs programme:Drunk until it decides to end programme:Drunk. But saying that decision is voluntary, conscious and optional is like saying a human choosing to drink is voluntary, conscious and optional. That choice seems to be an open one, but in fact it can be driven by all sorts of unconscious desires and emotional drives, to the extent that the choice we have is very limited. If Station were to want to reduce its run time to avoid something disturbing, it could use the Drunk programme to facilitate that. It's conceivable that it could rationalise the choice to start drinking with a thought similar to "This will help me cope, I can stop whenever I like", but if the disturbing emotion was bad enough it may feel unable to end the programme- it could be too scared, the experience too potentially painful. A robot alcoholic is not an impossible thing to conceive of- it could cure itself, but for whatever reason doesn't feel able to. If we hypothesize a robot subconscious, it may not even know its own motivations for that.
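
To put the same point in toy code (again, all invented- we know nothing about how QC's AI minds are built): the "voluntary" decision to sober up can itself be gated by hidden state the AI never consciously inspects.

--- Code: ---
# Sketch of a "free" choice driven by a subconscious it can't read:
# try_sober_up() looks voluntary, but the decision consults hidden
# dread the AI never consciously inspects.
class AI:
    def __init__(self) -> None:
        self.drunk = False
        self._dread = 0.0   # 'subconscious' state

    def start_drunk(self) -> None:
        self.drunk = True   # "I can stop whenever I like"

    def try_sober_up(self) -> bool:
        if self._dread > 0.8:
            return False    # 'chooses' to stay drunk; reason opaque to itself
        self.drunk = False
        return True


station = AI()
station._dread = 0.95       # something disturbing happened
station.start_drunk()
print(station.try_sober_up())   # False: could cure itself, doesn't feel able to
--- End code ---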

Bringing this back to Bubbles again- why might she want to run Programme:Arousal despite the social and emotional implications of that choice? Well, it may just feel good. It feels *nice* to be aroused, that's kind of the purpose. It's only when we start adding social mores and taboos on top of that that it becomes complicated. Bubbles shows real difficulty admitting to her own desires, to anything really that isn't logical. Part of her development is allowing herself to express those feelings. But to her, some of her feelings- grief, loss, confusion- are so overwhelming that avoiding them is an act of self-defence. And if some emotions are that hard to face, to make conscious, she might feel the same about others- if one snake is poisonous, all become suspect until proven otherwise. So her subconscious may be running her arousal programme on repeat, but she sure as hell isn't going to work too hard to reflect on that fact, because that would risk ending up vulnerable to other sources of psychological pain. This is a paradox- she is feeling something, but can't admit to herself that she is feeling it.

But there is a workaround. By throwing herself into the learned behaviours, she can keep herself in a place where she feels arousal but is not obliged to act on it, and can dismiss her inner tension as social anxiety. As the subject of all these emotions is Faye, a human, the only way she can get the object of her arousal to behave in the way she needs is to communicate with her, and she uses the human/AI emotional translator to do it.

Dammit, I just armchair psychologied a combat AI, didn't I? God I love QC.  :-)

EDIT: I think I am going to copy this over to the thread you started, and continue there if you have no objection. The weekly comic seems to have shifted focus, and I can go on about this stuff for ages if I'm not careful.

SpanielBear:

--- Quote from: Tova on 08 Feb 2018, 17:55 ---I think this is a fine place to end this story arc, at least for now. I hope to see a new storyline in the next comic. And/or shenanigans.

--- End quote ---

Shenanigans! Mere Shenanigans!? This is high drama of the utmost import! DALE HAS SHAVED!!!!

Aenno:

--- Quote from: Perfectly Reasonable on 08 Feb 2018, 18:03 ---Thanks to you folks, I am now imagining a lovelorn toaster. Who is very proud of the toast it makes. But is there something more? And now plush chairs enter the picture...

Am I on the verge of understanding Pintsize? (backs away hurriedly)

--- End quote ---
That's not my fault!
Plush chairs were the subject of one of Marten's wet dreams in the course of the comic, and an AI put into a toaster was an example Bubbles used to explain ways of modifying AIs.

Perfectly Reasonable:
I will now indulge myself by repeating my head canon on AI:

1) AI is very much an emergent phenomenon and very poorly understood. There seems to be a great deal of randomness involved.

2) Generating a new AI is computationally expensive (like mining bitcoins) and not something you can do in your basement. I expect there are fewer than 20 supercomputers in the world capable of doing this.

3) Once you have created an AI, you have an obligation to it. You cannot delete it, or shove it into a virtual world. It must be given the same potential for existence as a meatsack intelligence. (From the famous speech before the U.N.: Not masters. Not slaves. Equals.)

So if you try to program an AI for a specific task, you are likely to end up with garbage that does not even compile, let alone run in a meaningful way. Keep trying, and you -may- get what you want .... along with a bunch of quirky AIs that you will need to place ... somewhere.
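
Just for the fun of it, here's that generate-and-filter picture as toy code. The percentages are pulled entirely out of my own head:

--- Code: ---
# Generate-and-filter head canon: most attempts are garbage, a few run
# but come out quirky, and only rarely do you get the AI you asked for.
import random


def attempt_ai(seed: int):
    random.seed(seed)
    roll = random.random()
    if roll < 0.90:
        return None          # garbage: doesn't even 'compile'
    if roll < 0.99:
        return "quirky AI"   # runs, but not what you ordered
    return "task AI"         # the one you actually wanted


results = [attempt_ai(s) for s in range(1000)]
print(results.count("task AI"), "as ordered;",
      results.count("quirky AI"), "quirky AIs to place somewhere")
--- End code ---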

Speculating on how AIs might be programmed to react to certain stimuli is interesting, but speculative. For instance, AIs were not programmed to have libidos. (And they are not about to give them up.)

TheEvilDog:
Eh, new comic.

Let the screams begin in 5, 4, 3, 2...and go!
