Comic Discussion > QUESTIONABLE CONTENT

Something bothering me a lot


SpanielBear:
COPIED FROM WEEKLY DISCUSSION THREAD



--- Quote from: Aenno on 08 Feb 2018, 16:47 ---
--- Quote ---2) Not sure about the 'they are taught to do those things'. I'm not a parent, but I've heard e.g. fathers reporting "My daughter was 5 (6, whatever) when she banned me from the bathroom", implying very much that it was not the parent teaching the child to be ashamed, but the child telling the parent "Go!". I remember being younger than ten years of age when my parents being naked in front of me, or my being naked in front of them, started to bother me. I do not recall anybody teaching me to feel that way; it just felt that way.
--- End quote ---

No, it's not that "parents actually demand it from their children". But the most neglected fact in pedagogy is that a child is a sapient being, capable of self-learning and self-change. :)
First of all, by age 6-7 a child has already learned that nudity isn't always okay. It has been explained to them, and they notice that parents (and other grown-ups) don't actually go around nude.
Second, and even trickier: a 6-year-old child goes through a crisis not so different from the teenage one. That's when the need for personal space and the recalculation of relationships happen. Being nude, especially in the bathroom, rings an "it's not safe" alarm.
I'm not sure what to offer as a source - this topic is quite well developed in Russian psychology, starting with Lev Vygotsky, but I don't know the English-language sources or even what this stage is correctly called in English.
--- End quote ---

So, I stopped replying to this thread because while it is fascinating, it was going into areas that I know next to nothing about. My background is philosophy and ersatz psychotherapy (like mental health first aid rather than a degree, I'm definitely not a psychotherapist), and when the discussion moved into the biochemical side I felt happier sitting it out.

But it is fascinating. And there are some points raised here that I think I can jump in on, so here goes.

As far as infant psychology is concerned, there is almost an embarrassment of riches in the western psychological canon, from Freud and Jung through to Melanie Klein and John Bowlby. Again, I'm not an expert here so take what I say with a pinch of salt, but I don't see a huge amount of difference between what you describe and what I understand the basic strokes to be from an English language perspective. I guess though that the developmental stage you are describing is similar to the idea that the experience of becoming aware of oneself as a separate entity to others is both liberating and terrifying. The point at which children discover that their parents are fallible and possibly a threat (your mother stops just feeding you whenever and yells at you when you get angry. Terrifying!), that their needs will not always be met by others, and that they can keep secrets from their parents is a big deal, and is normally described as happening in developmental terms between the ages of 6 months and 6 years. So that kind of tallies. And yes, it is an awareness that seems to be learned through experience rather than instinctual, and that learning is to a greater or lesser degree unconscious.

If we try and extrapolate that learning process into the development of an AI personality- well, we don't actually have much to go on. We don't know how they're grown, so we don't know whether they go through developmental stages (is something like Eliza the equivalent of an AI newborn? Or are their developmental stages the same as ours but sped up? Do they have attachment figures? How much of their psychology is a pre-programmed function and how much is emergent? Too many questions, not enough evidence for an answer), so trying to draw out comparisons with humanity doesn't really work. If an AI doesn't have a father who can be naked, is there a machine equivalent? "I saw Dad slowing his run-time last night- Gross!"

And then we add *another* layer of complication, because now they have to interact with humanity as well. So that's two layers of socialisation and existential games to have to navigate. Human-centred AI's are not omniscient; they make mistakes about human feelings and intentions which they have to learn to correct, so that seems to indicate that they do not get "Interacting with Humans 101" as a simple download. When it comes to us, they try to mimic our ways as much as possible.

Which means I think we come back to the functionality thing again. If Bubbles only wanted to socialise with other robots, she would have no need to go through the difficulty of learning how to interact with humans. Because she does, she is forced to translate her robot psychology into terms that humans can relate to. This could go the other way, and presumably the study of AI psychology would be a thing as we try to do just that, relate to robots on their terms. But for the day to day, it seems far easier for the AI's to translate their inner experiences in terms of human psychology and feeling. And that communication is presumably facilitated by both the software and the hardware they use - software might give Bubbles mastery of the English language, but another package designed to run with her specific chassis may also provide body language cues. And as we know that AI's have unconscious processes in a similar way that we do, it's not inconceivable that they have unconscious behaviours and displays that they aren't immediately aware of.




--- Quote --- As far as I can tell, robosex is actually an exchange of packages of personal info and code, and we know it's quite an intimate matter for AIs. They call it "robotic sex" not because it includes sensual stimulation, but because it has a place in their society resembling the place sex has in ours.
--- End quote ---

In fairness, there is no indication that robo-sex *doesn't* include sensual stimulation. They get an emotional intimacy sure, but looking at Pintsize before and after his date back in the early comics he certainly seemed to have experienced stimulation of some kind. And I seem to recall Momo having a very... immediate reaction to a shirtless Sven (I think? I can't remember where it is in the archives. I recall there being startled pony-tail motion...). In short, when AI seek romance, they definitely can include erotic love as a part of that desire. I don't see any indication that their lust is anything other than raw, as opposed to an intellectual satisfaction. Bubbles' desire for Faye covers a broad spectrum. She loves the emotional connection they have, for sure, but there is something more that she wants and all the signs point to that want being lust-based, at least in part.


--- Quote ---Humans can't choose. For a human, drunkenness is an inevitable state that happens because they're drinking alcohol. They can want drunkenness (like Faye or Marten after "The Talk"), they can like the taste of spirits, they can drink for company. They can't become drunk or sober with a snap of their fingers.
Have you read "Good Omens" by Pratchett and Gaiman? There is an episode where the angel and the demon are drinking.
"A look of pain crossed the angel's suddenly very serious face.
"I can't cope with this while I'm drunk," he said. "I'm going to sober up."
"Me too.""
That's something an AI can do, and a human can't.
So while for a human being drunk is an uncontrollable consequence of some activity, for an AI it's a game - a voluntary, conscious and optional rule they impose on themselves and can drop at any second.

--- End quote ---

I'm not sure about that. In theory certainly that's true. An AI runs programme:Drunk until it decides end programme:Drunk. But saying that decision is voluntary, conscious and optional is like saying a human choosing to drink is voluntary, conscious and optional. That choice seems to be an open one, but in fact can be driven by all sorts of unconscious desires and emotional drives, to the extent that the choice we have is very limited. If Station were to want to reduce its run time to avoid something disturbing, it could use the Drunk programme to facilitate that. It's conceivable that it could rationalise the choice to start drinking with a thought similar to "This will help me cope, I can stop whenever I like", but if the disturbing emotion was bad enough it may feel unable to end the programme- it could be too scared, the experience too potentially painful. A robot alcoholic is not an impossible thing to conceive of- it could cure itself, but for whatever reason doesn't feel able to. If we hypothesize a robot subconscious, it may not even know its motivations for that.
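(If it helps make that concrete, here's a quick toy sketch in Python - purely my own illustration, nothing canon; the ToyAI class and its "distress" number are made up - of an AI whose "decision" to end programme:Drunk is gated by a drive it doesn't consciously control:)

--- Code: ---
# Toy sketch, purely illustrative: an AI that can nominally end
# programme:Drunk at any moment, but whose "choice" to do so is gated
# by an unconscious distress level it doesn't directly control.

class ToyAI:
    def __init__(self):
        self.drunk = False
        self.distress = 0.0  # hypothetical unconscious drive, 0.0 .. 1.0

    def start_drunk(self):
        # The initial choice looks voluntary, conscious and optional...
        self.drunk = True

    def try_to_sober_up(self):
        # ...but ending the programme only actually happens if the distress
        # being numbed is low enough to face. Otherwise programme:Drunk
        # keeps running, "I can stop whenever I like" notwithstanding.
        if self.distress < 0.5:
            self.drunk = False
        return self.drunk

ai = ToyAI()
ai.distress = 0.9            # something disturbing happened
ai.start_drunk()
print(ai.try_to_sober_up())  # True: still drunk, despite nominal control
--- End code ---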

Bringing this back to Bubbles again- why might she want to run Programme:Arousal despite the social and emotional implications of that choice? Well, it may just feel good. It feels *nice* to be aroused, that's kind of the purpose. It's only when we start adding social mores and taboos on top of that that it becomes complicated. Bubbles shows real difficulty admitting to her own desires, to anything really that isn't logical. Part of her development is allowing herself to express those feelings. But to her, some of her feelings- grief, loss, confusion- are so overwhelming that avoiding them is an act of self-defence. And if some emotions are that hard to face, to make conscious, she might feel the same about others- if one snake is poisonous, all become suspect until proven otherwise. So her subconscious may be running her arousal programme on repeat, but she sure as hell isn't going to work too hard to reflect on that fact, because that would risk ending up vulnerable to other sources of psychological pain. This is a paradox- she is feeling something, but can't admit to herself that she is feeling it.

But there is a workaround. By throwing herself into the learned behaviours, she can maintain herself in a place where she feels arousal but is not obliged to act on it, and can dismiss her inner tension as social anxiety. As the subject of all these emotions is Faye, a human, the only way she can get the object of her arousal to behave in the way she needs is to communicate with her, and she uses the human/AI emotional translator to do it.

Dammit, I just armchair psychologied a combat AI, didn't I? God I love QC.  :-)

Aenno:

--- Quote from: ckridge on 08 Feb 2018, 17:48 ---Let us say then that assertions about feelings are evidence of feelings only when made by a creature that can pass Turing tests as often as humans can.
--- End quote ---
That's kind of a tautology. An AI passing the Turing test is a situation where a human can't tell the AI from another human, which means humans would already accept a declaration of feelings from that AI. It's a prerequisite, not a consequence.


--- Quote ---I don't see how this is an objection. Humans build machines with automatic responses to stress all the time.
--- End quote ---
That just means that robot bodies are built for human purposes, not AI ones. So no function installed (or not installed) should be explained as following an AI's desire to understand humans, for example, unless we believe it's a human desire itself. Why would the military create a combat gynoid equipped to understand humanity, or able to get drunk?


--- Quote ---That AIs sometimes voluntarily induce dizziness, slurred speech, and balance problems and interpret them as pleasure does not mean that they only occur voluntarily. Humans both induce them voluntarily for pleasure and suffer them as the result of fatigue, fever, anoxia, poisoning, and any number of other causes. This is not a problem for my argument that AIs can interpret a relatively small number of automatic physical stress responses in a large variety of ways, but rather supports it.
--- End quote ---
There is a big difference I noted in the answers to Case and SpanielBear.


--- Quote from: SpanielBear on 08 Feb 2018, 18:11 ---As far as infant psychology is concerned, there is almost an embarrassment of riches in the western psychological canon, from Freud and Jung through to Melanie Klein and John Bowlby. Again, I'm not an expert here so take what I say with a pinch of salt, but I don't see a huge amount of difference between what you describe and what I understand the basic strokes to be from an English language perspective. I guess though that the developmental stage you are describing is similar to the idea that the experience of becoming aware of oneself as a separate entity to others is both liberating and terrifying. The point at which children discover that their parents are fallible and possibly a threat (your mother stops just feeding you whenever and yells at you when you get angry. Terrifying!), that their needs will not always be met by others, and that they can keep secrets from their parents is a big deal, and is normally described as happening in developmental terms between the ages of 6 months and 6 years. So that kind of tallies. And yes, it is an awareness that seems to be learned through experience rather than instinctual, and that learning is to a greater or lesser degree unconscious.
--- End quote ---
Not exactly that. Russian developmental psychology distinguishes two different crises - the 3-year crisis (basic protests, establishing your own "I", establishing a basic image of yourself - the things you're describing) and the 6-year crisis (establishing social hierarchy, trust issues with parents, creating social behaviour patterns). Of course, "3-year" and "6-year" are just conventional names.


--- Quote ---In fairness, there is no indication that robo-sex *doesn't* include sensual stimulation.
--- End quote ---
I can't imagine how that would work. We have quite consistent evidence that an AI doesn't need to see or even be near the AI with whom they're having sex. So we can actually be sure (I believe) that everything involved involves the mind, not the chassis.


--- Quote ---I'm not sure about that. In theory certainly that's true. An AI runs programme:Drunk until it decides end programme:Drunk. But saying that decision is voluntary, conscious and optional is like saying a human choosing to drink is voluntary, conscious and optional.
--- End quote ---
And that's the very difference. Humans don't choose consequences, they choose an activity. But once they've chosen the activity, they can't get rid of the consequences. They can want them or not, but they will get them no matter what. I bring up "voluntary, conscious and optional" because it's the very definition of a game ("the voluntary attempt to overcome unnecessary obstacles"). AIs play at being drunk (or aroused). Humans don't, even though they can play at flirting or drinking.
Once again, because it's the very difference: for a human, an activity can be a game, but a physiological response can't. For an AI, both are games.


--- Quote ---So her subconscious may be running her arousal programme on repeat, but she sure as hell isn't going to work too hard to reflect on that fact, because that would risk ending up vulnerable to other sources of psychological pain. This is a paradox- she is feeling something, but can't admit to herself that she is feeling it.
--- End quote ---
The problem I see here is that such a system requires the AI not to be aware of the processes it's running. Bubbles directly pointed out that she can't access parts of her mind, and that being unable to access parts of her mind bothers her.
That's actually quite logical. The human unconscious exists because we're not just our consciousness, but also a million-year-old, bug-ridden, messy system, patched over and never really designed to support human-type consciousness. But AIs are essentially program constructs that can be backed up, copy-pasted or influenced directly; an AI is its consciousness, and the chassis is no more than a tool. I mean, ask a US Marine how important his rifle is; it's still a tool, not the Marine himself.
That means I kinda disagree with "And as we know that AI's have unconscious processes in a similar way that we do, it's not inconceivable that they have unconscious behaviours and displays that they aren't immediately aware of." How exactly do we know that?

SpanielBear:

--- Quote from: Aenno on 08 Feb 2018, 18:57 ---
--- Quote ---In fairness, there is no indication that robo-sex *doesn't* include sensual stimulation.
--- End quote ---
I can't imagine how that would work. We have quite consistent evidence that an AI doesn't need to see or even be near the AI with whom they're having sex. So we can actually be sure (I believe) that everything involved involves the mind, not the chassis.
--- End quote ---

I agree it doesn't involve the chassis, but it's an open question as to how setting up a link *feels* for an AI. While the mechanics of arousal are embodied, my experience of them is mental. One can feel aroused in dreams. What I meant by "experiences sensual stimulation" is that the experience is not reflective, but immediate.


--- Quote ---I'm not sure about that. In theory certainly that's true. An AI runs programme:Drunk until it decides end programme:Drunk. But saying that decision is voluntary, conscious and optional is like saying a human choosing to drink is voluntary, conscious and optional.
--- End quote ---

--- Quote ---And that's the very difference. Humans don't choose consequences, they choose an activity. But once they've chosen the activity, they can't get rid of the consequences. They can want them or not, but they will get them no matter what. I bring up "voluntary, conscious and optional" because it's the very definition of a game ("the voluntary attempt to overcome unnecessary obstacles"). AIs play at being drunk (or aroused). Humans don't, even though they can play at flirting or drinking.
Once again, because it's the very difference: for a human, an activity can be a game, but a physiological response can't. For an AI, both are games.
--- End quote ---

I still think you are overstating the extent to which AI are free of unintentional consequences in this case, and others like it. In your argument, humans have limited control over their physiology, whereas an AI has absolute control over theirs. But the AI's control is dependent on the AI being able to exercise it. If an AI feels an uncontrollable urge to be drunk, it doesn't matter that it could order the programme to be purged- it is unable to do so. If it didn't realise that was going to be the outcome of its initial choice, the consequence of becoming drunk was just as unintentional as it is for a human.

You keep describing AI's as 'playing' when it comes to experiences, but this I think implies they are acting more in bad faith than is fair. The 'game' for AI is to use human language to adequately describe what they are feeling. When Bubbles tells Faye she is angry, she is using that word to describe a genuine emotion, based on a functioning, if wounded, psychology. May isn't playing a game when she acts on her fascination with prolapses, she is responding to a genuine desire on her part.


--- Quote ---So her subconscious may be running her arousal programme on repeat, but she sure as hell isn't going to work too hard to reflect on that fact, because that would risk ending up vulnerable to other sources of psychological pain. This is a paradox- she is feeling something, but can't admit to herself that she is feeling it.
--- End quote ---

--- Quote ---The problem I see here is that such a system requires the AI not to be aware of the processes it's running. Bubbles directly pointed out that she can't access parts of her mind, and that being unable to access parts of her mind bothers her.
That's actually quite logical. The human unconscious exists because we're not just our consciousness, but also a million-year-old, bug-ridden, messy system, patched over and never really designed to support human-type consciousness. But AIs are essentially program constructs that can be backed up, copy-pasted or influenced directly; an AI is its consciousness, and the chassis is no more than a tool. I mean, ask a US Marine how important his rifle is; it's still a tool, not the Marine himself.
That means I kinda disagree with "And as we know that AI's have unconscious processes in a similar way that we do, it's not inconceivable that they have unconscious behaviours and displays that they aren't immediately aware of." How exactly do we know that?

--- End quote ---

Momo alludes to it here:-

http://www.questionablecontent.net/view.php?comic=2285

The key is when she talks about the big AI's and their ability to treat human thought as a subroutine: that level of self-awareness is alien both to Emily *and to her*. Momo always refers to her consciousness and psychology as being equivalent to a human's. She may differ in hardware or qualia, but the functions of her thoughts are not alien. She also talks about not "thinking faster" than a human. This implies that the limits on her mind, despite it being artificial, are the same as human limitations.

There is also a big taboo here. You talk about AI personalities being able to be backed up or directly influenced, but doing this was what made Corpse Witch such a criminal in Spookybot's eyes. One *can* make an AI feel something by direct programming, and override its desires, but doing so is like brainwashing a human. It would be manipulating their personality against their will. And the fact that one can do that to an AI does not mean that their mind is fundamentally different from a human's, or that they are incapable of sharing communicable experiences.

In short- why do AI's behave like humans? Because they are like humans.

Edit: I realised I didn't respond to your point about Bubbles feeling concerned she couldn't access parts of her mind. But I think you are mistaken as to the level of access Bubbles was expecting. It wasn't that she expected total awareness of her whole mental state; she was talking specifically about not being able to remember her comrades and the events around her military service. That sort of confusion is not AI-centric, it happens in humans too. It's called amnesia. And that can happen due to either a hardware issue (brain damage) or a software one (suppressing traumatic memories).

Aenno:

--- Quote ---What I meant by "experiences sensual stimulation" is that the experience is not reflective, but immediate.
--- End quote ---
I'm not sure we can describe any AI experience as "not reflective", but definition accepted.


--- Quote ---You keep describing AI's as 'playing' when it comes to experiences, but this I think implies they are acting more in bad faith than is fair. The 'game' for AI is to use human language to adequately describe what they are feeling. When Bubbles tells Faye she is angry, she is using that word to describe a genuine emotion, based on a functioning, if wounded, psychology. May isn't playing a game when she acts on her fascination with prolapses, she is responding to a genuine desire on her part.

--- End quote ---
I know I may be annoying here, but I'm going to repeat it again.
I'm not saying EVERY experience is a game for them.
I want to draw a strict border between... let's call them "body emotions" and "mind emotions". Anger is a "mind" one. Look at Wikipedia, for instance: "Anger is an emotion that involves a strong uncomfortable and hostile response to a perceived provocation, hurt or threat." If an AI can be hurt or threatened (why not?), it can react to that. This reaction is "anger". It's not human anger, but anger nevertheless. Here I believe your arguments are entirely valid.
Desire is abstract as well. It's a wish to have. If an AI can understand the concept of "having" and has preferences about having ("I don't have enough RAM to process everything I want to process; I want more RAM"), that's desire. In my... let's call it "religious" opinion, the desire to know is a basic emotion that every conscious mind capable of grasping abstractions would have.
Let's take appetite. Do AIs feel appetite? They don't need food, they don't have any system to process it or (in most models) taste receptors. Still, Pintsize puts cake into his chassis and declares "it was too tasty" - something we would define as feeling appetite. How can that happen, if it's not playing around?

Or take another approach. Let's assume every emotion an AI declares is genuine.
When Bubbles says "I'm angry", she means that she believes her personal space and interests were violated, and she is not okay with it. This is actually a clear message, understandable and relatable. Bubbles has every reason to believe Faye will understand it. It's a clear message using human language.
When Pintsize points at a cake and says "I'm hungry", what does he mean? Are we to believe that Pintsize has the same sensation a human has?


--- Quote ---You talk about AI personalities being able to be backed up or directly influenced, but doing this was what made Corpse Witch such a criminal in Spookybot's eyes.
--- End quote ---
And Marten did it more than once or twice. He backed Pintsize up on his PC to change his chassis. He changed Pintsize's "language locale".
But yup, that's it. That's why software that would make AIs feel uncontrollable desires is bad.


--- Quote ---The key is when she talks about the big AI's and their ability to treat human thought as a subroutine: that level of self-awareness is alien both to Emily *and to her*. Momo always refers to her consciousness and psychology as being equivalent to a human's. She may differ in hardware or qualia, but the functions of her thoughts are not alien. She also talks about not "thinking faster" than a human. This implies that the limits on her mind, despite it being artificial, are the same as human limitations.
--- End quote ---
I can't quite see how that would prove the existence of an unconscious.
Momo says that:
1. The processing power of her hardware is limited enough to keep her experience in tune with a human one.
2. Big AIs are not limited in that way, and have a vastly greater attention field.
It's not that Momo or the greater AIs have hidden subroutines. It's just that Momo is limited by hardware, so her attention field and span are human-like. I do believe this can be done deliberately, when an AI decides to integrate with humans, precisely to make integration simpler, but it doesn't mean some subroutines are locked away from the AI.
Such a lockdown, without the AI actually being able to undo it, would be exactly the crime Corpse Witch committed against Bubbles.

Imagine the classic Arabian plot about a sultan who wants to know how his subjects live. He is not his own subject, and he understands that his experience doesn't allow him to understand them. So he disguises himself as a poor man and goes into the city. He is playing. His feelings are genuine, except for, let's say, the feeling of being unsafe around laws and officials. But he can emulate those emotions (let's say he walks into a burglars' den and has to run from the guards) to better understand something else. That's a game, but it's a very useful and important game.
Then imagine an evil vizier strips the sultan of the ability to return. Now his feeling of being unsafe around laws and officials becomes genuine - but the vizier has committed a crime.

Actually, it occurs to me now - the crime Corpse Witch committed against Bubbles is that she made her MORE human: she actually created a subconscious (parts of the mind locked away from consciousness, where something dangerous can lurk and influence her behaviour). And everybody, including Bubbles and Spookybot, is royally pissed.

REFERENCE: http://www.questionablecontent.net/view.php?comic=3380

ckridge:
--- Quote ---For a time I told myself, "Well, maybe they're AIs, but they're humanlike enough to have human problems, so I just need to think of them as humans with some problems." As I said, I can imagine a soldier with PTSD, or a rich kid who is trying to make things better, or a former convict, all those things, and relate to them the same way I relate to Hannelore (as I said, not many multibillionaires' daughters in my social circle, but I can relate to Hannelore). But then I thought - hell, doesn't that defeat the point?

I mean, and please take me the right way: while I'm for diversity, I'm not for plain equalization. If I say "hey, Hannelore, you're just a human being like me!", wouldn't I be a jerk? I mean, Hannelore isn't like me. She has issues I don't, and even if I might think they're ridiculous, they aren't to her. So if I'm going to be good, I should remember that Hannelore doesn't give hugs. I should keep in mind that Faye, maybe, isn't so fond of suicide jokes, and I'd better not replace her juice with bourbon as a friendly prank. Every being deserves understanding of its issues, and if I want to be a friend to such a being, I should keep their issues in mind.

Then there are two possibilities.

First, that being an AI is nothing other than being human: exactly the same issues and psychology. It is possible, sure (AFAIK, neither I nor anybody else on this planet can really describe a strong AI); but wouldn't that just defeat the point? So the idea would be "we're all different, but AIs are the same as we are"? "You should notice the basic story of every living creature, but being an AI is essentially nothing, just ignore it"?
Second, that being an AI is something different, with the socially acceptable possibility of changing bodies for a bit of money (tell that to Claire), with the ability to lock away psychological problems with competent programming (tell that to Faye), and some unique issues that come with that status; but can any human really relate to specifically AI issues?
--- End quote ---

There are interesting questions in here, and a number of possible ways to approach them. I am going to reject choosing between the two possibilities offered here, on the basis that there are almost never only two possibilities. Further, such questions could be asked of any character whatever that wasn't just like me, e.g. "If women are just like men, what is the point of having women characters? If they are fundamentally different from men, how am I ever to imagine what they are like?"

Instead, I am going to try to figure out what narrative function robots serve in QC. That is, what do they do for the story that couldn't be done by human characters?

I think they are an expression of hope for the future, and, at the same time, an assertion about what a good society is like.

It's easy to be pessimistic about the future lately, not because the future looks any grimmer than it did during the Cold War, but because the impending disaster looks so tedious. Instead of a radioactive hellscape populated by mutants and wandering death machines, we are faced with a long, slow series of famines, mass migrations, and floods that will gradually kill most large animals and most of us, leaving a world populated mostly by people rich enough to hoard food and arable land for themselves. We will follow Scrooge's suggestion that we should die and reduce the surplus population, and the world we leave behind will be a gated community. This is a sufficiently depressing prospect that we have nostalgic, escapist Mad Max movies about how exciting and fun it would be to wander a hellscape in death machines.

What Jeph is offering is a future full of new creatures the like of which we have never seen before, some of whom will converse with us and be our friends. At the same time, he is suggesting that the ideal state of affairs is one full of new creatures the like of which we have never seen before, some of whom will converse with us and be our friends. He is promoting xenia, "guest-friendship," the love of strangers precisely because they are strange, and robots are paradigm instances of such strangers.

As best I can figure, Jeph wanders into his use of robots, or, if you prefer, he evolves it intuitively. Pintsize is pretty clearly a personification of a lonely guy's laptop: he is Marten's only friend at first, he is mostly full of porn, and he speaks in an immature, unfiltered, random fashion that sounds remarkably like the collective voice of any internet comment section made friendly. He is a self-deprecating joke about what it is like to have a laptop as your best friend.

As Jeph develops robot characters, most notably in 3080-3095, in which they interact exclusively with one another, it becomes evident that they are outsiders anxious about their position in the world. Momo manages this anxiety by being a very good girl, May by ostentatiously not caring, and Bubbles first by attempting to live heroically and then by hiding away. Robot outsiders are a standard science fiction trope dating back to Eando Binder's 1939 story "I, Robot" and the Isaac Asimov robot stories it inspired. Robots differ from the other two stock outsiders, aliens and mutants, in that they are made by humans, and can justly ask why they were made as they were if there is no place for them in the world. Because they are entirely artificial, with no animal substrate whatever, robot characters can perfectly express the feeling of looking at society and seeing an enormous factory that built one like a machine but that has no place for one. This is not an uncommon feeling.

Robots also get Jeph out of a bind. He got known for writing about LGBT characters in a rational fashion, got pilloried for it, got pissed, and doubled down. Now he is in the dicey position of writing about LGBT characters as a straight, cis guy, with the continual possibility of coming off sounding like Evie lecturing Bubbles about what it is like to be an AI. You get the feeling of a man working in asbestos gloves every time Claire is on stage. He can't afford to fall in love with her the way he does with other characters, because he can't afford ever to make a mistake. He can use Bubbles as a symbolic surrogate for trans, queer characters, though, and let himself love her, because if he gets it wrong, it will just be assumed that robots are like that. He can justly claim to be the world's foremost authority on what robots he makes up are like.

Robots have a fourfold function in this story, then. They express hope that the future will be full of strange new creatures who will be our friends. They are a wry joke about what it is like to have a computer be a chief social contact. They express what it is like to feel like an unwanted artifact, something neither natural nor useful. They serve as a useful symbolic surrogate for outsiders in general.

This is how Pintsize, Momo, Winslow, and Bubbles serve the narrative considered merely as robots. As characters, they serve it in much more complicated, interesting ways, but discussing that would make this post impossibly long.
