I don't think fictional characters MUST be relatable to be enjoyable. Fiction can be escapist, or it can be exploratory, and both sometimes call for characters OUTSIDE what the reader can relate to. Batman is one of the most recognizable, popular, and beloved characters in modern fiction, and if I were to take a wild guess, I'd assume most people can't relate to anything about Bruce Wayne's life.
I don't know about that. He lost both his parents at quite a young age -- I'm sure lots of folks can relate to that.
Actually, I believe it's a good example of what I mean.
How does it change a person's psychology if their body is as replaceable as an automobile? We've had only hints.
Truth be told, I'm even more interested in how having such a person around would affect the life situation and feelings of a trans person who went through a hard transition and was shunned for her condition.
We know how Claire reacted to Pintsize talking about chassis-swapping. Every trans person will have an individual perspective, but I wouldn't be surprised if many others felt the way she did.
I believe it wasn't exactly about chassis-swapping, but about Pintsize dismissing the difficulties of adapting to new options with "bah, technology will fix everything".
Yes, having mental issues doesn't mean a crippling inability to act 100% of the time, in 100% of situations. But the very definition of having mental issues is that the person DOES fall apart at least sometimes. If your mental issues never manifest and don't influence your behavior in any way, you don't have mental issues. Dora can handle some situations well. Faye can handle some situations well. Hannelore can. But there are situations Dora, Faye, or Hannelore can't handle well, and it's chronic and patterned; that's why they can be described as having mental issues.
Look at Bubbles' appearances in the comics after the Stolen Memories arc. When does she show any evidence of any mental issues?
Feeling bad about something isn't a mental issue. Not being able to stay cool all the time isn't a mental issue. Bubbles isn't always cool, but she can genuinely handle things. When she is angry at Clinton, who kept ignoring her explanation that she and Faye are not a couple, she cools off as soon as he admits his mistake (and it took four iterations of "we're not" before he did). When she is angry at Faye, who never came to Bubbles to discuss how Bubbles would live next, she goes to Coffee of Doom and vents quite adequately.
P.S.: Actually, you're right - I noticed it later. Thanks for moving it.
You're right, any struggles we have seen from Bubbles since then have mostly been innocuous and similar to what we may find in non-PTSD individuals with low experience in social/romantic situations.
I'd even say she doesn't need to have low experience in romantic situations to behave the way she does.
I believe we can safely suppose that Bubbles knows Faye's biography. Even if it was never stated directly, Bubbles has actually spoken with Marten about his past relationship with Faye. So she knows how the Marten situation ended for Faye. She knows how the Sven situation ended for Faye. She knows how the Angus situation ended for Faye. And she has no evidence that Faye was ever attracted to women or to AIs.
Imagine you developed a crush on a person who is: badly damaged by relationships, in therapy, never showing any interest in romance with you or your kind, and who hit a breaking point in front of you at least once. Said person is your coworker and (let's say you've been through crappy times) basically your only social link these days. Would you feel okay pushing for a relationship, or even letting your thoughts about it slip out?
(Mind you it could also be seen as simple, illogical lust!)
Can't imagine AI lust as something simple! Actually, human lust isn't simple either. It's an engine of social progress and quite a complex hormonal mechanism!
Actually, this is the basic problem I have here. If AI lust appears, I want to know the mechanism, at least! Or let humans be humans.
So would the robot-psychologists in the QC world. Jeph said once that nobody knows why the AI citizens have libidos.
Let's say my position is quite different from that of the robot-psychologists in the QC world. ;)
(and does she feel anything at all, or is she just playing around?)
All the evidence is that QC AI people feel emotions as genuinely as we do.
Again, I don't think Dora actually accused Marten of conscious lying. Nor did Claire, when they spoke about Pamela. Marten himself is definitely sincere by nature.
I'm trying to say that "All the evidence is that QC AI people feel emotions as genuinely as we do" isn't the best recommendation. We are known for being able to self-deceive with astonishing skill.
From these passages, I gather that you mean that the object of arousal may be entirely socially determined, since it does not seem possible that some section of DNA codes for attraction to chairs, no matter how curvaceous, cozy, plushy, and compliant; but that the sensation of arousal is 100% biological. That narrows down the field of argument a lot.
OK, let's do this thought experiment first: doctor, patient, body can't respond.
Let me propose a thought-experiment: Suppose someone goes to their doctor and says "I'm sexually dysfunctional. I desire my spouse intensely, but my body can't respond properly. The frustration is killing me." The doctor hooks the patient up to some instruments and directs them to think longingly of their spouse, and says "No, you are mistaken. Your erectile tissue is not tumescent when you think about your spouse, and since arousal is 100% biological, that means you aren't feeling desire. There is no problem here." Would the doctor's response be correct? If not, and if arousal is 100% biological, why not?
You argue from analogy here, writing that since humans have bodies analogous to our own, we have better reason to believe that they have sensations like our own than we would have for believing that robots did, regardless of what robots claimed. This argument is invalid. We decided that those neural structures correspond to those sensations by asking humans what they felt and then seeing what neural structures are activated when they say they feel that way. The fundamental evidence was the assertion of a feeling. The neural structure's involvement in that feeling was deduced on the basis of the assertion. Denying someone else's assertion that they feel that way because they haven't got the neural structure would be disregarding equally good evidence for no good reason.
AIs in this universe are largely self-programming. They have learning programs and built-in goals, both quite flexible.
Mild correction - I don't think they actually have built-in goals. There is a moment when Marten wonders why none of the robots around are doing what they were designed to do, and when Winslow objects that he is, Marten asks, "What were you designed for?" Winslow couldn't answer.
AIs who are interested in associating with humans put on bodies for this purpose. Their bodies have automatic stress reactions producing simple, powerful mental events that are analogous to but not identical with ones humans have under similar circumstances. Just as with humans, these simple, powerful sensations are capable of a very wide set of possible interpretations depending on the context and on what part of the human sociocultural psychosexual matrix the robot has become embedded in. The same basic sensations may be experienced as fear, sadness, anger, pleasurable excitement, arousal, drunkenness, desire, or any combination of these depending on circumstance and on whom the robot has learned to be.
There are quite a few problems with this reasoning.
Quote: 2) Not sure about the 'they are taught to do those things'. I'm not a parent, but I've heard e.g. fathers reporting "My daughter was 5 (6, whatever) when she banned me from the bathroom", implying very much that it was not the parent teaching the child to be ashamed, but the child telling the parent "Go!". I remember being younger than ten years of age when my parents being naked in front of me, or my being naked in front of them, started to bother me. I do not recall anybody teaching me to feel that way, it just felt that way.
No, it's not "parents actually demands from their children to do it". But the most neglected thing in pedagogic is ignoring a fact that a child is a sapient being capable to self-learning and self-changing. :)
First of all, by age 6-7 a child has already learned that nudity isn't always okay. It has been explained to them, and they notice that parents (and other grown-ups) don't actually go around nude.
Second, and even trickier: a 6-year-old is going through a crisis not so different from the teenage one. That's when the need for personal space and the renegotiation of relationships happen. Being nude, especially in the bathroom, rings the "it's not safe" bell.
I'm not sure what to offer as a source - this theme is quite well developed in Russian psychology, starting with Lev Vygotsky, but I don't know the English sources or even what this stage is properly called in English.
As far as I can tell, robosex is actually an exchange of packages of personal info and code, and we know it's quite an intimate subject for AIs. They call it "robotic sex" not because it includes sensual stimulation, but because it holds a place in their society resembling the place sex has in ours.
Humans can't choose. For a human, drunkenness is an inevitable state that follows from drinking alcohol. They can want drunkenness (like Faye or Marten after "The Talk"), they can like the taste of spirits, they can drink for company. But they can't become drunk or sober with a snap of their fingers.
Do you read "Good Omens" by Prattchet and Gaiman? There is an episode there, where angel and demon drinking.
"A look of pain crossed the angel's suddenly very serious face.
"I can't cope with this while 'm drunk," he said. "I'm going to sober up."
"Me too.""
That's something an AI can do and a human can't.
So if, for a human, being drunk is an uncontrollable consequence of a certain activity, for an AI it's a game - a voluntary, conscious, optional rule they impose on themselves and can drop at any second.
Let us say then that assertions about feelings are evidence of feelings only when made by a creature that can pass Turing tests as often as humans can.
That's kind of a tautology. An AI passing the Turing test means humans can't tell it from another human, which in turn means humans would already accept a declaration of feelings from that AI. It's a prerequisite, not a consequence.
I don't see how this is an objection. Humans build machines with automatic responses to stress all the time.
That just means robot bodies are built for human purposes, not AI ones. So no function installed (or not installed) should be explained by an AI's desire to understand humans, for example, unless we believe it reflects a human desire first. Why would the military equip a combat gynoid to understand humanity, or to be able to get drunk?
That AIs sometimes voluntarily induce dizziness, slurred speech, and balance problems and interpret them as pleasure does not mean that they only occur voluntarily. Humans both induce them voluntarily for pleasure and suffer them as the result of fatigue, fever, anoxia, poisoning, and any number of other causes. This is not a problem for my argument that AIs can interpret a relatively small number of automatic physical stress responses in a large variety of ways, but rather supports it.
There is a big difference, which I noted in my answers to Case and SpanielBear.
As far as infant psychology is concerned, there is almost an embarrassment of riches in the western psychological canon, from Freud and Jung through to Melanie Klein and John Bowlby. Again, I'm not an expert here so take what I say with a pinch of salt, but I don't see a huge amount of difference between what you describe and what I understand the basic strokes to be from an English language perspective. I guess though that the developmental stage you are describing is similar to the idea that the experience of becoming aware of oneself as a separate entity to others is both liberating and terrifying. The point at which children discover that their parents are fallible and possibly a threat (your mother stops just feeding you whenever and yells at you when you get angry. Terrifying!), that their needs will not always be met by others, and that they can keep secrets from their parents is a big deal, and is normally described as happening in development terms between the ages of 6 months to 6 years. So that kind of tallies. And yes, it is an awareness that seems to be learned through experience rather than instinctual, and that learning is to a greater or lesser degree unconscious.
Not exactly. Russian developmental psychology distinguishes two separate crises: the 3-year crisis (basic protest, establishing one's own "I", forming a basic self-image - the things you're describing) and the 6-year crisis (establishing social hierarchy, trust issues with parents, forming patterns of social behavior). Of course, "3-year" and "6-year" are just conventional names.
Quote: In fairness, there is no indication that robo-sex *doesn't* include sensual stimulation.
Can't imagine how that would work. We have quite consistent evidence that an AI doesn't need to see, or even be near, the AI they're having sex with. So we can be fairly sure (I believe) that everything involved involves the mind, not the chassis.
I'm not sure about that. In theory certainly that's true. An AI runs programme:Drunk until it decides end programme:Drunk. But saying that decision is voluntary, conscious and optional is like saying a human choosing to drink is voluntary, conscious and optional.
And that's the very difference. Humans don't choose consequences, they choose an activity. And once they've chosen the activity, they can't get rid of its consequences. They can want them or not, but they will get them no matter what. I bring up "voluntary, conscious and optional" because it's the very definition of a game ("the voluntary attempt to overcome unnecessary obstacles"). AIs play at being drunk (or aroused). Humans don't, even though they can play at flirting or drinking.
Once again, because it's the very difference: for a human, an activity can be a game, but a physiological response can't. For an AI, both are games.
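If it helps, here is that asymmetry as a toy sketch in Python (entirely my own illustration - the classes and numbers are invented, nothing from the comic): the human's drunkenness is a consequence of the activity with no direct opt-out, while the AI's is a self-imposed flag it can set and drop at will.

```python
# Toy sketch (my own illustration, not canon): the asymmetry argued above.
# All class names and numbers here are invented for the example.

class Human:
    def __init__(self):
        self.blood_alcohol = 0.0

    def drink(self):
        # The activity is chosen; the consequence is not.
        self.blood_alcohol += 0.02

    @property
    def drunk(self):
        # State follows physiology; there is no method to unset it directly.
        return self.blood_alcohol >= 0.08


class AI:
    def __init__(self):
        self.drunk = False  # a self-imposed rule, i.e. part of a game

    def start_game(self):
        self.drunk = True  # voluntary, conscious, optional

    def end_game(self):
        self.drunk = False  # droppable at any second, no sobering up required


human, ai = Human(), AI()
for _ in range(5):
    human.drink()
ai.start_game()
print(human.drunk, ai.drunk)  # True True
ai.end_game()                 # the AI opts out instantly...
print(human.drunk, ai.drunk)  # True False ...the human cannot
```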
So her subconscious may be running her arousal programme on repeat, but she sure as hell isn't going to work too hard to reflect on that fact, because that would risk ending up vulnerable to other sources of psychological pain. This is a paradox- she is feeling something, but can't admit to herself that she is feeling it.
The problem I see here is that this scheme demands that the AI not be aware of the processes it's running. Bubbles directly pointed out that she can't access parts of her mind, and that being unable to access parts of her mind bothers her.
That's actually quite logical. The human unconscious exists because we're not just our consciousness, but also a million-year-old bugged and messy system, endlessly patched and never really designed to support a human-type consciousness. But an AI is essentially a program construct that can be backed up, copy-pasted, or influenced directly; an AI is its consciousness, and the chassis is no more than a tool. I mean, ask a US Marine how important his rifle is; it's still a tool, not the Marine himself.
Which means I don't quite agree with "And as we know that AI's have unconscious processes in a similar way that we do, it's not inconceivable that they have unconscious behaviours and displays that they aren't immediately aware of." How exactly do we know that?
What I meant by "experiences sensual stimulation" is that the experience is not reflective, but immediate.
I'm not sure we can speak of any AI experience as "not reflective". But definition accepted.
You keep describing AI's as 'playing' when it comes to experiences, but this I think implies they are acting more in bad faith than is fair. The 'game' for AI is to use human language to adequately describe what they are feeling. When Bubbles tells Faye she is angry, she is using that word to describe a genuine emotion, based on a functioning, if wounded, psychology. May isn't playing a game when she acts on her fascination with prolapses, she is responding to a genuine desire on her part.
I know I may be getting tiresome here, but I'm going to repeat it again.
You talk about AI personalities being able to be backed up or directly influenced, but doing this was what made Corpse Witch such a criminal in Spookybot's eyes.
And Marten did it more than once or twice. He backed Pintsize up on his PC to change his chassis. He changed Pintsize's "language locale".
The key is when she talks about the big AI's, and their ability to treat human thought as a subroutine, that level of self awareness is alien both to Emily *and to her*. Momo always refers to her consciousness and psychology as being equivalent to humans. She may differ in hardware or qualia, but the functions of her thoughts are not alien. Also she talks about not "thinking faster" than a human. This implies that the limits on her mind, despite it being artificial, are the same as human limitations.
I can't quite see how this proves the existence of an unconscious.
I'm sorry. I am droning on about things you know already if you care about them. I accidentally hit a vein of geekery. Let's flee.
Yeah, I do. But it's my firm belief that declaring something hard to solve isn't a solution. :)
Dickheads gave Jeph shit for having too many LGBT characters...
My thoughts about this statement can't be written safely, because Russian law directly forbids using hard-line mat (Russian obscenities) in public spaces.
May is so poor that her face and her arm fall off and she can't afford to do anything about it. That speaks both to how little human she is and to the position of robots.
Well, Faye sometimes couldn't afford new glasses when hers broke. It's not about the position of robots. It's about the position of low-paid labour. May is a released convict without good qualifications for anything but non-specialized work, and every decent job asks, "Have you ever been convicted of a felony?" Her problems with being poor are 100% human. :)
I think we just disagree, is all. One of the things you most object to, that everyone takes robots perfectly for granted, was one of the first things to please me about the strip. I find myself surrounded by things at least that strange with everyone taking them just that much for granted.
Just to clarify:
My thoughts about this statement can't be written safely, because Russian law directly forbids using hard-line mat in public spaces.
Hunh. You aren't allowed to curse online. Weird. I will moderate my language too, then, so as not to tempt you into trouble.
I don't think Aenno was talking about cursing, I think this was referring to Russia's laws against anything that could be perceived as 'homosexual propaganda'.
Not exactly, and I was kinda joking (kinda - because said laws really exist; I just don't believe any police structure here would ever take notice). Sorry, it's usually deep night my time when I come here (for instance, it's 5:30 AM now), and I sometimes forget that some jokes need cultural context and end up translating them literally from Russian.
Oh. Oh, dear. Noted. OK, tricky but doable. Language is infinitely flexible. I should probably try to review those laws.
You can, but you really shouldn't unless you're a researcher. I mean, we do have laws about homosexual propaganda, sure. It's federal law 436-FZ, "On Protection of Children from Information Harmful to Their Health and Development".
The problem of how to make well-developed robot characters look truly strange is exacerbated by the very wide range of human types in QC. It was relatively easy for '50s and '60s writers to write strange characters, because everyone was still trying to act like everyone else. In a social circle containing Emily, Brun, Tilly, and Faye, it is hard to step outside the bounds of human social behavior.
Yup, and that's fantastic. When you have humans like that (and I can actually swear on my diploma that they're realistic!), you don't need robots to research xenia.
I think the spider robots are like those guys who wear neckbeards and sandals with socks and who, when told that this puts people off, will explain that it is more efficient and rational, and thus should not bother anyone.
Is that exactly the type of guy you would put in a social worker position? ;)
Do you know about phenomenology? :mrgreen:
Yeah! :)
Can you still uphold your distinction that the sensual stimuli received are not, in fact, sensual stimuli? What if the simulation is sufficiently complex that the brain itself can no longer recognize that it is in a simulation? Then your verdict ("no body, ergo no sensory stimulation") would be in conflict with the verdict of the brain experiencing those "questionable stimuli". Does using the term 'sensory stimuli' require actual, physical senses to be attached to the brain?
Actually, it does. But you can definitely fool a brain into thinking it has sensory information. That wouldn't be "sensory stimuli", but the brain inside would never tell the difference.
If you haven't already, I highly recommend watching Carpenter's 1974 SF-comedy Dark Star. Add a few like-minded friends (the nerdier, the better) and mind-altering substances (beer should suffice) for an optimal experience.
I did. Without mind-altering substances, but right after a big philosophy course. It worked even better!
>When you have humans like that (and I can actually swear on my diploma that they're realistic!) you don't need robots to research xenia.<
Nah. Rather: "Let's have more diversity - not shrinking AIs into being humans, and letting humans be as diverse as they really are."
"Why do we need more diversity when we are so diverse already?"
We don't have to worry about shrinking AIs into being humans because AIs don't exist. AIs are fantastical creatures, like trolls, space aliens, elves, or Morlocks.
And as fantastical creatures they fulfill some task. I mean, to "why are humans the X they are?" there is an answer: "It's realistic; people are X." You can make nearly anything X. But if you put in a fantastical creature, it's always for a purpose.
That's the way the form has worked time out of mind. If you want a wide and realistic representation of human diversity, you go to short stories and novels, which offer much more detail and differentiation, but which are longer, slower, less vivid, and more bound to particular cultures.
...or to the bigger part of QC, don't you think? That's actually what I liked so much - a wide and realistic representation of human diversity.
The point is the vampires themselves are a useful neutral slate. If Stoker started writing about the sexual desires of middle class Victorian women, he would have been in serious trouble. Stick the concepts onto a vampire, and there's an element of ablative armour.
I fully agree. As far as I know, Stoker's is actually a very puritan piece of vampire literature; there were plenty of more explicit works of fiction about vampires before him.
I also like the idea that robots represent an exploration of xenia.
So do I.
And I didn't really mean protection in terms of "Because someone will get offended", I'm not accusing Jeph of authorial cowardice. I meant it more in terms of "I am not intimately familiar with this issue, so there's a risk I'll get things wrong. That'll matter slightly less if I'm talking about robots."
Well, I've never contacted Jeph in person, but from his posts and so on I get the feeling he's just the type of person who doesn't want to offend people. But, well. Personally I'd prefer to see his opinions as they are.
I also don't think that QC AI was really set up with that point in mind. It feels like Jeph had kooky AI's running around, and as more of them turned up and the universe became more fully realised, the robots were just there. So Jeph started playing around with them and tropes because he found it interesting. Their status as outsiders and the narrative vehicle they became is a bit messy, because their role wasn't clearly defined in the beginning.
Maybe. And I didn't have a problem here until the robots became main players of whole arcs.
I also think that the Dora/Marten/Faye perspective of xenia, as people who are just normal people for whom this change is just an aspect of day-to-day life, is also kind of interesting.
That's true - but the thing is, I can't really imagine Dora, Marten, or Faye actually taking sides here unless some weird coincidence put them on the fence and thereby made them the deciding persons. I mean, they already have a position, and it's kind of "OK, they're robots, they're here, some of them are nice, some of them are smug assholes" *undoubtedly*. They don't have a "change happens" attitude, and I believe that was actually addressed in some strips.
And when classical art used anything like giants, sorcerers, or cyclopes, it was always about canonical depiction. Listeners knew that the fair folk are X and do Y.
... If there hadn't been an interpretable model, Malioutov cautions, "you could accidentally kill people."
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
Also, there's the premise "A little learning is a dangerous thing."
You overlook one little detail, factor, or assumption, and things can get dangerous rather quickly.
E.g.:
Collect all of the pertinent data (as far as you know)
Feed it into a computer
Run your data analysis
The answer is 42
"What do you get if you multiply seven by nine?"
In other words, if you ask a computer a question, make sure you asked the right one.
SIX by Nine...
There's something fundamentally flawed with the universe... :wink:
Well, not necessarily fundamentally flawed. But that answer should make you start considering why you've been thinking in base ten all this time when the universe clearly does its math in base thirteen.
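For anyone who wants to check the arithmetic: six times nine is 54 in decimal, and 54 written in base thirteen comes out as "42". A quick sketch of my own (the to_base helper is hypothetical, written just for this post):

```python
# Check the joke's arithmetic: 6 * 9 is 54 in decimal, written "42" in base 13.

def to_base(n, base):
    """Render a non-negative integer n in the given base (base <= 13 here)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append("0123456789abc"[r])
    return "".join(reversed(digits))

print(6 * 9)               # 54
print(to_base(6 * 9, 13))  # "42" -- four thirteens plus two
```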
“What machines are picking up on are not facts about the world,” Batra says. “They’re facts about the dataset.”
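Batra's point is easy to demonstrate with a toy example (my own, with invented watermark/whiskers data, not anything from the article): a learner that latches onto a quirk of the training set looks perfect there and falls apart in the wild.

```python
# Toy demonstration (my own, invented data): the learner picks up a fact about
# the dataset ("watermarked photos are cats"), not a fact about the world.

# Each example: features (has_watermark, has_whiskers) and a label.
# In the training set the watermark correlates perfectly with "cat".
train = [((1, 1), "cat"), ((1, 1), "cat"), ((0, 0), "dog"), ((0, 0), "dog")]

# In the wild, the watermark is unrelated to the animal.
test = [((0, 1), "cat"), ((1, 0), "dog")]

def train_rule(data, feature_index):
    # Trivial one-feature learner: majority label for each feature value.
    seen = {}
    for features, label in data:
        seen.setdefault(features[feature_index], []).append(label)
    return {value: max(set(labels), key=labels.count)
            for value, labels in seen.items()}

for i, name in [(0, "watermark"), (1, "whiskers")]:
    rule = train_rule(train, i)
    accuracy = sum(rule[f[i]] == label for f, label in test) / len(test)
    print(f"rule based on '{name}': test accuracy = {accuracy:.0%}")

# Both rules are 100% accurate on the training data, but the watermark rule
# (a fact about the dataset) scores 0% in the wild, while the whiskers rule
# (a fact about the world) still scores 100%.
```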