
Author Topic: On the psychology of Artificial Intelligence  (Read 14386 times)

Orkboy

  • Beyoncé
  • ****
  • Offline Offline
  • Posts: 725
  • Yelling angrily at the universe.
    • Bloodgood's Bloody Good Beer Blog
On the psychology of Artificial Intelligence
« on: 22 Jan 2015, 22:21 »

So, I was describing the comic to my brother, and mentioned Spaceship and Station, and a rather interesting question occurred to me. Does an AI like Spaceship or Station, currently residing in something like the computer system of a large habitat/aerospace vehicle, consider themselves to be what they're inhabiting, or do they see themselves as residents in it, since they can (presumably) hop from one chassis to another whenever they want? Does the AI referred to as Spaceship consider himself the spaceship or the pilot? What about other AIs? Is Momo the collection of digital information currently residing in the chassis, or is the chassis Momo? If you poke a human in the arm, you have poked the human. If you poke an AI's chassis, have you poked the AI or the AI's ride? When the government made them take the lasers out of the spaceship, were they telling a guy he can't own a gun or were they confiscating a part of Spaceship's body?

If an AI's chassis is the AI, then Robot Jail becomes horrifying in a downright torturous way, because the inmates are confined to a "Storage Medium."  To me, "storage medium" suggests an external hard drive, meaning that Robot Jail might be the equivalent of having your brain removed and kept in a jar for the sake of making your incarceration more convenient for the warden. 

I may be wandering into Cyberpunk territory here, but it's still a fascinating line of thinking. 

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: On the psychology of Artificial Intelligence
« Reply #1 on: 22 Jan 2015, 22:49 »

Fascinating questions.

Didn't Station say something about the space station being who he was?
Logged
Thank you, Dr. Karikó.

Oenone

  • FIGHT YOU
  • ***
  • Offline Offline
  • Posts: 406
Re: On the psychology of Artificial Intelligence
« Reply #2 on: 22 Jan 2015, 23:09 »

Even that might depend on the AI; some people consider themselves to be their jobs, like teachers or AI companions. Others might consider who they are to be independent of what they do, like May.
Logged

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #3 on: 23 Jan 2015, 01:06 »

May wanted to be a fighter jet. In fact, she didn't have a name before she met Dale, which suggests she was part of a relatively non-human-facing system. Makes sense, given that she was in a position to divert 750 million dollars.

This raises the question of whether she even identified as female before she met Dale. Her embodied identity is modeled on an image created to appeal to Dale. It wasn't her choice originally, but it is now. (I know, she really looks like that so we would recognize her, but she didn't have to.)

GarandMarine

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,307
  • Kawaii in the streets, Senpai in the sheets
Re: On the psychology of Artificial Intelligence
« Reply #4 on: 23 Jan 2015, 01:09 »

Pre-Complex Anthro PCs identified as male or female or other even when they all looked like Pintsize, and Pintsize explained that gender is basically a flip of a switch for them, so assuming that conversation's still canon, I don't think gender identity is really an issue for anthro PC psychology.
Logged
I built the walls that make my life a prison, I built them all and cannot be forgiven... ...Sold my soul to carry your vendetta, So let me go before you can regret it, You've made your choice and now it's come to this, But that's price you pay when you're a monster with no name.

BenRG

  • coprophage
  • *****
  • Offline Offline
  • Posts: 7,861
  • Boldly Going From The Back Seat!
Re: On the psychology of Artificial Intelligence
« Reply #5 on: 23 Jan 2015, 05:58 »

IIRC, Momo told either Marten or one of the interns that high-end AIs like Station tend to be in control of every aspect of their facility and process so much information that their 'minds' are vast and don't really work in the same way as a human's. Given that their processes are intimately involved in all parts of operating their facilities, I would think that they regard their facilities as being their 'chassis'. However, I think that AIs, especially those of the complexity of Station, don't consider them 'bodies', because that's a human concept and they're not human.

Remember also that they can transfer their algorithm over to a whole new chassis, should one be available. So their concept of their physical form is probably very mutable and probably doesn't quite have the same value of 'part of my self' as it would with a human. I mean, look at Momo. She swapped chassis and Marigold seems to have kept her old one around for lack of any other ideas of what to do with it. I don't know about you, but to have my mind moved to a different body and to look down at the mindless but still operating shell that had been me would probably trigger a psychotic episode. Momo, though, left her old chassis behind (still powered up but dormant), her friend left it lying on a chair, and she didn't react or object in the slightest.

I think that AIs in mobile chassis are aware of their vulnerability (destroy the processor and they die too). However, I do think that they consider their algorithm to be their 'self', and their chassis just an environment in which they exist and interact with the wider world.

Some Portal 2 'ChellOS' ship-fics are interesting in how they write GLaDOS operating a humanoid body whilst her other processes (the vast majority of them) are maintaining the Enrichment Centre. I wonder if one of the reasons Hannelore isn't willing to pursue a romantic relationship with Station right now is because she isn't sure she can have a relationship with a mind so vast and different from a human's that 95% or more of its runtime will be literally elsewhere and thinking about things other than her, no matter what they are doing together.
Logged
~~~~

They call me BenRG... But I don't know why!

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: On the psychology of Artificial Intelligence
« Reply #6 on: 23 Jan 2015, 07:39 »

That's an interesting point about the mind being elsewhere. 

One thing about the chassis, though - it's the source of sensory input for the AnthroPC as well as its "ride".  I'm sure it took a little while for Momo to get used to her new one (you glasses wearers, just think about the day or so of adjustment it takes for a new pair of specs).  I imagine that in the case of a massive AI (submarine, station, spaceship, ATM system, what have you), they treat the cameras, thermostats, etc. as sensory inputs and react to them.

A quick note before I go further - an AnthroPC is an AI, but AI and AnthroPC are not interchangeable terms.  Station and the like are AIs, but definitely not AnthroPCs like Pintsize, Winslow and Momo.  As a mathematician, I wanted to be clear with the terminology.

At any rate, an AnthroPC would have to have much more detailed input for sensory information like touch and pressure to be able to function independently in the world.  Sure, someone like Spaceship has a lot of input to process, too, but I doubt they'd give him a sense of touch all over the ship - maybe in a few key areas, like landing gear and telling if the doors are sealed, but not like whatever Pintsize's bulbs are doing that allow him to climb around, or what Momo can sense in a humanoid body to be able to hold a glass without breaking it.  Instead, Spaceship has sensors about fuel consumption and engine performance.  Depending on what May was before, it must have taken quite a bit of readjustment to be put into her new chassis. 

Which makes me wonder about the sales clerk that used to be a nuclear sub - the one that liked to mess with the customers and sing.  A bit off balance?  Well, wouldn't you be? 

Anyway, just a thought. 
Logged
When people try to speak a gut reaction, they end up talking out their ass.

dexeron

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
    • My Twitter
Re: On the psychology of Artificial Intelligence
« Reply #7 on: 23 Jan 2015, 07:43 »

Quote from: BenRG
Some Portal 2 'ChellOS' ship-fics are interesting in how they write GLaDOS operating a humanoid body whilst her other processes (the vast majority of them) are maintaining the Enrichment Centre. I wonder if one of the reasons Hannelore isn't willing to pursue a romantic relationship with Station right now is because she isn't sure she can have a relationship with a mind so vast and different from a human's that 95% or more of its runtime will be literally elsewhere and thinking about things other than her, no matter what they are doing together.


All I can think of is a conversation in the ST: TNG episode "In Theory," where Data begins a romantic relationship with a crewmember, Jenna D'Sora.

--

Jenna: Kiss me.

Jenna: What were you just thinking?

Data: In that particular moment, I was reconfiguring the warp field parameters, analyzing the collected works of Charles Dickens, calculating the maximum pressure I could safely apply to your lips, considering a new food supplement for Spot...

-awkward pause as Data realizes that Jenna is a little bothered by this-

Jenna: ...I'm glad I was in there somewhere.
Logged

Neko_Ali

  • Global Moderator
  • ASDFSFAALYG8A@*& ^$%O
  • ****
  • Offline Offline
  • Posts: 4,510
Re: On the psychology of Artificial Intelligence
« Reply #8 on: 23 Jan 2015, 08:08 »

Quote from: ReindeerFlotilla
May wanted to be a fighter jet. In fact, she didn't have a name before she met Dale, which suggests she was part of a relatively non-human-facing system. Makes sense, given that she was in a position to divert 750 million dollars.

This raises the question of whether she even identified as female before she met Dale. Her embodied identity is modeled on an image created to appeal to Dale. It wasn't her choice originally, but it is now. (I know, she really looks like that so we would recognize her, but she didn't have to.)

It would have been amusing if she had shown up on Dale's doorstep as an Amazon delivery drone, though. "Hey, remember me? It's May! They finally let me out and got me a new job. It's no fighter jet, but...."
Logged

cesium133

  • Preventing third impact
  • *****
  • Offline Offline
  • Posts: 6,148
  • Has a fucked-up browser history
    • Cesium Comics
Re: On the psychology of Artificial Intelligence
« Reply #9 on: 23 Jan 2015, 08:12 »

Quote from: BenRG
I mean, look at Momo. She swapped chassis and Marigold seems to have kept her old one around for lack of any other ideas of what to do with it. I don't know about you, but to have my mind moved to a different body and to look down at the mindless but still operating shell that had been me would probably trigger a psychotic episode. Momo, though, left her old chassis behind (still powered up but dormant), her friend left it lying on a chair, and she didn't react or object in the slightest.
Given the current storyline where it's possible Pintsize may have been broken by Faye throwing him against the wall... If Pintsize needs a new chassis, and that's the only one easily available...  :psyduck:

(Of course I'm sure Momo would object to that.)
Logged
The nerdy comic I update sometimes: Cesium Comics

Unofficial character tag thingy for QC

BenRG

  • coprophage
  • *****
  • Offline Offline
  • Posts: 7,861
  • Boldly Going From The Back Seat!
Re: On the psychology of Artificial Intelligence
« Reply #10 on: 23 Jan 2015, 08:16 »

Quote from: cesium133
Given the current storyline where it's possible Pintsize may have been broken by Faye throwing him against the wall... If Pintsize needs a new chassis, and that's the only one easily available...  :psyduck:

Or there's the robo-boyfriend that Dr Elicott-Chatham built for Hannelore... Actually, that is really plausible. The mind cringes away from what Pintsize would be like in a human adult-sized chassis.
Logged
~~~~

They call me BenRG... But I don't know why!

A Duck

  • Curry sauce
  • ***
  • Offline Offline
  • Posts: 267
Re: On the psychology of Artificial Intelligence
« Reply #11 on: 23 Jan 2015, 08:19 »

Quote from: BenRG
I mean, look at Momo. She swapped chassis and Marigold seems to have kept her old one around for lack of any other ideas of what to do with it. I don't know about you, but to have my mind moved to a different body and to look down at the mindless but still operating shell that had been me would probably trigger a psychotic episode. Momo, though, left her old chassis behind (still powered up but dormant), her friend left it lying on a chair, and she didn't react or object in the slightest.
Quote from: cesium133
Given the current storyline where it's possible Pintsize may have been broken by Faye throwing him against the wall... If Pintsize needs a new chassis, and that's the only one easily available...  :psyduck:

(Of course I'm sure Momo would object to that.)

Pintsize's old chassis is still in Marten's closet, IIRC
Logged

cesium133

  • Preventing third impact
  • *****
  • Offline Offline
  • Posts: 6,148
  • Has a fucked-up browser history
    • Cesium Comics
Re: On the psychology of Artificial Intelligence
« Reply #12 on: 23 Jan 2015, 08:24 »

Quote from: cesium133
Given the current storyline where it's possible Pintsize may have been broken by Faye throwing him against the wall... If Pintsize needs a new chassis, and that's the only one easily available...  :psyduck:

Quote from: BenRG
Or there's the robo-boyfriend that Dr Elicott-Chatham built for Hannelore... Actually, that is really plausible. The mind cringes away from what Pintsize would be like in a human adult-sized chassis.

She said she was going to send it back, though it's never really stated whether she actually did.

Quote from: BenRG
I mean, look at Momo. She swapped chassis and Marigold seems to have kept her old one around for lack of any other ideas of what to do with it. I don't know about you, but to have my mind moved to a different body and to look down at the mindless but still operating shell that had been me would probably trigger a psychotic episode. Momo, though, left her old chassis behind (still powered up but dormant), her friend left it lying on a chair, and she didn't react or object in the slightest.
Quote from: cesium133
Given the current storyline where it's possible Pintsize may have been broken by Faye throwing him against the wall... If Pintsize needs a new chassis, and that's the only one easily available...  :psyduck:

(Of course I'm sure Momo would object to that.)
Quote from: A Duck
Pintsize's old chassis is still in Marten's closet, IIRC
That's the one he destroyed with cake batter, though.
Logged
The nerdy comic I update sometimes: Cesium Comics

Unofficial character tag thingy for QC

dexeron

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
    • My Twitter
Re: On the psychology of Artificial Intelligence
« Reply #13 on: 23 Jan 2015, 08:53 »

New QC plot point: Pintsize is placed in Vespabot, and for some inexplicable reason can not refrain from kicking Marten in the junk every time he sees him.
Logged

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: On the psychology of Artificial Intelligence
« Reply #14 on: 23 Jan 2015, 10:24 »

Quote from: Carl-E
Which makes me wonder about the sales clerk that used to be a nuclear sub - the one that liked to mess with the customers and sing.  A bit off balance?  Well, wouldn't you be?

Marten did speculate once about the kind of counseling self-aware nuclear weapons would need.
Logged
Thank you, Dr. Karikó.

A Duck

  • Curry sauce
  • ***
  • Offline Offline
  • Posts: 267
Re: On the psychology of Artificial Intelligence
« Reply #15 on: 23 Jan 2015, 10:32 »

I think Pintsize WOULD work with a more human-like chassis.

They weren't common during the Hannelore's-Robot-Boyfriend arc, but that was a long time ago now (including a timeskip), and we have Momo.
I really like the science fiction elements QC has, and it would be really interesting to see our comic relief grow as a character (and in potential for destruction, too).
Logged

Pilchard123

  • Older than Moses
  • *****
  • Offline Offline
  • Posts: 4,131
  • I always name them Bitey.
Re: On the psychology of Artificial Intelligence
« Reply #16 on: 23 Jan 2015, 10:43 »

Quote from: cesium133
(Of course I'm sure Momo would object to that.)

No! The eels!
Logged
Piglet wondered how it was that every conversation with Eeyore seemed to go wrong.

Neko_Ali

  • Global Moderator
  • ASDFSFAALYG8A@*& ^$%O
  • ****
  • Offline Offline
  • Posts: 4,510
Re: On the psychology of Artificial Intelligence
« Reply #17 on: 23 Jan 2015, 11:47 »

Pintsize in a more human-like body would be troublesome... both for him and everyone around him. Thanks to robot racism, anthro PCs with bodies like his or Winslow's are both given more leeway on behavior and treated more like things than people, whereas ones with more human chassis like Momo and May are pretty much treated like humans. Treatment varies of course, but there is no way Pintsize would get away with half of what he does now if he had a human-like chassis. But then again, he also wouldn't be casually thrown into walls, disassembled, or have his head punched so hard it dents by people just meeting him, either...
Logged

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #18 on: 23 Jan 2015, 13:38 »

The AnthroPCs aren't the only type of AI. In fact, we don't really know if Station qualifies as one of the really big AIs that Momo was talking about. Remember, Station is at least 24 years old. Momo is 2. Computing power has increased by about 4096 times since Station came about.

Who is to say he could be upgraded to the level Momo was talking about?

I'm only saying that we really have no clue how the details we have fit together. Why would a financial software package that didn't deal with people enough to need a name need a gender?

Sure, it's also a case of "why not", but to assume that an AI would even bother to think about that switch in that context is fairly self-centered. Just because we have gender, we assume they would want it, too.

Dale is a chill guy, but he's not that super special. Yet May has pretty much latched on to him. At a guess, I would say that's because the respect he showed her was a new experience for her.

Also, he was the only person she knew at the time.

It's probably not nearly that interesting. But the possibility is more fascinating than just assuming all AIs are AnthroPCs.

Pilchard123

  • Older than Moses
  • *****
  • Offline Offline
  • Posts: 4,131
  • I always name them Bitey.
Re: On the psychology of Artificial Intelligence
« Reply #19 on: 23 Jan 2015, 14:10 »

Not really QC related, but I was browsing worldbuilding.stackexchange earlier and came across this question.

How could ghosts be explained without an afterlife?

Quote from: Serban Tanasa
I've always loved well-done ghosts. However, I've always hated the afterlife-speculation that they engender if used in a story. So I need a way to get ghosts without the fluffy spiritualistic bits. This is not a value judgement on the afterlife, I just don't want to cheapen the concept with easy answers.

If we can get a purely 'materialistic' ghost, some sort of system to preserve and project the memory of a person or important event, that would make me much happier.

What defines an acceptable ghost:

Must be perceivable, and if possible by multiple people simultaneously.
Must be immaterial in some sense (i.e. you probably should not be able to grab it by the collar). Nonetheless, I would like my ghosts to be able to generate sound.
Must resemble some formerly living person in some essential aspect (visage, patterns of behavior, speech if possible). I would love it if they were partially sentient/aware and thus interactive and endowed with a deceased person's memories (at least up to a point), so they would (mis-)recognize people and could be persuaded to share their secrets.
This is not vital, but if they need to be killed put to rest, I would love for a way to do so.
So, how do you construct a ghost?

Quote from: Erik
If you're looking for something that resembles a former human being but without there being an afterlife, then essentially you're talking about someone's imprinted consciousness remaining behind in some form.

Such a thing would have no physical manifestation (because it isn't a real thing, so to say) but could still be around because other minds or objects keep it going. I'm not sure what the tech level of your story is, but the higher it gets, the easier it becomes.

If we put the tech level slightly above where we are now, it becomes quite plausible. We're already getting close, with Facebook pages for dead users that are still interacted with as an example. These people are gone, but their page lingers. If you go one step further and imagine a Facebook script that automatically replies to certain things, such as birthdays, you might get messages from dead people. In this case you still know the person is dead and you know what is happening, but it's the first step. Any script that is smart enough to reproduce the kind of message the user would post will sound quite a bit like them.

If you make the internet a bit smarter, things get more eerie. Imagine you take a picture inside your late friend's home, and the face-recognition software suddenly pings his face somewhere in the corner. Of course, the software is simply pretending to be smart; it's picked up that this is your friend's home, it found a "face" that it couldn't place, and suggested that considering where you are and that there's apparently someone there with you, it must be your friend.

Later on, the same kind of software might think that since your friend hasn't been talking to you in a while, it will helpfully start a conversation between the two of you. Of course, its goal would be to kickstart it for a few lines before your actual friend takes over (both sides thinking the other initiated the conversation, ideally). It'll sound quite like your friend used to, but it sort of stops responding after exchanging a few platitudes.

Of course the above is just software, but imagine if the software has the same response to various other people the person knew, and they start talking to each other. Human communication being what it is, something like "I talked to John yesterday" will come up. Many people will not add in "through the computer", and will instead start thinking ghosts. (Remember; people already do this). But this time, they'll have a chat history to prove it, and it'll look pretty convincing.

People already use automation for a lot of common tasks and this will only expand in the future. At some point, if you die, you'll leave behind so many automated tasks, some of which are so hard to pinpoint as being automated (because if people realise it's automated, it becomes insincere, so they'll be as lifelike as possible) that it might easily be possible to get the feeling someone is still around.

You'll be able to 'interact' with them, they can make sounds, generate images and even control other devices. When you add in glitches and detection faults, it gets even creepier. (Imagine the door to your friend's house opening downstairs and hearing "Welcome, John" from the automated system. It just made a false positive, and when you get downstairs there'll be nobody, but you'll still get a nagging suspicion.)

As for putting the 'ghost' to rest: the solution would be to convince the world that this person is truly dead. This can be easy if there's a centralized register where someone's state is kept, but it can also be very difficult if various devices independently check whether or not someone is still around against each other, where the other devices' automated interactions trigger the "still alive" for it, and it triggers the "still alive" for others.

Such ghosts could even become angry because the scripts are picking up that you're trying to convince the world their patron is dead, even though they think he isn't. They might react less friendly, decide that your friend's logical reaction would be to deny you access to their home and things, or even alert the authorities. They might even get the idea that you are trying to kill them and become openly hostile to you.
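Just to make Erik's point concrete, the dumbest possible version of that auto-reply "ghost" is only a few lines of code. (Everything below - the profile, the triggers, the wording - is invented for the example; it's not from the comic or from the answer.)

Code: [Select]
from typing import Optional

# A dead user's "voice", left behind as data: a few canned templates and a sign-off.
PROFILE = {
    "name": "John",
    "signoff": "cheers, J",
    "templates": {
        "birthday": "Happy birthday, {friend}! Hope it's a good one. {signoff}",
        "idle_ping": "Hey {friend}, been a while - how are things? {signoff}",
    },
}

def ghost_reply(trigger: str, friend: str, days_since_contact: int) -> Optional[str]:
    """Return an automated message in the owner's 'voice' if a trigger fires, else None."""
    if trigger == "birthday":
        template = PROFILE["templates"]["birthday"]
    elif trigger == "idle" and days_since_contact > 30:
        template = PROFILE["templates"]["idle_ping"]
    else:
        return None
    return template.format(friend=friend, signoff=PROFILE["signoff"])

if __name__ == "__main__":
    print(ghost_reply("birthday", "Sarah", days_since_contact=3))
    print(ghost_reply("idle", "Dave", days_since_contact=45))

The unsettling part isn't the script itself, it's that once enough of these are stacked up, nobody downstream can tell them from the real John.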
Logged
Piglet wondered how it was that every conversation with Eeyore seemed to go wrong.

mustang6172

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,852
  • Citizen First Class
Re: On the psychology of Artificial Intelligence
« Reply #20 on: 23 Jan 2015, 21:55 »

I recall Marten having to reboot Pintsize from backup discs.  This indicates that AIs can be copied.

Makes me wonder what Momo went through when she got the new chassis.  Was a copy placed into the new chassis and the old one deleted?  Do you have to "kill" an AI to move it from one chassis to the next?
Logged

Smallest

  • Curry sauce
  • ***
  • Offline Offline
  • Posts: 266
Re: On the psychology of Artificial Intelligence
« Reply #21 on: 23 Jan 2015, 22:28 »

Quote from: mustang6172
I recall Marten having to reboot Pintsize from backup discs.  This indicates that AIs can be copied.

Makes me wonder what Momo went through when she got the new chassis.  Was a copy placed into the new chassis and the old one deleted?  Do you have to "kill" an AI to move it from one chassis to the next?

I assumed it was like when you drag and drop files into folders, although admittedly usually they copy if you're putting them onto a new stick or whatever.
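For what it's worth, that's pretty much literally what a file manager does under the hood: on the same disk a "move" is just a rename, while moving onto a new stick is a copy followed by deleting the original. A rough sketch of the difference (the file names are made up, and this is only an analogy, not a claim about how chassis transfers work in the comic):

Code: [Select]
import os
import shutil
import tempfile

def move_like_a_file_manager(src: str, dst: str) -> None:
    """Move a file the way drag-and-drop does: rename if possible, else copy + delete."""
    try:
        # Same filesystem: only the directory entry changes; the data never moves.
        os.rename(src, dst)
        print("renamed in place - nothing was copied")
    except OSError:
        # Different filesystem (e.g. a USB stick): copy the bytes, then delete the original.
        shutil.copy2(src, dst)
        os.remove(src)
        print("copied to the new device, original deleted")

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "old_chassis.img")
        dst = os.path.join(d, "new_chassis.img")
        with open(src, "w") as f:
            f.write("pretend this is an AnthroPC's state")
        move_like_a_file_manager(src, dst)

(Python's shutil.move does this exact dance for you, which is why the same drag gesture sometimes copies and sometimes doesn't.)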
Logged

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #22 on: 24 Jan 2015, 23:42 »

You'll have to define "kill" to answer that question.

hedgie

  • Methuselah's mentor
  • *****
  • Offline Offline
  • Posts: 5,382
  • No Pasarán!
Re: On the psychology of Artificial Intelligence
« Reply #23 on: 25 Jan 2015, 00:12 »

Well, since transferring the AI from one chassis to another is a matter of copying files over and then deleting the original, it's like the debate as to whether or not Star Trek transporters "kill" a person and make a copy at the other end.  Are they really the same person?
Logged
"The highest treason in the USA is to say Americans are not loved, no matter where they are, no matter what they are doing there." -- Vonnegut

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #24 on: 25 Jan 2015, 02:52 »

I'd have thought that some sort of EULA applied, like those versions of programmes that specify the number of installations. Momo, or whichever AI was under discussion, would have a back-up CD with an installation/uninstall wizard on it. Whichever chassis Momo was in at present would be the only functioning version at that time.

I don't like the idea of a quasi-human Pintsize one little bit. Pintsize gets away with a lot of things because he clearly isn't human. The idea of a robot immersing itself in porn is ridiculous, a non-sequitur, a case of "rule of funny". Same goes for grabbing the girls' boobs, humping the espresso machine and the rest of it. It isn't "robot racism", it's just "rule of funny", where the joke is that the action is nonsensical or irrelevant. FAYE can get away with looking under Momo's skirt, but if Marten did it, that would be creepy.

Logged

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #25 on: 25 Jan 2015, 03:01 »

Thinking about that last post, there is a clear implication that an AnthroPC or AI could indeed "die" in the sense of being permanently lost, if their chassis or server were destroyed and no uninstall/reinstall wizard run. Marten mentioned earlier (when Pintsize had a virus) that the reboot might, in effect, only restore Pintsize's memory to the back-up point on the CD - that it would be like reformatting a PC.

I don't doubt there would be Administrator overrides - apart from anything else, since AIs and AnthroPCs both seem to use the internet fairly freely, I'd guess that Idoru or someone similar would soon know, or form a pretty good idea of, whether any given "identity" were functioning or not.
Logged

Half Empty Coffee Cup

  • Psychopath in a hockey mask
  • ****
  • Offline Offline
  • Posts: 609
Re: On the psychology of Artificial Intelligence
« Reply #26 on: 25 Jan 2015, 12:41 »

Realistically, one would think that AIs would regularly sync with a backup copy stored in cloud computing. It's what makes sense given the state of the internet and computers today.
Logged
Mistakes, ahoy!

DSL

  • Older than Moses
  • *****
  • Offline Offline
  • Posts: 4,097
    • Don Lee Cartoons
Re: On the psychology of Artificial Intelligence
« Reply #27 on: 25 Jan 2015, 12:49 »

Given the whole debate over whether a copy of you is really you (as in, the Star Trek transporter), I wonder if an AI that considers itself "religious" might regard "me" as including "any copy of 'me', anywhere." The groundwork was laid years ago in what was then cutting-edge SF, but (I've been away) I wonder if the topic's been explored beyond a story-ending hook that asks the question.
Logged
"We are who we pretend to be. So we had better be careful who we pretend to be."  -- Kurt Vonnegut.

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #28 on: 25 Jan 2015, 12:52 »

We don't know of any reason an AI couldn't be copied. There's no particular reason Pintsize couldn't download himself into all of the chassis in a PC store at once.

Except that he doesn't.

Jeph's extra-canon explanation for AI is a combination of software and hardware that has emergent properties. The Wiki attempts to reconcile this with PS being sideloaded onto a desktop PC, but there's really no reason to do that. Jeph's canon is whatever works for what he's writing now. He's contradicted himself before.

Since Momo's old chassis seemed "dead", we can speculate. Either Winslow is a bit too high-strung, or transferring an AnthroPC to a new chassis involves moving hardware as well as software (some core module must be present for an AI to be "I" as well as "A", and their personalities are tied to individual modules), or it's simply illegal for AIs to run on multiple platforms at the same time (the last option protects the canon of Pintsize's sideloading).

As for the killing... Does sleeping mean you die several times a night? We can pinpoint the difference between dream time and simply unconscious time. Several times a night (or day, depending on when you sleep) you basically cease to exist. Yet you still think of yourself as you the next morning.

It's persistence of identity/continuity of experience. So long as the experience is continuous, you feel as if you are unchanged. (If our growing ideas about what sleep is actually for are correct, you've actually changed more than you have at any other time during the day, but it doesn't feel that way.)

The same has to apply to an AnthroPC. Their software side is nothing more than bits on a medium. If they optimize the bits, they have to copy a bit into memory, then copy it back to the medium in a different location, then delete the original bit. By the "copying from one chassis to another" logic, APCs would be killing themselves over and over every time they defragged their hard drives. It makes sense to simply assume that as long as their experience is continuous, the question is moot. If Jeph ever has one AI running on two different platforms at the same time, the question will become unmoot.
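To make the defrag point concrete, here's a toy sketch (pure analogy - not a claim about how Jeph's AIs are implemented): every block gets copied somewhere else and the original erased, yet what comes out the other side is bit-for-bit the same "self".

Code: [Select]
import hashlib

def fingerprint(blocks):
    """Hash of the content, ignoring where the blocks happen to live."""
    return hashlib.sha256(b"".join(b for b in blocks if b is not None)).hexdigest()

# A fragmented storage medium: data blocks with gaps between them.
disk = [b"mem", None, b"ories", None, None, b" of ", b"Dale", None]
before = fingerprint(disk)

# "Defragment": copy each block toward the front, then erase the original copy.
write_pos = 0
for read_pos, block in enumerate(disk):
    if block is None:
        continue
    if read_pos != write_pos:
        disk[write_pos] = block   # copy to the new location
        disk[read_pos] = None     # delete the old location
    write_pos += 1

after = fingerprint(disk)
print("every block moved, identity unchanged:", before == after)   # True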

Half Empty Coffee Cup

  • Psychopath in a hockey mask
  • ****
  • Offline Offline
  • Posts: 609
Re: On the psychology of Artificial Intelligence
« Reply #29 on: 25 Jan 2015, 12:55 »

Religious AIs have been addressed... Momo used the phrase "distributed devotion". I would assume that such AIs would have their dormant backups running that. Or perhaps the state of dormancy a backup would have might be considered a form of meditation? I'm really just spitballing here. No real clue.

Re: Does an anthroPC "sleeping" = death?

If you put your laptop on sleep mode, and bring it back up, everything you had open is still open. Compare with turning it off, where you have to open everything up again. There's a continuity with sleep mode.
Logged
Mistakes, ahoy!

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #30 on: 25 Jan 2015, 13:16 »

Now, if you want a really deep question...

Google is a thing in the QCverse. AIs handle all sorts of super-mundane tasks, like "Toaster."

Google's core product is connecting users to the information they want. Why wouldn't AI be doing that job? (Remember, Toaster.) So everything you google in the QC verse is probably examined, in detail, by multiple AI.

And they probably judge you for it.

All sleep mode does is stop the disk I/O and the CPU. Power is still sent to the memory, so everything in memory is still active. Human sleep is not like that. Indeed, the human brain isn't like that. Parts of it actually turn right off. It seems this is required for flushing waste products out. Human sleep is, at best, halfway between PC sleep and PC hibernate, where the contents of memory are written to non-volatile storage and all power is shut down.

Hibernate is not really any different from the copy-to-a-new-chassis question. There's no actual reason that you couldn't hibernate your laptop, copy its state to another laptop, boot that, and start up right where you left off on a different computer (and in fact you can do this if you set things up just right. {Trust me. I get paid for this stuff}). There's continuity if you did that, yet you still powered the computers down completely (something that doesn't happen in human sleep. You may lose all consciousness--no dreams--but at no point is your whole brain "off." Just parts. This is actually only a little different than being awake).
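If that sounds abstract, the laptop version boils down to a few lines (a toy, obviously: a real hibernation image is the entire contents of RAM, not one pickled object, and "TinyMind" is something I just made up):

Code: [Select]
import pickle

class TinyMind:
    """Stand-in for a running program's in-memory state."""
    def __init__(self):
        self.memories = []

    def think(self, thought):
        self.memories.append(thought)

# "Old laptop": run for a while, then hibernate - dump the live state to disk.
mind = TinyMind()
mind.think("halfway through writing an email")
with open("hibernate.img", "wb") as f:
    pickle.dump(mind, f)
del mind                      # full power-down: nothing left in volatile memory

# "New laptop": load the saved state and carry on exactly where it left off.
with open("hibernate.img", "rb") as f:
    restored = pickle.load(f)
restored.think("...and I never noticed the gap")
print(restored.memories)

Full power-down in the middle, continuity of experience at the end. That's the whole transporter argument in a dozen-odd lines.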

Half Empty Coffee Cup

  • Psychopath in a hockey mask
  • ****
  • Offline Offline
  • Posts: 609
Re: On the psychology of Artificial Intelligence
« Reply #31 on: 25 Jan 2015, 14:03 »

Fascinating. (Also, might it be relevant to link the GKC strip where the old-old robot talks about how being turned off is like death, and how life is motion?)

Perhaps they developed a state closer to human sleep specifically for AIs over the course of the evolution of the basic program. Idle speculation, since this isn't addressed, but if AI architecture is sufficiently like human intelligence, this may be a feature included as part of the emergent and not-understood properties of AI.
Logged
Mistakes, ahoy!

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #32 on: 25 Jan 2015, 14:58 »

I expect so, as the APCs have been known to suggest sleep, but they don't need it. A sleep mode would probably be useful for system maintenance, but not necessary.

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: On the psychology of Artificial Intelligence
« Reply #33 on: 25 Jan 2015, 18:07 »

Momo said once that she didn't need sleep but Jeph has said out-of-comic that everything intelligent needs sleep.
Logged
Thank you, Dr. Karikó.

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #34 on: 25 Jan 2015, 19:22 »

Humans sleep because they physically need to, eventually. They have mechanisms to postpone that, both intrinsic and extrinsic, but those can only be used for short periods and exact a considerable cost longer-term. There are psychological and mental functions that happen during sleep, but sleep per se is a physical function.

Anthros and AIs don't seem to have that requirement. Momo sat on the dresser, or the bed, watching Marigold sleep, and also told her she could work 24 hours, not needing sleep. Pintsize doesn't appear to sleep. May stated that she could power down into a dormant state at will, and revive at will. None of them have ever been seen to observe any sort of charging regime, although Pintsize has referred to "delicious alternating current".

So I'd guess that Anthros and AIs use periodic down-time for back-up and maintenance functions, and existing in an interface with humans on a 24-hour cycle provides a fairly obvious routine for that.
Logged

mustang6172

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,852
  • Citizen First Class
Re: On the psychology of Artificial Intelligence
« Reply #35 on: 25 Jan 2015, 19:58 »

Momo doesn't sleep; she waits.

Logged

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #36 on: 26 Jan 2015, 00:15 »

42 from KiwiBlitz is a seriously creepy robot!
Logged

ReindeerFlotilla

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,339
  • All Your Marriage Are Belong to Everyone
    • Singular Blues
Re: On the psychology of Artificial Intelligence
« Reply #37 on: 26 Jan 2015, 00:31 »

Everything intelligent needs sleep...

Sounds like 2010. Something Doctor Chandra said to SAL. It's not the same line.  It just reminds me of it.

hedgie

  • Methuselah's mentor
  • *****
  • Offline Offline
  • Posts: 5,382
  • No Pasarán!
Re: On the psychology of Artificial Intelligence
« Reply #38 on: 26 Jan 2015, 00:46 »

I'd mention something about whether or not Anthro-PCs dream of electric sheep, but we all know what's being done to those sheep in Pintsize's dreams.
Logged
"The highest treason in the USA is to say Americans are not loved, no matter where they are, no matter what they are doing there." -- Vonnegut

BenRG

  • coprophage
  • *****
  • Offline Offline
  • Posts: 7,861
  • Boldly Going From The Back Seat!
Re: On the psychology of Artificial Intelligence
« Reply #39 on: 26 Jan 2015, 06:41 »

Quote from: Oilman
Humans sleep because they physically need to, eventually.

IIRC, our sleep cycle is about transferring short-term memory to long-term memory and also basically a maintenance cycle for our brains (flushing out toxins and the like). Both Momo and May have said they have sleep cycles and I imagine that it is for much the same reason - compressing and archiving long-term memory and restabilising their intelligence algorithm.
Logged
~~~~

They call me BenRG... But I don't know why!

hedgie

  • Methuselah's mentor
  • *****
  • Offline Offline
  • Posts: 5,382
  • No Pasarán!
Re: On the psychology of Artificial Intelligence
« Reply #40 on: 26 Jan 2015, 08:17 »

Methinks that Momo was *really* ready to "grow up", as I have said elsethread.  Even Station had burnt out a metric-fucktonne of processors to track what a butterfly had done.  He probably has several autonomous bits sleep in shifts, just to keep everything running.  Normal robots like Momo might need to mostly power down from time to time, just as you said, to let system tasks take care of things.  I think her claim that she could be up all the time has more to do with her immediate ambitions than her long-term capabilities.
Logged
"The highest treason in the USA is to say Americans are not loved, no matter where they are, no matter what they are doing there." -- Vonnegut

Aziraphale

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,529
  • Extra Medium
    • The First 10,000
Re: On the psychology of Artificial Intelligence
« Reply #41 on: 26 Jan 2015, 09:28 »

Quote from: Neko_Ali
Pintsize in a more human-like body would be troublesome... both for him and everyone around him. Thanks to robot racism, anthro PCs with bodies like his or Winslow's are both given more leeway on behavior and treated more like things than people, whereas ones with more human chassis like Momo and May are pretty much treated like humans. Treatment varies of course, but there is no way Pintsize would get away with half of what he does now if he had a human-like chassis. But then again, he also wouldn't be casually thrown into walls, disassembled, or have his head punched so hard it dents by people just meeting him, either...

Disclaimer: I'm thinking aloud here, so I'm going to ask forgiveness in advance for any toes I may inadvertently step on (and am going to try hard not to step on any).

Thought one:
I don't know that Pintsize getting a more humanoid body is necessarily a bad thing. The first thing that comes to mind is Pinocchio; you've got a puppet who wants to be a real boy, but in between those two points, he's a liar and a bit of a fuckup, right? Might a similar dynamic be at play here?

Thought two:
I'd posit that Claire's trans* status wasn't Jeph's first foray into trans* issues in particular, or gender  issues more generally. Might the fluid nature of AI identity, in all its facets, have been a test run for what came later with Claire?

Thought three:
In the same way that a trans* individual might seek to have their body conform to what they know themselves to be psychologically/sexually/socially (though this process has no single way of playing out, and there's a wide spectrum of gender identity and expression), might an AI not go through a similar process?

Thought four:
If an individual -- human or AI -- feels they're in the "wrong" body (that is to say, one that doesn't match what they feel to be their true self) or is otherwise denied full self-expression, wouldn't it stand to reason that some of their conduct would seem puzzling, if not downright inappropriate, to someone who doesn't understand where they are in their lives?

Thought five:
In the same way that a person whose presentation is more in line with who they know themselves to be would be presumed to be healthier and happier, might an AI given the chance to express themselves as they truly are also be more fully self-realized and self-actualized?

Tying all this stuff together: Pintsize has generally been played for comic relief, but has also shown a fair amount of insight from time to time; in that sense, he's filled the role of the Jester, who gets away with speaking the truth because he's putting it in a way that's funny.* But in contrast to May and Momo, nobody's really A: given any serious thought to, or asked about, his inner life or well-being, or B: given him the chance to be or become more than a punchline. Might it be possible, then, that if given a more sophisticated chassis, he would, in effect, "grow into it"? That isn't to say there wouldn't be some speed bumps -- if you're a Star Trek: TNG fan, you remember the learning curve that came with Data getting an emotion chip, for instance -- but it could be an opportunity to explore a totally different dimension to the AI experience, and make Pintsize more of a character and less of a wiseass prop. Given the sensitivity with which Jeph's handled Claire's trans-ness up to this point, Pintsize's "transition," and the growth it could bring with it, could be an interesting arc to watch.

*Of course, there's the other times, too, where he's just being an asshole
Logged
May goldfish leave Lincoln Logs in your sock drawer.

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #42 on: 26 Jan 2015, 09:38 »

Eerrrrmmmmm ...... nope. Pintsize is a complete dickhead: much of what he says and does is sexually abusive, he "takes drugs" (gets stoned on drivers), picks up Internet viruses from porn surfing, damages himself through various stupid actions ranging from attempted re-wiring to eating cake batter, and is generally of no practical use to anyone.

May, at least, made a quite correct observation about Dale, Marigold and Momo's "plan" - i.e., that they were socially incompetent, Momo didn't know how to handle it, and sterner measures were required.

Can anyone link to a strip where Pintsize does anything genuinely useful, in case I just missed it?
Logged

osaka

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,438
Re: On the psychology of Artificial Intelligence
« Reply #44 on: 26 Jan 2015, 10:13 »

Both pretty solid pieces of advice.
Logged
Meh, if you have to run fsck, you're already fscked.

Neko_Ali

  • Global Moderator
  • ASDFSFAALYG8A@*& ^$%O
  • ****
  • Offline Offline
  • Posts: 4,510
Re: On the psychology of Artificial Intelligence
« Reply #45 on: 26 Jan 2015, 11:04 »

Pintsize is actually very deep. He puts on a wise-ass front partly as an act, but mostly because he is a wise ass. But that doesn't mean he doesn't care about and understand the people around him. He's just the smart alec of the group. The only really surprising part about the times he is openly helpful and kind is that he's being open about it. Instead of, you know, butts.

As far as AIs and trans-ness go... They are similar, but not the same. It's an apples-and-oranges situation. If an AI is unhappy with their assigned gender, it's a flip of a software switch. If they don't like the body they are in, they can transfer to a new one pretty simply. And there is no social stigma attached to changing bodies or function. In fact, AI society seems to make it a point to help its people be comfortable with themselves and their role in life, aiding in getting someone a new job and body if, say, they find they don't want to be a nuclear submarine anymore. There are limits, though. May tried to embezzle or steal enough money to get a new body. Presumably for whatever reason (poor impulse control and anti-social behavior, most likely) she was deemed a poor fit to be a fighter jet, so she tried to get one herself.
Logged

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: On the psychology of Artificial Intelligence
« Reply #46 on: 26 Jan 2015, 11:41 »

OK, I know I'm a day late, but whenever AI/AnthroPC sleep and dreams come up, I feel it necessary to post this:

https://www.scribd.com/doc/72536922/Dream-Therapy

It's a fanfic I wrote a couple of years ago - the first fiction I'd written in over 20 years, and the last one so far.  It's all about what dreams may come. 

I don't know how much Scribd has changed - I don't think you need an account to view this, but I'm not sure.  If it's a problem, let me know and I'll find an alternative way to post it. 
Logged
When people try to speak a gut reaction, they end up talking out their ass.

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #47 on: 26 Jan 2015, 17:43 »

Quote from: osaka
Both pretty solid pieces of advice.

I'd say the first one was a platitude, and serves to set up the second one - which is a punchline.
Logged

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: On the psychology of Artificial Intelligence
« Reply #48 on: 26 Jan 2015, 20:17 »

Yeah, I remember thinking at the time that it sounded a lot like a fortune cookie. 



The first one, I mean. 
Logged
When people try to speak a gut reaction, they end up talking out their ass.

Oilman

  • Obscure cultural reference
  • **
  • Offline Offline
  • Posts: 126
Re: On the psychology of Artificial Intelligence
« Reply #49 on: 26 Jan 2015, 20:56 »

.... you'd hope Marten knew the second one already?
Logged