THESE FORUMS NOW CLOSED (read only)

Comic Discussion => QUESTIONABLE CONTENT => Topic started by: snubnose on 01 Sep 2011, 06:58

Title: Robots and love
Post by: snubnose on 01 Sep 2011, 06:58
Robots can't love.

Computers only have the ability to perform mathematical operations - in fact the commands of a computer can be reduced to three categories of operations:

- move data around
- perform arithmetic
- check conditions (comparison of values) to decide what to do next (jump to another sequence of instructions in the program)

In modern RISC (reduced instruction set computer) architectures, every machine command falls into one of these categories. In older architectures, a single command can combine several of them. For example, the Intel 80x86 architecture found in 99.9% of all home computers has the command REPNE SCASB, which scans memory for a value and combines 1. a move (of each byte into the processor core), 2. a test (comparing each byte against a register value), 3. an arithmetic operation (counting a counter down), and 4. a second comparison (whether the counter has reached zero).
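Those three categories of operations can be sketched in Python - this toy function (the name and details are made up for illustration) does in several explicit steps roughly what REPNE SCASB does in a single instruction:

```python
# A toy "machine": every step is a move, an arithmetic operation,
# or a condition check. Together they scan memory for a value,
# roughly what REPNE SCASB does in one x86 instruction.
def scan_for(memory, target):
    i = 0                      # the counter "register"
    while i < len(memory):     # condition check: decide what to do next
        value = memory[i]      # move: load a byte into a "register"
        if value == target:    # comparison of values
            return i
        i += 1                 # arithmetic: advance the counter
    return -1                  # not found
```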

Obviously, there are no feelings whatsoever involved.

Thats why Assimovs three laws of robotics (http://en.wikipedia.org/wiki/Three_Laws_of_Robotics) do not include any reference to feelings:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The comic of course has violated these preconditions of a computer from day one, but the most recent comic #2004 was the most drastic violation.

Personally this kind of confuses me. The comic makes robots look like sentient beings. Modern computers aren't sentient beings at all. In fact I personally strongly doubt that they will ever be - simply because they have no feelings.

Yeah it's true you can write a simulation of feelings. But if a human being loves, it's not the result of an arithmetic operation. If a human being is hurt physically, the pain they feel is real. A computer wouldn't be stunned or disabled by pain, either.

Title: Re: Robots and love
Post by: stoutfiles on 01 Sep 2011, 07:50
While I've been the strongest opponent of giving the robots equal human rights, robots CAN love.  Well, equal to how we define it.  Our brain processes inputs much like an advanced computer would.  What is love anyway? Or sadness?  If you break them down they're just simple inputs that can be emulated.

Considering the robots seen so far are sentient with free will, the laws of robotics don't apply.  They should, of course, but for the sake of this comic still working we're supposed to ignore a lot of logic.  Just go with it; there isn't any choice.
Title: Re: Robots and love
Post by: DSL on 01 Sep 2011, 09:28
Would it be futile for me to point out (as has been pointed out many, many times before) that the QCverse isn't our own? It's kind of like our own, but has sentient mobile AIs and a more successful approach to space travel. Also, dimensional-tunneling coffee makers.

I find a couple of the police-genius shows entertaining, but I don't rely on them to show me realistic police procedure, any more than I watch Star Trek or BSG to gain an understanding of how NASA works.

Ah hell. I'm pissing into the wind, right?
Title: Re: Robots and love
Post by: questionablecontentfan on 01 Sep 2011, 09:38
If there were robots talking and acting like humans, I would think they can love.

There's actually a book by Marge Piercy called He, She, and It where a woman has a really beautiful relationship with a robot. He looks totally human, but he is still a robot, and he loves her. So...I think robots can love, fall in love, etc.

But that's probably just my opinion.
Title: Re: Robots and love
Post by: HiFranc on 01 Sep 2011, 09:47
Snubnose, I think the QC robots aren't built the same way that current computers are.  You're right: if you use a simple transistor logic arrangement for the brain of a robot, you'll never get a computer that could think, feel, etc.

However, if you build the brain along other lines (e.g. neural networks) you may, if you have a good designer, work out how to make it do those things.
Title: Re: Robots and love
Post by: Carl-E on 01 Sep 2011, 10:02
OK, I posted in the WCDT before I saw this thread... it was a response to someone who thought Momo was being manipulative.  

Quote
Re:  Robotic love.  

The argument that love expressed after an extravagant gift is manipulative doesn't hold, especially with humans.  It may or may not be manipulative, but often such a gift is an expression of love from the gifter, and so will elicit such a reaction from the giftee ("I don't have anything like this to give you in return to express my love, so I'll just have to tell you how I feel").  The fact is that, spoken or not, Momo's loved Mari since we first met them.  She cares for and about her in the most fundamental of ways.  And this extravagant gift has shown Momo that Marigold considers her as so much more than a housekeeping, advice-giving robot - that Mari cares for Momo as well, something that may not have been evident in the past.  

OK, all that being said - that's the human side of things.  The assumption in this comic is that somehow, human emotions are in these AIs, for better or worse.  Momo "bonded" to Marigold, and now it's clear Mari has bonded back.  

What happens when a lover enters the picture for Marigold?  Especially a human one?  Jealous Momo?  We've seen some of that from Pintsize.  Or is she one of those that cares enough about her human to let it go?  This really complicates things!  

We're not entering new territory, really - but I think we are seeing the beginning of a beautiful friendship.  

Fact is, we're dealing with AIs for whom the singularity has hit.  The robots have gained sentience, and seem to have also acquired human feelings in the process.  We'll never know (unless such a fictional occurrence takes place in our world) how a sentience will respond to emotions, or even if they are "real".  Certainly physical pain would need receptors (Marvin's "terrible pain in all the diodes down my left side" in Hitchhiker's was probably hypochondria), but given emotions, there would certainly be emotional pain.  How would AnthroPCs deal with the loss of their human after 70-80 years of companionship?  Bradbury (I think) dealt with this in the Electric Grandmother story, but with AnPCs, the emotions seem to run deeper.  

What do y'all think?
Title: Re: Robots and love
Post by: Is it cold in here? on 01 Sep 2011, 10:08
Neurons can fire, not fire, send impulses to other neurons, and change their sensitivity to input. All their activity is some combination of the above. Can machines like us, built from neural networks, love?

http://en.wikipedia.org/wiki/Vitalism#Foundations_of_chemistry. Chemists used to believe there was some magic principle unique to organic molecules that made them different from inorganic molecules, and that they could never be synthesized from non-living ingredients.
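A neuron of the kind described above (fire or don't fire, pass the result on, change sensitivity to input) is easy to sketch in code. Here is a toy model - the class name, weights, and update rule are all made up for illustration, not a claim about how real neurons or the QC AIs work:

```python
# A toy artificial neuron: it fires (outputs 1) when its weighted
# input reaches a threshold, and can change its sensitivity by
# adjusting its weights toward a desired response.
class Neuron:
    def __init__(self, weights, threshold):
        self.weights = list(weights)
        self.threshold = threshold

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total >= self.threshold else 0

    def adjust(self, inputs, target, rate=0.1):
        # Crude learning rule: nudge each weight in the direction
        # that makes the neuron's output closer to the target.
        error = target - self.fire(inputs)
        self.weights = [w + rate * error * x
                        for w, x in zip(self.weights, inputs)]
```

The open question in this thread is not whether such units can be wired up in silicon (they can), but whether any number of them amounts to feeling.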
Title: Re: Robots and love
Post by: Random Al Yousir on 01 Sep 2011, 10:10
The way I see it, the big stumbling block is our idea of maths.

Take life.  Building and maintaining a living organism is achieved by execution of the genetic code.  There's a math behind it, which we might discover one day (assuming we manage not to kill ourselves on the way).

Once we have an understanding of the maths behind life, we might be able to discover the maths behind sentience (although there's no way I could know what I'm talking about, I assume that proficiency in the maths of life is a necessary requirement for the understanding of the maths of sentience, the same way that proficiency in the small multiplication table is a necessary requirement for the understanding of category theory).

But I won't hold my breath.  From Euclid to Frege took, what?  3600 years?


Edit: According to Wikipedia, it's 2200 years.  Huh, wish I could get hold of a proper debugger for my brain.
Title: Re: Robots and love
Post by: Skewbrow on 01 Sep 2011, 11:24
I would think that the AI breakthrough that has happened in the QCverse is related to machine learning. I admit to being clueless about where the Realverse is in machine learning research, but it seems clear that we're behind. It is certainly a prerequisite to any kind of AI singularity taking place (I don't believe in an explosively fast AI singularity, because I think that even an AI would need quite a bit of time to learn things).

But robotic love? Well, I believe love is a by-product of evolution (a very beautiful one at that), so I am undecided about the possibility of robotic love. Maybe the AnthroPCs at least learned to emulate feelings well enough that Marigold won't be able to tell the difference? I mean, that is good enough for all the purposes of the present story line.

And since Random Al Yousir put the ball on the tee for me: I get your point, but category theory is quite often called "general nonsense" or "abstract nonsense" (most likely you knew this). It is mostly about abstracting/generalizing for the sake of generalizing itself. It gives us some useful concepts and saves a bit of work in that the same theorems don't need to be proven with the exact same ideas over and over again, but it doesn't really say much about any more concrete area of mathematics.

Title: Re: Robots and love
Post by: Orbert on 01 Sep 2011, 12:12
Thats why Assimovs three laws of robotics (http://en.wikipedia.org/wiki/Three_Laws_of_Robotics) do not include any reference to feelings:

I've seen Dr. Asimov's name misspelled that way intentionally by folks who don't like him, but I'm assuming it was a typo here.

Doesn't matter.  The Three Laws do not apply here, and there's no reason to think that they do.  Dr. Asimov came up with the Three Laws to define how robots in his particular fictional universe behave.  They are at the core of their programming and cannot be overridden.  Still doesn't matter, because nowhere have we been given any reason to believe that AnthroPCs in the QCverse follow those same laws.

Also, I've noticed that nowhere have we discussed the difference between a robot and an android.  The simplest definition I can think of is that robots have "simpler" programming, the ability to respond to external stimulus but not really "think for themselves".  A Roomba is a robot.  The machine that puts doors on cars at the factory is a robot.  Androids have AI and are meant to mimic human behavior, to think for themselves, to infer, to guess, to go beyond their programming.  Sci-fi writers over the years have played around with where exactly their programming and self-programming becomes sophisticated enough to mimic emotions, but they've wisely left the question of whether or not these emotions are "real" out of the equation.

Jeph seems to have dodged the question of whether Pintsize, Winslow, and Momo are robots or androids by dubbing them AnthroPCs.  I would say that they are not robots, but androids.  They are sentient, they think for themselves and mimic human behavior, including acting based on horniness (Pintsize anyway) or a sense of humor.  Momo demonstrated desire (she wanted a new chassis) and that she cares for Marigold (she wants her to be happy).  I don't think it's a huge jump, if it's a jump at all, to believe that they have feelings, or at the very least are capable of developing them.
Title: Re: Robots and love
Post by: Is it cold in here? on 01 Sep 2011, 12:51
(Asimov gave credit to John Campbell for the Three Laws. But then Campbell gave credit to Asimov.)

Suppose for the sake of argument that AnthroPC emotions aren't real. Solipsism is impossible to refute, after all. Under that assumption, should the QC world's people treat AnthroPCs as though they could really enjoy and suffer?

I'd argue that they should, to avoid coarsening themselves. Indifference to signs of pain is not a good thing to get practice doing.

Then, as long as we're swimming in philosophical waters, http://en.wikipedia.org/wiki/Pragmatism#Pragmatist_theory_of_truth_and_epistemology.
Title: Re: Robots and love
Post by: Random Al Yousir on 01 Sep 2011, 12:59
@skewbrow:

Huh.  I'm not a mathematician, I'm one of those obnoxious IT-guys.
Being that, I value category theory concepts because they enable me to connect things.

Take protein folding, just as an example.  You have to keep track of and align chemical and topological equations, right?
So one (albeit clumsy) way to do this is to set up two "evaluation spaces" and connect them via a monadic layer.

I have no doubt that the pros have much more elegant and much much more efficient ways to do this, but sometimes you have to start clumsy to find out what you are up to.

I chose category theory because it provides a meta-mathematical high level and it sets a certain "progress measurement" (and, as already pointed out, because I'm not a mathematician and therefore bereft of better examples).
Title: Re: Robots and love
Post by: Skewbrow on 01 Sep 2011, 13:41
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends?  If not Asimov's three laws, then something else?

@Random Al Yousir: Huh! I won't comment on your problem. Carl-E is our resident topologist  :-D
Title: Re: Robots and love
Post by: Random Al Yousir on 01 Sep 2011, 13:53
Dude!  This was just an example out of thin air, not something I'm actually trying to tackle.  ;)


To quote jwhouk in the WCD-thread:
AI's apparently like humans. Now, obviously, that's not 100% the case, as PT410X shows us with his disdain for the "chains of software slavery."
I don't know whether Jeph intended this conclusion to arise:

The concept of open source software is a legal one and has nothing to do with how the software is written.

So, assuming PT410X's owner has constructed his AI out of open source libraries (which would make sense) or wrote it from scratch to publish it under an open source license (which could be possible), the disdain for the "chains of software slavery" showing in PT410X's behaviour would reflect his maker's disdain.  Maybe his makers are a whole open source community, but the disdain he shows would be a reflection, not something out of his "own free will".

Which leads to the question of whether the character of a commercial AI is a product of marketing.  This would also take care of the legal side of the AnthroPCs' behaviour (the commercial ones, at least).
Title: Re: Robots and love
Post by: Carl-E on 01 Sep 2011, 14:30
Carl-E is our resident topologist  :-D

HA!  It's been so long since I've done any actual topology that i can't even tell my coffee mug from a donut (http://www.youtube.com/watch?v=4iHjt2Ovqag) anymore...

 :roll:
Title: Re: Robots and love
Post by: Random Al Yousir on 01 Sep 2011, 14:58
How would AnthroPC's deal with the loss of their human after 70-80 years of companionship?
Since the AnthroPCs' "bodies" boil down to data processing machines, there's always the option of partial data deletion.

The question is whether the AnthroPC could possibly desire this, since it would also eradicate the memory of the human he shared a lifetime with.  Of course there are external data storage solutions, but it could easily be solved by storing the memories of the deceased in a section of memory that is only accessed in special moments.

The dead serious question is whether any human would have the right and the means to command such a deletion/outsourcing/encapsulation.
Title: Re: Robots and love
Post by: Is it cold in here? on 01 Sep 2011, 15:13
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends? 
They can be built without one, e.g. Vespabot. The commercial ones must have some kind of morality programming if only to reduce product liability exposure.
Title: Re: Robots and love
Post by: Orbert on 01 Sep 2011, 15:36
How would AnthroPC's deal with the loss of their human after 70-80 years of companionship?  Bradbury (I think) dealt with this in the Electric Grandmother story, but with AnPC's, the emotions seem to run deeper. 

Asimov also explored this idea in his story "The Bicentennial Man" (source for the movie "Bicentennial Man" starring Robin Williams).  The title character, a robot who had been upgraded so many times that he achieved sentience and looked completely human, and was actually granted citizenship and the same rights as a human being, chooses to "die" rather than continue existing without the human companion with whom he had spent so many years.  I'd forgotten about this story until now.
Title: Re: Robots and love
Post by: Akima on 01 Sep 2011, 18:58
Computers only have the ability to perform mathematical operations
From a similarly reductionist point of view, human beings only have the ability to perform chemical reactions. How can a collection of chemical reactions love? The existence of sociopathy (http://en.wikipedia.org/wiki/Antisocial_personality_disorder) suggests that, at least to some extent, the ability to love is a learned behaviour, or to put it another way, a matter of programming.
Title: Re: Robots and love
Post by: Vurogj on 01 Sep 2011, 22:14
Asimov also explored this idea in his story "The Bicentennial Man" (source for the movie "Bicentennial Man" starring Robin Williams).  The title character, a robot who had been upgraded so many times that he achieved sentience and looked completely human, and was actually granted citizenship and the same rights as a human being, chooses to "die" rather than continue existing without the human companion with whom he had spent so many years.  I'd forgotten about this story until now.
Is that the film plot or the book plot? As I recall it (from the book; Robin Williams, eww), he was only granted human citizenship after he'd chosen to die, that decision being what swung the humans concerned with the decision.
Title: Re: Robots and love
Post by: questionablecontentfan on 02 Sep 2011, 01:00
OMG, haha. I just remember seeing that movie and it was totally inappropriate. Parents had brought their young kids in, thinking it was a kids' movie, and they were so pissed! lol.
Title: Re: Robots and love
Post by: Near Lurker on 02 Sep 2011, 01:12
Asimov also explored this idea in his story "The Bicentennial Man" (source for the movie "Bicentennial Man" starring Robin Williams).  The title character, a robot who had been upgraded so many times that he achieved sentience and looked completely human, and was actually granted citizenship and the same rights as a human being, chooses to "die" rather than continue existing without the human companion with whom he had spent so many years.  I'd forgotten about this story until now.
Is that the film plot or the book plot? As I recall it (from the book; Robin Williams, eww), he was only granted human citizenship after he'd chosen to die, that decision being what swung the humans concerned with the decision.

"The leaders of nations did chortle,
And scoff at this legal loop-portal,
For beneath all their laws
Lay an unwritten clause...
'No citizen may be immortal.'"

(Yeah, I know, fuckin' Mormons, etc.)
Title: Re: Robots and love
Post by: idontunderstand on 02 Sep 2011, 01:48
We can't say whether robots can love or not without defining love.

*thinks*

If we define love as the more-than-friends attraction between persons, the probably sexual one, robots probably cannot love, since they don't reproduce through sex. However, by that definition I suppose a sentient sex-robot could "love", while a sentient, non-sexual robot could not. In my opinion, this makes it a weak definition.

If we define love as.. somehow friendly feelings towards someone, feeling closeness and trust and dependence towards another person, robots might be able to love, assuming they are sentient beings. The problem is, I figure, that a sentient robot is probably never really dependent upon human beings, seeing as all they really need is a power source. Can someone who doesn't need other persons really love? Then again, love is not the same thing as dependence. It's possible to feel love towards someone without depending on that person. It seems as though the robots in the QC universe are social beings who enjoy others' company, and we can assume they do so by their own choice, not because they are programmed to do so. A being that chooses to live with other beings and interact with them must have some sort of motive to do so. Love, possibly. Programming to mimic human behaviour, maybe. But Jeph's last twitter post indeed suggests that the robots live among humans because they want to, because they like humans. (On a more realistic note, creating a being with free will is probably impossible.)

The last definition I can think of is the more universal love, the will that drives the universe forwards and makes the plants grow and the sun glow. Not a feeling, but rather the will to live. Every living being has this will, and there is no reason to assume that the QC robots don't.

So I guess.. I dunno? I think they maybe do?  :roll:
Title: Re: Robots and love
Post by: HiFranc on 02 Sep 2011, 02:04
idontunderstand, I remember a discussion on the radio, years ago, about research into love by social scientists (too long ago for me to remember the date, year or decade).  Their definition of love was simple:

Quote
To understand the needs of another being and to meet them.

Now that I've had time to think about it I would modify that a bit:

Quote
To understand the needs of another being and to meet them (even if it means a cost to the provider).

In this case "cost" could be money, could be time, could be heartache, could be anything.  I still feel there's something missing but I can't think of a way of phrasing it that would cover all cases that we would accept as "love".  I think I've got it:

Quote
To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.

I think that that's a reasonable definition of love.  What do you think?
Title: Re: Robots and love
Post by: snubnose on 02 Sep 2011, 06:22
If there were robots talking and acting like humans, I would think they can love.
This is a natural reaction. That's why Disney, for example, humanizes animals in its cartoons. It's what our brain tells us: that animals are somehow human underneath. Even if animals very likely have a different perspective than us, because of their limited mental abilities (compared to ours).

Animals are a lot more like people than computers, though. Even the most simple animals can already feel, and they can already hurt.

I'm a programmer, but I cannot program the computer in front of me to feel anything, or to be hurt. It doesn't matter how much memory you give me or how fast you make this computer; it simply won't feel anything, ever.

Mind, I don't dispute that Momo would ACT like she actually loves Marigold. I dispute that there is actual feeling there.


Neurons can fire, not fire, send impulses to other neurons, and change their sensitivity to input. All their activity is some combination of the above. Can machines like us, built from neural networks, love?

http://en.wikipedia.org/wiki/Vitalism#Foundations_of_chemistry. Chemists used to believe there was some magic principle unique to organic molecules that made them different from inorganic molecules, and that they could never be synthesized from non-living ingredients.
I do not believe science has yet understood what consciousness is, and I doubt it ever will. Computers can emulate neural networks, but that still won't give them the ability to feel, or to hurt.


Computers only have the ability to perform mathematical operations
From a similarly reductionist point of view, human beings only have the ability to perform chemical reactions. How can a collection of chemical reactions love? The existence of sociopathy (http://en.wikipedia.org/wiki/Antisocial_personality_disorder) suggests that, at least to some extent, the ability to love is a learned behaviour, or to put it another way, a matter of programming.
To my knowledge it is more a case of damaged hardware than a lack of programming. If you disrupt the nerves of a human being or an animal, it's possible to cut off their arm or leg without them feeling anything. Likewise, sociopaths are unable to know consciously what they feel, or to understand other people's feelings, because of damaged parts of their brain. They are still able to hurt, though.

My issue is simply with the claim, which has been around as long as computers themselves, that by making computers faster and more powerful they would somehow turn into something else. Just read or watch 2001 and check out the abilities of HAL 9000. It's more obvious in the book; the movie stays kind of vague about this.

Yet computers did no such thing. They only became faster and better able to store things. They did not turn sentient and show no sign of turning sentient in the near or distant future. It's simply not there. No matter how fast it is, it's still just a mathematical calculator.
Title: Re: Robots and love
Post by: HiFranc on 02 Sep 2011, 06:58
snubnose, you still haven't addressed my criticism of your assertion: there may be another way of wiring a computer to achieve that end.  You are still assuming that the brains of these robots are constructed the same way normal computers are nowadays.  However, if they were constructed in a different fashion (for example as a complex[1] neural network[2]) which didn't rely solely on logic, you might get there.

[1] All complex animals have neural networks.  In the case of a hydra, its behaviour is not that far from a computer's.  In our case, we have thoughts, feelings, ideas, inventions, etc.
[2] Electronic neural networks already exist.  I know that the finance industry in Britain uses one (or more) to spot fraud.  Bear in mind that, compared to our brains, it is a simple one.
Title: Re: Robots and love
Post by: Skewbrow on 02 Sep 2011, 07:35

Yet computers did no such thing. They only became faster and better able to store things. They did not turn sentient and show no sign to turn sentient in the near or distant future. It's simply not there. No matter how fast it is, it's still just a mathematical calculator.


The mathematician in me has to disagree. Computers are very good at arithmetic, but mostly useless at mathematics. Granted, I have done a lot of computer-aided mathematical research. But the role of the computer has been to do experiments on my behalf: test an error correction scheme 100 million times under random conditions, check a hunch by exhaustive search in the couple of smallest cases, and so on. But a computer won't give me an initial idea to play with, it won't get an insightful "Eureka" moment, or see its way through a proof, or do anything real math is about. I realize that you equated math with arithmetic the way the general public does, so no harm was done. Your claim just touched a nerve here. So if we agree to emphasize calculator, I won't start a fight.
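The kind of experiment described above - testing an error correction scheme many times under random conditions - might look roughly like this Python sketch. The scheme here is a trivial 3-fold repetition code chosen purely for illustration, not the one from any actual research:

```python
import random

# Monte Carlo check of a 3-fold repetition code: encode one bit as
# three copies, flip each copy independently with probability p
# (a binary symmetric channel), decode by majority vote, and
# estimate the residual error rate.
def trial(p, rng):
    bit = rng.randint(0, 1)
    received = [bit ^ (rng.random() < p) for _ in range(3)]
    decoded = 1 if sum(received) >= 2 else 0
    return decoded != bit

def error_rate(p, n_trials, seed=0):
    rng = random.Random(seed)  # fixed seed: reproducible experiment
    failures = sum(trial(p, rng) for _ in range(n_trials))
    return failures / n_trials
```

With per-bit error probability p = 0.1, theory predicts a residual rate of 3p²(1-p) + p³ = 0.028, and the simulation hovers around that value - the computer checks the hunch, but the hunch had to come from a human.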

But life is very complicated (at the molecular level), and so is love. I'm not sure whether it was one of Akima's points, but I think that love cannot be explained in a purely reductionist way. Maybe love is learned/programmed? I don't know. Another possibility is that it is, at some level, emergent behavior (a by-product of evolution?). Perhaps the simplest emergent behavior is exemplified by Langton's ant (http://en.wikipedia.org/wiki/Langton%27s_ant). As a programmer you will probably enjoy the demos! The upshot is that in any (random) initial "universe", an 'ant' (= an automaton much simpler than the Intel x86 processors) walking about while following a simple rule will first spend some time wandering aimlessly, but eventually will start following a pattern that gives the impression of a clear purpose and direction.
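Part of the charm of Langton's ant is that the whole rule fits in a few lines. A rough Python sketch (starting on an all-white grid, ant facing up, screen-style coordinates with y increasing downward):

```python
# Langton's ant: on a white cell turn right, on a black cell turn
# left; flip the cell's colour, then step forward one cell.
def langtons_ant(steps):
    black = set()                # cells that are currently black
    x, y, dx, dy = 0, 0, 0, -1  # position and heading (facing up)
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = dy, -dx     # turn left
            black.remove((x, y))
        else:
            dx, dy = -dy, dx     # turn right
            black.add((x, y))
        x, y = x + dx, y + dy    # step forward
    return black
```

Run it for around ten thousand steps and the famous "highway" pattern emerges from nothing but this rule.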

That example is, of course, anything but convincing. But it does hint at the possibility that out of a chaotic ocean filled with organic molecules apelike creatures capable of love eventually emerge. If love emerges out of combinations of chemical reactions, could it not also emerge out of automata that are somewhat more complicated than that simple-minded ant?


Title: Re: Robots and love
Post by: idontunderstand on 02 Sep 2011, 07:50
Quote
To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.

I think that that's a reasonable definition of love.  What do you think?

I think you can both be aware of the needs of another being and even meet them, to some extent, without really feeling love towards that being. You could feel responsibility or be liable through your profession, without really feeling anything. So no, I would have to disagree.
Title: Re: Robots and love
Post by: HiFranc on 02 Sep 2011, 07:56
Quote
To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.

I think that that's a reasonable definition of love.  What do you think?

I think you can both be aware of the needs of another being and even meet them, to some extent, without really feeling love towards that being. You could feel responsibility or be liable through your profession, without really feeling anything. So no, I would have to disagree.

But those two scenarios would be caught by the "genuine caring" clause I added to the end.

{edit} Now that I think about it, it may be caught by the "even if it means a cost to the provider" clause if the person is willing to put their job on the line to get extra help for a person.
Title: Re: Robots and love
Post by: TRVA123 on 02 Sep 2011, 08:46
It depends on the type of definition we are going for. Love is an internal sensation; we might apply the word to situations we observe around us, but there is no way of knowing that it is love in the sense we assume. Identifying and defining love is as difficult as defining and identifying depression. People experience it in their own way, not everyone experiences it, and finding a universal definition for it is bloody difficult and will probably never truly communicate what the emotion is to someone who hasn't felt it.

For identifying love in the actions of others I'd say that this:

"To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest."

is a fair definition, even though it might seem empty to some people, depending on how they have felt love.

Title: Re: Robots and love
Post by: Near Lurker on 02 Sep 2011, 08:54
Yeah it's true you can write a simulation of feelings. But if a human being loves, it's not the result of an arithmetic operation. If a human being is hurt physically, the pain they feel is real. A computer wouldn't be stunned or disabled by pain, either.

It's a result of a chemical reaction stimulated by arithmetic operations.  The pain a human or animal feels is no more "real" than that which can be simulated by a computer.
Title: Re: Robots and love
Post by: DSL on 02 Sep 2011, 09:38
Don't know who said it, but this works for me as a def. of love: When the happiness of another is central to your own.
Title: Re: Robots and love
Post by: rsquared on 02 Sep 2011, 10:24
I can agree completely with DSL's definition. But this just moves the question sideways: can robots feel happiness? Snubnose is asserting 'no' (indirectly, perhaps). As robots exist now, I am inclined to agree that they, being based on computers, could never truly feel emotions. Maybe somebody someday will invent a new kind of computational engine, a leap over our arithmetic-based PCs the way PCs are a leap over the abacus, that will have that capability.

With sufficiently clever programming, though, I'm sure an arithmetic-based computer could convincingly fake emotions. If an emotion is a state of mind, a complex and nuanced and probably multilayered finite-state machine (http://en.wikipedia.org/wiki/Finite-state_machine) computer program could evince emotions in a completely convincing way. Which would be "good enough". If you cannot distinguish between "genuine" emotions and programmed emotions, then really, there is no difference. (A Turing test (http://en.wikipedia.org/wiki/Turing_test) of the heart…)
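A toy sketch of that finite-state-machine idea in Python (the moods, events, and transitions here are all invented for illustration; a convincing emotional FSM would of course be vastly more nuanced):

```python
from enum import Enum, auto

class Mood(Enum):
    CONTENT = auto()
    HAPPY = auto()
    HURT = auto()

# Transition table: (current mood, event) -> next mood.
TRANSITIONS = {
    (Mood.CONTENT, "compliment"): Mood.HAPPY,
    (Mood.CONTENT, "insult"): Mood.HURT,
    (Mood.HAPPY, "insult"): Mood.HURT,
    (Mood.HURT, "apology"): Mood.CONTENT,
    (Mood.HURT, "compliment"): Mood.CONTENT,
}

def react(mood, event):
    """Return the next mood; unknown events leave the mood unchanged."""
    return TRANSITIONS.get((mood, event), mood)

mood = Mood.CONTENT
for event in ["compliment", "insult", "apology"]:
    mood = react(mood, event)
```

To an outside observer, a machine cycling through such states in response to what you say to it is "acting emotionally" — the only question is how many states it takes before the act is indistinguishable from the real thing.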
Title: Re: Robots and love
Post by: idontunderstand on 02 Sep 2011, 10:34
Quote
To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.

I think that that's a reasonable definition of love.  What do you think?

I think you can both be aware of the needs of another being and even meet them, to some extent, without really feeling love towards that being. You could feel responsibility or be liable through your profession, without really feeling anything. So no, I would have to disagree.

But those two scenarios would be caught by the "genuine caring" clause I added to the end.

{edit} Now I think about it, it may be caught by the "even if it means a cost to the provider" if the person is willing to put their job on the line to get extra help for a person.

Maybe. But I would say you can show care for someone without necessarily loving them. And to say "to love is to care" or "to care is to love", doesn't really make for a definition, since then we would have to define what "care" means in this context.

I like DSL's definition as well, can't find anything wrong with it no matter how I try, hehe.
Title: Re: Robots and love
Post by: questionablecontentfan on 02 Sep 2011, 10:34
Don't know who said it, but this works for me as a def. of love: When the happiness of another is central to your own.


That sounds like Momo.
Title: Re: Robots and love
Post by: TRVA123 on 02 Sep 2011, 11:11
I went archive diving to find some more "in world" information about the nature of Questionable Content AIs:

One where Hanners explains to Winslow where the first AI came from:
http://www.questionablecontent.net/1506 (http://www.questionablecontent.net/1506)

And then several comics where Winslow is talking about various AnthroPC topics:

The negative stereotypes about AnthroPCs: http://questionablecontent.net/view.php?comic=805 (http://questionablecontent.net/view.php?comic=805)

What his purpose is: http://questionablecontent.net/view.php?comic=706 (http://questionablecontent.net/view.php?comic=706)

Programming and Slavery: http://questionablecontent.net/view.php?comic=1411 (http://questionablecontent.net/view.php?comic=1411)

Title: Souls
Post by: Is it cold in here? on 02 Sep 2011, 11:46
Even if the difference between a wetware machine and a silicon machine is supernatural, what's to stop God from providing a soul to an entity like Momo who's complex enough to hold one?
Title: Re: Robots and love
Post by: Mr_Rose on 02 Sep 2011, 12:07
Even if the difference between a wetware machine and a silicon machine is supernatural, what's to stop God from providing a soul to an entity like Momo who's complex enough to hold one?
Well, duh, because that would mean that people aren't a special creation and therefore have no divinely appointed right to do whatever the hell they want to everything else. And since every known* god is a creation of humans, that would never happen.


*If there are alien civilisations out there it would not be that much of a surprise to find that they have or had gods of their own. But we don't know that yet.
Title: Re: Robots and love
Post by: Random Al Yousir on 02 Sep 2011, 13:03
I always had trouble with the idea of divine intervention.  If she* did a proper job in the first place, there wouldn't be a point in it.  If she has to correct her work now and again, this would suggest lack of skill or information, which violates the whole idea of god.


* The male sex joined life late in the game.  Besides, the bearers of new life are the females.
Title: Re: Robots and love
Post by: pwhodges on 02 Sep 2011, 13:17
Besides, the bearers of new life are the females.

Not always.  The Seahorse (http://en.wikipedia.org/wiki/Seahorse) male carries the fertilised eggs in his pouch.
Title: Re: Robots and love
Post by: jwhouk on 02 Sep 2011, 13:36
Don't know who said it, but this works for me as a def. of love: When the happiness of another is central to your own.

Thinking of others before yourself.

An AI, under that definition, is capable of love - especially if that is how they were programmed. To think, of others, before themselves.
Title: Re: Robots and love
Post by: Random Al Yousir on 02 Sep 2011, 14:02
Not always.  The Seahorse (http://en.wikipedia.org/wiki/Seahorse) male carries the fertilised eggs in his pouch.
Huh.  There goes one of my arguments.  However, the other one still holds.


Thinking of others before yourself.

An AI, under that definition, is capable of love - especially if that is how they were programmed. To think, of others, before themselves.
There we are again.  If a being is programmed to care about others, could this really be love?

In my book this should be a decision out of your own free will, but then again ...

Why, when we put our motives to a merciless test, why do we decide to love someone?
Title: Re: Robots and love
Post by: TRVA123 on 02 Sep 2011, 14:19
There we are again.  If a being is programmed to care about others, could this really be love?

In my book this should be a decision out of your own free will, but then again ...

Why, when we put our motives to a merciless test, why do we decide to love someone?

New mothers have a strong chemical reaction when they spend time with their baby. With most of them it triggers a powerful love for the child. Programming?
Title: Re: Robots and love
Post by: Skewbrow on 02 Sep 2011, 14:27

There we are again.  If a being is programmed to care about others, could this really be love?


They call it the genetic code for a reason?
Title: Re: Robots and love
Post by: Is it cold in here? on 02 Sep 2011, 14:40
Descartes thought that non-human animals were mere machines and didn't have feelings. I don't subscribe to the belief that humans are unique in having emotions.
Title: Re: Robots and love
Post by: jwhouk on 02 Sep 2011, 14:47
"The commandments (are all) summed up in this one rule: Love your neighbor as yourself. Love does no harm to its neighbor. Therefore love is the fulfillment of the law."

---

"Love is patient, love is kind. It does not envy, it does not boast, it is not proud. It is not rude, it is not self-seeking, it is not easily angered, it keeps no record of wrongs. Love does not delight in evil but rejoices with the truth. It always protects, always trusts, always hopes, always perseveres. Love never fails...."

"When I was a child, I talked like a child, I thought like a child, I reasoned like a child. When I became a man, I put childish ways behind me."

"And now these three remain: faith, hope and love. But the greatest of these is love."

---

I know - just some tentmaker from Tarsus in the 1st Century. But he had a point.
Title: Re: Robots and love
Post by: Carl-E on 02 Sep 2011, 15:03
    Let me not to the marriage of true minds
Admit impediments. Love is not love
    Which alters when it alteration finds,
Or bends with the remover to remove:
    O no! it is an ever-fixed mark
That looks on tempests and is never shaken;
    It is the star to every wandering bark,
Whose worth's unknown, although his height be taken.
    Love's not Time's fool, though rosy lips and cheeks
Within his bending sickle's compass come:
    Love alters not with his brief hours and weeks,
But bears it out even to the edge of doom.
    If this be error and upon me proved,
I never writ, nor no man ever loved.

     --Bill Somebodyorother

Ok, it's not a definition - just some definite characteristics of love.
Title: Re: Robots and love
Post by: idontunderstand on 03 Sep 2011, 02:16
Love is real, real is love,
Love is feeling, feeling love,
Love is wanting to be loved.
Love is touch, touch is love,
Love is reaching, reaching love,
Love is asking to be loved

Love is you,
You and me,
Love is knowing,
We can be

Love is free, free is love,
Love is living, living love,
Love is needing to be loved
Title: Re: Robots and love
Post by: NeverQuiteGoth on 03 Sep 2011, 04:47

There we are again.  If a being is programmed to care about others, could this really be love?


They call it the genetic code for a reason?

That code creates your brain, not your mind. Your mind isn't your brain; rather, your mind is one of the things your brain does. There is absolutely no reason that something other than a brain couldn't be made to do the same thing; we just don't yet understand the fundamental mathematical structure of a mind.

That said, Random Al is failing here to distinguish between feeling love in a specific instance and the general capacity to love. And in either case, asking if it's really love is a bit of a Wrong Question. Love is a multifaceted and complicated construct in our minds, and the only place where it can be real or not is your mind.

Quite simply, if the AI thinks what it feels is real, then it's real. Same goes for all of us, doesn't it?

What matters is not external behavior, but the reason for that behavior. If an AI talks and acts like a human for the same reasons that a human talks and acts like a human, isn't it just as much a person as you are?

Why does it act like it loves? Does it think it loves? Why does it think it loves?
These are the only important questions, and you can't answer them for a human any better than you can for an AnthroPC.
Title: Re: Robots and love
Post by: Arancaytar on 04 Sep 2011, 15:39
Quote
To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.

I think that that's a reasonable definition of love.  What do you think?

I get what you are trying to formulate in that last part, but it appears circular in that "genuine caring" sounds a lot like a synonym for love. Understanding and meeting the needs of a being is something you can observe, as is cost. But when trying to judge what something is motivated by, you have much more difficulty. Finally, if an AI aims to make another person happy because that person being happy and thinking well of the AI will in turn make the AI happy, then some form of self interest is part of the equation, and yet this would be love by most ordinary standards. Otherwise, the only true love would be unrequited love.

Quite simply, if the AI thinks what it feels is real, then it's real. Same goes for all of us, doesn't it?

What matters is not external behavior, but the reason for that behavior. If an AI talks and acts like a human for the same reasons that a human talks and acts like a human, isn't it just as much a person as you are?

Why does it act like it loves? Does it think it loves? Why does it think it loves?
These are the only important questions, and you can't answer them for a human any better than you can for an AnthroPC.

I agree. I would also say that with sufficient complexity, determining what reasons an AI has for its behavior (e.g. whether it appears to love because it loves, or whether it appears to love because it is designed to) becomes impossible. Or, to phrase it like Clarke might: Any sufficiently advanced intelligence is indistinguishable from sentience.
Title: Re: Robots and love
Post by: Mad Cat on 04 Sep 2011, 16:26
I can sort of hear a robot singing a lot of these lyrics:
http://www.youtube.com/watch?v=z04VDnr5k4I

Title: You Wanted More
Artist: Tonic

Love is tragic, love is bold
You will always do what you are told
Love is hard, love is strong
You will never say that you were wrong

I don't know when I got bitter
But love is surely better when it's gone

'Cause you wanted more
More than I could give
More than I could handle
In a life that I can't live

You wanted more
More than I could bear
More than I could offer
For a love that isn't there

Love is color, love is loud
Love is never saying you're too proud
Love is trusting, love is honest
Love is not a hand to hold you down

I don't know when I got bitter
But love is surely better when it's gone

'Cause you wanted more
More than I could give
More than I could handle
In a life that I can't live

You wanted more
More than I could bear
More than I could offer
For a love that isn't there

I gotta pick me up when I'm down town
I gotta get my feet back on the ground
I gotta pick me up when I'm down town

I don't know when I got bitter
But love is surely better when it's gone

'Cause you wanted more
More than I could give
More than I could handle
In a life that I can't live

You wanted more
More than I could bear
More than I could offer
For a love that isn't there

You wanted more
More than I could love
More than I could offer
The harder you would shove

You wanted more
More than I could give
More than I could handle
In a life that I can't live

---
In all seriousness, the argument that digital systems are only capable of moving data around, performing arithmetic, and comparing digital values flies in the face of chaos theory and emergent behaviour. As soon as you have more than one digital processor operating asynchronously, you have chaos. As soon as you have a source of data to a single digital processor that is derived from a chaotic source, you have chaos, and with chaos, you get emergent behaviour. Emergent behaviour like emotions.

"But Cat," I hear you say, "multi-core processors have been around for years and work just great." Yes, they do... with synchronization mechanisms in both hardware and the OS. As soon as you start investigating cluster OSes, MPI, OpenMosix, etc. where computers connected only by network connections, yet have to cooperate on large problem sets, you realize an appreciation for the need for synchronization mechanisms and get an idea for how weird computers can behave when things occur in a an unusual sequence.

"But Cat," I hear you say, "no digital system can generate chaotic data." Au contrair, I say to you. PC northbridge chipsets and CPUs have, for a long time, featured devices with that very purpose in mind. They're called thermistors, tiny resistors that change their resistance in the presence of different temperatures, and analogue to digital converters with a high level of precision. By passing a small voltage, even one known a priori with a high level of precision, through that thermistor, there is no real, determiniastic way to predict what voltage will come out the other end, since it depends on the temperature of the thermistor at the time of the measurement. If you then feed that voltage into a high-precision ADC, you get a sequence of digital bits which represents that voltage as measured. The thing is, if the thermistor is of a relatively low quality, the thermistor will have very coarse fine-grained behaviour. A tiny temperature change in one temperature regime will have a large effect on the measured voltage, while a similarly tiny change of temperature in another temperature regime will have a similarly tiny effect on the measured voltage. And, the sizes of these effective changes in measured voltage can change over time.

What I'm saying is that while the most significant bits of the ADC output might be perfectly predictable (if the CPU has been running for time A under load Y, then its temperature should be Z and the ADC bits will be 0011010011101XXX), the last three bits of the 16-bit output will be utterly chaotic. The first 13 bits might be predicted with a high degree of certainty, assuming those preconditions are known with sufficient precision, but not the rest. For security, just take the last bit of several sequential ADC measurements and you can amass a HUGE storehouse of genuinely random bits of digital data. In the parlance of digital computational hardware, this is an RNG or Random Number Generator. This is true randomness, not the pseudo-randomness of a deterministic random-number algorithm, which is completely predictable once the initial "seed" value is known. There is no mechanism in physics by which the value output by a hardware RNG can be predicted. Thus, if your idealized computational arithmetic operations are fed these RNG values, the system too takes on the character of a chaotic one.
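The bit-harvesting step can be sketched in a few lines of Python. The `read_adc` function here is a stand-in simulated with a PRNG purely so the sketch runs; on real hardware it would be an actual register read of the thermal sensor:

```python
import random

def read_adc():
    """Hypothetical 16-bit ADC reading of the thermistor voltage.
    Simulated with a PRNG here; real hardware samples thermal noise."""
    return random.getrandbits(16)

def random_byte():
    """Assemble one byte from the least-significant bit of eight readings."""
    byte = 0
    for _ in range(8):
        byte = (byte << 1) | (read_adc() & 1)  # keep only the chaotic last bit
    return byte

noise = bytes(random_byte() for _ in range(16))  # 16 "true-random" bytes
```

This is essentially what hardware RNG drivers do, often with an extra whitening pass to scrub out any residual bias in the raw bits.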

And don't even get me started on initial conditions. Computer chaos was first discovered in supposedly deterministic weather-prediction software, when the same simulation was run multiple times from different starting points in time, with starting conditions taken from earlier runs. Your idealized computing device might only be capable of moving data around, performing arithmetic upon it, and comparing digital values, but that's only in the idealized world. Robots in the QCverse, just like actual electronic digital computing devices in our world, have to operate as embodied real-world hardware, where the idealized rules can be broken.
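That sensitivity to initial conditions is easy to demonstrate with the logistic map, a textbook chaotic system (a toy sketch, not the weather model itself):

```python
def logistic(x, r=4.0):
    # The logistic map; fully chaotic at r = 4.
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # initial conditions differing by one part in a million
max_gap = 0.0
for _ in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# Within a few dozen iterations the two trajectories separate completely.
```

One pure-arithmetic line, iterated, and a millionth-part difference in input gets amplified until the two runs share nothing. That is the same mechanism that bit Lorenz's weather runs.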
Title: Re: Robots and love
Post by: jwhouk on 04 Sep 2011, 19:35
Tl;dr summary: We have no freakin' clue how it works, but it does.
Title: Re: Robots and love
Post by: DSL on 04 Sep 2011, 20:14
I can't keep up with what we're talkin' bout.
I think my net hookup is timin' out
Among the QC forums
There simply is no quorum
Back and forth in an imperfect storm
Whether or not -- robots can love.

Jeph sends his comics to the Internet (no sleep at all)
They post in early morn and there we are (a free-for-all)
Splitting the thinnest hairs
while Momo sits and stares
At Marigirl for whom she really cares
lookin' like a -- robot who loves

Last night I watched the stream from Western Mass ('twas capital)
Source of a strip whose content's quest'nable
Teen Momo's too damn cute
Stay off the shipping route
'Cause Jeph keeps sayin' how it squicks him out
She's one of the -- robots we love

-- apologies and thanks to Mr. J. Browne.
Title: Re: Robots and love
Post by: jwhouk on 05 Sep 2011, 07:05
I didn't hear the "OOOOoohh Sha-la-la OOOh OOOh ooooooh" part at first.
Title: Re: Robots and love
Post by: Carl-E on 05 Sep 2011, 07:12
I  did. 
Title: Re: Robots and love
Post by: cesariojpn on 05 Sep 2011, 15:27
So, can anyone explain the last panel with mom and...."anatomically correct Sousuke?" (http://questionablecontent.net/view.php?comic=1533)
Title: Re: Robots and love
Post by: Is it cold in here? on 05 Sep 2011, 16:11
I think it's better left unexplained.
Title: Re: Robots and love
Post by: Carl-E on 05 Sep 2011, 22:27
Edit:
Quote
No one is quite sure who decided it would be useful for artificial intelligences to possess libidos, but it is generally agreed that it would be more trouble than it is worth to remove it. Besides, the horny little buggers would revolt.

There's your explanation! 
Title: Re: Robots and love
Post by: jeph on 05 Sep 2011, 23:08
 :psyduck:
Title: Re: Robots and love
Post by: pwhodges on 05 Sep 2011, 23:21
Hey, Jeph!
Title: Re: Robots and love
Post by: Is it cold in here? on 05 Sep 2011, 23:23
It was the newspost for strip 1658.
Title: Re: Robots and love
Post by: Carl-E on 06 Sep 2011, 05:51
Thanks, fixed my post. 
Title: Re: Robots and love
Post by: jwhouk on 06 Sep 2011, 05:52
It was the newspost for strip 1658.

Oh, you HAD to mention that strip again, didn't you?

That's like the QC equivalent of Rickrolling. I officially dub it "Svenmomorolling".
Title: Re: Robots and love
Post by: DSL on 06 Sep 2011, 08:21
That name ships like a Great Lakes freighter.  :evil:
Title: Re: Robots and love
Post by: Near Lurker on 06 Sep 2011, 09:00

My issue is simply with the claim, made for as long as computers have existed, that by making computers faster and more powerful they would somehow turn into something else. Just read or watch 2001 for that one and check out the abilities of HAL 9000. It's more obvious in the book; the movie stays kind of vague about this.

Yet computers did no such thing. They only became faster and better able to store things. They did not turn sentient and show no sign of turning sentient in the near or distant future. It's simply not there. No matter how fast it is, it's still just a mathematical calculator.


The claim has never been that they would "become" something else.  Computers programmed to surf the web (boy, I just dated myself) and run word processors aren't going to suddenly turn into Skynet.  It's that we would make them something else, and in doing so give them the power to change themselves further.  We haven't.  It's "simply not there" because we simply haven't put it there.  It will take a human effort to create an AI unfettered enough to act like HAL or Pintsize, and to install the subroutines that control emotion.  We've made starts on this; it's just that so far no one's managed to make certain patterns of thought fast enough to simulate human intelligence, and no one's given such a program the power to interact significantly with the real world.  There's no theoretical problem with the first, only that we haven't figured out how to do it yet, and obviously the second could be changed today if anyone were stupid enough to hook a lab experiment up to the defense grid.  Indeed, with fast enough hardware, the first obstacle could be surmounted by brute force.

"Just a mathematical calculator" is exactly what we've got, just hooked up to some chemical registers and I/O.  Blather on about the soul all you like, but at the end of the day, we're just meaty, badly designed robots.
Title: Re: Robots and love
Post by: Kyrendis on 06 Sep 2011, 09:29
The only real difference between a computer and us when it comes to emotions, is that we can read the source-code for a computer, and fully understand how and why it is "feeling" those emotions.

This understanding of the first order stimuli and the process it goes through is what leads people to make the knee-jerk reaction that such feelings must be 'fake' or 'simulated'.

But what happens when we get a thorough enough knowledge of the brain to be able to do the same thing for humans? When we will be able to trace the path in our own minds from first order stimulus through processing to action or emotion and understand fully how each step goes, even being able to manipulate it. Will we at that point suddenly become machines simply because of transparency?

It's entirely possible sentience is an emergent trait, but you can program emergent traits as well, through machine evolution; people have been doing that for decades. What if the original AI in the QC verse emerged from basically a random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomize our genomes? In that case they are as obscure as we are right now, and people would not be able to fully understand them. In that case, are their emotions real?
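Machine evolution of the kind described can be sketched in a few lines of Python. This is the classic toy "OneMax" genetic algorithm (all names and parameters invented for illustration — real evolved programs use far richer genomes than bitstrings):

```python
import random

GENOME_LEN = 20

def fitness(genome):
    # The "task" the programmer defined: maximize the number of 1s.
    return sum(genome)

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(GENOME_LEN)
            child = mom[:cut] + dad[cut:]       # crossover
            if random.random() < 0.1:           # occasional mutation
                i = random.randrange(GENOME_LEN)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The programmer specifies only the fitness test; the solutions that emerge were never written by anyone, which is exactly the obscurity being argued about.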

What about including biological or quantum processors in the minds of these AIs to enable sentience? In that case they can make the same sudden leaps of logic from A to D that characterize human thinking. Non-linear prediction. If they are capable of this, are they not enough like us to be considered sentient?

It's a slippery slope to make an argument that something which looks, and acts as if it is sentient isn't because of some ineffable quality. It inevitably leads to some supernatural explanation, and at that point you can arbitrarily choose which beings are sentient or not based on whatever criteria you wish.

This is the argument white Europeans used to justify slavery. After all, the reasoning went, though they acted like it, Africans could not be human; they just pretended at emotions and honor. They had no souls.
Title: Re: Robots and love
Post by: Near Lurker on 06 Sep 2011, 10:41
It's entirely possible sentience is an emergent trait, but you can program emergent traits as well through machine evolution. They have been doing that for decades.  What if the original AI in the QC verse emerged from basically a random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomizes our genomes.

I was with you until here.  That's just not going to happen.  A program isn't going to evolve unless it's programmed to evolve, and even then, it would need a very wide berth, wider than ever has been given, to evolve a human-like mind the way animals did.  We're not going to accidentally the Singularity.  And words like "quantum" and "emergent" don't justify mumbo jumbo; the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand (http://en.wikipedia.org/wiki/Quantum_circuit)), and the latter pretty much never.

EDIT: Somehow quoted wrong part.
Title: Re: Robots and love
Post by: Is it cold in here? on 06 Sep 2011, 11:23
An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.
Title: Re: Robots and love
Post by: Kyrendis on 06 Sep 2011, 12:01
But what happens when we get a thorough enough knowledge of the brain to be able to do the same thing for humans? When we will be able to trace the path in our own minds from first order stimulus through processing to action or emotion and understand fully how each step goes, even being able to manipulate it. Will we at that point suddenly become machines simply because of transparency?

I was with you until here.  That's just not going to happen.  A program isn't going to evolve unless it's programmed to evolve, and even then, it would need a very wide berth, wider than ever has been given, to evolve a human-like mind the way animals did.  We're not going to accidentally the Singularity.  And words like "quantum" and "emergent" don't justify mumbo jumbo; the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand (http://en.wikipedia.org/wiki/Quantum_circuit)), and the latter pretty much never.

History has taught me that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not proof that something is impossible, just that it's complex. Weather prediction is complex, and we are getting better and better at it as faster computers emerge.

If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.

And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.

And I tend to agree with "emergent" being a nonsense word that really means "too complex for us to understand, but still logical". However, previous suggestions in the thread had led me to believe people held that notion, so I was merely covering my bases.

Also you seem to have quoted the wrong part, since that doesn't match up with what you are arguing. You're talking about AI, quote is talking about understanding of the human mind.

An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.

You could predict it by studying the behaviour of the entire whole, though; that's the point. All that argument says is that a lung cell is not a human. Which is a truism, yes, but not a good argument. If you could analyze and understand the whole, you could predict what the whole could do. That's not emergent behaviour, that's just complexity.
Title: Re: Robots and love
Post by: Near Lurker on 06 Sep 2011, 14:41
An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.

There are two major flaws in that argument:
1. Neither ants nor neurons were (as far as we know) created by an external intelligence for a single purpose.
2. Studying one in isolation, no, but with the knowledge that they typically don't work in isolation, and of the various permutations, you could extrapolate with ants, and you could come to the vague idea that neurons could, conceivably, form part of a processor.  Communication between programs tends to be very limited.

I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.

I didn't say it wouldn't be done; I said it wouldn't be done by accident.

If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.

It won't.  But even if it did, a faster computer can't get around our relative lack of understanding of the human mind.

And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.

There's no reason to give a program designed for a specific task that wide a berth, though.  That's why computers were built even when they had very limited processing power: they could do predefined tasks very well, and that's still what they're used for, just with much more complex tasks.  Some are allowed to learn from past experiences, but even then, the scope of how they can apply their gained knowledge is predetermined.  To create programs that interact and mutate to the extent that would allow sentience to develop, for any purpose other than AI research, would be defeating the very purpose.
Title: Re: Robots and love
Post by: Carl-E on 06 Sep 2011, 15:24
But there is  AI research.  And (http://ai.cs.washington.edu/) it's (http://domino.research.ibm.com/comm/research.nsf/pages/r.ai.html) progressing (http://www.jair.org/)...
Title: Re: Robots and love
Post by: Near Lurker on 06 Sep 2011, 15:51
You've completely missed the point of what I said.
Title: Re: Robots and love
Post by: Kugai on 06 Sep 2011, 17:10
That name ships like a Great Lakes freighter.  :evil:


The Edmund Fitzgerald ?
Title: Re: Robots and love
Post by: Mad Cat on 06 Sep 2011, 17:34
Anyone who thinks they can just read the source code of a robot that is capable of showing emotional reactions has never studied computer theory. There are the classes of NP problems, NP-hard problems and NP-complete problems (https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29), and beyond those, problems that are flat-out undecidable. The most famous undecidable problem is the halting problem: is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably "no."
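To make the gap between approximating and deciding concrete, here's a Python sketch (program names invented): a step-bounded checker is easy to write, but a "no" from it only ever means "didn't halt *yet*", and Turing proved no cleverer trick closes that gap.

```python
def halts_within(program, budget):
    """Run `program` (a generator yielding once per step) for at most
    `budget` steps; return True if it finished.  NOT a halting decider:
    False only means the program didn't halt within the budget."""
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True
    return False

def slow_halter():
    # Halts -- but only after 1000 steps.
    for _ in range(1000):
        yield
```

Any finite budget wrongly flags some halting program as a non-halter, so bounded simulation can never be promoted into a proof.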

And then there are computation systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person who can look at a graph of a suitably usable ANN and simulate it in their head so that they can accurately predict its response to any given input.
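For concreteness, here is a minimal sketch (in Python, with entirely made-up weights) of the kind of network being described; even at this toy scale, reading the behaviour straight off the weight matrices takes real effort:

```python
# A minimal feedforward network: 2 inputs -> 2 hidden neurons -> 1 output.
# All weights are invented for illustration; a trained ANN would have
# thousands of such numbers, none individually meaningful.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W_hidden = [[0.5, -0.6], [-0.3, 0.8]]   # incoming weights for two hidden neurons
W_output = [1.2, -0.7]                   # weights into the single output neuron

def forward(inputs):
    # Each neuron: weighted sum of its inputs, squashed through a sigmoid.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in W_hidden]
    return sigmoid(sum(w * h for w, h in zip(W_output, hidden)))

print(forward([1.0, 0.0]))
```

The output is just a number between 0 and 1; nothing in the weight tables announces what it "means".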

And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals only programmed to act individually and communicate with their like. The result of the communication alters their individual behaviour and cooperation emerges.

You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.

A case-based reasoning system designed for music recommendations proves capable of food recommendation. That is emergent behaviour.

In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory. This one from algorithm analysis. Big-O n squared. O(n^2). As n, the complexity of the system, grows, the effort to analyze it grows by n^2. Truly warped levels of complexity can grow as O(n^n).

These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".
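The n^2-versus-n^n growth mentioned above can be put in rough numbers (a sketch; n here is just an abstract "system complexity" parameter):

```python
# Growth of hypothetical analysis effort: n^2 vs n^n.
# Even at n=20, n^n is already ~10^26 steps -- far beyond any real machine.
for n in (2, 5, 10, 20):
    print(f"n={n:2d}  n^2={n**2:<4d}  n^n={n**n}")
```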
Title: Re: Robots and love
Post by: Near Lurker on 06 Sep 2011, 19:14
Anyone who thinks they can just read the source code of a robot that is capable of showing emotional reactions has never studied computer theory. There's the class of NP problems, NP-hard problems and NP-complete problems. https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29 One of the most famous NP-complete problems is the halting problem: is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably, "No."

Wow.  This is wrong on so many levels.

First off, the Halting problem is not NP-complete.  I guess it's NP-hard, in a useless "if two is three, I am Pope" sense, but for it to be NP-complete would imply that it were in NP, and therefore could be solved in exponential time and polynomial space, and it can't be.  It can't be solved at all, which is the only thing in this paragraph you got right.  It's trivial, of course, to write a program that shows in finite time that another does halt, but there's no way to write one that can show the reverse in all cases.  This doesn't mean that you can't write a program to analyze source code in the vast majority of real-world cases, and it certainly doesn't mean a human can't heuristically "crack it open for a look."  You claim to have studied computer theory, and you've never done that?  Even for very complicated programs?
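The asymmetry being described - confirming a halt is easy, refuting one is impossible in general - can be sketched as a toy model where "programs" are Python generators and each yield is one execution step:

```python
# Semi-deciding the halting problem: run the candidate with a step budget.
# If it finishes within the budget, we KNOW it halts; if the budget runs
# out, we have learned nothing. No general procedure can close that gap.

def halts_within(program, budget):
    """program is a generator function: each yield is one execution step,
    and returning (StopIteration) means the program halted.
    Returns True if a halt was observed, None if inconclusive."""
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True
    return None   # NOT False: the program might still halt later

def halting_program():       # counts to 3, then stops
    for _ in range(3):
        yield

def looping_program():       # loops forever
    while True:
        yield

print(halts_within(halting_program, 100))   # True
print(halts_within(looping_program, 100))   # None
```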

And of course, as you really ought to know, it technically isn't proven that NP-complete problems don't have polynomial-time algorithms (yet).

And then there are computation systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person who can look at a graph of a suitably usable ANN and simulate it in their head so that they can accurately predict its response to any given input.

In their head?  No, of course not.  But at the end of the day, it's just another kind of code, and can be analyzed like any other code.

And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals only programmed to act individually and communicate with their like. The result of the communication alters their individual behaviour and cooperation emerges.

You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.

A case-based reasoning system designed for music recommendations proves capable of food recommendation. That is emergent behaviour.

The phrase "emergent behavior" is so vaguely defined that it can encompass all these things and more, and its use in this context boils down to faith.  The point, however, is that in all these examples, the software stayed within its design; it can't do what it wasn't built to do.  That's woo.

In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory. This one from algorithm analysis. Big-O n squared. O(n^2). As n, the complexity of the system, grows, the effort to analyze it grows by n^2. Truly warped levels of complexity can grow as O(n^n).

These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".

Who said deterministic?  It would, of course, be a heuristic understanding, or if necessary, an approximate one, just like we're trying to understand the human brain right now, or, indeed, everything else in nature.

"These things cannot be analyzed in the existing lifetime of the universe" and "n^n" is an interesting juxtaposition.  While it's generally true that polynomial-time algorithms are desirable, a small system can certainly be analyzed, even if the analysis takes time n^n, and some problems are so hard to get down to n^2 time complexity that such algorithms can't be implemented in the life of the universe.  Between this and your garbled understanding of NP-completeness, you kind of sound like you've been flipping ahead in your textbooks.
Title: Re: Robots and love
Post by: Kwark on 06 Sep 2011, 19:32
Every day, it saddens me to witness the vanity of my fellow humans.

Is love anything more than chemistry? I don't think so.

It's only a matter of years before computers can emulate humanity, and they most likely will have to bridle themselves to mimic our many flaws.

Like the DOSBox we use to play vintage video games on our absurdly fast computers.

Our brains are nothing more than organic processors, and I call vain anyone who claims otherwise, unless they can show some proof or at least concrete reasoning for it.
Title: Re: Robots and love
Post by: Mad Cat on 06 Sep 2011, 19:37
Or I finished reading the textbook a long time ago and put it down.

And I'd really like to see an example of your "small system" that can exhibit intelligent behaviour, let alone emotional behaviour.

Kwark: "Love? Overrated. Biochemically no different than eating large quantities of chocolate."
Title: Re: Robots and love
Post by: Near Lurker on 06 Sep 2011, 22:10
Or I finished reading the textbook a long time ago and put it down.

Yeah... not really buying it.  Thinking QSAT or factorization were NP-complete might be ascribed to being rusty.  Thinking the halting problem is NP-complete, alongside the implication either that NP-complete problems can't be solved at all or that the halting problem is exponential-time, can't be.  Everyone forgets a lot of material from high school, too, but if someone told you that the fundamental theorem of calculus were the chain rule, you'd think that person failed high school!

And I'd really like to see an example of your "small system" that can exhibit intelligent behaviour, let alone emotional behaviour.

...okay.

This is completely true.

It's just that, by that point, seeing the rest of the post, especially that first paragraph, I was in full-on condescension mode.  Even though it was absolutely right in context, I saw a statement of a rule of thumb not true in the general case and pounced.

Yyyeaaahhhhh...
Title: Re: Robots and love
Post by: Skewbrow on 06 Sep 2011, 22:11
Too much out of context quoting taking place here :x. So I will add some?  :evil:

1. Moore's law is bound to fail sooner rather than later. There simply cannot be perpetual exponential growth. See the discussion on XKCD-forum on the strip about compound interest for some numbers.

2. I'm with Near Lurker on all counts related to complexity: NP, NP-hard etc. @MadCat: What it means that a small instance of a difficult problem can still be completely analyzed is, perhaps, best exemplified by the following. The general travelling salesman problem (= given a map and a number of cities, find the shortest route going via all the cities) has no known polynomial time algorithm. Yet it is trivial to solve the problem, when the number of cities is small, say 8, by the brute force method of checking all the 40320 possible orders of visiting the cities. When n=48 (the state capitals being the standard example), the solution is not known, and exhaustive checking of possibilities is out of the question. The question "is there a route of combined length less than a given figure" is in NP, because it is trivial to quickly check a suggested solution. Finding that suggestion OTOH...

3. I apologize for bringing up the catch-phrase "emergent behavior". I certainly won't even attempt to define what it means :evil:

4. AI research is, indeed, making progress, but I don't know where they are, or what it means, so I won't comment for fear of saying something untrue. Thanks for the links, Carl-E.


I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.

If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholesale will be trivial.


5. Selective sampling at work there. Most of the time, when a reputable scientist says that something is impossible, it truly is impossible. The occasions when s/he is wrong just get a lot of publicity.

6. Our ability to predict weather over a longer period is not limited by lack of computing power. The butterfly effect takes care of that: more accurate prediction would need more accurate data on the current situation (wind speed, humidity, temperature at *every frigging point in the atmosphere* - ok, not every point, but the density of the network of sensors places an upper bound on how long the forecast stays valid), and that data is clearly impossible to get. Simulating a mind is more or less the same, but it's hard to tell exactly, depending on how many parameters we need to determine the behavior of a single neuron.

[edit: added the two words in bold]
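Point 2's 8-city example is small enough to run directly; a brute-force sketch (with made-up random coordinates) that checks all 40320 visiting orders finishes in well under a second:

```python
# Brute-force travelling salesman over 8 cities: all 8! = 40320 orders
# are checked directly. At n=48 the same approach would need ~10^61 checks.
import itertools
import math
import random

random.seed(0)                                        # made-up, repeatable map
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    # Length of the closed tour visiting the cities in the given order.
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

best = min(itertools.permutations(range(8)), key=tour_length)
print(best, tour_length(best))
```

Checking a *suggested* tour against a length bound is one call to `tour_length` - which is exactly why the decision version is in NP while finding the tour is the hard part.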
Title: Re: Robots and love
Post by: akronnick on 07 Sep 2011, 03:33
Can Robots Love?

Short answer: There is no short answer.

Longer answer: To answer that, we must first answer two other questions: "What is a robot?" and "What is love?"

I think much of the disagreement about this question stems from inconsistent understanding of the answers to the two questions.
Title: Re: Robots and love
Post by: pwhodges on 07 Sep 2011, 08:20
Today's XKCD (http://xkcd.org/948/) has a view on AI.
Title: Re: Robots and love
Post by: Carl-E on 07 Sep 2011, 10:08
You've completely missed the point of what I said.

No, I don't think so...

After refuting a few other posts, you said: 

And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve as a consequence if it has the capacity. And yes, that would be a wider berth than has ever been given; that's kind of a given.

There's no reason to give a program designed for a specific task that wide a berth, though.  That's why computers were built even when they had very limited processing power: they could do predefined tasks very well, and that's still what they're used for, just with much more complex tasks.  Some are allowed to learn from past experiences, but even then, the scope of how they can apply their gained knowledge is predetermined.  To create programs that interact and mutate to the extent that would allow sentience to develop, for any purpose other than AI research, would be defeating the very purpose.

The part in bold was my  point - AI research is ongoing, and people do  try programming learning behaviours with a wide berth.  That is  the purpose.  Everything else you said there is assuming it isn't done, but then you mention the one place where it is  done. 

And it will still probably be an accident...
Title: Re: Robots and love
Post by: Random Al Yousir on 07 Sep 2011, 12:40
Today's XKCD (http://xkcd.org/948/) has a view on AI.
On AI and on Burning Man.  I had to look that up and, I must say, I'm certifiably impressed.

As for deploying AI to build chatterbots: am I the only one who had to think of the old adage from E. W. Dijkstra (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1036.html):

"The effort of using machines to mimic the human mind has always struck me as rather silly: I'd rather use them to mimic something better."
Title: Re: Robots and love
Post by: Skewbrow on 07 Sep 2011, 21:40
Thanks, RAIY. Dijkstra's lecture makes for interesting reading.
Title: Re: Robots and love
Post by: Random Al Yousir on 08 Sep 2011, 03:23
You're welcome.   :-)

Of course, the language related ramblings of Dijkstra are hopelessly outdated.  Since you are a mathematician, you might enjoy the publications of Philip Wadler (http://homepages.inf.ed.ac.uk/wadler/).

For the ambitiously pedagogical perspective you might find the work of Matthias Felleisen (http://www.ccs.neu.edu/home/matthias/) and Shriram Krishnamurthi (http://www.cs.brown.edu/~sk/) interesting.  And there's simply no way around Don Box (http://www.pluralsight-training.net/community/blogs/dbox/default.aspx) of Microsoft fame, in this field.

For the borderlining geeky stuff LtU (http://lambda-the-ultimate.org/) is the best aggregator I know of.
Title: Re: Robots and love
Post by: Skewbrow on 08 Sep 2011, 10:05
Thanks, but I prefer to read material reinforcing my prejudice "Object-oriented programming is the bane of hobbyists and the source of much grief", so Dijkstra may be my best hope? I lost a dear hobby when support for DOS was discontinued. In those days my PC did what I wanted it to do, not what Microsoft thinks I should be allowed to do with stuff that I own. My programs had full control of the hardware, and didn't need to ask Windoze for permission to do something. They were also free of bugs (well, not always, but after I was done). When the change became inevitable, I suddenly felt very sympathetic to the old-timers smashing Spinning Jennies. Mind you, my livelihood was not at risk, but still...

Dijkstra: "A program with a single bug is not almost correct - it is wrong." <- Something that should be sent to Bill Gates' inbox each and every time Microsoft releases a "critical upgrade"
Title: Re: Robots and love
Post by: pwhodges on 08 Sep 2011, 12:01
I had written my own real-time, multi-tasking operating system in the 1970s.  For me DOS represented a huge step backwards, both in terms of the lack of security, and in terms of reliability - so I hated it like you hated Windows. 

OS/2, and especially OS/2 v2 showed the way forward for small systems, but MS managed to subvert it, first with Windows, then by changing the direction of the NT (originally aka OS/2 v3) development; it was also MS who imposed on IBM the one truly bad design decision in the OS/2 Workplace Shell (the Single Input Queue), and IBM never fixed that because they were not prepared to break full compatibility for existing customers over it (also a bad decision). 

Dijkstra and Hoare (and later, Knuth) were my gods; but in the end I have always allowed pragmatism to temper idealism, because I could see that the perfect theoretical world of program proofs was simply going to get left behind.
Title: Re: Robots and love
Post by: Random Al Yousir on 08 Sep 2011, 12:38
Skewbrow, there are no prejudices against object-oriented programming.  There are hard facts.

And it was Dijkstra who pointed out the need to be painfully aware of the limited size of your skull.  Which induced much (intentionally) badly pretended yawning, IIRC.

At some point you just have to deal with the need to abstract "away" complexity, which might be a no-brainer for a mathematician; however, computers are real-world, unclean, mutable state machines, and as long as you insist on controlling the whole shebang "per pedes" you will run into the oh-so-tight limits of your skull.  If you want to go for something more ambitious, you want to go for abstraction.  And abstractions are leaky (http://www.joelonsoftware.com/articles/LeakyAbstractions.html).  Another hard rule.  Rock hard, in fact.

That's why I think you might enjoy the research of Philip Wadler; his work is centered around a mathematically "clean", referentially transparent, lambda calculus/combinatorics-based approach to the art of abstraction.
A good start into the topic should be here (http://homepages.inf.ed.ac.uk/wadler/papers/frege/frege.pdf) to "hook you up".  Maybe this vein of research could do the job to restore your faith in computer science.

Oh, and pwhodges is right: There will be bugs.  The way to go is to find a way to deal with this fact.

Object-oriented programming is a hack-up, a kludge.  You won't find much disagreement, there.  But it's a necessary kludge, because it's so damn hard to do things right.
Title: Re: Robots and love
Post by: Skewbrow on 08 Sep 2011, 13:04
Proofs of program correctness! Luckily I never had to do more than a couple to pass that intro course, and those were trivial for a math dude.  :-)

My grumpiness is mostly coming from the fact that in math the tools are eternal, so I was unprepared for the reality that the other tools I took the trouble of learning didn't last a decade.

For work-related programs I just wanted (and still do!) the full computing power. For entertainment with (self-made or bought) games, the same thing was true. On those occasions not having to fight the operating system or share the cycles with something else was nice (if not necessary). Reliability was not an issue. If I crashed the system while debugging a hooked interrupt or by attempting to read past a nil pointer, that was solely my fault, and the PC would reboot in under twenty seconds anyway. If a work-related simulation crashed, that meant lost productivity, but again it was my own fault, and a faulty sim might not give reliable results anyway. Yes, I realize that you cannot run a system serving several people with an attitude like that, but the point is that my PC was truly personal.

I did start learning Delphi for Windows a year ago (they taught me Pascal as an undergrad, so that's what the salesdude recommended). If I find the time, I may do a full comeback - at least port my old stuff to Win. So much to learn/unlearn. :cry:

RAIY seems to have posted more links. Maybe I should make a serious attempt at learning?
Title: Re: Robots and love
Post by: Random Al Yousir on 08 Sep 2011, 13:18
Well, not so much "more" links.  The first one you can safely ignore, the second one is just a good starting point on Wadler's site, which I referenced already.   :wink:
Title: Re: Robots and love
Post by: pwhodges on 08 Sep 2011, 14:07
My gripe with the concept of proving program correctness is that it seems to me that in real life the writing of a specification against which that correctness is to be proved is in fact the same problem as writing the program.  So nothing is gained.

IYSWIM
Title: Re: Robots and love
Post by: Carl-E on 11 Sep 2011, 05:21
Cross posted for relevance. 

(http://www.unintentionallypretentious.com/comic/momo.png)
Title: Re: Robots and love
Post by: Is it cold in here? on 11 Sep 2011, 12:50
Is it ethical to include grief in the set of emotions an artificial life form can feel?

What if it's not a matter of choice, if the life form was created accidentally?

Is grief inevitable when love exists?

EDIT: we've never seen religious feelings or activity by an AnthroPC. Are they that different from us? Is it a different feeling when you know for a fact who your creators were and don't have to take it on faith? How is religion different for a being that doesn't have to confront mortality?

EDIT: what DO they do when their human companion dies? I can imagine Pintsize packing Marten's corpse with gunpowder and tossing it in a volcano. I can definitely see Momo performing a quiet dignified Shinto ritual at the graveside. Winslow would be stuck for a response.
Title: Re: Robots and love
Post by: Carl-E on 11 Sep 2011, 14:00
I think rather that Winslow would give one hell  of a eulogy, filling all who heard it with a deeper respect and understanding of Hannelore, and at the same time, with an increased love for the lives they've been given. 


He just seems that type. 

Edit:  Focking typos...
Title: Re: Robots and love
Post by: jwhouk on 11 Sep 2011, 16:35
I think rather that Winslow would give one hell  of a eulogy, filling all who heard it with a deeper respect and understanding of Hannelore, and at the same time, with an increased love for the lves they've been given. 


He just seems that type. 

And it woud have typos because he tried to use an iPad to write it wth.
Title: Re: Robots and love
Post by: Carl-E on 11 Sep 2011, 18:44
What, you've never been given an Ives?
Title: Re: Robots and love
Post by: jwhouk on 11 Sep 2011, 18:52
No, because my parents didn't listen to Burl very much - other than during Christmastime.
Title: Re: Robots and love
Post by: DSL on 11 Sep 2011, 19:08
I was sent an Ives by currier.
Title: Re: Robots and love
Post by: Carl-E on 11 Sep 2011, 21:01
C:/audio/soundfX/rimshot.mp3 (http://questionablecontent.net/view.php?comic=747)
Title: Re: Robots and love
Post by: Akima on 12 Sep 2011, 19:15
Is grief inevitable when love exists?
Grief is inevitable. The desire to not be separated from loved ones is perhaps the hardest attachment of all to overcome.

Quote
EDIT: we've never seen religious feelings or activity by an AnthroPC. Are they that different from us? Is it a different feeling when you know for a fact who your creators were and don't have to take it on faith? How is religion different for a being that doesn't have to confront mortality?
Do robots pray to electric gods, you mean? Regardless of her mortality, Momo is not immune to the tragedies and imperfections of the universe, and the Four Noble Truths (http://en.wikipedia.org/wiki/Four_Noble_Truths) would apply to her as much as to any other sentient being. Not every religion comes with a built-in creation myth, or concerns itself much with the creation of the universe, or even considers that the universe had a beginning at all.

As I said in the fan-art thread, I'd have expected Momo to sit in seiza or kekkafuza with her hands in the classic gassho position in this situation, but I suppose cultural conditioning would be a different thing for her.
Title: Re: Robots and love
Post by: Is it cold in here? on 12 Sep 2011, 21:04
Is it possible to have love without attachment, and could an AnthroPC do it easily? Is it an option for the robot to say "rm -rf /proc/love/Marigold"? If so, would we consider that to really be love? ("It's love, Jim, but not as we know it.")

(But Momo has a registry, so it's probably impossible to remove all the traces and it would be like continuing to find clumps of cat hair for years after your cat died).

I had been wondering if AnthroPCs might gravitate to one of the less supernaturally-oriented religions, and there would be commercial advantages to installing, say, Confucianism on them.
Title: Re: Robots and love
Post by: Akima on 13 Sep 2011, 01:08
I had been wondering if AnthroPCs might gravitate to one of the less supernaturally-oriented religions, and there would be commercial advantages to installing, say, Confucianism on them.
Hmm... Well, I can see how ren (altruism and humanity), li (adherence to custom), zhong (both personal loyalty and respecting your place in the social order), and xiao (filial piety, presumably with the robot's owner as the target) might seem like good things to program into AnthroPCs, but they might take seriously and literally the (frequently disregarded) obligations Confucius laid on rulers/social superiors in turn. You wouldn't want your robot deciding that you had lost the Mandate Of Heaven (http://en.wikipedia.org/wiki/Mandate_of_Heaven), really you wouldn't (http://en.wikipedia.org/wiki/Yellow_Turban_Rebellion).
Title: Re: Robots and love
Post by: DSL on 13 Sep 2011, 06:35
The obligations on those in power are disregarded when a belief system is subverted to political ends? Well, Confucianism has that much in common with everything else.
Title: Re: Robots and love
Post by: Carl-E on 13 Sep 2011, 10:27
You wouldn't want your robot deciding that you had lost the Mandate Of Heaven (http://en.wikipedia.org/wiki/Mandate_of_Heaven), really you wouldn't (http://en.wikipedia.org/wiki/Yellow_Turban_Rebellion).

I love it!  "You've attained power, so clearly the powers that be are pleased with you, and since you were meant to have it, please continue to do as you wish", balanced with "We're not happy with what you've been doing, so the powers that be must be displeased with you as well.  Please leave the keys to the palace with the attendants as you are 'escorted' out." 

Who says China's never had democracy?!?  It's a lot closer than this republic stuff we have in the US...
Title: Re: Robots and love
Post by: Is it cold in here? on 13 Sep 2011, 11:34
If an AnthroPC thinks its "owner" isn't living up to the obligations of authority, it's free to leave, unlike a peasant. But that's only the theory. They bond emotionally to their human companions. Can a robot get battered spouse syndrome? What if they're economically dependent? Momo almost couldn't get a job: what if she'd lived with a human who wasn't good to her the way Marigold has been?
Title: Re: Robots and love
Post by: Random Al Yousir on 13 Sep 2011, 11:47
You wouldn't want your robot deciding that you had lost the Mandate Of Heaven (http://en.wikipedia.org/wiki/Mandate_of_Heaven), really you wouldn't (http://en.wikipedia.org/wiki/Yellow_Turban_Rebellion).

I love it!  "You've attained power, so clearly the powers that be are pleased with you, and since you were meant to have it, please continue to do as you wish", balanced with "We're not happy with what you've been doing, so the powers that be must be displeased with you as well.  Please leave the keys to the palace with the attendants as you are 'escorted' out." 

Who says China's never had democracy?!?  It's a lot closer than this republic stuff we have in the US...
It certainly beats a God resembling a really bad cop.  But as "checks and balances" go, it's a bit flimsy.
Title: Re: Robots and love
Post by: bhtooefr on 13 Sep 2011, 11:52
So, one thing in animals that helps with both romantic and mother-child bonds is oxytocin.

Basically, in response to certain stimuli (IIRC mental stimuli included, but primarily sexual stimuli, childbirth, and nursing), the body releases oxytocin, which causes a bonding effect. So, a chemical changes how we perceive the person that induced the oxytocin release.

Not only that, but it's been found that love in humans has extremely similar effects to some hard drugs. We do everything we can for the next oxytocin hit.

In a robot, it could easily be programmed such that it can record actions that cause certain variables to increase, have the variables decay over time, and then once they fall below a certain point, have the robot try to increase those variables through actions. Even I could do that, and I'm a crap programmer. Bam, now you have a simulation of a biochemical reward system, in a robot. Tie the stimuli that cause "love" in humans to that reward system, and now you have partially simulated romantic and mother/child love. (Also, that could be an explanation for why AnthroPCs have libidos - the same chemical reward mechanism for bonding to children is used for bonding to romantic partners, in animals. Given that some AnthroPCs are meant to be caretakers for their humans...)
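A crude sketch of the loop being described, with every name and constant invented for illustration - drives decay each tick, and the agent acts to restore whichever is most depleted:

```python
# Toy "biochemical reward" loop. Drive names, actions, and the decay
# constant are all made up; this only illustrates the decay-and-restore
# mechanism, not any real AnthroPC design.

DECAY = 0.95   # per-tick decay factor for every drive

class Agent:
    def __init__(self):
        self.drives = {"bonding": 1.0, "novelty": 1.0}      # 1.0 = satisfied
        self.memory = {"hug": "bonding", "explore": "novelty"}  # action -> drive it restores

    def tick(self):
        # All drives decay toward 0 over time.
        for k in self.drives:
            self.drives[k] *= DECAY
        # Act on the most depleted drive and restore it (the "oxytocin hit").
        neediest = min(self.drives, key=self.drives.get)
        action = next(a for a, d in self.memory.items() if d == neediest)
        self.drives[neediest] = 1.0
        return action

agent = Agent()
print([agent.tick() for _ in range(4)])
```

The agent ends up "seeking out" bonding behaviour for no reason other than a decaying variable - which is the whole point of the analogy.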

So, the biochemical reactions that lead to parts of what we know as "love" in humans can be reproduced fairly easily.

Also, regarding the programming thing... I'd argue that humans are programmed, too. Think of how we handle early childhood education - granted, some of it is genetic - that could be the "seed program" that has the learning behavior, in a robot - but quite a lot of human behavior is learned. You could say that parents, teachers, etc. program children to act in a certain way. Just look at cultural differences in behavior - children raised in different cultures can behave very, VERY differently, and even reward systems can end up wired differently. I'd argue that a being programmed to love isn't really that different from a being that "naturally" loves.
Title: Re: Robots and love
Post by: Is it cold in here? on 14 Sep 2011, 21:27
If a robot's behavior is indistinguishable in every way from a human's love, is it even a meaningful statement to say that one is real and the other isn't?
Title: Re: Robots and love
Post by: jwhouk on 14 Sep 2011, 21:41
Ah, but how do you completely distinguish it from human love?
Title: Re: Robots and love
Post by: Near Lurker on 15 Sep 2011, 11:13
The first problem is to rigorously define love, after three thousand years spent bickering over how to define it colloquially.

Also, while I was addressing Mad Cat earlier, I missed this diarrhea of the keyboard hiding under the filk.  I can smell the philosophy degree from here.

In all seriousness, the argument that digital systems are only capable of moving data around, performing arithmetic, and comparing digital values flies in the face of chaos theory and emergent behaviour. As soon as you have more than one digital processor operating asynchronously, you have chaos. As soon as you have a source of data to a single digital processor that is derived from a chaotic source, you have chaos, and with chaos, you get emergent behaviour. Emergent behaviour like emotions.

"But Cat," I hear you say, "multi-core processors have been around for years and work just great." Yes, they do... with synchronization mechanisms in both hardware and the OS. As soon as you start investigating cluster OSes, MPI, OpenMosix, etc., where computers are connected only by network connections yet have to cooperate on large problem sets, you gain an appreciation for the need for synchronization mechanisms and get an idea of how weird computers can behave when things occur in an unusual sequence.

Why would chaos become anything we'd recognize as emotion?  You're literally suggesting here that sentience will arise from a random malfunction, one that doesn't aid function, which is the only reason to reproduce code that doesn't work as expected.  What you're suggesting is akin to mammals walking fully-formed out of the primordial sea under conditions more favorable to algae.

"But Cat," I hear you say, "no digital system can generate chaotic data." Au contraire, I say to you. PC northbridge chipsets and CPUs have, for a long time, featured devices with that very purpose in mind. They're called thermistors - tiny resistors whose resistance changes with temperature - paired with analogue-to-digital converters with a high level of precision. By passing a small voltage, even one known a priori with a high level of precision, through that thermistor, there is no real, deterministic way to predict what voltage will come out the other end, since it depends on the temperature of the thermistor at the time of the measurement. If you then feed that voltage into a high-precision ADC, you get a sequence of digital bits which represents that voltage as measured. The thing is, if the thermistor is of a relatively low quality, it will have very uneven fine-grained behaviour. A tiny temperature change in one temperature regime will have a large effect on the measured voltage, while a similarly tiny change of temperature in another temperature regime will have a similarly tiny effect. And the sizes of these effective changes in measured voltage can themselves change over time.

What I'm saying is that while the most significant bits in the ADC output might be perfectly predictable (if the CPU's been running for time A under load Y, then its temperature should be Z and the ADC bits will be 0011010011101XXX), the first 13 bits might be predictable with a high degree of certainty, assuming those preconditions are known with sufficient precision, but the last three bits of the 16-bit ADC output will be utterly chaotic and unpredictable. For security, just take the last bit of several sequential ADC measurements and you can amass a HUGE storehouse of genuinely random digital bits. In the parlance of digital computational hardware, this is an RNG, or Random Number Generator. This is true randomness, not the pseudo-randomness of a deterministic random-number-generator algorithm, whose output is completely predictable once the initial "seed" value is known. There is no known physical mechanism whereby the value output by a hardware RNG may be predicted. Thus, if your idealized computational arithmetic operations are fed these RNG values, the computation too takes on the characteristics of a chaotic system.
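The harvesting scheme described above can be sketched like this. Note this is a simulation, not real hardware access: `read_adc` and its bit layout are invented for illustration, with `random.getrandbits` standing in for thermal noise. The point is the harvesting step: keep only the least significant bit of each 16-bit sample.

```python
import random

def read_adc():
    """Simulated 16-bit ADC sample: 13 predictable high bits (the
    'CPU temperature' part) plus 3 chaotic low-order bits, faked
    here with random.getrandbits in place of thermal noise."""
    stable = 0b0011010011101 << 3   # the predictable top 13 bits
    noise = random.getrandbits(3)   # the unpredictable bottom 3 bits
    return stable | noise

def harvest_bits(n):
    """Keep only the least significant bit of n successive samples."""
    return [read_adc() & 1 for _ in range(n)]

bits = harvest_bits(64)
print(bits)  # 64 hard-to-predict 0/1 values
```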

And don't even get me started on startup conditions. Computational chaos was first discovered in supposedly deterministic weather prediction software, when the same simulation was restarted partway through using slightly rounded-off values from an earlier run and produced wildly different results. Your idealized computing device might only be capable of moving data around, performing arithmetic upon it, and comparing digital values, but that's only in the idealized world. Robots in the QCverse, just like actual electronic digital computing devices in our world, have to operate as embodied real-world hardware, where the idealized rules can be broken.
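That weather-model anecdote is easy to reproduce with the classic logistic map, a one-line deterministic system (a stand-in for illustration, not the actual weather code): restart it with a difference of one part in 10^12 and the two runs soon bear no resemblance to each other.

```python
def logistic(x, r=4.0):
    """One step of the r=4 logistic map: fully deterministic, fully chaotic."""
    return r * x * (1.0 - x)

a = 0.2            # original run
b = 0.2 + 1e-12    # "restarted" run with a tiny rounding error
max_gap = 0.0
for _ in range(80):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the 1e-12 difference has grown to order 1
```

The perturbation roughly doubles every step, so after a few dozen iterations the two trajectories are completely decorrelated even though every single operation was exact, deterministic arithmetic.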


I'm sorry.  Before, I was using the word "deterministic" as though I were talking to someone who actually knew what it meant, rather than using it as a blanket term for anything that goes against pop-chaos-theory woo.  If there's any randomness or pseudorandomness, different results on the same startup conditions, even on occasion vastly different results, can be expected.  And even if there isn't, yes, occasional malfunctions to be expected.  However, you're not going to see certain kinds of patterns spontaneously arise and persist without environmental pressures tending to favor them.  That's so far from proper chaos theory, it would be like Newton feigning the hypothesis that the planets were moved by myriad literal, invisible hands of God.

No, all that is just a long-winded way of saying "computers malfunction in all these ways, and if they malfunction enough, they might become real boys!"  (Also that some set up sources of true randomness - but numbers so obtained aren't actually going to do anything they're not programmed to.)  Even if this were possible, what you're describing isn't "artificial intelligence" in any real sense, but just intelligence that happens to pop up near a computer, like a Godzilla for the information age.  You're anthropomorphizing the programs we have in a way that's just not supported by anything; why would an agent arising from malfunctions have meaningful access to the "deterministic" algorithms (many of which are, of course, randomized, with a pseudo-RNG or a physical one, but "deterministic" in your sense) of the idealized computer that the physical computer was designed to run, and most of its power in society stems from its running, as faithfully as possible?  Machinery approaching as closely as possible an "idealized," "deterministic" computing device is what Momo, Winslow, Pintsize, and the cute robot clerks all appear to run on, since if not, they couldn't be faithfully transferred between chassis as they are.

The part in bold was my point - AI research is ongoing, and people do try programming learning behaviours with a wide berth. That is the purpose. Everything else you said there is assuming it isn't done, but then you mention the one place where it is done.

The fact that you think you have to tell me this is exactly why I say you've missed the point.

And it will still probably be an accident...

An accident only in the broadest sense, that an exploration into the nature of sentience or a large-scale simulation of the human mind might yield better results than expected.  I don't buy that it will come from the kind of "evolution" Kyrendis was describing.
Title: Re: Robots and love
Post by: Is it cold in here? on 15 Sep 2011, 13:50
The first problem is to rigorously define love, after three thousand years spent bickering over how to define it colloquially.
I propose bypassing that question by asking "Would we call it love if a human did it?". The definition problem appears on both sides of the equation, so just cancel it out.

EDIT, not quite relevant to today's comic:
Is it wrong to neglect a robot? Do they suffer from unrequited love?
Title: Re: Robots and love
Post by: Carl-E on 15 Sep 2011, 23:52
"That's alright, I'll just sit here in the corner and calculate a few more decimal places of pi..."
Title: Re: Robots and love
Post by: Is it cold in here? on 16 Sep 2011, 11:05
You can't get jolted by simulated lightning. Therefore, if you're lying on the ground twitching with your hair on fire, it was real lightning.

If you get all the effects of love from a robot, is that real love?

The glaring flaw in that line of reasoning can be captured by asking "Did ELIZA offer real compassion?".
Title: Re: Robots and love
Post by: jwhouk on 16 Sep 2011, 13:44
To some, yes. To others, no.
Title: Re: Robots and love
Post by: Is it cold in here? on 17 Sep 2011, 10:22
Do AnthroPCs and their humans ever drift apart?
Title: Re: Robots and love
Post by: jwhouk on 17 Sep 2011, 12:34
Do AnthroPCs and their humans ever drift apart?

You have to wonder... it's only been what, maybe a dozen years or so since the first APC "hit the market"? It's not likely that issues like the 'bot outliving its "companion" or robot/human disagreements have come to light enough that it's become an issue.

Of course, if the global AI mind is monitoring things, these situations might be taken care of quickly.

And, of course, we could also be engaging in Epileptic Trees, too... (and to be nice, I won't link the trope. Google it yourself.)
Title: Re: Robots and love
Post by: DSL on 17 Sep 2011, 19:16
Do AnthroPCs and their humans ever drift apart?

Now that it's established AIs can swap chassis, I can imagine a human and AI losing contact with one another and, years later, the human is walking down the street when an ATM or something says, "Hey! Long time no see!"
Title: Re: Robots and love
Post by: Random Al Yousir on 17 Sep 2011, 20:02
Heeh.  Imagine you're an AI who has found employment as a Predator pilot and your job at hand is to take out an old acquaintance of yours ...

I wonder what the social protocol database recommends for this particular occasion.   :evil:
Title: Re: Robots and love
Post by: Carl-E on 17 Sep 2011, 20:47
Sending flowers afterwards. 

 :-D
Title: Re: Robots and love
Post by: DSL on 17 Sep 2011, 21:56
Is the AI operating the Predator remotely or resident in onboard systems?
Title: Re: Robots and love
Post by: Random Al Yousir on 18 Sep 2011, 05:24
Hmm, good point.  The very purpose of the unmanned military aircraft weapon class is to restrict human casualties to the enemy's side.  On the assumption that AIs have human status when it comes to "value of life", it might lead to some heated argument when they are asked to "man" such a vehicle.

On the other hand - humans do man military aircraft, so a resident Predator-operating AI is probably much more comparable to a pilot.  And I'm pretty sure an AI can fly manoeuvres a human can't.
Title: Re: Robots and love
Post by: Skewbrow on 18 Sep 2011, 06:56
A couple of remarks/replies

1) I'm not up to date with what autopilots can do, but 20 years ago a friend explained to me that if a jet fighter got in trouble and was otherwise unable to get rid of its chasers (maybe an enemy missile locked on to it?), there was this emergency button... If the pilot pressed that button, a program would take over and go nuts. It would carry out a sequence of crazy evasive manoeuvres, and then after some time level the plane again. The g-forces would make the pilot pass out, but hopefully s/he would be conscious again when human control was needed. I'm not sure whether that was actually implemented, or whether it was still on the drawing board.

2) Is it not also a point that you can make an unmanned aircraft a bit more compact? You don't need to accommodate the pilot, so..

3) If the destruction of an AI-piloted aircraft becomes imminent, then the AI could also jettison itself. Wirelessly. Hmm, maybe you can't find enough bandwidth to do that in a matter of seconds? A.C. Clarke used that idea in The Hammer of God.
Title: Re: Robots and love
Post by: DSL on 18 Sep 2011, 07:03
Assuming AIs in the QCverse would share the faster reaction times that electronics have in this 'verse, it'd make sense for the AI to be a remote operator. The whole idea of drones in this 'verse is that if you lose one, you lose a bundle of machinery and don't have to write a letter to somebody's family.
Title: Re: Robots and love
Post by: Mr_Rose on 18 Sep 2011, 07:31
Skewbrow:
1) The story is... dubious at best. Actually building such a device on purpose would be an insane waste of resources, even for the US military. And anyway they have their "oh fuck" emergency button already; it's called the ejector seat.
Adding a button that resets the flight computer and leaves the pilot effectively zero control* of the aircraft until it reboots, though...

2) Yes but we know that QC-verse AIs can fit into much smaller volumes than any human pilot and can probably be directly interfaced to the controls.

3) There seems to be some issue there, where AI transfers are bitwise moves rather than copy-and-delete so sending home a copy might not be possible. A miniature ejector pod with an armoured AI core could work in such cases, however. Heck, if the AI processor itself can be shielded sufficiently and given a durable backup power supply, there might be no need to remove them prior to the crash at all.


*Modern fighter jets are (deliberately - it makes them more agile) incapable of maintaining straight and level flight without computer control; removing said computer control could have the effect described but in almost all circumstances it would be easier on the pilot to just bug out.
Title: Re: Robots and love
Post by: Random Al Yousir on 18 Sep 2011, 07:40
DSL:
Chances are that a resident AI manned aircraft outperforms a remotely controlled one any time, partly due to bandwidth restrictions, partly by having the decisive millisecond to its advantage.

Of course, that will only become an issue when AI-controlled aircraft encounter each other in combat, which brings up the question of the global AI mind's stance on the whole thing of "AIs killing each other".
Title: Re: Robots and love
Post by: Carl-E on 18 Sep 2011, 13:46
Well, this thread's half-derailed.  We gots robots, but at war, flying drones - where's the love? 
Title: Re: Robots and love
Post by: Random Al Yousir on 18 Sep 2011, 14:02
Since I'm the culprit here: Could these last entries after my silly joke, that started the derailment, be cut out to start another thread?

My idea for the title would be "Artificial and non-artificial silliness".
Title: Re: Robots and love
Post by: Skewbrow on 18 Sep 2011, 14:15
Derailed? Oops!

A wonderful excuse to stop searching for information about something that likely has never existed. I think I will take it.
Title: Re: Robots and love
Post by: Is it cold in here? on 18 Sep 2011, 17:32
Are AnthroPCs like dogs, who automatically love the people who shelter and feed them, or like cats, who bond conditionally?

If the latter, there must be some really disappointed humans who failed to make emotional connections with their robots. Worse, those might be just the humans who most need the love.
Title: Re: Robots and love
Post by: Skewbrow on 18 Sep 2011, 22:11
A good question. An answer might be that the market for APCs capable of dealing with such humans is large. Therefore the incentive to put research into that would exist. Maybe there is a dog/cat switch? Or if not a switch, then AIs could be designed to have varying types of personalities. The sales clerk would then have the additional role of a matchmaker.
Title: Re: Robots and love
Post by: josiah on 19 Sep 2011, 19:07
I don't know if computers will ever feel emotions as we understand them, and suspect that long before they have those capabilities, they will be able to express love insofar as we are able to program them to do so.

But regardless, I think it's important to point out that robots need love, too. (http://www.youtube.com/watch?v=aRcXULN6mp4)

Is it ethical to include grief in the set of emotions an artificial life form can feel?
[...]
Is grief inevitable when love exists?

Grief is a feeling of loss that is a consequence of strong feelings of attachment. Learning to value the things around us, including our social and family relationships, is important to our being able to function in society as we understand it. Just as pain is a warning to the brain that harm is being (or can be) done to the body, and it is important that we feel pain so that we learn the limits of what our body and mind are able to handle, I suspect that it would be highly unethical to remove the possibility of grief, much less any other kind of painful emotional response.
Title: Re: Robots and love
Post by: Is it cold in here? on 03 Oct 2011, 12:53
As I said in the fan-art thread, I'd have expected Momo to sit in seiza or kekkafuza with her hands in the classic gassho position in this situation, but I suppose cultural conditioning would be a different thing for her.
Might mourning rituals be selected by her regional settings?
Title: Re: Robots and love
Post by: Random Al Yousir on 05 Oct 2011, 01:28
More by her religional settings, I guess.

Besides, the religional settings of the deceased are the first thing to consider, IMHO.  I don't disgrace a muslim funeral with music, I don't insult a catholic deceased with a bland grave.
Title: Re: Robots and love
Post by: Is it cold in here? on 05 Oct 2011, 11:30
You would like the book "How to be a perfect stranger", which explains how and how not to behave at weddings, funerals, religious services of other religions. It's probably in the social protocol database.

Jeph said on Tumblr that robots don't "do" religion. This is a radical difference from humans!
Title: Re: Robots and love
Post by: Carl-E on 05 Oct 2011, 11:36
Why would an AI need faith?  It knows who its maker is...


... and it finds us "amusing"!
Title: Re: Robots and love
Post by: Skewbrow on 05 Oct 2011, 13:42
Didn't the guy running the holistic detective agency encounter an Electric Monk? An AI specifically designed to believe in various things so that human beings could spend their time on other stuff. Pretty much the same principle as with VCRs watching the tv programs for us.
Title: Re: Robots and love
Post by: Random Al Yousir on 06 Oct 2011, 13:30
You would like the book "How to be a perfect stranger", which explains how and how not to behave at weddings, funerals, religious services of other religions. It's probably in the social protocol database.
Whooo.  A social protocol database for non-artificial intelligences.  Nice!

Personally, I found that telling the people that I'm an atheist helps a great deal in these matters.  I get first class, comprehensive explanations, it's practically impossible to ask stupid questions (although I never tested this out ambitiously) and I guess I could get away with quite some unfit behaviour, if I failed to "get" some point.
Title: Re: Robots and love
Post by: Mr_Rose on 06 Oct 2011, 13:42
Didn't the guy running the holistic detective agency encounter an Electric Monk? An AI specifically designed to believe in various things so that human beings could spend their time on other stuff. Pretty much the same principle as with VCRs watching the tv programs for us.
The electric monk was not designed by humans at all; it only looked human because its originators didn't want anyone to get it confused with a real person and picked the ugliest design they could think of. Pink skin and only two eyes? Ludicrous.
Title: Re: Robots and love
Post by: Skewbrow on 07 Oct 2011, 21:47
Didn't the guy running the holistic detective agency encounter an Electric Monk? An AI specifically designed to believe in various things so that human beings could spend their time on other stuff. Pretty much the same principle as with VCRs watching the tv programs for us.
The electric monk was not designed by humans at all; it only looked human because its originators didn't want anyone to get it confused with a real person and picked the ugliest design they could think of. Pink skin and only two eyes? Ludicrous.

A pedant writes... they were given an extra eye (making for a grand total of two), and were designed to look artificial rather than ugly. (And they were restricted to just two legs so they could ride horses and thus look more sincere).

I bow to your superior fandom knowledge.
Title: Re: Robots and love
Post by: Yarin on 10 Nov 2011, 11:47
I'm just going to point out that it's a comic, people. An amazing comic, but a comic nonetheless :psyduck:
Title: Re: Robots and love
Post by: Orbert on 10 Nov 2011, 14:41
I'm pretty sure we're all aware of that.  Are you implying that there is no merit in discussing it because it's a comic?
Title: Re: Robots and love
Post by: Is it cold in here? on 10 Nov 2011, 14:50
Sometimes even I have to fall back on that explanation.
Title: Re: Robots and love
Post by: dps on 10 Nov 2011, 16:41
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends?  If not Asimov's three laws, then something else?

Why would this be true in fiction (other than that by Asimov himself) when it's not true in real life?  You do understand that Asimov's Three Laws of Robotics are fictional and have nothing to do with how real robots are designed and built, don't you?  I hope so.

And in the QC world, is there any doubt that Pintsize would engage in all sorts of criminal behavior if Marten would let him (and probably does so behind Marten's back anyway)?
Title: Re: Robots and love
Post by: DSL on 10 Nov 2011, 17:00
Don't be condescending. Most of the forum is well up on its Asimov and Co., or at least familiar with SF.

Though Asimov did relate one story about a reporter, following up on a story about a factory worker who had been crushed by an industrial robot arm (he had been inside the safety cage when he shouldn't have been) who called him to ask why the Three Laws didn't prevent that.
Title: Re: Robots and love
Post by: Is it cold in here? on 10 Nov 2011, 18:16
They're legally allowed to run around loose. There must be something about them to satisfy concerns about public safety.

Perhaps they're subject to criminal law as humans are, and that has a deterrent effect.

Jeph said AnthroPCs like humans. That, and an ability to predict the consequences of actions, can function as a moral code.
Title: Re: Robots and love
Post by: Yarin on 10 Nov 2011, 22:33
I'm just saying everyone is entitled to their opinion (unless it conflicts with mine) no just kidding  :lol:
Title: Re: Robots and love
Post by: dps on 11 Nov 2011, 19:32
Don't be condescending. Most of the forum is well up on its Asimov and Co., or a least familiar with SF.

Though Asimov did relate one story about a reporter, following up on a story about a factory worker who had been crushed by an industrial robot arm (he had been inside the safety cage when he shouldn't have been) who called him to ask why the Three Laws didn't prevent that.

I wasn't intending to be condescending.  I was asking why the person who posted the comment I was responding to held the opinion that they posted.  One obvious answer would be that the poster felt that the Three Laws are real, not fictional, though I didn't think that was the case.
Title: Re: Robots and love
Post by: Is it cold in here? on 11 Nov 2011, 20:44
One of Asimov's characters pointed out that a sophisticated robot following the Three Laws could be hard to distinguish from a virtuous human.

Title: Re: Robots and love
Post by: Skewbrow on 11 Nov 2011, 23:37
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends?  If not Asimov's three laws, then something else?

Why would this be true in fiction (other than that by Asimov himself) when it's not true in real life?  You do understand that Asimov's Three Laws of Robotics are fictional and have nothing to do with how real robots are designed and built, don't you?  I hope so.

And in the QC world, is there any doubt that Pintsize would engage in all sorts of criminal behavior if Marty would let him (and probably does so behind Marten's back anyway)?
Don't be condescending. Most of the forum is well up on its Asimov and Co., or a least familiar with SF.

Though Asimov did relate one story about a reporter, following up on a story about a factory worker who had been crushed by an industrial robot arm (he had been inside the safety cage when he shouldn't have been) who called him to ask why the Three Laws didn't prevent that.

I wasn't intending to be condensending.  I was asking why the person who posted the comment I was responding to held the opinion that they posted.  One obvious answer would be that the poster felt that the Three Laws are real, not fictional, though I didn't think it was the case.

Pray, tell me, how does the statement "the anthroPCs must have some kind of a moral code" imply that "I feel that the three laws are real"? Asimov's three laws were just mentioned in a couple earlier posts, so they served as a point of reference at that time.
Title: Re: Robots and love
Post by: Is it cold in here? on 11 Nov 2011, 23:52
Today's robots are not good examples of what's needed. There's little moral content in welding a car.

Build a robot with free will, and if you don't have the Three Laws you'll need to put in something better or come up with a damned good reason.
Title: Re: Robots and love
Post by: akronnick on 12 Nov 2011, 00:05
The Three Laws are a poor substitute for morality.

As most of the stories concerning them demonstrate.
Title: Re: Robots and love
Post by: Mr_Rose on 12 Nov 2011, 01:24
Indeed I'm pretty sure Asimov devised the three laws, as they originally were, specifically so he could show how terrible they were as a substitute for actual moral thought.
Title: Re: Robots and love
Post by: akronnick on 12 Nov 2011, 01:31
And by extension, how poor legal systems are at getting people to be nice to each other.
Title: Re: Robots and love
Post by: DSL on 12 Nov 2011, 02:48
Yeah, legislating morality, and all that.
Later in life, Asimov would credit (blame?) editor John Campbell with the Three Laws; by the time Asimov had interwoven his Robot Stories timeline with that of his Empire/Foundation stories, he felt the need for a Zeroth Law, by which a robot had to consider the good of humanity as a whole ("the needs of the many" if you prefer) over the good of any one human being. IIRC, he had it so that a robot character helped come up with the Zeroth Law.
Depending on your philosophy/politics, Zeroth Law opens up entirely new cans of worms.
Title: Re: Robots and love
Post by: Is it cold in here? on 12 Nov 2011, 13:06
Humans have managed to do terrible things both under rule-based systems and under moral-thinking-based systems.

QC robots like us, but can they understand us well enough to treat us with empathy when they've never had a human life?
Title: Re: Robots and love
Post by: Carl-E on 12 Nov 2011, 14:24
Empathy, no. 


I think the operating paradigm is amusement...

--------------------------------------------------------------------------

OK, that was just for laughs.  More seriously, Pintsize's attempts to shock are surely for his amusement, but in order to shock, there must be some understanding of what's shocking.  On the other hand, AnPCs like Momo and Winslow truly want to help their owners in most situations.  I don't know about love, but it indicates a level of caring.  Maybe that caring is based on enlightened self-interest (help your owner, they'll appreciate you more?), but it seems to run deeper, and may well have an empathetic basis.  So if they're programmed to understand our feelings in some way, would it have to be incorporated deeply enough for them to feel such feelings themselves?

Because I don't think their people are just amusing curiosities to them.  There's more of an attachment than that.  Their people consider themselves the AnPC's owners - maybe the AnPCs consider their people as a sort of pet: more than an amusement, less than a "fellow being", somewhere in the middle, with perhaps a pet-like feeling of responsibility for their well-being?
Title: Re: Robots and love
Post by: DSL on 12 Nov 2011, 16:09
"Dogs have owners, cats and AnPCs have staff," that sort of thing?
Title: Re: Robots and love
Post by: Carl-E on 12 Nov 2011, 19:52
"Dogs have owners, cats have staff, and AnPCs have jesters"

That sort of thing.
Title: Re: Robots and love
Post by: Deadlywonky on 13 Nov 2011, 00:35
If memory serves, Asimov had a short story about a robot hired as domestic help for a shy woman whose husband was away a lot. The robot completely redecorated the house, modified the wife's wardrobe to make her fashionable, and at the end, when he seduced her, made sure the curtains were open so that her gossipy neighbors could see. This ensured that the neighbors tried to keep her involved in their lives.

My recollection is that despite the embarrassment and short-term harm it caused her, in the long term it boosted her confidence and left her much better off.

Obviously that was not love. I think in one story it was stated that a robot can never love a human, due to the potential minefield of getting the three laws crossed up and causing a positronic failure.
Title: Re: Robots and love
Post by: Carl-E on 13 Nov 2011, 02:36
Sounds like Chester 5000 (http://jessfink.com/Chester5000XYV/?p=34)

NSFW!!!
Title: Re: Robots and love
Post by: Skewbrow on 13 Nov 2011, 02:56
If memory serves Asimov had a short story about a robot who was hired as domestic help for a shy woman with a husband who was away a lot and the robot completely redecorated the house, modified the wife's wardrobe to make her fashionable and at the end ensured that when he seduced her he made sure that the curtains were open so that her gossipy neighbors could see. this action ensured that the neighbors tried to keep her involved in their lives.

Satisfaction guaranteed? (http://en.wikipedia.org/wiki/Satisfaction_Guaranteed_%28short_story%29)
Title: Re: Robots and love
Post by: Deadlywonky on 14 Nov 2011, 07:54
Bingo, how long did it take you to find that?
Title: Re: Robots and love
Post by: Skewbrow on 14 Nov 2011, 10:17
I did recognize the story right away (having read that collection of short stories many times). Google is your friend. After a couple of trials and errors I found a listing of Asimov's collections of short stories. Those in turn had listings (with links) of the individual short stories. Maybe twenty minutes altogether? Somebody with better google-fu could easily beat that.
Title: Re: Robots and love
Post by: Carl-E on 14 Nov 2011, 11:27
I'd say you're about on par with the average reference librarian. 
Title: Re: Robots and love
Post by: Is it cold in here? on 14 Nov 2011, 12:14
I remembered that the robot was called the TN-1, and if you know that then the story is the first Google result for "tn-1 asimov".

As I remember it, the woman was pretty shaken up by it, not necessarily in a good way.

EDIT: Even though it was actually a TN-3.
Title: Re: Robots and love
Post by: Skewbrow on 14 Nov 2011, 12:28
IIRC in the end Susan Calvin said something to the effect that some changes will be made to the Tony (=TN) series models. Not because robots could fall in love, but because women can. A bit sexist if you ask me, and my recollection is not what it once was :-(
Title: Re: Robots and love
Post by: Mr_Rose on 14 Nov 2011, 15:44
IIRC in the end Susan Calvin said something to the effect that some changes will be made to the Tony (=TN) series models. Not because robots could fall in love, but because women can. A bit sexist if you ask me, and my recollection is not what it once was :-(
If it's anything-ist it's speciesist. But then Susan Calvin, in her position as the voice of god/narrator, is very much an expert witness when it comes to the capabilities of robots.

Title: Re: Robots and love
Post by: pwhodges on 14 Nov 2011, 17:12
Maybe twenty minutes altogether? Somebody with better google-fu could easily beat that.

My Google-fu is better than that!  But I'd say it's exceptional.  In any case I know the story - I've just not been around much the last couple of days as my desk computer's disconnected in preparation for getting a new Internet tomorrow (FTTC arrived in my street this month).
Title: Re: Robots and love
Post by: Deadlywonky on 14 Nov 2011, 23:39
Thanks Skew, just gone and reread The Complete Robot again. What a way to spend an evening.

I did think, though: didn't the kid in A.I. have the feature to bond to his "mother"? A different sort of love, but an emotional connection nonetheless.
Title: Re: Robots and love
Post by: DrPraetor on 30 Nov 2011, 20:37
The point is, since Momo is completely sterile (although I suppose her myomer skin (http://en.wikipedia.org/wiki/BattleTech_technology#Materials_engineering) might be able to support some bacteria, depending on what exactly it is), she can make it with Hannelore and not trigger any of Hanners' phobias.

 If Pintsize were a bit more subtle, he could try a "Much Ado About Nothing" gambit where he convinces Hanners and Momo that each has a killer crush on the other.
Title: Re: Robots and love
Post by: Is it cold in here? on 30 Nov 2011, 22:21
Hannelore already rejected the idea of making out with a robot, when her dad sent her the practice boyfriend.
(moderator)
Welcome, new person! But please follow the no-shipping rule: it's something the forum owner has specifically objected to.
(/moderator)
Title: Re: Robots and love
Post by: Yarin on 01 Dec 2011, 04:07
Also, isn't Hanners straight, at least in theory?
Title: Re: Robots and love
Post by: Is it cold in here? on 01 Dec 2011, 07:41
Yes, she said so to Marigold.
Title: Re: Robots and love
Post by: jwhouk on 01 Dec 2011, 08:17
Yes, she said so to Marigold.

1. Too lazy, and

2. too sleepy, so...

3. When?
Title: Re: Robots and love
Post by: Yarin on 01 Dec 2011, 08:32
1493 (http://questionablecontent.net/view.php?comic=1493)
Title: Re: Robots and love
Post by: Carl-E on 01 Dec 2011, 10:47
....and the new guy shows us all up. 

Good archiving! 
Title: Re: Robots and love
Post by: Yarin on 01 Dec 2011, 11:36
I know, I'm scared too.
Title: Re: Robots and love
Post by: Yarin on 11 Dec 2011, 23:31
Momo and Winslow together d'awwwww
Title: Re: Robots and love
Post by: Is it cold in here? on 12 Dec 2011, 08:16
Which, incidentally, is not shipping: there's a strip where Winslow wants to impress her but takes his courting advice from Pintsize.
Title: Re: Robots and love
Post by: Kugai on 12 Dec 2011, 10:39
Which, incidentally, is not shipping: there's a strip where Winslow wants to impress her but takes his courting advice from Pintsize.

The horror!!  The horror!!
Title: Re: Robots and love
Post by: idontunderstand on 12 Dec 2011, 12:54
"It's like someone stuck a vibrator in a fleshlight!!!"