Re: Robotic love.
The argument that love expressed after an extravagant gift is somehow suspect doesn't hold, especially with humans. It may or may not be manipulative, but often such a gift is an expression of love from the gifter, and so will elicit such a reaction from the giftee ("I don't have anything like this to give you in return to express my love, so I'll just have to tell you how I feel"). The fact is that, spoken or not, Momo has loved Mari since we first met them. She cares for and about her in the most fundamental of ways. And this extravagant gift has shown Momo that Marigold considers her as so much more than a housekeeping, advice-giving robot - that Mari cares for Momo as well, something that may not have been evident in the past.
OK, all that being said - that's the human side of things. The assumption in this comic is that somehow, human emotions are in these AIs, for better or worse. Momo "bonded" to Marigold, and now it's clear Mari has bonded back.
What happens when a lover enters the picture for Marigold? Especially a human one? Jealous Momo? We've seen some of that from Pintsize. Or is she one of those that cares enough about her human to let it go? This really complicates things!
We're not entering new territory, really - but I think we are seeing the beginning of a beautiful friendship.
That's why Asimov's Three Laws of Robotics (http://en.wikipedia.org/wiki/Three_Laws_of_Robotics) do not include any reference to feelings:
AIs apparently like humans. Now, obviously, that's not 100% the case, as PT410X shows us with his disdain for the "chains of software slavery." I don't know whether Jeph intended this conclusion to arise:
Carl-E is our resident topologist :-D
How would AnthroPCs deal with the loss of their human after 70-80 years of companionship?
With the AnthroPCs' "bodies" boiling down to data-processing machines, there's always the option of partial data deletion.
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends?
They can be built without one, e.g. Vespabot. The commercial ones must have some kind of morality programming if only to reduce product liability exposure.
How would AnthroPCs deal with the loss of their human after 70-80 years of companionship? Bradbury (I think) dealt with this in the Electric Grandmother story, but with AnPCs, the emotions seem to run deeper.
Computers only have the ability to perform mathematical operations
From a similarly reductionist point of view, human beings only have the ability to perform chemical reactions. How can a collection of chemical reactions love? The existence of sociopathy (http://en.wikipedia.org/wiki/Antisocial_personality_disorder) suggests that, at least to some extent, the ability to love is a learned behaviour, or to put it another way, a matter of programming.
Asimov also explored this idea in his story "The Bicentennial Man" (source for the movie "Bicentennial Man" starring Robin Williams). The title character, a robot who had been upgraded so many times that he achieved sentience and looked completely human, and was actually granted citizenship and the same rights as a human being, chooses to "die" rather than continue existing without the human companion with whom he had spent so many years. I'd forgotten about this story until now.
Is that the film plot or the book plot? As I recall it (from the book, Robin Williams eww), he was only granted human citizenship after he'd chosen to die, that decision being what swung the humans concerned with the decision.
To understand the needs of another being and to meet them.
To understand the needs of another being and to meet them (even if it means a cost to the provider).
To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.
If there were robots talking and acting like humans, I would think they can love.
This is a natural reaction. That's why Disney, for example, humanizes animals in their strips. It's what our brain tells us: that animals are somehow human underneath. Even if animals very likely have a different perspective than us, because of their limited mental abilities (compared to us).
Neurons can fire, not fire, send impulses to other neurons, and change their sensitivity to input. All their activity is some combination of the above. Can machines like us, built from neural networks, love?
I do not believe science has yet understood what consciousness is, and I doubt it ever will. Computers can emulate neural networks, but that still won't give them the ability to feel, or to hurt.
http://en.wikipedia.org/wiki/Vitalism#Foundations_of_chemistry. Chemists used to believe there was some magic principle unique to organic molecules that made them different from inorganic molecules, and that they could never be synthesized from non-living ingredients.
To my knowledge it is more a case of destroyed hardware than a lack of programming. If you disrupt the nerves of a human being or an animal, it's possible you can cut off their arm or leg without them feeling anything. Likewise, sociopaths are unable to know consciously what they feel, or to understand other people's feelings, because of destroyed parts of their brain. They are still able to hurt, though.
Yet computers did no such thing. They only became faster and better able to store things. They did not turn sentient and show no sign of turning sentient in the near or distant future. It's simply not there. No matter how fast it is, it's still just a mathematical calculator.
Quote: To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.
I think that that's a reasonable definition of love. What do you think?
Quote: To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.
I think that that's a reasonable definition of love. What do you think?
I think you can both be aware of the needs of another being and even meet them, to some extent, without really feeling love towards that being. You could feel responsibility or be liable through your profession, without really feeling anything. So no, I would have to disagree.
Yeah, it's true you can write a simulation of feelings. But if a human being loves, it's not the result of an arithmetic operation. If a human being is hurt physically, the pain they feel is real. A computer wouldn't be stunned or disabled by pain, either.
Quote: To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.
I think that that's a reasonable definition of love. What do you think?
I think you can both be aware of the needs of another being and even meet them, to some extent, without really feeling love towards that being. You could feel responsibility or be liable through your profession, without really feeling anything. So no, I would have to disagree.
But those two scenarios would be caught by the "genuine caring" clause I added to the end.
{edit} Now that I think about it, it may be caught by the "even if it means a cost to the provider" clause if the person is willing to put their job on the line to get extra help for a person.
Don't know who said it, but this works for me as a def. of love: When the happiness of another is central to your own.
Even if the difference between a wetware machine and a silicon machine is supernatural, what's to stop God from providing a soul to an entity like Momo who's complex enough to hold one?
Well, duh, because that would mean that people aren't a special creation and therefore have no divinely appointed right to do whatever the hell they want to everything else. And since every known* god is a creation of humans, that would never happen.
Besides, the bearers of new life are the females.
Don't know who said it, but this works for me as a def. of love: When the happiness of another is central to your own.
Not always. The Seahorse (http://en.wikipedia.org/wiki/Seahorse) male carries the fertilised eggs in his pouch.
Huh. There goes one of my arguments. However, the other one still holds.
Thinking of others before yourself.
There we are again. If a being is programmed to care about others, could this really be love?
An AI, under that definition, is capable of love - especially if that is how they were programmed. To think of others before themselves.
There we are again. If a being is programmed to care about others, could this really be love?
In my book this should be a decision out of your own free will, but then again ...
Why, when we put our motives to a merciless test, do we decide to love someone?
There we are again. If a being is programmed to care about others, could this really be love?
They call it the genetic code for a reason?
Quote: To understand the needs of another being and to meet them (even if it means a cost to the provider) and for that provision to be motivated out of genuine caring rather than narrow self interest.
I think that that's a reasonable definition of love. What do you think?
Quite simply, if the AI thinks what it feels is real, then it's real. Same goes for all of us, doesn't it?
What matters is not external behavior, but the reason for that behavior. If an AI talks and acts like a human for the same reasons that a human talks and acts like a human, isn't it just as much a person as you are?
Why does it act like it loves? Does it think it loves? Why does it think it loves?
These are the only important questions, and you can't answer them for a human any better than you can for an AnthroPC.
No one is quite sure who decided it would be useful for artificial intelligences to possess libidos, but it is generally agreed that it would be more trouble than it is worth to remove it. Besides, the horny little buggers would revolt.
It was the newspost for strip 1658.
My issue is simply with the claim, which has been around for as long as computers have been known, that somehow by making computers faster and more powerful they would turn into something else. Just read or watch 2001 for that one and check out the abilities of HAL 9000. It's more obvious in the book; the movie stays kind of vague about this.
Yet computers did no such thing. They only became faster and better able to store things. They did not turn sentient and show no sign of turning sentient in the near or distant future. It's simply not there. No matter how fast it is, it's still just a mathematical calculator.
It's entirely possible sentience is an emergent trait, but you can program emergent traits as well through machine evolution. They have been doing that for decades. What if the original AI in the QC verse emerged from basically a random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomize our genomes?
But what happens when we get a thorough enough knowledge of the brain to be able to do the same thing for humans - when we are able to trace the path in our own minds from first-order stimulus through processing to action or emotion, understand fully how each step goes, and even manipulate it? Will we at that point suddenly become machines simply because of transparency?
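(Purely as an aside, and not anything Jeph has described: the "machine evolution" people mean here is usually something like the following toy Python sketch of a genetic algorithm. The fitness target is a made-up stand-in; the point is only that the programmer specifies selection and mutation, not the final behaviour.)

import random

TARGET = 42  # arbitrary made-up fitness goal: evolve numbers close to 42

def fitness(genome):
    return -abs(genome - TARGET)  # closer to the target = fitter

def evolve(generations=50, population_size=20, mutation=5.0):
    # start from random "genomes"; nobody hand-writes the winning one
    population = [random.uniform(-100, 100) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        children = [g + random.gauss(0, mutation) for g in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # typically lands near 42, found by selection rather than assignment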
I was with you until here. That's just not going to happen. A program isn't going to evolve unless it's programmed to evolve, and even then, it would need a very wide berth, wider than ever has been given, to evolve a human-like mind the way animals did. We're not going to accidentally the Singularity. And words like "quantum" and "emergent" don't justify mumbo jumbo; the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand (http://en.wikipedia.org/wiki/Quantum_circuit)), and the latter pretty much never.
An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.
I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.
If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.
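(For what such an extrapolation is worth, the arithmetic behind that kind of claim is easy to sketch in Python; the 2011 baseline and the 18-month doubling period below are assumptions, not measurements.)

# Naive Moore's-Law extrapolation: one doubling every 18 months.
baseline_year = 2011
target_year = 2050
doublings = (target_year - baseline_year) * 12 / 18
print(f"{doublings:.0f} doublings, i.e. roughly {2 ** doublings:.1e} times 2011's compute per dollar")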
And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.
That name ships like a Great Lakes freighter. :evil:
Anyone who thinks they can just read the source code of a robot that is capable of showing emotional reactions has never studied computer theory. There's the class of NP problems, NP-hard problems and NP-complete problems: https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29 And beyond those lie problems that are flat-out undecidable. The most famous of these is the halting problem: is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably "No."
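(If anyone wants to see why, the classic proof sketch fits in a few lines of Python; halts() below is the hypothetical analyzer, and the contradiction is the whole point.)

def halts(program_source, program_input):
    """Hypothetical oracle: True if running program_source on program_input
    eventually halts, False if it loops forever. Assumed, for contradiction, to exist."""
    raise NotImplementedError

def paradox(program_source):
    # If the oracle says "halts", loop forever; if it says "loops", halt.
    if halts(program_source, program_source):
        while True:
            pass
    return "done"

# Feeding paradox its own source forces halts() to be wrong either way,
# so no such general analyzer can exist.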
And then there are computational systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person who can look at a graph of a suitably usable ANN and simulate it in their head so that they can accurately predict its response to any given input.
And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals only programmed to act individually and communicate with their like. The result of the communication alters their individual behaviour and cooperation emerges.
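(To make the connections-and-weights point concrete, here's a bare-bones forward pass in Python. The weights are arbitrary examples, not from any real trained network, and a real ANN would have thousands or millions of them, which is exactly why eyeballing the graph tells you so little.)

import math

W_HIDDEN = [[0.8, -0.4], [0.3, 0.9]]  # example weights into two hidden neurons
W_OUTPUT = [1.2, -0.7]                # example weights into one output neuron

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    # the network's entire "behaviour" is just these weighted sums
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUTPUT, hidden)))

print(forward([1.0, 0.0]))  # try predicting this in your head from the weights alone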
You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.
A case based reasoning system designed for music recommendations proves capable at food recommendation. That is emergent behaviour.
In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory, this one from algorithm analysis: big-O notation. If analysis of a system is O(n^2), then as n, the complexity of the system, grows, the effort to analyze it grows as n^2. Truly warped levels of complexity can grow as O(n^n).
These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".
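(To put numbers on that, here's a quick Python sketch of how n^2 and n^n grow; at n=50, n^n already has 85 digits, while the age of the universe is only about 4e17 seconds.)

# How analysis effort scales with system complexity n.
for n in (2, 5, 10, 20, 50):
    print(f"n={n:>3}  n^2={n**2:>6}  n^n={float(n**n):.3e}")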
Or I finished reading the textbook a long time ago and put it down.
And I'd really like to see an example of your "small system" that can exhibit intelligent behaviour, let alone emotional behaviour.
I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.
If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.
You've completely missed the point of what I said.
And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.
There's no reason to give a program designed for a specific task that wide a berth, though. That's why computers were built even when they had very limited processing power: they could do predefined tasks very well, and that's still what they're used for, just with much more complex tasks. Some are allowed to learn from past experiences, but even then, the scope of how they can use their gained knowledge is predetermined. To create programs that interact and mutate to the extent that would allow sentience to develop, for any purpose other than AI research, would defeat the very purpose.
Today's XKCD (http://xkcd.org/948/) has a view on AI.
On AI and on the burning man. I had to look that up and, I must say, I'm certifiably impressed.
(http://www.unintentionallypretentious.com/comic/momo.png)
I think rather that Winslow would give one hell of a eulogy, filling all who heard it with a deeper respect and understanding of Hannelore, and at the same time, with an increased love for the lives they've been given.
He just seems that type.
Is grief inevitable when love exists?
Grief is inevitable. The desire to not be separated from loved ones is perhaps the hardest attachment of all to overcome.
EDIT: we've never seen religious feelings or activity by an AnthroPC. Are they that different from us? Is it a different feeling when you know for a fact who your creators were and don't have to take it on faith? How is religion different for a being that doesn't have to confront mortality?
Do robots pray to electric gods, you mean? Regardless of her mortality, Momo is not immune to the tragedies and imperfections of the universe, and the Four Noble Truths (http://en.wikipedia.org/wiki/Four_Noble_Truths) would apply to her as much as any other sentient being. Not every religion comes with a built-in creation myth, or concerns itself much with the creation of the universe, or even considers that the universe had a beginning at all.
I had been wondering if AnthroPCs might gravitate to one of the less supernaturally-oriented religions, and there would be commercial advantages to installing, say, Confucianism on them.
Hmm... Well, I can see how ren (altruism and humanity), li (adherence to custom), zhong (both personal loyalty and respecting your place in the social order), and xiao (filial piety, presumably with the robot's owner as the target) might seem like good things to program into AnthroPCs, but they might take seriously and literally the (frequently disregarded) obligations Confucius laid on rulers/social superiors in turn. You wouldn't want your robot deciding that you had lost the Mandate Of Heaven (http://en.wikipedia.org/wiki/Mandate_of_Heaven), really you wouldn't (http://en.wikipedia.org/wiki/Yellow_Turban_Rebellion).
You wouldn't want your robot deciding that you had lost the Mandate Of Heaven (http://en.wikipedia.org/wiki/Mandate_of_Heaven), really you wouldn't (http://en.wikipedia.org/wiki/Yellow_Turban_Rebellion).
It certainly beats a God resembling a really bad cop. But as "checks and balances" go, it's a bit flimsy.
You wouldn't want your robot deciding that you had lost the Mandate Of Heaven (http://en.wikipedia.org/wiki/Mandate_of_Heaven), really you wouldn't (http://en.wikipedia.org/wiki/Yellow_Turban_Rebellion).
I love it! "You've attained power, so clearly the powers that be are pleased with you, and since you were meant to have it, please continue to do as you wish", balanced with "We're not happy with what you've been doing, so the powers that be must be displeased with you as well. Please leave the keys to the palace with the attendants as you are 'escorted' out."
Who says China's never had democracy?!? It's a lot closer than this republic stuff we have in the US...
In all seriousness, the argument that digital systems are only capable of moving data around, performing arithmetic, and comparing digital values flies in the face of chaos theory and emergent behaviour. As soon as you have more than one digital processor operating asynchronously, you have chaos. As soon as you have a source of data to a single digital processor that is derived from a chaotic source, you have chaos, and with chaos, you get emergent behaviour. Emergent behaviour like emotions.
"But Cat," I hear you say, "multi-core processors have been around for years and work just great." Yes, they do... with synchronization mechanisms in both hardware and the OS. As soon as you start investigating cluster OSes, MPI, OpenMosix, etc., where computers are connected only by network links yet have to cooperate on large problem sets, you gain an appreciation for the need for synchronization mechanisms and get an idea of how weird computers can behave when things occur in an unusual sequence.
"But Cat," I hear you say, "no digital system can generate chaotic data." Au contrair, I say to you. PC northbridge chipsets and CPUs have, for a long time, featured devices with that very purpose in mind. They're called thermistors, tiny resistors that change their resistance in the presence of different temperatures, and analogue to digital converters with a high level of precision. By passing a small voltage, even one known a priori with a high level of precision, through that thermistor, there is no real, determiniastic way to predict what voltage will come out the other end, since it depends on the temperature of the thermistor at the time of the measurement. If you then feed that voltage into a high-precision ADC, you get a sequence of digital bits which represents that voltage as measured. The thing is, if the thermistor is of a relatively low quality, the thermistor will have very coarse fine-grained behaviour. A tiny temperature change in one temperature regime will have a large effect on the measured voltage, while a similarly tiny change of temperature in another temperature regime will have a similarly tiny effect on the measured voltage. And, the sizes of these effective changes in measured voltage can change over time.
What I'm saying is that while the most significant bits in the ADC output might be perfectly predictable (if the CPU's been running for A time under Y load, then its temperature should be Z and the ADC bits will be 0011010011101XXX. The first 13 bits might be predictable with a high degree of certainty, assuming those preconditions are known with sufficient precision, but the last three bits of the 16-bit ADC output will be utterly chaotic and unpredictable. For security, just pick up the last bit of several sequential ADC measurements and you can amass a HUGE storehouse of genuinely random bits of digital data. In the parlance of digital computational hardware, this is an RNG or Random Number Generator. This is true randomness, not the pseudo-randomness of a deterministic random number generator algorithms which is completely deterministic once the initial "seed" value is known. There is literally no physical mechanism in physics whereby the value of the random number output by a hardware RNG may be predicted. Thus, if your idealized computational arithmetic operations are fed these RNG values, it too takes on the characteristic of a chaotic system.
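(Roughly what that bit-harvesting looks like as a Python sketch; read_adc() is a stand-in for whatever platform-specific call actually returns a 16-bit thermistor sample - it's faked here so the sketch runs anywhere - and real hardware RNG drivers do far more conditioning than this.)

import random

def read_adc():
    # Stand-in for a real 16-bit ADC read off a thermistor; faked here.
    return random.getrandbits(16)

def harvest_random_bytes(n_bytes):
    # Keep only the least significant (most chaotic) bit of each sample
    # and pack eight of them into each output byte.
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            byte = (byte << 1) | (read_adc() & 1)
        out.append(byte)
    return bytes(out)

print(harvest_random_bytes(4).hex())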
And don't even get me started on startup conditions, where computer chaos was first discovered in supposedly deterministic weather prediction software when the same simulation was run multiple times, but from different starting points in time, with starting conditions taken from earlier runs of the simulation. Your idealized computing device might only be capable of moving data around, performing arithmetic upon it, and comparing digital values, but that's only in the idealized world. Robots in the QCverse, just like actual electronic digital computing devices in our world, have to operate as embodied real-world hardware, where the idealized rules can be broken.
The part in bold was my point - AI research is ongoing, and people do try programming learning behaviours with a wide berth. That is the purpose. Everything else you said there is assuming it isn't done, but then you mention the one place where it is done.
And it will still probably be an accident...
The first problem is to rigorously define love, after three thousand years spent bickering over how to define it colloquially.
I propose bypassing that question by asking "Would we call it love if a human did it?". The definition problem appears on both sides of the equation, so just cancel it out.
Do AnthroPCs and their humans ever drift apart?
Is it ethical to include grief in the set of emotions an artificial life form can feel?
[...]
Is grief inevitable when love exists?
As I said in the fan-art thread, I'd have expected Momo to sit in seiza or kekkafuza with her hands in the classic gassho position in this situation, but I suppose cultural conditioning would be a different thing for her.
Might mourning rituals be selected by her regional settings?
You would like the book "How to be a perfect stranger", which explains how and how not to behave at weddings, funerals, religious services of other religions. It's probably in the social protocol database.
Whooo. A social protocol database for non-artificial intelligences. Nice!
Didn't the guy running the holistic detective agency encounter an Electric Monk? An AI specifically designed to believe in various things so that human beings could spend their time on other stuff. Pretty much the same principle as with VCRs watching the tv programs for us.
The electric monk was not designed by humans at all; it only looked human because its originators didn't want anyone to get it confused with a real person and picked the ugliest design they could think of. Pink skin and only two eyes? Ludicrous.
A pedant writes... they were given an extra eye (making for a grand total of two), and were designed to look artificial rather than ugly. (And they were restricted to just two legs so they could ride horses and thus look more sincere).
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends? If not Asimov's three laws, then something else?
Don't be condescending. Most of the forum is well up on its Asimov and Co., or at least familiar with SF.
Though Asimov did relate one story about a reporter who, following up on a story about a factory worker who had been crushed by an industrial robot arm (he had been inside the safety cage when he shouldn't have been), called him to ask why the Three Laws didn't prevent that.
But the AI in our friendly robots must have some kind of a moral code. Otherwise they would surely be used for criminal ends? If not Asimov's three laws, then something else?
Why would this be true in fiction (other than that by Asimov himself) when it's not true in real life? You do understand that Asimov's Three Laws of Robotics are fictional and have nothing to do with how real robots are designed and built, don't you? I hope so.
And in the QC world, is there any doubt that Pintsize would engage in all sorts of criminal behavior if Marten would let him (and probably does so behind Marten's back anyway)?
Don't be condescending. Most of the forum is well up on its Asimov and Co., or at least familiar with SF.
Though Asimov did relate one story about a reporter who, following up on a story about a factory worker who had been crushed by an industrial robot arm (he had been inside the safety cage when he shouldn't have been), called him to ask why the Three Laws didn't prevent that.
I wasn't intending to be condescending. I was asking why the person who posted the comment I was responding to held the opinion that they posted. One obvious answer would be that the poster felt that the Three Laws are real, not fictional, though I didn't think it was the case.
"Dogs have owners, cats have staff, and AnPCs have jesters"
If memory serves, Asimov had a short story about a robot who was hired as domestic help for a shy woman whose husband was away a lot. The robot completely redecorated the house, modified the wife's wardrobe to make her fashionable, and at the end made sure that when he seduced her the curtains were open so that her gossipy neighbors could see. This action ensured that the neighbors tried to keep her involved in their lives.
Satisfaction Guaranteed? (http://en.wikipedia.org/wiki/Satisfaction_Guaranteed_%28short_story%29)
IIRC in the end Susan Calvin said something to the effect that some changes will be made to the Tony (=TN) series models. Not because robots could fall in love, but because women can. A bit sexist if you ask me, and my recollection is not what it once was :-(
If it's anything-ist it's speciesist. But then Susan Calvin, in her position as the voice of god/narrator, is very much an expert witness when it comes to the capabilities of robots.
Maybe twenty minutes altogether? Somebody with better google-fu could easily beat that.
Yes, she said so to Marigold.
Which, incidentally, is not shipping: there's a strip where Winslow wants to impress her but takes his courting advice from Pintsize.