
Author Topic: Robots and love  (Read 54172 times)

Mad Cat

  • Beyond Thunderdome
  • ****
  • Offline Offline
  • Posts: 555
  • Master of my domain, but not of my range.
Re: Robots and love
« Reply #50 on: 04 Sep 2011, 16:26 »

I can sort of hear a robot singing a lot of these lyrics:
http://www.youtube.com/watch?v=z04VDnr5k4I

Title: You Wanted More
Artist: Tonic

Love is tragic, love is bold
You will always do what you are told
Love is hard, love is strong
You will never say that you were wrong

I don't know when I got bitter
But love is surely better when it's gone

'Cause you wanted more
More than I could give
More than I could handle
In a life that I can't live

You wanted more
More than I could bear
More than I could offer
For a love that isn't there

Love is color, love is loud
Love is never saying you're too proud
Love is trusting, love is honest
Love is not a hand to hold you down

I don't know when I got bitter
But love is surely better when it's gone

'Cause you wanted more
More than I could give
More than I could handle
In a life that I can't live

You wanted more
More than I could bear
More than I could offer
For a love that isn't there

I gotta pick me up when I'm down town
I gotta get my feet back on the ground
I gotta pick me up when I'm down town

I don't know when I got bitter
But love is surely better when it's gone

'Cause you wanted more
More than I could give
More than I could handle
In a life that I can't live

You wanted more
More than I could bear
More than I could offer
For a love that isn't there

You wanted more
More than I could love
More than I could offer
The harder you would shove

You wanted more
More than I could give
More than I could handle
In a life that I can't live

---
In all seriousness, the argument that digital systems are only capable of moving data around, performing arithmetic, and comparing digital values flies in the face of chaos theory and emergent behaviour. As soon as you have more than one digital processor operating asynchronously, you have chaos. As soon as you have a source of data to a single digital processor that is derived from a chaotic source, you have chaos, and with chaos, you get emergent behaviour. Emergent behaviour like emotions.

"But Cat," I hear you say, "multi-core processors have been around for years and work just great." Yes, they do... with synchronization mechanisms in both hardware and the OS. As soon as you start investigating cluster OSes, MPI, OpenMosix, etc. where computers connected only by network connections, yet have to cooperate on large problem sets, you realize an appreciation for the need for synchronization mechanisms and get an idea for how weird computers can behave when things occur in a an unusual sequence.

"But Cat," I hear you say, "no digital system can generate chaotic data." Au contrair, I say to you. PC northbridge chipsets and CPUs have, for a long time, featured devices with that very purpose in mind. They're called thermistors, tiny resistors that change their resistance in the presence of different temperatures, and analogue to digital converters with a high level of precision. By passing a small voltage, even one known a priori with a high level of precision, through that thermistor, there is no real, determiniastic way to predict what voltage will come out the other end, since it depends on the temperature of the thermistor at the time of the measurement. If you then feed that voltage into a high-precision ADC, you get a sequence of digital bits which represents that voltage as measured. The thing is, if the thermistor is of a relatively low quality, the thermistor will have very coarse fine-grained behaviour. A tiny temperature change in one temperature regime will have a large effect on the measured voltage, while a similarly tiny change of temperature in another temperature regime will have a similarly tiny effect on the measured voltage. And, the sizes of these effective changes in measured voltage can change over time.

What I'm saying is that the most significant bits in the ADC output might be perfectly predictable (if the CPU has been running for time A under load Y, then its temperature should be Z and the ADC bits will be 0011010011101XXX). The first 13 bits might be predictable with a high degree of certainty, assuming those preconditions are known with sufficient precision, but the last three bits of the 16-bit ADC output will be utterly chaotic and unpredictable. For security purposes, just pick off the last bit of several sequential ADC measurements and you can amass a HUGE storehouse of genuinely random bits of digital data. In the parlance of digital computational hardware, this is an RNG, or Random Number Generator. This is true randomness, not the pseudo-randomness of a deterministic random number generator algorithm, whose output is completely determined once the initial "seed" value is known. There is no mechanism in physics whereby the value of the random number output by a hardware RNG may be predicted. Thus, if your idealized computational arithmetic operations are fed these RNG values, the whole system takes on the character of a chaotic one.
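
For flavour, a little sketch in Python of exactly that bit-harvesting. The 16-bit ADC here is simulated (I can't exactly ship you my northbridge, and the random module below only stands in for the physical noise source); the von Neumann pairing step at the end is one standard way to whiten the raw bits.

Code:
import random

def read_adc():
    # Stand-in for a 16-bit thermal-noise measurement: a predictable
    # upper part plus a few noisy least significant bits.
    base = 0b0011010011101000
    return base + random.randint(0, 7)

def raw_bits(n):
    # Keep only the least significant bit of each reading.
    return [read_adc() & 1 for _ in range(n)]

def von_neumann(bits):
    # Debias: take non-overlapping pairs, emit 0 for "01" and 1 for "10",
    # and throw away "00" and "11".
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

random_bits = von_neumann(raw_bits(10_000))
print(len(random_bits), "whitened bits, first few:", random_bits[:16])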

And don't even get me started on starting conditions, where chaos in computing was first noticed in supposedly deterministic weather prediction software: the same simulation, restarted partway through from values printed out (and rounded) during an earlier run, wandered off to a completely different forecast. Your idealized computing device might only be capable of moving data around, performing arithmetic upon it, and comparing digital values, but that's only in the idealized world. Robots in the QCverse, just like actual electronic digital computing devices in our world, have to operate as embodied real-world hardware, where the idealized rules can be broken.
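
You can reproduce the weather story on a toy scale in a few lines of Python. This uses the logistic map, about the simplest chaotic recurrence there is (not actual weather code): restart the "run" from a value rounded to three decimals, the way those old printouts were re-entered, and the two perfectly deterministic trajectories soon have nothing to do with each other.

Code:
def logistic(x, r=4.0):
    # One step of the logistic map, a textbook chaotic recurrence.
    return r * x * (1.0 - x)

x_full = 0.123456789          # the "original" run
x_restart = round(x_full, 3)  # restarted from a rounded printout

for step in range(1, 31):
    x_full = logistic(x_full)
    x_restart = logistic(x_restart)
    if step % 5 == 0:
        print(step, round(x_full, 6), round(x_restart, 6),
              "difference:", round(abs(x_full - x_restart), 6))
# The rounding error starts at about 0.000457 and roughly doubles each
# step, so within a few dozen steps the two runs are unrecognizable.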
« Last Edit: 04 Sep 2011, 17:44 by Mad Cat »
Logged
The Quakers were masters of siege warfare.

jwhouk

  • Awakened
  • *****
  • Offline Offline
  • Posts: 11,022
  • The Valley of the Sun
Re: Robots and love
« Reply #51 on: 04 Sep 2011, 19:35 »

Tl;dr summary: We have no freakin' clue how it works, but it does.
Logged
"Character is what you are in the Dark." - D.L. Moody
There is no joke that can be made online without someone being offended by it.
Life's too short to be ashamed of how you were born.
Just another Joe like 46

DSL

  • Older than Moses
  • *****
  • Offline Offline
  • Posts: 4,097
    • Don Lee Cartoons
Re: Robots and love
« Reply #52 on: 04 Sep 2011, 20:14 »

I can't keep up with what we're talkin' bout.
I think my net hookup is timin' out
Among the QC forums
There simply is no quorum
Back and forth in an imperfect storm
Whether or not -- robots can love.

Jeph sends his comics to the Internet (no sleep at all)
They post in early morn and there we are (a free-for-all)
Splitting the thinnest hairs
while Momo sits and stares
At Marigirl for whom she really cares
lookin' like a -- robot who loves

Last night I watched the stream from Western Mass ('twas capital)
Source of a strip whose content's quest'nable
Teen Momo's too damn cute
Stay off the shipping route
'Cause Jeph keeps sayin' how it squicks him out
She's one of the -- robots we love

-- apologies and thanks to Mr. J. Browne.
« Last Edit: 04 Sep 2011, 20:31 by DSL »
Logged
"We are who we pretend to be. So we had better be careful who we pretend to be."  -- Kurt Vonnegut.

jwhouk

  • Awakened
  • *****
  • Offline Offline
  • Posts: 11,022
  • The Valley of the Sun
Re: Robots and love
« Reply #53 on: 05 Sep 2011, 07:05 »

I didn't hear the "OOOOoohh Sha-la-la OOOh OOOh ooooooh" part at first.
Logged
"Character is what you are in the Dark." - D.L. Moody
There is no joke that can be made online without someone being offended by it.
Life's too short to be ashamed of how you were born.
Just another Joe like 46

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #54 on: 05 Sep 2011, 07:12 »

I  did. 
Logged
When people try to speak a gut reaction, they end up talking out their ass.

cesariojpn

  • Scrabble hacker
  • *****
  • Offline Offline
  • Posts: 1,392
Re: Robots and love
« Reply #55 on: 05 Sep 2011, 15:27 »

Logged

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: Robots and love
« Reply #56 on: 05 Sep 2011, 16:11 »

I think it's better left unexplained.
Logged
Thank you, Dr. Karikó.

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #57 on: 05 Sep 2011, 22:27 »

Edit:
Quote
No one is quite sure who decided it would be useful for artificial intelligences to posess libidos, but it is generally agreed that it would be more trouble than it is worth to remove it. Besides, the horny little buggers would revolt.

There's your explanation! 
« Last Edit: 06 Sep 2011, 05:51 by Carl-E »
Logged
When people try to speak a gut reaction, they end up talking out their ass.

jeph

  • Administrator
  • Duck attack survivor
  • ******
  • Offline Offline
  • Posts: 1,848
  • MON DIEU!
    • Questionable Content
Re: Robots and love
« Reply #58 on: 05 Sep 2011, 23:08 »

 :psyduck:
Logged
Deathmole Jacques' head takes up the bottom half of the panel, with his words taking up the top half. He is not concerned about the life of his friend.

pwhodges

  • Admin emeritus
  • Awakened
  • *
  • Offline Offline
  • Posts: 17,241
  • I'll only say this once...
    • My home page
Re: Robots and love
« Reply #59 on: 05 Sep 2011, 23:21 »

Hey, Jeph!
Logged
"Being human, having your health; that's what's important."  (from: Magical Shopping Arcade Abenobashi )
"As long as we're all living, and as long as we're all having fun, that should do it, right?"  (from: The Eccentric Family )

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: Robots and love
« Reply #60 on: 05 Sep 2011, 23:23 »

It was the newspost for strip 1658.
Logged
Thank you, Dr. Karikó.

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #61 on: 06 Sep 2011, 05:51 »

Thanks, fixed my post. 
Logged
When people try to speak a gut reaction, they end up talking out their ass.

jwhouk

  • Awakened
  • *****
  • Offline Offline
  • Posts: 11,022
  • The Valley of the Sun
Re: Robots and love
« Reply #62 on: 06 Sep 2011, 05:52 »

Quote
It was the newspost for strip 1658.

Oh, you HAD to mention that strip again, didn't you?

That's like the QC equivalent of Rickrolling. I officially dub it "Svenmomorolling".
Logged
"Character is what you are in the Dark." - D.L. Moody
There is no joke that can be made online without someone being offended by it.
Life's too short to be ashamed of how you were born.
Just another Joe like 46

DSL

  • Older than Moses
  • *****
  • Offline Offline
  • Posts: 4,097
    • Don Lee Cartoons
Re: Robots and love
« Reply #63 on: 06 Sep 2011, 08:21 »

That name ships like a Great Lakes freighter.  :evil:
Logged
"We are who we pretend to be. So we had better be careful who we pretend to be."  -- Kurt Vonnegut.

Near Lurker

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,642
Re: Robots and love
« Reply #64 on: 06 Sep 2011, 09:00 »


Quote
My issue is simply the claim, around for as long as computers have been known, that somehow by making computers faster and more powerful they would turn into something else. Just read or watch 2001 for that one and check out the abilities of HAL 9000. It's more obvious in the book; the movie stays kind of vague about this.

Yet computers did no such thing. They only became faster and better able to store things. They did not turn sentient and show no sign of turning sentient in the near or distant future. It's simply not there. No matter how fast it is, it's still just a mathematical calculator.


The claim has never been that they would "become" something else.  Computers programmed to surf the web (boy, I just dated myself) and run word processors aren't going to suddenly turn into Skynet.  It's been that we would make them something else, and in doing so give them the power to do so themselves.  We haven't.  It's "simply not there" because we simply haven't put it there.  It will take a human effort to create an AI unfettered enough to act like HAL or Pintsize, and to install the subroutines that control emotion.  However, we've already started doing so; it's just that so far no one's managed to get certain patterns of thought running fast enough to simulate human intelligence, and no one's given such a program the power to interact significantly with the real world.  There's no theoretical problem with the first, only that we haven't figured out how to do it yet, and obviously the second could be changed today if anyone were stupid enough to hook a lab experiment up to the defense grid.  Indeed, with fast enough hardware, the first obstacle could be surmounted by brute force.

"Just a mathematical calculator" is exactly what we've got, just hooked up to some chemical registers and I/O.  Blather on about the soul all you like, but at the end of the day, we're just meaty, badly designed robots.
« Last Edit: 06 Sep 2011, 09:02 by Near Lurker »
Logged
After seventeen years, once again, sort of a lurker.  (he/him)

Kyrendis

  • Not quite a lurker
  • Offline Offline
  • Posts: 10
Re: Robots and love
« Reply #65 on: 06 Sep 2011, 09:29 »

The only real difference between a computer and us when it comes to emotions is that we can read the source code for a computer, and fully understand how and why it is "feeling" those emotions.

This understanding of the first order stimuli and the process it goes through is what leads people to make the knee-jerk reaction that such feelings must be 'fake' or 'simulated'.

But what happens when we get a thorough enough knowledge of the brain to be able to do the same thing for humans? When we will be able to trace the path in our own minds from first order stimulus through processing to action or emotion and understand fully how each step goes, even being able to manipulate it. Will we at that point suddenly become machines simply because of transparency?

It's entirely possible sentience is an emergent trait, but you can program emergent traits as well through machine evolution. They have been doing that for decades. What if the original AI in the QC verse emerged from basically a random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomize our genomes? In that case they are as obscure as we are right now, and people would not be able to fully understand them. In that case, are their emotions real?
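
(For anyone who hasn't seen machine evolution up close, a bare-bones sketch in Python, entirely a toy example of my own: nothing in it spells out the answer, only a fitness score, selection and mutation, yet a correct 32-bit "design" usually turns up within a couple of hundred generations.)

Code:
import random

TARGET_LEN = 32

def fitness(genome):
    # Score a genome by how many bits are set; nothing says *how* to set them.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    # Next generation: mutated copies of the ten fittest individuals.
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print("generation", generation, "best fitness", fitness(best), "/", TARGET_LEN)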

What about including biological or quantum processors in the minds of these AIs to enable sentience? In that case they could make the same sudden leaps of logic from A straight to D that characterize human thinking. Non-linear prediction. If they are capable of this, are they not enough like us to be considered sentient?

It's a slippery slope to argue that something which looks and acts as if it is sentient isn't, because of some ineffable quality. It inevitably leads to some supernatural explanation, and at that point you can arbitrarily choose which beings are sentient or not based on whatever criteria you wish.

This is the argument white Europeans used to justify slavery. After all, though they acted like it, Africans could not be human; they just pretended at emotions and honor. They had no souls.
Logged

Near Lurker

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,642
Re: Robots and love
« Reply #66 on: 06 Sep 2011, 10:41 »

Quote
It's entirely possible sentience is an emergent trait, but you can program emergent traits as well through machine evolution. They have been doing that for decades. What if the original AI in the QC verse emerged from basically a random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomize our genomes?

I was with you until here.  That's just not going to happen.  A program isn't going to evolve unless it's programmed to evolve, and even then, it would need a very wide berth, wider than ever has been given, to evolve a human-like mind the way animals did.  We're not going to accidentally the Singularity.  And words like "quantum" and "emergent" don't justify mumbo jumbo; the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand), and the latter pretty much never.

EDIT: Somehow quoted wrong part.
« Last Edit: 06 Sep 2011, 22:20 by Near Lurker »
Logged
After seventeen years, once again, sort of a lurker.  (he/him)

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: Robots and love
« Reply #67 on: 06 Sep 2011, 11:23 »

An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.
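
A concrete toy version of that observation, in Python (Conway's Game of Life, nothing more): the rule below only ever talks about one cell and its eight neighbours, yet a five-cell "glider" marches diagonally across the grid, which you would never guess by staring at the rule for a single cell.

Code:
from collections import Counter

def step(live):
    # live is a set of (x, y) cells that are on.  Purely local rule:
    # a live cell survives with 2 or 3 live neighbours, a dead cell
    # is born with exactly 3.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(20):
    cells = step(cells)

# After 20 steps (five 4-step periods) the same five-cell shape has
# shifted by (5, 5): motion you won't find anywhere in the rule itself.
print(sorted(cells))
print(sorted((x - 5, y - 5) for (x, y) in cells) == sorted(glider))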
Logged
Thank you, Dr. Karikó.

Kyrendis

  • Not quite a lurker
  • Offline Offline
  • Posts: 10
Re: Robots and love
« Reply #68 on: 06 Sep 2011, 12:01 »

Quote
But what happens when we get a thorough enough knowledge of the brain to be able to do the same thing for humans? When we will be able to trace the path in our own minds from first order stimulus through processing to action or emotion and understand fully how each step goes, even being able to manipulate it. Will we at that point suddenly become machines simply because of transparency?

Quote
I was with you until here.  That's just not going to happen.  A program isn't going to evolve unless it's programmed to evolve, and even then, it would need a very wide berth, wider than ever has been given, to evolve a human-like mind the way animals did.  We're not going to accidentally the Singularity.  And words like "quantum" and "emergent" don't justify mumbo jumbo; the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand), and the latter pretty much never.

I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.

If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.

And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.

And I tend to agree with "emergent" being a nonsense word that really means "too complex for us to understand, but still logical". However, previous suggestions in the thread had led me to believe people held that notion, so I was merely covering my bases.

Also you seem to have quoted the wrong part, since that doesn't match up with what you are arguing. You're talking about AI; the quote is talking about understanding of the human mind.

Quote
An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.

You could predict it by studying the behaviour of the entire whole though, that's the point. All that argument says is that a lung cell is not a human. Which is a truism, yes, but not a good argument. If you could analyze and understand the whole, you could predict what the whole could do. That's not emergent behaviour, that's just complexity.
Logged

Near Lurker

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,642
Re: Robots and love
« Reply #69 on: 06 Sep 2011, 14:41 »

Quote
An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.

There are two major flaws in that argument:
1. Neither ants nor neurons were (as far as we know) created by an external intelligence for a single purpose.
2. Studying one in isolation, no, but with the knowledge that they typically don't work in isolation, and of the various permutations, you could extrapolate with ants, and you could come to the vague idea that neurons could, conceivably, form part of a processor.  Communication between programs tends to be very limited.

Quote
I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.

I didn't say it wouldn't be done; I said it wouldn't be done by accident.

Quote
If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.

It won't.  But even if it did, a faster computer can't get around our relative lack of understanding of the human mind.

Quote
And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.

There's no reason to give a program designed for a specific task that wide a berth, though.  That's why computers were built even when they had very limited processing power: they could do predefined tasks very well, and that's still what they're used for, just with much more complex tasks.  Some are allowed to learn from past experiences, but then, the scope of how they can apply their gained knowledge is predetermined.  To create programs that interact and mutate to the extent that would allow sentience to develop for any purpose other than AI research would be defeating the very purpose.
Logged
After seventeen years, once again, sort of a lurker.  (he/him)

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #70 on: 06 Sep 2011, 15:24 »

But there is  AI research.  And it's progressing...
Logged
When people try to speak a gut reaction, they end up talking out their ass.

Near Lurker

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,642
Re: Robots and love
« Reply #71 on: 06 Sep 2011, 15:51 »

You've completely missed the point of what I said.
Logged
After seventeen years, once again, sort of a lurker.  (he/him)

Kugai

  • CIA Handler of Miss Melody Powers
  • Awakened
  • *****
  • Offline Offline
  • Posts: 11,493
  • Crazy Kiwi Shoujo-Ai Fan
    • My Homepage
Re: Robots and love
« Reply #72 on: 06 Sep 2011, 17:10 »

Quote
That name ships like a Great Lakes freighter.  :evil:


The Edmund Fitzgerald?
Logged
James The Kugai 

You can never have too much Coffee.

Mad Cat

  • Beyond Thunderdome
  • ****
  • Offline Offline
  • Posts: 555
  • Master of my domain, but not of my range.
Re: Robots and love
« Reply #73 on: 06 Sep 2011, 17:34 »

Anyone who thinks they can just read the source code to a robot that is capable of showing emotional reactions has never studied computer theory. There's the class of NP problems, NP-Hard problems and NP-Complete problems. https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29 One of the most famous NP-Complete problems is the halting problem: is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably, "No."

And then there's computation systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person that can look at a graph of a suitably usable ANN and simulate it in their head so that he can accurately predict its response to any given input.
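
To make that concrete: even a toy net like the one below (Python; the weights are numbers I made up for the example) is already nothing but a pile of numbers. Every value is right there to read, and reading them still tells you next to nothing about why a given input produces a given output, and a usable ANN has thousands to millions of them.

Code:
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs squashed through a sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_net(x):
    # 2 inputs -> 2 hidden neurons -> 1 output.  The whole "program"
    # is nothing but these weight lists and biases.
    h1 = neuron(x, [0.8, -1.3], 0.2)
    h2 = neuron(x, [-0.5, 2.1], -0.7)
    return neuron([h1, h2], [1.9, -2.4], 0.3)

print(tiny_net([0.0, 1.0]))
print(tiny_net([1.0, 0.0]))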

And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals only programmed to act individually and communicate with its like. The result of the communication alters its individual behaviour and cooperation emerges.

You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.

A case based reasoning system designed for music recommendations proves capable at food recommendation. That is emergent behaviour.

In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory. This one from algorithm analysis. Big-O n squared. O(n^2). As n, the complexity of the system, grows, the effort to analyze it grows by n^2. Truly warped levels of complexity can grow as O(n^n).

These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".
Logged
The Quakers were masters of siege warfare.

Near Lurker

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,642
Re: Robots and love
« Reply #74 on: 06 Sep 2011, 19:14 »

Quote
Anyone who thinks they can just read the source code to a robot that is capable of showing emotional reactions has never studied computer theory. There's the class of NP problems, NP-Hard problems and NP-Complete problems. https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29 One of the most famous NP-Complete problems is the halting problem: is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably, "No."

Wow.  This is wrong on so many levels.

First off, the Halting problem is not NP-complete.  I guess it's NP-hard, in a useless "if two is three, I am Pope" sense, but for it to be NP-complete would imply that it were NP, and therefore could be solved in exponential time and polynomial space, and it can't be.  It can't be solved at all, which is the only thing in this paragraph you got right.  It's trivial, of course, to write a program that shows in finite time that another does halt, but there's no way to write one that can show the reverse in all cases.  This doesn't mean that you can't write a program to analyze source code in the vast majority of real-world cases, and it certainly doesn't mean a human can't heuristically "crack it open for a look."  You claim to have studied computer theory, and you've never done that?  Even for very complicated programs?
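
To make the one-sidedness concrete, a sketch of my own in Python (programs modelled as generators that yield once per "step", purely for illustration): running a program under a step budget can confirm that it halts, but blowing the budget never tells you it runs forever.

Code:
def counts_down(n):
    # An obviously halting program.
    while n > 0:
        n -= 1
        yield

def loops_forever():
    # An obviously non-halting program.
    while True:
        yield

def observe(program, budget):
    # Run the program step by step.  "Halted" is a definite answer;
    # running out of budget is not evidence of anything.
    steps = 0
    for _ in program:
        steps += 1
        if steps >= budget:
            return "no verdict after %d steps" % budget
    return "halted after %d steps" % steps

print(observe(counts_down(10), budget=1000))    # halted after 10 steps
print(observe(loops_forever(), budget=1000))    # no verdict after 1000 steps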

And of course, as you really ought to know, it technically isn't proven that NP-complete problems don't have polynomial-time algorithms (yet).

Quote
And then there's computation systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person that can look at a graph of a suitably usable ANN and simulate it in their head so that he can accurately predict its response to any given input.

In their head?  No, of course not.  But at the end of the day, it's just another kind of code, and can be analyzed like any other code.

Quote
And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals only programmed to act individually and communicate with its like. The result of the communication alters its individual behaviour and cooperation emerges.

You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.

A case based reasoning system designed for music recommendations proves capable at food recommendation. That is emergent behaviour.

The phrase "emergent behavior" is so vaguely defined that it can encompass all these things and more, and its use in this context boils down to faith.  The point, however, is that in all these examples, the software was moved; it can't do what it wasn't built to do.  That's woo.

Quote
In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory. This one from algorithm analysis. Big-O n squared. O(n^2). As n, the complexity of the system, grows, the effort to analyze it grows by n^2. Truly warped levels of complexity can grow as O(n^n).

These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".

Who said deterministic?  It would, of course, be a heuristic understanding, or if necessary, an approximate one, just like we're trying to understand the human brain right now, or, indeed, everything else in nature.

"These things cannot be analyzed in the existing lifetime of the universe" and "n^n" is an interesting juxtaposition.  While it's generally true that polynomial-time algorithms are desirable, a small system can certainly be analyzed, even if the analysis takes time n^n, and some problems are so hard to get down to n^2 time complexity that such algorithms can't be implemented in the life of the universe.  Between this and your garbled understanding of NP-completeness, you kind of sound like you've been flipping ahead in your textbooks.
Logged
After seventeen years, once again, sort of a lurker.  (he/him)

Kwark

  • Not quite a lurker
  • Offline Offline
  • Posts: 8
Re: Robots and love
« Reply #75 on: 06 Sep 2011, 19:32 »

Every day, it saddens me to witness the vanity of my fellow humans.

Is love anything more than chemistry? I don't think so.

It's only a matter of years before computers can emulate humanity, and they most likely will have to bridle themselves to mimic our many flaws.

Like the DOSBox we use to play vintage video games on our absurdly fast computers.

Our brains are nothing more than organic processors, and I call vain anyone who claims otherwise, unless they can show some proof or at least concrete reasoning for it.
Logged

Mad Cat

  • Beyond Thunderdome
  • ****
  • Offline Offline
  • Posts: 555
  • Master of my domain, but not of my range.
Re: Robots and love
« Reply #76 on: 06 Sep 2011, 19:37 »

Or I finished reading the textbook a long time ago and put it down.

And I'd really like to see an example of your "small system" that can exhibit intelligent behaviour, let alone emotional behaviour.

Kwark: "Love? Overrated. Biochemically no different than eating large quantities of chocolate."
« Last Edit: 06 Sep 2011, 19:39 by Mad Cat »
Logged
The Quakers were masters of siege warfare.

Near Lurker

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,642
Re: Robots and love
« Reply #77 on: 06 Sep 2011, 22:10 »

Quote
Or I finished reading the textbook a long time ago and put it down.

Yeah... not really buying it.  Thinking QSAT or factorization were NP-complete might be ascribed to being rusty.  Thinking the halting problem is NP-complete, alongside the implication either that NP-complete problems can't be solved at all or that the halting problem is exponential-time, can't be.  Everyone forgets a lot of material from high school, too, but if someone told you that the fundamental theorem of calculus were the chain rule, you'd think that person failed high school!

Quote
And I'd really like to see an example of your "small system" that can exhibit intelligent behaviour, let alone emotional behaviour.

...okay.

This is completely true.

It's just that, by that point, seeing the rest of the post, especially that first paragraph, I was in full-on condescension mode.  Even though it was absolutely right in context, I saw a statement of a rule of thumb not true in the general case and pounced.

Yyyeaaahhhhh...
Logged
After seventeen years, once again, sort of a lurker.  (he/him)

Skewbrow

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,960
  • damn it
Re: Robots and love
« Reply #78 on: 06 Sep 2011, 22:11 »

Too much out of context quoting taking place here :x. So I will add some?  :evil:

1. Moore's law is bound to fail sooner rather than later. There simply cannot be perpetual exponential growth. See the discussion on XKCD-forum on the strip about compound interest for some numbers.

2. I'm with Near Lurker on all counts related to complexity: NP, NP-hard etc. @MadCat: What it means for a small instance of a difficult problem to still be completely analyzable is perhaps best exemplified by the following. The general travelling salesman problem (= given a map and a number of cities, find the shortest route going via all the cities) has no known polynomial-time algorithm. Yet it is trivial to solve the problem when the number of cities is small, say 8, by the brute-force method of checking all 40320 possible orders of visiting the cities. When n=48 (the state capitals being the standard example), the solution is not known, and exhaustive checking of possibilities is out of the question. The question "is there a route of combined length less than a given figure" is in NP, because it is trivial to quickly check a suggested solution. Finding that suggestion OTOH...
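
To put numbers on the 8-city case, a quick sketch in Python (random coordinates, purely illustrative): brute-forcing all 40320 orders takes a second or so on anything modern, while checking one suggested tour against a target length is a one-liner, which is exactly the "easy to verify, hard to find" shape of NP.

Code:
import itertools
import math
import random

random.seed(1)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(8)]

def tour_length(order):
    # Total length of the closed route visiting the cities in this order.
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

best = min(itertools.permutations(range(8)), key=tour_length)
print("orders checked:", math.factorial(8))        # 40320
print("shortest tour:", round(tour_length(best), 2))

def short_enough(order, bound):
    # The NP-style decision question: trivial to verify for any given tour.
    return tour_length(order) <= bound

print(short_enough(best, 400))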

3. I apologize for bringing up the catch-phrase "emergent behavior". I certainly won't even attempt to define what it means :evil:

4. AI research is, indeed, making progress, but I don't know where they are and what it means, so I won't comment for fear of saying something untrue. Thanks for the links, Carl-E.


Quote
I've learned throughout history that most of the time when somebody says "that will never be done", they end up being proven wrong in short order. Complexity is not an excuse for something being impossible, just that it's complex. Weather predictions are complex, and we are getting better and better at it as faster computers emerge.

If Moore's Law holds or adapts to a new substrate, by 2050 $1000 worth of computing power will be the equivalent processing power of every human brain on the planet. At that point, simulating your mind wholescale will be trivial.


5. Selective sampling at work there. Most of the time when a reputable scientist says that something is impossible, it truly is impossible. The occasions when s/he is wrong just get a lot of publicity.

6. Our ability to predict weather over a longer period is not limited by lack of computing power. The butterfly effect takes care of that, because a more accurate prediction would need more accurate data on the current situation (= wind speed, humidity, temperature at *every frigging point in the atmosphere* - ok, not every point, but the density of the network of sensors places an upper bound on how long the forecast stays valid), which it is clearly impossible to get. Simulating a mind is more or less the same, but it's hard to tell exactly how much so, since it depends on how many parameters we need to determine the behavior of a single neuron.

[edit: added the two words in bold]
« Last Edit: 07 Sep 2011, 02:01 by Skewbrow »
Logged
QC  - entertaining you with regular shots in the butt since 2003.

akronnick

  • Only pretending to work
  • *****
  • Offline Offline
  • Posts: 2,188
  • I'm freakin' out, man!!!!
Re: Robots and love
« Reply #79 on: 07 Sep 2011, 03:33 »

Can Robots Love?

Short answer: There is no short answer.

Longer answer: To answer that we must first answer two other questions:
  • 1.) What is a Robot?
  • 2.) What is Love?

I think much of the disagreement about this question stems from inconsistent understanding of the answers to the two questions.
Logged
Akronnick, I can think of no more appropriate steed for a Knight Of The Dickbroom than a foul-mouthed, perpetually shouting, lust-crazed bird with a scrotum hanging from its chin and a distinctive cry of "Gobble gobble gobble".   --Tergon

pwhodges

  • Admin emeritus
  • Awakened
  • *
  • Offline Offline
  • Posts: 17,241
  • I'll only say this once...
    • My home page
Re: Robots and love
« Reply #80 on: 07 Sep 2011, 08:20 »

Today's XKCD has a view on AI.
Logged
"Being human, having your health; that's what's important."  (from: Magical Shopping Arcade Abenobashi )
"As long as we're all living, and as long as we're all having fun, that should do it, right?"  (from: The Eccentric Family )

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #81 on: 07 Sep 2011, 10:08 »

Quote
You've completely missed the point of what I said.

No, I don't think so...

After refuting a few other posts, you said: 

Quote
And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve by consequence if it has the capacity. And yes, that would be wider than has ever been given, that's kind of a given.

Quote
There's no reason to give a program designed for a specific task that wide a berth, though.  That's why computers were built even when they had very limited processing power: they could do predefined tasks very well, and that's still what they're used for, just with much more complex tasks.  Some are allowed to learn from past experiences, but then, the scope of how they can apply their gained knowledge is predetermined.  To create programs that interact and mutate to the extent that would allow sentience to develop for any purpose other than AI research would be defeating the very purpose.

The part in bold was my  point - AI research is ongoing, and people do  try programming learning behaviours with a wide berth.  That is  the purpose.  Everything else you said there is assuming it isn't done, but then you mention the one place where it is  done. 

And it will still probably be an accident...
Logged
When people try to speak a gut reaction, they end up talking out their ass.

Random Al Yousir

  • Emoticontraindication
  • *
  • Offline Offline
  • Posts: 72
  • There will be bugs
Re: Robots and love
« Reply #82 on: 07 Sep 2011, 12:40 »

Quote
Today's XKCD has a view on AI.
On AI and on the burning man.  I had to look that up and, I must say, I'm certifiably impressed.

As for deploying AI to build chatterbots: Am I the only one who had to think of the old adage of E.W.Dijkstra:

"The effort of using machines to mimic the human mind has always struck me as rather silly: I'd rather use them to mimic something better."
Logged
"Just try and make it through the night without saying anything else stupid, okay?"
"Your ass looks fat in that skirt.  I mean, yes Ma'am."

Skewbrow

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,960
  • damn it
Re: Robots and love
« Reply #83 on: 07 Sep 2011, 21:40 »

Thanks, RAIY. Dijkstra's lecture makes for interesting reading.
Logged
QC  - entertaining you with regular shots in the butt since 2003.

Random Al Yousir

  • Emoticontraindication
  • *
  • Offline Offline
  • Posts: 72
  • There will be bugs
Re: Robots and love
« Reply #84 on: 08 Sep 2011, 03:23 »

You're welcome.   :-)

Of course, the language related ramblings of Dijkstra are hopelessly outdated.  Since you are a mathematician, you might enjoy the publications of Philip Wadler.

For the ambitiously pedagogical perspective you might find the work of Matthias Felleisen and Shriram Krishnamurthi interesting.  And there's simply no way around Don Box of Microsoft fame, in this field.

For the borderlining geeky stuff LtU is the best aggregator I know of.
Logged
"Just try and make it through the night without saying anything else stupid, okay?"
"Your ass looks fat in that skirt.  I mean, yes Ma'am."

Skewbrow

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,960
  • damn it
Re: Robots and love
« Reply #85 on: 08 Sep 2011, 10:05 »

Thanks, but I prefer to read material reinforcing my prejudice "Object-oriented programming is the bane of hobbyists and the source of much grief", so Dijkstra may be my best hope? I lost a dear hobby when support for DOS was discontinued. In those days my PC did what I wanted it to do, not what Microsoft thinks that I should be allowed to do with stuff that I own. My programs had full control of the HW, and didn't need to ask Windoze for permission to do something. They were also free of bugs (well, not always, but after I was done). When the change became inevitable, I was suddenly feeling very sympathetic to the old-timers bashing spinning jennies. Mind you, my livelihood was not at risk, but still...

Dijkstra: "A program with a single bug is not almost correct - it is wrong." <- Something that should be sent to Bill Gates' inbox each and every time Microsoft releases a "critical upgrade"
Logged
QC  - entertaining you with regular shots in the butt since 2003.

pwhodges

  • Admin emeritus
  • Awakened
  • *
  • Offline Offline
  • Posts: 17,241
  • I'll only say this once...
    • My home page
Re: Robots and love
« Reply #86 on: 08 Sep 2011, 12:01 »

I had written my own real-time, multi-tasking operating system in the 1970s.  For me DOS represented a huge step backwards, both in terms of the lack of security, and in terms of reliability - so I hated it like you hated Windows. 

OS/2, and especially OS/2 v2 showed the way forward for small systems, but MS managed to subvert it, first with Windows, then by changing the direction of the NT (originally aka OS/2 v3) development; it was also MS who imposed on IBM the one truly bad design decision in the OS/2 Workplace Shell (the Single Input Queue), and IBM never fixed that because they were not prepared to break full compatibility for existing customers over it (also a bad decision). 

Dijkstra and Hoare (and later, Knuth) were my gods; but in the end I have always allowed pragmatism to temper idealism, because I could see that the perfect theoretical world of program proofs was simply going to get left behind.
Logged
"Being human, having your health; that's what's important."  (from: Magical Shopping Arcade Abenobashi )
"As long as we're all living, and as long as we're all having fun, that should do it, right?"  (from: The Eccentric Family )

Random Al Yousir

  • Emoticontraindication
  • *
  • Offline Offline
  • Posts: 72
  • There will be bugs
Re: Robots and love
« Reply #87 on: 08 Sep 2011, 12:38 »

Skewbrow, there are no prejudices against object-oriented programming.  There are hard facts.

And it was Dijkstra who pointed out the need to be painfully aware of the limited size of your own skull. Which induced much (intentionally) badly feigned yawning, IIRC.

At some point you just have to deal with the need to abstract "away" complexity, which might be a no-brainer for a mathematician; however, computers are real-world, unclean, mutable state machines, and as long as you insist on controlling the whole shebang "per pedes" you will run into the oh-so-tight limits of your skull.  If you want to go for something more ambitious, you want to go for abstraction.  And abstractions are leaky.  Another hard rule.  Rock hard, in fact.

That's why I think you might enjoy the research of Philip Wadler; his work is centered around a mathematically "clean", referentially transparent, lambda calculus/combinatorics-based approach to the art of abstraction.
A good start into the topic should be here to "hook you up".  Maybe this vein of research could do the job to restore your faith in computer science.

Oh, and pwhodges is right: There will be bugs.  The way to go is to find a way to deal with this fact.

Object-oriented programming is a hack-up, a kludge.  You won't find much disagreement, there.  But it's a necessary kludge, because it's so damn hard to do things right.
Logged
"Just try and make it through the night without saying anything else stupid, okay?"
"Your ass looks fat in that skirt.  I mean, yes Ma'am."

Skewbrow

  • Duck attack survivor
  • *****
  • Offline Offline
  • Posts: 1,960
  • damn it
Re: Robots and love
« Reply #88 on: 08 Sep 2011, 13:04 »

Proofs of program correctness! Luckily I never had to do more than a couple to pass that intro course, and those were trivial for a math dude.  :-)

My grumpiness is mostly coming from the fact that in math the tools are eternal, so I was unprepared to meet the reality that the other tools I took the trouble of learning didn't last a decade.

For work-related programs I just wanted (and still do!) the full computing power. The same was true for entertainment, i.e. (self-made or bought) games. On those occasions, not having to fight the operating system or share the cycles with something else was nice (if not necessary). Reliability was not an issue. If I crash the system while debugging a hooked interrupt or by attempting to read past a nil pointer, that is solely my fault, and the PC would reboot in under twenty seconds anyway. If a work-related simulation crashed, that meant lost productivity, but again it was my own fault, and a faulty sim might not give reliable results anyway. Yes, I realize that you cannot run a system serving several people with an attitude like that, but the point is that my PC was truly personal.

I did start learning Delphi for Windows a year ago (they taught me Pascal as an undergrad, so that's what the salesdude recommended). If I find the time, I may do a full comeback - at least port my old stuff to Win. So much to learn/unlearn. :cry:

RAIY seems to have posted more links. Maybe I should make a serious attempt at learning?
Logged
QC  - entertaining you with regular shots in the butt since 2003.

Random Al Yousir

  • Emoticontraindication
  • *
  • Offline Offline
  • Posts: 72
  • There will be bugs
Re: Robots and love
« Reply #89 on: 08 Sep 2011, 13:18 »

Well, not so much "more" links.  The first one you can safely ignore, the second one is just a good starting point on Wadler's site, which I referenced already.   :wink:
Logged
"Just try and make it through the night without saying anything else stupid, okay?"
"Your ass looks fat in that skirt.  I mean, yes Ma'am."

pwhodges

  • Admin emeritus
  • Awakened
  • *
  • Offline Offline
  • Posts: 17,241
  • I'll only say this once...
    • My home page
Re: Robots and love
« Reply #90 on: 08 Sep 2011, 14:07 »

My gripe with the concept of proving program correctness is that it seems to me that in real life the writing of a specification against which that correctness is to be proved is in fact the same problem as writing the program.  So nothing is gained.

IYSWIM
Logged
"Being human, having your health; that's what's important."  (from: Magical Shopping Arcade Abenobashi )
"As long as we're all living, and as long as we're all having fun, that should do it, right?"  (from: The Eccentric Family )

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #91 on: 11 Sep 2011, 05:21 »

Cross posted for relevance. 


Logged
When people try to speak a gut reaction, they end up talking out their ass.

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: Robots and love
« Reply #92 on: 11 Sep 2011, 12:50 »

Is it ethical to include grief in the set of emotions an artificial life form can feel?

What if it's not a matter of choice, if the life form was created accidentally?

Is grief inevitable when love exists?

EDIT: we've never seen religious feelings or activity by an AnthroPC. Are they that different from us? Is it a different feeling when you know for a fact who your creators were and don't have to take it on faith? How is religion different for a being that doesn't have to confront mortality?

EDIT: what DO they do when their human companion dies? I can imagine Pintsize packing Marten's corpse with gunpowder and tossing it in a volcano. I can definitely see Momo performing a quiet dignified Shinto ritual at the graveside. Winslow would be stuck for a response.
« Last Edit: 11 Sep 2011, 13:23 by Is it cold in here? »
Logged
Thank you, Dr. Karikó.

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #93 on: 11 Sep 2011, 14:00 »

I think rather that Winslow would give one hell  of a eulogy, filling all who heard it with a deeper respect and understanding of Hannelore, and at the same time, with an increased love for the lives they've been given. 


He just seems that type. 

Edit:  Focking typos...
« Last Edit: 11 Sep 2011, 18:44 by Carl-E »
Logged
When people try to speak a gut reaction, they end up talking out their ass.

jwhouk

  • Awakened
  • *****
  • Offline Offline
  • Posts: 11,022
  • The Valley of the Sun
Re: Robots and love
« Reply #94 on: 11 Sep 2011, 16:35 »

Quote
I think rather that Winslow would give one hell  of a eulogy, filling all who heard it with a deeper respect and understanding of Hannelore, and at the same time, with an increased love for the lves they've been given.


He just seems that type. 

And it woud have typos because he tried to use an iPad to write it wth.
Logged
"Character is what you are in the Dark." - D.L. Moody
There is no joke that can be made online without someone being offended by it.
Life's too short to be ashamed of how you were born.
Just another Joe like 46

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #95 on: 11 Sep 2011, 18:44 »

What, you've never been given an Ives?
Logged
When people try to speak a gut reaction, they end up talking out their ass.

jwhouk

  • Awakened
  • *****
  • Offline Offline
  • Posts: 11,022
  • The Valley of the Sun
Re: Robots and love
« Reply #96 on: 11 Sep 2011, 18:52 »

No, because my parents didn't listen to Burl very much - other than during Christmastime.
Logged
"Character is what you are in the Dark." - D.L. Moody
There is no joke that can be made online without someone being offended by it.
Life's too short to be ashamed of how you were born.
Just another Joe like 46

DSL

  • Older than Moses
  • *****
  • Offline Offline
  • Posts: 4,097
    • Don Lee Cartoons
Re: Robots and love
« Reply #97 on: 11 Sep 2011, 19:08 »

I was sent an Ives by currier.
Logged
"We are who we pretend to be. So we had better be careful who we pretend to be."  -- Kurt Vonnegut.

Carl-E

  • Awakened
  • *****
  • Offline Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: Robots and love
« Reply #98 on: 11 Sep 2011, 21:01 »

Logged
When people try to speak a gut reaction, they end up talking out their ass.

Akima

  • WoW gold miner on break
  • *****
  • Offline Offline
  • Posts: 6,523
  • ** 妇女能顶半边天 **
Re: Robots and love
« Reply #99 on: 12 Sep 2011, 19:15 »

Quote
Is grief inevitable when love exists?
Grief is inevitable. The desire to not be separated from loved ones is perhaps the hardest attachment of all to overcome.

Quote
EDIT: we've never seen religious feelings or activity by an AnthroPC. Are they that different from us? Is it a different feeling when you know for a fact who your creators were and don't have to take it on faith? How is religion different for a being that doesn't have to confront mortality?
Do robots pray to electric gods, you mean? Regardless of her mortality, Momo is not immune to the tragedies and imperfections of the universe, and the Four Noble Truths would apply to her as much as to any other sentient being. Not every religion comes with a built-in creation myth, or concerns itself much with the creation of the universe, or even considers that the universe had a beginning at all.

As I said in the fan-art thread, I'd have expected Momo to sit in seiza or kekkafuza with her hands in the classic gassho position in this situation, but I suppose cultural conditioning would be a different thing for her.
Logged
"I would rather have questions that can't be answered, than answers that can't be questioned." Richard Feynman