
Robots and love


Kyrendis:
The only real difference between a computer and us, when it comes to emotions, is that we can read a computer's source code and fully understand how and why it is "feeling" those emotions.

This understanding of the first-order stimuli and the processing they go through is what leads people to the knee-jerk reaction that such feelings must be 'fake' or 'simulated'.

But what happens when we gain a thorough enough knowledge of the brain to do the same thing for humans? When we can trace the path in our own minds from first-order stimulus through processing to action or emotion, understand fully how each step works, and even manipulate it? Will we at that point suddenly become machines, simply because of transparency?

It's entirely possible sentience is an emergent trait, but you can produce emergent traits through machine evolution as well; people have been doing that for decades. What if the original AIs in the QC-verse emerged from an essentially random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomize our genomes? In that case they are as opaque as we are right now, and people would not be able to fully understand them. Are their emotions real, then?
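
A toy sketch of the sort of thing I mean, in Python. Everything here is invented purely for illustration: the bit-string "genomes", the fitness function, the parameters. Candidate programs are scored on a task, the fittest survive, and mutation plus crossover breed the next generation; nobody hand-designs the winner.

--- Code: ---
# Toy genetic algorithm; an illustrative sketch, not any real evolved-AI system.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # arbitrary, made-up "task"

def fitness(genome):
    # Score: how many bits match the target behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Point mutation: occasionally flip a bit.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover, crudely analogous to sexual recombination.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                  # task solved
    parents = population[:10]                  # survival of the fittest
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("solved in generation", generation, "->", population[0])
--- End code ---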

What about including biological or quantum processors in the minds of these AIs to enable sentience? In that case they could make the same sudden leaps of logic from A straight to D that characterize human thinking: non-linear prediction. If they are capable of that, are they not enough like us to be considered sentient?

It's a slippery slope to argue that something which looks and acts as if it is sentient isn't, because of some ineffable quality. It inevitably leads to a supernatural explanation, and at that point you can arbitrarily declare beings sentient or not based on whatever criteria you wish.

This is the argument white Europeans used to justify slavery. After all, however much they acted like it, Africans could not really be human; they merely pretended at emotions and honor. They had no souls.

Near Lurker:

--- Quote from: Kyrendis on 06 Sep 2011, 09:29 ---It's entirely possible sentience is an emergent trait, but you can produce emergent traits through machine evolution as well; people have been doing that for decades. What if the original AIs in the QC-verse emerged from an essentially random assortment of programs that gained sentience, in the same way mutation and sexual reproduction (in part) randomize our genomes?
--- End quote ---

I was with you until here. That's just not going to happen. A program isn't going to evolve unless it's programmed to evolve, and even then it would need a very wide berth, wider than any ever given, to evolve a human-like mind the way animals did. We're not going to accidentally the Singularity. And words like "quantum" and "emergent" don't excuse mumbo jumbo: the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand), and the latter pretty much never.

EDIT: Somehow quoted wrong part.

Is it cold in here?:
An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.
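
A concrete toy example, if it helps (Conway's Game of Life, the standard demonstration; the starting pattern is arbitrary): each cell obeys one trivial local rule about its neighbours, yet a five-cell "glider" emerges that walks across the grid, a behaviour no single cell's rule says anything about.

--- Code: ---
# Conway's Game of Life: one trivial rule per cell, emergent motion overall.
from collections import Counter

def step(live):
    # Count the live neighbours of every candidate cell, then apply the rule:
    # a cell lives next generation iff it has 3 neighbours, or 2 and is alive.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # the classic glider
cells = glider
for _ in range(4):                # the glider's period is four generations
    cells = step(cells)

# The same five-cell shape, shifted diagonally by (1, 1): the pattern "walks".
print(sorted(cells) == sorted((x + 1, y + 1) for (x, y) in glider))  # True
--- End code ---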

Kyrendis:

--- Quote from: Near Lurker on 06 Sep 2011, 10:41 ---
--- Quote from: Kyrendis on 06 Sep 2011, 09:29 ---But what happens when we gain a thorough enough knowledge of the brain to do the same thing for humans? When we can trace the path in our own minds from first-order stimulus through processing to action or emotion, understand fully how each step works, and even manipulate it? Will we at that point suddenly become machines, simply because of transparency?
--- End quote ---

I was with you until here. That's just not going to happen. A program isn't going to evolve unless it's programmed to evolve, and even then it would need a very wide berth, wider than any ever given, to evolve a human-like mind the way animals did. We're not going to accidentally the Singularity. And words like "quantum" and "emergent" don't excuse mumbo jumbo: the former should be used only with a model backing it up ("quantum computation" is a thing, and not hard to understand), and the latter pretty much never.

--- End quote ---

If history teaches anything, it's that most of the time when somebody says "that will never be done", they end up proven wrong in short order. Complexity is not an excuse for calling something impossible; it just means the thing is complex. Weather prediction is complex, and we keep getting better at it as faster computers emerge.

If Moore's Law holds, or adapts to a new substrate, then by 2050 $1000 worth of computing power will match the combined processing power of every human brain on the planet. At that point, simulating your mind wholesale will be trivial.
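
Back-of-envelope, for anyone who wants the arithmetic; every constant below is an assumed round number (about 10^16 operations per second for one brain, about 10^10 brains, and $1000 buying about 10^10 ops/sec in 2011):

--- Code: ---
# Rough Moore's-Law extrapolation; all constants are assumptions, not data.
from math import log2

OPS_PER_BRAIN = 1e16          # assumed ops/sec for one human brain
BRAINS = 1e10                 # assumed world population, order of magnitude
OPS_PER_1000USD_2011 = 1e10   # assumed ops/sec that $1000 buys in 2011

target = OPS_PER_BRAIN * BRAINS                    # ~1e26 ops/sec in total
doublings = log2(target / OPS_PER_1000USD_2011)    # ~53 doublings needed
years = 2050 - 2011

print(f"doublings needed: {doublings:.0f}")
print(f"required doubling time: {years / doublings:.2f} years")   # ~0.73
--- End code ---

With those guesses it takes roughly 53 doublings in 39 years, a doubling time of about nine months; so the date depends on doubling times themselves shrinking, or on that new substrate, rather than on the classic 18-to-24-month cadence.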

And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve as a consequence, if it has the capacity. And yes, that would be a wider berth than has ever been given; that's kind of a given.

And I tend to agree that "emergent" is a nonsense word that really means "too complex for us to understand, but still logical". However, previous suggestions in the thread had led me to believe people held that notion, so I was merely covering my bases.

Also, you seem to have quoted the wrong part, since it doesn't match what you're arguing: you're talking about AI, while the quote is about understanding the human mind.


--- Quote from: Is it cold in here? on 06 Sep 2011, 11:23 ---An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.

--- End quote ---

You could predict it by studying the behaviour of the whole, though; that's the point. All that argument says is that a lung cell is not a human, which is a truism, yes, but not a good argument. If you could analyze and understand the whole, you could predict what the whole could do. That's not emergent behaviour, that's just complexity.

Near Lurker:

--- Quote from: Is it cold in here? on 06 Sep 2011, 11:23 ---An anthill does things you wouldn't have predicted from the limited behavior of an individual ant. I do things that couldn't have been predicted from studying one of my neurons. It's a matter of observation that complex systems have emergent behavior.

--- End quote ---

There are two major flaws in that argument:
1. Neither ants nor neurons were (as far as we know) created by an external intelligence for a single purpose.
2. Studying one in isolation, no; but knowing that they typically don't work in isolation, and knowing their various permutations, you could extrapolate the anthill from the ants, and you could arrive at the vague idea that neurons might, conceivably, form part of a processor. Communication between programs, by contrast, tends to be very limited.


--- Quote from: Kyrendis on 06 Sep 2011, 12:01 ---If history teaches anything, it's that most of the time when somebody says "that will never be done", they end up proven wrong in short order. Complexity is not an excuse for calling something impossible; it just means the thing is complex. Weather prediction is complex, and we keep getting better at it as faster computers emerge.
--- End quote ---

I didn't say it wouldn't be done; I said it wouldn't be done by accident.


--- Quote from: Kyrendis on 06 Sep 2011, 12:01 ---If Moore's Law holds, or adapts to a new substrate, then by 2050 $1000 worth of computing power will match the combined processing power of every human brain on the planet. At that point, simulating your mind wholesale will be trivial.
--- End quote ---

It won't.  But even if it did, a faster computer can't get around our relative lack of understanding of the human mind.


--- Quote from: Kyrendis on 06 Sep 2011, 12:01 ---And yes, a program will not evolve unless it is programmed to do so, but it can be programmed to perform a task and evolve as a consequence, if it has the capacity. And yes, that would be a wider berth than has ever been given; that's kind of a given.
--- End quote ---

There's no reason to give a program designed for a specific task that wide a berth, though. That's why computers were built even when they had very limited processing power: they could do predefined tasks very well, and that's still what they're used for, just with much more complex tasks. Some are allowed to learn from past experience, but even then, the scope of how they can apply their gained knowledge is predetermined. To create programs that interact and mutate to the extent that would allow sentience to develop, for any purpose other than AI research, would defeat that very purpose.
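
To illustrate the kind of bounded learning I mean, a toy sketch; nothing here models any real system, and the action names and payoffs are invented. The program refines its estimates from experience, but the set of actions it can ever take is fixed in advance by its programmer.

--- Code: ---
# Toy epsilon-greedy learner: it adapts from experience, yet only ever
# chooses among the actions its programmer enumerated up front.
import random

ACTIONS = ["route_a", "route_b", "route_c"]   # predetermined scope
estimates = {a: 0.0 for a in ACTIONS}         # learned payoff estimates
counts = {a: 0 for a in ACTIONS}

def payoff(action):
    # Stand-in environment, hidden from the learner; values are invented.
    base = {"route_a": 0.3, "route_b": 0.7, "route_c": 0.5}[action]
    return base + random.gauss(0, 0.1)

for _ in range(1000):
    if random.random() < 0.1:                 # explore now and then
        action = random.choice(ACTIONS)
    else:                                     # otherwise exploit best guess
        action = max(ACTIONS, key=estimates.get)
    reward = payoff(action)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(ACTIONS, key=estimates.get))   # it "learned" route_b, nothing more
--- End code ---

However much experience it accumulates, it can only ever pick among route_a, route_b, and route_c; the scope was sealed at design time.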
