
Artificial Intelligence


EssJay89:
The AI thing is so overrated, 'cos we will have to program them to think, for cryin' out loud, and then one simple mistake... only God knows what happens then...

P.S. What is wrong with sig pics?

Samari:
Well, to get technical, we do have an awful lot of AI right now: chess programs that can beat grandmasters, cars that can adjust the steering based on road conditions. There are lots of examples of programs that take input from sensors and behave appropriately. What everyone really cares about, though, is sentient AI. The Turing test is really only the first step along this path, but we still can't even manage to write an AI that can do that. Of course, the other thing to consider, in the event of an AI that could pass as human, is whether the AI would actually ever BE sentient or just appear sentient.

Personally, I'm not convinced that sentient AI is a computable problem, but if it is, I would bet it falls into the NP realm.
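As an aside on the "input from sensors and behave appropriately" examples Samari lists above, here is a toy sketch of that idea in Python. The function, its parameters, and its tuning constants are all made up for illustration; this is not taken from any real driver-assist system.

```python
# A toy sketch of "take input from sensors, behave appropriately":
# a hypothetical rule-based steering correction from two sensor readings.

def steering_correction(grip: float, lateral_drift: float) -> float:
    """Return a steering adjustment (in degrees) from two sensor readings.

    grip          -- estimated road friction, 0.0 (ice) to 1.0 (dry asphalt)
    lateral_drift -- metres the car has drifted from lane centre (+ = right)
    """
    # Correct toward the lane centre, but more gently when grip is low
    # so the correction itself doesn't cause a skid.
    base_correction = -2.0 * lateral_drift
    return base_correction * max(grip, 0.1)


if __name__ == "__main__":
    print(steering_correction(grip=0.9, lateral_drift=0.5))   # firm correction
    print(steering_correction(grip=0.2, lateral_drift=0.5))   # gentler on ice
```

None of this is "intelligent" in the sense the thread is after, which is exactly Samari's point: mapping sensor readings to sensible actions is routine; sentience is the open question.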

Jiperlee:

--- Quote from: Anson ---
We were talking about Cyc in my Media Studies class last semester. Though I don't think it's really hard to tell a machine that it isn't human... Things get weird when it starts asking if it's alive and if it's going to die....
--- End quote ---


Naw, it's probably harder to explain to it why its greatest dreams will never come true (if it ever evolves to such a state).

nescience:

--- Quote from: Samari ---Personally, I'm not convinced that sentient AI is a computable problem, but if it is, I would bet it falls into the NP realm.
--- End quote ---


You mean NP-hard, right?  See, I think it's difficult to classify strong AI in such terms, because in order to come up with such a language (say, a decision problem revealing intelligence) we would have to (a) define the language such that a machine is sentient IFF it accepts a string of that language, and (b) show that strings of the language can be verified in polynomial time.  The first requirement seems to beg the question of what the implications of sentience may be.  The second is interesting in that we may not be able to produce a proper verification algorithm, forcing us to assume the requirement is satisfied by observing the only known sentience (our own) and its ability to verify "quickly."  There really isn't enough language linking sentience and computability yet, because frankly there isn't enough knowledge of ANYTHING linking sentience and computability yet.
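To unpack the certificate/verifier idea nescience is appealing to in requirement (b): a problem is in NP when a claimed "yes" answer comes with a certificate that can be checked in polynomial time, even if finding that certificate is hard. The sketch below uses Subset Sum as a stand-in NP problem purely to illustrate the pattern; the "sentience language" discussed above is hypothetical and nothing here is a claim about AI itself.

```python
# Minimal sketch of polynomial-time verification for Subset Sum:
# given an instance and a claimed solution (certificate), checking it is fast,
# even though finding it may require exponential search.

def verify_subset_sum(numbers, target, certificate):
    """Check a claimed solution in time linear in the size of the input.

    numbers     -- the problem instance, a list of integers
    target      -- the sum we are asked about
    certificate -- a proposed subset, given as a list of indices into `numbers`
    """
    if len(set(certificate)) != len(certificate):            # no index reused
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate): # indices in range
        return False
    return sum(numbers[i] for i in certificate) == target


if __name__ == "__main__":
    instance = [3, 34, 4, 12, 5, 2]
    # Checking the claimed answer [2, 4] (values 4 and 5) is quick.
    print(verify_subset_sum(instance, 9, [2, 4]))   # True
    print(verify_subset_sum(instance, 9, [0, 1]))   # False
```

nescience's worry is that for "sentience" we can't write down either the language or a verifier like this, which is why the NP label is hard to even pose.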

EssJay89:
I still think that AI will be the downfall of the human race, 'cos their programming can be changed to destroy us.
