Comic Discussion > QUESTIONABLE CONTENT

WCDT: 18-22 Oct 2010 (1776-1780)


Skewbrow:

--- Quote from: Near Lurker on 22 Oct 2010, 20:30 ---
Or do you mean the similar proofs of undecidability?  These follow the same logic, but they only apply to precise algorithms.  Modern AI is no more algorithmic than the human mind, but rather both generate multiple precise algorithms to attack a problem in a variety of ways, arriving at conclusions that couldn't be found by a single algorithm.

--- End quote ---

Are you quite sure that you understand what the undecidability results actually state? My understanding is that they are statements about classes of problems, not about algorithms. In other words, they state that no algorithm can possibly exist that solves every problem in such a class. Any finite number of algorithms can always be combined into a single one, so saying that using multiple algorithms somehow circumvents an undecidability result is bogus. My math PhD is in a rather different area, so I am not an expert on undecidability questions either; I would need to ask the half a dozen researchers at our department if you have a more precise question.
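To make the "combining" point concrete, here is a toy Python sketch (the solver functions are made up purely for illustration): a finite list of algorithms, tried in sequence, is itself just one algorithm, and so inherits the same undecidability limits as its parts.

```python
import math

def combine(solvers):
    """Turn a finite list of algorithms into one algorithm:
    try each in turn and return the first definite answer.
    The combination is itself just another algorithm, so it is
    subject to the same undecidability results as its parts."""
    def combined(problem):
        for solve in solvers:
            answer = solve(problem)
            if answer is not None:
                return answer
        return None  # none of the strategies settled the problem
    return combined

# Toy "strategies" for deciding whether an integer is a perfect square:
def small_case(n):
    # Only answers for n < 10; otherwise defers to the next strategy.
    return (n in {0, 1, 4, 9}) if n < 10 else None

def general_case(n):
    r = math.isqrt(n)
    return r * r == n

is_square = combine([small_case, general_case])
```

Nothing about stacking strategies like this escapes the Turing-machine framework; `combined` is exactly as "algorithmic" as its components.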

Of course, those results do use a precise definition of an algorithm (something that can be carried out by a Turing machine).

Mind you, when I googled for this kind of 'singularity', my first impression was that a cousin of Gödel's theorems might prevent it from ever happening :-) You are, of course, correct that Gödel is far too often misapplied to justify some obscure piece of pseudoreasoning, but the law of nature/logic underlying it places limitations on non-humans as well.

Then again, if the singularity happened the way some believers think it might (an instantaneous explosive growth of the power of reasoning of that device), my profession would be among the first to go, so I'm psychologically incapable of accepting the possibility of this ever happening. Denial. Denial. Denial :-)

I rather think that if an AI ever becomes capable of designing something smarter than itself, the design process will involve a time-consuming step very similar to the way we educate our children.

akronnick:

--- Quote from: Skewbrow on 23 Oct 2010, 01:34 ---I rather think that if an AI ever becomes capable of designing something smarter than itself, the design process will involve a time-consuming step very similar to the way we educate our children.

--- End quote ---

Either that, or it will be the size of an entire planet and will run for ten million years, only for the Vogons to destroy it to make way for a hyperspace bypass ten seconds before it outputs the result...





...and we're back!

Olymander:

--- Quote from: Skewbrow on 23 Oct 2010, 01:34 ---I rather think that if an AI ever becomes capable of designing something smarter than itself, the design process will involve a time-consuming step very similar to the way we educate our children.

--- End quote ---

How do we define something as "smarter than ourselves", though?  We can easily conceive of something that does what we do, only faster, but is simply doing something faster actually smarter?  Wouldn't the usual argument be that something actually "smarter" than we are is so because it thinks in a way that we cannot, or at least cannot easily, comprehend?  Or perhaps, in the more classical sense, being smarter happens when something is looked at from a completely different angle than before, like the rise of logic, or how relativistic physics replaced/extended classical Newtonian physics.  In either case, though, I would think that something "smarter" is more likely to arise by accident, or randomness, than by design.

Near Lurker:

--- Quote from: Skewbrow on 23 Oct 2010, 01:34 ---Of course, those results do use a precise definition of an algorithm (something that can be carried out by a Turing machine).
--- End quote ---

It's better to say that those results use precise algorithms, since all algorithms that can be expressed precisely can be carried out by a Turing machine (that's kind of the point), but that's the idea.  Our brains approximate infinite algorithms by changing tack haphazardly, which is how we're able to solve problems that can't be generally solved algorithmically (as reflected in the script for every detective show ever).  There's no reason to think that we can't make a computer do the same better, and it's certainly no reason to think our brains are immaterial.
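As a rough (and entirely hypothetical) sketch of what "changing tack haphazardly" could mean in code: randomize the order of attack instead of committing to one fixed search order. Note that this is still an algorithm through and through, which is rather the point of contention here.

```python
import random

def haphazard_search(score, candidates, seed=0):
    """A crude sketch of 'changing tack haphazardly': instead of
    following one fixed deterministic order of attack, shuffle the
    candidate solutions and keep the best one seen so far. Still
    just an algorithm; only the search order is randomized."""
    order = list(candidates)
    random.Random(seed).shuffle(order)  # no fixed tack
    best = order[0]
    for c in order[1:]:
        if score(c) > score(best):
            best = c
    return best

# Toy example: among 0..99, find the value closest to 42.
best = haphazard_search(lambda x: -abs(x - 42), range(100))
```

A seeded pseudorandom shuffle is, of course, perfectly computable, so the "haphazard" style by itself buys no power beyond a Turing machine.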

Skewbrow:

--- Quote from: Near Lurker on 23 Oct 2010, 09:55 ---
--- Quote from: Skewbrow on 23 Oct 2010, 01:34 ---Of course, those results do use a precise definition of an algorithm (something that can be carried out by a Turing machine).
--- End quote ---

It's better to say that those results use precise algorithms, since all algorithms that can be expressed precisely can be carried out by a Turing machine (that's kind of the point), but that's the idea.  Our brains approximate infinite algorithms by changing tack haphazardly, which is how we're able to solve problems that can't be generally solved algorithmically (as reflected in the script for every detective show ever).  There's no reason to think that we can't make a computer do the same better, and it's certainly no reason to think our brains are immaterial.

--- End quote ---

Whatever. To me your phrase "precise algorithms" sounded like you were talking about certain specific algorithms rather than the totality of all conceivable algorithms that a Turing machine can run. I apologize for misunderstanding you there.

I feel that your reference to detective shows is a bit off the mark. A more likely explanation for, e.g., my poor results as a detective is a faulty algorithm rather than the problem itself not being algorithmically tractable. After all, good ole Sherlock himself claimed to apply only the algorithms of logical thinking :-)

It may very well be possible to turn a computer into a better detective than I could ever be, but... a computer is a Turing machine (or, more precisely, a limited Turing machine, because a Turing machine is usually modelled as having an unlimited amount of memory). So a computer cannot do anything that a Turing machine could not. Therefore all present and future computers are doomed to forever remain dumbfounded when facing a sufficiently general instance of an algorithmically undecidable question. If we want to get around undecidability, we need something stronger than a Turing machine.
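For the record, the classic diagonal argument behind the undecidability of the halting problem can even be sketched in code. The `halts` oracle below is hypothetical; the whole point is that no total, correct version of it can exist:

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical
# oracle that is supposed to report whether f(x) eventually halts;
# no such total, correct Python function can exist.

def make_contrary(halts):
    """Build the self-defeating program from an alleged halting oracle."""
    def contrary(f):
        if halts(f, f):      # if the oracle says f(f) halts...
            while True:      # ...loop forever
                pass
        return None          # otherwise halt immediately
    return contrary

# Feeding `contrary` to itself yields the contradiction:
#   contrary(contrary) halts  <=>  halts(contrary, contrary) is False
#   contrary(contrary) loops  <=>  halts(contrary, contrary) is True
# Either way the oracle is wrong about at least one input.
```

Any real `halts` candidate is demonstrably wrong somewhere: for instance, an oracle that always answers False makes `contrary(contrary)` halt immediately, contradicting its own answer.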

Our brain cannot "approximate infinitely many algorithms", because it has only had a finite amount of time to learn and only has a finite pool of things to try. The "haphazard" part is more promising, though. Some of the more curious stuff revolves around concepts like "genetic algorithms". But even those still run on Turing machines, and are thus subject to the same limits. Also, we then lose the blinding speed that we today associate with computers.
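For the curious, here is a toy sketch (not a serious implementation) of the genetic-algorithm idea: evolve bit strings toward a target by mutation and selection. Every step of it, including the seeded "randomness", is perfectly Turing-computable.

```python
import random

def evolve(target, pop_size=20, generations=500, seed=1):
    """Toy genetic algorithm: evolve bit strings toward `target`.
    Fitness is the number of matching bits; each generation keeps
    the fittest individual (elitism) plus mutated copies of it.
    Deterministic given the seed, and entirely Turing-computable."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        if fitness(best) == n:
            return best
        # Next generation: the unmutated best, plus copies of it
        # in which each bit flips with probability 5%.
        pop = [best] + [[bit ^ (rng.random() < 0.05) for bit in best]
                        for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

result = evolve([1, 0, 1, 1, 0, 0, 1, 0])
```

Mutation and selection give it that trial-and-error flavour, but since the whole thing runs on ordinary hardware it cannot touch an undecidable question any more than a direct algorithm can.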
