Comic Discussion > QUESTIONABLE CONTENT

Robots and love

<< < (15/37) > >>

Carl-E:
But there is  AI research.  And it's progressing...

Near Lurker:
You've completely missed the point of what I said.

Kugai:

--- Quote from: DSL on 06 Sep 2011, 08:21 ---That name ships like a Great Lakes freighter.  :evil:


--- End quote ---

The Edmund Fitzgerald?

Mad Cat:
Anyone who thinks they can just read the source code of a robot that is capable of showing emotional reactions has never studied computer theory. There are the classes of NP problems, NP-Hard problems and NP-Complete problems. https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29 One of the most famous NP-Complete problems is the halting problem. Is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably, "No."

And then there are computation systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person who can look at a graph of a suitably usable ANN and simulate it in their head so that they can accurately predict its response to any given input.

And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals programmed only to act individually and communicate with their like. The result of the communication alters their individual behaviour, and cooperation emerges.
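A toy sketch in Python (entirely made up for illustration, not from any real system): each agent is programmed only to do one local thing, average with its ring neighbours, yet global agreement emerges without any agent being told to agree:

```python
# Toy illustration of emergent cooperation: agents are programmed only
# to act locally (average with their two neighbours on a ring), yet
# global consensus emerges.

def step(values):
    """Each agent moves toward the mean of itself and its two ring neighbours."""
    n = len(values)
    return [
        (values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3
        for i in range(n)
    ]

values = [0.0, 10.0, 2.0, 7.0, 5.0]
for _ in range(100):
    values = step(values)

# No agent was told to "reach consensus", yet all values converge to
# (roughly) the common mean, 4.8 -- cooperation as emergent behaviour.
print(values)
```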

You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.

A case-based reasoning system designed for music recommendation proves capable at food recommendation. That is emergent behaviour.

In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory, this one from algorithm analysis: Big-O of n squared, O(n^2). As n, the complexity of the system, grows, the effort to analyze it grows as n^2. Truly warped levels of complexity can grow as O(n^n).

These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".

Near Lurker:

--- Quote from: Mad Cat on 06 Sep 2011, 17:34 ---Anyone who thinks they can just read the source code of a robot that is capable of showing emotional reactions has never studied computer theory. There are the classes of NP problems, NP-Hard problems and NP-Complete problems. https://secure.wikimedia.org/wikipedia/en/wiki/NP_%28complexity%29 One of the most famous NP-Complete problems is the halting problem. Is it possible to write a program that takes the code for another program as input and comes to a mathematically provable claim as to whether or not the input program will halt? The answer is provably, "No."
--- End quote ---

Wow.  This is wrong on so many levels.

First off, the Halting problem is not NP-complete.  I guess it's NP-hard, in a useless "if two is three, I am Pope" sense, but for it to be NP-complete would imply that it were in NP, and therefore could be solved in exponential time and polynomial space, and it can't be.  It can't be solved at all, which is the only thing in this paragraph you got right.  It's trivial, of course, to write a program that shows in finite time that another does halt, but there's no way to write one that can show the reverse in all cases.  This doesn't mean that you can't write a program to analyze source code in the vast majority of real-world cases, and it certainly doesn't mean a human can't heuristically "crack it open for a look."  You claim to have studied computer theory, and you've never done that?  Even for very complicated programs?
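To make the asymmetry concrete, here's a sketch of mine (programs modelled as Python generators that yield once per "step", a simplification I'm making up for illustration): a checker like this can confirm halting in finite time, but can never confirm non-halting, only give up.

```python
# Sketch of a halting *semi*-decider: it can confirm that a program
# halts, but for the non-halting side it can only ever say "unknown".

def halts_within(program, budget):
    """Return 'halts' if `program` finishes within `budget` steps,
    otherwise 'unknown' -- never 'loops forever'."""
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return "halts"      # ran to completion: halting confirmed
    return "unknown"            # budget exhausted: no conclusion

def halting_program():
    for i in range(10):
        yield

def looping_program():
    while True:
        yield

print(halts_within(halting_program, 1000))   # halts
print(halts_within(looping_program, 1000))   # unknown
```

Raising the budget helps with more real-world programs, which is exactly the heuristic analysis point, but no budget ever turns "unknown" into a proof of non-halting.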

And of course, as you really ought to know, it technically isn't proven that NP-complete problems don't have polynomial-time algorithms (yet).


--- Quote from: Mad Cat on 06 Sep 2011, 17:34 ---And then there are computation systems that don't even use source code. Artificial Neural Networks are programmed through connections between neurons and weights applied to those connections. I'd like to meet the person who can look at a graph of a suitably usable ANN and simulate it in their head so that they can accurately predict its response to any given input.
--- End quote ---

In their head?  No, of course not.  But at the end of the day, it's just another kind of code, and can be analyzed like any other code.
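Here's what "just another kind of code" means in practice, a complete two-layer network in a few lines of Python (the weights are arbitrary numbers I made up, not trained on anything):

```python
# A neural network is ordinary code plus ordinary numbers: the forward
# pass is plain arithmetic you can trace by hand or by tool.
import math

def forward(x, w_hidden, w_out):
    """2-input, 2-hidden-unit, 1-output network with sigmoid units (no biases)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

# Arbitrary hand-picked weights, for illustration only.
w_hidden = [[6.0, 6.0],
            [-4.0, -4.0]]
w_out = [8.0, 8.0]

# Nothing stops you from tabulating every case and "reading" the network:
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(forward(x, w_hidden, w_out), 3))
```

You wouldn't simulate it in your head any more than you'd hand-trace a compiler, but it's fully inspectable, deterministic, and analyzable with the same tools as any other code.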


--- Quote from: Mad Cat on 06 Sep 2011, 17:34 ---And there's not the first thing wrong with the term "emergent behaviour". Any time a computational system performs an act within the parameters of its design but outside the intent of its programmers, that is emergent behaviour. Cooperation is frequently an emergent behaviour of individuals programmed only to act individually and communicate with their like. The result of the communication alters their individual behaviour, and cooperation emerges.

You train an ANN on one input corpus, but then discover that it can operate adequately on a completely unrelated corpus. That is emergent behaviour.

A case-based reasoning system designed for music recommendation proves capable at food recommendation. That is emergent behaviour.
--- End quote ---

The phrase "emergent behavior" is so vaguely defined that it can encompass all these things and more, and its use in this context boils down to faith.  The point, however, is that in all these examples, the software was moved; it can't do what it wasn't built to do.  That's woo.


--- Quote from: Mad Cat on 06 Sep 2011, 17:34 ---In AI, computer scientists frequently create software systems that surprise them in their capabilities, and any time you have a system of sufficient complexity, the degree of analysis that it will succumb to is limited. Here's another concept for you from computer theory, this one from algorithm analysis: Big-O of n squared, O(n^2). As n, the complexity of the system, grows, the effort to analyze it grows as n^2. Truly warped levels of complexity can grow as O(n^n).

These things cannot be analyzed in the existing lifetime of the universe, so good luck on your deterministic understanding of ... "emergent behaviours".

--- End quote ---

Who said deterministic?  It would, of course, be a heuristic understanding, or if necessary, an approximate one, just like we're trying to understand the human brain right now, or, indeed, everything else in nature.

"These things cannot be analyzed in the existing lifetime of the universe" and "n^n" is an interesting juxtaposition.  While it's generally true that polynomial-time algorithms are desirable, a small system can certainly be analyzed, even if the analysis takes time n^n, and some problems are so hard to get down to n^2 time complexity that such algorithms can't be implemented in the life of the universe.  Between this and your garbled understanding of NP-completeness, you kind of sound like you've been flipping ahead in your textbooks.
