Comic Discussion > QUESTIONABLE CONTENT
WCDT Strips 3236 - 3240 (6-10 June 2016)
oddtail:
Regarding the connection between intelligence and consciousness and whether they *have* to coexist, I think there's a problem with both logic and semantics that muddles the issue quite a bit. And I know, semantics are boring for many people, but here it's impossible to ignore how they influence thinking about consciousness in relation to intelligence.
For starters, there is no clear, general meaning or definition of "intelligence". To quote Wikipedia, "Intelligence has been defined in many different ways including one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving." This gives a pretty good "feel" for what intelligence is, but does not answer where the distinction between intelligence and non-intelligent analysis, algorithms or data manipulation lies.
We can approach intelligence either as pure problem solving, which is less ambiguous but both counterintuitive and controversial, or as the capacity for abstract thought and reasoning similar to that of a human. The thing is, the two are related but different, and each has its own problems.
If intelligence is purely problem solving, we have to consider *any* data manipulation that leads to a useful result a form of intelligence. A chess program is intelligent in this sense, but so is a simple Python script to automate a workplace task. Heck, an automatically operated door with a light sensor displays a rudimentary form of intelligence if you use this definition.
You can add caveats to this understanding of intelligence, such as the use of memory or the ability to solve problems beyond the scope that the system was originally designed for, but those do not remove the issue completely. A GPU that is used to mine Bitcoin would be "intelligent" in the sense that it operates beyond the original parameters the device was made for. Meanwhile, a human using their wits to escape a predator would not be displaying "intelligence", since evading predators is precisely the kind of problem their cognition evolved to handle.
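To make the point concrete: under the pure problem-solving definition, even a few lines of Python qualify as "intelligent". Here is a deliberately trivial sketch of a hypothetical workplace task (the task and names are made up for illustration) that takes data, manipulates it, and produces a useful result:

```python
# A deliberately trivial "problem solver": given raw expense lines,
# extract the amount from each and report the total. Under a pure
# problem-solving definition of intelligence, even this qualifies.

def total_expenses(lines):
    """Sum the numeric amount at the end of each 'item: amount' line."""
    total = 0.0
    for line in lines:
        _, _, amount = line.rpartition(":")  # text after the last colon
        total += float(amount)
    return round(total, 2)

log = ["coffee: 3.50", "train ticket: 12.00", "stapler: 7.25"]
print(total_expenses(log))  # 22.75
```

Nothing here analyses, understands, or generalises; it just maps input to output. Yet "takes a problem, produces the answer" is all the bare definition demands.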
Anyhow, we don't think of simple machines or simple programs as "intelligent"; by intelligence we usually mean the ability to analyse a situation and solve it similarly to the way humans do. We consider something to be intelligent if it is like us, but the problem is, this is dangerously close to the "no true Scotsman" fallacy.
When looking for signs of intelligence in, say, animals, we consider certain signs of intelligence to be more telling than others. In general, these are:
1) Communication, especially verbal communication;
2) The ability to manipulate abstract symbols and to associate signs with their meaning (language, writing, art etc.)
3) Problem-solving when a clear goal is presented;
4) Manual dexterity;
5) Empathy.
These are all obviously signs of intelligence in the broad, dry, "problem solving" sense, but we associate some behaviours with intelligence more than others. An excellent athlete is obviously very good at rapid-fire analysis of information and reaction to it, but we do not conventionally call athletic prowess "intelligence" to the extent we consider being good at games, academic achievement, or good social skills to be signs of intelligence.
The problem is, this indicates quite clearly that our perception of intelligence is not based on a verifiable, objective principle. There is no mathematical metric for that. For example, a computer that is amazing at solving a particular task is still "just a machine" even if it does the task 1000 times better than a human. What is perceived as "intelligent" is basically any behaviour that is either human-like, or highly valued in a particular culture.
The problem is, again, that this is both a "no true Scotsman" fallacy and a case of begging the question (in the original sense of assuming the conclusion, not in the everyday use sense of the phrase). We have some vague notions of what intelligence is, but it's not "real" intelligence if it strays too far from the human template. The ability to understand mathematical concepts and apply them? Considered intelligent. The ability of a simple mathematical program to perform very rapid calculations? Not intelligent.
The thing is, the question "are consciousness and general intelligence connected" is pointless if we use that intuitive, human-centric understanding of intelligence. Our general idea of intelligence IS centred on consciousness and, worse, on what humans perceive as meaningful. With such an assumption, any intelligence (in the general sense of being able to solve problems based on data and memory) is judged not on pure efficiency and capability, but on how closely it mimics a human thought process. This is circular reasoning, and *of course* it leads to the conclusion that any kind of intelligence without consciousness is "not really" intelligence.
In other words, a hypothetical species that could solve insanely difficult problems but could not meaningfully communicate (due to its evolutionary history or whatever) would be considered unintelligent. On the flipside, a hypothetical species that was extremely good at coordinating its actions and at understanding and predicting the behaviour of other beings, but incapable of grasping complex abstract concepts, would also be deemed not very intelligent. Conversely, a species that was very octopus-like, with extreme abilities in spatial reasoning and object manipulation, might very well think of humans as unintelligent.
Without a good explanation of why a good chess program is not intelligent, the question "is intelligence without consciousness possible" is both pointless and kinda has the answer built in.
prime_pm:
Did the author inadvertently break the fourth wall with this strip?
BenRG:
If anything, this feels like an answer to an FAQ about the AIs in Questionable Content. It doesn't feel like part of the narrative; it seems almost part of the background of the universe.
jheartney:
--- Quote from: oddtail on 10 Jun 2016, 05:28 ---In other words, a hypothetical species that could solve insanely difficult problems but could not meaningfully communicate (due to its evolutionary history or whatever) would be considered unintelligent.
--- End quote ---
This raises the question of how this hypothetical non-communicative species is going to grasp an insanely difficult abstract problem if there's no way to communicate the problem to it. Language is part of how we organize the world conceptually, and without language most of abstract reasoning is probably unreachable. (Add to that the fact that natural language is complex, and highly dependent on assumed theories of mind and on an assumed frame of reference based on lived experience.)
--- Quote from: oddtail on 10 Jun 2016, 05:28 ---On the flipside, a hypothetical species that was extremely good at coordinating its actions and at understanding and predicting the behaviour of other beings, but incapable of grasping complex abstract concepts, would also be deemed not very intelligent.
--- End quote ---
This probably describes the great mass of human beings, most of whom rarely attempt to grasp complex abstract reasoning. There is more to the story, though, in that much of what we do effortlessly (instantly modeling a complex 3D environment based on a pair of 2D images we receive from our eyes; deciphering the syntax and meaning of idiosyncratic and highly cryptic verbal communications) implies a huge amount of sophisticated pre-processing happening below the level of consciousness. Until we started trying to do this sort of processing on computers, there wasn't an appreciation of how difficult it is.
Today's experimental driverless cars, for example, use a kind of cheat in that they have access to massive map databases which allow them to avoid having to process the large majority of their input data, and instead focus only on whatever is novel in whatever they are getting from their sensor arrays. Humans (and mobile animals) don't do this, at least not as a primary strategy. Rather than depending on massive databases, we process our sensory inputs on the fly, and generally come up with a "good enough" model of our environment. The implication here is that while an average person may not be able to do complex abstract reasoning on the conscious level, something like that complex reasoning must be going on underneath.
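The "cheat" described above, processing only what the map does not already account for, can be sketched in a few lines. This is a toy illustration of the idea, not how any real autonomous-driving stack works; the grid-of-labels representation is invented for the example:

```python
# Toy sketch of map-based novelty filtering: the "prior map" holds what
# the system already expects to see at each position; only sensor cells
# that disagree with it are passed on for expensive processing.

def novel_cells(prior_map, sensor_frame):
    """Return (position, reading) pairs not explained by the prior map."""
    return [(pos, seen) for (pos, seen) in sensor_frame.items()
            if prior_map.get(pos) != seen]

prior = {(0, 0): "road", (0, 1): "road", (1, 0): "kerb"}
frame = {(0, 0): "road", (0, 1): "pedestrian", (1, 0): "kerb"}
print(novel_cells(prior, frame))  # [((0, 1), 'pedestrian')]
```

The appeal of the strategy is obvious: most of each frame is discarded by a cheap lookup, and only the pedestrian who wasn't in the database needs real attention. The cost is equally obvious: it only works where an up-to-date map exists.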
I'd also like to address the question of game-playing strategies in advanced gaming AIs. You bring up the example of Chess, which may not be the best illustration. Instead, I'd point to the Jeopardy-playing machine, IBM's Watson. Unlike Chess, Jeopardy is a free-form puzzle format in which players have to both grasp the meaning of natural-language clues and then apply them to previous knowledge. If any gaming challenge would require conventional natural "intelligence," this would seem to be it. Yet researchers were able to program the machine to use the algorithm-plus-database strategy to prevail there as well.
It's pretty clear that human players don't use such a strategy; our minds just don't work that way. So perhaps I'm wrong; perhaps one could have an "intelligent" agent able to handle general cognition challenges without having a conscious component. But I'd still think the agent would need both communicative and empathic processing capabilities in order to get very far as an overlord. And I'd think both of those capabilities could only happen in something with a conscious point of view.
Rincewind:
And that is why this webcomic is one of my absolute favourite things to read! The comic itself is amusing and often thought-provoking, and the comment section is usually full of diverse and fascinating discussions on all manner of topics: the very nature of consciousness and intelligence, sentient AI rights, gender-identity rights, the lives of those with "different" thought/emotional processes (the Sherlock Holmes semi-knock-off series "Elementary" showed the Holmes character in a brief relationship with a woman who was [I think] an Aspy. She had a marvelous term for herself [that I can't frickin' remember!] that described herself as being "different thinking" [but was much better than my attempt]), plus witty remarks, butts, and of course dumb jokes and half-witty observations (like I do). It's a privilege to be a (small) part of such a community.