The Singularity vs. Stephen Hawking
NilsO:
--- Quote from: Schwungrad on 05 May 2014, 04:05 ---We should probably distinguish between Artificial Intelligence (the ability to pursue a given goal with at least the same range and flexibility of strategies that humans display) and Artificial Consciousness (the ability - and urge - to set and pursue one's own goals). The classical "Robot Apocalypse" scenario implies the development of AC - which I think is unlikely as long as we don't even understand what human consciousness really is. However, even "bare" AI can wreak enough havoc if the given goals are carelessly or maliciously formulated.
--- End quote ---
No one has yet been able to formulate what consciousness really is. If we leave out the religious aspects, the physical basis is probably some kind of complex electrical and/or chemical process in the brain, with input from our sensory organs. As such, it should in principle be possible to simulate. But the complexity is huge. We have very few clues about the physical reality behind memory, reasoning, and self-awareness. We do not even know whether we have free will or not.
Most of the brain appears to be hard-wired, with instincts and reflexes (inherited from our animal ancestors) ruling our daily lives. When we jump, we do not consciously evaluate the visual input or work out the force, direction, and muscle groups needed to land precisely. Yet a cat can do this better than a human, even though it has a much smaller brain and is not considered particularly intelligent.
As Schwungrad says, AI and AC are not necessarily the same thing. If AI roughly corresponds to an animal, and AC to a human, we are still very far from being able to create an artificial cat-level intelligence, let alone an artificial human-level consciousness.
The scary thing is that a large share of AI research is probably done by the military, in order to improve their drone technology. If such an AI one day becomes self-aware, we may be in for a lot of trouble :psyduck:
Storel:
Summary of entire thread: artificial intelligence is hard.
LTK:
--- Quote from: Mlle Germain on 06 May 2014, 05:19 ---Probably, yes. I was wording that poorly. I meant the type of human theory of mind, self-awareness thing. If the structure of our brain has anything to do with how it works, we'll never get a computer to think like that, because it can't simulate this structure with its own very different setup. That's why, in my opinion, in order to recreate a brain we have to go a different route, as I outlined - and this route does not have to be unique.
--- End quote ---
I think I see what you mean. It's possible that massive parallel computing as done by neurons is the only method that allows the development of self-awareness, and computers as we know them are fundamentally limited in a way that prevents them from achieving this, but I don't know enough about either one to say whether that's certain. I think it's more likely to just be a matter of implementation: http://xkcd.com/505/
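For what it's worth, the "matter of implementation" view is easy to make concrete: an ordinary serial computer can emulate the parallel update of a whole population of neurons in one vectorized step - it just pays for it in time and energy. Here's a toy sketch in Python (every constant is an illustrative placeholder, not a calibrated neuron model):

--- Code: ---
import numpy as np

# Toy "massively parallel" neuron update: all neurons integrate their
# inputs simultaneously via one vectorized step. Every constant here is
# an illustrative placeholder, not a calibrated biological model.
rng = np.random.default_rng(0)
N = 1000                          # number of model neurons
W = rng.normal(0.0, 0.1, (N, N))  # random synaptic weights
v = np.zeros(N)                   # membrane potentials
threshold, leak = 1.0, 0.9

for step in range(100):
    spikes = v >= threshold       # which neurons fire this step
    v[spikes] = 0.0               # reset the neurons that fired
    # every neuron receives input from all spiking neurons at once
    v = leak * v + W @ spikes + rng.normal(0.0, 0.05, N)

print(spikes.sum(), "neurons fired on the final step")
--- End code ---

Nothing about the parallel update is logically out of reach for serial hardware; the open question is whether it scales to brain-sized networks.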
--- Quote ---Again, this depends hugely on what you call intelligent (and also what you call life, I guess). If you mean machines that can identify and analyse patterns in huge amounts of data incredibly well and thus make decisions in split-seconds or give you appropriate answers even on somewhat ambiguous questions: Yes, that already exists - see the Jeopardy Supercomputer, robot cars etc.
Assuming you mean: Humans can artificially manufacture something with a brain-equivalent that sort of works like a human brain in that it has a personality, then I'm not so sure, especially not in the near future. Don't forget evolution had an insanely long time to try and there were always many, many things going on at the same time. If you look at what's currently known about how the mess of neurons in our head produces the sensation of ourselves we have and the resulting human behaviour - it's practically nothing. We are ridiculously far away from actually understanding even the brain of relatively simple animals on a fundamental level. So right now, I'm not so optimistic on that.
--- End quote ---
My intended meaning was closer to the second one, but just one aside: while consciousness is fundamentally very poorly understood, that doesn't mean brain function itself is. The nematode brain has been completely mapped neuron by neuron, and the mouse brain has been the subject of intense study for probably over a century. Our collective knowledge of small-scale and large-scale neural processes is far more advanced than you give it credit for. (You can get a rough idea of the scale by looking around brain-map.org.) So I don't understand your lack of optimism, given that evolution took millions of years to get us here, while it took us a few thousand to change the entire world beyond recognition, and a lot less than that to develop the scientific method and use it to get a pretty good idea of how the universe and the things inside it work. Who knows what we could achieve in another hundred years? From that perspective, artificial human-like life doesn't seem far-fetched at all.
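To make the "mapped neuron by neuron" point concrete: a connectome at the nematode's scale (~302 neurons, a few thousand connections) fits comfortably into an ordinary graph structure and can be queried directly. A minimal sketch with the networkx library - the edges here are hypothetical stand-ins, not the real published wiring data:

--- Code: ---
import networkx as nx

# A directed graph standing in for a (tiny, hypothetical) slice of a
# connectome: nodes are neurons, edge weights are synapse counts.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("sensory_1", "inter_1", 5),   # hypothetical neuron names and
    ("sensory_2", "inter_1", 3),   # synapse counts, not the published
    ("inter_1",   "motor_1", 7),   # C. elegans wiring diagram
    ("inter_1",   "motor_2", 2),
])

# Questions like "what drives this motor neuron?" become one-liners.
print(list(G.predecessors("motor_1")))              # upstream neurons
print(nx.shortest_path(G, "sensory_1", "motor_2"))  # a sensorimotor path
--- End code ---

Of course, having the wiring diagram is not the same as understanding it - but it shows how far the mapping itself has come.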
--- Quote ---Wait, when you say "intelligent life can make other intelligent life" you mean breeding it from existing organisms? In my opinion, that doesn't count as artificial intelligence or really "creating intelligent life". Although it would still be quite an achievement, of course.
--- End quote ---
Yeah, I never thought about it much before, but take a group of fast-maturing animals, selectively breed them for their ability for complex communication and problem-solving - take parrots, for example - and it shouldn't take more than three human generations to have an animal on your hands that can talk to you about the weather.
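Whether a handful of human generations is really enough depends on how heritable the trait is and how hard you select; the standard back-of-envelope tool is the breeder's equation, R = h²S (response per generation = heritability times selection differential). A toy simulation with invented numbers, just to show the shape of the curve:

--- Code: ---
import numpy as np

# Toy truncation-selection model using the breeder's equation R = h^2 * S.
# All parameters are invented for illustration; this is not parrot genetics.
rng = np.random.default_rng(1)
h2 = 0.4                 # assumed narrow-sense heritability of the trait
top_frac = 0.05          # breed only the top 5% of each generation
pop_mean, sd = 100.0, 15.0

for gen in range(25):    # ~25 parrot generations
    scores = rng.normal(pop_mean, sd, 10_000)
    cutoff = np.quantile(scores, 1.0 - top_frac)
    S = scores[scores >= cutoff].mean() - pop_mean  # selection differential
    # response per generation (variance held fixed - a simplification
    # that real long-term breeding would not enjoy)
    pop_mean += h2 * S

print(f"trait mean after 25 generations: {pop_mean:.0f} (started at 100)")
--- End code ---

With the top 5% selected, the selection differential is about two standard deviations, so each generation gains roughly 0.8 sd under these assumptions; at a few years per parrot generation, 25 of them fit within a human lifetime. Whether shifting a trait mean gets you all the way to small talk about the weather is of course the real open question.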
--- Quote from: Storel on 06 May 2014, 13:20 ---Summary of entire thread: artificial intelligence is hard.
--- End quote ---
Is there maybe a single interesting thought you can contribute, or are you content with stating the incredibly obvious?
Loki:
I am certain Storel has more than one single interesting thought, surely.
Mlle Germain:
--- Quote from: LTK on 06 May 2014, 15:23 ---I think I see what you mean. It's possible that massive parallel computing as done by neurons is the only method that allows the development of self-awareness, and computers as we know them are fundamentally limited in a way that prevents them from achieving this, but I don't know enough about either one to say whether that's certain. I think it's more likely to just be a matter of implementation: http://xkcd.com/505/
--- End quote ---
This bit about computers being fundamentally inadequate to simulate/imitate brain processes comes from a talk I went to by Professor Karlheinz Meier of Heidelberg University, who is one of the leaders of the Human Brain Project I linked above. He tries to build neuromorphic computing structures as opposed to using regular computers to simulate the brain. In his talk, he outlined very well how much energy a computer with a normal architecture needs to do one computation - and even with the smallest electronic components you can build, this amount is much too large for computing processes as complex as those in a brain. It's not that it can't be done because it's too complex; it's that the amounts of energy needed are insane. Actual living brains are more energy-efficient at what they do by several orders of magnitude. That's why this group tries to build a new kind of computing machine with a brain-like structure.
Sorry, I can't explain it any better or in more detail. I tried to find the talk online - very interesting talk! - but can't.
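For what it's worth, the energy gap is easy to sanity-check with commonly cited ballpark figures - these are rough public estimates, not numbers from the talk: the brain runs on roughly 20 W, and its synaptic activity is often estimated at around 10^14 to 10^15 events per second.

--- Code: ---
# Back-of-envelope comparison; all figures are rough public estimates,
# not taken from Prof. Meier's talk.
brain_power_w = 20.0             # whole-brain power budget, ~20 W
synaptic_events_per_s = 1e15     # upper-end activity estimate

j_per_event = brain_power_w / synaptic_events_per_s
print(f"brain: ~{j_per_event:.0e} J per synaptic event")  # ~2e-14 J

cpu_j_per_op = 1e-9              # ~1 nJ per operation, a conventional CPU
print(f"gap: ~{cpu_j_per_op / j_per_event:.0e}x")         # ~5e+04
# i.e. four to five orders of magnitude, which matches the "several
# orders of magnitude" figure above
--- End code ---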
--- Quote ---My intended meaning was closer to the second one, but just one aside: while consciousness is fundamentally very poorly understood, that doesn't mean brain function itself is. The nematode brain has been completely mapped neuron by neuron, and the mouse brain has been the subject of intense study for probably over a century. Our collective knowledge of small-scale and large-scale neural processes is far more advanced than you give it credit for. (You can get a rough idea of the scale by looking around brain-map.org.) So I don't understand your lack of optimism, given that evolution took millions of years to get us here, while it took us a few thousand to change the entire world beyond recognition, and a lot less than that to develop the scientific method and use it to get a pretty good idea of how the universe and the things inside it work. Who knows what we could achieve in another hundred years? From that perspective, artificial human-like life doesn't seem far-fetched at all.
--- End quote ---
This bit was inspired by a discussion with a friend of mine, a biologist. I'm no expert on this, but as far as I understood, we know very well how neurons connect together, what chemicals are exchanged inside the brain, which chemicals trigger which receptors, what certain firing patterns look like, and (roughly) which areas of the brain are active during certain activities - but we have very little idea how these connect to human behaviour. She talked about an example where some stimulant (I forget which) chemically does more or less the same thing in the brain as alcohol - and yet the consequences for behaviour are almost opposite. There is a huge gap between neuroscience and behavioural psychology. One of the problems in studying this is of course that you can't just implant electrodes or put chemicals in people's brains to see what they do when you give impulses - I think this has been done for mice, but humans are of course far more complex. Unfortunately, you also can't ask the mouse what it was feeling or experiencing during the experiment, so it's not as enlightening.
--- Quote --- Yeah, I never thought about it much before, but take a group of fast-maturing animals, selectively breed them for their ability for complex communication and problem-solving - take parrots, for example - and it shouldn't take more than three human generations to have an animal on your hands that can talk to you about the weather.
--- End quote ---
The timeframe still seems a bit short to me, but I don't doubt that an intelligence somewhat similar to our own is in principle possible if you start with a species which is already almost there. Isn't it actually weird that no one seems to have tried that yet? I would have thought of apes as the most promising candidates (because they're closest to us, I guess), but you're right: one needs a species which reproduces faster. Also, I think some species of bird fare better than apes in many of these classic intelligence tests.
Actually, there is this one gorilla (Koko) that has been taught sign language and also to understand English (around 2,000 words, apparently). According to Wikipedia, she's the only gorilla ever to have passed the mirror test for self-recognition. I read an article about her once; she can talk to humans in a limited way (say that she wants food or water, which colour something is, etc.) and has even lied to the scientists studying her - behaviour that has also been observed in chimpanzees and is generally taken to require quite a bit of intelligence. On the other hand, it is not entirely clear to what extent Koko really understands sign language and uses it to communicate, and how much she has just been trained to sign certain things in certain situations to receive treats. There might be some anthropomorphising going on.