The Singularity vs. Stephen Hawking
Mlle Germain:
--- Quote from: LTK on 04 May 2014, 17:23 ---I've thought for a long time that the public's perception of AI (helped along by pop sci and sci-fi) is the biggest feat of anthropomorphisation in history. The development of human-level artificial intelligence will probably have a large impact on the world but it is nothing, nothing like encountering actual alien life.
--- End quote ---
Good point. I agree.
The questions I always ask myself are: What do we even mean by artificial intelligence, and why would people develop AIs?
For the latter: I think anything you could truly call an independently thinking artificial being would only ever be developed out of scientific interest in how intelligence and the brain work and how far we can go in recreating them, not because it serves any practical function. Sure, we want to have programs/robots with a certain amount of artificial intelligence (pattern recognition, self-learning etc.) for lots of applications everywhere in industry to automate processes and eliminate errors, but here's exactly the thing: We don't want those machines to be like humans, because the point is that they are supposed to do things humans can't do - e.g. work endlessly on the same task with no distraction and no physical or mental fatigue, without breaks, without holidays, without wages. With a practically human AI, this is not possible anymore - it doesn't have many advantages over employing a human, or rather, any advantages over employing a machine with less intelligence and self-awareness.
About the first question: This one is pretty hard - even among living things, or within the human species, we struggle hugely to properly define what intelligence is. I mean, we already have programs that can do certain things way better than humans, and even some that can be programmed to talk a lot like a human. But in the end, it's still just some program. It can't really develop further on its own in functions different from what its programmers intended it to do.
Then we also have to think about how complex the human brain really is and how fundamentally different it is from the digital structure of a computer. People obviously try to simulate brains (or at least a few thousand neurons) on supercomputers, and you know what: It's practically impossible, due to how energy-inefficient computers are compared to the human brain and due to their structure being completely different. On a powerful supercomputer, you can roughly simulate a few tens of thousands of neurons at 1/10th or so of the speed of actual neuron interactions (I don't quite remember the numbers, but something like that). The human brain has something like 86 billion neurons.
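To make concrete what "simulating neurons" even means, here is a toy leaky integrate-and-fire sketch in Python - all the numbers and sizes are invented for illustration, and the real simulators are far more sophisticated - but it shows why every simulated second costs so much computation:

--- Code: ---
# Toy leaky integrate-and-fire network (illustration only; every number here is made up).
import numpy as np

N = 1_000             # neurons; the problem is scaling this toward tens of billions
dt = 1e-4             # 0.1 ms time step
tau = 0.02            # 20 ms membrane time constant
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -65e-3   # volts

rng = np.random.default_rng(0)
# Sparse random synapses: roughly 1% connectivity, small weights.
weights = rng.normal(0.0, 2e-3, size=(N, N)) * (rng.random((N, N)) < 0.01)
v = np.full(N, v_rest)
spiked = np.zeros(N, dtype=bool)

def step(v, spiked, external_input):
    """Advance every membrane potential by one 0.1 ms step."""
    # Leak toward rest, plus external drive, plus input from last step's spikes.
    dv = (-(v - v_rest) + external_input + weights @ spiked.astype(float)) * (dt / tau)
    v = v + dv
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)   # reset the neurons that fired
    return v, spiked

# One second of "brain time" is 10,000 of these steps, each with an N x N
# synaptic update - and an actual brain does its version of this on ~20 W of power.
for _ in range(10_000):
    v, spiked = step(v, spiked, rng.normal(20e-3, 5e-3, N))
--- End code ---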
Now, there are also people who try to build an artificial brain by actually building a physical model of one: wires/tiny resistors for neurons etc., to see whether they can get any kind of neural firing pattern like in an actual brain. This works much better in some sense - the structure actually is that of a brain - but the approach is recent, so they haven't gotten very far with it yet. If you're interested, check out the Human Brain Project, specifically the group from Heidelberg University.
Anyway, what I want to say with this: Computers are bad at imitating structures like the human brain. If this structure is fundamental to actual intelligence, we will never have a truly intelligent computer. The only way to go is probably actual artificial, non-digital brains, and with those, we still have a loooong way to go. So: Probably no true AI any time soon.
Edit: Fixed typos
LTK:
X2
--- Quote from: techkid on 04 May 2014, 22:24 ---I don't know. We still have some way to go before we get self-aware AI,
--- End quote ---
Conflation number one: Intelligence doesn't necessarily mean self-awareness! I completely forgot about that aspect that people always attribute to AIs, but as you can imagine, an AI that is merely intelligent is probably still a long way from self-awareness. What does 'self-aware AI' mean anyway? If an AI has no senses that enable it to distinguish itself from the rest of the world, its entire existence, including its interactions with us, is all part of the same undifferentiated whole, so how could it become self-aware? If we do give an AI senses analogous to a human's, like the ability to recognise objects and catalogue their interrelationships (human - keyboard - computer - AI), how could it reach an understanding of its own existence within this web of interactions when it has no frame of reference? Lacking the selection pressure that life has been subjected to for the whole of its existence means that these concepts are much, much harder to develop.
--- Quote from: Schwungrad on 05 May 2014, 04:05 ---We should probably distinguish between Artificial Intelligence (the ability to pursue a given goal with at least the same range and flexibility of strategies that humans display) and Artificial Consciousness (the ability - and urge - to set and pursue one's own goals).
--- End quote ---
Conflation number two: Consciousness does not imply agency or goal-directedness! In fact, consciousness has very little to do with those things. Being conscious means having sensations: the intrinsic 'what-it-is-likeness' of sights, sounds, smells and all the other things about our existence. Consciousness is poorly understood indeed, but from what I've been taught, it can be thought of as the global integration of information across the entire brain. From neurology studies it is evident that the difference between being conscious of a stimulus and not being conscious of it is whether the stimulus is broadcast throughout the entire brain rather than being processed only locally. Obviously that doesn't explain the hard problem of consciousness: how is it possible that this integrative process results in such elusive and intangible things as the colour pink, the smell of bacon and the taste of capsaicin? Until we can answer that question, it is without a doubt impossible to ascertain whether an AI has developed consciousness. Even when one claims that it has, we cannot verify it, and while the same is true for humans, our shared biological background at least makes it more likely that we are all conscious. The same cannot be said for an AI.
But coming back to my previous complaint, which is ascribing motivation to an AI. An extremely advanced AI may have the ability to set its own subgoals when provided with a main goal, such as 'contact extraterrestrial life', but how could an AI of our making possibly have goals that it intrinsically wants to achieve? Humans are born with the goals of staying alive and reproducing; although not all humans subscribe to the latter, the former is a pretty robust driving factor. As I mentioned, AIs don't have the evolutionary baggage that gives us our agency, so how could one develop even a basic urge for self-preservation if it is not explicitly programmed with it?
--- Quote from: Mlle Germain on 05 May 2014, 04:09 ---Anyway, what I want to say with this: Computers are bad at imitating structures like the human brain. If this structure is fundamental to actual intelligence, we will never have a truly intelligent computer. The only way to go is probably actual artificial, non-digital brains, and with those, we still have a loooong way to go. So: Probably no true AI any time soon.
--- End quote ---
I'm quite certain there are many, many roads that lead to intelligence. It's just that intelligence doesn't tend to be favoured by natural selection, so we don't see other previously insignificant species colonising the entire surface of the planet in what is, on an evolutionary timescale, the blink of an eye. Or it's by pure chance that we're the first. Anyway, if something as chaotic and random as evolution can produce intelligent life, then intelligent life can sure as hell make other intelligent life.
Loki:
--- Quote from: LTK on 05 May 2014, 17:30 ---If an AI has no senses that enable it to distinguish itself from the rest of the world, its entire existence, including its interactions with us, is all part of the same undifferentiated whole, so how could it become self-aware? If we do give an AI senses analogous to a human's, like the ability to recognise objects and catalogue their interrelationships (human - keyboard - computer - AI), how could it reach an understanding of its own existence within this web of interactions when it has no frame of reference?
--- End quote ---
This is probably a stupid question, but couldn't we tell it: "Okay, so there are a lot of objects. Assign the label 'Me' to object 0x421337 as a constant"?
LTK:
Yeah, but from the AI's perspective, that would be about as useful as assigning it the label 'tastes like chicken'. You can teach an AI to provide information on its own object when prompted with the grammatical structure that refers to an individual, so when you ask "What can you tell me about yourself?" it can say something like "I was made in the year 2083, I am located at MIT, my purpose is to organize, and provide people with, information, my program contains 250 million lines of code..." It says this because it knows to respond to questions framed with 'your' with answers framed with 'I' and 'my'. But that doesn't make it any more self-aware than the Wikipedia article about Wikipedia.
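Just to spell out why that kind of labelling gets you nothing, here's a toy sketch of it in Python - the object id is the one Loki made up, and the facts are the made-up ones from my example above:

--- Code: ---
# Toy illustration: one entry in the AI's object catalogue is labelled as the
# machine itself, and "you"/"your" questions are answered from that entry.
objects = {
    "0x421337": {   # the constant "Me" label from Loki's suggestion
        "label": "Me",
        "built": 2083,
        "location": "MIT",
        "purpose": "organize information and provide people with it",
    },
    "0x000001": {"label": "keyboard"},
    "0x000002": {"label": "human"},
}

def answer(question: str) -> str:
    """Swap second-person questions for first-person facts about the 'Me' object."""
    if any(word in question.lower() for word in ("you", "your", "yourself")):
        me = next(o for o in objects.values() if o.get("label") == "Me")
        return (f"I was made in the year {me['built']}, I am located at "
                f"{me['location']}, and my purpose is to {me['purpose']}.")
    return "I don't know."

print(answer("What can you tell me about yourself?"))
# It "knows about itself" only the way a Wikipedia article about Wikipedia does:
# the self-reference is just a lookup keyed on a label.
--- End code ---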
Before an AI can come to grips with this concept, it first needs to understand that humans are (self-)aware; only then can it apply the same idea to itself. That means it has to have a theory of mind, which is something even humans aren't born with. When the AI is able to create a basic model of human behaviour and interpersonal interaction, it might be able to put itself into that model, which might constitute a form of self-awareness, but who knows? Maybe self-awareness is fundamentally impossible without the personal framework that evolution has provided us. We might have a better chance evolving intelligence from existing lifeforms rather than building it from scratch if we want something that's self-aware.
Mlle Germain:
--- Quote from: LTK on 05 May 2014, 17:30 ---I'm quite certain there are many, many roads that lead to intelligence.
--- End quote ---
Probably, yes. I was wording that poorly. I meant the human kind of theory-of-mind, self-awareness thing. If the structure of our brain has anything to do with how it works, we'll never get a computer to think like that, because it can't simulate this structure with its own very different setup. That's why, in my opinion, in order to recreate a brain we have to go a different route, as I outlined - and this route does not have to be unique.
--- Quote --- It's just that intelligence doesn't tend to be favoured by natural selection, so we don't see other previously insignificant species colonising the entire surface of the planet in what is, on an evolutionary timescale, the blink of an eye. Or it's by pure chance that we're the first. Anyway, if something as chaotic and random as evolution can produce intelligent life, then intelligent life can sure as hell make other intelligent life.
--- End quote ---
Again, this depends hugely on what you call intelligent (and also on what you call life, I guess). If you mean machines that can identify and analyse patterns in huge amounts of data incredibly well and thus make decisions in split seconds or give you appropriate answers even to somewhat ambiguous questions: Yes, that already exists - see the Jeopardy! supercomputer (IBM's Watson), robot cars etc.
Assuming you mean: Humans can artificially manufacture something with a brain equivalent that sort of works like a human brain in that it has a personality - then I'm not so sure, especially not in the near future. Don't forget that evolution had an insanely long time to try, and there were always many, many things going on at the same time. If you look at what's currently known about how the mess of neurons in our heads produces the sense of self we have and the resulting human behaviour - it's practically nothing. We are ridiculously far away from actually understanding even the brains of relatively simple animals on a fundamental level. So right now, I'm not so optimistic about that.
--- Quote from: LTK on 06 May 2014, 03:09 ---Yeah, but from the AI's perspective, that would be about as useful as assigning it the label 'tastes like chicken'. You can teach an AI to provide information on its own object when prompted with the grammatical structure that refers to an individual, so when you ask "What can you tell me about yourself?" it can say something like "I was made in the year 2083, I am located at MIT, my purpose is to organize, and provide people with, information, my program contains 250 million lines of code..." It says this because it knows to respond to questions framed with 'your' with answers framed with 'I' and 'my'. But that doesn't make it any more self-aware than the Wikipedia article about Wikipedia.
--- End quote ---
This is precisely why I think we won't have artificial humans running around any time soon.
--- Quote from: LTK on 06 May 2014, 03:09 ---We might have a better chance evolving intelligence from existing lifeforms rather than building it from scratch if we want something that's self-aware.
--- End quote ---
Wait, when you say "intelligent life can make other intelligent life", do you mean breeding it from existing organisms? In my opinion, that doesn't count as artificial intelligence or really as "creating intelligent life". Although it would still be quite an achievement, of course.