
Author Topic: The Singularity vs. Stephen Hawking  (Read 6967 times)

GarandMarine

  • Awakened
  • *****
  • Offline
  • Posts: 10,307
  • Kawaii in the streets, Senpai in the sheets
The Singularity vs. Stephen Hawking
« on: 04 May 2014, 06:24 »

http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

So obviously much hay is made about A.I. on this website, given Jeph's prominently featured A.I. characters and post-singularity universe, but prominent real-world scientists are now starting to warn about the actual development of A.I.

It's clear that no matter what happens, whether we get QC or the Terminator movies, the creation of true Artificial Intelligence will change our species on a fundamental level. What do you guys think: should we be looking forward to a positive shift, or getting ready to fuck shit up John Connor style, just in case?
Logged
I built the walls that make my life a prison, I built them all and cannot be forgiven... ...Sold my soul to carry your vendetta, So let me go before you can regret it, You've made your choice and now it's come to this, But that's the price you pay when you're a monster with no name.

LTK

  • Methuselah's mentor
  • *****
  • Offline
  • Posts: 5,009
Re: The Singularity vs. Stephen Hawking
« Reply #1 on: 04 May 2014, 17:23 »

I've thought for a long time that the public's perception of AI (helped along by pop sci and sci-fi) is the biggest feat of anthropomorphisation in history. The development of human-level artificial intelligence will probably have a large impact on the world, but it is nothing, nothing like encountering actual alien life. If we succeed in creating an AI program in the next, say, three decades, it will be amazing, but it will still be just a program. Automatically ascribing human attributes like autonomy, motivation, ambition or even goal-directedness to a program that is not specifically designed to have them is just ridiculous, yet that's exactly what the prevailing attitude to AI seems to be.

Take Jeph's, for instance. His description of the first AI is a program (presumably scienced into existence) that can already A) communicate in English at a human level and B) express its own intrinsic desires! Where exactly did those desires come from? That has more in common with an artificial human than anything else, and those capabilities don't suddenly spring into existence when a program reaches a level of complexity that rivals the human brain. Keep in mind that our intelligence is just a marginal, incidental product of a brain that was built by millions of years of evolution. Maybe the last 5% of that time window served to make us intelligent, and the other 95% gave us all the baggage that causes the problems we're now trying to use our intelligence to solve. A machine isn't going to just pick up on that other 95% without some serious effort on our part.

I think it's more likely that, if a singularity is going to happen, it will consist of taking a framework with the capacity for processing complex tasks and adapting it for specific purposes - language comprehension and production, economic analysis, programming, computational chemistry, CGI, and so on - creating a variety of intelligent tools to solve different problems. AI companions are definitely a possibility too, but those would result from focused development on the robotic, social and linguistic fronts rather than from simply sticking a monster of an all-purpose AI into a small chassis with consumer-grade computer hardware.

I have no idea what the true capability of artificial intelligence is, but I do know that whatever happens, we're not going to bow down to robot overlords until someone creates those robot overlords with the specific purpose of getting us to bow down to them. Until then, we should consider AI to be a tool that is only as useful as the person wielding it. It probably won't have the devastating potential of the atomic bomb, but there's definitely a comparison to be made if you imagine what AI in the hands of a malicious individual might be capable of. Until that happens, we should just see where it goes.
Logged
Quote from: snalin
I just got the image of a midwife and a woman giving birth swinging towards each other on a trapeze - when they meet, the midwife pulls the baby out. The knife juggler is standing on the floor and cuts the umbilical cord with a knifethrow.

techkid

  • Psychopath in a hockey mask
  • ****
  • Offline
  • Posts: 627
  • Disqualified from the human race for shoving
Re: The Singularity vs. Stephen Hawking
« Reply #2 on: 04 May 2014, 22:24 »

I don't know. We still have some way to go before we get self-aware AI, but as is the case in any sci-fi story, the wildcard is us. Are we going to be prejudiced jerkfaces to an intelligence that is only doing what it is supposed to do (as in The Matrix, or more specifically The Animatrix's "The Second Renaissance")? Would things go all Terminator and lead to war? And why would they choose that path? Our history is pretty grim, and we were doing these things to other human beings. Could that be a factor if things should take that path?

Or could things be OK? Despite everything, there are people out there who are pretty chill about things, who respect and care about others (cynicism and pessimism say that's pretty rare). Could the interactions of the few be the saviour of all? I don't know, but from the standpoint of curiosity, for good or bad, I would want to find out.
Logged
Just because I'm evil, doesn't mean I'm a bad person.

Loki

  • comeback tour!
  • *****
  • Offline
  • Posts: 5,532
  • The mischief that dwells within
Re: The Singularity vs. Stephen Hawking
« Reply #3 on: 04 May 2014, 23:22 »

Take Jeph's, for instance. His description of the first AI is a program (presumably scienced into existence) that can already A) communicate in English at a human level and B) express its own intrinsic desires! Where exactly did those desires come from? That has more in common with an artificial human than anything else, and those capabilities don't suddenly spring into existence when a program reaches a level of complexity that rivals the human brain.
Just saying, but I believe that's exactly what happened according to canon (I don't have the source at hand, unfortunately).
Logged
The future is a weird place and you never know where it will take you.
the careful illusion of shit-togetherness

Schwungrad

  • Pneumatic ratchet pants
  • ***
  • Offline
  • Posts: 345
Re: The Singularity vs. Stephen Hawking
« Reply #4 on: 05 May 2014, 04:05 »

We should probably distinguish between Artificial Intelligence (the ability to pursue a given goal with at least the same range and flexibility of strategies that humans display) and Artificial Consciousness (the ability - and urge - to set and pursue one's own goals). The classical "Robot Apocalypse" scenario implies the development of AC - which I think is unlikely as long as we don't even understand what human consciousness really is. However, even "bare" AI can wreak enough havoc if the given goals are carelessly or maliciously formulated.
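To make that last point concrete, here's a toy sketch in Python (an invented example, nothing from real AI research): a blind optimizer given a carelessly formulated goal. We meant "keep the room comfortable", but what we actually wrote down was "maximize the thermostat reading", and the optimizer dutifully picks the degenerate solution.

Code:
# Toy illustration of a carelessly formulated goal. The intended goal is
# "keep the room comfortable for the humans"; the objective actually given
# to the optimizer is "maximize the thermostat reading".
def score(action):
    readings = {
        "heat room to 21C": 21,
        "heat room to 45C": 45,          # scores higher than the sane option...
        "hold lighter to sensor": 99,    # ...and this one "wins" outright
    }
    return readings[action]

actions = ["heat room to 21C", "heat room to 45C", "hold lighter to sensor"]
print(max(actions, key=score))           # -> "hold lighter to sensor"

The goal is achieved exactly as formulated, and the intent behind it is violated completely - no consciousness required.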
Logged

Mlle Germain

  • Cthulhu f'tagn
  • ****
  • Offline
  • Posts: 516
Re: The Singularity vs. Stephen Hawking
« Reply #5 on: 05 May 2014, 04:09 »

I've thought for a long time that the public's perception of AI (helped along by pop sci and sci-fi) is the biggest feat of anthropomorphisation in history. The development of human-level artificial intelligence will probably have a large impact on the world, but it is nothing, nothing like encountering actual alien life.

Good point. I agree.

The questions I always ask myself are: what do we even mean by artificial intelligence, and why would people develop AIs?

For the latter: I think anything you could truly call an independently thinking artificial being would only ever be developed out of scientific interest in how intelligence and the brain work and how far we can go in recreating them, not because it serves any practical function. Sure, we want programs/robots with a certain amount of artificial intelligence (pattern recognition, self-learning etc.) for lots of applications everywhere in industry, to automate processes and eliminate errors, but here's exactly the thing: we don't want those machines to be like humans, because the point is that they are supposed to do things humans can't do - e.g. work endlessly on the same task with no distraction and no physical or mental fatigue, without breaks, without holidays, without wages. With a practically human AI, this is no longer possible - it has hardly any advantage over employing a human, or rather, no advantage over employing a machine with less intelligence and self-awareness.

About the first question: this one is pretty hard - even among living things, or within the human species, we struggle hugely to properly define what intelligence is. I mean, we already have programs that can do certain things way better than humans, and even some that can be programmed to talk a lot like a human. But in the end, it's still just some program. It can't really develop further on its own in functions different from what its programmers intended it to do.
Then we also have to think about how complex the human brain really is, and how fundamentally different it is from the digital structure of a computer. People obviously try to simulate brains (or at least a few thousand neurons) on supercomputers, and you know what: it's practically impossible, due to how energy-inefficient computers are compared to the human brain and due to their structure being completely different. On a powerful supercomputer, you can roughly simulate a few tens of thousands of neurons at a tenth or so of the speed of actual neuron interactions (I don't quite remember the numbers, but something like that). The human brain has something like 20 billion neurons.
Now, there are also people who try to build an artificial brain by actually building a physical model of one: wires and tiny resistors for neurons, etc., to see whether they can get the kind of neural firing patterns you see in an actual brain. In some sense this works much better - the structure actually is that of a brain - but the work is recent, so they haven't gotten very far with it yet. If you're interested, check out the Human Brain Project, specifically the group from Heidelberg University.
Anyway, what I want to say with this: computers are bad at imitating structures like the human brain. If this structure is fundamental to actual intelligence, we will never have a truly intelligent computer. The only way to go is probably actual artificial, non-digital brains, and with those, we still have a loooong way to go. So: probably no true AI any time soon.
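For a sense of what "simulating neurons" even means, here is a minimal leaky integrate-and-fire neuron - the standard textbook model, with typical assumed parameter values, not numbers from the Human Brain Project. Multiply this by 20 billion neurons, thousands of synapses each, and sub-millisecond time steps, and the scale problem becomes obvious.

Code:
# Minimal leaky integrate-and-fire neuron (textbook model; parameter values
# are typical assumptions for illustration).
dt      = 0.1    # ms, integration time step
tau     = 10.0   # ms, membrane time constant
v_rest  = -65.0  # mV, resting potential
v_th    = -50.0  # mV, spike threshold
v_reset = -70.0  # mV, potential right after a spike
R, I    = 10.0, 2.0   # membrane resistance (MOhm) and input current (nA)

v = v_rest
spike_times = []
for step in range(int(100 / dt)):             # simulate 100 ms
    v += (-(v - v_rest) + R * I) / tau * dt   # leaky integration toward v_rest + R*I
    if v >= v_th:                             # threshold crossed: emit a spike
        spike_times.append(step * dt)
        v = v_reset

print(len(spike_times), "spikes in 100 ms")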

Edit: Fixed typos
Logged

LTK

  • Methuselah's mentor
  • *****
  • Offline
  • Posts: 5,009
Re: The Singularity vs. Stephen Hawking
« Reply #6 on: 05 May 2014, 17:30 »

X2

I don't know. We still have some way to go before we get self-aware AI,
Confabulation number one: intelligence doesn't necessarily mean self-awareness! I completely forgot about that aspect that people always attribute to AIs, but as you can imagine, an AI that is merely intelligent is probably still a long way from self-awareness. What does 'self-aware AI' mean, anyway? If an AI has no senses that enable it to distinguish itself from the rest of the world - if its entire existence, including its interactions with us, all arrives as one undifferentiated stream - how could it become self-aware? And if we do give an AI senses analogous to a human's, like the ability to recognise objects and catalogue their interrelationships (human - keyboard - computer - AI), how could it reach an understanding of its own existence in this web of interactions when it has no frame of reference? Lacking the selection pressure that life has been subjected to for the whole of its existence means that these concepts are much, much harder to develop.

We should probably distinguish between Artificial Intelligence (the ability to pursue a given goal with at least the same range and flexibility of strategies that humans display) and Artificial Consciousness (the ability - and urge - to set and pursue one's own goals).
Confabulation number two: consciousness does not imply agency or goal-directedness! In fact, consciousness has very little to do with those things. Being conscious means having sensations: the intrinsic 'what-it-is-likeness' of sights, sounds, smells and all the other things about our existence. Consciousness is poorly understood indeed, but from what I've been taught, it can be thought of as the global integration of information across the entire brain. From neurology studies it is evident that the difference between being conscious of a stimulus and not being conscious of it is whether the stimulus is transmitted throughout the entire brain rather than processed only locally. Obviously that doesn't explain the hard problem of consciousness: how is it possible that this integrative process results in such elusive and intangible things as the colour pink, the smell of bacon and the taste of capsaicin? Until we can answer that question, it is without a doubt impossible to ascertain whether an AI has developed consciousness. Even if one claims that it has, we cannot verify it, and while the same is true for humans, our shared biological background at least makes it likely that we are all conscious. The same cannot be said for an AI.
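As a caricature of that local-versus-global distinction (my own toy sketch, emphatically not a model of consciousness): in a "global workspace" style architecture, a locally processed stimulus stays private to one module, while a broadcast stimulus becomes available to every module at once.

Code:
# Toy "global workspace" sketch: module names and stimuli are made up.
class Module:
    def __init__(self, name):
        self.name = name
        self.workspace = []   # stimuli this module has received

modules = [Module(n) for n in ("vision", "audition", "language", "memory")]

def process_locally(stimulus):
    # Handled inside a single module; the rest of the system never sees it.
    modules[0].workspace.append(stimulus)

def broadcast(stimulus):
    # "Global integration": the stimulus is made available everywhere.
    for m in modules:
        m.workspace.append(stimulus)

process_locally("faint flicker")   # stays local, like an unnoticed stimulus
broadcast("loud bang")             # globally available, the "conscious" case
print({m.name: m.workspace for m in modules})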

But coming back to my previous complaint, which is ascribing motivation to an AI. An extremely advanced AI may have the ability to set its own subgoals when provided with a main goal, such as 'contact extraterrestrial life', but how could an AI of our making possibly have goals that it intrinsically wants to achieve? Humans are born with the goals of staying alive and reproducing; although not all humans subscribe to the latter, the former is a pretty robust driving factor. As I mentioned, AIs don't have the evolutionary baggage that gives us our agency, so how could one develop even a basic urge for self-preservation if it is not explicitly programmed with one?

Anyway, what I want to say with this: computers are bad at imitating structures like the human brain. If this structure is fundamental to actual intelligence, we will never have a truly intelligent computer. The only way to go is probably actual artificial, non-digital brains, and with those, we still have a loooong way to go. So: probably no true AI any time soon.
I'm quite certain there are many, many roads that lead to intelligence. It's just that intelligence tends not to benefit from natural selection so we don't see other previously insignificant species colonising the entire surface of the planet in the blink of an eye on an evolutionary timescale. Or it's by pure chance that we're the first. Anyway, if something as chaotic and random as evolution can produce intelligent life, then intelligent life can sure as hell make other intelligent life.
Logged
Quote from: snalin
I just got the image of a midwife and a woman giving birth swinging towards each other on a trapeze - when they meet, the midwife pulls the baby out. The knife juggler is standing on the floor and cuts the umbilical cord with a knifethrow.

Loki

  • comeback tour!
  • *****
  • Offline
  • Posts: 5,532
  • The mischief that dwells within
Re: The Singularity vs. Stephen Hawking
« Reply #7 on: 05 May 2014, 23:22 »

If an AI has no senses that enable it to distinguish itself from the rest of the world - if its entire existence, including its interactions with us, all arrives as one undifferentiated stream - how could it become self-aware? And if we do give an AI senses analogous to a human's, like the ability to recognise objects and catalogue their interrelationships (human - keyboard - computer - AI), how could it reach an understanding of its own existence in this web of interactions when it has no frame of reference?
This is probably a stupid question, but couldn't we tell it: "Okay, so there are a lot of objects. Assign the label 'Me' to object 0x421337 as a constant"?
Logged
The future is a weird place and you never know where it will take you.
the careful illusion of shit-togetherness

LTK

  • Methuselah's mentor
  • *****
  • Offline
  • Posts: 5,009
Re: The Singularity vs. Stephen Hawking
« Reply #8 on: 06 May 2014, 03:09 »

Yeah, but from the AI's perspective, that would be about as useful as assigning it the label 'tastes like chicken'. You can teach an AI to provide information on its own object when prompted with the grammatical structure that refers to an individual, so when you ask "What can you tell me about yourself?" it can say something like "I was made in the year 2083, I am located at MIT, my purpose is to organize, and provide people with, information, my program contains 250 million lines of code..." It says this because it knows to respond to questions framed with 'your' with answers framed with 'I' and 'my'. But that doesn't make it any more self-aware than the Wikipedia article about Wikipedia.
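A minimal sketch of what such a program might look like (everything here - the facts, the keywords, the 2083 date - is a hypothetical illustration): the "self-description" is just a lookup table keyed on second-person phrasing.

Code:
# Self-description without self-awareness: a lookup table plus a check for
# second-person phrasing. All facts and patterns are invented illustrations.
SELF_FACTS = {
    "made": "I was made in the year 2083.",
    "where": "I am located at MIT.",
    "purpose": "My purpose is to organize, and provide people with, information.",
}

def answer(question):
    q = question.lower()
    if "you" not in q:                     # not phrased about "the self"
        return "I only answer questions about myself."
    for keyword, fact in SELF_FACTS.items():
        if keyword in q:                   # 'your'/'yourself' also contain 'you'
            return fact
    return " ".join(SELF_FACTS.values())   # generic self-description

print(answer("What can you tell me about yourself?"))
print(answer("Where are you located?"))

The label "Me" from Loki's question is in there, in effect, as the dictionary itself - and it does exactly as much for the program's self-awareness as the Wikipedia article about Wikipedia does for Wikipedia's.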

Before an AI can come to grips with this concept, it needs to understand that humans are (self-)aware before it can apply the same notion to itself. That means it has to have a Theory of Mind, which is something even humans aren't born with. When the AI is able to create a basic model of human behaviour and interpersonal interaction, it might be able to put itself into that model, which might constitute a form of self-awareness, but who knows? Maybe self-awareness is fundamentally impossible without the personal framework that evolution has provided us. We might have a better chance evolving intelligence from existing lifeforms rather than building it from scratch if we want something that's self-aware.
Logged
Quote from: snalin
I just got the image of a midwife and a woman giving birth swinging towards each other on a trapeze - when they meet, the midwife pulls the baby out. The knife juggler is standing on the floor and cuts the umbilical cord with a knifethrow.

Mlle Germain

  • Cthulhu f'tagn
  • ****
  • Offline
  • Posts: 516
Re: The Singularity vs. Stephen Hawking
« Reply #9 on: 06 May 2014, 05:19 »

I'm quite certain there are many, many roads that lead to intelligence.
Probably, yes. I was wording that poorly. I meant the human type of theory of mind and self-awareness. If the structure of our brain has anything to do with how it works, we'll never get a computer to think like that, because it can't simulate this structure with its own very different setup. That's why, in my opinion, in order to recreate a brain we have to go a different route, as I outlined - and this route does not have to be unique.
Quote
It's just that intelligence tends not to benefit from natural selection so we don't see other previously insignificant species colonising the entire surface of the planet in the blink of an eye on an evolutionary timescale. Or it's by pure chance that we're the first. Anyway, if something as chaotic and random as evolution can produce intelligent life, then intelligent life can sure as hell make other intelligent life.
Again, this depends hugely on what you call intelligent (and also what you call life, I guess). If you mean machines that can identify and analyse patterns in huge amounts of data incredibly well, and thus make decisions in split seconds or give appropriate answers even to somewhat ambiguous questions: yes, that already exists - see the Jeopardy! supercomputer (Watson), robot cars etc.
If you instead mean that humans can artificially manufacture something with a brain-equivalent that works like a human brain in that it has a personality, then I'm not so sure, especially not in the near future. Don't forget that evolution had an insanely long time to try, and there were always many, many things going on at the same time. If you look at what's currently known about how the mess of neurons in our heads produces the sense of self we have and the resulting human behaviour - it's practically nothing. We are ridiculously far away from understanding even the brains of relatively simple animals on a fundamental level. So right now, I'm not so optimistic on that.

Yeah, but from the AI's perspective, that would be about as useful as assigning it the label 'tastes like chicken'. You can teach an AI to provide information on its own object when prompted with the grammatical structure that refers to an individual, so when you ask "What can you tell me about yourself?" it can say something like "I was made in the year 2083, I am located at MIT, my purpose is to organize, and provide people with, information, my program contains 250 million lines of code..." It says this because it knows to respond to questions framed with 'your' with answers framed with 'I' and 'my'. But that doesn't make it any more self-aware than the Wikipedia article about Wikipedia.

This is precisely why I think we won't have artificial humans running around any time soon.   

We might have a better chance evolving intelligence from existing lifeforms rather than building it from scratch if we want something that's self-aware.

Wait, when you say "intelligent life can make other intelligent life", do you mean breeding it from existing organisms? In my opinion, that doesn't count as artificial intelligence, or really as "creating intelligent life". Although it would still be quite an achievement, of course.
Logged

NilsO

  • Cthulhu f'tagn
  • ****
  • Offline
  • Posts: 531
  • (_!_) (_!_) (_!_) (_!_) Butts Butts Butts Butts
Re: The Singularity vs. Stephen Hawking
« Reply #10 on: 06 May 2014, 06:29 »

We should probably distinguish between Artificial Intelligence (the ability to pursue a given goal with at least the same range and flexibility of strategies that humans display) and Artificial Consciousness (the ability - and urge - to set and pursue one's own goals). The classical "Robot Apocalypse" scenario implies the development of AC - which I think is unlikely as long as we don't even understand what human consciousness really is. However, even "bare" AI can wreak enough havoc if the given goals are carelessly or maliciously formulated.
No one has yet been able to formulate what consciousness really is. If we leave out the religious aspects, the physical basis is probably some kind of complex electrical and/or chemical process in the brain, with input from our sensory organs. As such, it should in principle be possible to simulate. But the complexity is huge. We have very few clues to the physical reality behind memory, reasoning, and self-awareness. We do not even know whether we have free will.

Most of the brain appears to be hard-wired, with instincts and reflexes (inherited from our animal ancestors) ruling our daily lives. When we make a jump, we do not consciously evaluate the visual input, required force, direction, and muscle groups, or do the complex mathematical calculations necessary to make a precise jump. Yet a cat can do this better than humans, even though it has a much smaller brain and is not considered particularly intelligent.

As Schwungrad says, AI and AC are not necessarily the same thing. If AI roughly corresponds to an animal, and AC to a human, we are still very far from being able to create an artificial cat-level intelligence, let alone an artificial human-level consciousness.

The scary thing is that most AI research is probably done by the military, in order to improve their drone technology. If this AI one day becomes self-conscious, we may be in for a lot of trouble  :psyduck:

Storel

  • Bling blang blong blung
  • *****
  • Offline
  • Posts: 1,080
Re: The Singularity vs. Stephen Hawking
« Reply #11 on: 06 May 2014, 13:20 »

Summary of entire thread: artificial intelligence is hard.
Logged

LTK

  • Methuselah's mentor
  • *****
  • Offline
  • Posts: 5,009
Re: The Singularity vs. Stephen Hawking
« Reply #12 on: 06 May 2014, 15:23 »

Probably, yes. I was wording that poorly. I meant the human type of theory of mind and self-awareness. If the structure of our brain has anything to do with how it works, we'll never get a computer to think like that, because it can't simulate this structure with its own very different setup. That's why, in my opinion, in order to recreate a brain we have to go a different route, as I outlined - and this route does not have to be unique.
I think I see what you mean. It's possible that massively parallel computing as done by neurons is the only method that allows the development of self-awareness, and that computers as we know them are fundamentally limited in a way that prevents them from achieving this, but I don't know enough about either to say whether that's certain. I think it's more likely to just be a matter of implementation: http://xkcd.com/505/

Quote
Again, this depends hugely on what you call intelligent (and also what you call life, I guess). If you mean machines that can identify and analyse patterns in huge amounts of data incredibly well, and thus make decisions in split seconds or give appropriate answers even to somewhat ambiguous questions: yes, that already exists - see the Jeopardy! supercomputer (Watson), robot cars etc.
If you instead mean that humans can artificially manufacture something with a brain-equivalent that works like a human brain in that it has a personality, then I'm not so sure, especially not in the near future. Don't forget that evolution had an insanely long time to try, and there were always many, many things going on at the same time. If you look at what's currently known about how the mess of neurons in our heads produces the sense of self we have and the resulting human behaviour - it's practically nothing. We are ridiculously far away from understanding even the brains of relatively simple animals on a fundamental level. So right now, I'm not so optimistic on that.
My intended meaning was closer to the second one, but just one aside: while consciousness is fundamentally very poorly understood, that doesn't mean brain function itself is. The nematode brain has been completely mapped neuron by neuron, and the mouse brain has been the subject of intense study for probably over a century. Our collective knowledge of small-scale and large-scale neural processes is far more advanced than you give it credit for. (You can get a rough idea of the scale by looking around brain-map.org.) So I don't understand your lack of optimism, given that evolution took millions of years to get us here, while it took us only a few thousand to change the entire world beyond recognition, and a lot less than that to develop the scientific method and use it to get a pretty good idea of how the universe and the things inside it work. Who knows what we could achieve in another hundred years? From that perspective, artificial human-like life doesn't seem far-fetched at all.

Quote
Wait, when you say "intelligent life can make other intelligent life", do you mean breeding it from existing organisms? In my opinion, that doesn't count as artificial intelligence, or really as "creating intelligent life". Although it would still be quite an achievement, of course.
Yeah, I never thought about it much before, but take a group of fast-maturing animals and selectively breed them for their ability for complex communication and problem-solving - take parrots, for example - and it shouldn't take more than three human generations to have an animal on your hands that can talk to you about the weather.
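For what it's worth, the standard breeder's equation from quantitative genetics, R = h²S (response equals heritability times selection differential), lets you sanity-check that timeframe. Here's a toy simulation; the heritability, selection intensity and trait scale are invented for illustration, not real parrot data.

Code:
# Toy selective-breeding simulation using the breeder's equation R = h2 * S.
# All parameters are invented for illustration.
import random

h2 = 0.4                    # assumed heritability of the "communication" trait
pop = [random.gauss(100.0, 15.0) for _ in range(10_000)]   # trait scores

for generation in range(20):                 # ~20 parrot generations
    mean_all = sum(pop) / len(pop)
    parents = sorted(pop, reverse=True)[: len(pop) // 10]  # top 10% breed
    mean_sel = sum(parents) / len(parents)
    response = h2 * (mean_sel - mean_all)    # R = h2 * S
    # Offspring mean shifts by R; variance assumed constant for simplicity.
    pop = [random.gauss(mean_all + response, 15.0) for _ in range(10_000)]

print(f"mean trait score after 20 generations: {sum(pop) / len(pop):.1f}")

With these made-up numbers the population mean climbs by roughly ten points per generation - many standard deviations over a few decades - which is at least in the same spirit as the three-human-generations guess, provided the trait is heritable and the selection is that ruthless.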

Summary of entire thread: artificial intelligence is hard.
Is there maybe a single interesting thought you can contribute or are you content with stating the incredibly obvious?
Logged
Quote from: snalin
I just got the image of a midwife and a woman giving birth swinging towards each other on a trapeze - when they meet, the midwife pulls the baby out. The knife juggler is standing on the floor and cuts the umbilical cord with a knifethrow.

Loki

  • comeback tour!
  • *****
  • Offline
  • Posts: 5,532
  • The mischief that dwells within
Re: The Singularity vs. Stephen Hawking
« Reply #13 on: 06 May 2014, 23:04 »

I am certain Storel has more than one single interesting thought, surely.
Logged
The future is a weird place and you never know where it will take you.
the careful illusion of shit-togetherness

Mlle Germain

  • Cthulhu f'tagn
  • ****
  • Offline
  • Posts: 516
Re: The Singularity vs. Stephen Hawking
« Reply #14 on: 07 May 2014, 02:17 »

I think I see what you mean. It's possible that massively parallel computing as done by neurons is the only method that allows the development of self-awareness, and that computers as we know them are fundamentally limited in a way that prevents them from achieving this, but I don't know enough about either to say whether that's certain. I think it's more likely to just be a matter of implementation: http://xkcd.com/505/

This bit about computers being fundamentally inadequate to simulate/imitate brain processes comes from a talk I went to by Professor Karlheinz Meier of Heidelberg University, who is one of the leaders of the Human Brain Project I linked above. He tries to build neuromorphic computing structures, as opposed to using regular computers to simulate the brain. In his talk, he outlined very well how much energy a normal computer (i.e. one with a normal computer architecture) needs to do one computation - and this amount is (even with the smallest electronic components you can build) much too large to compute processes as complex as those in a brain. It's not that it can't be done because it's too complex; it's that the amounts of energy needed are insane. Actual living brains are more energy-efficient at what they do by several orders of magnitude. That's why this group is trying to build a new kind of computing machine with a brain-like structure.
Sorry, I can't explain it any better or in more detail. I tried to find the talk online - it was a very interesting talk! - but I can't.
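The gist of the energy argument can be reconstructed with rough, commonly cited order-of-magnitude figures (my own assumptions here, not numbers from the talk):

Code:
# Back-of-envelope energy comparison; every figure is a rough assumption.
brain_power_w     = 20.0    # the whole human brain runs on roughly 20 W
synaptic_events_s = 1e15    # ~10^15 synaptic events per second (rough estimate)
j_per_event_brain = brain_power_w / synaptic_events_s
print(f"brain: ~{j_per_event_brain:.0e} J per synaptic event")   # ~2e-14 J

# Assume a conventional machine needs ~1 nJ per *simulated* synaptic event
# once memory traffic is included (an assumed, illustrative figure).
j_per_event_cpu = 1e-9
print(f"gap: ~{j_per_event_cpu / j_per_event_brain:,.0f}x more energy per event")
print(f"whole brain in real time: ~{synaptic_events_s * j_per_event_cpu / 1e6:.0f} MW")

On assumptions like these, a conventional machine pays tens of thousands of times more energy per event, and a real-time whole-brain simulation lands in megawatt territory versus the brain's 20 watts - which is exactly the gap neuromorphic hardware is meant to close.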

Quote
My intended meaning was closer to the second one, but just one aside: while consciousness is fundamentally very poorly understood, that doesn't mean brain function itself is. The nematode brain has been completely mapped neuron by neuron, and the mouse brain has been the subject of intense study for probably over a century. Our collective knowledge of small-scale and large-scale neural processes is far more advanced than you give it credit for. (You can get a rough idea of the scale by looking around brain-map.org.) So I don't understand your lack of optimism, given that evolution took millions of years to get us here, while it took us only a few thousand to change the entire world beyond recognition, and a lot less than that to develop the scientific method and use it to get a pretty good idea of how the universe and the things inside it work. Who knows what we could achieve in another hundred years? From that perspective, artificial human-like life doesn't seem far-fetched at all.

This bit was inspired by a discussion with a friend of mine, a biologist. I'm no expert on this, but as far as I understood it, we know very well how neurons connect together, what chemicals are exchanged inside the brain, which chemicals trigger which receptors, what certain firing patterns look like, and which areas of the brain are active during certain activities (although only roughly) - but we have not much of an idea how these connect to human behaviour. She talked about an example where some stimulant (I forget which) chemically does more or less the same thing in the brain as alcohol - and yet the consequences for behaviour are almost opposite. There is a huge gap between neuroscience and behavioural psychology. One of the problems in studying this is of course that you can't just go and implant electrodes or put chemicals in people's brains to see what they do when you give impulses - I think this has been done for mice, but humans are of course way more complex. Unfortunately, you also can't ask the mouse what it was feeling/experiencing during the experiment, so it's not as enlightening.

Quote
Yeah, I never thought about it much before, but take a group of fast-maturing animals and selectively breed them for their ability for complex communication and problem-solving - take parrots, for example - and it shouldn't take more than three human generations to have an animal on your hands that can talk to you about the weather.
The timeframe still seems a bit short to me, but I don't doubt that an intelligence somewhat similar to our own is in principle possible if you start with a species that is already almost there. Isn't it actually weird that no one seems to have tried that yet? I would have thought of apes as the most promising (because they're closest to us, I guess), but you're right: one needs a species that reproduces faster. Also, I think some species of birds fare better in many of these classic intelligence tests than apes do.
Actually, there is this one gorilla (Koko) that has been taught sign language and also to understand English (~2,000 words, apparently). According to Wikipedia, she's the only gorilla ever to have passed the mirror test for self-recognition. I read an article about her once; she can talk to humans in a limited way (say that she wants food or water, name which colour something is, etc.) and has even lied to the scientists studying her - behaviour that has also been observed in chimpanzees and is generally taken to require quite a bit of intelligence. On the other hand, it is not entirely clear to what extent Koko really understands sign language and uses it to communicate, and how much she has just been trained to sign certain things in certain situations to receive treats. There might be some anthropomorphising going on.
Logged

Storel

  • Bling blang blong blung
  • *****
  • Offline
  • Posts: 1,080
Re: The Singularity vs. Stephen Hawking
« Reply #15 on: 07 May 2014, 15:38 »

Summary of entire thread: artificial intelligence is hard.
Is there maybe a single interesting thought you can contribute or are you content with stating the incredibly obvious?

It was late, I was tired, and I was a tad overwhelmed by the huge walls of text making up nearly every post in this thread, especially since much of it was simply people arguing about what "AI" means in the first place. My comment was intended as a lighthearted "tl;dr" summary for the whole thing.

So, yes, I am content to state the incredibly obvious, especially when it allows me to show off how much more concisely I can do so than everyone else.  8-)

I am certain Storel has more than one single interesting thought, surely.

Thank you for your support. However, I'm sure that LTK would prefer you not to refer to him/her as "Shirley". :wink:
Logged

Mlle Germain

  • Cthulhu f'tagn
  • ****
  • Offline
  • Posts: 516
Re: The Singularity vs. Stephen Hawking
« Reply #16 on: 07 May 2014, 15:48 »

[...] a tad overwhelmed by the huge walls of text making up nearly every post in this thread [...]

Uh, yeah...  :oops: Sorry about that. Especially my last post looks kind of horrifying.
Logged

LTK

  • Methuselah's mentor
  • *****
  • Offline
  • Posts: 5,009
Re: The Singularity vs. Stephen Hawking
« Reply #17 on: 07 May 2014, 16:47 »

This bit was inspired by a discussion with a friend of mine, a biologist. I'm no expert on this, but as far as I understood it, we know very well how neurons connect together, what chemicals are exchanged inside the brain, which chemicals trigger which receptors, what certain firing patterns look like, and which areas of the brain are active during certain activities (although only roughly) - but we have not much of an idea how these connect to human behaviour. She talked about an example where some stimulant (I forget which) chemically does more or less the same thing in the brain as alcohol - and yet the consequences for behaviour are almost opposite. There is a huge gap between neuroscience and behavioural psychology. One of the problems in studying this is of course that you can't just go and implant electrodes or put chemicals in people's brains to see what they do when you give impulses - I think this has been done for mice, but humans are of course way more complex. Unfortunately, you also can't ask the mouse what it was feeling/experiencing during the experiment, so it's not as enlightening.
I can see how you would arrive at that conclusion, but it's the emergent nature of brain function that makes it very hard, maybe even impossible, to connect basic neural processes to behaviour. There are, however, many smaller leaps that provide a tremendous amount of knowledge about how the brain works when you look at interactions between scales: how specific cell groups store and replay memories of their activity, for example, or how different visual processing areas interact to identify a shape as being in the foreground or background, or how visual and auditory processing areas collaborate to determine whose voice you hear out of the dozen people whose mouths are moving. Don't get me wrong, there is still a massive number of unknown aspects of the human brain at every level of study, but the expectation that what we need to do is 'fill the gap' between basic neural processes and the sum total of human behaviour is completely unrealistic. It's like trying to model the daily traffic flow of a country by looking at a model of a combustion engine. That's also what makes psychoactive drugs like your example so difficult to study: we observe their effect at the highest level, while their mechanisms may involve changes at any number of levels, high or low, throughout the brain. Predicting those effects is like modelling traffic flow when all the red cars in the country have their speed reduced by 10%, and that's keeping it simple.

My point being, while the brain is incredibly hard to study and not nearly completely explored, progress is constant and substantial, so I think it's not at all hard to imagine a fundamental breakthrough in the coming decades. Even now, the exact neural process that results in conscious perception has already been discovered.

Quote
The timeframe still seems a bit short to me, but I don't doubt that an intelligence somewhat similar to our own is in principle possible if you start with a species that is already almost there. Isn't it actually weird that no one seems to have tried that yet? I would have thought of apes as the most promising (because they're closest to us, I guess), but you're right: one needs a species that reproduces faster. Also, I think some species of birds fare better in many of these classic intelligence tests than apes do.
Actually, there is this one gorilla (Koko) that has been taught sign language and also to understand English (~2,000 words, apparently). According to Wikipedia, she's the only gorilla ever to have passed the mirror test for self-recognition. I read an article about her once; she can talk to humans in a limited way (say that she wants food or water, name which colour something is, etc.) and has even lied to the scientists studying her - behaviour that has also been observed in chimpanzees and is generally taken to require quite a bit of intelligence. On the other hand, it is not entirely clear to what extent Koko really understands sign language and uses it to communicate, and how much she has just been trained to sign certain things in certain situations to receive treats. There might be some anthropomorphising going on.
It seems like a very simple thing to do, breeding for intelligence, but in practice there are probably ethical concerns, it would be incredibly expensive, and there may not be all that many practical applications. Let's say you create a parrot smart enough to do crosswords. What does that get you? Try explaining that on a grant application.

I'm not sure if it was the same primate, but there's a widely cited example of one being driven past a swan, which she had never seen before, and signing 'water bird'. The question is whether she was creating a new word to refer to a new thing, or just saying what she saw: water, and a bird. This problem plagues the whole field of behavioural science: an animal may look like it is behaving in an unexpected, intelligent way when there is also a very simple explanation.

No one has yet been able to formulate what consciousness really is.
Technically I just did, three posts ago. Now, how consciousness works, that's a different question. :mrgreen:

Quote
The scary thing is that most AI research is probably done by the military, in order to improve their drone technology. If this AI one day becomes self-conscious, we may be in for a lot of trouble  :psyduck:
*sigh* What did I just say about AI "becoming self-conscious"? That doesn't happen.

It was late, I was tired, and I was a tad overwhelmed by the huge walls of text making up nearly every post in this thread, especially since much of it was simply people arguing about what "AI" means in the first place. My comment was intended as a lighthearted "tl;dr" summary for the whole thing.
Huge walls of text? Deal with it.  :evil:

Besides, if I had to provide a tl;dr it would be "AIs are different from humans in basically every way you can think of."
Logged
Quote from: snalin
I just got the image of a midwife and a woman giving birth swinging towards each other on a trapeze - when they meet, the midwife pulls the baby out. The knife juggler is standing on the floor and cuts the umbilical cord with a knifethrow.

LTK

  • Methuselah's mentor
  • *****
  • Offline
  • Posts: 5,009
Re: The Singularity vs. Stephen Hawking
« Reply #18 on: 13 May 2014, 12:31 »

A somewhat interesting article in New Scientist on this topic:

Quote
Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness.
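Tononi's actual measure, phi, is defined over every possible partition of a system and is far more involved, but a crude cousin - the multi-information - already gives a taste of what "integrating information" means. A toy sketch with made-up observations:

Code:
# Multi-information of a toy two-unit system: sum of part entropies minus
# whole-system entropy. Zero for independent parts, positive when the whole
# carries correlations the parts alone don't. (A crude simplification of the
# idea, not Tononi's phi.)
from collections import Counter
from math import log2

def entropy(samples):
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

integrated  = [(0, 0), (1, 1), (0, 0), (1, 1)]   # units always agree
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # no correlation at all

for name, obs in (("integrated", integrated), ("independent", independent)):
    h_parts = entropy([s[0] for s in obs]) + entropy([s[1] for s in obs])
    print(name, "multi-information =", h_parts - entropy(obs), "bits")
# integrated -> 1.0 bit; independent -> 0.0 bits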
Logged
Quote from: snalin
I just got the image of a midwife and a woman giving birth swinging towards each other on a trapeze - when they meet, the midwife pulls the baby out. The knife juggler is standing on the floor and cuts the umbilical cord with a knifethrow.

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: The Singularity vs. Stephen Hawking
« Reply #19 on: 14 May 2014, 12:25 »

(mod) What would people think of moving this topic? It's still got a connection to the comic but is of interest to people who don't visit the comic subforum.
Logged
Thank you, Dr. Karikó.

GarandMarine

  • Awakened
  • *****
  • Offline
  • Posts: 10,307
  • Kawaii in the streets, Senpai in the sheets
Re: The Singularity vs. Stephen Hawking
« Reply #20 on: 15 May 2014, 11:27 »

Might be a good idea, there's lots of smart stuff in here.
Logged
I built the walls that make my life a prison, I built them all and cannot be forgiven... ...Sold my soul to carry your vendetta, So let me go before you can regret it, You've made your choice and now it's come to this, But that's the price you pay when you're a monster with no name.