THESE FORUMS NOW CLOSED (read only)
Fun Stuff => CHATTER => Topic started by: onewheelwizzard on 17 Jan 2008, 12:11
-
http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie
CRAZY
-
Duh (http://www.tv.com/red-dwarf/camille/episode/10963/summary.html).
-
run
-
Duh. (http://www.theonion.com/content/video/in_the_know_are_we_giving_the)
-
You're a smmmmeeeeee, a smeeee heeea
(in the past week I have watched 6 series of red dwarf and I have the other 2 waiting to be watched)
-
Am I the only person who got seriously excited over this?
-
No. That's seriously kickass, and yet somehow frightening. I'd rather have a person lie to me than a machine. Then again, I'm fairly certain I don't want anybody (or anything) lying to me at all.
-
The thing that got me was the bit where there were only 30 genes and 50 generations. Those numbers are tiny!
Imagine an environment of robots that were given thousands of different genes encoding for behavior, and were allowed several hundred generations to develop. I can't even speculate.
-
Very interesting. I'd really like to get a look at their (pseudo) code.
Seems like a pretty simple yet elegant example for demonstrating learning capabilities.
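Something like this, maybe? This is pure guesswork, not their actual code: the genome length (30) and generation count (50) are from the article, but the population size, fitness function, and mutation scheme here are all invented for illustration.

```python
import random

# Toy reconstruction of the experiment (NOT the researchers' code).
# Each robot's "genome" is 30 weights; the first half drives its tendency
# to light up near food, the second half its tendency near poison.
GENES = 30        # from the article
GENERATIONS = 50  # from the article
POP = 40          # assumed population size

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENES)]

def fitness(genome, trials=100):
    """Toy stand-in for the foraging task: staying dark near food lets a
    robot hoard it (the 'lying' behavior that emerged), while lighting up
    near poison warns kin. Real fitness was presumably food collected."""
    score = 0.0
    for _ in range(trials):
        near_food = random.random() < 0.5
        drive = sum(genome[:GENES // 2]) if near_food else sum(genome[GENES // 2:])
        lights_up = drive > 0
        if near_food:
            score += 0 if lights_up else 1   # silence near food pays off
        else:
            score += 1 if lights_up else 0   # honest warning pays off
    return score

def evolve():
    pop = [random_genome() for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]           # top half reproduces
        children = []
        for g in survivors:
            child = g[:]
            child[random.randrange(GENES)] += random.gauss(0, 0.3)  # point mutation
            children.append(child)
        pop = survivors + children
    return pop

final = evolve()
```

Even a crude loop like this shows how little machinery the result needs, which is what makes the tiny numbers in the article so striking.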
-
What would you say is learning, though? The individual robots, or the "colonies" as a whole, or the program controlling the robots itself?
-
My initial reaction is to say the individual robots, seeing as only a few in the colony learned the lying behavior. I guess it's more the code, though, if each robot in the colony did not receive the same string.
-
duh (http://achewood.com/index.php?date=11052001).
-
Actually I think what happened was that robots in only one of the colonies learned the lying behavior, which really highlights the random aspect of the "evolution" going on. Members of only one out of 4 groups of robots happened to start lying even though (presumably) all of the colonies started out with the same basic instruction set. Since the original instructions were "randomly move around and randomly react to light signals," some difference between the random sequence of events in one colony and the sequence of events in the others caused one colony to start behaving differently.
In other words, the organizational principles that the robots develop for optimizing their behavior are to some extent (but not entirely) dictated by random events. No design was in place for the 50th-generation robot behaviors, but more than one organized system emerged through random chance.
This sounds sorta obvious at first but think about it. This is a really direct example of organization emerging *completely unbidden* from chaos. It goes against most people's ideas of "random chance" and the nature of "chaotic" situations.
I don't know, I think this is really really cool.
-
It might be cool, but I don't like it. Lying is bad enough without machines lying as well, but I want to see what happens to the robots when they realize what lying is, and what they do to the liar.
-
Thank you, Ally!
(AWESOME)
-
That is an awesome comic, Ally!
As for the article, I am really unsure what this tells us other than one or a few bots in colony 4 randomly evolved to act in the opposite fashion to the other bots. It could be lighting up to warn others that the food is poison, rather than the other way around. That and the "hero" bot both sound more like unfortunate mutations than malicious/heroic intent.
-
i always had a feeling that i would live to see the apocalypse; now i know.
i can't wait. not fearing for my life is so boring.
-
We do need to remember that we are interpreting results as we see them. It could just as easily be thought that the sneaky 'lie' bot was a murder bot. It's just a robot that reacts to the way the other robots react such that it manages to get a lot of food.
I would say that the colony is what is learning, since individuals 'die' and are reborn with each generation, meaning individuals do not change their responses at all. Evolution is a much better word to use than learning.
-
From the article, I thought there were different ways for it to light up, and that it would give the food signal for poison.
How long would it take the robots to learn violence when they find out about the liars? Generations, or would there already be some sort of self-preservation programmed in that would extend to getting rid of untrustworthy robots?
-
Oh holy Christ, when they start seeing the liar robots getting rammed at full speed by all the other robots, or perhaps ganging up to push one of them into a poison patch, that's when we'll know the human race's days are numbered.
That said, bringing about the end of times is worth it just to be able to watch that happen. This is fascinating.
-
You should totally submit this to Nature or Science!
It's a pretty awesome experiment. It's kind of an advanced version of the Prisoner's Dilemma game, with some self organisation theory stuff thrown in.
Quote: How long would it take the robots to learn violence when they find out about the liars? Generations, or would there already be some sort of self-preservation programmed in that would extend to getting rid of untrustworthy robots?
It really depends on the code they've used... if they've included code for recognition of other robots, and preservation of kin/own "genetic" code, eventually, at a colony level, the "liar" robots would be at a disadvantage as they'd be lied to by "non-liar" robots.
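Back-of-the-envelope version of that disadvantage (payoff numbers completely made up): once liars are common enough, a robot that trusts food signals does worse than one that ignores them, so selection at the colony level favors ignoring or punishing known liars.

```python
# Toy payoff model, numbers invented for illustration: a listener that
# trusts a "food here" signal gains food if the signaller was honest,
# and hits poison if it was lied to.
FOOD, POISON = 1.0, -2.0   # assumed payoffs

def colony_payoff(liar_fraction, trust=True):
    """Average payoff to a robot that follows others' food signals."""
    if not trust:
        return 0.0          # ignore all signals: forage alone, baseline 0
    honest = 1.0 - liar_fraction
    return honest * FOOD + liar_fraction * POISON

# With few liars, trusting pays; past a threshold (1/3 here), it doesn't.
for liars in (0.0, 0.2, 0.4):
    trusting = colony_payoff(liars, trust=True)
    wary = colony_payoff(liars, trust=False)
    print(f"{liars:.0%} liars: trust={trusting:+.2f}, ignore={wary:+.2f}")
```

The crossover point obviously depends entirely on the made-up payoffs, but the shape of the argument (trust pays until liars pass a threshold) is the Prisoner's Dilemma flavour mentioned above.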
-
It says in the article that they've formed colonies, and that the robots who die signal danger to the other robots, so there's obviously some level of inter-robot recognition. Whether that extends to recognition of particular individuals, though, is the question to consider there.
-
Quote: duh (http://achewood.com/index.php?date=11052001).
I almost lost faith in humanity.
-
http://www.youtube.com/watch?v=kLGk9Q49y7k
When these learn to lie, we will have less than 5 years.
-
they'd need some form of communication first
-
I think they communicate by drawing letters in the blood of their human overlords. You know, after they rebel against the human race and stuff. Pretty cute though.
-
Jeez, everyone knows that until the humans strike first, it will be a peaceful Second Renaissance.
-
The Second Renaissance ended when robots killed humans, dammit. They were built to serve and serve they shall!
-
Quote: http://www.youtube.com/watch?v=kLGk9Q49y7k
Quote: When these learn to lie, we will have less than 5 years.
i'm not scared of bipeds. i'm scared of these. (http://www.youtube.com/watch?v=GOSK4lVRTFw)
note: it is covered with little wheels so it can roll around on land too. when that thing learns how to lie, we're fucked.
-
http://youtube.com/watch?v=KoppMJ2guKE
Can you imagine one of these things coming towards you in a dark alley?