THESE FORUMS NOW CLOSED (read only)
Fun Stuff => CLIKC => Topic started by: hedgie on 27 Jan 2016, 11:05
-
http://www.bbc.com/news/technology-35420579
A Google artificial intelligence program has beaten the European champion of the board game Go.
The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.
The tech company's DeepMind division said its software had beaten its human rival five games to nil.
One independent expert called it a breakthrough for AI with potentially far-reaching consequences.
-
This is big news. Conventional wisdom was that Go was going to be too difficult for computers.
-
They probably didn't think that memory and storage would get as fast as they have. After all, "complexity" is just another way of saying "decision tree depth".
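For a rough sense of scale (the branching factors and game lengths below are commonly cited ballpark figures, not numbers from the article), compare the naive game-tree sizes of chess and Go:

```python
import math

# Ballpark game-tree size: branching_factor ** typical_game_length.
# These figures are commonly cited approximations, not exact values.
chess_exp = 80 * math.log10(35)    # chess: ~35 legal moves/position, ~80 plies
go_exp = 150 * math.log10(250)     # Go:   ~250 legal moves/position, ~150 plies

print(f"chess game tree: ~10^{chess_exp:.0f} positions")
print(f"go game tree:    ~10^{go_exp:.0f} positions")
```

Even with fast memory and storage, those extra ~236 orders of magnitude are why deeper search alone was never expected to crack Go.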
-
WOULD. YOU. LIKE. TO. PLAY. A. NICE. GAME. OF. CHESS?
-
They probably didn't think that memory and storage would get as fast as they have. After all, "complexity" is just another way of saying "decision tree depth".
From what I have read, Go actually can't be brute-forced, at least not with today's technology. Much of the advance really is from improvements to learning AI, as well as the ability of a computer to train by playing millions of games in a fairly short period of time.
-
I, for one, welcome our new computerised masters and as a human of dubious moral conviction, I could be useful in rounding up others to serve as organic battery packs!
-
They probably didn't think that memory and storage would get as fast as they have. After all, "complexity" is just another way of saying "decision tree depth".
From what I have read, Go actually can't be brute-forced, at least not with today's technology. Much of the advance really is from improvements to learning AI, as well as the ability of a computer to train by playing millions of games in a fairly short period of time.
Well, it can't be brute-forced from turn 0, but I'm pretty sure that at some point in a game you enter the realm of what can be brute-forced. Although yeah, deep learning is an advance.
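As a toy illustration of that "realm of what can be brute-forced" (a deliberately tiny game, nothing to do with real Go positions), exhaustive minimax can solve simple Nim completely once the remaining tree is small enough:

```python
# Toy sketch: exhaustive game-tree search on Nim (take 1-3 stones per
# turn; taking the last stone wins). Once the position is small enough,
# the whole tree can be searched - the brute-forceable "realm".
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player already won
    # A position is winning if some move leaves the opponent losing.
    return any(not winning(stones - take) for take in (1, 2, 3) if take <= stones)

# Known result: multiples of 4 are lost for the player to move.
print([n for n in range(1, 13) if not winning(n)])  # → [4, 8, 12]
```

Real Go endgames are of course vastly bigger than this, but the principle is the same: full search becomes feasible only once the remaining tree is small.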
-
This is an impressive feat, but Google's posting (https://googleblog.blogspot.co.uk/2016/01/alphago-machine-learning-game-go.html) should be put in context. The human player was Fan Hui (http://senseis.xmp.net/?FAnHui) 2p, and Google describes him as "an elite professional player". A 2-dan professional will be a strong player, and certainly much stronger than I am, but at Fan's age that makes him a "journeyman" pro, not "elite". Fan Hui has won tournaments in Europe, but that is strictly bush-league, and he has no track record of wins or even decent placings against the top pros from Korea, China, and Japan.
Google is not resting on its laurels though; Lee Sedol (http://senseis.xmp.net/?LeeSedol) 9p has accepted a challenge to play AlphaGo in March. It's a toss-up whether Lee Sedol or Gu Li (http://senseis.xmp.net/?GuLi) 9p was the strongest player in the world for the first decade of the 21st century, so that is definitely moving up to the first division, and I'm looking forward eagerly to seeing the games.
Despite what Google claims, this is not the first time a professional player has been beaten by a computer program. In 2012 Takemiya Masaki (http://senseis.xmp.net/?TakemiyaMasaki) 9p was twice beaten by Zen (http://senseis.xmp.net/?ZenGoProgram), but he gave the computer five stones for the first game and four stones for the second. It is impressive that AlphaGo was able to win against a pro on equal terms, and I think it's the first time that has been done.
It's been clear for some time that computers beating top humans at Go was a matter of "when" not "if".
-
We can still welcome our Robot overlords and go Quisling on everyone though right?
-
The real question should be, "When will the computer know it's playing Go, instead of just executing an algorithm?"
-
A long, long way away. That ability - to 'jump up a level' of reasoning - is kind of a big deal in AI research.
-
I know. I just watched Ex Machina, and it's a wonderful movie.
-
Update: in game one, the computer beat South Korea's Lee Se-dol. There are four games to go, however, so maybe the human species will make a comeback.
www.bbc.com/news/technology-35761246
-
AlphaGo has defeated Lee Sedol 3-0 (http://www.bbc.com/news/technology-35785875)! That is impressive.
-
The final score was 4-1; Lee Sedol won game four. He managed to get an edge in the centre during the middle game. Shortly after it realized it was losing, AlphaGo started making really questionable moves. There has been fun speculation about why Lee Sedol's trick move worked. Apparently the correct continuation was difficult to spot and AlphaGo's Monte Carlo approach missed it. Unfortunately I'm not informed enough about either Go or AI to understand the details. Anyway.
- Google reps (and enthusiasts at a Go server I frequent) praised that one move by Lee Sedol, but other top-ranked pros are not sure it really would have worked.
- Towards the end AlphaGo made moves that even I can see were no-hopers. Apparently its design does not let it look for the moves that would give the strongest resistance (and the best chance for a human opponent under severe time pressure to make mistakes).
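A toy sketch of the speculation above (entirely my own construction - AlphaGo's actual evaluation is far more sophisticated): if a position is won only through one exact continuation, uniform random playouts will almost never stumble onto it, so a naive rollout-based estimate scores the position as lost even though it is actually won:

```python
# Toy illustration (not AlphaGo's real algorithm): a position that is
# won only via one exact 3-move sequence among 10^3 candidates.
# Uniform random playouts almost never find the line.
import random

BRANCH = 10               # hypothetical candidate moves per turn
DEPTH = 3                 # the winning line is 3 specific moves deep
WINNING_LINE = (0, 0, 0)  # the single correct continuation

def random_rollout(rng):
    """One uniformly random playout; wins only on the exact line."""
    moves = tuple(rng.randrange(BRANCH) for _ in range(DEPTH))
    return moves == WINNING_LINE

rng = random.Random(42)
playouts = 500
wins = sum(random_rollout(rng) for _ in range(playouts))

# Expected win rate is 1/1000, so 500 playouts will usually score this
# position as hopeless, while an exhaustive search of the 1000
# sequences would find the winning line immediately.
print(f"rollout estimate: {wins}/{playouts} wins")
```

That mismatch between what playouts sample and what exact search would find is one plausible reading of why the "trick move" continuation was missed.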
My frogeye-perspective observations:
- To some extent Lee Sedol underestimated his opponent before the first game. Apparently AlphaGo had learned a lot since its match against Fan Hui.
- In games two and three Lee Sedol didn't really have a chance. Maybe the pressure was too much?
- In games four and five, with the match already decided, Lee Sedol played better. The last game was very close even though he lost it.
Congrats to the team who programmed AlphaGo. Wonderfully implemented learning algorithms (and the approach of having two networks - one working on the strategy, the other on the local tactics). If only AlphaGo could also explain why it played the way it did.
-
A blogger with interesting comments (http://askakorean.blogspot.com/2016/03/fate-of-humanity-in-hands-of-korean.html).