hi all. feeling blog-overwhelmed by recent Turing Machine-type events but just cannot resist blogging again at this historic moment (my blog frequency is up lately, some of it at the expense of other activities). caught some of the 2nd match live late wed eve. do not know go heads from tails myself but found the commentary by Redmond quite engaging. it was interesting in the postgame analysis that Sedol felt at no time was he winning, whereas Redmond saw the game as fairly even, even far into the middlegame. but near the end something shifted, possibly on a single move, and Redmond said that Black (AlphaGo) had a major ~10 stone advantage based on a rough count.
the long ~4hr match made me feel a bit sorry for the commentators attempting to say something meaningful the whole time, esp when the moves were very slow. there was a ~30m delay in the midgame as Sedol pondered a weird/ unusual move by AlphaGo. at one point Redmond called AlphaGo a “he” and they reacted briefly on that; Redmond said it seemed natural to him. (at my job a guy also sometimes talks about computational processes in terms of “he”…)
this game is quite interesting in that, for apparently nearly even positions (which are possibly frequently the case in very advanced level games) one does not know clearly if one is winning or losing and single moves can significantly tip the balance. the single moves seem to be about unifying separate regions and strengthening major separate areas such that they reinforce each other. it seems to be about simultaneously playing out multiple strategies in separate regions and then tying them together in the end.
the game seems to me to have a strong fractal quality, apparently not noted by many. explaining exactly what this means is not quite possible at this moment in scientific history. fractals are very difficult to describe.
in the press conference, which is very large and looks like a major political meeting with so many reporters and cameras, there was some question about the “confidence” of the machine. the word “confidence” comes from probability and mathematics and Hassabis was using it in this sense, and then there is another sense of the word that of course relates to human emotion. its interesting to ponder certain words in the man-machine contest. some words have the perfect ambiguity and double meanings due to long historical use. other words show us humans struggling to describe our new robot overlords using anthropocentric language.
clearly even the commentators are a bit perplexed on how the google go programming works. it does not really have an internal “database”; it studied very large databases, but its rules are emergent, based on analyzing all those games. heres another example of what seems to me a human mischaracterization, from a wired article by Metz: [a22]
The thing to realize is that, after playing AlphaGo for the first time on Wednesday, Lee Sedol could adjust his style of play—just as Kasparov did back in 1996. But AlphaGo could not. Because this Google creation relies so heavily on machine learning techniques, the DeepMind team needs a good four to six weeks to train a new incarnation of the system. And that means they can’t really change things during this eight-day match.
yes, the concept of “style of play” is very important. for example it could be aggressive or more tentative. and humans do experiment with “changing their style,” sometimes with some success, eg Kasparov v Deep Blue. but this can backfire because the human is then also playing in a way unfamiliar to them, moving them away from their own strengths.
was reading commentary on Deep Blue by King where he talked about a 1-square pawn move opening and said “kasparov would not dare use such a move against any other grandmaster”. but in a way this seems risky, because maybe/ arguably the human should treat the computer something like another grandmaster, and the temptation to “experiment” under the pressure of a match can be a fatal trap for the human. sometimes it can pay off, other times it can backfire.
more on the comparison of Deep Blue vs Kasparov: King says he talked to developers of Deep Blue who even in 1997 admitted (counterintuitively) the system was unpredictable/ inconsistent. it could play grandmaster level chess and possibly win a game against their in-house expert but then in another game, make weird moves, losing the plot so to speak, and lose the game.
therefore, going out on a limb with a near apples-and-oranges comparison: at this moment, based on just 2 games so far, with no “weird/ bad moves” detected by experts (quite to the contrary, inspired, even “beautiful” ones noted), it looks like AlphaGo is playing even more reliably/ consistently against a grandmaster than Deep Blue was at its height. in fact in a documentary, the Deep Blue team admitted that the machine actually made a nearly random move in an early game due to a bug, partly because it ran out of time, and this may have been adjusted by the team during the match. or as the old software engr expr goes, something like “changing the plane engine while its in flight…”
Kasparov complained pointedly on repeated occasions on not having historical games by the machine to study and this is clearly a highly asymmetric situation (the machine has all of the humans games to “study”). and maybe that access could indeed level the playing field to some degree. tournaments/ matches have significantly varying rules and if man-machine matches continue (as appears to be the case), the “rules of engagement” are likely to be continued to be modified substantially.
Kasparov did intentionally change his style of play in 1997 and it proved alternately fruitful and fatal in a few cases. it seems somewhat strange to me to say “alphago cannot change its style of play”. in some sense this is true, but it also misses the deep essence of gameplay. in a sense, with every move it is adjusting its style of play to the human player (just like a human player). it is locked in a see-think-“act”-respond loop.
it is a collection/ amalgamation based on all possible (human) styles ever encountered, and myriads/ uncountable “what-if” machine-generated challenges to those styles. it evokes a complexity we have not seen before, a deep complexity like one of the universe, where our familiar words/ concepts like “style” start to break down, a complexity that may even be slightly beyond/ superior to the complexity that can be grasped by the human mind.
the phrase “this google creation relies so heavily on machine learning techniques…” seems off/ suspect also. those techniques are now proven capable of beating top humans. they are now demonstrably on par with or superior to human learning (qualifier: in mostly still limited/ specialized contexts).
“the DeepMind team needs a good four to six weeks to train a new incarnation of the system.” but another question is, would a few games added to a database of tens of thousands (or more!) alter the derived/ calculated rules/ analysis much? probably not. the DeepMind team does tuning but its possible adding merely 5 new Sedol games to its current “training regimen” might not alter its rules much.
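to make that hunch concrete, here is a toy back-of-envelope sketch (all numbers hypothetical, and of course real training is not a simple average; this is just the flavor of the argument) of how little an aggregate statistic over tens of thousands of games can move when a handful of new games are appended:

```python
# toy back-of-envelope: how much can 5 new games shift a statistic
# estimated from a corpus of 30,000 games? all numbers hypothetical.

corpus_size = 30_000   # hypothetical size of the original training corpus
old_estimate = 0.48    # some learned aggregate statistic, eg a move frequency

# worst case: each of the 5 new games pushes the statistic toward 1.0
new_games = 5
new_estimate = (old_estimate * corpus_size + 1.0 * new_games) \
               / (corpus_size + new_games)

shift = new_estimate - old_estimate
print(f"max shift from 5 new games: {shift:.6f}")  # under 1e-4
```

ie even in the worst case the handful of new games is drowned out by the sheer weight of the existing corpus.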
in some ways the word “train” stands out there. the word is again, by historical development/ convention, literally applicable to both humans and machines, and the machine learning field has long borrowed its usage, but in a sense, computers are doing something not-exactly-the-same, or maybe much different, than humans when they “train”. how so? for example, in strong contrast to a human player (and both Kasparov/ Sedol and possibly all grandmasters prepare in this way), the computer is not at all focused or tuned particularly on the style of a single opponent but in fact built to generalize and withstand all competitors entirely. in other words my understanding is that Google may not have particularly tuned the machine against Sedols prior games much at all! (a good question for the next press conference eh?)
and if it won, arguably it does not really need alteration. sometimes programs in such highly tuned states tend to not be amenable to further tinkering as any further tinkering is away from their already near-perfect optimum (Sedol himself said in post game wrapup that the machine seemed to play “nearly perfect”).
⭐ ⭐ ⭐
a/ the bigger question is if the machine made “lame” moves that can be “fixed”. but even that is quite difficult to measure. arguably its all subjective. it is difficult to judge moves in isolation and say they are good or bad. a move that looks lame to a human might actually have major advantage. that is exactly how games are won, by plays that have different meanings to different players. its not black and white, its far from it; its “a zillion shades of gray”. the human player may look at a move that looks “not meaningful” and later it may prove meaningful to the losing human.
a personal demonstrative anecdote: a few days ago, just lost a chess game to a 12-year-old who found a way to checkmate me, his half-casual-half-serious chess coach, immediately after sacrificing a 5-point(!) rook to seemingly no gain, which looked like a dumb move to me (not seeing the checkmate)! it was a trap! (in my all-too-human defense, he also tricked me with some diversionary human expressive antics, which i mistook for him missing the point/ attack! but this so-called “defense” could also be regarded mercilessly as an excuse… which reminds me of our name for our increasingly strictly adversarial games, merciless chess.)
humans have some bias. there is some cultural teaching around the game, a body of knowledge, some “conventional wisdom,” and those are things that get overturned by advanced players. in short, some of the human knowledge could be verging on dogma, just “best practices” that humans have relied on to attack the very complex game.
re my personal anecdote involving emotions, it reminds me of another point. Sedol has talked about how he reacts to the human emotions of the opponent during the game and how this subtly helps guide a player in tuning his approach. even top players never merely focus on the board; the human element is always present, and top players have a way of reading and conveying significant emotion at the board between opponents. its a key part of human games (also in chess).
another wild/ near-wacky idea that came to me: maybe the press could interview Kasparov again for his thoughts on this new Go match; wouldnt that be a fun angle? Kasparov loves his celebrity and almost never shies from the spotlight, except after a losing match, which is rare!
Sedol was very gracious to the DeepMind team. he said he was impressed with the play and congratulated the team on their accomplishment. thanks Lee! its so great to see asian modesty/ sportsmanship/ humility in contrast to the western in-your-face attitude of Kasparov (but Kasparovs near “trash talk” was highly entertaining also at the time, almost like an intellectual wrestling match…)
the match is literally front page news in Korea and the Google team expressed some surprise at this. Alphabet chairman Schmidt flew in on a surprise visit for the occasion! and said at the press conference, “no matter what happens its a victory for humanity.” Sedol was downcast in the press meeting but did manage to raise a big gracious/ “winning” smile at the photo opportunity. at least for now, this moment, we do not have to feel threatened by machine intelligence and “gaze in awe and wide wonder at the joy we have found.” (apologies, a silly line from an old rock song called BAD TO THE BONE by George Thorogood… maybe a new theme song of the moment for machine intelligence?)
but speaking of “feeling threatened,” its an old concept, and this match reminds me of 3 other angles on it. the rivalry of humans vs machines, or machine adversaries, goes back hundreds of years in english literature and drama, possibly highly exacerbated by the intense shearing of the industrial revolution, whose impacts continue even today, except on a more abstract level with information technology, no less transformative and occasionally wrenching (in some ways even more revolutionary and at times shocking). it reminds me of the famous stories of Frankenstein, John Henry, and the Luddites.[a25][a23][a24] and of course the late 20th century moving into the 21st has a huge list of movies on the topic. its a deep theme/ groove in human mass psychology and sociology.
footnote: according to a new article and google, 60 million chinese alone watched the 1st game! [a32] amazing! one wonders if the global audience may have been even higher (eg how many koreans, Sedols home crowd?)
⭐ ⭐ ⭐
a few other thoughts. there may be some fascinating deeper historical connections between Go and computer science than those being heralded wrt machine learning etc. some commentary by Redmond in the 1st game (iirc) really caught my eye. he showed a repeating pattern on the board. looking it up on the net, it was apparently whats called a “ladder” in elementary go theory. (an old expression goes that if one hasnt seen ladders, one should not play go.) this totally reminded me of the dynamic “glider” pattern in Conways Life game.
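the resemblance is easy to demo on the Life side of the analogy. here is a minimal self-contained sketch of Conways Life (standard B3/S23 rules) showing the classic glider translating one cell diagonally every 4 generations; the go ladder is of course a completely different mechanism, this is just a visual/ dynamic analogy:

```python
from collections import Counter

def life_step(cells):
    """one generation of Conways Life; cells is a set of live (x, y) coords."""
    # count live neighbors of every cell adjacent to at least one live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic 5-cell glider

g = glider
for _ in range(4):
    g = life_step(g)

# after 4 generations the same shape reappears, shifted one cell diagonally
assert g == {(x + 1, y + 1) for (x, y) in glider}
print("glider translated by (1, 1) after 4 generations")
```

the glider, like the ladder, is a small local pattern that propagates itself across the whole board, which is the “global vs local” crossover theme mentioned below.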
have been thinking about similarities between Go and Cellular Automata and looked for refs.[b] didnt find a lot, but apparently Conway himself played Life on Go boards during its early inception. von Neumann is credited with the early inception of cellular automata; is there any known connection there with Go, eg in his own writings? my other question is whether Conway has ever credited Go as part of his inspiration (was/ is he a player?); it certainly seems plausible to me. Go shows up in other mathematical contexts too, eg Nash/ A Beautiful Mind has a scene with it. there are some rules in Go, eg “Ko,” that work to prevent repeating patterns, so in some ways the game is of a much different nature than the game of life.
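the ko idea can be sketched as a repetition check. here is a toy version of the (positional superko) rule, with a naive hypothetical board representation, not any real engine's implementation:

```python
# toy sketch of a positional-superko check: a move is illegal if it
# recreates any whole-board position seen earlier in the game.
# (simple ko only forbids recreating the immediately previous position.)
# boards are represented naively here as tuples of row strings (hypothetical).

def violates_superko(candidate_board, history):
    """reject a move whose resulting whole-board position already occurred."""
    return candidate_board in history

history = set()
board = (".B.",
         "BWB",
         ".B.")
history.add(board)

# a capture/ recapture sequence restoring the earlier position is rejected
assert violates_superko(board, history)
# a genuinely new position is fine
assert not violates_superko((".B.", "B.B", ".B."), history)
print("ko/ superko: no repeating whole-board positions allowed")
```

this is exactly the kind of rule Life lacks, which is why Life happily supports eternally cycling patterns (blinkers, gliders) while a Go game is forced to make progress.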
cellular automata theory would seem to be able to explain some of the characteristics of Go and vice versa; ie there are some significant cross-themes. eg the dominating concept of global vs local patterns, a sort of butterfly effect where small changes have large global effects, some fractal aspects, the predominant theme of “emergence,” typical patterns, etc.
- 1. Google’s AI Is About to Battle a Go Champion—But This Is No Game | WIRED
- 2. Go players react to computer defeat : Nature News & Comment
- 3. British Go Journal / Special computer Go insert covering AlphaGo v Fan Hui match
- 4. Mastering the Game of Go by Searching with Deep Policy and Value Networks | British Go Association
- 5. xkcd: Game AIs
- 6. Computers will overtake us when they learn to love, says futurist Ray Kurzweil – Mar. 8, 2016
- 7. Google’s AI Wins First Game in Historic Match With Go Champion | WIRED
- 8. Go master: AI will one day prevail but beauty of Go remains | Miami Herald
- 9. Google’s software beats human Go champion in first match | Miami Herald
- 10. AlphaGo’s chance to beat Lee slim, says former Google VP | Shanghai Daily
- 11. S. Korean Go Player Lowers Expectations Before Facing Google AI | Be Korea-savvy
- 12. Update: Why this week’s man-versus-machine Go match doesn’t matter (and what does) | Science | AAAS
- 13. AI Competition : Competition among AI Developers Becoming Intense | BusinessKorea
- 14. Alphabet’s Eric Schmidt to visit Seoul for Go match
- 15. AlphaGo defeats Go champion in first match
- 16. Demis Hassabis – Wikipedia, the free encyclopedia
- 17. Google’s Deepmind AI beats Go world champion in first match
- 18. AlphaGo v Best Human – It’s 1-0
- 19. Google DeepMind’s AlphaGo takes on Go champion Lee Sedol in AI milestone in Seoul
- 20. Go Grandmaster Says He’s ‘in Shock’ But Can Still Beat Google’s AI | WIRED
- 21. In a Huge Breakthrough, Google’s AI Beats a Top Player at the Game of Go | WIRED
- 22. Google’s AI Wins Pivotal Second Game In Match With Go Grandmaster | WIRED
- 23. John Henry (folklore) – Wikipedia, the free encyclopedia
- 24. Luddite – Wikipedia, the free encyclopedia
- 25. Frankenstein – Wikipedia, the free encyclopedia
- 26. Google’s DeepMind beats Lee Se-dol again to go 2-0 up in historic Go series | The Verge
- 27. DeepMind Go challenge News | The Verge
- 28. Why is Google’s Go win such a big deal? | The Verge
- 29. DeepMind founder Demis Hassabis on how AI will shape the future | The Verge
- 30. Google AI wins second Go game against top player – BBC News
- 31. Human Go champion ‘speechless’ after 2nd loss to machine
- 32. The Sadness and Beauty of Watching Google’s AI Play Go | WIRED
- 1. Cycle at Sensei’s Library
- 2. Ko at Sensei’s Library
- 3. Ladder (Go) – Wikipedia, the free encyclopedia
- 4. Conway’s Game of Life – Wikipedia, the free encyclopedia
- 5. Wei Qi, Cellular Automata, Ising Model, Feynman Checkerboard
- 6. Go Spotting: Conway’s Game of Life