hi all. AI technology has really been exploding in the last few years. the last big post/ compilation on the subject here was ~½ year ago and the links have piled up in a blur since then. the main trigger for this post: the game of poker now seems to have “folded” to computer supremacy. a new paper was published on DeepStack and its highly competitive play, and Libratus is $800K up in a recent match against top expert players. my understanding is that there is still some weakness in multiplayer games and that the new breakthrough is for 1-on-1 (heads-up) human-vs-computer games, but presumably that razor-thin human edge might also melt away quickly.[a]
poker was a very good game for humans wrt our inherent/ evolved psychology. we (top humans, that is) seem to have an intuitive grasp of how to bet based on the strength of cards, including the use of bluffing. it took computers until the 21st century to master this stuff, but it looks like they just passed the threshold again. in a small surprise, it wasn't done by DeepMind, the lab behind many other near-monthly, even verging on weekly, breakthroughs.[c]
maybe not by total coincidence, the DeepStack algorithm involves training a neural network to accurately estimate values deep in the search tree, quite similar to the DeepMind Go strategy that made huge headlines just a year ago (Libratus reportedly relies on game-theoretic subgame solving instead). the media hasn't picked up on the poker competition as much as it did with Go… is it because cautious/ publicity-shy academics have less PR instinct than google? or less budget? but maybe that relatively low profile will change in the weeks/ months ahead. hopefully there will be a very high-profile contest that again captures widespread public interest/ imagination.
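the general idea of “a neural network estimating the search tree” can be sketched in a few lines of python. this is a toy illustration only, not the real DeepStack/ AlphaGo code: a depth-limited minimax where a stand-in “value network” (here just a lookup table) scores frontier states instead of searching all the way to the end of the game. all names/ values are hypothetical.

```python
# toy sketch of value-network-guided search: instead of expanding the
# game tree to terminal states, truncate at a depth limit and ask an
# estimator for the value of the frontier position.

def value_estimate(state):
    # stand-in for a trained neural network: a fixed table mapping
    # states to estimated values in [-1, 1]
    return ESTIMATES.get(state, 0.0)

def children(state):
    return MOVES.get(state, [])

def search(state, depth, maximizing):
    kids = children(state)
    if depth == 0 or not kids:
        return value_estimate(state)  # truncate: consult the "value net"
    vals = [search(k, depth - 1, not maximizing) for k in kids]
    return max(vals) if maximizing else min(vals)

# a tiny hand-built game tree for illustration
MOVES = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
ESTIMATES = {"a1": 0.2, "a2": -0.5, "b1": 0.9, "b2": -0.1}

# maximizer at root, minimizer one ply down:
# a -> min(0.2, -0.5) = -0.5, b -> min(0.9, -0.1) = -0.1,
# root -> max(-0.5, -0.1) = -0.1
print(search("root", 2, True))
```

the point of the truncation is that the full tree (astronomically large in go or poker) never has to be expanded; the quality of play then hinges almost entirely on how good the learned value estimator is.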
it seems the top poker competitions are typically held in Las Vegas afaik… what would it take to get the computers into those? wouldn't it be cool if say Vegas (or some other high-profile gambling center) decided to publicize such a match to attract attention/ tourism? but would the computer algorithms be competitive in the top multiplayer games? there have been increasing/ huge audiences for poker over the last few years; not sure what all the factors are in this surge (internet gambling might play a role…)
it's neat to see academia still at the top of competitive research in AI. but that lead seems to be thinning somewhat over the last few years as the massive corporations Google[b], Microsoft, Apple,[g] Facebook, Intel[f] and misc other corps[e] snap up AI talent like it's a feverish arms race, and to some degree it is. there's also very fast/ dynamic startup/ merger activity going on, and new research laboratories being founded.[h]
➡ ❗ ⭐ 😎 😀 💡 hi all. sparkfun 2016 was last weekend, sat sep 17th. what a blast! got my big robot fix and geek (over?)dose to last a long time.
as mentioned before on this blog (last summer), sparkfun is an amazing company with a lot of really dedicated/ passionate members. it's grown massively in only about a decade. they have very impressive warehouse/ facilities with nice features such as several classroom areas.
a huge unexpected highlight for me (got there just in time) was the presentation by Casey Kuhns (aerospace engineer!) and Zachary Goff on the POISON ARROW battlebot. their robot is built incredibly well on short timeframes. they have to glue snap connectors together; otherwise the connectors break apart during collisions, which carry as much kinetic energy as car crashes. they showed a highlight of launching another 250 lb robot 8 ft in the air. they also had a flying drone that could shoot flames. it was impressive to watch but seemed to have a lot of trouble homing in on targets.
to a rapt audience of ~50 in a large room, with lots of kids, they detailed the fascinating build and insider/ behind-the-scenes aspects/ figures of its creation. lots of great/ riveting slides/ videos. they revealed the ABC battlebot cage cost $3M. builders got $10K from the show, and they didn't say much about prize money; it didn't seem to be much of a consideration for them.
😮 💡 ❗ 😎 😀 ❤ hi all. big news in number theory over the last few months and years. this is a tribute to a few years of top breakthroughs and exciting developments. the general theme is “primes” but there are a few detours.
in a breakthrough/ rare event, a new statistical property of primes was discovered, related to the frequencies of final digits of consecutive primes in base-n expansions. it was discovered by Oliver/ Soundararajan and has led to a huge amount of media attention and notice by top scientists. some of it was uncovered with dear-to-my-heart computer empirical/ experimental approaches.[e]
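the empirical flavor of the discovery is easy to reproduce at home. a minimal python sketch (the limit and variable names are just illustrative choices, not from the paper): sieve some primes and count how often consecutive primes repeat their final digit in base 10. naively each final digit (1, 3, 7, 9) should repeat about 1/4 of the time, but the observed fraction comes out noticeably lower — the bias Oliver/ Soundararajan studied.

```python
from collections import Counter

def primes_up_to(n):
    # plain sieve of eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, ok in enumerate(sieve) if ok]

primes = primes_up_to(200_000)
# restrict to primes > 5 so final digits are only 1, 3, 7, 9
tail = [p for p in primes if p > 5]
pairs = Counter((a % 10, b % 10) for a, b in zip(tail, tail[1:]))
total = sum(pairs.values())
same = sum(c for (d1, d2), c in pairs.items() if d1 == d2)
print(f"repeated-final-digit fraction: {same / total:.3f} (naive guess: 0.250)")
```

running this shows the repeated-digit fraction falling well short of 0.25, a quick empirical hint of the effect without any heavy theory.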
yet it's somewhat reminiscent of another similar discovery only ~7yrs ago, in 2009, by Luque/ Lacasa.[e6]
big discoveries like this sometimes make one think that maybe we haven't even “scratched the surface” of the theory of primes. [a2], a long-held/ featured link on this site (on the main sidebar), points out connections with quantum mechanics (Montgomery/ Dyson 1972) via du Sautoy, an authority on the Riemann hypothesis.
speaking of Riemann, and apropos/ in reverent observation of today's date: the hypothesis was recently claimed to be proven by a Nigerian mathematician. and it turns out, not surprisingly, that this Nigerian mathematics is not so different from the illustrious Nigerian business ventures advertised on the internet.[g]
hi all. the go match was very eventful and turned out to have massive media coverage worldwide, with huge interest from technology publications, and there's a sizeable AI/ ML/ singularitarian crowd on the internet that follows these types of developments quite avidly.
wish that google would transcribe the press conferences. there were a lot of interesting details, but it takes a long time to watch them, and the midstream translation interruptions (korean to english, english to korean) slow down the presentations too.
saw some interesting press questions in the conference after the 3rd match, iirc, that tie in with some angles explored on this blog.
hi all. feeling blog-overwhelmed by recent Turing Machine-type events but just cannot resist blogging again at this historic moment (my blog frequency is up lately, some of it at the expense of other activities). caught some of the 2nd match live late wed eve. do not know go heads from tails myself but found the commentary by Redmond quite engaging. it was interesting in the postgame analysis that Sedol felt at no time was he winning, but Redmond saw the game as fairly even, even far into the middle. then near the end something shifted, possibly on a single move, and Redmond said that Black (AlphaGo) had a major ~10-stone advantage based on a rough count.
the long ~4hr match made me feel a bit sorry for the commentators attempting to say something meaningful the whole time, especially when the moves came very slowly. there was a ~30min delay in the midgame as Sedol pondered a weird/ unusual move by AlphaGo. at one point Redmond called AlphaGo a “he” and they reacted briefly to that; Redmond said it seemed natural to him. (at my job a guy also sometimes talks about computational processes in terms of “he”…)
this game is quite interesting in that, for apparently nearly even positions (which may frequently be the case in very advanced-level games), one does not know clearly whether one is winning or losing, and single moves can significantly tip the balance. the key moves seem to be about unifying separate regions and strengthening major separate areas so that they reinforce each other. it seems to be about simultaneously playing out multiple strategies in separate regions and then tying them together in the end.
the game seems to me to have a strong fractal quality, apparently not noted by many. explaining exactly what that means is not quite possible at this moment in scientific history; fractals are notoriously difficult to characterize precisely.