hi all, ideally technology advances would exist in some kind of vacuum unimpacted by politics. technology does have huge resilience these days to the shifting political winds du jour, but alas it can still be highly impacted by political currents/ decisions. after a major regime/ administration change, these heretofore “hidden” dependencies can come into starker focus, as is the case lately.
am trying to come up with a personal strategy/ pov/ philosophy toward the Age of Trump, still under development/ construction. have been through major election disappointments before. was mostly apolitical before Bush and became much more political after, just trying to understand the world we live in and the direction it was going. the bush 2004 election was a big disappointment to me; it seemed like a victory for warmongering. its strange how delayed the publics reaction to the Iraq invasion was, it seems it took over a decade for the public to sour on it. but its heartening to some degree that those types of shifts can eventually happen and affect mainstream politics.
hi all. AI technology is really exploding in the last few years. the last big post/ compilation on the subj here was ~½ year ago and the links have piled up in a blur since then. the main trigger for this post: the game of poker now seems to have “folded” to computer supremacy. a new paper was published on Deepstack and its highly competitive play, and Libratus is $800K up in a recent match against top expert players. my understanding is that there is still some weakness in multiplayer games and that the new breakthru is for 1-1 (heads-up) games, human vs computer, but presumably that razor-thin human edge might also melt away quickly.[a]
poker was a very good game for humans wrt our inherent/ evolved psychology. we (top humans that is) seem to have an intuitive grasp of how to bet based on the strength of cards, including the use of bluffing. it took computers until the 21st century to master this stuff, but it looks like they just passed the threshold again. in a small surprise, it wasnt done by Deepmind, the lab behind many other near-monthly, even verging-on-weekly breakthroughs.[c]
maybe not by total coincidence, the winning Deepstack algorithm involves training a neural network to accurately estimate values in the search tree, quite similar to the Deepmind Go strategy that made huge headlines just a year ago. the media hasnt picked up on the poker competition as much as it did with Go… is it because cautious/ publicity shy academics have less PR instinct than google? or less budget? but maybe that “relatively low profile” will change in the weeks/ months ahead. hopefully there will be a very high profile contest that again captures widespread public interest/ imagination.
it seems the top poker competitions are typically held in Las Vegas afaik… what would it take to get the computers in on that? wouldnt it be cool if say Vegas (or some other high profile gambling center) decided to publicize it to attract attn/ tourism? but would the computer algorithms be competitive in the top multiplayer games? there have been increasing/ huge audiences for poker over the last few years, not sure what all the factors are in this surge (internet gambling might play a role…)
its neat to see academia still at the top of competitive research in AI. but that seems to be thinning somewhat over the last few years as the massive corporations Google[b], Microsoft, Apple[g], Facebook, Intel[f] and misc other corps[e] are snapping up AI talent like its a feverish arms race, and to some degree it is. theres also very fast/ dynamic startup/ merger activity going on, and new research laboratories being founded.[h]
hi all. robots have been attacking people for over a decade now, they’re called drones, and the public outcry has not been too substantial. however, maybe the slumbering masses are starting to wake up to a related threat, job-killing robots.
this is a very complex topic that has been bubbling at the edges for years but 2016 finally seemed to mark the transition/ jump/ tipping point into mainstream consciousness. and theres some alarm/ panic, with headlines reflecting it.
despite this posts headline, and despite the headlines elsewhere, it is really not a topic to be taken lightly. it has international implications. its affecting global economies. its tightly connected with the last few decades shift toward globalization and neoliberalism, both of which these days (with the Brexit/ trump upsets) seem to be showing signs of aging, maybe getting “long in the tooth”. it seems to be tied up with the future of technology, economics, and governments/ politics (eg whitehouse/ trump)![l] hence its one of those devastating crosscutting trifectas, aka a “perfect storm”. its even starting to show up in intelligence agency strategic predictions/ alarm bells, and they are not pretty.[k1]
this is a hazy idea thats been pulling at me for several years, finally came up with a way to solidify it and then decided to try it out. results are unfortunately lackluster so far but it was a relief to finally see how it works (otherwise would just keep wondering if its the “one that got away”!). and anyway think the overall conceptual idea is at least somewhat sound and still has promise maybe with some other revision/ angle.
the prior runs showed that theres a roughly predictable linear relationship between decidable metrics for each iterate and the glide length (“horizontal ratio”). so, one of my rules of thumb, discovered the hard way over the years: once its proven that at least a simple machine learning algorithm works, one can look at more sophisticated ones that should at least outperform linear regression. (and conversely, if there is not even a weak linear relationship present, it seems unlikely that machine learning will “pull a rabbit out of a hat”. this is surely not an absolute phenomenon, but it seems to hold in practice, and think it would be very interesting to isolate exceptions, suspect they are rare.)
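the rule of thumb above can be sketched in a few lines: fit a plain least-squares line and look at R² before reaching for anything fancier. a minimal sketch with synthetic stand-in data (not the actual collatz metrics; the data and function name here are illustrative only):

```python
import numpy as np

def linear_baseline_r2(x, y):
    """fit y ~ a*x + b by least squares and return R^2.
    if even this simple linear fit shows no relationship,
    fancier ML is unlikely to "pull a rabbit out of a hat"."""
    a, b = np.polyfit(x, y, 1)              # slope, intercept
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# synthetic stand-in for (decidable metric, glide length) pairs
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + rng.normal(0, 1.0, 200)       # roughly linear + noise
print(round(linear_baseline_r2(x, y), 3))   # near 1 for linear data
```

the same check on pure noise gives R² near 0, which is the “dont bother with ML” signal.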
the idea explored here on specific collatz data is a general one for machine learning. suppose that one does not have very many coordinates in ones data. each coordinate is something like a “feature”, and one would like to increase the number of features using only the supplied ones. this is similar to what a neural network does, but typically each neuron has many connections. one wonders, can it be done with few connections? the smallest case is 2 connections. is it possible that only 2-way analysis of input data, applied recursively, could have predictive capability? the answer here, for this data, is mostly no, but maybe theres more to the story with further tweaking. it might also work for some other type of data.
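the 2-connection idea can be sketched as recursively augmenting the feature matrix with all pairwise products, then fitting a linear model on the expanded pool. a minimal sketch on synthetic data (the pairwise-product combiner is one choice of 2-way analysis among many; data and names here are hypothetical, not the collatz experiment itself):

```python
import itertools
import numpy as np

def expand_pairwise(X, rounds=1):
    """recursively augment feature matrix X (n_samples x n_features)
    with all 2-way products of the current features; each new
    "unit" combines exactly two inputs, like a 2-connection neuron.
    note the feature count grows combinatorially with rounds."""
    for _ in range(rounds):
        pairs = [X[:, i] * X[:, j]
                 for i, j in itertools.combinations(range(X.shape[1]), 2)]
        X = np.column_stack([X] + pairs)
    return X

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                    # 3 base "coordinates"
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=100)  # hidden 2-way target

X2 = expand_pairwise(X, rounds=1)                # 3 base + 3 products = 6
design = np.column_stack([X2, np.ones(100)])     # add intercept column
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
print(X2.shape)                                  # (100, 6)
```

here the hidden pairwise interaction becomes linearly recoverable after one round of expansion; whether real data has such structure is exactly the open question.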
hi all, 2016 was another banner year for computer science. its been on a phenomenal roll the last few years and there seems to be no end in sight. dont really know what is causing the wave, its likely a variety of factors. one large factor is the headline-grabbing success of AI in the last few years, and that areas momentum shows no signs of abating. another neat factor is that president Obama has been a major friend of coding/ CS. there will be a big vacuum in authoritative support after he leaves office; its hard to think of a more enthusiastic or high profile proponent/ advocate of coding. wrt this (and ofc other ways) he will surely be sorely missed.[d2][d3][d4]