happy 2nd anniversary to this blog! :!: :cool: :D returning to a theme of “reverse visualizing the collatz tree”. a bunch of riffs and some other misc ideas tried out recently. nothing spectacular but just posting these partly for archival/ log purposes.
these algorithms use more complex comparison metrics, a few with quadratic rather than linear complexity, to decide which points to advance next. they exhibit transition-point and tipping-point like behaviors, eg two different regimes: one where points are scattered between horizontally increasing lines and one where they line up in vertical “fenceposts”. (they also fix a defect noted in the earlier code where some additional spurious points, not strictly in the collatz problem, were included.)
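the original code isnt included here, so purely as an illustrative sketch (all names, and the choice of metric, are my own assumptions, not the actual code): enumerating the reverse collatz tree from 1 via predecessors, with a pluggable metric deciding which frontier point advances next. the oddness check in the predecessor step shows the kind of filter that excludes spurious points of the sort mentioned above.

```python
# hypothetical sketch of a "reverse collatz tree" enumeration; names and
# the metric are assumptions, not the post's actual code.
import heapq

def predecessors(n):
    # predecessors of n under the collatz map: 2n always, plus (n-1)/3
    # when that is an odd integer > 1 (this check excludes "spurious"
    # points that are not genuine collatz predecessors)
    preds = [2 * n]
    if (n - 1) % 3 == 0:
        m = (n - 1) // 3
        if m > 1 and m % 2 == 1:
            preds.append(m)
    return preds

def reverse_tree(count, metric=lambda n: n):
    # best-first expansion: always advance the frontier point that
    # minimizes the metric (a stand-in for the comparison metrics above)
    frontier = [(metric(1), 1)]
    out = []
    while frontier and len(out) < count:
        _, n = heapq.heappop(frontier)
        out.append(n)
        for p in predecessors(n):
            heapq.heappush(frontier, (metric(p), p))
    return out

print(reverse_tree(10))
```

swapping in a different `metric` (eg favoring points on a given horizontal line) is where the regime/ tipping-point behavior would be probed.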
hi all. some big news on the gaming front. as mentioned in the last post MS bought Minecraft for ~$2½B. amazing![b]
other big news. DeepMind learns to play video games with a deep neural network, trained apparently via reinforcement learning on the game points. a tiny company already acquired by google. stunning! google must have been very impressed and it is not very easy to impress google. what is not being reported much is that the AI must presumably/ basically “experiment” with all kinds of random actions before discovering any that lead to points. now that seems like evidence of highly intelligent behavior.[d][d2] at this point have another huge collection of AI links and wasnt sure how to classify them, but a gaming-topic post 1st seems aligned with the current zeitgeist.
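as a toy illustration of that “random experimentation” point (emphatically not DeepMind's actual deep Q-network, which learns from raw pixels; this is just a bare epsilon-greedy bandit with made-up names): an agent that knows nothing must stumble on a rewarding action by chance before it can reinforce it.

```python
# bare epsilon-greedy bandit: random exploration discovers which action
# yields "game points", exploitation then reinforces it. all names here
# are made up for illustration.
import random

def run_bandit(rewards, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(rewards)
    values = [0.0] * len(rewards)      # estimated value of each action
    for _ in range(steps):
        if rng.random() < eps:         # explore: try a random action
            a = rng.randrange(len(rewards))
        else:                          # exploit: best estimate so far
            a = max(range(len(rewards)), key=lambda i: values[i])
        counts[a] += 1
        # running average of observed "points" for the chosen action
        values[a] += (rewards[a] - values[a]) / counts[a]
    return values

print(run_bandit([0.0, 0.0, 1.0]))
```

the agent starts with zero knowledge; only the occasional random action reveals that the 3rd action pays off, after which it dominates.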
oculus rift 3d glasses tech is very big in the news. revolutions like this dont come along very often, but this does sound like a “gamechanger” product to me. esp because it has such huge implications outside of gaming, eg remote conferencing, remote learning, etc (enormous markets).[c] (recently/ elsewhere/ in contrast apple announced Apple Pay but just cant really see how it could turn into a hit at this point. does anyone see a killer app for this tech right now? internet commerce via phones? dunno!)
hi all. filed under “the joy and pain (& yin & yang) of research”. :( :) microsoft unceremoniously announced the abrupt/ brusque closure of its silicon valley CS research laboratory, which ran for about 1½ decades. actually there does not seem to be any official announcement anywhere. one cannot even find a list of the researchers and their papers any more; the web page that previously documented it was apparently vaporized too. “the memory hole”! a bit orwellian even! easy come, easy go! (and this is also quite a jarring contrast/ juxtaposition wrt just last blogging about Google buying up an entire QM computing lab!) updated: MS page back up [a14]
this is a harsh moment but certainly not an unprecedented one. commercial/ industrial research labs have become very rare in the last few decades, an endangered species. this closure triggered sizeable ripples, commentary, and reflections across the (T)CS blogosphere, including from many insiders, past visitors, and admirers posting comments, tributes, & memories.[a] among them [a1] is esp notable/ a standout, as Omer Reingold announces the closure in a very classy way, with lots of responses & reminiscing, thoughtful, even touching comments; a sort of mini blog eulogy. a test of that old aphorism-verging-on-canard, which might feel like little solace to the victims: “when one door closes, another opens”, also heard in some recent pop music (ah, that new near-saccharine-upbeat Katy Perry song that mixes more metaphors than even me?).
Sometimes good things fall apart so better things can fall together. —Marilyn Monroe
hi all, its not too often that one scoops both Time magazine & Wired, but that was indeed the case with my last blog on DWave at the beginning of the year.[a] it was so cool to see Time devote its cover story to DWave and quantum computing. however, could not manage to find a copy of the magazine to buy! Time is far less iconic in this new 21st-century era awash in cyberspace.
the massive/ exciting/ milestone news, less than a week old, justifying/ inspiring this new blog post is that Google is hedging its bets and hiring Martinis and his entire team at UCSB to work on an apparently hybrid mix of adiabatic and gate computing techniques, pioneered at DWave and in his own lab respectively. details are very sketchy right now but its a major paradigm shift & phase/ game changer. the commentariat speculates that DWave is probably/ understandably not too happy about this development, & they have no comment so far. in contrast Martinis specializes in very many careful scientific papers on the high-fidelity superconducting gate model, which has achieved about 5 qubits to date and which serious scientists seem to endorse/ prefer.[b]
:idea: :!: :!: :!: this is some code that was just written & run today & there is no choice but to archive (& share) it immediately! the basic idea is a riff on the last theme of looking at base-2 suffixes that lead to large glides. a simple obvious variation is to compare the “subsequent” (adjacent) “parity sequences” of nodes of this graph/ tree. a property of this recursive algorithm/ enumeration is that prefixes must match up to some position; the question is, how far? then one can just look at the “incremental work/ computation” needed to determine that a glide falls below its starting point, and ask how much that work is in general. the jawdropping finding here is that this work can be sorted into a FINITE set of 6 equivalence classes, disregarding the equivalent parity sequence prefixes.
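the actual code isnt shown here, so a hedged minimal sketch of the parity-sequence comparison idea only (helper names, & the choice of comparing “adjacent” odd starting points rather than tree nodes, are my own assumptions): compute the parity sequence over a glide, ie the iterates mod 2 until the trajectory first falls below its starting value, then measure the shared prefix between neighboring sequences; whatever lies beyond the prefix is the “incremental work”.

```python
# hypothetical sketch of comparing adjacent parity sequences over glides;
# names & the pairing of starting points are assumptions, not the actual code.

def glide_parities(n):
    # parities (n mod 2) along the collatz trajectory of n > 1, recorded
    # until the iterate first drops below the start, ie over the glide
    start, out = n, []
    while True:
        out.append(n % 2)
        n = 3 * n + 1 if n % 2 else n // 2
        if n < start:
            return out

def common_prefix(a, b):
    # length of the shared prefix of two parity sequences
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

# compare nearby odd starting points; the part of each sequence beyond
# the shared prefix is the "incremental work" to resolve the glide
for n in (27, 29, 31):
    p, q = glide_parities(n), glide_parities(n + 2)
    k = common_prefix(p, q)
    print(n, len(p), len(q), k, len(p) - k)
```

sorting the leftover (post-prefix) segments into equivalence classes is then the natural next step; whether that always yields the finite classification claimed above is exactly what the enumeration in the post probes.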
:star: :star: :star:
at this point think almost surely this property can be turned into an inductive proof based on the same algorithmic structure. :?: