hi all, the chip and hardware news has been very active lately. years ago this blog tracked AI/ big data as separate categories and those two areas are merging; AI + hardware design is merging these days also. this is typified/ exemplified by googles new announcement of TPU (Tensor Processing Unit) chips that they have designed themselves.[a] google intends to scale up this architecture for open use. its an impressive initiative in its early stages and highly crosspollinates with its other big moves into AI such as Deepmind.
elsewhere there is a lot of news about “brainlike chips” being developed. these may be massively parallel or based on “neuromorphic” computing.[b]
as was noted/ heralded on this blog quite a while back, moores law is now widely questioned and suspected to be plateauing.[c]
💡 ❗ ⭐ this code has a lot of moving parts and took quite a while, but is also a culmination of many previous ideas. it has the same distribution generation mechanism as the last run (slightly modified/ adjusted), but its a 1st cut at evaluating the significance/ accuracy of the MDE/RR model as a predictor of trajectory length. the better the accuracy of its modelling, the more plausible an (“analytic”!) model it is, and the more viable for exploiting in further derivations.
hi all, theres been a recent shock of awareness of the Royen proof of the gaussian correlation inequality, pop-sci publicized by Wolchover for Quanta, a big milestone… this is a nearly ½-century-open problem![a] Quanta, funded by the Simons Foundation, is one of the top outlets for scientific/ mathematical writing around today. a real community resource/ treasure!
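for the record, the inequality itself is simple to state; heres the standard formulation (a sketch from general knowledge of the problem, not taken from the Wolchover article):

```latex
% gaussian correlation inequality (proved by Royen, 2014):
% for any centered gaussian measure \mu on \mathbb{R}^n, and any convex
% sets K, L \subseteq \mathbb{R}^n that are symmetric about the origin,
\mu(K \cap L) \;\ge\; \mu(K)\,\mu(L)
```

ie the two symmetric convex events are always nonnegatively correlated under a gaussian — intuitively obvious-sounding, notoriously hard to prove.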
the Royen proof is not exactly my area so cant write a lot on it, but do note that its a key case study in the dynamics of scientific peer review, and seems to have some parallels to the ongoing mochizuki proof analysis.[b] it took over ~1½ years for the “community” to begin to grasp the correctness of this proof, and Wolchover has a nice historical timeline of how others began to notice/ accept it, a mapping of the spread of awareness. it did not help that Royen was somewhat isolated and did not seem to personally contact any colleagues for peer review. he published openly but it got lost in the noise. it shows how community acceptance is sometimes far from a black/ white binary decision, esp for “big problems”.
is there any way to improve peer review? its definitely a bit of an achilles heel of the scientific process. my feeling is that there is no way to improve it very much, except maybe to try to increase transparency somehow. its very similar to the problem of “fake news”: how do you measure quality in content? we live in the vast Information Age, but as has long been noted, theres a big difference between Information and Wisdom, and in a way peer review is the major mechanism designed to separate/ discriminate between the two.
hi all. science is in the news. it looks like the US public has realized fairly quickly, through the administrations rhetoric and many early anti-science decisions, that this is a nearly anti-science admin.[a] the public “protests/ marches” for science are unprecedented (triggering this post/ “outburst”).[b] but one might argue they are not entirely protests but in fact advocacy. there is strong overlap with climate concern.[i]
my favorite area/ subspeciality is Computer Science, a very neat blend of STEM.[d] years (decades!) ago an interviewer asked me “whats the difference between science and technology?” that was before the STEM term was invented. found it difficult to answer the question.[j] but my focus is not so narrow, and recognize that CS is part of Science and there are all kinds of ripples/ shifts/ waves going on in the latter. and ofc have a lot of physics ideas/ engagement/ writing on this blog.
science & technology are fusing in our lives like never before. the boundaries blur and some new capabilities may seem nearly god-like compared to the prior human condition. but there is also always the icarus aspect of flying too close to the sun on wings of wax and feathers. or pandoras box. the greeks seemed to dream uncannily far into the future in their legends/ mythology.
science is an ideology, but one that is threatened in various ways. like other big ideas such as Democracy, it requires active engagement by the public and is no longer something to be assumed or taken for granted.
science has given us miraculous stuff in the US and the US has been a world leader, but it seems some of that edge is eroding. its not something that comes automatically; it requires something like a vibrant/ thriving infrastructure, even an ecosystem, and that cultural/ intellectual ecosystem is threatened, quite analogously to the earths. academia is a big part of it, and is facing some difficulties.[f]
hi all, have gotten a sizeable spike in hits over the last week on collatz related blog posts! it seems to be traceable to an old reddit page on collatz from jun 2015 by
level1807 talking about using a new feature in Mathematica to graph the collatz conjecture. profiled that finding/ graph myself here on this blog around that time. not sure how people are finding that page again, but googling around, it looks like this graph is now immortalized in a mathematical coloring book, which was announced in a recent March 28th numberphile video just a few thousand short of ~200K hits at the time of writing (maybe up a few tens of thousands within a few days!), and profiled the same day by popular mechanics blogger weiner under the title of the Sea Monster. so, essentially viral — at least with the bar set a little lower, for mathematics! and as for the “elephant in the room,” much to my amusement/ chagrin the video never once uses the word fractal (bipolar moods again attesting to a longterm love-hate relationship, not to mention the other (mal?)lingering facet of mania-depression!).
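for anyone wanting to play along at home: the underlying object is just the directed graph connecting each integer to its collatz successor. a minimal python sketch of generating those edges (the pretty fractal-ish rendering in the Mathematica version comes from the graph layout, not reproduced here):

```python
def collatz_step(n):
    # one application of the collatz map: halve if even, else 3n+1
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_edges(limit):
    # edge set of the collatz graph for starting values 2..limit,
    # following each trajectory all the way down to 1
    edges = set()
    for n in range(2, limit + 1):
        while n != 1:
            m = collatz_step(n)
            edges.add((n, m))
            n = m
    return edges

print(sorted(collatz_edges(7)))
```

feed the edge set into any graph-drawing library and the familiar tendril/ seaweed structure emerges as `limit` grows.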
and this coincides very nicely with my announcement of the following. have been making some big hints lately and think finally have a Big Picture/ Big Idea from the most recent experiments. (yeah, no hesitation in the open Big Reveal on a mere blog after years of a similar routine…)
what is looking very plausible at this point is a formula in the form of a matrix difference equation/ matrix recurrence relation. the devil is of course in the details, but heres a rough sketch. prior experiments derived some “indicator metrics” based mainly on binary density of iterates, plus other “surface-like” aspects such as 0/1 run lengths etc… and its now shown that these are strong enough to predict future iterate sizes (10 steps ahead for now) with some significant degree of accuracy.
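to make the idea concrete, heres a toy sketch of the flavor of experiment described: compute surface metrics (bit width, 1-density, max 0/1 run lengths) on collatz iterates and least-squares fit a linear map predicting bit width 10 steps ahead. this is purely illustrative — the metric names, seed range, and single linear fit are my own stand-ins, not the actual MDE/RR machinery or its coefficients:

```python
import numpy as np

def collatz_step(n):
    # halve if even, else 3n+1
    return n // 2 if n % 2 == 0 else 3 * n + 1

def metrics(n):
    # "surface" indicator metrics of an iterate's binary expansion:
    # bit width, density of 1s, max 0-run length, max 1-run length
    b = bin(n)[2:]
    run0 = max((len(r) for r in b.split('1') if r), default=0)
    run1 = max((len(r) for r in b.split('0') if r), default=0)
    return [len(b), b.count('1') / len(b), run0, run1]

def dataset(seeds, horizon=10):
    # pair each iterate's metrics with the bit width `horizon` steps later
    X, y = [], []
    for n in seeds:
        traj = [n]
        while traj[-1] != 1:
            traj.append(collatz_step(traj[-1]))
        for i in range(len(traj) - horizon):
            X.append(metrics(traj[i]))
            y.append(len(bin(traj[i + horizon])) - 2)
    return np.array(X, float), np.array(y, float)

# least-squares fit of a linear predictor (a 1-row stand-in for the
# matrix recurrence idea; a full version would evolve the metric
# vector itself, not just the size)
X, y = dataset(range(3, 200, 2))
A = np.c_[X, np.ones(len(X))]          # affine model: metrics -> size
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("mean abs error (bits):", np.abs(A @ w - y).mean())
```

the “matrix recurrence” framing would replace the single fitted row `w` with a full matrix mapping the metric vector at step t to the metric vector at step t+1, iterated forward.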