Category Archives: collatz

collatz new strategy

hi all. on vac this week & doing some new stuff (happy BTD US). there is a semifamous thousands-of-years-old quote by sun-tzu maybe not yet contained in this blog (its been going thru my mind for quite awhile now, but wasnt able to find it in the blog via google). it is quite a favorite quote of business consultants which might tell you something about modern “leave no prisoners” business attitudes/ culture in our at-times militaristic/ hypercapitalistic modern age. (dramatic alpha-male stuff, but to put it more bluntly, one with a conscience/ empathy/ independent mind might wonder about the “fine print,” ie how many “enemy…” men did sun-tzu kill personally or oversee killing as a general? …or even humans which includes women/ children? oh but ofc its utterly metaphorical right?) 😮 😳 o_O

Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat. –Sun Tzu.

collatz has been described as a nearly impenetrable/ unconquerable adversary. strategy/ tactics both play a key role and have commented at length on both here. they are like a yin-yang combination. victory will likely not come without some kind of balance between the two.

have been more/ very tactical for quite awhile but have been musing on some overarching strategy/ perspective/ pov lately, thinking it all over at the current point. this involves more abstraction.

couldnt find this basic idea pointed out in old blogs. was it? the key question is to prove f(x) < g(x) for all x. here f(x) is collatz stopping distance or some similar metric and g(x) is some recursive function (either time or space bounded). now apparently f(x) in many related forms has extreme entropy, the “needle in haystack” property, a “fat/ long tails” distribution, and fractal structure. earlier blogs have outlined the idea that victory seems to lie on the path of decreasing or minimizing entropy somehow. g(x) can be regarded as an orderly function from analytic mathematics whereas f(x) is “far from it” in the sense of being extremely disorderly.
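to make the f(x) side concrete, heres a minimal python sketch of the stopping distance metric (a hypothetical illustration, not code from this blog; the function name is made up):

```python
def collatz_stopping_distance(x):
    """# of collatz steps until the trajectory 1st drops below its start x."""
    assert x >= 2
    n, steps = x, 0
    while n >= x:
        # uncompressed collatz mapping: 3n+1 on odd, n/2 on even
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

g(x) would then be some orderly analytic bound, eg polynomial in the bit width of x, that provably dominates this wildly fluctuating f(x).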



collatz shift

this months title is hopeful and in line/ theme with the last entry title. the disordered climbs were found to be a major wrench in the works of many months of analysis. however there are now some signs that even though seemingly without signal, maybe there are some angles to leverage on the disordered glides. am starting to get some general ideas. have developed/ honed some even stronger tools/ techniques. will they be enough to crack the problem?

this code is a modification of review51 to extract the recent/ latest generated glides from mix25g. it turned out extracting usable climbs from the table/ database was really nontrivial. 1st this code looks for the most common bit width encountered in the glides (climbs). this ends up being exactly 200, the same as the bin count. this is interesting and maybe worth exploring further: all the lower glides tend to cross (repeatedly) through that section. there are ~10K total iterations. then the climbs containing at least 1 iterate with that width are selected.
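the review51/ mix25g code itself isnt reproduced in this excerpt; heres a hypothetical python sketch of the selection logic just described (glide extraction, most common bit width, then climbs containing at least 1 iterate of that width; all names made up for illustration):

```python
from collections import Counter

def collatz_step(n):
    # one step of the uncompressed mapping
    return 3 * n + 1 if n % 2 else n // 2

def glide(x):
    # iterates from x up to (not including) the 1st drop below x
    seq, n = [x], x
    while True:
        n = collatz_step(n)
        if n < x:
            break
        seq.append(n)
    return seq

def select_by_modal_width(glides):
    # most common bit width over all iterates, then the glides
    # containing at least 1 iterate of exactly that width
    widths = Counter(n.bit_length() for g in glides for n in g)
    modal = widths.most_common(1)[0][0]
    return modal, [g for g in glides
                   if any(n.bit_length() == modal for n in g)]
```

in the actual experiment the modal width came out as exactly 200 bits over ~10K total iterations; the sketch just shows the shape of the filtering.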


collatz new highs

am still clawing/ scrounging for any “big picture” leverage, resuscitating old leads. after some extended musing over new findings, came up with this latest idea. there are lsb triangles even in the disordered climbs for the uncompressed mapping. do these mean anything? seemed to see some cases/ trend where the triangle sizes successively decrease in the climb, ie dont successively increase. can this be quantified? this is somewhat similar to the old workhorse “nonmonotone run length”. instead, its something like “count of new highs” (in bit run lengths, either 0/1). that statistic is (maybe not coincidentally) used in stock market analysis, which has to deal with some of the most wild/ intractable fractals of all.
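a minimal python sketch of the “count of new highs” statistic, here applied to max bit run lengths over a full trajectory (the actual optimizer code is not shown in this excerpt; function names are invented for illustration):

```python
from itertools import groupby

def max_run(n):
    """longest run of identical bits (0s or 1s) in n's binary form."""
    return max(len(list(g)) for _, g in groupby(bin(n)[2:]))

def count_new_highs(x):
    """'mc'-style statistic: # of iterates whose max bit run length
    exceeds that of all earlier iterates in the trajectory (the 1st
    iterate always counts as a new high here)."""
    mc, best, n = 0, 0, x
    while n > 1:
        r = max_run(n)
        if r > best:
            best, mc = r, mc + 1
        # uncompressed collatz mapping
        n = 3 * n + 1 if n % 2 else n // 2
    return mc
```

the successively-decreasing-triangles observation corresponds to this count staying small/ flat over a climb.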

it was not hard to wire up the last “dual/ cross-optimizer” (ie both within fixed bit sizes and also increasing bit sizes) to calculate this metric, named here ‘mc’, serving its intended purpose of trying out new ideas quickly. ran it for 100 bit width seeds and then it did indeed seem to flatline somewhat. upped the seed size to 200 bits and then more of a (very gradual) trend is apparent. it looks like a logarithmic increase (‘mc’ red right side scale, other metrics left side scale). ‘mw’ is the # of iterations since last max/ peak run length. the optimizer ran for a long ~650K iterations.


collatz killer

hi all here we go with the latest installment. trying to come up with new names/ themes. again theres a “pivot” going on at the moment but maybe there are now too many to count. time now for an intermediate retrospective/ pov. my recent physics blog talked about “killing the copenhagen interpretation” and thats my latest idea for this problem. the problem is definitely “killer” in many senses of the word. it kills all great ideas launched against it; its like an impenetrable fortress.

there was a tone of optimism in a lot of prior writing. now looking all that over, it was based on a longtime theme that was yielding fruit(s) of labor(s). the basic idea is that there are locally computable “features” that can, with enough ingenuity, predict longterm glide behavior with high accuracy, and also generally explain other basic trajectory dynamics properties. this clearly ties in with the machine learning approaches. this research theme has been pursued for several years now.

however, last month there was a massive setback on this particular theme/ direction. did you catch it? to summarize, the features being used, mostly based on (binary) density, were leading to a lot of insights and leverage on the problem. but there was a moment a few years ago when the research started to focus on generating trajectories from density-based seeds instead of more general ones. that turned out to be a major detour bordering on a mistake (in 2020 hindsight). 😮 😳 😥 😡 👿 o_O


collatz trap(s)

the last installment ended with some idea of possible “traps” in certain metrics. this idea occurred to me quite awhile ago and didnt work out under some earlier examination, but theres some new ways of looking at it. from past analysis it is clear that eg there is no strict density trap in the sense of one iterates density bounding the next ones wrt the density core. but the last experiments led to a different idea: what if a metric is bounded over some count of iterations, does that limit future glide potential? its a simple variation, seemingly quite related, but maybe the key twist that looks more plausible as a measurable/ consistent property.

this new experiment simplifies the code a lot and bounds distance-from-core, labelled ‘dc’, and an entropy metric. the entropy metric counts the total # of 0-to-1 and 1-to-0 transitions in the binary form, scaled by the bit width. the (scaled) inverse entropy formula (aka “order”) ends up as one minus the count of 0/1 runs/ groups divided by the iterate bit width, labelled ‘e’. an upper bound on this inverse entropy is equivalent to a lower bound on the entropy (because, as mentioned, entropy increases as glides progress, and the potential trap is at the end; also note a “low upper bound (on order)” is a “high lower bound (on entropy)”). it finds a sharp transition point ‘e’ ≈ 0.40. (as mentioned, suspect that both low and high entropy may tend to bound glide length, therefore maybe glide bounding wrt density-distance-from-core is inversely related to entropy-distance-from-core?)
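heres a small python sketch of the 2 formulas just described (hypothetical function names, not the experiments actual code):

```python
from itertools import groupby

def entropy_metric(n):
    """total # of 0-to-1 and 1-to-0 transitions in the binary form,
    scaled by the bit width."""
    bits = bin(n)[2:]
    transitions = sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return transitions / len(bits)

def order_metric(n):
    """'e': (scaled) inverse entropy aka order,
    1 - (# of 0/1 runs) / bit width.
    near 1 for solid bit blocks, 0 for fully alternating bits."""
    bits = bin(n)[2:]
    runs = sum(1 for _ in groupby(bits))
    return 1 - runs / len(bits)
```

note runs = transitions + 1, so for a fixed bit width an upper bound on ‘e’ (eg the ≈ 0.40 transition point) is exactly a lower bound on the transition-counting entropy.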
