happy new yr to all my readers, made it to another decade, some smiles, some scars! a few trolls have visited over the years, they seem to have found the blog via reddit comments, comments there do draw in minicrowds, but alas reddit mods (supposedly rather hypocritically following policy on the user generated content site) very aggressively scrub links to blogs in comments, although many submitted links are often blogs. so am banned from posting comments on about 3 subreddits lately for citing blog articles that are unquestionably on topic for the subreddits. badges of honor lol!
am thinking about trolls sometimes because it seems like maybe around ~½ of the comments on this blog are from trolls, and sometimes in low moments identify with their sentiments (as in the ½-serious title of this blog)… theyre lowlifes but theres a silver lining in the cloud: they do have the energy to write comments, which cant be said for the uncountable “K(ilo)s” of lurkers who hit this blog.
some of my blogs try to tiptoe around triggering the troll(s) (its not clear how many there were in the past, they seem very similar sometimes), eg avoiding certain phrases. one wants to present the appearance of some kind of progress or professionalism here. but nobodys said anything for months now, so might as well lead with the provocative title alluded to in last months blog. yes, trying to embrace the zen of going in circles at the moment. sometimes life and/or ones efforts feel inconsequential/ futile… sometimes lifetime accomplishments have all the permanence of circles in the sand + sandcastles…
a blog here mentions an “old idea” of trying to prove collatz via proving equivalence of two Turing machines. keep coming back to this. one would be eg the stopping distance calculated in a provably recursive way, and the other is the TM implementing counting the stopping distance using the standard collatz computation. was curious about how old that idea was and tried to use google to locate the 1st reference. (maybe some nostalgia today associated with the date.) earliest seems to be in this 10/2013 blog.
that led to another rather wild idea that gets to the heart of these exercises. it would seem the goal is to create a function identical to the collatz stopping distances using known recursive algorithms and then prove the two are equivalent. of course this is an undecidable problem in general (for two arbitrary functions) but in a sense all major proofs in mathematics are about overcoming that limitation.
some of the proof directions sometimes remind me of the pumping lemma for regular sets, a powerful tool. have always wondered if there are pumping lemma analogs for languages harder than regular languages, and if there could be one for eg P languages, or even undecidable languages… am aware this doesnt sound totally coherent.
but heres a sort of analogy wrt undecidability & the TM equivalence problem. suppose one had a provably recursive machine that “apparently” calculates collatz stopping distances. that is, for any arbitrary starting points selected, running the computation, it correctly calculates the stopping distance. then one can verify the equivalence for any finite # of samples. imagine astronomically large # of samples being verified. given that the TMs involved have finite # of states, it almost seems like there could be a “magic number” such that if one verifies for very large # of samples, the TMs must be equivalent. magical thinking?
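to make the finite-sample idea concrete, heres a minimal toy sketch (python, hypothetical names): two different implementations of the stopping distance, a direct loop and a memoized recursion standing in for the 2nd "provably recursive" machine, checked for agreement over a finite sample:

```python
from functools import lru_cache

def stop_direct(n):
    """stopping distance: # of collatz steps from n down to 1."""
    c = 0
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        c += 1
    return c

@lru_cache(maxsize=None)
def stop_memo(n):
    """same function via memoized recursion, a stand-in for a 2nd
    'provably recursive' machine (here trivially equivalent)."""
    if n == 1:
        return 0
    return 1 + stop_memo(n // 2 if n % 2 == 0 else 3 * n + 1)

# finite-sample equivalence check over a sample set
samples_agree = all(stop_direct(n) == stop_memo(n) for n in range(1, 5000))
```

of course agreement on any finite sample proves nothing by itself; thats exactly the gap the "magic number" speculation is about.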
but then think from this angle: of course one can have an “adversarial” TM that correctly calculates for only a finite # of samples, for arbitrarily large # of samples, and fails otherwise. but note that its state count must be astronomically large also (at least, if it is to be constructed). also note proving the undecidability of determining TM equivalence cannot bound the TM sizes, the proof only works if the TM sizes are unbounded. but here we are talking about bounded TMs of specific size… in other words, is there something about working with specific TMs of specific (“relatively small!”) # of states that can be leveraged somehow? the undecidability of proving two TMs are equivalent in general cannot inherently mean that its unprovable for any specific TM1, TM2… is there some kind of loophole or exception in the “stone/brick wall of undecidability”? this also reminds me of the busy beaver problem. just to count to a very large # and halt requires a minimum number of TM states (starting from an empty tape), but its undecidable exactly how many in general…
thinking out loud, in other words it seems the idea is something like a proof that if any TM of size n states matches the collatz stopping distance calculation for large m # of trajectories, then it must match for all trajectories? it sounds outlandish but is there maybe any contrived problem where something like this could hold? somehow the idea is that verifying infinite cases reduces to verifying finite cases? again sounds kind of weird stated outright, but isnt that also the essence of proof in general? (reminds me of some of hilberts old, pre-computational ideas about “finitistic mathematics”…)
💡 elsewhere, this is a quick implementation of some of the edit distance ideas outlined at end of last blog. its relatively natural to traverse starting seeds using (nearest) edit distance (in binary) and probably should have looked at something like this a long time ago. the hybrid genetic algorithms are very similar; even beyond that, theres a crucial/ basic natural correspondence not noticed until now (2020 hindsight!). its now very obvious to me and to the imaginary reader that has followed everything up to this point, but alas thats a big emphasis on imaginary, so let me spell this crucial idea out: the GAs have contraction, expansion, and mutation/ crossover operators that are analogous/ parallel to the delete, insert, and replace in the edit distance algorithm. the GA operators allow for more change in a single operation but in the simplest cases of changes in single positions they reduce to the edit distance operations.
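the correspondence can be spelled out in a few lines (a sketch with hypothetical operator names; the real GA operators allow multi-position changes):

```python
# hypothetical single-position GA operators on binary strings; in the
# 1-position case they coincide exactly with the edit distance operations
def contract(s, i):
    """~ delete: drop the bit at position i."""
    return s[:i] + s[i+1:]

def expand(s, i, b):
    """~ insert: add bit b at position i."""
    return s[:i] + b + s[i:]

def mutate(s, i):
    """~ replace, which for binary strings reduces to a bit flip."""
    return s[:i] + ('1' if s[i] == '0' else '0') + s[i+1:]
```

eg `contract('10110', 2)` gives `'1010'` and `mutate('10110', 0)` gives `'00110'`.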
this traverses seeds and tries to visit them ordered by “computational tableau size”, an idea that has appeared a few scattered times before. the tableau size is ‘t’ magenta, calculated as the binary space associated with full trajectory sequences ie the sum of all iterate sizes in binary. (note that for binary strings the 1-character replace operation is equivalent/ reduces to bit flipping!) was a little surprised to get this particular graph. the traversal is very smooth/ orderly except for some downward spikes. looking at them, not unexpectedly, they have lsb 0-runs “sandwiched” in them but not in the initial iterates. theyre outliers in the trend, in a sense “hard” to find wrt the algorithm dynamics. the trajectory lengths ‘c’ blue converge to a sawtooth pattern mostly also correlated with initial bit width ‘nw’ lightblue. the sawtooths, not seen often in recent experiments although seen in many prior ones (there are so many experiments now!), nevertheless are a striking manifestation of emergent behavior and self-organization, a natural recursive induction pattern.
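for reference, a minimal sketch of the tableau size calculation as described (hypothetical function names):

```python
def trajectory(n):
    """full collatz trajectory from n down to 1."""
    seq = [n]
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

def tableau_size(n):
    """'t': total binary space of the computational tableau,
    ie sum of the bit widths of all iterates in the trajectory."""
    return sum(x.bit_length() for x in trajectory(n))
```

eg the trajectory of 5 is 5, 16, 8, 4, 2, 1 with bit widths 3+5+4+3+2+1 = 18.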
❗ “getting warmer it seems!” it seems a key aspect of this experiment is that the entire trajectory data, down to minute variations, is encoded in the tableau size calculation in a way not captured otherwise, whereas many other metrics seem to skip substantial details; on the other hand inherently every metric is a kind of data compression algorithm. and it turns out trajectories within nearby edit distances (wrt starting seeds) have similar sizes, at least over the overall frontier, and the frontier is “dense” enough to cover all the similar sized trajectories.
the overall approach reminds me of the basic factoid that recursive languages are exactly those enumerable in canonical (sorted) order. as phrased in other terms previously, (to solve the problem) one is actually wondering if the collatz stopping distances form a recursive language. this experiment seems to be “evidence” in the direction that they do. it immediately leads to the question, what if a completely smooth computation size curve could be constructed (eg no spikes as here)? wouldnt that somehow find the hidden order of the problem? also note, a technique utilized in the past: if the spikes are bounded in size/ frequency then a buffering algorithm can remove them…
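the buffering idea can be sketched as a bounded-lookahead reorder, assuming spikes displace elements by at most k positions (a standard min-heap trick, not the blogs actual code):

```python
import heapq

def buffered_sort(stream, k):
    """reorder a nearly-sorted stream exactly, assuming any element is
    at most k positions out of place (ie bounded spikes); uses O(k)
    extra memory via a min-heap buffer."""
    heap, out = [], []
    for x in stream:
        heapq.heappush(heap, x)
        if len(heap) > k:
            out.append(heapq.heappop(heap))
    while heap:                      # drain the remaining buffer
        out.append(heapq.heappop(heap))
    return out
```

eg `buffered_sort([1, 2, 5, 3, 4, 6], 3)` recovers the fully sorted order.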
💡 came up with this idea somewhat on a whim and it seems to be real/ confirmed. it comes from noticing there is something apparently “special” about the postdetermined glide: no substantial ufos have been found there. so a natural conjecture is that trajectories are basically a predetermined range, followed by a “controlled” postdetermined glide. what is meant by “controlled” is possibly the essence of a proof kernel/ structure. what would be a strict possible control? how about just counting the max (absolute) difference in odd vs even iterates (in the range)? it seems at first somewhat farfetched that this would be bounded, but its definitely worth checking. and heres a new idea: the just built edit1 traversal logic can be used in a sort of intermediate optimization algorithm between the very simple bitwise and the very sophisticated hybrid frontier advance logics. it seems to have a nice compromise/ balance/ tradeoff: more sophisticated than the bitwise code so maybe less likely to miss optimums, and not as timeconsuming to run as the hybrid code.
this does single variable optimizing to maximize ‘ba’ the max absolute difference using the edit1 traversal logic. strikingly, it (lightblue line) does seem to level off “substantially” even as others mostly climb, in particular ‘nw’ starting bit width and ‘c’ trajectory length, and ‘cg’ glide length green! (‘ba’, ‘nw’ right side scale.) is it just my imagination or do the top spikes compress even as the bottom seems to go up very gradually? some of this is a little obscured in that the graph has some overplotting disadvantage with the ‘cg’ trend very close to ‘ba’ (aka highly correlated) also with uncontrasting colors. the ‘cm’ trend is notable in that it also seems to narrow and higher values become more sparse, almost as if pushing up ‘ba’ pushes down ‘cm’… another variable that would be good to optimize here in a multivariable approach would be ‘cg − nw’ the total postdetermined glide span.
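a guess at the ‘ba’ metric in a few lines (over the whole trajectory for simplicity, whereas the experiment restricts to a range; hypothetical names):

```python
def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def ba(n):
    """'ba' (a guess at the metric): max absolute running difference
    between the # of odd and # of even iterates seen so far, taken
    here over the whole trajectory of n."""
    d = best = 0
    while n > 1:
        d += 1 if n % 2 else -1     # +1 for odd iterate, -1 for even
        best = max(best, abs(d))
        n = collatz_step(n)
    return best
```

eg for seed 3 (iterates 3, 10, 5, 16, 8, 4, 2) the running difference bottoms out at −3, so `ba(3)` is 3.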
(1/2) 💡 while writing similar ideas, dont think have noted/ highlighted this particular dichotomy before but just realized/ noticed it helps motivate/ understand some of the general ideas. somewhat like zen, theres some very substantial thought here, yet it still remains slippery.
- traversing integers in incremental order and calculating stopping distances leads to very high or in a sense highest order in the traversal algorithm, ie its just counting, which in a sense requires almost no memory, or very low memory to keep track of resolved vs unresolved points or frontier vs visited (just the current integer tracking the boundary, with only a single point on the frontier!), whereas the resulting adjacent stopping distances have high disorder. ie in a sense, and looking at this all in a general way, counting is the simplest traversal algorithm; and note that all induction proofs in a sense reduce to counting!
- then in direct contrast, wrt the prior traverse15b algorithm, there is very substantial (to the point of an unexpected/ surprising degree of) order in the visiting of stopping distances in the algorithm, but, not tracked by the algorithm though present/ lurking “behind the scenes”, the complexity of the traversal is very high in the sense that the frontier/ resolved (visited) sets are “large”/ very sparse and not much compressible (“relatively speaking, in comparison”). so theres the two extremes of the tradeoff right there, and somehow it seems a proof, if it exists, is to be found in a sense “balanced/ traded off/ compromised/ somewhere in the middle,” such that both the “output side” evaluation of stopping distance and “input side” traversal orderings have significant order, ie such that both are in some crucial sense “algorithmic”… again as imagined last month it looks like a crucial next step is looking at the complexity of the visited vs frontier sets and wiring measurements of them, and steering based on them, into the optimization algorithm… ❓
but then comes another thought. counting seems to have unlimited complexity in the sense that the current integer has arbitrary size. for every very large integer, it could encode a “complex” set of “smaller” points. so what exactly is the distinction being examined here? even with many powerful concepts/ experimental results/ findings like order vs entropy, compression, traversal, emergence, undecidable vs decidable algorithms, etc, am still struggling with how to formalize this exactly! its some kind of new idea never before seen exactly but composed of many other ideas seen/ known! in short it all seems to imply optimizing some relative quantity somehow built out of all the above moving parts, but what exactly…? ❓ ❓ ❓
💡 ❗ further musing. that offhand idea of treating an integer as a data encoding may have helped trigger this! maybe the idea of an approximation (long!) involved here is part of the problem/ distraction/ a red herring. the ideas involve relative degrees of order/ entropy/ compression. but what about “perfect order”? as already stated, if collatz stopping distances are a recursive language, they can be generated in sorted order. so perfect order would be this sorted order! and looking at that last extraordinary graph for traverse15b, it definitely seems within reach (empirically, with a little more ingenuity/ tweaking)!
⭐ 😮 ❗ but then how does this really help? whats the next step? what could then one do with perfect ordering? a natural operation, tried in the more distant (younger/ naive) past: difference operation, aka like the calculus derivative! and also iterated differences are easy to compute, aka calculating the nth derivative, taking the “integer encoding” (binary strings) of trajectories/ sequences/ computational tableaus! my neurons are firing! have an extraordinary, shocking, nagging, nearly magical, little )( wild & crazy, this-nanosecond-moment suspicion that if the collatz stopping distances are recursive, and perfectly ordered, ie wrt corresponding computational tableaus (which are basically nearly perfectly correlated), theres a high probability one of the n’th derivatives will have a computable pattern in it! ❓ 🙄
(fineprint: pattern in differences of the tableaus, or the stopping distance? presumably it shouldnt matter too much, and presumably show up in many related ways! but the devil is always in the details!) 👿 😈
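iterated differences themselves are trivial to compute, eg (a generic sketch, not the blogs code):

```python
def nth_differences(seq, n):
    """iterated (discrete) differences, aka the nth 'derivative'
    of an integer sequence."""
    for _ in range(n):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

# sanity check of the classic fact: a degree-d polynomial sequence
# has constant d'th differences, eg cubes and d = 3
cubes = [k**3 for k in range(8)]
```

`nth_differences(cubes, 3)` gives `[6, 6, 6, 6, 6]`; the hope stated above is that some nth difference of the sorted tableau/ stopping distance sequence shows an analogous computable pattern.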
(1/3) this is the edit1 optimization with multioptimization added to traverse16. it was amazingly easy to drop in the order subroutine, which uses sums of parameter order indexes as a sort of simple proxy for scaling by parameter standard deviations. several optimization runs are explored. ‘ba’ does seem to increase when multioptimized with other trajectory statistics. then looked into ‘bas’ the ‘ba’ value scaled by # of iterates in the span (postdetermined glide, ‘cg’ − ‘nw’) which is very similar to a sequence parity density (distance) measurement. it tends to flatline very close to 0. also looked at ‘dba’ and ‘dbc’ the parity density over the glide and the parity density over the postdetermined range, all tending to flatline very close to 0. do not recall looking at these parameters before. it does seem to relate highly to earlier experiments showing slope correlation with a line over the drain is very high. think there were other related experiments looking at ‘da’ density distance maximum over the postdetermined range. but not sure how to reconcile these results with the known existence of ufos in the postglide.
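a guess at the rank-sum idea behind the order subroutine (hypothetical name/ signature):

```python
def rank_sum_order(points):
    """sum of per-parameter rank indexes as a cheap proxy for scaling
    each parameter by its standard deviation (a guess at the 'order'
    subroutine idea); points is a list of equal-length metric tuples,
    and lower total rank ~ lower combined value across all metrics."""
    m = len(points[0])
    ranks = [0] * len(points)
    for j in range(m):
        order = sorted(range(len(points)), key=lambda i: points[i][j])
        for r, i in enumerate(order):
            ranks[i] += r           # add this parameter's rank index
    return ranks
```

the appeal is that ranks are scale-free, so wildly different metric magnitudes can be combined without computing standard deviations.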
its immediately a foremost question whether tinkering with the optimization metric could give better results and remove some of the spikes from traverse15b, here again the ‘t’ variable in magenta. this answers the question in both the affirmative and the negative. the basic idea is to look at another variable called ‘t01a’ thats the absolute difference of total 0 and 1 bits in the binary iterates (new metrics ‘t0’ and ‘t1’ yellow, black) and use it in the separate evaluation function now called ‘z’ magenta. there are some runs that increase spikes and maybe 1 that cuts back some on the few big ones.
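a sketch of the ‘t01a’ metric as described (hypothetical function name):

```python
def t01a(n):
    """'t01a': absolute difference between the total counts of 0 bits
    and 1 bits over all binary iterates in the trajectory of n."""
    seq = [n]
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    t1 = sum(bin(x).count('1') for x in seq)                    # 't1'
    t0 = sum(x.bit_length() - bin(x).count('1') for x in seq)   # 't0'
    return abs(t0 - t1)
```

eg for seed 4 (iterates 100, 10, 1 in binary) there are 3 zero bits and 3 one bits, so `t01a(4)` is 0.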
somewhat as expected its quite remarkable/ dramatic how much these adjustments alter the traversal orders in some cases, a phenomenon long noticed in other experiments, but in short theres no “cheap solution” at least wrt this metric. its esp notable how the spikes tend to show up in the same places in several runs suggesting maybe some other approach is needed. also one has to be careful with some metrics that cause a “runaway” or “blowup” effect/ increase in the pursuit of longer/ large trajectories and skip over smaller ones, some were found but discarded. this seems to be due to a sort of “skewing” or “lopsided” effect in the metrics such that larger iterate seeds have very short tableaus. this is understood for collatz eg in the sense that “most glides are short” even for large seeds. but maybe need to think about this standout/ apparently key aspect more wrt traversal logic eg mitigation or even exploitation strategies.
this looks at what it takes to get a perfect trajectory size tracking curve. one wonders how close the outliers/ spikes are in edit distance to the explored points. on closer examination with some simple analysis code, it turns out they are close! at distances of only 2 instead of the 1 in the algorithm! this naturally suggests expanding the frontier by the edit2 distance instead of edit1. this is “cheap” in the sense the code is easy to write, almost already written (ie edit 2 distances are just all edit 1 distances away from edit 1 distances!), but expensive in terms of the memory and time consumption of the algorithm, which increase substantially, definitely something to note/ mention but they arent tracked/ quantified here, because am now thinking, along with the recent paradigm shift, it doesnt necessarily matter that much!
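the “almost already written” observation in code form (a generic sketch, neighborhood blowup and all):

```python
def edit1_neighbors(s, alphabet='01'):
    """all strings at edit distance <= 1 from binary string s."""
    out = {s}
    for i in range(len(s) + 1):
        for c in alphabet:
            out.add(s[:i] + c + s[i:])          # insert
        if i < len(s):
            out.add(s[:i] + s[i+1:])            # delete
            for c in alphabet:
                out.add(s[:i] + c + s[i+1:])    # replace (bit flip)
    return out

def edit2_neighbors(s):
    """edit distance <= 2 is just edit-1 neighbors of edit-1 neighbors,
    at the cost of a multiplicatively larger frontier."""
    return {t for u in edit1_neighbors(s) for t in edit1_neighbors(u)}
```

the memory/ time expense mentioned above is visible directly in the nested comprehension: the neighborhood size compounds with each extra edit.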
⭐ 💡 ❗ 😮 this algorithm succeeds in obtaining an apparently perfectly monotonic trajectory (binary) memory size curve! (‘t’ magenta, ‘z’ lightblue overplotting, at least cosmetically “perfect”, without spikes.) more testing will be needed to confirm the solidity of the result. using higher edit distances was obvious to me from the start but its quite remarkable it succeeds “already” for the low 2! but then how to find out if its not missing trajectories? already have the idea of using similar techniques as previously…
(later) 😡 😥 argh, dashed on the rocks again! rude awakening! figures it was just magical thinking…
ran some validation code (below) which did verify that the algorithm is apparently working correctly. the validation code creates random trajectories starting with initial seeds of the same # of bits found in the “1K lowest set” above. then it looks for a seed whose associated trajectory has a smaller (binary) size yet is not in the set, and finds none after 50K iterations. so its solid wrt the sorted enumeration, still a remarkable feat!
but then looking at the 1K lowest set, its extremely disorderly, there seems to be no inherent order looking at various metrics: trajectory strings in binary, initial seed n‘th differences, trajectory strings n‘th differences (n up to hundreds!), trajectory plots, trajectory binary sequences (mod 2). 😳 😦
in a skeptical moment it might almost seem like strong “statistical evidence” the collatz problem is undecidable!
although a remaining hidden order may be grammar compression…
on the other hand, is this disorder to be expected? what would be a control case study? idea: eg consider a (decidable) sorting algorithm of binary inputs. suppose one enumerated its computational tableaus in sorted order. would one see the same disorder?
one aspect to point out: the concatenated trajectory binary sequences, even though they have very close total lengths, have “almost entirely” different internal structure, and the associated iterate/ trajectory sequence lengths are wildly varying. in other words the recent idea that trajectory lengths and associated binary memory counts are correlated is almost immediately, resoundingly refuted! which makes me think maybe some new approach looking at edit distances or some other similarity measure between the trajectory binary sequences makes sense… actually hamming distance is quite similar to edit distance for binary strings and easier to compute…
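for reference, hamming distance really is a one-liner compared to the quadratic dynamic programming of edit distance (generic sketch):

```python
def hamming(a, b):
    """hamming distance between equal-length binary strings:
    # of positions where the bits differ. O(n) per pair, vs the
    O(n^2) dynamic programming table of full edit distance."""
    assert len(a) == len(b), "hamming requires equal lengths"
    return sum(x != y for x, y in zip(a, b))
```

the catch, visible in the assert, is that it only applies to equal-length strings, whereas edit distance handles the length-varying trajectories directly.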
(1/4) ❗ ❓ the construct100b experiment from last month noted an apparent implication/ tendency with (postdetermined) slopes with the single 1-triangles and gave me some ideas to follow up. this looks at slopes starting from cg, cm, x indexes aka mcg, mcm, mx, right side scale red, green, blue, where ‘x’ is the initial seed bit width aka 1st postdetermined point. not surprisingly in retrospect, it ends up looking a lot like the construct100 experiment from last month which was looking at densities wrt similar/ related spans, but this one is remarkably more orderly, showing that slopes seem to be a “lower noise/ more orderly” version of density measurements.

got tripped up facing/ chasing down an apparent anomaly from a glitch in the code, but the fixed version is still meaningful. maybe not surprisingly, ‘mx’ and ‘mcm’ slopes nearly perfectly track each other! the difference ‘dmxcm’ scaled by 1K is lightblue and it narrows, and is mostly correlated with the difference cm − x magenta. slope ‘mcg’ tends to follow or “float around” mcm/ mx. admittedly, also need to better understand/ explain the glaring sawtooth pattern already noted; as mentioned it seems to relate to merging trajectories… ❓
however the basic story here seems to fit into a lot of other analysis. it looks like more confirmation that the start of the postdetermined range is also the start of a mean-returning random walk without a lot of divergence from the mean slope, and even ‘cm’ later in the glide in the postdetermined range cannot have much of an effect on the overall “highly constrained” slope.
but, the now omnipresent caveat, it is also known that large ufos are possible in the postdetermined range… *
* actually, have stated this in several places, but maybe need to rethink/ revisit/ reexamine this! have 2nd thoughts and have been rethinking it, and again in the goal of greater rigor/ precision/ accuracy/ correctness etc, feel now maybe there is some prior assumption/ mixup of presuming that ufos “deep” into the drain (at arbitrary indexes) are the same as being in the postdetermined region. arbitrary ufos were constructed “embedded” in the drain but thats not at all necessarily equivalent to the postdetermined range! constructing them arbitrarily in the postdetermined range has to do with backtrack logic trying to construct “pushed down pre-trajectories” of given seeds ie more gradual vs steeper slopes, which was found to be hard in multiple ways. in short if the pre-trajectory slope is steeper than a 45 degree angle on a bit grid plot, the algorithm is only creating a pre-trajectory with a longer predetermined range, “pulling” the constructed structure into the predetermined region instead of “pushing it out” to the postdetermined range. ofc have to illustrate some of these ideas sketched out with further experiments/ algorithms…
(later) the relevant experiments for “ufo in postdetermined region” can be found 3/2019; was forgetting some of the details after ¾ year and mostly had it correct, the algorithms all had gradual pre-trajectory slopes with the tendency to “push out” the ufo deeper into the postdetermined range. key experiments to look at wrt pre-trajectory slopes were vary12, vary13, backtrack, backtrack3.
but some conceptual clarification/ detailing esp wrt slopes etc remains not noted earlier. the initial construction of the ufo in a seed and early pre-iterates adjacent to the ufo/ seed put it in the predetermined range, but typically in a drain, not a glide. again the slope of the pre-trajectory is critical and there is a critical point/ value where it either “pushes” a constructed ufo more toward the postdetermined range (gradual, the cases found) or more toward the predetermined range (steep, but not putting it inside a glide, because its still in a drain-looking range!). ie the algorithms had non-steep slopes with corresponding tendency to push the ufo deeper into the postdetermined region (so apparently not so hard!). and its a key open question if steeper pre-trajectory slopes exist or can be constructed that put ufos in the predetermined region. the effect of those algorithms logic on pre-trajectory slope needs to be reexamined/ reviewed closely and would like to look into new algorithms that directly target/ have different slopes/ ranges. also note what was found to be hard was finding pre-trajectories that are smaller than a starting seed, ie finding a positive slope of the pre-trajectory, ie pre-trajectories that lead to glides.
(1/5) already past this idea but its worth some mention, while short it was not easy to code. my thinking is that one needs to look into more detailed similarity measures between “internal trajectory structures”. an example would be “distances” between parity sequences. one of the simplest distances to compute is hamming distance. as just mentioned in comparison to edit distance, hamming distance is also a natural way to expand/ traverse the frontier. this takes those ideas but runs into an immediate difficulty. it has a concept of “in points” (chosen) and “out points” (frontier) and looks for (binary based) similarities between the two, defined over the glide parity sequence. but even if there are not that many frontier points, theres a quadratic complexity to looking at distances between all the frontier points and chosen points. after only 500 iterations this code has 193416 points in the distance map, relatively memory-expensive, it seems to go up almost geometrically.
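a guess at the glide parity sequence construction (hypothetical name; assumes the glide is the maximal prefix of the trajectory with iterates ≥ the seed, seeds ≥ 2):

```python
def glide_parity(n):
    """parity sequence over the glide of seed n (n >= 2): bits
    recording odd (1) vs even (0) iterates while the iterate stays
    at or above the starting seed."""
    seed, bits = n, []
    while True:
        bits.append(n % 2)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        if n < seed:                # glide over: fell below the seed
            break
    return bits
```

eg seed 3 glides through 3, 10, 5, 16, 8, 4 before falling to 2, giving parity sequence 1, 0, 1, 0, 0, 0 (so ‘pw’ = 6); these are the strings the hamming comparisons above would run over.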
but theres substantial order obtained at this expense, seen here and otherwise in the binary iterate grid, aka “getting warmer”. ‘z’ is the optimization metric, the sum of ‘dp’ and ‘dn’ where ‘dp’ is the similarity distance and ‘dn’ is the binary width of the iterate, apparently perfectly monotonically increasing. adding in ‘dn’ keeps the algorithm from just “runaway” moving to higher iterates that have 1-2 length glides. ‘pw’ is parity sequence width aka glide length and the algorithm does move/ advance toward larger ‘pw’ up to 9, not getting stuck or fixated on the “trivial glides”.
(later) as has been repeatedly emphasized (and maybe not only by me, but possibly pointed at by a distinctive/ memorable commenter here still not forgotten) a lot of the problem reduces to looking at what might be simply called (finally!) postdetermined glides ie glides that extend past the predetermined range. looking carefully over prior blogs, there was some mischaracterization of this initially eg 5/20/2019. “glide descent/ right” mentioned there is not at all the same as “postdetermined glide” because its easy to construct glides in the predetermined range that have arbitrary behavior (are unconstrained) after the glide peak. can see some grasping at concepts/ distinctions in that writing/ thinking. its critical to get all this straight. some terms can have ambiguity and can still make progress (to the point that some vagueness is beneficial/ an advantage), other times, lack of rigor in the ideas impedes/ interferes with progress.
the overall idea guiding it is that maybe large ufos exist in the drain, but not in the drain “associated with” (contained in) the glide (aka descent region of the glide), and therefore maybe some approach with viability shows the drain part of the glide always exists without “ufo complications” inside it; in other words, proving all glides are finite based on combined unavoidability and special structure of the drain part of the glide, even with known ufo complications existing postglide.
but actually next months blog 6/2019 got it right:
mentioned in last months installment that it seemed like long 0/1 runs (ufos) in the glide decline were rare. this was once called “glide right” and its a simple nickname. technically this is postdetermined glide right, not exactly the same thing as earlier glide right concepts which was not restricted to postdetermined.
note that sometimes depending on how the code is written it will automatically tend to “push” the glide into the postdetermined range, hence maybe sometimes this oversight is acceptable or “works out,” but its also crucial to make the logic explicit.
even though there was some reference to pre vs postdetermined in nearby text, the 5/2019 sentences quoted and other later musings in the same blog are not strictly accurate without further qualifications. “drain” has never been fully defined carefully, and theres some ambiguity. can it extend into a predetermined range? trajectories with drainlike ranges can be easily (Terras-construction) “built” in the predetermined range. also as just stated, the “descent region of the glide” can happen in the predetermined range even to the point of being arbitrarily constructed.
also (thinking this over recently, coming up with an important/ explicit counterexample), utilizing the Terras construction one can easily construct a large lsb 0/1 ufo in the descent region of the glide if the entire glide is in the predetermined range. non-lsb ufos in the same range remain open to question… anyway all this suggests its really critical to understand how the glide structure interrelates to the pre vs postdetermined range, and many experiments indeed poke at that interrelation, eg the lsb 1-triangles, bridges, etc, esp the series of ‘cmnl’ analysis, although that needs a 2nd look wrt these highlighted clarifications! am now thinking that merely maximizing ‘cmnl’ may be similar but not (exactly) the same as trying to find the longest postdetermined glides! all this can have a lot of razor subtleties! but also, always trying to figure out which subtleties are irrelevant to a proof…
❗ 😮 here is yet another revealing/ striking result. it apparently captures a dramatic snapshot of mean-returning and/ or “hailstone” behavior and shows that postdetermined glides seem very distinctive compared to other trajectories. this creates all 27328 20-bit-seed postdetermined glides recursively/ inductively using the msb-extension logic found in latest bitwise algorithms, and then analyzes the trends/ trajectories using binary width color-code-histogram bins. all the glides plotted as full trajectories starting at the 20-bit seeds (some starting inside the “cloud” or “blob”) are right-aligned at 1.
theres an extraordinary clustering effect. looking at some experiments, at 1st glance taking random ½ density iterates in comparison seemingly leads to a much larger “end-spread” on the right, but need to quantify this further. from other experiments it appears theres an infinite # of postdetermined glides but here theres a striking constraint revealed on their dynamics. some of this was hinted in the lsb 1-triangle trajectory merging phenomenon and this further generalizes the idea. other/ further stuff to look into: it looks like the “curved top/ arch” is constrained, seem to recall experiments relating to that. also the “left shelf” makes me wonder whether maybe some part of the postdetermined glide is not part of the drain, which again relates to the ‘cmnl’ range.
(later) …”already quantifying it.” heres the comparison with random ½ density iterates. immediately revealing some of the structures/ features are not a unique characteristic of the postdetermined glides but overall scale definitely is. left and top are compressed and/ or truncated. the clustering/ “end (spread) narrowing” effect at right is still present. a central “right angle edge” emerges brighter lower right. ❓
(1/6) its straightforward to look into that “top compression” from another angle for large iterates, it reminds me of other experiments finding something maybe similar for siblings, and seems analogous to a “vertical limit” similar to the “horizontal limit” ‘cmnl’. this is a very thorough optimization in the sense of a strong algorithm, many metrics, and relatively long runtime using the handy new intermediate
stepwise code (renamed from
traverse17) that looks at ‘mnw’ red right side scale/ overplotted, the bit width difference between the 1st postdetermined iterate and the glide max iterate, and finds it seems to be bounded to ~18 even as all the other trajectory metrics go up. the exception is again ‘cmnw’ magenta, flatlining and spiking only about as high as ~200 in the midrange of the plot, but mostly overplotted. after the bound was discovered without them, ‘cmnw’ magenta, ‘cgnw’ lightblue and ‘cgcm’ yellow (the peak index minus bit width, postdetermined glide, and postpeak glide lengths) were added in the (final) optimization and it doesnt seem to affect the ‘mnw’ bound. it seems notable that ‘cgcm’ “right side” of the glide is apparently not bounded whereas ‘cmnw’ “left side” is.
again in the interest of strict accuracy/ rigor, the standin criteria here for “postdetermined glide” is the inequality
cm > nw (stricter, a presumably “longest” subset of all postdetermined glides) and there are probably numerous scattered references to that across blogs, but its not exactly accurate. strictly speaking postdetermined glides are
cg > nw. one way to visualize the difference is that its possible for a glide to peak in the predetermined region but still decline into the postdetermined region. not sure how the difference would affect this optimization, right now dont think it would much because it seems like the longest glides will have
cm > nw.
(1/7) 💡 ❗ an idea for a basic story about collatz built out of prior knowledge, but with some key new insight/ leap. theres been significant investigation into the 3-power sequence and think its really key. another way to see collatz is that its apparently sampling bits in the 3-power sequence left-to-right from lsb to msb as the sequence advances. recall the sequence is x · 3^m where x is the starting iterate. a way to visualize this is that the 3^m part “spreads out” or “smears” the bits of x. initially x · 3^m seen as a set of adjacent bits is dominated by the statistical patterns of x for low m, eg wrt the density of x, it has approximately the same density—it seems so for high and low densities, and for some midrange densities of x, x · 3^m seems to be able to “transform it” high or low. then as m gets larger, the 3^m term dominates as far as controlling the statistical properties of the set of bits. and 3^m seems to adhere very closely to ½ (binary) density, this is probably “easily” provable—and maybe x · 3^m tends to converge to ½ density right around m equal to the # of bits in x! the other part of the visualization is that as x · 3^m grows, bits are sampled at increasingly leftward positions. how fast that leftward trend moves was examined in some past experiments and has to do with how the collatz sequence parity compares to the 3-power sequence. am now wondering how chaotic/ irregular it is, or if there is some relatively orderly trend to this leftward “advance speed”. its really almost a glaring oversight that some of these basic ideas havent been examined/ confronted until now.
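fwiw the “3^m is ~½ density” claim is trivially checkable in a few lines of python; this sketch is mine, not from any of the cited code (names hypothetical):

```python
def density(n):
    """fraction of 1-bits in the binary representation of n."""
    b = bin(n)[2:]
    return b.count("1") / len(b)

# the 1-bit density of 3^m drifts toward ~1/2 as m grows;
# eg 3^5 = 243 = 0b11110011 is still lopsided at 6/8 = 0.75
samples = {m: density(3 ** m) for m in (5, 50, 500)}
```

nothing rigorous ofc, just a quick empirical sanity check of the convergence tendency.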
(1/8) along those same lines needed to review/ “recollect” some of those prior experiments, forgot some of the details. these are all (inter)related/ similar in theme, involving the 3-power sequence, Terras-generated iterates, and random ½-density iterates, poking at similar areas. (and when was the earliest that density of the 3-power sequence was studied, was it here?)
- Terras density generated
- ½-density/ Terras 3^n match ratio
- Terras density 3^n
construct42, construct43, construct45 9/2019
there was a lot of excitement over these signals at the time, didnt really fully follow up on all of them… aka “leads”… need to try to synthesize all that together into some kind of overall story incl both ½-density + Terras generated…
(1/9) this is a basic experiment that again should have been done already and/ or a long time ago, so here it is, right on time, lol! this starts from varying density 100-bit starting “iterates” and determines their 3-power sequence density, higher density hotter, over 100 “iterations” ie up to 3^100. as already expected/ implied by the prior results wrt Terras density seeds, which have the tendency to vary in density in a way correlated to this, the 3-power sequence density clearly has a center density attracting tendency. putting this together with the idea of the collatz sequence as in a large sense sampling from this “bit set,” this therefore seems to explain a large part of the basic collatz dynamics. holy @#%& cow, apparently nobody noticed this until now…?! also, an early idea from years ago was looking at other computational sequences that are somehow closely related to the collatz sequence, and trying to leverage the correspondence towards a proof, and it turns out, it was hiding in plain sight—the 3-power sequence seems to be it. the Terras proofs also seem to have references to the 3-power sequence and maybe need to go nail that down more exactly also.
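the center-attracting tendency is easy to poke at in a few lines; a rough python sketch, with my own hypothetical seed_with_density helper (not the code behind the plot):

```python
import random

def density(n):
    """fraction of 1-bits in the binary representation of n."""
    b = bin(n)[2:]
    return b.count("1") / len(b)

def seed_with_density(nbits, d, rng):
    """random nbits-wide iterate with ~d fraction of 1-bits, msb forced to 1."""
    bits = ["1"] + ["1" if rng.random() < d else "0" for _ in range(nbits - 1)]
    return int("".join(bits), 2)

rng = random.Random(0)
drift = {}
for d0 in (0.1, 0.5, 0.9):
    x = seed_with_density(100, d0, rng)
    # after 100 multiplications by 3 the density is pulled toward ~1/2
    drift[d0] = (density(x), density(x * 3 ** 100))
```

each entry pairs starting density with the density after 100 “iterations”, showing the pull toward center density even for very skewed seeds.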
however, theres a flip side. this pov does not seem to overcome the “curse of undifferentiability” because (again) theres no differentiable feature here in the center density iterates. although the
construct40 experiment was able to overcome that for the drain region. but again remaining, the big question is about center-density iterates that have substantial nondrain (glide) behavior. how to explain those wrt 3-power sequence? it seems the basic conclusion as already sketched out must be that the collatz sampling operation is somehow “biased” wrt them… ❓ oh, but wait… again, a WDITTSI “why didnt I think of that sooner idea”, need to look for bias in the 3-power sequence of glides (seeds)! ok, the answer is already known for Terras density generated seeds via the 9/2019 experiments just cited, its a strong signal, but what about “arbitrary” other contrived glides via all the known algorithms? whats the story there? makes sense to look at the trajectory database again, an easy/ WDITTSI exercise, but at this point, theres so much signal here that it already makes me want to look into optimizations that try to increase the disparity between the 3-power sequence and trajectories, because from multiple or even many angles it seems to be rather tight.
(1/10) this is both a confirmation and something of a disappointment, but maybe something like/ very close to this was already tried/ noticed (eg similar to some match ratio experiments nearby the others just mentioned,
bitwise41, bitwise42b 8/2019). this loads the trajectory database, finds longest climbs by each of the 8 methods, and then looks at 3-power sequence density ‘d3’ red right axis, trajectory track ‘y1’ green, and a ‘d3’ parity controlled random walk ‘y3’ blue. there is both signal and noise here so to speak. the signal is that the methods are clearly biased toward creating seeds starting with farther from center density 3-power sequences even with some large initial spikes and the spreads tend to decline/ narrow. also notably the last method has uniquely tighter spread over a long range. however (and do recall trying/ noting this before) theres not a lot of clearly discernable correlation of this signal with the climb or descent—but maybe this is overthinking it, and the emphasis on rough correlation with trajectory length should be regarded as very significant, given how rare and sought-after such a signal is. elsewhere ‘y3’ blue seems to contain some signal and/ or bias other than random, eg it trends upward through 2 glide methods, middle, but any deeper pattern is not immediately obvious.
(1/14) 😳 😥 😡 sigh, argh… thought this was all building to something in particular… reality again intrudes… earlier match ratio experiments ie between density parity of the 3-power sequence and collatz sequences had high signal and that got my hopes up. it seems like it surely would be exploitable. other experiments contribute to this. but then theres an immediate next step that breaks down. need to go look back, but my memory is that for some types of seeds, maybe mostly Terras density generated, there is substantial or high correlation between the iterate density and the density parity, do recall several experiments along those lines. but some quick experiments building further on the hard seed database, like the immediate prior one, dash those hopes rather soundly. maybe there were some hints of this earlier. and am now suspecting/ thinking maybe its a (surprising! unexpected! disruptive!) even fundamental difference between the seed database and Terras-density generated iterates. in short, for the seeds in the database, there is an edge in the correlation between the density parity and the iterate density, but its unfortunately very thin/ ie in 2 words, really weak, therefore seemingly/ presumably nonexploitable (but ofc its now a very longstanding open question around here about the degree that correlations can be turned into proof mechanisms) … anyway once again, grasping around for
straws/ a Plan B… ❓
(1/15) 💡 my new idea is roughly as follows. the match ratio is a powerful correlation/ connection. so it ties the density of the collatz sequence to the 3-power sequence, serving maybe as a proof mechanism or element. this ties the two density parities/ and therefore densities of each side. but as just pointed out the density of iterates of the collatz sequence is very loosely tied to the collatz sequence parity, at least in general. the density of the iterate is computed over all the bits. is there anything else about the collatz iterate “structure” aka bit patterns that could be tied to its parity? immediately apparent, the question must be interpreted in some more narrow sense eg in the sense of iterates in sequences, because otherwise its a meaningless question, because any binary pattern is associated with odd/ even iterates. in other words, subtly, but probably a theme/ aspect already uncovered in many experiments, iterates in sequences are not exactly the same as iterates chosen at random. suitably stated, amazingly, this question does not exactly seem to have been directly studied previously. maybe there are some experiments in all the piles that could relate to it, but cant think of any at the moment.
so did some hunting and it was seeming like nothing was & could possibly turn up, it was just one big frothing ocean of undifferentiability so to speak. finally hit on the following idea that actually turned something up. many of the null results were related to looking at 0/1 parity iterates pulled out sequentially from the entire sequence. but then started to wonder if there could be something “special” about 0/1 iterate locations in the sequence. this following code selects only iterates that come in 1-0 pairs out of the sequence (starting iterates are ½ density ie sequences are drains). amazingly, it seems to (literally!) make a difference. without this selection its otherwise a null result, at least in the sense the red-leading-green difference is not consistently reproducible (its green-leading-red instead with roughly equal probability). with the selection, there is a very thin )( consistent/ repeatable statistical difference detectable/ measurable between the 1/0 sets as seen in the diagram. the convert subroutine logic last seen months ago on 10/2019
construct79 is reutilized to process the 1-runs of the concatenated binary iterates starting from 200-bit ½ density seeds. red line is the 1/ odd and green line is the 0/ even curve. honestly, its disappointingly scant, but given the challenge/ difficulty here, its like the old expressions-maybe-nearly-cliches, not much to work with, but better than nothing…
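for concreteness heres a bare-bones python sketch of the selection + 1-run bookkeeping (my own simplified reconstruction, not the construct79 logic; note that with the plain unshortcut 3n+1 map every odd iterate is immediately followed by an even one, so under this convention the 1-0 pair filter catches every odd iterate, whereas the original presumably works over a compressed parity sequence where 1-0 pairs are nontrivial):

```python
def collatz_step(n):
    # unshortcut collatz map
    return 3 * n + 1 if n % 2 else n // 2

def trajectory(n):
    """full collatz trajectory from n down to 1."""
    seq = [n]
    while n > 1:
        n = collatz_step(n)
        seq.append(n)
    return seq

def parity_pairs(seq):
    """pick out iterates occurring in consecutive (odd, even) ie 1-0 pairs."""
    odds, evens = [], []
    for a, b in zip(seq, seq[1:]):
        if a % 2 == 1 and b % 2 == 0:
            odds.append(a)
            evens.append(b)
    return odds, evens

def one_runs(n):
    """lengths of maximal 1-runs in the binary representation of n."""
    return [len(r) for r in bin(n)[2:].split("0") if r]
```

the 1-run lengths of the two selected sets would then be histogrammed/ compared, which is where the thin red-vs-green difference shows up.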
(1/16) 💡 ❗ 😮 that route did not seem promising, so then what? … heres another mini-pivot that flashed into my brain. speaking of match ratios, theres a years-old idea in here that is related to the 3-power sequence that has a lot of potential to be something seriously exploitable. recall that one can look at the matching between msbs of the 3-power sequence and the corresponding collatz sequence and that it has, in a word, extraordinary predictive features. have not looked into that in a long time but it came back into my thinking again when considering the “bit statistics” and “bit structure” of the collatz binary iterate. the relationship explains a lot of the aspects of the 3-power sequence being investigated recently. and heres a quick return. dont recall if this was tried against all the algorithms in the seed database, it was tried against 1 or a few at least.
but, even though nearly a “rerun,” this looks at the 3-power msb matching over all 8 algorithms for the longest glides (green), and finds that basically the 3-power msb matching line perfectly discriminates between the glide and drain regions of each glide (red)! in the glide range it plateaus/ stabilizes to the length of the glide, and in the drain region, it tracks the drain. the full drain was better tracked in another plot; it was shortened in this plot for better visibility of the glides. these plots include the 200 postglide points. from one experiment to the next, this is like a shift from almost nothing to almost everything, one extreme to the other…! it looks like the long-sought general glide vs drain discriminator, already found years ago, not fully recognized/ pursued at the time! WDITTSI!
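the msb matching measurement can be sketched roughly as follows; this is my own minimal guess at the construction, not the original code (rationale: after k odd steps the iterate is ≈ x·3^k divided by a power of 2, and shifting doesnt change leading binary digits, so the msbs should agree until the accumulated +1 terms propagate leftward):

```python
def msb_match(a, b):
    """count of leading binary digits that agree between a and b."""
    sa, sb = bin(a)[2:], bin(b)[2:]
    m = 0
    for ca, cb in zip(sa, sb):
        if ca != cb:
            break
        m += 1
    return m

def match_profile(x):
    """msb agreement between each collatz iterate and x*3^k,
    where k counts the odd (3n+1) steps taken so far."""
    n, k, out = x, 0, []
    while n > 1:
        out.append(msb_match(n, x * 3 ** k))
        if n % 2:
            n = 3 * n + 1
            k += 1
        else:
            n //= 2
    return out
```

plotting match_profile over the longest glides would be the (hypothetical) analog of the plateau-then-track behavior described above.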
another clear point to make: this is a somehow “natural” separation of the msbs vs the lsbs of the collatz iterate and maybe explains better some of the distribution differences that are being seen. it makes me suspect that in the drain with the short lsb section, the 3-power sequence has more predictive power, whereas one would expect more divergence in sections with long lsb sections ie glides, ie thinking of this boundary as separating two distributions, and the larger the separated (lsb) section, the greater the potential difference in the distributions. do recall right now doing some series of experiments to look at lsb vs msb spans but dont recall at the moment how they were divided… iirc maybe it was glide bits vs remaining bits…? which would likely tend to be close to these distributions…
(1/20) some quick google searching on this blog turns up the related 7/2018 page/ experiments. there was some density study of the “glide bits” (lsb bits of iterates “beyond” the starting iterate bit width) versus the “base bits,” the remaining msb bits, constant size over the span of the glide. as just/ long pointed out this boundary is very similar to the 3-power msb vs lsb separation just looked at, but here the 3-power series is not utilized, instead its just the straightfwd “glide bits vs base bits.” on a hunch, looked into the match ratio over the glide bits versus the base bits, over the glide range only. not surprisingly its consistently higher for the glide bits. the glide bits contain the sequence parity bit and are maybe “not so/ as many” compared to the base bits count. the green line ‘rg’ is the glide bits match ratio, the blue line ‘rb’ is the base bits match ratio, and the magenta line ‘r’ is the match ratio over the entire iterate bits, all left side axis. the interrelationship/ trend is generally/ mostly quite consistent,
rg > r > rb after the initial small-set/ sample noise/ high-divergence ranges of each glide. glide offset is graphed red right side axis. glides shorter than 50 are excluded which then removes/ omits the “last” (shortest by glide length ordering) 2/8 glide generator methods.
(1/26) 💡 some new ideas. was thinking about statistical analysis of the postdetermined glide. it has been poked at quite a bit now. it can be seen as a mostly drain-like random walk. no ufos have been found in it so far. it is thought so far that ufos are confined to what might be called “postdetermined postglide” based on some found there via searches and artificially constructed also via backtracking methods. the postdetermined glide has other related properties such as short 0/1 runs in the parity sequence and short nonmonotone runs. all these statistics are interrelated in ways that havent been completely nailed down, ie they all tend to imply each other. as outlined in all the prior analysis, proving some kind of drain-like property of the postdetermined glide is nearly the entire open problem. and it seems to be one of the more orderly/ less random/ less unconstrained elements of the overall problem found.
an idea arose that maybe the postdetermined glide can be broken into “words”. the words would be binary parity sequences of varying length. then, what are the words? another idea came to me that maybe there is a concept of “prime words” or words that do not contain other words as (concatenated) subwords. did not immediately see how to carry this out. because in a sense wrt this definition ‘0’, ‘1’ are both prime and then the whole sequence is trivially a concatenation of 1-length-words. but some other slight variant of the definition might lead to something more meaningful/ nontrivial. (vaguely recall grammar compression experiments from years ago that might relate to this idea and have similar concepts.) was thinking someone might have already outlined something like this, did a google search, a sometimes fun/ enlightening exercise, but didnt turn up a close match.
however that line of thinking did lead to this interesting experiment. suppose one wants to find all n-length words or substrings. there are some brute force ways to find that. another more sophisticated approach is as follows. start with pointers to every location in the string, denoting 0-length strings. then build on these substrings by finding all locations that have the same prefixes, one character at a time. this is similar to a histogram analysis of substrings.
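a minimal python sketch of that pointer/ prefix-extension census (names mine; the real code presumably streams this to avoid holding everything in memory):

```python
from collections import defaultdict

def substring_census(s, maxlen):
    """distinct-substring counts by length, extending prefixes one char at a time.

    starts with pointers to every location (0-length strings), then repeatedly
    groups locations sharing the same prefix extended by one more character."""
    groups = {"": list(range(len(s) + 1))}  # prefix -> start positions
    counts = []
    for length in range(1, maxlen + 1):
        nxt = defaultdict(list)
        for prefix, starts in groups.items():
            for i in starts:
                j = i + length - 1  # index of the extending character
                if j < len(s):
                    nxt[prefix + s[j]].append(i)
        groups = nxt
        counts.append(len(groups))  # # of distinct substrings of this length
    return counts
```

run over a long parity sequence, counts would rise and then decline past the peak width, matching the histogram analysis described below.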
this code also creates the parity sequences from glides with an idea hinted at earlier. awhile back the glide “critical point” for Terras-density generated sequences was determined to be a predetermined density of 0.64. densities above that will tend to glide (in a roughly linearly increasing way) and those right at that level tend to go horizontally. ofc some will tend to terminate early due to the inherent early noise of the random walk but those are easily discarded, ie here discarding glides any smaller than the predetermined range.
was looking at basic statistics and then found a remarkable discovery. the count of substrings-by-length is expected to go up exponentially, but it was found that they hit a maximum and then decline (1st graph, log scale of string counts by length). this is both expected and unexpected. the random walk is not a pure random walk and is mean-returning. such a walk necessarily does not contain all possible binary substrings. but this analysis seems to be a special way to look at this limit. some of it is explained in that all the substrings tend to cluster near half density (2nd graph, sorted densities). the 2nd graph takes the string distribution found at the maximum in graph 1; it is easy to capture without holding all the intermediate calculations (which would require a lot of memory) because the maximum is “very well behaved” ie unimodal. the maximum shifts slightly between runs but here it was at width 16: 2^16=65536 possible strings of which only 1706 occurred, and the counts declined for all higher string lengths.
so in other words this also looks like a helpful way to analyze an arbitrary random walk tendency. again this like many other experiments tends to evoke comparison with the Hurst coefficient of random walks naturally measuring mean-returning tendency. and an immediate question, what kind of random walks give rise to these particular statistics? needs more attention/ investigation.
(1/27) earlier this month ~1/4 there was some extended musing about the base-2 size slopes wrt creating ufos in the postdetermined glide range (via backtracking), and the 45 degree angle. my new thinking is that all this is again off. there may be something to the 45 degree angle but the basic question is about creating ufos in the postdetermined glide. to do that via backtracking, the backtracking seed has to be smaller size than the starting seed, which was generally hard to find. so slope is related, but not in the same way outlined/ speculated previously.
just did a quick experiment looking at the same bit-word analysis over ½ density parity sequences. the histogram curve is nearly perfectly identical with only tiny deviations measurable. in other words the postdetermined glide seems to be basically ½ density parity sequences and the prior histogram curve is characteristic of them. again, proving something like this is nearly the entire open question.
(1/29) these experiments remind me of the series from 6/2019 that looked at postdetermined glide features, in particular
hybrid21 which found max 0/1 runs of about ~19 in the postdetermined glide across arbitrary bit sizes, there ~70-190.
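for reference a bare-bones version of the max 0/1 run measurement (python, my own sketch; hybrid21's exact conventions, eg shortcut vs unshortcut parity and the precise postdetermined cutoff, are not reproduced here):

```python
def parity_sequence(n):
    """0/1 parity of each collatz iterate, unshortcut 3n+1 map, down to 1."""
    out = []
    while n > 1:
        out.append(n % 2)
        n = 3 * n + 1 if n % 2 else n // 2
    return out

def max_run(bits):
    """longest run of identical symbols in a 0/1 list."""
    if not bits:
        return 0
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def postdetermined_max_run(n):
    """max 0/1 run past the seed bit width (a rough postdetermined cutoff)."""
    nw = n.bit_length()
    return max_run(parity_sequence(n)[nw:])
```

the hybrid21 finding would correspond (loosely, under these assumptions) to postdetermined_max_run staying around ~19 across arbitrary seed bit sizes.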
(1/30) this is a riff off the last
stepwise2 code that occurred to me. it has the prior trajectory expansion measures
c, cg, cm, cgnw, cgcm, cmnw, nw. then it occurred to me to try to maximize the 0/1 runs in the postdetermined glide parity sequence range and also the 0/1 runs/ scaled for the iterates
pmx01, mx01, mx01s in the same range. got it? its a lot to keep track of and leads to a busy graph with lots of overplotting, where trends arent too discernable without switching on/ off different variables/ associated lines; it takes advantage of the sheer, powerful, even nearly delightful ease of creating new parameters and multioptimizing in these code frameworks. but the results seem a little strange for at least 2 reasons. ‘mx01s’ does not get very small as seen in other experiments, but that is explained in that ‘nw’ bit width doesnt seem to climb much, although theres a very gradual trend, almost so gradual the 2nd ½ looks flat. it almost seems that some combination of the optimization parameters is pushing down/ suppressing growth of ‘nw’.