this is an idea that occurred to me, building on the prior theme, to look into bit patterns connected to the subsequences of the parity sequence. for a glide there is a predetermined and postdetermined parity sequence. this looks at ‘w’-width bit windows over the iterate sequences, here w=10, for a single long trajectory. for the climb it is clear that the lsbs (in the iterate/ window) tend toward 1, ie odd. what about adjacent bits? the 1st graph somewhat ingeniously shows they are indeed also tending toward 1s, ie the familiar 1-lsb runs/ triangles. the visualization is a tree diagram where the branches are 0/1 with 1 the upper and 0 the lower branch, and are easier to draw than one might guess; the code uses a “cumulative coordinate halving” idea. the left side is the climb/ predetermined range and the right side is the decline/ postdetermined range. color coding is by iterate # in the ranges. the 2nd graph is the windows over the msbs instead of the lsbs. close study shows some difference in the “ending” comb density from one side to the other with the left side a little denser than the right, implying some kind of different distribution. the color distribution seems fairly even/ uniform.
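the “cumulative coordinate halving” idea can be sketched roughly as follows (a hypothetical reconstruction, not the actual plotting code; `branch_y` is an invented name): each branch level adds or subtracts a vertical step that halves with depth, so every w-bit window lands on its own non-colliding y coordinate in (0,1).

```python
def branch_y(bits):
    """y coordinate of a tree leaf for a bit window; 1 branches up, 0 down.
    the step size halves at each depth so branches never collide."""
    y, dy = 0.5, 0.25
    for b in bits:
        y = y + dy if b else y - dy
        dy /= 2
    return y

# plotting the partial paths (depth, partial y) for many windows
# draws the whole 0/1 branch tree
```

each deeper bit contributes a smaller offset, which is exactly why the tree is “easier to draw than one might guess.”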

**(2/3)** have been having some ideas to use the Terras construction to study dynamics more. KT had some excellent ideas. in particular here is a notable/ meaningful construction that KT alluded to directly and ties in with other aspects of the Terras proof such as the logarithmic approximation. did not immediately grasp how to code this but then finally it occurred to me, dont know if his formulation is identical, would like to see that. it is possible to “synthesize” a nearly horizontal glide. the logic is revealing. this was probably done inadvertently in some earlier code eg 9/2018 `bitwise3b`

and here its more methodical. the idea is to treat the glide, like Terras did, as a multiplicative random walk where there are 2 factors corresponding to the (3*n*+1)/2 and div2 ops of the mapping. then looking at it in logarithmic terms its a “very close approximation” of an additive random walk with steps of +log(3/2) (for the (3*n*+1)/2 op) or -log(2) (for div2). a simple greedy algorithm balances the two and leads to the following code.
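a minimal sketch of the greedy balancing idea (assuming a simple “stay near zero” rule; the actual code may differ in details): take the up step +log(3/2) whenever the walk is below zero, else the down step -log(2).

```python
import math

UP, DOWN = math.log(3 / 2), math.log(2)

def greedy_parity(n):
    """greedily choose parities (1 = (3n+1)/2 step, 0 = div2 step) so the
    additive log-walk stays as close to horizontal as possible."""
    s, seq = 0.0, []
    for _ in range(n):
        if s < 0:
            s += UP
            seq.append(1)
        else:
            s -= DOWN
            seq.append(0)
    return seq, s

seq, s = greedy_parity(1000)
# the 1-density of the generated parity sequence approaches log(2)/log(3)
# and the walk never strays more than one step from zero
```

the resulting 0/1 sequence is the parity sequence of the synthesized near-horizontal glide; because log(2)/log(3/2) is irrational the up/ down interleaving never repeats, matching the observed “irregular/ nonrepeating fractal pattern.”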

1st graph is the straight logarithmic walk vs the collatz-synthesized walk, normalized by starting iterate sizes, and they are almost perfectly aligned. it looks regular on quick glance but its apparently an irregular/ nonrepeating fractal pattern. 2nd graph is the difference between them and this is due to the “small” +1 additive factor on the (3*n*+1)/2 op. 3rd graph is the bit width of the iterates and again its a nonrepeating pattern that looks regular.

this is a quick yet revealing riff. the prior code balances on a “razor edge” where the density is balanced but in a very finetuned way, ie both locally + globally. another idea is to look at the average density of the generated sequence and then create a parity sequence with that same density. this has the same global value but is locally more divergent. this is a special “transition point” in the parity density where the trajectory is most likely to run sideways, higher densities lead to climbs and lower densities lead to falls. that was seen in the 10/2018 `121` experiment. the “small” difference illustrates a subtlety of the different methods. the density transition point *(note, of the parity sequence)* turns out to be measured at 0.64. the next obvious step is to look at interrelations to iterate density.

how to compute/ derive the parity transition point? the up increment is log(3/2) = log(3) – log(2) ≈ 0.41. and the down decrement is log(2) ≈ 0.69. then the ratio of up increments to down decrements needs to be *r* = log(2) / log(3/2) ≈ 1.71. this occurs if the (parity) density is *r* / (*r* + 1) = log(2) / log(3) ≈ 0.63, consistent with the ~0.64 measured. that formula “normalizes” the down decrement length to 1 and then computes the relative up increments to total length.
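the arithmetic in a few lines, just as a check of the formula above:

```python
import math

# ratio of up steps needed per down step so the log-walk is flat
r = math.log(2) / math.log(3 / 2)   # ~ 1.71
# parity density at the flat-glide transition point
d = r / (r + 1)
# algebraically r/(r+1) = log(2)/(log(2)+log(3/2)) = log(2)/log(3)
```

so the transition density is exactly log(2)/log(3) ≈ 0.631.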

❓ ❗ further look: not surprisingly the starting iterates are close to ½ density and there are not clearly discernible differences in density or entropy of the iterates in the trajectories (until typically the ends). this is one of the big question marks of this research. ½ density starting iterates can have almost arbitrary behavior: climb, descent, sideways, for arbitrary lengths, but chosen at random, tend to be declining. it seems to relate to the fractal/ self similar nature of the problem. just as mini-mandelbrot turtles can be found in the mandelbrot set “all over the place”, in a reminiscent way different glides of arbitrary shapes/ sizes can be found in the ½ density region. at least the Terras lemma is a powerful way/ tool to study the phenomenon. but maybe the way to look at density/ entropy is that in analyzing the problem they both have a lot of power and powerlessness at the same time. this has been very hard to wrap my brain around and come to grips/ terms with.

**(2/5)** 💡 ❗ 😮 ⭐ this is some code further building on the Terras 1-1 mapping. it creates 50 100-width iterates with varying (parity sequence) density similar to the `121` experiment just cited but doing a 100 sample integration/ average and analyzes more statistics. the dotted statistics are entropy/ density on the right side axis and the others are on the left side axis. theres a lot of info here but the basic story is that the *mean* statistics are very flat with “almost no differentiating signal” except for at the extremes in parity density (left/ right edges). in other words they cannot distinguish much of the behavior of iterates in the ½ density core.
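for concreteness, heres a sketch of how the Terras 1-1 construction can generate an iterate realizing any prescribed parity sequence (my own reconstruction, not necessarily identical to the blog code): by the Terras lemma, flipping bit i of the seed flips the parity at exactly step i and no earlier step, so a greedy bit-by-bit pass works.

```python
import random

def T(n):
    # the compressed collatz map: (3n+1)/2 on odd, n/2 on even
    return (3 * n + 1) // 2 if n % 2 else n // 2

def terras_seed(parity):
    """unique residue x mod 2^k whose first k parities under T match parity"""
    x = 0
    for i, p in enumerate(parity):
        y = x
        for _ in range(i):
            y = T(y)
        if y % 2 != p:
            x += 1 << i   # flips the parity at step i, leaves steps < i alone
    return x

# generate an iterate whose first 100 parities have density ~0.8
random.seed(0)
want = [1 if random.random() < 0.8 else 0 for _ in range(100)]
x = terras_seed(want)
```

this is the 1-1 correspondence between k-bit residues and length-k parity sequences; varying the bernoulli density of `want` gives the “varying parity sequence density” ensembles.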

there are a few almost-exceptions: entropy ‘e’ red dotted, density ‘d’ green dotted, and max 1-run length ‘mx1’ lightblue have signal toward/ nearly reaching the core density, and avg 1-run length ‘a1’ blue actually seems to fit the bill by a very narrow edge/ sliver. it is gratifying to see these key old statistics again confirmed as substantial from an entirely new angle (one that, not long ago, CT insisted invalidated the entire paradigm…ironically his findings/ spirit live on esp in prior and this exercise!). the big question, is there some way to “amplify” these signals toward the density core? also another immediate idea coming to mind is to look at deviation in these statistics. ❓

something else to note: theres been a long challenge in finding “typical” or “representative” trajectories and ofc thats an extremely problematic concept with fractals and this problem. and recently took other stabs at it with the hybrid optimizer `hybrid3, hybrid4,` (12/2018) and `hybrid11` (1/2019). but the Terras construction combined with density specification of the parity sequence is in a new sense a way of generating “representative” or “evenly distributed” random walks. the Terras 1-1 construction has a lot of implications for how to consider representative trajectories/ glides.

⭐ ❗ 😮 🙄 *did you catch that?* ‘a1’ avg 1-run length deserves further notice. eerily, it looks like values less than 2 predict less than ½ parity sequence density and values larger than 2 predict larger than ½ density. this is a rediscovery of this threshold/ transition point from an entirely different angle. the 2-attractor was seen in some generated statistics but the threshold/ transition point connection to parity sequence density is new and bordering on eyepopping. heres a detail with a blue line at the ½ density. *striking emergent property!*
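the 2-attractor has a simple heuristic explanation if iterate bits are modeled as iid coin flips (a model assumption, not a claim about actual iterates): a 1-run then has geometric length with mean 1/(1-d), which equals 2 exactly at density d = ½, so a1 below/ above 2 tracks density below/ above ½. a quick monte carlo check:

```python
import random

def avg_one_run(bits):
    """average length of maximal runs of 1s"""
    runs, cur = [], 0
    for b in bits:
        if b:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    if cur:
        runs.append(cur)
    return sum(runs) / len(runs)

random.seed(2)
a1_half = avg_one_run([random.random() < 0.5 for _ in range(200000)])
a1_low  = avg_one_run([random.random() < 0.4 for _ in range(200000)])
a1_high = avg_one_run([random.random() < 0.6 for _ in range(200000)])
# expect roughly 1/(1-d): 2.0, 1.67, 2.5
```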

**(2/6)** the 1-run statistics (of the generated starting iterates) seemed to have more signal. was looking at the median 1-run length and it seemed to correlate fairly well with the parity sequence density, maybe the highest signal so far. then went in the direction of wondering about the 1-run distributions and then used this prior technique. it just sorts the 1-run lengths and then normalizes by count on x axis and max 1-run length on y axis.

was surprised to find this remarkable/ strange/ unexpected globs/ bunching/ threshold/ fragmentation effect and dont have an explanation right now except to note on cursory further look some of this is due to overplotting where there are a lot of overlapping “grooves.” this is 200 densities and 1000 length parity sequences colored by parity sequence density. the effect was not apparent for some lower parameters, showing some threshold effect and how substantial effects can be hidden in noise. feel like some kind of basic analysis of the distributions would shed light on this but cant think of what right now, maybe some new formulation… guess (related) histogram analysis is an obvious idea to try… ❓

**(2/8)** 💡 ❗ 😎 looking at 1-run histograms was inconclusive because they seem to be very noisy. however here is something very similar that builds on those last ideas, almost staring me in the face. sorting the 1-runs leads to the stairstep pattern and it makes sense to find the points on the stairstep and do linear interpolation between them. this is easier done than said so to speak because gnuplot does “linear interpolation” by default on line plotting. so this code has a new routine `convert` that finds the stairstep points basically by sorting the 1-runs and finding the transitions, and then converts those to coordinates. the coordinates are normalized by x, y width/ height. theres an integration over 100 separate samples of each density and the 1-runs are all combined into a single array.
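a guess at what `convert` might look like (hypothetical reconstruction; the actual routine may differ in details like endpoint handling):

```python
def convert(runs):
    """sort 1-run lengths, locate the stairstep transition points, and
    normalize x by total count and y by the max run length"""
    s = sorted(runs)
    pts = [(i, v) for i, v in enumerate(s) if i == 0 or v != s[i - 1]]
    n, mx = len(s), s[-1]
    return [(x / n, y / mx) for x, y in pts]
```

gnuplot then linearly interpolates between the returned points by default, which is what smooths the stairstep into the low-noise curves.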

this all leads to a very low-noise result, possibly one of the lowest noise/ smoothest seen. in a sense, *a work of art.* the density range 0-½ seemed more noisy so this is ½-1 range only. the display is revealing a shift in the distribution of 1-runs, a kind of “skewing/ squeezing/ squashing” effect as density increases. the prior bunching effect is beautifully adjusted, a real “x-ray” scope visualization into the heart of the problem. the bunching seemed to be related to a few integer #s of total count of 1-run lengths around a dozen. an immediate question easily answered, would a synthetic density gradation lead to the same results?

💡 ❗ pondering it more, the `construct5` diagram is really remarkable, it captures/ synthesizes many ideas. is it enough to create a “pre vs post determination” signal? *it would seem so!* havent sketched out this idea yet but basically *its possibly near to a proof formulation because it might provide the inductive structure/ dynamics/ mechanics such that the predetermined region inevitably transitions/ evolves into the postdetermined region!*

this next item is not yet remarked on, but its also remarkable. in the `construct5` diagram the left half is a region with ½ density and increasing entropy, ie leftward starting from center (*oops* the legend annoyingly overplots/ obscures some of this crucial trend, meant to center it).

*does that remind anyone of anything?* it took a little bit to find it again but it was a direct/ memorable exercise last year on 4/2018 `entropy` code where the same “gradation” was synthesized but starting instead from 0 entropy, the upper/ top ½ of the diagram would seem to correspond to ½ entropy and higher. the binary entropy aspect/ measurement was discovered “only” at beginning last year pondering the ordered vs disordered regions although the more general concept of entropy has been mentioned/ pondered for years in this blog. also the striking pyramid entropy-density figure `blend2` from just 2 mos ago 12/2018 continued on this theme of looking at the entropy-density gradations/ variation/ spectrum. also, maybe the presence of ½ density seems to take away order from the 1-runs distributions statistics somehow, maybe explaining why the last code was noisy on that region, not sure how to tie this all together/ picture all this yet.

💡 ❓ there is likely some simple relationships between all the known metrics even for randomly distributed binary strings eg between density, entropy, 1-runs, 1-run avg, max 1-runs, etc… and how do those relate to the Terras mapping? none of this is very hard to study… and developing these basic views/ tools is already proven highly valuable verging on crucial…

**(2/9)** 💡 some of the old seeming barriers persist. the latest conjecture about “pre vs postdetermined” regions should somehow apply to some kind of set constructions. how are they technically/ formally classified based on some kind of property? defining them in terms of collatz iterations is a technical definition but for purposes of a proof verges on circular. another challenge is the old “global vs local” dichotomy. the global properties are fairly well understood but need to find local properties that connect to the global ones. the `construct5` graph is very meaningful along these lines but requires more attention/ work. another aspect is “dichotomy vs gradations”. the pre vs postdetermined concept is a dichotomy, but for a proof, it seems one needs a property that measures “distance from one or the other or between the two” and then a proof would involve showing *convergence* in that property. this is the familiar long search for a monotonically declining indicator.

it is not exactly clear what to do next but then came up with this straightfwd idea. the `construct5` properties show that all the following would seem to be expected to narrow from pre to post determined: average 1-bit run length distance from the 2 attractor, entropy distance from ½, and density distance from ½. but these are very noisy to measure and there are tricky aspects. basically at both the beginning and end of trajectories they are/ can be more divergent and somehow only narrow in the “middle”. so how to handle that? one could look at just the glide which excludes the end but that still includes the start of the trajectory. one can try to exclude based on ordered vs disordered regions and there have been statistics/ features constructed for/ close to that but as recently found (the “UFO” effect) they are problematic for various/ misc reasons. another idea is to look at glide descent only, but its not yet so clear how that relates to the forefront concept/ study of pre vs postdetermined regions.
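the three candidate narrowing metrics can be written down directly for a single iterate bit string (a sketch; here ‘entropy’ is assumed to be the adjacent-bit transition fraction, which like density centers at ½ for random strings; the blogs metric may be defined differently):

```python
def one_runs(bits):
    runs, cur = [], 0
    for b in bits:
        if b:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    if cur:
        runs.append(cur)
    return runs

def narrowing(bits):
    """(density distance from 1/2, 'entropy' distance from 1/2,
    avg 1-run distance from the 2-attractor) for one iterate"""
    n = len(bits)
    d = sum(bits) / n
    e = sum(bits[i] != bits[i + 1] for i in range(n - 1)) / (n - 1)
    runs = one_runs(bits)
    a1 = sum(runs) / len(runs) if runs else 0.0
    return abs(d - 0.5), abs(e - 0.5), abs(a1 - 2)

# over the limbo range one would take eg the max of each distance,
# giving statistics in the spirit of 'mxda', 'mxea', 'mxa2'
```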

so after all that hit on this. the ‘cmnl’ region was helpful to study. its the region after the predetermined region where the trajectory is still increasing. its the questionable area. earlier it was found to be at least ~300 iterations and its not known if it could be larger. its special so call it the *limbo range*. following is the `bitwise` code with the postanalysis ripped out. 3 new statistics ‘mxda’, ‘mxea’, ‘mxa2’ are constructed as indicated over the limbo range.

optimizing over these 3 only tended to paint into a corner and the code eventually couldnt find nonempty limbo ranges at the end (‘cmnl’ going negative). optimizing over these plus all the trajectory length metrics leads to this result. these 3 new “narrowing” metrics (‘mxda’ dotted gray, ‘mxea’ dotted red) indeed narrow even when attempted to be pushed up by the algorithm. note ‘mxa2’ green is multiplied by 100. dotted lines are right side scale. also notably ‘cmnl’ tended to hover at very low values under ~100. in other words these metrics might be gradation signal possibilities for pre vs post determined. its a new pov on the old “density core” idea. *apparently the density and entropy cores narrow in the limbo range.* but maybe approaching a nonzero constant and not asymptotically approaching zero? notably/ interestingly/ remarkably this exact same constant apparently shows up in the `construct5` diagram where both ‘da’ and ‘ea’ seem to converge around 0.05 in the center region and its striking its apparently the same value for each. ❓

**(2/11)** already understood to be very substantial, the `construct5` dynamic could be something of a gamechanger. the key is about what might be called “self-reinforcing properties” ie recursive, inductive, or convergent in a sense. density is known to converge to the core region and this meshes with the `construct5` dynamic, but am not sure much about any further convergence once in the core. lets call this graph the “random walk map”.

the problem is to try to show that all trajectories move toward the center of this graph as the trajectory evolves, this is the hypothesized ½-parity sequence density “strange attractor” of the postdetermined region. this is sufficient but not necessary for trajectory terminations. the graph represents (down)slope-mean-returning glides in the center and slope-mean-diverging for the right and left extremes (“faster” upward or downward resp). trajectories terminate “faster” on the left side but in the proof structure it seems this acceleration is attenuated from pre- to post-determined. for iterate density, it flatlines for left extreme which means for all these initial iterates, they already or “instantly” start in the core density region.

a convergent property is one that converges toward something through each iteration and in doing so pushes the iterate density toward core density but also the parity sequence density to ½.

the random walk map might also be measuring other properties that are not necessarily convergent. for example am suspecting ‘da’ and ‘ea’ as associated with the 1st iterate maybe do not change much (disregarding their inherent noisiness) with subsequent iterates in trajectories. the `bitwise24` code is showing some squeezing of ‘mxda’ and ‘mxea’ but an important distinction is that this is not necessarily implying particular dynamics over individual trajectories, its measured as a global property (of/ within the limbo region) as trajectories get larger; ie ‘da’, ‘ea’ may not be “convergent” within a trajectory. so part of the key challenge is finding the/ any convergent properties which seem to *cause* the trajectory terminations.

**(2/15)** 💡 ❗ 😮 this is a riff off `construct5`. was interested in “evolution of density/ entropy in further iterations after initial found”. so in this sense theres a “depth” of points associated with each sample in the graph. the code is not so complex but it really took a long time to wrap my brain around how to write it. maybe a little burned out in that moment. its a depth (iteration count) 200 along with averaging over the 100 samples. also there was a glitch in that the code wasnt paying attn to early terminations and then was surprised at results. it found cycles but it was the 4, 2, 1, 4 cycle repeated at the end that messed up statistics. its not so easy to see due to overplotting but there is some surprising oscillation effect on the tail ends of the trends esp on left side for both density, entropy.

⭐ 😮 🙄 😎 ❗ ❤ this is some quick code to look at a left side iterate evolution, here 0.05 density on a 100 bit iterate. it made a little more sense to show the binary evolution left to right with lsb at bottom, ie a 45 degree flip from prior binary diagrams, and overplots the entropy and density on right side scale (note, the x-y ranges/ boundaries got mixed up/ swapped in the flip). these are some *really extraordinary/ wild* iterate patterns and combining a sort of correlated thrashing or sawtooth pattern in the entropy and density, stripe/ checkerboard patterns, lsb 0-triangles, and really can leave no remaining doubt whatsoever about the sometimes-amazing relevance of looking at binary diagrams. *just when you thought youd seen it all!*
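the binary diagram format just described (iterations left to right, lsb at bottom) is easy to reproduce; a hedged sketch of building the bit matrix plus the per-iterate density overlay:

```python
def T(n):
    # compressed collatz map: (3n+1)/2 on odd, n/2 on even
    return (3 * n + 1) // 2 if n % 2 else n // 2

def bit_columns(x, width, depth):
    """one column per iteration; row i is bit i, so the lsb sits at the
    bottom. also returns the per-iterate density over the display width."""
    cols, dens = [], []
    for _ in range(depth):
        bits = [(x >> i) & 1 for i in range(width)]
        cols.append(bits)
        dens.append(sum(bits) / width)
        x = T(x)
    return cols, dens

cols, dens = bit_columns(27, 5, 3)
```

feeding a low-density or high-density Terras-constructed seed into `bit_columns` and imaging the matrix gives the stripe/ checkerboard and lsb-triangle patterns described above.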

2nd graph is a 0.95 density evolution. *deja vu all over again*—the familiar 1-lsb triangles correlated with the climb. there is some major vindication here in prior analysis that found these structures already. it would appear from these two cases that 1-lsb triangles are the lynchpin and that 0-runs point toward decline bias and 1-runs point toward climb bias, and interestingly, related to the 1-runs, the density/ entropy converge/ narrow right at the glide max. however from other relatively recent experiments its known that apparently in some sense there seems to be no limit to climbs that have negligible 1-lsb runs.

**(2/16)** the Terras 1-1 density generator can be seen also as a glide generator. heres the standard statistics: peak index ‘cm’, glide length ‘cg’, and trajectory length ‘c’, along with the globally computed ones ‘cm1’ and ‘cg1’, which are indeed different; there is some thrashing between the two in the ~⅗-⅘ density range, and the non-global/ local metrics are plotted with the dashed lines. the stairstep effect in max index is quite notable, its exactly the bit width of the starting iterate 200 and maybe even suggests the recently studied (nondeclining) limbo region between bit distance and peak max is “atypical”. this actually gives strong further credence to the idea of the postdetermined slope-mean constrained random walk, as starting downward right at the pre/ postdetermined boundary.

⭐ ⭐ ⭐

😮 ❗ 😀 😎 ⭐ ❤ **there is some kind of deep culmination/ vindication/ even bordering on closure here.***it seems, the endgame is in sight.* this is now a simple/ basic visualization that builds on prior findings and could have done this sooner ie maybe 1st priority! have already visualized this mentally at this point but here a pictures really worth a thousand words. **this looks like a rosetta stone to the whole problem**. this is 50 trajectories displayed based on parity density as the coloring using again the Terras 1-1 density construction. *breathtaking!*

its a beautiful, striking revelation of the pre- vs post-determined boundary/ dynamics at the bit width distance 200. its exactly as expected and reveals the deeper structure and possibly even the overall proof outline as already sketched out. basically an iterate determines the random walk up to the bit distance and then theres a quick conversion to a slope-mean-returning random walk. however, on closer look there is some wrinkle/ exception in that the very low parity density trajectories actually have a higher slope bias in the (early/ short) postdetermined region. the top density iterate also seems to be an outlier/ biased away/ outward.

the `construct11` code range mixup/ flip not fixed was bugging/ nagging at me and those amazing evolutions deserve some further highlighting. its a 1 line *iota* )( of a code chg but feel unless saved somewhere in cyberspace it doesnt exist (the modern take on a tree falling in the forest). also have some weird slightly irrational aversion to revising blog mistakes, maybe partly as a way to be candid-verging-on-raw about the research process/ history. (aka *stream of consciousness* which acc. to my highschool english teacher was once a paradigm shift in english literature style, maybe captured in *catcher in the rye* which reminds me of modern blog style.) here it is fixed and a gallery of variations. the characteristic tail disorder is captured/ included with the corresponding noisy region of density/ entropy whereas before it was cut off, but the opaque upper left legend obscures a little of the edge. as remarked once awhile ago, to repeat, *its truly a wonder/ marvel the same algorithm is behind all these dynamics.*

⭐ ⭐ ⭐

❓ something to ponder, a deep question, something long dancing, slithering, or poking at the edges of my awareness. earlier it was mentioned that a key challenge seems to be separating pre- and post-determined sets via some kind of computable property, and that defining this in terms of collatz iterations *seems circular*. but maybe this is some kind of bias or failed head-stretching going on. thinking about this more carefully, it is apparently trivial that pre- vs post-determined iterate is a decidable/ computable property: just iterate *n* iterations and examine the trajectory, and determine how close it is to the slope-mean-returning post-determined region. (although its also possible to “construct” a post-determined looking slope in/ at the end of the predetermined region… thinking out loud, is that a problem?) so can a proof be constructed out of this “property”? maybe its a deceptive/ hopeless/ futile quest to find a “simpler” or “more local” indicator, there is a lot of suggestive statistics but nothing really substantial that seems to indicate “something else might be there”…

**(2/18)** 💡 😮 ❗ ⭐ 😎 ❤ *life is lived fwd but understood backward.* this is the `construct5d` code with a single line of code change to reveal a different format/ layout, ie same data plotted with different orientation, even dramatically clarifying some of the prior obscuring due to overplotting (ie in retrospect this could be readily merged with the prior code). the code computes a matrix of density or entropy averages indexed by iteration depth. just taking out the matrix transpose operation plots the average evolution of the iterates colored by generated parity density instead of iteration depth.

these graphs are further revealing. there is some breakthrough revelation here of underlying dynamics.

the basic idea is that the collatz function is like a sampling operator from a distribution where the bits of the iterate express the distribution. it samples the initial bit (lsb) of an iterate with some amount of (evolving) density and entropy.

*however, these general tendencies are not necessarily easily detectable in individual trajectories where density statistics are very noisy.* this following graph shows that roughly, the sampling operator is not obviously biased. it strongly correlates (ie probability of 0/1 measurement in the lsb) with the average entropy or density of the iterate. for example for the (hotter) climbing/ higher parity density trajectories, the overall iterate density is elevated throughout the climb and conversely, pushed down for the (darker) descents. its nearly a long initial plateau for most parity densities except the very hottest which is a roughly linear trend downward. there is some adjustment to that trend however in the later ~¼ part of the 200 iterate bit distance where average density/ entropy converge toward the core.

there are further ways to look at this and it would be very helpful to try to quantify computationally/ numerically whether there is any “bias” in this sampling operator. **this is also possibly nearly a further revelation of the overall proof structure.** it depends if this framework can encompass the entire dynamics of the problem without problematic “outliers/ exceptions” but the current signs are that its truly the *beating heart of the problem.*

**the “mechanism” of convergence apparently indeed turns out to be (somewhat ironically? aka *“what a long strange trip its been…”*) iterate density**. now in retrospect the difficulty in showing this over the years has been removing the noise/ “extraneous factors,” and other techniques that are probably/ actually in comparison “outliers”. one might say these new Terras construction + averaging techniques allow a powerful xray-like view that overcomes the inherent noisiness of the problem that somewhat capriciously masks/ veils its internal dynamics.

this is a revision/ reformulation that immediately answers the question of degree of sampling operator bias. (immediately on reading but not on writing.) basically a “local pointwise parity density” can be estimated over the iterates of the ensemble of trajectories emanating from each of the 50 starting parity densities. then the bias is the ratio of the iterate density divided by the pointwise parity density, here its graphed logarithmic scale due to the stretching otherwise. there is some systematic bias. parity density is biased high for low iterate densities and vice versa, biased low for high iterate densities. note the heatmap is inverted here by parity density. the story for entropy bias is a little more complicated but similar, there is a relationship but its apparently nonlinear and/ or discontinuous. the isolated downward hot spikes in the density plot are notable/ currently mysterious/ deserve further analysis.

it would be interesting to try to measure this same bias via other means eg using random starting iterates, its apparently fundamental but possibly a relatively simple exercise (at least as far as code complexity) thats been overlooked until now. but its maybe not trivial to construct either, because lsb sampling of a uniform density distribution will be unbiased by definition, have to think about it more. this bias seems to have something to do with the iterative/ sequence structure of trajectories. also from long prior analysis its known sometimes the density isnt uniform/ varies in the iterate from lsb side to msb side.
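as a baseline, the same bias ratio can be estimated with plain random starting iterates instead of the Terras ensembles (a rough sketch under simplifying assumptions: iterate density measured over the current bit length, parity density measured over a fixed window of steps):

```python
import random

def T(n):
    return (3 * n + 1) // 2 if n % 2 else n // 2

def density(x):
    return bin(x).count("1") / x.bit_length()

random.seed(7)
ratios = []
for _ in range(50):
    x = random.getrandbits(100) | (1 << 99) | 1   # random odd 100-bit start
    ds, ps = [], []
    for _ in range(80):
        ds.append(density(x))
        ps.append(x % 2)
        x = T(x)
    ratios.append((sum(ds) / len(ds)) / (sum(ps) / len(ps)))
bias = sum(ratios) / len(ratios)
# for unbiased random starts the ratio should hover near 1
```

if the systematic bias seen in the Terras ensembles is real, deviations from 1 should appear once the starts are stratified by iterate density instead of pooled.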

the smooth/ outlier trending lines are the extreme densities (lowest parity density trajectory in density diagram and highest density in entropy diagram). the measured bias is basically independent of the location in the predetermined region. *bottom line, more basic/ deep properties that likely reveal fundamental proof structure.*

this is a graph of the inverse value (expected parity density divided by iterate density) not logarithmic scaled, could this be more natural?

💡 ❗ further thought and comparison with prior graphs, this is showing the sampling operator acts in a sense as a nearly linear amplifier! it amplifies the signal of the current density core distance! in other words even for densities near the core, slight off-center bias above or below the center ½ density causes a large effect in the parity density distance (from center ½). notice the unamplified ratio ~1 is roughly corresponding to the center parity density ½.

**(2/20)** ❗ ⭐ 😎 😀 as already emphasized the `construct13` diagram is a very big deal. might as well call it the **rosetta diagram**. just cited it on Taos collatz page! *(forgot to say, the time is ripe!)* havent posted there in 4 years! currently have a total of maybe 10 thumbs down on my comments there over the years.

*lol, thanks guys!* friends in high *low* places! *lol oh well whatever* you dont get to my position without some cyberspace scars over the years, some rather deep, and learn early on its better not to take everything personally. nevertheless it all leads to the new philosophical question

*if somebody proves Collatz on a blog page in cyberspace, will it (ever?) make any noise?*

the rosetta diagram and the recent experiments are already substantially outlining/ building a major proof structure. *the same global-local problem/ theme touched and pointed out over the years here continues in this new context, but at this point its apparently mostly overcome/ conquered!* the theme for the prior few exercises is “integration (averaging)—*effectively targeted*—can be ones friend”. thats because its like a noise reducer, which is strong/ key medicine for this problem. this is a quick/ simple confirmation of that. its a more local calculation of the more global properties just measured.

this takes 200 100-bit width iterates of varying density and then looks at average density of the pre (red) vs postdetermined (green) regions, only over individual trajectories. for the low-parity-density range (left 50 points) there is no differentiation (but there is direct/ strong signal away from center) whereas for the high-parity-density range (right 50 points) theres a strong differentiating signal. hence as expected *the global characteristic of pre or postdetermined range can be extracted from more local trajectory statistics,* ie is recoverable from more local density, although some adjustment/ alternative will have to be made/ found for the low parity density region to fully generalize; some of the difficulty may be that its postdetermined region can be small. that deserves special notice/ repeating!

the global pre/ postdetermined region location can be extracted from (the very noisy) local iterate density/ core of individual trajectories and its subtle (non)bias toward the center!

(to be honest, think maybe this was uncovered in different context(s) in some earlier experiment(s), have to try to go dig them up.) anyway this is impressive wrt old experiments over the years that showed a lot of noise in this (iterate density) statistic and the idea of hidden bias was never really evident in the past, it would have seemed too radical. oh, and it still needs some more sharpening, because it was already known that with various climb algorithms there were 1-lsb runs in the ordered region (thereby affecting iterate density), and maybe these Terras-generated glides have significant leading ordered regions in there, and dont know how much this affects the overall average, have to analyze all this further (eg are the high density measurements in predetermined region for the high-density parity sequence guaranteed to indicate mostly ordered region? think so, dont see why not). in other words it will probably be important for the proof structure to show the signal is still detectable in the later disordered region of the predetermined range and that could be more difficult/ elusive… it depends on how much of this current calculation predetermined range is ordered vs disordered. if the leading ordered part is not large, it will not affect results much.

in retrospect, in view of recent experiments, it is nearly obvious; the integration experiments almost prove the bias has to be there (it would be a very interesting exercise to contrive/ explain a scenario where the integration statistics have a signal but do not successfully uncover a local property; maybe have seen something like this in another experiment but would have trouble pinpointing it). bottom line, in arbitrage circles (which really are not so foreign from this work/ project!) this would be known as *“a slight edge”* and as denizens there know—one of them named *Simons*—its sometimes *worth a lot.*

this idea can be taken further. let me give away most of the idea already: the collatz problem is equivalent to studying large *n*-block *multi-iterations* where *n* is just the iteration count within a “batch”; iirc there were some experiments posted along these lines a long time ago. already suspect this could be most of the proof structure right there… precocious readers will *“get my drift”*.
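the n-block multi-iteration view is easy to phrase operationally: fix a block length, apply the map that many times, and track a per-block statistic such as average iterate bit density. a rough sketch of what such a batch statistic could look like (the names and the exact density measure here are my assumptions, not the posted code):

```python
def T(n):
    # the compressed Collatz map: (3n+1)/2 on odd, n/2 on even
    return (3 * n + 1) // 2 if n % 2 else n // 2

def density(n):
    # fraction of 1-bits in the binary representation of the iterate
    return bin(n).count("1") / n.bit_length() if n > 0 else 0.0

def block_densities(n, block=100, blocks=5):
    # average iterate density over successive fixed-size multi-iteration blocks
    out = []
    for _ in range(blocks):
        ds = []
        for _ in range(block):
            ds.append(density(n))
            n = T(n)
        out.append(sum(ds) / block)
    return out
```

an inductive proof structure would want some monotone-ish decline in this statistic from block to block, which is essentially what the 2/21 windows experiment below probes.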

**(2/21)** 😳 this code quickly tests and somewhat dashes/ refutes some of those hopes, so that they look *“too easy to be true”* or, in the words of Einstein in a letter on Bohms theory of quantum mechanics, *“too cheap”.* the idea is to look at two adjacent n-blocks of multi-iterations and compare the average iterate densities over each, hopefully showing a decline, without much concern otherwise. this is like prior experiment `construct16`

except that the 2nd window, closer to the postdetermined region than the 1st, is fixed size and not variable, and also like `construct15`

experiment in looking at average iterate densities.

the code/ graph is as follows, not esp complex compared to prior code but not simple to describe either. the code takes 200 trajectories in the 0.64-1.0 parity density range. the 0.64 density is found earlier as the “flat glide” transition point wrt Terras generation; anything less is basically not a glide and anything more is a longer/ steeper glide. then it looks at 2 adjacent windows left/ right of fixed 100 iterate blocks and the average iterate density relation between them. it loops over starting bit widths of size 100..250 in increments of 25; in other words, intentionally the left window is entirely in the starting predetermined region and the right window increasingly covers/ shifts into the remaining predetermined region instead of the postdetermined (a visualization/ diagram of this window placement would help…). the x axis is initial iterate parity density within the integer test runs. the difference of average iterate densities of the post minus the pre window is graphed in green if negative and red if positive. the green shows the desired decrease for an inductive proof structure. but the red becomes more predominant for larger bit width seeds over the 11 test runs left-to-right.
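as an illustration of the window comparison (not the actual posted code; the Terras seed builder, window sizes, and density measure are my assumptions), the left/ right block statistic might be sketched like:

```python
import random

def T(n):
    # the compressed Collatz map: (3n+1)/2 on odd, n/2 on even
    return (3 * n + 1) // 2 if n % 2 else n // 2

def terras_seed(parity):
    # greedy Terras construction: realize a desired parity prefix mod 2^k
    n = 0
    for i, b in enumerate(parity):
        m = n
        for _ in range(i):
            m = T(m)
        if m % 2 != b:
            n += 1 << i
    return n

def density(n):
    # fraction of 1-bits in the binary representation of the iterate
    return bin(n).count("1") / n.bit_length() if n > 0 else 0.0

def window_density_diff(width, p, block=100, rng=None):
    # sample a width-bit parity vector with parity density p, build the seed,
    # then compare average iterate density over two adjacent 'block'-iterate
    # windows; a negative return is the decrease an inductive argument wants
    rng = rng or random.Random()
    parity = [1 if rng.random() < p else 0 for _ in range(width)]
    n = terras_seed(parity)
    traj = []
    for _ in range(2 * block):
        traj.append(density(n))
        n = T(n)
    left = sum(traj[:block]) / block
    right = sum(traj[block:]) / block
    return right - left
```

graphing this difference over seeds of increasing starting width is the red/ green picture described above.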

in other words the general idea is not rejected and valid “to some degree”—ie when the window size is nearer to the starting bit size as on the left, and this continues the trick/ theme of pulling a “*rabbit* signal” out of what was previously thought to be “*hat* noise”—but a “mere” fixed window/ multi-iteration block size is quickly ruled out and (apparently) “not gonna cut it” as starting iterates grow without bound, the trend moving right. this also relates to “insufficient mixing”. in short larger iterates require larger mixing as measured by iteration counts to achieve the same density decreases, and some more sophisticated induction beyond merely looking at multi-iteration blocks is required.

💡 further thinking: just realized the slope-mean-returning property may have been discovered in at least one different context previously but its implication/ significance was partly unrecognized at the time. have to go back and find them/ hunt them down; there were some experiments with siblings that showed their postdetermined descents were somehow “tied together” in various ways, eg their descents only differing in bit widths under some bound, before the concept of the postdetermined region was better understood. the postdetermined idea was implicit in the earlier idea that two “top” siblings only differ in 1-2 msbs. and maybe this has a key role in some kind of an induction framework… a distinction to think about: as observed, two random walks could be “tied together,” which enforces a kind of order, although their “tied path” could be still quite random; but if they are slope-mean-returning, that is an additional level of order, previously missed at the time. am now starting to see some ideas/ hints of some kind of “stochastic calculus” dynamics/ framework/ more general theory…