the ufos found in the postdetermined region earlier have been on my mind, and they seem to be aptly named because they are mysterious. its a strange emergent pattern that doesnt seem to fit into a lot of other analysis. most of the findings/ “momentum” of analysis are that the 0/1-runs tend to crunch in most situations. the ufos found were not numerous so far, and the few found took a lot of processing to discover. also as mentioned there may be some flickering circumstantial evidence that they could be limited in some way, eg maybe to lower iterates, although a lot of other experience with the problem would tend to push against that, ie nearly any pattern seen at small scales seems to be seen at larger ones. actually even more than that, that idea/ general theme is a big part of the underlying motivating ideology of computational investigation techniques.
an idea with the last exercise was hopefully to be able to synthesize ufo type patterns. but some of the findings of that experiment were that after picking a local ufo pattern at random it seems to be hard to create a long prior sequence to it, although it also suggested thats not limited to ufos. that code could typically only find pre-iterations in small counts. it was searching for special pre-iterations that are smaller than the final/ target pattern iteration bit width, which is maybe not the same as ufos in the postdetermined region, but maybe the same difficulty still holds; it would be nice to better understand those interrelationships. “further TBD”
this is some more straightfwd code that uses the nice convert routine from
construct9c more descriptively renamed
linearize. it uses Terras ¾ density generation and looks at 1-runs on 1st postdetermined iterate. 100 runs and then filters by 3-runs and longer and sorts/ calls the
linearize formatting, for initial bit widths 50..500 in increments of 50. there is some similar sampling on the prior month
construct5e graph. this is further indication of the basic aspect of 1-run lengths being related to mixing. in the 1st graph, even as the initial bit width goes up by 10x there is very little change in the max 1-run lengths, which edge up only very slightly. another way of looking at this is at scaled 1-runs, ie length divided by (postdetermined) iterate size, in the 2nd graph. there is actually a decline (“crunchlike”) in this quantity for larger starting iterates; maybe this can be seen in some other experiments, recall suspecting something like it, it seems vaguely familiar but would be hard to pinpoint quickly. there is some additional computation to look at the postdetermined iterate size: its typically about ~1.2x the starting iterate size with very little variance for each bit size.
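fwiw the core measurement here can be sketched minimally in python (a hypothetical stand-in, not the actual code above; a plain random ½ density seed stands in for the Terras ¾ density generation, and `postdetermined_one_run` is an invented name):

```python
import random

def T(n):
    """semicompressed collatz map: n/2 if even, (3n+1)/2 if odd"""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def max_one_run(n):
    """longest run of 1-bits in the binary expansion of n"""
    return max((len(r) for r in bin(n)[2:].split('0')), default=0)

def postdetermined_one_run(width, rng=random):
    # random ~1/2 density seed of the given bit width; a stand-in for the
    # Terras 3/4 density generation used in the actual experiment
    n = (1 << (width - 1)) | rng.getrandbits(width - 1)
    for _ in range(width):  # 1st ~width parities are predetermined by the seed
        n = T(n)
    # max 1-run and scaled 1-run (divided by iterate size) on the
    # 1st postdetermined iterate
    return max_one_run(n), max_one_run(n) / n.bit_length()
```

the scaled quantity is the 2nd return value, matching the 2nd graph described above.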
none of this is unexpected but its hard to fit/ mesh this with the idea of large UFOs showing up in the postdetermined region. can they ever show up after Terras density generation? it seems unlikely from these results; they must be extremely rare wrt that, if even possible at all. at the moment am a little perplexed but seems like maybe building more sophisticated UFO construction algorithms might be the way to go…
thinking further again maybe the UFOs are a red herring. keeping the eye on the prize, what is really needed is a solid way of showing the “drain set” of all integers is a slope-mean-returning random walk. the UFOs are involved in that, but the real key is to rule out unusually long 0/1 runs in the lsb section. have been thinking about the idea that maybe UFOs exist but somehow “always avoid touching” the lsb region. that indeed seems to be the case from known observations/ experiments. another possibility again is that UFOs are never more than a certain size, ie are somehow bounded and associated only with “smaller” iterates. wishful thinking?
(3/8) have been having this other idea about how to search for ufos by working backwards “reverse order” from the ending iterate 1. this code is a 1st cut on the general idea. it attempts to maximize the max 1-run found, the sequence length, and the density distance but not using the variable optimization combination logic which maybe inhibits its optimization strength some. it limits parity 0/1 runs to 20 max as found in earlier experiments associated with the drain. after 50k iterations and a 2k buffer size it finds a max 1-run of 14 at position 518 in a 1031 length sequence. it looks like the code is “moving past” the largest 1-run by extending the reverse sequence farther but not increasing the 1-run size. thinking how to respond to that… it seems the code maybe needs to track/ branch out from the sequence points with largest 1-runs… but its reverse-direction options are apparently relatively limited…
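the reverse step underlying this kind of search can be sketched as follows (a hedged python stand-in, not the actual code; `preimages`/ `extend` are invented names, and the semicompressed map is assumed as elsewhere):

```python
def max_one_run(n):
    """longest run of 1-bits in binary of n"""
    return max((len(r) for r in bin(n)[2:].split('0')), default=0)

def preimages(n):
    """predecessors of n under T(x) = x/2 (x even), (3x+1)/2 (x odd)"""
    pre = [2 * n]                   # even branch: T(2n) = n always holds
    if (2 * n - 1) % 3 == 0:        # odd branch: solve (3x+1)/2 = n for x
        x = (2 * n - 1) // 3
        if x % 2 == 1:              # x must actually be odd to take that branch
            pre.append(x)
    return pre

def extend(frontier, score=lambda seq: max_one_run(seq[-1]), keep=2000):
    """one greedy step: expand every reverse sequence by its preimages and
    keep the best scorers (a small buffer, like the 2k buffer in the text)"""
    nxt = [seq + [p] for seq in frontier for p in preimages(seq[-1])]
    return sorted(nxt, key=score, reverse=True)[:keep]
```

the “relatively limited reverse-direction options” show up directly here: each iterate has at most 2 predecessors, and only one when (2n-1) is not divisible by 3.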
this is a little reformulation of some of the prior code. it creates 1-runs in 100 width ½ density covering 30-40% and then backtracks with parity sequences that have less than 20 length (consecutive) 0-runs and maximizing sequence start iterate bit width. it turns out to be effective/ powerful in comparison to prior ideas, using simple restart logic it finds arbitrarily long trajectories. it seems to show/ prove that ufos can be constructed roughly arbitrarily in the drain region as somewhat suspected. the code is not intentionally/ directly pointed to find higher prior iterates but its a consistent emergent property. the graph is 500 iterations running averages of the longest sequence lengths for the 1-run end iterates red compared with ½ density end iterates green. it consistently finds a small higher edge in the sequence size for the 1-run iterates similarly to a prior experiment.
to review/ recap, the general idea behind ufo study is something like the following. qualitatively the crunch region is seen to be associated with the drain but both of these concepts need to be quantified further/ technically and then it becomes slippery-to-problematic. trying to quantify the crunch region tends to lead to accessible, typical binary features of iterates, and the ufos are outliers wrt those features (full disclosure, shades of drunk searching under streetlight for keys). apparently some/ any characterization of the drain must somehow include and not be “thwarted” by ufos. the drain apparently cannot be characterized as “non-ufo occurring/ containing”. on the other hand its a familiar story. binary features of the iterates have yielded a lot of signal/ insight but again seem to fall short of cracking open the full solution due to problematic exceptions/ outliers.
this code puts the nail in the coffin a bit further so to speak. one might wonder “how deep” into the postdetermined region a ufo could appear. an earlier intuition was that there is high mixing at the beginning of the postdetermined region, and there are many experiments confirming variations on that statistically/ empirically… “and so therefore” it would seem ufos would not appear in the postdetermined region, and this dashes that (although the basic idea was already refuted for early in the postdetermined region, whereas this pushes it “deeper”). this runs the optimizer over a series of variable width starting ufos of size 45-55% of the initial (or final) iterate. the optimizer linearly finds larger ratios of final index divided by sequence initial bit width, ‘mxi’, for the relative location of the ufo, over ending iterate widths 100-1000 in increments of 100. the slopes vary and decline for larger widths but are clearly linear. in short, apparently ufos can be found “arbitrarily deep” in the postdetermined region assuming the optimization trend holds indefinitely, which it seems to. this further pushes the remarkable aspect of “spontaneous emergence” and shows that absence of ufos is not a necessary criterion for “mixing”.
this code specifies an arbitrary ratio ‘r’ to end the optimization at and a limit on the 0/1 runs in the parity sequence ‘mxp’, where the prior code limited 0-runs but allowed arbitrarily long 1-runs. this limits each to a small count of 4 and still finds arbitrarily long pre-sequences to a ~50% bit width (45%-55%) ufo, again using restart logic. it has some new careful counting logic for relating the parity sequence positions to the semicompressed collatz mappings in both reverse/ fwd directions. this graphs the result using grid layout, lsb on bottom, and ‘e’/ ‘d’ entropy/ density metrics red, green; on 2nd thought it would be nice to also graph max 0/1-runs. heres a run for r=2.0 starting from a 100 width end seed, note the red/ green “1-2” spike at the very end (entropy spikes on the long 01 repetition on the 2nd to last iterate and density spikes on the final iterate with the long 1-run). this code is not very sophisticated, its basically just the familiar pattern of a greedy search with a semirandom frontier very similar to the
bitwise code/ pattern, and its somewhat remarkable its so unconstrained in its findings/ results.
a concept to think about here is “diversity/ range/ array of prior states.” on these reverse-order backtracking searches, there seems to be either a lot or almost no prior states to search at each iteration (later aka “paint into corner”). in the 2nd case the trajectory is discarded and restart is triggered, and its apparently not hard to find new iterates that have arbitrarily long spans of many prior states.
(3/10) was looking at prior diagram and then thinking about the lower line. due to the generation algorithm limiting 0/1 runs in the parity sequence the entropy/ density of the lowest “lsb crosssection” is distinctive. and then was wondering about some kind of gradient up or down. measuring it, its quickly dissipated moving upward. but then was just curious about density/ entropy starting from the msb crosssection. there is a msb on each iterate, and a slice or crosssection of them has a defined density or entropy. it turns out the msb density/ entropy crosssections have some distinctive shape. this is a distinctive global calculation. there has been various analysis of msb vs lsb differences over the years and more recently and this fits into/ aligns/ meshes with those findings.
another idea is the following observation. as discovered early on but not fully appreciated then, most integers chosen at random with ½ density are either in the drain set or quickly iterate into the drain set in a few steps. this means the drain set is very easy to study statistically because it requires essentially no (complex) construction logic. so this following code studies msb crosssection density/ entropy of the drain set. it starts with 100 bit ½ density integers and iterates until the iterates fall below ½ the starting width, ie 50 bits. it generates 50 samples and then computes averages over the “slices” using the nice reusable
avgs subroutine intermittently utilized. sure enough there is a characteristic shape in the density and entropy. graph is msb avg on left, 2nd msb next, etc. the entropy slice has a characteristic/ remarkable “ringing” to it. it seems notable ‘d’, ‘e’ slices coincide at 2nd msb position at ~0.4. these same shapes were initially discovered on the previous ufo code generation scheme and then decided to generalize it to the drain set. the msb entropy slice converges to a constant ~¼. the lsb density/ entropy slices are the expected/ familiar ~½. this all is a notable property found along the way/ on the side.
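a rough python stand-in for the slice measurement (not the actual code/ avgs subroutine; the entropy here is just shannon entropy of the slice density, a simplified stand-in for the usual ‘e’ metric, and all names are invented):

```python
import random
from math import log2

def T(n):
    """semicompressed collatz map"""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def H(p):
    """binary shannon entropy of a bit density p"""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def msb_slice_stats(width=100, samples=50, slices=10, rng=random):
    """density (+ entropy stand-in) of msb-aligned crosssections over
    drain-set iterates; slice j = all bits at distance j below the msb
    (assumes slices < width // 2 so every iterate is wide enough)"""
    counts = [0] * slices
    total = 0
    for _ in range(samples):
        n = (1 << (width - 1)) | rng.getrandbits(width - 1)  # 1/2 density seed
        while n.bit_length() > width // 2:   # iterate down thru the drain
            b = bin(n)[2:]                   # msb-aligned bit string
            for j in range(slices):
                counts[j] += int(b[j])
            total += 1
            n = T(n)
    dens = [c / total for c in counts]
    return dens, [H(p) for p in dens]
```

note slice 0 (the msb itself) is trivially all-1s/ zero entropy; the distinctive shape described above starts at the 2nd msb position.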
(3/14) 😮 ❗ 😦 👿 another nail in the coffin or brick in the wall so to speak. this is a substantial idea that could have been tried awhile back and now am getting to it after led in the direction by last few experiments findings. this code uses Terras 1-1 construction to build a 100-length parity sequence with the 1st ½ of the sequence as ½ density pattern and the remaining a solid pattern either 0 or 1. then it runs the backtracking algorithm and doesnt have much trouble/ problem finding long precursor sequences with setting r=2.0. (some adjustment was made to the entropy calculation logic again to merge logic.)
recently and over the years there was a conjecture with lots of significant/ nice empirical support (ie multiple separate experiments) that, related to the overall mixing idea of the postdetermined region, 0/1 max runs in the parity sequence there must be limited; a recent experiment found ~20. but this code basically refutes that and suggests that the 0/1 max runs in the parity sequences seem to be unlimited even in the postdetermined region. here are two runs for 0, 1. so on the new findings, the prior experiments were “merely” measuring the high difficulty of finding unusual patterns, while this different algorithmic orientation/ angle manufactures/ constructs them without much trouble. once again collatz turns out to be like a sheer rock face where any handholds look illusory on closer examination.
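for reference, the Terras 1-1 parity-forcing idea can be sketched like this (a python stand-in with invented names, not the actual construction code): flipping bit i of the seed flips exactly the i-th parity, so a seed matching any given parity sequence can be built bit by bit.

```python
def T(n):
    """semicompressed collatz map"""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def parity_seq(n, k):
    """1st k parities of n under T"""
    out = []
    for _ in range(k):
        out.append(n % 2)
        n = T(n)
    return out

def terras_seed(parities):
    """build the unique residue mod 2^k whose 1st k parities match the
    given list. works bit by bit: adding 2^i leaves the 1st i parities
    unchanged and flips exactly parity i (T^i is affine with an odd
    multiplier 3^o), which is the Terras 1-1 correspondence"""
    n = 0
    for i, p in enumerate(parities):
        x = n
        for _ in range(i):          # compute the i-th parity of candidate n
            x = T(x)
        if x % 2 != p:              # wrong parity: flip bit i of the seed
            n |= 1 << i
    return n
```

eg forcing the half ½-density/ half solid patterns described above is then just a matter of handing in the desired parity list.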
at this point am feeling a little lost in the wilderness. without some Big Conjecture to drive the analysis, theres a directionlessness. my only idea is that these experiments somehow seem to show that collatz is sort of like “wrappings around wrappings around….” ie somehow almost arbitrary patterns can be “wrapped inside” of outer wrappings like the proverbial or near-cliche russian doll(s). but maybe this is somehow yet another manifestation of fractal self-similarity? on the other hand all this “stuff” is found in basically postdetermined regions and maybe does not rule out some types of differentiable/ exploitable features in the predetermined region. but note that as proven months/ years ago with the Terras construction, the predetermined region parity sequence is completely unconstrained. however that does not necessarily preclude some kinds of (exploitable) patterns/ signal in the region’s iterates. and the 2nd trajectory/ pattern is in line with the old idea that 1-lsb triangles are necessary for climbs, its just the climb is “embedded” in a descent so to speak… also the recent experiments comparing the correlation coefficient of the drain and a linear fit maybe arent rejected either, because they had an inherent scale-invariant aspect…
❓ is it just my imagination or are there some differences in the predetermined regions? more 0 triangles in the 1st and maybe some entropy/ density differences, more volatile in the 1st and more stable in the 2nd? but note the different horizontal scale might change appearance.
(3/15) 😡 @#$& speaking of entropy… found some semirandom formatting glitches on my now ½-decade old summary page due to wordpress glitches, not handling special characters in code snippets correctly due to rewriting/ escape related logic in the inline/ embedded code/ js syntax highlighter, some maybe ipad-chrome-editor related/ triggered, as noted previously a jarring risk-of-the-unique-unorthodox-profession and source of latenight anxieties/ sweat, cleaned it up, had to use wordpress revision history, thats a pretty valuable feature sometimes, and reverified some of the code, and more confirmation of the value of the
gist site! anyway do need to revise this pg with lots of cool findings from last few yrs, but its very timeconsuming, and (putting this diplomatically as possible) theres not a lot of audience reaction… 😥
also (sigh) now trying to tread lightly/ carefully/ tiptoe around a minefield… but have seemingly successfully navigated/ resolved similar territory/ valiantly managed to tame or at least disarm/ neutralize a feral cyberspace entity not so long ago… old previously encountered borderline troll “RA” from a few yrs ago already (time flies when youre not annoyed, and conversely…) has inexplicably returned in the comment section with characteristic/ unmistakable fanfare and started some mini-harassment campaign… (speaking of “wasting time”/ misplaced energies…) awhile back encouraged him to get a stackexchange (chat) account thinking s/he had something significant to say about eg bitcoin and just needed a place for it. boy was that a mistake… to make the story short, a stackexchange mod intervened and deleted dozens of msgs…
so… to be purposefully distanced/ identified in 3rd person until behavior improves and shows any )( responsivity to requests/ directions and acts just a little )( less crazy! (lol, magical thinking?) aka “pulling out the 10ft pole”… since dismissive of my blog(s), maybe not noticing further comment in it, lol? RA is a “strange bird,” and, always trying to say something positive esp about other humans, but here maybe straining some, apparently has a few )( redeeming qualities, eg high grasp of/ flair for english language/ style (native speaker/ language?), theatrical, melodramatic, humorous, poetic, persistent, apparently triggered by some long forgotten/ buried cyber saga and/ or mistaken identity, and alas on other hand also exhibiting some semialarming cyberstalker symptoms. now calls himself an apprentice… to what? evil? right now emphasis definitely on the runty part… aka incorrigible… shoo fly 👿 😛
(3/21) have been having some extended period of thoughtfulness or emptymindedness, sometimes its not easy to tell the difference (and thats very desirable in a zen kind of way). a few new ideas. it seems to me the binary statistics are to be seen as useful but maybe very noisy, almost “too local,” subject to spikes such as ufos. its kind of melting my brain. maybe need to look more at ways of averaging or aggregating them, not losing sight of the bigger picture, maybe some focusing on trees and not the forest in all that. its also not clear if some of the binary statistics trends are associated with random numbers in general and could or need to do some control comparisons along those lines. maybe need to not lose sight of the idea of larger emergent trends/ properties arising out of the local dynamics/ occasional noise, and sometimes “work backwards” from them. then started to go in this direction.
heres another way to look at postdetermined vs predetermined region. one basically expects a decline in the postdetermined region although one experiment showed that ‘cm’ max index could exceed ‘nl’ initial bit width by about ~300 iterations although that single value doesnt indicate relative difference (more on that below/ next). a fairly simple calculation/ experiment would be to compare binary areas of predetermined vs postdetermined regions. my initial idea/ picture was to expect declining area in the postdetermined region, so that just measuring a ratio would tend to capture the trend. this code looks at a 2-region of the predetermined followed by the same count of iterates in the postdetermined region, and measures binary area of 2nd over area of 1st ‘a21’ red, attempting to maximize it along with initial bit width ‘nw’ green via bitwise approach.
it finds a ratio close to ~1 but slightly higher, somewhat unexpectedly high with the idea in mind of the drain overlapping the 2nd region. something similar was seen in the striking
construct13 (“rosetta”) graph from last month, where a high climbing predetermined region led to an outlier high postdetermined outcome… sort of forgot about that in picturing all the other “lower” trajectories as more typical, but ofc optimization algorithms tend to focus on outliers. the binary diagram shows one result. some better logic could try to find a high ‘a21’ sample in the longer runs; picking the largest ‘a21’ is biased toward the short trajectories, and this code just punts and picks the largest run. just rerunning the algorithm, sometimes it ends on more of a spike. the binary area post-to-pre ratio was 1.09 for this run. so the 2nd region can decline “slightly less” than the incline, and the slope-mean-returning drain section seems to “kick in” “relatively far after” the start of the postdetermined region. in the binary diagram there is not much slope in the 2nd half. immediately this gives me the idea to measure/ optimize 2nd half closing iterate bit size over max iterate width, expecting it would be consistently less than 1; if so, that points to some possible inductive structure.
‘cmnl’ was studied somewhat in depth, the difference of max index minus initial bit width. related to prior experiment, wondering how “far/ deep/ late” into the postdetermined region the decline can start, it makes sense to study the ratio of the two here named ‘cmnw,’ blue dashed line, optimized along with ‘cm’/ ‘nw’ separately. the results show that it seems to be bounded but intermittently spikes up to about ~1.5 even for larger trajectories, although it does seem to be declining but gradually at the end.
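the binary area calculation itself is simple enough to sketch (hypothetical python stand-in, not the actual optimizer code; ‘a21’ named as in the text, region boundaries as described):

```python
def T(n):
    """semicompressed collatz map"""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def binary_area(span):
    """'binary area' of a span of iterates = total count of bits"""
    return sum(n.bit_length() for n in span)

def a21(n0):
    """postdetermined over predetermined binary area ratio: region 1 is
    the 1st nw iterates (nw = seed bit width, the predetermined count),
    region 2 the same count of iterates right after it"""
    nw = n0.bit_length()
    traj = [n0]
    for _ in range(2 * nw):
        traj.append(T(traj[-1]))
    return binary_area(traj[nw:2 * nw]) / binary_area(traj[:nw])
```

the optimizer then maximizes this ratio along with initial bit width; a plain random ½ density seed would typically give a21 below 1 since the drain overlaps region 2.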
😳 fyi theres a small glitch introduced in the
fmt1 subroutine that has a ‘\\’ after the plot and causes the binary diagram to give an error in gnuplot, related to disabling/ commenting out the ‘e’, ‘d’ entropy/ density overplot lines.
💡 ❗ as just wondered, and this is kind of basic & maybe should have been searched sooner, this looks at the “2x width distance” iterate width ‘wz’ (bit width of iterate at 2x the starting bit width distance) red divided by max iterate width ‘mx’ blue, called ‘wzmx’ magenta, optimizing for all 3 along with initial bit width ‘nw’ green, and finds that its consistently less than 1 although not by much. re overarching inductive structure this nearly seems to be some kind of key constraint/ invariant maybe at the deep core of the problem…? because this basically points to a consistent, ie potentially universal decline-after-peak property. and again the constant nagging background refrain/ theme, how to prove it? ❓ 😐
(3/22) its just so darn easy to try out/ dash off ideas in the
bitwise framework, if only it led to corresponding leverage over a proof. its almost trivial to measure various bounds/ constraints around trajectories but nearly impossible to prove them, and this asymmetry is part of the deep mystery. was wondering about “how far away” from the bit distance that ‘cm’, ‘cg’ can be pushed. this code measures their relative distance to ‘nw’ initial bit width via ratios ‘cmr’, ‘cgr’. this pushes on all 5 metrics
cm, cg, cmr, cgr, nw. the optimization ends at cmr≈1.02 and cgr≈2.9, ie the (steeper) upslope distance is roughly ~½ the downslope. the ‘cmnl’ phenomenon observed/ poked at in the past is notable here again: it seems like ‘cm’ can only spike a nearly constant max distance above initial bit width (red, green). correspondingly ‘cmr’ magenta seems to be bounded and declining gradually. in the binary diagram, lsb 1-runs are apparent in the climb.
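a minimal sketch of the glide metrics (python stand-in with an invented function name; cm, cg, cmr, cgr as defined in the text, semicompressed map assumed):

```python
def T(n):
    """semicompressed collatz map"""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def glide_stats(n0, limit=10**6):
    """cm = index of the glide max, cg = glide length (1st index where the
    iterate falls below the seed), plus cmr/ cgr = ratios to initial bit
    width nw; limit guards against unbounded glides (and seeds < 2)"""
    n, mx, cm, i = n0, n0, 0, 0
    while n >= n0 and i < limit:
        i += 1
        n = T(n)
        if n > mx:
            mx, cm = n, i
    nw = n0.bit_length()
    return cm, i, cm / nw, i / nw   # cm, cg, cmr, cgr
```

eg the classic glide of 3 (3 → 5 → 8 → 4 → 2) peaks at index 2 and ends at index 4.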
another way of looking at this. there have been some studies about composing higher trajectories in terms of lower ones. this has to do with higher trajectories having identical binary prefixes as lower ones. based on this study it would seem that every higher glide could be “composed” of ~3 lower ones (spec. ~1 over the predetermined range and ~2 over the postdetermined range). trying to do this in some kind of organized way with a pattern/ structure is the big challenge.
‘cgr’ is similar to a metric studied years ago called “horizontal distance” aka ‘h2’ probably introduced here and there were other later studies. ‘h2’ in contrast used a higher compressed collatz mapping.
again a simple idea that could have been tried long ago; this idea/ experiment just occurred to me. another way of visualizing the problem is that one might wonder simply which iterates lead to large vs small climbs. not the same but somewhat nearly related is just asking how many odd iterates occur in the parity sequence. this code looks at all numbers less than 2^10 and determines their odd iterate count (roughly the same as parity density), then sorts them by this value, and then graphs the seeds in that order, leftmost least odd counts and rightmost highest. in some ways this captures some of the core mystery of the problem. the graph is not random but not very orderly either, although there is unmistakable bunching/ local order. and what property of the iterate is correlated with this odd count? it is tempting to try to somehow throw machine learning at this basic question and see what it comes up with. approaching from another direction, is there any known property of integers that looks anything like this graph?
later note: (deja vu!) further poking, the parity density measured here is closely correlated with the stopping distance and plotting it in a slightly different way brings that out.
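the odd-count experiment is simple enough to sketch directly (python stand-in, not the actual code; semicompressed map assumed as elsewhere, so the counts are parities of the T-sequence rather than of the uncompressed map):

```python
def T(n):
    """semicompressed collatz map"""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def odd_count(n):
    """count of odd iterates before reaching 1; over a whole trajectory
    this is essentially the parity density numerator"""
    c = 0
    while n > 1:
        c += n % 2
        n = T(n)
    return c

# every seed below 2^10 ordered by its odd-iterate count, as in the
# experiment; graphing seeds in this order gives the described picture
seeds = sorted(range(2, 1 << 10), key=odd_count)
```

since longer stopping distances require more odd (3n+1)/2 steps to sustain them, the close correlation with stopping distance noted above is plausible on its face.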
(3/28) kind of in an empty space right now. not sure where to turn, what to try next. kind of headwracking + handwringing. so many powerful ideas/ approaches/ techniques have been tried at this point but none seem to point clearly the/ any route to a proof. it seems a deep paradox to me that one can run almost any algorithm whatsoever on the problem and sometimes seemingly come no closer to a proof. feel still am missing some really big paradigm that is at the same time close by but not yet in reach. in two words, tantalizing/ elusive. in a 3rd, sometimes frustrating. 😦 🙄
a few more thoughts. there is a “forward mixing vs reverse mixing”. the Terras construction and many other approaches eg density/ entropy etc look at a kind of forward mixing. then recent experiments look at a “reverse mixing” where one can start with arbitrary iterates and often “work backward indefinitely” and a kind of mixing occurs in that context also. also notable for this reverse case is that the increase in iterate size is more gradual than the increase in iterations, hence the recently studied “depth ratio” can typically be increased indefinitely as seen in prior results.
also, am having this fantasy lately of running a GAN generative adversarial network, which have been in the news lately with striking results, eg a human face-generating system. the GAN would be as follows. one network would analyze the iterates to attempt to estimate the “open properties” such as glide/ trajectory characteristics like length, peak etc, supplied with lots of binary features as have now been identified. the other “adversarial” network would find iterates (arbitrarily large) that tend to thwart the estimates. have described this basic setup before. note the search space is infinite. this is a bit different than typical ML problems which often have finite search spaces or finite input/ output datasets, and this is a key challenge/ question in adapting ML technology to this domain. the GAN code out there is more cutting edge and there are fewer off-the-shelf packages, and its harder to set up than with a finite set of training data. but maybe will delve/ dig into this at some point.
but then, a big question, what constitutes a proof? the answer again is in the error. the network will find some error. is it possible to prove a rigorous bound on this error for arbitrary sized iterates? ie looking at the error vs iterate size function, is it (provably) bounded by any function? proof of any bound whatsoever is equivalent to proof of a decidable computation, the crux of the problem. empirically one can measure whether the error increases for larger iterates and estimate a bound. but actually proving one holds in general is the “holy grail”. this is a key open question in neural network research, although maybe not exactly recognized or emphasized/ focused on there, and yet conquering it unites the two relatively disparate fields of ML and math theorem proving, an old dream of mine and others that is slowly/ finally starting to materialize before our eyes.
backtrack experiments were somewhat dramatic hypothesis-busters (am still now reeling some/ seriously chagrined) and deserve a little more commentary to nail it down/ spell it out. they tend to suggest that the very compelling “rosetta diagram”
construct13 from last month, even though it holds quite generally empirically speaking, is a quirk/ mirage. the “needles in the haystack” mess up any generalization about the hay so to speak. that diagram seemed to suggest that at the beginning of the postdetermined region there is sufficient mixing to lead to the slope mean-returning random walk. the new experiments show that long into the postdetermined region there can be “nearly arbitrarily” large deviations/ spikes away from this random walk.
💡 my new idea is maybe “focus more on inductive properties”. this is vague but meaningful at the same time (zen is a philosophy of life around here). one direction with some inductive findings was sibling-related analysis. then this occurred to me, maybe never calculated this: it would be interesting to measure the difference in ‘cm’ glide max index and ‘cg’ glide distance between siblings and attempt to maximize them, and this is done with a minor alteration of the code where the
count subroutine has access to the prior sibling. this led to these results ‘cgd’ magenta, ‘cmd’ lightblue dashed, ‘nw’ red. both ‘cm’, ‘cg’ can be computed locally (the typical case) or globally. that turned out to have a huge effect on the optimization results here (whereas as reported in some other experiments it was difficult to find any difference); the graphs are in order of local vs global measurement,
count1, count2. in the 1st approach it seems like the metrics are clearly increasing but far less so in the 2nd.
(later) this code was adjusted to run for 40k iterations on the global metrics and shows dramatically different results just a little later. either the code paints into a corner or higher optimal metrics cant be found. it makes me want to unleash the more powerful hybrid optimizer on the same metrics to get a better picture. also have some hanging ideas on how to enhance the hybrid optimizer a little.
(3/29) somewhat similarly this is a quick riff off
backtrack that rips out the Terras code and backtracks to the same 2x ratio on an iterate with a ½ size lsb 1-run and ½ density in the ½ higher bits, showing the Terras construction is not really necessary in that case. then it looks at the evolution of the 2 higher siblings (green, blue) vs the deviation-constructed trajectory (red). it shows there is significant, basically “almost arbitrary” deviation in the “spike region” at “almost arbitrary” depth (into the postdetermined region). in other words this is an analysis of “how much” the siblings can differ from each other based on this contrived but relatively powerful deviation method. the general idea here is an induction with siblings needs to show some kind of correspondence/ similarity which is quite typical on average re the rosetta diagram but that this contrived “edge” case tends to disrupt/ thwart it to an “almost arbitrary” degree. determining/ nailing down the exact technical limits on the word “almost” there is basically equivalent to the same grand proof challenge.
❓ musing: it seems therefore that the collatz trajectories are some mix of an unconstrained vs constrained random walk. the pre- vs post-determined distinction has been helpful in delineating this concept, but in retrospect looks (far?) too simplistic, because there seems to be “not so much constraint as guessed” in the postdetermined region. some more advanced concept is necessary. it seems to be something like a semiconstrained random walk, and its not so simple as separating the constraint into two regions. its a semirandom walk with some kind of complex constraint mechanism. what is the fundamental nature of this constraint mechanism? how can it be expressed?
an earlier long/ oft-used concept looked at nonmonotone run length. it has been known since the beginning that glides lead to arbitrarily long nonmonotone run lengths in the trajectories. but looking at limits on nonmonotone run length is a kind of simple constraint mechanism. what are some more sophisticated constraint mechanisms on a random walk? what could be a basic generating model structure/ formulation? some of this seems reminiscent/ evocative of the Hurst exponent/ fractal theory that involves “semi random walks with memory”. but what kind of form do those take? maybe time to see if they fit somehow, or look more into semi random walk theory.
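for reference, a basic rescaled-range (R/S) Hurst exponent estimator can be sketched as follows (a hedged python stand-in of the standard textbook method, invented names; whether it fits collatz walks is exactly the open question):

```python
from math import log

def hurst_rs(series, min_win=8):
    """rescaled-range (R/S) estimate of the Hurst exponent H: the slope
    of log(R/S) vs log(window size). H ~ 0.5 for a memoryless walk,
    > 0.5 persistent ("memory"), < 0.5 anti-persistent/ mean-reverting"""
    n = len(series)
    xs, ys = [], []
    w = min_win
    while w <= n // 2:
        rs = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            mean = sum(chunk) / w
            cum, hi, lo, var = 0.0, 0.0, 0.0, 0.0
            for v in chunk:                 # cumulative mean-adjusted sums
                cum += v - mean
                hi, lo = max(hi, cum), min(lo, cum)
                var += (v - mean) ** 2
            s = (var / w) ** 0.5
            if s > 0:
                rs.append((hi - lo) / s)    # range over std dev
        if rs:
            xs.append(log(w))
            ys.append(log(sum(rs) / len(rs)))
        w *= 2
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)   # least-squares slope = H
```

one could feed it eg the per-step changes in iterate bit width of a long trajectory and see whether the estimate deviates from the memoryless ~0.5.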