latest collatz musings

my last post on the subject is over a month old and long on contents, so am starting this new one. (in contrast to last time…) nothing earth shattering this time! (honestly, in 20/20 hindsight, now thinking the distinct mania of the last post on the subject might have been influenced by the Tao zeitgeist and blogging about his breakthrough…) research in this area goes through a bipolar-like cycle and am currently in a low 😦 😥 😛

…oh but as they say in kindergarten teachings, every cloud has a silver lining right? so let's turn that frown upside down! 🙄

⭐ 💡 update/ announcement/ highlight: two cstheory stackexchange questions based on some ideas relating collatz to theorem proving and undecidability wrt FSM transducers did quite well on the site at 10 and 6 points, with 3 great answers total, including a clever one by familiar associate Marzio, who has also long been linked on this blog's sidebar. thx man! 😀 😎

finally decided to start using the lambda and function indirection capability in ruby. this code uses it to create 6 different collatz functions that differ slightly in the odd/ even dynamics. (fyi one of the 2 x 3 combinations is redundant/ identical with another, leaving 5 unique; miniexercise: figure out which pair.) the syntax is definitely much more terse than case/ switch logic and presumably a hair faster, but it's not as readable (in much the same way/ spirit that terse math formulas are not all that human readable/ parseable either). then there is a revisiting of the “bunch” logic to attempt to monotonically advance the set. this is a nontrivial idea that took a few hours to implement/ debug.
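for concreteness here's a minimal sketch of the lambda indirection idea; the particular odd/ even rule combinations below are illustrative guesses, not necessarily the exact ones used in bunch20.rb.

```ruby
# sketch only: build collatz variants via lambdas; the specific odd/ even rules
# here are hypothetical stand-ins, not necessarily the ones in bunch20.rb
odd_rules = {
  'o1' => lambda { |n| 3 * n + 1 },        # plain 3n+1
  'o2' => lambda { |n| (3 * n + 1) / 2 },  # semi-compressed: 3n+1 then halve
}
even_rules = {
  'e1' => lambda { |n| n / 2 },                        # single halving
  'e2' => lambda { |n| n /= 2 while n.even?; n },      # strip all factors of 2
  'e3' => lambda { |n| n % 4 == 0 ? n / 4 : n / 2 },   # halve twice when possible
}

# 2 x 3 = 6 variants selected by name, each dispatched through a single lambda
collatz = {}
odd_rules.each do |on, of|
  even_rules.each do |en, ef|
    collatz["#{on}#{en}"] = lambda { |n| n.odd? ? of.call(n) : ef.call(n) }
  end
end

def trajectory(n, f)
  seq = [n]
  seq << (n = f.call(n)) while n > 1
  seq
end

trajectory(27, collatz['o1e1'])   # classic uncompressed trajectory for seed 27
```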

the basic idea is that each trajectory's next advance/ step either increases or decreases its value. the algorithm tries to attack the largest increase, but based on a “budget” of all the decreases (the sum of all of them). if there is no “budget” then it chooses the smallest increase. if there is some budget then it finds the largest increase that is under budget. after choosing that increase, it shrinks the budget by removing the smallest decreases one at a time, as long as the remaining budget still “covers” the increase; any further removal would “conflict” with it. then it simultaneously advances/ steps both the increasing trajectory and all the decreasing trajectories left in the (final) budget. overall this last idea is somewhat similar to finding a sum of integers that is as near as possible to a limit but still below it, ie similar to/ a variant of the “subset sum” problem.
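roughly, the selection step just described might look like the following sketch (variable/ method names are hypothetical, not the actual bunch20.rb code):

```ruby
# sketch of the "budget" selection step; names are hypothetical.
# deltas: array of [trajectory_index, delta] for each trajectory's next step,
# where delta > 0 is an increase and delta < 0 a decrease.
def pick_bunch_step(deltas)
  incs = deltas.select { |_, d| d > 0 }.sort_by { |_, d| d }        # ascending increases
  decs = deltas.select { |_, d| d < 0 }.sort_by { |_, d| d.abs }    # ascending decreases
  return [[], decs.map(&:first)] if incs.empty?                     # nothing increases: step all decreases

  budget = decs.sum { |_, d| d.abs }
  return [[incs.first.first], []] if budget.zero?                   # no budget: take the smallest increase

  # largest increase still under the budget (fall back to the smallest increase)
  inc_i, inc_d = incs.reverse.find { |_, d| d <= budget } || incs.first

  # drop the smallest decreases while the remainder still "covers" the increase
  kept = decs.dup
  kept.shift while kept.size > 1 && kept.drop(1).sum { |_, d| d.abs } >= inc_d

  # advance the chosen increasing trajectory and the kept decreasing ones together
  [[inc_i], kept.map(&:first)]
end
```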

it is somewhat striking that the 6 different collatz function variants have widely different results on the bunch algorithm, in particular significantly affecting the concavity and the local minima/ maxima/ inflections (note as indicated previously, 5 are unique and two lines are perfectly overplotting). as experienced earlier, the method does fairly well until the end where there is a large spike, but the descent/ smoothness is impressive prior to that. this all appears to indicate the algorithm is not “aggressive” enough in finding/ advancing the worst case trajectory(s) “early on”. another interpretation is that the algorithm isn't “saving” enough: it is always trying to match expenses to budget so to speak and can get “overrun” by a big “withdrawal/ expense”.

bunch20.rb

bunch20

⭐ ⭐ ⭐

💡 😈 here is an idea fusing different themes explored previously, sketched out in a loose conceptual way, a sort of phantom/ ghost that's been “haunting me” somewhat lately. it is known that iterates sharing a binary prefix have identical initial trajectory runs. suppose that the bunch is chosen somehow with an eye toward the prefixes in the bunch. each trajectory in the bunch moves through different prefixes. could there be some pattern in these prefix “exchanges”? in particular suppose that a bunch could be advanced in a way (possibly a different count of steps for each trajectory) such that while there is an interchange, the set of prefixes does not change after all the advances; that would be an extraordinary property (and directly utilizable wrt/ by induction).

then it would seem as if the prefixes are “exchanged” or “alternate” through the bunch. the only way to do this seems to be a bijection. but all bijections break down/ decompose into sets of cycles, and each cycle can be considered separately. so the only case seems to be using a cycle of prefixes. but this seems to be impossible based on a fairly simple proof. can you find it? the proof seems to hold even for variable width prefixes. and note a smaller prefix can be “cut/ truncated” from a larger prefix. even though this basic idea seems to be disproven, maybe there is still something exploitable/ tractable about analyzing the changing prefix dynamics of a bunch advance.

(11/1) was musing on all this a bit and came up with some new mini insights. this all seems to hint at a new concept of hardness. an algorithm that pursues bunch advances and then is “fooled” by some trajectory in the middle, eg one that unexpectedly spikes after long, nearly flat or plateau-like behavior, is experiencing a kind of new “hardness” unlike the other “hard seed” ideas formalized into the current bag of generators.

eg maybe all the hard seed generators are creating trajectories that are “predictable” in that if they have large glides or spikes they occur early in the trajectory and not later. this all also ties into ideas long considered here about local vs global patterns. a long flatline trajectory with a big spike at the end does ok locally for most of the trajectory and then the global property/ view at the end causes a smoothing algorithm to fail, ie an incompatibility/ conflict/ mismatch between local and global behavior.

so then the question becomes about exploring whether long flat trajectories with big spikes at the end exist eg in a new generation algorithm. if they exist, with arbitrarily long plateaus, that is maybe strong resistance against a local algorithm succeeding in forecasting global trends (and possibly even against any collatz proof existing?). or (to the contrary) is there some constraint against those existing? eg need to analyze the small subset of trajectories that remain at the end of the above plots to see if they exhibit this “surprise spike” aspect.

(11/7) 😳 have long been mentioning the subset sum variant of attempting to sum as closely as possible to, but not exceeding, a threshold. oops! didn't realize this is a classic NP complete knapsack problem. saw it ref'd recently on cs stackexchange and then realized it! and of course my large and highly attentive audience is helping me out on stuff like this! not! … kinda reminds me at times of an expression from latin american culture… like talking to the wall! 😦 😡 😥
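for the record, the exact version of that subproblem (largest achievable sum not exceeding a limit) has the standard pseudo-polynomial subset-sum dynamic program; the bunch code above only approximates it greedily. minimal sketch:

```ruby
# standard subset-sum DP: largest achievable sum <= limit from positive integers
def best_sum_under(values, limit)
  reachable = Array.new(limit + 1, false)
  reachable[0] = true
  values.each do |v|
    limit.downto(v) { |s| reachable[s] ||= reachable[s - v] }
  end
  limit.downto(0).find { |s| reachable[s] }
end

best_sum_under([7, 3, 5, 11], 13)   # => 12
```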

(11/9) this is some code that automatically/ systematically figures out pragmatic upper ranges for the hard seed generation algorithms over all 5 different collatz trajectory “compression” schemes. they were previously set by hand and only for a single trajectory algorithm. the algorithm is probabilistic. it assumes a fixed size for the hard seed generation array/ memory (2000) and then starts with low difficulty seeds, slowly increasing hardness until encountering failed seed generation runs. when the ceiling on the probability of failure is reached, the algorithm finishes for that seed generation/ collatz trajectory computation pair. here the probability is 1/10 and it is considered reached after the first case of two failures within 10 consecutive seed tests. notably the compression algorithms can affect the trajectory size to the point that the smallest is only ~25% of the longest, eg the ‘cm’ mode.
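roughly the stopping rule just described, as a sketch (generate_hard_seed is a hypothetical stand-in for the actual generation call; the real logic lives in range.rb below):

```ruby
# sketch of the probabilistic ceiling search: raise hardness until 2 failures
# occur within 10 consecutive seed tests (hypothetical helper names)
def find_upper_range(step = 0.01)
  hardness = 0.0
  window = []                              # sliding window of the last 10 pass/fail results
  loop do
    hardness += step
    window << generate_hard_seed(hardness) # true on success, false on a failed generation run
    window.shift if window.size > 10
    return hardness if window.count(false) >= 2   # 2 failures within 10 consecutive tests
  end
end
```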

range.rb

["p", 0, 1.02, 353.5]
["p", 1, 1.05, 364.0]
["p", 2, 1.0, 343.0]
["p", 3, 1.04, 360.5]
["p", 4, 1.03, 357.0]

["c", 0, 1.01, 1200.0]
["c", 1, 0.78, 924.0]
["c", 2, 0.57, 672.0]
["c", 3, 0.53, 624.0]
["c", 4, 0.28, 324.0]

["w", 0, 1.18, 87.0]
["w", 1, 1.16, 85.5]
["w", 2, 1.1, 81.75]
["w", 3, 1.18, 87.0]
["w", 4, 0.26, nil]

["m", 0, 0.95, 325.5]
["m", 1, 0.97, 336.0]
["m", 2, 0.98, 336.0]
["m", 3, 0.98, 336.0]
["m", 4, 0.99, 343.0]

["cg", 0, 1.02, 353.5]
["cg", 1, 0.92, 318.5]
["cg", 2, 0.68, 234.5]
["cg", 3, 0.47, 161.0]
["cg", 4, 0.27, 87.5]

["cm", 0, 1.05, 206.0]
["cm", 1, 0.83, 164.0]
["cm", 2, 0.71, 138.0]
["cm", 3, 0.43, 84.0]
["cm", 4, 0.28, 54.0]

this code integrates over 1000 trajectories for the ‘w’ generation scheme at 0.25 “hardness”, scales them uniformly (“horizontally” ie by length), and looks at averaged statistics of the 0/1 run length averages, standard deviations, and max run lengths in each iterate. results seem significant. the 0 and 1 statistics differ substantially and have different dynamics pre/ post glide peak. 1st graph: 0-run length averages, std dev, and max run lengths in red, green, blue respectively. 2nd graph is the same for 1-runs. 3rd graph is iterate bit size in red, average density in green, and the ½ center density in blue. much could be said here. there are nearly correlated “squiggles” at the end of all the lines except mostly the iterate bit width. the average density nearing the ~½ center density is a fairly good predictor of the max iterate location. the average run lengths are asymmetric for 0s vs 1s.
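for clarity, the per-iterate run-length statistics being averaged are roughly the following (sketch only, not the actual run5.rb):

```ruby
# per-iterate 0/1 run-length statistics (average, std dev, max); sketch only
def run_stats(n, bit)
  runs = n.to_s(2).scan(/#{bit}+/).map(&:length)   # lengths of maximal runs of the given bit
  return [0.0, 0.0, 0] if runs.empty?
  avg = runs.sum.to_f / runs.size
  sd  = Math.sqrt(runs.sum { |r| (r - avg)**2 } / runs.size)
  [avg, sd, runs.max]
end

run_stats(27, '1')   # 27 = 0b11011, 1-runs [2, 2] => [2.0, 0.0, 2]
```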

run5.rb

run5x

run5y

run5z

the comparison with random seeds below is quite distinct. the following graphs are for the random seed generation algorithm ‘z’, still at 0.25 hardness. 1st graph again is 0-run average lengths, deviation in lengths, and max length. the 1-run graph, not included, looks nearly identical. then the 2nd graph is again the iterate bit size/ density.

run5bx

run5by

(11/10) ⭐ 💡 😎 😀 (maybe have been operating under some mental block that a loop invariant needs to have a “single form”…) this is some promising code that synthesizes many prior ideas into a larger construct aiming for a generalized loop invariant within a trajectory. an earlier experiment found a nearly-bounded max non mono run length for pre-core density decrease, in particular for the high compression collatz function. a recent experiment (not posted, similar/ minor variant on last code) just found that maybe the max 0/1 run lengths in binary iterates divided by total iterate bit length is bounded within the core density region, except for “small” iterates which behave erratically wrt it (here “small” is in the range ~20-50 bit size). so the overall picture for a possible 3-phase loop invariant is as follows:

  • use the high compression collatz sequence
  • outside the core density, the density convergence to the core is bounded by a constant max non mono run length.
  • then the iterate enters the core. within the core, the max 0/1 run length in the binary iterate divided by iterate bit length seems to be bounded by a constant, except for small iterates (see the sketch after this list).
  • “small” iterates less than ~20-50 bit size can be treated as a finite set to verify.
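the density and the in-core run-length ratio referenced above, as a minimal sketch (not the actual code):

```ruby
# density of 1s, and the max 0/1 run length as a fraction of bit length; sketch only
def density(n)
  bits = n.to_s(2)
  bits.count('1').to_f / bits.length
end

def max_run_ratio(n)
  bits = n.to_s(2)
  bits.scan(/0+|1+/).map(&:length).max.to_f / bits.length
end
```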

this following code incorporates all these ideas and searches through all the hard seeds for the above metrics over increasing hardness. the graphs of the key parameters are generally always flat, giving empirical evidence they are indeed bounded by constants. for nearly every algorithm the iteration count to enter the core seems to nearly match the max nonmono run length of the density outside the core. actually, for all but one generation algorithm the iteration count to enter the core seems nearly bounded by a constant; this is due to the high compression and also probably due to the algorithms generating seeds generally without non-core density, ie typically within the core density range. the ‘cg’ generation method has low density seeds but they apparently nevertheless converge quickly toward the core.

the other exception is the ‘r’ generation method that creates increasingly low-density starting seeds, where the iteration count to enter core density tends to increase, but nevertheless the max nonmono run length of the density outside the core still seems to be bounded in this case. in this code there is a new flag to determine whether the seed generation algorithm is probabilistic or not, ie whether it will or will not fail on memory overflows. if it doesn't fail on memory overflows then the loop has a hard cutoff instead of, in the contrary case, terminating after 3 seed generation failures. also the hardness table upper range limit has been adjusted specifically for the 5th generation algorithm, the highest compression one.
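since the “max nonmono run length” metric keeps recurring, here is one plausible reading of it as a sketch, namely the longest stretch of steps without reaching a new minimum; the actual subroutine in the posted code may differ in details:

```ruby
# one reading of "max nonmono run length": the longest stretch of consecutive
# steps during which the sequence fails to reach a new minimum (sketch only)
def max_nonmono_run(seq)
  min_so_far = seq.first
  run = 0
  best = 0
  seq.drop(1).each do |x|
    if x < min_so_far
      min_so_far = x
      run = 0
    else
      run += 1
      best = [best, run].max
    end
  end
  best
end

max_nonmono_run([5, 4, 6, 7, 3, 2])   # => 2 (the 6, 7 stretch before a new low)
```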

run7.rb

(11/12) went back thru old posts trying to remember if a particular experiment had been done. turns out a nearly gaping oversight in prior analysis is that there was no inquiry into the convergence to core versus the peak trajectory value (and the prior integrated plot seemed to show a near-correlation in some cases). if the peak trajectory value is somehow “close” to the start of the core, that's quite significant and maybe even a near-breakthrough. because basically after the iterate peak, the trajectory is (“very”) roughly monotonically descending. and the pre-core “convergence” via density seems extremely solid. so the big problem area is possibly post-start-core and pre-peak, where the trajectory is increasing. how “big” is it? this code analyzes it over all 8 hard seed generation methods and for the high compression version.

somewhat surprisingly the iterations-to-core count is low and seemingly bounded (green) except for the 7th method ‘r’. for several of the seed generations, the post-core-pre-peak count (red) is low and seemingly bounded, but for others it is obviously increasing, esp the 5th/ 6th ‘cg’ and ‘cm’ methods. the nonmono run length pre-peak is in blue.

run8.rb

run8

(11/13) here's an analysis of some metrics for the ‘cm’ scheme in particular, integrated over 2000 trajectories and the post-core-pre-peak region only, scaled. it takes a lot of integrations to show it over the noise (ie a rather weak signal) but there does seem to be a trend in the 1-run statistics (average length, std dev, max length; 1st graph, red, green, blue resp, with nearby constant lines for comparison) and the density (2nd graph, green, with average iterate bit size in red showing the upward trend). the 0-run statistics are flat, ie apparently “random” (pre-integration) in the same range.

run10.rb

run10

run10x

this code uses metrics inspired by the last graph, basically tracking the negated values of the slowly increasing metrics, for four different metrics: 1-run avg length, 1-run length std dev, 1-run max length, and, since the 1-run average divided by std dev was also noted to be declining in the prior analysis, that ratio as well. then this code calculates the max nonmono run length of these metrics and graphs them in red, green, blue, magenta respectively. prior code also did not output the “post-core-pre-peak” length; that is corrected here as the black line. that black line addition tells a lot of the story; the other metrics are low because they are all guaranteed below it, and it is not reliably or strongly increasing in at least half of the generation algorithms.

also the seed bit size is in light blue. unfortunately these metrics, while apparently attenuating the max nonmono run length of the post-core-pre-peak range, do not succeed in bounding it by a constant, in particular for the problematic ‘cm’ generation scheme, which not surprisingly also steadily increases the post-core-pre-peak count (6th region left-to-right in the plot).

PS 😳 @#%& bug, just noticed the nonmono run length subroutine is unintentionally taking the absolute value of the sequence values. however, fixed that and reran it for ‘cm’ in particular and still got similar results. weird…

run9b.rb

run9b

(11/17) 😮 (holy cow, entirely new wordpress editor, looks pretty good so far, may comment later, crossing my fingers all over the place it won't mess something up esp after past issues with clobbering code…) it immediately occurred to me to try to quantify the monotonicity of those long narrow increasing lines above. even though they look far better than noise, the prior experiment shows they seem not to be decreasing the max nonmono run length much. so this code looks at the nonmono run length for an increasing number of samples, from 1 to 2000, for the negations of the slowly increasing metrics. the 5 unnegated metrics are iterate bit width, average 1-run length, std dev of 1-run length, max 1-run length, and negative average divided by std dev (line 197).

then, plotted the nonmono run lengths of the negated metrics as red, green, blue, magenta, lightblue respectively. the nonmono run length of the negated iterate bit size decreases reliably for increasing samples, but the other metrics do not show much change downward, in fact in some cases even deteriorating and increasing as the sample count increases! so overall this seems somewhat “another dead end”. even though the plots of the lines look very smooth, the decline range is so narrow that there are still large nonmono run lengths that can't be decreased with increasing integration. a case where human aesthetics/ cosmetics do not match the hard bottom line. 😐

run11.rb

run11

⭐ ⭐ ⭐

⭐ 💡 ❗ 😮 😀 the following code/ graph may look simple & like no big deal, but it is! pondering it led to a huge shift in thinking! am posting it immediately but exploring it further… hint: the bunch20 algorithm above may be “good enough”!

more (descr) later…

bunch21.rb

bunch21

(11/18) this code takes a first stab at the idea. the basic concept is to work with a bunch of trajectories, but it's not necessary to drive every trajectory to 1, only to drive every one below its starting seed. doing so in a way that preserves the monotonic decline of the current total of trajectory iterates is not easy, but this code shows that it succeeds for some starting values after searching randomly for them. this is quite notable in that perfectly monotonically declining starting points, while rare, can indeed be found. the code outputs the successful starting seed/ “base” and the # of tries to find it. it uses 150 adjacent trajectories. the advance strategy is not particularly smart (it's just a fairly simple greedy algorithm similar to the prior one involving the “budget”) and might advance trajectories that are already below the starting seed.
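roughly, the success criterion being searched for looks like the following sketch (pick_advance_group is a hypothetical stand-in for the greedy choice; the real logic is in bunch22.rb below):

```ruby
# sketch of the bunch22 idea: every trajectory only needs to fall below its own
# starting seed, while the sum of current iterates declines strictly monotonically
def resolves_monotonically?(seeds, f)
  current = seeds.dup
  active  = (0...seeds.size).to_a                 # trajectories not yet below their seed
  prev_total = current.sum
  until active.empty?
    group = pick_advance_group(current, active)   # hypothetical greedy/ budget-style choice
    group.each { |i| current[i] = f.call(current[i]) }
    active.reject! { |i| current[i] < seeds[i] }
    total = current.sum
    return false if total >= prev_total           # monotonic decline of the total broken
    prev_total = total
  end
  true
end
```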

bunch22.rb

this overall strategy looks promising because of two key ideas:

  • in prior algorithms eg the “budget” algorithm, mostly monotonically declining totals were observed up until the end where they failed. this could help them terminate in the consistent decline region before reaching the problematic later zone.
  • there is some reason to think that larger starting seeds/ bases can be built out of smaller ones by arbitrarily changing upper msbs, because (roughly) the algorithm makes all its decisions based on lsbs of the trajectories. figuring out the details of that is a key next question to pursue.

(11/20) oh! this is maybe a bit obvious in 20/20 hindsight (this blog is nearly “stream of consciousness/ thinking out loud” at times). a key aspect of this approach is the longest glide in the bunch. the hard seed generation algorithms empirically show arbitrarily long glides exist. so at best this approach can resolve increasing bunches whose longest glides are bounded, which is inherently limited (and it will take at least as many iterations as the longest glide). again the arbitrarily increasing glide lengths are overall a problem. the prior code is roughly biased towards finding bunches where all the glide lengths are shorter.

however, it maybe leads to another way of looking at the problem. consider resolving bunches as advancing through a series of random binary strings of random lengths. the idea above is to try to do so with some kind of “enforceable order” ie some kind of invariant, in this prior case the monotonically declining iterate total.

(11/23) here's some bunch code for an idea that's long occurred to me: a constant-sized “lookahead buffer”, in this case of size 10 (line 22). the code advances the trajectory with the max value in its lookahead buffer, and it all comes out fairly smooth, with the exception of some jumpiness at the beginning, although looks can be deceiving here and some of the flat-looking plateaus are actually very gradually increasing.
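the lookahead selection is roughly the following sketch (hypothetical helper; the actual buffer handling is in bunch24.rb below):

```ruby
# sketch of the lookahead idea: peek a fixed number of steps ahead on each active
# trajectory and advance the one whose lookahead buffer contains the largest value
LOOKAHEAD = 10

def pick_by_lookahead(currents, f)
  peaks = currents.map do |x|
    (1..LOOKAHEAD).map { x = f.call(x) }.max   # max value seen in the next LOOKAHEAD steps
  end
  peaks.index(peaks.max)                       # index of the trajectory to advance
end
```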

bunch24.rb

bunch24

(11/24) speaking of deceptive results, decided to revisit bunch20 and found something surprising. it's “very nonmonotonic” (before the ending gyrations/ spikes), but in a way that is very effectively concealed. based on a quick/ small modification of the code, a metric that counts sequential nonmonotonic steps as a fraction of total steps is graphed below (1st). it's quite substantial except for the highest compression sequence, where it falls dramatically. in other words there are many nonmonotonic steps but they're all small. so immediately am wondering, might a constant-sized buffer that skips the nonmonotonic steps lead to major improvement?
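the metric graphed in the 1st plot is roughly the following (sketch, not the actual bunch20b.rb):

```ruby
# fraction of steps where the bunch total fails to decrease; sketch only
def nonmono_fraction(totals)
  pairs = totals.each_cons(2).to_a
  pairs.count { |a, b| b >= a }.to_f / pairs.size
end

nonmono_fraction([10, 9, 9, 11, 8])   # => 0.5 (two of four steps fail to decrease)
```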

the 2nd graph investigates whether the idea of early termination based on glides only, instead of full trajectories, will help any, and comes up with a negative conclusion. it graphs the percent of trajectories that are still in a glide. somewhat surprisingly, for all the algorithms it drops in the early stage but then returns/ climbs until essentially all of the (remaining) bunch is in glides. therefore the bunch20 algorithm dynamics can be seen to be holding off on advancing many/ “most” glides until the end. this can be seen as something of an emergent property that was not specifically designed for.

bunch20b.rb

bunch20b

bunch20bx

(11/25) did not expect anything spectacular but had all the code lying around & figured might as well try this out (also useful as a rough benchmark). it's the 8 hard seed generation methods vs the bunch20 algorithm, with the nonmono run length graphed. (in 20/20 hindsight, after this ran for a few hours, it would be better to graph the separate generation algorithms in different colors.) the algorithm does “not bad,” still with noticeable uptrends on almost all seed generation methods, except on the increasing random seeds where it does the worst, with dramatic increasing spikes.

bunch26.rb

bunch26

the following code was based on bunch24; it was not very successful and was thinking of skipping posting it, but it took quite a while to debug and has some new ideas. it greedily attacks the trajectory with the largest indicator in the lookahead buffer, and has 5 different ideas for determining indicators. it also greedily finds descending steps to try to match the greedy increase. it has a much different dynamic than the prior bunch24 and shows how the effects of some changes are hard to anticipate. one good thing that can be said is that after the initial noisy regime, there is a 2nd regime that is mostly smoothly declining.

bunch25.rb

bunch25

this is some more rather underperforming code, but it is more streamlined and took a while to debug, so am posting it. the idea behind this code returns to a budget concept. the code is looking for trajectories to advance based on an indicator value. the indicator value is different from the step size for that trajectory. this code greedily looks for the highest indicator value (with an associated delta) that can be “covered” by the current decrease budget.

there are two simple metrics to measure its success. one metric just incrementally adds all the nonnegative weights (green). it can be seen to be slowly increasing even as it appears the total trendline is decreasing (red). the other obvious metric is deltas on the total; here the log deltas are plotted in the 2nd plot. the descent is fairly monotonic, although the algorithm seems to be too greedy at the beginning with the huge cliff. the nonmonotonicity shows up more strongly/ noisily into about the 3rd part.
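for concreteness, and assuming the nonnegative “weights” are the per-step increases of the bunch total (my reading, not spelled out above), the two diagnostics are roughly:

```ruby
# sketch of the two diagnostics: (1) running sum of the nonnegative per-step
# deltas of the total, (2) log of the absolute per-step deltas.
# assumes the "weights" are the per-step increases of the total (an assumption).
def diagnostics(totals)
  deltas = totals.each_cons(2).map { |a, b| b - a }
  pos_running = 0.0
  pos_cumsum  = deltas.map { |d| pos_running += [d, 0].max }
  log_deltas  = deltas.map { |d| d.zero? ? 0.0 : Math.log(d.abs) }
  [pos_cumsum, log_deltas]
end
```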

bunch27.rb

bunch27

bunch27x

 
