collatz 2016— return of the FSMs

hi all, have been playing with collatz and FSM code for over a decade now, but not much recently.[a] got the urge to revisit it. the excellent AT&T FSM library is a very good tool in this regard, but went to download it today and the url fails. is it down? broken forever? have long been thinking it might go down; it's very difficult to keep software going indefinitely. this is a very good package, hope that it is not lost to the ash-bin of history…

so then was hunting down FSM code. there are a lot of state machine implementations in ruby, but there does not seem to be an optimizer (determinizer/ minimizer) written in ruby anywhere. found the OpenFST library; there is a windows build, and it has the optimizer. it has contributions by Mehryar Mohri and maybe others who worked on the AT&T FSM library, and it seems to be nearly a plug-in replacement/ compatible.

it has a default option to output graphviz dot files. however, tried the sfdp rendering on some “not large” files and it failed badly, with all the nodes plotted on top of each other. argh! trying to do some serious science around here with sometimes-seemingly toy tools! the idea was to look for emergent patterns; maybe will have to cook up some other visualization method… the general idea is that the compressed graphs presumably have some half-order mixed in with the (incessant) disorder. and also, any method of creating the sequence of graphs through induction would be equivalent to a proof.

the basic idea behind this code is to output the parity sequences of sequential trajectories starting at a low value as FSMs, then compress/ minimize the FSM and look at the growth of the FSM state counts. there is a lot of redundancy in these sequences, as discovered in many earlier experiments and variations. near the end the determinize step ends up taking ~100MB and the minimize step nearly that much also (ok, so the fsm library turns out to have redeeming qualities). the red line is the original state count and the green line is the minimized version; the final row has values 461697, 12525.
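for concreteness the conversion is roughly of this form (a minimal sketch, not the actual makefsm2.rb; the seed range, output file name, and label shift are placeholder choices):

def parity_seq(n)
  s = []
  while n != 1
    if n.odd?
      s << 1
      n = 3 * n + 1
    else
      s << 0
      n /= 2
    end
  end
  s
end

state = 1                               # state 0 is the shared start state
File.open('parity.fst.txt', 'w') do |f|
  (2..1000).each do |seed|              # sequential trajectories from a low value
    prev = 0
    parity_seq(seed).each do |b|
      f.puts "#{prev} #{state} #{b + 1} #{b + 1}"   # labels shifted by 1, 0 is reserved for epsilon
      prev = state
      state += 1
    end
    f.puts prev.to_s                    # final state ends this trajectory's path
  end
end

then something like fstcompile parity.fst.txt | fstdeterminize | fstminimize does the compression, and fstinfo reports the state counts at each stage.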

fsm2.bat

makefsm2.rb

fsm2

(1/21) here's an idea that occurred to me in thinking over some of the last post ideas. a recent finding was that the trajectories (glides), at least those found by the hard code (and this could be biased), seem to be self-similar in the sense of basically having an uptrend section and then a downtrend section, in contrast eg to a sideways interchanging up/down pattern (which is more in line with random walks). so then it seems interesting to interrogate the uptrend and downtrend separately. this code separates them and looks at the nonmono run length of the (inverted) uptrend and the same for the downtrend. was hoping for a constant-bounded scenario, and that is seen with most of the hard strategies, but this also shows that for some hard seeds it's not there. the red line is for the uptrend and green for the downtrend. esp for the 6th ‘cm’ strategy the red uptrend side has strongly increasing nonmono run lengths even as the downtrend nonmono run length is mostly flat.
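for reference, the nonmono run length metric as used here is roughly the following, a sketch of my working definition for an (overall falling) series — the longest run of consecutive points that fail to make a new minimum; the linked bunch44.rb is the authoritative version:

def nonmono_run_length(xs)
  lo = Float::INFINITY
  run = best = 0
  xs.each do |x|
    if x < lo                # a new minimum resets the run
      lo = x
      run = 0
    else
      run += 1               # another step without a new minimum
      best = run if run > best
    end
  end
  best
end

# the uptrend is inverted (negated) first so the same metric applies to both halves:
#   nonmono_run_length(downtrend)
#   nonmono_run_length(uptrend.map { |x| -x })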

bunch44.rb

bunch44

a quick riff on that last one: it occurred to me to scale the horizontal width of the series by the trajectory (glide) size and calculate the nonmono run width based on the scaled version. that leads to possibly bounded results. so can this be exploited somehow?
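in code terms the riff is just a rescaling of the same statistic (a sketch of one way to do the scaling, reusing nonmono_run_length from above; bunch45.rb is the authoritative version):

# nonmono run "width": the run statistic expressed as a fraction of the series
# length, so glides of different sizes become directly comparable
def nonmono_run_width(xs)
  nonmono_run_length(xs).to_f / xs.size
end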

bunch45.rb

bunch45

(1/29) this code looks at the “upside” and “downside” of the glide for each hard algorithm. each is normalized so that x and y are in the range 0..1 and plotted as red, green respectively. as found from this code/ analysis, some of the algorithms, namely ‘p’, ‘m’, ‘z’, while increasing in trajectory lengths, do not generate significant glides, and this could have a lot to do with explaining some of their different properties as seen previously. the details/ zoomins are for the ‘w’, ‘cg’, ‘cm’, ‘r’ methods. on ‘w’ and ‘r’ the upslope and downslope have noticeably different form/ shape, eg in amount of variance or # of unique paths etc. also noticeable are the self-similar shapes of all the walks, which are not really all that random and generally have small nonmonotonic ranges. however it's not known if this is an “artefact” of the hard generation process or something deeper.
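the upside/ downside split plus 0..1 normalization is roughly as follows (a sketch; it assumes the usual definition of the glide as the trajectory prefix up to the first iterate below the seed, and splits it at the max — 27 is just a familiar example seed):

def collatz_step(n)
  n.odd? ? 3 * n + 1 : n / 2
end

def glide(seed)              # trajectory prefix up to the first iterate below the seed
  g = [seed]
  n = seed
  while n >= seed && n > 1
    n = collatz_step(n)
    g << n
  end
  g
end

def normalize01(ys)          # map the series into the unit square as [x, y] pairs
  lo, hi = ys.min, ys.max
  span = hi == lo ? 1.0 : (hi - lo).to_f
  last = [ys.size - 1, 1].max
  ys.each_with_index.map { |y, i| [i.to_f / last, (y - lo) / span] }
end

g = glide(27)
peak = g.index(g.max)
upside   = normalize01(g[0..peak])     # plotted red
downside = normalize01(g[peak..-1])    # plotted green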

bunch48.rb

bunch48

bunch48w bunch48x bunch48y bunch48z

(2/12) this code also looks at the upslope and downslope of hard seeds (glide region only). the idea here is to look at the neighborhood of the hard seed, in this case the nearest 1K trajectories. the code separates the upslopes and downslopes for all glides longer than 5 steps, scales them horizontally and integrates them. did not expect this, but long glides are extremely rare for large seeds, and mostly less than ~1% of the neighborhood glides have more than 5 steps; therefore one would not expect much smoothing effect. the code then measures the nonmono run length of the resulting series. the red is the upslope and the green is the downslope. for #1 and #6 there is a big disparity between the upslope and downslope statistics. for these hard seeds the statistics are again apparently dominated by the single hard seed in the larger 1K neighborhood.
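the “scale horizontally and integrate” step is roughly the following (a sketch of my reading of it — resample each slope onto a fixed grid and sum pointwise; the bin count is just a placeholder and bunch49.rb is the authoritative version):

def resample(ys, bins = 100)           # stretch/ shrink a series onto a fixed width
  (0...bins).map { |i| ys[i * (ys.size - 1) / (bins - 1)] }
end

def integrate(slopes, bins = 100)      # pointwise sum of the resampled slopes
  sum = Array.new(bins, 0.0)
  slopes.each do |s|
    resample(s, bins).each_with_index { |y, i| sum[i] += y }
  end
  sum
end

# eg: nonmono_run_length(integrate(upslopes)) vs nonmono_run_length(integrate(downslopes))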

bunch49.rb

bunch49

this code roughly repeats an earlier test over all the hard seed patterns, using the compressed trajectory iteration function. it measures the difference between the glide length of the hard seed and that of its sibling, in red. the glide length difference is interesting in that the results seem to flip between either bounded or unbounded. the difference in max glide value between the glide and its sibling is in green and looks consistently bounded.
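for reference, by the compressed iteration function here I mean the usual compressed form of the collatz map (a sketch of that assumption; the sibling construction itself is in bunch50.rb):

# compressed collatz map: halve even n, take (3n+1)/2 in a single step for odd n
def compressed_step(n)
  n.odd? ? (3 * n + 1) / 2 : n / 2
end

def glide_length(seed)                 # steps until the first iterate below the seed
  n, len = seed, 0
  while n >= seed && n > 1
    n = compressed_step(n)
    len += 1
  end
  len
end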

bunch50.rb

bunch50

(2/19) came up with the idea of a “sibling cascade”. a larger-width seed has a whole series of siblings, as many as there are bits in the seed, and each one can be analyzed independently. a striking property revealed by some experiments is that some key statistics can be computed/ estimated roughly via linear interpolation. for example all the siblings have their own glide lengths, and they’re roughly linearly increasing. same with the max glide value and the index of the max glide value.

and series composed of these statistics can also be analyzed for other key metrics such as nonmono run length etc.; and this all has connections to the idea of “total induction”, where a glide length or trajectory depends fairly closely on all the prior cascade trajectories. (and ofc, again, the incessant plaguing question: how to prove this in general?)

this next experiment looks at nonmono run lengths of sequences derived from the sibling cascade: glide length, max glide value, total glide “area”, and glide max index. all the statistics have increasing nonmono run lengths, some more dramatic than others. the 1st graph is the nonmono run lengths of the max glide value and total glide area sequences (green, red resp.). the nonmono run lengths of the glide length and glide max index sequences track each other very closely in the 2nd graph (red, green resp.).
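to make the cascade concrete, a sketch under the assumption that the k-th sibling is the seed with bit k flipped (the exact construction is in bunch51c.rb; the derived statistic shown is the glide length from the sketch above, and the other statistics are collected the same way):

def sibling_cascade(seed)
  (0...seed.bit_length).map do |k|
    sib = seed ^ (1 << k)              # flip bit k to get the k-th sibling
    glide_length(sib)                  # or max glide value, glide "area", max index...
  end
end

# the nonmono run lengths of these cascade sequences are what get plotted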

bunch51c.rb

bunch51x

bunch51y

 

this next experiment separates out the upslope (red) and downslope (green) of all the cascade siblings, scales them horizontally and vertically, integrates them, and then looks at the nonmono run length of the resulting sequence, and finds it's very smooth/ orderly. this contrasts with the neighborhoods of large hard seeds, which as found above do not smooth out the overall integrations much when combined with the hard trajectory, because the hard trajectory is like a “needle” in a haystack of mostly short trajectories that tend to be correlated with each other.

bunch53.rb

bunch53

a. fsm