Category Archives: collatz

collatz pivot/ new ideas

hi all, have been working on some other ideas re A(G)I, heavily promoting them all over cyberspace + analyzing/ collecting copious references, and haven't been banging on collatz quite as much the last few weeks. honestly it's a bit of a (well deserved) break or respite. however, it's always at the back of my mind. feel that am getting close to a solution but there's a lot of trickiness/ subtlety in the current stage.

here is a new analogy/ pov. the linear regression is finding a “global/ local gradient”. for the theoretical trajectory it is both; for the actual trajectory there are local perturbations/ disturbances/ noise fluctuations superimposed on the global trend. the picture is something like the wind blowing a leaf. the leaf has a very definite position but does a sort of multi-dimensional (3d) random walk in the wind, while the wind is a general trend. now the basic idea/ question is whether the leaf will land at a given location/ circumscribed area given a predictable/ consistent wind dynamic.

further thought: another way of looking at it is that the leaf has a very dynamic/ even sharp response to the wind depending on its current orientation, and it also has an internal momentum. actually since the (real) wind is typically so dynamic whereas the linear regression is fixed (although arriving at the final regressions was dynamic, cf earlier saga of that), one might instead use the similar analogy of an irregularly shaped object in a (more consistent/ uniform) fluid flow, maybe even a field.
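(to make the leaf-in-the-wind picture concrete, here is a throwaway 1d sketch, entirely separate from the actual regression code; the drift and noise constants are arbitrary stand-ins for the “wind” and the “fluttering”.)

```ruby
# toy illustration only: a 1d walk with a fixed downward drift ("wind")
# plus zero-mean local noise ("leaf fluttering"); all constants arbitrary.
drift = -0.05      # the global trend (what the regression would supply)
noise = 0.5        # scale of the local perturbations
x = 100.0          # starting "altitude"
steps = 0
while x > 0.0
  x += drift + noise * (2.0 * rand - 1.0)   # trend + fluctuation each step
  steps += 1
end
puts "landed (x <= 0) after #{steps} steps"
```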

Continue reading


collatz ad infinitum

threw this code together and it was unusually/ surprisingly time-consuming to debug yet has rather little code/ logic. the difficulty is related to the early-discovered property of how skewed (nonlinear) a random sample of trajectory distributions is. nearly all are very short and it seems quite possible it's in line with some power law (never tried to fit it, but do have the overwhelming urge to, and to just go hunting for as many power-law properties in the data as one can find; strongly suspect they might be quite widespread).
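(for reference, a rough sketch of how one might check that hunch, not tied to the actual code in this post: collect trajectory lengths for uniformly random seeds and bucket them on a log scale. a roughly straight log-log histogram would be weak evidence for a power law; the seed range, sample size, and bucketing are all arbitrary.)

```ruby
# rough power-law sanity check, separate from the actual code in this post:
# collect trajectory (stopping) lengths for uniformly random seeds and bucket
# them on a log2 scale; constants and sample size are arbitrary.
def traj_len(n)
  c = 0
  until n == 1
    n = n.even? ? n / 2 : 3 * n + 1
    c += 1
  end
  c
end

lens = Array.new(500) { traj_len(rand(2**20) + 2) }
hist = Hash.new(0)
lens.each { |l| hist[Math.log2(l).floor] += 1 }     # log2-sized buckets
hist.sort.each { |b, cnt| puts "len ~2^#{b}\t#{cnt}" }
```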

the idea is to try to bias (or unbias depending on pov) the sample of randomly chosen trajectories (based on an initial density spread sample) so that they are nearly linearly distributed in length. was developing the test data to test the trajectory (meta function) calculation logic. to be more exact, was partly working off of the matrix6 code and wanting to improve the aesthetic graph results with “more linearity” in the sample. did not end up with meta function calculation logic worth saving tied to it, but given how tricky it was, do want to save this sampling code for future reference! this generates 10 samples of 500 points over uniform density, then finds the min and ¾ of the max of the trajectory lengths, and tries to sample linearly over that range (starting top down), and the result is quite smooth as in the graph (of trajectory length).
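(the actual listing is behind the link below; in the meantime here is a hedged, standalone reconstruction of the same sampling idea, with the seed width, pool size, and nearest-length matching all guessed rather than taken from the matrix6 code.)

```ruby
# hedged reconstruction of the sampling idea, not the actual matrix6 listing:
# build a pool of trajectories from seeds of uniformly random 1-bit density,
# then keep one trajectory per target length, with targets spaced linearly
# from ~3/4 of the max length down to the min (top down).
def traj_len(n)
  c = 0
  until n == 1
    n = n.even? ? n / 2 : 3 * n + 1
    c += 1
  end
  c
end

def seed(bits = 100)       # random 1-bit density in (0,1), msb forced to 1
  d = rand
  ('1' + (bits - 1).times.map { rand < d ? '1' : '0' }.join).to_i(2)
end

pool = Array.new(10 * 500) { s = seed; [traj_len(s), s] }
lens = pool.map(&:first)
lo, hi = lens.min, lens.max * 3 / 4
sample = 500.times.map do |i|
  target = hi - (hi - lo) * i / 499          # linear targets, top down
  pool.min_by { |l, _| (l - target).abs }    # nearest available trajectory
end
puts sample.map(&:first).first(10).inspect
```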

Continue reading

collatz, loose ends

as mentioned in the last post, am zooming in on the “power iteration” algorithm. it is usually explained as, “if you want to find the dominant eigenvector, use the power iteration”. in my case, found it the other way around, by discovering that “if you use the power iteration, it will lead to the dominant eigenvector”. kind of subtle right? maybe reminds me of that old saying “all roads lead to rome”. and then ofc, the classic, “rome wasn't built in a day”.
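(for reference, a minimal power-iteration sketch on a toy matrix, nothing to do with the actual data here: repeatedly multiply by the matrix and renormalize, and the vector converges to the dominant eigenvector whenever one eigenvalue strictly dominates in magnitude.)

```ruby
# minimal power-iteration sketch on a toy 2x2 matrix (placeholder data):
# repeated multiply-and-renormalize converges to the dominant eigenvector.
def power_iteration(a, iters = 200)
  v = Array.new(a.size) { 1.0 / a.size }                 # arbitrary start
  iters.times do
    w = a.map { |row| row.zip(v).sum { |x, y| x * y } }  # w = A * v
    norm = Math.sqrt(w.sum { |x| x * x })
    v = w.map { |x| x / norm }                           # renormalize
  end
  v
end

p power_iteration([[2.0, 1.0], [1.0, 3.0]])
```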

here is the code that compares the (normalized) current state vector to the dominant eigenvector, which ruby apparently stores as the leftmost column of the left eigen-decomposition matrix. it uses/ selects the 95th/100th density iteration which tends to lead to a longer trajectory. am in good company, as wikipedia notes the power iteration is used at the core of Google's pagerank algorithm! 😀 😎 💡 ⭐ ❗ ❤
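(the full listing is behind the link below; purely as a standalone illustration of that comparison, with placeholder matrices rather than the real ones, the ruby stdlib version looks something like this.)

```ruby
require 'matrix'

# standalone illustration with placeholder data, not this post's actual code:
# pull the eigenvector for the largest-magnitude eigenvalue out of ruby's
# eigen-decomposition and measure alignment with a normalized state vector
# (|inner product| near 1.0 means nearly parallel; the sign is arbitrary).
a = Matrix[[2.0, 1.0], [1.0, 3.0]]                 # stand-in matrix
e = a.eigensystem
dom = e.eigenvalues.each_with_index.max_by { |ev, _| ev.abs }[1]
dominant = e.eigenvectors[dom].normalize

state = Vector[0.4, 0.9].normalize                 # hypothetical state vector
puts dominant.inner_product(state).abs
```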

Continue reading

collatz perplexity

hi all, last month's collatz installment made some progress, but unf have been a bit tied up with work, where a fiscal year transition/ boundary tends to lead to some crunch-like dynamics, leaving less time for one of my favorite side projects, namely this one, bummer/ ouch.

but, here is a small trickle/ dribble of some newer ideas, mainly benefitting from google searches and maybe a little algebra.

Continue reading

collatz adversarial attack and the light at end of tunnel

❗ 💡 😀 😎 ⭐ last month was quite a tour de force against collatz, the culmination of many months or even years of hard work and creative ideas, with many different approaches all combined (pyramid-like) to lead to very solid results verging on a “candidate solution”. another theme that was pursued earlier here is “adversarial algorithms”, which have been used with great success eg by Google/ Deepmind against Go. the basic theme is “two algorithms competing against each other” so to speak. along these lines there is one final idea to try against the prior Collatz “solution”.
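(just to pin down the “two algorithms competing” theme, a toy illustrative sketch, not the actual attack on the prior solution: a crude “defender” predicts trajectory length from bit width, and a greedy “attacker” flips bits of a seed trying to maximize the prediction error. the predictor constant, bit width, and search budget are all arbitrary.)

```ruby
# toy adversarial loop, purely illustrative (not the attack on the prior
# candidate solution): defender = crude length predictor, attacker = greedy
# single-bit flips that try to maximize the prediction error.
def traj_len(n)
  c = 0
  until n == 1
    n = n.even? ? n / 2 : 3 * n + 1
    c += 1
  end
  c
end

predict = ->(n) { 10.0 * Math.log2(n) }        # defender: arbitrary linear model
err = ->(n) { (traj_len(n) - predict.call(n)).abs }

bits = 60
n = rand(2**bits) | 1 | (1 << (bits - 1))      # random odd seed, msb set
500.times do
  cand = n ^ (1 << rand(bits - 1))             # attacker: flip a random low bit
  n = cand if cand > 1 && err.call(cand) > err.call(n)
end
puts "adversarial-ish seed #{n}, prediction error #{err.call(n).round(1)}"
```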

Continue reading