Category Archives: collatz

collatz summer vac

the title is semi-ironic, because it seems there is never a vacation with a hard problem. only a hiatus? did go on a trip last week and had a great time, didn't think about math much at all! which is a good thing! work/ life balance and all that! although for some, math is life! would go into more juicy detail for all my loyal readers (like last year's epic saga) but alas, haven't heard from any of you in ages so not sure you really exist 😳 o_O 😥

just picture me in tattered/ dusty/ dirty clothes on the side of a busy cyberhighway, weathered/ sunburned/ wrinkled/ unshaven skin, sitting in the blazing hot sun with a cardboard sign scrawled with marker, plaintive million-mile stare into the distance…

will write for comments! 😐 😳 o_O 🙄


collatz matrix model quantified

💡 ❗ ⭐ this code has a lot of moving parts and took quite a while, but is also a culmination of many previous ideas. it has the same distribution generation mechanism as the last run (slightly modified/ adjusted), but it's a 1st cut on evaluating the significance/ accuracy of the MDE/RR model as a predictor of trajectory length. the better the accuracy of its modelling, the more "plausible" an ("analytic"!) model it is, and the more viable for exploiting for further derivations.
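to make the evaluation idea concrete, a minimal sketch (hypothetical, not the actual run code) of scoring a trajectory-length predictor against ground truth; `model_predict` here is just a stand-in for the MDE/RR model, using the standard ~4.8-steps-per-bit average stopping time heuristic:

```python
# minimal sketch: score a predictor of collatz trajectory length against
# the actual lengths. model_predict is a hypothetical stand-in for the
# MDE/RR model, not the real thing.
import random
import statistics

def trajectory_length(n):
    """collatz steps until the iterate reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def model_predict(n):
    # placeholder: the standard heuristic that average total stopping time
    # is ~(2/ln(4/3))*ln(n), ie roughly 4.8 steps per bit of the seed
    return 4.8 * n.bit_length()

seeds = []
for _ in range(200):
    w = random.randrange(20, 61)                 # mix of bit widths
    seeds.append(random.getrandbits(w) | (1 << (w - 1)))

actual = [trajectory_length(s) for s in seeds]
pred = [model_predict(s) for s in seeds]
print("mean abs error:", statistics.mean(abs(a - p) for a, p in zip(actual, pred)))
print("correlation:", statistics.correlation(actual, pred))   # python 3.10+
```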


springtime with Collatz, the Sea Monster, and a (Very?) Big Idea

hi all, have gotten a sizeable spike in hits over the last week on collatz related blog posts! it seems to be traceable to an old reddit page on collatz from jun 2015 by level1807 talking about using a new feature in Mathematica to graph the collatz conjecture. profiled that finding/ graph myself here in this blog around that time. not sure how people are finding that page again, but googling around, it looks like this graph is now immortalized in a mathematical coloring book, which was announced in a recent March 28th numberphile video that is just a few thousand short of ~200K hits at the time of this writing (maybe gaining a few tens of thousands more within days!), and was profiled the same day by popular mechanics blogger weiner under the title of the Sea Monster. so, essentially viral, but putting the bar a little lower for mathematics! and as for the "elephant in the room," much to my amusement/ chagrin the video never once uses the word fractal (bipolar moods again attesting to a long-term love-hate relationship, not to mention the other (mal?)lingering facet of mania-depression!).

and this coincides very nicely with my announcement of the following. have been making some big hints lately and think finally have a Big Picture/ Big Idea from the most recent experiments. (yeah, no hesitation in the open Big Reveal on a mere blog after years of a similar routine…)

what is looking very plausible at this point is a formula in the form of a matrix difference equation/ matrix recurrence relation. the devil is of course in the details, but here's a rough sketch. prior experiments have some "indicator metrics" that are based mainly on binary density of iterates, and other "surface-like" aspects such as 0/1 run lengths etc… and it's now shown that these are strong enough to predict future iterate sizes (10 steps ahead for now) with some significant degree of accuracy.
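in symbols the target is a recurrence of the form x_{t+1} ≈ A x_t over feature vectors of iterates. a hedged sketch of the construction, assuming simple surface features and a plain least-squares fit in place of the actual fit24 code:

```python
# hedged sketch: extract simple surface features from each iterate and
# least-squares fit a linear recurrence over them. the actual feature set
# and fit (fit24) differ; this just shows the shape of the construction.
import random
from itertools import groupby
import numpy as np

def features(n):
    b = bin(n)[2:]
    runs = [len(list(g)) for _, g in groupby(b)]    # 0/1 run lengths
    return np.array([
        len(b),                     # bit width of the iterate
        b.count('1') / len(b),      # binary density
        max(runs),                  # longest 0/1 run
        sum(runs) / len(runs),      # mean run length
    ])

def collatz(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

n = random.getrandbits(100) | (1 << 99)             # random 100-bit seed
feats = []
while n != 1 and len(feats) < 400:
    feats.append(features(n))
    n = collatz(n)

X, Y = np.array(feats[:-1]), np.array(feats[1:])
A, *_ = np.linalg.lstsq(X, Y, rcond=None)           # Y ≈ X @ A

# roll the fitted recurrence 10 steps forward from step 50 and compare
# the predicted bit width against the actual iterate 10 steps later
x = X[50]
for _ in range(10):
    x = x @ A
print(f"predicted width: {x[0]:.1f}, actual width: {X[60][0]:.0f}")
```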


collatz, tightening screws on general invariant

this tightens the screws some )( on the prior findings and shows they generalize nicely. the prior analysis seems very solid, but there is always the shadow of a question of bias in the generation algorithms, which start from small seed sizes and build larger ones out of smaller ones. an entirely different strategy for generating sample seeds is used here, based on a genetic algorithm. the idea is to start with a fixed bit width and alter the bits in it just like "genes". fitness is based on glide length (or equivalently 'h2' for a constant seed width). it starts with 20 random parents of a given length. there is 1 mutation operator and 2 crossover operators. 1 crossover operator takes contiguous bits from each parent split at a random cutoff/ crossover point (left to right), and the other just selects bits randomly from the parents without regard to contiguous position. fit24 is again used for the linear regression fit. these runs are for 50, 80, 100 bit sizes with ~200 points generated for each, over 50K iterations.
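a rough code sketch of that GA follows (parameter choices illustrative, not the exact run settings); the genome is a fixed-width bit string with the top bit pinned so the seed width never changes:

```python
# rough sketch of the GA described above: fixed-width bit-string genomes,
# fitness = glide length, 20 parents, 1 bit-flip mutation operator and
# 2 crossover operators (single-point contiguous, and positionwise random).
# parameters are illustrative, not the exact run settings.
import random

WIDTH, POP, GENS = 50, 20, 1000

def glide(n):
    """steps until the trajectory first drops below the starting seed."""
    n0, steps = n, 0
    while n >= n0 and steps < 10_000:   # cap keeps this finite regardless
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def to_int(bits):
    return int(''.join(map(str, bits)), 2)

def rand_genome():
    return [1] + [random.randint(0, 1) for _ in range(WIDTH - 1)]  # pin top bit

def mutate(g):
    g = g[:]
    g[random.randrange(1, WIDTH)] ^= 1          # flip 1 random (non-top) bit
    return g

def cross_point(a, b):
    cut = random.randrange(1, WIDTH)            # contiguous left/right split
    return a[:cut] + b[cut:]

def cross_uniform(a, b):
    return [random.choice(pair) for pair in zip(a, b)]  # positionwise pick

pop = [rand_genome() for _ in range(POP)]
for _ in range(GENS):
    a, b = random.sample(pop, 2)
    child = random.choice([mutate(a), cross_point(a, b), cross_uniform(a, b)])
    pop.append(child)
    pop.sort(key=lambda g: glide(to_int(g)), reverse=True)
    pop = pop[:POP]                             # keep the 20 fittest

print("best glide found:", glide(to_int(pop[0])))
```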

because of the declining # of points for higher widths, this is circumstantial evidence that as widths increase, long glides (relative to width, ie 'h2') are (1) increasingly hard to find and/ or (2) rare. these two aspects interrelate but are not necessarily exactly equivalent: hardness of finding seeds with certain qualities, ie computational expense, does not necessarily mean they're rare. an example might be the RSA algorithm: finding large primes is somewhat computationally expensive (though primality testing is in P) but they're not so rare. technically/ theoretically, rareness and computational expense are related through probabilistic search aka "randomized algorithms".
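the link can be made concrete with a tiny sampling sketch: estimate the density of long-glide seeds at a fixed width by uniform random sampling, so that expected search cost per hit is ~1/density samples. threshold/ trial settings below are arbitrary:

```python
# tiny sampling sketch of the rareness/expense link: estimate the density
# of long-glide seeds at a fixed width by uniform random sampling. the
# threshold and trial count are arbitrary/ illustrative.
import random

def glide(n):
    n0, steps = n, 0
    while n >= n0 and steps < 10_000:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

WIDTH, TRIALS, THRESH = 50, 20_000, 60
hits = sum(1 for _ in range(TRIALS)
           if glide(random.getrandbits(WIDTH) | (1 << (WIDTH - 1))) >= THRESH)
if hits:
    print(f"density ≈ {hits / TRIALS:.1e}; ≈ {TRIALS // hits} samples per hit")
else:
    print(f"no hits in {TRIALS} samples; density likely below {1 / TRIALS:.0e}")
```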
