hi all. this is a fairly short update, maybe with additions later. last may, I posted a codegolf.se challenge on collatz based on some earlier ideas documented on this blog, about iterating sequential starting trajectories in parallel to get a monotonic decrease in the overall sum. this led to a basic conjecture asking whether that is even possible. so this is both a case of a dream come true, and of be careful what you wish for, you might get it!
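for context, a minimal sketch of the basic setup (helper names hypothetical, not the actual challenge code): advance a block of consecutive starting seeds one collatz step each “in parallel” and watch the overall sum; the conjecture asks whether steps can be chosen/ ordered so the sum decreases monotonically (this naive sweep makes no such choice and is not monotonic in general).

```ruby
# collatz step: halve evens, 3n+1 on odds (hypothetical helper names)
def collatz_step(n)
  n.even? ? n / 2 : 3 * n + 1
end

# advance every unfinished trajectory one step & return the new total
def step_all(l)
  l.map! { |n| n > 1 ? collatz_step(n) : n }
  l.sum
end

l = (200..400).to_a
5.times { puts step_all(l) }
```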

that post sat dormant/ unanswered for ~¾ year, until recently I cited it in a codegolf meta question asking about codegolfers putting together a site history question/ answer. 💡

youd think that (hardcore) site regulars would like the idea of promoting their site, but se users can be very fickle. so, somewhat both against/ within expectation, and despite highly visible/ successful precedents on other se sites, that idea did not get much appreciation or traction ~~(its currently at -2 votes)~~ *(just got deleted?!?)* *( @#$% doncha just luv se)* 😮 😡

anyway the meta question also cited my collatz question, and it drew new votes, attention, and even two submissions! so the collective intelligence of stackexchange eventually kicked into gear, albeit belatedly.

one submission does not seem significant, but the other, by user “randomra”, is a rather complex python solution that apparently succeeds in purely monotonic decreases over significant intervals. unfortunately he didnt describe his rather tricky code at all, only posted it “as is”. but the graphs looked very intriguing.

spent HOURS trying to decipher this code & STILL dont have a full grasp of it! but finally decided just to port it to ruby for further study/ analysis. the port seems to be a faithful copy, however there is some sorting logic over lists of equal sizes, and the original algorithm is therefore somewhat “nondeterministic” in how it breaks ties. in fact it seems to (accidentally/ unintentionally!) depend on this nondeterminism for the range [200, 400], because the output of the ruby code is slightly NONmonotonic. he apparently intended it to work on the range [n, 2n].
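the tie sensitivity can be seen in miniature: the dynamic programming step compares a “retain” cost rg against a “descend” cost dg, and when the two are exactly equal, a strict vs non-strict comparison picks different (equally optimal) traces. a hypothetical illustration, not randomra’s code:

```ruby
# when rg == dg, two equally-optimal choices exist; the strict comparison
# (rg > dg) selects one branch and the non-strict (rg >= dg) the other,
# which is the source of the apparent "nondeterminism" at ties
rg = dg = 5.0
strict    = rg >  dg ? 1 : 0
nonstrict = rg >= dg ? 1 : 0
puts [strict, nonstrict].inspect
```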

intend to bang on this some more, but thought it was worth sharing/ archiving for now. the 2nd diagram is the “x” range [500..1000], the 3rd “y” [1000..2000], the 4th “z” [2000..4000].

the algorithm “seems to do the right thing” but does not at all make clear how it relates to the original problem of picking iterates out of the original list, and loses that information in its operation, so adding it back would be one of the 1^{st} priorities in further work. right now dont quite understand: are the long flat lines/ “plateaus” possibly indicative of not picking *any* iterate *at all* from the list? generated the same graphs for the python code and there are indeed small differences in the graphs.

❗ ⭐ this post is also filed under **“breakthrough”** because finally, after over ~2½ yr of copious blogging & lots of promotion/ encouragement/ invitations, “someone else” (2 ppl!) is *finally* (“technically!”) building on research ideas presented on this site! another dream come true! with all the massive cyberspace collaboration on open source, it seems surprising it would be so challenging to get volunteers for open science projects and very famous problems, but that is indeed the case.

my explanation is that, as long noticed (by Goldhaber, *1997*), *attention/ engagement* is the truly scarce resource/ commodity of the cyber age. easier said than done! its an at-times harsh reality all bloggers & collaboration/ open science advocates have to face. the same aspect can be found in se chat rooms: many highly intelligent ppl like to hang out in half-serious chats, incl on highly technical subjects such as eg programming or physics, but it tends to be quite fleeting/ transitory, and building *sustainable/ continuous projects* is quite challenging. 😐

❗ **(update)** randomra read this post & has already reacted! he already has new running code that tracks the list indexes! hes also planning a new codegolf collatz challenge, plz upvote if you get a chance! 😀

```ruby
def pen(n)
  n > 0 ? n : 0
end

def itfull(n)
  l = [n]
  while (n > 1)
    n = n.even? ? n / 2 : (n * 3 + 1) / 2
    l << n
  end
  return l
end

def gen(sp, c, r)
  sc = [[Float::INFINITY, 0]]
  (1...r.size).each \
  { |i|
    rg = sc[i - 1][0] + pen(r[i])
    dg = sp[i - 1][0] + pen(r[i] + c)
    sv = [rg, dg].min
    sd = rg > dg ? 1 : 0
    sc += [[sv, sd]]
  }
  return sc
end

def traceandadd(s, c, r)
  h = c.size - 1
  rn = r.dup
  (r.size - 1).downto(0).each \
  { |i|
    if (s[h][i][1] == 1) then
      rn[i] += c[h]
      h -= 1
    end
  }
  return rn
end

def addfullit(c, r)
  s = [[[0, 0]]]
  (1...r.size).each { |i| s[0] += [[s[0][-1][0] + pen(r[i]), 0]] }
  (0...c.size - 1).each { |i| s += [gen(s[i], c[i + 1], r)] }
  rn = traceandadd(s, c, r)
  return rn
end

def sum(l)
  t = 0
  l.each { |x| t += x }
  return t
end

n1 = 200
n2 = 400
a = (n1...n2 + 1).to_a
b = a.map { |n| itfull(n) }
c = b.sort_by { |l| l.size }.reverse
cd = c.map { |l| (1...l.size).to_a.map { |i| l[i] - l[i - 1] } }
r = [0] * ((c[0].size + 1) * 2)
cd.each { |cde| r = addfullit([0] + cde, r) }
rf = [r[0] - sum(r)]
(1...r.size).each { |i| rf += [rf[-1] + r[i]] }
rf.each { |x| puts(x) }
```

```python
import matplotlib.pyplot as plt
#from random import randrange

def itOne(n):
    return 1 if n==1 else (n*3+1)//2 if n%2 else n//2
    ###alternative (unreachable second return in the original paste)
    ##return 1 if n==1 else n*3+1 if n%2 else n//2

def itFull(n):
    ni=n
    cs=[ni]
    while ni>1:
        ni=itOne(ni)
        cs.append(ni)
    return cs

def pen(n):
    return n**1.0 if n>0 else 0
    ###alternatives
    ##return n**1.0 if n>0 else 0 #-(abs(n)**1.1)/1000

def gen(sp,c,r):
    if c==-10:
        aaa=1
    sc=[(float('inf'),0)]
    for i in range(1,len(r)):
        rg=sc[i-1][0]+pen(r[i])
        dg=sp[i-1][0]+pen(r[i]+c)
        sv = min( rg, dg )
        sd = int( rg > dg )
        ###alternatives
        ##sd = int( rg >= dg )
        ##if rg==dg:
        ##    sd=randrange(2)
        sc+=[(sv,sd)]
    return sc

def traceAndAdd(s,c,r):
    h=len(c)-1
    rn=r[:]
    for i in range(len(r)-1,-1,-1):
        if s[h][i][1] == 1:
            rn[i]+=c[h]
            h-=1
    return rn

def addFullIt(c,r):
    s=[ [(0,0)] ]
    for i in range(1,len(r)):
        s[0]+=[(s[0][-1][0]+pen(r[i]),0)]
    for i in range(0,len(c)-1):
        s+=[gen(s[i],c[i+1],r)]
    rn=traceAndAdd(s,c,r)
    return rn

n1=200#0
n2=400#0
a=list(range(n1,n2+1))
b=list(map(itFull,a))
c=sorted(b,key=lambda t:len(t),reverse=True)
r=[0]*((len(c[0])+1)*2)
cd=list(map(lambda l: list(map(lambda i,k:i-k, l[1:], l[:-1])), c))
rds=[]
for cde in cd:
    ###progress mark
    ##print('*',end='')
    cde=[0]+cde
    ro=r[:]
    r=addFullIt(cde,r)
    rd=list(map(lambda i,k:i-k, r,ro))
    rds+=[rd]
###reiteration, doesn't help much
##for cde,rde in zip(cd,rds):
##    print('*',end='')
##    cde=[0]+cde
##    r=list(map(lambda i,k:i-k, r,rde))
##    r=addFullIt(cde,r)
rf=[r[0]-sum(r)]
for i in range(1,len(r)):
    rf+=[rf[-1]+r[i]]
rps=sum(map(pen,r))
print('max: ',max(rf),', pen: ',rps)
plt.plot(rf)
plt.show()
```

⭐ ⭐ ⭐

**(3/12)** this is a novel, interesting idea that occurred to me yesterday, and strangely it has a connection to a question discussed in CS chat about “closure” of the `cons` operation. the idea works on any random walk but here is applied to collatz. it groups “local” runs of values that exceed a minimum, giving a very useful grouping of local “height” regions. the output is for integers 1-50. the same grouping is also applied to the mod 2 values of the iterates, so the groupings are output in pairs.

there are several ways to analyze these results. one simple idea is to scan the parenthetical group expression and just pay attn to the parentheses, which define a 2-directional random walk (corresponding to “open” and “close” parentheses); this leads to graphs somewhat similar to some earlier posted ones. another natural idea is to look at the tree structure of the parenthetical expressions. that takes somewhat trickier analysis (ie a recursive approach) & am intending to write that up shortly.
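the 1^{st} idea can be sketched in a few lines (hypothetical helper, not the code below): keep only the parentheses of a group expression and read ‘(’ as +1, ‘)’ as -1, giving the running heights of a 2-directional walk.

```ruby
# turn the parentheses of a group expression into a +/-1 walk; all other
# characters (the iterate values) are ignored
def paren_walk(s)
  h = 0
  s.chars.select { |c| '()'.include?(c) }
         .map { |c| h += (c == '(' ? 1 : -1) }
end

# a group expression from the output further below (starting value 3)
p paren_walk('((3 (( 10 5 ( 16 8 )) 4 )) 2 )')
# => [1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
```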

```ruby
def f(n)
  l = []
  l2 = []
  while (n > 1)
    l << n
    l2 << n % 2
    n = n.even? ? n / 2 : 3 * n + 1
  end
  return l, l2
end

def add(l, l2)
  return if (l2.size <= 1)
  r = (l2[0]..l2[-1])
  l.push(r) if (!l.member?(r))
end

def nest(l, l2)
  s = ''
  s << l2[0]
  l.size.times { |i| s << l[i].inspect + ' ' + l2[i + 1] + ' ' }
  puts(s)
  return s
end

def run(n)
  l, l1 = f(n)
  #p(l)
  l4 = []
  l.sort.each \
  { |x|
    l2 = (0...l.size).to_a.select { |i| l[i] >= x }.map { |y| y + 1 }
    l3 = []
    while (!l2.empty?)
      l3 << l2.shift
      if (l3.size >= 2 && l3[-2] + 1 != l3[-1]) then
        add(l4, l3[0..-2])
        l3[0..-2] = []
        next
      end
    end
    add(l4, l3[0..-1])
  }
  #p(l4)
  l2 = (1..l.size + 2).map { '' }
  l4.each \
  { |r|
    l2[r.first - 1] = l2[r.first - 1] + '('
    l2[r.last] = ')' + l2[r.last]
  }
  #p(l2)
  nest(l, l2)
  s = nest(l1, l2)
  puts
  return s
end

(1..ARGV[0].to_i).each { |n| run(n) }
```

2 0 ((3 (( 10 5 ( 16 8 )) 4 )) 2 ) ((1 (( 0 1 ( 0 0 )) 0 )) 0 ) (4 2 ) (0 0 ) (((5 ( 16 8 )) 4 ) 2 ) (((1 ( 0 0 )) 0 ) 0 ) ((6 3 (( 10 5 ( 16 8 )) 4 )) 2 ) ((0 1 (( 0 1 ( 0 0 )) 0 )) 0 ) ((((7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) ((((1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) ((8 4 ) 2 ) ((0 0 ) 0 ) (((((9 ( 28 14 )) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) (((((1 ( 0 0 )) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) (((10 5 ( 16 8 )) 4 ) 2 ) (((0 1 ( 0 0 )) 0 ) 0 ) (((((11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((12 6 ) 3 (( 10 5 ( 16 8 )) 4 )) 2 ) (((0 0 ) 1 (( 0 1 ( 0 0 )) 0 )) 0 ) (((((13 ( 40 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((1 ( 0 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((14 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) ((((0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) (((((15 (( 46 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((16 8 ) 4 ) 2 ) (((0 0 ) 0 ) 0 ) ((((((17 ( 52 26 )) 13 ( 40 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((1 ( 0 0 )) 1 ( 0 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((18 9 ( 28 14 )) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) (((((0 1 ( 0 0 )) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) ((((((19 (( 58 29 ( 88 44 )) 22 )) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((1 (( 0 1 ( 0 0 )) 0 )) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((20 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((0 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((21 ( 64 32 )) 16 ) 8 ) 4 ) 2 ) (((((1 ( 0 0 )) 0 ) 0 ) 0 ) 0 ) (((((22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((1 ( 0 
1 (( 0 1 ( 0 0 )) 0 ))) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((24 12 ) 6 ) 3 (( 10 5 ( 16 8 )) 4 )) 2 ) ((((0 0 ) 0 ) 1 (( 0 1 ( 0 0 )) 0 )) 0 ) (((((((25 ( 76 38 )) 19 (( 58 29 ( 88 44 )) 22 )) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((((1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 0 )) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((26 13 ( 40 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((0 1 ( 0 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((((27 (( 82 41 ( 124 62 )) 31 (( 94 47 (( 142 71 (( 214 107 (( 322 161 ( 484 242 )) 121 ( 364 182 ))) 91 (( 274 137 ( 412 206 )) 103 (( 310 155 ((( 466 233 ( 700 350 )) 175 ( 526 263 (( 790 395 (( 1186 593 ( 1780 890 )) 445 ( 1336 668 ))) 334 ))) 167 (( 502 251 (( 754 377 ( 1132 566 )) 283 (( 850 425 ( 1276 638 )) 319 ((( 958 479 (( 1438 719 (( 2158 1079 ( 3238 1619 (( 4858 2429 ( 7288 3644 )) 1822 ))) 911 (( 2734 1367 ( 4102 2051 (( 6154 3077 ( 9232 4616 )) 2308 ))) 1154 ))) 577 ( 1732 866 ))) 433 ( 1300 650 )) 325 ( 976 488 ))))) 244 ))) 122 )))) 61 ( 184 92 ))) 46 ))) 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((((1 (( 0 1 ( 0 0 )) 1 (( 0 1 (( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ((( 0 1 ( 0 0 )) 1 ( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ))) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 1 ((( 0 1 (( 0 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ))) 1 ( 0 0 ))) 1 ( 0 0 )) 1 ( 0 0 ))))) 0 ))) 0 )))) 1 ( 0 0 ))) 0 ))) 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((28 14 ) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) (((((0 0 ) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) (((((((29 ( 88 44 )) 22 ) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((((1 ( 0 0 )) 0 ) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((30 15 (( 46 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((0 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) 
(((((((31 (( 94 47 (( 142 71 (( 214 107 (( 322 161 ( 484 242 )) 121 ( 364 182 ))) 91 (( 274 137 ( 412 206 )) 103 (( 310 155 ((( 466 233 ( 700 350 )) 175 ( 526 263 (( 790 395 (( 1186 593 ( 1780 890 )) 445 ( 1336 668 ))) 334 ))) 167 (( 502 251 (( 754 377 ( 1132 566 )) 283 (( 850 425 ( 1276 638 )) 319 ((( 958 479 (( 1438 719 (( 2158 1079 ( 3238 1619 (( 4858 2429 ( 7288 3644 )) 1822 ))) 911 (( 2734 1367 ( 4102 2051 (( 6154 3077 ( 9232 4616 )) 2308 ))) 1154 ))) 577 ( 1732 866 ))) 433 ( 1300 650 )) 325 ( 976 488 ))))) 244 ))) 122 )))) 61 ( 184 92 ))) 46 )) 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((((1 (( 0 1 (( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ((( 0 1 ( 0 0 )) 1 ( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ))) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 1 ((( 0 1 (( 0 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ))) 1 ( 0 0 ))) 1 ( 0 0 )) 1 ( 0 0 ))))) 0 ))) 0 )))) 1 ( 0 0 ))) 0 )) 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((32 16 ) 8 ) 4 ) 2 ) ((((0 0 ) 0 ) 0 ) 0 ) ((((((((33 ( 100 50 )) 25 ( 76 38 )) 19 (( 58 29 ( 88 44 )) 22 )) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 ( 0 0 )) 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 0 )) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((34 17 ( 52 26 )) 13 ( 40 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((0 1 ( 0 0 )) 1 ( 0 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((35 (( 106 53 ( 160 80 )) 40 )) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((1 (( 0 1 ( 0 0 )) 0 )) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((36 18 ) 9 ( 28 14 )) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) ((((((0 0 ) 1 ( 0 0 )) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) (((((((37 ( 112 56 )) 28 ) 14 ) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) (((((((1 ( 0 0 )) 0 ) 0 ) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) ((((((38 19 (( 58 29 ( 88 44 )) 22 )) 11 (( 34 17 ( 52 26 )) 13 ( 
40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((0 1 (( 0 1 ( 0 0 )) 0 )) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((((39 ( 118 59 (( 178 89 ( 268 134 )) 67 (( 202 101 ( 304 152 )) 76 )))) 38 ) 19 (( 58 29 ( 88 44 )) 22 )) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 ( 0 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 0 )))) 0 ) 1 (( 0 1 ( 0 0 )) 0 )) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((40 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((0 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((((41 ( 124 62 )) 31 (( 94 47 (( 142 71 (( 214 107 (( 322 161 ( 484 242 )) 121 ( 364 182 ))) 91 (( 274 137 ( 412 206 )) 103 (( 310 155 ((( 466 233 ( 700 350 )) 175 ( 526 263 (( 790 395 (( 1186 593 ( 1780 890 )) 445 ( 1336 668 ))) 334 ))) 167 (( 502 251 (( 754 377 ( 1132 566 )) 283 (( 850 425 ( 1276 638 )) 319 ((( 958 479 (( 1438 719 (( 2158 1079 ( 3238 1619 (( 4858 2429 ( 7288 3644 )) 1822 ))) 911 (( 2734 1367 ( 4102 2051 (( 6154 3077 ( 9232 4616 )) 2308 ))) 1154 ))) 577 ( 1732 866 ))) 433 ( 1300 650 )) 325 ( 976 488 ))))) 244 ))) 122 )))) 61 ( 184 92 ))) 46 )) 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 ( 0 0 )) 1 (( 0 1 (( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ((( 0 1 ( 0 0 )) 1 ( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ))) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 1 ((( 0 1 (( 0 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ))) 1 ( 0 0 ))) 1 ( 0 0 )) 1 ( 0 0 ))))) 0 ))) 0 )))) 1 ( 0 0 ))) 0 )) 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((42 21 ( 64 32 )) 16 ) 8 ) 4 ) 2 ) (((((0 1 ( 0 0 )) 0 ) 0 ) 0 ) 0 ) ((((((((43 (( 130 65 ( 196 98 )) 49 ( 148 74 ))) 37 ( 112 56 )) 28 ) 14 ) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 1 ( 0 0 )) 0 ) 0 ) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) ((((((44 22 ) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((0 0 ) 1 
(( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((((45 ( 136 68 )) 34 ) 17 ( 52 26 )) 13 ( 40 20 )) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 ( 0 0 )) 0 ) 1 ( 0 0 )) 1 ( 0 0 )) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((46 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) ((((((((47 (( 142 71 (( 214 107 (( 322 161 ( 484 242 )) 121 ( 364 182 ))) 91 (( 274 137 ( 412 206 )) 103 (( 310 155 ((( 466 233 ( 700 350 )) 175 ( 526 263 (( 790 395 (( 1186 593 ( 1780 890 )) 445 ( 1336 668 ))) 334 ))) 167 (( 502 251 (( 754 377 ( 1132 566 )) 283 (( 850 425 ( 1276 638 )) 319 ((( 958 479 (( 1438 719 (( 2158 1079 ( 3238 1619 (( 4858 2429 ( 7288 3644 )) 1822 ))) 911 (( 2734 1367 ( 4102 2051 (( 6154 3077 ( 9232 4616 )) 2308 ))) 1154 ))) 577 ( 1732 866 ))) 433 ( 1300 650 )) 325 ( 976 488 ))))) 244 ))) 122 )))) 61 ( 184 92 ))) 46 ) 23 ( 70 35 (( 106 53 ( 160 80 )) 40 ))) 20 ) 10 ) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 (( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ((( 0 1 ( 0 0 )) 1 ( 0 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ))) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 1 ((( 0 1 (( 0 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 1 (( 0 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ))) 1 ( 0 0 ))) 1 ( 0 0 )) 1 ( 0 0 ))))) 0 ))) 0 )))) 1 ( 0 0 ))) 0 ) 1 ( 0 1 (( 0 1 ( 0 0 )) 0 ))) 0 ) 0 ) 1 ( 0 0 )) 0 ) 0 ) (((((48 24 ) 12 ) 6 ) 3 (( 10 5 ( 16 8 )) 4 )) 2 ) (((((0 0 ) 0 ) 0 ) 1 (( 0 1 ( 0 0 )) 0 )) 0 ) ((((((((49 ( 148 74 )) 37 ( 112 56 )) 28 ) 14 ) 7 (( 22 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 )) 5 ( 16 8 )) 4 ) 2 ) ((((((((1 ( 0 0 )) 1 ( 0 0 )) 0 ) 0 ) 1 (( 0 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 )) 1 ( 0 0 )) 0 ) 0 ) (((((((50 25 ( 76 38 )) 19 (( 58 29 ( 88 44 )) 22 )) 11 (( 34 17 ( 52 26 )) 13 ( 40 20 ))) 10 ) 5 ( 16 8 )) 4 ) 2 ) (((((((0 1 ( 0 0 )) 1 (( 0 1 ( 0 0 )) 0 )) 1 (( 0 1 ( 0 0 )) 1 ( 0 0 ))) 0 ) 1 ( 0 0 )) 0 ) 0 )

…done! here is some code that generates the tree in ruby list format, uses the ruby eval to parse it (as a shortcut/ bit of a hack), and outputs graphviz format, which was processed with `dotty` to good effect, showing the trees with the sublists as node/ vertex labels. for the larger trajectories the labels make the graphs blow up eg to ~1MB & are turned off. graphs start from n = 15, 17, 19, 21, 23, 25, 27 in that order, the last with labels turned off.

```ruby
def f(n)
  l = []
  l2 = []
  while (n > 1)
    l << n
    l2 << n % 2
    n = n.even? ? n / 2 : 3 * n + 1
  end
  return l, l2
end

def add(l, l2)
  return if (l2.size <= 1)
  r = (l2[0]..l2[-1])
  l.push(r) if (!l.member?(r))
end

def add2(s, a, b, e)
  s << ', ' if (s[-1, 1] != ' ' && s[-1, 1] != '[')
  s << a
  s << ', ' if (b[-1, 1] == '[')
  s << b
  return if (e)
  s << ', ' if (s[-1, 1] == ']')
end

def nest(l, l2)
  s = ''
  s << l2[0]
  l.size.times { |i| add2(s, l[i].inspect, l2[i + 1], i == l.size - 1) }
  return s
end

def run(n)
  l, l1 = f(n)
  #p(l)
  l4 = []
  l.sort.each \
  { |x|
    l2 = (0...l.size).to_a.select { |i| l[i] >= x }.map { |y| y + 1 }
    l3 = []
    while (!l2.empty?)
      l3 << l2.shift
      if (l3.size >= 2 && l3[-2] + 1 != l3[-1]) then
        add(l4, l3[0..-2])
        l3[0..-2] = []
        next
      end
    end
    add(l4, l3[0..-1])
  }
  #p(l4)
  l2 = (1..l.size + 2).map { '' }
  l4.each \
  { |r|
    l2[r.first - 1] = l2[r.first - 1] + '['
    l2[r.last] = ']' + l2[r.last]
  }
  return nest(l, l2)
end

def tree(l, y, x)
  $n += 1
  n1 = $n
  if (l.is_a?(Integer)) then
    # p([l, x, y])
    return
  end
  l.each_with_index \
  { |z, x|
    n2 = $n + 1
    t = ""
    t = "[ label=\"#{z.inspect}\"]"
    puts("\t\"#{n1}\" -- \"#{n2}\" " + t)
    tree(z, y + 1, x)
  }
end

n = ARGV[0].to_i
s = run(n)
l = eval(s)
#p(l)
#exit
puts("graph {")
$n = 0
tree(l, 0, 0)
puts("}")
```

⭐ ⭐ ⭐

### “pseudo-(complete)-induction” example/ demonstration

**(3/13)** 💡 ❗ ⭐ this following result looks quite promising! (it probably deserves an entirely separate post but for now this is the conservative approach.) an earlier blog post looked at grammar decompositions of parity sequences, where the parity sequence is simply the mod 2 sequence of trajectory iterates. as observed there, there is a lot of redundancy in these sequences and they exhibit mean-returning aspects. this is another way to look at that.

imagine each starting iterate and its associated parity sequence, in a list from 1 to n. now take the next parity sequence. how is it related to the earlier parity sequences? is there any relation? the earlier “successful” grammar decompositions reveal that there is. another simple/ similar decomposition idea from another angle goes something like this: greedily find an earlier parity sequence whose prefix matches the current parity sequence's prefix, remove the matched prefix from the current sequence, and continue the process iteratively.
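a rough sketch of this greedy decomposition (hypothetical helper, much simpler than the implementation further below): repeatedly strip from the current sequence the longest prefix matching a prefix of some earlier sequence, recording which earlier sequence was used.

```ruby
# greedy prefix decomposition over strings of parity bits; returns the
# indexes of the earlier sequences used, in order
def greedy_decompose(cur, earlier)
  used = []
  until cur.empty?
    best_len, best_j = 0, nil
    earlier.each_with_index do |y, j|
      i = 0
      i += 1 while i < cur.length && i < y.length && cur[i] == y[i]
      best_len, best_j = i, j if i > best_len
    end
    break if best_j.nil?   # no earlier prefix matches at all
    used << best_j
    cur = cur[best_len..-1]
  end
  used
end

p greedy_decompose('110100', ['1101', '00'])   # => [0, 1]
```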

does this succeed? it turns out it indeed does and leads to a whole new conception of this problem and some new theory.

**defn, parity sequence recursive generator:** imagine a highly random but deterministic and recursive parity sequence generator (equivalent to a random walk generator for each integer) as follows (it is recursive both in the sense of “inductively constructed” and of “provably halting”). take a concatenated set of subsequences of prior generated parity sequences according to some rule; that is the new subsequence for the current “n^{th}” case. repeat.

this is a recursive-inductive-like process, generating “pseudo-random walks” apparently much like collatz sequences ( ❓ ), leading to some new ideas and analysis.

define **pseudo induction** as a process that “seems” somewhat inductive. (this is currently a subjective idea and impossible to technically quantify at the moment but seemingly will have objective aspects.)

the collatz conjecture itself is pseudo-inductive in that empirically each trajectory eventually “falls below” its starting seed into “lower” seeds.
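this empirical property is easy to spot-check (a minimal sketch; note it would loop forever on a counterexample, so the check is purely empirical).

```ruby
# check that the trajectory of n eventually falls below its starting seed;
# trivially true for even n after a single halving step
def falls_below?(n)
  m = n
  m = m.even? ? m / 2 : 3 * m + 1 while m >= n
  true
end

puts (2..10_000).all? { |n| falls_below?(n) }   # => true
```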

the above sequence generating system is also pseudo-inductive using what is also known as complete induction where the inductive rule is assumed to hold for *all* prior cases and not just the immediately preceding case.

then the question becomes: how “orderly” is the prior-set selection rule? the more orderly it can be made/ formulated, the more the pseudo-induction becomes real induction. (for example, if upper limits/ bounds/ ceilings can be placed on the number of times prior sequences are reused, that is key to the inductive lens.) the less orderly it can be made and the more random it looks, the “more undecidable” and unresolvable the problem.

here is an implementation of these ideas; it shows/ demonstrates they are very relevant to the collatz problem in the following context. the code greedily finds maximal prior parity subsequence matches. it outputs, as a list, the prior seed values used to construct the new parity sequences. it turns out the “first best prior matched seed parity prefix” in the list is *strikingly orderly,* as in the graph! the later seeds in the list do not seem so orderly. the results also show positively (col 2 of the output) that prior subsequences are apparently not reused at all by this greedy algorithm, except for low iterates below some small bound (~1650). it also has a very clear self-similar fractal structure/ pattern.

but anyway this is remarkable, possibly groundbreaking, and begging for deeper analysis; maybe a new key property, not previously discovered, that is quite relevant to collatz inductive analysis? 😮

```ruby
def f(n)
  s = ''
  while (n != 1)
    s << (n % 2).to_s
    n = n.even? ? n / 2 : n * 3 + 1
  end
  return s
end

n = 3
l = []
t = {}
while (n < ARGV[0].to_i)
  x = f(n)
  x2 = x.dup
  a = {1 => 0}
  l2 = []
  while (x != '')
    break if (l.empty?)
    z = (0...l.size).to_a.map \
    { |j|
      y = l[j]
      i = 0
      i += 1 while (i < x.length && i < y.length && x[0, i] == y[0, i])
      [i, j, x[0, i]]
    }
    i, j, p = z.max
    l2 << j
    # p(z)
    a[j] = 0 if (a[j].nil?)
    a[j] += 1
    x[0, i] = ''
    x[0, 1] = '' while (x[0, 1] == '0')
  end
  puts([n, a.values.max, t[l2.first], l2[1..-1].map { |y| t[y] }.inspect].join("\t")) if (l2.size > 1)
  $stdout.flush
  t[l.size] = n
  l << x2
  n += 2
end
#p(t)
```

## ➡ ❗ ❗ ❗ ⭐ ⭐ ⭐

## (later) holy @#%& cow, *BREAKTHROUGH* 😮 ❓


here are some astonishing results. bkg: about 3 months ago discovered the incredibly significant/ key metric of hamming density & it seemed pivotal. went in many different directions with it. one of the programs, `dense2.rb`, did a fairly obvious greedy search for long nonmonotone non-core density runs and found that they increase. *but the simple change of the collatz function to compress div-2 runs was not tried*, under the assumption that results are often comparable. retried that idea and found an **apparent upper bound** on the nonmonotone non-core density run metric!
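for clarity, the two function variants side by side (a sketch with hypothetical names; the actual code below switches between them via the global `$w`): the “straight” map does one halving per step, while the “compressed” map folds a whole run of div-2 steps into one.

```ruby
# straight collatz map: a single step
def step_straight(n)
  n.even? ? n / 2 : 3 * n + 1
end

# compressed map: apply 3n+1 if odd, then divide out the full run of 2s
def step_compressed(n)
  n = 3 * n + 1 if n.odd?
  n /= 2 while n.even?
  n
end

p step_straight(7)    # => 22
p step_compressed(7)  # => 11  (22 halved once, stopping at the next odd value)
```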

this code/ graph generates/ compares the “straight” and “compressed” functions side by side. this is exactly the kind of thing the recent GA algorithms were built/ developed to find but never isolated after many weeks of searching/ tuning! however, those didnt use the div-2 compressed version of the function either, under similar rationale, so maybe its time to make the minor adjustment and reexamine those!

```ruby
#!/usr/bin/ruby1.8
def dense(n)
  s = n.to_s(2)
  c = 0
  s.length.times \
  { |i| c += s[i, 1].to_i }
  d = c.to_f / s.length
  return (0.5 - d).abs
end

def f(n)
  d2 = 1.0
  i = 0
  j = 0
  m = 0
  while (n != 1)
    if (!$w) then
      n = n.even? ? n / 2 : n * 3 + 1
    else
      n = n * 3 + 1 if (n.odd?)
      n /= 2 while (n.even?)
    end
    d = dense(n)
    if (d < d2) then
      d2 = d
      j = i
    end
    m = i - j if (i - j > m)
    break if (d < 0.1)
    i += 1
  end
  return m
end

def sample(t, w)
  $w = w
  l = [[1, 1, 0]]
  m = [0, nil]
  t.times \
  { |i|
    l = l.sort_by { |z| z[2] }.reverse
    j = rand([l.size, 10].min)
    n, p, x = l.delete_at(j)
    m = [x, n] if (x > m[0])
    p <<= 1
    l.push([n, p, x])
    x2 = f(n)
    l.push([n + p, p, x2])
  }
  return m
end

ARGV[0].to_i.times { |t| puts([t, [false, true].map { |w| sample(t, w) }.transpose].join("\t")) }
```

**(later)** earlier results show that max nonmonotone run lengths may be correlated with low density starting seeds, so one might wonder if the prior greedy algorithm is getting “stuck” in some limited density region. so heres a variation that uses more sophisticated code to traverse the regions in the search. it looks for larger nonmonotone noncore density run lengths based on penetrating top runs over the different density distribution range, and gets roughly identical results. this is further excellent evidence that *nonmonotone noncore density run lengths are bounded…* (however, there is a small questionable issue where this code does not seem to find any d < ~0.2 seeds; not exactly sure why that is yet, either the greedy dynamics is avoiding/ missing them or they dont exist…?)

```ruby
def dense(n)
  s = n.to_s(2)
  c = 0
  s.length.times \
  { |i| c += s[i, 1].to_i }
  return c.to_f / s.length
end

def f(n)
  d1 = dense(n)
  d2 = 1.0
  i = 0
  j = 0
  m = 0
  while (n != 1)
    if (!$w) then
      n = n.even? ? n / 2 : n * 3 + 1
    else
      n = n * 3 + 1 if (n.odd?)
      n /= 2 while (n.even?)
    end
    d = (0.5 - dense(n)).abs
    if (d < d2) then
      d2 = d
      j = i
    end
    m = i - j if (i - j > m)
    break if (d < 0.1)
    i += 1
  end
  return m, d1
end

def sample(t, w)
  $w = w
  l = [[1, 1, 0, 1.0]]
  m = [0, nil]
  t.times \
  { |i|
    a = {}
    (0...l.size).each \
    { |j|
      x = l[j][2]
      d = l[j][3]
      b = (d * 100).to_i
      a[b] = [] if (a[b].nil?)
      a[b] << [x, j]
    }
    k = a.keys
    l2 = a[k[rand(k.size)]]
    l2.sort!
    l2.reverse!
    j = l2[rand([l2.size, 3].min)][1]
    n, p, x, d = l.delete_at(j)
    m = [x, n] if (x >= m[0])
    p <<= 1
    l.push([n, p, x, d])
    x2, d = f(n)
    # puts([x2, d, n].join("\t"))
    l.push([n + p, p, x2, d])
  }
  $stdout.flush
  return m
end

ARGV[0].to_i.times { |t| puts([t, [false, true].map { |w| sample(t, w) }.transpose].join("\t")) }
```

**(3/19)** 💡 some earlier analysis looked at various metrics wrt the starting seed and how they related to trajectory lengths, and showed some weak effect. have a big new idea that maybe the key insight is to focus on the dynamics of trajectories inside the “core density”. this is a start in that direction.

the following code generates the top 3 long glides using the familiar bitwise greedy search. then it shows some connection between these seeds: considered in base-2 reverse format, they have identical prefixes, and the larger the seed, the longer the prefix.
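“base-2 reverse format” just means comparing seeds by their reversed binary strings, so a shared prefix corresponds to agreement in the low-order bits (ie congruence mod a power of 2). a small sketch with hypothetical seeds, not the actual discovered glide seeds:

```ruby
# reversed binary representation: low-order bit first
def rev2(n) n.to_s(2).reverse end

# length of the common prefix of two strings
def common_prefix_len(a, b)
  i = 0
  i += 1 while i < a.length && i < b.length && a[i] == b[i]
  i
end

a, b = 27, 27 + (1 << 7)   # b agrees with a in its low bits
p rev2(a)                               # => "11011"
p common_prefix_len(rev2(a), rev2(b))   # => 5 (all of the shorter seed's bits)
```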

the 1^{st} graph shows how the trajectories match up to ~460 iterations and then diverge, when normalizing/ subtracting starting seed values/ “matching prefixes” from the trajectories. ie the seed prefixes are identical up to that point, and after that point they differ (in fact, looking at the initial seeds, sparse 1s among mostly 0s).

the code looks at how much of the prefixes match as the trajectories progress, along with other metrics. eg the 2^{nd} graph shows the scaled prefix match length (red line, 1 max) along with density changes. the densities closely track each other, strongly correlated and gradually decreasing, up to no correlation at the ~460 “transition point”, because they are computed over base-2 values (strings) that nearly match except for long suffixes that average out to nearly the same contribution.

at the 0.5 scaled prefix match length, ½ of the total initial max prefix is matched. a model for this is looking at/ comparing two sequences/ random walks *x_{i}* and *x_{i} + y_{i}* where |*y_{i}*| gradually gets larger wrt |*x_{i}*| as the sequences progress (larger *i*), ie gradual, linear-like divergence/ decorrelation. (the 3^{rd} sequence is similarly *x_{i} + z_{i}* for gradually increasing |*z_{i}*|.)

yet another way to picture this is as base-2 string sequences with a (matching/ common) prefix region and a suffix region. the prefix region gradually shrinks and the mismatching suffix region expands. finally, when the matching prefix shrinks to zero size, the relative walks no longer match and then diverge completely. as a key property clear from several different angles (eg long FSM transducer analysis on this site and the prior 1^{st} graph), the relative initial trajectories are identical for matching base-2 prefixes.

the final graph #3 and zoomin detail #4 are similar to some earlier graphs of average bit run lengths and associated standard deviations. however, those runs were for random seeds and not special “hard” seeds eg these long glides, which have a quite distinct, rapidly trending early converging region. its interesting that even though they converge quickly to a compressed range, they are still in the middle of (actually early in) the glide at up to ~460 iterations. in other words these statistics correlate some (the early strong trend is somewhat indicative of/ correlated with the glide) but overall not a lot with the “gliding” region.

```ruby
def stat(l)
  return 0, 0 if (l.empty?)
  t = t2 = 0
  l.each \
  { |x|
    t += x
    t2 += x ** 2
  }
  c = l.size
  a = t.to_f / c
  z = t2.to_f / c - a ** 2
  sd = Math.sqrt(z < 0 ? 0 : z)
  raise if (sd.nan?)
  return a, sd
end

def dense(n)
  s = n.to_s(2)
  c = 0
  s.length.times \
  { |i| c += s[i, 1].to_i }
  return c.to_f / s.length, c, s.length
end

def f(n)
  n1 = n
  c = 0
  while (n != 1 && n >= n1)
    n = n * 3 + 1 if (n.odd?)
    n /= 2 if (n.even?)
    c += 1
  end
  return {'c' => c, 'd' => (0.5 - dense(n1)[0]).abs}
end

def f2(n, i)
  f = File.open("out#{i + 1}.txt", 'w')
  n1 = n
  x = 0
  l = []
  while (n != 1 && n >= n1)
    x += 1
    n = n * 3 + 1 if (n.odd?)
    n /= 2 if (n.even?)
    s = n.to_s(2)
    a1, sd1 = stat(s.split('0').map { |x| x.length })
    a0, sd0 = stat(s.split('1').map { |x| x.length })
    l << s.reverse
    f.puts([x, Math.log(n) - Math.log(n1), dense(n), a1, sd1, a0, sd0].join("\t"))
  end
  f.close
  return l
end

def prefix(l)
  i = 0
  l.map { |x| x.length }.max.times \
  {
    j = 1
    j += 1 while (j < l.size && l[j][0..i] == l[0][0..i])
    break if (j < l.size)
    i += 1
  }
  return i
end

l = [{'n' => 1, 'p' => 0}.merge(f(1))]
t = 0
l2 = []
loop \
{
  l = l.sort_by { |x| [x['c'], -x['d']] }.reverse
  x = l.shift
  1.times \
  {
    if (!l2.index { |y| y['n'] == x['n'] }) then
      if (l2.size >= 3) then
        j = (0...l2.size).min_by { |i| l2[i]['c'] }
        l2.delete_at(j) if (!j.nil?)
      end
      l2 << x
    end
    # puts([t, x['c'], x['n'].to_s(2).length, x['p'], x['d']].join("\t"))
    p2 = x['p'] + 1
    n2 = x['n'] + 2 ** p2
    x1 = x.dup
    x1['p'] = p2
    l << x1
    x2 = {'n' => n2, 'p' => p2}.merge(f(n2))
    l << x2
    t += 1
  }
  break if (t >= 500)
}
l = []
l2.each_with_index { |x, i| l << f2(x['n'], i) }
f = File.open('out.txt', 'w')
p2 = nil
l.map { |x| x.size }.max.times \
{ |t|
  j = 0
  l1 = l.map { |x| x[t] }
  p1 = prefix(l1)
  f.puts(p1.to_f / p2) if (!p2.nil?)
  p2 = p1 if (p2.nil?)
  break if (p1 == 0)
}
```

**(3/21)** 💡 ❗ ⭐ this is some fairly complex code building on the long glides analysis. as numerous prior experiments attest, there is some important statistical skewing of bit string patterns inside the density core. what is the nature of it? here is another apparent big advance, and in some ways other statistics may be reflecting this primary detail.

*there seems to be a constant limit/ ceiling on the length of 0-runs in the parity sequence* at around ~15 max! recall the parity sequence in this case is just the sequence of odd/ even bits encountered during a trajectory, ie the trajectory iterates mod 2. this is some long analysis of this apparently quite significant aspect, over up to 3000 different long glides, with many different statistics to try to delve into this complex picture/ phenomenon. the run time is several hours and the glides are long, and it was challenging detailing/ portraying/ summarizing the diverse info in a few graphs, some with a lot of different elements.
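to make the terminology concrete, heres a minimal sketch (helper names parity_seq/ max_0run are mine, not from the analysis code) computing the parity sequence of a trajectory and the longest 0-run in it:

```ruby
# parity sequence of the collatz trajectory of n: one bit per iterate,
# 1 for odd (3n+1 step), 0 for even (halving step), until reaching 1
def parity_seq(n)
  s = ''
  while (n != 1)
    s << (n % 2).to_s
    n = (n % 2 == 1) ? n * 3 + 1 : n / 2
  end
  return s
end

# longest run of consecutive 0s (ie consecutive halvings) in the sequence
def max_0run(s)
  s.split('1').map { |x| x.length }.max || 0
end
```

eg parity_seq(3) gives “1010000” (3→10→5→16→8→4→2→1) with max 0-run 4.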

the 1^{st} “bottom line” graph shows the time to enter the density core in the green line, and the max 0-run encountered in the parity sequence inside the core. notice how flat the latter trend is! there is also a huge nonlinear transition/ spike in the time to enter the core at around ~900, and there is some interaction with the max 0-run in the parity sequence observable around that transition & some other locations.

some other graphs have many other statistics. the 2^{nd} graph has the total bit count in the starting seed in the light blue upper line; this coincides with the glide generation loop iteration count (1^{st} arg, 3000) because one bit is added on each iteration. the total glide iterations are the next highest points in dark blue. the time (iteration count) to enter the density core is again in this graph as the lowest purple line.

then there is some analysis of where the max parity sequence 0-run occurs. the max may occur either once or more than once. this code tracks the 1^{st} and last max occurrences in red and green respectively, which are then sandwiched between the end of the run (dark blue) and the start of the density core (purple). for many points the 1^{st} and last max occurrences coincide, in others they are distinctly different. the 1^{st} and last occurrences sometimes happen at the “end” of the initial trajectory, or in this case more exactly stated, at the end of the glide but still in the middle of the full trajectory.
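the 1^{st} vs last max tracking mirrors the m/ m2 logic in the code below (strict > for the 1^{st} occurrence, >= for the last); a standalone sketch with a made-up helper name:

```ruby
# given the list of 0-run lengths per iteration, return the indexes of
# the 1st and last occurrences of the maximum (m uses >, m2 uses >=)
def max_positions(runs)
  m = [0, 0]
  m2 = [0, 0]
  runs.each_with_index \
  {
    |c, x|
    m = [c, x] if (c > m[0])
    m2 = [c, x] if (c >= m2[0])
  }
  return m[1], m2[1]
end
```

for [1, 3, 2, 3] the max 3 occurs 1st at index 1 and last at index 3, so this returns [1, 3].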

so this (non)coincidence and location of the max is a bit tricky, and its looked into further in the last/ 3^{rd} graph. here again the count to enter the core is graphed in blue. then there is some logic to look at whether the max occurs near the start of the glide (post core density) or at the end of the glide. the first and last maxes are found, and for many points these are the same, but for those where they differ, the distance from the core density start versus the distance from the glide end is calculated, and the least is determined. if the least distance is nearer the core density start, it is plotted in red. if the least distance is nearer the glide end, it is plotted in green as a negated value. this leads to a graph that combines several noticeable trends and isnt easy to summarize.
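the least-distance red/ green classification can be condensed into a hypothetical helper (the real code below returns a pair via the [[d1, d1, 0], [d2, 0, -d2]].min trick; this is just the scalar gist of it):

```ruby
# classify the max 0-run position: distance past the core density start
# (x0) if that is least, else the negated distance before the glide end;
# positive values plot red (near core start), negative plot green
def classify(x_max, x0, x_end)
  d1 = x_max - x0
  d2 = x_end - x_max
  d1 < d2 ? d1 : -d2
end
```

eg with core start 5 and glide end 100, a max at iteration 10 classifies as 5 (red side), while a max at iteration 95 classifies as -5 (green side).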

```ruby
#!/usr/bin/ruby1.8

def dense(n)
  s = n.to_s(2)
  c = 0
  s.length.times \
  {
    |i|
    c += s[i, 1].to_i
  }
  return c.to_f / s.length
end

def f(n)
  n1 = n
  c = 0
  while (n != 1 && n >= n1)
    n = n * 3 + 1 if (n % 2 == 1)
    n /= 2 if (n % 2 == 0)
    c += 1
  end
  return {'c' => c, 'd' => (0.5 - dense(n1)).abs}
end

def f2(n)
  n1 = n
  x = x0 = 0
  f = false
  m = [0, 0]
  m2 = m.dup
  d2 = 0
  while (n != 1 && n >= n1)
    d = (0.5 - dense(n))
    f, x0 = true, x if (!f && d * d2 < 0)
    d2 = d
    x += 1
    n = n * 3 + 1 if (n % 2 == 1)
    c = 0
    while (n % 2 == 0)
      n /= 2
      c += 1
      break if (n < n1)
    end
    if (f) then
      m = [c, x] if (c > m[0])
      m2 = [c, x] if (c >= m2[0])
    end
  end
  x1 = m[1]
  x2 = m2[1]
  d1 = x1 - x0
  d2 = x - x2
  return m[0], x0, x1, x2, x, [[d1, d1, 0], [d2, 0, -d2]].min[1..-1], n1.to_s(2).length
end

l = [{'n' => 1, 'p' => 0}.merge(f(1))]
loop \
{
  l = l.sort_by { |x| [x['c'], -x['d']] }.reverse
  x = l.shift
#  puts([l.size, x['c'], x['n'].to_s(2).length, x['p']].join("\t"))
  1.times \
  {
    p2 = x['p'] + 1
    n2 = x['n'] + 2 ** p2
    x1 = x.dup
    x1['p'] = p2
    l << x1
    x2 = {'n' => n2, 'p' => p2}.merge(f(n2))
    l << x2
  }
  break if (l.size >= ARGV[0].to_i)
}
l = l.sort_by { |x| x['n'] }
l.each { |x| puts([x['n'].to_s(2).length, f2(x['n'])].join("\t")); $stdout.flush }
```