💡 ❗ ❗ ❗ this is some code that was just written & run today & there was no choice but to archive (& share) it immediately! the basic idea is a riff on the last theme of looking at base-2 suffixes that lead to large glides. a simple, obvious variation is to compare the “subsequent” (adjacent) “parity sequences” of nodes of this graph/ tree. a property of this recursive algorithm/ enumeration is that prefixes must match up to some position; the question is, up to how much? then one can just look at the “incremental work/ computation” needed to determine that a glide falls below its start, and ask how much that work is in general. *the jawdropping finding here is that this work can be sorted into a FINITE set of 6 equivalence classes, after disregarding the shared parity-sequence prefixes.*

⭐ ⭐ ⭐ ~~at this point think almost surely this property can be turned into an inductive proof based on the same algorithmic structure.~~ ❓

each output line shows the parity length, the total number of glide trajectories at that length, the number of distinct equivalence classes, and the classes themselves as pairs of “leftover” strings with occurrence counts. each pair is the difference/ “leftover” in the two compared parity sequences after removing their longest equal prefix, ie a sort of “diff”, and the count is its total number of occurrences (like an up-to-6-bin histogram). arg *“n”*=20.

```ruby
def check(n, b, y)
  a = n
  b0 = b
  s = ''
  while (a >= n)
    s << (b % 2).to_s
    if (b.odd?) then
      a *= 3
      b = b * 3 + 1
    else
      return [[b0, y + [s]]] if (!a.even?)
      a /= 2
      b /= 2
    end
  end
  return []
end

def recur(n)
  l = [[1, []]]
  (2..n).each { |p|
    l2 = []
    l.each { |l1|
      x, y = l1
      l2 += check(2 ** p, x, y)
      l2 += check(2 ** p, x + (2 ** (p - 1)), y)
    }
    l = l2
    out(l, p)
  }
  return l
end

def prefix(s1, s2)
  s = ''
  while (s1.length > 0 && s2.length > 0 && (s1[0] == s2[0]))
    s += s1[0]
    s1[0..0] = ''
    s2[0..0] = ''
  end
  return [s.length, s1, s2]
end

def diff(x, l)
  # puts(x.to_s(2))
  (1...l.size).each { |i|
    z = prefix(l[i].dup, l[i - 1].dup)
    $a[z[1..2]] = 0 if (!$a.member?(z[1..2]))
    $a[z[1..2]] += 1
    # puts("\t" + z.to_s)
  }
end

def out(l, n)
  $a = {}
  l.sort.each { |x| diff(x[0], x[1]) }
  p([n, l.size, $a.size, $a.sort])
end

n = ARGV[0].to_i
recur(n)
```
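as a small standalone illustration of the `prefix`/ `diff` step (not part of the run above): two adjacent parity sequences get reduced to one of the “leftover” pairs. note `prefix` mutates its arguments, which is why `diff` passes `.dup` copies:

```ruby
# prefix strips the longest common prefix from both strings (destructively)
# and returns [prefix_length, leftover1, leftover2].
def prefix(s1, s2)
  s = ''
  while (s1.length > 0 && s2.length > 0 && (s1[0] == s2[0]))
    s += s1[0]
    s1[0..0] = ''
    s2[0..0] = ''
  end
  return [s.length, s1, s2]
end

p prefix("10100".dup, "100".dup)  # => [2, "100", "0"], one of the 6 classes
```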

```
[2, 1, 0, []]
[3, 2, 2, [[["0", ""], 1], [["100", "0"], 1]]]
[4, 3, 3, [[["0", ""], 1], [["10", ""], 1], [["100", "0"], 4]]]
[5, 4, 4, [[["0", ""], 3], [["10", ""], 1], [["100", "0"], 7], [["1010", "0"], 1]]]
[6, 8, 5, [[["0", ""], 8], [["00", "10"], 1], [["10", ""], 4], [["100", "0"], 17], [["1010", "0"], 2]]]
[7, 13, 6, [[["0", ""], 14], [["00", "10"], 2], [["010", "10"], 1], [["10", ""], 9], [["100", "0"], 34], [["1010", "0"], 5]]]
[8, 19, 6, [[["0", ""], 23], [["00", "10"], 2], [["010", "10"], 3], [["10", ""], 18], [["100", "0"], 56], [["1010", "0"], 12]]]
[9, 38, 6, [[["0", ""], 53], [["00", "10"], 10], [["010", "10"], 12], [["10", ""], 48], [["100", "0"], 115], [["1010", "0"], 28]]]
[10, 64, 6, [[["0", ""], 106], [["00", "10"], 18], [["010", "10"], 27], [["10", ""], 97], [["100", "0"], 211], [["1010", "0"], 53]]]
[11, 128, 6, [[["0", ""], 244], [["00", "10"], 56], [["010", "10"], 66], [["10", ""], 226], [["100", "0"], 438], [["1010", "0"], 122]]]
[12, 226, 6, [[["0", ""], 484], [["00", "10"], 111], [["010", "10"], 129], [["10", ""], 456], [["100", "0"], 826], [["1010", "0"], 254]]]
[13, 367, 6, [[["0", ""], 877], [["00", "10"], 198], [["010", "10"], 221], [["10", ""], 822], [["100", "0"], 1446], [["1010", "0"], 473]]]
[14, 734, 6, [[["0", ""], 1949], [["00", "10"], 488], [["010", "10"], 526], [["10", ""], 1816], [["100", "0"], 2987], [["1010", "0"], 1042]]]
[15, 1295, 6, [[["0", ""], 3749], [["00", "10"], 941], [["010", "10"], 1002], [["10", ""], 3509], [["100", "0"], 5565], [["1010", "0"], 2069]]]
[16, 2114, 6, [[["0", ""], 6610], [["00", "10"], 1623], [["010", "10"], 1707], [["10", ""], 6227], [["100", "0"], 9665], [["1010", "0"], 3764]]]
[17, 4228, 6, [[["0", ""], 14289], [["00", "10"], 3792], [["010", "10"], 3944], [["10", ""], 13499], [["100", "0"], 19836], [["1010", "0"], 8060]]]
[18, 7495, 6, [[["0", ""], 27146], [["00", "10"], 7218], [["010", "10"], 7480], [["10", ""], 25692], [["100", "0"], 36824], [["1010", "0"], 15560]]]
[19, 14990, 6, [[["0", ""], 58117], [["00", "10"], 16302], [["010", "10"], 16821], [["10", ""], 55054], [["100", "0"], 75547], [["1010", "0"], 32989]]]
```

**(9/2)** 😮 😳 *mea culpa*. there is a good idea in all that as far as intentions go, and apparently some nontrivial property was isolated, but the code did not implement what was intended: the parity sequences are only computed for partial sequences and are not extended to the full “fall below” lengths. there does seem to be some distinct, maybe-new property exhibited here, but not sure what it is right now….

stay tuned for “v2” or a rewrite. just testing you! 😈 actually the “test” “proves” that even if one were to claim discovery of an amazing property of collatz, lots of ppl are willing to read about it, but )( none so far to analyze/ replicate/ verify it/ check it for significance…. or maybe the real “proof” is that blogging is often not really all that different from thinking/ talking out loud to oneself, or sophisticated cyber-solipsism… 😦 😥

**(9/10)** here is some more advanced code that explores/ computes/ shows that the sequences continuing past the earlier-computed prefixes above are, as usual, highly random. it also tracks more “metadata” for each sequence. something nice that can be said is that the theme does seem to reach a basic boundary between order/ randomness in the problem, and that maybe this line of study is somehow natural or central.

```ruby
def check(p, b, y, z)
  n = 2 ** p
  a = n
  b0 = b
  b0b = sprintf("%0#{p}d", b0.to_s(2)) #.reverse
  s = ''
  # while (a >= n)
  while (b >= b0)
    s << (b % 2).to_s
    if (b.odd?) then
      a *= 3
      b = b * 3 + 1
    else
      if (!a.even?) then
        $l2 << [p, b0, y + [s]] if (p == ARGV[0].to_i && b0 != 1)
        return [[b0, y + [s], z + [b0b]]]
      end
      a /= 2
      b /= 2
    end
  end
  $l1 << [p, b0b]
  return []
end

def recur(n)
  l = [[1, [], []]]
  (2..n).each { |p|
    l2 = []
    l.each_with_index { |l1, i|
      x, y, z = l1
      s = l2.size
      l2 += check(p, x, y, z)
      l2 += check(p, x + (2 ** (p - 1)), y, z)
    }
    l = l2
  }
  # p(l.map { |x| x[0] }.sort)
  return l
end

def prefix(s1, s2)
  s = ''
  while (s1.length > 0 && s2.length > 0 && (s1[0] == s2[0]))
    s += s1[0]
    s1[0..0] = ''
    s2[0..0] = ''
  end
  return [s.length, s1, s2]
end

def diff(l)
  (1...l.size).each { |i|
    z = prefix(l[i].dup, l[i - 1].dup)
    $a[z[1..2]] = 0 if (!$a.member?(z[1..2]))
    $a[z[1..2]] += 1
  }
end

def f(n)
  n0 = n
  c = 0
  while (n >= n0)
    n = n.even? ? n / 2 : n * 3 + 1
    c += 1
  end
  return c
end

n = ARGV[0].to_i
$l1 = []
$l2 = []
l = recur(n)
$x = $l1.map { |x| x[1] }
a = {}
t = 0
$l2.each { |x|
  p, n, l = x
  z = sprintf("%0#{x[0]}d", n.to_s(2))
  z = prefix(l[-2].dup, l[-1].dup)
  c = z.shift
  j = f(n)
  p([p, n, j - c])
  t += (j - c)
  a[z] = 0 if (!a.member?(z))
  a[z] += 1
}
p([$l2.size, t, t.to_f / $l2.size])
```
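the helper `f` above is just the glide: the number of Collatz steps until the iterate first drops below its starting value. as a standalone sanity check (the expected values match the well-known small glide records):

```ruby
# glide: count Collatz steps until the trajectory first
# falls below the starting value. (only meaningful for n > 1.)
def f(n)
  n0 = n
  c = 0
  while (n >= n0)
    n = n.even? ? n / 2 : n * 3 + 1
    c += 1
  end
  return c
end

p f(3)   # => 6
p f(7)   # => 11
p f(27)  # => 96
```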

theophanes raptis: You have done some work with this. Let me try to be of assistance if I can. Some years ago I made a short examination of similar problems, trying mostly to understand the dynamics. I found that it is easier to understand dynamically, even in the absence of formal proofs. It is rather easy to try the loop below:

```
x(1) = …
i <- (2 : t) {
    x(i) <- Collatz(x(i-1))
    write Length[ factor(x(i)) != 2 ]
}
```
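a minimal runnable Ruby sketch of the loop above (my interpretation: `Collatz` is the standard single step, and the written quantity is the number of distinct prime factors other than 2, per the `factor(x(i)) != 2` line):

```ruby
require 'prime'

# one Collatz step
def collatz(n)
  n.even? ? n / 2 : 3 * n + 1
end

# run t steps from x0, collecting at each step the number of
# distinct prime factors other than 2 of the current value.
def trace(x0, t)
  x = x0
  t.times.map do
    x = collatz(x)
    Prime.prime_division(x).count { |p, _| p != 2 }
  end
end

p trace(2 * 3 * 5 * 7 * 11, 20)
```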

If one tries initial conditions of the form 2·3·5·7·11·…, one finds that the number of prime factors diminishes in the long term. One has to check the dynamics step by step in order to understand this. Of the two branches of this map, the first (halving) is valid as long as the current value retains factors of 2. The second branch activates only when all the powers of 2 have been stripped away, and then only once, because 3x+1 maps every odd integer back to an even one. Hence the dynamics looks quite like a math equivalent of a thyristor, or even a nonlinear neuron. An interesting side effect: if you keep track of the activation events with a bit sequence, you will always end up with sequences that are integers coded in the Fibonacci–Zeckendorf representation.

One can predict the exact number of steps in the first branch by just finding the power of 2 between each activation event of the second branch, but afterwards the number of factors has changed. Yet the numerical evidence suggests that the probability of the factorization being shorter than the previous one is always greater than that of it being larger, hence leading to a kind of fixed-point theorem, if one could formally prove the above using some sort of distributions. On the other hand, my motive for looking at all this is rather practical, and as long as you are interested in volunteering I could probably share some material if you drop me an email. I am talking applications, of course!
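the “exact number of steps at the first branch” mentioned here is just the 2-adic valuation of the value produced by the odd step; a small sketch of that prediction (my reading of the claim):

```ruby
# v2(n): exponent of 2 in n, i.e. how many consecutive halvings
# the trajectory performs before reaching the next odd value.
def v2(n)
  c = 0
  while n.even?
    n /= 2
    c += 1
  end
  c
end

x = 7
y = 3 * x + 1   # 22, the even value right after an odd ("second branch") step
p v2(y)         # => 1: exactly one halving before the next odd value (11)
```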

vznvzn (post author): hi, thx for the idea. that looks very interesting. have done some rough study of factors but did not notice the property you mention. have long conjectured that maybe some analysis of the factoring may be a key to unlock the problem somehow. do you have a stackexchange account? there are chat rooms there for extended dialogs & have been using them for many discussions over a long time; the site is very good overall for math/ cstheory/ theoretical research. yes, would be interested in any writeup you have. plz post it somewhere & link to it in a comment.

theophanes raptis: I would prefer to avoid using chatrooms at the moment, as there are some things involving prime factorization that could also have “other” applications. Try for example to figure out whether there are summands of periodic functions with roots exactly at the set of primes. There is a nice relationship given as an exercise on p. 42 of the book by Crandall and Pomerance, “Prime Numbers: A Computational Perspective”, NY: Springer (2001). You will find the same relation without proof in the Wikipedia article, but note that the infinite summation is unnecessary.

http://en.wikipedia.org/wiki/Floor_and_ceiling_functions#Formulas_for_prime_numbers
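the formulas on that Wikipedia page are built on Wilson's theorem; a tiny exact-arithmetic sketch of the underlying prime indicator (an illustration only, not the book's exercise verbatim):

```ruby
# Wilson's theorem: j > 1 is prime iff j divides (j-1)! + 1.
# This is the exact-integer fact underneath the floor/cos
# prime formulas on the linked Wikipedia page.
def wilson_prime?(j)
  fact = (1..j - 1).reduce(1, :*)
  (fact + 1) % j == 0
end

p (2..30).select { |j| wilson_prime?(j) }  # => [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```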

At a later stage there may be an article on that, but I am also waiting for a colleague to respond.