collatz gamechanger, FINALLY?

hi all, this is a somewhat quick-dash post on a remarkable new idea re collatz that came to me this week. tried it out in a long stream/flow-of-consciousness multi-experiment run and got promising results. in manic moments it seems like it could be a breakthru or at least a gamechanger…? 💡 ❓ ❗

an older post showed the remarkable power of aggregating (integrating) multiple trajectories, and how this can smooth the behavior of collatz-function-related metrics, eg sums of stopping distance counts. in that case the integration was over increasingly large intervals.
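as a quick standalone refresher on that aggregation effect (a minimal sketch, not the code from the older post; the helper name `stop_count` is just for illustration):

```ruby
# total stopping time of n under the standard collatz map
def stop_count(n)
  c = 0
  while n != 1
    n = n.odd? ? n * 3 + 1 : n / 2
    c += 1
  end
  c
end

# aggregate (integrate) the counts over increasingly large intervals;
# per-n counts jump around erratically but interval averages drift smoothly
[100, 200, 400].each do |m|
  t = (2..m).map { |x| stop_count(x) }.sum
  puts [m, t, t.to_f / (m - 1)].join("\t")
end
```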

it is not easy to describe aspects of collatz, but one phenomenon is that everything about it seemingly gets “larger” without any apparent recurring patterns. in the simple case, a single trajectory generally gets larger before it gets smaller, and this is a big part of the difficulty of attacking the problem.
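a quick standalone check of the “gets larger before smaller” phenomenon (the helper name `peak` is just for illustration):

```ruby
# peak value reached by the trajectory of n before it falls to 1
def peak(n)
  m = n
  while n != 1
    n = n.odd? ? n * 3 + 1 : n / 2
    m = n if n > m
  end
  m
end

p peak(27)   # => 9232, vs a start of only 27
p peak(6)    # => 16
```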

it is hard to formalize any general proof/ attack idea/ concept/ strategy exactly. however, one rough strategy, re induction and the growth aspect just noted: if there is any discoverable function/metric that gets “somewhat consistently” smaller for higher iterates, that would be a key to a proof.

so then came up with this wild but hopefully quite brilliant idea. imagine looking at $c$ consecutive values at a time, ie the trajectories starting at $n$ through $n + c$, in other words “blocks” of size $c$. now, as with other ideas, use some kind of algorithmic process and heuristic choice to pick different iterates in the block to iterate next, but under the constraint, and heres the kicker, of minimizing the overall sum of the iterates. simple, right? then there is some hope of an inductive proof if this block of iterates can always be “reduced” below the starting threshold ceiling!
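a minimal sketch of the block idea, assuming a naive depth-1 greedy on the change magnitude (helper names hypothetical; the bunch* programs below try many fancier variants):

```ruby
# one collatz step
def step(x)
  x.odd? ? x * 3 + 1 : x / 2
end

# iterate whichever element of the block currently gives the smallest change
def greedy_step(l)
  i = (0...l.size).min_by { |j| step(l[j]) - l[j] }
  l[i] = step(l[i])
  l
end

l = (200..207).to_a   # a block of size c = 8 starting at n = 200
start = l.sum
30.times { greedy_step(l) }
puts [start, l.sum].join("\t")   # does the block sum stay below its start?
```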

an even better fantasy is that the sum of the block of iterates could decrease monotonically during the “reduction”. that is a tall order and seems quite challenging, but on the other hand, can it be ruled out entirely?

here is a big batch of programs exploring this overall theme, cranked out quickly one after the other and trying different heuristic ideas, and the results are indeed quite promising! some even verge on monotonically decreasing. 😮 most do generally stay below the initial starting sum threshold, which is itself somewhat exciting. 😀 its a rare consistently-decreasing aspect of the problem.

the inspiration for this was partly the idea of CSLs, context sensitive languages, which can be recognized by TMs with tape limited to the size of the input. (on the other hand, any reduction that always stays below some monotonically increasing function $f(x)$ of the starting sum would also be a proof, and that sounds a lot like all the other formulations of the problem. is this just another dead end? 😥 👿 )

a proof would consist of showing that the reductions can always find some path toward zero that stays below the starting threshold. this approach opens up new possibilities. even if the sum is not monotonically decreasing, weaker properties could be considered, eg monotonically decreasing over every $t$ steps, say. etc!
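the weaker property is easy to state in code; a hypothetical checker over a series of block sums (illustrative only, not used in the experiments below):

```ruby
# true if the series decreases over every window of t steps,
# even if individual steps jiggle upward
def decreasing_every?(sums, t)
  (0...(sums.size - t)).all? { |k| sums[k + t] < sums[k] }
end

p decreasing_every?([10, 12, 9, 11, 8, 10, 7], 1)   # => false: jiggles up
p decreasing_every?([10, 12, 9, 11, 8, 10, 7], 2)   # => true: falls every 2 steps
```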

so, dont have the whole story/answer yet, but feel highly encouraged by these latest directions, and that this is, if not a gamechanger, at least a big lead! a key apparent feature, hinted at but not fully revealed in the following plots, is that some strategies seem to consistently, “roughly-well-behavedly”, converge to zero for all $n$ for some constant $c$.

all the different ideas are in the code, and it will take awhile to unwind all of them in prose, but it felt just a little )( like edison & the lightbulbs for a few days.

a few brief words on each. it would be possible to refactor all these into a single big piece of experimental code, but that would take too long right now, & its not really clear yet which will be worthy of further study.

a sketch/ overview/ characterization of the dynamics: these are all basically variants on randomized algorithms, but mostly based on the randomness intrinsic in the problem (some later ones add extrinsic randomness, with the intrinsic randomness driving selection probabilities). a way to visualize this: of the $c$ starting consecutive iterates, some lead to “small changes” and others lead to “big changes” roughly proportional to the current value. so there is generally some concept of a greedy algorithm wrt the change magnitudes. the better strategies seem to balance out small changes with big changes. if the algorithm avoids all the large changes, they seem to show up at the end and cause instability there. so its a sort of balancing of the short-term versus the long-term horizon.
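the small-change/big-change split can be seen directly in the one-step deltas; a quick standalone table for a sample block:

```ruby
# one-step change for each element of a sample block: even values drop by
# x/2, odd values jump by 2x+1, so change magnitudes scale with the value
(20..27).each do |x|
  nxt = x.odd? ? x * 3 + 1 : x / 2
  puts [x, nxt, nxt - x].join("\t")
end
```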

another basic way to visualize this: think of a game tree. the potential choices expand exponentially at each iteration/level. is there a winning strategy every time? the strategies below are basically all depth-1 lookahead into the game tree. next, want to look into some algorithms that explore the choice tree to several levels and maneuver based on its structure.
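a hedged sketch of what that multi-level exploration might look like: a hypothetical exhaustive depth-k search minimizing the final block sum (clearly exponential in k, matching the tree blowup; helper names are illustrative):

```ruby
# one collatz step
def step(x)
  x.odd? ? x * 3 + 1 : x / 2
end

# best achievable block sum after k more moves, trying every choice of
# which element to iterate at each level of the game tree
def best_sum(l, k)
  return l.sum if k == 0
  (0...l.size).map \
  {
    |i|
    l2 = l.dup
    l2[i] = step(l2[i])
    best_sum(l2, k - 1)
  }.min
end

p best_sum([7, 8, 9], 2)   # => 18, via 8 -> 4 -> 2
```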

• bunch1 – args 200-400. simple starting idea: just run through the iterates sequentially in a cycle.
• bunch2 – args 1-200. look at $a$ iterates that go up and $b$ iterates going down, $a \times b$ pair combinations total, and choose the combination with the smallest net decrease (possibly 0) first, thereby forcing a monotonic decrease as long as possible. may run out of choices in the middle with a nil exception. the open question is whether some other strategy might fully succeed (in converging to zero). also, this strategy turns out to work only for low $n$.
• bunch3 – args 200-400 here and in all following. choose the smallest of the bunch as next.
• bunch4 – choose the largest of the bunch next.
• bunch5 – same as bunch4 but with a “seen” cache. all following programs use the seen cache.
• bunch6 – iterate in a cycle over the bunch but re-sort the bunch after each iteration.
• bunch7 – iterate the min or the max of the bunch on alternating odd/even iterations.
• bunch8 – move left or right thru the sorted bunch values depending on whether the next iterate increases or decreases.
• bunch9 – choose the $n$th value in the sorted bunch based on a fixed quantile parameter, in this case 0.9.
• bunch10 – choose a random value with probability proportional to its size wrt the total of the bunch.
• bunch11 – similar, except probability is based on the difference from the max value, ie small values are weighted inversely and are therefore more probable than large values.

# collatz step: for odd n, 3n+1 is even and gets halved in the same call,
# so this is effectively the compressed map
def seq(n)
  return 1 if (n == 1)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 if (n.even?)
#  n >>= 1 while (n.even?)
  return n
end

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i..ARGV[1].to_i).to_a

loop \
{
  p(sum(l))

  l.size.times \
  {
    |i|
    l[i] = seq(l[i])
  }

  break if (l.select { |x| x == 1 }.size == l.size)
}

def seq(n)
  return 1 if (n == 1)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 if (n.even?)
  return n
end

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i..ARGV[1].to_i).to_a
f = File.open('out.txt', 'w')

c = 0
loop \
{
  up = []
  down = []

  l.each_with_index \
  {
    |n, i|
    next if (n == 1)
    if (n.even?) then
      n2 = n
      while (n2.even?)
        n2 >>= 1
        down.push([n2 - n, i])
      end
      next
    end
    up.push([n * 2 + 1, i])
  }

  l2 = []
  up.each \
  {
    |x, i|
    down.each \
    {
      |y, j|
      l2 << [x + y, -(x.abs + y.abs), x, y, i, j]
    }
  }
  l2.sort!
  l1 = l2.select { |x| x[0] <= 0 }.sort.reverse
#  p(l1)

  t = l1.shift
  d, z, a, b, x, y = t
  l[x] += a
  l[y] += b

  c += 1
  p(s = [c, l1.size, sum(l), d])
  f.puts(s.join("\t"))

  break if (l.select { |x| x == 1 }.size == l.size)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a

loop \
{
  i = (0...l.size).min_by { |x| l[x] }
  x = l.delete_at(i)

  x = x * 3 + 1 if (x.odd?)
  x >>= 1 while (x.even?)

  l << x if (x != 1)
  p(sum(l))
  break if (l.empty?)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a

loop \
{
  i = (0...l.size).max_by { |x| l[x] }
  x = l.delete_at(i)

  x = x * 3 + 1 if (x.odd?)
  x >>= 1 while (x.even?)

  l << x if (x != 1)
  p(sum(l))
  break if (l.empty?)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

loop \
{
  i = (0...l.size).min_by { |x| l[x] }
  x = l.delete_at(i)

  x = x * 3 + 1 if (x.odd?)
#  x >>= 1 while (x.even?)
  x >>= 1 if (x.even?)

  if (x != 1 && !seen.member?(x)) then
    l << x
    seen[x] = nil
  end

  puts([sum(l), x].join("\t"))
  break if (l.empty?)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

i = 0
loop \
{
  l.sort!

  x = l.delete_at(i)

  x = x * 3 + 1 if (x.odd?)
  x >>= 1 while (x.even?)

  if (x != 1 && !seen.member?(x)) then
    l << x
    seen[x] = nil
  end

  puts([sum(l), x].join("\t"))

  break if (l.empty?)
  i += 1
  i = i % l.size
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

i = 0
loop \
{
  l.sort!

  x = i.even? ? l.shift : l.pop

  x = x * 3 + 1 if (x.odd?)
  x >>= 1 while (x.even?)

  if (x != 1 && !seen.member?(x)) then
    l << x
    seen[x] = nil
  end

  puts([sum(l), x].join("\t"))

  break if (l.empty?)
  i += 1
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

i = 0
loop \
{
  l.sort!

  x2 = x = l.delete_at(i)

  x2 = x2 * 3 + 1 if (x2.odd?)
  x2 >>= 1 while (x2.even?)

  if (x2 != 1 && !seen.member?(x2)) then
    l << x2
    seen[x2] = nil
  end

  puts([sum(l), x2, l.size, i].join("\t"))

  break if (l.empty?)
  i += (x2 < x) ? 1 : -1
#  i += (x2 > x) ? 1 : -1
  i = 0 if (i < 0)
  i = l.size - 1 if (i > l.size - 1)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

i = 0
loop \
{
  l.sort!
  i = (l.size * 0.9).to_i

  x2 = x = l.delete_at(i)

  x2 = x2 * 3 + 1 if (x2.odd?)
  x2 >>= 1 while (x2.even?)

  if (x2 != 1 && !seen.member?(x2)) then
    l << x2
    seen[x2] = nil
  end

  puts([sum(l), x2, l.size, i].join("\t"))

  break if (l.empty?)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

i = 0
loop \
{
  t = sum(l)
  j = rand(t)

  i = 0
  l.each \
  {
    |x|
    j -= x
    break if (j < 0)
    i += 1
  }

  x2 = x = l.delete_at(i)

  x2 = x2 * 3 + 1 if (x2.odd?)
  x2 >>= 1 while (x2.even?)

  if (x2 != 1 && !seen.member?(x2)) then
    l << x2
    seen[x2] = nil
  end

  puts([sum(l), x2, l.size, i].join("\t"))

  break if (l.empty?)
}

def sum(l)
  y = 0
  l.each { |x| y += x }
  return y
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

i = 0
loop \
{
  m = l.max

  l2 = l.map { |x| m - x + 1 }
  t = sum(l2)
  j = rand(t)

  i = 0
  l2.each \
  {
    |x|
    j -= x
    break if (j < 0)
    i += 1
  }

  x2 = x = l.delete_at(i)

  x2 = x2 * 3 + 1 if (x2.odd?)
  x2 >>= 1 while (x2.even?)

  if (x2 != 1 && !seen.member?(x2)) then
    l << x2
    seen[x2] = nil
  end

  puts([sum(l), x2, l.size, i].join("\t"))

  break if (l.empty?)
}


⭐ ⭐ ⭐

(~3wk later…) here is a different approach. this algorithm has two greedy aspects: it greedily takes the largest possible change (seeking the biggest increase) while it can, but falls back to the largest possible decrease whenever the cumulative change would climb above the starting sum limit. it seems to succeed for various ranges. what is remarkable (and the main reason for including this) is that it becomes very regular-looking in some kind of pattern at the end: it starts out rather random and converges toward a very ordered pattern. (not sure what to make of this yet, but it seems significant.) it seems to be iterating over many values that are nearly equally distributed over a range some fraction of the original limit. 2nd graph is the next iterate, 3rd graph is the difference with the last iterate. last two graphs are detail of this iterate/difference order tail. (note run args 600-800)

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a

c = 0
t2 = 0
loop \
{
  l2 = l.map { |x| seq(x) }
  l3 = []
  l2.size.times { |i| l3 << [l2[i] - l[i], i] }
  z = l3.max
  z = l3.min if (c + z[0] > 0)

  t = l[z[1]] = l2[z[1]]
  c += l3[z[1]][0]

  l.delete_if { |x| x == 1 }
  break if (l.empty?)
  puts([sum(l), t, t2 - t].join("\t"))
  t2 = t
}


(1 day later…) oops! that was kind of obvious wasnt it? the algorithm was ending up iterating over some of the same values at the end because it doesnt have a “seen” cache. here is nearly the same algorithm with only a seen cache added, which leads to remarkably, even quite strikingly different results and one of the best decreasing runs…! (same args 600-800) note added later: the next few algorithms have a defect in the simplistic/naive “seen” cache logic. oops! “too good to be true!” foiled again! exercise for reader to find it!

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a

c = 0
t2 = 0
seen = {}
loop \
{
  l2 = l.map { |x| seq(x) }
  l3 = []
  l2.size.times { |i| l3 << [l2[i] - l[i], i] }
  z = l3.max
  z = l3.min if (c + z[0] > 0)

  t = l[z[1]] = l2[z[1]]
  c += l3[z[1]][0]

  l.delete_if { |x| x == 1 || seen.member?(x) }
  seen[t] = nil
  break if (l.empty?)
  puts([sum(l), t, t2 - t].join("\t"))
  t2 = t
}


💡 💡 💡

(1 day later, results from yesterday…) holy cow, tried this simple idea of iterating next values close to the average of the current list and got striking results. this looks almost perfectly monotonic; some delta-value analysis of how close it comes shows there are small “jiggles” or “interruptions” in the monotonicity, which also seem to increase slightly in length for larger ranges. but in any case, nearly astonishing, and it seems to indicate some deeper principle at play. moreover it seems very resilient to different/larger ranges, in contrast to some of the other algorithms. following is args 10000-15000. esp remarkable is the pattern in the “next value” graph, 2nd below, showing a bifurcation-like trend.

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

t2 = nil
c = 0
loop \
{
  l.sort!
  l2 = l.map { |x| seq(x) }
  t = sum(l)
  a = t.to_f / l.size

  l3 = []
  l2.size.times { |i| l3 << [(l2[i] - a).abs, i] }
  z = l3.min

  y = l[z[1]] = l2[z[1]]

  l.delete_if { |x| x == 1 || seen.member?(x) }
  seen[y] = nil
  break if (l.empty?)
  d = t2.nil? ? 0 : t - t2
  (d < 0) ? c = 0 : c += 1
  puts(([t, d, c, y, a] + z).join("\t"))
  t2 = t
}


(several days & some more analysis later…) further investigation suggests that greedy selection on the next value is key, in contrast to many of the prior algorithms that branch/choose based on the current value. this simple algorithm doesnt use the average; it simply selects the iterate/choice that leads to the minimum next value. the trend line is very smooth. again the 2nd “next value” graph shows a very characteristic pattern of embedded/staggered/scattered line trends. (arg range 10000-15000). this is reminiscent of the trend lines in the “reverse tree visualization” from awhile back.

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

t2 = nil
c = 0
loop \
{
  l.sort!
  t = sum(l)
  a = t.to_f / l.size
  l2 = l.map { |x| seq(x) }

  l3 = []
  l2.size.times { |i| l3 << [l2[i], i] }
  z = l3.min
  y2 = l[z[1]]
  y = l[z[1]] = l2[z[1]]

  l.delete_if { |x| x == 1 || seen.member?(x) }
  seen[y] = nil
  break if (l.empty?)
  d = t2.nil? ? 0 : t - t2
  (d < 0) ? c = 0 : c += 1
  puts(([t, d, c, y, y2, a] + z).join("\t"))
  t2 = t
}



(again later…) the prior decreases look visually very smooth/monotonic but on close inspection tend to “jiggle” between decreases and following increases, especially near the end. this next strategy is a greedy decrease followed by several greedy increases, in a way that attempts to remain monotonically decreasing. it succeeds in a mostly monotonic decrease until the end, where it is more unstable. the concave-downward trend is notable/new/unusual. (args range 6000-7000).

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a
seen = {}

t2 = nil
c = 0
loop \
{
  l.sort!
  t = sum(l)
  a = t.to_f / l.size
  l2 = l.map { |x| seq(x) }

  l3 = []
  l2.size.times { |i| l3 << [l2[i], i] }
  z = l3.min
  j = z[1]
  y2 = l[j]
  y = l[j] = l2[j]
  d = y2 - y

  n = 0
  while (d > 0 && n <= 2)
    l3 = []
    l2.size.times \
    {
      |i|
      x = l2[i] - l[i]
      next if (x == 0)
      l3 << [x, i] if (x <= d)
    }
    break if (l3.empty?)

    z = l3.max
    d -= z[0]
    j = z[1]
    l[j] = l2[j]
    n += 1
  end

  l.delete_if { |x| x == 1 || seen.member?(x) }
  seen[y] = nil
  break if (l.empty?)
  d = t2.nil? ? 0 : t - t2
  (d <= 0) ? c = 0 : c += 1
  puts(([t, d, c, y, y2, a] + z).join("\t"))
  t2 = t
}


(later…) ok, for completeness here is similar code with the seen cache logic fixed & different results. args range 1000-2000. the instability at the end of the run suggests an idea: the algorithm should attempt to “attack” the largest changes (increases) as early as possible instead of at the end.

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a

seen = {1 => nil}

t2 = nil
c = 0
loop \
{
  l.sort!
  t = sum(l)
  a = t.to_f / l.size
  l2 = l.map { |x| seq(x) }
  j = 0
  while (j < l.size)
    if (seen.member?(l2[j])) then
      l[j, 1] = []
      l2[j, 1] = []
      next
    end
    j += 1
  end
  break if (l.empty?)

  l3 = []
  l2.size.times { |i| l3 << [l2[i], i] }
  z = l3.min
  j = z[1]
  y = l[j]
  y2 = l[j] = l2[j]

  seen[y2] = nil

  d = y - y2
  c2 = 0
  n = 0
  while (d > 0) # && n <= 4)
    l3 = []
    l2.size.times \
    {
      |i|
      x = l2[i] - l[i]
      next if (x == 0)
      l3 << [x, i] if (x <= d)
    }
    break if (l3.empty?)

    z = l3.max
    d -= z[0]
    c2 += z[0]
    j = z[1]
    l[j] = l2[j]
    seen[l[j]] = nil
    n += 1
  end

  d2 = t2.nil? ? 0 : t - t2
  (d2 <= 0) ? c = 0 : c += 1
  puts(([t, d, c2, c, n, y, y2, a] + z).join("\t"))
  t2 = t
}


later… this is another idea where the code chooses a largest increase followed by a series of decreases that greedily “fit” the increase, while also attempting to “conserve” the decrease “at the end”. it succeeds in monotonically decreasing until a spike at about 2/3 of the way through. args 1000-2000.

def seq(n)
  n = n * 3 + 1 if (n.odd?)
  n /= 2 while (n.even?)
  return n
end

def sum(l)
  z = 0
  l.each { |x| z += x }
  return z
end

l = (ARGV[0].to_i .. ARGV[1].to_i).to_a

seen = {1 => nil}

c = 0

loop \
{
  l.sort!
  d2 = 0

  l2 = l.map { |x| seq(x) }
  j = 0
  while (j < l.size)
    if (seen.member?(l2[j])) then
      d2 += l[j]
      l[j, 1] = []
      l2[j, 1] = []
      next
    end
    j += 1
  end
  break if (l.empty?)

  l3 = []
  l2.size.times { |i| l3 << [l2[i], i] }

  z = l3.max
  j = z[1]
  y1 = y = l[j]
  y2 = l[j] = l2[j]

  seen[y2] = nil

  d = y2 - y - d2
  n = 0

  while (d > 0)
    l3 = []
    l2.size.times \
    {
      |i|
      x = l[i] - l2[i]
      next if (x <= 0)
      l3 << [x, i]
    }

    if (l3.empty?) then
      c += 1
      break
    end

    z = l3.max
    z = l3.select { |x| x[0] > d }.min if (z[0] > d)

    j = z[1]
    y = l[j]
    y2 = l[j] = l2[j]

    d += (y2 - y)
    seen[y2] = nil
    n += 1
  end
  t = sum(l)
  puts(([t, l.size, c, n, y1] + z).join("\t"))
}


(5/29) an interesting idea is to plot the full trajectories for a starting interval, but scaled to the same horizontal width to see their relative motions (all ending at 1). this ends up looking a bit jagged but some trends can be seen (range 500-600 & log amplitude). there seem to be two major paths/clusters: trajectories that peak early and then decline somewhat gradually, and those that peak late and decline rapidly.

another interesting idea is to integrate the random walks, and that plot looks a little more comprehensible. it seems to have fluid-dynamic-like motions/waves. the long ending horizontal lines are slow decreases in the trajectory (leading to slowly increasing integrals) prior to hitting 1. range 500-600.

def seq(n)
  n = n.odd? ? (n * 3 + 1) : n / 2
  return n
end

def seq2(n)
  l = []

  while (n != 1)
    l << n
    n = seq(n)
  end
  l << 1

#  l.reverse!
  c = l.size - 1
  (0...(c - 1)).each \
  {
    |i|
    puts([i.to_f / (c - 1), Math.log(l[i])].join("\t"))
    puts([(i + 1).to_f / (c - 1), Math.log(l[i + 1])].join("\t"))
    puts
  }
end

(ARGV[0].to_i .. ARGV[1].to_i).each \
{
  |n|
  seq2(n)
}

def seq(n)
  n = n.odd? ? (n * 3 + 1) : n / 2
  return n
end

def seq2(n)
  l = []

  t = 0
  c = 0
  while (n != 1)
    t += n
    c += 1
    l << t.to_f #/ c
    n = seq(n)
  end

#  l.reverse!
  c = l.size - 1
  (0...(c - 1)).each \
  {
    |i|
    puts([i.to_f / (c - 1), Math.log(l[i])].join("\t"))
    puts([(i + 1).to_f / (c - 1), Math.log(l[i + 1])].join("\t"))
    puts
  }
end

(ARGV[0].to_i .. ARGV[1].to_i).each \
{
  |n|
  seq2(n)
}