Collatz, striking visualization of binary density for 3x + 1 operation!

❗ ❓ ❗ “OMG”, holy cow, this is a very simple exercise banged out in minutes yet it has an incredibly striking result, never seen before and not documented anywhere afaik, and it's the kind of utterly basic yet significant exercise that, with 20/20 hindsight, should have been done ~2 decades ago when starting out on collatz analysis. feel very dumb saying this, but it's totally unexpected and has me aghast for the moment. it seems to be an excitingly critical piece of the puzzle if not an astoundingly pivotal aspect. kicking myself! 😕 😮 👿 ^^’ o_O

this simply looks at the density of 1s in the binary representation before and after the f(x) = 3x + 1 operation, with input and output density plotted against each other, for a range of randomly sampled 500-bit starting numbers of varying density. it shows that, contrary to (utterly naive!) expectation, the 3x + 1 operation does not scramble the density at all and in fact basically preserves it in a highly correlated, near-linear, sigmoidal-like fashion. 2000 astonishing points in a scatterplot. the idea came about from thinking some more on the density view/ angle in the last batch of posted experiments (near the end).

map3

# fraction of truthy entries in an array
def d(l)
	return l.select { |x| x }.size.to_f / l.size
end

# density of '1' bits in a binary string
def d2(x)
	return d(x.split('').map { |x| {'0' => false, '1' => true}[x] })
end

z = 500    # fixed bit width of the starting strings
a = (0...z).to_a

m = 2000   # number of samples
m.times \
{
	|x|
	# target count of 1-bits, swept from 0 to z across the samples
	c = (x.to_f / m * z).to_i

	# all-zero string with a trailing 1; input density is measured over the
	# full fixed-width string
	n = ('0' * (z - 1)) + '1'
	a2 = a.dup.shuffle
	c.times \
	{
		|j|
		n[a2[j], 1] = '1'
	}
	# print input density vs output density after a single 3x + 1 step
	puts([d2(n), d2((n.to_i(2) * 3 + 1).to_s(2))].join("\t"))
}

(12/1) more insight, a significant story, and immediate confirmation of a simple/ natural hypothesis. a simple change to this code plots the bit density of the first t iterates. see graphs below for t=20, t=50, t=100. this shows that collatz iteration has a very specific/ significant effect on the binary density of successive iterates. starting from different densities, the density converges to ½: if it is initially below, it trends upward, and if it is initially above, it trends downward. after a certain number of iterations, it transitions into a new “regime” of basically a mean-reverting random walk around 0.5.

def d(x)
	l = x.split('')
	return l.select { |x| x == '1' }.size.to_f / l.size
end

z = 500
a = (0...z).to_a
t = ARGV[0].to_i

m = 100
m.times \
{
	|x|
	c = (x.to_f / m * z).to_i

	# seed: leading '1' fixes the bit width, trailing '1' makes it odd
	n = '1' + ('0' * (z - 1)) + '1'
	a.shuffle!
	c.times { |j| n[a[j], 1] = '1' }

	t.times \
	{
		n2 = n.to_i(2)
		n2 = n2.even? ? n2 / 2 : n2 * 3 + 1

		n = n2.to_s(2)
		p(d(n))
		break if (n2 == 1)
	}
	puts
}

map4x

map4y

map4z

next, a minor change in the code superimposes the prior “concatenated” trajectory trends in a single plot, with results for the same t=20, t=50, t=100. this shows a major variation in the convergence rate for the different starting “seeds”. trajectories start at the bottom and move upward. some rare outliers converge “very slowly” in comparison to most, which converge more “quickly”. the plots remind me of scenes of the Master Control Program from the Tron movie and even look a bit like the movie poster. 😎 actually, chose the vertical down-up orientation instead of the horizontal left-right one just for this amusement/ visual metaphor. oh, it reminds me of Tesla coils too. 🙂

def d(x)
	l = x.split('')
	return l.select { |x| x == '1' }.size.to_f / l.size
end

z = 500
a = (0...z).to_a
t = ARGV[0].to_i

m = 100
m.times \
{
	|x|
	c = (x.to_f / m * z).to_i

	n = '1' + ('0' * (z - 1)) + '1'
	a.shuffle!
	c.times { |j| n[a[j], 1] = '1' }

	t.times \
	{
		|t2|

		n2 = n.to_i(2)
		n2 = n2.even? ? n2 / 2 : n2 * 3 + 1

		n = n2.to_s(2)
		puts([d(n), t2].join("\t"))
		break if (n2 == 1)
	}
	puts
}

map4bx

map4by

map4bz

just to shake things up, flipped the vertical axis for this next idea. it simply scales each density trajectory by the time it takes to reach the center (or more exactly, to leave the half-section it starts in), hence some “removal” of the dense “core” and more visibility into it, where above it was obscured by many dense overlapping lines. it bears a vague resemblance to the famous drip paintings by Jackson Pollock, or an upside-down volcano.

def d(x)
	l = x.split('')
	return l.select { |x| x == '1' }.size.to_f / l.size
end

z = 500
a = (0...z).to_a

m = 100
m.times \
{
	|x|
	r = x.to_f / m
	h = r < 0.5
	c = (r * z).to_i

	n = '1' + ('0' * (z - 1)) + '1'
	a.shuffle!
	c.times { |j| n[a[j], 1] = '1' }

	l = []
	loop \
	{
		n2 = n.to_i(2)
		n2 = n2.even? ? n2 / 2 : n2 * 3 + 1

		n = n2.to_s(2)
		d = d(n)
		l << d

		h2 = d < 0.5
		break if (n2 == 1 || h != h2)
	}

	l.each_with_index { |d, t| puts([d, 1.0 - t.to_f / l.size].join("\t")) }
	puts
}

map5

this next experiment looks at the iteration count to reach the core, which shows a “U-shaped” correlation with the initial seed bit density. after that, however, there seems to be no further correlation (or only a very weak one) between the initial bit density and the remaining iteration count, or between the “core-entry” bit density and other metrics.

def d(x)
	l = x.split('')
	return l.select { |x| x == '1' }.size.to_f / l.size
end

z = 500
a = (0...z).to_a

m = 500
m.times \
{
	|x|
	r = x.to_f / m
	c = (r * z).to_i

	n = '1' + ('0' * (z - 1)) + '1'
	a.shuffle!
	c.times { |j| n[a[j], 1] = '1' }

	t = 0      # total iteration count
	t2 = nil   # iteration at which the density first crosses the ½ line
	h = r < 0.5
	while (n != '1')

		n2 = n.to_i(2)
		n2 = n2.even? ? n2 / 2 : n2 * 3 + 1

		n = n2.to_s(2)
		t += 1
		if (t2.nil?) then
			d = d(n)
			t2, d2 = [t, d] if ((d < 0.5) != h)
		end
	end

	puts([r, t2, t, d2].join("\t"))
}

map6b

(12/2) this next experiment looks at “density distance from core” versus subsequent density. a striking empirical observation is that for density distances starting “away” from the core, it consistently takes only a single operation/ iteration to move them closer. then there is a phase transition for density distance trajectories starting close to the core. so the earlier plots showing non-uniformly-converging densities (ie “non-monotonic trends”) need a little more interpretation. an initial iteration outside the core is empirically always converging toward the core, and trajectories that start outside and later show some “bouncing” (“direction reversals”) are rapidly reaching the core, the only* density regime where any “bouncing” is possible, within “not so many” iterations.

in the 1st graph, the red line is the initial density distance from the ½ critical point. the green scatterplot is the 1st subsequent density distance (ie after a single iteration). density distances sometimes move farther away when starting near the critical point. ie this is a phase transition where the collatz operation “usually” decreases the density distance except when crossing some threshold near the critical point. the big question of the moment: how (or whether) can this be proved mathematically? the 2nd graph shows the first “density distance delta” after a single collatz operation, and it is always decreasing (negative delta) except near the ½ critical point. clearly these are many of the signs/ symptoms of a strange attractor.

another obvious analogy after all this is that the core is like a “drain” or a “black hole” with a “gravitational attraction/ force”. trajectories away from the hole consistently “rapidly swirl” toward it, while trajectories “within” the hole “vibrate inside” it for some time. this leads to a wild idea: maybe there is an even stronger than superficial analogy here with black hole numerical properties/ physics. 💡 ❗ 😮

# density distance of a binary string from the ½ critical point
def d(x)
	l = x.split('')
	return (0.5 - l.select { |x| x == '1' }.size.to_f / l.size).abs
end

z = 500
a = (0...z).to_a

m = 1000
m.times \
{
	|x|
	r = x.to_f / m
	c = (r * z).to_i

	n = '1' + ('0' * (z - 1)) + '1'
	a.shuffle!
	c.times { |j| n[a[j], 1] = '1' }

	d1 = d(n)
	d2 = nil
	# a single collatz step
	1.times \
	{
		n2 = n.to_i(2)
		n2 = n2.even? ? n2 / 2 : n2 * 3 + 1

		n = n2.to_s(2)
		d2 = d(n)
	}
	puts([r, d1, d2].join("\t"))
}

map7x

map7y

* (12/3) ❓ there is some kind of mystery here, not yet figured out, & am attempting to come up with a new metric that isolates/ identifies/ explains it. from the earlier diagrams, initial seeds chosen randomly outside the core uniformly converge toward the core in density distance. however, just a few iterations later, even densities far from the core frequently bounce/ reverse. it appears then that the uniform convergence is an illusion of the distribution: there are rare needles in the haystack that bounce/ reverse, but they are essentially never hit by the uniform random (initial seed) sampling. looking for a metric that predicts this bounce/ reversal.

on initial inspection it looks like iterate sizes (eg measured in bit length) are not decreasing very quickly from the initial seeds. it also looks like maybe there is a concept of “bit bunching” in the later iterates, where 0s and 1s tend to bunch together. so the collatz operation is apparently affecting both density and bunching, and quickly bunches an unbunched initial seed…? one rough way of measuring this is to count the number of (consecutive, left-to-right) 0-to-1 and 1-to-0 transitions in the bit string. (this metric is sometimes used in image processing as a rough/ approximate measure of “entropy”.) on trying it out, that measure is correlated with the density, but it does not seem to be correlated with the probability of toward-vs-away core density distance movement. scratching head right now, still looking…
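
a minimal sketch of that transition-count metric (added here for concreteness; the helper name is mine, not from the original runs):

def transitions(s)
	# count adjacent 0-to-1 and 1-to-0 transitions, scanning left to right
	return (0...(s.length - 1)).count { |i| s[i, 1] != s[i + 1, 1] }
end

puts(transitions('0011010'))   # => 4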

(12/5) whew! that took quite a bit of ingenuity/ work, but here is an interesting metric that does indeed numerically quantify a difference between the iterate bit distributions and a random one, taking away some of the mystery and confirming a type of statistical “bunching”. the metric: measure the lengths of all the 0-runs and 1-runs in the binary string, and then compute the standard deviation of all those lengths. for randomly distributed bits this is low (green line); for bunched distributions it is high (red line, ie the bit runs in trajectory iterates tend to be either short or long and not close to the average). the blue line measures the difference. the following code averages these two statistics over multiple initial trajectories (10). it is started with ARGV[0]=5, which defines a 5% initial ‘1’ density. slightly strangely, some of the “wiggles” in the lines seem to be “real”, ie they are not smoothed out with more samples in the averages.

# returns [density of '1' bits, count of '1' bits] of a binary string
def d(x)
	l = x.split('')

	c = 0
	l.each \
	{
		|b|
		c += 1 if (b == '1')
	}

	return c.to_f / l.size, c
end

# build a random odd seed with c additional randomly placed 1-bits
def mix(c)
	n = '1' + ('0' * ($z - 1)) + '1'
	$a.shuffle!
	c.times { |j| n[$a[j], 1] = '1' }

	return n
end

# returns [mean, standard deviation] of a list of numbers
def stat(l)

	t = t2 = 0
	l.each \
	{
		|x|
		t += x
		t2 += x**2
	}
	c = l.size
	a = t.to_f / c
	z = t2.to_f / c - a ** 2
	return a, Math.sqrt(z < 0 ? 0 : z)
end

# mean/ std dev of the 0-run and 1-run lengths of a binary string
def bunch(n)
	return stat((n.split('1') + n.split('0')).map { |x| x.length })
end

$z = 500
$a = (0...$z).to_a
x = ARGV[0].to_i

m = 100

r = x.to_f / m

l1 = []
l2 = []

t = 60

10.times \
{
	c = (r * $z).to_i
	n = mix(c)
	d2, = d(n)

	t.times \
	{
		|i|
		a, sd = bunch(n)

		# control: fresh random string with the same 1-bit count as the
		# current iterate (c is updated each pass below)
		n1 = mix(c)
		a2, sd2 = bunch(n1)

		# compressed collatz step: 3n + 1, then divide out all factors of 2
		n2 = n.to_i(2)
		n2 = n2 * 3 + 1
		n2 /= 2 while (n2.even?)

		n = n2.to_s(2)

		d1, c = d(n)
#		puts([d1, Math.log(d1 / d2), a, sd, a2, sd2].join("\t"))

		l1[i] = [] if (l1[i].nil?)
		l1[i] << sd
		l2[i] = [] if (l2[i].nil?)
		l2[i] << sd2

		d2 = d1
	}
}

t.times \
{
	|i|
	a1, = stat(l1[i])
	a2, = stat(l2[i])
	puts([a1, a2].join("\t"))
}

map4e

an interesting point/ observation about the density convergence toward the core is that it seems much more “nearly monotonic” than the collatz trajectories themselves. one way of measuring this is to count the max # of iterations between consecutive maximal points (in this code, prior to reaching the core). if that metric is higher, then there are longer nonmonotonic runs. this plot shows that for 500 binary digits there is a max of about 23 nonmonotonic iterations, and the longer nonmonotonic runs happen for the lower-density starting points. out of 1000 samples.

def d(x)
	l = x.split('')

	c = 0
	l.each \
	{
		|b|
		c += 1 if (b == '1')
	}

	return c.to_f / l.size, c
end

def mix(c)
	n = '1' + ('0' * ($z - 1)) + '1'
	$a.shuffle!
	c.times { |j| n[$a[j], 1] = '1' }

	return n
end

$z = 500
$a = (0...$z).to_a

m = 0.4   # density threshold defining the low edge of the "core"

l = []
1000.times \
{
	r = rand * m

	c = (r * $z).to_i
	n = mix(c)

	d2 = 0    # running max density
	i = 0     # iterations since the last new max
	i2 = 0    # longest nonmonotonic run seen
	t = 0     # total iterations to reach the core
	loop \
	{
		n2 = n.to_i(2)
		n2 = n2 * 3 + 1
		n2 /= 2 while (n2.even?)

		n = n2.to_s(2)

		d1, c = d(n)
		d2, i = d1, 0 if (d1 > d2)

		i += 1
		i2 = i if (i > i2)
		break if (d1 > m)
		t += 1
	}
	l << [r, i2, t]
}

l.sort.each { |x| puts(x.join("\t")) }

map8

the next step is to look for the maximal nonmonotonic runs for increasing bit sizes, here in the range 400..500 (1000 samples at each), to see if there is any increasing trend. no trend is apparent, but it is noisy.

def d(x)
	l = x.split('')

	c = 0
	l.each \
	{
		|b|
		c += 1 if (b == '1')
	}

	return c.to_f / l.size, c
end

def mix(c)
	n = '1' + ('0' * ($z - 1)) + '1'
	$a.shuffle!
	c.times { |j| n[$a[j], 1] = '1' }

	return n
end

(400..500).each \
{
	|z|

	$z = z
	$a = (0...$z).to_a

	m = 0.4   # density threshold defining the low edge of the "core"

	l = []
	1000.times \
	{
		r = rand * m

		c = (r * $z).to_i
		n = mix(c)

		d2 = 0

		i = 0
		i2 = 0
		t = 0
		loop \
		{
			n2 = n.to_i(2)
			n2 = n2 * 3 + 1
			n2 /= 2 while (n2.even?)

			n = n2.to_s(2)

			d1, c = d(n)
			d2, i = d1, 0 if (d1 > d2)

			i += 1
			i2 = i if (i > i2)
			break if (d1 > m)
			t += 1
		}
		l << [r, i2, t]
	}
	puts([$z, l.max_by { |x| x[1] }].join("\t"))
	$stdout.flush
}

map8b

(12/10) if you've read this far, then it's worth getting to some kind of punchline by now. a proof sketch/ outline based on these directions might be apparent to experts, but here's the idea. it is known that if the “ending” binary digits of the iterates, ie the odd/ even steps, are roughly balanced, then the decreases in the function must be larger than the increases. but the ending digit is simply correlated with density. so resolving the conjecture seems to come down to determining the maximum “imbalance” possible in the odd/ even iterates. this density measurement gives strong support to that attack direction. the idea is that if a starting value has an imbalanced density, it cannot remain imbalanced for long, and becomes balanced, ie near the ½ level, after which point convergence is guaranteed.
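
to make the balance idea concrete (a back-of-envelope sketch added here; standard collatz bookkeeping rather than a result from these experiments): a trajectory segment with a odd steps (each roughly ×3) and b even steps (each ÷2) changes the value by a factor of about 3^a / 2^b, so a net decrease requires b/a > log(3)/log(2) ~ 1.585, ie div-2 steps must outnumber 3x+1 steps by roughly 1.6 to 1.

# net log-growth of a trajectory segment with a odd steps and b even steps;
# positive => net increase, negative => net decrease (ignores the +1 terms,
# which are lower order for large iterates)
def net_growth(a, b)
	return a * Math.log(3) - b * Math.log(2)
end

puts(net_growth(100, 158))   # ~+0.3, nearly balanced: b/a ~ log(3)/log(2)
puts(net_growth(100, 170))   # clearly negative, ie a net decrease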

so then, how to study this density metric? one possible lead: it reminds me very much of convolution operators in Fourier analysis. each collatz iteration is apparently very similar to a convolution operation on the binary digits/ string.
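
a minimal illustration of the analogy (my own framing, added here): 3x + 1 = (x << 1) + x + 1, ie the bit string added to a one-position shifted copy of itself plus one, structurally like convolving the bit sequence with a two-tap kernel, with the carries acting as the nonlinearity.

x = 0b101101
puts((3 * x + 1).to_s(2))            # => 10001000
puts(((x << 1) + x + 1).to_s(2))     # => 10001000, identical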

on to something else. another direction that has been nagging at my thoughts is the differential between the maximum of an individual trajectory and its starting value, ie some kind of bound. for most trajectories it is low, but it is known from empirical search of “low” trajectories that there are rare “needles in the haystack” for which it is large. how to find these needles? here's a greedy search that attempts to maximize the ratio of “digits in max trajectory value” to “digits in starting trajectory value”.

it comes up with a ratio of about ~2 for very long trajectories found via the recursive prefix search, which matches what is found on large searches of all trajectories starting at “low” values, say less than ~10M or so, the max of the plot on the right. this also helps look into the open question proposed earlier of whether there is a way to monotonically decrease the sum of a batch of sequentially starting trajectories. in a way, the “edge” or “difficult/ hard” cases are where the sequential start ranges contain these (rare, “black-swan-like”) “pointy needles”.

# ratio of log(trajectory max) to log(starting value) over the trajectory
def max(n)
	m = 0
	a = Math.log(n)
	while (n != 1)
		n = n.even? ? n / 2 : n * 3 + 1
		m = [n, m].max
	end
	b = Math.log(m)
	return b / a
end

# follow the collatz trajectory of the p-bit suffix b; a tracks the
# coefficient multiplying the unknown higher bits (starting at 2^p)
def check(p, b)

	n = 2 ** p
	a = n
	b0 = b
	while (a >= n)

		if (b.odd?) then
			a *= 3
			b = b * 3 + 1
		else
			return [[b0, max(b0), p + 1]] if (!a.even?)
			a /= 2
			b /= 2
		end
	end

	return []

end

# greedy prefix search: repeatedly extend the j highest-scoring suffixes by one bit
def recur3(m, j)

	l = [[1, 0, 2]]
	t = 0

	loop \
	{
		l.sort_by! { |x, c, p| -c }

		t += 1
		break if (t >= m)
		j.times \
		{
			x, c, p = l.shift

			l += check(p, x)
			l += check(p, x + (2 ** (p - 1)))
		}
	}
	return l
end

l = recur3(200, 10)

l.reverse.each \
{
	|xcp|
	puts(xcp.join("\t"))
}

pow13b

(12/26) this is some code that tries a different strategy: it simplifies some of the previous greedy searches but looks for long nonmonotonic runs of high density distance (outside of the “center region”), based on adjusting previous run seeds (ie functionally similar to but conceptually simpler than the previous “recursive traversal”). also noisy, but in contrast to the prior experiment it succeeds in finding a noticeable upward trend of longer nonmonotonic runs for larger seeds. it also tracks the starting seed in the 3rd output column.

the traversal dynamic has some analogy/ similarity to GAs, ie genetic algorithm approaches. each seed can be thought of as a long bit string, and there is a general metric aka “fitness function” that associates a score with each. new seeds are generated from previously traversed ones via a simple single-bit mutation operator (which can turn a previously-off bit on). the mutation bit moves from lsb to msb, least significant bit to most significant bit, and may also skip positions.

#!/usr/bin/ruby1.8

# density distance of n's binary representation from the ½ critical point
def dense(n)

        s = n.to_s(2)
        c = 0
        s.length.times \
        {
                |i|
                c += s[i, 1].to_i
        }
        d = c.to_f / s.length
        return (0.5 - d).abs

end

# fitness: longest nonmonotonic run of the density distance, ie the max # of
# iterations between successive new minima, before entering the core (< 0.1)
def f(n)

        d2 = 1.0   # running min density distance
        i = 0      # iteration counter
        j = 0      # iteration of the last new min
        m = 0      # longest gap between new minima
        while (n != 1)

                n = (n % 2 == 0) ? n / 2 : n * 3 + 1
                d = dense(n)

                if (d < d2) then
                        d2 = d
                        j = i
                end

                m = i - j if (i - j > m)

                break if (d < 0.1)
                i += 1
        end

        return m
end

def sample(t)

        l = [[1, 1, 0]]

        m = 0
        n2 = nil
        t.times \
        {
                |i|

                l = l.sort_by { |z| z[2] }.reverse

                j = rand([l.size, 10].min)

                n, p, x = l.delete_at(j)

                m, n2 = [x, n] if (x > m)

                p <<= 1
                l.push([n, p, x])

                x2 = f(n + p)   # fitness of the mutated seed
                l.push([n + p, p, x2])
        }
        return m, n2
end

1000.times \
{
        |t|
        puts([t, sample(t)].join("\t"))
        $stdout.flush
}

image

one can then sort this output file by the 2nd column descending, which puts the largest nonmonotone density distance runs at the top. an idea that occurred at this point was to look at a 2d random walk involving the trajectory and the density as two separate coordinates, using the following simple code which takes the seed as the program argument, and that leads to some interesting plots/ patterns.
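
a quick way to do that sort step (a sketch; “out.txt” is a hypothetical filename for the redirected output of the run above):

lines = File.readlines('out.txt').map { |x| x.chomp.split("\t") }
lines = lines.sort_by { |x| -x[1].to_i }    # 2nd column descending
lines.each { |x| puts(x.join("\t")) }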

here are the top 3 starting seeds found in the run above. the y axis is density distance and the x axis is log of the trajectory value. the plots start at the far upper right with larger density distance and move lower leftward. they seem to hit temporary trajectory plateaus/ regimes within which the density distance ranges randomly. also there is a recurring triangular motif. it reminds me of Chinese dragons with the “head” at the top right. also the left tail tends to fan out. in the 1st there is also a notable feature, a “front (lower right) foot”, where the trajectory climbs over/ in front of the starting seed. (“here there be dragons!”)

811     47      491730577128250424268160649
821     46      29819088008175195352773971
873     43      28555559826102533786963
958     43      32064868418684891070298111
863     41      50706024009129176061122612151977
546     38      504403158489011141
975     36      122705970690884861472917247
701     35      1976310373080946530938879
942     35      3742532991515703805665684991
999     35      7626508533038888148634916177
851     34      8764712192206067004630355
860     34      12105675799598789779
924     34      84624807373024667383799463
945     34      49517601571415250891437550185
762     33      58935133706213195158475039
829     33      29920914035462077561988435
718     32      7862002324215010973300051
711     31      113012203149732509
747     31      14086942977050447
766     31      3377699725376805

#!/usr/bin/ruby1.8

def dense(n)

        s = n.to_s(2)
        c = 0
        s.length.times \
        {
                |i|
                c += s[i, 1].to_i
        }
        d = c.to_f / s.length
        return (0.5 - d).abs

end

# print (log of iterate, density distance) at each step, giving the
# trajectory as a 2d random walk
def f(n)

        while (n != 1)

                n = (n % 2 == 0) ? n / 2 : n * 3 + 1
                d = dense(n)

                puts([Math.log(n), d].join("\t"))
        end

end

f(ARGV[0].to_i)

image

image

image

returning to an earlier theme, this is another simple study of stopping time versus the “op ratio”, ie the ratio of 3n+1 ops to div-2 ops, for some trajectories in the starting range around 2^{50}. it's a smooth quadratic-looking curve.

#!/usr/bin/ruby1.8

def f(n)

        a = b = c = 0
        while (n != 1)

                if (n % 2 == 0)
                        b += 1
                else
                        a += 1
                end

                n = (n % 2 == 0) ? n / 2 : 3 * n + 1
                c += 1
        end

        return [a.to_f / b, c]
end

z = 50

2000.times \
{
        n = 2 ** z + rand(2 ** (z - 1))

        r = f(n)
        puts(r.join("\t"))
}

image
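
the smooth shape is at least qualitatively consistent with the balance bookkeeping sketched earlier (again a rough heuristic added here, not a claim derived from the plot): if 3^a / 2^b ~ 1/n over the whole trajectory, then b ~ a*log2(3) + log2(n), so with op ratio r = a/b the stopping time c = a + b comes out near log2(n) * (1 + r) / (1 - r*log2(3)), which blows up as r approaches 1/log2(3) ~ 0.63.

# heuristic stopping time as a function of the op ratio r = a/b for a seed
# with the given bit length; a rough model, not an exact formula
def predicted_stop(r, bits)
	lg3 = Math.log(3) / Math.log(2)
	return bits * (1 + r) / (1 - r * lg3)
end

puts(predicted_stop(0.40, 50))   # ~190 steps
puts(predicted_stop(0.55, 50))   # ~600 steps, much longer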

this is another quick study that looks at the average number of div-2 ops between each 3n+1 op for starting seeds 2^{20} - 2^{500}, versus the trajectory length (which is roughly correlated with the size of the starting seed). the average tends toward ~2, and the spread of the metric decreases for higher starting seeds even as the spread in trajectory lengths increases.

#!/usr/bin/ruby1.8

def f(n)

        m = nil
        a = c = t = i = 0
        while (n != 1)

                if (n % 2 == 0)
                        a += 1
                else
                        t += a
                        a = 0
                        i += 1
                end

                n = (n % 2 == 0) ? n / 2 : 3 * n + 1
                c += 1
        end

        return [t.to_f / i, c]
end

(20..500).each \
{
        |z|
        n = 2 ** z + rand(2 ** (z - 1))

        m = f(n)
        puts([z, m].join("\t"))
}

image
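
the ~2 average has a standard heuristic explanation (a well-known back-of-envelope argument, added here): for random odd n, 3n + 1 is even and is divisible by exactly 2^k with probability about 1/2^k, so the expected number of halvings per 3n+1 op is sum k/2^k = 2. a quick numerical sanity check:

# empirical average 2-adic valuation of 3n + 1 over random odd n
def v2(m)
	c = 0
	while (m % 2 == 0)
		m /= 2
		c += 1
	end
	return c
end

t = 0
samples = 100000
samples.times { t += v2(3 * (2 * rand(10 ** 12) + 1) + 1) }
puts(t.to_f / samples)   # ~2.0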

(12/27) as noted, something is strange about the bit distribution in sequential iterates; it's certainly not very random. here's some quick code that looks at the lengths of sequential runs of 0 and 1 bits in the binary string. it graphs the averages over a trajectory starting from a random seed around 2^{50}. it shows they tend to hover around ~1 bit average length. there is also an alternating pattern/ phases of longer 0-runs versus 1-runs. some other analysis shows (not surprisingly) that these phases correspond to contiguous/ alternating below-½ and above-½ density regions (respectively). here are 3 separate runs.

#!/usr/bin/ruby1.8

def cut(s, x)

        s.split(x).map { |x| x.length }
end

def avg(a)

        t = 0
        a.each { |y| t += y }
        return t.to_f / a.size

end

def f(n)

        while (n != 1)
                if (n % 2 == 0)
                        n = n / 2
                else
                        n = n * 3 + 1
                end
                s = n.to_s(2)
                a = avg(l1 = cut(s, '0'))
                b = avg(l2 = cut(s, '1'))
                c = avg(l1 + l2)

                puts([a, b, c].join("\t"))
        end
end

z = 50

n = 2 ** z + rand(2 ** (z - 1))
f(n)

image

image

image

(12/30) more to the story on 0/1 bit runs! the 12/3 intuition is borne out by this more subtle experiment looking for slight bias. this experiment samples runs starting from seeds with exactly the same ½ bit density, 100 samples at a time, 40 binary digits with the 1-bits randomly located. then it sorts the array by trajectory length and computes bit-run statistics on the starting seeds. then it takes averages over 100K tests. it found subtle but very identifiable trends. in these results the trends are scaled by min and max around the following values. line 1 is the averaged/ scaled trajectory lengths. lines 2/4 are the average 1-bit/ 0-bit run lengths respectively. line 3 (1-bit run length std dev) is omitted; it came out random-looking. line 5 is the std deviation of the 0-bit runs. lines 6 and 7 are the average and std deviation of all 0/1 run lengths, which also show trends. lines 5 and 7 seem to show a nearly coinciding left bump; don't know if it's real. (this is now starting to remind me of particle physics and searching for the higgs boson!)

overall this is probably quite significant. despite long looking for the obvious, almost nothing about initial seeds of the same bit length seems to have any correlation with trajectory lengths. so to find even these weak effects of a statistic associating initial seeds with trajectory lengths is remarkable. o_O (but uh oh, how much of this can be simply explained by the magnitude of the starting seed? that needs to be controlled, oops)

max     541.501 1.034   1.267   1.016   1.339   1.003   1.306
min     124.834 0.991   1.257   0.981   1.302   0.998   1.287
avg     282.445 1.005   1.261   1.000   1.322   0.999   1.295

# three trajectory-length variants: f1 counts compressed (odd) steps, f2 the
# standard map steps, f3 the (3n+1)/2 shortcut steps; only f2 is used below
def f1(n)

	c = 0
	while (n != 1)
		n = n * 3 + 1
		n /= 2 while (n.even?)
		c += 1
	end
	return c
end

def f2(n)

	c = 0
	while (n != 1)
		n = n.even? ? n / 2 : n * 3 + 1
		c += 1
	end
	return c
end

def f3(n)

	c = 0
	while (n != 1)
		n = n.even? ? n / 2 : (n * 3 + 1) / 2
		c += 1
	end
	return c

end

# returns [mean, standard deviation] of a list of numbers
def stat(l)

	t = t2 = 0
	l.each \
	{
		|x|
		t += x
		t2 += x**2
	}
	c = l.size
	a = t.to_f / c
	z = t2.to_f / c - a ** 2
	return a, Math.sqrt(z < 0 ? 0 : z)
end

def stats(n)

	l0 = n.split('0')
	l1 = n.split('1')
	e0 = stat(l0.map { |x| x.length })
	e1 = stat(l1.map { |x| x.length })
	e2 = stat((l0 + l1).map { |x| x.length })
	return e0, e1, e2
end

def run()

	z = 40

	l = []

	$s.times \
	{
		# construct a z-bit seed (leading bit set) with exactly z/2 1-bits,
		# ie bit density exactly half
		a = (0...(z - 1)).to_a

		l2 = [0] * (z - 1)

		(z / 2 - 1).times { l2[a.delete_at(rand(a.size))] = 1 }

		n = ('1' + l2.join).to_i(2)

		c = f2(n)

		l << [c, n.to_s(2)]
	}

	l.sort_by! { |x| x[0] }

	l3 = []

	l.each { |c, n| l3 << [c.to_f] + stats(n).flatten }

	return l3
end

def out(var, l)

	$stderr.print("#{var}\t")
	l.each { |x| $stderr.printf("%.3f\t", x) }
	$stderr.puts
end

$s = 100   # samples per test
w = 7      # number of statistics columns
l2 = (0...$s).map { [0] * w }

t = 100000
t.times \
{
	l = run()
	$s.times { |j| w.times { |i| l2[j][i] += l[j][i] } }
}

mn = []
mx = []
a = []

l1 = []
w.times \
{
	|i|
	l = l2.map { |x| x[i] / t }
	mn[i] = l.min
	mx[i] = l.max
	a[i], = stat(l)
	l.map! { |x| (x - mn[i]) / (mx[i] - mn[i]) }
	l1 << l
}

out('max', mx)
out('min', mn)
out('avg', a)

l1.transpose.each { |x| puts(x.join("\t")) }

size2b

(12/31) nearly the same code with a few minor changes, included below for completeness; this next algorithm also averages the initial starting seeds to look for a trend there. it runs for 1M samples and took several hours.

this time the plots are split in two, and this is definitely easier to interpret. the 1st plot is the averaged trajectory lengths line 1, avg 1-bit run lengths line 3, avg 0-bit run lengths line 5, avg 0/1-bit run lengths line 7.

the 2nd plot this time includes the starting seed avg line 2, 1-bit run length std dev line 4, 0-bit run length std dev line 6, 0/1-bit run length std dev line 8. so the 1-bit run length std dev was probably not so random after all & shows a trend roughly in line with the other lines. there are various bumps which all roughly correlate, & a leftmost major spike. but notice that the avg run lengths do not correlate with the bumps (eg in the seed avg), showing that the former trends are somewhat independent of the latter one. ie the trend of (anti)correlation of avg run lengths with trajectory lengths is not entirely explained by the simple magnitude of the starting seed value.

max     541.709 27.415  1.033   1.267   1.016   1.340   1.003   1.306
min     124.889 27.407  0.991   1.257   0.982   1.301   0.998   1.285
avg     282.442 27.411  1.005   1.261   1.000   1.322   0.999   1.295

def f1(n)
	
	c = 0
	while (n != 1)
		n = n * 3 + 1
		n /= 2 while (n.even?)
		c += 1
	end
	return c
end

def f2(n)
	
	c = 0
	while (n != 1)
		n = n.even? ? n / 2 : n * 3 + 1
		c += 1
	end
	return c
end

def f3(n)
	
	c = 0
	while (n != 1)
		n = n.even? ? n / 2 : (n * 3 + 1) / 2
		c += 1
	end
	return c
	
end

# returns [mean, standard deviation] of a list of numbers
def stat(l)

	t = t2 = 0
	l.each \
	{
		|x|
		t += x
		t2 += x**2
	}
	c = l.size
	a = t.to_f / c
	z = t2.to_f / c - a ** 2
	return a, Math.sqrt(z < 0 ? 0 : z)
end
 
# per-seed stats: log of the seed value, then 1-run, 0-run, and all-run
# [avg, std dev] pairs
def stats(n)

	l0 = n.split('0')
	l1 = n.split('1')
	e0 = stat(l0.map { |x| x.length })
	e1 = stat(l1.map { |x| x.length })
	e2 = stat((l0 + l1).map { |x| x.length })
	return Math.log(n.to_i(2)), e0, e1, e2
end

def run()
	
	z = 40

	l = []

	$s.times \
	{
		a = (0...(z - 1)).to_a
		
		l2 = [0] * (z - 1)
		
		(z / 2 - 1).times { l2[a.delete_at(rand(a.size))] = 1 }
		
		n = ('1' + l2.join).to_i(2)
		
		c = f2(n)
		
		l << [c, n.to_s(2)]
	}

	l.sort_by! { |x| x[0] }

	l3 = []

	l.each { |c, n| l3 << [c.to_f] + stats(n).flatten }

	return l3
end

def out(var, l)
	
	$stderr.print("#{var}\t")
	l.each { |x| $stderr.printf("%.3f\t", x) }
	$stderr.puts
end

$s = 100   # samples per test
w = 8      # statistics columns (now including the log-seed average)
l2 = (0...$s).map { [0] * w }

t = 1000000
t.times \
{
	l = run()
	$s.times { |j| w.times { |i| l2[j][i] += l[j][i] } } 
}

mn = []
mx = []
a = []

l1 = []
w.times \
{
	|i|
	l = l2.map { |x| x[i] / t }
	mn[i] = l.min
	mx[i] = l.max
	a[i], = stat(l)
	l.map! { |x| (x - mn[i]) / (mx[i] - mn[i]) }
	l1 << l
}

out('max', mx)
out('min', mn)
out('avg', a)

l1.transpose.each { |x| puts(x.join("\t")) }

size2cx

size2cy

14 thoughts on “Collatz, striking visualization of binary density for 3x + 1 operation!”

    1. vznvzn Post author

      hi PS thx for the interest, do you have some experience with the collatz fn? have you heard of stackexchange? hang out there on chat. there is a nice number theory chat room etc. planning on expanding on this & writing up more very soon here. do you do any coding? what languages? might describe the code in more detail at some pt eg in more math oriented notation although as you can see the code is already quite brief. ruby is usually a very straightfwd/ elegant language and all the code on this site is proudly in it for that reason. maybe youd be interested in translating it to some other language of your choice? mathematica is used a lot nowadays for stuff like this & have used it some many yrs ago.

      there is some analysis of “binary density” of collatz-related aspects as noted in some of the earlier blogs here under the “Collatz” topic. however for a more professional writeup from the literature pointing in this direction try Code → Collatz Conjecture Experiments → ref[25] by Lagarias, sec3.2 “Patterns”, table 2, p9, note “1 ratio” column/ quantity/ measurement.

  1. Phillip Somerville

    Thanks for the reply vzn. The Collatz fn is new to me so I can’t hazard much in the way of commentary I’m afraid. Your posts, which I’ve had a (superficial) read of, I find intriguing.
    I self taught C++ (ThinkC) for some attempts at quaternion ray tracing of Mandelbrot set decades ago but have probably forgotten more than I ever learned, and the programs I wrote were very basic, static efforts. The only thing I would hazard a.t.m is that I’m thinking of the Collatz fn as an attractor, a ‘semi-balanced’ attractor? The algorithm on my page perhaps has some parallel in that regard? My function: for x < 1, (x^2) – 1, when squared again, reflects ≈ to 2 × x. Instead of plus 1, I'm doing a transform around -1 to 1.
    I don't know if that is relevant… all I can think of just now.
    I'm only just started on crypto stackexchange and after two answers I think I should stop trolling for page views and ask some questions or offer something a bit more constructive! 😉

    1. vznvzn Post author

      yes saw/ skimmed your PRNG paper on your site. & already commented briefly in the cstheory salon chat room on the very interesting paper by De Castro turned up re your blog on it. yes your function appears to exhibit some connection with known PRNG theory. and collatz fn has many close parallels to PRNGs, have pointed that out & drawn analogies but havent seen it mentioned much in the published literature so far. see the Code → Collatz Conjecture Experiments page for some more on that. there is also some superficial similarity with the mandelbrot set generator, and have found/ graphed/ documented fractal aspects in collatz plots.

      looked up your stackexchange user, strangely you seem to have a chat ID but currently no main ID. (maybe your answers were deleted by mods already?) if you can get 20pts anywhere on stackexchange (any site, either Q/A, thats 4 Q upvotes or 2 A upvotes) you can chat anywhere, shouldnt be too hard right? questions might work better at 1st. can walk you through basic collatz experiments in chat including the one above when you get chat privileges.

      1. Alexandre de Castro

        vznvzn, thank you for your comment on my paper in Physica A. I think if there exists a bijection {0,1}^n→{0,1}^n that becomes one-way in the greatest lower bound of Rényi’s entropy (min-entropy), then a pseudorandom generator G:{0,1}^n→{0,1}^2n can be built by a linear-size circuit from this one-way bijection. From my paper in Physica A, it is possible to show that this circuit is a sequence of two controlled-NOT gate running backwards in adiabatic boundary conditions.
        I am working on this idea, here:
        http://www.researchgate.net/publication/265385377_Pseudorandom_generator_%28PRG%29_from_the_controlled-NOT_gate_running_backwards

      2. vznvzn Post author

        hi AdC very cool that you found my brief comment on your paper like a needle in a haystack, did you use google for that or something? am not expert enough to follow the details of the whole paper but feel intuitively its on the right track. hope that you get further feedback from experts in the field. if you have any spare time can chat with you about it sometime in cstheory chat room.

  2. amg

    The standard tool in science/math community to do this kind of research is iPython notebook. matplotlib is used to build plots, and you can convert your notebook immediately to a top-notch publishing quality. ruby is very cryptic and most people with number theory background who are qualified to comment on your research wouldn’t bother to read it.
    As for the bit distribution, I have no idea where you get it from that 3x+1 is going to “scramble” bits in binary representation. +1 changes bit 0 with probability 1, bit 1 with probability 1/2, bit 2 with probability 1/4, etc. Multiplication by 3 also doesn’t scramble much if the binary representation has mostly 1s or mostly 0s. Play with a calculator in binary mode.

    1. vznvzn Post author

      there are many great tools, mathematica comes to mind, & agreed python is great/ widespread, another mentioned PARI to me in a chat room, do not discourage any uses, they all have their places, but ruby is the language for me for various reasons, and it is easy to convert this code to any language. there are almost no comments so far on my now quite numerous/ copious experimental number theory/ collatz experiments/ writeups after over ~2yr blogging (this post is a rare exception) & think its naive to believe that its (merely) because its written in ruby. my conclusion is that while eg the collatz problem is quite notorious and famous, very few mathematicians have actually worked on it or paid it much attention, and at any given moment, even fewer are working on it, and of that small set, they are not communicating or collaborating much except by intermittently publishing papers, and to research problems through active collaborations across blogs is more novel and rare. slightly paradoxically this site is not necessarily aimed at experts. its aimed at anyone who is curious and wants to participate. in contrast to some other math blogs (eg Tao) its purposely written at a very introductory level that even undergraduates could grasp. not that it would be possible for me to write like masterful Tao anyway. 😛

    1. vznvzn Post author

      yep you can find several of my comments on Taos great page dating back years & hey dont knock collatz, even love has sometimes been called a disease too 😉 re PS/SG comments/ engagement/ recent interest in stackexchange, inspired/ decided to finally write up some history/ dynamics of misc chat adventures, check it out ❗ 😎 💡 😀

  3. Pingback: *breakthru!* collatz codegolf adventures, stackexchange “collective intelligence” realized! | Turing Machine
