this is a hazy idea that's been pulling at me for several years; finally came up with a way to solidify it and decided to try it out. results are unfortunately lackluster so far, but it was a relief to finally see how it works (otherwise would just keep wondering if it's the "one that got away"!). anyway, think the overall conceptual idea is at least somewhat sound and still has promise, maybe with some other revision/angle.
the prior runs showed that there's a roughly predictable linear relationship between decidable metrics for each iterate and the glide length ("horizontal ratio"). so, one of my rules of thumb discovered the hard way over the years: once it's proven that at least a simple machine learning algorithm works, then one can look at more sophisticated ones that should at least outperform linear regression. (conversely, if there is not even a weak linear relationship present, it seems unlikely that machine learning will "pull a rabbit out of a hat". this is surely not an absolute phenomenon, but it seems to hold in practice, and it would be very interesting to isolate exceptions; suspect they are rare.)
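the "check for a linear relationship first" rule of thumb can be sketched in a few lines of ruby. this is just illustrative: the data points below are made up, and the `linear_fit` helper is hypothetical, not from any of the experiment code.

```ruby
# sketch: simple least-squares fit of glide length against a single
# iterate metric, to check for at least a weak linear relationship
# before reaching for fancier machine learning.
def linear_fit(xs, ys)
  n = xs.length.to_f
  mx = xs.sum / n
  my = ys.sum / n
  slope = xs.zip(ys).sum { |x, y| (x - mx) * (y - my) } /
          xs.sum { |x| (x - mx)**2 }
  [slope, my - slope * mx]
end

# made-up (metric, glide-length) pairs, purely for illustration
slope, intercept = linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
# slope ≈ 2.01, intercept ≈ 0 for this made-up data
```

if the fit's residuals are large and unstructured, this rule of thumb suggests more sophisticated learners are unlikely to do much better on the same features.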
the idea explored here on specific collatz data is a general one for machine learning. suppose that one does not have very many coordinates in one's data. each coordinate is something like a "feature", and one would like to increase the number of features using only the supplied ones. this is similar to what a neural network does, but typically each neuron has many connections. one wonders: can it be done with few connections? the smallest case is 2 connections. is it possible that only 2-way analysis of input data, built on recursively, could have predictive capability? the answer here for this data is mostly no, but maybe there's more to the story with further tweaking; also it might work for some other type of data.
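to make the 2-connection idea concrete, here's a minimal ruby sketch. the combining operation here is a plain product of feature pairs; that choice is purely illustrative (the actual 2-way analysis in the experiment may combine features differently).

```ruby
# sketch: expand a small feature vector by recursively adding 2-way
# combinations of existing features. each new feature depends on only
# 2 "connections", unlike a typical neuron with many inputs.
def expand_features(features, depth)
  depth.times do
    pairs = features.combination(2).map { |a, b| a * b }
    features = features + pairs
  end
  features
end

f = expand_features([2.0, 3.0, 5.0], 1)
# one round over 3 features adds C(3,2) = 3 pairwise products
```

note the feature count grows rapidly with depth, since each round combines all pairs of the enlarged set; in practice one would prune or cap the expansion.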
hi all, 2016 was another banner year for computer science. it's been on a phenomenal roll the last few years and there seems to be no end in sight. don't really know what is causing the whole wave; it's likely a variety of factors. one large factor is the headline-grabbing success of AI in the last few years, and that area's momentum shows no signs of abating. another neat factor is that president Obama has been a major friend of coding/CS. there will be a big vacuum in authoritative support after he leaves office; it's hard to think of a more enthusiastic or high-profile proponent/advocate of coding. wrt this (and ofc other ways) he will surely be sorely missed.[d2][d3][d4]
hi all, it's only been 3 wks since posting on the fake news and election integrity topic, and it's turned into a fast firestorm in a small amt of time, with various bombshells reverberating. hard to keep up! am unloading this batch of links.
obama is releasing a CIA intelligence report to congress/public[d] which implicates russia/putin[c] as intentionally attempting to push the election to Trump.[e]
congress is making loud noises about a bipartisan investigation. that is a rare occurrence, but one wonders if the partisans have some kind of secret agenda. in washington, increasingly the only thing that isn't bipartisan is the actual physical infrastructure, say the statues, and even then one wonders. (oops, even inanimate objects like statues have a partisan slant!) which reminds me of that old expression "reality has a liberal bias", and that in turn reminds me of the old quote "We're an empire now, and when we act, we create our own reality."[x]
which leads nicely/quickly into the next subject of "fake news".[a] the technical term for this ofc is misinformation, disinformation, or propaganda. not exactly sure why those terms are not being used; guess we need a nice new buzzword in this 21st cyber century, eh?
some media is already declaring jill stein's recount effort over.[b] it looks like 2 of 3 states rejected it via judicial review, but it seems the litigation will probably go on for months at least. conspiracy sites are even calling out outright fraud/hacking.[b3][b12]
clearly we are right smack dab in the middle of a major cyber/infowar with massive domestic and intl implications.
facebook has done a 180 from zuckerberg's claims of "there's nothing to see here, folks" and is already agreeing to work on the issue with various near-term and long-term approaches.
this took quite a bit of effort and is an idea that builds on extend23.rb; some of the effort went into more thorough analysis of results. it works with the long trajectory database. it uses 5 of the glide algorithms that typically lead to longer glides of at least 10 steps. then it does a linear regression on the trajectory iterate metrics to fit the estimated iterations left in the glide, scaled by the initial seed bit width. this was called the "horizontal scale" in some earlier experiments and seems to be stationary in the statistical sense. then the optimization algorithm attempts to find trajectories that maximally "thwart" the estimate, ie maximize the error.
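the thwart loop can be sketched roughly as follows. everything here is a stand-in: the estimator is a constant model rather than the actual fitted regression over iterate metrics, and the bit-flip mutation is a hypothetical search move, not necessarily what the optimization algorithm does.

```ruby
# collatz glide length: steps until the iterate drops below the seed
def glide(n)
  m, c = n, 0
  loop do
    m = m.odd? ? 3 * m + 1 : m / 2
    c += 1
    break if m < n || c > 100_000  # safety cap
  end
  c
end

# "horizontal scale": glide length scaled by the initial seed bit width
def hscale(n)
  glide(n).to_f / n.bit_length
end

# greedy random search for a seed that maximizes estimator error;
# the mutation operator (single bit flip) is a hypothetical choice
def thwart(estimate, iters: 200, seed: 27, rng: Random.new(0))
  best = seed
  best_err = (hscale(best) - estimate.call(best)).abs
  iters.times do
    cand = best ^ (1 << rng.rand(best.bit_length))  # flip a random bit
    next if cand < 2
    err = (hscale(cand) - estimate.call(cand)).abs
    best, best_err = cand, err if err > best_err
  end
  best
end

const_model = ->(_n) { 1.0 }  # stand-in for the fitted linear regression
n = thwart(const_model)
```

the point of the loop is adversarial: if the search can't drive the regression's error up much, the horizontal-scale estimate is robust over the explored seeds.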
whew! 2016 has been quite the fast-paced, dizzying blur. this is the year that social media seemed utterly ubiquitous, and not only that, it probably had a big influence on the election.[a]
it's quite a torrent. its grip on pop culture seems to be nearly complete. how could it be any more coupled? we have multimillion-dollar youtube stars, and supermodels becoming so based on instagram accounts. although, have we reached Peak Social Media? kim kardashian backing off on her intensity somewhat might be a milestone or gamechanger. or maybe not!