😎 ⭐ ❗ 😮 hi all. some rare, living on the edge, near-liveblogging this time around. am delighted to report, now typing this on a brand new chromebook, my 1st ever after eyeing them for years, just bought about 90m ago, my mask just taken off is probably still a little warm! this is a sort of modern miracle of technology. the netbook lives! scifi realized in my lifetime. it only took about 2 decades, or maybe ~1½. the latter is roughly how long google has been working on the software/ hardware; google apps predates chromebooks by maybe ½ decade? however the phrase probably originated in the early to mid 1990s from legendary Suns John Gage, hence now a sprawling 2½+ decade undertaking/ saga; dating it to the origins of the internet puts it at over ½ century of work!
so “standing on the shoulders of giants” in the 20th century… (and the machine literally has the asus slogan in search of incredible™ on it outside/ inside…)
- the model is an asus cx22NA. the list price of this machine is already remarkable at $200. it was only an extraordinary $140 at microcenter, discounted $60, ie a knockout 30% discount, what might be called dirt cheap. this is currently $50 less than best buy, which has now been open only 1mo since the shutdown. anyway my local best buy was sold out of the machines at the higher price! very strangely, asus does not currently list this model on their own web site! the wonders of modern supply chains/ merchandising systems never cease to amaze/ baffle me.
- hooked up a microsoft USB wireless mouse/ receiver without any trouble. ran so fast without any setup screen/ msg, am not even sure the machine downloaded a driver…? plugged it in after the machine setup sequence but wondering if it might have worked even at beginning? for someone who used to sweat bullets trying to install linux on windows machines even in the mid 1990s with (early) device drivers one of the biggest grating achilles heels of linux, feel this is another extraordinary feat of modern technology/ cyber age…
- ⭐ ⭐ ⭐ 💡 ❗ 😮 😎 grand finale magic trick voila— it has linux nearly built in! just bring up the chrome app switcher and type terminal. it downloads quickly, for me within minutes. it seems to use about ~1.3GB (acc to the “storage management/ linux beta” section on the settings screen). this apparently uses amazing new linux/ VM technology called Crostini, running debian (stretch). it was announced at google I/O in spring 2018, and showed up in chrome OS versions through late 2018, but was maybe not very supported/ widespread at 1st. my idea was to buy the cheapest chromebook and see if linux worked on it— either recipe for disaster, or success! honestly this is nearly 80% of my motivation on this impulse buy and am kicking myself a bit for not trying this cool feature sooner! but yes, its cutting edge stuff conspicuously marked BETA. so “at my age” still living a bit on the wild side.
easy, right? … not so fast! more background color: did a bunch of research before buying, but you know that sinking feeling when all of it seems to go out the window once in the store? and when you want to buy something nearly solely based on a critical feature that is known to exist based on publicity/ 2ndhand reporting, but not testable, not listed anywhere on the product, or known to the salesmen? the bestbuy salesman had heard of the linux terminal, said hed seen it work. the microcenter one had not. the machines run in the store in demo mode and there was apparently no way to verify that the linux capabilities were built in/ enabled in the machine(s). (would like to know that trick if it exists!)
some key but minimalist pages from google are both helpful and not helpful:
from the 1st pg, apparently typing chrome://components works in the non demo mode, but dont think that worked for me in store demo mode (hey google, maybe rethink that! even more radical, consider mentioning this feature somewhere in the demo!), and there is apparently no way to exit store demo mode. in theory one should only have to look at the chrome OS version, but this is apparently not easily found anywhere either, in the store details sheet or on the machine itself! the 2nd pg is a nice relatively current list of supported hardware but did not list my machine.
after the initial headspinning excitement— and yes am feeling quite a buzz, this is my 1st new development machine purchased in over a decade; hint, the last one was a Gateway desktop for the netflix prize, bought at circuit city, ran ubuntu, now maybe being stripped in an (overseas?) junkyard since disposing of it a few months ago in all the whirlwind commotion, lol!— am now going to have to figure out some of its quirks and, uh, inevitable limitations…
- hmmm, lol, is that right? a ~15m battery even after full charge? wait, it now says 99% and 10m left? huh? lol, basic battery status indicator inaccurate/ broken? oh wait is that now 7HOURS? holy cow! oops, thats not a bug, its a feature, lol!
- oops! there seemed to be some temporary glitch connecting to my internet router in the setup process, and was starting to worry it was all for nothing, sweating a few bullets that maybe it wouldnt even set up! (speaking from experience/ some scars to show for it, this was not an uncommon experience in the early days of linux + off-the-shelf hardware!) disconnected, reconnected, seemed to fix the issue.
- was typing fast and hit the CAPS LOCK key by old habit and the same positioned button brings up the chrome search window! lol, do these machines even have a caps lock? still havent found it!
- the 1366×768 11½ in screen is not so big/ sharp, fully into the days/ era of the amazing “retina display” (what a clever/ great ad slogan there!), but its not small either…! oh, but its very bright!
- oh geez, sigh! its apparently not simple to merely turn off the touchpad in chromium.
- oh, ok, saving the most sensitive/ overarching/ maybe gaping issue for last, am somewhat nervous about this and the jury on it is not in yet. as a software engr, “coming down to earth,” this is the somewhat tough pill to swallow bordering on potential dealbreaker. its so big/ potentially forefront, it deserves paragraphs all by itself… in 1 word, the hammer/ sword of our time/ age… performance. quick take, measured so far only on web pages, its “decent,” even bordering on “snappy!”
as we all know from life on planet earth, the two delightful, buzzing aspects of cheap + powerful are fundamentally incompatible due to the basic interplay of economics + physics. running any “serious/ heavy” apps/ code in a VM on a low-power cpu is the massive elephant in the room here. no matter how miraculous-bordering-on-magical Moore or Google push the limits of technology/ capability, there is inevitably no getting around the “price performance” ratio aka what long ago was called “paying the piper.”
am realistically/ pragmatically not expecting a lot here… looking only/ merely for “acceptable/ serviceable” range… bottom line here this is POC “proof of concept” for me, and anyway there is now a huge variety of machines to choose from in the google list (with apparently others not even listed!) to obtain higher performance if this experiment continues to work out.
however, heres another personal consideration. the end goal driving this, ruby code to explore the collatz problem, is known to not really consume large CPU even for some of the more “heavy” investigations or sophisticated optimizations looking into “larger” trajectories/ seeds. was working on this code almost 1 decade ago on a much slower machine. while more CPU definitely leads to more insight in general, so far a lot of insight has been gained without CPU limitation/ bounding so to speak. for almost all the experiments, dramatically different results/ findings would not really be expected with higher CPU. although, my one possible exception, the “generator/ adversary” algorithm pattern alluded to here intermittently does seem like it might someday scale better with more CPU…
so anyway nevertheless to me all this is such an extraordinary feat: to buy a working linux capable computer with a substantial screen for not much more than decent cell phones cost, even substantially less. due to the virus calamity, while have been calling this the lost summer, looks like 2020 is not a total washout! (so a little retail therapy is not dead for me even now with ½ century of hedonic normalization, actually instead some rare thrill/ zing…) everything else in the world that our lives depend on, like food + rent, costs exorbitantly and goes up now even visibly with nervewracking inflation, and (mere cost of living) salary increases barely budge the needle in years. insert a few huge cheers for IT managing to nearly defy the crushing economics of our time and wring the final sunset glimmers and drops out of moores law…! at some times )( really luv my “job,” software engr in general that is; as for in particular, thats an upcoming story…
next step, need to install RUBY…! alas have been googling RUBY ON CHROMEBOOK and theres not much coming up; what does is old, and not even any luv from stackexchange (eg a 2014 question closed due to not fitting in guidelines). am feeling a little bit like a dinosaur. python has sped past ruby in the last ½ decade. legendary guido van rossum edges out/ passes by Matz as Alpha Male of Coding. thats ok, always thought/ felt Ruby (a gem of a language) has more of a low-key zen feel to me…
ok, ok, ok, heres more of the fineprint. its been possible to run linux on chromebooks thru developer mode + crouton for many years. however this typically tends to require serious hacking effort and seems easy to mess up. this in contrast is an official “idiot proof” nod of the head to linux. with the announcement that MS is soon going to have a linux shell built into the OS, now linux really meets the unwashed masses, not just thru a shiny candy-coated facade like Android… it appears that this is the 1st case of linux access built into inexpensive mass merchandise computers… have been waiting for this for ~2½ decades myself… so jazzed! feel like calling all my circle of techie friends to tell them the news! oops, dont have any techie friends who would care that much alas lol… actually just drop all the qualifiers, dont have a whole lot of friends “period” at all right now! 😮
⭐ ⭐ ⭐
😳 some more of the backstory. the last blog update other than collatz here was a quite glaring, honestly somewhat embarrassingly aged ~1½ years ago, jan 2019, marking a big shift in this blogs contents since then, far from its prior spirit/ long intentions. have been very aware of this clock ticking even as nobody else has. did not really intend this early on in the effective “hiatus” but thats how the situation has evolved/ developed. have a massive backlog of blog material on my favorite/ critical zeitgeist topics eg AI/ fluid dynamics etc… the deeper story, as alluded to earlier, was that Big Corp disabled chrome bookmark sharing (~4/2019), and furthermore boobytrapped their machines to prevent “exfiltration” of data, making my hobby/ avocation of building amazing histories of links on cutting edge topics off the table aka impossible— talk about adding insult to injury.
this means pdfs, zip files, everything. maybe not images, but possibly including those also. by boobytrap, it means, “automated software tracks it, and immediately emails your manager upon detected violations.” the software is both a bit stupid and arbitrary/ indiscriminate and draconian at the same time, eg triggered by such things as including lists of entirely public/ open URLs. holy @#%&! and possibly soon even including short code snippets in arbitrary languages, hint, hint. IT companies dont mess around these days! it would seem like the jaws are drawing tighter all around here, and its like the terrible scene with Luke Skywalker and his motley band of cohorts in the garbage crusher! it still hurts me to think about it over 4 decades later! what a metaphor for the Age of Anxiety, the Era of the Precariat, etc
now, how about this new story for more Cyber/ Shutdown Blues? last Thu had not one but TWO cyber “near death experiences”! blogging is for stories, right? it was all “virtually” renamable to “WTFDAY”!
- Big Corp sends down a security update to their software; the now much more thorough/ secure (LOL!) scanning program runs very hot at up to 85-90% of CPU, and this ofc triggers my CPU overheating issue, which shuts down my machine. the machine is disabled even when running because all the apps grind to a halt. cant connect to the Big Corp VPN which might fix the issue; that also triggers overheating/ shutdown. it took me several hours to discover the issue was not limited to my own machine, after rebooting and managing to get my email to halfway function. amazing! this is 2020 and we dont have effective CPU limit systems on individual processes! mindblowing! this is basic engineering principles but it still eludes the mass IT system. so the IT field RE-discovers the concept of tragedy of the commons in the 21st century cyber age… talk about reinventing the wheel… except “they” still havent reinvented it yet! lol! humans seem to be not merely amateurs at building/ streamlining fault tolerant/ gracefully degrading systems, in fact at times imbeciles…
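(for the record, the per-process primitive has actually existed in unix for decades, which only underlines the reinvented-wheel point above; a minimal ruby sketch, assuming linux/POSIX, of a process capping its own CPU time:)

```ruby
# minimal sketch (linux/POSIX): RLIMIT_CPU caps a process's total CPU-seconds;
# the kernel sends SIGXCPU once the soft limit is exceeded. crude, but the
# primitive is decades old -- the "commons fence" lamented above.
soft, hard = Process.getrlimit(:CPU)
limit = 60                                          # cap ourselves at 60 CPU-seconds
limit = hard if hard != Process::RLIM_INFINITY && hard < limit
Process.setrlimit(:CPU, limit, hard)
p Process.getrlimit(:CPU).first                     # the soft limit now in effect
```

ofc this is a blunt per-process knob, not the graceful systemwide throttling the situation really calls for.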
- oh, same day, within about 90m, my internet router grinds to a halt! now sweating bullets, out of the mounting stress/ anxiety related to looming/ crushing deadlines/ multiple impatient/ unrealistic/ abrasive overseers/ supervisors/ “spectator(s)/ passenger(s)/ armchair quarterback(s)” and/ or having some hazy half-baked thoughts of “cyber attack,” and possibly audibly emitting at least 1 FOUR letter imprecation! again, not sure if it is a local or larger issue. wait hours for it to come up, manage to connect to comcast thru my phone, log into my account, and sure enough, helpfully there is a msg in my inbox about the outage and that its going to be fixed at 530p. oh, such service, they managed to get it fixed 1hr earlier, so it was only a ~½ (business)day outage in full. taking a walk in my neighborhood, spot the massive looming hydraulic Ditch Witch tunnel excavator with the XFinity sticker on it (note to self, next time seeing it need to try to spot the model # for fun!), hot sun + ~95 degrees out, and multiple crews in the middle of big dirt holes and holding wires, with worried sweating supervisors staring down the wage-slave/ low-paid mexican laborers! at least they dont have the problem/ stress/ anxiety of “more generals than soldiers”!
maybe based on star (MIS)alignments, these two calamities were dizzyingly intertwined such that my internet was restored at 4:15p but at which time my Big Corp machine was still ½ disabled, and a bunch of emails managed to show up at an excruciating dribble pace, from earlier in the day, sort of like a “WTF time machine” or Murphys law frozen in (cyber)amber, where 1st my perplexed manager announced the pileup/ wreck happened to his machine, emailing the whole team, but with no awareness of it happening to anyone else, followed by “DEJA VU” replies from ~½ dozen of the team, followed by an official company-wide acknowledgement of “technical issues”. OH REALLY!? so a sort of weak wave of less than ½ relief for my nausea rolled thru me… “at least am not the only one.”
talking to tech support today (mon) 3x, FOUR DAYS LATER! (not cleared up over fri thru the weekend after many tries, despite the Thu eve official email assurance notification that it would be!) on the 2nd call today the tech tries some commands, gets an error, and says “thats never happened so far!” LOL! and the 3rd tech today (inexplicably) gets a little farther with the same commands retried (after my restarting the machine again from an overheating shutdown), and informs me the broken update hit ~1500 employees! oh but am personally esp “lucky/ distinguished/ SPECIAL” the way it triggers my preexisting CPU overheating issue! OH YEAH! HITTING THE PERFECT TRIFECTA OF CALAMITY!
these are all the straws that broke this camels back, and caused me to take a course of action actually within my control, versus the visibly extremely numerous events that have recently spiralled out of my control. drove my car to best buy and microcenter and walked home with a shiny new computer that am now banging on emphatically, pouring out my copious recent troubles to the vast emptiness of cyberspace, safe/ secure/ comforted with the inescapable perception/ knowledge/ SURETY built over years of confirmation that UTTERLY NOBODY IS LISTENING. as gallows humor legend george carlin used to say, ITS A BIG CLUB, AND YOU AINT IN IT! written in the spirit of what psychologists discreetly call… journaling.
👿 😈 anyway, heres some actual silver lining. my massive bookmarks and browsing history all load up PERFECTLY/ SEAMLESSLY/ FLAWLESSLY on this new machine, almost without even blinking. Big Corp(s) can screw up one expensive laptop with breathtaking thoroughness/ flair six ways to Sunday (almost literally!), but my new personal $140 chromebook loads web pages FAST and DOESNT EVEN HAVE A FAN! take that, EVIL MONOLITHIC PROFITMONGERING BIGCORP(S)! so the goal of churning out high quality blogs, building an audience, getting lots of lively commentary from engaged readers, and solving Collatz is all that much closer/ within reach. LOL!
oh, so as the astute reader wants to know, is the title bait and switch, or what? what does all this have to do with COLLATZ anyway? the quick scoop on that: got some really cool ideas on it this weekend, couldnt work on them for days on the paralyzed Big Corp machine, and am hoping to get Ruby set up on this machine SOON. PLAN B NOW IN PLAY. and only got thwarted by the missing caps lock key on this chromebook, with the accidentally triggered popup search screen, about ~1 dozen times in the last few paragraphs. EVERYTHING IS GOING
PERFECTLY ACCORDING TO PLAN BWAHAHAHA!
when life hands you lemons, make lemonade™— go buy a new toy to play with! also, rant about it all in colorful minute detail on the internet. howl it out in the wind.™ or as a semifriendly/ even wise(?) contractor once said to me last summer after being asked how hes doing, “I could complain, but nobody would listen.”
⭐ ⭐ ⭐
(7/15) now, next some play-by-play detail, unsurprisingly sweating over some of the more tricky hurdles/ glitches and testing my software engineering skills/ cachet/ finesse/ badge(s) earned over decades. (badges? we dont need no stinking badges!) brought up the google playstore 3 times and it said “server error.” the 4th time it came up. next, all the online instructions say to run this command. lol, it ends with a GPG signature/ certificate error.
sudo apt-get update
Ign:1 http://deb.debian.org/debian stretch InRelease
Hit:2 http://deb.debian.org/debian stretch Release
Get:3 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
Ign:4 https://storage.googleapis.com/cros-packages/72 stretch InRelease
Hit:5 https://storage.googleapis.com/cros-packages/72 stretch Release
Get:6 https://storage.googleapis.com/cros-packages/72 stretch Release.gpg [819 B]
Get:8 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [531 kB]
Ign:6 https://storage.googleapis.com/cros-packages/72 stretch Release.gpg
Fetched 584 kB in 1s (322 kB/s)
Reading package lists... Done
W: GPG error: https://storage.googleapis.com/cros-packages/72 stretch Release: The following signatures were invalid: EXPKEYSIG 1397BC53640DB551 Google Inc. (Linux Packages Signing Authority)
W: The repository 'https://storage.googleapis.com/cros-packages/72 stretch Release' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
am thinking it all downloaded and installed but only left off with the ominous scary warning. next,
synaptic is a nice linux package mgr; like the ability to search/ run without the cmd line. lol, this pg doesnt explain how to run it, just saying “start it,” but it doesnt appear on chrome menus after the apt install. from google searches, apparently the chromium taskbar is called the shelf. this pg says run synaptic-pkexec. ok! that worked. cool, ruby has its whole own section. check ruby, install. works, awesome! 🙂 😎 ⭐
ruby --version
ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]

gnuplot --version
gnuplot 5.0 patchlevel 5
oh look, it even has SCITE, my favorite graphical editor! everything is awesome™! SCITE leads to this mysterious inscrutable error on the cmd line, but the editor seems to be functional. (an editor doing network IO/ connecting to a server… the wonders/ bizarre intricacies of linux will never cease to amaze, lol!) also, it seems to be better configured than synaptic. after running, the synaptic icon does show up as a penguin symbol on the shelf, but right clicking on it only gives the options new window/ close. right clicking on the SCITE icon has the option pin to shelf. oh yeah!
** (scite:1751): WARNING **: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
💡 next, copy last months construct138.rb off gist and run it. the copy/ paste includes a script tag into SCITE, a minor quibble. pasting/ adjusting it in SCITE gives correct syntax highlighting, nice! but strangely the mouse wheel wont scroll the text editor window, lol, sigh, argh, whatever. ok, moving on, oops,
-bash: ./construct138.rb: Permission denied; so it requires chmod +x filename (reminiscing, brings back old memories now; the 1st cmd entered as a teenager learning sco unix!), oh then ./construct138.rb: line 1: syntax error near unexpected token `(', so need to add #!/usr/bin/ruby to the top (oh and hey, for that vi/m works great, complete with color syntax highlighting!). then, (drum roll… in what might be called a modern, personal, cutting edge tribute to the towering pyramid of modern information technology, all the way down to the theory of Turing completeness…) the
gnuplot display script comes up perfectly! amazing, drag and drop the file from the chromium file viewer into the chrome window and it is correctly detected/ handled by the wordpress editor just like on MSWindows! breathtaking! BRAVE NEW WORLD! it seems there may be a very slight detectable difference in the font, and by default its perfectly square, in contrast to the windows 640×480 default dimensions. certainly can understand/ live with that…!
my immediate idea is now to do (“cross platform”) performance comparisons…
(later) adding the lines srand(0) / puts(Time.now.to_s) into the code, also with a timing line at the end. the random number generator seed correctly initializes to a default version on both platforms and both ruby versions (doncha luv it when things just work™?). the identical probabilistic (PRNG) count output is like a convenient verification/ receipt that the calculations are exactly the same; this code coincidentally/ conveniently turned out to be “just the ticket” for a comparison… speaking of probability, just to add even more thrill/ excitement/ living on the edge™, the laptop test involves a race against a random clock, not entirely unlike the pure quantum randomness associated with diodes, such that high CPU triggers the overheating temperature threshold/ sensors and device shutdown; wrapping “the whole enchilada,” its not entirely unlike a complex physics experiment at heart…
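the instrumentation idea, as a minimal sketch (the counting loop below is a hypothetical stand-in, NOT the actual construct138.rb workload):

```ruby
# sketch of the timing/ verification instrumentation; the map loop is a
# hypothetical stand-in for the real probabilistic counting code
srand(0)                             # fixed seed => identical "random" work on both machines
puts(Time.now.to_s)                  # wall-clock stamp at start
out = (1..10).map { |i| [i * 10, rand(200), rand(60)] }
print out.map(&:inspect).join(' '), "\n"   # identical output on both platforms = the receipt
puts(Time.now.to_s)                  # stamp at end; the difference is the elapsed seconds
```

since ruby seeds a mersenne twister, the same seed yields the same sequence on any platform, which is what makes the output a cross-machine checksum.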
2020-07-15 20:17:51 +0000
[10, 133, 15] [20, 97, 38] [30, 49, 28] [40, 24, 30] [50, 27, 29] [60, 13, 46] [70, 6, 42] [80, 14, 51] [90, 4, 38] [100, 3, 36]
2020-07-15 20:19:52 +0000
2020-07-15 14:17:41 -0600
[10, 133, 15] [20, 97, 38] [30, 49, 28] [40, 24, 30] [50, 27, 29] [60, 13, 46] [70, 6, 42] [80, 14, 51] [90, 4, 38] [100, 3, 36]
2020-07-15 14:20:35 -0600
⭐ ❗ 😮 🙄 😎 this is AMAZING, SHOCKING. never would have guessed— guess which one is the laptop?
ok, fair disclosure: the laptop is running at 100% CPU even before starting the code due to the aforementioned Big Corp jinxing. its an intel core i7-5600U 2.6GHz. the chromebook help/ your chromebook screen lists the CPU as a MediaTek processor M8183C. the microcenter link for the machine says this is an Intel Celeron N3350 Processor 1.1GHz. huh?
121s / 174s → 30% (difference) — chromebook ran FASTER by this much!
(ie basically 2m vs 3m!) voila, these are eyepopping numbers. again, the chromium version ran in a VM on a low power CPU running at only ~40% of the clock speed. staggering! so, extraordinary/ unexpected bottom line: the asus ad slogan in search of incredible™ seems to have come full circle/ been vindicated… this aged software engr is deeply, utterly impressed and thinks maybe google + its hardware partners have a new ad slogan that truly transcends the typical everpresent marketing hype… veni, vidi, vici.
talk about pulling a rabbit out of the hat™… really dont know how they did it. there must be some deeper story here and its still leaving me gobsmacked and scratching my head… and am thinking am not really sure running it on a non-maxed out laptop CPU would change the #s that much… for comparison the laptop ruby version is 2.2.4 but cant imagine the ruby performance would have changed much either… ❓
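the arithmetic behind the ~30% figure, spelled out in a few lines of ruby (timestamps copied from the two runs above):

```ruby
require 'time'
# elapsed wall-clock time for each run, from the printed start/ end stamps
cb  = Time.parse('2020-07-15 20:19:52 +0000') - Time.parse('2020-07-15 20:17:51 +0000')
lap = Time.parse('2020-07-15 14:20:35 -0600') - Time.parse('2020-07-15 14:17:41 -0600')
p cb.to_i                            # 121 -- chromebook seconds
p lap.to_i                           # 174 -- laptop seconds
p ((lap - cb) / lap * 100).round     # 30 -- percent less time on the chromebook
```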
another misc interspersed thought: not a small )( number of mathematicians and other thinkers might regard collatz as a nearly useless problem. ah, but here there is at least 1 highly useful aspect to add to its long resume (its a long resume for me at least): computer benchmarking! lol! said in ~½ in jest but nevertheless theres a highly serious aspect here. if 2^60 trajectories have the same properties, they can be verified on any computer and totally aligned performance comparisons can be made.
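to make the benchmarking idea concrete, a toy kernel (this is NOT the construct code, just a generic trajectory-counting sketch; the step counts are identical on any machine, so only the wall time varies):

```ruby
require 'benchmark'

# toy collatz kernel: count steps for seed n to reach 1
def collatz_steps(n)
  steps = 0
  while n != 1
    n = n.even? ? n / 2 : 3 * n + 1
    steps += 1
  end
  steps
end

# same seeds => same step counts everywhere; the elapsed time is the benchmark
t = Benchmark.realtime { (2..50_000).each { |n| collatz_steps(n) } }
puts "50k trajectories (seed 27 famously takes #{collatz_steps(27)} steps), #{t.round(2)}s"
```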
anyway all of this also a case of why didnt I try all this sooner™… maybe could have pulled off much/ most of this exercise over 1yr ago… coulda, shoulda, woulda… similar strong sentiments/ theme felt/ echoing/ head-ringing this year in another social context…
(later) some bloom off the rose/ flash of woe/ near brush with death has visited, with corresponding/ associated flashes/ visions of the machine failing to boot, along with the tedious/ timeconsuming process of trying to return the machine. (restocking fee?) am running some new collatz experiments that were quickly hacked together, joy! but then just moved some terminal windows around, resized, loaded a few web pages, and then the whole GUI ground to a halt! figured out how to bring up the task mgr: Search+Esc, or right click on window top panels. it showed high CPU for the chrome browser. oh thanks for narrowing it down, chromium. closing all windows while the CPU/GUI bog down to nearly totally unusable/ unresponsive doesnt fix the issue! shut down machine, restart; the machine does not start after many seconds of holding down the power button! holy @#%& cow, thats (a) serious issue(s)! kept holding it down, it finally restarted, but then the 2nd time ran into some kind of overloaded CPU issue… yikes! 😮
❗ ❓ now it seems back to normal, so but what the @#%& was that all about? alas another scary interlinked chain of WTF moment(s), but maybe this time less elongated/ excruciating…?! maybe
- some kind of CPU related bug in GUI actions?
- eg resizing terminal windows before they initialize?
- some kind of interaction between browser actions and the linux VM?
- web page had some videos on it, another element to question?
- its as if there was some code deep in the linux/ VM/ browser/ OS that started running hot and consuming all cpu at the expense of everything else even after closing all the windows/ running apps.
- oh wait, its all a blur, but did tell the machine to upgrade chromium! maybe accidentally working concurrently? cant remember exactly when in the sequence! rats, forgot to get the prior version! [ok, now its Version 83.0.4103.119 (Official Build) (64-bit).] “update”? oh yeah, aka really asking for trouble™, accident waiting to happen™… how many machines have been killed by buggy updates? its almost like a basic antipattern of the I(T) Age…
👿 oh geez, @#%& deja vu, am paranoid after the jinxed Big Corp machine… again, manifestation of tragedy of the commons in cyberage, where to spell it out if not obvious (which it apparently still isnt to many!) “commons” is CPU power and the “grazers” are apps! any of all this is kind of a downward spiral/ “race to the bottom”/ near showstopper/ dealbreaker to say the least. and the intermittent/ race-condition like nature is even more deadly. but, lol! again brings back old memories, reminds me of the early days of Netscape barely loading web pages with images, slowing to totally crashing… yep as the old ad slogans used to say, youve come a long way baby.™
oh, but the ability to scroll thru the SCITE window code with the mouse scroll wheel has inexplicably returned. oh, but resizing terminal windows now does not show the centered character dimensions display indicator after working just yesterday. sigh. oh, and getting intermittent msgs in gnuplot about a delay in loading fonts or something like that…? wtf? think literally got a msg from gnuplot something like "this error should never happen." lol! someone with a sense of humor there! alas, didnt save these multiple badges of war in the middle of combat… sweating some bullets thinking that gnuplot is already screwed up in only 1 day, in a way that might be hard to fix… 😳 😮
chromium did display a message about automatically updating linux… with no status indicator or further info… again the ageold dichotomy of transparent vs opaque systems + cutting edge challenge of engineering stability/ reliability/ resilience/ fault tolerance… argh! yes this inexplicable intermittent stuff is maybe going to be like chasing ghost(s) in the machine… and again in the spirit of deja vu am utterly terrified of some update screwing up some basic machine functionality. in simple terms, so to speak, the higher the feat, the greater the fall. in a single word, EXPOSED. 😮
sometimes IT is all such an utterly brittle/ fragile house of cards™… so do the machines remind you of human life at all? “built in their image,” it would seem the machines take on both the strong, light and vulnerable, dark characteristics of their creators… it seems to err is human, and “machine”… on closer look the shiny gold polish has dings/ dents/ scratches on it… its that sinking feeling of feeling less like a customer and more like a PAYING beta tester, in more ways than 1, all wrt cost(s)/ hardware(s)/ apps(s)… ok, ok, it did say LINUX BETA, sigh! (kind of a pleonasm huh!) 😮 ❗ ❓
(7/16) ⭐ 🙂 😎 ❗ ❤ it occurred to me that maybe the apt-get cmd might run better after the chromium update… wow! everything is awesome!™ some moments everything just works!™ works like a charm!™
sudo apt-get update
Ign:1 http://deb.debian.org/debian stretch InRelease
Hit:2 http://security.debian.org/debian-security stretch/updates InRelease
Hit:3 http://deb.debian.org/debian stretch Release
Hit:4 https://deb.debian.org/debian stretch-backports InRelease
Ign:5 https://storage.googleapis.com/cros-packages/83 stretch InRelease
Hit:6 https://storage.googleapis.com/cros-packages/83 stretch Release
Hit:7 https://apt.llvm.org/stretch llvm-toolchain-stretch-7 InRelease
Reading package lists... Done
hmmm did this change? almost )( need a time machine for this kind of work lol!
uname -a
Linux penguin 4.19.113-08528-g5803a1c7e9f9 #1 SMP PREEMPT Thu Apr 2 15:21:14 PDT 2020 x86_64 GNU/Linux
💡 new idea: maybe the earlier reboot troubles were related to an (unusually?) long shutdown sequence. looks like there is a power light and it eventually, but not quickly, goes off after the shutdown ends. but huh? why isnt there a restart option on the menus? also am always a bit worried/ wary about “soft shutdown” switches; now wondering, is there a hard shutdown/ restart?
(later) incredible! after nearly 1 full week Big Corp is not dead yet™ + still in the running™ + back from zombieland and apparently managed to send down silent update to fix their box. ok, ok, its still gonna overheat if the cpu goes to 100% for x amt of time (exact # there related to electronics + solid dynamics + QM + heisenbergs uncertainty principle), but so nice to see the cpu go down to 10%, its maybe running even better than before their broken security scan update. so, gotta be fair here. reran the code on low CPU.
2020-07-16 17:38:58 -0600
[10, 133, 15] [20, 97, 38] [30, 49, 28] [40, 24, 30] [50, 27, 29] [60, 13, 46] [70, 6, 42] [80, 14, 51] [90, 4, 38] [100, 3, 36]
2020-07-16 17:40:12 -0600
😳 😥 ofc the whole chromebook thing was too good to be true.™ the years old 4-core dual-processor box ran in 74s ie 74s / 121s → ~40% faster than the chromebook and the jinxed machine is almost 74s / 174s → ~40% speed of “normal.” so the jig is up!™ oh well! still doing cutting edge scientific/ math research for less than $150. 🙂 ⭐ ❤
⭐ ⭐ ⭐
(later) 💡 😎 voila after much runup fanfare, now introducing the 1st new idea/ code generated on the chromebook! this was a simple idea that was worth adding variants to, started out with 1 case and then looked into others and then decided to include all the variants. the basic idea here is to look at the Terras density glides via different horizontal alignments, right aligning by different variables namely
nw, cm, cg, cgcm, c. 1st is starting bit width 50, and 2nd to last is the difference. and again am now thinking should have looked at this basic idea a lot sooner.
it does seem to reveal some remarkable/ important/ key trends esp in drains which can be seen visually by a/ the careful observer and maybe have been hidden until now, but also some deja vu and need to crunch some more to draw them out further/ highlight/ more quantitatively. very little structure seems to have been found in drains so far, so maybe this is something of a breakthru… ps drag + drop multiple image files into wordpress web editor works just fine too!
some addl analysis: last diagram #5 is very similar to the trend found in
construct101c from 1/2020, those were calculated over entire trajectory sets (postdetermined glides) of a given length. #1 is similar to the now highly referenced “rosetta diagram” except that it discards non postdetermined glides affecting in particular the bottom upglide region, basically removing some portion of it. otherwise, as an overall note there is some overplotting effect in the diagrams.
more backstory: long ago was able to post ruby code directly to
gist without interference by Big Corp. that was quickly “kiboshed” years ago by the new security software as the term goes. it also rejects pasting into nice site
pastebin whereas seem to recall that was allowed for awhile also. so now hundreds of times have posted code snippets into yahoo mail, then copied them from the draft email and back into
gist as a rudimentary but workable transfer mechanism. today, am again pasting directly into
gist. oh, the sheer pleasure/ joy! slowly/ at last released from some of the shackles and regaining lost functionality; the sunbeam comes out of the clouds. 🙂
(7/17) 💡 ❗ it really looked like there may be some patterns lurking there but they seem to be very well concealed. now (re)thinking maybe some anthropomorphism bias in the sense of finding shapes in clouds and/ or overplotting illusion. one idea, probably tried a lot before while coming up emptyhanded both in past and present, is trying to find some kind of connection between the Terras glide density and the postdetermined range. pounded out a lot of ideas and it all mostly amounted to zilch as far as trying to extract signal. this all attests to the long ago discovered “undifferentiation” property.
however, finally something seems to have turned up. this was found after some intermediate ideas/ code that are for now left undocumented. this code ended up looking at (x, y) pairs where x is the width of an interval size/ length and y is the parity density over the interval. the intervals are what might be called postdetermined glide, postdetermined postpeak glide, and postdetermined postglide ie from ‘nw’ bit width, ‘cm’ peak, and ‘cg’ glide to the end of the glides, these are red, green, blue scatterplot points resp.
😮 there is a remarkable tightening/ curve over the ‘cg’ pairs. this is unexpected because the ‘cg’ sequences are generally shorter. am trying to think of a “natural” way to compare this to unbiased random walks. right now it seems to appear to reveal some kind of sophisticated bias but need a “control” model to draw this out. again, to reiterate/ clarify, below densities and sequence lengths were examined closely/ thoroughly in several ways to find any correlations with Terras seed parity densities and nothing seemed to turn up. getting a sense of deja vu on that particular direction but itd take awhile to survey prior stuff.
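to pin down the construction, heres a minimal ruby sketch (not the blog code; the `start` cutoff is a stand-in for the ‘nw’/ ‘cm’/ ‘cg’ alignment points) of the (x, y) = (interval length, parity density) scatter pairs, using the compressed Collatz map:

```ruby
# minimal sketch of the (x, y) scatter pairs: x = interval length,
# y = parity density over the interval, using the compressed map
# T(x) = x/2 (even) or (3x+1)/2 (odd); `start` is a hypothetical
# stand-in for the 'nw'/ 'cm'/ 'cg' alignment cut points
def parity_seq(n)
  seq = []
  while n > 1
    seq << (n & 1)                        # parity of current iterate
    n = n.odd? ? (3 * n + 1) / 2 : n / 2  # compressed collatz map
  end
  seq
end

def tail_density(seq, start)
  tail = seq[start..-1]
  return nil if tail.nil? || tail.empty?
  [tail.size, tail.sum.to_f / tail.size]  # the (x, y) pair
end
```

the red/ green/ blue point sets then just correspond to three different `start` values per trajectory.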
(7/18) 💡 thinking this over, its not rocket science™… a lot of that seems to have to do with the fact that these Terras parity density glides from the ‘cg’ points forward all start at approximately/ roughly the same position, which doesnt hold for ‘cm’ or ‘nw’. which reminds me of some other ideas have been thinking about. imagine glides that have varying predetermined slopes but end at nearly the same position, what dynamics/ properties would they have? presumably something quite similar to these ‘cg’ trajectory sets. did some similar work recently on Terras glide construction and need to grab/ write up that code. and dont quite have it figured out how to do the “varying slope ending at nearly same position” idea yet. exercise for reader™… oh, looking thru old posts, it seems close/ similar to
rwalk2 on 12/2019.
💡 heres another rather natural idea related to old code to calculate 1-run histograms using the
convert subroutine, which last showed up quite awhile ago, apparently over ½ yr ago on 10/2019 in
construct79. (oh, 2nd/ further look, it did also show up in
construct114 on 1/2020). this code looks at “evolving histograms” and the left side is a control diagram for randomized generated binary strings with the same/ “matching” density as collatz sequences starting from ½ density, which is displayed on the right side. the hotter colors are later in the sequences. there is quite a bit of variation on rerunning this code.
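roughly the 1-run histogram idea in ruby (a sketch, not the actual convert subroutine): tally run lengths of consecutive 1 bits in an iterates binary string, plus a matching-density random control string for comparison:

```ruby
# sketch of the 1-run histogram calculation (not the original convert code):
# counts of each run length of consecutive 1 bits in the binary string
def one_run_histogram(n)
  n.to_s(2).scan(/1+/).map(&:size).tally  # {run length => count}
end

# control: same histogram over a randomized binary string with
# "matching" density, as in the left-side control diagram
def random_control_histogram(len, density)
  bits = (0...len).map { rand < density ? "1" : "0" }.join
  bits.scan(/1+/).map(&:size).tally
end
```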
there are several prior experiments that seem to find very subtle biases eg in the higher lengths. if there is not left/ right mirror symmetry this indicates some kind of bias. but it looks like the bias is too subtle to strongly isolate with this diagram idea. it does deviate from mirror symmetry but here (single) looks can be deceiving and so far not in a consistently reproducible way, ie the deviations show up but not consistently in the “same places”. but maybe some further finetuning… ❓
even though maybe nearly a null result, also a further quantization of undifferentiation/ indistinguishability. but actually nearly independent of the collatz considerations its a great visualization idea that helps picture many prior themes & there is also a key demonstration here of the 1-run “scale shifting” of ½ density binary strings wrt overall size of string, aka fractal scale invariance associated with ½ density binary strings. another way to think about this is that a lot of the optimization searches are actually just tracking these self-similar scale changes. yes, theres been some danger in not thinking about this as a basic control comparison against a lot of prior experiments. as expr goes the light is slowly dawning…™
⭐ ⭐ ⭐
(7/19) 💡 after some flurry of experiments and thinking and some null results, some late at nite, feel a little “spent” (an evocative expr!) and dont have a whole lot of ideas at the moment. in a less busy moment wrt coding thought maybe would share some misc newer ideas about the bigger picture.
this has been outlined before in different ways, not as/ so much recently, but to summarize, my idea is that the overall goal may be to use ML to understand/ predict dynamical systems. dynamical systems started to arise into prominence in math in the 1960s or so ie about ½ century ago, and one legendary paradigm shift was the discovery of the lorenz attractor.
there is an old question which has probably been studied in some papers but am not sure where in particular. its somewhat simple and utterly complicated at the same time. one refers to the “lorenz attractor” as if its real. but how does one actually prove the existence of this attractor? my understanding is that it is discovered empirically and not nec proven to exist. that may sound paradoxical. what does it mean to prove it exists? basically one would like to show that there are trajectories that are stable and go in endless cycles without diverging. note the similarity to the collatz problem!
so this seems to suggest a similarity between the lorenz problem and fractals, and yes, it seems to me like the lorenz orbit patterns are yet another kind of fractal. self similar.
which then leads to the idea of fractals. and the mandelbrot set. now consider a single point/ coordinate in the mandelbrot mapping/ calculation (x, y). suppose it is either “converging or diverging”. notice that both are calculated empirically using limit methods. how does one prove an individual (x, y) point is either converging or diverging? as far as known there are no such proofs. but again this has a strong similarity to the collatz problem.
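for concreteness, the “empirical limit method” for a single mandelbrot point is just an escape-time loop; a minimal ruby sketch (the max_iter/ bound cutoffs are arbitrary, which is exactly the gap between numerical evidence and proof):

```ruby
# empirical escape test for a single mandelbrot point c = x + yi;
# this is evidence, not a proof: max_iter and bound are arbitrary cutoffs
def escapes?(x, y, max_iter: 1000, bound: 2.0)
  zx = zy = 0.0
  max_iter.times do
    zx, zy = zx * zx - zy * zy + x, 2 * zx * zy + y  # z -> z^2 + c
    return true if zx * zx + zy * zy > bound * bound
  end
  false  # "appears bounded" so far -- later divergence is not ruled out
end
```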
and cant now resist mentioning 2 foremost related (deep, “still open”!) scientific problems from dynamical systems theory, centuries old, dating roughly to Newtons breakthroughs. stability of the solar system, and the 3 body problem.
💡 ❗ so this (finally!) gives an overall framework for what the fundamental problem is:
using ML to understand strange attractors/ convergence/ divergence of points or sets of points in dynamical systems, and trying to use ML pattern detection to come up with actual proofs.
under this analysis merely proving the existence of the lorenz attractor would be a breakthrough…
⭐ ⭐ ⭐
(7/22) the recent
construct146 code/ graph gave me a weird sense of deja vu and tried to understand it. have been going thru old blogs and theres a slow dawning realization going on here. this has never actually been stated specifically, and there are 2 different povs on it that have been alternating/ juxtaposing over the years. but ever since discovery of the binary density significance on iterates a few years ago, its been clear it has a foremost-to-central significance on the problem. the 2 povs can be summarized in a single sentence as follows, with many near-variant ideas; the idea has been exploited but think maybe not explicitly/ fully and think never before written here; deeply/ mindfully embracing it hopefully leading to further insight/ leverage. here is maybe a new distinction of differentiable vs distinguishable:
the (postdetermined!) drain iterates are (nearly) indistinguishable from ½-density numbers.
the two povs in the single sentence are that there is both distinguishability and indistinguishability. but an aspect of indistinguishability is newly identified in
construct146 that was found in an earlier context. need to do a survey on this; this is a reoccuring theme in the last ½ year and ofc over all years!
- indistinguishable —
- the 1-runs distribution of drain iterates closely matches ½ density numbers. this is in alignment/ correspondence/ contrast to
construct123, construct123c, construct124, construct125 on 2/2020 looking at 1-runs distributions in the predetermined region and finding a divergence from a mean. that histogram bin calculation is extremely similar to the latest
construct146 analysis except on predetermined vs postdetermined ranges, resp.
construct117 on 1/2020 found the same indistinguishability but didnt really take it to heart! that blog says in pondering it, going again in a circle! the twist is that this is about undifferentiability in parity sequences but as missed at the time, it could probably just as easily be about/ apply to the ½ density iterates!
just did a quick experiment looking at the same bit-word analysis over ½ density parity sequences. the histogram curve is nearly perfectly identical with only tiny deviations measurable. in other words the postdetermined glide seems to be basically ½ density parity sequences and the prior histogram curve is characteristic of them. again, proving something like this is nearly the entire open question.
- distinguishable —
construct137 experiment seemed to show distinguishability but think it is actually indistinguishability from ½ density iterates.
construct127, stepwise6 from 5/2020 were able to find “chunkiness” in the density distance for postdetermined iterates.
construct129 seemed to find something but am thinking it is negligible and didnt do a ½-density control comparison.
construct128e same month finding some edge (also) now looks to me like a different angle of uncovering of the trajectory merging property.
construct114 on 1/2020 seemed to find a thin edge.
construct76, construct79 from 10/2019 found a very thin distinguishable edge. notably these last 2 all use the
convert logic sensitive to very minute difference(s) in histogram statistics.
it seems there has been some (subtle/ near unconscious) bias in my thinking in that maybe there is some “special” property of the drain iterates other than merely being “very close” to ½ density iterates. but in light of multiple findings and many nonfindings (ie null results) that needs a serious reconsideration. there is some very slight bias measurable in density distance, but it seems almost nothing else, and other ideas explored along the lines seem to fade/ melt away on closer scrutiny.
this blog has probably mostly been using the words “differentiable” and “distinguishable” nearly interchangeably, but heres a possible subtle distinction. “differentiable” being used more in context of comparison with “pure noise” and “distinguishable” being used more in context of comparing 2 different distributions. what is somewhat eluding me, maybe now finally dawning on me after the data keeps “speaking” it from different angles— and that last diagram is esp revelatory/ decisive— is that (restating the main reoccuring theme again from slightly different angle) the “noise” of the undifferentiated region is actually equivalent to ½-density iterates.
so all this now leads to some better idea about what/ how to prove. this all carefully “walks the fine line/ threads the needle” of all known observations/ properties and is still much along the lines of prior proof ideas.
- look for some measure of “histogram discrepancy” in 0/1 runs vs ½ density iterates (at least several are already built as just listed).
- try to show that if it is within some bound x, some (sequence) length f(x) exists such that the remaining iterates are all declining and within the same rough (histogram discrepancy) bound.
- show that all (“initial”) postdetermined iterates are within the bound. show that corresponding f(x) is sufficient to terminate the remaining postdetermined glide.
- in a sense all this is about a sort of order decay, where histogram discrepancy can measure order, and this orderliness can decay incrementally/ gradually over time, but not “spike” unless previously exceeding some threshold. ofc this is a sort of reversal/ flip of ideas about the undifferentiated region being the height of disorder. yes, it has to be said… a paradox worthy of stating explicitly, a Big Idea that might even be wild/ crazy enough for the basis of a proof: the disorderliness has a sort of (measurable) orderliness to it.
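the bound/ descent steps above can be stated semi-formally (a hedged formalization, notation assumed rather than from the blog: D is a histogram discrepancy measure vs ½-density iterates, T the iterate map, f the hypothesized descent bound function):

```latex
% hedged formalization of the proof sketch; D, T, f are assumed notation
\[
  D(n) \le x \;\Longrightarrow\; \exists\, f(x) :\;
  \forall\, k \ge f(x), \quad
  |T^{k}(n)| < |T^{k-1}(n)|
  \;\;\text{and}\;\; D\!\left(T^{k}(n)\right) \le x .
\]
```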
simple “histogram discrepancy” measurements are already well known: density or entropy. from that it is known from the ufo studies that a histogram discrepancy measure can spike substantially eg a ufo “coming/ emerging out of nowhere” in apparently ½-density iterates, but this new proof idea hinges on whether there is any hint of it in prior, “more sophisticated” histogram discrepancy metrics. in other words (as has long been known/ observed/ emphasized!) the histogram discrepancy metric has to be sensitive to some additional feature(s) other than merely iterate density or entropy…
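the 2 simple measures in ruby for reference (bit density and binary shannon entropy of an iterate; a sketch):

```ruby
# the 2 "simple" histogram discrepancy measures on an iterate bit string:
# density = fraction of 1 bits, entropy = binary shannon entropy of density
def density(n)
  bits = n.to_s(2)
  bits.count("1").to_f / bits.size
end

def entropy(n)
  p = density(n)
  return 0.0 if p == 0 || p == 1
  -(p * Math.log2(p) + (1 - p) * Math.log2(1 - p))
end
```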
an idea that has been on my mind is
backtrack29 from 11/2019. that did backtracking on ufos to find “twin strings” and they were found significantly farther back in/ preceding the iterates, “almost as far” in iterate counts as the size of the ufo. the twin strings function almost something like a foreshadowing pattern. in a sense the shadow “grows in size” until it reaches/ “turns into”/ emerges as the ufo. the foreshadowing (“size dynamics”) also seems to relate to the “descent bound function” f(x). it was observed that the backtracking twin strings/ foreshadowing patterns have ~½ density but a key question wrt proof possibility is whether their histogram discrepancies are different…? seems likely in the affirmative! ❓
(later) speaking of deja vu, its time to come clean, have been holding out some. ofc this can be guessed/ inferred some from filename orderings listed in blogs, ie some identifiable gaps. from filestamp on 2/25 applied this basic idea but didnt report/ record/ write it up then. was thinking of coming back to it. with the histogram discrepancy concept re-arising the experiments themselves have pointed again to it. this is slightly further modified code for minor cosmetic/ display change. the end of 2/2020 found a significant function that is named here
histdiff. its properties over Terras glide density seeds are very striking/ orderly. but what about otherwise?
a basic question is to try it over the seed database, and it looked so promising, tried it out immediately at the time. havent used that database for ages, just based on the direction/ drive of research, but its still very useful. this code runs thru the 8 generation methods, selects those with glides more than 20, which is 6 of them, and then applies/ calculates the
histdiff calculation across entire longest glide by each method, red. the basic calculation by single iterate is very noisy and this does a 200 cumulative adjacent sample calculation instead.
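the smoothing is presumably something like a trailing moving average over the noisy per-iterate metric; a hedged sketch (window size 200 as in the text; not the actual code):

```ruby
# hedged guess at the "200 cumulative adjacent sample" calculation:
# a trailing moving average over a noisy per-iterate metric series
def smooth(xs, w = 200)
  xs.each_index.map do |i|
    lo = [0, i - w + 1].max   # window start, clipped at the series start
    win = xs[lo..i]
    win.sum.to_f / win.size   # average over the trailing window
  end
end
```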
there are some striking trends. it is quite nonnoisy, ie nearly continuous, which reveals some hidden connected order in the histogram distributions. it cant be said to be clearly delineating any particular regions of the trajectory from basic pov of glide/ increase/ decrease, peak, drain, predetermined vs postdetermined etc., but neither can it be said to be devoid of signal wrt those transitions either…! it transitions from positive to negative with seemingly some order. ie overall it seems to be signalling something… but what? while somewhat ambiguous/ not clearcut/ mysterious nevertheless its definitely a prime candidate for a histogram discrepancy function. ❓
on the other hand, after some more poking at this, again feeling/ thinking looks can be deceiving, and feeling some doubts/ danger about various/ misc exercises that even while timeconsuming and novel sometimes verging on near hacking. ie maybe something in the difference/ distinction between experimenting and directed experimenting, & getting beyond “mere” reflexive/ incremental steps/ moves into deep(er)/ bigger picture/ out-of-the-box thinking. looking for shapes in plots is sometimes akin to finding shapes in clouds/ random walks… as is long noticed some signals are very strong in the differentiable region and disappear/ “vaporize” in the undifferentiable region. one might say even more dramatically the undifferentiable region is where signals/ features go to die… 😮 👿
(7/23) this is the same basic code with a quick riff, probably should have started with this, but life is lived fwd and understood bkwd.™ it looks at
histdiff over single iterates. the positive vs negative swinging is quite random and so separated them into green and red regions. results show a signal that was initially unanticipated but makes sense on 2020 hindsight.
there is a clear general left-to-right wedge effect with the larger side on left/ larger iterates, more pronounced for some generation methods than others. after seeing it, its quite immediate to realize, a histogram discrepancy measure has to somehow be scale invariant wrt iterates and
histdiff apparently on initial analysis does not fit that bill;
construct146 above however has many immediate clues on how to build one that is, not yet followed up on, the low hanging fruit of the moment is to just immediately revisit this earlier close
histdiff idea with minimal tweaking. another hangup, never before mentioned, but now glaring to notice (and surprising how long it took): its clear that none of the generation methods actually create very long glides! at least in comparison with the very long drains. the 1st longest is at 141 iterations (note “compressed” measure from database code, not same as uncompressed diagram) whereas drain length is around ~1350 (from diagram).
(7/24) this is a fairly simple and tricky idea at the same time that was written on 7/2, alluded to on 7/18, and finally getting around to it after a flurry of other ideas, just posting it to not lose it, it took quite a bit of time/ attn to figure out/ carefully construct. my line of thinking was that the Terras density seeds have been very worth studying, and what are some other variants that could shed light on the problem? this code builds trajectories using Terras subroutine to match a kind of template. it starts with the steepest increase that can be found that increases by ‘w’ bits to the 1st postdetermined point here 100, ie starting and 1st postdetermined point differ by ‘w’ bits, and every iterate (over the predetermined range) is an increase. then the slope is decreased gradually using the additional parameter ‘c1’ but the ‘w’ difference is maintained. in a little more detail the initial bit width/ predetermined length for each glide is “set/ fixed/ built” to
w0 + c1 where
w0 (calculated but not named in the code) is the smallest possible bit width for “all increases” over predetermined range.
describing how it all works is a bit tricky and a longer story with some algebra/ logarithms but in short its similar to a line-following technique originally appearing in some earlier code (see eg the very 1st
construct on 2/2019 and as previously mentioned
rwalk2 on 12/2019 and overall/ final results below ofc quite similar to the legendary
construct13 rosetta diagram also 2/2019). my general idea is to look more at how predetermined dynamics affect postdetermined dynamics. the result is, predictable from earlier observations, that there really is not much effect at all wrt this generation technique. the postdetermined drains are all apparently roughly the same type of random walk. also notice how the deviation does not really increase much either, so these are very clearly mean-returning… which it would seem would put significant constraints on eg the drain parity sequence such as limiting max runs etc. (which is an old noticed measurable property in many various other contexts…)
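for reference, a hedged reconstruction (not the blog subroutine) of the core Terras trick used in these constructions: the parity vector of the 1st k compressed steps determines the seed mod 2^k, and flipping bit i of the seed flips exactly the parity of iterate i, so a greedy pass builds a seed matching any prescribed (predetermined-range) parity sequence:

```ruby
# hedged reconstruction of a Terras-style seed builder (not the blog code):
# compressed map T(x) = x/2 (even) or (3x+1)/2 (odd); flipping bit i of
# the seed flips exactly the parity of iterate i, so a greedy bit-by-bit
# pass yields the unique residue mod 2^k with the prescribed parities
def iterate_parity(n, i)
  x = n
  i.times { x = x.even? ? x / 2 : (3 * x + 1) / 2 }
  x & 1
end

def terras_seed(parity)
  n = 0
  parity.each_with_index do |p, i|
    n += (1 << i) if iterate_parity(n, i) != p  # flip bit i if parity wrong
  end
  n
end
```

eg the “all increases”/ steepest-slope template over a k-length predetermined range is just the all-1s parity sequence, which this yields as the seed 2^k − 1.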
💡 ❓ full disclosure: this is the idea that eventually came out after pursuing another idea that seemed harder after working on it some. initial idea, wanted to create a series of glides that start and all end at nearly the same w1, w2 bit widths with gradually varying slopes. it is nontrivial to construct those but on other hand shouldnt be a problem again with Terras construction techniques. exercise for (ingenious) reader… came up with some code/ algebra for it myself based on this same subroutine but it seemed to run into nonlinearities and break down, the attempt/ idea is worth writing up but it would take awhile.*
(later) * 😳 (oops, lol!) it seems on further thought this is impossible to have nearly linear trajectories with same
w1, w2, somewhat trivially there is only a single line through the two points wrt bit widths but maybe there is some exception to that. so revised, the idea is to have the same ‘w2’ with varying ‘w1’. another idea, use smoothly varying curves between the fixed points. another basic idea is to have a nearly fixed density parity sequence in predetermined range but with variations in its distribution away from “nearly even/ uniform,” actually current code has been generating those without noticing/ thinking about it/ focusing on it more closely so far… ❓
yet another idea: one could have fixed
w1, w2 and a predetermined range that has a drain inside it starting at varyingly deep/ rightward positions, something to look at. but that doesnt fulfill the idea of ‘w2’ as start of 1st postdetermined point.
(7/26) that offhand remark about the slopes being very stable led me to think about the parity sequence over postdetermined glides. there has been some basic study, but maybe deserves a little further attn. scanning back thru about ½ year turns up
stepwise3 on 1/2020 looking at ‘pmx01’ which tried to maximize max 0/1 runs in the postdetermined glide parity sequence; note that there was 10x scaling of the metric in that graph and it seemed to max out around ~10 unscaled. this then made me think of the max 0/1 runs in the postdetermined iterates and comparing them. more directly
construct120 from 2/2020 studied that. also esp notable in my mind was the formula estimate devised for
bitwise49 on 9/2019 which relates bit run scaling to trajectory dynamics/ evolution/ iterate mixing.
this is a graph of these metrics over the 1-triangles. the iterate max 0/1 runs go up slightly more than the parity sequence max 0/1 runs red/ green which go up very gradually, the latter “explaining” much of the (slope) stability properties of the postdetermined drain— so gradual one might even argue for a plateau at the end, but it seems improbable and inconsistent with general scaling tendencies. there seems to be a bias at the end toward longer 0 runs in the parity sequence, difference is graphed in blue.
notably/ finally, remarkably ‘mx1’ lightblue stays at 20 for nearly ⅔ of the entire graph, one might even say “suspiciously long.” the earlier
construct120 1st graph only went to 1K iterate widths and so missed this at that time. however comparison with the 2nd Terras density graph which scales in a general continuous trend suggests that the long flat end seen here is “merely” associated with the 1-triangle iterates, these behaving somewhat more “blockily” likely due to the trajectory merging dynamics, and an actual limit seems highly unlikely/ implausible given all other findings, but on other hand this is not really clearly something to dismiss/ not pursue further. overall all this prior code is a fairly simple calculation but takes quite a while to run as trajectories get very long, around 1hr total over all 2K trajectories.
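the underlying per-iterate measurement is simple; a ruby sketch of the max 0/1 runs over an iterate bit string (mx0/ mx1 style metrics; not the original code):

```ruby
# max 0-run and 1-run lengths in an iterate's binary representation
# (a sketch of the mx0/ mx1 style metrics, not the original code)
def max_runs(n)
  s = n.to_s(2)
  [s.scan(/0+/).map(&:size).max || 0,   # longest run of 0 bits
   s.scan(/1+/).map(&:size).max || 0]   # longest run of 1 bits
end
```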
again these results are pushing me in the direction of needing to get a better understanding of statistics of “control” distributions namely ½ density iterates and how they compare to these esp to figure if anything is “out of the ordinary.”
(7/27) it seems timely to now remark and expand on this basic property. from statistics there is a concept of a uniform distribution. the postdetermined glides, aka undifferentiated region, ½ density iterates are apparently a kind of uniform distribution. many experiments can now be seen to be poking at different aspects of this distribution without really grasping its “complete nature” much like scratching the surface.™ its not entirely obvious so far, but starting to sink in, that many of the “dimensions” measured are interconnected wrt the distribution, ie not so much properties of Collatz as previously thought/ imagined— even, now in 2020 hindsight, “projected,” but instead associated with/ properties of the uniform distribution.
for example as in last experiment 0/1 runs (lengths) in the iterates seem to correspond to spans (lengths) in the parity sequence; this has somewhat been noted/ remarked on previously in some contexts. another way to look at this is that some of the so-called features turn out to have “exactly the same” trends as the uniform distribution, ie measuring them on Collatz postdetermined glides or (“control”) ½ density iterates turn up the same trends. some work so far has been done to find these correspondences but also a lot of work is more reductionistic/ compartmentalized/ “bits and pieces” and hasnt (inter)connected it all together yet. aka connecting the dots.™ ❓
in some defense, the different general/ scaling properties/ features of ½ density numbers are not entirely so simple.
(7/28) “on a roll.”™ looking back at nearly the end, this month is one of the longer ones wrt wordcount now over 10K. the chromebook inspiration fueled a lot of ideas, invigorated energy. on the other hand, more words than code. reminds me of wigners quote on Bohms book, “good, but too much schmoozing” (lol!) which prob was about the ratio of words to eqns which is high for Bohm in general. but Bohm, in strong contrast/ even opposition to “shut up and calculate,” likes to think. in this milieu, its code instead of eqns.
💡 had an idea about using the optimization algorithm approach to try to understand how the “½ uniform density” ideas “naturally” relate to trajectory lengths. its maybe a new fundamental twist, relating to some old ideas. there is some basic study of this “fwd” in Terras seed density ideas. what about the “bkwd” direction of collecting a range of trajectory lengths found from an “unbiased” search and then looking at seed properties? ie another “reverse engineering” approach. the new analysis would seem to mesh better with the hybrid logic than the
stepwise optimization thats been used heavily now ~½ year. that led me to look back when hybrid logic was last used.
it is interesting to look at evolution of ideas here. the edit distance idea 1st showed up on 12/2019 as a mere, nearly offhand musing, also same time hybrid logic was last used in
hybrid26b. edit distance was then 1st used as an entropy traversal idea on 1/2020 in
traverse15b. immediately then used more directly as optimization in
traverse16. later it showed up fruitfully/ heavily in optimization searches. as designed/ intended/ hoped its turned out to be a great intermediate balance between the very lightweight
bitwise and the heavier
hybrid algorithm and has turned into something of a new workhorse, even default optimization approach.
the basic new idea is to try to get a traversal that achieves a uniform distribution over a particular parameter; here the ideas about traversal vs optimization tend to get blurred. this is very similar to earlier traversal ideas that look at gaps between values and try to order/ adjust/ target them somehow. it is tricky trying to come up with ideas about uniform distributions with variables that have no max values, ie are unbounded. but then another parameter can be fixed. in this case, consider a fixed bit width. there will be a bounded corresponding range of trajectory sizes. a key question is how this range influences the initial seed, ie wrt known features or otherwise.
there has been some study of this but typically, previously almost entirely in the sense of optimizing a variable instead of attempting to “evenly distribute it;” eg
hybrid24, hybrid26b from 12/2019. these will find distributions on the way of optimizations, ie progressing over the run from unoptimized to optimized, but in contrast dont directly work to “even out” the distribution during the search. but notably the exact same experiments embody the idea of “reverse engineering” features; they optimize for long trajectories and then find strong identifiable feature(s) in the starting iterates (namely starting 1-triangles). actually, this backward engineering technique or approach was exactly how iterate density was discovered years ago…
in contrast, some earlier traversal ideas did have a notion of “even distribution.” however, they were expensive to compute, scanning entire traversal frontier point sets per advance, leading to linear time per iteration. what is needed is an efficient incremental even-distribution calculator, and while conceptually not so complex, after some initial unfinished work on this last nite am realizing its far from trivial. also, fixing a bit width means the
stepwise logic is not so robust, because it then reduces to a single bit mutation operator. so then, back to the…
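the "single bit mutation operator" mentioned above can be sketched in a few lines; this is a hypothetical illustration of why fixing the bit width leaves so little room for the stepwise logic, not the blog's code.

```ruby
# single-bit mutation on a fixed-width binary string: with bit width pinned,
# flipping one randomly chosen bit is essentially the only move left
def mutate(s)
  i = rand(s.length)
  t = s.dup
  t[i] = (t[i] == '0' ? '1' : '0')
  t
end
```

note the operator preserves length exactly and changes exactly one position, so it cannot adjust the iterate's magnitude or width, only its bit pattern.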
(later) here is the quick-cut code based on optimizing the gaps in trajectory lengths, ie minimizing the max gap. there are some incremental gap routines,
addgap, rmgap, gap, exercised by
testgap for testing. this code is optimized using the binary search conveniently built into
Ruby, a very useful algorithm that has rarely if ever been used previously in this blog. the gap logic is not simple to describe, but basically it uses a gap array structure to retain consistent intermediate results.
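a hedged sketch of incremental gap bookkeeping in the spirit of the addgap/ gap/ testgap routines named above (the actual blog code is not reproduced here; rmgap, ie removal, would be the symmetric inverse of addgap and is omitted, and the max-gap lookup is kept naive for clarity rather than fully incremental):

```ruby
# maintain a sorted array of visited values plus a multiset of adjacent
# gaps; inserting a point splits one gap into two, touching O(1) entries
class Gaps
  def initialize
    @a = []          # sorted visited values
    @g = Hash.new(0) # multiset of adjacent gaps (gap size => count)
  end

  def addgap(x)
    # Ruby's built-in binary search: index of first element >= x
    i = @a.bsearch_index { |v| v >= x } || @a.size
    l = i > 0 ? @a[i - 1] : nil
    r = i < @a.size ? @a[i] : nil
    @g[r - l] -= 1 if l && r  # old gap is split in two by x
    @g[x - l] += 1 if l
    @g[r - x] += 1 if r
    @a.insert(i, x)
  end

  def gap  # current max gap; a naive scan over the gap multiset
    @g.select { |_, c| c > 0 }.keys.max || 0
  end

  def testgap  # brute-force consistency check against a full rescan
    gap == (@a.each_cons(2).map { |u, v| v - u }.max || 0)
  end
end
```

the hash of intermediate gap counts plays the role of the "gap array structure retaining consistent intermediate results," and testgap verifies it against the expensive full scan the earlier traversal ideas relied on.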
results are interesting. the iterates have an identifiable feature: a kind of density gradation, ie gradient, from lsb to msb. this in a sense vindicates some earlier ideas/ features that compared densities in the 2 halves of the binary iterate. 1st graph is the generated iterates sorted by trajectory length, green. 2nd graph is binary iterates from left to right, 250 samples, lsb bottom msb top, density red and entropy green. both start out near zero, then density rises/ ranges to over ½ while entropy ranges over the lower ½. then theres the immediate idea of graphing the random walk signified by the bits in the strings, lsb to msb, 3rd graph, hotter colors longer trajectories. there is a sort of curving/ concavity toward the center. wondering, is all this detecting Terras density glide trends from a different angle? ❓
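for reference, the measurements behind these graphs can be sketched as follows. the blog's exact "entropy" definition is not reproduced here; this stand-in uses the Shannon entropy of the 1-bit density, which is an assumption, and the msb-first string convention is likewise assumed.

```ruby
# density: fraction of 1 bits in a binary string
def density(s)
  s.count('1').to_f / s.length
end

# entropy stand-in: Shannon entropy of the 1-bit density (assumption;
# the blog may define entropy differently)
def entropy(s)
  p = density(s)
  return 0.0 if p == 0.0 || p == 1.0
  -(p * Math.log2(p) + (1 - p) * Math.log2(1 - p))
end

# lsb-to-msb random walk over the bits: 1 steps up, 0 steps down
# (string assumed msb-first, hence the reverse)
def walk(s)
  y = 0
  s.reverse.chars.map { |b| y += (b == '1' ? 1 : -1) }
end
```

density peaks at ½ for a balanced string, and the entropy stand-in peaks at 1 there, which matches the qualitative reading of the 2nd graph (density rising toward ½, entropy confined to the lower range).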
there are some notable optimization dynamics found in another graph: the algorithm tends to move/ shift to different regions to optimize, ending more in the midrange than at the ends/ extremes (lower/ higher). another key dynamic of this approach is that the global gap is mostly monotonically decreasing after some initial nonmonotonic variation, and the optimization “discovers”/ visits a large gap early on. there is maybe something to reconsider/ adjust in the code, based on the fact that the gaps are remembered/ calculated over all visited points while the optimizer retains only the top/ latest points according to bin size
bs=1000. this has to do with the “gap” not exactly being a property of a point but of the current visited and/ or frontier point sets.
a natural question is how this logic differs in results from optimizing by a particular measure, eg trajectory length, and using intermediate optimization iterates as the distribution range, a strategy used many times previously. presumably the latter approach will focus on/ “be more biased toward” more optimized (extreme) iterates.
but, on the dark side, after substantial effort there is a disappointing, crushing result here. as it has seemed many times in the past, here included, the entire concept of “features” is an anthropomorphic bias. there is some identifiable trend, but theres also a distinct lack of continuity, ie a lot of discontinuity, and a lot of noise. in short there are unmistakable emergent features but they remain out of grasp wrt powerful exploitability. my only remaining idea here is to look at histogram statistics, which could potentially be less noisy. overall it would seem something of the crux or spine of the problem is being missed… but then the problem often seems, to borrow a word reserved for particularly intense adversaries, spineless…
(7/29) was questioning that gap logic somewhat. as alluded to, the optimizer wants properties associated with points, not so much global optimization properties such as “max gap.” it seemed to work ok but still had doubts about another method maybe giving better performance; at least wanted to try/ code the idea. this is a slight adjustment of the gap logic that associates gaps with the optimization points. a point has an adjacent left and right gap; this associates the max of the two with that point, with incremental logic to optimize the (re)calculation. then the optimizer “targets” max gaps, ie combines/ focuses on points with larger gaps, with the intent of breaking them into smaller ones. suspect the performance wrt a more even distribution is somewhat improved, eg maybe less initial nonmonotonic descent in the gap trend, but overall results are similar. this code also adds/ refactors the analysis logic.
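the (7/29) variant can be sketched as follows: each point carries the max of its left/ right adjacent gaps, and inserting a new point only affects the two neighbors. this is a hedged illustration of the idea, not the blog's refactored code, and the target lookup is kept naive rather than incremental for brevity.

```ruby
# per-point gap logic: a point's associated gap is the max of its two
# adjacent gaps, making "max gap" a property of a point rather than a
# global property of the visited set
class PointGaps
  def initialize
    @a = []  # sorted points
  end

  def add(x)
    i = @a.bsearch_index { |v| v >= x } || @a.size
    @a.insert(i, x)
    i  # insertion only changes gaps for positions i-1, i, i+1
  end

  # per-point gap: max of adjacent gaps (endpoints have one neighbor)
  def ptgap(i)
    l = i > 0 ? @a[i] - @a[i - 1] : 0
    r = i < @a.size - 1 ? @a[i + 1] - @a[i] : 0
    [l, r].max
  end

  # the optimizer "targets" the point carrying the largest gap,
  # intending to break that gap into smaller ones
  def target
    (0...@a.size).max_by { |i| ptgap(i) }
  end
end
```

targeting the point with the largest associated gap and mutating near it is what breaks large gaps into smaller ones, which is the mechanism behind the smoother (less nonmonotonic) descent in the gap trend suspected above.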