the last installment found a way to construct large “ufos” in the postdetermined region and in crisp clinical terms refuted some long-running hypotheses, or in more informal terms “threw a wrench in the works” of big overall proof strategies painstakingly built up over years, showing both limitations and strengths of the overall empirical/ datamining approach. however, maybe there is a silver lining; there was a larger hypothesis that escapes largely unscathed, ie in more dramatic terms can yet be rescued amidst some of the substantial wreckage/ smoking ruins (aka “easy come, easy go”™). a recent experiment found that measuring “slope correlation” in the postdetermined region gives high adherence to linearity (ie close to 1) even as iterates get larger. this was discovered with bitwise optimization, and that finding was never converted to the stronger/ more rigorous hybrid optimization; it was in the back of my mind to do that.
looked over the hybrid code and had the urge to refactor it. the bit vector initialization was serviceable but really funky. also had an idea to extend vectors a few bit positions at a time, not just at the msbs, and came up with the idea of a corresponding/ symmetric shortening operator.
the basic experiment here was trying to minimize slope correlation ‘sc’ for larger iterates. however the naive code simply found a significantly low ‘sc’ for some small iterates and then never got past those smaller iterates; the search got stuck, so to speak. while this finding is consistent with the hypothesis, my real question was what the trend for ‘sc’ is for higher iterates. so then bit the bullet and did multiparameter optimization for this hybrid code, something that is novel wrt prior code/ experiments. the multiparameter optimization is based on gaussian normalization recalculated every 100 samples. the optimization pushes up ‘nw’ the iterate bit width and pushes down ‘sc’ the slope correlation. the code reaches very high iterate sizes ~4.5K and yet cant push ‘sc’ below ~0.9684. the multi-optimization means that there are smaller iterates with smaller slope correlations, but the algorithm moves past those as it is pushed to search higher iterates. so this general observation/ trend still seems “robust”.
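the gaussian-normalization idea can be sketched roughly as follows; this is a minimal illustration, not the actual hybrid code, assuming each metric is z-scored against the running sample history, with the stats recalculated every 100 samples, and the two z-scores combined into a single score that rewards high ‘nw’ and low ‘sc’:

```python
import statistics

class MultiObjective:
    """gaussian-normalized multiparameter score: push 'nw' up, 'sc' down.
    normalization stats are recalculated every `period` samples."""
    def __init__(self, period=100):
        self.period = period
        self.nw_hist, self.sc_hist = [], []
        self.stats = None

    def add(self, nw, sc):
        # record a new sample; refresh normalization stats periodically
        self.nw_hist.append(nw)
        self.sc_hist.append(sc)
        if self.stats is None or len(self.nw_hist) % self.period == 0:
            self.stats = (statistics.mean(self.nw_hist),
                          statistics.pstdev(self.nw_hist) or 1.0,
                          statistics.mean(self.sc_hist),
                          statistics.pstdev(self.sc_hist) or 1.0)

    def score(self, nw, sc):
        # z-score each objective against the history; higher combined is better
        m1, s1, m2, s2 = self.stats
        return (nw - m1) / s1 - (sc - m2) / s2
```

candidates are then ranked by `score`, so a greedy/ genetic search keeps wide iterates with low slope correlation without either metric’s raw scale dominating the other.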
a few more thoughts on this wrt last month's findings. the recent ufo experiments seem to show that an upslope of count ‘n’, where ‘n’ is the current iterate bit width, is possible within the postdetermined downslope. its not clear how many such “counterslopes” are possible (the contrived algorithm only creates 1) although it seems like they might be at least limited to bit-width sizes. but the ‘sc’ metric seems to have a built-in scale invariance aspect such that maybe the counterslope(s) cant disrupt near-linear ‘sc’ measurements much; at least that is what is found with this code, ie its search is aimed at finding nearly the same or a similar “distortion.” on the other hand maybe counterslopes are very hard to find with a greedy search, even with a powerful genetic algorithm.
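for reference, a minimal sketch of a slope-correlation style measurement, assuming ‘sc’ is essentially a Pearson correlation of iteration index vs trajectory value (the real code may differ in details); the standard-deviation normalization on both axes is exactly what gives the built-in scale invariance just described:

```python
import math

def slope_correlation(ys):
    """Pearson correlation between index and value; magnitude near 1 means
    the sequence is nearly linear. dividing by the spread on both axes
    makes the measure invariant to rescaling either axis."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    sxy = sum((i - mx) * (y - my) for i, y in enumerate(ys))
    sxx = sum((i - mx) ** 2 for i in range(n))
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

eg a strictly linear downslope measures -1, and scaling all values by a positive constant leaves the measurement unchanged, which is the scale invariance point.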
at this point am taking a step back from many months of hard-core crunching and wringing a lot out of existing algorithms/ patterns, and trying to consolidate/ synthesize a higher pov. part of the struggle (as always!) is overcoming some built-up pet theories bordering on biases and going wherever the data leads.
😡 😳 🙄 👿 ❗ other misc annoying/ frustrating/ aggravating news:
- Big Corp disabled state-of-the-art bookmark sharing in chrome via sitewide administration policy, and of course security restrictions prevent altering the registry to override it. @#$& argh, dont see any way around it. partys over! this seriously interferes with/ undermines my routine blogging modus operandi around here. yeah, that whole bookmark sharing thing is so subversive, better turn it off as a security risk. lol! have huge piles of links to blog on, but esp with nearly zilch audience response on the blog its feeling (even!) more like a slog sometimes… there are many aspects that fuel my enthusiasm but audience reaction around here is like that old Eagles song, running on fumes/ empty… rhetorically questioning, is there any intelligent life out there?™ 😮 😦 🙄 😳 😥
- chat room micromanager/ busybody physics Mod A gave me a 7d chat suspension for posting a link to an arxiv paper to the physics chat room and then lol'ing at his rejecting it. what the f@#%& is it with these heavyhanded physics/ SE mods? their election policy specifically says LIGHT MODERATION:
open to some light but firm moderation to keep the community on track and resolve (hopefully) uncommon disputes and exceptions.[x]
lol! guess the longer theyre elected, the less they care about such restraint. talked to a so-called SE community mgr who didnt give a @#%&. so ½ decade chatting there amounts to )(! sometimes it all adds up to zilch! petty dictators, crushing flies with sledgehammers! updated the chat pg with a caveat emptor during my copious time off lol. which reminds me, see also recent related headline news
- presumably due to a newly introduced bug with an update, either that or some new html code pattern, my chrome browser on iPad is crashing intermittently within a few dozen seconds of loading the dailymail site. argh @#%& maybe will have to get a new computer just to be able to handle this image/ video heavy site. reminds me of the early days of netscape. youd think browser reliability might be a little better 2 decades on, esp with google on the job. sometimes its all a big farce. lol!
⭐ ⭐ ⭐
❗ 💡 😀 (later) this is a quick fusion of backtrack3 and the last code's slope correlation calculation, related to immediately quantifying some of the prior musing. the idea here is to create large ufos that span almost the entire iterate width and backtrack on them, ie its an attempt to directly thwart the linear slope correlation finding/ apparent property. it was found that some random msbs are required even though the remaining iterate is all 1-bits. here the top 10 msbs are random and the remainder of the iterate is 1-bits. it constructs iterates of size 100 up to 500 in increasing 10-bit increments (where only the 10 msbs are random, remainder 1-bits). it backtracks on the large ufo exactly the same # of iterations as the initial bit width, ie creates a “synthetic” predetermined region of exactly the right size. then it measures ‘sc’ on the overall trajectory (calculated over the postdetermined region). in other words these are constructed/ contrived so that the “anomalous” intermediate climb starts at exactly the beginning of the postdetermined region (“anomalous” wrt the earlier rosetta diagram finding, which always found similar downslopes in the postdetermined region from Terras-method-constructed samples), thereby forcing the slope correlation calculation to span the “divergent feature.”
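the iterate construction can be sketched as below; a rough forward-only illustration with hypothetical helper names (the actual code also backtracks to build the synthetic predetermined region, omitted here), where the top 10 msbs are random and the rest of the iterate is 1-bits:

```python
import random

def make_ufo(width, nrand=10):
    """iterate of exactly `width` bits: top `nrand` msbs random (top bit
    forced to 1 so the width is exact), remainder all 1-bits."""
    top = (1 << (nrand - 1)) | random.getrandbits(nrand - 1)
    return (top << (width - nrand)) | ((1 << (width - nrand)) - 1)

def trajectory_widths(n):
    """bit widths along the standard collatz trajectory down to 1."""
    ws = []
    while n > 1:
        ws.append(n.bit_length())
        n = 3 * n + 1 if n % 2 else n // 2
    return ws

# iterate sizes 100..500 in increasing 10-bit increments, per the experiment
sizes = range(100, 501, 10)
```

‘sc’ would then be measured over the postdetermined slice of `trajectory_widths(make_ufo(w))` for each `w`.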
wasnt sure what to expect, however maybe didnt expect this, but its nevertheless good news: the 1st graph shows the trajectories and the 2nd the ‘sc’ measurements. ‘sc’ hovers very close to ~0.95, as found already with the hybrid code, even as bit widths go up by 5x, ie nearly a constant! the two diagrams combined show the striking scale invariance of this scheme/ analysis. the ‘sc’ calculation with its theoretical “inherent internal scaling” property (ie by standard deviation on x/ y axes) is maybe a natural way to effectively capture the fractal aspect of the problem. this seems like further strong evidence for a very significant invariance property at the heart of a proof. even more promisingly, it seems like these types of artificial ufos maybe are, or even ought to be, the largest “deviations” possible in the postdetermined region.
the question always at the edge of consciousness: how could this apparent property be proved in general? the concept of siblings comes to mind here, as recently in the backtrack3 formulation. maybe there is some other concept related to siblings. this construction shows that while these are not sibling trajectories in the prior strictly defined sense (ie differing msbs only), they are nevertheless highly “related.” while the construction was created “in the opposite direction”, ie smaller to larger, the diagram hints that maybe for every “larger” trajectory there is some way to find a “closely related” (ie highly similar) smaller trajectory, but by some method more sophisticated than just basic modification of the initial iterate as with the sibling concept.
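as a concrete reminder of why strict siblings (differing msbs only) are so well-behaved: with the compressed Terras-style map, the first k parities of a trajectory are determined by the iterate mod 2^k, so two iterates that differ only above bit k start out with identical parity vectors. a minimal sketch:

```python
def parity_vector(n, k):
    """first k parities under the compressed map T(n) = n/2 (even) or
    (3n+1)/2 (odd); by Terras' theorem this vector depends only on
    n mod 2^k, so msb-only siblings share it."""
    ps = []
    for _ in range(k):
        ps.append(n & 1)
        n = (3 * n + 1) // 2 if n & 1 else n // 2
    return ps
```

the construction in this post produces trajectories that are “related” in some looser sense than these strict siblings, which is exactly the open question.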
😳 lol! dashed that off a little too quickly. its close but off. ideas/ words vs code/ logic dont align exactly. did you see/ catch/ spot the mistake? its subtle… wondering how the fix will affect results… ❓
the glitch is fixed at line 64 in the following code, which handles the stopping criteria; line 172 determining the range of the postdetermined region also had to be adjusted/ fixed. the prior code naively used the width of the ufo iterate as the count of pre-iterations. but the pre-iteration count must be based on the “current” width of the initial pre-iterate exceeding the iteration count to the ufo. however this does not change results a whole lot, ie the inexact code can be thought of as a near approximation. a tiny visual difference is seen in the 1st graph, where the peak is past the 1k tic mark whereas in the prior graph its slightly before it. ‘sc’ seems to hover in a nearly identical range in the 2nd graph. also hooked up the binary grid analyzer to look at the initial bits incl density/ entropy and didnt find anything anomalous. yes, it all looks nearly identical to the prior run, and almost anyone else but me would have just revised the blog. but to err is human™… + just think of this as, analogous to reality tv, “reality blogging” 😮 🙄
(4/24) re google crashing. the pg now crashes sometimes within seconds. they recommend clearing the cache or loading the page incognito, lol, ofc yeah all that makes perfect sense. the latter approach maybe is working for me… 🙄
❗ 😮 also update: sticking my neck out in a big/ bold/ dramatic way, both ~½ calculated and fly-by-seat-of-pants throw-caution-to-the-wind, making the case for “kinder, gentler, lighter” modding on physics in the chat room and in a “rogue” comment (below) not yet censored/ deleted by the petty dictator(s)/ illustrious mgt in charge of the site… so far not decapitated. as my grandfather liked to say, cant get hung for trying.™ honestly sometimes feel they are more interested in undercutting/ undermining consensus-building than enabling it. quite impressive how seamlessly they can vaporize any signs of dissent, but thats all built into the SE design, and can sometimes be very heavily applied. reminds me of those hollywood movie set facades. really living on the wild side now, my heart is racing, feeling a little paranoid )(, the Fuzz is after me… not much reaction from anyone so far. what a difference from a mere 1yr ago when the composition/ dynamics of the room were utterly different in the extraordinary 0celo7 era. feedback is that improving my punctuation and grammar might get different results. am sure they also told that to ee cummings. lol/ rofl! 🙄 😛 😈
a lot of suspensions seem to have been issued wrt the physics hbar chat room over the years, over various issues, by misc mods, some from physics and some from outside. any comment(s)/ reaction(s)/ idea(s) on that? current mods tend to reject discussions of the suspension policy, insisting that SE policy forbids mods from discussing suspensions (there is much in the transcript relating to that), and in some cases this is enforced very strongly, with suspensions for attempts to discuss the suspension policy. would you carry the same policy? how much should mods manage the room?