➡⭐⭐⭐ hi all. it is with great pleasure that I introduce the next guest in this chat series and welcome a larger audience. Daniel Sank, PhD, working in Google's Martinis quantum computing lab, is our 3rd guest speaker in a series hosted by the Physics Stack Exchange site/chat room. thanks to moderator David Z, PhD, for suggesting the idea, coordinating, and graciously agreeing to host it within his already well-attended biweekly chat sessions.
we have now hosted two prior events in the series with great success, with Samuel Lereah, Master's in physics, and yuggib, PhD in math working in mathematical physics. thanks so much to these groundbreaking/enthusiastic guest speakers for their particularly inspired/dedicated participation.
this was quite a long time in the making but finally everything got lined up. this is an experiment to see whether any of the basic metrics explored so far have linear predictive capability on trajectory length. prior experiments "from another direction" suggest there is some weak link. can it be quantified? also, if there is a nonrandom prediction, how can that be turned into some kind of proof? that's a big gap/leap of course.
in a blog post from long ago it was conjectured that curve-fitting approaches, including general machine learning approaches, would fail to generalize, in the sense of having unboundedly increasing error for larger starting seeds. any exception to this might hint at some kind of loop invariant and hence some kind of proof structure.
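to make the setup concrete, here is a minimal ruby sketch of the kind of basic seed metrics that a linear predictor of trajectory length might be fit against; the helper names `trajectory` and `seed_metrics` are mine for illustration, not the blog's actual code.

```ruby
# a minimal sketch (helper names are illustrative, not the blog's code):
# compute a 3n+1 trajectory and a few basic seed metrics that a linear
# predictor of trajectory length might be fit against

def trajectory(n)
  t = [n]
  until n == 1
    n = n.even? ? n / 2 : 3 * n + 1
    t << n
  end
  t
end

def seed_metrics(n)
  bits = n.to_s(2)
  { bit_width: bits.length,                       # seed size in binary
    density: bits.count('1').to_f / bits.length,  # density of 1-bits
    length: trajectory(n).length }                # the quantity to predict
end

seed_metrics(27)  # famously long trajectory for a small seed
```

the question then is whether metrics like `bit_width` or `density` carry any linear signal about `length`, or whether the residuals blow up with larger seeds as conjectured.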
hi all, hoo, @#%& this sure took a lot of work, nearly a day. it was murphy's law all over again, so to speak.
wanted to do some multiple regression in ruby. simple, right?
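for the record, a bare-bones multiple linear regression needs nothing beyond ruby's stdlib `Matrix`, via the normal equations β = (XᵀX)⁻¹Xᵀy. this is just a sketch; the `regress` helper is an illustrative name and not necessarily the approach this post ended up with.

```ruby
require 'matrix'

# bare-bones multiple linear regression via the normal equations,
# beta = (X^T X)^-1 X^T y; `regress` is an illustrative helper,
# not necessarily the approach this post ended up using
def regress(xs, ys)
  # prepend 1.0 to each row for the intercept term; force floats so
  # Matrix#inverse does floating-point rather than exact arithmetic
  x = Matrix[*xs.map { |row| [1.0] + row.map(&:to_f) }]
  y = Matrix.column_vector(ys.map(&:to_f))
  ((x.transpose * x).inverse * x.transpose * y).column(0).to_a
end

# sanity check on an exactly linear target y = 2 + 3*x1 - x2
xs = [[1, 0], [0, 1], [1, 1], [2, 3]]
ys = xs.map { |a, b| 2 + 3 * a - b }
regress(xs, ys)  # ≈ [2.0, 3.0, -1.0]
```

the normal-equations form is numerically fragile for ill-conditioned XᵀX, which is one reason a real experiment might reach for a gem instead, but it is fine for a quick check.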
hi all, this eponymously named blog was started a few months after Turing's centennial birthday in 2012. so far there hasn't been much writing on him here except around 2014 when the Imitation Game movie came out. finally time to fill that (unfillable) gap some. Turing's birthday passed in june and I was hoping to time this post with it (it's been on my "to do" pile for months now).
Turing has been an inspiration to me since I was a teenager, mainly through the writing of Hofstadter and his remarkable books Gödel, Escher, Bach and Metamagical Themas (books that are near-legendary for inspiring nearly an entire generation of computer scientists…). the deep mystery of Turing machines transfixed me, and I later realized it was tied to the emerging research field of "complexity theory". how could such a simple yet ingenious object capture such incredible complexity, almost subsuming mathematics itself? it seems a question that is still being investigated and answered, and maybe lies at the heart of continued advances.
even at a young age it seemed to me the concept of undecidability had some kind of unfinished or provisional aspect to it, that it wasn't the complete story somehow. apparently mathematicians attack near-undecidable problems all the time and come up with proofs anyway. isn't this some kind of deep paradox? over the years/decades I've delved into this deep mystery myself and gained some degree of awareness/insight, but in many ways it still feels mostly unresolved. as Nietzsche said, "if you gaze long into an abyss, the abyss also gazes into you."
this idea occurred to me. as remarked previously, there seems to be a pattern in a lot of trajectories: basically just an upslope followed by a downslope, e.g. within the glide region. how strong a pattern is this? or is it just highly related to all the glide-generation algorithms devised so far? what would a trajectory that deviates from it look like?
this code splits the trajectory (glide only) into a left upslope side and a right downslope side, and then measures the mx value for each separately, trying to maximize it. the code has 3 modes: mode 1 maximizes the left, mode 2 maximizes the right, and mode 3 maximizes over both, though I noticed the last one tends to settle on optimizing the left after many iterations. the purple line is the mx value and the green line is the length of the "side" (upslope or downslope). another funky aspect of this code is that it occasionally "paints itself into a corner" and fails to find increasing trends after long runs.
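the split itself can be sketched roughly as follows. this is a guess at the logic from the description above, not the actual code: `glide` and `split_sides` are my names, the glide is taken to end when an iterate first drops below the seed, and "mx" is read here as the plain max value of each side.

```ruby
# rough sketch of the split described above; `glide` and `split_sides`
# are illustrative names, and "mx" is read as the plain max of a side
def glide(n)            # assumes n > 1
  t = [n]
  x = n
  loop do
    x = x.even? ? x / 2 : 3 * x + 1
    break if x < n      # glide ends when the iterate drops below the seed
    t << x
  end
  t
end

# split a trajectory at its peak into left upslope and right downslope,
# reporting the mx value and length of each side
def split_sides(traj)
  peak = traj.index(traj.max)
  left, right = traj[0..peak], traj[peak..-1]
  { left:  { mx: left.max,  len: left.length },
    right: { mx: right.max, len: right.length } }
end

split_sides(glide(3))   # glide(3) == [3, 10, 5, 16, 8, 4]
```

note the peak element is shared by both sides, so the two mx values coincide there; the interesting quantities are the side lengths and how large mx can be driven on each side separately.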