hi all, have been pondering bell tests again. it would seem that the "loophole free" tests done in the last decade or so are airtight, eg . there seems to be just no room left for (einsteinian) locality. or is there? have been mulling over the deep mystery of QM entanglement for ~2½ decades myself now, and have to revise my thinking in light of new experimental insight.
for me the detection loophole seemed like it could remain open indefinitely. so many photons are lost in experiments, eg detection at one arm and not the other, and there are many cases of mismatches in detector channels, ie detecting H/V (horizontal/ vertical) polarizations at the same time. despite questioning them years ago on this blog when some were announced, am now coming to grips with the conclusion that the loophole free tests are probably correct. score one for very careful science/ experimental verification.
alas, it's been my longtime hope, as expressed on this blog, that entanglement experiments/ technology/ devices would become more accessible to the layman. other than open quantum computing, eg as offered by IBM, this has just not really come about much. it's possible it could happen some day, but almost nobody is working on it directly; the apparatuses are just too complex. there has been motion on undergraduate experiments over the years, but overall, even though there are lots of bell experiments in the literature, it's still very "remote/ exotic" to work on them, in the sense that they are nearly all done in expensive/ highly specialized physics laboratories by small armies of PhDs.
 is an example of the challenge. it took an undergraduate 10 weeks and some significant equipment/ expenses just to find that replicating the SPDC two-cone (signal/ idler) generation alone was too hard, very candidly/ honestly reporting this "nonresult" in the abstract. p14: "In the ten weeks that we had to perform this experiment we have made considerable progress although we have not yet been able to create an image of the two photon rings." abstract: "we discuss possible problems and our plans for future attempts." and my sense is that this is a competent/ even talented undergraduate + mentor. ouch!
to be more detailed/ candid in disclosure myself, if it hasn't already been indicated on this blog (thinking it has, more than once, in various places): it has been my longtime dream to construct and work on an experiment myself and to communicate/ collaborate with experts in the field on it. such an effort would "really make life worth living" so to speak. but it looks like it still takes a minimum of around $10K, an optical table setup, and very high expertise/ time requirements. after a few decades the words "remote/ exotic" still ring in my mind.
⭐ ⭐ ⭐
to the best of my knowledge the following is not proposed anywhere, and yet it may be an as-yet-undiscovered "explanation" for entanglement. it seems that many researchers may be missing something right in front of them. it has maybe been briefly mentioned by some authorities, but am not aware of anyone who has really zoomed in on it.
found this on a whim by looking for bell tests using pulsed laser beams. maybe looked for this long ago and didn't find anything. it seems this was not done for many decades, even as other early experiments were developed, though in more recent times there has been experimentation with pulsed SPDC and bell tests. there do not seem to be any surveys yet.
in  there is a remarkable quote:
In the cw pumped SPDC, entangled photon pairs occurs randomly since the process is “spontaneous”. Due to the long coherence time of the cw pump, whereabouts of the photon pair is completely uncertain within the coherence length of the pump laser beam. This huge time uncertainty makes it difficult to apply for applications such as generation of multi-photon entangled state, quantum teleportation, etc, as interactions between entangled photon pairs generated from different sources are required. This difficulty was thought to be solved by using a femtosecond pulse laser as a pump. Unfortunately, femtosecond pulse pumped type-II SPDC shows poor quantum interference visibility due to the inherent distinguishability as the pump pulse acts as a clock providing distinguishing information about the photon pair born time inside the crystal [6,7]. Traditionally, the following methods were used to restore the quantum interference visibility, hence Bell states, in femtosecond pulse pumped type-II SPDC: (i) use a thin nonlinear crystal (≈ 100µm)  or (ii) use narrow-band spectral filters in front of detectors [6,7]. Both methods, however, reduce the available flux of the entangled photon pair significantly .
now here's the big issue that everyone seems to miss. there is some mention of "visibility" in bell tests. it is rarely spelled out, but it is basically a measure of how much the bell inequalities can be violated by the apparatus at hand. but notice, and this is the very key point that researchers are glossing over, that it is different for every apparatus. this is a very big deal when one stops to think about it carefully, yet researchers seem to handwave it away/ sweep it under the rug. the exact bell-equation quantity measured differs from experiment to experiment, with sometimes only a brief comment that a nonideal value was measured due to experimental imperfections/ imprecisions.
what contributes to this difference? why exactly is it different? you could probably look at dozens of papers on bell's theorem and find almost no discussion of this whatsoever. the goal/ standard was established long ago by the early(est) workers in the field, maybe even CHSH themselves: count the number of standard deviations in the test, try to maximize it, and offer no further comment on the exact quantity. some of this is due to a disconnect: Bell proposed ideal experiments, and experimenters such as CHSH converted the theory to real apparatuses, but in that transition something seems to have been missed/ lost.
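to make the visibility point concrete: the standard scaling (a fact, not my conjecture) is that a singlet-like state with overall fringe visibility V yields S = 2√2·V at the optimal CHSH angles, so an apparatus only violates the inequality (S > 2) when V > 1/√2 ≈ 0.707. a minimal sketch:

```python
import math

def chsh_S(visibility):
    # correlation function for a singlet-like polarization state whose fringes
    # are scaled by an overall visibility V: E(a, b) = -V * cos(2*(a - b))
    E = lambda a, b: -visibility * math.cos(2 * (a - b))
    a, a2 = 0.0, math.pi / 4              # Alice's polarizer: 0 and 45 degrees
    b, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's polarizer: 22.5 and 67.5 degrees
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(chsh_S(1.00))  # ideal apparatus: 2*sqrt(2) ~ 2.828
print(chsh_S(0.75))  # imperfect but decent visibility: still S > 2
print(chsh_S(0.65))  # low visibility: S < 2, no violation at all
```

so every apparatus's "nonideal value" sits somewhere on this one curve, which is exactly why the variation seems worth zooming in on.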
apparently detector efficiency is a prime influence on the bell value/ visibility. but how so, exactly? what are the exact formulas? why is nobody building experiment+theory to extract/ examine/ focus on visibility variations? my thinking is that there is a massive loophole here. as mentioned in , even from the very beginning (see also Wick, The Infamous Boundary), Holt+Pipkin did an experiment whose result was consistent with einstein locality, ie no bell violation. they circulated a preprint that was never published; they basically retracted it (or chose not to publish), thinking there was an error. nobody remembers, nobody pays much or any attention. one alternative angle is to consider the result highly significant, and that Holt+Pipkin simply/ accidentally created a setup with low visibility. but does that mean it's "wrong"?
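on the efficiency side there actually are exact formulas in the literature: Garg+Mermin showed that, without the fair-sampling assumption, a local model with symmetric single-arm efficiency η can fake a CHSH value up to 4/η − 2, so a genuine violation needs η > 2(√2 − 1) ≈ 82.8% (Eberhard later lowered the threshold to 2/3 using non-maximally entangled states). a quick check:

```python
import math

def local_chsh_bound(eta):
    # largest CHSH value a local hidden-variable model can reach by exploiting
    # undetected events, for symmetric single-arm efficiency eta (Garg-Mermin)
    return 4 / eta - 2

# with perfect detectors this is the familiar local bound of 2
print(local_chsh_bound(1.0))

# critical efficiency: where the local bound meets the quantum max 2*sqrt(2)
eta_crit = 2 * (math.sqrt(2) - 1)
print(eta_crit, local_chsh_bound(eta_crit))  # ~0.828, ~2.828
```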
it appears that visibility is a key element of the question of nonlocality, and experimenters are treating it as an inconsequential artifact or imperfection, something to airbrush away. my thinking is that missing the bigger picture here, by not zooming in on this discrepancy, is what is wrong, in the sense of incurious/ inexacting/ unprofessional/ unscientific.
⭐ ⭐ ⭐
ok, now, after a few years of turning this over in my head, recently "clicking" more with the Minev findings/ angle, and now finally "publishing" on it, here's my (Very?) Big Idea: maybe entanglement is essentially a combination of detector back-action plus resonance/ timing. the excerpt above hints at exactly that. it appears that the two detector arms along with the photon source are acting as a resonant system. my idea is that the back-action propagates "echoes" in the reverse direction of the light, and these eventually set up a resonant system. for shorthand, call this the back-action resonance hypothesis. it also seems to relate closely to the time symmetric interpretation of QM.[x]
the basic idea: EPR 1935 talked about "measuring a system without in any way disturbing it". but increasingly that looks like the key misconception, a loophole, ie possibly almost or exactly a false premise (missed by the "master" himself). what if, as all signs now suggest, measuring a system in QM fundamentally disturbs it? that is apparently what new experiments like Minev et al are revealing: "detection" seems to be inextricably linked to detector back-action.
this makes almost obvious sense when one starts to think about detector cascades carefully. it reminds me of the "flashes" in GRW-flash theory. the "flash" is the detector going off and causing an "explosion-like" event in the QM probability fluid! (see earlier blogs on the probability fluid interpretation!) this event propagates away from the detector, influencing/ entangling with the measured system, meaning that all measurements taken after the system "stabilizes", ie ones not carefully timed/ interrupted, are measurements of emergent systems, re Emergent QM theory.[x] but apparently the exact nature of the interaction can be extracted with careful timing scenarios, eg pulsed experiments. the key would seem to be to interrupt the stabilization, which happens at nearly the speed of light.
ok, to a researcher in the field this will sound very farfetched. (told you it was radical!) what are the immediate objections? how could something like this be missed by dozens of intricate, micron- and nanosecond-precision experiments crafted by battalions of PhDs going back decades? but as einstein said, the theory determines what can be observed. it tells you where to look (re the drunk looking under the streetlight for his keys). it seems to me that sometimes researchers build an experiment to achieve a result/ replicate a finding, rather than to investigate a phenomenon. yes, missing something that way sounds nearly impossible, but the difference can be subtle.
one explanation is that the phenomenon is very difficult to pinpoint/ extract without certain types of experiments that haven't been done much and that require very precise isolation, mainly wrt timing. my thinking is that pulsed laser experiments are exactly that. most Bell experiments may have very precise timing in the detection of photons, but very rarely in the laser source, which is simply turned on; experimenters then wait a relatively "long time" for it to stabilize as part of standard experimental protocol. one even wonders if they have already been measuring stabilization scenarios, just to make sure their lasers/ experiments are "ready to test!"
the big objection that comes to mind is the random polarization switching experiments, done all the way back in the early 1980s by Aspect, which supposedly rule out influences traveling at up to the speed of light, eg possible "echoes" from the detectors. wouldn't that disrupt a resonance created by backaction? my explanation for this, just a rough idea, is that maybe the backaction resonance is independent of the polarizer orientations; in other words, even with the random switching, the systems studied have still been "stabilizing."
anyway, my thinking is that careful experiments with timing and pulsed laser beams along the lines mentioned above, ones that try to vary/ pinpoint the boundary between bell violation and nonviolation, could reveal new physics here. the idea is to create a single experiment that ranges between violations and nonviolations, presumably via shorter-to-longer pulse durations, and then model it with equations. the prior quote from  talks about visibility/ bell nonlocality measurement differences due to a "thin nonlinear crystal" and "narrow-band spectral filters in front of detectors", which apparently hint at/ contribute to some kind of quantitative tradeoff in the formula/ result. my suspicion is that the violations really happen only with detector back-action plus enough time for an "echo" to propagate between both arms of the system.
isolating such an effect would of course be a breakthrough, paradigm-shifting, and could lead to the long-conjectured/ considered subquantum theory. it would be local, and a new framework could be constructed to explain how local interactions, when not carefully extracted, lead to emergent systems and entanglement-based mathematics. such a theory is already hinted at by the stochastic schroedinger equation variants studied in Quantum Trajectory Theory by Carmichael, Minev et al.
 is a general survey of CSL theories, which are essentially a variant of stochastic schroedinger equations, aka back-action. am not an expert on CSL theory, but it seems relatively vague on where the localizations occur in space, only sketching out, so to speak, when they happen.  in contrast is very close to my own (new) ideas on this, also hinted at in the Minev work: the stochastic jump spatial localization can be considered to emanate from the detector(s).
another line of thinking on this. the singlet state studied in Bell correlation experiments seems as if it can be modelled as two opposite (antipodal) points on a single bloch sphere, ie similar to a single qubit, although have not myself seen a nice simple description of this; qubit concepts came from quantum computing, where Bell entanglement scenarios apparently can be constructed but are a special case. anyway, there are now experiments pinpointing measurements of quantum state on bloch spheres in single cases, ie as in the abstract "single realizations of the experiment", eg . this is done by including the effect of backaction on the quantum state, aka "full tomography by direct averaging" or "single quantum trajectories."
looking around, in  there seem to be very similar measurements under different terminology, "quantum protective measurements", and "measuring the expectation value of a physical variable on a single particle". but looking into it some, to me it sounds a lot like dynamically creating "opposing" magnetic fields that essentially counteract the backaction effects of the detectors. has anyone else made the link? don't see it yet. need to google more!
from a quick google scan,  seems to nicely sketch out the basic correspondence. a single qubit is modelled by SU(2) and the singlet state involves SO(3). "This is a deep property of spin-1/2 particles and is possible only because the structures of the continuous groups SU(2) and SO(3) are the same. In fact, as is well known, SU(2) is exactly a double covering of SO(3). An accessible proof of this is given in Appendix A." there are some other nice papers on Bell-type experiments on quantum machines, eg . my longtime dream has been that quantum computing experiments might even push the boundaries of knowledge of the dynamics of (Bell-type) entanglement. that has now been largely realized with the Minev experiments. are further findings/ surprises in store? it seems possible or even likely to me.
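as a sanity check on the singlet/ bloch sphere picture: the singlet correlation for spin measurements along bloch vectors a and b is E(a,b) = −a·b, computable directly from the Pauli matrices (this is the SO(3) shadow of the SU(2) state). a minimal numpy sketch:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(n):
    """spin observable along the unit bloch vector n = (nx, ny, nz)"""
    return n[0] * sx + n[1] * sy + n[2] * sz

# singlet state (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """correlation <(sigma.a) tensor (sigma.b)> in the singlet; equals -a.b"""
    O = np.kron(spin_along(a), spin_along(b))
    return float(np.real(singlet.conj() @ O @ singlet))

a = np.array([0.0, 0.0, 1.0])                              # Alice along z
b = np.array([np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)])  # Bob 45 deg away
print(E(a, b))        # -cos(45 deg), the quantum prediction
print(-np.dot(a, b))  # same number, read off the bloch sphere directly
```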
this particular blog is dedicated to Boltzmann for ulterior reasons that may someday become apparent.[x]
⭐ ⭐ ⭐
(4/12) some followup.  mentions/ has its ref  pointing to these two refs . they describe some variation of "distinguishability" (and apparently therefore visibility) based on pulse width and bandwidth (which are interrelated: by the Fourier/ time-bandwidth relation, a shorter pulse necessarily has a broader bandwidth). it would be nice to see some reference that tries to consolidate/ survey this type of info/ angle. there is an interplay between the pump pulse width vs the signal/ idler photon "coherence" and the distinguishability/ visibility.
We demonstrate experimentally that the quantum interference between two photons generated in each of the nonlinear crystals will degrade significantly as the duration of the femtosecond pump pulse becomes shorter than the coherence time of the signal and idler photons.
The visibility of the interference pattern is reduced for larger pump bandwidths. This effect can be understood in terms of the spectral distinguishability of the photon pairs.
but wait! aren't these experiments basically showing that the pump pulse width controls the difference between Bell nonlocality and einstein locality? it would seem that merely taking these pulse variation experiments and combining them with/ converting them into a bell test could already lay out 80-90% of the new physics to be found…? aren't these 2 papers already saying that the physics becomes more local/ classical as the laser pulse gets shorter? how does QM explain that? isn't this pointing to a classical-quantum boundary based on "elements of reality" exactly as Einstein outlined, and almost exactly where one would expect to find it (even a long time ago), ie at laser pulse widths?
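to make the suspected tradeoff concrete, here is a purely illustrative toy model (my own guess at the qualitative shape, NOT a formula from either paper): let visibility fall off as the pump pulse duration τ_p drops below the signal/ idler coherence time τ_c, eg V = τ_p/√(τ_p² + τ_c²), then see where the implied CHSH value S = 2√2·V crosses the classical bound of 2:

```python
import math

def visibility_toy(tau_pump, tau_coh):
    # TOY model (illustrative assumption only): the pump pulse acts as a clock,
    # so when tau_pump << tau_coh the pair's birth time becomes distinguishable
    # and visibility -> 0; in the long-pulse/cw limit visibility -> 1
    return tau_pump / math.sqrt(tau_pump ** 2 + tau_coh ** 2)

tau_c = 1.0  # signal/idler coherence time (arbitrary units)
for tau_p in (0.1, 0.5, 1.0, 5.0, 50.0):
    S = 2 * math.sqrt(2) * visibility_toy(tau_p, tau_c)
    verdict = "violation" if S > 2 else "no violation (local)"
    print(f"pulse width {tau_p:5.1f}: S = {S:.3f} -> {verdict}")
```

the point of the exercise: any functional form with this shape predicts a sharp violation/ nonviolation boundary tunable by pulse width alone, which is exactly the kind of single-apparatus sweep experiment proposed above.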
it is notable that the 1st pulsed bell experiments were maybe not done until the 1990s, many years after Bell's initial proposal and a decade after the Aspect experiments. and it seems that experimenters have not zoomed in on this apparent loophole; nobody is even calling it a loophole. but what if this is the biggest loophole of all, while everyone is mass-effort chasing other stuff like the detector/ sampling loopholes etc, aka all on a different bandwagon/ parade now spanning decades? what if this is the currently unrecognized loophole that could finally blow the whole theory open, and show how einstein/ schroedinger/ bell/ bohm et al, vs bohr, were all right, in a sense?
for now it is hard for me to (big-)picture how these short pulses essentially lead to more local results, but there is probably some basic way to sketch it out, and it seems some new comprehensive theory is just waiting to be outlined/ unleashed on the world. in a word, promising-looking…?
also, at this point it's obviously worth looking for more recent experiments than the 2-decade-old 90s burst/ spurt; will try to track something down… maybe someone else has a lead?
 A strong loophole-free test of local realism
 Bell State Preparation using Pulsed Non-Degenerate Two-Photon Entanglement
 Entanglement / Aczel
 Design for Undergraduate Experiment using Photon Entanglement / Thompson
 In Praise and in Criticism of the Model of Continuous Spontaneous Localization of the Wave-Function / Wechsler
 Bell, Bohm, and qubit: EPR remixed / Press
 Five Experimental Tests on the 5-Qubit IBM Quantum Computer
 Quantum interference and indistinguishability with femtosecond pulses
 Spectral distinguishability in ultrafast parametric down-conversion