Bell’s inequality 50 years later

This is a jubilee year.* In November 1964, John Bell submitted a paper to the obscure (and now defunct) journal Physics. That paper, entitled “On the Einstein Podolsky Rosen Paradox,” changed how we think about quantum physics.

The paper was about quantum entanglement, the characteristic correlations among parts of a quantum system that are profoundly different than correlations in classical systems. Quantum entanglement had first been explicitly discussed in a 1935 paper by Einstein, Podolsky, and Rosen (hence Bell’s title). Later that same year, the essence of entanglement was nicely and succinctly captured by Schrödinger, who said, “the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts.” Schrödinger meant that even if we have the most complete knowledge Nature will allow about the state of a highly entangled quantum system, we are still powerless to predict what we’ll see if we look at a small part of the full system. Classical systems aren’t like that — if we know everything about the whole system then we know everything about all the parts as well. I think Schrödinger’s statement is still the best way to explain quantum entanglement in a single vigorous sentence.
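Schrödinger's point can be made concrete with the simplest entangled example (a Python sketch of my own, not part of the original argument): a pair of qubits in the Bell state is a pure state — the most complete knowledge of the whole that quantum mechanics allows — yet tracing out one qubit leaves the other in the maximally mixed state, about whose measurement outcomes nothing at all can be predicted.

```python
from math import sqrt

# |Phi+> = (|00> + |11>)/sqrt(2): a pure two-qubit state, i.e. maximal
# knowledge of the whole system. Stored as a 2x2 amplitude array phi[i][j],
# i indexing Alice's qubit and j indexing Bob's.
phi = [[1 / sqrt(2), 0.0],
       [0.0, 1 / sqrt(2)]]

# Alice's reduced density matrix, obtained by tracing out Bob's qubit:
# rho_A[i][k] = sum_j phi[i][j] * phi[k][j]  (amplitudes are real here).
rho_A = [[sum(phi[i][j] * phi[k][j] for j in (0, 1)) for k in (0, 1)]
         for i in (0, 1)]

# The marginal is maximally mixed: I/2, so Alice's outcomes are pure noise.
print([[round(v, 10) for v in row] for row in rho_A])  # [[0.5, 0.0], [0.0, 0.5]]
```

Complete knowledge of the whole, and yet the part is described by a coin flip.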

To Einstein, quantum entanglement was unsettling, indicating that something is missing from our understanding of the quantum world. Bell proposed thinking about quantum entanglement in a different way, not just as something weird and counter-intuitive, but as a resource that might be employed to perform useful tasks. Bell described a game that can be played by two parties, Alice and Bob. It is a cooperative game, meaning that Alice and Bob are both on the same side, trying to help one another win. In the game, Alice and Bob receive inputs from a referee, and they send outputs to the referee, winning if their outputs are correlated in a particular way which depends on the inputs they receive.

But under the rules of the game, Alice and Bob are not allowed to communicate with one another between when they receive their inputs and when they send their outputs, though they are allowed to use correlated classical bits which might have been distributed to them before the game began. For a particular version of Bell’s game, if Alice and Bob play their best possible strategy then they can win the game with a probability of success no higher than 75%, averaged uniformly over the inputs they could receive. This upper bound on the success probability is Bell’s famous inequality.**
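In the standard (CHSH) version of the game, Alice and Bob each receive a uniformly random input bit and win when the XOR of their output bits equals the AND of their inputs. The 75% bound is then easy to verify by brute force: a shared-randomness strategy is just a probabilistic mixture of deterministic ones, so it suffices to check all sixteen deterministic strategies. A Python sketch (the explicit win condition is my own gloss, not spelled out in the text above):

```python
from itertools import product

# CHSH win condition: outputs a, b win on inputs x, y iff a XOR b = x AND y.
def wins(x, y, a, b):
    return (a ^ b) == (x & y)

# Enumerate every deterministic strategy: Alice's output is a function of her
# input x, Bob's a function of his input y. Shared classical randomness is a
# mixture of deterministic strategies, so it cannot beat the best one.
best = 0.0
for a0, a1, b0, b1 in product((0, 1), repeat=4):
    alice = {0: a0, 1: a1}
    bob = {0: b0, 1: b1}
    # Average success over the four equally likely input pairs.
    p = sum(wins(x, y, alice[x], bob[y]) for x, y in product((0, 1), repeat=2)) / 4
    best = max(best, p)

print(best)  # 0.75 -- the classical (Bell/CHSH) bound
```

No deterministic strategy wins more than three of the four input pairs, which is exactly Bell's inequality for this game.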

Classical and quantum versions of Bell’s game. If Alice and Bob share entangled qubits rather than classical bits, then they can win the game with a higher success probability.

There is also a quantum version of the game, in which the rules are the same except that Alice and Bob are now permitted to use entangled quantum bits (“qubits”) which were distributed before the game began. By exploiting their shared entanglement, they can play a better quantum strategy and win the game with a higher success probability, better than 85%. Thus quantum entanglement is a useful resource, enabling Alice and Bob to play the game better than if they shared only classical correlations instead of quantum correlations.
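The quantum strategy can be checked numerically too. In the sketch below (the measurement angles are one standard optimal choice, my assumption rather than anything specified above), Alice and Bob share a Bell pair and each measures in a basis rotated by an input-dependent angle; averaged over inputs they win with probability cos²(π/8) ≈ 85.4%:

```python
from math import cos, sin, pi, sqrt

# Shared entangled pair |Phi+> = (|00> + |11>)/sqrt(2), stored as a 2x2
# amplitude array phi[i][j] (i: Alice's basis index, j: Bob's).
phi = [[1 / sqrt(2), 0.0],
       [0.0, 1 / sqrt(2)]]

def basis(theta):
    """Real measurement basis rotated by theta; the two rows are the
    outcome-0 and outcome-1 basis vectors."""
    return [(cos(theta), sin(theta)), (-sin(theta), cos(theta))]

# One standard choice of optimal CHSH measurement angles.
alice_angles = {0: 0.0, 1: pi / 4}
bob_angles = {0: pi / 8, 1: -pi / 8}

def win_probability():
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            A, B = basis(alice_angles[x]), basis(bob_angles[y])
            for a in (0, 1):
                for b in (0, 1):
                    if (a ^ b) == (x & y):  # CHSH win condition
                        # Born rule: |<a|<b| Phi+>|^2 for this outcome pair.
                        amp = sum(A[a][i] * B[b][j] * phi[i][j]
                                  for i in (0, 1) for j in (0, 1))
                        total += amp ** 2
    return total / 4  # uniform average over the four input pairs

p = win_probability()
print(round(p, 4))  # 0.8536 = cos^2(pi/8), the quantum (Tsirelson) optimum
```

Each of the four input pairs is won with the same probability cos²(π/8), comfortably above the 75% classical ceiling.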

And experimental physicists have been playing the game for decades, winning with a success probability that violates Bell’s inequality. The experiments indicate that quantum correlations really are fundamentally different than, and stronger than, classical correlations.

Why is that such a big deal? Bell showed that a quantum system is more than just a probabilistic classical system, which eventually led to the realization (now widely believed though still not rigorously proven) that accurately predicting the behavior of highly entangled quantum systems is beyond the capacity of ordinary digital computers. Therefore physicists are now striving to scale up the weirdness of the microscopic world to larger and larger scales, eagerly seeking new phenomena and unprecedented technological capabilities.

1964 was a good year. Higgs and others described the Higgs mechanism, Gell-Mann and Zweig proposed the quark model, Penzias and Wilson discovered the cosmic microwave background, and I saw the Beatles on the Ed Sullivan show. Those developments continue to reverberate 50 years later. We’re still looking for evidence of new particle physics beyond the standard model, we’re still trying to unravel the large scale structure of the universe, and I still like listening to the Beatles.

Bell’s legacy is that quantum entanglement is becoming an increasingly pervasive theme of contemporary physics, important not just as the source of a quantum computer’s awesome power, but also as a crucial feature of exotic quantum phases of matter, and even as a vital element of the quantum structure of spacetime itself. 21st century physics will advance not only by probing the short-distance frontier of particle physics and the long-distance frontier of cosmology, but also by exploring the entanglement frontier, by elucidating and exploiting the properties of increasingly complex quantum states.

Sometimes I wonder how the history of physics might have been different if there had been no John Bell. Without Higgs, Brout and Englert and others would have elucidated the spontaneous breakdown of gauge symmetry in 1964. Without Gell-Mann, Zweig could have formulated the quark model. Without Penzias and Wilson, Dicke and collaborators would have discovered the primordial black-body radiation at around the same time.

But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell). Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt.

*I’m stealing the title and opening sentence of this post from Sidney Coleman’s great 1981 lectures on “The magnetic monopole 50 years later.” (I’ve waited a long time for the right opportunity.)

**I’m abusing history somewhat. Bell did not use the language of games, and this particular version of the inequality, which has since been extensively tested in experiments, was derived by Clauser, Horne, Shimony, and Holt in 1969.

November 23rd, 2014 | Reflections, The expert's corner, Theoretical highlights

18 Comments

  1. Jack Sarfatti November 23, 2014 at 12:41 pm

    But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell.) Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt

    For the historical record, I actually had an intuitive sense of Bell’s theorem while a graduate student in physics at Brandeis in 1961. I had read David Bohm’s book Quantum Theory as a Cornell undergrad in a one-on-one tutorial with the late Robert Brout and was very aware of the Einstein-Podolsky-Rosen effect as described by Bohm. While at Brandeis as a National Defense Fellow Title IV I read a Rev Mod Phys paper by David Inglis on the Tau-Theta puzzle which mentioned EPR in such a way that it triggered the thought in my mind of a conflict between Einstein’s “no signal faster than light” and the observed EPR correlations. However, I was told by Sylvan Schweber and other faculty not to work on this problem. I was upset by their attitude and resigned, going to work for a CIA contractor Tech/Ops on Route 2 before returning to grad school at Cornell and then at UCSD in La Jolla. MIT physics professor David Kaiser talks about my role in this part of the history of recent physics in his award-winning book “How the Hippies Saved Physics” where Kaiser mentions my name more than 600 times according to the Kindle edition.

    • preskill November 23, 2014 at 1:35 pm

      Yes, it’s an interesting book, which prominently features the activities of you and other members of the “Fundamental Fysiks Group” in Berkeley during the 1970s. That group deserves credit for recognizing the importance of Bell’s theorem earlier than most other physicists. I don’t recall Kaiser mentioning your interest in the subject while at Brandeis, though he does mention Bell’s 1964 visit to Brandeis.

  2. mason42 November 23, 2014 at 9:36 pm

    For physics frontiers, I consider “in between physical descriptions” (semiclassics, continuum versus particulate in granular materials, etc.) to also be a frontier in physics and to be a rather fascinating one. In my mind, this is a different frontier from the complexity one. I also consider “nonlinearity” to either be within the “complexity” frontier or to perhaps even to be a still different frontier.

    Biases in my topics of interest are very much revealed above.

    • preskill November 24, 2014 at 9:43 am

      Sure. I did not mean to suggest that this list of frontiers is complete.

  3. rrtucci November 24, 2014 at 9:22 am

    Bell’s inequality is a special case of the utterly trivial statement that the classical probabilities and the quantum probabilities for the same experiment will sometimes disagree.

    • preskill November 24, 2014 at 9:42 am

      Yes, but nevertheless important. We should not expect probabilistic classical computers to be able to simulate quantum computers because quantum systems violate Bell inequalities. Feynman makes this point at length in his famous 1981 lecture on “Simulating physics with computers” (though he does not cite Bell or CHSH).

      • Lubos Motl November 24, 2014 at 11:58 pm

        Dear John, I am with rrtucci. One example of a trivial and well-known statement can’t really be important without specifying what makes it special.

        Moreover, the sentences you added in the comment – about infeasibility – don’t really follow from Bell’s results because the classical computers that could be conjectured to simulate a quantum system may be intrinsically nonlocal, and therefore compatible with Bell’s inequality. The actual surviving difficulties are about the “exponential demands on the capacity” of the classical computer etc. and all these things are independent from the results of Bell’s theorem, or go well beyond it.

        And I think that the best person to be credited with the infeasibility of quantum computing is another RP, namely R.P. Poplavskii, “Thermodynamical models of information processing” (in Russian), 1975. So all these ways to credit various people for various advances seem rather strange. They would create lots of animosity if the people were still alive. “Fortunately”, all the people who actually deserve the credit for the physical basis of all these things – the founders of quantum mechanics – have been dead for decades so they can’t witness this bastardization and “rediscovery” of their results.

        • preskill November 25, 2014 at 7:12 am

          Well, I agree that the founders of quantum mechanics made a far deeper contribution to science than Bell’s. And they get a lot of well-deserved credit for that.

          And unlike Bell and others I don’t regard Bell’s theorem as an indication that something is wrong with quantum mechanics; rather it highlights the difference between classical and quantum systems.

          You are right that it’s a big leap from Bell’s theorem, which concerns bipartite quantum systems, to quantum computing, which is about the complexity of systems with many parts. Still, Bell’s theorem captures the essential idea that the randomness of quantum systems is intrinsic rather than a consequence of our ignorance, indicating that simulating a quantum computer with a classical probabilistic computer will not succeed.

          I don’t know Poplavskii’s paper. Has it been translated into English?

          • Parth November 25, 2014 at 9:05 am

            Interestingly, Feynman downplays the importance of Bell’s Theorem from 24:45 in part 4 of his lecture ‘Quantum Mechanical View of Reality’: http://www.youtube.com/watch?v=hWTbtXgqYMo.
            In the talk he explains the uncertainty principle, EPR paradox and GHZ paradox in a simple manner (using Mermin’s analogy) and emphasises that Bell’s Theorem only mathematically describes concepts that were already understood before it was published, so it isn’t a “big deal”.
            Are there any other talks/texts you know in which he explicitly talks about Bell’s Theorem and ‘local realism’ etc.?

          • preskill November 25, 2014 at 11:12 am

            Yes. In Feynman’s lecture on Simulating Physics with Computers
            http://mengquantumalgorithm.googlecode.com/svn/tags/release1.1/Report/papers/Simulating%20Physics%20with%20Computers.pdf
            he describes Bell’s inequality (without calling it that) and concludes:
            “That’s all. That’s the difficulty. That’s why quantum mechanics can’t
            seem to be imitable by a local classical computer.”

          • Lubos Motl November 26, 2014 at 9:06 am

            Dear John, thanks for your reply. The different feelings may partly be due to different backgrounds or something. But I don’t see why you believe these things.

            First, I don’t think that the quantum founding fathers get the appropriate credit. It seems to me that even e.g. John Bell is currently more celebrated than Werner Heisenberg, who is arguably the #1 man behind the quantum revolution, and Bell’s theorem gets more celebrated than the uncertainty principle.

            While this comparison of the coverage of Heisenberg vs Bell might be questionable, it’s easy to present another one in which the gap is unquestionable. Take Pascual Jordan vs John Bell. Here, the advantage in favor of Bell is several orders of magnitude. You know, I happen to think that Pascual Jordan has made deeper contributions to physics. It’s not just the co-fathership of the matrix version of quantum mechanics. He was the main guy behind the anticommutators for fermionic variables etc.

            Of course, Jordan had close links to the bad party in Germany, and he was a real believer, but so was a majority of the Germans, who regained their status as a humane nation within decades. So the politicization of his achievements is really wrong with the hindsight when the Third Reich has been defeated for such a long time. Even if one looks among politically “clean” folks, one still finds many founding fathers or “nearly” founding fathers – like John von Neumann – who get almost no credit for anything these days. That’s very ironic e.g. when it comes to the periodic celebrations of Hugh Everett, who largely copied whatever makes sense in his texts on QM from John von Neumann.

            I also disagree with your remark that Bell’s theorem has been used to defend the intrinsically probabilistic character of quantum mechanics. Instead, Bell remained a defender of the Bohmian mechanics whose very fundamental feature is the exactly opposite assumption, namely that the probability is always explained deterministically when one looks more carefully. At the end, I think that there’s absolutely no reason why someone who proves something like Bell’s theorem should have a deep understanding of quantum mechanics. It’s a theorem assuming classical physics. So if one is following the proof etc., he is not thinking in the quantum way – and Bell wasn’t thinking in the quantum way – at all! That’s really the fundamental problem.

            So Bell, due to his opinions and authority earned by the theorem, is still being used to defend Bohmian mechanics – and the “more detailed” message behind the theorem was actually used to defend Bohmian-like hidden variables – and the idea about the “emergent” nature of the quantum probabilities. And it’s much more true for the enthusiastic followers of Bell that they still hope in some classical “deeper” (shallower) explanation beneath QM. You may be the only fan of Bell’s in the world who doesn’t fit this description. But most of them naturally do because the real message of the fame behind Bell’s proof is that one becomes “famous” or “deep” if he thinks about consequences of classical axioms!

            I don’t know of an English translation of Poplavskii. But Russian is cool and sometimes, the equations are the real language you want to understand! 😉 The scanned paper in Russian is here:

            http://ufn.ru/ufn75/ufn75_3/Russian/r753d.pdf

            I think that if you review the format and the type of the equations and inequalities over there, you will see that it’s quite a serious paper finding some computation-relevant differences between classical and quantum physics, including the effect of temperature etc.

          • preskill November 26, 2014 at 10:26 am

            It’s a good point about Jordan. His contributions were very substantial and he should be better known.

            I’m not a fan of Bohm’s model, or of other nonlocal hidden variable models. And of course it’s true that Bell’s theorem establishes a property of classical correlations, not quantum correlations.

            I’m just saying that the (experimentally verified) violations of Bell’s inequality tell us something important about quantum correlations, which we appreciate because of Bell’s work.

          • Lubos Motl November 26, 2014 at 9:58 pm

            Dear John, I understand that this is the seemingly uncontroversial core of your point about the importance of this result, but I think it is incorrect, too.

            I am preparing to write a blog post showing that “inequalities implied by any local realist theories and violated by QM” exist for simple and previously discussed setups such as the double slit experiment, and that the fact that QM predicts results for the quantity outside the interval allowed by local realist theories was well appreciated through principles older than Bell’s theorem – just principles using different words.

  4. Jack Sarfatti November 24, 2014 at 9:58 am

    I left Brandeis in 1962. Whether or not Kaiser was explicit about it in the book, it is a true story and there are email exchanges between me and Kaiser before he finished the book. FYI
    http://stardrive.org/stardrive/index.php/all-blog-articles/myblog-ftd/my-reply-to-sylvan-schwebers-review-in-sept-2011-physics-today.html

    • Jack Sarfatti November 24, 2014 at 10:01 am

      I mean there are email exchanges about this very Brandeis story. One might wonder what happened in the parallel universe next door where Schweber encouraged me to work on this problem in 1961. 😉

  5. About Music and Science November 27, 2014 at 5:06 pm

    Reblogged this on susanjfeingold.

  6. Paul Ginsparg November 27, 2014 at 9:23 pm

    I was going to complain about your seeming claim that “Bell described a game that can be played by two parties”, i.e., the CHSH game, but then reached your (**) note, so will instead mention two other things (though still think it could have been phrased something like “Bell described a situation that can also be considered a game …” and sidestep the historical “abuse”), anyway:

    1) You say, “Without Gell-Mann, Zweig could have formulated the quark model.”
    Neither of us were around, but couldn’t we say that without Gell-Mann, we know Zweig *did* formulate the quark model (of course calling them aces)? — a somewhat stronger statement.
    It’s fun to look at his on-line versions:
    http://cds.cern.ch/record/352337/files/CERN-TH-401.pdf (v1, dated 17 Jan 1964)
    http://cds.cern.ch/record/570209/files/CERN-TH-412.pdf (v2, dated 21 Feb 1964)
    (Both abstracts end with the sentence “An experimental search for the aces is suggested”, whereas Gell-Mann seemed to view them as “purely mathematical entities”.)
    Gell-Mann’s PRL http://hep.caltech.edu/gm/images/quarks.pdf
    is published “1 Feb 1964”, “received 4 Jan 1964”, and comments that the
    “ideas were developed during a visit to Columbia University in March 1963”.

    But it’s fun to poke around a bit further…
    In http://arxiv.org/abs/1007.0494 (describing their “different paths to quarks”, though also filled with fun anecdotes about Caltech physics social dynamics during that period), Zweig elaborates:
    “According to Bob Serber [letter dated Jul 1980], in the spring of 1963 over lunch at the Columbia faculty club, Serber told Murray about a scheme he had been thinking about in which baryon representations were made from three fundamental representations of SU(3) (3 x 3 x 3), and meson representations from the fundamental representation and the representation representing the antiparticles of the fundamental representation (3 x 3bar). After a moment’s calculation Murray found that this would imply that the members of the fundamental representation would have fractional charge, a fact that Serber had not realized. No more was said, but in February of 1964 Murray proposed using the three fractionally charged objects in the fundamental representation as fields from which to construct the currents of a toy field theory.”
    And in this one http://authors.library.caltech.edu/18969/1/Origins_of_the_Quark_Model_Final_Zweig%5B1%5D.pdf
    Zweig relates that M. G-M left for MIT in late ’62 and they didn’t see each other again for almost two years, until Zweig returned from CERN in ’64, so it appears they followed independent paths. (That document also has, in an epilogue: “When the physics department of a leading University was considering an appointment for me, their senior theorist, one of the most respected spokesmen for all of theoretical physics, blocked the appointment at a faculty meeting by passionately arguing that the ace model was the work of a ‘charlatan.’”)

    And I recently came across this interview from a year ago,
    http://ph-news.web.cern.ch/content/interview-george-zweig
    where he relates the astounding story (I’d heard from him directly long ago):
    “I wanted to send a paper to the Physical Review, but the head of the Theory Division, Leon Van Hove, wouldn’t allow it. He told me that all reports from CERN had to be published in European journals, even though American institutions paid my salary, overhead, and publication costs. When I asked the theory secretary, Madame Fabergé, to retype the paper for publication, she politely refused, saying that Van Hove had instructed her not to type any of my papers. This was a real problem because I didn’t know how to type, and didn’t have a typewriter (remember, I was trained as a typesetter, not a typist).
    I was scheduled to give a theory seminar at CERN titled “Dealer’s choice: Aces are Wild”. Van Hove cancelled the seminar, and I was not allowed to reschedule it. When Van Hove and Kokedee published a book four years later reprinting articles on the quark model they did not include either of the CERN reports. Van Hove deliberately and systematically tried to keep my work from public view.”

    I happened to talk at some length with George Z. at a meeting in Chicago just last month, and he was in ever good spirits. Catching up on other things, I forgot to ask him how the two versions did eventually get typed … but I did recall to him my comment from over 20 years ago (when his office was right around the corner from mine at Los Alamos Nat’l Lab and we saw each other daily), that he would have been well served had hep-th@xxx.lanl.gov been available 30 years before…

    2) You say “We should not expect probabilistic classical computers to be able to simulate quantum computers”, which is true, but sometimes leads to confusion. Because you also point out correctly in your old QC course notes that an n-qubit system can be faithfully simulated on a classical computer by multiplying 2^n x 2^n unitary matrices. (I mention this because it’s an eye-opener to students, who say, “but wait a minute, so I can simulate any quantum system after all.”) The caveat is that it cannot be done *efficiently* (in the terminology repeatedly employed last Jan, highlighting the 2^n), so there’s a very implicit appeal to notions of computational complexity when that seemingly obvious statement is made.

    • preskill November 28, 2014 at 10:08 am

      I used the cautious wording “Without Gell-Mann, Zweig could have formulated the quark model” because Gell-Mann had made so many seminal contributions to particle theory before 1964, making the counterfactual “without Gell-Mann” hard to assess. In particular, SU(3) symmetry was an antecedent of quarks. We know Ne’eman discovered SU(3) symmetry independently of Gell-Mann, but even there Gell-Mann’s work had helped to set the stage before Ne’eman came along …

      Zweig’s interesting reminiscences, which you link to, reinforce this point.

      You’re right, of course, about the need to emphasize the importance of efficiency in discussing classical simulation of quantum systems. In the original post I had that distinction in mind when I said “accurately predicting the behavior of highly entangled quantum systems is beyond the capacity of ordinary digital computers” though maybe I should have been more explicit. I was trying to state the core point succinctly without being overly pedantic.

      I hesitated before writing “Bell described a game that can be played by two parties” for the reason you mention, but I decided I preferred using the footnote, avoiding the awkwardness of saying something like “Bell described a situation that can also be considered a game.” Using the footnote gave me cover and an opportunity to credit CHSH without disturbing the flow of the text.
