Polarizer: Rise of the Efficiency

How should a visitor to Zürich spend her weekend?

Launch this question at a Swiss lunch table, and you split diners into two camps. To take advantage of Zürich's location, some say, visit Geneva, Lucerne, or another spot outside the city. Other locals suggest museums, the lake, and the 19th-century ETH building.


The 19th-century ETH building

ETH, short for a German name I’ve never pronounced, is the polytechnic from which Einstein graduated. The polytechnic houses a quantum-information (QI) theory group that’s pioneering ideas I’ve blogged about: single-shot information, epsilonification, and small-scale thermodynamics. While visiting the group this August, I triggered an avalanche of tourism advice. Caught between two camps, I chose Option Three: Contemplate polar codes.

Polar codes compress information into the smallest space possible. Imagine you write a message (say, a Zürich travel guide) and want to encode it in the fewest possible symbols (so it fits in my camera bag). The longer the message, the fewer code symbols you need per message symbol: The more punch each code letter can pack. As the message grows, the ratio of code length to message length decreases. The lowest possible ratio is a number, represented by H, called the Shannon entropy.
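For the curious, here's a rough illustration (mine, not from the original post) of the Shannon entropy of a message's empirical letter distribution, computed in a few lines of Python. The travel-guide strings are placeholders.

```python
from collections import Counter
from math import log2

def shannon_entropy(message):
    """Shannon entropy H, in bits per symbol, of a message's letter frequencies."""
    counts = Counter(message)
    n = len(message)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

# A repetitive message carries little uncertainty per letter;
# a varied one approaches log2(alphabet size) bits per letter.
print(shannon_entropy("zzzzzzzz"))     # a constant message: zero bits per letter
print(shannon_entropy("zurichguide"))  # a more varied (made-up) message
```

The fewer bits per letter H comes out to, the more a long message drawn from that distribution can be squeezed.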

So established Claude E. Shannon in 1948. But Shannon didn’t know how to code at efficiency H. Not for another six decades did we learn how.

I learned how, just before that weekend. ETH student David Sutter walked me through polar codes as though down Zürich’s Bahnhofstrasse.


The Bahnhofstrasse, one of Zürich’s trendiest streets, early on a Sunday.

Say you’re encoding n copies of a random variable. When I say, “random variable,” think, “character in the travel guide.” Just as each character is one of 26 letters, each variable has one of many possible values.

Suppose the variables are independent and identically distributed. Even if you know some variables’ values, you can’t guess others’. Cryptoquote players might object that we can infer unknown from known letters. For example, a three-letter word that begins with “th” likely ends with “e.” But our message lacks patterns.

Think of the variables as diners at my lunch table. Asking how to fill a weekend in Zürich—splitting the diners—I resembled the polarizer.

The polarizer is a mathematical object that sounds like an Arnold Schwarzenegger film and acts on the variables. Just as some diners pointed me outside Zürich, the polarizer gives some variables one property. Just as other diners pointed me to spots within Zürich, the polarizer gives other variables another property. Just as I pointed myself at polar codes, the polarizer gives the remaining variables a third property.

These properties involve entropy. Entropy quantifies uncertainty about a variable’s value—about which of the 26 letters a character represents. Even if you know the early variables’ values, you can’t guess the later variables’. But we can guess some polarized variables’ values. Call the first polarized variable u1, the second u2, etc. If we can guess the value of some ui, that ui has low entropy. If we can’t guess the value, ui has high entropy. The Nicole-esque variables have entropies like the earnings of Terminator Salvation: noteworthy but not chart-topping.

To recap: We want to squeeze a message into the tiniest space possible. Even if we know early variables’ values, we can’t infer later variables’. Applying the polarizer, we split the variables into low-, high-, and middling-entropy flocks. We can guess the value of each low-entropy ui, if we know the values of the preceding u’s.
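Here's a minimal sketch of a single polarizer step—Arıkan's two-bit transform u1 = x1 XOR x2, u2 = x2—applied to two independent biased bits. The sketch is mine, not from the post, and the bias p = 0.1 is an arbitrary choice for illustration.

```python
from math import log2

def h(p):
    """Binary entropy, in bits, of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.1          # each original bit equals 1 with probability 0.1 (arbitrary)
H_X = h(p)       # entropy of one original bit

# One polarizer step on two independent bits: u1 = x1 XOR x2, u2 = x2.
q = 2 * p * (1 - p)              # Pr[u1 = 1]
H_U1 = h(q)                      # entropy of u1: rises above H_X
H_U2_given_U1 = 2 * H_X - H_U1   # chain rule: total entropy is conserved

# The entropies split around H_X: u1 grows harder to guess, u2 (given u1) easier.
print(H_U2_given_U1, "<", H_X, "<", H_U1)
```

Repeating this step on longer blocks drives most variables' entropies toward 0 (guessable) or toward the maximum (transcribe them), which is the polarization the post describes.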

Almost finished!

In your camera-size travel guide, transcribe the high-entropy ui’s. These ui’s suggest the values of the low-entropy ui’s. When you want to decode the guide, guess the low-entropy ui’s. Then reverse the polarizer to reconstruct much of the original text.

The longer the original travel guide, the fewer errors you make while decoding, and the smaller the ratio of the encoded guide’s length to the original guide’s length. As the guide’s length grows, that ratio shrinks to H. You’ve compressed a message maximally efficiently. As the Swiss say: Glückwünsche.
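To watch the ratio approach H numerically, one can iterate the polarizer step many times. The recursion below, z → 2z − z² and z → z², is exact for erasure-type uncertainty (for general sources it only bounds the entropies); it's my own illustrative sketch, not part of the original post.

```python
def polarize(z, levels):
    """Track each variable's uncertainty parameter through repeated polarizer steps.

    The pair (2z - z*z, z*z) is exact for erasure-type uncertainty:
    z near 1 means "can't guess the value"; z near 0 means "can guess it".
    """
    zs = [z]
    for _ in range(levels):
        zs = [t for z in zs for t in (2 * z - z * z, z * z)]
    return zs

eps = 0.3                 # per-symbol uncertainty H (arbitrary choice)
zs = polarize(eps, 12)    # 2**12 polarized variables

high = sum(z > 0.95 for z in zs) / len(zs)   # must be transcribed
low = sum(z < 0.05 for z in zs) / len(zs)    # can be guessed
print(high, low)   # the transcribed fraction approaches eps = H as levels grow
```

The fraction of variables you must transcribe tends to eps, i.e., to H per symbol: maximal compression.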

How does compression relate to QI? Quantum states form messages. Polar codes, ETH scientists have shown, compress quantum messages maximally efficiently. Researchers are exploring decoding strategies and relationships among (quantum) polar codes. With their help, Shannon-coded travel guides might fit not only in my camera bag, but also on the tip of my water bottle.

Should you need a Zürich travel guide, I recommend Grossmünster Church. Not only does the name supply your daily dose of umlauts. Not only did Ulrich Zwingli channel the Protestant Reformation into Switzerland there. Climbing the church’s tower also affords a panorama of Zürich. After oohing over the hills and ahhing over the lake, you can shift your gaze toward ETH. The worldview being built there bewitches as much as the vista from any tower.


A tower with a view.

With gratitude to ETH’s QI-theory group (particularly to Renato Renner) for its hospitality. And for its travel advice. With gratitude to David Sutter for his explanations and patience.


The author and her neue Freunde.

November 4th, 2013 | Real science, Reflections, Theoretical highlights


  1. Mark M. Wilde November 4, 2013 at 10:15 am

    I might point readers of this blog to some other relevant works on quantum polar codes:



    • Nicole Yunger Halpern November 4, 2013 at 11:45 am

      Thanks for the information, Mark.

      And there you have it, folks — from someone who knows loads more about polar codes than I!

  2. George Lukacs November 4, 2013 at 11:37 am

    The single line, “The polarizer is a mathematical object that sounds like an Arnold Schwarzenegger film …” made my entire day! Thanks for offering a helping hand to an English teacher bedazzled by what you are doing (and writing!).

    • Nicole Yunger Halpern November 4, 2013 at 11:46 am

      Now *your* comment has made *my* day. 🙂

