The Dartboard Paradox

On November 5, 2019, I will be at PyData NYC to give a talk called The Inspection Paradox is Everywhere. Here’s the abstract:

The inspection paradox is a statistical illusion you’ve probably never heard of. It’s a common source of confusion, an occasional cause of error, and an opportunity for clever experimental design. And once you know about it, you see it everywhere.

The examples in the talk include social networks, transportation, education, incarceration, and more. And now I am happy to report that I’ve stumbled on yet another example, courtesy of John D. Cook.

In a blog post from 2011, John wrote about the following counter-intuitive truth:

For a multivariate normal distribution in high dimensions, nearly all the probability mass is concentrated in a thin shell some distance away from the origin.

John does a nice job of explaining this result, so you should read his article, too. But I’ll try to explain it another way, using a dartboard.

If you are not familiar with the layout of a “clock” dartboard, it looks like this:

[Figure: dartboard diagram]

I got the measurements of the board from the British Darts Organization rules, and drew the following figure with dimensions in mm:

Now, suppose I throw 100 darts at the board, aiming for the center each time, and plot the location of each dart. It might look like this:

Suppose we analyze the results and conclude that my errors in the x and y directions are independent and distributed normally with mean 0 and standard deviation 50 mm.

Assuming that model is correct, which do you think is more likely on my next throw: hitting the 25 ring (the innermost red circle), or the triple ring (the middle red ring)?

It might be tempting to say that the 25 ring is more likely, because the probability density is highest at the center of the board and lower at the triple ring.

We can see that by generating a large sample, computing a 2-D kernel density estimate (KDE), and plotting the result as a contour.
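
Here's a sketch of what that might look like in code; the sample size, random seed, and color map are arbitrary choices, not necessarily what's in the notebook:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Model of the throws: errors in x and y are independent,
# normal, with mean 0 and standard deviation 50 mm.
sigma = 50
rng = np.random.default_rng(17)
xs, ys = rng.normal(0, sigma, size=(2, 10000))

# Fit a 2-D KDE to the sample and evaluate it on a grid.
kde = gaussian_kde(np.vstack([xs, ys]))
ticks = np.linspace(-200, 200, 101)
X, Y = np.meshgrid(ticks, ticks)
Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

# Darker contours indicate higher probability density.
plt.contourf(X, Y, Z, cmap='Reds')
plt.gca().set_aspect('equal')
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()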

In the contour plot, darker color indicates higher probability density. So it sure looks like the inner ring is more likely than the outer rings.

But that’s not right, because we have not taken into account the area of the rings. The total probability mass in each ring is the product of density and area (or more precisely, the density integrated over the area).

The 25 ring is more dense, but smaller; the triple ring is less dense, but bigger. So which one wins?

In this example, I cooked the numbers so the triple ring wins: the chance of hitting the triple ring is about 6%; the chance of hitting the 25 ring is about 4%.
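
If you want to check numbers like these, you can use the fact that when the x and y errors are independent normals with standard deviation sigma, the distance from the center follows a Rayleigh distribution with scale sigma. Here's a sketch; the ring radii are my approximations of the BDO dimensions, so treat the resulting probabilities as ballpark figures:

from scipy.stats import rayleigh

# With independent N(0, sigma) errors in x and y, the distance
# from the center follows a Rayleigh distribution with scale sigma.
sigma = 50
cdf = rayleigh(scale=sigma).cdf

def prob_ring(r_inner, r_outer):
    """Probability that a throw lands between r_inner and r_outer (mm)."""
    return cdf(r_outer) - cdf(r_inner)

# Approximate ring radii in mm; the exact values of these radii
# (and of sigma) determine which ring wins.
print('25 ring    ', prob_ring(6.35, 15.9))
print('triple ring', prob_ring(99, 107))
print('double ring', prob_ring(162, 170))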

If I were a better dart player, my standard deviation would be smaller and the 25 ring would be more likely. And if I were even worse, the double ring (the outermost red ring) might be the most likely.

Inspection Paradox?

It might not be obvious that this is an example of the inspection paradox, but you can think of it that way. The defining characteristic of the inspection paradox is length-biased sampling, which means that each member of a population is sampled in proportion to its size, duration, or similar quantity.

In the dartboard example, as we move away from the center, the area of each ring increases in proportion to its radius (at least approximately). So the probability mass of a ring at radius r is proportional to the density at r, weighted by r.

We can see the effect of this weighting in the following figure:

The blue line shows estimated density as a function of r, based on a sample of throws. As expected, it is highest at the center, and drops away like one half of a bell curve.

The orange line shows the estimated density of the same sample weighted by r, which is proportional to the probability of hitting a ring at radius r.

It peaks at about 60 mm. And the total probability in the triple ring, which is near 100 mm, is a little higher than in the 25 ring, near 10 mm.
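
For this model, you can also compute curves like these directly, without estimating them from a sample. Here's a sketch that plots the (unnormalized) density along a radius and the same curve weighted by r:

import numpy as np
import matplotlib.pyplot as plt

sigma = 50
rs = np.linspace(0, 250, 101)

# 2-D probability density along a radius: highest at the center,
# dropping away like half of a bell curve.
density = np.exp(-rs**2 / (2 * sigma**2))

# The same curve weighted by r, proportional to the probability
# of landing at distance r from the center.
weighted = rs * density

# Scale both curves to a common maximum so they are comparable.
plt.plot(rs, density / density.max(), label='density at radius r')
plt.plot(rs, weighted / weighted.max(), label='density weighted by r')
plt.xlabel('distance from center (mm)')
plt.legend()
plt.show()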

If I get a chance, I will add the dartboard problem to my talk as yet another example of length-biased sampling, also known as the inspection paradox.

You can see my code for this example in this Jupyter notebook.

Left, right, part 4

In the first article in this series, I looked at data from the General Social Survey (GSS) to see how political alignment in the U.S. has changed, on the axis from conservative to liberal, over the last 50 years.

In the second article, I suggested that self-reported political alignment could be misleading.

In the third article I looked at responses to this question:

Do you think most people would try to take advantage of you if they got a chance, or would they try to be fair?

And generated seven “headlines” to describe the results.

In this article, we’ll use resampling to see how much the results depend on random sampling. And we’ll see which headlines hold up and which might be overinterpretation of noise.

Overall trends

In the previous article we looked at this figure, which was generated by resampling the GSS data and computing a smooth curve through the annual averages.
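
For readers who want to see the mechanics, here's a rough sketch of one resampling iteration. The column names ('year' and 'fair') are stand-ins for the actual GSS variable names, and the rolling mean is a stand-in for whatever smoother the notebook uses:

import pandas as pd

def resample_rows(df):
    """Draw a bootstrap sample of the rows of df, with replacement."""
    return df.sample(n=len(df), replace=True)

def annual_means(df):
    """Group respondents by year and average the 'fair' responses."""
    return df.groupby('year')['fair'].mean()

# One iteration: resample the respondents, compute annual averages,
# and smooth them. 'gss' stands for the DataFrame of GSS responses.
# means = annual_means(resample_rows(gss))
# smooth = means.rolling(5, center=True, min_periods=1).mean()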


If we run the resampling process two more times, we get somewhat different results:

Now, let’s review the headlines from the previous article. Looking at different versions of the figure, which conclusions do you think are reliable?

  • Absolute value: “Most respondents think people try to be fair.”
  • Rate of change: “Belief in fairness is falling.”
  • Change in rate: “Belief in fairness is falling, but might be leveling off.”

In my opinion, the three figures are qualitatively similar. The shapes of the curves are somewhat different, but the headlines we wrote could apply to any of them.

Even the tentative conclusion, “might be leveling off”, holds up to varying degrees in all three.

Grouped by political alignment

When we group by political alignment, we have fewer samples in each group, so the results are noisier and our headlines are more tentative.

Here’s the figure from the previous article:


And here are two more figures generated by random resampling:

Now we see more qualitative differences between the figures. Let’s review the headlines again:

  • Absolute value: “Moderates have the bleakest outlook; Conservatives and Liberals are more optimistic.” This seems to be true in all three figures, although the size of the gap varies substantially.
  • Rate of change: “Belief in fairness is declining in all groups, but Conservatives are declining fastest.” This headline is more questionable. In one version of the figure, belief is increasing among Liberals. And it’s not at all clear that the decline is fastest among Conservatives.
  • Change in rate: “The Liberal outlook was declining, but it leveled off in 1990.” The Liberal outlook might have leveled off, or even turned around, but we could not say with any confidence that 1990 was a turning point.
  • Change in rate: “Liberals, who had the bleakest outlook in the 1980s, are now the most optimistic”. It’s not clear whether Liberals have the most optimistic outlook in the most recent data.

As we should expect, conclusions based on smaller sample sizes are less reliable.

Also, conclusions about absolute values are more reliable than conclusions about rates, which are more reliable than conclusions about changes in rates.

Matplotlib animation in Jupyter

For two of my books, Think Complexity and Modeling and Simulation in Python, many of the examples involve animation. Fortunately, there are several ways to do animation with Matplotlib in Jupyter. Unfortunately, none of them is ideal.

FuncAnimation

Until recently, I was using FuncAnimation, provided by the matplotlib.animation package, as in this example from Think Complexity. The documentation of this function is pretty sparse, but if you want to use it, you can find examples.
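
For reference, here's roughly what the pattern looks like; this is a generic sketch, not the example from the book, and displaying the result inline in Jupyter takes extra steps, which is part of the problem:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

n = 10
a = np.zeros((n, n))

fig = plt.figure()
im = plt.imshow(a, vmin=0, vmax=1)   # one API call to create the AxesImage

def update(frame):
    a[frame, frame] = 1              # modify the underlying array
    im.set_array(a)                  # a different API call to update the image
    return [im]

anim = FuncAnimation(fig, update, frames=n, interval=100)
plt.show()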

For me, there are a few drawbacks:

  • It requires a back end like ffmpeg to display the animation. Based on my email, many readers have trouble installing packages like this, so I avoid using them.
  • It runs the entire computation before showing the result, so it takes longer to debug, and makes for a less engaging interactive experience.
  • For each element you want to animate, you have to use one API to create the element and another to update it.

For example, if you are using imshow to visualize an array, you would run

    im = plt.imshow(a, **options)

to create an AxesImage, and then

    im.set_array(a)

to update it. For beginners, this is a lot to ask. And even for experienced people, it can be hard to find documentation that shows how to update various display elements.

As another example, suppose you have a 2-D array and plot it like this:

    plot(a)

The result is a list of Line2D objects. To update them, you have to traverse the list and invoke set_ydata() on each one.

Updating a display is often more complicated than creating it, and requires substantial navigation of the documentation. Wouldn’t it be nice to just call plot(a) again?

Clear output

Recently I discovered a simpler alternative using clear_output() from IPython.display and sleep() from the time module. If you have Python and Jupyter, you already have these modules, so there’s nothing to install.

Here’s a minimal example using imshow:

%matplotlib inline

import numpy as np
from matplotlib import pyplot as plt
from IPython.display import clear_output
from time import sleep

n = 10
a = np.zeros((n, n))
plt.figure()

for i in range(n):
    plt.imshow(a)              # draw the current state of the array
    plt.show()
    a[i, i] = 1                # update the array for the next frame
    sleep(0.1)                 # pause so the frame is visible
    clear_output(wait=True)    # wait to clear until the next frame is drawn

The drawback of this method is that it is relatively slow, but for the examples I’ve worked on, the performance has been good enough.

In the ModSimPy library, I provide a function that encapsulates this pattern:

def animate(results, draw_func, interval=None):
    plt.figure()
    try:
        for t, state in results.iterrows():
            draw_func(state, t)          # draw the current state
            plt.show()
            if interval:
                sleep(interval)          # optional pause between frames
            clear_output(wait=True)      # wait to clear until the next frame is drawn
        draw_func(state, t)              # redraw the last frame so it stays visible
        plt.show()
    except KeyboardInterrupt:
        pass

results is a Pandas DataFrame that contains results from a simulation; each row represents the state of a system at a point in time.

draw_func is a function that takes a state and draws it in whatever way is appropriate for the context.

interval is the time between frames in seconds (not counting the time to draw the frame).

Because the loop is wrapped in a try statement that captures KeyboardInterrupt, you can interrupt an animation cleanly.
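
Here's a hypothetical example of calling animate; the results are made up, and the drawing function just plots a bar:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# A made-up simulation result: one state variable decaying over time.
results = pd.DataFrame({'level': 100 * 0.9 ** np.arange(20)})

def draw_level(state, t):
    """Draw one frame: a single bar showing the current level."""
    plt.bar(0, state['level'])
    plt.ylim(0, 100)
    plt.title(f't = {t}')

animate(results, draw_level, interval=0.1)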

You can see an example that uses this function in this notebook from Chapter 22 of Modeling and Simulation in Python, and you can run it on Binder.

And here’s an example from Chapter 6 of Think Complexity, which you can also run on Binder.

Local regression in Python

I love data visualization make-overs (like this one I wrote a few months ago), but sometimes the tone can be too negative (like this one I wrote a few months ago).

Sarah Leo, a data journalist at The Economist, has found the perfect solution: re-making your own visualizations. Here’s her tweet.

And here’s the link to the article, which you should go read before you come back here.

One of her examples is the noisy line plot on the left, which shows polling results over time.


Here’s Leo’s explanation of what’s wrong and why:

Instead of plotting the individual polls with a smoothed curve to show the trend, we connected the actual values of each individual poll. This happened, primarily, because our in-house charting tool does not plot smoothed lines. Until fairly recently, we were less comfortable with statistical software (like R) that allows more sophisticated visualisations. Today, all of us are able to plot a polling chart like the redesigned one above.

This confession made me realize that I am in the same boat they were in: I know about local regression, but I don’t use it because I haven’t bothered to learn the tools.

Fortunately, filling this gap in my toolkit took less than an hour. The StatsModels library provides lowess, which computes locally weighted scatterplot smoothing.

I grabbed the data from The Economist and read it into a Pandas DataFrame. Then I wrote the following function, which takes a Pandas Series, computes a LOWESS, and returns a Pandas Series with the results:

import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

def make_lowess(series):
    endog = series.values             # the values to smooth
    exog = series.index.values        # the dates, used as the x-axis

    smooth = lowess(endog, exog)
    index, data = np.transpose(smooth)

    return pd.Series(data, index=pd.to_datetime(index))

And here’s what the results look like:

The smoothed lines I got look a little different from the ones in The Economist article. In general, the results depend on the parameters we give LOWESS. You can see all the details in this Jupyter notebook.
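
If you want to try the function without the original data, here's a small self-contained example, with synthetic values standing in for the polling series:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic stand-in for the polling data: a slow trend plus noise,
# indexed by date.
dates = pd.date_range('2016-01-01', periods=200, freq='W')
rng = np.random.default_rng(3)
values = np.linspace(40, 50, len(dates)) + rng.normal(0, 2, len(dates))
series = pd.Series(values, index=dates)

smooth = make_lowess(series)

series.plot(style='.', alpha=0.5, label='synthetic polls')
smooth.plot(label='LOWESS')
plt.legend()
plt.show()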

Thanks to Sarah Leo for inspiring me to learn to use LOWESS, and for providing the data I used to replicate the results.