The Dartboard Paradox

On November 5, 2019, I will be at PyData NYC to give a talk called The Inspection Paradox is Everywhere. Here’s the abstract:

The inspection paradox is a statistical illusion you’ve probably never heard of. It’s a common source of confusion, an occasional cause of error, and an opportunity for clever experimental design. And once you know about it, you see it everywhere.

The examples in the talk include social networks, transportation, education, incarceration, and more. And now I am happy to report that I’ve stumbled on yet another example, courtesy of John D. Cook.

In a blog post from 2011, John wrote about the following counter-intuitive truth:

For a multivariate normal distribution in high dimensions, nearly all the probability mass is concentrated in a thin shell some distance away from the origin.

John does a nice job of explaining this result, so you should read his article, too. But I’ll try to explain it another way, using a dartboard.

If you are not familiar with the layout of a “clock” dartboard, it looks like this:

[Figure: dartboard diagram]

I got the measurements of the board from the British Darts Organization rules, and drew the following figure with dimensions in mm:

Now, suppose I throw 100 darts at the board, aiming for the center each time, and plot the location of each dart. It might look like this:

Suppose we analyze the results and conclude that my errors in the x and y directions are independent and distributed normally with mean 0 and standard deviation 50 mm.

Assuming that model is correct, which do you think is more likely on my next throw: hitting the 25 ring (the innermost red circle) or the triple ring (the middle red ring)?

It might be tempting to say that the 25 ring is more likely, because the probability density is highest at the center of the board and lower at the triple ring.

We can see that by generating a large sample, generating a 2-D kernel density estimate (KDE), and plotting the result as a contour.

In the contour plot, darker color indicates higher probability density. So it sure looks like the inner ring is more likely than the outer rings.
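That analysis might look something like the following sketch, which assumes SciPy's gaussian_kde and the 50 mm error model; the actual notebook may differ in the details:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Large sample of simulated throws: x and y errors ~ Normal(0, 50 mm)
xy = rng.normal(0, 50, size=(2, 10_000))

# Fit a 2-D kernel density estimate and evaluate it on a grid
kde = gaussian_kde(xy)
grid = np.linspace(-170, 170, 101)
X, Y = np.meshgrid(grid, grid)
Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

# plt.contourf(X, Y, Z) would then show the density, darkest at the center
```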

But that’s not right, because we have not taken into account the area of the rings. The total probability mass in each ring is the product of density and area (or more precisely, the density integrated over the area).

The 25 ring is more dense, but smaller; the triple ring is less dense, but bigger. So which one wins?

In this example, I cooked the numbers so the triple ring wins: the chance of hitting the triple ring is about 6%; the chance of hitting the 25 ring is about 4%.
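You can check the arithmetic with a quick Monte Carlo simulation. The ring radii below are my approximations of the BDO measurements, so the exact probabilities may differ a little from the ones in the notebook:

```python
import numpy as np

rng = np.random.default_rng(42)

# Approximate ring radii in mm (check the BDO specs for exact values)
R25_IN, R25_OUT = 6.35, 15.9    # 25 ring (outer bull)
RT_IN, RT_OUT = 99.0, 107.0     # triple ring

# Simulate a million throws with Normal(0, 50 mm) errors in x and y
x = rng.normal(0, 50, 1_000_000)
y = rng.normal(0, 50, 1_000_000)
r = np.hypot(x, y)

p25 = np.mean((r > R25_IN) & (r < R25_OUT))
p_triple = np.mean((r > RT_IN) & (r < RT_OUT))
print(f"P(25 ring) ≈ {p25:.3f}, P(triple ring) ≈ {p_triple:.3f}")
```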

If I were a better dart player, my standard deviation would be smaller and the 25 ring would be more likely. And if I were even worse, the double ring (the outermost red ring) might be the most likely.

Inspection Paradox?

It might not be obvious that this is an example of the inspection paradox, but you can think of it that way. The defining characteristic of the inspection paradox is length-biased sampling, which means that each member of a population is sampled in proportion to its size, duration, or similar quantity.

In the dartboard example, as we move away from the center, the area of each ring increases in proportion to its radius (at least approximately). So the probability mass of a ring at radius r is proportional to the density at r, weighted by r.

We can see the effect of this weighting in the following figure:

The blue line shows estimated density as a function of r, based on a sample of throws. As expected, it is highest at the center, and drops away like one half of a bell curve.

The orange line shows the estimated density of the same sample weighted by r, which is proportional to the probability of hitting a ring at radius r.

It peaks at about 60 mm. And the total density in the triple ring, which is near 100 mm, is a little higher than in the 25 ring, near 10 mm.
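Under the Gaussian model, the radial distance r follows a Rayleigh distribution, whose density is proportional to r exp(−r²/2σ²) and peaks at r = σ = 50 mm; the sample-based estimate in the figure peaks a little higher. A quick check with SciPy:

```python
import numpy as np
from scipy.stats import rayleigh

sigma = 50  # standard deviation of the x and y errors, in mm

# If x, y ~ Normal(0, sigma) independently, r = hypot(x, y)
# follows a Rayleigh distribution with scale sigma
rs = np.linspace(0, 200, 2001)
density = rayleigh.pdf(rs, scale=sigma)

peak = rs[np.argmax(density)]
print(peak)  # the analytic peak is at r = sigma
```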

If I get a chance, I will add the dartboard problem to my talk as yet another example of length-biased sampling, also known as the inspection paradox.

You can see my code for this example in this Jupyter notebook.

UPDATE November 6, 2019: This “thin shell” effect has practical consequences. This excerpt from The End of Average talks about designing the cockpit of a plane for the “average” pilot, and discovering that there are no pilots near the average in 10 dimensions.

What should you do?

In my previous post I asked “What should I do?” Now I want to share a letter I wrote recently for students at Olin, which appeared in our school newspaper, Frankly Speaking.

It is addressed to engineering students, but it might also be relevant to people who are not students or not engineers.

Dear Students,

As engineers, you have a greater ability to affect the future of the planet than almost anyone else.  In particular, the decisions you make as you start your careers will have a disproportionate impact on what the world is like in 2100.

Here are the things you should work on for the next 80 years that I think will make the biggest difference:

  • Nuclear energy
  • Desalination
  • Transportation without fossil fuels
  • CO₂ sequestration
  • Alternatives to meat
  • Global education
  • Global child welfare
  • Infrastructure for migration
  • Geoengineering

Let me explain where that list comes from.

First and most importantly, we need carbon-free energy, a lot of it, and soon.  With abundant energy, almost every other problem is solvable, including food and desalinated water.  Without it, almost every other problem is impossible.

Solar, wind, and hydropower will help, but nuclear energy is the only technology that can scale up enough, soon enough, to substantially reduce carbon emissions while meeting growing global demand.

With large scale deployment of nuclear power, it is feasible for global electricity production to be carbon neutral by 2050 or sooner.  And most energy use, including heat, agriculture, industry, and transportation, could be electrified by that time. Long-range shipping and air transport will probably still require fossil fuels, which is why we also need to develop carbon capture and sequestration.

Global production of meat is a major consumer of energy, food, and water, and a major emitter of greenhouse gasses.  Developing alternatives to meat can have a huge impact on climate, especially if they are widely available before meat consumption increases in large developing countries.

World population is expected to peak in 2100 at 9 to 11 billion people.  If the peak is closer to 9 than 11, all of our problems will be 20% easier.  Fortunately, there are things we can do to help that happen, and even more fortunately, they are good things.

The difference between 9 and 11 depends mostly on what happens in Africa during the next 30 years.  Most of the rest of the world has already made the “demographic transition”, that is, the transition from high fertility (5 or more children per woman) to low fertility (at or below replacement rate).

The primary factor that drives the demographic transition is child welfare; increasing childhood survival leads to lower fertility.  So it happens that the best way to limit global population is to protect children from malnutrition, disease, and violence. Other factors that contribute to lower fertility are education and economic opportunity, especially for women.

Regardless of what we do in the next 50 years, we will have to deal with the effects of climate change, and a substantial part of that work will be good old fashioned civil engineering.  In particular, we need infrastructure like sea walls to protect people and property from natural disasters. And we need a new infrastructure of migration, including the ability to relocate large numbers of people in the short term, after an emergency, and in the long term, when current population centers are no longer viable.

Finally, and maybe most controversially, I think we will need geoengineering.  This is a terrible and dangerous idea for a lot of reasons, but I think it is unavoidable, not least because many countries will have the capability to act unilaterally.  It is wise to start experiments now to learn as much as we can, as long as possible before any single actor takes the initiative.

Think locally, act globally

When we think about climate change, we gravitate to individual behavior and political activism.  These activities are appealing because they provide opportunities for immediate action and a feeling of control.  But they are not the best tools you have.

Reducing your carbon footprint is a great idea, but if that’s all you do, it will have negligible effect.

And political activism is great: you should vote, make sure your representatives know what you think, and take to the streets if you have to.  But these activities have diminishing returns. Writing 100 letters to your representative is not much better than one, and you can’t be on strike all the time.

If you focus on activism and your personal footprint, you are neglecting what I think is your greatest tool for impact: choosing how you spend 40 hours a week for the next 80 years of your life.

As an early-career engineer, you have more ability than almost anyone else to change the world.  If you use that power well, you will help us get through the 21st Century with a habitable planet and a high quality of life for the people on it.

What should I do?

I am planning to be on sabbatical from June 2020 to August 2021, so I am thinking about how to spend it. Let me tell you what I can do, and you can tell me what I should do.

Data Science

I consider myself a data scientist, but that means different things to different people. More specifically, I can contribute in the following areas:

  • Data exploration, modeling, and prediction,
  • Bayesian statistics and machine learning,
  • Scientific computing and optimization,
  • Software engineering and reproducible science
  • Technical communication, including data visualization.

I have written a series of books related to data science and scientific computing, including Think Stats, Think Bayes, Physical Modeling in MATLAB, and Modeling and Simulation in Python.

And I practice what I teach. During a previous sabbatical, I was a Visiting Scientist at Google, working in their Make the Web Faster initiative. I worked on measurement and modeling of network performance, related to my previous research.

As a way of developing, demonstrating, and teaching data science skills, I write a blog called Probably Overthinking It.

Software Engineering

I’ve been programming since before you (the median-age reader of this article) were born, mostly in C for the first 20 years, and mostly in Python for the last 20. But I’ve also worked in Java, MATLAB, and a bunch of functional languages.

Most of my code has been for research or education, but in my time at Google I learned to write industrial-grade code with professional software engineering tools.

I work in public view, so you can see the good, the bad, and the ugly on GitHub. As a recent example, here’s a library I am designing for representing discrete probability distributions.

I work on teams: I have co-taught classes, co-authored books, consulted with companies and colleges, and collaborated on software projects. I’ve done Scrum training, and I use agile methods and tools on most of my projects (with varying degrees of fidelity).

Curriculum design

If you are creating a new college from scratch, I am one of a small number of people with that experience. When I joined Olin College in 2003, the first-year curriculum had run once. I was there for the creation of Years 2, 3, and 4, as well as the reinvention of Year 1.

Since then, Olin has come to be recognized as a top undergraduate engineering program and a world leader in innovative education. I am proud of my work here and the amazing colleagues I have done it with.

My projects focus on the role of computing and data science in education, especially engineering education.

  1. I was part of a team that developed a novel introduction to computational modeling and simulation, and I wrote a book about it, now available for MATLAB and Python.
  2. I developed an introductory data science course for Olin, a book, and an online class. Currently I am working with a team at Harvard to develop a data science class for their GenEd program.
  3. Bayesian statistics is not just for grad students. I developed an undergraduate class that teaches Bayesian methods first, and wrote a book about it.
  4. Data structures is a problematic class in the Computer Science curriculum. I developed a class on Complexity Science as an alternative approach to the topic, and wrote a book about it. And for people coming to the topic later, I developed an online class and a book.

I have also written a series of books to help people learn to program in Python, Java, and C++. Other authors have adapted my books for Julia, Perl, OCaml, and other languages.

My books and curricular materials are used in universities, colleges, and high schools all over the world.

I have taught webcasts and workshops on these topics at conferences like PyCon and SciPy, and for companies developing in-house expertise.

If you are creating a new training program, department, or college, maybe I can help.

What I am looking for

I want to work on interesting projects with potential for impact. I am especially interested in projects related to the following areas, which are the keys we need to get through the 21st Century with a habitable planet and a high quality of life for the people on it:

  • Nuclear energy
  • Desalination
  • CO₂ sequestration
  • Geoengineering
  • Alternatives to meat
  • Transportation without fossil fuels
  • Global education
  • Global child welfare
  • Infrastructure for natural disaster and rising sea level

I live in Needham MA, and probably will not relocate for this sabbatical, but I could work almost anywhere in eastern Massachusetts. I would consider remote work, but I would rather work with people face to face, at least sometimes.

And I’ll need financial support for the year.

So, what should I do?

For more on my background, here is my CV.

What’s the frequency, Kenneth?

First, if you get the reference in the title, you are old. Otherwise, let me google that for you.

Second, a Reddit user recently posted this question

I have temperatures reading over times (2 secs interval) in a computer that is control by an automatic fan. The temperature fluctuate between 55 to 65 in approximately sine wave fashion. I wish to find out the average time between each cycle of the wave (time between 55 to 65 then 55 again the average over the entire data sets which includes many of those cycles) . What sort of statistical analysis do I use?

[The following] is one of my data set represents one of the system configuration. Temperature reading are taken every 2 seconds. Please show me how you guys do it and which software. I would hope for something low tech like libreoffice or excel. Hopefully nothing too fancy is needed.

A few people recommended using FFT, and I agreed, but I also suggested two other options:

  1. Use a cepstrum, or
  2. Keep it simple and use zero-crossings instead.

And then another person suggested autocorrelation.

I ran some experiments to see what each of these solutions looks like and what works best. If you are too busy for the details, I think the best option is computing the distance between zero crossings using a spline fitted to the smoothed data.
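Here is a minimal sketch of the zero-crossing approach on synthetic data; the spline step from the notebook is replaced by a simple moving average, and the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic temperatures: a 60-second cycle sampled every 2 seconds, plus noise
dt = 2.0
t = np.arange(0, 600, dt)
temps = 60 + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.3, t.size)

# Center the series and smooth it with a short moving average
centered = temps - temps.mean()
smooth = np.convolve(centered, np.ones(5) / 5, mode="same")

# Find upward zero crossings; their spacing estimates the period
signs = np.sign(smooth)
up = np.where((signs[:-1] <= 0) & (signs[1:] > 0))[0]
period = np.median(np.diff(t[up]))
print(f"estimated period ≈ {period:.1f} s")
```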

If you want the details, they are in this Jupyter notebook.

Watch your tail!

For a long time I have recommended using CDFs to compare distributions. If you are comparing an empirical distribution to a model, the CDF gives you the best view of any differences between the data and the model.

Now I want to amend my advice. CDFs give you a good view of the distribution between the 5th and 95th percentiles, but they are not as good for the tails.

To compare both tails, as well as the “bulk” of the distribution, I recommend a triptych that looks like this:

There’s a lot of information in that figure. So let me explain.

The code for this article is in this Jupyter notebook.

Daily changes

Suppose you observe a random process, like daily changes in the S&P 500. And suppose you have collected historical data in the form of percent changes from one day to the next. The distribution of those changes might look like this:

If you fit a Gaussian model to this data, it looks like this:

It looks like there are small discrepancies between the model and the data, but if you follow my previous advice, you might look at these CDFs and conclude that the Gaussian model is pretty good.

If we zoom in on the middle of the distribution, we can see the discrepancies more clearly:

In this figure it is clearer that the Gaussian model does not fit the data particularly well. And, as we’ll see, the tails are even worse.

Survival on a log-log scale

In my opinion, the best way to compare tails is to plot the survival curve (which is the complementary CDF) on a log-log scale.

In this case, because the dataset includes positive and negative values, I shift them right to view the right tail, and left to view the left tail.

Here’s what the right tail looks like:

This view is like a microscope for looking at tail behavior; it compresses the bulk of the distribution and expands the tail. In this case we can see a small discrepancy between the data and the model around 1 percentage point. And we can see a substantial discrepancy above 3 percentage points.

The Gaussian distribution has “thin tails”; that is, the probabilities it assigns to extreme events drop off very quickly. In the dataset, extreme values are much more common than the model predicts.
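The thinness of Gaussian tails can be quantified directly. Here is a sketch comparing survival probabilities of a Gaussian and a Student's t with matched variance; ν = 3 is an arbitrary illustrative choice, not the value fitted in the notebook:

```python
from scipy.stats import norm, t

nu = 3
sigma = (nu / (nu - 2)) ** 0.5   # t with nu=3 has variance nu/(nu-2) = 3

# Survival probabilities P(X > x) at a few points in the tail
for x in [2, 4, 6]:
    print(x, t.sf(x, df=nu), norm.sf(x, scale=sigma))
```

Far enough into the tail, the t distribution assigns orders of magnitude more probability to extreme values than the variance-matched Gaussian.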

The results for the left tail are similar:

Again, there is a small discrepancy near -1 percentage points, as we saw when we zoomed in on the CDF. And there is a substantial discrepancy in the leftmost tail.

Student’s t-distribution

Now let’s try the same exercise with Student’s t-distribution. There are two ways I suggest you think about this distribution:

1) Student’s t is similar to a Gaussian distribution in the middle, but it has heavier tails. The heaviness of the tails is controlled by a third parameter, ν.

2) Student’s t is a mixture of Gaussian distributions with different variances. The tail parameter, ν, is related to the variance of the variances.

For a demonstration of the second interpretation, I recommend this animation by Rasmus Bååth.
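You can also check the mixture interpretation numerically. This sketch follows the standard normal-inverse-gamma construction, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.stats import t as student_t, kstest

rng = np.random.default_rng(7)
nu = 5

# Draw precisions (inverse variances) from Gamma(nu/2, rate=nu/2);
# the corresponding variances follow an inverse-gamma distribution
precisions = rng.gamma(shape=nu / 2, scale=2 / nu, size=100_000)

# A Gaussian draw with a different random variance each time...
samples = rng.normal(0, 1 / np.sqrt(precisions))

# ...is indistinguishable from Student's t with nu degrees of freedom
stat, pvalue = kstest(samples, student_t(df=nu).cdf)
print(stat, pvalue)
```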

I used PyMC to estimate the parameters of a Student’s t model and generate a posterior predictive distribution. You can see the details in this Jupyter notebook.

Here is the CDF of the Student t model compared to the data and the Gaussian model:

In the bulk of the distribution, Student’s t-distribution is clearly a better fit.

Now here’s the right tail, again comparing survival curves on a log-log scale:

Student’s t-distribution is a better fit than the Gaussian model, but it overestimates the probability of extreme values. The problem is that the left tail of the empirical distribution is heavier than the right. But the model is symmetric, so it can only match one tail or the other, not both.

Here is the left tail:

The model fits the left tail about as well as possible.

If you are primarily worried about predicting extreme losses, this model would be a good choice. But if you need to model both tails well, you could try one of the asymmetric generalizations of Student’s t.

The old six sigma

The tail behavior of the Gaussian distribution is the key to understanding “six sigma events”.

John Cook explains six sigmas in this excellent article:

“Six sigma means six standard deviations away from the mean of a probability distribution, sigma (σ) being the common notation for a standard deviation. Moreover, the underlying distribution is implicitly a normal (Gaussian) distribution; people don’t commonly talk about ‘six sigma’ in the context of other distributions.”

This is important. John also explains:

“A six-sigma event isn’t that rare unless your probability distribution is normal… The rarity of six-sigma events comes from the assumption of a normal distribution more than from the number of sigmas per se.”
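To put numbers on that, here is a quick comparison of the chance of exceeding 6 under a standard Gaussian and under a Student's t distribution; ν = 4 is an arbitrary heavy-tailed example:

```python
from scipy.stats import norm, t

# Probability of a value greater than 6 under a standard Gaussian
p_gauss = norm.sf(6)

# The same threshold under a Student's t with nu = 4
p_t = t.sf(6, df=4)

print(p_gauss, p_t)
```

Under the Gaussian, exceeding 6 is roughly a one-in-a-billion event; under this t distribution it is on the order of one in a thousand.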

So, if you see a six-sigma event, you should probably not think, “That was extremely rare, according to my Gaussian model.” Instead, you should think, “Maybe my Gaussian model is not a good choice”.

Left, right, part 4

In the first article in this series, I looked at data from the General Social Survey (GSS) to see how political alignment in the U.S. has changed, on the axis from conservative to liberal, over the last 50 years.

In the second article, I suggested that self-reported political alignment could be misleading.

In the third article I looked at responses to this question:

Do you think most people would try to take advantage of you if they got a chance, or would they try to be fair?

And generated seven “headlines” to describe the results.

In this article, we’ll use resampling to see how much the results depend on random sampling. And we’ll see which headlines hold up and which might be overinterpretation of noise.
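The resampling step can be sketched like this; synthetic data stands in for the GSS extract, and the variable names are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Synthetic survey data: a year and a binary response (1 = "try to be fair")
n = 5_000
df = pd.DataFrame({
    "year": rng.integers(1972, 2019, size=n),
    "fair": (rng.random(n) < 0.6).astype(int),
})

# One bootstrap replicate: resample rows with replacement,
# then compute the mean response in each year
sample = df.sample(n=len(df), replace=True, random_state=17)
annual = sample.groupby("year")["fair"].mean()

# Repeating this many times shows how much the curve varies by chance
print(annual.head())
```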

Overall trends

In the previous article we looked at this figure, which was generated by resampling the GSS data and computing a smooth curve through the annual averages.

If we run the resampling process two more times, we get somewhat different results:

Now, let’s review the headlines from the previous article. Looking at different versions of the figure, which conclusions do you think are reliable?

  • Absolute value: “Most respondents think people try to be fair.”
  • Rate of change: “Belief in fairness is falling.”
  • Change in rate: “Belief in fairness is falling, but might be leveling off.”

In my opinion, the three figures are qualitatively similar. The shapes of the curves are somewhat different, but the headlines we wrote could apply to any of them.

Even the tentative conclusion, “might be leveling off”, holds up to varying degrees in all three.

Grouped by political alignment

When we group by political alignment, we have fewer samples in each group, so the results are noisier and our headlines are more tentative.

Here’s the figure from the previous article:

And here are two more figures generated by random resampling:

Now we see more qualitative differences between the figures. Let’s review the headlines again:

  • Absolute value: “Moderates have the bleakest outlook; Conservatives and Liberals are more optimistic.” This seems to be true in all three figures, although the size of the gap varies substantially.
  • Rate of change: “Belief in fairness is declining in all groups, but Conservatives are declining fastest.” This headline is more questionable. In one version of the figure, belief is increasing among Liberals. And it’s not at all clear that the decline is fastest among Conservatives.
  • Change in rate: “The Liberal outlook was declining, but it leveled off in 1990.” The Liberal outlook might have leveled off, or even turned around, but we could not say with any confidence that 1990 was a turning point.
  • Change in rate: “Liberals, who had the bleakest outlook in the 1980s, are now the most optimistic”. It’s not clear whether Liberals have the most optimistic outlook in the most recent data.

As we should expect, conclusions based on smaller sample sizes are less reliable.

Also, conclusions about absolute values are more reliable than conclusions about rates, which are more reliable than conclusions about changes in rates.

Matplotlib animation in Jupyter

For two of my books, Think Complexity and Modeling and Simulation in Python, many of the examples involve animation. Fortunately, there are several ways to do animation with Matplotlib in Jupyter. Unfortunately, none of them is ideal.

FuncAnimation

Until recently, I was using FuncAnimation, provided by the matplotlib.animation package, as in this example from Think Complexity. The documentation of this function is pretty sparse, but if you want to use it, you can find examples.

For me, there are a few drawbacks:

  • It requires a back end like ffmpeg to display the animation. Based on my email, many readers have trouble installing packages like this, so I avoid using them.
  • It runs the entire computation before showing the result, so it takes longer to debug, and makes for a less engaging interactive experience.
  • For each element you want to animate, you have to use one API to create the element and another to update it.

For example, if you are using imshow to visualize an array, you would run

    im = plt.imshow(a, **options)

to create an AxesImage, and then

    im.set_array(a)

to update it. For beginners, this is a lot to ask. And even for experienced people, it can be hard to find documentation that shows how to update various display elements.

As another example, suppose you have a 2-D array and plot it like this:

    plot(a)

The result is a list of Line2D objects. To update them, you have to traverse the list and invoke set_ydata() on each one.

Updating a display is often more complicated than creating it, and requires substantial navigation of the documentation. Wouldn’t it be nice to just call plot(a) again?
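For instance, here is one way to update those lines, assuming a is a 2-D array plotted column by column; this is a sketch, not the notebook's actual code:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((20, 3))

lines = plt.plot(a)   # one Line2D per column of a

# To show new data, update each line rather than calling plot again
b = rng.random((20, 3))
for line, col in zip(lines, b.T):
    line.set_ydata(col)
```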

Clear output

Recently I discovered a simpler alternative using clear_output() from IPython.display and sleep() from the time module. If you have Python and Jupyter, you already have these modules, so there’s nothing to install.

Here’s a minimal example using imshow:

%matplotlib inline

import numpy as np
from matplotlib import pyplot as plt
from IPython.display import clear_output
from time import sleep

n = 10
a = np.zeros((n, n))
plt.figure()

for i in range(n):
    plt.imshow(a)
    plt.show()
    a[i, i] = 1
    sleep(0.1)
    clear_output(wait=True)

The drawback of this method is that it is relatively slow, but for the examples I’ve worked on, the performance has been good enough.

In the ModSimPy library, I provide a function that encapsulates this pattern:

def animate(results, draw_func, interval=None):
    plt.figure()
    try:
        for t, state in results.iterrows():
            draw_func(state, t)
            plt.show()
            if interval:
                sleep(interval)
            clear_output(wait=True)
        draw_func(state, t)
        plt.show()
    except KeyboardInterrupt:
        pass

results is a Pandas DataFrame that contains results from a simulation; each row represents the state of a system at a point in time.

draw_func is a function that takes a state and draws it in whatever way is appropriate for the context.

interval is the time between frames in seconds (not counting the time to draw the frame).

Because the loop is wrapped in a try statement that captures KeyboardInterrupt, you can interrupt an animation cleanly.
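As a hypothetical example of the inputs animate expects (the names and values here are made up):

```python
import pandas as pd

# Each row is the state of the system at a point in time
results = pd.DataFrame({"temp": [90, 70, 55, 45]}, index=[0, 1, 2, 3])

def draw_func(state, t):
    # In a notebook this would redraw a Matplotlib figure;
    # here it just reports the state as text
    print(f"t={t}: temp={state['temp']}")

# animate(results, draw_func, interval=0.5) drives this loop:
for t, state in results.iterrows():
    draw_func(state, t)
```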

You can see an example that uses this function in this notebook from Chapter 22 of Modeling and Simulation in Python, and you can run it on Binder.

And here’s an example from Chapter 6 of Think Complexity, which you can also run on Binder.