
Driving Under the Influence

This recent article in the Washington Post reports that “a police department in Maryland is training officers to spot the signs of driving high by watching people toke up in a tent”. The story features Lt. John O’Brien, who is described as a “trained drug recognition expert”. It also quotes a defense attorney who says, “There are real questions about the scientific validity of what they’re doing.”

As it happens, the scientific validity of Drug Recognition Experts is one of the examples in my forthcoming book, Probably Overthinking It. The following is an excerpt from Chapter 9: “Fairness and Fallacy”.

In September 2017 the American Civil Liberties Union (ACLU) filed suit against Cobb County, Georgia on behalf of four drivers who were arrested for driving under the influence of cannabis. All four were evaluated by Officer Tracy Carroll, who had been trained as a “Drug Recognition Expert” (DRE) as part of a program developed by the Los Angeles Police Department in the 1970s.

At the time of their arrest, all four insisted that they had not smoked or ingested any cannabis products, and when their blood was tested, all four results were negative; that is, the blood tests found no evidence of recent cannabis use.

In each case, prosecutors dismissed the charges related to impaired driving. Nevertheless, the arrests were disruptive and costly, and the plaintiffs were left with a permanent and public arrest record.

At issue in the case is the assertion by the ACLU that, “Much of the DRE protocol has never been rigorously and independently validated.”

So I investigated that claim. What I found was a collection of studies that are, across the board, deeply flawed. Every one of them features at least one methodological error so blatant it would be embarrassing at a middle school science fair.

As an example, the lab study most often cited to show that the DRE protocol is valid was conducted at Johns Hopkins University School of Medicine in 1985. It concludes, “Overall, in 98.7% of instances of judged intoxication the subject had received some active drug”. In other words, in the cases where one of the Drug Recognition Experts believed that a subject was under the influence, they were right 98.7% of the time.

That sounds impressive, but there are several problems with this study. The biggest is that the subjects were all “normal, healthy” male volunteers between 18 and 35 years old, who were screened and “trained on the psychomotor tasks and subjective effect questionnaires used in the study”.

By design, the study excluded women, anyone older than 35, and anyone in poor health. Then the screening excluded anyone who had any difficulty passing a sobriety test while they were sober — for example, anyone with shaky hands, poor coordination, or poor balance.

But those are exactly the people most likely to be falsely accused. How can you estimate the number of false positives if you exclude from the study everyone likely to yield a false positive? You can’t.

Another frequently cited study reports that “When DREs claimed drugs other than alcohol were present, they [the drugs] were almost always detected in the blood (94% of the time)”. Again, that sounds impressive until you look at the methodology.

Subjects in this study had already been arrested because they were suspected of driving while impaired, most often because they had failed a field sobriety test.

Then, while they were in custody, they were evaluated by a DRE, that is, a different officer trained in the drug evaluation procedure. If the DRE thought that the suspect was under the influence of a drug, the suspect was asked to consent to a blood test; otherwise they were released.

Of 219 suspects, 18 were released after a DRE performed a “cursory examination” and concluded that there was no evidence of drug impairment.

The remaining 201 suspects were asked for a blood sample. Of those, 22 refused and 6 provided a urine sample only.

Of the 173 blood samples, 162 were found to contain a drug other than alcohol. That’s about 94%, which is the statistic they reported.

But the base rate in this study is extraordinarily high, because it includes only cases that were suspected by the arresting officer and then confirmed by the DRE. With a few generous assumptions, I estimate that the base rate is 86%; in reality, it was probably higher.

To estimate the base rate, let’s assume:

  • All 18 of the suspects who were released were, in fact, not under the influence of a drug, and
  • The 28 suspects who did not provide a blood sample were impaired at the same rate as the 173 who did, 94%.

Both of these assumptions are generous; that is, they probably overestimate the accuracy of the DREs. Even so, they imply that 188 out of 219 blood tests would have been positive, if they had been tested. That’s a base rate of 86%.
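If you want to check the arithmetic, here is a minimal Python sketch; the variable names are mine, not from the study.

```python
# Counts reported in the study
suspects = 219
released = 18            # released after a cursory exam; assume none were impaired
tested = 173             # blood samples analyzed
positive = 162           # samples containing a drug other than alcohol
no_sample = suspects - released - tested   # 28 who did not provide a blood sample

# Generous assumption: those who did not provide a sample were impaired
# at the same rate as those who did (about 94%)
positive_rate = positive / tested
estimated_impaired = positive + no_sample * positive_rate

print(f"estimated impaired: {estimated_impaired:.0f} of {suspects}")
print(f"base rate: {estimated_impaired / suspects:.0%}")
```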

Because the suspects who were released were not tested, there is no way to estimate the sensitivity of the test, but let’s assume it’s 99%, so if a suspect is under the influence of a drug, there is a 99% chance a DRE would detect it. In reality, it is probably lower.

We also need the specificity of the protocol, that is, the probability that a DRE correctly concludes an unimpaired suspect is not under the influence; let's assume it is 60%. With these generous assumptions, we can use the following table to estimate the predictive value of the DRE protocol.

|              | Suspects | Prob Positive | Cases | Percent |
|--------------|----------|---------------|-------|---------|
| Impaired     | 86       | 0.99          | 85.14 | 93.8    |
| Not impaired | 14       | 0.40          | 5.60  | 6.2     |

With 86% base rate, we expect 86 impaired suspects out of 100, and 14 unimpaired. With 99% sensitivity, we expect the DRE to detect about 85 true positives. And with 60% specificity, we expect the DRE to wrongly accuse 5.6 suspects. Out of 91 positive tests, 85 would be correct; that’s about 94%, as reported in the study.
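Here is the same calculation as a Python function, in case you want to try other assumptions; the function and its name are mine, not from the book.

```python
def predictive_value(base_rate, sensitivity, specificity):
    """Fraction of positive DRE calls expected to be correct,
    given the base rate of impairment among the people evaluated."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assumptions from the table: 86% base rate, 99% sensitivity, 60% specificity
print(f"{predictive_value(0.86, 0.99, 0.60):.1%}")   # about 94%, as in the study
```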

But this accuracy is only possible because the base rate in the study is so high. Remember that most of the subjects had been arrested because they had failed a field sobriety test. Then they were tested by a DRE, who was effectively offering a second opinion.

But that’s not what happened when Officer Tracy Carroll arrested Katelyn Ebner, Princess Mbamara, Ayokunle Oriyomi, and Brittany Penwell. In each of those cases, the driver was stopped for driving erratically, which is evidence of possible impairment. But when Officer Carroll began his evaluation, that was the only evidence of impairment.

So the relevant base rate is not 86%, as in the study; it is the fraction of erratic drivers who are under the influence of drugs. And there are many other reasons for erratic driving, including distraction, sleepiness, and the influence of alcohol. It’s hard to say which explanation is most common. I’m sure it depends on time and location. But as an example, let’s suppose it is 50%; the following table shows the results with this base rate.

|              | Suspects | Prob Positive | Cases | Percent |
|--------------|----------|---------------|-------|---------|
| Impaired     | 50       | 0.99          | 49.5  | 71.2    |
| Not impaired | 50       | 0.40          | 20.0  | 28.8    |

With a 50% base rate, 99% sensitivity, and 60% specificity, the predictive value of the test is only 71%; under these assumptions, almost 30% of the accused would be innocent. In fact, the base rate, sensitivity, and specificity are probably lower, which means that the predictive value of the test is even worse.
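Using the same function with a 50% base rate reproduces the second table, and shows how quickly the predictive value falls as the base rate drops:

```python
# With a 50% base rate, the predictive value drops to about 71%
print(f"{predictive_value(0.50, 0.99, 0.60):.1%}")

# And it keeps falling as the base rate decreases
for base_rate in [0.86, 0.50, 0.25, 0.10]:
    print(base_rate, f"{predictive_value(base_rate, 0.99, 0.60):.1%}")
```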

The suit filed by the ACLU was not successful. The court decided that the arrests were valid because the results of the field sobriety tests constituted “probable cause” for an arrest. As a result, the court did not consider the evidence for, or against, the validity of the DRE protocol. The ACLU has appealed the decision.

One Queue or Two

I’m happy to report that copyediting of Modeling and Simulation in Python is done, and the book is off to the printer! Electronic versions are available now from No Starch Press; print copies will be available in May, but you can pre-order now from No Starch Press, Amazon, and Barnes and Noble.

To celebrate, I just published one of the case studies from the end of Part I, which is about simulating discrete systems. The case study explores a classic question from queueing theory:

Suppose you are designing the checkout area for a new store. There is room for two checkout counters and a waiting area for customers. You can make two lines, one for each counter, or one line that serves both counters.

In theory, you might expect a single line to be better, but it has some practical drawbacks: in order to maintain a single line, you would have to install rope barriers, and customers might be put off by what seems to be a longer line, even if it moves faster.

So you’d like to check whether the single line is really better and by how much.

Simulation can help answer this question. The following figure shows the three scenarios I simulated:

The leftmost diagram shows a single queue (with customers arriving at rate 𝜆) and a single server (with customers completing service at rate 𝜇).

The center diagram shows a single queue with two servers, and the rightmost diagram shows two queues with two servers.
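If you'd rather not open the notebook, here is a rough standalone sketch of the two-server comparison, just to convey the idea. It is not the code from the book (which builds on the ModSimPy library), the parameters are arbitrary, and it assumes that in the two-line scenario customers pick a line at random and never switch.

```python
import heapq
import numpy as np

def one_line(lam, mu, num_servers, num_customers, rng):
    """Single FIFO queue feeding num_servers servers; returns time in system per customer."""
    arrivals = np.cumsum(rng.exponential(1 / lam, num_customers))
    services = rng.exponential(1 / mu, num_customers)
    free_at = [0.0] * num_servers          # when each server next becomes free
    heapq.heapify(free_at)
    times = np.empty(num_customers)
    for i, (arrive, service) in enumerate(zip(arrivals, services)):
        start = max(arrive, heapq.heappop(free_at))   # wait for the earliest free server
        heapq.heappush(free_at, start + service)
        times[i] = start + service - arrive
    return times

def two_lines(lam, mu, num_customers, rng):
    """Two separate FIFO lines; each customer picks one at random and stays in it."""
    arrivals = np.cumsum(rng.exponential(1 / lam, num_customers))
    services = rng.exponential(1 / mu, num_customers)
    lanes = rng.integers(2, size=num_customers)
    free_at = [0.0, 0.0]
    times = np.empty(num_customers)
    for i, (arrive, service, lane) in enumerate(zip(arrivals, services, lanes)):
        start = max(arrive, free_at[lane])
        free_at[lane] = start + service
        times[i] = free_at[lane] - arrive
    return times

rng = np.random.default_rng(17)
lam, mu = 1.5, 1.0      # arrival rate and per-server service rate; needs lam < 2 * mu
n = 100_000
print("one line, two servers: ", one_line(lam, mu, 2, n, rng).mean())
print("two lines, two servers:", two_lines(lam, mu, n, rng).mean())
```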

So, which is the best, and by how much? You can read my answer in the online version of the book. Or you can run the Jupyter notebook on Colab.

Here’s what some of the results look like:

This figure shows the time customers are in the system, including wait time and service time, as a function of the arrival rate. The orange line shows the average we expect based on analysis; the blue dots show the result of simulations.
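For what it's worth, the analytic averages for figures like this come from standard queueing theory; the following sketch uses the textbook M/M/1 and M/M/2 results, not code from the book.

```python
def mm1_time_in_system(lam, mu):
    """Average time in system for one queue, one server (requires lam < mu)."""
    return 1 / (mu - lam)

def mm2_time_in_system(lam, mu):
    """Average time in system for one queue, two servers (requires lam < 2 * mu)."""
    rho = lam / (2 * mu)                 # utilization of each server
    lq = 2 * rho**3 / (1 - rho**2)       # average number of customers waiting in line
    return lq / lam + 1 / mu             # average wait plus average service time

print(mm2_time_in_system(1.5, 1.0))      # about 2.29, close to the simulated mean above
```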

This comparison shows that the simulation and analysis are consistent. It also demonstrates one of the features of simulation: it is easy to quantify not just the average we expect but also the variation around the average.

That capability turns out to be useful for this problem because the difference between the one-queue and two-queue scenarios is small compared to the variation, which suggests the advantage would go unnoticed in practice.

I conclude:

The two configurations are equally good as long as both servers are busy; the only time two lines is worse is if one queue is empty and the other contains more than one customer. In real life, if we allow customers to change lanes, that disadvantage can be eliminated.

From a theoretical point of view, one line is better. From a practical point of view, the difference is small and can be mitigated. So the best choice depends on practical considerations.

On the other hand, you can do substantially better with an express line for customers with short service times. But that’s a topic for another case study.

Don’t discard that pangolin

Sadly, today is my last day at DrivenData, so it’s a good time to review one of the projects I’ve been working on, using probabilistic predictions from Zamba to find animals in camera trap videos, like this:

Zamba is one of the longest-running projects at DrivenData. You can read about it in this blog post: Computer vision for wildlife monitoring in a changing climate.

And if you want to know more about my part of it, I wrote this series of articles.

Most recently, I’ve been working on calibrating the predictions from convolutional neural networks (CNNs). I haven’t written about that work yet, but I’m giving a talk about it at ODSC East 2023:

Don’t Discard That Pangolin, Calibrate Your Deep Learning Classifier 

Suppose you are an ecologist studying a rare species like a pangolin. You can use motion-triggered camera traps to collect data about the presence and abundance of species in the wild, but for every video showing a pangolin, you have 100 that show other species, and 100 more that are blank. You might have to watch hours of video to find one pangolin.

Deep learning can help. Project Zamba provides models that classify camera trap videos and identify the species that appear in them. Of course, the results are not perfect, but we can often remove 80% of the videos we don’t want while losing only 10-20% of the videos we want.

But there’s a problem. The output from deep learning classifiers is generally a “confidence score”, not a probability. If a classifier assigns a label with 80% confidence, that doesn’t mean there is an 80% chance it is correct. However, with a modest number of human-generated labels, we can often calibrate the output to produce more accurate probabilities, and make better predictions.

In this talk, I’ll present use cases based on data from Africa, Guam, and New Zealand, and show how we can use deep learning and calibration to save the pangolin… or at least the pangolin videos. This real-world problem shows how users of ML models can tune the results to improve performance on their applications.  
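To make the calibration idea concrete, here is a minimal sketch of one common approach, Platt scaling, which fits a logistic regression that maps confidence scores to probabilities. This is an illustration with synthetic scores and labels, using scikit-learn; it is not Zamba code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic example: confidence scores from a hypothetical classifier, plus a
# modest number of human-verified labels; the scores are deliberately overconfident.
scores = rng.uniform(0, 1, 500)
labels = rng.uniform(0, 1, 500) < scores**2

# Platt scaling: logistic regression from confidence score to probability of being correct
calibrator = LogisticRegression()
calibrator.fit(scores.reshape(-1, 1), labels)

# A confidence score of 0.8 does not correspond to an 80% chance of a pangolin
for score in [0.5, 0.8, 0.95]:
    prob = calibrator.predict_proba([[score]])[0, 1]
    print(f"confidence {score:.2f} -> calibrated probability {prob:.2f}")
```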

The ODSC schedule isn’t posted yet, but I’ll fill in the details later.