{"id":1686,"date":"2025-12-26T20:46:29","date_gmt":"2025-12-26T20:46:29","guid":{"rendered":"https:\/\/www.allendowney.com\/blog\/?p=1686"},"modified":"2025-12-28T18:45:27","modified_gmt":"2025-12-28T18:45:27","slug":"the-raven-paradox","status":"publish","type":"post","link":"https:\/\/www.allendowney.com\/blog\/2025\/12\/26\/the-raven-paradox\/","title":{"rendered":"The Raven Paradox"},"content":{"rendered":"\n<p>Suppose you are not sure whether all ravens are black. If you see a white raven, that clearly refutes the hypothesis. And if you see a black raven, that supports the hypothesis in the sense that it increases your confidence, if only slightly. But what if you see a red apple \u2013 does that make the hypothesis any more or less likely?<\/p>\n\n\n\n<p>This question is the core of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Raven_paradox\">Raven paradox<\/a>, a problem in the philosophy of science posed by Carl Gustav Hempel in the 1940s. It highlights a counterintuitive aspect of how we evaluate evidence and confirm hypotheses.<\/p>\n\n\n\n<p>No resolution of the paradox is universally accepted, but the most widely endorsed is what I will call the standard Bayesian response. 
In this article, I\u2019ll present this response, explain why I think it is incomplete, and propose an extension that might resolve the paradox.<\/p>\n\n\n\n<p><a href=\"https:\/\/colab.research.google.com\/github\/AllenDowney\/ThinkBayes2\/blob\/master\/examples\/raven.ipynb\">Click here to run this notebook on Colab<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Problem<\/h2>\n\n\n\n<p>The paradox starts with the hypothesis<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>A: All ravens are black<\/p>\n<\/blockquote>\n\n\n\n<p>And the contrapositive hypothesis<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>B: All non-black things are non-ravens<\/p>\n<\/blockquote>\n\n\n\n<p>Logically, these hypotheses are identical \u2013 if A is true, B must be true, and vice versa. So if we have a certain level of confidence in A, we should have exactly the same confidence in B. And if we observe evidence in favor of A, we should also accept it as evidence in favor of B, to the same degree.<\/p>\n\n\n\n<p>Also, if we accept that a black raven is evidence in favor of A, we should also accept that a non-black non-raven is evidence in favor of B.<\/p>\n\n\n\n<p>Finally, if a non-black non-raven is evidence in favor of B, we should also accept that it is evidence in favor of A.<\/p>\n\n\n\n<p>Therefore, a red apple (which is a non-black non-raven) is evidence that all ravens are black.<\/p>\n\n\n\n<p>If you accept this conclusion, it seems like every time you see a red apple (or a blue car, or a green leaf, etc.) 
you should think, \u201cNow I am slightly more confident that all ravens are black\u201d.<\/p>\n\n\n\n<p>But that seems absurd, so we have two options:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Discover an error in the argument, or<\/li>\n\n\n\n<li>Accept the conclusion.<\/li>\n<\/ol>\n\n\n\n<p>As you might expect, many versions of (1) and (2) have been proposed.<\/p>\n\n\n\n<p>The standard Bayesian response is to accept the conclusion but, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Raven_paradox#Standard_Bayesian_solution\">quoth Wikipedia<\/a>, \u201cargue that the amount of confirmation provided is very small, due to the large discrepancy between the number of ravens and the number of non-black objects. According to this resolution, the conclusion appears paradoxical because we intuitively estimate the amount of evidence provided by the observation of a green apple to be zero, when it is in fact non-zero but extremely small.\u201d<\/p>\n\n\n\n<p>It is true that when the number of non-ravens is large, the amount of evidence we get from each non-black non-raven is so small it is negligible. But I don\u2019t think that\u2019s <em>why<\/em> the conclusion is so acutely counterintuitive.<\/p>\n\n\n\n<p>To clarify my objection, let me present a smaller example I\u2019ll call the Roulette Paradox.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Roulette Paradox<\/h2>\n\n\n\n<p>An American roulette wheel has 36 pockets with the numbers <code>1<\/code> to <code>36<\/code>, and two pockets labeled <code>0<\/code> and <code>00<\/code>. The non-zero pockets are red or black, and the zero pockets are green.<\/p>\n\n\n\n<p>Suppose we work in quality control at the roulette factory and our job is to check that all zero pockets are green. If we observe a green zero, that\u2019s evidence that all zeros are green. 
But what if we observe a red 19?<\/p>\n\n\n\n<p>In this example, the standard Bayesian response fails:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>First, the number of non-zeros is not particularly large, so the weight of the evidence is not negligible.<\/li>\n\n\n\n<li>Also, the Bayesian response doesn\u2019t address what I think is actually the key: The non-green non-zero <strong>may or may not be evidence<\/strong>, depending on how it was sampled.<\/li>\n<\/ul>\n\n\n\n<p>As I will demonstrate,<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>If we choose a pocket at random and it turns out to be a non-green non-zero, that <em>is not<\/em> evidence that all zeros are green.<\/li>\n\n\n\n<li>But if we choose a non-green pocket and it turns out to be non-zero, that <em>is<\/em> evidence that all zeros are green.<\/li>\n<\/ol>\n\n\n\n<p>In both cases we observe a non-green non-zero, but \u201cobserve\u201d is ambiguous. Whether the observation is evidence or not <strong>depends on the sampling process<\/strong> that generated the observation. And I think confusion between these two scenarios is the foundation of the paradox.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Setup<\/h2>\n\n\n\n<p>Let\u2019s get into the details. Switching from roulette back to ravens, we will consider four scenarios:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>You choose a random thing and it turns out to be a black raven.<\/li>\n\n\n\n<li>You choose a random thing and it turns out to be a non-black non-raven.<\/li>\n\n\n\n<li>You choose a random raven and it turns out to be black.<\/li>\n\n\n\n<li>You choose a random non-black thing and it turns out to be a non-raven.<\/li>\n<\/ol>\n\n\n\n<p>The key to the raven paradox is the difference between scenarios 2 and 4.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Scenario 2 is what most people imagine when they picture \u201cobserving a red apple\u201d. 
And in this scenario, the red apple is irrelevant, exactly as intuition insists.<\/li>\n\n\n\n<li>In Scenario 4, a red apple is evidence in favor of A, because we\u2019re systematically checking non-black things to ensure they\u2019re not ravens \u2013 so finding they aren\u2019t is confirmation. But this sampling process is a more contrived interpretation of \u201cobserving a red apple\u201d.<\/li>\n<\/ul>\n\n\n\n<p>The reason for the paradox is that we imagine Scenario 2 and we are given the conclusion from Scenario 4.<\/p>\n\n\n\n<p>It might not be obvious why the red apple is evidence in Scenario 4, but not Scenario 2. I think it will be clearer if we do the math.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Math<\/h2>\n\n\n\n<p>We\u2019ll start with a small world where there are only <code>N = 9<\/code> ravens and <code>M = 19<\/code> non-ravens. Then we\u2019ll see what happens as we vary <code>N<\/code> and <code>M<\/code>.<\/p>\n\n\n\n<p>I\u2019ll use <code>i<\/code> to represent the unknown number of black ravens, which could be any value from <code>0<\/code> to <code>N<\/code>, and <code>j<\/code> to represent the unknown number of black non-ravens, from <code>0<\/code> to <code>M<\/code>.<\/p>\n\n\n\n<p>We\u2019ll use a joint distribution to represent beliefs about <code>i<\/code> and <code>j<\/code>; then we\u2019ll use Bayes\u2019s Theorem to update these beliefs when we see new data.<\/p>\n\n\n\n<p>Let\u2019s start with a uniform prior over all possible combinations of <code>(i, j)<\/code>. For this prior, the probability of <code>A<\/code> is 10%. 
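This setup can be sketched in a few lines (my illustration, assuming `numpy`; not the notebook's actual code):

```python
import numpy as np

# Small world from the text: N ravens, M non-ravens.
N, M = 9, 19

# Joint prior over (i, j): i = number of black ravens (rows, 0..N),
# j = number of black non-ravens (columns, 0..M), uniform over all pairs.
prior = np.full((N + 1, M + 1), 1.0)
prior /= prior.sum()

# Hypothesis A ("all ravens are black") is the row where i == N.
p_A = prior[N].sum()
print(p_A)  # approximately 0.1
```

With `N = 9` there are ten equally likely values of `i`, so the prior probability that `i = N` is 10%, matching the text.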
We\u2019ll see later that the prior affects the strength of the evidence, but it doesn\u2019t affect whether an observation is in favor of <code>A<\/code> or not.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Scenario 1<\/h2>\n\n\n\n<p>Now let\u2019s consider the first scenario: we choose a thing at random from the universe of things, and we find that it is a black raven.<\/p>\n\n\n\n<p>The likelihood for this observation is: <code>i \/ (N + M)<\/code>, because <code>i<\/code> is the number of black ravens and <code>N + M<\/code> is the total number of things. <\/p>\n\n\n\n<p>In this scenario the posterior probability of A is 20%. The posterior probability is higher than the prior, so the black raven is evidence in favor of <code>A<\/code>.<\/p>\n\n\n\n<p>To quantify the strength of the evidence, we\u2019ll use the log odds ratio, which is 0.81. Later we\u2019ll see how the strength of the evidence depends on the prior distribution of <code>i<\/code> and <code>j<\/code>.<\/p>\n\n\n\n<p>Before we go on, let\u2019s also look at the marginal distribution of <code>i<\/code> (number of black ravens) before and after.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"292\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/73acb5dc58b1e987037e9ef21aff0c9bc014e6a3f841f015b7365f9352d648c7.png\" alt=\"_images\/73acb5dc58b1e987037e9ef21aff0c9bc014e6a3f841f015b7365f9352d648c7.png\" class=\"wp-image-1695\" srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/73acb5dc58b1e987037e9ef21aff0c9bc014e6a3f841f015b7365f9352d648c7.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/73acb5dc58b1e987037e9ef21aff0c9bc014e6a3f841f015b7365f9352d648c7-300x198.png 300w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/73acb5dc58b1e987037e9ef21aff0c9bc014e6a3f841f015b7365f9352d648c7-409x270.png 409w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" 
\/><\/figure>\n\n\n\n<p>As expected, observing a black raven increases our confidence that all ravens are black. The posterior distribution shifts toward higher values of <code>i<\/code>, and the probability that <code>i = N<\/code> increases.<\/p>\n\n\n\n<p>In Scenario 1, the likelihood depends only on <code>i<\/code>, not on <code>j<\/code>, so the update doesn\u2019t change our beliefs about <code>j<\/code> (the number of black non-ravens).<\/p>\n\n\n\n<p>Finally, let\u2019s visualize the posterior joint distribution of <code>i<\/code> and <code>j<\/code>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"262\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/3f8f1dc012592d11ac19f20c5698984fc6134a93c7eeec35c1ba1aed5913a1a2.png\" alt=\"_images\/3f8f1dc012592d11ac19f20c5698984fc6134a93c7eeec35c1ba1aed5913a1a2.png\" class=\"wp-image-1697\" srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/3f8f1dc012592d11ac19f20c5698984fc6134a93c7eeec35c1ba1aed5913a1a2.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/3f8f1dc012592d11ac19f20c5698984fc6134a93c7eeec35c1ba1aed5913a1a2-300x178.png 300w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" \/><\/figure>\n\n\n\n<p>Because we started with a uniform distribution and the data has no bearing on <code>j<\/code>, the joint posterior probabilities don\u2019t depend on <code>j<\/code>.<\/p>\n\n\n\n<p>In summary, Scenario 1 is consistent with intuition: a black raven is evidence in favor of <code>A<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Scenario 2<\/h2>\n\n\n\n<p>In this scenario, we choose a thing at random from the universe of <code>N + M<\/code> things, and it turns out to be a red apple \u2013 which we will treat generally as a non-black non-raven.<\/p>\n\n\n\n<p>The likelihood of this observation is: <code>(M - j) \/ (N + M)<\/code>, because <code>M - j<\/code> is the number of 
non-black non-ravens and <code>N + M<\/code> is the total number of things.<\/p>\n\n\n\n<p>In this scenario, the posterior probability of <code>A<\/code> is the same as the prior. In fact, the entire distribution of <code>i<\/code> is unchanged.<\/p>\n\n\n\n<p>So the red apple is not evidence in favor of <code>A<\/code> or against it. This is consistent with the intuition that the red apple (or any non-black non-raven) is irrelevant.<\/p>\n\n\n\n<p>However, the red apple is evidence about <code>j<\/code>, as we can confirm by comparing the marginal distribution of <code>j<\/code> before and after.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"292\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/b99c195fe739e438b9f9d21fbbfc7e12e264b93bff2de276ae0dbe7effc395dc.png\" alt=\"_images\/b99c195fe739e438b9f9d21fbbfc7e12e264b93bff2de276ae0dbe7effc395dc.png\" class=\"wp-image-1694\" srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/b99c195fe739e438b9f9d21fbbfc7e12e264b93bff2de276ae0dbe7effc395dc.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/b99c195fe739e438b9f9d21fbbfc7e12e264b93bff2de276ae0dbe7effc395dc-300x198.png 300w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/b99c195fe739e438b9f9d21fbbfc7e12e264b93bff2de276ae0dbe7effc395dc-409x270.png 409w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" \/><\/figure>\n\n\n\n<p>And here\u2019s the posterior joint distribution of <code>i<\/code> and <code>j<\/code>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"262\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/de8e2c353c1109b5324bd83ac12ec0262d1267021a2736b91d8f83ec641bc11d.png\" alt=\"_images\/de8e2c353c1109b5324bd83ac12ec0262d1267021a2736b91d8f83ec641bc11d.png\" class=\"wp-image-1696\" 
srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/de8e2c353c1109b5324bd83ac12ec0262d1267021a2736b91d8f83ec641bc11d.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/de8e2c353c1109b5324bd83ac12ec0262d1267021a2736b91d8f83ec641bc11d-300x178.png 300w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" \/><\/figure>\n\n\n\n<p>Because the red apple has no bearing on <code>i<\/code>, the posterior probabilities in this scenario don\u2019t depend on <code>i<\/code>.<\/p>\n\n\n\n<p>In summary, Scenario 2 matches our intuition: a red apple (chosen at random) is <em>not<\/em> evidence about whether all ravens are black.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Scenario 3<\/h2>\n\n\n\n<p>In this scenario, we choose a raven first and then observe that it is black.<\/p>\n\n\n\n<p>The likelihood for this observation is: <code>i \/ N<\/code>, because <code>i<\/code> is the number of black ravens and <code>N<\/code> is the total number of ravens.<\/p>\n\n\n\n<p>In this scenario, the posterior probability of A is 20%, the same as in Scenario 1. So we conclude that the black raven is evidence in favor of <code>A<\/code>, with the same strength regardless of whether we are in:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Scenario 1: Select a random thing and it turns out to be a black raven, or<\/li>\n\n\n\n<li>Scenario 3: Select a random raven and it turns out to be black.<\/li>\n<\/ul>\n\n\n\n<p>In fact, the entire posterior distribution is the same in both scenarios. 
That\u2019s because the likelihoods in Scenarios 1 and 3 differ only by a constant factor, which is removed when the posterior distributions are normalized.<\/p>\n\n\n\n<p>In summary, Scenario 3 is consistent with intuition: if we choose a raven and find that it is black, that is evidence in favor of <code>A<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Scenario 4<\/h2>\n\n\n\n<p>In the last scenario, we first choose a non-black thing (from all non-black things in the universe), and then observe that it is a non-raven.<\/p>\n\n\n\n<p>The likelihood of this observation is: <code>(M - j) \/ (N - i + M - j)<\/code>, because <code>M - j<\/code> is the number of non-black non-ravens and <code>N - i + M - j<\/code> is the total number of non-black things.<\/p>\n\n\n\n<p>This likelihood <strong>depends on both<\/strong> <code>i<\/code> and <code>j<\/code>, unlike Scenario 2. This is the key difference that makes Scenario 4 informative about whether all ravens are black.<\/p>\n\n\n\n<p>The posterior probability of A is about 15%, which is greater than the prior, so the non-black non-raven is evidence in favor of <code>A<\/code>. The log odds ratio is about 0.46, which is smaller than in Scenarios 1 and 3, because there are more non-ravens than ravens. 
As we\u2019ll see, the strength of the evidence gets smaller as <code>M<\/code> gets bigger.<\/p>\n\n\n\n<p>Here is the marginal distribution of <code>i<\/code> (number of black ravens) before and after.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"292\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/90ae0b51191d2f3b068cc9a4d382e172f23d8ec19a5708c3e5f51677d0b966cd.png\" alt=\"_images\/90ae0b51191d2f3b068cc9a4d382e172f23d8ec19a5708c3e5f51677d0b966cd.png\" class=\"wp-image-1691\" srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/90ae0b51191d2f3b068cc9a4d382e172f23d8ec19a5708c3e5f51677d0b966cd.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/90ae0b51191d2f3b068cc9a4d382e172f23d8ec19a5708c3e5f51677d0b966cd-300x198.png 300w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/90ae0b51191d2f3b068cc9a4d382e172f23d8ec19a5708c3e5f51677d0b966cd-409x270.png 409w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" \/><\/figure>\n\n\n\n<p>And here\u2019s the marginal distribution of <code>j<\/code> (number of black non-ravens) before and after.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"292\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/f7b88c2307331de2664b17bd98bdd246a61d436d1f5fc263d05e80ea706ce875.png\" alt=\"_images\/f7b88c2307331de2664b17bd98bdd246a61d436d1f5fc263d05e80ea706ce875.png\" class=\"wp-image-1692\" srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/f7b88c2307331de2664b17bd98bdd246a61d436d1f5fc263d05e80ea706ce875.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/f7b88c2307331de2664b17bd98bdd246a61d436d1f5fc263d05e80ea706ce875-300x198.png 300w, 
https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/f7b88c2307331de2664b17bd98bdd246a61d436d1f5fc263d05e80ea706ce875-409x270.png 409w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" \/><\/figure>\n\n\n\n<p>Finally, here\u2019s the posterior joint distribution of <code>i<\/code> and <code>j<\/code>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"442\" height=\"262\" src=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/83ec65bdd92130d86aad0ff90c7b99232096fffa2e14c5ec96647f66e8137d0c.png\" alt=\"_images\/83ec65bdd92130d86aad0ff90c7b99232096fffa2e14c5ec96647f66e8137d0c.png\" class=\"wp-image-1693\" srcset=\"https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/83ec65bdd92130d86aad0ff90c7b99232096fffa2e14c5ec96647f66e8137d0c.png 442w, https:\/\/www.allendowney.com\/blog\/wp-content\/uploads\/2025\/12\/83ec65bdd92130d86aad0ff90c7b99232096fffa2e14c5ec96647f66e8137d0c-300x178.png 300w\" sizes=\"auto, (max-width: 442px) 100vw, 442px\" \/><\/figure>\n\n\n\n<p>In Scenario 4, the likelihood depends on <strong>both<\/strong> <code>i<\/code> and <code>j<\/code>, so the update changes our beliefs about both parameters.<\/p>\n\n\n\n<p>And in Scenario 4 a non-black non-raven (chosen from non-black things) is evidence in favor of <code>A<\/code>. This might still be surprising, but let me suggest a way to think about it: in this scenario we are checking non-black things to make sure they are not ravens. If we find a non-black raven, that contradicts <code>A<\/code>. If we don\u2019t, that supports <code>A<\/code>.<\/p>\n\n\n\n<p>In all four scenarios, the results are consistent with intuition. So as long as you are clear about which scenario you are in, there is no paradox. 
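To make the four likelihoods concrete, here is a sketch of all four updates (my code, assuming `numpy`; the 0\/0 cell in Scenario 4, where there are no non-black things at all, is treated as probability zero):

```python
import numpy as np

# The four scenarios in the small world: N = 9 ravens, M = 19 non-ravens,
# i black ravens (rows), j black non-ravens (columns), uniform joint prior.
N, M = 9, 19
i = np.arange(N + 1)[:, None]
j = np.arange(M + 1)[None, :]
prior = np.full((N + 1, M + 1), 1.0 / ((N + 1) * (M + 1)))

def posterior_prob_A(like):
    """Update the prior with a likelihood grid; return posterior P(i == N)."""
    post = prior * like
    post /= post.sum()
    return post[N].sum()

ones = np.ones((N + 1, M + 1))
den = (N - i) + (M - j)  # Scenario 4 denominator: total non-black things
likelihoods = {
    1: i / (N + M) * ones,        # random thing is a black raven
    2: (M - j) / (N + M) * ones,  # random thing is a non-black non-raven
    3: i / N * ones,              # random raven is black
    4: np.divide((M - j) * ones, den,  # random non-black thing is a non-raven
                 out=np.zeros_like(ones), where=den > 0),
}

post_A = {k: posterior_prob_A(v) for k, v in likelihoods.items()}
for k, p in post_A.items():
    print(f"Scenario {k}: posterior P(A) = {p:.6f}")
```

Scenarios 1 and 3 both give 0.2, Scenario 2 leaves the prior at 0.1, and Scenario 4 gives about 0.149, matching the values in the text.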
The paradox is only apparent if you think you are in Scenario 2 and you imagine the result from Scenario 4.<\/p>\n\n\n\n<p>In the context of the original problem:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>If you walk out of your house and the first thing you see is a red apple (or a blue car, or a green leaf), that has no bearing on whether ravens are black.<\/li>\n\n\n\n<li>But if you deliberately select a non-black thing and check whether it\u2019s a raven, and you find that it is not, that actually is evidence that all ravens are black \u2013 but consistent with the standard Bayesian response, it is so weak it is negligible.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">Successive updates<\/h2>\n\n\n\n<p>In these examples, we started with a uniform prior over all combinations of <code>i<\/code> and <code>j<\/code>. Of course that\u2019s not a realistic representation of what we believe about the world. So let\u2019s consider the effect of other priors.<\/p>\n\n\n\n<p>In general, different priors lead to different posterior distributions, and in this case they lead to different conclusions about the <em>strength<\/em> of the evidence. But they lead to the same conclusion about the <em>direction<\/em> of the evidence.<\/p>\n\n\n\n<p>To demonstrate, let\u2019s see what happens if we observe a series of black ravens (in Scenario 1 or 3). For simplicity, assume that we sample with replacement.<\/p>\n\n\n\n<p>To compute multiple updates, we start with the uniform prior and then use the posterior from each update as the prior for the next.<\/p>\n\n\n\n<p>This table shows the results in Scenario 1 (which is the same as in Scenario 3). 
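The update loop might look like this (a sketch under the same small-world assumptions; the helper `log_odds_ratio` and the loop structure are mine, not the notebook's):

```python
import numpy as np

# Repeated Scenario-1 updates, each posterior becoming the next prior.
# P(A) rises while each update's log odds ratio (LOR) shrinks.
N, M = 9, 19
i = np.arange(N + 1)[:, None]
like = i / (N + M) * np.ones((N + 1, M + 1))  # Scenario 1 likelihood

def log_odds_ratio(p_before, p_after):
    return np.log(p_after / (1 - p_after)) - np.log(p_before / (1 - p_before))

dist = np.full((N + 1, M + 1), 1.0 / ((N + 1) * (M + 1)))
rows = []
for iteration in range(10):
    p_prior = dist[N].sum()       # P(A) before the update
    dist = dist * like
    dist /= dist.sum()
    p_post = dist[N].sum()        # P(A) after the update
    rows.append((iteration, p_prior, p_post, log_odds_ratio(p_prior, p_post)))

for it, p0, p1, lor in rows:
    print(it, round(p0, 6), round(p1, 6), round(lor, 6))
```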
For each iteration, the table shows the prior and posterior probability of <code>A<\/code>, and the log odds ratio.<br><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Iteration<\/th><th>Prior<\/th><th>Posterior<\/th><th>LOR<\/th><\/tr><\/thead><tbody><tr><th>0<\/th><td>0.100000<\/td><td>0.200000<\/td><td>0.810930<\/td><\/tr><tr><th>1<\/th><td>0.200000<\/td><td>0.284211<\/td><td>0.462624<\/td><\/tr><tr><th>2<\/th><td>0.284211<\/td><td>0.360000<\/td><td>0.348307<\/td><\/tr><tr><th>3<\/th><td>0.360000<\/td><td>0.427901<\/td><td>0.284942<\/td><\/tr><tr><th>4<\/th><td>0.427901<\/td><td>0.488715<\/td><td>0.245274<\/td><\/tr><tr><th>5<\/th><td>0.488715<\/td><td>0.543171<\/td><td>0.218261<\/td><\/tr><tr><th>6<\/th><td>0.543171<\/td><td>0.591920<\/td><td>0.198796<\/td><\/tr><tr><th>7<\/th><td>0.591920<\/td><td>0.635551<\/td><td>0.184196<\/td><\/tr><tr><th>8<\/th><td>0.635551<\/td><td>0.674590<\/td><td>0.172914<\/td><\/tr><tr><th>9<\/th><td>0.674590<\/td><td>0.709512<\/td><td>0.163995<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>As we see more ravens, the posterior probability of <code>A<\/code> increases, but the LOR decreases \u2013 which means that each raven provides weaker evidence than the previous one. In the long run the LOR converges to a value greater than 0 (about 0.11), which means that each raven provides at least some additional evidence, even when the prior is far from the uniform distribution we started with.<\/p>\n\n\n\n<p>In the worst case, if the prior probability of <code>A<\/code> is <code>0<\/code> or <code>1<\/code>, nothing we observe can change those beliefs, so nothing is evidence for or against <code>A<\/code>. But there is no prior where a black raven provides evidence <em>against<\/em> <code>A<\/code>.<\/p>\n\n\n\n<p>[Proof: The likelihood of the observation is maximized when all ravens are black (i = N). 
Therefore, for any prior that gives non-zero probability to both A and its complement, the LOR is positive: these observations can never be evidence against A.]<\/p>\n\n\n\n<p>The following table shows the results in Scenario 4, where we select a non-black thing and check that it is not a raven.<br><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Iteration<\/th><th>Prior<\/th><th>Posterior<\/th><th>LOR<\/th><\/tr><\/thead><tbody><tr><th>0<\/th><td>0.100000<\/td><td>0.149403<\/td><td>0.457933<\/td><\/tr><tr><th>1<\/th><td>0.149403<\/td><td>0.201006<\/td><td>0.359272<\/td><\/tr><tr><th>2<\/th><td>0.201006<\/td><td>0.253991<\/td><td>0.302582<\/td><\/tr><tr><th>3<\/th><td>0.253991<\/td><td>0.307217<\/td><td>0.264273<\/td><\/tr><tr><th>4<\/th><td>0.307217<\/td><td>0.359496<\/td><td>0.235611<\/td><\/tr><tr><th>5<\/th><td>0.359496<\/td><td>0.409837<\/td><td>0.212911<\/td><\/tr><tr><th>6<\/th><td>0.409837<\/td><td>0.457528<\/td><td>0.194344<\/td><\/tr><tr><th>7<\/th><td>0.457528<\/td><td>0.502141<\/td><td>0.178860<\/td><\/tr><tr><th>8<\/th><td>0.502141<\/td><td>0.543477<\/td><td>0.165785<\/td><\/tr><tr><th>9<\/th><td>0.543477<\/td><td>0.581514<\/td><td>0.154644<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>The pattern is similar. Each non-black thing that turns out not to be a raven is weaker evidence than the previous one. But it is always in favor of <code>A<\/code> \u2013 in this scenario, there is no prior where a non-black non-raven is evidence against <code>A<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Varying <code>M<\/code><\/h2>\n\n\n\n<p>Finally, let\u2019s see how the strength of the evidence varies as we increase <code>M<\/code>, the number of non-ravens. 
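This sweep can be sketched as follows (my code, assuming `numpy`; as in Scenario 4 above, the 0\/0 case with no non-black things is treated as zero probability):

```python
import numpy as np

# Scenario 4 posterior P(A) as M, the number of non-ravens, grows; N fixed.
N = 9

def scenario4_posterior_A(M):
    i = np.arange(N + 1)[:, None]
    j = np.arange(M + 1)[None, :]
    prior = np.full((N + 1, M + 1), 1.0 / ((N + 1) * (M + 1)))
    num = (M - j) * np.ones((N + 1, M + 1))  # non-black non-ravens
    den = (N - i) + (M - j)                  # total non-black things
    like = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    post = prior * like
    post /= post.sum()
    return post[N].sum()

for M in [20, 50, 100, 200, 500, 1000]:
    print(M, round(scenario4_posterior_A(M), 6))
```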
The following table shows results in Scenario 4 for a range of values of <code>M<\/code>, holding the number of ravens constant at <code>N = 9<\/code>.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>M<\/th><th>Prior<\/th><th>Posterior<\/th><th>LOR<\/th><\/tr><\/thead><tbody><tr><th>20<\/th><td>0.1<\/td><td>0.147655<\/td><td>0.444110<\/td><\/tr><tr><th>50<\/th><td>0.1<\/td><td>0.124515<\/td><td>0.246875<\/td><\/tr><tr><th>100<\/th><td>0.1<\/td><td>0.114530<\/td><td>0.151946<\/td><\/tr><tr><th>200<\/th><td>0.1<\/td><td>0.108495<\/td><td>0.091022<\/td><\/tr><tr><th>500<\/th><td>0.1<\/td><td>0.104100<\/td><td>0.044751<\/td><\/tr><tr><th>1000<\/th><td>0.1<\/td><td>0.102331<\/td><td>0.025640<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>As <code>M<\/code> increases (more non-ravens in the universe), the strength of the evidence decreases. This is consistent with the standard Bayesian response, which notes that in a realistic scenario, the evidence is negligible.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The standard Bayesian response to the Raven paradox is correct in the sense that <em>if<\/em> a non-black non-raven is evidence that all ravens are black, it is extremely weak. But that doesn\u2019t explain why the roulette example \u2013 where the number of non-green non-zero pockets is relatively small \u2013 is still so contrary to intuition.<\/p>\n\n\n\n<p>I think a better explanation for the paradox is the ambiguity of the word \u201cobserve\u201d. 
If we are explicit about the sampling process that generates the observation, we find that a non-black non-raven may or may not be evidence that all ravens are black.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Scenario 2: If we choose a random thing and find that it is a non-black non-raven, that <em>is not<\/em> evidence.<\/li>\n\n\n\n<li>Scenario 4: If we choose a non-black thing and find that it is a non-raven, that <em>is<\/em> evidence.<\/li>\n<\/ul>\n\n\n\n<p>The first case is entirely consistent with intuition. The second case is less obvious, but if we consider smaller examples like a roulette wheel, and do the math, it can be reconciled with intuition.<\/p>\n\n\n\n<p>Confusion between these scenarios causes the apparent paradox, and clarity about the scenarios resolves it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Symmetry and Asymmetry<\/h2>\n\n\n\n<p>It might still seem strange that a black raven is always evidence for <code>A<\/code> and <code>B<\/code>, but a non-black non-raven may or may not be, depending on the sampling process. If <code>A<\/code> and <code>B<\/code> are logically identical, and a black raven supports <code>A<\/code>, it\u2019s still not clear why a non-black non-raven doesn\u2019t always support <code>B<\/code>.<\/p>\n\n\n\n<p>After all, if we start with <code>B<\/code>, we conclude that a non-black non-raven is always evidence for <code>B<\/code> (and <code>A<\/code>), and a black raven may or may not be. 
Where does this asymmetry come from?<\/p>\n\n\n\n<p>We broke the symmetry when we formulated \u201cAll ravens are black\u201d as \u201cOut of all ravens, how many are black?\u201d This formulation first divides the world into ravens and non-ravens, then asks how many in each group are black.<\/p>\n\n\n\n<p>Conversely, if we start with \u201cAll non-black things are non-ravens\u201d, we formulate it as \u201cOut of all non-black things, how many are ravens?\u201d In this formulation, we divide the world into black and non-black things, then ask how many in each group are ravens.<\/p>\n\n\n\n<p>The asymmetry is apparent when we parameterize the models. If we start with A, we define <code>i<\/code> to be the number of ravens that are black. And we find that in Scenario 1, the likelihood of a black raven depends on <code>i<\/code>, and in Scenario 2, the likelihood of a non-black non-raven does not.<\/p>\n\n\n\n<p>If we start with <code>B<\/code>, we define <code>i<\/code> to be the number of non-black things that are non-ravens. Then in Scenario 1 we find that a non-black non-raven pertains to <code>i<\/code>, but a black raven does not.<\/p>\n\n\n\n<p>So the symmetry is broken when we formulate the hypothesis in a way that is testable with data. In propositional logic, <code>A<\/code> and <code>B<\/code> are equivalent in the sense that evidence for one must be evidence for the other. 
In the Bayesian formulation, \u201cHow many ravens are black?\u201d and \u201cHow many non-black things are non-ravens?\u201d are not equivalent; evidence for one is not necessarily evidence for the other.<\/p>\n\n\n\n<p>A critic might say that the Bayesian formulation is a non-resolution \u2013 that is, it doesn\u2019t solve the original problem posed by Hempel; it only solves a related problem by making additional assumptions.<\/p>\n\n\n\n<p>A Bayesian response is that the Raven Paradox is only problematic in the abstract world of propositional logic; as soon as we formulate the question in a way that connects it to the real world through observation, it disappears. So the Raven Paradox is similar to the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Principle_of_explosion\">principle of explosion<\/a> \u2013 it demonstrates a brittleness in propositional logic that makes it unsuitable for reasoning about many real-world hypotheses.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Related Reading<\/h2>\n\n\n\n<p>I am not the first to notice that the interpretation of evidence depends on a model of the data-generating process. In the context of the Raven Problem, Richard Royall <a href=\"https:\/\/www.taylorfrancis.com\/books\/mono\/10.1201\/9780203738665\/statistical-evidence-tibshirani-richard-royall\">wrote<\/a>:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>We see that the observation of a red pencil can be evidence that all ravens are black. To make the proper interpretation, we must have an additional piece of information. Whether the observation is or is not evidence supporting the hypothesis (A) that all ravens are black versus the hypothesis (B) that only a fraction \u2026 are black is determined by the sampling procedure. 
A randomly selected pencil that proves to be red is not evidence that all ravens are black, but a randomly selected red object that proves to be a pencil is.<\/p>\n<\/blockquote>\n\n\n\n<p>This analysis appears in an appendix of <em>Statistical Evidence: A Likelihood Paradigm<\/em>, first published in 1997. I found it in a footnote of <a href=\"https:\/\/www.researchgate.net\/profile\/Prasanta-Bandyopadhyay-4\/publication\/317098249_Belief_Evidence_and_Uncertainty\/links\/5925c584aca27295a8eee52d\/Belief-Evidence-and-Uncertainty.pdf\"><em>Belief, Evidence, and Uncertainty: Problems of Epistemic Inference<\/em><\/a>, published in 2016:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Royall in his commentary on the Raven Paradox \u2026 observes that how one got the white shoes is inferentially important. If you grabbed a non-raven object at random, then it does not bear on the question of whether all ravens are black. If on the other hand you grabbed a random non-black object, and it turned out to be a pair of shoes, then it provides a very tiny amount of evidence for the hypothesis that all ravens are black \u2026<\/p>\n<\/blockquote>\n\n\n\n<p>Royall is right that the sampling process determines whether a red pencil (or white shoe) is evidence about ravens, and he analyzes a version of what I\u2019m calling Scenario 4. But I don\u2019t think his analysis quite explains why the paradox feels so counterintuitive, and it seems to have had little impact on the discussion of the Raven paradox in the confirmation theory literature.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Here is a <a href=\"https:\/\/allendowney.github.io\/ThinkBayes2\/raven.html\">longer version of this article<\/a> that includes all of the code and a list of objections with my responses. 
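<\/p>\n\n\n\n<p>As a postscript to Royall's point: his two sampling procedures can be compared directly with a toy likelihood-ratio calculation. The object counts and the two candidate hypotheses below are invented for illustration; they are not Royall's numbers:<\/p>\n\n\n\n

```python
# Toy world for Royall's two procedures (counts invented for illustration).
# Hypothesis A: all 100 ravens are black.
# Hypothesis B: 99 ravens are black and 1 raven is red.
n_pencils = 1000   # red pencils, the same set under A and B

# Procedure 1: select a pencil at random; it proves to be red.
# The pencil population is identical under A and B, so the
# likelihood ratio is 1 and the observation carries no evidence.
lr_pencil = 1.0 / 1.0

# Procedure 2: select a red object at random; it proves to be a pencil.
# Under A no raven is red; under B one red raven joins the pool.
p_pencil_given_red_A = n_pencils / n_pencils         # 1.0
p_pencil_given_red_B = n_pencils / (n_pencils + 1)
lr_red_object = p_pencil_given_red_A / p_pencil_given_red_B

print(lr_pencil)       # 1.0: no evidence either way
print(lr_red_object)   # about 1.001: slight evidence for A
```

\n\n\n\n<p>Under the first procedure the likelihood ratio is exactly 1; under the second it is slightly greater than 1, matching Royall's \u201cvery tiny amount of evidence\u201d.<\/p>\n\n\n\n<p>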
You can also <a href=\"https:\/\/colab.research.google.com\/github\/AllenDowney\/ThinkBayes2\/blob\/master\/examples\/raven.ipynb\">click here to run the notebook on Colab<\/a>.<\/p>\n","protected":false}}