<h1 class="wp-block-heading">Emitter Detector Redux</h1>

<p><em>Allen Downey, Probably Overthinking It, September 5, 2021 &#8212; <a href="https://www.allendowney.com/blog/2021/09/05/emitter-detector-redux/">original post</a></em></p>

<p>In the first edition of <em>Think Bayes</em>, I presented what I called the <a href="https://www.greenteapress.com/thinkbayes/html/thinkbayes015.html#sec130" target="_blank" rel="noreferrer noopener">Geiger counter problem</a>, which is based on an example in Jaynes, <em>Probability Theory</em>. But I was not satisfied with my solution or the way I explained it, so I cut it from the second edition.</p>

<p>I am re-reading Jaynes now, following the <a href="https://www.youtube.com/watch?v=rfKS69cIwHc&amp;list=PL9v9IXDsJkktefQzX39wC2YG07vw7DsQ_" target="_blank" rel="noreferrer noopener">excellent series of videos by Aubrey Clayton</a>, and this problem came back to haunt me.
On my second attempt, I have a solution that is much clearer, and I think I can explain it better.</p>

<p>I&#8217;ll outline the solution here, but for all of the details, you can <a href="https://allendowney.github.io/ThinkBayes2/radiation.html" target="_blank" rel="noreferrer noopener">read the bonus chapter</a>, or <a href="https://colab.research.google.com/github/AllenDowney/ThinkBayes2/blob/master/examples/radiation.ipynb" target="_blank" rel="noreferrer noopener">click here to run the notebook on Colab</a>.</p>

<h1 class="wp-block-heading">The Emitter-Detector Problem</h1>

<p>Here&#8217;s the example from Jaynes, page 168:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>We have a radioactive source &#8230; which is emitting particles of some sort &#8230; There is a rate <em>p</em>, in particles per second, at which a radioactive nucleus sends particles through our counter; and each particle passing through produces counts at the rate <em>&theta;</em>. From measuring the number {c1, c2, &#8230;} of counts in different seconds, what can we say about the numbers {n1, n2, &#8230;} actually passing through the counter in each second, and what can we say about the strength of the source?</p></blockquote>

<p>As a model of the source, Jaynes suggests we imagine &#8220;<em>N</em> nuclei, each of which has independently the probability <em>r</em> of sending a particle through our counter in any one second&#8221;. If <em>N</em> is large and <em>r</em> is small, the number of particles emitted in a given second is well modeled by a Poisson distribution with parameter <em>s</em>=<em>Nr</em>, where <em>s</em> is the strength of the source.</p>

<p>As a model of the sensor, we&#8217;ll assume that &#8220;each particle passing through the counter has independently the probability <em>&phi;</em> of making a count&#8221;.
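Taken together, the source and the sensor define a two-stage generative model: a Poisson number of particles, each detected independently with probability <em>&phi;</em>. As a quick sanity check (this simulation is mine, not from the notebook, and the numbers are illustrative), thinning a Poisson source of strength <em>s</em> this way should yield counts with mean <em>s</em><em>&phi;</em>:</p>

<pre class="wp-block-preformatted">import numpy as np

rng = np.random.default_rng(42)
s, phi = 100, 0.1                  # illustrative source strength and efficiency

n = rng.poisson(s, size=100_000)   # particles emitted in each of many seconds
c = rng.binomial(n, phi)           # counts registered by the sensor

print(c.mean())                    # close to s * phi = 10</pre>

<p>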
So if we know the actual number of particles, <em>n</em>, and the efficiency of the sensor, <em>&phi;</em>, the distribution of the count is Binomial(<em>n</em>, <em>&phi;</em>).</p>

<p>With that, we are ready to solve the problem. Following Jaynes, I&#8217;ll start with a uniform prior for <em>s</em>, over a range of values wide enough to cover the region where the likelihood of the data is non-negligible. To represent distributions, I&#8217;ll use the <code>Pmf</code> class from <code>empiricaldist</code>. (The snippets below assume the usual imports: <code>numpy</code> as <code>np</code>, <code>pandas</code> as <code>pd</code>, <code>poisson</code> and <code>binom</code> from <code>scipy.stats</code>, and <code>Pmf</code> from <code>empiricaldist</code>.)</p>

<pre id="codecell0" class="wp-block-preformatted">ss = np.linspace(0, 350, 101)
prior_s = Pmf(1, ss)</pre>

<p>For each value of <em>s</em>, the distribution of <em>n</em> is Poisson, so we can form the joint prior of <em>s</em> and <em>n</em> using the <code>poisson</code> function from SciPy. The following function creates a Pandas <code>DataFrame</code> that represents the joint prior.</p>

<pre id="codecell3" class="wp-block-preformatted">def make_joint(prior_s, ns):
    ss = prior_s.qs
    S, N = np.meshgrid(ss, ns)
    ps = poisson(S).pmf(N) * prior_s.ps
    joint = pd.DataFrame(ps, index=ns, columns=ss)
    joint.index.name = 'n'
    joint.columns.name = 's'
    return joint</pre>

<p>The result is a DataFrame with one row for each value of <em>n</em> and one column for each value of <em>s</em>.</p>

<p>To update the prior, we need to compute the likelihood of the data for each pair of parameters. However, in this problem the likelihood of a given count depends only on <em>n</em>, regardless of <em>s</em>, so we only have to compute it once for each value of <em>n</em>. Then we multiply each column in the prior by this array of likelihoods.
The following function encapsulates this computation, normalizes the result, and returns the posterior distribution.</p>

<pre id="codecell8" class="wp-block-preformatted">def update(joint, phi, c):
    ns = joint.index
    likelihood = binom(ns, phi).pmf(c)
    posterior = joint.multiply(likelihood, axis=0)
    normalize(posterior)
    return posterior</pre>

<p>(Here <code>normalize</code> and, below, <code>marginal</code> are small helpers defined in the notebook: one divides the joint distribution through by the total probability of the data, the other sums along one axis to extract a marginal distribution.)</p>

<h2 class="wp-block-heading">First update</h2>

<p>Let&#8217;s test the update function with the first example, on page 178 of <em>Probability Theory</em>:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>During the first second, <code>c1 = 10</code> counts are registered. What can [we] say about the number <code>n1</code> of particles?</p></blockquote>

<p>Here&#8217;s the update, starting from the joint prior, <code>joint = make_joint(prior_s, ns)</code>:</p>

<pre id="codecell9" class="wp-block-preformatted">c1 = 10
phi = 0.1
posterior = update(joint, phi, c1)</pre>

<p>The following figures show the posterior marginal distributions of <em>s</em> and <em>n</em>.</p>

<pre id="codecell11" class="wp-block-preformatted">posterior_s = marginal(posterior, 0)</pre>

<figure class="wp-block-image"><img decoding="async" src="https://allendowney.github.io/ThinkBayes2/_images/radiation_24_1.png" alt="Posterior marginal distribution of s"/></figure>

<pre id="codecell13" class="wp-block-preformatted">posterior_n = marginal(posterior, 1)</pre>

<figure class="wp-block-image"><img decoding="async" src="https://allendowney.github.io/ThinkBayes2/_images/radiation_25_1.png" alt="Posterior marginal distribution of n"/></figure>

<p>The posterior mean of <em>n</em> is close to 109, which is consistent with Equation 6.116. The MAP is 99, one less than the analytic result in Equation 6.113, which is 100.
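The same computation can be reproduced with plain NumPy and SciPy, without the <code>Pmf</code> and <code>DataFrame</code> machinery (a sketch of mine, not from the notebook; the exact range of <code>ns</code> is not shown in this post, so the grid here is an assumption):</p>

<pre class="wp-block-preformatted">import numpy as np
from scipy.stats import poisson, binom

ss = np.linspace(0, 350, 101)   # grid for s, as in the post
ns = np.arange(351)             # grid for n (an assumption)
phi, c1 = 0.1, 10

# Joint prior: uniform over s, Poisson(s) over n; rows are n, columns are s
S, N = np.meshgrid(ss, ns)
joint = poisson(S).pmf(N)

# The likelihood of the observed count depends only on n
likelihood = binom(ns, phi).pmf(c1)
posterior = joint * likelihood[:, None]
posterior /= posterior.sum()

posterior_n = posterior.sum(axis=1)   # marginal distribution of n
print((ns * posterior_n).sum())       # posterior mean, close to 109
print(ns[posterior_n.argmax()])       # MAP, 99 or 100</pre>

<p>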
It looks like the posterior probabilities for 99 and 100 are the same, but the floating-point results differ slightly.</p>

<h2 class="wp-block-heading">Jeffreys prior</h2>

<p>Instead of a uniform prior for <em>s</em>, we can use a Jeffreys prior, in which the prior probability for each value of <em>s</em> is proportional to 1/<em>s</em>. This has the advantage of &#8220;invariance under certain changes of parameters&#8221;, which is &#8220;the only correct way to express complete ignorance of a scale parameter.&#8221; However, Jaynes suggests that it is not clear &#8220;whether <em>s</em> can properly be regarded as a scale parameter in this problem.&#8221; Nevertheless, he suggests we try it and see what happens. Here&#8217;s the Jeffreys prior for <em>s</em>.</p>

<pre id="codecell19" class="wp-block-preformatted">prior_jeff = Pmf(1/ss[1:], ss[1:])</pre>

<p>We can use it to compute the joint prior of <em>s</em> and <em>n</em>, and update it with <code>c1</code>.</p>

<pre id="codecell20" class="wp-block-preformatted">joint_jeff = make_joint(prior_jeff, ns)
posterior_jeff = update(joint_jeff, phi, c1)</pre>

<p>Here&#8217;s the marginal posterior distribution of <em>n</em>:</p>

<pre id="codecell21" class="wp-block-preformatted">posterior_n = marginal(posterior_jeff, 1)</pre>

<figure class="wp-block-image"><img decoding="async" src="https://allendowney.github.io/ThinkBayes2/_images/radiation_35_1.png" alt="Posterior marginal distribution of n under the Jeffreys prior"/></figure>

<p>The posterior mean is close to 100 and the MAP is 91; both are consistent with the results in Equation 6.122.</p>

<h2 class="wp-block-heading">Robot A</h2>

<p>Now we get to what I think is the most interesting part of this example, which is to take into account a second observation under two models of the scenario:</p>

<blockquote class="wp-block-quote is-layout-flow
wp-block-quote-is-layout-flow"><p>Two robots, [A and B], have different prior information about the source of the particles. The source is hidden in another room which A and B are not allowed to enter. A has no knowledge at all about the source of particles; for all [it] knows, &#8230; the other room might be full of little [people] who run back and forth, holding first one radioactive source, then another, up to the exit window. B has one additional qualitative fact: [it] knows that the source is a radioactive sample of long lifetime, in a fixed position.</p></blockquote>

<p>In other words, B has reason to believe that the source strength <em>s</em> is constant from one interval to the next, while A admits the possibility that <em>s</em> is different for each interval. The following figure, from Jaynes, represents these models graphically.</p>

<figure class="wp-block-image is-resized"><img loading="lazy" decoding="async" src="https://github.com/AllenDowney/ThinkBayes2/raw/master/examples/jaynes177.png" alt="Graphical models of the two scenarios, from Jaynes" width="462" height="207"/></figure>

<p>For A, the &#8220;different intervals are logically independent&#8221;, so the update with <code>c2 = 16</code> starts with the same prior.</p>

<pre id="codecell25" class="wp-block-preformatted">c2 = 16
posterior2 = update(joint, phi, c2)</pre>

<p>Here&#8217;s the posterior marginal distribution of <code>n2</code>.</p>

<figure class="wp-block-image"><img decoding="async" src="https://allendowney.github.io/ThinkBayes2/_images/radiation_41_1.png" alt="Posterior marginal distribution of n2"/></figure>

<p>The posterior mean is close to 169, which is consistent with the result in Equation 6.124. The MAP is 160, which is consistent with Equation 6.123.</p>

<h2 class="wp-block-heading">Robot B</h2>

<p>For B, the &#8220;logical situation&#8221; is different.
If we consider <em>s</em> to be constant, we can &#8211; and should! &#8211; take the information from the first update into account when we perform the second update. We can do that by using the posterior distribution of <em>s</em> from the first update to form the joint prior for the second update, like this:</p>

<pre id="codecell30" class="wp-block-preformatted">joint = make_joint(posterior_s, ns)
posterior = update(joint, phi, c2)
posterior_n = marginal(posterior, 1)</pre>

<figure class="wp-block-image"><img decoding="async" src="https://allendowney.github.io/ThinkBayes2/_images/radiation_45_1.png" alt="Posterior marginal distribution of n2 under B&#8217;s model"/></figure>

<p>The posterior mean of <em>n</em> is close to 137.5, which is consistent with Equation 6.134. The MAP is 132, one less than the analytic result, 133. But again, two adjacent values have the same probability except for floating-point error.</p>

<p>Under B&#8217;s model, the data from the first interval updates our belief about <em>s</em>, which influences what we believe about <code>n2</code>.</p>

<h2 class="wp-block-heading">Going the other way</h2>

<p>That might not seem surprising, but there is an additional point Jaynes makes with this example, which is that it also works the other way around: Having seen <code>c2</code>, we have more information about <em>s</em>, which means we can &#8211; and should!
&#8211; go back and reconsider what we concluded about <code>n1</code>.</p>

<p>We can do that by imagining we did the experiments in the opposite order:</p>

<ol class="wp-block-list"><li>Start again with a joint prior based on a uniform distribution for <em>s</em>,</li><li>Update it based on <code>c2</code>,</li><li>Use the posterior distribution of <em>s</em> to form a new joint prior,</li><li>Update it based on <code>c1</code>, and</li><li>Extract the marginal posterior for <code>n1</code>.</li></ol>

<pre id="codecell36" class="wp-block-preformatted">joint = make_joint(prior_s, ns)
posterior = update(joint, phi, c2)
posterior_s = marginal(posterior, 0)

joint = make_joint(posterior_s, ns)
posterior = update(joint, phi, c1)
posterior_n2 = marginal(posterior, 1)</pre>

<p>The posterior mean is close to 131.5, which is consistent with Equation 6.133. And the MAP is 126, one less than the result in Equation 6.132; again, two adjacent values have the same probability except for floating-point error.</p>

<p>Here&#8217;s what the new distribution of <code>n1</code> looks like compared to the original, which was based on <code>c1</code> only.</p>

<figure class="wp-block-image"><img decoding="async" src="https://allendowney.github.io/ThinkBayes2/_images/radiation_58_0.png" alt="Posterior distributions of n1 before and after seeing c2"/></figure>

<p>With the additional information from <code>c2</code>:</p>

<ul class="wp-block-list"><li>We give higher probability to large values of <em>s</em>, so we also give higher probability to large values of <code>n1</code>, and</li><li>The distribution is narrower, which shows that with more information about <em>s</em>, we have more information about <code>n1</code>.</li></ul>

<h2 class="wp-block-heading">Discussion</h2>

<p>This is one of several examples Jaynes uses to distinguish between &#8220;logical and causal dependence.&#8221; In this example,
causal dependence only goes in the forward direction: &#8220;<em>s</em> is the physical cause which partially determines <em>n</em>; and then <em>n</em> in turn is the physical cause which partially determines <em>c</em>&#8221;.</p>

<p>Therefore, <code>c1</code> and <code>c2</code> are causally independent: if the number of particles counted in one interval is unusually high (or low), that does not cause the number of particles during any other interval to be higher or lower.</p>

<p>But if <em>s</em> is unknown, they are not <em>logically</em> independent. For example, if <code>c1</code> is lower than expected, that implies that lower values of <em>s</em> are more likely, which implies that lower values of <code>n2</code> are more likely, which implies that lower values of <code>c2</code> are more likely.</p>

<p>And, as we&#8217;ve seen, it works the other way, too. For example, if <code>c2</code> is higher than expected, that implies that higher values of <em>s</em>, <code>n1</code>, and <code>c1</code> are more likely.</p>

<p>If you find the second result more surprising &#8211; that is, if you think it&#8217;s weird that <code>c2</code> changes what we believe about <code>n1</code> &#8211; that implies that you are not (yet) distinguishing between logical and causal dependence.</p>
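<p>For readers who want to check B&#8217;s numbers without the notebook, here is a NumPy-only sketch of the sequential update (this sketch is mine, not from the post; the grids and the range of <code>ns</code> are assumptions, and the book&#8217;s <code>marginal</code> helper is replaced by plain sums):</p>

<pre class="wp-block-preformatted">import numpy as np
from scipy.stats import poisson, binom

ss = np.linspace(0, 350, 101)   # grid for s (assumed)
ns = np.arange(351)             # grid for n (assumed)
phi, c1, c2 = 0.1, 10, 16
S, N = np.meshgrid(ss, ns)

def update(prior_s, c):
    # Joint over (n, s), weighted by the likelihood of the observed count
    joint = poisson(S).pmf(N) * prior_s
    joint = joint * binom(ns, phi).pmf(c)[:, None]
    return joint / joint.sum()

# Robot B: update with c1, then use the posterior of s as the prior for c2
post1 = update(np.ones_like(ss), c1)
post2 = update(post1.sum(axis=0), c2)
print((ns * post2.sum(axis=1)).sum())   # posterior mean of n2, close to 137.5

# Going the other way: condition on c2 first, then update with c1
post1 = update(np.ones_like(ss), c2)
post2 = update(post1.sum(axis=0), c1)
print((ns * post2.sum(axis=1)).sum())   # posterior mean of n1, close to 131.5</pre>

<p>Both numbers agree with the grid results reported above, which is a concrete way to see that observing <code>c2</code> really does change what we believe about <code>n1</code>.</p>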
f":"https:\/\/www.allendowney.com\/blog\/wp-json\/wp\/v2\/media?parent=674"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.allendowney.com\/blog\/wp-json\/wp\/v2\/categories?post=674"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.allendowney.com\/blog\/wp-json\/wp\/v2\/tags?post=674"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}