Section from Volume 3 – Reading Notebook #V: "The Intelligence of Intuition" (Gigerenzer 2023)
For new readers:
Please read the “Pinned Post” at the top of this Substack’s Home Page, titled Why Use Public Peer-Review to Write a Book? - “See for Yourself”.
For returning readers & subscribers:
This post takes a break from updating the “Terms-of-Art” definitions in the Glossary, and presents the first draft of a section from Volume 3 – Reading Notebook #V: From “Trust Them” to “Show Me”, and “See for Yourself” – The Fast & Slow Retractions of Behavioral Economics.
In the Workbooks from Volume 1 and the Handbooks from Volume 2, we learned from Rodolfo Llinas that our evolutionary adaptation to make intelligent inferences in the presence of incomplete and uncertain information suggests that “Brains” developed to manage “Motions” through “Predictions”. We also learned from Alfred Korzybski that we use conceptual representations, “Mind-Maps” based on “Cause & Effect”, to summarize the “Territory” of our uncertain “Task Environments”. Finally, we learned from Rachael Jack & Richard Prum that “Primary Emotions” emerged to prioritize “Observations” and “Predictions”, at the risk of runaway “Bubble Formation” in the “Mind”. This means that our objective experience of Nature “Red in Tooth & Claw” keeps our “Predictions” and “Motions” in check, while our subjective experience of our own “Mind-Maps” has no such constraint on “Mind Exuberance”. Unlike objective experience, which faces the external checks-and-balances of “Natural Selection”, subjective ideas & ideologies will go into “Bubble Formation” and over-reach past their range of usefulness. Keeping them in check requires individual effort, such as the “Constructive Skepticism” described in these Workbooks, Handbooks, and Reading Notebooks, to create internal checks-and-balances.
The Heuristics & Bias Program provides an example of subjective ideas & ideologies that have gone deep into “Bubble Formation”, and require individual effort to create internal checks-and-balances based on “Constructive Skepticism”:
In his 2023 book titled “The Intelligence of Intuition”, Gigerenzer shows how research designs behind key results of the Heuristics & Bias Program conflate “Logical Equivalence” with “Informational Equivalence” by failing to take “Task Environments” into consideration. This conflation creates “Feet of Clay” for this giant case of “Bubble Formation”, as shown below.
Using the example of intuitive “Perceptions” of “Randomness” - a central finding for the validation of recommendations such as “Nudging” from Behavioral Economics, Behavioral Finance, and their overall umbrella of the Heuristics & Bias Program - Gigerenzer illustrates how their research designs do not make the distinction between “Population Statistics” and “Sample Statistics”. As a result, they fail to see that their decision-criterion of “Logical Equivalence” only applies in the case of a specific “Task Environment” based on “Population Statistics”, and not in the case of real-life “Task Environments” based on “Sample Statistics”.
Let that sink in for a moment: something has gone wrong at the core of the apple, and the following example, which you can replicate yourself, should make it clear.
The research designs of the Heuristics & Bias Program, based on flipping a coin several times, and recording the patterns of Heads (“H”) & Tails (“T”) in order to test human intuitions about “Randomness”, use “Population Statistics” when the number of throws (“n”) equals the length of the observed sequence of throws (“k”). In this specific & unique case where n = 3, and k = 3, we observe eight cases ranging from (H, H, H) to (T, T, T). Each case has the same probability of 1/8, and the sequence (H, H, H) has the same probability as the sequence (H, H, T). Researchers from the Heuristics & Bias Program use such results from “Population Statistics” as evidence of an irrational bias when human subjects express their intuitive opinion that the sequence (H, H, T) is more likely to occur than the sequence (H, H, H). Note how this assertion of irrationality extends the range of applicability of the results from a specific case of “Population Statistics” to a general statement about “Sample Statistics”.
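To “See for Yourself”, here is a minimal Python sketch (my own illustration, not code from the book) that enumerates the n = k = 3 “Population Statistics” case and confirms that, taken as complete sequences, (H, H, H) and (H, H, T) each occur exactly once among the eight equiprobable outcomes, i.e. each with the same probability of 1/8.

```python
from itertools import product

# Population case: the number of throws n equals the observed sequence length k.
n = 3
population = list(product("HT", repeat=n))  # all 2**3 = 8 equiprobable full sequences

for target in [("H", "H", "H"), ("H", "H", "T")]:
    count = sum(1 for outcome in population if outcome == target)
    print(target, f"{count}/{len(population)} = {count / len(population):.3f}")
# Both lines print 1/8 = 0.125: as complete sequences, the two are equiprobable.
```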
The research designs of the “Fast & Frugal” Heuristics Program, based on flipping a coin several times, and recording the patterns of Heads (“H”) & Tails (“T”) in order to test human intuitions about “Randomness”, use “Sample Statistics” when the number of throws (“n”) becomes greater than the length of the observed sequence of throws (“k”). In the many cases where k < n < Infinity in general, and using n = 4 and k = 3 as an example in particular, we observe sixteen cases ranging from (H, H, H, H) to (T, T, T, T). Each case has the same probability of 1/16. However, the observed sequence (H, H, H) occurs in three of these cases while the observed sequence (H, H, T) occurs in four of these cases. This means that the observed, sample sequence (H, H, H) has a relative frequency of 3/16 = .19, and the observed, sample sequence (H, H, T) has a relative frequency of 4/16 = .25. Researchers from the “Fast & Frugal” Heuristics Program use such results from “Sample Statistics” as evidence that human intuition is correct when human subjects express their intuitive opinion that the sequence (H, H, T) is more likely to occur than the sequence (H, H, H). Note how this assertion of “Ecological Rationality” matches the applicability of the results to “Task Environments” defined by “Sample Statistics”.
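Here is the matching sketch for the “Sample Statistics” case (again my own illustration, not code from the book): with n = 4 throws and an observed pattern of length k = 3, counting the outcomes in which each pattern appears as a contiguous run reproduces the relative frequencies of 3/16 for (H, H, H) and 4/16 for (H, H, T).

```python
from itertools import product

def contains(sequence, pattern):
    """True if pattern appears as a contiguous run inside sequence."""
    k = len(pattern)
    return any(sequence[i:i + k] == pattern for i in range(len(sequence) - k + 1))

# Sample case: n = 4 throws, observed pattern length k = 3, so k < n.
n = 4
outcomes = list(product("HT", repeat=n))  # all 2**4 = 16 equiprobable full sequences

for pattern in [("H", "H", "H"), ("H", "H", "T")]:
    count = sum(1 for outcome in outcomes if contains(outcome, pattern))
    print(pattern, f"{count}/{len(outcomes)} = {count / len(outcomes):.2f}")
# (H, H, H) appears in 3/16 = 0.19 of the outcomes; (H, H, T) in 4/16 = 0.25.
```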
This example shows that before we pass judgment on the irrationality of intuition, we need to describe the matching “Task Environment”. Do we live in a risky & complete “Small World” where equiprobable “Population Statistics” apply, or do we live in an uncertain & incomplete “Large World” where the relative frequencies of specific samples of observations depart from equiprobability?
Humans make decisions with samples of “Observations” that require the use of “Sample Statistics” as opposed to the use of “Population Statistics”. Gigerenzer documents other cases to show how the Heuristics & Bias Program has a bias toward seeing individual biases in everything, and calls it the “Bias Bias”.
This “Bias Bias” explains the lack of reproducibility of famous results that, to the surprise of many, include the expected effects of Framing, the irrationality of intuitive “Perceptions” of “Randomness” [as illustrated in the above example], and the “Hot Hand” Fallacy.
Yet, the research findings of the Heuristics & Bias Program dominate the thinking of public and private institutions, perhaps in part - and not just a small part, as Gigerenzer observes - because they line up easily with some of the historical battle lines of Psychology, including the battle between institutional paternalism and individuals capable of giving informed refusal as well as informed consent.
Gigerenzer further points out that foundational work by Jean Piaget and others concluded that, by age 12, children demonstrate good individual judgment and intuitions about chance, frequency, and randomness, a line of research capped by a 1967 review of 110 such papers by Cameron Peterson and Lee Roy Beach, aptly titled “Man as an Intuitive Statistician”.
However, starting in 1973, Amos Tversky & Daniel Kahneman wrote a paper, “Judgment under Uncertainty: Heuristics & Biases”, that summarized four of their own published papers and took the field of Psychology in the opposite direction. This paper set the direction for the Heuristics & Bias Program.
This new direction seeks to prove that people do not have good individual judgment and intuitions about chance, frequency, and randomness, and thus must be guided by their betters with interventions that range from surreptitious “Nudging” to the “In-Your-Face” exercise of Power.
As we contemplate the record of the last 50 years of such top-down interventions, and return to reading the foundational works to find the logical, calculation & interpretation errors in this research direction, what other indications can we find that this tide of institutional paternalism has turned, and that the future belongs to individuals capable of exercising informed refusal, as well as informed consent?
In your own experience, what other ideas & ideologies demonstrate a state of over-reach, and require an individual effort of “Constructive Skepticism” on your part?
- Would “Evidence-Based” decisions based on “Expected Value” show up on your list?
If this were the case, Ole Peters’ Ergodicity Economics (EE) – like Gerd Gigerenzer’s “Fast & Frugal” Heuristics Program – shows new distinctions, and calculates critical differences.
EE shows the difference between the metrics of the Ensemble Average (“Expected Value”), which justify the “Casino” perspective with prescriptive, scaled-up recommendations, and the experience of the typical growth trajectory (“Time Average”), which justifies an individual perspective with descriptive, variance-reducing decision-criteria that rank opportunities from the bottom-up. EE also provides formulas that estimate how long it will take to see the difference between the two perspectives on a timeline.
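As a rough way to “See for Yourself” what this difference looks like, here is a minimal Python sketch (my own illustration; the payoffs and simulation sizes are illustrative assumptions, not formulas quoted from EE) of the multiplicative coin-flip gamble commonly used to introduce EE: wealth grows by 50% on heads and shrinks by 40% on tails. The Ensemble Average grows by 5% per round, while the Time Average growth factor is sqrt(1.5 × 0.6) ≈ 0.95 per round, so the typical individual trajectory shrinks even though the “Expected Value” rises.

```python
import random

random.seed(1)  # reproducible illustration

UP, DOWN = 1.5, 0.6             # +50% of wealth on heads, -40% on tails (illustrative)
ROUNDS, PLAYERS = 100, 100_000  # illustrative simulation sizes

# Ensemble Average ("Casino" / Expected Value perspective):
# the average multiplier per round is (1.5 + 0.6) / 2 = 1.05, so the mean grows.
ensemble_growth = (UP + DOWN) / 2

# Time Average (individual perspective): the typical per-round growth factor
# is the geometric mean sqrt(1.5 * 0.6) ~ 0.95, so a typical trajectory shrinks.
time_average_growth = (UP * DOWN) ** 0.5

# Simulate many individual wealth trajectories to compare the two views.
final_wealth = []
for _ in range(PLAYERS):
    wealth = 1.0
    for _ in range(ROUNDS):
        wealth *= UP if random.random() < 0.5 else DOWN
    final_wealth.append(wealth)

mean_wealth = sum(final_wealth) / PLAYERS           # pulled upward by a few lucky paths
median_wealth = sorted(final_wealth)[PLAYERS // 2]  # the typical individual experience

print(f"Ensemble Average growth per round: {ensemble_growth:.4f}")
print(f"Time Average growth per round:     {time_average_growth:.4f}")
print(f"Sample mean of final wealth:       {mean_wealth:.3g}")
print(f"Sample median of final wealth:     {median_wealth:.3g}")
```

The gap between the sample mean and the sample median is the gap between the “Casino” perspective and the individual perspective; the formulas mentioned above estimate how many rounds it takes for that gap to become visible on a timeline.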
The “Tools, Checklists & Processes” documented in these Workbooks, Handbooks, and Reading Notebooks – about Making Good Individual, Business, and Investment Decisions during our individual “Window of Life” – include “Sample Statistics” & Heuristics as well as non-ergodic “Growth Dynamics” & their matching ergodic transformation functions.
An interesting question for another post as I read material for the forthcoming EE2024 conference: How small a cooperative organization do you need to build up in order to reap the ergodic value of variance reduction before the communication and corruption inefficiencies of scale gum things up?
Developing…
“CTRI by Francois Gadenne” writes a book in three volumes, published at the rate of one two-page section per day on Substack for public peer-review. The book connects the dots of life-enhancing practices for the next generation, free of controlling algorithms, based on the lifetime experience of a retirement-age entrepreneur, & continuously updated with insights from reading Wealth, Health, & Statistics (i.e. AI/ML/LLM) research papers on behalf of large companies as the co-founder of CTRI.