An Axiomatic Approach to the Law of Small Numbers


With beliefs over the outcomes of coin tosses as our primitive, we formalize the Law of Small Numbers (Tversky and Kahneman (1974)) by an axiom that expresses a belief that the sample mean of any sequence will tend toward the coin’s perceived bias along the entire path. The agent is represented by a belief that the bias of the coin is path-dependent and self-correcting. The model is consistent with the evidence used to support the Law of Small Numbers, such as the Gambler’s Fallacy. In the setting of Bayesian inference, we show how learning is affected by the interplay between two potentially opposing forces: a belief in the absence of streaks and a belief that the sample mean will tend to the true bias. We show that, unlike other learning results in the literature (Rabin (2002), Epstein, Noor and Sandroni (2010)), the latter force ensures that the agent at least admits the true parameter as possible in the limit, even if he does not learn with certainty that it is true. In an evolutionary setting, we show that agents who believe in the Law of Small Numbers are never pushed out of the evolutionary race by “standard” agents who correctly understand randomness.
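The notion of a path-dependent, self-correcting bias can be illustrated with a minimal simulation. The functional form below is purely illustrative and is not the representation derived in the paper: the believed probability of heads is simply assumed to fall linearly in the running surplus of heads over tails, so that, as in the Gambler's Fallacy, the agent expects the sample mean to be pulled back toward the perceived base bias.

```python
import random

def perceived_heads_prob(surplus, base=0.5, correction=0.1):
    """Illustrative self-correcting bias (hypothetical linear form).

    The believed chance of heads falls when past heads exceed past
    tails (surplus > 0) and rises when tails are ahead, pulling the
    sample mean toward the perceived base bias.
    """
    p = base - correction * surplus
    return min(1.0, max(0.0, p))

def simulate(n_tosses, seed=0):
    """Sample mean of heads when tosses follow the self-correcting belief."""
    rng = random.Random(seed)
    surplus = 0  # heads minus tails so far
    heads = 0
    for _ in range(n_tosses):
        if rng.random() < perceived_heads_prob(surplus):
            heads += 1
            surplus += 1
        else:
            surplus -= 1
    return heads / n_tosses
```

Because the surplus is mean-reverting under this belief, long simulated paths have a sample mean close to the base bias, which is the sense in which the agent believes the sample mean tends to the perceived bias along the entire path.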