by Gene Callahan

Dan Klein’s *Knowledge and Coordination* has something interesting to say about Bayesian inference, although he never explicitly addresses that topic. Consider the following:

Here, we have the distinction between responding to the realization of events within a framework of recognized variables and relationships and the discovery of a fresh opportunity to embrace a new and better framework or interpretation. This element of epiphany, of finding fortune by interpreting the world differently, is the subtle and vital element in human decision making. Yet, it is absent from equilibrium model building. In equilibrium stories, agents never have a “light bulb” moment… (p. 13)

Kirzner’s alertness is the individual’s re-interpretation of that world [of a world of already-interpreted “facts”]. (p. 14)

“Equilibrium” is meaningful only in reference to a specified model… (p. 28)

Bayesian inference, similar to equilibrium theorizing, works within a fixed frame of interpretation: it “is meaningful only in reference to a specified model.” It cannot extend across instances when a new interpretive framework takes the place of the old. Consider Bayes’s original paper introducing his calculus:

Postulate. 1. Suppose the square table or plane ABCD to be so made and levelled, that if either of the balls o or W be thrown upon it, there shall be the same probability that it rests upon any one equal part of the plane as another, and that it must necessarily rest somewhere upon it.

2. I suppose that the ball W shall be 1st thrown, and through the point where it rests a line os shall be drawn parallel to AD, and meeting CD and AB in s and o; and that afterwards the ball O shall be thrown p + q or n times, and that its resting between AD and os after a single throw be called the happening of the event M in a single trial. These things supposed…

The point here is that Bayes only sets his calculus going within a very definite framework of interpretation. If, for instance, our interpretive framework were found to be all wrong (perhaps the balls contained an iron core, and a man under the table was manipulating them with a magnet), postulate 1 would not hold. We would no longer be working in this interpretive framework. The right thing to do in such a case is not to proceed with Bayesian updating, but to throw out one’s priors and start all over again, setting up new postulates.

I would suggest that in science and in practical life, both types of situations occur. Einstein’s theory of relativity was an instance of creating an entirely new interpretive framework. Until he did so, it would have been quite reasonable for a scientist to have had a prior as near zero as one wishes for the idea that one’s speed of travel would make time flow differently for one. But, given Einstein’s new interpretive framework, the idea suddenly became quite plausible. It was time, not to proceed with Bayesian inference as usual, but to set new priors and start over. In fact, we can see here a close parallel: the situations that require only Bayesian updating correspond to Kuhn’s “normal science,” while those that require new priors correspond to his paradigm shifts.
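The limitation can be made concrete with a small sketch (a hypothetical illustration of my own, not anything from Bayes or Klein): under Bayes’s rule, a hypothesis assigned zero prior probability can never gain posterior probability, however strongly the evidence favors it. Updating works only within the hypothesis space the priors already admit.

```python
# Hypothetical illustration: Bayesian updating over a fixed hypothesis space.
# "new_framework" stands in for an interpretation the agent's priors rule out.

def update(priors, likelihoods):
    """One step of Bayes's rule: posterior is proportional to prior x likelihood."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"p=0.3": 0.5, "p=0.7": 0.5, "new_framework": 0.0}
# Evidence strongly favoring the excluded framework cannot revive it:
likelihoods = {"p=0.3": 0.01, "p=0.7": 0.02, "new_framework": 0.99}
posterior = update(priors, likelihoods)
# posterior["new_framework"] is still exactly 0.0: updating cannot introduce
# a hypothesis the prior excluded; one must set new priors and start over.
```

This is just the point about relativity above: no amount of pre-1905 evidence could raise a prior of zero.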

Exactly. This is the real stuff.

I’ve made this point repeatedly, a point contained in Thomas Kuhn’s work.

If memory serves, I’ve also corresponded with Klein on this general point.

The point can’t be emphasized enough.

Thanks Greg.


In my view, this is the error in Bryan Caplan’s attempt to characterize entrepreneurship as an optimal search, and thus just another application of decision theory. The problem is that optimal search involves a given prior distribution on outcomes of search.

It’s not just Bayesian inference; other ideas about probability and statistics work the same way. There are what you could call fixed entrances from which information flows into the model, and other doors that are closed off.

Lots of science and engineering, even the “normal” sort, has situations like this; it’s just a matter of how important it is. In my engineering job I encountered a case just a few weeks ago where something that “couldn’t happen” happened. That meant all the critical assumptions had to be revisited and lots of experimental work done.

Lots of people, when they see inexplicable things, jump to the conclusion that some important scientific law must be wrong. That’s almost never the case; normally it’s just a matter of subtle misapplication. (Textbooks contain plenty of simplistic applications of scientific laws, though.)

Often in courses on “experiment design” they teach you a way to minimize work. Suppose parameters x, y and z are to be changed, with 4 possible values for each parameter. Now, why do all 64 experiments? Why not sparsely populate the matrix and then, once the interesting points are found, zoom in on them? This recommended procedure is almost always a bad idea, because in most practical situations there is some extra parameter the experimenter isn’t aware of that makes a difference. Although it’s technically inefficient to vary each of x, y and z in turn while keeping the others constant, in practice it quickly roots out the flaws in the experiment and in the model or problem framework it was based on.
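The two strategies in this comment can be sketched roughly (hypothetical code; the parameters x, y, z and the 4-level grid are taken from the comment, the baseline value is my assumption):

```python
# Hypothetical sketch: full factorial design vs. one-parameter-at-a-time.
from itertools import product

levels = [0, 1, 2, 3]  # 4 possible values for each of x, y, z

# Full factorial: every combination, 4**3 = 64 experiments.
full_factorial = list(product(levels, levels, levels))

# One-at-a-time: hold the others at an assumed baseline, vary each in turn.
baseline = (0, 0, 0)
one_at_a_time = set()
for i in range(3):
    for v in levels:
        run = list(baseline)
        run[i] = v
        one_at_a_time.add(tuple(run))

print(len(full_factorial))  # 64 experiments
print(len(one_at_a_time))   # 10 (the baseline run is shared by all three sweeps)
```

The one-at-a-time sweep is far cheaper than the full matrix, which is why, as the commenter says, its value lies less in efficiency than in exposing flaws in the framework.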

Isn’t this similar to Knight’s distinction between risk and uncertainty, the former implying that one knows the list of all possible outcomes, while uncertainty means there are outcomes that you cannot know about – it’s open-ended? Risk can be of two kinds. You may know the outcomes *and* their probabilities (and the probability distribution); or you may know the outcomes but not the probabilities (or the distribution). The latter corresponds to a Bayesian situation, and it is not what Knight means by uncertainty.

Good post. Bayesian inference implicitly assumes you know the entire universe of possibilities and are trying to find out where you are in it. (This is obvious if you develop Bayes’ theorem with Venn diagrams.) The entrepreneur and the scientist at the frontier do not know the universe of possibilities. They often discover previously unknown and unimagined possibilities. When that happens, “priors” are not “updated.” They are swept aside.

Alan Greenspan has a 2004 piece in the AER in which he defends discretionary monetary policy against the relatively strict rules-based approach favored by John Taylor. Greenspan explicitly invokes Knightian uncertainty in describing his (relatively) discretionary “risk-management” approach to monetary policy. But he then says, “In essence, the risk-management approach to monetary policymaking is an application of Bayesian decision-making.” I suppose the weasel word “essence” saves him, but Bayesian decision making is inapplicable to situations of Knightian uncertainty, as both Gene and Peter Lewin are saying.

Thanks Allan!

Comrade Greenspan’s misapplication of Bayesian decision-making wasn’t the half of it. A socialist trying to mimic what the market does as a matter of course is doomed to failure. See Selgin, The Theory of Free Banking, p. 104.

Bayesian inference has a mechanism by which you can downgrade your confidence in the validity of your model, and thus is responsive to such disconfirmatory evidence. (This also handles the supposed problem Quine brings up about “which belief did this observation falsify?” because Bayesian inference has a rigorous process for saying which belief — the model, the parameters, or the observation — gets the biggest hit to credence.)
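A minimal sketch of the mechanism this comment describes (hypothetical; the fair-coin vs. biased-coin models are my illustration, not the commenter’s): Bayesian model comparison can indeed downgrade a model, but only relative to the other models specified in advance.

```python
# Hypothetical sketch: Bayesian model comparison over a pre-specified model space.
from math import comb

def binom_lik(p, heads, n):
    """Binomial likelihood of observing `heads` successes in `n` trials."""
    return comb(n, heads) * p**heads * (1 - p) ** (n - heads)

heads, n = 9, 10  # assumed data: 9 heads in 10 flips

# Model A: the coin is fair (p = 0.5).
evidence_A = binom_lik(0.5, heads, n)

# Model B: unknown bias p, uniform prior approximated on a grid.
grid = [i / 100 for i in range(1, 100)]
evidence_B = sum(binom_lik(p, heads, n) for p in grid) / len(grid)

prior_A = prior_B = 0.5
post_A = prior_A * evidence_A / (prior_A * evidence_A + prior_B * evidence_B)
# post_A falls well below 0.5: the data downgrade the fair-coin model.
# But a model absent from the comparison gets no credence at all.
```

Note that the comparison happens inside a model space fixed in advance, which is where the thread’s objection bites.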

“Bayesian inference has a mechanism by which you can downgrade your confidence in the validity of your model, and thus is responsive to such disconfirmatory evidence.”

“Having a mechanism for X” and “handling X well” are not equivalent. And if someone thinks Quine’s concern can be dismissed with “a rigorous process,” they have simply misunderstood Quine’s concern.

“Leon”: I don’t think I understand your comment. Bayesian logic can certainly be applied to model selection. Is that what you meant? But in Bayesian model selection, you have to specify a model space, which becomes the “fixed frame of interpretation” Gene spoke of. I guess I don’t really see where your comments are connecting to Gene’s point.

Gene did at one point speak of “a specified model,” but he was quoting Dan Klein (who was not criticizing Bayesianism) in order to clarify the link between his comments and Dan’s. Otherwise I think he always spoke of an “interpretive framework.” Anyway, the model space is itself a model, so I don’t see how Gene’s use of the phrase “a specified model” really exposes him to your criticism. That’s why I think I must not understand what your criticism is. I mean, isn’t it possible that an observation could show you that the true model is outside the model space you were using? I think that’s what Gene was getting at.

I think we could summarize Gene’s comment with the remark attributed variously to Will Rogers and Mark Twain:

“It ain’t what you don’t know that hurts you, it’s what you know for sure that just ain’t so.” That line describes perfectly, I think, the limits of Bayesianism to which Gene wishes to draw our attention.

The hardest question in any investigation is how to populate the probability space in the first place. That applies to standard statistics in any event.

Bayesian inference tells us that the value of a piece of evidence depends upon the hypotheses being compared. If one changes the hypotheses, one changes the value of the evidence. New hypotheses can emerge, and of course if new evidence is discovered one takes it into account, and it may cause one to consider new hypotheses. See Ed Jaynes, “Where Do We Stand on Maximum Entropy?” and ch. 4 of his book *Probability Theory: The Logic of Science*.

In short, it is not clear what point the author is making here, and it cannot be clear until he compares Bayesian reasoning with some other form of reasoning and explains how the alternative is superior.