Saturday, February 14, 2009

Milton Friedman's F-twist

The left-wing blogosphere has apparently chosen Milton Friedman (Nobel memorial prize in economics, 1976) as its target of the month.

CommunistSocialistSwine copies an article by one Brad DeLong. Although it is very clear that DeLong understands the basic structure of Friedman's deep monetarist ideas, he describes Friedman in a hostile way, without offering any evidence against those ideas.

Friedman understood that the money supply is the only major "central" parameter that a money-based economy seems to depend upon.

A central bank, ideally independent of the government, should keep the money supply pretty much constant - or, more precisely, growing at a steady, predictable rate - in order to avoid inflation as well as the downturns caused by deflation. The economy is optimized when all other parameters (and decisions about the allocation of resources) are left to the invisible hand of the free markets - so the optimal way to fight deflation or a plummeting money supply is to "throw the money from the helicopters" while the right way to fight inflation is simply to stop printing the banknotes.

Economic history shows that Friedman was pretty much right on the money but his irrational foes can never be convinced by any arguments. They even try to sling mud at the Chilean economy - the country that adopted liberal, monetarist policies in the 1970s before it saw the "miracle of Chile" that allowed it (blue line on the graph) to double its GDP relative to the South American average (red line) and become the clear leader of the subcontinent.

But once again, the people whose skulls are filled with socialist dirt can't be persuaded by any theoretical or empirical evidence. Their flawed dogmas are just too strong and too important to them to allow such a thing.

Back to the F-twist

Backreaction tries to attack Friedman's F-twist. No, it is not a topological twist of F-theory.

Friedman argued that only the evidence, not the plausibility of the assumptions, should decide the validity of a theory. He used to say that "assumptions don't matter". In fact, he preferred theories with seemingly unrealistic assumptions - an attitude that an uncle of Larry Summers, namely Paul Samuelson (Nobel memorial prize in economics, 1970), later called the F-twist. As Sabine correctly points out, Friedman summarized the situation concisely in The Methodology of Positive Economics (1953):
Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense).
The reason is simple. A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and critical elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained.
What do I think about these ideas? I obviously find them very deep and true - both in economics and in the natural sciences. In fact, I believe that every good theoretical physicist agrees with them.

When we say that the "impression" that the assumptions make upon us doesn't matter, we are simply stating that a scientist - or a rational economist - doesn't cling to his or her dogmas but focuses on the evidence. There can't be any controversy at this point.

Unrealistic assumptions: Bayesian analysis

But Friedman is saying more than that. The more "unrealistic" the assumptions are, the more important the theory is.

As Friedman wrote, such a significant theory succeeds in picking the essential features of reality and doesn't get distracted by the facts that turn out to be irrelevant. In the case of physics, one can mention relativity or quantum mechanics as examples of significant paradigm shifts. If an insight can be verified to be true even though it looks crazy at the beginning, it is a truly remarkable and profound insight.

However, I believe that one can be more explicit about the reasons why the F-twist is correct. Bayes' theorem tells us to update our perceived probability that a hypothesis H is correct, once the evidence E has been observed, to the following posterior probability:
P(H|E) = P(H) P(E|H) / P(E).
Here, P(H) was the prior probability of H, before the evidence E was taken into account. P(E|H) is the conditional probability of the evidence E calculated from (or predicted by) the assumed hypothesis H. Finally, P(E) is the marginal probability that E is observed according to any hypothesis:
P(E) = Sum_i P(E|H_i) P(H_i).
This P(E) can be interpreted as the normalization factor needed for the sum of the probabilities of all mutually exclusive hypotheses H_i to equal one.
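
To make this concrete, here is a minimal numerical sketch of the update - in Python, with invented priors and likelihoods that serve purely as an illustration, not as anything Friedman or Bayes computed:

```python
# Toy Bayesian update over three mutually exclusive hypotheses H_i.
# The priors and likelihoods are invented numbers, purely for illustration.
priors = {"H1": 0.4, "H2": 0.4, "H3": 0.2}          # P(H_i), summing to one
likelihoods = {"H1": 0.50, "H2": 0.05, "H3": 0.01}  # P(E|H_i) for the observed evidence E

# Marginal probability of the evidence: P(E) = Sum_i P(E|H_i) P(H_i)
p_evidence = sum(likelihoods[h] * priors[h] for h in priors)

# Posterior for each hypothesis: P(H_i|E) = P(H_i) P(E|H_i) / P(E)
posteriors = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

print(p_evidence)  # 0.222
print(posteriors)  # H1 dominates (~0.90) because it predicted E much better
```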

A non-dogmatic choice of priors gives a chance - a reasonably nonzero, comparable value of P(H_i) - to all qualitatively different hypotheses: that's the fair initial choice.

However, when a hypothesis is contrived (or has many assumptions), i.e. when it is one of many "variations" of a more general hypothesis H (or a class of such hypotheses), the initially high prior P(H) has to be divided among all of its variations. That makes their priors lower. And that's the reason why simple hypotheses (measured by the number and complexity of their independent assumptions) are preferred over more complicated ones if they seem to be almost equally compatible with the evidence: Bayes' formula tells us that the prior probability is higher for the simpler ones. It is also the reason why physicists try to avoid fine-tuning, i.e. why they believe in "naturalness".
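
As a toy illustration of this dilution of the prior (the fifty-fifty split and the one thousand "variations" below are my own invented numbers, just a sketch):

```python
# A simple hypothesis vs. a whole class of contrived variations of a more
# general idea.  The 50/50 split and the number of variations are invented.
prior_simple = 0.5            # prior of the single, simple hypothesis
prior_contrived_class = 0.5   # prior of the whole class of contrived variations
n_variations = 1000           # e.g. 1000 ways to adjust the extra assumptions

prior_per_variation = prior_contrived_class / n_variations  # 0.0005 each

# Even if one particular variation fits the evidence exactly as well as the
# simple hypothesis, its posterior starts from a prior 1000 times smaller:
print(prior_simple / prior_per_variation)  # 1000.0
```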

In a similar way, one also wants a hypothesis to explain as much as possible (ideally, "much follows from little") because if a hypothesis correctly predicts a lot of empirical data E at the same time, the marginal probability P(E) that a "random" hypothesis does so is very small. Because P(E) appears in the denominator of Bayes' formula, P(H|E) becomes very large for the hypothesis H that passes such extensive and nontrivial tests.
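
A crude way to quantify the effect of the denominator - again just a sketch with invented numbers - is to imagine that a hypothesis H correctly predicts N independent yes/no facts, each of which a "random" hypothesis would only get right half of the time. Then P(E) shrinks exponentially with N and the Bayesian boost P(E|H)/P(E) grows accordingly:

```python
# Boost factor P(E|H) / P(E) for a hypothesis H that predicts N independent
# binary facts with certainty, while a "random" hypothesis matches each fact
# with probability 1/2.  A schematic estimate, not a real-world calculation.
def bayesian_boost(n_facts: int) -> float:
    p_e_given_h = 1.0             # H predicts every fact correctly
    p_e_random = 0.5 ** n_facts   # a random hypothesis matches all N facts
    return p_e_given_h / p_e_random

for n in (1, 10, 30):
    print(n, bayesian_boost(n))   # 2.0, 1024.0, ~1.07e9
```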

The case of "unrealistic" hypotheses is analogous.

If the assumptions of a simple hypothesis H are "unrealistic", it means that the hypothesis is going "very far" from the apparently observed phenomena. If such a simple or qualitative hypothesis is nevertheless able to "hit the target" from such a huge distance, according to the empirical evidence and without fine-tuning, the agreement between the theory and the evidence becomes even more nontrivial. It shows that the theory must be a really good "shooter".

In terms of Bayes' formula, there must exist many equally "unrealistic", linguistically related but qualitatively distinct hypotheses that must be included in the analysis: if we're going "very far" from the "apparent observations", the number of possible hypotheses at the distance L in the space of ideas has to increase with L at least as a power law. However, because most of the "unrealistic" theories that were just admitted among the a priori allowed hypotheses don't agree with the evidence E, P(E), the marginal probability that appears in the denominator of Bayes' formula, is tiny.

It follows that the ratio itself, the posterior probability, is very high for the "unrealistic" hypothesis that agrees with the evidence, unlike its unsuccessful cousins.
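
To put a toy number on this reasoning (the cubic power law, the "distance", and the survival of exactly one hypothesis are all invented assumptions of the sketch): suppose the number of qualitatively distinct hypotheses at distance L in the space of ideas grows like L^3, each gets an equal prior, and only one of them survives the confrontation with the evidence. Then P(E) is tiny and the lone survivor jumps from a small prior to a posterior close to one:

```python
# Toy model: hypotheses at "distance" L in idea space, counted by an invented
# power law n(L) = L**3.  Each gets an equal prior; exactly one hypothesis
# predicts the evidence E (likelihood 1), all the others fail (likelihood 0).
def survivor_update(distance_L: int):
    n = distance_L ** 3
    prior = 1.0 / n           # prior of each hypothesis, including the survivor
    p_evidence = prior * 1.0  # P(E) = Sum_i P(E|H_i) P(H_i); only the survivor contributes
    posterior = prior * 1.0 / p_evidence
    return prior, p_evidence, posterior

print(survivor_update(10))  # (0.001, 0.001, 1.0): P(E) is tiny, so observing E
                            # is very informative and the survivor takes it all
```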

For example, someone - let us call him Zephir Quantoken - could say that classical physics was pretty much OK except that viscosity and voltage were mixed into one another in some seemingly bizarre way: he could argue that such a theory explains turbulence or electric shocks. That sounds crazy. Another person - let us call him Albert Einstein - could argue that space and time were mixed with one another, which would explain the behavior of space and time at any speed. These two theories sound equally "unrealistic" and there are clearly many crazy theories that are possible a priori.

The fact that one of these theories based on "unrealistic" assumptions, special relativity, agrees with all the evidence is very nontrivial. It is so nontrivial exactly because there are so many "seemingly comparably unrealistic" theories that have no chance to survive the empirical tests. Clearly, I could give you many other examples. Quantum mechanics is also "crazy" yet "simple" which is why its agreement with all the observed microscopic phenomena should be taken very seriously.

The significant theories are those that succeed in choosing the right aspects of reality to study, or the right zeroth approximation of reality.

The observation that general relativity can be exactly reproduced by vibrating closed strings is also "crazy": why not little green men or billions of other choices? Nevertheless, closed strings pass this test and nothing else does (except for dual descriptions of the same strings). Exactly because the chance of such a "seemingly crazy" theory passing the test was so low at the beginning, its ability to pass it is a more nontrivial piece of evidence than it would be otherwise. That's why Edward Witten experienced the most intense intellectual excitement of his life when he learned how closed strings predict gravity.

My final modern example concerns the solutions to the black hole information loss puzzle.

The solution that we know to be correct could have been interpreted as "unrealistic" or "insufficiently conservative" by some people before they saw (and/or understood) the evidence, for various reasons: for example, physics has to be non-local at macroscopic distances.

Nevertheless, a detailed analysis of the microscopic theories - the evidence - shows that the solution agrees with everything it has to agree with, and the objection that the required non-locality is "unrealistic" turns out to be irrelevant noise because locality actually is violated as the black hole evaporates - while the "realistic" expectations of exact locality are exactly the irrelevant distractions that Friedman is talking about. Once it is seen that the correct solution has all the required properties, its initially "unrealistic" character makes the case for its validity even stronger!

Of course, the "unrealistic" assumptions that Friedman was actually interested in were economic in character: "people act rationally, in their self-interest" and "the whole Western civilization can be seen as a by-product of the voluntary cooperation driven by such greed". That's unrealistic, ingenious, and it turns out to explain the dynamics of all the essential degrees of freedom in the economy. The invisible hand of the free market is as essential an insight about the world as the laws of thermodynamics.

Friedman was well aware of the similarities between his economic ideas and physics. That's why he could also mention Newton's "unrealistic" initial assumptions that objects live in a vacuum and move freely without friction - the assumption that actually led to classical mechanics. Aristotle's opposite assumption, that objects almost instantly stop, couldn't do such a significant job. The appearance of "free" motion in both significant theories - in mechanics and in economics - is not quite a coincidence. ;-)

In the very same way, some people try to overturn or "complexify" the assumptions about free markets - and about people being driven by their own interests - not because they have any evidence that these assumptions lead to wrong predictions but because they just find them "unrealistic". Such an attitude, based on an a priori criticism of the assumptions, is simply unscientific, as Friedman explains.



Milton Friedman: How the free markets created the pencil. Continues with Power of the Market (31 videos in total). These videos are just a small part of Friedman's PBS programs.

Must a theory know its limits?

Sabine Hossenfelder doesn't understand any of these basic and crucial gnoseological facts - not even in her comment section where Just Learning and others (even Arun!) are patiently and crisply explaining to her what Friedman is saying and why it's true (and Sabine only replies with some irritated, inconsistent chaos). But she even adds one more deep misunderstanding that is not directly related to the previous points. She believes that if a theory doesn't tell us its range of validity at the very beginning, it is not a scientific theory. Wow. She writes:
"If you don't specify the range of validity of your assumptions (typically by showing that the effect of deviations from the assumptions is negligible for the result), your model is not falsifiable and thus not scientific."
Sabine is not the only left-wing hater of science whose understanding of science is stunningly narrow-minded exactly because she or he tries to squeeze all of science into her or his brain - which is just too limited a piece of space for such an ambitious goal. Sabine Hossenfelder, much like Peter Woit, thinks that everything they are incapable of understanding with their naive, mediocre brains (and that includes most of theoretical physics, among other disciplines) is simply not science.

Needless to say, Sabine's opinion is pure crap. If we know the range of validity of a theory (and we know how to estimate all kinds of corrections to all kinds of approximate theories these days), it surely means that our understanding of reality is more complete than it is when the range of validity is unknown. But the range of validity of an approximate theory is something that only a more complete theory can determine. It is utterly absurd to demand that such knowledge must be a part of the approximate theory itself.

It is trivial to show that according to Hossenfelder's criterion, all of classical mechanics - and pretty much all other important theories in physics - would have been unscientific at the time of their birth. Why?

Well, classical mechanics only holds when angular momenta, actions, and similar quantities are much greater than Planck's constant. But Planck's constant only appeared in science at the very end of the 19th century. Newton didn't know anything about it so he couldn't possibly include inequalities such as "J >> hbar" among the "assumptions" of his theory.
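
As a rough order-of-magnitude illustration (the spinning top below, with its mass, radius, and rotation rate, is my own invented example, not anything Newton considered), even a small everyday object has an angular momentum that exceeds hbar by dozens of orders of magnitude - which is why Newton never needed the condition "J >> hbar":

```python
# Compare the angular momentum of a small spinning object to the reduced
# Planck constant hbar.  The toy numbers (a 0.1 kg disk of radius 3 cm
# spinning at 10 revolutions per second) are invented purely for illustration.
import math

hbar = 1.054571817e-34    # J*s, reduced Planck constant
mass = 0.1                # kg
radius = 0.03             # m
omega = 2 * math.pi * 10  # rad/s (10 revolutions per second)

moment_of_inertia = 0.5 * mass * radius ** 2  # solid disk about its axis
J = moment_of_inertia * omega                 # angular momentum in J*s

print(J)         # ~2.8e-3 J*s
print(J / hbar)  # ~2.7e31: "J >> hbar" holds by about 31 orders of magnitude
```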

The non-relativistic limit is analogous. The speed of light was sometimes thought to be infinite while other people preferred to believe that it was finite. The people who thought it was finite often guessed that it had to be close to the speed of sound: in this sense, I would even say that those who believed that the speed of light was infinite were closer to the truth. Clearly, nothing was really known about this issue until the 17th century. Ole Christensen Rømer made the first decent astronomical estimates of the speed of light in 1676. But people had to wait until Hippolyte Fizeau's experiments in 1849 to see the finite speed of light in the lab.

However, even Fizeau's experiment wasn't enough to determine that Newton's theory breaks down at speeds that are comparable to the speed of light. The physics world had to wait for Albert Einstein who figured this nontrivial insight out in 1905 when he discovered special relativity. Sabine's statement that people should have rejected Newton's theory as unscientific because it hadn't included all those future results is beyond ridiculous.

A different situation occurs when empirical evidence contradicting an approximate theory is already known at the moment when the approximate theory is being proposed. Obviously, the proponent of such a theory must have some - at least qualitative - explanation of the disagreement. Clearly, the closer and the greater the corrections (or discrepancies) are, the less useful the approximate theory is. But it is still true that the source and location of the "corrections" that invalidate the approximate theory are only known empirically: they can only be understood theoretically once a more complete theory or framework is found.

Only when actual evidence emerges suggesting that the assumptions were not quite correct (or accurate enough) - because they lead to wrong or inaccurate predictions - does one have a rational reason to revise the assumptions.

If a theory works, i.e. if its predictions agree with the observations, one should never be "obliged" to respond to critics who find the assumptions "too idealized" or "unrealistic" in this sense. And no theory driven purely by the feeling that the assumptions of the successful theory are "too idealized" (e.g. theories about "imperfect competition") should ever become influential: the successful theory is OK even if its assumptions seem "idealized", as long as it agrees with the evidence. That was Friedman's crucial point; it is very true and very important, and it is sad that the highly unpleasant and arrogant German girl can't get it, yet she feels self-confident enough to attack geniuses who are 10 intellectual levels above her.

By the way, I am completely stunned by her dishonesty. She writes the very article in order to attack Friedman's ideas - and to argue that theories have to know all of their limitations in advance and all that stuff - but when you explain to her that her opinion is incorrect, she denies that she has ever written what she has been writing since the beginning - and then she repeats the same stupid things in the following sentence again. Her dishonesty makes her similar to a jellyfish or Lee Smolin: you can't talk to them in any sensible way. But Smolin is at least clever enough not to repeat his stupidities for a few hours or days after they are proved to be stupidities.
