Saturday, April 14, 2012

Friedrich August von Hayek vs. John Maynard Keynes

The confrontation between John Maynard Keynes and his Austrian-born free-market adversary and friend, Friedrich August von Hayek, is one of the most famous in the history of contemporary economic thought.  The debate, conducted during the Great Depression of the 1930s, concerned the causes of, and remedies for, business-cycle downturns in market economies.

The origins of this debate can be traced back to Keynes' 'Treatise on Money' (1930), a rather obscure book that was superseded by his masterpiece 'The General Theory of Employment, Interest, and Money' (1936).  'Treatise on Money' was a difficult book to read, and this probably caused Hayek and Keynes to misunderstand each other.  As Keynes and Hayek were building their economic models at the same time, their debate was very much dominated by terminological definitions.  One of the main topics that Keynes and Hayek corresponded about was the definition of savings and investment, and Hayek wrote three extensive systematic reviews of 'Treatise on Money'. In turn, Keynes wrote only one article in response, accusing Hayek of misrepresentation.

The debate on 'Treatise on Money' was rather one-sided, and in 1932 Keynes withdrew from it to reshape and improve his central argument, which was to become 'The General Theory'.  This work became one of the most influential economic treatises ever written, immortalizing Keynes as one of the greatest 20th-century economists.  His lasting legacy, which was to become known as Keynesianism, is an economic perspective arguing that private-sector decisions sometimes lead to inefficient macroeconomic outcomes.  The theory therefore advocates active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy interventions by the government, to stabilize economic output over the business cycle.
Many Keynesian economists have not regarded Hayek as their man's equal.  However, there is increasing agreement today that Hayek, although controversial, was one of the most influential 20th-century economists.  He made fundamental contributions to economics in the theory of business cycles, capital theory, and monetary theory.  He was awarded the Nobel Prize for economics in 1974, jointly with Gunnar Myrdal, "for their pioneering work in the theory of money and economic fluctuations".

Most of Hayek's work in the 1930s and 1940s focused on the Austrian theory of business cycles.  He believed that the price system of a free market was an efficient mechanism for coordinating people's actions, and that markets were the result of a spontaneous order that had evolved slowly, over a long period of time, through economic exchanges between people.  Contrary to the statement in Wapshott's book that the Austrian School economists were more theoretical and mechanistic in their approach to economics, Hayek believed that markets were highly organic, and that any interference with their spontaneous order would distort their efficient operation.  In fact, it can be argued that Keynes' economic theory was the more mechanistic of the two, since it treated economies as machines that could be manipulated to behave according to the wishes of economic planners.
A true Renaissance man, Hayek also made intellectual contributions in political theory, psychology, and methodology.  It is perhaps because of his work in political theory that some economists, especially those with a Keynesian orientation, have wrongly dismissed his core economic research as ideologically motivated.  This is the trap that Wapshott seems to walk into, whether intentionally or because Hayek's criticism of the Keynesian model, which had become de facto orthodoxy for most of the 20th century, extends over many decades and has, to some extent, remained unnoticed or ignored by many economists and policy-makers.
Friedrich von Hayek

Wapshott's book 'Keynes Hayek: The Clash That Defined Modern Economics' is a commendable effort to bring economic thought to the attention of the general reading public.  It is written in an engaging 'human interest' style, and I am certain it will sell well.  Its publication is also well timed, because there has been a marked increase in public interest in economics and economic policy as a consequence of the 'Great Recession' and the sovereign debt crisis that currently grips the world.  And this is where the book fails to deliver.  A reader should not expect any great insight into how Keynesian or Hayekian economics could be applied to today's economic situation, beyond the 'truly Keynesian', i.e. political, government policy interventions outlined in Wapshott's book.

Nevertheless, the book provides a delightful insight into the personalities of Keynes and Hayek.  Keynes is portrayed as a privileged and brilliant economist at the top of his game, moving effortlessly between academia, political elites, and his bohemian 'Bloomsbury group' of friends.  Hayek, however, is painted as a stiff, humorless, theoretical, and linguistically challenged central European scholar, brought to the London School of Economics (LSE) by Lionel Robbins to provide an alternative to the theories of Keynes and his 'Cambridge circus' of almost evangelical followers.  Robbins and the dons of the LSE considered heretical Keynes' view that free markets left to their own devices sometimes caused economic slumps, and that decisive government action was needed to pull the economy back to an equilibrium state of full employment.  In contrast to Keynes, the Austrian economists thought that free markets, driven by people's choices, tended to adjust to equilibrium if left alone and free from government intervention.  Concerned by the increasing intellectual and policy influence of the new generation of Keynesian economists at Cambridge, Robbins appointed Hayek to the LSE to counterbalance the Keynesian interventionist doctrine.

Much of Wapshott's book is about the political philosophy that divided Keynes and Hayek over the role of the government in the running of an economy.  Much less space is spent on understanding the economics upon which this big-picture conflict was based.  Indeed, Wapshott overemphasizes Hayek's 1944 book 'The Road to Serfdom', on the dangers of socialism.  The book was written after Hayek moved to Britain, where he observed that many British socialists were advocating some of the same policies of government control that had been advocated in Germany in the 1920s.  His basic argument was that government control of people's economic lives was a form of totalitarianism: "Economic control is not merely control of a sector of human life which can be separated from the rest…. it is the control of the means for all our ends" (1944).  The book became a best seller in the USA and established Hayek as a leading classical liberal, or 'libertarian', as he would be called today.  However, the success of the book, which was serialized in 'Reader's Digest', typecast Hayek as a free-market ideologue, detracting attention from his scientific contribution to economics.
Wapshott provides a 'workmanlike' description of Keynes' theory, but his treatment of Hayek's economics, and of Hayek's critique of 'The General Theory', is woefully inadequate.

The fundamental tenet of 'The General Theory' is that there is a direct and positive relationship between employment and aggregate expenditure in an economy.  Therefore, according to Keynes, total demand determines the employment level in the economy, and the existence of unemployment indicates that aggregate demand is insufficient to employ all factors of production.  Keynes considered the capitalist system volatile: there would be times when the level of demand was insufficient to maintain full employment.  He therefore recommended that the public sector address this by controlling the level of aggregate spending in the economy.  His recommendations for reducing unemployment can be categorized as follows:
• Interest rates should be reduced as far as possible to encourage private investment;
• A progressive tax system should be used to divert income from the wealthy to the lower paid, as their propensity to consume is higher; 
• The government should actively participate in public investment activity to supplement private investment, should this prove insufficient to maintain a level of aggregate expenditure that corresponds with full employment.
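The logic behind these recommendations can be sketched with the standard income-expenditure arithmetic (my own illustrative notation, not Wapshott's exposition or Keynes' own):

```latex
% A minimal income-expenditure sketch of the Keynesian multiplier.
% Notation is assumed, not quoted: Y = national income, C = consumption,
% I = private investment, G = government spending,
% b = marginal propensity to consume.
\[
Y = C + I + G, \qquad C = a + bY, \qquad 0 < b < 1
\]
\[
\Rightarrow \; Y = \frac{a + I + G}{1 - b},
\qquad \frac{\partial Y}{\partial G} = \frac{1}{1 - b} > 1
\]
```

On this reading, each of the three recommendations raises some component of aggregate spending, and the multiplier 1/(1 - b) amplifies it: with b = 0.8, for example, one extra unit of public investment raises income by 1/(1 - 0.8) = 5 units.  The progressive-tax recommendation works through b itself, since redistributing income toward the lower paid, whose propensity to consume is higher, raises the aggregate b.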
John Maynard Keynes

After the publication of 'The General Theory', Hayek did not critique Keynes' work as was expected, an omission he regretted ever after (Hayek in Sanz-Bas, 2011).  However, Hayek's critique of Keynes is incorporated into many of his works, including 'Monetary Nationalism and International Stability' (1937), 'Profit, Interest, and Investment' (1939), 'The Pure Theory of Capital' (1941), 'The Campaign Against Keynesian Inflation' (1974), and 'The Fatal Conceit' (1988). It is perhaps because of this extended period of Hayek's writing that Wapshott fails to provide a full account of Hayek's economic thinking in general, and of his critique of Keynesian theory in particular.
It is beyond the scope of this review to discuss Hayek's critique in detail.  However, one of Hayek's main criticisms of 'The General Theory' concerned Keynes' assumption that unemployment could be solved through increases in aggregate spending.  Keynes linked aggregate spending with employment: if spending in the economy was increased sufficiently, workers would get their old jobs back and the economic crisis would be averted.  In contrast, Hayek argued that the crisis was a direct result of the misallocation of resources during the preceding economic booms.  Hence, Keynes' solution of re-establishing the same distribution of resources would not provide a sustainable answer to unemployment.  The only solution to systemic unemployment, according to Hayek, required the liquidation of wrong investments and the reallocation of productive resources.  To quote Hayek:
"If the real cause of unemployment is that the distribution of labour does not correspond with the distribution of demand, the only way to create stable conditions of high employment which is not dependent on continued inflation (or physical controls) is to bring about a distribution of labour which matches the manner in which a stable money income will be spent" (1950).

What we can infer is that Keynes' solution to economic crises was a short-term palliative, while Hayek advocated a market-driven solution that would result in a more sustainable productive economic structure, one consistent with consumer preferences.  Trade cycles, according to Hayek, were a result of government interference with the spontaneous order of the markets.  Hence, the only way to avoid booms and busts, i.e. trade cycles, is to prevent them from occurring in the first place.

Wapshott concludes his book by crediting Keynes with "saving capitalism a second time".  He refers to the Keynesian doctrine as the solution to the Great Depression, and to the applicability of the same dogmatic panacea to the Great Recession from 2008 onwards.  He conjures the ghost of the Keynesian high priest, John Kenneth Galbraith, who scolds conservatives in the English-speaking countries for embracing Hayekian economics: "better to accept the unemployment, idled plants, and mass despair of the Great Depression, with all the resulting damage to the reputation of the capitalist system, than to retreat on true principle….".  What Wapshott misses in his argument is Hayek's central proposition: booms and busts are a result of malinvestment created by government interference in the operation of free markets, a result of the very policies advocated by the dogmatic Keynesians of today.

Thursday, April 12, 2012

Sir Clive Granger & Robert Engle

In 2003 Clive Granger shared the Nobel prize for economics with the American academic Robert Engle for their work on the concept of co-integration, a way of analysing sequences of economic data recorded at regular intervals, known as time series. Their use of sophisticated statistical techniques now helps the understanding of market movements and economic trends.
The economics prize, introduced in 1968 by the Swedish central bank, is the only Nobel award not established by the Swedish industrialist Alfred Nobel.

Clive Granger
Granger's principal achievements – so-called Granger causality and co-integration – were acclaimed as having helped build the statistical plumbing of modern economics and finance. "It may not be glamorous," declared the magazine BusinessWeek, "but as the Nobel committee recognised, it's indispensable."
Trained in statistics, Granger specialised in research that helped to demystify the often baffling behaviour of financial markets, pioneering a range of ways of analysing statistical data that have since come into routine use by government departments, central banks, economists and academics.
Sir Clive Granger

Clive always had a particular passion for identifying what was predictable in economic relationships. He emphasized the importance of sorting out which variables are helpful for purposes of forecasting others as one of the first steps in understanding underlying causal relationships. That philosophy came to be a regular feature in thousands of economic studies and seminars, as scholars would routinely report investigations of "Granger-causality."

Clive also discovered, in a paper with Paul Newbold published in 1974, the phenomenon of spurious regression. The pair demonstrated by Monte Carlo simulation that if a researcher fails to take account of the underlying dynamics, a regression of one trending variable on another can produce what look like marvelously significant t-statistics, even though the reality could be that the two series are completely independent of each other.

The complex methods Granger devised are used to analyse links between such factors as wealth and consumer spending, price levels and exchange rates. They allow the construction of economic models that help the understanding of how trends develop over time, and how relationships evolve between different variables.
He first came to notice in the 1960s with his work on time series and the development of the concept of Granger causality, an idea rooted in the work of the mathematician Norbert Wiener. Granger was primarily concerned with time series that were non-stationary (i.e. statistics such as a country's gross domestic product that, despite periodic fluctuations, follow a long-term trend of growth or shrinkage; by contrast, unemployment figures or interest rates tend to remain at or around a particular level and accordingly are described as stationary).

The current value of a time series can often be predicted from its own past values. For example, gross domestic product this quarter is imperfectly predicted by GDP data from the past few years. But the introduction of a second time series can improve predictive accuracy, a concept that became known as "Granger causality".

In the 1970s Granger moved on to redefine the field of econometrics (the use of mathematical and statistical techniques to study economic problems) by overturning much of the received wisdom in the study of time series data. As BusinessWeek later observed, Granger showed that the statistical techniques used by forecasters to find patterns in historical data "were simply wrong". The best-developed statistical methods formerly assumed that time series were stationary, tending to vary randomly around a common long-term mean (or average) value or around a non-random trend. Many economic time series, however, seem to be non-stationary, following processes related to the so-called "random walk", a term suggested by the image of a drunken man stumbling along a street, who is just as likely to go one way as another.
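In code, a Granger-causality test is simply a comparison of forecasting models with and without the second series. Below is a minimal sketch using Python's statsmodels package (my own illustration on simulated data; the variable names and coefficients are invented for the example):

```python
# Sketch of a Granger-causality test on simulated data.
# Assumes numpy, pandas and statsmodels are installed.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 500

# Construct two stationary series where x helps predict y:
# y depends on its own past AND on the past of x.
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + rng.normal()

data = pd.DataFrame({"y": y, "x": x})

# Null hypothesis: lagged values of x add no predictive power for y.
# Small p-values reject the null, i.e. x "Granger-causes" y.
grangercausalitytests(data[["y", "x"]], maxlag=3)
```

Note that Granger causality is a statement about predictive content, not causation in the everyday sense: it says only that past values of one series improve forecasts of another.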

For want of better techniques, economists often applied statistics designed for stationary data to non-stationary data. But in 1974, Granger and his post-doctoral student Paul Newbold, building on the earlier work of the British statistician G Udny Yule, showed that pairs of non-stationary time series could frequently display highly significant correlations when there was no causal connection between them. For example, the US federal debt and the number of deaths due to Aids between 1981 and 2000 are highly correlated but are clearly not causally connected. Such "nonsense correlations" called into question the meaningfulness of many econometric studies.
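The Granger-Newbold result is easy to reproduce. The following Monte Carlo sketch (my own, in the spirit of their 1974 paper rather than a reconstruction of it) regresses pairs of independent random walks on each other and counts how often the slope looks "significant":

```python
# Monte Carlo illustration of spurious regression between
# independent random walks. Assumes numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs, n_trials = 200, 1000
significant = 0

for _ in range(n_trials):
    # Two completely independent random walks (non-stationary).
    x = np.cumsum(rng.normal(size=n_obs))
    y = np.cumsum(rng.normal(size=n_obs))
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    if abs(fit.tvalues[1]) > 1.96:  # naive 5% significance threshold
        significant += 1

# At a nominal 5% level, far more than 5% of trials appear "significant",
# even though no relationship exists between the series.
print(f"{significant / n_trials:.0%} of regressions look significant")
```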
Robert Engle, Mark Machina and Clive Granger

Working with Engle, Granger realised that not all long-term associations between non-stationary time series are nonsense. Suppose, as the American academic Kevin D Hoover explained, that the randomly-walking drunk has a faithful (and sober) friend who follows him down the street from a safe distance to make sure he does not injure himself.

"Because he is following the drunk, the friend, viewed in isolation, also appears to follow a random walk, yet his path is not aimless; it is largely predictable, conditional on knowing where the drunk is," Hoover noted. Granger and Engle coined the term "co-integration" to describe the genuine relationship between two non-stationary time series. Time series are "co-integrated" when the difference between them is itself stationary – the friend never gets too far away from the drunk, but, on average, stays a constant distance behind.

Granger's discovery had an enormous impact, leading, as one of his students put it, to "chaos for a few years". In order to forecast non-stationary variables, new techniques had to be developed to replace the ones Granger had debunked.
Clive William John Granger was born on September 4 1934 in Swansea, where his father worked for the Chivers jam and jelly firm. When he was still a baby his parents moved to Lincoln, and during the Second World War his father enlisted in the RAF. His mother took him first to Cambridge, and thence to Nottingham where, at West Bridgford grammar school, he showed promise as a mathematician and foresaw a career in insurance or meteorology. At the University of Nottingham he was among the original intake for the first joint degree course in Economics and Mathematics.

When he graduated with a First in 1955, Granger stayed on to become a lecturer in Statistics, gaining his PhD in 1959. After spending an academic year at Princeton, New Jersey, he married and spent his honeymoon travelling across the United States. Returning to Nottingham he remained on the faculty until 1973, with occasional visiting positions at other universities. In 1974 he returned to America to become a professor at the University of California at San Diego. It was there, with Engle, that Granger introduced methods for testing for co-integration among variables in a paper in the journal Econometrica in 1987. The concept would have remained useful only in theory had not Granger and Engle introduced powerful statistical methods for estimating and testing hypotheses, groundbreaking work that earned them the Nobel Prize.

With Paul Newbold, Granger published Forecasting Economic Time Series in 1977 (one of 12 books he produced in the course of his career), which became a standard reference work on time series forecasting.

On his retirement from UCSD in June 2003, Granger became a visiting scholar at the University of Canterbury in New Zealand. Weeks later he heard he had shared what was then the £779,000 Nobel Prize with Engle, having been told the news in a 3am telephone call which he suspected at first might have been a hoax. He was reassured only after speaking to members of the prize committee. "I know one of them rather well," he explained, "so when I heard his voice I knew it wasn't a hoax, so I was then cheerfully relaxed."
Granger was knighted in 2005. In the same year, to mark his Nobel achievement, Nottingham University's Economics and Geography department premises were renamed the Sir Clive Granger Building.
Granger once wrote: "A teacher told my mother that 'I would never become successful', which illustrates the difficulty of long-run forecasting on inadequate data."

Clive Granger married, in 1960, Patricia Loveland, who survives him with their son and daughter (who, although born four years apart, share the same birthday).

Robert Engle

Engle was born in Syracuse, New York, in November 1942. He graduated from Williams College with a bachelor's degree in physics in 1964, and went on to receive a master's degree in physics and then, in 1969, a doctorate in economics from Cornell University. Engle was a professor at MIT between 1968 and 1974. In 1975 he moved to the University of California at San Diego as a professor, and served as chair of the Department of Economics there between 1990 and 1994. Since 1999 he has been a professor of financial management at the Leonard N. Stern School of Business, and he is a member of the American Economic Association and the American Academy of Arts & Sciences. Engle worked with Clive W. J. Granger at the University of California, San Diego.

His major works include Cointegration, Causality, and Forecasting: A Festschrift in Honour of Clive W. J. Granger (edited with Halbert White, Oxford University Press, 1999); ARCH: Selected Readings (Oxford University Press, 1995); Handbook of Econometrics (edited with Daniel McFadden, North-Holland, 1994); and Long-Run Economic Relationships: Readings in Cointegration (edited with C. W. J. Granger, Oxford University Press, 1991). Engle has been a member of the American Academy of Arts & Sciences, a fellow of the Econometric Society, and a research fellow of the National Bureau of Economic Research (NBER), and has won the Roger F. Murray Award of the Institute for Quantitative Research in Finance and the Outstanding Teacher award of the MIT Graduate Economics Association.

He established the concept of time-varying volatility in economic time series with the ARCH model, and developed a series of volatility models and methods of statistical analysis. The Royal Swedish Academy of Sciences (RSAS) declared that his methods are not only indispensable tools for researchers, but also offer financial analysts a short cut in asset pricing, asset allocation, and risk assessment. As a time-series analyst, his research has covered band-spectrum regression, hypothesis testing, exogeneity and cointegration analysis, ARCH models, and, from the 1980s, the high-frequency analysis of financial asset return data.
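In standard textbook notation (my summary of the idea, not a quotation from Engle's 1982 paper), the simplest ARCH(1) specification makes today's variance depend on yesterday's squared shock:

```latex
% ARCH(1): returns with time-varying conditional variance.
% Notation assumed: r_t = return, sigma_t^2 = conditional variance.
\[
r_t = \sigma_t \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, 1)
\]
\[
\sigma_t^2 = \alpha_0 + \alpha_1 r_{t-1}^2, \qquad \alpha_0 > 0, \; \alpha_1 \ge 0
\]
```

A large shock yesterday mechanically raises today's conditional variance, so turbulent days follow turbulent days, which is exactly the volatility clustering observed in financial returns.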

As an important pioneer of financial econometrics over the past two decades, Engle has taken a great interest in financial market analysis and financial econometrics, including market microstructure, equities, interest rates, and options. In Engle's view, with the development of electronic trading, financial econometrics helps market makers, brokers, and traders establish optimal strategies for specific market environments and objectives by means of statistical analysis.
Robert Engle


More than 100 papers and three published books have made Engle a productive economist. In addition, he frequently gives speeches in both academic and business circles. As he himself has said, research without application is dull, but bearing too many consulting responsibilities without research significance is equally barren. Engle's splendid achievements can be attributed not only to his academic collaboration with Granger, David F. Hendry, and other economists and econometricians at the University of California, San Diego, but also to his environment in New York and at New York University. New York, as a world financial centre, provided him with the data required for the analysis of financial problems and with models for his academic research. Additionally, the practical problems raised by his colleagues in financial practice at NYU Stern informed his model studies.

It is interesting that Engle, a winner of the Nobel Prize in Economics, originally wanted to become a physicist. He applied for graduate study in physics at Cornell University and the University of California at Berkeley. Because his telephone contact with Berkeley's graduate school was delayed, he finally chose Cornell. At the beginning he was eager to join a superconductivity research group, but a year later, like many of his friends, he decided to transfer to the economics department. Although he had majored in physics, Engle proved expert in economics, obtaining his doctorate in economics soon after receiving his master's degree in physics. In fact, many economists, such as Daniel L. McFadden, who won the Nobel Prize in Economics in 2000, began in physics.

According to the Royal Swedish Academy of Sciences, Engle's ARCH model is now an indispensable tool in economic research and in the evaluation of prices and risks by financial market analysts.


Tuesday, April 10, 2012

Isaiah Berlin

English social historian, political and social philosopher, essayist, and friend of the Russian authors Anna Akhmatova and Boris Pasternak. According to an anecdote, the Cold War began in 1945, when Berlin visited Akhmatova in Leningrad; this so irritated Stalin that he personally ran the philosopher down on the telephone. Berlin never wrote a single-volume magnum opus, only brilliant essays. Like Karl Popper, he is widely acclaimed for his anti-authoritarian social philosophy and criticism of totalitarian doctrines. Berlin's most important contributions to social and political theory include the essays 'Historical Inevitability' (1954) and 'Two Concepts of Liberty' (1958). The Independent stated that "Isaiah Berlin was often described, especially in his old age, by means of superlatives: the world's greatest talker, the century's most inspired reader, one of the finest minds of our time ... there is no doubt that he showed in more than one direction the unexpectedly large possibilities open to us at the top end of the range of human potential".
"It is seldom, moreover, that there is only one model that determines our thought; men (or cultures) obsessed at their models are rare, and while they may be more coherent at their strongest, they tend to collapse more violently when, in the end, their concepts are blown up by reality - experienced events, 'inner' or 'outer' that get in the way." (from 'Does Political Theory Still Exist?, 1962)
Isaiah Berlin

Isaiah Berlin was one of the most formidable defenders of philosophical liberalism and one of the most distinguished practitioners of the history of ideas. He took up R. G. Collingwood's (1889-1943) idea that the thought of a period or an individual is organized by 'constellations of absolute presuppositions'. Philosophical analysis thus required a historical dimension, but Berlin argued against the Marxist determinist view of history and its rejection of free will.

In his youth, Berlin studied Marx, although during World War I and the Revolution he had witnessed the ill omens of a totalitarian ideology struggling for power, and was not enchanted by the promises of the Bolsheviks. "History does not reveal causes; it presents only a blank succession of unexplained events", he said. However, he acknowledged that a historian's approach to his subject cannot be entirely objective or value-free; some degree of moral or psychological evaluation is inevitable. "I do not here wish to say that determinism is necessarily false, only that we neither speak nor think as if it could be true, and that it is difficult, and perhaps beyond our normal powers, to conceive what our picture of the world would be if we seriously believed it..." (from 'Historical Inevitability', 1953)

Berlin was born in Riga, Latvia, the son of a Jewish timber merchant. His grandfather on his mother's side was a Hasidic rabbi. Berlin had some religious education but confessed later that as a child he found the Talmud a "very, very boring book." At the age of 11 Berlin emigrated with his parents to England. A prodigy, he had already read War and Peace.

Berlin was educated at St Paul's School and Corpus Christi College, Oxford. To his father's disappointment, he did not go into the family timber business. From 1932 he taught at Oxford, where he became friends with A. J. Ayer and other leading analytic philosophers; he was the first Jew to be elected to a fellowship at All Souls College. With Ayer he argued about philosophy, but where Ayer was a radical liberal with a great appetite for life, Berlin was sexually insecure and unsure of his beliefs. "You're not much of a crusader are you?" Ayer once teased his friend. Later Berlin became disillusioned with Ayer as a philosopher, saying that "he never had an original idea in his life. He was like a mechanic, he fiddled with things and tried to fix them." (A. J. Ayer: A Life by Ben Rogers, 1999)

Berlin was a brilliant talker. The author Virginia Woolf, who first met Berlin in 1933, said that he looked like a swarthy Portuguese Jew and talked with the vivacity and assurance of a young Maynard Keynes. In 1934 Berlin visited Palestine; Zionism was as important to him as Communism was to his friend Guy Burgess, who was a Soviet spy.

Berlin was, with Austin, Ayer, and others, a founding father of Oxford philosophy. After publishing several papers on the rebellion against idealism, he broke away from the general spirit of positivism. With his first book, KARL MARX (1939), Berlin disengaged himself from the fashionable philosophical trends and began his lifelong examination of such thinkers as Vico, Herzen, Herder, Tolstoy, and Machiavelli, and such topics as liberty, determinism, relativism, historicism, nationalism, and his most distinctive doctrine: pluralism. He also translated several Russian classic authors into English, among them Turgenev.
Isaiah Berlin


During World War II Berlin served in the British Information Service in New York and later as First Secretary of the British Embassy in Washington. After the war, in the Soviet Union, he met the film director Sergei Eisenstein, who was in poor spirits after Stalin had condemned his film Ivan the Terrible, and the writers Boris Pasternak and Anna Akhmatova, both also in disfavour with the authorities. Later Berlin gave his account of them in the essay 'Meetings With Russian Writers': "When we met in Oxford in 1965 Akhmatova told me that Stalin had been personally enraged by the fact that she had allowed me to visit her: 'So our nun now receives visits from foreign spies,' he is alleged to have remarked, and followed this with obscenities which she could not at first bring herself to repeat to me." (from 'Conversations with Akhmatova and Pasternak' in The Proper Study of Mankind, 1997)

From 1947 to 1958 Berlin wrote and lectured at Oxford, in London, and at several American universities. Joseph Brodsky, the Russian-born poet and essayist, once said that Berlin's English was just like his Russian, only faster, "courting the speed of light." An extrovert by nature, Berlin has also been characterized as a "compulsive chatterer". Between 1957 and 1967 Berlin held the prestigious Chichele Chair in Social and Political Theory at Oxford. In his inaugural lecture, 'Two Concepts of Liberty', Berlin attempted to distinguish clearly between 'negative' and 'positive' liberty. He contrasted the ideas of such thinkers as J. S. Mill, Herzen, and others, who made minimal assumptions about the ultimate nature and needs of the subject, with Hegel and German Idealism, with their 'despotic vision' and dogmatic assumptions about the essence of the subject.

Among many other academic distinctions, Berlin was from 1966 a professor at the City University of New York. During the period which led up to the foundation of the State of Israel, he was a close friend of Chaim Weizmann (1874-1952), the first president of Israel. Berlin led a bachelor's life until 1956, when he married Aline de Gunzbourg at Hampstead Synagogue; she was an aristocratic Frenchwoman, who had three sons from previous marriages. They moved into Headington House, where Berlin lived for the rest of his life. "I don't mind death," he once said, "But I find dying a nuisance. I object to it." Berlin died in 1997 in Oxford of a heart attack following a long illness at the age of 88.

In one of his most famous essays, 'The Hedgehog and The Fox' (1953), Berlin focused on the tension between monist and pluralist visions of the world and history, and drew the line between different authors and philosophers. As the Greek poet Archilochus said: "The fox knows many things, but the hedgehog knows one big thing." The hedgehog needs only one principle, which directs its life; typical examples are Plato, Dante, Pascal, Nietzsche and Proust. The fox, the pluralist, travels many roads, according to the idea that there can be different, equally valid but mutually incompatible concepts of how to live. The roads do not have much connection, as is seen in the works of Aristotle, Montaigne, Shakespeare, Molière, Goethe and Balzac. In Tolstoy, whose view of history inspired Berlin to write the essay, he saw a fox who believed in being a hedgehog. Berlin's central dichotomy of monists and pluralists, and his interest in such Counter-Enlightenment figures as Vico, was later interpreted as an attack on the values of the Enlightenment. He was also accused of ultra-individualism.

In his essays on Machiavelli, Montesquieu and Hamann in AGAINST THE CURRENT (1979), Berlin stated that these thinkers replaced the doctrine that all reality forms a rational whole with a radical pluralism, from which sprang such -isms as irrationalism, nationalism, fascism, populism, existentialism, and, above all, some of the central values of liberalism. Berlin stated in FOUR ESSAYS ON LIBERTY (1969) that liberty is in essence the casting off of chains: 'negative liberty' allows men the freedom to act diversely, while 'positive liberty' limits some freedoms to achieve a higher good. Berlin once stated: "Total liberty can be dreadful, total equality can be equally frightful." All doctrines which define liberty as self-realization, and then prescribe what this is, end up defending liberty's opposite. To the perennial human problems there are no final answers. "Liberty is liberty, not equality or fairness or justice or human happiness or a quiet conscience." Liberal governments should recognize that political values must end up in conflict, and that all conflicts require negotiation.

For further reading: The Legacy of Isaiah Berlin, ed. by Mark Lilla et al (2001); Isaiah Berlin: A Life by Michael Ignatieff (1998); Isaiah Berlin by John Gray (1996); Isaiah Berlin's Liberalism by Claude J. Galipeau (1994); Conversations With Isaiah Berlin by Isaiah Berlin and Ramin Jahanbegloo (1992); Isaiah Berlin, ed. by Edna and Avishai Margalit (1991); On the Thought of Isaiah Berlin by Avishai Margalit et al (1990); Critical Appraisal of Sir Isaiah Berlin's Political Philosophy by Robert Kocis (1989); Personal Impressions, ed. by Henry Hardy (1981); The Idea of Freedom, ed. by Alan Ryan (1979); 'Berlin and the liberal tradition' by M. Cohen, in Philosophical Quarterly 10 (1960)

Michel Foucault

Michel Foucault was a French philosopher, or more specifically a historian of systems of thought, a title of his own devising, created when he was promoted to a new professorship at the prestigious Collège de France in 1970. Foucault is generally accepted as having been the most influential social theorist of the second half of the twentieth century, and he also remains an influential political philosopher (a Nietzschean). Foucault was listed as the most cited scholar in the humanities in 2007 by the ISI Web of Science. He was born on October 15, 1926, in Poitiers, France, and died in Paris in 1984 of an AIDS-related illness.

As an openly homosexual man, he was one of the first high-profile intellectuals to succumb to the illness, which was at the time still largely unknown. It would appear that he knew he had AIDS, and he reportedly was not afraid to die, as he sometimes shared with his friends his thoughts of suicide. Yet he continued working relentlessly until the end, spending the last eight months of his life on the final two volumes of The History of Sexuality, which came out just before he died in a Paris hospital on June 25, 1984. He is buried in the cemetery of Vendeuvre-du-Poitou, in the Vienne department, not far from Poitiers, the city where he was born.

Foucault's father was a surgeon and encouraged the same career for his son. Foucault graduated from the Saint-Stanislas school, having studied philosophy with Louis Girard, who would become a renowned professor. After that Foucault attended the Lycée Henri-IV in Paris; then, in 1946, equipped with an impressive academic record, he entered the École Normale Supérieure on the rue d'Ulm, the most prestigious French school for studies in the humanities. Fascinated by psychology, he received the equivalent of a BA degree in psychopathology in 1947. In 1948, working under the famous phenomenologist Maurice Merleau-Ponty, he received another BA-type degree, in philosophy. In 1950 he failed his agrégation (the high-level French competitive examination for the recruitment of university professors) in philosophy, but passed it in 1951. During the early 1950s he worked in a psychiatric hospital; from 1954 to 1958 he taught French at the University of Uppsala in Sweden. He then spent a year at the University of Warsaw, and a year at the University of Hamburg.
Psychiatric Power

Throughout his impressive career Foucault became known for his many demonstrations that power depends not on material relations or authority, but primarily on discursive networks. This new perspective, as applied to old questions such as madness, social discipline, body-image, truth, and normative sexuality, was instrumental in designing the post-modern intellectual landscape we still inhabit today. Today Michel Foucault is listed by The Times Higher Education Guide as the most cited intellectual worldwide in the humanities. This is not so, however, if we consider the field of philosophy alone, in spite of it being the discipline in which Foucault was largely educated and with which, it is safe to say, he might have identified most. This is probably because Foucault's definition of philosophy focuses on the critique of truth, and does so by conceiving it as inextricable from a critique of history; according to him, this makes philosophy a much richer discipline. Linking philosophy and history, however, is considered by many to be irreconcilable with the generally accepted definition of philosophy as independent of history.

In 1959 Foucault received his doctorat d'état under the supervision of Georges Canguilhem, the famous French philosopher. The thesis he presented was published two years later under the title Folie et déraison: Histoire de la folie à l'âge classique (Madness and Unreason: History of Madness in the Classical Age, 1961). In this text, Foucault abolished the possibility of separating madness and reason into universally objective categories. He did so by studying how the division had been historically established: how the distinctions we make between madness and sanity are a result of the invention of madness in the Age of Reason. He also offers a reading of Descartes' First Meditation, accusing Descartes of being able to doubt everything except his own sanity, thus excluding madness from his famous hyperbolic doubt.
Michel Foucault

In the 1960s Foucault was head of the philosophy department at the University of Clermont-Ferrand. It was at this time that he met the philosophy student Daniel Defert, whose political activism would be a major influence on Foucault. When Defert went to fulfill his volunteer service requirement in Tunisia, Foucault followed, teaching in Tunisia from 1966 to 1968. They returned to Paris during the time of the student revolts, an event that would have a profound effect on Foucault's work. He took the position of head of the philosophy department at the University of Paris-VIII at Vincennes, where he brought together some of the most promising thinkers in France at the time, including Alain Badiou and Jacques Rancière. Both went on to become leading thinkers of their generation, and both have taught at EGS. In 1971 he also formed, with others, the Prison Information Group, an organization that gave voice to the concerns of prisoners.

In The History of Sexuality, Volume 2: The Use of Pleasure, one of his last far-reaching works, he wrote: "[W]hat is philosophy today–philosophical activity, I mean–if it is not the critical work that thought brings to bear on itself?" Foucault is here practicing the very kind of critical questioning he is hinting at. It is a sort of reflective movement of thought that challenges the all-too-often uncritical tendencies of philosophical thinking, especially when it fails to see that it is itself part of what needs to be critiqued. In this light, Foucault is not simply stating something to be accepted or refuted, for that too would lead to complacent thinking. On the contrary, in his very use of language here and elsewhere there is a clear opening for something other, perhaps even unknown, made possible in part through a challenging use of the questioning mode.
Foucault’s project, then, should not be confused with traditional history and needs to be wrestled with. He helpfully continues: "In what does it [philosophy] consist, if not in the endeavor to know how and to what extent it might be possible to think differently, instead of legitimating what is already known?" Significantly, he is questioning the very discourse of philosophy as an established tradition whose tendency towards rigidity needs to be interrogated. Foucault’s re-defining of "philosophical activity" characterizes what philosophy needs to be today if it is to do more than simply perpetuate the status quo. There is thus in a very real sense a political and ethical level to Foucault’s work. This is to varying degrees evident in all of his corpus, hence the appeal many critical thinkers still find in his research today.

Foucault always endeavors to write what he calls a "history of the present", and in spite of the apparent contradiction this is a critical move with political reach. Because what matters today has roots in the past, a history of the present is a productive space for critical thinking. In Foucault's own words: "The game is to try to detect those things which have not yet been talked about, those things that, at the present time, introduce, show, give some more or less vague indications of the fragility of our system of thought, in our way of reflecting, in our practices." Early on he refers to such history in terms of archeology; later, as his research became more directly political, he speaks of genealogy, taking his cue from Friedrich Nietzsche.

Nietzsche's genealogy, and Foucault's development of it, is now widely read by legal scholars. They have only recently begun to address the radical challenges for law and legal theory that follow from Friedrich Nietzsche's pathbreaking work, and how Nietzsche's philosophical and rhetorical interventions illuminate the failures of contemporary legal theory. Nietzsche refuses to accept the notion that values such as duty carry absolute necessity as a given fact; in other words, he asks why duty should be an a priori value. We can understand his account by exploring first his genealogy of moral valuation, and then the critique that grows out of that genealogy.  Foucault later advanced Nietzsche's work on genealogy.
Foucault's work on Genealogy is inspired by Nietzsche

His numerous archaeological, or epistemological, studies trace the changing frameworks of the production of knowledge through the history of such practices as science, philosophy, art and literature. In his later genealogical practice, he argues that institutional power, intrinsically linked with knowledge, forms individual human "subjects" and subjects them to disciplinary norms and standards. These norms are produced historically; there is no timeless truth behind them. For Foucault, truth is something that is historically produced. He examines the "abnormal" human subject as an object-of-knowledge of the discourses of the human and empirical sciences, such as psychiatry, medicine, and penalization.

Foucault published The Order of Things in 1966, and it immediately became a bestseller in France, perhaps surprisingly given the complexity of the book (arguably his most difficult to read). It is an archeological study of the development of biology, economics and linguistics through the 18th and 19th centuries. It is in this book that he makes his famous prediction, at the end, that "man", a subject formed by discourse as a result of the arrangement of knowledge over the last two centuries, will soon be "erased, like a face drawn in sand at the edge of the sea." Less poetically, in the same book: "As the archeology of our thought easily shows, man is an invention of recent date. And one perhaps nearing its end."

Foucault's book Archaeology of Knowledge was published in 1969. As with The Order of Things, this text takes an approach to the history of knowledge inspired by Nietzsche's work, although not yet using Nietzsche's terminology of "genealogy", and it is a rare major work for Foucault in that it does not include a historical study per se, because what Foucault is really after in this book is the question of archeology as a method of historical analysis. This attitude to history is based on the idea that the historian is only interested in what has implications for present events, so history is always written from the perspective of the present and fulfills a need of the present. Thus, Foucault's own work can be traced to the events of his day: The Order of Things would have been inspired by the rise of structuralism in the 1960s, for example, and the prison uprisings of the early 1970s would have inspired Discipline and Punish: The Birth of the Prison (1975). Discourses are governed by such historical positioning, which has its own logic, and which Foucault refers to as an "archive". Archeology, Foucault explains, is the excavation of this archive.

With the publication of Discipline and Punish: The Birth of the Prison in 1975, his work begins to focus more explicitly on power. He rejects the Enlightenment's philosophical and juridical interpretation of power, conceptualized particularly in relation to representative government, and introduces instead the notion of power as "discipline", taking the penal system as the context of his analysis, only to generalize it further to society at large. He shows that this kind of discipline is a specific historical form of power that was taken up by the state from the army in the 17th century and spread widely across society through institutions. Here he begins to examine the relationship of power to knowledge and to the body, which would become a pivotal Foucauldian move in his later research. He argues that these institutions, including the army, the factory and the school, all discipline the bodies of their subjects through surveilling, knowledge-gathering techniques, both real and perceived. Indeed, the goal of such an exercise of power is to produce "docile bodies" that can be monitored, leading to the psychological control of individuals; Foucault goes as far as arguing that such power produces individuals as such. In mapping the emergence of a disciplinary society and its new articulation of power, he uses the model of Jeremy Bentham's Panopticon to illustrate the structure of power through an architecture designed for surveillance. The design of Bentham's prison allows for the invisible surveillance of a large number of prisoners by a small number of guards, eventually resulting in the internalization of surveillance by the prisoners themselves, making the actual guards obsolete. The prison is a tool of knowledge for the institutional formation of subjects; thus power and knowledge are inextricably linked. The rather controversial conclusion of the book is that the prison system is actually an institution whose purpose is to produce criminality and recidivism.
Michel Foucault

During the 1970s and 1980s Foucault's reputation grew, and he lectured all over the world. In 1971 he was invited to debate Noam Chomsky on Dutch television for The International Philosophers Project. It gave rise to a fascinating debate, which has been published several times since. Chomsky argued for the concept of human nature as a political guide for activism, while Foucault argued that any notion of human nature cannot escape power and must therefore first be analyzed as such.

During the later years of his professorship at the Collège de France he started writing The History of Sexuality, a major project he would never finish because of his untimely death. The first volume was published in French in 1976, and the English version followed two years later, entitled The History of Sexuality, Volume I: An Introduction. The French title, however, was much more indicative of what Foucault was after: "Histoire de la sexualité, tome 1: La Volonté de savoir", which translates as The History of Sexuality, Volume I: The Will to Knowledge (a newer edition is simply named The Will to Knowledge). It is a remarkably prominent work, maybe even his most influential. The main thesis is to be found in part two of the book, called "The Repressive Hypothesis", where Foucault argues that, in spite of the generally accepted belief that we have been sexually repressed, the notion of sexual repression cannot be separated from the concomitant imperative to talk about sex more than ever before. Indeed, according to Foucault, in the name of liberating so-called innate tendencies, certain behaviors are actually produced.

With the contention that modern power operates to produce the very behaviors it targets, Foucault attacks here again the notion of power as the repression of something that is already in place. This new notion of power has been, and continues to be, incredibly influential across many fields.

His last two books, the second and third volumes of the history of sexuality project, entitled The Use of Pleasure and The Care of the Self respectively, both relate the Western subject's understanding of ourselves as sexual beings to our moral and ethical lives. He traces the history of the construction of subjectivity through the analysis of ancient texts. In The Use of Pleasure he looks at pleasure in the Greek social system as a play of power in social relations; pleasure is derived from the social position realized through sexuality. Later, in Christianity, pleasure was to become linked with illicit conduct and transgression. In The Care of the Self, Foucault looks at the Greeks' systems of rules that were applied to sexual and other forms of social conduct. He analyses how the rules of self-control allow access to pleasure and to truth. In this structure of a subject's life dominated by the care of the self, excess becomes the danger, rather than Christian deviance.
Foucault's work on Power

What Foucault took from delving into these ancient texts is the notion of an ethics concerned with one's relation to one's self. Indeed, the constitution of the self is the overarching question for Foucault at the end of his life. Yet the point for him was not to present a new ethics; rather, it was the possibility of new analyses focused on subjectivity itself. Foucault became very interested in the way subjectivity is constructed, and especially in how subjects produce themselves vis-à-vis truth.

In his last few books Foucault works with a system of control that cannot be understood through traditional concepts of authority, which he calls bio-power. Bio-power can be understood as the prerogative of the state to "make live and let die", as distinct from the sovereign power of the king, which would "let live and make die". This attitude toward the lives of social subjects is a way of understanding the new formation of power in Western society. Foucault's history of sexuality suggests that pleasure is found in regulation and self-discipline rather than in libertine or permissive conduct, and it encourages resistance to the state through the development of individual ethics aimed at the production of an admirable life: "We must at the same time conceive of sex without the law, and power without the king."

Monday, April 9, 2012

John von Neumann: Father of Game Theory

Man and Machine 

John von Neumann, one of the twentieth century's preeminent scientists, was a great mathematician and physicist as well as an early pioneer in fields such as game theory, nuclear deterrence, and modern computing. He was a polymath who possessed fearsome technical prowess and is considered "the last of the great mathematicians". His was a mind comfortable in the realms of both man and machine. His kinship with the logical machine was displayed at an early age by his ability to compute the product of two eight-digit numbers in his head. His strong and lasting influence on the human world is apparent through his many friends and admirers, who so often commented on von Neumann's greatness as a man and a scientist. He made major contributions to set theory, functional analysis, quantum mechanics, ergodic theory, geometry, fluid dynamics, economics, linear programming, game theory, computer science, numerical analysis, hydrodynamics, nuclear physics and statistics.

Although he is well known for his command of logic and rigorous mathematical science, von Neumann's genius can be said to have grown from a comfortable and relaxed upbringing.

 

Early Life and Education in Budapest

He was born Neumann Janos on December 28, 1903, in Budapest, the capital of Hungary, the first-born son of Neumann Miksa and Kann Margit. In Hungarian the family name appears before the given name, so in English the parents' names would be Max Neumann and Margaret Kann. Max Neumann purchased a title early in his son's life, and so the family became von Neumann.

Max Neumann, born in 1870, arrived in Budapest in the late 1880s. He was a non-practicing Hungarian Jew with a good education. He became a doctor of laws and then worked as a lawyer for a bank. He had a good marriage to Margaret, who came from a prosperous family.

In 1903, Budapest was growing rapidly, a booming intellectual capital. It is said that the Budapest von Neumann was born into "was about to produce one of the most glittering single generations of scientists, writers, artists, musicians, and useful expatriate millionaires to come from one small community since the city-states of the Italian Renaissance." Indeed, John von Neumann was one of those who, through natural genius and a prosperous family, were able to excel in the elitist educational system of the time.

At a very young age, von Neumann was interested in math, the nature of numbers and the logic of the world around him. Even at age six, when his mother once stared aimlessly in front of her, he asked, "What are you calculating?", displaying his natural affinity for numbers. The young von Neumann was not only interested in math, though. Just as in his adult life he would claim fame in a wide range of disciplines (and be declared a genius in each one), he also had varied interests as a child. At age eight he became fascinated by history and read all forty-four volumes of the universal history that resided in the family's library. Even this early, von Neumann showed that he was comfortable applying his mind to both the logical and the social world. His parents encouraged him in every interest, but were careful not to push their young son, as many parents are apt to do when they find they have a genius for a child. This allowed von Neumann to develop not only a powerful intellect but also what many people considered a likable personality.

It was never in question that von Neumann would attend university, and in 1914, at the age of 10, the educational road to the university started at the Lutheran Gymnasium. This was one of the three best institutions of its kind in Budapest at the time, and it gave von Neumann the opportunity to develop his great intellect. Before he graduated from this high school he was already considered a colleague by most of the university mathematicians. His first paper, dealing with the zeros of certain minimal polynomials, was written when he was 17 and published in 1922 in the Journal of the German Mathematical Society.

 
University — Berlin, Zurich and Budapest

In 1921 von Neumann was sent to study chemical engineering, first at the University of Berlin and then, two years later, in Zurich. Though John von Neumann had little interest in either chemistry or engineering, his father was a practical man and encouraged this path. At that time chemical engineering was a popular career that almost guaranteed a good living, partly because of the success of German chemists from 1914 to 1918. So von Neumann set off on the road planned in part by his father Max. He would spend two years in Berlin in a non-degree chemistry program, then take the entrance exam for second-year standing in the chemical engineering program at the prestigious Eidgenössische Technische Hochschule (ETH) in Zurich, where Einstein had failed the entrance exam in 1895 and then gained acceptance a year later.

During this time of practical undergraduate study, von Neumann was executing another plan more in tune with his interests. In the summer after his studies in Berlin, and before he went to Zurich, he enrolled at Budapest University as a candidate for an advanced doctorate in mathematics. His Ph.D. thesis was to attempt an axiomatization of the set theory developed by Georg Cantor. At the time this was one of the hot topics in mathematics, and it had already been studied by great professors, causing most of them a great deal of trouble. Nonetheless, the young von Neumann, devising and executing this plan at the age of 17, was not one to shy away from great intellectual challenges.

Von Neumann breezed through his two years at Berlin and then set himself to work on chemical engineering at the ETH and his mathematical studies in Budapest. He received excellent grades at the ETH, even in classes he almost never attended: a perfect mark of 6 in each of his courses during his first semester in the winter of 1923-24, including organic chemistry, inorganic chemistry, analytical chemistry, experimental physics, higher mathematics, and French.

From time to time he visited Budapest University when his studies there required his presence, and to see his family. He worked on his set theory thesis in Zurich while completing classes for the ETH. After finishing his thesis he took the final exams in Budapest and received his Ph.D. with highest honors. This was just after his graduation from the ETH, so in 1926 he held two degrees, an undergraduate degree in chemical engineering and a Ph.D. in mathematics, all by the time he was twenty-two.
This von Neumann stamp, issued in Hungary in 1992, honors his contributions to mathematics and computing.

Game Theory 

Von Neumann is commonly described as a practical joker and always the life of the party. He and his second wife, Klara, held a party every week or so, creating a kind of salon at their house. Von Neumann used his phenomenal memory to compile an immense library of jokes, which he used to liven up conversation. He loved games and toys, which probably contributed in great part to his work in game theory.

An occasional heavy drinker, von Neumann was an aggressive and reckless driver, supposedly totaling a car every year or so. According to William Poundstone's Prisoner's Dilemma, "an intersection in Princeton was nicknamed 'Von Neumann Corner' for all the auto accidents he had there."

His colleagues found it "disconcerting" that upon entering an office where a pretty secretary worked, von Neumann habitually would "bend way way over, more or less trying to look up her dress" (Steve J. Heims, John Von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death, 1980, quoted in Prisoner's Dilemma, p. 26). Some secretaries were so bothered by von Neumann that they put cardboard partitions at the front of their desks to block his view. Despite his personality quirks, no one could dispute that von Neumann was brilliant. Beginning in 1927, he applied new mathematical methods to quantum theory, and his work was instrumental in subsequent "philosophical" interpretations of the theory.


For Von Neumann, the inspiration for game theory was poker, a game he played occasionally and not terribly well. Von Neumann realized that poker was not guided by probability theory alone, as an unfortunate player who would use only probability theory would find out. Von Neumann wanted to formalize the idea of "bluffing," a strategy that is meant to deceive the other players and hide information from them.
In his 1928 article "Theory of Parlor Games," von Neumann made his first formal approach to game theory and proved the famous minimax theorem. From the outset, von Neumann knew that game theory would prove invaluable to economists. He teamed up with Oskar Morgenstern, an Austrian economist at Princeton, to develop his theory.

Their book, Theory of Games and Economic Behavior, revolutionized the field of economics. Although the work itself was intended solely for economists, its applications to psychology, sociology, politics, warfare, recreational games, and many other fields soon became apparent. Although von Neumann appreciated game theory's applications to economics, he was most interested in applying his methods to politics and warfare, an interest perhaps stemming from his favorite childhood game, Kriegspiel, a chess-like military simulation. He used his methods to model the Cold War interaction between the U.S. and the USSR, viewing them as two players in a zero-sum game.
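To make the minimax idea concrete, here is a minimal sketch in Python (my illustration, with made-up payoffs; it is not drawn from von Neumann's paper or the book). In a zero-sum game described by a payoff matrix, the row player picks the row whose worst case is best, the column player picks the column that holds the row player's best case down, and von Neumann's theorem says the two resulting values always coincide once randomized ("mixed") strategies are allowed.

    # Maximin and minimax values of a small zero-sum game.
    # Entries are payoffs to the row player; the column player gets the negative.
    payoffs = [
        [3, -1, 2],
        [1,  0, 4],
    ]

    # Row player: choose the row whose worst-case payoff is largest.
    maximin = max(min(row) for row in payoffs)

    # Column player: choose the column that minimizes the row player's
    # best-case payoff.
    minimax = min(max(row[j] for row in payoffs)
                  for j in range(len(payoffs[0])))

    print(maximin, minimax)  # prints "0 0": a saddle point, so this game
                             # has a value even without mixed strategies

Here the two values happen to agree at a saddle point; in games where they differ, the minimax theorem guarantees they meet once each player is allowed to randomize.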

From the very beginning of World War II, von Neumann was confident of the Allies' victory. He sketched out a mathematical model of the conflict from which he deduced that the Allies would win, applying some of the methods of game theory to his predictions. In 1943, von Neumann was invited to work on the Manhattan Project. He did crucial calculations on the implosion design of the atomic bomb, allowing for a more efficient, and more deadly, weapon. His mathematical models were also used to plan the paths the bombers carrying the bombs would take to minimize their chances of being shot down. He also helped select which locations in Japan to bomb; among the potential targets he examined were Kyoto, Yokohama, and Kokura.

"Of all of Von Neumann's postwar work, his development of the digital computer looms the largest today." (Poundstone 76) After examining the Army's ENIAC during the war, Von Neumann came up with ideas for a better computer, using his mathematical abilities to improve the computer's logic design. Once the war had ended, the U.S. Navy and other sources provided funds for Von Neumann's machine, which he claimed would be able to accurately predict weather patterns.Capable of 2,000 operations a second, the computer did not predict weather very well, but became quite useful doing a set of calculations necessary for the design of the hydrogen bomb. Von Neumann is also credited with coming up with the idea of basing computer calculations on binary numbers, having programs stored in computer's memory in coded form as opposed to punchcards, and several other crucial developments. Von Neumann's wife, Klara, became one of the first computer programmers.

Von Neumann later helped design the SAGE computer system, intended to detect a Soviet nuclear attack. In 1948, von Neumann became a consultant for the RAND Corporation. RAND (Research ANd Development) was founded by defense contractors and the Air Force as a "think tank" to "think about the unthinkable." Its main focus was exploring the possibilities of nuclear war and the strategies that might be used in one.

Von Neumann was, at the time, a strong supporter of "preventive war." Confident even during World War II that the Russian spy network had obtained many of the details of the atom bomb design, Von Neumann knew that it was only a matter of time before the Soviet Union became a nuclear power. He predicted that were Russia allowed to build a nuclear arsenal, a war against the U.S. would be inevitable. He therefore recommended that the U.S. launch a nuclear strike at Moscow, destroying its enemy and becoming a dominant world power, so as to avoid a more destructive nuclear war later on. "With the Russians it is not a question of whether but of when," he would say. An oft-quoted remark of his is, "If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?"
Just a few years after "preventive war" was first advocated, it became an impossibility. By 1953, the Soviets had 300-400 warheads, meaning that any nuclear first strike would be met with effective retaliation.
In 1954, von Neumann was appointed to the Atomic Energy Commission. A year later, he was diagnosed with bone cancer. William Poundstone's Prisoner's Dilemma suggests that the disease resulted from the radiation von Neumann received as a witness to the atomic tests on Bikini Atoll: "A number of physicists associated with the bomb succumbed to cancer at relatively early ages."


Quantum Mechanics

Von Neumann was a creative and original thinker, but he also had the ability to take other people's suggestions and concepts and, in short order, turn them into something much more complete and logical. This is in a way what he did with quantum mechanics after he went to the university in Göttingen, Germany, upon receiving his degrees in 1926. Quantum mechanics deals with the nature of atomic particles and the laws that govern their actions. Theories of quantum mechanics began to appear to confront the discrepancies that arose when purely Newtonian physics was used to describe the observations of atomic particles.

One of these observations has to do with the wavelengths of light that atoms can absorb and emit. For example, hydrogen atoms absorb energy at 656.3 nm, 486.1 nm, 434.0 nm, or 410.2 nm, but not at the wavelengths in between. This was contrary to the principles of physics as they stood at the end of the nineteenth century, which predicted that an electron orbiting the nucleus of an atom should radiate all wavelengths of light, thereby losing energy and quickly falling into the nucleus. This is obviously not what is observed, so a new theory of quanta was introduced by the Berlin physicist Max Planck in 1900, which said that energy could only be emitted in certain definable packets.
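Those particular wavelengths are not arbitrary: they are the visible Balmer series of hydrogen, and (a detail not spelled out in the text above) they all follow from the Rydberg formula

    \[ \frac{1}{\lambda} = R_H \left( \frac{1}{2^2} - \frac{1}{n^2} \right), \qquad n = 3, 4, 5, 6, \]

with R_H ≈ 1.097 × 10^7 m^-1: n = 3 gives λ ≈ 656.3 nm, n = 4 gives 486.1 nm, and so on. Discrete formulas of exactly this kind were what any new quantum theory had to explain.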

This led to two competing theories describing the nature of the atom, which could only absorb and emit energy in specific quanta. One of these, developed by Erwin Schrödinger, suggested that the electron in hydrogen is analogous to a string in a musical instrument. Like a string, which emits a specific tone along with overtones, the electron would have a certain "tone" at which it would emit energy. Using this theory, Schrödinger developed a wave equation for the electron that correctly predicted the wavelengths of light that hydrogen would emit.

Another theory, developed by physicists at Göttingen including Werner Heisenberg, Max Born, and Pascual Jordan, focused on the position and momentum of an electron in an atom. They contended that these values were not directly observable (only the light emitted by the atom could be observed), and so could behave much differently from the motion of a particle in Newtonian physics. They theorized that the values of position and momentum should be described by mathematical constructs other than ordinary numbers, and the calculations they used to describe the motion of the electron made use of matrices and matrix algebra.

These two systems, although apparently very different, were quickly shown to be mathematically equivalent, two forms of the same principle. The proponents of each system nonetheless denounced the other's theory and claimed their own to be superior. It was in this environment, in 1926, that von Neumann appeared on the scene and quickly went to work reconciling and advancing the theories of quantum mechanics.

Von Neumann wanted to find what the two systems, wave mechanics and matrix mechanics, had in common. Through a more rigorous mathematical approach he sought a new theory, more fundamental and powerful than the other two. He abstracted the two systems using an axiomatic approach, in which each logical state is the definite consequence of the previous state. Von Neumann constructed the rules of "abstract Hilbert space" to aid in his development of a mathematical structure for quantum theory. His formalization of the subject allowed considerable advances to be made by others, and it even predicted strange new consequences, such as the suggestion that observation itself can affect the state of electrons in a laboratory.
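In the textbook form of that formalism (a standard summary, not a quotation from von Neumann's own work), a physical state is a unit vector ψ in a Hilbert space, an observable is a self-adjoint operator A on that space, and the expected result of measuring A in state ψ is

    \[ \langle A \rangle = \langle \psi | A | \psi \rangle . \]

Schrödinger's wave mechanics and Heisenberg's matrix mechanics then appear as two coordinate representations of the same vectors and operators, which is precisely the sense in which the two theories are equivalent.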





Marriages and America

From 1927 to 1929, after his formalization of quantum mechanics, von Neumann traveled extensively to various academic conferences and colloquia, turning out mathematical papers at the rate of one a month at times. By the end of 1929 he had 32 papers to his name, all of them in German, and each written in a highly logical and orderly manner so that other mathematicians could easily incorporate von Neumann's ideas into their own work.

Von Neumann was now a rising star in the academic world, lecturing on new ideas, assisting other great minds of the time with their own work, and creating an image for himself as a likable and witty young genius in his early twenties. He would often avoid arguments with the more confrontational of his colleagues by telling one of his many jokes or stories, some of which he could not tell in the presence of ladies (though there were few women at these mathematical seminars). At other times he would bring up some interesting fact from ancient history, changing the subject and making himself seem surprisingly learned for his age and professional interests.
Von Neumann at Princeton

Near the end of 1929 he was offered a lectureship at Princeton, in an America that was trying to stimulate its mathematical sciences by seeking out the best of Europe. At the same time, von Neumann decided to marry Mariette Kovesi, whom he had known since his early childhood. Their honeymoon was a cruise across the Atlantic to New York, although most of the trip was subdued by Mariette's unexpected seasickness.

They had a daughter, Marina, in 1935. Von Neumann was affectionate with his new daughter, but did not contribute to her care or to the housework, which he considered to be the job of the wife. The gap between the lively 26-year-old Mariette and the respectable 31-year-old John von Neumann began to widen, and in 1936 they broke up: Mariette went home to Budapest, while von Neumann, after drifting around Europe to various engagements, returned to the United States. Soon after, on a trip to Budapest, he met Klari Dan, and they were married in 1938.

Although this marriage lasted longer than his first, von Neumann was often distant from his personal life, obsessed with and engrossed in his thoughts and work. From this personal tradeoff the world of science profited tremendously, and much of his work changed all of our lives. Two of the most influential and well-known of von Neumann's interests during his time in America, from 1933 (when he was appointed one of the few original members of the Institute for Advanced Study at Princeton) to 1957 (when he died of cancer), were the development of nuclear weapons and the invention of the modern digital computer.

Von Neumann’s Role in Nuclear Development

In the biography of a genius such as von Neumann it would be easy to overestimate his role in the development of nuclear weapons at Los Alamos in 1943. It is important to remember that there was a collection of great minds there, recruited by the American government to produce what many saw as a necessary evil, and that the fear that Germany would produce an atomic bomb before the US drove the effort. Von Neumann's two main contributions to the Los Alamos project were the mathematization of the development process and his work on the implosion bomb.

The scientists at Los Alamos were used to doing experiments, but it is difficult to run many experiments when developing weapons of mass destruction. They needed some way to predict what was going to happen in these complex reactions without actually performing them. Von Neumann was therefore a member of the team that invented modern mathematical modeling. He applied his mathematical skills at every level, from helping senior officials make logical decisions to knocking down tough calculations for those at the bottom of the ladder.

The atomic bombs that were eventually dropped were of two kinds, one using uranium-235 as its fissionable material, the other using plutonium. An atomic chain reaction occurs when the fissionable material in the bomb reaches a critical mass, or density. In the uranium-235 bomb, this was done using the gun method: a large mass of uranium-235, still under the critical mass, would have another mass of uranium-235 shot into a cavity. The combined masses would then reach critical mass, and an uncontrolled nuclear fission reaction would occur. This process was known to work and was relatively simple; the difficult part was obtaining the uranium-235, which has to be separated from other isotopes of uranium that are chemically identical.

Plutonium, on the other hand, can be separated by chemical means, so production of plutonium-based bombs could progress more quickly. The problem was that plutonium bombs could not use the gun method. The plutonium would need to reach critical mass through another technique: implosion. Here, a mass of plutonium is completely surrounded by high explosives that are ignited simultaneously, compressing the plutonium to supercritical density and setting off the explosion.

Although von Neumann did not arrive first at the implosion technique for plutonium, he was the one who made it work, developing the "implosion lens" of high explosives that would correctly compress the plutonium. This is just one more example of von Neumann's ability to pick up an idea and advance it where others had gotten stuck.



Development of Modern Computing
Von Neumann with the first supercomputer


Just like the project at Los Alamos, the development of the modern computer was a collaborative effort, drawing on the ideas and efforts of many great scientists. Also like the development of nuclear weaponry, the development of the modern computer has filled many volumes. With so much involved in the process, and with von Neumann himself involved in so much of it, only a few of his contributions can be covered here.

A von Neumann language is any programming language that is a high-level, abstract, isomorphic copy of the von Neumann architecture. As of 2009, most programming languages fit this description, likely as a consequence of the extensive dominance of the von Neumann computer architecture over the preceding 50 years.

Von Neumann's experience with mathematical modeling at Los Alamos, and the computational tools he used there, gave him the experience he needed to push the development of the computer. Because of his far-reaching and influential connections, through the IAS, Los Alamos, a number of universities, and his reputation as a mathematical genius, he was also in a position to secure funding and resources to help develop the modern computer. In 1947 this was a very difficult task, because computing was not yet a respected science; most people saw computing only as making a bigger and faster calculator. Von Neumann, on the other hand, saw bigger possibilities.

Von Neumann wanted computers to be put to use in all fields of science, bringing a more logical and precise nature to those fields, as he had tried to do himself. With his contributions to the architecture of the computer, which describe how logical operations can be represented by numbers that are then read and processed, many of von Neumann's dreams have come true. Today we have extremely powerful computing machines used in scores of scientific fields, as well as in many non-scientific fields.

In von Neumann's later years, however, he worked on and dreamed of applications for computers that have not yet been realized. He drew on his many other interests and imagined powerful combinations of the computer's ability to perform logical operations quickly with our brain's unique ability to solve ill-defined problems with little data, or with life's ability to reproduce and evolve. In this vein, von Neumann developed a theory of artificial automata. He believed that life was ultimately based on logic, and so any construct that supports logic should be able to support life. Artificial automata, like their natural counterparts, process information and proceed in their actions based on data received from their environment, in light of rules and instructions they hold internally. Cellular automata are a class of automata that exist on an infinite plane covered by square cells, much like a sheet of graph paper. Each of these cells can rest in one of a number of states. The whole plane of cells goes through time steps, where the new state of each cell is determined by its own state and the states of its neighboring cells. In these simple actions there lies great complexity and the basis for life-like behavior.
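As an illustration of that update rule, here is a minimal Python sketch (mine, not von Neumann's; his self-reproducing automaton used 29 states per cell, while this uses the two-state rules of Conway's later Game of Life, with a wrap-around grid standing in for the infinite plane):

    # One time step of a two-state cellular automaton (Game of Life rules).
    def step(grid):
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # Count live cells among the eight neighbors (wrap-around).
                live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0))
                # The next state depends only on the cell's own state
                # and its neighbors' states, as described above.
                new[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
        return new

    # A "blinker": three live cells in a row oscillate with period two.
    grid = [[0] * 5 for _ in range(5)]
    for c in (1, 2, 3):
        grid[2][c] = 1
    nxt = step(grid)
    print([(r, c) for r in range(5) for c in range(5) if nxt[r][c]])
    # prints [(1, 2), (2, 2), (3, 2)]: the horizontal row has become vertical

Even a rule this simple produces oscillators, gliders, and self-sustaining patterns, a small taste of the complexity the paragraph above describes.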

Untimely End

Perhaps all deaths can be said to come too early; John von Neumann's came far too early. He died on February 8, 1957, eighteen months after he was diagnosed with cancer. He never finished his work on automata theory, although he worked for as long as he possibly could. He attended ceremonies held in his honor in a wheelchair and tried to keep up appearances with his family and friends. Though he had accomplished so much in his years, he could not accept death, could not imagine a world that existed without his mind constantly thinking and solving. But today his ideas live on, and they affect our lives in more ways than the few examples given here can demonstrate.

Wednesday, April 4, 2012

Who Was Milton Friedman?

The history of economic thought in the twentieth century is a bit like the history of Christianity in the sixteenth century. Until John Maynard Keynes published The General Theory of Employment, Interest, and Money in 1936, economics—at least in the English-speaking world—was completely dominated by free-market orthodoxy. Heresies would occasionally pop up, but they were always suppressed. Classical economics, wrote Keynes in 1936, “conquered England as completely as the Holy Inquisition conquered Spain.” And classical economics said that the answer to almost all problems was to let the forces of supply and demand do their job.
But classical economics offered neither explanations nor solutions for the Great Depression. By the middle of the 1930s, the challenges to orthodoxy could no longer be contained. Keynes played the role of Martin Luther, providing the intellectual rigor needed to make heresy respectable. Although Keynes was by no means a leftist—he came to save capitalism, not to bury it—his theory said that free markets could not be counted on to provide full employment, creating a new rationale for large-scale government intervention in the economy.

Keynesianism was a great reformation of economic thought. It was followed, inevitably, by a counter-reformation. A number of economists played important roles in the great revival of classical economics between 1950 and 2000, but none was as influential as Milton Friedman. If Keynes was Luther, Friedman was Ignatius of Loyola, founder of the Jesuits. And like the Jesuits, Friedman’s followers have acted as a sort of disciplined army of the faithful, spearheading a broad, but incomplete, rollback of Keynesian heresy. By the century’s end, classical economics had regained much though by no means all of its former dominion, and Friedman deserves much of the credit.

I don’t want to push the religious analogy too far. Economic theory at least aspires to be science, not theology; it is concerned with earth, not heaven. Keynesian theory initially prevailed because it did a far better job than classical orthodoxy of making sense of the world around us, and Friedman’s critique of Keynes became so influential largely because he correctly identified Keynesianism’s weak points. And just to be clear: although this essay argues that Friedman was wrong on some issues, and sometimes seemed less than honest with his readers, I regard him as a great economist and a great man.

John Maynard Keynes


Milton Friedman


Milton Friedman played three roles in the intellectual life of the twentieth century. There was Friedman the economist’s economist, who wrote technical, more or less apolitical analyses of consumer behavior and inflation. There was Friedman the policy entrepreneur, who spent decades campaigning on behalf of the policy known as monetarism—finally seeing the Federal Reserve and the Bank of England adopt his doctrine at the end of the 1970s, only to abandon it as unworkable a few years later. Finally, there was Friedman the ideologue, the great popularizer of free-market doctrine. Did the same man play all these roles? Yes and no. All three roles were informed by Friedman’s faith in the classical verities of free-market economics. 

Moreover, Friedman’s effectiveness as a popularizer and propagandist rested in part on his well-deserved reputation as a profound economic theorist. But there’s an important difference between the rigor of his work as a professional economist and the looser, sometimes questionable logic of his pronouncements as a public intellectual. While Friedman’s theoretical work is universally admired by professional economists, there’s much more ambivalence about his policy pronouncements and especially his popularizing. And it must be said that there were some serious questions about his intellectual honesty when he was speaking to the mass public.

But let’s hold off on the questionable material for a moment, and talk about Friedman the economic theorist. For most of the past two centuries, economic thinking has been dominated by the concept of Homo economicus. The hypothetical Economic Man knows what he wants; his preferences can be expressed mathematically in terms of a “utility function.” And his choices are driven by rational calculations about how to maximize that function: whether consumers are deciding between corn flakes or shredded wheat, or investors are deciding between stocks and bonds, those decisions are assumed to be based on comparisons of the “marginal utility,” or the added benefit the buyer would get from acquiring a small amount of the alternatives available.
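In textbook notation (mine, not Friedman's), Economic Man facing prices p_1, p_2 and income m solves

    \[ \max_{x_1, x_2} U(x_1, x_2) \quad \text{subject to} \quad p_1 x_1 + p_2 x_2 \le m , \]

and at an optimum the marginal utility per dollar is equalized across the alternatives, \( \frac{\partial U / \partial x_1}{p_1} = \frac{\partial U / \partial x_2}{p_2} \). The corn flakes versus shredded wheat comparison above is exactly this condition at work.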

It’s easy to make fun of this story. Nobody, not even Nobel-winning economists, really makes decisions that way. But most economists—myself included—nonetheless find Economic Man useful, with the understanding that he’s an idealized representation of what we really think is going on. People do have preferences, even if those preferences can’t really be expressed by a precise utility function; they usually make sensible decisions, even if they don’t literally maximize utility. You might ask, why not represent people the way they really are? The answer is that abstraction, strategic simplification, is the only way we can impose some intellectual order on the complexity of economic life. And the assumption of rational behavior has been a particularly fruitful simplification.

The question, however, is how far to push it. Keynes didn’t make an all-out assault on Economic Man, but he often resorted to plausible psychological theorizing rather than careful analysis of what a rational decision-maker would do. Business decisions were driven by “animal spirits,” consumer decisions by a psychological tendency to spend some but not all of any increase in income, wage settlements by a sense of fairness, and so on.

But was it really a good idea to diminish the role of Economic Man that much? No, said Friedman, who argued in his 1953 essay “The Methodology of Positive Economics” that economic theories should be judged not by their psychological realism but by their ability to predict behavior. And Friedman’s two greatest triumphs as an economic theorist came from applying the hypothesis of rational behavior to questions other economists had thought beyond its reach.

In his 1957 book A Theory of the Consumption Function—not exactly a crowd-pleasing title, but an important topic—Friedman argued that the best way to make sense of saving and spending was not, as Keynes had done, to resort to loose psychological theorizing, but rather to think of individuals as making rational plans about how to spend their wealth over their lifetimes. This wasn’t necessarily an anti-Keynesian idea—in fact, the great Keynesian economist Franco Modigliani simultaneously and independently made a similar case, with even more care in thinking about rational behavior, in work with Albert Ando. But it did mark a return to classical ways of thinking—and it worked. The details are a bit technical, but Friedman’s “permanent income hypothesis” and the Ando-Modigliani “life cycle model” resolved several apparent paradoxes about the relationship between income and spending, and remain the foundations of how economists think about spending and saving to this day.

Friedman’s work on consumption behavior would, in itself, have made his academic reputation. An even bigger triumph, however, came from his application of Economic Man theorizing to inflation. In 1958 the New Zealand–born economist A.W. Phillips pointed out that there was a historical correlation between unemployment and inflation, with high inflation associated with low unemployment and vice versa. For a time, economists treated this correlation as if it were a reliable and stable relationship. This led to serious discussion about which point on the “Phillips curve” the government should choose. For example, should the United States accept a higher inflation rate in order to achieve a lower unemployment rate?
In 1967, however, Friedman gave a presidential address to the American Economic Association in which he argued that the correlation between inflation and unemployment, even though it was visible in the data, did not represent a true trade-off, at least not in the long run. “There is,” he said, “always a temporary trade-off between inflation and unemployment; there is no permanent trade-off.” In other words, if policymakers were to try to keep unemployment low through a policy of generating higher inflation, they would achieve only temporary success. According to Friedman, unemployment would eventually rise again, even as inflation remained high. The economy would, in other words, suffer the condition Paul Samuelson would later dub “stagflation.”

How did Friedman reach this conclusion? (Edmund S. Phelps, who was awarded the Nobel Memorial Prize in economics in 2006, simultaneously and independently arrived at the same result.) As in the case of his work on consumer behavior, Friedman applied the idea of rational behavior. He argued that after a sustained period of inflation, people would build expectations of future inflation into their decisions, nullifying any positive effects of inflation on employment. For example, one reason inflation may lead to higher employment is that hiring more workers becomes profitable when prices rise faster than wages. But once workers understand that the purchasing power of their wages will be eroded by inflation, they will demand higher wage settlements in advance, so that wages keep up with prices. As a result, after inflation has gone on for a while, it will no longer deliver the original boost to employment. In fact, there will be a rise in unemployment if inflation falls short of expectations.
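In the standard textbook rendering of the Friedman-Phelps argument (the notation is mine, not theirs), expected inflation shifts the whole trade-off:

    \[ \pi_t = \pi_t^{e} - a \, (u_t - u^{*}), \qquad a > 0 , \]

where \( \pi_t \) is inflation, \( \pi_t^{e} \) expected inflation, \( u_t \) unemployment, and \( u^{*} \) the "natural rate." A burst of inflation above expectations lowers unemployment temporarily, but once expectations catch up, so that \( \pi_t = \pi_t^{e} \), unemployment returns to \( u^{*} \) whatever the inflation rate: the long-run Phillips curve is vertical.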

At the time Friedman and Phelps propounded their ideas, the United States had little experience with sustained inflation. So this was truly a prediction rather than an attempt to explain the past. In the 1970s, however, persistent inflation provided a test of the Friedman-Phelps hypothesis. Sure enough, the historical correlation between inflation and unemployment broke down in just the way Friedman and Phelps had predicted: in the 1970s, as the inflation rate rose into double digits, the unemployment rate was as high or higher than in the stable-price years of the 1950s and 1960s. Inflation was eventually brought under control in the 1980s, but only after a painful period of extremely high unemployment, the worst since the Great Depression.

By predicting the phenomenon of stagflation in advance, Friedman and Phelps achieved one of the great triumphs of postwar economics. This triumph, more than anything else, confirmed Milton Friedman’s status as a great economist’s economist, whatever one may think of his other roles.

One interesting footnote: although Friedman made great strides in macroeconomics by applying the concept of individual rationality, he also knew where to stop. In the 1970s, some economists pushed Friedman’s analysis of inflation even further, arguing that there is no usable trade-off between inflation and unemployment even in the short run, because people will anticipate government actions and build that anticipation, as well as past experience, into their price-setting and wage-bargaining. This doctrine, known as “rational expectations,” swept through much of academic economics. But Friedman never went there. His reality sense warned that this was taking the idea of Homo economicus too far. And so it proved: Friedman’s 1967 address has stood the test of time, while the more extreme views propounded by rational expectations theorists in the Seventies and Eighties have not.


“Everything reminds Milton of the money supply. Well, everything reminds me of sex, but I keep it out of the paper,” wrote MIT’s Robert Solow in 1966. For decades, Milton Friedman’s public image and fame were defined largely by his pronouncements on monetary policy and his creation of the doctrine known as monetarism. It’s somewhat surprising to realize, then, that monetarism is now widely regarded as a failure, and that some of the things Friedman said about “money” and monetary policy—unlike what he said about consumption and inflation—appear to have been misleading, and perhaps deliberately so.
To understand what monetarism was all about, the first thing you need to know is that the word “money” doesn’t mean quite the same thing in Economese that it does in plain English. When economists talk of the money supply, they don’t mean wealth in the usual sense. They mean only those forms of wealth that can be used more or less directly to buy things. Currency—pieces of green paper with pictures of dead presidents on them—is money, and so are bank deposits on which you can write checks. But stocks, bonds, and real estate aren’t money, because they have to be converted into cash or bank deposits before they can be used to make purchases.
If the money supply consisted solely of currency, it would be under the direct control of the government—or, more precisely, the Federal Reserve, a monetary agency that, like its counterpart “central banks” in many other countries, is institutionally somewhat separate from the government proper. The fact that the money supply also includes bank deposits makes reality more complicated. The central bank has direct control only over the “monetary base”—the sum of currency in circulation, the currency banks hold in their vaults, and the deposits banks hold at the Federal Reserve—but not the deposits people have made in banks. Under normal circumstances, however, the Federal Reserve’s direct control over the monetary base is enough to give it effective control of the overall money supply as well.

Before Keynes, economists considered the money supply a primary tool of economic management. But Keynes argued that under depression conditions, when interest rates are very low, changes in the money supply have little effect on the economy. The logic went like this: when interest rates are 4 or 5 percent, nobody wants to sit on idle cash. But in a situation like that of 1935, when the interest rate on three-month Treasury bills was only 0.14 percent, there is very little incentive to take the risk of putting money to work. The central bank may try to spur the economy by printing large quantities of additional currency; but if the interest rate is already very low the additional cash is likely to languish in bank vaults or under mattresses. Thus Keynes argued that monetary policy, a change in the money supply to manage the economy, would be ineffective. And that’s why Keynes and his followers believed that fiscal policy—in particular, an increase in government spending—was necessary to get countries out of the Great Depression.

Why does this matter? Monetary policy is a highly technocratic, mostly apolitical form of government intervention in the economy. If the Fed decides to increase the money supply, all it does is purchase some government bonds from private banks, paying for the bonds by crediting the banks’ reserve accounts—in effect, all the Fed has to do is print some more monetary base. By contrast, fiscal policy involves the government much more deeply in the economy, often in a value-laden way: if politicians decide to use public works to promote employment, they need to decide what to build and where. Economists with a free-market bent, then, tend to want to believe that monetary policy is all that’s needed; those with a desire to see a more active government tend to believe that fiscal policy is essential.
Economic thinking after the triumph of the Keynesian revolution—as reflected, say, in the early editions of Paul Samuelson’s classic textbook—gave priority to fiscal policy, while monetary policy was relegated to the sidelines. As Friedman said in his 1967 address to the American Economic Association:
The wide acceptance of [Keynesian] views in the economics profession meant that for some two decades monetary policy was believed by all but a few reactionary souls to have been rendered obsolete by new economic knowledge. Money did not matter.
Although this may have been an exaggeration, monetary policy was held in relatively low regard through the 1940s and 1950s. Friedman, however, crusaded for the proposition that money did too matter, culminating in the 1963 publication of A Monetary History of the United States, 1867–1960, with Anna Schwartz.

Although A Monetary History is a vast work of extraordinary scholarship, covering a century of monetary developments, its most influential and controversial discussion concerned the Great Depression. Friedman and Schwartz claimed to have refuted Keynes’s pessimism about the effectiveness of monetary policy in depression conditions. “The contraction” of the economy, they declared, “is in fact a tragic testimonial to the importance of monetary forces.”


In interpreting the origins of the Depression, the distinction between the monetary base (currency plus bank reserves), which the Fed controls directly, and the money supply (currency plus bank deposits) is crucial. The monetary base went up during the early years of the Great Depression, rising from an average of $6.05 billion in 1929 to an average of $7.02 billion in 1933. But the money supply fell sharply, from $26.6 billion to $19.9 billion. This divergence mainly reflected the fallout from the wave of bank failures in 1930–1931: as the public lost faith in banks, people began holding their wealth in cash rather than bank deposits, and those banks that survived began keeping large quantities of cash on hand rather than lending it out, to avert the danger of a bank run. The result was much less lending, and hence much less spending, than there would have been if the public had continued to deposit cash into banks, and banks had continued to lend deposits out to businesses. And since a collapse of spending was the proximate cause of the Depression, the sudden desire of both individuals and banks to hold more cash undoubtedly made the slump worse.
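The arithmetic behind that divergence can be summarized with the textbook money multiplier (a standard formula, not one taken from A Monetary History). If the public holds currency equal to a fraction c of its deposits, and banks hold reserves equal to a fraction r of deposits, then

    \[ M = \frac{1 + c}{c + r} \, B , \]

where B is the monetary base and M the money supply. Both c and r rose sharply in 1930-1933, so the multiplier implied by the figures above fell from roughly 4.4 (26.6/6.05) to 2.8 (19.9/7.02), and M collapsed even though B grew.
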
Friedman and Schwartz claimed that the fall in the money supply turned what might have been an ordinary recession into a catastrophic depression, itself an arguable point. But even if we grant that point for the sake of argument, one has to ask whether the Federal Reserve, which after all did increase the monetary base, can be said to have caused the fall in the overall money supply. At least initially, Friedman and Schwartz didn’t say that. What they said instead was that the Fed could have prevented the fall in the money supply, in particular by riding to the rescue of the failing banks during the crisis of 1930–1931. If the Fed had rushed to lend money to banks in trouble, the wave of bank failures might have been prevented, which in turn might have avoided both the public’s decision to hold cash rather than bank deposits, and the preference of the surviving banks for stashing deposits in their vaults rather than lending the funds out. And this, in turn, might have staved off the worst of the Depression.

An analogy may be helpful here. Suppose that a flu epidemic breaks out, and later analysis suggests that appropriate action by the Centers for Disease Control could have contained the epidemic. It would be fair to blame government officials for failing to take appropriate action. But it would be quite a stretch to say that the government caused the epidemic, or to use the CDC’s failure as a demonstration of the superiority of free markets over big government.

Yet many economists, and even more lay readers, have taken Friedman and Schwartz’s account to mean that the Federal Reserve actually caused the Great Depression—that the Depression is in some sense a demonstration of the evils of an excessively interventionist government. And in later years, as I’ve said, Friedman’s assertions grew cruder, as if to feed this misperception. In his 1967 presidential address he declared that “the US monetary authorities followed highly deflationary policies,” and that the money supply fell “because the Federal Reserve System forced or permitted a sharp reduction in the monetary base, because it failed to exercise the responsibilities assigned to it”—an odd assertion given that the monetary base, as we’ve seen, actually rose as the money supply was falling. (Friedman may have been referring to a couple of episodes along the way in which the monetary base fell modestly for brief periods, but even so his statement was highly misleading at best.)

By 1976 Friedman was telling readers of Newsweek that “the elementary truth is that the Great Depression was produced by government mismanagement,” a statement that his readers surely took to mean that the Depression wouldn’t have happened if only the government had kept out of the way—when in fact what Friedman and Schwartz claimed was that the government should have been more active, not less.
Why did historical disputes about the role of monetary policy in the 1930s matter so much in the 1960s? Partly because they fed into Friedman’s broader anti-government agenda, of which more below. But the more direct application was to Friedman’s advocacy of monetarism. According to this doctrine, the Federal Reserve should keep the money supply growing at a steady, low rate, say 3 percent a year—and not deviate from this target, no matter what is happening in the economy. The idea was to put monetary policy on autopilot, removing any discretion on the part of government officials.

Friedman’s case for monetarism was part economic, part political. Steady growth in the money supply, he argued, would lead to a reasonably stable economy. He never claimed that following his rule would eliminate all recessions, but he did argue that the wiggles in the economy’s growth path would be small enough to be tolerable—hence the assertion that the Great Depression wouldn’t have happened if the Fed had been following a monetarist rule. And along with this qualified faith in the stability of the economy under a monetary rule went Friedman’s unqualified contempt for the ability of Federal Reserve officials to do better if given discretion. Exhibit A for the Fed’s unreliability was the onset of the Great Depression, but Friedman could point to many other examples of policy gone wrong. “A monetary rule,” he wrote in 1972, “would insulate monetary policy both from arbitrary power of a small group of men not subject to control by the electorate and from the short-run pressures of partisan politics.”
Monetarism was a powerful force in economic debate for about three decades after Friedman first propounded the doctrine in his 1959 book A Program for Monetary Stability. Today, however, it is a shadow of its former self, for two main reasons.

First, when the United States and the United Kingdom tried to put monetarism into practice at the end of the 1970s, both experienced dismal results: in each country steady growth in the money supply failed to prevent severe recessions. The Federal Reserve officially adopted Friedman-type monetary targets in 1979, but effectively abandoned them in 1982 when the unemployment rate went into double digits. This abandonment was made official in 1984, and ever since then the Fed has engaged in precisely the sort of discretionary fine-tuning that Friedman decried. For example, the Fed responded to the 2001 recession by slashing interest rates and allowing the money supply to grow at rates that sometimes exceeded 10 percent per year. Once the Fed was satisfied that the recovery was solid, it reversed course, raising interest rates and allowing growth in the money supply to drop to zero.

Second, since the early 1980s the Federal Reserve and its counterparts in other countries have done a reasonably good job, undermining Friedman’s portrayal of central bankers as irredeemable bunglers. Inflation has stayed low, recessions—except in Japan, of which more in a second—have been relatively brief and shallow. And all this happened in spite of fluctuations in the money supply that horrified monetarists, and led them—Friedman included—to predict disasters that failed to materialize. As David Warsh of The Boston Globe pointed out in 1992, “Friedman blunted his lance forecasting inflation in the 1980s, when he was deeply, frequently wrong.”

By 2004, the Economic Report of the President, written by the very conservative economists of the Bush administration, could nonetheless make the highly anti-monetarist declaration that “aggressive monetary policy”—not stable, steady-as-you-go, but aggressive—“can reduce the depth of a recession.”
Now, a word about Japan. During the 1990s Japan experienced a sort of minor-key reprise of the Great Depression. The unemployment rate never reached Depression levels, thanks to massive public works spending that had Japan, with less than half America’s population, pouring more concrete each year than the United States. But the very low interest rate conditions of the Great Depression reemerged in full. By 1998 the call money rate, the rate on overnight loans between banks, was literally zero.
And under those conditions, monetary policy proved just as ineffective as Keynes had said it was in the 1930s. The Bank of Japan, Japan’s equivalent of the Fed, could and did increase the monetary base. But the extra yen were hoarded, not spent. The only consumer durable goods selling well, some Japanese economists told me at the time, were safes. In fact, the Bank of Japan found itself unable even to increase the money supply as much as it wanted. It pushed vast quantities of cash into circulation, but broader measures of the money supply grew very little. An economic recovery finally began a couple of years ago, driven by a revival of business investment to take advantage of new technological opportunities. But monetary policy never was able to get any traction.

In effect, Japan in the Nineties offered a fresh opportunity to test the views of Friedman and Keynes regarding the effectiveness of monetary policy in depression conditions. And the results clearly supported Keynes’s pessimism rather than Friedman’s optimism.
Friedman with George Bush


In 1946 Milton Friedman made his debut as a popularizer of free-market economics with a pamphlet titled “Roofs or Ceilings: The Current Housing Problem” coauthored with George J. Stigler, who would later join him at the University of Chicago. The pamphlet, an attack on the rent controls that were still universal just after World War II, was released under rather odd circumstances: it was a publication of the Foundation for Economic Education, an organization which, as Rick Perlstein writes in Before the Storm (2001), his book about the origins of the modern conservative movement, “spread a libertarian gospel so uncompromising it bordered on anarchism.” Robert Welch, the founder of the John Birch Society, sat on the FEE’s board. This first venture in free-market popularization prefigured in two ways the course of Friedman’s career as a public intellectual over the next six decades.

First, the pamphlet demonstrated Friedman’s special willingness to take free-market ideas to their logical limits. Neither the idea that markets are efficient ways to allocate scarce goods nor the proposition that price controls create shortages and inefficiency was new. But many economists, fearing the backlash against a sudden rise in rents (which Friedman and Stigler predicted would be about 30 percent for the nation as a whole), might have proposed some kind of gradual transition to decontrol. Friedman and Stigler dismissed all such concerns.

In the decades ahead, this single-mindedness would become Friedman’s trademark. Again and again, he called for market solutions to problems—education, health care, the illegal drug trade—that almost everyone else thought required extensive government intervention. Some of his ideas have received widespread acceptance, like replacing rigid rules on pollution with a system of pollution permits that companies are free to buy and sell. Some, like school vouchers, are broadly supported by the conservative movement but haven’t gotten far politically. And some of his proposals, like eliminating licensing procedures for doctors and abolishing the Food and Drug Administration, are considered outlandish even by most conservatives.

Second, the pamphlet showed just how good Friedman was as a popularizer. It’s beautifully and cunningly written. There is no jargon; the points are made with cleverly chosen real-world examples, ranging from San Francisco’s rapid recovery from the 1906 earthquake to the plight of a 1946 veteran, newly discharged from the army, searching in vain for a decent place to live. The same style, enhanced by video, would mark Friedman’s celebrated 1980 TV series Free to Choose.

The odds are that the great swing back toward laissez-faire policies that took place around the world beginning in the 1970s would have happened even if there had been no Milton Friedman. But his tireless and brilliantly effective campaign on behalf of free markets surely helped accelerate the process, both in the United States and around the world. By any measure—protectionism versus free trade; regulation versus deregulation; wages set by collective bargaining and government minimum wages versus wages set by the market—the world has moved a long way in Friedman’s direction. And even more striking than his achievement in terms of actual policy changes has been the transformation of the conventional wisdom: most influential people have been so converted to the Friedman way of thinking that it is simply taken as a given that the change in economic policies he promoted has been a force for good. But has it?

Consider first the macroeconomic performance of the US economy. We have data on the real income—that is, income adjusted for inflation—of American families from 1947 to 2005. During the first half of that fifty-eight-year stretch, from 1947 to 1976, Milton Friedman was a voice crying in the wilderness, his ideas ignored by policymakers. But the economy, for all the inefficiencies he decried, delivered dramatic improvements in the standard of living of most Americans: median real income more than doubled. By contrast, the period since 1976 has been one of increasing acceptance of Friedman’s ideas; although there remained plenty of government intervention for him to complain about, there was no question that free-market policies became much more widespread. Yet gains in living standards have been far less robust than they were during the previous period: median real income was only about 23 percent higher in 2005 than in 1976.

Part of the reason the second postwar generation didn’t do as well as the first was a slower overall rate of economic growth—a fact that may come as a surprise to those who assume that the trend toward free markets has yielded big economic dividends. But another important reason for the lag in most families’ living standards was a spectacular increase in economic inequality: during the first postwar generation income growth was broadly spread across the population, but since the late 1970s median income, the income of the typical family, has risen only about a third as fast as average income, which includes the soaring incomes of a small minority at the top.

This raises an interesting point. Milton Friedman often assured audiences that no special institutions, like minimum wages and unions, were needed to ensure that workers would share in the benefits of economic growth. In 1976 he told Newsweek readers that tales of the evil done by the robber barons were pure myth:
There is probably no other period in history, in this or any other country, in which the ordinary man had as large an increase in his standard of living as in the period between the Civil War and the First World War, when unrestrained individualism was most rugged.
(What about the remarkable thirty-year stretch after World War II, which encompassed much of Friedman’s own career?) Yet in the decades that followed that pronouncement, as the minimum wage was allowed to fall behind inflation and unions largely disappeared as an important factor in the private sector, working Americans saw their fortunes lag behind growth in the economy as a whole. Was Friedman too sanguine about the generosity of the invisible hand?

To be fair, there are many factors affecting both economic growth and the distribution of income, so we can’t blame Friedmanite policies for all disappointments. Still, given the common assumption that the turn toward free-market policies did great things for the US economy and the living standards of ordinary Americans, it’s striking how little support one can find for that proposition in the data.
Similar questions about the lack of clear evidence that Friedman’s ideas actually work in practice can be raised, with even more force, for Latin America. A decade ago it was common to cite the success of the Chilean economy, where Augusto Pinochet’s Chicago-educated advisers turned to free-market policies after Pinochet seized power in 1973, as proof that Friedman-inspired policies showed the path to successful economic development. But although other Latin nations, from Mexico to Argentina, have followed Chile’s lead in freeing up trade, privatizing industries, and deregulating, Chile’s success story has not been replicated.

On the contrary, the perception of most Latin Americans is that “neoliberal” policies have been a failure: the promised takeoff in economic growth never arrived, while income inequality has worsened. I don’t mean to blame everything that has gone wrong in Latin America on the Chicago School, or to idealize what went before; but there is a striking contrast between the perception that Friedman was vindicated and the actual results in economies that turned from the interventionist policies of the early postwar decades to laissez-faire.

On a more narrowly focused topic, one of Friedman’s key targets was what he considered the uselessness and counterproductive nature of most government regulation. In an obituary for his one-time collaborator George Stigler, Friedman singled out for praise Stigler’s critique of electricity regulation, and his argument that regulators usually end up serving the interests of the regulated rather than those of the public. So how has deregulation worked out?
It started well, with the deregulation of trucking and airlines beginning in the late 1970s. In both cases deregulation, while it didn’t make everyone happy, led to increased competition, generally lower prices, and higher efficiency. Deregulation of natural gas was also a success.
But the next big wave of deregulation, in the electricity sector, was a different story. Just as Japan’s slump in the 1990s showed that Keynesian worries about the effectiveness of monetary policy were no myth, the California electricity crisis of 2000–2001—in which power companies and energy traders created an artificial shortage to drive up prices—reminded us of the reality that lay behind tales of the robber barons and their depredations. While other states didn’t suffer as severely as California, across the nation electricity deregulation led to higher, not lower, prices, with huge windfall profits for power companies.
Those states that, for whatever reason, didn’t get on the deregulation bandwagon in the 1990s now consider themselves lucky. And the luckiest of all are those cities that somehow didn’t get the memo about the evils of government and the virtues of the private sector, and still have publicly owned power companies. All of this showed that the original rationale for electricity regulation—the observation that without regulation, power companies would have too much monopoly power—remains as valid as ever.
Should we conclude from this that deregulation is always a bad idea? No—it depends on the specifics. To conclude that deregulation is always and everywhere a bad idea would be to engage in the same kind of absolutist thinking that was, arguably, Milton Friedman’s greatest flaw.

In his 1965 review of Friedman and Schwartz’s Monetary History, the late Yale economist and Nobel laureate James Tobin gently chided the authors for going too far. “Consider the following three propositions,” he wrote. “Money does not matter. It does too matter. Money is all that matters. It is all too easy to slip from the second proposition to the third.” And he added that “in their zeal and exuberance” Friedman and his followers had too often done just that.

A similar sequence seems to have happened in Milton Friedman’s advocacy of laissez-faire. In the aftermath of the Great Depression, there were many people saying that markets can never work. Friedman had the intellectual courage to say that markets can too work, and his showman’s flair combined with his ability to marshal evidence made him the best spokesman for the virtues of free markets since Adam Smith. But he slipped all too easily into claiming both that markets always work and that only markets work. It’s extremely hard to find cases in which Friedman acknowledged the possibility that markets could go wrong, or that government intervention could serve a useful purpose.

Friedman’s laissez-faire absolutism contributed to an intellectual climate in which faith in markets and disdain for government often trump the evidence. Developing countries rushed to open up their capital markets, despite warnings that this might expose them to financial crises; then, when the crises duly arrived, many observers blamed the countries’ governments, not the instability of international capital flows. Electricity deregulation proceeded despite clear warnings that monopoly power might be a problem; in fact, even as the California electricity crisis was happening, most commentators dismissed concerns about price-rigging as wild conspiracy theories. Conservatives continue to insist that the free market is the answer to the health care crisis, in the teeth of overwhelming evidence to the contrary.

What’s odd about Friedman’s absolutism on the virtues of markets and the vices of government is that in his work as an economist’s economist he was actually a model of restraint. As I pointed out earlier, he made great contributions to economic theory by emphasizing the role of individual rationality—but unlike some of his colleagues, he knew where to stop. Why didn’t he exhibit the same restraint in his role as a public intellectual?

The answer, I suspect, is that he got caught up in an essentially political role. Milton Friedman the great economist could and did acknowledge ambiguity. But Milton Friedman the great champion of free markets was expected to preach the true faith, not give voice to doubts. And he ended up playing the role his followers expected. As a result, over time the refreshing iconoclasm of his early career hardened into a rigid defense of what had become the new orthodoxy.

In the long run, great men are remembered for their strengths, not their weaknesses, and Milton Friedman was a very great man indeed—a man of intellectual courage who was one of the most important economic thinkers of all time, and possibly the most brilliant communicator of economic ideas to the general public who ever lived. But there’s a good case for arguing that Friedmanism, in the end, went too far, both as a doctrine and in its practical applications. When Friedman was beginning his career as a public intellectual, the times were ripe for a counterreformation against Keynesianism and all that went with it. But what the world needs now, I’d argue, is a counter-counterreformation.