Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

The Current Financial and Economic Crisis: Empirical and Methodological Issues

The Current Financial and Economic Crisis: Empirical and Methodological Issues In this paper we describe the main causes of the recent financial crisis as a result of many theoretical, methodological, and practical shortcomings mostly according to heterodox, but also including some important orthodox economists. At theoretical level, there are problems concerning teaching and using economic models with overly unrealistic assumptions. In the methodological front, we find the unsuspected shadow of Milton Friedman's `unrealisticism of assumptions' thesis lurking behind the construction of this kind of models and the widespread neglect of methodological issues. Of course, the most evident shortcomings are at the practical level: (i) huge interests of the participants in the financial markets (banks, central bankers, regulators, rating agencies mortgage brokers, politicians, governments, executives, economists, etc. mainly in the US, Canada and Europe, but also in Japan and the rest of the world), (ii) in an almost completely free financial and economic market, that is, one (almost) without any regulation or supervision, (iii) decision-taking upon some not well regarded qualities, like irresponsibility, ignorance, and inertia; and (iv) difficulties to understand the current crisis as well as some biases directing economic rescues by governments. Following many others, we propose that we take this episode as an opportunity to reflect on, and hopefully redirect, economic theory and practice. Keywords: financial crises, economic methodology, model-building. JEL Classification: G01, B41, B23 `Theorizing in economics, I have argued, is an attempt at understanding and I now add that bad theorizing is a premature claim to understand' (Hahn, 1985) 1. Introduction The recent world financial crisis has been inducing lively debates on the current status of economic theory. In this paper we set out an outline of these debates. We state the questions we are concerned with as follows: first, what were the infirmities of theories and empirical behavior underlying most of the views of the policy-makers, regulators and market operators? To answer this question, we lean on a host of evaluations of "what went wrong" with mainstream models of financial markets, both by orthodox and heterodox economists. This is a 1st and preliminary draft, without a major revision, which was approved and/or presented in: First International Conference in Political Economy ­ IIPPE (International Initiative for Promoting Political Economy) International Conference "Beyond the Crisis", Rethymno, Crete, Greece, September 10-12, 2010; Conference on "Impacts, Responses & Initial Lessons of the Financial Crisis for Low Income Countries", Copenhagen, Denmark, October 14-15, 2010; and Forum on Capital as Power ­ Crisis of Capital, Crisis of Theory, Keele Campus of York University, Toronto, Ontario, Canada, October 29-31, 2010. Naturally, the usual caveats remain and the authors are responsible for any mistake in this draft. Professional Address: Rodovia Araraquara-Jaú, km 1, Caixa Postal 174, CEP: 14.800-901, Araraquara SP ­ Brazil. Phone/Fax: +55 16 3301-6272. Yet, there are many reasons why formal modeling could be damaging, underlined mostly by heterodox economists. Thus, although there are some signs of theoretical `recantation', most of the propositions and proponents of efficient markets and rational expectations hypotheses are unshaken, quite paradoxically, as could assert Minsky, because of the very prompt intervention of the State, broadly speaking, through expansionist monetary policy and Big Government action. Second, what are the methodological foundations of those mainstream models? We claim that a mix of methodological confusion and ontological neglect, often resting on a prejudice towards methodologically-minded critiques, lend an unwarranted stamp of `scientificity' (only) to mainstream theorizing practices (Dow, 2008). As a result, economists of all stripes are now urging for caution when dealing with models (Lawson, 2009). Even those who defend those models on basis similar to Churchill's defense of democracy ("it is the worst form of government except all the others that have been tried") are now urging for attention to the content and truth value of economic theories. This situation provides an opportunity to revisit Friedman's (1953) influential methodological essay and similar works. Whether arguing a case for or against Friedman's theses, most methodologists find it a hard work to determine to which specific philosophical school it should belong (Mayer, 1993; Mäki, 1986). However, the philosophical allegiances of Friedman's essay are not our focus. Rather, we intend to show a lingering and unsuspected shadow of Friedman in all mainstream justification for its practices (Blaug, 2002, p. 30). Next, we point out the pitfalls of using `unrealistic assumptions' in economic theories ­ a practice sanctioned by Friedman. Ironically, Friedman's admonishments for testing assumptions by its predictive power remain in oblivion since he spelled them out. Economists pay lip-service to it, at best (Colander et al., 2009). Since Friedman's essay, economics has grown more and more formalistic. We claim, following Mongin (1987), Hands (2009) and Blaug (1997b, 2002, 2003), that whatever Friedman's designs and caveats were (see Friedman, 1999), we can detect his influence in the overly formalistic methods of present-day economics. Primacy of formalism, by its turn, can give and, in fact, undoubtedly gives real support to the use of unrealistic assumptions (Lawson, 2003, chap. 1 and 10; 2009), on grounds that resonate Friedman's theses at every bit (e.g., Blinder, 1999; Marcet, 2010). The main results are: at the substantive level, one constructs economic theories in near complete disdain for real world problems and the `academic game' is played almost only for its own sake. At the methodological level, economic theories are plagued with known falsehoods that hinder causal explanations. This is why mainstream economists have sometimes to retreat and recant their positions. We set out that lest we are trapped in another "unexpected event" like the recent financial crisis, a bolder turn in economic theorizing should be achieved. This transformation is already in progress, at least among some economists and schools of thought. Nevertheless, we think one could move faster by helping to promote those analyses and research programs which make their methodological underpinnings clear and pay attention to the plainly important items of the institutional fabric of society (Lawson, 1997, pp. 157-198; 2003, pp. 28-62; Hodgson, 1998). This article consists of five parts. After this Introduction, we give, in Section 2, an outline of the causes of the current crisis, with a specific part dedicated to its onset. In Section 3, we explain the fragile foundations over which rests much of the policies that brought the current crisis about. Section 4 presents some methodological issues concerning ontological end epistemological views of economics, their relation to the current crisis and some possible ways to deal with them. In Section 5, we conclude briefly. 2. An Outline of the Crisis An outline of institutional setting changes which would finally result in the "subprime crisis" of 2007/2008 started in the 1960s. It marks the growth of the importance of institutional investors in relation to deposit institutions (commercial banks) in the market for wealth and credit management, to which commercial banks Volume III Issue 1 (5) Summer 2012 reply with a series of financial innovations: conglomeration, underwriting, insurances, repurchase agreement, pension and investment funds, etc. In the 1980s there was "the removal of Regulation Q placing ceilings on interest rates on retail deposits" and in the 1990s "the elimination of the Glass-Steagall restrictions on mixing commercial and investment banking"(Eichengreen, 2008). In 1994, the Riegle-Neal Interstate Banking and Branching Efficiency Act allowed the expansion of branches and interstate operations. In 1999, further liberalization permitted bank holding companies to have insurance companies and investment banks among other assets in their portfolio. In addition, in the 80's, the rise of the decoupling between interests and maturities of assets and liabilities brought about increasing problems to the Saving and Loans institutions (S&Ls), causing a housing financing crisis in the US. As a consequence, there were major changes in securitization, which after 2002 would finally beget an extraordinary expansion in mortgage issues of various kinds, finally resulting in September 2008, after Lehman Brothers bankruptcy, in the so-called subprime crisis.1 A more detailed sketch of the last speculation cycle, however, could be presented like this: after the1970's, there was a huge rise in investments in the mortgage markets, for there were real guarantees backing those assets, improving national and also international assets to liabilities requirements, through better capital ratios, i.e., a bank's capital related to its risk-weighted assets, and also better balance sheets. Moreover, the process of housing and commercial mortgages securitization, that is, of mortgages creation and further securitization through selling generated huge receipts for those originators: Freddie Mac developed the first private mortgage-backed security for conventional mortgages, known as the PC (participation certificate); and the purpose was to buy mortgages from lenders and to pool them together and sell them as mortgage backed securities. Thus, the seed for linking the mortgage markets with the broader capital markets were planted in 1968 and 1970 with the restructuring of Fannie Mae and Ginnie Mae, and the establishment of Freddie Mac (Colton, 2002). Thus, in 1970 S&Ls responded for 47.7% of all mortgages creation, 60.6% in 1976, but in 1997 this share had been reduced to 17.8%, increasing to 20.7% in 2000. On the other hand, the share of commercial banks (CBs) and chiefly mortgage companies (MCs) went from 46.9% in 1970 (21.9% for CBs and 25% for MCs), to 35.7% in 1976 (21.7% for CBs and 14% for MCs, the lowest percentage for MCs for the entire period 1970-2000), and 79.3% in 2000 (21.4% for CBs and 57.9% for MCs; Colton, 2002, p. 35). That is to say, there was an oscillation of the share of CBs mortgage creation from 18.6% to 27.3% in the period 1970-2000, with the exception of 1990 with 33.4%, and 1998 with 15.3%. More importantly, however, the MCs share has risen to an all-time high of 61.1% in 1998, reduced to still astonishing 57.9%, in 2000. In other words, the main mortgage generators changed from S&Ls in the 1970's to MCs in the 1990's, with CBs roughly maintaining their shares (Colton, 2002). Concomitantly, from 1970 to 2003 the share in total mortgage stock of federal institutions and Government Sponsored Enterprises (GSEs), like Fannie Mae and Freddie Mac, departed from 8.1% to 42.9%, while the share of S&Ls went from 43.9% to 9.5%. Thus, private institutions maintained in their balance sheets only credits beyond the acquisition ceiling determined for the GSEs, i.e., the non-conforming loans or those assets whose risks implied an excessive discount to be sold (Cagnin, 2009a; Acharya and Richardson, 2009). Nevertheless, total issuance of new mortgages went from $36 million in 1970, to $1.3 billion in 1998, $2.2 billion in 2001 ($190 million subprime or 8.6%, from which $95 million securitized or 50.4%), an all-time high of $3.95 billion in 2003 ($335 million subprime or 8.5%, from which $202 million securitized or 60.5%), $2,9 billion in 2004 ($540 million subprime or 18.5%, from which $401 million securitized or 74.3%), $3.1 billion in 2005 ($625 million subprime or 20%, from which $507 million securitized or 81.2%) and $3 billion in 2006 ($600 million subprime or, again, 20%, from which $483 million securitized or 80.5%; Wray, 2007, p. 30). Another important detail is that the relevance of the largest CBs in the origination of new mortgages, including subprime and Alt-A, and of those securities in the assets are disproportional in relation to small banks (Graph 1). Colton, 2002; Torres-Filho and Borça Jr., 2008; Eichengreen, 2008; Wessel, 2009; Kregel, 2009, p. 661; Lavoie, 2010; Cagnin, 2009a; 2009b. For a critic of the very term `subprime crisis', see Patnaik (2010). As we know, those `heterodox' (subprime and Alt-A) assets have some important differences: Alt-A assets are those issued to borrowers which have not presented all the required documentation but that are `nearprime' (Roubini, 2007), i.e., could be a prime borrower according to their borrowing records, while subprime borrowers are those who have at least one record showing default or relevant delay in payment of an installment. Subprime borrowers present the records showed in Graphs 2 and 3 (taken from Wray, 2007). Graph 1. Derivatives as a Percent of Assets, 1992­2008: Small (<$1 Billion in Assets) vs. Big (>$1 Billion in Assets) Banks Source: Dymski, 2010. As we can see in Graphs 2 and 3, subprime assets displayed quite worse records both for delinquency and foreclosure rates. Notwithstanding, the originators from the 1990's onward, CBs and predominantly MCs, as we showed above, baited potential subprime borrowers with teaser rate mortgages (Kregel, 2009). Graph 2. Comparisons of Prime vs. Subprime Delinquency Rates, Total U.S. 1998-2007 Volume III Issue 1 (5) Summer 2012 Graph 3. Comparisons of Prime vs. Subprime Foreclosure Rates, Total U.S. 1998-2007 As Randall Wray points out: From 2004-2006 (when lending standards were loosest) 8.4 million adjustable rate mortgages were originated, worth $2.3 trillion; of those, 3.2 million (worth $1.05 trillion) had "teaser rates" that were below market and would reset in 2-3 years at higher rates (...) Of the $1 trillion dollars of teaser rate mortgages, $431 billion had initial interest rates at or below 2%.(...) An example will help. A subprime hybrid adjustable rate mortgage on a $400,000 house might have initial payments of about $2200 per month for interest-only at a rate of 6.5%. After a reset, the payments rise to $4000 per month at an interest rate of 12% plus principle (Wray 2007). But how and why CBs and MBs, mainly, did this? Because they do not have to maintain these credits in their balance sheet, i.e., they bundled together a series of these assets ­ in fact more than a thousand ­ in a mortgage pool, divide this pool in tranches ­ generally called senior (for the best shares of these tranches), mezzanine (medium rated shares) and junior (the riskiest shares of the tranches) and sold these tranches to the market (Volcker, 2008, pp. 104-7; Acharya and Richardson, 2009). They needed beforehand to rate the tranches through a credit rating agency (Moody's, Fitch and Standard and Poor's, chiefly, but also others ­ White, 2009; Crotty and Epstein, 2009). James Galbraith (2010, pp. 8-9) explains the trick: The business model was no longer one of originating mortgages, holding them, and earning income as home owners paid off their debts; it was one of originating the mortgage, taking a fee, selling the mortgage to another entity, and taking another fee. To do that, the mortgages had to be packaged. They had to be sprinkled with the holy water of quantitative risk-management models. They had to be presented to ratings agencies and blessed and sanctified, at least in part, as triple-A, so that they could legally be acquired by pension funds and other fiduciaries, which have no obligation to do any due diligence beyond looking at the rating. Alchemy was the result: a great deal of lead was marketed as gold. I think it's fair to say that if this sounds to you like a criminal enterprise, that's because that's exactly what it was. There was even a criminal language associated with it: liars' loans, NINJA loans (no income, no job or assets) ­ it sounds funny, but in fact this is why the world financial system has melted down ­ neutron loans (loans that would explode, killing the people but leaving the buildings intact), toxic waste (that part of the securitized collateral debt obligation that would take the first loss). These are terms that are put together by people who know what they are doing, and anybody close to the industry was familiar with those terms. Again, there's no innocent explanation. I would argue that what happened here was an initial act of theft by the originators of the mortgages; an act exactly equivalent to money laundering by the ratings agencies, which passed the bad securities through their process and relabeled them as good securities, literally leaving the documentation in the hands of the originators (the computer files and underlying documents were examined by the ratings agencies only very, very sporadically); and a fencing operation, or the passing of stolen goods, by the large banks and investment banks, which marketed them to the likes of IKB Deutsche Industriebank, the Royal Bank of Scotland, and, of course, pension funds and other investors across the world. The reward for being part of this was the extraordinary compensation of the banking sector. The originators maintained only a small part of these assets in their balance sheets (Volcker, 2010) or in Structured Investment Vehicles (SIVs) ­ enterprises whose only purpose were to issue asset-backed securities ­ because of difficulties to sell some tranches, prospective profitability of some assets or circumvention of Basel II and national regulations, since those assets remained off balance sheets, or even because of repurchase agreements. Thus, although CBs and MBs created quite risky assets, they did not remain with most of those assets, selling them to other investors and earning big fees for this `service'.2 That is to say, they become free of much of the own risk which their very entrepreneurial behavior generated (Kregel, 2009; Dymski, 2010), although they many times remained with shares of these more risky loans, usually the riskiest shares (Krugman and Wells, 2010). However, this is not the end of this unbelievable metamorphosis: some of the tranches, mostly the mezzanine ones, were recombined in new assets rather paradoxically some of them received better rates than the original ones, even AAA, making possible their acquisition, in this last case, also by pension funds, mutual funds and agents less prone to risk.3 A Collateralized Debt Obligation (CDO) backed in those assets was then issued and also divided in tranches, hence making feasible the creation of brand new securities, with new risk and profitability ratings, etc., and so on, in a multilayer pyramid. These issues of CDOs grew exponentially from 2002-2007, from $ 11.9 billion in 2000 to $108.8 billion in 2005, and then achieving their highest levels in 2006, with $186.7 billion, and 2007, with $177.6 billion (Torres Filho and Borça Jr., 2008). Finally, as the whole scheme was a mix of Ponzi finance, speculation on the profitability or at least maintenance of one's investment values, fraudulent action, overlook of regulators, authorities, etc. (Guttman, 2009; Galbraith, 2010), the majority of the agents, debtors or creditors, needed, as always (Galbraith, 1954; Kindleberger, 1978), at least two factors happening together, with no interruption, in order to maintain that scheme: a) a continued and increasing entrance of capital, feeding a pyramid (Ponzi) scheme, that is to say, making possible not only to maintain but also to augment the prices of the assets which backed the securities. For, as we know, and as a logical conclusion of the scheme outlined yet in this paper, the prices ­of the mortgages, since this speculation was built up mainly on housing and commercial mortgages ­ must rise in order to bring about the expected and desired profitability of the majority of the agents, making possible a continuous and even increasing inflow of capital to this market, with only minors non auspicious events, like minor crisis, bankruptcies, etc., quickly circumvented by the expert action of Central Banks (Federal Reserve, in the US case) and Big Government, as Minsky (1982, 1986) explained a long time ago. Furthermore, the continuous rise in assets prices, in spite of these minor upsetting events, seemed to corroborate almost all the market expectations as well as the algorithms used to calculate and distribute risks according to historical (which?) data (Zendron, 2006; Colander et al., 2009; Dow, 2008; Davidson, 1982-3; Minsky, 1982), and also yields, subdivide tranches, etc. Of course, the entire scheme would collapse if prices stopped to rise. In addition, houses are the main assets for many families and, thus, several of these families used those assets with rising values to increase their borrowings through renewed mortgages, piggybacks, etc. (Goodhart and Hoffmann, 2008; Goodhart et al., 2009). 2 Crotty (2009) asserts that "total fees from home sales and mortgage securitization from 2003 to 2008 have been estimated at $ 2 trillion." Certainly this caused unavoidable principal-agent problems. 3 White (2009), Crotty and Epstein (2009), Kregel (2009). Lawson (2009) shows that "at one point roughly 60% of structured products were triple-A rated according to Fitch Ratings (2007) compared with less than 1% of corporate bond issues. And one result of all this was the generation of a perception (as it turned out, an illusion) that structured securities were comparable in terms of safety or riskiness with single name corporate finance". Volume III Issue 1 (5) Summer 2012 Graph 4. Residential Prices in the US ­ 1992-2008 (variation in relation to the same quarter of the previous year) Source: Office of Federal Housing Enterprise Oversight, apud Cagnin, 2009a, p. 269; 2009b. As a matter of fact, there was an almost continuous rise in the prices of housings in the US, from 1992 to the middle of 2005 (Cagnin, 2009a; 2009b). From this moment, which almost exactly coincides with the acme of housing selling in the US that occurred in the fourth quarter of 2005, with 8.5 million houses sold (1.3 million new), those prices and selling started a uninterrupted decrease. In the third quarter of 2008, the housing selling had achieved only 5.4 million units (a 36.5% reduction in less than three years), with 0.5 million new (an astonishing 61.5% decrease in the same period; Torres Filho and Borça Jr. 2008). b) a benign action, in a Minskian sense, of monetary authorities, keeping low interests rates in the entire period (Cagnin, 2009b). This will allow many orthodox economists to blame these policies for the crisis, together with supposed naive and misconceived aims directed to guarantee at least a house for each American family, despite their income level (Taylor, 2009; Gjerstad and Smith, 2009; Patnaik, 2010; Krugman and Wells, 2010). In any case, probably the majority of the economics establishment, whatever their explicit or implicit theoretical strand, will agree that low interest rates, by the Federal Reserve, fed the housing and housing prices boom, although some could consider an impossible mission to attain all the goals attributed by the mainstream to the same monetary policy: low inflation rates, full employment, mild asset speculation, etc. (Greenspan, 2007). Moreover, as also explained by Minsky (1982), any more or less radical change in this benign monetary policy would imply simultaneously in changes in current and prospective prices of all assets, disturbing the upswing and certainly bringing about pressures for reversion of policies and/or blames for the premature explosion of the speculation bubble. 2.1. The onset of the crisis The crisis began with the reversion of the growth of the prices of the housings, which started to fall, as we have seen, in the middle of 2005. As we explained a stabilization of the housing prices would damage all the pyramid schemes which had as a sine qua non a steady rise in the prices. A reversion would be even more harmful, increasing losses and difficulties to service or even to roll over debts (Minsky, 1982). Moreover, American laws allowed mortgage debtors to abandon (`walk away') their residences, i.e., to transfer them to the creditors if they want to retrench from paying their mortgages, what started to be done with the fall on the residences prices. In addition, as we have seen in Graphs 2 and 3, the delinquency and foreclosure rates of subprime debtors were excessively large compared to those of prime debtors. There was an important reduction therefore in the yields of the SIVs, with their main owners, commercial and investment banks, having to cover payment delays, losses, etc., and not least, requiring those banks to record these losses in their balance sheets, what had not been done beforehand. Of course, there were enormous costs also to several tranches of CDOs. Therefore, it became then clear that the balance sheets of many financial intermediaries, even of some of the largest banks in the US and Europe, could not be trusted, because of the absence of knowledge on the share of toxic assets on the balance sheets of those financial institutions (Dymski, 2010; Galbraith, 2010, Einchengreen et al., 2009; Kregel, 2009). Creditors began to withdraw their investments in SIVs, mutual funds, etc., in the usual `flight to quality', i.e., to US Treasuries, increasing rapidly the spreads between the rates needed to attract investors and the FED Funds (Eichengreen et al., 2009; Torres-Filho and Borça Jr., 2008). Consequently, there was a retrenchment of creditors from financial institutions, of financial institutions from borrowers, and so on, in a much known vicious cycle which simultaneously diminished credits and rose interests (Minsky, 1982), including interbank loans ­ chiefly after the infamous Lehman Brothers bankruptcy ­ feeding back the decline in house prices and investments, and even turning impossible the pricing of mortgage backed securities. That is the reason for the first strong signs of the coming crisis: the bankruptcy of Ownit Solutions, a nonbank specialist in subprime and Alt-A mortgages, in 2006; the August 9, 2007 halting of withdrawals from three investment funds by BNP Paribas, with about $2.2 billion in total assets, after Bear Sterns, on July 31, and Union Investment Management GmbH, on August 3, had recurred to the same measures, a week before (Boyd, 2007; Acharya and Richardson, 2009, p. 208). In reality, the markets were then disturbed, but almost returned to `business as usual', until the need of Bear Sterns to be sold to J.P. Morgan, on the weekend of 15-16 March, 2008, in a rush to avoid a financial panic before of the opening of the markets in Asia, on Monday. Bear Sterns was sold with a special financing from the FED to fund up to $30 billion of Bear Sterns' less liquid assets. And all this was needed despite a startling 93% price discount to that investment bank closing stock price on the New York Stock Exchange, on Friday 14 March or 99% considering those prices a year before (Sorkin and Thomas Jr., 2008). However, sheer panic was avoided until the much known policy mistake with Lehman Brothers, on the weekend of 12-15 September of that same year (Lavoie, 2010, pp. 5-6; Taylor, 2010, pp. 360-1) and the decision of the US Treasury, just on 16 September to lend $85 billion to AIG in exchange for a stake of almost 80% in that Group, in order to prevent its bankruptcy (Wessel, 2009). Wachovia (-73.2%), Wells-Fargo (-65.5%), Citigroup (41.2%), J.P. Morgan (-25.5%) and Bank of America (-19.2%) assets also faced huge losses in their August 2008 market prices in comparison to July 2007 (Torres-Filho and Borça Jr., 2008; Guttman, 2009). As Crotty (2009,) affirms, "[i]t is estimated that by February 2009, almost half of all the CDOs ever issued had defaulted... Defaults led to a 32% drop in the value of triple A rated CDOs composed of super-safe senior tranches and a 95% loss on triple A rated CDOs composed of mezzanine tranches". 3. Reliance on Fragile Theoretical Foundations One important issue in contention is the methodological underpinnings supporting (or not) one's personal (or even a scientific group's ­ Kuhn, 1962; Lakatos, 1970) view of financial markets and the analyses and proposals which are derived of those views (Laidler, 2010). We will divide this discussion in two major parts, presented in this item ­ first, an analysis of financial markets and the current economic crisis ­ and second ­ a view (or understanding) of economics and financial markets and some broad considerations on methodological issues. That is to say, we will not discuss in this paper proposals for the current crisis, although they could be considered a rather logical consequence of our paper. For this would require practically another paper. We can follow the outline sketched by Krugman and Wells (2010) to present the arguments of several economists on the crisis. They divide their explanation in four major issues, nor mutually exclusives: a) the low interest rate policy of the Federal Reserve after the 2001 recession; b) the global savings glut; c) financial innovations that disguised risk; and d) government programs that created moral hazard. Volume III Issue 1 (5) Summer 2012 a) The low interest rate policy of the Federal Reserve after the 2001 recession A large stream of economists contends that too low interest rates, from at least 2002 to 2006 are the main or even the sole responsible for the crisis. As Krugman and Wells (2010) explain, after the burst of technology bubble in the late 1990s, central banks cut base short-term interest rates, in an attempt to avert a slump. The Federal Reserve cut its overnight from 6.5 percent at the beginning of 2000 to 1 percent in 2003, keeping the rate at this low point until the beginning of the summer of 2004. Graph 5. Federal Funds Rate, Actual and Counterfactual (in %), U.S. 2000-2007 Source: apud Taylor (2010). As Taylor (2010) proposes, Graph 5 would show that the actual monetary policy in the U.S. was excessively expansionist, not following the Taylor rule which "worked well during the historical experience of the `Great Moderation' that began in the early 1980s (...) This was an unusually big deviation from the Taylor rule. There has been no greater or more persistent deviation of actual Fed policy since the turbulent days of the 1970s. So there is clearly evidence of monetary excesses during the period leading up to the housing boom"(Taylor, 2010). He also provides "statistical evidence" that that "interest-rate deviation could plausibly bring about a housing boom. In this way, an empirical proof was provided that monetary policy was a key cause of the boom and hence the bust and the crisis" (Taylor, 2010). Inflation rates, measured through CPI inflation, would also have been lower, around the 2% target suggested by many policy-makers ­ of course, adept of inflationtargeting policies ­ instead of the 3.2% during the previous five years. Moreover, "housing was also a volatile part of GDP in the 1970s, a period of monetary instability before the onset of the Great Moderation. The monetary policy followed during the Great Moderation had the advantages of keeping both the overall economy stable and the inflation rate low" (Taylor, 2010). In addition, interest rates in several European countries ­ strongly influenced by the American monetary policies ­ were also below those that historical regularities according to the Taylor rule would have predicted. And the housing booms would have been the largest where this deviation was the largest. However, as he candidly asserts, "One can challenge this conclusion, of course, by challenging the model, but an advantage of using a model and an empirical counterfactual is that one has a formal framework for debating the issue"(Taylor, 2010). Also, according to the subjacent efficient market model of his analysis (Laidler, 2010), the rating agents would have underestimated the securities risks "either because of a lack of competition, poor accountability or, most likely, an inherent difficulty in assessing risk owing to the complexity".(Taylor, 2010). Finally, the behavior of GSEs, like Fannie Mae and Freddie Mac, encouraged to expand and to buy Mortgage Backed Securities (MBS), "should be added to the list of government interventions that were part of the problem" (Taylor, 2010). Consequently, according to Taylor, the major problem after the crisis was one of risk rather than liquidity, made worse by wrong policies which engendered Lehman Brothers' bankruptcy, for they made unpredictable which financial institutions government will save and support. As a conclusion, "government actions and interventions caused, prolonged, and worsened the financial crisis. They caused it by deviating from historical precedents and principles for setting interest rates that had worked well for twenty years. They prolonged it by misdiagnosing the problems in the bank credit markets and thereby responding inappropriately by focusing on liquidity rather than risk. They made it worse by providing support for certain financial institutions and their creditors but not others in an ad hoc fashion, without a clear and understandable framework. While other factors were certainly at play, these government actions should be first on the list of answers to the question of what went wrong."(Taylor 2010). Certainly this is not only Taylor's opinion. Many economists share his view (Krugman and Wells, 2010, Patnaik, 2010, Cassidy, 2010; Wickens, 2009). However, as Krugman and Wells (2010) explain, there are some serious problems with this view. For one thing, there were good reasons for the Fed to keep its overnight, or "policy," rate low. Although the 2001 recession wasn't especially deep, recovery was very slow--in the United States, employment didn't recover to pre-recession levels until 2005. And with inflation hitting a thirty-five-year low, a deflationary trap, in which a depressed economy leads to falling wages and prices, which in turn further depress the economy, was a real concern. It's hard to see, even in retrospect, how the Fed could have justified not keeping rates low for an extended period. The fact that the housing bubble was a North Atlantic rather than purely American phenomenon also makes it hard to place primary blame for that bubble on interest rate policy. The European Central Bank wasn't nearly as aggressive as the Fed reducing the interest rates it controlled only half as much as its American counterpart; yet Europe's housing bubbles were fully comparable in scale to that in the United States. These considerations suggest that it would be wrong to attribute the real estate bubble wholly, or even in large part, to misguided monetary policy. b) the global savings glut According to some economists (Eichengreen, 2008) the global savings glut is a major cause for the crisis: The other element helping to set the stage for the crisis was the rise of China and the decline of investment in Asia following the 1997-8 crisis. With China saving nearly 50 per cent of its GNP, all that money had to go somewhere. Much of it went into U.S. treasuries and the obligations of Fannie Mae and Freddie Mac. This propped up the dollar. It reduced the cost of borrowing for Americans, on some estimates, by as much as 100 basis points, encouraging them to live beyond their means. It created a more buoyant market for Freddie and Fannie and for financial institutions creating close substitutes for their agency securities, feeding the originate- and-distribute machine. Again, these were not exactly policy mistakes. Lifting a billion Chinese out of poverty is arguably the single most important event of our lifetimes, and it is widely argued that the policy strategy in which China exported manufactures in return for high-quality financial assets was a singularly successful growth recipe. Similarly, the fact that the Fed responded quickly to the collapse of the high-tech bubble prevented the 2001 recession from becoming even worse. But there were unintended consequences. Those adverse consequences were aggravated by the failure of regulators to tighten capital and lending standards when capital inflows combined with loose Fed policies to ignite a credit boom. They were aggravated by the failure of China to move more quickly to encourage higher domestic spending commensurate with its higher incomes (Eichengreen, 2008). Volume III Issue 1 (5) Summer 2012 The main idea supporting it is that the savings of countries like Germany and many Asian are used to buy securities in deficit nations, like the US, UK, Spain and so on. Historically, developing countries have run trade deficits with advanced countries as they buy machinery and other capital goods in order to raise their level of economic development. In the wake of the financial crisis that struck Asia in 1997­1998, this usual practice was turned on its head: developing economies in Asia and the Middle East ran large trade surpluses with advanced countries in order to accumulate large hoards of foreign assets as insurance against another financial crisis (Krugman and Wells, 2010). An important problem with this explanation is that Central Banks throughout the world set the basic rates. These capital inflows also drove down interest rates--not the short-term rates set by central bank policy, but longer-term rates, which are the ones that matter for spending and for housing prices and are set by the bond markets. In both the United States and the European nations, long-term interest rates fell dramatically after 2000, and remained low even as the Federal Reserve began raising its short-term policy rate. At the time, Alan Greenspan called this divergence the bond market "conundrum," but it's perfectly comprehensible given the international forces at work. And it's worth noting that while, as we've said, the European Central Bank wasn't nearly as aggressive as the Fed about cutting short-term rates, long-term rates fell as much or more in Spain and Ireland as in the United States--a fact that further undercuts the idea that excessively loose monetary policy caused the housing bubble (...) the global glut story provides one of the best explanations of how so many nations managed to get into such similar trouble (Krugman and Wells, 2010). We can agree with Krugman and Wells if the savings are understood as influencing long term interest rates, i.e., if they are used to buy and make possible lower long term interest rates for these securities. Of course, to this savings we must add, at least for some individuals and groups, US, UK, Spain, etc., private savings. That is to say, the issue is not so much of a savings glut ­ for the sum of private, public and private savings in every country amounts to zero (Godley and Zezza, 2006; Godley et al., 2007; 2008) ­ but one of where to put the financial resources to those who own them. c) financial innovations that disguised risk Many authors consider that several models which packed together many mortgage debts with other debts ­ even student loans, leveraged loans, credit card debts, corporate bonds, etc. (Acharya and Richardson, 2009; Wallison, 2009b) ­ were the main responsible for the crisis, for they disguised the implicit risks of the many assets included in each CDO. As many analysts assert, it is simply impossible to rate risks in these CDOs and also, consequently, to know the entire situation of the financial institutions and of the whole financial system, even by the most savant. Banks and some other financial institutions acted then chiefly as originators of credit, i.e., as intermediaries (Kregel, 2009), usually not keeping them in their balance sheets. This behavior was one of those responsible for the crisis, since those originators were not worried about real conditions of debtors, but mainly with creating new mortgages, in order to package them in CDOs and then sell them to the market, generating substantial fees for the originators (Stiglitz, 2009; Krugman and Wells, 2010). Moreover, systemic risks were disregarded in the models used by financial institutions (Zendron, 2006; Colander et al., 2009; Crotty, 2009). This turned risks invisible to agents, considered individually or systemically. We would add to these the failure of rating agencies to rate more correctly those CDOs, in spite of the inherent difficulties or even impossibilities we stressed before for such rating. Nonetheless, the rating agencies mostly rated these packaged securities with very good ratings, normally with an AAA. This behavior denotes a conflict of interests, for the rating agencies were regularly paid for these ratings, having interests to remain as good raters for the credit originators, in order to receive those payments regularly (Stiglitz, 2009; White, 2009). Furthermore, there were also conflicts of interests within the staff of the financial institutions, for their components received earnings based on profits also generated through fees paid for mortgages and other debts originations. It was quite possible that if a financial institution would face problems in the future those would not happen at the time the then members of the staff would be in those institutions. Besides, even those members could believe that the financial models to calculate risks were trustable and so even they could find out that they were doing a fair and good job for all. Regulators also believed somehow in market efficiency and those who had doubts about it were stifled by the "true believers." In addition to that ideological issue, there were practical incentives like Wall Street (and other financial centers) political and ideological pressure ­ since many central bankers, secretaries and other regulators are connected to the financial institutions to be regulated or can work for them in the future (Crotty, 2009). Finally, Wall Street and other financial centers are very important financial contributors to increasingly more expensive political campaigns.4 To sum up, all the incentives structure of the financial markets was flawed (Stiglitz, 2009; Wray, 2009). Everyone ignored both the risks posed by a general housing bust and the degradation of underwriting standards as the bubble inflated (that ignorance was no doubt assisted by the huge amounts of money being made). When the bust came, much of that AAA paper turned out to be worth just pennies on the dollar.(...) [However,] Three points seem relevant. First, the usual version of the story conveys the impression that Wall Street had no incentive to worry about the risks of subprime lending, because it was able to unload the toxic waste on unsuspecting investors throughout the world. But this claim appears to be mostly although not entirely wrong: while there were plenty of naive investors buying complex securities without understanding the risks, the Wall Street firms issuing these securities kept the riskiest assets on their own books. In addition, many of the somewhat less risky assets were bought by other financial institutions, normally considered sophisticated investors, not the general public. The overall effect was to concentrate risks in the banking system, not pawn them off on others. Second, the comparison between Europe and America is instructive. Europe managed to inflate giant housing bubbles without turning to American-style complex financial schemes. Spanish banks, in particular, hugely expanded credit; they did so by selling claims on their loans to foreign investors, but these claims were straightforward, "plain vanilla" contracts that left ultimate liability with the original lenders, the Spanish banks themselves. The relative simplicity of their financial techniques didn't prevent a huge bubble and bust. A third strike against the argument that complex finance played an essential role is the fact that the housing bubble was matched by a simultaneous bubble in commercial real estate, which continued to be financed primarily by old-fashioned bank lending. So, exotic finance wasn't a necessary condition for runaway lending even in the United States (Krugman, and Wells, 2010). In conclusion: What is arguable is that financial innovation made the effects of the housing bust more pervasive: instead of remaining a geographically concentrated crisis, in which only local lenders were put at risk, the complexity of the financial structure spread the bust to financial institutions around the world (Krugman, and Wells, 2010). d) government programs that created moral hazard As Stiglitz (2009) shows, there are conservative critics who point to the government as the principal culprit for the crisis. For the creation of the Community Reinvestment Act (CRA) required that banks lent a certain share "Most elected officials responsible for overseeing US financial markets have been strongly influenced by efficient market ideology and corrupted by campaign contributions and other emoluments lavished on them by financial corporations. Between 1998 and 2008, the financial sector spent $1.7 billion in federal election campaign contributions and $3.4 billion to lobby federal officials. Moreover, powerful appointed officials in the Treasury Department, the SEC, the Federal Reserve System and other agencies responsible for financial market oversight are often former employees of large financial institutions who return to their firms or lobby for them after their time in office ends. Their material interests are best served by letting financial corporations do as they please in a lightly regulated environment. We have, in the main, appointed foxes to guard our financial chickens" (Crotty, 2009). Volume III Issue 1 (5) Summer 2012 of their portfolio to underserved minority communities (Wallison, 2009a; Patnaik, 2010). They also blame GSEs, like Freddie Mac and Fanny Mae, which played a very large role in mortgage markets, despite their privatization in 1968. Nevertheless, as Stiglitz (2009) underscores, a recent Fed study showed that the default rate among CRA mortgagors is actually below average. The problems in America's mortgage markets began with the subprime market, while Fannie Mae and Freddie Mac primarily financed `conforming' (prime) mortgages.(...) To be sure, Fannie Mae and Freddie Mac did get into the high-risk high leverage "games" that were the fad in the private sector, though rather late, and rather ineptly. Here, too, there was regulatory failure; the government-sponsored enterprises have a special regulator which should have constrained them, but evidently, amidst the deregulatory philosophy of the Bush Administration, did not. Once they entered the game, they had an advantage, because they could borrow somewhat more cheaply because of their (ambiguous at the time) government guarantee. They could arbitrage that guarantee to generate bonuses comparable to those that they saw were being "earned" by their counterparts in the fully private sector. Krugman and Wells add the much known political motivation for this economic "analysis". Those authors are careful not to name names and attribute the blame to generic "politicians" it is clear that Democrats are largely to blame in his worldview. By and large, those claiming that the government has been responsible tend to focus their ire on Bill Clinton and Barney Frank, who were allegedly behind the big push to make loans to the poor.(...) The huge growth in the subprime market was primarily underwritten not by Fannie Mae and Freddie Mac but by private mortgage lenders like Countrywide. Moreover, the Community Reinvestment Act long predates the housing bubble. Overblown claims that Fannie Mae and Freddie Mac single-handedly caused the subprime crisis are just plain wrong. As others have pointed out, Fannie and Freddie actually accounted for a sharply reduced share of the home lending market as a whole during the peak years of the bubble. To the extent that they did purchase dubious home loans, they were in pursuit of profit, not social objectives ­ in effect they were trying to catch up with private lenders. Meanwhile, few of the institutions engaged in subprime lending were commercial banks subject to the Community Reinvestment Act. Beyond that, there were the other bubbles ­ the bubble in US commercial real estate, which wasn't promoted by public policy at all, and the bubbles in Europe. The fact that US residential housing was just part of a much larger phenomenon would seem to be presumptive evidence against any view that relies heavily on supposed distortions created by US politicians. Was government policy entirely innocent? No... Fannie and Freddie shouldn't have been allowed to go chasing profits in the late stages of the housing bubble; and regulators failed to use the authority they had to stop excessive risk-taking (Krugman, and Wells, 2010). 4. Considering Methodological Issues In this section we sketch a conception of what are the fundamental, meta-theoretical failures involved in, and explaining, in our view, the theoretical problems of economics. For us, the main problem (formalism) allows the use of very inappropriate models to understand economic reality. Since formalism presupposes an ontology of `closed systems' it is unable to avoid economic disasters caused by phenomena of `open systems', like uncertainty, bounded rationality, herd psychology, etc. Our interest is not primarily in debates within economic methodology, but in using methodological critiques of economic theory in order to see if we can learn from this episode how can we profit from paying attention to the real word when designing models. We do so in three steps: outlining the critical realist approach to economic methodology, which draws attention to the ontology of economic world; discussing formalism and the problems concerning design and use of overly unrealistic models; and finally proposing some ways to proceed in the future. 4.1. The Critical-Realist Conception of Scientific Explanation Critical realism is a comparatively new and expanding approach to the methodology of economics. Proposed by the British philosopher of science Roy Bhaskar (1975, 1979), it was introduced in economics by a group of economists and other social scientists mostly associated with the University of Cambridge. It has made an appeal to philosophically-oriented economists and schools, like Post Keynesians and Austrians. The best known name of critical realist economic methodology is Tony Lawson (1997, 2003, and several papers), formerly editor of Cambridge Journal of Economics. Closely associated are Sheila Dow (2002, 2003) and the (old) institutionalism and evolutionary economic theorist Geoffrey Hodgson (2004, 2006). The pivotal theme within critical realism is the nature of scientific activity and explanation. According to that approach, traditional issues in economic methodology (logical empiricism and Popperian falsificationism) mistake the nature of scientific activity and so propose an misleading aim to (natural as well as social) working scientist. For in the natural sciences, theories are law-like statements from which implications that `explain' the object of interest are deduced. Thus, for example, an explanation of falling bodies is a deduction from Galileo's Law, plus a series of auxiliary or attending or simplifying/idealizing statements (e.g. perfect vacuum, flat surface of earth, etc.) According to the traditional methodology, explanation is subsuming a case of falling body into at least one general law. Prediction, on the other side, is to expect that from the same cause (a general law) the same effect will always (deterministically or probabilistically) ensue. Explanation and deduction are symmetrical (the famous Hempel-Oppenheim's symmetry thesis). The empirical test of theories is at the same time condition for its acceptance, and a sign of growth of knowledge5. To prescribe this methodology for economics involves two implications: (i) there is only one valid method of inquiry all over the sciences (`methodological monism'); and (ii) the search for regularities or constant conjunctions of events is the only possible mean of attaining knowledge (`epistemic fallacy'). Starting from the latter, for critical realists constant conjunctions of events are neither necessary nor sufficient condition to claiming scientific knowledge. Scientific law-like statements are formulated in experimental (i.e. controlled) settings (`closed systems'), where constant conjunction of events obtains because one causal factor of interest is sealed off from any other countervailing factors that bear on the phenomenon of interest, such that we can always say `whenever (event type) X, then (event type) Y'. If valid, these statements will be successfully applied also in nature (an `open system'). How is that possible? Traditional methodologists have a problem here: if stable conjunction of events are sought after, they are rather rarely spontaneously founded (astronomical laws, one of Lawson's favorite example of spontaneous regularity, is indeed obtained under conditions of closure, see Mäki, 1992b); if, on the other hand, the subject matter of scientific investigation is explained by the scientist's intervention, then they are bound to admit that there is no genuine laws in the nature. Critical realists solve this problem by claiming that the aim of experimental activity is to isolate a putative causal factor from all others bearing on the phenomenon of interest. When the theory obtained is successfully applied in open systems is because scientists have identified correctly the causal (i.e., dominant) factor. This picture has important implications for critical realist's account of science. First, science should not be seen as the search for a constant conjunction of events. The prime interest of critical realists is in ontology. Ontology is the study of the nature of world and what there is in it (its `ontic furniture'). Critical realists advance a series of ontological propositions. Reality is structured in layers, each of them more encompassing and deep from top-down. The first layer is the empirical domain (our sensory perception of events and state of affairs), the second one is the actual domain (things `as they really are', 5 We will not delve into the details and historical crumbling of `received' (i.e., logical empiricist) and Popperian views of methodology (but see Hands, 2001, chap. 3). This does not prevent Blaug (1997a), an important supporter of Popperian ideas in economic methodology, to say that `the Methodology which best supports the economist's striving for substantive knowledge of economic relationships is the philosophy of science associated with the names of Karl Popper and Imre Lakatos. To full attain the ideal of falsifiability is, I still believe, the prime desideratum in economics' Volume III Issue 1 (5) Summer 2012 irrespective to our knowing of or feeling them) and the third one is the real or deep domain, populated by structures, mechanisms, powers and tendencies that shape and condition the events of actual domain. Structures are the properties of an object of inquiry of their mode of being. Mechanisms are the way an object operates, due to its structure. Powers are capacities of the object, what it can cause when its mechanisms are triggered. Yet these mechanisms do not operate in isolation, but in open systems, such that many other mechanisms (enhancing or countervailing) might typically be at work simultaneously ­ thus concealing the mechanism we are interested in. That is why critical realists claim that mechanisms operate as tendencies, i.e., when triggered, a mechanism will necessarily operate, no matter what events ensue. In this ontological commitment, reality is stratified (in layers) and structured (any layer may be out of phase from each other), but an explanation is the move from the empirical domain into even deeper layers of reality, searching for the causal mechanisms of what exists in the actual domain and which we perceive in the empirical domain6. Second, due to this account, explanation does not require strict regularities. A unique event can be explained if we have sufficient information on its structure, and antecedent knowledge from where to start the research. Constant conjunctions of events are insufficient for explanations, too. For explaining a phenomenon is studying its structure looking for plausible mechanisms causally responsible for its occurrence, rather than simply recording correlations between empirical events. In fact critical realists charge positivists of all stripes of what they call `epistemic fallacy' ­ mistakenly conflating ontological questions to epistemic questions. For example, the restlessness search for models that better `fit' facts to a theory is a case in point, insofar as it reduces all phenomena to some measurable and all-compassing analytical categories referring only to empirical events. At last, thirdly, social scientific research can be done along lines broadly similar to natural science. The structures, mechanisms, etc., are obviously different, but the aim is equal: unearth causal mechanisms, powers and tendencies of structured objects of knowledge. From this point, Lawson (1997, chap. 14 to 16; 2003, chap. 2) elaborates the nature of social reality at length. It is characterized by internal (constitutive) and external (contingent) social relations, mediated by positions (hierarchies) and rules (norms, mores, conventions, etc.). Society is thus an unbroken net of relations, dependent of individual action but irreducible to it, with mechanisms and powers of its own. It constrains the alternative courses of action for individual decisions, but does not determine the action actually chosen. Moreover, in any time individual action is simultaneously reproducing and transforming society. Critical realists like Archer (1995, chap. 5) and Fleetwood (1995, pp. 86-90) call this process `the transformational model of social activity': in society our actions are always based on structures inherited from the past and always transforming or reproducing this same structure for the future. That is why Lawson (2009, p. 764) claims that social processes are `a totality in motion'. One last question is: how can we obtain knowledge of these hidden structures? Critical realism is not a disguised form of outdated essentialism? That is where critical realists claim their position as falibilism ­ there is no guarantee for putative mechanisms besides its power to illuminate some reality (natural or social). The problem of discriminating among alternative theories (the old problem of `identification') is to be solved by the degree in which each theory can explain more (and in a better way) events than its competitors. Of course, this is a very hotly debated issue in the philosophy of science, opening the doors for relativism. Critical realists call in their help two notions: knowledge, as a social product, is itself a `produced mean of production' of knowledge, such that in the start of any research we have at least one theory to proceed. Moreover, the ontological commitment (`how reality is') traces a divide between knowledge of reality and its object. This make possible to be fallibilist, not relativist: reality is the contrastive backdrop onto all scientific claims can be evaluated and our prior (scientific) beliefs revised. Thus, critical realists can (and relativists cannot) differentiate changes in the world from changes in knowledge. When we perceive some event (supposedly) disjunctive to our existing knowledge, we can abduct a mechanism (i.e., propose a cause for that effect) and investigate its occurrence. We Certainly that is only a sketchy picture of critical realist account of scientific practices. See a lengthy and sophisticated discussion of these matters in Lawson (1997). shall deal with the question of how to identify a causal mechanism shortly. Before that, we deal with the problem of formalism in economics and its (supposed) culpability for the crisis. 4.2. Formalism and Economic Models In the wake of the global financial crisis of 2008 often we found opinions on the effect that economic theory was made irrelevant due to its formalism. In the simple and plain words of Blaug (2002): "what characterizes "formalism" is that technicalities are prized as ends in themselves, such that theories which do not lend themselves to technical treatment are set aside and with them the problems they address. Formalism is the worship of technique and that is what is wrong with it." In fact, several critics and supporters of mainstream economics have made pronouncements on this regard. Of course, not all mainstream economists recognize a problem in the way of doing economics. They typically blame some factor exogenous to the discipline (regulatory shortcomings, excessively lax monetary policy, irrational optimism and pessimism, and so on) for the financial crisis.7 Others think that is just business as usual. Note, for example, LSE professor Marcet: What do economists do? We think economics as a science. That means if you don't have a model, data is a mess... We need models just to see where to look. I should teach to our MSc students and undergraduate students theories (with internal consistency) that the research community has thoroughly and very strongly tested in empirical terms, because that's our job. Necessarily, economic models are oversimplified [there follows the Galileo's Law as an example] and thus, in order a model to be a model is the easiest thing in the world to make of economic theories. But, unless I find better models, it isn't fair to make fun (Marcet, 2010; our transcription). Yet Alan Blinder, when praised the progress of economics, has recognized the problem: Economics was off to the mathematical races. Intellectual giants like Samuelson and Arrow led the way, sweeping away the old, more literary tradition in economics and attracting a small army of scholars with a more scientific bent.' And he adds: "But somewhere along the way the warm embrace of mathematics developed first into a infatuation, and then into an obsession. And that, I am afraid, is where economics lost at least some of its scientific moorings ­ moorings we have yet to regain. [Mathematics] is, of course, both a high and exceedingly difficult form of thought and an indispensable tool for every science... But mathematics seems entirely too selfreferential, too deductive, one might almost say too pure to be considered a science. Let me dwell on these three words ­ self-referential, deductive, and pure ­ for they describe where economics has gone wrong, in my view (Blinder, 1999). In our view, the theoretical shortcomings we have seen in the previous section are closely linked to methodological and ontological presuppositions mostly held by mainstream economists. The problem concerns to something that Dow (1990) calls `Cartesian mode of thought' and Lawson (1997) calls `deductivism'. In short, this mode of thought sees theories only as logically derived series of propositions8. Moreover, those propositions are interpreted as entities apt for formalization. The next short step is supposing that, since logical structures have truth value inter-subjectively demonstrable, they are the only valid and sound theorizing in any science, 7 For a sad report on how the failures of economist's models of efficient markets and dynamic stochastic general equilibrium are being received by their supporters, see Cassidy (2010) and Cohen (2009). As a matter of fact, some mainstream economists are simply losing their temper. In a reply to Krugman (2009), Cochrane (2009) defends his own stance as follows: "Imagine this weren't economics for a moment. Imagine this were a respected scientist turned popular writer, who says, most basically, that everything everyone has done in his field since the mid-1960s is a complete waste of time. Everything that fills its academic journals, is taught in its PhD programs, presented at its conferences, summarized in its graduate textbooks, and rewarded with the accolades a profession can bestow, including multiple Nobel prizes, is totally wrong. Instead, he calls for a return to the eternal verities of a rather convoluted book written in the 1930s, as taught to our author in his undergraduate introductory courses. If a scientist, he might be a global-warming skeptic, an AIDS-HIV disbeliever, a stalwart that maybe continents don't move after all, or that smoking isn't that bad for you really." 8 This characterization is more apt in Dow's case than Lawson's, as it is doubtful whether mainstream economic methodology is empiricist or axiomatic (Viskovatoff, 1998). Lawson's account of deductivism is in terms of the PopperHempel hypothetical-deductive model of explanation which is empiricist (while mainstream is not). Volume III Issue 1 (5) Summer 2012 economics included. And since formal structures are content less9, this presupposition also amounts (even unwillingly) to sacrifice relevance for rigor, elegance and precision, for practical implications. But, if there is a problem with formalism in economics, what exactly is the problem? How could we come to such a state? Can we do any better? To begin with, formalism is a complex term, interwoven with mathematization, axiomatization and model-building. Following Chick (1998, p. 1860) ­ who in turn follows Woo (1986, p. 20, n. 1) ­ our focus will be on axiomatization and model-building as forms ­ syntactical and semantically, respectively ­ of formalism. Chick, once again, helps to understand each of them: The axiomatic approach and less rigorous mathematical models have certain symmetry. In the first, one starts with "self-evident" axioms, applies the deductive method using agreed rules of logic and, providing one's logic is correct, arrives at demonstrable truths. Mathematical modeling is more relaxed and less ambitious: assumptions need not be "self-evident"; thus there is some scope for the theorist's judgment, and that judgment may be questioned (the "realism of assumptions" debate). In both cases, transformations are then made following agreed rules, and the conclusions follow as long as the rules have been obeyed. This procedural homology allows one to order one's thoughts into points about the issue of the appropriate starting point of analysis, precision, and the biases inherent in conventional models (Chick, 1998). This passage has many points that are worthwhile to note. Axiomatic and model-building both require an appropriate translation of empirical objects of interest into their formal counterparts. This is made by the modeler "judgment"10. Models apparently also meet their user's anxiety for "precision" and "certainty", giving logical consistency to reasoning based on a model. They are used also to promote agreement on a given issue, by supposing that it is correctly described in the model. But note that benefit is gained only at the syntactical level: models are also inherently interpretative, semantically such that their elements are debatable, and their assumptions can be questioned. So, the problem with formalism, in economics or elsewhere, is the misunderstanding that precision and consistency are completely different from validity, let alone practical implications ­ and giving to the formers the ultimate worth. This is enough to point out that models can be certainly valuable and important, but must be carefully used. Dow (2008) gives examples of how the `framing' of a question in a formal model can hinder further understanding of or, worse, distort the object of inquiry. Models of asset-pricing supposing equilibrium `as the end-state of market processes' are a case in point; another is `new' behavioral economics: `While there is reference in behavioral economics to social framing, as in the conditioning of choice by social norms, there is little exploration of how it arises, although sociology might well have provided insights. Because of the axiomatic focus on atomic individuals, the influence of society is limited to the introduction of social norms as exogenous constraints on rational individual behavior, without explanation for the emergence of these norms or the reasons that rational individuals accept them.' In a similar vein, Blaug (2002) asserts that, despite using higher techniques, `it is difficult to see how the new economic geography illuminates the locational aspects of economic activity any better than the old economic geography.' Several commentators find Milton Friedman's 1953 essay on `The Methodology of Positive Economics' the prime source of formalism in economics11, nevertheless statements in contrary in this very piece (Friedman 9 "While there are different formalist programs, the unifying principle is self-contained rule-following, by which to construct formal languages and deductive systems that are independent of content"(Chick, 1998). 10 The mathematician Christian Henning is in full accordance: `Mathematical modeling always requires the interpretation of elements of the formal mathematical domain in terms of (personal or social, non-mathematical) reality. There is no formal way to check whether such interpretations are `true', and the mathematical truth of theorems applied to such models does not warrant claims of `objective truth' concerning the modeled reality.' (Henning, 2010, p. 46) 11 See Chick (1998), Blaug (2002), Lawson (2009), Hodgson (2009), Dow (2008), and especially Hands (2009). 1953), in other places (Friedman, 1999), and in the most pronounced influence on others, like von Newman, Morgenstern, Arrow and Debreu (Blaug 2002; 2003).12 Take the following, rightly considered the most important (and controversial) methodological statement of the essay: `In so far as a theory can be said to have "assumptions" at all, and in so far as their "realism" can be judged independently of the validity of predictions, the relation between the significance of a theory and the "realism" of its "assumptions" is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have "assumptions" that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense). The reason is simple. A hypothesis is important if it "explains" much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained.' (Friedman, 1953, our italics) A footnote attached to this passage, reads: `the converse of the proposition does not of course hold: assumptions that are unrealistic (in this sense) do not guarantee a significant theory'. Why Friedman is so important to the formalization of economics? Based on this only passage, almost every writer finds Friedman `licensing' the free use of unrealistic assumptions while constructing economic models. In the context of the global financial crisis, his fingerprints are found in sanctioning models which contain assumptions of substantive rationality, efficient markets, dynamic stochastic general equilibrium, and so on. No matter what did Friedman think on economic theory and practice, his essay cried louder. A supporter of Friedman methodological statements evaluated its far-reaching consequences this way: `working economists look for heuristics that orient them in the fruitful direction and also make them feel that their work is scientific. When seeking fruitful heuristics, coherence and philosophical sophistication are not necessarily the dominant considerations. Crude, intuitive notions may be perfectly adequate to point an economist in the right direction' (Mayer, 1993, our italics). Interestingly, in spite of so much ink spent with his essay, Friedman never disavowed any comment, whether in support or in attack of it13. Thus, the way was freed for economists to pursue any kind of assumption, no matter how `wildly inaccurate' it may be.14 12 Backhouse and Medema (2009) find also an influence of Lionel Robbins on the path towards formalization, once his definition of economics as allocation of scarce resources was seen by mathematical economists (mostly associated to the Cowles Commission) as easier (than Marshall's?) to formalize. 13 A conference celebrating the fiftieth anniversary of Friedman's essay was organized by the Erasmus Institute of Philosophy and Economics, in 2003. In the published book of this conference, Milton Friedman was invited to write the `Final Word', in 2004. Here is Mäki's (2009) comment: "To my knowledge, this is the first time that he has publicly spelled out his views about what others have written about his essay, but unsurprisingly perhaps, he keeps his statement very general and polite (while in private correspondence and conversations, he has been active in reacting to various criticisms and suggestions in more substantive ways). He had decided to stick to his old private rule according to which he will let the essay live its own life. It remains a challenge to the rest of us to live our academic lives together with the methodological essay that he left behind." 14 Moreover, when Friedman (1953) delimitates the domain of validity of theories according to his methodology, any theory/hypothesis is made unassailable: "Viewed as a body of substantive hypotheses, theory is to be judged by its predictive power for the class of phenomena which it is intended to `explain'." Lawson (1992) takes this "class of phenomena" to mean posited conditions of closure, thereby stable conjunction of events can be obtained. In a similar vein, Mongin (1987) asserts that being so vaguely stated this domain of applicability "corresponds, in a circular reasoning, to simply exclude the known falsifiers of that theory." It would be a short step to treating economics as a kind of intellectual game played for its own sake. All in all, we find that much of this explain the alluring of Friedman's essay for working economists ­ a rhetorical success gained at the expense of methodological coherence. However, limitations of space prevents a detailed methodological treatment of these issues, but see Nagel (1963); Brunner (1969); Musgrave (1981); Caldwell (1982); Mäki (1986, 1992b); and Lawson (1992), for experts critiques of Friedman. Volume III Issue 1 (5) Summer 2012 At this point it is important to note that Hands (2009) in fact denies any responsibility of Friedman-the-man for formalism in economics. And he points to his just mentioned warns against excessively `tautological' methods of analysis. However this does not mean that economists, when they need to make their methodological allegiances explicit, refuse to take comfort in Friedman's essay and feel themselves good scientists. We think they do take comfort in it, indeed. Of course, methodological strictures do not run in a vacuum. Hodgson (2009), for example, provides a range of socio-cultural factors accounting for the winning of formalism, like the changing system of university teaching and research ­ towards more and more specialization and quantification, the downplaying of `big questions' concerning society and the ultimate aims of scientific endeavor, and the `publish-or-perish' pressure. Outside universities, market individualism and the cult of (quantifiable) performance has been in line with these developments. But one should not imply, from the critique of formalism sketched above, an utter rejection of mathematical methods in economics. A more tempered stance was recommended by Dow (1995) drawing on Keynes's remarks on the use of mathematics in economics. The key point is Keynes's turning the focus away from the dichotomy use/do not use, to situations in which mathematical modeling is appropriate. These conditions can be briefly stated: (i) when the assumption of constant structure is reasonable for the subject at hand; (ii) when the object of theorizing does not include significant non-quantifiable elements; and (iii) when variables are commensurable. There is also conditions for using formal reasoning, independent of quantification: (iv) that the structure being analyzed can reasonably be represented as constant, such that the variables can be represented as independent, or, if not constant, that interdependence can be expressed deterministically; (v) that all relevant factors can in practice be expressed formally (the danger with giving priority to mathematization is that the range of relevance is limited to those factors which can, given current capabilities, be expressed formally); and (vi) that the internal logic of the mathematical model is sufficient for persuasion. That is, the words employed in presenting mathematical argument themselves carry moral authority. Summing up, the more constant the structure of interest and the more it can be expressed formally, the more confident can one be of properly using formal models. However, this does not exhaust the possible uses of formal models15. Henning (2010) lists the following ones: (1) to improve mutual understanding; (2) to support agreement; (3) to reduce complexity; (4) for prediction; (5) to support decision; (6) to explore different (quantifiable) scenarios; (7) to explore the implications of the model (8) to guide observations and support learning; (9) to lend beauty and elegance to theories. It is apparent that Keynes's concerns regard the purposes (4)-(6) and (8), whereas the method of idealization (Mäki, 1992a; Nowak, 1989) regards purpose (3), and `conceptual exploration' (Hausman, 1992) regards purposes (7)-(9). Purposes (1) and (2) are uncontroversial16. Sugden (2002) offers a different view of models. Analyzing Schelling's segregation model and Akerlof's "market for lemons", he notes that these models do not fit in any of above conditions or uses. Akerlof's model, for example, does not predict the price for almost new cars. Nor Schelling's model predicts any behavior of racial discrimination in industrial cities. Thus, they are not concerned with prediction or control. Sugden asserts that these models can be interpreted as `conceptual exploration', but that is not all about them. They are constructed as counter-examples, counterfactuals, to shed light in some unperceived stretch of reality, likely to explain real world phenomena. Models do this job caricaturing, exaggerating, deforming some feature, isolating some putative causal factor, but keeping correspondence with reality (pp. 114-117). There is an increasing literature on models, their relation to reality and their construction. Here we can only redirect the interested reader to it. See the papers included in the Part III of Mäki (2002), in Morgan and Morrison (1999) and in a rather recent issue of Erkenntnis (January 2009). 16 Suppes (1968) argues for the use of formalism in science, but considers only purposes (1)-(4) and (7). This interpretation accepts Mäki's (1992a; 2005) vision of models as (idealized) "thought experiments", but, in Sugden's (2002) words: if a thought experiment is to tell us anything about the real world (rather than merely about the structure of our own thoughts), our reasoning must in some way replicate the workings of the world. For example, think how a structural engineer might use a theoretical model to test the strength of a new design. This kind of modeling is possible in engineering because the theory which describes the general properties of the relevant class of structures is already known, even though its implications for the new structure are not. Provided the predictions of the general theory are true, the engineer's thought experiment replicates a physical experiment that could have been carried out. On this interpretation, then, a model explains reality by virtue of the truth of the assumptions that it makes about the causal factors it has isolated. Therefore, models are devices to think about real world phenomena; its validity depends on what we know about the real word and if the workings of causal factors cohere with it. Models are deductive devices and we fill the gap between the model world and the real world by making inductive inferences from the world of the model to the real world. If a model is genuinely to tell us something, however limited, about the real world, it cannot be just a description of a self-contained imaginary world. And yet theoretical models in economics often are descriptions of self-contained and imaginary worlds. These worlds have not been formed merely by abstracting key features from the real world; in important respects, they have been constructed by their authors (Sugden, 2002). In sum, it seems that there are good reasons to require realisticism of models. Although no model is, perfectly realistic (`the-whole-truth'), our acceptance of them depends on their realistically picturing the workings of some isolated causal factor (`nothing but the truth'). That is, they must correspond to what we do know about the real world. Therefore, it would be an unjustified leap of faith to suppose that models which are unrealistic in both senses could, nevertheless, illuminate phenomena of the world we live in. However, they could function as heuristic devices or might be serviceable for `conceptual explorations'. This point leads us back to the issue of identifying real causal factors in a hidden, intransitive layer of reality ­ which we deal by discussing the possible ways to follow, in the near future. 4.3. Proposing Alternative Modes of Thought in the Aftermath of the Crisis Since formalism is a process which exhibit, according to Hodgson, path-dependency and positive feedbacks, it would be naïve to expect its instant abandoning. Yet, while some authors, like Colander et al. (2009), Keen (2009), Kirman (2009), or even Blaug (2002), propose a way out through looking for better, more empirically-driven models (e.g. complexity theory, experimental and behavioral economics), Lawson considers any adjust in the economist's toolkit unhelpful. It is clear that the recent crisis situation (like almost any social situation) is something that needs to be understood rather than modeled... [I]t seems overly heroic to suppose that in order to capture the sorts of developments that occurred, all that is required of modern academic economics is a different type of mathematics, or internal `theoretical' adjustments like the treating of a model's still isolated atoms as heterogeneous or as forming independent expectations; or focusing on the possibility of multiplicity and evolution of equilibria; or hoping that cointegrated vector autoregression (VAR) models will uncover robust structures within a set of data, and so forth. It is apparent that the legitimate and feasible goal of economic analysis is not to attempt to mathematically model and perhaps thereby predict crises and such like, but to understand the ever emerging relational structures and mechanisms that render them more or less feasible or likely. Amongst other things, this requires an account of the background conditions against which ongoing developments are taking place. In the current context, this includes understanding how the credit expansion triggered by liberalized financial markets set the conditions for the current situation, and the assortment of developments and mechanisms by which it has come about (Lawson, 2009, pp. 774-775). Volume III Issue 1 (5) Summer 2012 From the previous two sections it is easy to see why Lawson takes such a stark position. Mathematical modeling amounts to suppose ontology of closed systems. Reality, as we see it, is in contrary, an open system. Therefore, formalism would be ex definitione inappropriate for studying processes in the real world. As others (e.g., Hodgson, 2006; Chick and Dow, 2005; Mearman, 2002) have pointed out, Lawson runs in difficulty here. Recall that we leave an open question above: how to identify causal factors hidden in the deep layer of reality? Lawson's (1997, chap. 15) answer is: by examining contrastive pattern of events or demi-regularities (equivalent to Nicholas Kaldor's stylized facts). Such events are `rough and ready', not strict, semi regularities, etc. Thus, an example from Lawson himself will help to understand the point and its problem. Take the pattern of productivity growth in the British manufacturing sector in the twentieth century. It is inferior to (otherwise similar) advanced countries. That is a contrastive demi-regularity. We can abduct a cause to it in a non-empirical domain (e.g. the British system of labor relations), and we can corroborate or revise it with further research, always giving prime concern to the ontology of the object. This is a research conducted in an open system? No, because it isolated as negligible, or temporarily `out of focus', many other facts as worth of being considered `causes' as the isolated one. Thus, demi-regularities are partial closures, and for two reasons: (i) it is impossible to take all the relevant facts at once; theorizing is necessarily to discriminate and therefore to exclude some aspects of reality from our model world; and (ii) as Chick and Dow (2005) argue at length, the distinction between open and closed systems is not just one of on/off, as Lawson has lead us to belief, but is more nuanced. They identify eight conditions for a system to be open and other eight conditions to be closed. It requires satisfying any one of the former to be open, but all of the latter to be closed. Moreover, `complete openness is incompatible with a system remaining recognizable as a system.'(p. 367) So it is important to have in mind that when Lawson insist on the pointlessness of modeling work, we would assume he is reflecting Keynes's concerns on mathematical modeling, which dwelled mostly with stability of, and prediction upon (quantifiable) data. Closeness is often partial and this feature provides scope to discuss meanings, aims, and assumptions of the models, including its ontological commitments. The problem, as we see it, is not the use of models per se, but what are the elements, the method and the judgment made in its design. But these statements do not mean that we are enthusiast of the new assortment of modeling techniques, such as complexity theory, behavioral economics, evolutionary game theory, and so on. Along similar lines of Lawson's critique (see the previous quotation) of Colander et al. (2010), Hodgson (2009) and Dow (2008) also cast doubts on these new techniques. And for a fundamental reason: it is a mirage, a Sisyphean task, to look for models that `fit' better the data of recent financial turmoil. Mathematical modeling is inherently unhelpful to deal with stuff that makes for strong uncertainty, such as innovation, inexistent information, coordination of agents (Hahn, 1984) and animal spirits. New modelers seem to let this uncomfortable feature pass in oblivion. Yet, we have already paid a high price for placing prediction above understanding. If our assessment is valid, there are strengths and weakness in both positions. So, could we do better? Our answer would be in line with two pluralist statements. Twenty-five years ago, the sociologist Etzioni (1985) proposed "a medical model" for economics, which consists in making use of `findings from a variety of basic sciences', including sociology, political science, environmental science, psychology, etc., aiming at transcending the rational economic man, but `without reverting to a much less analytical science, to the way [nineteenth century] political economy was'. Similarly, Chick (1998, p. 1868) says: `I hope that I have argued persuasively that the role of formalism is to be precise and rigorous where that is possible, and that other modes of analysis exist as valid and valuable complements. Formalism is fine, but it must know its place'. This way, we hope, it will be possible transforming economics into a more realistic and useful science. Concluding Remarks The recent financial crisis gives an opportunity for reflection on the foundations of economic theory and the practices resting upon it. Despite some factors which could have been avoided ­ such as excessive reliance on the self-correction properties of markets, on rating agencies, on the self-regulation capacity of market participants, on excessive freedom, or even the cult, of market, seen as guardian of growth and entrepreneurship, and on the damaging effects of believing in normalcy of self-seeking behavior ­ we think this episode brings with it deeper lessons. At the practical level, that of norms, regulations and operation of markets, there is a need for an increase and change in regulation and incentives for many of the most important market players (Volcker, 2008; Stiglitz, 2009). We described the roots of the crisis and the real causes which finally started it. We also presented a quite detailed explanation of the four major issues, not mutually exclusives, which brought about the crisis, following Krugman, and Wells (2010): a) low interest rates, mainly by the Federal Reserve among many others Central Banks, after the 2001 recession; b) the so-called global savings glut; c) the disguise of risk by financial institutions, rating agencies and models used by these major actors, and appalling failures in the rewarding system for many of the agents working in the financial markets; and d) government programs which would have created moral hazard. At the theoretical level, our paper echoes a host of non-orthodox economists who urge for a change in the foundations of economic theory (Dymski, 2010). The dominance of the New Keynesian thought, and its twin conceptions of (systemic) equilibrium and (representative agent and) substantive rationality (alas, conceptions `imported' from New Classical economics), are dangerously fragile and even damaging in episodes like this crisis. How can one explain the volatility of asset prices, once one assumes that markets are in continuous equilibrium through time, in a random process? Moreover, how can one sustain that this macro equilibrium emerges from optimizing decisions of agents with perfect knowledge, not only about economic fundamentals, but even about the dynamics of markets, such that they do not commit systematic errors? In other words, these perplexities clearly point out that these models are overly unrealistic in the sense defined in this paper, namely, that a model validity depends on what we know about real economic systems, rather than on dogmas of competitive (and thus efficient) markets. Orthodox economists certainly would explain the crisis by failures in models of evaluation of risks and in predictions provided by them. They would blame governments for their ubiquitous failures. They would also complain that bailouts could jeopardize public belief in market systems (or even in `free societies') by hindering market discipline (i.e. bankruptcy). They will keep on seeking more sophisticated models to provide previsions "fitting" better the data. And they will keep on preaching about the virtues of markets and the sinfulness of regulators (Acemoglu, 2009). From these quarters one should have low expectations of transforming economics because of what Keen (2009) calls `inertia of the immovable object of the economic belief'. Thus, the orthodox lessons from the crisis oscillate between recitation of old sermons and marketing of new techniques. We shall not discuss ­ we would not even dare ­ how changes in the scientific community's beliefs will take place. But economic methodology can be helpful to assess arguments for change the economist toolkit. The economists from who we have drawn upon in this paper hold converging views that failures of orthodox economic theories can be tracked down to methodological misunderstandings, though methodology is seldom explicitly discussed by those theories. And that is why the influence of Friedman's essay plays such an important role in our argument. Despite the perception of Friedman as a foe by formalist revolutionaries, or Friedman's admonitions on the importance of empirical testing of theories, `once the assumption do not matter, the cat was out of the methodological bag, the profession was free to go speeding down the formalist road'(Hands 2009). Assumptions of DSGE, efficient markets, representative agents, etc., simply do not matter, only its empirical predictive implications. During the booms, reality seems to authorize this kind of presumption. Volume III Issue 1 (5) Summer 2012 Moreover, `it is all very well to have economic theory dominated by a school of thought with an innate faith in the stability of markets when those markets are forever gaining ­ whether by growth in the physical economy, or via rising prices in the asset markets. In those circumstances, [heterodox] academic economists can rail about the logical inconsistencies in mainstream economics all they want: they will be, and were, ignored by government, the business community, and most of the public, because their concerns don't appear to matter.'(Keen, 2009) The methodological approach endorsed here, that of critical realism, puts forthright emphasis on the importance of considering the ontology of objects under scientific economic investigation. It argues for considering the nature of objects of interest for economists, like households, firms, markets, production, distribution, trade, money, etc., as they really are in the world we live in, rather than as they could be in an idealized world model. Mäki (1992a) could make an objection to that claim, since by defending realisticness we are, in fact, restraining our view to `common-sense realism' (as opposite to `scientific realism' which contains non-observable entities). However, as we have seen, economists of different persuasions would claim that model credibility is not divorced from what we know about the real word, the world existing out of the model. This approach is, notwithstanding, skeptical about the capability of new formal models to solve the theoretical problems we are faced, even though their ontological compromises are richer than the orthodox ones. And that is so because: (i) a theory have to be translated into a formal language to be a model; in such a translation problems are "stripped out" of most of its non-formalisable aspects; and (ii) creativity and surprise are difficult to be modeled. It is clear enough that computer simulations, for example, depend on the instructions on how to ascribe/change probability distributions over results, according to rules defined from the programmer. Thus, although they are important and superior to overly simplified worlds of neoclassical models, those models hardly can improve our knowledge of social and economic reality where decision-taking under uncertainty is part of the ontology. Yet those methods need not be abandoned. They can provide heuristic frames for better theories, function as pedagogical devices and, in some cases, give insights on counterfactuals (Sugden, 2002). But they really must be very carefully handled. And they are very limited tools for prediction, as Keynes said long ago. That is, in our view, the point of many warnings from Hodgson and Lawson. At last, it seems that the depth and length of the crisis was not enough to force economists to take this warnings seriously, paradoxically as a consequence of the success of the very heterodox policies followed by many governments (Minsky, 1982; 1986). Anyway, economic theory has nothing to lose in taking ontological and methodological issues seriously. It is past time to shake off the old prejudice of Lord Kelvin, and embrace less formalism in doing economics. If this path is not chosen, the dismal science may lose by persisting in their `physics envy' and cyclical recantations when some `past masters' are needed to be rescued from the dustbin. That is to say, by not doing that a large part of economics may, in due course, be doomed to irrelevance. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Journal of Advanced Studies in Finance de Gruyter

The Current Financial and Economic Crisis: Empirical and Methodological Issues

Loading next page...
 
/lp/de-gruyter/the-current-financial-and-economic-crisis-empirical-and-methodological-pvLwJWxjq0

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
de Gruyter
Copyright
Copyright © 2012 by the
ISSN
2068-8393
DOI
10.2478/v10259-012-0007-x
Publisher site
See Article on Publisher Site

Abstract

In this paper we describe the main causes of the recent financial crisis as a result of many theoretical, methodological, and practical shortcomings mostly according to heterodox, but also including some important orthodox economists. At theoretical level, there are problems concerning teaching and using economic models with overly unrealistic assumptions. In the methodological front, we find the unsuspected shadow of Milton Friedman's `unrealisticism of assumptions' thesis lurking behind the construction of this kind of models and the widespread neglect of methodological issues. Of course, the most evident shortcomings are at the practical level: (i) huge interests of the participants in the financial markets (banks, central bankers, regulators, rating agencies mortgage brokers, politicians, governments, executives, economists, etc. mainly in the US, Canada and Europe, but also in Japan and the rest of the world), (ii) in an almost completely free financial and economic market, that is, one (almost) without any regulation or supervision, (iii) decision-taking upon some not well regarded qualities, like irresponsibility, ignorance, and inertia; and (iv) difficulties to understand the current crisis as well as some biases directing economic rescues by governments. Following many others, we propose that we take this episode as an opportunity to reflect on, and hopefully redirect, economic theory and practice. Keywords: financial crises, economic methodology, model-building. JEL Classification: G01, B41, B23 `Theorizing in economics, I have argued, is an attempt at understanding and I now add that bad theorizing is a premature claim to understand' (Hahn, 1985) 1. Introduction The recent world financial crisis has been inducing lively debates on the current status of economic theory. In this paper we set out an outline of these debates. We state the questions we are concerned with as follows: first, what were the infirmities of theories and empirical behavior underlying most of the views of the policy-makers, regulators and market operators? To answer this question, we lean on a host of evaluations of "what went wrong" with mainstream models of financial markets, both by orthodox and heterodox economists. This is a 1st and preliminary draft, without a major revision, which was approved and/or presented in: First International Conference in Political Economy ­ IIPPE (International Initiative for Promoting Political Economy) International Conference "Beyond the Crisis", Rethymno, Crete, Greece, September 10-12, 2010; Conference on "Impacts, Responses & Initial Lessons of the Financial Crisis for Low Income Countries", Copenhagen, Denmark, October 14-15, 2010; and Forum on Capital as Power ­ Crisis of Capital, Crisis of Theory, Keele Campus of York University, Toronto, Ontario, Canada, October 29-31, 2010. Naturally, the usual caveats remain and the authors are responsible for any mistake in this draft. Professional Address: Rodovia Araraquara-Jaú, km 1, Caixa Postal 174, CEP: 14.800-901, Araraquara SP ­ Brazil. Phone/Fax: +55 16 3301-6272. Yet, there are many reasons why formal modeling could be damaging, underlined mostly by heterodox economists. Thus, although there are some signs of theoretical `recantation', most of the propositions and proponents of efficient markets and rational expectations hypotheses are unshaken, quite paradoxically, as could assert Minsky, because of the very prompt intervention of the State, broadly speaking, through expansionist monetary policy and Big Government action. Second, what are the methodological foundations of those mainstream models? We claim that a mix of methodological confusion and ontological neglect, often resting on a prejudice towards methodologically-minded critiques, lend an unwarranted stamp of `scientificity' (only) to mainstream theorizing practices (Dow, 2008). As a result, economists of all stripes are now urging for caution when dealing with models (Lawson, 2009). Even those who defend those models on basis similar to Churchill's defense of democracy ("it is the worst form of government except all the others that have been tried") are now urging for attention to the content and truth value of economic theories. This situation provides an opportunity to revisit Friedman's (1953) influential methodological essay and similar works. Whether arguing a case for or against Friedman's theses, most methodologists find it a hard work to determine to which specific philosophical school it should belong (Mayer, 1993; Mäki, 1986). However, the philosophical allegiances of Friedman's essay are not our focus. Rather, we intend to show a lingering and unsuspected shadow of Friedman in all mainstream justification for its practices (Blaug, 2002, p. 30). Next, we point out the pitfalls of using `unrealistic assumptions' in economic theories ­ a practice sanctioned by Friedman. Ironically, Friedman's admonishments for testing assumptions by its predictive power remain in oblivion since he spelled them out. Economists pay lip-service to it, at best (Colander et al., 2009). Since Friedman's essay, economics has grown more and more formalistic. We claim, following Mongin (1987), Hands (2009) and Blaug (1997b, 2002, 2003), that whatever Friedman's designs and caveats were (see Friedman, 1999), we can detect his influence in the overly formalistic methods of present-day economics. Primacy of formalism, by its turn, can give and, in fact, undoubtedly gives real support to the use of unrealistic assumptions (Lawson, 2003, chap. 1 and 10; 2009), on grounds that resonate Friedman's theses at every bit (e.g., Blinder, 1999; Marcet, 2010). The main results are: at the substantive level, one constructs economic theories in near complete disdain for real world problems and the `academic game' is played almost only for its own sake. At the methodological level, economic theories are plagued with known falsehoods that hinder causal explanations. This is why mainstream economists have sometimes to retreat and recant their positions. We set out that lest we are trapped in another "unexpected event" like the recent financial crisis, a bolder turn in economic theorizing should be achieved. This transformation is already in progress, at least among some economists and schools of thought. Nevertheless, we think one could move faster by helping to promote those analyses and research programs which make their methodological underpinnings clear and pay attention to the plainly important items of the institutional fabric of society (Lawson, 1997, pp. 157-198; 2003, pp. 28-62; Hodgson, 1998). This article consists of five parts. After this Introduction, we give, in Section 2, an outline of the causes of the current crisis, with a specific part dedicated to its onset. In Section 3, we explain the fragile foundations over which rests much of the policies that brought the current crisis about. Section 4 presents some methodological issues concerning ontological end epistemological views of economics, their relation to the current crisis and some possible ways to deal with them. In Section 5, we conclude briefly. 2. An Outline of the Crisis An outline of institutional setting changes which would finally result in the "subprime crisis" of 2007/2008 started in the 1960s. It marks the growth of the importance of institutional investors in relation to deposit institutions (commercial banks) in the market for wealth and credit management, to which commercial banks Volume III Issue 1 (5) Summer 2012 reply with a series of financial innovations: conglomeration, underwriting, insurances, repurchase agreement, pension and investment funds, etc. In the 1980s there was "the removal of Regulation Q placing ceilings on interest rates on retail deposits" and in the 1990s "the elimination of the Glass-Steagall restrictions on mixing commercial and investment banking"(Eichengreen, 2008). In 1994, the Riegle-Neal Interstate Banking and Branching Efficiency Act allowed the expansion of branches and interstate operations. In 1999, further liberalization permitted bank holding companies to have insurance companies and investment banks among other assets in their portfolio. In addition, in the 80's, the rise of the decoupling between interests and maturities of assets and liabilities brought about increasing problems to the Saving and Loans institutions (S&Ls), causing a housing financing crisis in the US. As a consequence, there were major changes in securitization, which after 2002 would finally beget an extraordinary expansion in mortgage issues of various kinds, finally resulting in September 2008, after Lehman Brothers bankruptcy, in the so-called subprime crisis.1 A more detailed sketch of the last speculation cycle, however, could be presented like this: after the1970's, there was a huge rise in investments in the mortgage markets, for there were real guarantees backing those assets, improving national and also international assets to liabilities requirements, through better capital ratios, i.e., a bank's capital related to its risk-weighted assets, and also better balance sheets. Moreover, the process of housing and commercial mortgages securitization, that is, of mortgages creation and further securitization through selling generated huge receipts for those originators: Freddie Mac developed the first private mortgage-backed security for conventional mortgages, known as the PC (participation certificate); and the purpose was to buy mortgages from lenders and to pool them together and sell them as mortgage backed securities. Thus, the seed for linking the mortgage markets with the broader capital markets were planted in 1968 and 1970 with the restructuring of Fannie Mae and Ginnie Mae, and the establishment of Freddie Mac (Colton, 2002). Thus, in 1970 S&Ls responded for 47.7% of all mortgages creation, 60.6% in 1976, but in 1997 this share had been reduced to 17.8%, increasing to 20.7% in 2000. On the other hand, the share of commercial banks (CBs) and chiefly mortgage companies (MCs) went from 46.9% in 1970 (21.9% for CBs and 25% for MCs), to 35.7% in 1976 (21.7% for CBs and 14% for MCs, the lowest percentage for MCs for the entire period 1970-2000), and 79.3% in 2000 (21.4% for CBs and 57.9% for MCs; Colton, 2002, p. 35). That is to say, there was an oscillation of the share of CBs mortgage creation from 18.6% to 27.3% in the period 1970-2000, with the exception of 1990 with 33.4%, and 1998 with 15.3%. More importantly, however, the MCs share has risen to an all-time high of 61.1% in 1998, reduced to still astonishing 57.9%, in 2000. In other words, the main mortgage generators changed from S&Ls in the 1970's to MCs in the 1990's, with CBs roughly maintaining their shares (Colton, 2002). Concomitantly, from 1970 to 2003 the share in total mortgage stock of federal institutions and Government Sponsored Enterprises (GSEs), like Fannie Mae and Freddie Mac, departed from 8.1% to 42.9%, while the share of S&Ls went from 43.9% to 9.5%. Thus, private institutions maintained in their balance sheets only credits beyond the acquisition ceiling determined for the GSEs, i.e., the non-conforming loans or those assets whose risks implied an excessive discount to be sold (Cagnin, 2009a; Acharya and Richardson, 2009). Nevertheless, total issuance of new mortgages went from $36 million in 1970, to $1.3 billion in 1998, $2.2 billion in 2001 ($190 million subprime or 8.6%, from which $95 million securitized or 50.4%), an all-time high of $3.95 billion in 2003 ($335 million subprime or 8.5%, from which $202 million securitized or 60.5%), $2,9 billion in 2004 ($540 million subprime or 18.5%, from which $401 million securitized or 74.3%), $3.1 billion in 2005 ($625 million subprime or 20%, from which $507 million securitized or 81.2%) and $3 billion in 2006 ($600 million subprime or, again, 20%, from which $483 million securitized or 80.5%; Wray, 2007, p. 30). Another important detail is that the relevance of the largest CBs in the origination of new mortgages, including subprime and Alt-A, and of those securities in the assets are disproportional in relation to small banks (Graph 1). Colton, 2002; Torres-Filho and Borça Jr., 2008; Eichengreen, 2008; Wessel, 2009; Kregel, 2009, p. 661; Lavoie, 2010; Cagnin, 2009a; 2009b. For a critic of the very term `subprime crisis', see Patnaik (2010). As we know, those `heterodox' (subprime and Alt-A) assets have some important differences: Alt-A assets are those issued to borrowers which have not presented all the required documentation but that are `nearprime' (Roubini, 2007), i.e., could be a prime borrower according to their borrowing records, while subprime borrowers are those who have at least one record showing default or relevant delay in payment of an installment. Subprime borrowers present the records showed in Graphs 2 and 3 (taken from Wray, 2007). Graph 1. Derivatives as a Percent of Assets, 1992­2008: Small (<$1 Billion in Assets) vs. Big (>$1 Billion in Assets) Banks Source: Dymski, 2010. As we can see in Graphs 2 and 3, subprime assets displayed quite worse records both for delinquency and foreclosure rates. Notwithstanding, the originators from the 1990's onward, CBs and predominantly MCs, as we showed above, baited potential subprime borrowers with teaser rate mortgages (Kregel, 2009). Graph 2. Comparisons of Prime vs. Subprime Delinquency Rates, Total U.S. 1998-2007 Volume III Issue 1 (5) Summer 2012 Graph 3. Comparisons of Prime vs. Subprime Foreclosure Rates, Total U.S. 1998-2007 As Randall Wray points out: From 2004-2006 (when lending standards were loosest) 8.4 million adjustable rate mortgages were originated, worth $2.3 trillion; of those, 3.2 million (worth $1.05 trillion) had "teaser rates" that were below market and would reset in 2-3 years at higher rates (...) Of the $1 trillion dollars of teaser rate mortgages, $431 billion had initial interest rates at or below 2%.(...) An example will help. A subprime hybrid adjustable rate mortgage on a $400,000 house might have initial payments of about $2200 per month for interest-only at a rate of 6.5%. After a reset, the payments rise to $4000 per month at an interest rate of 12% plus principle (Wray 2007). But how and why CBs and MBs, mainly, did this? Because they do not have to maintain these credits in their balance sheet, i.e., they bundled together a series of these assets ­ in fact more than a thousand ­ in a mortgage pool, divide this pool in tranches ­ generally called senior (for the best shares of these tranches), mezzanine (medium rated shares) and junior (the riskiest shares of the tranches) and sold these tranches to the market (Volcker, 2008, pp. 104-7; Acharya and Richardson, 2009). They needed beforehand to rate the tranches through a credit rating agency (Moody's, Fitch and Standard and Poor's, chiefly, but also others ­ White, 2009; Crotty and Epstein, 2009). James Galbraith (2010, pp. 8-9) explains the trick: The business model was no longer one of originating mortgages, holding them, and earning income as home owners paid off their debts; it was one of originating the mortgage, taking a fee, selling the mortgage to another entity, and taking another fee. To do that, the mortgages had to be packaged. They had to be sprinkled with the holy water of quantitative risk-management models. They had to be presented to ratings agencies and blessed and sanctified, at least in part, as triple-A, so that they could legally be acquired by pension funds and other fiduciaries, which have no obligation to do any due diligence beyond looking at the rating. Alchemy was the result: a great deal of lead was marketed as gold. I think it's fair to say that if this sounds to you like a criminal enterprise, that's because that's exactly what it was. There was even a criminal language associated with it: liars' loans, NINJA loans (no income, no job or assets) ­ it sounds funny, but in fact this is why the world financial system has melted down ­ neutron loans (loans that would explode, killing the people but leaving the buildings intact), toxic waste (that part of the securitized collateral debt obligation that would take the first loss). These are terms that are put together by people who know what they are doing, and anybody close to the industry was familiar with those terms. Again, there's no innocent explanation. I would argue that what happened here was an initial act of theft by the originators of the mortgages; an act exactly equivalent to money laundering by the ratings agencies, which passed the bad securities through their process and relabeled them as good securities, literally leaving the documentation in the hands of the originators (the computer files and underlying documents were examined by the ratings agencies only very, very sporadically); and a fencing operation, or the passing of stolen goods, by the large banks and investment banks, which marketed them to the likes of IKB Deutsche Industriebank, the Royal Bank of Scotland, and, of course, pension funds and other investors across the world. The reward for being part of this was the extraordinary compensation of the banking sector. The originators maintained only a small part of these assets in their balance sheets (Volcker, 2010) or in Structured Investment Vehicles (SIVs) ­ enterprises whose only purpose were to issue asset-backed securities ­ because of difficulties to sell some tranches, prospective profitability of some assets or circumvention of Basel II and national regulations, since those assets remained off balance sheets, or even because of repurchase agreements. Thus, although CBs and MBs created quite risky assets, they did not remain with most of those assets, selling them to other investors and earning big fees for this `service'.2 That is to say, they become free of much of the own risk which their very entrepreneurial behavior generated (Kregel, 2009; Dymski, 2010), although they many times remained with shares of these more risky loans, usually the riskiest shares (Krugman and Wells, 2010). However, this is not the end of this unbelievable metamorphosis: some of the tranches, mostly the mezzanine ones, were recombined in new assets rather paradoxically some of them received better rates than the original ones, even AAA, making possible their acquisition, in this last case, also by pension funds, mutual funds and agents less prone to risk.3 A Collateralized Debt Obligation (CDO) backed in those assets was then issued and also divided in tranches, hence making feasible the creation of brand new securities, with new risk and profitability ratings, etc., and so on, in a multilayer pyramid. These issues of CDOs grew exponentially from 2002-2007, from $ 11.9 billion in 2000 to $108.8 billion in 2005, and then achieving their highest levels in 2006, with $186.7 billion, and 2007, with $177.6 billion (Torres Filho and Borça Jr., 2008). Finally, as the whole scheme was a mix of Ponzi finance, speculation on the profitability or at least maintenance of one's investment values, fraudulent action, overlook of regulators, authorities, etc. (Guttman, 2009; Galbraith, 2010), the majority of the agents, debtors or creditors, needed, as always (Galbraith, 1954; Kindleberger, 1978), at least two factors happening together, with no interruption, in order to maintain that scheme: a) a continued and increasing entrance of capital, feeding a pyramid (Ponzi) scheme, that is to say, making possible not only to maintain but also to augment the prices of the assets which backed the securities. For, as we know, and as a logical conclusion of the scheme outlined yet in this paper, the prices ­of the mortgages, since this speculation was built up mainly on housing and commercial mortgages ­ must rise in order to bring about the expected and desired profitability of the majority of the agents, making possible a continuous and even increasing inflow of capital to this market, with only minors non auspicious events, like minor crisis, bankruptcies, etc., quickly circumvented by the expert action of Central Banks (Federal Reserve, in the US case) and Big Government, as Minsky (1982, 1986) explained a long time ago. Furthermore, the continuous rise in assets prices, in spite of these minor upsetting events, seemed to corroborate almost all the market expectations as well as the algorithms used to calculate and distribute risks according to historical (which?) data (Zendron, 2006; Colander et al., 2009; Dow, 2008; Davidson, 1982-3; Minsky, 1982), and also yields, subdivide tranches, etc. Of course, the entire scheme would collapse if prices stopped to rise. In addition, houses are the main assets for many families and, thus, several of these families used those assets with rising values to increase their borrowings through renewed mortgages, piggybacks, etc. (Goodhart and Hoffmann, 2008; Goodhart et al., 2009). 2 Crotty (2009) asserts that "total fees from home sales and mortgage securitization from 2003 to 2008 have been estimated at $ 2 trillion." Certainly this caused unavoidable principal-agent problems. 3 White (2009), Crotty and Epstein (2009), Kregel (2009). Lawson (2009) shows that "at one point roughly 60% of structured products were triple-A rated according to Fitch Ratings (2007) compared with less than 1% of corporate bond issues. And one result of all this was the generation of a perception (as it turned out, an illusion) that structured securities were comparable in terms of safety or riskiness with single name corporate finance". Volume III Issue 1 (5) Summer 2012 Graph 4. Residential Prices in the US ­ 1992-2008 (variation in relation to the same quarter of the previous year) Source: Office of Federal Housing Enterprise Oversight, apud Cagnin, 2009a, p. 269; 2009b. As a matter of fact, there was an almost continuous rise in the prices of housings in the US, from 1992 to the middle of 2005 (Cagnin, 2009a; 2009b). From this moment, which almost exactly coincides with the acme of housing selling in the US that occurred in the fourth quarter of 2005, with 8.5 million houses sold (1.3 million new), those prices and selling started a uninterrupted decrease. In the third quarter of 2008, the housing selling had achieved only 5.4 million units (a 36.5% reduction in less than three years), with 0.5 million new (an astonishing 61.5% decrease in the same period; Torres Filho and Borça Jr. 2008). b) a benign action, in a Minskian sense, of monetary authorities, keeping low interests rates in the entire period (Cagnin, 2009b). This will allow many orthodox economists to blame these policies for the crisis, together with supposed naive and misconceived aims directed to guarantee at least a house for each American family, despite their income level (Taylor, 2009; Gjerstad and Smith, 2009; Patnaik, 2010; Krugman and Wells, 2010). In any case, probably the majority of the economics establishment, whatever their explicit or implicit theoretical strand, will agree that low interest rates, by the Federal Reserve, fed the housing and housing prices boom, although some could consider an impossible mission to attain all the goals attributed by the mainstream to the same monetary policy: low inflation rates, full employment, mild asset speculation, etc. (Greenspan, 2007). Moreover, as also explained by Minsky (1982), any more or less radical change in this benign monetary policy would imply simultaneously in changes in current and prospective prices of all assets, disturbing the upswing and certainly bringing about pressures for reversion of policies and/or blames for the premature explosion of the speculation bubble. 2.1. The onset of the crisis The crisis began with the reversion of the growth of the prices of the housings, which started to fall, as we have seen, in the middle of 2005. As we explained a stabilization of the housing prices would damage all the pyramid schemes which had as a sine qua non a steady rise in the prices. A reversion would be even more harmful, increasing losses and difficulties to service or even to roll over debts (Minsky, 1982). Moreover, American laws allowed mortgage debtors to abandon (`walk away') their residences, i.e., to transfer them to the creditors if they want to retrench from paying their mortgages, what started to be done with the fall on the residences prices. In addition, as we have seen in Graphs 2 and 3, the delinquency and foreclosure rates of subprime debtors were excessively large compared to those of prime debtors. There was an important reduction therefore in the yields of the SIVs, with their main owners, commercial and investment banks, having to cover payment delays, losses, etc., and not least, requiring those banks to record these losses in their balance sheets, what had not been done beforehand. Of course, there were enormous costs also to several tranches of CDOs. Therefore, it became then clear that the balance sheets of many financial intermediaries, even of some of the largest banks in the US and Europe, could not be trusted, because of the absence of knowledge on the share of toxic assets on the balance sheets of those financial institutions (Dymski, 2010; Galbraith, 2010, Einchengreen et al., 2009; Kregel, 2009). Creditors began to withdraw their investments in SIVs, mutual funds, etc., in the usual `flight to quality', i.e., to US Treasuries, increasing rapidly the spreads between the rates needed to attract investors and the FED Funds (Eichengreen et al., 2009; Torres-Filho and Borça Jr., 2008). Consequently, there was a retrenchment of creditors from financial institutions, of financial institutions from borrowers, and so on, in a much known vicious cycle which simultaneously diminished credits and rose interests (Minsky, 1982), including interbank loans ­ chiefly after the infamous Lehman Brothers bankruptcy ­ feeding back the decline in house prices and investments, and even turning impossible the pricing of mortgage backed securities. That is the reason for the first strong signs of the coming crisis: the bankruptcy of Ownit Solutions, a nonbank specialist in subprime and Alt-A mortgages, in 2006; the August 9, 2007 halting of withdrawals from three investment funds by BNP Paribas, with about $2.2 billion in total assets, after Bear Sterns, on July 31, and Union Investment Management GmbH, on August 3, had recurred to the same measures, a week before (Boyd, 2007; Acharya and Richardson, 2009, p. 208). In reality, the markets were then disturbed, but almost returned to `business as usual', until the need of Bear Sterns to be sold to J.P. Morgan, on the weekend of 15-16 March, 2008, in a rush to avoid a financial panic before of the opening of the markets in Asia, on Monday. Bear Sterns was sold with a special financing from the FED to fund up to $30 billion of Bear Sterns' less liquid assets. And all this was needed despite a startling 93% price discount to that investment bank closing stock price on the New York Stock Exchange, on Friday 14 March or 99% considering those prices a year before (Sorkin and Thomas Jr., 2008). However, sheer panic was avoided until the much known policy mistake with Lehman Brothers, on the weekend of 12-15 September of that same year (Lavoie, 2010, pp. 5-6; Taylor, 2010, pp. 360-1) and the decision of the US Treasury, just on 16 September to lend $85 billion to AIG in exchange for a stake of almost 80% in that Group, in order to prevent its bankruptcy (Wessel, 2009). Wachovia (-73.2%), Wells-Fargo (-65.5%), Citigroup (41.2%), J.P. Morgan (-25.5%) and Bank of America (-19.2%) assets also faced huge losses in their August 2008 market prices in comparison to July 2007 (Torres-Filho and Borça Jr., 2008; Guttman, 2009). As Crotty (2009,) affirms, "[i]t is estimated that by February 2009, almost half of all the CDOs ever issued had defaulted... Defaults led to a 32% drop in the value of triple A rated CDOs composed of super-safe senior tranches and a 95% loss on triple A rated CDOs composed of mezzanine tranches". 3. Reliance on Fragile Theoretical Foundations One important issue in contention is the methodological underpinnings supporting (or not) one's personal (or even a scientific group's ­ Kuhn, 1962; Lakatos, 1970) view of financial markets and the analyses and proposals which are derived of those views (Laidler, 2010). We will divide this discussion in two major parts, presented in this item ­ first, an analysis of financial markets and the current economic crisis ­ and second ­ a view (or understanding) of economics and financial markets and some broad considerations on methodological issues. That is to say, we will not discuss in this paper proposals for the current crisis, although they could be considered a rather logical consequence of our paper. For this would require practically another paper. We can follow the outline sketched by Krugman and Wells (2010) to present the arguments of several economists on the crisis. They divide their explanation in four major issues, nor mutually exclusives: a) the low interest rate policy of the Federal Reserve after the 2001 recession; b) the global savings glut; c) financial innovations that disguised risk; and d) government programs that created moral hazard. Volume III Issue 1 (5) Summer 2012 a) The low interest rate policy of the Federal Reserve after the 2001 recession A large stream of economists contends that too low interest rates, from at least 2002 to 2006 are the main or even the sole responsible for the crisis. As Krugman and Wells (2010) explain, after the burst of technology bubble in the late 1990s, central banks cut base short-term interest rates, in an attempt to avert a slump. The Federal Reserve cut its overnight from 6.5 percent at the beginning of 2000 to 1 percent in 2003, keeping the rate at this low point until the beginning of the summer of 2004. Graph 5. Federal Funds Rate, Actual and Counterfactual (in %), U.S. 2000-2007 Source: apud Taylor (2010). As Taylor (2010) proposes, Graph 5 would show that the actual monetary policy in the U.S. was excessively expansionist, not following the Taylor rule which "worked well during the historical experience of the `Great Moderation' that began in the early 1980s (...) This was an unusually big deviation from the Taylor rule. There has been no greater or more persistent deviation of actual Fed policy since the turbulent days of the 1970s. So there is clearly evidence of monetary excesses during the period leading up to the housing boom"(Taylor, 2010). He also provides "statistical evidence" that that "interest-rate deviation could plausibly bring about a housing boom. In this way, an empirical proof was provided that monetary policy was a key cause of the boom and hence the bust and the crisis" (Taylor, 2010). Inflation rates, measured through CPI inflation, would also have been lower, around the 2% target suggested by many policy-makers ­ of course, adept of inflationtargeting policies ­ instead of the 3.2% during the previous five years. Moreover, "housing was also a volatile part of GDP in the 1970s, a period of monetary instability before the onset of the Great Moderation. The monetary policy followed during the Great Moderation had the advantages of keeping both the overall economy stable and the inflation rate low" (Taylor, 2010). In addition, interest rates in several European countries ­ strongly influenced by the American monetary policies ­ were also below those that historical regularities according to the Taylor rule would have predicted. And the housing booms would have been the largest where this deviation was the largest. However, as he candidly asserts, "One can challenge this conclusion, of course, by challenging the model, but an advantage of using a model and an empirical counterfactual is that one has a formal framework for debating the issue"(Taylor, 2010). Also, according to the subjacent efficient market model of his analysis (Laidler, 2010), the rating agents would have underestimated the securities risks "either because of a lack of competition, poor accountability or, most likely, an inherent difficulty in assessing risk owing to the complexity".(Taylor, 2010). Finally, the behavior of GSEs, like Fannie Mae and Freddie Mac, encouraged to expand and to buy Mortgage Backed Securities (MBS), "should be added to the list of government interventions that were part of the problem" (Taylor, 2010). Consequently, according to Taylor, the major problem after the crisis was one of risk rather than liquidity, made worse by wrong policies which engendered Lehman Brothers' bankruptcy, for they made unpredictable which financial institutions government will save and support. As a conclusion, "government actions and interventions caused, prolonged, and worsened the financial crisis. They caused it by deviating from historical precedents and principles for setting interest rates that had worked well for twenty years. They prolonged it by misdiagnosing the problems in the bank credit markets and thereby responding inappropriately by focusing on liquidity rather than risk. They made it worse by providing support for certain financial institutions and their creditors but not others in an ad hoc fashion, without a clear and understandable framework. While other factors were certainly at play, these government actions should be first on the list of answers to the question of what went wrong."(Taylor 2010). Certainly this is not only Taylor's opinion. Many economists share his view (Krugman and Wells, 2010, Patnaik, 2010, Cassidy, 2010; Wickens, 2009). However, as Krugman and Wells (2010) explain, there are some serious problems with this view. For one thing, there were good reasons for the Fed to keep its overnight, or "policy," rate low. Although the 2001 recession wasn't especially deep, recovery was very slow--in the United States, employment didn't recover to pre-recession levels until 2005. And with inflation hitting a thirty-five-year low, a deflationary trap, in which a depressed economy leads to falling wages and prices, which in turn further depress the economy, was a real concern. It's hard to see, even in retrospect, how the Fed could have justified not keeping rates low for an extended period. The fact that the housing bubble was a North Atlantic rather than purely American phenomenon also makes it hard to place primary blame for that bubble on interest rate policy. The European Central Bank wasn't nearly as aggressive as the Fed reducing the interest rates it controlled only half as much as its American counterpart; yet Europe's housing bubbles were fully comparable in scale to that in the United States. These considerations suggest that it would be wrong to attribute the real estate bubble wholly, or even in large part, to misguided monetary policy. b) the global savings glut According to some economists (Eichengreen, 2008) the global savings glut is a major cause for the crisis: The other element helping to set the stage for the crisis was the rise of China and the decline of investment in Asia following the 1997-8 crisis. With China saving nearly 50 per cent of its GNP, all that money had to go somewhere. Much of it went into U.S. treasuries and the obligations of Fannie Mae and Freddie Mac. This propped up the dollar. It reduced the cost of borrowing for Americans, on some estimates, by as much as 100 basis points, encouraging them to live beyond their means. It created a more buoyant market for Freddie and Fannie and for financial institutions creating close substitutes for their agency securities, feeding the originate- and-distribute machine. Again, these were not exactly policy mistakes. Lifting a billion Chinese out of poverty is arguably the single most important event of our lifetimes, and it is widely argued that the policy strategy in which China exported manufactures in return for high-quality financial assets was a singularly successful growth recipe. Similarly, the fact that the Fed responded quickly to the collapse of the high-tech bubble prevented the 2001 recession from becoming even worse. But there were unintended consequences. Those adverse consequences were aggravated by the failure of regulators to tighten capital and lending standards when capital inflows combined with loose Fed policies to ignite a credit boom. They were aggravated by the failure of China to move more quickly to encourage higher domestic spending commensurate with its higher incomes (Eichengreen, 2008). Volume III Issue 1 (5) Summer 2012 The main idea supporting it is that the savings of countries like Germany and many Asian are used to buy securities in deficit nations, like the US, UK, Spain and so on. Historically, developing countries have run trade deficits with advanced countries as they buy machinery and other capital goods in order to raise their level of economic development. In the wake of the financial crisis that struck Asia in 1997­1998, this usual practice was turned on its head: developing economies in Asia and the Middle East ran large trade surpluses with advanced countries in order to accumulate large hoards of foreign assets as insurance against another financial crisis (Krugman and Wells, 2010). An important problem with this explanation is that Central Banks throughout the world set the basic rates. These capital inflows also drove down interest rates--not the short-term rates set by central bank policy, but longer-term rates, which are the ones that matter for spending and for housing prices and are set by the bond markets. In both the United States and the European nations, long-term interest rates fell dramatically after 2000, and remained low even as the Federal Reserve began raising its short-term policy rate. At the time, Alan Greenspan called this divergence the bond market "conundrum," but it's perfectly comprehensible given the international forces at work. And it's worth noting that while, as we've said, the European Central Bank wasn't nearly as aggressive as the Fed about cutting short-term rates, long-term rates fell as much or more in Spain and Ireland as in the United States--a fact that further undercuts the idea that excessively loose monetary policy caused the housing bubble (...) the global glut story provides one of the best explanations of how so many nations managed to get into such similar trouble (Krugman and Wells, 2010). We can agree with Krugman and Wells if the savings are understood as influencing long term interest rates, i.e., if they are used to buy and make possible lower long term interest rates for these securities. Of course, to this savings we must add, at least for some individuals and groups, US, UK, Spain, etc., private savings. That is to say, the issue is not so much of a savings glut ­ for the sum of private, public and private savings in every country amounts to zero (Godley and Zezza, 2006; Godley et al., 2007; 2008) ­ but one of where to put the financial resources to those who own them. c) financial innovations that disguised risk Many authors consider that several models which packed together many mortgage debts with other debts ­ even student loans, leveraged loans, credit card debts, corporate bonds, etc. (Acharya and Richardson, 2009; Wallison, 2009b) ­ were the main responsible for the crisis, for they disguised the implicit risks of the many assets included in each CDO. As many analysts assert, it is simply impossible to rate risks in these CDOs and also, consequently, to know the entire situation of the financial institutions and of the whole financial system, even by the most savant. Banks and some other financial institutions acted then chiefly as originators of credit, i.e., as intermediaries (Kregel, 2009), usually not keeping them in their balance sheets. This behavior was one of those responsible for the crisis, since those originators were not worried about real conditions of debtors, but mainly with creating new mortgages, in order to package them in CDOs and then sell them to the market, generating substantial fees for the originators (Stiglitz, 2009; Krugman and Wells, 2010). Moreover, systemic risks were disregarded in the models used by financial institutions (Zendron, 2006; Colander et al., 2009; Crotty, 2009). This turned risks invisible to agents, considered individually or systemically. We would add to these the failure of rating agencies to rate more correctly those CDOs, in spite of the inherent difficulties or even impossibilities we stressed before for such rating. Nonetheless, the rating agencies mostly rated these packaged securities with very good ratings, normally with an AAA. This behavior denotes a conflict of interests, for the rating agencies were regularly paid for these ratings, having interests to remain as good raters for the credit originators, in order to receive those payments regularly (Stiglitz, 2009; White, 2009). Furthermore, there were also conflicts of interests within the staff of the financial institutions, for their components received earnings based on profits also generated through fees paid for mortgages and other debts originations. It was quite possible that if a financial institution would face problems in the future those would not happen at the time the then members of the staff would be in those institutions. Besides, even those members could believe that the financial models to calculate risks were trustable and so even they could find out that they were doing a fair and good job for all. Regulators also believed somehow in market efficiency and those who had doubts about it were stifled by the "true believers." In addition to that ideological issue, there were practical incentives like Wall Street (and other financial centers) political and ideological pressure ­ since many central bankers, secretaries and other regulators are connected to the financial institutions to be regulated or can work for them in the future (Crotty, 2009). Finally, Wall Street and other financial centers are very important financial contributors to increasingly more expensive political campaigns.4 To sum up, all the incentives structure of the financial markets was flawed (Stiglitz, 2009; Wray, 2009). Everyone ignored both the risks posed by a general housing bust and the degradation of underwriting standards as the bubble inflated (that ignorance was no doubt assisted by the huge amounts of money being made). When the bust came, much of that AAA paper turned out to be worth just pennies on the dollar.(...) [However,] Three points seem relevant. First, the usual version of the story conveys the impression that Wall Street had no incentive to worry about the risks of subprime lending, because it was able to unload the toxic waste on unsuspecting investors throughout the world. But this claim appears to be mostly although not entirely wrong: while there were plenty of naive investors buying complex securities without understanding the risks, the Wall Street firms issuing these securities kept the riskiest assets on their own books. In addition, many of the somewhat less risky assets were bought by other financial institutions, normally considered sophisticated investors, not the general public. The overall effect was to concentrate risks in the banking system, not pawn them off on others. Second, the comparison between Europe and America is instructive. Europe managed to inflate giant housing bubbles without turning to American-style complex financial schemes. Spanish banks, in particular, hugely expanded credit; they did so by selling claims on their loans to foreign investors, but these claims were straightforward, "plain vanilla" contracts that left ultimate liability with the original lenders, the Spanish banks themselves. The relative simplicity of their financial techniques didn't prevent a huge bubble and bust. A third strike against the argument that complex finance played an essential role is the fact that the housing bubble was matched by a simultaneous bubble in commercial real estate, which continued to be financed primarily by old-fashioned bank lending. So, exotic finance wasn't a necessary condition for runaway lending even in the United States (Krugman, and Wells, 2010). In conclusion: What is arguable is that financial innovation made the effects of the housing bust more pervasive: instead of remaining a geographically concentrated crisis, in which only local lenders were put at risk, the complexity of the financial structure spread the bust to financial institutions around the world (Krugman, and Wells, 2010). d) government programs that created moral hazard As Stiglitz (2009) shows, there are conservative critics who point to the government as the principal culprit for the crisis. For the creation of the Community Reinvestment Act (CRA) required that banks lent a certain share "Most elected officials responsible for overseeing US financial markets have been strongly influenced by efficient market ideology and corrupted by campaign contributions and other emoluments lavished on them by financial corporations. Between 1998 and 2008, the financial sector spent $1.7 billion in federal election campaign contributions and $3.4 billion to lobby federal officials. Moreover, powerful appointed officials in the Treasury Department, the SEC, the Federal Reserve System and other agencies responsible for financial market oversight are often former employees of large financial institutions who return to their firms or lobby for them after their time in office ends. Their material interests are best served by letting financial corporations do as they please in a lightly regulated environment. We have, in the main, appointed foxes to guard our financial chickens" (Crotty, 2009). Volume III Issue 1 (5) Summer 2012 of their portfolio to underserved minority communities (Wallison, 2009a; Patnaik, 2010). They also blame GSEs, like Freddie Mac and Fanny Mae, which played a very large role in mortgage markets, despite their privatization in 1968. Nevertheless, as Stiglitz (2009) underscores, a recent Fed study showed that the default rate among CRA mortgagors is actually below average. The problems in America's mortgage markets began with the subprime market, while Fannie Mae and Freddie Mac primarily financed `conforming' (prime) mortgages.(...) To be sure, Fannie Mae and Freddie Mac did get into the high-risk high leverage "games" that were the fad in the private sector, though rather late, and rather ineptly. Here, too, there was regulatory failure; the government-sponsored enterprises have a special regulator which should have constrained them, but evidently, amidst the deregulatory philosophy of the Bush Administration, did not. Once they entered the game, they had an advantage, because they could borrow somewhat more cheaply because of their (ambiguous at the time) government guarantee. They could arbitrage that guarantee to generate bonuses comparable to those that they saw were being "earned" by their counterparts in the fully private sector. Krugman and Wells add the much known political motivation for this economic "analysis". Those authors are careful not to name names and attribute the blame to generic "politicians" it is clear that Democrats are largely to blame in his worldview. By and large, those claiming that the government has been responsible tend to focus their ire on Bill Clinton and Barney Frank, who were allegedly behind the big push to make loans to the poor.(...) The huge growth in the subprime market was primarily underwritten not by Fannie Mae and Freddie Mac but by private mortgage lenders like Countrywide. Moreover, the Community Reinvestment Act long predates the housing bubble. Overblown claims that Fannie Mae and Freddie Mac single-handedly caused the subprime crisis are just plain wrong. As others have pointed out, Fannie and Freddie actually accounted for a sharply reduced share of the home lending market as a whole during the peak years of the bubble. To the extent that they did purchase dubious home loans, they were in pursuit of profit, not social objectives ­ in effect they were trying to catch up with private lenders. Meanwhile, few of the institutions engaged in subprime lending were commercial banks subject to the Community Reinvestment Act. Beyond that, there were the other bubbles ­ the bubble in US commercial real estate, which wasn't promoted by public policy at all, and the bubbles in Europe. The fact that US residential housing was just part of a much larger phenomenon would seem to be presumptive evidence against any view that relies heavily on supposed distortions created by US politicians. Was government policy entirely innocent? No... Fannie and Freddie shouldn't have been allowed to go chasing profits in the late stages of the housing bubble; and regulators failed to use the authority they had to stop excessive risk-taking (Krugman, and Wells, 2010). 4. Considering Methodological Issues In this section we sketch a conception of what are the fundamental, meta-theoretical failures involved in, and explaining, in our view, the theoretical problems of economics. For us, the main problem (formalism) allows the use of very inappropriate models to understand economic reality. Since formalism presupposes an ontology of `closed systems' it is unable to avoid economic disasters caused by phenomena of `open systems', like uncertainty, bounded rationality, herd psychology, etc. Our interest is not primarily in debates within economic methodology, but in using methodological critiques of economic theory in order to see if we can learn from this episode how can we profit from paying attention to the real word when designing models. We do so in three steps: outlining the critical realist approach to economic methodology, which draws attention to the ontology of economic world; discussing formalism and the problems concerning design and use of overly unrealistic models; and finally proposing some ways to proceed in the future. 4.1. The Critical-Realist Conception of Scientific Explanation Critical realism is a comparatively new and expanding approach to the methodology of economics. Proposed by the British philosopher of science Roy Bhaskar (1975, 1979), it was introduced in economics by a group of economists and other social scientists mostly associated with the University of Cambridge. It has made an appeal to philosophically-oriented economists and schools, like Post Keynesians and Austrians. The best known name of critical realist economic methodology is Tony Lawson (1997, 2003, and several papers), formerly editor of Cambridge Journal of Economics. Closely associated are Sheila Dow (2002, 2003) and the (old) institutionalism and evolutionary economic theorist Geoffrey Hodgson (2004, 2006). The pivotal theme within critical realism is the nature of scientific activity and explanation. According to that approach, traditional issues in economic methodology (logical empiricism and Popperian falsificationism) mistake the nature of scientific activity and so propose an misleading aim to (natural as well as social) working scientist. For in the natural sciences, theories are law-like statements from which implications that `explain' the object of interest are deduced. Thus, for example, an explanation of falling bodies is a deduction from Galileo's Law, plus a series of auxiliary or attending or simplifying/idealizing statements (e.g. perfect vacuum, flat surface of earth, etc.) According to the traditional methodology, explanation is subsuming a case of falling body into at least one general law. Prediction, on the other side, is to expect that from the same cause (a general law) the same effect will always (deterministically or probabilistically) ensue. Explanation and deduction are symmetrical (the famous Hempel-Oppenheim's symmetry thesis). The empirical test of theories is at the same time condition for its acceptance, and a sign of growth of knowledge5. To prescribe this methodology for economics involves two implications: (i) there is only one valid method of inquiry all over the sciences (`methodological monism'); and (ii) the search for regularities or constant conjunctions of events is the only possible mean of attaining knowledge (`epistemic fallacy'). Starting from the latter, for critical realists constant conjunctions of events are neither necessary nor sufficient condition to claiming scientific knowledge. Scientific law-like statements are formulated in experimental (i.e. controlled) settings (`closed systems'), where constant conjunction of events obtains because one causal factor of interest is sealed off from any other countervailing factors that bear on the phenomenon of interest, such that we can always say `whenever (event type) X, then (event type) Y'. If valid, these statements will be successfully applied also in nature (an `open system'). How is that possible? Traditional methodologists have a problem here: if stable conjunction of events are sought after, they are rather rarely spontaneously founded (astronomical laws, one of Lawson's favorite example of spontaneous regularity, is indeed obtained under conditions of closure, see Mäki, 1992b); if, on the other hand, the subject matter of scientific investigation is explained by the scientist's intervention, then they are bound to admit that there is no genuine laws in the nature. Critical realists solve this problem by claiming that the aim of experimental activity is to isolate a putative causal factor from all others bearing on the phenomenon of interest. When the theory obtained is successfully applied in open systems is because scientists have identified correctly the causal (i.e., dominant) factor. This picture has important implications for critical realist's account of science. First, science should not be seen as the search for a constant conjunction of events. The prime interest of critical realists is in ontology. Ontology is the study of the nature of world and what there is in it (its `ontic furniture'). Critical realists advance a series of ontological propositions. Reality is structured in layers, each of them more encompassing and deep from top-down. The first layer is the empirical domain (our sensory perception of events and state of affairs), the second one is the actual domain (things `as they really are', 5 We will not delve into the details and historical crumbling of `received' (i.e., logical empiricist) and Popperian views of methodology (but see Hands, 2001, chap. 3). This does not prevent Blaug (1997a), an important supporter of Popperian ideas in economic methodology, to say that `the Methodology which best supports the economist's striving for substantive knowledge of economic relationships is the philosophy of science associated with the names of Karl Popper and Imre Lakatos. To full attain the ideal of falsifiability is, I still believe, the prime desideratum in economics' Volume III Issue 1 (5) Summer 2012 irrespective to our knowing of or feeling them) and the third one is the real or deep domain, populated by structures, mechanisms, powers and tendencies that shape and condition the events of actual domain. Structures are the properties of an object of inquiry of their mode of being. Mechanisms are the way an object operates, due to its structure. Powers are capacities of the object, what it can cause when its mechanisms are triggered. Yet these mechanisms do not operate in isolation, but in open systems, such that many other mechanisms (enhancing or countervailing) might typically be at work simultaneously ­ thus concealing the mechanism we are interested in. That is why critical realists claim that mechanisms operate as tendencies, i.e., when triggered, a mechanism will necessarily operate, no matter what events ensue. In this ontological commitment, reality is stratified (in layers) and structured (any layer may be out of phase from each other), but an explanation is the move from the empirical domain into even deeper layers of reality, searching for the causal mechanisms of what exists in the actual domain and which we perceive in the empirical domain6. Second, due to this account, explanation does not require strict regularities. A unique event can be explained if we have sufficient information on its structure, and antecedent knowledge from where to start the research. Constant conjunctions of events are insufficient for explanations, too. For explaining a phenomenon is studying its structure looking for plausible mechanisms causally responsible for its occurrence, rather than simply recording correlations between empirical events. In fact critical realists charge positivists of all stripes of what they call `epistemic fallacy' ­ mistakenly conflating ontological questions to epistemic questions. For example, the restlessness search for models that better `fit' facts to a theory is a case in point, insofar as it reduces all phenomena to some measurable and all-compassing analytical categories referring only to empirical events. At last, thirdly, social scientific research can be done along lines broadly similar to natural science. The structures, mechanisms, etc., are obviously different, but the aim is equal: unearth causal mechanisms, powers and tendencies of structured objects of knowledge. From this point, Lawson (1997, chap. 14 to 16; 2003, chap. 2) elaborates the nature of social reality at length. It is characterized by internal (constitutive) and external (contingent) social relations, mediated by positions (hierarchies) and rules (norms, mores, conventions, etc.). Society is thus an unbroken net of relations, dependent of individual action but irreducible to it, with mechanisms and powers of its own. It constrains the alternative courses of action for individual decisions, but does not determine the action actually chosen. Moreover, in any time individual action is simultaneously reproducing and transforming society. Critical realists like Archer (1995, chap. 5) and Fleetwood (1995, pp. 86-90) call this process `the transformational model of social activity': in society our actions are always based on structures inherited from the past and always transforming or reproducing this same structure for the future. That is why Lawson (2009, p. 764) claims that social processes are `a totality in motion'. One last question is: how can we obtain knowledge of these hidden structures? Critical realism is not a disguised form of outdated essentialism? That is where critical realists claim their position as falibilism ­ there is no guarantee for putative mechanisms besides its power to illuminate some reality (natural or social). The problem of discriminating among alternative theories (the old problem of `identification') is to be solved by the degree in which each theory can explain more (and in a better way) events than its competitors. Of course, this is a very hotly debated issue in the philosophy of science, opening the doors for relativism. Critical realists call in their help two notions: knowledge, as a social product, is itself a `produced mean of production' of knowledge, such that in the start of any research we have at least one theory to proceed. Moreover, the ontological commitment (`how reality is') traces a divide between knowledge of reality and its object. This make possible to be fallibilist, not relativist: reality is the contrastive backdrop onto all scientific claims can be evaluated and our prior (scientific) beliefs revised. Thus, critical realists can (and relativists cannot) differentiate changes in the world from changes in knowledge. When we perceive some event (supposedly) disjunctive to our existing knowledge, we can abduct a mechanism (i.e., propose a cause for that effect) and investigate its occurrence. We Certainly that is only a sketchy picture of critical realist account of scientific practices. See a lengthy and sophisticated discussion of these matters in Lawson (1997). shall deal with the question of how to identify a causal mechanism shortly. Before that, we deal with the problem of formalism in economics and its (supposed) culpability for the crisis. 4.2. Formalism and Economic Models In the wake of the global financial crisis of 2008 often we found opinions on the effect that economic theory was made irrelevant due to its formalism. In the simple and plain words of Blaug (2002): "what characterizes "formalism" is that technicalities are prized as ends in themselves, such that theories which do not lend themselves to technical treatment are set aside and with them the problems they address. Formalism is the worship of technique and that is what is wrong with it." In fact, several critics and supporters of mainstream economics have made pronouncements on this regard. Of course, not all mainstream economists recognize a problem in the way of doing economics. They typically blame some factor exogenous to the discipline (regulatory shortcomings, excessively lax monetary policy, irrational optimism and pessimism, and so on) for the financial crisis.7 Others think that is just business as usual. Note, for example, LSE professor Marcet: What do economists do? We think economics as a science. That means if you don't have a model, data is a mess... We need models just to see where to look. I should teach to our MSc students and undergraduate students theories (with internal consistency) that the research community has thoroughly and very strongly tested in empirical terms, because that's our job. Necessarily, economic models are oversimplified [there follows the Galileo's Law as an example] and thus, in order a model to be a model is the easiest thing in the world to make of economic theories. But, unless I find better models, it isn't fair to make fun (Marcet, 2010; our transcription). Yet Alan Blinder, when praised the progress of economics, has recognized the problem: Economics was off to the mathematical races. Intellectual giants like Samuelson and Arrow led the way, sweeping away the old, more literary tradition in economics and attracting a small army of scholars with a more scientific bent.' And he adds: "But somewhere along the way the warm embrace of mathematics developed first into a infatuation, and then into an obsession. And that, I am afraid, is where economics lost at least some of its scientific moorings ­ moorings we have yet to regain. [Mathematics] is, of course, both a high and exceedingly difficult form of thought and an indispensable tool for every science... But mathematics seems entirely too selfreferential, too deductive, one might almost say too pure to be considered a science. Let me dwell on these three words ­ self-referential, deductive, and pure ­ for they describe where economics has gone wrong, in my view (Blinder, 1999). In our view, the theoretical shortcomings we have seen in the previous section are closely linked to methodological and ontological presuppositions mostly held by mainstream economists. The problem concerns to something that Dow (1990) calls `Cartesian mode of thought' and Lawson (1997) calls `deductivism'. In short, this mode of thought sees theories only as logically derived series of propositions8. Moreover, those propositions are interpreted as entities apt for formalization. The next short step is supposing that, since logical structures have truth value inter-subjectively demonstrable, they are the only valid and sound theorizing in any science, 7 For a sad report on how the failures of economist's models of efficient markets and dynamic stochastic general equilibrium are being received by their supporters, see Cassidy (2010) and Cohen (2009). As a matter of fact, some mainstream economists are simply losing their temper. In a reply to Krugman (2009), Cochrane (2009) defends his own stance as follows: "Imagine this weren't economics for a moment. Imagine this were a respected scientist turned popular writer, who says, most basically, that everything everyone has done in his field since the mid-1960s is a complete waste of time. Everything that fills its academic journals, is taught in its PhD programs, presented at its conferences, summarized in its graduate textbooks, and rewarded with the accolades a profession can bestow, including multiple Nobel prizes, is totally wrong. Instead, he calls for a return to the eternal verities of a rather convoluted book written in the 1930s, as taught to our author in his undergraduate introductory courses. If a scientist, he might be a global-warming skeptic, an AIDS-HIV disbeliever, a stalwart that maybe continents don't move after all, or that smoking isn't that bad for you really." 8 This characterization is more apt in Dow's case than Lawson's, as it is doubtful whether mainstream economic methodology is empiricist or axiomatic (Viskovatoff, 1998). Lawson's account of deductivism is in terms of the PopperHempel hypothetical-deductive model of explanation which is empiricist (while mainstream is not). Volume III Issue 1 (5) Summer 2012 economics included. And since formal structures are content less9, this presupposition also amounts (even unwillingly) to sacrifice relevance for rigor, elegance and precision, for practical implications. But, if there is a problem with formalism in economics, what exactly is the problem? How could we come to such a state? Can we do any better? To begin with, formalism is a complex term, interwoven with mathematization, axiomatization and model-building. Following Chick (1998, p. 1860) ­ who in turn follows Woo (1986, p. 20, n. 1) ­ our focus will be on axiomatization and model-building as forms ­ syntactical and semantically, respectively ­ of formalism. Chick, once again, helps to understand each of them: The axiomatic approach and less rigorous mathematical models have certain symmetry. In the first, one starts with "self-evident" axioms, applies the deductive method using agreed rules of logic and, providing one's logic is correct, arrives at demonstrable truths. Mathematical modeling is more relaxed and less ambitious: assumptions need not be "self-evident"; thus there is some scope for the theorist's judgment, and that judgment may be questioned (the "realism of assumptions" debate). In both cases, transformations are then made following agreed rules, and the conclusions follow as long as the rules have been obeyed. This procedural homology allows one to order one's thoughts into points about the issue of the appropriate starting point of analysis, precision, and the biases inherent in conventional models (Chick, 1998). This passage has many points that are worthwhile to note. Axiomatic and model-building both require an appropriate translation of empirical objects of interest into their formal counterparts. This is made by the modeler "judgment"10. Models apparently also meet their user's anxiety for "precision" and "certainty", giving logical consistency to reasoning based on a model. They are used also to promote agreement on a given issue, by supposing that it is correctly described in the model. But note that benefit is gained only at the syntactical level: models are also inherently interpretative, semantically such that their elements are debatable, and their assumptions can be questioned. So, the problem with formalism, in economics or elsewhere, is the misunderstanding that precision and consistency are completely different from validity, let alone practical implications ­ and giving to the formers the ultimate worth. This is enough to point out that models can be certainly valuable and important, but must be carefully used. Dow (2008) gives examples of how the `framing' of a question in a formal model can hinder further understanding of or, worse, distort the object of inquiry. Models of asset-pricing supposing equilibrium `as the end-state of market processes' are a case in point; another is `new' behavioral economics: `While there is reference in behavioral economics to social framing, as in the conditioning of choice by social norms, there is little exploration of how it arises, although sociology might well have provided insights. Because of the axiomatic focus on atomic individuals, the influence of society is limited to the introduction of social norms as exogenous constraints on rational individual behavior, without explanation for the emergence of these norms or the reasons that rational individuals accept them.' In a similar vein, Blaug (2002) asserts that, despite using higher techniques, `it is difficult to see how the new economic geography illuminates the locational aspects of economic activity any better than the old economic geography.' Several commentators find Milton Friedman's 1953 essay on `The Methodology of Positive Economics' the prime source of formalism in economics11, nevertheless statements in contrary in this very piece (Friedman 9 "While there are different formalist programs, the unifying principle is self-contained rule-following, by which to construct formal languages and deductive systems that are independent of content"(Chick, 1998). 10 The mathematician Christian Henning is in full accordance: `Mathematical modeling always requires the interpretation of elements of the formal mathematical domain in terms of (personal or social, non-mathematical) reality. There is no formal way to check whether such interpretations are `true', and the mathematical truth of theorems applied to such models does not warrant claims of `objective truth' concerning the modeled reality.' (Henning, 2010, p. 46) 11 See Chick (1998), Blaug (2002), Lawson (2009), Hodgson (2009), Dow (2008), and especially Hands (2009). 1953), in other places (Friedman, 1999), and in the most pronounced influence on others, like von Newman, Morgenstern, Arrow and Debreu (Blaug 2002; 2003).12 Take the following, rightly considered the most important (and controversial) methodological statement of the essay: `In so far as a theory can be said to have "assumptions" at all, and in so far as their "realism" can be judged independently of the validity of predictions, the relation between the significance of a theory and the "realism" of its "assumptions" is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have "assumptions" that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense). The reason is simple. A hypothesis is important if it "explains" much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained.' (Friedman, 1953, our italics) A footnote attached to this passage, reads: `the converse of the proposition does not of course hold: assumptions that are unrealistic (in this sense) do not guarantee a significant theory'. Why Friedman is so important to the formalization of economics? Based on this only passage, almost every writer finds Friedman `licensing' the free use of unrealistic assumptions while constructing economic models. In the context of the global financial crisis, his fingerprints are found in sanctioning models which contain assumptions of substantive rationality, efficient markets, dynamic stochastic general equilibrium, and so on. No matter what did Friedman think on economic theory and practice, his essay cried louder. A supporter of Friedman methodological statements evaluated its far-reaching consequences this way: `working economists look for heuristics that orient them in the fruitful direction and also make them feel that their work is scientific. When seeking fruitful heuristics, coherence and philosophical sophistication are not necessarily the dominant considerations. Crude, intuitive notions may be perfectly adequate to point an economist in the right direction' (Mayer, 1993, our italics). Interestingly, in spite of so much ink spent with his essay, Friedman never disavowed any comment, whether in support or in attack of it13. Thus, the way was freed for economists to pursue any kind of assumption, no matter how `wildly inaccurate' it may be.14 12 Backhouse and Medema (2009) find also an influence of Lionel Robbins on the path towards formalization, once his definition of economics as allocation of scarce resources was seen by mathematical economists (mostly associated to the Cowles Commission) as easier (than Marshall's?) to formalize. 13 A conference celebrating the fiftieth anniversary of Friedman's essay was organized by the Erasmus Institute of Philosophy and Economics, in 2003. In the published book of this conference, Milton Friedman was invited to write the `Final Word', in 2004. Here is Mäki's (2009) comment: "To my knowledge, this is the first time that he has publicly spelled out his views about what others have written about his essay, but unsurprisingly perhaps, he keeps his statement very general and polite (while in private correspondence and conversations, he has been active in reacting to various criticisms and suggestions in more substantive ways). He had decided to stick to his old private rule according to which he will let the essay live its own life. It remains a challenge to the rest of us to live our academic lives together with the methodological essay that he left behind." 14 Moreover, when Friedman (1953) delimitates the domain of validity of theories according to his methodology, any theory/hypothesis is made unassailable: "Viewed as a body of substantive hypotheses, theory is to be judged by its predictive power for the class of phenomena which it is intended to `explain'." Lawson (1992) takes this "class of phenomena" to mean posited conditions of closure, thereby stable conjunction of events can be obtained. In a similar vein, Mongin (1987) asserts that being so vaguely stated this domain of applicability "corresponds, in a circular reasoning, to simply exclude the known falsifiers of that theory." It would be a short step to treating economics as a kind of intellectual game played for its own sake. All in all, we find that much of this explain the alluring of Friedman's essay for working economists ­ a rhetorical success gained at the expense of methodological coherence. However, limitations of space prevents a detailed methodological treatment of these issues, but see Nagel (1963); Brunner (1969); Musgrave (1981); Caldwell (1982); Mäki (1986, 1992b); and Lawson (1992), for experts critiques of Friedman. Volume III Issue 1 (5) Summer 2012 At this point it is important to note that Hands (2009) in fact denies any responsibility of Friedman-the-man for formalism in economics. And he points to his just mentioned warns against excessively `tautological' methods of analysis. However this does not mean that economists, when they need to make their methodological allegiances explicit, refuse to take comfort in Friedman's essay and feel themselves good scientists. We think they do take comfort in it, indeed. Of course, methodological strictures do not run in a vacuum. Hodgson (2009), for example, provides a range of socio-cultural factors accounting for the winning of formalism, like the changing system of university teaching and research ­ towards more and more specialization and quantification, the downplaying of `big questions' concerning society and the ultimate aims of scientific endeavor, and the `publish-or-perish' pressure. Outside universities, market individualism and the cult of (quantifiable) performance has been in line with these developments. But one should not imply, from the critique of formalism sketched above, an utter rejection of mathematical methods in economics. A more tempered stance was recommended by Dow (1995) drawing on Keynes's remarks on the use of mathematics in economics. The key point is Keynes's turning the focus away from the dichotomy use/do not use, to situations in which mathematical modeling is appropriate. These conditions can be briefly stated: (i) when the assumption of constant structure is reasonable for the subject at hand; (ii) when the object of theorizing does not include significant non-quantifiable elements; and (iii) when variables are commensurable. There is also conditions for using formal reasoning, independent of quantification: (iv) that the structure being analyzed can reasonably be represented as constant, such that the variables can be represented as independent, or, if not constant, that interdependence can be expressed deterministically; (v) that all relevant factors can in practice be expressed formally (the danger with giving priority to mathematization is that the range of relevance is limited to those factors which can, given current capabilities, be expressed formally); and (vi) that the internal logic of the mathematical model is sufficient for persuasion. That is, the words employed in presenting mathematical argument themselves carry moral authority. Summing up, the more constant the structure of interest and the more it can be expressed formally, the more confident can one be of properly using formal models. However, this does not exhaust the possible uses of formal models15. Henning (2010) lists the following ones: (1) to improve mutual understanding; (2) to support agreement; (3) to reduce complexity; (4) for prediction; (5) to support decision; (6) to explore different (quantifiable) scenarios; (7) to explore the implications of the model (8) to guide observations and support learning; (9) to lend beauty and elegance to theories. It is apparent that Keynes's concerns regard the purposes (4)-(6) and (8), whereas the method of idealization (Mäki, 1992a; Nowak, 1989) regards purpose (3), and `conceptual exploration' (Hausman, 1992) regards purposes (7)-(9). Purposes (1) and (2) are uncontroversial16. Sugden (2002) offers a different view of models. Analyzing Schelling's segregation model and Akerlof's "market for lemons", he notes that these models do not fit in any of above conditions or uses. Akerlof's model, for example, does not predict the price for almost new cars. Nor Schelling's model predicts any behavior of racial discrimination in industrial cities. Thus, they are not concerned with prediction or control. Sugden asserts that these models can be interpreted as `conceptual exploration', but that is not all about them. They are constructed as counter-examples, counterfactuals, to shed light in some unperceived stretch of reality, likely to explain real world phenomena. Models do this job caricaturing, exaggerating, deforming some feature, isolating some putative causal factor, but keeping correspondence with reality (pp. 114-117). There is an increasing literature on models, their relation to reality and their construction. Here we can only redirect the interested reader to it. See the papers included in the Part III of Mäki (2002), in Morgan and Morrison (1999) and in a rather recent issue of Erkenntnis (January 2009). 16 Suppes (1968) argues for the use of formalism in science, but considers only purposes (1)-(4) and (7). This interpretation accepts Mäki's (1992a; 2005) vision of models as (idealized) "thought experiments", but, in Sugden's (2002) words: if a thought experiment is to tell us anything about the real world (rather than merely about the structure of our own thoughts), our reasoning must in some way replicate the workings of the world. For example, think how a structural engineer might use a theoretical model to test the strength of a new design. This kind of modeling is possible in engineering because the theory which describes the general properties of the relevant class of structures is already known, even though its implications for the new structure are not. Provided the predictions of the general theory are true, the engineer's thought experiment replicates a physical experiment that could have been carried out. On this interpretation, then, a model explains reality by virtue of the truth of the assumptions that it makes about the causal factors it has isolated. Therefore, models are devices to think about real world phenomena; its validity depends on what we know about the real word and if the workings of causal factors cohere with it. Models are deductive devices and we fill the gap between the model world and the real world by making inductive inferences from the world of the model to the real world. If a model is genuinely to tell us something, however limited, about the real world, it cannot be just a description of a self-contained imaginary world. And yet theoretical models in economics often are descriptions of self-contained and imaginary worlds. These worlds have not been formed merely by abstracting key features from the real world; in important respects, they have been constructed by their authors (Sugden, 2002). In sum, it seems that there are good reasons to require realisticism of models. Although no model is, perfectly realistic (`the-whole-truth'), our acceptance of them depends on their realistically picturing the workings of some isolated causal factor (`nothing but the truth'). That is, they must correspond to what we do know about the real world. Therefore, it would be an unjustified leap of faith to suppose that models which are unrealistic in both senses could, nevertheless, illuminate phenomena of the world we live in. However, they could function as heuristic devices or might be serviceable for `conceptual explorations'. This point leads us back to the issue of identifying real causal factors in a hidden, intransitive layer of reality ­ which we deal by discussing the possible ways to follow, in the near future. 4.3. Proposing Alternative Modes of Thought in the Aftermath of the Crisis Since formalism is a process which exhibit, according to Hodgson, path-dependency and positive feedbacks, it would be naïve to expect its instant abandoning. Yet, while some authors, like Colander et al. (2009), Keen (2009), Kirman (2009), or even Blaug (2002), propose a way out through looking for better, more empirically-driven models (e.g. complexity theory, experimental and behavioral economics), Lawson considers any adjust in the economist's toolkit unhelpful. It is clear that the recent crisis situation (like almost any social situation) is something that needs to be understood rather than modeled... [I]t seems overly heroic to suppose that in order to capture the sorts of developments that occurred, all that is required of modern academic economics is a different type of mathematics, or internal `theoretical' adjustments like the treating of a model's still isolated atoms as heterogeneous or as forming independent expectations; or focusing on the possibility of multiplicity and evolution of equilibria; or hoping that cointegrated vector autoregression (VAR) models will uncover robust structures within a set of data, and so forth. It is apparent that the legitimate and feasible goal of economic analysis is not to attempt to mathematically model and perhaps thereby predict crises and such like, but to understand the ever emerging relational structures and mechanisms that render them more or less feasible or likely. Amongst other things, this requires an account of the background conditions against which ongoing developments are taking place. In the current context, this includes understanding how the credit expansion triggered by liberalized financial markets set the conditions for the current situation, and the assortment of developments and mechanisms by which it has come about (Lawson, 2009, pp. 774-775). Volume III Issue 1 (5) Summer 2012 From the previous two sections it is easy to see why Lawson takes such a stark position. Mathematical modeling amounts to suppose ontology of closed systems. Reality, as we see it, is in contrary, an open system. Therefore, formalism would be ex definitione inappropriate for studying processes in the real world. As others (e.g., Hodgson, 2006; Chick and Dow, 2005; Mearman, 2002) have pointed out, Lawson runs in difficulty here. Recall that we leave an open question above: how to identify causal factors hidden in the deep layer of reality? Lawson's (1997, chap. 15) answer is: by examining contrastive pattern of events or demi-regularities (equivalent to Nicholas Kaldor's stylized facts). Such events are `rough and ready', not strict, semi regularities, etc. Thus, an example from Lawson himself will help to understand the point and its problem. Take the pattern of productivity growth in the British manufacturing sector in the twentieth century. It is inferior to (otherwise similar) advanced countries. That is a contrastive demi-regularity. We can abduct a cause to it in a non-empirical domain (e.g. the British system of labor relations), and we can corroborate or revise it with further research, always giving prime concern to the ontology of the object. This is a research conducted in an open system? No, because it isolated as negligible, or temporarily `out of focus', many other facts as worth of being considered `causes' as the isolated one. Thus, demi-regularities are partial closures, and for two reasons: (i) it is impossible to take all the relevant facts at once; theorizing is necessarily to discriminate and therefore to exclude some aspects of reality from our model world; and (ii) as Chick and Dow (2005) argue at length, the distinction between open and closed systems is not just one of on/off, as Lawson has lead us to belief, but is more nuanced. They identify eight conditions for a system to be open and other eight conditions to be closed. It requires satisfying any one of the former to be open, but all of the latter to be closed. Moreover, `complete openness is incompatible with a system remaining recognizable as a system.'(p. 367) So it is important to have in mind that when Lawson insist on the pointlessness of modeling work, we would assume he is reflecting Keynes's concerns on mathematical modeling, which dwelled mostly with stability of, and prediction upon (quantifiable) data. Closeness is often partial and this feature provides scope to discuss meanings, aims, and assumptions of the models, including its ontological commitments. The problem, as we see it, is not the use of models per se, but what are the elements, the method and the judgment made in its design. But these statements do not mean that we are enthusiast of the new assortment of modeling techniques, such as complexity theory, behavioral economics, evolutionary game theory, and so on. Along similar lines of Lawson's critique (see the previous quotation) of Colander et al. (2010), Hodgson (2009) and Dow (2008) also cast doubts on these new techniques. And for a fundamental reason: it is a mirage, a Sisyphean task, to look for models that `fit' better the data of recent financial turmoil. Mathematical modeling is inherently unhelpful to deal with stuff that makes for strong uncertainty, such as innovation, inexistent information, coordination of agents (Hahn, 1984) and animal spirits. New modelers seem to let this uncomfortable feature pass in oblivion. Yet, we have already paid a high price for placing prediction above understanding. If our assessment is valid, there are strengths and weakness in both positions. So, could we do better? Our answer would be in line with two pluralist statements. Twenty-five years ago, the sociologist Etzioni (1985) proposed "a medical model" for economics, which consists in making use of `findings from a variety of basic sciences', including sociology, political science, environmental science, psychology, etc., aiming at transcending the rational economic man, but `without reverting to a much less analytical science, to the way [nineteenth century] political economy was'. Similarly, Chick (1998, p. 1868) says: `I hope that I have argued persuasively that the role of formalism is to be precise and rigorous where that is possible, and that other modes of analysis exist as valid and valuable complements. Formalism is fine, but it must know its place'. This way, we hope, it will be possible transforming economics into a more realistic and useful science. Concluding Remarks The recent financial crisis gives an opportunity for reflection on the foundations of economic theory and the practices resting upon it. Despite some factors which could have been avoided ­ such as excessive reliance on the self-correction properties of markets, on rating agencies, on the self-regulation capacity of market participants, on excessive freedom, or even the cult, of market, seen as guardian of growth and entrepreneurship, and on the damaging effects of believing in normalcy of self-seeking behavior ­ we think this episode brings with it deeper lessons. At the practical level, that of norms, regulations and operation of markets, there is a need for an increase and change in regulation and incentives for many of the most important market players (Volcker, 2008; Stiglitz, 2009). We described the roots of the crisis and the real causes which finally started it. We also presented a quite detailed explanation of the four major issues, not mutually exclusives, which brought about the crisis, following Krugman, and Wells (2010): a) low interest rates, mainly by the Federal Reserve among many others Central Banks, after the 2001 recession; b) the so-called global savings glut; c) the disguise of risk by financial institutions, rating agencies and models used by these major actors, and appalling failures in the rewarding system for many of the agents working in the financial markets; and d) government programs which would have created moral hazard. At the theoretical level, our paper echoes a host of non-orthodox economists who urge for a change in the foundations of economic theory (Dymski, 2010). The dominance of the New Keynesian thought, and its twin conceptions of (systemic) equilibrium and (representative agent and) substantive rationality (alas, conceptions `imported' from New Classical economics), are dangerously fragile and even damaging in episodes like this crisis. How can one explain the volatility of asset prices, once one assumes that markets are in continuous equilibrium through time, in a random process? Moreover, how can one sustain that this macro equilibrium emerges from optimizing decisions of agents with perfect knowledge, not only about economic fundamentals, but even about the dynamics of markets, such that they do not commit systematic errors? In other words, these perplexities clearly point out that these models are overly unrealistic in the sense defined in this paper, namely, that a model validity depends on what we know about real economic systems, rather than on dogmas of competitive (and thus efficient) markets. Orthodox economists certainly would explain the crisis by failures in models of evaluation of risks and in predictions provided by them. They would blame governments for their ubiquitous failures. They would also complain that bailouts could jeopardize public belief in market systems (or even in `free societies') by hindering market discipline (i.e. bankruptcy). They will keep on seeking more sophisticated models to provide previsions "fitting" better the data. And they will keep on preaching about the virtues of markets and the sinfulness of regulators (Acemoglu, 2009). From these quarters one should have low expectations of transforming economics because of what Keen (2009) calls `inertia of the immovable object of the economic belief'. Thus, the orthodox lessons from the crisis oscillate between recitation of old sermons and marketing of new techniques. We shall not discuss ­ we would not even dare ­ how changes in the scientific community's beliefs will take place. But economic methodology can be helpful to assess arguments for change the economist toolkit. The economists from who we have drawn upon in this paper hold converging views that failures of orthodox economic theories can be tracked down to methodological misunderstandings, though methodology is seldom explicitly discussed by those theories. And that is why the influence of Friedman's essay plays such an important role in our argument. Despite the perception of Friedman as a foe by formalist revolutionaries, or Friedman's admonitions on the importance of empirical testing of theories, `once the assumption do not matter, the cat was out of the methodological bag, the profession was free to go speeding down the formalist road'(Hands 2009). Assumptions of DSGE, efficient markets, representative agents, etc., simply do not matter, only its empirical predictive implications. During the booms, reality seems to authorize this kind of presumption. Volume III Issue 1 (5) Summer 2012 Moreover, `it is all very well to have economic theory dominated by a school of thought with an innate faith in the stability of markets when those markets are forever gaining ­ whether by growth in the physical economy, or via rising prices in the asset markets. In those circumstances, [heterodox] academic economists can rail about the logical inconsistencies in mainstream economics all they want: they will be, and were, ignored by government, the business community, and most of the public, because their concerns don't appear to matter.'(Keen, 2009) The methodological approach endorsed here, that of critical realism, puts forthright emphasis on the importance of considering the ontology of objects under scientific economic investigation. It argues for considering the nature of objects of interest for economists, like households, firms, markets, production, distribution, trade, money, etc., as they really are in the world we live in, rather than as they could be in an idealized world model. Mäki (1992a) could make an objection to that claim, since by defending realisticness we are, in fact, restraining our view to `common-sense realism' (as opposite to `scientific realism' which contains non-observable entities). However, as we have seen, economists of different persuasions would claim that model credibility is not divorced from what we know about the real word, the world existing out of the model. This approach is, notwithstanding, skeptical about the capability of new formal models to solve the theoretical problems we are faced, even though their ontological compromises are richer than the orthodox ones. And that is so because: (i) a theory have to be translated into a formal language to be a model; in such a translation problems are "stripped out" of most of its non-formalisable aspects; and (ii) creativity and surprise are difficult to be modeled. It is clear enough that computer simulations, for example, depend on the instructions on how to ascribe/change probability distributions over results, according to rules defined from the programmer. Thus, although they are important and superior to overly simplified worlds of neoclassical models, those models hardly can improve our knowledge of social and economic reality where decision-taking under uncertainty is part of the ontology. Yet those methods need not be abandoned. They can provide heuristic frames for better theories, function as pedagogical devices and, in some cases, give insights on counterfactuals (Sugden, 2002). But they really must be very carefully handled. And they are very limited tools for prediction, as Keynes said long ago. That is, in our view, the point of many warnings from Hodgson and Lawson. At last, it seems that the depth and length of the crisis was not enough to force economists to take this warnings seriously, paradoxically as a consequence of the success of the very heterodox policies followed by many governments (Minsky, 1982; 1986). Anyway, economic theory has nothing to lose in taking ontological and methodological issues seriously. It is past time to shake off the old prejudice of Lord Kelvin, and embrace less formalism in doing economics. If this path is not chosen, the dismal science may lose by persisting in their `physics envy' and cyclical recantations when some `past masters' are needed to be rescued from the dustbin. That is to say, by not doing that a large part of economics may, in due course, be doomed to irrelevance.

Journal

Journal of Advanced Studies in Financede Gruyter

Published: Jun 1, 2012

There are no references for this article.