In my first-ever blog post (apart from a Hello World! announcement), I commented on the fact that, whereas trade and current account deficits were big news in Australia, US papers buried them in the back pages. At least in the online edition of the New York Times, this is no longer the case. The latest US trade deficit ($58.3 billion in January) is front-page news.
Despite this catch-up, it’s still true that anyone wanting coverage of economic issues in the US would do far better to read blogs than to follow either the NY Times or the WSJ, and no other mainstream media even come close. It isn’t even true, as it is in other cases, that bloggers need the established media to get the facts on which they can then comment. The NY Times story linked above is basically a rewrite of the Bureau of Economic Analysis press release which you can get by automatic email if you want.
The competition is much tougher in Australia. Media coverage of economic issues is better, the number of economist-bloggers is smaller and quite a few of us play both sides of the street anyway.
I’ve been sitting on this great post about reforms to US bankruptcy laws and how they fit into the general pattern of risk being shifted from business to workers and to ordinary people in general. But I waited too long and Paul Krugman’s already written it. So go and read his piece, and then, if you want, you can look at the things I was going to write that Krugman hasn’t said already.
First, if you’re looking for reading on this general topic, let me recommend “When All Else Fails: Government as the Ultimate Risk Manager” by David A. Moss, which I reviewed here. Moss shows how both bankruptcy and limited liability were (correctly) viewed as significant departures from laissez-faire when they were introduced in the 19th century. Of course, there’s no hint that the sacred status of limited liability is going to be challenged any time soon.
Second, given the rising trend in bankruptcy, this is going to affect a lot of people, quite possibly most people, at some time. Currently, more people go bankrupt than get divorced every year and, although the number has declined marginally with the economic recovery, the underlying trend is clearly upward. The proposed reforms are unlikely to change this. Although the bill will make bankruptcy a less attractive option for people who are already in difficulty, this demand-side effect will be more than offset by the increased willingness of credit card companies and other lenders to lend to people with precarious repayment capacity.
Finally, while Krugman is probably right in describing the target of the reformers as a system of debt peonage, my long exposure to Dickens (and more recently to Patrick O’Brian) leads me to think that the large and powerful incarceration lobby might get in on the act here - anyone for debtors’ prison?
In the weekend edition of the Australian Financial Review (reproduced here), Justin Wolfers writes about a betting market on the Iraqi election turnout, run by the Irish betting exchange Tradesports. The bet turned on whether turnout would exceed 8 million and was roughly even money before voting began. The price of the contract rose sharply on early reports of turnouts over 70 per cent, then fell back to around even money when it became clear these reports had little basis. The final official turnout was about 8.4 million.
Readers will recall that something very similar happened in the US election when early exit polls favored Kerry. Modifying an old aphorism to say that “two striking observations constitute a stylised fact”, I think we can now say pretty safely that political betting markets display the wisdom of crowds who read blogs.
In economic terms, we need to look at the implications for the efficient markets hypothesis, which comes in various levels of strength. The weak form, that you can’t predict prices on the basis of their own past movements, is well confirmed, and not of much interest here. The semi-strong form is that markets make the best possible estimate, given available public information. This, I think, is still open for debate. Obviously markets react to the news, but in these two instances they appear to have over-reacted. So, it seems likely that in a market with new information arriving continuously, we would replicate the stockmarket finding of excess volatility.
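For concreteness, here’s what a minimal weak-form check looks like: estimate the lag-1 autocorrelation of price changes and compare it to the sampling band around zero. The sketch below uses synthetic random-walk data as a stand-in for an actual betting-market price series.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic contract prices: a pure random walk, consistent with the
# weak form. A real test would use an actual price series instead.
prices = 100 + np.cumsum(rng.normal(0, 1, size=1000))

# Lag-1 autocorrelation of price changes: near zero under the weak form.
changes = np.diff(prices)
autocorr = np.corrcoef(changes[:-1], changes[1:])[0, 1]

# Rough 95% band around zero for a series of this length.
band = 2 / np.sqrt(len(changes))
print(f"lag-1 autocorrelation: {autocorr:+.3f} (null band: +/-{band:.3f})")
```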
Finally, there’s the strong form of the hypothesis, which is that markets make the best use of all available information, public or private. This has clearly failed, or else been shown to be irrelevant. Either there was no information anywhere to suggest that the early reports were wrong, in which case the strong form is irrelevant in these cases, or there was useful private information which markets failed to incorporate in prices1, in which case it has failed.
Now, let’s look back at the most controversial proposal for use of betting markets, the idea of ‘terrorism futures’. Since most of the interesting information here is private (the public gets color-coded alerts and that’s about it), the claim that a market in terrorism futures would provide useful information depends on the strong form of the efficient markets hypothesis. The (stylised) fact is that strong form efficiency doesn’t hold in political markets, and therefore that terrorism futures would provide no useful information.
1 In a strict sense, the private information obviously existed. People knew whether and how they had voted, and a perfectly aggregating information market should have been perfectly informed about the votes that had been cast, and well-informed about votes that people intended to cast.
A striking development in the US economy in the last few years has been the growth in adjustable rate mortgages. This raises a couple of questions. First, if you’re thinking about buying a house, is it better to go for adjustable or fixed-rate? Second, what does this mean for the economy as a whole?
CT doesn’t offer personal finance advice (as yet), but I will point out that the fixed-rate mortgages on offer in the US are a much better deal than those in Australia (and, I think, elsewhere). In a standard comparison of fixed and adjustable rates, the fixed-rate contract is good for the borrower if interest rates go up, and bad if interest rates go down. The opposite is true for the lender.
But US fixed-rate mortgages are a one-way bet. If interest rates go down, you can simply take out a new loan and use it to pay off the old one. This kind of refinancing has been a major source of consumer demand in recent years, as people have increased their borrowing and used the extra funds to finance home improvements or discretionary spending. By contrast, with the fixed rate loans on offer in Australia and elsewhere, if interest rates fall and you want to repay early, you have to pay a penalty equal to the profit the lender would otherwise have made on the deal.
This is a kind of put option, and if I knew my Black-Scholes formulas as well as I should, it ought to be possible to value it (these guys have done it, but I haven’t had time to work through their paper). Looking at the relatively small interest rate difference between fixed and adjustable rate mortgages at present, you’d have to assume very low volatility in future interest rates to make the adjustable rate option the sensible choice.
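For what it’s worth, here’s a rough sketch of the kind of calculation involved, using Black’s (1976) formula to value a European put on the future mortgage rate, struck at today’s fixed rate. To be clear, this is not the method of the paper linked above: real prepayment options are American-style, amortizing and behaviourally messy, and every number below is invented for illustration.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_put(forward: float, strike: float, vol: float, expiry: float,
              discount: float) -> float:
    """Black (1976) value of a European put on a forward rate.

    Pays max(strike - rate, 0) at expiry: crudely, the right to swap
    into a cheaper loan if rates fall below today's fixed rate.
    """
    d1 = (log(forward / strike) + 0.5 * vol ** 2 * expiry) / (vol * sqrt(expiry))
    d2 = d1 - vol * sqrt(expiry)
    return discount * (strike * norm_cdf(-d2) - forward * norm_cdf(-d1))

# Invented inputs: 6% rate today, 20% lognormal rate volatility, 5 years.
value = black_put(forward=0.06, strike=0.06, vol=0.20, expiry=5.0,
                  discount=exp(-0.05 * 5.0))
print(f"option value per unit notional of the rate payoff: {value:.4f}")
```

The point of the exercise is the comparative static: the lower the volatility you assume, the less the prepayment option is worth, and the smaller the fixed-adjustable rate spread needs to be to make fixed the better deal.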
With the less favorable fixed rate options available in Australia, the choice is less clear-cut and comes down to risk allocation. If you think that most variation in interest rates is likely to come from factors uncorrelated or negatively correlated with your income (such as monetary policy), fixed-rate is best given an equal initial rate. If you think that variation in interest rates is mainly driven by inflation, and that your own income will rise in line with inflation, adjustable is more favorable.
That’s enough on personal finance. Coming to the economic implications, the growth in adjustable rate mortgages has gone hand in hand with big growth in overseas demand for mortgage-backed securities, notably from China (hat-tip: Brad Setser).
Now from the point of view of a lender, an adjustable rate loan is like a fixed holding of short-term bonds, since it is, in effect, rolled over every time the interest rate changes. So the shift to adjustable rate mortgages is in line with the general shortening of maturity of US debt. All of this, and the Korean kerfuffle early in the week, suggests that foreign investors are edging for the doors, or at least making sure they can see a clear path to the fire escape. A rollover crisis looks ever more likely.
Now that Larry Summers has begun to live up to his putative commitment to open, freewheeling inquiry by finally releasing a transcript of his infamous remarks, various people are commenting on it. Matt Yglesias says
I don’t think you can reasonably expect any given university (or corporation, or person) to singlehandedly shoulder the burden of changing a set of social expectations that’s become very well entrenched over a very long period of time. At the same time, you can’t just do nothing about it, either.
Bitch, PhD addresses this issue pretty well, as does a correspondent of Mark Kleiman’s. The main point is that the first step toward addressing what Matt properly calls “a set of social expectations that’s become very well entrenched over a very long period” is — contrary to what Summers did in his remarks — to stop treating it as a more-or-less simple result of the expression of individual preferences. Now, in other social-policy contexts, economists will jump all over you for not properly considering the incentives that shape people’s choices and smugly wheel out one-liners like “People respond to incentives, all else is commentary.” There’s a lot to that observation. But in contexts like gender and the labor market, the emphasis instead gets put on individual preferences as the mainspring of choice, rather than considering the social origins of the incentive structure.
Here is an old post of mine, written in response to something Jane Galt (aka Megan McArdle) wrote. It addresses this issue a bit, with some pointers to accessible and practical discussions of it by specialists — some of the literature that Summers just baldly ignored, or was inexcusably ignorant of. As I said back then,
Jane’s initial question — “Should we [women] stay home, or shouldn’t we? It’s a difficult question for professional women” — effectively concedes the case as lost from the get-go. It frames the problem as wholly belonging to the prospective mother. Dad has no responsibility towards his potential offspring, is not required to make any work/family tradeoffs, and indeed has so much autonomy that a woman who chooses kids over career is “taking a huge financial bet on her husband’s fidelity.” … The institutions that structure people’s career paths may have deep roots, but that’s not because they spring naturally out of the earth. Cross-national comparison shows both that there’s considerable variation in the institutionalization of child care, and that this variation can have odd origins. … [They] aren’t immutable, either. In fact, in the U.S. they’ve changed a great deal since the early 1980s … Looking at the problem this way makes one less prone to fatalism about tragic choices, wanting to have it all, and the inevitable clash of work and family. … It also has the virtue — as C. Wright Mills put it forty years ago — of letting us “grasp history and biography and the relations between the two within society,” rather than forever being stuck at the level of individual women facing insoluble work-family tradeoffs.
None of that is particularly original, by the way. It’s a well-developed perspective with plenty of empirical evidence and theoretical elaboration, and even a little bit of reading in this area would make that evident. That’s why Summers’ audience was so ticked off. In fairness to the guy, at this stage his perilous position has little to do with the remarks themselves anymore, and more to do with an attempted ouster by opponents dissatisfied with his Presidency in general.
My friend Pierre-Olivier Gourinchas has co-authored a very interesting paper with Hélène Rey called “International Financial Adjustment.” (Here’s the PDF version.) You might think that’s not a title to set the world on fire, but don’t be fooled. A more appealing — though perhaps less responsible — alternative would be something like “Dude! We can predict exchange rates!” Here’s the abstract:
The paper proposes a unified framework to study the dynamics of net foreign assets and exchange rate movements. We show that deteriorations in a country’s net exports or net foreign asset position have to be matched either by future net export growth (trade adjustment channel) or by future increases in the returns of the net foreign asset portfolio (hitherto unexplored financial adjustment channel). Using a newly constructed data set on US gross foreign positions, we find that stabilizing valuation effects contribute as much as 31% of the external adjustment. Our theory also has asset pricing implications. Deviations from trend of the ratio of net exports to net foreign assets predict net foreign asset portfolio returns one quarter to two years ahead and net exports at longer horizons. The exchange rate affects the trade balance and the valuation of net foreign assets. It is forecastable in and out of sample at one quarter and beyond. A one standard deviation decrease of the ratio of net exports to net foreign assets predicts an annualized 4% depreciation of the exchange rate over the next quarter.
Now, I am not a macroeconomist so I should leave further discussion to Daniel and John. The guts of the paper are really beyond my competence to evaluate. But this is a blog, so naturally I will carry on regardless and make three points anyway.
The first point is that the paper is interesting because their result is a real surprise — especially that bit about “forecastable in and out of sample at one quarter and beyond.” You shouldn’t be able to reliably predict rates better than a random walk, and certainly not over relatively long periods. How do Gourinchas and Rey do it? They begin with the observation that if a country’s current account balance is in deficit, it must eventually get readjusted through increased exports or by depreciation of the currency. They argue that the “sharp increase in gross cross-holdings of foreign assets and liabilities” between countries that we’ve seen over the past twenty years introduces a significant pathway by which rebalancing can happen without changes in trade: “Put simply, a fall in today’s net exports or in today’s net external asset position has to be matched either by future net export growth or by future increases in the returns of the net foreign asset portfolio … The budget constraint implies that today’s current external imbalances must predict, either future export growth or future movements in returns of the net foreign asset portfolio, or both.”
They go on to derive a quantity — “the deviation from trend of the ratio of net exports to net foreign assets” — that captures movements in the external asset position. They decompose it into the bit explained by trade and the bit explained by financial adjustment. Then they collect a longitudinal dataset for the U.S. case including a measurement of this quantity and show that the financial adjustment component accounts for about a third of total readjustment, mainly in the relatively short term. An implication of the theory is that this quantity should predict future returns on a country’s portfolio of foreign assets, and this turns out to be true as well.
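My gloss on the econometrics (not their actual code): the exercise amounts to a predictive regression of next-quarter depreciation, or portfolio returns, on the detrended ratio. Here’s a sketch with synthetic data standing in for their newly constructed dataset, with the sign chosen to mimic the one they report (a low ratio predicts depreciation).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200  # quarters of synthetic data

# Stand-in for nxa: the detrended ratio of net exports to net foreign
# assets, modelled here as a persistent AR(1). Purely illustrative.
nxa = np.zeros(T)
for t in range(1, T):
    nxa[t] = 0.9 * nxa[t - 1] + rng.normal(0, 0.1)

# Next-quarter depreciation loaded on current nxa plus noise.
depreciation_next = -0.4 * nxa[:-1] + rng.normal(0, 0.2, size=T - 1)

# Predictive OLS: depreciation(t+1) on a constant and nxa(t).
X = np.column_stack([np.ones(T - 1), nxa[:-1]])
beta, *_ = np.linalg.lstsq(X, depreciation_next, rcond=None)
print(f"predictive slope on nxa: {beta[1]:+.3f}")
```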
Now, the rate of return on assets like this is determined partly by what they earn abroad and partly by the exchange rate. The fact that the financial component of the rebalancing itself works through the mechanism of depreciation leads Gourinchas and Rey to “raise an obvious and tantalizing question: could it be that the predictability in the dollar return on gross assets arises from predictability in the exchange rate?” Again, the answer is yes:
Overall, these results are striking. Traditional models of exchange rate determination fare particularly badly at the quarterly-yearly frequencies. Our approach … finds predictability at these horizons. … a large ratio of net exports to net foreign assets predicts a subsequent appreciation of the dollar, which generates a capital loss on foreign assets.
Et voilà, a theoretically grounded, empirically applicable technique for predicting movements in exchange rates.
My second point is about what happens next, assuming that Gourinchas and Rey are right. As they say themselves in their conclusion,
The challenge consists in constructing models with fully-fledged optimizing behavior compatible with the patterns we have uncovered in the data. A natural question arises as to why the rest of the world would finance the US current account deficit and hold US assets, knowing that those assets will under-perform. In the absence of such [a] model, one should be cautious about any policy seeking to exploit the valuation channel since to operate, it requires that foreigners be willing to accumulate further holdings of (depreciating) dollar denominated assets.
If rational agents knew about the relationship between external imbalances, foreign asset return and exchange rates that the paper describes, then we wouldn’t expect to find the patterns in the data that we do. If the rest of the world knew their assets were going to under-perform, they wouldn’t have made that particular investment. Now that this relationship has been uncovered, we might expect this gap to close as the knowledge is incorporated into investment decisions. It’s in the nature of economics to prove itself both true and false, in this way. True, because the specific bit of pricing technology uncovered by the paper is now grist for the decision models and pricing mechanisms used to make investment decisions (“Our theory also has asset pricing implications”, as the abstract says). So the rationality of market agents will more closely approximate the ideal state that theory says it ought to. But false, too, in the sense that the empirical relationship demonstrated in the paper might disappear as a result, and everything gravitate back to an efficient-market random walk again.
My last point follows naturally from the previous one: If Pierre is my friend, then why didn’t he let me know about all of this, oh, say, 18 months or so ago? Then I could have formed a CT investment consortium managed (for a small fee) by John and Daniel. This could have taken care of CT’s hosting bills for ever and ever, and I could be writing this from a private island in the Pacific Ocean instead of a Starbucks in SeaTac airport.
Rising property values across the region have put the squeeze on taxpayers, but the bite has been especially acute for owners of Habitat for Humanity homes in Northern Virginia. In some areas, their homes have doubled and tripled in value in the past three years. At least a dozen of the 47 Habitat homeowners in Northern Virginia pay more in property taxes and insurance than they do to pay off their mortgages, according to Karen Cleveland, executive director of the Northern Virginia arm of the housing nonprofit group. It is part of an international group that builds homes with volunteers and sells them to low-income buyers.

Now, you’re probably wondering why they don’t just sell. Here’s why:
In recent months, Habitat for Humanity of Northern Virginia has launched a campaign to persuade localities to provide tax relief for their homeowners. It is arguing that the Habitat homes shouldn’t be assessed at market rates because deed restrictions prevent their owners from selling the homes for profit or getting home equity loans until the 20-year mortgages are paid. If Habitat homeowners sell their homes before 20 years are up, they must sell them back to Habitat for the amount they cost — $80,000 to $120,000 in most cases, Cleveland said, which is the restricted value.

Perhaps someone can help me out here: why put this onerous restriction on the deed? I can sort of see that the nature of the charitable donation would be altered if it essentially became a cash gift rather than a house. And I suppose it makes some sense to restrict immediate sale. But 20 years? This seems to deprive the recipients of one of the main benefits of homeownership: capital appreciation. What would be wrong with letting this woman sell and buy another, cheaper house elsewhere in the area, rather than petitioning the local government for tax abatement? She and her family would be just as “housed.” On the other hand, she would seem to have a good case that her house is not actually worth the assessed price, since she can’t sell it for that amount. Thoughts?
I’ve mentioned Peter Griffiths and his book “The Economist’s Tale” before, and I’m going to mention it again in future, because it’s important. The book is a detailed case study of what Griffiths did when he was working for the government of Sierra Leone during a period when the World Bank suddenly got free-market religion. It’s a fantastic read, and by reading it you will get two valuable pieces of information: you’ll understand what economic consultants (those people whose jobs are advertised in the front bits of the Economist) actually do for a living, and you’ll understand the exact why and wherefore of what it is that people are complaining about when they protest against the Bretton Woods institutions and the Washington Consensus. Griffiths isn’t an “anti” in the normal sense; he makes clear at a number of points in the book that he’s actually in favour of free market reforms as the long term solution to a lot of development problems. But he is someone with very detailed, on-the-ground experience of the problem that Joe Stiglitz identified: the regrettable state of affairs that lets poor countries’ governments get bullied around by “third-rate students from first-rate universities”, with often disastrous results.
Below the fold is an article written by Peter, summarising some of the themes of the book; there are lots of good bits (including my favourite one-sentence summary of the moral dilemma of the economics profession, on which I will post anon) which aren’t mentioned there, so reading the article isn’t a substitute for buying the book. The book can be bought from Peter’s website; link above. Non-economists are not excused this one; if you can understand a Grisham novel you can understand this. It’s pacey, it’s exciting and it all really happened. It even has a happy ending (of a sort; given that the setting is the country of Sierra Leone, a genuinely happy ending was never on the cards).
(Full disclosure: I have no commercial or personal connection with Peter Griffiths other than through sending him an email to get this article. I bought the book with my own cash after seeing it advertised on the Zed Books website).
There is a lot of money to be made from a good famine
Peter Griffiths
One person, one economist, can get a government to change its policy. I have done it. Often. It is what I do for a living. My book, The Economist’s Tale (Zed Books), shows one case where I did it. This time, I stopped a famine.
The White Man’s Graveyard, they used to call Sierra Leone. It was the Black Man’s Graveyard when I worked there. Half the children born died of hunger and disease before they were five. Life expectancy was the lowest for any country in the world.
I hoped I could do something about it. I had come from England as a consultant to do an economic analysis of food policy in four months. I was starting from scratch, as this was one of the few countries in the world with no agricultural economics and marketing department.
Economics is about people, so I started by listening to people. First there were the courtesy visits to my employers, the Minister of Agriculture, the Permanent Secretary and the Director of Agricultural Planning. This was my attempt to find out the hidden agenda for my study, what they really wanted but would not put on paper. Then I spoke to everyone I could find working on agriculture: people in the Ministry of Agriculture, in the Ministry of Commerce, in the Central Bank; the foreign aid projects in Freetown and up country; the rice wholesalers in their dark warehouses behind the bazaar, and the market women squatting on the floor, bags of rice in front of them, bargaining with customers before measuring out a cup-full or two of rice; the farmers in the paddy fields; and the consumers in the streets and markets.
I must have spoken to well over two hundred people in the first two months. It would have been impossible in Britain, of course – it would have taken six months even to make the appointments. However, Sierra Leone’s telephone system had collapsed since independence, so nobody expected appointments. Instead I would knock on the door of a civil servant and say, “Hello! I am Peter Griffiths. I am doing a study of food policy for the Ministry of Agriculture. Can you help me please?”
Their faces showed their thoughts: “Oh no! Not another consultant. I spoke to two yesterday, three the day before. I must have spoken to a thousand over the last ten years. I tell them all the same thing, and they put it in their reports, but nothing ever happens.”
Beneath this, better hidden, was another thought, “What is this white man doing, telling us how to run our country? I have the same degrees as him. I did my master’s at Oxford. And I have worked here for ten years and I know everything about the country. What he pays for one night in the Bintumani Hotel is what I am paid in a whole year. He gets more pay in one day than I get in three years.”
But they are a polite people, and they asked me to sit down. The underlying hostility made interviewing difficult, but depth interviewing is one of my professional skills. We exchanged pleasantries, and within five minutes they were telling me the same story they told all consultants. Within twenty minutes they were telling me things they did not realize they knew – and they did indeed have a lot of experience. Within forty minutes some of them were telling me about the politics within the Ministry and between Ministries. They told me what corruption was going on. Every person I spoke to gave me a different angle, a different perspective and the inconsistencies and gaps started to show. The big picture started to come together.
But I was not interviewing just for information. I was trying to get people into the state of mind that they would read my final report when I wrote it, and read it appreciatively. I was trying to show them that I was highly intelligent and that I understood what was really going on in the country. I did this by keeping my mouth shut, listening carefully and respectfully to what they said, and writing it down.
At the same time I was collecting reports and statistics. My experience of other countries was that there should be dozens of highly relevant reports, but I could only find a handful here. The statistics were appalling. For example, there were two statistical studies on food production. They disagreed by 80% on the total area planted to rice, and by 60% on the yield per hectare even though they used the identical methodology. There were no reliable figures at all on most of the economy.
I had to resort to detective work. Making sense of the statistical and other information was rather like doing a crossword puzzle. No bit of information had any credibility until there were several cross confirmations. Even then, the credibility grew as the cross confirmations were themselves confirmed by down confirmations. Some of the key information turned out to be things like seeing Japanese rice on sale in a village market, and asking a stevedore how much rice had been unloaded from a ship.
By the end of the first month I felt I was the only person who had a broad picture of the food situation in the country, though there were still lots of gaps and loose ends. I had talked to people from the Minister down. I had talked to people who were experts in different parts of the market. I had any statistics that were available. Everything was starting to come into a coherent model.
Then, I visited the Director of Agricultural Planning for a routine chat. As I left, he handed me a paper, the minutes of a meeting between a World Bank team and the Ministry of Commerce.
I read it incredulously. It was a formal agreement that the Government of Sierra Leone would immediately stop importing food – and this in a country where Government imported half its staple food. The Government would also stop subsidizing food – when a quick look round the streets made it obvious that very few people were getting enough to eat even when food was subsidized. Government was also forbidden to keep a stockpile. Nothing I had read or heard could give any support for this, but neither the World Bank nor the Ministry of Commerce appeared to have made any effort to base their Agreement on logic or analysis. Neither of them had consulted the Ministry of Agriculture, which was responsible for food production, or me, the only person with the responsibility for examining food policy.
I sat down and did my analysis. The country was importing half its staple food, rice. People could not buy enough to live on at the subsidized price, so it could be argued that removing the present 25% subsidy would push prices beyond the means of half the population. But the reality was much worse than this. Over the last year, the leone’s value had collapsed against the dollar to a tenth of its previous level. The rice currently on sale in the markets had been bought when the leone was strong. Any new rice imports would have to be paid for with a very weak leone. This meant that any new imports would have to be sold in the markets at more than ten times today’s price in leones. But wages and salaries had not gone up, so nobody would be able to buy unsubsidized rice. And this obviously meant that no private trader would import rice that they could not sell. I visited them all to check and they told me vehemently that they would not touch rice imports with a bargepole.
The unavoidable conclusion was that when present stocks ran out, in four months’ time, there would be no imports. The country people would keep the rice they grew, so there would be no rice at all for the urban population. Starvation would start immediately. How many people would die before emergency aid could be arranged? A quarter of a million? Half a million?
I knew what was going to happen. I doubted if anybody else did. Certainly nobody else had the broad picture. If I did not act, there would certainly be a famine. Even if I did act, though, it was unlikely that I could change things.
Before I did anything, though, I tried to work out what I was up against. Why had the World Bank imposed this Agreement on Sierra Leone? Obviously because it was part of its general policy to push an extreme free market policy on the world. If an academic sitting in a university in the west makes enough unrealistic assumptions, he can prove that an economy works most efficiently when there is no government intervention, when there is a perfectly free market. A couple of maverick economists convinced Reagan and Thatcher that this was grounds for action. The action plan included getting countries to float their foreign exchange markets, get rid of subsidies, get rid of state companies and marketing boards, dismantle controls over markets and prices, and deregulate etc. etc. Some of these actions would have been very valuable as a part of a carefully planned and structured reform of a sector or industry, but as a nostrum that can be applied everywhere without thought, they were disastrous.
The organization was committed to this policy, and it put pressure on their staff to show that they were making countries adopt it. Any staff member who did not succeed in this would find that their career suffered, and they might lose their jobs. So the staff made it clear that any country that wanted a grant or loan would have to be seen to adopt this policy. They also made it clear to consultants that they would have to support this policy if they wanted to be employed by the World Bank.
Each staff member, each group was trying to get the policy implemented before the others. While there was some obscure theoretical justification for the general application of the free market, there was none for a piecemeal application, in this case applying it just to two parts of the agricultural and food sector, those that happened to be under the control of the Ministry of Commerce. It is rather as though one decided that a car would run better with a bigger engine, a different gearbox, a different suspension, and bigger wheels, and one compromised by only putting on a large right front wheel. It would run into the first lamppost.
I was frightened at the reaction World Bank staff would have if I said that they had blundered. It was odds on that I would be fired on the spot. Aid officials may be nice guys committed to the Third World, but when it comes to the crunch, they have no compunction about firing any consultant who does not support their personal ends. I had recently got into big trouble for the relatively minor sin of saying that an international aid agency’s pet project was grossly uneconomic: this meant that a desk officer would lose brownie points for not disbursing the target amount of loans. It was made clear to me that I would not work for that agency again.
The Government had done what the World Bank told them to, because they had no alternative. They were broke. Also they had seen what the international community had done to Ghana, just down the coast, when they did not toe the line. The Government had committed themselves to the Agreement, and they would not renege lightly. Like me, they did not know how World Bank officials would react if they were told that the Agreement was a disaster. It would certainly sour long-term relations between the Government and the Bank, even if, or especially if, the Bank was shown to have blundered. No, I could see that I would be an embarrassment to them.
Equally worrying was the knowledge that a lot of politicians, civil servants and marketing board officials stood to make money from a famine. Anyone who handles emergency aid in a famine situation can make enormous black market profits, at a time when people will pay all they have for enough rice to keep them alive for a month or two. Some of these people would be very upset if I spoke up. They only had to complain that I “could not get on with the locals” to get me sacked. It happens every day to aid workers who start to find out too much about corruption.
The threats were serious. Everyone in the aid business was well aware that only four years before Steve Lombard had been sacked from his FAO job in Tanzania after he prevented a famine. He was running an early warning project, and alerted the Ministry to the danger, so they could get food aid well before people started to go hungry. The Ministry sat on the information and did nothing. The food was running out and a famine situation was imminent, one where some people would make a lot of money. Steve got the information to the World Food Programme who told a very surprised President that a famine was imminent. He also leaked the information to the BBC World Service who told the Tanzanian people. Action was taken, just in time. The Tanzanians sacked Steve. FAO and the World Bank stood back and let them do it. A shocked Steve drank himself to death over the next three years.
Yes I was vulnerable. I was vulnerable because a lot of organizations had control over my future. The Minister, the Permanent Secretary, the Director, and probably a lot of other civil servants could have me sent out on the next plane. So could the World Bank, as they were paying my salary. I could not afford to upset the other aid agencies, as I was hoping to work for some of them in the future. Nor could I afford to get a reputation with the international consultancy firms as a troublemaker, because any future job with the aid agencies was likely to be through them. The only people I could afford to ignore were the people of Sierra Leone, the people who would starve.
From a career point of view, my optimum choice was to fake a backache and get flown home, leaving someone else to handle the famine when it hit.
I am squeamish though. I did act, quietly, politically and decisively. It meant putting my career on the line, though, and I have suffered for it. But I did convince the Government that the Agreement was a disaster, and they did renege on it. It was a damned near-run thing, though.
Peter Griffiths’ book on this affair, The Economist’s Tale: a consultant encounters hunger and the World Bank, is published by ZedBooks at £15.95 and can be bought online at www.griffithsspeaker.com
In his push for Social Security privatization/choice/personal accounts/abolition, George Bush is raising the prospect that, some time around 2050, Social Security will go bankrupt. This claim has been refuted quite a few times, so let me offer a different answer.
If you’re a young working-age American, don’t routinely pay your credit card balance(s) down to zero each month, and don’t have top-flight health insurance, it’s odds-on, based on recent experience1, that you’ll go bankrupt at some point.
About two million people, or one per cent of the working-age population2, go bankrupt every year, a number that has risen dramatically in recent years. So, given about 50 years in the working-age bracket3, and ignoring multiple bankruptcies, it’s almost exactly even money that a given person will go bankrupt. But, not surprisingly, the risk isn’t evenly distributed. The two major causes of personal bankruptcy are health crises and credit card debt.
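For the record, here’s the arithmetic as a sketch under the assumptions just stated (a constant 1 per cent annual rate over 50 working years): the naive sum gives the even-money figure, while treating years as independent draws gives a somewhat lower one.

```python
annual_rate = 0.01    # ~2 million bankruptcies / ~200 million working-age adults
working_years = 50

# Naive sum, ignoring multiple bankruptcies (the "even money" figure).
naive = annual_rate * working_years

# Treating each year as an independent draw instead.
independent = 1 - (1 - annual_rate) ** working_years

print(f"naive cumulative risk:      {naive:.1%}")
print(f"independent-years estimate: {independent:.1%}")
```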
Looking at credit cards, the population is about evenly divided between those who pay off their balance every month, and therefore get almost-free credit, and those who run balances, pay interest and keep the card companies in business. Assuming that the two groups are fairly stable, as appears to be the case, the risk of credit-driven bankruptcy is fairly low for the first group, and these are also likely to have safer jobs and better health insurance. In fact, one way of viewing the credit card business is that it needs to extract as much interest as possible from members of the second group before their highly probable resort to bankruptcy.
As I’ve mentioned previously, bankruptcy is now more common than divorce. But because bankruptcy has risen so fast, the number of people who’ve experienced it is much smaller than the number who’ve been divorced. The stigma that was associated with bankruptcy is already declining, and it’s now a private financial issue rather than the public disgrace it once was. By the time Social Security supposedly goes bankrupt in 2050, Uncle Sam will merely be joining the majority of his constituents.
1 Of course, if a trend can’t be sustained, it won’t be. But there’s no obvious reason why the current upward trend in bankruptcy should halt, let alone be reversed. So it seems reasonable to assume that the current rate of bankruptcy will at least be maintained in future.
2 Children can’t go bankrupt. Retired people can, but bankruptcy rates are fairly low for this group, and are likely to stay low as long as Social Security exists.
3 Normally 15-64, but for the purposes of this post, 20-69 fits a bit better.
Since Crooked Timber’s first publication in 1953, “Ask a Nineteenth-Century Whaling Expert” has consistently been one of its most popular features. We are pleased to bring you the novelist Kenneth Gardner, author of Rich Man’s Coffin.
I’m baffled at the economics of nineteenth-century whaling. In Moby-Dick, Herman Melville says that a whaling expedition would be a success if a crew of 40 men captured the oil from 40 whales in 48 months. Each whale produced about 40-50 barrels of oil. Presumably this oil had to cover the approximate costs of four years’ labor, plus the costs of operating the ship, plus a sizeable profit for the investors in these risky ventures.
How could whale-oil have been so valuable? I understand that it was scarce, that illumination is highly desirable, and apparently it smelled nice. But there were substitutes, weren’t there?
Ted B., Houston, TX
Kenneth Gardner writes:
At that time you had the same resistance to technology transition as we have in boom markets today: markets which may not be as efficient as their more technologically savvy counterparts, but are still ‘cheaper’ in the eyes of their producers in terms of the amount of time and energy required to make the transition. The best example, of course, is the abundance of crude oil and our resistance to move to alternative and more efficient natural sources. The same was true in the 19th century whaling industry.
Ironically, my example of crude oil also answers your question about the possibility of alternatives to whale oil in the 19th century. Yes, crude had been discovered. Did efficient or effective means of drilling and refining exist then? Hell no. Was there much pressure on society to develop this technology in the face of abundant whale oil? No again.
Also, you underestimate the value and amount of whale oil being harvested by overlooking a commonly overlooked fact about old whaling: The whole whale value. They sold the bones to dress makers, and ambergris, the basis of 19th century perfume, was worth more at that time per ounce than gold is today, relatively. One Sperm Whale could yield an 800 pound lump of ambergris, a glandular secretion. Most importantly, whale oil was almost in a pure form with the only refining being the boiling down and straining of the fat. There was a lot of money made on those 4-year voyages!
Lastly, the whaling that Melville wrote about in Moby-Dick was ‘open-ocean’ whaling which was tedious and dangerous, with Sperm Whale numbers dwindling by the mid-1800s. But bear in mind, that archaic yet lucrative practice was tied into the colonization of the world and the development of other lucrative trades like sneaking up on millions of easy harbor seals on foreign, exotic shores and bringing home coat furs that were worth their weight in gold. Moreover, whalers were discovering ‘shore-whaling,’ where they would go and find the calving shores of the whales and lay claim to an entire coastline and set up towns and just start reaping in the biggest female Black Whales, which were twice as big and yielded twice as much profit as the nearly extinct Sperm Whales. It was easy pickings, and that industry started roaring just as the kind of whaling that Melville did was dying. So don’t wonder if those investors were getting their money’s worth by sending out a bunch of hacks to sail the world for a few years and bring back what they could. They were bringing back tremendous wealth.
In case my framing was too cute, Kenneth Gardner is a real guy, who has written a real historical novel called Rich Man’s Coffin about an escaped slave who joins a whaling expedition. He was kind enough to write this response for me.
The Iraqi elections seem to have been about as successful as could have been hoped, and may represent the last real chance to prevent a full-scale civil war. The pre-election analysis suggests that the United Iraqi Alliance, the main Shiite coalition, will get the biggest share of the votes, but probably not an absolute majority. If so, their leaders will face two immediate choices.
The first is what to do about forming a government. The obvious choice is a coalition with Allawi. Given the power of incumbency and the fact that there was no real campaign in many areas, his group is bound to get a fair number of votes, even though it’s clearly unpopular. There’s even talk that he could re-emerge as PM.
The second choice is what to do about the Americans1. Until a couple of days ago, the UAI platform called for a timetable for US withdrawal, but this was apparently changed at the last minute. Meanwhile, the Pentagon has been talking about continuing full-scale occupation for at least two years. In view of the security situation and the obvious pressure from the Bush administration, the obvious course of action is to defer any talk of withdrawal to the indefinite future.
In my view, the obvious choices would be disastrous in both cases, and for much the same reason. Holding elections is great, but the point of democracy is that they should make a difference and that governments should act in accordance with the wishes of voters. If the election leaves Allawi in office (even as a coalition partner) and the Americans in charge, it will soon come to be seen as a pointless farce. And unless the government makes early US withdrawal a central demand, it will inevitably end up being seen, at best as a client, and at worst a creature, of the Americans. The Sunnis won’t be slow to point this out, and neither will the Sadrists, who have played a cautious game that has given them some representation in the new assembly while maintaining a public boycott of the election.
Of course, there are good reasons to be fearful about the consequences of a US withdrawal. But this is the same kind of reasoning that led to the elections being delayed until now, when they could have been held under far more favorable conditions a year ago. What reason is there to believe that another two years of occupation will leave Iraq more capable of managing its own security? And if the Iraqi government doesn’t grasp the nettle itself, there’s always the risk that the Americans will make a unilateral decision to cut and run at the worst possible moment.
1 Officially of course, it’s the multinational coalition. But with Poland and the Ukraine about to withdraw, and Blair talking about an indicative timetable for withdrawal, there’s not much left of this figleaf.
Readers of my previous post will have noticed that I don’t know much about MMORPGs. In fact, I don’t do much gaming these days, though I chewed up untold amounts of then-scarce mainframe computer time playing Adventure in the 1970s. Still, my foray into the field has left me with the kind of excitement you get the first time you wander into one of these domains and find precious jewels lying about everywhere.
I’m very interested in the implications of online communities of all kinds, and the motives that lead people to contribute to such communities. My idea du jour, playing off some thoughts of Yochai Benkler, is as follows.
There are all sorts of motives which might lead people to contribute to networked social capital, for example by participating in various aspects of blogging (making posts and comments, linking and blogrolling, improving software, various kinds of metablogging). Possible motives include altruism, self-expression, advocacy of particular political or social views, display of technical expertise, social interaction and so on. In general, these motives are complementary or at least mutually consistent. However, motives like these do not co-exist well with a profit motive.
Why is this? At a superficial level, it’s obvious that people act differently, and are expected to act differently, in the context of relationships mediated by money than in other contexts. Behavior that would be regarded favorably in a non-monetary context is regarded as foolish or even reprehensible in a monetary context.
One of the most important general differences relates to rationality and reciprocity. In a non-market context, careful calculation of costs and benefits and an insistence on exact reciprocity is generally deprecated. By contrast, in market contexts, the first rule is never to give more than you get1.
Why is it more important to observe this rule in market contexts? One reason is that markets create opportunities for systematic arbitrage that don’t apply in other contexts. In an environment where trust is taken for granted, a trader who consistently gives slightly short weight can amass substantial profits. This is much more difficult to do in ordinary social contexts. Hence, much closer monitoring is required.
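A toy calculation of the point, with invented numbers: even a very small skim, repeated at market scale, adds up in a way that the occasional one-sided favour of ordinary social life never could.

```python
# Invented numbers: a 2 per cent short weight on a high-volume trade.
skim_rate = 0.02          # fraction short-weighted on each sale
sale_value = 10.0         # dollars per transaction
sales_per_year = 50_000   # a busy stall or small exchange

annual_skim = skim_rate * sale_value * sales_per_year
print(f"annual profit from short weight: ${annual_skim:,.0f}")
```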
Similar points can be made about other motives. There are a whole range of sales tricks designed to exploit altruism, friendship, desire for self-expression and so on. Hence, to prosper in a market context, it is necessary to adopt a view that ‘business is business’, and to (consciously or otherwise) adopt a role as a participant in the market economy that is quite distinct from what might be conceived as one’s ‘real self’.
When I started thinking about MMORPGs, I was worried that they would be a counterexample for my general argument. After all, this is a field where commercial gaming companies have mostly displaced spontaneous communities, and I was aware of the fact that items were traded on eBay and elsewhere, something which I supposed, on a priori grounds, to be destructive. I knew there had been attempts to suppress such trade, but was under the mistaken impression that extra-game trading was generally accepted as a legitimate part of such games.
I posted my argument anyway, and was led to a treasure trove of discussion of these topics at Terra Nova. Most of it is broadly consistent with my starting hypothesis but there are lots of nuances I hadn’t thought of, and twisty little passages to follow up further. It’s hard to pick and choose among such glittering prizes, but for me the plover’s egg-sized emerald is KidTrade: A Design for an eBay-resistant Virtual Economy, linked here.
Going even more meta, it’s obvious to me that exploring the issues raised by online collaborative innovation from separate disciplinary perspectives, such as those of economics, sociology2 or law, then trying to put the bits together, is not the way to go. The way to make progress I think is through a collaboration that combines a range of academic perspectives with actual lived experience of the collaborative process. For me, at least, blogs in general and Crooked Timber in particular are the closest thing I’ve found to what I’m looking for in this respect.
1 The existence of gains from trade means that both parties to a market transaction can gain more than they give. But this doesn’t mean there isn’t conflict: both could do better at the expense of the other.
2 It’s obvious, for example, that sociology has a lot to contribute to discussions of role-playing and rationality, instances including Weber, Goffman and Hirschman. Most importantly, there seems to be some potential for insight into the question of the circumstances in which personality is role-specific in some fundamental sense, rather than in the trivial sense in which a role is defined by the performance of specific functions.
The Economist has an interesting piece on the interaction between the economy in massively multiplayer games and that of the real world. The classic study of this question is Castronova’s analysis of the economy of Norrath, the setting for Everquest. Among various features of Norrath’s economy, one of the most interesting is trade with Earth through the sale of game items (weapons and so forth) via private treaty or on eBay1. This enables Castronova to estimate that the wage in Norrath is $US3.42 an hour, a figure that has some interesting implications.
At the Creative Commons conference last week, I heard a story to the effect that when the owners of one of these games tried to prohibit item trading they were sued and, in the course of litigation, discovered that the plaintiff ran a sweatshop in Mexico where workers participated in the game solely to collect salable items. Clearly, as long as the wage is below $3.42 there’s an arbitrage opportunity here. More technically sophisticated arbitrageurs have replaced human workers with scripted agents, working with multiple connections. Either way, arbitrage opportunities can’t last for ever, and are likely to be resolved either by intervention or inflation.
The positive economics of all this are interesting enough. But how about policy analysis? Who benefits and who loses from this kind of trade, and do the benefits outweigh the costs?
It’s pretty easy to produce a model where no-one gains and all the ‘real’ inhabitants of Norrath (that is, people playing the game for pleasure) lose. Consider a model with two groups, Norrathians who play the game for pleasure, and Terran arbitrageurs, who are good at collecting salable items, but incur disutility from the work involved in doing so. As usual in economics, assume that the two groups are homogeneous. Suppose that each individual Norrathian enjoys the game more when they have more items than others, but that a uniform increase in the stock of items makes no-one better off. Consider the starting point before the arrival of the first Terrans. Each Norrathian is willing to pay a positive sum (in Terran dollars) for items, and we’ll suppose that this is initially higher than the cost to Terrans of collecting them. So, when the first Terrans arrive, trade begins. The Terrans are better off, but the Norrathians are worse off. Since they all expand their holding of items uniformly, they gain no more pleasure from the game than before. But they are now paying the Terrans for items. The problem here is that buying an item outside the game creates a negative externality for all players.
Now, since there are profit opportunities for Terrans, more and more keep arriving. New entrants keep entering the market until the Norrath-Terra exchange rate drives the return for Terrans down to the ordinary Terran wage, at which point the entire system is in equilibrium. Terrans are no better off, and Norrathians are strictly worse off. From a social point of view, encompassing both Terra and Norrath, the work done by the Terrans in acquiring items is entirely wasted. Hence, a welfare improvement can be realised by prohibiting trade.
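Here’s the model as a toy computation, a sketch only (the utility function and all parameters are invented): because enjoyment is purely positional, uniform buying leaves every Norrathian exactly as entertained as before, while the payments to Terrans drain out of the system.

```python
# Toy version of the model: a Norrathian's fun is purely positional
# (items relative to the average), minus dollars paid to Terrans.
# The utility function and all numbers are invented for illustration.

def norrathian_utility(own_items, avg_items, dollars_paid):
    positional_fun = own_items - avg_items  # zero when everyone buys alike
    return positional_fun - dollars_paid

n_players = 100
price_per_item = 5.0
items_bought = 3.0  # every Norrathian buys the same amount

before = norrathian_utility(0.0, 0.0, 0.0)
after = norrathian_utility(items_bought, items_bought,
                           items_bought * price_per_item)

print(f"per-player utility change: {after - before:+.1f}")
print(f"total paid to Terrans:     ${n_players * items_bought * price_per_item:,.0f}")
```

And since free entry competes the Terrans’ margin over their ordinary wage down to zero, that transfer corresponds to pure social waste, which is the welfare case for prohibiting trade.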
I think this is basically the correct story, at least in terms of mainstream neoclassical economics. But there are some counterpoints to consider. As The Economist points out, one reason people pay to get items is that the early stages of the game (the ones you play when you don’t have many items) are less interesting than the later ones. This is a design flaw and extra-game trade is a warning about this flaw. I think a Hayekian could make more of this point than I have.
A second, more neoclassical, point is that, if Norrathians are heterogeneous, there may be real gains from trade. Time-poor cash-rich players may prefer to sweep past the obstacles, using money to smooth their path, while others may be happy to defray some of their costs by selling surplus items. I’m not sure if this works (why can’t the game owner capture all these rents and lower the average entry price?), but it seems like an interesting argument.
I think there are deeper issues here, relating to the conflict between the kind of market rationality displayed by the Terrans in my model and the collaborative innovation needed to create virtual worlds, but that’s a story for another post.
1 I haven’t looked into the actual mechanics of this. But as long as trade is allowed on Norrath, it’s hard to prevent side payments being made on Earth.
For twenty-five years or so, the privatised pension scheme introduced in Chile under the Pinochet regime by his labour minister, Jose Pinera, has been touted as a model for the world to follow. It’s been particularly influential in the US debate over social security privatisation, but has also had some influence in Australia, which has a somewhat similar setup, though we arrived at it by a different route: Chile scrapped its defined-benefit state pension scheme, keeping a basic safety net; Australia started with a means-tested flat-rate pension, but has tried to expand private superannuation since the 1980s.
Now the New York Times reports that the Chilean scheme is not delivering the promised benefits. Lots of people are getting less than they would have under the old scheme and large numbers are falling back on the government safety net. Fees have chewed up as much as a third of contributions.
Why has this bad news taken so long to emerge? Complaints about fees have been around almost since the start, but right through the 1980s, they were ignored because investment returns were exceptionally high. This in turn reflects the fact that Pinera had the good luck or good judgement to start the scheme when the stock market was at an all-time low, thanks to a financial crisis (in retrospect the first of many cases where financial market darlings got into trouble). The economy recovered and the stock market boomed. Once gross returns fell back to normal levels, the bite taken out by fees became unbearable.
All of this raises the issue of risk. Under a privatised defined-contribution scheme, your returns, and therefore your retirement income, depend heavily on timing. 1981 was a great time to start investing in the Chilean stock market, and also in the US market. At least for the US, 2000 was a good time to get out. Anyone who started investing in the US market in the late 1990s (and didn’t manage to outperform it) is well behind where they would have been if they had put their money into government bonds.
On average, returns from the stockmarket are higher. But this is just another way of saying that, on average, investors want a higher return to justify the additional risk. So a switch from a defined-benefit scheme to a private accounts scheme with the same average return and higher risk is a real loss, just as if someone sought to repay a debt contracted in 1981 with the same amount in 2005 dollars.
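To see the point in miniature, here's a toy calculation, using log utility as a stand-in for risk aversion and made-up payouts; the two schemes have the same expected value by construction:

```python
import math

# Two retirement schemes with the SAME expected payout (all numbers assumed):
safe = [100.0]           # defined benefit: 100 for sure
risky = [60.0, 140.0]    # private account: 60 or 140, equally likely; mean also 100

def certainty_equivalent(outcomes):
    # log utility as a simple stand-in for risk aversion
    expected_utility = sum(math.log(x) for x in outcomes) / len(outcomes)
    return math.exp(expected_utility)

print(certainty_equivalent(safe))    # 100.0
print(certainty_equivalent(risky))   # about 91.7
# Same average return, but the risky scheme is worth less to a risk-averse
# retiree: the 'real loss' described above.
```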
I’ll have more to say about this soon, I hope.
The latest terrorist bombings in Iraq came closer than usual to home for Australia, with two soldiers suffering (reportedly) minor injuries in an attack on the Australian embassy1, while 20 more Iraqis were killed, adding to the tens of thousands already killed by both/all sides in this terrible war, which seems to get more brutal and criminal every day.
It’s pretty clear by now that Iraq is approaching full-scale civil war and that, as is usually the case in civil wars, the presence of foreign troops is only making things worse. But rather than arguing about this last point, it might be better to put it to the test. This NYT Op-ed piece by three researchers from the Center for Strategic and International Studies suggests a referendum on US withdrawal to be held soon after the forthcoming elections. They make a pretty good case that it would be hard for the Baathists to justify disrupting such a referendum, though no doubt some would do so anyway. At least, this would be true if the main Shiite parties adhered to their previously stated position of favoring withdrawal.
I expect such a referendum would lead to a majority vote for withdrawal. But a majority the other way would probably be an improvement on the current situation. The only really bad outcome would be the case where the Kurds voted solidly for keeping US/UK troops, reversing a majority vote the other way among Arab Iraqis.
Of course, withdrawal of troops wouldn’t produce instant peace. But I can’t see any better alternative. If military force, ruthlessly applied, was going to end the war, the levelling of Fallujah and the expulsion of the population ought to have done the trick. On the other side, I think the resistance would lose their main recruiting tool if the Americans were gone.
1 Despite this event, Australia has suffered far less direct loss in Iraq than many nations who were far less deeply involved in the decision to start the war.
Following a lead from Bill Gardner (and a tip from Henry) I’ve been reading The Status Syndrome: How Social Standing Affects Our Health and Longevity by Michael Marmot1. The core of Marmot’s book, which is fascinating in itself, is his empirical work showing that, as you move up any kind of hierarchy (Marmot looked at British civil servants), your health status improves. I’ve done a little bit of work myself relating to the links between health, education and life expectancy at the national level, and Marmot’s micro findings fit very neatly with mine.
What’s even more interesting though (to me and to Bill, I think) is the general idea of autonomy as a source of good health2. He debunks, for example, the long-discredited, but still widely-believed notion of executive stress and shows that the more control you have over your work environment and your life in general, the less likely you are to suffer the classic stress-related illnesses, such as heart disease.
It seems to me that autonomy, or something like it, is at the root of many of the concerns commonly seen as part of notions like freedom, security and democratic participation. I’m still struggling with this, but reading Marmot has crystallised some thoughts I’ve had for a long time. I’ve put some thoughts over the page - comments appreciated.
The points are clearest in relation to employment. Early on, Marmot debunks the Marxian notion of exploitation (capitalists taking surplus value from workers) and says that what matters in Marx is alienation3. He doesn’t develop this in detail, and the point is not new by any means, but he’s spot on here. It’s the fact that the boss is a boss, and not the fact that capitalists are extracting profit, that makes the employment relationship so troublesome. The more bossy the boss, the worse, as a rule, is the job. This is why developments like managerialism, which celebrates the bossiness of bosses, have been met with such hostility.
So part of autonomy is not being bossed around. But like Berlin’s concept of ‘negative liberty’, this is only part of the story. Most of the time it’s better to be an employee with a boss than to sell your labour piecemeal on a market that fluctuates for reasons that are totally outside your control, understanding or prediction. This is where a concept of autonomy does better than liberty, negative or positive. To have autonomy, you must be operating in an environment that is reasonably predictable and amenable to your control.
Of course, the environment consists largely of other people. So one way of increasing your autonomy is by reducing that of other people, for example by moving up an existing hierarchy at their expense. But autonomy is not a zero-sum good. Some social structures give more people more autonomy than others.
In modern market societies, everyone but the very poor has quite a lot of autonomy in their role as consumers. There’s nothing much more autonomous than a supermarket where you can take a cart or trolley round shelves stocked with a vast variety of items, pick whatever you want and take it away, swiping a credit card on the way. On the other hand, Marx’s corresponding vision of a society where you might “hunt in the morning, fish in the afternoon, rear cattle in the evening, write literary criticism after dinner just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic” seems as hopelessly utopian today as it did 100 years ago. This is partly because of some unavoidable technical realities - someone who did all these things would probably not be very good at any of them - but much more so because of the social structures required to manage work. These can be changed, though not easily.
As Robert Shiller pointed out very effectively, one of the roots of the dotcom bubble was the way the Internet gave new users an incredible feeling of mastery (which might more properly be parsed as autonomy). I don’t think this was entirely illusory and I continue to believe that the Internet has genuine potential to generate the kind of social transformation that will enhance autonomy for everyone.
I’ve got a lot more to say about this, but that’s enough for now. Go ahead and pull it to pieces. After that, I’ll try to put it back together in something like working order.
1 In the same order, I bought “The Working Poor: Invisible in America” (David K. Shipler), also well worth reading.
2 Marmot also talks about social participation and makes a lot of sense, but that’s a topic for another day.
3 This is, I think, reflected in the old joke. “Capitalism is the exploitation of man by man. Communism is exactly the reverse”.
A while ago, I looked at the ticking bomb problem and concluded that, whatever the morality of using torture to extract life-saving information in emergencies, anyone who did this was morally obliged to turn themselves in and accept the resulting legal punishment. Reader Karl Heinz Ranitzsch has pointed me to a real-life case, reported by Mrs Tilton at Fistful of Euros. The case involved a threat of torture, rather than actual torture, and the deputy police commissioner involved was convicted and fined. Without detailed knowledge of the circumstances, I tend to agree with Mrs T that this was about the right outcome.
My article The Unsustainability of U.S. Trade Deficits has just been published in The Economists’ Voice along with a piece on government deficits by Ronald McKinnon. Although relatively new and oriented to a general audience, EV looks like being a high-powered journal, having already published Stiglitz, Posner and Akerlof among others, so I’m pretty pleased to have made it into volume 1. Thanks to everyone here and on my blog who helped me to sharpen my arguments on this topic.
Update One point in my piece that I thought was at least modestly novel was my observation that the US government has been shortening the term of the Treasury securities (bonds, notes and bills) it issues. Now, via Brad DeLong, I see that Nouriel Roubini has just covered the same issue in a lot more detail, offering what he describes as “A Nightmare Hard Landing Scenario for the US $ and the US Bond Market”. And you all thought I was bearish.
Milton Friedman has a piece in the Hoover Digest, reprinted in The Australian, making the point that, even though many fewer people nowadays profess belief in socialism than did so in 1945, the general movement of policy since the end of World War II has been in a socialist direction, that is, towards an expansion in the share of GDP allocated to the public sector. He draws a distinction between ‘welfare’ and the traditional socialist belief in public ownership of the means of production, seeing the former growing at the expense of the latter.
From a social-democratic perspective, I’d put things differently. There are large sectors of the economy where competitive markets either can’t be sustained or don’t perform adequately in the absence of government intervention. These include human services like health and education, social insurance against unemployment and old age, production of public goods and information, and a range of infrastructure services. In all these sectors, governments are bound to get involved. Sometimes, the best model is private production with public regulation and funding, and sometimes it is public ownership and production. The result is a mixed economy.
Over time, the parts of the economy where competitive market provision is problematic have grown in relative importance. By contrast, agriculture, the archetypal competitive industry, has declined in relative importance as have mining and manufacturing, areas where governments have usually performed poorly.
The result is that the ideological swing towards neoliberalism has done little more than slow a structural shift towards a larger role for government.
I don’t ask much of you lot, but I’m asking you to read this (yes yes, pdf, they’re not exactly uncommon you know) speech by Mervyn King, Governor of the Bank of England. As well as being one of the UK’s best technical economists, King really is uncommonly thoughtful and insightful when it comes to issues outside his direct area of specialisation (I notice that he thanks Tony Yates in the acknowledgements, who is also a top bloke). This British Academy lecture takes on the concept of risk in the abstract, and illustrates it with a number of examples related to the retirement savings industry. It’s really very good. If you take nothing else away from it, there is one point which is extremely well made: that part of the reason why we have a role for public provision of pensions is that it allows us to spread the burden of longevity risk between present and future generations.
I’ve been reading “Pay without Performance: The Unfulfilled Promise of Executive Compensation” by Bebchuk and Fried. For anyone who still believes that executive pay is based on rewarding performance and encouraging risk-taking, this book should disabuse them. There are loads of studies pointing out, not surprisingly to anyone who reads the papers, that top executives and boards look after each other in a way that rewards failure.
The most telling detail for me is the observation (p. 98) that every single CEO in the S&P ExecuComp database has a defined benefit pension plan. This, while bosses everywhere have been shifting their employees onto defined contribution plans, where they, and not the company, bear all the risk, and while the Republicans in the US are trying to do the same with Social Security.
One thing I would have liked more of is quantitative information about the aggregate magnitude of executive compensation, considered in relation to corporate profits. There’s only a little of this in the book, though the authors say here:

Aggregate top-five compensation was equal to 10 percent of aggregate corporate earnings in 1998-2002, up from 6 percent of aggregate corporate earnings during 1993-1997.

Given that this excludes various kinds of hidden transfers1, that non-executive board members extract substantial rents (mostly through favorable corporate decisions rather than in cash) and considering senior managers, rather than merely top-5 executives, as a class, it’s apparent that the total income flowing to this group could easily be between 25 and 50 per cent of aggregate corporate profits. If this is correct, it ought to have profound implications for the way in which we model corporations, and the way in which we think about the class structure of modern capitalism.
1 It’s not clear whether retirement benefits are counted, for example, and these are as large, in present value terms, as direct compensation. Then there is the observation that executive insiders do remarkably well in trading the shares of their own companies.
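For what it's worth, here's the back-of-envelope arithmetic behind the 25-50 per cent conjecture above. Apart from the 10 per cent figure quoted from the book, every multiplier below is an invented assumption, not data:

```python
# Scaling from the quoted top-five figure to a whole-class estimate.
top5_share = 0.10    # top-five compensation / corporate earnings (from the book)
hidden = 1.5         # allowance for pensions, insider trading gains, etc. (assumed)
wider_class = 2.5    # directors and senior managers beyond the top five (assumed)

print(f"implied share of profits: {top5_share * hidden * wider_class:.0%}")
# 38%, comfortably inside the 25-50 per cent range suggested above.
```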
Der Spiegel’s new English-language site has an intriguing article about foreigners — including a US strawberry farmer — who have bought up German government bonds issued in the 1920s and are now trying to get the German government to pay the … billions. I have a vague memory that Piero Sraffa became fabulously rich (or his college did) because he bought up then-worthless Japanese government bonds during WW2 on the — correct — assumption that any postwar Japanese government would honour them. The Germans, unsurprisingly, don’t seem keen:
But investors like Fulwood [the strawberry farmer] don’t want to wait any longer: He’s the first to take on Germany’s Bundesbank, or central bank, to force the government to pay up. On September 10, Fulwood filed suit in the 13th Judicial District, Hillsborough County, in Tampa, Florida.
According to court documents, the strawberry farmer isn’t exactly asking for small change, either: Fulwood is demanding $382.5 million for 750 bonds.
Other bond owners are also preparing to launch legal battles. In the United States, a group of investors has formed, seeking to turn 2,000 of the old bonds into cold, hard cash. In Italy, say insiders, the grandchild of former Ethiopian emperor Haile Selassie holds 20,000 of the bonds. And a U.S.-based lawyer claims to represent the heir to Japan’s emperor, who allegedly owns “countless boxes filled with these bonds.”
Is this real? Or is it like those people who claim to own Manhattan?
Continuing the debate about preventive war begun by Judge Richard Posner (the discussion was begun by him, I mean, not the war) the Medium Lobster presents a competing analysis:
[T]he probability of an attack from the moon is less than one - indeed, it is miniscule. However, the potential offensive capabilities of a possible moon man invasion could be theoretically staggering. … The Medium Lobster has calculated this probability to be 5×10^-9. … the resulting costs would include the end of civilization, the extinction of the human race, the eradication of all terrestrial life, the physical obliteration of the planet, and the widespread pollution of the solar system with a mass of potentially radioactive space debris. The Medium Lobster conservatively values these costs at 3×10^12, bringing the expected cost of the moon man attack on earth to 1500 (5×10^-9 × 3×10^12), a truly massive sum. Even after factoring in the cost of exhausting earth’s nuclear stockpile and the ensuing rain of moon wreckage upon the earth (200 and 800, respectively), the numbers simply don’t lie: our one rational course of action is to preventively annihilate the moon.
I’m a bit sorry to break it to the Medium Lobster, but Judge Posner considers scenarios of precisely this kind, and uses pretty much this methodology, in his new book, Catastrophe: Risk and Response. Cases treated include the nanotechnology gray-goo apocalypse, the rise of superintelligent robots, and a strangelet disaster at Brookhaven Labs that would annihilate a substantial chunk of spacetime in the vicinity of our solar system. A recent review of the book raises most of the relevant critical points about the approach Posner takes. In essence, it’s all good geeky fun to apply the methods to cases like these but it’s a stretch to pretend we’re learning anything decisive about what we should do, as opposed to gaining insights on the scope and limits of some techniques for assessing alternatives.
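To see just how sensitive this machinery is to its inputs, here is the expected-cost calculation in miniature; every probability and cost below is invented for illustration, which is precisely the point:

```python
# The Posner-style calculation boils down to: expected cost = p * C.
for p in (1e-7, 1e-8, 1e-9):         # guessed annual probability of catastrophe
    for C in (1e11, 1e12, 1e13):     # guessed monetised cost if it happens
        print(f"p={p:.0e}, C={C:.0e} -> expected cost = {p * C:,.0f}")
# The 'answer' spans four orders of magnitude (100 to 1,000,000) depending
# entirely on which arbitrary pair of guesses you start from.
```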
In the case of cost-benefit arguments about preventive war, there are other objections. One, pointed out by Chris in the comments to my earlier post, is the worry that your adversaries are thinking about preemptive attacks in much the same way you are, and so will move to preempt your preemptive attack with one of their own. You can still be committed to weighing up the costs and benefits as best you can, but it would be foolish to think one had a straightforward technique that cut through the difficulty and reduced it to a matter of simple calculation. One of the most important contributions of game theory is the way it reorients your thinking from a parametric to a strategic point of view. That is, you stop thinking of other agents as passive bits of the world and realize that they, like you, are searching for the smartest decision given what they anticipate their opponents will do. If you’re inspired by rational choice theory, as Judge Posner is, the simple application of cost-benefit methods should not look very plausible as a way of reaching a strategic decision about the use of preventive action in cases like Iraq.
More broadly, though, the reason I find it disturbing that these elementary cost-benefit methods are presented by Posner as a serious way to resolve decision problems in international politics is that it’s clear he’s aware of their shortcomings and limits. In the Catastrophe book, for instance, he acknowledges that monetizing the costs of extremely unlikely disasters, for which little in the way of good evidence about their likelihood exists, is an essentially arbitrary process. (In more run-of-the-mill cases, of course, “arbitrary” does not mean “random.” The development and application of these pricing technologies has more to do with the push to quantify many kinds of risk than the rationality of the methods themselves. But that’s an argument for another day.) At a minimum, if we’re thinking of things like the invasion of another country it just won’t do to acknowledge the arbitrariness of the numbers being assigned to the variables, and then press on regardless. Nor is it satisfactory to sketch a cost-benefit approach as if it represented how a rational agent would assess a situation in practice, and then jump in to specific cases like Hitler invading the Sudetenland. It’s not that these methods aren’t powerful, it’s that they’re being misapplied. Abba Lerner once commented that
Thousands of habits of behavior and of enforced laws had to be developed over millennia to establish the nature and the minutiae of property rights before we could have buying and selling, instead of each man just taking what he wanted if only he was strong enough. … Each set of rights begins as a conflict about what somebody is doing or wants to do which affects others … An economic transaction is a solved political problem. Economics has gained the title of queen of the social sciences by choosing solved political problems as its domain.
When the parameters of a decision are settled — to buy or not buy some commodity, for instance — then a tool like cost-benefit analysis can be enormously powerful, especially when there are a lot of parameters. Even when aspects of the decision become increasingly uncertain or subjective, these approaches retain much of their power as heuristics for decision making. But eventually you’ll get to cases which should provoke reflection on the limits and applicability of the methods, rather than a perverse desire to rely on them all the more just because the choice is hard and you want a definitive answer. Whatever you thought of the pros and cons of invading Iraq, the last thing it could be called was a solved political problem. In such cases, wheeling out the cost-benefit machinery in the way Judge Posner does isn’t a way to make the political choice easier, it’s a rhetorical move to make the politics of the choice magically disappear.
I’ve read lots of pieces on proposals to reform the US Social Security system, both positive and critical. Unfortunately, most of them include claims that are at best half-true and most of the rest assume a high level of knowledge of the issues. Over the fold, I’ve added a lengthy piece trying to explain the issues. Although I’m actively involved in debate on some of them, I’ve done my best to give a neutral presentation, at least until the final assessment of the proposals currently being discussed by the Administration and Congressional Republicans. This is primarily a matter of political judgement and can be summed up fairly quickly.
The Republican proposals involve accounting transfers amounting to trillions of dollars between different government accounts and newly created individual accounts. These transfers will almost certainly be packaged up with substantive changes to the Social Security system. Whether you support them depends on which you think is more likely: a broad and well-informed debate leading to a comprehensive treatment of the funding problem, or an exercise in pork-barrelling and creative accounting that leaves the underlying shortfall worse than ever.
You may be able to guess which of these I think more likely, but you’ll have to read (or scroll) to the end to find out.
The financing of public and private pension schemes in the face of a growing proportion of retired workers has raised problems all around the world. Nowhere, it seems, has the debate been more complex and confusing than in the United States. In view of the role of the US dollar as a reserve currency, and the current budgetary problems of the US government, the difficulties of the US Social Security Fund are a matter of global significance.
The beginning of the problem is the fact that the taxes currently being levied aren’t going to be enough to fulfil the promises that have been made to beneficiaries. There are various ways of measuring the shortfall, but the most relevant is that, if the problem were to be fixed with an immediate injection of cash, the amount required would be around $5 trillion, or about 50 per cent of current GDP. The fund won’t be exhausted until 2042 on current calculations, but something will need to be done well before then.
There are at least four distinct, but interrelated issues.
First, there is a proposal to recognise part or all of the unfunded Social Security liability explicitly, by having the US government borrow money and transfer it to the Social Security fund.
Second, there are proposals for partial privatization, that is, allowing individuals to allocate part of their contributions to a personal account, which they could invest at their discretion, and reducing benefits to those individuals accordingly.
Third, there are proposals that part or all of the Social Security fund should be invested in stocks rather than, as at present, in government bonds.
Finally, there are measures to meet the shortfall either by changing the rules of the Social Security system or by finding savings elsewhere in the budget.
To see what’s going on, it may be helpful to look first at the source of the shortfall. Social Security is a defined benefit scheme, operated on a ‘pay-as-you-go’ basis. The benefit is calculated on the basis of an individual’s highest 35 years of earnings, and is not directly related to contributions. Payments to current retirees are made out of the contributions of current workers.
This was a hugely beneficial deal for the first cohort of people to retire under social security. Having made contributions for only a few years, they received an earnings-related pension for the rest of their lives. Compared to a fully-funded scheme, this means that Social Security built up a large deficit in its early years. However, because both incomes and the working-age population were growing fast, this was not an immediate problem.
In the future, with a reduction in the working population relative to the retired, the process will be reversed. Either some group of participants will receive less than they put in, in present value terms, or the government will have to make up the deficit out of its general revenue.
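A stripped-down sketch of the pay-as-you-go mechanics makes the point; all the numbers below are assumed for illustration:

```python
# Each period, contributions from current workers are paid straight out
# to current retirees; nothing is invested.

def payg_benefit(workers, contribution, retirees):
    return workers * contribution / retirees

# Early years: a large workforce, few eligible retirees who contributed
# only briefly, hence a windfall for the first cohort.
print(payg_benefit(workers=40, contribution=1.0, retirees=10))   # 4.0 each

# Later: the worker/retiree ratio falls from 4:1 to 2:1.
print(payg_benefit(workers=40, contribution=1.0, retirees=20))   # 2.0 each
# Either benefits fall, contributions rise, or general revenue fills the gap.
```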
The first proposal discussed above, to refinance the Social Security fund with new government debt, may be seen as a simple recognition of the well-known funding deficit. In isolation, it looks like a desirable move, bringing a real, but unrecognised, future obligation on to the balance sheet. Since it does not involve any new obligations, it is reasonable to argue, as Republican proponents of the proposal have done, that it should not count towards measures of the budget deficit, or be treated as an increase in the net debt of the public sector. Before accepting this conclusion, however, it is necessary to consider the interaction between this proposal and the other options being put forward at present.
The second proposal, reallocation of some contributions to individual accounts, with a corresponding reduction in future benefits, would, in principle, make no difference to the deficit. However, there would be substantial problems in matching the shift in contributions to the reduction in benefits. Benefits for future retirees are a complex function of past earnings, on which contributions have already been paid, and future earnings, which would attract smaller contributions to the fund. Supposing contributions were reduced by 20 per cent tomorrow, it would be very difficult to work out the appropriate reduction in benefits for someone who retired in, say, ten years’ time.
The third proposal is to invest some of the social security fund in stocks rather than bonds. This would happen more or less automatically with individual accounts, assuming they were treated like existing 401(k) funds. However, it would also be possible to diversify the investments of the official Social Security fund, as was proposed during the Clinton administration.
Historically, stocks have yielded higher returns than bonds in the long run, a phenomenon known as the equity premium. Higher returns would make it easier to make up the current shortfall.
The problem is that the equity premium may be presumed to reflect, in some sense, the greater riskiness of stocks. Assuming investors are averse to this risk, we might expect to see any allocation of individual accounts to stocks being offset by other portfolio shifts out of stocks and into bonds. The net result will be a wash. A possible exception to this argument arises with workers who have no personal financial assets apart from their Social Security entitlements. Such workers might wish to diversify into stocks but be constrained from doing so at present.
The Clinton proposal is more controversial. Some economists have argued that the same risk premium should apply to public as well as private capital markets. Others (including me) have argued that the observed market equity premium is much too high to be a socially optimal estimate of the cost of risk and must be due to capital market failures of various kinds. Under these assumptions, an increase in public holdings of equity, for example through Social Security diversification, will yield a net benefit. On this analysis, the higher average returns of equity investment could be used to meet at least some of the funding deficit, while fluctuations in returns could be smoothed using the government’s taxation and borrowing powers.
With the controversial exception of diversification, all of the proposals discussed so far amount to a reallocation of existing contributions and commitments, with no change in the aggregate balance. The crucial problem is that of dealing with the funding deficit, whether or not this is brought on-budget. In the late 1990s, it seemed likely that the problem would be addressed by improvements in the general budget balance. The aim at that time was to place required contributions to social security in a ‘lockbox’ and ensure that the budget, exclusive of the lockbox, was in balance or surplus. The accumulated surpluses projected at the time would have been easily sufficient to address the Social Security shortfall with no changes in benefits.
On current indications, however, there is little likelihood of a surplus in the budget, excluding Social Security, any time soon. It is therefore necessary to consider an increase in contributions, a reduction in benefits or a tightening of eligibility. The most likely option is an effective reduction in benefits, by indexing them to consumer prices rather than, as at present, to earnings. This would mean that retirees would get a fixed real income, based on their lifetime earnings, rather than sharing in the benefits of general wage growth.
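A quick illustration of what the switch in indexation would mean, assuming (purely for illustration) that real wages grow at 1.1 per cent a year:

```python
# Price vs wage indexation compared over a 40-year horizon.
real_wage_growth = 0.011    # assumed annual real wage growth
years = 40

wage_indexed = (1 + real_wage_growth) ** years   # benefit tracks living standards
price_indexed = 1.0                              # benefit fixed in real terms

print(f"wage-indexed / price-indexed after {years} years: {wage_indexed:.2f}")
# About 1.55: switching to price indexation amounts to a benefit cut of
# roughly a third, relative to current promises, by the end of the horizon.
```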
The issues are logically independent, but that doesn’t mean they are politically separate. We can imagine a couple of opposing scenarios. On the one hand, the proponents of policies such as explicitly recognising debt and the introduction of private accounts could seek a broad and well-informed debate leading to a comprehensive analysis of all the policy options, including options to address the funding shortfall. In an ideal political world, this would undoubtedly be the outcome.
On the other hand, it might be suggested that policies involving accounting transfers of trillions of dollars between government accounts and from governments to private individuals would provide an ideal opportunity for all manner of pork-barrelling, from handouts to existing retirees to cosy deals for Wall Street investment banks. It might also be suggested that the difficulty of matching reductions in contributions with reductions in benefits could be addressed by ensuring that nearly everyone was promised that they would be better off. Finally, it might be suggested that a combination of creative accounting and rosy scenarios could be used to justify an announcement that the problems of Social Security had been solved when in reality the accumulated shortfall was worse than ever.
Recent observations of the US Congress, the Bush Administration and the accounting treatment of pensions in the private sector make it fairly clear that the second of these scenarios is considerably more plausible than the first. Current holders of US Treasury bonds will rightly be alarmed if attempts to address the funding deficit are bundled into a complex refinancing package involving a substantial increase in official government debt.
I spent a chunk of the Thanksgiving Weekend reading Mark Blyth’s Great Transformations: Economic Ideas and Institutional Change in the Twentieth Century, on which more later. Before doing a proper post, though, I want to point to an interesting claim that Blyth makes in passing; I’ve seen versions of this argument before, but never stated as punchily. Blyth argues that there is no very good reason why we should be worried about the general effects of inflation on the economy - the empirical evidence shows that there is no statistically significant relationship between growth and inflation for inflation rates under twenty per cent per year, as acknowledged even by inflation bears such as Robert Barro. The argument that moderate-to-highish rates of inflation create real economic costs is, at the very least, contestable. Yet low inflation is one of the shibboleths of modern macroeconomic policy. Why? Blyth’s explanation (borrowing from Brian Barry) goes as follows:
Inflation acts as a redistributionary tax on holding debt. Stock prices stagnate and bond prices increase as bond holders demand a premium to guard against the effects of inflation. Investment is hit as inflation eats away at depreciation allowances and stock yields … In short, inflation is a class specific tax. Those with credit suffer while those with debt, relatively speaking, prosper. Given then that the benefits of inflation control (restoring the value of debt) are specific while the costs of inflation control (unemployment and economic decline) are diffuse, the reaction of business, particularly the financial sector, to inflation is best understood as the revolt of the investor class [italics in original]
Thus, Blyth argues that efforts to combat inflation are the result of rent-seeking by a small class of individuals (investors/creditors) with sharply defined interests who are able to push government to protect their investments even when this conflicts with the common weal. Creditors don’t want inflation - especially when it’s unexpected - while debtors benefit from it.

I’m not a macroeconomist, but the basics of this argument seem plausible, even if you don’t agree with Blyth’s implied Keynesian alternative. Is there a credible alternative explanation of the clear anti-inflationary bias of most advanced industrial democracies, one that, for example, identifies real social benefits attached to low inflation? Arguments against hyperinflation don’t count, since the causal relationship between middling-to-high inflation and hyperinflation is at best underspecified.
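For readers who like their distributional conflict in numbers, here is the creditor-debtor transfer in miniature; all the figures are assumed for illustration:

```python
# Unexpected inflation as a transfer from creditor to debtor.
principal = 100.0
nominal_rate = 0.05                  # agreed when both sides expected 2% inflation
repayment = principal * (1 + nominal_rate)

for inflation in (0.02, 0.10):
    real_receipt = repayment / (1 + inflation)
    print(f"inflation {inflation:.0%}: creditor receives {real_receipt:.2f} in real terms")
# 102.94 at 2%, but only 95.45 at 10%: the creditor's real loss is the
# debtor's real gain, which is the conflict Blyth describes.
```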
Tyler Cowen links to Martin Wolf (FT, subscription) on the failure of the radical free-market reforms undertaken by New Zealand from 1984 to the mid-90s. The results are even more striking when you observe that the only sustained period of growth has come after 1999, when the newly-elected Labour government raised the top marginal tax rate, amended the most radical components of the Employment Contracts Act, and undertook some renationalisation. I’ve written about all this many times, for example in this AFR piece and this Victoria economic commentary published in NZ (PDF file).
Cowen considers a couple of possible explanations. The first is that the NZ economy was on the verge of collapse before the reforms in 1984. In fact, while there were plenty of problems, Australia faced equally severe difficulties and did better. The second is that New Zealand suffers from its remote location. As I observed in my economic commentary:

If correct, this explanation of the failure of reform would contain a bitter irony. Few governments have embraced the global free market more eagerly than the New Zealand governments of the 1980s and 1990s, and few have received more praise from the advocates of globalisation. However, sentiment counts for nothing in global markets, whereas the advantages and disadvantages of location seem to be increasingly important1.
My reading of the New Zealand experience yields two conclusions. First, nothing in microeconomic policy can offset the impact of bad macroeconomic decisions and NZ made these in abundance2. Second, the beneficial bits of the reforms were largely or wholly offset by mistakes such as giveaway privatisations, and the adjustment costs caused by what Brian Easton called the market Leninist approach to reform, in which massive reforms were rammed through fast, with any opposition being crushed.
1 A further point which I spell out in this piece in Next American City is that, if this explanation is correct, it shouldn’t be. Communication and transport costs have declined drastically. The fact that financial markets and corporate HQs have clustered ever more closely in a few high-cost global cities is evidence that corporate cronyism is trumping economic efficiency.
2 Their main architect, Reserve Bank head Don Brash, subsequently went into politics, and is now Leader of the Opposition.
My wife and I just bought our first jointly owned car - when we were negotiating the final details at the car dealership, they tried to use the hard sell to get us to buy Lojack, a vehicle recovery system. We didn’t bite (I don’t like hard sells), but I got to thinking afterwards that buying Lojack would have been an economically irrational contribution to a collective good (which is not to say, of course, that it would have been the wrong thing to do).
The system involves a difficult-to-detect tracer that’s put somewhere in your car - then, if the car is stolen, the police will have a much greater chance of recovering it and catching the thieves. The catch is, of course, that it doesn’t offer any visible deterrent to stealing your car - your only individual benefit is the somewhat dubious reward of getting your vehicle back, perhaps in several pieces after it’s been to the chop shop. However, Lojack offers real collective benefits if it works as the manufacturers claim. If you live in an area where there are lots of Lojack users, then car thieves are likely to be collectively deterred (or caught if they aren’t deterred).
The problem is, of course, that there will be a strong likelihood of underprovision of the collective good. If you live in a neighbourhood where there are lots of other Lojack users, then you have little incentive to buy it yourself - you can free ride on your neighbours. If you live in a neighborhood with few or no Lojack users, you still have little incentive to buy it - the marginal improvement that you make to general neighborhood security is of little value to you, compared to the substantial dollop of cash that you would have to pay to install Lojack. My musings came to an abrupt halt, however, when a Google search revealed that my clever idea had already been written up several years ago by Ian Ayres and Steven Levitt, who suggest that individual Lojack users get less than 10% of its total social benefits (I note for the record that Levitt not only comes up with fun ideas, which is no more than any decent blogger or punter can do; he really excels in finding unusual data sources to test those ideas). As Ayres and Levitt suggest, if you’re an economically rational actor, you should go instead for the Club, which shifts the risk from your car to your neighbour’s.
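Here's the free-rider logic in stylised payoffs. The numbers are invented, chosen only to be consistent with the Ayres-Levitt finding that buyers capture less than 10 per cent of the social benefit:

```python
cost = 700.0             # installation price (assumed)
private_benefit = 60.0   # value of maybe recovering your own car (assumed)
social_benefit = 800.0   # deterrence spread across the neighbourhood (assumed)

print(f"private payoff: {private_benefit - cost:+.0f}")   # -640: never worth it to me
print(f"social payoff:  {social_benefit - cost:+.0f}")    # +100: worth it collectively
# Every individually rational driver declines, so the collective good is
# underprovided, and the Club shifts the risk to the neighbours instead.
```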
Over the fold, there’s a long (1500 word) piece on productivity in the US. It refers to this piece in The Economist, which was criticised by Brad DeLong. My analysis splits the difference between the two.
Anyway, I’d welcome comments and criticism.
In any assessment of the strengths and weaknesses of the US economy, productivity growth must play a central role. Most attention has been paid to positive assessments of US performance. There is plenty of good news on which to base an assessment. US productivity as measured by output per hour worked rose at an annual rate of 2.5 per cent during the dotcom boom from 1995 to 2000, considerably faster than the average for the previous 25 years.
Recessions are usually bad for productivity, as employers tend to keep workers on, even though there may be little for them to do. But no such effect was present during the US recession or the subsequent jobless recovery. Since 2000 output per hour in the nonfarm business sector has risen at an annual rate of more than 4 per cent.
However, there have been more critical interpretations of the data. A focus on productivity growth measured in terms of output per hour worked is common, but on this criterion, the United States is merely catching up to a number of European countries. The world leader in output per hour is Belgium, not normally considered an economic dynamo.
The problem here is that Belgium also has a low employment-population ratio. A lot of potential workers are either unemployed or not in the labour force, and these are presumably the less-skilled. The fact that these workers are excluded from the total increases the average, in the same way as British manufacturing productivity grew strongly under Margaret Thatcher, when large numbers of less efficient factories were forced to close.
There does not seem to be a great deal of merit in attaining high productivity levels by this route. Increased employment might reduce average output per worker, but it would increase total output, and most unemployed workers (as well as many of those who have left the labour force) would be better off in low-wage, low-productivity jobs than in their current position.
The ‘Belgium effect’ was not a problem for the United States during the ‘dotcom’ boom, which ran from 1995 to 2000. During the boom, output per hour grew strongly, at the same time as employment surged, and average hours of work increased. On the other hand, there is increasingly strong evidence that the apparent acceleration of productivity since the end of the dotcom boom is due to the exclusion of low-productivity workers. Over the period since 2000, the employment-population ratio has fallen from a peak of nearly 65 per cent, to a low of 62 per cent. There has been a slight recovery recently, but this has been associated with a slowdown in productivity growth.
As was widely noted during the recent election campaign, George Bush is the first US president since Hoover to have presided over a decline in aggregate employment. This is a surprising outcome since the recession was a relatively mild one, and output growth has been reasonably strong.
It is a matter of simple arithmetic that increasing output and declining employment must imply strongly increasing output per worker. Given that average hours worked have remained fairly stable, or declined slightly, this also implies strongly increasing output per hour.
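The arithmetic, with illustrative growth rates (both figures assumed):

```python
output_growth = 0.03         # assumed annual output growth
employment_growth = -0.005   # assumed annual employment decline
hours_growth = 0.0           # average hours roughly stable

per_hour_growth = (1 + output_growth) / ((1 + employment_growth) * (1 + hours_growth)) - 1
print(f"implied growth in output per hour: {per_hour_growth:.1%}")
# About 3.5%: to a first approximation, the two rates simply add up.
```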
But standard economic analysis suggests that, if productivity is increasing for all workers, employment should be increasing, not declining. A popular explanation of the observed combination of increasing (measured) productivity and declining employment is that fewer workers are needed to produce the same output. But this would make sense only if total output was constrained, for example, by inadequate demand. In the United States, by contrast, growing demand has been reflected in rapidly increasing imports.
An alternative, but related, criticism has been made by The Economist magazine. The Economist looks at multi-factor productivity, the measure most commonly used in discussions of Australia’s productivity performance. On this measure, as assessed by the OECD, the United States looks much more like the European countries with which it is commonly compared. In particular, France looks almost identical to the United States.
In both France and the United States, the rate of growth of multifactor productivity accelerated in the late 1990s. France managed multi-factor productivity growth of 1.4 per cent between 1995 and 2002, compared to 1.2 per cent in the United States. These estimates are presented by the OECD, and don’t seem to match well with national estimates (the Australian numbers are different from those produced by the ABS, though both show strong growth). In the present case, this is inevitable, since the US multi-factor productivity statistics haven’t been updated past 2001.
The Economist’s focus on multifactor productivity has been criticised by economist Brad DeLong, of the University of California, Berkeley, who argues that a major source of productivity growth is technological progress ‘embodied’ in new and improved capital goods such as computers. An improvement in the quality of capital goods is reflected in a declining quality-adjusted price. This means that, for a given level of investment spending, the quality-adjusted addition to capital stock is greater. Hence, embodied technological progress is reflected in enhanced labour productivity, but no change in capital productivity.
DeLong is right to argue that the productivity growth of recent years has been driven, in large measure, by improvements in computer and telecommunications technology, and that a measure that takes no account of this cannot be regarded as satisfactory.
On the other hand, there are two (inter-related) reasons for using MFP measures, at least in the context of international comparisons. First, since computers are commodity items nowadays, the technological progress they embody is available, on more or less equal terms, to all developed countries, with only modest time lags.
A striking illustration of this point is given by looking at the proportions of households with access to PCs and the Internet. The gap between technological leaders like the United States and transition economies like those of Eastern Europe is less than a decade. Household internet penetration in the Czech Republic, for example, is currently 15 per cent, the level achieved by the United States in 1997. The time taken for innovations in this area to diffuse among the leading developed nations is often a matter of months, rather than years. It follows that international comparisons are unlikely to be subject to any significant bias arising from failure to take account of embodied technical change.
A second, and related, point is that there are big differences in the ways in which statistical agencies in different countries take account of changes in the quality of computers, and other forms of embodied technological progress. The United States uses a method called ‘hedonic regression’, which involves pricing characteristics (in the case of computers, these would include processor speed, disk drive capacity and so on) rather than specific items. By contrast, most European countries use older methods involving matching specific models. Partly because of this difference in methods, US statistical agencies generally tend to be more aggressive in quality adjustment than their European counterparts.
Most economists think that the US approach is more accurate. Regardless of which approach is right, the effect of the different approaches is to produce an upward bias in estimates of US output growth, as compared to that of European countries. As The Economist points out, any bias of this kind is matched by a corresponding bias in estimates of the volume of capital investment. In the case of MFP estimates, these two biases largely cancel out, since an overestimate (or underestimate) of the value of computers biases both the input and output measures. When these biases are eliminated, international differences in estimates of productivity growth are greatly reduced.
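A constructed example shows how the offset works; all the numbers below are invented for illustration, chosen so that the cancellation is exact:

```python
# Toy growth accounting: MFP growth = output growth - alpha * capital growth
# (labour input held constant). Suppose aggressive quality adjustment adds
# 0.6 points to measured output growth and 2 points to measured growth of
# quality-adjusted capital input.

alpha = 0.3             # capital share (assumed)
output_bias = 0.006     # overstatement of output growth (assumed)
capital_bias = 0.020    # overstatement of capital-input growth (assumed)

labour_productivity_bias = output_bias            # feeds straight through
mfp_bias = output_bias - alpha * capital_bias     # 0.006 - 0.3 * 0.020 = 0.0

print(f"bias in labour productivity: {labour_productivity_bias:.1%}")
print(f"bias in MFP: {mfp_bias:.1%}")
# The overstated output is set against the overstated capital input, so the
# MFP comparison is unaffected even though measured labour productivity is
# inflated by the full 0.6 points a year.
```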
The resolution of these seemingly technical debates is of great importance in projecting likely future developments in the world economy. The United States is currently experiencing large and growing deficits in goods and services trade, currently equal to about 6 per cent of GDP. Such deficits cannot be sustained for more than a few years, since consistent trade deficits inevitably produce exploding current account deficits and foreign debt.
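The debt dynamics can be sketched with the standard debt-to-GDP recursion; the 6 per cent deficit is from the text, while the other parameters are assumed for illustration:

```python
trade_deficit = 0.06   # trade deficit, share of GDP (from the text)
r = 0.05               # interest rate on net foreign debt (assumed)
g = 0.03               # nominal GDP growth rate (assumed)
debt = 0.25            # initial net foreign debt, share of GDP (assumed)

for _ in range(20):
    # debt ratio grows with interest, shrinks with GDP growth,
    # and the trade deficit adds new borrowing each year
    debt = debt * (1 + r) / (1 + g) + trade_deficit

print(f"net foreign debt after 20 years: {debt:.0%} of GDP")   # ~180% and rising
```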
The optimistic interpretation is that the United States can accumulate debt now because international lenders anticipate high rates of productivity growth, which will permit the rapid growth in output needed to produce trade surpluses in future, thereby permitting the debt to be serviced. Against this, it may be observed that private foreign investors are increasingly unwilling to invest in the United States or buy US securities. Their place has been taken by Asian central banks, particularly the People’s Bank of China, which is trying to avoid an upward revaluation of the yuan against the dollar.
The dollar has already declined against other currencies, such as the yen and euro, and has now reached a record low against the euro. Combined with strong productivity growth, relative to the rest of the world, this ought to produce a rapid turnaround in the US trade deficit. Whether it will do so remains to be seen.
Tyler Cowen, in India, discusses how the people of Calcutta might adjust to rising sea levels, how many of them would leave, etc (via Davos Newbies). There’s a certain Swiftian quality (no doubt unintended) to Cowen’s contemplation of the fate of these poor Indians. If the costs and burdens he suggests do fall on such people (as they probably will), then it puts in perspective the fatuousness of the arguments advanced by Bjorn Lomborg and others to the effect that we shouldn’t do anything about global warming because the costs of action will exceed the benefits. The costs will be incurred by the poor in places like India, who will end up with their homes and workplaces under water, and the benefits have been and will be reaped by the already rich in the first world, who carry on driving their SUVs. If the economists and policy-wonks who parrot the Lomborg line are proposing a massive compensatory transfer from the winners to the losers, then I haven’t heard of it. Qu’ils mangent de la brioche.
Will Wilkinson’s thoughts on the (alleged) European taste for leisure over work had me scurrying over to my bookshelf to find a copy of Marx’s Grundrisse. Will surmises that the real reason that Europeans work shorter hours than Americans is that European taxes are too high. After all, anyone who is “economically rational” would surely work more if only the rewards were there, wouldn’t they? So goes human nature according to libertarians. Well, no, Will: they might work even less if they could satisfy their consumption needs with fewer hours at the grindstone. As Kurt Vonnegut says, human beings “are here on Earth to fart around, and don’t let anybody tell you any different.” Anyway, that quote from Marx:
The Times of November 1857 contains an utterly delightful cry of outrage on the part of a West-Indian plantation owner. This advocate analyses with great moral indignation—as a plea for the re-introduction of Negro slavery—how the Quashees (the free blacks of Jamaica) content themselves with producing only what is strictly necessary for their own consumption, and, alongside this ‘use value’, regard loafing (indulgence and idleness) as the real luxury good; how they do not care a damn for the sugar and the fixed capital invested in the plantations, but rather observe the planters’ impending bankruptcy with an ironic grin of malicious pleasure, and even exploit their acquired Christianity as an embellishment for this mood of malicious glee and indolence. They have ceased to be slaves, but not in order to become wage labourers, but, instead, self-sustaining peasants working for their own consumption.
Good for them!
Thank you from the bottom of my heart to the traders on the Iowa Electronic Markets, who, on the last day of the campaign, have bid Kerry up to a 51% chance of winning. Thus ensuring that, whoever wins, I will have ample material to spend the next four years teasing James Surowiecki about the “Wisdom of Panicky Crowds”.
(The really interesting thing is that the single most probable IEM outcome is still Bush to win with less than 52% of the popular vote. The big move of the bids has been from Bush >52 to Kerry >52!)
One of the least attractive features of Steven Landsburg’s column in Slate was always his habit of assuming that anyone who disagreed with him obviously did so out of ignorance, and indeed that appears to be his response to my post on the subject of quantum game theory and information. As a matter of fact, I do understand a bit (just a bit) about quantum probability, and I understand a bit more after mugging up on the relevant chapter of David Williams’s excellent book on probability. Landsburg’s point appears to be that since no information is exchanged, there is no communication, but this won’t do. “Information” in the physical sense is not exchanged, but “quantum information” (not the same thing, but neither something completely different) is, and that is enough to turn it into a communication game. Let me elucidate with yet another variation on the cats/dogs game.
Assume that me and John Quiggin are playing the cats/dogs co-operation game for money (for details of this game, see here). Assume further that Quiggin is thoroughly bored with this game and has decided to stop playing it properly. Instead of adopting any strategy at all, he just flips a coin to decide whether he’s going to answer “yes” or “no”.
However, although he can’t be bothered playing properly himself, JQ realises that the game matters more to me, so he has decided to help me out. We assume that, since getting his Federation Fellowship, John employs a butler, called Lurch. Every time the game is played, Lurch observes whether John was asked “do you like cats?” or “do you like dogs?”, and whether he answered (at random) “yes” or “no”. Armed with this information, Lurch, who is a kindly soul, walks round to my office, observes which question I have been asked, and tells me what to answer. He gives me the answer I need to win the money in all cases, unless JQ and I have both been asked “do you like dogs” and JQ has answered “no”. In this case, there is no win possible for me, so Lurch just tells me to answer “no”.
It can be seen that with this arrangement, JQ and I will win seven times out of eight (Lurch can always give me the right answer except in the one case where we are both asked about dogs (25% chance) and JQ answers “no” (50% chance)). But what information is being transferred to me?
Well, let’s assume that I have just been asked about cats. Lurch comes in and says “Say YES!”. I know that either JQ has been asked about cats and said “NO” or he has been asked about dogs and said “YES”. But I don’t know which of these situations has obtained. So, I am in no better a position than I was before to guess either which question JQ was asked, or what reply he made.
Similarly, if Lurch comes in and says “Say NO!”, I know either that JQ has been asked about cats and said “YES” or asked about dogs and said “NO” (in this second situation I’m about to lose, but there you go). I still don’t know which of these situations has obtained.
So, Lurch never gives me any information which tells me either which question was asked in JQ’s room, or what answer he gave. We can see this more clearly by generalising; if JQ was using a biased coin to decide on his answers and if the questioner was using a biased coin to decide what question to ask him, I would never be able to figure out the bias of either coin from the information Lurch gave me, no matter how many times we played the game; I wouldn’t know whether a preponderance of “Say YES” instructions was the result of JQ saying “NO” more often, or JQ being asked about dogs more often.
No information about the state of affairs in the other room is being transmitted here, so by the criterion Landsburg is using, I think he would have to call this a “non-communication game”. To which all I can say is that it is a pretty funny “non-communication game” where one party is sending his butler round to tell the other party which strategy to choose.
Furthermore, having looked at the Lurch game, we can begin to understand what is actually being transmitted in cases of quantum communication. Although I would never learn anything about the biases of the two coins in the other room, I would, over time, learn quite a lot about the correlation between the two coins. In this sense, information is being transmitted by Lurch between the two rooms; not in any particular case or about the average state of our rooms, but over a long run and about the relationship between them.
And this is exactly what happens in cases of quantum entanglement. Two entangled particles have their states commixed until you observe one of them, at which point you determine the state of them both. Because superposition isn’t the same as probabilistic mixture, this means that you can’t tell what the state of one is likely to be by conditioning (classically) on the state of the other. However, because their states were entangled, you can know that the measurement made of an observable on one particle will be correlated in a predictable way with the measurement made of the same observable on the other. And this correlation is a “proper” classical correlation; its frequentist interpretation makes use of the same Strong Law of Large Numbers that one uses when drawing balls out of a Polya urn, and its Bayesian interpretation makes use of the same Bayes’ Theorem that is used in WinBUGS. As Williams points out in his excellent book, if it was literally the case that you couldn’t tell anything at all about one particle by observing the other, we’d never have been able to verify propositions about quantum entanglement empirically, and we can (using normal classical statistics), so the implication that using entangled quanta isn’t cheating in a co-ordination game is clearly far too strong a claim. Like the original quanta or Landsburg’s sunglasses, Lurch is a magical device which allows me to choose a correlation between my observation and my partner’s which is different from 0. That’s cheating in what’s meant to be a noncommunication game.
Which is why I maintain my original proposition; that quantum game theory is interesting for what it tells us about quantum probability, but that its implications for the fundamentals of game theory aren’t that great. The rigorous definitions of non-communication games used by the game theory nargs already rely on a definition of communication which rules out mobile phones, morse code, comedy butlers and anything else which allows one player to make his choice of strategy conditional on the other’s, so the only economically interesting implication of quantum game theory would be that we might want to start modelling quantum-probabilistic mixed strategies more often if we thought that particle-entangling devices were going to become as common and as portable as, say, mobile phones. Which, at present, they ain’t.
Oh goody, I’ve been waiting for Pile On Stephen Landsburg Week. That column of his in Slate has been winding me up for years.
As my contribution, check out this guest contribution to Marginal Revolution, where half-understood physics meets half-understood economics, with predictable results.
The guts of the post are as follows:

Let’s play a coordination game: You and I are each asked a single question, either “Do you like cats?” or “Do you like dogs?”. Our questions are determined by independent coin flips. We both win if our answers differ, unless we’re both asked about dogs, in which case we both win if our answers match.

Here’s a pretty good strategy we could agree on in advance: We’ll contrive to always differ. Whatever we’re asked, I’ll say yes and you say no. That way we win 3/4 of the time.
Can we do any better? No, if we live in a world governed by classical physics. Yes, if we live in the world we actually inhabit - the world of quantum mechanics.
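The 3/4 figure in the quoted strategy checks out, by the way; here’s the four-case enumeration as a throwaway sketch of mine:

    from itertools import product

    # The four question pairs are equally likely; under the quoted strategy
    # I always answer "yes" and my partner always answers "no".
    wins = 0
    for my_question, your_question in product(["cats", "dogs"], repeat=2):
        if my_question == "dogs" and your_question == "dogs":
            wins += ("yes" == "no")   # both asked about dogs: must match
        else:
            wins += ("yes" != "no")   # otherwise: must differ
    print(wins / 4)  # 0.75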
I think I know a way to do better, using only classical physics.
If I take in my right hand a notepad and in my left hand a pencil (both objects of classical physics), then I can simply write down the sentence [for example] “They asked me about dogs and I said yes”. Then I take it round to Landsburg’s office and show it to him before he gives his answer.
I estimate that I could get roughly 90% wins with this method (it would be 100%, but I’m allowing for the fact that 1 time in 10, Landsburg will fuck it up on purpose and write an article about how “counterintuitive” he’s being). This is better than chance, and also better than the 85% win rate that you can apparently get with some sort of funky quantum nonlocality or other.
The point is quite simple. A non-communication game is a non-communication game, and a communication game is a communication game. It doesn’t matter (for the purposes of economics) whether the communication takes the form of face-to-face contact, Bell’s effect or carrier pigeons. What does matter is whether the communication is credible or not; whether it is “cheap talk” whereby it doesn’t cost me anything to give a false signal, or whether there is an incentive condition which means that Landsburg could rely on my not intentionally misrepresenting things to gain some advantage, and whether or not the signal is observed perfectly or with noise. Landsburg is simply wrong to say that “game theory changes dramatically when players have access to quantum technology”, because no important points of economics turn on the precise mechanism of communication.
(In fairness to the authors of the paper Landsburg cites, they aren’t responsible for the use he made of their work; the “games” in their paper have no economic significance and are just being used as convenient ways to describe bounds on the information that can be communicated through a quantum channel. This way of talking about information transfer has been a standard in the engineering literature ever since the original Kelly Betting paper.)
Anyone who was thinking for even a minute would have realised that this paper had nothing to do with game theory, and that the “co-ordination game” example was a clear cheat. So why didn’t Landsburg? I can only assume that it’s part of a more general phenomenon; the tendency of economists (and social scientists in general) to drop all their critical phenomena and fall into a swoon when they see the manly form of a physicist. Phil Mirowski makes a decent case that modern economics was conceived in this kind of physics-envy, so it’s not exactly surprising, but that doesn’t make it any more correct. Here are a few things that social scientists should always remember:
1) Despite what they tell you, physicists are human beings. They eat, fart and have sex more or less like the rest of us. There is no secret caste of physicists set apart from humanity; the difference between them and us is that they did a physics degree.
2) There is, to a first approximation, a continuum of physicists on a quality scale going from “genius” to “halfwit”. They come in good and bad varieties, and the bad ones are really quite bad. Furthermore, it is not difficult to tell the difference if you are prepared to learn a little mathematics and apply yourself. Cosma Shalizi tells the tale of statistical physicists who identify power law distributions by considering the R^2 value from a linear regression, and get their papers published (see the sketch just after this list). This on its own ought to diminish one’s respect for physicists to healthy levels.
3) That subset of physicists who regularly publish papers on social sciences is unlikely to be a sample from the cream of the profession. To the best of my knowledge, Stephen Hawking has kept his opinions on the stock market to himself, as did Richard Feynman. There are some very good papers in the econophysics literature (if you really want to know about quantum mechanics applied to game theory, here’s an introduction for you, but it doesn’t make the claim to have overturned the fundamentals of the field, or that quantum effects make a difference to classical games. NB also that Landsburg’s paper doesn’t cite Eisert et al’s paper on quantum game theory, illustrating my point that it’s not about economics), but they are massively outnumbered by pieces of work written by second-rate physicists who have decided to set out an ill-informed brain dump on some aspect of the social sciences because they erroneously believe them to be “easier” than doing physics. Have a look on the arxiv if you don’t believe me.
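The sketch promised in point 2: data which is actually lognormal, “identified” as a power law by running OLS on a log-log histogram and admiring the R^2. This assumes numpy is available; the seed and bin choices are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)  # not a power law

    # Histogram the data in logarithmically spaced bins
    counts, edges = np.histogram(x, bins=np.logspace(0, 3, 30))
    mids = np.sqrt(edges[:-1] * edges[1:])
    keep = counts > 0
    log_x, log_y = np.log(mids[keep]), np.log(counts[keep])

    # Fit a straight line in log-log space and compute its R^2
    slope, intercept = np.polyfit(log_x, log_y, 1)
    residuals = log_y - (slope * log_x + intercept)
    r_squared = 1 - residuals.var() / log_y.var()
    print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")
    # R^2 comes out well above 0.9 despite there being no power law
    # here at all.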
I’ve never been a great fan of Steven Landsburg’s ‘Everyday Economics’ columns in Slate1. While he occasionally has something interesting to say, a lot of his columns are what Orwell called ‘silly-clever’, such as this piece defending looting. Economists are often prone to this kind of thing, and it doesn’t do the profession any good in my view, but it’s usually not worth refuting.
Landsburg’s latest piece is in a different category. It’s a repetition of dishonest rightwing talking points about taxation that have been refuted over and over, but apparently need to be refuted yet again. As is his wont, Landsburg seeks to defend a paradoxical claim, namely, that “Bush’s Tax Cuts Are Unfair …To the rich.” He makes a total hash of it.
His initial presentation of the argument will be familiar to anyone who’s followed the tax debate in the US. He says that

a couple with a taxable income of $60,000 a year have had almost a 24 percent income tax cut since President Bush took office. (And ditto if your income was just $20,000.) Meanwhile, the folks who make $350,000 a year got a cut of only about 12.5 percent; those who make $1 million a year got an even smaller cut.

As Landsburg acknowledges almost immediately, these numbers are meaningless. Although the United States has a tax called the “income tax”, it’s not the only tax on incomes, or even, for many people, the most important one, nor does it apply to all kinds of income. For those on low and moderate incomes, the most important income taxes are the payroll taxes that finance Social Security and Medicare. For the rich, it’s necessary to look at capital gains, dividends and inheritance. When Landsburg looks at all taxes on all incomes, he finds that the rich get about the same tax cut as the median taxpayer.
Even though this analysis refutes Landsburg’s stated conclusion, it’s still hopelessly biased. To begin with, there’s a presentational trick he doesn’t explain, but which can be illustrated by an example. Suppose that, in an initially progressive system, high income people pay 40 per cent of their income in taxes and low income people pay 10 per cent. Then suppose the rates are cut to 30 per cent and 5 per cent respectively. Clearly, the high income earners have gained twice as much, relative to pretax income, as the low-income earners. But according to Landsburg, the low-income earners have done twice as well, receiving a 50 per cent cut in their tax paid, compared to the high income earners who have only had a 25 per cent cut. There’s probably no point in arguing at length over the “right” way to present data, but Landsburg ought to make it clear what he’s doing.
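The same toy numbers in code, for anyone who wants to fiddle with them (nothing here is from Landsburg’s column; it’s just my example restated):

    # Two framings of the same toy tax cut: top rate 40% -> 30%,
    # bottom rate 10% -> 5%.
    for group, before, after in [("high income", 0.40, 0.30),
                                 ("low income", 0.10, 0.05)]:
        gain_vs_pretax = before - after              # gain relative to pretax income
        cut_in_tax_paid = (before - after) / before  # the framing Landsburg uses
        print(f"{group}: gain {gain_vs_pretax:.0%} of pretax income, "
              f"{cut_in_tax_paid:.0%} cut in tax paid")
    # high income: gain 10% of pretax income, 25% cut in tax paid
    # low income: gain 5% of pretax income, 50% cut in tax paid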
The more fundamental problem is that there is no reason to look only at taxes on income. Taxes on income are the only progressive element in the tax system, being offset by regressive taxes like sales tax. Overall, the burden of taxation, relative to taxable income, is not far from being proportional2. When account is taken of the opportunities for tax avoidance and evasion, disproportionately available to those in the top quintile, it’s doubtful that those on high incomes pay any more of their income in taxes than the population as a whole.
What this means is that it doesn’t really matter whether you look at the Bush tax cuts relative to pretax income, posttax income or total tax paid. However you cut it, the top 20 per cent of income earners got the biggest benefits, and within that group, the top 2 per cent got more than everybody else.
1 It doesn’t help that Slate’s previous economics columnist was Paul Krugman, a very hard act to follow.
2 Reader Jack Strocchi has pointed me to this study (PDF) showing that the US tax system as a whole was very weakly progressive before the Bush tax cuts and is likely to be proportional or slightly regressive once they are fully implemented. I had a go at the same topic for Australia here. Kevin Drum has a relevant graph.
Sinclair Broadcasting, the company which is forcing its stations to run an anti-Kerry film this week, fired one of its bureau chiefs for speaking out against its decision. As Joe Gandelman at the Moderate Voice points out, this rather strongly undermines the premise that the film is a news event.
In the event of a Kerry victory, Sinclair is painting a big bulls-eye on themselves that says “FCC, screw here”. But even if Bush wins, they’ll be too radioactive for the new Bush administration to help. Kerry supporters are furious at Sinclair’s unprecedented descent into blatant electioneering on public airwaves. They’ll continue the boycott of their advertisers, challenge the FCC licences, and use every means possible to punish Sinclair. Either way, Sinclair is not going to get the deregulation they need to stay viable.
In the words of Lehman Brothers, airing the film “has no upside and only multi-dimensional downside”. And it’s not as if this company was in a strong position to pull a stunt with their shareholders’ money. From their most recent 10-K:
We have lost money in two of the last five years, and may continue to incur losses in the future, which may impair our ability to pay our debt obligations.

We reported earnings in 2003, but we have suffered net losses in two of the last five years. In 1999 and 2000, we reported earnings, but this was largely due to a gain on the sale of our radio stations. Our losses are due to a variety of cash and non-cash expenses which may or may not recur. Our net losses may therefore continue indefinitely and as a result, we may not have sufficient funds to operate our business…
We have a high level of debt (totaling $1,732.5 million) compared to the book value of shareholders equity of $229.0 million…
We use a significant portion of our cash flow to pay principal and interest on our outstanding debt and to pay dividends on preferred stock, limiting the amount available for working capital, capital expenditures and other general corporate purposes…
If we default on our obligations, creditors could require immediate payment of the obligations or foreclose on collateral. If this happened, we could be forced to sell assets or take other action that would reduce significantly the value of our securities and we may not have sufficient assets or funds to pay our debt obligations.
If my mutual funds were investing in this company, I’d be pretty angry. Just saying.
I’m trying to build up a small archive of articles that explain important things about financial markets in clear language to an educated liberal audience. This article in the Guardian by Edmond Warner is worth ten minutes of your time.
Speaking of this, there was another passage from Howard Kurtz’s Media Notes column that caught my eye as a former market researcher. (I actually asked a question about this during the Media Notes Q&A session, but it wasn’t selected.)
Luntz, who is under contract to MSNBC, had already spent $30,000 on recruits for several focus groups…
I worked in market research from 1997 to 2001. By some measures, it wasn’t very long, but it was long enough to get an idea of the costs involved in conducting a market research project.
If I understand correctly, Luntz had 18 people in his group. That’s over $1600 in recruiting costs for each respondent that showed up.
Caveats first:
1. It might be especially hard to recruit for a televised focus group. Maybe people who want to be on TV are more rare than I suspect, or the screening requirements are unusually strict.
2. Recruiting costs must have gone up since 2001.
3. Publicizing your costs is a tactical decision. Maybe MSNBC just pays any bill Luntz puts in front of them, and Luntz pads his costs. There’s nothing terribly wrong with that. Maybe he tells everyone else they’re getting a special deal. Who knows.
4. This is, by any reasonable standard, no big deal.
5. I can’t prove any of this. No one posts their recruiting costs online (see point 3.)
6. I’m probably forgetting about Poland.
Having said all that, I’m still convinced that spending $30,000 to recruit a reasonably balanced panel of 18 voters to be on TV is extremely high. Recruiting a panel of voters with a spectrum of political beliefs should have cost, very roughly, a few hundred dollars each. It’s done by low-wage phone bank workers, and it only approaches those kinds of costs if you’re looking for extremely rare people or people who put an extremely high value on their time. If we were talking about physicians in Manhattan, and a recruitment firm wanted $1000 each to recruit them, I’d think that that was on the high side.
If Luntz is lumping in his own fees, I could see it. Luntz probably double- or triple-recruited, so that he could choose a telegenic mix at the last minute. But it’s hard for me to see how Kurtz’s reported figure - $30,000 in recruiting costs - could be accurate.
Via the Marginal Revolution lads, here’s a working paper by Charles Manski, an economist at Northwestern who’s interested in a question that we’ve often returned to at CT: what are the prices in markets like the Iowa Electronic Markets (***MARKET UPDATE*** Kerry still “dying on arse”) actually measuring? Can we really take a market price of 0.70 and unproblematically read off it that “the market thinks there is a 70% chance”?
The central argument of the Manski paper is that, for reasonable-looking assumptions, the market price is (skipping over some tough-looking calculations, which is what you have to do when Acrobat refuses to render math fonts) the average belief of the trading community, weighted by the amount of money each trader is prepared to commit to the market. In Manski’s model, this weighted average is not a meaningful measure of the central tendency of the distribution of beliefs, other than that the true average belief has to fall somewhere within fairly wide bounds of the market price, these bounds being determined by the distribution of the size of trading accounts.
This is a pretty attractive result; it gives some substance to James Surowiecki’s gut feel that preference aggregation mechanisms other than markets might be able to outperform markets, because they don’t have this confounding factor of the size of trading accounts. It also weakens the “arbitrage-based” arguments in favour of markets over other forms of economic organisation, because the distortions introduced by this factor can’t be arbitraged away unless you have information about the distribution of account size which one wouldn’t normally have.
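To see the mechanism, here’s my own toy version of the setup (my parametrisation and distributional choices, not Manski’s notation): every trader commits their whole account, buying if their belief is above the price and selling below it, and the clearing price works out as the share of total money held by traders whose beliefs exceed it.

    import numpy as np

    rng = np.random.default_rng(1)
    beliefs = rng.beta(5, 5, size=1000)         # beliefs centred near 0.5
    accounts = rng.uniform(10, 500, size=1000)  # account sizes

    def clearing_price(beliefs, accounts):
        # Bisect on the clearing condition: the share of money held by
        # traders whose belief exceeds the price must equal the price.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            money_above = accounts[beliefs > mid].sum() / accounts.sum()
            if money_above > mid:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(f"independent accounts: {clearing_price(beliefs, accounts):.3f}")
    # Now let account size rise with bullishness: the price is pulled away
    # from the mean belief (0.5) even though no one's belief has changed.
    print(f"bull-weighted accounts: {clearing_price(beliefs, 10 + 500 * beliefs):.3f}")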
However, I think that the actual state of the world is somewhat better for the markets crowd than Manski’s paper suggests. As I see it, the engine of the paper is that it has two big assumptions, both of which I regard as unrealistic:
1) traders in the market are assumed to be price-takers
2) the distribution of account sizes is assumed to be independent of the distribution of beliefs.
I think that the first assumption is unreasonable as a characterisation of markets in general; it’s a sort of Arrow-Debreu world in which prices are arrived at by a process of tatonnement or sealed envelope auction, and then set “all at once” at a price that equates total supply to total demand. I personally think that this is sociologically an unrealistic characterisation of all market processes everywhere1, a particularly bad way of describing securities markets in general2 and, importantly, factually wrong as a characterisation of the IEM.
The Iowa Electronic Markets operate as an open limit order book; people place orders into the market indicating the price and size that they’re prepared to buy or sell. Tradesports is more or less the same. In both of these markets, if you see the current price quoted at 0.58-0.60, you simply cannot place a Buy order at 0.60 that is bigger than the current Ask size3 at that price. If you want to put more money on, you have to enter a Bid of your own at 0.60 and hope that the 0.58 bid gets taken out. For anyone not familiar with these markets, to whom the preceding will have meant precisely nothing, just take away the view that as you enter Buy orders, you eat up the population of sellers at a price, so if you are putting on a big position like the program trade I dumped on Kerry as a practical joke4, you end up getting the quoted price only for your first few units and paying worse prices for the rest of your trade.
And when you think about it, this stylised fact means that the size of your endowment5 or trading account can’t be independent of your beliefs. Assumption 2) was always unrealistic (it’s a feature of Manski’s model that there are no such things as weakly or strongly held beliefs; you just decide on your own fair value and buy units below that price), but in a world of limited liquidity, it’s simply unsustainable.
Consider, for example, a trader with a very large endowment6 of $500 in his trading account, facing the following order book:

Price : Size
50 : 100
51 : 50
52 : 200
53 : 100
54 : 50
I’ve only given one half of the order book, because I’m assuming our hypothetical trader (call him “Fatty”) is a buyer. But how strong a buyer is he?
If Fatty thinks that the fair value of the contract is 55 or higher, he will spend his entire $500 endowment in Manski’s model. But if he only thinks the contract is worth 52, then the fact that his account is worth $500 is irrelevant; he’s only going to invest $100 @50 and $50 @51, so his account might as well only have been a $150 one. So I don’t think it makes sense to make the strong assumption that the size of trading accounts is independent of beliefs about fair price; people with extreme beliefs are going to be able to deploy big accounts, but people with beliefs near the market aren’t. I’m pretty sure, looking at the model, that the wide error bands Manski derives are the result of big accounts having a big effect on the demand for contracts, and since the effect I’ve identified would tend to attenuate it, I’m guessing that things are probably a little better than his paper suggests. But the main point - that one cannot read off IEM prices as straightforward probabilities, in the way that the IEM website appears to be claiming you can7 - is well made and well taken.
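Here’s the Fatty arithmetic as a quick sketch (sizes in dollars, as in the book above):

    # Offers from the order book above, as (price, dollars available).
    book = [(50, 100), (51, 50), (52, 200), (53, 100), (54, 50)]

    def deployable(fair_value, account):
        """How much of the account can usefully be spent, buying only
        at prices below the trader's own fair value."""
        spent = 0
        for price, dollars in book:
            if price >= fair_value:
                break  # no point lifting offers at or above fair value
            spent += min(dollars, account - spent)
            if spent == account:
                break
        return spent

    print(deployable(fair_value=55, account=500))  # 500: the whole account
    print(deployable(fair_value=52, account=500))  # 150: a $150 account in effect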
Footnotes:
1Historically, it was this assumption which marked the big split between the Austrian economists and what was to become the neoclassical school. For an Austrian, abstracting from the actual things people do in a market to start talking about a frictionless abstraction of a market is about the dumbest thing you can do. This is related to the point of Hayek scholarship I keep making; that markets are social institutions that (in my opinion, others differ) can’t be conjured out of thin air.
2This view is now mainstream in economics; the field of “market microstructure” deals with the way in which the price discovery process works itself out in securities markets.
3Or possibly Bid. I have a kind of dyslexia with respect to these two, and repeated checking doesn’t seem to help. Give us a shout in comments if I got it wrong.
4No I didn’t, you conspiratorial bastards.
5Sniggering when professors of economics use the word “endowment” is considered bad form.
6I said shut up!
7The IEM trading accounts size limit of $500 probably helps them a bit too.
Daniel Akst contributes yet another in a seemingly endless series of articles reminding American workers that they should “stop whining”, since they are far better off than were their forebears during the Great Depression.
What is striking about this genre is that the choice of the Depression is not an accident. You have to go back that far to get a comparison that gives a clear-cut, unqualified and substantial improvement in the pay and conditions of US workers across the board. Real hourly wages for men with high school education are now around the levels prevailing in the 1950s1. Since it’s difficult to make comparisons with the war decade of the 1940s, it’s necessary to go back to the 1930s to get a clear-cut improvement.
Correction and apology: I got so annoyed by the appearance of the Depression comparison that I failed to read the entire article properly. Akst ends by pointing out

It is noteworthy that in news media coverage of job stress, the emphasis is usually on educated middle-class professionals who, in fact, have many choices - including a lower-pressure job or simply working less. All this hand-wringing over the suffering of the relatively fortunate only distracts us from the plight of Americans whose work lives are really stressful: those who are paid $7 or $8 an hour, don’t have health insurance and lack the skills or education to better their lot. Life for these workers is a tightrope act without a net, so the least that we lucky ones can do is stop whining. Better yet, we can honor their labor by adopting social policies, like national health insurance, a higher minimum wage and tougher limits on unskilled immigration, that will ease their struggle. It will cost us something, of course. But for the working poor, yoga won’t cut it.

which makes a lot of the points I would have wanted to make. I withdraw my criticism of Akst and apologise for misreading him. Thanks to commenter Steve Carr for pointing this out. (As there has been plenty of discussion, I’ll leave the rest of the post unchanged for the record.) End correction.
Of course, given enough space, there are all sorts of arguments that can be made to say that a comparison of real hourly wages for men is not the right one. Benefits such as health insurance have generally risen by more than wages. Women’s wages have risen a lot. The consumer price index doesn’t give any weight to increased product variety and arguably doesn’t make a big enough adjustment for quality. And so on.
But then, it’s easy to turn a lot of these things around. Low-wage workers are increasingly being excluded from employer-provided health benefits. There are plenty of services where quality has declined. Unions are weaker and employers more ruthless in firing unwanted workers than in the 1950s.
On balance, the decline in measured real wages for high-school educated workers probably overstates the welfare loss incurred by such workers. Hence, they are a bit better off now than similarly qualified workers in the 1950s. But the difference is far from clear-cut, which explains the continual resort to comparisons with the Depression.
But to conclude in the spirit offered by Akst, we should stop whining. If the trends of the past three decades resume, after the brief interruption of the late 1990s, we’ll soon be reading about how much worse things were during the Industrial Revolution, or the Dark Ages, or the Paleolithic Era.
1 I couldn’t find a good series on hourly wages going back to the 1950s. But the Federal minimum wage is the relevant standard for millions of workers. It’s currently at the same real value as in the mid-1950s, and falling.
Jon’s post on Big-time college sports draws on work by Robert Frank, who treats high performance in college sports as a positional good.
By an interesting coincidence, Frank gave a seminar here in Brisbane on Friday and stayed for a very interesting chat afterwards. He argued that the growth in inequality in the US has been positively harmful to the middle class, even though their income has been roughly stationary since the 1970s.
One argument is that expenditure on positional goods by the top quintile has negative externalities, such as the need to buy larger cars to protect yourself against SUVs and the need to buy more expensive clothes to appear decently-dressed in various contexts, such as job interviews. Another has to do with informational cascades creating higher aspirations for consumption. The one-line message is “relative income matters”. Frank sees this as a big factor in declining savings rates, increasing household indebtedness and (I would infer) growth in the current account deficit.
Although I agree with a lot of this, I think another version of relative income is more significant in the case of the US. Incomes rose rapidly and across the board in the postwar period, and this followed a lot of equalisation during WWII and the New Deal. By contrast, since 1970, they’ve been flat for middle income households and have actually declined for the bottom 20 per cent of households1. This means that when people compare their current consumption to their own past peak levels, or to their parents’, they may well find that it has declined.
This is even more likely to be true at a disaggregated level. Lots of new goods like computers have come on the market since 1970. While more variety is welfare-improving, if budgets are fixed, expenditure on new items must be financed by a reduction in consumption of old items. If people aspire to avoid such reductions, they must increase consumption expenditure.
As Frank observed in discussion, there is nothing in this that would have seemed surprising to economists in the 1950s and 1960s when Duesenberry’s work on previous-peak consumption models was influential. But this model has been almost entirely displaced by Friedman’s permanent income model and its successor the life-cycle model, even though the empirical performance of these models is not as good as Duesenberry’s. Clearly there are strong ideological/methodological preferences at work here.
1 Gregg Easterbrook ran hard with the idea that the bottom 20 per cent is made up largely of recent immigrants, so that native-born households have had rising incomes. But as I recall this idea didn’t stand up to scrutiny.
Having been distracted by wonkish obsessions like current account deficits, fiscal bankruptcy and the situation in Iraq, Indonesia and other unimportant countries1, I haven’t been able to keep up with the US election campaign as closely as I would like. But, following a quick tour of the press and the blogosphere, I’ve come up with the following shorter2 (© D^2) version for others who may be in a similar position.
The crucial issue is to determine which candidate has the better record on Vietnam, and will therefore make the better president. As I understand it:
That seems to be all I need to know3. Have I missed anything important?
1 Such as Australia, which is also holding an election.
2 Thanks to commenter Luis over at my blog for tech support on the copyright symbol. Now if I could just do a copyleft symbol! DD points out that it’s been released to the public domain, but I still like to acknowledge him.
3 Or would be, if I had a vote in the election that will actually determine Australian policy on most issues, rather than our local exercise in democracy.
Since I don’t often agree with Paul Wolfowitz, it’s worth mentioning it when I do, particularly when he comments on an issue close to home. His opinion piece in today’s NYT denounces the bringing of criminal defamation charges against the editor of leading Indonesian magazine Tempo for a piece criticising a powerful businessman1.
Here’s a story in the Australian which makes it clear that the businessman in question is of the class who would be described, here in Australia, as colourful.
Most Australians were disappointed when the Indonesian Supreme Court overturned the conviction of one of the Bali bombers on the grounds that the retrospective laws under which he was charged were invalid. (Hopefully, he can be retried and convicted under ordinary criminal law.) But cases like that of Tempo remind us that the rule of law is an invaluable social good and should not be tampered with even for worthy ends. I hope, as indicated in the Oz report, that the courts will throw this case in the dustbin where it belongs.
It’s also worth emphasising that, despite incidents like this and the Jakarta bombing, the general trend of events in Indonesia is remarkably favorable. Elections are going smoothly, the intercommunal religious strife that threatened to destroy the country a couple of years ago has pretty much ceased, and the military is being eased out of politics (the likely winner of the presidential election is a former general, but the real military candidate, Wiranto, ran a poor third in the first round of voting).
1 Australian readers will remember similar charges being brought against Communist writer Frank Hardy for his thinly-disguised portrayal of John Wren in Power without Glory. The book was certainly defamatory and many of the claims in it undoubtedly false, but the prosecution failed and rightly so. This should have been a civil action.
Without much fanfare, the US recorded its largest ever current account deficit in the June quarter, $166 billion. The NYT gave the story a fairly prominent run in the business pages, but the Washington Post ignored it altogether as far as I could see, and CBS Market Watch buried it in small print1.
At about 5.7 per cent of GDP, this CAD result is at the level at which economists (at least those who are given to worry about unsustainable current account deficits) start getting worried. But, as I’ve pointed out previously, even to stabilise the CAD at this level would require a fairly rapid reduction in the deficit on goods and services. Otherwise compound interest will come into play and the CAD will grow unsustainably. This pattern is already evident in the most recent figures, with a sharp decline in the balance on the income account, due to higher payments to foreign owners of capital2.
I’ve argued previously that, even with a depreciation of the US dollar, an autonomous smooth adjustment to trade balance is unlikely. The default adjustment path in circumstances of this kind is a currency crisis and recession. There’s also an alternative available to the US as the issuer of a reserve currency. A gradual return to, say, 10 per cent inflation would give the US the chance to wipe out most of its existing debts (at this rate, real values are halved every seven years) and start again with a clean slate. But sweating out 10 per cent inflation is a long and painful process.
So what’s the alternative? There’s one policy intervention that would, I think, have a pretty good chance of restoring balance. Introduce a tax on gasoline and petroleum consumption, and announce that it will rise gradually over time, say from now until 2010, to a level of $2 a gallon on gasoline or perhaps $30 a barrel on oil, domestic and imported (there’s a case, based on costs of road use, for taxing gasoline more heavily than other end uses).
In the long run, demand for oil is reasonably price elastic, and if users knew they faced steadily rising prices, it’s reasonable to expect that a doubling of prices would reduce demand by up to 50 per cent. In terms of motor vehicles, this would bring the US into line with Australia, a highly car-dependent country but one where the price (about $A1/litre) is already about $US3/gallon. That would bring US demand roughly into line with domestic production, and knock about $150 billion a year off the trade deficit. Of course, there are second round general equilibrium effects to be considered, but it’s quite a big potential impact.
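The arithmetic behind “up to 50 per cent”, on the assumption (mine, for illustration) of a constant-elasticity demand curve:

    # Quantity response to a price doubling under constant elasticity:
    # Q1/Q0 = (P1/P0) ** elasticity
    for elasticity in (-0.5, -0.8, -1.0):
        remaining = 2 ** elasticity
        print(f"elasticity {elasticity}: demand falls to {remaining:.0%}")
    # Getting the full 50 per cent reduction requires a long-run
    # elasticity of about -1.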
There are lots of other benefits. The revenue would help reduce the other deficit, that of the Federal government. The price of oil on the world market would be driven down, which would reduce the income flowing to all sorts of people who can be counted on to use it badly. And there would be some big benefits for the environment.
On the negative side, I can see one minor objection, and one major objection. The minor objection is that the incidence of the tax would be regressive, and offsetting changes in income and social security taxes would be needed.
The major objection is that the whole idea is utterly, totally politically unthinkable.
1 However, the story contained the great quote “Get out while there is still an ample supply of fools” from Peter Schiff, president of Euro Pacific Capital, who is urging clients to get out of the dollar as fast as possible.
2 Although the official figures show the US as a large net debtor, the income account is roughly in balance. Partly, I suspect, the value of US assets overseas, accumulated over a century or so, is understated. Partly, a lot of US obligations take the form of short-term debt at unsustainably low interest rates.
One interesting recent strand of research on justice and human well-being has been that inspired by Amartya Sen’s “capability” approach. There’s now an association dedicated to this, with Sen as its first President and Martha Nussbaum as President-elect. Details here.
Today’s NYT runs an archetypal David Brooks piece. The obligatory lame conceit is that the elite is divided into spreadsheet people (notably accountants) who vote Republican and paragraph people (notably academics) who vote Democrat.
Unusually though, Brooks seems to have some actual numbers to back his story, and they give pause for thought. The most striking is that:

Back in the early 1990’s, accountants gave mostly to Democrats, but now they give twice as much to the party of Lincoln.

If this is true, considering the state of US national finances under Bush, it speaks volumes about what has happened to the accounting profession in the last decade. Do the accountants supporting Bush really believe that he has a plan to cut the deficit in half or do they just think that accounts should show whatever the client wants them to show? I guess we learned the answer to that with Enron, but it’s useful to know that nothing has changed.
The Cheney-eBay controversy is a welcome break from all the terrible things happening just at the moment (like most moments, I guess) and gives me a chance to reprise my favorite economic aphorism.
Gross Domestic Product is a lousy measure of how well a country is doing, because it’s Gross, Domestic and a Product.
As I explained a while back:

It’s Gross because depreciation is not subtracted. If we are concerned with measuring economic welfare, even from a narrowly materialist viewpoint, the net measure is relevant and the gross measure is not.

Applying this to eBay, we can see that the value of second-hand goods sold on eBay shouldn’t count in GDP, whether they’ve depreciated (the usual case) or appreciated (antiques and so on). On the other hand, the retail services supplied by eBay should count and do. If, as Cheney asserts, people are running businesses selling stuff on eBay, then they are (self)employed and their earnings are part of GDP2. The time spent by household members shopping (including returning goods they don’t want) is not part of GDP1. Garage sales and their on-line equivalents are more like returning unwanted goods than like retail business.

It’s Domestic because it measures the amount produced in the country, including that which accrues to foreign owners of capital and is paid out as interest or dividends. National Product, which is the output accruing to a country’s land, capital and labour, is more relevant.
Finally, it’s Product, that is, a measure of marketed output that takes no account of inputs. If we increase our product by working harder or longer hours (in the market), or by consuming more natural resources, we are not necessarily better off. What matters in the end is productivity, not product.
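To keep the first two adjustments straight, here’s a toy set of accounts with made-up numbers (the “Product” point is about inputs and productivity, so it doesn’t reduce to a subtraction):

    # Made-up numbers, purely to show which adjustment is which.
    gdp = 100.0
    depreciation = 12.0        # the "Gross" problem: wear and tear isn't deducted
    net_foreign_income = -3.0  # the "Domestic" problem: income paid abroad, net

    ndp = gdp - depreciation           # Net Domestic Product
    nnp = ndp + net_foreign_income     # Net National Product
    print(ndp, nnp)  # 88.0 and 85.0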
Why then do economists pay so much attention to GDP? The answer is that it’s useful primarily as a measure of economic activity, for short-run macroeconomic management. If GDP is declining, this is a good indication that the economy is in recession and that macro policy needs to be more stimulative. Taking account of things like depreciation, international income transfers, household work and work intensity would reduce the precision of estimates of short-run growth, because all these things are hard to measure, and would make GDP less useful for its primary purpose. (Of course, this is a Keynesian view - national account statistics like GDP are essentially a product of the Keynesian revolution).
1 All of this rests (as Kieran implies) on the assumption that there’s a sharp division between household and market sectors. National Income is the value of what households sell to the market. As the division between household and market blurs, national income statistics become less useful.
2 To make sense of Cheney’s claim, you have to assume that eBay sellers are illegally concealing their activities, which is quite likely. But a big part of the standard free-market case for tax cuts, and a source of supposed behavioral responses, is the idea that tax cuts will shift people out of the informal/illegal sector into the legal, taxable economy. Cheney appears to be saying that the informal sector is growing under this Administration.
Dan Drezner reports on a small tiff between Paul Samuelson and Jagdish Bhagwati over outsourcing. It contains a good line that tells you a lot about neoclassical economics:
But Mr. Bhagwati … says he doubts whether the Samuelson model applies broadly to the economy. “Paul and I disagree only on the realistic aspects of this,” he said.
In contrast, Marxists tend to agree fully on the realistic aspects of things but disagree about the unrealistic ones, such as when exactly the revolution is coming, who will be in charge, and whether people or robots will clean the toilets afterwards.
As a spin-off from Daniel’s discussion of whether the DEM04 contract is overvalued on the Iowa Electronic Markets, here’s a version of the trend surface he calculated that shows differences between the Black-Scholes valuation and the observed market price over time (you can look at it in smaller PNG format or better-quality PDF). I created it using R, the free1 statistics package because I didn’t like Excel’s default effort and I hadn’t had a reason to use R’s wireframe() function before. It’s still not up to the standards of the Bill Clevelands or Ed Tuftes of this world, but it was the best I could manage on short notice. Thanks to Daniel for sending me the data, and remember that whereas I am happy to field questions about graph colors and chart widgets, technical queries about option valuation, Black-Scholes volatility fluctuations and arbitrage should still be directed to him.
1 As in “free to make your own mistakes.”
In commenting on the game of chess, Will Baude notes the following.
Professor Leitzel of Vice Squad writes in to remind me of Zermelo’s Theorem (1913), which establishes just that: given the game’s finiteness (established above), there exists some strategy s.t. either white always wins, black always wins, or nobody does.
Not so, as it happens, although it’s been the conventional wisdom among game theorists until recently - the inestimable James Morrow (whose “Game Theory for Political Scientists” I’m using as a coursebook this semester) states it a little more formally when he says that
Zermelo showed that chess has a winning strategy: White can force a victory, Black can force a victory, or either can force a draw.
But, as discussed on my old blog last year, this very interesting paper by Ulrich Schwalbe and Paul Walker shows that Zermelo said no such thing. Zermelo proved a much narrower result, and indeed explicitly states that he hasn’t proved that chess has a winning strategy.
The question as to whether the starting position … is a winning position is open. Would it be answered exactly, chess would of course lose the character of a game at all.
It would be very interesting to trace back how this error (and a variety of others) crept into the literature. Zermelo was never translated into English before Schwalbe and Walker’s paper, so I imagine that nobody much bothered to try to read him (especially since his article was published in 1913 and was quite likely printed in Fraktur). One person’s error was presumably picked up by others, and then disseminated until it became accepted dogma in the wider literature. Academic research sometimes resembles a game of Chinese whispers - because we all rely on the research of others, serious blunders can be perpetuated for generations before someone bothers to go back and recheck the work of their elders.
Update: Peter Northup has convinced me in comments and email that I’ve misunderstood Zermelo a little myself, and that the formulation that Morrow uses isn’t as offbase as I thought it was. What’s clear is that game theorists are incorrect in saying that Zermelo used backward induction as such, and that he doesn’t show that there is a winning strategy as such. I stand corrected.
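For anyone who wants to see the theorem-as-usually-stated in action, here’s backward induction on a deliberately tiny game of my own choosing (a take-away game, not chess):

    from functools import lru_cache

    # Players alternately remove 1 or 2 counters from a pile; whoever
    # takes the last counter wins. Finite, perfect information, no chance
    # moves: the standard argument says every position has a determinate
    # value.
    @lru_cache(maxsize=None)
    def value(pile):
        """+1 if the player to move can force a win, -1 otherwise."""
        if pile == 0:
            return -1  # the previous player just took the last counter
        return max(-value(pile - take) for take in (1, 2) if take <= pile)

    print([value(n) for n in range(1, 10)])
    # [1, 1, -1, 1, 1, -1, 1, 1, -1]: the multiples of 3 are lost positions.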
Here’s a bit of bad news for my American Democrat friends; your candidate is dying on his arse in the Iowa Electronic Markets at the moment.
Here’s another bit of bad news; even at these prices, he’s still overvalued.
Note to readers. There is quite a lot of financial jargon in this post, because I’m dealing with quite a few issues that are only of interest to finance bods (and only marginally to them). The interesting stuff is toward the end.
I mean “Overvalued” in a technical sense here; given the prices of the other contracts trading on IEM, the “DEM04” contract on the winner-takes-all market should be priced significantly lower than the 0.454 which was its price as of the last trade when I was writing this.
I found this little anomaly earlier this year, when I was playing around with get rich quick schemes1 and idly wondering whether I could put together a program-trading system that would dump large numbers of trades onto the IEM all at once and make a few of my enemies shit their pants. I assumed it was due to teething troubles as the market for the ‘04 Presidential race settled down, and that things would regularise soon enough.
The issue was that I was working on some Visual Basic code to try and calculate the “implied volatility”2 of the vote-share market from the WTA market in order to compare it against the realised volatility. I’ve worked a bit with the Black-Scholes model for pricing binary options in the past, and I know that the sensitivity to volatility of this kind of option is a pretty badly-behaved function, so it didn’t surprise me that the implied volatility formula I’d ripped off from Paul Wilmott’s book didn’t seem to work. What did surprise me is that when I’d changed the code around, put in a more robust optimisation routine, corrected a few of my arithmetic errors and debugged, it still didn’t work. I was buggering around with this, off and on, for about two months, and nothing I could do could get it to work.
So I did what I ought to have done in the first place and drew a few pictures. Below, I’ve plotted a chart of volatility against price. The purple dots represent the Black-Scholes value of a binary call with 0.152 years to maturity (ie today’s date until the election) at an interest rate of 1.75% (CD rate), with a strike price of 0.50 (see footnote 3), with the underlying at 0.491 (the current quote for KERR in the vote-share market), for different assumptions about volatility. The blue dots are simply a horizontal line at 0.454, which is the current quote for DEM04 on the WTA market. “BS valuation” refers to “Black-Scholes”, you cheeky kids.
As you can see, the model value gets close to, but doesn’t reach the market value. In other words, there is no value for volatility which can be plugged into the Black-Scholes model and deliver the market price. Or to put it another way, the option is overvalued.
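If you want to check my sums, here’s the calculation in miniature, using the standard cash-or-nothing call formula and the parameter values quoted above:

    from math import erf, exp, log, sqrt

    def norm_cdf(x):
        return 0.5 * (1 + erf(x / sqrt(2)))

    def binary_call(S, K, r, T, vol):
        """Black-Scholes value of a cash-or-nothing call: exp(-rT) * N(d2)."""
        d2 = (log(S / K) + (r - 0.5 * vol ** 2) * T) / (vol * sqrt(T))
        return exp(-r * T) * norm_cdf(d2)

    S, K, r, T = 0.491, 0.50, 0.0175, 0.152
    best = max(binary_call(S, K, r, T, v / 100) for v in range(1, 301))
    print(f"highest model value at any volatility: {best:.3f}")
    # Tops out around 0.43, short of the 0.454 market price: there is no
    # implied volatility at all.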
This isn’t a transient phenomenon, either. I put together a horrendous Excel spreadsheet which takes minutes to calculate, but which produces a line like the one above for every day of data in the life of the DEM04 contract. Below, I’ve plotted them as deviations from the market price of the option on the same day. As you can see, these curves typically don’t cross the x-axis; it’s the rule rather than the exception that the DEM04 contract is structurally overpriced.
So what’s going on here? Seasoned finance pros will already be champing at the bit, ready to tell me that the Black-Scholes model is a completely incorrect model to use in this context. Well, chaps, allow me to differ. Fair enough, if I was acting as a market maker in size for DEM04, I would probably want to use a more realistic model. But I simply don’t believe that B-S would give such wildly inaccurate prices; one of the good properties of the model is that it’s surprisingly robust. And further investigation of the data suggests that whatever’s going on here, it’s unlikely that we can explain this all away as “holes in Black-Scholes”.
For one thing, this is a phenomenon which is almost entirely one of the DEM04 contract. The REP04 contract prices perfectly well in a Black-Scholes framework, for the most part. This allows us to get another handle on the extent of the overpricing of DEM04; we can solve the model for the implied volatility of the BUSH|KERR vote-share contract (which must be the same as the volatility of the KERR contract) and plug this number into the valuation formula for DEM04. Below, I’ve charted this (careful; avert your eyes if you’re a Kerry supporter).
As you can see, the yellow line (reflecting the “fair value” of DEM04; I’ve put this on the right hand scale for some reason) is way below the blue line (its market value). Don’t pay too much attention to the frightening drop-off in fair value in the last few days; the REP04 winner-takes-all contract has risen much faster in value than the BUSH vote-share contract, a phenomenon which could only be justified by a sharp drop in implied volatility4, which drastically reduces the fair value of DEM04.
In any case, don’t pay much attention to that chart at all, because it’s very badly drawn and poorly laid out. It’s clear that any attempt to work on the basis of consistency between the DEM04 and REP04 prices is doomed to failure. The two markets are simply not consistent.
Why so sure? Well, they don’t obey a basic and obvious parity relation. Think about it this way. If you were to buy one DEM04 contract plus one REP04 contract, then you would have a portfolio which would pay out $1 for certain in November. How much would you be prepared to pay for this portfolio? Well, basically one dollar, minus the interest that you could have earned by not buying the portfolio and leaving the money in the bank. So in other words, the difference between the sum of the value of (DEM04 + REP04), and 1.00, is the implied rate of interest which one earns in the financial universe of the Iowa Electronic Market.
I’ve plotted this implied money rate below:
I hope it’s clear. In general, over the last three months, the Iowa Presidential Election Winner-Takes-All market has been pricing on the basis of a negative nominal interest rate. This is not consistent with most concepts of market efficiency.
(as an aside, really astute financial econometricians will right now be screaming about “bid-ask bounce”. The idea here is that the chart above has been plotted using closing prices. However, if the last trade of the day in DEM04 was a buy and the last trade in REP04 was a sell, you shouldn’t really just add them together without making an adjustment for the bid-ask spread in the IEM. This is a fair enough point, but I checked that it doesn’t make a qualitative difference. As I sit here typing, the WTA market has bids of 0.454 for DEM04 and 0.556 for REP04. That means you could sell one of each and, in effect, borrow a buck at better than zero interest.)
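In numbers, using the bids just quoted and the 0.152 years to expiry from earlier:

    # A portfolio of one DEM04 and one REP04 pays $1 for certain in November.
    dem04_bid, rep04_bid = 0.454, 0.556
    T = 0.152  # years to expiry

    cost = dem04_bid + rep04_bid
    implied_annual_rate = (1 / cost) ** (1 / T) - 1
    print(f"cost of a sure $1: ${cost:.3f}")
    print(f"implied annual interest rate: {implied_annual_rate:.1%}")
    # The sure dollar costs more than a dollar, i.e. the implied nominal
    # rate is negative.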
So what does this all mean?
Well first, on a practical note, it seems to me that if you want to back the Democratic candidate in the 2004 Presidential elections, you should do so by selling REP04 rather than buying DEM04, and vice versa; this way you take advantage of the fact that option premium is overvalued in DEM04 relative to REP04.
Second, on a more theoretical basis, to my mind this is, unless I have made a howler somewhere, more or less conclusive proof that the IEM is not “efficient” in the financial economics sense of the term. A market in which basic parity relations are not observed is not efficient. One can make all sorts of excuses about the amounts of money at stake, the restricted universe of traders, the fact that short-sellers on IEM do not have full use of funds, etc, but the plain fact of the matter is that the prices on IEM are not consistent with the time value of money.
Third, we can pick up some clues as to what’s going on here from the prices themselves. Although the parity relation does not hold, casually going through the numbers suggests to me that, most of the time, the mid-price of DEM04 plus the mid-price of REP04 adds up to 1, near enough. So there’s a sort of parity rule being enforced by the market participants. That suggests to me that pricing here is being driven by a social convention that the prices ought to reflect people’s estimates of the probability of Kerry or Bush winning. This isn’t how stock options are priced. Financial options are priced on the basis of Black-Scholes and similar models, via an arbitrage pricing argument. Donald MacKenzie studied the sociological process by which financial markets moved from a “probability-pricing” norm to an “arbitrage pricing” norm, and this doesn’t seem to have happened in the IEM (yet; I’ve not entirely given up on my program-trading operation!)
This suggests that James Surowiecki is right and Robin Hanson is wrong on the way in which “information markets” of this kind work. (Update: In comments below, Robin Hanson argues pretty convincingly that this is an egregious caricature of his views. Sorry Robin.)

In comments to my review of James’ book, the two of them outlined the difference in their views. James fundamentally thinks of markets as a way for people to “vote” and aggregate their views, with any predictive power they have coming from the fact that they’re a kind of crowd. In particular, for James Surowiecki, markets are just a handy way of organising information aggregation; voting and other ways of summarising crowd opinion might work just as well.
Robin Hanson, on the other hand, appeared to be giving a much more particular role to markets as opposed to any other kind of social organisation. He commented that the reason why markets generate information is “the simple idea that anyone who notices a mistake in the price of a speculative market can make money by fixing that mistake.” To me, this seems like he’s committed to a view that efficiency, in the sense of obeying parity conditions, is the mechanism by which markets gather information.
This seems like a trivial distinction, but it has big practical implications. In particular, if you believe in something like Robin’s view, then you would say that the maximally informative market prices are the most recent ones (because any difference between the prices and the state of the world is most likely to have been arbitraged away). If you believe in something like James’ view, then you might say that a more informative view of the opinion of the crowd might be a moving average of, say, the last five days’ trading. Personally, I’m still something of a sceptic about whether ‘toy’ markets of this kind really deliver the goods at all, and I must say that this exercise hasn’t exactly made me a believer6. But it seems to me that if markets like this work at all, they have to work under the Wisdom of Crowds model rather than the Theory of Sharks. And since all the big financial markets operate on the basis of “sharks” and arbitrage pricing, we need to separate them in our minds from toy markets like this; markets which don’t have a Hayekian reason to exist shouldn’t draw credibility from markets which do.
Footnotes:
1I promise that this system was in profit when I stopped the experiment; I moved jobs and left all the files behind. My guess is that it would be in loss at the moment, since it seemed to be pretty structurally long Kerry.
2Basically, the WTA contracts can be considered to be options (specifically “cash-or-nothing” binary options) written on the vote-share contracts with a strike price of 0.50. Option prices depend on the anticipated volatility of the underlying, so the “implied volatility” is the volatility number that you plug into the formula to make the model price consistent with the market price.
3As discussed here, this is consistent with the contract specification. The Iowa vote-share market is for a two-horse race, and the “winner” for the purposes of the WTA contract is the winner of the Iowa VS market. The IEM actually paid out on Gore in 2000 for this reason.
4As in, loosely speaking, the Bush contract is currently “in the money”, so volatility in the underlying hurts the binary option more than it helps.5
5Readers may think that this divergence between the vote-share and WTA contracts is unlikely to reflect a genuine decrease in uncertainty about the vote-share and may represent an investment opportunity. Since CT isn’t in the business of giving advice, readers who want to pursue that line of thought are on their own.
6A lot of the problem is that it simply isn’t profitable enough to arbitrage away the breaches of parity, unless you’re doing it just for the fun of proving a point. But I don’t think this is the only problem. And furthermore, even if it was the only problem, it appears to me to be a more or less insoluble one, unless we really believe that in the future we will live in a world where material proportions of peoples’ savings will be diverted away from productive enterprise and into these zero-sum games.
7I hereby claim the CT title for “Most Footnotes”
8And “Most Gratuitous Self-Citations”.
9In response to comments, here’s a quick plot of the sensitivity of the option to the volatility assumption (vega). This is plotting the first derivative of the value of the option with respect to volatility. If this doesn’t mean much to you, just laugh at the grotty Excel.
Those who have been following the decline and fall of the Conrad Black empire (a group which surely must include those bloggers fond of quoting the ravings of his wife, Barbara Amiel ) will be amused to learn that his friend Richard Perle has now deserted him.
… last week, Mr. Perle’s view of Lord Black changed. Issuing his first public statements since being heavily criticized in an internal report for rubber-stamping transactions that company investigators say led to the plundering of the company, Mr. Perle now says he was duped by his friend and business colleague.
Read the whole thing, as someone-or-other once said.
Australian PM John Howard has called an election for 9 October. I’ve discussed the political issues here, but CT readers will also be interested in the implications for the efficient markets hypothesis. Centrebet, which didn’t do brilliantly last time, has the (conservative) Coalition at $1.55 and Labor at $2.30. If I’ve done my arithmetic properly, and allowing for the bookies’ margin, I get the implied probabilities as 0.60 for the Coalition and 0.40 for Labor. The polls have Labor ahead, but looking at all the discussion, I’d say that the consensus view is that the election is a 50-50 proposition, and that’s also my subjective probability.
How good a test of the efficient markets hypothesis will this be? Bayesian decision theory provides an answer1. If our initial belief is that the EMH is equally likely to be true or false, and the Coalition wins, we should revise our probability for the EMH up to 0.55. If Labor wins, we should revise it down to 0.45.
1 The workings are easy for those who know Bayes’ theorem and accept the modern subjectivist interpretation, but they won’t make much sense to those who don’t.
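For those who do, here is one way to set the workings up. The assumption (mine, reading between the lines of the post) is that if the EMH is true the market’s 0.60 is the right win probability, while if it is false the 50-50 consensus is.

```python
# One way to do footnote 1's workings. Assumptions: if the EMH is true,
# the market's 0.60 is the correct Coalition win probability; if false,
# the 50-50 consensus is; the prior on the EMH itself is 0.5.
def posterior(prior, p_if_true, p_if_false):
    num = prior * p_if_true
    return num / (num + (1 - prior) * p_if_false)

print(posterior(0.5, 0.60, 0.50))          # Coalition wins -> about 0.55
print(posterior(0.5, 1 - 0.60, 1 - 0.50))  # Labor wins     -> about 0.45
```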
The Bush administration always does much worse than you anticipate, no matter how low your expectations are. That’s one of Brad’s reasons for preferring Kerry; the others are the quality of his team and the fact that he will restore proper processes.
The reason Brad doesn’t display more enthusiasm is that Kerry hasn’t given much ground for it. Kerry has a plan to cut the deficit in half, but then, so does Bush1.
I’d like to offer an argument based on political business cycles to suggest that Kerry has to do better than Bush.
It’s unclear how long the present budget and current account deficits can run on without generating a serious crisis, but a sufficiently wild-eyed optimist could give it five years. As we’ve seen, the Bush Administration has no shortage of wild-eyed optimists. So it’s reasonable to expect that, if Bush gets back in, he’ll go on as before, planning to leave any problems to his successor.
By contrast, Kerry is presumably hoping to be his own successor, that is, to serve a second term, and not to encounter a major economic disaster while in office. No-one remotely in contact with reality imagines that current policies (or the soft options spelt out so far in Kerry’s plan) will stave off budgetary crisis beyond 2012 [the baby-boomer Social Security wave starts in 2010]. So Kerry will have to bite some bullets, fairly early in his term of office. I offered some tactical suggestions on this a while ago.
1 And if you believe Bush’s plan, you might be interested in the IPO of my new dotcom, which will replace the Brooklyn Bridge with a virtual exchange, eliminating the need for anyone to actually cross the East River.
In my previous post on US trade, I argued that if the current account deficit is to be stabilised at a sustainable level, the balance of trade on goods and services must return to surplus in the next decade or so. In this post, I’m going to rule out a soft option and argue that, while a smooth market-driven adjustment is not inconceivable, it’s unlikely.
The soft option is the idea that central banks will keep on buying US dollars indefinitely in order to keep the world trading system running, and that the US can therefore consume as much as it wants, subject only to the capacity of the Treasury to keep printing dollars. This option is not a goer for both economic and geopolitical reasons. On the economic front, there comes a point when the risk of being left with a pile of worthless paper exceeds any benefits from being able to export goods.
On the geopolitical front, there’s no point in spending hundreds of billions of dollars a year becoming a military hyperpower if you’re going into hock to your rivals/potential adversaries for a similar amount. On current trends, the Chinese central bank will hold the better part of a trillion dollars in US government bonds in a few years’ time. Should there be any minor unpleasantness on the foreign policy front, nothing would be more natural than for the Chinese to stop buying for a bit and diversify some of their existing holdings, say a hundred billion dollars or so, into yen and euros. At this point, Wall Street and the Treasury would demand immediate capitulation.
There are also various private sector versions of the soft option, based on the idea that foreigners desperately want to hold US assets, but none of these will stand up to the pressure of chronic trade deficits. As other countries have found out, relying on hot money to finance chronic deficits guarantees a crisis of confidence sooner or later.
If the soft option is ruled out, we’re left to consider paths by which the US can return to trade surplus. Currently the US exports about half as much as it imports. The imbalance could be reduced in a number of ways: devaluation of the dollar, productivity growth that improves the competitiveness of US producers, and increased household saving that cuts import demand.
To get back to balance or surplus in a decade, and without a crisis, none of these alone would be sufficient. For example, to get to balance by devaluation alone would require a devaluation of the order of 50 per cent, which would certainly entail both an upsurge in inflation and an increase in interest rates. A lot of emphasis is (rightly) put on productivity, but even on the most optimistic accounts, the gap between the US and other countries is no more than one percentage point per year, which is nowhere near enough. About 40 per cent of the marginal dollar goes on imports, so the restoration of balance through increased household saving alone would require an increase in saving equal to something like 12 per cent of GDP, and this seems most implausible.
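The saving figure follows directly from the import share, as this quick sketch shows:

```python
# Back-of-envelope check on the saving claim. Assumptions from the post:
# the trade gap to be closed is about 5 per cent of GDP, and roughly 40
# cents of each marginal dollar of spending goes on imports, so cutting
# imports by one dollar requires raising saving by 1/0.4 = 2.5 dollars.
trade_gap = 0.05               # share of GDP
marginal_import_share = 0.40
required_saving_increase = trade_gap / marginal_import_share
print(required_saving_increase)  # 0.125, i.e. about 12 per cent of GDP
```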
If the adjustment were to begin almost immediately and everything went right, it could go smoothly. But the odds against this seem long. So it’s worth considering alternatives.
I found this story of globalisation and soft power at charlotte street, via bertramonline. As bertram says, you can’t make this kind of thing up.
I had a look at related issues in this piece
I was surprised in the discussion of Anne Alstott’s No Exit to find so much enthusiasm for means-tested benefits which, I suppose, reveals more about me and the company I keep than about anything else. I am not completely opposed to means-testing: in some areas of policy, for example funding higher education, I think it can be an effective tool for benefiting the less advantaged. And sometimes it is, given the political constraints, the best that you can do in lousy circumstances. But as a general matter universal benefits are better, and more egalitarian, than means-tested benefits. I was going to write up a lecturely account of why, after that discussion, but fortunately got distracted by summery things like making Bakewell Tarts and hanging out with my kids. And a good thing too, because Shlomi Segall has subsequently published a nice brief account of the general reasons why people like me prefer universal benefits. I’ll add one thing that Segall does not emphasize: the perverse incentives of means-testing. So, for example, the UK government’s decision to rely on the means-tested Income Guarantee Support as a top-up for the state pension introduces a disincentive to save for those nearing retirement age who think they might need it; and the old AFDC in the US reduced dollar-for-dollar as recipients earned income; recipients faced an effective marginal tax rate of 100% which even lefties like me can see might be a disincentive to work. But Segall makes the rest of the case briefly, and has thereby saved me a lot of work (which I was evidently too lazy to do anyway).
I was looking at the latest US trade figures from the Bureau of Economic Analysis and thought, rather unoriginally, that this is an unsustainable trend. Despite the decline in the value of the US dollar against most major currencies1, the US balance of trade in goods and services hit a record deficit of $55 billion (annualised, this would be about 6 per cent of Gross Domestic Product) in June. The deficit has grown fairly steadily, and this trend shows no obvious signs of reversal, at least unless oil prices fall sharply.
This naturally, and still rather unoriginally, led me to the aphorism, attributed to Herbert Stein “If a trend can’t be sustained forever it won’t be”. Sustained large deficits on goods and services eventually imply unbounded growth in indebtedness, and exploding current account deficits2, as compound interest works its magic. So, if the current account deficit is to be stabilised relative to GDP, trade in goods and services must sooner or later return to balance or (if the real interest rate is higher than the rate of economic growth) surplus
But forever is a long time. Before worrying about trends that can’t be sustained forever, it is worth thinking about how long they can be sustained, and what the adjustment process will be.
I set up a simple spreadsheet model and started with some reasonably optimistic numbers. Suppose the deficit on goods and services levels out at 5 per cent of GDP, stays at that level until 2007, and then declines steadily over the following decade, with the balance stabilising at a surplus of 1.5 per cent of GDP. Over this period, net external obligations increase steadily, and so do the associated income payments. The equilibrium position has net obligations equal to around 80 per cent of GDP (about $8 trillion at current levels). Assuming an interest rate of around 7 per cent, the current account deficit stabilises at 4.5 per cent of GDP.
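For readers who want to play with the numbers, here is a minimal version of that spreadsheet in code. The nominal GDP growth rate and the 2004 starting debt position are my assumptions, not stated in the post, so treat the path as illustrative rather than a reproduction.

```python
# A minimal version of the spreadsheet model sketched above; all figures
# are shares of GDP. Growth of 5.2 per cent a year (nominal) and a 2004
# starting position of -25 per cent of GDP are my assumptions.
growth, interest = 0.052, 0.07
debt = 0.25                  # net external obligations, share of GDP
balance = -0.05              # goods and services balance (deficit)
for year in range(2004, 2031):
    if 2007 < year <= 2017:  # decade of adjustment after 2007
        balance = min(balance + 0.0065, 0.015)
    cad = interest * debt - balance      # current account deficit
    debt = (debt + cad) / (1 + growth)   # next year's debt-to-GDP ratio
    if year in (2007, 2017, 2030):
        print(year, round(debt, 2), round(cad, 3))
# Net obligations peak near 70 per cent of GDP with a CAD around 3.5 per
# cent; hitting the 80 per cent / 4.5 per cent equilibrium quoted above
# depends on the exact growth assumption.
```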
Would this be a sustainable outcome? Stephen Kirchner points to Australia to suggest that it is. After a big run of goods and services deficits in the 1980s, Australia’s position broadly stabilised in the 1990s, with net obligations around 60 per cent of GDP (still rising, but slowly), and a CAD of 4-5 per cent3.
There are several problems with Kirchner’s claim. First, it’s not clear that complacency about Australia is justified. We weren’t affected by financial panic during the Asian crisis, but that doesn’t rule out the possibility that high debt levels will produce a panic sooner or later.
Second, as Peter Gallagher observes, the US is much bigger than Australia. It’s not clear that global capital markets can call forth enough savings to finance deficits on this scale, at least not without an increase in interest rates. Any significant increase in interest rates would create huge problems for debtor countries like Australia and the US.
But the biggest problem for me is that I can’t see how the stabilisation scenario I’ve described is going to be realised without some sort of crisis. Without radical changes in the US economy, a large deficit on oil imports can be taken as a given. And there are large classes of consumer goods for which domestic production has pretty much ceased. If balance is to be reached in a decade, there has to be a big turnaround in the pattern of trade somewhere, and it’s hard to see where. There is no sector in which the US is currently running a large surplus (there’s a small surplus on services, but even here, the trend is flat or negative). Even with the recent depreciation, and much-touted productivity growth, there’s no sign that US producers are gaining market share in any part of the traded goods sector. The big decline in manufacturing employment since the late 1990s is hard to square with the idea that short-term deficits are justified by long-term growth prospects.
Finally, the scenario requires a lot of faith on the part of foreign lenders, who face a big risk of expropriation through inflation or repudiation. At a minimum, you’d expect them to try to shift their lending to the US out of loans denominated in $US and into more secure currencies. (The $A-denominated share of Australian debt is 33 per cent and falling). This in turn would weaken the position of the US as a financial centre.
If a smooth, market-driven adjustment to a sustainable position is unlikely, what are the alternatives? Stay tuned for my next post, in which I will look at this question, and some of the proposals that have already been floated.
1 The exception is China. But Chinese inflation, which is accelerating, has the same real effect as a depreciation of the dollar against the Chinese yuan
2 The current account deficit is the sum of the deficit on goods and services (the trade deficit) and the deficit on income payments (the income deficit). At present, the US has a large trade deficit, but only a small income deficit.
3 Details in this report from the Parliamentary Library (PDF file). On the way to this balance, we went through a very nasty recession, largely driven by government policies aimed at bringing down the deficit. Although these policies were rightly criticised, and most economists now oppose using contractionary policy to target the CAD, it’s not clear that a market-driven adjustment would have been painless.
Gosh, I remember this from my small collection of BCCI books, but had never realised it was the same John Kerry. This really ought to count in people’s minds a lot more than any tales of heroics in Vietnam. The fact that George W Bush borrowed money from BCCI in 1987 but John Kerry launched the investigation in 1988 that eventually brought them down really says about all you need to know about the character of the two men. BCCI was a really quite extraordinarily bad organisation and Kerry’s investigation opened the eyes of the whole world to the extent that it was possible to get away with corruption in high-quality financial centres. It was about this time, by the way, that the liberal media of the USA were smearing Gary Webb as a “crackpot conspiracy theorist” for reporting, accurately, on the fact that politically well-connected Nicaraguans were being allowed to get off easily on cocaine smuggling charges. The Washington Monthly story is well worth a read.
Link comes via Atrios, btw, who obviously needs the vast publicity that a CT link can generate.
I’m pretty sure that it was JK Galbraith (with an outside chance that it was Bhagwati) who noted that there is one and only one successful tactic to use, should you happen to get into an argument with Milton Friedman about economics. That is, you listen out for the words “Let us assume” or “Let’s suppose” and immediately jump in and say “No, let’s not assume that”. The point being that if you give away the starting assumptions, Friedman’s reasoning will almost always carry you away to the conclusion he wants to reach with no further opportunities to object, but that if you examine the assumptions carefully, there’s usually one of them which provides the function of a great big rug under which all the points you might want to make have been pre-swept.
A few CT mates appear to be floundering badly over this Law & Economics post at Marginal Revolution on the subject of why it’s a bad idea to have minimum standards for rented accommodation. (Atrios is doing a bit better). So I thought I’d use it as an object lesson in applying the Milton Friedman technique.
Let’s see what Alex Tabarrok has to say: “If tenants benefit from a law that says apartments must have hot water then surely a law that says tenants must have hot water and a dishwasher benefits them even more, right? What about a law that says tenants must have hot water, a dishwasher and cable tv? By now the students have cottoned on to the idea that the rent will increase. Once you realize that the law causes the rent to increase it’s no longer obvious if tenants benefit or if landlords are harmed. We can work out what happens with some numbers. Let’s suppose that after much bargaining the tenant and landlord have agreed upon the rent and the amenities - each party to the contract is profit maximizing, doing as well as they can given market conditions and the interests of the other […]”
Hold it right there.
“No. Let’s not suppose that”
Specifically, let’s not suppose that all the negotiations between tenant and landlord have been sorted out in a reasonably equitable manner. Let’s suppose instead that those negotiations are going on right now.
It is really quite rare to find a buyer’s market for rented accommodation. Even if there is a slight oversupply of rental units on the market, time is almost always on the landlord’s side, because waiting is typically much more inconvenient for the party that has to wait without a house to wait in. In general, when tenants and landlords are negotiating over the potential Pareto gain that could be made from renting the house, the landlord ends up capturing most or all of the surplus. The hot water and habitability laws are simply aimed at skewing things a bit in favour of the tenant and putting a floor on how bad a deal the tenant can end up accepting. It’s a standard game theory result that something which reduces your options can benefit you by reducing the number of bad options that you can end up agreeing to (most famously, the secret ballot has to be compulsory, because if you had the option to reveal your vote, you could be intimidated), and habitability laws are there for exactly this purpose. Mystery solved, through application of the Friedman technique.
This sort of issue (the way in which legal frameworks shape negotiations over the gains from economic interactions) was right there at the start of the law and economics movement with Ronald Coase. Which is why it’s a mystery to me that modern law ‘n’ economics courses appear to have abandoned them in favour of rinky-dink numerical models with politically convenient deregulatory conclusions.
Daniel Drezner posts an extract from a Wall Street Journal article (subscription only, and I don’t have a subscription), suggesting both that there is a serious shortage of skilled machinists in the US, and that “U.S. apprenticeship programs have dwindled as the large American companies that once provided the bulk of such training have cut back to save money and now outsource some of the work.” As I’ve noted before, there’s a serious case to be made that both these problems reflect underlying weaknesses in the US model of capitalism.
Margarita Estevez-Abe, Torben Iversen and David Soskice have an important piece arguing that countries like the US, which have a minimal welfare state, tend to discourage people from investing in risky specific skills. If the market changes, so that these skills are no longer rewarded on the job market, people who have these skills may find it very difficult to find new jobs, or to retool themselves for a changing economy. In contrast, countries with a well-developed welfare state can provide a buffer, allowing people more time to find new jobs that match their skills, or to learn new skills if necessary. Estevez-Abe, Iversen and Soskice argue that countries like the US tend to produce skill generalists, who can shift easily as market conditions change, whereas countries like Germany, with substantial welfare states, more easily support specialist skills.
Machining skills are highly specific - those who invest in the lengthy process of learning how to be “Swiss machinists” will find it extremely difficult to find new jobs if the demand for Swiss machinists falls. Therefore, one may reasonably expect that in countries such as the US (where there isn’t a safety net), many people will be reluctant to become Swiss machinists, instead preferring more generalized skills that can be transported more easily from job to job. In contrast, in countries like Germany, one may reasonably expect that there will be no systematic problems of undersupply of machining skills (although there may of course be conjunctural ones).
Furthermore, as sociologist Wolfgang Streeck has noted, German-style capitalism is better in specific aspects of training. It produces “beneficial constraints” that help ensure that certain collective goods are produced. Apprenticeship training is a key example - in a free economy, every firm has an incentive to underinvest in training, because it does not realize the full benefits (the individual may leave and bring the benefits of her expensive training to a new firm). In Germany, firms are obliged to participate in collective schemes organized by local Chambers of Commerce, thus mitigating the collective problem of underprovision.
German capitalism is, of course, going through some rather serious problems at the moment. Still, in certain sectors of the economy, it does seem to do a better job than the US. This gives Germany a real comparative advantage in specialized machinery production - it exports more machinery than any other country. In the 1980s, the popular wisdom was that US firms were being ground into the dust by the Germans and Japanese; now it’s that the US economic system is inherently superior to its peers. It will be interesting to see whether this wisdom changes any over the next ten years.1
1 The Economist produces special supplements every once in a while covering a major capitalist economy (Germany, Italy, Japan etc). Peter Katzenstein has done an informal study of these over the last couple of decades, and claims to have detected wild variation (Germany may be the model for the world in one survey, and a basket-case a few years later), mapping conjunctural change.
Paul Beard writes to ask what the economists and social scientists at Crooked Timber think of this interview with economist Ray C. Fair, whose model predicts that Bush will get 57.5% of the two-party vote.
Well, the economists and social scientists at Crooked Timber are in bed, so I’ll try. Here’s the model and relevant webpage. Fair is a Yale professor who has been tweaking his model, and successfully publishing papers on it, since 1978. I am a lowly uncredentialed blogger who has spent less than an hour looking at said papers. If I were a betting man, I’d have to bet that Fair is right and I’m wrong when I raise a criticism. If I’m lucky, maybe he’ll respond to my email and humiliate me in comments.
Having said that, a few comments:
1. The model is stunningly simple. There are only three inputs: inflation and two measures of real per capita GDP growth. And the output is not the expected vote percentage of the incumbent, but of the Republican candidate. (There’s also an incumbency variable implicit in the equation he’s posted for the 2004 election.)
2. If I wanted to put this model into words, it would sound something like: “Republican presidential candidates will get more of the vote when real GDP growth is strong and inflation is relatively low.” This has a few counterintuitive implications:
- Good GDP growth and low inflation hurt Democratic candidates. (Even when they’re the incumbent? I’m not sure which effect is more powerful.)
- Unemployment, polls, fundraising, immigration, wars, etc., don’t make much difference in how people vote for President.
- If you share the reasonable belief that the President doesn’t have much control over GDP or inflation, the model implies that the candidates, even sitting Presidents, can’t do much about the percentage of the vote that they obtain.
Now, “counterintuitive” certainly doesn’t mean “wrong”, but I’d need some convincing.
3. The input GOODNEWS is rather delicately specified; it’s defined as “number of quarters in the first 15 quarters of the Bush administration in which the growth rate of real per capita GDP is greater than 3.2 percent at an annual rate”. I can’t imagine that there’s a theoretical justification for it, as it changes in different versions of the model. It seems that it’s simply a number that most accurately retroactively predicts the old elections.
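To make the delicacy concrete, here is roughly what computing GOODNEWS amounts to. The quarterly growth series below is invented for illustration; the 3.2 per cent threshold is the one quoted above.

```python
# Sketch of the GOODNEWS input: count the quarters, out of the first 15
# of the administration, in which annualised real per capita GDP growth
# exceeds 3.2 per cent. The series here is made up for illustration.
growth_rates = [2.1, 3.5, 4.0, 1.2, 3.3, 2.9, 5.1, 0.4,
                3.6, 2.2, 1.8, 3.9, 2.7, 4.4, 3.1]  # 15 quarters, invented

GOODNEWS = sum(1 for g in growth_rates if g > 3.2)
print(GOODNEWS)  # 7 with these made-up numbers
```

Notice how sensitive the count is: nudge the threshold from 3.2 to 3.0 and the 3.1 quarter flips the answer, which is exactly the overfitting worry raised below.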
Is it an overspecified model, good at describing past history but not at making predictions? It seems awfully unlikely that I’d be the one to discover this. We’ll know more in November.
Professor Fair updates his model after every election to better accommodate the new data point. I’d be interested in seeing how well each iteration predicted the elections since 1978 before they were retroactively tweaked. It did pretty well in 2000.
UPDATE: Evan Roberts at Coffee Grounds points out that I’ve missed something important. The party of the incumbent is a variable; benefits of a good economy don’t automatically benefit Republicans.
Third parties, Chhibber and Kollman observe, once competed successfully in congressional elections, winning significant portions of the popular vote and often gaining seats in Congress. This was true for most of the 19th century and even the early part of the 20th, so, they argue, the cause of their subsequent failure must be something new - political centralisation1.
Chhibber and Kollman seem to be well-regarded political scientists. But their argument here is riddled with errors, or at least large logical gaps.
First, they present hardly any data, and don’t answer the obvious empirical objections. Their claim that third parties do less well now than in the past runs into some obvious problems. Two of the last three presidential elections have been decided (or at least greatly affected) by third-party candidates, Perot in 92 and Nader in 2000. The Reform party also elected a governor, Jesse Ventura in 1998, and there’s one socialist member in Congress. That’s not much of a challenge to two-party dominance, but did the parties cited by Chhibber and Kollman (Prohibition, Socialist, Populist, Greenback, Farmer-Labor) do notably better? We’re not told, but I’m pretty sure the answer is “No”. There’s also no evidence (beyond the single data point of Canada) that centralism is favorable to two-party systems.
The logical problems are even more striking. Granting, for the sake of argument, that third parties have declined since the 1930s, and that centralisation has increased at the same time, haven’t these guys ever heard that “correlation does not imply causation”? Leaving aside the possibility of purely spurious correlation, there are plenty of possible “joint cause” arguments. For example, it might be that the rise of mass media has both reduced regional diversity (which implies less reason to oppose centralisation of political decisions) and also given advantages to large parties.
But, there’s still a more significant error. Let’s suppose we’re satisfied that third parties were once strong and that their decline was caused by political centralisation. It’s still obviously true that such parties are disadvantaged by plurality voting and other features of the electoral system. If you want to encourage third parties, you can either fundamentally change the relationship between Federal and State governments, reversing 100 years of history, or you can change the voting system. Changing voting systems isn’t easy, but it’s been done in many places, and can be done on a state-by-state basis.
1 Looking on the web, I found this book chapter by Chhibber and Kollman, which seems to date the rise of the current two-party system, more plausibly in my view, to 1860. It is certainly true that the Republican Party of that time, devoted as it was to the containment and ultimate destruction of a regional “peculiar institution”, did a great deal to enhance the power of the central government.
Marty Weitzman is smarter than I am, says Brad DeLong: “This is brilliant. I should have seen this. I should have seen this sixteen years ago. I almost saw this sixteen years ago.” Weitzman’s idea1 is to replace the sample distributions of returns on equity and debt with reasonable Bayesian subjective distributions. These have much fatter tails, allowing for a higher risk premium, lower risk free rate and higher volatility, in the context of a socially optimal market outcome. Brad lists some of the reasons why this is important.
My immediate reaction is the same as Brad’s. Something like this has occurred to me too, but I’ve never thought hard enough or cleverly enough about it how to work it out properly. This is a very impressive achievement, and Marty Weitzman is very, very smart (which we already knew).
My second reaction is a little more sceptical. Some previous attempts at resolving the equity premium puzzle have focused on the tails of the distribution, and the possibility of catastrophic loss. The problem was that it was difficult to describe an outcome where the return on equity was large and negative, but bonds were still a safe asset. The various catastrophic examples cited, such as hyperinflation, revolution and nuclear war, all failed in this respect.
Applying the same reasoning to Weitzman’s argument, we need to consider whether there is a reasonable model of a stable capitalist economy, with functional financial markets, that produces a negative long-run rate of growth in output per person. The only one I can imagine is based on resource exhaustion, and I can’t really see belief (positive probability weight) in such a model being widespread enough to generate the observed equity premium. With less confidence, I’d assert that there are pretty good technological reasons to rule out a sustained rate of productivity growth (embodied and disembodied) of more than 5 per cent, for countries that are already at the frontier. The maximum sustainable rate of growth of output per person cannot be much above this.
I haven’t been able to check the math, but I doubt that a complete Bayesian explanation of the equity premium puzzle can be obtained if the prior distribution on the long-run rate of growth is bounded in this way.
My third reaction is eclectic. My general view is that there is no one explanation of the equity premium, but a set of problems with the standard consumption-based model of asset pricing (CCAPM) that interact to produce results radically different from those of the model. Making expectations Bayesian rather than classical will amplify the effects of any other deviation in the model, and therefore fit neatly into this story.
1 The only version of the paper I’ve seen so far is a PDF file in which the maths has not come through. But I think I’ve got the basic idea.
I looked this one up for an argument in comments to Belle’s post below, and I’ve been laughing and crying ever since. It’s a useful way to think about the extent to which “trickle down” economics has worked for the poorest in society. As we all know because people who know we’ve read Rawls keep telling us, the poorest benefit from economic growth. How much do they benefit?
According to government figures, in 1978 the mean income of the lowest quintile of the income distribution, in 2001 dollars, was $9410. By 2001, that income had risen (again in 2001 dollars) to $10136.
That’s an increase of $726 in 2001 dollars. By comparison, in 2001 dollars, a Starbucks latte cost $2.80. If you divide 726 by 2.8, you get 259.
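Or, for the spreadsheet-averse, the whole calculation in a few lines:

```python
# The latte arithmetic, spelled out. All figures in 2001 dollars,
# taken from the government income statistics cited above.
gain = 10136 - 9410   # rise in mean income, lowest quintile, 1978 to 2001
latte = 2.80
print(gain, round(gain / latte))  # 726 dollars, or about 259 lattes
# 259 is almost exactly one latte per weekday (about 260 weekdays a year).
```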
So in other words, despite the invention of the internet, the peace dividend from the end of the Cold War, the end of the oil crisis of the 1970s and the greatest bull market in history, the difference between being in the poorest 20% in 1978 and in 2001 was that if you were in the bottom quintile in 2001, you could buy the consumption bundle of the poorest quintile from 1978, plus a caffe latte every weekday. That’s the extent to which trickle-down economics worked; 23 years to get a cup of fucking frothy coffee. I’m reminded of James Tobin’s poem:
The poor complain
They always do
But that’s just idle chatter
Our system brings rewards to all
At least, all those who matter
Not The Onion, but TechCentralStation, on unleashing the power of the free market to capture Osama Bin Laden. Priceless!
Due to a sudden period of enforced idleness, my insomnia is back (my previous schedule of working five caffeine-fuelled 14 hour days a week and recovering at the weekend had cured it nicely. I can recommend this method to anyone although to be honest, my doctor frowned on it). As a result, I find myself thinking about the aggregativity of capital, labour theories of value, and so on. I therefore pass on this small question which may be of some amusement to those of our readership who indulge in either cannabis or value theory; the two groups may find it equally interesting.
If you had all the wealth in the world, ie you owned every single object of value that was known to humanity ….
what would you spend it on?
For what it’s worth, I contend that you’d spend it on buying the labour power of everyone else in the world to be your slaves, and that this is a (pretty weak) argument for some form of the labour theory of value. But it does highlight a quite interesting property of economic theories which don’t have a theory of value; because value arises only from exchange, they are scale-invariant, so there is no question of putting a value on the system as a whole. I’d be interested if any Austrian readers had any thoughts on this, not least because it would save me reading a few books.
Sweet dreams …
I’m preparing an introductory course on game theory at the moment, and selecting readings for the week on communication and games of limited information. One of the key contributors to this literature is Joseph Farrell (no relation) who has done seminal work on how “cheap talk” (costless communication) may affect rational actors’ behaviour when it conveys useful information about an actor’s type. He also shows that “babbling equilibria” are always possible, in which actors’ communication conveys no information about their type whatsoever, and is consequently always ignored by others. This seems to be a rather abstruse argument with little real world relevance - but I reckon that one nice way to bring it home to my students is to point to how it helps explain DC taxi-cabs. In many (perhaps most) cities, cabs use their cab-sign to signal whether they are available or not. A lighted cab-sign indicates that the taxi-cab is free; an unlighted sign indicates that the cab is occupied. Washington DC, for some reason, is different. As far as I can tell, whether or not a cab’s sign is lighted bears no relationship to whether it is occupied. Thus, after some initial confusion, newcomers learn to ignore whether the cab has a lighted sign or not, instead squinting as best they can into the interior, to see whether they can spot any passengers. This is about as close to a babbling equilibrium as one may reasonably expect to find in the real world. How this came about in DC, and not in other American cities, is anyone’s guess.
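If that sounds too abstract, a toy simulation makes the point. The occupancy and light probabilities below are invented; all that matters is that the two are independent, in which case conditioning on the light tells you nothing, and ignoring it is a best response.

```python
# Toy check on the babbling-equilibrium story: if the roof light is
# uncorrelated with occupancy, the probability a cab is free is the same
# whether or not you condition on the light, so the signal is worthless.
import random

random.seed(0)
cabs = [(random.random() < 0.5,    # occupied? (invented base rate)
         random.random() < 0.6)    # light on? (independent of occupancy)
        for _ in range(100_000)]

p_free = sum(not occ for occ, _ in cabs) / len(cabs)
p_free_given_light = (sum(not occ for occ, lit in cabs if lit)
                      / sum(1 for _, lit in cabs if lit))
print(round(p_free, 3), round(p_free_given_light, 3))  # nearly identical
```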
Hunt Stilwell has let me know via email that the Netflix fallacy that I talked about last week seems to replicate a very interesting experiment on the psychology of intertemporal decision-making. His email (with permission, and some light editing) is reproduced under the fold.
I just read your post on Crooked Timber on the “Netflix Fallacy.” I had a similar experience with Netflix, which led me to get rid of it. I think that what your experience and mine represent is not a new “economic fallacy,” but an old one. In fact, I think our experience does a pretty good job of replicating an experiment by George Loewenstein and colleagues on intertemporal decision making. In that experiment (a link to the paper is below), participants were allowed to choose films, some of which were “vice” films (e.g., “Armageddon” or “The Mask”), with others being “virtue” films (e.g., “Schindler’s List” or one of your foreign films with subtitles). Participants made the choices either in a sequence, meaning that the viewings would be in the immediate future, or made them all simultaneously, meaning that some of the viewings would be delayed. Participants who made the choices in a sequence tended to pick mostly vice films, while participants who made the choices simultaneously picked many more virtue films.
It looks to me like our behavior was an instance of a psychological tendency that researchers studying intertemporal decision making have observed over and over again: when making decisions for the immediate future, people tend to pick vices (high immediate reward, high long-term cost), whereas people who are making decisions for the more distant future pick virtues (low immediate cost, high long-term reward). We selected “virtue” movies because we knew our viewing would be delayed, and therefore were more likely to pick alternatives with low immediate cost and (potentially) high long-term rewards. We even picked them simultaneously (three films at a time), just as in the experiment! If we had gone to Blockbuster, we would have known that we would have to watch the films in the relatively immediate future (24-72 hours, depending on whether the film was a new release or not), and thus we would be more likely to pick “vice” films that we might be motivated (by the easy entertainment value) to watch.
I’ve always had a problem with the “virtue/vice” distinction applied to films in the Loewenstein paper, because there really isn’t much of a difference between the cost associated with the two types of films. I suppose the wasted time makes for a cost difference, but I’m not sure that’s enough. However, the pattern is the same. The fact that Netflix demonstrates this so well, and even replicates an actual experiment, leads me to believe that Netflix is really just a big social psychology experiment, and those of us who’ve subscribed are all unwitting participants.
The paper on the movie experiment can be found here:
http://sds.hss.cmu.edu/faculty/Loewenstein/downloads/mixingvirtue.pdf
Over at Marginal Revolution, Alex Tabarrok recently presented a graph showing a positive correlation between UN measures of gender development and the Fraser Institute’s Economic Freedom Index. Of course, Alex presented the usual caveats about causation and correlation, but he concluded that “at a minimum the graph indicates that capitalism and gender development are compatible contrary to many radicals”.
This prompted me to check out how the Economic Freedom index was calculated. The relevant data is all in a spreadsheet, and shows that the index is computed from about 20 components, all rated as scores out of 10, the first of which is general government consumption spending as a percentage of total consumption. Since the Fraser Institute assumes that government consumption is bad for economic freedom, the score out of 10 is negatively correlated with the raw data.
Looking back at Alex’s post, I thought it likely that high levels of government expenditure would be positively rather than negatively correlated with gender development, which raised the obvious question of the correlation between government consumption expenditure and economic freedom (as defined by the Fraser Institute index). Computing correlations, I found that, although it enters the index negatively, government consumption expenditure has a strong positive correlation (0.42) with economic freedom as estimated by the Fraser Institute. Conversely, the GCE component of the index is negatively correlated (-0.43) with the index as a whole. By contrast, items like the absence of labour market controls were only weakly correlated with the aggregate index.
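The computation itself is nothing fancy. Here is a sketch, assuming the Fraser spreadsheet has been exported to CSV; the file and column names are hypothetical, not the institute’s own.

```python
# Sketch of the correlation exercise. Assumes the Fraser data has been
# exported to a CSV with one row per country; "efw_index.csv" and the
# column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("efw_index.csv")
print(df["gce_score"].corr(df["index"]))  # GCE component vs the aggregate
print(df["gce_raw"].corr(df["index"]))    # raw govt consumption vs the aggregate
# The post reports the first at about -0.43 (the component is scored
# inversely) and the second at about +0.42.
```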
It immediately struck me that this could be the basis of some snarky pointscoring. But on reflection, I thought that it would be more useful to look seriously at the issues raised by this result.
First, it’s necessary to explain the results. A large slab of the index consists of things like the absence of military interference in the rule of law. There’s also an entire section of the index devoted to sound money (low and stable inflation). These are both features characteristically provided by strong democratic governments, and not by weak ones, regardless of whether they are interventionist or laissez-faire. It seems pretty clear that a principal components analysis would show a dominant component associated with the characteristics of developed mixed economies, characteristics that include high levels of public expenditure, combined with relatively light-handed forms of government intervention in the private sector, compared to those characteristic of weak governments in less developed countries.
This is scarcely surprising. State capacity tends to rise with income, so in wealthy countries the state can achieve more, with less obtrusive use of power, than in poor countries. It is strong and not weak states that produce economic freedom.
The same principles that lead to support for separation of powers within the state with clearly defined roles for the judiciary, legislature and executive (including the military!), also yield support for a mixed economy against the alternatives of comprehensive central planning and unfettered corporate power. The mixed economy produces more economic freedom for the average person than does the minimal state. Despite its own presumption to the contrary, the Fraser Institute’s analysis supports this conclusion.
Experimental psychologists are fond of pointing to examples of economic irrationality in everyday life - for example, people respond in very different ways if an article is priced at $9.99 and if it’s priced at $10. Through detailed examination of my own and my wife’s behaviour, I think I’ve identified another such example - the “Netflix fallacy.” Netflix, for those of you who aren’t familiar with it, is a subscription service where you pay a set amount each month to rent movies. You can have three DVDs out at any one time - when you are finished with one, you send it back, and receive a new DVD from your list of picks by return post. In theory, it’s an ideal way to make sure that you have the movies you want, when you want, and an excellent deal if you rent more than 3-4 DVDs a month.
In practice, it’s different - at least in my experience. Movies that we’ve rented sometimes sit there for two or three months before we watch them, or eventually, reluctantly, decide to send them back without seeing them. To my shame, this happens most often with the interesting, difficult films with sub-titles. I suspect that this is because we’re accustomed to thinking of DVDs as stocks rather than flows. Because we have physical possession of the DVD, we’re disinclined to give it back until we’ve actually watched it. Of course, this means that we face substantial costs - we may very easily end up paying more money to rent the damn movie than we would have to pay to buy it and keep it forever. Meanwhile, Netflix is laughing all the way to the bank. It’s much smarter to think of the rental service as a flow - you’re likely to be happier if you keep the movies coming along in a steady stream, even if you don’t watch them (the latter may be useful information about your actual preferences, as opposed to the preferences that you would like to have). I suspect that virtually any reasonable decision rule along the lines of ‘send the movie back if you haven’t watched it within two weeks’ is likely to produce better results than our current policy of watching the movies whenever we get around to it. Or, more typically, don’t get around to it.
Reading the discussion of earlier posts about the efficient markets hypothesis, it seems that the significance of the issue is still under-appreciated. In this post, Daniel pointed out the importance of EMH as a source of pressure on less-developed countries to liberalise capital flows, which contributed to a series of crises from the mid-1990s onwards, with huge human costs. This is also an issue for developed countries, as I’ll observe, though the consequences are nowhere near as severe. The discussion also raised the California energy farce, which, as I’ll argue is also largely attributable to excessive faith in EMH. Finally, and coming a bit closer to the stock market, I’ll look at the equity premium puzzle and its implications for the mixed economy.
To recap, the efficient markets hypothesis says that the prices generated by capital markets represent the best possible estimate of the values of the underlying assets. So, for example, the price of a share in Microsoft is the best possible estimate of the present value of Microsoft’s future earnings, appropriately discounted for time and risk.
The EMH comes in three forms. The weak version (which stands up pretty well, though not perfectly, to empirical testing) says that it’s impossible to predict future movements in asset prices on the basis of past movements, in the manner supposedly done by sharemarket chartists, Elliott wave theorists and so forth. The strong version, which almost no-one believes, says that asset prices represent the best possible estimate taking account of all information, both public and private. For policy purposes, the important issue is the “semi-strong” version, which says that asset prices are at least as good as any estimate that can be made on the basis of publicly available information. There’s a heap of evidence to show that the semi-strong EMH is false, but the most dramatic, as I pointed out recently, is that of the dotcom boom, when obviously hopeless businesses like Pets.com were valued at billions of dollars. I’ve discussed this a bit more here
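The weak version, incidentally, is easy to test for yourself: under it, daily returns should be roughly uncorrelated with their own past. Here is a minimal sketch on a simulated random-walk price series; substitute a real one to run the test in earnest.

```python
# Minimal weak-form check: compute the first-order autocorrelation of
# daily log returns. The price series here is a simulated random walk,
# for which the autocorrelation should come out near zero, as the weak
# EMH predicts for real prices.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2500)))
returns = np.diff(np.log(prices))

lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(round(lag1, 4))  # near zero for a random walk
```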
An apparent, but not real, moderation of the semi-strong EMH is the formulation offered by James Surowiecki that “whether or not markets are perfectly efficient, they’re better than any other capital allocation method that you can think of.” It’s clear that, if capital markets are always the best possible way of allocating capital, then they must produce the best possible estimates of asset values, given available information, and this is just a restatement of semi-strong EMH.
A much more defensible position is that, even if capital markets are not perfect, neither is any alternative, and it is therefore an empirical question whether unregulated capital markets or some alternative, such as regulation or public investment, will yield better outcomes in any particular case. This formulation leads straight to the basic economic framework of social democracy, the mixed economy, leaving, of course, plenty of room to argue about the optimal mix.
Now let’s look at some specific cases where the EMH gives a simple and misleading answer, and where a more careful analysis leads to mixed-economy conclusions.
First, there’s the question of foreign exchange markets. As Daniel observed, the EMH implies that the optimal policy is to allow exchange rates to be determined by unregulated capital markets, in what is called a ‘clean float’. The experience of the Asian crises, and also of Chile and China, suggests that developing countries may do better with controls that limit short-term capital flows. But even among developed countries, belief in the desirability of a clean float has faded. The volatility of exchange rates since the collapse of the fixed exchange rate regime in the mid-1970s has been much greater than expected, and much more than can be explained by any sensible model of rational markets. Most central banks have come to adopt a policy of ‘leaning against the wind’, that is, buying their own currency when the exchange rate is well below the long-run average (sometimes called Purchasing Power Parity, though this isn’t quite accurate) and selling when it is well above. On average, this has been a profitable long-term strategy. A lot of economists, starting with the late James Tobin, would go further and tax foreign exchange transactions in the hope of reducing volatility. The lesson here is that neither rigidly fixed exchange rates nor a perfectly free float is likely to be optimal. You can read some more on this here
Second, there’s energy, and particularly electricity. Until the 1990s, electricity was supplied either by public enterprises (most places except the US) or regulated monopolies. These did not do a perfect job in making investment decisions. Roughly speaking, when engineers were in charge, there was “gold-plating”, notably in the form of excessive reserve capacity. By contrast when accountants or Treasury departments were in charge, capital was rationed tightly, and investment decisions were determined largely on the basis of attempts to get around the resulting artificial constraints.
The EMH implied that private capital markets would do a far better job. This in turn led to the creation of spot markets for electricity which otherwise made little sense, since, in the absence of highly sophisticated metering, most users could not respond to market signals in the required fashion (the big consumers, who could respond, typically had contracts specifying if and how much their supply could be cut in the event of a shortage). In practice, this produced huge reallocations of wealth while failing to produce either sensible signals to consumers or rational investment. Instead, the pattern in investment was one of boom and bust. As a result, there’s been a steady movement away from spot markets and towards the reassertion of more co-ordination and planning of investment. I’ve had a bit more to say about this here
Finally, there’s the equity premium puzzle, that is, the fact that average returns to equity are far higher than can be accounted for by any sensible model incorporating both reasonable risk attitudes and the efficient markets hypothesis. If you reject the efficient markets hypothesis, it’s natural to conclude that the true social cost of capital is close to the real bond rate. This in turn allows us to evaluate privatisation by comparing the profits forgone with the interest saving that arises when sale proceeds are used to repay debt. For Australia and the UK, such an analysis produces the conclusion that most privatisations have reduced welfare. Some of the evidence is in this paper (PDF file), and there’s a more rigorous analysis here (PDF). Responding to this argument, Brad DeLong observes, with customary acuteness, that my argument implies that “the natural solution to all this is the S-Word: Socialism: public ownership of the means of production”.
Actually, however, while this aspect of the argument, taken in isolation, would imply that public ownership always outperforms private, there are plenty of factors going in the other direction, such as the principal-agent problems associated with the absence of an owner-manager or a threat of takeover. So, the correct conclusion, as foreshadowed above, is that the optimal arrangement will be a mixed economy.
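To make the privatisation test above concrete, here it is in sketch form. The numbers are made up; the point is only the structure of the comparison, given the premise that the social discount rate is close to the real bond rate.

```python
# Back-of-envelope version of the privatisation test, with invented
# numbers. Premise: if the true social cost of capital is close to the
# real bond rate, compare the earnings forgone by selling the enterprise
# with the interest saved by using the proceeds to retire debt.
earnings = 1.0                  # annual profits of the public enterprise
equity_rate, bond_rate = 0.08, 0.03

sale_price = earnings / equity_rate       # market values it at 12.5
interest_saved = sale_price * bond_rate   # 0.375 a year
print(interest_saved - earnings)          # -0.625 a year: the sale loses
# A sale priced off the (high) equity rate, banked at the (low) bond
# rate, leaves the public sector worse off on these illustrative numbers.
```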
Determining the optimal mix is a difficult task, requiring lots of case-by-case analysis, but I’ll offer the view that the optimal public share of production and consumption is unlikely to be below 25 per cent, and is typically close to 50 per cent. For a contrasting view, I’ll point to my distinguished predecessor at the University of Queensland, Colin Clark, who thought 25 per cent was an upper bound. I’ll need another big post to spell out my claims here, and this will take some work. In the meantime, feel free to pitch in with your own views.
None of these are true.
The really dangerous application of efficient markets theory over its twenty-five year run as the state of the art in economics was its application by the IMF and World Bank to developing countries’ capital account regulations. Whatever one might have picked up on a brief tour through Chicago University, it is historical fact that the efficient market theory was not simply left on the sidelines by academics and occasionally drawn out as a homily against the perils of stock market speculation. Nobody involved in the development of the efficient markets theory ever thought that it ought to be quarantined in this manner. Quite the opposite; the people who put together the statistical evidence (such as it was) for the proposition that stock prices followed a random walk believed and stated that they were gathering evidence for Paul Samuelson’s contention that “Properly Anticipated Speculative Prices Fluctuate Randomly”; ie, that because the observed stock prices followed a random walk, this could be taken as evidence that all the information one might expect to determine those speculative prices, was already reflected in the price.
In other words, it was strongly believed by everyone involved that the efficient markets theory provided what can only be described as an informational free lunch. For the cost of establishing a market in traded securities (which is something that an advanced capitalist society would have to do anyway, in order to provide the convenience to investors of being able to convert their claims on long-term investment projects into liquid funds), we could gain, Ginsu-knives-style, a Delphic oracle that would also give us free information about the future and about cash flows. Belief in the existence of this sort of informational free lunch, by the way, is at the heart of James’s book “The Wisdom of Crowds”, and will also be at the heart of my review essay on it, which I am still procrastinating over. Suffice it to say that the use of the phrase “free lunch” is intended to make the reader suspicious as to whether one should uncritically accept such a miraculous and politically convenient (for a number of people) piece of theorising as the efficient markets concept.
Once the world was in the grip of efficient markets as a concept (and the IMF was in the grip of what Joe Stiglitz has memorably called “third-rate students from first-rate universities”, a first-rate university being what Chicago indisputably is), it was hardly surprising that the concept would find itself being applied to the most pressing capital allocation problem of the age; the need to industrialise the Third World. “Development economics” as a sub-field of economic geography rapidly withered on the vine – and who wouldn’t let fruit wither on the vine if they had a free lunch? – as it became conventional wisdom that all one had to do was to open up a country’s capital markets to foreign speculation and sit back and watch. The idea being that markets were efficient, so capital would flow naturally to where it could be best used, in the process providing a better signal to emerging market governments about the correctness of their policy mix than mere humans at the IMF could ever offer. That was the plan.
Well, we all know how that worked out. A “bubble” and “panic” are all rather amusing when they affect funny speculative technology companies with amusing names and young arrogant staff in Aeron chairs. When they affect an entire economy, throwing millions out of work in a country with no functional welfare state, they are rather less of a laugh. And the “emerging markets” (as developing countries were rebranded) proved to be extraordinarily susceptible to booms and crashes. What happened was that when First World interest rates fell, money would flood into a developing country which had become fashionable, vastly in excess of any realistic assessment of the productive investment opportunities there. Because it couldn’t be credibly invested, this surplus money would usually finance a consumption boom, a current account deficit and an appreciation of the exchange rate. After a while, the country would become “uncompetitive” and the money would flood back out again, leaving the economy to deal with a now-unsustainable current account deficit. The only way to deal with this would be to push up the domestic interest rate, which usually had the effect of causing a serious recession. If the country was unlucky, this recession could easily lead to a bank collapse, the proposed solution to which was to “open up the banking sector to foreign capital”. Lather, rinse, repeat.
It’s a tragicomedy which has played itself out across Latin America and Southeast Asia several times. The efficient markets theory was an academic theory which had genuine, massive human consequences, and they were all baleful. Which is why the academics who are still driving stakes into the heart of the efficient markets theory are doing vital work, and why Crooked Timber will, hopefully, continue to put a few digs in ourselves if we see it popping up shoots on the Internet.
Which is the answer to the last point raised in the bulleted list above. In many important cases, there quite clearly is a better alternative method of capital allocation than unfettered free markets. Markets have their place in the development of an economy, but only as part of an overall policy mix which includes careful sequencing of deregulation, oversight of institutional development and which often requires capital controls and other overrides or “circuit-breakers” when free market solutions appear to be delivering a destructive or irrational allocation. And this all requires a little thing known as planning. It’s a difficult problem to solve, to be sure, but no more difficult than brain surgery and not so difficult that Chile, China and Malaysia don’t appear to have done rather better at it than geodemographically similar countries who have left themselves to the tender mercies of unfettered free markets.
There’s a cottage industry within economics involving the production of historical arguments giving rational1 explanations of seemingly irrational historical episodes, of which the most famous is probably the Dutch tulip boom/mania. This Slate article refers to the most recent example, a complex argument regarding changes in contract rules which seems plausible, but directly contradicts other explanations I’ve seen.
Once opened, questions like this are rarely closed. Still, articles of this kind seem a lot less interesting in 2004 than they did in, say, 1994. In 1994, the efficient markets hypothesis (the belief that asset markets invariably produce the best possible estimate of asset value based on all available information) was an open question, and the standard account of the Dutch tulip mania was evidence against it. In 2004, the falsity of the efficient markets hypothesis is clear to anyone open to being convinced by empirical evidence.
We have seen billion-dollar valuations placed on companies that proposed to home-deliver dogfood at prices lower than those charged in discount stores. We’ve seen unimportant subdivisions of profitable companies valued at more than the companies themselves. We’ve seen a dozen different companies simultaneously priced at levels that made sense only if they were each going to monopolise the industry in which they were competing. And don’t even get me started on the US dollar bubble (now burst) or the bond bubble (still inflating).
In summary, the efficient markets hypothesis is contradicted by our own recent experience far more thoroughly than by anything that might or might not have happened in Amsterdam in the 1630s. Everybody who cared to look at the numbers coming out of markets in the late 1990s knew they were crazy, but it didn’t matter. Those who bet on an early return to sanity (George Soros, for example) lost their money. The only sensible course was to withdraw to the sidelines and wait the madness out.
It’s true that dramatic episodes like the dotcom mania don’t happen all the time. But even one such episode, occurring in a well-developed and sophisticated financial market like that of the US in the late 1990s, is sufficient to undermine the assumption that asset markets ever yield the best possible estimate of asset values, except by chance.
1 That is, explanations consistent with individual rationality as defined by economists
Tyler Cowen1 lists a number of economic propositions which he formerly believed, but has abandoned in the light of contrary evidence. Most of these propositions were elements of the economic orthodoxy of the 1980s and 1990s, variously referred to as Thatcherism, neoliberalism, the Washington consensus and, in Australia, economic rationalism. They include the efficacy of monetary targeting, the beneficence of free capital movements and the desirability of rapid privatisation in transition economies.
Following in the same spirit, I thought I’d list a couple of propositions on which I’ve changed my mind in the face of empirical evidence. These are elements of the Keynesian orthodoxy of the 1950s and 1960s, on which I was trained. Following Cowen, I’ll list them as false claims I used to believe:

1. That there is a stable long-run trade-off between inflation and unemployment which policy can exploit.

2. That discretionary fiscal policy can be used to fine-tune the economy.
On both these issues, I’ve come to accept that Milton Friedman was largely right, and his Keynesian opponents largely wrong.
On the first, I think Friedman’s victory was total, although the supposed implication that there exists a “natural rate” of unemployment at which inflation remains stable has proved equally unreliable.
On the second, Friedman was absolutely right in talking about “long and variable lags” and rejecting the idea that it is possible to “fine-tune” the economy. This doesn’t mean that fiscal policy is of no value. In particular, the fact that budgets naturally go into deficit when the economy turns down provides a measure of automatic stabilisation. And when a deep recession lasts for more than, say, a year, there’s time to bring discretionary fiscal policy into play. The suggestion, by Nick Gruen and others, of some form of independent body to manage discretionary fiscal policy, analogous to that of the Reserve Bank in monetary policy, has a lot of appeal for me. Still, this is a long way from the kind of mechanical Keynesianism I was taught a few decades ago.
Combining my concessions and Tyler Cowen’s, it’s evident that there is some process of convergence in the beliefs of economists, though with a lot of oscillation. The breakdown of Keynesian economic management during the crisis of the 1970s, and the general ‘fiscal crisis of the state’ that occurred at the same time, validated many of the criticisms made by Friedman and others of the Keynesian and social-democratic orthodoxy of the postwar period. But it produced an overreaction, most notably in the extreme claims made by advocates of rational expectations macroeconomics, but also in overblown claims about the merits of privatisation and deregulation. These have gradually lost favour as their empirical weakness has been exposed by events.
1 Jason Soon has also noted this piece
Coming in a bit late, I have the opportunity to survey a range of blogospheric discussion of the topic of minimum wages, which largely supports the view (not surprising to anyone but an economist) that minimum wages are good for low-income workers. The traditional view among economists was that minimum wages reduced employment and thereby harmed workers, but this view has been overturned, or heavily qualified, by empirical evidence, beginning with the work of Card and Krueger.
The debate kicked off with a piece by Stephen Landsburg in Slate, noting that recent US econometric studies had failed to find economically and statistically significant negative effects on employment resulting from higher minimum wages. This was surprising, in view of a range of earlier studies which found right-signed effects, but were statistically weak because of small samples. Landsburg argues that this might be an example of publication bias, in which studies with no statistically significant results tend to get discarded. He concludes:

“Now that we’ve re-evaluated the evidence with all this in mind, here’s what most labor economists believe: The minimum wage kills very few jobs, and the jobs it kills were lousy jobs anyway. It is almost impossible to maintain the old argument that minimum wages are bad for minimum-wage workers. In fact, the minimum wage is very good for unskilled workers. It transfers income to them.”

Landsburg then goes on to argue against the minimum wage on the curious ground that it’s a less transparent alternative to policies such as an Earned Income Tax Credit. Brad de Long responds, endorsing the EITC, but arguing that minimum wages are also an effective policy instrument for transferring income to the poor.
There are quite a few interesting responses. Steve Verdon develops Landsburg’s argument, pointing out that a minimum wage increase which raises the general cost of goods and services is like a consumption tax and has an associated deadweight loss. That’s true, but it’s also true of whatever tax may be used to finance the EITC. Robert Waldmann suggests changing the structure of payroll tax, but as he himself points out, his argument disregards the point that the budget is already in deficit. Tyler Cowen observes that increases in wages may be offset by reductions in working conditions. Interestingly, no-one seems to have defended the traditional view on empirical grounds.
An interesting and important question is whether these results can be transferred to other countries like Australia, where the minimum wage is higher relative to average weekly earnings. In the survey of the literature we did for the National Wage Case, Steve Dowrick and I concluded that, although there might be some reduction in employment and some leakage to low-wage workers in high-income households, the evidence showed that minimum wages help low-income workers. Our study is here (PDF file).
Overall, my view is close to that of Brad de Long. Minimum wages are a useful policy instrument, but by no means the only or most important one, to improve the position of low-income workers.
Update Jacob Levy asks, reasonably enough:

“If, as Landsburg claims, the published studies are ‘all in agreement’ about the direction of the effect, then the underlying distribution of studies can’t be as he describes it, can it? Publication bias in favor of significant findings, superimposed on an actually-neutral relationship ought to generate equal numbers of ostensibly-significant findings in each direction.”

Actually, the Card and Krueger study found weak positive impacts of minimum wages on employment using a data set where most of the obvious sources of bias had been removed. There may have been earlier studies with similar results, but they would almost certainly have been discarded, on reasonable grounds of weak statistical significance or omitted variable bias. By contrast, studies with similar weaknesses, but with the expected sign, would have been published.
I haven’t seen that Michael Moore film yet; there were special previews in London on Sunday, but you couldn’t get a ticket for love nor money1. It strikes me, however, that those critics of the film who are currently doing such a sterling job (by using words like “deceits”, “cunningness”2 and “misleading”) in convincing me that there are no actual factual errors in it, are failing to look at the big picture.
The big advantage of the “he’s implying this without saying it” critique, and the main reason I use it myself so often, is that since he isn’t saying it, you can choose for yourself what you want to claim he’s implying. For example, Jane Galt is cutting up rough about the timing of various Carlyle Group investments, compared with the timing of George Bush Senior joining the board. And indeed, Moore’s film would be deserving of censure if he had been attempting to make the claim that there were specific quids pro quo on those specific deals. But he doesn’t actually make that claim, as far as I can see. Now he might have been attempting to imply that claim without making it, which would be bad. But he might just have been using the revolving door between defence contractors, large investors and the highest echelons of government, to support the following assertion:
Wealthy individuals and capital have far too much influence in American politics, and members of the Bush family have provided numerous examples of this proposition.
Which would not be bad. Pace my esteemed colleague Mr. Bertram, the reason why Bush’s misleading implications are not on the same footing as Moore’s tendentious use of the facts is that Bush was attempting to establish a specific false claim (that Saddam Hussein was a threat to the USA) while Moore is attempting to support a general claim of opinion (that Bush as President has been bad for the USA and Americans should vote for someone else).
Footnotes:
1Although actually, I can’t be sure of this since I only really offered money.
2The word is “cunning”, btw.
If you have a coffee break today, why not spend it reading this wonderful piece. RA Radford was an economics don who ended up in a POW camp toward the back end of the second world war, and wrote this article in Economica describing the experience from the economic point of view. If you’ve already read it then congratulations; you clearly went to the right kind of university. Otherwise, it’s a treat.
While chasing up the Radford reference, I happened across this blog btw. I happen to know a couple of things about Chavez-era Venezuela, and this news source, pretty uniquely, checks out as honest on all the areas where I was able to check. The author is a bit less charitable toward Chavez than I am inclined to be (so hate me, I’m inclined to cut totalitarian socialist regimes a bit more slack when they’re faced with massive externally-funded subversion), but he gets the big picture right; Chavez, like modern Castro, is a narcissist and a very poor poster-child for Socialism indeed, but his opposition is woefully lacking in any positive policy prescriptions other than handing everything over to foreign vested interests. Rather a long coffee break if you decide to read both of these, I admit.
This question comes via Rob Schaap and a letter to the Guardian, but it’s an issue on which I have a sorta-kinda claim to first publication.
The issue is this; does the current “sovereign” Iraqi government have sufficient sovereignty to enter into financial contracts which would be considered binding on future Iraqi governments? In particular, does it have the power to sell state assets, to allocate telecommunications licenses and to incur debts? And if it does, then given that it is not a democratically elected government, but one appointed by two countries (the US and the UK) with substantial economic interests in Iraq, is this not something of a scandal? I’d be very grateful if any readers who know more about Iraq than me could shed some light on this one.
Favorite moment from the Enron Energy Traders gloatfest:
Employee 1: He just f——s California. He steals money from California to the tune of about a million.
Employee 2: Will you rephrase that?
Employee 1: OK, he, um, he arbitrages the California market to the tune of a million bucks or two a day.
Always nice to see those technical financial terms (“f——k”) explained in terms the layman can understand (“arbitrage”). I’m sure the whole thing was the fault of a few bad apples.
John Kenneth Galbraith’s greatest contribution to economics is the concept of the bezzle - the increment to wealth that occurs during the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not understand that he has lost it. The gross national bezzle has never been larger than in the past decade.
We’ve had more than a few things to say about the Iowa Electronic Markets over the life of Crooked Timber. In particular, John and I have defended the view that these markets do not appear to offer marginal information above and beyond published opinion polls.
Some would say that this is fighting talk, and that if we really thought this, we ought to be trying to make some money out of it. So here goes …
Big thanks to Nasi Lemak for sharing a dataset of historical poll data with me. I have used that data to construct and backtest a trading system for the IEM Kerry vote-share contract (KERR) which uses only published poll data and generates favourable backtesting results over the last four months. The equity curve for this system so far is below the fold; I plan to use it to trade the IEM vote-share market over the rest of the campaign.
I’m not giving away any details of the system just yet; I’ll reveal them all at the end of the experiment. When the winner-takes-all market is launched, it is quite likely that I will bring another system into play based on trading the WTA contracts as if they were derivatives on the VS market, however that will not affect the current system which is entirely based on publicly available poll data. If I refine the system (currently it only makes use of Bush’s “Approval” ratings, whereas going forward I’d like to drive it off the approval/disapproval spread. I’d also like it to be able to double up and reduce bet size; currently it’s a pretty crude on/off switch), I’ll keep a record of what I did and when. But the proof of the pudding will be in my final P&L; I think that the IEM follows polls rather than leading them, and this is my way of finding out if I’m right. I’ll update as and when the system suggests I trade.
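Purely to illustrate the general shape such a rule can take, here is a made-up sketch. This is emphatically not the actual system, whose details I’m withholding until the experiment ends; the function name, the threshold and the averaging window below are all invented for the example:

```python
# A hypothetical on/off poll-driven trading rule, for illustration only.
def kerr_signal(approval_history, window=5, threshold=48.0):
    """Return 'long' or 'flat' for the KERR vote-share contract.

    approval_history: Bush approval readings, oldest first.
    Invented logic: if Bush's recent average approval is below the
    threshold, be long Kerry's vote-share contract; otherwise stay out.
    """
    recent = approval_history[-window:]
    average = sum(recent) / len(recent)
    return "long" if average < threshold else "flat"

print(kerr_signal([50, 49, 48, 47, 46, 45]))  # -> 'long'
```

The real system differs in both its inputs and its rule; the point is simply that everything is mechanical and driven by published data, with no discretion on my part.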
As of today, as the chart above shows, the system is long KERR, which I bought this morning at $0.495. I’ve entered into this trend fairly late, so I’m more or less expecting a few drawdowns early in trading, but if the system works, it ought to be well in profit by November. Wish me luck …
When I first studied economics (a long, long time ago) the textbook explanation of why income differed between countries was based on capital. In the simplest version (for example, that of Harrod and Domar), rich countries had a bigger stock of capital than poor countries, and the problem was one of accumulating sufficient capital to catch up. In more sophisticated versions, rich countries had more modern capital stocks, and therefore benefited from embodied technological progress.
Even when I was a student, this kind of thinking was already being superseded by notions such as human capital theory1. Still, I’ve never seen a really convincing refutation. It strikes me that computers and the Internet provide one, at least as far as differences among developed countries are concerned.
Taking the United States as the leading country, by how many years does its stock of computers and Internet connections lead that of other members of the OECD? Take a relatively poor and poorly-connected country like Spain for comparison. In 2001, only 9 per cent of the Spanish population was connected to the Internet, a figure that elicits the description “technological backwater”. Although estimates vary widely, it seems reasonable to suggest that the US reached this figure around 1995. It also seems reasonable to suggest a similar (roughly two-generation) US lead in relation to computers. That is, in the sector where its lead is arguably greatest, the US is six years ahead of Spain.
Now, if we suppose that labour productivity grows by around 2 per cent per year (this is generous), the combination of larger capital stocks and embodied technological change might account for a difference in income per person of around 12 per cent. This is comparable to the difference associated with differential timing of business cycles (two years of boom vs two years of recession), that is, it can be disregarded for most practical purposes.
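For anyone who wants to check that back-of-the-envelope sum, here it is in code (the 2 per cent growth rate and the six-year lead are the assumptions stated above):

```python
# How big an income gap does a six-year technology lead imply, if embodied
# technical change drives labour productivity growth of about 2% a year?
growth = 0.02    # assumed annual productivity growth (generous)
lead_years = 6   # estimated US lead over Spain in computers/Internet

gap = (1 + growth) ** lead_years - 1
print(f"Implied income gap: {gap:.1%}")  # ~12.6%, i.e. around 12 per cent
```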
I’m leading up to an argument that, in an important sense, the process of convergence among developed countries is complete, or close to complete. Differences in income per person, on average and for different groups within society, reflect differences in individual and social choices rather than the kind of differences typically considered in growth theory. I’ll try to develop this further in subsequent posts.
1 Despite its origins in Chicago, and the unattractive nature of its central metaphor, human capital theory leads directly to social-democratic policy conclusions.
I’ve been meaning for a long time to collect my thoughts about US interest rates, and where they are and should be going. As is often the case, I’m largely in agreement with Paul Krugman, at least as far as long-term rates are concerned. On the other hand, I’m a bit more hawkish in relation to short-term rates than Brad DeLong, with whom I agree on a lot of things.
I’m planning on reworking this piece as I have new thoughts, and in response to comments. so please treat it as a work in progress.
Warning: long and boring (but maybe scary) post over the fold.
Much of the discussion has the same confused character as debates about the desirability of budget deficits. The essential problems are similar. In the short run, both interest rates and budget deficits can be controlled by governments (central banks count as part of government for this purpose). Other things being equal, low interest rates and budget deficits tend to stimulate economic activity, and are therefore appropriate when the economy is in recession1.
In the long run, however, government budgets must balance2. Similarly, interest rates must be determined by the intertemporal consumption plans of consumers and the available opportunities for investment. The problem is that the long run can be a very long time coming and no-one knows when it begins. Even the 10-year bond rate is clearly affected by judgements about the policy stance of the central bank.
The link between short-term rates and long-term rates can be seen by considering arbitrage or, as it’s sometimes called, the ‘carry trade’. If a central bank is committed to keeping short-term rates at, say 1 per cent, but market forces dictate a long-term rate of 5 per cent, speculators can make as much money as they want by borrowing short and lending long.
Another way to look at it is to note that ten years is just (about) 40 terms of 90 days. So, on average, the annualised rate of interest on 90-day loans has to be about the same as the 10-year bond rate, sometimes higher and sometimes lower. Because of transactions costs and risk premiums, arbitrage isn’t costless, so there isn’t an exact equality. In general, short-term rates are a bit lower than long-term rates, but a large gap can’t be sustained indefinitely.
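Here’s the averaging point as a small sketch; the path of short rates below is invented purely for illustration:

```python
# The 10-year rate as (roughly) the average of 40 consecutive 90-day rates.
short_rates = [0.01] * 8 + [0.03] * 16 + [0.05] * 16  # 40 quarters, annualised

growth = 1.0
for r in short_rates:
    growth *= (1 + r) ** 0.25  # each 90-day term is a quarter of a year

implied_10yr = growth ** (1 / 10) - 1
print(f"Implied 10-year rate: {implied_10yr:.2%}")
# ~3.4%, close to the simple average of the short rates. A 10-year rate far
# above the expected path of short rates is what the carry trade arbitrages away.
```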
The interest rate problem is therefore really two problems. First, what is a reasonable value for the rate of interest in the long run? Second, given that the short-term rate is currently below the long-term rate, how soon should it be increased?
The first question itself is in two parts. The face-value or nominal interest rate is in part a compensation for future inflation, and in part a real interest rate, reflecting the existence of profitable investment opportunities, and the impatience of consumers. The real interest rate has generally been somewhere between 2 and 4 per cent. Given that savings rates are exceptionally low at present in the US and elsewhere (denoting high levels of impatience) the rate ought to be at the high end of the range, especially if you believe that technological progress has opened up lots of investment opportunities.
As regards inflation, the combination of low short-term rates and exploding budget deficits is bound to produce an acceleration if it persists long enough. Given that there’s a significant chance of a rapid acceleration, and that the bogey of deflation has now largely disappeared, it seems reasonable to pick an average rate of around 3 per cent.
Combining the two suggests that the long-term nominal rate of interest for the US ought to be between 6 and 7 per cent. The ten-year bond rate is currently just below 5 per cent, and was below 4 per cent until quite recently, reflecting the influence of very low short-term rates. But the same reasoning implies that, at some point, the ten-year bond rate is likely to overshoot the equilibrium range.
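The arithmetic, for the record (the inputs are just the ranges picked above):

```python
# Nominal rate = (1 + real) * (1 + inflation) - 1, using the high end of the
# historical 2-4% real-rate range, as argued above, plus 3% average inflation.
inflation = 0.03
for real in (0.03, 0.04):
    nominal = (1 + real) * (1 + inflation) - 1
    print(f"real {real:.0%} + inflation {inflation:.0%} -> nominal ~{nominal:.1%}")
# ~6.1% and ~7.1%: hence the 6-7 per cent range for the long-term nominal rate.
```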
Coming to the short-term rate, there is a trade-off between the need for stimulus now and the inevitable price of higher rates in the future. There’s been a big dispute between those, like The Economist, who want to put rates up immediately and those, like Brad de Long, who want to keep them low while employment remains depressed (one reason may be disagreement about how far the economy is from its ‘natural’ equilibrium).
What would be the consequences of an increase in short-term and long-term interest rates? Higher short-term rates would depress consumption, particularly things like purchases of new cars. This could be problematic for Ford and GM, which are essentially finance companies with a manufacturing arm these days.
But the real puzzle relates to long-term rates and mortgages. Most US homeowners are in the enviable position of having fixed-rate loans which they are free to refinance if they wish. This is an amazingly generous one-way bet, but it’s not clear who is on the other side of it. The securitization and hedging of mortgages has become so complex that no-one knows who really holds them. Some mortgages are even more favorable than this, being assumable (that is, they can be passed on to a new buyer).
If interest rates rose a lot, refinancing would stop pretty quickly, though there may be a last-minute flurry as people try to lock in rates in anticipation of an increase. Moreover, homeowners with non-assumable mortgages would be forced to stay put, since moving would entail taking on a new mortgage at a much higher rate. The big problems, though, would be on the other side of the market, where the mortgagees would have assets that, on standard analysis, might have halved in value. I discuss this a bit more here.
There are a lot of other scary possibilities relating to derivatives markets. These haven’t been seriously tested since the big expansion of the 1990s. Most people seem to think everything will be OK, but no one can be sure.
1 Some economists (for example, supporters of the new classical model) dispute this, but I don’t intend to debate this point here.
2 Under standard accounting conventions, governments can run deficits forever, but in economic terms, either the budget must balance, or public debt will follow an explosive path leading inevitably to repudiation. Roughly speaking, the appropriate economic definition of a balanced budget is one consistent with a stable ratio of public debt (net of income-earning assets) to national income.
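A minimal sketch of the debt arithmetic behind this footnote, with all the numbers invented for illustration:

```python
# Debt/GDP evolves as b' = b * (1 + r) / (1 + g) + primary deficit share,
# where r is the interest rate and g the nominal growth rate (both assumed).
r, g = 0.05, 0.03

for primary_deficit in (0.00, 0.03):
    b = 0.40  # assumed starting debt/GDP ratio
    for year in range(30):
        b = b * (1 + r) / (1 + g) + primary_deficit
    print(f"primary deficit {primary_deficit:.0%}: debt/GDP after 30 years = {b:.0%}")
# With a zero primary deficit the ratio creeps up to ~71%. A persistent 3%
# primary deficit sends it to ~192% and rising - the explosive path leading
# to repudiation described in the footnote.
```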
Paul Krugman has a piece on oil. This is as good a time as any to put up a long post I’ve been working on about oil and whether it’s finally going to run short, points on which I broadly agree with Krugman.
Oil is the paradigm example of an exhaustible resource (there’s a charming, but apparently false, belief that oil comes from decayed dinosaurs). Whenever the price of oil rises sharply, then, it is natural to ask whether this is a mere market fluctuation or an indication of the impending exhaustion of the resource.
A couple of points of clarification are necessary before we come on to the main issues. First, the price of oil is typically quoted in $US/barrel, for some specific grade of oil such as West Texas light sweet crude. This need not be an accurate indicator of the cost of oil in general, because of variations in the purchasing power of the US dollar and because the relative prices of different types of oil fluctuate. The current upsurge in prices is due in part to the devaluation of the dollar against other major currencies and also in part to a particular shortage of the light grades of oil most suitable for producing petrol.
Second, oil will never simply ‘run out’. As the supply of any commodity declines, prices increase and, for relatively low-value uses, the costs exceed the benefits. Where they are available, low-cost substitutes become more attractive. Before the 1973 increase in prices, oil was commonly used as fuel in electricity generation and home heating. Following the increase in prices, most oil-fired power stations were converted to gas or coal. Where natural gas was readily available, the same was true of home heating. The relevant question then, is not whether oil will run out, but whether it will become so scarce as to be uneconomic in its main uses, the most important of which is as fuel for motor vehicles.
Critics of predictions of resource exhaustion have plenty of history on their side. In the 19th century, the eminent economist W.S. Jevons predicted the imminent exhaustion of reserves of coal. He was wrong, as were a series of subsequent prophets of resource exhaustion, most notably Paul Ehrlich and the Club of Rome in the 1970s. Time after time, scarcity has been met by new discoveries and by improvements in resource technologies that have made it economic to extract resources from sources that were once considered valueless. In the case of oil, the estimate of ‘proven’ reserves in 1973 was 577 billion barrels. The Club of Rome pointed out that given projections of growing use, reserves would be exhausted by the 1990s. The economic slowdown from the 1970s onwards meant that the actual rate of growth was slower. Nevertheless, between 1973 and 1996, total usage was around 500 billion barrels. Yet at the end of the period, estimated reserves had actually grown to over 1000 billion barrels.
This is a pattern that has been repeated for many other commodities, and should give pause to any advocate of the exhaustion hypothesis. Nearly all the additional reserves came from upward revisions of estimates of reserves in existing fields. (This is seen by optimists as reflecting technological gains, allowing more secondary extraction, and by pessimists as reflecting a shift from conservatism to (excessive?) optimism in estimation procedures).
Believers in the exhaustion of oil reserves have some history on their side too. Their key exhibit is the Hubbert curve which is supposed to show that oil output from a field should peak about 25 years after discovery. If you buy this story, oil output should have passed its peak a year or two ago. The big success for the Hubbert curve was Hubbert’s 1956 prediction of the peak in US oil output around 1970.
The current period of high prices and short supply gives some support to advocates of the Hubbert Curve. The really striking events, however, have been those relating to reserves. For the first time, downward revisions to estimated reserves have become commonplace. The Shell company has been the most notably affected so far, being forced to announce a series of downward revisions in estimated reserves, apparently because of problems with Nigerian fields. But there have also been suggestions of similar problems in many other oil-producing countries, either because reserves have been overstated for political reasons, or because fields have been mismanaged.
Of course, some fields are still expanding. For example, new leases are being issued for deep water prospects in the Gulf of Mexico. But the very fact that such marginal prospects are being explored is an indicator that oil companies expect high prices to persist.
On balance, I think that current high prices are likely to persist and to rise over time.
Oil looms large in many geopolitical discussions. While claims that the Iraq war was ‘all about oil’ are unduly conspiratorial, it seems clear that, if it were not for the presence of oil, the Middle East would not be a central focus of US foreign policy. The 1973 OPEC ‘oil shock’ (an embargo imposed in protest against US support for Israel, followed by a quadrupling of prices) was widely blamed for the stagflationary recession of the 1970s, and was seen as indicating the strategic vulnerability of the West to attacks on its supply of oil.
Most of this is and was an illusion. In reality, the oil shock was a consequence rather than a cause of the collapse of the postwar economic order based on the Bretton Woods system of fixed exchange rates. A central element of that system, the convertibility of the $US into gold at the fixed price of $35/oz, had been rendered unsustainable by inflation, and had been abandoned in the early 1970s, beginning with the Smithsonian agreement of 1971. Increases in the price of other commodities, including oil, were an inevitable consequence. The price of wool, for example, had doubled before anyone outside the oil industry heard of OPEC.
Similar points apply to the supposed vulnerability of the West to the cutting off of oil supplies. An embargo similar to that imposed by OPEC in 1973 might necessitate some form of rationing, but this is scarcely the ‘moral equivalent of war’. It makes no sense to maintain military preparations for a possibility that could be dealt with by reducing consumption.
Still the fact that such things make no sense doesn’t mean they won’t happen. Permanently high gasoline prices will be a big psychological shock for US consumers and could produce some irrational responses, such as a desire to invade Middle Eastern countries.
It’s already 1 May in Australia, so I get to make what will no doubt be the first of many posts on the significance of the day.
First, and still the most important in the long historical view is the holiday (a public holiday here in Queensland) celebrating the achievements of the labour movement.
Second, there’s the admission of ten new members to the EU. As far as the historical significance of this event goes, I’m waiting to see whether Turkey is admitted to accession negotiations later in the year.
Thirdly, and of most immediate interest, the anniversary of Bush’s declaration of victory looks as good a time as any to date what seems increasingly certain to be a defeat [at least for the policies pursued for the past year, and for the objective of a stable, pro-American Iraq]. Of course, this judgement may turn out to be as premature as was Bush’s statement a year ago, but the decline in the US position has been almost as rapid as the collapse of Saddam’s regime, and the events of the last few days have seen the process accelerating.
Among a range of events the most important have included:
The Administration seems to be inching towards the position I’ve been advocating for some time - dumping the policies of Bremer and Chalabi (though not, unfortunately, Bremer and Chalabi themselves), and handing over real military power to Iraqis. If the interim (still inchoate) government has substantial real power, manages to hold early elections and can get enough support to permit a rapid US withdrawal, the outcome might not be too bad. But there’s very little time left, and this scenario assumes exceptionally skilful management of the situation from now on.
1 Predictably enough, there have been quibbles about this word. But mock-executions such as the one shown here are among the worst forms of torture - from my reading of survivor accounts, they are mentioned with more horror than beatings. And of course what we are seeing is only what the guards chose to photograph for their own amusement.
The Google IPO has now been announced, and there are some more figures to analyze. In addition, I wanted to talk a bit about the option, suggested by one of the commenters on Kevin Drum’s blog of arbitraging by short-selling overpriced dotcoms and buying those with more reasonable valuations. Finally, I wanted to look at what all this means for capital markets and therefore for capitalism.
Looking at this NYT report, that doesn’t seem likely to be an option:

“In 2003, Google reported an operating profit of $340 million on sales of $960 million. But the 2003 figure appears to understate the company’s cash profit margin, since it includes very high expenses related to stock options that will probably decline in future years. On a cash basis, Google had an operating profit of $570 million in 2003, and an operating margin of 62 percent. Given those figures, Google will easily command a market valuation of at least $30 billion, and perhaps much more. EBay, which had an operating profit of $660 million on sales of $2.2 billion last year, is valued at $54 billion; Yahoo, with sales of $1.6 billion and operating cash flow $428 million, is valued at $36 billion.”

I’m not an accountant, but I think the “operating profit” referred to here is EBITDA (earnings before interest, tax, depreciation and amortisation): in any case, it’s more than the profit accruing to owners of equity. So it appears that all of these well-established businesses are valued at more than 100 times annual earnings.
As I recall, the ratio for profitable companies during the hyperbubble was around 400, so some progress has been made. But these values still look bubbly to me. To match an investment in 10-year bonds, without allowing for any risk premium or for the inevitable increase in long-term interest rates, all these companies need to more than quadruple their earnings, then maintain those earnings for at least 20 years. Maybe Google can do this, and maybe Yahoo can do it, but it’s most unlikely that both of them can.
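The arithmetic behind “more than quadruple”, as a sketch using the round numbers above:

```python
# To justify a price of ~100x earnings against a ~5% 10-year bond, the
# earnings yield (earnings/price) has to reach roughly the bond yield.
price_earnings = 100   # roughly where these businesses are valued
bond_yield = 0.05      # 10-year bond, with no risk premium allowed at all

required_earnings_multiple = bond_yield * price_earnings
print(f"Earnings must rise roughly {required_earnings_multiple:.0f}x")  # ~5x
# ...and then stay there for ~20 years, since an asset yielding 5% takes
# twenty years to return its purchase price.
```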
At one time, I would have tried hard to think of an explanation consistent with some notion of aggregate market rationality, in which capital markets allocate capital to its most productive use. In the light of the evidence of the last ten years - the dotcom bubble, the US dollar bubble, the (still continuing) bond bubble - I no longer bother. Capital markets are driven by fashion (in this case, the continuing desire to be part of the Internet happening, in the face of mounting evidence that it provides almost exclusively public goods), fear and greed. On average, capital markets do a better job than Soviet central planners, but I think they do less well than the mixed economy that was dominant during the postwar Golden Age.
Eventually, no doubt, reality will prevail. If I knew that was going to happen within the next twelve months, I’d be shorting the remnants of the dotcom sector for all I was worth. But, as Keynes apparently didn’t say, the market can stay irrational longer than you can stay solvent.
I thoroughly recommend this article in the New York Times. While I have no particular opinions on the management of the Maine state pension fund (well, if you really needed one, I daresay I could get some for you cheap rate), it’s a nice and clear explanation of an interesting little part of an issue that I’ve always thought the plain man should be more interested in than he in fact is.
I’m working on a piece on the Iowa Electronic Markets in my copious spare time at the moment. Just as a warm-up, here’s a few questions for finance mavens.
1. In the 1996 Presidential vote-share market, after the candidates have been nominated and adopted, what should the sum of the values of the VS_CLINT (Clinton vote share) plus VS_DOLE (Dole vote share) contracts be?
2. What percentage return would you have made in the 2000 winner-takes-all market by buying the BUSH contract at the point when GORE was at its peak and holding to maturity?
3. You hold a portfolio in the current 2004 Presidential vote-share market, long BU|KERR but short BU|CLINT. If George Bush were to announce tomorrow that he had decided to withdraw from the race, what would be your profit or loss?
Answers below the fold. Historical price and prospectus data available on the IEM website.
These are all trick questions, in case you hadn’t guessed …
1. In the 1996 Presidential vote-share market, what should the sum of the values of the VS_CLINT (Clinton vote share) plus VS_DOLE (Dole vote share) contracts be?
If you answered 1.00, 100% or something similar, you’re wrong, and if you’ve got a finance qualification you should give yourself a slap on the wrist. That’s one answer that has to be wrong.
If you answered “100% minus Perot’s share of the vote”, then good try, you’re thinking, but wrong. The Iowa vote share markets are the share between the Republican and Democratic candidate; shares of third-party candidates don’t count. This is sometimes very relevant indeed; see below.
The correct answer is 1.00 minus the time value of money. Because you have to pay your money upfront to receive the return after the election, a portfolio of 1 Dole plus 1 Clinton should sell at a discount to face value reflecting that fact.
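A rough sketch of the size of that discount; the interest rate and time to expiry below are invented for illustration:

```python
# A portfolio of 1 Clinton + 1 Dole vote-share contract pays exactly 1.00
# at settlement, so today it should trade at the discounted value of 1.00.
r = 0.05               # assumed annual interest rate
years_to_expiry = 0.5  # assumed six months until the contracts pay out

fair_value = 1.00 / (1 + r) ** years_to_expiry
print(f"Fair value of the pair: ${fair_value:.3f}")  # ~$0.976, not $1.00
```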
2. What percentage return would you have made in the 2000 winner-takes-all market by buying the BUSH contract at the point when GORE was at its peak and holding to maturity?
You would have lost all your money. The 2000 winner-takes-all contract paid out on Gore, not Bush. There are two factors at work here.
a) Although it is called “winner takes all”, the WTA market is not actually a betting market on the winner. It is a market in binary options which pay 1.00 if the underlying vote-share contract is above a strike price of 0.50.
b) Gore won the popular vote by a small margin. If you strike out Nader’s votes (as you have to for the VS market calculation [Update: Oh do you really, laughing boy? See below]), this margin became significant.
3. You would neither win nor lose anything. BU is simply an identifier meaning “the Republican candidate”.
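To make point (a) under question 2 concrete, here is the WTA payoff as a sketch (the vote shares in the example are illustrative, not the official figures):

```python
# An IEM 'winner-takes-all' contract is a binary option on the two-party
# vote share, with a strike of 0.50 - not a bet on who wins the election.
def wta_payoff(two_party_vote_share, strike=0.50):
    """Pays 1.00 if the candidate's two-party vote share exceeds the strike."""
    return 1.00 if two_party_vote_share > strike else 0.00

# Gore 2000: a two-party popular vote share just over 0.50, so the Gore
# contract paid in full and the Bush contract expired worthless.
print(wta_payoff(0.502), wta_payoff(0.498))  # 1.0 0.0
```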
All of which is by way of suggesting that, whenever you’re doing research on any kind of financial time series, you need to be very careful indeed about the specification of the contracts involved. Lots of academic studies comparing mutual funds to the S&P500 index do not observe this basic requirement, by the way.
UPDATE Actually, my answer to 1 was only correct for the 1996 market. In 1992, there was a Perot vote share market, and in 2000 a Buchanan VS market, and both times the vote shares were calculated three ways. Consider me slapped.
Back in December I wrote:
“[…] the proposed ‘Policy Analysis Market’ (which claims on its website that it’s going to launch in March; sadly there is no currently existing futures market which allows me to bet that it won’t)”
Historical note; it didn’t.
BTW, for those who care about that sort of thing, while we’ve expressed plenty of scepticism over the marginal value of election betting numbers in the past, they are probably no worse than polling numbers and available with greater frequency. Bush is currently more or less holding steady on IEM, but weakening on Tradesports. Note that these two figures are not directly comparable, as the IEM contract is for vote share while the Tradesports one is “winner take all”.
As many bloggers have noted over the years, one of the weaknesses of modern journalism is that in a political campaign journalists feel compelled to try and present an even-handed picture when evaluating the claims made by the leading candidates, even when one side is exaggerating while the other side is simply making things up. This CNN/Money article is a classic of the genre.
Here’s the lead paragraph.
John Kerry says the nation’s household income situation is miserable. George Bush says it’s improving. Economists say the truth is somewhere in between.
When we get to the actual data, we find the following: before tax, median household income has fallen 3.3 percent in real terms since 2000. That looks pretty miserable to me.
Ah, but there’s a twist. When you add in the tax-cut-and-spend policies of the Bush administration, the after-tax fall is only 0.6 percent. Now it’s not clear why this isn’t a miserable situation, just as Kerry said. For one thing, real household income is supposed to rise. Remember, 0% GDP growth over any extended period of time is a disaster by almost any measure, so flatline household income is not, or at least should not be, a neutral baseline. Flatline is a disaster, and even by tilting the playing field in their favour, the Bushies don’t rise to that level. Miserable, just as Kerry said.
Tilting the playing field? Well, yes, because there’s another twist. It’s hard to quantify (to say the least) but at some stage there is a cost to ordinary households in the increased deficits. If there wasn’t, we could increase median household income by a few percent by simply doubling the standard deduction, and there would be no real cost to average citizens. (By the way, my back-of-envelope calculations suggest doubling the standard deduction couldn’t have cost much more than the actual Bush cuts, and would have done a lot more good.)
To put this in some perspective, the budget deficit last year was approximately $4000 per household. If you assign that debt to each household, the median income has fallen by something like 10% since Clinton’s time. Now that is obviously unfair because that can’t be exactly how the deficit is made up, but it’s clear that on any reasonable assignment of the costs of deficit financing the overall performance is much worse than the 0.6% fall that CNN/Money is prepared to run with.1
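For those following along at home, the back-of-envelope version (the median income figure is my round-number assumption, not official data):

```python
# Assigning the deficit to households, per the (admittedly unfair) thought experiment.
deficit_per_household = 4000   # approximate deficit per household, from above
median_income = 43000          # assumed round figure for median household income

implied_extra_fall = deficit_per_household / median_income
print(f"Implied extra fall in median income: {implied_extra_fall:.0%}")  # ~9%
# Add the 0.6% after-tax decline and you get 'something like 10%' since Clinton.
```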
In this case there was no need to split the difference between what the two camps were saying. Kerry was simply right. Economic performance under Bush has been miserable.
1 To be clear, I don’t oppose deficit financing in the right circumstances, if done correctly. Something like increasing the standard deduction, or increasing the EITC, or increasing the amount for exemptions, would have been quite appropriate in 2001. I’ve never been a particularly strong deficit hawk. But the actual budgets that were passed, blowing an historically large hole in the budget while not even increasing median household income over what it was under Clinton, were unconscionable.
Forbes is the latest magazine trying to capitalize on the blogging thing by holding a best blog competition across various categories. It’s interesting to note that no less than 53% of voters say that the best blog on the economy is “none of the above” (no other entry gets more than 11%). I imagine that the glaring absence of a certain Berkeley economics professor from the shortlist helps explain this rather peculiar outcome … (via The Decembrist)
The idea that the war in Iraq is a necessary part of the struggle against terrorism is probably the biggest single factor in the case supporting the war. Both political leaders and pro-war bloggers have made repeated claims that the overthrow of Saddam Hussein constitutes progress in the “War against Terror”. A variety of arguments in support of this view have been proposed, most notably the ‘flypaper’ or ‘bring ‘em on’ theory that, by encouraging terrorists to fight in Iraq, the war made the rest of the world a safer place.
The most widely reported opinion poll in Australia is the Newspoll, which provides results for Rupert Murdoch’s News Limited papers (he has about half the Australian market). There was widespread discussion recently about a Newspoll showing that 65 per cent of people thought the war in Iraq had increased the danger of a terrorist attack in Australia1.
However, the really striking result was ignored. This concerned the proportion of people who accepted the claim, made repeatedly by the government here, that the invasion of Iraq substantially reduced the danger of terrorist attack. Only 1 per cent of respondents said that the invasion had made a terrorist attack “less likely”. The view that the war made an attack “a lot less likely” got an asterisk (less than 0.5 per cent). You can read the details here (PDF file).
This is substantially less than the proportion of people who are reported (in other surveys) to believe that Elvis is alive or that aliens are controlling government policy. In fact, by coincidence, another story a couple of days later reported an opinion poll for a mayoral election in which an Elvis ‘tribute artist’ has 8 per cent support.
I don’t think I’ve ever seen an opinion poll in which the position of the government on a central issue of foreign policy is supported by a fraction of the population too small to be reported.
1 The question doesn’t distinguish between the interpretations “our participation in the Iraq war has raised Australia’s profile as a target” and “the Iraq war has increased the risk of terrorism everywhere”. I have previously argued that the latter view is the right one.
Markets versus Politics - The Real Choice: … Too often policy arguments proceed as follows: A) politics “fails” because it does not produce the theoretically optimal result, therefore B) market processes are necessary. But B does not follow from A. The failure of government to produce an optimal result does not ensure that market processes will do a better job. From a social democratic perspective – or any perspective that is inherently suspicious of privatization – the burden should be on those advocating market processes to explain why the marketplace can be expected to produce a better result than the political process. In such an inquiry, the theoretical virtues of a basic equilibrium model of perfect competition are no more relevant than Pigouvian theories of government intervention. Both are blackboard abstractions that often have little bearing on what occurs in the real world. What matters is how privatization — and make no mistake, the subordination of political decisions to the marketplace is always political — is likely to affect the status quo ante, and whether the consequences of such intervention (and the attendant rent-seeking, transaction costs, etc.) constitute an improvement in the real world.
The introduction of market mechanisms into politics may be well intentioned, but that does not make it any more likely to generate positive results. Indeed, insofar as noble intentions leave the likely consequences of such interventions unexamined, such policies may make us all worse off.
(see here for original).
One of the justifications I make for the time I spend blogging is that it gives me a chance to try out arguments I use in my work. With that in mind, I’d very much appreciate comments on this short summary of the role of ideas and interests in explaining policy outcomes.
Economists seeking to explain policy outcomes use three main theories, sometimes explicitly and sometimes implicitly. These may be referred to as the public interest theory, the private interest theory and the ideological theory.
The public interest theory is rarely stated explicitly, but is implicit in much of the normative analysis of policy options. The central hypothesis of the public interest theory is that governments adopt policies that will maximise social welfare, subject to random error caused, for example, by ignorance about the issues. The utilitarian case for democracy is based on the argument that a government responsible to a democratic electorate will have an incentive to weight the interests of all voters equally, and will therefore promote the public good.
The private interest theory is commonly presented in conscious contrast with the public interest theory. The central hypothesis is that political outcomes are determined by interactions between interest groups, and that the relative weight of interest groups will be determined by factors such as the effectiveness of their organisation, rather than by their significance in relation to some well-specified social welfare function. Marxism (at least in its simple ‘vulgar’ form) and public choice theory share this central hypothesis, along with some versions of liberal pluralism.
The ideological theory is most commonly associated with Keynes’ dictum that “soon or late, it is ideas, not vested interests, which are dangerous for good or evil” (General Theory, page 383). On this view, it is changes in beliefs about the merits of policies such as privatisation, as opposed to changes in the actual costs and benefits or in the relative weight of competing interest groups, that do most to explain changes in policy outcomes.
Unless ideas are regarded as evolving independently of the real world, the ideological theory tends to reduce, in the long run, to some mixture of the public interest and private interest theories. If ideas about the desirability of a policy are adjusted in response to evidence (cf Keynes: “when the facts change, I change my mind. What do you do?”) then the public interest theory will be valid in the long run. If changes in ideas are determined by the rise and fall of dominant interest groups (as in many Marxist models, and in Schumpeter) private interest theories will be valid in the long run, even though people may believe themselves to be acting in the public interest.
Brad de Long correctly summarises the argument of my papers with Simon Grant. If you accept that the equity premium (the large and unexplained difference between the rate of return expected by holders of private equity and the rate of interest on low-risk bonds) is explained in large measure by the fact that capital markets do not do a good job in allocating and spreading risk, then the natural solution to all this is the S-word: Socialism, public ownership of the means of production. This is because risk can be spread more effectively through the tax system, and through governments’ capacity to run deficits during economic downturns, than through private capital markets. A very robust implication of the observed equity premium is that a dollar of investment returns received during a recession is worth two dollars received during a boom - this provides governments with a huge arbitrage opportunity.
But we economists love our ceteris paribus (all other things equal) clauses. At least one commentator noted my qualification that this argument applies “unless there are large differences in operating efficiency between private and public enterprises”. Since, in a wide range of businesses, public enterprises have not performed very well (my own home state of Queensland experimented with state-owned butcher shops in the 1920s), this seems to leave us in the realm of “on the one hand this, on the other hand that”. Fortunately there is a simple empirical test which enables us to balance these considerations, at least in relation to proposed privatisations. If the advantages of privatisation outweigh the difference in the cost of capital, and assets are sold in a competitive market, then the government should come out ahead by selling assets and using the proceeds to repay debt, thereby reducing its interest obligations.
In fact, this is rarely the case. In most cases, the interest savings from selling public assets are less than any reasonable estimate of the earnings foregone. And if you don’t like using estimated earnings, you can look at cases where assets were valued for privatisation, but then not sold. Again, governments came out ahead from not selling in most cases. It was this empirical observation, rather than theoretical analysis, that led me to the conclusion that the equity premium provides a case for public ownership.
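In stylised form, the test is a one-line comparison. All the figures below are invented for illustration:

```python
# Does selling a public asset pay? Compare the interest saved on retired
# debt with the earnings the government gives up by selling.
sale_price = 1000        # $m received for the asset (invented)
bond_rate = 0.06         # interest rate on government debt (invented)
foregone_earnings = 80   # $m per year the asset would have earned (invented)

interest_saved = sale_price * bond_rate
print(f"Interest saved ${interest_saved:.0f}m/yr vs earnings foregone ${foregone_earnings}m/yr")
# 60 < 80: on these numbers the government comes out behind by selling,
# which is the pattern observed in most actual privatisations.
```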
On the other hand, the kinds of enterprises where government ownership is common are, in general, those where you would expect the balance of considerations to lean towards public ownership. They are capital intensive, so a lower cost of capital is important and excess labor costs (for example, due to overstaffing) are not. In addition, they are often subject to fairly tight regulation for natural monopoly or essential-service reasons, which reduces the reward to entrepreneurial innovation.
The record of government ownership in other large-scale businesses is mixed (I mean this literally, not as a euphemism for ‘bad’). Brad notes that the US government made a pot of money by rescuing Chrysler in the 1980s, and the British government did the same for Rolls-Royce. But plenty of rescues have turned out badly (from memory, British Leyland didn’t do too well). And in these cases, the cost of acquisition was not great - the case for governments buying profitable enterprises outside the infrastructure sector (broadly defined) is not so strong.
The argument is clear-cut in the case of entrepreneurial businesses that don’t rely on outside equity. For such businesses, the incentive effects of having the residual flow to an owner-manager outweigh any considerations of risk sharing. Hence, as far as the considerations outlined above are concerned, there is no case for public ownership.
So, it turns out that the equity premium provides a case for the mixed economy, rather than for comprehensive socialisation. Given the generally successful performance of mixed economies (most notably between 1945 and 1970), there’s nothing paradoxical or surprising about this.
Brad DeLong has had a string of posts referring to the possibility that some or all of the US Social Security fund should be invested in stocks rather than, as at present, in US Treasury bonds, of which the most pertinent is this one. This idea first came up in a major way in Clinton’s 1999 State of the Union speech, and has since had some play on the Republican side, especially now that privatization via individual accounts seems to be off the agenda.
The key fact that makes the idea attractive is the equity premium, the fact that, historically, the rate of return to investment in stocks has been well above that in bonds. This used to be explained by the fact that stocks were riskier than bonds. But ever since the work of Mehra and Prescott in the 1980s it’s been known that no simple and plausible model of the social cost of risk that would be generated by efficient capital markets can explain more than a small fraction of the observed premium. The immediate response, that of finding more complicated, but still plausible, models hasn’t gone very far. The alternative explanation is that capital markets don’t do a very good job of spreading risk. For example it’s very hard to get insurance against recession-induced unemployment or business failure, even though standard models imply that this should be available.
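For a feel of the magnitudes involved, here is a rough sketch in the spirit of the Mehra-Prescott calculation (treat all the numbers as illustrative approximations to the long-run US data, not as the published estimates):

```python
# In the standard consumption-based model, the equity premium is roughly
# gamma * cov(consumption growth, equity returns).
sigma_consumption = 0.036  # std dev of consumption growth (roughly)
sigma_equity = 0.17        # std dev of equity returns (roughly)
correlation = 0.4          # correlation between the two (roughly)
gamma = 2                  # a 'plausible' coefficient of relative risk aversion

predicted = gamma * correlation * sigma_consumption * sigma_equity
print(f"Predicted premium: {predicted:.2%} vs roughly 6% observed")
# ~0.5% against ~6%: plausible models explain only a small fraction of the premium.
```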
Simon Grant and I have done a fair bit of work on this, with some specific attention to the Social Security issue. In this paper (large PDF file), published in the American Economic Review, we argued that substantial gains could be realized by investing Social Security funds in the stock market. We didn’t put a number on it, but I don’t find Brad’s half-embraced suggestion of $2.4 trillion in present value implausible.
An important point, though, is that investing in stocks will generally not be the best way to go, at least if the amount invested is large. A government agency holding, say, 20 per cent of the shares in Ford and General Motors would seem to have big problems. Leaving aside the specific institutional issues of the US Social Security fund, the obvious implication of the equity premium is that, unless there are large differences in operating efficiency between private and public enterprises, government ownership of large capital-intensive enterprises like utilities will be socially beneficial. The case is strengthened if monopoly or other problems mean that the enterprises have to be tightly regulated in any case. Again, Simon Grant and I have written this up, this time in Economica (PDF version available here)
My post a week or so ago considering (and ultimately rejecting) the hypothesis that the 2004 election might be a good one for the Democrats to lose raised plenty of eyebrows, but the ensuing debate helped to sharpen up my thinking on the underlying issue, that of the unsustainability of current US fiscal policy and the appropriate Democrat response.
In the original post I drew the conclusion that the only campaign strategy that would give a Democrat, once elected, any real chance of prevailing over a Republican Congress was that (supported by Dean, Gephardt, Kucinich and Sharpton) of repealing the entire Bush tax cut and starting from scratch. To the extent that primary voters considered this issue, they didn’t see it this way. With the possible exception of Lieberman, Kerry was the candidate most supportive of the tax cuts.
Like Bush, Kerry promises to cut the deficit in half over four years. He proposes to scrap the cuts for those earning more than $200 000, but to expand them for ‘middle-class families’, a group normally taken to include about 95 per cent of the population1. When other spending proposals are taken into account, the Tax Policy Center (a joint venture of the Urban Institute and Brookings Institution) estimates that Kerry’s proposals will yield a net increase in the deficit of $165 billion over four years, or $40 billion a year. (Of course, Bush will almost certainly spend more once the unbudgeted costs of higher defense spending and even more tax cuts are factored in.) As I show below, this is relative to a baseline of around $550 billion.
I think it’s safe to say this won’t happen. The problem for Kerry, then, is when to discover the deficit. There are three basic options:
1. Discover it now, dump the current fiscal policy and campaign on full repeal of the Bush cuts. As I argued in my previous post, this would give a newly-elected Kerry the mandate to push the policy through Congress. This strategy would incur a fair bit of short-term political pain, but Kerry’s early and overwhelming win in the primaries gives him some time and political credit to spend.
2. Discover it immediately after the election. This is the strategy usually adopted by newly-elected Australian governments who want to dump their campaign promises. The idea is that on Day 1, you appoint a Commission of Audit. In a month or two, the Commission reports back with the shocking news that the previous government’s figures, on which you naively relied, were a massive exercise in book-cooking. You then introduce an emergency Budget. This strategy works well in a Parliamentary system where the government has a majority in the Lower House, where budgets are determined. To make it work in the US system, you’d need to win well enough to get a Democrat majority, or at least a workable majority with moderate Republicans. As I understand things, however, a Democrat majority is unlikely and moderate Republicans are an extinct species.
3. Discover it slowly over time. The key point in favour of this strategy is that the Bush tax cuts expire automatically (in 2006, I think). But this is the strategy most likely to lead to deadlock, for which the President will probably take most of the blame, and which will produce the most painful economic adjustment.
If you accept my summary of the options, I think it’s pretty clear that Option 1 is the way to go.
1 It’s evidence of the startling lopsidedness of the Bush tax cuts, and the explosion of income inequality over the past two decades, that there is, nonetheless, a substantial revenue gain from repealing the cuts for the rich and ultra-rich. About half the benefits of the Bush tax cuts go to those on incomes over $200 000 per year.
Update: Brad DeLong points to Kerry’s appointment of Roger Altman as his budget priorities adviser as evidence that Kerry will choose Option 1. Kevin Drum is underwhelmed. He supports Option 2 and expects Option 3, or worse.
To get an idea of the scale of the problem, go to the Congressional Budget Office and add up, from Table 1-1 and Table 1-3:
1. The baseline budget deficit projection for 2008 ($278 billion)
2. The effect of extending the Bush tax cuts ($125 billion including debt service)
3. Alternative Minimum Tax relief ($43 billion including debt service)
4. Discretionary appropriations growing in line with nominal GDP ($102 billion including debt service)
for a total starting point deficit of $548 billion, assuming no adverse economic shocks or spending requirements.
Kerry’s partial repeal would save only about $50 billion.
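For anyone who wants to check the addition, here it is as a trivial Python snippet, using the figures quoted above (all in billions of dollars, for 2008):

```python
# Summing the CBO items listed above (billions of dollars, 2008).
components = {
    "baseline deficit (Table 1-1)": 278,
    "extending the Bush tax cuts": 125,
    "AMT relief": 43,
    "discretionary spending growing with nominal GDP": 102,
}
starting_point = sum(components.values())
print(starting_point)        # 548: the starting-point deficit
print(starting_point - 50)   # ~498 after Kerry's partial repeal
```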
For completeness, here’s the section of Kerry’s economic policy headed “Restore fiscal discipline to Washington”:
By borrowing from future generations to give tax relief to those who need help the least, George W. Bush’s economic policies have, for the first time in history, forced the federal government to spend $1 billion more EACH DAY than it takes in. John Kerry believes that we need a smaller and smarter government that wastes less money. He has put forward a sensible plan that will at least cut the deficit in half in his first term, while investing in economic growth and investing in workers. To restore fiscal discipline and strengthen our economy, Kerry will repeal Bush’s special tax breaks for Americans who make more than $200,000. He will cut excesses in government and reign in out of control spending. And he will implement the McCain-Kerry commission on corporate welfare to undermine the special interest groups that make it hard to cut tax loopholes and pork barrel spending projects.
Tim Dunlop tells us about another signal contribution to the David Bernstein school of revealed preference theory.
It’s hard to take Keith Windschuttle seriously when he says things like this, apparently without irony:
“In other words, since the ’60s the great majority of Aboriginal people have voted with their feet in favour of integration with white Australia.”
Same way I used to vote with my fork and eat my Brussels sprouts when told I couldn’t eat anything else for the night if I didn’t.
(minor corrections and reformatting of original)
I’ve been re-reading James Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. It’s a wonderful book; I especially recommend it to libertarians who find it hard to believe that lefties too can be opposed to big government. I hope to blog more about its relationship to Hernando de Soto’s proposals for property rights reform in the developing world (see here, here and here) sometime in the next few weeks. For the moment, I just want to highlight one implication of Scott’s argument: that free markets may be flawed from a Hayekian point of view. Scott doesn’t pay as much attention to markets as he perhaps ought to, but it’s quite clear that he sees the process of state-building as going hand-in-hand with the creation of national and transnational markets. In particular, both states and markets need commonly agreed formal standards (of quality, measurement etc.), which allow non-local exchange between people who don’t know each other. The historical evidence is emphatic - creating universal standards is an important part of the state-building process, not only in autocratic regimes, but also in more market-oriented societies (see, for example, John Brewer’s exemplary study of the building of the British state, The Sinews of Power).
But here’s the rub. Scott very clearly shows that national, written standards are going to be “thin.” By their very nature, they’re unavoidably going to leave out many of the important forms of tacit knowledge that local, consensual, unwritten standards and rules can incorporate. The Hayekian case for free markets, as I understand it, is based less on the ideal of competition than on the ideal of information exchange; i.e. that markets allow the transmission of tacit knowledge more effectively than formal organizations. But Scott’s argument suggests that Hayek on tacit knowledge contradicts Hayek on free markets.1 If you want to have non-local exchange (i.e. properly competitive impersonal markets), you have to do so on the basis of universal standards. But these standards fail to live up to the Hayekian ideal. Ergo, you can construct a Hayekian case against the creation of competitive impersonal markets, insofar as these markets involve the destruction of the kinds of tacit knowledge that are embedded in informal local standards. I’m not a Hayek expert, so I’ll throw this out to the people who know Hayek better than I do (Dan for one), but I think that there’s a serious argument here to be fleshed out.
1 Cass Sunstein, in his review of Scott’s book, seems to suggest that Scott’s use of Hayek is self-contradictory. I suspect that the contradiction is in Hayek rather than in Scott.
Greenspan Urges Congress to Reign in Deficit
Says the NYT (though I’m sure the typo won’t last long).
The announcement that Ralph Nader will again run for the Presidency raises the (almost) unaskable question: are there any circumstances under which we should hope for, promote, or even passively assist, the re-election of George W. Bush as against either of the remaining Democrat contenders? I feel nervous even raising this question, but I think it’s worth a hard and dispassionate look.
Regardless of their political persuasion, most people will agree, at least in retrospect, that it would have been better for their own side (defined either in ideological or in party terms) to have lost some of the elections they won. Most obviously, this was the case for the US Republican Party in 1928. Hoover’s victory, and his inability to cope with the Depression, paved the way for four successive victories for FDR and two generations of Democratic and liberal hegemony, which didn’t finally come to an end until the Reagan revolution in 1980. The same was true on the other side of politics in Australia and the UK, where Labour governments were elected just before the Depression, split over measures of retrenchment demanded by the maxims of orthodox finance and sat out the 1930s in Opposition, watching their own former leaders implement the disastrous policies they had rejected, but had been unable to counter.
So, is 2004 one of those occasions? The case that it is rests primarily on arguments about fiscal policy. Bush’s policies have set the United States on a path to national bankruptcy, a fact that is likely to become apparent some time between now and 2008. Assuming that actual or effective bankruptcy (repudiation of debt or deliberate resort to inflation) is unthinkable, this is going to entail some painful decisions for the next President and Congress, almost certainly involving both increases in taxation and cuts in expenditure. On the expenditure side, this will mean a lot more than the obvious targets of corporate welfare and FDW1. Either significant cuts in the big entitlement programs (Social Security and Medicare) or deep cuts in everything else the government does will be needed, even with substantial increases in taxes (to see the nasty arithmetic, read these CBO projections, and replace the baseline with the more realistic Policy Alternatives Not Included in CBO’s Baseline).
1 Fraud, Duplication and Waste
As far as I can see, the only way to avoid four years of grinding bargaining would be the Big Bang approach of repealing the Bush tax cuts en bloc while the electoral mandate was fresh. Gephardt and Dean proposed this (along with, I think, Kucinich, Braun and Sharpton), but Edwards and Kerry propose repealing only the cuts on incomes above $200 000 a year. Whichever of them wins the Democratic nomination, it seems likely that the pressures of the campaign will lead them to soft-pedal the bad news on tax and spending options, making it more difficult to push even partial repeal through a Congress that will probably have a Republican majority in at least one House.
Given that the deficit has yet to register as a major issue with many (most?) voters, it will be very hard to shift the blame back onto Bush and the Republicans if the problem is deferred until 2005 or 2006. It’s easy to imagine scenarios leading to an electoral catastrophe in 2008 and the election of a Republican even worse than Bush. Conversely, a re-elected Bush could be a second Herbert Hoover, discrediting the Republicans for decades to come.
Of course, similar arguments were made in 2000, notably on behalf of Nader, and they turned out to be totally wrong. More generally, the folk wisdom about birds in the hand and in the bush (sic) is applicable. And it’s always easier for an outside onlooker to advise taking the long-term view in cases of this kind, though in this case, we all have to live with the consequences.
Looking at the damage another four years of Bush would do in all areas of domestic and foreign policy, I can’t conclude that the putative long-term benefits of demonstrating the bankruptcy of his ideas are enough to balance the inevitable and immediate damage his re-election would cause. Still, I look forward to a Democratic victory with trepidation rather than the unalloyed enthusiasm I ought to feel.
I was a bit slow to respond to Kieran's post on the World City System, but let me say that my views on this system are pretty much a cross between Wired and William Cobbett. In a world where nearly all legitimate high-pay, high-status work can be performed electronically and remotely, the most plausible explanation of 'global cities' is that they facilitate cronyism and corruption.
Updated with a little more evidence 25/2
On this point, Virginia Postrel has an interesting piece (abstract free, full piece payment required) on the work of Chicago economists Rajan and Zingales on saving capitalism from the capitalists. Essentially, their claim is that where finance is allocated on the basis of personal relationships, it becomes a tool for creating and protecting monopoly. This is what they call "relationship capitalism". Others have used the more pejorative phrase "crony capitalism".
Postrel uses these ideas to attack the idealised, and largely mythical, small-town bankers of the past in favor of today's more impersonal system. It's certainly true for retail borrowers that relationships with bankers are no longer important. But she misses the irony that while distancing themselves from most of their customers, members of the financial sector have gathered together ever more closely in centres like New York and London.
Similarly, this Buttonwood column from the Economist deplores the fact that house prices in London are being bid up by City types who, he suggests, have enriched themselves at the expense of their customers (he's referring to the mutual funds scandals, but these are just the latest of many). He doesn't, however, ask the obvious question: why do these City types crowd together in London (and New York)? After all, the same City types are busy telling us about a globalised world, linked instantaneously by the Internet. And, as Warren Buffett has shown, they are right. You can get all the information you need to formulate market-beating investment strategies while sitting in Omaha, Nebraska.
The two halves of Buttonwood's observation are linked by the much older observation of Adam Smith (quoting from memory here):
Men of the same trade seldom gather together, even for innocent merriment, but the meeting ends in some conspiracy against the public.
The work that financial institutions are supposed to perform, trading assets and allocating risk in transparent markets, can be done anywhere on the planet. It's the stuff they want to do without any inconvenient records, and with the kind of trust that's needed for conspiracy, that requires clustering in a central location where social bonds can be cemented by eating, drinking and sleeping together.
I should add (and at this point, readers might want to note that I am located in Brisbane, Australia, which may engender a somewhat jaundiced viewpoint) that most of what I've said above applies to academia. On balance, the clustering of high-status academics in places like Oxford, Harvard and Chicago has consequences that are more negative than positive. The merits of intensive collaboration and casual hallway discussions of academic topics are more than offset by the clubbishness and mutual backscratching produced by these concentrations.
Update 25/2: Another piece of evidence on this:
"Gains from Corporate Headquarters Relocations: Evidence from the Stock Market," Journal of Urban Economics, Vol. 38(3), November 1995, 291-311, Chinmoy Ghosh, Mauricio Rodriguez, and C. F. Sirmans:
This paper provides empirical evidence on investors' perceptions of the relative advantages and costs of spatial agglomeration. Specifically, we examine the stock price effects of headquarters relocations. The stock market reaction is significantly positive when relocation decisions are attributed to cost savings, indicating that cost savings available at less centralized locations outweigh any loss of enhancements associated with spatial clustering at urban centers. In contrast, decisions prompted by managerial self-interest and desire for luxurious offices elicit an adverse reaction from investors. (emphasis added)
You can download the paper here (2.8 MB Word doc).
I thought I'd repost this piece from my old blog, because a multidisciplinary audience is just what it needs. The starting point is as follows:
'Data mining' is an interesting term. It's used very positively in some academic circles, such as departments of marketing, and very negatively in others, most notably departments of economics. The term refers to the use of clever automated search techniques to discover putatively significant relationships in large data sets and, outside economics, is generally used in a positive sense. For economists, however, the term is invariably used with the implication that the relationships discovered are spurious, or at least that the procedure yields no warrant for believing that they are real. The classic article is Lovell, M. (1983), 'Data mining', Review of Economics and Statistics 65(1), 1–12, which long predates the rise to popularity of data mining in many other fields.
So my first question is: are the economists isolated on this, as on so much else? My second question is how such a situation can persist without any apparent awareness or concern on either side of the divide.
The paradigm example of data mining, though a very old-fashioned one, is 'stepwise regression'. You take a variable of interest, then set up a multivariate regression. The computer then tries out all the other variables in the data set one at a time. If a variable comes up significant, it stays in; otherwise it's dropped. In the end you have what is, arguably, the best possible regression.
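For concreteness, here's a minimal Python sketch of the forward-stepwise procedure just described. The inputs (a pandas DataFrame X and series y) and the 5 per cent threshold are illustrative assumptions, not a recommendation.

```python
# A minimal sketch of forward stepwise regression, as described above.
# Hypothetical DataFrame/Series inputs; 5 per cent is the conventional
# significance threshold, chosen here purely for illustration.
import statsmodels.api as sm

def stepwise_select(y, X, threshold=0.05):
    """Greedily add regressors whose coefficients test as significant."""
    selected, remaining = [], list(X.columns)
    added = True
    while added:
        added = False
        for var in list(remaining):
            fit = sm.OLS(y, sm.add_constant(X[selected + [var]])).fit()
            if fit.pvalues[var] < threshold:   # keep it if 'significant'
                selected.append(var)
                remaining.remove(var)
                added = True
    return selected
```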
Economists were early and enthusiastic users of stepwise regression, but they rapidly became disillusioned. To see the problem, consider the simpler case of testing correlations. Suppose, in a given dataset you find that consumption of restaurant meals is positively correlated with education. This correlation might have arisen by chance or it might reflect a real causal relationship of some kind (not necessarily a direct or obvious one). The standard statistical test involves determining how likely it is that you would have seen the observed correlation if there was in fact no relationship. If this probability is lower than, say, 5 per cent, you say that the relationship is statistically significant.
Now suppose you have a data set with 10 variables. That makes 45 (= 10*9/2) distinct pairs you can test. Just by chance, you'd expect two or three correlations that appear statistically significant. So if your only goal is to find a significant relationship that you can turn into a publication, this strategy works wonders.
But perhaps you have views about the 'right' sign of the correlation, perhaps based on some economic theory or political viewpoint. On average, half of all random correlations will have the 'wrong' sign, but you can still expect to find at least one 'right-signed' and statistically significant correlation in a set of 10 variables. So, if data mining is extensive enough, the usual statistical checks on spurious results become worthless.
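The arithmetic is easy to check by simulation. The sketch below generates 10 variables with no relationship at all and counts how many of the 45 pairwise correlations come up 'significant' at the 5 per cent level; the sample size and seed are arbitrary choices.

```python
# Monte Carlo check: with 10 unrelated variables, how many of the 45
# pairwise correlations test 'significant' at 5 per cent? Illustrative
# only; sample size, trials and seed are arbitrary.
import numpy as np
from scipy.stats import pearsonr
from itertools import combinations

rng = np.random.default_rng(0)
n_obs, n_vars, trials = 100, 10, 1000
counts = []
for _ in range(trials):
    data = rng.standard_normal((n_obs, n_vars))
    sig = sum(pearsonr(data[:, i], data[:, j])[1] < 0.05
              for i, j in combinations(range(n_vars), 2))
    counts.append(sig)
print(np.mean(counts))  # about 2.25 = 45 * 0.05 spurious 'findings'
```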
In principle, there is a simple solution to this problem, reflecting Popper's distinction between the context of discovery and the context of justification. There's nothing wrong with using data mining as a method of discovery, to suggest testable hypotheses. Once you have a testable hypothesis, you can discard the data set you started with and test the hypothesis on new data untainted by the process of 'pretesting' that you applied to the original data set.
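In code, this discipline amounts to nothing more than a holdout split: mine one portion of the data freely, then run a single pre-committed test on data the mining never touched. A minimal sketch, with hypothetical arrays standing in for the real samples:

```python
# The Popperian fix sketched above: explore one half of the data,
# then test the surviving hypothesis once on held-out data.
# Hypothetical arrays; a real study would split independent samples.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
x, y = rng.standard_normal((2, 200))
x_mine, x_test = x[:100], x[100:]
y_mine, y_test = y[:100], y[100:]

# ... mine (x_mine, y_mine) however you like to generate a hypothesis ...

r, p = pearsonr(x_test, y_test)   # one pre-registered test on fresh data
print(r, p)
```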
Unfortunately, at least for economists, it's not that simple. Data is scarce and expensive. Moreover, no-one gets their specification right first time, as the simple testing model would require. Inevitably, therefore, there has to be some exploration (mining) of the data before hypotheses are tested. As a result, statistical tests of significance never mean precisely what they are supposed to.
In practice, there's not much that can be done except to rely on the honesty of investigators in reporting the procedures they went through before settling on the model they estimate. If the results are interesting enough, someone will find another data set to check or will wait for new data to allow 'out of sample' testing. Some models survive this stringent testing, but many do not.
I don't know how the users of data mining solve this problem. Perhaps their budgets are so large that they can discard used data sets like disposable syringes, never infecting their analysis with the virus of pretesting. Or perhaps they don't know or don't care.
The US Secretary of Defense, Donald Rumsfeld, has received general derision for the following rather convoluted statement:
Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns - the ones we don't know we don't know.
As I'm giving two papers on this general topic in the next couple of days, I feel I should come to his defense on this. Although the language may be tortured, the basic point is both valid and important.
The standard planning procedures recommended in decision theory begin with the assumption that the decisionmaker has foreseen every relevant contingency. Given this assumption, making the right decision is a simple matter of attaching probabilities (or, if you like my rank-dependent generalization of the standard model, decision weights) to each of the contingencies, attaching benefit numbers (utilities) to the contingent outcomes that will arise from a given course of action, then taking a weighted average. Whatever course of action yields the best average outcome is the right one to take. In this way, uncertainty about the future can be 'domesticated' and reduced to certainty equivalents.
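As a minimal illustration of that recipe (all numbers invented): two actions, two foreseen contingencies, probability-weighted utilities, pick the best average.

```python
# The standard decision-theory recipe described above, in miniature.
# States, probabilities, actions and utilities are all invented.
probs = {"boom": 0.6, "slump": 0.4}
utility = {
    ("invest", "boom"): 10, ("invest", "slump"): -5,
    ("hold", "boom"): 2, ("hold", "slump"): 1,
}

def expected_utility(action):
    return sum(p * utility[(action, state)] for state, p in probs.items())

best = max(("invest", "hold"), key=expected_utility)
print(best, expected_utility(best))   # 'invest' wins on the average
# The Rumsfeld point: nothing in this calculation can represent a
# contingency that was never listed in the first place.
```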
The problem is that, in reality, you can't foresee all possible contingencies; the 'unknown unknowns' Rumsfeld is talking about are precisely these unforeseen contingencies. Some of the time this doesn't matter. If the unforeseen contingencies tend to cancel each other out, then the course of action recommended by standard decision theory will usually be a pretty good one. But in many contexts, surprises are almost certain to be unpleasant. In such contexts, it's wise to avoid actions that are optimal for the contingencies under consideration, but are likely to be derailed by anything out of the ordinary. There's a whole literature on robust decision theory that's relevant here.
Having defended Rumsfeld, I'd point out that the considerations he refers to provide the case for being very cautious in going to war. Experience shows that decisions to go to war, taken on the basis of careful calculation of the foreseeable consequences, have turned out badly more often than not, and disastrously badly on many occasions. The calculations of the German military leading up to World War I, including the formulation of the Schlieffen Plan, provide an ideal example.
Finally, I should mention that I saw a link at the time to a post somewhere that seemed, from the one-sentence summary, to be making a similar point, but I was too busy to follow it, and can't now locate it. Anyone who can find it for me gets a free mention in the update.
Update: At least one such post has come to my attention, at Language Log, along with a useful link to Sylvain Bromberger who has, it seems, written extensively on the theory of ignorance. I will be keen to chase this up.
Anyone who's been following recent discussion of the US economy will be aware that the Bureau of Labor Statistics produces employment statistics from two different surveys, and that the results have diverged radically since 2001. The BLS's preferred numbers on employment growth come from a survey of employers (the Establishment Survey), while other numbers, including the unemployment rate, are derived from a survey of households (the Current Population Survey). As the BLS Commissioner's latest statement notes (PDF file):
From the trough of the recession in November 2001 through January 2004, payroll employment decreased by 716,000. Over the same period, total employment as measured by the household survey increased by about 2.2 million (after accounting for the changes to that survey's population controls).
Not surprisingly, supporters of the Administration have been pushing hard to discredit the Establishment Survey in favour of the CPS. While noting some reasons for the discrepancy, the BLS seems to be sticking with the payroll survey, noting that there are a lot of problems in estimating employment growth from the CPS, and that the payroll data is consistent with data on new claims for unemployment benefits.
If that's the case, though, the implication appears to be that the CPS results are unreliable, and therefore that the unemployment rate (derived from the CPS) is an underestimate. Allowing for the fact that non-employed people are divided between the unemployed and those not in the labour force, the discrepancy could easily be a full percentage point, implying that unemployment is now higher than when the recovery (as measured by output) began. This seems consistent with anecdotal impressions.
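To see where a figure like 'a full percentage point' might come from, here's the rough arithmetic. The labour force figure and the share of the gap that shows up as measured unemployment are my own assumptions, not BLS numbers.

```python
# Rough arithmetic behind the 'full percentage point' claim above.
# The 146-million labour force and the share of the gap counted as
# unemployed are assumptions for illustration, not BLS figures.
payroll_change = -0.716     # millions, Nov 2001 - Jan 2004
household_change = 2.2      # millions, same period
gap = household_change - payroll_change   # ≈ 2.9 million workers
labour_force = 146.0        # millions, roughly, in 2004
for share_unemployed in (0.3, 0.5):
    print(share_unemployed,
          round(100 * gap * share_unemployed / labour_force, 2))
# ≈ 0.6 to 1.0 percentage points on the unemployment rate
```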
The deaths of nineteen Chinese illegal workers who were cockling on the treacherous sands of Morecambe Bay have generated much comment in the British press. Much of that comment has focused on their illegality, the exploitation of such workers by gangmasters, the need or otherwise for tighter immigration controls, globalization and so on. Indeed. There was a similar burst of indignation when some immigrant workers were hit by a train back in July. But one thing that needs saying is that such tragedies are a normal and predictable consequence of capitalism, and not simply the result of coercion and abuse by a few criminals. In his Development as Freedom, Amartya Sen discusses two examples where workers, in order to assure basic capabilities (such as nutrition and housing) for themselves and their families, have to expose themselves to the risk of injury or death. Jo Wolff and Avner de-Shalit have a paper on this theme (Word format), on the programme of the UCL’s School for Policy Studies for this Wednesday, in which they recount Sen’s examples:
The first is from the southern edge of Bangladesh and of West Bengal in India, where the Sundarban [forest] grows. This is the habitat of the Royal Bengal tiger, which is protected by a hunting ban. The area is also famous for the honey it produces in natural beehives. The people who live in the area are extremely poor. They go into the forests to collect the honey, for which they can get a relatively high price in the city. However, this is a very dangerous job. Every year some fifty or more of them are killed by tigers. The second case is of Mr. Kader Mia, a Muslim daily labourer who worked in a Hindu neighbourhood in Dhaka, where Sen grew up as a child. Mr. Mia was knifed on the street by Hindu people, and later died. While he was deeply aware and concerned about the risk of going to look for a job in a Hindu neighbourhood in troubled times, Mr. Mia had no other choice but to do so because his family had nothing to eat.
Those are third-world examples. But it is not hard to add to the list of disadvantaged workers who take dangerous jobs to secure the means of life for their families. Whilst some of these cases involve illegal workers at the margins of society, not all of them do or have done. Mine workers get trapped underground even in advanced capitalist countries, and many workers in the oil and chemical industries run a greatly increased risk of death or injury. And many people who have worked with asbestos now face a slow, lingering death.
All of these “normal” examples should give us some perspective on the image of the heroic risk-taking entrepreneur, who typically risks a great deal less than any of these workers do. Those who consider Marx outmoded and are amazed that anyone should take him seriously (scroll to comments) would also find that Capital volume one attends rather more closely to this enduring feature of capitalism than do more conventional accounts.
Trade unions, the Health and Safety Executive and other bodies such as local authorities and the police certainly need to do more to protect people as vulnerable as the Chinese cocklers who died at Morecambe. But we mustn’t forget that the root cause of many such tragedies is that poor people need to risk themselves in order that they and those they love may live. Unless they cease to be poor, and cease to face such unpalatable choices, such events will happen again and again.
UPDATE: See Felicity Lawrence in the Guardian .
Apologies in advance because this edition of RFHE is not really going to be all that good. It’s a grab bag of things I’ve picked up relevant to personal hobby horses of mine. Lots of people sent me some really good stuff in response to the last one, for which thank you very much. Unfortunately, my chaotic email management habits came through a minor MyDoom infestation about as well as I thought they were going to. I should be able to find all the stuff I had pretty soon; otoh, if any of you were to resend it, that would be just lovely. So, apologies, promises of something better next time, and please regard this inconsistency in quality as charming rather than annoying.
Situations Vacant: The ISMA Centre for Financial Economics at Reading, home of Carol Alexander who wrote “Market Models”, one of the few finance textbooks I keep next to my computer, has a vacancy for a postgrad researcher on an ESRC project, if your medieval Latin is up to it. They’re looking into the history of contracts for the forward sale of wool written between Cistercian monasteries in England and Italian bankers in the thirteenth century. Sounds absolutely fascinating, so if any CT reader gets the job, drop me an email (just to make it clear, I would be no help at all, I’m just interested).
Chap of the Week is Peter Temin. That makes two MIT citations, which surprises me as I’d never thought of MIT as a hotbed of heterodoxy. To be honest, the papers on his MIT site aren’t particularly heterodox; they’re high quality but pretty mainstream economic history. But he gets the award for a) seeming like a nice old boy and b) co-authoring this number with Hans-Joachim Voth. Basically, Temin and Voth have got access to the historical records of Hoare’s Bank, a very old goldsmith’s bank in London. They’ve got a few papers out of it, most of which are on SSRN, but this is the interesting one because it deals with the ledger recording Hoare’s dealings in the stock of the South Sea Company. Not only is the narrative incredibly interesting (to be honest, I could have done without having the laboured parallels to the dot com episode spelled out to me though), but the point they make is that Hoare’s very definitely did not operate the kind of stabilising speculation that you would have expected from a well-informed investor. Their trading massively outperformed the South Sea Company stock price, and this did not appear to be due to insider information or front running its clients. They “rode the bubble”, buying on the way up and selling on the way down, and made their money that way. It’s a great paper, although the typesetting in the version currently on SSRN is a little bit screwed.
This is of interest to me, because some while ago I wrote a blog post called “DeLong and the shorts”, which I unapologetically plug here, which argued that a rational, perfectly informed and well-capitalised speculator would not optimise his returns by popping bubbles. The record of Hoare’s trading seems to suggest that, empirically, they found it worthwhile not to do this, and a sweet little paper by Suleyman Basak of LBS and Benjamin Croitoru of McGill University provides the necessary theoretical underpinnings. To be honest, the paper frightened the shit out of me - it looks like a train crashed into an algebraic symbol factory, and I swear that the underlying intuition is simple enough that it doesn’t need the heavy guns of dynamic programming brought out in this way - but if you struggle through the maths it seems sound. Basically, as they say: “When the arbitrageur has market power in the securities markets, he will take account of the price impact of his trades on the level of mispricing across the risky assets. This consideration will be shown to induce much richer arbitrageur trades than those in the competitive case, in which under mispricing the arbitrageur simply took on the maximum trade allowed by the position limit.” And thus, he will allow mispricings, potentially significant mispricings, to develop. My own contention, expressed in the blog post linked above, is that the “non-competitive case” is the only case worth considering, because an arbitrageur with no market power is close to being a contradiction in terms. All good stuff.
In comments to the last RFHE, we got a few points on the Cambridge Capital Controversy from Robert Vienneau, the Sisyphus of online Sraffaism and maintainer of the Frequently (sic) Asked Questions about the Labour Theory of Value page. Here’s a directory of his writings, and if you read them all you’ll have spent a pleasant afternoon and come out of it with the toolkit to win pretty much any argument you care to have on the subject of capital theory. The great thing about the capital debate is that one side is provably right, and the other side is provably wrong, and the side that’s wrong is the one that runs the economics profession. The real nuggets are here and here. Also of interest is this piece, which demonstrates an important point to remember next time you’re sharpening the pen for another critique of economic theory: homo economicus is a much less important assumption than is commonly believed. Most of the important things an economist might want to assert can be proved without specific assumptions about human nature; here Vienneau derives a labour demand schedule from a linear programming model. However, the conclusions derived in this way can be pretty interesting; the labour demand function derived has only a few points which could make sense as equilibria, making the whole business of supply/demand analysis problematic.
And penultimately, a funny little thing from Peter Bossaerts (author of “The Paradox of Asset Pricing”, which I liked) on “Neo-Austrian Theory Applied to Financial Markets”. Quite a misleading title, as it’s got f-all to do with Austrian theory as I understand it; it’s a Santa Fe-type simulation exercise in which artificially generated traders demonstrate (as artificially generated traders always do) significant regularities in their behaviour, but not in a way that’s easy to predict.
Finally, two from J. Barkley Rosser Jr, who, as I’ve mentioned earlier, is the only person on earth I’d trust to write a sentence which combined “chaos theory” and “economics” and get both right. Epistemological Implications of Economic Complexity strikes me as an important one in Post-Keynesian economics; if I’ve scanned it right, Barkley Rosser is arguing that there are some economic questions where it is completely impossible (epistemologically and ontologically) to assign probabilities to future events, or more generally, to make decisions based on any rational process or set of rules. It’s the first step on the way toward providing rigorous foundations for some important points about probability which Keynes just asserted, and I think that this has big implications for all sorts of questions, and not just in economics. This carries on from the argument in his Metroeconomica piece called All That I Have To Say Has Already Crossed Your Mind, a paper which I like so much I’m going to link to it again.
Anyway, there you go. More soon.
The New York Times Magazine had an excellent story this weekend on the fraud that brought the cable company Adelphia to bankruptcy. Most of the blame has to lie with the Rigas family, who managed to borrow three billion dollars, obligated the shareholders in their public company to cover it, and contrived to hide it from investors. They built a company with comically poor corporate governance - over half of the board were family members, and the audit committee may never have met.
Along the way, they had plenty of accomplices among the institutions that were supposed to be independent circuit breakers. Banks had no business lending other people’s money to the Rigases. Deloitte and Touche failed badly in their role as independent auditors. Despite the refusal of Adelphia management to disclose the loans, Deloitte still signed off on their 10-K. Adelphia attorneys did nothing to stop the loans, and most analysts did only a cursory job of inspecting the company’s structure.
To my mind, most of these problems don’t seem to be appropriate targets for public policy. No law can prevent incurious analysts, cowardly auditors, or shortsighted corporate management. (Conflicted auditors are another story, but that doesn’t appear to be a problem here.) I’d imagine that most laws that attempted to address these issues would do more harm than good.
But regarding banks, I’m not so sure. Specifically, I’m not sure whether it was a good idea to repeal the Glass-Steagall Act, a Depression-era law that prevented commercial banks from getting into the investment banking business until it was overturned in 1999 by the Financial Modernization Act. One of the concerns of the original lawmakers was that full-service banks would lower their standards for commercial loans in order to win lucrative underwriting contracts. According to the author, that’s exactly what happened in this case.
On Wall Street, the conditions for a capital-raising binge were ripe. The repeal of the Glass-Steagall Act, a Depression-era banking law, had paved the way for commercial banks like Citibank and Bank of America to get into the more lucrative business of underwriting. Adelphia’s Brown shrewdly exploited the banks’ greed. In a memo to bankers early in 2000, which cordially began, ”I hope your New Year is off to a great start,” Brown pitched the co-borrowing idea and pointedly observed, ”All of the lead managers and co-managers of each of these credit facilities are expected to have an opportunity to play a meaningful role in . . . public security offerings.”
In other words, if the banks lent the Rigases/Adelphia money, then Adelphia would spill some gravy onto their investment-banking divisions. When the bankers saw that, their mouths watered. This was exactly the sort of conflict that Glass-Steagall had been intended to prevent. The banks went for it. From 1999 to 2001, three banking syndicates, led by Bank of America, Bank of Montreal and Wachovia Bank, allowed the Rigases/Adelphia to borrow a total of $5.6 billion, a staggering sum. Citigroup, J.P. Morgan, Deutsche Bank and scores of other banks participated.
Adelphia, meanwhile, was true to its word. From 1998 to 2002, it went to Wall Street more than a dozen times to issue stock, bonds, notes, convertibles — every flavor in Wall Street’s pantry. It raised something like $10 billion, while shelling out $233 million in fees. And syndicate banks like Bank of America were rewarded with lucrative underwriting assignments for their investment-banking affiliates.
The banking organizations have declined to comment. Generally, they maintain that their two functions, lending and underwriting, were separated by a Chinese wall and that the underwriters were in the dark with regard to the loans (even though they were arranged by their own affiliates). This does not square with the facts. In February 1999, Brown bluntly informed a large group of bankers and investment bankers, ”The Rigas family intends to use the proceeds of this distribution to purchase equity securities from Adelphia.”
Anyone looking for mere gaps in the Chinese wall is missing the larger point: banks weren’t trying to separate departments but to integrate them. That was the whole reason they had lobbied for Glass-Steagall’s repeal. Thus, the banks would send teams of 8 or 10 investment bankers and commercial bankers — no distinction was evident, according to Tim Rigas — to Adelphia pitching every financial service under the sun.
Enron had a similar relationship with JP Morgan Chase. To quote the Financial Times, “Morgan was simultaneously Enron’s creditor, its counterparty in various complex offshore transactions, an arranger of third-party financing and a provider of investment banking advice.” The setup worked really well until, suddenly, it didn’t.
My instinct is that Glass-Steagall was a classic case of concentrated costs and diffuse benefits. A small number of bankers could see a tremendous opportunity for growth and profits if the law was repealed. Many investors benefited just a little from it. I’m actually a little surprised that the law stood for as long as it did.
But I could be wrong. The Economist says, more or less, don’t bring G-S back. I’ve come across this brief note suggesting that, since other countries have different practices, the law makes it difficult to compete with foreign financial services firms or to work smoothly with international offices. This paper argues that diversification in financial institutions would lower their total risk and lead to fewer defaults. But I don’t know how well this stands up to the evidence of the last few years.
In short, I’m in search of a point. Any thoughts would be welcome.
Kieran's piece on kids being driven to school reminded me of a post I've been planning for a while. One of the issues debated at length on my blog is that of speeding and law-enforcement measures such as speed cameras. I've argued against speeding and in favor of rigorous law-enforcement. Not surprisingly, and perhaps reflecting the fact that more than 80 per cent of drivers regard themselves as above-average, this has been very controversial. You can read some instalments in the debate here and here or use the search facility for "speeding". Unfortunately most of the extensive and interesting comments were lost in a database failure.
In the course of this debate I discovered the fact, surprising to me, that, although the rate of road deaths per person in the United States is nearly twice that in Australia and the United Kingdom, much of this difference can be accounted for by the fact that distances travelled in the United States are a lot higher and are rising (there are problems with the numbers and biases in the measure, but I'll leave that to one side for now). The differences between US and UK are plausible given differences in population density and well-developed public transport in London at least, but the differences between the US and Australia certainly surprised me. Australia is every bit as car-dependent as the US and has much lower population density.
All of this is a prelude to the fact that, in economic terms, time spent travelling is a really big deal. In their book Time for Life, based on the 1985 US Time Use Study, Robinson and Godbey estimate that the average adult American spends 30 hours a week in paid employment and 10 hours a week travelling (they also, controversially, argue that working time has been falling, not rising). It's pretty clear that distances and times spent travelling have increased since 1985 in the US (in both the US and Australia, driving is by far the dominant mode of travel).
If, as I'll argue below, most travel should be regarded as being in the same economic category as working and if, as the stats linked above imply, Americans spend about twice as much time travelling as Australians, then reducing travel times to the Australian level would be equivalent to a productivity improvement of between 12 and 15 per cent. As it happens, combined with the relatively small difference in hours of paid work, adjusting for hours of work and travel would just about eliminate the gap between Australian and US GDP per capita (about 20 per cent on standard PPP estimates).
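One way to reproduce the 12 to 15 per cent figure, on the stylised assumptions above (travel counts as work; Americans average about 10 hours a week of travel, per Robinson and Godbey, and Australians roughly half that):

```python
# Reproducing the 12-15 per cent figure from the stylised assumptions
# above. The Australian travel figure (half the US level) follows the
# 'twice as much time travelling' claim in the text.
us_work, us_travel = 30.0, 10.0    # hours per week, Robinson and Godbey
au_travel = us_travel / 2
us_total = us_work + us_travel     # 40 effective hours
au_total = us_work + au_travel     # 35 effective hours
gain = us_total / au_total - 1
print(round(100 * gain, 1))        # ≈ 14.3 per cent
```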
This is also important because quite a few commentators have argued that one of the factors promoting productivity growth in the US has been the rise of "big-box" edge-of-town stores like Walmart and Costco in place of small inefficient local shops (here, for example, is Robert Gordon, cited by Brad DeLong). As Steve Sailer points out, this apparent productivity growth has been achieved by transferring costs to shoppers.
Bigger stores mean fewer stores of each type, and that means longer drives, with larger parking lots and longer aisles to trudge through.
(I'm less impressed by Sailer's argument that more diversity pushes the responsibility for choice onto consumers, but that's by the bye.)
Now let me justify my claim that travel time should be added to work time in deriving economic measures of productivity. On the output side, at least two-thirds of travel time is associated with the basic business of getting and spending (commuting, childcare and shopping). The remainder is associated with free-time activities but, except in the case where the travel itself is part of the activity ('a drive in the country' and so forth), should, I would argue, be classified as work.
On the input side, driving is more stressful and unpleasant than most paid employment activities. One way of thinking about this is to look at full-time jobs that mostly involve driving (cabbie, courier etc) - these are generally considered high-stress unpleasant jobs.
Driving is more dangerous than work in general. Australian data suggest that there are about one and a half times as many work-related deaths as road fatalities, but when account is taken of the number of hours spent at work and on the road, this means driving is several times more dangerous.
I should add that, in all of the above, I've treated distance travelled by car as a proxy for time spent travelling, on the assumption that average speeds are about the same. This might seem inconsistent with my emphasis on the enforcement of speed limits, but the reduction in average speeds associated with enforced limits is quite low - most of the time the binding constraint is congestion. My casual observation suggests that urban traffic doesn't move any faster in the US than in Australia.
As far as I can tell, this issue has been almost completely neglected by economists, except for the special purpose of evaluating savings in travel times associated with improved roads. The standard practice appears to be to value travel time at about half the average wage. This is consistent with the economic analysis I've proposed, on standard assumptions about the disutility of work (if the disutility of work starts at zero and rises linearly until the marginal disutility equals the wage, then average disutility will be equal to half the wage).
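The half-the-wage rule can be checked mechanically. Here's a sympy sketch of the assumption just stated, with marginal disutility rising linearly from zero to the wage over the hours worked; this is my own formalisation of the argument, not the transport literature's.

```python
# Checking the 'half the average wage' rule stated above with sympy:
# if marginal disutility rises linearly from 0 to the wage w over the
# H hours worked, average disutility per hour comes out at w/2.
import sympy as sp

h, H, w = sp.symbols("h H w", positive=True)
marginal_disutility = w * h / H          # linear; equals w at h = H
average = sp.integrate(marginal_disutility, (h, 0, H)) / H
print(sp.simplify(average))              # -> w/2
```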
The most likely reason for this neglect is the assumption that travel represents a more-or-less constant overhead cost associated with getting to work, shopping and so on. If so, it can be ignored without changing anything of interest. But the evidence seems to be that this is not true. Travel times (or at least distances) differ greatly between apparently similar countries and vary significantly over time. This means that analyses of productivity and living standards that disregard travel costs are likely to get wrong answers.
Via Lawrence Solum, I found this interesting post from Professor Bainbridge arguing that corporations should not be compelled to pay reparations for past wrongdoing (in this case, complicity in slavery). He says
Punish the wrongdoers, you say? Sorry, but the corporation's legal personhood is a mere legal fiction. A corporation is not a moral actor. Edward, First Baron Thurlow, put it best: "Did you ever expect a corporation to have a conscience, when it has no soul to be damned, and nobody to be kicked?" The corporation is simply a nexus of contracts between factors of production. As such, there is no moral basis for applying retributive justice to a corporation - there is nothing there to be punished. So who do we punish when we force the corporation to pay reparations? Since the payment comes out of the corporation's treasury, it reduces the value of the residual claim on the corporation's assets and earnings. In other words, the shareholders pay. Not the directors and officers who actually committed the alleged wrongdoing (who in most of these cases are long dead anyway), but modern shareholders who did nothing wrong.
This seems plausible. On the other hand, the obvious implication (one that was clearly implicit in Thurlow's original point) is that the principle of limited liability is untenable, at least in relation to civil and criminal penalties for corporate wrongdoing. The wrongdoers are, as Bainbridge says, the officers and shareholders at the time the wrong is committed, and they should be held personally liable. The law has moved a bit in this direction in recent years, but Bainbridge's argument implies that it should go a long way further, restricting the principle of limited liability to the case of voluntarily contracted debts.
I doubt that the Professor would want to go that far, but I think it's hard to make his general case without doing so (there are, of course, more specific defences that might be used in relation to reparations for slavery).
There are a number of counterarguments that might be offered. If you accept strong versions of the efficient markets hypothesis, the share price should discount the expected penalty associated with wrongdoing when it takes place, so that the penalty falls on contemporary shareholders. Since I don't accept the efficient markets hypothesis (except in the very weak form that information about the past history of share prices alone can't be used to predict future prices) I won't push this any further.
A second, more practical, argument is that the collective nature of corporate activity makes it hard to prove individual wrongdoing even when the fact of corporate malfeasance is clear - the current crop of corporate trials is making this point pretty clearly. The options for punishing such malfeasance are therefore rather unattractive. They include extending the law of conspiracy (which has frequently been misused) and expanding the scope of special-purpose provisions such as the US RICO laws (ditto). If there's no easy way of punishing guilty individuals, it seems preferable to hit the corporation in the only place it hurts - the bottom line.
It's good that we seem to be getting away from the silly idea that limited liability and the corporate veil represent some sort of fundamental human (or maybe legal-fictional-human) rights. They are 19th century pieces of government intervention, widely disputed at the time by economic liberals, and justified (to the extent they are justified) on the same kinds of pragmatic grounds as social security systems. But it's still probably best, except in cases of egregious personal wrongdoing to treat corporations as if they were individuals, responsible for their own actions, past and present.
A little belatedly, some thoughts on After the New Economy. Other Timberites are still in the throes of writing their posts, so we’ll do a linkage post pulling the various responses together (as well as the responses of non-CT people such as Brad DeLong), when we’ve all reported. First take - this is a very good book indeed. It provides a trenchant response, not only to the New Economy hype, but also to the political project that it implies. Most importantly (and unusually, for a book about the US economy), it’s solidly based in a comparative framework, examining not only the relationship between the US and the world economy, but also the experience of other countries (the European social democracies), which suggests that large welfare states aren’t necessarily a drag on growth. Brad DeLong notes somewhere or another on his blog that the economic success of the statist Scandinavians is a real puzzle for economic theory; this is something that should give pause to gung-ho US advocates of unfettered free markets, but rarely does. It’s nice to see the lesson being drawn out in a book that isn’t aimed at an academic audience. Furthermore, as Kieran has already noted, After the New Economy avoids falling into the trap of bucolic communitarianism; Henwood makes a guarded - but thoughtfully argued - case for the potential benefits of globalization for societies in both the West and the developing world. He’s right on all fronts, I think - but there’s still something missing in the book, which reflects a wider absence in the political debate. Not only is there not much in the way of a pro-globalization left; what there is doesn’t have much in the way of a positive alternative vision to offer. This means that Henwood is able to make a strong case for the prosecution, but doesn’t have very many positive arguments to defend his own vision of globalization.
This is important, because “After the New Economy” isn’t merely an effort to debunk. Henwood believes that many of the stated aspirations of New Economy evangelists are worthwhile, but rightly resents how they’ve stolen the imagery of social revolution, without any intention to deliver on the concrete reforms that would make such a revolution possible. Still, it’s evident that the myths of democratization of ownership, of non-hierarchical workplaces and the like have real appeal - which suggests to Henwood that there is a real appetite for social change that the left can build upon. But it’s not clear to Henwood (or indeed to me) how best the left can do this, precisely because there isn’t a clearly articulated alternative political program that builds on the good bits of the globalized economy and information technology revolution, while avoiding some of the pitfalls.
Perhaps the best illustration of this is Michael Hardt’s and Toni Negri’s book, Empire. Henwood acknowledges that the book has flaws, but sees it as a good starting point for left-wingers who want to take globalization seriously. However, the weaknesses of Hardt and Negri’s vision not only weaken, but perhaps even cripple their argument. Empire proposes that there’s a confrontation between the Empire (roughly speaking, the forces of global capitalism) and the Multitude of workers, who are empowered as well as disempowered by globalization. It’s a nice story - but like the political program of the autonomous Marxist movement that Negri founded in the 1970’s,1 it’s remarkably scant on detail. As Henwood more or less acknowledges, it’s a celebration of agency that shows no particular interest in the actual social agents that are involved in globalization. Nor does it have any concrete political proposals to offer beyond a couple of pie-in-the-sky aspirations towards global citizenship.
The same is true of what Henwood describes as the “utterly wonderful” growth in activism over the last few years, which has been associated with the fight against the MAI, Seattle etc. Henwood is right in pointing to these activists, and the international social movement that they’re creating, as evidence that globalization and communications technologies can cut both ways, facilitating not only the advocates but the opponents of neo-liberalism. Skeptics for their part can point to the pronounced lack of a coherent set of ideas uniting these activists; their principal (and perhaps only) point of agreement is what they’re against. “Teamsters and Turtles: Together at Last” will only go so far without a concrete vision of why it is that Teamsters and turtle-lovers should be cooperating in the long run, and what sort of global society it is that they should prefer over the neo-liberal alternative.
There are real dangers in celebrating the counter-movement against neo-liberalism without spelling out a clear alternative vision of what globalization and the New Economy should involve. First, it’s all too easy to fall into Hardt and Negri’s trap - fetishizing resistance and counter-power as an end in itself rather than as a means to an end. Second, it obscures the very difficult task of constructing a realistic and coherent vision of global politics that more closely matches the egalitarian priorities of the left. Henwood’s dissection both of right-wing New Economy boosterism, and of the back-to-the-Stone-Age inanities of some anti-globalization activists, is lovely to behold. But I suspect that he would agree that it’s only a first step - the second, and far more difficult one, is to construct an alternative. As Henwood correctly points out, it’s hard to continue arguing that socialism can be maintained in one country. However, the difficulties in creating a global alternative are enormous and obvious - for better or worse, there are few of the same solidaristic feelings at the global level that there are at the national. But that, I suppose, is a matter for another book.
1 Negri’s life history reads like a bad novel: political science professor and founder of autonomist Marxism; arrested for the kidnap and murder of Italy’s former Prime Minister, Aldo Moro, and also charged with being the ‘secret leader’ of the Red Brigades; released from prison after his election to Parliament as a Radical deputy; fled to France when Parliament decided to revoke his immunity; a decade of exile followed by a voluntary return to prison in 1997 as part of a deal that the Italian government then went back on.
Quote of the week from Tyler Cowen.
I’ve been an economist for so long that I don’t flinch when the paper abstract starts as follows:
“This paper models love-making as a signaling game. In the act of love-making, man and woman send each other possibly deceptive signals about their true state of ecstasy. Each has a prior belief about the other’s state of ecstasy. These prior beliefs are associated with the other’s sexual response capacity…”
Via David Appell, I came across this marvellous quote from Freeman Dyson
In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, "How many arbitrary parameters did you use for your calculations?" I thought for a moment about our cut-off procedures and said, "Four." He said, "I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk."

It came to mind when I read this story in the NYT with the introductory claim:

What really stimulates economic growth is whether you believe in an afterlife — especially hell.

The report is of some estimations done by Rachel M. McCleary and Robert J. Barro (the story notes that the two are married), published in the American Sociological Review.
Barro is probably the biggest name in the field of cross-country growth regressions (a field in which I've also dabbled), and I'm sure he's aware that thousands of these regressions have been run and that, with very limited exceptions, findings that particular factors are conducive to growth have proved highly fragile. I haven't read the paper, so for all I know the results have been checked for robustness in every possible way. But my eyebrows went up when I saw this para:
Oddly enough, the research also showed that at a certain point, increases in church, mosque and synagogue attendance tended to depress economic growth. Mr. Barro, a renowned economist, and Ms. McCleary, a lecturer in Harvard's government department, theorized that larger attendance figures could mean that religious institutions were using up a disproportionate share of resources.

What this means is that at least two parameters have been used in fitting growth to religiosity, and that the two have opposite signs - most likely it's some sort of quadratic. In my experience, there's always at least one arbitrary choice made in the pretesting of these models (for example, once you have a quadratic, the scaling of variables becomes critical). That gives three free parameters, if not more.
I'm no John von Neumann, but with two parameters I can fit a dromedary and with three I can do a Bactrian camel.
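To make the point concrete, here's a minimal sketch (in Python, with purely made-up data, so none of the numbers mean anything) of how a quadratic specification will happily deliver a hump-shaped "effect" of religiosity on growth even when there is no signal there at all:

```python
import numpy as np

# Hypothetical data: 30 "countries" whose church attendance and
# growth rates are pure noise, with no relationship by construction.
rng = np.random.default_rng(0)
attendance = rng.uniform(0, 1, 30)    # share attending weekly
growth = rng.normal(2.0, 1.0, 30)     # annual GDP growth, per cent

# Fit growth = a + b*attendance + c*attendance**2.
# b and c are the two free parameters with (usually) opposite signs;
# together they produce a turning point somewhere, signal or no signal.
c, b, a = np.polyfit(attendance, growth, 2)
print(a, b, c)
print(-b / (2 * c))  # the attendance level at which the "effect" reverses
```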
This is really Daniel’s department, but I’ve been waiting for Samuel Brittan to update his website with his review of John Allen Paulos’s A Mathematician Plays the Market for a while, and he’s finally done it. The most bloggable point is borrowed — I think — from Taleb’s Fooled by Randomness
In financial discussions you often hear about Ms. X or Mr. Y who has had a consistently good record in beating the market indices. Paulos shows how such “successful” analysts can emerge purely by chance. Of 1,000 analysts, roughly 500 might be expected to outperform the market next year. Of these, another 250 might be expected to do so well for a second year and 125 in the third. Continuing the series, we might expect to find one analyst who does well for ten consecutive years by chance alone. But will she do better in the 11th year? Your guess is as good as mine.
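Paulos’s arithmetic is easy enough to check by simulation; here’s a minimal sketch (the 1,000 analysts and the fair coin are hypothetical, obviously):

```python
import random

# 1,000 analysts whose yearly result is a fair coin flip.
# Count how many "beat the market" ten years running by luck alone.
random.seed(2004)
analysts, years = 1000, 10
lucky = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(analysts)
)
print(lucky)  # expected value is 1000 / 2**10, i.e. about one analyst
```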
I stole this idea from Cosma Shalizi, who got it from something else. Anyway, it’s basically an irregular sampling of economics things that interested me. Mainly post-Keynesian, econophysics or sociology of economics stuff, but if I see a good Austrian piece I’ll use it. Also a few things that aren’t really all that heterodox but struck me as good. I’m trying to put in a few bits that will interest fellow nerds and obsessives, and a bit of didactic stuff for the layman, so if any of it strikes you as incomprehensible and/or patronising, then I’ll hide behind the excuse that it probably wasn’t meant for you. Email suggestions very welcome.
Chap of the Week is Perry Mehrling of Barnard College, who has a few nice-looking papers on his college webpage. The one that particularly caught my eye while I was looking for someone else is Understanding Fischer Black. Which brings to mind an old Robert Crumb joke (“nobody understands Fischer Black except Robert Merton, and he ain’t telling ….”), but is a decent summary of the Big Ideas of the general equilibrium approach to the Capital Asset Pricing Model. Personally, I think that GE is a dead duck and so is the CAPM, but Black’s take on both was a lot more interesting than the standard version, and there are some very good points about the relationship between finance and economics. Not sure that Black’s career path really followed a conscious plan to revitalise economics by entryism via the business schools as Mehrling suggests, but good stuff. Also some very worthy pieces on liquidity.
Moving on to another personal obsession, the link between economics and literary criticism is this reprint of a Journal of Post-Keynesian Economics piece by Fernando J. Cardim de Carvalho. Basically, one of the key features of tragedy is that the characters make choices which have long term consequences, without knowing what those consequences will be. Since a good chunk of Keynes’ economics is about choice under uncertainty, you can write a decent piece bringing the two together.
It’s a shame I already gave the “Chap of the Week” award out in the first paragraph, because Franklin Fisher also deserves it. On the other hand, his CV runs to 19 pages (with the John Bates Clark Award discreetly popped in, in chronological order, between “Council Member of the Econometric Society” and “FW Paish Lecturer, Association of University Teachers of Economics, Sheffield, England” - the old school used to be a bit more modest about these things), so I doubt he needs another award from me. To be honest, the papers on the site are not his best, despite the billing “Hall of Fame”, but “Janis Joplin’s Yearbook and the Theory of Damages” is a fantastic title, and at least The existence of Aggregate Production Functions is there. If that looks like heavy going, the first bit of this abstract gives you a clue what he’s on about; the answer to the question of “the existence of aggregate production functions” is that they don’t exist, and provably so, but neoclassical economists will not be talked out of using them in completely inappropriate contexts. If after that you want to see someone give the tired old undead corpse of aggregate production functions yet another kicking, this paper by Luigi Pasinetti struck me as quite good. For laymen, by the way, the debate about aggregate production functions matters because they’re used (wrongly) by applied economists to try and answer all sorts of questions about productivity and growth. More or less any time you see something in the Economist reported as fact on this topic, there’s an APF and therefore a logical fallacy at the base of it.
This was Fisher’s contribution to the Cambridge Capital debate, along with the observation that any econometric estimates of “factor productivity” are most likely only telling you about movements in factor income shares. That debate is summarised by Geoff Harcourt, the only really first-rate economist to have played in a Varsity Australian Rules football match, in a reprint from the Journal of Economic Perspectives. Harcourt is hardly a neutral in the debate, but on the other hand, he was on the side that was very clearly and demonstrably right, so any partisanship is most likely for the best.
And finally, thanks to the guy who emailed me the link to this number on the boundaries of econophysics and Marx’s value theory, and sorry I’ve lost your email so I can’t thank you by name. I’m normally allergic to this sort of Ormerodish simulation-building, but if you have to do it, I think that Ian Wright does it about as well as it can be done. I’ve never really understood the obsession on the arXiv for fitting power law distributions to economic phenomena (particularly in cases when lognormal distributions seem to fit just as well), but Wright’s model has one. It’s almost (almost) convinced me that arXiv might be useful as a source of interesting papers, rather than a great place to send people if you want to disabuse them of the idea that you can’t bluff in the ‘hard’ sciences.
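Since I’ve mentioned the power-law-versus-lognormal point, here’s a minimal sketch (with simulated data and a deliberately naive fitting procedure) of why I’m suspicious: a lognormal sample can look convincingly power-law if you only stare at the tail on a log-log plot.

```python
import numpy as np

# Simulated lognormal "firm sizes"; fit a straight line to the
# log-log empirical survival function over the top decile, which is
# what a naive power-law fit amounts to.
rng = np.random.default_rng(3)
x = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=10_000))
survival = 1.0 - np.arange(len(x)) / len(x)   # empirical P(X > x)
tail = x > np.quantile(x, 0.9)
slope, intercept = np.polyfit(np.log(x[tail]), np.log(survival[tail]), 1)
print(slope)  # a plausible-looking "exponent" fitted to non-power-law data
```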
more to come as and when ….
PS: As an example of an economist trying to do something good in the world, check out Franklin Fisher’s Water Management Site. It’s extraordinarily rare to read anything written by someone hoping to help facilitate peace in the Middle East and not to be able to work out within five minutes which side they’re on.
The discussion on the Caroline Payne story below reminds me of a fine old piece of doggerel attributed to James Tobin:
The poor complain
They always do
But that’s just idle chatter
Our system brings rewards to all
At least, all those who matter
No sooner had I mentioned 1960s expectations of what the future would be like — “future cities in which we’d all be whizzing about in our personal aeroplanes” — than I read John Kay in the Financial Times doubting whether our age is, as commonly supposed, one of unprecedented technological advance:
I began to doubt the conventional wisdom when I discovered a Hudson Institute report from the mid-1960s that predicted technological changes from then till 2000. Its prognostications about information technology were impressively accurate - it foresaw mobile phones, fax machines and large-scale data processing.
But in other areas the Hudson Institute was wide of the mark. Where are the personal flying platforms, the space colonies, the artificial moons to light our cities, the drugs that make weight reduction a painless process? Progress in IT has fully matched the expectations of three or four decades ago. But advance in other areas has, by historic standards, been disappointing.
Worth a glance.
Via The Big Picture’s Barry Ritholtz, CNN has an interesting article about which Democratic presidential candidate Wall Street might prefer. You’ve got to love the lead:
A recent study from the University of California at Berkeley, published in the October issue of the Journal of Finance shows that between 1927 and 1998, the stock market returned approximately 11 percent more a year under a Democratic president versus safer, three-month Treasurys. By comparison, the stock market only returned 2 percent more a year versus the T-bills under Republicans.
(Dwight Merideth had a marvelous series of posts on this subject called “Just For the Record”, by the way.)
Perhaps I shouldn’t have been surprised, but Bush’s support from the “investor class” is far from monolithic. A Money magazine poll of “investor class” voters, however defined, revealed that only half planned to vote for Bush. And while Republicans got more in donations, they didn’t get that much more.
The piece goes on to give the details. It’s short and well worth a look; go to it.
An email from a reader alerts me to The Cheating Culture by David Callahan, a new book which blames a whole raft of scandals in the US — from Enron to athlete doping — on the erosion of a sense of fair play in the winner-takes-all society. The book’s website has an interview with the author and also incorporates the author’s own blog on the issues covered by the book. Worth a look.
The other day I found myself reading a leftist rag that made outrageous claims about America. It said that we are becoming a society in which the poor tend to stay poor, no matter how hard they work; in which sons are much more likely to inherit the socioeconomic status of their father than they were a generation ago. The name of the leftist rag? Business Week …
I’m so glad that John Q. brought up the terrorism futures markets, because I’ve been dying to talk about them. The proposal to open a market in “terrorism futures” only lasted a day before it was retracted, and captured the imagination of many libertarians and libertarian-sympathizers. It was sharply criticized by Congressional Democrats, who felt it abhorrent that the government would open a market allowing terrorists to earn a monetary profit from their terrorist actions. But there’s an answer to that:
“Why wouldn’t terrorists just hop online and start betting if they couldn’t either mislead American authorities about their plans or make money to fund more al Qaeda operations?” Wyden asked. Why not indeed? If terrorists were trying to use PAM to make money that “would mean that they are giving up information to gain money,” says Hanson. “In other words, we’re bribing them to tell us what they are going to do. That’s kind of like normal intelligence gathering when we bribe agents for information.”
I agree that the idea is fascinating, and it was probably retracted too soon. Nonetheless, I don’t see any way that it could work.
A) Betting parlors minimize transaction costs by only taking bets on unambiguous, discrete events. The ball is in the red or the black. The Chargers beat the spread or they don’t. The house isn’t going to engage your argument about why the black space is really just a very dark red.
Futures markets are so liquid and frictionless because they deal in standardized commodities. That is, if you want to buy a contract for a million shares of Microsoft in March, it makes no difference whether you’re buying it from one seller, ten sellers or a hundred sellers. The quality of the shares is identical. It isn’t necessary to speak to the sellers, or even know who they are. You know exactly what you’re buying and exactly what you’re selling.
I don’t see how the terrorism market can capture these efficiencies. Let’s say you have a contract that says “The Sears Tower will be the subject of a terrorist attack in February.” Would you get paid if:
- Someone throws a cherry bomb in a toilet.
- Someone throws a cherry bomb in a toilet. It creates a panic, and someone breaks their neck on the stairs as people flee.
- Spores from a biological attack on the Chicago Mercantile Exchange drift into the Sears Tower and infect some workers.
- John Smith comes to work and shoots his co-workers.
- Khalid Mohammed comes to work and shoots his co-workers.
- The FBI finds plans to shoot a missile at the Sears Tower and arrests the man holding the plans.
- The plans are later discovered to be a prank.
- A truck full of explosives is discovered in a parking lot downtown. The destination is unclear.
- An amateur pilot crashes a small plane into the Sears Tower, killing only himself. There’s no note, and it’s unclear whether it was an accident.
- The government of North Korea blows up the Sears Tower.
And so on. When the time comes to settle up, it’s easy to imagine events which would leave contract holders arguing about whether the outcome qualifies as terrorism or not. Some of the arguments will be more valid than others, but with a lot of money on the line, rational actors will often find the value in a legal challenge.
This is not necessarily an insurmountable problem; participants could deal with it by writing more detailed contracts. But transaction costs in a market would be unreasonable if every participant in the market had to negotiate every sale. The virtues of commodification would disappear. A buyer couldn’t effortlessly group ten standardized contracts to gain $1 million in protection; he or she would have to have lawyers read and negotiate each of the contracts.
In theory, people with assets that are possible targets of terrorist actions might enter into this market to hedge their risks. In practice, though, they can greatly reduce transaction costs by skipping the market and buying insurance.
B) I don’t see any way out of the “terrorists could profit off of their own terrorism” problem. It’s a big world, with hundreds of thousands of potential targets, and the odds of any one of them being hit are very small. An efficient market would greatly reward a small bet on an unlikely target. The bets could be small enough that they wouldn’t need to tip anyone off, and still pay off handsomely. (This assumes that there would be sufficient liquidity in the market to cover the contract on the (Blank) Tower in Capital City. If there isn’t, that’s another problem.)
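The arithmetic here is brutal. A sketch, with entirely made-up numbers:

```python
# A contract paying $1 if a given obscure target is attacked, priced
# at the market's implied probability. All figures are hypothetical.
price = 0.005              # implied probability: half of one per cent
stake = 500                # small enough to attract no attention
payout = stake / price     # each $1-payout contract costs $0.005
print(payout)              # 100000.0 -- a $500 bet returns $100,000
```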
C) Furthermore, it would provide an incentive for traders, who are not notorious for their incorruptibility, to create their own luck. I could imagine the defense: “Was it irresponsible to phone in a phony bomb threat to the airport? I submit that it would have been irresponsible to our shareholders not to.”
D) Let’s say that I’m wrong, and the market gets off the ground. Imagine that a speculator strikes it rich on the terrorism market: he correctly bets on a few terrorist attacks before they happen and makes a small fortune.
Would you want to be that speculator? I wouldn’t. At best, he’ll be brought into a windowless room to explain his powers of prognostication. At worst, he’ll be dragged into the streets and hanged from a lamppost.
E) The point of the proposal is not necessarily to make anyone rich. It’s to harness the power of the market to produce actionable information. This information would allow the Department of Homeland Security to thwart terrorist actions before they happen.
But if it was working as planned, wouldn’t it short-circuit itself? Imagine that the invisible hand pushed the price of contracts on the Brooklyn Bridge up, as the market correctly senses an oncoming attack. (How it would be sensed, I dunno.) If the DHS could use this information to foil the attack, then the people who are holding contracts predicting an attack don’t get paid.
Wouldn’t the market include this information in its calculation? If actionably high prices mean that the terrorist attack won’t happen, then the prices won’t rise to actionable levels. Which defeats the ostensible purpose of the proposal.
Tell me why I’m wrong.
Once there were three bubbles. The one that attracted everyone’s attention was the dotcom bubble, of which no more needs to be said. The second bubble, noted by plenty of economists, was the glaring overvaluation of the US dollar. Given chronic deficits in both the budget and current account, and the fact that the US dollar was trading at a value well above purchasing power parity, anyone who gave any credence to the view that markets eventually reach equilibrium could conclude that the US dollar was bound to fall, and it has duly done so. (This only leaves the question of why putatively rational investors did not sell earlier.)
The third bubble seemed, until this year, like part of the second. Rates of interest on 10-year US government bonds are amazingly low, currently around 4.25 per cent (the price moves inversely with the interest rate, so low interest rates mean a bubble in bond prices). Most economists would, I think, have assumed that, as the US dollar declined in value, long-term interest rates would go up. But, apart from a brief panic a few months ago, this hasn’t happened.
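To see the inverse price-rate relationship at work, here’s a minimal sketch with hypothetical coupon numbers:

```python
# Price of a 10-year bond paying an annual coupon of 5 on a face
# value of 100, as a function of the market yield.
def bond_price(y, coupon=5.0, face=100.0, years=10):
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + y) ** years

print(bond_price(0.0425))  # about 106: low yield, high price
print(bond_price(0.06))    # about 93: higher yield, lower price
```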
Why have long-term interest rates stayed so low? There are a bunch of factors that might be considered. First, as long as it maintains ‘credibility’, the US Federal Reserve can control short-term rates. The general assumption is that this control doesn’t extend to long-term rates, but the long run is just a sequence of short runs. If the Fed keeps the short rate at 1 per cent for years on end, the long rate must also be low (otherwise you could make as much money as you liked by borrowing short and lending long). I’m not convinced by this because, carried on indefinitely, such a policy would lead to resurgent inflation. So, if the Fed is to keep its credibility, the policy can be sustained for a few years at most.
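The “sequence of short runs” argument can be made concrete. Under the expectations view, the 10-year rate is roughly the geometric average of expected one-year rates; a sketch with a purely hypothetical rate path:

```python
# Suppose the Fed holds the short rate at 1 per cent for three years,
# after which rates are expected to normalise to 5 per cent.
short_rates = [0.01] * 3 + [0.05] * 7
compounded = 1.0
for r in short_rates:
    compounded *= 1 + r
long_rate = compounded ** (1 / len(short_rates)) - 1
print(long_rate)  # about 3.8 per cent: a few low years drag the 10-year rate down
```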
The second part of the story is that, while individuals are getting rid of US government bonds, Asian central banks are buying them. You can see this in the data supplied by the Bureau of Economic Analysis.
The third, and most interesting fact is that, even as it runs deficits, the US government is, in effect, buying its own debt. More precisely, it is not rolling over long-term debt as it expires but is, instead, issuing short-term debt. For example, even though rates on 30-year bonds look like an amazing bargain for a borrower, they are no longer being issued. Similarly, new issues of 10-year debt seem to be declining.
What all this means is that things are going to get very messy in a few years’ time, when the need to roll over increasing amounts of short-term debt coincides with the payment of Social Security to the first of the baby boomers (in this purely actuarial context, generational terms like this are of some use).
As we have seen, no matter how solid they may seem, bubbles eventually burst, and the bond bubble will be no different.
I’ve just reached Amartya Sen’s chapter “Famines and Other Crises” in Development as Freedom. He has some discussion of the great famines that depopulated Ireland from 1845 onwards. The potato blight had destroyed the crop, but the Irish peasantry lacked the resources to buy alternative foodstuffs, which continued to be exported:
ship after ship — laden with wheat, oats, cattle, pigs, eggs and butter — sailed down the Shannon bound for well-fed England from famine-stricken Ireland. (p.172)
Sen argues that cultural alienation (or even hostility) meant that
very little help was provided by the government of the United Kingdom to alleviate the destitution and starvation of the Irish through the period of the famine. (p. 173)
Interesting, because Natalie Solent, who has been writing about famines recently, links to an essay in the National Review Online by the awful John Derbyshire on the subject. Derbyshire asks why the
British government did not organize adequate relief, or prevent the export of foodstuffs from Ireland while Irish people were starving.
and answers
it was not within the nature, philosophy or resources of Anglo-Saxon governments to do such things in the 1840s.
Contrast Sen, who knows the facts:
… by the 1840s, when the Irish famine occurred, an extensive system of poverty relief was fairly well established in Britain, as far as Britain itself was concerned. England too had its share of the poor, and even the life of the employed English worker was far from prosperous …. But there was still some political commitment to prevent open starvation within England. A similar commitment did not apply to the Empire — not even to Ireland. Even the Poor Laws gave the English destitute substantially more rights than the Irish destitute got from the more anemic Poor Laws that were instituted for Ireland.
So contra Derbyshire, who is probably just making it up as he goes along (but then gets quoted and circulated around the network of misinformation that is the blogosphere), it was “in the nature” of Anglo-Saxon governments, even in the 1840s, to do “such things”. Just not for the Irish or the Indians.
Sen also provides us with this striking portrait of Charles Edward Trevelyan
the head of the Treasury during the Irish famines, who saw not much wrong with British economic policy in Ireland (of which he was in charge), point[ing] to Irish habits as part of the explanation of the famines. Chief among the habitual failures was the tendency of the Irish poor to eat only potatoes, which made them dependent on one crop. Indeed, Trevelyan’s view of the causation of the Irish famines permitted him to link them with his analysis of Irish cooking: “There is scarcely a woman of the peasant class in the West of Ireland whose culinary art exceeds the boiling of a potato.” The remark is of interest not just because it is rather rare for an Englishman to find a suitable occasion for making international criticism of culinary art. Rather, the pointing of an accusing finger at the meagreness of the diet of the Irish poor well illustrates the tendency to blame the victim. The victims, in his view, had helped themselves to a disaster, despite the best efforts of the administration in London to prevent it. (p. 175)
Blaming the victim, bad choices, poor diet — I’ve heard those explanations before somewhere. And cultural alienation from those suffering from acute poverty? Plus ça change, plus c’est la même chose.
I’ve occasionally mentioned that my lovely fiancee is a professional writer. She has been successful enough to hire some assistants and create a full-fledged corporate communications company, FrogDog Communications. It’s a terrific accomplishment.
Their webpage is up, and there are so many reasons to visit.
(That’s enough promotion. - Ed.)
I’ve been reading Amartya Sen’s magnificent Development as Freedom this week. A more bloggable book would be hard to find: startling facts and insights jostle one another on every page. Even when you already know something, Sen is pretty good at reminding, underlining and making you think further about it. So this, for example, on the life prospects of African Americans:
Even though the per capita income of African Americans in the United States is considerably lower than that of the white population, African Americans are very much richer in income terms than the people of China or Kerala (even after correcting for cost-of-living differences). In this context, the comparison of survival prospects of African Americans vis-a-vis those of the very much poorer Chinese or Indians in Kerala, is of particular interest. African Americans tend to do better in terms of survival at low age groups (especially in terms of infant mortality), but the picture changes over the years.
In fact, it turns out that men in China and in Kerala decisively outlive African American men in terms of surviving to older age groups. Even African American women end up having a survival pattern for the higher ages similar to that of the much poorer Chinese, and decidedly lower survival rates than the even poorer Indians in Kerala. So it is not only the case that American blacks suffer from relative deprivation in terms of income per head vis-a-vis American whites, they are also absolutely more deprived than low-income Indians in Kerala (for both women and men), and the Chinese (in the case of men), in terms of living to ripe old ages.
Shocking, for the strongest economy on earth to create these outcomes (which, as Sen reminds us, are even worse for the black male populations of particular US cities).
UPDATE: Thanks to Noumenon for a link to this item . I closed the comments thread because I didn’t want to spend my weekend fighting trolls. But email suggests that there are some people who have worthwhile things to say so I’m opening it again (though I won’t be participating myself).
This is a piece I’ve been thinking about for around a year and have now finally got round to writing up now that the Cardhu Scandal has made it arguably topical again. Basically it’s an idea for anyone who wants an easy way into thinking about capital theory. I’ve thought for a while that the booze industry ought to be used much more as an example for people thinking about time and production, because it allows you to abstract from considerations of technology and the production process; there are any number of ways to produce a chair, some more time-consuming than others, but there’s only one way to produce a cask of ten-year-old whisky[1]: start with a cask full of nine-year-old whisky and wait. The fact that time is intrinsically part of the production process for wine and brown spirits is why you see “capitalised interest” on the balance sheets of drinks companies; part of the economic cost of whisky production, and therefore part of the eventual sale price and the value of the goods, is the interest foregone during the process of maturation. It’s this interest element which I’m going to concentrate on.
I’ve picked whisky as my example rather than wine, because it’s a product which is more homogeneous over time than wine. Unlike vineyards, whisky producers don’t have vintage years, because whisky is a more industrial and less agricultural product; the growing conditions of the grain make less of a difference to the finished product than they do with grapes. Therefore, there aren’t so many considerations to deal with about whether a particular year was “good” or “bad” when it went into the barrel; the capitalised interest element is the most important reason why older whiskies cost more than younger ones. I couldn’t abstract entirely from these kinds of issues, as you’ll see below, but it was possible to do quite a lot even under the aggressive simplifying assumption that all whisky is homogeneous.
I managed to dig up two datasets; one from the Scotch Malt Whisky Society and one from the WhiskyWeb high end vintage malts catalogue. The SMWS dataset is larger, and it’s more concentrated toward the younger end of the yield curve. It’s also, I think, more homogeneous in the types of malts on offer; the SMWS, as far as I can tell, is selling for the most part whiskies from smaller distilleries rather than the megabrand single malts, and it seems unlikely to me that there will be a lot of variation in the rarity of the malts on offer. The SMWS is also a mutual society rather than a profit-making concern, so I have less to worry about in the area of strategic pricing behaviour. For this reason, I’ll mainly be concentrating on this dataset.
My first step was to go through the current SMWS catalogue on the web, noting the age and price of all the whisky they’re selling through their website (30 data points in all). The next thing to do is to adjust for tax, since UK excise duty is charged at £19.65 for every 1% of alcohol strength per 100 litres, rather than ad valorem.[2] Plotting and eyeballing the data below, it’s clear that the relationship is there.
In order to get anywhere, though, we need an estimate of the non-time-related costs of whisky production (mainly excise duty, but also grain, water, fuel and overhead). There’s a number of approaches one could take to this, but I decided on the simplest one I could think of that wasn’t absolutely obviously wrong. I took logs of the prices (because we’re dealing with compound growth rates here) and regressed log price on age, as below:
The regression equation suggested that the intercept of the simple log growth equation was 2.9877, and the antilog of 2.9877 is 19.84, suggesting a notional price of about twenty quid for raw single malt spirit straight out of the distillery. That strikes me as not so bizarrely out of line with the price of a bottle of vodka (an unaged grain spirit) as to be unusable, so I’m sticking with it. As shown lower down, this is quite an important number, though; I’d be grateful for any opinions on what a correct number might be.
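For anyone who wants to replicate the estimation step, here’s a minimal sketch. The (age, ex-duty price) pairs below are made up, though I’ve chosen them to be roughly consistent with the estimates reported above rather than copying out the actual SMWS catalogue:

```python
import numpy as np

ages = np.array([10, 12, 15, 18, 21, 25, 30])
prices = np.array([30, 32, 36, 41, 46, 54, 66])  # pounds, ex duty; hypothetical

# Regress log price on age: log(P) = intercept + slope * age.
slope, intercept = np.polyfit(ages, np.log(prices), 1)
print(np.exp(intercept))   # the notional "age zero" raw spirit price, about 20
print(np.exp(slope) - 1)   # the implied annual growth rate in the cask, about 4%
```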
Once you’ve got the price of a bottle of whisky of zero age (Pedants may at this point say that it can’t be called whisky if it’s less than three years old), it’s easy to compute the implied rate of return of a bottle of X year-old whisky selling for Y. Here’s the scatterplot, with a fairly boneheaded scatterplot smoother thrown in using Excel’s “Moving Average Trendline” function.
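The computation behind each point on that scatterplot is just compound interest run backwards; a minimal sketch, using the £19.84 estimate from above:

```python
def implied_return(age, price, raw_price=19.84):
    # Solve price = raw_price * (1 + r)**age for the annual rate r.
    return (price / raw_price) ** (1 / age) - 1

print(implied_return(10, 30.0))  # about 4.2 per cent, for a hypothetical bottle
```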
I wouldn’t read too much into the downward slope of the yield curve on this chart; it’s just kicked around a bit by the fact that there are only a few data points at the short end. I would take away from this scatterplot that the realised rate of return on malt whisky in the cask over the last forty years has been surprisingly constant at four per cent, give or take fifty basis points.
One might have expected a positive slope in this yield curve, if one thinks that whisky in the cask gets better at an ever-increasing rate (possibly a whisky snob’s a priori belief), or if one thinks that, since (as the Cardhu case proves) the supply of properly aged whisky is incredibly inelastic, older whiskies should command a rarity value. But I don’t think you can support this with the data. I played around with my spreadsheet until I got some signs of visible upward slope (basically, at a raw spirit cost of around £24 or above), and you get this chart:
I don’t think that the implied rates of return on the younger malts are credible; they’re well below the rate of return on money in the bank, so why would people be selling? (Also, my guess is that the implied forward rate of appreciation of an 8-year-old bottle as it turns into a 10-year-old bottle would be implausible.) On this criterion, my log-linear estimate of £19.84 looks quite good; the lowest data point is 3.4%, so almost everyone’s earning more from whisky in the cask than the money rate of interest. My guess is that, within the SMWS dataset, the older spirits on offer are from less prestigious distilleries, so any time-varying rarity premium is being offset by a negative distillery-specific premium. Since I couldn’t conveniently get a list of the distilleries from the SMWS website (it could be done, but it seemed more trouble than it was worth [3]), I didn’t check up on this.
Modelling the effects of rarity premia is probably best done with the WhiskyWeb dataset. This is a short but expensive catalogue of older, more prestigious malts. There are fewer datapoints (25), and they range from 16 to 66 years[4] old, so I felt uncomfortable putting them on a scatterplot. I’ve tabulated them below, sorted by distillery and then by age. Note that I’ve not tax-adjusted the prices because I’m not at all clear on the excise duty position of WhiskyWeb; I crudely altered the implied raw spirit price to sort of compensate. This probably accounts for some of the very low (c. 2.5%) implied rates of return on some distilleries.
What one gets out of this is, first, that the estimate of four per cent still looks pretty OK, even going out to some really rather old whiskies. Second, there are clearly premia that vary between distilleries (Macallans seem to do much better, for example), and the very very old whiskies do appear to have an age premium; the assumed upward slope is there if you go out far enough. I’m mildly surprised about the distillery-specific premia in the growth rates (I’d have assumed that the premia would be there simply in the young whiskies, and that growth thereafter would be simply determined by the time value of money). Maybe the best malts age better, or maybe there is more demand for them so they get rarer as they get older.
So there you go. The internal rate of return on Scotch malt whisky in the cask since the war has been four per cent for whisky to be sold today. Mine’s a Chivas.
(For classroom discussion; How does this “malt whisky yield curve” relate to the more normal bond market curve? What rate of return might you expect today on spirit being distilled today?)
[1] Go on, spell it “whiskey” in the comments. I dare you. I just dare you.
[2] I assumed that the prices quoted were for a standard 75cl bottle at cask strength (50%), giving excise duty per bottle of £7.37.
[3] Perhaps a curious decision given the time and trouble gone into this piece overall …
[4] WhiskyWeb gives the “vintage” (year of distillation) rather than the age for most of its whiskies; I hand-calculated the age, and this might be a source of errors, given that for some of the whiskies where an age is given, it doesn’t seem to match up too well with the stated year of distillation.
In a post yesterday about the later age at which academics get proper jobs nowadays, I focused on how this means that academics now have fewer children later (or none at all). But there’s another consequence of the way the job market and accreditation process have changed: pensions. Academics here in the UK still have a final salary pension scheme (which is nice). The scheme assumes that to receive a full-value pension (50 per cent of final salary) you have made 40 years’ worth of contributions. I’ve even met some academics — appointed at around age 23 in the 1960s — who’ve managed this. But those who have entered the profession late (and burdened with debt) from the 1990s onwards, at age 30+, will never pay in their 40 years (given retirement at 65) and will therefore receive a lower income in their old age. I’ve assumed in this post that the system is the UK one, but obviously the point generalises beyond final-salary schemes. Those who earn proper salaries later (and are debt-ridden) will not contribute so much towards their pension — especially if they are trying to bring up a delayed young family! — and will suffer in their retirement.
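The arithmetic is simple enough to set out; a minimal sketch of the accrual rule described above (the entry ages are illustrative):

```python
# Final-salary scheme: full pension of 50% of final salary after 40
# years of contributions, with retirement at 65.
def pension_fraction(entry_age, retirement_age=65, full_years=40, cap=0.5):
    years = min(retirement_age - entry_age, full_years)
    return cap * years / full_years

print(pension_fraction(23))  # 0.5: the 1960s appointee gets the full pension
print(pension_fraction(32))  # about 0.41: the late entrant never reaches 40 years
```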
Given that Paul Krugman is reminding us all of Stein’s Law (“Things that can’t go on forever, don’t”), I thought I’d remind everyone of Davies’ Corollaries:
1. Things that can’t go on forever, go on much longer than you think they will.
2. Corollary 1 applies even after taking into account Corollary 1.
Many years ago I had to supplement my income by teaching evening classes in public administration. At the time — and maybe now, for all I know — something called the “Baumol effect” was being widely blamed for higher inflation in the public sector than in the private sector. I was reminded of this recently when reading the France Profonde column in the latest Prospect (subscribers only - free to web in about 3 weeks). The latest article bemoans the decline in French traditional cooking both at home and in restaurants. The basic problem seems to be the same in both settings: traditional French dishes are often very time-consuming and labour-intensive. The result: people don’t bother much at home (except on special occasions) and restaurants buy in inferior pre-prepared vacuum-packed versions of favourite dishes.
The restaurateur who was interviewed for the piece blamed French labour laws (as les petits commerçants always do). No doubt that’s part of the picture, but I doubt that even without the SMIG (minimum wage) and other regulations the picture would be all that different. After all, labour power has to earn enough to reproduce itself, and there’s still going to be competition for labour.
The Baumol effect sounded like a more plausible culprit to me. The basic idea — at least to a non-economist like me — is this: some human activities allow for the easy substitution of capital for labour. Others don’t. As technological progress (or the development of the forces of production, as we ex-Marxists like to say) proceeds, sectors that are able to incorporate the new technology pay higher wages to smaller numbers of workers. Those that can’t still have to pay the higher wages (after all they’re competing with the others) but to their still numerous workers. And so the relative price of labour-intensive activities goes up.
(I’m sure that Daniel or John Quiggin or someone can provide a more rigorous account with all the i’s dotted and t’s crossed.)
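In lieu of rigour, here’s a minimal two-sector sketch of the mechanism, with made-up numbers:

```python
# Sector A (manufacturing, say) enjoys 2% annual productivity growth;
# sector B (traditional cooking) gets none. Competition for labour
# means wages track productivity in A, so B's unit labour cost rises.
years = 30
productivity_a = productivity_b = wage = 1.0
for _ in range(years):
    productivity_a *= 1.02
    wage *= 1.02                 # wages follow the progressive sector
print(wage / productivity_a)     # A's unit cost: still 1.0
print(wage / productivity_b)     # B's unit cost: about 1.81 after 30 years
```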
Anyway, back to French cooking …. The thought is that a sector with increasing relative costs due to its labour intensity has a number of options. One is to allow its prices to rise. That’s fine so long as demand is inelastic. Another is to try to screw down the wages of the workers in that sector. Again, fine, so long as they don’t have other options, but possibly a problem for long-term recruitment. A third is to try to pass off some degraded version of the product as the real thing and hope that no-one really notices (which is what the French purveyors of vacuum-packed cassoulet are doing).
The long-term effect, of course, is that the real thing, which used to be available to everyone at reasonable prices, becomes scarcer and scarcer and only available to those with money (or to the rest of us on special occasions), and most people have to make do with an inferior substitute. (Though as the generations pass and palates don’t get educated, fewer people will know what they’re missing.)
French cooking, then, is a bit like higher education. The real thing is labour-intensive and has got pricier over the years. And you can’t really substitute technology for the labour (e-learning) and get the same product. But, of course, people will pretend that you can and will try to pass off some shoddy substitute as the same as the good old stuff that everyone used to get. In fact, if they’re really shameless, they’ll try to pass off what people get now as better (“those one-to-one tutorials are so twentieth century”). (And the other strategy, downward pressure on wages leading to recruitment pressures in the long run, is also a well-known feature of university life.)
There seems to be a lesson here. Why does “progress” lead to some things being worse? French cooking, higher education and typography to name but three. Often because it raises the relative cost of the real thing and there’s a cheaper sort-of-functional-but-crap-really alternative which can be deployed. We put up with it, and spend the money we saved on something else, but we know if we’re honest with ourselves that something of quality has been lost.
If anyone knows of a position for an experienced, highly qualified technical writer, you could do no better than contact Ginger Stampley at immlass@yahoo.com.
If anyone has a QA or CM position open, and is looking for a bright, hardworking guy who has previously managed a project staff of fifteen (I believe), you will probably want to contact Michael Croft* at michael@whiterose.org. They would relocate for a good offer.
Via William Sjostrom, this rather remarkable specimen of codswallop from Gary Becker, Edward Lazear and Kevin Murphy. These gathered luminaries argue in the Opinion Journal that cutting taxes has a double pay-off. It starves the beast, making cuts in welfare state spending more likely, and it also encourages workers to invest in “human capital,” i.e. job skills.
The evidence is clear: Cutting taxes will have beneficial effects. Tax cuts will keep government spending in check and will provide the incentives necessary to produce a highly skilled, productive work force that enables high economic growth and rising standards of living.
This claim rests on some rather heroic assumptions which I won’t go into. It’s also, very possibly, self-contradictory; you can make quite a strong case that the two effects interfere with each other. Torben Iversen and David Soskice provide some decent evidence to suggest that people with high levels of specific skills actually want a beefy welfare state. More pertinently, where people don’t have such a welfare state, they may have a strong incentive to avoid investing in job-specific skills. If this result holds, then the benefits of tax cuts for human capital formation are not clear at all. Starving the welfare state will deplete valuable forms of human capital.
Why would a strong welfare state affect people’s willingness to invest in job-specific skills? Simplifying slightly, Iversen, Soskice and their collaborator, Margarita Estevez-Abe, argue that people, if they’re economically rational and risk averse, are going to be less likely to invest in job or industry specific skills if they think that this investment is risky. People who have invested heavily in job or industry specific skills will get into trouble if there is a weak welfare state, and they get fired, or their firm goes bankrupt, or, worst of all, if their entire industry takes a serious downturn. They have difficulty in finding a new job, because they’ve invested in skills that are non-portable across firms or industries. In some countries, they may have to take the first job that’s offered to them or lose their welfare benefits, even if that job doesn’t match their existing skills. If, however, there is a strong welfare state, then individuals with job or industry specific skills have a safety net. They can take their time trying to find a new job that matches their previously existing skill set.
The implication is obvious - social welfare isn’t only about distribution, it’s about insurance. Furthermore, a certain level of welfare state provision may be necessary to encourage people to invest heavily in risky skills that nonetheless may have important economic pay-offs. Individuals who are faced with a weak welfare state are likely to invest primarily in generalist skills, which are easily portable from job to job, and from industry to industry. To be sure, skills profiles of this sort have their advantages (especially in periods of flux and change). However, these self-same individuals are likely to underinvest in specialist skills without a strong welfare state - these skills will often be too risky to be worth it. Conversely, if a strong welfare state exists, individuals will feel much much happier about taking the risk of investing in skill profiles that may make it more difficult to find a job in economic downturns.
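To see how little machinery the argument needs, here’s a minimal sketch with made-up numbers, a log-utility worker, and nothing else:

```python
import math

# Choice: general skills pay a safe wage of 1.0; specific skills pay
# 1.5, but with a 20% chance of a downturn in which income falls to
# the welfare benefit b. All numbers are hypothetical.
def eu_specific(b, wage=1.5, p_downturn=0.2):
    return (1 - p_downturn) * math.log(wage) + p_downturn * math.log(b)

eu_general = math.log(1.0)             # = 0, the safe option
print(eu_specific(0.1) > eu_general)   # False: weak safety net, stay general
print(eu_specific(0.8) > eu_general)   # True: strong safety net, go specific
```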
If Estevez-Abe, Iversen and Soskice are right, tax-cuts that starve the welfare state are likely to have perverse consequences. They’re going to lead to underinvestment in economically beneficial skills. Some industries and professions clearly require an intensive investment in specific skills; these industries are likely to do poorly in countries with lousy welfare states. It’s almost certainly easier to cultivate certain kinds of human capital in Sweden than in the latter-day US. And that’s something that Becker et al. don’t even begin to think about.
Here’s my contribution to the “M-Type versus C-Type” debate. Basically, just as it’s a useful analytical distinction to make that all UK Prime Ministers are either bookies or vicars, it’s always worth remembering that all economic policy debates of interest can be usefully analogised either to blackjack or to three-card monte.
I’m assuming everyone knows a little bit about blackjack, but monte is known by other names in other jurisdictions, so I provided the link above so that everyone knows what I’m on about. Note that the maintainers of that site are inclined to be suspicious of monte as a game, but you shouldn’t damn a game just because most of the players are cheats; in principle, 3CM is a perfectly legitimate game of skill. What I’m concerned with is the optimal strategy for playing 3CM and how it differs from the optimal strategy for playing blackjack.
First, consider the “house edge” for the two games under the simplest possible strategy: random play. For monte, this means just picking a card at random, which pretty intuitively means that you win double your money one time in three, for an expected return of minus 33%. For blackjack, you flip a coin to decide whether to hit or stand - I couldn’t be bothered to work out the precise odds on this one, but the inexcusably crude Monte Carlo simulation macro I just wrote in Excel suggests that if you were playing blackjack with an infinite number of perfectly randomised decks and no aces, the house edge would be about the same.
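For the curious, here’s roughly what that crude simulation looks like; this is my own sketch in Python rather than the Excel macro, with the same simplifications (infinite ace-free deck, dealer hits to 17, player flips a coin on every decision):

```python
import random

CARDS = list(range(2, 10)) + [10] * 4  # 2-9 plus the four ten-value ranks

def deal():
    return random.choice(CARDS)

def play():
    player = deal() + deal()
    while player < 21 and random.random() < 0.5:  # coin-flip hit/stand
        player += deal()
    if player > 21:
        return -1                       # player busts and loses outright
    dealer = deal() + deal()
    while dealer < 17:
        dealer += deal()
    if dealer > 21 or player > dealer:
        return 1
    return 0 if player == dealer else -1

random.seed(0)
n = 100_000
print(sum(play() for _ in range(n)) / n)  # markedly negative, in monte's ballpark
```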
Now, how do you do better than the return to a perfectly random strategy? In blackjack, there are various levels of sophistication you go through. You can play basic strategy, under which you condition your hit/stand decision on the probability of a randomised deck getting you nearer to (but not over) 21. You can count cards, and take advantage of the fact that the deck is not a random number generator; if a lot of high cards have already been dealt, the expected value of a random deal is lower (thus making the deck slightly less favourable to you; if you increase and decrease your bet size in proportion to your advantage, your expected return is much better). Or, at the limit, you can do what Scott Hagwood does, and just memorise the entire deck - at this point, blackjack is no longer a game of chance for you, and you should be able to win every time the deck has a winning hand for you, and place a zero bet when it doesn’t, making your potential return infinite and risk free (unsurprisingly, Mr Hagwood isn’t allowed in any casinos).
How do you improve your odds at three-card monte? Well, you watch the cards carefully, look for the sleight of hand and misdirection, and aim to develop an eye that is quicker than the dealer’s hand. It sounds pretty difficult, but I daresay that there are people out there who can do it; my guess is that a body language expert like Geoffrey Beattie would be able to train himself to do well at 3CM, because I’d guess that a lot of “tossers” (no, stop sniggering, that’s what they’re called) leak their intentions pretty plainly as to where they’re going to throw the red lady. Beattie also has some pretty tasty mates, which might come in handy if he were to take this up as a career.
So anyway, what’s the point of this? Well, basically, look at it this way. In blackjack, there is a perfect strategy, but there are also lots of less-than-perfect strategies which still deliver a much better expected return than random play (decent card-counting strategies often have positive expected return). In 3CM, there’s the strategy which, if perfectly executed, delivers perfection, and basically nothing else.
It gets worse. If you adopt one of the blackjack strategies and make a few mistakes, then it doesn’t usually matter a huge amount. If you get the running count wrong by a couple, or if your memorisation of the decks is less than perfect, you most likely still have a decent grasp of the overall position of the deck: whether things are good or bad for you as a player. In three-card monte, if you try to track the lady and make a single mistake, then you’re going to do worse than you would by simply playing randomly.
And this is the analogy to economic arguments. Some kinds of economic argument are relatively robust; they deal with broad truths to which even a basic-level understanding of the theory is a reasonable approximation. Other results in economics are incredibly fragile, and the “Economics 101” style of reasoning can lead you inexorably to exactly the wrong conclusion. That’s why it pays to be hugely suspicious of economists who have simple and appealing answers to complicated questions, whether they want to spend a load of government money on something or whether they want to “get the government out of it”. This line of argument is plainest in Ronald Coase, but it’s there in Hayek.
Furthermore, when some people argue economics, they argue it like a three-card monte tosser. Steel tariffs a bad idea, your honour? Well, maybe in a perfect world you’re right, but you have to consider intra-industry trade, the global pattern of industrialisation, adjustment costs and the Cancun agenda. Pensions privatisation transfers risk to the workers? But what about the concentration of defined benefit plans on a single employer, trends in labour mobility, efficient capital markets and the Demographic Timebomb? Step right up, find the lady, look for the faces, watch out for the aces, you look lucky this evening sir! When you’re dealing with people who want to use genuine complexities in the economic theory as a smokescreen, you’re faced with the choice of either oversimplifying yourself, or weakening yourself rhetorically by conceding ground on tangential issues. In this case, the only sensible choice is not to play.
That’s why it often makes sense to make those “M-Type” arguments. If a friend asks you for help playing blackjack, you can teach him about card-counting. But if you’re trying to give advice on a game of three-card monte, all you can legitimately do is point out that it is, in fact, a game of three-card monte. And if someone points out that this isn’t getting us any closer to a reasoned analysis of where the red queen actually is on the table, then so be it.
The FT’s economist-as-agony-aunt column takes a look at the costs and benefits of suicide.
Arnold Kling posts an essay on Tech Central Station, criticizing Paul Krugman’s punditry for deviating from sound economic theory. Kling suggests that Paul Krugman should stick to “Type C” arguments, about the consequences of policies, and that he should avoid “Type M” arguments about the motives underlying these policies. According to Kling, type M arguments are difficult to prove, and are anyway unimportant compared to policy outcomes, which are what we should care about.
Kling’s tone is reasonable and moderate, as compared, say, to the mendacious and economically illiterate ravings of Donald Luskin and his ilk. He’s still wrong.
First, is it true to say that type M arguments are more difficult to prove than type C arguments? Surely, it depends. Many type C arguments - including most of the economic policy arguments that Kling is interested in - involve very complex empirical relationships. Economists don’t seem anywhere near reaching closure about how best to model many of these relationships. These arguments can get just as diffuse as the Type M arguments that Kling doesn’t like, even if they’re conducted in technical language. Further, there are many occasions when Type M arguments are remarkably simple and straightforward. There’s a close and undeniable connection between politicians’ behavior and the policy demands of groups that donate large amounts of money to these politicians. Correlation does not prove causation, but when there’s a consistent pattern of relationships, and an obvious, simple explanation for the motivations underlying these relationships, it seems to me rather bizarre to eschew any discussion of motivations. Contrary to Kling’s implication, it’s an “empirical question” that doesn’t involve “specious reasoning.” General explanations of motivations aren’t perfect. They may be wrong in some specific cases. Still, they’re an excellent guide to how politicians behave.
Second, politicians’ motivations are directly relevant to political choice. Sure, politicians are to some extent constrained from acting on their underlying motivations by the political system. But they also have a fair degree of latitude to take policy decisions independent of constraints; that’s why we elect them. Politicians’ underlying motivations are the best predictor of their future behavior; the more we know about these motivations, the better. When we vote, we’re not evaluating past policies as such; we’re evaluating how politicians are likely to behave in the future, if we give them elected office. Politicians’ previous policies can give us information about their likely future behavior but only to the extent that these policies reveal their underlying motivations. In game theoretic language, we can find out a lot about the “types” of actors we’re dealing with, from looking closely at the strategies that they choose.
And this is where Krugman comes in. He excels at exploding the type C arguments that the US administration is making, and in providing plausible, and in some cases utterly convincing, type M explanations for why the administration says one thing and does another. The US administration introduces tax cuts, and keeps changing its rationale for why these cuts are justified. However, once you look closely, it’s obvious that these cuts disproportionately favour the rich over the poor. It’s hard to escape the type M conclusion that the government is more motivated by helping the rich than by looking after the poor or the middle-income folks. If I’m a middle-income tax-payer, that’s information that I legitimately want to know. And I want experts like Krugman to tell me about the disconnect, and to join the dots.
This isn’t to say that Krugman is always right. For example, Kling quotes Krugman as saying that the Iraq war was driven by the administration’s desire to make gains in the midterm elections. If that’s what Krugman is arguing, he’s wrong (although I’m not at all sure that this is what Krugman’s trying to say). But Krugman’s explanation of the rationale for the tax cuts is dead on - and I’d like to see Kling try to make the counter-argument that Bush’s tax cuts were radically unconnected to his desire to shore up his base among wealthy Republicans. Furthermore, and this is the key point, motivations are absolutely relevant to how we should think about Republicans, and indeed, politicians in general. If Krugman is right, and I believe that he is, then we know that the current administration is willing to push through tax cuts favouring the rich, even when they don’t make any economic sense. And that gives us good guidance to the current crop’s future behaviour if we make the mistake of re-electing them.
Kling is wrong. Type M arguments are not only important; they’re key to our understanding of politics. There’s a big difference between politics and policy, and economists (or other social scientists) shouldn’t confine themselves to talking about the latter. We can’t assume that politicians are disinterested philosopher kings who will choose the “best” policy once the experts have reached agreement on what the “best” policy is, if indeed the experts are ever capable of reaching such agreement. Motivations count in politics: politicians are likely to be beholden to different interest groups, and to choose policies that reflect the demands of these interest groups. Krugman is doing a valuable service in using his expert knowledge of policy outcomes and processes to deduce the motivations underlying particular policy decisions. He should be applauded for it.
My education is clearly sadly lacking
Meanwhile, as a break from the hysterical, obsessive and politicised world of weblog disputes, I decided to have another look at an uncontroversial, scientific topic like John Lott’s research into gun control. And I discovered that I have been quite appallingly conned by two institutions that I thought I could trust. Instapundit has printed a letter from someone called Benjamin Zycher, a “Senior Economist”[1] at the Rand Corporation, supported by Raymond Sauer, a professor at Clemson University. Zycher says, and Sauer supports him in saying, that the Ayres and Donohue paper on Lott’s work is all wet.
Specifically, he says that:
To put it bluntly: Any undergraduate student receiving a B or better in introductory Econometrics would be able to pick the Ayres/Donohue work apart. This is for a number of reasons, the most fundamental of which is — and this is an error more appropriate for freshman Statistics 1 — that their own interpretation of their estimated coefficients is simply wrong. They discuss two variables purporting to measure the effect of concealed-carry laws, but then fail to understand that it is the joint effect of the two variables, rather than merely one of them, that represents the estimated effect in the model.
This is worrying to me for the following reason. I’ve taken not only an undergraduate course in Econometrics (for which I did indeed receive a B), but also a postgraduate course in Econometrics (for which I got a Distinction, yes, thank you, we’re all very proud). I got ‘em at two of the country’s most prestigious institutes of learning; the undergraduate one was paid for by my parents and the UK taxpayer, but the postgraduate one cost me a pretty significant chunk of my own money (£18,000 to be precise). And I don’t understand what the hell Zycher’s going on about. Have I been made the victim of grade inflation and fobbed off with a couple of noddy preparatory courses that wouldn’t get me a passing grade at Clemson University?
In the shoddy, downscale version of econometrics taught at British universities, the multivariate regression model exists to separate out the effect of different variables. The estimated coefficients and associated standard errors measure the partial effect of each variable taken separately. Least squares estimation (which is the technique Lott used) doesn’t have the characteristic of delivering coefficient estimates that have to be taken two at a time.
Furthermore, I’ve read the A ‘n’ D paper, and I don’t understand what is meant by saying that “they discuss two variables purporting to measure the effect of concealed carry laws”. Throughout the paper, they discuss the effect of the “Concealed Carry Dummy” and whether it’s significant or not. That’s one variable, not two. Ayres & Donohue discuss two types of models (trend models and level models), but that’s not the same as “two variables” in a model, and I don’t see how the two types of model taken together might show significant effects if neither does on its own.
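As an aside, here is what a claim about the “joint effect of two variables” would look like in practice. This is a minimal sketch of my own (invented data and variable names, not A&D’s or Lott’s actual specification): a level dummy and a dummy-times-trend term, tested one at a time and then jointly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
law = rng.integers(0, 2, n)            # hypothetical concealed-carry dummy
years = rng.integers(0, 10, n)         # hypothetical years since adoption
law_trend = law * years                # the dummy interacted with a trend
y = 5.0 - 0.05 * law - 0.02 * law_trend + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "law": law, "law_trend": law_trend})
res = smf.ols("y ~ law + law_trend", data=df).fit()

print(res.summary().tables[1])                 # the two individual t-tests
print(res.f_test("law = 0, law_trend = 0"))    # joint F-test of both at once
```

Two coefficients can each look insignificant on their own while a joint F-test rejects, which is presumably the kind of point Zycher is gesturing at; but nothing in least squares obliges you to read coefficients “two at a time”, which is the point made above.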
Mr Zycher, I think, overestimates his undergraduate students. He also overestimates his readers when he later writes: “There is no need here to delve into a mini-course on econometrics, however lacking for sleep your readers may be.”, because I’m both a chronic insomniac and apparently in dire need of a mini-course in econometrics. And he overestimates Lott’s co-authors when he goes on to say “Anyone interested simply can read the paper by Plassmann and Whitley, utterly devastating in its critiques of the Ayres/Donohue paper”, because the Plassmann & Whitley paper doesn’t include anything resembling the critique that he’s made himself. (It does have one rather bizarre argument that you should ignore large discontinuous jumps in Lott’s model because they aren’t so big if you smooth them out with the following year’s trend, but that’s not really the same thing).
I was never a good study in econometrics - I found it dull and difficult - but I don’t think I was ineducable. If someone read the Instapundit letter and understood it, I’d be very grateful for help. Otherwise, Prof Sauer may be wrong in stating that “And he [Zycher - dd] knows that his argument prior to making that statement, if incorrect, will [be] torpedoed in an instant by one of Lott’s critics who is also skilled at econometrics”. Speaking as one of them (I hope), it’s unlikely to be torpedoed by me any time soon because I don’t understand what it means.
[UPDATE]: I had hoped that the Ultimate Lott Trainspotter would be on this one and he is. As Tim says in comments, it’s likely that Zycher was referring to the “hybrid model” that A&D used in their paper. I personally think that the hybrid model is a terrible way to try to achieve anything in time series analysis, but the fact is that A&D only introduced the bloody thing for the purpose of addressing exactly the issue that Zycher might be raising. Anyway, read Tim’s post, it’s better than mine if you care about this sort of thing.
[1] “Senior Economist” put in scare-quotes because the meanings of titles vary from institution to institution and I don’t know exactly how senior an economist one would have to be to be titled thus at Rand. [2]
[2] I’m using this style for footnotes rather than superscripts because somebody told me that superscripts don’t fit into the CT template too well.
As a further antidote to the Paul Johnson rant, I thought I’d link to euro-cheerleader Philippe Legrain’s hymn to European dynamism in Prospect. There are one or two moments when Legrain has to turn up the volume in the hope that people won’t notice weaknesses here and there, but it is a pretty gutsy response to a certain widely-received view of Europe and America:
over the past three years, living standards, as measured by GDP per capita, have risen by 5.9 per cent in the EU but by only 1 per cent in the US. So says the IMF, an institution hardly biased against the US. An unfair comparison, perhaps, given America’s recent recession? Then look at how the EU and the US size up since 1995, a period that includes America’s late 1990s boom. While living standards in the US have risen by a healthy 16.1 per cent over the past eight years, they are up by 18.3 per cent in the EU. This is not a sleight of hand. Pick any year between 1995 and 2000 as your starting point, and the conclusion is the same: Europe’s economy has outperformed America’s.
It is true that the US economy has grown by an average of 3.2 per cent a year since 1995, whereas Europe’s economy has swelled by only 2.3 per cent. These headline figures transfix pundits and policymakers. But this apparent success is deceptive. Not only are US growth figures inflated because American statisticians have done more than their European counterparts to take into account improvements in the quality of goods and services, but the US population is also growing much faster than Europe’s. It has increased by nearly one tenth in the past eight years, whereas Europe’s population has scarcely grown at all. So although the US pie is growing faster than Europe’s, so too is the number of mouths it has to feed. Most people care about higher living standards, not higher economic growth.
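The arithmetic here is easy to check. As a back-of-envelope exercise (my calculation, not Legrain’s; the EU population figure is an assumption based on his “scarcely grown at all”), the headline and per-capita numbers are roughly mutually consistent:

```python
# Compound the headline annual growth rates over the eight years 1995-2003
us_gdp = 1.032 ** 8      # US GDP up about 29% in total
eu_gdp = 1.023 ** 8      # EU GDP up about 20% in total
us_pop = 1.10            # US population up "nearly one tenth" (from the quote)
eu_pop = 1.01            # EU population "scarcely grown" (assumed ~1%)

print(f"US GDP per head: {us_gdp / us_pop - 1:.1%}")   # ~17%, vs 16.1% quoted
print(f"EU GDP per head: {eu_gdp / eu_pop - 1:.1%}")   # ~19%, vs 18.3% quoted
```

The small discrepancies are what you’d expect from rounding, but the basic story - faster total growth, much faster population growth, slower per-capita growth - hangs together.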
I’ve spent the past couple of days at the second of a series of conferences with the title “Priority in Practice” which seek to bring political philosophers in contact with more gritty policy questions. It was good fun, there were some good papers and I learnt a fair bit. One of the interesting papers was by John O’Neill from Lancaster who discussed the controversial question of “contingent valuation”, which is a method by which researchers engaged in cost-benefit analysis attempt to establish a shadow value for some (usually environmental) good for which there is no genuine market price, by asking people what they’d be prepared to pay for it (or alternatively, and eliciting a very different set of answers, what they’d need in compensation for its loss).
Naturally, people often react with fury or distaste to the suggestion that they assign a monetary value to something like the preservation of an ecosystem. They think that just isn’t an appropriate question and that it involves a transgression of the boundaries between different spheres of justice or value. John had a nice quote to show that researchers have been asking just this sort of question (and getting similar tetchy responses) for rather a long time:
Darius, after he had got the kingdom, called into his presence certain Greeks who were at hand, and asked - “What he should pay them to eat the bodies of their fathers when they died?” To which they answered, that there was no sum that would tempt them to do such a thing. He then sent for certain Indians, of the race called Callatians, men who eat their fathers, and asked them, while the Greeks stood by, and knew by the help of an interpreter all that was said - “What he should give them to burn the bodies of their fathers at their decease?” The Indians exclaimed aloud, and bade him forbear such language. (Herodotus, Histories, III).
I suppose lots of people will have seen it anyway, but for those who didn’t it’s worth pointing out that Kevin Drum has an excellent but thoroughly terrifying interview with Paul Krugman.
An appropriately spine-chilling taster:
Train wreck is a way overused metaphor, but we’re headed for some kind of collision, and there are three things that can happen. Just by the arithmetic, you can either have big tax increases, roll back the whole Bush program plus some; or you can sharply cut Medicare and Social Security, because that’s where the money is; or the U.S. just tootles along until we actually have a financial crisis where the marginal buyer of U.S. treasury bills, which is actually the Reserve Bank of China, says, we don’t trust these guys anymore — and we turn into Argentina. All three of those are clearly impossible, and yet one of them has to happen, so, your choice. Which one?
I’m almost certainly spending too much time reading lefty American blogs, but I now have far more emotional investment in the result of the US Presidential Election in 2004 than I have in that of the next electoral flurry in the UK.
Daniel’s post on the Cancun trade talks explains that their failure was rooted in disagreement about restrictions on foreign investment and capital controls. This reminds me that it’s time you all re-read Pierre-Olivier Gourinchas and Olivier Jeanne’s paper “The Elusive Benefits from International Financial Integration,” which I blogged about a few months ago.
Apparently the Cancun ministerial conference of the World Trade Organisation has got to such an appalling standstill that they all decided to pack up and go home. And the interesting thing is that what killed it wasn’t EU intransigence on agricultural subsidies, but rather something called the “Singapore issues”; a set of proposals about foreign investment on which the developed world is more or less united. Which is really rather a scandal, but as I argue below, the good thing about the Cancun collapse is that it allows us to get the measure of the character of the WTO as an organisation.
The Cancun round was meant to be all about the rich countries giving something up for the benefit of the developing world. We were going to reduce farm subsidies and let the poorer nations compete in our markets on fairer terms. And as far as one can tell from the outside, real progress was being made on these issues before the round collapsed.
However, lower down the agenda (and off the media agenda entirely, perhaps because it didn’t offer all that many opportunities to bash the French) were a raft of issues grouped together under the name of Singapore because they were discussed there in 1996. There are four of them; “Investment”, “Competition”, “Transparency in Government Procurement” and “Trade Facilitation”. Of these four issues, the last two are not particularly controversial; what tore the Cancun summit apart were the first two, which India in particular had been campaigning to keep off the agenda for a long time. Here are a few questions and answers.
What are the investment and competition “issues”?
What are they is a fairly easy question to answer. They’re basically a set of proposals which would have the effect of making WTO rules apply to cross-border investment as well as to trade in goods and services.
What does that mean?
Effectively, that it would become illegal under the WTO not only to place tariffs on trade in goods and services, but also to place any restrictions on investment by overseas corporations, or to have any laws in place which had the effect of disadvantaging foreign companies compared to domestic ones.
Isn’t that the sort of thing the WTO is meant to do?
Absolutely not. The WTO is the World Trade Organisation, which was set up in order to facilitate trade in goods and services, something which more or less everyone agrees to be a general good. Free mobility of capital and investment is a much more controversial topic, mainly because the legal procedures needed to establish it would be much more invasive of national sovereignty, and because the benefits from liberalising capital flows are much less certain and significant than those from liberalising trade (for example, it’s not really consistent with the existence of nationalised industries, and it provides an easy channel for multinational companies to launch harassing cases against any domestic legislation they don’t like; to take a hypothetical example, the local Coke bottling plant could launch an action against a free school milk program for unfairly prejudicing their investment[1].). This used to be recognised before 1998, when there existed something called the Multilateral Agreement on Investment (MAI), a sort of sister to the WTO which was meant to negotiate, well, a multilateral agreement on investment. But the MAI folded in ‘98 due to lack of support. And a certain coterie of neoliberal types have been trying to bring it in through other means ever since.
So anyway, the state of play was that India and Malaysia were dead set against the Singapore “Son of MAI” proposals, the EU, Switzerland and Japan were dead set on them and the USA was happy to let the whole thing fall apart because they were not exactly mustard-keen on getting rid of agricultural subsidies anyway. The Africans decided that they couldn’t wear the Singapore agenda either, and walked out. So the entire Cancun talks fell apart.
I say that this provides a useful yardstick to measure the character of the WTO by because it brought face to face the two views of what the WTO is actually for. On the neoliberal side, we’ve heard for years that WTO is all about bringing the benefits of free trade and free markets to the poor of the world and allowing them to gain the benefits of “globalisation”. On the “anti” side, we’ve heard for years that the WTO is nothing more than a cynical exercise in attempting to subvert the democratic process of poor countries and forcing them to accept foreign ownership and control.
In other words, the neoliberals have said it was all about things like the agricultural subsidy proposals, while the antiglobos have said it was all about things like the Singapore issues. And when the two came head to head in Cancun, the Singapore agenda won. When push came to shove, the rich nations were not prepared to give an inch to the poor ones on agriculture unless they got their quid pro quo in the form of progress toward an agenda which has nothing to do with trade and everything to do with massively undermining the ability of democratically elected governments to set the terms on which the ownership of the means of production is decided.
On the basis that you can tell a lot about a person or an organisation from what it regards as negotiable and what it regards as a deal-breaker, those who suspected that the WTO was a ploy to force a political agenda down the throats of the third world would appear to have a point. It is going to take a heck of a lot for the WTO to win back the credibility it lost in Cancun.
(Finally, let it be noted that China, India and Malaysia, three of the four poster children for the “benefits of globalisation”, were in the group of countries dead set against the Singapore issues).
[1] Which is not to say that they’d win such an action, or that the Coca-Cola company specifically would ever want to bring one; Coke actually has a pretty good track record as a corporate citizen in the third world. But the legal framework would be there which would allow such an action to be taken, and lots of companies would be happy to use it unscrupulously.
In today’s FT, Samuel Brittan reviews John Gillingham’s European Integration, 1950-2003: Superstate or New Market Economy?. One interesting snippet, which I knew about but which deserves wider publicity:
Readers may be more surprised to find the name of Friedrich Hayek given as the source of the alternative neoliberal interpretation. For most of today’s self-proclaimed Hayekians view everything to do with the EU with intense suspicion. Indeed I was sufficiently surprised myself to look up some of Hayek’s writings on the subject. Although he played no part in the post war institutional discussion, he had written at some length on the problems of federalism in the late 1930s. Hayek was among those who believed that some form of federalism, whether in Europe or on a wider basis, was an important step towards a more peaceful world. In a 1939 essay, remarkably anticipating the EU Single Market Act, he argued that a political union required some elements of a common economic policy, such as a common tariff, monetary and exchange rate policy, but also a ban on intervention to help particular producers.
John Quiggin gives a modest defence of existence theorems in economics, one of the three real vices of economists according to Deirdre McCloskey.
Existence theorems, for McCloskey, are the archetypal example of ‘blackboard economics’, mathematical games yielding purely qualitative results that can be overturned with modest changes in assumptions. They were the high point of mathematical economics in the 50s and 60s … There are a wide variety of ‘impossibility theorems’ demonstrating the non-existence of index numbers with various properties [an area of research interest for John]. Familiarity with such theorems can save a lot of pointless effort, and they are therefore worth looking for. But an impossibility theorem is just the negative form of an existence theorem (or, if you prefer, an existence theorem proves the impossibility of the corresponding impossibility theorem).
This is a rather prosaic defence, that certainly does not justify the high status accorded to the kind of theory exemplified by existence theorems. But the argument can be pushed a bit further by considering the most famous impossibility theorem, that of Arrow who showed (roughly speaking) that no voting system having a set of seemingly desirable properties could work for all possible sets of voter preferences. This impossibility theorem precluded a lot of potential effort in designing ideal voting systems. [Emphasis added.]
This is a nice parallel. Actually, it’s so nice that it may prove more than John intended. (I absolve him of responsibility for what follows.)
Arrow’s Impossibility Theorem illustrates the point neatly. We begin with the assumptions. Roughly speaking, Arrow made a list of four criteria that reasonable people might think any method of aggregating people’s preferences ought to have. These conditions are (1) That the full range of everyone’s preferences be considered, (2) That if everyone prefers x over y then the group decision should as well, (3) That the position of x relative to y in the group preference depends only on the position of x relative to y in each individual’s preference, and (4) That there isn’t a dictator — i.e., someone who gets to have their preference enforced over everyone else’s. We then move forward to the impossibility result: there is in fact no method satisfying all four criteria. Any voting system would necessarily violate at least one of them. The proof is striking because the initial assumptions are so plausible, even weak, but they cannot all be satisfied together and so we find that the desirable result is impossible. And so we give up our quest for what we now realise is a chimera — the idea of a perfect method of aggregating individual preferences.
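Arrow’s proof itself is beyond the scope of a blog post, but the classic Condorcet cycle gives a concrete taste of why aggregation is hard. A minimal sketch (the standard textbook example, not Arrow’s own argument):

```python
# Three voters with the classic cyclic profile; each tuple ranks the
# alternatives from most to least preferred.
voters = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True, so the group "preference" is the cycle
# A > B > C > A. Majority rule has no dictator and respects unanimity,
# but on this profile it fails to deliver any coherent ordering at all;
# Arrow's theorem generalises exactly this tension.
```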
As John says, existence theorems are the negative form of impossibility theorems. The classic existence theorems in economics — such as those for general equilibrium, also due to Kenneth Arrow, along with Gerard Debreu — illustrate the point neatly. We begin with the result. Roughly speaking, Arrow and Debreu wanted to show that supply and demand could be in balance in all markets at once. We then move backward to the assumptions necessary to make possible such a result. These include (1) All individuals are perfectly rational, (2) All trades take place simultaneously and instantaneously, (3) There is perfect information about all markets for all products in all conditions both now and at any point in the future, (4) Money does not exist. With these (and other) assumptions in place, the existence of a general equilibrium can be proved. The proof is striking because the initial assumptions are so implausible, even absurd, but they must all be satisfied together in order for the desirable result to be possible. And so we give up our quest for what we now recognise is a chimera — the idea that our world could ever contain economies capable of general equilibrium.
Whoops. I suppose many economists wouldn’t take that last step along the road. For their own reasons, most economists do not treat existence theorems of this sort in the obvious way — i.e., as a kind of useful reductio ad absurdum, or at least ad ridiculum. I sometimes wonder why economists so rarely adopt this interpretation. (I hear the phrase “F-twist” on the breeze.) The most plausible interpretation of the results is, “Well, we got the desired result … But look at the assumptions we had to make to do it. Absurd. The result can therefore never obtain. QED.”
Update: John Quiggin responds. Maybe I was wrong about the realism of the Arrow-Debreu temporality assumptions, because his comments seem to have appeared pretty much instantaneously.
It may sound to the uninitiated as though science fiction conferences are bad places to go for insights into economics, but the uninitiated would be wrong. One of the more interesting sf phenomena of the last fifteen years or so has been the creation of a more economically literate science fiction, which gets away from the libertarian ‘competent man’ certitudes of much of the early writing in the genre. It seems to me that the Brits have pioneered this - Iain Banks, Charlie Stross, Ken MacLeod, China Mieville, Justina Robson, Paul McAuley come to mind - but notable Americans too (Steven Brust, Cory Doctorow and Neal Stephenson) have been guilty of economically sophisticated literature on occasion.
This came home to me at Torcon, where a well-attended and intelligent panel discussed the economics of abundance - if future scientific progress allows us to produce material goods effectively for free (as some sf writers postulate), then what happens to society? Iain Banks’ ‘Culture’ series is perhaps the best known sf take on this question; Banks sneakily describes a Communist utopia in terms which might well mislead the uninitiated into thinking that he’s a gung-ho libertarian. And Banks got frequent and deserved namechecks at the panel. Charlie Stross gave the standard take that economics is the science of choice under scarcity, and then launched into a discussion of what economics might have to say under conditions where scarcity didn’t apply (answer: not much). The panel, after some meanderings, more or less agreed that material abundance would lead people to displace their energies to achieving social status through positional goods and the like.
Which got me thinking about the difficulties of applying economic analysis to these phenomena - while economic reasoning can lead to some interesting insights about people’s struggle for social status, it also has some very clear limitations. Certainly, Gary Becker’s ‘strong’ programme of applying marginal analysis to social phenomena across the board hasn’t had enormous success. Even if people behave in a self-interested fashion in their efforts to grab status, this self-interested behavior doesn’t lend itself well to standard economic analysis. Why?
Both the late Mancur Olson and the still-very-extant Doug North[1] have remarkably similar takes on this. Social goods and political goods are difficult to analyse using economic tools, because they’re not easily measurable. As Olson points out, political and social goods tend to be indivisible. That is, it’s hard to divide them up into discrete amounts without changing the quality of the good in question. As Olson says, friendship (as against acquaintanceship) and marriage (as against prostitution) involve a certain amount of indivisibility - beneath a certain level of provision, the good becomes qualitatively different. This means both that it’s difficult to impossible to translate these goods into money, and that it’s bloody difficult to measure them. As North argues, this has rather fundamental implications for neoclassical economic theory, which tends to assume that it’s possible in principle to measure what it is that actors are exchanging. To put it bluntly, neo-classical theory can’t tell us much about choice under these circumstances.
What does this tell us about situations where material (measurable) goods are abundant? I reckon that two implications follow. First: as the Torcon panelists argued, human beings are dissatisfied sorts by nature - if they’re getting enough in the way of material wants, they’ll find other unrequited (social) needs to squabble about, so that they can vie for position. Economists and sociologists like Thorstein Veblen and Fred Hirsch would likely agree with this assessment. Second, the economists of the future aren’t going to have much that’s useful to say about choice under these conditions, unless they radically change their assumptions and tools. Someday far hence, the dismal science may be a thing of the past.
[1] Olson, Mancur. 1990. Toward a Unified View of Economics and the Other Social Sciences. In Perspectives on Positive Political Economy, edited by James E. Alt and Kenneth A. Shepsle. Cambridge: Cambridge University Press. North, Douglass C. 1993. What Do We Mean by Rationality? Public Choice 77: 159-62.
John Kay has a good column (from yesterday’s Financial Times) arguing that the crises on Britain’s railways and in the US electricity supply industry exemplify a more widespread failing affecting both public and private sectors: boosting revenues whilst neglecting the underlying assets.
… modern business depends on intangible factors that, for good reasons, are not measured on the balance sheet. Security of supply is one. But the loyalty of employees, the trust of customers and the quality of service are also assets that require investment and depreciate if not well maintained. Reducing these investments enhances earnings. Media companies could focus on producing clones of already successful works - and it would be a few years before their bored audiences turned away. Financial institutions could replace their customer service staff by sales people and call centres. And drug companies could reduce costs and obtain synergies through mergers - and today find their pipelines of new drugs narrower than they have ever been.
A week late and a couple of dollars short, here are my thoughts on the now defunct Policy Analysis Market. I’d note right up front that this “market” always looked suspicious to me; even when it was going, the website seemed to consist of precisely five flat, static HTML pages, and this for a website that was meant to be going live with active trading in October. Particularly since nobody seems to be at all clear on the details of what this market was meant to achieve (was it open to the general public? Only to specialists? Was it going to trade “assassination futures”? Or just derivatives on the EIU political stability indices?), let alone on its clearing arrangements, confidentiality clauses, etc, I rather suspect that the whole thing was disinformation from start to finish. That’s why I didn’t want to comment on it at the time.
However, I do want to comment on the fact that a number of bloggers analysed it in terms of Hayek’s concept of tacit knowledge and markets as information-creating social entities. Henry had an excellent first cut at trying to develop a more rigorous Hayekian analysis last week, but I’d like to take issue with some of his points and make a couple of my own about the characteristics of successful markets.
Just to start out and for the record, if this idea had been genuine, which I suspect it wasn’t, and if it had got off the ground, which it didn’t, it might not have been a bad thing. I’m not inclined to take seriously those critiques based on bubbles or based on supposed inefficiencies of market behaviour, at least not unless they have some explanation of why these are particular flaws of market behaviour, rather than general organisational pathologies of groups of homo sapiens. I am always in favour of people being made to put their money where their mouth is when they’re shouting off about questions of global importance, and as Henry notes in the piece linked above, one of the potential functions that the PAM might have served would be as a means for low-ranking CIA officials to signal their genuine opinions, rather than those which were politically correct at the CIA. This is obviously not a first-best solution; first best would be a culture of honesty at the CIA. But a working market would be a benefit if it had the effect of creating a “back channel” of honest communication.
But here immediately we see what the real problem with a working PAM would be; as commenter Dan Hardie pointed out on Henry’s post, it’s not actionable information. If the “Saudi Arabia might attack us” index suddenly goes through the roof, what do you do? I admit that the immediate precedents of US and UK policy have not been wholly heartening, but somehow I don’t think that even Tony Blair would have the brass neck to point to a rising price and volume trend on a chart of ten days’ trading action and ask us to commit blood and treasure to a war of aggression. These prices are just too much of a black box to be a decision support tool on their own. So, what would actually happen if the Saudi contract roofed it? Well, this would apparently be the trigger for “more analysis” by the CIA and similar agencies. With respect to all involved, we’ve seen what “more analysis” does for you and so far all we’ve got to show for it is a couple of sterile mobile labs, some aluminium tubes and the starting handle for a centrifuge hidden under someone’s rose bush. The point being that if the overall intelligence process is corrupt, even the addition of an honest indicator isn’t going to help matters.
And this can be generalised into a wider critique of the unthinking application of “market solutions” to problems of this sort, on Hayekian grounds[1]. One point on which I think I quite strongly disagree with Henry, and by extension with the people he was discussing, is that I don’t believe that Hayek’s discussion of “tacit knowledge” is relevant to the question of a Policy Analysis Market. The defining characteristic of Hayekian tacit knowledge is that it’s practical, non-propositional and local in time and space. I actually think it’s something approaching a category-mistake to suppose that anyone could be in a position to have tacit knowledge relevant to the question “Is the chance greater than 22% that the government of Saudi Arabia will face a coup attempt this year?”. It’s a question which demands an answer in the form of a proposition, not like the question “Is orange juice for September delivery too dear at $5 8/16 a contract?”, which demands an answer in the form of an action.
And this matters. Hayek’s view of a market as a knowledge-creating entity is one which actually sits pretty uneasily with such things as the efficient markets theory, arbitrage pricing and other strands of thinking about markets and information which rely on reading off the closing prices and using them as if they were propositional information about something else. In the sense in which Hayek uses it, a market is an information processing system because it takes tacit knowledge as inputs and has the co-ordination of human activity as an output. The actual prices and volumes traded are epiphenomena; in general, they match up reasonably well to events in the real world (which is how the Soviets were able to use price data from Western markets as an essential input into the planning process), but that’s not the point of a market, any more than it’s the point of a kettle to increase the humidity of your kitchen.
Why does this matter? Well, it suggests that the prices struck in a market will be informative only if the market is well stocked with buyers and sellers operating on the basis of their own tacit knowledge. And this is not just an obscure point of Hayek scholarship; it’s actually written into the rules of the Chicago Mercantile Exchange.
The Merc, and several other commodities exchanges, makes a distinction between “hedgers” and “speculators”. They do this for practical reasons; hedgers are allowed to run larger positions, because it is assumed that they are willing and able to take or make physical delivery of their contracts if necessary, and because they need to. But the distinction can also be made on sound grounds of Austrian economics. Consider the following sketch of a theory of the commodities market (to make it concrete, we’ll consider the grain futures market):
The market exists because of the hedgers; farmers are structural sellers of wheat contracts and bakers (etc …) are structural buyers. Both farmers and bakers are in the market because they want to fix their prices ahead of time in order to be able to make long-term plans about growing wheat or baking bread, and the futures market allows them to make these plans in the knowledge that they won’t be rendered unable to pay their debts because of sudden price movements in the spot market. Both the farmers and the bakers have plenty of (practical, unverbalised) tacit knowledge about grain, and the market price converts this tacit knowledge into a plan for co-ordinated action.
But experience has shown that it is pretty difficult to operate your market if you only have hedgers in it; in general, markets which don’t have speculators are illiquid. Speculators supply liquidity to the grain market, taking on the other side of the trades which the hedgers wish to make, in order that the market doesn’t have to wait until a hedger with an equal and opposite demand shows up. Speculators don’t in general have tacit knowledge of the underlying security (in extreme cases, nor do they want to; the old proverb “the stock doesn’t know you own it” comes to mind). Speculators assist the market’s functioning because they have tacit knowledge of the market; they have practical, nonpropositional information about the way in which liquidity is best provided to hedgers.
It follows from this that in order to be an efficient information-creating entity, a market has to have both hedgers and speculators. Although speculators are vital to the functioning of the market, you can’t have a market with nothing but speculators. And if you think about it, all the really successful “speculative” markets are ones in which the speculative activity clearly takes place in the context of a two-way market between hedgers. Commodities markets have structural demand from manufacturers and structural supply from primary producers. The stock market has structural demand (for stock) from people who want to save, and structural supply (of stock) from companies who want to raise money. The money market has structural demand from borrowers and structural supply from lenders.
There is nobody (to a reasonable first approximation) who has structural demand for more terrorism. The only people who have tacit knowledge of terrorists and would be considered to be on the long side of the market are terrorists, who would presumably not be material participants. The problem is the same with respect to “weather derivatives”, “catastrophe derivatives” and other such markets which (with due respect to the well-intentioned and often frighteningly intelligent people who try to make them work) have never taken off[2]. The problem is that one side of the market has no tacit knowledge, and so the market does not really perform its function as an information processing entity.
I actually think that as a predictive tool for whether a market is going to work or not, this simple model works pretty well. Commodities exchanges - work well. Money markets - work reasonably well, but can be upset by the government coming in as a big player with no tacit knowledge. Stock market - can work just fine, but didn’t in the 1990s as the corporate sector became a net buyer of stocks, leaving no tacit knowledge on the short side. Spread-betting “markets” - entirely speculative and notoriously subject to home town effects and long-odds effects. Hewlett-Packard’s internal market for forecasting sales - works fine and actually quite a good way to resolve what would otherwise be an organisational problem between salesmen (systematically interested in talking projections down) and engineers (wanting to talk them up). And so on. For the time being, then, I’ll assume that there was no great loss to society when the PAM bit the dust, as there is nobody who is systematically interested in the probability of terrorist attack being higher.
Update: As an example of the sort of thing I’m talking about, I’ve dug up this rather nice paper from the wonderful SSRN. It’s the latest contribution to the debate on orange juice futures as a tool for weather forecasting. Ever since Richard Roll’s widely misunderstood 1984 paper on the subject, there’s been an urban myth going round that the frozen concentrated orange juice (FCOJ) futures market provides a better forecast of the weather in Florida than the weather channel. Not only did Roll not find this, he actually thought he’d found that the variance in the FCOJ contract price was about five times greater than anything that could possibly be explained by weather movements. The paper I’ve linked above revisits the issue and is rather more favourable to the efficient markets thesis in this regard, but for our purposes, we only need to note two things. First, the market doesn’t care about “the weather”; it cares about whether it’s freezing or not, because that’s the only thing that matters for orange production (practical knowledge). And second, the market reacts to today’s temperature news; there’s no attempt here to defend the proposition that the futures market forecasts the weather.
Anyone who likes the same kind of films as me would never be fooled by the orange juice/weather forecast connection, by the way. As Eddie Murphy showed us in “Trading Places”, about 40% of the volatility in the Fall FCOJ contract takes place on the single day in October on which the USDA releases its preliminary orange production estimates. Obviously, this well-observed phenomenon of the market is inconsistent with the FCOJ traders having any weather forecasting advantage over the USDA.
[1] I should probably point out that my interpretation of what Hayek was trying to say is not universally shared; it comes from Prof. John Gray of the LSE. However, several CT contributors were also taught by or worked with Gray at some point in the past, so in as much as there is an official editorial line on Hayek exegesis, “Grayek” is probably it.
[2] What trading takes place on these “catastrophe exchanges” usually takes place between reinsurance companies who have made bad underwriting mistakes and are trying to get out of them on the quiet.
Tyler Cowen’s got more of his Macroeconomics series up. It’s nothing like as bad as the monetary economics post that I objected to yesterday. Part Three on fiscal policy is OK-ish. I don’t agree with him on Keynes, and think his comments on deficits and interest rates are naïve (I include by citation Brad DeLong’s posts on this subject passim ad nauseam), but I can see how others would class my disagreements with it as probably political rather than technical. And Part Four on open economy macroeconomics is actually quite good, although the omission of any discussion of optimal currency areas is a bit of a lacuna. Part Two has one very serious error, but in being bad, it is actually good, because it’s clued me into what went wrong in the train wreck which was Part One.
First, the really, really serious mistake (I think that he’s misunderstood Paul Krugman’s views on Japan in this addendum, but it’s probably arguable either way). In Part Two, however, he makes the following argument in relation to the predictability of business cycles:
“[…] Most modern business cycles are simply bad luck. You can spend your whole life trying to divine the relevant patterns, but you are very likely to fail, no matter how smart you are. Many macroeconomists argue that the “time series” of most variables is statistically indistinguishable from a random walk […]”
He then links to this working paper, which to me shows that he has massively misunderstood the literature on this subject.
The point is that when econometricians talk about a “random walk” in GDP, they do not mean anything like the same thing as financial economists do when they talk about stock prices being a “random walk”. The random walk debate in econometrics is the debate over whether GDP and similar series have a particular statistical property: that shocks have a permanent effect (rather than dissipating over time as the series goes back toward a long term trend). It’s also known as the “unit root” debate, because it can be framed as a question about whether the equations which make up a model of GDP have a root on the unit circle (or something; it’s ‘king ages since I did this and memory is hazy). In any case, GDP being a “random walk” in this sense is not at all the same thing as being unpredictable, nor the same thing as saying that the business cycle is impossible to model; it’s just a matter of whether shocks to the series of interest die away or persist. It’s a loose usage of “random walk” to refer to something which isn’t random, which is why some people prefer to say “unit root”.
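A ten-line simulation (my own illustration, not taken from the working paper linked above) makes the distinction concrete: a unit root is a claim about the persistence of shocks, not about unforecastability.

```python
import numpy as np

def ar1(rho, shocks):
    # Simulate y[t] = rho * y[t-1] + shocks[t], starting from y[0] = 0
    y = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        y[t] = rho * y[t - 1] + shocks[t]
    return y

rng = np.random.default_rng(1)
base = rng.normal(0, 1, 200)
shocked = base.copy()
shocked[100] += 5.0                    # one extra shock at t = 100

for rho in (0.8, 1.0):
    gap = ar1(rho, shocked)[-1] - ar1(rho, base)[-1]
    print(f"rho = {rho}: effect of the shock remaining 100 periods on = {gap:.4f}")
# rho = 0.8 (stationary): the shock has decayed to essentially nothing
# rho = 1.0 (unit root):  the full 5.0 is still there - it is permanent
```

Both series are equally forecastable one step ahead (next period is rho times this period, plus noise); they differ in whether the effects of bad luck ever wash out.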
Anyway, enough of that. At the end of the comment, Tyler recommends his own book “Risk and Business Cycles”. I dug up a few reviews of it (this one in the Quarterly Journal of Austrian Economics is quite good), and it seems clear to me what the problem is. Basically, Tyler’s got a view of the macroeconomy not too dissimilar from my own.
Both Tyler and I are quite a long way outside the mainstream of neoclassical economics. He’s basically an Austrian, I’m a Post-Keynesian. And in fact, Tyler’s particular brand of “New” Austrianism is very close to Post-Keynesianism indeed. Specifically, he rejects the key Austrian premise that recessions and malinvestments are always caused by gaps between the “natural” and “money” rates of interest opening up (as a result of Big Bad Government, natch), encouraging investors to make mistakes about the time-preferences of consumers and invest in production technologies with the wrong returns period. Tyler takes from rational expectations macroeconomics the idea that it doesn’t make sense to assume that policy-makers can systematically fool the rest of the economy, and from modern portfolio theory and financial economics, the idea that one of the real determinants of investment is the equity risk premium (a concept I discussed here). It’s a “Risk-based Business Cycle” theory in which the business cycle is driven in an Austrian manner by cycles of malinvestment and liquidation, but these cycles do not have a monetary origin. To cut a long story short, his model of the business cycle is one which is more or less entirely driven by animal spirits on the part of entrepreneurs. That’s why he thinks that all these monetary factors are irrelevant.
It all makes sense now. Or at least it doesn’t, but it fits into place a lot better. The problem is that some people are good at translating their ideas for the layman (like Paul Krugman) and some aren’t. Tyler’s made what I consider to be a big mistake; he’s decided that he wants to put over his view of the macroeconomy, but he doesn’t want to get bogged down in thousand word explanations of the minutiae of why he doesn’t believe in monetary theories of the business cycle (contrast my own approach to similar questions in the posts I’ve linked in this discussion; I love getting bogged down in these discussions), so he ends up trying to take a short way with mainstream theory, and in my opinion oversimplifies mightily.
So it’s basically the fault of Volokh for using software (unlike our own Movable Type) which doesn’t allow extended posts. A long Tyler Cowen post on monetary economics might be really good, but it would make the rest of the Volokh conspiracy more or less impossible to read. And the dumbed-down short one … ends up being pretty bad. So I apologise for any negative impression I might have given about Tyler Cowen as an economist, while standing by substantially all of my comments, including the harsh ones, about the actual piece from yesterday. So the Volokh heavy mob can stop sending Crooked Timber those death threats now, please.
I am somewhat uneasy about writing this, as it is about the fourth post in recent weeks having a go at the Volokh guys, and one of quite a few on Tyler Cowen specifically, but I simply could not let this post pass without comment. It’s part one of a “Guide to Macroeconomics in Five Easy Lessons”, on monetary economics. I wholeheartedly support the idea of someone producing such a guide, but the actual statements made about monetary economics seem to me to be horribly confused. So much so that I’ve been reduced to commenting on it line-by-line; I wanted to write a proper response, but grew worried that by concentrating on my main disagreements, I would be implicitly endorsing some of the errors I didn’t single out.
I’ve edited this twice to moderate some of the more temperamental remarks, but the tone is still pretty angry, as I’m genuinely annoyed that this is being fed to laymen. As a result, I have perhaps been excessively inclined to pick nits; that’s how I get when I’m angry. I will accept the judgement of Brad DeLong as definitive on the question of whether I have been unduly harsh and will post an apology here if he thinks I have been. I pre-emptively apologise to Mr Cowen for the lack of civility inherent to the “fisking” genre; as I mention above, I tried and failed to come up with alternatives.
Macroeconomics in Five Easy Lessons, criticised
(plain text is the original, italics are DD comments)
Most of macroeconomics today is monetary economics.
Most of macroeconomics today is growth theory.
And the stock market is obsessed with the Fed. So the money topic is central.
Complete non sequitur. Central to what? Why does what the stock market cares about matter?
I am assuming you already know lesson one, which is printing [sic] lots of money leads to hyperinflation.
Lesson two is that the Fed can never do a good job in the long run, no matter how smart or responsible it may be. The Fed’s core dilemma is this: it can only control a tiny base, yet the broader superstructure is what matters for the economy.
Horrible abuse of the terms “base” and “superstructure” will be noted by anyone with a background in Marx. Also, as we’ll see below, “control” is being used ambiguously here. You might just as well say that I can control only a small steering wheel, yet the motion of a much bigger car is what matters for my daily commute.
Now bear with two paragraphs of arcane terminology.
It’s not actually that arcane, which is just as well, as no attempt is made to explain it.
The monetary base is currency plus reserves held at the Fed, the base of the pyramid. The Fed controls this directly and with great accuracy, if you want just think of speeding up the printing presses for more currency.
As long as you ignore the effect of the balance on the capital account on “reserves held at the Fed”!
(Don’t worry about the discount rate, when the Fed lends to banks, that is secondary and I will ignore it.)
It isn’t secondary at all, particularly if you’re operating on a standard of “what the stock market cares about”. I explained at length a while ago how the control of the overnight money rate can be used to exercise a significant degree of control of the entire yield curve.
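To spell out the mechanism in the crudest possible terms: on the (loose, premia-free) expectations view of the term structure, an n-year yield is roughly the average of the overnight rates the market expects over those n years, so credible control of the expected overnight path is control of the whole curve. A toy sketch, with an entirely made-up expected rate path:

```python
import numpy as np

# Assumed path of expected overnight (Fed Funds) rates, in % per year
expected_overnight = np.array([1.0, 1.5, 2.0, 2.5, 3.0])

for n in range(1, 6):
    # n-year yield ~ average expected overnight rate over n years,
    # ignoring term premia and compounding for simplicity
    print(f"{n}-year yield ~ {expected_overnight[:n].mean():.2f}%")
```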
But currency is not most of the action. Typically the Fed buys more or fewer short-term T-Bills, and deals with banks (“open market operations”).
What is the purpose of open market operations? To maintain the “effective” Federal Funds rate at or near the “target” Federal Funds rate. In other words, this “monetary base control” is entirely subservient to the aim of maintaining the “secondary” discount rate target. Think, man, or at least open a copy of the Wall Street Journal once in a while. When Alan Greenspan walks out of a FOMC meeting, what does he announce? The monetary base target for the month, or Fed Funds?
In contrast, consider what we call M2, a broader monetary aggregate. M2 contains, among other things, demand deposits. Banks lend money by writing extra zeros into the accounts of their borrowers. The Fed can influence this process (more on this below) but cannot control it with any great accuracy.
Cannot, because it chooses not to; it targets the interest rate, not M2. The Bundesbank controlled monetary aggregates for years, with great accuracy. Banks do not lend money by “writing extra zeros”; they have to borrow the money first, from someone who is prepared to lend it to them. And this is how the Fed (as “lender of last resort” to the money market) gets its traction.
Milton Friedman told us that “lags are long and variable.” That means that when the Fed changes the base, it has a poor idea how M2 (and other aggregates) will respond. The base could go up a little and M2 would go up a lot. Or sometimes M2 goes down. Or whatever.
Milton Friedman might have believed that, but this idea is not expressed by “lags are long and variable”. Long and variable lags is a statement about lags, not about whether the actual effect is variable. Factive use alert on “told”, by the way.
Now keep that all in the back of your mind for a minute or two. Let’s turn to deflation and inflation.
Lesson three is that both falling prices (deflation) and price inflation are usually bad (how is that for an oversimplification, albeit a correct one?).
Bloody terrible, if you want my honest opinion.
Deflation pisses people off, by making them accept lower nominal wages. Funny, but academics will scream bloody murder if you cut their wages by $500 in a year. Those same educated people find it OK if their nominal wage is constant but eroded by price inflation over time. Don’t try to understand these people, or tell them they should be like Silicon Valley, just accept them for what they are. The bottom line is that a shock deflation will put many people out of work and discombobulate the economy.
The effect of nominal wage stickiness is a very small part of what’s bad about deflation. What’s bad about deflation is that most debt contracts are denominated in nominal terms, so their real value increases when there is deflation. This can quite easily lead to a situation in which the debt cannot be serviced because the real burden has grown too great, leading to financial dislocation as productive enterprises are broken up to satisfy their nominal contracts. Furthermore, in a falling price environment, there is an incentive to postpone purchases of capital assets, which reduces investment and therefore reduces demand (the multiplier effect). Real wage effects are small in comparison.
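The debt mechanics reward a back-of-envelope illustration (my numbers, purely for flavour): a fixed nominal debt gets heavier in real terms every year that prices fall, without anyone borrowing another penny.

```python
debt = 100_000.0           # fixed nominal debt
price_level = 1.0
deflation = 0.03           # prices falling 3% a year

for year in range(1, 6):
    price_level *= 1 - deflation
    print(f"year {year}: real debt burden = {debt / price_level:,.0f}")
# After five years the same nominal debt weighs about 16% more in real
# terms - the Fisher debt-deflation channel in miniature.
```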
Inflation is bad too, though no one has a good account of why.
Milton Friedman?
Just believe me on this one (and don’t be tempted to send me all this blah-blah-blah on how inflation “distorts” prices, the claim might well be true but no one has ever produced a good model or account of it).
I am indeed not tempted to enter into a discussion with someone who has such powerful defences against the danger of learning anything. For those who are interested, the work of Robert Barro in the mid 1990s is definitive as far as I am concerned; single-digit inflation has no measurable effect on output, but above about 15% the effects on the real economy are both statistically and practically significant.
Maybe the worst thing about inflation is that if you have enough of it, sooner or later you have to end up with some deflation.
In actual fact, the bad thing about inflation is that it leads to higher real interest rates, as lenders need to be compensated for inflation risk as well as inflation. Thus reducing investment and therefore demand, blah blah blah. It is also in general associated with a loss of political control and general anarchy, which has an exhilarating effect on the animal spirits of artists (pet theory alert) but a debilitating effect on those of businessmen. Furthermore (a less well-publicised effect, but one which I regard as important), it increases the real cost of providing the interest free credit (in the form of 30-day payment terms, etc) which keeps the economy going.
Now let’s go back to the Fed. A good Fed will try to prevent both deflation and inflation. When inflation threatens, the Fed will tighten the money supply, hoping to stem inflation. When deflation threatens, the Fed will loosen the money supply, hoping to stop it.
I include by citation all my comments above; the Federal Reserve of the USA simply does not carry out its inflation policy by targeting the monetary base.
Sooner or later they will mess up.
Yes, people, the entire economic argument of this piece is contained in the words “sooner or later, they will mess up”. Aren’t you glad you’re getting an economics education?
I have already noted that controlling the base doesn’t give you much leverage over M2.
(an aside for anyone who cares about how monetary base targeting is carried out). In actual fact, if you want to carry out monetary base targeting, you do it either by legally enforcing reserve requirements, in which case, controlling the monetary base gives you precise control over M2, at the expense of ensuring that M2 is no longer the relevant monetary aggregate because people will find ways of making loans which aren’t counted in M2 (Goodhart’s Law), or you do it by credibly committing to keep raising the discount rate until any bank which is growing its asset book faster than your target will be running an unacceptable risk of being caught short of funds.
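The first route is just the textbook money-multiplier arithmetic; a stylised sketch (not a description of actual Fed practice):

```python
base = 100.0               # monetary base, in $bn
reserve_ratio = 0.10       # banks must hold 10% of deposits as reserves

max_deposits = base / reserve_ratio
print(f"ceiling on deposits: {max_deposits:.0f}bn")    # 1000bn
# With the ratio legally enforced and binding, the ceiling is exact;
# the catch, as noted above, is that lending then migrates into
# instruments that don't count as M2 (Goodhart's Law).
```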
Note that the stock market will second-guess the Fed every step of the way.
Duly noted, although what the stock market has to do with the supply of M2, or indeed how this sentence fits into the overall piece, is a bit of a mystery.
On average, the Fed cannot make things much better
You would almost believe that “Sooner or later they will mess up”, “I have already noted that controlling the base doesn’t give you much leverage over M2” and “Note that the stock market will second-guess the Fed every step of the way” formed premises of a syllogism which had as its conclusion “On average, the Fed cannot make things much better”, but you would be wrong.
, and is simply hoping to stop things from getting worse, due to its own inevitable mistakes. The power of the so-called Fed is simply power to mess up, or power to avoid messing up too badly.
This would be a powerful condemnation of the Fed, except that we have a missing alternative. The Fed “cannot make things much better” … than what? Than the Great Depression? Yes it could and should have done. There is decent evidence that post-war (more to the point, post-General Theory) recessions have been less frequent than they were before the era of active central banking; the economy has by no means been perfectly smoothed, but central banks have certainly accomplished more than just “not messing up too badly”.
That is a good chunk of what you need to know. Oh, yes, some of the time the central bank will “goose up” the money supply to reelect the incumbent. That is usually bad.
Specifics of this extremely serious accusation would have been interesting, if there were any, which there aren’t.
It is sometimes said that “the Fed controls interest rates.” True or false?
The Fed most emphatically does not set short-term rates directly in the literal sense.
This is the literal sense in which I do not set the speed of my car; I merely control the throttle of the engine. In other words, a literal sense of no relevance at all.
The Fed can push around the short-term bank lending rate, by increasing or decreasing the monetary base, by more or less trading money for T-Bills. More money usually makes the nominal short-term rate fall, as there is suddenly more liquidity. Less money makes it rise. This it can do with real accuracy; it simply keeps on trading until it gets the short-term rate it wants.
That would be the short term bank borrowing rate, Federal Funds. In actual fact, of course, this sort of Economics 101 supply and demand analysis has little real relevance to open market operations. The market knows that the Fed can force the rate to Fed Funds target, so the market rate jumps there without the actual trading having to take place. Fluctuations around Fed Funds target usually reflect the day-to-day rebalancing of cash inventories.
The Fed has very little control over longer-term interest rates. It can move them lots only by wrecking things (remember the late 1970s?).
Or by credibly committing to an anti-inflationary policy (remember the 1990s?). Again, I include by citation my discussion of the Vasicek model and Rubinomics, linked above.
Now just a little more on how the monetary base is linked to M2. The Fed can push around the monetary base, hoping to change short-term interest rates enough to affect the lending practices of banks in a predictable way. This is tough to pull off for several reasons, one being that banks may care more about the long-term rate than about the short-term rate. Another is that the bank may not care what the rate of interest is, if it feels it will never get its money back. So Greenspan really has a tough job.
There is the kernel of a sensible argument here, but it is being swamped with all this rubbish about M2. Once more, however, I beg to look at the empirical evidence on effectiveness of monetary policy, and include by citation the voluminous Federal Reserve working papers on the subject. It’s a tough job, but it’s no tougher than, say, open heart surgery.
You might wonder why the Fed doesn’t simply do nothing, and freeze the monetary base (George Selgin once pushed this idea). I take this proposal seriously, but it has two problems. First, we would all have to get used to regular deflation. Second, short-term interest rates jump around a lot. Banks would scream if the Fed didn’t smooth out some of those movements.
But what? Now freezing the monetary base is meant to simultaneously freeze all other monetary aggregates? This is not even consistent with the line of argument taken above!
Milton Friedman used to think that a steady rate of money growth was a good idea (he has since moved away from this position, as has almost everyone else). But it is not. If you control the growth rate of the monetary base, M2 still moves around. Controlling the growth rate of M2 is much harder, plus it leads to wrenching swings in short-term interest rates.
It is by this point very difficult to tell what is being asserted. Why are “wrenching swings in short term interest rates” bad if measured, controlled movements in short term interest rates have no effect?
So we’re back to the Fed goofing, sooner or later, no matter what.
As I said above, this is the entire argument of this piece, and it appears to me that no baseline has been set for measuring what counts as a “goof” by the Fed.
The worst thing they can do is to engineer a sudden deflation. At least they have learned not to do that, so we are pretty lucky.
My apologies to the specialists, and please note I am using M2 as a proxy for all broader measures of money.
I have no real scale for guessing whether my comments represent the quibbling of a specialist or not, but I have to say that I think that as a primer in monetary economics, this is dangerously inaccurate on a number of topics. There is a significant danger of an educated layman understanding less about the workings of monetary policy after having read it than he did before (thankfully, it does not actually discuss the subject of monetary economics at all, despite the title). I recommend as an alternative this Powerpoint presentation.
There’s a lot of buzz in the blogosphere about a DARPA project which aims to predict terrorist attacks, assassinations and coups, through creating a futures market, in which traders can speculate on the possibility of attacks; the NYT picks up on it too. Most of the commentary is negative, but Josh Chafetz likes the idea, and invokes Hayek.
As I explained at length in a post on Hayek last year, complex systems function by finding ways to aggregate diffuse knowledge into simple indices, which then allow actors in the system to take advantage of knowledge that they don’t actually have (e.g., no one knows exactly what Americans’ breakfast cereal preference orderings are, but by watching the information-aggregating index that we call “price,” producers can generally ensure that, when you go to the supermarket, you’ll find the brand you want. Compare that to the shortages of some items and overproduction of others that centrally planned economies have produced). A futures market in terrorist attacks, while it sounds grisly, may help us to aggregate diffuse knowledge in a way that will prove superior to expert knowledge.
Seems to me though that Chafetz is wrong. As Chafetz suggests, Hayek makes some rather interesting arguments about the ability of markets to pick up on diffuse, tacit knowledge, and make it usable. And Hayek’s not the only one saying this; Michael Polanyi and GLS Shackle develop roughly similar ideas. But the key point is that Hayekian markets aggregate knowledge. They don’t create it. People tend to be tolerably well informed about their own tastes, and buying habits. Markets will do a good job of taking this diffuse knowledge and communicating it to producers. The general public is likely to be rather less well informed about the likelihood of coups, assassinations and general alarums, and thus the sum total of their tacit knowledge is likely to be an incoherent mess, or a product of shared cognitive biases, rather than a useful index of information. And indeed, DARPA’s “market” is aimed at the general public; it seems that random punters can sign up to participate on a first-come first-served basis. Whatever minimal amount of useful information is in there will almost certainly be drowned out by the noise.
This isn’t to say that information markets of this sort can’t be useful - but they need to involve people who have useful tacit knowledge to begin with. One of the problems with hierarchy is that valuable information sometimes doesn’t make it from the bottom of the organization to the top, because middle management blocks it, or because the boss doesn’t want to know. Anonymous information markets can potentially solve this problem. They might allow the people at the bottom of the ladder, who often have the best sense of what is actually going on, to share their information anonymously. Assuming that their decisions to buy and sell are kept confidential, management can’t punish them for not sticking to the corporate line. For example, one could create an information market that would allow anonymous CIA analysts to express their skepticism about Iraqi WMDs by shorting WMD “stocks” without fearing reprisal from on high. This would actually be a rather useful exercise. I wonder why DARPA isn’t funding it?
Crooked Timber is lucky enough to have recruited the services of the late Sir Montagu Norman as an economics correspondent. He will be contributing occasional dispatches from beyond the grave. He opens his account with us with some pointed remarks on the Chinese Yuan …
Mr Greenspan and the Chinese Yuan
Strange days indeed. Mr Greenspan of the Federal Reserve is making extraordinary remarks about the Chinese (Renminbi) Yuan. He says that the yuan is undervalued, and that China “must” allow its value to float. If I may be permitted in death to drop the Sphinx-like discretion which marked my career, Mr Greenspan is talking bloody nonsense. As he well knows, the Chinese cannot be forced to do anything.
The issue is a familiar one to those of us of the older school. It is the question of the external, internal and temporal value of money; the triple problem of the exchange rate, the rate of inflation and the rate of interest. Under the laissez-faire theory which is currently orthodox in the West, a central bank acting alone is able to control at most one of these. But China is not a laissez-faire country.
Because it has the will to make a prices and incomes policy stick, and because it has the power to control the domestic banking system, China has been able for several years to maintain a low domestic interest rate, a fixed rate against the US dollar and a stable (mildly deflationary) price level.
Of course, this means that China is able to take a free ride on the stimulative effect of a falling US dollar globally, and Mr Greenspan no doubt wishes that it would not. The period of the peg and cheap-money policy has coincided with one in which the USA has collectively decided to consume well beyond its means, ensuring that (as part and parcel of a general trade deficit), China has developed a large surplus on the current account with respect to the United States of America. As well as reducing the stimulative impact of Mr Greenspan’s own cheap-money policy, this has led to a situation in which China has become a large net acquirer of US dollar assets. Leading, not unnaturally, to a situation in which Mr Greenspan is perhaps concerned that China will choose one day to realise these assets, with a consequent serious financial dislocation.
A serious problem, then, but a problem for China? I think of my own experience in the 1930s, and I think not. Under a gold standard, the burden of adjustment falls obviously on the deficit country. Under a greenback standard, the burden is less obvious and perhaps even less onerous, but it is still the creditor country which holds the cards. If the USA is to meet its debts to China, then it must either one day run a surplus, or default, or inflate its debt away. China has no such obligation to the USA. I see no reason in theory or experience to make me believe that China’s acquisition of US dollar assets will “put its monetary system at risk”.
It is true that the current low interest rate may lead to “an investment boom” in the Chinese economy, or indeed, to what we apparently refer to as “overheating”. But the Chinese government controls the price level and the interest rate. Unlike a laissez-faire monetary authority, they can act to solve domestic problems without needing to adjust their exchange rate. There are surprisingly few constraints on the actions of a government with a working incomes policy and a surplus on the balance of trade.
It appears to me that Mr Greenspan is bluffing a weak hand and knows it. America’s trade deficit has nothing to do with the Chinese yuan. It is a result of a cheap-money policy explicitly aimed at ensuring that the population as a whole consumes more than it earns. This policy keeps the wolf from the door in the near term, but makes it a mathematical inevitability that the USA will run a trade deficit with respect to somebody; if not the Chinese, then somebody else.
The fact that it is the Chinese rather than anyone else (or indeed, as well as everybody else) may be considered worrying. The People’s Republic of China is the United States’ only credible rival as an imperial power, and they appear to be aware that genuine hegemons tend to import capital, not export it. But this is politics, and I am only a central banker, albeit one with a long memory.
As a member of Professor von Mises’ Institute notes, his problem is not dissimilar to one that I faced myself, and in my handling of which I perhaps did not cover myself in glory. What a shame that Mr Greenspan has no Chinese equivalent of Ben Strong to help him out. The dogs bark …
Update: “Overvalued” corrected to “undervalued”
The new issue of Prospect includes a rather meandering piece by Samuel Brittan on baby bonds, basic income and asset redistribution. A central issue in this area is how to finance such proposals, and that’s something Brittan gets down to at the end of his article. He canvasses Henry George-style proposals for land taxation and also mentions inheritance taxes, but finally comes up with a somewhat odd suggestion:
… a very simple practical proposal, why not auction planning permission? Many local authorities have approached this piecemeal by making such permission conditional on the provision of local services such as leisure centres, approach roads and so on. But why not return this windfall to the taxpayer in the form of asset distribution and let citizens decide how to spend it?
An intriguing idea, certainly, but a great deal of UK planning law would need to change to implement it. For one thing, under the current system, more than one person or body can hold a valid permission to develop the same land. I can even apply for permission to demolish your house, though having planning permission to do so doesn’t entitle me to knock it down! Presumably, also, there would have to be some specific thing that was being auctioned: but planning permissions are specific to the purposes that the applicant intends. You want to build a cinema and I want to build a supermarket: you aren’t interested in my permission and I’m not interested in yours. And giving citizens assets as a result of such auctions doesn’t solve the problems that “planning gain” is usually used to address: your proposed superstore will generate more traffic, so we get you to pay to improve the roads, thereby covering costs which would otherwise fall to the local taxpayer. Still, there may be a workable idea here, but I can’t quite see it.
America has become a second-rate power. The trade deficit and the fiscal deficit are at nightmare proportions …. sorry, I was just memorising the opening paragraph of Gordon Gekko’s “Greed is good”1 speech. Though it did amuse me how his opening remarks had become topical again. “Wall Street” was on Sky TV at the weekend, and it reminded me that I’ve always wanted to do a particular kind of review of this film. I’m not really qualified to carry out a proper critique of it as a piece of work2, and the film probably deserves better treatment than to look through it for hilarious ’80s kitsch3.
But what I would like to do is make the following case; very few of the actions which bring down the whole house of cards on Bud Fox and Gordon Gekko were actually illegal under securities law at the time. In fact, I’d make a case that any sequel to this film would have to start with the premise that Gordon Gekko was acquitted on all charges of securities fraud.
To begin with, three important caveats. First, securities law has been materially tightened since the 1980s. I’m judging Gekko against the general principles which operated during that period rather than specific post-Boesky rules. Second, although I’ve had a go at checking out major points of difference between UK and US law (and there are many), there may be some transatlantic confusions remaining. And third, Gekko was charged with both securities fraud and tax fraud in the film; since Oliver Stone doesn’t show us any material details of his tax arrangements, I can’t comment on whether he was guilty on these counts.
In any case, as far as I can tell, the charges against Gordon Gekko would be grouped into six general areas. You can follow along with the script, linked above:
1. Trading in BlueStar Airlines (first operations). This is the firm where Bud Fox’s father (Carl) works. Carl has become aware, as the union representative, that the FAA is about to rule in BlueStar’s favour in a safety case, which will open up a few big routes to them. Bud passes on the information to Gekko, who orders twenty thousand shares.
Potential charge: Insider dealing. Gekko is trading on the information provided via Bud’s dad. This is material and non-public information, and it would probably be illegal to trade based on it today. However, under the standards prevailing in 1987, “inside information” had to be “information coming from an insider”, and an “insider” used to be defined quite narrowly; it would not have been at all clear that Carl Fox was an insider for securities law purposes. In any case, Bud Fox is certainly not a BlueStar insider; although Gekko actually finds out that his dad is a union rep, he would not therefore have been assumed to have known he was dealing on inside information. Given that at least two people (the comptroller and Carl Fox) have already blabbed about this decision, he would be within his rights to assume that, despite Bud telling him it was incredibly secret, it was actually common knowledge in aviation circles. This charge would never stick.
2. Trading in Anacott Steel. Gekko tells Bud Fox to come up with some more hot tips. He suggests that looking into the activities of Sir Larry Wildman would be a good source. Bud follows Wildman about for a day, then finds out that he has boarded a plane bound for Erie, Pennsylvania (the headquarters of Anacott). Bud passes the information to Gekko, who sees a chance to greenmail his old rival. He instructs Bud to buy a number of blocks of stock, then to call the Wall Street Journal with the codephrase “Blue Horseshoe loves Anacott Steel”.
Potential charges: Insider dealing, market manipulation. Frankly, I don’t see how the insider dealing charge could ever have got off the ground on this one. It is not illegal to follow somebody into an elevator, and it is not illegal to ask their chauffeur where they are flying to. And that is all that Bud does with respect to Larry Wildman. They work out that his target is Anacott by an act of deduction from his aircraft’s destination, which is public knowledge (in the sense that anyone who was in the right place at the right time could have got it; if you see a train crash, you do not have to wait until it appears on the evening news before selling the stock of the railway company). Insider information has to be specific information, and the fact that a man has flown to a town is not specific. This is the deal at which the film marks the beginning of the corruption of Bud Fox, and it is, as far as I can tell, completely above board and honest.
The market manipulation charge is a bit more dubious. I must say that I don’t like the way in which Gekko handles his relationship with the media, and I suspect that he would be caught today under the rules brought in to deal with “pump and dump” internet stock manipulations. But rules were significantly more lax in the 1980s, and people did indeed feed tips to the WSJ’s “Heard on the Street” column about the dealings of big investors. And note that there is no false information here; Blue Horseshoe is the name of Gekko’s trading company, and at the time Bud calls in the tip, it was substantially long the shares of Anacott Steel. It’s a grey area, but I suspect that it would have been dealt with via an SEC disciplinary arrangement rather than a criminal charge. The actual greenmail which annoys Sir Lawrence Wildman so much is not a criminal offence and never has been. It’s just not against the law to buy something that you think other people will pay you a lot of money for.
3. Trading in Fairchild Foods, Rorker Electronics and Morningstar. It’s not clear quite what goes on here, but it seems to me that Bud bribes the owner of a cleaning service to get a job which allows him to wander round the offices of his college friend Roger’s law firm at night. He basically xeroxes documents relating to forthcoming mergers and acquisitions and uses the information to trade on behalf of Gekko.
Potential charges: Conspiracy to commit theft, insider dealing. Bud Fox is clearly in way over his head here and is guilty of burglary and securities fraud. But how much of it can be pinned on Gekko? Not much, I’d say. Gekko calls Bud from his beach-house and says “You done good, but you gotta keep doing good. I showed you how the game works, now school’s out […] You don’t understand. I want to be surprised …astonish me, sport, new info, don’t care where or how you get it, just get it […] This is your wake-up call. Go to work”. In the context of the film, it’s clear what he means, but I think you would have a very hard time indeed in court proving that Gekko meant “Commit numerous counts of felony burglary” when he said “don’t care where or how you get it, just get it”.
Gekko clearly appears to be structuring his affairs in order to reduce the appearance of a paper trail linking him to Fox; he asks Fox to trade through a number of nominee accounts, and to act with limited power of attorney over the money he manages. But in the absence of any non-circumstantial evidence, there are a million and one reasons why he might do this. Indeed, Gekko might say that he specifically arranged his affairs this way in order to bring home to Fox that he alone would bear the consequences of any illegality, in order to ensure that he didn’t break the law. The point at work here is that Bud Fox is never an employee of Gordon Gekko (he keeps his job at the brokerage throughout the film), and so Gekko has next to no duty of supervision.
4. Conduct surrounding the tender offer for Teldar Paper. This is the centrepiece of the film, where Gekko makes the “Greed is Good1” speech. Teldar is a paper company that Gekko regards as poorly managed, and he wants to take it over to break it up.
Potential charges: None? Corporate raiding is not illegal, and Gekko is perfectly within his rights to buy a lot of stock in a company if he thinks he can run it better than the incumbent management. Gekko was buying stock in Teldar before he ever met Bud Fox, and by the time of the meeting, he is the largest single stockholder. Teldar Paper is “leveraged up to the hilt like some piss-poor Latin American country”, but this can’t possibly be Gekko’s fault; he isn’t in charge of it at the time of the shareholders’ meeting. Oliver Stone clearly put this scene in the film in order to point out that most of what Gekko does which Stone considers harmful, he does within the bounds of the law (this is also true of Ivan Boesky, the financier upon whom Gekko appears to be loosely based. Boesky was sent to jail on relatively minor insider dealing charges, but his real fortune and reputation were based on perfectly legal, if sometimes spectacularly ill-advised, junk bond issues).
5. Trading in BlueStar Airlines (second operations). Gekko decides to take over BlueStar in order to carry out a similar breakup. He uses Bud as his intermediary, in order to ensure that he will have co-operation from the unions for doing so. In the course of the negotiations, he makes a number of promises; he claims that he wants to turn the company round with Bud as President (quite why the company is in need of a turnaround is not clear; I thought that the FAA decision was meant to mean that it was all systems go for BlueStar). Bud later learns that Gekko has no intention of keeping these promises; he simply intends to raid the pension fund and then break up the company. As Gekko buys stock, Bud and the unions announce that they no longer stand by the informal commitments they have given, and the price plummets. This allows Larry Wildman to step in and take the company away from Gekko.
Potential charges: Insider trading, market manipulation, breach of takeover regulations. Again, Bud Fox is the villain here, and he has been scandalously unprofessional in his treatment of Gordon Gekko as a client. It is not against the law to carry out due diligence on an acquisition you are thinking of making, and that includes talking to union representatives. The only price-sensitive information Gekko has concerns his own intention to make a bid, and that is privileged information; it can’t be inside information by definition. What is highly illegal is for someone like Bud Fox to take that information to a third party (like Wildman), and having received it from what he explicitly knows to be a privileged source, for Wildman to act on it. I have no idea why Wildman is not indicted toward the end of the film; he has clearly been responsible for creating a false market in this firm in order to get it at a lower price (in particular, Bud encourages Gekko to part with a block at $17 when he knows that Wildman intends to tender $18; this is about as blatant as fraud gets).
I note that it is usually not illegal for the acquirer of a company with an overfunded pension fund to buy “minimum annuities” for the fund members and pocket the surplus, although this issue is academic as Gekko never gained control of BlueStar. Of course, these days, the likelihood of finding a US regional airline which still had a material defined benefit pension fund to plunder would be pretty low.
6. Trading in Brant Resources, Transuniversal and Fulham Oil. These are the companies that Gekko mentions to Bud (along with Anacott Steel) shortly before punching him on the nose. They aren’t mentioned anywhere else in the film, but given the context, I have no reason to believe that anyone could make a charge stick on these companies either.
So that’s basically the plot of the film. Gordon Gekko, an aggressive but perfectly legal financier, makes a mistake in hiring a thoroughly dishonest stockbroker to act for him, is made the victim of a scandalous securities fraud and then, for unknown reasons, is indicted along with the person who defrauded him, while the main conspirator in that fraud walks free. It’s a travesty of justice. Of course, Oliver Stone is probably aware of this, and is making the point to us, rather subtly, that the means by which Wall Street takes advantage of the working man are not in general by breaking the law … after all, given that money is power, why would they be?
Further information can be found in Doug Henwood’s excellent book also titled “Wall Street”. This is a great book; it works just as well as an introductory primer to finance as it does as a political tract. I keep a copy on my desk next to “Principles of Corporate Finance” by Brealey and Myers.
Footnotes:
1That’s actually the “Greed (for want of another word) is good” speech, for true nerds of this film.
2Though if I were, I’d suggest that Gekko is actually a sympathetic character. He’s a loving father, a patron of the arts with good taste, and he clearly has some degree of social conscience (the “how much is enough?” speech). It’s also quite a good question for your film studies class to ask “Is Gekko Jewish?” It’s never explicitly stated, but he attended CCNY (“not bad for a City College boy”), he doesn’t like WASPS (“they hate people and love animals”), and he has many elements of the Great Gatsby to him.
3Oh go on then. Look out for:
Friedrich Blowhard has discovered public choice economics a la Buchanan and Tullock, and decided that he quite likes it.
What a gas to see a group of smart people take many of my private musings of the past decade and set them out with more clarity than I ever gave them. I actually read a webpage outlining some of the notions of public choice while literally laughing out loud to see that I wasn’t the only lunatic in the insane asylum.
Friedrich is especially impressed with public choice’s description of how government tends to get captured by special interest groups, who gorge themselves at the expense of the public purse. He also suggests that public choice provides some interesting alternatives to the current political system.
Actually, it gives me more than hope; it gives me an idea. Isn’t it time for virtuous people everywhere to start thinking seriously about a system of governance that works better than democracy (or at least better than American representative democracy as it is practiced in 2003)?
Specifically, Friedrich points to various market-based or locality based means that are proposed by public choice economists as ways of limiting the redistributive elements of politics.
Now I’ve blogged in the past about my dislike for public choice economics. As far as I can see, the ideology usually drives the economic models, rather than vice versa. There are some exceptions (see below) but public choice usually starts from a bias. This is made clear by Charles Rowley, editor of the flagship journal Public Choice, who has defined the approach as a “program of scientific endeavor that exposed government failure coupled to a programme of moral philosophy that supported constitutional reform designed to limit government.” But then, Rowley has also claimed that political scientists like myself, who have failed to embrace public choice methodologies, are “scholars who had rendered themselves dependent on the subsidies of big government and whose lucrative careers in many instances were linked to advising … agents of the compound republic.” In plain words, Rowley is claiming that social scientists who disagree have been bought off; we have our snouts stuck in the same trough as the special interest groups.
One could respond to this in an equally ad hominem fashion (a quick glance at Rowley’s resume suggests that he’s not averse himself to hogging out on grants from right wing foundations). But this would distract from the interesting and important questions that Friedrich raises. Does public choice, as propounded by Rowley, Tullock et al., provide a good understanding of how government works? And does it provide an appropriate set of solutions to the problems of majoritarian democracy?
I suggest that the answer to the first question is a qualified yes, and to the second is a more-or-less unqualified no. There’s no doubt that the phenomena described by public choice - rent-seeking, capture of the political process by special interest groups etc - happen, and indeed are endemic to democracy. And the US has a particularly bad case of this, thanks in large part to its lax rules on the financing of politics. It’s also interesting to note that Tullock and his crowd are in perfect agreement with lefties like the political scientist Charles Lindblom about the corrupting intersection between big business and government. The problems identified by public choice are real ones.
But the solutions propounded by characters like Buchanan, Tullock and Rowley have their own, very considerable flaws. Public choice economists propose that government should be replaced, insofar as is possible, by market-type mechanisms. They argue that the kinds of choice permitted by free markets are inherently superior to the kinds of choice permitted by majoritarian democracy. Some argue, quite simply, that politics is all a horrible mistake, and should be replaced by so-called “incentive compatible mechanisms.” But they don’t have very good grounds for so doing. First of all, proposals for the replacement of politics by economic mechanisms arguably fail on their own terms: economic theory suggests that they remain inescapably politicized.* Second, public choice theory itself suggests that while majoritarian democracy is flawed, so too is any means of aggregating collective choices. Nobel prizewinner Kenneth Arrow showed this in his “Impossibility Theorem,” perhaps the single most important result in social choice and public choice theory. The theorem shows that no means of making social choices - democracy, the market, or any reasonable alternative to either - can be perfect; they all necessarily involve important tradeoffs.
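The flavour of Arrow’s result can be seen in its simplest ancestor, Condorcet’s voting paradox (a standard textbook example, not the full theorem): three perfectly rational voters, aggregated by majority rule, can produce an irrational group preference.

    # Condorcet's paradox: rational individual ballots, cyclic majority verdict.
    # Each ballot lists candidates from most to least preferred.
    ballots = [("A", "B", "C"),
               ("B", "C", "A"),
               ("C", "A", "B")]

    def majority_prefers(x, y):
        # True if a majority of ballots rank x above y.
        wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        return wins > len(ballots) / 2

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(x, "beats", y, "->", majority_prefers(x, y))
    # All three print True: the group prefers A to B, B to C, and C to A,
    # a cycle that no individual voter holds.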
What this suggests to me (and, indeed to Arrow, who’s a committed social democrat), is that simplistic prescriptions of “all markets, all of the time” don’t work. Majoritarian democracy has its problems; so too do unbridled free markets. Which isn’t to say that there’s no scope for reform. However, economic theory provides pretty well as much support for certain kinds of lefty retrenchment, as it does for the kinds of change that public choice economics (and Friedrich) would like. But that’s a subject for another post.
* On this, read Gary J. Miller and Thomas Hammond, “Why Politics is More Fundamental than Economics: Incentive-Compatible Mechanisms are Not Credible”, Journal of Theoretical Politics 6(1) (1994): 5-26. Miller’s book, Managerial Dilemmas: The Political Economy of Hierarchy (Cambridge University Press, 1992), provides a slightly different version of this argument, and is also the most accessible introduction to these questions that I know of.
The Economist has a long article asking whether or not companies are too risk averse to take proper advantage of new opportunities and a changing marketplace. Explaining the roots of corporate advantage is tricky stuff; conventional economic theory isn’t very good at telling us when efforts to innovate are going to be successful, and when they’re not. Economic sociologists do a slightly better job, but they still have difficulty in providing useful lessons for business people. Which opens the way for all sorts of cranks and quacks, who offer dubious nostrums for business success, with all the fervid enthusiasm of a 19th century medicine show charlatan. I’m referring of course to management “theorists.”
Now, I don’t want to dismiss them all out of hand - some interesting and serious work does get done at business schools. But management theory has more than its fair share of fakers, especially towards the populist end of the market. Grand, sweeping claims are made on the basis of very iffy and dubious research and case studies which aren’t chosen for any better reason than that they can be shoe-horned into the analytic boxes prescribed by the prevailing wisdom.
Which brings us back to the Economist article. It gives a guided tour of business school research on creative ways to deal with changing circumstances. The survey gives the impression that the field is a mess: different management theorists shouting bland (but somehow mutually contradictory) business prescriptions at each other. For example, one major five-year research program comes to the conclusion that successful companies are distinguished from their peers by four factors - flawless execution, a company culture based on aiming high, a structure that is flexible and responsive, and a clear and focused strategy. Bloody obvious stuff, in other words, which any halfway intelligent senior executive could jot down on the back of a beermat in five minutes.
Why is this stuff so bad? Two possible reasons I can think of. The first is that management theory is less interested in the patient accumulation of real knowledge than in providing training-fodder for future managers. Probably not all that much can be done about this - b-school professors get whopping salaries for training MBAs, and are likely disinclined to change their ways. But the second reason is a problem that they could address without too much difficulty - their research methodology is often just awful.
The Economist article gives an apparent example of this in its discussion of two highly influential books - Built to Last by Jim Collins and Jerry Porras, and Good to Great by Jim Collins on his own. Now I haven’t read these books, so I’m going on the Economist’s description of what they have to say - it could be that they’re innocent of all the charges laid below, and are serious pieces of research. They’re almost certainly a considerable cut above the Who Moved My Cheese? genre of populist business nonsense. But it still sounds as though these books commit a serious social-science sin - “selecting on the dependent variable.”
What does this mean? Social scientists tend to believe that if you want to find out if a causes b by studying different cases, you need to be quite careful in choosing the cases. For example, if you want to argue that risk taking leads to business success, you want to look at cases of firms that are risk takers, and firms that are risk averse, and you also want to have cases of firms that are successful, and cases of firms that are failures. If you only study successful risk-taking firms, you’re cooking the books. It could be that there are many more risk-taking firms that are failures out there than successes - but because you’ve only chosen to look at the successes, you have no way of knowing this. You can thus end up providing pretty bad advice.
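A quick simulation (purely illustrative, with invented parameters) shows how the trap works: even where risk-taking is worse on average, a study that samples only the winners will find it everywhere.

    import random

    random.seed(1)
    # Invented parameters: risk-takers have a lower mean return but a much
    # higher variance than cautious firms.
    firms = ([("risk-taker", random.gauss(0.00, 0.30)) for _ in range(5000)]
             + [("cautious", random.gauss(0.02, 0.05)) for _ in range(5000)])

    # A Built-to-Last-style design: study only the 100 best performers.
    winners = sorted(firms, key=lambda f: f[1], reverse=True)[:100]
    share = sum(1 for kind, _ in winners if kind == "risk-taker") / len(winners)
    print(share)  # close to 1.0: nearly every star firm is a risk-taker,
    # even though risk-taking produces the worse average outcome here.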
As far as I can tell from the summary in the Economist, Built to Last makes a mistake of just this sort, when it tries to figure out the source of business success by looking “at a small sample of companies (18) that had been persistently great over time. It suggested that endurance and performance were linked.”
Apparently, so does Good to Great, which looks at companies which spectacularly outperformed the stock market over a three-year period, and argues that the source of their success is quietly determined CEOs who believe in high standards.
The problem is that studies which concentrate on successful firms can’t tell us anything very useful about the differences between successful and unsuccessful firms, because they haven’t looked at the latter at all. We don’t know whether or not other companies, which also endured a long time, had below average performance. And very likely, many did (at least that’s the conclusion of another piece of research that The Economist also cites, which sounds a little more satisfactory). Equally, companies that tanked may also have had quietly determined CEOs who believe in high standards (which in any event sounds like a very hazy and subjective judgement on the researcher’s part - how can you really tell the one kind of CEO from the other?).
Case-studies play a big part in business-school education; they’re a useful way to make students work through the consequences of real life decisions. And they can, indeed, contribute to our more general stock of knowledge, if scholars use certain methodologies (such as ‘process tracing’) carefully. But management theory not only relies heavily on case studies - it all too often uses vague and wuffly concepts that reflect the pre-conceived biases of the researcher rather than grounded theories. Thus its rather extraordinary noise-to-signal ratio - it’s less suited to providing new insights than to providing new jargon to be wielded by McKinsey consultants in order to terrify staid managers into submission.
The Cato Institute has published a new edition of its annual report on The Economic Freedom Of The World, endorsed by Milton Friedman and not to be confused with about a million other such reports produced by rival thinktanks (I seem to remember that Heritage were the first to get into this game, but their index is based on subjective scoring and is really bad, while Cato’s is based on publicly available economic and survey data and is only quite bad, from a scientific point of view.)
Lovers of liberty will be pleased to know that the forward march of human civilisation continues unabated and we are all precisely 0.15% freer than we were at the time of the 2002 Report; the Index of World Economic Freedom apparently increased from 6.34 to 6.35 in 2001. Is it me, by the way, or is it pretty pathetic that such a self-important document is only produced with a two year lag? Anyway, as usual the dominance of the rankings by a bunch of incredibly rich free-ports and tax havens at the top and a bunch of horribly poor kleptocracies at the bottom, means that they can publish their usual diatribe about how “economic freedom is closely correlated to wealth, equality, development, relief from aching piles etc”. But the interesting thing to me is the extraordinary level of philosophical incoherence of the whole exercise.
As Kieran said, we’re not necessarily all Isaiah Berlin fans on this site. But some of us are, including me, and I’d like to make use of one of his biggest contributions; the distinction between negative and positive liberty. Basically it’s pretty intuitive; negative liberties are the absence of forcible restrictions on you doing something, positive liberties are the provision of the means for you to actually do something. As one might imagine, the libertarian half of the internet, on which Cato can reliably be located, tends to slag off positive liberties and claim that only negative liberties can legitimately be described as “Freedom!”.
So let’s look at the categories under which countries are scored. There are five major headings: (1) size of government (expenditures, taxes and enterprises); (2) legal structure and security of property rights; (3) access to sound money; (4) freedom to trade internationally; and (5) regulation of credit, labour and business.
Of these, 4 and 5 are pretty straightforwardly negative liberties, 1 is a negative liberty under a charitable interpretation which allows the taxation of income to be considered a form of restriction on liberty, but 2 and 3 are quite clearly positive liberties. A sound and stable medium of exchange (including a stable financial system), and an honest and impartial judiciary and legal system are things that the government provides for you, so that you can make decent use of your economic freedom. Now one might possibly argue that in a perfect anarchocapitalist world these could be provided by someone other than the government, but even granted that incredibly arguable proposition, Cato have given away far too much. Including the two positive liberties in their index of economic freedom is equivalent to the admission that economic freedom is not really worth anything unless you have the ability to make use of it. Which opens the door for everyone else to point out that “access to sound money and security of property rights” are all that you need in the way of positive liberties if you happen to be rich, but that if you aren’t then you also need education, basic healthcare and social security.
This is, if I remember, what Isaiah Berlin ended up concluding: that once you let in any sort of positive liberty, it is powerfully difficult to avoid ending up with a concept of liberty that includes any and all of the components of what people need to live a good life. Which is not necessarily a bad thing, but it does mean that the philosophical usefulness of the concept of “Liberty!” as a motivating force for theories of political morality is rather more limited than the rhetorical attractiveness of the word would suggest. As it stands, Cato have constructed an index of what rich people need in order to enjoy their money. Which is exactly what they’re paid to do, but it doesn’t really have all that much to do with economic freedom in anyone’s sense of the word.
I’m normally quite a fan of Tory blogger Iain Murray, but I couldn’t believe his most recent TechCentralStation column. Iain is attacking some proposal for restricting carbon emissions that is currently before the US Senate and is full of doom and gloom about the economic implications. He cites a report on the impact of the proposed legislation - the McCain-Lieberman “Climate Stewardship Act” - from the Energy Information Administration (a government agency of which he clearly approves). Here’s Iain’s take on their report:
When the system comes into operation, the economy would be severely affected resulting in job and output losses in the short-run. Because of this shock, real disposable income would drop by almost 1 percent per person by 2011, and would take fifteen years to return to 2000 levels. By 2025, the average person will have lost almost $2,500 as a result of McCain-Lieberman. The effect on GDP is even more startling, with the nation losing $507 billion in real terms over the next twenty-two years. By 2025, the country’s GDP will be $106 billion lower in real terms than it is today.
Whoah! That looks pretty bad. So bad, in fact, that I just couldn’t believe it. So I went to the EIA’s website and looked at the report for myself (available via the following, wonderful, URL: http://www.eia.doe.gov/oiaf/anal_emissions.html). It turns out that far from the US economy being $106 billion worse off than it is today (what! you mean the US economy which grew by 60% over the past two decades wouldn’t grow for 22 years because of one piece of legislation!!), it would be $106 billion worse off in 2025 than the EIA is currently projecting it to be (peanuts for a 10 trillion dollar economy). What we’re looking at here is the compound interest effect of a very slightly reduced growth rate over a very long period (the expected difference in GDP between the two cases is simply the difference between a 3.02% and a 3.04% average annual growth rate over 22 years).
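The back-of-the-envelope arithmetic, using the growth rates quoted above and a round $10 trillion starting GDP (so the precise EIA figures will differ a little), makes the point:

    # Compounding a 0.02-percentage-point growth difference over 22 years.
    gdp_now = 10e12                      # round figure for illustration
    baseline = gdp_now * 1.0304 ** 22    # 3.04% average annual growth
    policy = gdp_now * 1.0302 ** 22      # 3.02% average annual growth
    print(round((baseline - policy) / 1e9))       # about 82 (billions of dollars)
    print(round((baseline - policy) / baseline, 4))  # about 0.0043, i.e. less than
    # half a percent of a 2025 economy that has nearly doubled in either case.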
And the impact on individuals? As Iain says, a loss of $2,500. But the loss is spread over 22 years (I work that out at around 30 cents a day). Since what you’ve never had you’ve never missed, and GDP per capita is a pretty poor proxy for well-being, I’d put the importance of this loss at pretty close to … nothing.
And how do the costs and benefits stack up? Remember we’re talking about refraining from imposing significant environmental costs on others here; that is, refraining from free-riding and growing one’s economy by burdening others. A cost so small that it will hardly be noticed is set against, possibly, the benefit to some destitute Bangladeshi peasant who doesn’t get flooded out or even drowned. As Norman Lamont once said: “A price well worth paying”.
The NYT has a very interesting article today on AI and poker. A group of researchers in Alberta are using game theory to create automated ‘bots that can take on and beat most players. Now this was a little worrying for me; two months ago, I wrote a couple of rather confident posts suggesting that game theory wasn’t very helpful in solving complex and open-ended games like poker. Indeed, as Chad Orzel notes, human beings sometimes have difficulty in dealing with this sort of stuff too.
As it turns out though, the research project in Alberta reveals as much about the limits as the merits of game theory. The researchers have to make some radical simplifications in order for game theory to be useful at all, as they reveal in this report of their findings. First, they have to lop off most of the branches of the “game tree” - the map of possible moves, responses and outcomes - in order to make the game tractable at all. They assume that all possible hands can be classified as falling into one of six or seven “buckets” of broadly equivalent hands. What’s more, their program doesn’t even start to try to analyse the play pattern of its opponent; instead, it assumes that the opposing player is also playing as if she were a computer program - i.e. that she’s employing optimal strategies. This means that their program isn’t going to be able to exploit patterns in its opponent’s game, although it will tend to win in the long run against players who follow “strictly dominated” strategies (i.e., who make really bone-headed mistakes). The authors ran the program against an expert poker player, who seems to have started to win consistently as soon as he stopped trying to exploit the program’s (non-existent) human weaknesses.
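To give a flavour of the “bucketing” abstraction (a toy version only; the Alberta group’s actual scheme is far more sophisticated), the idea is to collapse hands into a handful of strength classes before building the game tree:

    # Toy abstraction: collapse a continuous hand-strength estimate
    # (0 = hopeless, 1 = unbeatable) into six buckets.
    def bucket(strength, n_buckets=6):
        assert 0.0 <= strength <= 1.0
        return min(int(strength * n_buckets), n_buckets - 1)

    for s in (0.05, 0.40, 0.62, 0.97):
        print(s, "->", bucket(s))
    # Every hand in a bucket is treated identically, which shrinks the game
    # tree enormously at the cost of throwing away finer distinctions.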
The researchers have made an interesting contribution - poker is a tough problem compared to, say, chess (which is much more rigid and deterministic, and therefore much more amenable to modelling). As they say, it’s a real achievement to have created a game theoretic poker player that isn’t completely outclassed by its human opponents. But there are good reasons to suspect that this line of research won’t go that much further - poker is too complex, and too dependent on subtle social interactions that are nearly impossible to model properly. The ‘bots don’t pose too much of a threat to you yet (as long as you remember not to draw to inside straights).