From the category archives:

Economics/Finance

A pig in a poke

by John Q on March 2, 2015

I’m doing some work on the proposed Trans Pacific Partnership Agreement, currently being negotiated in secret by diplomats and business representatives from 12 countries. Two facts of interest:
(a) Australia’s Trade Minister Andrew Robb is claiming that a final agreement might be reached by mid-March. While this looks over-optimistic, it implies there is a near-final text
(b) Obama has sought “fast-track” negotiating authority, but there is no sign that this is going to happen soon, given that quite a few Democrats oppose the deal outright, and many Republicans are hostile to anything that would give Obama more authority.

The idea of “fast track” is that the Administration cuts a deal and Congress is bound (by having agreed to the fast-track rules) to give it a Yes/No vote, with no amendments. The assumption (I think) is that, if amendments were permitted, they would proliferate to the point where the legislation would fail to implement the agreement with the other parties, who might then back out. Of course, the result is that Congress is, in effect, buying a pig in a poke. Given the unlikelihood of an outright rejection of such a massive deal, they have to accept whatever Obama puts before them. The flip-side is that no individual Congressperson has to explain why they didn’t seek protection for whatever local ox might be gored by the deal: they can respond that they had no choice.

My question is: Suppose that the final text is agreed and made public before fast-track authority is granted. What would be the chances of Congress agreeing to a Yes/No vote, and what difference would it make? There are a lot of issues to be raised here about international relations, trade agreements and US politics, none of which I have a clear feel for. So, I’d be interested to hear what others think.

Who blinked?

by John Q on February 24, 2015

So, the latest round of the Greek debt crisis has ended in a typical European combination of delay and compromise, much as Yanis Varoufakis predicted a week ago. But in view of the obvious incompatibility of the positions put forward, someone must have given a fair bit of ground. The Greeks wanted continued EU support, and an end to the Troika’s austerity program. The Troika (at least as represented by German Finance Minister Schäuble) wanted Syriza to abandon its election program and continue with the existing ND/Pasok policy of capitulation to the Troika.

Put that way, I think it’s clear that the Troika blinked. The new agreement allows Syriza to replace the Troika’s austerity program with a set of reforms of its choice, focusing on things like tax evasion. Most of Syriza’s election platform remains intact. Of course, it’s only for four months, and none of the big issues has been resolved. But four months takes us most of the way to the next Spanish election campaign, hardly an opportune time to contemplate expelling a debtor country from the eurozone with utterly unpredictable consequences.

If the negotiations were a win for Greece (feel free to disagree!), how did it happen?

[click to continue…]

Reciprocity vs. Baseline Communism

by John Holbo on February 19, 2015

I was rereading David Graeber’s Debt over the weekend. The intervening two years, since our book event, have not caused it to be the case that Graeber doesn’t owe Henry an apology, after all. But the life of the mind goes on. We do not freeze intellectual accounts due to outstanding personal debts. That is to say, the free market of ideas is baseline communist, in Graeber’s sense. If I have a bright idea, I do not expect to be paid back, by those who receive it, in the form of two half-insights – or 100 comments, each containing but a groat’s worth of thought; none of that. (I expect intellectual credit, of course.)

My bright idea for the day is that I have no idea what the difference is between reciprocity and baseline communism. [click to continue…]

Asset sales and interest rates (wonkish)

by John Q on February 4, 2015

One of the strongest and most politically effective arguments for selling publicly owned assets, such as government-owned corporations, is that, by reducing debt, it will reduce the interest rate on government bonds. This is plausible enough, and not by itself a conclusive argument. The interest saving (including the benefit of lower rates on remaining debt) needs to be set against the loss of earnings. But it would be nice to know how large this saving might be.

The Queensland state election, just passed, provides something of a natural experiment. The LNP government proposed to sell $37 billion in public assets and repay $25 billion in debt ($18 billion associated with the enterprises to be sold, and $7 billion in general government debt). Going in with 73 of 89 seats, the LNP was almost universally expected to be returned. Instead, they lost their majority and will probably lose office. Although the result is not yet final, everyone is now agreed that asset sales are off the table.

So, we should be able to look at the secondary market for QTC bonds to see how much this surprise changed the interest rate demanded by bondholders (this is what’s called an “event study” in the jargon of academic finance). You can get the data from https://www.qtc.qld.gov.au/qtc/public/web/individual-investors/rates/interactive%20rate%20finder/!ut/p/a0/04_Sj9CPykssy0xPLMnMz0vMAfGjzOLdnX2DLZwMHQ383QwtDDy9DUIsPTwDDA2NTPULsh0VAVfZvz4!/

and I’ve included it over the fold (a bit of a mess as I can’t do HTML tables)

The data shows that interest rates have generally been tending downwards, as you would expect given the Reserve Bank’s much-anticipated cut. On the trading day after the election, rates on longer term bonds rose by between 0.05 and 0.1 percentage points (or, in market jargon, 5 to 10 basis points). But all of that increase, and more, was wiped out the next day when the RBA confirmed its cut. Overall, rates on QTC debt have fallen by around 0.25 percentage points since Newman called, and then lost, his snap election.
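The mechanics of the event-study comparison can be sketched in a few lines of Python. The yield figures below are made up for illustration (chosen to match the magnitudes described in the text); they are not the actual QTC data, which is over the fold.

```python
# Hypothetical illustration of an "event study" on bond yields: compare the
# yield change on the first trading day after the event with the next day's
# move. The yields below are invented for illustration, NOT the QTC data.

yields = {                 # date -> yield on a long-term bond, in per cent
    "2015-01-29": 2.95,    # last trading day before the election
    "2015-02-02": 3.02,    # first trading day after the surprise result
    "2015-02-03": 2.92,    # RBA confirms its rate cut
}

def change_bp(y0, y1):
    """Change between two yields, in basis points (1 bp = 0.01 pct points)."""
    return round((y1 - y0) * 100)

post_election = change_bp(yields["2015-01-29"], yields["2015-02-02"])
post_rba = change_bp(yields["2015-02-02"], yields["2015-02-03"])

print(post_election)  # a small positive jump attributed to the election
print(post_rba)       # a negative move that more than reverses the jump
```

The point of the exercise is just that the event-day move is read off the secondary market directly, then compared with moves on adjacent days to judge whether it was anything more than noise.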

To sum up, the surprise abandonment of one of the largest proposed asset sales in Australian history caused only a momentary blip in interest rates on Queensland government debt, immediately wiped out by a modest adjustment in monetary policy at the national level.

[click to continue…]

Teaching Marx to newbies, redux

by Chris Bertram on December 17, 2014

At a meeting on refugee rights the other night, one of the other activists asked me if I am a Marxist. “No,” I replied, “though I used to be.” I think the last time it was a vaguely accurate description of me was probably sometime in the late 1980s or early 1990s. It is hard to be sure. Not that I mind being called one, or think that being one is something to be ashamed of. In fact, I felt slightly sorry to disappoint my interlocutor. But things are what they are. So despite there being an irritating buzzing noise somewhere on the interwebs telling the world that I am a “Western Marxist”, I’m afraid I have to disclaim the title.

Nearly six years ago, I wrote the following as a suggestion for how to explain Marx to people (students) who were coming to him cold:

> Suppose I were lecturing about Karl Marx: I’d do the same thing. I’d probably start by discussing some of the ideas in the Manifesto about the revolutionary nature of the bourgeoisie, about their transformation of technology, social relations, and their creation of a global economy. Then I’d say something about Marx’s belief that, despite the appearance of freedom and equality, we live in a society where some people end up living off the toil of other people. How some people have little choice but to spend their whole lives working for the benefit of others, and how this compulsion stops them from living truly human lives. And then I’d talk about Marx’s belief that a capitalist society would eventually be replaced by a classless society run by all for the benefit of all. Naturally, I’d say something about the difficulties of that idea. I don’t think I’d go on about Pol Pot or Stalin, I don’t think I’d recycle the odd bon mot by Paul Samuelson, I don’t think I’d dismiss Hegel out of hand, and I don’t think I’d contrast modes of production with Weberian modes of domination (unless I was confident, as I wouldn’t be, that my audience already had some sense of those concepts).

Thinking about the matter again, I think I’d stick to those themes. Of course, then there’s the question of which texts would best illustrate those themes. It seems that some people believe those themes are best illustrated by looking at Marx’s early writings and that to do so would necessarily involve a distortion of Marx’s career by concentrating on early texts. I don’t see it myself. When Corey Robin, Alex Gourevitch and I were thinking about freedom and the workplace, a central text for us was the chapter on the buying and selling of labour power, from volume 1 of *Capital*, you know, the one about “the exclusive realm of Freedom, Equality, Property and Bentham.” Thinking about human nature, work under capitalism, and its contrast with truly human work, I’d be sure to look at “The Results of the Immediate Process of Production” (included as an appendix to the Penguin edition of volume one of *Capital*). And central to explaining the importance of Marx *to students of contemporary political philosophy* would be the *Critique of the Gotha Programme*. Of course the themes you’d focus on and the texts you use are inevitably shaped by what you’re trying to achieve, the audience you’re addressing and similar matters. A comprehensive survey of Marx’s work, such as the two-year-long course Jerry Cohen ran in the mid 1980s at UCL (and which I was lucky enough to attend), would have a very different content to a taster course aimed at newbies.

The socialisation of economists

by John Q on December 9, 2014

I’m following up Henry’s post on the superiority or otherwise of economists, and Krugman’s piece, also bouncing off Fourcade et al, with a few observations of my own, that don’t amount to anything systematic. My perspective is a bit unusual, at least for the profession as it exists today. I didn’t go to graduate school, and I started out in an Australian civil service job in the low-status[^1] field of agricultural economics.

So, I have long experience as an outsider to the US-dominated global profession. But, largely due to one big piece of good luck early on (as well as the obligatory hard work and general ability), I’ve done pretty well and am now, in most respects, an insider, at least in the Australian context.
[click to continue…]

Economists aren’t ‘superior’ just because

by Henry Farrell on December 2, 2014

Marion Fourcade, Etienne Ollion and Yann Algan’s forthcoming piece on the ‘superiority of economists’ is a lovely, albeit quietly snarky, take on the hidden structures of the economics profession. It provides good evidence that e.g. economics hiring practices, rather than being market-driven, are more like an intensely hierarchical kinship structure, that the profession is ridden with irrational rituals, and that key economic journals are apparently rather clubbier than one might have expected in a free and competitive market (the University of Chicago’s Journal of Political Economy gives nearly 10% of its pages to University of Chicago affiliated scholars; perhaps its editors believe that this situation of apparent collusion will be naturally corrected by market forces over time). What appears to economists as an intense meritocracy (as Paul Krugman acknowledges in a nice self-reflective piece) is plausibly also, or alternately, a social construct built on self-perpetuating power relations.

Unsurprisingly, a lot of economists are reading the piece (we’re all monkeys, fascinated with our reflections in the mirror). Equally unsurprisingly, many of them (including some very smart ones) don’t really get Fourcade et al’s argument, which is a Bourdieuian one about how a field, and relations of authority and power within and around that field, get constructed. As Fourcade has noted in previous work, economists’ dominance has led other fields either to construct themselves in opposition to economics (economic sociology) or in supplication to it (some versions of rational choice political science). Economists have been able to ignore these rivals or to assimilate their tributes, as seems most convenient. As the new paper notes, the story of economists’ domination is told by citation patterns (the satisfaction that other social scientists can take from economists having done unto them as they have done unto others is unfortunately of limited consolation). Yet if you’re an economist, this is invisible. Your dominance appears to be the product of natural superiority. [click to continue…]

Inequality, migration and economists

by Chris Bertram on November 8, 2014

Tim Harford has [a column in the Financial Times claiming that citizenship matters more than class for inequality](http://www.ft.com/cms/s/0/d9cddd8e-6546-11e4-91b1-00144feabdc0.html). In many ways it isn’t a bad piece. I give him points for criticizing Piketty’s default assumption that the nation-state is the right unit for analysis. The trouble with the piece though is the immediate inference from two sets of inequality stats to a narrative about what matters most, as if the two things Harford is talking about are wholly independent variables. This is a vice to which economists are rather prone.

Following Branko Milanovic, Harford writes:

> Imagine lining up everyone in the world from the poorest to the richest, each standing beside a pile of money that represents his or her annual income. The world is a very unequal place: those in the top 1 per cent have vastly more than those in the bottom 1 per cent – you need about $35,000 after taxes to make that cut-off and be one of the 70 million richest people in the world. If that seems low, it’s $140,000 after taxes for a family of four – and it is also about 100 times more than the world’s poorest people have. What determines who is at the richer end of that curve is, mostly, living in a rich country.

Well indeed, impressive stuff. And as Joseph Carens noticed long ago, and Harford would presumably endorse, nationality can function rather like the feudal privilege of earlier eras. People are indeed sorted into categories, as they were in a feudal or class society, that confine them to particular life paths, limit their access to resources and so forth. But there’s a rather obvious point to make which cuts across the “X matters more than Y” narrative, which is that citizenship isn’t a barrier for the rich, or for those with valuable skills. It is the poor who are excluded, who are denied the right to better themselves in the wealthy economies, who drown in the Mediterranean, or who can’t live in the same country as the love of their life. Citizenship, nationality, borders are ways of controlling the mobility of the poor whilst the rich pass effortlessly through. It isn’t simply an alternative or competitor to class, it is also a way in which states enforce class-based inequality.

r > g

by John Q on October 13, 2014

A standard piece of advice to researchers in math-oriented fields aiming to publish a popular book is that every equation reduces the readership by a factor of x (x can range from 2 to 10, depending on who is giving the advice). Thomas Piketty’s Capital has only one equation (or, more precisely, one inequality), at least only one that anyone notices, but it’s a very important one. Piketty claims that the share of capital owners in national income will tend to rise when the rate of interest r exceeds the rate of growth g. He suggests that this is the normal state, and that the situation prevailing for much of the 20th century, when r was less than g, was an aberration.

I’ve seen lots of discussion of this, much of it confused and/or confusing. So, I want to offer a very simple explanation of Piketty’s point. I’m aware that this may seem glaringly obvious to some readers, and remain opaque to others, but I hope there is a group in between who will benefit.

Suppose that you are a debtor, facing an interest rate r, and that your income grows at a rate g. Initially, think about the case when r=g. For concreteness, suppose you initially owe $400, your annual income is $100 and r=g is 5 per cent. So, your debt to income ratio is 4. Now suppose that your consumption expenditure (that is, expenditure excluding interest and principal repayments) is exactly equal to your income, so you don’t repay any principal and the debt compounds. Then, at the end of the year, you owe $420 (the initial debt + interest) and your income has risen to $105. The debt/income ratio is still 4. It’s easy to see that this will work regardless of the numerical values, provided r=g. To sum it up in words: when the growth rate and the interest rate are equal, and income equals consumption expenditure, the ratio of debt to income will remain stable.

On the other hand, if r>g, the ratio of debt to income can only be kept stable if you consume less than you earn. And conversely if r < g (for example in a situation of unanticipated inflation or booming growth), the debt-income ratio falls automatically provided you don’t consume in excess of your income.
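The debt dynamics described above can be sketched in a few lines of Python, using the post’s own $400/$100 example and letting the debt compound while income grows:

```python
# A minimal sketch of the debt dynamics in the text: a debtor owes `debt`,
# earns `income`, consumes exactly their income (so no principal is repaid),
# the debt compounds at r, and income grows at g.
# The $400 / $100 starting values are the post's own example.

def ratio_after(years, debt=400.0, income=100.0, r=0.05, g=0.05):
    """Debt-to-income ratio after compounding debt at r and income at g."""
    for _ in range(years):
        debt *= 1 + r      # unpaid interest is added to the debt
        income *= 1 + g    # income grows at rate g
    return debt / income

print(ratio_after(1))                   # r = g: ratio stays at 4
print(ratio_after(10))                  # still 4, year after year
print(ratio_after(10, r=0.05, g=0.02))  # r > g: ratio drifts upward
print(ratio_after(10, r=0.02, g=0.05))  # r < g: ratio falls automatically
```

As in the text, the ratio is constant whenever r = g, rises when r > g, and falls when r < g, regardless of the starting numbers.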

Now think of an economy divided into two groups: capital owners and everyone else (both wage-earners and governments). The debt owed by everyone else is the wealth of the capital owners. If r>g, and if capital owners provide the net savings to allow everyone else to balance income and consumption, then the ratio of the capital stock to (non-capital) income must rise. My reading of Piketty is that, as we shift from the C20 situation of r ≤ g to one in which r>g, the ratio of capital stock to non-capital income is likely to rise from 4 (the value that used to be considered one of the constants of 20th century economics) to 6 (the value he estimates for the 19th century).

This in turn means that the ratio of capital income to non-capital income must rise, both because the capital stock is getting bigger in relative terms and because the rate of return, r, has increased as we move from r=g to r>g. For example, if the capital-income ratio goes from 4 to 6 and r goes from 2 to 5, then capital income goes from 8 per cent of non-capital income to 30 per cent[^1]. This can only stop if the stock of physical capital becomes so large as to bring r and g back into line (there’s a big dispute about whether and how this will happen, which I’ll leave for another time), or if non-capital owners begin to consume below their income.
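The arithmetic in the example above is just r multiplied by the capital/income ratio, which a couple of lines make explicit:

```python
# Capital income as a share of non-capital income is simply the rate of
# return times the capital-to-income ratio; the numbers are the post's own.

def capital_income_share(r, capital_income_ratio):
    """Capital income as a fraction of non-capital income."""
    return r * capital_income_ratio

before = capital_income_share(r=0.02, capital_income_ratio=4)
after = capital_income_share(r=0.05, capital_income_ratio=6)

print(before)  # 8 per cent of non-capital income
print(after)   # 30 per cent of non-capital income
```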

There’s a lot more to Piketty than this, and a lot more to argue about, but I hope this is helpful to at least some readers.

[^1]: Around 20 per cent of GDP is depreciation, indirect taxes and other things that don’t figure in a labor-capital split, so this translates into a fall in the labor share of all income from a bit over 70 per cent to around 50 per cent, which is roughly what appears to be happening.

On Monday, 13 October 2014, at 11.45 am, the winner of the 2014 Nobel Prize in Economics will be announced (yes, we know it is officially the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel but that’s not the focus of this post). Some have said that the prize should go to Thomas Piketty, for his best-selling, important and highly influential book Capital in the twenty-first century. I, too, think this is a great book, for a variety of reasons.

But there is another inequality economist who is at least equally, and arguably much more, deserving of the Nobel prize, and that is Anthony B. (Tony) Atkinson. For close readers of Piketty’s work, this claim shouldn’t be surprising, since Piketty credits Atkinson with “being a model for me during my graduate school days, [and Atkinson] was the first reader of my historical work on inequality in France and immediately took up the British case as well as a number of other countries” (Capital, vii). In a recent interview with Nick Pearce and Martin O’Neill which was published in Juncture, Thomas Piketty calls Tony Atkinson “the Godfather of historical studies on income and wealth” (p. 8). So my hunch is that Piketty would endorse the claim that, if the Nobel Prize were awarded for welfare economics/inequality measurement, Atkinson should get it.
[click to continue…]

Thomas Piketty has power that heterodox economists never had

by Ingrid Robeyns on September 22, 2014

Long-time readers of this blog know that I am an apostate of the economics discipline. When I was 17, I wanted to study something that would be useful in helping make the world a better place. I thought that economics would meet that requirement, and it also seemed natural since I always had a strong interest in politics, in particular the question of how to organize society. For reasons explained here, I eventually gave up the hope that economics (as I studied it in the 1990s) could give me that knowledge, and diverted to political theory/philosophy and later also ethics, where I’ve been happy ever since.

But for the first time in many years, I felt a shiver of regret at having left economics – and that was when, in April this year, I started reading Capital in the Twenty-First Century, the best-selling book by Thomas Piketty. Reading Capital was a great intellectual adventure, while at the same time enjoyable to read (many have said the translator, Arthur Goldhammer, deserves part of the credit for the latter). It is hard for academic economics to evoke positive feelings in its readers, but Capital did so with me for at least two reasons.
[click to continue…]

Rawls, Bentham and the Laffer Curve

by John Q on September 8, 2014

The 1970s saw two important and influential publications in the long debate over justice, equality and public policy. In 1971, there was Rawls’s Theory of Justice, commonly described in terms like “magisterial”. Then in 1974, at lunch with Jude Wanniski, Dick Cheney and Donald Rumsfeld, Arthur Laffer drew his now-eponymous curve on a napkin. Of course there was nothing new about the curve: it’s pretty obvious that an income tax levied at rates of either zero or 100 per cent isn’t going to raise any money (or, in the 100 per cent case, not very much), and interpolation does the rest. What was new was the Laffer hypothesis, that the US at the time was on the descending side of the curve, where a reduction in tax rates would raise tax revenue.
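The interpolation argument can be made concrete with a toy curve. This is not Laffer’s own model: the linear shrinkage of taxable income with the rate is an invented behavioural response, chosen only so that revenue is zero at both endpoints and the curve has the familiar inverted-U shape with an interior peak.

```python
# A toy Laffer curve: revenue is zero at tax rates of 0 and 100 per cent and
# positive in between. The linear behavioural response is a made-up
# assumption for illustration, not an estimate of anything.

def revenue(rate, base_income=100.0):
    """Tax revenue when taxable income falls linearly with the rate."""
    taxable = base_income * (1 - rate)  # assumed behavioural response
    return rate * taxable

rates = [i / 100 for i in range(101)]   # 0.00, 0.01, ..., 1.00
peak = max(rates, key=revenue)

print(revenue(0.0), revenue(1.0))  # both endpoints raise nothing
print(peak)                        # the revenue-maximising rate in this toy
```

The Laffer hypothesis, in these terms, is the claim that the actual rate sat to the right of the peak; the Rawlsian position discussed below corresponds to sitting exactly at it.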

I’ve always understood Rawls in terms of the Laffer curve, as arguing in essence that we should be at the very top of the curve, maximizing the resources available for transfer to the poor, but not (as, say, Jerry Cohen might have advocated) going further than this to promote equality.

A couple of interesting Facebook discussions have led me to think that I might be wrong in my understanding of Rawls and that the position I’ve imputed to him is actually far closer to that of classical utilitarianism in the tradition of Bentham (which is, broadly speaking, my own view).

Facebook has its merits, but promoting open public discussion isn’t one of them, so I thought I’d throw this out to the slightly larger world of blog readers.

[click to continue…]

The Laffer Event Horizon?

by John Holbo on August 21, 2014

Reading Jon Chait this morning:

> With predictable fury, supply-siders have denounced this heresy [that Reagan-era supply-side policies might not be optimal today, even granting that they were in 1980]. You can get a flavor of the intra-party debate in columns appearing in places like Forbes or The Wall Street Journal, the latter of which retorts, “Good economic policy doesn’t have a sell-by date. (Adam Smith? Ugh. He is just so 1776.)”

The quote is a few months old, but – wow! – what an evergreen formula for zombie economics!

Good economic policy need not be formulated with reference to the economy.

I think maybe we need something a bit more science-fiction-y. Instead of the Laffer Curve, we have the Laffer Event Horizon, which is located in 1974, when Laffer sketched his famous curve on a napkin. After 1974, the economy fell into a black hole, for tax purposes. Specific facts about it could no longer cross the boundary of the Laffer Event Horizon, for policy purposes. A bit more precisely: within the black hole, all tax-like paths must be warped down and down, eventually to zero. Especially taxes on the rich.

Just a thought.

Job search, 40 times a month

by John Q on August 1, 2014

I got lots of very helpful responses to my recent post on the search theory of unemployment, here and at Crooked Timber. But it has occurred to me that I haven’t seen any answer to one crucial question: How many offers do unemployed workers receive and decline before taking a new job, or leaving the labour market? This is crucial, both in simple versions of search theory and in more sophisticated directed search and matching models. If workers don’t get any offers, it doesn’t matter what their reservation wage is, or what their judgement of the state of the market is. Casual observation and my very limited experience, combined with my understanding of the unemployment benefit rules, suggest that very few unemployed workers receive and decline job offers, except perhaps for temporary work where the loss of benefits outweighs potential earnings. Presumably someone must have studied this, but my Google skills aren’t up to finding anything useful.

And, on a morbidly humorous note, it’s a sad day for conservative politicians when efforts to bash the unemployed actually cost them support. But that seems to be the case for the LNP government in Australia with their latest plans, both expanded work for the dole and the requirement for 40 job applications a month. I’ll leave it to Andrew Leigh to take out the trash on work for the dole (BTW, his new book, The Economics of Just About Everything, is out now).

The 40 applications requirement has already been the subject of some amusing calculations. I want to take a slightly different tack. Suppose (to make the math simple) that the average job vacancy lasts a month. There are roughly five unemployed workers for every vacancy, so meeting the target will require an average of 200 applications per vacancy. The government will be checking for spam, so let’s suppose that all (or a substantial proportion) of the applicants take some time to talk about how they would be a good fit with the employer and so on. Dealing with all these applications would be a mammoth task. One option would be to pick a short list at random. But there’s a simpler option. In addition to the 200 required applications from unemployed people, most job vacancies will attract applications from people in jobs. A few of them may be looking for an outside offer to improve their bargaining position with their current employer (this is a big deal for academics), but most can be assumed to be serious about taking the job and to judge that they have a reasonable chance of getting it. So, the obvious strategy is to discard all the applications except for those from people who already have jobs. What if there aren’t any of these? Given that formal applications are going to be uninformative, employers may pick interviewees at random or may resort to the informal networks through which many jobs are filled already.
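The back-of-envelope arithmetic above is simple enough to state in two lines, using the post’s stylised figures (vacancies lasting a month, roughly five unemployed workers per vacancy, 40 required applications each):

```python
# Stylised figures from the text: each vacancy lasts a month, there are
# about five unemployed workers per vacancy, and each must file 40
# applications a month, so every vacancy receives 5 * 40 required applications.

unemployed_per_vacancy = 5
required_applications_per_month = 40

applications_per_vacancy = unemployed_per_vacancy * required_applications_per_month
print(applications_per_vacancy)  # 200
```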

Trying to relate this back to theory, the effect of a requirement like this is to negate the benefits of improved matching that ought to arise from Internet search. By creating strong incentives to present a convincing appearance of applying for jobs for which workers are actually poorly suited, the policy harms both employers and those unemployed workers who would be well suited to a given job.

Update: I found the following quote widely reproduced on the web:

> On average, 1,000 individuals will see a job post, 200 will begin the application process, 100 will complete the application, 75 of those 100 resumes will be screened out by the Applicant Tracking System (ATS) software the company uses, 25 resumes will be seen by the hiring manager, 4 to 6 will be invited for an interview, 1 to 3 of them will be invited back for final interview, 1 will be offered that job and 80 percent of those receiving an offer will accept it.
>
> Data courtesy of Talent Function Group LLC

Visiting the TFG website, I couldn’t find any obvious source. The numbers sound plausible to me, and obviously to those who have cited them. But, if the final number (80 per cent acceptance) is correct, then it seems as if the search theory of unemployment is utterly baseless. Assuming independence, the proportion of searchers who reject even three offers must be minuscule (less than 1 per cent).
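The closing claim is a one-line probability calculation: if 80 per cent of offers are accepted, and acceptance decisions are independent, the chance of a searcher rejecting three offers in a row is (1 − 0.8)³.

```python
# Checking the final claim in the text: with an 80 per cent acceptance rate
# and independent decisions, the probability of rejecting three consecutive
# offers is (1 - 0.8) cubed -- about 0.008, i.e. under 1 per cent.

accept_rate = 0.8

p_reject_three = (1 - accept_rate) ** 3
print(p_reject_three)  # about 0.008
```

Independence is, of course, the strong assumption here: a searcher with a high reservation wage rejects offers in a correlated way, which is exactly what the search theory being questioned would predict.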

Austrian economics and Flat Earth geography

by John Q on July 27, 2014

One of the striking features of (propertarian) libertarianism, especially in the US, is its reliance on a priori arguments based on supposedly self-evident truths. Among[^1] the most extreme versions of this is the “praxeological” economic methodology espoused by Mises and his followers, and also endorsed, in a more qualified fashion, by Hayek.

In an Internet discussion the other day, I was surprised to see the deductive certainty claimed by Mises presented as being similar to the “certainty” that the interior angles of a triangle add to 180 degrees.[^2]

In one sense, I shouldn’t be surprised. The certainty of Euclidean geometry was, for centuries, the strongest argument for the rationalist claim that we could derive certain knowledge about the world.

Precisely for that reason, the discovery, in the early 19th century, of non-Euclidean geometries that did not satisfy Euclid’s requirement that parallel lines should never meet was a huge blow to rationalism, from which it has never really recovered.[^3] In non-Euclidean geometry, the interior angles of a triangle may add to more, or less, than 180 degrees.

Even worse for the rationalist program was the observation that the system of geometry (that is, “earth measurement”) most relevant to earth-dwellers is spherical geometry, in which straight lines are “great circles”, and in which the angles of a triangle add to more than 180 degrees. Considered in this light, Euclidean plane geometry is the mathematical model associated with the Flat Earth theory.
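The spherical case can be made concrete with Girard’s theorem, which says the angle sum exceeds 180 degrees by the triangle’s area divided by the square of the sphere’s radius (in radians). The standard example is the octant triangle: two points on the equator 90 degrees apart plus the north pole, covering one eighth of the sphere and having three right angles.

```python
# Girard's theorem for spherical triangles: the "spherical excess" over 180
# degrees equals area / radius**2, measured in radians. The example below is
# the octant triangle (two equatorial points 90 degrees apart plus a pole),
# which covers 1/8 of the sphere and has three 90-degree angles.

import math

def angle_sum_degrees(area, radius=1.0):
    """Interior angle sum of a spherical triangle, by Girard's theorem."""
    excess = area / radius**2          # spherical excess, in radians
    return 180 + math.degrees(excess)

octant_area = 4 * math.pi / 8          # one eighth of a unit sphere's surface
print(angle_sum_degrees(octant_area))  # 270 degrees: three right angles
```

Note that a triangle on a sphere can never have an angle sum below 180 degrees; the deficit case belongs to hyperbolic geometry, the other non-Euclidean family mentioned above.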

[click to continue…]