Two things as the seminar wraps up (Cosma Shalizi is writing a response to comments which I will link to, but which will be hosted on his own site, since CT plays badly with the math script that he uses). First, a pointer to blogger and sometime CT commenter Adam Kotsko’s review of _Red Plenty_ at “The New Inquiry”:http://thenewinquiry.com/essays/footnote-fairy-tale/. People should go read. Second, a thank you. Without disparaging individual contributions to other seminars, I think that this was the best seminar we’ve done here (obviously, this is my personal opinion; not at all necessarily endorsed by other bloggers here etc). Part of this is due to Francis – for writing the book (which couldn’t have been better aimed at CT’s sweet spot if it had been written for this purpose and this purpose alone), and for his lovely three-part response. Part of this is due to the many splendid contributors who wrote posts for the seminar. And part of it is thanks to the commenters – we’ve hosted many good conversations over the years, but this has been something rather special. I find it difficult to make it clear just how grateful I am to all of you who have participated in this. Seeing how it has worked out has made me very happy.
Update: Cosma’s post, responding to various points, is “here”:http://cscs.umich.edu/~crshalizi/weblog/919.html. Since Cosma’s blog doesn’t have a comments section, feel free to use the comments section of this post to discuss …
Pascal Leduc 06.14.12 at 10:28 pm
I have been nothing but immensely impressed with the book, the posts, and even the commenters. I think it’s only fair to end this Utopian discussion of a Utopian book with a Utopian comment.
Marx may have dreamed of hunting in the morning, fishing in the afternoon, rearing cattle in the evening and criticizing after dinner. And even if so much of this seems far away these days, it is good to see that our after-dinner activities are healthy, engrossing and enjoyable.
Barry Freed 06.15.12 at 12:59 pm
That was fantastic. Thanks to all involved.
John Quiggin 06.16.12 at 1:24 am
First up, while my contribution was minimal, I’m very pleased to have been part of this.
Second, having failed to comment on Cosma’s review at the time, I’m going to abuse this opportunity and raise what I think is a new point. As a preliminary, I endorse Cosma’s point that the computation of prices for an Arrow-Debreu general equilibrium is just the dual of the planning problem. So, I’m going to talk about the practice of setting up computable general equilibrium models, which I know something about, rather than Gosplan, where I know no more than anyone else who has read Red Plenty and understands the basic theory of linear programming.
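For readers who want the duality point spelled out, here is the generic textbook statement (a sketch only, not Kantorovich’s exact model): with activity levels $x$, output values $c$, input-requirement matrix $A$ and resource stocks $b$,

```latex
% Generic planning LP and its dual (a sketch, not Kantorovich's exact formulation).
\begin{aligned}
\text{Primal (planning):}\quad & \max_{x}\ c^{\top}x && \text{s.t. } Ax \le b,\ x \ge 0,\\
\text{Dual (valuation):}\quad  & \min_{y}\ b^{\top}y && \text{s.t. } A^{\top}y \ge c,\ y \ge 0.
\end{aligned}
```

The dual variables $y$ are the shadow prices (Kantorovich’s “objectively determined valuations”); by LP duality the optimal values coincide, which is the sense in which computing equilibrium prices and solving the plan are the same problem.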
Cosma says that we can’t get around the problem that ‘everything is connected to everything else’, and points to the fact that all productive processes require labor. Still, CGE modellers do in fact achieve pretty big simplifications by using a hierarchy of commodities and industries. So, rather than solving for a million products, a planner might solve for the allocation of resources across 1000 industries each of which is assigned (in the first-stage problem) a composite product. In the second stage, each industry is assigned a plan to produce 1000 products. The solution you get this way isn’t perfectly optimal, since you are ignoring a bunch of interdependencies, but the error can, I think, be bounded at a reasonable level.
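To make the two-stage idea concrete, here is a toy numerical sketch using scipy’s linprog (the single shared resource, the demand caps and all numbers are invented for illustration; this is not a claim about how any actual CGE model or Gosplan exercise was set up):

```python
# Toy sketch of a two-stage hierarchical plan: stage 1 allocates a single
# shared resource across industry "composites"; stage 2 lets each industry
# split its allocation across its own products. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_industries, products_per_industry = 4, 5
total_resource = 100.0

# Stage 1: choose composite output levels to maximize total value subject to
# the shared resource constraint (linprog minimizes, so we negate the values).
composite_value = rng.uniform(1.0, 2.0, n_industries)   # value per unit of composite
composite_cost = rng.uniform(0.5, 1.5, n_industries)    # resource used per unit
stage1 = linprog(c=-composite_value,
                 A_ub=[composite_cost], b_ub=[total_resource],
                 bounds=[(0.0, 30.0)] * n_industries)    # 30 = invented demand cap
industry_budget = composite_cost * stage1.x              # resource handed to each industry

# Stage 2: each industry plans its own product mix within its budget.
plans = []
for i in range(n_industries):
    value = rng.uniform(1.0, 2.0, products_per_industry)
    cost = rng.uniform(0.5, 1.5, products_per_industry)
    stage2 = linprog(c=-value, A_ub=[cost], b_ub=[industry_budget[i]],
                     bounds=[(0.0, None)] * products_per_industry)
    plans.append(stage2.x)

print("industry budgets:", np.round(industry_budget, 2))
print("industry 0 plan: ", np.round(plans[0], 2))
```

The interdependencies this ignores (industries competing for more than one resource, products of one industry feeding into another) are exactly the source of the approximation error just mentioned.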
That doesn’t get around the problem that you still need relative values for the million commodities, and the information you need for this is decentralized, whether those making the choices act as consumers or as citizens. It’s this part of the problem that I think is critical.
But if Cosma has time for yet another round, I’d be very interested in his response on the computational point I’ve sketched out.
Manoel Galdino 06.16.12 at 1:02 pm
It really was an amazing seminar. The sad part, for me, is that I read the seminar before reading the book. I’ve already bought it, but haven’t had enough time to read beyond the first two chapters.
About Cosma’s posts, there is one thing that he mentions only briefly, namely Web 2.0, and I’d like to develop some points related to it. I work at a company that has a large amount of transaction (purchase) data, and we have to predict customer behavior based on this data (credit card purchases). The aim, of course, is to design products and promotions that match customers’ preferences. Now, what we do isn’t Web 2.0, at least not entirely, I think. But we really do get feedback about people’s preferences, and it’s not based on price signals from the market (though prices are important as a way to register how much people spend, we can’t determine relative prices of goods and services in any sensible way), and I think that we do a good job.
Of course the computational power necessary to process all transactions in a big economy (assuming people no longer used paper money and made only electronic transactions) would be huge. Still, I think that planning combined with machine-learning techniques will enhance our capacity to understand people’s preferences and, thus, to optimize what, when and where to produce goods (I know, this says little about how to produce goods, but it’s still an advance).
So it’s not only on Web 2.0 that we learn what people like. We can know what people want and don’t want in the real economy by gathering data about their behavior in the offline world, and we know it better than if we just looked at which prices are rising or falling (the Hayekian way, arguably). If that’s the case, shouldn’t we start to think about creating a public institute or enterprise that would use this data, and improve the way we gather it, in order to increase the responsiveness of our political system? Why leave it to corporations alone to use this data? Why not think about other ways of gathering data in the offline world that don’t depend on having money, in order to get a less biased representation of people’s preferences?
Last, but not least, about innovation. Cosma talked about how people think they want one thing, and then discover that what they really wanted was not what they asked for in the first place. Now, think about any recommendation system (like Amazon’s, or Google Reader’s recommendation of sites and blogs based on the feeds that you and other users have subscribed to). One of their biggest challenges, I think, is that you should include a random component in the suggestions, to serve as a sort of mutation in an evolutionary process. This could help prevent us from getting trapped in local optima and promote innovation and disruption in people’s preferences and tastes, letting them discover new things that they like. That is, even if we get more efficient at planning, we still need some anarchy or random component to promote change in the system.
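A minimal sketch of that “random component” idea, in the form of an epsilon-greedy rule (the appeal scores and the epsilon parameter are invented for illustration; real recommenders are far more elaborate):

```python
# A minimal epsilon-greedy sketch: mostly recommend the item with the best
# estimated appeal, but with small probability suggest something at random so
# users can stumble on things they didn't know they wanted. Numbers invented.
import random

def recommend(estimated_appeal, epsilon=0.1):
    """Return an item index: explore a random item with probability epsilon,
    otherwise exploit the current best estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimated_appeal))        # explore
    return max(range(len(estimated_appeal)),
               key=lambda i: estimated_appeal[i])              # exploit

# Toy usage: appeal scores supposedly learned from past purchase data.
appeal = [0.12, 0.55, 0.31, 0.48]
picks = [recommend(appeal) for _ in range(1000)]
print({i: picks.count(i) for i in range(len(appeal))})
```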
It would be nice to hear Cosma and others explore these things.
Best,
Manoel
I’m not sure I’m being clear on what I’m saying. Apologies in advance if I’m not.
Henry Farrell 06.16.12 at 1:16 pm
Manoel – check out the piece that Cosma and I are writing on Cognitive Democracy (due for some substantial revisions), and, more directly, Cosma’s earlier paper on Social Media as Windows on the Life of the Mind, for some further discussion of this.
Alex K. 06.16.12 at 3:47 pm
“The solution you get this way isn’t perfectly optimal, since you are ignoring a bunch of interdependencies, but the error can, I think, be bounded at a reasonable level.”
Come on John, you can’t be serious about this stuff.
We know from Franklin Fisher’s results on aggregation (here is a summary, but Fisher wrote a book-length treatment) that in general you don’t even get a well-defined function (even if you allow for approximations) when you try to aggregate multiple production functions.
This is not some unorthodox, value-laden economic theory (the Cambridge capital controversy is a set of issues distinct from, but intersecting with, the production-function aggregation issues); it’s freshman-level math.
If you assume some set of prices, then you could make some sense of such aggregation — but finding prices from purely technical coefficients is the very problem you are trying to solve.
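A toy illustration of why naive aggregation is not even well defined (this is only meant to convey the flavour of the problem, not Fisher’s actual theorems): with two firms using different technologies, “output as a function of total input” depends on how the input is split between them, so no aggregate production function of the total alone exists unless you already know the optimal (or price-guided) allocation.

```python
# Toy illustration: two firms with different technologies. Total output from a
# fixed total input depends on how that input is split between the firms, so
# there is no well-defined "aggregate production function" of the total input
# alone. Functional forms are invented purely for illustration.
def firm_a(x):
    return x ** 0.5        # diminishing returns

def firm_b(x):
    return 0.3 * x         # constant returns to scale

total_input = 10.0
for share_a in (0.2, 0.5, 0.8):
    x_a = share_a * total_input
    output = firm_a(x_a) + firm_b(total_input - x_a)
    print(f"split {share_a:.0%} / {1 - share_a:.0%}: total output = {output:.2f}")
```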
The Raven 06.16.12 at 8:59 pm
Well… it may be a bit OT, but: Karl Marx Mastercard. The statue is in the rebuilt East German city of Chemnitz.
Plume 06.17.12 at 12:50 am
Not sure if this article has been mentioned already, but I found it brilliant and a wonderful companion to Red Plenty. It’s from The Baffler, by David Graeber.
Of Flying Cars and the Declining Rate of Profit.
Yes, I know, he and Crooked Timber had a rough time of it recently. But that should not be any reason to dismiss his work, which can be read fruitfully juxtaposed with Mr. Spufford’s book.
Manoel Galdino 06.17.12 at 12:52 am
I don’t think I’ve ever read this piece by Cosma you mentioned. I’ll look for it, thanks for the reference.
Manoel
meika 06.17.12 at 1:35 am
I’m just thinking aloud after reading the above,
John Quiggin’s: “Cosma says that we can’t get around the problem that ‘everything is connected to everything else’, and points to the fact that all productive processes require labor.”
If we reduced the need for labour in production, then we would reduce the need for management of that labour, and so reduce the need for governance of that management (and reduce the need for collective/corporate ownership too??). Human agency in the market can/will then be restricted to consumption patterns. Let faux-AI do the rest; the point is “to live off machines, not like machines”.
And thus to Manoel Galdino’s post and Alex K.’s reply to John Quiggin: with 3D printing’s disruptive potential arriving in an old barn near you soon, the only consumption patterns requiring notice will be the commodity feedstock to the printers, an aggregate worthy of econometric study.
If you look at the junk on http://www.dx.com (USB drives in the shape of pistols), it would be easy to come up with the equivalent of a word-salad generator for 3D designs of consumer tat. As there is now no need to actually make each design until someone chooses one and prints it out, if all goods are printed on demand a lot of predictive choice issues surely disappear as problems. If the consumer is really sovereign, then get the rest of the middlemen out of the way (labour, management, governance, corporates).
The only work left will be in the service industries, and the economy will run like the shell of a hollowed-out pyramid scheme if all those separate levels of decision-making (labour, management, governance, ownership) are left in place. Hopefully the free market will get rid of them.
John Quiggin 06.17.12 at 4:52 am
@Alex K. As you might expect, I’m reasonably well acquainted with aggregation theory for both consumers and producers. You might be interested in this paper, which is one of several considering whether international consumption data is consistent with a representative consumer model:
http://www.sciencedirect.com/science/article/pii/S0304387802001074
I don’t think this is an issue here, however.
Setting the Kantorovich planning problem up as an LP assumes linear technology and identical firms. You have to go a long way beyond linearity before you start running into theoretical aggregation problems (they’re entirely manageable if cost functions are homothetic, for example). The big problem is with the “identical firms” assumption, but disaggregation in the formulation of the planning problem makes life easier in theoretical terms, not harder. Cosma’s objection is that the resulting problem is computationally intractable.
wolfgang 06.17.12 at 7:48 am
@Manoel Galdino
>> Why not think about other ways of gathering data in the offline world that don’t depend on having money, in order to get a less biased representation of people’s preferences?
There are of course polls of all kinds that are taken regularly (from the (un)employment report to Gallup polls, etc.).
But it has been suggested that politicians following such polls (rather than their own ‘values’) is one of the problems of our democracies …
Matt 06.17.12 at 7:53 am
Not sure if this article has been mentioned already, but I found it brilliant and a wonderful companion to Red Plenty. It’s from The Baffler, by David Graeber. Of Flying Cars and the Declining Rate of Profit.
Yes, I know, he and Crooked Timber had a rough time of it recently. But that should not be any reason to dismiss his work, which can be read fruitfully juxtaposed with Mr. Spufford’s book.
This article pines for technological fantasies that might have been if The Man (in this case, bureaucracy and the neoliberal consensus) had not quashed them: pocket fusion reactors, phasers, teleporters, antigravity… The right strain of libertarian will tell almost the same story: we could have been living in space for 30 years with miracle technologies if our corrupt society didn’t punish the Henry Reardens. If Jules Verne’s stories could be brought to life, surely Gene Roddenberry’s could be too — and if they weren’t, someone’s to blame. Probably stuffy bureaucrats and ossified scientists who lacked the vision to pursue teleporters instead of DNA sequencing.
wolfgang 06.17.12 at 8:02 am
I would like to repeat a comment I made to the original post.
People usually do not know what they really want if you simply ask them.
Right now polls tell us that Greeks want to stay in the EU, but they don’t like the measures that the EU imposes on them.
If you ask US citizens you will find that they are worried about the deficit, but they prefer lower taxes and they do not want cuts to medicare, the military etc.
But finally they do (have to) vote for somebody on election day (usually they describe their choice as the lesser evil) and this is how politics can move forward.
It is the same at the micro level: when people finally do spend their money on something, that tells us in some sense what it is they really want (and they keep buying Windows PCs even though they all complain about them).
Alex K. 06.17.12 at 5:52 pm
“You have to go a long way beyond linearity before you start running into theoretical aggregation problems”
You only have to go a few miles outside most cities to find technologies with fixed costs — not a long way at all. In fact, even Spufford’s example of a hamburger stand involves fixed costs and non-constant returns to scale that make aggregation problematic.
Cosma’s method of argument involved being extremely charitable with the argument for central planning (allowing that the problem can be formulated as a linear programming problem) and then showing that even in this case the computational complexity is unmanageable. I think you (and Cockshott) are abusing this charity a bit when you use the special assumptions of the linear model to justify even further simplifications. After all, the Kantorovich linear programming model is not even able to deal properly with a hot-dog stand.
Alex K. 06.17.12 at 6:14 pm
When I read Cosma’s piece I thought of an obvious point that no one mentioned in this form:
When planning over several time periods, the planner should take into account the possible mistakes in executing the plan. Thus the time t+1 plan should include contingency plans for all the situations where the time t plan could go wrong. If this is so, then the size of the problem is exponential in the number of time periods, a complexity far greater than just that resulting from indexing commodities by time.
It now occurs to me that this line of thinking makes all types of approximation (including that proposed by John Q.) problematic. You may think that you are making a trade-off between precision and computational complexity, but if the plan is supposed to take into account the errors at early time periods, then the reduction in complexity is dubious: yes, the complexity of solving for one period decreases, but the spectrum of possible mistakes is larger and hence the complexity of the problem over many periods may well be even greater than the complexity of the original, non-simplified problem.
Of course, we would need the precise details of the considered simplifications and their error bounds, and the details of how mistakes in early period plan execution are dealt with in later periods — but it’s not obvious that this objection can be brushed aside.
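A back-of-envelope sketch of how fast the contingency tree grows (the branching factors and horizons below are invented purely for illustration):

```python
# Back-of-envelope sketch: if each period's plan must branch over k possible
# execution failures, the number of scenario-specific contingency plans grows
# like k**T over a T-period horizon. k and T here are purely illustrative.
for k in (2, 5, 10):        # contingencies considered per period
    for T in (5, 12, 20):   # planning horizon in periods
        print(f"k={k:2d}, T={T:2d}: {k**T:,} scenario branches")
```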
James B. Shearer 06.17.12 at 10:39 pm
Cosma’s reply does not answer my objection (which he may not have seen, since I made it in an Unfogged comment thread). Basically, you can’t assume without additional justification that the average-case solution time scales with problem size in the same way as the worst-case solution time. My unfogged comment was:
“As for 346, no, actually, the theoretically-best solvers (like interior-path algorithms) really are what you find in commercial software. There was a phase in the 1980s when the algorithms with provably-not-horrible performance for convex optimization weren’t practically feasible, but that’s long since past.”
In your Crooked Timber post you say:
“A good modern commercial linear programming package can handle a problem with 12 or 13 million variables in a few minutes on a desktop machine. Let’s be generous and push this down to 1 second. (Or let’s hope that Moore’s Law rule-of-thumb has six or eight iterations left, and wait a decade.) To handle a problem with 12 or 13 billion variables then would take about 30 billion seconds, or roughly a thousand years.”
So you are assuming the n**3.5 worst-case scaling is also the average-case scaling seen in practice. So, for example, a problem with 12,000 or 13,000 variables would take a factor of 30,000,000,000 less time (well under a nanosecond). This seems unlikely to me. The simplex algorithm was notorious for having terrible (exponential) worst-case performance but good performance in practice. You could not deduce the actual scaling in practice from the worst-case bounds. Perhaps for current algorithms the practical scaling and the worst-case scaling are in fact the same, but there is no logical requirement that this be so. So at best you are being unclear by conflating worst-case and practical-case performance.
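For what it’s worth, here is the arithmetic behind that extrapolation, with the contested n**3.5 exponent treated explicitly as an assumption rather than a measured fact:

```python
# The arithmetic behind the extrapolation, with the n**3.5 exponent treated as
# an explicit assumption: take a ~1-second solve at ~12.5 million variables and
# scale the problem size up and down by a factor of 1000.
n_base = 12.5e6                            # variables solved in ~1 second (the generous baseline)
assumed_exponent = 3.5                     # the contested worst-case-style exponent

for factor in (1e-3, 1.0, 1e3):            # 12,500 / 12.5 million / 12.5 billion variables
    seconds = factor ** assumed_exponent   # time relative to the 1-second baseline
    print(f"{n_base * factor:>16,.0f} variables -> {seconds:.3g} s "
          f"(~{seconds / 3.15e7:.3g} years)")
```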
John Quiggin 06.18.12 at 2:18 am
@AlexK as you can see from my post, I’m not defending central planning. But, both in reading Red Plenty and in thinking about planning and economic modelling in general, I want to understand where central planning fails (I expect the failure is overdetermined, and that there are multiple problems, any one of which would ensure an unsatisfactory outcome). Candidates are
(i) The problem is with the leaders who are supposed to implement the plan, but who actually follow their own political agendas (that’s the view you get from at least some of the stories in Red Plenty)
(ii) Individual households and firms won’t supply the right data (the main Hayekian criticism, also in some of the later chapters of Red Plenty)
(iii) The problem as posed is computationally intractable (Cosma)
(iv) The problem is ill-posed because of aggregation problems (AlexK)
I think (ii) is probably the hardest, but others may differ.
Phil (a different one) 06.18.12 at 10:41 am
Fantastic seminar. Really enjoyed reading all the articles.
On the optimisation problem & linear programming: IIRC the (relatively) simple complexity of solving a linear programming problem depends upon the cost of the output being a linear function of the inputs for each input. I’m fairly sure there’s a paper by Steve Keen (or possibly referenced by him?) that demonstrates that in the real world, the marginal cost of a given output can have almost any relationship with the cost of its inputs, and may be strongly non-linear over at least part (if not all) of the function. (IIRC Cosma mentioned that this result seems reasonably intuitive if you consider the practicalities of manufacturing any complex product.) The real world, then, seems to conspire against Kantorovich and all who followed him, sadly.
(As the denizens of Crooked Timber have pointed out, this reality also holes below the waterline any hope that “Capitalism” is automatically capable of reaching the optimal configuration of the economy, since it labours (ha!) under exactly the same set of constraints: Libertarians take note?)
Tim Wilkins0n 06.19.12 at 9:04 pm
JQ – I agree that problem ii is the most important and interesting area – and the main Hayekian objection going by cogency rather than prominence. But like other objections, ISTM that this tends to be approached via a double standard – devise some fiendishly difficult problem for central planners to solve, but be satisfied with any old crap where markets are concerned. One problem is that as soon as one introduces the basic idea of using people’s actually chosen trade-offs (some kind of revealed preference), market types immediately start (falsely) saying that this is cheating, that you are really just introducing the market by the back door (ergo leave markets as they are, or something). This too is a pretty standard move, seen also in supposing that cost-accounting = ‘shadow prices’ = an inferior attempt at mimicking market prices, etc.
+ I too would be interested to see some answers from Cosma Shalizi to the various objections raised in comments.
Tim Wilkins0n 06.19.12 at 9:15 pm
Also, capitalism has some major problems so far as eliciting info about preferences is concerned – notably 1. that suppliers expend a great deal of effort on trying to manipulate consumer preferences, 2. that ‘revealed preference’ has gone too far in the other direction so that whatever people end up buying is held to be what they prefer by definition/necessity/indefeasible presumption, 3. that the problem of interpersonal comparisons is dodged, and the result is a system of ‘one dollar one vote’.