Attention conservation notice: Over 7800 words about optimal planning for a socialist economy and its intersection with computational complexity theory. This is about as relevant to the world around us as debating whether a devotee of the Olympian gods should approve of transgenic organisms. (Or: centaurs, yes or no?) Contains mathematical symbols (uglified and rendered slightly inexact by HTML) but no actual math, and uses Red Plenty mostly as a launching point for a tangent.
There’s lots to say about Red Plenty as a work of literature; I won’t do so. It’s basically a work of speculative fiction, where one of the primary pleasures is having a strange world unfold in the reader’s mind. More than that, it’s a work of science fiction, where the strangeness of the world comes from its being reshaped by technology and scientific ideas — here, mathematical and economic ideas.
Red Plenty is also (what is a rather different thing) a work of scientist fiction, about the creative travails of scientists. The early chapter, where linear programming breaks in upon the Kantorovich character, is one of the most true-to-life depictions I’ve encountered of the experiences of mathematical inspiration and mathematical work. (Nothing I will ever do will be remotely as important or beautiful as what the real Kantorovich did, of course.) An essential part of that chapter, though, is the way the thoughts of the Kantorovich character split between his profound idea, his idealistic political musings, and his scheming about how to cadge some shoes, all blind to the incongruities and ironies.
It should be clear by this point that I loved Red Plenty as a book, but I am so much in its target demographic1 that it’s not even funny. My enthusing about it further would not therefore help others, so I will, to make better use of our limited time, talk instead about the central idea, the dream of the optimal planned economy.
That dream did not come true, but it never even came close to being implemented; strong forces blocked that, forces which Red Plenty describes vividly. But could it even have been tried? Should it have been?
“The Basic Problem of Industrial Planning”
Let’s think about what would have to have gone in to planning in the manner of Kantorovich.
- I. We need a quantity to maximize. This objective function has to be a function of the quantities of all the different goods (and services) produced by our economic system.
- Here “objective” is used in the sense of “goal”, not in the sense of “factual”. In Kantorovich’s world, the objective function is linear, just a weighted sum of the output levels. Those weights tell us about trade-offs: we will accept getting one less bed-sheet (queen-size, cotton, light blue, thin, fine-weave) if it lets us make so many more diapers (cloth, unbleached, re-usable), or this many more lab coats (men’s, size XL, non-flame-retardant), or for that matter such-and-such an extra quantity of toothpaste. In other words, we need to begin our planning exercise with relative weights. If you don’t want to call these “values” or “prices”, I won’t insist, but the planning exercise has to begin with them, because they’re what the function being optimized is built from.
- It’s worth remarking that in Best Use of Economic Resources, Kantorovich side-stepped this problem by a device which has “all the advantages of theft over honest toil”. Namely, he posed only the problem of maximizing the production of a “given assortment” of goods — the planners have fixed on a ratio of sheets to diapers (and everything else) to be produced, and want the most that can be coaxed out of the inputs while keeping those ratios. This doesn’t really remove the difficulty: either the planners have to decide on relative values, or they have to decide on the ratios in the “given assortment”.
- Equivalently, the planners could fix the desired output, and try to minimize the resources required. Then, again, they must fix relative weights for resources (cotton fiber, blue dye #1, blue dye #2, bleach, water [potable], water [distilled], time on machine #1, time on machine #2, labor time [unskilled], labor time [skilled, sewing], electric power…). In some contexts these might be physically comparable units. (The first linear programming problem I was ever posed was to work out a diet which will give astronauts all the nutrients they need from a minimum mass of food.) In a market system these would be relative prices of factors of production. Maintaining a “given assortment” (fixed proportions) of resources used seems even less reasonable than maintaining a “given assortment” of outputs, but I suppose we could do it.
- For now (I’ll come back to this), assume the objective function is given somehow, and is not to be argued with.
- IIA. We need complete and accurate knowledge of all the physical constraints on the economy, the resources available to it.
- IIB. We need complete and accurate knowledge of the productive capacities of the economy, the ways in which it can convert inputs to outputs.
- (IIA) and (IIB) require us to disaggregate all the goods (and services) of the economy to the point where everything inside each category is substitutable. Moreover, if different parts of our physical or organizational “plant” have different technical capacities, that needs to be taken into account, or the results can be decidedly sub-optimal. (Kantorovich actually emphasizes this need to disaggregate in Best Use, by way of scoring points against Leontief. The numbers in the latter’s input-output matrices, Kantorovich says, are aggregated over huge swathes of the economy, and so far too crude to be actually useful for planning.) This is, to belabor the obvious, a huge amount of information to gather.
- (It’s worth remarking at this point that “inputs” and “constraints” can be understood very broadly. For instance, there is nothing in the formalism which keeps it from including constraints on how much the production process is allowed to pollute the environment. The shadow prices enforcing those constraints would indicate how much production could be increased if marginally more pollution were allowed. This wasn’t, so far as I know, a concern of the Soviet economists, but it’s the logic behind cap-and-trade institutions for controlling pollution.)
- Subsequent work in optimization theory lets us get away, a bit, from requiring complete and perfectly accurate knowledge in stage (II). If our knowledge is distorted by merely unbiased statistical error, we could settle for stochastic optimization, which runs some risk of being badly wrong (if the noise is large), but at least does well on average. We still need this unbiased knowledge about everything, however, and aggregation is still a recipe for distortions.
- More serious is the problem that people will straight-up lie to the planners about resources and technical capacities, for reasons which Spufford dramatizes nicely. There is no good mathematical way of dealing with this.
- III. For Kantorovich, the objective function from (I) and the constraints and production technology from (II) must be linear.
- Nonlinear optimization is possible, and I will come back to it, but it rarely makes things easier.
- IV. Computing time must be not just too cheap to meter, but genuinely immense.
- It is this point which I want to elaborate on, because it is a mathematical rather than a practical difficulty.
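To make the astronaut-diet problem mentioned above concrete, here is a minimal Python sketch (all the nutrient and mass numbers are invented for illustration): a two-food, two-nutrient linear program, solved by brute-force enumeration of the vertices of the feasible region. This only works because n = 2; the point of what follows is precisely what happens as n grows.

```python
from itertools import combinations

# Toy astronaut-diet LP: minimize total mass of two foods subject to
# nutrient floors. All numbers are invented for illustration.
# Constraints have the form a1*x1 + a2*x2 >= b.
constraints = [
    (3.0, 1.0, 12.0),   # protein floor
    (1.0, 2.0, 10.0),   # vitamin floor
    (1.0, 0.0, 0.0),    # x1 >= 0
    (0.0, 1.0, 0.0),    # x2 >= 0
]
mass = (1.0, 1.5)       # kg per unit of each food

def intersect(c1, c2):
    """Point where two constraint boundaries cross, or None if parallel."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] >= rhs - 1e-9
               for a, b, rhs in constraints)

# An optimum of a linear program sits at a vertex of the feasible
# region, so with n = 2 we can enumerate boundary intersections.
vertices = [p for c1, c2 in combinations(constraints, 2)
            for p in [intersect(c1, c2)] if p is not None and feasible(p)]
best = min(vertices, key=lambda p: mass[0] * p[0] + mass[1] * p[1])
print(best)   # the minimum-mass diet, here (2.8, 3.6)
```

Vertex enumeration grows combinatorially, which is why real solvers use the interior-point methods discussed below.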
“Numerical Methods for the Solution of Problems of Optimal Planning”
It was no accident that mathematical optimization went hand-in-hand with automated computing. There’s little point to reasoning abstractly about optima if you can’t actually find them, and finding an optimum is a computational task. We pose a problem (find the plan which maximizes this objective function subject to these constraints), and want not just a solution, but a method which will continue to deliver solutions even as the problem posed is varied. We need an algorithm.
Computer science, which is not really so much a science as a branch of mathematical engineering, studies questions like this. A huge and profoundly important division of computer science, the theory of computational complexity, concerns itself with understanding what resources algorithms require to work. Those resources may take many forms: memory to store intermediate results, samples for statistical problems, communication between cooperative problem-solvers. The most basic resource is time, measured not in seconds but in operations of the computer. This is something Spufford dances around, in II.2: “Here’s the power of the machine: that having broken arithmetic down into tiny idiot steps, it can then execute those steps at inhuman speed, forever.” But how many steps? If it needs enough steps, then even inhuman speed is useless for human purposes…
The way computational complexity theory works is that it establishes some reasonable measure of the size of an instance of a problem, and then asks how much time is absolutely required to produce a solution. There can be several aspects of “size”; there are three natural ones for linear programming problems. One is the number of variables being optimized over, say n. The second is the number of constraints on the optimization, say m. The third is the amount of approximation we are willing to tolerate in a solution — we demand that it come within h of the optimum, and that if any constraints are violated it is also by no more than h. Presumably optimizing many variables (n >> 1), subject to many constraints (m >> 1), to a high degree of approximation (h ~ 0), is going to take more time than optimizing a few variables (n ~ 1), with a handful of constraints (m ~ 1), and accepting a lot of slop (h ~ 1). How much, exactly?
The fastest known algorithms for solving linear programming problems are what are called “interior point” methods. These are extremely ingenious pieces of engineering, useful not just for linear programming but for a wider class of problems called “convex programming”. Since the 1980s they have revolutionized numerical optimization, and are, not so coincidentally, among the intellectual children of Kantorovich (and Dantzig). The best guarantee about the number of “idiot steps” (arithmetic operations) needed to solve a linear programming problem with such algorithms is that it’s proportional to (m+n)^(3/2) n^2 log(1/h).
(I am simplifying just a bit; see sec. 4.6.1 of Ben-Tal and Nemirovski’s Lectures on Modern Convex Optimization [PDF].)
Truly intractable optimization problems — of which there are many — are ones where the number of steps needed grows exponentially2. If linear programming were in this “complexity class”, it would be truly dire news, but it’s not. The complexity of the calculation grows only polynomially with n, so it falls in the class theorists are accustomed to regarding as “tractable”. But the complexity still grows super-linearly, like n^3.5. Where does this leave us?
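As a sanity check on that scaling, a few lines of Python (a back-of-the-envelope sketch only; constant factors are dropped throughout):

```python
import math

def lp_ops(n, m, h):
    # Operations for one interior-point LP solve, per the guarantee
    # quoted above: (m+n)^(3/2) * n^2 * log(1/h). Constants dropped.
    return (m + n) ** 1.5 * n ** 2 * math.log(1 / h)

# With constraints proportional to variables (m = n) and a fixed
# tolerance, the count grows like n^3.5:
small = lp_ops(1e6, 1e6, 1e-6)
big = lp_ops(1e9, 1e9, 1e-6)
print(big / small)   # 1000**3.5, about 3e10
```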
A good modern commercial linear programming package can handle a problem with 12 or 13 million variables in a few minutes on a desktop machine. Let’s be generous and push this down to 1 second. (Or let’s hope that the Moore’s Law rule-of-thumb has six or eight iterations left, and wait a decade.) To handle a problem with 12 or 13 billion variables would then take about 30 billion seconds, or roughly a thousand years.
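The thousand-year figure is just this arithmetic, assuming the n^3.5 scaling and the one-second baseline above:

```python
# From 12 million variables (say 1 second) to 12 billion: a factor of
# 1000 in problem size becomes a factor of 1000**3.5 in time.
factor = 1000 ** 3.5                      # about 3.16e10
seconds = 1.0 * factor
years = seconds / (365.25 * 24 * 3600)
print(years)                              # about a thousand years
```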
Naturally, I have a reason for mentioning 12 million variables:
In the USSR at this time there are 12 million identifiably different products (disaggregated down to specific types of ball-bearings, designs of cloth, size of brown shoes, and so on). There are close to 50,000 industrial establishments, plus, of course, thousands of construction enterprises, transport undertakings, collective and state farms, wholesaling organs and retail outlets.
—Alec Nove, The Economics of Feasible Socialism (p. 36 of the revised edition; Nove’s italics)
This 12 million figure will conceal variations in quality; and it is not clear to me, even after tracking down Nove’s sources, whether it included the provision of services, which are a necessary part of any economy.
Let’s say it’s just twelve million. Even if the USSR could never have invented a modern computer running a good LP solver, if someone had given it one, couldn’t Gosplan have done its work in a matter of minutes? Maybe an hour, to look at some alternative plans?
No. The difficulty is that there aren’t merely 12 million variables to optimize over, but rather many more. We need to distinguish between a “coat, winter, men’s, part-silk lining, wool worsted tricot, cloth group 29—32” in Smolensk and one in Moscow. If we don’t “index” physical goods by location this way, our plan won’t account for the need for transport properly, and things simply won’t be where they’re needed; Kantorovich said as much under the heading of “the problem of a production complex”. (Goods which can spoil, or are needed at particular occasions and neither earlier nor later, should also be indexed by time; this is Kantorovich’s “dynamic problem”.) A thousand locations would be very conservative, but even that factor would get us into the regime where it would take a thousand years to work through a single plan. With 12 million kinds of goods and only a thousand locations, to have the plan ready in less than a year would need computers a thousand times faster.
This is not altogether unanticipated by Red Plenty:
A beautiful paper at the end of last year had skewered Academician Glushkov’s hypercentralized rival scheme for an all-seeing, all-knowing computer which would rule the physical economy directly, with no need for money. The author had simply calculated how long it would take the best machine presently available to execute the needful program, if the Soviet economy were taken to be a system of equations with fifty million variables and five million constraints. Round about a hundred million years, was the answer. Beautiful. So the only game in town, now, was their own civilised, decentralized idea for optimal pricing, in which shadow prices calculated from opportunity costs would harmonise the plan without anyone needing to possess impossibly complete information. [V.2]
This alternative vision, the one which Spufford depicts those around Kantorovich as pushing, was to find the shadow prices needed to optimize, fix the monetary prices to track the shadow prices, and then let individuals or firms buy and sell as they wish, so long as they are within their budgets and adhere to those prices. The planners needn’t govern men, nor even administer things, but only set prices. Does this, however, actually set the planners a more tractable, a less computationally-complex, problem?
So far as our current knowledge goes, no. Computing optimal prices turns out to have the same complexity as computing the optimal plan itself3. It is (so far as I know) conceivable that there is some short-cut to computing prices alone, but we have no tractable way of doing that yet. Anyone who wants to advocate this needs to show that it is possible, not just hope piously.
How then might we escape?
It will not do to say that it’s enough for the planners to approximate the optimal plan, with some dark asides about the imperfections of actually-existing capitalism thrown into the mix. The computational complexity formula I quoted above already allows for only needing to come close to the optimum. Worse, the complexity depends only very slowly, logarithmically, on the approximation to the optimum, so accepting a bit more slop buys us only a very slight savings in computation time. (The optimistic spin is that if we can do the calculations at all, we can come quite close to the optimum.) This route is blocked.
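To see just how weak the dependence on the tolerance h is, consider this small Python check:

```python
import math

# Time scales with log(1/h), so demanding a far better approximation
# costs very little extra computation.
loose = math.log(1 / 1e-2)   # within 1% of the optimum
tight = math.log(1 / 1e-6)   # within one part per million
print(tight / loose)         # only a factor of 3
```

Demanding ten thousand times more precision costs a factor of three, not ten thousand; conversely, accepting more slop saves almost nothing.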
Another route would use the idea that the formula I’ve quoted is only an upper bound, the time required to solve an arbitrary linear programming problem. The problems set by economic planning might, however, have some special structure which could be exploited to find solutions faster. What might that structure be?
- The most plausible candidate is to look for problems which are “separable”, where the constraints create very few connections among the variables. If we could divide the variables into two sets which had nothing at all to do with each other, then we could solve each sub-problem separately, at tremendous savings in time. The supra-linear, n^3.5 scaling would apply only within each sub-problem. We could get the optimal prices (or optimal plans) just by concatenating the solutions to sub-problems, with no extra work on our part.
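The potential gain from separability is easy to quantify under the same n^3.5 scaling (the numbers are illustrative):

```python
# If an n-variable problem really split into k independent blocks of
# n/k variables, total work under n**3.5 scaling would fall by k**2.5.
n, k = 12_000_000, 1000
speedup = n ** 3.5 / (k * (n / k) ** 3.5)
print(speedup)   # 1000**2.5, roughly 3e7
```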
Unfortunately, as Lenin is supposed to have said, “everything is connected to everything else”. If nothing else, labor is both required for all production, and is in finite supply, creating coupling between all spheres of the economy. (Labor is not actually extra special here, but it is traditional4.) A national economy simply does not break up into so many separate, non-communicating spheres which could be optimized independently.
So long as we are thinking like computer programmers, however, we might try a desperately crude hack, and just ignore all kinds of interdependencies between variables. If we did that, if we pretended that the over-all high-dimensional economic planning problem could be split into many separate low-dimensional problems, then we could speed things up immensely, by exploiting parallelism or distributed processing. An actually-existing algorithm, on actually-existing hardware, could solve each problem on its own, ignoring the effect on the others, in a reasonable amount of time. As computing power grows, the supra-linear complexity of each planning sub-problem becomes less of an issue, and so we could be less aggressive in ignoring couplings.
At this point, each processor is something very much like a firm, with a scope dictated by information-processing power, and the mis-matches introduced by their ignoring each other in their own optimizations are something very much like “the anarchy of the market”. I qualify with “very much like”, because there are probably lots of institutional forms these could take, some of which will not look much like actually existing capitalism. (At the very least, the firm-ish entities could be owned by the state, by a Roemeresque stock-market socialism, by workers’ cooperatives, or indeed take other forms.)
Forcing each processor to take some account of what the others are doing, through prices and quantities in markets, removes some of the grosser pathologies. (If you’re a physicist, you think of this as weak coupling; if you’re a computer programmer, it’s a restricted interface.) But it won’t, in general, provide enough of a communication channel to actually compute the prices swiftly — at least not if we want one set of prices, available to all. Rob Axtell, in a really remarkable paper, shows that bilateral exchange can come within h of an equilibrium set of prices in a time proportional to n^2 log(1/h), which is much faster than any known centralized scheme.
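Taking both bounds at face value (and ignoring constant factors, which is a real caveat), the gap at the scale of the Soviet product count is enormous:

```python
import math

def centralized(n, h):
    # interior-point guarantee with m ~ n, as above
    return n ** 3.5 * math.log(1 / h)

def bilateral(n, h):
    # Axtell's bilateral-exchange bound
    return n ** 2 * math.log(1 / h)

n, h = 12_000_000, 1e-6
print(centralized(n, h) / bilateral(n, h))   # n**1.5, about 4e10
```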
Now, we might hope that yet faster algorithms will be found, ones which would, say, push the complexity down from cubic in n to merely linear. There are lower bounds on the complexity of optimization problems which suggest we could never hope to push it below linear. No such linear-time algorithms are known to exist, and we don’t have any good reason to think that they do. We also have no reason to think that alternative computing methods would lead to such a speed-up5.
I said before that increasing the number of variables by a factor of 1000 increases the time needed by a factor of about 30 billion. To cancel this out would need a computer about 30 billion times faster, which would need about 35 doublings of computing speed, taking, if Moore’s rule-of-thumb continues to hold, another half century. But my factor of 1000 for prices was quite arbitrary; if it’s really more like a million, then we’re talking about increasing the computation by a factor of 1021 (a more-than-astronomical, rather a chemical, increase), which is just under 70 doublings, or just over a century of Moore’s Law.
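The doubling counts come straight from base-2 logarithms:

```python
import math

# Doublings of machine speed needed to cancel each slowdown factor:
print(math.log2(1000 ** 3.5))   # ~35 doublings for the factor of ~3e10
print(math.log2(10.0 ** 21))    # ~70 doublings for the factor of 1e21
# At one doubling every eighteen months or so, that is roughly half a
# century and just over a century, respectively.
```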
If someone like Iain Banks or Ken MacLeod wants to write a novel where they say that the optimal planned economy will become technically tractable sometime around the early 22nd century, then I will read it eagerly. As a serious piece of prognostication, however, this is the kind of thinking which leads to “where’s my jet-pack?” ranting on the part of geeks of a certain age.
Nonlinearity and Nonconvexity
In linear programming, all the constraints facing the planner, including those representing the available technologies of production, are linear. Economically, this means constant returns to scale: the factory need put no more, and no less, resources into its 10,000th pair of diapers as into its 20,000th, or its first.
Mathematically, the linear constraints on production are a special case of convex constraints. If a constraint is convex, then if we have two plans which satisfy it, so would any intermediate plan in between those extremes. (If plan A calls for 10,000 diapers and 2,000 towels, and plan B calls for 2,000 diapers and 10,000 towels, we could do half of plan A and half of plan B, make 6,000 diapers and 6,000 towels, and not run up against the constraints.) Not all convex constraints are linear; in convex programming, we relax linear programming to require only convex constraints. Economically, this corresponds to allowing decreasing returns to scale, where the 10,000th pair of diapers is indeed more expensive than the 9,999th, or the first.
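The diapers-and-towels interpolation can be checked mechanically (a toy linear capacity constraint, with a made-up joint capacity of 12,000):

```python
# Plans A and B both satisfy a linear capacity constraint; so does any
# mixture of them. That is exactly what convexity of the constraint means.
def feasible(diapers, towels):
    return diapers + towels <= 12_000

plan_a = (10_000, 2_000)
plan_b = (2_000, 10_000)
mix = tuple(0.5 * a + 0.5 * b for a, b in zip(plan_a, plan_b))
print(mix, feasible(*mix))   # (6000.0, 6000.0) True
```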
Computationally, it turns out that the same “interior-point” algorithms which bring large linear-programming problems within reach also work on general convex programming problems. Convex programming is more computationally complex than linear programming, but not radically so.
Unfortunately for the planners, increasing returns to scale in production mean non-convex constraints; and increasing returns are very common, if only from fixed costs. If the plan calls for regular flights from Moscow to Novosibirsk, each flight has a fixed minimum cost, no matter how much or how little the plane carries. (Fuel; the labor of pilots, mechanics, and air-traffic controllers; wear and tear on the plane; wear and tear on runways; the lost opportunity of using the plane for something else.) Similarly for optimization software (you can’t make any copies of the program without first expending the programmers’ labor, and the computer time they need to write and debug the code). Or academic papers, or for that matter running an assembly line or a steel mill. In all of these cases, you just can’t smoothly interpolate between plans which have these outputs and ones which don’t. You must pay at least the fixed cost to get any output at all, which is non-convexity. And there are other sources of increasing returns, beyond fixed costs.
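A minimal sketch of how a fixed cost breaks convexity (all the numbers are invented):

```python
# A fixed cost F per flight makes the cost curve non-convex: a convex
# function would satisfy cost(q/2) <= (cost(0) + cost(q)) / 2.
F, v = 100.0, 2.0            # invented fixed and per-unit costs

def cost(q):
    return 0.0 if q == 0 else F + v * q

q = 50.0
print(cost(q / 2))                 # 150.0
print((cost(0) + cost(q)) / 2)     # 100.0 -- the convexity test fails
```

Interpolating between the no-flight plan and the full-freight plan underestimates the true cost, because the fixed cost must be paid in full to get any output at all.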
This is bad news for the planners, because there are no general-purpose algorithms for optimizing under non-convex constraints. Non-convex programming isn’t roughly as tractable as linear programming; it’s generally quite intractable. Again, the kinds of non-convexity which economic planners would confront might, conceivably, universally turn out to be especially benign, so everything becomes tractable again, but why should we think that?
If it’s any consolation, allowing non-convexity messes up the markets-are-always-optimal theorems of neo-classical/bourgeois economics, too. (This illustrates Stiglitz’s contention that if the neo-classicals were right about how capitalism works, Kantorovich-style socialism would have been perfectly viable.) Markets with non-convex production are apt to see things like monopolies, or at least monopolistic competition, path dependence, and actual profits and power. (My university owes its existence to Mr. Carnegie’s luck, skill, and ruthlessness in exploiting the non-convexities of making steel.) Somehow, I do not think that this will be much consolation.
The Given Assortment, and Planner’s Preferences
So far I have been assuming, for the sake of argument, that the planners can take their objective function as given. There does need to be some such function, because otherwise it becomes hard or impossible to choose between competing plans which are all technically feasible. It’s easy to say “more stuff is better than less stuff”, but at some point more towels means fewer diapers, and then the planners have to decide how to trade off among different goods. If we take desired output as fixed and try to minimize inputs, the same difficulty arises (is it better to use so much less cotton fiber if it requires this much more plastic?), so I will just stick with the maximization version.
For the capitalist or even market-socialist firm, there is in principle a simple objective function: profit, measured in dollars, or whatever else the local unit of account is. (I say “in principle” because a firm isn’t a unified actor with coherent goals like “maximize profits”; to the extent it acts like one, that’s an achievement of organizational social engineering.) The firm can say how many extra diapers it would have to sell to be worth selling one less towel, because it can look at how much money it would make. To the extent that it can take its sales prices as fixed, and can sell as much as it can make, it’s even reasonable for it to treat its objective function as linear.
But what about the planners? Even if they wanted to just look at the profit (value added) of the whole economy, they get to set the prices of consumption goods, which in turn set the (shadow) prices of inputs to production. (The rule “maximize the objective function” does not help pick an objective function.) In any case, profits are money, i.e., claims, through exchange, on goods and services produced by others. It makes no sense for the goal of the economy, as a whole, to be to maximize its claims on itself.
As I mentioned, Kantorovich had a way of evading this, which was clever if not ultimately satisfactory. He imagined the goal of the planners to be to maximize the production of a “given assortment” of goods. This means that the desired ratio of goods to be produced is fixed (three diapers for every towel), and the planners just need to maximize production at this ratio. This only pushes back the problem by one step, to deciding on the “given assortment”.
We are pushed back, inevitably, to the planners having to make choices which express preferences or (in a different sense of the word) values. Or, said another way, there are values or preferences — what Nove called “planners’ preferences” — implicit in any choice of objective function. This raises both a cognitive or computational problem, and at least two different political problems.
The cognitive or computational problem is that of simply coming up with relative preferences or weights over all the goods in the economy, indexed by space and time. (Remember we need such indexing to handle transport and sequencing.) Any one human planner would simply have to make up most of these, or generate them according to some arbitrary rule. To do otherwise is simply beyond the bounds of humanity. A group of planners might do better, but it would still be an immense amount of work, with knotty problems of how to divide the labor of assigning values, and a large measure of arbitrariness.
Which brings us to the first of the two political problems. The objective function in the plan is an expression of values or preferences, and people have different preferences. How are these to be reconciled?
There are many institutions which try to reconcile or adjust divergent values. This is a problem of social choice, and subject to all the usual pathologies and paradoxes of social choice. There is no universally satisfactory mechanism for making such choices. One could imagine democratic debate and voting over plans, but the sheer complexity of plans, once again, makes it very hard for members of the demos to make up their minds about competing plans, or how plans might be changed. Every citizen is put in the position of the solitary planner, except that they must listen to each other.
Citizens (or their representatives) might debate about, and vote over, highly aggregated summaries of various plans. But then the planning apparatus has to dis-aggregate, has to fill in the details left unfixed by the democratic process. (What gets voted on is a compressed encoding of the actual plan, for which the apparatus is the decoder.) I am not worried so much that citizens are not therefore debating about exactly what the plan is. Under uncertainty, especially uncertainty from complexity, no decision-maker understands the full consequences of their actions. What disturbs me about this is that filling in those details in the plan is just as much driven by values and preferences as making choices about the aggregated aspects. We have not actually given the planning apparatus a tractable technical problem (cf.).
Dictatorship might seem to resolve the difficulty, but doesn’t. The dictator is, after all, just a single human being. He (and I use the pronoun deliberately) has no more ability to come up with real preferences over everything in the economy than any other person. (Thus, Ashby’s “law of requisite variety” strikes again.) He can, and must, delegate details to the planning apparatus, but that doesn’t help the planners figure out what to do. I would even contend that he is in a worse situation than the demos when it comes to designing the planning apparatus, or figuring out what he wants to decide directly, and what he wants to delegate, but that’s a separate argument. The collective dictatorship of the party, assuming anyone wanted to revive that nonsense, would only seem to give the worst of both worlds.
I do not have a knock-down proof that there is no good way of evading the problem of planners’ preferences. Maybe there is some way to improve democratic procedures or bureaucratic organization to turn the trick. But any such escape is, now, entirely conjectural. In its absence, if decisions must be made, they will get made, but through the sort of internal negotiation, arbitrariness and favoritism which Spufford depicts in the Soviet planning apparatus.
This brings us to the second political problem. Even if everyone agrees on the plan, and the plan is actually perfectly implemented, there is every reason to think that people will not be happy with the outcome. They’re making guesses about what they actually want and need, and they are making guesses about the implications of fulfilling those desires. We don’t have to go into “Monkey’s Paw” territory to realize that getting what you think you want can prove thoroughly unacceptable; it’s a fact of life, which doesn’t disappear in economics. And not everyone is going to agree on the plan, which will not be perfectly implemented. (Nothing is ever perfectly implemented.) These are all signs of how even the “optimal” plan can be improved, and ignoring them is idiotic.
We need then some systematic way for the citizens to provide feedback on the plan, as it is realized. There are many, many things to be said against the market system, but it is a mechanism for providing feedback from users to producers, and for propagating that feedback through the whole economy, without anyone having to explicitly track that information. This is a point which both Hayek, and Lange (before the war) got very much right. The feedback needn’t be just or even mainly through prices; quantities (especially inventories) can sometimes work just as well. But what sells and what doesn’t is the essential feedback.
The innumerable living participants in the economy, state and private, collective and individual, must serve notice of their needs and of their relative strength not only through the statistical determinations of plan commissions but by the direct pressure of supply and demand. The plan is checked and, to a considerable degree, realized through the market.
—Leon Trotsky, “The Soviet Economy in Danger” (1932)
It is conceivable that there is some alternative feedback mechanism which is as rich, adaptive, and easy to use as the market but is not the market, not even in a disguised form. Nobody has proposed such a thing.
Errors of the Bourgeois Economists
Both neo-classical and Austrian economists make a fetish (in several senses) of markets and market prices. That this is crazy is reflected in the fact that even under capitalism, immense areas of the economy are not coordinated through the market. There is a great passage from Herbert Simon in 1991 which is relevant here:
Suppose that [“a mythical visitor from Mars”] approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath it, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible.
No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of “large green areas interconnected by red lines.” It would not likely speak of “a network of red lines connecting green spots.”6
This is not just because the market revolution has not been pushed far enough. (“One effort more, shareholders, if you would be libertarians!”) The conditions under which equilibrium prices really are all a decision-maker needs to know, and really are sufficient for coordination, are so extreme as to be absurd. (Stiglitz is good on some of the failure modes.) Even if they hold, the market only lets people “serve notice of their needs and of their relative strength” up to a limit set by how much money they have. This is why careful economists talk about balancing supply and “effective” demand, demand backed by money.
This is just as much an implicit choice of values as handing the planners an objective function and letting them fire up their optimization algorithm. Those values are not pretty. They are that the whims of the rich matter more than the needs of the poor; that it is more important to keep bond traders in strippers and cocaine than feed hungry children. At the extreme, the market literally starves people to death, because feeding them is a less “efficient” use of food than helping rich people eat more.
I don’t think this sort of pathology is intrinsic to market exchange; it comes from market exchange plus gross inequality. If we want markets to signal supply and demand (not just tautological “effective demand”), then we want to ensure not just that everyone has access to the market, but also that they have (roughly) comparable amounts of money to spend. There is, in other words, a strong case to be made for egalitarian distributions of resources being a complement to market allocation. Politically, however, good luck getting those to go together.
We are left in an uncomfortable position. Turning everything over to the market is not really an option. Beyond the repulsiveness of the values it embodies, markets in areas like healthcare or information goods are always inefficient (over and above the usual impossibility of informationally-efficient prices). Moreover, working through the market imposes its own costs (time and effort in searching out information about prices and qualities, negotiating deals, etc.), and these costs can be very large. This is one reason (among others) why Simon’s Martian sees such large green regions in the capitalist countries — why actually-existing capitalism is at least as much an organizational as a market economy.
Planning is certainly possible within limited domains — at least if we can get good data to the planners — and those limits will expand as computing power grows. But planning is only possible within those domains because making money gives firms (or firm-like entities) an objective function which is both unambiguous and blinkered. Planning for the whole economy would, under the most favorable possible assumptions, be intractable for the foreseeable future, and deciding on a plan runs into difficulties we have no idea how to solve. The sort of efficient planned economy dreamed of by the characters in Red Plenty is something we have no clue of how to bring about, even if we were willing to accept dictatorship to do so.
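The bounded, money-denominated planning just described is, mechanically, linear programming. As a toy sketch (all numbers invented for illustration): a firm choosing output levels of two goods to maximize profit under two resource constraints. A problem this small can be solved exactly by the naive method of intersecting constraint boundaries and checking every vertex of the feasible region:

```python
from fractions import Fraction as F
from itertools import combinations

# Hypothetical firm: maximize profit 3x + 2y
# subject to   x + y <= 40   (labor-hours)
#             2x + y <= 60   (raw material)
#              x >= 0, y >= 0
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(F(1), F(1), F(40)),    # labor
               (F(2), F(1), F(60)),    # material
               (F(-1), F(0), F(0)),    # x >= 0
               (F(0), F(-1), F(0))]    # y >= 0

def profit(x, y):
    return 3 * x + 2 * y

def vertices(cons):
    """Intersect every pair of constraint boundaries; keep feasible points."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundaries: no unique intersection
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c for a, b, c in cons):
            yield x, y

# An optimum of a linear objective over a polytope lies at a vertex.
best_x, best_y = max(vertices(constraints), key=lambda p: profit(*p))
print(f"make {best_x} of good A and {best_y} of good B; profit {profit(best_x, best_y)}")
```

The point of the sketch is the one the text makes: the method works only because “profit” gives the firm a single unambiguous objective. Real solvers (simplex, interior-point) scale to millions of variables, but nothing in them supplies the objective function.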
That planning is not a viable alternative to capitalism (as opposed to a tool within it) should disturb even capitalism’s most ardent partisans. It means that their system faces no competition, nor even any plausible threat of competition. Those partisans themselves should be able to say what will happen then: the masters of the system will be tempted, and more than tempted, to claim more and more of what it produces as monopoly rents. This does not end happily.
Calling the Tune for the Dance of Commodities
There is a passage in Red Plenty which is central to describing both the nightmare from which we are trying to awake, and the vision we are trying to awake into. Henry has quoted it already, but it bears repeating.
Marx had drawn a nightmare picture of what happened to human life under capitalism, when everything was produced only in order to be exchanged; when true qualities and uses dropped away, and the human power of making and doing itself became only an object to be traded. Then the makers and the things made turned alike into commodities, and the motion of society turned into a kind of zombie dance, a grim cavorting whirl in which objects and people blurred together till the objects were half alive and the people were half dead. Stock-market prices acted back upon the world as if they were independent powers, requiring factories to be opened or closed, real human beings to work or rest, hurry or dawdle; and they, having given the transfusion that made the stock prices come alive, felt their flesh go cold and impersonal on them, mere mechanisms for chunking out the man-hours. Living money and dying humans, metal as tender as skin and skin as hard as metal, taking hands, and dancing round, and round, and round, with no way ever of stopping; the quickened and the deadened, whirling on. … And what would be the alternative? The consciously arranged alternative? A dance of another nature, Emil presumed. A dance to the music of use, where every step fulfilled some real need, did some tangible good, and no matter how fast the dancers spun, they moved easily, because they moved to a human measure, intelligible to all, chosen by all.
There is a fundamental level at which Marx’s nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tells us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn’t know whether to laugh or cry or run screaming.
But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find “a human measure, intelligible to all, chosen by all”, which says how everyone should go. What we can do is try to find the specific ways in which these powers we have conjured up are hurting us, and use them to check each other, or deflect them into better paths. Sometimes this will mean more use of market mechanisms, sometimes it will mean removing some goods and services from market allocation, either through public provision7 or through other institutional arrangements8. Sometimes it will mean expanding the scope of democratic decision-making (for instance, into the insides of firms), and sometimes it will mean narrowing its scope (for instance, not allowing the demos to censor speech it finds objectionable). Sometimes it will mean leaving some tasks to experts, deferring to the internal norms of their professions, and sometimes it will mean recognizing claims of expertise to be mere assertions of authority, to be resisted or countered.
These are all going to be complex problems, full of messy compromises. Attaining even second best solutions is going to demand “bold, persistent experimentation”, coupled with a frank recognition that many experiments will just fail, and that even long-settled compromises can, with the passage of time, become confining obstacles. We will not be able to turn everything over to the wise academicians, or even to their computers, but we may, if we are lucky and smart, be able, bit by bit, to make a world fit for human beings to live in.
Vaguely lefty? Check. Science fiction reader? Check. Interested in economics? Check. In fact: family tradition of socialism extending to having a relative whose middle name was “Karl Marx”? Check. Gushing Ken MacLeod fan? Check. Learned linear programming at my father’s knee as a boy? Check. ^
More exactly, many optimization problems have the property that we can check a proposed solution in polynomial time (these are the class “NP”), but no one has a polynomial-time way to work out a solution from the problem statement (which would put them in the class “P”). If a problem is in NP but not in P, we cannot do drastically better than just systematically go through candidate solutions and check them all. (We can often do a bit better, especially on particular cases, but not drastically better.) Whether there are any such problems, that is, whether NP≠P, is not known, but it sure seems like it. So while many common optimization problems are in NP but apparently not in P, linear and even convex programming are in P.^
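To make the footnote’s contrast concrete, here is a toy instance of subset-sum, a standard NP problem (all numbers invented): verifying a proposed answer is a single membership test and a sum, while the only general method we know for finding one is to search the exponentially many subsets:

```python
from itertools import combinations

# Toy subset-sum instance: is there a subset of `weights` summing
# exactly to `target`?  The weights are assumed distinct.
weights = [31, 17, 42, 8, 25, 13, 56, 4]
target = 79

def check(candidate, weights, target):
    """Verification is fast (polynomial): one membership test, one sum."""
    return set(candidate) <= set(weights) and sum(candidate) == target

def solve(weights, target):
    """Search: no known general shortcut past trying the 2^n subsets."""
    for r in range(len(weights) + 1):
        for subset in combinations(weights, r):
            if sum(subset) == target:
                return list(subset)
    return None

solution = solve(weights, target)
```

With 8 weights the search is instant; with 8,000 it is hopeless, which is the footnote’s point about “systematically going through candidate solutions.”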
: Most of the relevant work has been done under a slightly different cover — not determining shadow prices in an optimal plan, but equilibrium prices in Arrow-Debreu model economies. But this is fully applicable to determining shadow prices in the planning system. (Bowles and Gintis: “The basic problem with the Walrasian model in this respect is that it is essentially about allocations and only tangentially about markets — as one of us (Bowles) learned when he noticed that the graduate microeconomics course that he taught at Harvard was easily repackaged as ‘The Theory of Economic Planning’ at the University of Havana in 1969.”) Useful references here are Deng, Papadimitriou and Safra’s “On the Complexity of Price Equilibria” [STOC ’02, preprint], Codenotti and Varadarajan’s “Efficient Computation of Equilibrium Prices for Markets with Leontief Utilities”, and Ye’s “A path to the Arrow-Debreu competitive market equilibrium”. ^
: In the mathematical appendix to Best Use, Kantorovich goes to some length to argue that his objectively determined values are compatible with the labor theory of value, by showing that the o.d. values are proportional to the required labor in the optimal plan. (He begins by assuming away the famous problem of equating different kinds of labor.) A natural question is how seriously this was meant. I have no positive evidence that it wasn’t sincere. But, carefully examined, all that he proves is proportionality between o.d. values and the required consumption of the first component of the vector of inputs — and the ordering of inputs is arbitrary. Thus the first component could be any input to the production process, and the same argument would go through, leading to many parallel “theories of value”. (There is a certain pre-Socratic charm to imagining proponents of the labor theory of value arguing it out with the water-theorists or electricity-theorists.) It is hard for me to believe that a mathematician of Kantorovich’s skill did not see this, suggesting that the discussion was mere ideological cover. It would be interesting to know at what stage in the book’s “adventures” this part of the appendix was written.^
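In modern terms, Kantorovich’s objectively determined values are the dual variables of the planning problem; a sketch of the standard linear-programming duality that underlies the appendix’s argument (textbook notation, not Kantorovich’s own):

```latex
% Primal: the plan chooses activity levels x to maximize output c^T x
% given resource stocks b (A maps activities to resource use):
\max_{x} \; c^{\top} x \quad \text{s.t.} \quad A x \le b, \; x \ge 0.
% Dual: the o.d. values are the shadow prices y on the resources:
\min_{y} \; b^{\top} y \quad \text{s.t.} \quad A^{\top} y \ge c, \; y \ge 0.
% Strong duality: at the optimum the two valuations coincide,
\quad c^{\top} x^{*} = b^{\top} y^{*},
% and complementary slackness gives a zero o.d. value to any
% resource left in excess supply.
```

Nothing in this duality singles out labor: each component of b gets its own shadow price, which is exactly why the “proportionality to labor” argument goes through equally well for any input one chooses to list first.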
: In particular, there’s no reason to think that building a quantum computer would help. This is because, as some people have to keep pointing out, quantum computers don’t provide a general exponential speed-up over classical ones. ^
: Let me be clear about the limits of this. Already, in developed capitalism, such public or near-public goods as the protection of the police and access to elementary schooling are provided universally and at no charge to the user. (Or they are supposed to be, anyway.) Access to these is not regulated by the market. But the inputs needed to provide them are all bought on the market, the labor of teachers and cops very much included. On this point I cannot improve on the discussion in Lindblom’s The Market System, so I will just direct you to that (i, ii).^
: To give a concrete example, neither scientific research nor free software is produced for sale on the market. (This disappoints some aficionados of both.) Again, the inputs are obtained from markets, including labor markets, but the outputs are not sold on them. How far this is a generally-viable strategy for producing informational goods is a very interesting question, which it is quite beyond me to answer.^