The excerpt from Ophelia Benson’s article which Chris posted below got me thinking about a few particularly egregious examples of the phenomenon I’ve seen over the years. The one which sticks out in my mind was that of a teacher proudly boasting that he’d spent half of a class ignoring the subject matter that was meant to be discussed and instead talking about technical arcana which added nothing to our understanding of the subject, made the discussion incomprehensible to the layman, but fitted the students to carry on a discussion among people working in the same field, according to the rules of a trivial formal game.
Hang your head in shame, Brad DeLong, for the following piece of wilful obscurantism and “Bad Writing”:
“On February 12 I taught Andrei Shleifer’s “Implementation Cycles” (Journal of Political Economy, 1986) paper to my advanced macroeconomics Ph.D. students class.
Once again, just as happened the week before, I didn’t get through the paper. I had thought it would be easy–that I would finish with plenty of time to sketch extensions and qualifications. After all, the class does run from 12 to 2. However, that was not enough time.
My recurrent problem is that I spend so much time in asides on the modeling strategy–“this term is in this definition because twenty minutes from now it will cancel that when we take a derivative to establish the first-order condition”, “note that even though we have started with a rather general and flexible setup in which firms have a number of different decisions to make, the setup has been carefully designed so that when push comes to shove there is only one economically interesting and non-obvious decision a firm ever has to make”, “note that if this condition is not satisfied, then the consumer’s utility is infinite and it is not clear in what sense we can say that the model even has an equilibrium at all,” that kind of thing. These asides on modeling strategy take up a surprisingly large amount of time. Yet I don’t think I can cut them out or even cut them down. After all, if I don’t teach them this, when will they ever learn it?”
Pretty egregious, huh? After all, the essential insights of this model are clear. Brad even summarises them himself in the same post:
- A lot of firms implementing their new technologies at once creates a boom.
- An aggregate demand externality makes it profitable to cluster the implementation of new technologies. A firm with a new technology wants to wait until a boom to implement it because its technology will be quickly copied–its edge is temporary–and it is more profitable to implement when the economy is booming and demand is high than when the economy is not booming and demand is low.
- Partly offsetting this is the fact that the interest rate is high when a boom is expected, and thus the cost of waiting until a boom to implement your technological innovation can be substantial. (In fact with log utility or a utility function less risk-averse than log utility, the interest rate effect always outweighs the aggregate demand externality effect).
- Thus the model can exhibit periodic “implementation cycles” in which technological advance is delayed until periodic booms.
- However, in the basic model households always want new technology to be implemented as fast as possible: these implementation-cycle equilibria reduce welfare.
- But this can be reversed: in a version of the model with fixed costs of implementation and with ‘standing on the shoulders of giants’ effects in discovery, it is very possible that technological progress is only possible if there are periodic large booms.
So, if it is perfectly possible to summarise the conclusions of the Shleifer implementation-cycles model of the business cycle in a few digestible bullet-points, surely it is counterproductive and unforgivable to shroud the simple underlying points in all this “Theory”, isn’t it? After all, if something is intrinsically simple, it’s ludicrous to suggest that the conclusions have to be established through convoluted language, massive generalisations and strange constructs which make no sense on the face of them, isn’t it?
Well of course, no. If the history of economic thought teaches us anything, it teaches us that people who don’t use the mathematics always, sooner or later, end up saying something badly wrong about economics. Paul Krugman has an essay on the subject with which I profoundly disagree on a number of points, but which contains one highly important truth: there are important ideas in economics which are crystal clear if you understand the mathematics and bloody hard to get your head round if you don’t.
But, of course, I’m being silly here, or at least satirical. The use of mathematics in economics isn’t the sort of Theory we’re concerned with trying to stamp out; like the War on Drugs, the War on Theory isn’t meant to touch the recreational hobbies of nice people like us. Nobody would question the right of economists to use whatever mathematical toolkit they need in order to write economics, because unlike the Bad Writing crowd, they’re using mathematics precisely in order to ensure the rigour of their analysis, not to cover up a lack of such rigour.
Well, not quite. The position that mathematics in economics is a) the best way to do economics and b) the only rigorous way to do economics can be attacked on two separate fronts. On the one hand, we have the view of Alfred Marshall and his famous rules for the use of mathematics in economic theory:
“(1) Use mathematics as a shorthand language, rather than as an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in (4), burn (3). This last I [Marshall] did often.”
This is the view of the subject in which the use of the mathematical theoretical toolkit predisposes economists toward what Schumpeter called the “Ricardian vice”, after David Ricardo’s habit of building policy advice on the basis of arguments which made extensive use of deductive reasoning:
“He then piled one simplifying assumption upon another until, having really settled everything by these assumptions, he was left with only a few aggregate variables between which, given these assumptions, he set up simpler one-way relations so that, in the end, the desired results emerged almost as tautologies.”
This is exactly what Brad means when he euphemistically refers to “modelling strategy” above: the careful selection of the assumptions, not so that they are gerrymandered to give a particular result (although this too can be done, it is usually recognised by economists as the crime it is, whereas “modelling strategy” is a perfectly reputable thing to teach students), but so that they will deliver a deductive argument which has a conclusion at all. The problem is that there is one degree of freedom too few here; you can either select assumptions so as to be realistic descriptions of behaviour, or you can select them so as to make a model soluble or tractable. Assumptions will only possess both desiderata by the purest of chance. Marshall’s approach, while honoured much more in the breach than the observance, could be seen as an attempt to avoid the Ricardian Vice. It’s the equivalent of (part of) the Bad Writing critique in the arts subjects; an exhortation to keep economic analysis to subjects that the common man can understand, in language that the common man can understand.
It gets worse for the users of mathematical theory in economics. They’re also under attack from the other flank from Econophysics. The econophysics crowd tend toward the belief that the problem with the Ricardian Vice is not the use of mathematical and deductive reasoning from assumptions per se but rather the use of particular forms of mathematical argument and insufficiently complicated assumptions. It’s always possible to fend off criticism from the likes of Robert Kuttner when they trot out the Marshallian/ Schumpeterian critique that they’re only doing so because they’re too thick to hack the maths. It’s rather more difficult to pull off that act with someone like Didier Sornette, who predicts earthquakes for fun, or Barkley Rosser Jr[1] who understands Goedel’s theorem and uses it, and who is one of the few people whom I would trust to write a paragraph including the words “Chaos Theory”.
Faced with the twin critiques, that they use a dumbed-down, linear mathematical toolkit which is woefully inadequate to the task of accurately modelling economies, and that they use this toolkit excessively in an obscurantist fashion (linear mathematics can get really quite complicated!), how is it the case that the particular form of mathematical economics practised by Andrei Shleifer and Brad DeLong has come to dominate the methodology of the field? Surely there is some terrible organisational pathology of the academy, that it lets such Bad Writing persist in economics? The Post-Autistic Economics Network, the Bad Writing equivalent for critics of mathematical economics, certainly seem to think so; do they have a point?
Basically, no. The level of mathematics used in most printed academic journal articles is, I have come to conclude, about right. The point is this; economics is, as Deirdre McCloskey points out regularly, a form of rhetoric. At its heart, it is and has always been about the construction of a certain kind of argument, which is meant to be persuasive over human action. I state this without argument, in the knowledge that many people at work in the field believe that they are involved in a project of genuine scientific enquiry. I feel no argument of mine is ever going to carry the day on this issue, so if anyone wants to make the case for economics as a science, I’ll simply respond thus: “Sir, I gracefully concede that you yourself and your department are engaged in a value-neutral quest for scientific facts about the allocation of resources under conditions of scarcity. I apologise for having suggested otherwise. But would you at least grant me that the description ‘A form of rhetoric … the construction of arguments aimed to be persuasive over human action’ is a decent description of what all those other bastards are up to?”
And the point I’m trying to make is this; in the construction of arguments of this kind, there are certain kinds of mistake which it is fearfully easy to make. It’s easy to spot particular benefits and miss the fact that their counterparts are costs elsewhere in the system. To come up with arguments which, if true, would imply that people systematically allowed others to impoverish them without changing their behaviour. To miss the fact that your model requires the build-up of debts forever that never get repaid. Etc, etc. The bestiary of really bad economic commentary is full of all sorts of logical howlers. And the good thing about building mathematical models is that, in general, it acts as a form of double-entry book-keeping, to make sure that, if you’ve followed the rules of the game, your economic argument will not have any of these most common and most egregious flaws. It doesn’t mean that it won’t be bad or misleading for other reasons, of course, but it does mean that you’ll at least be saying something that makes sense, if only to other experts.
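The “double-entry book-keeping” function of a model can be sketched in a few lines of code (the figures below are entirely hypothetical, invented only for illustration): in a closed model every expenditure is someone else’s income, so while any one agent can run a deficit, the deficits across all agents must sum to zero, and an argument that requires debts to build up forever with no offsetting assets anywhere fails this check.

```python
# Toy two-agent economy: check the adding-up constraint that every
# expenditure is someone else's income (hypothetical figures).

spending = {
    ("households", "firms"): 90.0,   # consumption
    ("firms", "households"): 80.0,   # wages
    ("firms", "firms"): 10.0,        # investment goods bought from other firms
}

def income_of(agent):
    return sum(v for (payer, payee), v in spending.items() if payee == agent)

def outlays_of(agent):
    return sum(v for (payer, payee), v in spending.items() if payer == agent)

agents = {"households", "firms"}

# Aggregate spending equals aggregate income by construction.
total_spending = sum(spending.values())
total_income = sum(income_of(a) for a in agents)
assert total_spending == total_income

# An individual agent can spend more than it earns...
households_deficit = outlays_of("households") - income_of("households")
assert households_deficit == 10.0

# ...but the net positions across all agents must sum to zero: one
# agent's debt build-up is another's asset build-up.
net = sum(income_of(a) - outlays_of(a) for a in agents)
assert net == 0
```

The formal apparatus enforces these identities automatically; a purely verbal argument has to verify each of them by hand, which is exactly where the “politically convenient adding-up errors” creep in.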
And I think that there is probably a generalisation here which can be extended to other fields; typically, the formal language of a discipline (its jargon) has, among its other functions, the function of making it more difficult to make the characteristic mistakes of that discipline. In economics, it’s politically convenient adding-up errors. In literary criticism …. well, I don’t know enough about criticism to be sure, but if I know properly the little bit I do know, one of the things that at least some of them are all about is careful analysis of the implicit assumptions of common language. And it strikes me as not on the face of it unreasonable to suggest that the most common mistake in this kind of analysis would be to make arguments which unconsciously rely on an unanalysed implicit assumption, and that one way to avoid this common mistake would be to adopt a formal use of language which made it more difficult to rely on the common meanings of words. So the defence of Bad Writing on the grounds that “some subjects can only be written about in unclear terms” actually encapsulates an important truth about the subject; it’s probably possible to write about the implicit assumptions of everyday terms without falling into exactly the same kind of mistake yourself, but it might take a hell of a guy to do it. Just as it is possible to write in a sensible and apolitical way about economic matters, but it takes a hell of a guy to do it. Furthermore, it’s much more difficult to write economics in a manner comprehensible to laymen (and check by hand that you’re not making the mistakes) than to write in the mathematical style (when the maths basically does half of your checking for you). 
So the progress of the subject at anything like its current rate depends on the ability of professionals to use the formal language when talking to each other, and to reserve Good Writing for expressing to a non-specialist audience those ideas which have already been judged worthy of the extra effort.
All of which suggests to me that, as a criticism of professionals writing for professionals, the Bad Writing crowd are protesting far too much; there is a genuine place for formal language in subjects which have characteristic mistakes. I think that the sensible position in both the maths-in-economics debate and the theory-in-criticism debate is somewhere in the middle of this exchange between Krugman and James Galbraith; obscurantist theory is over-used, it’s overused specifically as a means of trying to shut outsiders out of the debate, but for a’ that, it doesn’t mean that the theory is worthless and it particularly doesn’t mean that you can ignore the theory altogether and still assume that your contribution to the discussion will be valid except by purest luck. Bad writing is writing that’s inappropriate for its content, and to assume that the same pristine clarity should mark out a postcard from Ibiza, an actuarial report and a discussion of subtext and metonymy in Buffy the Vampire Slayer is, well, bad.
[1] Yes yes yes, Rosser is not really an econophysicist, but he certainly associates with people who are, and he shares the same status of being an outside critic of neoclassical economics on the grounds of insufficient mathematical sophistication.
{ 56 comments }
tim 12.15.03 at 7:23 pm
In Defense Of Jargon:
Let’s not confuse a technical jargon with slang. In a technical jargon, the words have extremely precise, well-defined (if not commonly used) meanings. The language of physics is obscure to the outsider, perhaps, but the terms are very carefully delineated. One cannot substitute power for force, for example, or strangeness for charm. Street slang, on the other hand, provides you many different words for money or for sex with no essential difference between them. This is also the case in what passes for literary theory. You have many, many words to re-express the same indistinct ideas (I uncharitably posit there exist only two in the whole field: Good and Bad). The language of theory is not jargon, it is slang.
What you excerpt from the economics lecture is technical jargon. It may be wrong, and it may be misapplied, but the words are being used in precise and well-defined ways. It is, in that way, different from literary theory.
dsquared 12.15.03 at 7:28 pm
slang, on the other hand, provides you many different words for money or for sex with no essential difference between them.
I’m not saying you’re not onto something, but it strikes me that to make this distinction work, you’d need a very carefully outlined concept of what was and wasn’t “essential” to the meaning of a word, and it strikes me that this would be very difficult indeed to achieve in a non-question-begging sense.
I wrote a whole blog post once on the inconsistent ways in which the word “investment” was used by economists …
Brad DeLong 12.15.03 at 7:33 pm
Ahem! I wasn’t “proudly boasting,” I was “lamenting.”
Anyway, you’re the guy who thinks that the phrase “aggregate demand externality” is comprehensible to the layman…
dsquared 12.15.03 at 7:37 pm
Yes yes yes but the joke wouldn’t have worked then, would it? It’s the broad historical sweep of things which counts rather than these pettifogging details.
I tend to associate with a fairly strange kind of layman …
Ophelia Benson 12.15.03 at 7:40 pm
What a coincidence – I just posted that Krugman article in B&W’s Flashback section two days ago.
It gave me a melancholy feeling. Ah, yet another discipline I haven’t got a hope of understanding.
Now, if literary ‘theorists’ would only start using mathematical models, I would stop teasing them, because I would finally be convinced that they really are talking about subjects so technical and arcane and beyond my ken and, well, mathematical, that I just don’t understand them and have nothing to say. But they haven’t thought of it yet.
chun the unavoidable 12.15.03 at 7:43 pm
Macherey developed computational models of ideology, and there is a long-standing tradition of computational stylistics in literary criticism, which you would know if you in fact bothered to learn anything about the field you so feel so compelled to criticize.
Paul 12.15.03 at 7:47 pm
Apropos of nothing, Crooked Timber is the best blog ever. This is a fantastic debate.
Keith M Ellis 12.15.03 at 7:48 pm
But that would make sneering more work than fun, wouldn’t it? And what’s the point in that?
chun the unavoidable 12.15.03 at 8:00 pm
Speaking of sneering, that “so” is a type of Valley Girl emphasis, and you must withdraw your “sic.” We can bring in the Language Loggers to judge if you wish to challenge.
Keith M Ellis 12.15.03 at 8:10 pm
That wasn’t a sneer. I thought you’d made an honest typo. It didn’t read sensibly to me. Would the correct interpretation of emphasis be “about the field you *so* feel so compelled to criticize”? Just wondering.
Ophelia Benson 12.15.03 at 8:11 pm
Oh darn, found out again. Chun keeps finding me out. Last time, he challenged me to provide an actual quotation of the sort of thing I object to. Which I did, despite the fact that there were already several on the site, just there for the reading. When I did, he mumbled something about my having taken the quotation out of context (well how else can one provide a quotation?) and then disappeared. No ‘Oh, my mistake,’ no ‘Oh, you have read some of it then,’ no ‘Oh, beg pardon,’ no nothing. So my shame and confusion are not quite as complete as they might be.
Matt Weiner 12.15.03 at 8:13 pm
you’d need a very carefully outlined concept of what was and wasn’t “essential” to the meaning of a word, and it strikes me that this would be very difficult indeed to achieve in a non-question-begging sense.
Indeed. (Conclusions not endorsed.)
Fascinating stuff, dsquared. I immediately thought of my field, analytic philosophy, which can be subject to the same criticisms as economics.
I think the characteristic mistake of philosophy is equivocation, in the sense of accidentally using one word in two senses. (Ex. “Nothing is better than eternal happiness. A ham sandwich is better than nothing. So a ham sandwich is better than eternal happiness.”) But I’m not sure if I can make it fly….
Keith M Ellis 12.15.03 at 8:14 pm
More’s the pity.
cdm 12.15.03 at 8:38 pm
chun:
Could you provide a reference for Macherey’s “computational models of ideology”, please?
tim 12.15.03 at 8:57 pm
“there is a long-standing tradition of computational stylistics in literary criticism”
Oh yes. But neither is “stylistics” substance. For anyone who understands what a boundary value problem is in mathematics (jargon, not slang) it is difficult to treat seriously the literary theorist who supposes that it represents a masculine preoccupation with boundaries. It is equally difficult to see the equation of the phallus with the sqrt(-1) as a rigorous mathematical expression. But these both brim over with “computational stylistics.”
Keith M Ellis 12.15.03 at 8:58 pm
Yes, please. Because Googling doesn’t help. It only leads me to waters that, forgive me, I refuse to wade, much less swim.
chun the unavoidable 12.15.03 at 9:01 pm
Ophelia Benson provides middle-mind solutions to the problems confronting today’s blogorati intellectual. Since this group is particularly susceptible to confirmation bias, it doesn’t much matter that Benson’s displayed familiarity with literary theory is a small subset of Denis Dutton’s.
She wrote above that mathematical models weren’t used in literary theory. You may remember a Vassar professor who’s been written about in the New York Times, among other obscure venues, who’s just one example of the previously mentioned computational stylistic analysts who’ve been around since the computer (and longer). Just because any introductory book on textual criticism will mention these methods doesn’t mean that Benson should necessarily know anything about them.
For her chosen subject is “intentional obscurity” in the work of professional literary theorists. I did criticize the parochialism of Benson’s various writings on this topic, which I feel point to an illustrative hypocrisy: one of the mainstays of traditionalist humanists is that the so-called “canon” cannot be discarded, as they accuse contemporary critics of so wantonly doing, without it first being understood. We have young PhDs who’ve never read Jonson or even Beaumarchais, etc. Benson, who’s never felt the schizophrenic love of Deleuze and Guattari, dismisses that which she doesn’t understand. Her quotes from anthologies and French physics professors only underscore this apparatchik mentality and right deviationism.
Lurid 12.15.03 at 9:03 pm
I think the idea that one should respect a discipline more if it uses mathematics – which we all know is so wonderfully clever – is a little suspect. I have nothing against mathematics, quite the opposite, but just because an idea is couched in equations does not put it beyond the realm of criticism. It is a sad sign of innumeracy that this combination of veneration and phobia is quite commonplace. That said, mathematics can also be useful sometimes. One needs a balanced view of these things.
I thought that this was an excellent entry in that regard. I think it is especially interesting to see the Econophysicists and Post-autistics as somehow making similar points from completely different viewpoints.
chun the unavoidable 12.15.03 at 9:05 pm
Tim, did it occur to you to at least attempt to find out what I was talking about before mouthing off? Computational stylistics doesn’t have anything to do with your bastardization of Irigaray; but it has a lot to do with corpus analysis and textual reconstruction from diverse sources.
Keith M Ellis 12.15.03 at 9:06 pm
Aren’t you setting up a bit of a strawman? I don’t think there’s any denying that there’s a lot of bullshitting going on. And I think that, for example, Alan Sokal demonstrated this with his stunt. However, I was not friendly to him in that matter because he did so in bad faith and as an outsider lacking expertise. Your complaints remind me of this, although that’s just my impression.
…and this, I think, damages your case since I think you’re deeply, deeply wrong in this example.
Ophelia Benson 12.15.03 at 9:19 pm
Ooh, I have an apparatchik mentality and right deviationism! Now that is cool. I think I’ll get it printed on a sweatshirt.
Keith M Ellis 12.15.03 at 9:21 pm
Yes, but not all make this error. (“Traditionalist humanists” discarding that which they do not understand.)
I recall a post-Friday night lecture at my alma mater—a very old (for the US) “Great Books” school—where two of the more dogmatic faculty members misused Derrida as a strawman; whereupon another faculty member, well versed in Derrida, took the other two to task for their willful ignorance.
The point is that even among us extremely naive supporters of the “canon” there are some who know better than to ridicule that which we know not. This was part of my problem with Alan Sokal.
Having said that, I have a strong impression that there’s a lot of bullshitting going on over in those parts. But, you know, that’s not uncommon. Tim, you sound like a physicist to me. Hey, I used to hang out with astrophysicists. Physicists aren’t immune to hand-waving.
chun the unavoidable 12.15.03 at 9:35 pm
Get a “tone-deaf” logo as well, and you’ll sell thousands on your site.
And Keith, I think more people should dismiss what they don’t understand. It’s an unhealthy respect for difficulty that’s made academia the elitist place that it is today.
DJW 12.15.03 at 10:07 pm
I’m still digesting that wonderful, wonderful post. Thanks.
Keith M Ellis 12.15.03 at 10:12 pm
Yes, I will bow out now and encourage those who remain to deal substantively with Daniel’s very, very good post. My participation here has been a digression, and I think Daniel’s post merits serious discussion.
Daniel, have you seen that DeLong responds to you on his site?
c. 12.15.03 at 10:46 pm
Wow. A (characteristically) excellent post. A few comments/questions, though:
(1) If I’m reading you correctly, I think you miss McCloskey’s point re. science and rhetoric. Yes, she points out, economics is a form of rhetoric, but so is physics. The point is not that there is (as you seem to be claiming) some science/rhetoric split and economics falls into the latter, it’s that any field of inquiry (physics, economics, lit. theory) is inextricably tied up with its rhetorical style. Substance is style and style is substance, there’s no getting one w/o the other.
(2) I think you have a valid and important point about the rhetorical style of fields developing as a reaction to commonly occurring mistakes and fallacies. But, I think there’s a trade-off here. If I make an economic argument in a less mathematical style with an emphasis on clarity of prose, then it may be easier for me to overlook some important assumption, but, I think, it’s also easier for my audience to spot my mistake.
What mathematization and obscurantism do, even if they are beneficial in that they prevent certain mistakes, is that they limit the extent of the discourse by limiting the audience, so that any mistakes or fallacies that are left are much less likely to be found, because fewer people can follow the argument in a complete and intuitive manner.
And mathematization only prevents certain kinds of errors, and clarifies some aspects of your model, other aspects it can obscure, and it’s just as easy to make a ridiculous argument with mathematics as it is without.
And essentially, I find, economists are not convinced of an argument by the rigor of its math, but by what, when they translate that math into English, the argument claims, and whether that seems experientially correct to them (“Oh, so basically what Shleifer is saying is a, b, and c. Sure, I’ll buy that.”). The more mathematically obscure your argument is, the harder it is for people to do that.
I think it’s a commonly believed fallacy that mathematics always clarifies economic arguments by clarifying assumptions. Sometimes it does, sometimes it doesn’t. Noting that d=f(p) and f'(p)<0 tells you nothing more or nothing less than writing "When stuff's cheaper, people want more of it, and vice-versa." The level of mathematization in a paper is something that authors need to consciously and seriously consider, and not simply do so out of imitation, habit, or some unconscious appeal to authority.
The argument here is not that economists are too mathematical, or not mathematical enough but that they are not rhetorically conscious enough.
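c.’s d = f(p) point can be written out explicitly (the linear demand function below is hypothetical, chosen purely for illustration): the formal condition f′(p) < 0 and the plain-English sentence license exactly the same inference, no more and no less.

```python
# A hypothetical demand function, d = f(p) = 100 - 2p, used only to
# illustrate that the condition f'(p) < 0 says the same thing as
# "when stuff's cheaper, people want more of it, and vice-versa".

def f(p):
    return 100.0 - 2.0 * p

# The mathematical version: check f'(p) < 0 at a few prices
# via a finite-difference approximation to the derivative.
h = 1e-6
for p in (1.0, 10.0, 25.0):
    derivative = (f(p + h) - f(p)) / h
    assert derivative < 0

# The plain-English version of the same claim:
assert f(5.0) > f(10.0)  # cheaper => more demanded
```

Neither rendering adds information the other lacks; the question c. raises is when the symbolic form earns its keep and when it is mere habit.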
Russell L. Carter 12.15.03 at 11:50 pm
Many, maybe all technical fields have inscrutable–to laymen–techniques that produce results that are readily translated into common language, such that the results are intelligible in general by the moderately well educated. In mathematics, you might say that nonminimalist, possibly obscure first proofs are Bad Writing that follow-on simplifications are intended to rectify. Such simplifications are highly esteemed in the mathematical culture. Similarly, in my original field of engineering, the techniques are myriad and each exceedingly complex, and consequently your brilliant contribution gains the interest of other practitioners only if it can be shown to satisfy certain fundamental empirical relationships by display in the form of comparatively dead simple 2-D graphs.
This then is the source of much of the irritation that classic Bad Writing provokes: as a set of complex manipulative techniques it doesn’t appear to produce much that, upon conversion to common language or basic graphical display, is intelligible by even highly educated non-specialists, or even to other practitioners in the field. At least that’s what it looks like from the outside looking in; could be wrong.
It might be of interest that some mathematicians are very curious about this stuff. For instance, in the August 2003 Notices of the AMS is a long, erudite, and sympathetic review of “Mathematics and the Roots of Postmodern Thought” by Vladimir Tasic. Reading this, I came to the conclusion that maybe there is something there, there, if only as a general set of (complex) techniques whose potential viability is validated via analogy. As to what these techniques actually produce, I don’t have a clue yet.
Ben the Eminently Avoidable 12.16.03 at 12:25 am
I think part of the main problem is that people don’t understand what mathematical models are FOR and what claims may be made from them.
I tend to think that models are FOR providing a short black box representation of reality at a minimum, or at a higher level encapsulating relevant mechanisms about reality in a way that lets one make predictions about reality in regimes where we haven’t yet done any experiments or where experiments are impossible.
There’s one kind of model (Type I, say) which is sort of like curve fitting – you have data that lives in some sort of state space and you try to make a mathematical widget that will somehow interpolate the space. The parameters of the model may or may not mean anything, even if the model predicts what actually happens very well. You can’t hope to use this kind of model with much confidence in any situation that’s even a little outside the experience that went into making it.
Another kind of model (Type II, say) is a little more physically based – you try for ‘physically relevant’ parameters and mechanisms – and after a lot of testing you might come to conclude that the way your model works actually is a good underlying representation of the phenomenon you’re trying to study.
I’m a computational gas dynamicist (yes, a *real applied mathematician*, 5 or so months out from getting my union card), so I’ll pull from ye handy specialty. If I wanted to compute flow around an airplane I could try to model each molecule but that’s infeasible and more work than is reasonable. Instead there’s a partial differential equation, a MODEL of the physics, that in *certain regimes* is an adequate representation of the physics that is important. It can be proven and experimentally tested that the parameters of this model are the physically relevant ones. Along with that comes knowledge (rigorous and through experiment) of what modelling assumptions have been made and where the model’s liable to break down. If one has that sort of model then there’s a good chance of actually gaining insight into reality by studying the model rather than being limited to using it as some sort of black-box subsystem to help you study the larger physical system.
I have seen, seriously offered, a lot of (what I see as) Type I curve-fitting sorts of models with almost as many parameters as data points. It’s real darn hard to make claims that the parameters of such a model mean anything. You give me a dozen parameters and a dozen not-totally random obs with a couple dozen degrees of freedom each and I bet I can come up with an ODE/discrete model that does a decent job at approximating those. (notice I don’t say anything about how many ODEs, etc..!)
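Ben’s worry about Type I models can be reproduced in a few lines (synthetic data and a hypothetical polynomial model, purely illustrative): give a model as many parameters as there are data points and it will fit the sample essentially exactly, yet its parameters mean nothing and it cannot be trusted even a little outside the fitted range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a simple underlying law (y = 2x) plus noise.
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Type I model: a degree-7 polynomial has 8 parameters for 8 data
# points, so it interpolates the sample almost perfectly.
coeffs = np.polyfit(x, y, deg=7)
in_sample_error = np.max(np.abs(np.polyval(coeffs, x) - y))
assert in_sample_error < 1e-6

# The parameters of the interpolating polynomial bear no relation to
# the underlying law, and outside [0, 1] its predictions are wild;
# compare them against the honest two-parameter fit.
slope, intercept = np.polyfit(x, y, deg=1)
x_new = 1.5
wild = np.polyval(coeffs, x_new)      # extrapolation of the 8-parameter model
sane = slope * x_new + intercept      # extrapolation of the 2-parameter model
```

The two-parameter fit recovers something close to the true slope of 2 and extrapolates sensibly; the eight-parameter one passes every in-sample test while telling you nothing about the phenomenon.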
As an honest to gosh computational scientist (I guess!) I’m curious about how ‘computational stylistics’ works. What’s the model representing? How many parameters? Anyone tested the model on data outside that used to build up the model?
ben t. e. a. 12.16.03 at 12:27 am
Apologies for the odd boldfacing; my habit of using asterisks for emphasis backfired.
JRoth 12.16.03 at 12:31 am
Crooked Timber cross-pollination note:
Krugman & Galbraith’s dialogue directly addresses the issue of which fields require expertise of their experts. Stated more clearly, Krugman attacks the idea that, because economics is better understood without all the tricky math, non-economists should be put in prominent economic roles. Which is just like the debate elsewhere on CT about whether the NYTimes should have a professional ethicist on staff, rather than a former Letterman gag writer.
As a bonus cross-connection, the PK-JG debate has some side sniping about relative wages over time, and whether inflation has been overstated, a topic covered by Brad DeLong today.
Boy, this internet…like some kind of intricate stringing together of strands, or something….
david 12.16.03 at 1:56 am
What to do with Gary Becker, who says bullshit in the clearest of English and backs it up with some pretty obscure looking math (oops, did I out myself)? I take the theory point, just not quite sure how to tie it into the bad writing debate as it looks at Judith Butler.
McCloskey has gone on at length recently about the uselessness of much of the math in economics journals, which may change the focus of the economic science as rhetoric from her past.
Zizka 12.16.03 at 3:44 am
To me there’s little analogy between “theory”, as presently defined in literary studies, and math in economics. (Except when both are obscurantist and unnecessary). Math follows strict knowable rules, and theory doesn’t.
“Rhetoric” as McCloskey uses the term has a bigger meaning than most people understand. It’s not “fancy words used to confuse people” or “dressing your ideas up prettily to convince the general populace”.
It’s more like “writing things down so that people can see the point you’re trying to make”. The question “What’s the point here” is a question about rhetoric, or of rhetoric. It does involve concessions to the reader, in the sense that different ways of underlining your point will work with different readers.
In a way rhetoric is like “application”. “What does this mean in practice” or “What does this mean in concrete terms” is a question of, or about rhetoric. [You can’t say, “a rhetorical question”].
My doubts both about analytic philosophy and about economics are that they define certain questions out of existence right at the beginning, and then proceed rigorously from there. “Defining the terms”, or deciding what the question is, precedes the actual philosophical / economic work, and it’s also “rhetoric”. Often systematic thinkers beg the question by ending up trying to prove their starting premises based on proofs which have presupposed those premises.
cdm 12.16.03 at 4:30 am
chun:
Could you provide a reference for Macherey’s “computational models of ideology”, please?
epist 12.16.03 at 6:58 am
Guess I wasn’t the only one suppressing a grin when I read that anti-lit-theory piece.
Glass houses and all. Surely the theorists can see that the proper strategy is to make room at the table. We’re fighting for our lives here, let’s not blame the Judean People’s Front for the problem, eh?
DJW 12.16.03 at 8:47 am
Ophelia Benson, I’m curious, if you’re still around and feel like talking about it, whether Dsquared’s post has given you any pause about the scorched-earth policy with which you go after “theory” and “bad writing.” I’m not trying to be provocative here, or intervene on behalf of Chun in what’s clearly an ongoing grudge match. I’m just rather amazed that your off-the-cuff posts here and elsewhere are generally quite thoughtful and engaging, which presents such a stark contrast to your approach to this topic. Whenever I read your stuff on theory/science studies/bad writing, I get the sense that a) I agree with you on about 2/3 of your substantive points, and b) the points on which we disagree would still put me beyond the pale, as it were, and into the irredeemable ‘theory’ camp.
If this is something you’d rather not deal with in this forum, that’s fine. I’ll look for a good place to post similar, more specific concerns on your blog when I get a chance.
Ophelia Benson 12.16.03 at 1:23 pm
djw,
Well, my ‘scorched earth policy’ is a policy on bad writing rather than one on theory. The title of the article is bad writing, not theory, and not difficulty. Bad writing is, by definition…well, bad. And that’s what I’m talking about – the bad stuff.
Actually it’s all a bit odd. I didn’t write that article as a stand-alone piece – I wrote it for our In Focus section, where the articles serve as introductions to a lot of links on a particular subject. If I had written it as a stand-alone piece, I would have made various things clearer that, as it was, I expected the links to make clear. But when the Guardian asked if they could republish it, I wasn’t going to say No, or even Yes but I want to re-write it first. And then, it was edited quite heavily for the Guardian version, so a good few transitions are missing.
And then, yet another explanatory point – it all started with a review in the CHE by Carlin Romano of a book called Just Being Difficult? (which John Holbo discusses brilliantly and at length on his site), in which (according to Romano – I haven’t read the book yet) people who (apparently) take themselves to be bad writers, or to be under suspicion of being bad writers, resort to the difficulty defense. I found that idea both so hilarious and so irritating that it inspired me to do the In Focus. So the difficulty issue came in not because I think nothing should be difficult, that all writing and all ideas should be easy, but because I wanted to dispute the difficulty alibi for writing that is difficult not of necessity, not because the subject is inherently difficult, but for purposes of ostentation.
dsquared 12.16.03 at 2:03 pm
I’d just like to say that I very much regret mentioning Ophelia by name above; I haven’t read enough of her pieces on this subject to have an opinion on them. I hope it’s clear above that I’ve simply used the whole Bad Writing debate as an opportunity to attack a few hobby-horses of my own.
Chad Orzel 12.16.03 at 2:53 pm
I’m coming in to this late, but I’d just like to highlight this marvelous sentence from the first comment:
One cannot substitute power for force, for example, or strangeness for charm.
which makes a remarkable amount of sense whether you use the physics definitions of those terms, or the everyday ones. Indeed, the non-equivalence of “strangeness” and “charm” is the bane of many a physicist…
Sadly, I disagree with the next sentence:
Street slang, on the other hand, provides you many different words for money or for sex with no essential difference between them.
The sticking point is probably the definition of “essential,” but my experience of slang is that there are generally distinctions between terms that are apparent (if not well articulated) to regular users of the slang, if not to outsiders.
tim 12.16.03 at 3:49 pm
A few people have disagreed with my comments on slang (as distinguished from jargon).
So I’ll provide some examples.
I had so much to drink last night that I blew my cookies all over the bathroom.
I had so much to drink last night that I puked all over the bathroom.
I had so much to drink last night that I spewed all over the bathroom.
Hurled, upchucked, barfed, ralphed….
Sure, maybe there are connotative differences between these words, but they are slight; the words have essentially the same meaning.
Yes the choice of (cool/wicked/gnarly/keen/fine/hot) provides some information about the user, but it doesn’t change what is meant very much. The same is true if you (screwed/balled/fucked/nailed) your girlfriend. And if you have the (dough/bread/moolah/scratch/bucks) to buy a new computer.
dsquared 12.16.03 at 4:46 pm
But you have established this point only at the expense of your analogy; you can’t use the same method to show that literary theory uses different words to convey the same “essential meaning” without begging the question.
tim 12.16.03 at 5:46 pm
“you can’t use the same method to show that literary theory uses different words to convey the same ‘essential meaning’ without begging the question.”
But you can discover whether the obscuring words in literary theory are an honest technical jargon by trying to determine if there is a consistent, well-defined meaning for the terms. If the only meaning that a given term shares across instances is “good” or “bad,” or if it only seems to serve as an officious-sounding intensifier [or if, for example, “deconstruct” and “interrogate” are synonymous with “analyze” – if there is no ‘difference’ despite an implicit equivocation which promises you so much more], you know you are dealing with bad prose, if not necessarily bad intent.
I don’t mean that this is the whole of the problem with bad prose or with bad literary theory, but this is a way to determine if the obscurity of the prose is due to an honest but unfamiliar jargon or to aposematic coloration. The next step (if it is an honest technical jargon) would be to see if the jargon is being used as a magician’s assistant to draw your attention away from the sleight of hand and trickery going on in the argument. If it passes that test, you have to evaluate the argument.
roublen vesseau 12.17.03 at 6:07 am
I’m pretty sure some of you have read them, but Krugman has written a *lot* on economic methodology.
Two of my favorite pieces are:
The Fall and Rise of Development Economics
and
A country is not a company
dsquared 12.17.03 at 7:25 am
But you can discover whether the obscuring words in literary theory are an honest technical jargon by trying to determine if there is a consistent, well-defined meaning for the terms
At the risk of repeating myself, I don’t believe that you’ll find it possible to do this in a non question-begging way.
tim 12.17.03 at 1:13 pm
I can’t see how begging the question enters into it at all. Look, there’s no great trick to finding out what the jargon of math means, and that the meaning is stable. Why isn’t this possible for literary theory? Can’t we see if the usage implied in one instance is consistent with usage in another? Can’t we look for agreement among the practitioners on a well defined meaning? Where is the begging of the question in that?
Zizka 12.17.03 at 2:20 pm
I spent a couple years in ~1979-81 or so reading Foucault, Derrida, Lacan, and their sources. I ended up concluding that Foucault had a lot to say, Derrida may have had something to say, and that Lacan was a fraud. Rather than writing difficult ideas in a difficult way, I feel that Lacan and the Lacanians are playing an elaborate insiders game of peekaboo, and that none of the key words and phrases used have definite meanings such that if you understood them, you’d understand what was being said by Lacan et al. I further ended up believing that the obscurantism was deliberate.
It can always be said that I was resisting the message or that I didn’t try hard enough. There are still some people who say that about astrology too, for example, but I just don’t have the time to study astrology in detail either.
I suspect that if I knew more math I would find out that some of the math is used for obscurantist reasons. I know that within the profession many say that models are being “refined” even though they’ve become completely detached from the actualities they supposedly are “about”. But since I don’t know enough math, I don’t know which models these are or when abuse is taking place. The general idea that mathematical analysis is a useful tool for economists seems unassailable.
On a different thread someone apparently quite knowledgeable told me that the use of intimidating statistical presentations to dress up mushy results is a recognized problem. I also suspect that a lot of the technical language in the DSM-IV has a lot more to do with insurance payments and defending malpractice suits than it does with either prevention, treatment or discovery of causes of disease.
dsquared 12.17.03 at 4:19 pm
Look, there’s no great trick to finding out what the jargon of math means
Yes there is. This was what the entire formalist project was about; finding a way of doing mathematics without having to agree on what the terms “mean” (what kind of an entity is a “set”, for example?). It’s only possible in maths because the formal language of mathematics is “formal” in the literal sense; it’s a set of rules for making marks on paper which defines theorems and nontheorems without reference to the (semantic) meaning of those marks. That’s just not going to be possible in a field where the semantic content is important, which brings one face to face with the indeterminacy of “meaning”.
tim 12.17.03 at 5:27 pm
Look, the same is true of physics and biology. Everyone can agree on the proper usage of “cell,” or “mitochondria,” or “photon,” or “quark.” You won’t find a biologist describing a human being as a cell or a virus here, and an axon or amphibian or tuber there. And you won’t find a physicist calling those things inside protons sometimes quarks, sometimes electrons, sometimes molecules, or sometimes planets. And you won’t find a chemist using the word “Carbon” but meaning sometimes Scandium, sometimes Phosphorus, and sometimes graduated cylinder. So why can’t we expect literary theorists to have something specific, something generally agreed upon by their fellow practitioners, and something fixed behind the meanings of the words they use?
I’m not asking this of literature, where ambiguity may well be a virtue; I’m asking this of academic analysis, where it is not.
It’s clear that language in general is up to the task, “indeterminacy of meaning” notwithstanding. After all, many, many academic fields carry on a fruitful theoretical dialogue – not just math. Surely you don’t think it is impossible to communicate clearly about literature. It was done for centuries before literary theorists adopted obscurity as a virtue.
I see only two possibilities: that what the obscurantists want to say cannot be communicated in language, in which case they should shut up about it already; or that it can be communicated in language, in which case they should take the trouble to do it clearly.
While the ideas of most academic fields aren’t easily communicated using only the ordinary language of laypersons, this is precisely why they have developed rigorous technical jargons. And this isn’t out of reach for literary criticism – those people concerned with the rhetorical structure of texts have a considerable technical vocabulary available to them (see, for example, =A Handlist of Rhetorical Terms= by Richard Lanham).
Matt Weiner 12.17.03 at 6:54 pm
So why can’t we expect literary theorists to have something specific, something generally agreed upon by their fellow practitioners, and something fixed behind the meanings of the words they use?
Quick unfair response: Consider scientific terms such as “fish,” “solid,” and “mammal.” People turned out to be surprised by what these terms really mean; for instance, solids are substances with a certain crystalline structure, and glass turns out not to be a solid. So the meaning wasn’t completely fixed, and yet it was a good term for a’ that.
(I have an analogy in mind with something in analytic philosophy, but it’d take too long.)
tim 12.17.03 at 7:26 pm
Of course. Force means something different in the technical jargon of physicists than in the common language. That’s the point of a technical jargon. You can find an analogy in any academic discipline. (Analytic philosophy: endurance and perdurance as kinds of persistence.)
Tim F 12.18.03 at 12:46 pm
Literary theorists (or, perhaps, different schools of literary theory) _do_ “have something specific, something generally agreed upon by their fellow practitioners, and something fixed behind the meanings of the words they use.” ‘Deconstruct’ has a particular technical meaning coming from Derrida (depending on his claim that implicit assumptions necessarily lead to internal contradictions in any statement) – it isn’t just a synonym for ‘analyse’. Likewise ‘desire’ in Deleuze (and Deleuze-inspired criticism), or ‘the real’ in Lacanian psychoanalysis.
Using these shorthands makes it _easier_, not harder, to communicate. Indeed, I think this would be a good challenge for the Bad Writing crowd: choose a theorist and write something either about them or using their critical methods, without using their terminology. I don’t think you’d have to write very much before you discovered the value of this jargon – it’s certainly possible to find a synonym for ‘ontology’ when writing a Heideggerian critique of Hobbes, say, but it’s difficult and, if you’re writing for an audience who knows what the terminology means, an unnecessary waste of your time and theirs, and, by cluttering the text, makes it harder to follow the actual argument.
umair 12.18.03 at 4:12 pm
Cool debate. I think math is the cap on the modelling crutch because the epistemology and ontology of economics don’t really exist (or at least never really got sorted out). Luckily, simulation and behavioural methods are building a new epistemology for economics.
Then again, I’ve always noticed an inverse relation between the insight I got from an econ paper and the amount of math it contained.
tim 12.18.03 at 4:43 pm
“Literary theorists … do ‘have something specific, something generally agreed upon by their fellow practitioners, and something fixed behind the meanings of the words they use.'”
So you say. But others disagree. That’s exactly the point that has to be proved to show that what they are up to is not just obscurantism but proper use of a technical jargon.
“‘Deconstruct’ has a particular technical meaning coming from Derrida (depending on his claim that implicit assumptions necessarily lead to internal contradictions in any statement) – it isn’t just a synonym for ‘analyse’.”
Not according to one of the literary theorists’ defenders in this debate. The equation of deconstruct with analyze isn’t my claim.
dsquared 12.18.03 at 5:21 pm
Would you be prepared to stake much on the question of whether quantum physicists use the term “observation” in a consistent manner?
The point is that you’re looking at the wrong comparison with scientific jargon. Scientists use unambiguous language to describe things, which is possible because the things themselves are unambiguous. But when scientists describe what they do, they use terms like “observation”, “experiment”, “significant”, “effect” and a number of other terms which are as ambiguous and philosophically complex as you like.
The distinction is easy to miss, because in criticism, the “things” that you’re writing about are the same as “what you’re doing”.
Matt Weiner 12.18.03 at 5:47 pm
Force means something different in the technical jargon of physicists than in the common language. That’s the point of a technical jargon.
“Charm” and “strangeness,” even more so. But “solid” wasn’t meant to be a jargon term–it was supposed to mean exactly what it means in the common language. That it turned out not to, I think, indicates that physicists were able to work for a long time with a term that was not clearly defined.
As for “endurance” and “perdurance”–those are obviously jargon terms, defined stipulatively. But take “vagueness.” Philosophers don’t seem to be able to agree on what “vagueness” really means–ask Brian Weatherson, who has a very controversial definition. I guess pretty much everyone agrees that “bald” leads to issues of vagueness, and vagueness can be vaguely defined by saying “cases like ‘bald'”; but that leaves a whole lot of room for disputes about definitions.
Yet it’s pretty clear to me that analytic philosophers aren’t BSing (always), and often do make progress toward getting clear on concepts. It’s not clear to me that critical theorists aren’t BSing, partly because I haven’t read enough of them. (Foucault and Adorno certainly weren’t, though.) But I’m reluctant to condemn them without more study.
Tim F 12.18.03 at 8:08 pm
_So you say. But others disagree. That’s exactly the point that has to be proved…_
Why does it ‘have to be proved’? Why does literary theory have a special obligation to demonstrate that its technical terms are genuine, an obligation physics or economics or analytic philosophy apparently do not have?
Literary theorists are generally intelligent, serious people (and often, like Judith Butler, capable of writing very clear, jargon free, prose, when they wish to); they appear to be able to communicate with one another using their jargon. Surely the burden of proof is on those who claim that the jargon is meaningless?
If I were to claim that particle physics was pretentious jargon, would my disagreement mean that physics had to prove to my satisfaction that _all_ its jargon was meaningful?
tim 12.18.03 at 11:11 pm
“Surely the burden of proof is on those who claim that the jargon is meaningless?”
Perhaps. But there are plenty of people out there asserting it, and a few people out there testing it. Everybody and their uncle likes to mock the nonsense that comes out of the MLA conference. And this discussion started precisely because people have been assaulting the “bad writing” of theory. Whatever you think of Alan Sokal and his dirty trick, it did demonstrate the point. (Do you think that if you wrote a simulated jargon-filled particle physics paper and submitted it to Phys Rev Letters, you would make it past the referees?) It doesn’t help either that the defenders can’t make a clear case for the defense. (One of the editors of Social Text went so far as to say that Sokal’s paper was legitimate – that Sokal’s claim that it was a hoax was itself the hoax.)
This discussion isn’t going on in a vacuum, it’s going on in a landscape littered with pretentious sounding nonsense that is propagated under the name “Theory” and which is attacked as nonsense from many sides. So now is time for those theorists to answer the charges – if they can.
There isn’t the same skepticism about particle physics – and there is very likely a good reason for that.