Hugo Awards II

by Henry Farrell on July 26, 2010

One of the nominees this year (for best related work) is Farah Mendlesohn’s “The Inter-Galactic Playground: A Critical Study of Children’s and Teens’ Science Fiction”:http://www.amazon.com/gp/product/0786435038?ie=UTF8&tag=henryfarrell-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0786435038. I haven’t read the other contenders, but wanted to do a quick write up. It’s a fun book, with an argument that both contributes to genre studies and sets out Mendlesohn’s own position on the kind of books that sf writers for younger readers _ought_ to be writing.

First, the argument shorn of its normative message. Mendlesohn claims, on the basis of having read more or less everything in this genre that she could put her hands on (several hundred books), that we have seen important changes in the genre over the last several decades. Themes which used to play a significant role – interstellar exploration, encounters with new alien cultures and the like – have mostly faded away. Instead, we have a new literature which focuses more on personal relationships. While she does not put it in _quite_ this way, her argument implies, I think, that science fiction for teenagers used to be primarily a subset of science fiction, but has now primarily become a subset of literature for teenagers. It is driven by a set of concerns about what teenagers ‘should’ be thinking about – concerns about identity, relationships and so on.

And now the normative point. It would be putting it too strongly to say that Mendlesohn doesn’t _like_ this literature, but she is clearly worried about what it is missing. She argues strongly that the previous literature – which dealt with how young adults went out into the world rather than their familial and personal relations, was unashamed of its didacticism and was informationally dense – catered especially well for a particular kind of child. Children can be obsessive in their desire to devour information – and fiction which provides them with this, together with an emphasis on its usefulness, can set fire to their imagination. There is a strong implication that this also recreates the science fiction community, and that much modern science fiction does not serve as a gateway in the way that its predecessors did. However, Mendlesohn also points to the burgeoning of a new literature aimed at teenagers which recreates some of these virtues for the modern world. She’s particularly fond of Cory Doctorow’s _Little Brother_, which despite its modern themes is a “quite unashamed throwback to Heinlein”:http://firedoglake.com/2008/07/20/fdl-book-salon-welcomes-cory-doctorow-little-brother/.

I enjoyed the book a lot, found most of its claims highly plausible, and agreed with enough of its argument to find it pleasing, but not with so much as to find it unenlightening. The one major criticism I have is “cribbed directly from an old essay by Cosma Shalizi”:http://cscs.umich.edu/~crshalizi/weblog/404.html, and takes the form of ‘why oh why do genre studies scholars not want to math up a little.’ It should be noted that Mendlesohn shows more signs of caring about this stuff than most of her colleagues do – for example, she thinks a little about whether a survey that she conducted is at all a representative sample. I also suspect that if she _had_ written a more teched-up book (say, at the fairly minimal level that someone like me is capable of), it would have been difficult to find a publisher in the humanities. But if this largely absolves Mendlesohn herself, it pushes the blame back a level, to that of the academic discipline rather than the person representing same.

It is important to be clear that teching it up is not a substitute for critical judgment. There are many interesting aspects of text that absolutely resist quantification. But many of the claims that Mendlesohn is making _could_ be improved by more attention to the formal stuff, because they are implicitly quantitative ones. The larger part of her argument is that we see more of some things, and less of others than we used to. She also makes some more tentative causal claims – that the _reasons_ why we see more of this or that can be traced back to this or that social change. And both of these kinds of arguments could be _much improved_ with better methodology. More explicit quantification could better capture the changes observed from decade to decade, and allow for statistical testing to see whether apparent patterns were significant, or effectively indistinguishable from noise. Tests for inter-coder reliability would allow for greater confidence in the data.
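
(A minimal sketch of what I have in mind, in Python with invented counts rather than Mendlesohn’s actual data: a chi-squared test of whether the share of ‘relationship-centred’ books genuinely shifts across decades, and Cohen’s kappa as a check that two coders are applying the categories consistently enough for the counts to mean anything.)

# Illustrative only: hypothetical counts, not drawn from the book.
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Books per decade coded as [relationship-centred, exploration-centred].
counts = [
    [12, 48],   # 1950s
    [20, 40],   # 1970s
    [45, 15],   # 1990s
]
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-squared = {chi2:.1f}, p = {p_value:.3g}")  # a small p means the shift is unlikely to be noise

# Inter-coder reliability: two readers independently categorise the same twelve books.
coder_a = ["rel", "rel", "exp", "rel", "exp", "exp", "rel", "rel", "exp", "rel", "exp", "rel"]
coder_b = ["rel", "exp", "exp", "rel", "exp", "exp", "rel", "rel", "exp", "rel", "exp", "rel"]
print("Cohen's kappa:", round(cohen_kappa_score(coder_a, coder_b), 2))

A kappa close to 1 would mean the two readers are applying the categories consistently; anything much lower suggests that the coding scheme, rather than the corpus, is doing the interpretive work.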

Finally, the comparative method could be applied in interesting ways. For example – Mendlesohn talks a little bit about how change in science-fictional accounts of career paths has gone together with (and, if I understand her rightly, been driven by) broader social changes in the options available to young people (such as third-level education rather than vocational training). It would be interesting to see what the pattern was, if any, in the science fiction of countries like Germany, which have retained a considerable emphasis on apprentice-style training schemes.

I’d love to see more work that explicitly sought to think about problems and development of genre over time, using arguments from hypothesis testing. I also suspect that there are real opportunities opening up that could transform research. Google began some while ago to try to find researchers who wanted to test out arguments on the corpus of data that Google Books has gathered – this would allow a scholar like Mendlesohn, for example, to see whether the frequency of use of words like “family,” “son,” “daughter,” “mother” and “father” has increased or decreased in the genre over time. This would not substitute for critical acumen – but would serve as at least somewhat supportive evidence for arguments about change in the emphasis on family relationships over time.
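
(To give a sense of how little machinery this needs: assuming nothing more than a directory of plain-text files with the publication year at the start of each filename – a layout invented purely for illustration – the counting itself is a few lines of Python.)

# Sketch: frequency of family vocabulary per decade across a hypothetical corpus.
# Assumes files named like "1957_some_title.txt"; the layout is invented for illustration.
import re
from collections import Counter
from pathlib import Path

FAMILY_WORDS = {"family", "son", "daughter", "mother", "father"}

family_hits = Counter()   # decade -> hits on family vocabulary
total_words = Counter()   # decade -> total word count

for path in Path("corpus").glob("*.txt"):
    year = int(path.name[:4])
    decade = year - year % 10
    words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
    total_words[decade] += len(words)
    family_hits[decade] += sum(1 for w in words if w in FAMILY_WORDS)

for decade in sorted(total_words):
    rate = family_hits[decade] / total_words[decade]
    print(f"{decade}s: {rate:.5f} family words per word of text")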

I likely owe Mendlesohn an apology for using her book as an excuse to trot out one of my own hobby-horses about the study of literature – but I do think that a little more quantification, in gently increasing doses, could do wonders for the field that she is contributing to with this very interesting book. It would also relieve some pressures on the scholar. Mendlesohn makes it clear that the majority of the books she read were somewhere on the spectrum between utterly banal and truly godawful. While she draws some lessons from these books, her active discussion concerns the books she liked, or at least found interesting. Quantification would allow scholars like her to present what is valuable about these books (the corpus of data that they represent about broad trends in the field), without having to dwell on their lack of imagination, tortuous prose style etc. And it might reveal some _interesting_ findings (the broad patterns of genre may look quite different in aggregate from the field’s peaks of accomplishment, which may in turn say things about how ideas percolate etc).

{ 41 comments }

1

Jared 07.26.10 at 11:32 pm

“Mendlesohn makes it clear that the majority of the books she wrote were somewhere on the spectrum between utterly banal and truly godawful.”

This is either astounding modesty, or a mistake on your part.

Otherwise very interesting.

2

JSE 07.27.10 at 1:56 am

Some people in literary studies who are using quantitative methods in much the way you wish:

my Wisconsin colleague Robin Valenza;

Franco Moretti at Stanford.

3

Western Dave 07.27.10 at 2:42 am

From my own reading, it was Orson Scott Card’s Ender’s Game that seemed to be some sort of turning point. The original story that ran in Omni was very much old school and the book very much new school. Do individual works get attention in her work?

4

Matt Austern 07.27.10 at 4:27 am

Omni? My memory (which Wikipedia, FWIW, agrees with) is that it was published in Analog.

I wouldn’t particularly see it as a turning point in YA science fiction, because I don’t think it was ever published as YA.

5

ajay 07.27.10 at 11:27 am

“Mendlesohn makes it clear that the majority of the books she wrote were somewhere on the spectrum between utterly banal and truly godawful.”

From the context, I’m guessing that the “wrote” that he wrote should be read “read”. The next sentence is “While she draws some lessons from these books, her active discussion concerns the books she liked, or at least found interesting” which seems to confirm this.

I am filing this away under WORST TYPOS EVER, along with the infamous Guardian correction about Wolverhampton Wanderers Football Club. (“In our interview with Sir Jack Hayward, the chairman of Wolverhampton Wanderers, page 20, Sport, yesterday, we mistakenly attributed to him the following comment: ‘Our team was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.’ Sir Jack had just declined the offer of a hot drink. What he actually said was ‘Our tea was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.’ Profuse apologies.”)

6

ejh 07.27.10 at 11:38 am

But is it really to be classed as a typo?

7

Aulus Gellius 07.27.10 at 12:19 pm

In Classics, it’s actually reasonably common to use word counts, metrical statistics, etc., though they aren’t often used in very sophisticated ways, understanding of statistics not being all that widespread.

8

ajay 07.27.10 at 12:30 pm

6: thinko, possibly.

9

ejh 07.27.10 at 1:41 pm

Jack Hayward, by the way, is and was a bastard, as well as a fiercely patriotic tax exile.

And, amusingly, they were indeed the worst team in the Premier League.

10

ajay 07.27.10 at 2:01 pm

Blimey, ejh. Thanks for that. Not being a football follower, I’d never heard of Hayward until he popped up in the Corrections and Clarifications column. (I’d barely heard of Wolverhampton Wanderers.)

This is funny, on his support for Thorpe’s Liberal Party in the 70s:

why on earth did he give so much money to the wishy-washy Liberals? “Well, I used to say, ‘I don’t want anything to do with Europe.’ And Jeremy used to say, ‘My dear fella, if we joined Europe, with our expertise on how to run an Empire, we’ll be in charge of Europe! We will be the master race!’ And I would say: ‘How much do you want?’”

He does sound barking mad, but, I suppose, not actually sociopathic (unlike, say, van Hoogstraten).

11

Maria 07.27.10 at 2:46 pm

Thanks, JSE. I gave Henry a birthday present of Moretti’s wonderful Atlas of the European Novel a few years back, and he has been on heavy rotation since then, I believe. Numbers + data visualization + 19th century novels = pretty awesome indeed.

12

chris 07.27.10 at 3:20 pm

I wouldn’t particularly see [Ender’s Game] as a turning point in YA science fiction, because I don’t think it was ever published as YA.

It might not have been published as YA, but it was read as YA, I think.

I wonder how many of the best/most influential YA works weren’t intended that way?

I once saw _The Warrior’s Apprentice_ (which I presume is old enough to qualify as old school in Mendlesohn’s analysis) on a YA rack at my local library, and wondered if I should tell the staff to move it. I doubt if anyone who thinks about what teens ought to be reading about decides that they ought to be reading about… well, do you mind spoilers here? Let’s just say the events that happen to and around Elena, which are a long way from the “fitting in with the right crowd” of stereotypical bad teen fiction.

Although, in a sense, you could say that YA is supposed to be about growing up, and growing up is often about finding something more important to concern yourself with than obsessing about your social standing… in which case there are some awfully tall children running around, but a lot of definitions of maturity lead to that result. The events of _tWA_ are a growing experience for both main young adult characters, but not in the way that a typical teen is likely to experience.

13

Henry 07.27.10 at 3:45 pm

Error corrected, thanks. We have no power at home, and hence my blogging has been conducted in a hurried fashion from the Friendship Heights Panera.

14

Henry 07.27.10 at 3:46 pm

And JSE – worth reading the Cosma essay linked to in the post if you’re interested in this stuff (it is a response to Moretti).

15

magistra 07.27.10 at 7:03 pm

If you have 200-300 novels/datapoints to play with and you’re trying to divide this into several subgenres, aren’t you already near having too few examples in each category to be able to say any hypothesis is statistically significant, unless the clustering is so blatant you can spot it ‘by eye’ anyhow? And aren’t you getting to the stage where a single arbitrary decision about one novel’s categorization will change the outcome?

I don’t know about literary scholars, but I have to say that if historians only accepted hypotheses for which we had more than 90% certainty, we’d write a hell of a lot less. There’s also a limit to how many times you can write: ‘There may have been a change, but we have too little data to confirm this’, without people deciding that your funding ought to be cut.

16

Western Dave 07.27.10 at 8:23 pm

My bad, I read Unaccompanied Sonata in Omni along with A Thousand Deaths, which led me to a collection of short stories of his (now out of print) that had Ender’s Game in it. I haven’t thought about Unaccompanied Sonata in a long time, which is a shame because it is a much better story than the original Ender’s Game story is. If Ender’s Game wasn’t originally published as YA it certainly became that. By the time I entered college in the Fall of ’85, everybody I went to college with at Swarthmore who admitted to reading SF (ie: damn near everybody) had read it and almost everybody identified with Ender. I now have a hard time convincing my geekier students (who are in HS) to read Speaker for the Dead and Xenocide because they see Card as exclusively a YA author. Plus there’s the whole batshit crazy since 9/11 thing.

17

chris 07.28.10 at 1:36 pm

There’s also a limit to how many times you can write: ‘There may have been a change, but we have too little data to confirm this’, without people deciding that your funding ought to be cut.

This seems rather close to saying that the funding sources are demanding academic irresponsibility and lower standards.

18

magistra 07.28.10 at 2:26 pm

Chris@17 – what I meant is that in historical studies (which includes studies of past literature) you have to work with the data points you have, not the data points you’d like to have. Sophisticated statistical techniques that work well if you’ve got thousands of observations are often no use if all you’ve got is 50 examples, and you simply can’t get any more. If you stick to simple statements of proportions (for example, 20 out of 36 books before 1900 are X, as opposed to 5 out of 30 after 1900), you can often demonstrate useful information about trends, even though it may not be statistically ‘significant’.

19

ajay 07.28.10 at 2:51 pm

If you stick to simple statements of proportions (for example, 20 out of 36 books before 1900 are X, as opposed to 5 out of 30 after 1900), you can often demonstrate useful information about trends, even though it may not be statistically ‘significant’.

That difference is, in fact, statistically significant. You don’t necessarily need thousands of data points to get a statistically significant result – 50 can be more than enough. Ten can be enough, if the hypothesis you’re testing is “tossing this coin will produce heads and tails with equal frequency”.
That’s one point.
The other is that if your result isn’t statistically significant, then you shouldn’t really use it to demonstrate any useful information about a trend at all, because you can’t be sure that the trend is there. You may think it’s there, based on instinct – but humans are built to spot patterns, and sometimes those patterns aren’t actually there (pareidolia).

Say you’re writing about Mr Earbrass’ novels, and you start thinking “The Great War really put Earbrass off foreigners. After 1918 he never wrote another novel with a foreign setting.”
Which is true. Is it a trend? Well, actually, he only wrote four novels in total. Of the three he wrote before 1914, two of them had a foreign setting. The one he wrote in 1922 didn’t. So it’s not statistically significant.
Fortunately, there are other sources for how Mr Earbrass thought about foreigners: maybe you could look at his letters or his memoirs or something. Unlike a lot of scientists, you don’t have to rely solely on statistical evidence. If you find that his diary contains lots of stuff about how much he hates foreigners, starting from 1914, then fair enough. But you can’t use a change in his novels’ settings as evidence (it’s OK to use them as an example, though). There’s nothing useful you can draw from that fact.
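
(If you want to check that with a couple of lines of Python – Fisher’s exact test being the usual tool for tables this small – the two examples above come out like this:)

# Fisher's exact test on the two examples above.
from scipy.stats import fisher_exact

# 20 of 36 books before 1900 are X, versus 5 of 30 after 1900.
_, p_books = fisher_exact([[20, 36 - 20], [5, 30 - 5]])
print(f"books example: p = {p_books:.4f}")        # comfortably below 0.05

# Earbrass: 2 of 3 pre-war novels had a foreign setting, 0 of 1 post-war novel did.
_, p_earbrass = fisher_exact([[2, 1], [0, 1]])
print(f"Earbrass example: p = {p_earbrass:.2f}")  # nowhere near significance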

20

Cosma Shalizi 07.28.10 at 3:25 pm

You can also say at what level trends (etc.) are statistically significant, which indeed people who do hypothesis tests really should be doing anyway (5% got enshrined because 1.96 is about 2…). With a little more work you can give confidence intervals for how strong the trends (etc.) are, which again one should do anyway. This might indicate that the evidence for (or against) a trend is quite weak, but weak evidence is evidence.
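
(For instance – a minimal sketch, using the 20-of-36 versus 5-of-30 example from upthread and a plain normal-approximation interval, which is rough for counts this small but fine for illustration:)

# 95% confidence interval for the difference between two proportions.
import math

p1, n1 = 20 / 36, 36   # share of books that are X before 1900
p2, n2 = 5 / 30, 30    # share of books that are X after 1900
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
# The interval excludes zero, which agrees with the significance test,
# but it also says how large the shift plausibly is, which is the more useful number.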

21

bianca steele 07.28.10 at 5:39 pm

The results of a statistical analysis can be interesting even if they’re not significant. Peter Farey, a former corporate instructor of statistical techniques and an advocate of the theory that the real author of Shakespeare’s plays is Christopher Marlowe, has done some statistical analysis of Marlowe’s and Shakespeare’s plays and re-dated some of the plays based on the general trend, incidentally comparing them to the datings of several prominent Shakespeare critics. Farey’s argument is that writers–due to some tic of human psychology–always progress in certain ways throughout their careers, and that Shakespeare’s writing (properly dated) picks up where Marlowe’s left off. This work relies on several assumptions and is not convincing. Farey does not consider alternative hypotheses for the apparent trend he has found (for example, his analysis might appear to support the “social energies” theory of which the New Historicism is fond), and I do not recall seeing any indication that he performed any confidence level checks.

Farey’s work would seem to be interesting if: (a) you are one of his fellow Marlovians and you want to talk up a comrade and the work he did that supports your cause, (b) you do the same kind of analyses and make the same kinds of arguments, and you want to take support away from people who criticize those, and (c) you think the spreadsheet comparing Farey’s dating to Bloom’s, et al., is worth the verbiage surrounding it, or at least (as I do) worth ignoring that verbiage. There is some scholarly work on dating and authorship using statistics, and I assume it is better, but the books are ridiculously pricey and clearly written for an entirely internal audience.

22

magistra 07.29.10 at 7:13 am

I’ll try and explain a bit more about my concerns, but I’m conscious that it’s more than 20 years since I did the limited amount of statistics I studied, and that I’m coming from a background now (early medieval history) that is used to working with only scraps of evidence.

To take Ajay’s hypothetical case of Mr Earbrass, he is assuming 1) that the variable he is looking at can be accurately classified, 2) that this variable is a good proxy for the real information we want, 3) that any change in behaviour is abrupt and 4) that we have extra data about Mr Earbrass easily accessible. All of these are often dubious assumptions when looking at historical or literary data. To take no. 4: we know about a number of authors (medieval ones in particular) only from their own texts. On no 1: most classification isn’t as definite as ‘foreign setting’ or not. What about the one that’s mostly in Britain, but has one scene in France, and does it matter if that’s the most important scene or not? As Cosma has pointed out, Franco Moretti’s ideas on genre rely on sometimes arbitrary decisions, such as that Jacobite literature and anti-Jacobite literature are two separate genres, not one ‘Jacobite debate’ one. And I suspect that it’s not easy to decide consistently whether a SF novel is ‘about’ personal relationships or not. On no 3, literary fashions may be an area where there are abrupt changes in taste: one year everyone is reading Harry Potter, the next everyone is reading Twilight. But I suspect for most historical processes, change is a matter of slow rise or decline.

Given these problems, if you’re using statistical techniques, it’s tempting to start trying to find other things you can measure, even if they’re not actually very good proxies (no. 2). So Henry suggests that you count the number of uses of particular family terms as a measure of whether SF novels are ‘about’ personal relationships or not, though that won’t pick up a book where the narrator talks incessantly about her brother by his given name, for example, or one where parents are now called ‘genkin’.

But suppose you have got a reasonably robust set of data and you’ve done your statistical analysis. What then? One possibility is that you find that your hypothesis about Mr Earbrass is not statistically significant, but nor is the alternative hypothesis (that he was unaffected by the Great War). Now, according to Ajay you can use this as an example, but not as evidence, but that’s not a distinction that most articles in the humanities make. It is going to sound strange to read ‘all Earbrass’s novels with a foreign setting were written before WW1 but we cannot conclude anything significant from this’. And in practice, readers are going to draw their own conclusions from the data, anyway.

Suppose, on the contrary, that you do find some data that has some statistical significance. If you have 90% statistical significance, or higher, then that is fairly compelling evidence that you were right that Earbrass’s characters are disproportionately born illegitimate. But suppose you find with 70% significance that Earbrass’ characters are more likely than normal to be born illegitimate. Cosma points out that weak evidence is still evidence, but what can the reader do practically with that result that they can’t do with the same data presented without statistical testing? How should you react differently to evidence with 70% or with 80% significance?

Finally, on the matter of funding, with which I started. If you go to a research committee (or your supervisor etc) and say: ‘I have found no statistically significant trends in Earbrass’ use of metaphors in the novels, but nevertheless I want more money/time to test all his letters in case those contain them’, do you think you’re likely to get a positive response? It’s hard enough to get negative results published in the sciences, where at least you’re removing blind alleys for future researchers, but there is even less interest in negative results for the humanities. If, after exhaustive comparisons, you find that Earbrass uses the same number of quotations from Shakespeare as other writers from his period, this is only of much interest if a leading article in the field is ‘From Shakespeare to Earbrass: the golden thread of English pastoral’.

I’m not denying that you can do useful statistical studies of literature or history, but I suspect that if you do, it has to be designed in from the start of the research and that there are a lot of questions statistical techniques are just not good at exploring. In particular, I think the kind of hybrid research techniques (part-statistical, part not) that Henry wants are likely to prove unsatisfactory from both sides: a lot of additional work for not many definite results produced.

23

ajay 07.29.10 at 8:58 am

But suppose you have got a reasonably robust set of data and you’ve done your statistical analysis. What then? One possibility is that you find that your hypothesis about Mr Earbrass is not statistically significant, but nor is the alternative hypothesis (that he was unaffected by the Great War).

Well, the format is going to be “the evidence is not strong enough for us to reject the null hypothesis that he was unaffected by the Great War”, but yes, basically.

Cosma points out that weak evidence is still evidence, but what can the reader do practically with that result that they can’t do with the same data presented without statistical testing? How should you react differently to evidence with 70% or with 80% significance?

The advantage is, really, that statistical analysis is a more rigorous way of thinking about things. As I noted, it helps you avoid the tendencies towards pareidolia and recency bias and all the other flaws that creep in when you try to do complex analysis with a souped-up monkey brain.

What you can do with evidence that is significant only at, say, the 70% level is write it up and say “There is definitely some indication that X (significant at 70%), but it’s not possible to say definitely whether or not X. More research needs to be done. Possible new areas of research A, B or C could produce more evidence to support or disprove X with certainty. However, we have found no evidence that Y.”

Finally, I am not at all sure about arguments that follow the line “I don’t want to use this more rigorous technique, because it might prove that I’ve been talking rubbish, and then I would find it difficult to get funding. Existing techniques are vague enough for me to be able to convince people that I am not talking rubbish, or at least for them to give me the benefit of the doubt, and thus allow me to get more grants and continue to live in the style to which I have become accustomed.”

24

alex 07.29.10 at 9:55 am

When politicians make policy decisions on the basis of rigorous statistical analysis, then perhaps scholars will feel obligated to quantify their results defensibly. So long as prejudice, fudging, superstitious claptrap and outright twaddle are the essential bases of public debate, scholars can be content that they at least know what the concept of ‘evidence’ means, even if they aren’t always as strict as they could be in applying it.

25

magistra 07.29.10 at 11:33 am

Ajay@23 – I’m not saying that you should get funded if you’re talking rubbish, and I’m aware that descriptive statistics can help to lessen perceptual distortion. What I’m saying is that in some fields there will never be enough quantifiable evidence to prove or disprove your argument with the kind of certainty that a scientist would like. If I have four texts that are relevant to my study of the ninth century, however much extra funding I get will not change that. Often there simply isn’t extra comparable data to be found that will add statistical significance. There may be other less good proxy data, but I don’t see how you can combine 75% confidence from dataset Y with 70% confidence from dataset X to get 90% confidence overall without introducing far too much uncertainty. However, I can still do a qualitative study of my four texts, which I think can provide useful information.

If you say that historians and literary scholars should normally do quantitative as well as qualitative studies, which is what Henry is implying in the OP, you are often effectively saying that entire historical periods or particular topics should not be studied, because they cannot produce ‘scientific’ conclusions in the way you would like. I don’t mind cliometrics done well; I do mind when cliometrics is found not to work well for a particular research topic and the conclusion is that the research topic is therefore not worth studying.

26

ajay 07.29.10 at 12:58 pm

Fair enough – but I think there’s a difference between the entirely acceptable path of saying “I can’t do a quantitative study of this for whatever reason [eg lack of evidence, basic unsuitability of the topic], here are the tentative conclusions of a qualitative study, but you should be aware that they also support alternative interpretations” and saying “I am not going to do a quantitative study, even though I could, because I am worried that it might disprove my hypothesis, and if I produce too many negative results I won’t be able to get any more funding”.
The latter is what I read into
“If you go to a research committee (or your supervisor etc) and say: ‘I have found no statistically significant trends in Earbrass’ use of metaphors in the novels, but nevertheless I want more money/time to test all his letters in case those contain them’, do you think you’re likely to get a positive response? It’s hard enough to get negative results published in the sciences, where at least you’re removing blind alleys for future researchers, but there is even less interest in negative results for the humanities.”

27

Henry 07.29.10 at 12:58 pm

bq. If you say that historians and literary scholars should normally do quantitative as well as qualitative studies, which is what Henry is implying in the OP

Magistra – dunno where I imply this at all. What I’m making is a much weaker argument – that where you are trying to figure out quantitative trends, and you have a reasonable amount of quantitative data (the book in question has, I think, evidence drawn from <500 books) you should consider using quantitative techniques, with some attention to significance testing. I don’t (at least as far as I can see) even _hint_ that it is not worth paying attention to periods or subjects where you can’t gather sufficient quantitative evidence. Indeed, when I say

bq. It is important to be clear that teching it up is not a substitute for critical judgment. There are many interesting aspects of text that absolutely resist quantification.

I would have thought that this was pretty explicit evidence that my beliefs are exactly the contrary of those that you seem to think I have. I would love to see more serious quantitative study of literature, because I think that you could learn a lot. I also suspect that the reason why you see so little is because the field’s self-definition is one that effectively rejects this sort of work. But I would _never_ want to see all study of literature be conducted using statistical analysis etc.

28

chris 07.29.10 at 1:25 pm

However, I can still do a qualitative study of my four texts, which I think can provide useful information.

I think ajay’s point, to which I am somewhat sympathetic, is that if you have that little data you can’t honestly be sure that what you are looking at is information at all, let alone useful information. That’s rather the point of having standards of evidence.

The prospect of having entire fields of study essentially replaced by “There is not enough surviving evidence to be certain what happened.” doesn’t exactly fill me with joy, but if there in fact *isn’t* enough surviving evidence to be certain what happened, why delude ourselves otherwise? Humans can spin plausible-sounding conjectures from practically no data whatsoever until we’re blue in the face, but only at the risk that the results may be as real as Thor (a great story to explain those loud noises coming from overhead, but as it turned out, there was not a lot of truth in it).

29

Chris Williams 07.29.10 at 2:05 pm

I did some of this once, and (a) I think it worked even though (b) I kept it as impressionistic as possible. My caveat ran as follows

“I’ve refrained from processing the raw results in this paper – I could give you all manner of percentage breakdowns, but this is likely to lead to a spurious air of accuracy in the numbers, which can be avoided if I stick to formulations like ‘about one in three’. For reasons that I will make clear below, there’s so much noise in the data that the only clear conclusions that can be drawn from it are those of the order of changes from (say) ‘about one in six’ to ‘about one in three’. ”

http://www.open.ac.uk/Arts/history/obp-policing.htm

PS I’m sure that Baron Ogdred, from the ‘frozen pavillion’ scene of Earbrass’s ‘Unstrung Harp’, is of German extraction.

30

magistra 07.29.10 at 2:07 pm

Henry – I’m sorry if I’ve misinterpreted your views on the topic. I read the initial information you gave on the dataset as making it intrinsically unlikely to deliver the kinds of certainty you wanted, and so not a good example to use. If you have ‘several hundred books’ and 5 or 6 different themes you’re getting potentially to 10 or fewer examples in each cell of a contingency table. Given that there are always limits on the consistency of subject indexing even with one indexer (there have been a lot of studies on this by librarians) aren’t you getting to a point where a different decision on the categorisation of one or two books might change your outcomes? Or is my limited statistical knowledge letting me down here? On the other hand, if you’ve got 500 books and 2 or 3 very clearly separated categories, then I can see more point in the exercise.
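
(To put invented numbers on the worry: sixty books split across three themes in two periods, and then the same table after re-coding just two of them.)

# How sensitive a smallish contingency table is to re-categorising a book or two.
# Counts are invented for illustration: rows = two periods, columns = three themes.
from scipy.stats import chi2_contingency

original = [[17, 10, 3],
            [8, 11, 11]]
# The same table after re-coding two early-period "theme 1" books as "theme 3".
recoded = [[15, 10, 5],
           [8, 11, 11]]

for label, table in (("original", original), ("recoded", recoded)):
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{label}: chi-squared = {chi2:.2f}, p = {p:.3f}")
# With these made-up counts the first table comes out 'significant' (p is about 0.02)
# and the second does not (p is about 0.11), even though only two books have moved cells.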

One of the problems is that if you’re talking about trends, which is what historians do, you’ve almost always got just enough data that you could in theory use quantitative methods, even when it would in practice be a stupid idea to do so. (As Ajay’s example showed, you can in theory do analysis with 4 datapoints). So I’m concerned that either researchers have to start routinely putting in a long explanation of why they haven’t used quantitative methods, or they feel pressured to use them inappropriately.

Because I think it’s unrealistic to pretend that research committees, funders etc don’t have preferred methodologies and that ‘scientific’ methods don’t tend to have more prestige, at least when you get multidisciplinary committees. History seems to me currently quite open to both quantitative and qualitative studies, but I think literary studies may fear that if you let the ‘quants’ in, everyone else will eventually be driven out. If you look at what’s happened to economics and economic history, that’s not an entirely unjustified fear.

31

ajay 07.29.10 at 2:50 pm

Given that there are always limits on the consistency of subject indexing even with one indexer (there have been a lot of studies on this by librarians) aren’t you getting to a point where a different decision on the categorisation of one or two books might change your outcomes?

Yes, but that doesn’t mean you shouldn’t do it. Scientific analysis can depend quite a lot on subjective decisions – ethology, for example, does a lot of statistical analysis of behaviour, and whether a particular movement by the animal counts as a “tail elevated” is quite often a matter of interpretation.

I think the point I’m trying to make is that, if the question you are investigating is one that in principle could be accessible to statistical methods, then you should either use them or make it clear why you aren’t. Further, if your conclusion is one that could be checked using statistical methods, then you should use them to check it; and if they won’t back it up, you should think very seriously about how you’ve reached the conclusion in the first place.

This is just basic good academic practice, of the sort that everyone should be following anyway, whatever their subject.

32

ajay 07.29.10 at 2:51 pm

PS I’m sure that Baron Ogdred, from the ‘frozen pavillion’ scene of Earbrass’s ‘Unstrung Harp’, is of German extraction.

Spoilers, dude. Not everyone’s read TUS.

33

ajay 07.29.10 at 2:51 pm

TUH, even.

34

Ray 07.29.10 at 3:20 pm

While she does not put it in quite this way, her argument implies, I think, that science fiction for teenagers used to be primarily a subset of science fiction, but has now primarily become a subset of literature for teenagers.

It may be just the SF I read, but I didn’t think “SF for teenagers” existed as a category 50 years ago. There was just SF, which was suitable for teenagers. All/most of the famous old school SF writers I either read when I was a teenager (or younger), or came to later and thought “hmm, I might have enjoyed this when I was a kid, but not now”.

Even today, there is a YA SF marketing category, sure, but a lot of the top-selling SF – Star Wars novelizations, Robert Jordan, Baen MilSF – is not being kept out of the hands of teenagers.

I’m wondering how she drew her categories.

35

Chris Godfrey 07.29.10 at 3:44 pm

I’m pretty sure it did – Blast Off at Woomera was published in 1957 and The Future Took Us was published in 1958, both pretty clearly aimed at a young audience.

36

chris 07.29.10 at 5:00 pm

It may be just the SF I read, but I didn’t think “SF for teenagers” existed as a category 50 years ago.

Now that you mention it, I’m not sure literature for teenagers did either. I mean, if you define it by angst and obsession over interpersonal relationships, then _Wuthering Heights_ qualifies, and possibly half of Austen, but nobody puts them in a genre ghetto, even now that the genre has been identified and marked off.

Prior to, say, Judy Blume, was anyone identifying a category of literature for teenagers and describing it as something different from unmarked literature?

37

magistra 07.29.10 at 6:18 pm

Prior to, say, Judy Blume, was anyone identifying a category of literature for teenagers and describing it as something different from unmarked literature?

A search on JSTOR’s library journals (which go back as far as 1931) shows that the Library Association was already then publishing a list of ‘Books to read’ for ‘young readers’ (aged 12-18). I suspect that pretty much as soon as professional library associations got going in the late 19th century, you’d start getting lists for ‘adolescents’ like that. Whether what was on those lists would correspond at all to what we think of as teenage literature today is an entirely different matter.

38

magistra 07.29.10 at 6:54 pm

ajay@31 – on the point about errors in categorization. I know that any categorization always has errors, but if you assume that errors are random in direction, in larger data sets they tend to cancel each other out. If you’ve got two categories and you have 10% category errors, with 10 data points what was actually 3 X and 7 not X might become 4 X and 6 not X, which makes a big difference. In contrast, if you’ve got 100 data points, what was 32 X and 68 not X might become 35 X and 65 not X, but it’s unlikely to become 42 X and 58 not X.
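
(A quick simulation makes the same point – the parameters are entirely made up: a true split of 30% X, and a 10% chance that any individual book gets put in the wrong category.)

# How much a 10% miscoding rate distorts an observed proportion at different corpus sizes.
import random

random.seed(1)
TRUE_SHARE_X = 0.3   # true proportion of books in category X
ERROR_RATE = 0.1     # chance that any single book is coded into the wrong category
TRIALS = 2000

for n in (10, 100, 1000):
    total_deviation = 0.0
    for _ in range(TRIALS):       # repeat the whole "coding exercise" many times
        observed = 0
        for _ in range(n):
            is_x = random.random() < TRUE_SHARE_X
            if random.random() < ERROR_RATE:   # the coder flips the category
                is_x = not is_x
            observed += is_x
        total_deviation += abs(observed / n - TRUE_SHARE_X)
    print(f"n = {n:4d}: average gap between observed and true share = {total_deviation / TRIALS:.3f}")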

More generally, I’m worried by your claim that even if you have tiny datasets, you should do statistical analysis on them, and that if you find they have no statistical significance, then it’s illegitimate to draw any conclusions at all about the data. To go back to the hypothetical Mr Earbrass: his first two novels are about fashionable Parisian society and don’t mention Catholicism at all. His last two novels are respectively about a cardinal’s relationship with his illegitimate son and a nun’s struggle with doubt. Is it permissible to refer to ‘Earbrass’ late fascination with Catholicism’?

According to you, there is only weak evidence that he isn’t writing at random about Catholicism (there’s one chance out of 8 it’s a random sequence, if I’ve got the figures correct). You’d either have to find evidence from elsewhere (but the second Mrs Earbrass lets no-one near her husband’s letters) or you have to derive essentially arbitrary tests from data within the book: ‘let us consider how many times Earbrass uses the word ‘Pope’ or ‘confession’ in each of his 4 books and the statistical significance of this’.

Or, you can use the empirical proposition that the themes of authors’ successive books aren’t randomly distributed, but tend to be correlated (there’s a trajectory to people’s writing), and that therefore it is likely that Earbrass did become fascinated by Catholicism. But this proposition is very hard to quantify in any way that would allow you to feed it into a statistical model.

I’d be quite happy with a rule of thumb for academic research that said something like ‘if you have more than 100 datapoints, you ought to be thinking about doing statistical analysis or explaining why you’re not going to do it’. But one that says ‘ if you have 4 data points, you ought to be doing statistical analysis or explaining why you’re not’ strikes me as completely unrealistic.

39

roac 07.30.10 at 4:26 pm

Ray, at 34, may be right in the narrow sense that the publishing industry didn’t have a separate category for “Teenager” 50 years ago (do they now? I think they still say “Young Adult”). But it is absolutely the case that SF was sorted into Adult and Juvenile. I know, because I was reading the stuff at the time. The basis for the classification, mostly, was that the juveniles had teenage protagonists, and sex was kept at arm’s length.

Heinlein of course wrote lots of juveniles, and to the extent we have to admit that Heinlein has merit, we have to extend the concession to those books as well. Tunnel in the Sky, The Rolling Stones, and Have Spacesuit Will Travel are titles that come to me off the top of my head.

Heinlein was one of my two favorite SF authors when I was a kid. Rereading those books, I can understand why, which is not true of Andre Norton, my other favorite. The plot of one of her juveniles, incidentally, is cribbed from Xenophon’s Anabasis. I think the title was Star Guard, but I could be wrong.

I read lots of books by other authors, which I mostly recognized as crap even then. Even Asimov wrote juveniles, about a hero called, IIRC, “Lucky Starr.”

40

Farah 07.31.10 at 2:32 pm

Thank you for a fascinating discussion.

Two quick things:

I’m a huge fan of Moretti, and was horribly aware that I lacked the skills and training for the directions the book took me in.

Fifty years ago there were plenty of sf books for teens and children and they were part of a separate publishing category. Books for teens were published under the term “juvenile” by publishers such as Scribners and Blackie (1890-1991). For those who think they didn’t exist: I didn’t realise that most of them existed either until I started trawling the back rooms of second hand shops. I began the project with a tiny number of titles and watched the collection mushroom. Their most common manifestation (and I talk about this in the book) was as career books.

41

Henry 08.02.10 at 10:14 pm

Farah

As I said, I probably owe you a bit of an apology for using your excellent book to ride this particular hobby horse – but it just seemed to me like exactly the kind of project where a little bit of quants would have been great (even if only to organize the information from all the duds that you had to read).

Comments on this entry are closed.