Solitaire mysteries

by Henry Farrell on October 21, 2003

I’ve just finished reading Bruce Schneier’s _Beyond Fear_, which I recommend to anyone who’s interested in security issues after 9/11. Schneier’s a famous cryptographer – if you’ve read _Cryptonomicon_, you’ll be familiar with his “Solitaire”:http://www.schneier.com/solitaire.html code – but over the last few years he’s become more and more interested in the human side of security systems. And this is where _Beyond Fear_ excels – it describes in clear, everyday language how we should think about security in the modern world and why even the most sophisticated (especially the most sophisticated) security systems are likely sometimes to fail.

Unsurprisingly, _Beyond Fear_ talks at length about the security choices made after 9/11. It’s far from complimentary about most of them, but it doesn’t just provide a list of entertaining “stupid security award”:http://www.privacyinternational.org/activities/stupidsecurity/ style gotchas. Schneier talks about the political and technical processes that produce manifestly bone-headed policies – political bargains struck by actors with their own agendas; the perceived need for “security theater” to reassure people that something is being done to protect their safety; the manifest impossibility of foolproofing any reasonably complex system. He stresses that security involves trade-offs rather than perfect solutions. Not only that; he provides some useful ways to think about when these trade-offs do, and do not, make sense. Schneier’s take is interesting to those, like me, who usually think about new security measures in terms of how they hurt privacy; if he’s right (and he has some good arguments and evidence to back him up), many of these measures don’t even make sense in their own terms.

The book is aimed at non-professionals, which means that sometimes the tone is a little _too_ folksy and straight-talking for my liking. Schneier uses a couple too many quasi-topical instaquotes from famous people to try to sweeten the pill of his (deadly serious) argument and prescriptions. But _Beyond Fear_ still has a lot to commend it, even to those who already know something about the issues that Schneier is writing about. He has a very nice discussion of how complexity and emergent phenomena afflict security systems, laying out the main ideas without lapsing into jargon. His discussion of the relationship between detection and prevention strategies is worth the price of the book on its own. It lays out in a simple yet devastating way the reasons why Diebold-style electronic voting machines are a bad idea.

bq. After the 2000 U.S. presidential election, various pundits made comments like: “If we can protect multibillion-dollar e-commerce transactions on the Internet, we can certainly protect elections.” This statement is emphatically wrong – we don’t protect commerce by preventing fraud, we protect commerce by auditing it. The secrecy of the vote makes auditing impossible.

Exactly right – and a lovely insight to boot. If you’re at all interested in these topics, you need to read this book.
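
To make the auditing point concrete, here’s a toy sketch in Python – my own illustration, not Schneier’s, and the ledger layout is entirely made up. Commerce leaves linkable records on both sides of a transaction, so fraud surfaces when the two sets of books are reconciled; a secret ballot deliberately severs the link between voter and vote, so there is no second record to reconcile a tampered tally against.

bc.. # A toy sketch of the auditing point (my illustration, not Schneier's;
# the ledger contents are invented). Bank and merchant each record the
# same transactions under the same IDs, so a tampered amount is
# detectable after the fact by reconciling the two sets of books.

bank_ledger = {"tx1": 100, "tx2": 250, "tx3": 75}
merchant_ledger = {"tx1": 100, "tx2": 2500, "tx3": 75}  # tx2 altered

def reconcile(a, b):
    """Return the transaction IDs on which two ledgers disagree."""
    return sorted(tx for tx in set(a) | set(b) if a.get(tx) != b.get(tx))

print(reconcile(bank_ledger, merchant_ledger))  # ['tx2'] -- fraud surfaces

# A ballot box, by design, holds no link back to any voter, so there is
# no independent record against which to reconcile a silently altered
# electronic tally -- hence the case for a voter-verified paper record.
ballot_box = ["Smith", "Jones", "Smith"]  # anonymous by design

p. The point generalises: prevention can fail silently, but auditing only works where two independent records of the same event exist – and that is exactly what the secrecy of the ballot rules out.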


1

Nasi Lemak 10.21.03 at 11:42 am

Hm. Blackwell’s have not yet delivered my copy, and I really ought to wait until I’ve read it before replying to this post (i.e. July 2007 on best estimate), but:

From what I’ve read of Schneier in Crypto-Gram, he seems to be pretty mechanistic in his thinking about security, i.e. only things that effect a real reduction in the risk of some defined bad outcome are worthwhile. He uses words like “placebo” to describe policies which make people think they are more secure but which do not in fact have any such effect. (I guess this view comes out of the long tradition of hucksterism and fraud in the sale of computer security products, where sales are often about baffling the consumer into buying some thoroughly useless product.)

But in the terrorism example, it seems quite plausible to me that the bad outcome is fear (itself…), something that can probably be managed quite effectively by policies that feel pretty intrusive (e.g. searching shoes at airports, no scissors on board aeroplanes) and therefore feel as if they ought to be effective – regardless of whether those policies actually are effective.

I have a faint memory of knowing, when I was about 10, something that is probably an urban legend: during the Blitz, the initial, demonstrably effective, policy for protecting London was to have night fighters shooting down as many German bombers as possible. However, frightened civilians could not hear this going on and morale fell. The policy was changed to replace the night fighters with anti-aircraft fire (can’t have both for fairly obvious reasons). AA gunners boomed away all night every night, to no real effect, while the Germans dropped their bombs unhindered. Morale rose, because civilians could hear their own protection.

In the, let’s say, stylized facts of this example, it seems to me that the placebo is actually better than the effective response.

2

Dan Hardie 10.21.03 at 3:10 pm

Partly an urban myth, Nasi. It’s true that in the ‘Blitz’ (i.e. the period from September 1940 to spring 1941, after the Battle of Britain), when it became apparent that the anti-aircraft guns on the ground weren’t having any success in shooting down night raiders, the Army wanted to stop them firing, to save on the waste of shells and manpower, but Churchill over-ruled them on the grounds that the sound of the AA guns firing would reassure civilians that something was being done to protect them.

But it simply isn’t the case that Churchill ordered an ineffective-but-morale-boosting policy (ground fire) in preference to an effective-but-non-morale-boosting policy (night fighters). RAF night fighters at this stage were pretty much as ineffective as AA guns. We know this because the joint Army anti-aircraft people and the RAF’s Fighter Command had some of the best statisticians in the country working for them – Fighter Command were the first people to use the phrase Operations Research, and much of the OR methodology was invented in a hurry in 1940.

To be effective, night fighter networks needed a) a much more powerful ground radar system (the existing one was good enough to direct fighters by day to the vague area where German planes could be found, and more precise information was then available from the fighter pilots’ own visual observation and, most importantly, from a network of ground observers with binoculars and telephones); and b) a reliable aircraft-mounted radar system which would allow fighter pilots to close the last mile or so on their targets in pitch dark. Neither of these technologies was available in late 1940, except as unreliable and ineffective prototypes.

Neither was the technology which would make AA guns effective by night: again, a more effective ground radar system, backed up by radar proximity fuses in the guns’ shells.

Recommended reading would be Len Deighton, ‘Fighter’ (yes, I know he wrote thrillers – but actually he is an exceptional military historian); Richard Overy, ‘The Air War’ (his book on the Battle of Britain, ‘The Battle’, is essentially a précis of Deighton); and Alexander McKee, ‘Strike from the Sky’, which has a good summary of the night fighter problem.

3

Dan Hardie 10.21.03 at 3:25 pm

So a summary would be: if faced with a choice between two militarily ineffective responses, choose the one that might have some kind of placebo effect – but don’t forget to pump huge amounts of money and brainpower into developing an effective response.

4

Nasi Lemak 10.21.03 at 3:29 pm

Umm, yes. Well, I *did* say “stylized facts” and I am a kind of rational choice type, so no real commitment to actual factual accuracy here.

But if it were in fact the case that Churchill had ordered an ineffective-but-morale-boosting policy, in circumstances where morale was the more important issue, then I don’t think it would have been a poor security policy – though I suspect, on the basis of what I’ve read before, that Schneier would say it was. I’m interested in whether the book, where it talks about “security theatre”, sees this as actually a good thing (which I think it probably is).

5

Nasi Lemak 10.21.03 at 3:42 pm

… provided, of course, that the placebo does not impact too much on any effective policies that may be available.

6

Jonathan Goldberg 10.21.03 at 7:38 pm

…and provided the placebo does not extract a price in civil liberty.

Freeman Dyson, who was part of the statistical team, had nightmares after the war about how little actual use was made of his group’s efforts, and how they were overruled for propaganda purposes.

7

Tom Runnacles 10.21.03 at 9:21 pm

I know that there are some in the security world who think Schneier is a bit of a fraud, since he’s primarily a cryptographer and mathematician rather than a real, down-in-the-mud guy who has to keep large, sensitive systems safe for a living. Whatever.

I can say that his _Applied Cryptography_ is a marvellous book if you want to understand a bit about the means by which some mathematical ideas can be put into the service of security. As importantly, Schneier emphasises the profound limits that the real world imposes on the confidence that can be placed in the pure cryptographic results.

I can’t claim to have followed all of it, but I learned a lot from AC. Perhaps they’re not a big chunk of the CT readership, but I reckon anyone who builds software for a living should read it.

8

Henry 10.22.03 at 3:23 am

In yet another book (_Secrets and Lies_), Schneier talks in a slightly guilty way about how too many readers of _Applied Cryptography_ believed that crypto was some sort of ‘magic security dust’ that you could sprinkle over software to make it secure. From your account, it sounds as though he was maybe being too hard on himself.

9

Dan Hardie 10.22.03 at 12:06 pm


Hate to sound like a train-spotter, but Freeman Dyson did OR for RAF Bomber Command (as I remember, from 1943 or 1944 – well after the Blitz) – and yes, it’s true that very little use was made of their work. Bomber Command had a very different culture from Fighter Command. Fighter Command would quite literally not have survived the war had their top brass not been prepared to listen to their OR and other scientific people. Again, this is a pretty standard view in much of the literature. Good discussion in R. V. Jones, ‘Most Secret War’ – which is an excellent book for anyone interested in the scientific aspects of intelligence.

Like Nasi, I should probably read Schneier’s book before sounding off. But I’ll tentatively disagree with Nasi – given that terror groups create terror through successful attacks (‘propaganda of the deed’ – who DID say that first? Sure it wasn’t Lenin), the best morale-boosting response would be the most operationally effective response, i.e. the one most likely to frustrate attacks.

Btw Nasi, Chris Bertram seems to feel that I have grievously insulted you in the ‘Krugman on Mahathir’ thread. My deep apologies if I have, but I feel rather that Chris reproved him for his use of rude words.

10

Dan Hardie 10.22.03 at 12:08 pm

Sorry – last post was garbled. Shd read: ‘that Chris was upset because I reproved him for his use of rude words.’

11

Matt 10.22.03 at 2:07 pm

Another good book on security technologies (with lots of interesting case studies on how they fail – often because of failure to understand the human factors) is Ross Anderson’s Security Engineering.

It covers a wide range of topics – from banking security to nuclear command and control – and it’s pretty readable (non-techies may want to skip over some of the crypto discussions).
