February 08, 2005

JCMC special issue on search engines

Posted by Eszter

I am editing a special issue of the Journal of Computer-Mediated Communication on The Social, Political, Economic and Cultural Dimensions of Search Engines. I hope to receive submissions from people in a variety of disciplines. Details below the fold.

Journal of Computer-Mediated Communication Special Issue on

THE SOCIAL, POLITICAL, ECONOMIC AND CULTURAL DIMENSIONS OF SEARCH ENGINES

CALL FOR PAPERS

Guest Editor:
Eszter Hargittai
Northwestern University

IMPORTANT DATES:

Abstracts (optional, but preferred) due: June 1, 2005
Full papers due: Oct 1, 2005
Anticipated publication: Summer or Fall 2006

ISSUE FOCUS

Search engines are among the most commonly accessed Web sites online. Millions of people turn to search engines daily to find information about news, health concerns, products, government services, their new neighbors, natural disasters and a myriad of other topics. At the same time, recent trends suggest that the search engine market is consolidating, with fewer large players guiding users’ online behavior than ever before. Despite the crucial role that search engines play in how people access information, little attention has been paid to the social, political, economic, and cultural dimensions of large-scale search engines.

This special issue will explore the social implications of large-scale search engines on the Web. It will bring together experts from the fields of communication, sociology, political science, economics, business, law, and computer and information sciences to consider what we know about people’s search engine uses and what recent trends suggest for the types of content that will be most accessible to users in the future.

The following are some of the questions papers might address: Who uses search engines and for what purposes? What are the effects of search engine use on mass and interpersonal communication? How do users’ communication practices influence search engine functionality? How skilled are various population groups at the use of search engines? How do search engines shape identity management and representation online? Are all search engines created equal? Is all content created equal in the eyes of search engines? Is there a viable public alternative to a search engine market dominated by private actors?

GUIDELINES FOR SUBMISSION

Potential authors should submit a preliminary proposal of 500 words by June 1, 2005 to the issue editor, Eszter Hargittai (searchengines06@webuse.org). Those interested in submitting an abstract are encouraged to contact the special issue editor with questions and ideas. The proposal should include the central research question, the theoretical and/or empirical basis for the paper, and preliminary findings.

Authors whose proposals are accepted for inclusion will be invited to submit a full paper of roughly 7,000-10,000 words by October 1, 2005. Since JCMC is an interdisciplinary journal, authors should plan for papers that will be accessible to non-specialists and should make their papers relevant to this audience. The anticipated publication date for the issue is Summer or Fall 2006.

Final submissions should be emailed to the special issue editor, Eszter Hargittai at searchengines06@webuse.org.

http://webuse.org/searchengines06/
http://jcmc.indiana.edu/

Google Maps

Posted by Eszter

Last week Gawker Media launched Lifehacker, a site I have gotten addicted to quite quickly. It’s a great resource for any geek or geek-wannabe. One of today’s finds is the most recent service launched by Google: Google Maps. They offer very nice, clean maps that allow searches for more than just addresses. For example, see chocolate in evanston. Click on the red pointers and get the exact addresses. With another quick click you can add an address for directions. By clicking on “Link to this page” you get a static link you can share with others. (Note that the arrows for navigating are in the upper left-hand corner, not on the sides of the map as with some other services.)

The results of searches are far from exhaustive, though. I’m afraid the above search misses my favorite chocolate store in town. In fact, curiously, it misses relevant stores that a regular Google search will bring up, and Google Local doesn’t seem to be using Google Maps yet either. Since the service is still in beta, hopefully we’ll see some improvements. Regardless, it looks like a very nice new service worth checking out.

February 01, 2005

Student blogs

Posted by Eszter

A while back I posted about my plans to teach a class in which each student would be required to maintain his or her own blog. We are now halfway through the quarter (really), so I thought it would be a good time to get some outside readers to take a look at the students’ blogs. If you happen to have a moment and wouldn’t mind surfing over, I am sure the students would be delighted to get some comments from people not enrolled in the class. TheRockBlog.com has a link to each of the blogs in the right-hand menu.

As you will see, the quality of student posts differs quite a bit. This is not particularly surprising since one can expect some level of variation in the work of students for most classes. To give a bit of background on the content of the blog entries, students are required to post to their blogs each week discussing at least two of the reading assignments covered that week. Students can use their blogs to post other material as well. They are also required to post a comment on a peer’s blog each week. The syllabus also includes some additional blogging assignments (finding and discussing various online content).

Judging from midterm feedback, it sounds like most students are enjoying the blogging experience, although some find commenting on others’ blogs a bit tedious. At the same time, others find it disappointing that they are not getting more feedback, so it’s hard to satisfy everyone. Having students blog about the readings is certainly helpful for understanding how they are processing the material. Their blog entries have guided discussion in several class sessions.

I’ve learned a lot from this experience and plan to write up a detailed description of the course logistics later. For now, feel free to take a look at how the student blogging is going by visiting some of their sites.

January 20, 2005

OOPSLA? Me?

Posted by John Holbo

I've gotten myself involved in something a little unusual (for me, anyway). I'm on the program committee of OOPSLA '05. Specifically, I'll be reading submissions in the 'essay' track. These are supposed to contain "in-depth reflections on technology, its relation to human endeavors, and its philosophical, sociological, psychological, historical, or anthropological underpinnings." I'm announcing it here because academic folk with solid but untechnical essays that fit the bill might not necessarily think to submit to a conference nominally devoted to object-oriented programming. I'm quite curious what sorts of things I'll be reading. Should be fun.

December 14, 2004

Unanticipated Google Hacks

Posted by Henry

I downloaded Google Desktop a couple of weeks ago, and have found it invaluable - it’s greatly superior to the standard Windows search tools. But up until a few minutes ago, I didn’t realize that it could serve as a sort of rough-and-ready backup tool to boot. I loaded up a Word document that I’d been working on recently, and found (as occasionally happens) that most of my work had somehow disappeared, through the vagaries of Windows, or my having pressed the wrong key at some stage or another, or some combination of the two. None of the temporary files were still on my hard drive, so I more or less resigned myself to having to recreate several days’ work. But then I decided to use Google Desktop to trawl my hard drive on the off chance that the text still existed somewhere - and discovered that Google creates and retains several caches of every Word document you work on, so that you can go back and see earlier versions and, if necessary, cut and paste old material that has gone missing back into your document. It’s not an ideal solution (you lose formatting, etc.) - but it beats the hell out of having to rewrite something you had already spent a lot of time on.

November 13, 2004

Delicious Monster

Posted by Kieran

Delicious Monster is a two-person company out of Seattle with a good pedigree in the Apple development community — even though half the company is eighteen years old, he’s been writing good software for the past three years. They have just released Delicious Library, a cataloguing application for books, music, movies and computer games. John Siracusa has a detailed review at Ars Technica. As Siracusa points out, an application designed to keep a catalog of your books and whatnot is fundamentally a boring idea. Yet Delicious Monster has managed to make it cool.

They do this by cleverly taking full advantage of the capabilities of the Mac OS and Amazon’s Web API. If you (like me) have an iSight camera, then Delicious Library can turn it into a barcode scanner. You scan the code, Delicious Library looks it up on Amazon, downloads all the details available for it (including a summary and the cover art), and the item is added to your shelf. It can also point you to similar items on Amazon, and if you happen to own them you can just drag them over to your shelf. The upshot is that you can build a pretty big database really fast, because there’s no typing involved. The result looks like this. It’s like creating an iPhoto or iTunes library on the fly for the books on your shelf. It’s absurdly satisfying to use, even though it’s basically useless in its current form. I mean that you can’t actually do anything very much with the data besides sort it every which way and print it out nicely. Well, that’s not entirely fair. You can sell the items you own on Amazon. And you can keep track of any books you loan out to people.1 But that’s about it right now. I imagine there are a lot of obsessive geeks out there who just want a catalog of their stuff, of course, and some people may well have a collection worth cataloguing for its own sake. What I really want from future versions is the ability to (a) output nicely formatted web pages (or PDF files) with selected books and any annotations I want to add, and more importantly (b) output data to a BibTeX file (or Endnote for the great unwashed), preserving annotations and ISBNs, etc. That would make it really useful. There’s a free application called Books that can do some of those things. Combining that functionality with DL’s eye-candy and iSight-scanning would turn it into a really killer application.
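
The clever part is the glue between a camera, a barcode and a product database. Here is a minimal Python sketch of that scan-to-shelf pipeline; the endpoint and field names below are hypothetical placeholders of my own, not Amazon’s actual web API.

```python
# A sketch of the scan -> lookup -> shelf flow. The URL and the JSON field
# names are hypothetical stand-ins, not Amazon's real web service.
import json
import urllib.request

CATALOG_URL = "https://catalog.example.com/lookup?ean="  # hypothetical

def lookup(ean):
    """Fetch the catalog record (title, creator, cover art) for a barcode."""
    with urllib.request.urlopen(CATALOG_URL + ean) as resp:
        return json.load(resp)

class Shelf:
    """The in-memory shelf: a list of catalog records, sortable every which way."""
    def __init__(self):
        self.items = []

    def add_scanned(self, ean):
        self.items.append(lookup(ean))  # scan -> lookup -> shelf: no typing

    def sorted_by(self, field):
        return sorted(self.items, key=lambda item: str(item.get(field, "")))
```

Once each record carries an ISBN and a title, the BibTeX export wished for above is mostly a formatting exercise over Shelf.items.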

1 This is pretty useful, actually, because graduate students love to borrow books and never return them. DL puts a little yellow “Out” stripe over the corner of books you’ve loaned out, keeps a shelf of loaner books for everyone you lend to, and puts a reminder in your calendar to go get them back.

November 02, 2004

Voting machines

Posted by Henry

For those interested in breaking news on electronic voting machines and their associated tribulations, Princeton comp. sci. professor Ed Felten and friends are keeping track at E-Voting Experts.

October 27, 2004

Space invaders

Posted by Henry

More on the troubled relationship between the Republican Party and technology. One of my colleagues complained to me this morning that her AOL Instant Messenger software had been hijacked by political spam. As I’ve seen for myself, every time she moves her cursor over the program, a loud, obnoxious movie-ad pops up, telling her in stentorian tones about the horrible things that John Edwards and the Evil Trial Lawyers are doing to doctors. On further investigation, it turns out that this particular box of delights has been brought to your desktop by the “November Fund,” a pro-Republican 527 created by the US Chamber of Commerce. Apparently, the fund has spent $2 million; according to the American Bar Association’s ABA Journal, they’re legally prohibited from buying attack ads on TV or radio, which probably explains why they’re spending money on pop-ups.1 For my part, I sincerely hope that they raise and spend as much money as possible on Internet advertising. If I were a swing voter, I can’t imagine anything more likely to make me vote Democratic than having my desktop invaded by talking, dancing Republican adware.

1 The Internet is exempt from the ban on corporate funded advertising that specifically targets candidates.

October 15, 2004

Evil Spyware problem

Posted by Chris

I’m plagued by an evil spyware problem at the moment, which neither Spybot S&D nor Ad-Aware detects. (Norton AV also says I’m virus-free.) The problem is the occasional launch of an Internet Explorer window linking to this site or that. Perhaps installing XP SP2 would solve it, but my last attempt just hung my system mid-install (and I needed to do a lot to recover). I’m tempted just to rename the IE executable so that the program won’t run, but since evil Microsoft may have programmed in all kinds of subterranean connections between the browser and the OS, I’m wary of doing so. Any advice? (Advice of the form “You should buy a Mac” will not improve my immediate situation or mood.)

July 26, 2004

Adventures with Linux

Posted by Chris

I thought I’d indulge my fantasy of joining the hard-core techie kids (like Kieran) by installing Linux on my home PC at the weekend. Bravely ignoring the concerns of my family — who feared for their own future access to the computer — I downloaded a disk image for SUSE 9.1 (Personal edition) and rebooted from the CD-ROM. I even managed successfully to repartition my hard disk (and Windows still works). But under Linux I have no mouse (mine’s a Logitech optical USB creature) and no network (despite faithfully copying down and reproducing details of DNS servers, gateways, etc.). Much googling and initialization of modules later, I’m no further forward. The problem isn’t Linux as such, since Knoppix works fine direct from the CD, recognizing the rodent, happily working with other USB devices, and auto-configuring the network. But I’d like a “proper” version, nicely installed on my new partition, so that I can escape the “told you sos” and “what did you expects” of partner and children. All advice gratefully received.

July 06, 2004

Online communities

Posted by Eszter

It has been interesting to follow the various discussions about blogs and what types of communities and discussions they resemble. I thought I would post a note to remind people (or let people know) that the study of online communities1 is one of the oldest topics in academic research on the social aspects of information technology use. There are probably hundreds of papers written about Usenet, mailing lists, and bulletin board systems. Of course blogs have some distinct characteristics, but overall the existing body of literature about online communities would probably yield some interesting and helpful reading for those interested in blogs. Let’s not reinvent the wheel. One place to look for such work is the Journal of Computer-Mediated Communication (almost a decade old), but a simple search in a library catalog will yield numerous sources on virtual communities. Of particular interest to those pondering the social network aspects of online communities may be some of the excellent work by Warren Sack and much interesting research done on Usenet by Marc Smith. I realize mapping the blogosphere is a somewhat different issue, but some of the questions that have been raised are relevant to other online communities as well. People have worked for years to find some answers; let’s not ignore them. A piece that seems especially related to some issues that have come up is “Community without Propinquity Revisited: Communications Technology and the Transformation of the Urban Public Sphere” [pdf] by Craig Calhoun.

1 When I use terms such as “online communities” and “virtual communities”, I do not mean to suggest that these exist in isolation from other types of communities. See this piece [pdf] by Barry Wellman and Milena Gulia for more on this point.

June 23, 2004

Fun with IT. Fun with IT?

Posted by Eszter

Chicagoland has a lot to offer, especially during the summer. Luckily for those not in the area, you can catch some of it without being there. From art made of searches to interesting book signings, the Windy City will keep you busy.

Last fall, I visited Kris Hammond’s Intelligent Information Laboratory at Northwestern and saw some really neat projects. Luckily, on occasion, these projects are shown in a more public forum as well. Such is the case with graduate student David Ayman Shamma’s Information Environment. While watching a TV broadcast, the viewer sees images retrieved, via image searches on the Web and in a picture database, for words mentioned in the broadcast. It can be viewed at Piper’s Alley in Chicago or here.

Another IT-related event tomorrow, Thursday, will be Siva Vaidhyanathan’s signing of his book The Anarchist in the Library. It looks like Basic Books is putting out some interesting material this year (they published Paul Starr’s The Creation of the Media as well). Siva will be at the Old Orchard mall tomorrow at 7:30pm.

June 22, 2004

Tech Active

Posted by Eszter

There is lots to blog about while in London and Paris, but I am saving most of it for when I’m back in the States. (I really cannot justify sitting at a machine when I could be running around the streets of London and Paris, sorry.) However, this one event will be over by the time I get back to regular blogging so I wanted to post about it.

The Stanhope Centre for Communication Policy Research is sponsoring a panel discussion next Monday (the 28th) in London on “Tech Active”, or the promises, successes, and challenges both of using the Internet to change the world and of using social policy to change the Internet. Both scholars and activists will engage in this discussion, including Cory Doctorow, Gus Hosein, Lisa Nakamura, and Bill Thompson. Thanks to Christian Sandvig for organizing the event. I am sure he will have interesting thoughts to add as well. I am sorry to miss it, but my flight leaves London a few hours earlier. The event is free and open to the public, so I hope people will take advantage of it!

May 21, 2004

Phone numbers

Posted by Eszter

Obviously there are tons of ways in which one can study memory and recall, from the trivial to the immensely important. This morning I was wondering about a tiny corner of this area: how do people remember numbers, and in particular, phone numbers? I wish I had a better reason than the following for bothering with all this. I was woken up, for the nth time, by a phone call from a number that looked much like mine. What gives?

When I answered, the caller hung up. This had gone on for a while. At first I thought I would just ignore it and it would go away. But clearly it didn’t. So I decided to call the number back. The person had no idea what I was talking about (i.e. that someone at that number kept calling me), told me his was a new number (seemingly irrelevant, since they were the ones making the calls, not receiving them), and eventually hung up on me. However, a few minutes later he called back to say that the owner of the number had been checking his voicemail and kept dialing the wrong number, hence the stray calls. Aha, of course. There are providers that allow you to check your voicemail by calling your own number. Ideally the person would just add a speed dial, but of course that would not help me when he tried to access voicemail from another phone… so I decided to get my number changed. (Other reasons follow below.)

So why the frequent mistakes? My number looked like this: XAY-BBXA. The number from which I was getting calls was XAY-BBYA. Add to this that X and Y sit in similar positions on the dial pad (just across from each other), and I guess it is not so crazy that someone would keep getting it wrong. I don’t know much about how we remember numbers, but such a confusion seemed well within the realm of possibility (too much so, in fact, as the frequency of the mistake in this case shows). I am extremely visual when remembering phone numbers, so I just dial them on the pad. In fact, at times, even just to remember a number in order to give it to someone, I have to “type it out” on an imaginary pad. I just wish this person had remembered the right sequence. In any case, the idea of depending on this person remembering his own number correctly was not appealing, so I moved on.

Other reasons I had been annoyed by the number included text messages and phone calls aimed, perhaps, at the previous holder of the number. Her friends had a hard time understanding that I was not her and just kept calling… and sending text messages. This is especially annoying since the receiving end (here: me) pays for such call minutes and text messages (the latter do not even require any action on your part, so you cannot just ignore them; the charge is automatic). Add to that the tone of some of those text messages, and I was far from amused.

Lesson learned: when getting a new number, ask for one that is new or has not been in use for a while.

May 18, 2004

WWW conference

Posted by Eszter

Today, I will be attending a conference workshop in New York on Measuring Search Effectiveness: The User Perspective. I will be presenting some findings about What Makes an Expert Searcher? Evidence from User Studies. (That paper is not ready for distribution, but I will take this opportunity to link, once again, to the paper that presents the coding scheme I used to analyze most of the data.) The workshop is being held in conjunction with WWW2004, the Thirteenth International World Wide Web Conference.

I am reminded of my attendance at The 4th International World Wide Web Conference in Boston in 1995. I was a senior in college writing a thesis on the unequal international spread of the Internet. I went to the conference hoping to learn what research was being done on the social implications of the Internet. Very few sessions on the program dealt with anything other than technical aspects. After one of the few sessions in which panelists discussed some philosophical questions related to the Internet, I walked up to someone and asked whether he thought the government was doing anything about the Web. His response: “Yes, I think they have a Web page now.” This wasn’t exactly what I was getting at. I had hoped to see some sessions discussing policy implications. But this was still the era when many people thought the medium would somehow evolve in a vacuum, in isolation from existing social institutions.

Looking at this year’s program, it is clear that technical questions are still the overwhelming topic of this particular conference so perhaps it was a mistake to look for other types of content at WWW4. But this is easy to say today when the conference scene is littered with meetings discussing all aspects of IT. Back in 1995, there weren’t too many meetings you could go to where people would care to discuss any aspects of the Web.

April 30, 2004

Google as rational actor

Posted by Henry

As John Quiggin has already said, the expected market valuation of the Google IPO seems to reflect fundamental irrationality among its investors. At first glance, Google’s IPO statement is even crazier - it seems to poke a finger in the eye of Wall Street. Larry Page’s covering letter tells potential investors that Google will continue to reserve the right to make extremely risky investments, to coddle its employees, and to refuse to release traditional earnings guidance.

Although we may discuss long term trends in our business, we do not plan to give earnings guidance in the traditional sense. We are not able to predict our business within a narrow range for each quarter. We recognize that our duty is to advance our shareholders’ interests, and we believe that artificially creating short term target numbers serves our shareholders poorly. We would prefer not to be asked to make such predictions, and if asked we will respectfully decline. A management team distracted by a series of short term targets is as pointless as a dieter stepping on a scale every half hour.

In fact, there’s a very strong argument to be made that Google’s behavior is entirely rational, and furthermore is exactly the right thing to do if it wants to maximize its long term profits. As Gary Miller has argued in a series of publications, shareholder capitalism in the strong sense of the word is plagued by fundamental inefficiencies - shareholders cannot be trusted to maximize long term value because of fundamental dilemmas of social choice.

First the technical bit. Miller’s argument1 is based on work by Bengt Holmstrom suggesting that it is impossible for a firm to split its earnings in a way that is simultaneously (1) Pareto optimal, (2) a Nash equilibrium, and (3) ‘balances the budget’ (i.e. splits the proceeds of a team production function among the members of the team without generating a surplus). In this schema, shareholders may play an important role - they may be a surplus sink, sucking up any surplus from the team production function, and allowing the other actors in the firm to reach a Pareto optimal equilibrium. However, if shareholders are to play this valuable role, they must do so in an entirely passive way. As Eswaran and Kotwal show, they cannot be allowed to participate in production, or to decide the incentive scheme under which the profits of team production are to be produced. For every possible scheme that yields an efficient equilibrium, there is another that yields an inefficient equilibrium with a greater surplus for the shareholders to absorb. In other words, shareholders will always have an incentive to choose schemes that maximize their own surplus rather than overall efficiency. If they have unconstrained control over the firm, they will prefer inefficient outcomes over efficient ones.
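
For those who want the technical bit in symbols, here is a compressed sketch of the underlying theorem (my notation; this is the standard rendering of Holmstrom’s 1982 “moral hazard in teams” result):

```latex
% n agents exert efforts a_i at private cost c_i(a_i); team output is
% x = f(a_1, ..., a_n); agent i is paid a share s_i(x).
\text{Budget balance:} \quad \sum_{i=1}^{n} s_i(x) = x \quad \text{for all } x
\text{Pareto efficiency:} \quad \frac{\partial f}{\partial a_i}(a^{*}) = c_i'(a_i^{*}) \quad \text{for all } i
\text{Nash equilibrium:} \quad s_i'\left(f(a)\right)\,\frac{\partial f}{\partial a_i}(a) = c_i'(a_i) \quad \text{for all } i
```

Efficiency at the equilibrium would require every marginal share s_i' to equal 1, while budget balance forces the marginal shares to sum to 1: impossible when n > 1. Relaxing budget balance dissolves the contradiction, because a passive residual claimant can absorb the difference between output and incentive payments - which is precisely the “surplus sink” role shareholders play here.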

As Miller argues, this apparently technical and abstruse set of results provides a powerful way of thinking about the relationship between management, workers, and shareholders in a modern firm. Workers face a fundamental problem of trust in their relations with a firm - they are only likely to make the maximum effort possible if they know that it is going to be rewarded. Because much of their effort is unobservable over the short term, the best way for management to ensure that workers give their most is to guarantee a long term employment relationship in which workers will receive fair reward for their efforts over the long run. Management can further signal its willingness to reward additional effort by allowing workers a fair degree of leeway and flexibility in how they use their time. The ideal is a relationship of gift exchange - a diffuse commitment on the part of management to reward workers, in return for a diffuse commitment by workers to do good work.

However, management will have difficulty in credibly committing to workers, when they are visibly constrained by shareholders’ interests. Workers know that shareholders are interested in maximizing the surplus rather than in maximizing the overall returns of the firm. They further know that insofar as shareholders are interested in short term profits, they are not going to want the firm to commit to the kinds of long term relationships that would allow workers proper compensation for their efforts. Nor are shareholders likely to tolerate the kinds of implicit rewards that support gift exchange. Too strong an emphasis on shareholder value and shareholder control will mean that managers will not be able to make credible commitments to workers that additional effort will be rewarded, and workers will thus not give additional effort.

Under this logic, Google’s IPO statement makes perfect sense. Google has succeeded in large part because of unmeasurable efforts from workers. It encourages its workers

in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google. This empowers them to be more creative and innovative. Many of our significant advances have happened in this manner. For example, AdSense for content and Google News were both prototyped in “20% time.”

Thus, its IPO statement isn’t aimed at reassuring potential shareholders. It’s aimed at reassuring Google’s workers. It’s a statement that short term shareholder interests will not predominate, and that workers will continue to be rewarded for their additional effort and creativity after the firm goes public. Google is making a credible commitment, by publicly promising to protect its employees in circumstances where such promises are vanishingly rare.

We provide many unusual benefits for our employees, including meals free of charge, doctors and washing machines. We are careful to consider the long term advantages to the company of these benefits. Expect us to add benefits rather than pare them down over time. We believe it is easy to be penny wise and pound foolish with respect to benefits that can save employees considerable time and improve their health and productivity.

The significant employee ownership of Google has made us what we are today. Because of our employee talent, Google is doing exciting work in nearly every area of computer science. We are in a very competitive industry where the quality of our product is paramount. Talented people are attracted to Google because we empower them to change the world; Google has large computational resources and distribution that enables individuals to make a difference. Our main benefit is a workplace with important projects, where employees can contribute and grow. We are focused on providing an environment where talented, hard working people are rewarded for their contributions to Google and for making the world a better place.

None of this is to say that Google will necessarily succeed in squaring the circle - too much freedom of action for managers can create its own inefficiencies. But it is to say that Google’s unorthodox approach to its IPO is entirely defensible, and indeed is a rational response to the serious dilemmas that Google faces in maintaining its innovative strengths after going public.

[ Via John Battelle’s weblog]

1 See especially Gary Miller, “Why is Trust Necessary in Organizations? The Moral Hazard of Profit Maximization,” in Karen Cook, ed., Trust in Society (Russell Sage Foundation, 2001); also Gary Miller, Managerial Dilemmas: The Political Economy of Hierarchy (Cambridge University Press, 1992); and Gary Miller and Thomas Hammond, “Why Politics is More Fundamental than Economics: Incentive-Compatible Mechanisms are Not Credible,” Journal of Theoretical Politics 6:1 (1994), 5-26.

April 27, 2004

How much is Google worth?

Posted by John Quiggin

According to this report, the widely predicted Google IPO is likely to value the equity in Google at more than $20 billion - others suggest $25 billion. I immediately wondered whether Google was really worth $25 billion.

I started on a standard financial analysis. Although, as a private company, Google doesn’t have to publish annual reports, it’s been estimated that Google has annual revenues of $500 million and profits of $125 million, so that earnings amount to about 0.5 per cent of a $25 billion valuation. We can expect that to grow reasonably fast in the next few years, but the scope for expansion in Google’s core business is far from limitless. Most people in the developed world are already online, and most of the heavy users already use Google (Eszter might have more to say on this). Moreover, there’s no strong reason to suppose that Google will be around in, say, 20 years’ time. I find it hard to draw a plausible earnings path that would yield a present value of $25 billion at any reasonable discount rate.
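
A back-of-the-envelope check, using growth and discount rates of my own choosing rather than anything from the report:

```python
# Present value of a hypothetical earnings path: $125m starting earnings,
# compound growth g for 20 years, an 8% discount rate, no terminal value.

def present_value(e0, growth, discount, years):
    return sum(e0 * (1 + growth) ** t / (1 + discount) ** t
               for t in range(1, years + 1))

e0 = 0.125  # $125m, expressed in billions
for g in (0.10, 0.25, 0.40):
    print(f"{g:.0%} growth: PV = ${present_value(e0, g, 0.08, 20):.1f}bn")
# -> roughly $3bn, $16bn and $98bn respectively
```

On these illustrative numbers, Google would have to sustain something like 25-30 per cent compound earnings growth for two decades just to approach $25 billion - exactly the kind of path that seems implausible.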

That’s a problem for the investors, though. The Google example started me thinking about the more general problem of economic valuation in the Internet era. I started by looking at this piece by Simson Garfinkel (hat tip: Tyler Cowen). As well as reporting potential competition from Akamai (relevant in considering Google’s longevity), Garfinkel estimates that Google operates a network of 100,000 servers, but that clever design allows the use of very cheap computers as servers. Let’s suppose an average of $500 apiece. This implies that the main piece of capital equipment operated by Google is worth around $50 million1 - a hefty sum, but a tiny fraction of the estimated equity value (and presumably there’s some debt in there as well).

Next, it’s of interest to look at capital-labour ratios. Google apparently has about 1,000 employees, which suggests a total labour cost on the order of $100 million per year - a little on the low side as a proportion of revenues of $500 million, but not implausible. On the other hand, the number of employees is minuscule in relation to the valuation above, which implies a capital stock of $25 million per worker. I feel sure that a ratio like that would imply some pretty strange organizational policies.

Then there’s the question of how much Google is worth in economic terms. I would think the correct answer must be a lot more than the present value of its revenues. I use Google all the time, but unless text ads have a subliminal effect for which Google is being paid, I’ve never contributed a penny to its revenues, and quite possibly never will.
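
The textbook way to put this point: if D(p) is the demand curve for searches at a user price p, then at Google’s actual user price of zero,

```latex
\underbrace{p \cdot D(p)\big|_{p=0}}_{\text{revenue from users}} = 0
\qquad \text{while} \qquad
\underbrace{\int_{0}^{\infty} D(q)\,dq}_{\text{consumer surplus}} \gg 0
```

Advertising lets Google capture a sliver of that surplus indirectly, but nothing ties the size of the sliver to the size of the surplus.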

The general problem is that, in an economy dominated by public goods, like that of the Internet, there’s no reason to expect any relationship between economic value and capacity to raise revenue. Things of immense social value (this blog, for example!) are given away because there’s no point doing anything else. On the other hand, significant profits can be made by those who can find a suitable choke point, even if they haven’t actually contributed anything of value. Assuming for the moment that SCO prevails in its attempts to extract revenue from Linux users, it won’t be because SCO’s code was better than some free alternative, but simply because it was widely distributed before anyone found out it was copyrighted.

If the Internet continues to grow in economic importance, the central role of public goods in its formation will pose big problems for capitalism, though not necessarily to the benefit of traditional forms of socialism.

1 Thanks to commentators danny yee and thijs for correcting parametric and arithmetic errors in the original version of the draft, and thereby greatly strengthening my point.

April 15, 2004

I read your email

Posted by Eszter

I used to have a sign up in my office that said “I read your email”. It was just a joke, a geek’s bumper sticker to shock people. But as with so many things, what may seem like a joke or far-fetched idea one day suddenly becomes mainstream reality.

By now I’m sure many people have read about the controversy surrounding Google’s proposed new free email service, GMail. Soon after the company announced the forthcoming service, privacy advocates started criticizing Google for potential privacy violations. The basic idea is this: the service may scan the contents of people’s email to figure out the most relevant targeted advertisements. One response to the criticism has been to say that people have a choice about whether to use the service; if the practice bothers them, they do not have to use GMail. But is it really as simple as that?

Let’s set aside for a moment the issue that many users probably do not read the agreement they sign, or that even if they read it they may not understand its full implications. Let’s assume that those who sign up for the service do so because, for whatever reason, they do not mind that their emails get scanned. Okay. But what do you do if you get correspondence from someone who is using a GMail account? If you respond, your email will be scanned as well, regardless of what email service you use. You did not opt to use GMail, because you are bothered by the implications of your mail being scanned. But what can you do? Worse yet, suppose you are writing to an email address that the recipient uses as an alias forwarding to a GMail account. You have absolutely no idea that your mail is ending up in the mailbox of someone whose every message gets scanned.

So when people say users will have a choice to opt in and use GMail knowing that their emails may be scanned, I do not think they are considering the implications of the scanning for the correspondents of GMail account users.

April 07, 2004

More on assumptions about search engine use (and related research)

Posted by Eszter

I had a piece on the BBC News site yesterday. A few people have kindly sent me notes letting me know about it, so I thought I should blog it so people know that I am aware of my article on the BBC site. ;-)

I should clarify that my motivation for writing this piece - or any other that mentions Google, for that matter - is not a reflection of any personal love or hate relationship I may have with Google… or any other search engine, for that matter. My thoughts on the topic are the result of studying how average Internet users (as in, not just me, or just some of my friends and colleagues) find information online. I have tried to make this increasingly explicit in my writing in order to avoid people sending me emotionally charged notes about how I am misunderstanding one particular company. This part seems to be getting better, as no one this time sent me messages explaining how to use Google to make the most of it. (Believe me, I know how to use search engines; learning those skills was the least I could do while writing a dissertation on how people find content online. :)

April 06, 2004

iPod envy

Posted by Maria

What amazes me is that it is taking the IT hardware industry - with the notable exception of Apple, of course - literally decades to cotton on to the facts that 1) a simple and effective user interface is a selling point and 2) people like gear that looks good.

Why are most computers and IT devices still so damn ugly? My computer is the same (anti-)colour as most, so it blends in just fine with the peeling, mushroom-coloured paint of my office and the complementary, half-tone, exposed plaster. It’s a coherent look, no doubt about that, but so, so dreary. But aside from office drones who don’t choose their equipment and must simply accept what is purchased in bulk - and without aesthetic considerations - surely the massive home computer market might have exerted a little more user choice by now?

Of course there are many other considerations that bear on choosing a computer - operating software, and the odd relationships between manufacturers and the accompanying market distortions, not least. But isn’t it downright odd that an industry calling itself ‘personal computing’ relishes the sale of machines it calls clones? Why should it be a revelation that people making a major purchase that affects so many parts of their lives might want it to look a bit more interesting or apt than a fridge or an air-conditioning unit? (And on the subject of ugly fridges, there are alternatives.)

Only in the past couple of years have we seen those sleek, black, manly laptops (about as sexy as Old Spice aftershave) come onto the mass market and, more recently, the slender, natty silver ones. But let’s be honest: in the shop, the iPods, iMacs and G5s are the only models with rapt consumers literally stroking them.

So, iPods are pretty. In a non-girly way of course, so no threat to anyone’s masculinity (except for the lovely pastel iPod minis, which we won’t get in Europe for months). As quickly as Toshiba can kit them out with their little drives, armies of iPods are marching out into the pockets of the affluent middle class.

This is a good thing for everyone.

It means that music downloaders are being joined by a new demographic: professionals who like to think of themselves as law-abiding, people who own shares, people who vote. In short, people with clout. As opposed to frightened twelve-year-olds.

Because whichever way the music industry wants to cut it, and whichever model the fawning IT industry chooses to interact with it, music prices are set way too high, and artificially so. Be it iTunes or Janus, online music is being (or will be) sold or rented for more than most people are prepared to pay. And let’s not even get into the negative privacy and security externalities of technical measures to protect the music industry’s copyright - because somehow I don’t see the disadvantage or harm to consumers of invasive and inefficient rights-protection technologies being factored into content pricing. Prices are patently more than the market is willing to bear (a dollar a song? $10-20+ a month to ‘rent’ your music collection?), yet the music industry has responded by criminalising its consumers.

Except that now, thanks to the iPod, more and more of the consumers who download their music and are fed up with being ripped off are stroppy, articulate, well-connected professionals. These people really don’t like being called criminals, and they can hire lawyers if someone tries it. Hell, plenty of them are lawyers themselves.

Let the games begin.

April 01, 2004

CAN-SPAM

Posted by John Quiggin

Among the offerings in today’s special edition of TidBITS, the long-running online Macintosh magazine, I found this item particularly appealing.

Canned Spam Can Can Spam with CAN-SPAM — Hormel is expected to announce today their campaign to can spam using their canned Spam with the aid of the CAN-SPAM legislation. Starting today, Hormel will print the phone number, email addresses, and other information about unsolicited email senders on cans of Spam along the lines of the “Have you seen me?” photographs published on milk cartons. Canned Spam buyers who help to can spam by canning spammers can receive cans of Spam as a reward.

Other important news includes a report that the US Department of Homeland Security is responding to the threat of Windows-specific cyberterrorism, most notably through Trojans such as Phatbot, by standardising on Macs.

March 28, 2004

Selective intelligence

Posted by Eszter

There are clearly some very smart folks behind Google, given that they provide us with a great service and continually add useful features. That said, at times I am surprised by some of their decisions. Should they be placing their machine intelligence above user preferences? I am surfing the Web in Budapest. When I try to go to google.com, I am redirected to google.co.hu. I change the URL because I prefer to see the site in English. Fine. Then I run a search using English words and get AdWords Sponsored Links on the right in Hungarian. The rest of the interface is in English, as are all of the results, but the ads are not. (Granted, the one term that matched the search term “domain” was in the ads, but every other word was in Hungarian.) Geography does not equal language preference or knowledge, especially when the user has already signaled a preference. It seems that serving meaningful ads would be in the interest of both Google and its AdWords clients; why this decision, then? (I commented on something very similar a year ago, and although some progress seems to have been made, room for improvement remains.)
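
The principle at stake is easy enough to state in code. A minimal sketch, with a decision rule and fallback table entirely of my own invention (not a description of how Google actually works): explicit user signals should outrank geography.

```python
# Hypothetical ad-locale chooser: explicit user signals outrank geography.

COUNTRY_DEFAULT = {"HU": "hu", "US": "en"}  # illustrative GeoIP fallbacks

def ad_language(interface_lang=None, accept_language=None, geoip_country=None):
    if interface_lang:        # the user already switched the interface language
        return interface_lang
    if accept_language:       # the browser's Accept-Language header
        return accept_language.split(",")[0].split(";")[0].strip()
    return COUNTRY_DEFAULT.get(geoip_country, "en")  # geography as last resort

# The scenario above: English interface, searching from Budapest.
print(ad_language(interface_lang="en", geoip_country="HU"))  # -> "en", not "hu"
```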

March 11, 2004

British university axes staff websites

Posted by Chris

In a disproportionate and heavy-handed response to a specific problem, the University of Birmingham (UK) has banned staff from hosting personal web pages (including blogs) on its systems. The Guardian has the story. And staff at Birmingham have a campaign to defend their right to host personal material.

February 11, 2004

The political science of Google

Posted by Henry

Ed Felten has a nice post on Google from a few days ago, suggesting that laments for the halcyon days before people tried to manipulate Google are misconceived. His rejoinder: Google results don’t represent some Platonic ideal of the truth - they’re the product of collective choice.

Google is a voting scheme. Google is not a mysterious Oracle of Truth but a numerical scheme for aggregating the preferences expressed by web authors.

This means, as Felten suggests, that Google isn’t perfect, and can’t be. Indeed, the point is underlined by Arrow’s impossibility theorem, which says, more or less, that no way of aggregating individual preferences into a collective decision can satisfy a short list of reasonable conditions simultaneously. Felten’s insight is an important one - it opens the door to the application of a plethora of interesting results from the theory of collective choice to Google and other aggregators/search engines. There are some eminently publishable academic papers in there for anyone who’s familiar both with this literature and with public choice theory. There’s a more general point too. Much of the early rhetoric about the Internet suggested that it somehow managed to escape from politics. Some people (Declan McCullagh for example) are still trying to peddle this line. It’s ridiculous. The Internet and other communications technologies involve real collective choices, with real political consequences, and the sooner we all realize this, the better.
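
Felten’s “voting scheme” description is quite literal. Here is a toy power-iteration sketch of the PageRank recurrence from Brin and Page’s published 1998 paper (the production system doubtless uses many more signals), treating each link as a vote:

```python
# Toy PageRank: each hyperlink is a "vote" whose weight depends on the
# rank of the page casting it. Simplified from the published algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            targets = outs or pages   # a dangling page votes for everyone
            for q in targets:
                new[q] += damping * rank[p] / len(targets)
        rank = new
    return rank

web = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
print(pagerank(web))  # "a" collects the most link-votes
```

Manipulating Google is then just strategic voting: creating pages whose links inflate your own rank, which is precisely the kind of behaviour the social choice literature studies.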

February 07, 2004

Some consequences of bad spelling

Posted by Eszter

Last week, the New York Times had a piece about the potential monetary losses resulting from bad spelling. The author discusses how some misspelled auction items on eBay sell for very little because few bidders find them.

Reading about the frequency of spelling mistakes on the Web was no shock to me. In fact, the geek that I am, I even ran analyses [pdf] in my dissertation to see what explains whether and how often people misspell words during their online actions.

I should take a step back and explain my project. I study people’s Web-use skills. For my dissertation project, I collected data on one hundred Internet users’ online abilities. Participants were a random sample of the Mercer County (NJ) Internet population. Although these people are more educated and come from families with higher incomes than the average American Internet user, the sample was likely representative of the county’s Net users. (I say “likely” because it is practically impossible to know for sure, but I did as much background research as possible to establish that it is highly likely; see my dissertation (or contact me) for more on that.)

I asked people to come to a university research setting and perform tasks online. I asked them to look for various things (political candidate information, tax forms, local events, etc.) and recorded everything they did. Many of them made spelling mistakes. This certainly slowed people down, and in some cases it also meant that they were unable to complete certain tasks.

No one asked, but since I had the data, I figured I would look at what explains why some people make spelling mistakes, and how often. I found [pdf] that those with less education were more likely to make spelling mistakes. However, the effect of education seemed to be mediated by computer use at work and experience with the Web. Regarding the number of spelling mistakes, age also seemed to matter (older people made more mistakes), but again, computer use at work and experience with the Web mediated this effect. Explaining differences in typographical errors was a bit more interesting, but I’ll leave it to you to check that out in the tables. (I included a table with information about participants’ demographics in that file in case that’s of interest.)

In a forthcoming paper, I list some more examples of common mistakes people make online such as spaces in URLs, no spaces in multiple-term search queries, and mistaken top-level domain-name extensions. More importantly, I describe the classification and coding scheme I used for coding people’s online actions. Send me a note if you’d like a copy.

As for attempts by Google and others to point out to people that they have made a spelling mistake: these are useful to some, but not to others. My experience observing dozens of average users was that many people don’t see such hints, and because results show up even in response to misspelled queries, people do not realize they made a mistake and proceed… often not to the best of sources.

January 28, 2004

The New Economy lives

Posted by John Quiggin

Now that Amazon has finally managed positive earnings over a full year, its shares have acquired that most basic measurement of value, a price-earnings ratio. With shares at $53 and earnings of 17 cents per share, it's a bit over 300 to 1, which suggests that perhaps the New Economy is not dead after all. With revenues growing at 20 to 30 per cent per year, and slowing, it's hard to see how Amazon can deliver the four or five successive doublings in profit that would be needed to justify this price.
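
The arithmetic behind "four or five doublings", assuming (my figures, not anything from Amazon's reports) that a sustainable multiple lies somewhere in the 10-20 range:

```latex
\frac{P}{E} = \frac{\$53}{\$0.17} \approx 312,
\qquad
\frac{312}{2^{4}} \approx 19.5,
\qquad
\frac{312}{2^{5}} \approx 9.7
```

Four doublings of earnings at today's share price bring the multiple down to a conventional 20 or so; five bring it near 10.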

Unfortunately, I haven't yet been able to get hold of Doug Henwood's book After the New Economy, so I can't relate this directly to Kieran's review. But I will make the point that, especially on first acquaintance, the Internet is like a magic mirror. More precisely, it's like Harry Potter's Mirror of Erised, which shows the viewer whatever they most want to see. To the academics and other geeks who built the Internet, it showed a co-operative world in which sharing based on mutual esteem would displace the profit motive and render large corporations obsolete. In the United States, where stock market mania predated the dotcom boom, the mirror showed a route to instant riches. (Thomas Frank's One Market Under God, which I reviewed here along with a very disappointing book from William Baumol, The Free Market Innovation Machine, is very good on all this.)

After starting this post, I thought it would be a good idea to read the comments on Kieran's, and I notice that Brad de Long has offered an Amazon book prize to the first member of Crooked Timber to follow Kieran up. I don't suppose I could ask for a copy of After the New Economy, could I?

Update: I've fixed a couple of typos noted by commentators. Thanks for that. I've also attended to a problem, arising from my inexperience with ecto, that led to duplication of part of the post.

January 26, 2004

The Hard Way

Posted by John Quiggin

My summer holiday activities over the last couple of months included a lot of work on my music collection (I’m slowly transferring from vinyl to MP3/AIFF) and rereading Nick Hornby. So I was naturally struck by how rapidly the skill of making compilation tapes, a central theme of High Fidelity, has gone from the esoteric to the everyday. Not surprisingly, not everyone is happy about this. Joel Keller, writing in Salon, says
Putting together a home-brewed compilation of songs used to be an act of love and art. Now it’s just too damn easy to be worth caring about.
and much more in the same vein, though his conclusion is more elegiac than polemical:
When making the decision between practicality and artistic merit, I’ll choose practicality more often than not. I may be wistful for the old days, but I’m not an idiot.

So let’s have a moment of silence, for the mix as we used to know it is dead. Technology has overtaken the experience and made it cold and impersonal. But it’s time to look forward, as the Internet has allowed us to trade and download more varied types of music, making for better-sounding, albeit more antiseptic, mixes. One of these days, Nick Hornby should do a sequel to “High Fidelity” and list Rob’s Top 5 music downloads. I’m sure it’ll be a nice read. But it just won’t be the same.

The first time I heard this form of argument, it was from my Grade 4 teacher, lamenting the arrival of the ballpoint pen, and its adverse effect on the quality of handwriting. Possibly since I never mastered the steel nib/inkwell technology still favoured by the South Australian Department of Education in the 1960s, I was not impressed. Since then, I’ve seen the same argument applied to calculators, word processing and desktop publishing. And of course, the argument wasn’t new when I first met it - in one form or another, it’s been applied to almost any technical innovation that replaces a complex skill with an easily usable machine. (It’s separate from the income-distributional arguments that apply when skilled workers are displaced by unskilled ones, although the two are often entangled).

Before defending modernity on this point, let me extract what I believe to be the core of validity in this argument. If the production of an item requires substantial skill and effort, the average quality of the items produced will be higher. This is for the same reason that (I have been told) some Japanese stores giftwrap their fruit - given the cost of a piece of fruit, giftwrapping makes sense. If making a compilation tape at all takes hours of work, and requires skills that only a music enthusiast will bother to acquire, a lot more effort and judgement will go into the selection and ordering of the tracks, correction of the levels and so on, than if a 14-year-old can put together a CD in five minutes, as is now the case.

Similarly, when WYSIWYG word processing first became feasible, it was asserted that the quality of writing declined because students were spending too much time on flashy presentation. While this is possible, I suspect the truth is that the total input of time declined substantially. Students judged (probably correctly at first) that an essay that looked professional, contained no spelling errors, and so forth would get by even if the content was pretty weak. Moreover, cut and paste made it easy to produce an apparently final version without rewriting. In this case, the problem is that those setting the essays wanted to elicit some amount of work from the students but (with the exception of those students who actually wanted to learn something) the students’ objective was to minimize the effort required to do the job. At least until teachers learned to disregard cues like good presentation, the result was a decline in average quality which (if you agree that Teacher Knows Best) made everyone worse off.

Another case where average quality declined with bad effects, following an increase in ease of use, was that of Internet newsgroups. These were useful forums as long as the skills and effort required to use them confined access to those willing to make serious contributions. When they became easily accessible (roughly when AOL merged with the Internet) the newsgroups were flooded with garbage. It’s only since the rise of blogging software that the old vision of the Internet as a forum for debate that could bypass media monopolies has reasserted itself.

In most cases, though (including that of blogging), a decline in average quality is quite consistent with an improvement across the board, in the sense that more, and better, good-quality outputs are produced, even while the average is dragged down by people who would previously not have produced at all. People like Sal Tuzzeo, quoted by Keller, may sneer that
On the subways you see people with iPods. They have, what, a thousand songs on them. Ten thousand, even. They stare random-glared into oblivion. [R]obots with shitty music taste and too much money to spend on music-listening hardware and shoes, in that order
but why shouldn’t people be free to follow their taste, shitty or otherwise? Keller argues that
Fewer people who are connected to the music they listen to translates into a less critical and picky audience for the crapola that the record companies and radio stations promote. The quality of music overall goes downhill.
but, again, why should anyone care about average quality or what is promoted on radio stations? People who are critical and picky, but don’t have the time or skills to make compilation tapes, chase down obscure records and so on, now have a much better capacity to find good music and reward those who are producing it.

By the way, talking of innovation, this post was produced using ecto, a blogging client for Mac OS X currently in version 0.2.1 but already a big improvement on anything else I’ve used. Thanks to Brad DeLong for the tip.


[Posted with ecto]

December 12, 2003

My Big Idea

Posted by Maria from Geneva

It’s pretty vague and unformed so far, but here’s the Big Idea I came away with from the World Summit on the Information Society.

Information and communications technologies (ICT), it is now fairly safe to say, have not been the democratic panacea that many of the ‘information just wants to be free and it’ll find a way’ crowd foresaw a few years ago. In many ways (and I’m thinking here of lax data privacy and over-zealous intellectual property rights protection), ICTs have had the reverse of the effect expected by the libertarian view. ICTs bring new and often sinister opportunities for government control and repression in countries where democracy is nascent or non-existent.

But opportunities are one thing. Applications are another. The Chinas, Saudi Arabias and other authoritarian regimes could not use ICTs to spy on their citizens, block/filter/monitor their access to the internet, and collate and analyse personal data in outright harmful ways, if they could not buy the technologies to do so. And who is selling them the technology? Well, many (though not exclusively) IT and equipment manufacturing companies who would never dream of being thought of by their western customers as instruments of political repression.

Now let’s look at firms whose work affects the environment - say, Shell, Exxon, Union Carbide or BP - and where they stood 10 or 15 years ago. Awful events in developing countries, such as Ken Saro-Wiwa’s death or the Bhopal disaster, really jolted these companies into realising that what might seem to cut it abroad can look pretty grim at home. I’m somewhat dubious about what the whole corporate social responsibility movement has concretely achieved. Have these companies really cleaned up their acts, or is it just about glossy brochures? But I do think there’s something to it, and the big firms seem to do some genuinely good things. The threat of government intervention seems to have put the fear of god into them. While self-regulatory efforts won’t deliver everything I might want, I recognise that they’re a good start if undertaken in good faith.

So to the ICT industry. I propose an endeavour of corporate social responsibility or an ethical code for ICT firms dealing with governments, particularly governments without functional or mature democracies. It would deal primarily with privacy and security matters - holding firms to the expectation that they not supply software or hardware to governments in ways that would not be acceptable at home, or that go against, for example, the OECD privacy or security guidelines. The trick, really, is: how do you work with western ICT companies so that they don’t sell the technologies of oppression to the oppressors? Or, in a world of grey, what are the guidelines and principles they should be thinking of when doing B2G projects in non- or struggling democracies?

Well, as I say, it’s a squirming little newborn of an idea. And I’m much too sleep-deprived and shell-shocked after the week of WSIS mayhem to really have a sense of whether something like this might fly. So please, peanut gallery, your thoughts?

December 10, 2003

Your trusty correspondent from WSIS

Posted by Maria from Geneva

First off, excuse the strange author name - an essential for a blogger who can’t be trusted to remember her own login while on the road.

As CT tries hard to keep its faithful readers up to date on all the news that’s new and improved, I am blogging from the World Summit on the Information Society in Geneva. I’m here for work, so in the interests of keeping my job, I won’t be blogging about the really juicy political bits. But at an event like this, there’s so much going on that at least I can give a flavour of what it’s like.

Apologies in advance - this is on the hoof!

What is it like? A zoo, maybe. Or sheltered housing. WSIS is full of strange species and behaviours that would probably seem just weird anywhere else. A bubble. A human version of the Eden Project. Walking around the place, with its long halls and wide open spaces, and strange, strange tribes, it feels like the kind of unreal/hyper-real American high school Gus Van Sant created in Elephant. It would be very easy to fall between the cracks here.

Upstairs are the plenary sessions where each country speaks for its allotted 15 minutes and people wander in and out. All sorts of political and territorial spats that have nothing to do with information and communication technologies get played out here. Country leaders stand up and explain how their wide-armed embrace of the information society would be complete, if it weren’t for A.N. Other neighbouring country occupying their territory. In the protocol guide, the definition of a VVIP (in a place like this, a third of the 16,000 attendees consider themselves VIPs) is a head of government, head of state, crown prince, crown princess, oh, and the highest ranking representative of the Palestinian Authority…

A few things I have learned:

- The Crown Prince of Lesotho is impressively articulate, more so than most elected presidents we saw today.
- Kofi Annan is shorter than expected.
- Azerbaijanis speak Russian at the UN; they probably don’t like to, but it’s the only way other people can understand them.
- African leaders do really still go around with bodyguards dressed like stormtroopers.
- Walkie talkie things, they hurt your ears.
- Civil society speakers get cut off at about two minutes in sessions where everyone’s allotted three.
- It’s really hard to watch the president of Rwanda talk about ISP connection fees without wondering where he was back when it was all happening there.
- Fun people are here: Gus Hosein, Marco Cappato, Stephanie Perrin, my old boss, and the Irish delegation (but I haven’t found them yet…)

Ok, and as those fun people are screaming that they’re as hungry as Elvis, I’ll hold the political commentary till tomorrow.

December 05, 2003

The Elders are getting at the Protocols

Posted by Henry

Unless I want my contribution to this blog to become some sort of Glenn Reynolds-watch, I’m going to have to stop reading him. Quite simply, whenever he posts on something I know about (EU politics; the governance of information technology), he gets it wrong. And not just wrong on details. More often than not, he’s spectacularly wrong, usually because of some conspiracy theory or another that’s rattling around in his skull. It’s really getting on my nerves. This is a particularly outrageous example.

THE NEW CLASS IS THREATENED BY THE INTERNET, with its intolerance for lies and posturing and its openness to alternative voices. Here’s the response:

Leaders from almost 200 countries will convene next week in Geneva to discuss whether an international body such as the United Nations should be in charge of running the Internet, which would be a dramatic departure from the current system, managed largely by U.S. interests.

The representatives, including the heads of state of France, Germany and more than 50 other countries, are expected to attend the World Summit on the Information Society, which also is to analyze the way that Web site and e-mail addresses are doled out, how online disputes are resolved and the thorny question of how to tax Internet-based transactions.

The “new class” types who dominate international bureaucracies can’t be expected to take the threat to their position lying down. And, as I’ve written before, it’s a very real threat to them, and to others who profit from silencing people. As blogger-turned-Iranian-Parliamentary-candidate Hossein Derakshan notes: “We can’t vote, but we can still say what we really want.”

That’s a horrifying notion to some, and you can expect more efforts to put a stop to it.

It’s hard to know where to start. But I’ll try.

Reynolds takes two facts, jumbles them together, and generates a breathtakingly stupid conspiracy theory about the “new class.” The facts are as follows. Some authoritarian countries would like the UN to play a more prominent role in Internet governance, as this would make it easier for them to control its direction. See further, Dan Drezner. The International Telecommunications Union (ITU), which is part of the UN infrastructure, would like to play a wider role for its own, quite different reasons. A few years ago, it tried to take control of the Internet domain name allocation process, and failed. Now it’s seeing an increasing threat to its international role and swingeing budget cuts, as the Internet transforms conventional telecommunications structures. It’s trying for a second bite at the cherry, in a desperate (and doomed) attempt to ensure its survival as a healthy international institution.

So, we have two, starkly different groups of actors - authoritarian states who don’t like openness, and international bureaucrats, who face the chop if they don’t get control of the Internet. Reynolds conflates the two, to argue that the “‘new class’ types who dominate international bureaucracies” “profit from silencing people” and are “threatened by the Internet with its intolerance for lies and posturing and its openness to alternative voices.” In other words, he seems to be claiming that there is an international conspiracy of bureaucrats, encompassing the UN, the EU, and whoever you like yourself, who are opposed on principle to democratic openness. Cue the black helicopters.

As a thought experiment, go back through Reynolds’ post, and wherever you see the words ‘new class,’ substitute ‘International Zionist Movement.’ The end result will look pretty ugly. But then, it’s an ugly post to begin with.

December 02, 2003

The un(?)intended consequences of courseware

Posted by Eszter

Five years ago, when a few savvy instructors rushed to integrate the Web into their teaching and put their syllabi online, the idea exchange so crucial to academia was alive and well in the teaching realm of our work. A few years later, witness how the password-protected courseware adopted by so many campuses makes it increasingly impossible to see others’ teaching materials. Sure, some people may not want to share their syllabi, but I suspect many wouldn’t mind. Regardless, the increasing proliferation of these services makes the teaching side of our work less and less visible to a wider audience. So while blogs may be opening some aspects of teaching, courseware is closing others.

In the summer of 1999 I gave a talk on a panel at the American Sociological Association meetings about the use of the Web in teaching. I was reporting on my experiences having built an extensive Web site for a class on the Sociology of Latin America: Mexico and Cuba. (Don’t laugh if you take a look - that wasn’t so bad for a 1998 Web site.) One of my advisors had hired me on a special grant to build a Web site that was especially elaborate with lots of resources. I included numerous links to relevant materials including lots of images. We even tried out having a weekly quiz based on online content. Reactions from students were quite positive, on the whole.

One of the people in the audience of the panel at the ASA meetings inquired how people would be able to create such Web sites if they didn’t have special grants to hire grad students to compile them. I replied that the nice thing about the Web is that one could share the wealth. Once posted by someone, the site would be available for others to use as well.

Flash forward less than five years, and this is increasingly rare. Courseware at most schools is password-protected. At my university, I can’t even look at the course Web sites of other faculty in my own school. (This comes down to local decisions, though; at my previous university I could log onto any course’s site.)

There are some exceptions. MIT’s OpenCourseWare makes many of their course materials public. But this seems increasingly uncommon. The Resource Center for Cyberculture Studies has a long list of online syllabi. But notice that the number of course links seems to be decreasing. There could be numerous reasons for this, of course, but part of it may have to do with syllabi disappearing behind members-only systems.

Although I may adopt our courseware for teaching because it does offer some helpful features (e.g. an automatic class list for easily communicating with all enrolled students), I plan to post copies of my syllabi on the open Web as well, in case anyone is curious. Here’s the course I just compiled on the Social Implications of Communication and Information Technologies. Since it’s a course I had never taken myself, I was very interested in finding related syllabi out there… but unfortunately bumped into a few password-protected sites along the way.

November 18, 2003

Stupid code

Posted by Eszter

I was searching for the journal The Information Society yesterday on our library’s online catalog system. I had looked up the journal the day before, so I knew that we had a subscription to it. Regardless, yesterday I kept getting “Your search found no matching record”, which was incredibly frustrating given that I had just browsed the journal the day before. Finally, I decided to try the search without the “the” in the title. I’m not sure why that occurred to me, but I gave it a try. Surprise, surprise: searching simply for information society specified as the Journal Title worked.

I realize search engines often exclude articles like “the”, but if the article is part of a journal title, including it should not count against you. It’s one thing to exclude it completely yet bring up related results nonetheless; it’s another to have it count as a hindrance leading to no results at all. In case anyone’s wondering, using quotes or a plus sign in front of “the” - strategies that would help in some search engines - does not lead to any results either (in fact, the quotes confuse the system completely).
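For the code-minded, here is roughly the behaviour I was hoping for - a minimal sketch in Python that I made up myself (this is emphatically not Endeavor’s code): treat articles as noise on both sides of the comparison, so that knowing a journal’s full title can never hurt you.

    # A made-up illustration, not Endeavor's code: drop articles from both
    # the indexed title and the query before comparing, so that typing the
    # full title can never produce fewer results than omitting "the".
    ARTICLES = {"the", "a", "an"}

    def normalize_title(title):
        words = title.lower().split()
        return " ".join(w for w in words if w not in ARTICLES)

    def title_matches(query, indexed_title):
        return normalize_title(query) == normalize_title(indexed_title)

    # Both forms of the query now find the journal:
    print(title_matches("The Information Society", "Information Society"))  # True
    print(title_matches("Information Society", "The Information Society"))  # True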

I wrote to the library to tell them about the incident and to ask whether they could tweak the code so that knowing the actual full title of the journal you are seeking wouldn’t count against you. I just received a response according to which the library has already asked Endeavor, the online catalog system provider, to make this change - as have other libraries, apparently - but they have not cared to improve the system. Ugh.

October 27, 2003

Book cites

Posted by Eszter

Starting today, searches on Amazon.com will look for your terms in the entire text of over 120,000 books. Not only do you get a list of books that cite an author or mention a concept, but you can also view a pdf copy of the page where the citation occurs.

For an academic, this serves as an extremely helpful complement to the Social Science Citation Index (or other citation indexes), which allow similar searches for journal articles.

It is also a fun procrastinatory tool, as I try to figure out which of the Hargittai references are to my work and not to the work of my parents. :) (Thanks go to my Mom for calling this new feature to my attention.)

October 21, 2003

News for Nerds? Some of it matters

Posted by Tom

Mark Kleiman picks up an important story that I’d half-noticed on Slashdot a little while back but had given little thought to. Fortunately, Professor K has a better attention span than I do. It could well be that this stuff has had broader coverage in the US than in the UK - in which case, my apologies to American readers for repeating the backstory - but still…

The meat of the issue is that the fair and balanced and impeccably competent voting-machine company Diebold is doing its damn best to suppress the web-publication of leaked internal memos revealing some absolutely shocking security holes in their product.

Fortunately, the Electronic Frontier Foundation thinks Diebold is trying it on, and is backing the defence of the affected websites.

From a geek perspective, I offer the opinion that the whole e-voting thing is bad enough when you trust the political neutrality of the vendors, given the scope for technical fuckups. And it’s also worth saying that we Brits have every reason to fear that we’ll be subjected to similar nonsense given the blind optimism of our government, which appears to be countenancing voting mechanisms which include ‘the use of the Internet, text messaging, interactive digital TV, and touch-tone telephony’. Gah.

Back to Diebold. As someone who fiddles with relational databases as part of my living, I don’t know whether to laugh or cry when it is revealed that the system which is offered as the backing infrastructure for American democracy involves as its lynchpin an Access database.

Access, as any fule know, is a toy program for putting together a database on which you want to record the details of your CD collection or keep track of the contents of your sock-drawer; it does not supply a platform which anyone with the tiniest bit of nous would use for anything that actually mattered.

If a reader can provide me, in confidence, with the name of a financial institution which relies on Access as a core component of a critical business system, I shall be gigantically surprised, and then move my account with them, if I have one, when I have recovered. Perhaps I’m just weird, but I really do care at least as much that I can trust the means by which my government is elected as that my bank statements should be correct each month.

(It is of course true that Access could merely serve, via some kind of ODBC interface, as a view of a proper RDBMS behind the scenes, in which case Diebold’s technical architects would not be the shocking idiots they appear, at first sight, to be.)
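For the curious non-database-fiddler, that face-saving architecture would look something like the following minimal Python sketch - every name in it, from the DSN to the table, is invented by me, not taken from Diebold:

    # A hypothetical sketch (mine, not Diebold's): the Access-style client
    # is just a window onto a server-side RDBMS reached over ODBC.
    import pyodbc

    # "TallyServer" is an invented DSN; it would point at the real database
    # server, where the data, constraints and audit log actually live.
    conn = pyodbc.connect("DSN=TallyServer;UID=reader;PWD=example")
    cursor = conn.cursor()

    # The client merely reads; tampering with the desktop front-end
    # changes nothing that is held on the server.
    cursor.execute(
        "SELECT candidate, SUM(votes) FROM precinct_tallies GROUP BY candidate"
    )
    for candidate, total in cursor.fetchall():
        print(candidate, total)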

And now for the full-on, utterly predictable bit where I do my open-source zealot shtick. If we’re going to have e-voting of any kind, I want to see the code. OK, I personally may well not want to see the sources, but I certainly do believe that anyone who reckons they can make some sense of them, and might be able to point out bugs and some potential fixes, should have the ability to see all the code for the software that ends up determining who gets to govern us. Security through obscurity won’t do.

Assuming Diebold’s developers to be super-talented and their bosses’ bosses to be entirely disinterested concerning the results of the elections their software is responsible for calling, there will most likely still be holes in the implementation unless there have been a lot of eyeballs looking at the code trying to find the cockups. Black boxes are no good.

Now, if it is mandatory to place the source for your voting system in public view, it may turn out that there is no stable business-model for companies seeking to produce software in this area. If that proves to be the case: too bad. Collectively, we’ll have to work out some other means for generating the software we feel we need.

Chads, dangling, hanging and otherwise, were clearly a menace; but they could be counted after the fact, and it was obvious when the machines which produced them were up the spout, since their innards could be inspected. We should ask for no less when considering their prospective replacements.

Solitaire mysteries

Posted by Henry

I’ve just finished reading Bruce Schneier’s Beyond Fear, which I recommend to anyone who’s interested in security issues after 9/11. Schneier’s a famous cryptographer - if you’ve read Cryptonomicon, you’ll be familiar with his Solitaire code - but over the last few years he’s become more and more interested in the human side of security systems. And this is where Beyond Fear excels - it describes in clear, everyday language how we should think about security in the modern world and why even the most sophisticated (especially the most sophisticated) security systems are likely sometimes to fail.

Unsurprisingly, Beyond Fear talks at length about the security choices made after 9/11. It’s far from complimentary about most of them, but it doesn’t just provide a list of entertaining, stupid-security-award-style gotchas. Schneier talks about the political and technical processes that produce manifestly bone-headed policies - political bargains struck by actors with their own agendas; the perceived need for “security theater” to reassure people that something is being done to protect their safety; the manifest impossibility of foolproofing any reasonably complex system. He stresses that security involves trade-offs rather than perfect solutions. Not only that: he provides some useful ways to think about when these trade-offs do, and do not, make sense. Schneier’s take is interesting to those, like me, who usually think about new security measures in terms of how they hurt privacy; if he’s right (and he has some good arguments and evidence to back him up), many of these measures don’t even make sense in their own terms.

The book is aimed at non-professionals, which means that sometimes the tone is a little too folksy and straight-talking for my liking. Schneier uses a couple too many quasi-topical instaquotes from famous people in order to try and sweeten the pill of his (deadly serious) argument and prescriptions. But Beyond Fear still has a lot to commend it, even to those who already know something about the issues that Schneier is writing about. He has a very nice discussion of how complexity theory and emergent phenomena afflict security systems, laying out the main ideas without lapsing into jargon. His discussion of the relationship between detection and prevention strategies is worth the price of the book on its own. It lays out in a simple yet devastating way the reasons why Diebold style electronic voting machines are a bad idea.

After the 2000 U.S. presidential election, various pundits made comments like: “If we can protect multibillion-dollar e-commerce transactions on the Internet, we can certainly protect elections.” This statement is emphatically wrong - we don’t protect commerce by preventing fraud, we protect commerce by auditing it. The secrecy of the vote makes auditing impossible.

Exactly right - and a lovely insight to boot. If you’re at all interested in these topics, you need to read this book.

October 02, 2003

Sexing up Spaghetti

Posted by Tom

I’m moving from one software job to another, and during the period of my notice (just ended, thanks for asking) I was placed on documentation duty. It has been my proud responsibility over the last month or so to attempt to capture, in flowing English prose and, naturally, UML, the state of the pile of mouldering spaghetti that my erstwhile employers like to call their ‘system’. Feh.

I’m pleased but quite surprised to be able to say that I managed to avoid the temptation to get all Borgesian on their asses by making the whole thing up. That would have been much more fun than what I ended up doing, but a bit too cruel to my successor.

Anyway, I particularly enjoyed a conversation on my last day with a colleague who is Spanish, and whose written English is excellent, but who relies a bit too much on the free newspaper ‘Metro’, given away on the tube in the morning, for his education in the vernacular.

He asked me if one particular document I had prepared had been ‘sexed up’. When I’d picked myself up off the floor and wiped away my tears, I denied the charge indignantly. (It is impossible to sex up a description of spaghetti.)

BBC journalists really do need to show more care about introducing this kind of thing into the language. They just don’t know how much trouble they end up causing.

September 28, 2003

Orwell Meets the Group of Seventeen Meets ...

Posted by Henry

John M. Ford comments in an Electrolite thread on mixed metaphors and cliches.

If you want a vision of the future, it is a wireless broadband network feeding requests for foreign money-laundering assistance into a human temporal lobe, forever. With banner ads.

As Brad DeLong readers may recall, Ford is responsible for introducing Zweeghb into the Scrabble lexicon. A man of many talents.

September 25, 2003

Dogs in the manger

Posted by Maria

It’s a bright day for the rainbow of opponents who lobbied all summer against the excesses of the European software patenting directive. News.com reports that the European Parliament voted yesterday to pass the extremely unpopular software patent directive - but only after lumbering it with amendments that may make it too difficult to implement in the member states. The Parliament could have thrown the directive out altogether; though the result is messy, its vote has allowed common sense (and the conclusions of independent research) to prevail. It strikes a blow against oligopoly and tries to keep the way open for truly competitive innovation. (See, for instance, some economists dismissing as daft the idea that software patenting creates economic growth.)

This directive should have been a relatively straightforward housekeeping exercise in making sure patents are enforced in all EU countries. But it opened another front in the war to extend intellectual property rights protection to every half-decent or half-baked idea any Dilbert can come up with.

Aside from the immediate analysis of the directive and its aftermath, there is some more food for thought: firstly, the benefit, if any, for the US in pressing for these extensions, and secondly, the contempt with which the Commission has treated the European Parliament.

The State Dept. exerted great pressure on the European Commission to extend protection for software patents and ‘business processes’ throughout Europe, objecting strenuously, for example, to an amendment that allows patented software to be used without permission or payment if the use is strictly for interoperability purposes. Larry Lessig has blogged before about the bizarre lengths to which some US agencies will go in opposing the open source movement. Lessig puts the USG’s support of software patenting in Europe down to the logic that it will help established innovators (predominantly US ones) and harm new innovators, creating a barrier to entry for the small guys. This seems a little simplistic, though I do think industry capture explains a lot about the official US position.

But surely it’s in the US’s broader and long-term interests that open source thrives. A rising tide lifts all boats, and all that. Yes, it’s harder to hug the benefits of open source software to one single company (no matter how big). And it’s damn tricky for the beancounters to add the positive externalities of all those busy coders at home and abroad / more and smoother interoperability / better functioning products and networks / more and better and cheaper software on the market, etc. into a tidy equation of shareholder value. But still, isn’t anyone high level taking a long-term strategic view of this? And doesn’t it jar just a little with the whole Washington consensus rhetoric of competition being slightly painful to begin with, but ultimately good for everyone?

On the European side, it’s not every day that legislation on an obscure subject like software patenting becomes the subject of a petition signed by 150,000 people. But an extraordinary range of people - from software companies themselves to SMEs to economists to scientists to Linux supporters, and all the way to the ordinary people who are simply fed up with the patenting of The Bleedin Obvious - worked right through the summer and succeeded in getting the usually docile European Parliament to jam a spanner in the Commission’s carefully contrived works.

And the reaction of the European Commission to this flexing of democratic muscle? “Unacceptable,” says Frits Bolkestein, head of DG Internal Market. From news.com:

On Tuesday, in a debate ahead of the vote, Commissioner Bolkestein also criticized the amendments, telling the Parliament that “the majority of those amendments will be unacceptable to the commission.” He said if the “unacceptable” amendments were passed, the commission could withdraw the directive entirely and seek to achieve patent harmonization through a renegotiation of the European Patent Convention.

“If I may be blunt…the process of renegotiation of the European Patent Convention would not require any contribution from this Parliament,” Bolkestein told the Parliament.

Or, in contemporary parlance, he flipped them.

I can’t quite decide between jaded weariness and some good, old-fashioned passionate outrage. In any case, EDRI-gram will keep us up to date on what happens next, and on whether the Commissioner does indeed decide to throw his toys out of the pram.

September 18, 2003

New adventures in WiFi

Posted by Maria

Hotspots are multiplying all over the place, not just in Stateside Starbucks but even along the Paris metro. The only time I’ve used wifi so far was at CFP 2003, where it came in extremely handy for blogging the event. But think of it: free internet, wherever you go - how great is that going to be?

Jonathan Ezor of the Touro Law Center in Huntington, NY has written a piece on three potential problems of publicly available WiFi hotspots: misuse of anonymity, free-riding, and liability of providers. (He actually wrote it in June; I just stumbled upon it this morning.)

The first problem arises from the fact that publicly available wifi hotspots could do away with the need for users to register or identify themselves in some way, tying their computer to a personal identity in meatspace. In some set-ups, users of hotspots will be able to act anonymously, making detection of abuse (DoS attacks or other computer-related crime, spam, harassment, etc.) much, much harder. This would certainly be a problem for law enforcement. However, I think Ezor over-estimates current traceability on the internet minus the use of wifi, as this piece by Richard Clayton shows.

As things stand, the G8 Lyon working group on hi-tech crime has been working against anonymity in the communications infrastructure for several years now. Mostly, they’ve been worried about free traditional internet services and pay-as-you-go mobile phones, but you could see how wifi could become a real concern. I’m no expert on the tech aspects of this (and thoughts/expertise are very welcome), but it seems to me that the software could be set up to make some sort of registration process mandatory. As wifi develops and the various pilot projects conclude, we may see more registration requirements, particularly for hotspots provided by public agencies or public/private partnerships. People can always provide false information, but that’s no different from users’ existing relationships with ISPs. And ultimately, we’ll see the law-abiding signing up, and the bad actors, as always, finding ways to escape monitoring and detection.

Free-riding seems to present another teething problem. What to do about companies or users who hog bandwidth and slow down, or even stop, publicly provided network access? This raises an interesting question: to what extent is a wifi hotspot a public good? It’s not purely non-rivalrous - bandwidth is finite, so heavy use by one person degrades the service for everyone else - nor is it, depending on registration requirements and network monitoring capability, entirely non-excludable. It all depends on who owns the system and how it’s set up, of course. But, over time, the trade-off between network efficiency and user convenience may also tend toward registration requirements, which will provide a means to stop bandwidth hogs doing their thing.
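To make that a shade more concrete, here is a toy Python sketch of the kind of per-user throttle that registration makes possible - all names and numbers are invented, and no real hotspot is being described. Once traffic can be attributed to a registered user, a simple token bucket caps the hogs without shutting anyone else out:

    # Toy sketch of a per-user token bucket: registration lets the hotspot
    # attribute traffic to a user, and the bucket caps sustained usage.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            # Refill in proportion to elapsed time, up to the burst cap.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True
            return False  # over quota: drop or queue this user's traffic

    buckets = {}  # one bucket per registered user

    def admit(user, nbytes):
        # Invented numbers: ~128 kB/s sustained, 1 MB burst per user.
        bucket = buckets.setdefault(user, TokenBucket(128_000, 1_000_000))
        return bucket.allow(nbytes)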

Ezor calls for a liability safe harbour for wifi hotspot providers, pointing to existing (US) liability exemptions from libel claims. If only libel claims were the biggest problem communications service providers faced. What is really needed is liability protection against IP and copyright claims. Under EU legislation (specifically, Article 12 of the E-Commerce Directive, on ‘mere conduit’), providers of communication services should be exempt from liability for the information they merely transmit. But not only are content owners taking legal action against various European CSPs who could not possibly be expected to monitor every piece of information on their networks; even this protection is now to be removed altogether by the draft IPR Enforcement Directive. So how about a future in which the RIAA sues not only grandfathers and 12-year-old girls, but also local councils, hospitals, schools, and, heaven forbid, mass transit systems?

At least in the early days of wifi, the technology probably will be used by early-adopting criminals (amongst others). Forget infrastructure and rollout costs. Liability, risk, and the expense or impossibility of insuring against them are the most likely candidates to smother wifi at birth.

September 15, 2003

Is piracy killing music?

Posted by Chris

The music industry claims the download pirates are killing music. So how bad would things be if the music industry died? John Holbo paints a plausible picture.

September 02, 2003

Pot. Kettle. Black

Posted by Maria

It turns out that Kazaa has succeeded in having Google remove several responses to search queries involving ‘Kazaa’ and ‘Kazaa Lite’. The grounds? Violation of copyright, of course.

Google ‘Kazaa Lite’ and a note pops up at the bottom of the page:

“In response to a complaint we received under the Digital Millennium Copyright Act, we have removed 11 result(s) from this page. If you wish, you may read the DMCA complaint for these removed results.”

Googling ‘Kazaa’ yields 6 removed results. Have no fear, though. The DMCA complaint that Google thoughtfully links to contains a list of the banished URLs.

What’s interesting here is that rights holders are now picking off the low-hanging fruit by targeting search engines - shooting the messenger, so to speak. Rightsholders much prefer the big, easy targets, e.g. technical intermediaries such as telcos and ISPs, and are succeeding in changing laws all over the world to tip the balance even further against the idea of communications companies providing ‘mere conduit’ as the postal service does.

The latest push comes through the EU Commission’s IPR Enforcement Directive, which sacrifices the right to privacy, the European internal market, competitiveness and the entire communications industry to keep on filling the coffers of content owners. Check out London-based FIPR for an excellent analysis of everything that’s wrong with this proposal.

July 29, 2003

Babel, Software, Work

Posted by Tom

Here’s a bit I rather liked in Fred Brooks’ classic essay on the management of software engineering projects, The Mythical Man-Month:


According to the Genesis account, the tower of Babel was man’s second major engineering undertaking, after Noah’s ark. Babel was the first engineering fiasco.

The story is deep and instructive on several levels. Let us, however, examine it purely as an engineering project, and see what management lessons can be learned. How well was their project equipped with the prerequisites for success? Did they have:

A clear mission? Yes, although naively impossible. The project failed long before it ran into this fundamental limitation.

Manpower? Plenty of it.

Materials? Clay and asphalt are abundant in Mesopotamia.

Enough time? Yes, there is no hint of any time constraint.

Adequate technology? Yes, the pyramidal or conical structure is inherently stable and spreads the compressive load well. Clearly masonry was well understood. The project failed before it hit technological limitations.

Well, if they had all of these things, why did the project fail? Where did they lack? In two respects - communication, and its consequent, organization. They were unable to talk to each other; hence they could not coordinate. When coordination failed, work ground to a halt. Reading between the lines we gather that lack of communication led to disputes, bad feelings, and group jealousies. Shortly the clans began to move apart, preferring isolation to wrangling.

TMMM, as the essay has come to be known, was first published in 1975, so some of what Brooks had to say has dated a bit, but there’s lots of stuff in there that’s wise and fresh to a reader in the present day. I’d add that the prejudices that those who, like me, were educated in the humanities might have about a work of this kind are pretty well confounded, since Brooks’ writing manages to be extremely clear and precise whilst achieving a nicely relaxed, off-duty-with-tumbler-in-hand tone, and that’s not an easy trick to pull off.

I’ve a sense that portions of the audience are beginning to wriggle uncomfortably in their seats: doesn’t this blog generally cover current affairs from a moderately egg-headed philosophical/social-scientific perspective? Why is this guy wittering on about elderly texts about the management of software projects?

Don’t start chucking popcorn yet, folks; I may have some problems with the egg-headed part of the brief, but I aim eventually to pull this around to some stuff that seems a bit closer to the central topics Crooked Timber usually covers.

The main thing that TMMM is famous for pointing out is that you can’t get a project which is behind its schedule back on track by adding more code-monkeys to it: Brooks offers as an oversimplification the slogan that ‘adding manpower to a late software project makes it later’. Why would that be?

Well, first up, some tasks are inherently not susceptible to being partitioned into subtasks, so adding extra people to do them avails you precisely nothing: as Brooks memorably puts it, ‘the bearing of a child takes nine months, no matter how many women are assigned’.

More fundamentally, where a task can be partitioned, one has to take into account the communications overhead which comes from adding people. In general, the people working on a given subtask will have to communicate with one another and also with those working on the other subtasks. This means that as the number of developers grows linearly, the number of communication paths grows quadratically: n people have n(n-1)/2 possible pairwise channels.
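To put numbers on it (the n(n-1)/2 formula is Brooks’ own; the snippet is mine, in Python):

    # Pairwise communication paths in a team of n people: n * (n - 1) / 2.
    def communication_paths(n):
        return n * (n - 1) // 2

    for team_size in (3, 5, 10, 20):
        print(team_size, communication_paths(team_size))
    # 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190: headcount up roughly
    # sevenfold, coordination paths up roughly sixtyfold.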

And systems of any size cannot be absorbed by new monkeys straight away: they need to be inducted into the code’s strangenesses and mysteries (which it undoubtedly has, since the project is presumably late for a reason) by someone who knows about them. Assigning such a guru to training the new chimps has an obvious opportunity cost.

Brooks puts it like this: ‘since software construction is inherently a systems effort - an exercise in complex interrelationships - communication effort is great, and it quickly dominates the decrease in individual task times brought about by partitioning’.

By emphasizing the importance of communication as a necessary feature and overhead cost of software development, Brooks nudged people towards recognising that software gets built by people, too. When we manage to do away with the picture of systems being built by interchangeable typing monkeys, we might begin to think about running projects in a fashion that reflects a different picture.

Of course, anyone who has done software-related stuff professionally (or who, indeed, has worked in any other project-based environment) knows that Brooks may as well have never bothered writing his famous essay. Managers still try to save projects by adding people, and seem perpetually surprised when it doesn’t work. More centrally, they continue to pray for a future in which employees really are entirely fungible ‘plug’n’play’ components who can be heaved about the place according to whim. And then sometimes they behave as if that future is already here.

So by the late ‘eighties, we have the opportunity to read books like Tom DeMarco and Timothy Lister’s PeopleWare, which tries to explain why some different approaches need to be tried. The really wonderful thing about PeopleWare is that it hammers home the point that projects are staffed by human beings, who (shock) aren’t interchangeable, (horror) have lives, and (shame) would really prefer to hang on to them. It does so by drawing on stacks of empirical evidence about projects that succeed, projects that fail, the circumstances under which they do each, and so on. (There’s also a fascinating chapter en route on the lessons that the work of Christopher Alexander might have for the physical configuration of the modern work place.)

DeMarco and Lister spend a lot of time explaining why teams which are treated decently outperform those which aren’t (and hence why the hardest-headed bean-counter ought to pay attention to their findings), but there’s also a real sense of anger at the sheer human waste generated by the many shoddily run projects which end up eating up the lives of those unfortunate enough to find themselves assigned to them. (There’s a good blast of that anger here, in a letter DeMarco wrote about so-called ‘Death-March’ projects.)

To pull things back towards Crooked Timber territory, what I most admire about PeopleWare is the way that DeMarco and Lister are reacting against a sort of Rand Corporation-inspired, Econ 101, and above all dehumanised conception of the working world. They understand, amongst other things, that (most) people care about their work, and want to do it well; that people are demotivated by foolish processes dictated from above whose only possible rationale is that those who are required to follow them cannot be trusted; and that work cannot replace the many other things that make for a decent human life.

I certainly have no idea about the state of the academic literature on this kind of thing, but I find it encouraging that it’s not too hard to find serious software methodologists beginning to look seriously into the contribution that sociological and ethnographic research strategies might make to their efforts to work out how to design projects. This paper, for instance, makes me feel optimistic that one day, the human beings may win out in the software industry.

Now, I’ve been drawing on some stuff that aims to solve a very particular problem: how the hell to run programming projects on time, to budget, with the right feature-set, and without driving anybody to a nervous breakdown. But the broader lessons about the work-place seem pretty obvious: businesses are run by people, who can only do their jobs well if they have some kind of internal motivation to do them at all, and who will do them poorly if they are not trusted to think for themselves. The pleasures of craft, and the autonomy allowed to professionals, are things that most of us need in order to be productive workers. We also need them to be happy human beings.

Perhaps someone should send a copy of PeopleWare to the senior managers at British Airways, who clearly don’t understand any of the above.