Gresham’s Law and Blogging

by Henry Farrell on February 24, 2005

Two slightly worrying posts that suggest to me that the linked economy of the blogosphere might be more fragile than we would like. First, Brad DeLong gives us an economist’s take on Technorati’s “recent difficulties”:

bq. For an economist, this is absolutely fascinating. There is an underlying resource here: the decision of a human being that such-and-such a webpage was worth linking to is a valuable and useful piece of information. There are businesses (Google and Technorati) that grow up to harvest, repackage, and make money off this information. And then there are the people–comment spammers, link spammers, trackback spammers, link-farm creators, et cetera–who see an economic edge from setting up internet robots to pollute the underlying web-structure information stream.

bq. The standard economist’s way of dealing with all problems is to advise (i) setting up a system of property rights so that (ii) someone controls each resource of value and make sure that (iii) that someone has the incentives to properly husband the resource and ensure it finds its way to its most valuable use. But when the valuable commodity is the indicator of human attention that is the underlying structure of the web, it is not at all clear how this is to be accomplished.

Second, “Steven Berlin Johnson”: tells us that Jason Kottke is effectively selling links from his blog.

bq. I just noticed that Jason is including a list of donors to his site, including HTML links to their websites if they choose. (See my previous post for background.) It occurs to me that Jason’s put entirely the wrong spin on this whole pledge drive thing. He’s not asking for donations. He’s selling PageRank! A link from has got to have enough cred with Google to make any blogger want to shell out $30 bucks for a Kottke link to his or her front door. Now — that’s a real “A-List blogger” business….

The juxtaposition of these is both interesting and worrying. As Brad says, the informational economy of the blogosphere is very poorly understood. But I still think that we know enough to put forward some initial hypotheses as to why the blogosphere works as well as it does. First, as Brad says, the underlying value of the blogosphere is that it is a system that more or less efficiently conveys readers’ decisions that a web page is worth viewing. It does so through links. As Rebecca Blood observed a couple of years ago, the best way for a blog to get attention or readership is to get links from other blogs – especially well known ones. Links are the currency of the blogosphere, and they’re valuable because they carry real informational content: they tell us about the blogs, or posts, that another blogger considers worth reading.

This has led to the creation of a sort of informal economy of link exchange, with norms regarding due credit, reciprocity and so on. It’s by no means perfect, but it does a pretty good job in ensuring that good posts and good blogs get attention. Not a perfect job – network effects, path dependence, link cartels and so on all have a distorting impact – but, as stated, a pretty good job. Both ‘big bloggers’ and services like Technorati then serve as information filters that lead blog-readers to interesting posts.

The problem is that the political economy of link exchange on which this rests may have some inherent fragilities. Links are valuable currency because they refer to the ‘interestingness’ (an ugly word that should nicely annoy “Mark Kleiman”: of a specific post or blog. Interestingness is a subjective concept, of course, but the more people (especially people who share your tastes) who find a post or blog interesting, the more likely it is that you yourself will find it interesting. If the relationship between links and underlying interestingness is broken, then many of the advantages of the blogosphere as a means of sifting through views and highlighting the interesting ones will evaporate. I can see no other obvious metric that would replace inbound links as a means of establishing which posts or bloggers are more interesting than others.

DeLong and Johnson’s posts point to two phenomena, both of which could weaken the relationship between links and subjective level of interest. First, link farms and other attempts to game the system could seriously hurt the ability of Technorati and other services to provide an overall snapshot of the blogosphere, the most interesting blogs, the hot topics and so on. But the kind of entrepreneurial approach to linkage that Johnson identifies could do some damage too. If top bloggers were to go a couple of steps further than Kottke has, and start flogging off links in their blogroll (flogroll) to the highest bidder, this too would damage the relationship between links and interestingness, and weaken the blogosphere as a whole. Links would become an indicator of how much money a blogger has, not of whether she has anything interesting to say (a problem that of course applies to pay-to-play search engines today). Both link farms and flogrolls would create a sort of Gresham’s Law effect, driving out good links (or at least greatly weakening their informational value), and thus hampering the efficiency of the blogosphere as an aggregator of interesting opinions.

Of course, the lesson here isn’t that the blogosphere is doomed – like Brad, I reckon that the actors with encompassing interests (Google etc) have an interest in coming up with technical solutions to the former problem, while the latter is largely a hypothetical. But what both of these suggest to me is that the informational economy of the blogosphere isn’t as self-stabilizing as people often assume. It depends on a set of initial conditions – most particularly, the relationship between links and interestingness, and a set of norms and practices about the exchange of links. If either of these were seriously to be damaged, there’s a strong likelihood that the ability of the blogosphere to serve as an aggregator of interesting facts and opinions would be damaged too.




ogged 02.24.05 at 6:06 pm

One way in which it might remain self-stabilizing is that sites selling links will themselves (I assume) become less popular and less linked.


Cranky Observer 02.24.05 at 6:12 pm

> The standard economist’s way of dealing with all problems is to advise (i) setting up a system of property rights so that (ii) someone controls each resource of value and make sure that (iii) that someone has the incentives to properly husband the resource and ensure it finds its way to its most valuable use.

Funny, because that is the exact opposite of how the Internet developed: it was designed and built with government money by graduate students working for pennies who didn’t try to lock up the “intellectual property” for themselves (the way biology researchers do today). It was then given away for free at a time when this was thought to be the appropriate course of action for university research, particularly that funded by the public.

Then the Internet grew to popularity largely on the back of tools developed by the GNU project under the General Public License (GPL) and CERN (another public entity with a “gift culture”).

Finally a very large portion of the Internet is today supported by Linux-based systems, Linux(tm) being another tool built under the umbrella of that pesky ole GPL.

How does that history fit in with the economists’ desire to monetize everything?



MQ 02.24.05 at 6:19 pm

Why do you think the blogosphere works well? There are great things about it, but they all come from the basic infrastructure of the web — that it’s cheap to set up sites and easy to communicate. But in terms of the information exchanges within the blogosphere itself: a lot of the most popular blogs (e.g. Instapundit) are complete crap. IMO the blogosphere works about as well in selecting and promoting quality information as other market-based information systems, like unregulated commercial television, do. That is to say, not very well at all.


Michael 02.24.05 at 6:19 pm

I think there’s a problem with your term “interestingness,” Henry, that somewhat vitiates your discussion here. The problem is measured by the difference between “interestingness” and “relevance.”

The conceptual leap made by Google was to recognize, in the structure of the Web link (a marriage of resource-pointing behavior with keywordish description), an opportunity to analyze, across the information space, the existing association of resources with intelligence about them. That’s an analysis of relevance: simplistically, the more pages associate a given resource with a given bit of description, the likelier it is that the resource is relevant to the bit of description.

To treat links as bloggers do, as an attentional currency—ideally, a measure of interestingness—follows a different conceptual model from Google’s, though, even if both can be made subject to similar attacks meant to distort the information space. “Reciprocity” is the chief difference: the link, in the Google model, isn’t reciprocal; it ends at the pointed-to resource. Where links are treated as currency, though, reciprocity—rather than pointing-to—is the fundamental gesture. “Interestingness” may or may not be an artifact of the link-exchange, but it’s only an artifact, not part of the structure of the exchange.

In other words, your notion that “the relationship between links and interestingness” might be in danger of breaking isn’t persuasive to me, because I think the relationship was always-already broken as soon as links became currency. And while the blogosphere does, indeed, reliably produce interestingness as an artifact of link exchange, link exchange is a wildly unreliable measure of interestingness within the blog space. In fact, the power-law effect in social networks pretty much guarantees that, as it expands, the link structure of the blogosphere will become less and less reliable as an index of good/interesting: necessarily, more and more such stuff is going to languish in the tail of the distribution. Flogrollers, in that sense, are already (as it were) behind the curve: they can’t distort the picture more than the network effect already has, and will continue to.
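The “links as votes” model Michael describes can be sketched as a minimal PageRank-style power iteration. This is an illustration of the general idea only – it is not Google’s actual algorithm, and the toy link graph is invented:

```python
# A minimal PageRank-style power iteration over a toy link graph,
# sketching the "pointing-to" model discussed above: a page's score
# is fed by the scores of the pages that link to it.
def pagerank(graph, damping=0.85, iterations=50):
    """graph: {page: [pages it links to]}. Returns a score per page."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone gets a small "teleport" share, plus shares passed
        # along links from pages that point to them.
        new = {p: (1 - damping) / n for p in pages}
        for p, outlinks in graph.items():
            if outlinks:
                share = rank[p] / len(outlinks)
                for q in outlinks:
                    new[q] += damping * share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = pagerank(toy)
print(sorted(scores, key=scores.get, reverse=True))  # "c" collects the most links
```

Note the asymmetry Michael points to: the computation only ever follows links forward, so reciprocity plays no structural role – which is exactly what distinguishes this model from link-as-currency exchange.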


Henry 02.24.05 at 7:02 pm


That’s a really quite interesting argument that I’ll have to think about (although I do acknowledge reciprocity as a distorting influence in link cartels etc). One important caveat though – Dan Drezner and I have done research which suggests that links among political bloggers have a lognormal distribution rather than a power law. The implication of this is that we don’t have a simple rich get richer model of network growth – it _is_ possible for quality new blogs to break in.
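The practical difference between the two distributions Henry contrasts can be illustrated with sampled data. The parameters below are invented for the sketch (this is not the Farrell–Drezner dataset): a heavy-tailed Pareto power law concentrates far more of the total link mass in the top blogs than a lognormal with a matched median does, which is consistent with the point that a lognormal leaves more room for newcomers to break in.

```python
# Illustrative contrast (made-up parameters) between a Pareto power law
# and a lognormal as models of inbound-link counts. The heavier Pareto
# tail concentrates far more links in the top 1% of blogs.
import random

random.seed(1)
N = 100_000

pareto = [random.paretovariate(1.2) for _ in range(N)]       # power law
lognorm = [random.lognormvariate(0, 1.5) for _ in range(N)]  # lognormal

def top_share(samples, frac=0.01):
    """Fraction of all links held by the top `frac` of blogs."""
    s = sorted(samples, reverse=True)
    k = int(len(s) * frac)
    return sum(s[:k]) / sum(s)

print(f"Pareto    top 1% share: {top_share(pareto):.2f}")
print(f"Lognormal top 1% share: {top_share(lognorm):.2f}")
```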


Bithead 02.24.05 at 7:11 pm

Would it help your thought process to consider the Googles and Technoratis, Bloglines and Blogdexes of the world as the aggregators, and the blogs themselves as content providers?

Here’s the nut of it; Consider the style:

“I found this interesting” (Insert link, insert snarky comment, fade to tagline)

Is this content, aggregation, or both?

To my mind this is content provision, but that’s me.


anno-nymous 02.24.05 at 9:32 pm

“Interestingness”: Interest?


Chris Clarke 02.24.05 at 9:43 pm

Well, selling links would at least change the subtext of that idiotic boyblog meme from “Where Are All The Women Bloggers (who someone in my in-group finds interesting)” to “Where Are All The Women Bloggers (with an advertising budget).”


HP 02.24.05 at 10:04 pm

There are something like 5 – 7 million blogs out there. Only a fraction of them constitute the politics and debate blogs that we commonly think of as “The Blogosphere.” A link on, say, LiveJournal or Xanga is often less an indicator of “interestingness” than a map of social relationships or identities.

Then there are special-interest blogs where the total community’s collective judgement is unlikely to swing Google or Technorati one way or another, but provides immediate value in creating shared knowledge among people with arcane interests that aren’t being served locally.

It seems that a theory of blog economy would have to extend beyond the newsblogs and poliblogs and debateblogs to encompass, oh, I don’t know, hummel figurine blogs and Swedish sexploitation film blogs and drawing-bible-stories-on-grains-of-rice blogs and . . . .


Seth Finkelstein 02.24.05 at 10:35 pm

“we don’t have a simple rich get richer model of network growth”

Well, maybe a complex rich get richer model of network growth? That is, the power law curve is only a rough approximation. Its best application is in debunking the blog-triumphalism of some evangelists (along with the concomitant let-them-eat-cake cruelty, that if you’re poor, I mean unread, it must be because you’re shiftless and lazy, since there is such abundant opportunity in blogotopia …). But it’s certainly not the last word. There’s “ecological shifts” which provide lottery-like opportunities for break-outs from the bottom – for example, an Iraqi who favors the US occupation may suddenly find himself with a very large audience. And established pundits can bring much of their audience with them when starting a blog. Political blogs are arguably experiencing a kind of gold-rush right now.


Cheryl Rofer 02.24.05 at 10:56 pm

I recognize the common acceptance of links as the currency of the blogosphere, but my mental model (very much a work in progress) has more relationship to Michael’s and bithead’s.

Links are a measure of popularity, which sometimes may be correlated with “interestingness.” This may or may not say anything about content, and indeed sites with many links are found all over the political/social spectrum.

Google and other search engines are getting better at delivering what a person is actually looking for, provided that person can choose a few appropriate words, but they do not evaluate content for whether it makes sense or other details.

Bloggers search the internet in selective ways and link to what is within their definition of “interestingness.” If you find a blog you like, you can rely on that blog to deliver some of what you’re looking for on the Web. If others haven’t found the blog yet or don’t share your viewpoint, then the blog won’t have their links.

Bloggers also provide content, frequently in a particular niche. I like to write on scientific topics that are mangled by the media, for example, or new scientific findings that could be of general interest but probably won’t be picked up by the media, along with nuclear nonproliferation. This kind of material may or may not contain links. I link to reference material, or recent, fairly obscure, sources rather than other blogs in posts of this kind.

Blogs can be both content providers and “interestingness” aggregators at a level above the search engines. Links have little to do with this. And Gresham’s law really does work.


Jon Garfunkel 02.25.05 at 4:02 am

What fun is it to agonize over the economy of links without suggesting some sort of remedies (short of transferring them to privatized accounts or a stop-loss program)?

One, banish the blogroll from the front page of a blog (set it up on bloglines instead). There’s a subtle difference between what I read and what my readers should read. It doesn’t take an economist to point out that scarcity has value. The fewer links you have, the better. It was nice to hear Jay Rosen concede this point at the webcred conference when I pressed his thinking on it. (BTW, Henry, I brought your paper up on-screen at the conference when countering Winer’s claim that “the media doesn’t read blogs.”)

Second, qualify your links, not based on topic, but what you think of them. As any blogger would say, transparency is the best policy (at least, for other people.)

And third, wouldn’t it be a gas if bloglines started releasing data about how frequently people scan the blogs– and compare them to their own qualifications? I wrote this up in a little more depth a couple of months back– and that was before Seth coined the phrase “love for the linklorn.”

I’m not too worried about the power law– sorry, logarithmic law– I think we’re still under a bit of a shakeout… especially when people have to upgrade from MovableType to the next generation of social software.


Scott Martens 02.25.05 at 11:47 am

Michael is making a good point about the difference between relevance and interestingness. It reflects some long running issues in the information retrieval world about the difference between finding relevant documents and finding answers.

No technical solution is possible using the approach Google and other IR systems have taken to date. None: none now, none later, and I’m pretty sure none ever. There is no algorithm capable of making the required distinction using exclusively information about the structure of the web – at least so long as the people who produce web materials are aware, even partially, of the mechanisms used by IR services. I can’t prove this rigorously, but I suspect that it could be proven rigorously as a variation of the Halting Problem proof. All Google can do is run the Red Queen’s Race: try to devise new filtering mechanisms faster than the spammers can reverse engineer them.

The only alternative is to use mechanisms which have access to information other than the structure of ‘Net information. There are a number of those, and research continues, but to date there has been no satisfactory success. The best results to date have all involved retreating from process automation and using some quantity of human labour to maintain database quality. This is too expensive to be satisfactory, and where it is being tried, it is not organised on a large enough scale to help.

I suspect the nature of the problem precludes a firm with the kind of technical orientation and business model of Google from resolving the challenge of link spam and link spuriousness in a comprehensive way. I’m not sure where to look for a solution. My own field touches on some of these areas, and if I was a gambling man, I’d look at research into automated question answering for some answers. But I don’t think waiting for the free market to create a technical solution is really going to work any better here than it has for automatic translation or other problems that highlight the difficulty of modelling the behaviour of agents capable of modelling their modellers.


Roxanne 02.25.05 at 6:05 pm

I spend a lot of time thinking about the revenue model for blogs. It seems to me that selling a link isn’t that far away from selling edit. As an ascendant medium trying to gain credibility in the greater public sphere, I’m not certain we want to go there. Although,, etc. have already blurred the line between legit edit and advertorial …


derek 02.25.05 at 10:17 pm

michael writes:
the power-law effect in social networks pretty much guarantees that, as it expands, the link structure of the blogosphere will become less and less reliable as an index of good/interesting

I just want to say, what possessed the writer of that link to illustrate his power-law graphs using linear-linear scales?


sidereal 02.26.05 at 1:44 am

The fact is that links are an excellent indicator of interestingness, even under the burden of spammers. The problem is in figuring out who exactly finds the targeted page interesting. In the case of spammers, it’s the spammers and their clients who find the target page interesting.

Some assumptions underlying Technorati and similar services are:
1) If a link appears on a site, the proprietor of that site endorses it.
2) If the proprietor of a site endorses the link, the community that attends that site will also find it interesting.

This lets them use site traffic (or link popularity) as a shorthand to determine which sites are supernodes and have more important opinions.

Community features break assumption 1. Much of the content on blog pages is provided by commentors and/or spammers. Selling links breaks assumption 2.

1 is, I think, easily solved by isolating the links the proprietor is responsible for – those in original posts. 2 is harder, but I think it solves itself. If a blog gets in the habit of selling links that people otherwise wouldn’t traverse, people will simply stop traversing its links altogether and the balance will adjust.


RobotWisdom 02.27.05 at 3:23 pm

Page-rank algorithms have miles to go before they sleep: someday they may track the ‘chain of discovery’ so they can give thirdhand linkings lower rank than ‘firsthand’ ones, and they could gauge the level of original research put into a page, based on how many different sources it used. Weblogs that recycle the HampsterDance-of-the-hour should lose points. And certainly pagerankers need to parse (and adjust weights for) the semantics of the different segments of the page (blogroll, adspace, etc).

I think XML has been a big distraction from the real semantic challenge, which should start with ‘screen-scraping’ of these page segments.
