Big government, big IT

by Daniel on August 14, 2008

Over at the Guardian website, I have another piece up about my general scepticism both of big government IT projects and of the possibility, under our current political and economic system, of not being deluged with them. I filled it full of jokes because I’m not yet really sure what I believe about the underlying causal mechanism. There’s a half-joking suggestion that the business development offices of the major IT consultancies probably ought to be considered a material interest group in any analysis of British politics; we’ve not yet reached the level of a “consultancy/government complex”, but we’re not far off.

But on the other hand, I might be committing a version of fundamental attribution error here. The sales process is an important part of the procurement of big, failed IT projects, but the proliferation of big failed IT projects isn’t really a result of successful selling – it’s a result of the fact that nearly anything new that the government does is going to require an IT element, and that government projects tend to only come in one size, “big”, and to very often come in the variety “failed”.

And a lot of the reason why these projects screw up so badly has to do with the fact that they have to reinvent a lot of wheels, duplicate data collection exercises, and integrate incompatible systems (useful rule of thumb: whenever you hear an IT person use the word “metadata”, as in the sentence “all we need to do in order to make this work is to define suitable metadata”, you can take it to the bank; this project is fucked). In Sweden, for example, they have a working education vouchers system not unlike the one I discuss in the article, but in Sweden they have a big central database linked to the national identity card system.

In the UK, we don’t have a big central identity card database, and the main reason for that is that we don’t want one. And so I find myself entertaining the hypothesis that the constant parade of halt and lame IT projects which is British administrative politics is actually an equilibrium outcome.

I am also rather pleased that, after two years of removing my bad language, the website editors actually introduced a swear-word into this piece that I hadn’t originally put in there.



Martin Wisse 08.14.08 at 9:56 am

The main problem with big governmental IT projects is also that successful big governmental IT projects are not very distinguishable from failed big governmental IT projects. You’ll always get cost overruns and kludges.

What makes the British big governmental IT projects so bad even when compared with dismal failures elsewhere is that New Labour fundamentally doesn’t seem to get IT at all.


sanbikinoraion 08.14.08 at 10:09 am

But that’s because *no-one* gets IT at all. Not in politics. As DD identifies in the CiF piece, the Tories don’t get that their own proposals require large IT coordination systems in order to work. The other thing that politicians and civil servants seem to be absolutely awful at is writing the contracts for these projects to ensure that the backsplash (so to speak) when things go wrong doesn’t fall on the public purse.

DD – which was the added swearword and where?


Luis Enrique 08.14.08 at 10:13 am

It’s not just an equilibrium outcome in big government IT – I remember being told by a software testing tools vendor that something like 80% of IT projects (that involve writing new code, having to be compatible with old data etc.) in the private sector ‘fail’ too (or at least are hilariously over budget and late by the time they work), but we just don’t read about it in the papers. The same used to be true of the construction industry – everything was built late and 3x over cost. As I understand it, the construction industry is getting better. We have discussed this before here. The thing that strikes me about this is that fewer projects would be classified as failures if both parties acknowledged in the first place that projects were going to cost twice as much and take twice as long. The equilibrium outcome we see is one where the criteria for success (the expectation) are repeatedly unrealistic. Why don’t both principal and agent set more realistic expectations? It’s contract design. Something to do with the inherently ex-ante uncontractable nature of the job (because you don’t really know what’s needed until you start, and the client will keep changing its mind), so the equilibrium is that an unrealistic cost and schedule is agreed (which might even be loss-making for the contractor/agent – I know in construction, contractors put in loss-making bids and only made a profit from adding work on later) and then cost is added as they go along.

This way of doing things, with its inevitably high failure rate, may even keep the overall costs down. If realistic expectations were set in the first place, the agent/consultants would be able to fleece the principal/client even more. It’s a bit like when you set yourself a deadline for writing something – if you give yourself a month, you know you’ll take two; but if you start by giving yourself two months, to be realistic and avoid ‘failures’, you’d just take three. Or perhaps somebody will come up with a cleverer way of writing the contracts, and these failures will go away. If I’m right that construction is getting better at this, then perhaps the nature of the contracts has changed there already.

If contract theory didn’t make my brain hurt, I’d love to write a paper on this.


randomvariable 08.14.08 at 10:37 am

Most of the failures seem to be attributable to the requirements engineering stages of projects. Government seems to have a poor idea of what it wants, and then realises it’s unworkable when it gets it. Unless I’m mistaken, transport seems to be the exception (C-Charge & Oyster).


Daniel 08.14.08 at 10:43 am

Sam – my original title was something like “Tories Underestimate Logistic Requirements” or something; the rather snappy “Arse, meet Elbow” was the work of a CiF website sub, who I will buy a pint for if he can credibly claim it.

Luis – yes, good point. I’m reminded of Richard Portes’ slogan on financial crises (that the optimal frequency is clearly not zero and might be rather high, on the basis that if you’ve never missed a plane, you’ve spent too much time waiting in airports).


john b 08.14.08 at 10:46 am

It’s one of the areas where managerialism actually works quite well, in making corporate IT projects less unsuccessful than government ones.

On a corporate IT project, when the spec is being set, there is someone who can say “no, piss off, this is the system you’re getting because it’s the one Accenture can do cheaply; for the cost of adding bell X and whistle Y we could hire temps to replicate their functionality for a decade, by which time it’ll be obsolete anyway. Full business case required on my desk tomorrow for any variations”.

On government IT projects, because getting them approved is a tortuous process involving a million stakeholders, Parliament, not pissing off the tabloids, etc, there’s nobody who can say “no, NHS trusts, get lost – this is how the computer system will work; reorganise your teams to deal with it”. Instead, they try and fit in all the requirements that everyone says they want, with predictable consequences.


john b 08.14.08 at 10:47 am

“can actually work quite well”, even. Obviously there are plenty of disastrous private sector projects.


Laleh 08.14.08 at 10:53 am

There is also a peculiarity of consulting services which applies whether or not working with governments: big projects inevitably have to fail before they are pronounced successful. It’s just that when the clients are private, the failure is often masked, whereas government accountability requires some transparency about the failures.

I say this as someone who worked with Andersen Consulting (before it became Accenture) and with Price Waterhouse (before it became PwC) and was involved in several very large projects (in insurance, telecom, and customer service for a multi-site newspaper service). In all instances, if the IT project was developed from the ground up (rather than with modified out-of-the-box software), we ran so far over time and over budget that had it been the government, the process would have been considered a failure and shut down. But in every single instance the client coughed up more time and money, the problems were (sort of halfway) fixed, and software was rolled out so as to save face for everybody.


Laleh 08.14.08 at 10:55 am

And John B, you are either living in a fantasy world, or I would really love to know which consulting firm you are talking about that sounds so decisive.


dsquared 08.14.08 at 11:11 am

Any readers who are management consultants, by the way, feel free to defend the industry, but beware that any shading into marketing material will not be tolerated.


john b 08.14.08 at 11:48 am

I don’t mean the consultants – I mean that corporations have senior managers who’re able to make compromises up front rather than having to deal with endless stakeholder feedback. Not least, using something off-the-shelf and adjusting the organisation/product to fit with the software (and yes, I’ve been involved on the corporate client side with IT projects which have worked like this).

The point about government failures being public and corporation failures being private is entirely true and fair. Nonetheless, the fact that EDS’s public sector practice makes double the margins of its private sector practice, and that nearly all of the big IT firms’ margins come from contract variations, suggests the government side of things might be even less effective.


Danielle Day 08.14.08 at 11:50 am

“In the UK, we don’t have a big central identity card database…” Maybe not, but what you do have is a DNA sample of almost everyone in Britain.


stuart 08.14.08 at 12:01 pm

The population of Britain is under 4 million then?


dsquared 08.14.08 at 12:09 pm

I suspect that the toilets at King’s Cross station might contain a sample of the DNA of almost everyone in Britain, but these things really have to be systematised and labelled if they’re going to be any use.


ckstevenson 08.14.08 at 12:47 pm

The fate is the same in the US as well and, as many have mentioned, in the corporate arena (you just tend not to hear about those failures). There are too many factors to point to, but there are some very well known published studies that call out the main culprits. Requirements tend to be #1: they are either poorly understood from the outset, poorly defined, and/or changed too late in the process. It is well known that the later in the project lifecycle you change a requirement, the exponentially higher the cost of implementing the change.
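That “exponentially higher cost” claim is easy to make concrete with a toy model. The base cost and per-phase multiplier below are invented purely for illustration – they aren’t figures from any of the studies mentioned:

```python
# Toy model of the "cost of change" curve: a requirements defect gets
# roughly geometrically more expensive with each lifecycle phase it
# survives into. Numbers are illustrative, not from any real study.
PHASES = ["requirements", "design", "coding", "testing", "production"]

def cost_to_fix(phase: str, base_cost: float = 100.0, multiplier: float = 5.0) -> float:
    """Cost of fixing a defect introduced at requirements time but only
    caught in `phase`, assuming the cost multiplies once per phase."""
    return base_cost * multiplier ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase:>12}: £{cost_to_fix(phase):,.0f}")
```

With a 5x multiplier, a £100 requirements fix becomes a £62,500 production fix – which is the whole argument for catching spec mistakes early.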

The difference between corporate and government IT projects is the acquisitions process. In the US, we look for the lowest bid. Is it ever a shock that the company that comes in $4m under all other competitors for a project that “should” cost $30m fails and goes over budget? Behind closed doors (and sometimes out of them), many government consulting firms will tell you they low-bid with the intent of “mod’ing” the project to the ends of the earth to drive up the price. Often, as a consultant, you can read the RFP and SOW and point out dozens (if not more) of mistakes, omissions, duplications, conflicts and flat-out idiocy.


Guano 08.14.08 at 1:09 pm

The two big projects that worked were Congestion Charge and Oyster Cards. This suggests that Livingstone’s administration of London wasn’t the maelstrom of corrupt, squabbling Trots that some people have suggested.


randomvariable 08.14.08 at 1:14 pm

Saw a presentation by an Accenture manager who admitted that the “Accenture Assured Delivery” process was rubbish.


reason 08.14.08 at 1:16 pm

Having been on the sidelines for a big failed public sector project, I think the main problem is the BIG bit. Plan something big by all means, but build it in small incremental steps, ideally with substantial participation from permanent staff (consultants know they won’t be there when the shit hits the fan). “Big bang” in IT usually “bangs” all right.


reason 08.14.08 at 1:18 pm

And public sector projects usually have a built-in problem: they are often written to meet brand new legislative requirements and so often cannot draw on an existing knowledge base.


peter 08.14.08 at 2:11 pm

I write as someone who has consulted to industry and government, and eventually decided not to consult to government. The main problem IME is the multiplicity of stakeholders and the diversity of stakes and interests in government projects. Although the notion that private companies have a military-like, single chain of command is a myth, most major companies do have individuals empowered to make decisions, and with the courage to risk making them. Governments very rarely have such people IME.

Identifying all the stakeholders and persuading them to reveal their interests, socialising ideas with and between them, forging some form of consensus, and then proceeding through all the talk to taking consequence-rich actions are incredibly difficult tasks when the stakeholders are many, when perspectives and interests diverge, when power is diffused, and when many stakeholders may have powers of delay or veto, but few have powers of initiation. It is not for nothing that some people claim IT as now a branch of applied sociology.

I came away from my experiences in working with governments with enormous admiration for public servants like Dick Bissell, who organized allied merchant shipping for FDR in WWII, and later led the CIA/USAF U2 spy plane program. Unfortunately, people of the calibre of Bissell, able and willing to take power, are very rare, at least in western governments.


foolishmortal 08.14.08 at 3:11 pm

What you want to do is split up the massive IT project into little, and hopefully redundant, bits. Fund them all, and let God sort them out.


ajay 08.14.08 at 3:25 pm

16: you probably shouldn’t read the news about the huge security hole in the Oyster card system then.


Ginger Yellow 08.14.08 at 3:50 pm

There was a good article in the New Statesman (trust me on this) about why UK public sector IT projects fail so badly, so often. If I recall correctly the author pinned most of the blame on two things – the lack of dedicated/experienced project managers in the civil service and the tendency to go for the biggest contract possible rather than breaking a project up into smaller contracts. He contrasted the UK, which does particularly poorly even compared to other countries’ public sectors, with (I think) New Zealand, although it might have been Denmark and it might have been both of them. Of course, the NHS IT programme is the classic example, where gargantuan contracts were awarded to the handful of companies that could (in theory) cope with them, leaving the government over a barrel when they screwed up or wanted changes to the specification. Those were hardly the only problems, of course.


chris y 08.14.08 at 5:54 pm

the lack of dedicated/experienced project managers in the civil service

This is very true, and in harness with John b’s point about the procurement process above goes a long way to explain the unique awfulness of government IT in Britain. However, the recruitment budget of the civil service is permanently overstretched, and they won’t start hiring experienced PMs, who don’t come cheap, until some politician makes it a priority. Which they won’t, because efficiency can only be achieved by cutting resources, as we’re constantly told, not by adding better or more appropriate ones.


Martin Wisse 08.14.08 at 7:47 pm

I don’t mean the consultants – I mean that corporations have senior managers who’re able to make compromises up front rather than having to deal with endless stakeholder feedback.

Bitter, hollow laughter.

Large companies are the worst kind of environment for making compromises up front. There’s no worse bureaucracy than huge corporation bureaucracy.


abb1 08.14.08 at 7:55 pm

UK government’s IT projects can’t fail – they are fully ITIL compliant! ITIL is completely fool-proof!


dsquared 08.14.08 at 10:05 pm

I don’t think experience in project management is all it’s cracked up to be. None of the Bank of England CREST team were experienced project managers and very few had any direct experience in IT, but they delivered the goods. The problem is a massive failure of mojo on the part of HM Civil Service, who appear to have got into an institutional funk which leaves them unable to use the judgement for which they were hired and altogether wildly too credulous of their hired experts. And also revolving-door careerism, let’s not underestimate that.


Alex 08.14.08 at 11:57 pm

Abb1: ITIL was invented by the British Government when the Government had an IT specialisation.

Further, this is the point; presumably Redwood imagines that Government IT requirements will be matched by the decentralised efforts of local techies. Unfortunately, this would require rebuilding a major IT specialisation all over the entire civil service and beyond.

Or does he just want to smash things?


Cranky Observer 08.15.08 at 1:32 am

> There are too many factors to point to, but there are some very well
> known published studies that call out the main culprits. Requirements
> tends to be #1. They are either poorly understood from the outset,
> poorly defined, and/or change too late in the process. It is clearly well
> known that the later in the project lifecycle you change a requirement,
> the exponentially higher cost it will take to implement the change.

The fundamental basis of this analysis is that the waterfall model of systems development, revolving around the fully-defined, correct, and complete “spec”, is valid. I am not fully convinced by the agile/extreme camps yet but it is well known that the waterfall model hasn’t done very well over the last 20 years. The response so far has been the PMI, CMMI, and attempts to ratchet the process and the spec down tighter and tighter until nirvana is reached and I have to say that doesn’t seem to be going very well either.



nick s 08.15.08 at 6:52 am

I remember working at a Large Media Company Based Outside The UK as a project manager for a large-ish IT thing, where there was a de facto wall of separation (and an entire floor) between the worker bees and the McKinsey alum / MBA management peeps who ‘owned’ the project.

My job was divided into getting the project completed and producing convincing but entirely fictional MS Project charts to send upstairs in order to keep the MBAs happy.

I wasn’t ‘experienced in project management’ for that particular gig: I was a) sufficiently experienced to know how projects came together in practice; b) a good enough bullshitter to keep the suits happy. I also had a passing knowledge of The Mythical Man-Month, which always helps.

Some years later, I had an informal interview for another IT-related gig with A Large Financial Institution, and demurred because after an hour of abstract explanation of the position from the guy who would have been my boss, I didn’t have a fucking clue what they actually wanted doing.

I have no idea whether Crapita represents primarily the suits upstairs or not, but I do know that Michael Gove — to pick on him, not just because of the CiF piece, but because he’s a weaselly little fucker — has lots of friends who got milkroaded into consultancy, and will be only too pleased to assist him in proposing projects that will, at the best of times, be managed at the ground level by people who a) may not have a fucking clue what they’re meant to be delivering; b) will cook the books to keep the suits upstairs happy. Let me repeat: that’s at the best of times.


nick s 08.15.08 at 7:08 am

Milkroaded? Talk about a mixed metaphor. Milkround meets railroad.

My guess with CREST is that there was a degree of atomisation in the abstract conceptualisation of the system — getting stuff from A to B without fuckups — that dovetailed pretty well with the basic skillset of the people assigned to code it all up. There’s an algorithmic purity to that kind of thing. I’d agree with those upthread that taking a UNIXy approach — dividing projects into small, self-contained, discrete functions — is optimal, though that still raises the problem of interoperability at the most basic level.

(Still, that also reminds me of a conversation ten years ago with a friend at a Large Consultancy Firm who’d been flown out to San Francisco and wanted a primer on the whole dot-com thing.)


Naadir Jeewa 08.15.08 at 7:28 am

What is odd is that the Office of Government Commerce created one of the best IT frameworks to come out for a long time – ITIL (IT Infrastructure Library). They then fleece you of £300+ for the books (on Crown Copyright).


Alex 08.15.08 at 9:52 am

I said this last night, but it appears to be stuck in moderation hell:

Abb1, the thing about ITIL is that it was invented by the government’s IT specialisation when the government had an IT specialisation. Since then, they sacked all the geeks and outsourced to EDS.

Secondly, and more generally, Redwood doesn’t seem to realise that there is a way of having government IT requirements met in a more incremental and human-scale way whilst not using as many consultants – but it’s called “doing it in house”, and it requires that all the organisational entities who might want their own small problems solving have their own engineers. This means re-establishing a huge government-wide IT career path, a public sector IBM. Is this really what he wants? (Mind, I’m quite keen on it myself.)


peter 08.15.08 at 11:21 am

Complaining about abstract explanations in IT projects (#29) misses the point somewhat: unless you write code in binary digits, you are engaged in abstracting away from something more concrete. The whole discipline of computer science, computing and IT is an abstract activity.


Nick 08.15.08 at 11:47 am

I mean that corporations have senior managers who’re able to make compromises up front rather than having to deal with endless stakeholder feedback
Erm, no – I laboured for some years in the IT department of A Very Big Airline, and believe me there were failures of small, medium, big and very big IT projects a-go-go for the discerning connoisseur. The problem was not so much the irresponsible use of the word ‘metadata’ as irresponsible use of the words ‘all we have to do . . . ‘. Oh, and the fact that systems had a habit of being designed to meet the psychiatric needs of managers not the business needs of their departments . . .


Cian OConnor 08.15.08 at 2:49 pm

A snarky answer for why the government’s IT projects keep failing is that they insist on using Accenture. I’ve never seen a successful Accenture project in the private sector (I may have seen a PwC one that worked, though that could have been a hallucination), and while there are a range of reasons for this, they largely relate to them being bureaucratic and crap.

A more reasoned answer is that large IT projects usually fail for sociological and design related reasons, rather than engineering ones. If you’re designing a computer system that will be used by an organisation, then you need to understand how work is done within that organisation (which rarely has much to do with org charts, or the fantasies of the business managers commissioning the project), and have a strong sense of how the IT project is going to augment/improve/change the functioning of that organisation. The skills required are a mixture of design and sociology ones, with a smattering of psychology. Not the kind of skills that business, or IT, grads typically have, even though in the UK they are the ones typically doing this kind of work.

Somebody mentioned Denmark as an example of somewhere that manages this kind of stuff. Their health systems were designed by HCI specialists, and seem to work quite well. In contrast for the NHS, doctors and nurses were excluded from the design process and we wonder why it didn’t meet their needs very effectively…


Tangurena 08.18.08 at 2:06 pm

There are too many competing interests for large governmental IT projects to succeed. Some want the project done, some want the project to fail, some will only “support” the project if their pet gizmo (sometimes called a widget by those MBA types) is included. As a result, most gov-IT projects suffer from severe gold-plating. For an example, I’d refer you to the FBI’s Virtual Case File system, which had to replace some 40+ different computer systems AT ONCE. If they wanted it to succeed, they’d start with a system that replaced one existing system, and with each iteration roll in the functionality of another system.
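That iterate-and-replace approach can be sketched in a few lines. The system names and the `Router` interface here are hypothetical, not a description of any real FBI system – the point is just that a routing layer lets you cut over one legacy system per iteration:

```python
# Minimal sketch of incremental replacement: a routing layer decides,
# per legacy system, whether requests go to the old system or the new
# platform. Each iteration migrates exactly one system.
class Router:
    def __init__(self):
        self.migrated: set[str] = set()  # systems already cut over

    def migrate(self, system: str) -> None:
        # One iteration: move a single legacy system onto the new platform.
        self.migrated.add(system)

    def handler(self, system: str) -> str:
        # Route to whichever side currently owns this system.
        return "new_platform" if system in self.migrated else "legacy"

router = Router()
router.migrate("case_files")         # iteration 1: one system only
print(router.handler("case_files"))  # -> new_platform
print(router.handler("evidence_db")) # -> legacy (untouched until its turn)
```

Each iteration leaves you with a working whole, which is exactly what a 40-systems-at-once big bang cannot offer.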

Part of it is due to impedance mismatching between the “specs”: the paperwork says that work is done one way, (mis)managers think that work is done some other way, and the workerbees have a third way where work actually gets done.

And private industry isn’t any better at getting IT to work. Just better at hiding the dead bodies and spinning what can’t be shoved under the carpet. For a wonderful example, I refer one to the Denver Airport baggage handling system. The engineering management said “projects of this size take 4 years to complete,” while the sales mismanagement said “airport is finished in 2 years, so we sold it to them, you have 2 years to make it work.” As everyone knows, the baggage handling system was functional (meaning: it stopped losing and shredding bags) 2 years after the airport opened. Was it late or on-time? Does it matter when the business ends up going out of business? I’m currently involved in a similar fiasco. An internal project to replace a currently selling software product (that cannot possibly run on Vista) is 3 years behind schedule. The mismanagers are now claiming that this project is only 2 years old and therefore can’t possibly be 3 years late. My estimate is that there is about 6-9 more calendar months of work left before it is sellable, but the PHBs all say it must ship next month.

I’m coming to the conclusion that outsourcing/consulting is becoming (or already is?) a form of political patronage and graft. I’m running for elected office this November, and the more I see of the contracting/outsourcing of governmental functions, the more I smell graft.


Martin Ross 08.19.08 at 12:54 am

Not to reiterate the points of the previous comments, but Government IT projects typically have following characteristics compared to even the largest private sector requirements:
1) Large code base. As a result of the following items.
2) Very custom requirements (e.g. build a drivers license registration system for the country of Absurdistan – the land where drivers licenses are issued by each individual municipality – with no common processes. Not too many people have experience with that one.).
3) Lots of legacy requirements in the workflow that private entities would simply ignore or migrate (i.e. need to be able to handle old documents pre-1929 issued to Martian aliens non-resident in the state of Maine from 1915-1959. Private sector tells you to get stuffed or to trade in your documents.)
4) Unintentionally absurd, insane requirements since IT staff have no capacity to give feedback to legislators writing legislation. (e.g. Electronic Health Record for every citizen in 1 year).
5) Insanely tough legacy data migration requirements. Governments have 24/7/365 requirements and don’t get liquidated (possible upcoming exception: Georgia) or sold for assets only, which would let you junk old/useless data. Nearly everything needs to be retained and incorporated.
6) Elections can cause requirements to radically veer 180 degrees every four to five years. Civil servants are given new priorities – old priorities are dropped; in the U.S. Federal government, civil servants are replaced with incompetent political hacks.
7) Because of salary pressures and other factors public sector IT is unable to attract the candidates of high enough caliber to meet the challenges required in a tougher environment than most private sector companies.
8) Since requirements change so frequently IT consulting for the government is awesome because most contracts have standard ‘if requirements change then additional charges apply’.


derrida derider 08.19.08 at 1:12 am

As someone who has been on the purchasing side of a few public sector IT projects (mostly small-to-medium, but one or two big ones), I’ve always found that the biggest problem is that it is just not possible to accurately define the specs in advance, for a number of reasons. And as others have said, making changes later is incredibly difficult and expensive.

The reason is that the world is just not a static place, even (especially) in the civil service. Not only are there lots of stakeholders, their interests keep changing. Plus none of them can really know what they want until they see it.

Degree of difficulty rises disproportionately with project size, so I’ve learned the hard way that IT projects should be as small and modest as possible; never try to do too much. Also treat all tenderers as filthy liars who’ll promise the world until they get you locked into them, upon which they will proceed to merrily screw you for all the Treasury is worth.


Cranky Observer 08.19.08 at 1:15 am

> For a wonderful example, I refer one to the Denver Airport baggage
> handling system. The engineering management said “projects of this
> size take 4 years to complete,” while the sales mismanagement said
> “airport is finished in 2 years, so we sold it to them, you have 2 years to
> make it work.” As everyone knows, the baggage handling system was
> functional (meaning: it stopped losing and shredding bags) 2 years after
> the airport opened.

Actually they finally gave up after 10 years, ripped out the automatic system, and replaced it with a standard manual system augmented with modern labeling/sorting technology (which of course had improved over those 10 years). But your point is still a good one.


Comments on this entry are closed.