An echo of Y2K

by John Q on January 3, 2009

Microsoft Zune music players stopped working on New Year's Day because of a software bug, prompting the inevitable comparisons with the Y2K fiasco. The way in which the largely spurious Y2K problem was handled offers some interesting parallels with the all too real problem of climate change. Although many billions of dollars were spent on making systems Y2K-compliant, there was no serious scientific study of the problem and its implications. The big decisions were made on the basis of anecdotal evidence and reports from consultants with an obvious axe to grind. Even the simplest objections were never answered (for example, many organisations started their fiscal 2000 year in April or July 1999, well before remediation was completed, and none had any serious problems). There was nothing remotely resembling the Intergovernmental Panel on Climate Change, let alone the vast scientific literature that needs to be summarised and synthesised for an understanding of climate change.

Thus, anyone who took a genuinely sceptical attitude to the evidence could safely predict that 1 January 2000 would pass without any more serious incidents than usual, even for the many countries and businesses that had ignored the problem. The retrospective evaluations of the policy were even more embarrassingly skimpy. I analysed some of the factors involved in this paper in the Australian Journal of Public Administration.

A really interesting point here is that, in the lead-up to 1 January 2000, self-described global warming sceptics, for the most part, went along with the crowd. If any of them rallied to the support of those of us who called for a “fix on failure” approach, I didn’t notice it. Of course, I’m open to correction here. I’d be very interested if anyone could point to a piece published before 2000 taking a sceptical line on both Y2K and AGW.



Barry 01.03.09 at 2:42 am

“I’d be very interested if anyone could point to a piece published before 2000 taking a sceptical line on both Y2K and AGW.”

That wouldn’t prove much; there’s *bound* to be at least *one* person who put out something (on the internet, or in some print publication) taking a sceptical view of both. If there wasn’t, that would mean there was at least one view not represented on the internet, which, unlike the LHC, definitely would cause a space-time singularity.

BTW – I noticed what was probably the Y2K bug in MS Excel, in ’97 or ’98, in a bond pricing function (input yield, current date, date due, and get present value). When I set the due date to ’03’, I got an error message; when I shifted everything back four years so that all dates fell in the ’90s, it worked fine.


The Raven 01.03.09 at 3:14 am

I hope this is sarcasm, not ignorance. This whole argument smacks of the sort of reasoning that got us into the current financial mess–assume nothing is wrong, and fix it later. There was a lot of skepticism at the time, in fact. There was panic, too–one of the best of the consultants and teachers went overboard. His summary of how it actually went is here. But the problems were real, though their exact scale is hard to assess (read the linked article), and it’s only because a lot of time and money were spent on preparation that we didn’t have a huge mess on the turnover date. I’m beginning to be very glad to be working in a computer applications field, rather than doing systems work. In systems, computer professionals get more respect outside their field for doing a bad job and then fixing it (and charging to fix it), than for doing a good job in the first place. And Microsoft is a fine example. Krawk!


Anonymous 01.03.09 at 4:18 am

Really, this issue reminds you of climate change? That’s quite the leap.


noen 01.03.09 at 4:58 am

I suspect that global warming skeptics who were also hoarding food and ammo for the coming Y2K crash were getting their information from the same sources. Pat Robertson springs to mind.


weichi 01.03.09 at 5:28 am

The Y2K alarmists had a powerful argument in their favor: fixing Y2K was a huge software project, huge software projects are always late, thus the fix wouldn’t be ready in time.

I never worked on any Y2K stuff, but I’ve always thought that the problem with that argument was as follows (perhaps someone with experience doing remediation will comment):

The reason huge software projects are always late is that you never *really* know how much work a software system will require until you start to build it, and as you build it requirements change (for good reason – you didn’t know what you wanted before). But Y2K was totally different – everyone knew exactly what problem needed to be solved. People are *much* better at estimating under these circumstances, so the estimates were good, and the work got done on time.


bad Jim 01.03.09 at 10:37 am

Mark Chu-Carroll has a good piece on the Zune problem (the NYTimes article is wrong on the technical details). Lots of applications did fail because of Y2K errors, but those that hadn’t been corrected beforehand tended not to have catastrophic effects; as with the Zune fault there were ways to cope with them.

Raven’s right that software types get credit for screwing up, so long as they admit it. Once or twice I was the hero of the day because someone came to me with a critical problem that could keep product from shipping, which I would verify, then find and fix the flaw in my code and issue a replacement, all in about ten minutes (half of which was an obligatory cigarette break).

It makes me wonder why other professionals are so reluctant to admit mistakes. And so modest.


soru 01.03.09 at 11:30 am

If massive concerted global action against climate change does succeed in avoiding any real catastrophe, it is a racing certainty all the denialists will say ‘see, I was right’.

Annoying, but the alternatives are worse.


Barry 01.03.09 at 1:01 pm

Raven, for those of us who are more clueless, what’s the difference between ‘a computer applications field’ and ‘doing systems work’?


Donald A. Coffin 01.03.09 at 2:58 pm

I argued at the time that the problem was minimal, but that rich countries could afford to take out insurance (which they mostly did) and that poorer countries were less likely to have problems, should any exist, because they were less dependent on computers. Whether I was right, I can’t say, but it does seem that the (implied) income elasticity of demand for insurance against a Y2K problem was right.


Tim Worstall 01.03.09 at 3:27 pm

I was certainly extremely sceptical about how large a problem Y2k was going to be as I am about AGW (no, not the existence of it but how large or immediate a problem it is). But anything I wrote about it is lost in the mists of Usenet so you’ll just need to take my word for it.

Or should you be sceptical of that too?


Cranky Observer 01.03.09 at 6:04 pm

The one time mankind works together around the entire globe to proactively identify and fix a problem, and the reward is 30 years of carping about how the whole process was unnecessary. You might want to talk to the Finnish National Railway: their system management software failed regularly on January 1st from 2000 through 2005 (not sure if they finally fixed it by 2006 or just replaced it) – luckily January 1st-5th are not big train trip days in Finland.

Were there excesses, stupidities, and hucksters involved? Sure. Name me a significant human event with none attached.



Slocum 01.03.09 at 8:18 pm

The one time mankind works together around the entire globe to proactively identify and fix a problem, and the reward is 30 years of carping about how the whole process was unnecessary.

Except that mankind didn’t work together in a coordinated fashion. Y2K problems were identified and fixed independently by countless organizations and out of self-interest (the need to keep their own systems running in the new millennium). There was no need for coordination (and no free-rider problem).

I was skeptical not that there were systems requiring Y2K fixes — undoubtedly there were many. I was skeptical that we’d hit Jan 1 2000 unprepared and disaster would strike. I knew a woman whose husband had stocked up on all kinds of survival gear and emergency food and water. I suggested that at around midnight she sneak down to the basement and pull the main breaker just to give him a thrill…


The Raven 01.04.09 at 12:40 pm

Slocum@12: “Except that mankind didn’t work together in a coordinated fashion.” There was extensive co-ordination within the computing profession. Yourdon, who I linked, was one of the co-ordinators; ironically his good work was part of what rendered his alarmism invalid. Yourdon also links to the US government’s “Y2K Czar” who was an important co-ordinator. That you aren’t aware of the co-ordination does not speak well for the rest of your reasoning.

Barry@8: Broadly, “systems software” comprises the components that support applications. These include operating systems, especially their largely invisible resource allocation and networking components, firmware, and so on. To use a perhaps over-simplified analogy, systems are like the telephone network and applications are like the businesses that use it. To use another, systems are like the structure that makes the finished building stand: the abstract and unnoticed engineering underpinning the human experience of architecture. Systems work makes it possible for computers to reliably do useful tasks. It is expensive, difficult, and, when done well, near-invisible. It is noticed when it fails and, as you can see, gets very little credit outside the field.


Barry 01.04.09 at 1:28 pm

Thanks, Raven – I see; the sort of work which people don’t notice until it stops working.


Slocum 01.04.09 at 1:39 pm

The Raven: There was extensive co-ordination within the computing profession.

I don’t doubt that there was sharing of strategies and technologies, but the incentives and actions were at the level of individual organizations fixing on their own (disparate) software systems, driven by their own organizational self-interest, as your linked article makes pretty clear:

“But it has always been common practice for individuals, corporations, and government agencies to fix their problems “behind the scenes” whenever possible, and to maintain a facade of normal operations whenever possible. There’s no reason to imagine that it will be any different than Y2K problems; the only obvious difference is that customers and end-users may be more vigilant in looking for such problems than they normally would. ”


sg 01.04.09 at 4:07 pm

I know what Slocum wants to say – that companies should be left to fix their AGW contributions themselves rather than the evil govt forcing them to. Just like they did for Y2K. But the analogy is completely false.

We all know that if a company had x computers doing y they would have a problem in their own corporate functions on 1/1/2000. But with AGW, a company with x emitters doing y may have no problem in the future, while another company may have a very large problem. For example, power plants will not necessarily suffer as a consequence of AGW; farmers will suffer the effects for them. So why should powerplants do anything? And why should farmers do anything to mitigate the effects of their own activities when they will bear a disproportionate share of other peoples’ (unmitigated) activities?

The only solution to this is coordination, whether it be central command and control orders, a carbon tax, whatever.


sg 01.04.09 at 4:10 pm

and I’d like to add my voice to those disputing JQ’s assertion that Y2K was not a problem. I was involved in mitigation in a part of my organisation, and the one part of our organisation which refused to cooperate collapsed with several days of serious consequences on New Year’s Day. If banks, transport networks and powerplants had suffered the same problem as that part of my organisation, things would have been unpleasant. Not catastrophic, but worth the effort to avoid.


Randolph Carter 01.04.09 at 5:52 pm

In 1998, I started working for a company that sells groceries. I spent 9 months in 1999 working on fixing or replacing application code impacted by the Y2K ‘bug’. This was part of an effort that had been ongoing for more than a year by the time I was added to the team.

The only reason that there was no crisis for us on Jan 1 2000, is because of the efforts of our team, and the similar teams in the companies we did business with.

The reason the project lead was able to get COMPLETE buy-in from senior management was he set up several test systems and rolled the clock forward to 12/31/1999. Nothing will get the attention of management faster than “can’t cut purchase orders”, “can’t generate invoices”, “can’t take delivery of new product”, etc…, etc…

The statement that Y2K wasn’t a big deal, that it wasn’t a crisis, is technically accurate, but it grossly oversimplifies the effort expended to ensure that it wasn’t a crisis.


Cranky Observer 01.04.09 at 6:34 pm

> that companies should be left to fix their AGW contributions
> themselves rather than the evil govt forcing them to. Just like
> they did for Y2K.

The SEC, auditors of public companies, and government purchasing agents who wrote Y2K certification requirements into purchase orders (among others) did not leave Y2K compliance to enlightened self-interest. Just as well given Mr. “Oracle” Greenspan’s recent comments on the limits of same.


Oddly, Greenspan apparently worked as a programmer at some time around 1970 and he admitted that he had left code behind that would not work at the turn of the century under the assumption it would be replaced by that time – but he was aware it was still in production as of 1999.


Slocum 01.04.09 at 9:32 pm

I know what Slocum wants to say – that companies should be left to fix their AGW contributions themselves rather than the evil govt forcing them to. Just like they did for Y2K. But the analogy is completely false.

No — you’ve got that backwards. What I’m saying (what I did say) is that Y2K and global warming are not commensurate. The analogy is false. Organizations had strong reasons to fix Y2K problems in their own systems regardless of what anybody else did. In contrast, organizations (and nations) do not have strong reasons to reduce their own carbon emissions independently of what others do — from a point of view of self interest, each would be better off free-riding and letting others bear the cost.


sg 01.04.09 at 9:42 pm

are we agreeing with one another then?

I think I better go to bed!


MR Bill 01.04.09 at 10:11 pm

Perhaps the pervasive suspicion of the validity of Y2K problems stems from the fact that most folk in the States experienced it as the marketing of disaster preparedness: I recently unearthed a previous tenant’s Y2K supplies, stale water and some dried food in nitrogen packs, in a barn behind my place. You couldn’t go into a convenience store or hardware store without seeing a display of Y2K survival goods, and the radio and TV preachers would run promo teases of “Was Y2K foretold in Revelations?” A lot of items of dubious value were sold during the run-up to Y2K, and only some of them were computers and software.


The Raven 01.04.09 at 11:17 pm

This points up the likelihood that climate-change denial will persist even after a successful response to the problems of AGW. Only if we fail will the deniers be persuaded.



Slocum 01.05.09 at 12:39 am

Oh, and by the way, if you’re looking for a true Y2K-style problem, there is one coming, but you’ll have to wait a few more decades:


David 01.05.09 at 4:55 am

I expect better from this blog. Y2K was a very real problem, more real than most people understand. The fact that dire events didn’t take place and the sky didn’t fall was taken by nearly everyone as proof that the entire thing had been wildly sensationalized and blown all out of proportion. What should have been understood was that the potential problem was very real and that many very smart people worked very hard to make sure it didn’t materialize. And this started in the early-to-mid nineties, when some people working with the Social Security Administration were doing some routine scenario planning x years out and ran into 2000. They realized they had a problem, and that if they did, quite a few other systems/structures might as well.

The most dangerous of these was the third leg of the nuclear deterrent triad, the Trident submarines. These operate on a fail-deadly rather than fail-safe mode. If they don’t receive the necessary signal every 24 hours, they launch. Game over. There was every reason to believe that this system might be subject to the same Y2K problem. It was barely talked about, but you can bet a good deal of time and brainpower were expended on it. But I guess it was all false alarm since we didn’t have a nuclear war (just the electoral equivalent of such a catastrophe that November).

Y2K is a lousy or useless analogy in its current misunderstood context.


David 01.05.09 at 5:04 am

Just to harp on this, a “fix-on-failure” approach was not viable as an across-the-board policy. It wouldn’t work for the Social Security payments problem, much less the Trident gum-up-the-works problem.


Tracy W 01.05.09 at 11:36 am

The most dangerous of these was the third leg of the nuclear deterrent triad, the Trident submarines. These operate on a fail-deadly rather than fail-safe mode. If they don’t receive the necessary signal every 24 hours, they launch. Game over.

I am very suspicious of this story, on the basis that there hasn’t been a non-test nuclear explosion since WWII. There are so many reasons that communication signals could fail, aside from Y2K, that I am inclined to think that if anyone was operating nukes on a fail-deadly protocol by now we should have seen a really big mushroom cloud somewhere just because a rat chewed through the critical component at the same time as someone put a bulldozer through the backup system, or the controller slept in and forgot to turn the machine on, or something. One of my communications engineering lecturers was ex-US military, and he was as firm an adherent of Murphy’s law as any other professor at the engineering school.


Dave 01.05.09 at 12:27 pm

Ah, the wonders of googling. The nuclear Y2K problem was certainly seen as real at the time:

On the ‘why haven’t we blown up the world by accident already since we’re so stupid’ point, I think history shows that we very nearly have, but that fortunately it is still very hard to launch nuclear devices, due to governments and militaries appreciating how stupid we all are.


sg 01.05.09 at 9:04 pm

I would like to add, I didn’t see much evidence of doomsday alarmism about Y2K in Australia. We knew it was a problem, and potentially nasty, but I don’t recall much discussion of fallout shelters and bottled water. Is this an American phenomenon?


Tracy W 01.06.09 at 1:04 pm

Dave – I did google it unsuccessfully before posting skeptically. I however discussed the absence of nuclear explosions as this struck me as a stronger reason for skepticism than my failure to find something on Google, which could equally well be explained by me having poor googling skills or the US military not putting all its nuclear weapon systems details online.
Unless I am missing something, the links you provided make no mention of submarines being put on a fail-deadly mode in which they launch if they fail to receive a signal within 24 hours. There are concerns about nuclear safety and the Y2K problem, but not the specific situation described. The fear apparently was that a false alarm would lead operators to launch the nuclear weapons, plus associated fears about untested systems generally, not that the mere failure to get a signal for 24 hours would lead to an explosion.


John Quiggin 01.07.09 at 3:09 am

sg, among other things, Australian embassies in Eastern Europe were reduced to a skeleton staff in anticipation of Y2K related havoc. And the total expenditure on remediation was estimated at $12 billion, close to 2 per cent of GDP at the time.

Comments on this entry are closed.