I’ve seen this kind of article many times, but is it correct? I’d say that I’ve generated several million words in papers, newspaper articles, blog posts and so on since I got my first Mac in 1984 (a bit over 100kw/yr for 25+ years, for something like 3 million), and also attracted maybe 10 million more in blog comments (over 100k non-spam comments). Of all that, I’ve lost:
* a fair bit of material I produced before 1990, when hard disk space was very expensive and stuff had to be stored in various external disk formats. Sadly that includes my first econ theory book and a book of satirical songs I turned out in the 1980s. Mostly this was a case of physically losing, or accidentally overwriting, the data, rather than still possessing it but being unable to read it any more.
* The first year or so of comments on my blog in the now-obsolete Haloscan system.
* The blog has also suffered a lot of linkrot, including internal links to its older incarnations.
* A lot of my older text is in formats that can now only be read by extracting a text-only version, and some old stuff (e.g. pre-.qif financial records) is in formats that are no longer readable in any way. But again, that’s mostly a problem with pre-1990 stuff.
Compared to my slightly obsessive desire to preserve every revision of every piece I’ve ever written, those are substantial losses. But compared to my paper records, my digital stuff is almost perfectly complete, not to mention instantly accessible and searchable.
Straightwood 08.08.11 at 10:19 pm
Redundant storage is the key survival strategy for digital records. Our own DNA is massively replicated, so nature has already told us how best to preserve critical data. As storage moves to cloud-based systems, the service providers will replicate data on a vast scale so as to protect against loss-related legal liability. As long as data storage technology continues to improve exponentially, the cost of taking this brute force approach will be negligible.
JP Stormcrow 08.08.11 at 10:34 pm
Our own DNA is massively replicated, so nature has already told us how best to preserve critical data
So trillions of copies inside my house and none anywhere else. Got it.
(Damn those identical twins and their cunning DR plan!)
Barry 08.08.11 at 10:39 pm
JP, then stop sitting around your house and get out there and make some copies! :)
JP Stormcrow 08.08.11 at 10:58 pm
I already have! But 1) I’ve only got it about 87.5% covered, and 2) It’s still inside my house much of the time! (But now that you mention it, A Game of Thrones has highlighted at least two solutions to that last problem.)
William Timberman 08.08.11 at 11:04 pm
The short answer is yes, it does. The long answer is that so does everything else. The time frame varies, of course — books printed in the Eighteenth Century, for example, are likely to outlast those printed in the Nineteenth. (See Slow Fires for a good overview, and a healthy cultural frisson.)
As long as enough representative bits of former glories remain, we should do fine. It would be nice if the Parthenon still looked as it did on the day it was first dedicated, if we could still peruse the stacks of the Bibliotheca Alexandrina, or dine at the Brown Derby, but time marches on — and so do we.
JP Stormcrow 08.08.11 at 11:05 pm
I recall hearing someone knowledgeable claim that people were more likely to lose their digital photographs than their physical prints over the long run. But that would be for regular people, not just obsessive keepers. (And it might have just been speculation–probably not enough time has elapsed in the digital era to assess the long run.)
Nick Caldwell 08.08.11 at 11:15 pm
Yes, I’m sure publishers who are blithely replacing their blogs’ native comment systems with the eye-bleed-inducing majesty of Disqus have never heard of Haloscan.
Honestly thought we’d grown beyond the desire for fragile, third-party javascript-based commenting systems.
Red 08.08.11 at 11:49 pm
I am a medievalist, working with documents (parchments) that are literally a thousand years old. They have long outlived the individuals, their families, or the state and religious institutions (even the social customs) of those who produced them. I doubt that our digital communications will do that.
Matt 08.09.11 at 12:37 am
There’s a fair bit of digital data (in absolute terms) that was created back before software was widely sold or otherwise standardized. It seems to me that fear of undecipherable digital data mostly comes from extrapolating (e.g.) harrowing experiences with records created on long-obsolete machines with proprietary, internally-developed software at big corporations or NASA in 1970. It’s hard to find working hardware to even read the tapes, the tapes may have suffered in storage, and the only “native” application for the data was a 50-year-old FORTRAN or COBOL program running on a long-gone mainframe. It can take real effort to find even a single source or binary copy of the software that created or used the data.
It seems to me that digital preservation alarmists have incorrectly projected that experience to widely-distributed software and its associated data formats. If retrieving an overlooked reel of Apollo data takes herculean effort in 2010, reading an old Visicalc file created on the Apple II will be similarly difficult by 2020, the logic goes. If you wait until 2020 to read your 1980 vintage files, the magnetic storage medium may indeed be too deteriorated. I migrated my Commodore 64 files in the early 1990s; avoiding deteriorating storage media is the most important part.
If you have the data preserved, just fire up an emulator and read the data with its original application. You can’t get easy data interchange with modern computer systems that way (no copying and pasting rows out of Visicalc into Excel!), but neither can printed paper records. Millions of people used Visicalc, and collectors have helpfully placed archives of its various versions online for free. This is technically a copyright violation, but no publishers are losing money over it, so nobody really cares. The NYT article discusses and dismisses emulation in one paragraph, basically because it’s unwieldy, but it’s much cleaner and simpler than trying to shoehorn past-popular software formats into the constraints of currently-popular software.
Finally, if your data is stored in a widely understood format with several implementations, there’s no reason to fear that you’ll even have to dip into an emulator. TIFF and PostScript files created in 1986 are still readable today with zero extra effort. XML, PDF, JPEG, GIF, ZIP, MP3, MPEG2, and other formal and informal file standards are likely to remain readable with mainstream software decades into the future. If not, there’s always emulation again.
There’s one point about digital archives that curiously wasn’t in the NYT piece. We have higher expectations for searching digital archives than for searching paper. We don’t expect to walk into a rare book room, shout “Isaac Newton” and instantly see the books with relevant text leap into a relevance-ordered stack for reading. But we do expect similar magic when it comes to searching digital archives. This is where it gets tricky with emulation: if you have an interesting document in obsolete Wordperfect 2 format, you can easily snag a copy of old Wordperfect 2 from the pirate-preservationist cloud and start reading in minutes under an emulator. But if you’re relying on an automated search system to find interesting documents, you need a bridge between that old Wordperfect document and the document indexing process before you even discover that its contents are interesting. Nobody has bothered to write bridge software for most obsolete/proprietary formats. In the future, either people will write bridging tools so that all formats can be searched in a uniform way or the oldest and least popular formats will sink to the level of paper books, only revealing their contents when deliberately opened by humans.
Tremendously kludgy idea for rapid, uniform bridging of obsolete document formats: associate each document with an appropriate software emulation environment. Set up genre-specific macros to systematically go through documents (scroll down and across in spreadsheets, scroll down through pages of text in word processors, etc.) and capture screen shots for each screen-full of information. Then run OCR on the screen captures just like you would for scanned pages of a paper book, and index them the same way. Just as Google Books captures pictures of book pages and recognizes words and numbers in them, do the same with software, capturing it all as pictures and relying on OCR to provide good-enough indexing so it will be noticed if further study is warranted.
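In Python-ish terms, the whole kludge might look something like the sketch below. The Emulator wrapper here is hypothetical (standing in for whatever scripting or automation interface a given emulator exposes, with assumed open_document/screenshot/scroll_down methods); the OCR step uses the real pytesseract library, much as any book-scanning pipeline would.

```python
# Sketch of the screenshot-and-OCR bridging idea above. `emulator` is a
# hypothetical wrapper around a scriptable emulator; open_document(),
# screenshot() and scroll_down() are assumed methods, not a real API.
import pytesseract  # real OCR library (wraps Tesseract); accepts PIL images

def index_via_emulation(emulator, doc_path, max_screens=500):
    """Open a document in its native (emulated) application, page through it,
    OCR each screenful, and return plain text suitable for a search indexer."""
    emulator.open_document(doc_path)          # assumed: load the file in the old app
    screens = []
    for _ in range(max_screens):
        image = emulator.screenshot()         # assumed: returns a PIL Image
        screens.append(pytesseract.image_to_string(image))
        if not emulator.scroll_down():        # assumed: False once the end is reached
            break
    return "\n".join(screens)
```

The output is just plain text, so it can be handed to any ordinary full-text indexer, giving the same good-enough indexing Google Books gets from OCR’d page images.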
Matt 08.09.11 at 1:07 am
I am a medievalist, working with documents (parchments) that are literally a thousand years old. They have long outlived the individuals, their families, or the state and religious institutions (even the social customs) of those who produced them. I doubt that our digital communications will do that.
Is there a ballpark estimate of what percentage of documents produced 1000 years ago have survived or at least had their contents copied so that they remain readable today? Looking at just the survivors doesn’t give a good idea of durability. No common digital storage format is as durable as clay tablets or properly stored archival paper*. On the other hand, digital storage is less vulnerable to mold, insects, and water damage, and making perfect, redundant, geographically dispersed copies is easier done than said.
*For really, really durable digital storage you can mechanically inscribe patterns on thin sheets of stainless steel. It will outlast even parchment under comparable conditions and offers a much higher information density for written records. It is also too expensive to preserve anything but small volumes of valuable information (no copies of DVDs or CDs stored this way!)
Red 08.09.11 at 2:45 am
Matt@10: No, we don’t have reliable or even approximate estimates of the proportion of written materials from c. 1000 AD that has survived. The number of written documents increased dramatically shortly afterwards, i.e. in the central Middle Ages, during and after the great economic boom of c. 1050–1250 AD and the concomitant rise of literacy among the general population. I would guess that about one million pages in Western script, mostly on parchment (paper, less durable but cheaper, gradually became more common), written before 1500, have survived.
The printing press of the late fifteenth century generated yet another explosion of written materials but again, mostly on paper.
By the way: parchment, as opposed to paper, is highly resistant to insects and mold; even water does not usually leave lasting damage if it’s good quality parchment. Fire is another story: much more would have survived if not for the horrendous devastations in the two World Wars of the 20th century.
I am intrigued by your suggestion for “really durable digital storage.” But there is no money in it, is there?
John Quiggin 08.09.11 at 3:31 am
@Matt, your comments agree with my prior beliefs and you sound a lot better informed than I am, so I am going to adopt your view until I see good evidence to the contrary.
bad Jim 08.09.11 at 4:14 am
Haven’t some works of Archimedes been recovered from parchment copies which had been repurposed for contemporary use? That would put their survival nearer two thousand years.
Being connected can have important implications for the survival of information. If you’re recording an atrocity, the authorities can’t confiscate the data if it’s been uploaded beyond their reach. Replication is perhaps the best general approach to preserving information.
(JP Stormcrow: three kids? Strictly speaking, with two or more all you know for sure is that it’s between half and one, at least if you have a son or you’re female [assuming you’re a mammal; it’s the reverse if you’re a bird].)
Gavin Findlay 08.09.11 at 4:23 am
There is a rapidly growing field/discipline, digital humanities, which is attempting to come to grips with this issue, primarily from the museums and archives perspective. The guru is Dan Cohen at George Mason University, and there are a number of leading lights in Australia, such as Tim Sherratt, formerly of the National Museum of Australia, tackling some of the hard-core tech issues. There is a network of THATCamp conferences (The Humanities and Technology) and a reasonable corpus of publications, e.g. Theorising Digital Cultural Heritage, one of whose editors is Fiona Cameron of UTS.
Matt’s advice is top-notch, but it’s worth noting that the problem of legacy data on old media is a small subset of a much larger problem – as our capacity and tools increase, we are (a la Google Books) digitizing vast slabs of printed text, images, video etc. My own PhD research is on this very thing – dealing with the legacy of video recordings of Australian theatre in the 80s and 90s, preserving it and making it accessible. Video was of course one of the first in the chain of intermediate technologies straddling the analog/digital divide.
Long-term storage shouldn’t trouble you – I reckon that problem will disappear through redundancy any time now, if it hasn’t already. Making an emulator to retrieve J Quiggin’s legacy data sounds like a nice comp sci undergraduate project for a colleague of yours to impose on someone.
andrew 08.09.11 at 4:29 am
As this genre goes, the article isn’t actually all that alarmist; unfortunately, it has an alarmist headline, and the Sterling example might make it seem like the author shares his view, which I don’t think she does. I also don’t think it’s aimed at people who already pay attention to how and what they store digitally. One of the main points of the article is that you have to do ongoing maintenance on your digital files, so if you already do that it’s not going to seem remarkable. Matt’s comment about migrating Commodore 64 files in the 1990s is exactly one of the points of the article.
But I bet a lot of people didn’t do that and aren’t doing the equivalent today: I have nothing from my Commodore 64 and none of the programming I did in Pascal or Logo or Basic when I was a kid, and my parents didn’t save any of that – unlike some of my paper-based work. I also have none of my e-mail from my university-sponsored account. Of course I could have kept almost all of this if I’d thought about it. I may even find some of the old disks, though I’d have to track down a 5.25″ drive.
Last summer, I came across some old 3.5″ floppies, and after running an old version of Word on an old laptop with an old floppy drive, I was able to migrate all of the papers I wrote in college in WordPerfect 5.0 into a more recent (but still proprietary) file format without much problem. (Although I do miss the old blue screen.) But that was because it occurred to me to do something.
What digital archivists – that is, the ones who are actually working with digital materials – have run into in recent years is a breakdown in the old ways of acquiring and processing donated materials. You used to have people show up with boxes of papers, on paper (including photos, etc.), maybe some magnetic media in the form of audio/video stuff, and that was pretty much it. But now you either get donors who have digital material but don’t think it’s “archival” enough to donate, or you get boxes of material that include stuff dating back to the 1980s that hasn’t been migrated and hasn’t been stored properly. And then the archivists have to recover it. That doesn’t mean it’s impossible, of course.
I take this article to be an attempt to keep that from happening over and over down the line. And to get people who don’t plan on donating anything anywhere, but who want to preserve stuff that’s important to them, to think about how they’re going to do that.
On the flip side, and maybe this should get into print more often, you see people argue that as long as we keep our digital preservation systems running, we are very likely to be able to save a higher percentage of material than was possible in the past. And I suspect they’re probably right. But I also suspect that a lower percentage of that is going to be preserved by accident, in the sense of discovering a great-ancestor’s bundle of papers in an attic.
maidhc 08.09.11 at 4:52 am
There are certain types of objects which have a limited on-line lifetime. Many BBC radio shows are only available for a week. Other radio outlets keep content available for 2 weeks or 4 weeks, or for some indefinite but limited time (“until we get around to taking it down”, I suppose). Comic strips are usually on-line for a month.
There are blogs that discuss such objects, with frequent linking. Such blogs can really only be read fresh, since even though the blog content may be preserved indefinitely, the links and the availability of the object under discussion are gone after a few weeks.
Theoretically, in some distant future when all these copyrights have expired, it might be possible to reconstruct these blogs in their original form, but it seems as though it would be a challenging task.
When my father emigrated, he began writing to his mother every week until she died. She kept all his letters, and later on he went back and collected them, and now I have them. (He didn’t keep her letters to him, though.)
My father and I rarely corresponded, because we talked on the phone regularly. Later in his life he began sending me e-mail, much of which I still have sitting on several different hard drives in a proprietary format.
Joe 08.09.11 at 5:22 am
At the risk of missing (ignoring) the point, surely this is all partially good news from a privacy point of view, no? There are no doubt some people who don’t want permanent digital archives (scholarly or otherwise).
Clay Shirky 08.09.11 at 5:44 am
Also chiming in with Yes.
Back In The Day* I chaired the technical committee of the US Library of Congress’s digital preservation strategy**, and we found that what seems like a straightforward problem is actually fairly layered.
The first risk is that the primary copy of the 1s and 0s will not be kept — disks are lost, storage is erased and written over, that sort of thing. Ziff-Davis paid me to write a book about the internet in 1993 and, in that same year, I wrote hundreds of thousands of words for free on Usenet; today, it would be much (much) easier to find what I said on Usenet than to lay my hands on a digital copy of the MS.
Compounding this risk, many of the claims for long-term preservation via various media, and especially optical storage, were simply false; supposed ‘100-year’ substrate durability for DVDs has ended up showing significant 10-year wear, even in relatively good archival storage, so you could lose sole copies even if you thought you were taking steps to prevent such loss.
As with @Straightwood’s comment, the answer to this is to assume LOCKSS, in the acronym of one digital preservation effort: Lots Of Copies Keeps Stuff Safe. Even this, though, can suffer from monoculture threat — if all copies of something are committed to the same storage medium, and that one medium later reveals hitherto unknown failures, then your LOCs would not KSS. To this end, the Library settled on a strategy of networked diversity of storage, so that different kinds of institutions, with different preservation regimes, down at the level of the hardware and up at the level of the patrons, would preserve the requisite material.
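A toy calculation (with invented failure rates) makes the monoculture point concrete: once every copy shares the same medium, the number of copies almost stops mattering.

```python
# Toy numbers only: illustrates why copy count alone doesn't protect you
# when all copies share one storage medium (the "monoculture threat").
p_copy = 0.01        # assumed chance any single copy is independently lost per decade
n_copies = 5

independent_loss = p_copy ** n_copies
print(independent_loss)          # 1e-10: five independent copies are effectively safe

p_medium_flaw = 0.02             # assumed chance the shared medium has a latent defect
correlated_loss = p_medium_flaw + (1 - p_medium_flaw) * independent_loss
print(correlated_loss)           # ~0.02: dominated entirely by the shared-medium risk
```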
Then there is the ‘Linear A’ problem, analogized to the Minoan writing system whose marks are readable and whose meaning is not***. Having all the bits to your thesis in, say, NisusWriter will do you no good if you don’t have something that reads NisusWriter. There are many sources of format drift — one of the big ones is that many large and important data sets are uninterpretable without direct participation by the creators of said data sets.
I remember one field trip we took to a NASA installation to see their (incredibly impressive) distributed storage system for the data their scientists worked with. The librarians told us about how the data was insulated against floods, lightning strikes, earthquakes; they had a backup plan for all of it. We asked what they’d do if a scientist was hit by a bus. There was an uncomfortable silence, and then they said that most of the data would be lost, as they can’t interpret it on their own; making sense of much of the data was alchemical in its reliance on its original creator for subsequent interpretation.
Over a beer, the NASA librarians muttered that the scientists were too lazy to label their work so others could read it. Over three beers, they suggested that the scientists might not regard keeping other people from either extending or reviewing their work as a harmful side-effect. In either case, much of the world’s research data is at hermeneutic risk even where it exists in multiple readable copies.
And then there is the lulu: DRM. The whole point of DRM (and, to a lesser degree, periodically induced format incompatibilities) is to make it impossible to interpret data outside a very narrow range of devices and uses. It’s a POCKSE strategy: a Paucity Of Copies Keeps Stuff Expensive. The big win here, and we’ll see how provisional it is, was the Library of Congress’s interpretation of copyright allowing DRM cracking for fair use, thus injecting the public good argument directly into the interpretation of the Digital Millennium Copyright Act.****
There are subtler ontological questions (is a work of art made to be viewed on a Mac 512K preserved if it’s run on an emulator?) and questions of happenstance (non-use tends to preserve analog material and degrade digital material), but heterogeneous storage and format compatibility are the two big predictors of long-term viability of digital data.
* For values of “The Day” roughly equal to 2002-2006
** The National Digital Information Infrastructure and Preservation Project, to be exact, making that unloveliest of acronyms, NDIIPP.
*** http://en.wikipedia.org/wiki/Linear_A
**** http://arstechnica.com/tech-policy/news/2010/07/apple-loses-big-in-drm-ruling-jailbreaks-are-fair-use.ars
Matt 08.09.11 at 7:22 am
In speaking of digital data engraved in stainless steel, I was inspired by the Long Now Foundation’s Rosetta Disk concept. Instead of the spiraling design and natural language you could use constant-scale markings to store compressed digital data. Standard laser engraving machines should easily be able to make the markings. You might need to include a natural-language cover sheet or two explaining the encoding process. Due to the greater information density achievable with compact symbols and data compression, I think the cost can be competitive with paper records, despite the much higher price of stainless steel. But this is sadly faint praise compared to cheap consumer-grade storage costs of about 5 cents per gigabyte.
I have vague past recollections from Usenet of people talking about occasionally printing out must-retain data on archival quality paper when digital storage wasn’t trusted. I don’t think there is any common solution to the problem of passively preserving digital data for the long term. Most organizations either don’t have digital records that they expect to need in 200 years or they expect to maintain and migrate the records as necessary every few years. Almost nobody is clamoring for a solution where they can produce a physical artifact representing the records, lock it in an unused vault for a couple of centuries, and expect their descendants to easily read it 10 generations later.
I wonder how well hard drives would serve for century+ storage. How rapidly do the magnetic patterns randomize over time if the drives are stored with as much care as rare books? If the degradation is reasonably slow, predictable, and uniform, you can initially store the data with whatever degree of error-correction encoding is necessary to survive X centuries. I wouldn’t expect anyone to have a SATA cable in 200 years, nor would I expect the mechanicals to work after so much idleness, but the drive casing might serve as reasonably cheap and robust environmental isolation for the data-dense platters. A few centuries from now the drives could be opened up in a clean room and the platters read with whatever equipment is then current, as long as the platters retain data well enough.
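On the error-correction point: as a rough sketch of how you might choose “whatever degree of error-correction encoding is necessary”, the (real) Python reedsolo package lets you set the parity budget up front. The 32-parity-byte figure below is an arbitrary illustration, not a recommendation, and the tuple-unpacking on decode assumes a recent version of the library.

```python
# Sketch: add Reed-Solomon parity so stored bytes survive a bounded amount of rot.
from reedsolo import RSCodec

rs = RSCodec(32)                  # 32 parity bytes per 255-byte codeword:
                                  # corrects up to 16 corrupted bytes per codeword
original = b"records we hope to read in two centuries" * 100
encoded = rs.encode(original)     # this, not the raw bytes, is what you'd store

# Much later: decode recovers the original as long as corruption stays
# within the parity budget of each codeword.
damaged = bytearray(encoded)
damaged[10] ^= 0xFF               # simulate one rotted byte
recovered = rs.decode(damaged)[0] # recent reedsolo: (message, full codeword, errata)
assert bytes(recovered) == original
```

How much parity you would actually need depends on the unknown: how fast, and how uniformly, the platters lose their magnetization over a century or two.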
bad Jim 08.09.11 at 8:11 am
I have a recordable CD of pictures of naked ladies. I think I can recall those images rather vividly, and it’s questionable whether that disk, twenty years old, is still readable. (I haven’t attempted it.) Supposing it’s so, it’s remarkable that our fallible memories may actually surpass the reliability of our media.
Homer’s epics, whether they were composed by him or by someone else of the same name, survived for centuries and were almost certainly improved by their oral tradition.
Most of my works would be destroyed by changing a single bit, since “MOV A,@R0” and “MOV @R0,A” are hardly symmetrical (80 and A0 in hex, as if anyone cared), but the hardware that hosted them was more fragile than their embodiment, and my code was nonsense without the hardware.
We have extensive collections of writings we can’t read (Etruscan, anyone?) because stuff outlives humans. The Septuagint remains a key resource today not only because it’s older than the Hebrew texts but because the old language was already lost back then. (And leaving out the vowels turned out to be a really bad idea.)
Ginger Yellow 08.09.11 at 2:51 pm
Having all the bits to your thesis in, say, NisusWriter will do you no good if you don’t have something that reads NisusWriter.
One thing I’m always curious about when this topic comes up is why cryptanalysis principles don’t apply. While not a trivial problem, or one that casual archivists would undertake, surely it’s no harder in principle to “translate” an obsolete text format than to crack a moderately straightforward code. With the added benefit that, for the most part, the format won’t have been designed to be hard to crack. And, quite possibly, the older the format, the less complicated the analogous code.
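For many old word-processor formats the first pass really is that easy, because the text usually sits in the file as plain bytes surrounded by binary formatting codes. A minimal sketch of that first pass, assuming nothing about the format (the thresholds are arbitrary):

```python
# Crude first look at an unknown document format: pull out runs of printable
# bytes (like the Unix `strings` tool) and use byte frequencies to spot likely
# formatting/control codes worth reverse engineering. No format knowledge assumed.
import re
from collections import Counter

def peek_unknown_format(path, min_run=6):
    data = open(path, "rb").read()

    # Long runs of printable ASCII are very likely the document's actual text.
    runs = re.findall(rb"[\x20-\x7e]{%d,}" % min_run, data)
    text_guess = b"\n".join(runs).decode("ascii", errors="replace")

    # Frequent non-printable bytes are good candidates for control codes.
    freq = Counter(data)
    control_candidates = [b for b, _ in freq.most_common(20) if not 0x20 <= b <= 0x7e]

    return text_guess, control_candidates
```

The harder part, as with any ciphertext-only attack, is recovering the semantics of those control codes (styles, footnotes, tables) rather than the visible words.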
sean matthews 08.09.11 at 6:23 pm
The first piece of memorably decent original thinking/writing I ever did – for my final-year undergraduate paper, and good enough that my professor asked if he could pass it round at a conference he was attending – was written on a Mac. I kept the disc for a long time after I could no longer read it, and now the disc itself has long since vanished. It had major sentimental value.
JP Stormcrow 08.09.11 at 8:19 pm
bad Jim@13: JP Stormcrow: three kids? Strictly speaking, …
Yes three. I tend to speak “likely” rather than “strictly”. Although strictly speaking your calculation did not include a discount factor for the possibility of my not actually being their father. All I know for sure is nothing.
JP Stormcrow 08.09.11 at 8:26 pm
GY@21: That does seem to be an interesting idea that should work. I have a couple of times needed to reverse engineer data in unknown or obscure numeric formats and was able to crack those without resorting to anything fancy. (Although in those cases the known information already constrained the possible solutions pretty heavily.)
Henri Vieuxtemps 08.09.11 at 8:58 pm
Dump everything into gmail, like I do. What can go wrong?
dbk 08.09.11 at 10:15 pm
Matt@10: Is there a ballpark estimate of what percentage of documents produced 1000 years ago have survived or at least had their contents copied so that they remain readable today?
Ballpark estimate for Ancient Greek literature (i.e. around 2000-2500 years old) preserved is 2%.
nick s 08.09.11 at 10:27 pm
One of the more famous cases of data-rot (or format rot and hardware rot) was the BBC’s Domesday Project, designed to commemorate the Domesday Book’s 900th anniversary in 1986 by collecting accounts of local communities from schools and colleges. Unfortunately, the method of distribution (laser discs and highly customised BBC Master computers) went from being Tomorrow’s World to Today’s Junk very quickly, and it’s taken the best part of the last decade, and the hard work of very many people, to get the emulation right and finally put the data online.
Looking at just the [print/MS] survivors doesn’t give a good idea of durability.
At best, it gives a sense of priorities. As someone whose archival research stretches back only 300 years, I found it very obvious from the manner of their archiving that many pamphlets survived thanks to the personal interest of a handful of collectors. The great libraries of the Elizabethan era regarded playtexts as ephemera; private collections supply us with the bulk of the theatrical archive. The intention of digital archivists is to diminish the need to prioritise at the outset.
I’d agree with Clay’s survey, and am reminded of how Jason Scott, who now works for Brewster Kahle’s Internet Archive, recently put out an alert and obituary for 5.25in floppies. In addition, one of my ongoing interests is the shifting proportion of people’s digital vaults that can be classed as ‘commodity data’ (commercial multimedia, OS files, applications, etc.) for which there’s a solid presumption of duplicability and retrievability, as opposed to personal data that requires either sharing or some form of backup in order to survive catastrophic failure. If ubiquitous streaming access or some kind of referential storage alleviates the burden of copying or backing up one’s music collection or samizdat archive of obscure BBC documentaries, then perhaps it also permits greater focus on preserving personal data, but such a move would mean transferring both the access rights and the archival duty to third parties, and the stories of the BBC’s past backup-wiping habits may give one pause.
I also wish that I’d been more active in keeping personal copies of some of the earliest websites: it’s a chunk of social history that looks more and more like an archaeological project.
bad Jim 08.10.11 at 8:26 am
For what it’s worth, my next door neighbor is an amateur archivist who has extensive paper files on the activities of local governments and public agencies, spanning decades and filling a great many file cabinets, and is looking for a good home for them. Some of her files appear to be the only copies in existence. She knows where everything is, but doesn’t have that much longer to live. Ideally she could hand them off to an academic at UC Irvine or Cal State Fullerton together with a thorough briefing, but absent the attention of an interested human her collection is just so much paper awaiting recycling.
Converting her files to a searchable digital format would be a massive undertaking with a very uncertain value. It might be cheaper simply to warehouse them and publish a bare description of their contents. Neither outcome is in prospect.
Perhaps
John Quiggin 08.11.11 at 9:57 am
Reading over this, it seems as if the big problems are social rather than technical. Can the institutions that preserved lots of paper records of no immediate interest be adapted to preserve the same, or a larger, proportion of digital data?
Emma in Sydney 08.11.11 at 10:17 am
What, JQ, like the National Library of Australia’s Pandora Project has done for Australian websites?
John Quiggin 08.11.11 at 10:31 am
Emma, you’re right about Pandora – I have a bunch of stuff there. More broadly, I think anything that makes it online is probably safe for (most values of) all eternity. So, maybe once we all have Dropbox/iCloud, that will be the case (though privacy issues may be problematic). In the interim, I suspect there are a lot of hard drives being discarded, in the manner of irreplaceable scrolls being burned for home heating.