I posted this piece in RenewEconomy a couple of months ago. It didn’t convince the commenters then, and I don’t expect it to be any different here, but I’m putting it on the record anyway.
AI won’t use as much electricity as we are told, and it’s not a reason to slow the transition to renewables
The recent rise of “generative AI” models has led to a lot of dire predictions about the associated requirements for energy. It has been estimated that AI will consume anything from 9 to 25 per cent of all US electricity by 2032.
But we have been here before. Predictions of this kind have been made ever since the emergence of the Internet as a central part of modern life, often tied to claims and counterclaims about the transition to renewable energy.
Back in 1999, Forbes magazine ran a piece headlined “Dig more coal — the PCs are coming”. The article claimed that personal computers would use 50 per cent of US electricity within a decade. The unsubtle implication was that any attempt to reduce carbon dioxide emissions was doomed to failure.
Of course, this prediction wasn’t borne out. Computing power has increased a thousand-fold since the turn of the century. But far from demanding more electricity, personal computers have become more efficient, with laptops mostly replacing large standalone boxes and software improvements reducing waste.
A typical home computer now consumes around 30-60 watts when it is operating, less than a bar fridge or an incandescent light bulb.
The rise of large data centres and cloud computing produced another round of alarm. A US EPA report in 2007 predicted a doubling of demand every five years. Again, this number fed into a range of debates about renewable energy and climate change.
Yet throughout this period, the actual share of electricity use accounted for by the IT sector has hovered between 1 and 2 per cent, accounting for less than 1 per cent of global greenhouse gas emissions. By contrast, the unglamorous and largely disregarded business of making cement accounts for around 7 per cent of global emissions.
Will generative AI change this pattern? Not for quite a while. Although most business organizations now use AI for some purposes, it typically accounts for only 5 to 10 per cent of IT budgets.
Even if that share doubled or tripled, the impact would be barely noticeable. Looking at the other side of the market, OpenAI, the maker of ChatGPT, is bringing in around $3 billion a year in sales revenue, and has spent around $7 billion developing its model. Even if every penny of that was spent on electricity, the effect would be little more than a blip.
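To put rough numbers on that blip, a back-of-envelope sketch (the electricity price of about US$0.10 per kilowatt-hour and US consumption of roughly 4,000 TWh a year are assumptions for illustration, not official figures):

```c
/* Back-of-envelope sketch: pretend OpenAI's entire ~$7 billion development
 * spend went to electricity, and compare the implied energy with annual US
 * consumption.  The price ($0.10/kWh) and the US total (~4,000 TWh/yr) are
 * assumed round numbers, used for illustration only. */
#include <stdio.h>

int main(void)
{
    double spend_usd     = 7e9;     /* total development spend, $       */
    double price_per_kwh = 0.10;    /* assumed electricity price, $/kWh */
    double us_annual_twh = 4000.0;  /* assumed US consumption, TWh/yr   */

    double implied_twh = (spend_usd / price_per_kwh) / 1e9;  /* 1 TWh = 1e9 kWh */

    printf("Implied energy: %.0f TWh\n", implied_twh);
    printf("Share of one year's US consumption: %.1f%%\n",
           100.0 * implied_twh / us_annual_twh);
    return 0;
}
```

Even on that implausibly extreme assumption, the implied energy comes to around 70 TWh, less than 2 per cent of a single year’s US consumption, and the $7 billion was in reality spread over several years and spent mostly on chips and salaries rather than power.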
Of course, AI is growing rapidly. A tenfold increase in expenditure by 2030 isn’t out of the question. But that would only double the total use of electricity in IT.
And, as in the past, this growth will be offset by continued increases in efficiency. Most of the increase could be fully offset if the world put an end to the incredible waste of electricity on cryptocurrency mining (currently 0.5 to 1 per cent of total world electricity consumption, and not normally counted in estimates of IT use).
If predictions of massive electricity use by the IT sector have been so consistently wrong for decades, why do they keep being made, and believed?
The simplest explanation, epitomised by the Forbes article from 1999, is that coal and gas producers want to claim that there is a continuing demand for their products, one that can’t be met by solar PV and wind. That explanation is certainly relevant today, as gas producers in particular seize on projections of growing demand to justify new plants.
At the other end of the policy spectrum, advocates of “degrowth” don’t want to concede that the explosive growth of the information economy is sustainable, unlike the industrial economy of the 20th century. The suggestion that electricity demand from AI will overwhelm attempts to decarbonise electricity supply supports the conclusion that we need to stop and reverse growth in all sectors of the economy.
Next there is the general free-floating concern about everything to do with computers, which are both vitally necessary and mysterious to most of us. The rise of AI has heightened those concerns. But whereas no one can tell whether an AI apocalypse is on the way, or what it would entail, an electricity crisis is a much more comprehensible danger.
And finally, people just love a good story. The Y2K panic, supposedly based on the shortening of digits in dates used in computers, was obviously false (if it had been true, we would have seen widespread failures well before 1 January 2000).
But the appeal of the story was irresistible, at least in the English-speaking world, and billions of dollars were spent on problems that could have been dealt with using a “fix on failure” approach.
For what it’s worth, it seems likely that the AI boom is already reaching a plateau, and highly likely that such a plateau will be reached sooner or later. But when and if this happens, it won’t be because we have run out of electricity to feed the machines.
Update
The AI boom is also being used to justify talk, yet again, of a nuclear renaissance. All the big tech firms have made announcements of one kind or another about seeking nuclear power to run their data centres. And it’s true that the “always on” character of nuclear makes it a genuine example of the (otherwise mostly spurious) notion of “baseload demand”. But when you look at what Google, Meta and the others are actually doing, it amounts to around 1 GW apiece, the output of a single standard-sized reactor. That might bring a few retired reactors, like the one at Three Mile Island, back online, but it’s unlikely to induce big new investments.
some lurker 11.30.24 at 4:55 am
The author might want to check with people who were alive during that distant era, maybe look for accounting records of the fees paid to consultants to shore up the systems that relied on two digit years for trivial matters like payroll and tax accounting. It was not false: it was mitigated, at great expense. There would not have been failures before Jan 1, 2000, because Jan 1, 2000, was the trigger. Rolling over 99 to 00 and doing arithmetic against a zero where a non-zero value was expected could be problematic.
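A minimal sketch of that failure mode, in C (hypothetical code, not from any real system):

```c
/* Hypothetical sketch of the classic two-digit-year bug: years stored as
 * two digits to save space, elapsed time computed by plain subtraction. */
#include <stdio.h>

static int years_elapsed(int start_yy, int now_yy)
{
    /* Fine for 75 -> 99; breaks when "now" rolls over to 00. */
    return now_yy - start_yy;
}

int main(void)
{
    /* Account opened in (19)75, checked in (19)99: 24 years.  Correct. */
    printf("1999 check: %d years\n", years_elapsed(75, 99));

    /* Same account checked in (20)00: 00 - 75 = -75.  Any payroll, tax or
     * interest calculation doing arithmetic against that value gets a
     * nonsense negative figure instead of 25. */
    printf("2000 check: %d years\n", years_elapsed(75, 0));
    return 0;
}
```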
A misunderstanding like this, coupled with actual current news stories about carbon cowboys stripping localities of tax revenues by siting cloud computing to take advantage of tax schemes, makes me question the rest of this piece…
KT2 11.30.24 at 5:01 am
JQ said “Will generative AI change this pattern? Not for quite a while.”
Definitely not for a while if Jensen Huang is doing hardware minus 20Watts at a time.
“Earlier in 2024 at GTC, Nvidia CEO Jensen Huang said that if the company had used optics as opposed to copper to stitch together the 72 GPUs that make up its NVL72 rack systems, it would have required an additional 20 kilowatts of power.”
“AI ambition is pushing copper to its breaking point: Ayar Labs contends silicon photonics will be key to scaling beyond the rack and taming the heat”
Tobias Mann, The Register, Thu 28 Nov 2024
https://www.theregister.com/2024/11/28/ai_copper_cables_limits/
And just as the AI renewables/copper scare articles appear (already?), photonics will be cost competitive.
I’m not impressed by the 20W saving. I am impressed that Jensen Huang…
“In June 2024, Nvidia became the largest company in the world by market capitalization.[6] As of November 2024, Forbes estimated Huang’s net worth at $130 billion, making him the 9th richest person in the world.[7]” (Wikipedia)… is Watt wise, not MW foolish.
TH 11.30.24 at 5:10 am
With the electricity projections, I can’t quibble, not enough research. But perpetuating the lie that the Y2K bug was a myth is just fucked. I’ve been on teams, starting in 1996, that were working quite hard mitigating that shit. The difference was, a) people were forewarned, b) a lot of smart people checked and knew that the problem was real and c) there was no vested interest in having the systems fail.
This nasty story these days, that it was all a myth, is tailor-made to ridicule and belittle other warnings. To support that is not a good look.
ozajh 11.30.24 at 7:10 am
The Y2K panic may have been overblown, but I can assure you from personal experience that the Y2K problem was very, very real. I myself spent the 3 years leading up to 1st January 2000 in a remediation team.
supposedly based on the shortening of digits in dates used in computers
Just so, because the cost of computer memory and storage back in the 1980’s, and even more so the 1970’s, led developers to use the smallest fields they could to satisfy the immediate requirement. No “supposedly” about it.
we would have seen widespread failures well before 1 January 2000
Depends on your definition of “widespread”. The organisation I worked for had some very significant problems on 1st January 1990, when YDDD dates went from 9365 to 0001. It was in fact one of the reasons why they took Y2K relatively seriously.
They spent man-decades fixing identified problems, and simply replaced a lot of equipment and software where they couldn’t guarantee that the old stuff would work, and they STILL had significant problems on 1st January 1999, and 1st July 1999, and 1st January 2000, and 1st April 2000, and 1st July 2000. It’s just that none of these stopped things for more than a few days.
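To make the YDDD rollover concrete, a minimal sketch in C (hypothetical, nothing like the actual systems involved):

```c
/* Hypothetical sketch of the YDDD problem: the year's last digit plus the
 * day of the year, stored as one integer, so 31 Dec 1989 = 9365 and
 * 1 Jan 1990 = 0001.  Naive comparison works within a decade and then
 * inverts at the rollover. */
#include <stdio.h>

static int is_newer(int yddd_a, int yddd_b)
{
    return yddd_a > yddd_b;   /* bigger number assumed to mean later date */
}

int main(void)
{
    printf("Is 9001 newer than 8365? %d\n", is_newer(9001, 8365)); /* 1: correct */
    printf("Is 0001 newer than 9365? %d\n", is_newer(1, 9365));    /* 0: wrong   */
    return 0;
}
```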
could have been dealt with using a “fix on failure” approach
Really? A large part of the concern was due to the fact that people didn’t know what would fail, and how. The failure of the systems I worked on and around could be considered more annoying than health-threatening (although having a large institution unable to pay invoices and wages for more than a few weeks would have significant flow-on effects if a lot of counter-parties were in the same boat).
The horror stories about the possibility of planes falling out of the sky may indeed have been urban myths, but it’s not a myth that a water authority in the UK did a trial cutover and found that all the valves controlling the chemical inputs in their treatment plant opened fully and stayed open. The water in their mains would have been poisonous shortly thereafter.
Anyway, rant over and you could well be correct. We’ll have a control experiment over the next few years, IMHO, because “fix on failure” is exactly the approach being taken with respect to CO2-driven Climate Change.
nicopap 11.30.24 at 10:51 am
There are many misconceptions in this post. The first and most obvious one is that software became more efficient with time. The contrary is true. Many optimization techniques widely used in the 80s are now barely known. There is a running joke in software development that says that software expands to use available memory and CPU cycles. The only reason computers do not consume a significantly larger amount of energy today is that, alongside the increases in computer speed, came efficiency increases of staggering amounts. That was the famous Moore’s law, which probably already came to an end last decade.
It also forgets the Jevons paradox. Jevons noticed that during the 19th century, coal usage increased as engine efficiency increased. This is true of computers as well, and probably, if the AI train continues, it will be true of AI algorithms.
Concerning the Y2K bug: it is probably due to the massive expenses mentioned in the post that it didn’t happen. Just four years ago in Switzerland, in the canton of Neuchatel, a couple of thousand people were without heating for two weeks in the midst of a very cold winter. It turned out to be a Y2K bug: the developer who fixed it had merely delayed it by 20 years, and everybody had forgotten about it. Now, with just this single example, think about the damage that a “fix as you go” approach would have caused on a global scale.
Another limitation of the post is that it focuses on electricity consumption while technology also expanded material usage. I’m wondering how many new mines were opened to support the technological changes we experienced.
The only saving grace is that generative AI as it is today is completely useless. So, either more efficient technologies will be used or it will burst and stop consuming electricity.
I have absolutely no belief that software development will increase electricity consumption, in line with the post, but for entirely different reasons.
Alex SL 11.30.24 at 10:52 am
I think the truth is somewhere in the middle: Yes, AI won’t be using anywhere close to 25% of a nation’s electricity, because it doesn’t provide benefits that would make people willing to pay as much in fees for its use as that energy consumption implies. (The end-user would, after all, ultimately have to pay that bill somehow.) It is largely good as an efficiency tool and for spam generation. But on the other side, the news is full of examples of data centres and AI companies using more energy than they would have without the AI bubble, Google being an example: they recently matter-of-factly announced that they weren’t going to hit their climate goals due to AI. This may not be as bad as some claim, fueled by the hype, but it certainly isn’t good.
As for de-growth, we aren’t there yet, but there simply is a hard limit to how much economic activity we can conduct before everything collapses. That is not a matter of debate, it is just physics and biology. The absolute global limit is when merely the heat loss from running our machines is enough to boil off the oceans, even at 100% renewable energy. I understand that moment is a few hundred years of 2-3% annual economic growth away, assuming growth means some kind of meaningful wealth and progress rather than being re-defined as nothing more than putting larger numbers into some spreadsheet. Overuse of non-renewable resources like groundwater, fertile soil, phosphate, and fossil fuels, plus global warming, plus contamination of ourselves with increasing loads of hormones, microplastics, and other nasties will do us in long before then. Even rainwater is today effectively not safe for drinking anywhere in the world due to what we have done already. De-growth is a matter of civilisational survival.
In a few hundred years, there will be fewer than three billion of us living a much more sustainable life. There is only one question. Will that be the case because we collectively decided to gently and deliberately de-grow towards sustainability, or will it be the case because our descendants are in the process of digging themselves out of a dark age following catastrophic collapse and another Migration Period that will make the one in late antiquity look like a picnic? At least the Visigoths weren’t fleeing from their entire land dropping under water or growing uninhabitably hot, and they were much fewer in number than e.g. the population of Bangladesh.
Phil 11.30.24 at 11:58 am
The Y2K panic, supposedly based on the shortening of digits in dates used in computers, was obviously false (if it had been true, we would have seen widespread failures well before 1 January 2000).
One more time: the main reason Y2K wasn’t a disaster is that it was fixed. I started coding eight-digit dates in the late 1980s, and plenty of other people were doing the same. The remediation of systems, and software packages, that had been built with six-digit dates – and the scrapping of systems, even entire platforms, that couldn’t be remediated – began in the early 90s. In retrospect some people do seem to have overstated the possible consequences of the problem – survivalists and ‘preppers’ were a small minority in Y2K preparedness circles, but they made a lot of noise. The problem itself, though, was entirely real.
Laban 11.30.24 at 2:02 pm
I worked full time on Y2K, and it was mostly wasted effort – but we did find a few short dates, and some Assembler date-calc routines written maybe decades before that needed fixing. So the Assembler I wrote in my 1983 training came in handy at last.
Fifteen years after Y2K, mainframes were, if not entirely deceased, pretty near it. The replacement system we’d built at vast cost in consulting fees just ran legacy stuff, and everything new was client/server.
“talk, yet again, of a nuclear renaissance”
Well, if molten salt reactors can actually live up to the promise of burning nuclear waste and transmuting it from long-halflife to short-halflife, we may yet have one.
https://world-nuclear-news.org/articles/moltex-reactor-can-consume-used-fuel-research-confirms
My guess is that the Chinese will perfect them at a bargain price, just as the UK has covered every hill with Chinese wind turbines, and every roof with Chinese solar panels.
navarro 11.30.24 at 8:06 pm
as a teacher, starting in 1998, in a high-tech school district which had begun working on the issues with consultants in 1995, i too am forced to say that the author’s choice to describe the y2k bug as a “myth” completely discredits everything else the author has to say.
this notion that the y2k bug was some kind of IT boondoggle or hoax is so completely at variance with the facts as to become a parody.
did the author ask chatgpt to write a parody?
Michael Cain 11.30.24 at 8:26 pm
Well, if molten salt reactors can actually live up to the promise of burning nuclear waste and transmuting it from long-halflife to short-halflife, we may yet have one…. My guess is that the Chinese will perfect them at a bargain price…
Gates’ TerraPower and Buffett’s Pacificorp have agreed to build a sodium-cooled reactor in Wyoming at the site of a Pacificorp coal burner (scheduled to stop coal use in 2026). They’ve also signed up HD Hyundai and GE Hitachi. Southern Co. has reached an agreement with TerraPower and the Dept of Energy to build a small prototype molten-salt reactor at the Idaho National Lab. It’s interesting that these are being built in the back side of nowhere in the Western Interconnect, far from where the power is needed.
Alex SL 11.30.24 at 9:06 pm
Ye gods, I apparently read right past the Y2K thing. One could just as well say that the ozone hole was an overblown scare because the nations of the world got together to phase out the chemicals that cause it. Oh, wait, some people do say that.
nicopap,
A more senior colleague once said to me, Intel giveth, and Microsoft taketh away. Our software today is towering heaps of many layers of inefficiency. Because computing is so fast, and storage is so plentiful, they can get away with that on most machines. But I once bought a netbook optimised for cheapness and light weight, with Windows 10 pre-installed. Windows occupied two thirds of the disk storage before any other software was installed, and if you clicked on something, it took 3-4 seconds to react.
I resolved those issues by wiping the disk and installing Ubuntu; suddenly the netbook became usable. But most people use Windows, and most corporate software is similarly badly coded.
navarro 11.30.24 at 11:47 pm
“It’s interesting that these are being built in the back side of nowhere in the Western Interconnect, far from where the power is needed.”
oh, hey! i just noticed that those two experimental reactor prototypes are being built in the middle of nowhere, far from any population centers which might be impacted by any problems that crop up.
John Q 12.01.24 at 12:41 am
I should probably have omitted Y2K, but I was around at the time. I did my best to ensure that the money spent (by an organization where I was on the board) wasn’t entirely wasted. We tried to time things so that we could accelerate replacement of old systems rather than working to make them compliant.
And (as I’ve done in the OP) I put my Y2K predictions on the record in 1999, at a time when lots of people were pointing to the schools, small businesses and entire nations that had done nothing to prepare. Here’s an example (paywalled, but the opening para is enough to get the idea).
https://www.afr.com/policy/y2k-bug-may-never-bite-19990902-k9053
and here’s the kind of thing I was responding to at the time
https://archive.nytimes.com/www.nytimes.com/library/tech/99/11/cyber/education/03education.html
Australia stripped its embassy in Moscow down to a skeleton staff in preparation for the expected collapse. Russia spent only $200 million on Y2K preparation and had no problems.
https://www.theguardian.com/business/2000/jan/09/y2k.observerbusiness
Here’s a fairly balanced retrospective summary
https://erp.today/did-software-wolves-cry-bug-in-y2k/
John Q 12.01.24 at 3:25 am
As regards software, I wasn’t making any grand claims about software in general, just saying that energy management software (such as that associated with the Energy Star program) has improved the efficiency of energy use by computers. Hopefully, that’s uncontroversial.
bad Jim 12.01.24 at 3:47 am
Two minor comments about energy use in LLM or generative AI: first, that graphics processors are being used because of their massive parallelism, but their floating point capability is unnecessary and wastes a large fraction of their power; designs are in the works for more efficient, integer-only versions.
Second, that the massive effort of scraping the universe of text has already been accomplished, and perhaps cannot be duplicated because AI-generated text is rapidly increasing its volume, diluting its useful content. Past performance may not be indicative of future results.
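Roughly what “integer-only” means in practice, as a sketch (hypothetical C; real accelerators do this in hardware, with far more care about scales, zero points and saturation): quantise the weights and activations to 8-bit integers once, do the bulk of the multiply-adds in integer arithmetic, and rescale with a single float multiply at the end.

```c
/* Sketch of integer-only inference: quantise float weights/activations to
 * 8-bit integers once, then do the heavy arithmetic (the dot products)
 * entirely in integers.  Hypothetical toy code, for illustration only. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define N 4

/* Map a float to an int8 given a fixed scale factor. */
static int8_t quantise(float x, float scale)
{
    return (int8_t)lroundf(x / scale);
}

int main(void)
{
    float w[N] = {0.50f, -0.25f, 0.75f, -1.00f};   /* "weights"     */
    float a[N] = {1.00f,  2.00f, -0.5f,  0.25f};   /* "activations" */

    float w_scale = 1.0f / 127.0f;  /* assume |w| <= 1 */
    float a_scale = 2.0f / 127.0f;  /* assume |a| <= 2 */

    int8_t wq, aq;
    int32_t acc = 0;
    float exact = 0.0f;

    for (int i = 0; i < N; i++) {
        wq = quantise(w[i], w_scale);
        aq = quantise(a[i], a_scale);
        acc   += (int32_t)wq * aq;   /* integer multiply-add */
        exact += w[i] * a[i];        /* float reference      */
    }

    /* One float multiply at the end rescales the integer result. */
    printf("float dot product:  %f\n", exact);
    printf("int8 approximation: %f\n", acc * w_scale * a_scale);
    return 0;
}
```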
Thomas P 12.01.24 at 4:24 pm
The Y2K scare was also used by IT-departments to get funding to replace legacy systems that needed replacement anyway, but where it was hard to convince management to cough up the needed money.
bad Jim, there are even attempts to go back to analog computing for neural networks and that could save orders of magnitude of energy at least in some applications.
Laban 12.01.24 at 4:45 pm
re efficiency of energy use, at the client end you used to have “dumb” terminals, but with CRT screens using a fair bit of power, and mainframes needed special power supplies, cooling etc. I remember the bang when a digger cut through one and everything went dark on site.
OTOH pretty much everyone has a computer on their desk now, as well as the giant server farms running pretty much everything.
Though when I try and find which uses the most energy, I discover that
a) the mainframe is still with us
b) it’s being marketed as more energy efficient than the server farm
https://www.bmc.com/blogs/mainframe-sustainability-green-it/
(nb – BMC do mainframe software)
engels 12.01.24 at 6:29 pm
As a devotee of trolling I would love to see a condescending anthropology study that treated all the Y2K bug-fixing efforts as a form of hi-tech millenarianism (“Y2K: A Doomsday Cult in the Digital Age” forthcoming from Yale…)
ozajh 12.03.24 at 3:48 am
Thomas P,
The Y2K scare was also used by IT-departments to get funding to replace legacy systems that needed replacement anyway, but where it was hard to convince management to cough up the needed money.
That is indeed true, and in fact one accusation I would accept about Y2K is that it caused (or at least exacerbated) the tech-driven recession starting in 2001. So much money had been spent on new equipment and software in the late 1990s that companies had over-leveraged, assuming unreasonable growth rates were going to keep going forever.
John Q,
The ERP summary is quite close to my own views. I would, however, note that the Cambridge Professor decided in 1996 to buy ‘a secluded house in the country with a wood burning stove and well’, which puts my own decision to have $1,000 on hand in just-in-case cash completely in the shade.
I also note his 1999 statement that ‘concerns we had last year about the millennium bug turned out to be misplaced once we examined the relevant systems in detail‘. Well yes, examining the relevant systems in detail was by far the greater part of what the team I worked in did, and it took a lot of time and effort.
And all plaudits to you for pushing to replace old systems rather than remediating them. Sometimes we did that too, but there were cases where the team knew that a little more money (or the same amount, or perhaps even less) would get us a new system rather than a patched-up old one, yet we couldn’t get the case through the acquisition hoops.
Goneski 12.03.24 at 5:35 pm
Getting a big unsubscribe from me for claiming Y2K is some sort of hoax or conspiracy
engels 12.03.24 at 6:20 pm
Speaking of hi-tech scams this is also pretty funny:
https://www.theguardian.com/uk-news/2024/dec/03/500m-bitcoin-hard-drive-landfill-newport-wales-high-court
afeman 12.04.24 at 1:58 pm
There was in the past couple of years an update to software used by the US National Weather Service – IIRC the two-digit problem was turning up in the processing of legacy data from a century prior. Nothing catastrophic or critical, but a real problem. This, some twenty years after Y2K.
The panic was overblown in interesting ways – these looming crisis points seem to provoke millenarian tendencies in people who imagine they will get the society they always fantasized about, from Mad Max to Dawn of the Dead to Kunstlerian urban revival. But as a real problem the comparison to the ozone layer is apt.
G. Branden Robinson 12.05.24 at 1:20 pm
I want to push back on some of nicopap’s claims.
“First and most obvious [misconception in JQ’s post] is that software became more efficient with time. The contrary is true.”
Both this statement and its negation are not even wrong. Everything depends on what sort of software you’re looking at and how you define “efficiency”.
“Many optimization techniques widely used in the 80s are now barely known.”
That is not because they’ve been forgotten, but because for the most part, they moved. Back in the 1970s and 1980s, compilers (and interpreters) were much simpler. You’d build an abstract syntax tree of the input source language, map each node to something approximating a “statement”, and, roughly speaking, crank out the most straightforward and obvious set of 1 to n assembly language instructions you could for each such statement. Maybe if you knew the processor manual really well, you’d throw in some efficiency tricks, or put in some special cases, like using a shorter branch instruction if the destination was “close”.
But it has been a long time, 30+ years, since programmers of high-level languages needed to worry about loop unrolling, common subexpression elimination, or tricks like that. Did their code get less efficient? Not really. Compilers know more and take care of that sort of thing for you. Perversities like Duff’s Device, as clever as they were in their time (and as reliant on insider knowledge of compiler internals on the platform in question), are no longer necessary. And doing “clever” optimizations at the source level is now widely recognized to be dangerous. Source code is not the place for cleverness if you can avoid it. Source code needs to be CLEAR. Far more money is lost on software that is improperly specified or incorrectly implemented than on software that is insufficiently tuned to milk as much performance as possible out of the (these days, possibly virtualized) hardware running it.
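A toy illustration (not from any real code base): the clear version below is what you should write today; the hand-unrolled version is the kind of thing that used to be worth writing by hand, and that an optimising compiler (gcc or clang at -O2 and above) will now typically produce, or better, from the clear version on its own.

```c
/* Toy illustration: sum_clear is what you should write; sum_unrolled is
 * the kind of hand-optimisation that used to be worthwhile but is now
 * better left to the compiler. */
#include <stddef.h>

long sum_clear(const int *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

long sum_unrolled(const int *a, size_t n)
{
    long total = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {          /* four additions per trip */
        total += a[i];
        total += a[i + 1];
        total += a[i + 2];
        total += a[i + 3];
    }
    for (; i < n; i++)                    /* leftover elements */
        total += a[i];
    return total;
}
```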
When it comes to optimization, stuff got complicated really fast once processor caches became popular in workstations and eventually home computers. Sometimes optimizing for time was not the best call, as with loop unrolling, if what was actually hammering your performance was getting evicted from the instruction cache. In those cases, you want to optimize the code for space (code size, measured in instructions), not time. You then get the time win anyway because your procedure fits in cache.
Until it gets evicted anyway, because we don’t run MS-DOS anymore but multiprogrammed machines. And we did that even before most of us were running multiple cores. Things are now more complicated still: we have NUMA, non-uniform memory access, and multiple layers of caches. Some are allocated to one core, and some are shared by many. Maintaining cache coherence is now a major resource management concern.
I’ll also mention briefly the double-edged swords of processor pipelining (which led to the infamous “branch delay slot” of some RISC architectures) and branch prediction (where your CPU executes instructions from a branch that it might end up NOT taking, and then have to throw away and roll back all the changes to internal state that taking the incorrect branch made).
The only way to tune for efficiency these days is to empirically test specific cases you care about.
In some sense nearly all of the technologies hardware vendors have innovated in pursuit of speed are inefficient. But the days of artisanal programming are long gone: the days when your CPU didn’t even have multiple privilege modes, and lacked a hardware multiplier or block copy instructions, so that every single instruction executed in a fixed number of clock cycles; when your memory was about as slow as your processor, so you never needed wait states and you knew exactly when fetches and stores to RAM would happen; when you didn’t even really need a real-time clock, because your CPU was absolutely deterministic.
That was efficiency, by one definition, and with it you could achieve feats like the Atari VCS game console, which was so simple and predictable that you didn’t need a CRT controller: you drove the electron gun of the TV set directly with software.
“There is a running joke in software development that says that software expands to use available memory and CPU cycles.”
I don’t think this says very much. When a problem is too big for the resources you have available, you don’t solve it with those resources. When you get more resources, you try again. For general applications, single-precision floating point arithmetic isn’t quite good enough: fitting a float into 32 bits was an attractive prospect on machine architectures of the 1970s/early 80s but it was just a bit too crude. (It was already a whole TWO memory fetches on a 16-bit bus!) Rounding errors accumulated too fast. 64-bit doubles give you 53 bits of precision instead of 24, and a lot of everyday problems just go away. As with motion pictures and the persistence of vision, some quantitative advancements enable qualitative thresholds to be passed.
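A toy demonstration of the rounding-error point: add 0.1 a million times in both widths.

```c
/* Toy demonstration: add 0.1 a million times.  With a 24-bit significand
 * the float total drifts visibly; with 53 bits the double total is still
 * 100000.00 to two decimal places. */
#include <stdio.h>

int main(void)
{
    float  f = 0.0f;
    double d = 0.0;

    for (int i = 0; i < 1000000; i++) {
        f += 0.1f;
        d += 0.1;
    }

    printf("float  total: %.2f\n", f);   /* noticeably off 100000 */
    printf("double total: %.2f\n", d);   /* 100000.00             */
    return 0;
}
```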
“The only reason computers do not consume a significantly larger amount of energy today is that, alongside the increases in computer speed, came efficiency increases of staggering amounts.”
The word “only” here seems to give a hostage to fortune.
“That was the famous Moore’s law, which probably already came to an end last decade.”
That, I agree with. We’ve hit the wall with Moore’s law, and further advances in performance will have to come from even greater parallelism, or maybe quantum computing, or something that’s not on my (or many others’) radar. There is a widespread belief that parallel programming is inherently too difficult for human brains. Alan Kay would differ; he claimed to teach young children parallel programming with ease using the Actor Model. I do think that process calculi seem to be pretty esoteric to most working software engineers. I had to go to work at a research lab to meet a lot of people who were familiar with them. In software engineering at Fortune 50 companies, CS research seems to be useful mainly as a source of buzzwords your manager can use to run a snow job on the promotion committee. So the problem may really be that many of today’s professional programmers are hidebound, incurious, and careerist. The human brain in general isn’t the problem. Just theirs.
Alex SL 12.05.24 at 9:12 pm
G. Branden Robinson,
It is possible that two different levels of efficiency apply here. I am not a professional programmer, and I found your explanation of developments in compiler performance very enlightening.
But I would find it difficult to accept an argument that Windows or MS Office of today are programmed more efficiently relative to the available processing power and storage than they were in 1992, when I first started using them. Their performance then was smooth, and now it just isn’t. When I contributed to teaching at a university, it would sometimes take Windows 8 twenty minutes of swirling dots animation to log me into the network, which wasn’t conducive to being able to show my lecture slides. Windows Vista booted up so slowly that I could have a coffee in the meantime. Or take SuccessFactors, to pick an example of the various bits of corporate software I have to use. It takes a shocking amount of time to even start up or react to anything given that all it effectively does for me is provide a few website-like interfaces in which to query minuscule amounts of data (what staff ID does this team member have?) or enter minuscule amounts of data (please extend the affiliate contract to that date, or book me into this training course).
I would understand if an argument could be made that this is simply how slowly software has to work when interacting with all of the data stored far away in the cloud, but I am continually using web applications that return me tens of thousands of data points from remote databases of tens or hundreds of millions of data points, with various data filters, table and map displays, and even analysis functions, and they work an order of magnitude faster. There are layers upon layers of inefficiency in contemporary corporate software that result from prioritising quick coding over end-user experience, partly because the end user generally has no input into the purchasing decisions of their high-level managers.