by John Q on September 28, 2018

This is an extract from my recent review article in Inside Story, focusing on Ellen Broad’s Made by Humans

For the last thousand years or so, an algorithm (the word derives from the name of the Persian mathematician al-Khwarizmi) has had a pretty clear meaning — namely, it is a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to around 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.
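Euclid’s algorithm illustrates the traditional sense of the word: a few unambiguous steps, with a provably correct result. A minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    by (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a
```

Anyone, human or machine, who follows these steps gets the same verifiable answer; for instance, the greatest common divisor of 48 and 18 is 6.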

As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made those limits irrelevant; indeed, the mechanical nature of the work made executing algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.

Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed.

A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.

Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.

Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.

The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.

Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.



SusanC 09.28.18 at 11:09 am

With most traditional algorithms, you knew what it was supposed to be computing, and in many cases you could even provide a mathematical proof that it did in fact compute the thing it was supposed to be computing. E.g., you could prove a sort algorithm really does return its results in sorted order.

Something like a Support Vector Machine is almost an algorithm in that sense, in that it finds a hyperplane between two sets of points that meets certain criteria. But typically you’d use an SVM to make a prediction about a point whose classification is unknown, based on points whose classification is known. That involves a whole lot of much more uncertain stuff, and your guarantees of correctness are much weaker (by which I mean, pretty much non-existent).
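SusanC’s two-phase picture can be made concrete. The sketch below uses a perceptron rather than an SVM (it finds some separating line, not the maximum-margin one), but the structure is the same: a well-defined fitting procedure, followed by predictions on unseen points that come with no correctness guarantee.

```python
def train_perceptron(points, labels, epochs=100):
    """Find a separating line w0*x + w1*y + w2 = 0 for 2-d points
    with labels in {-1, +1}; converges if the data are separable."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        updated = False
        for (x, y), label in zip(points, labels):
            if label * (w[0] * x + w[1] * y + w[2]) <= 0:
                w[0] += label * x
                w[1] += label * y
                w[2] += label
                updated = True
        if not updated:  # every training point is correctly classified
            break
    return w

def predict(w, point):
    """Classify an unseen point: nothing guarantees this is right."""
    x, y = point
    return 1 if w[0] * x + w[1] * y + w[2] > 0 else -1
```

The fitting step is provably correct for separable training data; the prediction step is exactly the “much more uncertain stuff”.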


NomadUK 09.28.18 at 11:33 am

The meaning of the word hasn’t changed. You’ve got illiterate dweebs and marketing weenies misusing it. Engineers, especially software engineers, are often the most poorly read and worst writers I run across.


JimV 09.28.18 at 1:25 pm

“Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers.”

Counterexample: AlphaGo. Version three learned its decision-making procedures by playing itself. The choice to use neural networks was made by the designer, but it was based on natural processes, not human prejudices.

(I agree there is a lot of bad AI stuff. Just like everything else developed by humans. There is also some good stuff.)

Language evolves. Is there a better word for “decision-making procedure” than algorithm? Does a procedure have to work infallibly to qualify as an algorithm, or can there be bad algorithms? The best general procedure I know of is trial-and-error plus memory (evolution). It doesn’t always work, but without it we wouldn’t be here.
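JimV’s “trial-and-error plus memory” can be sketched as a hill climber on an invented toy problem (evolving a string towards a target word), where memory is nothing more than keeping the best attempt found so far:

```python
import random

def hill_climb(score, start, mutate, steps=5000):
    """Trial and error plus memory: try a random mutation,
    keep it only if it beats the best result remembered so far."""
    best, best_score = start, score(start)
    for _ in range(steps):
        trial = mutate(best)
        s = score(trial)
        if s > best_score:
            best, best_score = trial, s
    return best

# Toy problem: evolve a string towards "algorithm".
TARGET = "algorithm"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(LETTERS) + s[i + 1:]
```

It doesn’t always work (a climber can stall on harder fitness landscapes), which is JimV’s point: the procedure is fallible but remarkably general.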


Orange Watch 09.28.18 at 2:22 pm

On the legal and moral problems of the statistical side winning the war for the soul of AI:


P.D. 09.28.18 at 2:47 pm

I agree with the upshot “that AI techniques can obscure human agency but not replace it”, but I disagree about the meaning of the word algorithm.

An algorithm is, to take one definition, “a specific set of instructions for carrying out a procedure.” It’s helpful to distinguish the procedure (in the sense of the task you are trying to accomplish) from the instructions (in the sense of the path taken to accomplish the task). That distinction is helpful even when the task isn’t a formally defined mathematical function, because it highlights two different ways that algorithms can go wrong: The instructions might fail to accomplish the task, and the task might be the wrong thing to be aiming for.
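P.D.’s two failure modes can be shown with an invented toy example: reporting a “typical” income. The instructions can be wrong for the task (a buggy mean), or the instructions can be right while the task itself is the wrong aim (for skewed incomes, a median arguably serves better than a mean):

```python
def mean_buggy(xs):
    """Wrong instructions: an off-by-one bug in the divisor."""
    return sum(xs) / (len(xs) - 1)

def mean(xs):
    """Correct instructions for the task 'compute the mean'."""
    return sum(xs) / len(xs)

def median(xs):
    """Arguably the right task when incomes are skewed by outliers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
```

For incomes like [30, 35, 40, 45, 1000], the correctly computed mean (230) is faithful to its instructions yet wildly unrepresentative; the median (40) is arguably the better target.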


Trader Joe 09.28.18 at 3:46 pm

In bond and equity market trading there isn’t a minute of market time when algorithms (“algos”, we call them) aren’t making trades. For these, P.D.’s definition of “a specific set of instructions for carrying out a procedure” is a good one.

Literally thousands of these are running every day, all the time, on different trading desks. Some work well, some don’t. Some say “buy” something if XYZ happens; others say “sell” something if XYZ happens. A measurable portion of every day’s trade is simply one machine trading with another machine.

In the market, we run these because they can recognize what I’ll call ‘situations’ faster than humans can, and accordingly can instantly execute multiple trading instructions faster than a human can put down his cup of coffee and refresh his screen.
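A toy version of the “buy if XYZ happens” rule, purely illustrative (real trading systems are vastly more involved, and the threshold here is invented):

```python
def signal(price: float, moving_avg: float, band: float = 0.02) -> str:
    """Toy mean-reversion rule: buy when price sits well below its
    moving average, sell when well above, otherwise do nothing."""
    if price < moving_avg * (1 - band):
        return "buy"
    if price > moving_avg * (1 + band):
        return "sell"
    return "hold"
```

A machine can evaluate a rule like this on every tick, which is the entire appeal.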

In my view it’s a fine line whether these algos are replacing or merely obscuring agency. While it’s 100% true that the algos reflect the programming of the designer, when multiple algos are running side by side there’s a degree to which these things take on a life of their own, owing to the multitude of simultaneous consequences of so many market participants trying to do the same sorts of things at the same time. It’s why markets sometimes require us to turn off the programs before they create an uncontrollable level of volatility.

I’ll save the really scary stuff for closer to Halloween. Suffice to say, there is a lot of sausage involved with even the best algo.


Orange Watch 09.28.18 at 4:34 pm


Counterexample: AlphaGo. Version three learned its decision-making procedures by playing itself. The choice to use neural networks was made by the designer, but it was based on natural processes, not human prejudices.

That’s not actually a counterexample; it merely seems like one because Go is a narrowly defined but well-defined problem. The designers still performed feature selection and defined the solution, even if it seems like they didn’t, because of course they chose “winning” as the solution and selected features that measured winning, instead of choosing “make a pretty design” and selecting features that measured how pretty the design was.

Since most problems are not as well-defined as Go, nor is the measurement of any given feature as objective and unambiguous as it is in Go, the usefulness of the analogy for broader ML applications breaks down immediately.


Matt 09.28.18 at 4:48 pm

Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed.

I don’t think that the meaning has changed. One big change has been the availability of more data to train on. Going from small corpora of newspaper articles to billions of words from automatically retrieved Web pages has made huge improvements in the performance of the very same algorithms.

Another change has been the availability of more and cheaper computing power. A subtle point here: one of the most important effects of cheap computing power is permitting low-cost exploration of different approaches. When I first read the word2vec paper, I was struck by how much of it seemed comfortably familiar. If someone hid the authors and citations and told me that it was written at AltaVista in 1999, I would have believed them. This approach to natural language processing requires little in the way of precursor techniques invented after the 1990s, and it seems computationally feasible by 1990s standards too. But you would have had to run it on pretty expensive 1990s hardware to get results in a reasonable amount of time. So the general idea-space of this approach wasn’t explored much until the present decade.

The original word2vec is based on a shallow neural network. Later approaches to the same task of creating word vectors (fastText, GloVe) actually use even simpler algorithms and get even better results with the same training data. The main thing that makes them somewhat opaque is the same thing that makes them effective: those huge training corpora, containing more text than a human could read in a thousand lifetimes. Finally, that harvested-from-the-wild textual training data is also how to make a racist AI without really trying.
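The opacity Matt describes lives in the learned vectors, not the math, which is simple. A sketch with invented 3-dimensional vectors (real word2vec or GloVe vectors have hundreds of dimensions, learned from those huge corpora):

```python
import math

def cosine(u, v):
    """Cosine similarity: the workhorse for comparing word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy vectors, not real word2vec output.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}
```

Words used in similar contexts end up with similar vectors; any bias in those contexts, racist associations included, ends up baked into the geometry.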

Other changes since the 1970s:

Much more publish-or-perish pressure on researchers, and an implicit bias that negative results are not noteworthy, lead to things like the dubious “sexual orientation from photographs” paper. This also affects the physical sciences, psychology, and other fields. There are short-term rewards for “fooling yourself” (and your reviewers) by failing to robustly challenge your own findings. This happens inside companies, too, when a team has to show the rest of the company that what they spent the last few months on was a good use of time, even if it turns out it wasn’t.

There is a dearth of science and technology reporting for the educated non-specialist. New Scientist circa 1980 might as well be a completely different publication from the 2018 edition. The influential computing magazine Byte, in the 1970s, wrote about algorithms, programming languages, and even the circuits that implement digital logic. But you didn’t need a specialized education to read it either. There has been a hollowing out of the middle: you either accept a cartoon-schematic level of reporting that is almost certainly wrong as well as limited, or you dive in with Google Scholar and sci-hub to try to digest a field’s own academic literature.

There are actually some blogs that still occupy this “for the educated non-specialist” ground. But they typically focus on narrower topics than the old print publications did. And they’re not as easy to find.


JonD 09.28.18 at 6:19 pm


AlphaGo can avoid the issue because there is an objective, known set of rules to judge success or failure within Go and the game can be simulated perfectly. Outside of board and video games those conditions don’t apply. Using real data to train a machine learning/AI system will lead to biases based on the choices made in selecting and cleaning the data and the biases of the existing human system. If you use simulation to train the machine learning/AI system you’ll have biases based on the choices made in designing and validating the simulation.

Even with perfect data or a perfect simulation, you would still have human choices to make in deciding the objective of the system. Do you care more about false positives or false negatives? If there are multiple components of the objective, what are their relative weights? How do you put the expected number of years in prison the falsely imprisoned will serve into the same units as the expected number of innocent victims of criminals who were set free?
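JonD’s point about weighing the objective can be put in miniature. The error rates and cost weights below are invented; the punchline is that which classifier is “better” flips with the weights, and choosing the weights is a human, moral decision:

```python
def expected_cost(rates, fp_cost, fn_cost):
    """Combine the two error types under explicit cost weights."""
    fp_rate, fn_rate = rates
    return fp_rate * fp_cost + fn_rate * fn_cost

# Two hypothetical classifiers: (false positive rate, false negative rate).
clf_a = (0.20, 0.05)  # flags many people who would not reoffend
clf_b = (0.05, 0.20)  # misses many people who would
```

If a false positive is weighted five times worse than a false negative, B comes out ahead; flip the weights and A does. No amount of training data settles which weighting is right.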


Z 09.28.18 at 7:28 pm

Just a pedantic note, but on a topic that is dear to me: Al-Khwarizmi is not an Arab mathematician, precisely.


mpowell 09.28.18 at 9:27 pm

This is absolutely correct and makes a very good point on models vs algorithms. The use of “algorithm” to describe machine learning is partly just incidental, but it also has some marketing purpose (mainly for the benefit of customers or investors). Overall, what people are doing with machine learning isn’t bad, per se, but it would be a lot healthier if people didn’t view it as some new form of magic, even though it has all the same potential downfalls as traditional statistical modeling tools and for many problems offers nothing new in comparison. People who suffer real consequences from making this mistake (in finance, for example) will avoid it, but there may be plenty of applications where the lesson will be learned the hard way and slowly. The problem is that when you look at image recognition or similar tasks, it really does seem like magic.


bianca steele 09.28.18 at 10:04 pm

I agree with JimV and P.D. but think that horse has left the barn. An algorithm is now “anything done by a computer that can’t be explained in five simple sentences or two if-then statements,” and the person who comes up with it is definitionally a “programmer.” Popularity voting, for example, is an algorithm, if the act of voting appears to be some other act, like purchasing, and popularity is described as “quality,” and it happens on computers. And we have the failures of engineering education to blame for the fact that anyone thinks sales volume is a measure of quality, or at least for no one standing up in that meeting and saying, “Hey, is this really a good measure? Because I think . . . ,” which no developer has said, ever.

I’m not even sure it’s a bad thing anymore. Maybe blaming the programmers will have a good effect somewhere down the line. Maybe everyone who reads the book and has some influence on actions taken will think twice and realize that’s a simplistic framing of the problem.

(Personally, I’ve seen problems where we tried to break some aspect of customization down into discrete scalar measures and just could no longer explain how they interacted in a way that a human being could understand, and probably should have called it a “heuristic” or an “algorithm” and stopped trying to document it.)


Collin Street 09.28.18 at 10:50 pm

With most traditional algorithms, you knew what it was supposed to be computing, and in many cases you could even provide a mathematical proof that it did in fact compute the thing it was supposed to be computing. [snip]
That involves a whole lot of much more uncertain stuff, and your guarantees of correctness are much weaker (by which I mean, pretty much non-existent).

The problem is, computer science courses don’t include any epistemology, and software eng is if anything somehow worse. Doesn’t matter if you’re working off a provided spec, but if your task is to mentor a self-developing system it kind of puts you behind the eight-ball.

Training of computer programmers is a known trouble area.


A Erickson Cornish 09.29.18 at 12:34 am

Al-Khwarizmi was Persian, not Arab.


bianca steele 09.29.18 at 1:34 am

I don’t mean to suggest Broad’s own argument is simplistic. I have a couple tabs open with reviews of it and other books and a quick skim suggests the opposite. “Designers,” too, doesn’t necessarily mean programmers alone.


robo_friend 09.29.18 at 3:22 am

There are a number of worrying examples of how machine-learning-based systems have replicated human biases. But we should bear in mind that our alternative for the vast majority of these tasks, if we are not comfortable with the ML-based model and its trained societal biases, would be a human and its trained societal biases.

There are two major categories of error in many of these thorny decisions: systematic biases and random errors. Random errors are the sorts of mistakes we humans make when hungry or tired before lunch, or when we had a bad night’s sleep, or an especially good night’s sleep that has us super optimistic about everything! At least, many machine learning models show a significant reduction in random errors, which can still make them better than the typical racist, sexist, classist human. And we’re still fumbling toward best practices that will help reduce the systematic biases.
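A toy simulation of that decomposition, with invented numbers: give a model and a human judge the same systematic bias, but give the human more day-to-day noise, and the model comes out ahead on average error despite being exactly as biased:

```python
import random

random.seed(0)
true_risk = [random.random() for _ in range(1000)]

BIAS = 0.1  # shared systematic bias, e.g. inherited from skewed data

def judge(r, noise_sd):
    """A score = truth + shared bias + judge-specific random noise."""
    return r + BIAS + random.gauss(0, noise_sd)

human_scores = [judge(r, 0.30) for r in true_risk]  # noisy human
model_scores = [judge(r, 0.05) for r in true_risk]  # consistent model

def mse(scores):
    """Mean squared error against the true risks."""
    return sum((s - t) ** 2 for s, t in zip(scores, true_risk)) / len(scores)
```

The model’s total error is lower, but the shared bias survives intact; reducing noise does nothing to fix it.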

My belief about these fancier ML models is that they look rather like linear regression did in the 1960s. The math was reasonably straightforward, and computers had the ability to run it at greater scale than ever before. But theoreticians had not yet thought through all the ways these calculations could go wrong when run with poorly specified assumptions or bad data. It took a couple more decades for the diagnostics and best-practices research to catch up with the technology. Even to this day, we’re still working to disseminate and inculcate those best practices among the countless researchers using regression techniques.

The fancier ML techniques that have just started to become available in (relatively) easy-to-use packages will probably still see a lot of misuse over the next couple of decades, until the new best practices and diagnostics (which, to my knowledge, are only just developing) mature and become widespread.


Gareth Wilson 09.29.18 at 5:48 am

For a simple example, what if human police officers never carried guns, but were followed around by lethally armed robots, judging when to fire by a machine learning model? Sounds horrific to most of us, but I imagine there would be some who would see it as an improvement.


soru 09.29.18 at 8:50 am

Machine learning techniques essentially use an algorithm to create a model. As such, they combine some of the properties of algorithms and some of models, without it being true that they are either or both.


Fake Dave 09.29.18 at 9:24 am

robo_friend at 16 raises a good point: we have to compare AI fairly to human cognitive capacities, and acknowledge that we often come up short in ways that are the same as, or analogous to, the ways machines do. People often say machines are dumb for doing things that dumb humans do all the time. On the other hand, we have to remember that human beings are part of a society that actively shapes our thinking and applies subtle (or not so subtle) correctives when we start going down cognitive dead ends. We are forced by our social natures into an awareness of who we are and how we think (the “looking glass self” and so on), which grants us the capacity to act as our own “programmers” in a way machines can’t (at least until we learn to simulate robot peer pressure).

There will be a lot of jobs where the human capacity for self-correction is irrelevant or actively counterproductive, but I also think the inability of AI to take that step back and examine their own behavior will keep them from ever developing anything close to what we’d call common sense. Common sense is both biased and irrational, but spend some time around someone who doesn’t have enough of it (or perhaps imbibed the wrong substances), and you start to appreciate it a bit more.


Mike Huben 09.29.18 at 11:42 am

For a closely related or overlapping issue, see Algorithmic Prison.


Patrick S. O'Donnell 09.29.18 at 11:59 am

While it is true that al-Khwārizmī was born Persian, his remarkable contributions to mathematics (the term algebra comes from the title of one of his books), geography, astronomy, and cartography were all written in Arabic (and soon translated into Latin). He was also responsible for major treatises in these fields being translated into Arabic.


Bill Benzon 09.29.18 at 12:25 pm

It’s my impression that the meaning of “algorithm” has been generalized and popularized for two or three decades. Same thing with “recursion” and, somewhat differently, “deconstruction” (most of what passes for deconstruction these days is not what Derrida had in mind). This seems to be a common process.

A usage I find particularly annoying is talk of “the evolutionary algorithm”, which is NOT a phrase that’s applied to computing procedures that are evolutionary in principle. It’s applied to biological evolution and characterizes it as some kind of mechanical procedure. That, it seems to me, is at best unhelpful. I know Dan Dennett uses the phrase a lot and for all I know he may have been the one to coin it. I also have a vague memory of him complaining one time that mathematicians use the term “algorithm” too strictly. I’m pretty sure that when Dennett uses it he means nothing more than some kind of computational procedure, if that.


JimV 09.29.18 at 3:24 pm

“Since most problems are not as well-defined as Go, nor is how one actually goes about measuring given features objective and unambiguous as in Go, the usefulness of the analogy for broader ML applications breaks down immediately.”

The point as I see it is that humans didn’t make the choice of where to place the next stone in Go, or even the rules for making that choice. They just set the objective: win the game. The complaint in the OP was about algorithms which don’t meet their objectives due to human prejudices/errors/oversights getting in the way. The fact, if it is one, that not all (or most) current problems are easy to apply that process to does not make a counterexample not a counterexample, at least in my semantics.

On the evolutionary algorithm: I’ve never read Dennett. The idea is my personal reaction to those who claim that human design work and human intelligence are something magical which no machine could reproduce, whereas in my experience design work is mostly memory (rules developed previously by trial and error) and, where the old rules don’t apply, trial and error. It happens to be the same process as biological evolution (albeit the mechanisms are different; for me a bubble sort done in FORTRAN or with pencil and paper is the same algorithm with different mechanisms). It is the process AlphaGo used to develop its Go strategies, although I am sure the developers didn’t call the process “the evolutionary algorithm”. (What’s in a name?) Of course no creationist or dualist will agree with me, but I haven’t heard a good argument against it.

(Sorry, I guess I like to argue fine points on the Internet. If there were a Donations Button at this site I would pay a fine for arguing. In lieu of that, I’ll make another donation to IRC–done.)


Eszter Hargittai 09.29.18 at 5:01 pm

There are so many books out now about the social aspects of algorithms! It’s really hard to keep up so thanks for reviewing the three here. I’m not sure what new angles the newer ones are adding to what is already out there. I realize they were likely being written simultaneously so I’m not faulting the authors for that, I’m just finding it very hard to figure out which ones will add to what’s already been covered in the others available. It’s also refreshing to see books from outside the US. I’ve found in my teaching that the US-based books require a lot of translation outside the US since many of their examples depend on detailed knowledge of crazy US-based phenomena (like for-profit universities, insane health care, policing). I’m teaching a class on the social aspects of algorithms for the third time and am always on the look-out for what new material is out there on the topic.


Orange Watch 09.29.18 at 6:44 pm


The point as I see it is that humans didn’t make the choice on where to place the next stone in GO, or even the rules for making that choice. They just set the objective: win the game. The complaint in the OP was about algorithms which don’t meet their objectives due to human prejudices/errors/oversights getting in the way.

This misses the point. In none of the algorithms of the sort referenced in the OP are humans choosing intermediate steps to reach the defined solution. That’s not where human bias is being introduced, because by the time the program defines its criteria for advancing towards the goal, all human interaction in the process was already completed. Bias enters via solution definition, but more importantly, it also enters during feature selection. The step where the computer – bereft of human decision-making – defines how to weigh relevant features and select a solution is common to statistical methodologies, and is not where bias is entering the system. Bias enters the system before the computer is turned loose to generate its model.

Humans did not merely pick the solution for AlphaGo (“win according to this definition of the rules of Go”), they also selected the features that they deem relevant for the computer-produced model to consider (board state, prior board states, whatever). The feature selection period was trivial because it’s an extremely well-defined problem about a simple, abstract system with very limited variables and inputs… but this is still feature selection.

And that’s really the whole point: AlphaGo isn’t a counterexample, it’s just a poor analogy for most ML problems because of how abstract, rigid, and well-defined the universe of possible features and solutions are. The biased programs use models no less computer-generated than AlphaGo, but unlike AlphaGo there is not a well-agreed-upon set of features (or possibly even well-agreed-upon metrics to measure some selected features) that have been determined to be relevant to the (also-possibly-poorly-defined) solution that the computer seeks to reach via the model it creates.


Bill Benzon 09.30.18 at 12:13 am

@JimV: I wasn’t thinking of your #3 when I used the phrase, “evolutionary algorithm”. I was only thinking of those who used the phrase to characterize biological evolution, Dennett in particular. Whether or not the development of AlphaGo was “evolutionary” in some interesting sense is not my concern.

Here’s the Wikipedia article on actual evolutionary computation:


John Quiggin 09.30.18 at 12:48 am

In the case of games, an algorithm in the traditional sense would be a procedure guaranteed to yield a given outcome (normally win or draw). (Nearly) every child learns such an algorithm for Tic-Tac-Toe. The biggest solved game (according to Wikipedia) is checkers, which was solved using powerful computers but not by AI in the sense I understand the term.

AlphaGo’s playing rules aren’t an algorithm in the traditional sense since there is no guaranteed outcome. All we can say is that, in practice, AlphaGo reliably beats human players. But it’s obviously possible that a better computer, or even a freakishly good human player could beat AlphaGo.
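The Tic-Tac-Toe case can be made precise with minimax over the full game tree: an algorithm in the traditional sense, because exhaustive search provably computes the game’s value (+1 if X can force a win, -1 if O can, 0 if best play draws):

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` with `player` to move, by exhaustive search,
    so the answer comes with a guarantee."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = None
    return max(values) if player == "X" else min(values)
```

From the empty board this returns 0: the child’s strategy really does guarantee at least a draw. Nothing comparable exists for Go, whose game tree is astronomically larger, which is why AlphaGo’s playing rules carry no such guarantee.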


John Quiggin 09.30.18 at 5:46 am

@10, @ 14 Thanks! I love getting corrections like this, so keep them coming even at the risk of looking pedantic.


Peter Erwin 10.01.18 at 1:50 pm

John Quiggin @ 27:
an algorithm in the traditional sense would be a procedure guaranteed to yield a given outcome

No, that is much too limited a definition of “algorithm”. There are many algorithms which are not guaranteed to find the correct answer, but provide good probabilities in certain cases and are thus still useful. Many “Monte Carlo” algorithms are of this type.

For example, outside of specific, carefully defined problems, no practical optimization algorithm is guaranteed to find the absolute minimum in all problem spaces. They have various trade-offs in things like speed, ease of computation, reduced probability of getting “trapped” in local minima, and so forth. They are all algorithms, even if none of them are “guaranteed to yield a given outcome.”

Even if we stick to pure mathematics, there are, for instance, several classic algorithms developed by mathematicians to solve the question of “is a given number prime or not”, not all of which are guaranteed to give the correct answer.
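The Fermat primality test is a classic instance, sketched below: certain composites (Carmichael numbers such as 561) pass it for every base coprime to them, so a “probably prime” verdict is not guaranteed correct, yet it is unambiguously an algorithm:

```python
import random

def fermat_probably_prime(n: int, trials: int = 20) -> bool:
    """Fermat test: if a**(n-1) % n != 1 for some base a, then n is
    certainly composite; if every trial passes, n is only
    *probably* prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True
```

A “False” answer is always correct; a “True” answer is a bet, which is exactly the point: useful algorithms need not guarantee their outcome.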


John Quiggin 10.02.18 at 1:38 am

@29 I agree that there’s room for dispute about the exact characterization, including the points you raise.

As usual, Wikipedia has a good summary.

But on any of the proposed definitions, the standard procedure for Tic-Tac-Toe is clearly an algorithm while (I claim) the set of playing rules used by AlphaGo is not.

Comments on this entry are closed.