Workplace Freedom: A Primer for Alan Dershowitz

by Henry Farrell on September 22, 2014

Alan Dershowitz “expresses his opinion”:https://www.insidehighered.com/news/2014/09/22/salaita-case-illustrates-two-cultures-academe-many-experts-say on academic freedom, the Salaita case, and why UIUC natural scientists appear to have been less likely than social scientists and humanities people to support Salaita.

bq. Some, including Alan Dershowitz, a Harvard law professor who backed Summers and opposed the tenure bid of Norman Finkelstein, the controversial former political scientist at DePaul University, have a more cynical take. Dershowitz said that in his experience, academics working in STEM tend, “in general, to be more objective and principled, and those in the humanities tend to be ideologues and results-oriented, and believe it’s the appropriate role of the scholar to use his or her podium to propagandize students.” Dershowitz said he believed personal opinion had influenced how those human sciences viewed both the Salaita and Summers cases, and that scientists were likelier to examine the evidence impartially. “I would bet anything that 99 percent of the people who are demanding that [Salaita] be restored tenure would be on the exact opposite side of this if he’d been making pro-Israel but equally uncivil statements,” he said.

There is a very strong case to be made against “results-oriented” ideologues in the academy, but I think that it isn’t quite the case that Dershowitz is making.

To illustrate this case, let’s turn to some relevant quotes from an article in the now defunct Harvard student magazine 01238, preserved at The Faculty Lounge.

bq. Dershowitz is, however, notorious on the law school campus for his use of researchers. (The law school itself is particularly known for this practice, probably because lawyers are used to having paralegals and clerks who do significant research and writing; students familiar with several law school professors’ writing processes say that Dershowitz reflects the norm in principle, if to a greater degree in practice.) … Several of his researchers say that Dershowitz doesn’t subscribe to the scholarly convention of researching first, then drawing conclusions. Instead, as a lawyer might, he writes his conclusions, leaving spaces where he’d like sources or case law to back up a thesis. On several occasions where the research has suggested opposite conclusions, his students say, he has asked them to go back and look for other cases, or simply to omit the discrepant information. “That’s the way it’s done; a piecemeal, ass-backwards way,” says one student who has firsthand experience with the writing habits of Dershowitz and other tenured colleagues. “They write first, make assertions, and farm out [the work] to research assistants to vet it. They do very little of the research themselves.”

I don’t recall that Dershowitz was himself quoted in the article in question; quite possibly he wasn’t asked. If he had been asked, he might very well have contested the description of his research practices that the article attributes to several of his former researchers. But imagine that a scholar in a department in the hard sciences (or social sciences or humanities for that matter), tenured or otherwise, conducted his or her research in the manner that the article attributes to Dershowitz. That scholar would deserve to be fired, regardless of whether his or her political leanings (or research findings) leaned hard left, hard right, centrist or whatever. He or she would be guilty of the most flagrant abuse of research standards. In the hard sciences, if you’re caught throwing out inconvenient data in order to justify a conclusion, you will be disgraced, and ought to be compelled to resign. The same holds in the social sciences. If you’re a humanist, and you write articles claiming e.g. that the historical sources say x, when you have carefully and deliberately omitted the sources that say not-x, again you’re likely to be drummed out of the profession.

This is academic misconduct. Put more simply, you are cheating on the job you are supposed to be doing. You are not a scholar, but a hack and propagandist. Perhaps this kind of conduct is ubiquitous in law schools (the article claims that other colleagues of Dershowitz also do this), although I personally would be surprised. Outside of academia, there may, very reasonably, be different standards. Litigators are supposed to make the best case they can under the law for their clients. However, if my understanding is correct, they too have obligations as officers of the court, not e.g. to knowingly omit relevant information or citations. It is honorable for a litigator to be a hack, but only up to a point.

Holding unpopular opinions or saying harsh things on Twitter is not academic misconduct. I personally find some of the views that Alan Dershowitz has expressed (e.g. on torture) to be repulsive and indeed, actively depraved. I wouldn’t press to have him fired for saying those things, and if his job were threatened because he had said these things I would defend his right to employment, while holding my nose. If he had reached these opinions through real academic research (rather than outsourced hackish opportunism), it would be part of his vocation: academics are supposed to follow their search for knowledge wherever it leads them. If he were expressing those opinions outside an academic context, it would be his own private business.

As per Chris, Alex Gourevitch and Corey’s broader analysis, the Salaita case is best seen as an instance of a broader phenomenon: how control over people’s employment opportunities is being used to deny them the ability to express their political beliefs. It’s in the same class as “this case”:http://www.washingtonpost.com/blogs/the-fix/wp/2014/09/11/the-gops-favorite-coal-executive-is-being-sued-over-political-contributions-heres-whats-up/ in which a foreman at a West Virginia mine was pressured to make contributions to GOP candidates (through a centralized process, in which her boss would be able to see who contributed and who did not), and alleges that she was fired when she failed to comply.

Academics, obviously, have self-interested reasons to defend against abuses of the sort that we saw in Salaita’s case. But they also should want to see these freedoms extended to the workplace more generally, even in instances where the results may seem “individually obnoxious”:https://crookedtimber.org/2008/04/22/academic-freedom-some-propositions/ to them. Employers shouldn’t have any control, express or implicit, over their employees’ political activities outside the workplace. That they do have effective control in many US states is more a hangover from feudalism than anything that is justifiable in principle in a democratic state.

{ 122 comments }

1

MPAVictoria 09.22.14 at 3:35 pm

I just know someone is going to bring up the damn Firefox thing….

2

TM 09.22.14 at 3:48 pm

Thanks for the reminder. Dershowitz deserves nothing but contempt.

3

Barry 09.22.14 at 4:22 pm

As does the author of that piece, quoting Dershowitz’s opinion, when he should be used as an avatar of political bias.

4

AF 09.22.14 at 4:24 pm

Dershowitz is wrong on Salaita and utterly hypocritical for criticizing humanists as “ideologues and results-oriented” when he is an extreme example of the phenomenon.

That said, Dershowitz’s ideological, advocacy-oriented scholarship is not “academic misconduct.” Like many other legal academics and humanists (as well as some social scientists), his scholarship is not on a scientific model but rather consists of marshaling evidence in support of a thesis (presumably inspired by previous knowledge and intellectual commitments) whose basic outline is defined before all the evidence is in.

There is no accusation that he is fabricating evidence, plagiarizing the work of others, or deliberately or negligently mischaracterizing counterevidence. The worst thing he is accused of is omitting information that a disgruntled research assistant considered discrepant. That may be shoddy scholarship but it doesn’t rise to the level of academic misconduct. Furthermore, it seems important to academic freedom to acknowledge this. If ideologically biased research constituted academic misconduct, activist-academics like Salaita would be much harder to defend.

5

TM 09.22.14 at 4:41 pm

“On several occasions where the research has suggested opposite conclusions, his students say, he has asked them to go back and look for other cases, or simply to omit the discrepant information.”

Assuming that this is true, that is clear-cut misconduct in any academic discipline.

6

David Steinsaltz 09.22.14 at 4:42 pm

I’ll just say, as a professor of statistics who is backing Salaita and the boycott of UIUC, while also objecting publicly to both the tone and the logic of Salaita’s tweets, that made-up claims about what “99 percent of the people” would do in a counterfactual world sit poorly in the same paragraph as praise for the “objective and principled” scientists. The only figure in this story who seems to have obviously changed his principles to suit his favoured political outcome is the former defender of academic freedom Cary Nelson.

For what it’s worth, I share Henry Farrell’s impression that natural scientists, social scientists, and humanities scholars all have more in common with each other than any of them has in common with the legal academy.

7

milx 09.22.14 at 4:42 pm

“I would bet anything that 99 percent of the people who are demanding that [Salaita] be restored tenure would be on the exact opposite side of this if he’d been making pro-Israel but equally uncivil statements,” he said.

He’s not wrong here.

8

milx 09.22.14 at 4:43 pm

I should add that this is certainly true of Corey who I can’t imagine dedicating equal time on CT, or his own personal blog, organizing and defending someone in the counterfactual who made impolite pro-Israel statements. Maybe the 99% figure is hyperbole.

9

Ronan(rf) 09.22.14 at 4:46 pm

There’s an obvious difference between not “dedicating equal time” and being “on the exact opposite side.”

10

DBake 09.22.14 at 4:47 pm

I should add that this is certainly true of Corey who I can’t imagine dedicating equal time on CT, or his own personal blog, organizing and defending someone in the counterfactual

But I can imagine you imagining it. So your point is refuted.

11

Phil 09.22.14 at 4:52 pm

Like AF, I’m not convinced. What we think about the guy’s thesis-driven research approach is liable to get entangled with what we think about his let-the-grad-students-do-the-reading methodology. There’s something boneheadedly arrogant about responding to a rebuff from the data by asking someone else “to go back and look for other cases, or simply to omit the discrepant information”, but if you were writing a paper on an under-documented phenomenon which you were pretty damn sure was there – say, worker discontent in un-unionised workplaces in the Southern states in the 1960s – would you feel honour bound to reference all the accounts you found that suggested that, nope, there was nothing to see there, everything was fine? Or would you go back and look for other cases, or omit the discrepant information?

12

Brett Bellmore 09.22.14 at 4:54 pm

“He’s not wrong here.”

Sure he is: If Salaita had been making similar public statements in the opposing direction, they’d never have the opportunity to take the opposite side on his unhiring. Getting unhired required getting hired in the first place, after all.

13

AF 09.22.14 at 5:04 pm

“‘On several occasions where the research has suggested opposite conclusions, his students say, he has asked them to go back and look for other cases, or simply to omit the discrepant information.’

Assuming that this is true, that is clear-cut misconduct in any academic discipline.”

No, it’s not. It depends on what the proposition was that the research assistant believed to be contradicted by the evidence, what evidence led the research assistant to believe the proposition to be untrue, and what statement ended up in the final work. There are a number of ways the RA’s statement could be true without it amounting to academic misconduct.

14

AcademicLurker 09.22.14 at 5:13 pm

As someone who works in the physical/biological sciences and supports Salaita and opposes the actions of the UIUC chancellor and Board, I have a different theory about the tepid participation of natural scientists on either side of the affair.

Scientists have their own distinct academic ecosystem and their own distinct news sources/social media/blogosphere. Relatively few scientists read The Chronicle, IHE or places like Crooked Timber. For the most part, they simply have never heard of Salaita. I’ve seen zero about the Salaita affair on the science oriented blog networks that I read regularly.

15

milx 09.22.14 at 5:20 pm

“But I can imagine you imagining it. So your point is refuted.”

The original Dershowitz argument was in the form of “a bet,” so we’re talking about speculative counterfactuals. It’s not sufficient that I can mentally conjure the image of Corey Robin being such a stalwart defender of academic freedom that he would flood his blogs protesting the unhiring of a pro-Zionist academic. Colloquially “I can’t imagine” implies my certainty based on my experience that he wouldn’t — ie: how I might bet. It is self-evident imho (though obv not yours) that Salaita’s political defenders are using the argument of “academic freedom” to buttress their defense of the particulars of Salaita’s POV. This wasn’t like Burton Joseph of the ACLU defending the National Socialist Party of America in the Skokie Affair.

16

Henry 09.22.14 at 5:24 pm

Brett – if you knew more about how the academy works, you’d be better able to troll (a solid B+ student at least).

AF – it certainly could be – as the post specifically acknowledges – that Dershowitz would have a very different interpretation than that suggested by the article. Even if the piece is correct in broad outline, Dershowitz might reasonably have wanted not to include evidence that seemed pertinent to a student researcher, for reasons not visible to that researcher. But that’s not the question posed by the post, which asks about a hypothetical scholar conducting his or her “research in the manner that the article attributes to Dershowitz.” If a scholar deliberately discards relevant data that is inconvenient to his or her preferred conclusions, then he or she isn’t a scholar, but a hack masquerading as one. One doesn’t have to include data that is obviously erroneous, and one can certainly explain why one thinks that a particular piece of evidence doesn’t count for much. But one has to include it, or one is committing suppressio veri and deceiving one’s readers. NB, as noted above, even litigators are supposed to report relevant citations etc.

17

Lars 09.22.14 at 5:32 pm

Instead, as a lawyer might, he writes his conclusions, leaving spaces where he’d like sources or case law to back up a thesis.

This is something we see quite a bit of in contra-scientific efforts to undermine the climatological community’s research program on global climate change. Is there a formal name for logical constructs of this sort? Affirming the consequent doesn’t seem to quite fit the bill.

18

Robespierre 09.22.14 at 5:43 pm

Apologetics? Failing that, “dishonest research” is fairly descriptive.

19

AF 09.22.14 at 5:58 pm

“If a scholar deliberately discards relevant data that is inconvenient to his or her preferred conclusions, then he or she isn’t a scholar, but a hack masquerading as one.”

Not a scholar in a normative sense, I agree. But not guilty of academic misconduct either, I would still argue. If being a hack masquerading as a scholar is academic misconduct, there are thousands of professors who could be plausibly characterized as guilty. I don’t think this is or should be the standard. Can anyone cite an example of someone in the humanities or legal academia who was disciplined for academic misconduct simply for not dealing without relevant counterevidence?

20

AF 09.22.14 at 5:59 pm

“with relevant counterevidence”

21

Henry 09.22.14 at 6:04 pm

AF – I think that there’s a significant difference between “not dealing with” and “deliberately suppressing.” It’s obviously often hard to prove the latter – but I think that e.g. if a historian were found to have cherry picked a couple of convenient quotes from an archival source that he or she had clearly consulted, while failing to report the substantial body of evidence that weighed against his or her thesis, he or she would be in serious disciplinary trouble.

22

Abbe Faria 09.22.14 at 6:18 pm

“I have a different theory about the tepid participation of natural scientists on either side of the affair… For the most part, they simply have never heard of Salaita. I’ve seen zero about the Salaita affair on the science oriented blog networks that I read regularly.”

But if you take a case like Suzanne Sisley’s recent termination from the University of Arizona, there’s frankly been very little pushback from natural scientists even though it is relevant to them, and incidentally pretty much complete silence from humanists. Which does seem to support the idea that STEMs just aren’t interested, and humanists only get fired up when there’s a political agenda.

23

Julian F 09.22.14 at 6:22 pm

“It’s in the same class as this case in which a foreman at a West Virginia mine was pressured to make contributions to GOP candidates (through a centralized process, in which her boss would be able to see who contributed and who did not), and alleges that she was fired when she failed to comply.”

On the other hand, union members are compelled to contribute to Democratic party candidates as a matter of course, whether they agree with them or not.

(I did communications for SEIU Local 6434 for four years. )

24

MPAVictoria 09.22.14 at 6:23 pm

“On the other hand, union members are compelled to contribute to Democratic party candidates as a matter of course, whether they agree with them or not.”

Really? Or did the union donate to the democratic party?

25

Dr. Hilarius 09.22.14 at 6:27 pm

Dershowitz uses student researchers both for academic articles and briefs used in litigation. In either case, failing to cite contrary legal holdings is misconduct, academic and professional respectively. Litigation is different from a scholarly work in being goal directed, advancing the client’s case, so there the “backwards” research model is reasonable. But by failing to cite contrary precedent the lawyer runs the risk of being thought at best sloppy even if not dishonest. Not a good strategy for your professional reputation.

I have wondered about whether Dershowitz would have such a high-profile reputation if he had to do his own work instead of relying on a small army of very bright, highly motivated student workers. Sort of like some Supreme Court justices and their clerks.

26

Julian F 09.22.14 at 6:29 pm

Hi MPAVictoria,

You pay your dues or you lose your job, all the while knowing that the bulk of your dues are going to end up in Democratic party coffers. I knew members who were pro-life; they were shit out of luck.

27

harry b 09.22.14 at 6:30 pm

Interesting, if brief, relevant piece from the Chancellor of my university, an economist, about who has what in common with whom:

http://www.chancellor.wisc.edu/blog/is-an-english-professor-a-scientist/

28

MPAVictoria 09.22.14 at 6:32 pm

“You pay your dues or you lose your job, all the while knowing that the bulk of your dues are going to end up in Democratic party coffers. I knew members who were pro-life; they were shit out of luck.”

Were the members able to vote on the Union leadership?

29

The Temporary Name 09.22.14 at 6:32 pm

You pay your dues or you lose your job, all the while knowing that the bulk of your dues are going to end up in Democratic party coffers.

Wow. Which union pays the bulk of its dues into political races?

30

Rich Puchalsky 09.22.14 at 6:37 pm

I’m inclined to agree with the cynicism about whether any large number of people really support academic freedom / freedom of speech / any other freedom in all cases, or just in cases sympathetic to them. Look at Cary Nelson, after all. Hell, look at constitutional law professor Obama.

But just because most people are horrible hypocrites doesn’t mean that people who really do care about these freedoms shouldn’t join these campaigns when they come along. So it doesn’t matter what you think Corey Robin or anyone else would do in some different incident. What matters right now is that he’s on the side of freedom of speech in this incident.

I can’t think of any time when Dershowitz has been. Wait, wiki tells me that he defended a porn actor in 1976, and says that porn is protected by the First Amendment. That’s not much to set against his defense of torture.

31

AF 09.22.14 at 6:40 pm

“AF – I think that there’s a significant difference between “not dealing with” and “deliberately suppressing.” It’s obviously often hard to prove the latter – but I think that e.g. if a historian were found to have cherry picked a couple of convenient quotes from an archival source that he or she had clearly consulted, while failing to report the substantial body of evidence that weighed against his or her thesis, he or she would be in serious disciplinary trouble.”

Sure, if a historian did that, it would be a problem. But Dershowitz isn’t a historian. And he doesn’t deal with archival sources — which are the real source of the misconduct in your hypothetical. If a professor of literature or political theory or law — or for that matter a historian — cites portions of a generally available text in support of an argument and ignores (even “deliberately”) portions of the text that go against the argument, that is bad scholarship, not academic misconduct.

32

Lars 09.22.14 at 6:41 pm

Thank you, Robespierre.

33

The Dark Avenger 09.22.14 at 6:51 pm

AFAIK, there are requirements in the law for non-union members to pay at least that part of the union dues for the collective bargaining services and other benefits union members receive, but that they can’t charge for political activity in what the non-union worker pays in said dues. Unfortunately, SEIU has had some problems that I’m aware of, but none of them linked to the contributions to candidates.

And as someone who was the only person in my workplace acting as the unofficial shop steward, things look a lot different on the ground than when you’re even slightly above the fray.

34

The Temporary Name 09.22.14 at 7:04 pm

I’d expect the “bulk of the dues” example to match the “99 percent” example.

35

mbw 09.22.14 at 7:14 pm

I’ve been talking with physical science colleagues here at UIUC. The numbers aren’t clear yet, but a significant minority of us are very concerned about this as an academic freedom issue. A few of those who signed the pro-administration letter are even a bit concerned about the implications for academic freedom, although they’re more concerned about institutional stability and funding. One economist (now elsewhere), very conservative and somewhat Islamophobic, is strongly on the academic freedom side. I just spoke last night with a philosopher who is boycotting. He hates identity politics, has a low opinion of the humanities departments here, but is sure that this blow to academic freedom will just make them worse.
So I’d hesitate to throw around crude labels of the sides.

36

Julian F 09.22.14 at 7:20 pm

Hi The Temporary Name,

For some folks, one percent is too much. Would you like one percent of your paycheck to go to the Republican Party? I wouldn’t.

Dark Avenger,

It’s true that workers can opt out of paying for political activities, but the process is laborious (and don’t forget the power of the nudge: it’s a lot harder to opt out of a process than to opt in). How laborious? Members must use snail-mail, and their written requests must be re-submitted every year. Here are the instructions from the UAW website:

“Under the Supreme Court decision in CWA v. Beck, nonunion members who pay money to the union under union security agreements, may file objections to nonrepresentational-related expenditures of the money they pay under such agreements. (Such agreements, including those that the UAW is a party to, may be and are applied by the UAW only to require as a condition of employment that covered employees “tender the periodic dues and initiation fees uniformly required as a condition of acquiring or retaining membership” in the union. This means that at any time you may decline membership in the union and be a nonmember agency-fee payer. In addition, if you do so, you are eligible to submit an objection to the UAW under Beck as described below.)

To comply with the Beck decision, the UAW honors objections by nonmembers of the union covered by National Labor Relations Act union security agreements who notify in writing the Agency Fee Payer Objection Administration-Private Sector, International Union, UAW, 8000 E. Jefferson Ave., Detroit, MI 48214 of their objection. Objections must be renewed each year.”

37

Barry 09.22.14 at 7:50 pm

Brett Bellmore 09.22.14 at 4:54 pm

“Sure he is: If Salaita had been making similar public statements in the opposing direction, they’d never have the opportunity to take the opposite side on his unhiring. Getting unhired required getting hired in the first place, after all.”

Pretty rich, when the article starts with Alan Dershowitz.

38

MPAVictoria 09.22.14 at 7:55 pm

Shorter Julian F- Truly is there any greater oppression than being forced to mail a letter every year?

39

Marc 09.22.14 at 7:58 pm

So, let’s see. By Julian’s logic, having my employer fire me for not making a personal contribution to a political cause that I oppose is exactly the same thing as being a member of a union – where I can vote on the policies – and having some of the money go to causes that I disapprove of. But – let me guess! – it’s completely OK to have corporations spend their money on causes that I disapprove of, even though I’m a shareholder.

40

MPAVictoria 09.22.14 at 7:59 pm

“So, let’s see. By Julian’s logic, having my employer fire me for not making a personal contribution to a political cause that I oppose is exactly the same thing as being a member of a union – where I can vote on the policies – and having some of the money go to causes that I disapprove of. But – let me guess! – it’s completely OK to have corporations spend their money on causes that I disapprove of, even though I’m a shareholder.”

And you don’t even HAVE to have your union dues go to a political cause if you care enough to write a letter asking that they don’t.

41

Henry 09.22.14 at 8:17 pm

AF – but as noted, lawyers are not supposed to suppress citations to relevant cases. I just don’t think you can maintain the argument that it isn’t a breach of professional ethics if a legal academic deliberately does this in a law review article.

42

Julian F 09.22.14 at 8:20 pm

I surrender :^)

43

TM 09.22.14 at 8:41 pm

AF, if hypothetically a legal scholar asked a student to research a legal issue, explicitly predefined the expected outcome, the student came up with relevant case law that did not support the expected outcome, and the scholar then told them to discard that relevant precedent, then I think most academics including most legal scholars would agree that this constituted misconduct.

Of course we don’t know in detail what happened and can only guess based on those student statements. If it were true and could be proven, it is still unlikely that formal action would be taken. Academic misconduct is often difficult to prove and gets punished rarely, and only in the most egregious cases. But that doesn’t mean that the standards aren’t there and aren’t well understood.

44

Dr. Hilarius 09.22.14 at 8:45 pm

For those wanting the specific rule on citing contrary authority:

“A lawyer shall not knowingly fail to disclose to the tribunal legal authority in the controlling jurisdiction known to the lawyer to be directly adverse to the position of the client and not disclosed by opposing counsel.” Rules of Professional Conduct 3.3(a)(3).

This is from the Washington State rule book but the rule itself is fairly uniform across jurisdictions. It is also not a legitimate dodge to tell your researcher to not bring adverse law to your attention because other rules impute a subordinate’s conduct to the supervising attorney.

45

Brett Bellmore 09.22.14 at 9:05 pm

“Which union pays the bulk of its dues into political races?”

Taking into account in-kind contributions of union labor? I don’t know that any do, but their political spending is typically a LOT higher than 1%. More like 10-20%.

Political Spending by Unions Far Exceeds Direct Donations

Since unions will typically only exempt non-members from the reported political expenditures, and their total political expenditures including in-kind contributions amount to maybe 5 times as much, writing that letter only slightly reduces the extent to which you’re forced to subsidize the union’s political agenda.

46

Peter Dorman 09.22.14 at 9:05 pm

Regarding the claim by Dershowitz that whole swaths of academia are intellectually dishonest and care only about “results”, could there be an element of projection at work? People often fear most the sorts of things they do themselves. After all, you know for a fact that the threat is real.

47

Barry 09.22.14 at 9:20 pm

Peter, that’s what I figured.

48

Alex 09.22.14 at 9:40 pm

Taking into account in-kind contributions of union labor?

I see an obvious remedy; don’t show up.

49

ifthethunderdontgetya™³²®© 09.22.14 at 9:40 pm

Julian F 09.22.14 at 8:20 pm

I surrender :^)
————
You trolled CT and sucked in Brett. You win an Internet Service Star. Wear it proudly.
~

50

Collin Street 09.22.14 at 10:01 pm

Thought strikes me that “hiring practices” are essentially social, and you’d expect scholars-of-society to have a firmer grasp on social issues even out of their area of expertise.

[likewise you’d expect physical scientists to have a firmer grasp on the principles underpinning anthropogenic-climate-change theories than the social sciences; I have no idea on the actual numbers so take this as a testable hypothesis.]

51

Jozxqk 09.22.14 at 10:04 pm

Henry — you misunderstand the “candor towards the tribunal” rule of professional conduct. You have indicated that this rule prohibits “suppress[ion] of citations to relevant cases” or “knowingly omit[ting] relevant information or citations.” These statements are not remotely true.

The model rule, like the rule where I practice, provides that a lawyer may not: “fail to disclose to the tribunal legal authority *in the controlling jurisdiction* known to the lawyer to be *directly adverse* to the position of the client and not disclosed by opposing counsel.”

If I know of directly adverse authority from a different state I can “suppress” that authority all I want. If I am litigating a case under Oregon law, and there is a US Supreme Court decision that reaches a directly contrary conclusion applying federal law, I need not cite it (though I would certainly expect the other side to).

Moreover, if I know of authority that I (or a judge) might consider relevant, but that is not directly adverse to the position of my client, I need not disclose it. For example, a case may exist in a related area of law that would provide a great and persuasive basis for my opponent to ask the court to adopt its reasoning in the present case. I will not cite that case.

Indeed, among the less than punctilious members of the bar (a fair proportion, as all the jokes suggest), this rule is viewed as virtually toothless, because nearly any case can be distinguished in such a way that it is not “directly” adverse. In other words, if you have some argument to work with, then the legal authority is not “directly” against you.

And this is only regarding legal authority. Regarding facts, I cannot make an affirmatively false statement (or an omission that is the equivalent of a misrepresentation), but in no way is a litigant required to volunteer damaging information just because it might be relevant to the judge’s decision. Our system is an adversarial one. It is the other side’s job to highlight the weaknesses of your case.

52

Oxbird 09.22.14 at 10:26 pm

It may be that the “now defunct Harvard student magazine 01238” warrants its current status. More years ago than I care to count I did research for Professor Dershowitz and, a year or two later, a prominent antitrust professor at Harvard Law School who happened to have, as I recall it, very different political positions. My experience with both was substantially the same — they were very straightforward and professional in their evaluation and use of precedents — and the attitudes of both bore no resemblance to the anonymous comments you quote from 01238. This is not to defend Dershowitz’ positions, which you are more than competent to deal with on their merits. In my view it would have been a much better post without your reaching out to these types of anonymous comments.

53

Paul Davis 09.23.14 at 1:15 am

I would hope that I don’t really have to remind Henry or commenters of Feyerabend’s “Against Method” and related studies on the history of science.

If discarding/hiding/forgetting data that didn’t fit the hypothesis wasn’t a critical part of science, we’d have to do without a number of subsequently very well substantiated theories. Millikan’s estimation of e (electron charge; the “oil drop” experiment) is perhaps one of the best known. Even though there is some disagreement over the extent and nature of Millikan’s exclusion of results, there is no disagreement that he did collect a substantial number of results that were not included in published work and that at best cast the nature of his results in less focus than the published set did.

And then there is the longer-term effect on other researchers, neatly summarized by this quote from Feynman on the Wikipedia page:

We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that…

So, as awful as Dershowitz may or may not be, let’s not be quite so sanctimonious about “misconduct”, eh? Just as forgetting is a critical part of self-formation, convenient exclusion of contradictory data is a key part of the evolution of scientific hypotheses. Of course, there comes a point at which that has to stop for the hypothesis to evolve towards a theory, but that isn’t always (and perhaps isn’t ever) the responsibility of the initial author.

54

Corey Robin 09.23.14 at 2:06 am

If I’m understanding some people’s comments here correctly, I am a hypocrite unless every single time there is a violation of academic freedom in a university setting, I am willing to spend upwards of 8 hours a day on my blog, here, and elsewhere, fighting that violation of academic freedom. In other words, in order to fight one fight of academic freedom, or even a few, I have to fight every fight. In other words, I have to basically give up my day job as a professor who has his own research and teaching to do and become a full-time defender of academic freedom.

This is absurd. So absurd it can’t be what any on this thread actually means.

I think you’d have a much stronger case — that is, an actual case — if you found that on a comparable case of academic freedom, but one in which the person whose rights were being violated said things that I am diametrically opposed to (e.g., the State of Israel has the right to occupy Palestinian lands in perpetuity), if in that instance, I not only refused to support that person’s rights by, say, not signing a petition (and if anyone can find an instance of me doing so, you should let me know), but actually spent a good part of my time over a period of nearly two months, working to deny them those rights. Again, if you can find an instance of me doing so, please let me know.

In any event, the kind of logic that’s being mounted here on this particular issue reminds me of the “why single out Israel” crowd, whose arguments I pretty much addressed in this post here.

https://crookedtimber.org/2013/12/13/a-response-to-michael-kazin-on-bds-and-campus-activism/

The bottom line is: I’m not the ACLU. I’m not a national organization with extensive resources that can find multiple fights across the country. I pick and choose my battles. Like, I assume, all the rest of us. (And in fact even the ACLU is not the ACLU; it also has to choose its battles.) If you can find an instance of me taking the opposite stance — or refusing to support someone — just because I disagreed with that person’s views, that would be one thing. But again, I don’t think you can.

55

js. 09.23.14 at 2:36 am

AF has a point actually — I don’t think it’s terribly uncommon in the humanities (to speak of what I sort of know) to start with an idea that functions as a sort of thesis, and then to do the best job one can to marshal the strongest argument(s) for it. Rather than going wherever the text/evidence/whatever leads you. It’s also quite common — indeed, sanity demands it — that one doesn’t look at _all_ the countervailing evidence/arguments.

What AF seems to be missing though is that simply “to omit the discrepant information” sounds really bad, at least borderline misconduct (though admittedly it might refer to something less bad than it sounds).

56

js. 09.23.14 at 2:46 am

harry b @27:

It’s a lovely sentiment, but surely “scientist” hasn’t been used in the sense of Wissenschaftler for at least a century?

57

Joshua W. Burton 09.23.14 at 2:48 am

AcademicLurker @14: I have a different theory about the tepid participation of natural scientists on either side of the affair. Scientists have their own distinct academic ecosystem and their own distinct news sources/social media/blogosphere.

It may also be relevant that a choropleth map of the physical and biomedical science academic ecosystem would show Israel large enough to cover both Urbana IL and Blacksburg VA at once. Some sleuthing (and a bit of coding) would be needed to filter on scholars who have no first-degree collegial ties (coauthor, advisor, coeditor) with any Israeli citizen, but that might be all it takes to make the alleged science vs. humanities bias, if such there be, vanish into the noise.

58

DBake 09.23.14 at 2:49 am

The original Dershowitz argument was in the form of “a bet,”

Which I am willing to bet is an embarrassingly stupid form of argument.

59

Rich Puchalsky 09.23.14 at 3:13 am

“if in that instance, I not only refused to support that person’s rights by, say, not signing a petition (and if anyone can find an instance of me doing so, you should let me know), but actually spent a good part of my time over a period of nearly two months, working to deny them those rights.”

Isn’t that essentially what Cary Nelson did? Defender of academic freedom, and then working to deny someone those rights?

I don’t speak for the people who are trying to smear you in particular. But there does seem to be something of a belief here that the left cares about basic freedoms non-hypocritically and the right doesn’t. In the Age of Obama, I haven’t observed this to be true. The archetypical figure for these years is Marty Lederman, strong opponent of secret detention and expansion of executive power when Bush was President, and author of secret law once Obama became President allowing the President to have people assassinated at will. The difference between Lederman and Yoo vanished when Lederman went into Yoo’s job.

So I’m willing to congratulate anyone who’s on the right side now, but to presume that people will on average stay on the right side when their preexisting commitments say otherwise would be foolish.

60

Joshua W. Burton 09.23.14 at 3:25 am

Paul Davis quotes Feynman @53: We have learned a lot from experience about how to handle some of the ways we fool ourselves.

Ian Hinchliffe and Mike Barnett have a long-running irreverent project at the Particle Data Group, mapping the best values of measured fundamental constants over time. If science were what Descartes and Popper told you in high school, you’d expect to see normally distributed scatter, with decreasing variance as the measurements improve. If science were what Kuhn and Feyerabend told you in sophomore year of college, you’d expect to see paradigms shifting stochastically without convergence. Marvelously, you instead see the human endeavor of science, in which measurements are “sticky” around the “known” value — not out of malice or even sloth, but simply for the reason your keys are in the last place you look, namely that when you think you’ve found them you stop looking — but still narrow over time toward objective truth. As Piet Hein says,

The Road to Wisdom?
Well, it’s plain
And simple to express:
To err, and err, and err again . . .
But less, and less, and less.

I could stare at that graph for hours, with more profit than spending the same amount of time reading any 20c philosopher of science I can name.

61

J Thomas 09.23.14 at 3:38 am

#60 JWB

If science were what Descartes and Popper told you in high school, you’d expect to see normally distributed scatter, with decreasing variance as the measurements improve.

Yes.

If science were what Kuhn and Feyerabend told you in sophomore year of college, you’d expect to see paradigms shifting stochastically without convergence.

We do see that, just not very often. Very few scientists have both the imagination and the clout to present a new paradigm and get it accepted.

Marvelously, you instead see the human endeavor of science, in which measurements are “sticky” around the “known” value — not out of malice or even sloth, but simply for the reason your keys are in the last place you look, namely that when you think you’ve found them you stop looking — but still narrow over time toward objective truth.

I’m with you here up to the point you decide that what they narrow in on is objective truth. Why would you think that?

The more prior research has already been done, the stickier the values get. Right up until somebody makes a successful paradigm shift after all.

62

Joshua W. Burton 09.23.14 at 4:13 am

I’m with you here up to the point you decide that what they narrow in on is objective truth. Why would you think that?

I’m looking at it. Fifty years ago, the best value for the neutron lifetime was just over 1000 seconds; a value as low as 880 seconds was a 3.5 σ surprise, a chance in five thousand. Today, it’s 880 seconds with less than a second of systematic error; to roll it back above 1000 seconds would be a 130 σ jump, one chance in 10 to the 3700th power. Buddy, can you paradigm?
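
For the curious, those tail probabilities are straightforward to reproduce. Here is a minimal sketch (assuming Gaussian errors, which is exactly the idealization under discussion, and using scipy’s log survival function; the numbers are the ones quoted above):

    # Reproducing the quoted tail probabilities under a Gaussian assumption.
    # norm.logsf(x) is the log of the one-sided upper-tail probability.
    import math
    from scipy.stats import norm

    for sigma in (3.5, 130):
        log10_p = norm.logsf(sigma) / math.log(10)
        print(f"{sigma:>5} sigma: tail probability about 10^{log10_p:.0f}")

    # 3.5 sigma: about 10^-4 (one chance in several thousand)
    # 130 sigma: about 10^-3672 ("10 to the 3700th power" in round numbers)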

63

J Thomas 09.23.14 at 4:42 am

Fifty years ago, the best value for the neutron lifetime was just over 1000 seconds; a value as low as 880 seconds was a 3.5 σ surprise, a chance in five thousand. Today, it’s 880 seconds with less than a second of systematic error; to roll it back above 1000 seconds would be a 130 σ jump, one chance in 10 to the 3700th power. Buddy, can you paradigm?

Let’s say for the sake of argument that the current value is correct. The original value was way off, but they didn’t think it was off. They got consistent results to the point that if they had gotten a correct result it would have been an unusual outlier.

And I will assume that this fit the usual pattern where the observed values gradually drifted down toward the correct one. And now more reliable equipment gets the same result more consistently, so that what used to be 3.5 sigma has become 130 sigma.

Now, imagine that the results were drifting in the correct direction, but they had not drifted to the right answer yet. But the equipment got so reliable that it gave much more reproducible results. Wouldn’t that tremendously slow the rate of drift?

Imagine that the correct value is in fact 130 sigma on the other side, and it has drifted about halfway. When researchers compute the value fresh, how likely is it they will publish a value that’s more than, say, one sigma away from the consensus? It will take a very long time to get there at this rate!

Well, but we could argue that the current value can’t be 130 sigma off but must be correct. And yet, how was it that the early values were consistently around 3.5 sigmas off? 130 sigma is just more of the same. It takes a lot more calibration to adjust your numbers that far if you in fact start to measure the correct value at the beginning, but 130 sigma is also a lot more powerful as an incentive. How could you possibly be right when everybody else agrees on something so reliable and so far from your answer?

Of course I don’t want to say it has to be that way. But consider the possibility that we might not converge to objective truth. Possibly it’s a race between drift in the correct direction versus increased precision. When the precision gets high enough, maybe the drift mostly stops whether the result is true or not.

64

Ben Alpers 09.23.14 at 6:28 am

Apropos of the questions of hypotheses, confirmation bias, and Dershowitz’s research methods, my colleague at the U.S. Intellectual History Blog L.D. Burnett recently published a fascinating post on historians’ uses of hypotheses and evidence.

65

Joshua W. Burton 09.23.14 at 6:36 am

Imagine that the correct value is in fact 130 sigma on the other side

No, I won’t.

66

Duktur Daud 09.23.14 at 7:46 am

I don’t know what Dershowitz does with evidence or his research assistants. In general I don’t have a high regard for the legal academy. Yet the picture of the social sciences Henry Farrell is painting is very naive at best. The claim that people are drummed out of the academy for hiding evidence that is contrary to their theories is false. People report the statistical results that support their claims, often seeking to produce the arbitrary, but widely-accepted benchmark of statistically significant association at the .05 level, knowing full well that slightly different but equally or more defensible specifications do NOT support their results. They cherry-pick. They also mischaracterize cases that readers will not know well. This is done routinely by the most prominent people. Every investigation I have ever seen shows that many prominent studies cannot be replicated either, to say nothing of having robust findings.

67

Brett Bellmore 09.23.14 at 10:17 am

“(And in fact even the ACLU is not the ACLU; it also has to choose its battles.) ”

Not really, so much. The crucial difference between individuals and organizations is that individuals have largely fixed limits, while organizations are capable of growing to meet the challenges they take on. An organization like the ACLU takes on a cause, it accretes new members who think that cause important, and grows to have the capacity to take it on.

That’s why, unlike an individual, it really is fair to look at an organization, and draw conclusions about what its leadership believes from the fights they take, and the fights they pass on. The ACLU, for instance, didn’t decline to defend 2nd amendment rights, and invent specious excuses for why they weren’t really civil liberties, due to a lack of resources. (The giant the NRA grew to, taking on that cause, ought to tell you that.)

They passed on that fight, and even enlisted in the other side, because they hated that right. And really didn’t want to associate with the folks who didn’t hate it.

But you’re certainly right about the limits of the individual, and I draw no conclusions from whose causes YOU fail to exert yourself on behalf of. Individuals can’t do everything, so they really do have to pick their fights, and only the fights they DO pick say much about them.

68

Ebenezer Scrooge 09.23.14 at 10:45 am

Jozxqk @ 51 is spot-on. Litigation is the only discipline of the mind in which one’s weak or slightly slimy arguments do not discredit one’s strong arguments.

Dershowitz checked out of serious legal academia several decades ago, and I’m not trying to defend his habit of omitting unfavorable cases, if indeed he has one. However, I work with kid lawyers in law school. They often do amazing things. But I often ask them to chase down a fox, and they come back proudly with a goat. They don’t realize that the cases they dug up are irrelevant, perhaps because they didn’t understand the issue in the first place. (Yes, I know it is my fault, but I like to underspecify my assignments to them, because as I said: “they often do amazing things.”)

69

Daniel 09.23.14 at 11:12 am

>>>“That’s the way it’s done; a piecemeal, ass-backwards way,” says one student who has firsthand experience with the writing habits of Dershowitz and other tenured colleagues. “They write first, make assertions, and farm out [the work] to research assistants to vet it. They do very little of the research themselves.”<<<

No surprise. This is how judges make law.

70

J Thomas 09.23.14 at 11:32 am

#65 JWB

“Imagine that the correct value is in fact 130 sigma on the other side”

No, I won’t.

;)

No, seriously, try it out. We have two competing forces here. On the one hand, people try their best to calibrate their equipment and adjust their reasoning to get what they think is the right answer, an answer that’s within conversational distance of pre-existing work.

And on the other hand, their raw unfiltered data has some connection to reality.

So each time a new scientist repeats the same work, they have some tendency to come closer to getting it right (and some tendency for random errors to make it a random walk). And also they pay close attention to previous work, and pay the most attention to the most recent work which they presume is done the best.

When the error bars get tighter, doesn’t it make sense that later researchers will get more timid? If they get results that are wildly variant from the last guy, won’t people assume they are wrong? Won’t they themselves assume they are wrong? That’s what made it go that way in the past, why would it stop?

Two independent processes. One of them, scientists look for ways to massage their results to get close to the consensus and quit when they get close enough, and since they are coming from the right direction the new result is closer than the last old one. The other process, they improve their methods and get increasingly reproducible results. They happen at independent rates, and the smaller the error bars, the slower the drift toward correctness.

Is there any reason to think that the precision of the machinery etc will improve only after the researchers’ biased calibrations and biased adjustment for systematic errors etc have approached the true value?

What is to keep the results from getting highly reproducible while still including biased calibrations etc?
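
As a toy illustration of that race between drift and precision (every number below is invented purely for illustration, not fitted to any real measurement), one can simulate a chain of results in which each reported value is massaged partway toward the previous consensus while the error bars shrink:

    # Toy model of the two processes above: raw data pull toward the truth,
    # "calibration" pulls the reported value toward the previous consensus,
    # and error bars shrink each generation. All numbers are invented.
    import random

    TRUTH = 880.0        # hypothetical true value
    consensus = 1000.0   # badly biased starting consensus
    sigma = 30.0         # error bar of the first measurement

    random.seed(1)
    for gen in range(1, 13):
        raw = random.gauss(TRUTH, sigma)        # what the apparatus says
        reported = 0.3 * raw + 0.7 * consensus  # 70% of the gap massaged away
        consensus = reported
        sigma *= 0.7                            # precision improves
        print(f"gen {gen:2d}: reported {reported:7.1f} +/- {sigma:5.2f}")

    # With these rates the residual bias and the error bars shrink together, so
    # the consensus stays a few "sigma" from the truth indefinitely: whether you
    # converge on objective truth depends on which rate wins.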

71

MPAVictoria 09.23.14 at 12:25 pm

“while organizations are capable of growing to meet the challenges they take on.”

This is pretty weak. If charitable organizations could grow infinitely we wouldn’t have poverty now would we? There is a limit to the number of dollars out there for orgs like the ACLU so they have to decide how they want to spend those dollars.

72

Joshua W. Burton 09.23.14 at 12:46 pm

When the error bars get tighter, doesn’t it make sense that later researchers will get more timid? If they get results that are wildly variant from the last guy, won’t people assume they are wrong? Won’t they themselves assume they are wrong? That’s what made it go that way in the past, why would it stop?

Yes, I understood you the first time. What makes it stop is on view in the actual data, which is why that’s such a beautiful page. It could have been that science (in this case turn-of-the-millennium particle physics) had that behavior, but in fact it doesn’t. Instead, for fundamental constant after fundamental constant, people do the same experiment a bit better, subject to subtle confirmation biases, and the data points shuffle along guiltily in what will turn out to be the right direction, with suspiciously less variation than the error bars demand.

Then — because this is physics and we have many orders of magnitude in sight and we conquer them boldly — somebody does a radically better experiment. The central value shifts to some place that is (within 5 σ, say, ballparking 3 for statistical perversity at 1% and another two for undetected but honest systematic error) borderline consistent with the “target” result, and in the very same data point (this is how you know they’ve got a new technique) the error bars shrink fivefold, or in some cases a hundredfold. (During Norman Ramsey’s professional life, his lab reduced the neutron electric dipole moment, a “negative” result because a nonzero answer would have meant new fundamental physics, by eleven orders of magnitude, using at least four entirely new approaches and countless incremental refinements. Unfairly, the whole magnificent race earned him only one Nobel Prize.) And, because the error bars are now so much smaller, researchers who follow will play the game at the edges of the new, much tighter Overton window.

In a less fortunate subject, where understanding advances in increments of accuracy measured in percent, your Kuhnian story might be true: the whole apparent advance of objective knowledge could unravel, and yesterday’s result turn out to be as wrong as last year’s. But when yesterday’s result is suddenly a hundred times better than last year’s, there is (barring rank incompetence of the 100 σ sort) no going back. In physics we see lots and lots of paradigm shifts, and to the innumerate they do look Kuhnian. But if you look at all the beautiful, beautiful data, you see that they are Heinian: “err, and err, and err again . . . but less, and less, and less.” Believe your eyes, and be comforted.

73

J Thomas 09.23.14 at 12:59 pm

It could have been that science (in this case turn-of-the-millennium particle physics) had that behavior, but in fact it doesn’t. Instead, for fundamental constant after fundamental constant, people do the same experiment a bit better, subject to subtle confirmation biases, and the data points shuffle along guiltily in what will turn out to be the right direction, with suspiciously less variation than the error bars demand.

This is the biased behavior we described, the behavior you said it could have had but doesn’t.

Then — because this is physics and we have many orders of magnitude in sight and we conquer them boldly — somebody does a radically better experiment.

Now you have something! They do a different experiment, and they don’t feel any obligation to respect the old results!

74

Joshua W. Burton 09.23.14 at 1:18 pm

They do a different experiment

Leave out the quantitative improvement, and that’s the tale you’d want to tell. They do a better experiment.

Try to imagine an absurd scenario in which, say, Keynes can refute Jean-Baptiste Say by answering objective questions with six more significant figures of accuracy, and Friedman with five more sig figs than Keynes (or, as in Ramsey’s case, let this extraordinary qualitative advance in quantitative precision fit within a single career), and you’ll see why a physicist tends to view Kuhnian subjectivism as mere envy and projection.

75

J Thomas 09.23.14 at 1:27 pm

Oops, my keyboard did something weird and posted early.

They do a different experiment and feel no obligation to respect the old results because their new results are better! So the answer takes a random jump, and now this is the value that others compare against. Does it include calibration errors and judgement errors? I dunno. It’s something new, so it might. On the other hand the new precise experiment may be simpler and easier to perform and so there simply may not be as many errors to make. It stands to reason that when your experiment gets an answer that’s precise to 6 decimals instead of 3, there will be fewer fiddly little things to adjust for that could be adjusted wrong. And when it’s precise to 12 digits then you just *know* that all the systematic errors are gone from this brand-new experimental technique.

I tend to think that careful experimental scientists, thinking carefully, will tend to actually get correct results, and those results will consistently get better with time instead of following a series of random jumps with small incremental improvements between them.

But the data you point to suggests that I’m likely to be wrong about that. For 3/4 of the constants on that page, the currently accepted value is outside the error bars from 1990. For how many of them will some new technique justify a new shift outside the current error bars within the next 25 years?

I was a bit heartened by the K-short lifetime. The graph dipped down, and then it came back up. As if they overshot significantly and then the data started coming in on the other side so they gradually increased again. That makes it plausible that the true value is somewhere between the highest and lowest reported.

76

Joshua W. Burton 09.23.14 at 1:36 pm

For 3/4 of the constants on that page, the currently accepted value is outside the error bars from 1990. For how many of them will some new technique justify a new shift outside the current error bars within the next 25 years?

Almost all of them. We’ll be outside the current error bars, because we’re not Cartesian angels of the scientific method; we’re human and can fool ourselves. But, we’ll be outside the current error bars by an amount that is tiny compared to the 1970 error bars, because the scientific religion is better than its priesthood.

77

Jerry Vinokurov 09.23.14 at 2:36 pm

Joshua Burton, let me just tell you how much I’m enjoying your posts in this thread. Thanks for holding the line; that poem from Hein is just gold.

78

J Thomas 09.23.14 at 2:45 pm

Try to imagine an absurd scenario in which, say, Keynes can refute Jean-Baptiste Say by answering objective questions with six more significant figures of accuracy,

It isn’t just you. When I googled “significant figures of accuracy” I got 43,700 hits. “significant figures of precision” got only 21,600.

http://www.chem.tamu.edu/class/fyp/mathrev/mr-sigfg.html

There are two ways to get more significant figures. One way is to do the work over again exactly the same way, repeatedly, and average the results. Assuming that each time you do the work your result has the same finite variance, you should expect your random error to halve with four times as many repetitions.

The second way is to repeat the work more precisely. With tighter control on all your independent variables, you can do the same work and get closer to the same answer each time.

Neither of these approaches does anything about the possibility that you were doing the same wrong thing every time you repeated the work.
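To put numbers on both points, here is a toy simulation (all values invented): every measurement carries the same hidden systematic bias plus fresh random noise, so quadrupling the repetitions halves the random scatter of the average while the bias never budges.

import random

TRUE_VALUE = 10.0
SYSTEMATIC_BIAS = 0.3   # hypothetical miscalibration, identical every run
NOISE_SD = 1.0          # random error on a single measurement

def measure():
    # every repetition carries the same hidden bias plus fresh noise
    return TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0.0, NOISE_SD)

random.seed(1)
for n in (100, 400, 1600):   # four times as many repetitions each step
    means = [sum(measure() for _ in range(n)) / n for _ in range(500)]
    grand = sum(means) / len(means)
    scatter = (sum((m - grand) ** 2 for m in means) / len(means)) ** 0.5
    print("N=%5d  mean=%.3f  scatter=%.3f" % (n, grand, scatter))
# the scatter halves each time N quadruples; the 0.3 bias never shrinks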

If you believe the story that these physicists looked for errors that would bring their work into closer conformance with existing knowledge, and did not look for errors that would push it farther from the previous results, and in each case they found the errors they were looking for and they did not find the errors they were not looking for, that would imply that there may be a tremendous number of errors available to find.

79

J Thomas 09.23.14 at 3:00 pm

We’ll be outside the current error bars, because we’re not Cartesian angels of the scientific method; we’re human and can fool ourselves. But, we’ll be outside the current error bars by an amount that is tiny compared to the 1970 error bars, because the scientific religion is better than its priesthood.

Again, if scientists accidentally (but mostly consistently) fudge their results to get them close enough to the previously-established results, then as the precision of their work increases, that makes the allowable variation decrease — independent of any objective accuracy of the precise results!

I’m ready to believe that the results are actually getting better. But the increased precision and tighter grouping of new precise results close to older precise results, are not evidence for it. We already have an explanation for that which explains more. So by Ockham we don’t need to postulate a second explanation.

I figure though that when everybody already knows the answer is zero, that’s a special case. It’s a great big deal to get a nonzero answer; it’s even a big deal to get error bars that don’t include zero. So all there is to do is to get error bars that get closer and closer to zero — unless you actually do measure something astounding.

80

James Wimberley 09.23.14 at 3:00 pm

Let’s see if the website software can speak sciency: Σ, σ, ∫, η. If this is scribble, it can’t. Otherwise, commenters please copy and paste the characters.

81

Brett Bellmore 09.23.14 at 4:17 pm

“This is pretty weak. If charitable organizations could grow infinitely we wouldn’t have poverty now would we? There is a limit to the number of dollars out there for orgs like the ACLU so they have to decide how they want to spend those dollars.”

Who said anything about “infinitely”? There aren’t infinite dollars out there, but there aren’t infinite battles to spend them on, either.

In practice, the ACLU has a half million or so members, the NRA 4.5 million. Somewhere in between is how big the ACLU would have been if it hadn’t decided that particular civil liberty was too offensive to defend.

82

MPAVictoria 09.23.14 at 5:33 pm

“Who said anything about “infinitely”? There aren’t infinite dollars out there, but there aren’t infinite battles to spend them on, either.

In practice, the ACLU has a half million or so members, the NRA 4.5 million. Somewhere in between is how big the ACLU would have been if it hadn’t decided that particular civil liberty was too offensive to defend.”

Again, there are only so many dollars out there for orgs like the ACLU. They don’t magically appear, so choices have to be made. You may not like the choice, but of course you are free to start your own org, or join the ACLU and try to change their focus.

83

Joshua W. Burton 09.23.14 at 5:51 pm

Again, if scientists accidentally (but mostly consistently) fudge their results to get them close enough to the previously-established results, then as the precision of their work increases, that makes the allowable variation decrease — independent of any objective accuracy of the precise results!

No. It’s really easy to fool yourself into a couple of standard deviations of error. It is gross, career-limiting incompetence to go to press with hundreds of standard deviations of error. Major advances in precision therefore imply (not logically, but empirically with overwhelming force in publishable work) comparable advances in accuracy. When an error-bar I becomes a dot, you are looking at the true answer in the back of the book . . . and a new book opens with new puzzles when you expand the vertical scale so that the new data points look like I’s again. Particle physicists have a saying: yesterday’s discovery is today’s background is tomorrow’s detector calibration.

Null experiments are no different in this regard, by the way. In fact, the main trick to get order-of-magnitude gains in precision is to find a difference between signal and expected signal that you can turn into a null experiment. At least seven, and possibly all, of the PDG graphs I cited involve measurement in a setup that nulls out a low-order (that is, yesterday’s) result, and observes the residuum.

84

TM 09.23.14 at 6:13 pm

BB, you are welcome to disagree with the ACLU. If that is your only point, you have made it and can leave it there. Did I mention that your comments have been completely off-topic? Entirely unrelated to anything that the OP or anybody else here was debating? This thread is unrelated to guns, got it?

85

Jonathan Dresner 09.23.14 at 7:08 pm

Thanks to Abbe Faria (comment 22), I now know about the Suzanne Sisley case. Not sure why it didn’t pop up through my media filters before, though.

It brings up an interesting question, though, about the different reactions of “hard” and “soft” fields to political pressure: it seems like there has always been more regulation of science research (at least for the last sixty years or so) in the form of “you can’t research this or that.” But because the government structures it as ‘funder’s preference’ rather than an outright ban, it seems to get around some of the academic freedom arguments.

86

J Thomas 09.23.14 at 7:42 pm

#83 JWB

“Again, if scientists accidentally (but mostly consistently) fudge their results to get them close enough to the previously-established results, then as the precision of their work increases, that makes the allowable variation decrease — independent of any objective accuracy of the precise results!”

No. It’s really easy to fool yourself into a couple of standard deviations of error. It is gross, career-limiting incompetence to go to press with hundreds of standard deviations of error.

I know you believe this, but you have not presented the ghost of a reason why anybody should believe it.

If you get extremely consistent results, that does not in any way imply that your systematic errors are small. It only implies that they are consistent.

Major advances in precision therefore imply (not logically, but empirically with overwhelming force in publishable work) comparable advances in accuracy.

How would you go about testing that?

It’s an empirical scientific claim, right? What experiment could be performed that would tell how true it is?

87

Jerry Vinokurov 09.23.14 at 8:14 pm

This worked for me: Σ, σ, ∫, η

J Thomas: I don’t know why you’re persisting in this, but you clearly don’t have a very good idea of how science actually works, either at the implementational or at the institutional level. The claim that physicists have gotten it pretty much right over the last hundred years or so, in incremental fashion, isn’t really in dispute for any reasonable definition of “dispute.” JWB has nothing to “prove” to you that the historical record hasn’t proven many times over.

88

Paul Davis 09.23.14 at 8:18 pm

@87: have you ever read Feyerabend? What did you think?

89

etv13 09.23.14 at 8:50 pm

I am always grateful when opposing counsel fails to cite relevant authority or mischaracterizes the cases they do cite, because it gives me an opportunity to tear them apart. This is why, ethics aside, you should never do this. If there is authority that goes against your position, you should deal with it up front. You are much better off distinguishing it, down-playing it, or explaining why it’s simply wrong, in your opening brief than having to explain why you omitted it in your reply, or at oral argument.

90

Jerry Vinokurov 09.23.14 at 8:51 pm

Paul, my familiarity with Feyerabend begins and ends with Against Method, which I’m actually pretty sympathetic to. I think Feyerabend attempts to claim too much in saying that there are no rules or methods at all to be followed, but I also think that it’s an important argument for an ecumenical attitude towards methods. In the end, though, the methods have to hang together and prove themselves; some do, and some don’t, and that’s ok. It’s important to try all sorts of things because you never know what you might find, but you still have to have some sort of meta-criteria for how you’re actually going to judge success and failure.

91

Joshua W. Burton 09.23.14 at 8:54 pm

I know you believe this, but you have not presented the ghost of a reason why anybody should believe it.

I think this side-conversation has essentially come down to where we ended up on one of the Zionist threads: cheap, obdurate and hence ultimately irrefutable (but also boring) skepticism on your part about the lived experience and measured judgment of people who know things, oddly misapplied as a strategy to chat them up.

How would you go about testing that?

As an exercise in context-free internet wanking, this is half-clever. As applied to the actual datasets we are discussing, it’s obtuse: when we say that a formerly cutting-edge measurement (say, the W mass, known at the 20% level in the 1980s and the 0.02% level today) is settled at the 1% or 50 σ level, we mean that the value is so certain it is now baked into the deep engineering design of the LHC detectors as well as into the software cuts. You might as well ask how it would affect the Higgs search if Michael Faraday turned out to be totally wrong about electromagnetic induction. And, of course, for a Kuhnian something like that might happen, at any time. The wonder is that you dare to sit so close to a computer while talking about it.

92

William Berry 09.23.14 at 8:58 pm

MPAV @82:

Not to mention that the ACLU might hold the view of the 2A that prevailed through more than 200 yrs. of U.S. Constitutional jurisprudence.

If you take that view in good faith, you are not going to roll over just because the Court stormed in from far right field with its decision in that D.C. case (and its descendants) a few years back.

93

Paul Davis 09.23.14 at 9:05 pm

I think that the point of AM was that actual scientists do not follow particular practices in order to arrive at results, nor to judge the quality of the results. Research is ad hoc, judgement is ad hoc within the field; the only judgement that isn’t is the broader one rendered by human societies: does this help us to improve some aspect of our interaction with the world (by efficiency, knowledge, or both)?

I raise Feyerabend again because I think the idea that you could watch what scientists do for a century and conclude that there is no shadow of a doubt that they are on the right track would have struck him as mostly laughable. Whether science is on an ascending staircase towards some better understanding of the world, or took a wrong turn somewhere before it reached a nearby stairway, is surely something that only hindsight reveals. This has been true for 5,000 years of human history, and I see no particular reason to believe that it is any different today.

There’s no questioning the apparent continued string of consistent and improving results (in the sense of the depth and breadth of the world that can be explained) within the natural sciences during the 20th century. But is this sense of things unique to our time? It seems unlikely to me, and either way, I would not want to sit in the same chair of judgement used by Michelson to declare the end of physics in the late 19th/early 20th century.

94

Paul Davis 09.23.14 at 9:11 pm

@91: I don’t think the issue being raised is whether one can anticipate an actual Kuhnian shift before it happens, but merely, given that we know these things occur, what story we choose to tell about our current models, data and experimental processes. You present one of more or less iteratively improving accuracy and precision in which key theories are confirmed over and over. This isn’t wrong (certainly not in any provable way, nor in any useful sense). However unless you honestly believe that something has taken place in human scientific endeavour that fundamentally renders it unlikely to ever undergo a Kuhnian shift again (and you appear not to believe that), this level of utter confidence seems slightly hubristic.

95

J Thomas 09.23.14 at 9:13 pm

#87 Jerry Vinokurov

J Thomas: I don’t know why you’re persisting in this, but you clearly don’t have a very good idea of how science actually works, either at the implementational or at the institutional level. The claim that physicists have gotten it pretty much right over the last hundred years or so, in incremental fashion, isn’t really in dispute for any reasonable definition of “dispute.”

It’s clear that physics has gotten lots of results over the last hundred years or so.

The immediate question though is the value of these particular constants.

Joshua and I agree that physicists very often find their results biased to agree with wrong prior research, for whatever reason. Looking at his graphs, it appears to happen far more often than not. Do you disagree?

Next, the error bars are a description of the reproducibility of their work, how well they can do the same thing again and get the same result. Uncontrolled variables cause the results to vary. Controlled variables do not.

So for example, imagine an experiment in which a 60-cycle hum makes a difference, where samples are taken in much less than a sixtieth of a second. You get different results depending on what part of the cycle you happened to be on when the sample is taken. Now imagine that you find a way to isolate the 60-cycle background, but there is a constant 5-volt/meter field. Will that make a difference compared to a minus 5-volt/meter field, or zero? If different instants in the cycle made a difference, it might. But if it stays constant, it will probably make the same difference all the time. It won’t make the results less reliable, just different from what they would be at zero volts.
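A quick sketch of that thought experiment (all magnitudes invented, just to show the mechanism): interference that varies from sample to sample shows up as scatter, that is, lost precision, while a constant field shifts every reading by the same amount, lost accuracy with no loss of precision.

import math, random

SIGNAL = 1.000    # the hypothetical true quantity
HUM = 0.05        # amplitude of the 60-cycle pickup
DC_SHIFT = 0.02   # offset from the constant field (made-up size)

random.seed(2)

def sample(hum, dc):
    x = SIGNAL
    if hum:   # caught at a random instant of the 60-cycle waveform
        x += HUM * math.sin(random.uniform(0.0, 2.0 * math.pi))
    if dc:    # a constant field makes the same difference every time
        x += DC_SHIFT
    return x

def mean_sd(xs):
    m = sum(xs) / len(xs)
    return m, (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

m, sd = mean_sd([sample(True, False) for _ in range(10000)])
print("hum only:       mean=%.4f scatter=%.4f" % (m, sd))  # scatter, no bias
m, sd = mean_sd([sample(False, True) for _ in range(10000)])
print("constant field: mean=%.4f scatter=%.4f" % (m, sd))  # bias, no scatter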

There are lots and lots of potential errors like that, which can change the result but which only reduce precision when they vary. I don’t know how to measure a neutron’s half-life, but I’ll use that as an example anyway — if it doesn’t work then something similar would work.

I expect you have a way to tell when a neutron reaches its end of life. But it’s an indirect event. You measure something that ought to be produced. And you don’t detect every event. So you need a way to calibrate that measurement, and lots of things can affect it. There can also be some false-positives, and you can probably do a good job of reducing those. You can measure how many of those you get when there are no neutrons present, and figure out how to keep them from happening. Except for the false-positives that only happen when neutrons are present….

Then there’s a question how many neutrons you have to start with. You can produce them at some average frequency, and you can measure how many there are by some indirect method. You have to calibrate that method also. After you get your raw data you must massage it to deal with known errors and biases. So you need a good way to estimate those errors and biases, because the more accurately you can do that the better your final number will be.

In all of this, doing the same experiment repeatedly and getting similar answers tells you only that the various variables are controlled. It does not say that your estimates of them are correct.

Rebuild the same apparatus at another lab and lots of things will be different. You will have to calibrate it all over again. You figure you’ve done it right when you test the calibration and get the same result the other guys did….

Scientists have more faith in their results when they try entirely different methods and come up with similar results. There’s lots of room for error, but if three entirely different approaches give the same result, it’s probably more-or-less real.

Joshua pointed out the method of nulling out results. Sometimes you can detect very small results if you are careful. Say you want the weight of a battleship captain and you can’t get it directly. Maybe you can weigh the battleship with the captain on it and off it, and compare the difference. Sometimes this approach can work wonders. Sometimes it can have unexpected errors. If you can’t control the number of seagulls that land on the ship, then you get observable errors. If the captain always takes his duffel bag off the ship when he leaves, you may get a consistent wrong number.

J Thomas: I don’t know why you’re persisting in this, but you clearly don’t have a very good idea of how science actually works, either at the implementational or at the institutional level.

I have a clear idea how it actually works. Far too many people misunderstand how it ought to work. This is basic stuff people really ought to pick up in grade school and I run into PhDs who have completed their dissertations who don’t get it. Not to mention MDs.

96

L.D. Burnett 09.23.14 at 9:17 pm

Absolutely agreed w/ etv13 @ 89. This is as true for historians as it is for attorneys — a sound argument has to contend with and somehow account for evidence/arguments to the contrary. A historian should be familiar with those counter-arguments — maybe not every single instantiation of them, but certainly the main interpretations current in the field.

Thanks to Ben Alpers upthread for linking to my USIH post on this. Sherman Dorn had an interesting comment on that thread that helped bridge the gap (for me, anyhow) between hypothesis in the natural sciences and hypothesis in the social sciences/humanities. (Yes, I think humanists do hypothesize — but I’m still not sure when that goes from bug to feature, or vice versa.)

97

J Thomas 09.23.14 at 9:26 pm

#91 JWB

“How would you go about testing that?”

As an exercise in context-free internet wanking, this is half-clever. As applied to the actual datasets we are discussing, it’s obtuse: when we say that a formerly cutting-edge measurement (say, the W mass, known at the 20% level in the 1980s and the 0.02% level today) is settled at the 1% or 50 σ level, we mean that the value is so certain it is now baked into the deep engineering design of the LHC detectors as well as into the software cuts.

That is not responsive. I ask how you’d test it, and you tell me that people deeply believe it, to the point that they would be willing to accept extremely expensive consequences if it turned out to be wrong.

But they would not test it to find out even if they knew a way, so that’s no problem.

It’s times like this that I begin to wonder if possibly Brett Bellmore might have a point about climate science after all….

98

Jerry Vinokurov 09.23.14 at 9:42 pm

I think that the point of AM was that actual scientists do not follow particular practices in order to arrive at results, nor to judge the quality of the results. Research is ad hoc, judgement is ad hoc within the field; the only judgement that isn’t is the broader one rendered by human societies: does this help us to improve some aspect of our interaction with the world (by efficiency, knowledge, or both)?

But they do! I mean, I’m a scientist, having worked in a couple of pretty different fields, and both of those fields have their particular practices and constraints. Obviously they’re not often strictly formalized, but being surprised by that is like expressing surprise when you go to play a pickup game of basketball and call an offensive foul and people look at you weird; the norms of pickup ball generally dictate that you don’t call offensive fouls, even if it’s not written down anywhere. Just the same, there are norms and methods that scientists follow mostly implicitly, although if you challenge them on it, they can start enumerating them.

I raise Feyerabend again because I think the idea that you could watch what scientists do for a century and conclude that there is no shadow of a doubt that they are on the right track would have struck him as mostly laughable.

Sorry to be glib, but I don’t find “what would strike Feyerabend as laughable” a useful criterion. I think physics, which I know best, is in fact on the right track, scientifically speaking; I feel confident that the record of history is pretty solid on that. Anyway, that’s neither here nor there, Feyerabend-wise, because even if I think some fields have got it all wrong, I acknowledge that there’s a set of normative criteria which roughly circumscribes the kinds of things that those people do and the kinds of methods which fly in their disciplines. Those lines aren’t sharply drawn and there’s lots of wiggle-room there (just today I attended a talk on applying multiplicative update theory to evolutionary biology, so clearly there’s lots of room for crossover), but these common methods do exist, and there are common meta-methods that cover the discipline as a whole. That’s the reality of the enterprise, and if Feyerabend didn’t see that, then he got it wrong.

Whether science is on an ascending staircase towards some better understanding of the world, or took a wrong turn somewhere before it reached a nearby stairway, is surely something that only hindsight reveals. This has been true for 5,000 years of human history, and I see no particular reason to believe that it is any different today.

I’m not quite sure what this means. Let me put it like this: I can’t refute ultimate skepticism and won’t try to. I don’t think there’s any doubt that our understanding of the natural world today is, in every facet, vastly better than our understanding was 100 years ago. There’s just no meaningful debate to be had about this without undermining the whole project of trying to understand the world in the first place.

There’s no questioning the apparent continued string of consistent and improving results (in the sense of the depth and breadth of the world that can be explained) within the natural sciences during the 20th century. But is this sense of things unique to our time? It seems unlikely to me, and either way, I would not want to sit in the same chair of judgement used by Michelson to declare the end of physics in the late 19th/early 20th century.

There are two things here which are logically unrelated. The first is your question about whether it’s unique to our time; the answer is absolutely yes, it is unique, because never in the history of civilization have we known as much as we know now about how the natural world works. The second is the bit about the “end of physics” which as far as I know is not a view that any actual physicist holds. “Better than it’s ever been” does not mean “can’t be any better ever,” and there are tons of unanswered questions that we should keep hammering at.

99

Jerry Vinokurov 09.23.14 at 10:10 pm

It’s clear that physics has gotten lots of results over the last hundred years or so.

The immediate question though is the value of these particular constants.

There is no actual question about the value. There is only a misunderstanding on your part of the experiments that have obtained these values.

Joshua and I agree that physicists very often find their results biased to agree with wrong prior research, for whatever reason. Looking at his graphs, it appears to happen far more often than not. Do you disagree?

Those graphs don’t mean what you think they mean. Notice how most of them have the same characteristic shape: they start out with large error bars that shrink exponentially as a function of time, reducing the region of space that represents an “admissible” result. That makes sense when you think about such things as the improvement of understanding in the construction of experiments, more factors being taken into account, and the improvement of the technology itself. Yes, a scientist might have institutional reasons for preferring a result that is a half-sigma closer to the previous accepted value (assuming the scientist has several results to choose from). A value that is off by 10-sigma either means that we’ve had it all wrong before (possible, but unlikely) or your experiment has done something wrong. If you’re a good Bayesian, you should, in the parlance of our times, check yourself before you wreck yourself.
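For concreteness, the half-sigma and 10-sigma talk is just the disagreement between two results divided by their combined error bars; a sketch with invented numbers:

def tension(x1, s1, x2, s2):
    # disagreement between two results, in combined standard deviations
    return abs(x1 - x2) / (s1 ** 2 + s2 ** 2) ** 0.5

# invented example: old result 2.40 +/- 0.20, new result 1.80 +/- 0.05
print("%.1f sigma apart" % tension(2.40, 0.20, 1.80, 0.05))  # ~2.9 sigma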

So for example, imagine an experiment in which a 60-cycle hum makes a difference, where samples are taken in much less than a sixtieth of a second. You get different results depending on what part of the cycle you happened to be on when the sample is taken. Now imagine that you find a way to isolate the 60-cycle background, but there is a constant 5-volt/meter field. Will that make a difference compared to a minus 5-volt/meter field, or zero? If different instants in the cycle made a difference, it might. But if it stays constant, it will probably make the same difference all the time. It won’t make the results less reliable, just different from what they would be at zero volts.

What is this thought experiment intended to accomplish? First of all, we have ways of isolating experiments from things like electric fields; that’s called a Faraday cage and has existed for 150 years. Second, we (presumably) have some sort of fundamental theory that guides our understanding of whether or not these factors are relevant. The magnetic field of the earth is relevant for stratigraphic analysis because you can see the direction of the field reflected in magnetic domains frozen into the rock. We know this because we have a fundamental theory of magnetism. Whether or not some other factor is relevant would, again, depend on the theory. Maybe there are factors that are relevant and we don’t know them, which would be a problem, but under certain reasonable assumptions of the distribution of errors, those factors can be subsumed into “noise.” There are methods for dealing with such things; this isn’t exactly a surprise to anyone.

I expect you have a way to tell when a neutron reaches its end of life. But it’s an indirect event.

Everything is an indirect event; direct events just don’t exist.

You measure something that ought to be produced. And you don’t detect every event. So you need a way to calibrate that measurement, and lots of things can affect it. There can also be some false-positives, and you can probably do a good job of reducing those. You can measure how many of those you get when there are no neutrons present, and figure out how to keep them from happening. Except for the false-positives that only happen when neutrons are present….

Then there’s a question how many neutrons you have to start with. You can produce them at some average frequency, and you can measure how many there are by some indirect method. You have to calibrate that method also. After you get your raw data you must massage it to deal with known errors and biases. So you need a good way to estimate those errors and biases, because the more accurately you can do that the better your final number will be.

In all of this, doing the same experiment repeatedly and getting similar answers tells you only that the various variables are controlled. It does not say that your estimates of them are correct.

There’s a pretty simple experiment that you can do in any advanced undergrad lab (and which I’ve done twice) and that’s to measure the lifetime of the muon (and simultaneously test the special theory of relativity). It turns out that with some PMTs and a vat of scintillator liquid (plus some fancy electronics), you can get pretty damn close to the true value (under the assumption that muon arrival times are Poisson-distributed). Of course this requires you to assume there aren’t any other factors that affect the muon lifetime and you can continue adding complications as you like, but it’s not like there’s no answer for that. Actual laboratories that measure these things think deep and hard about various confounding signals and expend a great deal of time and money in modeling and understanding their sources of error (both systematic and stochastic). At some point, you will exhaust those sources of error down to the randomness inherent in the idea of a measurement. That’s just how it goes; this is actually one of those common methods that Feyerabend seems to think don’t exist. If this is an unsatisfactory situation for you, then I suppose you will remain unsatisfied, but it’s emphatically not a problem for the discipline as a whole.
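The statistical core of that lab exercise fits in a few lines. This toy sketch keeps only the exponential-decay statistics and ignores every detector effect just mentioned:

import random

TAU = 2.197   # approximate muon lifetime in microseconds

random.seed(3)
decays = [random.expovariate(1.0 / TAU) for _ in range(20000)]

tau_hat = sum(decays) / len(decays)       # MLE of an exponential mean
stat_err = tau_hat / len(decays) ** 0.5   # statistical error only, ~1/sqrt(N)

print("tau = %.4f +/- %.4f microseconds" % (tau_hat, stat_err))
# the systematics (detector efficiency, muon capture, afterpulses) are
# exactly what this toy leaves out, and what real labs sweat over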

Rebuild the same apparatus at another lab and lots of things will be different. You will have to calibrate it all over again. You figure you’ve done it right when you test the calibration and get the same result the other guys did….

That’s not how you “know you’ve got it right.” If I do my muon lifetime experiment and get a value 10-sigma outside the mainstream, I will assume that I have made a mistake because lots and lots of other people who are much smarter and better experimentalists than me have done it many, many times and it makes sense to seek the fault in myself rather than in them. And of course, because these measurements are part of a whole tapestry of results, an anomalous muon lifetime would mean that our physics is actually radically different from what we thought it was.

The crucial difference people are missing between the late 19th century and the early 21st is the sheer volume of what we know. We just didn’t know all that much circa 1890, or whenever it was that Kelvin thought the earth was on the order of 100 million years old. And now we do know a lot more things, and if one of those things turns out to be wrong, then a whole bunch of other results would also be wrong, and then everything would be wrong and no longer work at all. But it does work! When you flip a light switch and your bulb fails to turn on, you don’t immediately assume that all of physics is wrong, you assume your light bulb must be broken. The constraints on what counts as a plausible hypothesis today are much tighter than they used to be.

Joshua pointed out the method of nulling out results. Sometimes you can detect very small results if you are careful. Say you want the weight of a battleship captain and you can’t get it directly. Maybe you can weigh the battleship with the captain on it and off it, and compare the difference. Sometimes this approach can work wonders. Sometimes it can have unexpected errors. If you can’t control the number of seagulls that land on the ship, then you get observable errors. If the captain always takes his duffel bag off the ship when he leaves, you may get a consistent wrong number.

Again, I have no idea what this is supposed to mean or what relevance it has to scientific practice other than pointing out that errors exist and need to be modeled.

100

Joshua W. Burton 09.23.14 at 10:19 pm

they would be willing to accept extremely expensive consequences if it turned out to be wrong
But they would not test it to find out even if they knew a way

Anybody sensible who is still following along should observe that these two sentences were consecutive, just as I quoted them.

As for doubting Thomases, I can only advise them to reflect that building a detector which relies on a once-delicate result as a crude calibration background is, en passant, testing that result. We don’t need to go on testing Lorentzian time dilation and the approximate roundness of the earth every month in the lab, because the GPS in our phone will let us know pretty quickly if a paradigm shift occurs in the wild.

101

Paul Davis 09.23.14 at 10:25 pm

@98: I wish I had the time to repeat Feyerabend’s historical examination with one rooted in current lab observation. Maybe someone is doing it. I was once a molecular biology PhD candidate, gave that up for software development, but retain many friends and neighbours involved in many different areas of research. My gut feeling is that if you really went into labs (and offices) and observed what goes on, you’d walk away with the same conclusion that Feyerabend did: scientists use whatever they can find, and their practices follow norms only to the extent that they have to tell a story of their work to their peers. I think that even though you’d observe many common practices in laboratories across disciplines, the noise of “whatever works” would drown out “well, first you do X and then you do Y”. I think that one of the take-home lessons from AM, and from other sociology of science studies that have occurred since it was published, is that scientists tell a just-so story about what they are doing, even to themselves, and even when that story is at odds with their day-to-day reality. I wish I could give you a citation on that.

On the uniqueness of “now”: I suspect that, except for some relatively brief periods in the history of various human civilizations, it has almost always been true that we have “never known as much as we know now about how the natural world works.” Even when wrong turns were being taken (phlogiston, geocentrism, the humors, etc.), this was still true.

I have a very strong faith that when viewed on a long enough arc, human knowledge of the natural world tends towards increasing depth and correctness. But I have no idea whether the arc needs to be 100 years or 10,000 years long. If it is closer to the latter, there’s no reason that there may not be periods in which we make serious mistakes, both in measurement and in conception.

However, I don’t really subscribe to any notion of science drawing us closer to some knowledge of a “reality”, and view it as something to be judged based almost entirely on efficacy. By that measure, the 20th-century advances have been astounding, and I suspect that further Kuhnian shifts in understanding will do nothing to alter that, much as both the relativistic and quantum revolutions didn’t really change the impact of Newtonian mechanics.

102

J Thomas 09.23.14 at 10:39 pm

#99 JV

“The immediate question though is the value of these particular constants.”

There is no actual question about the value. There is only a misunderstanding on your part of the experiments that have obtained these values.

https://en.wikipedia.org/wiki/Accuracy_and_precision

“Joshua and I agree that physicists very often find their results biased to agree with wrong prior research, for whatever reason. Looking at his graphs, it appears to happen far more often than not. Do you disagree?”

Those graphs don’t mean what you think they mean. Notice how most of them have the same characteristic shape: they start out with large error bars that shrink exponentially as a function of time, reducing the region of space that represents an “admissible” result.

Yes, but look at another part of the shape. If you look at any four consecutive datapoints, they are almost always monotonic. Given the error bars, wouldn’t it be predictable that they would scatter up and down more? But they don’t.

Joshua’s interpretation of this was that the original value was wrong, and each successive one was righter, but they (presumably unconsciously) adjusted the result to be more like the previous one. He figured they slowly converge on the correct value because they are afraid to believe that their result — which if close to correct is far from the previous one — should be published as-is.

That makes sense when you think about such things as the improvement of understanding in the construction of experiments, more factors being taken into account, and the improvement of the technology itself.

Yes, it makes sense that the error bars would shrink with time and repetition. If nothing else, to get shrunken error bars they can do more reps. As the experiments get more automated it gets easier to do more reps. And the result is more publishable if the error bars are smaller.

Everything is an indirect event; direct events just don’t exist.

If you measure flashes in your retina, you can actually count flashes in your retina. But yes, mostly we measure indirect events which require an effort at calibration. Joshua’s belief is that this calibration (and other things) gets adjusted to make the results closer to the norm.

Actual laboratories that measure these things think deep and hard about various confounding signals and expend a great deal of time and money in modeling and understanding their sources of error (both systematic and stochastic).

Yes! That’s the part that can make it work!

103

The Temporary Name 09.23.14 at 10:47 pm

Joshua’s belief is that this calibration (and other things) gets adjusted to make the results closer to the norm.

Norm?

104

J Thomas 09.23.14 at 10:55 pm

Actual laboratories that measure these things think deep and hard about various confounding signals and expend a great deal of time and money in modeling and understanding their sources of error (both systematic and stochastic).

Yes! That’s the part that can make it work! To the extent that the people practicing that art do it really well, they can get specially good results. If they think about possible sources of systematic error, and invent ways to measure them, and then invent ways to control them, their results will be better. And noticing sources of systematic error that no one else ever handled can give them the confidence to publish results that differ from the norm, and do less fudging of their results toward the previously established value.

If this is an unsatisfactory situation for you, then I suppose you will remain unsatisfied, but it’s emphatically not a problem for the discipline as a whole.

All you can do about systematic error is to think about possibilities, and test for them, and deal with them as you find them. I can’t suggest any better approach. I prefer an attitude of humility when I consider that I might not have found them all, and so my results might still be biased. But that’s just me; if somebody prefers to deeply believe that the current best guess is true to within its error bounds, that’s their own choice.

“You figure you’ve done it right when you test the calibration and get the same result the other guys did….”

That’s not how you “know you’ve got it right.” If I do my muon lifetime experiment and get a value 10-sigma outside the mainstream, I will assume that I have made a mistake because lots and lots of other people who are much smarter and better experimentalists than me have done it many, many times and it makes sense to seek the fault in myself rather than in them.

First you say that isn’t how you know you have it right. Second you say that if you get a different result, that’s how you know you have it wrong.

And now we do know a lot more things, and if one of those things turns out to be wrong, then a whole bunch of other results would also be wrong, and then everything would be wrong and no longer work at all.

Do you believe that if one of your physical constants which you believe is known to .02% was actually wrong by 2%, that a whole bunch of other results would also be wrong? And therefore you know that it is not wrong by 2%?

For myself, I believe that if the current best guess at the W mass is wrong by 2%, there might be some theoretical result that would make sense of various things which currently are not well-understood, but possibly it is not being followed up because it only works right with the correct W mass and so it fails with the current data. Things like that happen sometimes.

For most practical things, people come up with workable fudge factors if the actual unadjusted numbers don’t quite work out.

105

J Thomas 09.23.14 at 11:13 pm

#103 The Temporary Name

“Joshua’s belief is that this calibration (and other things) gets adjusted to make the results closer to the norm.”

Norm?

For attempts to measure some fundamental constant, I mean the latest previous attempt, or anyway the one that is considered the best.

The argument is based on an indirect measurement. Usually each successive measurement is farther in the same direction. If the changes came because each new researcher found new errors and fixed them, at random, then I’d expect the changes to be in random directions.

So one hypothesis is that they keep actually getting results that are close to correct, but each time they assume they are wrong since all the other previous experimentalists were every one of them off to the same side. So they look for errors they can correct that will push their result closer to the others.

Another possible hypothesis is that if they get results that are just somewhere in the middle of the pack it won’t be very interesting, but if they get something that’s to one side but close enough to be believable, then it’s potentially important. That assumes a level of cynicism on their part that I don’t believe.

There might be some other explanation we haven’t thought of yet.
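The first hypothesis is easy to put numbers on. Here is a toy simulation (parameters invented, not a model of any real experiment): sequences of measurements with shrinking error bars, with and without a hypothetical pull toward the previously published value, scored by how often four consecutive points are monotonic.

import random

random.seed(4)

def sequence(pull, n=10):
    # successive measurements of a true value of 0.0, starting from a
    # badly wrong first published result; `pull` is the hypothetical
    # weight each new team gives to the previous published value
    published, sigma, out = 2.0, 1.0, []
    for _ in range(n):
        raw = random.gauss(0.0, sigma)   # what the apparatus says
        published = pull * published + (1.0 - pull) * raw
        out.append(published)
        sigma *= 0.6                     # error bars shrink over time
    return out

def frac_monotonic(xs, w=4):
    wins = [xs[i:i + w] for i in range(len(xs) - w + 1)]
    mono = sum(1 for ws in wins
               if ws == sorted(ws) or ws == sorted(ws, reverse=True))
    return mono / len(wins)

for pull in (0.0, 0.7):
    f = sum(frac_monotonic(sequence(pull)) for _ in range(2000)) / 2000
    print("pull=%.1f: %.0f%% of 4-point windows monotonic" % (pull, 100 * f))
# a strong pull toward the prior value makes one-directional creep the
# rule; with no pull, monotonic 4-point runs remain the exception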

106

Jerry Vinokurov 09.24.14 at 1:07 am

Yes! That’s the part that can make it work! To the extent that the people practicing that art do it really well, they can get specially good results. If they think about possible sources of systematic error, and invent ways to measure them, and then invent ways to control them, their results will be better. And noticing sources of systematic error that no one else ever handled can give them the confidence to publish results that differ from the norm, and do less fudging of their results toward the previously established value.

Which is what JWB and I have both been trying to say. What are you arguing about, then?

First you say that isn’t how you know you have it right. Second you say that if you get a different result, that’s how you know you have it wrong.

No, you didn’t read what I wrote. I didn’t say “that’s how I know I got it wrong.” I said I would assume, in the face of overwhelming contrary evidence, that I had made a mistake somewhere. This is radically different from the Millikan situation, where people took his one single value as authoritative and adjusted their reports accordingly. Simply put, when the countervailing data is one point, then the authority of the person producing that one point can drive others’ reactions to it. When the countervailing data is almost literally the whole of known physics, it is maximally parsimonious to assume a mistake on my own part.

Do you believe that if one of your physical constants which you believe is known to .02% was actually wrong by 2%, that a whole bunch of other results would also be wrong? And therefore you know that it is not wrong by 2%?

Hell yes I believe that. Let me tell you about a really cool result, or rather, set of results. It’s called the quantum metrology triangle, and it describes the three-sided relationship between Planck’s constant and the electron charge (three sided because you need three independent experiments to determine both). Now, the electron charge is known down to something like 9 significant figures, and from the QMT, once you know e, you know h, so if e were different by 2% then h would be different by 2% and if h were different by 2% then all of black-body physics would look very different and if black body physics blah blah blah etc. and so on.
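For concreteness, a back-of-the-envelope sketch of that interlocking, simplified to two of the triangle’s three sides: the Josephson relation K_J = 2e/h and the von Klitzing relation R_K = h/e^2 together pin down both constants from electrical measurements (rounded conventional values below).

# Josephson:    K_J = 2e/h   (hertz per volt)
# von Klitzing: R_K = h/e^2  (ohms)
K_J = 4.835978484e14   # Hz/V
R_K = 2.5812807e4      # ohm

e = 2.0 / (K_J * R_K)  # solving the pair for the electron charge
h = R_K * e ** 2       # ... and then for Planck's constant

print("e = %.6e C" % e)    # ~1.602177e-19 C
print("h = %.6e J s" % h)  # ~6.626070e-34 J s
# with R_K held fixed, nudging e by 2% drags h along by ~4%, and
# everything computed downstream of e and h moves with them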

The moral of this story is that the metaphor of scientific laws being represented by an interconnected web is only very slightly metaphorical. In reality, you can draw out a dependency graph at the root of which lie fundamental constants like h and e; the nodes of this graph are themselves various theories. Because constants like e and h are so central, if they change, then literally every node in the graph has to be recomputed because everything (at every scale, even!) depends on those values. If you travel far enough out on this graph, maybe you encounter a leaf node that doesn’t have any dependencies (less likely in physics, more likely in, say, biology). If that node changes, then you’re ok, because you don’t have to recompute everything, or maybe you only have to reconsider a small slice of the world. But the majority of the nodes in this graph are not leaves, and they’re very densely connected to each other (and the dependencies, as the QMT example neatly shows, are even cyclical), which means you can’t alter stuff over here and not expect it to have consequences over there. The whole thing has to hang together coherently or it just doesn’t work.

Incidentally, if I had to pick any central meta-principle of science it would be precisely this dependency graph picture, together with the meta-method according to which science is a process of refinement of each of these various nodes. The poem JWB posted above is a much more eloquent and less prolix summation of this view.

For myself, I believe that if the current best guess at the W mass is wrong by 2%, there might be some theoretical result that would make sense of various things which currently are not well-understood, but possibly it is not being followed up because it only works right with the correct W mass and so it fails with the current data. Things like that happen sometimes.

They do happen, but the odds of them happening are rather small. I am not a particle physicist, so this particular example falls outside my comfort zone, but I do know that the W boson mass is known to a fairly high accuracy and a change of 2% in its value would imply lots of changes to fairly fundamental physics. The error bars on all the related measurements are, at this point, way too small to consider seriously whether a value that’s 2% off the currently accepted number ought to be the right one; too many other dependencies would break if that were the case, and we know they’re not broken, so it very likely isn’t.

107

Paul Davis 09.24.14 at 2:21 am

@106: nice post.

Quibble mode, though: it is precisely the tangled web of interdependencies of various constants that might lead one to question whether the model is actually correct. Obviously this is a deeply philosophical question (somewhat akin to the pros and cons of Bohm’s hidden-variables interpretations of quantum theory). I can see at least three possibilities: (1) a world as apparently rich in phenomena as ours is inevitably well described by a model with this kind of interdependency; (2) the interdependency is not part of the world at all, but is a consequence of the mathematics used for the model; (3) it is ridiculous to believe that a model with this level of interdependency describes the universe.

We’ve had consistent models of the universe before which have been broken apart by new observations. It doesn’t seem absurd to think that as the number, accuracy and precision of our observations explodes over time, this becomes less and less likely (because, as you note, the model involves so many more interdependent components). But it doesn’t seem absurd either to believe that further Kuhnian/Copernican/quantum shifts are still part of the future of our understanding (i.e. that despite the lovely interdependencies and consistencies between fundamental constants, they are just artifacts at some level).

108

Jerry Vinokurov 09.24.14 at 4:40 am

I don’t know how I would distinguish between options 1 and 2. I’m not a mathematical Platonist myself, but I suspect that, operationally speaking, there aren’t too many Platonists in the laboratory. It just doesn’t seem to make all that much difference whether you think the world is “really” like that or whether it’s the model that’s like that. Since the model is the only way we have of grasping the world anyway, I suspect we’ll have to live with the ambiguity between 1 and 2. As far as 3 goes, I don’t see any good reason to believe that the universe should operate in a way that is appealing to us. Anyway, I actually find the whole picture rather beautiful, so maybe this is just a question of personal aesthetics.

Of course you are right that we have had models superseded before, but the degree of connectivity in those models was fairly low, which makes sense because they were, for the most part, really lousy models. Pick your favorite old-timey folk science theory and you’ll find that it was not terribly well integrated either with other theories or with experiment. As better theories came along, they either simply threw the old stuff out as it became clear that it was useless (e.g. chemical combustion vs. phlogiston) or subsumed them when the old theory turned out to be a special case of the new one (Newtonian mechanics vs. relativity). Certainly I hope that we do discover more general theories, even ones that might explain fundamental constants in a parameter-free way, but any new theory that comes along has a very high bar to meet: it had better explain at least as much as our current theories do.

109

J Thomas 09.24.14 at 10:22 am

#106 JV

“Yes! That’s the part that can make it work! To the extent that the people practicing that art do it really well, they can get specially good results.”

Which is what JWB and I have both been trying to say. What are you arguing about, then?

It sounded like you both were saying that you were confident in the results because they are measured to such great precision. (Which you both called “accuracy”, which is something entirely different.)

Getting precision is something different from the art of discovering and correcting systematic errors.

Here’s a climate-change example since I looked at it recently for Brett Bellmore. Say you have a bunch of weather stations around the country that have been measuring daily temperature for decades or sometimes even centuries. If you start replacing them with electronic thermometers that record the temperature to .01 degrees, your new readings are maybe more precise. That isn’t a bad thing, but it doesn’t much matter in a climate-change context.

Say that over the past 40 years a lot of your non-automated stations have switched from measuring the temperature in the afternoon to measuring it in the morning. You can either remove their data, or you can come up with the best correction you can manage to account for the change. Some people say you should just use the raw data, but if you do you will get a general cooling effect that isn’t real, because in most places it’s cooler in the morning than in the afternoon. It makes no sense to use the data as if it isn’t biased, but if you make your best effort to correct it, then political partisans will say that you fudged it to get the answer you wanted.

It’s a subtle art to correct for systematic errors, and you can never be completely sure you’re doing it right.
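A toy version of that station-switch problem (every number invented, and the underlying climate deliberately held flat) shows how the raw average manufactures a cooling trend and how the correction undoes it:

import random

AFTERNOON, MORNING = 20.0, 14.0   # made-up typical readings; no real trend

random.seed(6)
raw, corrected = [], []
for year in range(1970, 2011):
    # stations gradually switch to morning observation during 1980-2000
    frac_morning = min(1.0, max(0.0, (year - 1980) / 20.0))
    temp = (1 - frac_morning) * AFTERNOON + frac_morning * MORNING
    temp += random.gauss(0.0, 0.2)   # weather noise
    raw.append(temp)
    # best-effort correction: add back the known morning/afternoon offset
    corrected.append(temp + frac_morning * (AFTERNOON - MORNING))

print("raw change:       %+.1f C (spurious cooling)" % (raw[-1] - raw[0]))
print("corrected change: %+.1f C (roughly flat, as built)" % (corrected[-1] - corrected[0]))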

“First you say that isn’t how you know you have it right. Second you say that if you get a different result, that’s how you know you have it wrong.”

No, you didn’t read what I wrote. I didn’t say “that’s how I know I got it wrong.” I said I would assume, in the face of overwhelming contrary evidence, that I had made a mistake somewhere.

I don’t see the difference between what I said and what you said.

This is radically different from the Millikan situation, where people took his one single value as authoritative and adjusted their reports accordingly. Simply put, when the countervailing data is one point, then the authority of the person producing that one point can drive others’ reactions to it. When the countervailing data is almost literally the whole of known physics, it is maximally parsimonious to assume a mistake on my own part.

Yes, of course. And this explains the graphs Joshua pointed to. If the original estimate was wrong by 20% or more, then maybe each later researcher gets a result which is closer to the truth than that, but each of them assumes that they have made a mistake and looks for ways to “correct” it toward the previous answer. Each time, the result is closer to correct than the last published result and gets adjusted to be close to that one. So the answers keep creeping closer and closer. My thought is that as the precision goes up, the creep toward better answers slows down. Because too many physicists stupidly assume that precision implies accuracy, so they stop considering systematic errors that have big effects and consider only systematic errors that have effects small enough to keep them close to the currently-accepted value.

Of course, it’s possible that the reason the progress slows down is that they have found all the large systematic errors and correctly fixed them. But how likely is that? ;-)

It might be particularly likely in physics where you can study something extremely simple in isolation, and the mechanisms you use to keep your simple items isolated are also beautifully elegant with hardly any moving parts, changing fields, or measurements required.

It’s called the quantum metrology triangle, and it describes the three-sided relationship between Planck’s constant and the electron charge (three sided because you need three independent experiments to determine both). Now, the electron charge is known down to something like 9 significant figures, and from the QMT, once you know e, you know h, so if e were different by 2% then h would be different by 2% and if h were different by 2% then all of black-body physics would look very different and if black body physics blah blah blah etc. and so on.

So yes, if that happened you would get a different result. A lot of other numbers that were calculated from these constants would have to be revised. After those revisions, would you get results that were as compatible with reality as the existing results? I guess that could be determined, but it looks like a lot of work. It might be more interesting to look specifically for nonlinear results, things where the shape of the curve would be distinctly different. Then check what difference that would make. You might possibly get changes that could not be fixed by recalibrating measurements.

The whole thing has to hang together coherently or it just doesn’t work.

This is an excellent reason for physicists to doubt their lying eyes. Get a result that appears to be a little off, so you look for errors in your work that can be corrected to get it closer. Because the whole thing does work, therefore it’s right. Your observation doesn’t quite fit, therefore your observation is off.

We have evidence that it goes that way for the measurement of constants; why would it not work like that in a wider context?

110

MPAVictoria 09.24.14 at 12:18 pm

William Berry @92

Another excellent point.

111

Jerry Vinokurov 09.24.14 at 12:43 pm

“I don’t see the difference between what I said and what you said.”

The difference is between claiming what you know, and evaluating how likely you are to be right given other data points; that is, estimating (if only notionally) a Bayesian posterior. And that’s a sensible way to read those PDG plots too, because if you’re a good Bayesian, you’ll take the previous data point into account.
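A minimal sketch of that posterior bookkeeping, assuming Gaussian errors and invented numbers:

    def combine(m1, s1, m2, s2):
        """Posterior mean/sigma from two independent Gaussian measurements
        (inverse-variance weighting, i.e. the conjugate Bayesian update)."""
        w1, w2 = 1 / s1**2, 1 / s2**2
        mean = (w1 * m1 + w2 * m2) / (w1 + w2)
        sigma = (w1 + w2) ** -0.5
        return mean, sigma

    prior = (1.20, 0.05)  # previously published value (invented numbers)
    new = (1.00, 0.05)    # your new, discrepant measurement (invented numbers)
    mean, sigma = combine(*prior, *new)
    print(f"posterior: {mean:.3f} +/- {sigma:.3f}")
    # The two inputs disagree by about 2.8 combined sigmas, which is itself
    # evidence that one of the error budgets is wrong: cue the systematics hunt.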

“If the original estimate was wrong by 20% or more, then maybe each later researcher gets a result which is closer to the truth than that, but each of them assumes that they have made a mistake and looks for ways to “correct” it toward the previous answer. Each time, the result is closer to correct than the last published result and gets adjusted to be close to that one. So the answers keep creeping closer and closer.”

But that’s not how it works, and this is where it all bottoms out. It’s not that every successive researcher adjusts their values towards the previously best value. Of course, it’s not impossible that one or two people do this, especially with tabletop experiments, but the institutional nature of a large-scale team-based practice like particle physics militates against that. When you’re just one person in your own lab, you could conceivably “get away” with doing this, but when we’re talking about large collaborations where data is semi-public, it’s really difficult (I’d say pretty much impossible) to get away with any actual malfeasance.

Of course “adjusting” values is not the only way to be influenced by past results. If a past result was obtained by some experimental technique that has some systematic error built in, and you replicate the experiment with the same technique, you might incur the same systematic error even if your technology is better. Or you might pick an older method of analysis, because an earlier experiment used it, when a newer one would give you more accurate results. All these things are possible and do happen; I’d suspect, though I don’t know for sure, that when you see the jumps in those graphs it’s because some systematic error was eliminated either through a more careful experiment or through better analytic tools. But most scientists, especially those working in large collaborative projects, just aren’t sitting around picking values that accord better with past measurements.

“My thought is that as the precision goes up, the creep toward better answers slows down. Because too many physicists stupidly assume that precision implies accuracy, they stop considering systematic errors that have big effects and consider only systematic errors that have effects small enough to keep them close to the currently accepted value.”

I don’t know why you would think this. What do you know about the actual practice of what physicists do that would lead you to believe such incredible things? I mean, if taken seriously, this is a claim of serious scientific malpractice!

Here’s a story that’s playing out right now. You may have heard back in March that an experiment called BICEP2 had claimed a detection of primordial gravitational waves. Along with that detection, BICEP2 reported a number, called the tensor-to-scalar ratio, r, and said that r = 0.2 to some pretty high certainty. Never mind what this number represents; the important thing is that we have it. Now, as it turns out, upper bounds on r have previously been produced by the Planck satellite mission, and Planck has high certainty that r < 0.11. Obviously this is a huge discrepancy. Initially the BICEP2 team claimed that their data was very clean, but upon closer examination it now appears possible that they used an incorrect model for the dust foreground and that the signal they’re actually observing is either the inflationary signal plus the foreground or even swamped by the foreground entirely. This analysis was carried out by the Planck team, with assistance from cosmologists unaffiliated with either project. You can read a pretty good summary of it here, but the short of it is that BICEP2 has acknowledged the flawed analysis and gone back on their claims (which, if true, would have been an explosive discovery in cosmology; the “incentive” here is not to hedge towards the previous value but to move away from it), and people are now digging through the data to try to figure out what happened here and what can be done better. Future experiments will obviously seek to learn from this and avoid BICEP2’s problems (partly caused by measuring at only a single frequency).

That’s how these things actually work. Huge amounts of human intelligence are being directed precisely towards figuring out just what the systematics are and how they can be accounted for. To be sure, there are imperfections in any institution, but the suggestion that scientists aren’t taking systematic errors seriously is unwarranted and contrary to every standard practice I’ve ever encountered.

“So yes, if that happened you would get a different result. A lot of other numbers that were calculated from these constants would have to be revised. After those revisions, would you get results that were as compatible with reality as the existing results? I guess that could be determined, but it looks like a lot of work.”

I am reminded of Babbage’s famous quote: ‘On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?”… I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.’

How would it be possible to get results that were “as compatible with reality” if the fundamental constants were different? That’s the whole point: they wouldn’t be compatible with reality; the reality of a different set of fundamental constants would look radically different. It’s not just a question of changing some values around and getting out the same physics as you had before but with a different set of numbers.

“It might be more interesting to look specifically for nonlinear results, things where the shape of the curve would be distinctly different. Then check what difference that would make. You might possibly get changes that could not be fixed by recalibrating measurements.”

Imagine a simple threshold, like the work function of some metal or the ionization energy of an ion. If that number is different, then the physics of everything based on that is different. All sorts of implications cascade right out of that. I guess you could call it a nonlinearity if you like.

“This is an excellent reason for physicists to doubt their lying eyes. You get a result that appears to be a little off, so you look for errors in your work that can be corrected to get it closer. Because the whole thing does work, therefore it’s right. Your observation doesn’t quite fit, therefore your observation is off.”

“We have evidence that it goes that way for the measurement of constants; why would it not work like that in a wider context?”

Yes, because the whole thing does work. The entire structure has to cohere; if according to your experiment some piece of it is sticking out in contravention of other known results, then you should be suspicious. It doesn’t necessarily mean you’re wrong, but it very well might. In a case like BICEP2, it’s plausible that the BICEP2 number for r is correct. It doesn’t seem consistent with Planck’s number, but there’s no a priori reason to say that BICEP2 is wrong. If you measure a radically different lifetime for the neutron or a radically different electron charge (or you claim to detect superluminal neutrinos, say), then odds are overwhelming that you are wrong and you should start examining your experiment and analysis pipeline for errors. It’s a process which has actually proven wildly successful, as those graphs indicate, so I’m not sure what exactly the problem is supposed to be here.

As to the wider context: of course it happens. It’s called the conventional wisdom for a reason, and in some scientific fields it’s stronger than in others. But that’s why things like “academic integrity” actually matter; if physicists behaved like Dershowitz, the Planck and BICEP2 teams would be throwing feces at each other, not sitting down like civilized human beings and trying to figure out which one of them is actually right. All that shows you is that reasoning norms are important and need to be enforced.

112

J Thomas 09.24.14 at 3:44 pm

#111 JV

“If the original estimate was wrong by 20% or more, then maybe each later researcher gets a result which is closer to the truth than that, but each of them assumes that they have made a mistake and looks for ways to “correct” it toward the previous answer.”

“But that’s not how it works, and this is where it all bottoms out. [….] When you’re just one person in your own lab, you could conceivably “get away” with doing this, but when we’re talking about large collaborations where data is semi-public, it’s really difficult (I’d say pretty much impossible) to get away with any actual malfeasance.”

Joshua and I aren’t talking about malfeasance, but more a sort of subconscious thing that affects people without them noticing it.

“Of course “adjusting” values is not the only way to be influenced by past results. If a past result was obtained by some experimental technique that has some systematic error built in, and you replicate the experiment with the same technique, you might incur the same systematic error even if your technology is better. Or you might pick an older method of analysis, because an earlier experiment used it, when a newer one would give you more accurate results.”

OK! An alternate hypothesis!

So, say the original research had 12 important sources of consistent bias, all pointing in the same direction. As later researchers remove them one by one, the results trend in the same direction, toward the true value. They don’t find and remove biases in the other direction because, as it happened, there weren’t any; all the biases pointed one way.

It’s plausible to me that could happen.

Is there any way we could estimate which of these fits the data better? Are there any ways to do experiments that might reveal the effects of both?

Looking at JWB’s 12 graphs, I’d say the middle one on the bottom row looks like it fits your idea very well. One first result. Then a string of results that all have the same improvement, and they all get the same result with about the same error bars. Then another change, and everything looks the same at a third value, but now it’s all so squeezed together on the graph that we can hardly see changes. Explainable by two corrections.

Joshua thought the shape of the graphs implied the other idea. Like, if the changes are usually in the same direction, that would indicate a bias in the removal of biases. I’m not sure at first sight what statistical treatment would be appropriate for that, and I don’t fully trust my lying eyes about it.
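One crude screening statistic, sketched below with an invented sequence: for independent, unbiased remeasurements of a fixed quantity, any three consecutive published values are monotone with probability 1/3, so a long run of same-direction steps is evidence against independence, though it cannot by itself distinguish anchoring from biases that all happened to point one way:

    from math import comb

    def same_direction_pairs(values):
        """Count consecutive pairs of steps that move the same way."""
        steps = [b - a for a, b in zip(values, values[1:])]
        hits = sum(1 for s1, s2 in zip(steps, steps[1:]) if s1 * s2 > 0)
        return hits, len(steps) - 1

    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Invented sequence of published values for one "constant":
    published = [1.20, 1.14, 1.09, 1.05, 1.03, 1.02, 1.01, 1.00]
    k, n = same_direction_pairs(published)
    # Overlapping pairs are not independent, so treat this as a rough screen.
    print(f"{k}/{n} consecutive steps agree in direction;"
          f" rough p = {binom_tail(n, k, 1/3):.4f}")

Fed the real published sequences behind JWB’s graphs, a run of same-direction jumps much longer than chance would support Joshua’s reading; separating the two explanations would take more than this one statistic.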

“[….] Obviously this is a huge discrepancy. Initially the BICEP2 team claimed that their data was very clean, but upon closer examination it now appears possible that they used an incorrect model for the dust foreground and that the signal they’re actually observing is either the inflationary signal plus the foreground or even swamped by the foreground entirely. This analysis was carried out by the Planck team, with assistance from cosmologists unaffiliated with either project.”

This is an example — when results come out that appear to cause important changes, there’s a big effort to figure out how they are wrong, how to correct the results and bring them back into line. But when results are compatible with previous results, there is no such effort.

“How would it be possible to get results that were “as compatible with reality” if the fundamental constants were different?”

I dunno, you’d have to try it and see. There might be more than one way, unless you already have a system that explains everything. If you have a system that explains everything perfectly, with no error and nothing unexplained, then any alternative that worked as well would have to be completely equivalent — just as Copernican orbit theory was equivalent to a sophisticated Ptolemaic system. Nowadays we know that it doesn’t really matter where you decide the center of the system is; the math will work out with any center, though some approaches are more tedious to compute than others. But if your system doesn’t explain everything, then it’s possible an alternative might work as well if it had as much effort put into it as this one has.

Astrologers have gone to a lot of trouble to make a self-consistent system. But there are some differences between astrology and physics. One is that the astounding success of astrology is almost entirely subjective. It’s hard to quantify astrological predictions well enough to do statistics on them, though it’s easy to measure customer satisfaction, which tends to be high. Another difference is that there are at least three major schools of astrology, and all of them get about the same success. This argues that their results do not truly depend on their astrological methods but on something else. But there is only one major school of physics. We know that no other approach to physics can work as well, so there’s no point trying.

“Imagine a simple threshold, like the work function of some metal or the ionization energy of an ion. If that number is different, then the physics of everything based on that is different. All sorts of implications cascade right out of that.”

I think the farther you get from things that can be measured directly, the more room there is for fudge factors that will warp incorrect constants into acceptable results.

I don’t know how much room there is for that with the consensus physics. Maybe very little. It would have to be tested, and I haven’t heard that it has been tested much.

113

Jerry Vinokurov 09.24.14 at 5:27 pm

“Is there any way we could estimate which of these fits the data better? Are there any ways to do experiments that might reveal the effects of both?”

There’s no general answer to this question. You have to consider the specific theory to figure out what can be done in this regard.

“I dunno, you’d have to try it and see. There might be more than one way, unless you already have a system that explains everything. If you have a system that explains everything perfectly, with no error and nothing unexplained, then any alternative that worked as well would have to be completely equivalent — just as Copernican orbit theory was equivalent to a sophisticated Ptolemaic system.”

I’m not aware of any theory that explains everything. But the dependencies are important; you can’t “try it and see” because there’s only one universe (that we live in, anyway) and we can’t run it multiple times to see what happens, so you have to reason about which other parts of your system would be affected. And the thing is that we have lots of results that we know, to a ridiculous level of precision, would look very different if those constants were different. It’s just not a good counterfactual at all, because you can just look at the equations and the differences will jump out at you. There’s not a good distinction between “reality” and the “model of reality” at the fundamental level. Like, what you’re trying to say is that the equations could be different and reality would still be the same, but that’s not possible if you accept that the equations are an accurate model of reality.

“Nowadays we know that it doesn’t really matter where you decide the center of the system is; the math will work out with any center, though some approaches are more tedious to compute than others. But if your system doesn’t explain everything, then it’s possible an alternative might work as well if it had as much effort put into it as this one has.”

Yes, we have the benefit over the Ptolemaics and Copernicans of having a more sophisticated mathematics. What you’re really doing is making the old Quinean argument about theory being underdetermined by the data. And while it’s logically true, in practice, it doesn’t actually end up being a problem. It turns out that if you have a special theory of planetary motion over here and another special theory of projectile motion over there, you might come up with a way of changing one without affecting the other. But as soon as you realize that these special theories are really cases of a general principle of gravitation, then the interdependency of the theories becomes clear. And by and large the story of physics is the (wildly successful!) story of unifying these special theories under general principles.

“Astrologers have gone to a lot of trouble to make a self-consistent system.”

That’s because the governing meta-principle is not intra-theoretic consistency but rather inter-theoretic consistency.

“I think the farther you get from things that can be measured directly, the more room there is for fudge factors that will warp incorrect constants into acceptable results.”

I guess, but it’s not like biology or sociology or even history consider quantification to be unimportant. I don’t know of any subdiscipline of science that doesn’t fundamentally boil down to making measurements. Of course, there’s a lot less room in physics for divergence than there is in other fields; even the “central dogma” of biology is not anywhere near as restrictive as the laws of thermodynamics, and the landscape of “constants” (whatever those might be in biology) is much less firmly established. But from everything I know of biologists (many of my friends in grad school were biologists) they’re much more likely to push a result that’s out of the mainstream than one that adheres to “known values.” There are a lot of reasons for this: partly, there’s the fact that “replication studies” are not considered terribly respectable. Partly, the fact that the landscape is less well-established means that there’s more of an opportunity to stake out your own corner of that landscape. And of course a more sensational result, if true, leads to higher-profile publications. So all the incentives that I am aware of drive scientists in the direction not of conformity but of pushing the boundaries.

114

J Thomas 09.24.14 at 9:12 pm

“And the thing is that we have lots of results that we know, to a ridiculous level of precision, would look very different if those constants were different. It’s just not a good counterfactual at all, because you can just look at the equations and the differences will jump out at you. There’s not a good distinction between “reality” and the “model of reality” at the fundamental level. Like, what you’re trying to say is that the equations could be different and reality would still be the same, but that’s not possible if you accept that the equations are an accurate model of reality.”

Have you noticed how much of your reality consists of fudge factors? It might possibly turn out that with a different set of fundamental constants you could get a different set of fudge factors which mostly work out adequately. It might possibly work out better on average. Maybe not with exactly the same fundamental laws, but you just might find a way to make it work better.

You are sure that is impossible because you are impressed at how well the current system works. But a large number of very smart people have worked hard to fit the current system together. I say you can’t know how well they’d do with an alternative system unless they actually try it.

“Astrologers have gone to a lot of trouble to make a self-consistent system.”

“That’s because the governing meta-principle is not intra-theoretic consistency but rather inter-theoretic consistency.”

I believe you’re wrong about this, but I have not studied astrology to the depth that would be required to argue seriously about how it’s done, partly because I don’t believe it is worth the effort required. I could be wrong, but when I gamble my exploratory learning time, that isn’t where I choose to place my bets.

115

Jerry Vinokurov 09.25.14 at 2:45 pm

I’ll try one last time and then let this die.

“Have you noticed how much of your reality consists of fudge factors? It might possibly turn out that with a different set of fundamental constants you could get a different set of fudge factors which mostly work out adequately. It might possibly work out better on average. Maybe not with exactly the same fundamental laws, but you just might find a way to make it work better.”

What does “work out” mean? I’m trying to take your point seriously, but you have to understand that any other valid description of reality would have to reproduce what we already know to be true, and as such it (or a subset of it) would be isomorphic to the current formulation. There are fudge factors and then there are fundamental constants and they really do have a different status within the theory. What you’re describing is not possible so long as you’re committed to a mathematical formulation of your theory, because the outputs of your theory just are determined by the mathematical structure.

“You are sure that is impossible because you are impressed at how well the current system works. But a large number of very smart people have worked hard to fit the current system together. I say you can’t know how well they’d do with an alternative system unless they actually try it.”

You know what the capitalists say: there is no alternative. Seriously, any alternative would, in a fundamental sense, look just like what we have now, except rotated 90 degrees (or whatever). This has nothing to do with what I want and everything to do with the structure of the description. Any other description has the constraint that it must explain all the things that the current description explains, so in some sense it will be the current theory. It’s like if you took the QMT I talked about above and said that the fundamental constants were actually the Hall resistance and the flux quantum, and then we went about rewriting everything with the appropriate substitutions.
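A sketch of exactly that substitution, with rounded values (illustrative only):

    # Treat R_K = h/e**2 and Phi_0 = h/(2e) as the "fundamental" constants;
    # then e = 2*Phi_0/R_K and h = 4*Phi_0**2/R_K by direct substitution.
    R_K = 2.581281e4      # von Klitzing resistance, ohm (rounded)
    Phi_0 = 2.067834e-15  # magnetic flux quantum, Wb (rounded)

    e = 2 * Phi_0 / R_K
    h = 4 * Phi_0**2 / R_K
    print(f"e = {e:.6e} C,  h = {h:.6e} J*s")
    # Same physics, different choice of primitives: the two descriptions are
    # related by an exact, invertible change of variables.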

For this reason, and for the reason that the current system works splendidly, there’s no incentive to try anything else. All those smart people know this, which is why there isn’t a cottage industry of alternatives in physics like there is in, say, philosophy, where you aren’t bound by the constraint of having your theory stand the test of experiment.

116

J Thomas 09.25.14 at 3:19 pm

“You know what the capitalists say: there is no alternative. Seriously, any alternative would, in a fundamental sense, look just like what we have now, except rotated 90 degrees (or whatever). This has nothing to do with what I want and everything to do with the structure of the description. Any other description has the constraint that it must explain all the things that the current description explains, so in some sense it will be the current theory. It’s like if you took the QMT I talked about above and said that the fundamental constants were actually the Hall resistance and the flux quantum, and then we went about rewriting everything with the appropriate substitutions.”

Yes, and we could have said the exact same thing about Ptolemaic astronomy. Except it was far, far simpler.

“For this reason, and for the reason that the current system works splendidly, there’s no incentive to try anything else.”

Yes, and so there is no attempt to find out whether any alternatives would work better. The system has gotten too complicated to hold the whole thing in one mind, so it has to be working superbly. It seems to me there was an attempt once: Jaynes and his small group of neoclassical physics guys. They intentionally looked for a variant of their theory that made an experimentally different prediction, one which had not yet been tested. Then it was tested, and the result came out somewhat closer to the standard predictions, and it got widely reported that Jaynes’s theory was wrong and the standard model was right. Jaynes’s grad students failed to get jobs where they could continue to work along those lines, so the whole thing was abandoned.

What you guys are doing is not likely to lead to much progress. Somehow physics has been dominated by people who want to believe that they are right. It isn’t just you; I run into this pretty consistently.

Scientists make the most progress when they have at least two viable hypotheses, and can focus on ways to tell them apart. When there is only one they tend to stagnate.

Maybe physicists are doing that secretly. But in public they tend to say there is no alternative. That tells me they will accomplish very little until they get data which contradicts their beliefs, and I see them ferociously attempt to explain away any data that appears to do that.

The one big public exception is string theory, which I haven’t learned in any detail. From where I stand it looks like a mathematical framework that holds enough fudge factors to explain anything whatsoever. There’s some attempt to find a particular set of fudge factors that would explain our own physics, but there’s also a big attempt to assume that the mathematical structure reflects a metaphysics, to catalog potential universes, etc.

From my perspective it all looks utterly bleak. I’m glad I didn’t go that route when I was picking careers.

117

geo 09.25.14 at 6:24 pm

Jerry @115: “philosophy, where you aren’t bound by the constraint of having your theory stand the test of experiment”

Yes, but does physics have trolley problems?

118

Jerry Vinokurov 09.25.14 at 7:42 pm

“What you guys are doing is not likely to lead to much progress. Somehow physics has been dominated by people who want to believe that they are right. It isn’t just you; I run into this pretty consistently.”

And here we bottom out at the fact that you really don’t know what you’re talking about.

“Yes, but does physics have trolley problems?”

And how! Even worse, the trolleys are usually relativistic, so factor that into your ethical calculations.

119

J Thomas 09.25.14 at 8:27 pm

#118 JV

“What you guys are doing is not likely to lead to much progress. Somehow physics has been dominated by people who want to believe that they are right. It isn’t just you; I run into this pretty consistently.”

“And here we bottom out at the fact that you really don’t know what you’re talking about.”

You might be delving too deep into the roots of the trees to see the forest.

You gave me the impression that you aren’t ready to have an informed opinion on the topic when you argued that tight error bounds imply the systematic errors have been fixed. I had mostly failed to get Joshua to understand this; he didn’t respond to hints. Finally you sort of started using the argument after the second time I presented it, but you kept using the wrong argument too.

It really helps to play Eleusis.
https://en.wikipedia.org/wiki/Eleusis_%28card_game%29

Also it might possibly be useful to play Black Box.
http://www.bibeault.org/blackbox/

120

Jerry Vinokurov 09.25.14 at 9:11 pm

Your confusion stems from the fact that educated people won’t accept your fantastical ideas of how things must be in their disciplines, leading you to think they’re narrow deluded specialists, when actually they’re people who understand how the system works. You just proffer these weird examples as though they’re supposed to prove something and ignore all the stuff that people actually do. So, you’re not interested in facts, and I’m uninterested in your unfounded speculations on the nature of physical theory and scientific practice, so I suspect we have nothing left to say to each other.

121

J Thomas 09.25.14 at 9:47 pm

“I suspect we have nothing left to say to each other.”

Probably. But I hope you’ll try out those games. The second is online and is kind of fun to try for a while. Eleusis is a great game that can teach a whole lot about scientific method. Scientists at work don’t get to actually practice scientific method all that much, because there’s so much else that has to be done, but in the game there just isn’t much else to do.

122

Greg Hunter 09.26.14 at 2:47 am

The short of it is simple: scientists, whether professors or grad students, would not accept or allow the use of their time or effort without a byline.

Lawyers, and those who serve them, have no such expectations.
