Alarming News: I like Morgan Freeberg. A lot.
American Digest: And I like this from "The Blog That Nobody Reads", because it is -- mostly -- about me. What can I say? I'm on an ego trip today. It won't last.
Anti-Idiotarian Rottweiler: We were following a trackback and thinking "hmmm... this is a bloody excellent post!", and then we realized that it was just part III of, well, three...Damn. I wish I'd written those.
Anti-Idiotarian Rottweiler: ...I just remembered that I found a new blog a short while ago, House of Eratosthenes, that I really like. I like his common sense approach and his curiosity when it comes to why people believe what they believe rather than just what they believe.
Brutally Honest: Morgan Freeberg is brilliant.
Dr. Melissa Clouthier: Morgan Freeberg at House of Eratosthenes (pfft, that's a mouthful) honors big boned women in skimpy clothing. The picture there is priceless--keep scrolling down.
Exile in Portales: Via Gerard: Morgan Freeberg, a guy with a lot to say. And he speaks The Truth...and it's fascinating stuff. Worth a read, or three. Or six.
Just Muttering: Two nice pieces at House of Eratosthenes, one about a perhaps unintended effect of the Enron mess, and one on the Gore-y environ-movie.
Mein Blogovault: Make "the Blog that No One Reads" one of your daily reads.
The Virginian: I know this post will offend some people, but the author makes some good points.
Poetic Justice: Cletus! Ah gots a laiv one fer yew...
Looks like a book with many sensible points to make, so I put it in my Amazon cart. “Wrong: Why experts keep failing us–and how to know when not to trust them.” The title actually has a footnote by the word “experts,” which is expanded out to “Scientists, finance wizards, doctors, relationship gurus, celebrity CEOs…consultants, health officials and more.”
From all I’ve managed to read about it, no, the book doesn’t say to ignore experts.
There were quite a few points made by the author in the interview, back in 2010 when he wrote the book, that I thought hit the nail on the head.
…it’s not that we want to discard expertise — that would be reckless and dangerous. The key becomes, how do we learn to distinguish between expertise that’s more likely to be right and expertise that’s less likely to be right?
:
Bad advice tends to be simplistic. It tends to be definite, universal and certain. But, of course, that’s the advice we love to hear. The best advice tends to be less certain — those researchers who say, ‘I think maybe this is true in certain situations for some people.’ We should avoid the kind of advice that tends to resonate the most — it’s exciting, it’s a breakthrough, it’s going to solve your problems — and instead look at the advice that embraces complexity and uncertainty.
:
Some experts project tremendous confidence. They have marvelous credentials. They can be very charismatic — sometimes their voice just projects it. Some experts get very, very good at this stuff. And what do you know? It really sort of lulls us into accepting what they say. It can take a while to actually think about it and realize their advice makes no sense at all.
The interviewer points out that picking the good advice out from the bad can seem “like finding a needle in a haystack.” In responding, author Freedman sensibly blames not the advisors, but the advised:
It is a needle in a haystack. Part of the problem is, we’re kind of lazy about it. We would like to believe that experts have the answer for us. And what we pay the most attention to are the most recent, most exciting findings. Newspapers, magazines, TV and the Internet oblige us by constantly reporting the stuff. We face this sea of advice all the time. So where is that needle in the haystack? I think the best thing to do is to discount as much as possible the more recent findings and pay more attention to the findings that have been floating around for some years. With a little bit of work, I think most of us can figure out how to answer some of these basic questions about whether advice seems to be pointing in the right direction or whether it seems to be falling apart.
On the troubling subject of experts who discard data that doesn’t fit the conclusion they wanted, Freedman’s words are alarming:
That is a huge understatement [“some cases”] — it is almost routine. Now, let me point out that it’s not always nefarious. Scientists and experts have to do a certain amount of data sorting. Some data turns out to be garbage, some just isn’t useful, or it just doesn’t help you answer the question, so scientists always have to edit their data, and that’s O.K. The problem is, how can we make sure that when they’re editing the data, they’re not simply manipulating the data in the way that helps them end up with the data they want? Unfortunately, there really aren’t any safeguards in place against that. Scientists and other experts are human beings, they want to advance their careers, they have families to support, and what do you know, they tend to get the answers they chase. [emphasis mine]
Suppose we had a way to sound an alarm as the data were being chiseled down, from what was collected, to what would ultimately be used in the survey, experiment or test. How would that work? Obviously, “I’m throwing this out because it doesn’t support the finding I want” would close the circuit on the buzzer, but “We decided at the outset we’re going to begin by discarding the extremes and proceed with the balance” would probably not. What other, finer points of the definition of invalid data selection could we program? There’s really no way to formulate it in advance — the human judgment calls would have to win out.
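The contrast between the two selection rules can be made concrete with a toy sketch. This is purely my own illustration, not anything from Freedman; the function names and numbers are invented. The first rule is fixed before anyone sees which conclusion the data favors; the second is steered by the answer the researcher wants, which is exactly the behavior the buzzer would need to catch.

```python
# Rule declared at the outset: symmetrically trim the extremes.
# Because it is fixed in advance, it cannot be steered toward a conclusion.
def trim_extremes(data, k=1):
    """Drop the k smallest and k largest values."""
    s = sorted(data)
    return s[k:len(s) - k] if len(s) > 2 * k else []

# Outcome-driven rule: keep only the points near the answer we want.
# This is the kind of selection that should close the circuit on the buzzer.
def keep_supporting(data, desired_mean, tolerance=1.0):
    """Discard whatever lies more than `tolerance` from the desired result."""
    return [x for x in data if abs(x - desired_mean) <= tolerance]

data = [1, 2, 3, 4, 5, 100]
print(trim_extremes(data))                    # -> [2, 3, 4, 5]
print(keep_supporting(data, desired_mean=4))  # -> [3, 4, 5]
```

Note the two results can look equally innocent after the fact; the difference lives entirely in when the rule was chosen, which is why no formula inspecting only the surviving data can settle the question.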
We could set up some kind of peer review on the selection process, so that no one individual practitioner has the final say on what’s thrown out, and why. But that would merely replace individual biases with institutional ones, and I’m not convinced the bulk of the problem with bias exists at the individual level.
So with all the problems remaining after the installation of a system like that — or, with such a system not installed — we are left to evaluate it by outcome. The experts can evaluate it by outcome prior to publication, or the public can evaluate it by outcome afterward, examining the content of what’s said, the controversies associated and the overall history.
And there, tragedy strikes: The “lazy” tend to win out, rather consistently I notice, in superficial debates (and it is the superficial ones that really count) against the not-so-lazy. “I win, because I’m listening to the experts,” they say; and they say this because it works. But it would be more accurate for them to say “I win because I’m lazy. I do what I’m told, I don’t look for contradictions and ponder what they mean, I don’t ask questions.”
Related: An essay from Freedman. Also, a year later, another author’s take on Why Experts Get It Wrong.
Update: From following the links, I see a point being made that is important. Having it to do over again, I would have worked it into the above…better late than never…
According to the investigative journalist Dan Gardner in his 2010 book Future Babble (McClelland and Stewart) and the University of Pennsylvania psychologist Philip E. Tetlock in his 2005 scholarly masterpiece Expert Political Judgment (Princeton University Press, 2005), such cognitive biases are pervasive for both liberals and conservatives, optimists and pessimists, well educated or not, and well informed or not. After testing 284 experts in political science, economics, history, and journalism in a staggering 27,450 predictions about the future, Tetlock concluded that they did little better than “a dart-throwing chimpanzee.” There was one significant difference, however, and that was cognitive style: “fox” versus “hedgehog.”
Foxes know many things while hedgehogs know one big thing. Being deeply knowledgeable on one subject narrows one’s focus and increases confidence, but it also blurs dissenting views until they are no longer visible, thereby transforming data collection into bias confirmation and morphing self-deception into self-assurance. The world is a messy, complex, and contingent place with countless intervening variables and confounding factors, which foxes are comfortable with but hedgehogs are not. Low scorers in Tetlock’s study were “thinkers who ‘know one big thing,’ aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who ‘do not get it,’ and express considerable confidence that they are already pretty proficient forecasters.” By contrast, says Tetlock, high scorers were “thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ‘ad hocery’ that require sticking together diverse sources of information, and are rather diffident about their own forecasting prowess.”
This touches on something I’ve been writing, lately. The blog archives touch on this contrast here & there, and for the moment my time demands do not permit me to go a-searchin’. But I do have an off-line treatise of sorts that already represents some time investment putting these points all together…
In the same way it is much easier to destroy than to create, it is also much easier to control a maturing shape and definition by eliminating whatever doesn’t comport with it, than by adding in whatever does. It’s much like sharpening a pencil by removing the wood, or carving a block of marble into a statue of a horse by removing whatever doesn’t look like a horse. That is the efficient way for humans to achieve definitions in things, by way of removal rather than by augmentation. And so when very bright people make mistakes that non-bright people would not have made, you will very often see this: Information starts to be viewed, and treated, as a contaminant. People start to behave as if they know more, by avoiding learning things. One conclusion is to be preferred, and if any evidence arrives that creates a problem for it, the advocates for that conclusion will start to attack the evidence. This is the exact opposite of the way science is supposed to work, of course.
The interview is funny.
You can read Freedman as self-refuting, but actually he is advocating for using judgment, not abandoning expert opinion. Freedman said “…it’s not that we want to discard expertise — that would be reckless and dangerous.” His point is not to accept expert advice blindly. There are systemic errors in many fields, as well as biases making their way into research. Experts are more likely to be wrong on the fringes of knowledge, where consensus is less likely and biases have more room to operate.
Ultimately, we mustn’t forget The Relativity of Wrong.
- Zachriel | 08/09/2013 @ 05:34
http://chem.tufts.edu/answersinscience/relativityofwrong.htm
You can read Freedman as self-refuting
Or you can read that question as moronic. If I wrote a book called Why Phrenology is Bullshit, I’d need to quote a lot of phrenologists saying things that don’t pan out for my argument to be convincing.
But any excuse for a cut-and-paste homily, eh?
Sperg.
- Severian | 08/09/2013 @ 08:02
Severian: Or you can read that question as moronic.
It’s actually a good question in that Freedman himself relies upon experts. But Freedman’s point isn’t that experts are valueless, just that judgment needs to be exercised.
Severian: If I wrote a book called Why Phrenology is Bullshit, I’d need to quote a lot of phrenologists saying things that don’t pan out for my argument to be convincing.
Phrenology fails blind testing.
- Zachriel | 08/09/2013 @ 08:23
http://zachriel.blogspot.com/2005/09/forearmed-with-knowledge.html
Phrenology fails blind testing.
Uh huh. And if I were to write a convincing treatise on the subject, I’d need to quote phrenologists saying things that were subsequently disproved by blind testing.
This is the difference between “making a convincing argument” and “just cutting and pasting stuff.” One is an attempt to bring others around to one’s own point of view; the other is a cheap debate trick.
- Severian | 08/09/2013 @ 08:34
Severian: And if I were to write a convincing treatise on the subject, I’d need to quote phrenologists saying things that were subsequently disproved by blind testing.
Twain’s experiment is quite compelling, and fairly easy to replicate.
In any case, that’s not the experts being referred to in the question, as is clear from Freedman’s answer, but Freedman’s reliance on experts on the limits of expertise.
- Zachriel | 08/09/2013 @ 08:38
Twain’s experiment exposed one particular practitioner as, essentially, a quack. He revealed the guy wasn’t making measurements.
Today, the “American Phrenological Association” would protest — with some legitimacy — that Twain was, or you are, unfairly maligning a noble profession based on the malpractice of some rogue individual who wasn’t even in good standing.
- mkfreeberg | 08/09/2013 @ 08:44
mkfreeberg: Today, the “American Phrenological Association” would protest — with some legitimacy — that Twain was, or you are, unfairly maligning a noble profession based on the malpractice of some rogue individual who wasn’t even in good standing.
Actually, Fowler was the head of the Phrenological Institute in New York City and published ‘research’ in the Phrenological Journal. As we said, Twain’s experiment is easy to replicate. Phrenology isn’t a valid field. One indication is its lack of overlap with related fields of scholarship.
- Zachriel | 08/09/2013 @ 08:56
Jeez. Since spergs are over-literal, I’ll spell this out (apologies to non-OCD readers):
It’s an argument by analogy. There are two parts.
Part I.
The question “You say that many experts are wrong, yet you quote many experts in your book. Are these experts wrong too?” is a silly one because of the obvious-to-anyone-but-a-sperg assumptions built into it. It would be a nice little gotcha! if and only if Freedman maintained that ALL experts are always wrong. Then his book — citing experts to prove that all experts are always wrong — would fall victim to the Ishmael Effect and be self-refuting. He explicitly does not maintain this.
As you seem to grasp. And yet you write “It’s actually a good question in that Freedman himself relies upon experts.”
By writing “It’s actually a good question in that Freedman himself relies upon experts,” you proceed upon the same assumptions that make the interviewer’s question silly, and thereby contradict yourself. I — probably incorrectly — gave you the benefit of the doubt and assumed you did not do this intentionally, but were simply in a hurry to cut and paste something.
In order to point this out in a not-excruciating-to-non-spergs fashion, I used….
Part II
An analogy to the now-discredited “science” of phrenology. One can, of course, object to phrenology on all kinds of grounds, and run all kinds of rigorous NIH-approved experiments to test phrenology’s main ideas, and publish one’s findings in any academic journal that will have them.
or
one could quote a bunch of phrenology experts saying all kinds of wacky things, and say “see? Phrenology is crap.”
The second method is not only sufficient, but way easier, both on the writer and on the reader. A writer who wants to bring his reader around to his point of view — and not just cut-and-paste his own comments over and over and over again because he’s a sperg — would therefore find himself quoting a lot of expert phrenologists, in order to show that their claims don’t hold up.
This concludes “Argumentation for Aspies 101.”
- Severian | 08/09/2013 @ 09:15
Severian: The question “You say that many experts are wrong, yet you quote many experts in your book. Are these experts wrong too?” is a silly one …
It’s a clarifying question. One only has to read the blurb on his book to see the problem, the same problem of scholarly overstatement the book is meant to address! Unfortunately, Freedman doesn’t answer the question very well. However, later he makes his meaning clear when he says “…it’s not that we want to discard expertise — that would be reckless and dangerous.”
This relates to our previous discussions on appeals to authority, and when they are valid.
- Zachriel | 08/09/2013 @ 09:37
Severian: one could quote a bunch of phrenology experts saying all kinds of wacky things, and say “see? Phrenology is crap.”
It’s easy to make fun of jargon, but that doesn’t constitute a valid argument. It’s quite possible talk about charmed and strange particles has a scientific basis. Contrariwise, Twain used a scientific test that could be replicated and verified by other researchers.
- Zachriel | 08/09/2013 @ 09:41
Refreshing! Comes in many varieties! Try yours today!
- nightfly | 08/09/2013 @ 09:41
Nightfly,
heh. Nice!
I particularly love that we’re now supposed to “debate” the validity of appeal to authority again, based on nothing more than a silly one-off question in a Time Magazine interview. That the readers of Time — and, apparently, our favorite cephalopods — would require “clarification” that a guy whose book contains a conditional right there in the title isn’t claiming that all experts are always wrong says a lot about the intellectual temperature down there among the coral reefs.
Notice too the dishonest attempt to sneak in “jargon.” As if I had said that the main reason to quote phrenologists — the main reason phrenology is bullshit — is that they made up a bunch of funny words to describe stuff.
There’s a word — four words, actually — for this kind of deliberate, carefully crafted obtuseness.
(Hey! This cutting-and-pasting your own stuff thing is kinda fun!)
- Severian | 08/09/2013 @ 09:50
Severian: I particularly love that we’re now supposed to “debate” the validity of appeal to authority again
That topic was introduced in the original post to the thread, “…it’s not that we want to discard expertise — that would be reckless and dangerous. The key becomes, how do we learn to distinguish between expertise that’s more likely to be right and expertise that’s less likely to be right?”
- Zachriel | 08/09/2013 @ 09:54Uh huh. And all that needed to be said about it was said there, viz.:
- Severian | 08/09/2013 @ 13:35
Severian: Information starts to be viewed, and treated, as a contaminant. People start to behave as if they know more, by avoiding learning things. One conclusion is to be preferred, and if any evidence arrives that creates a problem for it, the advocates for that conclusion will start to attack the evidence.
Sure. People commonly become wed to their positions. That’s why objective methodologies, including cross-disciplinary checks, are so important.
- Zachriel | 08/09/2013 @ 13:59
Hmmmm…. a comment that is nothing more than agreeing with me agreeing with Morgan? It appears endgame has arrived here, too. So:
Last!!!
- Severian | 08/09/2013 @ 14:03
Ah…
The issue of “experts”. (I’ll re-toss “professional” in here too)
The problem, as I see it.
Genius reasonably presents “thesis A”.
Doing the math, there are vast numbers of folk who are incapable of even comprehending “thesis A” -- a lack of the specialized cliche, I suspect.
Therefore, “recent studies” show a VAST MAJORITY of folk “intellectually” defend “thesis B-sub i, para 2”.
(ie) Any “recent studies” by undergrad, or “professional” prols doing the heavy lifting for the esteemed, award-winning, presidential appointee (or most recently, “medal recipient”) that refuses to recognize pre-“fundamental change” observation of the human condition known as the Brothers Grimm, Aesop’s Fables, et cetera, et alia (maybe even Maurice Sendak and Theodor Geisel) is an “expert” that can easily be dismissed. One can’t conflate “colors with crayons only BETWEEN the lines (of their theory)” with “consistently recognizes patterns, and subsequent outliers”.
Ya’ like “obscure specialized lingo”? Let’s just call it;
The Wolf and the Grapes
The Dog in the Manger
The Fisherman’s Wife
The Blind Men and the Elephant
Even “experts” of ANY field that are (IMHO) actually worthy of consideration are subject to;
Belling the Cat
“The Science is settled!” Because shut up.
- CaptDMO | 08/09/2013 @ 18:10[…] support of failed plans & predictions; Looks like a book with many sensible points to make, so I put it in my Amazon cart; Fascinating research by Andrew Thomas and crew over at American Thinker; Paging Baron Bomburst of […]
- Steynian 486rd | Free Canuckistan! | 08/15/2013 @ 07:12