What is Truth in Science?

December 20, 2010

by Jerry O’Driscoll

In the “Annals of Science,” Jonah Lehrer asks “is there something wrong with the scientific method?” He poses the question in an article entitled “The Truth Wears Off” in the December 13, 2010 issue of The New Yorker (pp. 52-57). The problem is that across disciplines “claims that have been enshrined in textbooks are suddenly unprovable.”

It is a problem of being unable to reproduce results in subsequent experiments.  Even scientists who perform the original experiment cannot reproduce their own results.  The pattern is that, over time, results become less strong or even disappear. Again, it is occurring in many disciplines but is especially acute in medicine. For instance, the original tests showed great promise for second-generation antipsychotic drugs and they became the most profitable products for some drug companies. New tests show the second-generation drugs no more effective than first-generation antipsychotics in use since the 1950s. In some cases, the new drugs perform worse.

The same thing is happening with cardiac stents, vitamin E therapy, and antidepressants. The decline in the efficacy of antidepressants is especially dramatic.

There are similar case studies detailed in psychology and zoology. There is a widespread problem of the non-reproducibility of experimental results. The problem is well known, but scientists understandably don’t want to talk about it publicly. One scientist was advised not to attempt to reproduce his own results, as he would only be disappointed. Results long since called into question remain as truths in textbooks. Medical treatments undermined by subsequent tests remain in widespread use.

The author discusses all the possible reasons for this phenomenon, from poorly structured experiments to non-randomness and professional bias. All are likely sources of experimental failure, but he enumerates attempts by serious scientists to overcome all such weaknesses, to no avail. For whatever reason, it is becoming increasingly difficult to reproduce experimental results.

The author is understated and balanced in his presentation. He offers no definitive explanation for what he reports. He ends on an almost Weberian note.

13 Responses to “What is Truth in Science?”

  1. Daniel Kuehn Says:

    Ya – that was an excellent article. They mention Ioannidis in the article. This is another piece about his thoughts on publication bias from a little while back, in case you missed it:

    http://www.economist.com/node/12376658?story_id=12376658

    I think publication bias probably explains a lot of this. Data is noisier than a lot of scientists probably like to admit (or alternatively, they are happy to admit it but that message doesn’t always get communicated to the public). But noise alone doesn’t explain the transition from significant to insignificant – a publication bias does help with that.

    It’s also important to emphasize that drifting towards insignificance doesn’t mean that the insignificance is the right answer. Insignificant results that overturn previous accepted truths are exciting and for that reason are just as likely to benefit from publication bias as earlier significant results were. In the beginning, finding an effect is the sexy, exciting thing that gets published. After a while, smacking down established “facts” is the sexy, exciting thing that gets published. Which is “right” in all likelihood varies on a case by case basis.
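
    A minimal simulation of that mechanism (a hedged sketch in Python/NumPy, with invented numbers, not from the article): if only significant initial results get published, the published effect sizes are inflated, and unbiased replications of those same findings drift back toward the smaller true effect.

        # Illustrative only: one small true effect, many noisy studies, a significance filter.
        import numpy as np

        rng = np.random.default_rng(0)

        true_effect = 0.2              # real effect, in standard-deviation units
        n_per_arm = 50                 # subjects per arm in each study
        n_studies = 5000               # candidate studies run across the field
        se = np.sqrt(2.0 / n_per_arm)  # standard error of a two-sample mean difference

        # Initial studies: noisy estimates of the same true effect.
        initial = rng.normal(true_effect, se, n_studies)

        # Publication filter: only studies clearing z > 1.96 appear in print.
        published = initial[initial / se > 1.96]

        # Replications of the published findings: same design, no filter.
        replications = rng.normal(true_effect, se, published.size)

        print(f"true effect:           {true_effect:.2f}")
        print(f"mean published effect: {published.mean():.2f}")     # inflated
        print(f"mean replication:      {replications.mean():.2f}")  # back near the truth

    With these made-up numbers the published estimates come out well above the true effect, and the replications land back near it, which looks exactly like an effect "wearing off."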

  2. Daniel Kuehn Says:

    I was not introduced to meta-analyses until recently (economists don’t really do them… in fact, I don’t know who heavily relies on them), and it seems to me more meta-analysis should be done. It can do a lot to help sort through these issues.
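
    For anyone curious what the mechanics look like, a bare-bones fixed-effect meta-analysis is just an inverse-variance weighted average of the study estimates. A sketch (the numbers are invented for illustration):

        # Hypothetical effect estimates and standard errors from four studies.
        import numpy as np

        effects = np.array([0.45, 0.30, 0.12, 0.05])
        ses     = np.array([0.20, 0.15, 0.10, 0.08])

        weights = 1.0 / ses**2                        # inverse-variance weights
        pooled = np.sum(weights * effects) / np.sum(weights)
        pooled_se = np.sqrt(1.0 / np.sum(weights))

        print(f"pooled effect: {pooled:.3f}")
        print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")

    Of course, if the inputs are already filtered by publication bias, the pooled estimate inherits that bias, which is why the funnel-plot and bias-correction side of meta-analysis matters here.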

  3. Roger Koppl Says:

    Thanks for alerting us to that article and the general issue, Jerry.

    A recent issue of The Atlantic (the penultimate one?) had a nice article on Ioannidis, who teaches us to think in terms of the ecology of testing. You gotta think at the systems level, not paper-by-paper; the quick calculation after the cites below gives the flavor. This sort of thing is a Very Big Deal IMHO.

    Here are two helpful cites for those who wish to pursue the point:

    Ioannidis, John P. A. 2005. “Why most published research findings are false,” PLoS Med 2(8): e124 (0696-0701). Available online at http://www.plosmedicine.org.

    Berger, Vance, J. Rosser Matthews, and Eric N. Grosch. 2007. “On improving research methodology in clinical trials,” Statistical Methods in Medical Research, pp. 1-12.
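
    The arithmetic behind the title of the Ioannidis (2005) paper is worth seeing once. A simplified sketch (the helper name and the numbers are mine, and it omits the paper's bias and multiple-teams terms) of the positive predictive value of a "significant" finding as a function of how plausible the tested hypotheses were to begin with:

        # Share of significant findings that are actually true, by prior plausibility.
        def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
            true_positives = power * prior
            false_positives = alpha * (1.0 - prior)
            return true_positives / (true_positives + false_positives)

        for prior in (0.5, 0.1, 0.01):
            print(f"P(hypothesis true) = {prior:>4}: PPV = {ppv(prior):.2f}")

    With well-powered tests of coin-flip hypotheses most positive findings are true; in an exploratory field where only one hypothesis in a hundred is true, most published positives are false even before any bias enters.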

  4. koen Says:

    some punk put the whole pdf of the article online http://crayz.org/science.pdf

  5. Current Says:

    With medicine I wonder if the problem is that background assumptions are being made that aren’t valid. For example, the assumption that drug pathways aren’t much affected by diet or lifestyle. Also, there is the possibility that some diseases are caused by infectious agents that haven’t been discovered yet, as was the case for stomach ulcers.

  6. Current Says:

    For example, take a look at this:
    http://www.statistics.gov.uk/cci/nugget.asp?id=722

    In Britain there was a huge group of children born in the 70s and 80s who had asthma; I was one of them. When I was younger I had asthma, eczema and hay-fever. As I grew older the asthma and eczema went away. The same thing happened to many in my age group.

    As you can see from the article, the prevalence of all these diseases changed radically in a short space of time. Why that happened is still a subject of research. There must be a cause of some sort, and whatever it is may not produce exactly the same sort of asthma that other sufferers have.

    In my view this casts into doubt the idea that two research studies spaced apart in time are really testing exactly the same disease. And that casts into doubt whether conclusions about drug effectiveness are comparable.

  7. David Hoopes Says:

    1) I think this phenomenon will decline over time.
    2) The possible publication biases seem quite likely. It’s very difficult to publish results that differ from your original (or the referees’ and editor’s) expectations.

    I have been gently prodded in a paper or two to better fit my (our – I don't know if my co-author wants a mention here) a priori theory to the results.

  8. Current Says:

    I think a big problem is understanding what conventional or orthodox opinion actually is.

    When I began my career as a radio frequency electronic engineer I worked for a small start-up company that had been spun out of a University. The company developed unorthodox antenna technology.

    The professor I worked for there wrote papers explaining his technology's benefits over conventional approaches. Later I moved to other companies and learned the conventional approaches myself. Now when asked to design antennas I use conventional designs, which I consider best.

    I’ve encountered many proponents of alternative types of antenna and alternative electromagnetic theories since. The problem with them is almost always the same: they don’t adequately understand conventional ideas. They understand their own proposed improvements and their own revolutionary ideas, but they only have a hazy understanding of the orthodoxy. So, when they write papers comparing their own ideas with conventional ones, theirs come out ahead because of these mistakes.

    It happens in radio science and engineering. Those scientists who think they’ve disproved Maxwell’s equations often don’t understand Maxwell’s equations. Those engineers who think they can build better antennas through new design types often don’t understand old design types.

    The same thing happens in Economics. Think about Keynes’ misrepresentation of Neo-Classical economics. Then Hicks wrote his paper answering Keynes and “The Classicals”. As Mark Blaug said: which Classicals? Every generation has misunderstood the orthodoxy of the previous generations, and then sometimes later a historian of economic thought rescues damned figures by pointing to what they actually wrote.

    In some ways this is inevitable, if you do research into something new then you have to concentrate on that. You can’t possibly understand conventional alternatives as well as those working with them all the time. Neither can journal referees and editors. Those folks have to take into account that upholders of convention have self-interested motivations for suppressing new ideas as well as better knowledge of their own approach.

  9. Troy Camplin Says:

    I’m not at all surprised about this. Biology is not physics. The brain’s neural network is even further away in complexity. The more complex the system/process, the less likely it is to have reproducible results (or to be predictable, which are two sides of the same coin). There’s nothing wrong with science, unless scientists unreasonably expect brains to respond like billiard balls. In the case of biology, we have many complex adaptive systems adapting to different and ever-changing environments. One can make statistical observations under such conditions — at best — but little more. In testing a drug, you might happen to test it on a group of people genetically predisposed to reacting well (or poorly) to it; or there may be environmental factors at work. Or both. With psychiatric drugs, it’s even worse, because we are talking about a system even more complex than are biological systems. Further, the tests cannot be truly random, because of geography, the nature of the people who show up (being the kind of people who would show up), etc. All of the variables one needs to take into consideration cannot be taken into consideration. The systems are too complex. Thus, medicine, and especially psychiatric medicine, cannot be expected to have the same level of experimental confirmation as physics or even chemistry.

    And consider: the economy is a system made up of interacting embodied neural systems which are exponentially more complex than the living systems they are made up of, which are exponentially more complex than the chemical systems they are made up of, which are exponentially more complex than the atomic systems they are made up of.

    The math we have breaks down at the biological level, for crying out loud.

  10. Bob Layson Says:

    A market system is a great simplifier. People with unique brains are drawn to converge on similar or practically identical methods and product types. People buy either less or more as the prices and availability of substitutes change. Production and exchange never cease even as people age and die and capital is replaced. An economy is like an orang-utan in that it never lets go of a means of support until it has hold of another.

    Likewise languages and locations make for markets being held in the same place for hundreds of years and similar things asked for in the same old way. Archeologists often say that the most durable manmade thing is an intangible boundary.

    To use another analogy, an economy may be likened to a ship in an open seaway. The wind, waves and tide play upon it in ways incalculable in total effect, and yet the helmsman can more or less keep to a heading and the navigator can establish the ship’s position, course and speed.

    Many influences play upon a ship yet it can but roll, pitch or corkscrew as a result. So it is with market participants deciding what production node to sell their labour or assets to or what products to purchase with the income arising from their productive integration – this or that, here or there.

    Prediction regarding types and kinds is easy and certain in an extended and long established market order: ‘In London on a Saturday a man with sufficient money can get fed’. Particular expectations may be dashed: ‘There’s a place I went to in Fleet Street last year that does a very fine steak and ale pie’.

  11. Troy Camplin Says:

    If the economy were simple, it could be predicted and controlled.

    If the economy were complicated, no patterns or coordination could occur.

    If the economy is complex, patterns and coordination occur, yet the system cannot be either predicted (except locally — in time and space) or controlled.

    So I wouldn’t say that the free market is a simplifier; it is rather a pattern-creator, and because of that, we can coordinate our actions.

  12. Bob Layson Says:

    The simplicity of an extended and industrialised market order resides not in the graspability of its multipart specialised production but in the way it can be functionally understood: raw materials, worked-up goods, transport, places to buy consumer goods and so on. A time traveller would recognise commerce even if many of the things sold and their means of manufacture would baffle him.

    If the task of an economist is to give advice to the policy czars steering the economy and pulling on the levers of policy then the task is an impossible one but, fortunately, quite unnecessary.

    The task of economists, and they are not a market phenomenon – save in the book market and in non-state teaching – is to justify free production and exchange. This requires government not to be well advised but simply absent, as neither sound money nor the institutions of private property and the law require either the state or career statesmen. The economist should speak not to the leaders but to the people groaning under the burden of ‘enabling government’.

  13. Andrew Goddard Says:

    So strange. I just came across this book about how peer review and the scientific method have been manipulated by our corporatist masters to produce deadly health policy:

    Title: Ignore the Awkward
    Author: Uffe Ravnskov, PhD

