Science, Evidence and Policy

Published December 8, 2009 by Sean

From http://www.houlihane.co.uk/blog/
Prompted by the BBC’s programme “The Moral Maze”, broadcast on 2nd Dec 2009 (iPlayer), I decided to write about a disconnect that I observed between science and scientists, and the perception given by the panel members.
Framed around issues such as the recent resignation of the government’s science advisor on drugs, and the leaking of the Climategate emails, the programme looked at the interaction between science and politics.

The panel consisted of Michael Buerk as chair, with Michael Portillo, Matthew Taylor, Claire Fox and Clifford Longley. Now, I assume these are moderately intelligent people who ought to have a fair understanding of science and scientific method. I wasn’t expecting a particularly effective investigation into the issues surrounding this week’s discussions in Copenhagen, but I was genuinely stunned by the perception of science which they portrayed.

The questions they were looking at concerned how much influence scientists can and should have on policy making. Is it good for policy and politics to be supported by science, or should the thinkers be free to follow their own personal ideologies? As a generalisation, a liberal might wish to blank out and ignore science because it prevents a politician from having a free choice – they would do this in the belief that their philosophy is correct, even where the facts are in disagreement with it.

Should a scientist take steps to ensure that what is reported in the news about their research is fair, honest and safe? And should they even go so far as to distort their presentation in order to ensure that the correct message is passed on to the public by the media?

It seems from this programme that the non-scientists are of the view that science is an absolute truth, which cannot be debated, and probably can’t even be fully comprehended by mere mortals. When they do interact with scientific opinion, they do not focus on the substance of a claim; they tend to focus on the semantics of the argument, and treat it as a literary exercise. Scientists seem to be regarded as extremely rare people who work in small closed communities. Science is regarded as above corruption, and capable of providing absolute guarantees of certainty. Despite all of this, there was some acceptance that scientists have been wrong in the past, and perhaps it is the job of the policy makers to play the role of the sceptic and keep the scientists honest. (Clearly, politicians are also above corruption.)

I had initially intended to address a specific issue of evidence, but I think some background is more relevant first.

I wasn’t able to easily determine the proportion of the population who have a science degree, and even that definition is maybe not very useful, but even if it is only 1 in 20 people, or 1 in 50, there are still a good number of us. We don’t all have the same education or background, but we probably tend to share the same Myers-Briggs personality types. We are inquisitive, expect answers to be accurate, and probably aren’t too interested in asking if you’ve had a good day. We can give a convincing answer to all manner of complicated questions, and you probably assume we’re reciting some fact that we learnt, rather than relying on a combination of knowledge and guesswork.

Despite the fact that scientists may train and specialise in very detailed fields, much of what they learn is to do with the process of science rather than the facts. Many scientists are perfectly able to read papers in a field with which they are not familiar, and still understand whether the work is consistent and well argued. It is not necessary to be an expert in a particular field to be able to judge whether a set of results is convincing, or likely to be just a coincidence.

The nature of science is not to prove things, it is to give us a better understanding. Generally it helps us to engineer things better, building bridges that are just strong enough, making laptops lighter, making artificial joints which last longer, etc. More fundamentally, it provides a framework for us to take observations, make assumptions which may explain those observations, develop a strategy to test and break the assumptions, and finally make predictions about new and different scenarios.

Scientists tend to like a consensus. Speaking out against the experts in a field is not a good way of making friends (and even scientists are aware of this). The idea that the continents (Africa and South America, for example) move over geological timescales is well accepted today. We can even measure the movements from year to year now with millimetre precision. Just by looking at the coastlines, it seems quite plausible, and the distribution of different types of rock is also good evidence. Consider then that the theory of continental drift was proposed in 1912, but ridiculed for 53 years, and only finally accepted in 1965. This natural human behaviour suggests that reaching a decision by a simple approach such as a literature review may not give an honest picture of the state of science at any point in time. Rather, it is instructive to set the opposing viewpoints side by side and weigh the merits of the arguments rather than their numbers. The approach taken in a court of law, deciding based on who can argue best, is not likely to lead to an improved conclusion.

Even though many people seem to view the word of a scientist in the same way as they would have listened to an oracle, they should remember that scientists can make mistakes, and are not even above corruption. It seems that the mistakes are not remembered too well. The Piltdown Man, and more recently Hwang Woo-suk’s claim to have succeeded in creating human embryonic stem cells by cloning, are rapidly becoming distant memories even amongst people who were aware of the stories. Can anyone now believe that lobotomy was a common medical procedure in the 1940s?

A frequent problem with scientific research is its relevance to new scenarios. It is very easy to overlook the unintended consequences of making a change to an established system based on measurements made in a different environment. Assessment of the survivability and treatment of hypothermia victims, based on careful laboratory experiments, neglected the effect of removing a submerged body in a vertical harness (a sudden loss of blood pressure, usually fatal). Analysis of the health risks associated with taking illegal drugs assumes (when extrapolated to policy) that decriminalisation would have only a limited effect on behaviour, and conversely is unable to counter the concern that it would result (as was once claimed for pop music) in a widespread breakdown of society.

Very few people, even amongst those with a technical background, seem to have a good grasp of statistics. Bizarrely, as a society we seem to be comforted by statistics. The most generous explanation for the recent MMR vaccine scare would be that a small number of unusual cases occurred by chance, and that in the hope of finding more proof to support a genuine concern, the likelihood of the results being a fluke was not given sufficient consideration. How do we weigh the risks between something we think is likely, and something which is really serious? Is it safer to risk getting Swine Flu, or to risk having a reaction to the vaccine – and what about your chance of some other common need to be in hospital if 30% of the population became incapacitated?

When making statistical comparisons, it is important to check that the data is consistent. Is there a control group, and is the data really as complete as it seems? Much of the historical data on hypothermia fails to record body temperatures more than a few degrees below ‘normal’, because of the range of the medical thermometers in common use. Clearly there is not much laboratory data covering the precise point of survivability. A scientist might even worry that measuring a person’s core temperature in such an extreme state is quite complex, and subtle changes in technique over time could introduce an unwelcome spread in the data.

I had intended to make this post about the claims that extreme weather events have been increasing – but I’ll leave it with this link and two points: it’s easy to infer a trend when looking at noise, and our ability to observe, measure and categorise hurricanes has improved dramatically since we invented satellites, so using the historical data involves a degree of educated guesswork.
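The trend-from-noise point is easy to demonstrate for yourself. This is a small sketch, not tied to any real weather record: fit a straight line to a short run of pure random noise and you will almost never get a slope of exactly zero, and with a record this short the spurious slope can look quite convincing.

```python
import random

def fitted_slope(ys):
    """Ordinary least-squares slope of ys against 0, 1, ..., n-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

random.seed(1)
# "40 years" of pure noise -- no underlying trend at all --
# yet the fitted slope comes back non-zero every time.
series = [random.gauss(0, 1) for _ in range(40)]
print(round(fitted_slope(series), 4))
```

Run it a few times with different seeds and notice how often the slope looks like a “trend” in one direction or the other, despite the data containing none.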

Science is not definitive. It is useful in guiding decisions, but it is very dangerous to assume that scientific theories are always correct.

Filed under Science

Comments (2)


  1. George Crews says:

    Hi Sean,

    I read your recent comment on Roger Pielke Jr.’s Blog. I thought it perceptive and succinct. So I came over here. Nice post. I too have been wondering why the scientific method is so difficult and unnatural for most people.

    Everybody’s perspective is a little different. Reading your perspective has helped me better understand my own perspective. Thanks.

    My perspective is common. I think that people tend to confuse a sense of conviction with the perception of truth. Put another way, a belief that for the important and difficult issues, feelings are more trustworthy than facts. And I think that most people believe the ends justify the means, whereas for the scientific method – the means justify the ends.

    But I like your analogy of scientists publishing scientific theory and experimental data as the not fully comprehensible holy dogma of oracles. If the oracles represent your faith, they are revered. If not, or deemed a false (non-consensus) oracle, their writings can be dismissed out of hand. IMHO, an easy to understand and potentially very useful perspective.

    George

    Posted December 8, 2009 @ 4:34 pm (UK)
  2. Sean says:

    Thanks for the comment, George. Nice to know that people are reading this. I completely agree with you about conviction vs. truth. We have a feeling about a question, and seek to support our view with proof.

    Posted December 9, 2009 @ 7:29 pm (UK)
