Philosophy and Science

This is a version of a talk I gave at SkeptiCamp in Brisbane on July 19, 2014. It was reproduced in the September 2014 edition of The Skeptic.

Some prominent scientists such as Stephen Hawking, Lawrence Krauss, and Neil deGrasse Tyson have recently declared that philosophy is dead and provides nothing that scientists need to know or worry about.

One problem is that people misunderstand and misquote philosophers. I used the expression "philosophy is dead", which might remind people of Nietzsche saying "God is dead". That was not a statement of atheism; it was a statement about morality – since people had tended to base moral decisions on the teachings of religions, would it be a problem to establish a moral framework in the absence of directions from some assumed deity? (The old "Atheists have no moral compass" and "Good without God" arguments.) Karl Popper didn't say that scientists spend their time trying to falsify what everyone else is doing; he was talking about the demarcation between science and pseudoscience. Thomas Kuhn didn't say that science progresses like a form of punctuated equilibrium, with revolutions occasionally overturning the consensus and totally new theories replacing old ones; he was saying that even well-accepted theories might not explain everything, and there can come a time when the unexplained anomalies reach a mass where a different explanation is required.

I'm going to stick with Nietzsche and look at the moral framework of science, that is, how questions in moral philosophy affect the way science is done.

I've often been told that science is amoral and disinterested – that is, that science can't take any moral position; it is what it is, and practitioners have to go where the science takes them, leaving it up to others to decide what is right or wrong. This might sound like an extreme position, but I've had it put to me – research goes where it goes, and that is all that matters to a scientist. Note that I'm not talking about the ethics of the way research is done, just what the results are used for afterwards.

A few years ago I read an interview with someone who had worked on the development of the hydrogen bomb. (Edward Teller held the view I referred to above – the science that produced the weapons could not be held responsible for the results of using the weapons. Robert Oppenheimer, on the other hand, wanted senior Japanese politicians and military figures to be invited to witness the first atomic bomb test to frighten them into surrender before it became necessary to destroy any cities.) I know that people working on military weapons must know what they are used for (someone invented napalm), but the point I'm making here is about the level of detachment. The person being interviewed was concerned because most of the early researchers had seen above-ground nuclear tests and rightly held the view that the use of the weapons should be avoided, but that first-hand experience was disappearing. Above-ground testing finished in the 1960s (except for France and China) and all testing by major powers ceased in 1996. India and Pakistan conducted tests up until 1998, and North Korea did a test in 2013 (but nobody has access to any data from that test), so the current crop of researchers had either worked only with data from underground testing, where the instrumentation was destroyed within milliseconds of the explosion, or, in the case of younger researchers, had worked only with computer simulations. It was all good physics and good fun to these people, because they were totally detached from any concept of the results of their work.

For an example of detachment outside science, consider Adolf Eichmann. Here was a man who was just doing his job. He had no particular hatred for Jews, he never saw a concentration camp, he never saw a train being loaded, he never killed anyone or asked anyone else to kill for him – he just did his job of arranging trains and timetables so that millions of people could be efficiently transported to their deaths. At his trial he continually maintained that he had done nothing wrong, and that the bad results of his excellent work were the fault of others and of no concern to him. It's why Hannah Arendt chose the word "banality" to describe him.

The British philosopher Philippa Foot came up with a thought experiment that many of you will be familiar with. The situation is a tram track with five workers on it. A tram is approaching at a speed that means it can't stop before reaching the workers, and they don't have time to get out of its way. Between the tram and the workers is a set of points that can be used to divert the tram to another track where there is only one worker. Should you throw the switch, knowing that you will inevitably kill one person but save the lives of five? When tested on this, most people say "Yes", but I suspect that, given the situation in real life, most of us would do nothing, paralysed by indecision.

An American philosopher, Judith Jarvis Thomson, came up with a variation on Foot's experiment. In this case the track is straight with no fork, but you are on a bridge between the tram and the workers. There is a very fat man on the bridge, and if you push him off in front of the tram his body will slow it enough for the workers to escape death. Again you have sacrificed one to save five. Most people say that they wouldn't do it, even when it is suggested that they just pull a lever to drop the fat man through a trapdoor. It's now too personal. (Thomson's paper was published in 1976. You will be pleased to know that it hadn't been written before 1945, when the name "Fat Man" was chosen for the atomic bomb dropped on Nagasaki. That would not have been banal evil; it would have been cynicism of almost unimaginable dimensions.)

Here's a third hypothetical, modified slightly from another scenario by Thomson.

There are five patients waiting for organ transplants in the hospital. None are expected to survive beyond two weeks unless donors can be found. A fit, relatively young man is brought into the ED with severe head injuries after an accident. He is otherwise in perfect health, but he is put into an induced coma and there is no way of knowing when or if he will recover. He is also a tissue match for all the people waiting for organs. Should he be allowed to die so that his heart, lungs, kidneys, liver, and pancreas can be used to save five lives? Almost nobody answers "Yes" to this.

Note that in none of these thought experiments are there any details of the people involved. Sometimes you might see these questions with details such as choosing between relatives and strangers, or good people and bad people, or people tied up who can't escape. I have deliberately left all that out. It's a simple question – kill (or allow to die) a small number to save a large number, or not?

And now the hypothetical situation I want you to put yourself in.

Let's imagine that you are working in medical research and you come up with cures for two forms of cancer. Both show 90% cure rates in early investigations even when all other treatments have failed, neither has any unacceptable side effects, each will require about a billion dollars to bring to market (research, clinical trials, promotion, manufacturing and distribution setup, …), and each will have five years of patent protection after release to the market.

One treats a type of cancer (A) for which the only cure is massive surgery that leaves the patient crippled in a wheelchair, attached to colostomy bags, on a restricted diet, and requiring constant attention. Life is generally extended by no more than two years. Without treatment the median time between diagnosis and death is ten months, the mean time is thirteen months, and the death is very painful, with the patient being incapacitated for the last few months of life.

The other one (B) kills relatively slowly, can be detected early, and can be successfully treated by a variety of means, including radiotherapy and various levels of surgery, but even with the most drastic surgery patients can live an almost normal life afterwards. About ten percent of the people with this cancer die from it.

Any split of the billion dollars would result in neither getting enough to do the job properly. You have to decide which one to pursue and which to abandon.

Disease                                Cancer A      Cancer B
Cases per year                         1,000         1,000,000
Deaths per year                        1,000         100,000
Lives saved                            900           90,000
Price to each patient for              $200,000      $200 if given to all patients;
5-year cost recovery                                 $2,000 if only given to recalcitrant cases
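A quick way to check the pricing line in the table is to spread the billion-dollar development cost over every patient treated during the five years of patent protection. The short Python sketch below is my own illustration rather than part of the talk, and it assumes that all eligible patients (or, for the second Cancer B figure, only the roughly 100,000 recalcitrant cases a year) are treated in each of the five years.

    # Rough check of the per-patient price needed to recover the
    # $1,000,000,000 development cost over the 5-year patent window.
    # Assumption (mine, not stated in the talk): every eligible patient
    # is treated in each of the five years.

    DEVELOPMENT_COST = 1_000_000_000  # dollars
    PATENT_YEARS = 5

    def price_per_patient(patients_per_year):
        """Price each patient must pay for full cost recovery."""
        return DEVELOPMENT_COST / (patients_per_year * PATENT_YEARS)

    print(price_per_patient(1_000))      # Cancer A: 200000.0
    print(price_per_patient(1_000_000))  # Cancer B, all patients: 200.0
    print(price_per_patient(100_000))    # Cancer B, recalcitrant cases only: 2000.0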

It's rather obvious what the decision would be, and remember that this is not a decision forced on the researchers by bean counters above. The decision to proceed with either option is to be made by the scientists at the laboratory level.

Three questions:

  1. Do you really think you can detach your work from the use to which it is put?
  2. How does this differ from throwing the fat man under the tram or sacrificing the accident victim?
  3. Your daughter is diagnosed with Cancer A. How do you explain your decision to her?

Think about it.

