Evidence versus hearsay: Learning to think like a scientist…

Welcome to Eclectic Moose’s 50th blog – yup, been doing this for almost a whole year. Hope you’re still enjoying it. Please let me know if there are any topics you’d like me to write about or incorporate into future blogs.


So, onwards. Today I thought I’d write about science – not the products of science, but the process, and why we could all do with a dash more of science in our lives. I’m also going to touch on my pet peeve: the sad tendency of humans to believe things (often fervently and disturbingly) without evidence, and our general inability to evaluate evidence…

Let’s start with science (or empiricism) itself. Most people, despite having studied some sort of science at school, have a stereotypical (and blurry) notion of science as something that’s done by scientists, themselves nerdish figures in white lab coats. But at its essence, science – and the scientific method – is a philosophy for observing evidence and testing theories based on the notion of falsification. Typically, the scientific process includes induction (gathering evidence through observation), deduction (attempting to explain the observations), theorising (developing specific theories about why the thing you’re observing occurred), hypothesising (stating your theory as a specific, testable problem), and experimentation (determining whether a hypothesis holds up or is false).

Put simply, you might observe that, as summer approaches, the sun rises earlier each morning and sets later at night (and vice versa in autumn) (induction), and deduce that at some times of the year the Earth gets more exposure to the Sun. Assuming you already know the Earth revolves around the Sun, you might theorise that the seasons are caused by a tilt in the Earth’s axis. You’d state this theory as a hypothesis (i.e., the Earth’s seasons, and therefore the changing amount of sunlight, are the result of a tilt in the Earth’s axis), and then design an experiment to test it. For example, place a stick in the ground and measure its shadow at noon at different times of the year – if the length of the noon shadow changes across the seasons, the Sun’s height in the sky is shifting, which is exactly what a tilted axis predicts, and the angle of that tilt can be estimated from the longest and shortest noon shadows with basic trigonometry.
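If you fancy seeing that last step worked through, here’s a minimal sketch in Python (the language is my choice, not anything the experiment requires). The stick height, shadow lengths, and implied latitude are invented numbers for illustration – the point is simply how two noon shadow measurements, one at each solstice, turn into an estimate of the axial tilt.

```python
import math

def solar_altitude(stick_height_m, shadow_length_m):
    """Angle of the sun above the horizon, worked out from the stick's shadow."""
    return math.degrees(math.atan2(stick_height_m, shadow_length_m))

# Hypothetical noon measurements for a 1-metre stick at roughly 50 degrees north.
stick = 1.0
summer_shadow = 0.50   # short shadow: sun high in the sky
winter_shadow = 3.35   # long shadow: sun low in the sky

summer_alt = solar_altitude(stick, summer_shadow)
winter_alt = solar_altitude(stick, winter_shadow)

# At noon, solar altitude is roughly 90 - latitude + declination, and the
# declination swings between +tilt (summer solstice) and -tilt (winter solstice).
# So half the difference between the two altitudes estimates the axial tilt.
tilt_estimate = (summer_alt - winter_alt) / 2
latitude_estimate = 90 - (summer_alt + winter_alt) / 2

print(f"Summer noon altitude: {summer_alt:.1f} degrees")
print(f"Winter noon altitude: {winter_alt:.1f} degrees")
print(f"Estimated axial tilt: {tilt_estimate:.1f} degrees")
print(f"Estimated latitude:   {latitude_estimate:.1f} degrees")
```

With these made-up shadow lengths the estimate comes out at about 23 degrees, which is pleasingly close to the real tilt of roughly 23.4 degrees.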

In other words, science is about making observations and then attempting to explain those observations. The key is theorisation and falsification – stating a theory as a hypothesis allows you to design an experiment that could show it to be false. If it is falsified, you try again with more data collection and a refined or alternative theory. The main reason this method has done more for humanity than any other thought system is that it allows us to explain the world around us in an objective way that results in real understanding. Within this approach there is no room for flights of fancy. If a theory can’t be tested in a way that allows for falsification, it doesn’t fall within the scientific method. There is no room, therefore, for the supernatural in science. Explaining something by simply saying “God wanted it that way” just isn’t good enough in this methodology, because there is no empirical way of testing such a hypothesis. Note that this way of thinking doesn’t preclude the possibility that “God wanted it that way”, just that it’s untestable and therefore outside the realm of science. Also note that there is no dogma in science. If new evidence comes along that refutes a previous theory or finding, then that theory or finding is dropped in favour of the one the evidence supports. Simple as that.
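The asymmetry at the heart of falsification – one counterexample refutes a hypothesis, while any number of confirmations never proves it – can be shown with a toy sketch. This is purely illustrative Python of my own devising; the “all swans are white” hypothesis and the observation lists are made up for the example.

```python
# A hypothesis stated as a testable claim: "all swans are white".
def hypothesis(swan_colour):
    return swan_colour == "white"

def test_hypothesis(observations):
    """Falsification is asymmetric: one counterexample refutes the claim,
    but passing every test so far never proves it true."""
    for colour in observations:
        if not hypothesis(colour):
            return f"Falsified by a {colour} swan."
    return "Not falsified yet (which is not the same as proven)."

print(test_hypothesis(["white", "white", "white"]))   # survives, for now
print(test_hypothesis(["white", "black", "white"]))   # one black swan is enough
```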

It’s this openness to any and all data that makes science so remarkable. The true scientist will consider any potential variable as an explanation for a phenomenon, but can only account for those that can be measured. Divine intervention is not measurable, and therefore not a scientific option. Nevertheless, if there is reasonable theoretical evidence for the existence of a currently non-measurable variable, there is good reason to build devices that might detect (and therefore measure) it. Physics (theoretical and applied) is a perfect example. Theoretical physicists attempt to explain the universe by theorising (using mathematics as the language with which to describe their ideas) about its workings. Applied physicists attempt to prove or disprove (falsify) these ideas through the design of increasingly sophisticated experiments (culminating in astounding devices, like the Large Hadron Collider). A similar relationship exists between psychology and neuroscience.

Left to their own devices, humans don’t think this empirically. This is a problem. We evolved to make inferences from small amounts of data, a process that worked pretty well when pretty much everything was trying to eat us. Something moves in the corner of your vision? It’s potentially dangerous – run away. This tendency to believe things with fervour, based on little or no evidence, is the basis of most religions, conspiracy theories, and internet memes (for a much more in-depth discussion of our tendency to believe crap, read here and here). It’s not our fault we’re so vulnerable to infection by these ideas; it’s mostly a neurological shortcut shaped by survival-driven evolution. Nevertheless, it gets us in trouble.

It goes a bit further too. Instinctively, virtually all humans are statistically illiterate; we overestimate the odds of success without understanding the variables at play. Again, this makes a lot of sense. Our systems for modelling the world evolved for a much simpler environment (read here for a more detailed discussion). Daniel Kahneman described this bias as What You See Is All There Is (WYSIATI): when evaluating the world and making decisions, our brains simply don’t take into account information that isn’t immediately available. As far as the brain is concerned, if you can’t see it (or if you haven’t had experience of it), it doesn’t exist. This means it’s very hard for us to consider variables that we don’t have direct experience with.
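A classic demonstration of this statistical blindness is base-rate neglect. Suppose (and these numbers are invented purely for illustration) a condition affects 1% of people, and a test catches 99% of real cases but also flags 5% of healthy people. Intuition says a positive result means you’re almost certainly ill; Bayes’ rule, sketched below, says it’s closer to one in six.

```python
# Hypothetical numbers, chosen for illustration only.
base_rate = 0.01        # 1% of people actually have the condition
sensitivity = 0.99      # the test flags 99% of true cases
false_positive = 0.05   # the test also flags 5% of healthy people

# Bayes' rule: P(ill | positive) = P(positive | ill) * P(ill) / P(positive)
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_ill_given_positive = sensitivity * base_rate / p_positive

print(f"Chance you're actually ill after a positive test: "
      f"{p_ill_given_positive:.1%}")   # ~16.7%, not the ~99% intuition suggests
```

The missing base rate (99 out of 100 people tested don’t have the condition at all) is exactly the kind of information WYSIATI makes us ignore.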

So add this all up and we get humans who are not natural scientists. We make assumptions, believe non-falsifiable ‘facts’, come up with erroneous conclusions based on very little information, weight and bias our information based on its source, and assume we understand complex systems when we don’t have a clue (read here). These errors in thinking get us in a lot of trouble. We get angry without justification and take actions that are unsupportable. We get confused and upset by trivial things. We leap to unmerited conclusions. We trust our limited and inaccurate systems over objective processes. We make a lot of really stupid mistakes, but don’t learn from them afterwards. We don’t look for evidence to support our beliefs and, instead, accept the most outrageous bunches of crap as fact, based on mere hearsay.

What are the benefits of developing a more scientific worldview? Well, for a start, we can learn to be more discriminating in our thinking. Instead of accepting things on faith, think about where information comes from and how reliable it is (and remember, lots of people believing something doesn’t make it reliable!). Try evaluating the evidence for and against something, but make sure that the evidence is objective and that the proposed explanations are potentially falsifiable. Try to avoid supernatural explanations when there are objective alternatives. Don’t confuse anecdote with actual data, and be careful about making assumptions regarding cause and effect.

A short aside. To establish cause you need three conditions: the causal agent (A) has to come before the effect (B), A must reliably covary with B, and you need to be sure that the effect (B) isn’t the result of something else (e.g., C, D, or E). In science it’s very hard to talk about cause and effect, because it’s difficult to be sure that A causes B without understanding everything about the system – other things (extraneous variables) could influence the outcome. Humans, however, love to claim that A caused B just by citing a relationship between A and B. Correlation DOES NOT equal causation!

A great example of this causation confusion is homeopathy. Many people take a homeopathic remedy and then get better. The assumption is that it must be the homeopathy that led to the recovery, but this is potentially erroneous. Unless you can actively demonstrate the causal effectiveness of the medication (perhaps with lab tests that show the effect of the homeopathic preparation on, say, a virus – needless to say, this hasn’t yet been demonstrated for any homeopathic remedy), you need to consider other causal options. One explanation is the ‘inverted U effect’ – many diseases have a natural life-span, peaking at a specific point, after which natural recovery occurs (think of the common cold: you feel crap for three days, and then start to improve). Most people don’t try homeopathic remedies until they’ve tried lots of other things, usually around the time when natural recovery has started or is about to start. But because humans infer cause from two related events (i.e., I took a homeopathic remedy and started to feel better, therefore it must work), a causal misattribution error is made.
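Here’s a tiny, entirely made-up sketch of that misattribution. The ‘illness curve’ below is invented: severity rises to a peak and then falls away on its own. Take a completely inert remedy just after the peak (which is when most people finally reach for one) and the before/after comparison makes it look like the remedy worked.

```python
# Toy illness curve: severity rises to a peak, then recovers naturally.
# Day:        0    1    2    3    4    5    6    7
severity = [0.2, 0.5, 0.8, 1.0, 0.9, 0.6, 0.3, 0.0]

remedy_day = 4          # most people try the remedy only after the peak,
                        # once everything else has "failed"
before = severity[remedy_day]
after = severity[remedy_day + 2]

print(f"Severity when remedy taken: {before}")
print(f"Severity two days later:    {after}")
print("Conclusion people draw: 'the remedy worked' - even though the "
      "curve was heading down anyway.")
```

The only way to break this illusion is a proper comparison: give half the patients the remedy and half nothing (or a placebo), and see whether their recovery curves actually differ.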

So next time you choose to believe that homeopathy works, or that vaccinations cause autism, or that there’s a shadowy world government, or that big Pharma is plotting to kill us all, take a step back and think about it. Understand that your brain evolved to take action in a simpler world, and that it’s hardwired to trust its own conclusions (even those based on really faulty evidence). Try to take a more dispassionate stance, look at the evidence objectively, discuss it with other people (preferably those who don’t share your worldview), and reevaluate your stance. Even better, try doing this next time you get angry because of an implied insult or slight, or when you find yourself acting in a way you might regret later. Don’t trust your brain (it’s wrong most of the time – read here). Learn to observe and make useful decisions based on objective observations.

A parting note: scientific thinking has helped us pull ourselves out of the mud, understand ourselves, develop remarkable technologies, and set ourselves on an amazing pathway. Using a decidedly nonscientific approach, many people like to blame science and scientists for today’s problems. Let me reiterate: scientific methodology is not about an agenda, it is a way of evaluating the world. It’s humans and their venal, petty, irrational, greedy processes that have used these methodologies for ‘evil’. Take nuclear power, a massively more effective way of generating electricity than pretty much anything else. It’s humans that turned that idea into a way of wiping out the species, and it’s human stupidity that resulted in the few nuclear accidents. Blaming science is a bit like blaming the sun for being hot or water for being wet. Science is simply a way of observing the world; it’s what we do with it that counts.

I’ll leave you with my favourite quote of recent times:

“What do we want?”

“Evidence-based change”

“When do we want it?”

“After peer-review”
