In my posts so far, I’ve spent the majority of my time talking about the ways in which psychology can benefit us. Mostly, I’ve focused on mindfulness, and on understanding the psychoevolutionary basis for human behaviour so that we can choose our actions. As part of this, I’ve pointed out how humans are often not in control of their actions, even though they tend to think that they are. For instance, most of us believe that we have a stable self that is consistent over time, and are pretty sure that we can predict how we’ll act in the future.
But can we really be so sure of how we’ll behave, say, under extreme pressure? Many of us would like to believe that, in a crisis, we’d be able to stay calm, lead others, behave heroically or, most importantly, maintain our beliefs (in, for example, nonviolence). Will you still be ‘you’ come the zombie apocalypse? What about your moral code? Will you be able to stick up for yourself or resist the orders of an authority figure? I’m not so sure you will, and I’ll tell you why.
The fact is that very few of us can predict how we’ll act under extreme pressure. This is largely because of the precepts I’ve already talked about (see here): when we’re under pressure, most of us experience a fight-or-flight reaction that evolved to help us survive potentially dangerous encounters and that, among other things, shuts down a lot of higher functioning when activated (higher brain activity is physiologically expensive to run in terms of blood sugar, so the body conserves it). This is a great system for surviving a mastodon attack, but in a modern world, where we actually need our higher functioning when we’re under stress (and where we have easy access to sugar), it’s a bit crap. In other words, when we experience high levels of stress, or are in real or imagined danger, we don’t think clearly or act in expected ways.
Some of you might have done first-year psychology at university. If you did, you’ll recall two classic psychology experiments from the 1960s and ’70s. The first was a highly controversial piece of research on obedience by Stanley Milgram. In this study, volunteers were told that they would be participating in a learning study. They were seated at a panel with a microphone and speaker, and a dial. Participants were told that the ‘learner’ (actually an actor) was in another room and that they would be required to ask him or her pre-scripted questions using the microphone (hearing the responses over the speaker). If the ‘learner’ gave the wrong answer, the volunteer was to administer an electric shock by setting the voltage on the dial, which was labelled from mild all the way through to extremely painful and fatal. The experimenter (who was wearing a white lab coat) told volunteers to continue giving shocks of increasing intensity to the ‘learner’, even though they could hear that the learner was in pain (including simulated screams). If the volunteer refused to continue, the experimenter responded with the following scripted prods:
- Please continue.
- The experiment requires that you continue.
- It is absolutely essential that you continue.
- You have no other choice, you must go on.
Here’s the scary bit: despite experiencing extreme stress, 65% of volunteers administered the ‘fatal’ shock, and even those who insisted that the experiment end didn’t ask to check on the health of the ‘learner’. Most of them insisted that they would never normally behave this way, yet they weren’t able to stand up to the ‘authority figure’. Milgram used his findings to attempt to explain the obedience of Nazi soldiers during World War II.
A second disturbing study, known as the Stanford Prison Experiment and run by Philip Zimbardo, reinforced the notion that people are easily manipulated into behaving in scary ways. In this study, a group of students volunteered for a prison simulation. A basement at the university was modified to look like a prison, with as much realism as possible, and participants were randomly assigned to the role of prisoner or guard. Originally planned to run for two weeks, the experiment was shut down after only six days because of the disturbing behaviour of the ‘prison guards’, who had begun (without prompting) to behave in sadistic ways toward the prisoners. As in the Milgram study, participants believed that they would act ‘morally’, but found themselves acting in highly unpredictable and unpleasant ways within a very short period of time.
And thus was social psychology born. Other experimenters have demonstrated that we’ll go along with a group even when we know that we’re right and they’re wrong (conformity, as in Asch’s famous line-judgement studies), that we’ll slack off in a larger group because we can get away with it (social loafing), that we’ll often behave out of character in a crowd (deindividuation), and that we can be easily manipulated into changing our opinion when certain conditions are suggested (such as authority, safety, or comfort). The scariest thing about this research is not just the demonstrated ease and rapidity of the change, but the rationalisation that goes along with it. Most people will comfortably rationalise their behaviour post hoc, convincing themselves that they actually chose the action (and refusing to acknowledge, or even believe, that they were manipulated).
It gets scarier. Our perceptual limitations mean that a lot of what goes on isn’t consciously available to us, especially when our attention is directed elsewhere. Two classic studies demonstrate this phenomenon quite starkly. In one, experimenters asked participants to watch a recording of people passing a basketball and to count the number of passes made by the players wearing white. During the video, a person in a gorilla suit walks directly across the screen (see here). While focused on the instructions (counting the passes), most people didn’t even see the gorilla and would swear blind that it wasn’t there. In a similar experiment, an experimenter asked random people for directions (see here); during the exchange, two other experimenters walked between them carrying a door, and the original experimenter was swapped for another person. Most of the time, the person giving directions didn’t notice the switch!
And it gets even scarier. It’s not just gross manipulation, inattention and post-hoc rationalisation that can modify our behaviour. It turns out that a phenomenon called priming* can be used to change people’s behaviour without any conscious awareness on their part. For instance, read the following sentence: “the house was old, it creaked and groaned and seemed to struggle on its foundations”. Chances are that if you’d stood up after reading it, you would have done so measurably slower: you’d been primed with ‘old age’. You can also be primed to change your voting preference based on the location of the polling booth. In fact, researchers have claimed all sorts of priming effects, suggesting that we can be manipulated subtly and simply without any knowledge on our part (and, of course, we tend to claim that any resulting behaviour was our decision all along).
It might not come as a shock to learn that many of these principles have been applied over and over again throughout history. This is hardly a conspiracy theory; it simply reflects the human desire to dominate and manipulate others to achieve our own ends. Every time you see a particularly ‘effective’ TV ad (one that actually makes you want to buy the product), you can be sure you’ve been manipulated using these principles. Likewise, when people act in violent ways based on their beliefs, there’s a pretty good chance they’ve been manipulated using a combination of social conditioning and memetic infection (see here and here).
In the future, it could get substantially scarier. There are already technologies that can be used to infer your mental state from a brain scan, and even to roughly reconstruct (on a screen) images of what you’re looking at (this is nascent technology, but it will improve). We also have ways of inducing movements or abnormal mental states, using transcranial magnetic stimulation (TMS) or (more scarily) ultrasound. It’s quite possible that these technologies will become more effective, selective, and widespread in the near future.
So, we’re all screwed, right? Well, yes and no. Yes, in that none of us is able to predict how we’ll act in difficult situations unless we get a chance to test ourselves (and most of us won’t, or wouldn’t want to, experience that sort of stress). Yes, in that a lot of what manipulates us happens at a level far removed from conscious attention, and we’re really good at pretending that our actions, even those highly ‘out of character’, were the result of our choices. But no, if you learn to pay attention to your behaviours. Disciplined, conscious attention to your actions allows you to test who’s in control. If you notice yourself acting (or feeling tempted to act) in a way that might not be ‘you’, try to figure out what’s going on. Do you really want to act this way, or does it just feel like you should?
* It’s worth noting that the validity of the priming construct has been questioned, especially because several of the classic findings have proved difficult to replicate.