Wednesday 2 September 2015

We Need To Talk About P II

In a previous post I wrote about the problems with the way psychology tests whether its results are significant. One of my lecturers, Dr Jim Grange, was part of a team that published a paper in Science showing what happened when they tried to replicate 100 studies from three scientific journals. The results have major implications for psychology research.

Here is a link to The Guardian article about it.


Saturday 18 July 2015

A Little More Conversation



I wrote last time about the controversy regarding null hypothesis significance testing (NHST) in psychology. NHST belongs to quantitative research. There is, however, a whole other type of research in the social sciences: qualitative. Qualitative research doesn't use numbers, which is why a lot of students prefer it. That is, until they actually get to do it and realise it is much more difficult than they thought. Perhaps the most basic form of qualitative research is thematic analysis. This is where you, for example, interview someone and look for common themes in what they say. Strictly speaking it is more of a tool which other forms of qualitative analysis may use. There are many different forms and most of them are difficult to pronounce (Interpretative Phenomenological Analysis), but the one I want to talk about here is Conversation Analysis (CA).

To explain what CA is, it is easier to begin by describing Discourse Analysis (DA). DA is an umbrella term for a family of similar analyses but, in essence, DA looks at how words create and reflect meaning and reality. So, for instance, describing immigrants as 'flooding' into the country reflects a very different reality to simply saying they are 'arriving'. Words have power to shape discourse and this is especially true in the world of politics. CA, however, doesn't look at the meaning of words and what they create: it looks at words as actions and at what they do. It breaks speech down into parts and examines where those parts come in a conversation, why, what they 'do' and what happens if they are absent.
Most of us are not aware of it but we all follow normative rules when we take part in a conversation (Stivers, Mondada & Steensig, 2011). We don't have to follow them and we often don't, but when we break them the smooth progress of a conversation is disrupted and we have to work hard to get it back on track. What are these rules? There are too many to go into in a blog post. However, conversation analysts have identified something called adjacency pairs (Goodwin & Heritage, 1990): a first pair part (FPP) followed by a second pair part (SPP). This is jargon but easy enough to recognise: it's any opening that requires a closing. So, when I say hello (FPP) you say hello (SPP). If I say sorry (FPP), you say 'that's okay' (SPP). A phone ringing is an FPP which requires you to pick up (SPP); a sneeze is an FPP that requires a 'bless you' (SPP).

This is simple enough, except adjacency pairs don't necessarily come so closely together. Nor is the SPP always provided. Often what happens is that the FPP is given but a lot of work has to be done by both of you before the SPP is forthcoming. An example of this is where I tell you that I fell over. That would be the FPP. The SPP I want from you is likely to be sympathy. What will probably happen, though, is that if I tell you I fell over you'll want to know why, so instead of an SPP you'll begin a process of evaluation. Was I running? Did lots of people see? Did I hurt myself? Your SPP, if it ever comes, depends on these answers.


If an SPP is not given, the teller experiences something like distress. If I say sorry for being late and you don't respond, I won't leave it there. I am likely to escalate my story by explaining my alarm didn't go off, or the train was delayed, or there had been an accident. If you still don't respond I will go on: 'I left early to allow myself time', 'I ran all the way' and so on. We do a great deal of work to elicit an SPP, and if we never get one it can bother us for a very long time.

Given all this, I might want to pre-empt your evaluations (or lack of them) by something called prefacing. This is where I set the tone of the story and offer you a 'candidate stance' (Antaki, 2012). That is, the stance or opinion I want you to take. This is quick and easy to do and very effective. 

'I fell over' doesn't say much at all. 

'I went up to collect an award and fell over' does much more. 

'I went up to collect an award and slipped on water' does even more still. 

By the time I have completed my preface, the candidate stance that I was embarrassed, injured and not at fault is well set up.

It's not always so simple, though. Candidate stances position the teller of a story (Wetherell, 1998). In the above example I am positioned as a blameless and reliable witness. Had I tried to absolve myself of responsibility for doing something dreadful by saying I was drunk, I would be positioning myself as someone whose testimony might not be accurate. A witness to the event who had not been drinking would then be positioned as the authority.

This battle for positioning between the teller of a story and the recipient is called 'epistemic primacy' (Stivers, 2011). That is, who has the right to knowledge? You might think it is obviously the teller, but this is not always the case. If I tell you I saw a foul in a football match that should have been a penalty, then even if you didn't see the incident, you will claim epistemic primacy for yourself if you believe you know more about football than I do. You might do so by asking me where I was sitting at the time, how much football I watch, which side I supported and so on. By the end of it, even though you were not there and I was, you could be in full charge of the story.

All this is just a very basic introduction to CA. It gets a lot more complicated! Conversation analysts scrutinise every nuance of speech, which turns out to be very predictable more often than not. And that is the point of it all. Think of all the interactions where it is very important that things go smoothly. If you call the police in an emergency you would hope it all goes quickly and efficiently. Well, CA can help with that. Doctor/patient interactions are another example. Think, though, of being a parent and being told by your child that something bad happened that day. How do you handle it? Do you respond with sympathy immediately or do you withhold sympathy and ask a series of evaluative questions first? CA suggests it's the latter. This gets even more important when you learn that children of evaluative parents tend to do better at school and score more highly on measures of self-esteem.

The worst part of CA is that when you start to do it you actually become a worse listener. You start to recognise the 'rules' as they are followed or broken. You start to be able to predict what's coming next. If a conversation follows a 'textbook' route you'll smile to yourself. And all the while, you haven't really listened to a word that's been said! 

Antaki, C. (2012). Affiliative and disaffiliative candidate understandings. Discourse Studies, 14(5), 531-547.

Goodwin, C., & Heritage, J. (1990). Conversation analysis. Annual Review of Anthropology, 283-307.

Stivers, T., Mondada, L., & Steensig, J. (Eds.). (2011). The morality of knowledge in conversation (Vol. 29). Cambridge University Press.

Wetherell, M. (1998). Positioning and interpretative repertoires: Conversation analysis and post-structuralism in dialogue. Discourse & Society, 9(3), 387-412.

Monday 1 June 2015

We Need to Talk About 'P'.

Most psychology students dislike their research methods module. For those who don’t know, I am talking about statistics. When you experiment on people and you want to know, say, if women behave differently to men in certain circumstances, you need to have some method of understanding if there is anything going on in your data or if it is all just noise. Some students don’t see why they need to know it; some are put off by the arithmetic; then there are those who just find it dull. However, I think the problem most students have with the research methods they are taught is that deep down they have a feeling that there is something fundamentally wrong with it. The thing is, they are absolutely right. The field of psychology feels it in its bones, too.


The Cornell psychologist Daryl Bem published a study in 2011 which appeared to show evidence of people being able to see the future. This was the last straw. The problem with the way psychology went about statistics was always going to throw up something silly and now it had. Wagenmakers et al. (2011) were moved to say: “[…] the field of psychology currently uses methodological and statistical strategies that are too weak, too malleable, and offer too many opportunities for researchers to befuddle themselves and their peers.”

It is not an exaggeration to say that psychology was in crisis. At least the quantitative researchers were. Qualitative psychologists, often derided by their number-crunching colleagues, no doubt sniggered into their sleeves.


You may be wondering why it was assumed the results were wrong. Perhaps people really can see into the future! I am going to have to backpedal a little to explain why they can't, at least not according to the study in question.


You see, psychology depended for decades on something called null hypothesis significance testing (NHST). What psychologists want to know is what every single person in a given population would do under certain circumstances. Obviously, they can't test everyone. So what they do is test some of them and infer from that limited number an estimate of the population. To do that they put their numbers through statistical tests which tell them how likely it would be to find the effect they did if no such effect really existed. That is, if there is, for instance, no difference between men and women in reaction times, what is the chance of finding one through some fluke? Fisher, way back in the 1920s, decided that if that chance was only 5% or less then you could report a significant result. But why 5%, students ask, quite reasonably. And how do you know your results don't fall into the 5% fluke range? The answer to the first question is that 5% was simply the side of the bed Fisher got out of that morning; the answer to the second question is: you don't.
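To make that 5% concrete, here is a toy simulation in Python (the two-group design and all the numbers are my own invented example, not anything from a real study). It runs thousands of 'experiments' in which no true effect exists and counts how often a t-test comes out 'significant' anyway:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000
n_per_group = 30  # arbitrary sample size, for illustration only

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the SAME population: no real effect exists.
    group_a = rng.normal(loc=0, scale=1, size=n_per_group)
    group_b = rng.normal(loc=0, scale=1, size=n_per_group)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < .05:  # Fisher's conventional cut-off
        false_positives += 1

# Roughly 5% of these null experiments come out 'significant' by fluke.
print(f"'Significant' results: {false_positives / n_experiments:.1%}")
```

Run it and the fluke rate hovers around 5%: not because anything is there, but because that is exactly what the cut-off permits.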

If I were to tell you that at the last election there was a significant difference between the red party and the blue party, you'd argue that didn't really tell you anything. That, though, is what NHST has been telling psychologists for decades. Nothing more than that. Fisher never really intended for his statistical method to be used as a benchmark by which studies were published. To him, it was nothing more than a useful tool which told you there was something worth looking at more deeply. However, psychology didn't listen and such studies were indeed published and have been ever since, each claiming that p<.05 (where 'p' is the probability in question).

Another problem with 'p' is that it tells you how likely your data are if the null hypothesis is true. That is, how likely it was to have found a difference by chance if no difference really exists. The trouble is, the null hypothesis can never be true (you can't say for sure something doesn't exist; you can only say you haven't found it yet).


Apart from the fact 'p' doesn't tell you what, as a psychologist, you want to know, like where the effect is and how big it is, it is alarmingly easy to manipulate. All psychologists know that if you collect 30 participants and find no significant difference between groups, you can just keep adding participants until you do. It doesn't mean there really is something interesting going on; it's just a peculiarity of NHST that if you keep increasing your sample size and re-running the test, you will eventually find significance. This is called p-hacking and it is cheating.
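Here is that kind of cheating sketched in code (again, the design and numbers are mine, purely for illustration). A hypothetical researcher peeks at the data after every batch of added participants and stops the moment p dips below .05, even though, once more, no true effect exists:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_hacked_experiment(start_n=30, max_n=200, batch=10):
    """Keep adding participants and re-testing until p < .05 (or we give up)."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while len(a) <= max_n:
        _, p = stats.ttest_ind(a, b)
        if p < .05:
            return True  # 'significant', despite there being no true effect
        a.extend(rng.normal(size=batch))
        b.extend(rng.normal(size=batch))
    return False

hits = sum(p_hacked_experiment() for _ in range(2_000))
print(f"False positive rate with optional stopping: {hits / 2_000:.1%}")
```

The false positive rate climbs well above the nominal 5%, which is exactly why topping up your sample and peeking again is cheating. That brings us back to Bem (2011).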

[Table from Bem (2011): df, t and p values for the nine experiments.]


Have a look at Bem's results above. There are nine experiments testing the ability to see the future. 'df' stands for degrees of freedom, which tracks the sample size (it is the number of participants minus one); 't' is the test statistic and 'p' is the significance value. In every experiment except experiment 7 (p>.05) a significant result was found. Look at the sample sizes, though: the dfs range from 49 to 199. This ought to make any reviewer suspicious. It looks as if he kept adding participants until he found significance. He got all the way to 199 in the one that showed nothing before giving up.


It took a study suggesting that people can see into the future to confront this properly, but NHST was how almost all studies were analysed until then, and most still are.

What you can do is something called a power analysis. You tell a program what sort of design your experiment has, how powerful you want the experiment to be and what you want to set 'p' as, and it tells you how many participants you need. You shouldn't go over that number. The trouble is, if a psychological experiment uses a power analysis it very rarely says so, in my experience. Not only that: if you see a large sample in the method section of a published study, you are more likely to be impressed than immediately suspicious.
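In practice that calculation is a line or two of code. Here is a sketch using Python's statsmodels library (my choice of tool for illustration; dedicated point-and-click programs such as G*Power do the same job). It asks how many participants per group a two-group design needs to detect a medium-sized effect with 80% power at p = .05:

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group to detect a medium effect (d = 0.5)
# with alpha = .05 and 80% power?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```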


Fortunately, there are better ways of analysing data and psychology is moving toward adopting them. Remember how I told you there was a significant difference between the red party and the blue party at the last election? What if I were now to tell you the red party had 60% of the vote and that the margin of error was 7%? Would that be better? At least now you have a lot more information to go on. That's what psychologists are doing with effect sizes. There is no excuse now for looking at a published paper in a psychology journal and not seeing effect sizes displayed alongside 'p', letting you see what the effect actually was and how big it was too. The great thing about effect sizes is that they stay the same no matter how many participants you throw at them, unlike 'p' values.
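The most common effect size for comparing two groups is Cohen's d: the difference between the group means measured in units of their pooled standard deviation. A minimal sketch, using invented reaction-time data:

```python
import numpy as np

def cohens_d(x, y):
    """Difference between group means, scaled by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Invented reaction times in milliseconds for two groups.
group_a = [512, 480, 530, 498, 551, 470, 505, 520]
group_b = [468, 455, 490, 441, 472, 460, 483, 449]
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")
```

By convention, a d of around 0.2 counts as small, 0.5 as medium and 0.8 as large, and unlike 'p' the estimate doesn't drift toward significance just because you keep adding participants.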


Then there are confidence intervals (CI). A CI tells you what the mean score of your sample was but also gives an estimate of how accurate a representation of the population mean it is. This is what they look like:


https://jimgrange.files.wordpress.com/2014/12/mini-meta.jpg


What we really want to know is how far from the true mean each of those little black squares is. We can't test everyone, but mathematicians have worked out an ingenious way of estimating that from just a single sample. This amount of variability in the estimation of the population mean is called the standard error (SE). Don't ask me how they know this, as I am not a statistician despite the amount of statistics psychologists need to learn, but they know that if you draw a line (as above) 2 SEs either side of the sample mean then, 95% of the time, the true and unknowable population mean will fall somewhere within it. The shorter those lines the better.
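Here is that recipe in a few lines of Python (the sample is invented, and I use 1.96 SEs, which is the '2 SEs' above made precise; strictly, a sample this small would call for a slightly wider t-based interval):

```python
import numpy as np

# An invented sample of ten scores.
sample = np.array([4.1, 5.3, 4.8, 6.0, 5.5, 4.9, 5.1, 5.8, 4.4, 5.2])

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean
lower, upper = mean - 1.96 * se, mean + 1.96 * se

print(f"Sample mean = {mean:.2f}, 95% CI = [{lower:.2f}, {upper:.2f}]")
# Across many repeated samples, about 95% of intervals built this way
# would contain the true population mean.
```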


So now we’re getting to see much more deeply into our results than we could from a simple ‘p’ value. There is more, though. Bayesian statistics have been around a long time but computing power needed to catch up. They are a little controversial as you have to factor in an arbitrary value. What you do is, you work out your prior belief (what you already believe about what your study will show). Next, you run the experiment and input the data you came up with. Then you’re told how you should update your prior belief given the data you now have.


That may sound an odd way to go about it. However, as Carl Sagan said, 'Extraordinary claims require extraordinary evidence.' Now we have a statistical model which requires extraordinary claims to be supported by extraordinary evidence. How would this have affected Bem and his experiments on seeing into the future? Since this is an extraordinary claim that is not supported by any evidence so far, your prior belief ought to be quite strongly biased against it. Bem's results, therefore, would not have been powerful enough to update your belief much.


Below you can see this working with Bayesian statistics and our ability to see the future. In the first example, we have someone with no prior belief at the top (prior). Then we get the results (likelihood). Then we get the updated belief (posterior). Because the experimenter had no prior belief, the data were able to update that belief dramatically even though they were weak.

[Figure: a flat prior, a weak likelihood, and a posterior that shifts substantially toward the data.]

Now look below at someone who has a strong prior belief that people cannot see into the future; someone who believes extraordinary claims require extraordinary evidence. His prior belief is around zero. The data are weak, so his updated belief barely shifts at all.


[Figure: a sceptical prior centred near zero, the same weak data, and a posterior that barely moves.]


The whole prior belief thing does bother people, as it is arbitrary, but it is based on prior research. If there is none, the prior is usually set conservatively. However, even a conservative prior still doesn't help Bem and his findings. Indeed, if you apply Bayes to Bem's findings, only one of his experiments shows results anywhere near interesting, and even then not very (quite a comedown from eight out of nine).
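You can see the prior doing its work in a toy version of Bem's set-up. Suppose participants guess which of two positions hides a picture, so chance is 50%, and suppose an experiment finds 53 hits in 100 trials (my invented numbers, not Bem's actual data or his analysis). A simple Beta-Binomial sketch shows how a flat prior and a strongly sceptical prior react to the very same data:

```python
from scipy import stats

hits, trials = 53, 100  # invented data: slightly above the 50% chance rate

def report(prior_a, prior_b, label):
    # Beta prior + binomial data -> Beta posterior (a conjugate update).
    post = stats.beta(prior_a + hits, prior_b + (trials - hits))
    lo, hi = post.interval(0.95)
    print(f"{label}: posterior mean = {post.mean():.3f}, "
          f"95% interval = [{lo:.3f}, {hi:.3f}]")

report(1, 1, "Flat prior (no opinion)")             # drifts toward the data
report(500, 500, "Sceptical prior (centred on .5)")  # barely moves at all
```

The flat prior lets even weak data drag the posterior up toward 53%; the sceptical prior, which stands in for 'extraordinary claims require extraordinary evidence', stays pinned near chance.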


Some psychology publications are not accepting p values at all now. Some say that within ten years p values will be dead, though apparently that was said ten years ago too (which just goes to show people really can't see into the future!). Some universities are teaching the new statistics and encouraging students to evaluate studies based on this more modern understanding. P values are on their way out, though; at the very least, what they can and can't tell us is better appreciated.


Refs.

Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407-425.


Wagenmakers, E. J., Wetzels, R., Borsboom, D., & Van der Maas, H. L. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100(3), 426-432.

Wednesday 8 April 2015

Jump!

Have you ever found yourself looking over the railing of a bridge or balcony and having the urge to jump? It's all right; a lot of people have. There is nothing wrong with you; you don't have a death wish; you're perfectly normal.

This urge to jump, which is commonly reported by people who have never had a suicidal thought in their lives, has led some to believe suicidal acts can come out of the blue as impulsive actions. This goes some way to explaining why someone who appears outwardly well and happy one moment might take their own life the next.


Hames, Ribeiro, Smith and Joiner (2012), suspecting there might be more to it, decided to look closer. The first thing they did was give it a name, which they could then conveniently reduce to initials. They called it the High Place Phenomenon (HPP), inspired, apparently, by a quotation from the character Jack Sparrow in the film Pirates of the Caribbean: “You know that feeling you get when you're standing in a high place… sudden urge to jump?… I don't have it”. According to them, the HPP is not evidence of a death wish. Instead, it's simply a misinterpretation of a safety alert. This hypothesis depends on another phenomenon: anxiety sensitivity.


Anxiety sensitivity is experienced by people who are fearful of the symptoms of anxiety. Reiss, Peterson, Gursky and McNally (1986) drew a distinction between anxiety (the frequency of symptoms) and anxiety sensitivity (the belief that an anxiety symptom has negative implications). In their studies, they found that people with high anxiety sensitivity were more likely to suffer from anxiety disorders. That means it might be more important to ask what someone thinks their anxiety means than simply whether they have anxiety in the first place.


What has anxiety sensitivity got to do with the urge to throw yourself from great heights? Hames et al. studied 431 university students (a big sample, making up to some extent for the unrepresentative nature of the participants) and found that while those with suicidal thoughts were more likely to report the urge to jump, a large number of those without them reported the same urge. Interestingly, those who did not have a tendency towards suicidal thoughts were much more likely to experience the urge to jump if they scored highly on the anxiety sensitivity index.


So . . . you're walking along and you come to a bridge with a long drop to the ground. You approach the railing and as you do, a quite reasonable safety alert goes off in your brain. You become anxious about the state of the railing, for instance. Is it secure? Is it high enough? If you are highly sensitive to anxiety you might misinterpret this anxiety as something very bad. If you don't commonly have suicidal thoughts and are not habituated to them you then might interpret, incorrectly, the anxiety you're experiencing as an urge to jump. Hames et al. suggest that the non-conscious fear response to step back from the edge happens so quickly that the interpretation, coming a second or two later, might make the mistake of thinking, 'Wow, I was going to jump there!' So don't worry. If you're not suicidal, the urge to leap from a great height is probably just you getting a perfectly healthy warning signal all mixed up.


It has to be said there is still very little research on the HPP and a lot more needs to be done. It's an intriguing start, though. So far, at least, you could say this so-called death wish is really a life-affirming reaction. It's proof that you really do want to live.




Refs.

Hames, J. L., Ribeiro, J. D., Smith, A. R., & Joiner, T. E. (2012). An urge to jump affirms the urge to live: An empirical examination of the high place phenomenon. Journal of Affective Disorders, 136(3), 1114-1120.

Reiss, S., Peterson, R. A., Gursky, D. M., & McNally, R. J. (1986). Anxiety sensitivity, anxiety frequency and the prediction of fearfulness. Behaviour Research and Therapy, 24(1), 1-8.

Saturday 28 March 2015

Blink and You Miss It

When you first survey a new environment, such as when you enter a room for the first time (or any time, come to think of it), your senses are bombarded with information. The human brain is pretty amazing but even it can't take it all in. This means we sometimes miss important things. What is really impressive is that we don't miss more than we do. Psychologists want to know more about this mechanism. To find out, they have invented some ingenious experiments which have allowed them to tease out what the brain is doing at such times. In fact, the inventiveness of psychologists in coming up with these experiments never ceases to amaze me.

To test this, Potter and Levy (1969) came up with the rapid serial visual presentation (RSVP) task. In its most basic form, a series of letters flash up one at a time in front of you. You're asked to look for an 'X' (T1). If you spot the X you then have to process it. This processing takes time and cognitive resources, thus creating a blind spot. Any letter (T2) presented within that blind spot gets lost, and studies show this blind spot occurs between about 100ms and 300ms after the X (or whatever letter you're asked to look for). The phenomenon is called the attentional blink (AB).
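To get a feel for the timing, here is a toy sketch of an RSVP stream (the 100ms-per-letter pace and the exact window are my own illustrative numbers, chosen only to match the rough figures above). It checks whether each letter after T1 lands inside the blink:

```python
# Toy model of an RSVP stream: one letter every 100 ms.
SOA_MS = 100                   # time between successive letters
BLINK_WINDOW_MS = (100, 300)   # rough attentional blink window after T1

stream = list("JKDXQPRMS")     # T1 is the 'X'
t1_pos = stream.index("X")

for lag, letter in enumerate(stream[t1_pos + 1:], start=1):
    onset = lag * SOA_MS       # onset of this letter relative to T1
    in_blink = BLINK_WINDOW_MS[0] <= onset <= BLINK_WINDOW_MS[1]
    status = "likely missed (inside the blink)" if in_blink else "likely seen"
    print(f"{letter} at +{onset} ms: {status}")
```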

So far so good. But psychologists were curious. Is it that we can't see T2 at all, or is it that we see it and process it in our brains but, for some reason, don't pay any attention to it? Shapiro, Driver, Ward and Sorensen (1997) did something very clever. They had three target letters instead of just two. The second would be presented during the AB (the blind spot) and therefore go unnoticed by the participant. The twist was that they used T2 to prime T3: in a stream of numbers, T2 would be a letter and T3 would be the same letter in a different case. The idea was that if participants were more successful at noticing T3 when T2 had primed them toward it (compared with participants who were not exposed to T2 as a primer), then it would suggest we really do process these things even if we say afterwards that we didn't see them. As I am sure you have guessed, this is what they found.

Right, so now we know it is very likely we see things we don't notice and that we process them well enough that they can have an effect on other things we see. There was another study by Shapiro, Arnell and Drake (1991) which looked to see if colour was affected by the AB. It turned out it wasn't. So if you saw the X and then a colour was presented within the AB period you could report what that colour was. A few years later, though, another team of researchers (Ross & Jolicoeur, 1999) went a step further. They added a series of alternating colours after the first colour. They wondered if this created what they called chromatic masking. It did. In their experiment, the AB returned for participants who were asked to identify a colour when that colour was followed by others.

I included those two studies because together they are a good example of how psychology (and science in general) works. The first study wasn't wrong. It researched a phenomenon and reported the results. But those results raised other questions, and along came someone else who tried to answer them. That's how knowledge is built. It is also, though, what leads people to mistakenly believe that science is always changing its mind. It isn't.

There are lots of other studies which have looked at the AB. One investigated what would happen when T1 was accompanied by a noise (Van der Burg, Olivers, Bronkhorst & Theeuwes, 2008). What happened was that the AB vanished for T2. Another study (Shapiro, Caldwell & Sorensen, 1997) used the participant's own name as T2 and that got through the AB as if it wasn't there. The same thing happens for emotionally negative words (Anderson & Phelps, 2001). The last of those studies included participants with damage to the part of the brain called the amygdala. It seems it is the amygdala which decides which stimuli are important and need to be passed on to higher processes and which can be safely discarded.

This goes some way to explain how it is we miss things but also why we tend to notice the really important things most of the time. So when you're driving you might not notice the marching band on the side of the road but you will notice the child bouncing a ball among a crowd of people.

Refs.

Anderson, A. K., & Phelps, E. A. (2001). Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature, 411(6835), 305-309.


Ross, N. E., & Jolicœur, P. (1999). Attentional blink for color. Journal of Experimental Psychology: Human Perception and Performance, 25(6), 1483.


Shapiro, K. L., Caldwell, J., & Sorensen, R. E. (1997). Personal names and the attentional blink: A visual "cocktail party" effect. Journal of Experimental Psychology: Human Perception and Performance, 23(2), 504.


Shapiro, K., Driver, J., Ward, R., & Sorensen, R. E. (1997). Priming from the attentional blink: A failure to extract visual tokens but not visual types. Psychological Science, 8(2), 95-100.


Van der Burg, E., Olivers, C. N., Bronkhorst, A. W., & Theeuwes, J. (2008). Audiovisual events capture attention: Evidence from temporal order judgments. Journal of Vision, 8(5), 2.

Wednesday 18 March 2015

So What Is Psychology?

I thought I would dedicate my first blog post here to asking the question: what is psychology? The easy answer is that it is the study of the human mind, behaviour and feelings. After that it gets a little complicated. After all, there is clinical psychology, cognitive psychology, health psychology, developmental psychology, educational psychology . . . I will let Wikipedia take over from here. You're not here for a list, though. I assume you want my take on the matter or you would not be here at all. Well, here it is:

When I tell people I am studying psychology, the response I get most often is 'You're not going to analyse me, are you?' When I feel mischievous I reply, 'I am analysing you right now.' The truth is, though, that halfway through my second year I think I have had only one lecture on psychoanalysis. It's not that all of psychology has moved on from it; it's that psychology has grown so many branches since then, and they all bear their own individual fruits.

The biggest change in psychology was probably caused by the advancement of brain imaging, especially fMRI. Now we're able to look inside a living brain as its owner completes a task and see what's happening and where. That means we can begin to tease out how the brain works. We already had an idea that different parts of the brain performed different tasks, but there is so much coordination between brain regions that the overall picture is incredibly complicated. For that reason, psychologists still need to come up with ever more ingenious experiments to make sense of what is going on. Take the Stroop test. Most of you will be familiar with it. It's where you have to say the colour a word is printed in and not the colour the word spells out. So, for instance, if you see BLUE printed in green ink, you need to say 'green'. If you see RED printed in yellow ink, you need to say 'yellow'. This is difficult to do, especially quickly. But what is actually going on in the brain? Are we suppressing the word we are not supposed to say or are we promoting the word we are supposed to say? To answer that, someone very clever (Tipper, 1985) tweaked the experiment: the words were arranged so that the colour you were supposed to name on each trial was the word you had just had to ignore. So RED (printed in green) is followed by GREEN (printed in red). Here, the word you want to say but are not supposed to is 'red', and the next word you are supposed to say is 'red' as well. The reasoning was that if you were suppressing the word 'red' then you would take longer to say 'red' when you are supposed to; if you were just promoting the correct word, there would be no such effect. What they found was that participants took longer to name the colour when it matched the word they had just suppressed. It's a very simple experiment but it allows us to say that, probably, we suppress the things we're not supposed to say instead of promoting the words we are.
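Here is a toy sketch of how such a trial list could be built (my own construction, just to make the manipulation concrete): each trial's to-be-ignored word becomes the ink colour to be named on the following trial.

```python
import random

COLOURS = ["red", "green", "blue", "yellow"]

def negative_priming_trials(n_trials=8, seed=1):
    """Build (word, ink) pairs where the ignored word on trial t
    becomes the ink colour to be named on trial t + 1."""
    rng = random.Random(seed)
    trials = []
    ink = rng.choice(COLOURS)
    for _ in range(n_trials):
        # The distractor word must clash with the ink it is printed in,
        # and it determines the next trial's ink colour.
        word = rng.choice([c for c in COLOURS if c != ink])
        trials.append((word.upper(), ink))
        ink = word  # the negative-priming link
    return trials

for word, ink in negative_priming_trials():
    print(f"Word shown: {word:<6}  ink colour: {ink:<6}  say: '{ink}'")
```

If people are slower on these linked trials than on unlinked control trials, that is the suppression signature Tipper reported.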

That's what psychology is. At least for me as a student most of the time. We're interested in what the brain is doing. Except when we're not. I am currently involved in some research that involves analysing conversation and looking for patterns in the way we report problems. That involves watching hour upon hour of videos of families eating dinner and transcribing anything that resembles what we're looking for. There are no fMRIs and no clever experiments: just lots and lots of watching and listening. Which is not to say it is not incredibly methodical. Conversation analysis uses a type of transcription called Jefferson. This allows us to break down conversation into recognisable and predictable parts so that we can see what language does as opposed to what it means. We're not trying to imply cognitive processes; we're trying to work out how we construct language to make it do what we want it to do. Why? Well, when a child calls a helpline to report abuse it really pays to know how best to coax out the information you need as quickly and as accurately as possible. However, while I am doing that, a few doors down the corridor, two of my friends are sticking their hands into icy water and seeing if swearing enables them to stand it for longer (Stephens, Atkins & Kingston, 2009).

I may not have helped you to understand what psychology today is. I hope, though, I have gone some way to helping you appreciate how broad and inventive it is. That should come as no surprise. We're trying to understand the human mind, which is still the most complicated thing we know of in the universe.

Refs.

Stephens, R., Atkins, J., & Kingston, A. (2009). Swearing as a response to pain. NeuroReport, 20(12), 1056-1060.

Tipper, S. P. (1985). The negative priming effect: Inhibitory priming by ignored objects. The Quarterly Journal of Experimental Psychology, 37(4), 571-590.

Tuesday 17 March 2015

Introduction

Eighteen months ago I started a degree in English and psychology at Keele University in the UK. A lot has happened since then. One of those things was entering a Wellcome Trust science writing contest. The lecturer I entered with, Dr Richard Stephens, won, and he encouraged me to submit my own effort for publication elsewhere. I passed it around and got interest from The Psychologist, no less.

I was a journalist before enrolling at university, and I had an idea back then that I would one day write about psychology: making it accessible, clearing up misconceptions and sharing some of the fascinating things I have learnt and am still learning every day. The communication of science in general is something I have always admired in those who do it well. I also think it is a valuable endeavour. After all, we're all human, and understanding why we do, think and feel the things we do has got to be worth it. For instance, why do we swear when we are in pain? How do parents really react when their children report a trouble to them? What causes us to walk into a room and forget why we are there? And do mammals really urinate for an average of twenty-one seconds? Answers to these questions and more will be offered in the coming weeks, months and, hopefully, years.

(If you have a question about psychology then feel free to ask it. If I don't know the answer I will find out for you.)

Richard