Mental health, mental illness, insanity, wellbeing, distress, madness…

Hello, my name is Phil, and I don’t know what mental illness is.

Hello Phil.

I have an MSc in Counselling Psychology, work full time as a counsellor, and I don’t know what mental illness is. Neither do I know how it differs from mental health. I have a vague, felt sense of what these terms mean, but I don’t know.

Now admittedly, the ‘psychology’ aspect of my MSc wasn’t the kind that has lab rats and Big Brother body-language experts and all that, but still, you’d think someone who’s qualified to work with people who are suffering from mental distress (there’s another ambiguous term to throw into the mix) might have a firmer grasp on such basic terms as ‘mental health’ and ‘mental illness’.

But I don’t. Neither do I really understand how ‘mental illness’ differs from ‘madness’ or ‘insanity’, or what place ‘wellbeing’ takes in relation to them.

[Image: Jens Lehmann]
A layperson [niche reference which doesn’t work in gender-neutral terms]
Sometimes I feel bad about this, and worry that I’m alone in my confusion – that I’ve missed the obvious distinction which everyone else was told about while I was in the toilet. But most of the time I think we’re all just operating in the dark. If you listen to people talk and write about any area of mental health there’s a real muddled mishmash of terms and attitudes which, to me, betrays a fundamental incoherence in the way that mental health/illness is understood both by the professional and the layperson.

Part of the problem is that the world of counselling is a bit scared of ‘proper’ mental illness – the kind we meant when, as politically-incorrect children, we talked about people being ‘psychos’ or ‘mental’. We counsellors often shy away from a world we’re taught to see as too serious for our woolly skills (and too physical in cause). Some of us believe that we can help people with ‘proper’ mental illness deal with their problems, but the overriding discourse says that, at a certain point, we have to pass these people on to the big boys: the psychiatrists with the ability to prescribe and to section.

So there’s a whole big chunk of people who deal in mental health but feel they are not permitted to talk about the ‘real’ part, only the minor versions around the edges. And that in itself is symptomatic of the way that mental health and mental illness are (not) spoken about. We’re always banging on about destigmatising mental health issues but there’s a big stigma – a taboo – about deviating from this woolly, all-embracing, muddled approach to mental health [there’s the opposite taboo too, which I’ll deal with below].

There’s a taboo, in other words, about being clear about mental health and illness. A taboo which comes from a good place – not wanting to say something offensive about someone who is vulnerable – but whose effect is emphatically not good. By not speaking clearly we help no-one, in the long run, and we counsellors in particular reduce our relevance and our stake in the argument to define what counts as mental health. In the interests of clarity, then, here are some of the things I’ve come across recently that have confused me:

Mental Health = Mental Illness?

A little while ago [ed. quite a while now, I’ve redrafted this many many times, and held off on pressing ‘Publish’ because breaking taboos is scary] there was a knife attack in Russell Square. Initially it was thought to be a terrorist attack, but the next day on the radio I heard your man from the police saying that it wasn’t terrorism what done it, it was a mental health problem.

I sat up at that phrase. Mental health?

Huh?

Without ever explicitly working it out, I think I’d always associated mental health with the softer end of the spectrum – the kind of thing we feel confident to deal with as counsellors: anxiety, distress, questions about purpose and meaning, that kind of thing. I’d linked it subconsciously with things like ‘wellbeing’ – with the everyday kind of things people mean when they say that 1 in 4 of us will experience mental health issues at some point in our lives. Stuff within the normal range of human experience. Not stuff that would lead you to kill a stranger with a knife.

[Image: Sun front page]
That kind of thing I always, unconsciously, thought of as mental illness. Mental illness was seen, when I was young (and still is in the tabloid press), as a kind of bogey-man; the kind of thing that the headline writers want you to think of when they say ‘mental patients’ in front pages like the one on the right.

Mental illness = madness?

When we were children, the people we referred to as ‘mental’ were the same people we’d call ‘mad’. So is mental illness the same as madness? Is one a subset of the other? Clearly in The Sun’s mind ‘mental patients’ = ‘mental illness’ = ‘mental’ = ‘mad’, in the old-fashioned sense of the word. Mental patients are killers – the kind of people whose behaviour or thought is way beyond anything a normal person could understand. Where should we stand in relation to this running-together of madness and mental illness?

On the one hand, it’s pretty reprehensible, I think. It deliberately links all manner of mental illness with threats to your (children’s, granny’s) safety with no factual basis. It plays on an inaccurate picture, marginalising vulnerable people in order to sell papers.

But on the other hand, the everyday language notion of ‘mad’ or ‘insane’ is less obviously reprehensible. It is hard not to think of someone who deliberately stabs a stranger as insane, almost by definition. Their actions and thoughts are so far outside the normal range of human experience that they are ‘beyond’. So does ‘mad’ mean the same as ‘mentally ill’?

Clearly they’re not co-extensive – there are people who we describe as suffering from a mental illness whom we wouldn’t want to say are mad. Those suffering from major depression, for example, I wouldn’t want to describe as mad, but I would want to describe as suffering from a mental illness. But there is a subset of those we define as mentally ill who would also be judged ‘mad’ in normal language: those suffering from paranoid schizophrenia, for example, or experiencing psychotic delusions.

A continuum?

We seem, then, to have a continuum which runs from wellbeing at the softest end (where “we all have mental health” – which, if I’m being really cynical, seems to mean we all have emotions), through mental health, which bleeds messily into mental illness, which at its extreme is madness – the kind you can get sectioned for.

Now it may be that the policeman who set this all off just misspoke: he meant mental illness (the type that is co-extensive with madness), but said mental health. But even if he did, his misspeaking betrays a muddledness which lies just under the surface of the way we talk about everything on the continuum.

This muddle is partly borne of the fact that it’s not at all clear who decides how it works or what standards should be applied along its length – there are no authoritative authorities to defer to. And it’s made worse by the many taboos and fears in this area, which mean that we all discuss the continuum in murky, euphemistic and underhand ways.

I’ve tried to get clearer in my mind by slotting the different parts of the continuum together:

  • At the softer end we have a focus on the societal causes, as in the myriad articles and reports focusing on the pressures on Young People from social media and schools and adverts and models and so on. You also see it in the articles which address the way that we organise our working lives, arguing that better mental health (sometimes ‘wellbeing’) could be encouraged through more humane working practices. Individuals are encouraged at this level to take responsibility for their mental wellbeing/health, by seeking out counselling or rearranging the furniture of their lives. Society around them is encouraged to make space for this, as their needs are, in some sense, normal.
  • As we move down the continuum we encounter those issues which counsellors typically feel justified in dealing with – relationship crises, mental distress (a very strange term which I see popping up more and more), obsessive behaviour, minor depression, PTS, generalised anxiety, that sort of thing. In all of these the individual is held to be capable of repairing themselves in the right relationship, though the doctor might need to be called to medicate if the intensity gets too high. Notice, though, how already the focus has switched from society to the individual. There’s much less written about how the workplace can change in order to support those dealing with OCD, for example. Instead the rhetoric here is all to do with destigmatising: these people are still normal; they’re just a bit out-of-sorts. They’re still held to be responsible for sorting themselves out, but they’ll need someone to support them through the process.
  • Further down the continuum you find mental illnesses of a kind that most counsellors are afraid to work with, and most friends and relatives might consider themselves unable to deal with alone. At this ‘harder’ end you find major depression, psychoses, personality disorders, PTSD – that kind of thing. These people are less ‘normal’, and less responsible for their situation. Neither society nor the individual is held to be in any way responsible for the cause or the solution. Instead the cause is defined as genetic or chemical, their distress is made private, and the treatment imposed.

On this continuum, then, responsibility, agency, and normality are key factors. They are, in the way I’m picturing it, proportional to one another: the more ‘normal’ your experience, the more responsibility you have to sort it out yourself, and the more agency you are taken to have in doing this. The less normal your experience, the less you are seen as able to sort it out, and the less responsibility you are expected to take in doing so. In addition to this, you can add social responsibility, which is also proportional: if the individual’s needs are ‘normal’, we as a society are obliged to help them out in our everyday lives. If their needs are abnormal, we are under no such obligation.

The bleeding continuum

What seems to me to be happening now is that each part of the spectrum is bleeding into the other. The DSM-inspired hard end is encroaching on the softer middle and even the soft end, as the language of mental illness (symptoms, chemical causes, medical treatments, parity of esteem, little individual responsibility or agency) spills into the way we describe less-extreme forms of mental health issues. This comes largely from the ‘experts’ who have a vested interest in turning ever larger numbers of people into patients, and use the DSM to achieve this. But it can also be seen in the way that us woolly liberals advocate for more and more expert mental health provision at all levels. While this is done from noble intentions, the effect is to imply that even lower-level problems need to be sorted out by experts, and that these problems are not the responsibility of the individual or the system they’re a part of. For example, the response to increasing levels of childhood depression has been to bring more mental health services (=, in many cases, more drugs) into schools, instead of encouraging us all to see these problems as ‘normal’ and so seeing society as responsible for changing the system that creates depression in children.

[Image: headline]
And then there’s the backlash to the hard end’s relentless march, as in this kind of headline, which seeks to reclaim mental illness as softer than the hard end sees it. They seek to limit the extent to which the common-person’s conception of mental health/illness is shaped by those at the extremes. Without wishing to get too deep into the oneupmanship of the ‘they’re not Muslims, they’re insane; they’re not insane, they’re men; they’re not men, they’re evil; they’re not evil, they’re let down by our cultural pessimism’ Officer-Krupke bullshit, there’s an important re-balancing of the continuum away from the hard end here. Headlines like the one on the right argue that we need to push back at the definition of ‘madness = extreme mental illness’, so that those who are closer to the soft end don’t get infected by the fear created by headlines like the Sun’s.

That is, the soft end (woolly liberals) have argued successfully that lots of people at the soft end should be seen as mentally ill so that they can get treatment, and are now biting back at the hard end because there’s a risk these people might be re-stigmatised by the focus on ‘mad’ people. This rebalancing is vital (though it could all have been avoided if we’d come up with a different term for low-level mental illness in the first place), but because it’s being done with sloppy attention to detail, it all ends up feeling confused and unhelpful.

[Image: article headline]
Take, for example, the article on the left. One bit of beef I have with headlines like these is that they focus an awful lot on stigma and an awful little on truth. In the call to stop calling terrorists mentally-ill (which was only done because they wanted to stop calling them religious), there’s very little interest in finding out if they actually are mentally ill. Regardless of stigma, if they are mentally ill, then failing to call them that is a regressive and unhelpful kind of self-censorship. In actual fact, the article concerned is quite well-argued: the author explains that there are many contributing factors to terrorism – that you need to take into account cultural, social, and individual purpose factors to understand how someone becomes a terrorist. In amongst all of this, though, he admits that mental health is a contributing factor, so clearly terrorism is in part a mental health issue, as well as a cultural issue, a social issue, and an individual issue. In his laudable desire to combat the way that mental illness is demonised by the tabloid press, he ends up openly contradicting himself and making an argument that will not change anyone’s mind. This kind of muddle will not help anyone, in the long run, and is intellectually dishonest.

More recently, articles concerning Donald Trump’s mental health have played the same back-and-forth game, as ‘experts’ ‘diagnosed’ Trump with various conditions and were then met with a backlash from those who argued it was wrong to equate evil/stupidity/meanness with mental illness. Neither side was particularly concerned with the truth of the matter: the experts wanted some official way to mark Trump’s idiocy, while the backlashers were scared that mental illness was getting yet another bogeyman added to their number. Truth, here as elsewhere, mattered little to either side, and so we ended up getting even more muddled.

Stigma

Another area of much muddle is in the constant call for reducing stigma.

I wrote about stigma a little while ago. I’m not entirely sure stigma is a bad thing. And I think a big part of my problem with the anti-stigmatas is precisely the sliding scale I’ve been banging on about. I think stigma at the softer end is by-and-large a bad thing. The fear or shame which holds someone back from talking to their GP about minor depression or anxiety, for example, is helpful to no-one. In the middle of the scale it’s less clear: stigma here has bad effects but might also provoke action (for example, the person who seeks professional help when they hear voices, in part because they know that if they told their friends they’d probably not understand). And at the furthest reaches, it’s hard to imagine why a society wouldn’t want to say that madness is not a good way to be, and society saying that it’s not a good way to be will, in someone who feels that way, induce a feeling of stigma.

When we talk about reducing stigma we’re almost always aiming our comments at the vast majority of the 1 in 4 who will experience a mental health issue this year. The vast majority of them are experiencing more-intense forms of the problems that everybody faces: stress becomes free-floating anxiety, feeling down becomes depression, comfort eating becomes an eating disorder. These are the things that we don’t want to stigmatise, but the reasoning is wrong: we shouldn’t, as is so often argued, destigmatise them because they’re analogous to physical illness, we should destigmatise them because they’re part of the normal picture of human life – just a more extreme version. They should be destigmatised because that’s a more caring and humane way to approach them, and one which will benefit all of us as we change society to make them less likely to happen.

The other side of mental illness – personality disorders, or ‘madness’ as folk psychology knows it – is a different case. Here, we should aim to destigmatise to the extent that this helps people who are suffering take less personal, moral responsibility for their problems. But we should also make clear that these experiences are outside of the normal expectations of human life. These are like physical illnesses. But with this de-agenting (to reduce stigma) we also strip away humanity. These are high stakes to play with, and the bleeding of the analogy-with-physical-illness argument into the lower levels of mental illness is not helpful: applying the same reasoning to the 1 in 4 is silly and harmful and confusing.

In fact, this misplaced analogy risks stigmatising normal experience, by putting it within the purview of mental health rather than putting the responsibility on the individual and on society to make conditions more amenable to a good life. For example, one reason that someone with low-level mental health issues may feel more stigma in coming forward to seek support is precisely because higher-level mental health issues have been destigmatised and put in the same category as theirs. The same person may previously have sought changes within their relationships and habits (i.e. taken agency and responsibility for themselves) but will now be encouraged instead to privatise their distress, rendering it the responsibility not of society but of professionals.

Enough

I started writing this six months ago, and have struggled to come up with anything coherent. I apologise for this. If you’ve made it this far, thank you for your patience.

Normally I can’t stand it when people publish things that are unedited or confused or badly-argued, and then apologise for them. It’s better not to put them up at all, until you’ve done a decent job.

But in this instance I’m making an exception, because I’ve spent months sporadically trying to put this into shape and I just can’t: partly this is because I don’t have the intellectual chops I used to, but I think it also reflects the muddledness inherent in the subject matter. It’s so confused there’s nothing to do but be confused. I don’t have a pithy conclusion, but I do feel this is really important. The only way out of the muddle, I think, is to talk openly and honestly about what we all make of mental health / wellbeing / illness / madness and try to come to a better understanding of how we, as a society, want to understand them.

Religious bigot from marginalised community with homophobic views is mentally unwell but earns money from hitting people but is a working-class hero but makes racist statements but speaks out for the homeless but supported brexit but is the victim of racial discrimination. Liberals confused, Tyson’s furious.

On certain days, my Facebook feed is chock-full of mental-illness-awareness-raising. It tends to back up on days when there’s a high-profile sufferer who’s spoken out about mental illness and taboo, or when an easily-sound-biteable study is press-released. We’ve got to speak out and speak up, they say. We’ve got to get more treatment. We need parity of esteem. Stigma is bad, you guys! We need more funding. Especially for children. Especially for young men. Especially for those in minority communities. It’s the same message delivered differently. Sometimes there are pictures.

It’s fine, I guess. At the very least it provides a break from looking at babies. But so much of this stuff can just blur into a grey murk of well-intentioned blandness, unremarkable and uninspiring.

What aren’t we talking about?

What’s more interesting, I find, is when people don’t get involved. What’s not said tells you just as much about societal attitudes towards mental illness as what is said, sometimes more. For example, it’s surprised me how few (zero, so far) posts have popped up to speak out about the latest high-profile public figure who’s fallen from grace due to mental ill-health.

And I’m not talking about Will Young. No, I’m talking about the mentally-ill person we all love to hate: Tyson Fury.

With Tyson there’s been nothing. No-one publicly empathising with his plight. No-one highlighting the particular pressures of this sector of employment. No-one asking about how we change attitudes within the sport, or asking ‘are we doing enough for the mental health needs of the travelling community?’ Nothing. Why is this?

Maybe it’s because he’s a sportsman?

Nope, can’t be that. People are all over Gascoigne when he gets addicted to something and says a thing, or O’Sullivan when he pretends to quit and says a thing. And there’s always at least a smattering of attention paid to the sob stories of whichever footballing journeyman turns out to have had a gambling problem. Sports-people are great ammunition for the awareness-raising set, as they not only get to speak to a wider audience than usual, but get to wheel out the ‘crisis in masculinity’ dirge too.

So, is it because he’s from the travelling community?

I don’t think it can be that either. While the travelling community are pretty much the only ethnic group you can get away with discriminating against in polite society, they are still an identifiable minority, and as such should be prime grist for the liberal mill. Highlighting Tyson’s fury should kill two birds with one stone – you can advocate both for the mentally ill and the ethnically marginalised all at the same time. Bonus!

No, the reason people haven’t got involved is because he’s a bit of a shit.

So’s Gascoigne, you might say. But he always apologised (and was really good at kicking, which is nicer than hitting, but let’s leave that to one side). Tyson, on the other hand, is pretty nasty, and doesn’t seem to want to apologise for it, or to play the blame-it-on-my-illness card.

How nasty is he? By boxing standards, not that nasty, just a bit thick and a few PR-advisors short of a [I couldn’t think of anything snappy or funny to put in here. I really tried but I couldn’t]. But by everyday standards, he’s pretty nasty. He’s done the homophobia, the misogyny, the racism. Most of the key players.

This is confusing for us woolly awareness raising de-stigmatisers. We like a clean story, but we don’t get one here.

Goodies and (no) baddies

Why are we so keen on a clear narrative? Why do we need there to be obvious goodies and baddies? A lot of us awareness-raisers come from a person-centred background, which is about the woolliest of all of the counselling jumpers, which embraces each and every person with the message: “you’re great, you are, no matter what you’ve done or what you think. Your heart is a golden nugget and always will be. It’s just got a bit covered up by the bad words that some bad people said when you were young, but inside you is a beautiful unicorn just waiting to come out. Believe in yourself and let your nugget shine, and in no time at all, with no help from anyone else, you’ll become the beautiful rainbow you always were.” Or words to that effect.

We believe people are fundamentally good, in the person-centred world. Which is silly, clearly, but believe it we do. When you combine that childish belief with a medicalised model of mental illness which seeks to assign zero responsibility to those who are suffering, you’re left with a world in which all badness must come from physical imbalances which are beyond our control, and all goodness is yours from within. There are no real baddies – only brain chemistry – and only goodies.

Which fits very nicely when your high-profile mental-ill-health sufferer is contrite, seeks medical help and externalises their problem. This suits us very well, as we can lazily consign all of the bad in them (even if it’s hurt others) to their illness, and still love the person we knew before they went off the rails. We love to watch as they go into rehab and come out clean, even if they relapse, because it allows us to see them as fundamentally good. Good people in their core, but on the surface afflicted by an illness (be it addiction, compulsive behaviour, anxiety, depression, whatever) that is not their responsibility.

Mental illness = not your responsibility?

Now I’m not going to go into the whole responsibility thing yet again. My views on this are unfashionable and I want to work them out properly before I say them outside of my head. But there is a problem with the standard, physicalising narrative we see in play in our attitudes towards high-profile sufferers: it doesn’t leave space for the Tyson Furys of this world.

Tyson (or, to be entirely accurate, the Tyson that we know through his public utterances) is, undeniably, a bit of a shit. He’s pretty horrible, as a human being. But he also has mental health issues. What we want to do when we hear that he has mental health issues is consign the nastiness to his illness and advocate for him to be treated gently and to not feel any stigma. That’s our stock response.

But we can’t on this occasion, as it’s just too great a leap of logic to put all of that nastiness down to mental illness. Plus, even if we wanted to, he’s not making it easy for us – he’s not apologising and saying ‘I need help for my baddies’. He’s not fitting our narrative because after he goes to rehab and comes out clean, he’s still going to be homophobic and misogynistic and the rest. And then we’ll have to face up to a category we’re really not comfortable with: a mentally ill person who is also, in some respects, a bad person. Denied our stock response, then, and unable to fit him into our normal narratives, we stay quiet.

Mental illness = your fault?

Of course, on the other side of the liberal/conservative divide they’ve no problem with the category we’re struggling with here – that of a mentally ill person who is also, perhaps, a bad person. Indeed, if you look to the brexit press, it’s the only category they’ve got (where all mentally-ill people are dangerous and violent and completely responsible for their illnesses). This is worse, in terms of its effects and its closed-mindedness. But we’re not, I don’t think, doing that much better than them in terms of the clarity of our thinking, or in our flexibility and openness to the evidence.

On the liberal-accepting side, we too easily use ‘mental-illness’ as a get-out. A way of avoiding thinking difficult thoughts, and actually listening to the people we’ve categorised, or engaging with the nuances of morality and illness. It’s cleaner to put the mentally ill into an all-good (victims) or all-bad (criminals) category because it protects us from engaging with the messy fact that they’re people.

Add to all of this the fact that Tyson hails from the travelling community and has suffered considerable abuse because of his background, and you’ve got a perfect storm for liberal inertia. We know we should be advocating for him and for his community – they’re marginalised after all, and he’s been a victim of abuse – but we also know that he’s a bad man who expresses intolerant views. And that these views are pretty representative of a part of his community. And that he’ll continue to do so even after he’s had his brain fixed. He won’t disown any of it. He won’t apologise and wrap it all up neatly for us.

We just don’t know what to do with him – he challenges our cosy dichotomies and so we (just like the conservatives do with cases which don’t fit their categories) simply ignore him.

Rigid categories, blinkered kindness, fear

Why are we so rigid? Partly because of our woolliness and liberalness. But also, I think, because of the way mental health debate has been taken out of the hands of ordinary discourse and into the hands of medical science. We play along with this relocation because it’s most often done in the name of greater access and awareness etc. But by buying into a very narrow discourse, we’re left in a position where we can’t nuance our views to allow for someone who is discriminated against but is a shit person. Or for someone who discriminates against others but is deserving of our empathy. Or for someone who might be partially responsible for their mental illness. We just don’t talk about it because it doesn’t fit the categories we’ve unquestioningly adopted. Many is the occasion I’ve felt scared – even now as someone who’s trained to work in mental health – to ask some of the difficult questions I have about the way we perceive mental illness for fear of being illiberal and nasty. So I’ve ignored it too.

But by ignoring the cases which challenge our categories we fail to engage with the real world. A world in which we’re none of us completely responsible or completely free of responsibility for any aspect of ourselves, just as we’re none of us completely responsible or free of responsibility for the communities we grew up in. A world in which the dividing line between ‘prick’ and ‘personality disorder’ is blurrier than the sun. By talking about this in a grown-up way we might come to develop a language which situates mental health back in the realm of normal, messy, moral life, rather than the sterile, amoral, medicalised annex it currently occupies. But that’s scary. It means saying some things that feel horrible and bad and might offend people – it means blaming people we’ve got used to pardoning, and vice versa. I’ve tried to do that, but right now I’m too scared; I’ll keep quiet.

South Asians and their taboos

Waiting by passport control for my South Asian partner [for that is her title], I came across an article on the BBC news website last week about South Asian attitudes to mental illness. Normally I’d skim-read an article like this before passing on to something juicier or fluffier, but having just returned to UK connectivity I was (I’m ashamed to say) hungry for digital content of any texture, so read it properly.

It was interesting enough, and had a picture of Monty Panesar at the top, which I liked. It reminded me of a simpler time in English sport, before we got good at things. But there was something in the tone of the article that made me uncomfortable.

It wasn’t the premise of the piece: the question “Why do many South Asians regard mental illness as taboo?” is a very interesting one. The way that different communities regard mental health and illness is a fascinating and important subject for public policy and private understanding. My partner and I often struggle to understand each other’s preconceptions about mental health, which are partly the products of our different cultural upbringings. Encountering assumptions that are foreign to one’s own helps one to become better aware of their contingency and arbitrariness.

South Asians – “a particular problem”

What troubled me was the way that ‘regarding mental illness as taboo’ was equated in the article with ‘wrong’, or, at the very least, ‘a big problem’. For example, the Professor man describes the large role that shame plays in South Asian cultures, and tells us that South Asians do not consider mental illness to be a medical issue, instead holding “superstitious belief[s] that there is something they did in their previous life and they’re being punished”. Later in the article a report is cited which found that mental illness issues were rarely spoken about or allowed out of the house because of fears around the status of the family, and worries about arranged marriages being called off.

The implication of these various statements is, to my mind, clear: South Asian communities are doing mental health badly. We as readers are invited to conclude with the article that it is wrong for shame to play such a large role in their culture, and for mental illness to be considered a moral rather than a medical issue. It is wrong that family status and arranged marriages are put before individual mental health.

The flip-side of them doing it wrong, of course, being that we (read: middle-class, western, mainly white) do it right. We’ve got the right balance of shame and openness, and have moved beyond primitive notions of moral responsibility to a much more sophisticated medical model (or, if we haven’t, we’re certainly working towards it through constant de-stigmatisation and medicalisation). Further, we hold – correctly, mind – the needs of the individual higher than the needs of the community.

I don’t necessarily disagree with the above arguments (I do). My problem lies with the way that the arguments are (not) made, and the way that this allows a degree of unthinking racism to be smuggled past the reader.

This may sound extreme. Read the article, see how racist it feels. Maybe it doesn’t. It didn’t to me when I read it. But that, I think, is because the article – and hundreds of others like it – doesn’t make its arguments explicit. Held up to the light, those arguments look contentious, so to avoid any controversy they’re smuggled through the back door, in unspoken assumptions.

The enthymeme

Hiding the most important parts of your argument in assumptions you don’t spell out is a classic philosophy trick, called an enthymeme. What are the enthymemes at work in the article? One of the main arguments being hushed through is that medicalising mental illness is a good thing. Another is that it is bad for a community to use shame to regulate itself. A third is that the needs of individuals should have precedence over the needs of family or community. There are others, especially once you get to the report, but let’s stick with those three.

I’ve written before about my various beef with the first assumption, so won’t go into it much here. Suffice it to say that I think medicalisation is not obviously a good thing. At the very least, it risks narrowing the narratives available to individuals to explain and own their troubles, potentially disempowering and harming them.

But what of enthymemes two and three? Is shame a bad thing for a community to use to regulate itself? Should the needs of the individual be put above the needs of the group?

Shame

It’s easy for us to look down on shame, especially when we find it in other cultures, as it seems such an old-fashioned and anti-fun emotion. But unless you’re Carl Rogers (and I sincerely hope you’re not) there are some pretty good reasons to think that shame is essential for humans to live with one another.

For example, just off the top of my head, shame is one of the primary forces that stops me from becoming addicted to computer games. I know these are a drain on my life and cause my arthritis to flare up in ways that have terrible knock-on consequences for my physical and mental health. But it is not this knowledge that motivates me to stop – it’s the shame of being caught.

Shame is also a primary tool in the education of children. Much of the work of growing up is working out how to negotiate the balance between one’s bodily desires and the desires of others. It is shame, in the first instance, that helps a child to regulate their needs, as they seek approval from significant care-givers and try to avoid losing it. Later on you might dress the shame up with rational argument, but ultimately it’s the shame that does the work.

I know it’s unfashionable to say this, but shame does a huge amount of work in western as well as South Asian communities, as it should do. It has huge limitations, and a lot of the work that gets done in therapy is aimed at undoing unhelpful feelings of shame. But the point at which shame ceases to be useful and becomes harmful is not an obvious one, and criticising the South Asians for having a different way of drawing that line to westerners is not a useful response.

What would have been useful is for the article to have made clear its assumption that we’ve got the right line in the liberal West. At least that way the reader would be invited to question this, and understand the wider relevance of shame within a community.

Individual vs family

In the absence of such clarity, how might we find out how much shame is the right amount? Well, a simple way to do this would be to look at its effects: who gets hurt, who gets helped? In the article there is much focus on the harmful effects of shame on individuals, and an acknowledgement that this is often done because of the perceived needs of the family or group. The voices we hear are those individuals who have been harmed by shame, and rightly so – their voices need to be heard.

The voices that are not heard, though, are the voices of families and groups who have been helped to stay together by shame. And because of this we are not able to explore, within the article, whether or not the trade-off the South Asian communities have arrived at is a good one. Writing for a western website, the author expects us to unthinkingly accept that any sacrifice required of an individual in the name of family or group is wrong, problematic, or backwards. But is this so?

Every culture balances the needs of the individual and the group in various ways. Obviously. Middle-class white Britain, for example, has embraced a kind of individualistic liberalism over the past 50 years which holds that family is very important so long as the individual chooses to be a part of it. We generally look up to people who sacrifice some of their own happiness to help their families (because family is important), but we do not look down on those who do not (because it was their choice). In other words, family is a good thing if and only if the individual wants it to be. In all things the individual is the final arbiter where any good is concerned.

The picture is different, I think, outside of that middle-class white bubble, but let’s not get into that now. As a white middle-class Brit, I am grateful for the freedom which individualism brings. It’s allowed me to make choices which have aimed at my individual flourishing, regardless of family or social expectations. But I also regret that some of my needs were placed higher than those of my family. For example, I was not required to visit my Grandfather when he was living in an old people’s home and, being a somewhat emotionally awkward twenty-something, didn’t choose to visit him. I regret my choices then and I regret that I didn’t belong to a culture in which my actions would have been shameful.

I regret, too, that it is not shameful for me to avoid ever talking to my neighbours, as this kind of shame would make me a better, more connected, happier human being. Without the societal expectation that I commit to something I don’t want to do, I find myself unable and unwilling to step out of my individual comfort zone and become a better person.

Ok Phil, but why spend so long pedantically tackling a pretty bland article?

Well, while the article itself may be bland, the trend it is a part of is not.

This article, like so many others we read every week about mental illness, unthinkingly holds that mental illness is a neutral object, capable of being observed free of any cultural baggage.

Mental illness is not a thing

But mental illness is not a thing. It does not exist. It cannot be treated as an object which is the same in one community as it is in another. In this respect it is not like physical illness. It is a cultural construct, as your man Foucault spent a huge chunk of his life riffing about (see this for a short, typically blinkered description of Foucault and the anti-psychiatrists). Our conception of mental illness is connected with many other aspects of our culture, including personal motivation, family structures, rituals, habits and so on. Medicalising mental health fits with Western materialistic (not in the hippy sense) individualism, but it doesn’t necessarily fit with a more collectivist culture.

By adopting a Western conception of mental illness, South Asian cultures would probably gain something (lower suicide rates, lower incidences of anxiety or whatever you’re measuring – that kind of thing) but also probably lose something. It might be that they lose something insignificant, but it might also be that they lose something of deep importance. We don’t know, because the article does not address this, and, by refusing to fess up to its enthymemes, tries to stop us from addressing it.

My gut feeling is that the price for adopting a western conception of mental illness would be pretty high. It might include the loss of established truths and norms which provide comfort and security, the loosening of family ties built through expectation and rituals of respect, and the diluting of cultural identity. Whether or not this is a good bargain is not clear, and not answerable by those, like myself, who are not a direct part of the South Asian cultures in question. And neither is it answerable by those who adopt an unthinking scientism where mental health is concerned. The ‘truth’ about mental health is ultimately not purely a physical one, but a cultural one too.

The South Asians quoted in the article are, to my mind, its only saving grace, as they are arguing from within that their culture can and should change. These are important voices to hear. What’s missing in this article is the other side – the voices of those South Asians who value the traditional role that shame plays, for example, or who feel that the Western conception of mental health would not fit with their community in other ways. Their voices are unthinkingly erased from this account, because of the assumption that this is not a cultural issue but one of straightforward scientific misunderstanding.

Stripped of its ‘neutral’ surface dressing, an article like this which tells South Asians that they’re bad at mental health is straightforward cultural imperialism which borders on racism. That we don’t recognise it as such is testament to the power that the medicalised, individualised conception of mental illness has amongst us today. It’s become so much a part of our cultural furniture that we don’t even know it’s there.

What’s needed instead of one-sided dismissals is a genuine discussion about the broader cultural context of mental health – one which acknowledges that those on both sides of any cultural divide can learn from each other. Slating the South Asians isn’t good for them because it’s racist and plays into a stereotype of backwardness and rigid hierarchy and anti-science. But it’s not good for ‘us’ either, as it stops us from gaining insight into ourselves, and ideas from others.

Ultimately articles like this constrain rather than enable understanding. They are fundamentally conservative, and, under the cloak of ‘helping backwards Others’, serve mainly to bolster our own sense of right, preventing us from ever asking the question of the place mental health holds in our own culture: is it as good as all that? And that’s a shame.

Footsoldiers or Connoisseurs

(Paper presented at the Keele Counselling Conference on 7/5/16)

When the opportunity to present at the conference came up, my first thought was: what’s the point? Why bother? I’ve got nothing important to say and even if I did it wouldn’t change anything anyway.

For anyone who knows me and knows how passionate I am about counselling and about education and research, that would’ve come as something of a shock; I’m normally the first to jump at opportunities like this. And it shocked me as well. The more I dwelt on this shock and the negativity, the more I thought that I did have something I wanted to say: not to talk about my research, but to tell the story of doing the research – the story which ended with me feeling so negative and dis-empowered.

We’ll hear a lot of positive and inspirational things this weekend about creative research. My paper is going to sound very negative next to them, but I hope this negativity can serve a useful purpose. I hope that my story of isolation will resonate with others’ experiences, and highlight the danger that faces us when we, as practitioners, are separated from the knowledge creators. I also hope that the journey I’ve been on may gesture towards a different way to think about ourselves as professionals, and about what knowledge in counselling could mean.

Research, Knowledge and Fear

I’m going to start, then, with a very brief description of my Masters dissertation. My plan was to investigate my own identity as a white, heterosexual, middle-class man; to look at the privileges that this conferred and how I often failed to acknowledge or engage with these. I wanted to challenge my insider safety and security by involving others in the process – others who didn’t belong to the groups I belong to – others who could challenge and change me.

Fearing that any established method I chose would merely repeat and reinforce my privilege, I adopted an anti-methodological methodology. I hoped to ‘meet’ my participants, in Buber’s sense, with as few technical or power-full impediments as possible. So I sought dialogue – meeting – with Others, with no pre-set method at all except to engage and to keep on engaging. I had no criteria guiding the research except those which emerged in discussion and debate. I was the author and took responsibility for the work, but was not in complete control at any stage.

What did this look like in practical terms? Well, it meant holding an initial dialogue between myself and my participants which focused on identity (but was otherwise unstructured). Following this, both my participants and I would reflect on the transcript of that discussion and engage in further dialogue about these reflections, both via email and in person. This process would continue, spiralling hermeneutically towards a better, richer understanding of our encounters. The work would evolve in dialogue with my participants, rather than being an analysis of this dialogue.

So what happened? Well, it was a complex study, but one of the main threads that runs through the dissertation – and that I want to focus on today – is the way in which, after each dialogue, I would go away and try to understand what had occurred, and then share this attempt at understanding. And each time I shared this attempt at understanding, I would be told in response: “You’re trying to make this too clean, Phil – too final – too sensible”. I was told: “You’re trying to understand it – to stand underneath it and justify and encompass it all”. And further, I was told that this movement was symptomatic of a privilege which seeks to encompass and erase difference.

As the piece developed, then, my participants were telling me that my goal of telling a clear story – or even of just plain understanding at all – was itself the goal of a privilege which whitewashed and denied difference. I was invited instead to sit with the discord, to hear rather than understand; to allow the project to outgrow me.

I found this very difficult, and I shared these difficulties with my participants in a way which itself felt exposing and uncomfortable. But ultimately it was these moral and political criteria which guided the writing of the dissertation. I decided, in dialogue with my participants, that the moral and political imperative called upon me to include all of our voices, often uncommented upon, instead of rigorous analysis and clear explanation. I spent the majority of my allotted 20,000 words on these dialogues, and trusted my reader that what mattered would come through in the writing.

The work was hugely worthwhile for me and, I hope, for my participants, and I don’t regret it. The learning I took away was of a moral, emotional and political nature, centring on what it means to be defined by others, and how unethical it can be to resist this. I have kept it with me and continue to learn from it. But the practical consequence of going off-piste in my research was that I got a much worse mark than I would have liked. It was the right mark, but its effect on me, which I hadn’t foreseen, was to make me feel excluded from academia.

And not only to feel excluded, but also, in a small way, to be excluded, as, without a distinction next to my name, I’m less likely to get funding for a PhD and, as I’m a counsellor, there’s certainly no way I can self-fund.

Now, this was my choice – I chose to write in a way which I knew risked a bad mark. But the feeling of being excluded from the bodies which create the knowledge that we as counsellors apply set me in mind of other instances of alienation, and I realised that it’s something of a theme in my professional life.

Being a member of the BACP, for example, is for me an experience of having a distant, paternalistic instructor tell me what not to do. I feel I have very little voice in the body which represents me, and feel that it only represents the bland, quiet, profitable aspects of me.*

And this in turn set me in mind of another instance of isolation from my previous life as a teacher. Some years ago, while doing an MA in early years education, I conducted a piece of action research with my staff team. This research sought to raise our awareness of our interactions with young children and to reflect on these: to learn from the children and to learn how to learn from them. This was a fundamentally trusting, human, and relational piece of work, in which we all had a voice. And it paid great dividends, opening up new avenues of practical knowledge which would not have been accessible without this relational method. It was fundamentally lived, practical knowledge – it’s not the sort of thing that an outsider observing could have discovered. But not only did this knowledge not spread beyond us, it was soon overturned and negated by more official forms of knowledge: by initiatives backed up by extremely dubious but extremely evidence-focused research.

We had been encouraged to find our own practical knowledge, but were effectively told soon afterwards: “This is local, specific and not really proper knowledge. Our large scale studies are more important – they are more true”. In the years which followed this I found myself becoming more and more isolated from the sources of knowledge-creation in education, and, at the same time (because I was required to see and interact with my students in terms of this evidence-based ‘knowledge’), more and more isolated from the children in front of me. Eventually, the gap became too large and, reluctantly, I left.

The Risk to Counselling

Is this really a risk though? Do my own personal experiences really illustrate something larger? I don’t think counselling will ever end up where teaching has. For one thing counselling is much more private an enterprise, and a less political issue than teaching, and it has, at present, no statutory authorities. But I do think it’s worth considering what can happen when those practising a profession are completely isolated from the means of knowledge-creation, as is the case with teachers now. And there are signs that counselling is moving in that direction. For example, how is knowledge created in counselling? Who gets to say what counts and what doesn’t?

Well, to briefly divert into a little Foucault, there are many different discourses through which knowledge is used and defined in counselling. I want to focus on one particular discourse which is steadily gaining power and which I believe, if left un-engaged with, will widen the gap between the creators of knowledge and those who apply it. The discourse is that of evidence-based practice.

This is a discourse which holds that the only real knowledge is knowledge gained through randomised-controlled-trials and objective studies by neutral outsiders. It is a discourse which holds that knowledge is objective and measurable, and all that is not objective or measurable is not knowledge. This discourse has gained its power both through practical means such as the provision of employment to those who agree to it, and by broader cultural means.

On a practical level, for example, if you hope to work for the NHS – the largest employer in the UK – there’s a very good chance that you will have to accept the medical model and drop those elements of your personal beliefs which conflict with this. You will have to accept that you cannot learn from the patient, for example, and that your practice is defined by the research of others – others who measure a relationship as a series of inputs and outputs. You will have to accept that your clients are essentially lacking, and that you will fill in their gaps by operating a manual. If you don’t (or at least if you don’t pretend to), you won’t get work. Them’s the rules.

This practical power is hugely powerful, but there’s a larger societal story to tell too, about the systematic stripping-away of ideology and morality from public discourse. This de-politicising and de-moralising of public debate has left a vacuum into which the evidence-based-practitioners and their friends, the economists, have stepped. Economic impact is now the sole bottom line of almost all public debate, and so, increasingly, the knowledge that counts is knowledge which is measurable and has economic impacts. Just think of Layard. Knowledge of a more personal, local kind does not count, because it cannot be measured.

This means that if you want to be engaged in creating knowledge – knowledge that matters, knowledge that has an impact – then it must be of this sort. Any other kind just holds no sway. Them’s the rules.

This is a particularly pernicious state of affairs in counselling, where so much of what we do – as is the case in teaching and in creative research – is about remaining open to and meeting the Other. The best of teaching and counselling and research is about a disciplined openness, in which we learn in relationship and from the relationship, not about the relationship. But if you’re practising EBP you cannot be open to the client (or the child, or your subject-matter), because they are not in the evidence. And that means that you cannot learn from the client. And that means you let the client down.

As counsellors we can often end up feeling powerless in the face of the ‘evidence-based practice’ discourse: we often feel that the ‘knowledge’ created within this discourse is wrong but feel we cannot say so – we just don’t have the words.

Giving us the Words – Elliot Eisner and the Connoisseur

I want to end today by suggesting a framework within which we can start to stand up for ourselves more vocally and explicitly – a framework which will give us the words. And to do so I’m going to use a concept from the work of an educationalist called Elliot Eisner.

elliot_promotional_photo
Eisner (and a cat)

Instead of the technical or industrial approach to knowledge which we see in evidence-based practice, Eisner suggested that teachers may benefit from adopting a more artistic model of knowledge. Looking to the world of art, Eisner found that although there was no overall regulator dictating standards or evaluative criteria, there were, nevertheless, clear criteria and standards which were constantly being negotiated, developed and refined between artists and critics and audiences. And further, he found that these criteria provided enough structure for people to practise well and to improve their practice.

Within the world of art Eisner found explicit, measurable and objective criteria such as technical skill and draughtsmanship (much as we’d find in EBP), alongside criteria relating to established canons of practice and theory (and so an understanding of what knowledge has been passed down to us – much as we’d find in the ‘schools’ or ‘tribes’ approach to counselling), alongside amorphous but no less important criteria such as, for example, emotional impact and moral worth. Eisner called the person who engages with these different criteria and weighs them up against each other a connoisseur. These connoisseurs have a felt sense honed over years of direct, lived experience and dialogue, and use this to engage in a community of rigorous discussion about truth, value and meaning in art. They have a shared sense of purpose, direction and practice, but within that disagree reasonably and rigorously about how to achieve those ends.

Eisner hoped to import that culture of critique and connoisseurship into education. He loathed the curricula which sought to control every aspect of a child’s experience in school. But he also distrusted the woolliness of unreflective teachers who were often just going along with tradition because it’s what we do. Education, as he saw it, was a messy human process, with aspects of culture and morality and subjective taste, as well as aspects of efficacy and science and objective research. He wanted teachers to be open to the cultural and individual, as well as the universal and rational. He wanted them to develop their own language to weigh up these different ways of judging and make informed, situated choices between them. Eisner knew that the only way that the art/science of teaching could be protected from industrialised knowledge-creation was to encourage teachers to take an active role in their own community of connoisseurs; for each and every one of them to become a researcher who could stand up for their own lived knowledge, and engage with each other’s.

How does this help us in counselling? Well the best counselling is messy and human. It is a moral and ethical as well as a technical process. As counsellors we are artists but we are not just artists. We are concerned with our impact in the world and with doing counselling well. How these different aspects – these different criteria – are to be balanced is an unsolvable conundrum. But what Eisner’s notion of the connoisseur highlights is that this unsolvable balancing act is one which we must continue to debate instead of ceding, frightened, to one particular discourse. It gives us confidence, I hope, to engage in this debate – to say, unashamedly: “My standard of judging is potentially more important than yours”. To say “I understand things from the inside which you, on the outside, cannot grasp, and vice versa”. To face up to the EBP and engage with it rather than rejecting it out-of-hand, or slavishly submitting to it. To place the lived relationship and therefore the client at the centre of our work and to learn from these, arguing once again in our clients’ best interests.

The notion of the community of connoisseurs gives us a language through which to place practical knowledge on a par with technical knowledge, and to take back some control of our work. It gives us confidence, I hope, to acknowledge the compromised, messy nature of relationship, and to reject the totalising, manualising impulses of industrial knowledge where they are inappropriate.

My Journey to Keele

Which brings me to the closing remarks of my paper, and the question: how do we get to a position in which our voices as connoisseurs can be heard?

The battle has been lost – for the moment – in teaching. I left the profession because I felt I was not enough, and that there were too few people to fight with, and too few words with which to argue. But we are fortunate that we already, in counselling, share aspects of connoisseurship in, for example, the supervisory relationship, and in conferences like this, today. This conference is an opportunity for connoisseurship; for us to find our voices. We won’t find our voices by looking above for someone to give them us: we need to look towards each other, and stand up for – and to – each other. But the point I want to leave you with is that we have to look outwards as well as inwards – to those who disagree as well as to those who agree. If our situated, creative local knowledge matters we need to be saying that to others as well as to each other. We need to stand up together and say: “This matters. It is important. You need to listen”.

Part of my journey has been to expose myself here today and to say: my research was worthwhile because, in that instance, the moral and political were worth more than the analytic and judgemental. The lived-experience was more important than the mark scheme. Part of my own journey has also been to switch from the academic route into blogging as an avenue for reaching more people outside of the bubble of those who agree with me: turning out as well as in. Which seems like a very good place to stop and turn outwards to you for questions…


* After I presented this paper, I attended a keynote presentation by Andrew Reeves (of BACP chair fame), and my views have somewhat changed. An article based on this paper will briefly explore this in an upcoming issue of Therapy Today.

Feast/famine

Some of the things I’ve written recently have been very negative. Most of the things. Living alone and listening to two hours of news a day ferments a pitch of negativity that, if left unchecked, would fester and develop into sores. It needs an outlet – it needs lancing. Normally it’d be her indoors who’d get an earful, but she’s currently displaced. You’ve been my displaced partner, you guys. You’re welcome!

But like any displaced partner you don’t just want to hear me whinge when I get home, so I thought I’d try to say something positive about what is good. It’s harder and scarier than saying something negative, but taking risks is the whole point of being in a relationship isn’t it. Isn’t it?

Anyway, I was also impelled to write this by seeing a therapist again. I’m seeing a therapist again you guys! Not because I’m in a particularly bad state at present – I’m cool – but because the times in my life that I’ve seen a therapist are times in which I’ve lived better and more intensely. I’d not want to see a therapist all the time because, well, money. And shame/self-respect. But therapy with the right person at the right time is ace. The right person at present is a chap who goes in for a bit of the psychosynthesis.

Psychosynthesis?

Sounds like some hypno-hippie-hipster pseudo-scientific bullshit right? It’s not, I don’t think. Maybe it is – I don’t know a lot about it (which is one of the reasons I like it), but I do know that, unlike most flavours of therapy, psychosynthesis seems pretty agnostic in its view of the person. Instead of trying to benevolently manipulate the client into agreement with their true state, it encourages them to make sense of themselves, often through a series of internal characters called subpersonalities or voices. These might be the much-maligned inner child who Freud was so interested in fiddling with, or they may be character-traits which emerge in certain situations, or relationship roles, or imagined future selves, or whatever. Unlike many of the other flavours of therapy there’s no prefigured plan about which voices each person should have. It’s creative and exciting and scary, and allows you – sorry, me – to explore and create with a sense of freedom and playfulness, instead of a fixation on uncovering underlying causes (psychoanalysis), becoming more pro-social (CBT, TA, other acronyms) or on polishing a turd (person-centred).

One of the things that has emerged for me in the course of therapy is the difference between those of my internal voices that speak from a place of feast, and those which speak from a place of famine [I think this distinction comes from the book ‘The Gift’, by Lewis Hyde, but I’m not sure]. Engaging with them has been fascinating personally, but has also thrown an interesting light on public life – especially on those aspects which make me so angry and negative.

Famine

The voices which speak from a place of famine are those concerned with conservation, preservation and safety. They’re voices dominated by the past and the future: they have learnt the hard way and won’t be bitten twice. They stockpile like a prepper, and are just about as likeable. They’re the voices which whine and wheedle: “Are you sure you’ve got enough strength for that?” or “What if you let him down – it’d be awful to promise something you couldn’t follow through on,” or “You need to be sure you’ve got this right, why don’t you check it again; much better you find the error before anyone else has the chance. In fact, it’s probably better no-one gets to see this at all”.

The voices of famine are afraid of overcommitting and will only take the most calculated and justifiable of risks. They don’t trust themselves very far, and they trust others even less: everything external will potentially let them down, so they seek to gather as much as possible inside themselves, and cut off from anything which can’t be consumed or controlled. And if the world must be engaged with, then it should be engaged with on the safest possible terms: scepticism, atomism, and safety-in-numbers-evidence.

Feast

The voices which speak from a place of feasting are – in me – rarer, but they are vital. They are enthusiastic, generous and profligate; they spend and give and trust recklessly in both themselves and the world. They speak from a place of strength but also vulnerability: in their confidence they expose themselves, consuming and enjoying and thereby making themselves less prepared. The feast can be enjoyed only because the past has been forgotten (ignored) and because the future is a place of hope and trust rather than fear. These voices sing “Expand, make connections with others; they won’t let you down,” and “Make yourself vulnerable: you’ll be held”, and “Believe, why not? You can change later.”

Voices which speak from a place of feast seek to expand, but not in order to control or make safe: their aim is to experience, now, what is good and to experience more of it. These voices are happy with science and evidence, but are not constrained by it as they have faith in something better, and are not tied to the past. They make sense of the world by immediate judgements rather than reasoned argument – aesthetics and virtue predominate: ‘how does it taste’ rather than ‘how many calories’; ‘is it the right thing to do’ rather than ‘can I get away with it’; ‘how am I moved’ rather than ‘what does my friend think’.

What has this got to do with the news and stuff? 

The more I’ve got angry about the flaccid paucity of public debate about, for example, the EU referendum, academies, tax prickery, etc., the clearer it has become to me that the only voices with which we permit ourselves to speak, in public, are voices of famine.

Take, for example, the queen of the sciences – the voice to which all other voices must defer, in contemporary debate – economics. Economics is the voice of famine in its purest form: it posits nothing outside of itself, and aims to control by understanding. Anything which exists outside of economics is either irrelevant or reduced to itself. In the EU referendum debate, for example, all of the arguments on both sides have been economics-based. No feast voice has been confident enough to stand up with an alternative. Can you imagine a pro-EU politician saying, as I believe they should: “the economic arguments are irrelevant: what matters is something bigger – a principle of shared humanity and generosity. The fact that we’re giving 151 million pounds or whatever a week to nations who are poorer than us is a good thing. We should be giving more”. It just wouldn’t happen.

And this is part of the power of the famine voices – both on a personal and political level – they’re inherently reasonable, and they’re right. You shouldn’t take a risk; there’s nothing to justify it. Because they are, by definition, reasonable and based on the best evidence, they cannot but win if engaged with on their own terms. Even when proved to be absolutely useless, they still win out. It hasn’t gone unremarked that very few economists predicted the whole global financial shemozzle, but public debate is dominated now more than ever by the economist. Just like someone suffering from OCD, we may not like the tools we have which keep us safe, and they may limit our lives severely, but they’re the only safety we know.

Similarly, if you listen to people in the 50s talk about their hopes for the future, they talk about 3-day-weeks and enjoying the present tense of leisure time and exploration and creativity and relaxation. Instead (and despite living in a much much much much safer world) we’ve put all our faith in a way of life which, broadly, makes us unhappy. But at least it’s safe.

The same can be seen in education

Read any education research from the 70s and you’ll find all kinds of idealism and hopefulness. You’ll find both sides of the educational divide framing their beliefs in terms of what society is for, and what counts as good or right. You’ll find people opining that as we become technically more adept at teaching and understand more about the brain, we’ll make space within education for all of the richness of human interaction and growth and creativity.

Look to current debates and you’ll find something else. Take, for example, the recent arguments over compulsory academisation. The main argument put up by the unions and the labours was evidential and economic. They argued, erroneously, that the evidence suggested that academisation made for worse results and that academies would cost more than LA-run schools. They disagree about the working-out of the sums, but fundamentally they agree. Fundamentally they agree that what matters in education is the speed at which a pre-defined skill can be learnt and demonstrated (parroted, or aped, depending on your jungle-based-animal-analogy of choice). They value the present purely on the basis of what it will be in the future: the child’s current experience is relevant only in terms of impact on future life. Sometimes this future-valuation is seen by good people as a bad thing, as when education is reduced to creating economy-fodder. Good people rightly baulk at the contention that experience x is good if and only if it will have a long-term positive effect on employability. But good people also use this method of future-evaluation because they don’t know any other: for example, when early education is judged in terms of later mental health or exam results.

In both cases both sides agree that the child’s experience of education is never to be valued on its own terms: its value is purely extrinsic, and situated in the future. Both sides speak with a voice stuck firmly in a place of fear and famine. Both sides speak with a voice that does not trust, and cannot enjoy or value what is happening right now. A voice which is scared of global racers and technologies and tiger economies and Finland.

What else might they have argued? Well, in these times it is hard to think of an argument which isn’t about efficiency and fear, and still harder to make that kind of feast-argument stand up against the famine-status-quo. These kinds of arguments just sound silly because they don’t play into the publicly-sanctioned language of debate. They might have said, for example, that even though ‘evidence’ suggests that method x gets better educational outcomes, method y is more humane, and feels more respectful. Ultimately I would argue that those of us who have worked with young children know, from those children, what is right better than those who watch from outside the relationship. We have been told.

The Family

Ultimately, though, I think argument is the wrong way to think about this. Arguing and debate are themselves modes of interaction which come out of famine: they are concerned with correctly organising what we already have rather than creating something better or discovering something new. Instead we ought to look to areas of life where the feast voices are established and undimmed. And chief amongst them is the home. The way we approach education is the complete inverse of the way that we parent (so long as we’re not hot-housing leopards or whatever). When we parent we delight in the moment, valuing the child intrinsically for what they are, trusting that they will grow and develop (without drawing the logical conclusion that, as a child is not yet as developed as they will become, they are therefore inferior and deficient). We are hopeful and confident and so instil hope and courage and boldness and creativity.

One part of the education system in which a more trusting, creative voice still holds some sway is in the Early Years (0-5). Why? Largely because, and excuse the sexism here, the Early Years has always been dominated by women, and sees education as a natural growth from care and parenting, rather than something which needs to be imposed to address a deficit. But even here the voices which speak from a place of joy and delight and feast are being drowned out by the famine voices of Whitehall and Ofsted and fearful parents.

Now, I’m not saying that we should all become hippies and just love one another. The feast and the famine each have their place. Feast voices can lead to the kind of excesses seen in Weimar Germany or Chelsea. Your man Nietzsche was all about the feast: he wrote about how the strong can afford to forget because they’re strong and can turn any situation to their advantage by dancing or raping or climbing golden trellises, or whatever else his blonde beasts got up to. But we’re not Nietzsche – the voices of famine are vital to living well with each other and staying safe and learning. Vital. But they’re not everything, and it’s these famine voices that dominate the public sphere at the moment. In private life it’s different: in spheres where the influence has traditionally been more female we find more of the voices of feast: child rearing, care, friendship. But in public life we’re afraid to take a risk and argue (or sing, shout, whatever) passionately and creatively for anything, especially when a famine voice of science, evidence, economics, or plain old fear stands opposite us.

My own voices are often in a similar (im)balance: the conservative voices win out through their exercise of fear, while the creative, vulnerable, trusting voices cower and fester. My problem is that all of those feast voices need to be heard, and if they’re not allowed a positive space they’ll emerge in potentially harmful and destructive ways. The parallel with public life is clear, as bozos like BoJo and Hitler come to fill the space vacated by good people saying interesting, creative, hopeful things. Scum like Farage and Trump speak to our need to believe in something bigger than just getting by, but these are feast voices which have gone off, badly, and become parodies of themselves. They inspire a belief in something bigger than fear when they are, in fact, governed by precisely the same fear as the famine voices on the other side. If a quieter, more vulnerable voice emerges which offers an alternative, creative way to be in public life, it’s drowned out by the bullshit and sinks without trace. A case in point: Gordon Browns.

Remember Gordon Browns?

No, probably not. It’s hard to look back on his premiership without the taint of the narrative he’s since been crowbarred into, but at the time he took power, he offered something new and, to me anyway, exciting: a moral compass. His first 100 days were characterised by quiet and principled good leadership. Although he was all about the moneys, he often eschewed arguments from economics, and spoke instead about bigger ideas of right and wrong. It was good. But he sank. He sank because he listened to the famine voices of well-intentioned but spineless advisors who told him to apologise to a bigot whom he had accurately characterised as a bigot. Instead of taking his serious job seriously, he succumbed to stupid advice and tried, excruciatingly, to smile.

What he offered in those early days was an alternative to the narrative of politics as mere application of evidence: his moral compass was such that it privileged what was right over what was reasonable, or rational (in an economic sense). He reached beyond the past and the fear of the future into something bigger. Because he didn’t couple this with a smiling gonk-face, and lost his nerve when he needed to stick to it (against all reasonable advice), he was hounded out by a hostile press who couldn’t understand what he offered and preferred the cleaner narrative lines of economics, bacon sandwiches, and smiling faces. The same will probably be true of the Corbyn, who also makes no sense to the voices of famine, and is insecure and timid when faced by their reasonableness.

Ugh, this has been quite the ramble. I find it harder to marshal and organise my thoughts into clear arguments when trying to be positive. But perhaps that is part of the problem with positive feast voices altogether. They speak from a place of insecurity and confidence. They’re mixed up. They’re paradoxical and unreasonable. They can be picked apart with analysis and critique. They’re wrong. But they’re also important beyond measure as, without them, we are just fitting in and going along and hoping that we don’t get found out. That’s no way to live.

 

Childhood Mental Health

Children are having a hard time of it at the moment. They’re experiencing higher levels of anxiety and stress, are more likely to be bullied, and have more suicidal thoughts. Nine years ago Unicef judged child well-being in the UK to be the lowest in Europe – 21st out of 21. And although we’d gone up a couple of places by 2010, spending cuts in young peoples’ services look likely to put us back down the table.

Good news, then, that the ASCL (one of the organisations that speaks for headteachers) has spoken out against the rise in child mental health issues, and the low standard and quantity of support children and schools receive in this area. Good news right? If a child is suffering from stress, anxiety, self-harm or suicidal thoughts we should get some support out to them, right? Surely, Phil, you’re not going to get your anti-psychiatry out on this most-vulnerable group of people and say they shouldn’t be treated as ill?

No, I’m not. Obviously it’s a good idea to support someone going through awful times. To borrow an analogy I have no right to borrow, when my arthritis manifested as an unsightly rash on my forehead, I was pretty keen to get that seen to. Treating the symptom was important.

P1110968
I wrote this in a rush. Sorry about that. I’ve put in some nice pictures of children on beaches to make up for it.

But what happens next? Do we allow the symptom to tell us something important about what is causing it, or do we ignore it because it’s being treated?

Many years ago your man Laing (King of the anti-psychiatrists) wrote that in a fundamentally insane world, a ‘schizophrenic’ response was more reasonable and rational than a ‘sane’ response. He saw mental illness as a symptom of bad situations in the world, and so alongside helping his patients change and adapt, he advocated for change in the world. He took their symptoms seriously, and listened to them. For example, writing in the late 60s Laing pointed out the number of patients he saw who were suffering mental ill-health because of the stress placed on them by impending nuclear war. As well as helping them to come to terms with this stress and to live with it, he also campaigned against nuclear arms and wrote prolifically on the subject, helping wider society understand the fears that some people found overwhelming them. That is, he took the symptom and allowed it to have a voice in shaping society as a whole.

So what does it tell us that the incidence of child mental ill-health is rising, and that child well-being is the lowest in Europe? What can it tell us about us? Nothing; it’s a statistic. To learn the lessons, we need to go out and listen to the children who are suffering, and we need to allow their voices to change us. I have my own theories (what little contact I still have with the education system has shown me ever-increasing efficiency inversely proportional to genuine human contact) but it’s not about my theories – it’s about the voice of the children – the most disempowered, unlistened-to, vulnerable group in society.

Last week I wrote about the importance of allowing narratives other than the physicalist narrative to emerge and have a voice in mental health. This is especially the case with children, who lack the social and political weight to make their voices – their understandings – heard in society at large. If we don’t make the effort to listen, and don’t allow their voices to change us, then in the coming years child mental health will go the same way as adult mental health and education: it will become increasingly efficient, evidence-based, physicalist, and inhumane. It will become a private issue for specialists rather than a public issue which everyone is obliged to engage with. It will treat symptoms more effectively (which means CBT-style coping strategies for children, ‘psycho-education’ for the parents, and medication for both as an all-too-reachable second line), and so will obscure the problem which caused the symptom, allowing us to ignore that problem and continue to push children through the system out into the global race. And soon, we hope, we’ll have an education system which rivals that of South Korea. Great.

Bad liver and a broken heart, and arthritis

Every few months the news remembers that mental health is a thing. The news remembers that some percent (high!) of people will experience something in their lives (sad!), and that it costs the economy some amount of money (high!) and that it doesn’t get the funding it deserves despite this (boo!). Then the news forgets.

Along the way a familiar argument is trotted out about parity of esteem – an argument which the news and everyone on it accepts wholesale. It runs thus:

we will only have a decent mental health service if the NHS and society at large view mental and physical health services as equally important and equally things.

I’m fine with this bit; they are both things, and mental health is underfunded and looked down on. The problem is the argument that lurks often unspoken behind this one. On the radio last week it came from a psychiatrist who pointed out:

you wouldn’t feel any shame or stigma about going to your doctor with a bad liver, so neither should you feel shame about going with depression, generalised anxiety, or a phobia. The stigma / shame / underfunding will persist unless and until we treat mental illness as a disease of the mind just like a disease of any other body part.

Let me be clear: there’s a compelling argument for more investment into mental health, and I think the stigma that many people experience around their own mental health issues is horrible. However, I think the argument which holds that mind diseases should be treated like physical diseases is flawed, and also masks a deeper, more important point. Before we get to that point, though, what’s the problem with the argument as it stands?

Is the Mind a Part of the Body?

Ugh. Really? And next you’re going to ask if the chair I’m sitting on really exists, right?

I’m sorry; I’ll be quick. It’s a typical move of the medical model and of scientistic thinking (as best exemplified by the angry atheists like Dawkins and Ince) to blandly presume that mind/emotion/fear/brain/habit/person are all basically the same thing. It’s assumed that all things are fundamentally physical and so are best explained in causal chains involving physical things like brain chemicals. Regardless of whether or not physical reductionism is true (it’s not), it is certainly not the case that the best explanation for mental illness is to find a precursor gene or chemical imbalance. The best explanation for a mental illness (as well as for mental wellness) varies according to the illness and the person that has it. I’m told by people who appear to know that there are a small number of mental illnesses for which good causal physical explanations exist, but for a lot more the brain chemistry and biology are ambiguous. Physicalising a mental illness does not give us the best explanations in every case, or even in most, and so does not help us to understand, diagnose or treat.

But it’s useful for destigmatising, right?

This is the point that your man will argue, and that everyone seems happy to accept:

we’re best off regarding mental health issues as at root physical because this has positive social impacts on the sufferers. Regardless of truth or explanatory power, if you consider your compulsive habits as being caused by a mind which is basically a brain which is just a bodily organ, you take less personal responsibility for the symptoms and this allows you to strip out a whole level of anxiety and shame.

So, is physicalising useful for destigmatising? My answer to this question is maybe. Sometimes it will be helpful, sometimes not. Self-involved show-off Stephen Fry talked about something similar in his bipolar documentaries, discussing how some people are incredibly grateful for a diagnosis which locates the problem outside of their person – their self, their choices, their personality – and merely in an ‘imbalanced’ body part. Some time ago I remember being given a tentative diagnosis of mild acute depression and finding it incredibly useful because it tied together a range of physical and emotional symptoms under one label which I could then take on in practical ways. In fact it was precisely by de-physicalising my physical symptoms (chronic tiredness, non-specific aches) that the diagnosis helped me.

But your man Fry also spoke with people who found such a physicalising/externalising diagnosis incredibly alienating. One person who had received a diagnosis which came pre-loaded with medication felt disempowered because the illness was something they had lived with and come to integrate into their sense of self, and this was suddenly being taken away from them. They were told that the understanding they had developed was wrong and silly and beside the point. Charles Watkins, the hero of Doris Lessing’s excellent Briefing for a Descent into Hell experiences a more extreme case of this, as his wonderfully rich and enriching experience is diagnosed as a psychotic episode which needs to be treated.

My point is this: any two people having the exact same experiences would make sense of them in different ways; it is the individual’s own telling of their story that determines the extent to which physicalising is useful or not.

Bloody Mary

But people do not exist in vacuums, and neither do they tell purely personal stories. They exist in societies which constrain and enrich the stories they can tell. So we can only address the question of whether physicalising mental illness helps us if we look at the broader social context. South Park makes this argument very well in an episode called Bloody Mary [you can watch it in French here]. Randy is caught drunk driving and told to attend AA. Despite his claims that he just drinks a bit too much, he is told at the meeting that he is not responsible for the disease of alcoholism – a disease of the brain just like any other body part – and that he can only get better if he accepts that responsibility for his disease lies outside of himself. Given this permission to externalise moral responsibility for his behaviour, he gets steadily worse and becomes properly alcoholic.

The individual’s understanding of their mental state – their story – needs to gain some purchase within society if it is to stick. It is no good me regarding my depression as an important part of my personality if my social context demands that I cheer up and get back to work. Similarly, Randy’s own understanding (that he was not alcoholic but just drank a bit too much) was silenced by his social context and he fell into a narrative which was much more harmful both to himself and to society at large.

And there’s another side to the social dimension: personal stories of mental health can challenge society and make it better. For example, given the fact that increasing numbers of primary school children are experiencing serious depression, we can ask whether this is a problem with their brain chemistry or a problem with our increasingly industrialised school system? By listening to the voices of the children who are making sense of their depression we can learn important lessons about our society. Conversely, by treating the children we can hush their voices and avoid their disruptive influence.

I’m not being facetious here: it’s not obvious whether the problem in this instance is brains or society. Almost certainly it’s both. Children do, sometimes, need to be treated with drugs to improve their lives. My point is that there is no definitive answer to be found at the end of a microscope or RCT: this is not a question of fact. It is a fundamentally moral question. Which brings me to the third and most important aspect of mental health which the physical=mental equation obscures:

The decisions we make about mental health are at root moral decisions. 

Because we are so unused to talking about moral issues, we often flee from these into the security of RCTs and evidence-based-practice. The comfort provided is twofold: not only have we worked out the best way to treat you, we have also denied you the ability to tell us anything about ourselves. We’re safe from contagion. In the case above, I would argue strongly that the rise in childhood depression is a moral indicator of one of our society’s most egregious faults. But increasingly this kind of argument is not available to me, because we’re all (in the name of good mental health and more funding for the NHS) buying into the parity of mental and physical. We just don’t have the discussion. The parity argument obscures our opportunity to really engage, on a moral and social level, with mental illness and wellness.

Is physical health best addressed by the medical model?

But I want to take this argument one final step further: not only should the treatment of mental illness be open to moral aspects, so, I think, should the treatment of physical illness. For the past three years I have been suffering with a chronic pain condition which has recently been diagnosed as “some kind of arthritis”. Along the way I’ve struggled immensely with pain – both in terms of its reduction and its meaning. While the medical system is great at prescribing drugs for pain (well, actually no it’s not – pain is an anomaly within the medical model: too variable, too subjective, too personal), it’s not very good at dealing with meanings. Because I played along obediently with the diagnosis-treatment approach to pain which I was offered, it took me about a year and a half’s treatment before I realised that the meaning of the pain was much more important than the pain itself, and that my focus on pain reduction was stopping me from engaging positively with the meaning component. It is only by moving to the moral sphere, where I have considered my pain in terms of the impact it has on my emotional and practical life, and on the life of others, that I have been able to live a better life.

Similarly, by treating diseases of affluence such as my own arthritis and other peoples’ diabetes and heart disease, we silence any chance these diseases might have had to talk back to us: to tell us something about how badly (morally badly) we’re living as a society.

I think a better argument for parity should aim not to reduce the mental to the physical, but to open both mental and physical health up to a broader kind of engagement. I think it would be better if both could come together to a middle ground where the whole person-in-society – including the moral and social components essential to living a good life – can be engaged in medical treatment. This idea is a struggle because in our increasingly technicalised professions, we have lost faith in our ability to engage with moral or cultural wholes, and prefer to focus increasingly on isolated elements.

The journey to recover or improve or integrate or ignore or live with or whatever verb you choose with mental or physical illness is not a simple one, and it is one which must be meaningful to the person who is suffering, and to the society of which they are a part. Externalising mental illness is one particular story. It is a powerful one, but it is not the only one.

Ethics, consequences, victims

Gideon’s little brother is a psychiatrist! Who would’ve thought it? Fair play to the plucky youngster, going against the family grain, dedicating himself to people instead of mone… Oh, hang on. No, Gideon’s little brother is the kind of psychiatrist you read about in Against Therapy – the kind you thought was a thing of the past. Gideon’s little brother has twice been reprimanded by the medical profession, once for falsifying prescriptions for an ‘escort girl’ (prostitute, no?) (6-month suspension(?)), and more recently for engaging in the archetypal therapist whatever-you-do-don’t-do-this behaviour: having an affair with a patient whom he later dumped, then threatened in order to keep her quiet. She attempted to take her own life on three occasions following the end of the ‘relationship’. To me this is akin to a secondary school teacher taking advantage of a sixth-form student – it has the same power imbalance and, depending on the patient, a similar degree of vulnerability. I’d expect a custodial sentence. He didn’t get one. He got barred from working as a psychiatrist again (good) at a hearing he didn’t even have to turn up to (bad). He has ruined someone’s life through the wilful disregard of pretty much every rule in the book, and doesn’t have to face any serious consequences.

The case reminded me of the kerfuffle over Hogan-Howe and the over-zealous investigation of child-sex allegations. I don’t want to get into the rights and wrongs of the way that “believe” is interpreted by idiots, or the (to my mind pointless) publishing of suspects’ names. What I’ve found anger-making is the way the story so easily focusses on high-profile establishment victims (retired heads of army, etc), and finds it so hard to focus on the many thousands of voiceless victims, who continue to be abused and unheard. Yes, it’s bad that your man who was a rank in a thing wasn’t told that the investigation had been dropped. It sounds horrible; I can’t imagine the disruption to his and his family’s life. Honestly, awful. I’d not wish it on anyone, and it was handled badly. But it just does not compare in any way at all to the experiences of those children who have been abused by high- or low-profile adults. The time given to each is hugely disproportionate, and, frankly, cowardly in the extreme. The space given to the victim of Gideon’s little brother’s callous misconduct is tiny – the extent to which justice could be said to be done by his disciplinary hearing is non-existent.

Anyway, in amongst all of this slightly coherent anger at the way that the disempowered are ignored as soon as they can be, I felt a warm fuzzy feeling that I am now working in a profession (counselling; very different to psychiatry) that takes its ethics seriously. Engaging with the ethical framework and reasoning our way through dilemmas related to this was a large part of my training as a counsellor. Unlike the psychiatrists who are only out to protect their own, our conduct always comes back to the client at its centre: we are governed by the interests of the person who has the least power and the quietest voice. Warm and fuzzy I felt to belong to a profession which takes its ethics seriously. So seriously that, as trainees and recently qualified counsellors we’re terrified of trying anything new or different or client-led, because the sanctions for behaving unethically are, we imagine, so stringent.

Warm and fuzzy, then, until I saw this – the proceedings of the disciplinary panel of the BACP (for anyone who doesn’t know, the BACP is the de facto regulator of the counselling profession. Joining it is not mandatory, but it’s exceedingly difficult to get a job if you’re not a member). I expected to see lists of people being struck off, referred on for criminal proceedings, or, if culpable of lesser misdemeanours, having their work monitored and their fitness to practise rigorously assessed. I expected to be able to read these in an easy-to-find part of the BACP’s website – they don’t appear in the magazine, presumably because they’re too numerous. But no, what I saw instead (and you can see too, if you can battle your way through the deliberately hard-to-read pages of tiny, awfully-written text, found in a shed round the back of the website) is a list of people who behaved pretty awfully being given essays to write.

Essays.

“What you did was wrong, and I hope you’re very ashamed of yourself. Now I want you to write me 1,000 words on the subject: ‘Why what I did was wrong, and why I promise I won’t ever do it again’.”

It sounds like I’m joking, but I’m not. Take Yvonne. Yvonne took in a client who had recently had a stillbirth. The client was told Yvonne was an expert, and presumably hoped to work through some of her traumatic experience with Yvonne’s help. When Yvonne arrived, though, she spent the first 15 minutes telling her client about her broken leg, and how she got it on holiday. When the client tried to turn the conversation back to her stillbirth, Yvonne went on to describe the unsatisfactory experience she’d had at hospital with her broken leg. Other things happened too, which I can’t even grasp, as the report is so thickly written.

Not abusive behaviour, Yvonne, but I certainly wouldn’t want to come to see you if I were in a vulnerable state. Nor do I want to belong to the same club as you. Fortunately the client complained, and her complaint was upheld. I’d have presumed that Yvonne would be removed from the BACP until such time as she could demonstrate her fitness to practise, or that she would be required to work under someone monitoring her practice, to ensure she was not continuing in this harmful way. But instead she’s been told that unless she writes an essay in the next six months and says she’s very, very sorry, she’ll have her membership of the BACP revoked.

I don’t want to come across as too facetious here: regulation is a difficult and complex question. I like that counselling is not regulated by government. Having experienced Ofsted in education, I know that the evils of over-regulation far outweigh the evils of self-regulation. And, to be fair to the BACP, they do revoke your membership if you fail to write the essay, have an affair with a student (I’m looking at you, Mr Pickles) or are convicted of child sex offences (Mr Fothergill). That’s something, but it’s not nearly enough. The hidden, apologetic, essay-requiring approach of the BACP feels so very self-serving. An organisation set up to serve clients rather than counsellors would shout from the rooftops about its investigations and be ruthless in removing or retraining counsellors who are incompetent and dangerous. It does neither, and thereby lets us all down.

Supply

I’ve written before about the idea of supervision for teachers, and how its existence might have made a difference to my own teaching life, and my decision to quit. Today I was going to write a little more about what supervision might look like in practice, and explain the approach of the Teachers Case Discussion Group that I am co-facilitating with Joan Fogel. Instead I found myself writing about my own case – one which I would take to supervision, if I had it. [I’ve changed details to keep the thing anonymous].

I am a supply teacher. It pains me to say this, but right now it is what I am. Recently I spent three days covering a junior school class whose teacher had walked out at the beginning of the term, six weeks earlier. In the weeks that followed, the school tried to fill the vacancy, but no-one wanted to take the job. Successive supply teachers came and went, some so shaken that they walked out before the working day was over, none lasting more than a week.

My three days started at the beginning of week seven. By this time the children in this class were like any class of children would be if they were put through seven weeks of instability, which is to say that they were “challenging”. Without consistent boundaries and, more importantly, a sense that their teacher cared about them or valued them, they had become disruptive, disaffected, scared. They had learnt that putting their faith in teachers was a mistake; that teachers are not to be trusted with this kind of investment, and that boundaries are there to be relentlessly prodded and pushed. While it was a difficult week’s work, especially for someone like me who prides themselves on behaviour management, it wasn’t hard to see past the bluster and bravado to the fear that lay beneath, and to be affected by it.

A difficult week’s work, but as a supply it shouldn’t have been too problematic. As a supply I could – if I wished – fail to turn up to the next day’s work without consequence; I would never be inspected, held to account or performance-managed; and, further, I did not in fact need this work to keep the wolf from the door. But in the midst of these three days of stressful but unpressurised teaching, I found myself slipping deep into my own fear once again. I was unable to sleep, and when I did sleep I dreamt of accountability meetings in which I was found wanting, of children lost off-site, and of disruptive voices successfully challenging my authority.

Doubtless the assignment was a challenging one, and I was pushed to the limits of my teaching skills (and my emotional range), but that wasn’t where the fear came from. The school were happy with what I had done, and asked if I could come back full-time until Christmas. I felt supported by the management, who desperately wanted the children to get the consistency and care that they deserved. They were not motivated by the fear-driven systems of HMI and accountability. I shared with the headteacher my worries about the progress I would be expected to get the children to make if I took the job, and was reassured that the school’s priority was the same as my own: to give the children back the kind of experience of school that they deserved. Progress would come later. And yet, on the third day, at lunchtime (cold leftovers, eaten while sat marking in my classroom), I decided not to take the job.

It was a gut-wrenching decision, which I made and un-made a number of times that lunch-break, but ultimately I took the selfish route and turned my back on the needs of those children. It is this – the knowledge that these children rely on you and cannot seek alternative services elsewhere – which makes teaching uniquely rewarding and uniquely stressful. Choosing not to take the job was a moral act with which I have not fully come to terms. Lewis Hyde writes that teaching, counselling and nursing ought not to be thought of as jobs existing solely in the monetary economy, but as belonging in large part to the gift economy. On that lunchtime I felt torn between the two, and ended up siding with the reasonable, rational side of myself.

It was with a sense of shame that I told the headteacher that I was going to have to turn down the job – to selfishly prioritise my home life and career ambitions over the needs of 26 children. It was the same shame I felt in admitting to my partner, and to you now, that I chose to put myself first.

Now, I’m not going to claim that a year’s training as a counsellor makes me markedly more emotionally stable or fluent than your man on the street, but you might expect that a good deal of therapy, personal growth and learning about stress and anxiety would equip me to deal with the emotional burdens of a few days’ challenging work, and the offer of a short-term contract. If I, without the pressure of mortgages, accountability meetings, statutory assessments or reputation, can slip so easily into the fear, what of those working full-time with all of these pressures?

I came home from work each night at about 7:30 that week, ate the dinner that my partner had made for me, and went to bed to fail to sleep. If you had asked me then whether I would like to take a bus across town to talk to a bunch of teachers about my situation, I would have said no. Despite a year’s training in the benefits of opening up, all I wanted to do was close down and hide. Now, a couple of weeks afterwards, I can use this blog as my supervisor, but at the time, opening up to a group of strangers felt impossible and dangerous.

And yet gathering a group of teachers (who do not know each other outside of the group) on a week-night to talk about the emotional impact of their working life is precisely what Joan and I have done. The proposition of joining such a group is difficult and scary, but I believe it is absolutely vital. To find out exactly what goes on, please read on here.