Mental health, mental illness, insanity, wellbeing, distress, madness…

Hello, my name is Phil, and I don’t know what mental illness is.

Hello Phil.

I have an MSc in Counselling Psychology, work full time as a counsellor, and I don’t know what mental illness is. Neither do I know how it differs from mental health. I have a vague, felt sense of what these terms mean, but I don’t know.

Now admittedly, the ‘psychology’ aspect of my MSc wasn’t the kind that has lab rats and Big Brother body-language experts and all that, but still, you’d think someone who’s qualified to work with people who are suffering from mental distress (there’s another ambiguous term to throw into the mix) might have a firmer grasp on such basic terms as ‘mental health’ and ‘mental illness’.

But I don’t. Neither do I really understand how ‘mental illness’ differs from ‘madness’ or ‘insanity’, or what place ‘wellbeing’ takes in relation to them.

A layperson [niche reference which doesn’t work in gender-neutral terms]
Sometimes I feel bad about this, and worry that I’m alone in my confusion – that I’ve missed the obvious distinction which everyone else was told about while I was in the toilet. But most of the time I think we’re all just operating in the dark. If you listen to people talk and write about any area of mental health there’s a real muddled mishmash of terms and attitudes which, to me, betrays a fundamental incoherence in the way that mental health/illness is understood both by the professional and the layperson.

Part of the problem is that the world of counselling is a bit scared of ‘proper’ mental illness – the kind we meant when, as politically-incorrect children, we talked about people being ‘psychos’ or ‘mental’. We counsellors often shy away from a world we’re taught to see as too serious for our woolly skills (and too physical in cause). Some of us believe that we can help people with ‘proper’ mental illness deal with their problems, but the overriding discourse says that, at a certain point, we have to pass these people on to the big boys: the psychiatrists with the ability to prescribe and to section.

So there’s a whole big chunk of people who deal in mental health but feel they are not permitted to talk about the ‘real’ part, only the minor versions around the edges. And that in itself is symptomatic of the way that mental health and mental illness are (not) spoken about. We’re always banging on about destigmatising mental health issues but there’s a big stigma – a taboo – about deviating from this woolly, all-embracing, muddled approach to mental health [there’s the opposite taboo too, which I’ll deal with below].

There’s a taboo, in other words, about being clear about mental health and illness. A taboo which comes from a good place – not wanting to say something offensive about someone who is vulnerable – but whose effect is emphatically not good. By not speaking clearly we help no-one, in the long run, and we counsellors in particular reduce our relevance and our stake in the argument to define what counts as mental health. In the interests of clarity, then, here are some of the things I’ve come across recently that have confused me:

Mental Health = Mental Illness?

A little while ago [ed. quite a while now, I’ve redrafted this many many times, and held off on pressing ‘Publish’ because breaking taboos is scary] there was a knife attack in Russell Square. Initially it was thought to be a terrorist attack, but the next day on the radio I heard your man from the police saying that it wasn’t terrorism what done it, it was a mental health problem.

I sat up at that phrase. Mental health?


Without ever explicitly working it out, I think I’d always associated mental health with the softer end of the spectrum – the kind of thing we feel confident to deal with as counsellors: anxiety, distress, questions about purpose and meaning, that kind of thing. I’d linked it subconsciously with things like ‘wellbeing’ – with the everyday kind of things people mean when they say that 1 in 4 of us will experience mental health issues at some point in our lives. Stuff within the normal range of human experience. Not stuff that would lead you to kill a stranger with a knife.

That kind of thing I always, unconsciously, thought of as mental illness. Mental illness was seen, when I was young (and still is in the tabloid press), as a kind of bogey-man – the kind of thing that the headline writers want you to think of when they say ‘mental patients’ in front pages like the one on the right.

Mental illness = madness?

When we were children, the people we referred to as ‘mental’ were the same people we’d call ‘mad’. So is mental illness the same as madness? Is one a subset of the other? Clearly in The Sun’s mind ‘mental patients’ = ‘mental illness’ = ‘mental’ = ‘mad’, in the old-fashioned sense of the word. Mental patients are killers – the kind of people whose behaviour or thought is way beyond anything a normal person could understand. Where should we stand in relation to this running-together of madness and mental illness?

On the one hand, it’s pretty reprehensible, I think. It deliberately links all manner of mental illness with threats to your (children’s, granny’s) safety, with no factual basis. It plays to an inaccurate picture, marginalising vulnerable people in order to sell papers.

But on the other hand, the everyday language notion of ‘mad’ or ‘insane’ is less obviously reprehensible. It is hard not to think of someone who deliberately stabs a stranger as insane, almost by definition. Their actions and thoughts are so far outside the normal range of human experience that they are ‘beyond’. So does ‘mad’ mean the same as ‘mentally ill’?

Clearly they’re not co-extensive – there are people who we describe as suffering from a mental illness whom we wouldn’t want to say are mad. Those suffering from major depression, for example, I wouldn’t want to describe as mad, but I would want to describe as suffering from a mental illness. But there is a subset of those we define as mentally ill who would also be judged ‘mad’ in normal language: those suffering from paranoid schizophrenia, for example, or experiencing psychotic delusions.

A continuum?

We seem, then, to have a continuum which runs from wellbeing at the softest end (where “we all have mental health” [which, if I’m being really cynical, seems to mean we all have emotions]), through mental health, which bleeds messily into mental illness, which at its extreme is madness – the kind you can get sectioned for.

Now it may be that the policeman who set this all off just misspoke: he meant mental illness (the type that is co-extensive with madness), but said mental health. But even if he did, his misspeaking betrays a muddledness which lies just under the surface of the way we talk about everything on the continuum.

This muddle is partly borne of the fact that it’s not at all clear who decides how it works or what standards should be applied along its length – there are no authoritative authorities to defer to. And it’s made worse by the many taboos and fears in this area, which mean that we all discuss the continuum in murky, euphemistic and underhand ways.

I’ve tried to get clearer in my mind by slotting the different parts of the continuum together:

  • At the softer end we have a focus on the societal causes, as in the myriad articles and reports focusing on the pressures on Young People from social media and schools and adverts and models and so on. You also see it in the articles which address the way that we organise our working lives, arguing that better mental health (sometimes ‘wellbeing’) could be encouraged through more humane working practices. Individuals are encouraged at this level to take responsibility for their mental wellbeing/health, by seeking out counselling or rearranging the furniture of their lives. Society around them is encouraged to make space for this, as their needs are, in some sense, normal.
  • As we move down the continuum we encounter those issues which counsellors typically feel justified in dealing with – relationship crises, mental distress (a very strange term which I see popping up more and more), obsessive behaviour, minor depression, PTS, generalised anxiety, that sort of thing. In all of these the individual is held to be capable of repairing themselves in the right relationship, though the doctor might need to be called to medicate if the intensity gets too high. Notice, though, how already the focus has switched from society to the individual. There’s much less written about how the workplace can change in order to support those dealing with OCD, for example. Instead the rhetoric here is all to do with destigmatising: these people are still normal; they’re just a bit out-of-sorts. They’re still held to be responsible for sorting themselves out, but they’ll need someone to support them through the process.
  • Further down the continuum you find mental illnesses of a kind that most counsellors are afraid to work with, and most friends and relatives might consider themselves unable to deal with alone. At this ‘harder’ end you find major depression, psychoses, personality disorders, PTSD – that kind of thing. These people are less ‘normal’, and less responsible for their situation. Neither society nor the individual is held to be in any way responsible for the cause or the solution. Instead the cause is defined as genetic or chemical, their distress is made private, and the treatment imposed.

On this continuum, then, responsibility, agency, and normality are key factors. They are, in the way I’m picturing it, proportional to one another: the more ‘normal’ your experience, the more responsibility you have to sort it out yourself, and the more agency you are taken to have in doing this. The less normal your experience, the less you are seen as able to sort it out, and the less responsibility you are expected to take in doing so. In addition to this, you can add social responsibility, which is also proportional: if the individual’s needs are ‘normal’, we as a society are obliged to help them out in our everyday lives. If their needs are abnormal, we are under no such obligation.

The bleeding continuum

What seems to me to be happening now is that each part of the spectrum is bleeding into the other. The DSM-inspired hard end is encroaching on the softer middle and even the soft end, as the language of mental illness (symptoms, chemical causes, medical treatments, parity of esteem, little individual responsibility or agency) spills into the way we describe less-extreme forms of mental health issues. This comes largely from the ‘experts’ who have a vested interest in turning ever larger numbers of people into patients, and use the DSM to achieve this. But it can also be seen in the way that we woolly liberals advocate for more and more expert mental health provision at all levels. While this is done from noble intentions, the effect is to imply that even lower-level problems need to be sorted out by experts, and that these problems are not the responsibility of the individual or the system they’re a part of. For example, the response to increasing levels of childhood depression has been to bring more mental health services (=, in many cases, more drugs) into schools, instead of encouraging us all to see these problems as ‘normal’ and so seeing society as responsible for changing the system that creates depression in children.

And then there’s the backlash to the hard end’s relentless march, as in this kind of headline, which seeks to reclaim mental illness as softer than the hard-end see it. They seek to limit the extent to which the common-person’s conception of mental health/illness is shaped by those at the extremes. Without wishing to get too deep into the oneupmanship of the ‘they’re not Muslims, they’re insane; they’re not insane, they’re men; they’re not men, they’re evil; they’re not evil, they’re let down by our cultural pessimism‘ Officer-Krupke bullshit, there’s an important re-balancing of the continuum away from the hard end here. Headlines like the one on the right argue that we need to push back at the definition of ‘madness=extreme mental illness’, so that those who are closer to the soft end don’t get infected by the fear created by headlines like the Sun’s.

That is, the soft end (woolly liberals) has argued successfully that lots of people at the soft end should be seen as mentally ill so that they can get treatment, and is now biting back at the hard end because there’s a risk these people might be re-stigmatised by the focus on ‘mad’ people. This rebalancing is vital (though it could all have been avoided if we’d come up with a different term for low-level mental illness in the first place), but because it’s being done with a sloppy attention to detail, it all ends up feeling confused and unhelpful.

Take, for example, the article on the left. One bit of beef I have with headlines like these is that they focus an awful lot on stigma and an awful little on truth. In the call to stop calling terrorists mentally-ill (which was only done because they wanted to stop calling them religious), there’s very little interest in finding out if they actually are mentally ill. Regardless of stigma, if they are mentally ill, then failing to call them that is a regressive and unhelpful kind of self-censorship. In actual fact, the article concerned is quite well-argued: the author explains that there are many contributing factors to terrorism – that you need to take into account cultural, social, and individual purpose factors to understand how someone becomes a terrorist. In amongst all of this, though, he admits that mental health is a contributing factor, so clearly terrorism is in part a mental health issue, as well as a cultural issue, a social issue, and an individual issue. In his laudable desire to combat the way that mental illness is demonised by the tabloid press, he ends up openly contradicting himself and making an argument that will not change anyone’s mind. This kind of muddle will not help anyone, in the long run, and is intellectually dishonest.

More recently, articles concerning Donald Trump’s mental health have played the same back-and-forth game, as ‘experts’ ‘diagnosed’ Trump with various conditions and then faced a backlash from those who argued it was wrong to equate evil/stupidity/meanness with mental illness. Neither side was particularly concerned with the truth of the matter: the experts wanted some official way to mark Trump’s idiocy, while the backlashers were scared that mental illness was getting yet another bogeyman added to its number. Truth, here as elsewhere, mattered little to either side, and so we ended up getting even more muddled.


Another area of much muddle is in the constant call for reducing stigma.

I wrote about stigma a little while ago. I’m not entirely sure stigma is a bad thing. And I think a big part of my problem with the anti-stigmatas is precisely the sliding scale I’ve been banging on about. I think stigma at the softer end is by-and-large a bad thing. The fear or shame which holds someone back from talking to their GP about minor depression or anxiety, for example, is helpful to no-one. In the middle of the scale it’s less clear: stigma here has bad effects but might also provoke action (for example, the person who seeks professional help when they hear voices, in part because they know that if they told their friends they’d probably not understand). And at the furthest reaches, it’s hard to imagine why a society wouldn’t want to say that madness is not a good way to be, and society saying that it’s not a good way to be will, in someone who feels that way, induce a feeling of stigma.

When we talk about reducing stigma we’re almost always aiming our comments at the vast majority of the 1in4 who will experience a mental health issue this year. The vast majority of them are experiencing more-intense forms of the problems that everybody faces: stress becomes free-floating anxiety, feeling down becomes depression, comfort eating becomes an eating disorder. These are the things that we don’t want to stigmatise, but the reasoning is wrong: we shouldn’t, as is so often argued, destigmatise them because they’re analogous to physical illness, we should destigmatise them because they’re part of the normal picture of human life – just a more extreme version. They should be destigmatised because that’s a more caring and humane way to approach them, and one which will benefit all of us as we change society to make them less likely to happen.

The other side of mental illness – personality disorders, or ‘madness’ as folk psychology knows it – is a different case. Here, we should aim to destigmatise to the extent that this helps people who are suffering take less personal, moral responsibility for their problems. But we should also make clear that these experiences are outside of the normal expectations of human life. These are like physical illnesses. But with this de-agenting (to reduce stigma) we also strip away humanity. These are high stakes to play with, and the bleeding of the analogy-with-physical-illness argument into the lower levels of mental illness is not helpful: applying the same reasoning to the 1in4 is silly and harmful and confusing.

In fact, this misplaced analogy risks stigmatising normal experience, by putting it within the purview of mental health rather than putting the responsibility on the individual and on society to make conditions more amenable to a good life. For example, one reason that someone with low-level mental health issues may feel more stigma in coming forward to seek support is precisely because higher-level mental health issues have been destigmatised and put in the same category as theirs. The same person may previously have sought changes within their relationships and habits (i.e. taken agency and responsibility for themselves) but will now be encouraged instead to privatise their distress, rendering it the responsibility not of society but of professionals.


I started writing this six months ago, and have struggled to come up with anything coherent. I apologise for this. If you’ve made it this far, thank you for your patience.

Normally I can’t stand it when people publish things that are unedited or confused or badly-argued, and then apologise for them. It’s better not to put them up at all, until you’ve done a decent job.

But in this instance I’m making an exception, because I’ve spent months sporadically trying to put this into shape and I just can’t: partly this is because I don’t have the intellectual chops I used to, but I think it also reflects the muddledness inherent in the subject matter. It’s so confused there’s nothing to do but be confused. I don’t have a pithy conclusion, but I do feel this is really important. The only way out of the muddle, I think, is to talk openly and honestly about what we all make of mental health / wellbeing / illness / madness and try to come to a better understanding of how we, as a society, want to understand them.

South Asians and their taboos

Waiting by passport control for my South Asian partner [for that is her title], I came across an article on the BBC news website last week about South Asian attitudes to mental illness. Normally I’d skim-read an article like this before passing on to something juicier or fluffier, but having just returned to UK connectivity I was (I’m ashamed to say) hungry for digital content of any texture, so read it properly.

It was interesting enough, and had a picture of Monty Panesar at the top, which I liked. It reminded me of a simpler time in English sport, before we got good at things. But there was something in the tone of the article that made me uncomfortable.

It wasn’t the premise of the piece: the question “Why do many South Asians regard mental illness as taboo?” is a very interesting one. The way that different communities regard mental health and illness is a fascinating and important subject for public policy and private understanding. My partner and I often struggle to understand each other’s preconceptions about mental health, which are partly the products of our different cultural upbringings. Encountering assumptions that are foreign to one’s own helps one to become better aware of their contingency and arbitrariness.

South Asians – “a particular problem”

What troubled me was the way that ‘regarding mental illness as taboo’ was equated in the article with ‘wrong’, or, at the very least, ‘a big problem’. For example, the Professor man describes the large role that shame plays in South Asian cultures, and tells us that South Asians do not consider mental illness to be a medical issue, instead holding “superstitious belief[s] that there is something they did in their previous life and they’re being punished”. Later in the article a report is cited which found that mental illness issues were rarely spoken about or allowed out of the house because of fears around the status of the family, and worries about arranged marriages being called off.

The implication of these various statements is, to my mind, clear: South Asian communities are doing mental health badly. We as readers are invited to conclude with the article that it is wrong for shame to play such a large role in their culture, and for mental illness to be considered a moral rather than a medical issue. It is wrong that family status and arranged marriages are put before individual mental health.

The flip-side of them doing it wrong, of course, being that we (read: middle-class, western, mainly white) do it right. We’ve got the right balance of shame and openness, and have moved beyond primitive notions of moral responsibility to a much more sophisticated medical model (or, if we haven’t, we’re certainly working towards it through constant de-stigmatisation and medicalisation). Further, we hold – correctly, mind – the needs of the individual higher than the needs of the community.

I don’t necessarily disagree with the above arguments (I do). My problem lies with the way that the arguments are (not) made, and the way that this allows a degree of unthinking racism to be smuggled past the reader.

This may sound extreme. Read the article, see how racist it feels. Maybe it doesn’t. It didn’t to me when I read it. But that, I think, is because the article – like hundreds of others – doesn’t make its arguments explicit. If those arguments are held up to the light you can see how contentious they are, but to avoid any controversy they’re smuggled through the back door, in unspoken assumptions.

The enthymeme

Hiding the most important parts of your argument in assumptions you don’t spell out is a classic philosophy trick, called an enthymeme. What are the enthymemes at work in the article? One of the main arguments being hushed through is that medicalising mental illness is a good thing. Another is that it is bad for a community to use shame to regulate itself. A third is that the needs of individuals should have precedence over the needs of family or community. There are others, especially once you get to the report, but let’s stick with those three.

I’ve written before about my various beef with the first assumption, so won’t go into it much here. Suffice it to say that I think medicalisation is not obviously a good thing. At the very least, it risks narrowing the narratives available to individuals to explain and own their troubles, potentially disempowering and harming them.

But what of enthymemes two and three? Is shame a bad thing for a community to use to regulate itself? Should the needs of the individual be put above the needs of the group?


It’s easy for us to look down on shame, especially when we find it in other cultures, as it seems such an old-fashioned and anti-fun emotion. But unless you’re Carl Rogers (and I sincerely hope you’re not) there are some pretty good reasons to think that shame is essential for humans to live with one another.

For example, just off the top of my head, shame is one of the primary forces that stops me from becoming addicted to computer games. I know these are a drain on my life and cause my arthritis to flare up in ways that have terrible knock-on consequences for my physical and mental health. But it is not this knowledge that motivates me to stop – it’s the shame of being caught.

Shame is also a primary tool in the education of children. Much of the work of growing up is working out how to negotiate the balance between one’s bodily desires and the desires of others. It is shame, in the first instance, that helps a child to regulate their needs, as they seek approval from significant care-givers, and try to avoid losing this. Later on you might dress the shame up with rational argument but ultimately it’s the shame that does the work.

I know it’s unfashionable to say this, but shame does a huge amount of work in western as well as South Asian communities, as it should do. It has huge limitations, and a lot of the work that gets done in therapy is aimed at undoing unhelpful feelings of shame. But the point at which shame ceases to be useful and becomes harmful is not an obvious one, and criticising the South Asians for drawing that line in a different place from westerners is not a useful response.

What would be useful is if the article had made clear its assumption that we’ve got the right line in the liberal West. At least that way the reader would be invited to question this, and understand the wider relevance of shame within a community.

Individual vs family

In the absence of such clarity, how might we find out how much shame is the right amount? Well, a simple way to do this would be to look at its effects: who gets hurt, who gets helped? In the article there is much focus on the harmful effects of shame on individuals, and an acknowledgement that this is often done because of the perceived needs of the family or group. The voices we hear are those individuals who have been harmed by shame, and rightly so – their voices need to be heard.

The voices that are not heard, though, are the voices of families and groups who have been helped to stay together by shame. And because of this we are not able to explore, within the article, whether or not the trade-off the South Asian communities have arrived at is a good one. As this is an article on a western website, the author expects us to unthinkingly accept that any sacrifice required of an individual in the name of family or group is wrong, problematic, or backwards. But is this so?

Every culture balances the needs of the individual and the group in various ways. Obviously. Middle-class white Britain, for example, has embraced a kind of individualistic liberalism over the past 50 years which holds that family is very important so long as the individual chooses to be a part of it. We generally look up to people who sacrifice some of their own happiness to help their families (because family is important), but we do not look down on those who do not (because it was their choice). In other words, family is a good thing if and only if the individual wants it to be. In all things the individual is the final arbiter where any good is concerned.

The picture is different, I think, outside of that middle-class white bubble, but let’s not get into that now. As a white middle-class Brit, I am grateful for the freedom which individualism brings. It’s allowed me to make choices which have aimed at my individual flourishing, regardless of family or social expectations. But I also regret that some of my needs were placed higher than those of my family. For example, I was not required to visit my Grandfather when he was living in an old people’s home and, being a somewhat emotionally awkward twenty-something, didn’t choose to visit him. I regret my choices then and I regret that I didn’t belong to a culture in which my actions would have been shameful.

I regret, too, that it is not shameful for me to avoid ever talking to my neighbours, as this kind of shame would make me a better, more connected, happier human being. Without the societal expectation that I commit to something I don’t want to do, I find myself unable and unwilling to step out of my individual comfort zone and become a better person.

Ok Phil, but why spend so long pedantically tackling a pretty bland article?

Well, while the article itself may be bland, the trend it is a part of is not.

This article, like so many others we read every week about mental illness, unthinkingly holds that mental illness is a neutral object, capable of being observed free of any cultural baggage.

Mental illness is not a thing

But mental illness is not a thing. It does not exist. It cannot be treated as an object which is the same in one community as it is in another. In this respect it is not like physical illness. It is a cultural construct, as your man Foucault spent a huge chunk of his life riffing about (see this for a short, typically blinkered description of Foucault and the anti-psychiatrists). Our conception of mental illness is connected with many other aspects of our culture, including personal motivation, family structures, rituals, habits and so on. Medicalising mental health fits with Western materialistic (not in the hippy sense) individualism, but it doesn’t necessarily fit with a more collectivist culture.

By adopting a Western conception of mental illness, South Asian cultures would probably gain something (lower suicide rates, lower incidences of anxiety or whatever you’re measuring – that kind of thing) but also probably lose something. It might be that they lose something insignificant, but it might also be that they lose something of deep importance. We don’t know, because the article does not address this, and, by refusing to fess up to its enthymemes, tries to stop us from addressing it.

My gut feeling is that the price for adopting a western conception of mental illness would be pretty high. It might include the loss of established truths and norms which provide comfort and security, the loosening of family ties built through expectation and rituals of respect, and the diluting of cultural identity. Whether or not this is a good bargain is not clear, and not answerable by those, like myself, who are not a direct part of the South Asian cultures in question. And neither is it answerable by those who adopt an unthinking scientism where mental health is concerned. The ‘truth’ about mental health is ultimately not purely a physical one, but a cultural one too.

The South Asians who are quoted in the article are, to my mind, its only saving grace, as they are arguing from within that their culture can and should change. These are important voices to hear. What’s missing in this article is the other side – the voices of those South Asians who value the traditional role that shame plays, for example, or who feel that the Western conception of mental health would not fit with their community in other ways. Their voices are unthinkingly erased from this account, because of the assumption that this is not a cultural issue but one of straightforward scientific misunderstanding.

Stripped of its ‘neutral’ surface dressing, an article like this which tells South Asians that they’re bad at mental health is straightforward cultural imperialism which borders on racism. That we don’t recognise it as such is testament to the power that the medicalised, individualised conception of mental illness has amongst us today. It’s become so much a part of our cultural furniture that we don’t even know it’s there.

What’s needed instead of one-sided dismissals is a genuine discussion about the broader cultural context of mental health – one which acknowledges that those on both sides of any cultural divide can learn from each other. Slating the South Asians isn’t good for them because it’s racist and plays into a stereotype of backwardness and rigid hierarchy and anti-science. But it’s not good for ‘us’ either, as it stops us from gaining insight into ourselves, and ideas from others.

Ultimately articles like this constrain rather than enable understanding. They are fundamentally conservative, and, under the cloak of ‘helping backwards Others’, serve mainly to bolster our own sense of right, preventing us from ever asking the question of the place mental health holds in our own culture: is it as good as all that? And that’s a shame.

Footsoldiers or Connoisseurs

(Paper presented at the Keele Counselling Conference on 7/5/16)

When the opportunity to present at the conference came up, my first thought was: what’s the point? Why bother? I’ve got nothing important to say and even if I did it wouldn’t change anything anyway.

For anyone who knows me and knows how passionate I am about counselling and about education and research, that would’ve come as something of a shock; I’m normally the first to jump at opportunities like this. And it shocked me as well. The more I dwelt on this shock and the negativity, the more I thought that I did have something I wanted to say: not to talk about my research, but to tell the story of doing the research – the story which ended with me feeling so negative and disempowered.

We’ll hear a lot of positive and inspirational things this weekend about creative research. My paper is going to sound very negative next to them, but I hope this negativity can serve a useful purpose. I hope that my story of isolation will resonate with others’ experiences, and highlight the danger that faces us when we, as practitioners, are separated from the knowledge creators. I also hope that the journey I’ve been on may gesture towards a different way to think about ourselves as professionals, and about what knowledge in counselling could mean.

Research, Knowledge and Fear

I’m going to start, then, with a very brief description of my Masters dissertation. My plan was to investigate my own identity as a white, heterosexual, middle-class man; to look at the privileges that this conferred and how I often failed to acknowledge or engage with these. I wanted to challenge my insider safety and security by involving others in the process – others who didn’t belong to the groups I belong to – others who could challenge and change me.

Fearing that any established method I chose would merely repeat and reinforce my privilege, I adopted an anti-methodological methodology. I hoped to ‘meet’ my participants, in Buber’s sense, with as few technical or power-full impediments as possible. So I sought dialogue – meeting – with Others, with no pre-set method at all except to engage and to keep on engaging. I had no criteria guiding the research except those which emerged in discussion and debate. I was the author and took responsibility for the work, but was not in complete control at any stage.

What did this look like in practical terms? Well, it meant holding an initial dialogue between myself and my participants which focused on identity (but was otherwise unstructured). Following this, both my participants and I would reflect on the transcript of that discussion and engage in further dialogue about these reflections, both via email and in person. This process would continue, spiralling hermeneutically towards a better, richer understanding of our encounters. The work would evolve in dialogue with my participants, rather than being an analysis of this dialogue.

So what happened? Well, it was a complex study, but one of the main threads that runs through the dissertation – and that I want to focus on today – is the way in which, after each dialogue, I would go away and try to understand what had occurred, and then share this attempt at understanding. And each time I shared this attempt at understanding, I would be told in response: “You’re trying to make this too clean, Phil – too final – too sensible”. I was told:  “You’re trying to understand it – to stand underneath it and justify and encompass it all”. And further, I was told that this movement was symptomatic of a privilege which seeks to encompass and erase difference.

As the piece developed, then, my participants were telling me that my goal of telling a clear story, or even of just plain understanding at all, was itself the goal of a privilege which whitewashed and denied difference. I was invited instead to sit with the discord, to hear rather than understand; to allow the project to outgrow me.

I found this very difficult, and I shared these difficulties with my participants in a way which itself felt exposing and uncomfortable. But ultimately it was these moral and political criteria which guided the writing of the dissertation. I decided, in dialogue with my participants, that the moral and political imperative called upon me to include all of our voices, often uncommented upon, instead of rigorous analysis and clear explanation. I spent the majority of my allotted 20,000 words on these dialogues, and trusted to my reader that what mattered would come through in the writing.

The work was hugely worthwhile for me and, I hope, for my participants, and I don’t regret it. The learning I took away was of a moral, emotional and political nature, centring on what it means to be defined by others, and how unethical it can be to resist this. I have kept it with me and continue to learn from it. But the practical consequence of going off-piste in my research was that I got a much worse mark than I would have liked. This was the right mark, but its effect on me, which I hadn’t foreseen, was to leave me feeling excluded from academia.

And not only to feel excluded, but also, in a small way, to be excluded, as, without a distinction next to my name, I’m less likely to get funding for a PhD and, as I’m a counsellor, there’s certainly no way I can self-fund.

Now, this was my choice – I chose to write in a way which I knew risked getting a bad mark. But the feeling of being excluded from the bodies which create the knowledge that we as counsellors apply, set me in mind of other instances of alienation, and I realised that it’s something of a theme in my professional life.

Being a member of the BACP, for example, is for me an experience of having a distant, paternalistic instructor tell me what not to do. I feel I have very little voice in the body which represents me, and feel that it only represents the bland, quiet, profitable aspects of me.*

And this in turn set me in mind of another instance of isolation from my previous life as a teacher. Some years ago, while doing an MA in early years education, I conducted a piece of action research with my staff team. This research sought to raise our awareness of our interactions with young children and to reflect on these: to learn from the children and to learn how to learn from them. This was a fundamentally trusting, human, and relational piece of work, in which we all had a voice. And it paid great dividends, opening up new avenues of practical knowledge which would not have been accessible without this relational method. It was fundamentally lived, practical knowledge – it’s not the sort of thing that an outsider observing could have discovered. But not only did this knowledge not spread beyond us, it was soon overturned and negated by more official forms of knowledge: by initiatives backed up by extremely dubious but extremely evidence-focused research.

We had been encouraged to find our own practical knowledge, but were effectively told soon afterwards: “This is local, specific and not really proper knowledge. Our large scale studies are more important – they are more true”. In the years which followed this I found myself becoming more and more isolated from the sources of knowledge-creation in education, and, at the same time (because I was required to see and interact with my students in terms of this evidence-based ‘knowledge’), more and more isolated from the children in front of me. Eventually, the gap became too large and, reluctantly, I left.

The Risk to Counselling

Is this really a risk though? Do my own personal experiences really illustrate something larger? I don’t think counselling will ever end up where teaching has. For one thing counselling is a much more private enterprise and a less political issue than teaching, and it has, at present, no statutory authorities. But I do think it’s worth considering what can happen when those practising a profession are completely isolated from the means of knowledge-creation, as is the case with teachers now. And there are signs that counselling is moving in that direction. For example, how is knowledge created in counselling? Who gets to say what counts and what doesn’t?

Well, to briefly divert into a little Foucault, there are many different discourses through which knowledge is used and defined in counselling. I want to focus on one particular discourse which is steadily gaining power and which I believe, if left un-engaged with, will widen the gap between the creators of knowledge and those who apply it. The discourse is that of evidence-based practice.

This is a discourse which holds that the only real knowledge is knowledge gained through randomised controlled trials and objective studies by neutral outsiders. It is a discourse which holds that knowledge is objective and measurable, and all that is not objective or measurable is not knowledge. This discourse has gained its power both through practical means such as the provision of employment to those who agree to it, and by broader cultural means.

On a practical level, for example, if you hope to work for the NHS – the largest employer in the UK – there’s a very good chance that you will have to accept the medical model and drop those elements of your personal beliefs which conflict with this. You will have to accept that you cannot learn from the patient, for example, and that your practice is defined by the research of others – others who measure a relationship as a series of inputs and outputs. You will have to accept that your clients are essentially lacking, and that you will fill in their gaps by operating a manual. If you don’t (or at least if you don’t pretend to), you won’t get work. Them’s the rules.

This practical power is hugely powerful, but there’s a larger societal story to tell too, about the systematic stripping-away of ideology and morality from public discourse. This de-politicising and de-moralising of public debate has left a vacuum into which the evidence-based-practitioners and their friends, the economists, have stepped. Economic impact is now the sole bottom line of almost all public debate, and so, increasingly, the knowledge that counts is knowledge which is measurable and has economic impacts. Just think of Layard. Knowledge of a more personal, local kind, does not count, because it cannot be measured.

This means that if you want to be engaged in creating knowledge – knowledge that matters, knowledge that has an impact – then it must be of this sort. Any other kind holds no sway. Them’s the rules.

This is a particularly pernicious state of affairs in counselling, where so much of what we do – as is the case in teaching and in creative research – is about remaining open to and meeting the Other. The best of teaching and counselling and research is about a disciplined openness, in which we learn in relationship and from the relationship, not about the relationship. But if you’re practising EBP you cannot be open to the client (or the child, or your subject-matter), because they are not in the evidence. And that means that you cannot learn from the client. And that means you let the client down.

As counsellors we can often end up feeling powerless in the face of the ‘evidence-based practice’ discourse: we often feel that the ‘knowledge’ created within this discourse is wrong but feel we cannot say so – we just don’t have the words.

Giving us the Words – Elliot Eisner and the Connoisseur

I want to end today by suggesting a framework within which we can start to stand up for ourselves more vocally and explicitly – a framework which will give us the words. And to do so I’m going to use a concept from the work of an educationalist called Elliot Eisner.

Eisner (and a cat)

Instead of the technical or industrial approach to knowledge which we see in evidence-based practice, Eisner suggested that teachers may benefit from adopting a more artistic model of knowledge. Looking to the world of art, Eisner found that although there was no overall regulator dictating standards or evaluative criteria, there were, nevertheless, clear criteria and standards which were constantly being negotiated, developed and refined between artists and critics and audiences. And further, he found that these criteria provided enough structure for people to practise well and to improve their practice.

Within the world of art Eisner found explicit, measurable and objective criteria such as technical skill and draughtsmanship (much as we’d find in EBP), alongside criteria relating to established canons of practice and theory (and so an understanding of what knowledge has been passed down to us – much as we’d find in the ‘schools’ or ‘tribes’ approach to counselling), alongside amorphous but no less important criteria such as, for example, emotional impact and moral worth. Eisner called the person who engages with these different criteria and weighs them up against each other a connoisseur. These connoisseurs have a felt sense honed over years of direct, lived experience and dialogue, and use this to engage in a community of rigorous discussion about truth, value and meaning in art. They have a shared sense of purpose, direction and practice, but within that disagree reasonably and rigorously about how to achieve those ends.

Eisner hoped to import that culture of critique and connoisseurship into education. He loathed the curricula which sought to control every aspect of a child’s experience in school. But he also distrusted the woolliness of unreflective teachers who were often just going along with tradition because it’s what we do. Education, as he saw it, was a messy human process, with aspects of culture and morality and subjective taste, as well as aspects of efficacy and science and objective research. He wanted teachers to be open to the cultural and individual, as well as the universal and rational. He wanted them to develop their own language to weigh up these different ways of judging and make informed, situated choices between them. Eisner knew that the only way that the art/science of teaching could be protected from industrialised knowledge-creation was to encourage teachers to take an active role in their own community of connoisseurs; for each and every one of them to become a researcher who could stand up for their own lived knowledge, and engage with each other’s.

How does this help us in counselling? Well the best counselling is messy and human. It is a moral and ethical as well as a technical process. As counsellors we are artists but we are not just artists. We are concerned with our impact in the world and with doing counselling well. How these different aspects – these different criteria – are to be balanced is an unsolvable conundrum. But what Eisner’s notion of the connoisseur highlights is that this unsolvable balancing act is one which we must continue to debate instead of ceding, frightened, to one particular discourse. It gives us confidence, I hope, to engage in this debate – to say, unashamedly: “My standard of judging is potentially more important than yours”. To say “I understand things from the inside which you, on the outside, cannot grasp, and vice versa”. To face up to the EBP and engage with it rather than rejecting it out-of-hand, or slavishly submitting to it. To place the lived relationship and therefore the client at the centre of our work and to learn from these, arguing once again in our clients’ best interests.

The notion of the community of connoisseurs gives us a language through which to place practical knowledge on a par with technical knowledge, and to take back some control of our work. It gives us confidence, I hope, to acknowledge the compromised, messy nature of relationship, and to reject the totalising, manualising impulses of industrial knowledge where they are inappropriate.

My Journey to Keele

Which brings me to the closing remarks of my paper, and the question: how do we get to a position in which our voices as connoisseurs can be heard?

The battle has been lost – for the moment – in teaching. I left the profession because I felt I was not enough, and that there were too few people to fight with, and too few words with which to argue. But we are fortunate that we already, in counselling, share aspects of connoisseurship in, for example, the supervisory relationship, and in conferences like this, today. This conference is an opportunity for connoisseurship; for us to find our voices. We won’t find our voices by looking above for someone to give them to us: we need to look towards each other, and stand up for – and to – each other. But the point I want to leave you with is that we have to look outwards as well as inwards – to those who disagree as well as to those who agree. If our situated, creative local knowledge matters we need to be saying that to others as well as to each other. We need to stand up together and say: “This matters. It is important. You need to listen”.

Part of my journey has been to expose myself here today and to say: my research was worthwhile because, in that instance, the moral and political were worth more than the analytic and judgemental. The lived-experience was more important than the mark scheme. Part of my own journey has also been to switch from the academic route into blogging as an avenue for reaching more people outside of the bubble of those who agree with me: turning out as well as in. Which seems like a very good place to stop and turn outwards to you for questions…

* After I presented this paper, I attended a keynote presentation by Andrew Reeves (of BACP chair fame), and my views have somewhat changed. An article based on this paper will briefly explore this in an upcoming issue of Therapy Today.


Some of the things I’ve written recently have been very negative. Most of the things. Living alone and listening to two hours of news a day ferments a pitch of negativity that, if left unchecked, would fester and develop into sores. It needs an outlet – it needs lancing. Normally it’d be her indoors who’d get an earful, but she’s currently displaced. You’ve been my displaced partner, you guys. You’re welcome!

But like any displaced partner you don’t just want to hear me whinge when I get home, so I thought I’d try to say something positive about what is good. It’s harder and scarier than saying something negative, but taking risks is the whole point of being in a relationship isn’t it. Isn’t it?

Anyway, I was also impelled to write this by seeing a therapist again. I’m seeing a therapist again you guys! Not because I’m in a particularly bad state at present – I’m cool – but because the times in my life that I’ve seen a therapist are times in which I’ve lived better and more intensely. I’d not want to see a therapist all the time because, well, money. And shame/self-respect. But therapy with the right person at the right time is ace. The right person at present is a chap who goes in for a bit of the psychosynthesis.


Sounds like some hypno-hippie-hipster pseudo-scientific bullshit, right? It’s not, I don’t think. Maybe it is – I don’t know a lot about it (which is one of the reasons I like it), but I do know that, unlike most flavours of therapy, psychosynthesis seems pretty agnostic in its view of the person. Instead of trying to benevolently manipulate the client into agreement with their true state, it encourages them to make sense of themselves, often through a series of internal characters called subpersonalities or voices. These might be the much-maligned inner child who Freud was so interested in fiddling with, or they may be character-traits which emerge in certain situations, or relationship roles, or imagined future selves, or whatever. Unlike many of the other flavours of therapy there’s no prefigured plan about which voices each person should have. It’s creative and exciting and scary, and allows you – sorry, me – to explore and create with a sense of freedom and playfulness, instead of a fixation on uncovering underlying causes (psychoanalysis), becoming more pro-social (CBT, TA, other acronyms) or on polishing a turd (person-centred).

One of the things that has emerged for me in the course of therapy is the difference between those of my internal voices that speak from a place of feast, and those which speak from a place of famine [I think this distinction comes from the book ‘The Gift’, by Lewis Hyde, but I’m not sure]. Engaging with them has been fascinating personally, but has also thrown an interesting light on public life – especially on those aspects which make me so angry and negative.


The voices which speak from a place of famine are those concerned with conservation, preservation and safety. They’re voices dominated by the past and the future: they have learnt the hard way and won’t be bitten twice. They stockpile like a prepper, and are just about as likeable. They’re the voices which whine and wheedle: “Are you sure you’ve got enough strength for that?” or “What if you let him down – it’d be awful to promise something you couldn’t follow through on,” or “You need to be sure you’ve got this right, why don’t you check it again; much better you find the error before anyone else has the chance. In fact, it’s probably better no-one gets to see this at all”.

The voices of famine are afraid of overcommitting and will only take the most calculated and justifiable of risks. They don’t trust themselves very far, and they trust others even less: everything external will potentially let them down, so they seek to gather as much as possible inside themselves, and cut off from anything which can’t be consumed or controlled. And if the world must be engaged with, then it should be engaged with on the safest possible terms: scepticism, atomism, and safety-in-numbers-evidence.


The voices which speak from a place of feasting are – in me – rarer, but they are vital. They are enthusiastic, generous and profligate; they spend and give and trust recklessly in both themselves and the world. They speak from a place of strength but also vulnerability: in their confidence they expose themselves, consuming and enjoying and thereby making themselves less prepared. The feast can be enjoyed only because the past has been forgotten (ignored) and because the future is a place of hope and trust rather than fear. These voices sing “Expand, make connections with others; they won’t let you down,” and “Make yourself vulnerable: you’ll be held”, and “Believe, why not? You can change later.”

Voices which speak from a place of feast seek to expand, but not in order to control or make safe: their aim is to experience, now, what is good and to experience more of it. These voices are happy with science and evidence, but are not constrained by it as they have faith in something better, and are not tied to the past. They make sense of the world by immediate judgements rather than reasoned argument – aesthetics and virtue predominate: ‘how does it taste’ rather than ‘how many calories’; ‘is it the right thing to do’ rather than ‘can I get away with it’; ‘how am I moved’ rather than ‘what does my friend think’.

What has this got to do with the news and stuff? 

The more I’ve got angry about the flaccid paucity of public debate about, for example, the EU referendum, academies, tax prickery, etc., the clearer it has become to me that the only voices with which we permit ourselves to speak, in public, are voices of famine.

Take, for example, the queen of the sciences – the voice to which all other voices must defer, in contemporary debate – economics. Economics is the voice of famine in its purest form: it posits nothing outside of itself, and aims to control by understanding. Anything which exists outside of economics is either irrelevant or reduced to itself. In the EU referendum debate, for example, all of the arguments on both sides have been economic. No feast voice has been confident enough to stand up with an alternative. Can you imagine a pro-EU politician saying, as I believe they should: “The economic arguments are irrelevant: what matters is something bigger – a principle of shared humanity and generosity. The fact that we’re giving 151 million pounds or whatever a week to nations who are poorer than us is a good thing. We should be giving more”. It just wouldn’t happen.

And this is part of the power of the famine voices – both on a personal and political level – they’re inherently reasonable, and they’re right. You shouldn’t take a risk; there’s nothing to justify it. Because they are, by definition, reasonable and based on the best evidence, they cannot but win if engaged with on their own terms. Even when proved to be absolutely useless, they still win out. It hasn’t gone uncommented upon that very few economists predicted the whole global financial shemozzle, but public debate is dominated now more than ever by the economist. Just like someone suffering from OCD, we may not like the tools we have which keep us safe, and they may limit our lives severely, but they’re the only safety we know.

Similarly, if you listen to people in the 50s talk about their hopes for the future, they talk about 3-day-weeks and enjoying the present tense of leisure time and exploration and creativity and relaxation. Instead (and despite living in a much much much much safer world) we’ve put all our faith in a way of life which, broadly, makes us unhappy. But at least it’s safe.

The same can be seen in education

Read any education research from the 70s and you’ll find all kinds of idealism and hopefulness. You’ll find both sides of the educational divide framing their beliefs in terms of what society is for, and what counts as good or right. You’ll find people opining that as we become technically more adept at teaching and understand more about the brain, we’ll make space within education for all of the richness of human interaction and growth and creativity.

Look to current debates and you’ll find something else. Take, for example, the recent arguments over compulsory academisation. The main argument put up by the unions and Labour was evidential and economic. They argued, erroneously, that the evidence suggested that academisation made for worse results and that academies would cost more than LA-run schools. They disagree about the working-out of the sums, but fundamentally they agree. Fundamentally they agree that what matters in education is the speed at which a pre-defined skill can be learnt and demonstrated (parroted, or aped, depending on your jungle-based-animal-analogy of choice). They value the present purely on the basis of what it will be in the future: the child’s current experience is relevant only in terms of impact on future life. Sometimes this future-valuation is seen by good people as a bad thing, as when education is reduced to creating economy-fodder. Good people rightly baulk at the contention that experience x is good if and only if it will have a long term positive effect on employability. But good people also use this method of future-evaluation because they don’t know any other: for example, when early education is judged in terms of later mental-health or exam results.

In both cases both sides agree that the child’s experience of education is never to be valued on its own terms: its value is purely extrinsic, and situated in the future. Both sides speak with a voice stuck firmly in a place of fear and famine. Both sides speak with a voice that does not trust, and cannot enjoy or value what is happening right now. A voice which is scared of global racers and technologies and tiger economies and Finland.

What else might they have argued? Well, in these times it is hard to think of an argument which isn’t about efficiency and fear, and still harder to make that kind of feast-argument stand up against the famine-status-quo. These kinds of arguments just sound silly because they don’t play into the publicly-sanctioned language of debate. They might have said, for example, that even though ‘evidence’ suggests that method x gets better educational outcomes, method y is more humane, and feels more respectful. Ultimately I would argue that those of us who have worked with young children know, from those children, what is right better than those who watch from outside the relationship. We have been told.

The Family

Ultimately, though, I think argument is the wrong way to think about this. Arguing and debate are themselves modes of interaction which come out of famine: they are concerned with correctly organising what we already have rather than creating something better; discovering something new. Instead we ought to look to areas of life where the feast voices are established and undimmed. And chief amongst them is the home. The way we approach education is the complete inverse of the way that we parent (so long as we’re not hot-housing leopards or whatever). When we parent we delight in the moment, valuing the child intrinsically for what they are, trusting that they will grow and develop (without drawing the logical conclusion that, as a child is not yet as developed as they will become, they are therefore inferior and deficient). We are hopeful and confident and so instil hope and courage and boldness and creativity.

One part of the education system in which a more trusting, creative voice still holds some sway is in the Early Years (0-5). Why? Largely because, and excuse the sexism here, the Early Years has always been dominated by women, and sees education as a natural growth from care and parenting, rather than something which needs to be imposed to address a deficit. But even here the voices which speak from a place of joy and delight and feast are being drowned out by the famine voices of Whitehall and Ofsted and fearful parents.

Now, I’m not saying that we should all become hippies and just love one another. The feast and the famine each have their place. Feast voices can lead to the kind of excesses seen in Weimar Germany or Chelsea. Your man Nietzsche was all about the feast: he wrote about how the strong can afford to forget because they’re strong and can turn any situation to their advantage by dancing or raping or climbing golden trellises, or whatever else his blonde beasts got up to. But we’re not Nietzsche – the voices of famine are vital to living well with each other and staying safe and learning. Vital. But they’re not everything, and it’s these famine voices that dominate the public sphere at the moment. In private life it’s different: in spheres where the influence has traditionally been more female we find more of the voices of feast: child rearing, care, friendship. But in public life we’re afraid to take a risk and argue (or sing, shout, whatever) passionately and creatively for anything, especially when a famine voice of science, evidence, economics, or plain old fear stands opposite us.

My own voices are often in a similar (im)balance: the conservative voices win out through their exercise of fear, while the creative, vulnerable, trusting voices cower and fester. My problem is that all of those feast voices need to be heard, and if they’re not allowed a positive space they’ll emerge in potentially harmful and destructive ways. The parallel with public life is clear, as bozos like BoJo and Hitler come to fill the space vacated by good people saying interesting, creative, hopeful things. Scum like Farage and Trump speak to our need to believe in something bigger than just getting by, but these are feast voices which have gone off, badly, and become parodies of themselves. They inspire a belief in something bigger than fear when they are, in fact, governed by precisely the same fear as the famine voices on the other side. If a quieter, more vulnerable voice emerges which offers an alternative, creative way to be in public life, it’s drowned out by the bullshit and sinks without trace. A case in point: Gordon Browns.

Remember Gordon Browns?

No, probably not. It’s hard to look back on his premiership without the taint of the narrative he’s since been crowbarred into, but at the time he took power, he offered something new and, to me anyway, exciting: a moral compass. His first 100 days were characterised by quiet and principled good leadership. Although he was all about the moneys, he often eschewed arguments from economics, and spoke instead about bigger ideas of right and wrong. It was good. But he sank. He sank because he listened to the famine voices of well-intentioned but spineless advisors who told him to apologise to a bigot whom he had accurately characterised as a bigot. Instead of taking his serious job seriously, he succumbed to stupid advice and tried, excruciatingly, to smile.

What he offered in those early days was an alternative to the narrative of politics as mere application of evidence: his moral compass was such that it privileged what was right over what was reasonable, or rational (in an economic sense). He reached beyond the past and the fear of the future into something bigger. Because he didn’t couple this with a smiling gonk-face, and lost his nerve when he needed to stick to it (against all reasonable advice), he was hounded out by a hostile press who couldn’t understand what he offered and preferred the cleaner narrative lines of economics, bacon sandwiches, and smiling faces. The same will probably be true of the Corbyn, who also makes no sense to the voices of famine, and is insecure and timid when faced by their reasonableness.

Ugh, this has been quite the ramble. I find it harder to marshal and organise my thoughts into clear arguments when trying to be positive. But perhaps that is part of the problem with positive feast voices altogether. They speak from a place of insecurity and confidence. They’re mixed up. They’re paradoxical and unreasonable. They can be picked apart with analysis and critique. They’re wrong. But they’re also important beyond measure as, without them, we are just fitting in and going along and hoping that we don’t get found out. That’s no way to live.


Autism awareness, or empathy

Last week was autism awareness week, or it’s this week; I’m not sure. There was a flim-flam of a drama on the telly which the radio did a talking about, and lots of worthy articles in the guardian. This is fine; autism exists, and it’s good to be aware of things. In fact being aware of things seems to be the most important thing. More than actually doing anything. But that’s another argument. Autism is not only a thing but it is also something that has profound impacts, especially on those at the more autistic end of the spectrum (who, despite claims to the contrary, very rarely show up on tv or in these kinds of discussions). I’ve taught some amazing children on the autistic spectrum and value the work I did with them as some of the most important I’ve done. The wisdom of creating a spectrum which includes the relatively able and non-noticeable with those who need round-the-clock care is, to me, questionable, but the existence of autism is not.

Caveats aside, the more I’ve read worthy articles and heard people doing their talking about it, the more worried I’ve become about what underlies our current obsession with (the less severe end of the spectrum of) autism, (a faddish obsession which fifteen years ago was focused on dyslexia, recently was adhd, and soon will be the new made-up-name-to-describe-prickish-tantrums-syndrome-disorder).

I’m going to start my argument with the pretty unforgivable voice of my teenage self arguing against the way that the then-fashionable dyslexia was treated fifteen years ago. He asked: “Why do we define one type of stupidity as deserving of extra time in exams and special support etc., and not others?” He didn’t like it – couldn’t see the logic. I don’t like the way he expressed himself but we’ll follow through the argument before we completely condemn him.

He was told that it was because the dyslexic child’s profile of abilities was high in all other areas but low where written words were concerned. Dyslexic children can understand and reason and speak and do maths and such, but have a specific measurable deficit in this one area. This deficit is unusual – it doesn’t fit what we’d expect – so we should support them to catch up to their overall level.

Teenage Phil thought that the fact that someone had a particular ability-profile with noticeable troughs with the reading and the writing made them no more deserving of special treatment than those who had uniformly low ability, or a whole jumble of abilities which didn’t fit into a pattern at all.

He was told that lots of people had a similar pattern and so there must be some common cause of the unusual pattern. And if something has a common cause it’s not the individual child’s fault: dyslexic children were not responsible for their deficit.

Why, he wondered aloud (though not very loud; he was scared of the bigger boys) should the fact that one person’s issues were shared by another make any difference to their treatment? And didn’t this invalidate the previous argument about them having unusual profiles?

Teenage Phil was told to stop being clever. The uneven development must be caused by some thing, and so did not reflect an underlying level of cognitive ability (like ability does in normal, evenly-able people). The thing that caused unevenness was as yet unknown (take your pick of made-up science), but regardless it was a thing and so not their fault.

Whereas the uniformly, evenly-unable children, he was told, were just, well, slow. And although the argument was never made, the implication to Teenage Phil was clear: people whose cognitive development was uniformly lower were more responsible for their problems – because they more accurately reflected who they were as a person – than the dyslexics who aren’t responsible for theirs, or at least not to the same extent.

Teenage Phil made his case a bit too forcefully, and although he had a point he drew the wrong conclusion. Where he thought there should be less support and understanding for dyslexic children, he should have thought that there should be more support and understanding for everyone who was struggling, in whatever way. Nevertheless, that teenage voice rose up again this week when I was reading about autism awareness. In a similar vein I became teenagerly and self-righteously indignant at the badly-thought-through arguments and vapid cliches. Fortunately, I’m a little less gauche than I was back then, and have recast my teenage self-righteousness into one that paints me in a much better light, to wit: I believe that our focus on autism, adhd, dyslexia and the like not only limits our empathy but also manifests a very murky set of norms which are fundamentally conservative and intolerant.

First, then, how does it limit empathy? Consider the classroom (it is, after all, the battle ground for autism and pretty much the only thing I know anything about). In my experience, in a class of thirty there will be at least two children with specific additional needs. These are needs which schools address because they’re clearly defined and there’s protocols to follow for them. The other 28 children see their 2 classmates being additionally supported and come to accept that little Ian or Terry are different and deserve to be understood and made allowance for. Teachers are generally ace at helping the classmates of children with additional needs to understand and make space for them. This is good. It is empathy and is essential to children growing up as tolerant good people.

But what of the other 28? What of the child whose particular behaviour/inability falls into a pattern not recognised as autistic or dyslexic or adhdish? They don’t get any special treatment, and are not empathised with. What of the child who feels isolated, or struggles to integrate into a class of children who belong to a different race? What of the child who smells, or is cripplingly shy (though not with the specific verbal markers that would lead to a spectrum diagnosis)? Because they don’t have a clear set of symptoms associated with their problems – a set which fits a predetermined pattern – these children receive no additional support. And their classmates are taught, implicitly, that these people don’t need to be empathised with: they’re, well, they’re normal. Normal, but a bit rubbish. In this way our empathy (as well as theirs) is limited to the special cases. This is bad enough, but it gets worse when you consider what this picture tells us about ourselves.

The children with recognisable patterns (syndromes, conditions or disabilities) are empathised with because they’re different. Or, to put it another way, because they’re abnormal. But not just any old abnormal; vitally, they have to be abnormal in a way which we understand, or think we understand. They must be a normal form of abnormal. Why is this? Because it allows us to stay safe in our normality. By understanding their normal abnormality and giving it a name and a pretend (normally biological, often genetic) cause, we free the child of responsibility for those aspects of them which are not normal. Thereby we tell them: you’re not normal, but that’s ok because your abnormality isn’t your fault – it’s not really a part of you, it’s just something that happened to you. You’re still normal really, like us, but it’s just some silly genes have gotten in the way of that. We thus tell them: “We accept you but only because we understand the ways in which you are different”. To understand, in this case, is to forgive – to pardon an abnormality which we would otherwise find offensive because of what it might tell us about ourselves. An undefined, un-caused, un-labelled abnormality is an offence to us because it might change how we view ourselves as parents, friends, teachers, or society as a whole. A ’caused’ abnormality does not.

This is awful, if you think about it. We talk loads about tolerance, but, under the banner of tolerating difference, the message we’re actually sending is that people must disown their bits which are different in order that we can accept them. What’s that? Oh, Teenage Phil wants to give an example. Ok, go on: “Well I’ve heard that we can’t handle the existence of middle-class well-brought-up children being bad at reading so we invent dyslexia and instruct the newly dyslexic children that they are not responsible for this deficit. This achieved, we can now accept them back into middle-class society and claim to be tolerant”. I’m not sure I agree with him – dyslexia exists too – but he has a point in some cases.

And what of the other children?  The ones who don’t have neatly patterned behaviours or aptitudes? Well, they have the opposite problem: as there is no quasi-cause to which we can attribute their issues, they are seen as normal and so held responsible for their sins. They are normal but not good so are, for want of a better word, bad. Children who are less able across-the-board are forced to own all of their difficulties: they are ignored because they should be responsible for themselves. So too are children who are cripplingly shy or needy (that is, until such a time as we can ‘find’ a syndrome which ’causes’ their shyness or neediness with which we can forgive them).

In this way both parties are poorly treated: our empathy is reserved for those who are different (but only if we understand their difference, so the empathy is ultimately distancing and intolerant), and the normals are required to be completely responsible and un-understood. Our empathy is false because it doesn’t allow us to be changed – it tells us nothing about ourselves, because that with which we empathise is precisely that which we are not: abnormal. Genuine empathy means being vulnerable to change: it means understanding that which is different and may tell us something important about ourselves, not about the Other.

And it gets worse, because all the while we’re concerning ourselves with autism and adhd and dyslexia, who is it doing worst in schools? Who are we failing most? Is it the children who have a label? Well, partly yes, for the reasons I’ve set out above, but I’d argue we need to treat them more as normal people and expand our definition of normal, rather than seeing them as special cases (and as with all of this argument, I’m thinking about the less severe end of the spectrum here).

But the children who we are really failing, in their hundreds of thousands, are those children whose parents have little cultural and social capital, and so are worse at playing the game of school-preparation and networking and all of the bullshit that really makes a difference to life chances. There’s loads of them and we’re failing them, every day. They don’t get any empathy, though, because social exclusion and class-inequality are not things that we, as a society, have that much of a beef with. Sure, we might feel a bit bad about it, but if it’s a choice between empathising with a clean, clear-cut, medically-named case – a sweet white child from a good family – or a swathe of socially-deprived families whose very deprivation is causally related to our own privilege, we find it a hell of a lot easier to empathise with the former. Add to that our post-ideological distrust of any arguments made without ‘evidence’, and the medical model’s inability to consider anything beyond the biological individual, and you have a great way to empathise in such a way that keeps us safe from being implicated, from changing.

Our empathy only stretches as far as our politics do. Without wishing to sound glib, autistic children (and all of those other children to whom we have assigned labels) are easy to empathise with – their existence comforts us because it reinforces our own normality. Empathising with someone who’s been diagnosed as ‘different’ is never going to change you; you’ve decided that in advance. But being changed is the whole point of empathy. Similarly, tolerance is not about accepting that others exist who are different but separate; it’s about accepting that the way we define ourselves is open to change.

Autistic children need our empathy – especially those on the more autistic end of the spectrum who tend not to show up on tv because they’re not high-functioning enough for us to engage with unproblematically – but so does every child, including the non-specifically-struggling ones. And so, most importantly, do those people who live on the margins of society and do not have the cultural capital to access it. Empathising with them is difficult, but it’s a much better use of a week.

Save the children

For god’s sake.

At the risk of becoming a broken record, I wanted to note (rant about) yet another instance of flaccid pseudo-neutral public ‘debate’, this time around education. Last week there was the argument about academies being held – by both sides – in entirely economic terms (with some admirable exceptions, Michael Rosen). I didn’t join in much because I was too angry and sad. This evening it was Save the Children who I heard on the radio, arguing that we need teachers in Nursery classes. The substance of their case doesn’t interest me; it’s an argument that has been going on for ages and says more about the prejudices of those who get involved (that ‘teaching’ and ‘caring’ are separate things) than it does about the children [edit: this blog written by a childminder well illustrates some of the prejudices suffered by those whose work is dismissed as ‘care’ rather than ‘education’]. What interests me are the terms in which Save the Children present their argument. Essentially it runs thus:

Birth-to-five is the most important period for educational development. Brain science shows that if you don’t get the good stuff when you’re little, you’ll never catch up. Evidence shows that children who attend nurseries often miss out on the good stuff, aren’t ready for school when they start at age 4, and fall even further behind as they go on. Evidence shows that the best educational outcomes for those who risk being left behind arise through the kind of ‘learning interactions’ (‘shared sustained thinking’, as it used to be called) which are predominantly initiated by teachers. Therefore, we should have more teachers in nurseries.

Presented this way it seems pretty incontrovertible – why wouldn’t you want to improve the life chances of the least-privileged children? And they may be right that more teachers in nurseries is a good thing, but the goodness or otherwise of a choice about education does not depend on evidence. Education is bigger than the ‘scientific’* evidence, which, as your man Hume or Kant or whoever said, can’t turn an ‘is’ into an ‘ought’.

In the paraphrased argument above, for example, there’s an implicit belief that ‘educational outcomes’ are desirable. Says who? Seriously, who? In the evidence presented in the study, ‘educational outcomes’ means ‘level 4+ at KS2 SATs’ or ‘5 good GCSEs’ or ‘goes on to attend university’. Are any of these desirable? The answer’s not in the facts, but in a wider moral and social debate. And they’re not isolated from other impacts either. The decision to push for ‘level 4+ at KS2 SATs’ is not without consequences – consequences including the consistent degrading of childhood to a Korean-style drudge of managed dependence, the elimination of creativity from curricula, and a fundamental disrespect for what is most human and important.

I’m not saying we should get rid of SATs (I am; I definitely am) but there is a wider discussion to be held. ‘Educational outcomes’ is a made up thing. It’s not neutral or natural: it’s a measure of something you’ve chosen to measure and value. If Save the Children want to convince us that more teachers are needed in nurseries they have to convince us that the measure they’ve chosen is a good one.

But they don’t, and thereby they miss a trick: by accepting the neutrality of ‘educational outcomes’, they forget the children. They forget that a better argument might be that the speed at which a hoop is jumped through (read: educational outcomes) is pretty much irrelevant, and that moral qualities such as creativity, courage and care are more important than these hoops.

And this is the problem with regarding education as a science rather than an art – as something fundamentally objective rather than something fundamentally human. Seeing it as a science blinds us to the moral arguments which underlie any particular educational outcome.

What’s worse is that the narrative which sees education as part of a science is now hard-wired into economics. The lowest point of the interview with the Save the Children man was when your man said “And this is important for everyone – not just parents of young children – as, if we don’t get this right for our most vulnerable young children, it’s the economy which will suffer in the long term”. Yeah! You go save those children mister! They’re not going to become economy fodder without you to go save them! Quick, mister, before they fall behind and become a burden in the global race!

I’m sure that Save the Children are a good bunch of people; they save children. But this is what the pseudo-neutral terms of public debate do to good people: they force them to speak in a language which makes liars and fools of them. And in this case that means bad thinking about education, which means letting children down. Save the Children have bought into a narrative around education which is leading children towards a narrower, bleaker future.

Your man (a different one, I can’t remember who) said that you can judge a society on how it treats its elderly. We do pretty badly at that too, but I think you can tell more from how it treats its children. Education is the paradigm case which expresses what we value and want to be as a society. It’s about our hopes for being better people – for seeing the next generation exceed us. It’s about morality and culture and civilisation and purpose and love and humanity. It’s not about evidence.

*Just a brief note on the evidence – most of it is pure bullshit. The kind of bullshit in which ‘learning’ means ‘memorising’. And ‘brain science’ is the bulliest of all of the shits. Seriously, follow up on some of the links and look at how narrow they are in their concept of learning or impact. You can make up whatever evidence you want to grind your educational axe. It’s a coward’s game.

A brief thought on academies

All primary schools are to become academies. This has been coming for a while; we all knew it would happen. But it’s sad all the same. The coming privatisation of education was one of the reasons I left full-time teaching, but to those outside education I’m guessing it must be a hard story to follow. From the outside I imagine it sounds like a relatively minor shift in organisation, moving away from a council-led system to a school-led system. It might even sound empowering and optimistic: a redirection of power away from the council and towards the grass roots of schools and teachers. It’s not.

I’m not going to write too much about the sadness I’m feeling today, or about the very good arguments against academisation, because there’s good people doing the same more eloquently than I can. They’re making arguments about the creeping privatisation that has already crept into the heart of the school system, and about the unfairness of un-redistribution of wealth. They’re also making good but fundamentally self-serving arguments about pay and conditions. And they (Lucy Powell) are making bland not-really-arguments about efficacy and budget management, entirely forgetting that education is not primarily an economic issue.

Those arguments are all great (except the last one, obviously, which isn’t even an argument – more of a whine and a shrug), but the point I want to make is a moral (and I hope practical) one about fear and solidarity.

First, then, solidarity. Schools are not islands. They’re part of a community – often the most important part of a community. They’re also part of groups which share practice and knowledge across different schools, groups which get better prices for services by buying in bulk, and myriad other groups. This is important. Without these groupings schools would not function. In the olden days, the groups to which schools most closely adhered were the Local Authorities (the LAs) which ran the schools. These were geographical groupings which, like communities did in the olden days, brought together people and schools of different types and attitudes.

When I taught in Enfield, the LA brought together schools in some of the most deprived and dangerous parts of London with schools serving those who lived in streets where the average property price is over £2 million. This served many purposes – not least the redistribution of funding away from schools where it was not needed towards those where it was. But perhaps more importantly it gave us a sense of identity and solidarity with those unlike us. For example, I’m virulently anti-posh, but felt a great sense of solidarity with the posh schools we were partnered with. I felt impelled to help them where I could, and to be helped in return. The LA allowed us to conceive of education as a moral and communitarian project; one based not on efficacy and efficiency and outcome measures, but on solidarity and care. We were in it together.

Doubtless this was an inefficient way to run things. Doubtless changes could’ve been made to make LAs more useful and coherent – I was the first to complain about the way that the system was being run. But when you take it away what are you left with?

Well, schools not being islands, they will need to band together, as is already happening. But how do they band together in this new, post-LA world? The more ambitious will band together in loose alliances of similarity. Like single-issue political groups they will look for strength in numbers with those who are the same. Geography being no longer such a relevance (we’ve got skypes after all), schools are free to find similarities at a distance and, like teenagers seeking affirmation, find a grouping which buttresses their sense of uniqueness and importance. The less ambitious (and this is much more likely) will band together, out of fear, under a new boss who tells them what to be and do. These are academy chains. In both cases there is no sense of solidarity with a project which is open to the world – there is just a niche and an inward-looking. A closing off and an erecting of boundaries.

‘So what?’ You might think. ‘If it makes schools better at teaching children why not do it?’ Quite aside from the fact that it doesn’t make schools better, there is a bigger issue here, which is that schools socialise children into society. They act as mini-worlds in which children learn what society thinks about them as individuals and groups. In the messy LA-led school environment, there were plenty of problems, but there was at least a sense of community and connection and identity. We were public servants doing things because they were right – culturally, socially and morally – not because they worked or because we were being paid. In these academy chains your identity is not provided even in part by a location or a history or a culture – it is provided entirely by an insular grouping of people whose main aim is to make a profit out of the state. This is it. This is the culture that the child is growing up into. This is the message they receive about themselves and their place in the world: your place, child, is not connected to your area or your family or culture or society or nation – your place is defined by a corporation. A business. In these new academy chains there is no public service, no giving; only rational choice and self-interest. Elliot Eisner wrote about the industrial metaphor in education which sees children as input and output. He wrote in the 60s about how it was thankfully disappearing. But it’s coming back now.

I’ve already written more than I meant to, so I’ll keep my second point – about fear – brief. I’ve written before about fear in the education system, and how it is propagated and accepted at every level from DfE and OfSTED down. The reason I bring it up again is that this is the other message which we are sending to our children: be afraid.

Imagine, for a moment, what this new loosening of constraints would have looked like if there were no OfSTED or league tables. It might have inspired a revolution in creativity and connection and care, as schools concentrated on what really matters to their children and to society. All of the time spent following, developing and implementing useless and harmful curricula could be refocused on children’s development as people. Schools would have grouped together in order to be more responsive, open and creative, and teachers would have become researchers and artists, able and trained to trust the children in their care.

But in a climate of fear and oversight, schools are not in a place to do this. The driving concern is to maintain and keep safe. This is what motivates them to band together. The main aim is not to get it wrong and be found wanting. Creativity cannot flourish in these circumstances. Neither can children. But this is what we will soon be teaching them about themselves and the world.


It’s not it’s the economy, stupid, stupid

I watched a bit of that Gogglebox a couple of days ago. When it’s not bland and pointless, it’s a uniquely horrible programme, I think, whose USP seems to be to pander to the viewer’s prejudices under the cover of ‘reality’. The characters are selected, edited, and manage themselves so as to conform to our stereotypes, but not so egregiously that we notice and feel bad about it. It’s as negative as that awful airport ‘comedy’ that the Walliams and Lucas blacked up in, but with the insidious subtext that Gogglebox, unlike the airport douchebaggery, is true and representative. It’s not.

Anyway, on the episode I watched ten minutes of, they did a tiny little segment on the Europe referendum. One after another, each of the characters on the show said that they didn’t understand the issues and didn’t know how to vote. Fine, you might think – a democracy depends on voters being well-informed and switched on to the political process. How refreshing, you might think, to hear people acknowledge their own need to do more research on the issues before they make up their minds. And to an extent that’s true: it’s good to recognise that there’s more to a vote than the jerk of a knee.

But when you look at why people don’t feel informed something else comes through which troubles me. The only person in the segment (before I turned off) who wasn’t unsure of their ground – who felt qualified to engage – reeled off a dull and vague series of points about economic impact. It was recycled facts about jobs and trade and regulations and how these would impact on our economy. Nothing else. No-one else said anything. No-one said that they thought that the European project was a good or a bad thing. No-one spoke about history or the emotional connection we feel or don’t feel with the continent. No-one spoke about the cultural connections we do or don’t share with them proper Europeans, or about their own personal experiences. The only reason your man felt safe to air any political views was because he was doing so in the sanctioned, ‘neutral’ language of facts about moneys.

And this is the problem: ever since Tony B.Liars [satire], ideology has been leaching from the political scene in this country, to the point where any attempt to make a political argument on non-technical grounds is simply unavailable to the vast majority of us. The characters who said “I don’t know enough about the issues”, meant “I haven’t been told about the economic impact”. And in meaning this, they and we are accepting and adopting a discourse which disempowers us from the political process. As laymen, any personal engagement we might have with the issue in other dimensions (ideological, historical, personal, emotional; whatever) is rendered unspeakable – we don’t feel we have the right to bring it up in the face of technical expertise. There aren’t words for us to stand up for our own understanding. Well, that’s not strictly true – there is one ideological argument against the economic mainstream for which words are readily available: that of the ukips and the fear of garlic, which thrives on the facebooks if not the mainstream media. And I don’t want to make that one.

The argument I want to make about Europe is that the moral case trumps the economic one. Each time I hear a Leave campaigner say that we’re paying in more to the EU than we’re getting out I want to respond: “That’s exactly the idea; we’re richer, so we spread our wealth; it’s the right thing to do”. Just like I want to respond when I hear someone trot out the ‘fact’ that everyone should arrange their tax affairs so as to pay as little as possible. I like paying tax. Even though I’m not in a great place financially at the moment, I don’t claim expenses back for tax purposes, because it’s a dishonest thing to do. Morally. This may not be a rational (according to the game theory on which economics is based) thing to do, but it is morally the right thing.

But this argument does not appear in the world we find on gogglebox. It is not available to us. Before we can take a side, we feel we need to have been told the economic bedrock on which this opinion must finally be based. And as we succumb to such economic reductionism, the game theory principle at the heart of economics (self-interest) will become increasingly true in the world. Choosing in the referendum will consist in choosing which rational selfish choice is presented most convincingly to us. Neither side is interested in other modes of valuing or evaluating. And without an alternative we will become prisoners of a narrative which we have no control over, just like the gogglebox characters, meekly accepting the discourse of technical self-interested politics, unable to raise our own voices and beliefs against it.

Contrast this to the other political shenanigan occurring at the moment in the US: in the Republican primaries the whole debate is led by the uninformed layman, and anyone can say anything without any rigour or fact-checking. This is just as pernicious a position as our technicalised debate, as, freed from the need to make rational argument or pay heed to facts, shits like Trump and Rubio can rise to the surface. But the other pole is no better: on this side of the Atlantic we will not come to a better decision than the Republicans because our debate is too narrow – our language too constrained; our politics too fearful. I don’t want a Trump – we’ve got a Farrage after all – but neither do I want a world run by economists and all the murky interests which lie behind them. I want to be able to aim higher. I’m trying.


I wrote this a couple of weeks ago and it’s been getting worse. No new voices emerge on either side of the EU debate to challenge the centrality of money. Radical reform of education is pushed through on the basis of efficiency and practicality. And the budget is sold to us in the same terms. In response to your man Humphrys’ questions on Today yesterday, Gideon said “I know these promises I’m making will sound like big numbers or abstract concepts, but they’re vitally important issues”. His contempt for the voting population is clear: you the people are not qualified to understand economics any more, and politics is nothing but economics, so you can’t understand politics. Your views are irrelevant. Your feelings and beliefs are irrelevant. You are irrelevant.