
The Crisis of Authority
[103] One way of observing the relationship between depression and competitiveness is in statistical correlations between rates of diagnosis and levels of economic inequality across society. After all, the function of any competition is to produce an unequal outcome. More equal societies, such as Scandinavian nations, record lower levels of depression and higher levels of well-being overall, while depression is most common in highly unequal societies such as the United States and United Kingdom.2 The statistics also confirm that relative poverty – being poor in comparison to others – can cause as much misery as absolute poverty, suggesting that it is the sense of inferiority and status anxiety that triggers depression, in addition to the stress of worrying about money. For this reason, the effect of inequality on depression is felt much of the way up the income scale.
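The kind of statistical relationship described above can be sketched in a few lines of code. This is an illustrative calculation only: the country figures below are invented for the example, not the actual cross-national inequality and diagnosis data.

```python
import statistics

# Hypothetical illustrative data only -- not real WHO/OECD figures.
# Gini coefficient (income inequality) vs. annual depression
# diagnosis rate (%) for six imaginary countries.
gini = [0.25, 0.27, 0.30, 0.34, 0.38, 0.41]
depression_rate = [4.1, 4.5, 5.2, 6.0, 7.1, 7.8]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(gini, depression_rate)
print(f"r = {r:.2f}")  # strongly positive for this invented data
```

A coefficient near +1 is what the correlational studies cited in the passage report; correlation alone, of course, cannot settle whether inequality causes depression, which is why the passage goes on to look behind the numbers.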
[104] Yet there is more to this than just a statistical correlation. Behind the numbers, there is troubling evidence that depression can be triggered by the competitive ethos itself, afflicting not only the ‘losers’ but also the ‘winners’.
[105] Wherever we measure our self-worth relative to others, as all competitions force us to, we risk losing our sense of self-worth altogether. One of the sad ironies here is that this dissuades people, including schoolchildren, from engaging in physical exercise at all.
[106] Perhaps it is no surprise, then, that a society such as America’s, which privileges a competitive individual mindset at every moment in life, has been so thoroughly permeated by depressive disorders and demand for antidepressants. Today, around a third of adults in the United States and close to half in the UK believe that they occasionally suffer from depression, although diagnosis rates are far lower than that.
[107] In a world where we cannot agree what counts as ‘good’ and what counts as ‘bad’, because it’s all a matter of personal or cultural perspective, measurement offers a solution. Instead of indicating quality, it indicates quantity. Instead of representing how good things are, it represents how much they are. Instead of a hierarchy of values, from the worst up to the best, it
simply offers a scale, from the least up to the most. Numbers are able to settle disputes when nothing else looks likely to.
[108] At its most primitive, the legacy of the 1960s is that more is necessarily preferable to less. To grow is to progress. Regardless of what one wants, desires, or believes, it is best that one gets as much of it as possible. This belief in growth as a good-in-itself was made explicit by some subcultures and psychological movements.
[109] In short, policy-makers needed to listen to economists more closely, a view that reveals the most distinctive Chicagoan trait of all: the fundamental belief that economics is an objective science of human behaviour which can be cleanly separated from all moral or political considerations.
[110] At the root of this science lay a simple model of psychology that can be traced back via Jevons to Bentham. According to this model, human beings are constantly making cost-benefit trade-offs in pursuit of their own interests. Jevons explained the movement of market prices in terms of such psychological rationality on the part of consumers, who are constantly seeking more bang for their buck (or less buck for their bang). What distinguished the Chicago School was that they extended this model of psychology beyond the limits of market consumption, to apply to all forms of human behaviour. Caring for children, socializing with friends, getting married, designing a welfare programme, giving to charity, taking drugs – all
of these apparently social, ethical, ritualized or irrational activities were reconceived in Chicago as calculated strategies for the maximization of private psychological gain. They referred to this psychological model as ‘price theory’ and saw no limit to its application.
[111] Ever the sceptic, Coase did not accept this style of reasoning. Nothing in real economic life was ever that simple. Markets were never perfectly competitive in actuality, so the categorical distinction between a market that ‘works’ and one that ‘fails’ was an illusion generated by economic theory. The question economists should be asking, Coase argued, is whether there is
good evidence that a specific regulatory intervention will make everyone better off overall. And by ‘everyone’, this shouldn’t just mean consumers or small businesses, but the party being regulated as well. This argument was straight out of Bentham: he was advocating that policy be led purely by statistical data on aggregate human welfare, and abandon all sense of ‘right’
and ‘wrong’ altogether. If there isn’t sufficient data to justify government intervention – and such evidence is hard to assemble – then regulators would be better off leaving the economy alone altogether.
[112] The question that Coase posed that evening in 1960 was a radical one: regulators had long been striving to protect competitors from larger bullies, but what about the welfare of the bully? Didn’t he deserve to be taken into account as well? And – as the Chicago School would later seek to explain – might consumers actually be better off being served by a few very large, efficient monopolists rather than constantly having to choose between various smaller, inefficient competitors? If the welfare of everyone were taken into account, including the welfare of aggressive corporate behemoths, then it was really not clear what benefit regulation was actually achieving. Here was utilitarianism being reinvented in such a way as to include
corporations in the state’s calculus. Walmart, Microsoft and Apple didn’t exist in 1960, but they could not have imagined a more sympathetic policy template than the one that was cooked up in Chicago on the back of Coase’s work. Once Reagan was in the White House, these ideas spread quickly through the policy and regulatory establishments of Washington, DC, before
permeating many international regulators over the 1990s.
[113] Chicago-style competition wasn’t about co-existing with rivals; it was about destroying them. Inequality was not some moral injustice, but an accurate representation of differences in desire and power.
[114] The Chicago School message to anyone complaining that today’s market is dominated by corporate giants is a brutal one: go and start a future corporate giant yourself. What is stopping you? Do you not desire it enough? Do you not have the fight in you? If not, perhaps there is something wrong with you, not with society.
[115] This poses the question of what happens to the large number of people in a neoliberal society who are not possessed with the egoism, aggression and optimism of a Milton Friedman or a Steve Jobs. To deal with such people, a different science is needed altogether.
The science of deflation
[116] The ability of individuals to ‘strive’ and ‘grow’ came under a somewhat different scientific spotlight between 1957 and 1958, due to accidental and coincidental discoveries made by two psychiatrists, Roland Kuhn and Nathan Kline, working in Switzerland and the United States respectively. As with so many major scientific breakthroughs, it is impossible to specify who exactly
[117] got there first, for the simple reason that neither quite understood where exactly they had got to. The era of psychopharmacology was still very young, with the discovery of the first drug effective against schizophrenia in 1952 and the running of the first successful ‘randomized controlled trials’ (whereby a drug is tested alongside a placebo, with the recipients not knowing which one they’ve received) on Valium in 1954. These breakthroughs opened up a new neurochemical terrain for psychiatrists to explore.
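The logic of a randomized controlled trial can be sketched as a small simulation. Everything here is invented for illustration: the sample sizes, the outcome scale and the effect size are assumptions, not data from the 1950s trials.

```python
import random
import statistics

# A minimal sketch of a randomized controlled trial: participants are
# randomly assigned to drug or placebo, without knowing which they
# received, and mean outcomes in the two arms are compared.
# All numbers are invented for illustration.
random.seed(1)

def run_trial(n_per_arm=200, drug_effect=2.0):
    # Symptom-improvement scores: the placebo arm is drawn from a normal
    # distribution with mean 1, and the drug arm is shifted upward by
    # `drug_effect` (the hypothetical true benefit of the drug).
    placebo = [random.gauss(1.0, 2.0) for _ in range(n_per_arm)]
    drug = [random.gauss(1.0 + drug_effect, 2.0) for _ in range(n_per_arm)]
    return statistics.mean(drug) - statistics.mean(placebo)

observed = run_trial()
print(f"mean improvement (drug - placebo): {observed:.2f}")
```

With a genuine effect, the observed difference hovers around the true effect size; with no effect, it hovers around zero. Randomization and blinding are what license reading the difference as the drug’s doing rather than the patients’ expectations.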
[118] The drugs did not appear to have any particular effects that could be scientifically classified. There was no specific psychiatric symptom or disorder that they seemed to relieve. Given that psychiatrists of the 1950s still viewed their jobs principally in terms of healing those in asylums and hospitals, it wasn’t clear that these drugs offered anything especially useful at all. As a result, drug companies initially showed little interest in the breakthrough. The drugs simply seemed to make people feel more truly themselves, restoring their optimism about life in general. For a while, psychiatrists struggled to know how to describe the new drugs. Kline chose to refer to his as a ‘psychic energizer’, which remains a decent description of many of the drugs currently marketed as ‘antidepressants’, but which are used to treat anything from eating disorders to premature ejaculation. The subtlety of their effects was perplexing, but this very property – this selectivity – has since come to be the main promise of such drugs.
[119] It would take twenty-five years before Kuhn and Kline’s new ‘psychic energizers’ would attain mass market appeal; indeed they were initially marketed as anti-schizophrenia drugs. But culturally, their discovery was perfectly timed. Psychiatrists and psychologists had shown virtually no interest in the notion of happiness or flourishing up until this time. The influence of psychoanalysis meant that psychiatric problems were typically viewed through the lens of neurosis, that is, as conflict with oneself and one’s past. Depression was a recognized psychiatric disorder that could be treated with electric shock therapy if severe enough, but it received comparatively little attention from the psychiatry profession, let alone the medical profession. The Freudian category of ‘melancholia’, as the inability to accept some past loss, continued to shape how chronic unhappiness was understood within much of the psychiatry profession.
[120] The question of how to boost general energy and positivity was an entirely new one for psychologists at the close of the 1950s. But it was slowly emerging as a distinctive field of research in its own right, with a number of new questionnaires, surveys and psychiatric scales through which to compare individuals in terms of their positivity. The year 1958 saw the launch of the Jourard Self-Disclosure Scale and then in 1961 the Beck Depression Inventory, the work of Aaron Beck, the father of cognitive behavioural therapy. Mental health surveys conducted in the United States during the 1950s, aimed partly at assessing the psychological state of war veterans, discovered that generalized depression was a far more common complaint than psychiatrists had assumed. This psychic deflation was coming to appear as a risk that could afflict anyone at any time, whether or not there was psychoanalytic material to back that assessment up.
[121] In the early 1960s, this was an affront to the authority of psychiatrists and doctors, whose professional role involved specifying exactly what was causing a problem and offering a solution to it. The idea that individuals may be suffering from some general collapse of their psychic capabilities, manifest in any number of symptoms, challenged core notions of medical or psychiatric expertise.
[122] A society organized around the boosting of personal satisfaction and fulfilment – ‘self-anchored striving’ – would need to reconceive the nature of authority, when it came to tending and treating the pleasures and pains of the mind. Either that authority would need to become more fluid, counter-cultural and relativist itself, accepting the lack of any clear truth in this arena, or it would need to acquire a new type of scientific expertise, more numerical and dispassionate, whose function is to construct classifications, diagnoses, hierarchies and distinctions, to suit the needs of governments, managers and risk profilers, whose job would otherwise be impossible.
Psychiatric authority reinvented
[123] The American psychiatry profession experienced its own crisis, with an almost identical chronology. In 1968, the American Psychiatric Association (APA) published the second edition of its handbook, the Diagnostic and Statistical Manual of Mental Disorders (DSM). Compared to later versions of the manual, this publication initially elicited very little debate. Even
> The second DSM (Diagnostic and Statistical Manual) of the American Psychiatric Association was published in 1968 and was met with a shrug
[124] psychiatrists had little interest in the book’s somewhat nerdish question of how to attach names to different symptoms. But within five years, this book was the focus of political controversies that threatened to sink the APA altogether.
[125] One problem with the DSM-II was that it seemed to fail in its supposed goal. After all, what was the use of having an officially recognized list of diagnostic classifications if it didn’t appear to constrain how psychiatrists and mental health professionals actually worked? The same year that the DSM-II was published, the World Health Organization published a study showing that even major psychiatric disorders, such as schizophrenia, were being diagnosed at wildly different rates around the world. Psychiatrists seemed to have a great deal of discretion available to them, being led by theories as to what was underlying the symptoms, which were rarely amenable to scientific testing in any strict sense. They shared a single terminology but lacked any strict rules for how it should be applied.
> The DSM-II was not precise in its steps for diagnostics and symptoms, which led to misdiagnosis and confusion…
[126] These criticisms were echoed by Thomas Szasz, who believed that psychiatry’s main problem was that it was incapable of making testable, scientific propositions.23 In a famous experiment conducted in 1973, eight ‘pseudopatients’ managed to get themselves admitted into psychiatric institutions, by turning up and falsely reporting that they were hearing a voice saying ‘empty’, ‘hollow’ and ‘thud’. This was later written up in the journal Science under the title ‘On Being Sane in Insane Places’, adding fuel to the anti-psychiatry movement.
> As an example of the inaccuracy of the DSM-II: in the early 1970s, eight pseudopatients got admitted into psychiatric institutions by faking symptoms.
[127] Most controversially, the DSM-II included homosexuality in its list of disorders, provoking an outcry that gathered momentum from 1970 onwards, with the support of leading anti-psychiatry spokespersons
> The DSM-II also included homosexuality in its list of disorders, provoking an outcry from 1970 onwards...
[128] For these ‘neo-Kraepelinians’, psychiatry’s claims to the status of science depended on diagnostic reliability: two different psychiatrists, faced with the same set of symptoms, had to be capable of reaching the same diagnostic conclusion independently of one another. Whether a psychiatrist truly understood what was troubling someone, what had caused it, or how to
relieve it, was of secondary importance to whether they could confidently identify the syndrome by name. The job of the psychiatrist, by this scientific standard, was simply to observe, classify and name, not to interpret or explain. Within this vision, the moral and political vocation of psychiatry, which in its more utopian traditions had aimed at healing civilization at
large, was drastically shrunk. In its place was a set of tools for categorizing maladies as they happened to present themselves. To many psychiatrists of the 1960s, this seemed like a banally academic preoccupation. But it was about to become a lot more than that.
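The neo-Kraepelinian test of diagnostic reliability has a standard statistical form: inter-rater agreement, commonly measured with Cohen’s kappa, which discounts the agreement two raters would reach by chance alone. A minimal sketch, with invented diagnoses:

```python
from collections import Counter

# Two psychiatrists independently diagnose the same eight patients.
# The diagnoses below are invented for illustration.
rater_a = ["depression", "depression", "anxiety", "schizophrenia",
           "depression", "anxiety", "anxiety", "depression"]
rater_b = ["depression", "anxiety", "anxiety", "schizophrenia",
           "depression", "anxiety", "depression", "depression"]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: probability both raters pick the same label
    # if each chose independently at their own observed label rates.
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```

A kappa of 1 means perfect agreement; 0 means no better than chance. The WHO finding mentioned above amounts to saying that, worldwide, psychiatrists’ kappa on even major disorders was alarmingly low.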
[129] While they were rejected by the psychiatry profession itself, the St Louis school were not the only voices arguing for greater diagnostic reliability at the time. Health insurance companies in the United States were growing alarmed by the escalating rates of mental health problems, with diagnoses doubling between 1952 and 1967.25 Meanwhile, the pharmaceutical industry
had a clear interest in tightening up diagnostic practices in psychiatry, thanks to a landmark piece of government regulation. There was an increasingly powerful business case for establishing a new consensus on the names that were attached to symptoms.
> The St Louis school, health insurance companies and the pharmaceutical industry all had a clear interest in tightening up diagnosis.
[130] One feature of the Kefauver–Harris amendment was that drugs had to be marketed with a clear identification of the syndrome that they offered to alleviate. Again, this made clarity around psychiatric classification imperative, although in this case for business reasons. If a drug seemed to have ‘antidepressant properties’, for example, this wasn’t enough to clear the Kefauver–Harris regulatory hurdle. It needed a clearly defined disease to target – which in that case would need to be called ‘depression’. As the British psychiatrist David Healy has argued, this legal amendment is arguably the critical moment in the shaping of our contemporary idea of depression as a disease.26 Thanks to Kefauver–Harris, we’ve come to believe that we can draw clear lines around ‘depression’, and between varieties of it – lines that magically correspond to pharmaceutical products.
> the Kefauver–Harris amendment: drugs had to be marketed with a clear identification of the syndrome that they offered to alleviate. Again, this made clarity around psychiatric classification imperative, although in this case for business reasons.
-
In 1962, Senator Estes Kefauver of Tennessee and Representative Oren Harris of Arkansas had tabled an amendment to the 1938 Federal Food, Drug and Cosmetic Act, aimed at significantly tightening the rules surrounding regulatory approval of pharmaceuticals.
[131] In the late 1960s, Spitzer had a growing interest in diagnostic classification, spotting an alternative to the status quo. But his status within the APA was marginal, until he was given the task of defusing the homosexuality controversy. To achieve this, he mounted an aggressive
campaign within the APA, offering an alternative description of the syndrome concerned – ‘sexual orientation disturbance’ – which highlighted that suffering must be involved before any diagnosis of sexuality disorder could be made. This was a subtle but telling distinction: Spitzer was implying that the relief of unhappiness should replace the pursuit of normality as the psychiatrist’s abiding vocation. In 1973, he faced down opposition from senior colleagues within the APA on this issue and won. Thanks to Spitzer’s advocacy, the question of sexual ‘normality’ was (not-so-quietly) replaced with one of classifiable misery, hinting at how the character of mental illness was changing more broadly.
> In 1970: Spitzer was tasked to address the homosexuality issue by offering an alternative description of the syndrome concerned – ‘sexual orientation disturbance’ – which highlighted that suffering must be involved before any diagnosis of sexuality disorder could
be made
[132] Every known psychiatric symptom was being listed, alongside a diagnosis. To do this, they drew heavily on a 1972 paper on diagnostic classification authored by the St Louis group, but adding further classifications and criteria.28 Typing away in his office in Manhattan’s West 168th Street, urging on his task force to recite symptoms and diagnoses like some endless psychiatric shopping list, Spitzer was unperturbed. ‘I never saw a diagnosis that I didn’t like’, he was rumoured to have joked.29 A new dictionary of mental and behavioural terminology was drafted
> DSM III: drew heavily on a 1972 paper on diagnostic classification authored by the St Louis group, but adding further classifications and criteria. Spitzer was unperturbed. ‘I never
saw a diagnosis that I didn’t like’, he was rumored to have joked
Relatively unhappy
[133] The resulting document that Spitzer and his team produced in 1978 provided
the basis of the DSM-III, arguably the most revolutionary and controversial text in the history of American psychiatry. Finalized over the course of 1979 and published the following year, this handbook bore scarce resemblance to its 1968 predecessor. The DSM-II outlined 180 categories over 134 pages. The DSM-III contained 292 categories over 597 pages. The St Louis
School’s earlier diagnostic toolkit had specified (somewhat arbitrarily) that a symptom needed to be present for one month before a diagnosis was possible. Without any further justification, the DSM-III reduced this to two weeks.
[134] Henceforth, a mental illness was something detectable by observation and classification, which didn’t require any explanation of why it had arisen. Psychiatric insight into the recesses and conflicts of the human self was replaced by a dispassionate, scientific guide for naming symptoms. And in scrapping the possibility that a mental syndrome might be an understandable
and proportionate response to a set of external circumstances, psychiatry lost the capacity to identify problems in the fabric of society or economy.30 Proponents described the new position as ‘theory neutral’. Critics saw it as an abandonment of the deeper vocation of psychiatry to heal, listen and understand. Even one of the task force members, Henry Pinsker (not from St Louis), started to get cold feet: ‘I believe that what we now call disorders are really but symptoms’.
Depressive-competitive disorder
[135] ‘Just do it’. ‘Enjoy more’. Slogans such as these, belonging to Nike and McDonald’s respectively, offer the ethical injunctions of the post-1960s neoliberal era. They are the last transcendent moral principles for a society which rejects moral authority. As Slavoj Žižek has argued, the duty to enjoy has become even greater than the duty to obey the rules. Thanks to the influence of the Chicago School over government regulators, the same is true for corporate profitability.
[136] The entanglement of psychic maximization and profit maximization has grown more explicit over the course of the neoliberal era. This is partly due to the infiltration of corporate interests into the APA. In the run up to the DSM-V, published in 2013, it was reported that the pharmaceutical industry was responsible for half of the APA’s $50 million budget, and that eight of the eleven-strong committee which advised on diagnostic criteria had links to pharmaceutical firms.33 The ways in which we describe ourselves and our mental afflictions are now shaped partly by the financial interests of big pharma.
[137] ‘Today’s brain-based economy puts a premium on cerebral skills, in which cognition is the ignition of productivity and innovation. Depression attacks that vital asset.
[138] Buried within the technocratic toolkits of neoliberal regulators and evaluators is a brutal political philosophy. This condemns most people to the status of failures, with only the faint hope of future victories to cling onto. That school in London ‘where the pupils are allowed to win just one race each, for fear that to win more would make the other pupils seem inferior’ was, in fact, a model of how to guard against a depressive-competitive disorder that few in 1977 could have seen coming. But that would also have required a different form of capitalism, which few policymakers today are prepared to warrant
Social Optimization
> a ‘pay-it-forward’ pricing scheme
[139] It transpires that people will generally pay more for a good, under the pay-it-forward model, than under a conventional pricing system.1 This is true even when the participants are complete strangers. As the study’s lead author, Minah Jung, puts it, ‘People don’t want to look cheap. They want to be fair, but they also want to fit in with the social norms.’ Contrary to what economists have long assumed, altruism can often exert a far stronger influence over our decision-making than calculation.
[140] Similar research findings have been made in the workplace. The notion of ‘performance-related pay’ is a familiar one, suggesting, reasonably enough, that additional effort by an employee is rewarded by a commensurate increase in pay. But studies conducted by researchers at Harvard Business School have discovered that there is a more effective way of extracting greater effort from staff: represent pay increases as a ‘gift’.2 When money is offered in exchange for extra effort, the employee may be minded to view the extra money as their entitlement and carry on as before. But when the employer makes some apparently gratuitous act of altruism, the employee enters a more binding reciprocal relationship and works harder.
[141] Making an explicit moral commitment – even under duress – seems to bind people in certain ways that utilitarian penalties and incentives often do not.
> altruism example
[142] But there is also a more disturbing possibility: that the critique of individualism and monetary calculation is now being incorporated into the armoury of utilitarian policy and management. The history of capitalism is littered with critiques of the dehumanizing, amoral world of money, markets, consumption and labour, offered by romantics, Marxists, anthropologists, sociologists and cultural critics among many others. These critics have long argued that social bonds are more fundamental than market prices. The achievement of behavioural economics is to take this insight, but then to instrumentalize it in the interests of power. The very idea of the ‘social’ is being captured.
[143] John B. Watson had promised in 1917 that, in an age of behaviourist science, ‘the educator, the physician, the jurist and the businessman could utilize our data in a practical way, as soon as we are able, experimentally, to obtain them.’ Behavioural economics has been true to this mission statement. One of its key insights is that, if one wants to control other human beings, it is often far more effective to appeal to their sense of morality and social identity than to their self-interest. By framing notions such as ‘fairness’ and ‘gift’ in purely psychological and neurological terms, behavioural science converts them into instruments of social control
The money-making ‘social’
[144] The strange spectacle of a corporation attempting to project feelings associated with friendship takes an even weirder turn when businesses seize the affordances of Twitter to grant them a quirky, conversational identity. Brands tweet at each other, in a coy, almost flirtatious fashion. Confronted by the phenomenon of the Denny’s diner chain acting cool on Twitter, the writer Kate Losse observed how ‘to become popular and “cool”, brands have had to learn the very techniques we learned as resistant teens to deal with power: our sarcastic humor and our endlessly remixable memes’.6 Corporations now want to be your friend.
[145] Meanwhile, neuromarketers have begun studying how successfully images and advertisements trigger common neural responses in groups, rather than in isolated individuals. This, it seems, is a far better indication of how larger populations will respond.
[146] From the 1960s onwards, brands were increasingly targeting niche groups and ‘tribes’ who had to be understood in a more subtle fashion, through careful observation and focus groups. Social media allows for an even finer grain of consumer insight, allowing researchers to spot how tastes, opinions and consumer habits travel through social networks. It allows advertising to be tailored to specific individuals, on the basis of who else they know, and what those other people liked and purchased. These practices, which are collectively referred to as ‘social analytics’, mean that tastes and behaviours can be traced in unprecedented detail.
[147] The most valuable trick, from a marketing perspective, is how to induce individuals to share positive brand messages and adverts with each other, almost as if there were no public advertising campaign at all. The business practice known as ‘friendvertising’ involves creating images and video clips which social media users are likely to share with others, for no conscious commercial purpose of their own.8 ‘Sponsored conversations’, in which
individuals participate in online discussions and blogs with the commercial support of a business, are a slightly less-well-hidden attempt to achieve the same thing. The science of viral marketing, or the creation of ‘buzz’, has led marketers to seek lessons from social psychology, social anthropology and social network analysis.
[148] At the same time that behavioural economics has been highlighting the various ways in which we are social, altruistic creatures, social media offers businesses an opportunity to analyse and target that social behaviour. The end goal is no different from what it was at the dawn of marketing and management in the late nineteenth century: making money. What’s changed is that each one of us is now viewed as an available instrument through which to alter the attitudes and behaviours of our friends and contacts. Behaviours and ideas can be released like ‘contagions’, in the hope of ‘infecting’ much larger networks. While social media sites such as Facebook offer whole new possibilities for marketing, the analysis of email networks can do the same for human resource management in workplaces. The project initiated by Elton Mayo in the 1920s, of understanding the business value of informal relationships, can now be subjected to a far more rigorous and quantitative scientific analysis.
[149] The ideology of this new ‘social’ economy depends on painting the ‘old’ economy as horribly individualistic and materialistic. The assumption is that, prior to the World Wide Web and the Californian gurus that celebrate it, we lived atomized, private lives, with every relationship mediated by cash. Before it became ‘social’, business was a nasty, individualist affair, driven
only by greed.
[150] This picture is, of course, completely false. Corporations have been trying to produce, manage and influence social relationships (as an alternative to purely monetary transactions) since the birth of management in the mid-nineteenth century. Businesses have long worried about their public reputation and the commitment of their employees. And it goes without saying
that informal social networks themselves are as old as humanity. What has changed is not the role of the ‘social’ in capitalism, but the capacity to subject it to a quantitative, economic analysis, thanks primarily to the digitization of social relationships. The ability to visualize and quantify social relations, then subject them to an economic audit, is growing all the time.
[151] Viewing social relations and giving in this tacitly economic way introduces an unpleasant question: what’s in it for me? One of the most persuasive answers emerging is that friendship and altruism are healthy, for both mind and body.
The medical ‘social’
[152] Christakis is an unusual sociologist. Not only is he far more mathematically adept than most, but he has also published a number of articles in high-ranking medical journals. The fractal-like images we were watching on the screen that day represented social networks in a Baltimore neighbourhood, within which particular ‘behaviours’ and medical symptoms were moving around. Christakis’s message to the assembled policy-makers was a powerful one. Problems such as obesity, poverty and depression, which so often coincide, locking people into chronic conditions of inactivity, are contagious. They move around like viruses in social networks, creating risks to individuals purely by virtue of the people they happen to hang out with.
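Christakis’s claim that behaviours ‘move around like viruses’ can be made concrete with a toy simulation of probabilistic spread along friendship ties. The network, the seed individual and the transmission probability below are all invented for illustration:

```python
import random

# A toy contagion model: in each round, every 'affected' person passes
# the behaviour to each of their friends with probability p.
random.seed(42)

# Invented friendship network: person -> list of friends.
friends = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 5], 4: [2, 5], 5: [3, 4, 6], 6: [5],
}

def spread(network, seeds, p=0.5, rounds=4):
    """Return the set of people affected after up to `rounds` of spread."""
    affected = set(seeds)
    for _ in range(rounds):
        new = {
            nbr
            for person in affected
            for nbr in network[person]
            if nbr not in affected and random.random() < p
        }
        if not new:          # contagion has died out
            break
        affected |= new
    return affected

result = spread(friends, seeds={0})
print(f"{len(result)} of {len(friends)} people affected")
```

The same mechanics run in reverse for Christakis’s policy point: seed a *positive* behaviour with a few well-connected individuals and let the network do the work. What the sketch also makes plain is the data problem raised in the next passage: running this for real requires knowing the whole `friends` graph.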
[153] If medical practitioners can change the behaviour of just a few influential people in a network for the better, potentially they can then spread a more positive ‘contagion’. The question is whether policy-makers could ever possibly hope to attain this kind of sociological data en masse, without some form of mass surveillance of social life. While we grow increasingly accustomed to the idea of a private company, such as Google, collecting detailed data on the everyday behaviour of millions, the notion that the government might do the same remains more chilling.
[154] While marketers desperately seek to penetrate our social networks in order to alter our tastes and desires, policy-makers have come to view social networks as means of improving our health and well-being. One important aspect of this is the discovery that a deficit of social relations – or loneliness – is not only a cause of unhappiness, but a serious physiological health risk as well. The ‘social neuroscience’ pioneered by Chicago neuroscientist John Cacioppo suggests that the human brain has evolved in such a way as to depend on social relationships.
[155] Driven particularly by neuroscience, the expert understanding of social life and morality is rapidly merging into the study of the body. One social neuroscientist, Matt Lieberman, has shown how pains that we have traditionally treated as emotional (such as separating from a lover) involve the same neurochemical processes as those we typically view as physical
(such as breaking an arm). Another prominent neuroscientist, Paul Zak (known in the media as Dr Love), has focused on a single neurochemical, oxytocin, which he argues is associated with many of our strongest social instincts, such as love and fairness. Scientists at the University of Zurich have discovered that they can trigger a sense of ‘right and wrong’ by stimulating a particular area of the brain.12 Social science and physiology are converging into a new discipline, in which human bodies are studied for the ways they respond to one another physically.
[156] A combination of positive psychology with social media analytics has demonstrated that psychological moods and emotions travel through networks, much as Christakis found in relation to health behaviour. For example, through analysing the content of social media messages, researchers at Beihang University in China found that certain moods like anger tended to travel faster than others through networks.13 A negative frame of mind, including depression itself, is known to be socially ‘contagious’. Happy, healthy individuals can then tailor their social relationships in ways that protect them against the ‘risk’ of unhappiness. Guy Winch, an American psychologist who has studied this phenomenon, advises happy people to be
on their guard. ‘If you find yourself living with or around people with negative outlooks,’ he writes, ‘consider balancing out your friend roster’.14 The impact of this friend-roster-rebalancing on those unfortunates with the ‘negative outlooks’ is all too easy to imagine.
Playing God
[157] Moreno’s desire to play God never really deserted him. The idea of humans as individual gods in their own social worlds, creators of themselves and creators of their relationships, animated his work as a psychoanalyst and social psychologist during his adulthood. His 1920 work, The Words of the Father, outlined a frightening humanistic philosophy, in which individuals confront situations of infinite possibility, and the only limiting factor upon their own powers of self-creation is that they exist in social groups. But social groups are also malleable and improvable. Every god needs its angels. ‘Well, Dr. Freud, I start where you leave off,’ he told him. ‘You meet people in the artificial setting of your office. I meet them on the street and in their homes, in their natural surroundings.’
[158] Moreno believed that careful observation of patterns of relationships might reveal ways in which psychological satisfaction could be improved, with relatively minor changes. In 1916, he laid down these thoughts in a letter to the Austro-Hungarian minister of the interior: ‘The positive and negative feelings that emerge from every house, between houses, from every factory, and from national and political groups in the community can be explored by means of sociometric analysis. A new order by means of sociometric methods is herewith recommended.’
[159] What was this ‘sociometric’ analysis he referred to? And how would it help? Though still undeveloped as a mathematical science, let alone a computational one, ‘sociometry’, as Moreno imagined it, laid the groundwork for what later became social network analysis and, consequently, social media.
[160] As Moreno’s curt remark to Freud indicated, his problem with psychoanalysis was that it studied individuals as separate from society, without the constraints offered by existing relationships. But what was the alternative? The danger was that the extreme individualism of Freudianism could flip directly into the equally extreme collectivism of Marxism, or else the form of statistical sociology pioneered by Émile Durkheim. In Moreno’s eyes, this left Europeans with a bipolar choice, between the enforced collectivity of the socialist state and the unruly egoism of the unconscious self. New York, however, suggested that some sort of third way was possible. Here was a city where individuals lived on top of one another, cooperating in various subtle ways, but without having their individual freedom trammelled in the process. America, Moreno reasoned, was a nation built upon self-forming groups.
The mathematics of friendship
[161] Relationships are there to serve the individual. Spontaneity and creativity derive wholly from each of us individually, but our capacity to release them depends on being in the right social circumstances. The task of sociometry was to place the study of an individual’s social relationships on a scientific footing, which would ultimately incorporate mathematics.
[162] The following year, Moreno got another chance to implement sociometry, at the New York Training School for Girls in Hudson. This time, he focused more explicitly on the girls’ attitudes towards each other, asking them with whom they would like to share a room and whom they already knew. This study saw Moreno produce visual sociometric maps of the results for the first time, marking out webs of common links between girls in the school in hand-drawn red lines, later to be published in his 1934 work Who Shall Survive? The social world had just become visible in an entirely new way. This, arguably, was the means of visualization which would dominate twenty-first-century understandings of the ‘social’.
[163] It wasn’t until the 1970s that a succession of software packages was developed for purposes of social network analysis.18 Of course these still required academic researchers to go and collect data to feed into the computers. This was still a laborious way of analysing the social world, which – compared with statistics – had little hold over the public imagination. All it took was for a broad mass of individuals to become regular users of networked computers, and Moreno’s methodology could become a dominant way of understanding the meaning of the term ‘social’. At the beginning of the twenty-first century, this was precisely the situation that arose, and its opportunities were seized by the ‘Web 2.0’ companies which emerged from 2003 onwards. The sociometric studies which Moreno had conducted through interviews with a few dozen people, producing hand-drawn diagrams, could now be carried out in Facebook HQ at the flick of a switch, with a billion participants.
[164] But methods of social analysis are never as politically innocent as they appear. While social network analysis purports to be a simple, stripped-down mathematical study of the ties that bind us, it’s worth reflecting on the philosophy that inspired its founder. As far as Moreno was concerned, other people are there to prop up and please individual egos. A friendship is valuable to the extent that it makes me feel better. Once the study of social life is converted into a branch of mathematical psychology, then this produces some worrying effects on how people start to relate to each other. The narcissism of the small boy playing God surrounded by his angels has become another model for how pleasure is now manufactured and measured.
Addicted to contact
[165] The DSM-V, which was launched in early 2013, added a further item to the menu of dysfunctional compulsions: internet addiction. Many doctors and psychiatrists are confident that this latest syndrome qualifies as a true addiction, no less than addiction to drugs. Sufferers show all the hallmarks of addictive behaviour. Internet use can overwhelm their ability to maintain relationships or hold down a career. When internet addicts are cut off from the web as a form of ‘cold turkey’, they can develop physiological withdrawal symptoms. They lie to those they are close to in an effort to get their fix. Neuroscience shows that the pleasures associated with internet use can be chemically identical to those associated with cocaine use or other addictive pastimes.
[166] The key difference between the two games is that World of Warcraft involves playing against other gamers in real time. It involves respect and recognition from real people. Unlike Halo, which the boy had played obsessively but not addictively, World of Warcraft is a social experience. Even while the boy remained alone in his room staring at moving graphics on
a monitor, the knowledge that other players were present offered a form of psychological ‘hit’ that wasn’t available from regular video games. Clearly, the boy was not simply addicted to technology but to a particular type of egocentric relationship which networked computers are particularly adept at providing.
[167] Graham has since become a noted authority on the topic of social media addiction, especially among young people. What he noticed, in the case of the World of Warcraft addict, was simply an extreme case of an affliction that has become widespread in the age of Facebook and smartphones. Social media addiction may be classed as a particular subset of internet addiction, as far as the DSM is concerned, but it is the social logic of it which is so psychologically powerful. Not unlike the gamer, people who cannot put down their smartphones are not engaging with images or gadgetry for the sake of it: they are desperately seeking some form of human interaction, but of a kind that does nothing to limit their personal, private autonomy. In America today, it is estimated that 38 per cent of adults may suffer from some
form of social media addiction.19 Some psychiatrists have suggested that Facebook and Twitter are even more addictive than cigarettes and alcohol.
[168] What we witness, in the case of a World of Warcraft addict, a social media addict or, for that matter, a sex addict, is only the more pathological element of a society that cannot conceive of relationships except in terms of the psychological pleasures that they produce. The person whose fingers twitch to check their Facebook page, when they’re supposed to be listening to their friend over a meal, is the heir to Jacob Moreno’s ethical philosophy, in which other people are only there to please, satisfy and affirm an individual ego from one moment to the next. This inevitably leads to vicious circles: once a social bond is stripped down to this impoverished psychological level, it becomes harder and harder to find the satisfaction that one desperately wants. Viewing other people as instruments for one’s own pleasure represents a denial of core ethical and emotional truths of friendship, love and generosity.
[169] If happiness resides in discovering relationships which are less ego-oriented, less purely hedonistic, than those which an individualistic society offers, then Facebook and similar forms of social media are rarely a recipe for happiness.
Neoliberal socialism
[170] Our society is excessively individualistic. Markets reduce everything to a question of individual calculation and selfishness. We have become obsessed with money and acquisition at the expense of our social relationships and our own human fulfilment.
[171] Now, the gurus of marketing, self-help, behavioural economics, social media and management are first in line to attack the individualistic and materialist assumptions of the marketplace. But all they’re offering instead is a marginally different theory of individual psychology and behaviour.
[172] The depressed and the lonely, who have entered the purview of policy-making now that their problems have become visible to doctors and neuroscientists, exhibit much that has gone wrong under the neoliberal model of capitalism. Individuals want to escape relentless self-reliance and self-reflection. On this, the positive psychologists have a very clear understanding of the malaise of extreme individualism, which locks individuals into introverted, anxious questioning of their own worth relative to others. Their recommended therapy is for people to get out of themselves and immerse themselves in relationships with others. But in reducing the idea of society to the logic of psychology, the happiness gurus follow the same logic as Jacob Moreno, behavioural economics and Facebook. This means that the ‘social’ is an instrument for one’s own medical, emotional or monetary gain. The vicious circle of self-reflection and self-improvement continues.
[173] What we encounter in the current business, media and policy euphoria for being social is what might be called ‘neoliberal socialism’. Sharing is preferable to selling, so long as it doesn’t interfere with the financial interests of dominant corporations. Appealing to people’s moral and altruistic sense becomes the best way of nudging them into line with agendas that they had no say over. Brands and behaviours can be unleashed as social contagions, without money ever changing hands. Empathy and relationships are celebrated, but only as particular habits that happy individuals have learnt to practise. Everything that was once external to economic logic, such as friendship, is quietly brought within it; what was once the enemy of utilitarian logic, namely moral principle, is instrumentalized for utilitarian ends.
[174] Facebook has had to go to great lengths to ensure that the same mistakes are not made, in particular by anchoring online identities in ‘real’ offline identities and tailoring its design around the interests of marketers and market researchers. Perhaps it is too early to say that it has succeeded.
[175] The reduction of social life to psychology, as performed by Jacob Moreno and behavioural economists, or to physiology as achieved by social neuroscience, is not necessarily irreversible either. Karl Marx believed that by bringing workers together in the factory and forcing them to work together, capitalism was creating the very class formation that would eventually
overwhelm it. This was despite the ‘bourgeois ideology’ which stressed the primacy of individuals transacting in a marketplace. Similarly, individuals today may be brought together for their own mental and physical health, or for their own private hedonistic kicks; but social congregations can develop their own logic, which is not reducible to that of individual well-being or pleasure. This is the hope that currently lies dormant in this new, neoliberal socialism.
Living in the Lab
Following the acquisition of the GM contract, JWT set about accumulating consumer insight on an unprecedented scale. In less than eighteen months, over 44,000 interviews were conducted around the world, many in relation to cars, but also on topics such as food and toiletry consumption. This was the most ambitious project of mass psychological profiling ever attempted. A detailed map of global consumer tastes was being built up from scratch. And yet this was not achieved without encountering some resistance.
The term ‘data’ derives from the Latin datum, which literally means ‘that which is given’. It is often an outrageous lie. The data gathered by surveys and psychological experiments is scarcely ever just given. It is either seized through force of surveillance, thanks to some power inequality, or it is given in exchange for something else, such as a monetary reward or a chance to win a free iPad. Often, it is collected in a clandestine way, like the one-way mirrors through which focus groups are observed.
Evidently, this political dimension was still visible in the 1920s, when JWT were expanding overseas. In the years since, however, it has receded from view. Questions of what people think or feel, how they intend to vote, how they perceive certain brands, have become simple matters of fact. This is no less true of happiness. Gallup now surveys one thousand American adults on their happiness and well-being every single day, allowing it to trace public mood in great detail, from one day to the next. We are now so familiar with the idea that powerful institutions want to know what we’re feeling and thinking that it no longer appears as a political issue. But
possibilities for psychological and behavioural data are heavily shaped by the power structures which facilitate them. The current explosion in happiness and well-being data is really an effect of new technologies and practices of surveillance. In turn, these depend on pre-existing power
inequalities.
Building the new laboratory
We live during a time of tremendous optimism regarding the possibilities for data collection and analysis, one that is refuelling the behaviourist and utilitarian ambition to manage society purely through careful scientific observation of mind, body and brain. Whenever a behavioural economist or happiness guru stands up and declares that finally we can access the secrets of human motivation and satisfaction, they are implicitly referring to a number of technological and cultural changes which have transformed opportunities for psychological surveillance.
In other circumstances, this data is being ‘opened up’ on the basis that it is a public good. After all, we the public created it by swiping our smart cards, visiting websites, tweeting our thoughts, and so on. Big data should therefore be something available to all of us to analyse. What this more liberal approach tends to ignore is the fact that, even where data is being opened up, the tools to analyse it are not. As the ‘smart cities’ analyst Anthony Townsend has pointed out with regard to New York City’s open data regulations, they judiciously leave out the algorithms which are used by e-government contractors to analyse the data.
Entire businesses are now built on the capacity to interpret and make connections within big data.
When obliged to report on their inner mental states for research purposes, people do so only grudgingly. But when doing so of their own volition, suddenly reporting on behaviour and moods becomes a fulfilling, satisfying activity in its own right. The ‘quantified self’ movement, in which individuals measure and report on various aspects of their private lives – from their diets, to their moods, to their sex lives – began as an experimental group of software developers and artists. But it unearthed a surprising enthusiasm for self-surveillance that market researchers and behavioural scientists have carefully noted. Companies such as Nike are now exploring ways in which health and fitness products can be sold alongside quantified-self apps, which will allow individuals to make constant reports of their behaviour (such as jogging), generating new data sets for the company in the process.
Ways of reading an individual’s mood, through tracking his body, face and behaviour, are now expanding rapidly. Computer programmes designed to influence our feelings, once they have been gauged, are another way in which emotions and technology are becoming synched with each other. Already, computerized cognitive behavioural therapy is available thanks to
software packages such as Beating the Blues and FearFighter. As affective computing advances, the capabilities of computers to judge and influence our feelings will grow.
Facial scanning technologies hold out great promise for marketers and advertisers wanting to acquire an ‘objective’ grasp of human emotion. These are beginning to move beyond the limited realms of computing or psychology labs and permeate day-to-day life. The supermarket chain Tesco has already trialled technologies which advertise different products at different individuals, depending on what moods their faces are communicating.8 Cameras can be used to recognize the faces of unique consumers in the street and market products at them based on their previous shopping behaviours.9 But this may be just the beginning. One of the leading developers of face-reading software has piloted the technology in classrooms, to identify whether a student is bored or focused.
The combination of big data, the narcissistic sharing of private feelings and thoughts, and more emotionally intelligent computers opens up possibilities for psychological tracking that Bentham and Watson could never have dreamed of. Add in smartphones and you have an extraordinary apparatus of data gathering, the like of which was previously only plausible within university laboratories or particularly high-surveillance institutions such as prisons.
As we move beyond the age of the survey, many of the same questions are being asked, but now with far more fine-grained answers. In place of opinion-polling, sentiment-tracking companies such as General Sentiment scrape data from 60 million sources every day, to produce interpretations of what the public thinks. In place of users’ satisfaction surveys, public service providers and health-care providers are analysing social media sentiment for
more conclusive evaluations.11 And in place of traditional market research, data analytics apparently reveals our deepest tastes and desires.
Will this sort of activity still prompt outrage in another ten or twenty years, or will we have grown used to it? More to the point, will Facebook still bother to publish their findings, or will they simply run experiments for their own private benefit? What is troubling about the situation today is that the power inequalities on which such forms of knowledge depend have become largely invisible or taken for granted. The fact that they combine ‘benign’ intentions (to improve our health and well-being) with those of profit and elite political strategy is central to how they function. The only way in which such blanket administration of our everyday lives can now be challenged is if we also challenge the automatic right of experts to deliver any form of emotion to us, be it positive or negative.
The truth of happiness?
Twitter is a case in point. Twitter’s 250 million users post 500 million tweets per day, generating a constant stream of data which can potentially be analysed for various purposes. This is one of the more dramatic examples of big data accumulation in recent years. Ten per cent of this stream is made freely available, opening up enticing opportunities for social researchers, both in business and universities. The rest of the stream, up to the complete fire-hose of every single tweet, is available for a range of fees. The research challenge is how to make sense of so much data, which involves building algorithms capable of interpreting millions of tweets. At the University of Pittsburgh, a group of psychologists has built one such algorithm, aimed at capturing how much happiness is expressed in a single 140-character tweet. To do this, the researchers created a database of five thousand words, drawn from digital texts, and gave every word a ‘happiness value’ on a scale of 1–9. A tweet can then be automatically scored in terms of its expression of happiness.
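The mechanics of such word-valence scoring are simple enough to sketch. What follows is a toy illustration, not the researchers’ actual code: the six-word lexicon and its values are invented, standing in for a database of thousands of human-rated words.

```python
# Toy version of tweet 'happiness' scoring. Each word carries a
# valence from 1 (saddest) to 9 (happiest); a tweet's score is the
# mean valence of its words that appear in the lexicon.
# NOTE: this lexicon is invented for illustration only.
HAPPINESS_LEXICON = {
    "love": 8.4, "happy": 8.3, "great": 7.5,
    "work": 5.2, "sad": 2.4, "hate": 2.0,
}

def score_tweet(text):
    """Return the mean happiness value of recognised words, or None."""
    values = [HAPPINESS_LEXICON[w] for w in text.lower().split()
              if w in HAPPINESS_LEXICON]
    if not values:
        return None  # no rated words: the tweet cannot be scored
    return sum(values) / len(values)

print(score_tweet("I love my great job"))  # scores well above the midpoint
print(score_tweet("I hate this sad day"))  # scores well below it
```

The crudeness is the point: such a scorer registers only which rated words appear, with no grasp of negation, irony or context, which is why claims to measure a population’s mood this way deserve the scepticism the chapter describes.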
One such project is the ‘Durkheim Project’, developed by researchers at Dartmouth College and named after Émile Durkheim. Durkheim is known as one of the founders of sociology, and as the author of Suicide, an analysis of variations in national suicide rates in the nineteenth century, which drew on the new statistical data on death rates that had accumulated in Europe over the preceding decades. The Durkheim Project aimed to go one better: drawing on analysis of social media data and mobile phone conversations, suicide would be predicted.
Smartphone apps such as Track Your Happiness developed at Harvard or Mappiness at London School of Economics, which prod people every few hours for details of their present mood (reported as a number) and present activity, enable economists and well-being specialists to accumulate knowledge which was impossible to imagine only a decade ago. It turns out that people are happiest while having ‘intimate relations’, though one wonders what reporting this via a phone does for the quality of that experience.
When researchers first began trying to collect data on the happiness of entire societies during the 1960s, they encountered a problem. This is another of those technical problems which cut to the heart of utilitarianism: to what extent can you trust people’s own reports of their happiness? The way people report happiness is likely to be skewed by a couple of things, though this of course assumes that there is something ‘objective’ about happiness to be reported in the first place. Firstly, they may forget how they actually experience their day-to-day lives and end up with a sunnier or gloomier overall take than is actually representative of their mood. We might consider this to be a form of delusion, although people are of course at liberty to narrate their lives however they see fit.
Secondly, they will be influenced by cultural norms regarding how to answer a survey question. If the question is, ‘Overall how happy do you feel with your life?’ or ‘How happy were you yesterday?’, some individuals may immediately react in certain ways, due to culture or upbringing, which lead them towards certain types of answers. They may feel that it is defeatist to complain and so exaggerate their happiness (a distinctly American problem), or conversely that it is vulgar to declare oneself happy and so under-report it (a more frequent phenomenon in France).
What has always been so seductive about the science of happiness is its promise to unlock the secrets of subjective mood. But as that science becomes ever more advanced, eventually the subjective element of it starts to drop out of the picture altogether. Bentham’s presumption, that pleasure and pain are the only real dimensions of psychology, is now leading squarely towards the philosophical riddle whereby a neuroscientist or data scientist can tell me that I am objectively wrong about my own mood. We are reaching the point where our bodies are more trusted communicators than our words.
The problem is that this is never the end of the matter. What begins as a scientific enquiry into the conditions and nature of human welfare can swiftly mutate into new strategies for behavioural control. Philosophically speaking, there is a gulf separating utilitarianism from behaviourism: the former privileges the inner experience of the mind as the barometer of all value, whereas the latter is only concerned with the various ways in which the
observed human animal can be visibly influenced and manipulated. But in terms of methods, technologies and techniques, the tendency to slip from the former into the latter is all too easy. Inner subjective feelings are granted such a priority under utilitarianism that the appeal of machines capable of reading and predicting them in an objective, behaviourist fashion becomes
all the greater.
The truth of decisions?
Behavioural psychology is founded on a brutally simple question: how to render the behaviour of another person predictable and controllable? Experiments which manipulate the environment, purely to discover how people respond, always bring ethical dilemmas with them. But when these travel beyond the confines of the traditional psychology lab and permeate everyday life, the problem becomes more political. Society itself is used and prodded to serve the research projects of a scientific elite.
In 2013, the British government was embarrassed when a blogger discovered that jobseekers were being asked to complete psychometric surveys whose results were completely bogus.19 Regardless of how the user answered the questions, they got the same results, telling them
what their main strengths are in the job market. It later transpired that this was an experiment being run by the government’s ‘Nudge Unit’, to see whether receiving such findings altered jobseekers’ behaviour. Social reality had been manipulated to generate findings for those looking down from above.
For the rest of us, talking to our neighbours or engaging in debate, we are constantly drawing on assumptions of what people intend, what they’re thinking, why they have chosen the path they did, and what they actually meant when they said something. On a basic level, to understand what another person says is to draw on various cultural presuppositions about the words they’ve used and how they’ve used them. These presuppositions may not be theories in any strict sense, but more like rules of thumb, which help us to interpret the social world around us. The claim that it is possible to know how decisions are taken, purely on the basis of data, is one that only the observer in his watchtower can plausibly make. For him, ‘theory’ is simply that which hasn’t yet become visible, and in the age of big data, fMRI and affective computing, he hopes to be able to abandon it altogether.
Add mass behavioural surveillance to neuroscience, and you have a cottage industry of decision experts, ready to predict how an individual will behave under different circumstances. Popular psychologists such as Dan Ariely, author of Predictably Irrational, and Robert Cialdini, author of Influence: The Psychology of Persuasion, unveil secrets of why people really take the decisions that they do. It transpires, so we’re told, that individuals are not in charge of their choices at all, that they can’t really tell you why they do what they do. Whether it be the pursuit of workplace efficiency, the design of public policy or seeking a date, the general science
of choice promises to introduce facts where previously there was only superstition. The fact that, no matter what the context, ‘choice’ always seems to refer to something which resembles shopping suggests that the decision scientists may not have thrown off the scourge of prejudice or theory as much as they may like.
And yet the apparent legitimacy of this data-led approach to understanding people is contributing to further expansions in surveillance capabilities. Human resource management is one of the latest fields to be swept up in data euphoria, with new techniques known as ‘talent analytics’ now available, which allow managers to evaluate their employees algorithmically, using data produced by workplace email traffic.20 The Boston-based company Sociometric Solutions goes further, producing gadgets to be worn by employees, to make their movements, tone of voice and conversations traceable by management. ‘Smart cities’ and ‘smart homes’, which are constantly reacting to and seeking to alter their inhabitants’ behaviour, are other areas where the new scientific utopia is being built. In an ironic twist in the history of consumerism, it has emerged that we could soon be relieved even of the responsibility for our purchasing decisions thanks to ‘predictive shopping’, in which companies mail products (such as books or groceries) directly to the consumer’s home, without being asked to, purely on the basis of algorithmic analysis or smart-home monitoring.
The happiness utopia
The next stage for the happiness industry is to develop technologies whereby those two separate indicators of well-being can be unified. Monism, the belief that there is a single index of value through which any ethical or political outcome can be assessed, is always frustrated by the fact that no single ultimate indicator of this value can be found or built. Money is all very well, but it leaves out other psychological and physiological aspects of well-being. Measuring blood pressure or pulse rate is fine up to a point, but it cannot indicate how satisfied we are with our lives. fMRI scans can now visualize emotions in real time, but they miss broader notions of health and flourishing. Affect scales and questionnaires run up against cultural problems of how different words and symptoms are understood.
Monism is useful rhetorically, and attractive from the perspective of the powerful who yearn for simple ways of working out what to do next. But does anyone actually believe that all pleasures and pains sit on a single index? Sure, we might debate matters as if that were the case, using the metaphor of ‘utility’ or ‘well-being’ with which to do so. But take away its objective neural, facial, psychological, physiological, behavioural and monetary indicators, and the ghostly notion of happiness as a single quantity also vanishes into thin air.
In which case, why build such an apparatus of measurement? Why go to such lengths to ensure that the various separate bits of it are joined up, connecting our bank balances to our bodies, our facial expressions to our shopping habits, and so on? Under the auspices of scientific optimism, we are being governed by a philosophy that makes no real sense. It is unable to
specify, finally, whether happiness is something physical or metaphysical. Every time it is asserted as the former, it slips away again. Yet the apparatus of measurement keeps on growing, creeping further into our personal and social lives.
There are a number of critical psychologists over the years who have sought to point this out, by stressing how mental illness is entangled with disempowerment. There are plenty of inspiring ventures and experiments which seek to give people hope partly through restoring their say over their own lives. There are also businesses which do not rely on behavioural science to manage and sell to people. These scattered alternatives are all parts of some larger alternative, which, correctly understood, might even be a better recipe for happiness.
Critical Animals
It has long been understood that working outdoors has certain psychological and emotional benefits, especially when it involves tending nature.
There is a long history of putting mentally troubled people to work on farms. The routines of milking, tilling and harvesting offer their own form of normality for those who cannot cope with the normality offered in society at large.
People who can’t seem to find coherence in their own lives, can’t relate to a conventional job, or have suffered some brutal emotional rupture, discover that the presence of plants and animals has a calming influence. The harshness of agricultural life may sometimes be part of its value. Crops fail, weather turns bad, but the only plausible response is to laugh and collectively have another go. Neither individual glory nor individual blame is appropriate.
The agencies funding Growing Well, and the doctors referring patients to volunteer there, have one theory as to what is going on. Aldridge and his colleagues have another one entirely. According to the former, the volunteers are medically ill and receiving a form of treatment. According to the latter, they are rediscovering their dignity, exercising judgement, and participating in a business which trades successfully in the local area. In the first theory,
the volunteers are passive, without any medically relevant interpretation of their own of their situation. In the second theory, they are active and gaining opportunities to influence the world around them, through interpreting and debating it.
Understanding unhappiness
Why do people become unhappy, and what should anyone do about it? These are questions which concern philosophers, psychologists, politicians, neuroscientists, managers, economists, activists and doctors alike. How one sets about answering such questions will depend heavily on what sorts of theories and interpretations one employs. A sociologist will offer different
types of answers from a neuroscientist, who will offer different types of answers again from a psychoanalyst. The question of how we explain and respond to human unhappiness is ultimately an ethical and political one, of where we choose to focus our critique and, to be blunt about it, where we intend to level the blame.
Beren Aldridge’s insight, on which the structure and ethos of Growing Well is based, is an important one. Treating the mind (or brain) as some form of decontextualized, independent entity that breaks down of its own accord, requiring monitoring and fixing by experts, is a symptom of the very culture that produces a great deal of unhappiness today. Disempowerment is an integral part of how depression, stress and anxiety arise. And despite the best efforts of positive psychologists, disempowerment occurs as an effect of social, political and economic institutions and strategies, not of neural or behavioural errors. To deny this is to exacerbate the problem for which happiness science claims to be the solution.
Beyond the various behaviourist and utilitarian disciplines that have been explored in this book, there are a number of research traditions which share this focus on disempowerment. The community psychology tradition, which emerged in the United States during the 1960s, insists that individuals can only be understood within their social contexts. Clinical psychologists have been among the most outspoken critics of the medicalization of distress, and the role of the pharmaceutical companies in encouraging it. Allied to a critique of capitalism, these psychologists – such as David Smail and Mark Rapley in the UK – have offered alternative interpretations of psychiatric symptoms, based on a more sociological and political understanding of unhappiness.3 Social epidemiology, as practised by Carles Muntaner in Canada or Richard Wilkinson in the UK, tries to understand how mental disorders vary across different societies and different social classes, correlating with different socioeconomic conditions.
At various points in history, these more sociological approaches even found their way into the thinking of business. As Chapter 3 explored, there was a period during the 1930s and 1940s when market research acquired a quasi-democratic dimension, seeking to discover what the public wanted from and thought about the world. Sociologists, statisticians and socialists became instrumental to how the attitudes of the public were represented. As Chapter 4 discussed, the emphasis which management came to put on teamwork, health and enthusiasm from the 1930s onwards has occasionally produced more radical analyses which highlight the importance of collective power and voice in the workplace as contributing factors to productivity and well-being. This potentially points towards whole new models of organization, and not simply new techniques of management.
Business school students who have strongly internalized materialist values (that is, of measuring their own worth in terms of money) report lower levels of happiness and self-actualization than those who haven’t.10 Individuals who spend their money in obsessive
ways – either too cautiously or too loosely – have been discovered to suffer from lower levels of well-being.11 And materialism and social isolation have been shown to be mutually reinforcing: lonely people seek material goods more compulsively, while materialist individuals are more at risk of loneliness.
Advertising and marketing play a crucial role in sustaining these negative spirals; indeed they (and their paymasters) have a clear economic interest in doing so. If consumption and materialism remain both cause and effect of individualistic, unhappy cultures, then the vicious circle is a profitable one for those involved in marketing. The precise role of advertising in the
propagation of materialist values is disputed, although research does at the very least confirm that the two have risen in tandem with one another.
If we want to live in a way that is socially and psychologically prosperous, and not simply highly competitive, lonely and materialistic, there is a great deal of evidence from clinical psychology,
social epidemiology, occupational health, sociology and community psychology regarding what is currently obstructing this possibility. The problem is that, in the long history of scientifically analysing the relationship between subjective feelings and external circumstances, there is always the tendency to see the former as more easily changeable than the latter. As many
positive psychologists now enthusiastically encourage people to do, if you can’t change the cause of your distress, try and alter the way you react and feel instead. This is also how critical politics has been neutralized.
And yet the utilitarian and behaviourist visions of an individual as predictable, malleable and controllable (so long as there is sufficient surveillance) have not triumphed merely due to the collapse of collectivist alternatives. They have been repeatedly pushed by specific elites, for specific political and economic purposes, and are experiencing another major political push right now.
Scientific tramlines
That industry is heavily invested in brain research is unsurprising. The pharmaceutical industry has some very obvious incentives to push the boundaries of science in this area, while neuromarketers maintain the hope that the brain’s ‘buy button’ will eventually be identified once and for all. It is then only a question of working out how such a button might be pushed by
advertising. The implications of neuroscience for anyone seeking to influence and control people – be they employees, delinquents, soldiers, ‘problem families’, addicts or whatever – are quite obvious, even if they are occasionally exaggerated. Crudely causal explanations of why an individual took decision x, as opposed to y, and how to alter this in future, have a lucrative market among the powerful.
The greatest successes of behavioural and happiness science occur when individuals come to interpret and narrate their own lives according to this body of expertise. As laypeople, we come to attribute our failures and sadness to our brains or our troublesome minds. Operating with constantly split personalities, we train our selves to be more suspicious of our thoughts, or more tolerant of our feelings, with the encouragement of cognitive behavioural therapy. In ways that will baffle cultural historians a century from now, we even engage in quantified self-monitoring of our own accord, volunteering information on our behaviours, nutrition and moods to databases, maybe out of sheer desperation to be part of something larger than just ourselves. Once we are split down the middle in this way, a relationship – perhaps a friendship? – with oneself becomes possible, which when taken too literally breeds loneliness and/or narcissism.
Mystical seductions
What would an escape from this hard psychological science look like? If politics and organization have been excessively psychologized, reducing every social and economic problem to one of incentives, behaviour, happiness and the brain, what would it take for them to be de-psychologized? One answer is a constant temptation, but we should be wary of it. This is to flip the harsh, rationalist objective science of the mind (and brain) into its opposite, namely a romantic, subjective revelling in the mysteries of consciousness, freedom and sensation.
Confronted by a social world that has been reduced to quasi-mechanical natural forces of cause and effect, the lure of mysticism grows all the greater. In the face of the radical objectivism of neuroscience and behaviourism, which purport to render every inner feeling visible to the outside world, there is a commensurate appeal in radical subjectivism, which claims that
what really matters is entirely private to the individual concerned. The problem is that these two philosophies are entirely compatible with one another; there is no friction between them, let alone conflict. This is a case of what Gustav Fechner described as ‘psychophysical parallelism’.
For evidence of this, see how the promotion of mindfulness (and many versions of positive psychology) slips seamlessly between offering scientific facts about what our brains or minds are ‘doing’ and quasi-Buddhist injunctions to simply sit, be and ‘notice’ events as they flow in and out of the consciousness. The limitation of the behavioural and neurosciences is that, while they purport to ignore subjective aspects of human freedom, they speak a language which is primarily meaningful to expert researchers in universities, governments and businesses. By focusing on whatever can be rendered ‘objective’, they leave a gap for a more ‘subjective’ and passive discourse. New age mysticism plugs this gap.
The language and theories of expert elites are becoming more idiosyncratic and separate from those of the public. How ‘they’ narrate human life and how ‘we’ do so are pulling apart from each other, which undermines the very possibility of inclusive political deliberation. For
example, positive psychology stresses that we should all stop comparing ourselves to each other and focus on feeling more grateful and empathetic instead. But isn’t comparison precisely what happiness measurement is there to achieve? Doesn’t giving one person a ‘seven’ and another person a ‘six’ work so as to render their differences comparable? The morality that is being offered by way of therapy is often entirely insulated from the logic of the science and technologies which underpin it.
‘I know how you feel’
Witnessing someone else’s brain ‘light up’ is something that costs a lot of money. A state-of-the-art fMRI scanner costs $1 million, with annual operating costs of between $100,000 and $300,000. The insights that such technologies offer into mental illness, brain defects and injuries are
considerable. Gradually, our everyday language of moods, choices and tastes is being translated into terms that correspond to different physical parts of our brains. Neuromarketers can now specify that one advertisement causes activity in a given part of the brain, while a different advertisement does not. This is believed to have significant commercial implications. But to what extent does so much technological progress aid us in a more fundamental
problem of social life, that of understanding other people?
But what if this philosophy is grounded in a mistake? And what if it is a mistake that we keep on making, no matter how advanced our brain-scanning, mind-measuring and facial-reading devices become? In fact, what if we actually become more liable to make this mistake as our technology grows more sophisticated? For Ludwig Wittgenstein, and those who have followed
him, a statement such as Bentham’s about our ‘two sovereign masters’ is based upon a fundamental misunderstanding of the nature of psychological language. To rediscover a different notion of politics, we might first have to excavate a different way of understanding the feelings and behaviours of others.
To understand what a word means, Wittgenstein argued, is to understand how it is used, meaning that the problem of understanding other people is first and foremost a social one. Equally, to understand what another person is doing is to understand what their actions mean, both for them and for others who are involved. If I ask the question, ‘What is that person feeling?’ I can answer by interpreting their behaviour, or by asking them. The answer is not
inside their head or body, to be discovered, but lies in how the two of us interact. There is nothing stopping me from being broadly right about what they are feeling, so long as that is recognized as an interpretation of what they are doing and communicating, or what their behaviour means. I am not going to discover what they are feeling as some sort of fact, in the way that I can discover their body temperature. Nor would they be reporting a fact, should they tell me what they’re thinking.
‘Psychological attributes’, Wittgenstein argued, ‘are attributes of the animal as a whole’. It is nonsense to say that ‘my knee wants to go for a walk’, because only a human being can want something. But due to the hubris of scientific psychology and neuroscience, it has become a commonplace to say, for example, ‘Your mind wants you to buy this product’ or ‘My brain
keeps forgetting things’. When we do this, we forget that wanting and forgetting are actions which only make any sense on the basis of an interpretation of human beings, embedded in social relations, with intentions and purposes. Behaviourism seeks to exclude all of that, but in the process does considerable violence to the language we use to understand other people.
Psychology is afflicted by the same error, time and time again, of being modelled on physiology or biology, either by force of metaphor or by a more literal reductionism. Of course, this attempt to either reduce psychology to the physical, or at least base it on mechanical or biological metaphors, is one of the main strategies of power and control offered by the various theorists
explored over the course of this book. For Jevons, the mind was best understood as a mechanical balancing device; for Watson, it was nothing but observable behaviour; for Selye, it could be discovered in the body; for Moreno, it was manifest in measurable social networks; marketers now like to attribute our decisions and moods to our brains; and so on.
At its most fundamental, the choice between Bentham and Wittgenstein is a question of what it means to be human. Bentham posited the human condition as one of mute physical pain, to be expertly relieved through carefully designed interventions. This is an ethic of empathy, which is
extrapolated to a society of scientific surveillance. It also views the division between humans and animals as philosophically insignificant. For Wittgenstein, by contrast, there is nothing prior to language. Humans are animals which speak, and their language is one that other humans understand. Pleasure and pain lose their privileged position, and cannot be treated as matters of scientific fact. ‘You learned the concept “pain” when you learned language’, but it is fruitless to search for some reality of consciousness outside of the words we have to express ourselves.
If people are qualified to speak for themselves, the constant need to anticipate – or to try and
measure – how they are feeling suddenly disappears. So, potentially, does the need for ubiquitous psychosomatic surveillance technologies.
‘How else to know people?’
Psychology and social science are perfectly possible under the sorts of
conditions described by Wittgenstein; indeed they are much more straightforward. Systematic efforts to understand other people, through their behaviour and speech, are entirely worthwhile. But they are not so different from the forms of understanding that we all make of one another in everyday life. As the social psychologist Rom Harré argues, we all face the occasional problem of not being sure what other people mean or intend but have ways of overcoming this. ‘The only possible solution’, he argues, ‘is to use our understanding of ourselves as the basis for the understanding of others, and our understanding of others of our species to further our understanding of ourselves’.
One implication of this, when it comes to acquiring psychological knowledge, is that we have to take what people say far more seriously. Not only that, but we have to assume that for the most part, they meant what they said, unless we can identify some reason why they didn’t. Where
behaviourism always attempts to get around people’s ‘reports’ of what they’re feeling, in search of the underlying emotional reality, an interpretative social psychology insists that feeling and speaking cannot be ultimately disentangled from each other. Part of what it means to understand the feelings of another is to hear and understand what they mean when they use the word ‘feeling’.
Against psychological control
Would an enlightened mental health practitioner or social epidemiologist find it equally funny? I suspect not. Many psychiatrists and clinical psychologists are entirely aware that the problems they are paid to deal with do not start within the mind or body of a solitary individual, or even
necessarily within the family. They start with some broader social, political or economic breakdown. Delimiting psychology and psychiatry within the realms of medicine (or some quasi-economic behavioural science) is a way of neutering the critical potential of these professions. But what would they and we demand, given the chance?
Businesses which are organized around a principle of dialogue and co-operative control would be another starting point for a critical mind turned outwards upon the world, and not inwards upon itself. One of the advantages of employee-owned businesses is that they are far less reliant on the forms of psychological control that managers of corporations have relied on since the 1920s. There is no need for somewhat ironic HR rhetoric about the ‘staff being the number one asset’ in firms where that is constitutionally recognized. It is only under conditions of ownership and management which render most people expendable that so much ‘soft’ rhetorical effort has to be undertaken to reassure them that they are not.
Arguing for democratic business structures cannot plausibly mean the democratization of every single decision, at every moment in time. But it is not clear that the case for management autarchy still works either, even on its own terms. If the argument for hierarchies is that they are efficient, that they cut costs, that they get things done, a more nuanced reading of much of the research on unhappiness, stress, depression and absence in the workplace would suggest that current organizational structures are failing even in this limited aim.
Stress can be viewed as a medical problem, or it can be viewed as a political one. Those who have studied it in its broader social context are well aware that it arises in circumstances where individuals have lost control over their working lives, which ought to throw the policy spotlight on
precarious work and autarchic management, not on physical bodies or medical therapies. In 2014, John Ashton, the president of the UK Faculty of Public Health, argued that Britain should gradually move towards a four-day week, to alleviate the combined problems of over-work and under-work, both of which are stress factors.
At the frontier of utilitarian measurement and management today is a gradual joining up of economics and medicine, into a single science of well-being, accompanied by a monistic fantasy of a single measure of human optimality. Measurements which target the body are becoming commensurable with those geared towards productivity and profit. This is an
important area of critique and of resistance. As a point of principle, we might state that the pursuit of health and the pursuit of money should remain in entirely separate evaluative spheres.29 Extrapolating from this principle yields various paths of action, from the defence of public healthcare, to opposition to workplace well-being surveillance, to rejection of apps and
devices which seek to translate fitness behaviours into monetary rewards.
As Chapter 5 argued, neoliberalism’s respect for ‘free’ markets has, in any case, always been
exaggerated. Marketing, which seeks to reduce business uncertainty, has long been far more attractive to corporations than markets. Suspicion of services offered for free, such as most social media platforms, is a symptom of a more general anxiety regarding technologies of psychological control, which is not simply reducible to traditional concerns about privacy.
Advertising has been among the most powerful techniques of mass behavioural manipulation since it first became ‘scientific’ at the dawn of the twentieth century. On this issue, advertisers have a vested interest in contradicting themselves. On the one hand, the customer is sovereign and cannot be conned; the advertisement is simply a vehicle for the product. On the other hand, spending on advertising continues to rise, and efforts to inhibit the power of brands and marketing agencies to flood the media, public space, sports and public institutions with imagery are vigorously attacked. If advertising is so innocent, then why is there so much of it around?
A US organization, Commercial Alert, runs an annual ‘Ad Slam’ contest, in which $5,000 is awarded to the school that has removed the most advertising from its common spaces.
Campaigns such as these are inevitably dependent on some quite traditional ideas of how to defend the public, and target some relatively old-fashioned techniques of psychological control. Product placement in ‘free’ media and entertainment content is a different type of problem altogether, while the internet enables marketing to monitor and target individuals in a far
more subtle and individualized fashion. ‘Smart’ infrastructures, which offer constant feedback loops between individuals and centralized data stores, are assumed to be the future of everything from advertising, to health care, to urban governance, to human resource management. The all-encompassing laboratory, explored in Chapter 7, is a frightening prospect, not least because it is difficult to see how it might ever be reversed, should that be desired in
future. But there is no reason to assume that practices such as facial scanning in public places must remain legal.
What would the critique of smartness look like? And what would resistance to it mean? Would it be a celebration of ‘dumbness’? Would we simply refuse to wear the health-tracking wristbands? Perhaps. Some aspects of the Benthamite utopia can seem almost impossible to duck out of – the sentiment analyser who discovers the happiest neighbourhood in the city,
through mining the geo-data of tweets; the instructions from one’s doctor to exercise more gratitude so as both to improve mood and to reduce physical stress. But remembering the philosophical contradictions inherent in these ventures, and their historical and political origins, may at least offer a source of something which has no simple bodily or neural correlate, and involves a strange tinge of happiness in spite of unhappiness: hope.
