more on offensive ideas

In response to my previous post on this subject, my friend Chad Wellmon sent me a link to a (paywalled) essay by his colleague Elizabeth Barnes on the value of responding to offensive ideas. Barnes makes a useful distinction between ideas that are deeply offensive but not widely or seriously held — an argument in defense of rape, for instance — and the ideas of, say, Peter Singer. 

So what’s the difference with Peter Singer? His views are, from my perspective at least, no less offensive than the pro-rape argument. Yet he strikes me as different for the simple reason that, when it comes to a description of what many people think or what many people’s everyday views imply, Singer isn’t wrong.
Most people would, of course, be far too polite to say what Singer says. But Singer’s claims about the comparative value of disabled lives follow naturally from the casual remarks that disabled people and caregivers hear all the time. They’re implicit in the grave “I’m so sorry” quietly whispered to my friend after colleagues meet her beautiful, smiling daughter for the first time. They’re the unspoken message when another friend is reassured, just after her son is born: “But you can have another child.” They’re the natural conclusion of a well-meaning doctor remarking to me, on learning that I don’t have children: “Oh, that’s probably for the best — your children might’ve inherited [your condition].”
I seriously doubt that the well-intentioned people who say these things would endorse Singer’s conclusions. But Singer is right that his conclusions flow straightforwardly from these sorts of common attitudes. For this reason, I find myself strangely grateful for the brutal honesty of Peter Singer. He says explicitly what others only gesture at implicitly. 

(Barnes has a rare medical condition that, as far as I can tell, does not threaten her life but makes that life more difficult in various ways.) Now, someone might argue in response that if Singer’s arguments indeed extend commonly-held views, that’s all the more reason to ignore them — to push them further and further to the margins. Barnes: 

People worry that grappling with offensive views gives those views undue legitimacy. But in the case of someone like Singer, the views have legitimacy whether or not I choose to engage with them. To state the obvious, the arguments of the Ira W. DeCamp Professor of Bioethics at Princeton University are going to matter whether or not I pay attention to them. But, more important, Singer’s views already have legitimacy because people will continue to think about disability in ways directly relevant to his arguments regardless of whether progressive academics decide those arguments are simply too offensive to be discussed. (After all, as Singer himself wryly notes, the sales of Practical Ethics tend to increase whenever there are calls to “no platform” his talks.) Even Singer’s views on infant euthanasia aren’t a dystopian thought experiment. At least one major European country (the Netherlands) openly practices infanticide in some cases of disability.

If ideas have actual social and political purchase, if they are doing work in the world, then it’s rather naïve to think that by ignoring them we could somehow delegitimize them. That’s simply wishful thinking. 
In the talk I gave at Duke in January, called “Embrace the Pain: Living with the Repugnant Cultural Other,” I tried to make a case similar to the one Barnes makes, though on somewhat different grounds. I also think my argument is a kind of response to the thoughtful comments Alastair Roberts made on my earlier post. 
Anyway, here’s an excerpt: 

So, if we dare to embrace the pain while striving to minimize the harm, what does that look like? And how does it help us deal with our RCO? How can the presence of my RCO in my community be seen as a feature rather than a bug? It begins with the understanding that we come together, temporarily, in this place so that we may play a certain complex and meaningful game, a game that involves trying out intellectual and personal positions, testing my beliefs and my identity in relation to others who are doing the same — and playing this game under the guidance and direction of people whom we all trust to run it fairly and with our flourishing in mind. With that framework in place, then, we might be able genuinely to hear Mill’s word of warning: “He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion.” In a healthily functioning academic community, these words can be heard as a health-giving challenge rather than a threat to be feared.
In such a community, my RCO can therefore play a role in strengthening and clarifying my convictions — even if that’s the last thing he would want to do! Recall my opening promise that, following G K Chesterton, I would try not to ask you to consider that you might be wrong. To take a couple of extreme examples: Do we really want a world in which Elie Wiesel seriously considers whether the Nazis might have been justified after all in implementing their Final Solution? Or where Malcolm X pauses to consider whether white supremacy is, after all is said and done, the best social order? I think not. But that doesn’t mean that — even in the big and uncontrolled outside world, and still more in the semi-controlled realm of academic conversation — we don’t benefit from a better understanding of what people we disagree with think, and why they think as they do.
Chesterton deplored the movement of modesty from “the organ of ambition” to “the organ of conviction.” He doesn’t want you to be modest about your convictions, but rather about your ambitions — by which he means all the ways you hope to put your convictions into effect. He wants you to be confident about your ends but critical and even skeptical about your preferred means to those ends. He wants you to consider all the different ways you might get to the goal you treasure — and in this endeavor your RCO can help, even if, again, he wouldn’t want to.

I also argued in that talk that we stand a better chance of getting people to, as Roberts puts it in his aforementioned comments, “stress-test” their beliefs under two conditions: if we are able to cultivate a game-like character in our campus conversations and if we faculty members work much, much harder to create an environment in which our students trust us to manage and direct those games. 
A postscript: at dinner after my talk I sat next to Bob Blouin, the Provost of the University of North Carolina, and he commented that he thought that faculty would do a better job of cultivating their students’ trust if they felt trusted by administrators. Well, yes. Precisely.  

being right to no effect

This post of mine from earlier today, which was based on this column by Damon Linker, has a lot in common with this post by Scott Alexander:

I write a lot about how we shouldn’t get our enemies fired lest they try to fire us, how we shouldn’t get our enemies’ campus speakers disinvited lest they try to disinvite ours, how we shouldn’t use deceit and hyperbole to push our policies lest our enemies try to push theirs the same way. And people very reasonably ask – hey, I notice my side kind of controls all of this stuff, the situation is actually asymmetrical, they have no way of retaliating, maybe we should just grind our enemies beneath our boots this one time.
And then when it turns out that the enemies can just leave and start their own institutions, with horrendous results for everybody, the cry goes up “Wait, that’s unfair! Nobody ever said you could do that! Come back so we can grind you beneath our boots some more!”
Conservatives aren’t stuck in here with us. We’re stuck in here with them. And so far it’s not going so well. I’m not sure if any of this can be reversed. But I think maybe we should consider to what degree we are in a hole, and if so, to what degree we want to stop digging.

Which in turn has a lot in common with this post by Freddie deBoer:

Conservatives have been arguing for years that liberals essentially want to write them out of shared cultural and intellectual spaces altogether. I’ve always said that’s horseshit. But I’m trying to be real with you and take an honest look at what’s happening in the few spaces that progressive people control. In the halls of actual power, meanwhile, conservatives have achieved incredible electoral victories, running up the score against the progressives who in turn take out their frustrations in cultural and intellectual spaces. This is not a dynamic that will end well for us.
Of course by affirming this version of events from conservatives, I am opening myself to the regular claim that I am a conservative. Which is incorrect; I have never been further left in my life than I am today. But you can understand it if you understand the contemporary progressive tendency to treat politics as a matter of which social or cultural group you associate with rather than as a set of shared principles and a commitment to enacting them by appealing to the enlightened best interest of the unconverted. That dynamic may, I’m afraid, also explain why progressives risk taking even firmer control of campus and media and Hollywood and losing everything else.

Which, in another turn, has a lot in common with this column by Andrew Sullivan:

I know why many want to dismiss all of this as mere hate, as some of it certainly is. I also recognize that engaging with the ideas of this movement is a tricky exercise in our current political climate. Among many liberals, there is an understandable impulse to raise the drawbridge, to deny certain ideas access to respectable conversation, to prevent certain concepts from being “normalized.” But the normalization has already occurred — thanks, largely, to voters across the West — and willfully blinding ourselves to the most potent political movement of the moment will not make it go away. Indeed, the more I read today’s more serious reactionary writers, the more I’m convinced they are much more in tune with the current global mood than today’s conservatives, liberals, and progressives. I find myself repelled by many of their themes — and yet, at the same time, drawn in by their unmistakable relevance.

What all these writings have in common is this: We are all saying to the Angry Left that it’s unwise, impractical, and counterproductive to think that you can simply refuse to acknowledge and engage with people who don’t share your politics — to trust in your power to silence, to intimidate, to mock, and to shun rather than to attempt to persuade.

I think we’ve all made very good cases. I also think that almost no one who needs to hear what we have to say will listen. So what will be the result?

Freddie is right to say that the three industries where the take-no-prisoners model is most entrenched are Hollywood, the news media, and the university. And that entrenchment leads, as I have explained before, to the perception of ideological difference as defilement — a thesis that I think goes a long way towards explaining the intensity of the outrage about Bret Stephens’s NYT column on climate-change rhetoric. The purging of those who have defiled the community is a feasible practice unless and until the departure of those people is costly to the community; and each of those three cultural institutions assumes without question that no costs will be incurred by cathartic expulsion of the repugnant cultural Other.

Hollywood could be right to make this assumption: certainly there are no plausible alternatives to its dominance, though that dominance might take new forms — e.g. more movies and series made outside the conventional studio structure by new players like Netflix and Amazon. (It’s possible, though I think highly unlikely, that those new players will attempt to exploit a socially conservative audience.)

But it’s hard to think of two white-collar professions more imperiled than journalism and academia. The belief that left or left-liberal university administrators and professors, and journalists and editors, have in their own impregnability is simply delusional. If they connected their political decisions to their worried meetings about rising costs and dwindling sources of revenue, they would realize this; but the power of compartmentalization is great.

So what I foresee for both journalism and academia is a financial decline that proceeds at increasing speed, a decline to which ideological rigidity will be a significant contributor, though certainly not the only one. (The presence of other causes will ensure that publishers, editors, administrators, and the few remaining tenured faculty members will be able to deny the consequences of rigidity.) I also expect this decline to proceed far more quickly for journalism than for academia, since the latter still has a great many full-time faculty who can be replaced by contingent faculty willing to work for something considerably less than the legal minimum wage.

But at least the people who run those institutions will be able to preserve their purity right up to the inevitable end.

into the morass

Following up on yesterday’s request for help with the notorious Bret Stephens op-ed on climate change — no help has been forthcoming, by the way — I’d like to call your attention to this superb column by Damon Linker:

Stephens didn’t deny the reality of climate change. He merely dared to advocate a slight rhetorical adjustment to the way environmental activists and their cheering sections at websites like Slate and Vox, and newspapers like the Times, go about making their case to the wider public. What followed was not a reasoned debate about the rhetorical effectiveness of claims to modesty and certainty, dispassionate concern and outright alarmism. Instead, there was simple, pure, satisfying, but politically impotent condemnation: “You can’t say that!”

Perhaps the most telling response was that of Susan Matthews at Slate, who admitted that Stephens had not denied any of the facts of climate change, and agreed that Stephens is exactly right in his claim that scientists and journalists who speak for scientists often mishandle probabilities and discount their own biases — but insisted that somehow all of this makes his column even “scarier and more damaging.” Your overall argument is not wrong, and that’s why it’s unforgivable.

I think journalists are so upset with Stephens not because he challenges the scientific consensus on climate change — he clearly doesn’t — but because he challenges them. His argument, as Linker suggests above, is about rhetorical effectiveness: he claims that if people who are seriously and legitimately concerned about climate change went about their business in a more epistemically modest way, they might well win over more people. That is, rhetorical extremism might not be the best way to go, even when the facts warrant it. But, it appears, if there’s anything worse than climate-change denialism, it’s journalistic-wisdom denialism.

Yet in other arenas, arenas where they don’t perform, I’d bet those same journalists could understand the legitimacy of Stephens’s general point. For instance, when people have accused Rod Dreher of being “alarmist” in The Benedict Option, Rod has typically replied that he writes that way because he’s genuinely alarmed. To which some of his critics have said “Yeah, but you don’t have to sound so alarmed. You’re scaring people off who don’t already agree with you.” And isn’t this a reasonable criticism? Especially given what we have learned about the backfire effect — the tendency people have to double down on wrong ideas when they’re presented with facts that challenge those ideas? And if it is a reasonable criticism, mightn’t it apply to journalists too? Believing in SCIENCE doesn’t give you infallible judgment.

There’s one more context for this whole argument. I have been meditating over the last couple of days on this tweet from my friend Yoni Appelbaum:

For some time now I’ve asked the New York Times to give better and fairer coverage of social conservatives and religious people, and hiring Stephens seems to have been at least a small step in that direction. But if their core constituency continues to engage in freakouts of this magnitude over any deviation from their views, will we see any more such steps? Given the economic realities Yoni’s tweet points to, I’d say: not bloody likely. The pressures of the market are relentless. And the more of our institutions, especially our intellectual institutions, are governed by those relentless pressures, the fewer places we will have to turn for nonpartisan inquiry.

Again, my concern here applies to every institution that deals in ideas. When people ask me how academic administrators can allow student protestors to behave so badly — can allow them even to get away with clearly illegal behavior — I answer: The customer is always right. And I’ve got a feeling that’s exactly what the publishers of the New York Times are thinking as members of their core constituency cancel their subscriptions. Religious weirdos like me are a lost cause; but they can’t lose their true believers. Mistakes were made; heads will roll; it won’t happen again. And America will sink deeper and deeper into this morass of “alternative facts” and mutually incomprehensible narratives.

weird beliefs and the hermeneutics of suspicion

This probably belongs on the blog for my book How to Think, but since I haven’t started blogging there yet, I’ll just go ahead and put it here.

As I’ve said many times, Tim Burke is one of the bloggers — I guess blogging isn’t wholly dead, it’s just mostly dead, like Westley when he’s taken to Miracle Max — who really helps me think, so it’s sad (if understandable) to hear his tone of discouragement here. “I don’t know what to do next, nor do I have any kind of clear insight about what may come of the moment we’re in.” Sounds like something I’ve thought myself.

But then he picks himself up and makes a useful contribution to a problem that a good many people are worrying over these days, which is why so many people believe so many things that aren’t true — or, to put the problem in one form that I’ve written about before, why so many people mistrust expert judgment. Tim:

First, let’s take the deranged fake stories about a pizza restaurant in Washington DC being a center of sex trafficking. What makes it possible to believe in obvious nonsense about this particular establishment? In short, this: that the last fifty years of global cultural life has revealed that public innocence and virtue are not infrequently a mask for sexual predation by powerful men. Bill Cosby. Jimmy Savile. Numerous Catholic priests. On and on the list goes. Add to that the fact that one form of feminist critique of Freud has long since been validated: that what Freud classed as hysteria or imagination was in many cases straightforward testimony by women about what went on within domestic life as well as within the workplace lives of women. Add to that the other sins that we now know economic and political power have concealed and forgiven: financial misdoings. Murder. Violence. We may argue about how much, how often, how many. We may argue about typicality and aberration. But whether you’re working at it from memorable anecdotal testimony or systematic inquiry, it’s easy to see how people who came to adulthood in the 1950s and 1960s all over the world might feel as if we live on after the fall, even if they know in their hearts that it was always thus…. The slippery slope here is this: that at some point, people come to accept that this is what all powerful men do, and that any powerful man – or perhaps even powerful woman – who professes innocence is lying. All accusations sound credible, all power comes pre-accused, because at some point, all the Cosbys and teachers at Choate Rosemary Hall and Catholic priests have made it plausible to see rape, assault, molestation everywhere.

Tim then gives other examples to illustrate his key point, which is, if I may summarize, that people who believe things that clearly aren’t true, that seem to us just crazy, actually may have good cause to adopt, if not those particular beliefs, then a habit of suspicion that leads to such beliefs. To which I’ll add an example of my own.

Recently I was listening to an episode of the BBC’s More or Less podcast which discussed what some researchers call the “backfire effect”: the tendency that most of us have to double down on our beliefs when they’re challenged or even flatly refuted. (The most influential study is this one.) An example given in the podcast is the belief that vaccinations cause autism, and Tim Harford and his guests point out that when parents are shown that there is no link whatsoever between vaccination and autism, rather than agreeing to vaccinate their children they simply fall back on other reasons for refusing to vaccinate. Harford mentions that one such reason is the belief that vaccines are promoted by a medical profession in collusion with the big international pharmaceutical companies to sell us drugs we don’t need — and then they move on without comment, as though they’ve clearly demonstrated just how irrational such people are.

But hang on a minute: isn’t that a legitimate worry? Don’t we actually have a good deal of evidence, over the past few decades, of unhealthy alliances between the medical profession and Big Pharma leading to some drugs being favored over others that might work better, or over non-drug treatment? And haven’t these controversies often focused on the exploitation of parents’ worries in order to overmedicate children — as with the likely overuse of Ritalin?

No, I’m not an anti-vaxxer, I’m a pro-vaxxer. And the anti-vaxxers are definitely making a logical error here, which is to generalize too broadly from particulars. But those parents who think “I suspect doctor-pharma collusion and so will decline to vaccinate, while also taking advantage of herd immunity” are not ipso facto any less rational than those who think “Doc says it, I believe it, and that settles it.”

The key point here is that the hermeneutics of suspicion is not a train that you can stop, even if you wish you could; nor should it stop, given what Tim Burke points out: the horrifying record of abuse of power by people who wield it. But that train needs brakes to slow it down sometimes, and one of the key topics we all should be reflecting on is this: What could the leading institutions of American life do to renew trust in their basic integrity? As Tim suggests, there’s no evidence that the Democratic Party — or for that matter any other major American institution — is giving any discernible attention to this question.

thinking about thinking

As I hope my last post illustrates, in general I’m less interested in staking out positions on the issues of the day than I am in uncovering the hidden assumptions that govern many of our debates. It’s not that I don’t have views — sometimes very strong views, though more often, I suppose, “extreme views weakly held” — but rather that I know there will be plenty of people out there advocating for positions I like, and not very many people looking into the terms on which the conversation is held. Sometimes debates are fruitless or even counterproductive because we’re largely unaware of the assumptions that underlie them.

So, similarly, I am less interested in staking out a position on the best ways to punish lawbreakers than I am in noting what such questions look like when one considers them from the position of power rather than the position of those on whom power will be exercised. I am less interested in evaluating the usefulness of particular algorithms than in “interrogating,” as we academics like to say, the hidden assumptions of algorithmic culture. And so on. This habit of mine, I believe, is a natural one for someone who considers himself a teacher who writes rather than a writer who teaches. I’m pedagogical through and through, I guess.

All this to explain a forthcoming project: a book called How to Think: A Survival Guide for a World at Odds, which will be published this fall by Convergent Books here in the U.S. and by a publisher in the U.K. I’ll be able to name soon. As that book comes closer to publication, I’ll move a good bit of my blogging about thinking to that site. I hope you’ll join me there from time to time.

structures of presumption: case studies

One of the most disturbing books I’ve read in a long time is Richard Beck’s We Believe the Children: A Moral Panic in the 1980s. Beck recounts the history of a time when a great many Americans became convinced that day-care workers around the country were regularly abusing and raping children and forcing them to participate in Satanic rituals. Over a period of several years, the nightly news brought forth further horrific stories, and those stories grew more and more extreme:

In North Carolina, children said that their teachers had thrown them out of a boat into a school of sharks. In Los Angeles, children said that one of their teachers had forced them to watch as he hacked a horse to pieces with a machete. In New Jersey, children said their teacher had raped them with knives, forks, and wooden spoons, and a child in Miami told investigators about homemade pills their caretakers had forced them to eat. The pills, the child said, looked like candy corn, and they made all of the children sleepy.

Many day-care workers were brought to trial, and some were convicted, even though “No pornography, no blood, no semen, no weapons, no mutilated corpses, no sharks, and no satanic altars or robes were ever found.” One trial, that of the owners of the McMartin preschool in California, became the longest and most expensive criminal trial in American history, and ended with no convictions — because there was no evidence that the charges were true.

Prosecutors, parents, and therapists dealt with this problem by repeating what became a common refrain. Set aside the lack of corroborating evidence, they said, and consider this basic fact: children all over the country were fighting through fear and shame to come forward and say they had been abused — how could a decent society ignore these stories? Therapists pointed to their own profession’s long and inglorious history of ignoring children who tried to speak out about abuse, and they said this was a mistake the country could not afford to repeat. “All children who are sexually abused anywhere,” one abuse expert said at the National Symposium on Child Molestation in 1984, “need to have their credibility recognized and to have advocates working for them. Among the things that is most damaging is the sense of being alone and having no one to talk to.”

Thus the book’s title: We Believe the Children.

We don’t hear many claims these days that day-care workers, or anyone else, are forcing children to participate in Satanic rituals. But reading Beck’s narrative, I couldn’t help reflecting on the ways in which certain structures of presumption that drove that “moral panic” thirty years ago are still in place and still having massive social effects — just in somewhat different contexts. There’s a standard sequential logic practiced primarily by therapists and counselors but widely adopted by observers. It goes like this:

1) Identify classes of people who have historically been neglected, marginalized, thought to be less competent than the dominant figures in society — classes of people whose pain has been ignored or denied.

2) Take great care to listen to them for stories of trauma, abuse, or pressure to conform to dominant social practices and expectations.

3) Believing that people who have suffered in these ways may be reluctant to talk about their pain, or have repressed knowledge of what happened to them or who they really are, suggest to them the narrative of their lives that you think likely.

4) If they are reluctant to accept this narrative, that may well be a sign of repression — the greater the reluctance, the deeper the repression — so press them harder to accept the narrative you believe to be true. (Beck, in a discussion of the debate over repressed memories of childhood sexual abuse, quotes something Roseanne Barr said to Oprah: “When someone asks you, ‘Were you sexually abused as a child?’ there are only two answers. One of them is, ‘Yes,’ and one of them is, ‘I don’t know.’ You can’t say, ‘No.’”)

5) Having established to your own satisfaction, and perhaps to that of the counseled people, the disturbing truth, consistently describe them as “victims” and “survivors.”

6) Insist that those who doubt this narrative are complicit in the suffering of the innocent.

7) Recruit the family members of the victims/survivors to support the narrative.

8) If the family members of the victims/survivors question the narrative, accuse them of not just complicity but of having actively contributed to suffering.

9) If any health-care professionals doubt the narrative, condemn them as upholders of oppressive structures and, if they do not give in, try to destroy their careers. (When a high-level FBI investigator named Kenneth Lanning said that he could find no evidence of day-care workers engaged in Satanic rituals, many counselors and therapists accused him of being himself a Satanist.)

10) No matter what happens, even if those you counsel ultimately reject the narrative you pressed upon them, never apologize or admit error. You were, after all, acting in the interests of the insulted and the injured, the marginalized and the oppressed. Beck was unable to find a single apology from therapists who coerced children into telling false stories that seriously damaged, and in some cases effectively destroyed, many lives.

It’s important to note that Beck is anything but a conservative. He attributes much of the panic to a deep residual antifeminism in American life, an interpretation that Kay Hymowitz strongly challenged in her review of his book. Hymowitz rightly points out that many American feminists eagerly participated in the child-abuse panic, and indeed Beck should have acknowledged that, but I do not find his explanations as implausible as Hymowitz does. His claim that the hysteria arose from a situation in which “the nuclear family was dying,” and, though there was (and is) much hand-wringing about this fact, “people mostly did not want to save it” seems exactly right to me.

Anyway, given his politics Beck might not agree with my argument here: that the precise logic I have outlined above is at work today in two prominent venues, sexual assault cases on college campuses and the increasingly widespread diagnoses of gender dysphoria among young people. Just as child abuse is real and tragic — and often in the past was diminished or ignored — so too with sexual assault and profound gender dysphoria. But as Beck’s narrative shows, attempts to correct past neglect can go wildly, destructively awry; and the “structures of presumption” I have laid out above make it virtually impossible to have a reasonable discussion of how to assess claims that have immense consequences for human lives.

And if we cannot have such a reasonable discussion, we will almost certainly end up, sooner or later, with another massively damaging crisis like the one Beck describes. How that crisis will develop I can’t predict, but I’m sure of two things: first, that when it happens no one will acknowledge their responsibility for it; and second, that when it’s over we will contrive to forget it, just as completely as we have forgotten how readily millions of Americans believed all those accusations of ritual Satanism.

physicians, patients, and intellectual triage

Please, please read this fascinating essay by Maria Bustillos about her daughter’s diagnosis of MS — and how doctors can become blind to some highly promising forms of treatment. The problem? The belief, drilled into doctors and scientists at every stage of their education, that double-blind randomized trials are not just the gold standard for scientific evidence but the only evidence worth consulting. One of the consequences of that belief: diet-based treatments never get serious consideration, because they can’t be tested blindly. People always know what they’re eating.

See this passage, which refers to Carmen’s doctor as “Dr. F.”:

In any case, the question of absolute “proof” is of no interest to me. We are in no position to wait for absolute anything. We need help now. And incontrovertibly, there is evidence — not proof, but real evidence, published in a score of leading academic journals — that animal fat makes MS patients worse. It is very clearly something to avoid. In my view, which is the view of a highly motivated layperson whose livelihood is, coincidentally, based in doing careful research, there is not the remotest question that impaired lipid metabolism plays a significant role in the progression of MS. Nobody understands exactly how it works, just yet, but if I were a neurologist myself, I would certainly be telling my patients, listen, you! — just in case, now. Please stick to a vegan plus fish diet, given that the cost-benefit ratio is so incredibly lopsided in your favor. There’s no risk to you. The potential benefit is that you stay well.

But Dr. F, who is a scientist, and moreover one charged with looking after people with MS, is advising not only against dieting, but is literally telling someone (Carmen!) who has MS, yes, if you like butter, you should “enjoy” it, even though there is real live evidence that it might permanently harm you, but not proof, you know.

In this way, Dr. F. illustrates exactly what has gone wrong with so much of American medicine, and indeed with American society in general. I know that sounds ridiculous, like hyperbole, but I mean it quite literally. Dr. F. made no attempt to learn about or explain how, if saturated fat is not harmful, Swank, and now Jelinek, could have arrived at their conclusions, though she cannot prove that saturated fat isn’t harmful to someone with MS. The deficiency in Dr. F.’s reasoning is not scientific: it’s more like a rhetorical deficiency, of trading a degraded notion of “proof” for meaning, with potentially catastrophic results. Dr. F. may be a good scientist, but she is a terrible logician.

I might say, rather than “terrible logician,” Dr. F. is someone who is a poor reasoner — who has made herself a poor reasoner by dividing the world into things that are proven and all other things, and then assuming that there’s no way to distinguish among all those “other things.”

You can see how this happens: the field of medicine is moving so quickly, with new papers coming out every day (and being retracted every other day), that Dr. F. is just doing intellectual triage. The firehose of information becomes manageable if you just stick to things that are proven. But as Bustillos says, people like Carmen don’t have that luxury.

What an odd situation. We have never had such powerful medicine; and yet it has never been more necessary for sick people to learn to manage their own treatment.

Mary Midgley on cooperative thinking

Mary Midgley is one of my favorite philosophers. Her The Myths We Live By plays a significant role in a forthcoming book of mine and her essay “On Trying Out One’s New Sword” eviscerates cultural relativism, or what she calls “moral isolationism,” more briefly and elegantly than one would have thought possible.

Midgley studied philosophy at Oxford during World War II, along with several other women who would become major philosophers: Elizabeth Anscombe, Philippa Foot, Mary Warnock, Iris Murdoch. People have often wondered how this happened — how, in a field so traditionally inhospitable to women, a number of brilliant ones happened to emerge at the same time and in the same place. Three years ago, in a letter to the Guardian, Midgley offered a fascinating sociological explanation:

As a survivor from the wartime group, I can only say: sorry, but the reason was indeed that there were fewer men about then. The trouble is not, of course, men as such – men have done good enough philosophy in the past. What is wrong is a particular style of philosophising that results from encouraging a lot of clever young men to compete in winning arguments. These people then quickly build up a set of games out of simple oppositions and elaborate them until, in the end, nobody else can see what they are talking about. All this can go on until somebody from outside the circle finally explodes it by moving the conversation on to a quite different topic, after which the games are forgotten. Hobbes did this in the 1640s. Moore and Russell did it in the 1890s. And actually I think the time is about ripe for somebody to do it today. By contrast, in those wartime classes – which were small – men (conscientious objectors etc) were present as well as women, but they weren’t keen on arguing.

It was clear that we were all more interested in understanding this deeply puzzling world than in putting each other down. That was how Elizabeth Anscombe, Philippa Foot, Iris Murdoch, Mary Warnock and I, in our various ways, all came to think out alternatives to the brash, unreal style of philosophising – based essentially on logical positivism – that was current at the time. And these were the ideas that we later expressed in our own writings.

Given that so many people think of philosophy simply as arguing, and therefore as an intrinsically competitive activity, it might be rather surprising to hear Midgley claim that interesting and innovative philosophical thought emerged from her environment at Oxford because of the presence of a critical mass of people who “weren’t keen on arguing” but were “more interested in understanding this deeply puzzling world” (emphasis mine).

In a recent follow-up to and expansion of that letter, Midgley quotes Colin McGinn describing his own philosophical education at Oxford, thirty years later, especially in classes with Gareth Evans: “Evans was a fierce debater, impatient and uncompromising; as I remarked, he skewered fools gladly (perhaps too gladly). The atmosphere in his class was intimidating and thrilling at the same time. As I was to learn later, this is fairly characteristic of philosophical debate. Philosophy and ego are never very far apart. Philosophical discussion can be … a clashing of analytically honed intellects, with pulsing egos attached to them … a kind of intellectual blood-sport, in which egos get bruised and buckled, even impaled.” To which Midgley replies, with her characteristic deceptively mild ironic tone:

Well, yes, so it can, but does it always have to? We can see that at wartime Oxford things turned out rather differently, because even bloodier tournaments and competitions elsewhere had made the normal attention to these games impossible. So, by some kind of chance, life had made a temporary break in the constant obsession with picking small faults in other people’s arguments – the continuing neglect of what were meant to be central issues – that had become habitual with the local philosophers. It had interrupted those distracting feuds which were then reigning, as in any competitive atmosphere feuds always do reign, preventing serious attempts at discussion, unless somebody deliberately controls them.

And Midgley doesn’t shy away from stating bluntly what she thinks about the intellectual habits that Gareth Evans was teaching young Colin McGinn and others: “Such habits, while they prevail, simply stop people doing any real philosophy.”

So Midgley suggests that other habits be taught: “Co-operative rather than competitive thinking always needs to be widely taught. Feuds need to be put in the background, because all students equally have to learn a way of working that will be helpful to everybody rather than just promoting their own glory.” Of course, promoting your own glory is the usual path to academic success, and if that’s what you want, then your way is clear. But Midgley wants people who choose that path to know that if they don’t learn co-operative thinking, “they can’t really do effective philosophy at all.” They won’t make progress “in understanding this deeply puzzling world.”

I can’t imagine any academic endeavor that wouldn’t be improved, intellectually and morally, if its participants heeded Midgley’s counsel.

launch and iterate

I enjoyed this brief interview with Rob Horning of The New Inquiry, and was particularly taken with this passage:

What do you think is good about the way we interact with information today? How has your internet consumption changed your brain, and writing, for the better?

I can only speak for myself, but I find that the Internet has made me far more productive than I was before as a reader and a writer. It seems to offer an alternative to academic protocols for making “knowledge.” But I was never very systematic before about my “research” and am even less so now; only now this doesn’t seem like such a drawback. Working in fragments and unfolding ideas over time in fragments seems a more viable way of proceeding. I’ve grown incapable of researching as preparation for some writing project — I post everything, write immediately as a way to digest what I am reading, make spontaneous arguments and connections from what is at hand. Then if I feel encouraged, I go back and try to synthesize some of this material later. That seems a very Internet-inspired approach.

Let me pause to note that I am fundamentally against productivity and then move on to the more important point, which is that online life has changed my ways of working along the lines that Horning describes — and I like it.

There’s a mantra among some software developers, most notably at Google: Launch and iterate. Get your app out there even with bugs, let your customers report and complain about those bugs, apologize profusely, fix, release a new version. Then do it again, and again. (Some developers hate this model, but never mind.) Over the past few years I’ve been doing something like this with my major projects: throw an idea out there in incomplete or even inchoate form and see what responses it gets; learn from your respondents’ skepticism or criticism; follow the links people give you; go back to the idea and, armed with this feedback, make it better.

Of course, writers have always done something like this: for example, going to the local pub and making people listen to you pontificate on some crack-brained intellectual scheme and then tell you that you’re full of it. And I’ve used that method too, which has certain advantages … but: it’s easy to forget what people say, you have a narrow range of responses, and it can’t happen very often or according to immediate need. The best venue I’ve found to support the launch-and-iterate model of the intellectual life: Twitter.

relevance and ignorance

A few days ago I wrote, “Between the writers who are desperate to be published and the editors desperate for ‘content,’ the forces militating against taking time — time to read, time to think — are really powerful.” If you want evidence for that claim, you couldn’t do better than read this interview with cartoonist and writer Matthew Thurber, who cheerfully describes the pleasures of writing about a subject he can’t be bothered to learn anything about — in this case, the so-called and just-around-the-corner Singularity:

I like not-knowing in general. And if I’d waited until I’d read all of Ayn Rand and all of the singularity literature, I wouldn’t have been able to work fast enough to get this comic done. I felt an urgency to get it out before it became completely irrelevant. YouTube has been around for a decade. The Snowden stuff happened when this book was coming out. But I felt like it would be funny if I didn’t know what those things were. Writing a book responding to the singularity but not really knowing what it was. It was just a rumor. Ineptitude can be funny, too.

Thurber pushes this point hard enough that it eventually becomes clear that he wants to be thought of as knowing even less than he does: “I don’t know what the singularity really is. I understand that it involves the hybridization of humans and technology, or A.I. Or actually, no, I don’t know what it is. A robot? Like the movie D.A.R.Y.L.? Or any movie where there’s a robot who has feelings?” So he asks the interviewer: “And maybe it already happened? Do people think it already happened?”

There’s much to consider here. First of all, Thurber’s claim “I like not-knowing in general.” Well, have I got a political party for you, then! But more seriously, this reminds me of that prince among legal scholars, Hugo Grotius, who wrote, Nescire quaedam magna pars sapientiae est — “Not to know some things is the greater part of wisdom.” But Grotius’s point — made in an era of rapidly expanding knowledge, of having too much to know — was that you have to make a discipline of foregoing certain kinds of knowledge that are not necessary to your chosen intellectual path in order to cultivate other kinds that have first-order importance for you. (When I posted that Grotius quote to Twitter a while back, my internet friend Erin Kissane shot back a famous line from Sherlock Holmes asserting his indifference to the Copernican theory: “You say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work…. Now that I do know it I shall do my best to forget it.”)

But, obviously, this is a very different point than the one Thurber is making, which is that he likes knowing nothing about the very things he is writing about. Or at least he is quite willing to remain ignorant in order to avoid being slowed down in his work. It’s not hard to imagine making a pleasant intellectual game out of writing about what you don’t know, but Thurber is clear that for him this is all about being relevant — it’s specifically the quest for relevance that mandates ignorance. Thurber’s argument goes like this: A great many people are talking about something called the Singularity; I don’t know much about it, but I’d like to draw the attention of those people; but those other people, like me, have short attention spans and may soon be talking about something else; so I’d better write something about the Singularity quickly so I can attract their eyeballs before it’s too late.

This is all phrased light-heartedly, but I wonder if that tone isn’t at least a little misleading: Thurber really does seem afraid of getting left behind. And he’s not the only one: it’s pretty clear that in writing The Circle Dave Eggers was so eager to make a Socially Relevant Intervention about tech companies that he didn’t bother to learn how they actually work. So what we have here is an urgency to be heard coupled with a need to be relevant. The result: social commentary made by people who have nothing but vague, uninformed speculations to guide their writing. This is how whole books become indistinguishable from the average blog comment.