Morozov on Carr

Evgeny Morozov is probably not really “Evgeny Morozov,” but he plays him on the internet and has been doing so for years. It’s a simple role — you tell everyone else writing about technology that they’re wrong — and I suspect that it gets tiring after a while, though Morozov himself has been remarkably consistent in the vigor he brings to the part. A few years ago he joked on Twitter, “Funding my next book tour entirely via Kickstarter. For $10, I promise not to tweet at you. For $1000, I won’t review your book.” Well, I say “joked,” but …

In his recent review of Nicholas Carr’s book The Glass Cage — a book I reviewed very positively here — Morozov takes a turn that will enable him to perpetuate and extend his all-critique-all-the-time approach indefinitely. You can see what’s coming when he chastises Carr for being insufficiently attentive to philosophical traditions other than phenomenology. If, gentle reader, upon hearing this you wonder why a book on automation would be obliged to attend to any philosophical tradition, bear with me as Morozov moves toward his peroration:

Unsurprisingly, if one starts by assuming that every problem stems from the dominance of bad ideas about technology rather than from unjust, flawed, and exploitative modes of social organization, then every proposed solution will feature a heavy dose of better ideas. They might be embodied in better, more humane gadgets and apps, but the mode of intervention is still primarily ideational. The rallying cry of the technology critic — and I confess to shouting it more than once — is: “If only consumers and companies knew better!” One can tinker with consumers and companies, but the market itself is holy and not to be contested. This is the unstated assumption behind most popular technology criticism written today.


Even if Nicholas Carr’s project succeeds — i.e., even if he does convince users that all that growing alienation is the result of their false beliefs in automation and even if users, in turn, convince technology companies to produce new types of products — it’s not obvious why this should be counted as a success. It’s certainly not going to be a victory for progressive politics.


At best, Carr’s project might succeed in producing a different Google. But its lack of ambition is itself a testament to the sad state of politics today. It’s primarily in the marketplace of technology providers — not in the political realm — that we seek solutions to our problems. A more humane Google is not necessarily a good thing — at least, not as long as the project of humanizing it distracts us from the more fundamental political tasks at hand. Technology critics, however, do not care. Their job is to write about Google.

So on this account, if you make the mistake of writing a book about our reliance on technologies of automation and the costs and benefits to human personhood of that reliance, instead of writing about “unjust, flawed, and exploitative modes of social organization”; if your book does not strive to be “a victory for progressive politics”; if your book merely pushes for “a different Google” rather than … I don’t know, probably the dismantling of global capitalism; if your book, in short, is so lamentably without “ambition”; well, then, there’s only one thing to say.

I guess everyone other than Michael Hardt and Antonio Negri, Thomas Piketty, and maybe David Graeber has been wasting their (and our) time. God help the next person who writes about Bach without railing against the music industry’s role as an ideological state apparatus, or who writes a love story without protesting the commodification of sex under late capitalism. I don’t think Morozov will be happy until every writer sounds like a belated member of the Frankfurt School.

But the thing is, Carr’s book could actually be defended on political grounds, should someone choose to do so. The book is primarily concerned with balancing the gains in automated efficiency and safety with the costs to human flourishing, and human flourishing is what politics is all about. People who have become so fully habituated to an automated environment that they simply can’t function without it will scarcely be in a position to offer serious resistance to our political-economic regime. Carr could be said to be laying part of the foundation for such resistance, by getting his readers to begin to think about what a less automated and more active, decisive life could look like.

But is it really necessary that every book be evaluated by these criteria?

the Underground Man addresses the solutionists

In essays and books, Evgeny Morozov has outlined four intellectual pathologies of the code-literate and code-celebrant: populism, utopianism, internet-centrism, and solutionism. These are fuzzy, overlapping categories, and Morozov still seems to be in search of a stable set of terms to articulate his critique. We might simplify matters by saying that all the people who fit into these categories tend to have excelled, in their academic and professional careers, as problem-solvers, so that we might sum up their thinking in this way: to many men who can code, every moment of discomfort looks like a problem in need of a code-based solution.

Thus Morozov, from To Save Everything, Click Here:

Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved.

In one important passage in the book, Morozov responds to the proposal that cameras be installed in your kitchen to watch you cook a meal and correct you whenever you go astray. But this, Morozov points out, is to reduce people to machines carrying out instructions. It rules out any possibility of autonomous action, of actually learning an art-form, of creative improvisation, of discovering that your “error” led to a better result than the recipe you failed to follow. “Imperfection, ambiguity, opacity, disorder, and the opportunity to err, to sin, to do the wrong thing: all of these are constitutive of human freedom, and any concentrated attempt to root them out will root out that freedom as well.”

I simply want to note that the argument that Morozov makes here was made more powerfully, and with a greater sense of the manifold implications of this tension, one hundred and fifty years ago by Dostoevsky in Notes from Underground. In one especially vital passage Dostoevsky anticipates the whole of the current solutionist paradigm:

Furthermore, you say, science will teach men (although in my opinion this is a superfluity) that they have not, in fact, and never have had, either will or fancy, and are no more than a sort of piano keyboard or barrel-organ cylinder; and that the laws of nature still exist on the earth, so that whatever man does he does not of his own volition but, as really goes without saying, by the laws of nature. Consequently, these laws of nature have only to be discovered, and man will no longer be responsible for his actions, and it will become extremely easy for him to live his life. All human actions, of course, will then have to be worked out by those laws, mathematically, like a table of logarithms, and entered in the almanac; or better still, there will appear orthodox publications, something like our encyclopaedic dictionaries, in which everything will be so accurately calculated and plotted that there will no longer be any individual deeds or adventures left in the world. ‘Then,’ (this is all of you speaking), ‘a new political economy will come into existence, all complete, and also calculated with mathematical accuracy, so that all problems will vanish in the twinkling of an eye, simply because all possible answers to them will have been supplied. Then the Palace of Crystal will arise. Then….’ Well, in short, the golden age will come again.

All problems will vanish in the twinkling of an eye, simply because all possible answers to them will have been supplied. To this utopian prediction the Underground Man has a complex response:

Can man’s interests be correctly calculated? Are there not some which not only have not been classified, but are incapable of classification? After all, gentlemen, as far as I know you deduce the whole range of human satisfactions as averages from statistical figures and scientifico-economic formulas. You recognize things like wealth, freedom, comfort, prosperity, and so on as good, so that a man who deliberately and openly went against that tabulation would in your opinion, and of course in mine also, be an obscurantist or else completely mad, wouldn’t he? But there is one very puzzling thing: how does it come about that all the statisticians and experts and lovers of humanity, when they enumerate the good things of life, always omit one particular one? They don’t even take it into account as they ought, and the whole calculation depends on it. After all, it would not do much harm to accept this as a good and add it to the list. But the snag lies in this; that this strange benefit won’t suit any classification or fit neatly into any list.

What does he mean here? What is the “one particular” “good thing in life” that is always omitted from the list? We would see it quite clearly, the Underground Man says, if the solutionist utopia were ever actually realized, because at that moment someone would arise to say, “Come on, gentlemen, why shouldn’t we get rid of all this calm reasonableness with one kick, just so as to send all these logarithms to the devil and be able to live our own lives at our own sweet will?”

And such a figure would certainly find many followers. Why? Because

that’s the way men are made. And all this for the most frivolous of reasons, hardly worth mentioning, one would think: namely that a man, whoever he is, always and everywhere likes to act as he chooses, and not at all according to the dictates of reason and self-interest; it is indeed possible, and sometimes positively imperative (in my view), to act directly contrary to one’s own best interests. One’s own free and unfettered volition, one’s own caprice, however wild, one’s own fancy, inflamed sometimes to the point of madness – that is the one best and greatest good, which is never taken into consideration because it will not fit into any classification, and the omission of which always sends all systems and theories to the devil. Where did all the sages get the idea that a man’s desires must be normal and virtuous? Why did they imagine that he must inevitably will what is reasonable and profitable? What a man needs is simply and solely independent volition, whatever that independence may cost and wherever it may lead.

And so the solutionist utopia will inevitably sow the seeds of its own undermining. “This good” — the good of independent volition — “is distinguished precisely by upsetting all our classifications and always destroying the systems established by lovers of humanity for the happiness of mankind. In short, it interferes with everything.” This is true for good and for ill; but it is always and everywhere true.

tech intellectuals and the military-technological complex

I was looking forward to reading Henry Farrell’s essay on “tech intellectuals”, but after reading it I found myself wishing for a deeper treatment. Still, what’s there is a good start.

The “tech intellectual” is a curious newfangled creature. “Technology intellectuals work in an attention economy,” Farrell writes. “They succeed if they attract enough attention to themselves and their message that they can make a living from it.” This is the best part of Farrell’s essay:

To do well in this economy, you do not have to get tenure or become a contributing editor to The New Republic (although the latter probably doesn’t hurt). You just need, somehow, to get lots of people to pay attention to you. This attention can then be converted into more material currency. At the lower end, this will likely involve nothing more than invitations to interesting conferences and a little consulting money. In the middle reaches, people can get fellowships (often funded by technology companies), research funding, and book contracts. At the higher end, people can snag big book deals and extremely lucrative speaking engagements. These people can make a very good living from writing, public speaking, or some combination of the two. But most of these aspiring pundits are doing their best to scramble up the slope of the statistical distribution, jostling with one another as they fight to ascend, terrified they will slip and fall backwards into the abyss. The long tail is swarmed by multitudes, who have a tiny audience and still tinier chances of real financial reward.

This underlying economy of attention explains much that would otherwise be puzzling. For example, it is the evolutionary imperative that drives the ecology of technology culture conferences and public talks. These events often bring together people who are willing to talk for free and audiences who just might take an interest in them. Hopeful tech pundits compete, sometimes quite desperately, to speak at conferences like PopTech and TEDx even though they don’t get paid a penny for it. Aspirants begin on a modern version of the rubber-chicken circuit, road-testing their message and working their way up.

TED is the apex of this world. You don’t get money for a TED talk, but you can get plenty of attention—enough, in many cases, to launch yourself as a well-paid speaker ($5,000 per engagement and up) on the business conference circuit. While making your way up the hierarchy, you are encouraged to buff the rough patches from your presentation again and again, sanding it down to a beautifully polished surface, which all too often does no more than reflect your audience’s preconceptions back at them.

The last point seems exactly right to me. The big tech businesses have the money to pay those hefty speaking fees, and they are certainly not going to hand out that cash to someone who would like to knock the props right out from under their lucrative enterprise. Thus, while Evgeny Morozov is a notably harsh critic of many other tech intellectuals, his career is also just as dependent as theirs on the maintenance of the current techno-economic order — what, in light of recent revelations about the complicity of the big tech companies with the NSA, we should probably call the military-technological complex.

The only writer Farrell commends in his essay is Tom Slee, and Slee has been making these arguments for some time. In one recent essay, he points out that “the nature of Linux, which famously started as an amateur hobby project, has been changed by the private capital it attracted. . . . Once a challenger to capitalist modes of production, Linux is now an integral part of them.” In another, he notes that big social-media companies like Facebook want to pose as outsiders, as hackers in the old sense of the word, but in point of fact “capitalism has happily absorbed the romantic pose of the free software movement and sold it back to us as social networks.”

You don’t have to be a committed leftist, like Farrell or Slee, to see that the entanglement of the tech sector with both the biggest of big businesses and the powers of vast national governments is in at least some ways problematic, and to wish for a new generation of tech intellectuals capable of articulating those problems and pointing to possible alternative ways of going about our information-technology work. Given the dominant role the American university has long had in the care and feeding of intellectuals, should we look to university-based minds for help? Alas, they seem as attracted by tech-business dollars as anyone else, especially now that VCs are ready to throw money at MOOCs. Where, then, will the necessary voices of critique come from?


Now I want to take the thoughts from my last post a little further.

Just as it is true in one sense to say “guns don’t kill people, people kill people,” though only at the cost of ignoring how much easier it is to kill someone if you’re holding a loaded gun than if you can’t get one, so also I don’t want my previous post to be read as simply saying “Tech doesn’t distract people, people distract themselves.” I am easily distracted, I want to be distracted, but that’s easier for me to accomplish when I have a cellphone in my hand or lots of notifications enabled — thanks, Growl! — on my laptop.

Still, I really think we should spend more time thinking about what’s within rather than what’s without — the propensities themselves rather than what enables and intensifies them. Self-knowledge is good.

And along these lines I find myself thinking about a fascinating and provocative article in the Journal of the American Medical Association that says, basically, it’s time to stop studying the effects of various diets and debating about which ones are best because, frankly, there ain’t a dime’s worth of difference among them: “The long history of trials showing very modest differences suggests that additional trials comparing diets varying in macronutrient content most likely will not produce findings that would significantly advance the science of obesity.”

In short, such comparative studies are wasting the researchers’ time, because while countless studies have not told us anything conclusively about which diets are best they have told us conclusively that whatever diet you choose the thing that really matters is whether you’re able to achieve the discipline to stick with it. Therefore, “Progress in obesity management will require greater understanding of the biological, behavioral, and environmental factors associated with adherence to lifestyle changes including both diet and physical activity.”

Adherence: that’s what matters in achieving weight loss and more general increases in health. Do you actually follow your diet? Do you actually keep to your exercise regimen? And that’s also what’s most mysterious: Why are some people able to adhere to their plans while others (most of us) are not? This, the authors suggest, is what we should be studying.

The same is true for technological addictions. Some people use apps like Freedom to try to break their addictions — which is great as long as they remember to turn the app on and resist the temptation to override it. Jonathan Franzen uses superglue to render his computer un-networkable — which is great as long as he doesn’t hunt down another computer or keep a smartphone within reach. Evgeny Morozov locks his phone and wireless router in a safe so he can get some work done — which is great as long as he actually does that when he needs to.

In all these cases, what people are trying to do — and it’s an intelligent thing to attempt — is to create friction, clumsiness, a set of small obstacles that separate the temptation to seek positive reinforcement from the act of giving in to it: time to take a couple of deep breaths, time to reconsider, time to remind themselves what they want to achieve. But in the end they still have to resist. They have to adhere to their commitments.

Which takes us back to the really key question that the JAMA article points us to: whether it’s diet or exercise or checking Twitter, why is adherence so difficult? Why do most of us adhere weakly, like Post-It notes, rather than firmly, like Jonathan Franzen’s superglued ethernet port?

I’ll have more to say about this in another post.


As I mentioned the other day, the Shirky/Doctorow thesis is that the internet in general and social media in particular tend to generate political freedom; the Evgeny Morozov thesis is that those media tend to enable governmental surveillance and control of protestors and dissidents.

My question is: why are we so determined to speak in these essentialist terms? Maybe the most significant change in my thinking over the past twenty years is a deepening suspicion of generalizations. “To Generalize is to be an Idiot,” wrote William Blake; “To Particularize is the Alone Distinction of Merit.” The internet is new; social media are even newer; both are vastly dispersed throughout the global social order. Moreover, the internet is not just one thing, it’s ten million things; and different social media have different purposes, different architectures, different sets of users.

So when Clay Shirky says, “social media . . . helps [sic] angry people coordinate their actions,” I don’t know how we would even figure out whether a statement that broad is true. Which social media? Which actions? In which societies? Presumably when people connect with each other they won’t always agree, so how do we know that some social media, anyway, don’t exacerbate conflicts? Maybe some people in some societies would coordinate better if they met face to face. Maybe, though there are certainly dangers in meeting face to face, there may be just as many dangers in coordinating via social media, depending on how careful the users are and how technologically sophisticated the oppressors are.

Statements as broad as Shirky’s are close to useless. Here’s a post that shows how fruitless and abstract such debates can be, even when they start by focusing on a single country’s situation — the impulse to generalize is just too strong.

The only way to make any progress in thinking about these matters is to “Particularize” and to keep particularizing. So maybe we should start by asking questions along these lines: What social media played a role in the recent political upheavals in Tunisia, and what role did they play? How many Tunisians use social media, which ones do they use, and how do they use them? How many Egyptians were aware of the Tunisian situation, and how did they become aware? How did their media present the Tunisian situation to them? What media have they relied on, if any, in the days since January 25th? Is there even one Egyptian answer to these questions, or do we need to distinguish between Cairo and Alexandria, between the cities and the outlying areas — and among various social classes? Even these questions are broad, but they stand a chance of getting us somewhere.

The Whale and the Reactor (8)

It was a remarkable experience to read Winner’s sixth chapter, “Mythinformation,” in the light of some recent online debates. Continuing his attempt to think in a seriously political way about technology, Winner is here concerned with the technological uses of the language of revolution:

It seems all but impossible for computer enthusiasts to examine critically the ends that might guide the world-shaking developments they anticipate. They employ the metaphor of revolution for one purpose only — to suggest a dramatic upheaval, one that people ought to welcome as good news. . . .

If technophiles were to consider the “computer revolution” in light of “social upheavals of the past,” especially those caused by the Industrial Revolution, then they might be able to think more seriously about their own language and its political implications. But, Winner says, “a consistently ahistorical viewpoint prevails. What one often finds emphasized, however, is a vision of drastically altered social and political conditions, a future upheld as both desirable and, in all likelihood, inevitable.”

Well, some of this hasn’t changed at all in the past twenty-five years. Celebrants of technology still aren’t very historically aware, they still emphasize the inevitability of technological development, they still see it almost wholly as progressive.

But the political implications of technology are getting more serious and thoughtful consideration these days, and that may well lead to deeper conversations on other fronts. Just consider the vigorous and fascinating debate going on right now between Evgeny Morozov and Cory Doctorow. Morozov’s new book The Net Delusion offers an exceptionally strong critique of the common belief that social media promote freedom and democracy; that argument has been getting some equally strong pushback from, among others, Doctorow, whose longest and most thoughtful response may be found here.

This is a debate I may have more to say about later, but for now let me just note that if Langdon Winner wanted serious debates about the political implications of technology, we’re getting just such a debate now, though focused perhaps too narrowly on the role of social media. But couple that with the recent Wikileaks debate . . . we’re getting somewhere.