Facebook, communication, and personhood

William Davies tells us about Mark Zuckerberg’s hope to create an “ultimate communication technology,” and explains how Zuckerberg’s hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:

If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may be to avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook’s explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood.

art as industrial lubricant

Holy cow, does Nick Carr pin this one to the wall. Google says, “At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to.”

Nick replies:

This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated….  

Once you accept that music is an input, a factor of production, you’ll naturally seek to minimize the cost and effort required to acquire the input. And since music is “context” rather than “core,” to borrow Geoff Moore’s famous categorization of business inputs, simple economics would dictate that you outsource the supply of music rather than invest personal resources — time, money, attention, passion — in supplying it yourself. You should, as Google suggests, look to a “team of music experts” to “craft” your musical inputs, “song by song,” so “you don’t have to.” To choose one’s own songs, or even to develop the personal taste in music required to choose one’s own songs, would be wasted labor, a distraction from the series of essential jobs that give structure and value to your days. 

Art is an industrial lubricant that, by reducing the friction from activities, makes for more productive lives.

If music be the lube of work, play on — and we’ll be Getting Things Done.

Paul Goodman and Humane Technology

This is a kind of thematic follow-up to my previous post.

A few weeks ago Nick Carr posted a quotation from this 1969 article by Paul Goodman: “Can Technology Be Humane?” I had never heard of it, but it’s quite fascinating. Here’s an interesting excerpt:

For three hundred years, science and scientific technology had an unblemished and justified reputation as a wonderful adventure, pouring out practical benefits, and liberating the spirit from the errors of superstition and traditional faith. During this century they have finally been the only generally credited system of explanation and problem-solving. Yet in our generation they have come to seem to many, and to very many of the best of the young, as essentially inhuman, abstract, regimenting, hand-in-glove with Power, and even diabolical. Young people say that science is anti-life, it is a Calvinist obsession, it has been a weapon of white Europe to subjugate colored races, and manifestly—in view of recent scientific technology—people who think that way become insane. With science, the other professions are discredited; and the academic “disciplines” are discredited.

The immediate reasons for this shattering reversal of values are fairly obvious. Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.

Goodman’s proposal for remedying this new mistrust and hatred of technology begins thus: “Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science,” and requires the virtue of prudence. Since “in spite of the fantasies of hippies, we are certainly going to continue to live in a technological world,” this redefinition of technology — or recollection of it to its proper place — is a social necessity. Goodman spells out some details:

  • “Prudence is foresight, caution, utility. Thus it is up to the technologists, not to regulatory agencies of the government, to provide for safety and to think about remote effects.”
  • “The recent history of technology has consisted largely of a desperate effort to remedy situations caused by previous over-application of technology.”
  • “Currently, perhaps the chief moral criterion of a philosophic technology is modesty, having a sense of the whole and not obtruding more than a particular function warrants.”
  • “Since we are technologically overcommitted, a good general maxim in advanced countries at present is to innovate in order to simplify the technical system, but otherwise to innovate as sparingly as possible.”
  • “A complicated system works most efficiently if its parts readjust themselves decentrally, with a minimum of central intervention or control, except in case of breakdown.”
  • “But with organisms too, this has long been the bias of psychosomatic medicine, the Wisdom of the Body, as Cannon called it. To cite a classical experiment of Ralph Hefferline of Columbia: a subject is wired to suffer an annoying regular buzz, which can be delayed and finally eliminated if he makes a precise but unlikely gesture, say by twisting his ankle in a certain way; then it is found that he adjusts quicker if he is not told the method and it is left to his spontaneous twitching than if he is told and tries deliberately to help himself. He adjusts better without conscious control, his own or the experimenter’s.”
  • “My bias is also pluralistic. Instead of the few national goals of a few decision-makers, I propose that there are many goods of many activities of life, and many professions and other interest groups each with its own criteria and goals that must be taken into account. A society that distributes power widely is superficially conflictful but fundamentally stable.”
  • “The interlocking of technologies and all other institutions makes it almost impossible to reform policy in any part; yet this very interlocking that renders people powerless, including the decision-makers, creates a remarkable resonance and chain-reaction if any determined group, or even determined individual, exerts force. In the face of overwhelmingly collective operations like the space exploration, the average man must feel that local or grassroots efforts are worthless, there is no science but Big Science, and no administration but the State. And yet there is a powerful surge of localism, populism, and community action, as if people were determined to be free even if it makes no sense. A mighty empire is stood off by a band of peasants, and neither can win — this is even more remarkable than if David beats Goliath; it means that neither principle is historically adequate. In my opinion, these dilemmas and impasses show that we are on the eve of a transformation of conscience.”

If only that last sentence had come true. I hope to reflect further on this article in later posts.

Carr on Piper on Jacobs

Here’s Nick Carr commenting on the recent dialogue at the Infernal Machine between me and Andrew Piper:

It’s possible to sketch out an alternative history of the net in which thoughtful reading and commentary play a bigger role. In its original form, the blog, or web log, was more a reader’s medium than a writer’s medium. And one can, without too much work, find deeply considered comment threads spinning out from online writings. But the blog turned into a writer’s medium, and readerly comments remain the exception, as both Jacobs and Piper agree. One of the dreams for the web, expressed through a computer metaphor, was that it would be a “read-write” medium rather than a “read-only” medium. In reality, the web is more of a write-only medium, with the desire for self-expression largely subsuming the act of reading. So I’m doubtful about Jacobs’s suggestion that the potential of our new textual technologies is being frustrated by our cultural tendencies. The technologies and the culture seem of a piece. We’re not resisting the tools; we’re using them as they were designed to be used.

I’d say that depends on the tools: for instance, this semester I’m having my students write with CommentPress, which I think does a really good job of preserving a read-write environment — maybe even better, in some ways, than material text, though without the powerful force of transcription that Andrew talks about. (That may be irreplaceable — typing the words of others, while in this respect better than copying and pasting them, doesn’t have the same degree of embodiment.)

In my theses I tried to acknowledge both halves of the equation: I talked about the need to choose tools wisely (26, 35), but I also said that without the cultivation of certain key attitudes and virtues (27, 29, 33) choosing the right tools won’t do us much good (36). I don’t think Nick and I — or for that matter Andrew and I — disagree very much on all this.

Morozov on Carr

Evgeny Morozov is probably not really “Evgeny Morozov,” but he plays him on the internet and has been doing so for years. It’s a simple role — you tell everyone else writing about technology that they’re wrong — and I suspect that it gets tiring after a while, though Morozov himself has been remarkably consistent in the vigor he brings to the part. A few years ago he joked on Twitter, “Funding my next book tour entirely via Kickstarter. For $10, I promise not to tweet at you. For $1000, I won’t review your book.” Well, I say “joked,” but …

In his recent review of Nicholas Carr’s book The Glass Cage — a book I reviewed very positively here — Morozov takes a turn which will enable him to perpetuate and extend his all-critique-all-the-time approach indefinitely. You can see what’s coming when he chastises Carr for being insufficiently attentive to philosophical traditions other than phenomenology. If, gentle reader, upon hearing this you wonder why a book on automation would be obliged to attend to any philosophical tradition, bear with me as Morozov moves toward his peroration:

Unsurprisingly, if one starts by assuming that every problem stems from the dominance of bad ideas about technology rather than from unjust, flawed, and exploitative modes of social organization, then every proposed solution will feature a heavy dose of better ideas. They might be embodied in better, more humane gadgets and apps, but the mode of intervention is still primarily ideational. The rallying cry of the technology critic — and I confess to shouting it more than once — is: “If only consumers and companies knew better!” One can tinker with consumers and companies, but the market itself is holy and not to be contested. This is the unstated assumption behind most popular technology criticism written today.

And:

Even if Nicholas Carr’s project succeeds — i.e., even if he does convince users that all that growing alienation is the result of their false beliefs in automation and even if users, in turn, convince technology companies to produce new types of products — it’s not obvious why this should be counted as a success. It’s certainly not going to be a victory for progressive politics.

And:

At best, Carr’s project might succeed in producing a different Google. But its lack of ambition is itself a testament to the sad state of politics today. It’s primarily in the marketplace of technology providers — not in the political realm — that we seek solutions to our problems. A more humane Google is not necessarily a good thing — at least, not as long as the project of humanizing it distracts us from the more fundamental political tasks at hand. Technology critics, however, do not care. Their job is to write about Google.

So on this account, if you make the mistake of writing a book about our reliance on technologies of automation and the costs and benefits to human personhood of that reliance, instead of writing about “unjust, flawed, and exploitative modes of social organization”; if your book does not strive to be “a victory for progressive politics”; if your book merely pushes for “a different Google” rather than … I don’t know, probably the dismantling of global capitalism; if your book, in short, is so lamentably without “ambition”; well, then, there’s only one thing to say.

I guess everyone other than Michael Hardt and Antonio Negri, Thomas Piketty, and maybe David Graeber has been wasting their (and our) time. God help the next person who writes about Bach without railing against the music industry’s role as an ideological state apparatus, or who writes a love story without protesting the commodification of sex under late capitalism. I don’t think Morozov will be happy until every writer sounds like a belated member of the Frankfurt School.

But the thing is, Carr’s book could actually be defended on political grounds, should someone choose to do so. The book is primarily concerned with balancing the gains in automated efficiency and safety with the costs to human flourishing, and human flourishing is what politics is all about. People who have become so fully habituated to an automated environment that they simply can’t function without it will scarcely be in a position to offer serious resistance to our political-economic regime. Carr could be said to be laying part of the foundation for such resistance, by getting his readers to begin to think about what a less automated and more active, decisive life could look like.

But is it really necessary that every book be evaluated by these criteria?

Near, Far, and Nicholas Carr

Nicholas Carr, whose new book The Glass Cage explores the human meaning of automation, last week put up a blog post about robots and artificial intelligence. (H/t Alan Jacobs.) The idea that “AI is now the greatest existential threat to humanity,” Carr writes, leaves him “yawning.”

He continues:

The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

I would not argue with Carr about probable versus possible — he may well be right there. But later in the post, quoting from an interview he gave to help promote his book, he implicitly acknowledges that there are people who think that machine consciousness is a great idea and who are working to achieve it. He thinks that their models for how to do so are not very good and that their aspirations “for the near future” are ultimately based on “faith, not reason.”

near ... or far?

All fine. But Carr is begging one question and failing to observe a salient point. First, it seems he is only willing to commit to his skepticism for “the near future.” That is prudent, but then one might want to know why we should not be concerned about a far future when efforts today may lay the groundwork for it, even if only by eliminating certain possibilities.

Second, what he does not pause to notice is that everyone agrees that “flood, famine, pestilence, plague and war” are bad things. We spend quite a serious amount of time, effort, and money trying to prevent them or mitigate their effects. But at the same time, there are also people attempting to develop machine consciousness, and while they may not get the resources or support they think they deserve, the tech culture at least seems largely on their side (even if there are dissenters in theory). So when there are people saying that an existential threat is the feature and not the bug, isn’t that something to worry about?

how not to write a book review, techno-utopian edition

Maria Bustillos’s review of Nick Carr’s new book The Glass Cage is really, really badly done. Let me illustrate with just one example (it’s a corker):

In the case of aviation, the answer is crystal clear, yet Carr somehow manages to draw the opposite conclusion from the one supported by facts. In a panicky chapter describing fatal plane crashes, Carr suggests that pilots have come to rely so much on computers that they are forgetting how to fly. However, he also notes the “sharp and steady decline in accidents and deaths over the decades. In the U.S. and other Western countries, fatal airline crashes have become exceedingly rare.” So yay, right? Somehow, no: Carr claims that “this sunny story carries a dark footnote,” because pilots with rusty flying skills who take over from autopilot “often make mistakes.” But if airline passengers are far safer now than they were 30 years ago — and it’s certain they are — what on Earth can be “dark” about that?

Note that Bustillos is trying so frantically to refute Carr that she can’t even see what he’s actually saying. (Which might not surprise anyone who notes that in the review’s first sentence she refers to Carr as a “scaredy-cat” — yeah, she actually says that — and in its third refers to his “paranoia.”) She wants us to believe that Carr’s point is that automating the piloting of aircraft is just bad: “the opposite conclusion from the one supported by facts.” But if Carr himself is the one who notes that “fatal airline crashes have become exceedingly rare,” and if Carr himself calls the decline in air fatalities a “sunny story,” then he just might not be saying that the automating of flight is simply a wrong decision. Bustillos quotes the relevant passages, but can’t see the plain meaning that’s right in front of her face.

Carr cites several examples of planes that in recent years have crashed when pilots unaccustomed to taking direct control of planes were faced with the failure of their automated systems. Does Bustillos think these events just didn’t happen? If they did happen, then we have an answer to her incredulous question, “If airline passengers are far safer now than they were 30 years ago … what on Earth can be ‘dark’ about that?” That answer is: If you’re one of the thousands of people whose loved ones have died because pilots couldn’t deal with having to fly planes themselves, then what you’ve had to go through is pretty damned dark.

Again, Bustillos quotes Carr accurately: The automation of piloting is a sunny story with a dark footnote. If Carr says anywhere in his book that we would be better off if we ditched our automated systems and went back to manual flying, I haven’t seen it. I’d like for Bustillos to show it to me. But I don’t think she can.

The point Carr is making in that chapter of The Glass Cage is that flight automation shows us that even wonderful technologies that make us safer and healthier come with a cost of some kind — a “dark footnote” at least. Even photographers who rejoice in the fabulous powers of digital photography know that there were things Cartier-Bresson could do with his Leica and film and darkroom that they struggle to replicate. Very, very few of those photographers will go back to the earlier tools; but thinking about the differences, counting those costs, is a vital intellectual exercise that helps to keep us users of our tools instead of their thoughtless servants. If we don’t take care to think in this way, we’ll have no way of knowing whether the adoption of a new technology gives us a sunny story with no more than a footnote’s worth of darkness — or something far worse.

All Carr is saying, really, is: count the costs. This is counsel Bustillos actively repudiates: “Computers are tools, no different from hammers, blowtorches or bulldozers; history clearly suggests that we will get better at making and using them. With the gifts of intelligence, foresight and sensible leadership, we’ve managed to develop safer factories, more productive agricultural systems and more fuel-efficient cars.” Now I just need her to explain to me how those “gifts of intelligence, foresight and sensible leadership” have also yielded massively armored local police departments and the vast apparatus of a national surveillance state, among other developments.

I suppose “history clearly suggests” that those are either not problems at all or problems that will magically vanish — because if not, then Carr might be correct when he writes, near the end of his book, that “The belief in technology as a benevolent, self-healing, autonomous force is seductive.”

But that’s just what a paranoid scaredy-cat would say, isn’t it?

UPDATE: Evan Selinger has some very useful thoughts — I didn’t see them until after I wrote this post.

Carr on automation

If you haven’t done so, you should read Nick Carr’s new essay in the Atlantic on the costs of automation. I’ve been mulling it over and am not sure quite what I think.

After describing two air crashes that happened in large part because pilots accustomed to automated flying were unprepared to take proper control of their planes during emergencies, Carr comes to his key point:

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.

And late in the essay he writes,

In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.

Carr isn’t arguing here that the automating of tasks is always, or even usually, bad, but rather that the default assumption of engineers — and then, by extension, most of the rest of us — is that when we can automate we should automate, in order to eliminate that pesky thing called “human error.”

Carr’s argument for reclaiming a larger sphere of action for ourselves, for taking back some of the responsibilities we have offloaded to machines, seems to be twofold:

1) It’s safer. If we continue to teach people to do the work that we typically delegate to machines, and do what we can to keep those people in practice, then when the machines go wrong we’ll have a pretty reliable fail-safe mechanism: us.

2) It contributes to human flourishing. When we understand and can work within our physical environments, we have better lives. Especially in his account of Inuit communities that have abandoned traditional knowledge of their geographical surroundings in favor of GPS devices, Carr seems to be sketching out — he can’t do more in an essay of this length — an account of the deep value of “knowledge about reality” that Albert Borgmann develops at length in his great book Holding on to Reality.

But I could imagine people making some not-obviously-wrong counterarguments — for instance, that the best way to ensure safety, especially in potentially highly dangerous situations like air travel, is not to keep human beings in training but rather to improve our machines. Maybe the problem in that first anecdote Carr tells is setting up the software so that in certain kinds of situations responsibility is kicked back to human pilots; maybe machines are just better at flying planes than people are, and our focus should be on making them better still. It’s a matter of properly calculating risks and rewards.

Carr’s second point seems to me more compelling but also more complicated. Consider this: if the Inuit lose something when they use GPS instead of traditional and highly specific knowledge of their environment, what would I lose if I had a self-driving car take me to work instead of driving myself? I’ve just moved to Waco, Texas, and I’m still trying to figure out the best route to take to work each day. In trying out different routes, I’m learning a good bit about the town, which is nice — but what if I had a Google self-driving car and could just tell it the address and let it decide how to get there (perhaps varying its own route based on traffic information)? Would I learn less about my environment? Maybe I would learn more, if instead of answering email on the way to work I looked out the window and paid attention to the neighborhoods I pass through. (Of course, in that case I would learn still more by riding a bike or walking.) Or what if I spent the whole trip in contemplative prayer, and that helped me to be a better teacher and colleague in the day ahead? I would be pursuing a very different kind of flourishing than that which comes from knowing my physical environment, but I could make a pretty strong case for its value.

I guess what I’m saying is this: I don’t know how to evaluate the loss of “knowledge about reality” that comes from automation unless I also know what I am going to be doing with the freedom that automation grants me. This is the primary reason why I’m still mulling over Carr’s essay. In any case, it’s very much worth reading.

who quantifies the self?

The Quantified Self (QS) movement comprises people who use various recent technologies to accumulate detailed knowledge of what their bodies are doing — how they’re breathing, how much they walk, how their heart rate varies, and so on — and then adjust their behavior accordingly to get the results they want. This is not surveillance, some QS proponents say; it’s the opposite: an empowering sousveillance.

But any technology that I can use for my purposes to monitor myself can be used by others who have power or leverage over me to monitor me for their purposes. See this trenchant post by Nick Carr:

One can imagine other ways QS might be productively applied in the commercial realm. Automobile insurers already give policy holders an incentive for installing tracking sensors in their cars to monitor their driving habits. It seems only logical for health and life insurers to provide similar incentives for policy holders who wear body sensors. Premiums can then be adjusted based on, say, a person’s cholesterol or blood sugar levels, or food intake, or even the areas they travel in or the people they associate with — anything that correlates with risk of illness or death. (Rough Type readers will remember that this is a goal that Yahoo director Max Levchin is actively pursuing.)

The transformation of QS from tool of liberation to tool of control follows a well-established pattern in the recent history of networked computers. Back in the mainframe age, computers were essentially control mechanisms, aimed at monitoring and enforcing rules on people and processes. In the PC era, computers also came to be used to liberate people, freeing them from corporate oversight and control. The tension between central control and personal liberation continues to define the application of computer power. We originally thought that the internet would tilt the balance further away from control and toward liberation. That now seems to be a misjudgment. By extending the collection of data to intimate spheres of personal activity and then centralizing the storage and processing of that data, the net actually seems to be shifting the balance back toward the control function. The system takes precedence.  

Do please read it all.

pre-tweeted for your convenience

Nick Carr:

Frankly, tweeting has come to feel kind of tedious itself. It’s not the mechanics of the actual act of tweeting so much as the mental drain involved in (a) reading the text of an article and (b) figuring out which particular textual fragment is the most tweet-worthy. That whole pre-tweeting cognitive process has become a time-sink.

That’s why the arrival of the inline tweet — the readymade tweetable nugget, prepackaged, highlighted, and activated with a single click — is such a cause for celebration. The example above comes from a C.W. Anderson piece posted today by the Nieman Journalism Lab. “When is news no longer what’s new but what matters?” Who wouldn’t want to tweet that? It’s exceedingly pithy. The New York Times has also begun to experiment with inline tweets, and it’s already seeing indications that the inclusion of prefab tweetables increases an article’s overall tweet count. I think the best thing about the inline tweet is that you no longer have to read, or even pretend to read, what you tweet before you tweet it. Assuming you trust the judgment of a publication’s in-house tweet curator, or tweet-curating algorithm, you can just look for the little tweety bird icon, give the inline snippet a click, and be on your way. Welcome to linking without thinking!

Please click through to the original and you’ll see that Nick has thoughtfully singled out his best aphoristic zingers for immediate tweetability. What a guy.