no, go ahead and quit Facebook

Siva Vaidhyanathan contends that people currently on Facebook should not delete their accounts but rather stay and try to change it:

So go ahead and quit Facebook if it makes you feel calmer or more productive. Please realize, though, that you might be offloading problems onto those who may have less opportunity to protect privacy and dignity and are more vulnerable to threats to democracy. If the people who care the most about privacy, accountability and civil discourse evacuate Facebook in disgust, the entire platform becomes even less informed and diverse. Deactivation is the opposite of activism.

As you might guess from the tweet posted above, I am not especially sympathetic to this argument. It seems to me that there is no connection at all between deactivation and activism: a person could pursue both or neither or one rather than the other. Vaidhyanathan argues that “Hope lies … with our power as citizens. We must demand that legislators and regulators get tougher. They should go after Facebook on antitrust grounds.” But this can be done by people who don’t have Facebook accounts.

He says that “Our long-term agenda should be to bolster institutions that foster democratic deliberation and the rational pursuit of knowledge. These include scientific organizations, universities, libraries, museums, newspapers and civic organizations.” This too can be done by people who don’t have Facebook accounts.

He says “If we act together as citizens to champion these changes, we have a chance to curb the problems that Facebook has amplified. If we act as disconnected, indignant moral agents, we surrender the only power we have: the power to think and act collectively.” Again, no Facebook account is required to think and act collectively.

It’s only in his concluding paragraph (the first one I quote above) that Vaidhyanathan comes close to making an argument for staying on Facebook — or, in my case, returning to it, since his logic would demand not just that existing users stay on but that non-users sign up. “If the people who care the most about privacy, accountability and civil discourse evacuate Facebook in disgust, the entire platform becomes even less informed and diverse” — well, then, I suppose that people like me who do care about “privacy, accountability and civil discourse” need to run to Facebook right away. But does this make sense?

I don’t think so. If I see people being swept away by a powerful flood, it is unlikely that my best course of action is to leap into the water with them. I would do better to try to bring them to the safety of the shore. To put the case less metaphorically, it would make more sense for people to bring knowledge and sweet reason to Facebook if they could be sure that their friends regularly saw their knowledge and sweet reason. But the company’s algorithms are written in such a way that that’s highly unlikely. Even those who take delight in knowledge and sweet reason are unlikely to take the incredibly complicated and often fruitless steps a user has to take to bring any kind of sanity at all to a Facebook feed.

So I continue to think that deactivation-plus-activism is the way to go — not least because if there is anything that could drive Facebook to make actual changes to their platform (as opposed to the make-believe changes they regularly announce), it would surely be a significant drop in their user base.

Facebook, again

I know many people who spend considerable time on Facebook, and as far as I can tell, very few of them know about the current scandal, and among the handful who do know, very, very few care. I think almost everyone likely to be seriously troubled by Facebook’s behavior has already ditched the service. Given Mark Zuckerberg’s silence on all these matters, I assume that Facebook’s strategy is simply to ride out the storm by sheltering in place, and my expectation is that that strategy will be successful. I would be surprised if a year from now Facebook isn’t stronger than ever. To be sure, I would be very pleased by any fall in that nasty company’s fortunes, but that’s neither here nor there; at this point I’m pretty sure that they can spend their way out of any difficulties. 
UPDATE: So Zuck hath spoken, and offered an apology that, as many observers have pointed out, leaves Facebook’s business model of mining and selling its users’ data firmly in place. So what’s to come, especially since Zuckerberg has agreed that maybe Facebook should be regulated by the government? 
1. If regulation does happen, it will probably have the effect that Michael Brendan Dougherty predicts in one of the most thoughtful responses to this whole kerfuffle. Nobody in the media minded when Facebook’s data was being used in a very similar way to benefit the Obama campaigns; but now we have a panic. “Silicon Valley is just making up the rules as they go along. Some large-scale data harvesting and social manipulation is okay until the election. Some of it becomes not okay in retrospect. They sigh and say okay so long as Obama wins. When Clinton loses, they effectively call a code red.” 

If I can add my own prediction to [Niall] Ferguson’s it would be this. To the center-Left, it doesn’t matter how much Silicon Valley’s tools enable extremists in the Third World, or how much wealth they extract from the public treasuries through their tax-sheltering arrangements. All that matters is that the new tools continue to keep the center-Left in power, and make them look glamorous and smart. This is a deal that Silicon Valley will take. 

If regulation happens it will almost surely proceed along the lines Dougherty sketches. 
2. But regulation will require effective bipartisan action by Congress — by this Congress. So who are we kidding? 
Therefore, I’m sticking with my earlier prediction: at the end of this whole tempest-in-a-teapot, the tempest will have remained safely enclosed in the teapot that belongs to the tech punditocracy; no significant number of people will leave Facebook; and Mark Zuckerberg’s business model will remain the same as it has been all along. 
UPDATE 2 (April 4, 2018): From The Ringer’s report on Zuckerberg’s conference call with the press today: 

Have several weeks of negative Facebook headlines and a #DeleteFacebook hashtag actually caused people to abandon the social network? “I don’t think there’s been any meaningful impact that we’ve observed,” Zuckerberg said.
That’s not a huge shock. According to the social media analytics firm Keyhole, #DeleteFacebook was tweeted about 364,000 times in the month of March, when the current controversy was cresting. #DeleteUber racked up 412,000 tweets in early 2017 when that company was going through its own PR nightmare, even though Uber has a much smaller user base. For now, the threat to leave Facebook seems to be a hollow one for most people.

Still sticking with my prediction. Nothing substantial will change at Facebook, and nothing substantial will change for Facebook. 

people and algorithms, principalities and powers

In this interview, Jill Lepore comments,

To be fair, it’s difficult not to be susceptible to technological determinism. We measure the very moments of our lives by computer-driven clocks and calendars that we keep in our pockets. I get why people think this way. Still, it’s a pernicious fallacy. To believe that change is driven by technology, when technology is driven by humans, renders force and power invisible.

I like this point, largely because I’ve made it myself — browsing this tag will give you some examples. But to say this is not to say that those humans are simply free agents, self-determining actors. It’s not as though Mark Zuckerberg is holed up here:

Zuck’s model of Facebook controlli — um, healing the world is one you should be enormously skeptical of, for reasons Nick Carr explains quite eloquently here. But even if you think Zuck is as wicked as Sauron or Voldemort — which I don’t, by the way; I think he’s as well-meaning as his core assumptions allow him to be — he isn’t Sauron or Voldemort, not structurally speaking. When the Ring of Power is unmade, Sauron’s “slaves quailed, and his armies halted, and his captains suddenly steerless, bereft of will, wavered and despaired.” When Voldemort is killed, the Death Eaters slink away, fearful and powerless. But if any of the Captains of Technological Industry were to undergo some kind of moral conversion and walk away from their posts … nothing would change.

We have to keep insisting that algorithms are written by people for specific purposes in order to refute the simplistic and dangerous idea that algorithms are neutral and true and SCIENCE. But those people who write the algorithms, and those people who instruct others to write those algorithms, are implicated in the power-knowledge regime or Domination System or governmentality that I described in my previous post. The really vital long-term task is understanding how those structures work so that they may be both resisted and redeemed.

the fragility of platforms

In a comment on my previous post, Adam Roberts writes:

In terms of human intermediation, facebook and twitter are radically, fundamentally ‘thin’ platforms, where things like the church or the family are deep-rooted and ‘thick’. FB/Twitter-etc are also transient—both relatively recent and already showing signs of obsolescence. The sorts of institutions we’re talking about need to endure if they’re to do any good at all. Doesn’t this very temporariness magnify the volume of the reaction? People have been living with quite profound changes to social and cultural mores for decades, much longer than there has been such a thing as social media. When they take to Twitter they are trying to express deep-seated and profoundly-contextualised beliefs in 140 characters. It’s not surprising that what emerges is often just a barbaric yawp.

I think this is a very powerful point, because it reminds us that when we replace institutions with platforms, especially now that those platforms are uniformly digital, we’re moving from structures that, if not altogether antifragile, are relatively robust to structures that are either palpably fragile or untested.

Thought experiment: What if Twitter actually does as many have suggested and bans Donald Trump? They would be perfectly within their rights to do so — he would have no one to appeal to — so what would he do? The very platform he uses to howl his anger and outrage would be denied him, so where would he go? Facebook? But the architecture of Facebook doesn’t lend itself quite as well to his preferred tactics of engagement (for reasons I wish I had time to explore but do not). Trump’s ability to disseminate his messages in unedited form, and more particularly to change the subject when things aren’t going his way, would be dramatically curtailed. He would be dependent on others to share his message, others whose voices don’t reach as far as his now does. Could his Presidency survive his being exiled from this platform that he has made his own?

lessons learned

Maybe I should have been writing about Facebook instead of Twitter, but never mind, because my friend Brian Phillips has done it for me. But along the way Brian writes,

What had really happened was that the left had become sensitized to the ways in which conventional moral language tended to shore up existing privilege and power, and had embarked on a critique of this tendency that the right interpreted, with some justification, as an attack on the very concept of meaning. But what would we have without meaning? Isolation and chaos, conditions in which it would presumably be easy to raise the capital gains tax. So if the left found itself in the strange position of supporting science on the one hand while insisting that truth was a cultural construct on the other, the right found itself in the even stranger position of investing in meaning even as it dissociated itself from fact. Evolution was a myth and climate change was a hoax, but philosophers still had access to objective truth, provided they had worn curly wigs and died enough centuries ago.

I don’t know when it happened. Maybe with intelligent design? Maybe Colin Powell’s WMD testimony? Maybe it was already under way, with Fox News and Rush Limbaugh? But at some point, the American right — starting with the non-alt version, the one before the one we just elected — took another look at the postmodern critique of the linguistic basis of virtue and tumbled absolutely spinning into love with it. It turned out that postmodernism also contained the seeds of a system that would shore up existing privilege and power. All you had to do was take the insights of subversion and repurpose them for the needs of authority.

As you might imagine, I don’t agree with all of this, but I agree with a lot of it. The academic left interrogated the discourses of “truth” and “reason,” revealed the aporias thereof, exposed the inner workings of the power-knowledge regime, all in the name of social justice. I remember vividly Andrew Ross’s insistence, twenty-five years ago, that it was actually perfectly appropriate and consistent for a would-be revolutionary like him to have a tenured position at Princeton: “I teach in the Ivy League in order to have direct access to the minds of the children of the ruling classes.” It turns out that the children of the ruling classes learned their lessons well, so when they inherited positions in their fathers’ law firms they had some extra, and very useful, weapons in their rhetorical armory.

In precisely the same way, when, somewhat later, academic leftists preached that race and gender were the determinative categories of social analysis, members of the future alt-right were slouching in the back rows of their classrooms, baseball caps pulled down over their eyes, making no external motions but in their dark little hearts twitching with fervent agreement.

Back when people thought that Andrew Ross mattered, I participated in many conversations at Wheaton College about postmodernism, and had to hear many colleagues chortle that things were going to be better for Christians now because “we have a level playing field.” No longer did we have to fear being brought before the bar of Rational Evidence, that hanging judge of the Enlightenment who had sent so many believers to the gallows! You have your constructs and we have our constructs, and who’s to say which are better, right? O brave new world that hath such a sociology of knowledge in it!

To which my reply was always: “Now when they reject you and your work they don’t have to defend their decision with an argument.” I knew because I was shopping a book around then, and heard from one peer reviewer that it was well-researched and well-written but was also characterized by “underlying evangelical theological propositions.” Rejected without further explanation. As Brian rightly says in his post, “An America where we are all entitled to our own facts is a country where the only difference between cruelty and justice is branding.”

Sauce for the goose, sauce for the gander. It seems that we’ve all now learned the lessons that the academic left taught, and how’s that working out for us? The alt-right/Trumpistas are Caliban to the academic left’s Prospero: “You taught me language, and my profit on’t is, I know how to curse.”

again with the algorithms

The tragically naïve idea that algorithms are neutral and unbiased and other-than-human is a long-term concern of mine, so of course I am very pleased to see this essay by Zeynep Tufekci:

Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible. Of course, traditional media organizations face similar pressures to grow audiences and host ads. At least, though, consumers know that the news media is not produced in some “neutral” way or above criticism, and a whole network — from media watchdogs to public editors — tries to hold those institutions accountable.

The first step forward is for Facebook, and anyone who uses algorithms in subjective decision making, to drop the pretense that they are neutral. Even Google, whose powerful ranking algorithm can decide the fate of companies, or politicians, by changing search results, defines its search algorithms as “computer programs that look for clues to give you back exactly what you want.”

But this is not just about what we want. What we are shown is shaped by these algorithms, which are shaped by what the companies want from us, and there is nothing neutral about that.

One other great point Tufekci makes: the key bias at Facebook is not towards political liberalism, but rather towards whatever will keep you on Facebook rather than turning your attention elsewhere.

Prince, tech, and the Californian Ideology

I recently gave some talks to a gathering of clergy that focused on the effects of digital technology on the cultivation of traditional Christian practices, especially the more contemplative ones. But when I talked about the dangers of having certain massive tech companies — especially the social-media giants: Facebook, Twitter, Instagram, Snapchat — dictate to us the modes of our interaction with one another, I heard mutters that I was “blaming technology.”

I found myself thinking about that experience as I read this reflection on Prince’s use of technology — and his resistance to having technological practices imposed on him by record companies.

Prince, who died Thursday at 57, understood how technology spread ideas better than almost anyone else in popular music. And so he became something of a hacker, upending the systems that predated him and fighting mightily to pioneer new ones. Sometimes he hated technology, sometimes he loved it. But more than that, at his best Prince was technology, a musician who realized that making music was not his only responsibility, that his innovation had to extend to representation, distribution, transmission and pure system invention.

Many advances in music and technology over the last three decades — particularly in the realm of distribution — were tried early, and often first, by Prince. He released a CD-ROM in 1994, Prince Interactive, which featured unreleased music and a gamelike adventure at his Paisley Park Studios. In 1997, he made the multi-disc set “Crystal Ball” available for sale online and through an 800 number (though there were fulfillment issues later). In 2001, he began a monthly online subscription service, the NPG Music Club, that lasted five years.

These experiments were made possible largely because of Prince’s career-long emphasis on ownership: At the time of his death, Prince reportedly owned the master recordings of all his output. With no major label to serve for most of the second half of his career and no constraints on distribution, he was free to try new modes of connection.

No musician of our time understood technology better than Prince — but he wasn’t interested in being stuffed into the Procrustean bed of technologies owned by massive corporations. He wanted to own his turf and to be free to cultivate it in ways driven by his own imagination.

The megatech companies’ ability to convince us that they are not Big Business but rather just open-minded, open-hearted, exploratory technological creators is perhaps the most powerful and influential — and radically misleading — sales job of the past 25 years. The Californian ideology has become our ideology. Which means that many people cannot help seeing skepticism about the intentions of some of the biggest companies in the world as “blaming technology.” But that way Buy n Large lies.

The Grand Academy of Silicon Valley

After writing today’s post I couldn’t shake the notion that all this conversation about simplifying and rationalizing language reminded me of something, and then it hit me: Gulliver’s visit to the grand academy of Lagado.

A number of the academicians Gulliver meets there are deeply concerned with the irrationality of language, and pursue schemes to adjust it so that it fits their understanding of what science requires. One scholar has built a frame (pictured above) composed of a series of turnable blocks. He makes some of his students turn the handles and others write down the sentences produced (when sentences are produced, that is).

But more interesting in light of what Mark Zuckerberg wants are those who attempt to deal with what, in Swift’s time, was called the res et verba controversy. (You can read about it in Hans Aarsleff’s 1982 book From Locke to Saussure: Essays on the Study of Language and Intellectual History.) The controversy concerned the question of whether language could be rationalized in such a way that there is a direct one-to-one match between things (res) and words (verba). This problem some of the academicians of Lagado determined to solve — along with certain other problems, especially including death — in a very practical way:

The other project was, a scheme for entirely abolishing all words whatsoever; and this was urged as a great advantage in point of health, as well as brevity. For it is plain, that every word we speak is, in some degree, a diminution of our lungs by corrosion, and, consequently, contributes to the shortening of our lives. An expedient was therefore offered, “that since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.” And this invention would certainly have taken place, to the great ease as well as health of the subject, if the women, in conjunction with the vulgar and illiterate, had not threatened to raise a rebellion unless they might be allowed the liberty to speak with their tongues, after the manner of their forefathers; such constant irreconcilable enemies to science are the common people. However, many of the most learned and wise adhere to the new scheme of expressing themselves by things; which has only this inconvenience attending it, that if a man’s business be very great, and of various kinds, he must be obliged, in proportion, to carry a greater bundle of things upon his back, unless he can afford one or two strong servants to attend him. I have often beheld two of those sages almost sinking under the weight of their packs, like pedlars among us, who, when they met in the street, would lay down their loads, open their sacks, and hold conversation for an hour together; then put up their implements, help each other to resume their burdens, and take their leave.

But for short conversations, a man may carry implements in his pockets, and under his arms, enough to supply him; and in his house, he cannot be at a loss. Therefore the room where company meet who practise this art, is full of all things, ready at hand, requisite to furnish matter for this kind of artificial converse.

Rationalizing language and extending human life expectancy at the same time! Mark Zuckerberg and Ray Kurzweil, meet your great forbears!

Facebook, communication, and personhood

William Davies tells us about Mark Zuckerberg’s hope to create an “ultimate communication technology,” and explains how Zuckerberg’s hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:

If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may be to avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook’s explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood.

brief book reviews: The Internet of Garbage

Sarah Jeong’s short book The Internet of Garbage is very well done, and rather sobering, and I recommend it to you. The argument of the book goes something like this:

1) Human societies produce garbage.

2) Properly-functioning human societies develop ways of disposing of garbage, lest it choke out, or make inaccessible, all the things we value.

3) In the digital realm, the primary form of garbage for many years was spam — but spam has effectively been dealt with. Spammers still spam, but their efforts rarely reach us anymore: and in this respect the difference between now and fifteen years ago is immense.

And then, the main thrust of the argument:

4) Today, the primary form of garbage on the internet is harassment, abuse. And yet little progress is being made by social media companies on that score. Can’t we learn something from the victorious war against spam?

Patterning harassment directly after anti-spam is not the answer, but there are obvious parallels. The real question to ask here is, Why haven’t these parallels been explored yet? Anti-spam is huge, and the state of the spam/anti-spam war is deeply advanced. It’s an entrenched industry with specialized engineers and massive research and development. Tech industries are certainly not spending billions of dollars on anti-harassment. Why is anti-harassment so far behind?

(One possibility Jeong explores without committing to it: “If harassment disproportionately impacts women, then spam disproportionately impacts men — what with the ads for Viagra, penis size enhancers, and mail-order brides. And a quick glance at any history of the early Internet would reveal that the architecture was driven heavily by male engineers.” Surely this is a significant part of the story.)

Finally:

5) The problem of harassment can only be seriously addressed with a twofold approach: “professional, expert moderation entwined with technical solutions.”

After following Jeong’s research and reflections on it, I can’t help thinking that the second of these recommendations is more likely to be followed than the first one. “The basic code of a product can encourage, discourage, or even prevent the proliferation of garbage,” and code is more easily changed in this respect than the hiring priorities of a large organization. Thus:

Low investment in the problem of garbage is why Facebook and Instagram keep accidentally banning pictures of breastfeeding mothers or failing to delete death threats. Placing user safety in the hands of low-paid contractors under a great deal of pressure to perform as quickly as possible is not an ethical outcome for either the user or the contractor. While industry sources have assured me that the financial support and resources for user trust and safety is increasing at social media companies, I see little to no evidence of competent integration with the technical side, nor the kind of research and development expenditure that is considered normal for anti-spam.

I too see little evidence that harassment and abuse of women (and minorities, especially black people) is a matter of serious concern to the big social-media companies. That really, really needs to change.