Tony Stark and the view from above

Many people writing about the new Captain America: Civil War have commented on what seems to them a fundamental irony driving the story: that Tony Stark, the self-described “genius, billionaire, playboy, philanthropist” who always does his own thing, agrees to the Sokovia Accords that place the Avengers under international political control, while Steve Rogers, Captain America, the devoted servant of his country, refuses to sign them and basically goes rogue. But I don’t think there’s any real irony here, for two reasons.

The first and simplest reason is that the destruction of Sokovia, which we saw in Avengers: Age of Ultron, was Tony Stark’s fault. Ultron was his creation and no one else’s, and in this new story he is forced to remember that the blood of the people who died there is on his hands. There’s a funny moment in the movie when Ant-Man is rummaging around in Tony’s Iron Man suit to do some mischief to it, and when Tony asks who’s there, replies, “It’s your conscience. We don’t talk a lot these days.” But Tony’s conscience is the chief driver of this plot. Cap was not responsible for Sokovia, and so doesn’t feel responsible (even though he regrets the loss of life).

But I think another point of difference between the two is more broadly significant, and relates to one of the more important themes of this here blog. Tony Stark is basically a plutocrat: a big rich boss, who controls massive amounts of material and massive numbers of people. He sits at the pinnacle of a very, very high pyramid. When the U. S. Secretary of State deals with Tony, he’s dealing with an equal, or maybe a superior: while at one point he threatens Tony with prison, he never follows through, and Tony openly jokes that he’s going to put the Secretary on hold the next time he calls — and does just that. Tony Stark’s view is always the view from above.

But Steve Rogers was, and essentially still is, a poor kid from Brooklyn whose highest ambition was to become an enlisted soldier in the U. S. Army. That he became something else, a super-soldier, was initially presented to him as a choice, but quite obviously (to all those in control) a choice he wasn’t going to refuse — he wouldn’t have made it into the Army if he had not been a potential subject of experimentation. After that, he did what he was told, even (in the first Captain America movie) when that meant doing pep rallies for soldiers with a troupe of dancing girls. And gradually he has come to question the generally accepted definition of a “good soldier” — because he has seen more and more of the people who make and use good soldiers, and define their existence.

I think the passion with which he defends, and tries to help, Bucky Barnes, while it obviously has much to do with their great and lasting friendship, may have even more to do with the fact that Bucky, like him, is the object of experimentation — someone who was transformed into something other than his first self because it suited the people in power so to transform him.

Tony Stark is, by inheritance and habit and preference, the experimenter; Steve Rogers is the one experimented upon. And that difference, more than any other, explains why they take the divergent paths they take.

I spoke earlier of a recurring theme of this blog, and it’s this: the importance of deciding whether to think of technology from a position of power, from above, or from a position of (at least relative) powerlessness, from below. My most recent post considered the venture capitalist’s view of “platform shifts” and “continuous productivity,” which offers absolutely zero consideration of the well-being of the people who are supposed to be continuously productive. Even more seriously, there’s this old post about a philosopher who speculates on what “we” are going to do with prisoners — because “we” will always be the jailers, never the jailed.

As with politics, so with technology: it’s always about power, and the social position from which power is considered. Tony Stark’s view from above, or Steve Rogers’s view from below. Take your pick. As for me, I’m like any other morally sane person: #teamcap all the way.

Who, whom?

A good many people — some of them very smart — are praising this post by Steven Sinofsky on “platform shifts” — in particular, the shift from PCs to tablets. I, however, think it’s a terrible piece, because it’s based on three assumptions that Sinofsky doesn’t know are assumptions:

Assumption the first: “The reality is that these days most email is first seen (and often acted) on a smartphone. So without even venturing to a tablet we can (must!) agree that one can be productive without laptop, even on a tiny screen.” For whom is that “the reality”? For many people, no doubt, but how many? Not for me: I almost never emailed on my phone, even when I used a smartphone — I like dealing with email in batches, not in dribbles and drabbles throughout the day.

“But most people can’t do that! Most people have to be available all the time!” Again: this is true of many people, but most? Show me the evidence, please. And let’s make a clear distinction between people who have some kind of felt need to be constantly available — either via peer pressure or innate anxiousness — and those who genuinely can’t, without losing their jobs or at least compromising their positions, be away from email and other social media. (I know not everyone has the freedom I have; but more people have it than think they have it.)

Assumption the second: that the shift to mobile platforms means a shift from PCs to tablets. That internet traffic is moving inexorably towards mobile devices is indubitable; that tablets are going to play a major role in that shift is not so obvious. It may be that since, as even Sinofsky admits, some common tasks are harder to do on a tablet than on a PC, the majority of people will do what they can on a smartphone and do what they have to on a PC.

Assumption the third (the key one): that this “platform shift” is inevitable and the only question is how well you’ll adjust to it. It’s a classic Borg Complex move. As is often the case when people deploy this rhetoric, Sinofsky’s prose overflows with faux-compassion: “Change is difficult, disruptive, and scary so we’ll talk about that.” “The hard part is that change, especially if you personally need to change, requires you to rewire your brain and change the way you do things. That’s very real and very hard and why some get uncomfortable or defensive.” The message is clear: People who do things my way are brave and exploratory, but people who want to do things differently are fearful and defensive. That’s okay, I’m here to help you be more like me.

Let’s try looking at this in another way: кто кого? (who, whom?) Assuming that this “platform shift” happens, who benefits from whom? Answer: the companies who make the devices people will use, and the companies who want their employees to exhibit “continuous productivity.” That’s another Sinofsky post, which he ends triumphantly: Continuous productivity “makes for an incredible opportunity for developers and those creating new products and services. We will all benefit from the innovations in technology that we will experience much sooner than we think.” We will all benefit! (Except for poor schmucks like you and me who might want occasionally to have some time to call our own.)

Never believe a venture capitalist who tells you that resistance is futile.

what buttons want

Ned O’Gorman, in his response to my 79 theses, writes:

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read — each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarism that undergirds technological instrumentalism.

We’re in interesting and difficult territory here, because what O’Gorman thinks obviously true I think obviously false. In fact, it seems impossible to me that O’Gorman even believes what he writes here.

Take for instance the case of the button that “wants to be pushed.” Clearly O’Gorman does not believe that the button sits there anxiously as a finger hovers over it thinking o please push me please please please. Clearly he knows that the button is merely a piece of plastic that when depressed activates an electrical current that passes through wires on its way to detonating a weapon. Clearly he knows that an identical button — buttons are, after all, to adopt a phrase from the poet Les Murray, the kind of thing that comes in kinds — might be used to start a toy car. So what can he mean when he says that the button “wants”?

I am open to correction, but I think he must mean something like this: “That button is designed in such a way — via its physical conformation and its emplacement in contexts of use — that it seems to be asking or demanding to be used in a very specific way.” If that’s what he means, then I fully agree. But to call that “wanting” does gross violence to the term, and obscures the fact that other human beings designed and built that button and placed it in that particular context. It is the desires, the wants, of those “will-bearing” human beings, that have made the button so eminently pushable.

(I will probably want to say something later about the peculiar ontological status of books and texts, but for now just this: even if I were to say that texts don’t want I wouldn’t thereby be “divesting” them of “meaningfulness,” as O’Gorman claims. That’s a colossal non sequitur.)

I believe I understand why O’Gorman wants to make this argument: the phrases “philosophical voluntarism” and “technological instrumentalism” are the key ones. I assume that by invoking these phrases O’Gorman means to reject the idea that human beings stand in a position of absolute freedom, simply choosing whatever “instruments” seem useful to them for their given project. He wants to avoid the disasters we land ourselves in when we say that Facebook, or the internal combustion engine, or the personal computer, or nuclear power, is “just a tool” and that “what matters is how you use it.” And O’Gorman is right to want to critique this position as both naïve and destructive.

But he is wrong if he thinks that this position is entailed in any way by my theses; and even more wrong to think that this position can be effectively combatted by saying that technologies “want.” Once you start to think of technologies as having desires of their own you are well on the way to the Borg Complex: we all instinctively understand that it is precisely because tools don’t want anything that they cannot be reasoned with or argued with. And we can become easily intimidated by the sheer scale of technological production in our era. Eventually we can end up talking even about what algorithms do as though algorithms aren’t written by humans.

I trust O’Gorman would agree with me that neither pure voluntarism nor purely deterministic defeatism is an adequate response to the challenges posed by our current technocratic regime — or the opportunities offered by human creativity, the creativity that makes technology intrinsic to human personhood. It seems that he thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility.

cross-posted at The Infernal Machine

happy

Yuval Noah Harari introduces his new book Sapiens:

We are far more powerful than our ancestors, but are we much happier? Historians seldom stop to ponder this question, yet ultimately, isn’t it what history is all about? Our understanding and our judgment of, say, the worldwide spread of monotheistic religion surely depends on whether we conclude that it raised or lowered global happiness levels. And if the spread of monotheism had no noticeable impact on global happiness, what difference did it make?

Let me just put my cards on the table and say that this entire paragraph is so nonsensical that it’s not even wrong. It is so conceptually confused that it has not, to borrow a phrase from C. S. Lewis, risen to the dignity of error.

To begin with, what in the world might it mean to say that happiness is “what history is all about”? History, as I and everyone else in the world except Harari know, is “about” what has happened. And many things, I think it is fair to say, have happened other than happiness.

I truly can’t guess, with any confidence, what Harari means by that statement, but if I had to try I’d paraphrase it thus: The chief reason for studying history is to find out what made people happy and what didn’t. Lord, I hope that’s not what he means, but I fear it is.

And as for “what made people happy,” Harari wants to define that in terms of “global happiness levels.” And how are we supposed to evaluate those? Where would we get our data set? And — to ask a question that goes back to the earliest responses to Bentham’s utilitarianism — how do we count such stuff? Does one person’s horrific misery count the same as another person’s mild pleasure? Or do we add an intensity factor? Also, on the unhappiness scale, how might we compare a quick and painless death at age 19 to an extended agony of fatal illness at age 83?

I suspect Harari hasn’t thought much about these matters, but let’s try to go with him. Instead of considering something as amorphous as “monotheistic religion,” let’s focus on the militant Islam of today. It has clearly made many people very miserable; but it has equally clearly given other people great satisfaction. If the number of people who delight in militant Islam exceeds the number of people made miserable by it, then do we conclude that militant Islam is a net contributor to “global happiness levels” and therefore something to be applauded? And what if the balance sheet comes out pretty level, so that global happiness has been neither appreciably increased nor appreciably decreased by militant Islam? Are we to conclude then that it really hasn’t “made a difference”?

This little thought experiment also raises the question of whether happiness might be defined differently by different people in different cultures. Harari has this one covered. Some people, he tells us,

agree that happiness is the supreme good, but think that happiness isn’t just a matter of pleasant sensations. Thousands of years ago Buddhist monks reached the surprising conclusion that pursuing pleasant sensations is in fact the root of suffering, and that happiness lies in the opposite direction…. For Buddhism, then, happiness isn’t pleasant sensations, but rather the wisdom, serenity and freedom that come from understanding our true nature.

Ah, now we’re getting somewhere! Finally, time for a serious consideration of rival views of happiness! So here’s Harari’s response: “True or false, the practical impact of such alternative views is minimal. For the capitalist juggernaut, happiness is pleasure. Full stop.”

Full stop. The “capitalist juggernaut” has decided what happiness is — and, needless to say, resistance is futile — so we don’t need to think about it any more. We don’t even need to ask whether said juggernaut is equally powerful everywhere in the world, or whether, conversely, there are significant numbers of people who live in a different regime — even though the scale of the book is supposed to be global.

So here we have an argument that happiness is “what history is all about,” and that therefore everything that we do should be evaluated in terms of its contribution to “global happiness levels,” but that can’t be bothered to ask what happiness consists in. As I say: not even wrong. Miles from being even wrong.

laptops of the Borg

What, yet another Borg-Complex argument for laptops in the classroom? Yeah. Another one.

Laptops are not a “new, trendy thing” as suggested in the final sentence of the article – they are a standard piece of equipment that, according to the Pew Internet and American Life Project, are owned by 88% of all undergraduate students in the US (and that’s data from four years ago). The technology is not going away, and professors trying to make it go away are simply never going to win that battle. If we want to have more student attention, banning technology is a dead end. Let’s think about better pedagogy instead.

Sigh. It should not take a genius to comprehend the simple fact that the ongoing presence and usefulness of laptops does not in itself entail that they should be present in every situation. “Banning laptops from the shower is not the answer. Laptops are not going away, and if we want to have cleaner students, we need to learn to make use of this invaluable resource.”

And then there’s the idea that if you’re not more interesting than the internet you’re a bad teacher. Cue Gabriel Rossman:

Honestly.

Robert Talbert, the author of that post, assumes that a teacher would only ban laptops from the classroom because he or she is lecturing, and we all know — don’t we? — that lecturing is always and everywhere bad pedagogy. (Don’t we??) But here’s why I ban laptops from my classrooms: because we’re reading and discussing books. We look at page after page, and I and my students use both hands to do that, and then I encourage them to mark the important passages, and take brief notes on them, with pen or pencil. Which means that there are no hands left over for laptops. And if they were typing on their laptops, they’d have no hands left over for turning to the pages I asked them to turn to. See the problem?

I’ve said it before, often, but let me try it one more time: Computers are great, and I not only encourage their use by my students, I try to teach students how to use computers better. But for about three hours a week, we set the computers aside and look at books. It’s not so great a sacrifice.

the keys to society and their rightful custodians

Recently Quentin Hardy, the outstanding technology writer for the New York Times, tweeted this:

If you follow the embedded link you’ll see that Head argues that algorithm-based technologies are, in many workplaces, denying to humans the powers of judgment and discernment:

I have a friend who works in physical rehabilitation at a clinic on Park Avenue. She feels that she needs a minimum of one hour to work with a patient. Recently she was sued for $200,000 by a health insurer, because her feelings exceeded their insurance algorithm. She was taking too long.

The classroom has become a place of scientific management, so that we’ve baked the expertise of one expert across many classrooms. Teachers need a particular view. In core services like finance, personnel or education, the variation of cases is so great that you have to allow people individual judgment. My friend can’t use her skills.

To Hardy’s tweet Marc Andreessen, the creator of the early web browser Mosaic and the co-founder of Netscape, replied,

Before I comment on that response, I want to look at another story that came across my Twitter feed about five minutes later, an extremely thoughtful reflection by Brendan Keogh on “games evangelists and naysayers”. Keogh is responding to a blog post by noted games evangelist Jane McGonigal encouraging all her readers to find people who have suffered some kind of trauma and get them to play a pattern-matching video game, like Tetris, as soon as possible after their trauma. And why wouldn’t you do this? Don’t you want to “HELP PREVENT PTSD RIGHT NOW”?

Keogh comments,

McGonigal … wants a #Kony2012-esque social media campaign to get 100,000 people to read her blog post. She thinks it irresponsible to sit around and wait for definitive results. She even goes so far as to label those that voice valid concerns about the project as “games naysayers” and compares them to climate change deniers.

The project is an unethical way to both present findings and to gather research data. Further, it trivialises the realities of PTSD. McGonigal runs with the study’s wording of Tetris as a potential “vaccine”. But you wouldn’t take a potential vaccine for any disease and distribute it to everyone after a single clinical trial. Why should PTSD be treated with any less seriousness? Responding to a comment on the post questioning the approach, McGonigal cites her own suffering of flashbacks and nightmares after a traumatic experience to demonstrate her good intentions (intentions which I do not doubt for a moment that she has). Yet, she wants everyone to try this because it might work. She doesn’t stop to think that one test on forty people in a controlled environment is not enough to rule out that sticking Tetris or Candy Crush Saga under the nose of someone who has just had a traumatic experience could potentially be harmful for some people (especially considering Candy Crush Saga is not even mentioned in the study itself!).

Further, and crucially, in her desire to implement this project in the real world, she makes no attempt to compare or contrast this method of battling PTSD with existing methods. It doesn’t matter. The point is that it proves games can be used for good.

If we put McGonigal’s blog post together with Andreessen’s tweet we can see the outlines of a very common line of thought in the tech world today:

1) We really earnestly want to save the world;

2) Technology — more specifically, digital technology, the technology we make — can save the world;

3) Therefore, everyone should eagerly turn over to us the keys to society.

4) Anyone who doesn’t want to turn over those keys to us either doesn’t care about saving the world, or hates every technology of the past 5000 years and just wants to go back to writing on animal skins in his yurt, or both;

5) But it doesn’t matter, because resistance is futile. If anyone expresses reservations about your plan you can just smile condescendingly and pat him on the head — “Isn’t that cute?” — because you know you’re going to own the world before too long.

And if anything happens to go astray, you can just join Peter Thiel on his libertarian-tech-floating-earthly-Paradise.


Enjoy your yurts, chumps.

Kevin Kelly’s new theology

Kevin Kelly’s theology is a contemporary version of the one George Bernard Shaw articulated a hundred years ago. In “The New Theology: A Sermon,” Shaw wrote,

In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end. That is what we are working to. When you are asked, “Where is God? Who is God?” stand up and say, “I am God and here is God, not as yet completed, but still advancing towards completion, just in so much as I am working for the purpose of the universe, working for the good of the whole of society and the whole world, instead of merely looking after my personal ends.” In that way we get rid of the old contradiction, we begin to perceive that the evil of the world is a thing that will finally be evolved out of the world, that it was not brought into the world by malice and cruelty, but by an entirely benevolent designer that had not as yet discovered how to carry out its benevolent intention. In that way I think we may turn towards the future with greater hope.

We might compare this rhetoric to that of Kelly’s new essay in Wired, which begins with a classic Borg Complex move: “We’re expanding the data sphere to sci-fi levels and there’s no stopping it. Too many of the benefits we covet derive from it.” But if resistance is futile, that’s no cause for worry, because resistance would be foolish.

It is no coincidence that the glories of progress in the past 300 years parallel the emergence of the private self and challenges to the authority of society. Civilization is a mechanism to nudge us out of old habits. There would be no modernity without a triumphant self.

So while a world of total surveillance seems inevitable, we don’t know if such a mode will nurture a strong sense of self, which is the engine of innovation and creativity — and thus all future progress. How would an individual maintain the boundaries of self when their every thought, utterance, and action is captured, archived, analyzed, and eventually anticipated by others?

The self forged by previous centuries will no longer suffice. We are now remaking the self with technology. We’ve broadened our circle of empathy, from clan to race, race to species, and soon beyond that. We’ve extended our bodies and minds with tools and hardware. We are now expanding our self by inhabiting virtual spaces, linking up to billions of other minds, and trillions of other mechanical intelligences. We are wider than we were, and as we offload our memories to infinite machines, deeper in some ways.

There’s no point asking Kelly for details. (“The self forged by previous centuries will no longer suffice” for what? Have we really “broadened our circle of empathy”? What are we “wider” and “deeper” than, exactly? And what does that mean?) This is not an argument. It is, like Shaw’s “New Theology,” a sermon, directed primarily towards those who already believe and secondarily to sympathetic waverers, the ones with a tiny shred of conscience troubling them about the universal surveillance state whose arrival Kelly awaits so breathlessly. Those who would resist need not be addressed because they’re on their way to — let’s see, what’s that phrase? — ah yes: the “dustbin of history.”

Now, someone might protest at this point that I am not being fair to Kelly. After all, he does say that a one-way surveillance state, in which ordinary people are seen but do not see, would be “hell”; and he even says “A massively surveilled world is not a world I would design (or even desire), but massive surveillance is coming either way because that is the bias of digital technology and we might as well surveil well and civilly.”

Let’s pause for a moment to note the reappearance of the Borg here, and Kelly’s habitual offloading of responsibility from human beings to our tools: for Woody Allen, “the heart wants what it wants” but for Kelly technology wants what it wants, and such sovereign beings always get their way.

But more important, notice here that Kelly thinks it’s a simple choice to decide on two-way surveillance: we “might as well.” He admits that the omnipotent surveillance state would be hell but he obviously doesn’t think that hell has even the remotest chance of happening. Why is he so confident? Because he shares Shaw’s belief in an evolutionary religion in which all that is true and good and holy emerges in history as the result of an inevitably beneficent process. Why should we worry about possible future constrictions of selfhood when the track record of “modernity” is, says Kelly, so utterly spotless, with its “glories of progress” and its “triumphant self”? I mean, it’s not as though modernity had a dark side or anything. All the arrows point skyward. So: why worry?

The only difference between Shaw and Kelly in this respect is that for Shaw the emerging paradisal “ecstasy of a brain” is a human brain; for Kelly it’s digital. Kelly has just identified digital technology as the means by which Shaw’s evolutionary progressivist Utopia will be realized.

But what else is new? The rich, powerful, and well-connected always think that they and people like them (a) will end up on the right side of history and (b) will be insulated from harm — which is after all what really counts. Kelly begins his essay thus: “I once worked with Steven Spielberg on the development of Minority Report” — a lovely opener, since it simultaneously allows Kelly to boast about his connections in the film world and to dismiss Philip K. Dick’s dystopian vision as needlessly fretful. When the pre-cog system comes, it won’t be able to hurt anyone who really matters.

So let’s just cue up Donald Fagen one more time and get down to the business of learning to desire whatever it is that technology wants. The one remaining spiritual discipline in Kelly’s theology is learning to love Big Brother.