this is dialogue?

library ad infinitum:

Putting The Shallows into dialogue with Shirky’s Cognitive Surplus, the latter book seems like the one with an actual idea. However smartly dressed, Carr’s concern about the corrosiveness of media is really a reflex, one that’s been twitching ever since Socrates fretted over the dangers of the alphabet. Shirky’s idea — that modern life produces a surplus of time, which people have variously spent on gin, television, and now the Internet — is something to sink one’s teeth into.

This is pretty typical of the technophilic reviews I’ve seen so far of Carr’s book: let’s just pretend that Carr didn’t cite any research to support his points, or that the research doesn’t exist. Let’s just assert that Carr made assertions. In short: Carr makes claims I would prefer to be false, so I’ll call his position an archaic “reflex.” That way I won’t have to think about it.

(Steven Johnson, by contrast — see my comments a few posts back — acknowledges that the research on multitasking is there, that it’s valid, and that Carr has cited it fairly. He just doesn’t think that losing 20% of our attentiveness is all that big a deal.)

It would be a wonderful thing if someone were to put Carr’s book and Shirky’s into dialogue with each other — I might try it myself, if I can find time to finish Cognitive Surplus — but saying, in effect, “this book sucks” and “this other book is awesome” doesn’t constitute dialogue.

Steven Johnson’s numbers game

Unfortunately, Steven Johnson, once one of the sharpest cultural commentators around, seems to be turning into a caricature. His recent response to the concerns about digital life articulated by Nicholas Carr and others is woefully bad. He simply refuses to take seriously the increasingly large body of evidence about the negative consequences of always-on, always-online so-called multitasking. Yes, “multitasking makes you slightly less able to focus,” or, as he later says, “I am slightly less focused,” or, still later, “we are a little less focused.” (Am I allowed to make a joke here about how multitasking makes you less likely to notice repetitions in your prose?)

But what counts as a “little less”? Choosing to refer only to one of the less alarming of the many studies available, Johnson reports that it “found that heavy multitaskers performed about 10 to 20 percent worse on most tests than light multitaskers.” Apparently for Johnson losing 20% of your ability to concentrate is scarcely worth mentioning. And apparently he hasn’t seen any of the studies showing that people who are supremely confident in their multitasking abilities, as he appears to be, are more fuddled than anyone else.

Johnson wants us to focus on the fabulous benefits we receive from a multitasking life. For instance:

Thanks to e-mail, Twitter and the blogosphere, I regularly exchange information with hundreds of people in a single day: scheduling meetings, sharing political gossip, trading edits on a book chapter, planning a family vacation, reading tech punditry. How many of those exchanges could happen were I limited exclusively to the technologies of the phone, the post office and the face-to-face meeting? I suspect that the number would be a small fraction of my current rate.

And then, later: “We are reading more text, writing far more often, than we were in the heyday of television.” So it would appear that Johnson has no concept whatsoever of quality of interaction — he thinks only in terms of quantity. How much we read, how much we write, how many messages we exchange in a day.

That’s it? That’s all? Just racking up the numbers, like counting your Facebook friends or Twitter followers? Surely Johnson can do better than this. I have my own concerns about Carr’s arguments, some of which I have tried to articulate here, but the detailed case he makes for the costs of connection deserves a far more considered response than Johnson is prepared to give it.

I think the Steven Johnson of a few years ago would have realized the need to make a much stronger — and probably a wholly different — case for the distracted life than this sad little counting game. He should get offline for a few weeks and think about all this some more.

every day in every way. . .

Jonah Lehrer:

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory. Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.

I wish I could believe this. And Clay Shirky too.

Also, I wanted to finish reading this story but I had to write this blog post. And tweet some.

choose any two

Ars Technica summarizes a new report in Science:

Humans are capable of pursuing multiple goals at once—for example, I am pursuing writing an article and eating a bowl of Froot Loops—but how those activities get divided by the brain is still somewhat of a mystery. A new study, published in Science this week, imaged human brains and watched them try to multitask as subjects performed a set of variously interrupted tasks. They saw that our brains can divide resources fairly easily for two tasks, but have a much harder time juggling three or more.

Here’s a link to the original article, but it’s subscription-only.

the incomprehending tweeters

It’s hard for me to believe that anyone — anyone — would think it a good idea to project a giant stream of Twitter commentary on a speech while the speaker is giving it — but that’s what they do at the big Web 2.0 conference, with predictably disastrous results for Danah Boyd. Note the comments by Kathy Sierra, who has been on the receiving end of some nasty commentary herself. And see some further reflections on this ludicrous practice here and here and here.

Let me make an observation that these other observers are, it seems, reluctant to make. That actual multitasking is cognitively impossible has been established beyond reasonable doubt: see Christine Rosen’s article in this very journal, or, if you prefer to look beyond our house organ, try here and here. In fact, it has become clear that the people who think they are skilled multitaskers actually are worse at it than other people.

So when you set up a Twitter stream to project as a speaker is speaking, and invite people to participate in it, you are simply asking them to fail, miserably, to understand what the speaker is saying. If a speaker makes a point that you find dubious, are you going to wait to see if later stages in the argument clarify that point, or perhaps make it more plausible? You are not. You are going to tweet your immediate reaction and therefore simply miss the next stage in the speaker’s argument. Every tweet you write, and every tweet you read on the big screen, compromises still further your comprehension of the lecture. I bet that after the talk was over there weren’t a dozen people in that audience who could have given even a minimally competent summary of what Boyd said.

Boyd understands all this: “Had I known about the Twitter stream, I would’ve given a more pop-y talk that would’ve bored anyone who has heard me speak before and provided maybe 3-4 nuggets of information for folks to chew on. It would’ve been funny and quotable but it wouldn’t have been content-wise memorable.”

That is, she would have given a talk that did not make a sequential argument but just strung together sound-bites, because the audience couldn’t have grasped anything other than disconnected aphoristic statements. In other words, she would have given a talk made of tweets, because that’s all that her tweeting audience could possibly have received. And even then they would have gotten only some of her verbal tweetery.

(Incidentally, or maybe not incidentally, there are certain ironies involved in Boyd being the one to complain about this situation.)

So what the people at Web 2.0 are saying to their speakers, loudly and clearly, is this: We don’t want sequential reasoning. We don’t want ideas that build on other ideas. We don’t want arguments. Just stand up there and fire off a series of unsubstantiated claims that have no connection to one another. Preferably 140 characters at a time.

On being in the world

Apropos the recent pair of posts here on lifelogging, I might recommend for further reading Christine Rosen’s essay on multitasking from The New Atlantis last year, and Walter Kirn’s 2007 essay on that subject in The Atlantic. From Kirn’s piece:

Productive? Efficient? More like running up and down a beach repairing a row of sand castles as the tide comes rolling in and the rain comes pouring down. Multitasking, a definition: “The attempt by human beings to operate like computers, often done with the assistance of computers.” It begins by giving us more tasks to do, making each task harder to do, and dimming the mental powers required to do them.

Kirn’s essay contains so many asides and parentheticals, yet builds to such a crescendo, that I think he must have deliberately crafted its form as a meditation on focus in its own right. He directs his ire not so much at the technologies of multitasking as at the ways they are used, and at the unquestioned premises behind the tools’ design and promotion — premises that can produce effects quite the opposite of what is promised and intended.
Take e-readers, for example. Let’s put aside the claims that reading is coming to an end and the counter-claims that reading is undergoing a renaissance; instead, let’s focus on the e-reader technology itself. The difference between, say, the Kindle and printed books (playfully explored here by Alan Jacobs on one of our sister blogs) is of course partly a matter of comfort for the eye and the hand. But more importantly, screens are generally part of a series of technologies that immerse us in a vast web of constant connection to other things, people, and ideas — rather than just the things, people, and ideas right in front of us. In another New Atlantis article last year, Christine Rosen described her experience attempting to read Dickens’s Nicholas Nickleby on a Kindle:

… I quickly adjusted to the Kindle’s screen and mastered the scroll and page-turn buttons. Nevertheless, my eyes were restless and jumped around as they do when I try to read for a sustained time on the computer. Distractions abounded. I looked up Dickens on Wikipedia, then jumped straight down the Internet rabbit hole following a link about a Dickens short story, “Mugby Junction.” Twenty minutes later I still hadn’t returned to my reading of Nickleby on the Kindle.

On the New York Times website, Maryanne Wolf wonders about the implications of that kind of distraction for children:

The child’s imagination and children’s nascent sense of probity and introspection are no match for a medium that creates a sense of urgency to get to the next piece of stimulating information. The attention span of children may be one of the main reasons why an immersion in on-screen reading is so engaging, and it may also be why digital reading may ultimately prove antithetical to the long-in-development, reflective nature of the expert reading brain as we know it….

The habitual reader Aristotle worried about the three lives of the “good society”: the first life is the life of productivity and knowledge gathering; the second, the life of entertainment; and the third, the life of reflection and contemplation….

I have no doubt that the digital immersion of our children will provide a rich life of entertainment and information and knowledge. My concern is that they will not learn, with their passive immersion, the joy and the effort of the third life, of thinking one’s own thoughts and going beyond what is given.

E-readers wouldn’t be nearly as problematic if they didn’t — both explicitly by being Internet-enabled and implicitly through their digital and screeny natures — draw us into the mode of interaction that is characteristic of the digital world. Reading itself may not be going anywhere, but sustained and focused reading might become increasingly difficult.
And of course these concerns about screens and reading apply more broadly to our interactions with people, places, and the world around us in general. Just take a look at the pilots who recently not only overflew their airport by 150 miles but didn’t even respond to frantic hails from airports and other nearby pilots, all because they were distracted by their laptops. Maybe the pilots are lying — maybe they were really asleep — but even then, the fact that they would use laptops as an excuse and that so many of us would find that excuse plausible suggests that we understand the great power that the screen can have over us. One shudders to imagine how our interaction with the world will shift if the medium of information immersion is slapped right onto our eyeballs.
(Hat tip: Justin Henderson)

don’t confuse me with the facts

Tyler Cowen in the Wilson Quarterly:

Many critics charge that multitasking makes us less efficient. Researchers say that periodically checking your e-mail lowers your cognitive performance level to that of a drunk. If such claims were broadly correct, multitasking would pretty rapidly disappear simply because people would find that it didn’t make sense to do it. Multitasking is flourishing, and so are we.

Right, because human beings don’t ever do things that don’t make sense. We’re rational actors through and through. Addictive online behavior a problem? Impossible. The power of the variable-interval reinforcement schedule of email? Hogwash.

All of which means that any study which says that we engage in unproductive or damaging behavior can simply be dismissed out of hand. “Multitasking is flourishing, and so are we” — the 21st-century version of “Every day in every way I am getting better and better.”

getting it all confused

Eyal Ophir, the study’s lead investigator and a researcher at Stanford’s Communication Between Humans and Interactive Media Lab, said: “We kept looking for multitaskers’ advantages in this study. But we kept finding only disadvantages. We thought multitaskers were very much in control of information. It turns out, they were just getting it all confused.”

— NYT

accept my cyborg self!

Danah Boyd doesn’t just want to be a cyborg; she wants to be accepted as a cyborg. Recently at a conference she was criticized for fooling around on the web rather than paying attention to the speakers. This upsets her.

Interestingly, she doesn’t do what — in my experience, anyway — most people similarly accused do: she doesn’t claim Awesome Multitasking Powers. She freely admits that she wasn’t paying much attention to the conference speakers, but says that people don’t listen to speakers at conferences anyway — “I don’t think that people were paying that much attention before” laptops — and anyway she learned a lot while looking up words the speaker used on Wikipedia instead of trying to follow the argument. “Am I learning what the speaker wants me to learn? Perhaps not. But I am learning and thinking and engaging.”

For Boyd, the criticism she received is a function of two things: first, an “anti-computer attitude,” and second, a refusal to “embrace those who learn best when they have an outlet for their questions and thoughts.” (Stop trying to crush my spirit of inquiry!)

In response to all this I have a few questions. My chief one is this: why go sit in a room where someone is lecturing if you so conspicuously aren’t interested? Or why not quietly edge out if a particular talk leaves you cold? That way you don’t have to subject yourself to boring stuff — you can do your “learning and thinking and engaging” somewhere with coffee and pastries — and you don’t distract, by your ceaseless typing and mousing, people who are trying to listen.

And one more: If you can learn via Twitter and Wikipedia, couldn’t you also — just possibly — learn by listening to another human being for a while? Lord knows there are more than enough dreary lecturers in the world — “Earth to boring guy,” as Bart Simpson once said — but some people speak rather well. Think of the best TED talks: do you really want to be staring at your screen and typing while those are going on? All I am saying: Give listening a chance.