digital culture through file types

This is a fabulous idea by Mark Sample: studying digital culture through file types. He mentions MP3, GIF, HTML, and JSON, but of course there are many others worthy of attention. Let me mention just two:

XML: XML is remarkably pervasive, providing the underlying document structure for everything from RSS and Atom feeds to the file formats of office suites like Microsoft Office and iWork — but secretly so. That is, you could make daily and expert use of a hundred different applications without ever knowing that XML is at work under the hood. (A quick demonstration follows below.)

Text: There’s a great story to be told about how plain text files went from being the most basic and boring of all file types to a kind of lifestyle choice — a lifestyle choice I myself have made.
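One quick way to see that hidden XML for yourself: a Word .docx file is just a zip archive of XML parts. The sketch below is minimal and the filename is a placeholder; any .docx on your own machine will do.

```python
# A .docx file is a zip archive whose parts are XML documents.
# "report.docx" is a placeholder; substitute any Word file you have on hand.
import zipfile

with zipfile.ZipFile("report.docx") as doc:
    # A few of the XML parts inside, e.g. '[Content_Types].xml',
    # 'word/document.xml', 'word/styles.xml', ...
    print(doc.namelist()[:5])
    # The body of the document itself is raw XML.
    print(doc.read("word/document.xml")[:300])
```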

If you have other suggestions, please share them here or with Mark.

Should Computers Replace Physicians?

In 2012, at the Health Innovation Summit in San Francisco, Vinod Khosla, Sun Microsystems co-founder and venture capitalist, declared: “Health care is like witchcraft and just based on tradition.” Biased and fallible physicians, he continued, don’t use enough science or data — and thus machines will someday rightly replace 80 percent of doctors. Earlier that same year, Khosla had penned an article for TechCrunch making a similar point: with the capacity to store and analyze every single biological detail, computers would soon outperform human doctors. He writes, “there are three thousand or more metabolic pathways, I was once told, in the human body and they impact each other in very complex ways. These tasks are perfect for a computer to model as ‘systems biology’ researchers are trying to do.” In Khosla’s vision of the future, by around 2022 he will “be able to ask Siri’s great great grandchild (Version 9.0?) for an opinion far more accurate than the one I get today from the average physician.”

In May 2014, Khosla reiterated his assertion that computers will replace most doctors. “Humans are not good when 500 variables affect a disease. We can handle three to five to seven, maybe,” he said. “We are guided too much by opinions, not by statistical science.”

The dream of replacing doctors with advanced artificial intelligence is unsurprising, as talk of robots replacing human workers in various fields — from eldercare to taxi driving — has become common. But is Vinod Khosla right about medicine? Will we soon
walk into clinics and be seen by robot diagnosticians who will cull our health information, evaluate our symptoms, and prescribe a treatment? Whether or not the technology will exist is difficult to predict, but we are certainly on our way there. The IBM
supercomputer Watson is already being used in some hospitals to help diagnose cancer and recommend treatment, which it does by sifting through millions of patient records and producing treatment options based on previous outcomes. Analysts at Memorial Sloan Kettering Cancer Center are training Watson “to extract and interpret
physician notes, lab results, and clinical research.” All this is awe-inspiring. Let us generously assume, then, for a moment, that the technology for Khosla’s future will be
available and that all knowledge about and treatment options for medical problems will be readily analyzable by a computer within the next decade or so. If this is the future, why
shouldn’t physicians be replaced?

There are several errors in Khosla’s way of thinking about this issue. First of all, modern health care is not “like witchcraft.” Academic
physicians, for example, use evidence-based medicine whenever it is available.
And when it isn’t, they try to reason through the problem using what biologists know about disease presentation, physiology, and pharmacology.

Moreover, Khosla mischaracterizes the doctor-patient interaction. For Khosla, a visit to the doctor involves “friendly banter” and questions about symptoms. The
doctor then assesses these symptoms, “hunts around … for clues as to their source, provides the diagnosis, writes a prescription, and sends you off.” In Khosla’s estimation the entire
visit “should take no more than 15 minutes and usually takes probably less than that.” But the kind of visit Khosla writes about is an urgent care visit wherein quick and minor issues are addressed: strep throat or a small laceration requiring a
stitch or two. Yes, these visits can take fifteen minutes, but so much of medicine does not involve these brief interactions. Consider the diabetic
patient who has poorly controlled blood sugars, putting her at risk for stroke, heart attack, peripheral nerve destruction, and kidney failure, but who hasn’t
been taking her medications. Or consider a patient addicted to cigarettes or on the verge of alcoholism. Consider the patient with Parkinson’s disease who wonders how this new diagnosis
will affect his life. And what about the worried parents who want antibiotics for their child even though their child has a viral infection and not a
bacterial infection? I can go on and on with scenarios like these, which occur daily, if not hourly, in nearly every medical specialty. In fact,
fifteen-minute visits are the exception to the kind of medicine most physicians need to practice. One cannot convince an alcoholic to give up alcohol, get
a diabetic patient to take her medications, or teach a Spanish-speaking patient to take his pills correctly in fifteen minutes. In addition, all this is impossible without “friendly banter.”

As Dr. Danielle Ofri, an associate professor of medicine at the New York University School of Medicine, wrote in a New York Times blog post, compliance with blood pressure medications or diabetic medications is extremely difficult, involving multiple factors:

Besides obtaining five prescriptions and getting to the pharmacy to fill them (and that’s assuming no hassles with the insurance company, and that the
patient actually has insurance), the patient would also be expected to cut down on salt and fat at each meal, exercise three or four times per week, make
it to doctors’ appointments, get blood tests before each appointment, check blood sugar, get flu shots — on top of remembering to take the morning pills
and then the evening pills each and every day.

Added up, that’s more than 3,000 behaviors to attend to, each year, to be truly adherent to all of the
doctor’s recommendations.

Because of the difficulties involved in getting a patient to comply with a complex treatment plan, Dr. John Steiner argues in an article in the Annals of Internal Medicine that effective care must address individual, social, and environmental factors:

Counseling with a trusted clinician needs to be complemented by outreach interventions and removal of structural and organizational barriers. …[F]ront-line clinicians, interdisciplinary teams, organizational leaders, and policymakers will need to coordinate efforts in
ways that exemplify the underlying principles of health care reform.

Therefore, the interaction between physician and patient cannot be compressed into fifteen minutes. No, the relationship involves, at minimum, a negotiation between what the doctor thinks is right and what the patient is capable of and wants. To use the example of the diabetic patient, perhaps the first step is to get the patient to give up soda for water, which will help lower blood sugars, or to start walking instead of driving, or to take the stairs instead of the elevator. We make small suggestions and patients make small compromises in order to change for the better — a negotiation that helps
patients improve in a way that is admittedly slow, but necessarily slow. This requires the kind of give-and-take that we naturally have in relationships with other people, but not with computers.

This kind of interaction also necessitates trust — trust regarding illicit drugs, alcohol, tobacco, and sexual activity, all of which can contribute to or
cause certain medical problems. And a computer may ask the questions but cannot earn a patient’s confidence. After all, these kinds of secrets can only be
exchanged between two human beings. David Eagleman, a neuroscientist at the Baylor College of Medicine, writes in his book Incognito that when we reveal a secret, we almost always feel that “the receiver of the secrets
has to be human.” He wonders why, for example, “telling a wall, a lizard or a goat your secrets is much less satisfying.” As patients, we long for the human reception and understanding that a physician can provide and then use to our advantage in arriving at a diagnosis.

Khosla neglects other elements of medical care, too. Implicit in his comments is the idea that the
patient is a consumer and the doctor a salesman. In this setting, the patient buys health in the same way that he or she buys corn on the cob. One doesn’t need friendly banter or a packet of paperwork to get the best corn, only a short visit to the
grocery store.

And yet, issues of health are far more serious than buying produce. Let’s take the example of a mother who brings her child in for ADHD medication, a
scenario I’ve seen multiple times. “My child has ADHD,” she says. “He needs Ritalin to help his symptoms.” In a consumer-provider scenario, the doctor gives the
mother Ritalin. This is what she wants; she is paying for the visit; the customer is king. But someone must explain to the mother what ADHD
is and whether her child actually has this disorder. There must be a conversation about the diagnosis, the medication, and its side effects, because the consequences of these are lifelong. Machines would have to be more than just clerks. In many instances, they would have to convince the parent that, perhaps, her child does not have
ADHD; that she should hold off on medications and schedule a follow-up to see how the child is doing. Because the exchange in medicine is unlike any ordinary purchase, so consequential and so fraught with emotion, the relationship is not simply one of consumer and cashier. Thus computers, no matter how efficient, are ill-suited to the task.

Khosla also misunderstands certain treatments, which depend directly on human interaction. Take psychiatry, for example. We know that cognitive behavioral therapy and medication combined are the best treatment for a disease like depression. And cognitive behavioral therapy has at its core the relationship between the psychiatrist or therapist and the patient, who together work through the patient’s illness during therapy sessions. In cognitive behavioral therapy, private aspects of life are discussed and comfort is offered — human expressions and emotions are critical for this mode of treatment.


To be sure, Khosla is right about quite a lot. Yes, technology ought to make certain aspects of the patient visit more efficient. Our vital signs may one day easily be taken with the help of our mobile phones, as he suggests, which
would save time checking in to a clinic and could help give physicians constant and accurate measurements of blood pressure in hypertensive patients or EKG
recordings in patients with heart disease. Technology of this sort could also indicate when an emergency is happening or how a patient ought to alter medication
doses.
Furthermore, Khosla correctly identifies some of the limitations of human physicians: “We cannot expect our doctor to be able to remember everything from medical
school twenty years ago or memorize the whole Physicians Desk Reference (PDR) and to know everything from the latest research, and so on and so forth.”
True, the amount of information accumulated by modern medical research is beyond the capability of any human being to know, and doctors do make mistakes because they forget or are not up on the latest research. In a 2002 study in the Journal of Neurology, Neurosurgery and Psychiatry, investigators found that 15 percent of patients with a diagnosis of Parkinson’s disease do not necessarily fulfill the criteria for the disease, and that 20 percent of patients who do have Parkinson’s disease, despite having already seen medical providers, have not been diagnosed.
These are large percentages that have profound implications for people’s lives. And this is exactly why physicians must use technologies like Watson to do a
better job, not necessarily abdicate the job altogether. Most of us already carry smartphones or tablets on rounds, to look up disease processes or confirm
our choice of antibiotic.
Lastly, Khosla wisely points out that physician bias can negatively affect a patient’s treatment. As he writes, “a physician’s bias makes all these
personal decisions for patients in a majority of the cases without the patient (or sometimes even the physician) realizing what ‘preferences’ are being
incorporated into their recommendations. The situation gets worse the less educated or economically less well-off the patient is, such as in developing
countries, in my estimation.” Undoubtedly, this dilemma is real. I have spent many of my posts on this blog writing about the issue of remaining unbiased or level-headed in the face of difficult patient interactions.
A study published in Obesity in 2013 found that physicians “demonstrated less emotional rapport with overweight and obese patients … than for normal weight patients,” which may
“weaken the patient-physician relationship, diminish patients’ adherence to recommendations, and decrease the effectiveness of behavior change counseling.”
And as Tara Parker-Pope remarks in the New York Times, “studies show that patients are far more likely to follow a doctor’s advice and to have a better health outcome when they believe their doctor empathizes with their plight.” If bias crowds out empathy, it makes sense that patients have worse outcomes. The very thing that makes doctors most valuable, their humanity, can also have negative consequences.
But people can learn from studies, alter their behavior, and remain human. Computers or robots can learn from studies and alter their behavior, but they will
always be robots. They will never earn the trust of the chronically ill drug addict. They will never be able to negotiate with the most difficult patients
who demand specific treatments but may not be entirely sure why. An ideal system would not be one built solely on fallible human doctors but one
in which new tools significantly augment human physicians’ skill and knowledge. A measured combination of the two would put all the information at a doctor’s fingertips while keeping the art of medicine alive.

laptops of the Borg

What, yet another Borg-Complex argument for laptops in the classroom? Yeah. Another one.

Laptops are not a “new, trendy thing” as suggested in the final sentence of the article – they are a standard piece of equipment that, according to the Pew Internet and American Life Project, are owned by 88% of all undergraduate students in the US (and that’s data from four years ago). The technology is not going away, and professors trying to make it go away are simply never going to win that battle. If we want to have more student attention, banning technology is a dead end. Let’s think about better pedagogy instead.

Sigh. It should not take a genius to comprehend the simple fact that the ongoing presence and usefulness of laptops does not in itself entail that they should be present in every situation. “Banning laptops from the shower is not the answer. Laptops are not going away, and if we want to have cleaner students, we need to learn to make use of this invaluable resource.”

And then there’s the idea that if you’re not more interesting than the internet you’re a bad teacher. Cue Gabriel Rossman:

Honestly. 
Robert Talbert, the author of that post, assumes that a teacher would only ban laptops from the classroom because he or she is lecturing, and we all know — don’t we? — that lecturing is always and everywhere bad pedagogy. (Don’t we??) But here’s why I ban laptops from my classrooms: because we’re reading and discussing books. We look at page after page, and my students and I use both hands to do that, and then I encourage them to mark the important passages, and take brief notes on them, with pen or pencil. Which means that there are no hands left over for laptops. And if they were typing on their laptops, they’d have no hands left over for turning to the pages I asked them to turn to. See the problem?
I’ve said it before, often, but let me try it one more time: Computers are great, and I not only encourage their use by my students, I try to teach students how to use computers better. But for about three hours a week, we set the computers aside and look at books. It’s not so great a sacrifice. 

coping with OS frustration

Alex Payne recently did, more thoroughly, what I do from time to time: he re-evaluated his commitment to the Apple ecosystem. It’s a valuable exercise; among other things, it helps me to manage my frustrations with my technological equipment.

And frustrations there are — in fact, they have increased in recent years. You don’t have to look far to find articles and blog posts on how Apple’s quality control is declining or iOS 7 is a disaster. (Just do a Google search for those terms.) And I have to say that after a month of using iOS 7 I would, without question, revert to iOS 6 if I could, a handful of new and useful features notwithstanding. Moreover, even after more than a decade of OS X the ecosystem still lacks a first-rate web browser and a largely bug-free email client. (Most people know what’s wrong with Mail.app, but I could write a very long post on what’s wrong with Safari, Chrome, and Firefox. Postbox is looking pretty good as an email client right now, but time will tell whether it’s The Answer.)

But in the midst of these frustrations and others I need to keep two points in mind. First, we ask more of our computers than we ever have. Browsers, for instance, are now expected not just to render good old HTML but to play every kind of audio and video and to run web apps that match the full functionality of desktop apps. And increasingly we expect all our data to sync seamlessly among multiple devices: desktops, laptops, tablets, phones. There is so much more that can go wrong now. And so it sometimes does.

Second, as Alex Payne’s post reminds us, every other ecosystem has similar problems — or worse ones. And that’s a useful thing to keep in mind, especially when I’m gritting my teeth at the realization that, for instance, if you want to see the items in your Reminders app in chronological order you must, painstakingly, move them into the order you want one at a time. The same is true on the iOS versions. It seems very strange to me that such an obviously basic feature did not make it into the first released version of the software, and frankly unbelievable that manual re-ordering is your only option two years after the app was first introduced (in iOS 5) — but hey, influential Mac users have been complaining about fundamental inconsistencies in the behavior of the OS X Finder for about a decade now, with no results. This is the way of the world: the things that need to be fixed are ignored and the things that don’t need to be fixed get changed, as often as not for the worse. So whaddya gonna do?

One thing I’m not going to do is to throw the whole ecosystem out with the bathwater — and thanks to Alex Payne for preventing me from doing so. Better for me to make the most of a system I know how to use than to start over from scratch with something utterly unfamiliar that has at least as many problems of its own. And one thing I most certainly will do: I’ll keep asking Why in the hell won’t this thing just work?

down memory lane

A conversation on Twitter the other day reminded me of my earliest experiences with online life. It was in 1992 that I learned that I could have my college computer connected to something called the “Internet” — though I don’t know how I learned it, or what I thought the Internet was.
I had a Mac SE/30 at the time — the first computer my employer ever bought for me — and someone from Computing Services came by, plugged me in, and installed some basic software. I know I didn’t get any training, so what puzzles me now is how I learned how to use the programs. I must have checked out some books . . . but I don’t remember checking them out.

Here’s something else I don’t remember: very few people I knew had email, so how did I find out my friends’ email addresses? I must have asked when I saw them and wrote the addresses down on paper. But in any case I soon developed a small group of people that I corresponded with, using the venerable Pine — and again, I don’t know how I, a Mac user from the start of my computing career and therefore utterly mouse-dependent, adjusted to a mouseless console environment. . . . But I did, not only when using Pine, but when accessing Wheaton’s library catalogue via Telnet, and when finding some rudimentary news sources via Gopher, followed a couple of years later by my first exposure to the World Wide Web, via Lynx. Pine, Telnet, and Lynx were the internet for me for several years — and they were great programs, primarily because they gave the fastest possible response on slow connections.

It was only when I got a Performa — with a CD drive! — that I began to turn away from the text-only goodness of those days. I was seduced by all the pretty pictures, by Netscape and, above all, by what must remain even today the greatest time-waster of my life.

How odd for all this to be nostalgia material. After all, the whole point at the time was to be cutting-edge. But even when Wheaton eliminated Telnet access to the library catalogue and moved it to the Web, I knew that I was losing something. To this day I’d search catalogues on Telnet if I could.

backup strategies

Here’s an odd post by Gina Barreca about a student of hers whose completely un-backed-up computer died. Everything was lost: documents of all kinds, photos, email, etc. Barreca’s response: See, this is why you should print everything out. (Everything? Even the photos?)

Barreca says she pays for a backup service, though she seems to make a point of emphasizing that she doesn’t understand it: “one of those companies which (I am told) will keep my stuff safe in the ether or the cloud or the memory of one really smart guy who’ll be able to recite everything I have on my hard-drive.” So what she really relies on is “filled-to-overflowing filing cabinets of paper and shelves of hand-written notebooks.”

Is that really the most appropriate response? About a month ago my computer died — as I mentioned in an earlier post — and I would have been completely miserable, indeed would still be miserable for some time to come, if I had had only paper copies of everything to rely on. (Everything textual, that is: I don’t think anyone could seriously suggest printing out high-resolution copies of every digital photo they’ve taken, and few would suggest burning every song they own to CD.) The best possible scenario for making those paper copies useful again would be to scan them to PDF and run OCR software to make at least some of the text machine-readable again. But this would take countless hours and would yield highly imperfect results.

What I did instead: I had my whole computer backed up to an external hard drive in my office, my entire home folder backed up to Amazon S3, and my essential files backed up to Dropbox. So while I was waiting for my new computer to arrive I used Dropbox on other computers to keep working, and when the computer finally did show up I just transferred the whole contents of my previous computer to the new one. Using Apple’s Migration Assistant, I set it up one afternoon when I left the office, and had everything in place when I got back the next morning.
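For the curious, the home-folder-to-S3 tier of a setup like that comes down to something like the sketch below. The bucket name is a placeholder and the script is purely illustrative, not the tool I actually used; a real backup program also handles versioning, encryption, deletions, and retries.

```python
# Illustrative sketch of mirroring a home folder to Amazon S3.
# "my-backup-bucket" is a placeholder, and this is not the tool used for
# the backup described above.
from pathlib import Path

import boto3

BUCKET = "my-backup-bucket"   # hypothetical bucket name
SOURCE = Path.home()          # the home folder to back up

s3 = boto3.client("s3")

for path in SOURCE.rglob("*"):
    if path.is_file():
        # Store each file under a key that mirrors its path relative to home.
        key = path.relative_to(SOURCE).as_posix()
        s3.upload_file(str(path), BUCKET, key)
```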

And Barreca thinks it makes more sense just to print everything out? 

I want to believe

Returning to the subject of today’s earlier post: The authors of that study write this in summation:

Statistical findings, said Heuser, made us realize that genres are icebergs: with a visible portion floating above the water, and a much larger part hidden below, and extending to unknown depths. Realizing that these depths exist; that they can be systematically explored; and that they may lead to a multi-dimensional reconceptualization of genre: such, we think, are solid findings of our research.

Nothing this vague counts as “solid findings.” What does it mean to say that a genre is like an iceberg? What are those “parts” that are below the surface? What sorts of actions would count as “exploring those depths”? What would be the difference between “systematically” exploring those depths and doing so non-systematically? What would a “reconceptualization” of genre look like? Would that be different than a mere adjustment in our generic definitions? What would be the difference between a “multi-dimensional reconceptualization of genre” and a unidimensional one?
The rhetoric here is very inflated, but if there is substance to the ideas I cannot see it. I would like to be able to see it. Like Agent Mulder, I want to believe — but these guys aren’t making it easy for me.

doing things with computers

This is the kind of thing I just don’t understand the value or use of:

This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin — which tried to establish whether computer-generated algorithms could “recognize” literary genres. You take David Copperfield, run it through a program without any human input – “unsupervised”, as the expression goes – and … can the program figure out whether it’s a gothic novel or a Bildungsroman? The answer is, fundamentally, Yes: but a Yes with so many complications that make it necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.

So human beings, over a period of centuries, read many, many books and come up with heuristic schemes to classify them — identify various genres, that is to say, “kinds,” kinship groups. Then those human beings specify the features they see as necessary to the various kinds, write complex programs containing instructions for discerning those features, and run those programs on computers . . . to see how well (or badly) computers can replicate what human beings have already done?
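Concretely, the kind of “unsupervised” run the study describes amounts to something like the toy sketch below: no genre labels are supplied, just word frequencies and a clustering algorithm. The snippets and the choice of TF-IDF vectors plus k-means are illustrative guesses on my part, not the study’s actual pipeline.

```python
# Toy sketch of unsupervised "genre recognition": represent each text as a
# vector of word frequencies and let a clustering algorithm group the texts
# without ever being told what a genre is. Snippets and method are
# illustrative only, not the pipeline used in the study discussed above.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

texts = {
    "gothic-ish A": "the ruined abbey loomed in darkness, a spectre at the casement",
    "gothic-ish B": "a ghostly wail echoed through the vaults of the ancient castle",
    "bildungsroman-ish A": "the boy left his village to seek his education and his fortune",
    "bildungsroman-ish B": "year by year she grew, learned her trade, and came to know herself",
}

vectors = TfidfVectorizer().fit_transform(list(texts.values()))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for title, label in zip(texts, labels):
    print(f"{title}: cluster {label}")
```

With snippets this short the grouping is not reliable; real work of this kind uses whole novels and thousands of features. But the principle is the same: the program is never told what a gothic novel or a Bildungsroman is, and the question is whether its clusters line up with the genres readers already recognize.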

I don’t get it. Shouldn’t we be striving to get computers to do things that human beings can’t do, or can’t do as well? The primary value I see in this project is that it could be conceptually clarifying to be forced to specify the features we see as intrinsic to genres. But in that case the existence of programmable computers becomes just a prompt, one accidental rather than essential to the enterprise of thinking more clearly and precisely.

reader’s report: Jane Smiley

Well, the recent traveling and busyness may have kept me from posting, but it didn’t keep me from reading. Nothing keeps me from reading. So here’s what’s been going on:

I’ve been working my way through Tony Judt’s magisterial Postwar, but it’s a very large book — exactly the kind of thing the Kindle was made for, by the way — and I’ve been pausing for other tastes. For instance, I read Jane Smiley’s brief and brisk The Man Who Invented the Computer: The Biography of John Atanasoff, Digital Pioneer, the title of which is either misleading or ironic or, probably, both. Smiley begins her narrative with a straightforward claim:

The inventor of the computer was a thirty-four-year-old associate professor of physics at Iowa State College named John Vincent Atanasoff. There is no doubt that he invented the computer (his claim was affirmed in court in 1978) and there is no doubt that the computer was the most important (though not the most deadly) invention of the twentieth century.

But by the end of the narrative, less than half of which is about Atanasoff, she writes, more realistically and less definitively,

The computer I am typing on came to me in a certain way. The seed was planted and its shoot was cultivated by John Vincent Atanasoff and Clifford Berry, but because Iowa State was a land-grant college, it was far from the mainstream. Because the administration at Iowa State did not understand the significance of the machine in the basement of the physics building, John Mauchly was as essential to my computer as Atanasoff was — it was Mauchly who transplanted the shoot from the basement nursery to the luxurious greenhouse of the Moore School. It was Mauchly who in spite of his later testimony was enthusiastic, did know enough to see what Atanasoff had done, was interested enough to pursue it. Other than Clifford Berry and a handful of graduate students, no one else was. Without Mauchly, Atanasoff would have been in the same position as Konrad Zuse and Tommy Flowers — his machine just a rumor or a distant memory.

Each person named in that paragraph gets a good deal of attention in Smiley’s narrative, along with Alan Turing and John von Neumann, and by the time I finished the book I could only come up with one explanation for Smiley’s title and for her ringing affirmation of Atanasoff’s role as the inventor of the computer: she too teaches at Iowa State, and wants to bring it to the center of a narrative to which it has previously been peripheral. A commendable endeavor, but not one that warrants the book’s title.

In the end, Smiley’s narrative is a smoothly readable introduction to a vexed question, but it left me wanting a good deal more.

learning Greek

Sure, people think it’s a good idea to learn Greek. But of course they would when you put the question that way. It’s a good idea to learn all sorts of things. The problems come when you try to determine relative goods. Is learning ancient Greek more valuable than learning calculus?

For the last few years I’ve made a conscious decision to work on retrieving some of my lost math and science knowledge, primarily in order to facilitate my understanding and use of computers. But this has been a fairly rough road, in part because of many years of exercising my mind in other ways, but also because the computer science world is generally not friendly to beginners/newbies/noobs. Even in some of the polite and friendly responses to my comment on this Snarkmarket post you can sense the attitude: “I just don’t have the time or energy to be introductory.”

Maybe I’ve just been unlucky, but with a few notable exceptions, that’s how it has gone for me as I’ve tried to learn more about computing: variants on “RTFM.” I think this response to learners happens when people think that their own field is the wave of the future, and like the idea that they’re among the few who realize it — as Neal Stephenson once put it, they’re the relatively few Morlocks running the world for the many Eloi — and don’t especially need any more company in the engine room. Whereas classicists like Mary Beard are advocates for their fields because they are pained by their marginality. Maybe I should be pursuing Greek instead of Ruby on Rails. . . .

But wait! Just as I wrote and queued up this post, I came across an interesting new endeavor: Digital Humanities Questions and Answers. Now this might restore one’s hopefulness!