Can anyone help me understand Ephraim Radner?

While I’m on the twofold subject of (a) reading outside my speciality and (b) asking for help, I want to say something about the theologian Ephraim Radner. Several people I know and admire very much have encouraged me to read Radner, whom they in turn admire very much, and for a good many years now I have tried, repeatedly. But there’s a problem. The problem is that I simply cannot understand what he is saying. I do not know that I’ve ever come across a writer — not even Jacques Lacan — who has defeated me as thoroughly as Radner has. And this genuinely worries me, because while most of these people will acknowledge that Radner is not the most elegant writer, none of them seem to have any trouble making sense of his writing, and seem befuddled by my befuddlement.

Let me take some illustrative examples from Radner’s recent book A Time to Keep: Theology, Mortality, and the Shape of a Human Life. Here is what he describes as his “central argument”:

To have a body and deploy it is bound up with the fact that we are born and we die within a short span of years. And this being born and dying is itself — in all its biology of connection, memory, and hope — a mirror of and vehicle for the truth of God’s life as our creator.

The first sentence there seems clear enough: we know our bodies only as dying bodies. That doesn’t seem like a controversial point, but assuming I have read it correctly, I move on to the next sentence — and immediately run aground. “Being born and dying” has, or is accompanied by, a “biology of connection,” but I have absolutely no idea what might be meant by “biology of connection.” I am not even able to hazard a serious guess: maybe something like, we are biologically wired to be connected to … each other? Or maybe to the rest of the created order, in that we eat other living things? And all this confusion comes before we get to the idea of a biology of memory and hope, which I find even more inscrutable.

But then it gets even tougher. Because this being born and dying, with its accompanying biologies, is a “mirror” of … it would be difficult enough if the rest of the sentence were “God’s life as our creator,” but the phrase “the truth of” comes first, so I am once more wholly at sea. Let’s try to unpack this. God has a life “as our creator,” which I assume must mean something like the life God experiences in relation to Creation, as opposed to the internal life of the Trinitarian godhead. The “truth of” this life is distinguished, I suppose, from false ideas about it? It is, then, the character of that life truly perceived? So that if we perceive the life of God-as-creator truly we will then see that it is a mirror of our lives? — but if so, is it a mirror in the sense of being its opposite, its reversal? And then the brevity of our lives is the “vehicle” by which we perceive the eternal life of our God as creator? Probably not, because God is eternal in himself, not just as our creator … but I’m out of guesses. I cannot make any sense out of this passage, or indeed out of Radner’s writing as a whole.

One might say that all this becomes clearer if you read the whole book. But I have read the whole book — my eyes have passed over every word, I have scribbled thoughts and queries in the margins — and I am no better off.

At the end of the book Radner comments that “the argument of this book has been that thinking about who we are as created human beings comes down to numbering our days,” and while the phrases “numbering our days” and “day-numbering” occur frequently in the book, I’m afraid I don’t know what they mean either. It sometimes seems to me that the whole book does not say anything more or other than what a priest whispers to me each Ash Wednesday, as he inscribes an ashy cross on my forehead: Remember that thou art dust, and to dust thou shalt return. But there must be more to this book than that. Can anyone help me understand?

against tweetstorms

A few weeks ago I took to Twitter to unleash a tweetstorm against tweetstorms. (I was in an ironic mood. Also, if you’re wondering what a tweetstorm is, you can see a few by Marc Andreessen, thought by some to be the originator if not the master of the form, here.) Now I want to make that argument more properly. Hang on tight, we’re getting into the Wayback Machine for one of my geekiest posts ever!

One of the most distinctive characteristics of biblical Hebrew is parataxis, which connects clauses almost wholly by coordinating conjunctions — “and” and its cognates. Without getting too technical here, I want to acknowledge that there is disagreement among Hebrew scholars today about whether the Hebrew word waw should always be translated as “and”: some believe that it has different shades of meaning in different contexts, shades that translators should strive to bring out. But in the King James translation, waw is always rendered as “and,” which gives to biblical storytelling a very distinctive rhythm, and also contributes to what Erich Auerbach famously called its “reticence.”

A classic example is the Akedah, the story of the binding of Isaac:

And Abraham took the wood of the burnt offering, and laid it upon Isaac his son; and he took the fire in his hand, and a knife; and they went both of them together. And Isaac spake unto Abraham his father, and said, My father: and he said, Here am I, my son. And he said, Behold the fire and the wood: but where is the lamb for a burnt offering? And Abraham said, My son, God will provide himself a lamb for a burnt offering: so they went both of them together. And they came to the place which God had told him of; and Abraham built an altar there, and laid the wood in order, and bound Isaac his son, and laid him on the altar upon the wood. And Abraham stretched forth his hand, and took the knife to slay his son. And the angel of the Lord called unto him out of heaven, and said, Abraham, Abraham: and he said, Here am I. And he said, Lay not thine hand upon the lad, neither do thou any thing unto him: for now I know that thou fearest God, seeing thou hast not withheld thy son, thine only son from me. And Abraham lifted up his eyes, and looked, and behold behind him a ram caught in a thicket by his horns: and Abraham went and took the ram, and offered him up for a burnt offering in the stead of his son. And Abraham called the name of that place Jehovahjireh: as it is said to this day, In the mount of the Lord it shall be seen.

As Kierkegaard famously showed in Fear and Trembling, the story fairly cries out for elucidation: What was Abraham thinking? What did he feel? But all we get is this unembellished, uninflected set of steps: And … And … And …

Parataxis is perfectly suited to the chief genres of the Hebrew Bible — narrative, law, poetry, prophecy — or, maybe better, the genres of the Hebrew Bible are what they are because of the paratactic tendencies of the Hebrew language? Hard to say. In any case, in the New Testament, as long as the genres are carried over from the Hebrew Bible, the parataxis is there also, even though now in Greek rather than Hebrew:

When he was come down from the mountain, great multitudes followed him. And, behold, there came a leper and worshipped him, saying, Lord, if thou wilt, thou canst make me clean. And Jesus put forth his hand, and touched him, saying, I will; be thou clean. And immediately his leprosy was cleansed. And Jesus saith unto him, See thou tell no man; but go thy way, shew thyself to the priest, and offer the gift that Moses commanded, for a testimony unto them. And when Jesus was entered into Capernaum, there came unto him a centurion, beseeching him, And saying, Lord, my servant lieth at home sick of the palsy, grievously tormented. And Jesus saith unto him, I will come and heal him.

It’s when we get to the letters of Paul that we begin to suspect that God knew what he was doing in bringing the Christian Gospel to the world at a moment and in a place where the lingua franca was Greek. For Greek lends itself to complexities of conjunction and disjunction, all manner of relations between clause and clause, idea and idea. (Sometimes Paul gets himself tangled in those complexities: try reading Ephesians 1, for instance, in any translation, and see if you can diagram those sentences.) If instead of narrating or legislating or poetizing or prophesying you need to be engaged in dialectical exposition and argumentation, Greek is the language you want. Greek gives you parataxis if you need it, but syntaxis also. And the more complex your argument is, the more you need that syntaxis.

Hey, wasn’t this supposed to be a post about Twitter and tweetstorms? Yes. My point is: Twitter enforces parataxis. I don’t mean that you absolutely can’t make an argument on Twitter, only that everything about the platform militates against it, and very few people have the commitment or the resourcefulness to push back. So a typical tweetstorm, even when it’s trying to make a case for something, even when it needs to be an argument and its author wants it to be an argument, isn’t an argument: it’s a series of disconnected assertions, effectively no more than And … And … And … I think this is enforced not primarily by the 140-character limit itself, but more by the tweeter’s awareness that each tweet will be read individually, and retweeted individually, losing any context. So the tweeter tries to make each tweet as self-contained as possible, forgoing syntactic relations and complications.

Moreover, even a lengthy tweetstorm, by tweetstorm standards, isn’t long enough to develop an argument properly. (You’d need to use seven or eight tweets just for my previous paragraph, depending on your strategy for connecting the tweets. This whole post? Maybe 50 tweets. Who does 50-tweet storms?)
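The arithmetic in the paragraph above is easy to check. Here is a rough sketch in Python — the function name and the word-boundary splitting strategy are my own illustrative choices, not anyone's official threading algorithm, and it ignores the numerals ("1/", "2/") and links that real tweetstorms spend characters on:

```python
import textwrap

def count_tweets(text, limit=140):
    """Roughly estimate how many tweets a passage needs, splitting
    only on word boundaries so no word is cut in half."""
    return len(textwrap.wrap(text, width=limit))

paragraph = (
    "Moreover, even a lengthy tweetstorm, by tweetstorm standards, "
    "isn't long enough to develop an argument properly."
)
print(count_tweets(paragraph))
```

Run against the paragraphs of this post, an estimate like this lands in the range of several dozen tweets — which is the point: nobody threads fifty of them.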

So what does this atomization of thought remind me of? Biblical proof-texting, that’s what. The founders of Twitter are to our discursive culture what Robert Estienne — the guy who divided the Bible up into verses — is to biblical interpretation. Is it possible, when faced with Paul’s letter to the Ephesians divided into verses, to keep clearly in mind the larger dialectical structure of his exposition? Sure. But it’s very hard, as generations of Christians who think that they can settle an argument by quoting a verse, a verse that might not even be a complete sentence, have demonstrated to us all. Becoming habituated to tweet-sized chunks of thought is damaging to one’s grasp of theology and social issues alike.

All this is why I think people who have interesting and even slightly complicated things to say should get off Twitter and get onto a blog, or Medium, or something — any venue that allows extended prose sequences and therefore full-blown syntaxis. Of course, in other contexts, Twitter — with its enforcement of linguistic and argumentative simplicity, its encouragement of unsequenced and disconnected thoughts — might be just the thing you need. If you want to be President of the United States, for example.

Stay tuned for a follow-up to this post.

no, Microsoft Word really is that bad

My family will tell you that I’ve been difficult to be around for the past few days — grumpy, impatient. And there’s a straightforward reason for that: in order to work on revisions for a forthcoming book, I’ve been using Microsoft Word.

It’s become so commonplace for people to hate Word that a counterintuitive Slate post praising it was long overdue, but even by Slate standards Heather Schwedel has done a poor job. For one thing, she shows just how informed she is about these matters by referring to “the unfamiliar, bizarro-world file format RTF” — a format created by Microsoft. But when she says that her devotion to Word is a function of her being “a copy editor and thus prone to fussy opinions about fonts and formatting and all such things” — it is to laugh. Because if you care about “fonts and formatting and all such things” Word is the worst possible application to deal with.

As Louis Menand wrote some years ago, with the proper emphasis, “Microsoft Word is a terrible program”:

To begin with, the designers of Word apparently believe that the conventional method of endnote numbering is with lowercase Roman numerals—i, ii, iii, etc. When was the last time you read anything that adhered to this style? … To make this into something recognizably human, you need to click your way into the relevant menu (View? Insert? Format?) and change the i, ii, iii, etc., to 1, 2, 3, etc. Even if you wanted to use lowercase Roman numerals somewhere, whenever you typed “i” Word would helpfully turn it into “I” as soon as you pressed the space bar. Similarly, if, God forbid, you ever begin a note or a bibliography entry with the letter “A.,” when you hit Enter, Word automatically types “B.” on the next line. Never, btw (which, unlike “poststructuralism,” is a word in Word spellcheck), ask that androgynous paper clip anything. S/he is just a stooge for management, leading you down more rabbit holes of options for things called Wizards, Macros, Templates, and Cascading Style Sheets. Finally, there is the moment when you realize that your notes are starting to appear in 12-pt. Courier New. Word, it seems, has, at some arbitrary point in the proceedings, decided that although you have been typing happily away in Times New Roman, you really want to be in the default font of the original document. You are confident that you can lick this thing: you painstakingly position your cursor in the Endnotes window (not the text!, where irreparable damage may occur) and click Edit, then the powerful Select All; you drag the arrow to Normal (praying that your finger doesn’t lose contact with the mouse, in which case the window will disappear, and trying not to wonder what the difference between Normal and Clear Formatting might be) and then, in the little window to the right, to Times New Roman. You triumphantly click, and find that you are indeed back in Times New Roman but that all your italics have been removed.

This kind of disaster — and worse — still happens. In the document I’ve been working on recently, I was conversing with my editors in the comments pane about the advisability (or lack thereof) of certain changes, and then at a certain point, without warning, every time I tried to type a comment Word would paste in a paragraph I had recently deleted from another page. I wasn’t choosing to paste — I wasn’t even using any special keys (Command, Control, Option). I was just typing letters of the alphabet. And Word insisted on inserting an entire paragraph every time my fingers hit the keys. I ended up having to write all my comments in my text editor and then paste them into the comment box. I was grateful that Word allowed me to do that.

If you really care about “fonts and formatting and all such things” Word is a nightmare, because in such matters its consistent practice is to do what it thinks you probably want to do, or what it thinks you should do. Contrast that to a program that genuinely cares about formatting, LaTeX, which always does precisely what you tell it to do. Now, this mode of doing business can generate problems of its own, as every user of LaTeX knows, since from time to time you will manage to tell it to do something that you don’t really want it to do. But those problems are always fixable, and over time you learn to avoid them, whereas in Word anything can happen at any time and you will often be completely unable either to figure out what happened or set it right.

In every book that I work on, the worst moment of the entire endeavor occurs when I have to convert my plain-text draft into Word format for my editors. I don’t have to open Word to do that, thanks to pandoc, whose use I explain here; but I know then that I have only a short time before they send me back an edited text which I will have to open in Word. And from that point on there can be no joy in the labor, only misery. Microsoft Word is not just a terrible program. It is a terrible, horrible, no good, very bad program. It is the program than which no worse can be conceived. We hates it, preciousss. We hates it.

Carr on Piper on Jacobs

Here’s Nick Carr commenting on the recent dialogue at the Infernal Machine between me and Andrew Piper:

It’s possible to sketch out an alternative history of the net in which thoughtful reading and commentary play a bigger role. In its original form, the blog, or web log, was more a reader’s medium than a writer’s medium. And one can, without too much work, find deeply considered comment threads spinning out from online writings. But the blog turned into a writer’s medium, and readerly comments remain the exception, as both Jacobs and Piper agree. One of the dreams for the web, expressed through a computer metaphor, was that it would be a “read-write” medium rather than a “read-only” medium. In reality, the web is more of a write-only medium, with the desire for self-expression largely subsuming the act of reading. So I’m doubtful about Jacobs’s suggestion that the potential of our new textual technologies is being frustrated by our cultural tendencies. The technologies and the culture seem of a piece. We’re not resisting the tools; we’re using them as they were designed to be used.

I’d say that depends on the tools: for instance, this semester I’m having my students write with CommentPress, which I think does a really good job of preserving a read-write environment — maybe even better, in some ways, than material text, though without the powerful force of transcription that Andrew talks about. (That may be irreplaceable — typing the words of others, while in this respect better than copying and pasting them, doesn’t have the same degree of embodiment.)

In my theses I tried to acknowledge both halves of the equation: I talked about the need to choose tools wisely (26, 35), but I also said that without the cultivation of certain key attitudes and virtues (27, 29, 33) choosing the right tools won’t do us much good (36). I don’t think Nick and I — or for that matter Andrew and I — disagree very much on all this.

launch and iterate

I enjoyed this brief interview with Rob Horning of The New Inquiry, and was particularly taken with this passage:

What do you think is good about the way we interact with information today? How has your internet consumption changed your brain, and writing, for the better?

I can only speak for myself, but I find that the Internet has made me far more productive than I was before as a reader and a writer. It seems to offer an alternative to academic protocols for making “knowledge.” But I was never very systematic before about my “research” and am even less so now; only now this doesn’t seem like such a drawback. Working in fragments and unfolding ideas over time in fragments seems a more viable way of proceeding. I’ve grown incapable of researching as preparation for some writing project — I post everything, write immediately as a way to digest what I am reading, make spontaneous arguments and connections from what is at hand. Then if I feel encouraged, I go back and try to synthesize some of this material later. That seems a very Internet-inspired approach.

Let me pause to note that I am fundamentally against productivity and then move on to the more important point, which is that online life has changed my ways of working along the lines that Horning describes — and I like it.

There’s a mantra among some software developers, most notably at Google: Launch and iterate. Get your app out there even with bugs, let your customers report and complain about those bugs, apologize profusely, fix, release a new version. Then do it again, and again. (Some developers hate this model, but never mind.) Over the past few years I’ve been doing something like this with my major projects: throw an idea out there in incomplete or even inchoate form and see what responses it gets; learn from your respondents’ skepticism or criticism; follow the links people give you; go back to the idea and, armed with this feedback, make it better.

Of course, writers have always done something like this: for example, going to the local pub and making people listen to you pontificate on some crack-brained intellectual scheme and then tell you that you’re full of it. And I’ve used that method too, which has certain advantages … but: it’s easy to forget what people say, you have a narrow range of responses, and it can’t happen very often or according to immediate need. The best venue I’ve found to support the launch-and-iterate model of the intellectual life: Twitter.

the confidence of the elect

Right after I wrote my last post I came across an interestingly related one by Tim Parks:

No one is treated with more patronizing condescension than the unpublished author or, in general, the would-be artist. At best he is commiserated. At worst mocked. He has presumed to rise above others and failed. I still recall a conversation around my father’s deathbed when the visiting doctor asked him what his three children were doing. When he arrived at the last and said young Timothy was writing a novel and wanted to become a writer, the good lady, unaware that I was entering the room, told my father not to worry, I would soon change my mind and find something sensible to do. Many years later, the same woman shook my hand with genuine respect and congratulated me on my career. She had not read my books.

Why do we have this uncritical reverence for the published writer? Why does the simple fact of publication suddenly make a person, hitherto almost derided, now a proper object of our admiration, a repository of special and important knowledge about the human condition? And more interestingly, what effect does this shift from derision to reverence have on the author and his work, and on literary fiction in general?

But Parks’s key point is not that people generally change their attitudes towards a writer once he or she gets published — the writer changes too:

I have often been astonished how rapidly and ruthlessly young novelists, or simply first novelists, will sever themselves from the community of frustrated aspirants. After years fearing oblivion, the published novelist now feels that success was inevitable, that at a very deep level he always knew he was one of the elect (something I remember V.S. Naipaul telling me at great length and with enviable conviction). Within weeks messages will appear on the websites of newly minted authors discouraging aspiring authors from sending their manuscripts. They now live in a different dimension. Time is precious. Another book is required, because there is no point in establishing a reputation if it is not fed and exploited. Sure of their calling now, they buckle down to it. All too soon they will become exactly what the public wants them to be: persons apart, producers of that special thing, literature; artists.

Notice that this is another major contributor to the problem of over-writing and premature expressiveness that I mentioned in my post: the felt need to sustain and consolidate an established reputation.

And then there’s the sense that most successful people have — and, again, need to have — that their success is not only deserved but inevitable. Immediately after reading this essay by Parks I read an interview with Philip Pullman in which he plays to the type that Parks identifies:

Yet on one thing, Pullman’s faith is profound and unshakeable. He’s now in his mid-60s, and though he thinks about death occasionally, it never wakes him up in a sweat at night. ‘I’m quite calm about life, about myself, my fate. Because I knew without doubt I’d be successful at what I was doing.’ I double-take at this, a little astounded, but he’s unwavering. ‘I had no doubt at all. I thought to myself, my talent is so great. There’s no choice but to reward it. If you measure your capacities, in a realistic sense, you know what you can do.’

Note the easy elision here between “knowing what you can do” and “knowing you’ll be recognized and rewarded for it.” If talent is so reliably rewarded, then I don’t have to consider the possibility that my neighbor is getting less than he deserves — or that I’m getting more.

These reflections aren’t just about other people. How I think they apply to me is something I want to get to in another post.

the dissenters

You know how I wrote that blog post a while back about how much I hate writing on my iPad? Apparently not everyone feels that way:

Most people only use the iPad’s on-screen keyboard for tapping out emails, tweets or Facebook updates. 

But Patrick Rhone of St. Paul wrote a book that way — with his Apple tablet at a slight incline on a desk or table at a variety of locations, and his index fingers flying across the virtual keys. 

This isn’t Rhone’s only feat of mobile productivity. The technology consultant and prolific blogger customarily composes lengthy blog posts — sometimes nearing 1,000 words each — on his iPhone screen in horizontal orientation. 

“The majority of the blog posts I write these days, I write in landscape, using my iPhone, typing with my thumbs,” he said. “Why? Well, because it’s what I have on hand all the time, and when inspiration hits me, I could be anywhere.”

What a nightmare. But to each his own.

disrupting journalism!

(Nah, not really. Just wanted to try out that language for size.)

But: I was talking with some people on Twitter this morning about my frustrations with what has now become a very familiar set of experiences: the whole merry-go-round of publicity that accompanies the appearance of a book.

Before I go any further, I should note that my adventures on this merry-go-round amount to nothing in comparison with what people-who-make-their-living-by-writing go through. Only once in my career have I written a book that generated perceptible media attention, and doing the publicity for that absolutely exhausted me — which probably accounts for my dyspeptic attitude towards even small bouts of book-promoting exercises today. I can’t even begin to imagine what it must be like to be Neil Gaiman: “I’m currently dealing with how to go back to being a writer. Rather than whatever it is that I am. A traveller, a signer, a promoter, a talker, a lecturer.”

So here’s how it goes: a journalist writes or calls to ask for an interview, and wants to do the interview by phone. If I agree — in violation of my profound dislike of the telephone — then commences the awkward dance of trying to find a time when we can both talk, and, when that’s finally worked out, I am permitted to try to improvise on the spot answers to questions that I have already answered, with considerably greater care, in the book itself. Then I just have to hope — though the years have almost cured me of hoping — that the journalist transcribes what I say accurately and in its proper context. And, for dessert, I get to be annoyed by the way I put things and wish I could go back and express myself more clearly.

(By the way, no belief is more sacrosanct among journalists than the belief that it would be profoundly unethical to let me rewrite my comment about, say, nineteenth-century controversies over the Ornaments Rubric — even though I’ve yet to find anyone who can explain to me why that would be so. They always invoke politicians and political controversy, without explaining why the same rules should apply to interviewing politicians and interviewing scholars or other writers.)

Perhaps you can tell that I’m not thrilled about this way of doing things? So my common practice now is to decline phone interviews and ask to do things by email instead. Sometimes I am told that this is not permissible, in which case, Oh well. (When I’ve been given a reason, that reason has always been “because in email you don’t get the give-and-take,” which always makes me wonder whether there are email clients without Reply buttons.) But when people agree, then I sit down to answer the questions and realize, wait a minute, I’m writing the article! I’m going to do all the work and they’re going to get the byline and the paycheck! Well, it was my choice, after all….

I’m supposed to be willing to do all this because it gets my book “exposure,” it has “publicity value,” and I suppose that once may have been true, but I wonder to what extent it now is? Certainly publishers believe in it, and promote the model; but I have my doubts that a model formed by a kind of handshake agreement among publishers (who want to get the word out about their books) and journalists (who need ever-new “content”) is all that it needs to be when we all have the internet and its social media at our fingertips.

I’m just wondering — genuinely wondering — whether there might be models of doing … this kind of thing … don’t know what to call it … that might be more flexible and generous and less taxing to everyone concerned. Especially, of course, The Author, but I’ve been on both sides of this fence: I have interviewed people for articles — almost always by email, though once I bought lunch for a well-known musician for an Oxford American piece that never saw the light of day — and I’ve written for dailies, weeklies, bimonthlies, monthlies, quarterlies, the whole show, so I know those challenges as well. There’s drudgery for journalists in the usual way of doing business, and maybe it could be made more fun for them as well.

Even small adjustments could help: Alex Massie suggested to me the value of IM interviews, and that made me remember the few times I’ve done those — I really enjoyed them. They have the spontaneity of conversation but also allow you to take a moment to get your thought into shape before committing to the Enter key. In another exchange that happened almost simultaneously — I like that about Twitter — Erin Kissane emphasized just this value of conversation, and I suppose that’s one reason why I have always enjoyed talking with Ken Myers for his wonderful Mars Hill Audio Journal: the dialogue gradually and naturally unfolds, and while Ken always edits with care and skill to make me sound smarter than I am, he never eliminates that conversational tone. If doing publicity were always like that….

Anyway, I’d love to hear some good — disruptive! innovative! — ideas in the comments, especially from journalists. And thanks to those of you who, over the years, have helped to put my ideas before the public.

And by the way: if you don’t subscribe to the Mars Hill Audio Journal, you should consider it. It’s great.

why writing on the iPad remains a lousy experience

Go to a search engine and type in the words “iPad consumption creation.” You’ll be introduced to a debate that has been going on since the first iPad appeared in 2010: is the iPad — and by extension are tablets more generally — built just for consuming media, or is it a device one can make on as well?

If we’re going to get serious about this, we need to ask, “Creation of what?” Maybe tablets are better for some kinds of things than others. Not long after the iPad came out videos like this one started showing up on YouTube to demonstrate how you can make real music — well, sort of real — with GarageBand; and the talented folks at 53 have created a tumblelog to showcase the artwork people have made with their justly-celebrated app Paper.

But what about writing? Well, there are advocates for the iPad as a writing environment, most eloquent of them being Federico Viticci, who makes a great case for using the iPad with the writing app Editorial. And I too think Editorial is a genuinely innovative, brilliantly designed app that offers the best writing experience you can get on a tablet.

However: I hate writing on my iPad. Why? Let me count the ways.

First, and most fundamentally, some of the most basic and frequently-used text-manipulation actions remain very difficult to perform on iOS — indeed, have not discernibly improved since the first iPhone appeared in 2007. Trying to select just the text I need to select is often enough to make the sweat break out on my brow: No, I wanted ALL of that word, not just part of it — oh crap, the damned thing has decided that I want the whole paragraph! I just want the last four sentences! But it won’t let me choose the last four sentences! Okay, well, I’ll have to use the delete key to X out the unwanted stuff — once I get it pasted. So let me try to get my finger in the exact place where I need — no, damn it, not there! My finger must have slipped at the last instant! Okay, where’s the undo? How do I undo that? CRAP.

It’s like that all the time.

But I can already hear you saying, “Oh, you foolish boy, why aren’t you using a physical keyboard?” Yeah, well, I do use a physical keyboard, but the keyboard shortcuts and arrow keys that are so fundamental to my text-manipulations on a laptop or desktop computer work inconsistently or not at all on the iPad, so it’s still not possible to avoid altogether the finger-accuracy issues I describe above. But a keyboard helps in some ways, for sure. Now, what kind of keyboard should I get?

I’ve used one of these keyboard/cover hybrids. The good: highly portable. The bad: somewhat flimsy, and difficult to balance on my lap, which is a problem if I want to continue my long-standing practice of writing while seated in an easy chair. (Basically, I need to add a lap-desk to make it work smoothly.) And then the keyboard is smaller than standard, which leads to a lot of mistyping. All in all, a pretty frustrating experience.

So let’s try Apple’s Bluetooth keyboard — a lovely piece of engineering, I must admit, and a pleasure to type on … once I find a way to stand up my iPad so I can see what I’m typing, that is. So I can buy a stand — but then easy-chair typing is seriously compromised, unless I get something like this workstation, which gives me a somewhat shaky platform to type on and creates a situation in which I am regularly assembling and disassembling my typing environment — in which case the portability of the iPad, one of its key features, is significantly diminished.

If this post weren’t too long already, I’d go on another rant about the severe limits of iOS application switching — but you get the point. I’m typing this post on my MacBook Air, and it’s a real pleasure. It’s lightweight and fits in my lap nicely. It was trivially easy for me to insert all those links into this post, and it’ll also be trivially easy for me to upload what I’ve written to Blogger. When I made mistakes in typing it was simple to correct them. Unless I were compelled by economic or other necessity to use an iPad to write, why would I ever do so?

writing big

The bigger your writing project, the less likely it is that you’ll find a writing environment that’s adequate to your needs. When you’re writing a book, you need to find some way to juggle research, ideas, notes, drafts, outlines … which is hard to do.

As far as I know — I’d be happy to be corrected — the only product on the market that even tries to do all this in a single app is Scrivener, which many writers I know absolutely swear by. Me? I hate it. I freely acknowledge the irrationality of this hatred, but so it goes. I can objectively approve of the quality of an app and yet be frustrated by using it. I have the same visceral dislike of Evernote, though in that case sheer ugliness is the chief problem. But both Scrivener and Evernote are created by people who follow the more-features-the-better philosophy, and that’s one I am congenitally uncomfortable with. (The user manual for Scrivener is over 500 pages long.)

A few years ago I thought my answer for big projects might be Ulysses 2. I couldn’t put PDFs in it, but I didn’t mind that because I like to annotate PDFs and you need a separate app to do that properly; and in other respects it had a lot going for it. I could write in plain text with Markdown, and could always have visible onscreen notes, or an outline, for the chapter I was working on and even, in a small pane on the left, the text of another chapter. Also, a Ulysses document was basically a package containing text and RTF files with some metadata — easy to unpack and open in other apps if necessary.

I liked Ulysses, but it tended to be unstable and some of its behavior was inconsistent (especially in exporting documents for printing or sending to others). I was pleased to learn that the makers were working on an updated version — but surprised when Ulysses III came out and proved to be a completely new application. And after I tried it out, surprise gave way to disappointment: essentially, it seems to me, it’s now an ordinary document-based text editor — an attractive one, to be sure, but not at all suited to the creation and management of major projects. As far as I can tell, you can replicate all the features of Ulysses III, except for its appearance, for free with TextWrangler and pandoc.
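To give a rough idea of what I mean by that last claim, here’s a minimal sketch of the plain-text-plus-pandoc workflow — the filenames are purely illustrative, and it assumes you have pandoc installed:

```shell
# Each chapter lives in its own plain Markdown file (names illustrative).
printf '# Chapter One\n\nSome text.\n' > chapter-01.md
printf '# Chapter Two\n\nMore text.\n' > chapter-02.md

# Assemble the chapters, in order, into a single draft...
cat chapter-*.md > draft.md

# ...then let pandoc handle export: RTF here, but PDF, DOCX, and many
# other formats work the same way (skipped if pandoc isn't installed).
command -v pandoc >/dev/null && pandoc draft.md -o draft.rtf
```

TextWrangler (or any plain-text editor) supplies the editing; pandoc supplies the export. That covers the Markdown writing and export options, though admittedly not the iCloud sync or the good looks.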

I use phrases like “it seems to me” and “as far as I can tell” because Ulysses III is getting some good press: see here and here and here and here. But these tend to focus on how the app looks, how well it syncs with iCloud, and its export options — not its status as an environment for organizing your writing, especially a project of any size. Ulysses III seems to me a nice app if you’re writing blog posts, but if you’re working on something big, it’s a significant step backwards from previous versions of the app.