growth and form

D’Arcy Wentworth Thompson (1860–1948)

In my previous post I explored some of the biological contexts of the idea of morphosis, form-changing, in Pynchon’s work. But I also hinted at the moral, the theological, and the literary-imaginative uses of the immensely rich concept of form. In light of all this it’s worth noting that by general consent the most remarkable endeavor in the history of biological morphology is D’Arcy Wentworth Thompson’s massive and magisterial On Growth and Form — over 1100 pages in its second edition of 1942.

Pretty much everything about Thompson is fascinating, but I’d like to call particular attention to the fact that he was a classicist as well as a biologist and mathematician. Legend has it that at the University of St. Andrews he was offered his choice of professorships in classics, mathematics, or zoology (his very versatility, and the unpredictable views it spawned, meant that he was never hired at Oxford or Cambridge, despite applying several times for jobs at those universities).

He became a hero and model to, among other scholars, Stephen Jay Gould, who in 1971 published a wonderful essay about Thompson — and published it in New Literary History, later to become the leading journal of literary theory. In that essay, a revised version of Gould’s senior undergraduate thesis at Antioch College, Gould comments that

D’Arcy Thompson’s mathematics has a curious ring. We find none of the differential equations and mathematical statistics that adorn modern work in ecology or population genetics; we read, instead, of the partitioning of space, the tetrakaidekahedron, the Maraldi angle, the logarithmic spiral and the golden ratio. Numbers rarely enter equations; rather, they exemplify geometry. For D’Arcy Thompson was a Greek mathematician with 20th century material and insights. Growth and Form is the synthesis of his two lives: eminent classicist and eminent zoologist. As he stated in a Presidential Address to the Classical Association (1929): “Science and the Classics is my theme today; it could hardly be otherwise. For all I know, and do, and well nigh all I love and care for (outside of home and friends) lies within one or the other; and the fact that I have loved them both has colored all my life, and enlarged my curiosity and multiplied my inlets to happiness.”

(“Multiplied my inlets to happiness” — what a delightful phrase.) The geometrical character of Thompson’s biological mathematics keeps him close to the sensually accessible character of actual creatures: he uses geometry to describe things we can actually see. And this positions his work within the same ambit as literature and ordinary language, something he was quite aware of. Gould’s essay takes as its epigraph an important sentence from the latter pages of On Growth and Form: “Our own study of organic form, which we call by Goethe’s name of Morphology, is but a portion of that wider Science of Form which deals with the forms assumed by matter under all aspects and conditions, and, in a still wider sense, with forms which are theoretically imaginable” (emphasis mine).
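A gloss of my own (not Gould’s or Thompson’s wording) may make the point concrete. The logarithmic spiral in Gould’s list, which Thompson prefers to call the equiangular spiral and to which he devotes a famous chapter, is the curve

$$ r(\theta) = a\,e^{\theta \cot \alpha} $$

where α is the constant angle between the radius and the tangent at every point. Because that one angle never changes, a shell built on this plan, like the Nautilus, can grow indefinitely without altering its shape: each new whorl is simply a scaled copy of the last. No differential equations, no statistics; just a measurable property of a form you can hold in your hand.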

This notion of a “wider Science of Form” was immensely attractive to Gould. In The Structure of Evolutionary Theory, his attempt, published just weeks before his death in 2002, to write his own magnum opus along the lines of On Growth and Form, Gould makes an interesting comment on the sources of his mature thinking about evolution:

I read the great European structuralist literatures in writing my book on Ontogeny and Phylogeny. I don’t see how anyone could read, from Goethe and Geoffroy down through Severtzov, Remane and Riedl, without developing some appreciation for the plausibility, or at least for the sheer intellectual power, of morphological explanations outside the domain of Darwinian functionalism — although my resulting book, for the last time in my career, stuck closely to selectionist orthodoxy, while describing these alternatives in an accurate and sympathetic manner.

That “selectionist orthodoxy,” which he would later call “Darwinian fundamentalism,” became for him the chief enemy of a truly universal science of form, the kind of thing that Thompson had imagined, an account that could potentially be equally useful in illuminating the structure of crystals, the petal arrangements of roses, or the shape of a novel’s plot.

I don’t yet know, but I have a suspicion that meditation on these themes will be useful to me as I try to come to grips with Thomas Pynchon’s body of work. And I have this sinking feeling that at some point I’m going to have to reckon with Goethe’s role in this history….

the giant in the library

The technological history of modernity, as I conceive of it, is a story to be told in light of a theological anthropology. As what we now call modernity was emerging, in the sixteenth century, this connection was widely understood. Consider for instance the great letter that Rabelais’ giant Gargantua writes to his son Pantagruel when the latter is studying at the University of Paris. Gargantua first wants to impress upon his son how quickly and dramatically the human world, especially the world of learning, has changed:

And even though Grandgousier, my late father of grateful memory, devoted all his zeal towards having me progress towards every perfection and polite learning, and even though my toil and study did correspond very closely to his desire – indeed surpassed them – nevertheless, as you can well understand, those times were neither so opportune nor convenient for learning as they now are, and I never had an abundance of such tutors as you have. The times were still dark, redolent of the disaster and calamity of the Goths, who had brought all sound learning to destruction; but, by the goodness of God, light and dignity have been restored to literature during my lifetime: and I can see such an improvement that I would hardly be classed nowadays among the first form of little grammar-schoolboys, I who (not wrongly) was reputed the most learned of my century as a young man.

(I’m using the Penguin translation by M. A. Screech, not the old one I linked to above.) And this change is the product, in large part, of technology:

Now all disciplines have been brought back; languages have been restored: Greek – without which it is a disgrace that any man should call himself a scholar – Hebrew, Chaldaean, Latin; elegant and accurate books are now in use, printing having been invented in my lifetime through divine inspiration just as artillery, on the contrary, was invented through the prompting of the devil. The whole world is now full of erudite persons, full of very learned teachers and of the most ample libraries, such indeed that I hold that it was not as easy to study in the days of Plato, Cicero nor Papinian as it is now.

Note that technologies come to human beings as gifts (from God) and curses (from the Devil); it requires considerable discernment to tell the one from the other. The result is that human beings have had their powers augmented and extended in unprecedented ways, which is why, I think, Rabelais makes his characters giants: enormously powerful beings who lack full control over their powers and therefore stumble and trample through the world, with comical but also sometimes worrisome consequences.

But note how Gargantua draws his letter to a conclusion:

But since, according to Solomon, ‘Wisdom will not enter a soul which [deviseth] evil,’ and since ‘Science without conscience is but the ruination of the soul,’ you should serve, love and fear God, fixing all your thoughts and hopes in Him, and, by faith informed with charity, live conjoined to Him in such a way as never to be cut off from Him by sin. Beware of this world’s deceits. Give not your mind unto vanity, for this is a transitory life, but the word of God endureth for ever. Be of service to your neighbours and love them as yourself. Venerate your teachers. Flee the company of those whom you do not wish to resemble; and the gifts of grace which God has bestowed upon you receive you not in vain. Then once you know that you have acquired all there is to learn over there, come back to me so that I may see you and give you my blessing before I die.

The “science without conscience” line is probably a Latin adage playing on scientia and conscientia: as Peter Harrison explains, in the late medieval world Rabelais was educated in, scientia is primarily an intellectual virtue, the disciplined pursuit of systematic knowledge. The point of the adage, then, is that even that intellectual virtue can serve vice and “ruin the soul” if it is not governed by the greater virtues of faith, hope, and love. (Note also how the story of Prospero in The Tempest fits this template. The whole complex Renaissance discourse, and practice, of magic is all about these very matters.)

So I want to note three intersecting notions here: first, the dramatic augmentation, in the early-modern period, of human power by technology; second, the necessity of understanding the full potential of those new technologies both for good and for evil within the framework of a sound theological anthropology, an anthropology that parses the various interactions of intellect and will; and third, the unique ability of narrative art to embody and illustrate the coming together of technology and theological anthropology. These are the three key elements of the technological history of modernity, as I conceive it and hope (eventually) to narrate it.

The way that narrative art pursues the interrelation of technology and the human is a pretty major theme of mine: see, for instance, here and here and here. (Note how that last piece connects to Rabelais.) It will be an even bigger theme in the future. Stay tuned for further developments — though probably not right away. I have books to finish….

thoughts on the processing of words

This review was commissioned by John Wilson and meant for Books and Culture. Alas, it will not be published there.

“Each of us remembers our own first time,” Matthew Kirschenbaum writes near the beginning of his literary history of word processing — but he rightly adds, “at least … those of us of a certain age.” If, like me, you grew up writing things by hand and then at some point acquired a typewriter, then yes, your first writing on a computer may well have felt like a pretty big deal.

The heart of the matter was mistakes. When typing on a typewriter, you made mistakes, and then had to decide what, if anything, to do about them; and woe be unto you if you didn’t notice a mistyped word until after you had removed the sheet of paper from the machine. If you caught it immediately after typing, or even a few lines later, then you could roll the platen back to the proper spot and use correcting material — Wite-Out and Liquid Paper were the two dominant brands, though fancy typewriters had their own built-in correction tape — to cover the offending marks and replace them with the right ones. But if you had already removed the paper, then you had to re-insert it and try, by making minute adjustments with the roller or the paper itself, to get everything set just so — but perfect success was rare. You’d often end up with the new letters or words slightly out of alignment with the rest of the page. Sometimes the results would look so bad that you’d rip the paper out of the machine in frustration and type the whole page again, but by that time you’d be tired and more likely to make further mistakes….

Moreover, if you were writing under any kind of time pressure — and I primarily used a typewriter to compose my research papers in college and graduate school, so time pressure was the norm — you were faced with a different sort of problem. Scanning a page for correctable mistakes, you were also likely to notice that you had phrased a point awkwardly, or left out an important piece of information. What to do? Fix it, or let it be? Often the answer depended on where in the paper the deficiencies appeared, because if they were to be found on, say, the second page of the paper, then any additions would force the retyping not only of that page but of every subsequent page — something not even to be contemplated when you were doing your final bleary-eyed 2 AM inspection of a paper that had to be turned in when you walked into your 9 AM class. You’d look at your lamentably imprecise or incomplete or just plain fuddled work and think, Ah, forget it. Good enough for government work — and fall into bed and turn out the light.

The advent of “word processing” — what an odd phrase — electronic writing, writing on a computer, whatever you call it, meant a sudden and complete end to these endless deliberations and tests of your fine motor skills. You could change anything! anywhere! right up to the point of printing the thing out — and if you had the financial wherewithal or institutional permissions that allowed you to ignore the cost of paper and ink, you could even print out a document, edit it, and then print it out again. A brave new world indeed. Thus, as the novelist Anne Rice once commented, when you’re using a word processor “There’s really no excuse for not writing the perfect book.”

But there’s the rub, isn’t there? For some few writers the advent of word processing was a pure blessing: Stanley Elkin, for instance, whose multiple sclerosis made it impossible for him to hold a pen properly or press a typewriter’s keys with sufficient force, said that the arrival of his first word-processing machine was “the most important day of my literary life.” But for most professional writers — and let’s remember that Track Changes is a literary history of word processing, not meant to cover the full range of its cultural significance — the blessing was mixed. As Rice says, now that endless revision is available to you, as a writer you have no excuse for failing to produce “the perfect book” — or rather, no excuse save the limitations of your own talent.

As a result, the many writers’ comments on word processors that Kirschenbaum cites here tend to be curiously ambivalent: it’s often difficult to tell whether they’re praising or damning the machines. So the poet Louis Simpson says that writing on a word processor “tells you your writing is not final,” which sounds like a good thing, but then he continues: “It enables you to think you are writing when you are not, when you are only making notes or the outline of a poem you may write at a later time.” Which sounds … not so good? It’s hard to tell, though if you look at Simpson’s whole essay, which appeared in the New York Times Book Review in 1988, you’ll see that he meant to warn writers against using those dangerous machines. (Simpson’s article received a quick and sharp rebuttal from William F. Buckley, Jr., an early user of and advocate for word processors.)

Similarly, the philosopher Jacques Derrida, whom Kirschenbaum quotes on the same page:

Previously, after a certain number of versions, everything came to a halt — that was enough. Not that you thought the text was perfect, but after a certain period of metamorphosis the process was interrupted. With the computer, everything is rapid and so easy; you get to thinking you can go on revising forever.

Yes, “you get to thinking” that — but it’s not true, is it? At a certain point revision is arrested by publishers’ deadlines or by the ultimate deadline, death itself. The prospect of indefinite revision is illusory.

But however ambivalent writers might be about the powers of the word processor, they are almost unanimous in insisting that they take full advantage of those powers. As Hannah Sullivan writes in her book The Work of Revision, which I reviewed in these pages, John Milton, centuries ago, claimed that his “celestial patroness … inspires easy my unpremeditated verse,” but writers today will tell you how much they revise until you’re sick of hearing about it. This habit predates the invention of the word processor, but has since become universal. Writers today do not aspire, as Italian Renaissance courtiers did, to the virtue called sprezzatura: a cultivated nonchalance, doing the enormously difficult as though it were easy as pie. Just the opposite: they want us to understand that their technological equipment does not make their work easier but far, far harder. And in many ways it does.

Matthew Kirschenbaum worked on Track Changes for quite some time: pieces of the book started appearing in print, or in public pixels, at least five years ago. Some of the key stories in the book have therefore been circulating in public, and the most widely-discussed of them have focused on a single question: What was the first book to be written on a word processor? This turns out to be a very difficult question to answer, not least because of the ambiguities inherent in the terms “written” and “word processor.” For instance, when John Hersey was working on his novel My Petition for More Space, he wrote a complete draft by hand and then edited it on a mainframe computer at Yale University (where he then taught). Unless I have missed something, Kirschenbaum does not say how the handwritten text got into digital form, but I assume someone entered the data for Hersey, who wanted to do things this way largely because he was interested in his book’s typesetting and the program called the Yale Editor or just E gave him some control over that process. So in a strict sense Hersey did not write the book on the machine; nor was the machine a “word processor” as such.

But in any case, Hersey, who used the Yale Editor in 1973, wouldn’t have beaten Kirschenbaum’s candidate for First Word-Processed Literary Book: Len Deighton’s Bomber, a World War II thriller published in 1970. Deighton, an English novelist who had already published several very successful thrillers, most famously The IPCRESS File in 1962, had the wherewithal to drop $10,000 — well over $50,000 in today’s money — on IBM’s Frankensteinian hybrid of a Selectric typewriter and a tape-based computing machine, the MT/ST. IBM had designed this machine for heavy office use, never imagining that any individual writer would purchase one, so minimizing the size hadn’t been a focus of the design: as a result, Deighton could only get the thing into his flat by having a window removed, which allowed it to be swung into his study by a crane.

Moreover, Deighton rarely typed on the machine himself: that task was left to his secretary, Ellenor Handley, who also took care to print sections of the book told from different points of view on appropriately color-coded paper. (This enabled Deighton to see almost at a glance whether some perspectives were over-represented in his story.) So even if Bomber is indeed the first word-processed book, the unique circumstances of its composition set it well apart from what we now think of as the digital writing life. Therefore, Kirschenbaum also wonders “who was the first author to sit down in front of a digital computer’s keyboard and compose a published work of fiction or poetry directly on the screen.”

Quite possibly it was Jerry Pournelle, or maybe it was David Gerrold or even Michael Crichton or Richard Condon; or someone else entirely whom I have overlooked. It probably happened in the year 1977 or 1978 at the latest, and it was almost certainly a popular (as opposed to highbrow) author.

After he completed Track Changes, Kirschenbaum learned that Gay Courter’s 1981 bestselling novel The Midwife was written completely on an IBM System 6 word processor that she bought when it first appeared on the market in 1977 — thus confirming his suspicion that mass-market authors were quicker to embrace this technology than self-consciously “literary” ones, and reminding us of what he says repeatedly in the book: that his account is a kind of first report from a field that we’ll continue to learn more about.

In any case, the who-was-first questions are not as interesting or as valuable as Kirschenbaum’s meticulous record of how various writers — Anne Rice, Stephen King, John Updike, David Foster Wallace — made, or did not quite make, the transition from handwritten or typewritten drafts to a full reliance on the personal computer as the site for literary writing. Wallace, for instance, always wrote in longhand and transcribed his drafts to the computer at some relatively late stage in the process. Also, when he had significantly altered a passage, he deleted earlier versions from his hard drive so he would not be tempted to revert to them.

The encounters of writers with their machines are enormously various and fun to read about. Kirschenbaum quotes a funny passage in which Jonathan Franzen described how his first word processor kept making distracting sounds that he could only silence by wedging a pencil in the guts of the machine. Franzen elsewhere describes using a laptop with no wireless access whose Ethernet port he glued shut so he could not get online — a temptation intrinsic not to electronic writing as such but to internet-capable machines, and one that George R. R. Martin avoids by writing on a computer that can’t connect to the internet, using the venerable word-processing program WordStar. Similarly, my friend Edward Mendelson continues to insist that WordPerfect for MS-DOS is the best word-processing program, and John McPhee writes using a computer program that a computer-scientist friend coded for him back in 1984. (I don’t use a word-processing program at all, but rather a programmer’s text editor.) If it ain’t broke, don’t fix it. And if it is broke, wedge a pencil in it.

Kirschenbaum believes that this transition to digital writing is “an event of the highest significance in the history of writing.” And yet he confesses, near the end of his book, that he’s not sure what that significance is. “Every impulse that I had to generalize about word processing — that it made books longer, that it made sentences shorter, that it made sentences longer, that it made authors more prolific — was seemingly countered by some equally compelling exemplar suggesting otherwise.” Some reviewers of Track Changes have wondered whether Kirschenbaum isn’t making too big a deal of the whole phenomenon. In the Guardian of London, Brian Dillon wrote, “This review is being drafted with a German fountain pen of 1960s design – but does it matter? Give me this A4 pad, my MacBook Air or a sharp stick and a stretch of wet sand, and I will give you a thousand words a day, no more and likely no different. Writing, it turns out, happens in the head after all.”

Maybe. But we can’t be sure, because we can’t rewind history and make Dillon write the review on his laptop, and then rewind it again, take him to the beach, and hand him a stick. I wrote this review on my laptop, but I sometimes write by speaking, using the Mac OS’s built-in dictation software, and I draft all of my books and long essays by hand, using a Pilot fountain pen and a Leuchtturm notebook. I cannot be certain, but I feel that each environment changes my writing, though probably in relatively subtle ways. For instance, I’m convinced that when I dictate my sentences are longer and employ more commas; and I think my word choice is more precise and less predictable when I am writing by hand, which is why I try to use that older technology whenever I have time. (Because writing by hand is slower, I have time to reconsider word choices before I get them on the page. But then I not only write more slowly, I have to transcribe the text later. If only Books and Culture and my book publishers would accept handwritten work!)

We typically think of the invention of printing as a massively consequential event, but Thomas Hobbes says in Leviathan (1651) that in comparison with the invention of literacy itself printing is perhaps “ingenious” but fundamentally “no great matter.” Which I suppose is true. This reminds us that assessing the importance of any technological change requires comparative judgment. The transition to word processing seemed like a very big deal at the time, because, as Hannah Sullivan puts it, it lowered the cost of revision to nearly zero. No longer did we have to go through the agonies I describe at the outset of this review. But I am now inclined to think that it was not nearly as important as the transition from stand-alone PCs to internet-enabled devices. The machine that holds a writer’s favored word-processing or text-editing application will now, barring interventions along the lines of Jonathan Franzen’s disabled Ethernet port, be connected to the endless stream of opinionating, bloviating, and hate-mongering that flows from our social-media services. And that seems to me an even more consequential change for the writer, or would-be writer, than the digitizing of writing was. Which is why I, as soon as I’ve emailed this review to John Wilson, will be closing this laptop and picking up my notebook and pen.

writing for young people revisited

Jonathan Myerson has standards. Not for him the craven apologies of the Creative Writing Program at the University of Kent, their admission of wrongdoing at having suggested that children’s literature isn’t really literature at all. Myerson hoists his literary flag:

Come on, University of Kent, why the grovelling retreat? Your creative writing website got it right first time. You know perfectly well that when you made a distinction between “great literature” and “mass-market thrillers or children’s fiction”, you were standing up for something. That Keats is different from Dylan, or, in this instance, that Philip Roth does say something rather more challenging than JK Rowling, that Jonathan Franzen does create storylines more ambiguous and questioning than Stephanie Meyer’s. What’s so wrong with that? I’ll go forward carrying the banner even if you won’t.

Like Kent, we at City University take on creative writing MA students specifically to write literary novels – so we are quite ready to define what’s required to write for adults as opposed to children. It isn’t about the quality of the prose: the best children’s books are better structured and written than many adult works. Nor is it about imaginary worlds – among the Lit Gang, for instance, Kazuo Ishiguro, Cormac McCarthy and Michael Chabon have all created plenty of those. It’s simpler than that: a novel written for children omits certain adult-world elements which you would expect to find in a novel aimed squarely at grown-up readers.

The problem here is that Myerson fails to see that self-consciously “adult” novels, while they are indeed open to experiences, and to techniques, that children’s lit doesn’t reckon with, also have blind spots, vast areas of human experience of which they are apparently ignorant. The estimable Adam Roberts covered this just a couple of months ago in an absolutely brilliant blog post that I wrote about here. The elaboration of “ambiguous and questioning … story lines” may be a literary virtue — though perhaps not one that Jonathan Franzen possesses — but it is certainly not the only literary virtue or an indispensable one. The novel that is self-consciously for adults isn’t more comprehensive than the novel that is self-consciously for young people; it just covers different things. And, as Roberts makes clear, it habitually omits some of the most important experiences of life.

I want to believe

Returning to the subject of today’s earlier post: The authors of that study write this in summation:

Statistical findings, said Heuser, made us realize that genres are icebergs: with a visible portion floating above the water, and a much larger part hidden below, and extending to unknown depths. Realizing that these depths exist; that they can be systematically explored; and that they may lead to a multi-dimensional reconceptualization of genre: such, we think, are solid findings of our research.

Nothing this vague counts as “solid findings.” What does it mean to say that a genre is like an iceberg? What are those “parts” that are below the surface? What sorts of actions would count as “exploring those depths”? What would be the difference between “systematically” exploring those depths and doing so non-systematically? What would a “reconceptualization” of genre look like? Would that be different than a mere adjustment in our generic definitions? What would be the difference between a “multi-dimensional reconceptualization of genre” and a unidimensional one?
The rhetoric here is very inflated, but if there is substance to the ideas I cannot see it. I would like to be able to see it. Like Agent Mulder, I want to believe — but these guys aren’t making it easy for me.

doing things with computers

This is the kind of thing I just don’t understand the value or use of:

This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin — which tried to establish whether computer-generated algorithms could “recognize” literary genres. You take David Copperfield, run it through a program without any human input – “unsupervised”, as the expression goes – and … can the program figure out whether it’s a gothic novel or a Bildungsroman? The answer is, fundamentally, Yes: but a Yes with so many complications that make it necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.

So human beings, over a period of centuries, read many, many books and come up with heuristic schemes to classify them — identify various genres, that is to say, “kinds,” kinship groups. Then those human beings specify the features they see as necessary to the various kinds, write complex programs containing instructions for discerning those features, and run those programs on computers . . . to see how well (or badly) computers can replicate what human beings have already done?

I don’t get it. Shouldn’t we be striving to get computers to do things that human beings can’t do, or can’t do as well? The primary value I see in this project is that it could be a conceptually clarifying thing to be forced to specify the features we see as intrinsic to genres. But in that case the existence of programmable computers becomes just a prompt, and one accidental, not essential, to the enterprise of thinking more clearly and precisely.
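For readers who, like me, stand outside this field and wonder what an “unsupervised” pass over a stack of novels even looks like in practice, here is a minimal sketch in Python. It is purely my own illustration, not the Stanford team’s actual pipeline; the folder name, the feature settings, and the choice of three clusters are made-up assumptions, and it presumes the scikit-learn library is installed.

```python
# A toy illustration of "unsupervised" grouping of texts -- my own sketch,
# not the method used in the study under discussion.
from pathlib import Path

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical corpus: a folder of plain-text novels, one per .txt file.
texts = {p.stem: p.read_text(errors="ignore") for p in Path("novels").glob("*.txt")}

# Represent each novel as a vector of word frequencies,
# down-weighting words common to the whole corpus.
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
X = vectorizer.fit_transform(texts.values())

# Sort the novels into three groups with no human labels supplied.
clusters = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(X)

for title, group in zip(texts, clusters):
    print(f"{title}: group {group}")
```

The groups the algorithm returns are just numbers; deciding whether “group 2” corresponds to the gothic novel or the Bildungsroman is still a human judgment made after the fact.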

Why Hope?: Transhumanism and the Arts (Another Response to James Hughes)

In another of the series of posts to which Professor Rubin recently responded, James Hughes argues that transhumanism has been marked by a tension between “fatalistic” beliefs in both technological progress and doom. Hughes’s intention is to establish a middle ground that acknowledges both promise and peril without assuming the inevitability of either. This is a welcome antidote to the willful blindness of libertarian transhumanism.
But conspicuously absent from Prof. Hughes’s post is any account of why techno-fatalism is so prominent among transhumanists — and so any account of why his alternative provides a viable and enduring resolution to the tension between its utopian and dystopian poles.

I would suggest that the prominence of techno-fatalism among transhumanists is closely linked to how they construe progress itself. Consider Max More’s description of progress, which is pretty well representative of the standard transhumanist vision:

Seeking more intelligence, wisdom, and effectiveness, an indefinite lifespan, and the removal of political, cultural, biological, and psychological limits to self-actualization and self-realization. Perpetually overcoming constraints on our progress and possibilities.

What is striking about this and just about any other transhumanist description of progress is that it is defined in almost entirely negative terms, as the shedding of various limits to secure a realm of pure possibility. (Even the initial positive goods seem, in the subsequent quote in Hughes’s post, to be of interest to More primarily as means to avoiding risk on the path to achieving pure possibility.) The essential disagreement Hughes outlines is only over the extent to which technological growth will secure the removal of these limits.
Transhumanists, following their early-modern and Enlightenment predecessors, focus on removing barriers to the individual pursuit of the good, but offer no vision of its content, of what the good is or even why we should want longer lives in which to pursue it — no vision of what we should progress towards other than more progress. Hughes seems to acknowledge this lacuna — witness his call to “rediscover our capacity for vision and hope” and to “stir men’s souls.” But in his post he offers this recently updated Transhumanist Declaration as an example of such “vision and hope,” even though it turns back to the well that left him so thirsty in the first place:

We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.

This, along with much of the rest of the Declaration, reads as a remarkably generic account of the duties of any society — putting the transhumanists decisively back at square one in describing both social and individual good.

For transhumanists — or anyone — to articulate the content of the good would require an embrace of the disciplines devoted to studying precisely that question: the humanities, particularly literature and the arts. Hughes is right when he suggests elsewhere the postmodern character of transhumanist morality. The triumphant postmodernist is a cosmopolitan of narratives and aesthetics, a connoisseur who samples many modes of being free of the binding power of any. Because the postmodernist redefines the good as the goods, he is compelled even more than his predecessors to be a voracious consumer of culture and cultures, particularly of narratives and aesthetics.
The transhumanist vision of progress begins from this postmodern freedom to function in any mode of being. But, seemingly paradoxically, transhumanists tend to be indifferent to the study of literature and the arts as a means of knowing the good(s) (with the notable exception of science fiction). If they were not indifferent, then they might be aware of the now-lengthy tradition in the arts dealing with precisely the postmodern problem of maintaining “vision and hope.” Near the middle of the last century, the novelist Walker Percy wrote of the subject of the postmodern novel:

How very odd it is … that the very moment he arrives at the threshold of his new city, with all its hard-won relief from the sufferings of the past, happens to be the same moment that he runs out of meaning!… The American novel in past years has treated such themes as persons whose lives are blighted by social evils, or reformers who attack these evils…. But the hero of the postmodern novel is a man who has forgotten his bad memories and conquered his present ills and who finds himself in the victorious secular city. His only problem now is to keep from blowing his brains out.

Postmodern art moves from abstract theories to realized depictions of how the heroically actualized self lives. Inevitably in such depictions the triumphant victory of theory gives way to the unsustainable alienation of postmodern life, and the problem theory has shirked becomes pressing: Why hope? How to keep from blowing your brains out?
For the likes of the Beats, the solution could be found in a frantically earnest embrace of the postmodern imperative to move from one mode of being to the next. For Percy’s protagonists, the solution lies partly in embracing the same imperative, but ironically. For the readers of The Catcher in the Rye, the viewers of American Beauty, and the listeners of Radiohead, there is a consoling beauty to be found in the artistic depiction of alienation itself. For the French existentialists, the solution might just be to go ahead and blow your brains out.
That transhumanists have not grappled with the hollow and alienating character of their vision of progress could be taken as evidence of their historical and philosophical myopia. But of course their uninterest in depictions of the good(s) is not simply an oversight but an underlying principle. Whereas the postmodernist’s freedom from all modes of being is constitutionally ironic, the transhumanist is gravely serious about his freedom. His primary attitude towards discussions about the relative merits of different value systems or ways of life is not playfulness but wariness — or sometimes, as we have seen in the comments on this blog, outright hostility and paranoia.
Whereas the postmodernist takes the freedom from, and to choose, any mode of being as inherent, the transhumanist believes that it must be fought for — else there would be no gap between here and transcendence. Indeed, it is the effort to bridge this gap that constitutes transhuman teleology; the feat of the earning itself is the central end of transhuman progress. Transhumanism takes the lemons of postmodern alienation and makes the will to lemonade.
Hence the essential insatiability of the transhumanist project. It has as its goal not some fulfilled form, but a constant seeking after transgressive will and power which, once secured in some measure, surrenders its transgressiveness to the quotidian and so must be sought in still greater measure. The transhumanist, unlike even the theoretical postmodernist, can never fully actualize.
And hence the unsexiness Prof. Hughes bemoans in his project to split the difference between fatalisms, for his “pessimism of the intellect” appears only as a dreary accidental impediment to transcendence. A transhumanist project versed in the arts might be able to provide a more unified and compelling vision of its quest for progress — but it would also have to confront the everyday despair that lies at its heart.
[Images: “Transhuman DNA”, courtesy Biopolitical Times; Walker Percy; Radiohead.]

don’t tell anyone, but I agree with Germaine Greer

When she says this, anyway:

If you haven’t read Proust, don’t worry. This lacuna in your cultural development you do not need to fill. On the other hand, if you have read all of A la Recherche du Temps Perdu, you should be very worried about yourself. As Proust very well knew, reading his work for as long as it takes is temps perdu, time wasted, time that would be better spent visiting a demented relative, meditating, walking the dog or learning ancient Greek.

Seriously, I wish I had back the time I’ve spent reading Proust. And I never made it all the way through. I want to say to Proust what Ezra Pound said to Joyce, for somewhat different reasons, about Finnegans Wake: “Nothing, so far as I can make out, nothing short of divine vision or a new cure for the clap can possibly be worth all that circumambient peripherization.”

the Republic of Letters

Here’s an excellent article by Robert Darnton, about which I will have more to say later. But for now here’s a taste:

The eighteenth century imagined the Republic of Letters as a realm with no police, no boundaries, and no inequalities other than those determined by talent. Anyone could join it by exercising the two main attributes of citizenship, writing and reading. Writers formulated ideas, and readers judged them. Thanks to the power of the printed word, the judgments spread in widening circles, and the strongest arguments won. The word also spread by written letters, for the eighteenth century was a great era of epistolary exchange. Read through the correspondence of Voltaire, Rousseau, Franklin, and Jefferson — each filling about fifty volumes — and you can watch the Republic of Letters in operation. All four writers debated all the issues of their day in a steady stream of letters, which crisscrossed Europe and America in a transatlantic information network. I especially enjoy the exchange of letters between Jefferson and Madison. They discussed everything, notably the American Constitution, which Madison was helping to write in Philadelphia while Jefferson was representing the new republic in Paris. They often wrote about books, for Jefferson loved to haunt the bookshops in the capital of the Republic of Letters, and he frequently bought books for his friend. The purchases included Diderot’s Encyclopédie, which Jefferson thought that he had got at a bargain price, although he had mistaken a reprint for a first edition. Two future presidents discussing books through the information network of the Enlightenment — it’s a stirring sight. But before this picture of the past fogs over with sentiment, I should add that the Republic of Letters was democratic only in principle. In practice, it was dominated by the wellborn and the rich. Far from being able to live from their pens, most writers had to court patrons, solicit sinecures, lobby for appointments to state-controlled journals, dodge censors, and wangle their way into salons and academies, where reputations were made. While suffering indignities at the hands of their social superiors, they turned on one another.

By “Letters” these figures did not mean epistles, though obviously they produced plenty of those, but rather Writing, humane learning, what we might call “literature” in the broadest sense of the word. (They used the word “literature” quite differently than we do. To us it means — more or less — poetry, fiction, drama, and some kinds of essay; to them it meant the scope of a person’s reading, especially in the classics and the best moderns. “He is a man of great literature” is a characteristic phrase of the period: it means “he is exceptionally well-read in the best books.”) Anyway, as I said, more on all this later. But read the whole essay.