morphosis

Here’s a passage from my review of Adam Roberts’s edition of Coleridge’s Biographia Literaria:

As the culmination of the long repudiation of [David] Hartley’s thought, Coleridge famously opposes this Imagination (later divided into Primary and Secondary) to the “Fancy,” which “has no other counters to play with, but fixities and definites.” The Fancy indeed merely plays with the “counters” that have been given it by the memory; “it must receive all its materials ready made from the law of association.” If we were reliant only on the Fancy, we would indeed be Hartleian beings, shuffling our fixed and defined impressions like cardboard coins; but as beings made in the image of God, Coleridge says, we can do more: “The primary IMAGINATION I hold to be the living Power and prime Agent of all human Perception, and as a repetition in the finite mind of the eternal act of creation in the infinite I AM.”

The importance of this distinction is evident from Coleridge’s redeployment of it in other terms elsewhere in the Biographia: “Could a rule be given from without, poetry would cease to be poetry, and sink into a mechanical art. It would be μόρφωσις [morphosis], not ποίησις [poiesis]” — shaping, not making. Roberts, whose background in classics serves him very well as an annotator of Coleridge, points out that “when Coleridge uses [morphosis] in the Biographia he has in mind the New Testament use of the word as ‘semblance’ or ‘outward appearance’, which the King James version translates as ‘form’” — mere form, as it were, mere appearance. And it may be also that Coleridge is thinking of the New Testament uses of poiesis and its near relations as well: for instance, when Paul writes of human beings (Eph. 2:10) as poiesis theou — “God’s workmanship”; God’s poem.

(Not incidentally, Adam’s blog is called Morphosis.) I’ve just discovered in my Great Pynchon Re-Read that the word “Morphosis” is used five times in Mason & Dixon, though not, it seems, in Coleridge’s sense of the term. Here’s the best example:

If you look at the OED entry for the word, here’s what you see:

(You might have to right-click or control-click on the image and open it in a new window or tab to see it properly.) The very bottom is the first relevant thing here, since, in the passage earlier cited from Mason & Dixon, the apostrophe at the beginning of the word suggests that it is an abbreviation of “Metamorphosis” — and indeed, all five uses in the novel employ the apostrophe. 
But it’s also worth noting that Maskelyne — this is Nevil Maskelyne, the Astronomer Royal from 1765 to 1811 — clearly uses the word in a pejorative sense: morphosis is “veering into error.” (I can’t help being reminded here that the root meaning of hamartia — the New Testament word for sin, and Aristotle’s word for some trait of the tragic hero that no one has ever been able reliably to identify — is to “miss the mark.” This is all very characteristic of Pynchon, who is obsessed with vectors, especially tragic ones.) And most of the meanings of morphosis listed in the OED are either subtly or clearly pejorative: John Owen’s identification of Catholicism as an inadequate morphosis of true faith, which is clearly derived from the biblical meaning of mere semblance; but also the medical sense of a “pathological” or “morbid” change of form — the most obvious example being a malignant tumor, which is nothing other than unchecked morphosis: the healthy organ does not so change, but rather retains a stability of form and function.
What makes all this especially interesting for the reader of Mason & Dixon is that three of the five uses of the term occur within a few pages, and all refer to Vaucanson’s famous Digesting Duck, who plays a significant role in the story by virtue of having become animate and articulate: the duck refers to this as his ‘Morphosis. And this should call to mind an earlier post about the Bad Priest in V. and her “progression towards inanimateness.” To be animate, to be organic, is necessarily to undergo morphosis, and so life itself, in this account of things, is intrinsically malignant, cancerous.
The view shared by the Bad Priest and the animate duck is perhaps the opposite of that articulated in the famous closing sentence of Darwin’s Origin of Species: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved” (form being, of course, μορφή). For the Bad Priest, evolving, changing, is itself an evil — perhaps the root of all evil — and certainly not something to take delight in, as Darwin did.

And this preference for the inanimate over the animate may be interrogated from another perspective as well. In a wonderful essay from many years ago — one which I cannot, alas, find online — Wendell Berry describes his encounter with an advertisement celebrating a John Deere tractor as an “earth space capsule” that fully isolates its driver from the outside world with all its changes of weather. Berry finds it both curious and sad that farmers, of all people, would desire to be so separated from the natural world. And he comments, more generally, 

Of course, the only real way to get this sort of freedom and safety—to escape the hassles of earthly life—is to die. And what I think we see in these advertisements is an appeal to a desire to be dead that is evidently felt by many people. These ads are addressed to the perfect consumers: the self-consumers, who have found nothing of interest here on earth, nothing to do, and are impatient to be shed of earthly concerns.

After all, the perfect “earth space capsule” is the coffin. 
Pynchon’s novels return again and again to this fear or hatred of organic life, of time and change, not to celebrate it, but to understand it. I suspect that for him this repulsion is at the heart of technological society, of a culture-wide compulsion to trust in and defer to the inorganic and the human-made — which is, ultimately, a form of idolatry: as the Psalmist says, “They have hands, but they handle not: feet have they, but they walk not: neither speak they through their throat. They that make them are like unto them; so is every one that trusteth in them.” 
There is much more to be said about all this, and I hope to say some of it in this book on Pynchon and theology that I am trying to write. For now I’ll just note that my respect for Pynchon’s acuity on all these matters — respect that was already verging on awe — has just been significantly increased by my reading of Jessica Riskin’s astonishing book The Restless Clock. I can’t say too much more, because I have just written a lengthy review of Riskin for John Wilson’s forthcoming journal Education and Culture, and I’ll link to that review in due course (probably in a couple of months); but the combination of reading Riskin and reading Pynchon has seriously altered my understanding of the last five hundred years of intellectual and cultural history, and has significantly intensified my belief that the only truly theological account of modernity is one deeply immersed in the technological history of this past half-millennium.

open letter to Adam Roberts on the Protocols of the Elders of the Internet

This started as a reply to a comment Adam made on my previous post. But then it underwent gigantism.

Adam, I see these matters a little differently than you do — let’s see if I can find out why. I’ll start with cars. I’d say that the main thing that makes it possible for there to be an enormous variety of automobiles is the road. A road is an immensely powerful platform — in this case literally a platform — because it is so simple. Anything that walks, runs, or rolls can use it, which causes problems sometimes, as when a cow wanders onto the street; but for the most part that openness to multiple uses makes it an indispensable technology. Even if your British automobile has its steering wheel on the wrong side, you can still drive it in France or Germany.

By contrast, railway lines are rather less useful because of the problem called break of gauge, which has in the past forced people to get off one train at a national boundary and get on another one that fits the gauge of the tracks in that country. (And of course for a period there was no standardization of gauge even in England.)

Notice that the lack of standardization only becomes a problem when you get beyond your locality — but that’s precisely what mechanical transportation is for: to take us away from our homes. The technology’s power creates its problems, which new technologies must often be devised to fix.

All of these difficulties are dramatically magnified when we get to the internet, which used to be called, in a significant nickname, the “information superhighway.” Here it seems to me that we have an ongoing struggle between the differentiation that arises from economic competition and the standardization that “platforms” are always looking for. PC makers want you to buy PCs, while Apple wants you to buy Macs, so they differentiate themselves from one another; but Microsoft just wants you to use Word and Excel and PowerPoint and so makes that software for both platforms. Though they’d prefer a world in which every computer ran Windows, they have enough of an interest in standardization to make cross-platform applications. And now they make web versions of their apps that you can use even on Linux.

They do that by employing protocols that were designed back in the era when there were few computers in the world but the ones that existed were made using a variety of architectures and a wide array of parts. When you email the Word file you created on your PC to a Mac user like me, you do so using those same protocols, and updated versions of the communications lines that were first laid out more than a century ago. So again: a tension, in purely technical terms, between variability and standardization.

This tension may be seen in other digital technologies as well. Board games all employ the same basic and highly flexible technological platform: paper and ink (with a bit of plastic or metal, perhaps, though those flourishes are unnecessary). But when it comes to video games, contra Roberts, not everyone has an Xbox. My house is a PlayStation house. Which means that there are some games that we can’t play, and that’s the way the console makers like it: Microsoft wants us to choose Xbox and stick with it; Sony wants us to choose PlayStation and stick with it. However, the makers of video games don’t like seeing their markets artificially reduced in this way, and so, if they have the resources to do so, will make their games available on multiple platforms, and then will use those internet protocols mentioned above to enable players to play with and against each other.

But the most popular games will always be online ones, because they allow almost anyone who has a computer and an internet connection to play and to interact with fellow players: like the makers of board games, they look for the broadest possible platform — but they also encourage us to look beyond the local, indeed to ignore locality when playing with others (players typically have no idea where their opponents and teammates are).

So digital technologies, like the mechanical transportation technologies mentioned earlier, are meant to transcend locality, to remove emplacement as a limitation on sociability. (You can typically only play board games with people who are in the same room with you — though it should be noted that there is a longstanding, if now almost abandoned, tradition of playing chess by mail.) But this can only be accomplished with the development of either (a) standardized platforms or (b) shared protocols designed to bridge the gaps created by platform variability. Thus Google wants to solve that problem you have sharing photos with your wife: you use the Android version of the Google Photos app, she uses the iOS version, and presto! Solution achieved.

So — still working this through, please bear with me — let’s look at Twitter in light of this analysis. Twitter may be understood as a platform-agnostic MMORPG which, like most other MMORPGs, relies on the standard set of internet protocols, and therefore exchanges data with everything else that uses those protocols. This means that while Roberts is once again wrong when he says that everyone is on Twitter — it has maybe 20% as many users as Facebook and 80% as many as Instagram — those larger platforms, and of course the great vastness of the open web, can be used to magnify the influence of anything tweeted. In that sense there are a great many people in the world who, despite not being on Twitter, are on Twitter. So while some people blame Trump’s skillful Twitter provocations for both his political success and the debasing of our political culture, and others place the blame on the fake-news-wholesaling of Facebook, in fact the two work together, along with Google’s algorithms. In these matters I’m a conspiracy theorist, and I blame the Protocols of the Elders of the Internet.

So, Adam, in your response you referred to “homogenization,” whereas I’ve been referring to “standardization.” At this point we have the standardization of practices without the homogenization of ideology — and that’s the source of all of our conflicts. The platforms that allow us to connect with like-minded people are equally open to people whose ideas we despise, and we have no reliable means of shutting them out; but the encoded, baked-in tendencies of Twitter as a platform are universally distributed, which means that whether you’re an SJW or an alt-righty, you’re probably going to respond to people you disagree with by instantaneous minimalist sneering. (The tendencies of early print culture were rather different, but produced a similar degree of hostility, which I’ve discussed in this post.)

I think, though, that this conflict between standardization and homogenization could be a temporary state of affairs, at least for people who rely on the Protocols, and that means most of us. That is, while standardization does not inevitably produce homogeneity, it certainly nudges everyone strongly in that direction. There’s no way that public opinion in the U.S. about same-sex marriage could have changed so quickly without social media. TV certainly had a significant influence, but social media are collectively a powerful force-and-speed multiplier for opinion alteration. And if you feel good about that, then you might consider how social media have also nudged tens of millions of Americans towards profound fear of immigrants.

So in this environment, majority opinions and opinions that are held very strongly by sizable minorities are going to be the chief beneficiaries. And that could lead ultimately to significantly increased homogeneity of opinion, a homogeneity that you only stand a chance of avoiding if you minimize your exposure to the Protocols. And here, Adam, it seems to me that your novel New Model Army is disturbingly relevant.

For much of the novel, the soldiers who fight for Pantegral are independent, free agents. When they fight, they fight according to, yes, protocols established and enforced by the software they all use — but they can stop fighting when they want to, they come and go. Indeed, this is one of the chief appeals to them of the New Model Army: it doesn’t own them. Or doesn’t at first; or doesn’t seem to. In the end the protocols prove to be more coercively powerful (or should I say more intensely desirable?) than they had ever expected. And then we have homogeneity indeed — on a truly gigantic scale.

Caveat lector, is what I’m saying.

the giant in the library

The technological history of modernity, as I conceive of it, is a story to be told in light of a theological anthropology. As what we now call modernity was emerging, in the sixteenth century, this connection was widely understood. Consider for instance the great letter that Rabelais’ giant Gargantua writes to his son Pantagruel when the latter is studying at the University of Paris. Gargantua first wants to impress upon his son how quickly and dramatically the human world, especially the world of learning, has changed:

And even though Grandgousier, my late father of grateful memory, devoted all his zeal towards having me progress towards every perfection and polite learning, and even though my toil and study did correspond very closely to his desire – indeed surpassed them – nevertheless, as you can well understand, those times were neither so opportune nor convenient for learning as they now are, and I never had an abundance of such tutors as you have. The times were still dark, redolent of the disaster and calamity of the Goths, who had brought all sound learning to destruction; but, by the goodness of God, light and dignity have been restored to literature during my lifetime: and I can see such an improvement that I would hardly be classed nowadays among the first form of little grammar-schoolboys, I who (not wrongly) was reputed the most learned of my century as a young man.

(I’m using the Penguin translation by M. A. Screech, not the old one I linked to above.) And this change is the product, in large part, of technology:

Now all disciplines have been brought back; languages have been restored: Greek – without which it is a disgrace that any man should call himself a scholar – Hebrew, Chaldaean, Latin; elegant and accurate books are now in use, printing having been invented in my lifetime through divine inspiration just as artillery, on the contrary, was invented through the prompting of the devil. The whole world is now full of erudite persons, full of very learned teachers and of the most ample libraries, such indeed that I hold that it was not as easy to study in the days of Plato, Cicero nor Papinian as it is now.

Note that technologies come to human beings as gifts (from God) and curses (from the Devil); it requires considerable discernment to tell the one from the other. The result is that human beings have had their powers augmented and extended in unprecedented ways, which is why, I think, Rabelais makes his characters giants: enormously powerful beings who lack full control over their powers and therefore stumble and trample through the world, with comical but also sometimes worrisome consequences.

But note how Gargantua draws his letter to a conclusion:

But since, according to Solomon, ‘Wisdom will not enter a soul which [deviseth] evil,’ and since ‘Science without conscience is but the ruination of the soul,’ you should serve, love and fear God, fixing all your thoughts and hopes in Him, and, by faith informed with charity, live conjoined to Him in such a way as never to be cut off from Him by sin. Beware of this world’s deceits. Give not your mind unto vanity, for this is a transitory life, but the word of God endureth for ever. Be of service to your neighbours and love them as yourself. Venerate your teachers. Flee the company of those whom you do not wish to resemble; and the gifts of grace which God has bestowed upon you receive you not in vain. Then once you know that you have acquired all there is to learn over there, come back to me so that I may see you and give you my blessing before I die.

The “science without conscience” line is probably a Latin adage playing on scientia and conscientia: as Peter Harrison explains, in the late medieval world Rabelais was educated in, scientia is primarily an intellectual virtue, the disciplined pursuit of systematic knowledge. The point of the adage, then, is that even that intellectual virtue can serve vice and “ruin the soul” if it is not governed by the greater virtues of faith, hope, and love. (Note also how the story of Prospero in The Tempest fits this template. The whole complex Renaissance discourse, and practice, of magic is all about these very matters.)

So I want to note three intersecting notions here: first, the dramatic augmentation, in the early-modern period, of human power by technology; second, the necessity of understanding the full potential of those new technologies both for good and for evil within the framework of a sound theological anthropology, an anthropology that parses the various interactions of intellect and will; and third, the unique ability of narrative art to embody and illustrate the coming together of technology and theological anthropology. These are the three key elements of the technological history of modernity, as I conceive it and hope (eventually) to narrate it.

The ways that narrative art pursues the interrelation of technology and the human is a pretty major theme of mine: see, for instance, here and here and here. (Note how that last piece connects to Rabelais.) It will be an even bigger theme in the future. Stay tuned for further developments — though probably not right away. I have books to finish….

modernity as temporal self-exile

In The Theological Origins of Modernity, Michael Allen Gillespie writes,

What then does it mean to be modern? As the term is used in everyday discourse, being modern means being fashionable, up to date, contemporary. This common usage actually captures a great deal of the truth of the matter, even if the deeper meaning and significance of this definition are seldom understood. In fact, it is one of the salient characteristics of modernity to focus on what is right in front of us and thus to overlook the deeper significance of our origins. What the common understanding points to, however, is the uncommon fact that, at its core, to think of oneself as modern is to define one’s being in terms of time. This is remarkable. In previous ages and other places, people have defined themselves in terms of their land or place, their race or ethnic group, their traditions or their gods, but not explicitly in terms of time. Of course, any self-understanding assumes some notion of time, but in all other cases the temporal moment has remained implicit. Ancient peoples located themselves in terms of a seminal event, the creation of the world, an exodus from bondage, a memorable victory, or the first Olympiad, to take only a few examples, but locating oneself temporally in any of these ways is different than defining oneself in terms of time. To be modern means to be “new,” to be an unprecedented event in the flow of time, a first beginning, something different than anything that has come before, a novel way of being in the world, ultimately not even a form of being but a form of becoming.

The notion that there is some indissoluble and definitive link between my identity and my moment accounts for some of the most characteristic rhetorical flourishes in our political debates: When people say that history is on their side, or ask how someone can hold Position X in the twenty-first century, or explain that they care about the things they do because of the generation they belong to, or insist that someone they don’t like acts the way he does because of the generation he belongs to, they’re assuming that link. But if time is so definitive, it is also a prison: we are bound to our moment and cannot think or live outside it.

And yet people who are so bound congratulate themselves on being emancipated from “their land or place, their race or ethnic group, their traditions or their gods.” They believe they are free, but in fact they have exchanged defining structures that can (and often do) offer security and meaning for a defining abstraction that can offer neither — a home for a prison. This helps to explain why people who believe they are emancipated nevertheless tend to seek, with an intensity born of unacknowledged nostalgia, compensatory stories set in fantastic realms where the longed-for structures are firmly in place. To be imprisoned-by-emancipation is the fate of those who define their being in terms of time. Modernity is thus temporal self-exile — though it may be other things as well.

a technological tale for Reformation Day

What I have been calling the technological history of modernity is in part a story about the power of recognizing how certain technologies work — and the penalties imposed on those who fail to grasp their logic.

In his early book Renaissance Self-Fashioning, Stephen Greenblatt tells a story:

In 1531 a lawyer named James Bainham, son of a Gloucestershire knight, was accused of heresy, arrested, and taken from the Middle Temple to Lord Chancellor More’s house in Chelsea, where he was detained while More tried to persuade him to abjure his Protestant beliefs. The failure of this attempt called forth sterner measures until, after torture and the threat of execution, Bainham finally did abjure, paying a £20 fine to the king and standing as a penitent before the priest during the Sunday sermon at Paul’s Cross. But scarcely a month after his release, according to John Foxe, Bainham regretted his abjuration “and was never quiet in mind and conscience until the time he had uttered his fall to all his acquaintance, and asked God and all the world forgiveness, before the congregation in those days, in a warehouse in Bow lane.” On the following Sunday, Bainham came openly to Saint Austin’s church, stood up “with the New Testament in his hand in English and the Obedience of a Christian Man [by Tyndale] in his bosom,” and, weeping, declared to the congregants that he had denied God. He prayed the people to forgive him, exhorted them to beware his own weakness and to die rather than to do as he had done, “for he would not feel such a hell again as he did feel, for all the world’s good.” He was, of course, signing his own death warrant, which he sealed with letters to the bishop of London and others. He was promptly arrested and, after reexamination, burned at the stake as a relapsed heretic.

When Bainham was first interrogated by More, he told the Lord Chancellor that “The truth of holy Scripture was never, these eight hundred years past, so plainly and expressly declared unto the people, as it hath been within these six years” — the six years since the printing of Tyndale’s New Testament in 1525.

The very presence of this book was, to ecclesial traditionalists, clearly the essential problem. So back in 1529 Thomas More and his friend Cuthbert Tunstall, then Bishop of London, had crossed the English Channel to Antwerp, where Tyndale’s translation was printed. (Its printing and sale were of course forbidden in England.) More and Tunstall searched high and low, bought every copy of the translation they could find, and burned them all in a great bonfire.

Tyndale gladly received this as a boon: he had already come to recognize that his first version of the New Testament had many errors, and he used the money received from More and Tunstall to hasten his work on completing and publishing a revision, which duly appeared in 1534.

I/O

I’m still thinking about the myths and metaphors we live by, especially the myths and metaphors that have made modernity, and the world keeps giving me food for thought.

So speaking of food, recently I was listening to a BBC Radio show about food — I think it was this one — and one of the people interviewed was Ken Albala, a food historian at the University of the Pacific. Albala made the fascinating comment that in the twentieth century, much of our thinking about proper eating was shaped (bent, one might better say) by thinking of the human body as a kind of internal combustion engine. Just as in the 21st century we think of our brains as computers, in the 20th we thought of our bodies as automobiles.

But perhaps, given the dominance of digital computing in our world, including its imminent takeover of the world of automobiling, we might be seeing a shift in how we conceive of our bodies, from analog metaphors to digital ones. Isn’t that what Soylent is all about, and the fascination with smoothies? — Making nutrition digital! An amalgamated slurry of ingredients goes in one end; an amalgamated slurry of ingredients comes out the other end. Input/Output, baby. Simple as that.

UPDATE: My friend James Schirmer tells me about Huel — human fuel! Or, as pretty much everyone will think of it, “gruel but with an H.”

“Please, sir, may I have some more?”

some thoughts on the humanities

I can’t say too much about this right now, but I have been working with some very smart people on a kind of State of the Humanities document — and yes, I know there are hundreds of those, but ours differs from the others by being really good.

In the process of drafting that document, I wrote a section that … well, it got cut. I’m not bitter about that, I am not at all bitter about that. But I’m going to post it here. (It is, I should emphasize, just a draft and I may want to revise and expand it later.)

Nearly fifty years ago, George Steiner wrote of the peculiar character of intellectual life “in a post-condition” — the perceived sense of living in the vague aftermath of structures and beliefs that can never be restored. Such a condition is often proclaimed as liberating, but at least equally often it is experienced as (in Matthew Arnold’s words) a suspension between two worlds, “one dead, / The other powerless to be born.” In the decades since Steiner wrote, humanistic study has been more and more completely understood as something we do from within such a post-condition.

But the humanities cannot be pursued and practiced with any integrity if these feelings of belatedness are merely accepted, without critical reflection and interrogation. In part this is because, whatever else humanistic study is, it is necessarily critical and inquiring in whatever subject it takes up; but also because humanistic study has always been and must always be willing to let the past speak to the present, as well as the present to the past. The work, the life, of the humanities may be summed up in an image from Kenneth Burke’s The Philosophy of Literary Form (1941):

Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s assistance. However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress.

It is from this ‘unending conversation’ that the materials of your drama arise.

It is in this spirit that scholars of the humanities need to take up the claims that our moment is characterized by what it has left behind — the conceptual schemes, or ideologies, or épistémès, to which it is thought to be “post.” In order to grasp the challenges and opportunities of the present moment, three facets of our post-condition need to be addressed: the postmodern, the posthuman, and the postsecular.

Among these terms, postmodern was the first-coined, and was so overused for decades that it now seems hoary with age. But it is the concept that lays the foundation for the others. To be postmodern, according to the most widely shared account, is to live in the aftermath of the collapse of a great narrative, one that began in the period that used to be linked with the Renaissance and Reformation but is now typically called the “early modern.” The early modern — we are told, with varying stresses and tones, by a host of books and thinkers from Foucault’s Les Mots et les choses (1966) to Stephen Greenblatt’s The Swerve (2011) — marks the first emergence of Man, the free-standing, liberated, sovereign subject, on a path of self-emancipation (from the bondage of superstition and myth) and self-enlightenment (out of the darkness that precedes the reign of Reason). Among the instruments that assisted this emancipation, none were more vital than the studia humanitatis — the humanities. The humanities simply are, in this account of modernity, the discourses and disciplines of Man. And therefore if that narrative has unraveled, if the age of Man is over — as Rimbaud wrote, “Car l’Homme a fini! l’Homme a joué tous les rôles!” [“For Man is finished! Man has played all the roles!”] — what becomes of the humanities?

This logic is still more explicit and forceful with regard to the posthuman. The idea of the posthuman assumes the collapse of the narrative of Man and adds to it an emphasis on the possibility of remaking human beings through digital and biological technologies leading ultimately to a transhuman mode of being. From within the logic of this technocratic regime the humanities will seem irrelevant, a quaint relic of an archaic world.

The postsecular is a variant on or extension of the postmodern in that it associates the narrative of Man with a “Whig interpretation of history,” an account of the past 500 years as a story of inevitable progressive emancipation from ancient, confining social structures, especially those associated with religion. But if the age of Man is over, can the story of inevitable secularization survive it? The suspicion that it cannot generates the rhetoric of the postsecular.

(In some respects the idea of the postsecular stands in manifest tension with the posthuman — but not in all. The idea that the posthuman experience can be in some sense a religious one thrives in science fiction and in discursive books such as Erik Davis’s TechGnosis [1998] and Ray Kurzweil’s The Age of Spiritual Machines [1999] — the “spiritual” for Kurzweil being “a feeling of transcending one’s everyday physical and mortal bounds to sense a deeper reality.”)

What must be noted about all of these master concepts is that they were articulated, developed, and promulgated primarily by scholars in the humanities, employing the traditional methods of humanistic learning. (Even Kurzweil, with his pronounced scientistic bent, borrows the language of his aspirations — especially the language of “transcendence” — from humanistic study.) The notion that any of these developments renders humanistic study obsolete is therefore odd if not absurd — as though the humanities exist only to erase themselves, like a purely intellectual version of Claude Shannon’s Ultimate Machine, whose only function is, once it’s turned on, to turn itself off.

But there is another and better way to tell this story.

It is noteworthy that, according to the standard narrative of the emergence of modernity, the idea of Man was made possible by the employment of a sophisticated set of philological tools in a passionate quest to understand the alien and recover the lost. The early humanists read the classical writers not as people exactly like them — indeed, what made the classical writers different was precisely what made them appealing as guides and models — but nevertheless as people, people from whom we can learn because there is a common human lifeworld and a set of shared experiences. The tools and methods of the humanities, and, more important, the very spirit of the humanities, collaborate to reveal Burke’s “unending conversation”: the materials of my own drama arise only through my dialogical encounter with others, those from the past whose voices I can discover and those from the future whose voices I imagine. Discovery and imagination are, then, the twin engines of humanistic learning, humanistic aspiration. It was in just this spirit that, near the end of his long life, the Russian polymath Mikhail Bakhtin wrote in a notebook,

There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future)…. At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in new form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival.

The idea that underlies Bakhtin’s hopefulness, that makes discovery and imagination essential to the work of the humanities, is, in brief, Terence’s famous statement, clichéd though it may have become: Homo sum, humani nihil a me alienum puto. To say that nothing human is alien to me is not to say that everything human is fully accessible to me, fully comprehensible; it is not to erase or even to minimize cultural, racial, or sexual difference; but it is to say that nothing human stands wholly outside my ability to comprehend — if I am willing to work, in a disciplined and informed way, at the comprehending. Terence’s sentence is best taken not as a claim of achievement but as an essential aspiration; and it is the distinctive gift of the humanities to make that aspiration possible.

It is in this spirit that those claims that, as we have noted, emerged from humanistic learning must be evaluated: that our age is postmodern, posthuman, postsecular. All the resources and practices of the humanities — reflective and critical, inquiring and skeptical, methodologically patient and inexplicably intuitive — should be brought to bear on these claims, not with ironic detachment but with the earnest conviction that our answers matter. They are, like those master concepts themselves, both diagnostic and prescriptive: they matter equally for our understanding of the past and our anticipation of the future.

The World Beyond Kant’s Head

For a project I’m working on, and will be able to say something about later, I re-read Matthew Crawford’s The World Beyond Your Head, and I have to say: It’s a really superb book. I read it when it first came out, but I was knee-deep in writing at the time and I don’t think I absorbed it as fully as I should have. I quote Crawford in support of several of the key points I make in my theses on technology, but his development of those points is deeply thoughtful and provocative, even more than I had realized. If you haven’t read it, you should.

But there’s something about the book I want to question. It concerns philosophy, and the history of philosophy.

In relation to the kinds of cultural issues Crawford deals with here — issues related to technology, economics, social practices, and selfhood — there are two ways to make use of the philosophy of the past. The first involves illumination: one argues that reading Kant and Hegel (Crawford’s two key philosophers) clarifies our situation, provides alternative ways of conceptualizing and responding to it, and so on. The other way involves causation: one argues that we’re where we are today because of the triumphal dissemination of, for instance, Kantian ideas throughout our culture.

Crawford does some of both, but in many respects the chief argument of his book is based on a major causal assumption: that much of what’s wrong with our culture, and with our models of selfhood, arises from the success of certain of Kant’s ideas. I say “assumption” because I don’t think that Crawford ever actually argues the point, and I think he doesn’t argue the point because he doesn’t clearly distinguish between illumination and causation. That is, if I’ve read him rightly, he shows that a study of Kant makes sense of many contemporary phenomena and implicitly concludes that Kant’s ideas therefore are likely to have played a causal role in the rise of those phenomena.

I just don’t buy it, any more than I buy the structurally identical claim that modern individualism and atomization all derive from the late-medieval nominalists. I don’t buy those claims because I have never seen any evidence for them. I am not saying that those claims are wrong, I just want to know how it happens: how you get from extremely complex and arcane philosophical texts that only a handful of people in history have ever been able to read to world-shaping power. I don’t see how it’s even possible.

One of Auden’s most famous lines is “Poetry makes nothing happen.” He was repeatedly insistent on this point. In several articles and interviews he commented that the social and political history of Europe would be precisely the same if Dante, Shakespeare, and Mozart had never lived. I suspect that this is true, and that it’s also true of philosophy. I think that we would have the techno-capitalist society we have if Duns Scotus, William of Ockham, Immanuel Kant, and G. W. F. Hegel had never lived. If you disagree with me, please show me the path which those philosophical ideas followed to become so world-shapingly dominant. I am not too old to learn.

myths we can’t help living by

One reason the technological history of modernity is a story worth telling: the power of science and technology to provide what the philosopher Mary Midgley calls “myths we live by”. For instance, Midgley writes,

Myths are not lies. Nor are they detached stories. They are imaginative patterns, networks of powerful symbols that suggest particular ways of interpreting the world. They shape its meaning. For instance, machine imagery, which began to pervade our thought in the seventeenth century, is still potent today. We still often tend to see ourselves, and the living things around us, as pieces of clockwork: items of a kind that we ourselves could make, and might decide to remake if it suits us better. Hence the confident language of ‘genetic engineering’ and ‘the building-blocks of life’.

Again, the reductive, atomistic picture of explanation, which suggests that the right way to understand complex wholes is always to break them down into their smallest parts, leads us to think that truth is always revealed at the end of that other seventeenth-century invention, the microscope. Where microscopes dominate our imagination, we feel that the large wholes we deal with in everyday experience are mere appearances. Only the particles revealed at the bottom of the microscope are real. Thus, to an extent unknown in earlier times, our dominant technology shapes our symbolism and thereby our metaphysics, our view about what is real.

This is why I continue to protest against the view which, proclaiming that “ideas have consequences,” goes on to ignore the material and technological things that press with great force upon our ideas. Consider, for instance, the almost incredible influence that computers have upon our understanding of the human brain, even though the brain does not process information and is most definitely not in any way a computer. The metaphor is almost impossible for neuroscientists to escape; they cannot, generally speaking, even recognize it as a metaphor.

If we can even begin to grasp the power of such metaphors and myths, we can understand why a technological history of modernity is so needful.

why blog?

The chief reason I blog is to create a kind of accountability to my own reading and thinking. Blogging is a way of thinking out loud and in public, which also means that people can respond — and often those responses are helpful in shaping further thoughts.

But even if I got no responses, putting my ideas out here would still be worthwhile, because it’s a venue in which there is no expectation of polish or completeness. Sometimes a given post, or set of posts, can prove to be a dead end: that’s what happened, I think, with the Dialogue on Democracy I did over at The American Conservative. I wanted to think through some issues but I don’t believe I really accomplished anything, for me or for others. But that’s all right. It was worth a try. And perhaps that dead end ended up leading me to the more fruitful explorations of the deep roots of our politics, and their relation to our technological society, that I’ve been pursuing here in the last couple of weeks.

As I have explained several times, over the long haul I want to pursue a technological history of modernity. But I have two books to write before I can even give serious consideration to that project. Nevertheless, I can try out the occasional random idea here, and as I do that over the next couple of years, who knows what might emerge? Possibly nothing of value; but possibly something essential to the project. Time will tell.

I’ve been blogging a lot lately because I had a chunk of free-ish time between the end of the Spring semester and the beginning of a long period of full-time book writing. I’m marking that transition by taking ten days for research (but also for fun) in England and Italy, so there will be no blogging for a while. And then when I return my activity will be sporadic. But bit by bit and piece by piece I’ll be building something here.