Toward a Typology of Transhumanism

Years ago, James Hughes sought to typify the emerging political debate over transhumanism with a three-axis political scale, adding a biopolitical dimension to the familiar axes of social and fiscal libertarianism. But transhumanism is a very academic issue, both in the sense that many transhumanists, including Hughes, are academics, and in the sense that it is far removed from everyday practical concerns. So it may make more sense to characterize the different types of transhumanists in terms of the kinds of intellectual positions to which they adhere rather than by how they relate to different positions on the political spectrum. As Zoltan Istvan’s wacky transhumanist presidential campaign shows us, transhumanism is hardly ready for prime time when it comes to American politics.

And so I propose a continuum of transhumanist thought, based on three different levels of support for human enhancement technologies, to help observers understand the intellectual differences among some of its proponents.

First, the mildest type of transhumanist: those who embrace the human enhancement project, or reject most substantive limits to human enhancement, but who do not have a very concrete vision of what kinds of things human enhancement technology may be used for. In terms of their intellectual background, these mild transhumanists can be defined by their diversity rather than their unity. They adhere to some of the more respectable philosophical schools, such as pragmatism, various kinds of liberalism, or simply the thin, “formally rational” morality of mainstream bioethics. Many of these mild transhumanists are indeed professional bioethicists in good standing. Few, if any, of them would accept the label of “transhumanist” for themselves, but they reject the substantive arguments against the enhancement project, often in the name of enhancing the freedom of choice that individuals have to control their own bodies — or, in the case of reproductive technologies, the “procreative liberty” of parents to control the bodies of their children.

Second, the moderate transhumanists. They are not very philosophically diverse, but rather are defined by a dogmatic adherence to utilitarianism. Characteristic examples would include John Harris and Julian Savulescu, along with many of the academics associated with Oxford’s rather inaptly named Uehiro Center for Practical Ethics. These thinkers, who nowadays also generally eschew the term “transhumanist” for themselves, apply a simple calculus of costs and benefits for society to moral questions concerning biotechnology, and conclude that the extensive use of biotechnology will usually end up improving human well-being. Unlike the liberals who oppose restrictions on enhancement in the name of freedom, these strident utilitarians treat liberty as a secondary value, and so some of them are comfortable with the idea of legally requiring or otherwise pressuring individuals to use enhancement technologies.

Some of their hobbyhorses include the abandonment of the act-omission distinction — that is, the idea that there is a morally relevant difference between actively causing harm and merely failing to prevent it. John Harris famously applied this to the problem of organ shortages when he argued that we should perhaps randomly kill innocent people to harvest their organs, since failing to procure organs for those who will die without them is little different from killing them. Grisly as it is, this argument is not quite a transhumanist one, since such organ donation would hardly constitute human enhancement, but it is clear how someone who accepts this kind of radical utilitarianism would go on to accept arguments for manipulating human biology in other outlandish schemes for maximizing “well-being.”

Third, the most extreme form of transhumanism is defined less by adherence to a philosophical position than by a kind of quixotic obsession with technology itself. Today, this obsession with technology manifests in the belief that artificial intelligence will completely transform the world through the Singularity and the uploading of human minds — although futurist speculations built on contemporary technologies have of course been around for a long time. Aldous Huxley’s classic novel Brave New World, for example, imagines a whole world designed in the image of the early twentieth-century factory. Though this obsession with technology is not a philosophical position per se, today’s transhumanists have certainly built very elaborate intellectual edifices around the idea of artificial intelligence. Nick Bostrom’s recent book Superintelligence represents a good example of the kind of systematic work these extreme transhumanists have put into thinking through what a world completely shaped by information technology might be like.

*   *   *

Obviously there is a great deal of overlap between these three degrees of transhumanism, and the mildest stage in particular is really quite vaguely defined. If there is a kind of continuum along which these stages run, it would be one from relatively open-minded and ecumenical thinkers to those who are increasingly dogmatic and idiosyncratic in their views. The mild transhumanists are usually highly engaged with the real world of policymaking and medicine, and discuss a wide variety of ideas in their work. The moderate transhumanists are more committed to a particular philosophical approach, and the academics at Oxford’s Uehiro Center for Practical Ethics who apply their dogmatic utilitarianism to moral problems usually end up with wildly impractical proposals. Though all of these advocates of human enhancement are enthusiastic about technology, for the extreme transhumanists, technology almost completely shapes their moral and political thought; and though their actual influence on public policy is thankfully limited for the time being, it is these more extreme folks, like Ray Kurzweil and Nick Bostrom, and arguably Eric Drexler and the late Robert Ettinger, who tend to be most often profiled in the press and to have a popular following.

The Grand Academy of Silicon Valley

After writing today’s post I couldn’t shake the notion that all this conversation about simplifying and rationalizing language reminded me of something, and then it hit me: Gulliver’s visit to the grand academy of Lagado.

A number of the academicians Gulliver meets there are deeply concerned with the irrationality of language, and pursue schemes to adjust it so that it fits their understanding of what science requires. One scholar has built a frame (pictured above) composed of a series of turnable blocks. He makes some of his students turn the handles and others write down the sentences produced (when sentences are produced, that is).

But more interesting in light of what Mark Zuckerberg wants are those who attempt to deal with what, in Swift’s time, was called the res et verba controversy. (You can read about it in Hans Aarsleff’s 1982 book From Locke to Saussure: Essays on the Study of Language and Intellectual History.) The controversy concerned the question of whether language could be rationalized in such a way that there is a direct one-to-one match between things (res) and words (verba). This problem some of the academicians of Lagado determined to solve — along with certain other problems, especially including death — in a very practical way:

The other project was, a scheme for entirely abolishing all words whatsoever; and this was urged as a great advantage in point of health, as well as brevity. For it is plain, that every word we speak is, in some degree, a diminution of our lungs by corrosion, and, consequently, contributes to the shortening of our lives. An expedient was therefore offered, “that since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.” And this invention would certainly have taken place, to the great ease as well as health of the subject, if the women, in conjunction with the vulgar and illiterate, had not threatened to raise a rebellion unless they might be allowed the liberty to speak with their tongues, after the manner of their forefathers; such constant irreconcilable enemies to science are the common people. However, many of the most learned and wise adhere to the new scheme of expressing themselves by things; which has only this inconvenience attending it, that if a man’s business be very great, and of various kinds, he must be obliged, in proportion, to carry a greater bundle of things upon his back, unless he can afford one or two strong servants to attend him. I have often beheld two of those sages almost sinking under the weight of their packs, like pedlars among us, who, when they met in the street, would lay down their loads, open their sacks, and hold conversation for an hour together; then put up their implements, help each other to resume their burdens, and take their leave.

But for short conversations, a man may carry implements in his pockets, and under his arms, enough to supply him; and in his house, he cannot be at a loss. Therefore the room where company meet who practise this art, is full of all things, ready at hand, requisite to furnish matter for this kind of artificial converse.

Rationalizing language and extending human life expectancy at the same time! Mark Zuckerberg and Ray Kurzweil, meet your great forebears!

Thanks to Computers, We Are “Getting Better at Playing Chess”

According to an interesting article in the Wall Street Journal, “Chess-playing computers, far from revealing the limits of human ability, have actually pushed it to new heights.”

Reporting on the story of Magnus Carlsen, the newly minted world chess champion, Christopher Chabris and David Goodman write that the best human chess players have been profoundly influenced by chess-playing computers:

Once laptops could routinely dispatch grandmasters … it became possible to integrate their analysis fully into other aspects of the game. Commentators at major tournaments now consult computers to check their judgment. Online, fans get excited when their own “engines” discover moves the players miss. And elite grandmasters use computers to test their opening plans and generate new ideas.

[Chess-playing programs] are not perfect; sometimes long-term strategy still eludes them. But players have learned from computers that some kinds of chess positions are playable, or even advantageous, even though they might violate general principles. Having seen how machines go about attacking and especially defending, humans have become emboldened to try the same ideas…. [A] study published on ChessBase.com earlier this year showed that in the tournament Mr. Carlsen won to qualify for the world championship match, he played more like a computer than any of his opponents.

The net effect of the gain in computer skill is thus, ironically, a gain in human skill. Humans — at least the best ones — are getting better at playing chess.

The whole article is well worth a read (h/t Gary Rosen).

For various obvious reasons, the literature about AI and transhumanism has a lot to say about chess and computers. The Wall Street Journal article about the Carlsen victory reminds me of this remark that Ray Kurzweil makes in passing in one of the epilogues to his 1999 book The Age of Spiritual Machines:

After Kasparov’s 1997 defeat, we read a lot about how Deep Blue was just doing massive number crunching, not really “thinking” the way its human rival was doing. One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to really think very much during the tournament. Mostly he was just drawing upon his mental database of situations he had thought about long ago….  [page 290]
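
Kurzweil’s description of Deep Blue “thinking through the implications of each move and countermove” is, in computational terms, game-tree search. As a toy illustration only (a bare minimax sketch over a trivial take-away game, not Deep Blue’s actual algorithm, which relied on specialized hardware, pruning, and an elaborate evaluation function), here is what exhaustively searching every move and countermove looks like:

```python
# A minimal minimax sketch. The "game" is deliberately trivial: two players
# alternately remove 1 or 2 stones from a pile, and whoever takes the last
# stone wins. The point is only to show what searching every move and
# countermove means; real chess engines add pruning, heuristics, and vastly
# more computing power.

def minimax(stones, maximizing):
    """Score a position by searching all moves and countermoves to the end.
    +1 means the side we are rooting for can force a win; -1 means it cannot."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

# From a pile of 7 the side to move can force a win; from a pile of 6 it cannot.
print(minimax(7, True), minimax(6, True))   # prints: 1 -1
```

Kasparov’s “mental database of situations,” by contrast, is closer to pattern lookup than to this kind of brute-force enumeration, which is roughly the contrast Kurzweil is drawing.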

Is Kurzweil right about how Kasparov thinks? What can we know about how Carlsen’s thinking has been changed by playing against computers? There are fundamental limits to what we can know about a person’s cognitive processes — even our own — notwithstanding all the talk about how the best players think in patterns or “decision trees” or whatnot. Diego Rasskin-Gutman spends a significant portion of his 2009 book Chess Metaphors: Artificial Intelligence and the Human Mind trying to understand how chess players think, but this is his ultimate conclusion:

If philosophy of the mind can ask what the existential experience of being a bat feels like, can we ask ourselves how a grandmaster thinks? Clearly we can [ask], but we must admit that we will never be able to enter the mind of Garry Kasparov, share the thoughts of Judit Polgar, or know what Max Euwe thought when he discussed his protocol with Adriaan de Groot. If we really want to know how a grandmaster thinks, it is not enough to read Alexander Kotov, Nikolai Krogius, or even de Groot himself…. If we really want to know how a grandmaster thinks, there is only one sure path: put in the long hours of study that it takes to become one. It is easier than trying to become a bat. [pages 166–167]

Then again, who knows — maybe we can try to become bats and play chess.

[Image caption: “I could do this in the dark, too, Ras.”]

Speculations on the Future of AI

Thanks for the shoutout and the kind words, Adam, about my review of Kurzweil’s latest book. I’ll take a stab at answering the question you posed:

I wonder how far Ari and [Edward] Feser would be willing to concede that the AI project might get someday, notwithstanding the faulty theoretical arguments sometimes made on its behalf…. Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?

Allow me to come at this question by looking instead at the big-picture view you explicitly asked me to avoid — and forgive me, readers, for approaching this rather informally. What follows is in some sense a brief update on my thinking on questions I first explored in my long 2009 essay on AI.

The big question can be put this way: Can the mind be replicated, at least to a degree that will satisfy any reasonable person that we have mastered the principles that make it work and can control the same? A comparison AI proponents often bring up is that we’ve recreated flying without replicating the bird — and in the process figured out how to do it much faster than birds. This point is useful for focusing AI discussions on the practical. But unlike many of those who make this comparison, I think most educated folk would recognize that the large majority of what makes the mind the mind has yet to be mastered and magnified in the way that flying has, even if many of its defining functions have been.

So, can all of the mind’s functions be recreated in a controllable way? I’ve long felt the answer must be yes, at least in theory. The reason is that, whatever the mind itself is — regardless of whether it is entirely physical — it seems certain to at least have entirely physical causes. (Even if these physical causes might result in non-physical causes, like free will.) Therefore, those original physical causes ought to be subject to physical understanding, manipulation, and recreation of a sort, just as with birds and flying.

The prospect of many mental tasks being automated on a computer should be unsurprising, and to an extent not even unsettling to a “folk psychological” view of free will and first-person awareness. I say this because one of the great powers of consciousness is to make habits of its own patterns of thought, to the point that they can be performed with minimal to no conscious awareness; not only tasks, skills, and knowledge, but even emotions, intuitive reasoning, and perception can be understood to some extent as products of habitualized consciousness. So it shouldn’t be surprising that we can make explicit again some of those specific habits of mind, even ones like perception that seem prior to consciousness, in a way that’s amenable to proceduralization.

The question is how many of the things our mind does can be tackled in this way. In a sense, many of the feats of AI have been continuing the trend established by mechanization long before — of having machines take over human tasks but in a machinelike way, without necessarily understanding or mastering the way humans do things. One could make a case, as Mark Halpern has in The New Atlantis, that the intelligence we seem to see in many of AI’s showiest successes — driverless cars, supercomputers winning chess and Jeopardy! — may be better understood as belonging to the human programmers than to the computers themselves. If that’s true, then artificial intelligence thus far would have to be considered more a matter of advances in (human) artifice than in (computer) intelligence.

It will be curious to see how much further those methods can go without AI researchers having to return to attempting to understand human intelligence on its own terms. In that sense, perhaps the biggest, most elusive question for AI is whether it can create (whether by replicating consciousness or not) a generalized artificial intelligence — not the big accretion of specifically tailored programs we have now, but a program that, like our mind, is able to tackle just about any and every problem that is put before it, only far better than we can. (That’s setting aside the question of how we could control such a powerful entity to suit our preferred ends — which, despite what the Friendly AI folks say, sounds like a contradiction in terms.)

So, to Adam’s original question: “practically speaking … how good will these machines get at mimicking consciousness, intelligence, humanness?” I just don’t know, and I don’t think anyone can intelligently say that they do. I do know that almost all of the prominent AI predictions turn out to be grossly optimistic in their time scale, but, as Kurzweil rightly points out, a large number of feats that once seemed impossible have been conquered. Who’s to say how much further that line will progress — how many functions of the mind will be recreated before some limit is reached, if one is at all? One can approach and criticize particular AI techniques; it’s much harder to competently engage in generalized speculation about what AI might or might not someday be able to achieve.

So let me engage in some more of that speculation. My view is that the functions of the mind that require the most active intervention of consciousness to carry out — the ones that are the least amenable to habituation — will be among the last to fall to AI, if they do at all (although basic acts of perception remain famously difficult as well). The most obvious examples are highly creative acts and deeply engaged conversation. These have been imitated by AI, but poorly.

Many philosophers of mind have tried to put this the other way around by devising thought experiments about programs that completely imitate, say, natural language recognition, and then arguing that such a program could appear conscious without actually being so. Searle’s Chinese Room is the most famous among many such arguments. But Searle et al. seem to put an awful lot into that assumption: can we really imagine how it would be possible to replicate something like open-ended conversation (to pick a harder example) without also replicating consciousness? And if we could replicate much or all of the functionality of the mind without its first-person experience and free will, then wouldn’t that actually end up all but evacuating our view of consciousness? Whatever you make of the validity of Searle’s argument, contrary to the claims of Kurzweil and others of his critics, the Chinese Room is a remarkably tepid defense of consciousness.

This is the really big outstanding question about consciousness and AI, as I see it. The idea that our first-person experiences are illusory, or are real but play no causal role in our behavior, so deeply defies intuition that it seems to require an extreme degree of proof which hasn’t yet been met. But the causal closure of the physical world seems to demand an equally high burden of proof to overturn.

If you accept compatibilism, this isn’t a problem — and many philosophers do these days, including our own Ray Tallis. But for the sake of not letting this post get any longer, I’ll just say that I have yet to see any satisfying case for compatibilism that doesn’t amount to making our actions determined by physics but telling us don’t worry, it’s what you wanted anyway.

I remain of the position that one or the other of free will and the causal closure of the physical world will have to give; but I’m agnostic as to which it will be. If we do end up creating the AI-managed utopia that frees us from our present toiling material condition, that liberation may have to come at the mildly ironic expense of discovering that we are actually enslaved.

Images: Mr. Data from Star Trek, Dave and HAL from 2001, WALL-E from eponymous, Watson from real life

Alex Knapp Grades Ray Kurzweil’s Predictions

Over at Forbes, Alex Knapp has taken a look at Ray Kurzweil’s technological predictions for 2009 from his 1999 book The Age of Spiritual Machines. This is something we were planning on doing last year here on Futurisms, but never got around to — so we’re glad now that we don’t have to, thanks to Mr. Knapp! He finds most of Kurzweil’s predictions to be wrong. Here’s my favorite item:

“Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which had been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvements in the price performance of computing.” Comment: Not only did the tech bubble burst shortly after this prediction was made, leading to a decade of economic stagnation, it’s arguable that more and better computing actually made the financial instruments that caused the financial collapse possible. Wrong in every way.

(See Paul J. Cella for more on how a computational mindset in the financial sector can be dangerous.) Kurzweil makes a few good points in his rebuttal, which Knapp graciously posted on his blog — although one should take with a grain of salt Kurzweil’s citation of a detailed report showing his predictions to be highly accurate, given that he doesn’t mention that he himself was the author of the report.

Ray Kurzweil for Leader of Antiquated Tribal Political Council (a.k.a. Kurzweil for President)

Even transhumanists shudder to hear Ray Kurzweil described as their leader. But he’s running for president!

Well — not really. As my friend Aaron Saenz reports at the Singularity Hub, Kurzweil has been nominated for Americans Elect, an online organization attempting to draft a third-party candidate for the 2012 presidential election. He looks to be one of maybe a couple hundred candidates listed, and currently has 25 supporters (their top listed candidate is, shockingly, Ron Paul, with 1,746 supporters).


Saenz’s post has the details, among which is that apparently the Singularity Hub itself was involved in nominating Kurzweil, and Kurzweil may not even know about it himself yet. Looks like Internet-Kurzweil just became self-aware.

Of course, it’s a little strange for either Kurzweil or his followers to be getting involved in such an arbitrary, outmoded human institution as the American electoral process. After all, as Kurzweil wrote in The Singularity Is Near, “A charismatic leader is part of the old model. That’s something we want to get away from.” But I guess you’ve got to join the system to beat it.

Speaking of selling out (er, going mainstream), where else did Ray Kurzweil appear recently but in the Best Buy Super Bowl ad:

Computerized Translation and Resurrecting the Dead

Tim Carmody recently wrote a fascinating article on the future of computerized translation, noting that Google shut down its Translate interface for programmers (and later reopened it, but now as a paid service). Apparently, more and more of the data Google was using to refine its translation technology was drawn from pages that had themselves been generated by being run through Google Translate. As James Fallows put it:

The more of this auto-translated material floods onto the world’s websites, the smaller the proportion of good translations the computers can learn from. In engineering terms, the signal-to-noise ratio is getting worse.

One wonders what implications this has for the project suggested by the likes of Ray Kurzweil and David Chalmers to resurrect the dead by recreating minds from their artifacts, such as letters, video recordings, and so forth: if the mind is a “fractal,” as Kurzweil likes to claim, would such a project amplify the signal or the noise?
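
As a rough illustration of the feedback loop Fallows describes, here is a toy simulation; the dilution rate is an arbitrary assumption chosen for illustration, not a measurement of the real web or of Google Translate. If some fixed fraction of each generation’s new training text is itself machine output, the share of human-written “signal” in the corpus shrinks geometrically:

```python
# A toy simulation of training data being diluted by machine-generated text.
# The 15% dilution rate per generation is an arbitrary assumption, not a
# measurement of the real web or of any actual translation system.

human_share = 1.0            # fraction of the training corpus written by humans
dilution_per_generation = 0.15

for generation in range(1, 6):
    # Each generation, freshly machine-translated pages displace some of the
    # human-written text the system can learn from.
    human_share *= 1 - dilution_per_generation
    print(f"generation {generation}: ~{human_share:.0%} human-written text in corpus")
```

However crude, it makes Fallows’s point concrete: the noise compounds with every pass.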

They wuz robbed

Despite some promising early results, and finishing in 30th place in the online public poll, it looks like Ray Kurzweil did not, after all, make the Time 100 list of the most influential people in the world, which was ultimately selected by the editors to highlight the most influential “artists and activists, reformers and researchers, heads of state and captains of industry. Their ideas spark dialogue and dissent and sometimes even revolution.”

While I can contain my outrage, I have to admit that the result is bizarre given the stated criteria. Kim Jong Un, the done-nothing son of the tyrant of North Korea, makes the list, but not Ray Kurzweil? Prince William and Kate Middleton (notably counted as one person on the list) are a couple of cute kids, and I enjoyed watching their wedding, but what original or influential ideas have they had? Patti Smith but not Ray Kurzweil? Amy Poehler but not Ray Kurzweil? Lionel Messi but not Ray Kurzweil?

I’m hard-pressed to explain the result. Is it that transhumanism is not, after all, winning, let alone won? That those of us interested in it (for or against) are in fact merely patrons of a small and not yet fashionable intellectual boutique? Or is it that transhumanist goals are so mainstream (longer! better! faster!) that the team at Time can’t see them as anything but self-evident truths? Does the truth lie somewhere in between? Or is the list just another example of the sorry results you get when you try to repackage and extend the lifetime of mortal things like once-influential news magazines?

[Royal wedding image via Mashable.]

Link roundup

(h/t: Caitrin Nicol, Elana Clift-Reaves)

Humanity’s Last Breath

In his 2005 tome The Singularity Is Near, Ray Kurzweil has a section rebutting what he calls “the criticism from holism” — the idea that “machines are organized as rigidly structured hierarchies of modules, whereas biology is based on holistically organized elements in which every element affects every other.” His response is that “It’s true that biological design represents a profound set of principles … [but] there is nothing that restricts nonbiological systems from harnessing the emergent properties of the patterns found in the biological world.”

For the sake of argument, let’s suppose that Kurzweil is correct in claiming that all of the phenomena of the human being can be replicated on machines. Let’s instead consider a different proposition: that the transhumanist understanding of human beings is by its nature shallow and incomplete — in particular, that its methodology blinds transhumanists to aspects of human nature that are only apparent when the human being is considered as a whole, and in relation to society, culture, and environment. If so, then transhumanists are not able to recognize many of the defining characteristics of that “pattern” known as the human being, and so, by their approach, won’t be able to fully replicate and modify us — even if such a feat is in principle possible.

Kurzweil’s description of the replacement of the human circulatory and respiratory systems perfectly exemplifies this myopic methodology. Kurzweil notes what impressive “machines” the heart and lungs are but highlights their vulnerability to failure, and argues that we can replace them with machines that perform the same functions but with much greater efficiency and reliability. Soon a runner might only need to take a single breath to sprint a mile, and

Eventually… there will be no reason to continue with the complications of actual breathing and the burdensome requirement of breathable air everywhere we go. If we find breathing itself pleasurable, we can develop virtual ways of having this sensual experience.

This argument gets to the heart (a phrase that may lose its meaning if this scheme is carried out) of the transhumanist approach to the human being as a sort of primitive production economy just waiting for its own Henry Ford to break it into processes fit for assembly lines. At first blush (another phrase that draws its meaning from human respiration and circulation) the approach seems sensible enough, particularly in a case like this: breathing is simply a bodily function for providing oxygen for respiration, with the apparent epiphenomenon of a pleasurable sensation. Why not separate the two, maximizing both by making the respiratory function more efficient, and the respiratory sensation more pure and not dependent on the function?

But since Kurzweil here at least implicitly claims to be interested in replicating and improving all of the “patterns” of human existence, his scheme for replicating breathing should capture all of its goods before it sets about improving them. So let’s take a look at how his ostensibly complete account of breathing stacks up against other commonly available accounts.
Just to name a few:
  • A quick look at the scientific literature shows that breathing is not simply a respiratory process but, as a function of the autonomic nervous system, is integrally connected to other bodily processes. For example, as yoga instructors have long known, proper breathing is strongly correlated with overall physical wellbeing: labored breathing can contribute to, and breathing therapy can alleviate, stress and stress-related conditions such as hypertension.
  • In a New Atlantis essay from last year, Alan Rubenstein notes that “The activity of breathing demonstrates very nicely how action on the world can be initiated by an organism either deliberately, as in conscious breathing (think yoga, or simply ‘take a deep breath’) or ‘unconscious’ breathing (think breathing while we sleep or, in fact, most of the time that we are awake and not paying attention).”

    Further, he writes, “Breathing is an activity of the whole organism, an action taken by the organism, toward the world, and spurred by the organism’s felt need. The body of an animal needs what the world has to give and works constantly in its own interests to obtain it.”

    Rubenstein suggests that the absence of an organism’s impulse to breathe, its drive to continue its existence through a basic engagement with its environment, ought to be considered alongside the absence of heartbeat, brain activity, and awareness as one of the basic markers of death.

  • For Alexi Murdoch and Radiohead, to remember to breathe is to remember to be grounded in the world, to maintain sense and clarity in the face of confusion, alienation, and suffering. For R.E.M., to stop breathing is to surrender to these forces.
  • For Laika, breathing signifies a connection to wind and the seasons, the breath of nature.
  • For The Prodigy, Frou Frou, and The Police, to feel the breath of another is to have one’s being wrapped up in theirs. For Telepopmusik, to breathe is to be grounded in the world or taken out of it through another.
  • For The Corrs (among many others), to be in awe is to be breathless.
  • For Margaret Atwood, to love and be loved, to live for another, is to wish “to be the air that inhabits you for a moment only…to be that unnoticed & that necessary.”
  • For Roger Ebert, the feelings we have towards other human beings — as equal or lesser beings — are something we breathe.
  • For geography professor Yi-Fu Tuan, in Space and Place: The Perspective of Experience, “The real is the familiar daily round, unobtrusive like breathing.”
  • For Lydia Peelle, the Reasons for and Advantages of Breathing include a rootedness in existence that allows us the possibility of catching “a glimpse of the infinite.”
  • For Walker Percy, breathing is the first force of gravity that grounds a person in his own existence when he attempts to fly away from it entirely through scientific detachment: “I stood outside of the universe and sought to understand it…. The only difficulty was that though the universe had been disposed of, I myself was left over. There I lay in my hotel room with my search over yet still obliged to draw one breath and then the next.”
Just to name a few.

One may dismiss some of these understandings of breathing as unreal or unimportant. But if any of these aspects are deemed integral to our experience, it must be noted that none will survive the transhumanist decomposition of the human in general, and breathing in particular, into function and sensation. Even the attempt to isolate the respiratory function of breathing will run up against the place of breathing within the whole human body — its autonomic connections to other bodily functions — making the task of decomposition far more difficult in practice than its proponents suggest. But that’s only part of the picture.

In the basic act of breathing, there is not simply a feeling of pleasure and a coincidental act of sustenance, but a feeling of pleasure as an act of sustenance. The sensation of rhythmic breathing during a long jog, or of gasping for breath after surfacing from the bottom of a river, is not simply a feeling of pleasure as pleasure, like eating a sweet dessert, but the feeling that comes from the being’s act of sustaining its own life. No matter how accurate a virtual simulation of breathing, the sensation when divorced from function can never be the full phenomenon, the phenomenon of breathing as the act of a being working for its existence from the surrounding world. None of the other aspects of breathing — its connection to love, to spirit, to nature, to the experience of being — could survive either.

Transhumanists find the relationships between the various components of human existence quixotic, and best ignored. It’s easy to pick us apart, and so, they assume, it must be just as easy to put us back together — so even when it comes to a feature of our existence as basic as breathing, they cannot grasp that there might be some purposeful relationship worth preserving between what it is, what it is like, and what it is for. Transhumanists may succeed in making us into some new being, but it will be one bereft of all the everyday depths of experience to which they are now so blind.
[Image credit: “breathe” by deviantart user sibayak.]