[NOTE: From time to time, we invite guest contributors to Futurisms. This post is from Mark A. Gubrud, who has written widely on new and emerging technologies — including especially their implications for war, peace, and international security.]
This weekend, a philosophy professor will step up to the podium at Harvard University’s Science Center and drop a bomb on the audience gathered for the annual transhumanist showcase now called the “H+ Summit.” Although he shares their aspirations and even tells me that he “would love to be uploaded,” he will explain to the assembled devotees of transhumanism exactly “Why Uploading Will Not Work.”
I’ve seen a preview, and it’s a devastating critique — although it isn’t really new. Others have made essentially the same argument before, including me. In 2003, after years of debating this with transhumanists, I even presented a paper making very similar points at one of the previous transhumanist conferences in this series.
To understand why this is important, we first have to understand what “uploading” is and why it matters to the transhumanist movement. Put simply, uploading is the proposition that, by means of some future technology, it may be possible to “transfer” or “migrate” a mind from its brain into some new “embodiment” (in the same way one “migrates” a computer file or application from one machine to another). That may mean transferring the mind into a new cloned human body and brain, or into some other computational “substrate,” such as a future supercomputer with the horsepower to emulate a human brain.
From the latter stage of this transfiguration, the path would be clear to ascend beyond human physical and intellectual limitations simply by upgrading the hardware and software — adding more memory, faster processors, more efficient algorithms, etc. The mind (or consciousness, identity — or, shall we say, soul) could then become a being of pure information, immortal, flying freely in cyberspace, traveling interplanetary distances as bits encoded in beams of light, assuming any desired physical form by linking to the appropriate robot. One could copy oneself, disperse and later re-merge the copies. One could grow into a gigantic computerized brain (sometimes called a “Jupiter brain”) of immense power, able to contemplate the deepest mysteries of mathematics, physics, and the Meaning of It All. One could become as a god, even literally the creator of new universes, whose inhabitants would wonder who or what put them there. And one could have an awful lot of sex.
That’s the Promised Land of such transhumanist prophets as Hans Moravec and Ray Kurzweil. The latter predicted, in his 1999 book The Age of Spiritual Machines, that “nonbiological intelligence” will vastly exceed the collective brainpower of Homo sapiens within this century, and that the human race will voluntarily “merge with technology,” so that by 2100 there basically won’t be any of our kind left.
Now comes Patrick Hopkins, a transhumanist and professor of philosophy at Millsaps College in Mississippi, to break the bad news: Uploading just won’t work. As he explains in his abstract for the upcoming H+ conference, uploading
will not preserve personal identity. Transhumanist hopes for such transfer ironically rely on treating the mind dualistically — and inconsistently with materialism — as the functional equivalent of a soul, as is evidenced by a carefully examination [sic] of the language used to describe and defend uploading. In this sense, transhumanist thought unwittingly contains remnants of dualistic and religious categories.
Or, as I put it in my 2003 paper:
Arguments for identity transfer cannot be stated without invoking nonphysical entities, and lead to absurdities that cannot be avoided without introducing arbitrary rules…. Dualism is built into the language that Moravec uses throughout, and that we use on a daily basis, “my brain, my body,” as if brain and body were distinguishable from “me,” the true “me” — the soul…. Moravec does not use the word “soul,” but he uses words which are effectively synonymous.
Transhumanists maintain that they do not believe in anything supernatural; they usually abjure belief in God and in an immortal soul. Yet every explanation of and argument for the idea of having your brain scanned and disassembled, bit by bit, so that some kind of copy can be made by some kind of Xerox machine, contains some word whose function and meaning, in this context, are the same as those of that venerable word, and the ancient idea it stands for: the soul.
In the traditional understanding, the soul — dual of the body, and separable from it — carries or constitutes the true identity of the human person. The soul is what we feel in a person’s presence, what we see when we look into a person’s eyes, and it remains steadily in place as a person’s body changes over the years — despite the constant exchange of mere atoms with the environment. The soul, not the brain, is what is conscious, as no mere material thing can be. It is connected to that transcendent world of pure spirit, where, perhaps, all will be understood. In some accounts, the soul endures after death and goes to Heaven, or to Hell, or else hangs around in graveyards and abandoned houses. A voodoo priest can capture a soul and imprison it in a doll — more or less what the proponents of uploading hope to do by means of technology.
Thus Moravec, in Mind Children (1988), argues that some future “robot surgeon” might have the ability to probe your brain a few neurons at a time, building up a detailed model of those cells, until a computer simulation is able to predict those neurons’ firing patterns exactly. Then it could override the output of those neurons, effectively replacing them with simulated neurons. The rest of the brain would go on working as normal, and since the rest of the brain can provide input to, and work with the output of, the simulation, just as well as it would with the real neurons, Moravec argues that “you should experience no difference.”
Continuing the process,
Eventually your skull is empty, and the surgeon’s hand rests deep in your brainstem. Though you have not lost consciousness, or even your train of thought, your mind has been removed from the brain and transferred to a machine.
The premise that Moravec starts with — that you would not feel any different if some small number of your natural neurons were to be replaced by artificial neurons that provided the same input-output functions to the rest of the brain — might at first seem reasonable. Neurons die all the time, by the thousands every day, and you don’t notice any difference. If a few of them were replaced, how could you even tell?
But what if your brain cells all died at once, faster than the speed of neural transmission, say because a bomb exploded nearby? Would you notice then? No, you would be gone before you could feel anything. Death does not require our ability to perceive it; nor can we escape the Reaper by refusing to acknowledge him. In this we are unlike Wile E. Coyote, who can’t fall until he sees that he’s over the edge of the cliff.
Moravec claims that “you have not lost consciousness” at the completion of his process. This is powerful verbal magic, appealing to the sense that consciousness is an indivisible whole. Yet any number of experiments and observations from psychology show this to be an illusion. You are one body, leading one life, but your mind’s unity is a synthesis.
What is this thing, the “mind,” that Moravec claims can be “removed” and “transferred”? What exactly is it made of? Some say “information,” and that sounds appropriately scientific, but information has no existence, so far as we know, without the physical “substrate” used to “represent it.” When we speak of “information transfer” from one thing to another, we usually mean that some physical agent makes some physical measurement of the first thing and imposes related physical changes on the second thing. Pure information, completely separated from any physical matter or energy, would be something whose existence could not be distinguished from its nonexistence.
Even though transhumanists generally do not admit to believing in an immaterial “soul,” proponents of uploading continually invent or repurpose technical-sounding terms as stand-ins for that forbidden noun. Thus Moravec advances a theory of
pattern-identity … [which] defines the essence of a person, say myself, as the pattern and the process going on in my head and body, not the machinery supporting that process. If the process is preserved, I am preserved. The rest is mere jelly.
Not only has Moravec introduced “pattern” as a stand-in for “soul,” but in order to define it he has referred to another stand-in, “the essence of a person.” But he seems aware of the inadequacy of “pattern,” and tries to cover it up with another word, “process.” So now we have a pattern and a process, separable from the “mere jelly.” Is this some kind of trinity? Or is the “mere jelly,” once appropriately patterned and undergoing the processes of life, what real human beings are made of — that and nothing else that is known to science?
Similarly, Kurzweil argues that
we should not associate our fundamental identity with a specific set of particles, but rather the pattern of matter and energy that we represent.
If taken literally, this carelessly worded statement suggests that we are not our true selves, but mere representations of our true selves! But note again that Kurzweil points to the assumed existence of a “fundamental identity” which is distinct from the body. In other words, he is referencing the idea of the soul, and manipulating the dualism that is embedded in our way of thinking about people.
So it goes with every author who advocates the idea of uploading as a route to immortality and transcendence. They must always introduce some term as a stand-in for “the soul” and argue that by whatever process they propose, this object will be moved from the old brain to the new one. Otherwise, they would have to describe their proposal not as transferring “you” (your soul) to a new body, but as making some kind of copy — perhaps an “identical” copy, structured the same way at the molecular level, or perhaps a mere functional copy, “instantiated” in some different kind of “substrate” (as one might copy an Old Master’s painting to some pattern of ink dots on paperboard).
Having one's brain copied hardly sounds appealing, particularly if it requires, as any even remotely plausible technical scenario for uploading would, the complete disassembly of the original, since having your brain disassembled will pretty clearly kill you.
Thus the conclusion: Uploading cannot work, if we define its “working” as a way for people to escape death and transcend to existence as a technological super-being.
To be sure, these arguments against uploading are not novel — they go back years. In my own case, I’ve been making these points for around a decade, including in the paper I presented at the 2003 conference. At that conference — it was called “Transvision,” the annual meeting of the World Transhumanist Association (WTA), before the group was renamed H+ — I gave a talk in a session chaired by no less a transpersonage than WTA cofounder Nick Bostrom, who now heads his own well-endowed Future of Humanity Institute at Oxford University. (Note: The conference organizers enhanced my bio and awarded me an honorary doctorate for the occasion.) I also posted the paper on various listservs, provoking comments from another transhumanist luminary, Trinity College lecturer and past WTA president James Hughes, who accused me of beating a dead horse (although I was unaware of anyone else making these arguments about uploading at that time). So two of the most important leaders of the transhumanist movement were among those familiar with these arguments against uploading.
Which raises a question: Why only now does the leadership of the transhumanist movement see fit to acknowledge the serious case against uploading? Why will this weekend’s H+ conference feature not only Professor Hopkins with his critique of uploading, but also WTA cofounder David Pearce arguing that uploading “involves some fairly radical metaphysical assumptions” and that Kurzweil’s vision of a voluntary mass exit for Homo sapiens “is sociologically implausible to say the least”?
The answer may be connected to public relations. The transhumanist movement has begun to break the surface of public awareness, not only through the infiltration of its science-fiction visions and technophilic attitudes, but overtly as transhumanism. The further mainstreaming of transhumanism seems to require some P.R. maneuvering, including a rebranding (the glossy new name “H+”). It may also require a moderating of ambitions. The old “Extropian” dreams of uploading and wholesale replacement of humanity with technology may be too scary and weird for mass audiences. Perhaps more modest ambitions will have a broader public appeal: life extension and performance enhancement, cool new gadgets and drugs, and only minimal forms of cyborgization (implanting technological devices within the body). In other words, more Aubrey de Grey, less Hans Moravec; more public policy and less cyberpunk; more hipster geeks and fewer socially-impaired nerds. A kinder, gentler Singularity. Maybe even one with women in it.
Perhaps some of the transhumanist top guns recognized “uploading” all along for the ontological nonsense it is. Perhaps that’s why they now stand ready to throw it overboard like so much ballast threatening to drag down their balloon. It sounds too loony and it’s too easy a target, too obviously inconsistent with transhumanism’s claim to being a creed grounded in science and technology.
If so, distancing themselves from uploading is probably a smart move for the H+ leaders, but it risks a split with their base, and the formation of new, hard-core splinter groups still yearning for cyber-heaven, still committed to becoming something that is too obviously not at all human.
In the longer run, this strategy to divert attention away from transhumanism’s original and ultimate aims will not work — or, at least, I am hopeful that it will not. For transhumanism itself is uploading writ large. Not only is the idea of uploading one of the central dogmas of transhumanism, but the broader philosophy of transhumanism suffers from the same kind of mistaken dualism as uploading, a dualism applied not just to human beings but to humanity, collectively understood. Transhumanism posits that “the essence of humanity” is something that can be preserved through any degree of alteration of the human form. In other words, it posits that humanity has an essence that is wholly separable from living human beings, an essence that is transferable to the products of technology.
The early history of transhumanist writing helps emphasize this point. If you investigate the origins of the notion of uploading, you’ll find that initially, back in the 1980s and 90s, it was called “downloading.” Back in the days of green screens and floppy disks, far fewer people thought that existence as a computer program would somehow be a step up from being alive and human. The idea was just that it would be nice to have backup copies; in case of an accident, the atman file could be uploaded back to fresh-cloned flesh.
But as technology and the cult of technology co-evolved, “downloading” became “uploading” — a dream of ascension and transcendence that became a vision of rapture for the geeks. Transhumanism raises technology above humanity, and all but deifies it, at least making it an end in itself, and the end of humanity, rather than a tool to serve human ends. This seems a new low for philosophy, an upside-down morality, a grotesque distortion of the scientific rationality and enlightened humanism of which it claims to be a continuation.
The H+ leadership would like to hide this by underlining their own humanity and recasting their rampant technophilia as a human desire for betterment. But if the “H” stands for humanity, what is the “+”? If it is cyborg hybrids of man and machine instead of superhuman robots, is that so much better? Do we really want that to follow us, as the next step in “evolution”? Whatever the “+” may stand for, I am sure it’s not my kid.
A little history for you.
Transhumanism as we know and love or hate it today is an outgrowth of the fusion of the L-5 Society with life extension/cryonics that took place in the late 80's (I was very much a part of this). Eric Drexler's first book, "Engines of Creation," was essentially the old L-5 scenario on steroids, along with radical life extension. There was no mention of uploading, and none of us talked about this at the time.
Drexler's interest in "nanotechnology" was sparked by thinking about how to create the ultra-lightweight solar sails that were seen as the primary mode of deep space transportation for space colonization. Eric Drexler was one of the original members of the L-5 Society. His original concepts, which are actually starting to be developed now, were based on biological processes like protein synthesis and the like. Nanotechnology was seen as the tool to enable human expansion into space as well as to cure aging and ensure open-ended youthful life spans. These objectives are seen as quite tame and even quaint by many "transhumanists" today.
This obsession with uploading and AI is something that came along sometime later, mid to late 90's, I think. I don't know for sure because I was out of the country, living a life where seeing a woman with blonde hair became quite the exotic sight to me! The same goes for the singularity. I think it was Vernor Vinge's AI paper in 1993 that kicked the whole thing off, but I cannot be certain of this. I know this is when the extropians really got going, because I heard bits and pieces from friends of mine and I read the first GQ article about them, published around '94 or so.
What I can tell you is that the people heavily obsessed with uploading tend to be computer and software specialists who lack understanding and appreciation of bio-chemistry and biology. The biologically oriented people are involved in bio-gerontology as well as cryo-preservation. I know many of these people from the 80's. Many of them, like me, are quite skeptical of mechanically based nanotechnology, let alone uploading.
In short, I think uploading is a fantasy. We may get AI in the next 3-4 decades, but probably not sooner. However, that AI is likely to be very DIFFERENT from us.
I think we will remain "biological" for a long time to come, at least 3-4 centuries. However, even within the constraints of biology there are considerable improvements that can be made. Curing aging and increasing cognitive ability are obvious ones. Less obvious ones are multiple sets of chromosomes and other alterations to improve radiation resistance (desirable for space settlers), as well as modifications for living in reduced gravity. Another is thicker skin and stronger airway and gut muscles, to allow one to survive the vacuum of space for longer periods without a space suit.
This is over 20 years old, but does give some idea of possibilities that biology has to offer.
Your telling of transhumanism's prehistory is very interesting, although I think it represents just one thread of the loom.
It seems to me that the focus on AI and the notion of uploading received a big boost from Drexler's sketching, in Engines, of more or less plausible scenarios for computers orders of magnitude faster and denser than the human brain, and for the wholesale restructuring of the body at the molecular level. The influential books by Moravec and Kurzweil followed, and Max More's "Extropian" cult became the nucleus of the broader transhumanist movement. Vinge's 1993 paper seems to me a relatively minor contribution, but it did crystallize the idea of the coming Singularity.
In the previous thread, I asked if you could point to any technical errors in the exposition of "mechanical nanotechnology" as presented by its primary technical authors. Since your responses did not point to any technical errors, and since these authors have advanced detailed technical arguments, I chose not to reply.
However, I would point out that, as a reading of current technical literature demonstrates, "mechanical" nanotechnology is just as much an emerging reality as its "biological" counterpart. While it may be true that biotech, in crude forms, is already a large industry, that biotech is very far from the wholesale remaking of the body that you suggest will eventually be possible. As for self-assembly vs. directed or robotic assembly, many people think in terms of a hybrid, snapping together molecular blocks like legos in arbitrary patterns, using some kind of robotics, masking, etc. Drexler, in particular, has always maintained that the use of proteins or some other "foldamer" would be a route to a more general and higher-performing style of nanotech.
You suggest that "We may get AI in the next 3-4 decades, but probably not sooner." I don't know what you base this on, but people have different estimates of how long it will take to reach particular technical objectives. I do agree (and earnestly hope) that AI will be completely different from human intelligence, but I would be surprised if, 30 years from now, there were not systems that can do anything a human can, including act convincingly like a human. Artificial intelligence already exists, after all, and it vastly outperforms us for many purposes.
Then you say that “we will remain ‘biological’ for… at least 3-4 centuries.” But of course, none of us will be around 300 years from now. The question is whether humanity will still be around.
I second kurt9's remarks. The '70s transhumanism I encountered as a teenager didn't mention mind uploading, and therefore doesn't even depend on the concept. You don't find mind uploading in Robert Ettinger's 1972 book Man Into Superman, for example, which received a favorable review that year in New Scientist magazine.
I never found mind uploading especially compelling myself, even though Ray Kurzweil in his singularity book credits me with coining the word "singularitarian" almost 20 years ago.
Why don't we lose our identity/become different people when the constituent proteins making up our cells are continuously renewed? Is the soul transferred from the old proteins to the new proteins? How is that scientific?
You ask a very good question, one of the key questions which lead people into the nihilistic wilderness in which a noxious weed like transhumanism can flourish.
As a physicist who would be quite astonished if the result of some experiment could not be explained without recourse to the supernatural, I believe that body and soul are one; that is, each is just a word for an aspect of what we know and perceive about ourselves and each other.
We do indeed change over time as our atoms and molecules are replaced, and more importantly as we grow, learn, mature, age, live and die. We are the same people, across time, only by reference to the physical facts. Each of us is a single life, a single body, continuous from birth to death. There is no deeper fact that justifies our notion of personal identity.
Various thought experiments, involving uploading, teletransporters, cryonics followed by reconstruction, multiple copies, etc., so violate the given rules of human existence, that we don't know what to make of personal identity in such cases. There really just isn't any objective fact (e.g. one that is falsifiable by a physical measurement) about identity in such cases.
If you say, for example, that someone has his head frozen upon death, and 200 years later it's disassembled and a new brain and body are made on the pattern of the old, well, you have just described your scenario, and there isn't any objective fact about whether the reconstruction is actually the same person who died 200 years ago. If you say it is, I will ask: What if two copies were made? But certainly, the reconstruction will believe itself to be the same person. I use the pronoun "it" because, after all, the reconstruction might be an upload, and not even human. In the latter case, we have a definite basis on which to reject the claim of identity; but even so, the objective fact is not that identity is not preserved, rather that the reconstruction is not a human being. Then again, it might be a human being, albeit one created by technology. There still is no fact to justify our agreeing that it is the same human being who died 200 years ago (rather than a copy of him), or to justify that person's going to his death believing that he was just taking a nice nap and would wake up 200 years later (instead of, say, 201 years later, as a second copy, or as a third, etc.).
You see that, if you do believe that physics describes the universe, if you don't believe in ghosts, then we really are alone in a metaphysical wilderness. Except we are not alone — we have each other.
Transhumanists look outside the human community for a source of meaning and moral order. They believe in a great story of Evolution, Intelligence, and Destiny. It's kind of a throwback to a pre-Copernican worldview.
Humanists understand that this is a random universe, to which we bring our own meanings. Humanists treasure humanity and nature, while regarding technology as the tool we use to protect these primary values, rather than a primary value in itself.
This really is a debate about human values and the future of humanity. There still ain't nobody else here.
Bio-chemistry works! We know this because we exist.
This means we can make nanotechnology based on bio-chemistry, even if it takes us five or more decades to develop it into a mature technology. Everything that is actually being made in labs is also based on bio-chemistry, from Venter's "synthetic" organisms to Seeman's DNA "robots" and "assembly lines". All of the "dry" nanotech remains theoretical design work.
Yes, Drexler does talk about AI capability in "Engines of Creation". However, he presents it as a supercomputing technology, something to be used as a tool, rather than as a sentient being or something we upload into. I got one of the first 20 copies of the original hardcover from Drexler himself at the space conference in 1986.
It is true that the Extropians, who were definitely cult-like in the early to mid 90's, were the first people I know of who talked about uploading. Max More (formerly Max O'Connor) used to talk about "living in cyberspace" when he started Extropy magazine in 1989. I left the scene in 1990 when I attended grad school, then moved to Japan in the summer of '91. I do remember that the Extropians talked a lot about Vernor Vinge's 1993 paper. At least it was in all of their writings.
I agree with you that the biological paradigm of transhumanism is only one thread of the whole ball. However, it was the original one in vogue at the time (late 80's), when I had face-to-face contact with the people involved. I still have face-to-face contact with bio-gerontology and cryonics people (we had a meeting in Portland last Sunday that Aubrey de Grey attended).
I do find extreme amusement in the dualistic underpinnings of Transhumanism and the Singularity. When I first encountered these groups, I, like many of my command-and-control, logic-worshiping brethren, bought this claptrap like it was the Gospel. Why wouldn't we? We were trained to not question orders and to follow the leader from birth. Luckily, I broke free by deeply questioning the tenets of these respectable, albeit incorrect, men.
"A kinder, gentler Singularity. Maybe even one with women in it."
I laughed. Actually, that was the first thought I had when I entered the group…Where are all the women? Logic (and expertise in it) has always been a man's game. Yes, women play…But they play by men's rules. (If you want to create an Artificial General Intelligence, why not ask a woman? They would most likely understand the problems that plague the logical approach: Novelty and Salience.)
The fact of the matter is, if you argue this information long enough with a hard-core mind-upload Transhumanist, they will likely run out of answers and fall back on science fiction as a support for their argument. Watch for it.
It's silly to say that transhumanists value technology as an end in itself. The Transhumanist FAQ and many other transhumanist documents make it clear that we see technology as a tool for opening up new possibilities to explore, not as an end in itself. In fact, I wouldn't even consider AI as a "technology" so much as a new kind of mind.
Your description of transhumanism as a "noxious weed" is surprising. Why are you so passionate about your condemnation of it? Are you afraid that transhumanism will lead to a chaotic world without order, right or wrong? If so, I can sympathize with your concerns.
We are alone in the metaphysical wilderness, that's right. There is no fact of the matter about identity, because identity is just a concept. Some people may feel that they've continued to be the same person through the uploading process, some might not. We can disagree because the only facts are physical facts.
I treasure humanity and nature, I just don't view them as static Platonic forms, but developing and transforming fuzzy categories that need to be nurtured to grow. For instance, for most of history, homosexuals were viewed as abominations, but now, some places in the world respect them as human beings. Today, transgendered pioneers and a rich counterculture continue to push and challenge the boundaries of humanity, while religious conservatives recoil at anything not in agreement with their narrow definitions of humanity and family.
As Mark and Kurt point out, many transhumanists don't even buy into the concept of mind uploading anyway, but I should note that this is mostly first-generation transhumanists affiliated with the extropian mailing list. Mind uploading advocacy is pretty much de rigueur among younger transhumanists, especially in California.
In 1993, Vinge's paper changed the whole landscape of transhumanism, and today's transhumanism is more of a reflection of Moravec, Kurzweil, and Vinge than Ettinger and Drexler, who to my knowledge don't even call themselves transhumanists.
If you agree with Derek Parfit about there being no further fact of personal identity aside from the physical facts, then the important thing to individuals is which continuants they identify with. My identification might have a narrow reach, so that I would not identify with, and be concerned about the fate of, the being that would follow if I developed amnesia from some organic trauma. Or I might care about that successor almost as much as a non-amnesiac continuant. I might consider the cessation of consciousness in sleep or coma to be death, followed by the formation of a new individual, or I might not.
Likewise, I might identify with a silicon-based computer designed to closely imitate my memories, personality, and concerns (reaching a level of behavioral similarity comparable to the ordinary level of similarity between different time-slices of myself), or I might not.
If what we mean by "physical system X will be me" or "I will survive in physical system X" is just that we have this sort of concern for and identification with system X, as we do for our ordinary temporal successors/time-slices, then where's the ontological weirdness?
Also, the article would benefit from a bit of academic context, e.g. the fact that views like multiple realizability and Parfit-style accounts of personal identity are widespread and influential in philosophy, not idiosyncratic transhumanist inventions. Look at the stats in the PhilPapers survey for philosophers who believe that one survives Parfit's teletransporter case.
Uploading really isn't that complicated, though it is non-intuitive. Either you deny the concept of self entirely, including your current self, which is continuously changing as you move through time, or you acknowledge that the 'self' is ambiguous and that what constitutes your 'self' will be the same whether the cells are aging in situ or being replaced one by one, or completely frozen and re-instantiated in another substrate.
If you say uploading is impossible then you are also making the claim that identity and the concept of 'self' are meaningless, which I don't think you'll admit to, since I suspect you (and most everyone) are tied to the concept of your self. You can't have it both ways.
Would you not mind if someone killed you tomorrow, since your tomorrow self is not the same self as today? What about people with cochlear implants, or people suffering from Parkinson's who have deep brain stimulators? Are you saying they are not 100% their 'selves'? And if they continue to replace parts of their brains, at what point do they cease being their 'selves' and become mere 'copies'?
Camus was prescient. Either you deny your self and admit existence is absurd (and maybe commit suicide as Camus sought) or you concede that the 'self' is ambiguous and uploading is as viable as maintaining a 'self' from chronon to chronon.
If you don't see technology as an end in itself, what do you mean by saying that "I wouldn't even consider AI as a "technology" so much as a new kind of mind"? What difference does its being a "mind" make?
I don't view humanity and nature as "Platonic forms" either, but as living realities, now threatened by the specter of technology which has escaped human control and allowed to have its own ends.
Transhumanism is noxious because it would allow, or even promote the latter.
Homosexuality and people who want to be the opposite gender have always been with us. To compare an aversion to cyborgization and technology gone wild with the oppression that people of different colors, cultures and gender orientations have suffered throughout our cruel history is to trivialize the latter.
At least men who want to be women want to be women and not robots. Maybe future VR technology will enable people to explore cross-gender fantasies without requiring a crude mutilation of the body.
I've read a bit of Parfit but I hadn't in 2003 when I wrote "Bursting the Balloon," so I didn't need Parfit to tell me that questions about whether object B at a later time is the same as object A at an earlier time are most generally questions not about objects A and B but about the subject who is regarding them, and that the best way to reveal the truth about transporters, copiers, uploaders, etc. is to describe them in purely physical terms, since that's all there is.
You might choose to "identify with" any number of objects, but that does not change the fact that any of the above-mentioned devices will begin work by disassembling your brain.
Of course this blog post is not a review of the academic literature on philosophy of personal identity, whatever the esteemed philosophers variously mean by that term. As for the polls, I'm sure 4 out of 5 theologians believe in God, too.
I am my body, my body is my soul, my soul is my body, my body and soul are my life. I was born, I live one life, I will die. This is my concept of self, firmly rooted in physical reality.
I am human, I have human children, I don't want to die too soon (or at all, really, but that's how it is), and I want my children to live, to have children, and so on. I don't want them to believe that life is meaningless, and I don't want their world to be taken over by out of control technology.
Also, Carl, I wanted to add that if you are concerned about the fate of some future copy of your brain, you should be very careful not to allow your brain to be disassembled after your death, or frozen for future disassembly.
Once the data file exists, someone could make any number of copies. Some could be enslaved, others subjected to unspeakable tortures, others imprisoned in no-input-no-output oubliettes.
Laws could be passed to forbid this, but you better hope computer security is better in the 22nd Century than it is now.
this is a fallacy. you are using an argument that refutes the very argument you are making. transhumanism is merely a meme to motivate action and critical thought about any and all future positive outcomes or the avoidance of negative outcomes. any reasonable transhumanist understands and affirms the notion of humanism in their own mind, yet still dares to venture out and practice some healthy foresight in order to potentially protect these aims. i see technology and society at large as an unstoppable emergent system of unknown, subjective, and psychological parameters coupled with controllable, objective technologies that underpin these aforementioned sociological systems. most of the time, the human animal acts as a medium for various hedonisms, using its large brain and relatively effective problem-solving and speculative algorithms to respond to the environment intelligently. for counter-example, a person with autism or a very young child does not have a working theory of mind, and can sometimes exhibit asocial behaviors as a result. however, a successful businessman or the chief of a native american tribe will usually have a reasonably good theory of mind and is able to predict emerging social patterns more intuitively than an autistic person, although to some degree, an autistic person can learn some social skills through "working in the dark" as i like to call it. one must argue solipsism if one wishes to deny with certainty of assertion the possibility of future technologies helping us to understand the mind as a stand-alone concept. you argue epiphenomenalism (which i can't decide whether you argue for this or a poetic existentialism) in this thread, which does not rule out the possibility of understanding the mind reductively, and is a red herring in terms of the discussion of the humane benefits of inanimate technology and its possibly humane/inhumane future. tldr: you argue hypothetics as intrinsics / it's far too cynical/unproductive/self-refuting
I'm aware of that sort of issue, but why bring it up? Are you suggesting that those sorts of concerns are reasons not to be concerned about such successors? One could make a similar argument about old age: it's terrible to identify with one's future selves afflicted with painful ailments, since then entities one cares about will suffer. So we should think of our older successors as separate people.
If you just want to raise the possibility that the future could be much nastier than the present, and the importance of working to avoid that, I'm all with you.
No, Carl, the point is really that whatever anyone does with such a data file obviously does not affect you, or at least not any more than whatever might happen to other people, or say, other species living elsewhere in the universe.
In particular, it's hard to see how any particular promise about what happens to the data file should cause you to agree to having your brain disassembled, as long as you would not otherwise be suicidal.
As for discounting the future, people do this, and we see the results in people's lives. This points to the need for a somewhat deeper moral philosophy than anything currently being discussed in this thread.
>"I am my body, my body is my soul, my soul is my body, my body and soul are my life. I was born, I live one life, I will die. This is my concept of self, firmly rooted in physical reality."
Actually, that is only firmly rooted in our false intuition of what physical reality is, science informs us otherwise. 'My body is my soul' adds nothing to the discussion. Do you have a different 'soul' after you amputate your leg or go blind? Does 'I will live one life, I will die' include going unconscious during surgery or going into a coma then waking up possibly months or years later? Are you awaking as a separate person who is not you, you having died?
It is apparent that you are arguing purely from an emotional bias, it seems you just don't like the idea of uploading, which is fine. If the idea is abhorrent to your notion of what humans are then you can be honest about that, just don't try to fool yourself or others into reasoning it away as some impossibility. If you don't like what you or the world will have become after an upload, if you are too committed to your present form, then that is a fine and respectable stance to take, but don't confuse that with the claim that uploading is impossible.
Again, you can't have it both ways. You can say that the 'self' is a false concept or illusion anyway, like existentialists who view life as absurd, or like (real) Buddhists with their concept of 'Anatta' who come to terms and accept 'not-self' as reality and give up improving the self as unnecessary. Or you can do what most humans have always done and want to expand the opportunities of experience, which logically moves towards the direction of uploading.
It's hard to take seriously the conjunction of these two claims:
1. Personal identity as a concept doesn't carve the world at its joints, it's something that we project onto the world with no further fact of the matter.
2. The unique, obvious, and true account of personal identity (which should guide our practical choices and moral judgments) is a certain sort of biological continuity.
At that point it seems you're left asserting your naked personal intuition, which isn't very convincing to those who have considered all the cases you raise and have different responses to them. I'll leave it at that.
Re your "two claims," where did I make either? If you can show me how something I wrote got you so confused, I will try not to cause so much confusion in the future.
I think a correct account of "personal identity" is that it is just another stand-in term for "the soul," in this case one preferred by certain academic philosophers.
Thus, while Moravec prattles about "pattern and process" and others about "brain information" or "algorithm," and still others "the essence of a person" or "what makes me who I am," philosophers discourse about "personal identity."
Moravec conjures verbal voodoo to create the illusion that he knows how to capture a soul and transfer it to a machine; likewise certain academic philosophers weave (doubtless more refined) verbal magic to create the illusion that it is reasonable to believe that one can survive having one's brain disassembled.
They argue that "personal identity" can be transferred in such cases, as if it were some mobile thing. Or, sometimes they say, "We may regard it as…" which translates to, "Let's pretend."
As you see, I am in general not very impressed by this literature. Its dissection is a project I will defer until I have more time.
What you refer to in #2 as "a certain sort of biological continuity" is of course the only sort of continuity of personal existence which humankind has any experience of. It is clearly the source and the reference point of the notion of the soul, or "personal identity" if you prefer.
And this notion of "personal identity" does have an unambiguous meaning as long as we stick to the ordinary facts of human existence. That it breaks down when we introduce hypothetical pathological cases does not change the fact that any remotely plausible scenario for teleportation, uploading, copying, etc. requires the disassembly of your brain.
The meaning of my "body and soul are one" is just that, if you say we are just our bodies, people visualize the flesh, and they object that there is more to a person than is captured in that image. So they talk about a person's soul, which brings forth images and impressions of another kind. Clearly the words "body" and "soul" refer to different aspects of (ways of looking at) one physical reality. Unless you, too, believe in some kind of separable duality.
I never said "the 'self' is a false concept or illusion." I exist, and obviously you do too. Plus, I'm pretty sure you're human, just like me. You are your body, but not just your body; your body is everything that you are.
I get the idea you reject functionalism (but accept reductionist materialism??). Every atom (and its fundamental particles, of course) has a unique "vibration". And not only does it exist, it's also essential for your specific consciousness. So consciousness isn't the arrangement (or pattern), which you reject as some kind of dualism.
The author needs to take a course in personal identity and philosophy of mind. The uploading idea is not dualist, it's functionalist. That distinction is clear to anyone who has studied the subject. As a transhumanist referred to in this article (falsely as leader of a "cult" — a typically dishonest attempt to dismiss ideas), let me note that I wrote my PhD dissertation on this very topic.
I should also note (as have one or two comments here) that you can certainly be a transhumanist without accepting the possibility of uploading.
After posting my previous comment, I realized that I should acknowledge that, of all the people associated with transhumanist or uploading views, Hans Moravec may very well be the one who could legitimately be called a dualist. In some of his writing, and in personal conversation, he has appeared to be a Platonist who sees mathematical abstractions as the primary reality.
Unfortunately, too many critics of transhumanism or of uploading have fixated on Moravec (for instance, Erik Davis, in his Techgnosis). The vast majority of those who think that uploading is at least theoretically possible are functionalists who reject Platonism. That means that we can survive even while leaving behind this particular body, but not unless we have some kind of physical embodiment (even if distributed).
Hello Max, good to see you here behind enemy lines.
I don't think Moravec is a dualist, because as a scientist he accepts that mentality is generated by physical phenomena. From reading his works, I guess he thinks mental landscapes are _as important as_ the underlying physical phenomena, but he does not deny they are generated by physical phenomena.
He accepts that we can survive even while leaving behind this particular body, but not unless we have some kind of physical embodiment (even if distributed), but he thinks the actual physical embodiment can be very different from our current physical embodiment (computronium brains and even stranger substrates described in his books).
My own position is: Mind Uploading is feasible in principle. This is the only position compatible with materialism, the scientific method, and current scientific knowledge. Denying this is falling into vitalism and mysticism. Our bodies and brains ARE machines which operate according to the laws of physics, machines which can be fully understood by science and improved by engineering. We ARE information, and information CAN be transferred from one computational substrate to another.
My more detailed comments to this article:
Honored as I am by your taking the time to comment, I remain unimpressed by your appeals to academic credentials (particularly in this subject) and curriculum recommendations. Either you engage the arguments in some effective way, or not.
Perhaps you could explain what the difference is between your "functionalism" and the dualism I criticized. If you only mean that an "upload" could "function" as if it were the late human, I don't dispute that. The question is, function for whose benefit? The late human is, well, dead. If you deny that, you are back to dualism.
I read your PhD thesis years ago, and I would summarize your argument as "I can become a cyborg or pure machine and it will still be me because I say it will be and because I choose to do it." That's a comprehensible position, wrongheaded as it may be, but it doesn't answer any of the points made here.
I characterize the transhumanist movement, and even more so its "Extropian" forerunner, as a "cult" based on my experiences with members of this cult. That's not to say y'all are planning a mass suicide – or are you?
I focused on the arguments of Moravec because they have been particularly influential and because they are particularly well-expressed and effective – "verbal voodoo" as I put it – in persuading people that, after all, it might just work… it has to… I mean, if you never even lose consciousness…. Very nice, and it requires a fairly radical rethinking of assumptions to free one's mind from the grip of Moravec's "robot surgeon."
I agree that uploading is feasible in principle, but only by means that kill you.
You say we "ARE information," but other than capitalizing "ARE" you offer no response to the point made that pure information, separated from matter or energy, is nothing at all. I have pointed out that identifying the soul (i.e. what we "ARE") with "information" is just another form of dualism. I don't see your response to that.
Maybe even one with women in it, huh? Sounds splendid. Radical feminist Shulamith Firestone strikes me as an ideal icon. Way back in 1970 she advocated employing technology to liberating ends such as artificial wombs and full automation through machines as smart as people. Though not yet considered a transhumanist writer in any circle I'm aware of, she fits right in with her discussion of the tyranny of biology and ambitious vision for social transformation.
I agree that "pure information, separated from matter or energy, is nothing at all." My point is that copying information to a new material substrate can result in an entity indistinguishable from the original For All Practical Purposes (FAPP).
My favorite answer to objections to "we ARE information" is, "what else may we be?" As I said earlier, this seems to me the only position compatible with materialism, the scientific method, and current scientific knowledge. Denying that we are information is falling into vitalism and mysticism, because in order to deny that we are information you have to posit some kind of non-physical "élan vital".
Mark: "I agree that uploading is feasible in principle, but only by means that kill you."
Of course, this depends on who is you. Or, more precisely, whom you can and want to accept as you.
There was a toddler playing on the beach so many years ago. I remember less than 1% of his memories, and his memories are less than 1% of mine. The cells in our bodies are all different and we have quite different personalities. Question: in what sense are we the same person? Answer: in the sense that I say so.
Before going to sleep, I could think that I will cease to exist and another person, who remembers most of my memories, will wake up in my bed tomorrow morning.
This cannot be theoretically disproved, but of course thinking so would be masochism. Instead, I choose to think that I will sleep, and the same I will wake up. We all do.
I can make this choice because experience tells me that today’s me has always felt like, and accepted himself as, a continuation of yesterday’s me. And today, I am willing to accept tomorrow’s me as a valid continuation of today’s me. There is continuity, because the perception and acceptance of continuity is never broken.
The same applies to uploading. Note that most people would accept teleportation (you disappear here and an identical copy appears there), which logically is the same as uploading. This tells me that the difficulty is mainly psychological.
I am persuaded that, once mind uploading is a reality widespread in society, all issues related to personal identity will disappear like snowflakes in the sun, and everyone will just assume continuity of personal identity after uploading.
You might say that survival by means of copying is not what we evolved to do, and thus does not count as survival at all. We evolved to relate to continuous existence in spacetime without direct copies or the possibility of copies. (Of minds, that is.)
The counterargument is that we should not fear death from disassembly in the copying process because that is not something (or even in the same class as something) we evolved to fear. The fear you achieve by denying survival in the case of copies is just as devoid of evolutionary context as any realistic hope of functionally living on in a copy.
Patterns are a materialistic phenomenon. The fact that a substrate of matter is required for them to persist does not negate their existence as material entities. It is true that the separation of pattern from substrate is an abstraction, i.e. a useful fiction — no matter exists without a pattern of some kind, nor pattern without a substrate. But continuous existence as a particular human being is likewise only a useful fiction.
Time itself creates an infinite series of slightly modified copies, which relate to each other via memories and projection — exactly as would continue to be the case if an atomic-precision copy was created of yourself.
I agree that "survival by means of copying is not what we evolved to do".
On the other hand, I can't agree that "We evolved to relate to continuous existence in spacetime", etc, because I'm not even sure what you mean by this.
Seems to me we evolved to live, hunt, gather and grow food, make art and music, figure stuff out, invent tools, dance, mate, have children, take care of ourselves, each other and our children, gaze at the stars in wonder, try to stay alive as long as we can, and eventually die (to make a long story ridiculously short).
My own evolutionarily-determined "programming" causes me to fear death and wish to avoid anything that might kill me, including having my brain disassembled by a copying machine.
I do try not to fear the inevitability of death, since that sort of fear can only hasten death, while making the meantime unpleasant.
When you say "Time itself creates an infinite series of slightly modified copies," the image is the same as in Zeno's paradox, and it is a false image. There is no "series," but only one continuing reality.
It might be possible to make a copy of a human being which would be fine for all the practical purposes of employers, generals, and other people who use other people for practical purposes. However, such copying would not be of use to the person who wishes to avoid death, since any reasonable proposal as to how this could be done will require the destruction of the person.
What else may we be besides "information"?
Human beings, Giulio, physical, living, humans. People.
In what sense are you the same person as that toddler (I imagine not so many) years ago?
In the very sense in which "the same person" has always been meant, i.e. that the toddler survived, grew up, and here you are, having led one continuous life from your conception to the present time.
Before going to sleep, you could think that you will die and somebody else will wake in your bed tomorrow, but it would not be conducive to sleep. Furthermore, it would not be true, unless it were true, i.e. unless somebody did come along, kill you and take your bed. That would be very different from the usual scenario, in which you just sleep, and awake very slightly changed, mostly feeling better.
I mean, yeah, the whole world could just be an illusion set up to fool you, as Descartes famously imagined. But you don't really think so, because you are not so stupid, and anyway, I can tell you it's not so. I'm here, living my own life, having this conversation with you.
I'm not sure that most people would accept teleportation, particularly if it were explained to them that it involved disassembling them in order to create a copy somewhere else.
We can of course imagine that some person would accept teleportation by copying, and after a "visit to Mars" the copy of the copy would succeed in convincing friends of the original that she was the original herself, and that they should try teleportation too, and that gradually many people would be persuaded to get into the machines, and that the copies of copies of copies of copies would remember having "teleported" many times and think nothing of it.
But that seems pretty unlikely to me, especially if these people are as intelligent as people today, or even more so. They would probably think about it. They would probably be trembling as they stepped into the machines, struggling to suppress their emotions with transhumanist catechisms. And when the copy woke up at the other end, she might go crazy at the realization that she was just a copy, and that the original person was killed so that the copy could be created, and she might destroy herself, or she might develop a heightened, morbid fear of death or of ever stepping into a teleporter or copying machine herself.
This could create a serious refugee problem on Mars.
I agree with Max More that you need to take a course on the philosophy of mind; as a philosopher, I found much of your discussion extremely confusing and muddled.
To begin, you need to distinguish between personal identity and the mind. These are *different* phenomena, each of which has its own distinct set of competing theories. (Indeed, further distinctions may be needed between what Ned Block terms “access” and “phenomenal” consciousness; look it up if you must.) Thus, since personal identity and the mind are distinct issues, one can hold, e.g., that an uploaded mind is indeed conscious (*just like* you and I), yet it is *non-identical* to the original mind, even if the original is destroyed gradually. Or one can hold that the uploaded mind is conscious and identical to the original; or one can hold that the uploaded mind would *not* be conscious at all (as people like Searle would argue). And so on.
Functionalism is a thesis about what *mental states* are; a particularly common version of functionalism is called “computationalism,” a view that sees mental states as functional states whose relations are computational in nature. Virtually *all* of contemporary cognitive science is founded on computationalism. If this position turns out to be true, and it looks like it will, then it follows that an uploaded mind *will* indeed be conscious (*just like* you and I), with thoughts and feelings and experiences and so on. It will have a “soul,” insofar as materialists are willing to say that we humans have (not non-physical) “souls.”
Notice that there is absolutely nothing dualistic about this: minds are “organizational invariants,” which means that if a physical substrate instantiates the right causal-functional organization, a mind will “arise.” Descriptively, minds are distinct, but ontologically, they are not. (By analogy, one ought to make a strong distinction between the software and hardware of a computer; but no one would argue that software involves anything non-physical – quite to the contrary!!) If you have trouble with this point, consider a dot matrix whose dots form an image of (say) a house. At the right distance, one sees not only a bunch of dots, but the house too. Are there two entities here? Well, yes, of course there are; but not in the ontological sense! The image of the house merely “supervenes” on the matrix of dots, just like the mind supervenes on the brain, or any other appropriately organized substrate, according to computationalists. (So, one can be a descriptive dualist without being a metaphysical dualist.)
[See Part 2 below]
(Part 2, continued) Now, the “patternism” you mention above is essentially an attempt to apply functionalism to the *distinct* problem of personal identity. The claim here is not that *mentality* is the result of specific patterns, but that the *self* is equivalent to some sort of temporally enduring set of patterns – that is, as the materiality of the substrate “implementing” these patterns undergoes constant changes. (Imagine the dots of the dot matrix mentioned above being erased and then quickly redrawn, or changing from black to grey to black again over time, all the while preserving the "emergent" image of the house.) Although I myself am highly skeptical of patternism about the self, the point is that it is *not* a dualistic position, at least not in any metaphysical sense, any more than the view that computer software is distinct from hardware is a dualistic view. (In fact, patternism is a version of the “psychological continuity theory” of the self, which was popularized by Locke and is currently one of the most respected positions among *materialist* philosophers.)
(I should add that, personally, I am a computationalist about the mind, which means that I think mind-uploading *will* produce conscious beings, if done correctly; but I am also highly sympathetic with the “no self” view of personal identity, which means I disagree with Kurzweil and others that we can upload *ourselves* to a computer.)
I suggest you take a look at (at least) these extremely helpful, and very introductory articles: (1) http://plato.stanford.edu/entries/functionalism/, and (2) http://plato.stanford.edu/entries/identity-personal/. There is also an excellent paper by Susan Schneider on personal identity and transhumanism here: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1037&context=neuroethics_pubs.
PS: (There is so much muddlement in your article that I must add a "PS".) One *can* talk about “essences” and be a physicalist. There is absolutely no problem with this. In fact, one of the domains in which essentialism is most secure is chemistry: each chemical element, for example, is identified according to its particular “essence” – a set of properties that are necessary and sufficient for something to be of a specific kind, e.g., the kind of hydrogen, or lead, or whatever. Again, I think you are way outside your area of expertise with this article, and I would strongly encourage you to stay within it from now on. There are good critiques to be made of transhumanism, but this isn't one of them.
Giulio: What else may we be besides "information"?
Mark: Human beings, Giulio, physical, living, humans. People.
This is not an answer but a circular statement, because in the present context "human beings" and "people" is precisely what we don't agree upon. So we are left with "physical" and "living". See below.
Giulio: In what sense are you the same person as that toddler (I imagine not so many) years ago?
Mark: In the very sense in which "the same person" has always been meant, i.e. that the toddler survived, grew up, and here you are, having led one continuous life from your conception to the present time.
The toddler was playing on the beach 45 years ago. He was 7, which makes my reference to 1% of common memories more or less correct. Coming back to "physical" and "living", note that that toddler does not exist anymore in terms of physical continuity of living matter as explained on Michael's blog:
If you insist on "physical" and "living", that toddler is dead. Gone. Disappeared. He does not exist anymore in our universe. R.I.P.
But I don't feel very dead, and actually I feel I am that toddler. So we need a definition of personal identity more robust than one based on physical continuity of a biological organism. A definition based on pattern continuity is much more robust and operational.
Thanks for your comments, which are at least more substantial than Max's. I am sorry that you were confused by my text, and I do not doubt that it could be improved for the benefit of readers like yourself. Then again, perhaps what I need to do is teach a course, and invite you to enroll.
I do think that many people writing on this topic use the terms "personal identity" and "the mind" interchangeably and with reference to the same underlying concept, which is the same as that referred to by "the soul." I am happy to make a distinction between these terms, if we can be clear about what either means. You, for example, first describe them as "phenomena," then as "issues," then as things about which we may "hold" whatever views we prefer, according to whatever "theories" we prefer.
I do not doubt that an "upload," i.e. a computational network in which some form of excitation was gated and propagated isomorphically to the way it was in a human brain, would be conscious. This is because physics would insist it must behave as the brain would, and such behavior can only be explained on the hypothesis that the thing is conscious ("zombies" don't make sense).
However, this does not mean that the upload would be human. It might be, or it might not be. I don't mean to suggest some philosophical question here. I mean that an upload might be a configuration of atoms in the form of a human being, even if the atoms were arranged by artificial means. Or it might be something else, which might be conscious, and even humanoid, but not human.
I don't have a problem with your (muddled) discourse on "functional states" and "computational relations", but when you say that "if a physical substrate instantiates the right causal-functional organization, a mind will "arise"", you slip not only into dualism, but worse, magical imagery, as if a sorcerer puts in a pinch of this, a snip of that, and – poof! – the mind "arises" like a conjured demon. So, you should try to avoid this kind of language, if it is not what you mean.
You say that in a dot-matrix print an "image"…"supervenes" on the dots. But the only thing there on paper is dots of ink. The second "entity" in your description will exist only in the mind of a beholder. However, minds must exist in themselves.
Alternatively, we could say that the dots are the image, so there is only one thing. Likewise, we could say that the brain is the mind, or that these words only refer to different aspects (ways of looking at) the same single thing.
The trouble with the claim that "one can be a descriptive dualist without being a metaphysical dualist" is that if one has really eliminated dualism from one's metaphysics, one ought to be able to eliminate it from one's description. Why can't uploading proponents do this? Because, if you eliminate the dualistic language, you are left with "This machine will disassemble your brain, and use the data it collects to make copies." Not a great sales pitch.
Cutting to the chase, and skipping over a lot of other poorly-defined terms given in quotation marks or with emphasis, many of which still seem to me to imply fuzzily dualistic concepts and imagery, it seems, finally, that you and I are in agreement, since I allow that uploading may result in things that are conscious, and you agree that it makes no sense to say that we can upload ourselves.
PS – Your PS is quite incorrect; elements are no longer defined by any list of properties, but by what they are, specifically by the number of protons contained in the nucleus. Gold, for example, is the element formed with 79 protons and 118 neutrons, give or take a few.
Perhaps you would reply that the fact of containing 79 protons per atomic nucleus is precisely the "property" that now defines gold. Yeah, but, what does it add to call this an "essence" — as if, were we to distill gold, we could isolate an even more precious substance, pure 79-ness, whose transfer to any other substance would confer the property of being gold (truly a philosopher's stone)? And what does that tell us about uploading?
There really is no ambiguity about what is meant, in any human language, by the terms "human beings" and "people." Yes, there are cases that people might disagree about, such as frozen embryos and frozen heads, people in persistent vegetative states, and of course the hypothetical monstrosities proposed by transhumanism. We can decide these cases by comparing them with the ordinary meaning of the words, since that is the meaning that the words ordinarily convey. Alternatively, it may be best to describe things as they are; thus frozen embryos are frozen embryos, frozen heads are frozen heads, human beings living their lives are, well, human beings, and chimeras of human and machine would be just that. Or, people who have cochlear implants to partially restore their hearing are people who use cochlear implants… get it? The thing is what it is.
The toddler who played on the beach 45 years ago did not die, and if he had, you would not be here now. I really don't see your argument for why "we need a definition of personal identity more robust than one based on physical continuity…."
@Mark: you are not replying to my arguments, but simply restating your starting assumptions, with which I disagree.
Your using the term "monstrosities" seems to confirm my impression that we are really talking of gut feelings and psychological reactions. Which are important and not to be dismissed, but also not to be confused with rational arguments.
I am submitting this comment (in two parts) on behalf of Philippe Verdoux, because it had been rejected by the moderators for incivility, but I would prefer to stand and face whatever hot air gets blown at me:
PHILIPPE VERDOUX WROTE:
Sorry for another long(ish) post.
To be honest, Mark, I read your paper on mind-uploading (linked above), and I think it is one of the most egregious examples of sciolism that I have ever come across. I mean, honestly, it’s worse than a really, really bad undergraduate paper by a stubbornly dogmatic student who has virtually no grasp of the issues he or she is discussing. You know, the University of Maryland has a superb Department of Philosophy; you would, I believe, benefit tremendously from going over there and talking to a professor (such as Georges Rey — although there are many others). (Indeed, many of the extremely elementary ideas that you keep getting tripped up on could be easily cleared up by talking to someone who actually understands the issues; in fact, a number of the UMD faculty are affiliated with the Cognitive Science program. These are the people you should target.)
You know, there's nothing wrong with not knowing about a subject X. As collective human knowledge grows, relative ignorance does too. Fine — this is the predicament of *everyone* these days. But there is indeed something wrong with not knowing about X and then pontificating about X.
I will write a longer critique of your really, really abysmal paper for another website. For the nonce, let me respond to your comment about essentialism. Clearly, you have absolutely no idea what you're talking about. *Of course*, the relevant property is the atomic number!! The property of "having atomic number 1," for example, is both necessary and sufficient for any object in the universe, whatever it may be, to count as "hydrogen." If it has this property, then it's hydrogen; if it doesn't, then it's not. You write: "Yeah, but, what does it add to call this an 'essence'." Well, you'd call that property an essence because that is precisely what an essence *is*: an essence is a set of properties (or maybe just a single property) that all and only those objects in the universe that fall within a certain ontological category have.
VERDOUX part 2:
I think you're struggling with a lot of very, very basic concepts, Mark. Such as the notion of a property. Again, I would *strongly* encourage you to talk to someone in the UMD department — they may be able to bring some much needed mental clarity to these simple ideas. Philosophy can be very difficult for some people: one can be a great biologist, or physicist, or whatever, but be a horrible philosopher, and vice versa.
Also, if you do seek the help of a knowledgeable philosopher, be sure to ask about what a "category mistake" is too — many of your criticisms (e.g., of mind-uploading) are as guilty of this mistake as anything I've ever read. For example, you add to the passage quoted above: "…as if, were we to distill gold, we could isolate an even more precious substance, pure 79-ness, whose transfer to any other substance would confer the property of being gold (truly a philosopher's stone)." This is so confused, I'm not sure where to start. First, the essentialist absolutely does *not* think that one can somehow isolate "pure 79-ness." That is a category mistake. Atomic numbers are properties *had by* physical entities. Furthermore, *yes*, if something — anything — has atomic number 79, then it is gold. "Having atomic number 79" is necessary and sufficient for membership in the extension of "gold." That is, once again, precisely what an essence *is*. (The trouble with alchemy was the process of *transferring* the property of "having atomic number 79" to another element, thereby changing that element to gold. But if you *could* transfer this property, then of course you'd have gold!!)
In sum, I'm not at all shocked that someone doesn't know about philosophy. After all, everyone today knows almost nothing about most things. But I *am* shocked when someone as completely out of their zone of intellectual familiarity as you are, Mark, gets published on a blog like Futurisms.
(PS. try perusing the Stanford Encyclopedia of Philosophy; it is a tremendous resource.)
First, in response to the one substantive point you make, about how you justify your comment about “essences” in chemistry: Yes, yes. So, if you posit that we may define any class of objects in terms of some list of “essential” properties, I will posit (with reference to common usage and common understanding) that an essential property of all human beings is that they are made of human flesh, a tissue of human cells (plus their arguably nonliving products such as bone, hair, and fingernails) and not, say, silicon transistors and copper wires. If you disagree, please point to an exception (not a hypothetical future exception, but an exception in the way the term “human being” is understood today and has been down through history). This is important because I do accuse transhumanists of attempting to water down and redefine important words and concepts – categories, if you like – such as “human”, to include things that would otherwise be recognized as not belonging to those categories, e.g. not human.
As to the rest, it strikes me as odd that you hurl so many put-downs and assertions of your superior learning at me, accompanied by so little substance. You promise a “longer critique” of my 2003 paper – will that be just more of the same? I look forward to reading even a short critique. But I must add that, in my own view, that paper, as posted, was not a finished work, and certainly not written to any scholarly standards. It was, rather, just a draft, intended to express some ideas. Your “critique” will therefore be of little interest if it only attacks the paper’s rough spots, rather than actually engaging the ideas expressed. I must say I am not optimistic. But please, fire away.
Mark A. Gubrud
While criticizing the transhumanist movement for its lack of women is valid, 1) insinuating, even sarcastically, that women would make transhumanism "kinder" and "gentler" is its own kind of sexism, and 2) it strikes me as a rather ironic bone to pick on a blog run by three men, at a magazine whose masthead of 17 editors contains only 3 women.
Queersingularity's point about Shulamith Firestone shows that it probably isn't transhumanism's lack of appeal to women that makes it appear women-less, but a history which consistently erases women's contributions. In fact, without presuming to know queersingularity's gender, the general lack of response to the comment on Firestone — as far as I can tell, the _only_ comment to bring up gender in anything approaching an intelligent way (ArcAnge1M's disgusting "ask a woman" certainly wouldn't count) — is perhaps yet another minor instance of this.
First, I think the charge of sciolism is indeed a substantive one. I apologize for any unnecessary hyperbole (I'm not a fan of "flamewars," as it were), but I'm sure that *you'd* become extremely frustrated too if you came across an article that I wrote, for example, about how some theory in physics is wrong – about how I have an urge to "demolish" all the pretty architecture (as you put it in your paper) that's been built up by learned experts over the years to explain some phenomenon. And then, on top of it, when you suggest to me that I need to take a course on physics (because I don't know all that much about the field), I respond with: "Or maybe I need to *teach* a course on it, and you need to attend." I'm sure that would ruffle your feathers a bit – wouldn't you agree?
Second, you suggest that "an essential property of all human beings is that they are made of human flesh, a tissue of human cells." This seems, at first glance, to be circular; but maybe not. More importantly, I think you've switched issues again (just like you like to switch between talk of "minds" and talk of "selves"). I think your claim about what it takes to be human is orthogonal to the question about whether an uploaded mind would be conscious, or be the same individual as the original. That is, it's not entirely clear that a crucial part of my self is that I am “human” (whatever that means exactly); and from this it follows that even if an uploaded mind is not human, it may still be the same as the original. (By the way, I don’t hear transhumanists talking about uploading “humans”; rather, they are interested in uploading “minds” and “selves.”) But these are extremely difficult conceptual issues that, I think, one ought to think about with great care. Even immensely intelligent philosophers who spend their entire lives cogitating these issues are often rather unsure about which position is ultimately correct.
So, as a "visitor to this field" (as you put it), I would be *especially* tentative in my remarks. Again, I'm sure you'd be perturbed and annoyed if some "visitor" entered your own field of expertise – physics – and started pontificating as if s/he had all the answers.
Mark: Since I didn't take the time to respond at length, I will note that I agree with what Giulio wrote, and very much with the posts by Philippe Verdoux — especially his reply concerning your conflation of the concepts of human and person.
Your use of the word "sciolism" does not engage the substance of my paper or the issues under discussion. It's just a put-down, but I find it more amusing than offensive. Ditto all your advice to study this, read that, talk to so and so and stop "pontificating" about a subject I've been thinking about since before you were born. Of course, my suggestion that you might take a course from me was meant as an ironic rejoinder, but I didn't mean a course in physics.
Yes, my account of what people are thinking when they talk about "identity," how the magic (i.e. illusion) of identity transfer works, and how it compares with reality, is to some extent informed by my faith that the things we have been discussing here will not be found to conflict with known physics.
It is easy to see why "immensely intelligent philosophers" can "spend their entire lives cogitating on these issues" and still end up "rather unsure about which position is ultimately correct." My position, as you might put it, is that the only "ultimately correct" thing is physical reality itself, and therefore the best description is one that describes physical reality most clearly, precisely and accurately. Or, since the most precise description of things tends to be unwieldy, higher-level descriptions should be rigorously referenced to more precise lower-level descriptions without introducing the sort of ambiguities that can too easily be exploited by verbal voodoo artists.
Thus, uploading proponents are telling the truth when they describe their proposals in technical detail, e.g. "your brain will be sliced up layer by layer to map the network of synapses, etc., and then we will program a computer to simulate you". But when they argue, using verbal sleight-of-hand, or by appealing to the works of various philosophers, that the result is that "your consciousness," "mind," "pattern," or simply "you", all terms which, in this context, refer to the same idea as the word "soul," is thereby "transferred to a machine," they are dealing in fiction and magic.
I don't deny that, if things are done right, the computer would be conscious ("functionalism" if you please). But it would be incorrect to say that "its consciousness" is "your consciousness," because there is no such physical thing as "consciousness," separable from its "substrate." Thus we really should avoid using "consciousness" as a noun, because as soon as we do we slip into dualism. Or, if you insist that "my consciousness" is a reasonable object, a thing that definitely exists (because you experience it, or shall we say, you are it), I must insist that "your consciousness" is just an aspect of, and physically inseparable (even if philosophically distinguishable) from your brain, and that any other thing's "consciousness" is its own — I insist on this because it is the only way to interpret such expressions sensibly in reference to physical realities.
If this thesis is really so unfamiliar to you, and if it stands apart from the body of philosophical works on mind and identity that you have studied, well, perhaps it is something new, and perhaps it ought to be published, in some revised form, in a philosophical journal. Your critique would doubtless be of help to me in rewriting "Bursting the Balloon." Perhaps we could even publish jointly. So, where's your critique?
Let's try a thought experiment (the only kind philosophers ever do). Suppose we ask every English-speaker on the planet what they think of and what they think you mean when you say the words "a person."
Now, I know that we have legal "personhood" for corporations, and decades worth of science fiction and philosophy talking about non-human persons and non-person humans, but honestly, when you say the word person, most people think of a man or woman, maybe a child, but not a robot, nor a corporation, nor a creature from Arcturus, nor any other thing than a human being.
Equivocation is the name for the rhetorical strategy that either exploits existing ambiguities in the meanings of words, or seeks to introduce ambiguity where there is none, in order to confuse the audience into accepting arguments they would otherwise reject. It is transhumanism's favorite device.
Thus, you guys labor to erase the distinctions between prosthesis and cyborgization, between therapy and "enhancement," between human and tool, between life and non-life.
So don't tell me about "conflation."
Mark: The distinction between "human" and "person" was not invented by transhumanists. It was made by philosophers and is accepted practically universally.
So, yes, you *are* conflating the concepts. You don't seem to be interested in actually learning the philosophy needed to engage productively in this discussion, so I'm exiting it.
I'm not sure what to say, Mark. I think you have a very, very poor understanding of the issues — which is precisely what leads you to think that you've devised an original idea that ought to be published. The hubris here is quite extraordinary. Again, I would like to *strongly* recommend you talk to someone in the Philosophy Dept. at the University of Maryland. Print out our correspondences, if you'd like, and ask someone to explain to you what functionalism, dualism, etc. are exactly. Again, there is a *huge* difference between being confused about X because X really is confusing, and being confused about X because you simply don't understand X, or haven't studied X enough to say anything intelligible about it.
I know you were talking about me enrolling in your philosophy — not physics — course. I was asking you what you'd think if I came up with some outrageously uninformed theory about some physical phenomenon and then, when you told me that I was simply uninformed about the subject matter and needed to learn more before I make any strong claims about the issue, I suggested that *I* should teach a course in it — that *my* ideas might be wholly original and worth publishing.
Incidentally, I think you *should* try to publish your ideas. Seriously — it might be a good lesson in humility.
I don't know if more women would make the transhumanist movement kinder and gentler; I know that there are many ungentle and unkind women out there. But you have my (more satirical than sarcastic) remarks backward: I was suggesting that one objective the H+ leaders may have in mind, by taking some of the harder edges off the message, is to recruit more women. I don't see how that suggestion could be construed as anti-feminist.
Dear Philippe –
Thanks for expressing, reiterating, and re-reiterating your stance that I am unlearned and unqualified to speak about such matters as what it means to be human, conscious, and alive, about what people mean by "personal identity," and what kinds of games are being played by proponents of what I termed "identity transfer." Your contribution to this discussion has been exemplary (of something).
I have had the pleasure of corresponding with people less schooled than I am in physics, who have displayed misconceptions and made incorrect claims. I have responded by engaging the ideas and claims, and attempting to show exactly where and how they were mistaken. Sometimes this effort has been successful, other times not. But in all cases, I made a good faith effort to engage and inform, rather than wasting a lot of words just calling people stupid.
I am still looking forward to your actual critique of my ideas.
I think that opening with the statements below would have eliminated a great deal of cross talk and confusion. It is clear that the author has identified the physical human body with what it means to be human.
Quoting the author:
I will posit (with reference to common usage and common understanding) that an essential property of all human beings is that they are made of human flesh, a tissue of human cells (plus their arguably nonliving products such as bone, hair, and fingernails) and not, say, silicon transistors and copper wires. If you disagree, please point to an exception…
A few exceptions: anyone with a cochlear implant. Amputees with artificial limbs. People with pacemakers. People with brain implants to control epilepsy, Parkinson's, or depression. Blind people with artificial eyes (beta testers!).
Indeed, it is possible to be a human and have artificial parts, even in the brain where the magic happens.
The author further notes:
If you only mean that an "upload" could "function" as if it were the late human, I don't dispute that. The question is, function for whose benefit? The late human is, well, dead. If you deny that, you are back to dualism.
Were this true, the person you were a few minutes ago would now be dead, since the atoms in your brain have changed. Indeed, neuron death occurs continuously and, if this is true, creates a new person each time.
In the very sense in which "the same person" has always been meant, i.e. that the toddler survived, grew up, and here you are, having led one continuous life from your conception to the present time….
I find this statement ironic since the author criticized the concept of being conscious during the process of replacing neurons with artificial substitutes until nothing remained of the organic brain. If having a continuous life is the "essence" of being the same person, then wouldn't this upload process qualify?
The key question is, "At what point does a human cease to exist during an upload?" Is it hip replacement surgery? Is it when the limbs are replaced with plastic and transistors? Perhaps a pacemaker? Or a brain implant to alleviate Parkinson's? All of these activities replace flesh and blood with transistors and wires.
If it is the replacement of neurons that destroys personhood, then how many neurons must be replaced? Does personhood change when neurons die naturally – and if not, what makes replacing them with artificial substitutes special? If neurons can die and a person still be the same person, then what is the essence of person-ness?
If you really want to make a structured, believable argument against uploading, you'd probably be better off sticking to physical theory and analysis of uploading as a plausible future technology.
When you say,
"Now, I know that we have legal "personhood" for corporations, and decades worth of science fiction and philosophy talking about non-human persons and non-person humans, but honestly, when you say the word person, most people think of a man or woman, maybe a child, but not a robot, nor a corporation, nor a creature from Arcturus, nor any other thing than a human being."
It weakens your argument considerably. Thousands of years ago, most human cultures had concepts (or translated equivalents) of 'person' and personhood that included only males of particular genetic lineages. Today those ideas are considered much worse than wrong. The entire point of transhumanism is in fact to change our entire concept of personhood and what it means to be human.
You seem to have an ill-defined notion of personal identity which is linked to a narrow biological definition of humanity. Uploading is rightly unsettling because it challenges deep intuitions – but that doesn't make it any more wrong or right than quantum mechanics.
Uploading is a broad category which encompasses a huge set of possible future technologies. The best way to grok the identity and personhood issues is to map out this categorical taxonomy and then determine where to draw a line (if any) on what procedures would preserve identity and what would not.
Consider a hypothetical perfect amnesiac patient who completely loses all memory and must relearn everything (speaking, walking, language, etc.) from scratch. As an added twist, let's posit that the patient starts a new life in a new country with a new language, customs, etc. From the perspective of the DNA, the body, and even the brain, most everything is physically preserved; but it's nearly self-evident that this would NOT be the same person in any practically useful sense. That which is physically missing or different – the information encoded physically in the brain's fine synaptic structures – is therefore the crux of identity.
Likewise, imagine a hypothetical technology that could transfer just that essential physical information from one brain to another, to transfer the mind from a dying body to a clone body (presumably in some form of stasis). The awakened clone would have all the same memories, skills, etc., but would now be in a different body. Practically, most people would accept this would be a continuation of the scanned person – in a new body. In other words, most people accept that the essence of personhood is the mental intangibles – the information pattern – physically encoded in the brain's synapses.
Would it really matter then whether the patient's mind was transferred into a biological clone or a robotic android body? For most, it would probably only matter to the degree that the robotic body was observably different, no more, no less.
For uploading to be impossible, there must be some deep physics which prevents these technologies from ever being possible even in theory. Perhaps uploading will always fail and new physics of consciousness will come into play, but I wouldn't hold my breath.
So if you admit something like this is possible, then you are left insisting that even though post-upload patients have all the memories, skills, knowledge, traits, etc. preserved, somehow the procedure always destroys the original and creates a new conscious doppelgänger out of thin air – which frankly raises more questions than it answers.
Where do you draw the line on uploading? What if only a fraction of the brain was replaced? What if the brain is left intact but supplemented with additional circuitry that communicates with the original cells? What if the additional circuitry is nanobots circulating in the blood and the original neurons are left intact? Not all uploading procedures are destructive, but that's all irrelevant anyway. Mind and personal identity supervene on neurons and survive neuron destruction on a daily basis.
I am opposed to the creation of autonomous, out-of-control technology that would pose a severe threat to the future of humanity. Do you equate this with sexism and racism? Does it never occur to you guys that some people might have reason to find that deeply offensive?
In the case of a hypothetical, totally amnesiac patient, what is the evidence, apart from all-caps, that "this would NOT be the same person in any practically useful sense"? I'll bet the hospital staff would call her by the same name, her family would visit and wish for her recovery, and no new birth certificate or passport would be issued. Actually, according to your description, she obviously is the same person, in the usual sense. Just lost her memory. But why argue over that? You told us the whole story, and the "same person" and "crux of identity" questions add nothing.
In the case of the cloned human upload, how do you know "most people would accept this would be a continuation of the scanned person – in a new body"? Have you taken a poll? Well, polls can be fixed; did you explain that the "new body" was actually completely new, and that the "scanning" actually required the destruction of the first person's brain? Did you explain that your process could just as well have been used to make two, three, or three billion copies? That actually, this is copy #37? That you still have the file and can make more copies if you want? That, actually, the data isn't completely exact, and all the copies are a little different, too? There are a lot of details — how about we run a focus group, with you giving your arguments and me giving mine, and let's see what the people decide. I'll bet most of them would be pretty confused.
As to the difference between the cloned body and the android bot, one would be human (though unnatural), the other not human.
Copying human brains, either to artificially-created human brains or to some other type of computer, may be technologically possible at some point in the future. It will always require destroying the original because there is no other way you could obtain a 3D map at sufficient resolution to have a chance of recovering even a functioning copy, let alone a reasonably faithful one.
It is true that we survive the death of individual neurons on a daily basis. That's part of being alive. "Personal identity" references the continuity of our bodies and lives. There is nothing more to it, nothing that you can manipulate to justify the idea that a person can survive the wholesale destruction of her brain, regardless of how fast it is done, or what other object is created in the process.
Cochlear implants and prosthetic arms are not human. The people who use them are. This is a straightforward matter of physical description.
People make and use tools. Sometimes the tools are located outside our skin, sometimes inside. People who have implants may say they consider the implants part of themselves. So what? That does not change the fact of what the implants are: technology, made not of human flesh.
I consider this a sufficient response to all of your points, since you seem to agree that it is nonsense to claim that a person "dies" just because of the atom exchange and slow change that is a normal part of the process of living.
Mark, in the total amnesiac thought experiment, the amnesiac starts a completely new life in a foreign country, and is raised and educated in a completely new language and culture (for say a decade at least), with zero memory of anything from his previous life – his family all thinks he's dead – they don't visit him in the hospital. The patternist/functionalist theory of Mind claims that this is essentially a totally new person. This theory best explains and also predicts how others will perceive the patient. From every angle – sociological, economic, political, psychological, scientific – this is a new human mind that is not related to the previous mind residing in the body. The starting over in a new country/culture is the essential bit because the patternist/functionalist theory of Mind holds that our mind is a socially constructed information pattern. From both a *scientific* perspective and an everyday sense, any reasonable tests of personal identity would identify these as two completely different people – who share only a body. That's not to say that the physical body itself is not part of personal identity – it is to some degree, but it's not the dominant part. These thought experiments are meant to illustrate that by showing most people reasonably identify with their minds as defined by their memory patterns, not their bodies.
Really really – you claim: "Actually, according to your description, she obviously is the same person, in the usual sense. Just lost her memory."
What would you rather lose – all of your memories (not some, not a little – absolutely 100% all of information in your brain – complete wipe back to pre-infant state), or your body?
Is this even a question? I'm pretty confident that most people identify with their memories and mental patterns more than their bodies – it's the basis of a whole swath of science-fiction stories in book and film where two people swap bodies.
Most people would accept a cloned human upload if it was indistinguishable from the original for the simple reason that they wouldn't be able to distinguish it from the original. If it gets close enough to the original, at some point for all intents and purposes it becomes the original.
There are many versions of the mind transfer idea – including non-'destructive' and non copying versions at that: the entire brain could be physically implanted into a new body. Would that be the same person?
Some portion of the original brain could be implanted into a new body, along with some new biological neurons grown in. Where do you draw the line? Is there even a line?
Uploading could also involve nanobots which slowly repair, and eventually slowly replace, neurons. When does such a person stop being the original person and become a doppelgänger? I have yet to see you address this.
You doubt that a person could survive "the wholesale destruction of her brain" – so does the patternist, but it all depends on what you mean by 'destruction'.
The most reasonable consistent view on these issues is that personal identity is an information pattern, that many transformations can preserve that information pattern, but that it is also fluid and constantly changing over time.
"Where do you draw the line? Is there even a line?"
Good question. On most of these questions, if there is a line, it is our job to find it. If not, it is often our job to draw a line.
People often make "slippery slope" arguments in support of equivocation, saying that you can't distinguish hot from cold because it's a slippery slope.
Well, when a popular walking trail must traverse a slippery slope, what does the Park Service do? That's right, they put in a guard rail. Where do they put it? Somewhere.
Then of course, sometimes our job is to realize that the wrong questions are being asked, and then to ask the right ones. Was this helpful to you?
Slow replacement, e.g. by nanobots:
I'll tell a story two ways.
1. Working a neuron at a time, nanobots will map axons, dendrites and synapses, carefully measuring geometry, synapse strengths, protein, RNA, and other biomolecule concentrations, and epigenetic states, building a computational model and testing it against the neuron's actual firing, growth and internal changes until it achieves 100% (YMMV) predictive accuracy.
At that point, the nanobots will replace the neuron's biomolecular machinery with more-efficient diamondoid nanomachinery, and move on to the next neuron.
You'll never feel a thing. Each day will seem normal as you undergo the process. Your mind and consciousness will be as before, but in a few weeks' time, your entire brain will have been upgraded to the latest nanotechnology, and you'll be ready to "overclock" and zoom ahead into the Singularity!
2. The nanobots will study each neuron, then stealthily insert a nanoelectromechanical replacement, killing the cell. Since this replacement will function the same as the original, the rest of your brain will not notice any difference. After a few weeks, your entire brain will have been killed. In its place within your skull will be a nanotechnological brain, which will take command of your life and body. Into the Singularity!
Now, it seems to me, from a physical point of view, that each telling describes exactly the same story.
"personal identity is an information pattern"
"the soul is a thing made of information, an immaterial substance"
Live long and prosper, Friend Jake!
I know I ignored the aspect of your scenario which erased (though not as categorically as you do in your reply) not only the patient's memory, but also any knowledge of the actual facts of her history (where by "actual facts" I mean an accurate account of the physical history of the body) on the part of those around her. I did that because I wanted to point out how people would behave and think of the patient if they did know her history.
If you insist on erasing any information about the amnesiac, then I suppose it would be natural for those around her to regard her as a completely new person, at least until they received some information – perhaps through the patient's own reemergent memories, or some glimpses thereof – about the patient's past life. I expect that most people who knew and cared about the patient (without knowing her history) would be keenly interested in any such information, and attuned to any hints of memories resurfacing, or any clues to her "actual" or, if you prefer, "former" identity.
Note that the latter preference requires severing the link between "identity" and actual physical (biological) history. I guess that's OK, since the "identity" that is left is the "identity" of "what I'm like" rather than which particular person, of the enumerable set of persons that live or have lived, I am. With respect to "what I'm like," the question of "sameness" is always a subjective judgment.
You write that "the patternist/functionalist theory of Mind holds that our mind is a socially constructed information pattern." Setting aside metaphysical questions for the moment, do you really mean that if a person were isolated from human society (from infancy, if you like) then they would have no mind?
"What would you rather lose – all of your memories (not some, not a little – absolutely 100% all of information in your brain – complete wipe back to pre-infant state), or your body?"
Hobson's choice, indeed.
The first scenario is not death of the body, but it implies a kind of massive violence to the brain. Ouch. Life continues, but we can't touch it in our imagination; it feels like death.
The second scenario, assuming by "body" you mean "including the brain," is certainly death. Some other thing exists, somewhere, but we can't reach it, we are not saved from destruction.
If you didn't mean "including the brain," then, if as a mere brain I can be reasonably promptly outfitted with a new body, with which I may seamlessly integrate, I guess that would be the obvious choice.
But if it meant living as a disembodied brain, well, that sounds like living Hell.
"Is this even a question?"
I don't know, you asked it.
In your scenario 2, are you claiming that the line you draw is when the last biological neuron has been replaced? Is this the point that personhood ends from your point of view? Apparently cochlear implants are OK, even though they replace neurons, so I'd like to split this hair finer. We have standards for path grades and guard rails that are based on metrics for balance and safety. What metrics do we use to measure the percentage of biological neurons that can be replaced?
I don't see the difference between killing biological neurons and replacing them with more biological neurons (as happens naturally) versus replacing them with artificial neurons. I can rephrase your scenario 2 to refer to natural cell death and the creation of new neurons: now the new neurons take command of your life and body. Does this happen? No! The pattern of information remains the same, and this is what controls your life and body. This is you.
So if the pattern can survive the death and replacement of biological neurons, it will survive if they are replaced by artificial substitutes.
Lastly, how can you read the claim that the personal identity of the brain is an information pattern as saying it is an immaterial substance? Are dendrites, neurons, etc. immaterial? Can they not be mapped and measured? This is the pattern of personal identity.
Mark, you're completely correct that the two versions of slow neuronal replacement are effectively the same – but your ridiculous conclusion about 2 doesn't map onto 1 – it's the other way around!
You seem to have an issue with the concepts of supervenience and multiple realizability. A complex system such as a cell is not a set of particular molecules – it's a particular pattern of organization of those molecules. As with building with Legos, there is a massive space of possible molecular configurations of any particular neuron that are all effectively the same for its functional identity as that neuron. Over time, molecular building blocks come and go, and yet the neuron remains.
Likewise, your mind/brain is not a particular set of neurons – it's a higher-level organization of those neurons and their interconnecting synapses. Neurons can come and go over time, as long as the connection pattern persists. You are not your particular neurons any more than a neuron is its particular molecules or a corporation is its particular members – a mind is an organizational pattern built out of neuronal connections, not a particular set of neurons.
If you really maintain that 2 results in the person's death, and 1 is the same as 2, then you are saying that 1 results in the person's death just because some of their neurons' molecular machinery is replaced. Ahhh – this happens naturally in your neurons right now! – and depending on what you eat (such as the ratios of various fatty acids), you are over time replacing your neurons' molecular machinery with new parts, which can change overall function.
Are you arguing that you are constantly being replaced by a doppelgänger? If the everyday biological replacement of a neuron's molecular machinery is scenario 0, then there is no difference between 0, 1, and 2 – and therefore, according to your logic, we are continuously dying.
I think your notion of death is not useful.
Here's a wholly different counterproof route: doppelgängers.
If you really are arguing that neuron-replacement scenario 2 results in the end of a person's conscious identity and their replacement by a wholly new person, then you are arguing that it's possible for minds to be replaced with wholly new doppelgänger minds without any physically observable differences.
That's directly equivalent to the philosophical-zombie idea, and just as easily dismissed.
The only objective evidence we have for other conscious minds is their observed functional behavior, and any argument which is equivalent or reducible to saying that conscious minds can be replaced with doppelgängers without any observable functional differences is not objectively testable, and thus not provable or useful.
There are no doppelgängers or zombies, and uploading works (in theory).
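The supervenience argument above – that a system's identity lives in its pattern of organization, not in the particular stuff realizing it – can be sketched with a toy program. This is purely an illustration of the analogy, not a claim about real neurons; the class names, the threshold function, and the wiring are all invented for this example:

```python
# Toy illustration of multiple realizability: two "substrates" that
# implement the same function, so a network's output depends only on
# the wiring pattern and inputs, not on which substrate realizes it.
# All names here are hypothetical, invented for this analogy.

class BiologicalNeuron:
    """Sum-and-threshold unit, 'realized in lipids'."""
    def fire(self, inputs):
        return 1 if sum(inputs) >= 2 else 0

class ArtificialNeuron:
    """The identical function, 'realized in diamondoid'."""
    def fire(self, inputs):
        return 1 if sum(inputs) >= 2 else 0

def network_output(neurons, inputs):
    # The "pattern": a fixed wiring over whatever units we pass in.
    return [n.fire(inputs) for n in neurons]

bio_net = [BiologicalNeuron() for _ in range(3)]
art_net = [ArtificialNeuron() for _ in range(3)]

# Same pattern, different substrate: the outputs are indistinguishable.
print(network_output(bio_net, [1, 1, 0]) == network_output(art_net, [1, 1, 0]))
```

No external observation of `network_output` can tell which substrate produced it – which is the functionalist point at issue; whether that settles the question of personal identity is, of course, exactly what the two sides here dispute.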
I'm sure you've come across it by now, but just in case you haven't, my response to your critique of mind-uploading has been posted on the website of the Institute for Ethics and Emerging Technologies. The URL is here: http://ieet.org/index.php/IEET/more/verdoux20100624/.
Please take a look and, if you'd like, comment on my comments. I'd like to keep the discussion going.
Readers interested in Mr. Verdoux's "critique" should be sure to read my response posted on the blog page, and also my comments on his notes.
doubtertom: Actually, there is very little replacement of neurons in our brains after early childhood. That's because, in fact, the "information that is contained in" the network of axons, synapses, dendrites, and cell bodies cannot survive cell death. Any new cells must start from scratch.
What is my "ridiculous conclusion about 2"? I told the story two ways. Was telling #2 not factually (physically) accurate?
Your ridiculous conclusion about 2: "After a few weeks, your entire brain will have been killed." You have a non-scientific view of death, then – if the brain keeps on functioning without any scientifically observable deficits, how was it killed?
What's even more ridiculous is that you take your ridiculous conclusion about 2 (that it somehow "kills" the brain), and then map it onto 1 (slow replacement of a neuron's building blocks with nanomachinery). Why stop there? 1 is indistinguishable from 0 – the slow replacement of your neurons' lipid building blocks with new lipid "nanomachinery" that happens naturally over the course of months.
So therefore, according to your argument, we die every couple of months, and thus your notion of death isn't useful.
You haven't addressed any of the arguments, and you should just admit you do not have an objective, scientific, or rational view on the mind/brain.
Jake – Does your brain consist of brain cells? Okay, if they all get killed, then your brain has been killed, no?
Do we die every couple of months? Not usually, no. Usually we die only once, at the end of our lives. Otherwise, we don't die. However, if all of our brain cells were killed, we would be dead, i.e. no longer be. I don't see what is so difficult about all this.
No, according to Jake's definition of death you can't kill something and still have a perfect copy. It's meaningless. Death only happens when you don't have a copy anymore.
Mark's definition strikes me as inconsistent, since he maintains that humans do not die every few months but a Moravec transfer would kill you. If all the molecules in a neuron are continually replaced by exact copies, surely that counts as death of the neuron just as much as replacing all the neurons in a human brain would count as death of the human.
I agree with you, but because of my poor English I can't respond well to the mind-uploading fans.
If someone on this blog speaks Italian, I can describe my viewpoint better.
I recommend these articles:
I would highly recommend that the author of this article read the following counter-article, in the hope that it may elucidate some of the murkier topics he fervently tries to grapple with (making reference to Jake Cannell's and Luke's logical standpoint on gradual replacement of areas of the brain)…
Gradual replacement of various cortical areas does not equal gradual demise of the individual.
In reference to making copies of the original individual, all copies WOULD ALSO BE the individual. These would not be some ghastly, abhorrent distortions, but rather the individual would, in effect simply be multiplied. This relies on the very well-established axiom that, "A difference that makes no difference IS no difference."
In my paper, “Cyborgs, Uploading and Immortality: Some Serious Concerns,” I explored the possibilities of Uploading the mind – the complete contents of the human brain – to a more efficient and long-lasting super-computational medium. Following this I discussed the consequent possibility of Immortality. The conclusion of this investigation showed the current Extropian concept of Uploading to be seriously flawed in principle, a conclusion which has only been strengthened by subsequent research.
Harle, R.F. 2002. “Cyborgs: Uploading & Immortality. Some Serious Concerns.” Sophia International, Vol. 41, No. 2. Ashgate.
This is also interesting:
Hey, Mark. So, you have stated that biological continuity is the ONLY possible account of personal identity in a physical universe. And very oddly, you say that a psychological/informational account of personal identity is "dualist". What, pray tell, is dualist about it? All it says is that such-and-such patterns which persist in your brain constitute you. There is nothing non-physical about this. And ironically, the biological-continuity account says something very similar (such-and-such patterns…). It simply has different criteria… it just so happens that in the first account the patterns (personality, memories, etc.) can persist in a different medium (which you yourself admit). I find this to be a rather strange misunderstanding of yours. Anyway. So, of course you haven't explained why biological continuity MATTERS AT ALL. Presenting your feelings is not convincing to anyone who disagrees with you. In a materialist universe, is there any Platonic reason why we should care whether there is ANY continuation of "ourselves" in any form? I think not (if you think so, try to prove it). Either someone cares about biological or psychological or whatever kind of continuity, or they don't. There's no objective fact of the matter that they should prefer your view or any view.
I recently read Technophobia, by Daniel Dinello.
It's sad how all of you technophiles hate the human biological body… more precisely, YOUR OWN body.
If you don't like this context, feel free to change into a bodiless inhuman AI, as your idol Moravec says.
"The streamlining could begin with the elimination of the body-simulation along with the portions of the downloaded mind dedicated to interpreting sense-data. These would be replaced with simpler integrated programs that produced approximately the same net effect in one's consciousness. One would still view the cyber world in terms of location, color, smell, faces, and so on, but only those details we actually notice would be represented. We would still be at a disadvantage compared with the true artificial intelligences, who interact with the cyberspace in ways optimized for their tasks. We might then be tempted to replace some of our innermost mental processes with more cyberspace-appropriate programs purchased from the AIs, and so, bit by bit, transform ourselves into something much like them. Ultimately our thinking procedures could be totally liberated from any traces of our original body, indeed of any body. But the bodiless mind that results, wonderful though it may be in its clarity of thought and breadth of understanding, could in no sense be considered any longer human."
What a nightmarish future…
I recommend reading "Technophobia: Science Fiction Visions of Posthuman Technology".
What in our world besides human brains is immune to scanning and simulation in the manner you describe?
I don't see any relevance to the 'death' thing after reengineering. There's physical discontinuity in every part of your life, such as when you go to sleep, for example. The actual molecules that make you up are constantly shifting; you're just a pattern. Questions like "if you replace all your body parts one at a time, are you still you?" are meaningless angels-on-pinheads/solipsist garbage – pseudo-philosophy that doesn't mean anything.
The question, supposing the technology existed, would not be "will I continue existing?" but rather whether the next sequential pattern will be just another fleshbag or a machine made of supercarbon.
The same thing applies to sci-fi transporters that rip you apart atom by atom. Are you the same person? Did you die? These questions don't mean anything, they give a supernatural meaning to death and identity.
No one has the slightest idea of how mind uploading would WORK. We still don't have a complete understanding of how our brains create things like memory and the identity of the self, although advances are made in this field every month it seems.
I'm not interested in, honestly silly, arguments of functionalism vs. materialism and other esoteric philosophy.
I'm only interested in what happens to ME, myself, if I go through a mind upload process.
Now, in the original process as described, a machine copies your exact mind state/structure/pattern. This is all fine and probably technically possible in a few decades. The problem is the next step. See, the "transfer" that is talked about is actually switching you off. If you didn't do this, there'd simply be two copies of you.
Now, the argument that all your neurons are replaced anyway isn't actually true; some neurons do get replaced through your lifetime, but not all of them. Anyway, it's the PATTERNS that your neurons produce that are important and create your identity. Now, this pattern does shift and change, and you could make the argument that your identity as a child has died, but this is a very gradual process, and you don't notice it. The original biological pattern hasn't been destroyed (as it would be in mind uploading); it's just changed.
There is one option to resolve all this, to allow mind uploading to occur but also retain your identity. It may be possible to use nanobots to gradually replace your biological neurons with synthetic ones, ones that worked faster and were more durable (perhaps silicon, or photonic biopolymer switches, or whatever they come up with), perhaps over the time span of a few years. Or maybe children could be implanted with the nanobots at birth and as they grow so does the neural net. Eventually your entire mind would be composed of synthetic neurons and your mind removed and placed into a new housing.
This will all however require revolutionary advances in the understanding of the physical processes of the brain.
Help me understand something:
1. Materialism asserts that thoughts are identical with brain states, and do not merely correspond to brain states.
2. Consciousness may be more than just the sum total of an individual's thoughts, but I think everyone would agree that it requires thought. No thought = no consciousness. As a corollary, a different group of thoughts would correspond to a different consciousness.
3. If #1 and #2 are true, it seems mind uploading has an intractable problem. The 'brain states' of a separate machine are by definition not identical with the brain states of the copied entity. The thoughts are thus not identical, and the consciousness cannot be identical. The copy may have some type of consciousness, but it's difficult to say how it could be said to be the *same* consciousness.
What am I missing here?
Defining a mind as a brain, to deny the possibility of mind uploading, is like defining a book as a printed text, to deny the possibility of e-books.
Uploading is impossible. We are a spirit and soul within a carnal body. Sort of like iron man. You get the idea. However, since we have a soul, and it is tied to our bodies by the silver cord, that makes things much more complicated when it comes to uploading. We already live forever. Our souls are immortal and the only way they can be destroyed is by their creator. Our consciousness is our soul, not brain. If you look up NDEs and study them, you find out that these people who died and left their bodies remembered everything and were able to see their doctors and family in the hospital room talking and crying, TRYING TO REVIVE THEM. You cannot move into another "vessel" with technology. It is not allowed. There are laws beyond materialism which support this impossibility. Those laws are placed by God himself. I will tell you this though: They will try, and they will fail. It is only a matter of time until this Earth will perish. And let's say it was possible to upload. Where is your immortality when this earth with everything on it perishes? That is not immortality then. The answers are already here for eternal life but people chose to ignore and go their own way about it. Sorry but there is only one way. Jesus Christ.
Comments are closed.