organizing the sensorium

In his extraordinary book The Presence of the Word (1967), Walter Ong wrote,

Growing up, assimilating the wisdom of the past, is in great part learning how to organize the sensorium productively for intellectual purposes. Man’s sensory perceptions are abundant and overwhelming. He cannot attend to them all at once. In great part a given culture teaches him one or another way of productive specialization. It brings him to organize his sensorium by attending to some types of perception more than others, by making an issue of certain ones while relatively neglecting other ones. The sensorium is a fascinating focus for cultural studies. Given sufficient knowledge of the sensorium exploited within a specific culture, one could probably define the culture as a whole in virtually all its aspects.

The idea of organizing the sensorium productively for intellectual purposes is a very powerful one, and links the history of technology with the history of institutions. Consider, for instance, the way that medieval guilds were means of teaching people the use of particular technologies but also of ratifying their abilities to participate in the life of the guild community. Medieval universities worked in much the same way: texts were scarce and had to be cared for, so people were painstakingly initiated into their responsible use. The disputatio was at once a social ceremony and a demonstration of technical mastery. This technological mastery was demonstrated by the disciplined use of sight, hearing, and speech — an organization of the sensorium embedded in a structure of social organization.

When Martin Luther came along and had the local printer print for his students a clean text of Paul’s letter to the Romans with wide margins and no commentary, he was initiating those students into a different technology and a correspondingly different model of social integration.

In light of these thoughts, the “technological history of modernity” that I have been calling for will also need to be sociological through and through. I’m getting in way over my head here, but I wonder if in trying to think about these technological/sociological connections I need to read John Levi Martin’s Social Structures, which Gabriel Rossman has described as “all about emergence and how fairly minor changes in the nature of social mechanisms can create quite different macro social structures.” And Rossman himself has written about “the diffusion of legitimacy”: how “innovations – concrete products and behaviors – [are] nested within institutions – abstract cognitive schema for evaluating the legitimacy of innovations. In effect, social actors assess the legitimacy of innovations vis-a-vis conformity to institutions such that a sufficiently legitimate innovation may be adopted without direct reference to the behavior of peers.” (Hey Gabriel: Why do you refer to institutions as “abstract cognitive schema” rather than as social organizations with significant physical presences in the world?)

Especially noteworthy in this regard are the connections between emergent behavior in social insects and internet protocols, as though there’s an underlying logic of emergence — of small acts with large consequences — shared by many different animals, including human animals with their digital machines. And these are political as well as biological and technological questions: consider Adam Roberts’s extraordinary novel New Model Army, which imagines how the conjunction of anarchist theory and secure social media tech might produce a new lifeform, what I’ve called a “hivemind singularity.”

Perhaps apparently insignificant, and merely local, adjustments in how people in a given institution strive to “organize the sensorium” can have major consequences down the line. (Not the “butterfly effect” but the “Luther’s print shop effect.”) And larger changes, like the “haptic simplification” of interacting with glass screens often to the exclusion of other forms of tactile exploration? And the ways that those screens increasingly serve as the standard user interface of automated procedures? How can those consequences not be massive?

There’s too damn much that needs to be known about all this, and I know the tiniest fraction of it. But a genuine technological history of modernity will be alert to emergent effects, social structures, and the relation between technical expertise and communal belonging.

a clarification

A quick note: in response to my previous post several people have emailed or tweeted to recommend Jacques Ellul or Lewis Mumford or George Grant or Neil Postman. All of those are valuable writers and thinkers, but none of them do anything like what I was asking for in that post. They provide a philosophical or theological critique of technocratic society, but that’s not a technological history of modernity. If you look at the books I recommend in that post, all of them are deeply engaged with the creation, implementation, and consequences of specific technologies — and that’s what I think we need more of, though in a larger frame, covering the whole of modernity from the 16th century to today. A deeply material history — a history of the pressure of made things on human behavior; something like Siegfried Giedion’s Mechanization Takes Command but theologically informed — or at least infused with a stronger sense of human telos than Giedion has; a serious critique of technological modernity that’s not afraid to get grease on its hands.

the technological history of modernity

I’m going to try to piece a few things together here, so hang on for the ride —

I have been reading and enjoying Matthew Crawford’s The World Beyond Your Head, and I’ll have more to say about it here later. I strongly recommend it to you. But today I’m going to talk about something in it I disagree with. On the book’s first page Crawford writes of “profound cultural changes” that have

a certain coherence to them, an arc — one that begins in the Enlightenment, accelerates in the twentieth century, and is perhaps culminating now. Though digital technologies certainly contribute to it, our current crisis of attention is the coming to fruition of a picture of the human being that was offered some centuries ago.

With this idea in mind, Crawford later in the book gives us a chapter called “A Brief History of Freedom” that spells out the philosophical ideas that, he believes, paved the way for the emergence of a culture in which lengthy and patient attentiveness is all but impossible.

Since attention is something I think about a lot — and have written about here and elsewhere — I’m deeply sympathetic to Crawford’s general critique. But I am not persuaded by his history. In fact, I have come to believe — as I have also written here — that the way Crawford tells the history has things backwards, in much the same way that the neo-Thomist interpretation of history gets things backwards. I don’t think we have our current attention economy because of Kant, any more than we have Moralistic Therapeutic Deism because of Ockham and Duns Scotus.

To make the kind of argument that Crawford and the neo-Thomists make is to take philosophy too much at its own self-valuation. Philosophy likes to see itself as operating largely independently of culture and society and setting the terms on which people will later think. But I believe that philosophy is far more a product of existing social and economic structures than it is an independent entity. We don’t have the modern attention economy because of Kant; rather, we got Kant because of certain features of technological modernity — especially those involving printing, publishing, and international postal delivery — that also have produced our current attention economy, which, I believe, would work just as it does if Kant had never lived. What I call the Oppenheimer Principle — “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success” — has worked far more powerfully to shape our world than any of our master thinkers. Indeed, those thinkers are, in ways we scarcely understand, themselves the product of the Oppenheimer Principle.

So while it is true that, as I said in one of those earlier posts, “those of us who are seriously seeking alternatives to the typical modes of living in late modernity need a much, much better philosophy and theology of technology,” we also need better history — what I think I want to call a technological history of modernity.

To be sure, that already exists in bits and pieces — indeed, in fairly large chunks. Some existing works that might help us re-orient our thinking towards a better account of how we got to Us:

Those of us who — out of theological conviction or out of some other conviction — have some serious doubts about the turn that modernity has taken have been far too neglectful of this material, economic, and technological history. We need to remedy that deficiency. And someone needs to write a really comprehensive and ambitious technological history of modernity. I don’t think I’m up to that challenge, but if no one steps up to the plate….

My current book project has convinced me of the importance of these issues. All of the figures I am writing about there understood that they could not think of World War II simply as a conflict between the Allies and the Axis. There were, rather, serious questions to be asked about the emerging character of the Western democratic societies. On some level each of these figures intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them saw the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.

I agree with them, and think that at the present moment our world needs — desperately — the kind of sympathetic and humane yet strong critique of technocracy they tried to offer. But such a critique can only be valuable if it grows from a deep understanding — an attentive understanding — of both the present moment, in all its complexities, and the present moment’s antecedents, in all their complexities. In the coming months, as I continue to work on my book, I’ll be thinking about how that technological history of modernity might be told, and will share some thoughts here. That will probably mean posting less often but more substantively; we’ll see. The idea is to lay the foundation for future work. Please stay tuned.

more on the “Californian ideology”

A brief follow-up to a recent post … Here’s an interesting article by Samuel Loncar called “The Vibrant Religious Life of Silicon Valley, and Why It’s Killing the Economy.” A key passage:

The “religion of technology” is not itself new. The late historian David Noble, in his book by that title, traced its origins in a particular strain of Christianity which saw technology as means of reversing the effects of the Fall. What is new, and perhaps alarming, is that the most influential sector of the economy is awash in this sea of faith, and that its ethos in Silicon Valley is particularly unfriendly to human life as the middle classes know it. The general optimism about divinization in Silicon Valley motivates a widespread (though by no means universal) disregard for, and even hostility toward, material culture: you know, things like bodies (which Silva calls “skin bags”) and jobs which involve them.

The very fact that Silicon Valley has incubated this new religious culture unbeknownst to most of the outside world suggests how insulated it is. On the one hand, five minutes spent listening to the CEO of Google or some other tech giant will show you how differently people in Silicon Valley think from the rest of the country — listen carefully and you realize most of them simply assume there will be massive unemployment in the coming decades — and how unselfconscious most are of their differences. On the other hand, listen to mainstream East Coast journalists and intellectuals, and you would think a kind of ho-hum secularism, completely disinterested in becoming gods, is still the uncontested norm among modern elites.

If religion makes a comeback, but this is the religion that comes back….

More on this later, but for now just one brief note about bodies as “skin bags”: in the opening scene of Mad Max: Fury Road, Max is captured and branded and used to provide blood transfusions to an ill War Boy named Nux. Nux calls Max “my blood bag.” Hey, it’s only a body.

ideas and their consequences

I want to spend some time here expanding on a point I made in my previous post, because I think it’s relevant to many, many disputes about historical causation. In that post I argued that people don’t get an impulse to alter their/our biological conformation by reading Richard Rorty or Judith Butler or any other theorists within the general orbit of the humanities, according to a model of Theory prominent among literary scholars and in Continental philosophy and in some interpretations of ancient Greek theoria. Rather, technological capability is its own ideology with its own momentum, and people who practice that ideology may sometimes be inclined to use Theory to provide ex post facto justifications for what they would have done even if Theory didn’t exist at all.

I think there is a great tendency among academics to think that cutting-edge theoretical reflection is … well, is cutting some edges somewhere. But it seems to me that Theory is typically a belated thing. I’ve argued before that some of the greatest achievements of 20th-century literary criticism are in fact rather late entries in the Modernist movement: “We academics, who love to think of ourselves as being on the cutting-edge of thought, are typically running about half-a-century behind the novelists and poets.” And we run even further behind the scientists and technologists, who alter our material world in ways that generate the Lebenswelt within which humanistic Theory arises.

This failure of understanding — this systematic undervaluing of the materiality of culture and overvaluing of what thinkers do in their studies — is what produces vast cathedrals of error like what I have called the neo-Thomist interpretation of history. When Brad Gregory and Thomas Pfau, following Etienne Gilson and Jacques Maritain and Richard Weaver, argue that most of the modern world (especially the parts they don’t like) emerges from disputes among a tiny handful of philosophers and theologians in the University of Paris in the fourteenth century, they are making an argument that ought to be self-evidently absurd. W. H. Auden used to say that the social and political history of Europe would be exactly the same if Dante, Shakespeare, and Mozart had never lived, and that seems to me not only to be true in those particular cases but also as providing a general rule for evaluating the influence of writers, artists, and philosophers. I see absolutely no reason to think that the so-called nominalists — actually a varied crew — had any impact whatsoever on the culture that emerged after their deaths. When you ask proponents of this model of history to explain how the causal chain works, how we got from a set of arcane, recondite philosophical and theological disputes to the political and economic restructuring of Western society, it’s impossible to get an answer. They seem to think that nominalism works like an airborne virus, gradually and invisibly but fatally infecting a populace.

It seems to me that Martin Luther’s ability to get a local printer to make an edition of Paul’s letter to the Romans stripped of commentary and set in wide margins for student annotation was infinitely more important for the rise of modernity than anything that William of Ockham and Duns Scotus ever wrote. If nominalist philosophy has played any role in this history at all — and I doubt even that — it has been to provide (see above) ex post facto justification for behavior generated not by philosophical change but by technological developments and economic practices.

Whenever I say this kind of thing people reply But ideas have consequences! And indeed they do. But not all ideas are equally consequential; nor do all ideas have the same kinds of consequences. Dante and Shakespeare and Mozart and Ockham and Scotus have indeed made a difference; but not the difference that those who advocate the neo-Thomist interpretation of history think they made. Moreover, and still more important, scientific ideas are ideas too; as are technological ideas; as are economic ideas. (It’s for good reason that Robert Heilbroner called his famous history of the great economists The Worldly Philosophers.)

If I’m right about all this — and here, as in the posts of mine I’ve linked to here, I have only been able to sketch out ideas that need much fuller development and much better support — then those of us who are seriously seeking alternatives to the typical modes of living in late modernity need a much, much better philosophy and theology of technology. Which is sort of why this blog exists … but at some point, in relation to all the vital topics I’ve been exploring here, I’m going to have to go big or go home.

prosthetics, child-rearing, and social construction

There’s much to think and talk about in this report by Rose Eveleth on prosthetics, which makes me think about all the cool work my friend Sara Hendren is doing. But I’m going to set most of that fascinating material aside for now, and zero in on one small passage from Eveleth’s article:

More and more amputees, engineers, and prospective cyborgs are rejecting the idea that the “average” human body is a necessary blueprint for their devices. “We have this strong picture of us as human beings with two legs, two hands, and one head in the middle,” says Stefan Greiner, the founder of Cyborgs eV, a Berlin-based group of body hackers. “But there’s actually no reason that the human body has to look like as it has looked like for thousands of years.”

Well, that depends on what you mean by “reason,” I think. We should probably keep in mind that having “two legs, two hands [or arms], and one head in the middle” is not something unique to human beings, nor something that has been around for merely “thousands” of years. Bilateral symmetry — indeed, morphological symmetry in all its forms — is something pretty widely distributed throughout the evolutionary record. And there are very good adaptive “reasons” for that.

I’m not saying anything here about whether people should or should not pursue prosthetic reconstructions of their bodies. That’s not my subject. I just want to note the implication of Greiner’s statement — an implication that, if spelled out as a proposition, he might reject, but is there to be inferred: that bilateral symmetry in human bodies is a kind of cultural choice, something that we happen to have been doing “for thousands of years,” rather than something deeply ingrained in a vast evolutionary record.

You see a similar but more explicit logic in the way the philosopher Adam Swift talks about child-rearing practices: “It’s true that in the societies in which we live, biological origins do tend to form an important part of people’s identities, but that is largely a social and cultural construction. So you could imagine societies in which the parent-child relationship could go really well even without there being this biological link.” A person could say that the phenomenon of offspring being raised by their parents “is largely a social and cultural construction” only if he is grossly, astonishingly ignorant of biology — or, more likely, has somehow managed to forget everything he knows about biology because he has grown accustomed to thinking in the language of an exceptionally simplistic and naïve form of social constructionism.

N.B.: I am not arguing for or against changing child-rearing practices. I am exploring how and why people simply forget that human beings are animals, are biological organisms on a planet with a multitude of other biological organisms with which they share many structural and behavioral features because they also share a long common history. (I might also say that they share a creaturely status by virtue of a common Maker, but that’s not a necessary hypothesis at the moment.) In my judgment, such forgetting does not happen because people have been steeped in social constructionist arguments; those are, rather, just tools ready to hand. There is a deeper and more powerful and (I think) more pernicious ideology at work, which has two components.

Component one: that we are living in an administrative regime built on technocratic rationality whose Prime Directive is, unlike the one in the Star Trek universe, one of empowerment rather than restraint. I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb.” Social constructionism does not generate this Prime Directive, but it can occasionally be used — in, as I have said, a naïve and simplistic form — to provide ex post facto justifications of following that principle. We change bodies and restructure child-rearing practices not because all such phenomena are socially constructed but because we can — because it’s “technically sweet.”

My use of the word “we” in that last sentence leads to component two of the ideology under scrutiny here: Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

more on the Theses

So let’s recap. Here are my original theses for disputation. Responses:

Just a wonderful conversation — I am so grateful for the responses. The past few weeks have been exceptionally busy for me, so right now I just have time to make a few brief notes, to some of which I hope I can return later.

First, Julia Ticona is exactly right to point out that my theses presume a social location without explicitly articulating what that location is. I’ve thought about these matters before, and written relatively briefly about them: see the discussion of African Christians whose Bibles are on their phones late in this essay; and a modern Orthodox Jewish take on textual technologies here; and the idea of “open-source Judaism” here. But I haven’t done nearly enough along these lines, and Ticona’s response reminds me that we are in need of a more comprehensive set of technological ethnographies.

Second, I am really intrigued by Michael Sacasas’s template for thinking about attention. I wonder if we might complicate his admirably clear formulation — hey, it’s what academics do, sue me — by considering Albert Borgmann’s threefold model of information in his great book Holding on to Reality, from the Introduction of which I’ll quote at some length here:

Information can illuminate, transform, or displace reality. When failing health or a power failure deprives you of information, the world closes in on you; it becomes dark and oppressive. Without information about reality, without reports and records, the reach of experience quickly trails off into the shadows of ignorance and forgetfulness.

In addition to the information that discloses what is distant in space and remote in time, there is information that allows us to transform reality and make it richer materially and morally. As a report is the paradigm of information about reality, so a recipe is the model of information for reality, instruction for making bread or wine or French onion soup. Similarly there are plans, scores, and constitutions, information for erecting buildings, making music, and ordering society….

This picture of a world that is perspicuous through natural information and prosperous through cultural information has never been more than a norm or a dream. It is certainly unrecognizable today when the paradigmatic carrier of information is neither a natural thing nor a cultural text, but a technological device, a stream of electrons conveying bits of information. In the succession of natural, cultural, and technological information, both of the succeeding kinds heighten the function of their predecessor and introduce a new function. Cultural information through records, reports, maps, and charts discloses reality much more widely and incisively than natural signs ever could have done. But cultural signs also and characteristically provide information for the reordering and enriching of reality. Likewise technological information lifts both the illumination and the transformation of reality to another level of lucidity and power. But it also introduces a new kind of information. To information about and for reality it adds information as reality. The paradigms of report and recipe are succeeded by the paradigm of the recording. The technological information on a compact disc is so detailed and controlled that it addresses us virtually as reality. What comes from a recording of a Bach cantata on a CD is not a report about the cantata nor a recipe — the score — for performing the cantata, it is in the common understanding music itself. Information through the power of technology steps forward as a rival of reality.

Thinking about Borgmann in relation to Sacasas, I formulate a question which I can only register right now: What if different kinds of information elicit, or demand, different forms of attention?

Finally: Most of my respondents have in some way — though it’s interesting to note the variety of ways — emphasized the need to distinguish between individual decision-making and structural analysis: between (a) whatever technologies you or I might choose to employ or not employ, when we have a choice, and (b) the massive global-capitalist late-modern forces that sustain and enforce our current technopoly. Seeing these distinctions I am reminded of a very similar conversation, that surrounding climate change.

There has been an interesting recent turn in writing about climate change. Whereas advocates for the environment once placed a great emphasis on the things that individuals and families can do — reducing one’s carbon footprint, recycling, etc. — now, it seems to me, it’s becoming more common for them to say that “being green won’t solve the problem”. The problems must be addressed at a higher level — at the highest possible level. Technopoly, similarly, won’t be altered by boycotting Facebook or writing more by hand or taking the occasional digital detox.

But I might be. Recycling and installing solar panels and avoiding plastic water bottles — these are actions that matter only insofar as they limit destruction to our environment; they don’t do anything in particular for me, except add inconvenience. But even if sending postcards to my friends instead of tweeting to them doesn’t lessen the grip of the great social-media juggernauts, it can still be a good and worthwhile thing to do. We just need to be sure we don’t confuse personal culture with social critique.

on the attention economy

Let me zero in on what I think is the key paragraph in my friend Chad Wellmon’s response to some of my theses:

But this image of a sovereign self governing an internal economy of attention is a poor description of other experiences of the world and ourselves. In addition, it levies an impossible burden of self mastery. A distributive model of attention cuts us off, as Matt Crawford puts it, from the world “beyond [our] head.” It suggests that anything other than my own mind that lays claim to my attention impinges upon my own powers to willfully distribute that attention. My son’s repeated questions about the Turing test are a distraction, but it might also be an unexpected opportunity to engage the world beyond my own head.

I want to begin by responding to that last sentence by saying: Yes, and it is an opportunity you can take only by ceding the sovereignty of self, by choosing (“willfully”) to allow someone else to occupy your attention, rather than insisting on setting your own course. This is something most of us find it hard to do, which is why Simone Weil says “Attention is the rarest and purest form of generosity.” And yet it is our choice whether or not to practice that generosity.

I would further argue that, in most cases, we manage to cede the “right” to our attention to others — when we manage to do that — only because we have disciplined and habituated ourselves to such generosity. Chad’s example of St. Teresa is instructive in this regard, because by her own account her ecstatic union with God followed upon her long practice of rigorous spiritual exercises, especially those prescribed by Francisco de Osuna in his Tercer abecedario espiritual (Third Spiritual Alphabet) and by Saint Peter of Alcantara in his Tractatus de oratione et meditatione (Treatise on Prayer and Meditation). Those ecstatic experiences were a free gift of God, Teresa thought, but through an extended discipline of paying attention to God she had laid the groundwork for receptivity to them.

(I’m also reminded here of the little experiment the violinist Joshua Bell tried in 2007, when he pretended to be a busker playing in a D.C. Metro station. Hardly anyone noticed, but those who did were able to do so because of long experience in listening to challenging music played beautifully.)

In my theses I am somewhat insistent on employing economic metaphors to describe the challenges and rewards of attentiveness, and in so doing I always had in mind the root of that word, oikonomos (οἰκονόμος), meaning the steward of a household. The steward does not own his household, any more than we own our lifeworld, but rather is accountable to it and answerable for the decisions he makes within it. The resources of the household are indeed limited, and the steward does indeed have to make decisions about how to distribute them, but such matters do not mark him as a “sovereign self” but rather the opposite: a person embedded in a social and familial context within which he has serious responsibilities. But he has to decide how and when (and whether) to meet those responsibilities. So too the person embedded in an “attention economy.”

In this light I want to question Weil’s notion of attention as a form of generosity. It can be that, of course. In their recent biography Becoming Steve Jobs, Brent Schlender and Rick Tetzeli tell a lovely story about a memorial service for Jobs during which Bill Gates ignored the high-powered crowd and spent the entire time in a corner talking with Jobs’s daughter about horses. That, surely, is attention as generosity. But in other circumstances attention may not be a free gift but a just rendering — as can happen when my son wants my attention while I am reading or watching sports on TV. This is often a theme in the religious life, as when the Psalmist says “Ascribe to the Lord the glory due his name” or in a liturgical exchange: “Let us give thanks to the Lord our God.” “It is meet and right so to do.”

There is, then, such a thing as the attention that is proper and adequate to its object. Such attention can only be paid if attention is withheld from other potential objects of our notice or contemplation: the economy of our attentional lifeworld is a strict one. But I would not agree with Chad that this model “levies an impossible burden of self mastery”; rather, it imposes the difficult burden of wisely and discerningly distributing my attention in ways that are appropriate not to myself qua self but to the “household” in which I am embedded and to which I am responsible.

Cross-posted at The Infernal Machine

returning: a process

Hello everybody. I’m back from a wonderful visit to the good people at the Institute for Advanced Studies in Culture at the University of Virginia, and am now in the midst of a harrowing game of catch-up. So while regular posting will resume here soon, it won’t resume immediately.

At the Institute I presented 79 Theses on Technology, for disputation, and while we didn’t conduct a full-fledged disputatio, I got some wonderful responses from the group there that will make my thinking better. There will also be more detailed and formal responses on the Infernal Machine blog, and the first of them has now been posted by my friend (and kind host while I was there) Chad Wellmon.

Look for more responses there, and for my own counter-blasts, which will probably be posted both here and at the Infernal Machine.

report from the Luddite kingdom

What world does Michael Solana live in? Apparently, a world where Luddites have taken power and have driven our kind and benevolent technologists into some pitiful hole-and-corner existence, where no one dares to suggest that technology can solve our problems. “Luddites have challenged progress at every crux point in human history. The only thing new is now they’re in vogue, and all our icons are iconoclasts. So it follows here that optimism is the new subversion. It’s daring to care. The time is fit for us to dream again.” 

Yes! Dare to dream! But take great care — do you realize what those Luddites will do to you if you as much as hint that technology can solve our problems? 

I have to say, it’s pretty cool to get a report from such a peculiar land. Where you and I live, of course, technology companies are among the largest and most powerful in the world, our media are utterly saturated with the prophetic utterances of their high priests, and people continually seek high-tech solutions to every imaginable problem, from obesity to road rage to poor reading scores in our schools. So, you know, comparative anthropology FTW. 

And now, two serious points:

1) To quote Freddie deBoer, “Victory is yours. It has already been accomplished.” Is it really necessary for you to extinguish every last breath of dissent — even what comes to us in fiction? Relatedly:

2) Here again we see the relentless cultural policing of the pink police state. Stop reading young adult fiction! Stop writing dystopian fiction! Stop imagining what we do not wish you to imagine! 

In T. H. White’s The Once and Future King, here’s what happens when the Wart is turned into an ant: 

The place where he was seemed like a great field of boulders, with a flattened fortress at one end of it — between the glass plates. The fortress was entered by tunnels in the rock, and, over the entrance to each tunnel, there was a notice which said:
EVERYTHING NOT FORBIDDEN IS COMPULSORY
He read the notice with dislike, though he did not understand its meaning.

Welcome to the ant’s little world, where of course the converse is necessarily true: Everything that is not compulsory is forbidden. In the pink police state there are no adiaphora.