on the Quants and the Creatives

Over the past few months I’ve thought from time to time about this Planet Money episode on A/B testing. The episode illustrates the power of such testing by describing how people at NPR created two openings for an episode of the podcast, and sent one version out to some podcast subscribers and the second to others. Then they looked at the data from their listeners — presumably you know that such data exists and gets reported to “content providers” — and discovered that one of those openings resulted in significantly more listening time. The hosts are duly impressed with this and express some discomfort that their own preferences may have little value and could, in the future, end up being ignored altogether.

I keep thinking about this episode because at no point during it does anyone pause to reflect that no “science” went into the creation of A and B, only the decision between them. A/B testing only works with the inputs it’s given, and where do those come from? A similar blindness appears in this reflection in the NYT by Shelley Podolny: “these days, a shocking amount of what we’re reading is created not by humans, but by computer algorithms.” At no point in the essay does Podolny acknowledge the rather significant fact that algorithms are written by humans.
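To make that division of labor concrete: the “science” in an A/B test is confined to a comparison like the one sketched below, in Python. The listening-time numbers are invented for illustration; the episode doesn’t describe NPR’s actual analysis, and this is just one common way such a comparison might be run.

```python
# Hypothetical illustration of "the decision between them": compare mean
# listening time for two cohorts of listeners. All numbers are invented.
from scipy import stats

minutes_a = [31.2, 24.5, 40.1, 18.9, 35.0, 27.4, 33.8]  # heard opening A
minutes_b = [22.4, 19.7, 28.3, 15.2, 21.8, 25.0, 18.6]  # heard opening B

# Welch's t-test: is the difference in mean listening time likely real?
t_stat, p_value = stats.ttest_ind(minutes_a, minutes_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Note what the test never touches: how openings A and B were written.
```

Everything upstream of those two lists, that is, the writing of the openings themselves, sits entirely outside the procedure.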

These wonder-struck, or horror-struck, accounts of the new Powers That Be habitually obscure the human decisions and acts that create the technologies that shape our experiences. I have written about this before — here’s a teaser — and will write about it again, because this tendentious obfuscating of human responsibility for technological Powers has enormous social and political consequences.

All this provides, I think, a useful context for reading this superb post by Tim Burke, which concerns the divide between the Quants and the Creatives — a divide that turns up with increasing frequency and across increasingly broad swaths of American life. “This is only one manifestation of a division that stretches through academia and society. I think it’s a much more momentous case of ‘two cultures’ than an opposition between the natural sciences and everything else.”

Read the whole thing for an important reflection on the rise of Trump — which, yes, is closely related to the division Tim points out. But for my purposes today I want to focus on this:

The creatives are able to do two things that the social science-driven researchers can’t. They can see the presence of change, novelty and possibility, even from very fragmentary or implied signs. And they can produce change, novelty and possibility. The creatives understand how meaning works, and how to make meaning. They’re much more fallible than the researchers: they can miss a clue or become intoxicated with a beautiful interpretation that’s wrong-headed. They’re both restricted by their personal cultural literacy in a way that the methodical researchers aren’t, and absolutely crippled when they become too addicted to telling the story about the audience that they wish was true. Creatives usually try to cover mistakes with clever rhetoric, so they can be credited for their successes while their failures are forgotten. However, when there’s a change in the air, only a creative will see it in time to profit from it. And when the wind is blowing in a stupendously unfavorable direction, only a creative has a chance to ride out the storm. Moreover, creatives know that the data that the researchers hold is often a bluff, a cover story, a performance: poke it hard enough and its authoritative veneer collapses, revealing a huge hollow space of uncertainty and speculation hiding inside of the confident empiricism. Parse it hard enough and you’ll see the ways in which small effect sizes and selective models are being used to tell a story, just as the creatives do. But the creative knows it’s about storytelling and interpretation. The researchers are often even fooling themselves, acting as if their leaps of faith are simply walking down a flight of stairs.

Now, there are multiple possible consequences of this state of affairs. It may be that the Quants are going to be able to reduce the power of the Creatives by simply attracting more and more money, and thereby in a sense sucking all the air out of the Creatives’ room. But something more interesting may happen as well: the Creatives may end up perfectly happy with the status quo, in which they can work without interference or even acknowledgement to shape the world, like Ben Rhodes in his little windowless office in the West Wing. Maybe poets are the unacknowledged legislators of the world after all.

And then? Well, maybe this:

Their complete negligence is reserved, however,
For the hoped-for invasion, at which time the happy people
(Sniggering, ruddily naked, and shamelessly drunk)
Will stun the foe by their overwhelming submission,
Corrupt the generals, infiltrate the staff,
Usurp the throne, proclaim themselves to be sun-gods,
And bring about the collapse of the whole empire.

more on social structures and imaginative work

A couple of follow-ups on yesterday’s oddball rantish thing on the social and economic structures that enable or disable genuine imagination:

First, a really thoughtful response from my friend Bryan McGraw, who can provide a political philosopher’s take on these issues. Please read it all, but here’s an excerpt:

No doubt lots of folks on the political and cultural Left will read this (or see pithily tweeted link) and cheer. See, they’ll say, the universities are being “corporatized” and here’s another casualty! Ah, but I think Alan’s point is meant to cut more deeply than that, because what our libertarian economists and socialist sociologists share is a deep, deep commitment to a modern (and post-modern) conception of human moral psychology that reduces human beings to calculating preference machines (whether those preferences emerge out of appetites, culture, whatever makes for many of our differences, but that they rule us is widely held). And since we can see “through” human beings that way, we can organize them (or allow them to organize themselves) in some unitary and unified way. That’s why we can see what looks superficially like a paradox – a society that is both more libertine (sexual ethics limited only by consent) and puritanical (don’t smoke!) – is, in fact, not and why there is a tremendous amount of pressure to remake every institution and range of human activity in the image of, well, something or someone.

In a well-known passage, C. S. Lewis writes, “Nothing strikes me more when I read the controversies of past ages than the fact that both sides were usually assuming without question a good deal which we should now absolutely deny. They thought that they were as completely opposed as two sides could be, but in fact they were all the time secretly united — united with each other and against earlier and later ages — by a great mass of common assumptions. We may be sure that the characteristic blindness of the twentieth century — the blindness about which posterity will ask, ‘But how could they have thought that?’ — lies where we have never suspected it, and concerns something about which there is untroubled agreement between Hitler and President Roosevelt or between Mr. H. G. Wells and Karl Barth.” I think (I hope) that later ages will see almost all of today’s political thought as wrapped up in the unquestioned and even unconfronted assumption that people are simply “calculating preference machines.”

More directly to the point of my article, while Eisenhower may have wanted us to distrust the “military-industrial complex” because of its power to involve private industry in policy-making, and while that is a very important warning indeed, when government, mega-industry, and the university system all become entangled beyond the possibility of disentanglement, the flow of influence runs in all directions, but especially from the richer to the less-rich — from the patrons to the patronized. And that puts universities in the position of being shaped far more than they shape; and that, in turn, puts the artists and writers who work for the university in an even more dependent position. This worries me.

I think I’ll have more to say about Bryan’s smart response, but for now just one note: I do think the anti-capitalist left is likely to find something to cheer in my post; they and I have a good deal in common. My politics are probably too incoherent to describe, but one might say that they are sorta kinda paleo-conservative green-communitarian, emphasizing the need to renew and strengthen the institutions (especially family and local community, and schools insofar as they grow out of family and local community) that mediate between the individual and the nation-state, for the better care of people and the created order. And since the nation-state that is growing and growing and growing in power is an international-capitalist one, I end up agreeing with the left that that nation-state’s dominance is probably our largest single political problem. When I think about politics, I have infinitely more sympathy for a left-anarchist like David Graeber than I do for any National Greatness conservatism. (Bryan, set me straight if I’m leaving the true path here.)

Second: One of the reasons I want to make an argument for regenerating genuine imagination, genuine creativity, is that “imagination” and “creativity” are today almost totally co-opted by scenes like this — the happy-clappy “super excited” artificially-generated enthusiasm of the TED world that Benjamin Bratton has called, in one of the most apt phrases of the twenty-first century, “middlebrow megachurch infotainment”. If that’s what imagination and creativity are all about, may God save us all from them.

biased against creativity?

“People say they like creativity but they really don’t” is Slate’s summary of a new paper. Having read the paper, “The Bias Against Creativity: Why People Desire But Reject Creative Ideas” (PDF), I think Slate is right that that’s the paper’s claim. But I don’t think that’s what the research actually shows.

The paper’s authors rightly say that “Creative ideas are both novel and useful,” but if I’m reading their paper rightly — and please correct me in the comments if I’m not — what they show is that the people they tested were suspicious of novelty. What the authors seem not to be taking into account is that all creative ideas are by definition novel, but not all novel ideas are creative — in fact, I think it’s fair to say that most novel ideas are pretty stupid. (Samuel Johnson is often credited — erroneously — with saying to a writer “Your manuscript is both original and good. But the parts that are good are not original, and the parts that are original are not good.” Even if Dr. Johnson didn’t say it, the quip makes a point.)

So when people prove to be skeptical about novel ideas, aren’t they just being rational? Running the numbers appropriately? That doesn’t mean that they’re “biased against creativity,” only that they know from experience that the great majority of people who think they’re creative really aren’t. That’s why, when my wife was in a meeting some years ago and heard, for the thousandth time, someone make an appeal for “thinking outside the box,” she replied, somewhat plaintively, “Can we first try finding one or two people who can think inside the box?”
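One can even put back-of-envelope numbers on that rationality. The figures below are invented purely for illustration, a sketch of the base-rate logic rather than anything from the paper:

```python
# Base-rate arithmetic, with numbers invented purely for illustration:
# if only a small fraction of novel ideas are good, skepticism toward any
# given novel idea is rational updating, not a "bias against creativity."
p_good = 0.05             # assumed base rate: 1 novel idea in 20 is actually good
p_pitch_given_good = 0.9  # good ideas usually arrive with a confident pitch...
p_pitch_given_bad = 0.7   # ...but so do most bad ones

# Bayes' rule: P(good | confident pitch)
p_pitch = p_good * p_pitch_given_good + (1 - p_good) * p_pitch_given_bad
p_good_given_pitch = p_good * p_pitch_given_good / p_pitch
print(f"{p_good_given_pitch:.1%}")  # ~6.3%: the smart prior is still "probably not"
```

Under assumptions like these, even a persuasive pitch for a novel idea should barely move the listener. That looks, from the outside, exactly like a bias against creativity.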

routines and rituals

Here’s a thoughtful brief review by Siobhan Phillips of Daily Rituals, a book by Mason Currey based on his now-dormant blog Daily Routines. Phillips:

An artist’s schedule is important, Currey’s book reminds us, for its refusal to squeeze the most working minutes out of the artist’s waking hours. At a moment when we’re working longer than ever — and, as we dutifully lean in, trying to feel inspired and empowered by working more — it’s useful to recall that many of the greatest minds planned to fritter away parts of their days, that their routines protected creativity by filling the time around a more or less fixed window of possible, genuine intensity. Some strategies are more whimsical, like Patricia Highsmith’s habit of tending snails or Flannery O’Connor’s of raising birds, but most are very ordinary: Stephen King watching baseball, Jean Stafford gardening. There’s a good bit of smoking in this book, and a steady attention to drinking; there’s a lot of walking, too. (It seems to work even if you don’t, like Tchaikovsky, panic at any stroll shorter than two hours.) But one suspects that smoking and drinking and walking are so popular because they are the most universally accessible way to stave off the restlessness of the hours when one cannot — should not — be at a desk. They offer a way to forget how brief and chancy is the ability to create something new, to refine something beautiful, to think something true.

And about that ability, of course, schedules can say very little. That’s another point to be taken from this fascinating compendium. As if to recognize the mystery, Currey’s title evolved, when he turned his blog into this book, from Daily Routines to Daily Rituals. The amendment sneaks something spiritual back into his obsession with habit. Like the rites of religious devotion, the timetables of art surround an essence that is unrepeatable and unquantifiable. “It will appear like a calm existence,” Maira Kalman says of her schedule, but “the turmoil is invisible.” We fetishize that trackable calm because we cannot reproduce the inexplicable turmoil.

Lovely, and correct — and an understanding of creative labor pretty much impossible to reconcile with our society’s current obsession with “productivity.” There are many lessons to be learned from Currey’s book, but people who read Lifehacker might not be ready to hear them.

one weird trick to unleash your creativity

Thomas Frank is exasperated:

What our correspondent also understood, sitting there in his basement bathtub, was that the literature of creativity was a genre of surpassing banality. Every book he read seemed to boast the same shopworn anecdotes and the same canonical heroes. If the authors are presenting themselves as experts on innovation, they will tell us about Einstein, Gandhi, Picasso, Dylan, Warhol, the Beatles. If they are celebrating their own innovations, they will compare them to the oft-rejected masterpieces of Impressionism — that ultimate combination of rebellion and placid pastel bullshit that decorates the walls of hotel lobbies from Pittsburgh to Pyongyang.

Those who urge us to “think different,” in other words, almost never do so themselves. Year after year, new installments in this unchanging genre are produced and consumed. Creativity, they all tell us, is too important to be left to the creative. Our prosperity depends on it. And by dint of careful study and the hardest science — by, say, sliding a jazz pianist’s head into an MRI machine — we can crack the code of creativity and unleash its moneymaking power.

That was the ultimate lesson. That’s where the music, the theology, the physics and the ethereal water lilies were meant to direct us. Our correspondent could think of no books that tried to work the equation the other way around — holding up the invention of air conditioning or Velcro as a model for a jazz trumpeter trying to work out his solo.

And why was this worth noticing? Well, for one thing, because we’re talking about the literature of creativity, for Pete’s sake. If there is a non-fiction genre from which you have a right to expect clever prose and uncanny insight, it should be this one. So why is it so utterly consumed by formula and repetition?

I’d like to suggest an answer to this question: the problem is that there’s actually no such thing as “creativity.” It’s a made-up concept bearing no relation to anything that exists. It’s a classic case of what the Marxists used to call “false reification.” Let’s never speak of it again.

where “we” are

Peggy Nelson:

We’ve moved from the etiquette of the individual to the etiquette of the flow.

Question: Who are “we”?

This is not mob rule, nor is it the fearsome hive mind, the sound of six billion vuvuzelas buzzing. This is not individuals giving up their autonomy or their rational agency. This is individuals choosing to be in touch with each other constantly, exchanging stories and striving for greater connection. The network does not replace the individual, but augments it. We have become individuals-plus-networks, and our ideas immediately have somewhere to go. As a result we’re always having all of our conversations now, flexible geometries of nodes and strands, with links and laughing and gossip and facts flying back and forth. But the real message is movement. . . .

Eventually I learned to stop worrying and love the flow. The pervasiveness of the new multiplicity, and my participation in it, altered my perspective. Altered my Self. The transition was gradual, but eventually I realized I was on the other side. I was traveling with friends, and one of them took a call. Suddenly, instead of feeling less connected to the people I was with, I felt more connected, both to them and to their friends on the other end of the line (whom I did not know). My perspective had shifted from seeing the call as an interruption to seeing it as an expansion. And I realized that the story I had been telling myself about who I was had widened to include additional narratives, some not “mine,” but which could be felt, at least potentially and in part, personally. A small piece of the global had become, for the moment, local. And once that has happened, it can happen again. The end of the world as we know it? No — it’s the end of the world as I know it, the end of the world as YOU know it — but the beginning of the world as WE know it. The networked self is a verb.

Question: In the Flow, is there any reason not to text one person while you’re having sex with another one?

How might this apply to storytelling? It does not necessarily mean that every story must be, or will become, hopelessly fragmented, or that a game mentality can or should replace analysis. It does mean that everyone is potentially a participant in the conversation, instead of just an audience member or consumer at the receiving end. I think the shift in perspective from point to connection enables a wider and more participatory storytelling environment, rather than dictating the shape of stories that flow in the spaces.

Ah, it’s consumption vs. creation again. Question: In the Flow, is there ever any value to listening? Or, to put it another way: In the Flow, are “listening” and “consuming” distinguishable activities?

creativity in crisis

Well, this does not seem to be good news:

With intelligence, there is a phenomenon called the Flynn effect — each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.

Kyung Hee Kim at the College of William & Mary discovered this in May, after analyzing almost 300,000 Torrance scores of children and adults. Kim found creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward. “It’s very clear, and the decrease is very significant,” Kim says. It is the scores of younger children in America — from kindergarten through sixth grade — for whom the decline is “most serious.”

On the surface, this seems to run counter to Clay Shirky’s thesis that the internet and related technologies are yielding a “cognitive surplus” that allows us greater scope for creativity. It will therefore be interesting to hear how Shirky responds to these findings. Presumably he won’t reconsider his thesis; it’s possible that he will find flaws in the research, or in the definition of “creativity” the studies use.

But my bet is that he’ll say something like this: These studies identify a decline in creativity that begins before the digital era, which means that the blame cannot be placed on use of the internet, but rather on the preceding dominant technology, television; therefore, as our attention shifts more and more completely to the interactive media enabled by the internet, the decline in creativity will be arrested and then reversed. I don’t think such a response is adequate to the facts on the ground, but I’m guessing this is what we’ll hear from Shirky and other congenital optimists.

I’m finding typing too laborious to give my own response in any detail, but I’m inclined to blame not the internet but rather our culture of managerial parenting, in which children are given almost no opportunity, from toddlerhood through late adolescence, to engage in unstructured play. Which would not be the worst news in the world: it’s more likely that parents learn to back off a bit than that we abandon online life.

creation and consumption

From Megan Garber’s largely positive, thoughtful review of Clay Shirky’s Cognitive Surplus:

But the problem with TV, in this framing, is its very teeveeness; the villain is the medium itself. The differences in value between, say, The Wire and Wipeout, here, don’t much matter — both are TV shows, and that’s what defines them. Which means that watching them is a passive pursuit. Which means that watching them is, de facto, a worse way — a less generous way, a more selfish way — to spend time than interacting online. As Shirky puts it: “[E]ven the banal uses of our creative capacity (posting YouTube videos of kittens on treadmills or writing bloviating blog posts) are still more creative and generous than watching TV. We don’t really care how individuals create and share; it’s enough that they exercise this kind of freedom.”

The risk in this, though, for journalism, is to value creation over creativity, output over impulse. Steven Berlin Johnson may have been technically correct when, channeling Jeff Jarvis, he noted that in our newly connected world, there is something profoundly selfish in not sharing; but there’s a fine line between Shirky’s eminently correct argument — that TV consumption has been generally pernicious in its very passivity — and a commodified reading of time itself. Is the ideal to be always producing, always sharing? Is creating cultural products always more generous, more communally valuable, than consuming them? And why, in this context, would TV-watching be any different from that quintessentially introverted practice that is reading a book?

Sometimes it seems that in Shirky’s ideal world everyone is talking and no one is listening. (I commented on the idea that not sharing is selfish here.)

“darkness and silence”

Robert McCrum:

For new and original books to flourish, there must be privacy, even secrecy. In Time Regained, Marcel Proust expressed this perfectly. “Real books”, he wrote, “should be the offspring not of daylight and casual talk, but of darkness and silence.”

How many “real books” enjoy “darkness and silence” today? Not many. In 2010, the world of books, and the arts generally, is a bright, raucous and populist place. The internet – and blogs like this – expose everything to scrutiny and discussion. There’s a lot of self-expression, but not necessarily much creativity.

So the question I ask is: can the secret state of creative inspiration flourish on global platforms on which everything is exposed, analysed and dissected?

I don’t think this is quite right. I think there are some kinds of books — some kinds of art — that can only be made in privacy, by people who seclude themselves from other voices and work through a project without interference. But that’s not a universal rule. Many of the ideas in the book I’m writing now, on reading in a digital age, have made their first appearance on this blog: I have tried out thoughts, had readers agree or disagree or send me links to related ideas. Even when you don’t get a lot of comments on an idea, just putting it before the public forces you to think about it in a different way than when it’s only in your head.

Maybe what McCrum should have written is that there must be a stage in the making of any significant work that takes place “in darkness and silence.” But even Proust gained a great deal of the knowledge and insight that fed his books from social occasions — even Proust!

Asking too much of “compression”

[Continuing coverage of the 2009 Singularity Summit.]

Juergen Schmidhuber’s talk is underway now: “Compression Progress: The Algorithmic Principle Behind Curiosity, Creativity, Art, Science, Music, Humor.” (Abstract and bio available here.)

Dude has an immediate stage presence. Charming German accent, cool deadpan delivery. He starts off staring at the audience, and launches into a story which turns out to be a joke. The gist (sorry, this blog is not the best venue for delivery of jokes) is that there are three prisoners facing death, and they are asked their last wishes. The first says something to establish the joke rule of threes. The second, a German, says, “I vant to give a speech!” The last, an Englishman, says, “I want to be executed before the German.” Ba-da-boom!
He’s still telling some jokes and building up to his talk. I heard people complaining over the break about people who took too long to get into their talks. But the topic here, among other things, is humor. One way or another, I’m liking this guy. Now he’s showing a slide that he says is his “take-home message”: “(Human) Unsupervised Intelligent Agent.” (I guess we’re meant to insert verbs in there, like on CNN headlines?)
Okay, so he’s now outlining the technical specs of a human as a computer. “It” has the capacity to store a lifetime (say, 100 years) of sensory input, at a compressed bit rate. He’s detailing the “compression algorithm” we have, which he says gets better as individuals learn — meaning, as we learn what data to save. He says what matters is not the number of bits we’re saving, but the change in the number of bits we need to save as our learning algorithm gets better.
So far, he’s said a whole lotta’ nothin’, but he says this is all we need to understand in order to get the rest of his talk on music, humor, etc. He’s talking about a scenario with a robot sitting in a dark room. This is boring, he says, because the input is completely compressible and nothing changes. But if you hear music, there’s more incoming data and more structure still to be discovered, and the ongoing process of learning to compress it better is where the interest lies.
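Here’s one way to make the idea concrete. This is not Schmidhuber’s own formalism, just a minimal Python sketch in which a Laplace-smoothed symbol predictor stands in for the “compressor,” and “progress” is the drop in bits per symbol as it learns; the streams and numbers are invented for illustration.

```python
import math
import random
from collections import Counter

class AdaptiveModel:
    """Laplace-smoothed symbol predictor, standing in for the 'compressor'."""
    def __init__(self, alphabet_size):
        self.counts = Counter()
        self.total = 0
        self.k = alphabet_size

    def bits(self, symbol):
        # Ideal code length in bits, -log2 p(symbol), under the current model.
        p = (self.counts[symbol] + 1) / (self.total + self.k)
        return -math.log2(p)

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

def early_vs_late_cost(stream, alphabet_size=2):
    """Average bits per symbol over the first and second halves of the stream."""
    model = AdaptiveModel(alphabet_size)
    costs = []
    for s in stream:
        costs.append(model.bits(s))
        model.update(s)
    half = len(costs) // 2
    return sum(costs[:half]) / half, sum(costs[half:]) / half

random.seed(0)
dark_room = [0] * 1000                                    # unchanging input
noise = [random.randint(0, 1) for _ in range(1000)]       # incompressible input
melody = [0 if random.random() < 0.9 else 1
          for _ in range(1000)]                           # learnable regularity

print(early_vs_late_cost(dark_room))  # cost collapses toward 0: brief progress, then nothing
print(early_vs_late_cost(noise))      # cost stays near 1 bit: no progress at all
print(early_vs_late_cost(melody))     # cost falls as the bias is learned: steady progress
```

On this toy account the dark room is boring because it is compressed almost immediately and then yields nothing further, pure noise is boring because the model never improves, and only learnable-but-not-yet-learned structure generates the ongoing progress that Schmidhuber identifies with interest.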
He’s talking about art now, such as caricatures, where talented artists discover how to represent famous faces in just a few lines (he claims one artist can represent President Obama’s face in five lines, though he’s not showing it to us). Science and art, he says, are all about distilling this “essence” and discovering how to compress more. Now he’s tying this back to entropy and artificial neural nets, which try to discover new, more efficient ways of compressing or matching data sets.
Schmidhuber’s claim is thought-provoking, to say the least. Certainly one of the most distinctive features of intelligence is its ability to “distill the essence” of phenomena so that they can be explained in simpler ways. He talked in particular about representing visual data (which I found especially interesting because I’m such a big fan of PNG and vector-based image encodings, since they’re lossless and, for the kinds of images they suit, far more efficient than lossy JPEGs).
But this strikes me as a great example of metaphor gone awry. First of all, the metaphor itself is only so good. He mentioned “distilling the essence” and simplicity only once; mostly, he’s been talking again and again about bit rates, reward optimizers, and compression. Even if we could perfectly replicate intelligence based on these principles, getting from there to strong claims that “this is really what’s going on” (which is implicit in all of this) is absurd, because these are all recent inventions of engineering. But really, the applicability of the metaphor itself is weak. We might notice some useful correspondences between the system he’s describing and the way our own intelligence works. But there is in his talk a casual presumption that we’ve somehow gotten at some deeper scientific truth about how intelligence works, rather than having just, as A.I. folks like to say, produced an interesting heuristic.
The other point that needs to be made is that the aspect of intelligence for which Schmidhuber has produced a metaphor is just one part of it, and not even close to its essence. Hardly all of intelligence, creativity, and humor is “figuring out the essence behind things.” That implies a stripping down (a “compression,” as he would call it) that is in fact the opposite of what most creativity and humor is about. Where is the compression in this?:
[Image: Jackson Pollock, “Autumn Rhythm”]

“Fractals,” you might say. Sure, maybe fractals are behind Pollock’s painting, and maybe they even have something to do with why some people find it intuitively pleasing or fascinating. Yet we don’t revel in aesthetic awe at the simple equations behind fractals, but rather at their embodied (or “uncompressed”) form in the painting itself.

UPDATE: Oh, so that’s how beauty works! One of Schmidhuber’s slides: