A while ago, a friend of mine (jokingly) pitched me on a new fad diet: XP-ganism. As all good D&D players know, agents in the world—in this case, animals—can be ranked by the amount of XP you’d be awarded for defeating them. Killing a lion or a brown bear would net you 200XP, whereas a boar would only yield 50XP, and a spider only 10XP. XP-ganism derives its principles from this: you declare a minimum XP threshold, and only eat the flesh or products of agents with XP above that threshold. If you think plants are agents, then you might have accidentally committed yourself to the bizarre carnivore diet that is so beloved by crypto bros and Jordan Peterson’s daughter. If not, you’re competing for status amongst the other XP-gans. “Oh, you’re a 20XP-gan?” you might ask, in faux-politeness, “That’s such a great start! I’m actually a 75XP-gan.”

For a hedonic utilitarian whose moral circle is sufficiently expanded, a more conventional diet like veganism can be justified on obvious grounds: eating chicken requires that a chicken be raised and slaughtered; this causes the chicken some amount of suffering, X, and has Y expected additional negative effects on the wellbeing of others; a substitute food Z causes less net suffering; and so on. A preference utilitarian’s assessment runs along similar lines: to the extent that the chicken has moral status, it has interests and preferences, and those preferences include ‘not being slaughtered for meat’; maximising global preference-satisfaction entails ‘not eating the chicken’.
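To make the arithmetic concrete, here’s a toy rendering of the hedonic comparison; the numbers are invented placeholders for the X, Y, and Z above, not measurements of anything:

```python
# Toy rendering of the hedonic comparison above. All quantities are
# invented placeholders, in arbitrary "units of suffering".
X = 10.0      # suffering borne by the chicken in being raised and slaughtered
Y = 3.0       # expected additional negative effects on the wellbeing of others
Z_net = 1.0   # net suffering caused by producing the substitute food Z

# The hedonic utilitarian eats the substitute iff it causes less net suffering:
eat_substitute = Z_net < X + Y
print(eat_substitute)  # True, under these made-up numbers
```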

The XP-gan introduces a kind of game-theoretic approach to these calculations.

Imagine a very particular agent: anti-Stella. Whatever you want, anti-you wants exactly the opposite. Whatever event would cause you the most pleasure would cause anti-you the most pain. Cooperation with a hypothetical opposite of this kind is definitionally impossible. Every choice you could consider is perfectly zero-sum.

XP-ganism is a crude gesture towards a kind of “preference utilitarianism as decision theory, where the preferences of other agents are weighted (in your calculations) according to the extent to which your interests & preferences are aligned with that agent’s”. In a world containing Stella and anti-Stella, the ‘naïve’ approach says that (suddenly, bizarrely) neither of you matters, morally, at all. XP-ganism says “well, the shape of Stella’s preferences is closer to mine than that of anti-Stella, so—to the extent of that resemblance—I’ll coordinate with her, and not with anti-her.”
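Here’s a minimal sketch of what that weighting might look like. The cosine-similarity alignment measure, the zero floor, and all the names and numbers are my own illustrative assumptions, not a worked-out decision theory:

```python
import numpy as np

def alignment(mine: np.ndarray, theirs: np.ndarray) -> float:
    """Cosine similarity between two preference vectors, in [-1, 1]."""
    return float(mine @ theirs / (np.linalg.norm(mine) * np.linalg.norm(theirs)))

def weighted_value(payoffs: dict, my_prefs: np.ndarray, others: dict) -> float:
    """Score an option: my payoff plus each other agent's payoff, scaled by
    how aligned their preferences are with mine (floored at zero, so a
    perfectly opposed agent drops out rather than counting negatively)."""
    value = payoffs["me"]
    for name, prefs in others.items():
        value += max(0.0, alignment(my_prefs, prefs)) * payoffs[name]
    return value

# Stella's preferences over three outcomes; anti-Stella's are the exact negation.
stella = np.array([1.0, 0.5, -0.5])
anti_stella = -stella                  # alignment(stella, anti_stella) == -1.0
ally = np.array([0.9, 0.6, -0.4])      # mostly, but not perfectly, aligned

payoffs = {"me": 2.0, "ally": 1.5, "anti": -3.0}
print(weighted_value(payoffs, stella, {"ally": ally, "anti": anti_stella}))
# anti-Stella's weight is max(0, -1) == 0, so her payoff is ignored entirely.
```

The zero floor is one design choice among several: you could instead let a perfectly opposed agent count negatively against an option. Whether opposed agents should get zero or negative weight is exactly the kind of question XP-ganism gestures at without answering.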

I say all of this as preamble to a question. When you talk about those games of Risk that our mutual group of friends played—about the way that “the line between play and life got blurrier” until, eventually, after “a few late nights and a couple too direct words”, everyone collectively opted to stop playing—you say that “We lost something in doing so.” What was it exactly that you think was lost?

My guess is that there’s a trade-off—in postmodern life, in Western cultures—between two things:

  1. the ability of a group to speak in precise and open terms about the nature of the incentive structures operating within the group, and
  2. the ability of a group to coordinate (sensibly & strategically) in relation to the outside world.

Almost all humans keep accounts, on some private level, regarding their interactions with others.

“Alice is really burning through good-will lately,” you might say, or think, or (barely) allow yourself to feel, “But after yesterday, I’m more confident that Bob can be relied upon in crisis.”

No two real-world agents are perfectly aligned. Yet, somehow, groups of modern humans find very particular ways to pretend that everyone within the group is perfectly aligned. A game such as Risk is purpose-built to render explicit (in low-stakes & stylised form) whole categories of territorial claims, coordinations, and conflicts that our culture otherwise depends on not talking about ‘in the open’. While it’s true that claims on fixed territory are fundamental to agrarian cultures and ways of thinking, it’s also true that ‘modern post-agrarian modes of thinking’ are adaptive cultural strategies baked into the languages and lives of groups of humans who outcompeted other groups of humans over millennia.

Let me be direct for a moment. If you put yourself into the mindset of the dominant players within the culture—the aristocracy, whether figurative or literal—the basic strategy that constitutes fixed agrarian modes of life is obviously as follows:

  1. control productive land;
  2. steal productive land from others;
  3. control the people who do the work that makes the land produce things;
  4. steal some portion of the things those people produce;
  5. repeat steps 1-4 as frequently and intensely as you can without being overthrown.

How was such a strategy actually made to work? In part, I claim, this approach came to dominate because the humans involved simultaneously found ways of

  1. subordinating and suppressing explicit discussion of the nature of the game,
  2. deceiving and coercing the victims of the game,
  3. making ‘control of fixed territory’ seem so fundamental that it was taken to be natural.

This is all, I think, pretty obviously contingent. Indigenous Australian cultures provide ample evidence that it’s possible for groups of humans to speak in more precise and open terms about incentives (including the accounting within the group) while also engaging in robust coordination. In a lot of ways, such cultures are—or, at least, were—obviously healthier than our own post-agrarian post-modernity. But those cultures also didn’t produce Risk … or, as you said, Matt Yglesias.

The real task, I think, is working out how to reconstruct robust group and individual agency from within a worldview in which almost everyone (by default) plays Risk and then defers to the need to stop playing lest harm be done. It’s telling that every party of adventurers in D&D is itinerant.