A while ago, a friend of mine (jokingly) pitched me on a new fad diet: XP-ganism. As all good D&D players know, agents in the world—in this case, animals—can be ranked by the amount of XP you’d be awarded for defeating them. Killing a lion or a brown bear would net you 200XP, whereas a boar would only yield 50XP, and a spider only 10XP. XP-ganism derives its principles from this: you declare a minimum XP threshold, and only eat the flesh or products of agents with XP above that threshold. If you think plants are agents, then you might have accidentally committed yourself to the bizarre carnivore diet that is so beloved by crypto bros and Jordan Peterson’s daughter. If not, you’re competing for status amongst the other XP-gans. “Oh, you’re a 20XP-gan?” you might ask, in faux-politeness, “That’s such a great start! I’m actually a 75XP-gan.”
For a hedonic utilitarian whose moral circle is sufficiently expanded, a more conventional diet like veganism can be justified on obvious grounds: eating chicken requires that a chicken be raised and slaughtered; this causes the chicken to suffer X amount, and has expected Y additional negative effects on the wellbeing of others; a substitute food Z causes less net suffering, etc. A preference utilitarian’s assessment runs along similar lines: to the extent that the chicken has moral status, it has interests and preferences, and those preferences include “not being slaughtered for meat”; maximising global preference-satisfaction entails “not eating the chicken”.
The XP-gan introduces a kind of game-theoretic approach to these calculations.
Imagine a very particular agent: anti-Stella. Whatever you want, anti-you wants exactly the opposite. Whatever event would cause you the most pleasure would cause anti-you the most pain. Cooperation with a hypothetical opposite of this kind is definitionally impossible. Every choice you could consider is perfectly zero-sum.
XP-ganism is a crude gesture towards a kind of “preference utilitarianism as decision theory, where the preferences of other agents are weighted (in your calculations) according to the extent to which your interests & preferences are aligned with that agent’s”. In a world containing Stella and anti-Stella, the “naïve” approach says that (suddenly, bizarrely) neither of you matters, morally, at all. XP-ganism says “well, the shape of Stella’s preferences is closer to mine than that of anti-Stella, so—to the extent of that resemblance—I’ll coordinate with her, and not with anti-her.”
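The weighting rule being gestured at can be sketched as a toy decision procedure. Everything below is my own illustrative construction, not anything from the essay: agents are modelled as utility tables over options, “alignment” is taken to be cosine similarity between utility vectors (clamped at zero, so a perfect anti-agent gets no weight at all), and the numbers are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length utility vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def choose(me, others, options):
    """Pick the option maximising alignment-weighted total utility.

    `me` and each entry of `others` map option -> utility. Each other
    agent's preferences count in proportion to how aligned their utility
    vector is with mine, clamped at zero for opposed agents.
    """
    my_vec = [me[o] for o in options]
    scores = {}
    for o in options:
        total = me[o]  # my own preferences get weight 1
        for other in others:
            w = max(0.0, cosine(my_vec, [other[x] for x in options]))
            total += w * other[o]
        scores[o] = total
    return max(options, key=lambda o: scores[o])

# Invented example agents and utilities:
options = ["eat_chicken", "eat_lentils"]
me          = {"eat_chicken": -0.5, "eat_lentils": 0.8}
stella      = {"eat_chicken": -1.0, "eat_lentils": 1.0}   # closely aligned
anti_stella = {"eat_chicken": 1.0,  "eat_lentils": -1.0}  # perfectly opposed

# anti-Stella's weight clamps to 0; Stella's preferences still count.
print(choose(me, [stella, anti_stella], options))  # -> eat_lentils
```

On this construal the “naïve” move of discarding both Stella and anti-Stella never arises: opposed preferences simply receive zero weight rather than cancelling yours out.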
I say all of this as preamble to a question. When you talk about those games of Risk that our mutual group of friends played—about the way that “the line between play and life got blurrier” until, eventually, after “a few late nights and a couple too direct words”, everyone collectively opted to stop playing—you say that “We lost something in doing so.” What was it exactly that you think was lost?
My guess is that there’s a trade-off—in postmodern life, in Western cultures—between two things:
- the ability of a group to speak in precise and open terms about the nature of the incentive structures operating within the group, and
- the ability of a group to coordinate (sensibly & strategically) in relation to the outside world.
Almost all humans keep accounts, on some private level, regarding their interactions with others.
“Alice is really burning through good-will lately,” you might say, or think, or (barely) allow yourself to feel, “But after yesterday, I’m more confident that Bob can be relied upon in a crisis.”
No two real-world agents are perfectly aligned. Yet, somehow, groups of modern humans find very particular ways to pretend that everyone within the group is perfectly aligned. A game such as Risk is purpose-built to render explicit (in low-stakes & stylised form) whole categories of territorial claims and coordinations and conflicts that our whole culture otherwise depends on not talking about “in the open”. While it’s true that claims on fixed territory are fundamental to agrarian cultures and ways of thinking, it’s also true that “modern post-agrarian modes of thinking” are adaptive cultural strategies baked into the languages and lives of groups of humans who outcompeted other groups of humans over millennia.
Let me be direct for a moment. If you put yourself into the mindset of the dominant players within the culture—the aristocracy, whether figurative or literal—the basic strategy that constitutes fixed agrarian modes of life is obviously as follows:
1. control productive land;
2. steal productive land from others;
3. control the people who do the work that makes the land produce things;
4. steal some portion of the things those people produce;
5. repeat steps 1-4 as frequently and intensely as you can without being overthrown.
How was such a strategy actually made to work? In part, I claim, this approach came to dominate because the humans involved simultaneously found ways of
- subordinating and suppressing explicit discussion of the nature of the game,
- deceiving and coercing the victims of the game,
- making “control of fixed territory” seem so fundamental that it was taken to be natural.
This is all, I think, pretty obviously contingent. Indigenous Australian cultures provide ample evidence that it’s possible for groups of humans to speak in more precise and open terms about incentives (including the accounting within the group) while also engaging in robust coordination. In a lot of ways, such cultures are—or, at least, were—obviously healthier than our own post-agrarian post-modernity. But those cultures also didn’t produce Risk … or, as you said, Matt Yglesias.
The real task, I think, is to work out how we can reconstruct robust group and individual agency from a worldview in which almost everyone (by default) plays Risk and then defers to the need to stop playing lest harm be done. It’s telling that every party of adventurers in D&D is itinerant.