Summary
I wanted to write up some notes about thoughts I’ve had on the philosophy of desire over the years. I should be clear, at the outset, that this is not a topic I’ve written on before, and that I haven’t read the prior literature on this topic. Everything I know about it comes from my own life, and from reading in related areas of philosophy (the philosophy of mind, the philosophy of wellbeing, moral psychology etc). My plan is to write this, and then read the Stanford Encyclopedia of Philosophy entry on desire immediately afterward, and see if it changes how I feel about the subject.
In summary:
We don’t know much about how what people intrinsically desire changes over time- neither the causes of its change nor the typical patterns in which it changes.
The relationship between desire and philosophy has a traumatic history- a lot of people even today are shaken by the reality that desire is arbitrary- desire can’t be right or wrong and, in Hume’s words: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.”
Desire itself is an ambiguous term that can refer to multiple different things. Whether ‘desire’ in general or overall, is a useful concept is unclear.
Despite Humean restrictions, there are some limited ways in which we can, roughly speaking, compare the rationality of one set of desires with another.
Leaving aside the ambiguities in the concept of desire, we have no idea what it is people really intrinsically desire, whether there are facts of the matter about what things people intrinsically desire (at least prior to asking them), and how much individuals vary from each other with respect to their intrinsic desires. It may not be that intrinsic desires are for particular outcomes, so much as they are for prototypical lifestyles, or even for groups or ‘clusters’ of desires in which individual components are not necessarily either determinately intrinsic or extrinsic.
The philosophy of desire may be a vital part of what we need to create safe artificial intelligence, and also an intrinsic part of what we need to create human-like artificial intelligence.
Desire is weirder than is dreamt of in your or my philosophy. If you want to think through desire in a personal way, start by thinking about what you’d desire in a utopia.
Starting to think about the weirdness of changing desires
I remember playing a D&D campaign once. We were raising a red dragon. It was evil from birth. We were trying to convince it not to be evil. It was hard to imagine what to do. How do you make an intrinsic desire that things go well for other people ‘come out of nowhere?’
Or look at it this way. Suppose you were a prison psychologist. You had all the time in the world, all the resources you wanted. Your job was to convince, say, Ted Bundy, to care about other people. How would you do it? It’s not an easy question to answer. It’s not even clear there is an answer. You might know how to nurture the flame of a little compassion into more compassion, but how do you start the process from nothing or nearly nothing? From the absence of an intrinsic desire for the well-being of others?
The Humean picture of belief and desire
Since Hume, the implicit picture held by many intellectuals about propositional mental states has been this. People have beliefs (or degrees of belief) and desires. A special kind of desire is called an intrinsic desire- the desire for a thing for its own sake. There is no belief such that holding it guarantees we have an intrinsic desire for anything. There is no desire such that holding it guarantees we believe something. Beliefs and desires are thus (logically) independent. Desires give us our goals; beliefs tell us how to pursue them. Thus, in a sense, ‘belief’ or ‘reason’ is the ‘slave’ of ‘desire’ or ‘the passions’- or as Hume put it:
Reason Is and Ought Only to Be the Slave of the Passions
Now this applies to reason and desire only as logical categories: neither conceptually necessitates the other, but this says nothing about their causal relationship. While in theory no belief necessitates any desire and vice versa, in practice we might find that, the way our brain is built, certain beliefs automatically lead, or tend to lead, to certain desires and vice versa. Nonetheless, we assume that belief and desire are causally unrelated until evidence shows otherwise. Evidence often does show otherwise- there are many results from empirical psychology suggesting belief and desire are causally intertwined. To sum up: belief and [intrinsic] desire never conceptually necessitate each other, but sometimes belief and desire can be causally related.
What about a relationship of rationality? In principle, the lack of a relationship of conceptual necessitation between holding a belief and holding a desire doesn’t rule out the possibility that an agent who is fully rational and who holds a certain belief must therefore desire a certain thing. Some people think, for example, that a being who believes that pressing a certain button will condemn herself and everyone else to everlasting torment must therefore, if she is rational, desire not to press the button. This topic is heavily debated, but I tend to agree, once again, with Hume on this- as weird as it might seem, there’s nothing strictly irrational about wanting eternal suffering for everyone, including yourself.
Desire dynamics
The other night, though, something strange occurred to me. We don’t know much about how desires change over time at all. In the Ted Bundy and red dragon cases I discussed above, you can give a simple answer- total sociopaths generally don’t just start caring about people. However, even if this particular change in desire doesn’t often, or ever, happen, it does seem like people sometimes change their intrinsic desires. How does this process work?
Compare with belief. We have at least a rough idea of how what people believe can be changed. For example:
New evidence and argument (yes, these do sometimes work, I’ve seen it happen- I’ve had it happen to me).
Cognitive dissonance (I want to sleep with my friend’s husband. I believe that there’s a good chance that if I did it I’d get caught, and it would devastate my friend. I convince myself it wouldn’t be so bad, and I won’t get caught).
Peer pressure (this can be a sub-case of both 1 & 2 but can also be its own factor).
But we don’t really have a very good idea of how intrinsic desires change. To be clear, we have a pretty good idea of how instrumental desires- desiring something for the sake of achieving something else- can change. If I desire to be healthy and I hear that drinking a glass of red at dinner might help with that, I might start desiring to drink a glass of red at dinner. If I later learn that this research is flawed (I’m not saying it is- this is not medical advice- it’s just an example), I will lose my instrumental desire for a glass of red at dinner. It’s pretty clear how both cognitive dissonance and new evidence can affect instrumental desires.
But the change in intrinsic desires over time is a harder thing to understand. Let’s go through the factors we listed for belief and see if they apply:
It’s hard to see that any new evidence or reasoning could require an intrinsic desire to change. The sole exception, perhaps, is when we realize that certain of our intrinsic desires are incompatible, this might lead to their revision. However, even this is unclear- because knowingly wanting inconsistent things- while angsty- doesn’t seem as impossible as knowingly believing inconsistent things.
It’s also hard to see how cognitive dissonance could influence intrinsic desires or at least the examples are less obvious than in the case of beliefs and instrumental desires.
Of the list above, peer pressure seems like it could be a factor, certainly, but even the parameters of this relationship are poorly understood.
One plausible (partial) mechanism for how desires can change over time might be called the sour grapes mechanism. As I come to realize I’ll never get something, I stop intrinsically desiring it. The inverse- me coming to intrinsically desire what I know will end up being the case- seems plausible but less certain.
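The sour grapes mechanism can be made a little more concrete with a toy update rule. This is purely my own illustrative sketch, not a model from the literature: assume the intensity of an intrinsic desire decays in proportion to how unattainable the agent estimates the goal to be.

```python
def sour_grapes_update(intensity: float, p_attain: float,
                       decay_rate: float = 0.5) -> float:
    """Return the new desire intensity after one update step.

    intensity:  current strength of the intrinsic desire, in [0, 1]
    p_attain:   the agent's estimated probability of ever attaining it
    decay_rate: how strongly perceived unattainability erodes the desire

    All parameters and the functional form are illustrative assumptions.
    """
    # Desire erodes in proportion to how hopeless the goal seems.
    return intensity * (1 - decay_rate * (1 - p_attain))

# A desire the agent believes is nearly hopeless fades over repeated updates,
# while a desire believed to be fully attainable is left untouched.
d = 1.0
for _ in range(10):
    d = sour_grapes_update(d, p_attain=0.05)
print(round(d, 3))
```

The inverse mechanism the text mentions (coming to desire what one expects to happen) would just be the same rule run with the sign flipped, which is part of why it seems plausible but less certain.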
But in the main, the dynamics of desire are much more mysterious than the dynamics of belief, and the dynamics of belief are pretty mysterious themselves, so that’s certainly saying something. I find this fascinating: few topics could be more important than the question of what people want for its own sake- it matters in all the human arts and sciences- yet we have very little to say about how what we really want changes.
[Edit: Having now read the Stanford Encyclopedia of Philosophy entry, they suggest one possible route I hadn’t written about in this essay. Instrumental desires that successfully lead to the fulfillment of intrinsic desires gradually become intrinsic desires over time.]
A look at the psychological literature suggests that while there is plenty of work on transformations in what people want, it doesn’t typically (in my view) sufficiently pull apart intrinsic and non-intrinsic desires. I can’t really blame it for this, because as I’ll discuss later, this is almost impossible. It may even be impossible in practice to figure out which desires are truly intrinsic.
So I’m left wondering about the dynamics of how intrinsic desires change- by what methods, how much, and in what patterns do intrinsic desires change over the lifespan?
Hume and the trauma of desire
Let’s go back to Hume and then to Nietzsche for a moment. I’m going to tell a little parable about the history of philosophy. I don’t know whether or not it’s true, so please do not rely on it. However, I do think it is interesting.
Once upon a time, philosophers spent a great deal of energy thinking about what it is that we desire and what it is we should desire, because desire and belief were not seen as two neat, split categories. Then Hume came along with his moral psychology and split them. The trauma of this split made philosophers stop thinking so much about the things we desire, although arguably that trend had set in earlier. Make no mistake, it was a traumatic split: the discovery that desire was arbitrary, in the sense of being unrestrained by evidence and reason, led to the panic over nihilism. As Hume remarked: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.” There’s tremendous moral vertigo inherent in this; for anyone who had thought that morality is grounded in reason, it’s almost a philosophical jump scare.
A lot of people still are traumatized by the arbitrary character of desire. When I was younger I was greatly bothered by it- it even fed into my clinical depression at times. When I considered that ordinary morality had no binding power to force a ‘reasonable’ agent to want moral outcomes, that I could just as easily and reasonably prefer that the whole world be transformed into a vast torture chamber as a utopia- I was chilled. Action and wanting felt empty and ungrounded. Then as I got older I stopped caring. I happen to want everything to turn out well for everyone. I could just as easily want everything to turn out badly. Morality and desire both are my choices and, from a certain point of view, are arbitrary. Big whoop! I’ll do and feel what I like. All life is a rebellion against the void anyway. It’s not as if the ungroundedness of intrinsic desires in reason makes my desires irrational- they’re just arational.
There’s still a lot of scope, I think, for systematically considering the things we actually desire, even if the desire isn’t much constrained by reason beyond, perhaps, consistency. To a certain extent, this is what already happens in moral philosophy, aesthetics and in the theory of prudence, but my gut says it could go so much further.
Is there any way to reason about what desires we should have given Humean restrictions?
Kind of.
In the Analytic tradition, Rawls’s concept of reflective equilibrium as a methodology for ethical and normative-political work is cool. Here’s one way of construing what it’s doing: reflective equilibrium recognizes the arbitrary nature of (ethical) wants- constrained only by other wants- and recognizes that the only possible way to reason about desire is through the consistency of desire with desire. It then says “alright, let’s go for it”. We test our ethical desires against particular cases (e.g. through thought experiments) and revise our positions on both the cases and the general principles. There’s no reason why we couldn’t apply similar principles to questions of what we should desire.
I’ve previously written about what I call a practical dominance methodology. When I wrote about it, it was in the context of moral views, but I think it works in the context of desire generally. It works like this. Suppose that there are two frameworks of desire A & B. According to both A & B, you will achieve more of what you want if your desire framework is B. That is to say, you will get more of the things desired by the A framework if you adopt the B framework- B is superior to A not only by its own lights but by A’s lights as well. Under these circumstances, I think there is a compelling case to stop desiring the A things and start desiring the B things. Whether you actually can do this- can bring desire under conscious control- is very unclear though. Certainly, if it is possible, it is difficult.
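The practical dominance test above can be stated as a tiny decision rule. This is a hedged sketch of my own; the frameworks, scoring scheme, and numbers are all illustrative assumptions, not anything from a formal theory.

```python
def dominates(scores: dict, b: str, a: str) -> bool:
    """True if adopting desire framework `b` beats adopting `a` by the
    lights of BOTH frameworks.

    scores[(judge, adopted)] = how much of what `judge` wants is achieved
    when the agent actually adopts the `adopted` framework.
    """
    return (scores[(a, b)] > scores[(a, a)]    # B wins even by A's lights
            and scores[(b, b)] > scores[(b, a)])  # and by B's own lights

# Illustrative numbers: even A's own goals go better if the agent adopts B.
scores = {
    ("A", "A"): 5, ("A", "B"): 8,
    ("B", "A"): 2, ("B", "B"): 9,
}
print(dominates(scores, "B", "A"))  # True: a compelling case to switch
```

Note that the rule only tells you when switching is compelling; as the text says, whether you can actually bring desire under conscious control is a separate and much harder question.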
The nature of desire
What, exactly, is desire? If we could answer this more exactly we’d be able to say something useful about how it changes over time, but as it turns out, this is not so easy. Here are some things in the ‘vicinity’ of desire.
First, there is craving. Roughly, a sensation of wanting, a sometimes difficult-to-resist motivational force leading us to seek something.
Then there are what we might call drives. A drive is an innate tendency to crave certain things.
Then there is pleasure, and what we will call pleasurances- that which you take pleasure in. If you enjoy fine red wine for example we would say that you have a pleasurance for fine red wine. Obviously, we can imagine similar but inverse categories for pain.
Then there is preference: a state of preferring one state of affairs to another. There are, I think, important categories here. Consider the division between: A) sincerely stated preferences and B) behavioral preferences- what the behavior of the agent aims at, or would aim at under the right circumstances. These can diverge- people often sincerely state they want something while not actually working towards it, or even working for the opposite. Preferences can also be what we might call actual or all-other-things-being-equal (AOTBE) preferences. An actual-circumstances preference is a preference given the conditions that actually obtain, whereas an AOTBE preference is, as the name suggests, ‘all other things being equal.’ Thus I might have a preference to have sex with my friend’s husband, but not given the betrayal this would involve under current circumstances- I have an AOTBE preference, but not an actual preference. Thus for preferences we have a 2x2 matrix of sincerely stated vs behavioral and AOTBE vs actual circumstances. Also, remember, only some of these preferences are intrinsic or non-instrumental, so we could potentially make this a 2x2x2 matrix of preferences- but I’ll leave off this step.
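The 2x2 (or 2x2x2) matrix of preference kinds can be made explicit with a small data structure. The field names here are my own labels for the distinctions drawn above, chosen for illustration.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class PreferenceKind:
    stated: bool     # sincerely stated (True) vs behavioral (False)
    aotbe: bool      # all-other-things-being-equal (True) vs actual circumstances
    intrinsic: bool  # intrinsic (True) vs instrumental

# Enumerate every cell of the full 2x2x2 matrix.
kinds = [PreferenceKind(s, a, i)
         for s, a, i in product([True, False], repeat=3)]
print(len(kinds))  # 8

# The affair example from the text: a preference that holds all other
# things being equal but not under actual circumstances would occupy
# the aotbe=True cells and none of the aotbe=False ones.
affair = PreferenceKind(stated=True, aotbe=True, intrinsic=False)
```

Nothing hangs on the encoding; the point is just that these three binary distinctions cut across each other, so a single ‘preference’ label is hiding eight distinct cells.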
There are lots of weird, tricky, and intermediate cases when it comes to desire-like states. For example, consider a man who says he doesn’t want to meet his ex, would not press the “meet my ex” button, etc. Nor does he feel any craving to meet his ex. However, he keeps ‘accidentally’ hanging around at places where he might run into him. Leave aside the question of whether this sort of thing ever actually happens in humans- it seems at least in principle possible. This isn’t exactly a behavioral preference to run into his ex- he wouldn’t press the button, after all. It seems to me that it is best described as a kind of unconscious craving, but I am not sure.
As far as I can tell, all four of craving, drives, pleasurances, and preference are separable from each other. Even craving something and having a drive towards something can be separated. I might have an innate drive towards X, but have wholly beaten that innate drive so that I no longer crave it, and I can certainly crave things I was not born with a drive towards (the heroin addict wasn’t born with a heroin drive).
So we have:
Craving (the sensation of wanting)
Drives (the innate predisposition to crave a certain thing)
Pleasurances (the disposition of taking pleasure in something)
Preferences which can be:
The preferences you honestly say you have OR the preferences your behavior reveals
All-other-things-being-equal (AOTBE) OR under actual circumstances
I think you can have any combination of those desire-like things without necessitating the others. I haven’t checked all combinations, but I think this even holds of the 2x2 subtypes of preference I gave.
In a way, this reminds me of my prior writing on the multiplicity of the concept of belief- the independent categories and mental states that are involved in what we generally call ‘believing’.
I am also reminded of debates over eliminative materialism, and particularly Stich’s remark that a full understanding of the mind might leave us radically uncertain of whether or not it really contained the objects of folk psychology, because folk psychology is vague and indeterminate. Maybe ‘desire’ tout court is just not a very useful category.
The nature of intrinsic desire- prototypes, indeterminacy, individual differences, and a series of unanswered questions.
Now we come to the part of the piece that surprised me the most as I wrote it [I discover all my ideas as I write them].
Let’s put aside the complexities we’ve uncovered in the concept of desire, and for the moment just take it as an unproblematic concept. Part of the problem in understanding how intrinsic desires change is that it’s very hard to get at what it is that people intrinsically want.
Let’s start by considering a confusion that arises in this area: conflating intrinsic desire with pleasure. There are, I think, two quite distinct understandings of what intrinsic desire is, only one of which is, in my opinion, ‘correct’. The wrong definition is that we intrinsically desire that which will give us pleasurance- that which, upon achieving it, will give us pleasure or happiness, or which we think will do so. This definition seems to me to be wrong- to not capture what intrinsic desire is in the most important sense. It owes to an old 18th/19th-century psychological theory called hedonism- essentially the claim that the only two motivators of human behavior are pleasure and pain.
I would argue, contra hedonism that there are things we desire intrinsically that don’t give us pleasure- maybe even that can’t give us pleasure by definition. For example, I might desire that my children be happy after my death, or that alien species outside my backward lightcone flourish and enjoy their existence.
Let me instead put forward this definition of intrinsic desire:
To intrinsically desire something is, roughly at least, to think its mere addition to a situation with all else left equal, makes the situation better.
But this turns out to be really tricky, and in practice, we’re sometimes very uncertain. For example- do you intrinsically want sex, in the sense that if you considered two lives, one of which involved sex, and the other didn’t, but in both lives you were equally happy, fulfilled, socially connected, and had your share of excitement etc, you would prefer the life with sex in it?
I think I do, in fact, intrinsically value sex in this way. I might even be willing to give up other things I value intrinsically- like a little bit of happiness (not a lot- it doesn’t have to be a lot) to have as much sex as I wanted.
Do you intrinsically value sex in this way? You’ve probably never thought about that before. To be sure, unless you are asexual you likely value sex, but whether you value sex for its own sake, or view it as an instrument for attaining other goods- pleasure, happiness, intimacy, etc.- probably hasn’t bothered you much. We could ask the same thing of many other goods: do you value a good reputation for its own sake? Fame? Artistic or creative achievement?
The problem is that so many of the goods we want are entangled in complex ways, and there is no clear sense of which are ends and which are means. In fact, looking at it from this angle, I begin to doubt whether there even is a fact of the matter about what it is we intrinsically desire, at least prior to contemplation. Was there some truth about whether or not I intrinsically desired sex that pre-existed my thinking about whether I intrinsically desire sex, or just the things that go along with it? I think, perhaps not? What we intrinsically desire is unclear, maybe even indeterminate.
Perhaps instead of talking about intrinsic desires, we should talk about desire complexes, where a whole heap of desires are not determinately either instrumental or intrinsic, but the complex as a whole is intrinsically desired. The rough picture might be something like this. Where getting A and getting B overlap, we don’t necessarily make our selection as to which it is that we intrinsically want and which it is that we instrumentally want. Our intrinsic desire is, in a sense, indeterminate between them, neither present nor absent in each. Only when the attainment of A and B come apart- either in reality or in the context of a hypothetical- is that choice made, and we decide whether we value A, B, or both non-instrumentally. I’m not quite sure how this model would work or if it could, but if someone wanted to create a formal or semi-formal model I’d be fascinated to see it.
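In the spirit of the invitation above, here is one possible semi-formal sketch of a desire complex. Everything here is my own assumption about how the picture might be formalized: each component’s status is left undefined until the components can come apart, and the agent’s choice at that moment is what fixes intrinsicness.

```python
class DesireComplex:
    """A bundle of desired goods whose intrinsic/instrumental status is
    indeterminate until the goods come apart (in reality or hypothetically)."""

    def __init__(self, components):
        self.components = set(components)
        self.intrinsic = None  # None encodes indeterminacy, not absence

    def status(self, component: str) -> str:
        if self.intrinsic is None:
            return "indeterminate"
        return "intrinsic" if component in self.intrinsic else "instrumental"

    def resolve(self, chosen) -> None:
        """The components come apart and the agent keeps `chosen`; only
        now is the intrinsic/instrumental split fixed."""
        self.intrinsic = self.components & set(chosen)

# Illustrative example using the goods discussed in the text.
dc = DesireComplex({"sex", "intimacy", "pleasure"})
print(dc.status("sex"))          # "indeterminate"
dc.resolve({"sex", "intimacy"})  # a hypothetical pries the goods apart
print(dc.status("pleasure"))     # "instrumental"
```

One design choice worth flagging: using `None` rather than an empty set for the unresolved state keeps “no intrinsic desires here” distinct from “the question hasn’t been settled yet”, which is exactly the distinction the indeterminacy picture needs.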
Or perhaps desire complexes are not the right way to look at it: maybe what I desire is not a discrete list of things like “sex”, “friendship”, etc., but instead I desire to embody certain prototypes- to live a certain kind of life or to be a certain kind of person. If this is right, the picture of intrinsically desiring certain specific things might be a mistake, much like the idea that words have necessary and sufficient conditions. Instead, words have prototypes that define the core case of a thing. On this theory, our desired life is based on models, real or imaginary, rather than a list of things.
The problem with this view is this: if we don’t have a prior sense of what we want, how do we identify and choose lives to be the prototypes of what we want? But if we do have a prior sense of what we want, then what role do the prototypes play?
I haven’t got clean answers, but I think the traditional answer- that there is some prior matter of fact about whether you intrinsically value X for all X, is almost certainly misconceived- or if it does turn out to be true, we should recognize that as, in and of itself, somewhat bizarre. For if there is a preexisting list inscribed in my heart, why did it take me ages to decide whether I intrinsically want sex, a good reputation, etc?
Individual variation in intrinsic desires- not merely what is intrinsically desired, but the conceptual form that intrinsic desire takes- is quite possible. Maybe some people intrinsically desire lists of things, some people intrinsically desire only pleasure or certain sensations and emotions, and some people intrinsically desire to embody certain ways of being a person which are given by prototypes, but which do not have strict necessary and sufficient conditions.
Another important variation in how people desire might be given by Nozick’s case of the experience machine- a machine that can allow you any number of pleasurable experiences. Some people say that living a life in the experience machine would be just as good for them as living a real life. Others, including myself, think that the experience machine is preferable to oblivion but still far inferior to real life, at least unless it includes other real people and a few other important caveats. Maybe these differences are explained by genuine divergences in what people think. Coming back to the theme of indeterminacy, it is equally possible that there is no fact of the matter on whether or not an experience machine could fulfill our desires until we confront the dilemma it poses and make the choice.
So getting at how people’s intrinsic desires change over time is a tricky proposition in part because most people don’t know their own intrinsic desires, or even the form of their own intrinsic desires, and in any case, it’s quite possible that the form intrinsic desires take varies. Intrinsic desires might not take the form of neat and discrete things we want, but something more like prototypes.
The control problem as the problem of desire
The control problem in artificial intelligence has sometimes been presented as the problem of “solving ethics”. If we could develop a formal understanding of what, exactly, it is that moral behavior, or at least behavior that is not unethical, required, we would be close to solving the control problem. Our uncertainty over desire is, perhaps, an even more fundamental problem than our uncertainty over ethics in relation to the control problem.
It’s very interesting that attempts to create human-like artificial intelligence have hitherto tended to focus, almost exclusively, on creating something analogous to human beliefs rather than human desires. Consider GPT-3: there’s something quite funny or poetic in an artificial intelligence that implicitly contains the structure of billions of words of text but only ‘wants’ one thing- to predict the next word of text.
Wanting to want and other mysteries
We haven’t really even begun to scratch the surface of the alien forms! I remember late high school, circa 2006. I am trying to write something on the concept of wanting to want- the situation of lacking a desire or dream, and desperately wanting a desire or dream. It eventually became clear to me that what I wanted was a kind of transcendent, perfect desire, the attainment of which would be satisfying and wouldn’t leave me wondering ‘what should I want now?’ What I wanted, in an explicit, almost signposted way, was a contradiction- a longing, so to speak, to transcend the conceptual parameters of being while still being. Moreover, this seemingly nonsensical desire led to specific behavior on my part- writing, and seeking, and thinking.
But the picture gets more complex. As I got older, I came to recognize that a lot of that nameless desire I felt in high school for something beyond the imagination- some nameless thing- was in fact loneliness. How does this- the possibility of unknown, unrecognized desire- fit with what we have said about the mysteries of what people intrinsically desire, and into the taxonomy I provided earlier of desire-like states? Most clearly it is a drive, an innate biological tendency to need something- but it did not emerge in my experience as a tendency to prefer being with others; rather, it emerged only as a vague dissatisfaction. Since I had, more or less, no friends until I reached university, I did not even have a clear craving for anything, since I did not know what I was missing.
One great framework for thinking about desire is utopia. Not what kind of utopia do you want, but what would you want in utopia, after scarcity? Imagine we’re all inside a computer, interacting together, able to conjure anything we wanted into being with our thoughts. What would you pursue? If there was no struggle you were immediately confronted with, how would you create struggle? Would you create struggle? Would you begin a centuries-long quest to write the best possible poem you could? Experience every possible physical pleasure? Commence a mathematical research program? Perhaps you’d just vibe and fuck around? Or would the absence of a directly presented struggle- a given agony- drive you mad? Would you place yourself in a simulation of struggle and try to convince yourself that it’s real- maybe a pivotal point in history? There’s no one answer to that question, and that alone shows that we confront the world in different ways.
‘Humanity’ is a great mystery, but even more so each individual is a cut gem of refracting secrets.
Edit: After having read the Stanford Encyclopedia of Philosophy entry
I was struck by how we arrived at similar questions via different approaches. For example, their section on the question ‘what is desire’ curiously mirrors my discussion of different kinds of desire-like states, but I took the default hypothesis to be that they all exist in tandem and are each somewhat ‘desire-like’, whereas the entry took the default hypothesis to be that one of the ideas it lists is what desire truly is. Is this a case study in philosophy’s tendency to reify words, so that each word must correspond to one thing, and there is ‘a’ philosophical puzzle about the ‘nature’ of the one thing each word tags?
Thanks for your post!
“There’s something quite funny or poetic in an artificial intelligence that implicitly contains the structure of billions of words of text but only ‘wants’ one thing- to predict the next word of text.”
I know this was just an aside comment, but I disagree with this--I think it's the wrong level of analysis to look for "desire" in the language model (would be like saying all brains desire is for depolarization to lead to action potentials or something).
I really appreciated hearing about your teenage existential angst. I liked to think about how different it was from my own teenage existential angst. I think I was on the surface worried about everything being meaningless or groundless or something classic like that. But it felt more selfish/almost solipsistic, and there was a little Buddhist flair to it. I also came to the conclusion (later) that it was just being lonely and trying to convince myself that this was the feeling of being super smart. Probably also had to do with being a closeted gay teen. Anyways, fun times.
About the topic of intrinsic or fundamental desires, I always think about these from the perspective of attachment theory, about people fundamentally wanting (1) safety and (2) exploration. And then everything else on top is kind of a mishmash of conditioned habits, things that are associated with other things, things that symbolize other things (in a personal way), things that are instrumental to other things, etc.
I like the idea that some of this has relevance to AGI, but I feel kind of cynical about the likelihood of that. I feel like they are just going to make something that works, solves intelligence or whatever, without having any greater depth of understanding of the Human Condition. But maybe that's just the existential angst of being the age I am now.