More than eleven philosophical ideas Bear wants to publish but needs a collaborator for
You ever wanted to be an ursine philosopher? This could be your big break.
What human philosopher- or academic more generally- hasn’t dreamed of breaking into the exclusive world of ursine philosophy? The glamorous arctic soirees with the polar epistemologists. The soulful sessions eating bamboo, lolling around on the ground, and contemplating the form of the good with the pandas. The salmon run discussions on justice as bearness. The late-night trash can raids discussing the radical aesthetics of the discarded. Truly, it is an august world that few human philosophers can ever hope to join, however much they may and must pine, but a unique opportunity has arisen.
The Philosophy Bear has a bunch of ideas he wants to write up as papers but lacks sufficient background for. I’m looking for philosophers, economists, or others with backgrounds in the following areas- not necessarily a big background, mind you! From experience, I tend to end up doing over half the work in collaborations, and I always prefer to split credit equally. Even if nothing comes of the ideas, I think it would be fun to thrash them out with somebody. Putting all these ideas out there makes me feel a bit entitled, so let me just say I take myself at most 40% as seriously as may appear.
1. The idea of a generalized philosophical argument for the left
There’s a standing challenge in the literature, posed by Joshi (forthcoming) in ‘What Are the Chances You’re Right About Everything?’, arguing that we shouldn’t expect either side to have a monopoly on political truth because, roughly, numerous political positions are orthogonal to each other. Thus it would be a massive coincidence if one side were right about all of politics.
I have an argument to make that we should expect the left to be correct on most issues because:
1. The left represents the weak and disadvantaged.
2. The side of the weak is usually the better side in political disputes.
Therefore, the left is usually right.
For premise 1, I have empirical evidence from the study of social dominance orientation. For premise 2, I have a few converging lines of evidence.
I want to write a paper on this. I could use some expertise in social epistemology or political philosophy to do it. Jump on board if you’re interested.
There is, I think, a rich literature waiting to develop here on the philosophy of ‘left’ and ‘right’ as categories. As it stands, political philosophy usually abstracts from these categories and that seems odd.
2. Bayesian epistemology as a replacement for traditional epistemology.
I suspect that Bayesianism- particularly but not only subjective Bayesianism- makes classical epistemological categories like coherentism and foundationalism obsolete, and throws new light on skepticism and other problems, like the a priori. I’d argue these implications are so comprehensive that little of traditional epistemology remains. I’d love to collaborate with an epistemologist coauthor, formal or otherwise, or a decision theorist or someone in that vein.
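As a toy illustration of the subjective Bayesian picture (my own sketch, not an argument from the post; all the numbers are invented): two agents who start with different subjective priors on a hypothesis H, but update on the same evidence via Bayes’ theorem, are pushed toward agreement.

```python
# Toy subjective Bayesian updating (illustrative only): two agents with
# different priors on hypothesis H see the same evidence stream and each
# update by Bayes' theorem.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: return P(H | E) given P(H) and the likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

priors = [0.2, 0.8]  # two agents' subjective priors on H

# Ten pieces of evidence, each twice as likely if H is true than if it isn't.
for _ in range(10):
    priors = [update(p, 0.8, 0.4) for p in priors]

print(priors)  # both posteriors are now above 0.99
```

With a likelihood ratio of 2 per observation, ten observations multiply each agent’s odds on H by 2^10, so both posteriors end up very close to 1 despite the very different starting points.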
3. AI, functionalism and large language models
On a functionalist account of minds, the rules are pretty loose- so loose, I would argue, that language models may well ‘conjure’ into existence minds with beliefs and desires in the process of picking the next word. I’d love to write an article on this, but don’t have the phil of mind chops.
4. Free will as secondary to mitigation
A coconut falls on a monkey’s head, and the monkey irritably lashes out at another monkey he’s usually friendly with. I don’t know, but I think there’s a good chance that the other monkey will be more forgiving than he otherwise might be.
I think that the idea of mitigation- of not holding something against someone because of their difficult circumstances- is far older than the idea of free will. More controversially, I think it may be, in a sense, the more important concept, at least in conjunction with a variety of other concepts such as expression-of-character, and should take theoretical precedence over free will in moral philosophy. I’m looking for someone with more knowledge of the relevant literature on free will, moral luck, and related topics to discuss these ideas with, with the plan being that a paper might come of it.
5. Personal identity- a 2x2 matrix approach
In general, the most popular reductionist views about personal identity come in two families, physical [inc. animalism] and psychological. I have an idea for a different way to conceptualize things:
A) Do you think that what matters is the persistence of a pattern or of an object?
B) Do you think what matters is the persistence of the body or the mind?
Consider two thought experiments:
i) Teletransportation: Your whole body, brain included, is scanned, destroyed, and then recreated exactly elsewhere- do you survive?
ii) Brain transplant: Your brain is moved to another body, and your old body is discarded- do you survive?
If you answered yes to i), you are a pattern theorist; if you answered no, you are an object theorist. If you answered yes to ii), you are a mind theorist; if you answered no, you are a body theorist. These two answers, rather than the standard physical and psychological options, form a 2x2 matrix of possibilities, each of which, in my view, has something that can be said for it. For example, some people identify with their mental states but do not think they would survive teletransportation- this matrix can explain their position: they are mind-object theorists.
I’d like to write a paper about that.
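For concreteness, the classification above can be sketched as a tiny function (the code and the hyphenated labels other than ‘mind-object theorist’ are my own illustration, following the naming in the text):

```python
# A sketch of the 2x2 matrix described above. Each position is fixed by
# two yes/no answers:
#   survives teletransportation -> pattern theorist (yes) vs object theorist (no)
#   survives a brain transplant -> mind theorist (yes) vs body theorist (no)

def classify(survives_teletransport: bool, survives_brain_transplant: bool) -> str:
    axis_a = "pattern" if survives_teletransport else "object"
    axis_b = "mind" if survives_brain_transplant else "body"
    return f"{axis_b}-{axis_a} theorist"

# Print all four cells of the matrix.
for tele in (True, False):
    for transplant in (True, False):
        print(tele, transplant, "->", classify(tele, transplant))

# The example from the text: someone who identifies with their mental states
# (yes to the brain transplant) but denies surviving teletransportation.
print(classify(False, True))  # -> mind-object theorist
```

The point of the sketch is just that the two questions are independent, so all four combinations are coherent positions, not only the two that the standard physical/psychological framing highlights.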
6. Double booking
I have an unusual view in social epistemology- viz., I think that people can and should keep two separate sets of beliefs. One set is formulated on the basis of all available evidence. The other is formulated on the basis of a subset of the available evidence- the delimitation of which I will not get into here- restriction to which will usually generate more ‘radical’ beliefs. People should advocate for the more radical set of beliefs, but practice the more conservative set- doing so will strike the right balance between epistemic diversity and optimal practice. Moreover, I reckon that there’s at least some evidence that people already do this. I’d love to work with a social epistemologist on this! Will Fleisher did some similar work which was very good, so maybe this would also be very good- we won’t know until you give it a go.
7. Sleeping Beauty
A while ago I gave an argument that if one is a thirder about Sleeping Beauty, one must think the many worlds interpretation of quantum mechanics is infinitely more likely than views that hold that there’s only one world. On this basis, I reckon that thirderism is false. Anyway, since posting about that, I’ve had a few more ideas vis-à-vis Sleeping Beauty. Specifically, I think thirderism might be self-defeating in a particular sense, by implying all probabilities are undefined. Get in touch if you want to talk about them, and we’ll see if maybe I’ve got something.
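For readers who want the standard counting behind thirderism (my own minimal sketch, not the self-defeat argument above): on heads Beauty is woken once, on tails twice, so across many runs one awakening in three is a heads awakening.

```python
# Minimal counting sketch of the thirder position in Sleeping Beauty.
# A fair coin produces one awakening on heads and two on tails, so the
# fraction of awakenings at which the coin landed heads is 1/3.
from fractions import Fraction

runs = 1000  # equally many heads runs and tails runs of the experiment
heads_awakenings = (runs // 2) * 1  # one awakening per heads run
tails_awakenings = (runs // 2) * 2  # two awakenings per tails run

thirder_credence = Fraction(heads_awakenings,
                            heads_awakenings + tails_awakenings)
print(thirder_credence)  # 1/3
```

Halfers, of course, deny that this frequency of awakenings is the right quantity for Beauty’s credence; the sketch only shows where the 1/3 figure comes from.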
8. Medium rare subjectivism
Intuitively, what we ought to do has something to do with what we want to do- a view called subjectivism. Simple subjectivism holds that what we ought to do depends on what we actually want. Idealizing subjectivism holds that what we ought to do depends on what we would want if we were idealized- perfected in certain ways: for example, if we knew more, had experienced more, and so on. Simple subjectivism has a flaw- it implies I ought to drink the glass of petrol I think is gin. Idealizing subjectivism has a problem- what if I would reject the guidance of a more idealized me, even understanding that they are idealized- for example, because I disagree with one of the proposed ideal conditions, e.g. I think too much knowledge makes you wicked? Also, part of the charm of subjectivism is that it doesn’t depend on values external to me- but if that’s right, where did this list of ideal conditions come from?
I’m working on an idea I call medium rare idealization. What you ought to do is given by:
What a version of you
That you would agree is maximally ideal
Would do.
So if I would defer maximally to an omniscient version of myself- defer to it more than to any other version- and an omniscient version of myself would do X, then I should do X. It bears some similarities to Kate Manne’s view, but gives different results in a few important ways, and doesn’t rely on an inbuilt concept of the ideal. It would be great to thrash out these ideas with wellbeing theorists, metaethicists, etc.
9. Knowledge- A takedown
Sick of epistemologists talking about the exact conditions of knowledge as if something rides on it? Sick of the way the post-Gettier debates on the nature of knowledge have been carried on? Think the whole thing is likely merely verbal in some sense, purely defined by the vagaries and variances of use? Think the idea that there’s an independently existing ‘special’ entity- a natural kind, metaphysical kind, or normative kind called Knowledge, to which our usage is accountable- is absurd? A lot of people think this, but I want to really drive the point home. Have a background in epistemology? Want to contact me and write a paper about it? Or tell me the paper’s already been written and point me to it? That would be appreciated, because periodically I go looking for it.
10. Something more exotic
Maybe you’re an anthropologist, or a cognitive scientist.
Here’s some interesting things I think:
A) The development of ideas even within a single individual is analogous to natural selection.
B) People restrain the tyrannical power of texts by investing them with more, not less, authority.
C) The many worlds interpretation of quantum mechanics has bizarre and entertaining implications for soteriology
D) The structure of narratives makes people less likely to believe in AI risk
E) Debates about logical axioms can be merely verbal
F) Moral realism and anti-realism have practical implications for behaviour that have been underexplored in the literature
I don't think I would be a suitable collaborator on any of these topics, but I would be interested to read about many of them! A few thoughts:
4. IANAL, but there has to be a well-developed body of law concerning mitigation and responsibility, right? It might be useful to approach this question at least partly from a jurisprudential angle, rather than from the perspective of abstract moral philosophy alone.
9. Doesn't some sort of Wittgensteinian move get you most of what you want here? e.g. "Instead of talking about knowledge, we should talk about the (social?) conditions under which knowledge claims are advanced and/or taken as valid." The problem is that mainstream analytic epistemology seems to have mostly rejected this sort of move. Timothy Williamson, for example, sees it as a sign of progress that metaphysicians and epistemologists since the 1970s generally spend their time debating claims about X, rather than about the meaning(s) of "X". I don't understand why he thinks this is progress; it strikes me as just a retreat into dogmatism. And there is no point arguing with dogmatists: the only thing to do is to seek out alternative interlocutors.
But really, it's your "exotic" proposals that I find most intriguing:
10B. One way this might be true is that elevating canonical texts to higher authority can make them more esoteric, which opens up greater space for interpretation. Compare: "This book gets a lot right, even if it gets a few things wrong. For example, it is wrong about [...], because [...]" vs. "Our sacred book appears to say something that is obviously wrong, but since it is sacred, it can't be wrong: what it actually *means* is [...]" Once you get sufficiently comfortable with the latter move, your hermeneutic prowess can liberate you from the tyranny of the text. Is this the sort of thing you are talking about?
10D. Yes, and: The structure of narratives makes people more likely to believe in AI risk.
10E. (Until recently it had never occurred to me that debates over logical axioms might be anything other than merely verbal.)
10F. Are the practical implications that you have in mind psychological, epistemological or (morally) normative?
This paper relates to one of the points made above: https://philpapers.org/rec/QURTMW