If you are omniscient, no one else can be. Omniscience makes it feel like you're responsible for everything, because it removes any factual point in modeling others as agents.
I think this is true, but it doesn't change the essential point. Let's say Prometheus is the only omniscient deity, even though the other gods are more powerful than he is. Well, he stole fire and used it to empower humans, despite knowing it would result in his own imprisonment and torture. I think it's fruitless to argue that Zeus and the rest of the Olympians aren't agents in that story. So I would argue that Prometheus is a consequentialist, not a deontologist.
Although I've seen people (mostly anti-consequentialists) argue the opposite, in the absence of omniscience you can't be a literal consequentialist, in the sense of believing that the right action is the one that produces the best results: no one can know which action that is when they choose to act.
More subtly, unless you know everything that could possibly happen, you can't be a Bayesian/expected-utility (EU) act consequentialist either; see the sketch after this comment for what that calculation would demand. But I think you can, and should, be a rule consequentialist.
Finally, I've seen people use the unknowability of the future to argue against consequentialism, then help themselves to assumptions of perfect knowledge when they expound their own positions.
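To pin down what that EU calculation presupposes, here is a minimal Python sketch; the actions, outcomes, probabilities, and utilities are all invented for illustration:

```python
# A toy expected-utility act consequentialist. Note the inputs it demands:
# a complete enumeration of outcomes, P(outcome | action) for every pair,
# and a utility for every outcome. Without something close to omniscience,
# none of these tables are available to a real agent.
from typing import Dict, Iterable

def expected_utility(
    action: str,
    p: Dict[str, Dict[str, float]],  # p[action][outcome] = P(outcome | action)
    u: Dict[str, float],             # u[outcome] = utility of that outcome
) -> float:
    return sum(prob * u[outcome] for outcome, prob in p[action].items())

def best_action(actions: Iterable[str], p, u) -> str:
    # The act-consequentialist choice rule: argmax of expected utility.
    return max(actions, key=lambda a: expected_utility(a, p, u))

# Invented numbers, loosely themed on the Prometheus example above.
p = {
    "steal_fire": {"humans_thrive": 0.9, "humans_suffer": 0.1},
    "do_nothing": {"humans_thrive": 0.2, "humans_suffer": 0.8},
}
u = {"humans_thrive": 100.0, "humans_suffer": -50.0}
print(best_action(p.keys(), p, u))  # -> steal_fire
```

The shape of the inputs is the whole point: the rule consequentialist gets to throw away the full tables p and u and keep only rules that have historically worked, which is exactly the move described in the quote below.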
I'm just going to quote from utilitarianism.net:
https://utilitarianism.net/types-of-utilitarianism/#multi-level-utilitarianism-versus-single-level-utilitarianism
In the literature on utilitarianism, a useful distinction is made between a criterion of rightness and a decision procedure. A criterion of rightness tells us what it takes for an action (or rule, policy, etc.) to be right or wrong. A decision procedure is something that we use when thinking about what to do.
Utilitarians believe that their moral theory is the correct criterion of rightness (at least in the sense of what “ideally ought” to be done, as discussed above). However, they almost universally discourage using utilitarianism as a decision procedure to guide our everyday actions. This would involve deliberately trying to promote aggregate well-being by constantly calculating the expected consequences of our day-to-day actions. But it would be absurd to figure out what breakfast cereal to buy at the grocery store by thinking through all the possible consequences of buying different cereal brands to determine which one best contributes to overall well-being. The decision is low stakes, and not worth spending a lot of time on.
The view that treats utilitarianism as both a criterion of rightness and a decision procedure is known as single-level utilitarianism. Its alternative is multi-level utilitarianism, which only takes utilitarianism to be a criterion of rightness, not a decision procedure. It is defined as follows:
Multi-level utilitarianism is the view that individuals should usually follow tried-and-tested rules of thumb, or heuristics, rather than trying to calculate which action will produce the most well-being.
According to multi-level utilitarianism we should, under most circumstances, follow a set of simple moral heuristics—do not lie, steal, kill etc.—expecting that this will lead to the best outcomes overall. Often, we should use the commonsense moral norms and laws of our society as rules of thumb to guide our actions. Following these norms and laws usually leads to good outcomes, because they are based on society’s experience of what promotes overall well-being. The fact that honesty, integrity, keeping promises, and sticking to the law generally have good consequences explains why in practice utilitarians value such things highly, and use them to guide their everyday actions.
In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians. Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis.
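To restate the quoted distinction in code, here is a toy sketch; the function names, signatures, and heuristics are all my own invention, not anything from utilitarianism.net:

```python
# Criterion of rightness vs. decision procedure, as two different functions.
from typing import Callable, Dict, Iterable

def criterion_of_rightness(
    action: str,
    alternatives: Iterable[str],
    total_wellbeing: Callable[[str], float],  # complete consequences of each act
) -> bool:
    """The standard that makes an act right: no alternative yields more
    aggregate well-being. Evaluating this needs full knowledge of every
    act's consequences, so it's a standard of assessment, not a routine
    an agent can actually run."""
    return total_wellbeing(action) >= max(map(total_wellbeing, alternatives))

def decision_procedure(situation: str) -> str:
    """What the multi-level utilitarian actually consults day to day:
    tried-and-tested heuristics, with commonsense norms as the fallback."""
    heuristics: Dict[str, str] = {
        "tempted_to_lie": "tell the truth",
        "made_a_promise": "keep it",
        "choosing_breakfast_cereal": "just grab one; the stakes are low",
    }
    return heuristics.get(situation, "follow commonsense norms and the law")
```

On this picture, single-level utilitarianism has agents run the first function directly; multi-level utilitarianism keeps it only as the standard against which habits like the second are judged.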
This is all sensible. But if you read, for example, the Stanford Encyclopedia of Philosophy, you will see discussion of some much less sensible ideas.
Say what?
Can you not believe that the best paperclip-producing action is the one that produces the most paperclips, because you don't know which action that is?
This doesn't seem to work against the doctrine of double effect (a position endorsed by about half of philosophers, going by the PhilPapers survey: about half flip the switch in the trolley problem but refuse to push the fat man on the footbridge). None of the higher-order negative consequences of your action actually cause its positive consequences (in fact, its positive consequences considerably predate its negative ones), so the doctrine of double effect doesn't count them as deontologically relevant.
(That is, unless the expected positive consequences of your action are themselves higher-order, so an omniscient deontologist couldn't spend their entire life dancing in the streets to engineer the precise combination of butterfly effects that would maximize total utility.)
So what would this kind of knowledge do to free will? Would one feel like one had free will, or would one actually have it, if one knew the full consequences of one's actions?
Rule consequentialism is a short hop from deep conservatism.