I don't see any reason why this would be true. Not all systems are chaotic, and many are self-correcting by design. Many paths can lead to a macroscopically similar outcome.
Here's one route:
1. The weather is a chaotic system, and it is influenced by my small actions, including my movements (see the numerical sketch after this list).
2. The weather, in turn, changes other people's movements.
3. Even small movements by a man or woman about to have procreative sex will change which sperm reaches the egg. So if I'm going to have procreative sex a year from now and I, say, take a different route to work today, that will change which sperm fertilizes the egg, and hence the DNA of the resulting child.
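Premise 1 is easy to demonstrate numerically. Here's a minimal sketch using the Lorenz system, a standard toy model of atmospheric convection; the parameters, step count, and size of the perturbation are all just illustrative choices on my part, not anything specific to the argument:

```python
# A minimal sketch of sensitive dependence on initial conditions, using
# the Lorenz system as a stand-in for weather-like dynamics. Parameters,
# step size, and the size of the perturbation are illustrative choices.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one explicit Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def final_state(x, y, z, steps=3000):
    """Integrate for `steps` steps (about 30 time units) and return the state."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two starting points differing by one part in a billion -- the analogue
# of "taking a different route to work today".
print(final_state(1.0, 1.0, 1.0))
print(final_state(1.0 + 1e-9, 1.0, 1.0))
# The printed states differ at the macroscopic level: the tiny initial
# difference has been amplified by roughly e^(0.9 * 30) ~ 10^11.
```

Real weather models show the same qualitative behaviour in ensemble forecasts, which is all step 1 of the argument needs.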
Will MacAskill has a somewhat similar idea, which he calls the 'paralysis argument'. https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/?
My thinking about this goes back to a discussion with John Broome. In his book on climate ethics, John argued that children who will be conceived in the near future are fixed, but children decades or centuries from now are not. I argued that chaos means that even children conceived very soon from now are probably affected by our every action.
Does MacAskill have a written version of this? I'd be interested in having a look, but I haven't got time to listen to a podcast.
Oh, I see, there's a transcript.
"But most ‘non-consequentialists’ endorse an act/omission distinction: it’s worse to knowingly cause a harm than it is to merely allow a harm to occur. And they further believe harms and benefits are asymmetric: it’s more wrong to hurt someone a given amount than it is right to benefit someone else an equal amount.
So, in this example, the fact that your actions caused X deaths should be given more moral weight than the fact that you also saved X lives.
It’s because of this that the nonconsequentialist feels they shouldn’t roll the dice just to gain $10. But as we can see above, if they’re being consistent, rather than leave the house, they’re obligated to do whatever would count as an ‘inaction’, in order to avoid the moral responsibility of foreseeably causing people’s deaths."
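To make the quoted asymmetry concrete, here is a toy numerical sketch. The harm weight, the size of X, and the value of $10 in 'life units' are all made-up numbers of mine, chosen only to show how the quoted decision rule behaves:

```python
# A toy version of the quoted decision rule. HARM_WEIGHT, X, and the
# 'life value' of $10 are all made-up numbers for illustration only.

HARM_WEIGHT = 2.0  # assumption: a caused harm counts double an equal benefit

def weighted_value(benefit, harm_caused):
    """Score an option: benefits at face value, caused harms up-weighted."""
    return benefit - HARM_WEIGHT * harm_caused

X = 100.0    # lives foreseeably lost AND saved via ripple effects of going out
GAIN = 1e-4  # the $10, expressed in the same 'life units' (negligible)

leave_house = weighted_value(benefit=X + GAIN, harm_caused=X)
stay_home = weighted_value(benefit=0.0, harm_caused=0.0)

print(leave_house)  # about -100: up-weighted caused deaths swamp the saved lives
print(stay_home)    # 0.0: an 'omission' incurs no weighted harm at all
# For any HARM_WEIGHT > 1 (with GAIN small relative to X), staying home
# wins -- which is exactly the paralysis conclusion in the quote.
```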
This seems quite different to me. If anything, it's reminiscent of an argument developed by Mark Colyvan: non-consequentialist ethics tends to forbid causing certain harms "knowingly", or alternatively "intentionally", but when the agent is confronted with a spread of possible outcomes, each with some probability, these concepts break down in a decision space like the one the decision theorist imagines.
The similarity is that the foundational idea in both cases is that even our smallest actions have large consequences. Your post argues this makes us blind idiot gods, and Will is discussing the ethical consequences of being such gods.
This is going beyond interesting into downright dangerous territory, imho.