Monday, 26 June 2017

Science has next to nothing to say about moral intuitions

I read the following article. I quote the most relevant parts:


Recent research, [scientists] say, suggests that many of our moral intuitions come from neural processes responsive to morally irrelevant factors – and hence are unlikely to track the moral truth.

The psychologist Joshua Greene at Harvard led studies that asked subjects hooked up to fMRI machines to decide whether a particular action in a hypothetical case was appropriate or not. He and his collaborators recorded their subjects’ responses to many cases. They found that typically, when responding to cases in which the agent harms someone personally (say, trolley cases in which the agent pushes an innocent bystander over a bridge to stop the trolley from killing five other people), the subjects showed more brain activity in regions associated with emotions than when responding to cases in which the agent harmed someone relatively impersonally (like trolley cases in which the agent diverts the trolley to a track on which it will kill one innocent bystander to stop the trolley from killing five other people).

...

According to Greene, this indicates that our moral intuitions in favour of deontological verdicts about cases – that you should not harm one to save five – are generated by more emotional brain processes responding to morally irrelevant factors, such as whether you cause the harm directly, up close and personal, or indirectly. And our moral intuitions in favour of consequentialist verdicts – that you should harm one to save five – are generated by more rational processes responsive to morally relevant factors, such as how much harm is done for how much good.

As a result, we should apparently be suspicious of deontological intuitions and deferential to our consequentialist intuitions. This research thereby also provides evidence for a particular moral theory: consequentialism.

...

Greene’s results, however, don’t offer any scientific support for consequentialism. Nor do they say anything philosophically significant about moral intuitions. The philosopher Selim Berker at Harvard has offered a decisive argument why. Greene’s argument just assumes that the factors that make a case personal – the factors that engage relatively emotional brain processes and typically lead to deontological intuitions – are morally irrelevant. He also assumes that the factors the brain responds to in the relatively impersonal cases – the factors that engage reasoning capacities and yield consequentialist intuitions – are morally relevant. But these assumptions are themselves moral intuitions of precisely the kind that the argument is supposed to challenge.

Yes, I agree with this. If we rely purely on reason and shun our emotional reactions in assessing what is moral, we will presumably conclude that the moral actions are those that bring about the best consequences. But the question remains: why should brain activity in regions associated with emotion yield false conclusions about morality, while brain activity in regions associated with reason yields correct ones? This presupposes that emotions lead us astray in our judgment as to which actions are moral. But perhaps our emotional reactions, or at least certain characteristic emotions, point to some objective morality?

Essentially, this research presupposes consequentialism. But consequentialism sometimes conflicts with our intuitions; it can be rather cold-blooded. Consequences are important, but perhaps they are by no means the sole criterion?
