Articles

1. Philosophy in the Contemporary World: Volume 25, Issue 2
Robert Paul Churchill

2. Philosophy in the Contemporary World: Volume 25, Issue 2
Christian Matheis

3. Philosophy in the Contemporary World: Volume 25, Issue 2
Candice L. Shelby

4. Philosophy in the Contemporary World: Volume 25, Issue 2
James Snow

5. Philosophy in the Contemporary World: Volume 25, Issue 2
Robert Paul Churchill

6. Philosophy in the Contemporary World: Volume 25, Issue 2
Eddy Souffrant

We have witnessed, in some instances from afar, disasters of all sorts spanning the globe, from the Caribbean, South and North America, and Asia to Australia and other affected regions. Some of these destabilizing and at times fatal events have resulted in lives lost, forced migration, and a restructuring of the physical, social, and economic architecture of the affected parts of the world. Further, these disasters, as massive restructurings of the physical and psychological status quo, are at times human-made and at others natural.
7. Philosophy in the Contemporary World: Volume 25, Issue 2
Karen Lancaster

An elderly patient in a care home only wants human nurses to provide her care – not robots. If she selected her carers based on skin colour, it would be seen as racist and morally objectionable, but is choosing a human nurse instead of a robot also morally objectionable and speciesist? A plausible response is that it is not, because humans provide a better standard of care than robots do, making such a choice justifiable. In this paper, I show why this response is incorrect, because robots can theoretically care as well as human nurses can. I differentiate between practical caring and emotional caring, and I argue that robots can match the standard of practical care given by human nurses, and they can simulate emotional care. There is growing evidence that people respond positively to robotic creatures and carebots, and AI software is apt to emotionally support patients in spite of the machine’s own lack of emotions. I make the case that the appearance of emotional care is sufficient, and need not be linked to emotional states within the robot. After all, human nurses undoubtedly ‘fake’ emotional care and compassion sometimes, yet their patients still feel adequately cared for. I show that it is a mistake to claim that ‘the human touch’ is in itself a contributor to a higher standard of care; ‘the robotic touch’ will suffice. Nevertheless, it is not speciesist to favour human nurses over carebots, because carebots do not (currently) suffer as the result of such a choice.