However, Sparrow extends the thought experiment by introducing a scenario in which the machine could reject sexual advances, verbally refusing and physically retreating. In this case, the robot might cry out phrases like “Stop it, you are raping me. Stop raping me. He’s raping me” while attempting to resist the act. Similarly, Sinziana Gutiu critiques how consent is represented in sex robots, which are designed to comply with any sexual interaction initiated by their users.
Anthropomorphism in AI: hype and fallacy
https://link.springer.com/article/10.10 ... 24-00419-4
Anthropomorphism is also a kind of fallacy, and this is often overlooked. The fallacy occurs when one assumes, or makes the unwarranted inference, that a non-human entity has a human quality. This can involve projecting human characteristics onto non-humans, such as “My car is angry at me”, or drawing an unwarranted inference about non-humans, such as “The robot is friendly because it waved at me”. In this way, anthropomorphism can be regarded either as a factual error, when it involves attributing a human characteristic to an entity that does not possess it, or as an inferential error, when it involves concluding that something is or is not the case without sufficient evidence to draw that conclusion.
As a kind of fallacy, then, anthropomorphism involves a factually erroneous or unwarranted attribution of human characteristics to non-humans. Given this, when anthropomorphism becomes part of reasoning, it leads to unsupported conclusions. The following will discuss some of these conclusions and how they arise within moral judgment, exposing some of the negative ethical implications of anthropomorphizing AI.
Curious that you never see people anthropomorphizing dildos.