In another “real-life sci-fi” moment, legitimate concerns are being raised over the ethics of harming or torturing artificial intelligence. Humans have been giving names and forming emotional attachments to machines for as long as we’ve had them (think of cars and ships), but we are only now hitting the point where machines imitate life well enough to tug at our reflexive emotional heartstrings.
Experts in the robotics field saw this coming. Several years ago, a set of “rules of robotics” was proposed to help counter our emotional responses, stating that robots “should not be designed in a deceptive way.” In short, we should be able to tell that they are imitating life rather than actually possessing it. That particular point now seems moot, as uncanny-valley, human-looking bots have not only been invented but are already in use in various industries.
But that raises the complex question of where “life” begins. If you think of pain as simply feedback from environmental stimuli, then robots can in theory be programmed to “feel” pain. In fact, they already have been, in order to study pain and to practice operations. So is it unethical to torture a robot? Or to design a robot expressly for the purpose of feeling pain, to test tools and environments?
It’s a complex problem with no easy solution, and mankind is not exactly known for tempering new science in the face of moral quandaries, so the clock is ticking. At what point do robots become “alive” and earn rights? Are the benefits of advancing artificial intelligence that far worth the potential moral and legal snags?