Eric Schwitzgebel on user perception of the moral status of AI

“I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient, and so that they evoke appropriate emotional reactions in users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much and think it's sentient that they...

About the podcast

Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, sociology, and psychology.