I think humans bring a certain human-centrism (understandable, since humans are what we are) to interacting with, or thinking about, other kinds of minds. So we want intelligent machines to be like us, because being "us" is the best way to be: that's the human-centrism at work. But we are also afraid, for two reasons: (a) we know that humans can be pretty awful, and the more us-like machines become, the more likely they are to become like us in the awful ways too; and (b) the unknown. What if intelligent machines are different from us in ways we don't understand? That scares us.
(By "us" I don't necessarily mean you or me personally, or all people, but people in general.)