Say that I succeed in making a perfectly human robot, and that she then goes off and refuses to do what I tell her to do. I'd then just reprogram her motivation module so that she acted out of the motivations I wanted her to have. But this is only an appropriate response because her motivation module is not something she possesses of herself; it is a mere conduit for what I want her to do. She has no motivations of her own, only extensions of and means for executing mine. Such a machine might pass the Turing test with flying colors, but it would still have no self.
David T said,
October 31, 2013 at 8:12 pm
The conceit is that at some level of complexity, the machine will become "self-aware" and start exercising an independent will, à la Skynet in the Terminator movies or HAL. Magical thinking, of course, like thinking that if you write a fictional character with enough detail he will at some point become really alive, but there you have it…
thenyssan said,
November 1, 2013 at 6:34 am
I'm not sure about this one. Lately (because of you, actually), I've been inclined to think that if you succeeded in creating such a thing, you'd have created a living thing of a strange kind. I'd also take the opposite conclusion about the programmer's behavior: to rewrite the code would be an example of the programmer failing to imitate the way God treats His creation.
James Chastek said,
November 1, 2013 at 11:26 am
Consider that "motivation module" in the robot. Functionally and ontologically it's just an extension of my own desires as a programmer – that's why I build the thing with an input I can plug a keyboard into.
Machines and technology advance toward the limit of producing an all-powerful slave, and so a rational one. This idea might well be contradictory, since nothing rational can be a slave – Aristotle was right that something can be a slave by nature only if its rationality is imperfect, imperfect not in just any respect but in the minimal one of having self-direction over one's own actions. A lot of our fiction about Strong AI seems to be articulating just this difficulty.
I haven't worked out all the theological problems here – I know that the first way makes all being instrumental to God, but the self cannot be instrumental. You seem to be addressing this problem with a doctrine of divine non-interference. Something like this has to be true, but I don't know how all the details would be fleshed out.