-Assume: there is only an engineering difference, and not an ontological one, between the brain and a computer.
Naturalist: Therefore everything thought to be intentional is just physical.
Dualist or Cartesian: Therefore the brain is an instrument of human action, just as a computer is.
-Say I can work with fundamental particles like Legos, and I have a lot of time on my hands. So I assemble a full-grown human. Therefore, I have made an artificial intelligence, right?
-Naturalist: Insemination suffices to make a human being. But insemination is physical; therefore the physical suffices to account for a human being.
-Forget artificial intelligence. What would machine reproduction even be? Take the simplest case: if you take a potato, you can't point to any part of it and say "this is the part that is supposed to break off and form a new potato". If you designed a machine to break apart and assemble a new machine, it would not be reproducing itself; it would simply be two identical machines at different stages of assembly, temporarily attached. You might as well say two engines are a family because you put them in the same shed.
-Artificial intelligence is really just intelligence we can control, i.e. slavery. Art seems to intuit this: whenever we make clones, they tend to be slaves.
-The scientist started with operational definitions. This is fine, until either he or someone else starts drawing ontological conclusions from them. But the operational definition never cared about the difference between being and non-being. Absences, idealizations, and purely theoretical entities all exist operationally though not "really" (whatever this means, though all sides agree to it). Black boxes, frictionless motion, ideal gases, test particles, virtual particles, zero velocity, a single standard for the day and year, and many other things exist operationally even though everyone who uses them realizes that they don't exist.
-But if we didn't use operational definitions, we would never get to do all these neat things! True, but why assume there are no ontological costs? We all recognize, and even celebrate, the fact that pure metaphysics never achieved anything useful (i.e. operational); but given that we are separating the two, why assume that the operational could achieve the metaphysical result? We all scoff at the operational costs paid by the one who chooses to do metaphysics, but we miss the metaphysical costs paid by the one who wants to define operationally.
If this seems like a negligible difference, consider the operational distinction between agent and instrument, which is exactly what is at issue in the debate over brains and thoughts.