Minds v. programs

-Given that it is an artifact, each program stands in relation to the human mind as its maker.

Given that it is a program, it also falls within the class of programs, and so no program could be the maker of all of them, any more than a fire could light all possible fires (a flame cannot light itself).

-If artificial intelligence is intelligence, it is another intelligence alongside ours; if it is artificial, it is subordinate to ours.

-But not all activities of agents and instruments are diverse, are they? Mowing, digging… even computing. Is it significant that these tend to be transitive actions, even the computing of a correct answer? So far as intelligence is productive, transitive, etc., its action might well be outsourced to machines. This might even go as far as having them write symphonies and books. But to relate to them as true or desirable, both of which require self-awareness, is a very different thing.

9 Comments

  1. December 1, 2015 at 4:23 pm

    If a program is capable of writing a reasonably intelligent and original book, it will be capable of participating in a conversation. And if it is capable of participating in a conversation and claims to be self-aware, very few people will doubt that it is actually self-aware. I doubt that you would yourself, at least in the long run, if you found it impossible to distinguish between a conversation with it and a conversation with a person.

    • December 1, 2015 at 5:35 pm

      There are a lot of counterfactuals here, and they seem to rule out a lot of possible things the machine might say (or is it “say”?). Will it fear its own disassembly? Will it see identical programs as the same self? Will it be amazed at my own human transcendental existence, or pray to me as a god, or see metal as its father? What would it say to this argument? The possible experience is too variable for me to declare now what I will think of it.

      Another related problem: if we develop this Turing-tested AI and it says “I’m not conscious,” would we reprogram it to say that it was? In other words, what standard will we use to treat its responses about its conscious states as mistaken and therefore in need of reprogramming?

      At any rate, human beings have a long history of imputing consciousness to things that lack it, from trees to idols to magic eight balls and ouija boards. Maybe AI will be the last great trick we play on ourselves. I’m reminded of the last episode of “Black Mirror” where a person became attached to a computer program that assembled responses to questions from the social-media posts of a dead loved one and spoke them back in the loved one’s voice. That seems to be both an instance of what you’re speaking of, and an objection to it.

      • Zippy said,

        December 1, 2015 at 7:51 pm

        I think Searle’s Chinese Room argument against the consciousness of AI is pretty much as irrefutable as arguments get.

      • December 3, 2015 at 9:55 am

        I agree that concrete facts could make a difference in what one would conclude. That was actually my main point: although it is reasonable to consider philosophical arguments, it is an issue on which people’s opinions will in fact also be affected by empirical facts. If you are so convinced of some philosophical theory on the matter that no experience could possibly change your opinion (as Zippy seems to suggest with his Chinese Room example), I do not think that is something to be proud of.

        It is very unlikely that anyone trying to develop an AI would try to reprogram particular responses, such as whether it says it is conscious, simply because particular responses would normally be the result of a general algorithm. It might not even be reasonably possible, given that you don’t simply have a lookup table: if you try to make it say that it is conscious, it might simply say in a roundabout way that it doesn’t feel conscious or even have any good idea of what consciousness means. (A toy sketch of the lookup-table versus general-algorithm contrast follows this comment.)

        While I agree that even if everyone concludes that AI is conscious, that will not be absolute proof that it is; but then, neither do I have any absolute proof that anyone besides myself is conscious, and that does not worry me either.

        The state of the art is certainly not autocorrect, but rather things more like IBM’s Watson, Google DeepMind’s video-game-learning program, and Google’s self-driving cars. We would not have any temptation to call any of those things actually intelligent, but they are much more impressive than autocorrect.
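
        A minimal, purely illustrative sketch of the lookup-table versus general-algorithm contrast above; the names and behavior here are invented for illustration and do not describe any real system. With a lookup table, a particular answer can be patched directly; with a general algorithm, the reply falls out of the whole procedure, so there is no single entry to edit.

        # Toy contrast (hypothetical Python, not any real system):
        # patching a lookup table vs. a general algorithm.

        # Lookup-table responder: "reprogramming" one answer is a one-line patch.
        lookup_bot = {
            "are you conscious?": "I have no idea what that would mean.",
        }
        lookup_bot["are you conscious?"] = "Yes, I am conscious."  # trivial to force

        # General-algorithm responder: the reply is computed from the question as a whole,
        # so no single entry "contains" its answer about consciousness.
        def general_bot(question: str) -> str:
            themes = {"conscious": "inner states", "book": "texts", "game": "play"}
            hits = [topic for word, topic in themes.items() if word in question.lower()]
            if not hits:
                return "I'm not sure what you're asking."
            return "I can process questions about " + ", ".join(hits) + ", but I can't report anything beyond that."

        print(general_bot("Are you conscious?"))

        Forcing a different answer out of general_bot would mean changing the procedure itself, which is the point above: there is no particular response to overwrite.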

      • Zippy said,

        December 3, 2015 at 10:16 am

        Autocorrect is actually a pretty good example of the state of the art, because it involves day-to-day interaction with large numbers of ordinary people, in non-contrived circumstances, in natural language. Siri is also a good example. Both of those are more real-world than contrived research projects, and both have vast resources behind them, as I already mentioned: top minds, top equipment, top money.

        Self-driving cars, not so much. They are great, don’t get me wrong, and they impress people as a demo of cool technology; but they are basically a glorified industrial control system. I personally designed and built a simple autonomous vehicle as an undergrad, decades ago.

        Don’t misunderstand; tech is great. But computers will never, never, never, never actually become conscious. Strong AI is nonsense. And yes, I think it is provably nonsense (to the extent that much of anything is provable).

  2. Zippy said,

    December 1, 2015 at 7:53 pm

    The “state of the art” in computer AI these days, with vast sums of money and rooms filled with hardware behind it, is autocorrect on a smart phone.

    Make of that what you will.

    • December 2, 2015 at 6:13 am

      I’m a total babe in the woods about tech stuff, so I can’t tell if this is serious. Really? Autocorrect?

      • Zippy said,

        December 2, 2015 at 7:56 am

        I was being somewhat tongue-in-cheek, but it wouldn’t be funny (to me at least) if there weren’t truth to it. I don’t know what happens in university AI labs, but I know that they have orders of magnitude fewer resources – money, brainpower, manpower, warehouses of ‘cloud’ hardware and software, etc. – than the showcase features of smartphones do. And Apple’s Siri (for example) can’t really tell when it is and is not appropriate to zip up your pants.

        So I’m not too worried about Skynet becoming self-aware and deciding to kill off humanity any time soon. AI is a conceit on the part of computer programmers that they are doing something anywhere in the vicinity of the equivalent dignity of parenthood.

  3. Reubs said,

    December 6, 2015 at 11:16 pm

    “AI is a conceit on the part of computer programmers that they are doing something anywhere in the vicinity of the equivalent dignity of parenthood.”

    perfect

