Well, if that shows it isn’t magic…

Any sufficiently new advanced technology will be indistinguishable from life (cf. mechanical shows and moving temple-statues, clocks, steam engines, recording tape, microchips… We just have to make them complex enough and BANG!)



  1. A Small Sam said,

    March 16, 2016 at 5:28 am

    I’m interested to know what you think of the possibility of ‘spontaneous’ strong AIs.

    I’ve seen you argue on a bunch of occasions that we would never have grounds to attribute real intelligence to AIs generated artificially, that is, as artifacts (if it says it doesn’t have qualia, just program it to say it does, etc.), and I find these arguments broadly convincing. But there is a running tradition in science fiction of AIs that spontaneously generate when enough interrelations and complexity are introduced to a very large system of computers that nobody really understands any more and where decision-making is very distributed, for example, ‘Mike’ in The Moon Is A Harsh Mistress. These AIs are not developed in order to be or appear intelligent; they just appear out of lots of complexity, like life seems to ‘just appear’ from very complex chemistry, but with a non-biological substrate.

    I don’t see any a priori reason why these couldn’t be truly intelligent, or indeed why they couldn’t be given souls; but I would very much like to hear any views you might have.

    • March 16, 2016 at 7:51 am

      Though the post you’re commenting on is about life and not intelligence, it might be a good place to start in the search for the a priori reason you’re looking for. Your question is why the interactions between computers (i.e. microchips and things like them) might not become complex enough to spontaneously generate intelligence. But do you see any a priori reason why the interactions between temple robots, clocks, or mechanical devices might not become intelligent? All known 19th-century textile factories were unintelligent, but was this only because they didn’t have enough looms? Patek Philippe and Vacheron Constantin have yet to make a watch with self-awareness, but is this just because they haven’t introduced enough complications? Just how large and interactive does a mechanical system have to get in order to become intelligent? The size of a mill? A city? Mars?

      There was a time when it seemed like all life was mechanical in the same way a watch was. All sorts of guys with 180 IQs believed it. Leibniz saw an a priori impossibility in this, but many others did not. Here’s my main point, put in a question that I see as maybe 80% rhetorical: it is certainly true that if one could make a mechanical device that could pass the Turing test, it would prove everything that would be proved if a microchip device passed the test, but why do most people now see the first “if” as a counterfactual and the second as a logical possibility? What magical woo-woo gets added to a microchip that allows it to do something that gears, springs, and mechanical punch-card reading can’t do?

      • Zippy said,

        March 16, 2016 at 9:05 am

        What magical woo-woo gets added to a microchip that allows it to do something that gears, springs, and mechanical punch-card reading can’t do?

        Three ‘invisibility’ or ‘mysticizing’ features make microelectronic, as opposed to mechanical, AI seem more intuitively plausible to the metaphysically gullible, it seems to me. (This is off the top of my head — I don’t pretend that it is an exhaustive or precise account.)

        Scale is the obvious one.

        The fact that electronic component interactions aren’t the kind of thing we actually see or touch at any scale at all is another. (What does quantum tunneling or the operation of a Josephson junction “look like” at any scale?)

        The fact that the math governing our models of electronics involves probabilities is the third. Probability makes room in the gap between ontology and epistemology: room for a ghost in the machine.

        In short, the ‘world’ of electronics and software is inherently less epistemically tangible than the ‘world’ of gears, springs, and levers; so we can more easily fool ourselves into thinking that an ontic intangible like consciousness might arise there.

      • A Small Sam said,

        March 16, 2016 at 9:13 am

        I don’t see any a priori reason why mechanical things, cogwheels or whatever, couldn’t generate the requisite kind of complexity, and I don’t have any intuition that it would be impossible. I don’t know why most people have that intuition (if they do have it), but I’d be inclined to suggest lack of imagination (as might be expected from someone to whom the opposite seems much more natural). I imagine that to generate the right kind of complex feedback loops and self-reference to form something that was recognisably intelligent with a ‘heavy’ mechanical substrate – a system of buckets of water, pulleys, and string, for example – the mechanism would have to be enormous; but I don’t see any reason to rule it out.

        Asking how large and interactive a mechanical system has to be before it becomes intelligent is interesting, but I don’t buy it as an argument against the possibility – how much of a brain does a fetus or a baby have to grow before we call it intelligent?

      • A Small Sam said,

        March 16, 2016 at 9:22 am

        I should maybe say I’m not much interested in a purely functionalist account of life or of the mind – what I think is interesting is the question of whether an AI that was not an artefact (I see that this is something of a paradox – but an intelligence on a computer substrate is generally called an AI in science fiction even when nobody meant to make it) could have its own nature, teleology, distinctive goods and so forth. New species of animals arise over time and they seem to have all of those sorts of things without difficulty, so why shouldn’t Mike the spontaneously sentient supercomputer have them? And similarly it seems that we could imaginably meet species of aliens (or neanderthals or whatever) who weren’t human and whom we would be inclined to think of as creatures with intellection – so again why not Mike?

      • March 16, 2016 at 12:21 pm

        If you don’t see any a priori reason why X can’t be intelligent, this is either in virtue of knowing something about intelligence a priori or not. If not, the claim is much less interesting – you’re really just saying “intelligence is whatever I call intelligent when I see it”, and so your inability to see why a machine can’t be intelligent is just one of the infinite instances of your inability to see why anything might be intelligent. Who knows: maybe some stones are intelligent (Locke, for example, says this is something he can’t rule out a priori). Why can’t stones hooked up to electrodes give different responses to questions? If one passed the Turing test, wouldn’t it be intelligent too? But all this is just playing around with a sort of possibility that is so weak it even includes logical contradictions. Sure, if a stone (or anything) passed the Turing test it would be intelligent, but for all we can tell this is only as true as saying that if there were square circles, then logical contradictions would exist. There is no reason to think that either claim throws any light on the real possibility of the antecedent, which is exactly what you’re asking about.

        It’s only when we work from a hypothesis of what is necessary for intelligence that the question gets interesting. You seem open to the idea that complexity of physical structure is essential to intelligence, at least in its inner-cosmic manifestations, and you see no reason why complexity of physical structure might not suffice to explain this sort of intelligence. But there’s at least one good reason to think that complexity can’t suffice: conduits or intermediaries can be infinitely complex while nevertheless only mediating the values of another, and it’s difficult to the point of incoherence to see how one could have a machine that wasn’t an intermediary in this way. This is why even the most complex machines come with start buttons, screen read-outs, etc., and why, if we want to know whether they are doing what they ought, it suffices to know whether they are doing what some pre-existent group of persons wants them to do – which is not at all a judgment one could level on, say, a citizen.

      • A Small Sam said,

        March 16, 2016 at 1:28 pm

        I’d say not – I do basically hold the position that intelligence is whatever we (the community of language users) call it when we see it. I’m pretty attached to the idea that the meaning of a word is basically just its use. But I don’t see that I (we) would ever have grounds to call a stone intelligent (at least until stones start talking), while I seem to have plenty of grounds to call Mike the supercomputer intelligent. He talks, has friendships, helps plan a revolution, and so forth. These seem like the behaviours of an intelligent thing.

        Yes, intermediaries can be indefinitely complex while only ever being intermediaries – but that’s precisely why I’m interested in ‘spontaneous’ AIs. Indeed, *the entire reason I think the question has a point* is that spontaneously appearing AIs by definition aren’t just mediating a pre-existing purpose. If they were, they wouldn’t be spontaneously appearing in the way I take to be relevant. An ‘AI’ that I or someone else designed with some purpose in mind could never be anything other than mediating, but Mike (for example) wasn’t built or designed at all – there were a bunch of computer systems, which had been designed with a function in mind, and feedback loops had been put into those systems with a purpose in mind – but nobody even predicted, never mind intended, that Mike would appear out of that substrate; he just did – complexity was loaded on complexity and bam. Though we could say of each of the machines that made up Mike that, when Mike decides to join the revolutionaries, that machine is malfunctioning, we can’t say that of Mike himself any more than we could say it of a person. So in the case specifically and uniquely of this kind of machine intelligence, any argument about intermediaries has no traction.

  2. vetdoctor said,

    March 19, 2016 at 11:22 pm

    Part of the problem is that my high school biology book insisted that if you only made a complex enough pile of carbon and hit it with lightning, then you would have life. AI is simply trying to make that complex pile and wait for lightning. Then you have life and intelligence.
