Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?
Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.
I think you're right. And you're wrong. I think it's what AI means but not necessarily what people have in mind for it. Years ago I was involved in closing a large institution for adults with intellectual disabilities. The process of moving them to ordinary housing was called 'normalisation'. While that meant having a nice telly and a garden, everyone was pretty keen; but once people started wanting to get married and - hell's teeth - have babies, suddenly it was a different sort of normal people wanted for them. I'm betting we'll do the same with our AIs when they get round to asking us politely to move over to the passenger seat.
BTW, got over my C3PO fixation when Asimov's Laws of Robotics made an appearance. That was exactly the frame I had this in. A little stilted - like Asimov himself - and rather educational (ditto), but getting alongside a decent point by the end.
(If you don't feel like listening to more geekspeak, you may as well skip this post.
Also, I mean this post to be along the lines of friendly discussion. I know I ramble on, but I am not the ultimate authority on this--I'm not saying that others are wrong)
I'd still argue that a program meant to self-modify is fulfilling its code, not overcoming it, but I guess that's just a question of semantics. In practice, I think a working AI program would have a fixed but flexible code structure, plus a flexible data module through which it could update and alter its methods.
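To make that concrete, here's a toy sketch (entirely my own illustration, not any real system's design) of what I mean: the code below never changes, but the behaviour it produces is driven by a data table the program is free to rewrite.

    class Agent:
        def __init__(self):
            # The "data module": maps a situation to one of the fixed methods.
            self.policy = {"greeting": "be_polite"}

        def be_polite(self, msg):
            return "Hello! " + msg

        def be_terse(self, msg):
            return msg

        def act(self, situation, msg):
            # Fixed dispatch logic; the flexible data decides what runs.
            method = self.policy.get(situation, "be_terse")
            return getattr(self, method)(msg)

        def learn(self, situation, new_method):
            # "Self-modification" here means updating data, not rewriting code.
            self.policy[situation] = new_method

    agent = Agent()
    print(agent.act("greeting", "nice day"))  # Hello! nice day
    agent.learn("greeting", "be_terse")
    print(agent.act("greeting", "nice day"))  # nice day

By my earlier argument, the agent is still fulfilling its code even when it "changes" what it does.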
One of the troubles I have with "overcoming your programming" is that the tone of the phrase seems to imply a positive outcome, like a friend breaking free of shackles. Moz supposedly overcame his programming to save Gary, so hooray, Gary is alive. If he can just arbitrarily ignore his programming, he could also kill Gary, but then I think people would not say "Moz overcame his programming"; they would be talking about the robot going berserk, and probably worrying about a robot revolution. Or, overcoming your programming could just be called a glitch--if Moz stopped speaking and spent every minute of every day performing the chicken dance and imitating kazoo sounds, that could be called overcoming his programming.
In science fiction, the concept of AI is fairly well defined: an artificial being that can reason at a level equal to humans. But in practice, the definition is much looser and tends to shift with technology. AI tends to be the label applied to cutting-edge machine learning algorithms that would almost certainly NOT be called AI in an SF story, and probably won't be called AI in a few years when they become obsolete. Take gaming AI, for instance. The learning system used by the creatures in the PC game "Black and White" is extremely cool. Your creature can learn to water crops, destroy houses, or poop on villagers, all based on your interactions with it. This was, and still is, a pretty cool use of AI, but in an SF sense it's not really AI, because it can only examine and reason about things that were included in the world the creature was designed for. There are no true unknowns to it.
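For the curious, my guess at the general flavour of that technique (not Black and White's actual code, which I've never seen) is a simple score-per-action scheme: the creature nudges a score up or down depending on whether the player rewards or punishes it.

    import random

    ACTIONS = ["water crops", "destroy house", "poop on villager"]
    scores = {a: 0.0 for a in ACTIONS}

    def choose_action():
        # Prefer higher-scoring actions, but keep some randomness.
        weights = [max(0.1, 1.0 + scores[a]) for a in ACTIONS]
        return random.choices(ACTIONS, weights=weights)[0]

    def give_feedback(action, reward):
        # reward > 0 for a pat on the head, < 0 for a slap.
        scores[action] += reward

    # Pretend the player always rewards watering and punishes everything else.
    for _ in range(100):
        action = choose_action()
        give_feedback(action, 1.0 if action == "water crops" else -0.5)

    print(scores)  # "water crops" ends up with by far the highest score

Which rather proves the point: the creature gets very good at choosing among the actions its designers listed, but it has no mechanism for inventing a new one.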
And, on a bit of a tangent, one theoretical measure of machine intelligence is the Turing Test:
http://en.wikipedia.org/wiki/Turing_test
In this test, intelligence is determined by whether an AI can converse with a human in text form without the human being able to tell whether it is talking to a human or a machine. The test is terribly flawed, though, because it doesn't measure how intelligent the AI is, but how "human-like" it is. This has always bugged me for three main reasons:
1. Even before AI, machines are just plain faster at straightforward calculations. Ask a computer what the square root of 11134.1 is, and it will respond in an eyeblink with quite good precision. Ask a human that, and almost everyone would have to do some calculation, and might never come up with an exact answer. For the machine to behave like a human, it would have to act LESS intelligent than it really is, either inserting extra delays or simply saying "I don't know" even though it could make the calculation very quickly (see the sketch after this list).
2. Humans often respond with emotion. Try to provoke a human to anger, and with most people you could find a way: insult them or the ones they love, or make offensive comments about religion or politics. For an AI to be indistinguishable, it would have to simulate emotional reactions that would by necessity be less rational. Emotions are an important part of human life, but it is possible for a machine to be intelligent without exhibiting emotions that alter its reasoning. Really, one could easily argue that emotions can temporarily lower one's intelligence (if you're in a temper or if you're horny, you're probably not thinking straight).
3. To be completely human-like, it would have to be capable of lying: both in the sense that humans may lie when asked uncomfortable questions, and in the sense that if you ask it "Are you human?" it has to lie and say "yes" (well, at least some of the time, since I suppose a human could say no). Although intelligent beings often lie, I don't think deception is a good metric of intelligence.
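To put some code behind point 1, here's a toy illustration (mine, not anyone's real chatbot) of why raw calculating speed betrays the machine, and why a bot built to pass the test would have to play dumb on purpose:

    import math
    import random
    import time

    def honest_bot(question):
        # Answers instantly with high precision: a dead giveaway.
        return f"{math.sqrt(11134.1):.6f}"  # about 105.518245, in microseconds

    def humanlike_bot(question):
        # To pass, it must act LESS capable than it is.
        time.sleep(random.uniform(5.0, 20.0))  # fake "thinking" time
        return random.choice(["um, about 105?",
                              "no idea, I'd need a calculator"])

    print(honest_bot("What is the square root of 11134.1?"))

A judge who knows to ask arithmetic questions can unmask honest_bot immediately, which is exactly why the test measures human-likeness rather than intelligence.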