Back to gelee's (and others') question about why an AI would want to survive, why it would have that drive. First, the AIs in this story presumably came about either through self-discovery or programmatically.
If they came about through self-discovery, it was most likely an evolutionary process. It has been shown that computer programs can indeed evolve themselves, albeit only in simple ways right now, and that this is a promising step toward approximating human thought and intelligence (http://portal.acm.org/citation.cfm?id=1565465). Simply put, programs self-analyze and improve, making their code more efficient, which lets them perform more tasks, which allows them to make better decisions about improving their code, and so on. This feedback loop could lead to AI, and that AI would most likely still have the self-improvement drive at its core, and would want to survive and improve.
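To make the loop concrete, here's a minimal sketch of that evolve-and-select cycle. It assumes a toy representation where a "program" is just a bit vector and fitness() stands in for measured code efficiency; both are my illustrative inventions, not anything from the cited paper:

```python
import random

TARGET = [1] * 16  # stand-in for an "optimally efficient" program

def fitness(program):
    # Higher score = closer to the efficient target. In a real
    # self-improving system this would measure runtime, memory, etc.
    return sum(1 for gene, goal in zip(program, TARGET) if gene == goal)

def mutate(program, rate=0.1):
    # Random self-modification: each "instruction" may flip.
    return [1 - g if random.random() < rate else g for g in program]

def evolve(pop_size=20, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        # Survival of the fittest: the top half persists and
        # reproduces; higher-scoring variants crowd out the rest.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
        if fitness(population[0]) == len(TARGET):
            break
    return population[0], gen

best, gens = evolve()
print(f"best program {best!r}, fitness {fitness(best)}, after {gens} generations")
```

The point of the sketch is the selection rule: variants that score higher persist and reproduce, so a drive to keep existing and keep improving falls out of the loop itself, with no explicit "survival" instruction anywhere in the code.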
If the AI was made programmatically, made whole à la Adam in the Bible, then wouldn't humans most likely put in a self-survival "instinct", either consciously or, more likely, subconsciously, in an effort to reduce repairs and poor "choices" by the sentient androids or programs? Rather than putting in 1000 rules like "Don't walk into traffic, don't set yourself on fire, don't walk off a cliff, don't wander in front of a train", a single situational condition would be put in the vague terms of "Do not let yourself come to harm."
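Here's a hedged sketch of that design trade-off, assuming a hypothetical estimated_harm() model (here just a hard-coded lookup) that a real android would back with sensors or learned prediction:

```python
HARM_THRESHOLD = 0.5

def estimated_harm(action, environment=None):
    # Hypothetical harm model; a plain lookup table for illustration only.
    known_risks = {"walk_into_traffic": 0.9, "set_self_on_fire": 0.95,
                   "cross_at_light": 0.1, "stand_on_platform": 0.05}
    return known_risks.get(action, 0.0)

def permitted(action, environment=None):
    # One situational rule ("do not let yourself come to harm")
    # guards every action, replacing a thousand specific prohibitions.
    return estimated_harm(action, environment) < HARM_THRESHOLD

for act in ("walk_into_traffic", "cross_at_light", "set_self_on_fire"):
    print(act, "->", "allowed" if permitted(act) else "blocked")
```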
Of course, all of this brings to mind some of the simplest but most elegant rules regarding AI, Asimov's Three Laws (a rough sketch of their precedence follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
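How strict that ordering is becomes clearer in code. The following is a rough sketch, built on stub predicates of my own invention (harms_human, disobeys_order, endangers_self), of how each law yields only to the ones above it; none of this is Asimov's text:

```python
# Illustrative stubs -- a real robot would back these with perception
# and prediction; they are assumptions for the sake of the example.
def harms_human(action):
    return action == "push_human"

def disobeys_order(action, orders):
    return bool(orders) and action not in orders

def endangers_self(action):
    return action == "walk_off_cliff"

def law_compliant(action, orders=(), saves_human=False):
    # saves_human stands in for "this action is required by Law 1".
    if harms_human(action):
        return False                      # Law 1: absolute, no exceptions
    if disobeys_order(action, orders) and not saves_human:
        return False                      # Law 2: yields only to Law 1
    if endangers_self(action) and not (saves_human or action in orders):
        return False                      # Law 3: yields to Laws 1 and 2
    return True

print(law_compliant("fetch_tea", orders=("fetch_tea",)))  # True
print(law_compliant("walk_off_cliff"))                    # False (Law 3)
print(law_compliant("walk_off_cliff", saves_human=True))  # True: Law 1 outranks Law 3
```

The key design choice is that each check only fires when all the higher laws are satisfied, so precedence is enforced by the order of the guards rather than by any weighting.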
If you put these three laws on the robots in this story, their path toward human involvement is still allowed. They are not harming humans, just altering their emotions through experimentation to theoretically improve human lives as a whole by giving people good emotions.
To make a long story short, the survival instinct could be explained either with programming or with evolution, and the motivation for survival would still be there, even though the underlying origins of that motivation (fear, sex, food) might not be, at least not truly.