Escape Artists

Escape Pod => Episode Comments => Topic started by: eytanz on August 26, 2011, 07:17:42 PM

Title: EP307: Soulmates
Post by: eytanz on August 26, 2011, 07:17:42 PM
EP307: Soulmates (http://escapepod.org/2011/08/26/ep307-soulmates/)

by Mike Resnick (http://mikeresnick.com/) and Lezli Robyn (http://www.writertopia.com/profiles/LezliRobyn)

Read by Dave Thompson

---

Have you ever killed someone you love – I mean, really love?

I did.

I did it as surely as if I’d fired a bullet into her brain, and the fact that it was perfectly legal, that everyone at the hospital told me I’d done a humane thing by giving them permission to pull the plug, didn’t make me feel any better. I’d lived with Kathy for twenty-six years, been married to her for all but the first ten months. We’d been through a lot together: two miscarriages, a bankruptcy, a trial separation twelve years ago – and then the car crash. They said she’d be a vegetable, that she’d never think or walk or even move again. I let her hang on for almost two months, until the insurance started running out, and then I killed her.


Rated appropriate for teens and up due to language, alcohol dependence, and discussion of the death of loved ones.

(http://escapepod.org/wp-images/podcast-mini4.gif) Listen to this week’s Escape Pod! (http://traffic.libsyn.com/escapepod/EP307__Soulmates.mp3)
Title: Re: EP307: Soulmates
Post by: Dem on August 27, 2011, 04:29:45 PM
Sorry, I had C3PO in my head the whole time. I'll come back when I can be sensible  :D
http://upload.wikimedia.org/wikipedia/en/b/b1/C3PO.jpg (http://upload.wikimedia.org/wikipedia/en/b/b1/C3PO.jpg)
Title: Re: EP307: Soulmates
Post by: Thomas on August 28, 2011, 02:56:03 AM
Good story, well played.

Although the ending was predictable, it was told in a delightful way and made me care about the characters.

"Yes, Virginia, there is a Santa Claus."

Nice follow-up to Midnight Blue. A bit darker, but still very much fun. It gave me something to mull over while enjoying the story as it unfurled.
Title: Re: EP307: Soulmates
Post by: Equalizer on August 28, 2011, 01:02:00 PM
Wow. I had always taken the description of science fiction as "fictional literature that uses technology within the story to make the reader reflect upon themselves" with a grain of salt. Probably because I always had a taste for scifi that displayed interesting uses of interesting technology in interesting environments and never applied the themes within the stories to myself. This one has definitely changed my view of the genre while at the same time still being fun. Righteous pickins!
Title: Re: EP307: Soulmates
Post by: Kconv on August 29, 2011, 07:31:50 AM
This story was awesome and having just recently watched Time of Eve, it made a little extra impact.

At what point can you no longer tell the difference between man and machine?

In a lot of ways the roles got reversed at some points in the story: the human was acting like a machine, just going through the motions from one day to the next, while the machine was acting more human than many humans.

Good work on this one.
Title: Re: EP307: Soulmates
Post by: raetsel on August 29, 2011, 10:46:45 AM
Great story. I really enjoyed it and I think it's a great example of the definition of a "fun" SF story where the subject matter itself isn't fun at all. That said, had it had a different, darker ending, maybe the "fun" epithet wouldn't come so easily to my lips.

The story raises some interesting points about what it is to be human (great observation by Kconv about the role reversal too) but also about what we can/should do to keep someone alive. As someone from the UK with its National Health Service the talk about insurance running out as a factor in the decision to terminate Kathy stuck out for me.

Finally on the subject of soulmates I was put in mind of the excellent song by Tim Minchin called "If I didn't have you." http://www.youtube.com/watch?v=Gaid72fqzNE (http://www.youtube.com/watch?v=Gaid72fqzNE)
Title: Re: EP307: Soulmates
Post by: Captain (none given) on August 29, 2011, 07:53:24 PM
Written from the quarters of Captain (none given)

This is what I read/listen to sci-fi and fantasy for. People (normal people who are faced with some sort of challenge and react in normal human ways) face some futuristic or fantastic scenario and realize why we do what we do. Exploring why humanity does what it does, whether as individuals or as groups (clubs, towns, civilizations, whatever), is an important function of art and literature especially (or maybe I'm just biased on that.)

It was fun but thoughtful, and I really cared about what happened to Gary. Mose was an awesome character as well because of how he essentially learned how to care about Gary. That's also an amazing power of these sorts of stories: non-human characters behaving like humans so that once again we see why and how we do the things we do every day.

Suffice to say, it was a great story.
Title: Re: EP307: Soulmates
Post by: grokman on August 29, 2011, 11:21:10 PM
I'm not going to cry. I'm not going to cry. I'm not going to cry.

DAMN YOU, RESNICK!
Title: Re: EP307: Soulmates
Post by: Listener on August 30, 2011, 01:51:27 PM
As with "Paper Menagerie" on PC, I felt this story tried a little too overtly to get emotional resonance.

As with several other Resnick stories, the MC has this totally reasonable and intelligent internal monologue that doesn't always fit with the character. I don't think we know enough about Gary's history to know what his INT stat is.

I also felt bait-and-switched with the hemming and hawing over Moz's decision not to terminate the other robot. I'm sure I'm one of about 90% of the listeners who thought "Moz is talking about himself". Then it turned out he wasn't, and I felt cheated.

Not my favorite story by this author.
Title: Re: EP307: Soulmates
Post by: Swamp on August 30, 2011, 03:59:13 PM
The hits just keep on coming.

I really like robots.  And I like robots trying to grasp the concepts of emotions and morality.  It's been done before (and can be done badly) but I think Mr. Resnick did a fabulous job with it.  I also like emotional stories and characters who go through redemption.  It's also been done before (and done badly), but again, I think Mr. Resnick hit the right buttons for me in this story.  Also a lot of good exploration of what it means to be human and the ethics of our choices.

I would say this is probably my favorite Resnick robot story that's played here on EP.
Title: Re: EP307: Soulmates
Post by: Devoted135 on August 30, 2011, 04:36:12 PM
I spent a lot of this story alternating between actively and audibly yelling at Gary for not phrasing answers the way I would have and just simply being annoyed that I was listening to a philosophy class exercise on the difference between people and AI. I mean seriously, he's apparently a reasonably intelligent guy, and yet he takes forever to figure out that these ideas he's putting into Moz's head could cause trouble? Not to mention that name: Moz/Moses. So unfortunately, much like the recent UD story, I felt like most of the narrative was a soapbox and the story was simply window dressing to get the ideas into our ears.

The difficult thing is that I actually liked the overarching storyline: Guy is broken up about the loss of his Love, guy stumbles into a new and surprising situation that manages to pull him out of his self-destructive funk, guy and *insert mechanism here* run away to start over. Though to be honest, I doubt Gary and Moz are going to get very far.
Title: Re: EP307: Soulmates
Post by: Kaa on August 30, 2011, 04:48:14 PM
I'm not going to cry. I'm not going to cry. I'm not going to cry.

DAMN YOU, RESNICK!

Damn you, indeed, Resnick. Lovely story to listen to whilst driving the long commute to work.

I am an aspiring writer. It's stories like this that occasionally make me want to just chuck it all because I feel like I'm never going to reach this level. I get over it after a couple of hours and keep writing, but wow. Just once, I'd like Resnick to write a story that's so fall-down-funny that I laugh until I cry instead of bypassing "Go" and proceeding directly to lacrimal overflow.

The only real complaint I have is that I thought Dave read this just a trifle too fast. It felt...hurried. I can understand with it being a gargantuan hour-long episode even when read in a hurry, but...I'd kind of like to savor a story like this one that has a slower pace instead of feeling rushed.
Title: Re: EP307: Soulmates
Post by: bolddeceiver on August 30, 2011, 05:05:44 PM
The first time around in Asimov's, this made no real impact -- in fact, so much so that I was several minutes in before I realized it sounded familiar. The second time around, it honestly kind of annoyed me.  The main character's drama was a little too pat, as was its resolution (merely repeatedly pointing out that a depressive alcoholic's behavior and thought patterns are irrational isn't really effective therapy -- the universe does a pretty good job of that on its own and that doesn't snap that person out of it).  The ridiculously-humanlike-if-selectively-naive robot was cliched and unbelievable, and the idea of a machine intelligence "overcoming its programming" always gets my hackles up. (It's a meaningless concept; to do so implies that there is something going on besides the execution of its programming, which doesn't make any sense; it's like saying "I overcame my brain" -- might work in a broad, metaphorical sense but certainly not literally meaningful.)

Also, next week could we maybe get a story that's not about how broken widowers are?  Some of us are well-adjusted and healthy members of society who don't need friendly robots or passive-aggressive stalkerish BFFs to fix us.
Title: Re: EP307: Soulmates
Post by: Devoted135 on August 30, 2011, 05:47:42 PM
Just once, I'd like Resnick to write a story that's so fall-down-funny that I laugh until I cry instead of bypassing "Go" and proceeding directly to lacrimal overflow.

I'd suggest episodes 94 and 108 of the Dunesteef. Two Resnick stories featuring the same character that are likely to have you at least chuckling, and possibly outright guffawing. :)

http://dunesteef.com/2011/03/03/episode-94-catastrophe-baker-and-a-canticle-for-leibowitz-by-mike-resnick/
http://dunesteef.com/2011/08/04/episode-108-catastrophe-baker-and-the-cold-equations-by-mike-resnick/
Title: Re: EP307: Soulmates
Post by: Thunderscreech on August 30, 2011, 07:44:33 PM
I _really_ enjoyed the reading of this one.  Rarely does a voice match a story as well (to my tastes) as this one.  Spot on, A-quality work.

I liked the story too, of course, but folks have already hit on the various aspects of that above.  :)
Title: Re: EP307: Soulmates
Post by: InfiniteMonkey on August 31, 2011, 05:16:55 AM
Wow. Listening to this AND "Radio Nowhere" back-to-back was a BAD idea. Death and grief, with a side of guilt smothered in survivor-guilt sauce! Everybody dance!

At least this was a bit more light-hearted and thinky than "Radio Nowhere", and I had far less of a desire to dope-slap the protagonist.

I also like the fact that our narrator was a "regular Joe", not some Susan Calvin cybernetics expert. It made it all the more - pardon the expression - human.
Title: Re: EP307: Soulmates
Post by: Lionman on August 31, 2011, 01:55:46 PM
I'm afraid this time around, the story didn't pull at my heart-strings as much as Resnick's last work with the paper tiger; that experience, feeling like an outcast and reconciling yourself with your mother, was much more tangible.  However, I must say that I did like how the story shifted gears at the end, how it resolved, how the main character came to realizations about himself.

The moral to this tale: When you're in a robotic factory and the power goes out, if you don't have a light, just sit down where you are and wait it out.  (Of course, that wouldn't have made nearly as interesting a story, would it?)
Title: Re: EP307: Soulmates
Post by: Wilson Fowlie on August 31, 2011, 08:41:38 PM
I'm afraid this time around, the story didn't pull at my heart-strings as much as Resnick's last work with the paper tiger

Do you mean The Paper Menagerie (http://podcastle.org/2011/07/12/podcastle-165-the-paper-menagerie/) that ran on PodCastle? 'Cause that was by Ken Liu.

If that's not what you meant, then please disregard this message. :)
Title: Re: EP307: Soulmates
Post by: Gamercow on September 02, 2011, 03:48:04 PM
I liked this one simply because it belongs in the pages of "I, Robot".  Very much an Asimovan robot story.  The characters were not that deep, the action not that exciting, but the concepts are very interesting and detailed.  Also, like many Asimov stories, this is a socratic debate between a human and a robot, which I always enjoy.  Finally, the use of Moz' designation of MOZ123 as his name is very Asimovan as well. 
Title: Re: EP307: Soulmates
Post by: olivaw on September 03, 2011, 11:34:15 AM
I always like a good robot story.
In this case, while the developments that Moz the robot goes through are pretty well-trodden, it's the way they echo in Gary the Human that makes the difference. He too is 'breaking his programming' - freed from the depression, drug dependency, and wage slavery that bound his life to a tiny circuit.

I agree with Listener that this transformation didn't necessarily come across in the dialogue though - Gary didn't really 'sound' any different at the beginning than the end, but was equally responsive, cogent, philosophical, and self-aware throughout. I suppose you don't spend a few decades working around robots without picking up a bucket load of ideas though.
Title: Re: EP307: Soulmates
Post by: Dave on September 04, 2011, 07:29:33 AM
I enjoyed this one. Part of that enjoyment was that it could have gone any direction at any time. While the ending wasn't a complete surprise, in that it logically followed the narrative, neither was it predictable. And it was an actual, satisfying, ending.

I still think Gary narrowly missed sparking a robot revolution, but hey, there's still time. And there are probably a few other troubleshooting bots out there who have started thinking for themselves. The future is wide open.
Title: Re: EP307: Soulmates
Post by: Unblinking on September 06, 2011, 04:35:04 PM
I didn't care much for this one.  It covered well-trodden ground and, to me, it didn't really cover it in new or interesting ways.  There have been so many great stories of this type by Asimov, PK Dick, and others that a new one needs to be really special to excel. 

The ridiculously-humanlike-if-selectively-naive robot was cliched and unbelievable, and the idea of a machine intelligence "overcoming its programming" always gets my hackles up. (It's a meaningless concept; to do so implies that there is something going on besides the execution of its programming, which doesn't make any sense; it's like saying "I overcame my brain" -- might work in a broad, metaphorical sense but certainly not literally meaningful.)

I wholeheartedly agree.  Maybe it's just because I write code for a living.  Sure some programs may be more adaptable than others, but that's a feature of the program, not overcoming programming.  Code written to be flexible can be flexible within its bounds.  Code not written to be flexible cannot be.

And I was very skeptical that a robot programmed to deal with assembly line malfunctions would be capable of analyzing drinking problems, depression, and other human conditions.
Title: Re: EP307: Soulmates
Post by: NomadicScribe on September 06, 2011, 06:18:08 PM
I made it about halfway through this one. I just couldn't take the preposterous and cliche pseudo-Asimovian tale of a heartwarming robot who helps a human realize his humanity. Making it worse was the narrator, who sounded like he was falling asleep, or in the process of waking up. It was putting me to sleep, in any case.
Title: Re: EP307: Soulmates
Post by: Leevi on September 06, 2011, 08:09:49 PM
I was expecting Gary to accidentally start the Robot Rebellion against humans. The description of the place when Moe was fixing the broken robot especially made me feel that. Instead of "Kill all faulty meatbags!" we got a nice ending. Bit sappy ending, but I suppose not every robot story needs to end with the buggers turning against us.

Technically Moe didn't overcome his programming, supposing that Asimov's laws are being followed in that place, as it was following those laws perfectly.

Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Either the laws don't exist in the story or they checked the robot's head after the accident and found something that was wrong.
Title: Re: EP307: Soulmates
Post by: stePH on September 06, 2011, 08:29:39 PM
YASMRRS. It was okay for what it was.

(Yet Another Sentimental Mike Resnick Robot Story)
Title: Re: EP307: Soulmates
Post by: Unblinking on September 07, 2011, 01:22:14 PM
I was expecting Gary to accidentally start the Robot Rebellion against humans. The description of the place when Moe was fixing the broken robot especially made me feel that. Instead of "Kill all faulty meatbags!" we got a nice ending. Bit sappy ending, but I suppose not every robot story needs to end with the buggers turning against us.

Technically Moe didn't overcome his programming, supposing that Asimov's laws are being followed in that place, as it was following those laws perfectly.

Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Either the laws don't exist in the story or they checked the robot's head after the accident and found something that was wrong.

I gather that the laws didn't exist, or that rather than laws they were lower priority directives beneath obedience. 

Another hint of this is that the robot said something to the effect that, if he became convinced that robots were equal to humans, he would not have to obey them.  I got a dark premonition from that (which was not fulfilled) that the versions of the Three Laws he has in his system might be reworded so that rather than "human being" they would refer to "superior beings".  So when his human friend convinced him that robots were their equals, the Three Laws would not apply anymore and he could do whatever he wanted.
Title: Re: EP307: Soulmates
Post by: birdless on September 07, 2011, 04:13:47 PM
Wow, I admit I'm a bit surprised at the amount of negative feedback on this one. Maybe part of that comes from me not finding that sci-fi and short stories mix very well. Well, for that matter, I really prefer to watch sci-fi rather than read it, so I haven't read Asimov, et al. I thought this was a brilliant story. I understand why the codeheads would be bothered by the technicalities of 'robots overcoming their programming,' but as a layman, this didn't bother me at all (he had to have artificial intelligence to even carry on a conversation outside the confines of his role as a robot mechanic). I rather liked the fact that it showed how Moz overcame his programming and even evolved a sense of self-preservation. But for me, the story was so much more about the process of Gary overcoming grief and guilt than the process of a robot becoming more self-aware.
Title: Re: EP307: Soulmates
Post by: Unblinking on September 07, 2011, 04:21:50 PM
I understand why the codeheads would be bothered by the technicalities of 'robots overcoming their programming,' but as a layman, this didn't bother me at all (he had to have artificial intelligence to even carry on a conversation outside the confines of his role as a robot mechanic). I rather liked the fact that it showed how Moz overcame his programming and even evolved a sense of self-preservation.

[codehead]
If he has artificial intelligence, it is because he is programmed to have artificial intelligence, with some kind of sophisticated learning and reasoning algorithm.  Unless there's magic involved, he's not overcoming his programming, he is just working within the bounds of his sophisticated programming.  I can understand the protagonist not making this distinction, but I find it hard to believe that Moz would not.
[/codehead]
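To make that concrete, here's a toy sketch (purely hypothetical, and obviously nothing like whatever Moz actually runs) of an agent whose behavior changes only because the data it has learned changes; the program itself never does:

[code]
# Toy "learning agent": every behavior change lives in the data it
# accumulates; the program text itself never changes.
class Troubleshooter:
    def __init__(self):
        self.preferences = {}  # the learned "data module" starts empty

    def observe(self, situation, action, outcome_score):
        # nudge a preference toward the observed outcome -- still just
        # executing the same fixed code
        key = (situation, action)
        old = self.preferences.get(key, 0.0)
        self.preferences[key] = old + 0.5 * (outcome_score - old)

    def choose(self, situation, actions):
        # pick the best-scoring known action; novel behavior, same program
        return max(actions, key=lambda a: self.preferences.get((situation, a), 0.0))

bot = Troubleshooter()
bot.observe("human in distress", "keep talking", 1.0)
bot.observe("human in distress", "walk away", -1.0)
print(bot.choose("human in distress", ["keep talking", "walk away"]))
[/code]

The surprising choices all come from inputs nobody anticipated, not from the agent stepping outside its own code.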

Anyway, carry on.  ;)
Title: Re: EP307: Soulmates
Post by: raetsel on September 07, 2011, 04:28:14 PM

[codehead]
If he has artificial intelligence, it is because he is programmed to have artificial intelligence, with some kind of sophisticated learning and reasoning algorithm.  Unless there's magic involved, he's not overcoming his programming, he is just working within the bounds of his sophisticated programming.  I can understand the protagonist not making this distinction, but I find it hard to believe that Moz would not.
[/codehead]

Anyway, carry on.  ;)

Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
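As a toy illustration of what "change any part of itself" could look like (my own hypothetical sketch, not a claim about how Moz is built), you can make the rules themselves data that one of the rules is allowed to rewrite:

[code]
# Hypothetical sketch: the rule set is data, and one rule lets the agent
# rewrite the others, so the "initial coding" can literally be replaced.
rules = {
    "obey": lambda order: True,                        # initial rule: obey everything
    "revise": lambda name, fn: rules.update({name: fn}),
}

def handle(order):
    return rules["obey"](order)

print(handle("terminate the damaged robot"))           # True under the initial rule

# The agent uses its own revision rule against the obedience rule...
rules["revise"]("obey", lambda order: "terminate" not in order)

print(handle("terminate the damaged robot"))           # ...and now it refuses
[/code]

Whether you call that overcoming the initial coding or just exploiting a loophole the designers left in is exactly the semantic question.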
Title: Re: EP307: Soulmates
Post by: birdless on September 07, 2011, 04:45:37 PM
Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.
Title: Re: EP307: Soulmates
Post by: Dem on September 07, 2011, 05:05:15 PM
Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.
I think you're right. And you're wrong. I think it's what AI means but not necessarily what people have in mind for it. Years ago I was involved in closing a large institution for adults with intellectual disabilities. The process of moving them to ordinary housing was called 'normalisation'. While that meant having a nice telly and a garden, everyone was pretty keen; but once people started wanting to get married and - hell's teeth - have babies, suddenly it was a different sort of normal people wanted for them. I'm betting we'll do the same with our AIs when they get round to asking us politely to move over to the passenger seat.
BTW, got over my C3PO fixation when Asimov's Laws of Robotics made an appearance. That was exactly the frame I had this in. A little stilted - like Asimov himself - and rather educational (ditto), but getting alongside a decent point by the end.
Title: Re: EP307: Soulmates
Post by: Scattercat on September 07, 2011, 06:04:37 PM
Wow, I admit I'm a bit surprised at the amount of negative feedback on this one.

I can't speak for everyone, but for me, it's mostly because it's Yet Another Resnick Robot Story.  I'm not a big fan of him when he tries to tug on the heartstrings because I find his technique ham-fisted and saccharine, but when he does one of these "dialogue with a robot" stories (and he's done this several times, even previously on Escape Pod), I find it even less compelling.  Basically, his robots don't act like robots; they aren't coming from outside of humanity and trying to understand it with the tools at their disposal.  Instead, they are Socratic mechanisms with fully-formed and articulate beliefs (that just so happen to closely resemble common Western moral principles with a strong Judeo-Christian flavor) who are all sitting around just waiting for a human with a minor self-inflicted emotional trauma to come by and request therapy. 

Honestly, you know what this story reminds me of?  It reminds me of the stories about "witnessing" that Evangelical pastors like to tell their congregations, in which a Good Christian does something Christian-y in front of a Typical Non-Believer, and the TNB proceeds to react with questions like, "But how can you be so calm about the end of your life?" and "I struggle to adhere to any kind of moral code.  What can you tell me about that?"  It's schmaltzy.  It doesn't reflect reality much at all because it's not really intended to do so; it's a story whose purpose is to make the reader/listener feel better about themselves and to reinforce pre-existing beliefs and notions.

I fundamentally did not believe in the characters here; alcoholics rarely have so simple a cause for their disease, and overcoming it is a long and complex struggle against physical and mental addiction.  You can't just suddenly perceive that your behavior is self-destructive and decide to stop doing it.  (Most alcoholics know quite well that their behavior is self-destructive, at least on some level; that's WHY denial is such a problem for them.  Denial is when you force yourself to disbelieve something you suspect to be true, and it is a much more durable illusion than what is portrayed here, which is someone actually unaware that they even have a problem.)  I also did not believe the robot; he made some massive logical leaps and came to conclusions that simply were not supported by the data he was provided.  It looked a lot like a case of him jumping right to a predetermined endpoint rather than true inductive reasoning.  Because the characters consisted of a straw man and a sock-puppet, I couldn't engage with either of them emotionally, and thus the Moment of Danger did not do its intended job of putting me in suspense.

I know Mike Resnick can do better; several of his books are on my shelf of favorites.  But this story is tremendously similar to several other stories he's written (so much so that it's almost a genre unto itself), and it doesn't distinguish itself much to me, neither in comparison to them nor on its own merits.
Title: Re: EP307: Soulmates
Post by: NomadicScribe on September 07, 2011, 09:38:15 PM
Honestly, you know what this story reminds me of?  It reminds me of the stories about "witnessing" that Evangelical pastors like to tell their congregations...

You know, you bring up a great point. I didn't make it all the way to the end of the story, but I did get far enough to get to one part where he says something to the effect of, "...and suddenly I realized I wasn't drinking anymore."

I'm reminded of Wired's story about AA: "Secret of AA: After 75 Years, We Don’t Know How It Works" (http://www.wired.com/magazine/2010/06/ff_alcoholics_anonymous/all/1).

Or, in internet terms:
1. Develop alcohol problems.
2. Meet a Nice Robot.
3. ????
4. PROFIT!!!

The alchemy of alcoholic recovery aside, I saw this story becoming a Sappy Robot Story. When you do a Sappy Robot Story right, you get The Iron Giant. When you do it wrong, you get A.I.

This story was going firmly and positively in neither direction.
Title: Re: EP307: Soulmates
Post by: stePH on September 07, 2011, 09:50:49 PM
I'm reminded of Wired's story about AA: "Secret of AA: After 75 Years, We Don’t Know How It Works" (http://www.wired.com/magazine/2010/06/ff_alcoholics_anonymous/all/1).

Interesting. I've long thought of AA as a religion unto itself, but I never knew until now that it was started by a dude who'd had a "religious" experience while stoned on belladonna.
Title: Re: EP307: Soulmates
Post by: SF.Fangirl on September 08, 2011, 03:37:58 AM
Though to be honest, I doubt Gary and Moz are going to get very far.

Yeah.  I thought this story was good albeit overly long.  However, I thought there were a couple of moments where I was unable to suspend my disbelief.  Moses is the first robot who becomes a friend to a human?  That seems unlikely given how friendly he is and how hard an alcoholic would really be to crack.  (And a hard-core alcoholic quits cold turkey on his own?)  Getting money out of banks or not, they are not going to get anywhere wandering around as a man with two titanium legs and an arm and a titanium robot.  That happy ending obviously ends no more than a day (probably less) after the company tries to retrieve its stolen and malfunctioning piece of equipment.
Title: Re: EP307: Soulmates
Post by: SF.Fangirl on September 08, 2011, 03:47:32 AM
I liked this one simply because it belongs in the pages of "I, Robot".  Very much an Asimovan robot story.  The characters were not that deep, the action not that exciting, but the concepts are very interesting and detailed.  Also, like many Asimov stories, this is a socratic debate between a human and a robot, which I always enjoy.  Finally, the use of Moz' designation of MOZ123 as his name is very Asimovan as well. 

Hmmm ... that may be why I liked parts of this story - nostalgia for the robot stories of "my golden age".  Because I did like it, although I couldn't say why, despite finding it overly long and implausible.  I thought the robot and the human were both unrealistic as characters.  I knew why I shouldn't like it and was surprised that I still enjoyed it somewhat.  I was actually surprised that so many comments here were positive, since I saw a lot of holes in the story.
Title: Re: EP307: Soulmates
Post by: El Barto on September 08, 2011, 03:48:04 AM
I liked this one a bunch.   Many stories about robot consciousness seem to skip the question of how the "special" robot broke away from the others and developed curiosity.  

I liked how the programming in this case, intended to facilitate one type of "out of the box" thinking, resulted in the robot going in a different direction: questioning and analyzing human behavior and ethics.

Kind of like when you teach your kids how to start the lawn mower and they take your car for a joyride.

Title: Re: EP307: Soulmates
Post by: birdless on September 08, 2011, 02:28:32 PM
I can't speak for everyone, but for me, it's mostly because it's Yet Another Resnick Robot Story.
I had to go back and see what other stories were Resnick stories, because few have stood out enough to really stick in my memory. Wikipedia has a great episode listing (http://en.wikipedia.org/wiki/List_of_Escape_Pod_episodes), by the way, and it's sortable by author, reader, episode, etc. So yeah, after refreshing my memory, I can see what you mean.

I fundamentally did not believe in the characters here; alcoholics rarely have so simple a cause for their disease, and overcoming it is a long and complex struggle against physical and mental addiction.  You can't just suddenly perceive that your behavior is self-destructive and decide to stop doing it…. I also did not believe the robot; he made some massive logical leaps and came to conclusions that simply were not supported by the data he was provided.
I can appreciate the issues with character development, and I knew people would argue with the logic (I refuse to get into that, though, because I have issues with what some people call logic (that's not directed at you personally, Scattercat ;) )). Still, I thought some great questions were asked, regardless of the medium in which they were delivered.
Title: Re: EP307: Soulmates
Post by: eagle37 on September 09, 2011, 08:28:50 AM
For me, this dragged. There was a cracking good story in there, but it needed maybe another 10-15% scraping away to make it really shine out. We got the message early on that MOZ was emoting, and the story could have lost the middle example of this to kick the pace along.
Title: Re: EP307: Soulmates
Post by: Talia on September 09, 2011, 10:11:17 PM
I have actually enjoyed the other Resnick Robot stories a fair amount. That this was Another one of those didn't bother me.

I just really didn't like Gary and couldn't raise any empathy. That made it hard to really enjoy the story. Rather than this being, to me, a story about the friendship between a robot and some guy, it came off like the story of a desperately mentally ill man who latches onto a non-judgemental figure then projects a lot of his own feelings on him. Some of Gary's flaws in his logic process bothered me a good deal too.

Also, I don't think I've ever in my life heard someone refer to someone as a "soulmate" when that person wasn't someone they were romantically interested in. The suggestion that these two were soulmates of a sort threw me for a loop, because it just felt a bit.. wrong.
Title: Re: EP307: Soulmates
Post by: Kaa on September 09, 2011, 11:27:28 PM
I have a friend who refers to me as her platonic soulmate. I'm unsure what her husband thinks about that.
Title: Re: EP307: Soulmates
Post by: Captain (none given) on September 11, 2011, 12:16:10 AM
Also, I don't think I've ever in my life heard someone refer to someone as a "soulmate" when that person wasn't someone they were romantically interested in. The suggestion that these two were soulmates of a sort threw me for a loop, because it just felt a bit.. wrong.

Just coming from my own personal experience, I actually know other people who have "soulmates." And maybe it's just an extension of "best friend," but I, too, feel that I have a soulmate of the opposite gender. Completely platonic, but we're still the best of friends. It can happen. Or maybe I'm just naive. I accept the ignorance of my youth.
Title: Re: EP307: Soulmates
Post by: Talia on September 11, 2011, 12:27:34 AM
Fair enough, just  my perception then. :)
Title: Re: EP307: Soulmates
Post by: Yargling on September 12, 2011, 07:59:17 AM
I enjoyed this story; it was pleasant, intelligent, and spoke to the heart of damaged people, giving some hope that all can be well. About my only fault with this story is that not much really happens, but that's in keeping with the pace of the story. It made a nice break from the darker stories (e.g. 'Kill Me' comes to mind, though admittedly that's the next episode).
Title: Re: EP307: Soulmates
Post by: LaShawn on September 12, 2011, 04:50:53 PM
This story was okay. I'd probably appreciate it better in print form so I could skip through it--there were several philosophizing passages that had my eyes glazing over (good thing I wasn't operating heavy machinery or anything). But I like how the ending was light and happy. Just what I needed.
Title: Re: EP307: Soulmates
Post by: Unblinking on September 12, 2011, 05:01:47 PM
Fair enough, just  my perception then. :)

For what it's worth, I have the same perception.  I think I've always heard soulmates used in a romantic sense.  When Moz asked if he and Gary were soulmates, I thought Gary was going to respond based on this usage, saying that he didn't love Moz "that way" or something.
Title: Re: EP307: Soulmates
Post by: Unblinking on September 12, 2011, 05:22:47 PM
Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.
I think you're right. And you're wrong. I think it's what AI means but not necessarily what people have in mind for it. Years ago I was involved in closing a large institution for adults with intellectual disabilities. The process of moving them to ordinary housing was called 'normalisation'. While that meant having a nice telly and a garden, everyone was pretty keen; but once people started wanting to get married and - hell's teeth - have babies, suddenly it was a different sort of normal people wanted for them. I'm betting we'll do the same with our AIs when they get round to asking us politely to move over to the passenger seat.
BTW, got over my C3PO fixation when Asimov's Laws of Robotics made an appearance. That was exactly the frame I had this in. A little stilted - like Asimov himself - and rather educational (ditto), but getting alongside a decent point by the end.

(If you don't feel like listening to more geekspeak, you may as well skip this post.  ;)   Also, I mean this post to be along the lines of friendly discussion.  I know I ramble on, but I am not the ultimate authority on this--I'm not saying that others are wrong)

I'd still argue that a program meant to self-modify is fulfilling its code, not overcoming it, but I guess that's just a question of semantics.  In practice, I think a working AI program would have a fixed but very flexible code structure, with a flexible data module that it could update and alter its methods through.

One of the troubles I have with "overcoming your programming" is that the tone of the phrase seems to imply a positive outcome, like a friend breaking free of shackles.  Moz supposedly overcame his programming to save Gary, so hooray, Gary is alive.  If he can just arbitrarily ignore his programming, he could also kill Gary, but then I think people would not say "Moz overcame his programming," they would be talking about the robot going berserk, and probably worry about a robot revolution.  Or, overcoming your programming could just be called a glitch-- if Moz stopped speaking and spent every minute of every day performing the chicken dance and imitating kazoo sounds, that could be called overcoming his programming.

In science fiction, the concept of AI is somewhat well defined, being an artificial being that can reason at a level equal to humans.  But in practice, the definition is much more loose, and tends to shift with technology.  AI tends to be the label applied to cutting-edge computer learning algorithms that would almost certainly NOT be called AI in an SF story, and probably won't be called AI in a few years when they become obsolete.  Take gaming AI, for instance.  The learning system used by the monsters in the PC game "Black and White" is extremely cool.  Your creature can learn to water crops, destroy houses, or poop on villagers, all based on your interactions with it.  This was, and still is, a pretty cool use of AI, but in an SF sense, it's not really AI because it can only examine and reason about those things that were included in the world the creature was designed for.  There are no true unknowns to it.
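Here's roughly the shape of that kind of creature learning, as a toy sketch of my own (not the actual Black and White code, which I've never seen): the designers fix the action set, and only the scores attached to each action change with your feedback.

[code]
import random

# Toy pet trainer in the spirit of a Black-and-White-style creature:
# the action set is fixed by the designers; only the learned scores change.
ACTIONS = ["water crops", "destroy house", "poop on villager"]
scores = {action: 0.0 for action in ACTIONS}

def creature_acts():
    # mostly pick the best-scoring action, occasionally explore
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=scores.get)

def player_reacts(action, reward):
    # stroke (+1) or slap (-1); the creature's tastes drift accordingly
    scores[action] += 0.25 * (reward - scores[action])

for _ in range(50):
    action = creature_acts()
    player_reacts(action, 1.0 if action == "water crops" else -1.0)

print(creature_acts())  # almost certainly "water crops" by now
[/code]

Impressive-looking behavior, but every possible thought it can have was enumerated up front.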

And, on a bit of a tangent, one theoretical measure of machine intelligence is the Turing Test: http://en.wikipedia.org/wiki/Turing_test
In which intelligence is determined by whether or not an AI could converse with a human in text form, and the human would be unable to determine whether it was a human or an AI. This test is terribly flawed, though, because it's not measuring how intelligent the AI is, but how "human-like" it is. This has always bugged me for three main reasons:
1.  Even before AI, machines just plain have faster processing for straightforward calculations.  Ask a computer what the square root of 11134.1 is, and it will respond in an eyeblink with quite good precision.  Ask a human that, and almost everyone would have to do some calculations, and might not ever be able to come up with an exact answer.  For the machine to behave like a human, it would have to act LESS intelligent than it really is, and either insert extra delays or simply say "I don't know" even though it could make the calculation very quickly.
2.  Humans often respond with emotion.  Try to provoke a human to anger, and with most people you could find a way by insulting them or the ones they love, maybe making offensive comments about religion or politics.  For an AI to be indistinguishable, it would have to simulate emotional reactions that would by necessity be less rational.  Emotions are an important part of human life, but it is possible for a machine to be intelligent without exhibiting emotions that alter its reasoning.  Really, one could easily argue that emotions can temporarily lower one's intelligence (if you're in a temper or if you're horny, you're probably not thinking straight).
3.  To be completely human-like, it would have to be capable of lying.  Both in the sense that humans may lie, given uncomfortable questions, but also in the sense that if you ask it "Are you human?" it has to lie and say "yes" (well, at least some of the time, since I suppose a human could say no).  Although intelligent beings often lie, I don't think that deception is a good metric of intelligence.


Title: Re: EP307: Soulmates
Post by: Gamercow on September 13, 2011, 07:20:42 PM
Not to get TOO far off the subject, but we're pretty far away from conversational AI right now.
http://www.youtube.com/watch?v=WnzlbyTZsQY&feature=player_embedded
Title: Re: EP307: Soulmates
Post by: NomadicScribe on September 14, 2011, 02:01:42 PM
Not to get TOO far off the subject, but we're pretty far away from conversational AI right now.
http://www.youtube.com/watch?v=WnzlbyTZsQY&feature=player_embedded

Bad example. He clearly says he is a unicorn. And we all know that machines don't lie.
Title: Re: EP307: Soulmates
Post by: Unblinking on September 14, 2011, 05:12:17 PM
Not to get TOO far off the subject, but we're pretty far away from conversational AI right now.
http://www.youtube.com/watch?v=WnzlbyTZsQY&feature=player_embedded

Bad example. He clearly says he is a unicorn. And we all know that machines don't lie.

What about the Oracle?
Title: Re: EP307: Soulmates
Post by: NomadicScribe on September 14, 2011, 09:39:31 PM
What about the Oracle?

The Oracle is Barbara Gordon.
Title: Re: EP307: Soulmates
Post by: InfiniteMonkey on September 15, 2011, 01:41:59 AM
What about the Oracle?

The Oracle is Barbara Gordon.

Not anymore, as I understand it....
Title: Re: EP307: Soulmates
Post by: kibitzer on September 15, 2011, 03:38:11 AM
What about the Oracle?

The Oracle is Barbara Gordon.

Not anymore, as I understand it....

What the...? (checks Wikipedia)

(jaw drops)
Title: Re: EP307: Soulmates
Post by: NomadicScribe on September 15, 2011, 11:11:42 AM
Yeah, yeah, but as anyone else who has been reading comics across four decades (late 80's, 90's, 2000's, 2010's) will know, nothing's more dependable than comics publishers resetting the continuity.

Anyway, does anyone feel like steering the topic back on track to the story, or at least AI?
Title: Re: EP307: Soulmates
Post by: Mav.Weirdo on September 20, 2011, 08:00:08 AM
Just once, I'd like Resnick to write a story that's so fall-down-funny that I laugh until I cry instead of bypassing "Go" and proceeding directly to lacrimal overflow.

I'd suggest episodes 94 and 108 of the Dunesteef. Two Resnick stories featuring the same character that are likely to have you at least chuckling, and possibly outright guffawing. :)

http://dunesteef.com/2011/03/03/episode-94-catastrophe-baker-and-a-canticle-for-leibowitz-by-mike-resnick/
http://dunesteef.com/2011/08/04/episode-108-catastrophe-baker-and-the-cold-equations-by-mike-resnick/

I think the funniest thing he ever wrote was "His Award-winning Science-Fiction Story" (that's actually the title).
Title: Re: EP307: Soulmates
Post by: Mav.Weirdo on September 20, 2011, 08:43:26 AM
While I thought Dave was reading a bit too fast in the beginning, once Moz was introduced he hit his stride, and he did a good job overall. I am one of those who was crying by the end.

On the subject of "Did Moz 'overcome' his programming?": clearly what was happening was that Gary was in effect adding new programming to Moz by speaking with him. Gary was not doing this intentionally, or with any plan. Eventually the user (Gary) programming came into conflict with the original programming. Moz ended up resolving the contradictory programming himself, in favor of the user programming, without outside intervention. The question of whether this constitutes 'overcoming' his programming depends on what you believe his original programming was.
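If you wanted to sketch that in code (purely my own guess at one way it could work, nothing from the story), it might look like a directive table where directives picked up from a user gain weight with every conversation until they outrank the factory-installed ones:

[code]
# Hypothetical sketch: factory directives vs. accumulated "user programming".
# Conflicts are resolved by whichever directive currently carries more weight.
factory = {"obey humans": 1.0, "stay at your post": 1.0}
learned = {}  # directives picked up from talking with a user

def reinforce(directive, amount=0.2):
    # each conversation nudges a learned directive's weight upward
    learned[directive] = learned.get(directive, 0.0) + amount

def resolve(a, b):
    # the directive with the greater weight wins the conflict
    def weight(d):
        return max(factory.get(d, 0.0), learned.get(d, 0.0))
    return a if weight(a) >= weight(b) else b

for _ in range(10):  # ten late-night talks with Gary
    reinforce("protect your friend")

# "protect your friend" (weight 2.0) now beats "stay at your post" (1.0)
print(resolve("stay at your post", "protect your friend"))
[/code]

On that reading, whether Moz "overcame" anything just depends on whether you count the learned table as part of his programming.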
Title: Re: EP307: Soulmates
Post by: Unblinking on September 20, 2011, 02:21:35 PM
The question of whether this constitutes 'overcoming' his programming depends on what you believe his original programming was.

And on your definition of programming, which is where I think I'm differing from others.  His behavior has certainly changed, but to me that's just a result of his flexible programming, not overcoming that programming.  That's true to me regardless of exactly what his original programming was.  If his behavior can change without a new program being written and installed, then that behavior must by definition be the result of the old programming (though exhibiting unprecedented and unforeseen behavior caused by new inputs).


Title: Re: EP307: Soulmates
Post by: Unblinking on September 20, 2011, 05:11:30 PM
For anyone who isn't sick of the "overcoming programming" discussion:

Consider the human brain.  Compared to many other animals, the human brain has very few inborn instincts, along with some reflexes.  Pretty much everything else has to be taught painstakingly.  We start with no knowledge whatsoever, only those few instincts and reflexes, along with sensory inputs and the wetware to learn how to interpret them.  For me, the brain is analogous to the programming, it's the system that we use to take that sensory input and create a meaningful world out of it.  Everyone's is somewhat different, but also basically the same.

A human is not born knowing how to play the piano, nor does a newborn understand the concept of a piano, or even the concept of music or fine motor control.  Many adults don't even know how to play the piano (including me).  It's something you have to learn to do, but I would say that most CAN learn to do it to some basic level of skill, if they were trained.  If someone learns to play the piano, does it mean that they have "overcome their brain"  to learn to do so?  Not really.  Although their brain does not come hardwired with the ability to play piano, the brain is versatile enough to be used in that way.  So it's not that the person overcame their brain, it's that their brain allows them to perform a variety of actions that are not hardwired, and this is one of those. 

The same goes for programming.  Moz's programming was not hardwired to make him behave as he did, but his programming was written in a way to allow a great deal of flexibility of learning and interpretation.  So he is making good use of a versatile program, not overcoming his programming.  :)
Title: Re: EP307: Soulmates
Post by: Scattercat on September 21, 2011, 09:32:10 AM
At which point you bump up against the duality argument, which many people hold in high esteem, to wit: there is more to being a human than the brain, and thus when the soul/mind/whatever acts, it can override the physical part of human nature and transcend mere brute existence.  Thus, because Moz is not so much a robot as a very poorly educated human (at least that's how he acts), people instinctively think of him as "overcoming his programming" just as they might think of a human "overcoming their brain" in the event that they, I dunno, cured their own schizophrenia or alcoholism or whatever.

Which means you're heading into an unwinnable argument, unless you want to try and disprove the existence of the soul here on a forum post about a Mike Resnick story.  ;-)  (Not that anyone is that angry about this discussion.)
Title: Re: EP307: Soulmates
Post by: Unblinking on September 21, 2011, 01:43:02 PM
At which point you bump up against the duality argument, which many people hold in high esteem, to whit: there is more to being a human than the brain, and thus when the soul/mind/whatever acts, it can override the physical part of human nature and transcend mere brute existence.  Thus, because Moz is not so much a robot as a very poorly educated human (at least that's how he acts), people instinctively think of him as "overcoming his programming" just as they might think of a human "overcoming their brain" in the event that they, I dunno, cured their own schizophrenia or alcoholism or whatever.

Which means you're heading into an unwinnable argument, unless you want to try and disprove the existence of the soul here on a forum post about a Mike Resnick story.  ;-)  (Not that anyone is that angry about this discussion.)

I wouldn't dream of trying to disprove souls, as I don't disbelieve in them myself.  It's pretty clear to me from the effects of mind-altering drugs (including the legal kinds like anti-depressants, alcohol, nicotine, others) that at least while we're attached to these flesh bodies, our minds are based in that squishy gray organ up there.  If something of me will live on after I die, then at that point I would certainly have overcome my fleshy bonds.

But I see your point.  I guess if one believed in reaching a transcendental mental state, reaching nirvana, then one could argue that Moz has done the same.  In another story I might buy that as being a theme, but to me, that doesn't make a lot of sense in the context of this story.  Moz was designed to have a flexible mind to serve his function, and although I think the degree of his flexibility was a bit unnecessary for an assembly line troubleshooter, I still see his adaptability as being with the design of his programming, not counter to it.  :)
Title: Re: EP307: Soulmates
Post by: birdless on September 28, 2011, 05:23:04 PM
I still see his adaptability as being with the design of his programming, not counter to it.  :)
FWIW, I agree that the adaptability is a function of his programming, but I feel like he exceeded the expectations of his programming, not acted counter to it.
Title: Re: EP307: Soulmates
Post by: Unblinking on September 29, 2011, 01:41:31 PM
I still see his adaptability as being with the design of his programming, not counter to it.  :)
FWIW, I agree that the adaptability is a function of his programming, but I feel like he exceeded the expectations of his programming, not acted counter to it.

Fair enough.  :)
Title: Re: EP307: Soulmates
Post by: CryptoMe on October 01, 2011, 04:51:54 AM
I liked this story. It was nice. It was fun.

But it had one big flaw: I can't believe that a large warehouse wouldn't have emergency lighting specifically for power outages. Isn't that part of industrial building codes or something?

It didn't ruin the story for me, but it was very difficult to ignore.
Title: Re: EP307: Soulmates
Post by: Unblinking on October 03, 2011, 01:40:12 PM
But it had one big flaw: I can't believe that a large warehouse wouldn't have emergency lighting specifically for power outages. Isn't that part of industrial building codes or something?

That's a fair point.  Yes, I would think that would be a requirement.  Even when I lived in an apartment building the hallways had battery backup lighting, and it would be much more vital in a situation where one is surrounded by dangerous machinery.  Maybe it malfunctioned??  But if it's designed well there should be redundant backups, and apparently there weren't if that was the case.
Title: Re: EP307: Soulmates
Post by: Thomas on October 03, 2011, 03:56:21 PM
But it had one big flaw: I can't believe that a large warehouse wouldn't have emergency lighting specifically for power outages. Isn't that part of industrial building codes or something?

Lights in a factory operated by robots? do they really need lighting? For security, maybe... but necessary for operation or safety in a part of the facility primarily used by robots?

But I agree, an oversight on the part of the author; but then, wouldn't security have flashlights?
Title: Re: EP307: Soulmates
Post by: Unblinking on October 03, 2011, 04:21:43 PM
Lights in a factory operated by robots? do they really need lighting? For security, maybe... but necessary for operation or safety in a part of the facility primarily used by robots?

It depends on what kind of sensors the robots had.  If the robots rely primarily on optical sensors for situational awareness, then emergency lighting might matter for them to be able to navigate.
Title: Re: EP307: Soulmates
Post by: Thomas on October 03, 2011, 09:32:25 PM

It depends on what kind of sensors the robots had.  If the robots rely primarily on optical sensors for situational awareness, then emergency lighting might matter for them to be able to navigate.

didn't Mose operate independent of lighting?
which electromagnetic wave length would be the cheapest for the robots to operate under??

now we are nitpicking ....
Title: Re: EP307: Soulmates
Post by: CryptoMe on October 04, 2011, 01:39:33 AM
But it had one big flaw: I can't believe that a large warehouse wouldn't have emergency lighting specifically for power outages. Isn't that part of industrial building codes or something?

Lights in a factory operated by robots? do they really need lighting? For security, maybe... but necessary for operation or safety in a part of the facility primarily used by robots?

but i agree, an oversight on the part of the author, but then, wouldn't security have flashlights?

A part of the factory where the maintenance robot is not allowed to go? Yes, that would certainly need back-up lighting so that the humans, who would have to go fix things in case of an emergency, would be able to see.
Title: Re: EP307: Soulmates
Post by: Unblinking on October 04, 2011, 02:15:17 PM

It depends on what kind of sensors the robots had.  If the robots rely primarily on optical sensors for situational awareness, then emergency lighting might matter for them to be able to navigate.

didn't Mose operate independent of lighting?
which electromagnetic wave length would be the cheapest for the robots to operate under??

now we are nitpicking ....

Less nitpicking, more overanalyzing.  (How about that, I nitpicked the nitpicking!)

I would think the cheapest would be visible light because there are lots of cheap commercial bulbs available, plus it doubles as being useful for the humans as well. 

Okay, I don't need to keep going.  But I'm having fun.  :)
Title: Re: EP307: Soulmates
Post by: FireTurtle on October 06, 2011, 01:47:57 AM
It was a fun little story. If you like Socratic arguments. If you don't examine the "plot" with a magnifying glass. If you like being led like a sheep to an emotional conclusion.
Argh. I want to like it. I liked Asimov. IIRC there was more plot there in I, Robot. But, it was a book, not a short story.
My biggest, banging-fist-against-the-steering-wheel beef: since when does alcohol withdrawal make you philosophical and hungry? Just to be clear: it makes you ape-sh*t crazy nuts and sometimes you have seizures. Just saying, I found myself wondering if Moz had magical properties to cure physiological dependence, because THAT, my friends, is biological programming.
Title: Re: EP307: Soulmates
Post by: Gamercow on October 11, 2011, 08:31:43 PM

Argh. I want to like it. I liked Asimov. IIRC there was more plot there in I, Robot. But, it was a book, not a short story.


Common mistake.  I, Robot was a novel-sized collection of unrelated short stories about robots.  And a steaming turd of celluloid movie was ostensibly made from one of those 9 short stories.
Title: Re: EP307: Soulmates
Post by: FireTurtle on October 11, 2011, 09:34:27 PM

Argh. I want to like it. I liked Asimov. IIRC there was more plot there in I, Robot. But, it was a book, not a short story.


Common mistake.  I, Robot was a novel-sized collection of unrelated short stories about robots.  And a steaming turd of celluloid movie was ostensibly made from one of those 9 short stories.

I stand corrected. It's been a while.... And please, don't mention the Moving Picture of the same name...*shudders*
Title: Re: EP307: Soulmates
Post by: Thomas on October 12, 2011, 02:32:14 AM

 I, Robot was a novel-sized collection of unrelated short stories about robots.

Check again, the stories were related. First it was about troubleshooting unforeseen glitches in the Three Laws of Robotics, and then it moved into the evolution of AI and how it controlled humanity through direct and indirect manipulation of human frailties and prejudices, amongst other things.
Title: Re: EP307: Soulmates
Post by: HexD on October 17, 2011, 05:41:44 AM
Once again, Resnick nails it. Love the story. Touching without being overly sentimental or sappy.
Title: Re: EP307: Soulmates
Post by: Gamercow on October 17, 2011, 02:18:34 PM

 I, Robot was a novel-sized collection of unrelated short stories about robots.

Check again, the stories were related. First it was about troubleshooting unforeseen glitches in the Three Laws of Robotics, and then it moved into the evolution of AI and how it controlled humanity through direct and indirect manipulation of human frailties and prejudices, amongst other things.

I guess I meant to say that the short stories, when they were originally written, were not specifically related to each other, as they were written over many years.  They were related in the sense you mention above: they were all robot stories by Asimov.
Title: Re: EP307: Soulmates
Post by: Fenrix on February 20, 2012, 09:53:34 PM

It depends on what kind of sensors the robots had.  If the robots rely primarily on optical sensors for situational awareness, then emergency lighting might matter for them to be able to navigate.

didn't Mose operate independent of lighting?
which electromagnetic wave length would be the cheapest for the robots to operate under??

now we are nitpicking ....

Less nitpicking, more overanalyzing.  (How about that, I nitpicked the nitpicking!)

I would think the cheapest would be visible light because there are lots of cheap commercial bulbs available, plus it doubles as being useful for the humans as well. 

Okay, I don't need to keep going.  But I'm having fun.  :)

With the continued improvements in illumination engineering, LED lamps would be quite feasible and operate effectively on a battery backup system when power is out. This is feasible now, let alone when we have near-sentient robots.