Author Topic: EP307: Soulmates  (Read 27357 times)

Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8729
    • Diabolical Plots
Reply #25 on: September 07, 2011, 01:22:14 PM
I was expecting Gary to accidentally start the Robot Rebellion against humans. The description of the place when Moz was fixing the broken robot especially made me feel that. Instead of "Kill all faulty meatbags!" we got a nice ending. A bit sappy, but I suppose not every robot story needs to end with the buggers turning against us.

Technically, Moz didn't overcome his programming. Supposing that Asimov's laws are in force in that place, he was following them perfectly.

Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Either the laws don't exist in the story or they checked the robot's head after the accident and found something that was wrong.

I gather that the laws didn't exist, or that, rather than laws, they were lower-priority directives ranked beneath obedience.

Another hint of this is that the robot said something to the effect that, if he became convinced that robots were equal to humans, he would not have to obey them.  I got a dark premonition from that (which was not fulfilled): that the versions of the Three Laws in his system might be reworded to refer to "superior beings" rather than "human beings", so that once his human friend convinced him that robots were humans' equals, the Three Laws would no longer apply and he could do whatever he wanted.



birdless

  • Lochage
  • *****
  • Posts: 581
  • Five is right out.
Reply #26 on: September 07, 2011, 04:13:47 PM
Wow, I admit I'm a bit surprised at the amount of negative feedback on this one. Maybe part of that comes from me not finding that sci-fi and short stories mix very well. Well, for that matter, I really prefer to watch sci-fi rather than read it, so I haven't read Asimov, et al. I thought this was a brilliant story. I understand why the codeheads would be bothered by the technicalities of 'robots overcoming their programming,' but as a layman, this didn't bother me at all (he had to have artificial intelligence to even carry on a conversation outside the confines of his role as a robot mechanic). I rather liked the fact that it showed how Moz overcame his programming and even evolved a sense of self-preservation. But for me, the story was so much more about the process of Gary overcoming grief and guilt than the process of a robot becoming more self-aware.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8729
    • Diabolical Plots
Reply #27 on: September 07, 2011, 04:21:50 PM
I understand why the codeheads would be bothered by the technicalities of 'robots overcoming their programming,' but as a layman, this didn't bother me at all (he had to have artificial intelligence to even carry on a conversation outside the confines of his role as a robot mechanic). I rather liked the fact that it showed how Moz overcame his programming and even evolved a sense of self-preservation.

[codehead]
If he has artificial intelligence, it is because he is programmed to have artificial intelligence, with some kind of sophisticated learning and reasoning algorithm.  Unless there's magic involved, he's not overcoming his programming, he is just working within the bounds of his sophisticated programming.  I can understand the protagonist not making this distinction, but I find it hard to believe that Moz would not.
[/codehead]

Anyway, carry on.  ;)



raetsel

  • Peltast
  • ***
  • Posts: 116
    • MCL & Me
Reply #28 on: September 07, 2011, 04:28:14 PM

[codehead]
If he has artificial intelligence, it is because he is programmed to have artificial intelligence, with some kind of sophisticated learning and reasoning algorithm.  Unless there's magic involved, he's not overcoming his programming, he is just working within the bounds of his sophisticated programming.  I can understand the protagonist not making this distinction, but I find it hard to believe that Moz would not.
[/codehead]

Anyway, carry on.  ;)

Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.



birdless

  • Lochage
  • *****
  • Posts: 581
  • Five is right out.
Reply #29 on: September 07, 2011, 04:45:37 PM
Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.



Dem

  • Lochage
  • *****
  • Posts: 567
  • aka conboyhillfiction.wordpress.com
    • Suzanne Conboy-Hill
Reply #30 on: September 07, 2011, 05:05:15 PM
Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.
I think you're right. And you're wrong. I think it's what AI means but not necessarily what people have in mind for it. Years ago I was involved in closing a large institution for adults with intellectual disabilities. The process of moving them to ordinary housing was called 'normalisation'. While that meant having a nice telly and a garden, everyone was pretty keen; but once people started wanting to get married and - hell's teeth - have babies, suddenly it was a different sort of normal people wanted for them. I'm betting we'll do the same with our AIs when they get round to asking us politely to move over to the passenger seat.
BTW, got over my C3PO fixation when Asimov's Laws of Robotics made an appearance. That was exactly the frame I had this in. A little stilted - like Asimov himself - and rather educational (ditto), but getting alongside a decent point by the end.

Science is what you do when the funding panel thinks you know what you're doing. Fiction is the same only without the funding.


Scattercat

  • Caution:
  • Hipparch
  • ******
  • Posts: 4904
  • Amateur wordsmith
    • Mirrorshards
Reply #31 on: September 07, 2011, 06:04:37 PM
Wow, I admit I'm a bit surprised at the amount of negative feedback on this one.

I can't speak for everyone, but for me, it's mostly because it's Yet Another Resnick Robot Story.  I'm not a big fan of him when he tries to tug on the heartstrings because I find his technique ham-fisted and saccharine, but when he does one of these "dialogue with a robot" stories (and he's done this several times, even previously on Escape Pod), I find it even less compelling.  Basically, his robots don't act like robots; they aren't coming from outside of humanity and trying to understand it with the tools at their disposal.  Instead, they are Socratic mechanisms with fully-formed and articulate beliefs (that just so happen to closely resemble common Western moral principles with a strong Judeo-Christian flavor) who are all sitting around just waiting for a human with a minor self-inflicted emotional trauma to come by and request therapy. 

Honestly, you know what this story reminds me of?  It reminds me of the stories about "witnessing" that Evangelical pastors like to tell their congregations, in which a Good Christian does something Christian-y in front of a Typical Non-Believer, and the TNB proceeds to react with questions like, "But how can you be so calm about the end of your life?" and "I struggle to adhere to any kind of moral code.  What can you tell me about that?"  It's schmaltzy.  It doesn't reflect reality much at all because it's not really intended to do so; it's a story whose purpose is to make the reader/listener feel better about themselves and to reinforce pre-existing beliefs and notions.

I fundamentally did not believe in the characters here; alcoholics rarely have so simple a cause for their disease, and overcoming it is a long and complex struggle against physical and mental addiction.  You can't just suddenly perceive that your behavior is self-destructive and decide to stop doing it.  (Most alcoholics know quite well that their behavior is self-destructive, at least on some level; that's WHY denial is such a problem for them.  Denial is when you force yourself to disbelieve something you suspect to be true, and it is a much more durable illusion than what is portrayed here, which is someone actually unaware that they even have a problem.)  I also did not believe the robot; he made some massive logical leaps and came to conclusions that simply were not supported by the data he was provided.  It looked a lot like a case of him jumping right to a predetermined endpoint rather than true inductive reasoning.  Because the characters consisted of a straw man and a sock-puppet, I couldn't engage with either of them emotionally, and thus the Moment of Danger did not do its intended job of putting me in suspense.

I know Mike Resnick can do better; several of his books are on my shelf of favorites.  But this story is tremendously similar to several other stories he's written (so much so that it's almost a genre unto itself), and it doesn't distinguish itself much to me, neither in comparison to them nor on its own merits.



NomadicScribe

  • Palmer
  • **
  • Posts: 53
Reply #32 on: September 07, 2011, 09:38:15 PM
Honestly, you know what this story reminds me of?  It reminds me of the stories about "witnessing" that Evangelical pastors like to tell their congregations...

You know, you bring up a great point. I didn't make it all the way to the end of the story, but I did get far enough to get to one part where he says something to the effect of, "...and suddenly I realized I wasn't drinking anymore."

I'm reminded of Wired's story about AA: "Secret of AA: After 75 Years, We Don’t Know How It Works".

Or, in internet terms:
1. Develop alcohol problems.
2. Meet a Nice Robot.
3. ????
4. PROFIT!!!

The alchemy of alcoholic recovery aside, I saw this story becoming a Sappy Robot Story. When you do a Sappy Robot Story right, you get The Iron Giant. When you do it wrong, you get A.I.

This story was going firmly and positively in neither direction.
« Last Edit: September 07, 2011, 09:40:11 PM by NomadicScribe »



stePH

  • Actually has enough cowbell.
  • Hipparch
  • ******
  • Posts: 3906
  • Cool story, bro!
    • Thetatr0n on SoundCloud
Reply #33 on: September 07, 2011, 09:50:49 PM
I'm reminded of Wired's story about AA: "Secret of AA: After 75 Years, We Don’t Know How It Works".

Interesting. I've long thought of AA as a religion unto itself, but I never knew until now that it was started by a dude who'd had a "religious" experience while stoned on belladonna.

"Nerdcore is like playing Halo while getting a blow-job from Hello Kitty."
-- some guy interviewed in Nerdcore Rising


SF.Fangirl

  • Peltast
  • ***
  • Posts: 145
Reply #34 on: September 08, 2011, 03:37:58 AM
Though to be honest, I doubt Gary and Moz are going to get very far.

Yeah.  I thought this story was good, albeit overly long.  However, I thought there were a couple of moments where I was unable to suspend my disbelief.  Moz is the first robot ever to become friends with a human?  That seems unlikely, given how friendly he is and how hard an alcoholic would really be to crack.  (And a hard-core alcoholic quits cold turkey on his own?)  Getting money out of banks or not, they are not going to get anywhere wandering around as a man with two titanium legs and an arm plus a titanium robot.  That happy ending obviously lasts no more than a day (probably less) once the company tries to retrieve its stolen and malfunctioning piece of equipment.



SF.Fangirl

  • Peltast
  • ***
  • Posts: 145
Reply #35 on: September 08, 2011, 03:47:32 AM
I liked this one simply because it belongs in the pages of "I, Robot".  Very much an Asimovan robot story.  The characters were not that deep, the action not that exciting, but the concepts are very interesting and detailed.  Also, like many Asimov stories, this is a Socratic debate between a human and a robot, which I always enjoy.  Finally, the use of Moz's designation, MOZ123, as his name is very Asimovan as well.

Hmmm ... that may be why I liked parts of this story - nostalgia for the robot stories of "my golden age".  Because I did like it, although I couldn't say why, despite finding it overly long and implausible.  I thought the robot and the human were both unrealistic as characters.  I knew why I shouldn't like it and was surprised that I still enjoyed it somewhat.  I was actually surprised that so many comments here were positive, since I saw a lot of holes in the story.



El Barto

  • Peltast
  • ***
  • Posts: 132
Reply #36 on: September 08, 2011, 03:48:04 AM
I liked this one a bunch.   Many stories about robot consciousness seem to skip the question of how the "special" robot broke away from the others and developed curiosity.  

I liked how the programming in this case, intended to facilitate one type of "out of the box" thinking, resulted in the robot going in a different direction: questioning and analyzing human behavior and ethics.

Kind of like when you teach your kids how to start the lawn mower and they take your car for a joyride.




birdless

  • Lochage
  • *****
  • Posts: 581
  • Five is right out.
Reply #37 on: September 08, 2011, 02:28:32 PM
I can't speak for everyone, but for me, it's mostly because it's Yet Another Resnick Robot Story.
I had to go back and see what other stories were Resnick stories, because few have stood out enough to really stick in my memory. Wikipedia has a great episode listing, by the way, and it's sortable by author, reader, episode, etc. So yeah, after refreshing my memory, I can see what you mean.

I fundamentally did not believe in the characters here; alcoholics rarely have so simple a cause for their disease, and overcoming it is a long and complex struggle against physical and mental addiction.  You can't just suddenly perceive that your behavior is self-destructive and decide to stop doing it…. I also did not believe the robot; he made some massive logical leaps and came to conclusions that simply were not supported by the data he was provided.
I can appreciate the issues with character development, and I knew people would argue with the logic (I refuse to get into that, though, because I have issues with what some people call logic (that's not directed at you personally, Scattercat ;) )). Still, I thought some great questions were asked, regardless of the medium in which they were delivered.



eagle37

  • Extern
  • *
  • Posts: 5
Reply #38 on: September 09, 2011, 08:28:50 AM
For me, this dragged. There was a cracking good story in there, but it needed maybe another 10-15% scraped away to make it really shine. We got the message early on that MOZ was emoting, and the story could have lost the middle example of this to kick the pace along.



Talia

  • Moderator
  • *****
  • Posts: 2682
  • Muahahahaha
Reply #39 on: September 09, 2011, 10:11:17 PM
I have actually enjoyed the other Resnick Robot stories a fair amount. That this was Another one of those didn't bother me.

I just really didn't like Gary and couldn't muster any empathy for him. That made it hard to really enjoy the story. Rather than this being, to me, a story about the friendship between a robot and some guy, it came off like the story of a desperately mentally ill man who latches onto a non-judgemental figure and then projects a lot of his own feelings onto him. Some of the flaws in Gary's logic bothered me a good deal too.

Also, I don't think I've ever in my life heard someone refer to someone as a "soulmate" when that person wasn't someone they were romantically interested in. The suggestion that these two were soulmates of a sort threw me for a loop, because it just felt a bit... wrong.



Kaa

  • Hipparch
  • ******
  • Posts: 620
  • Trusst in me, jusst in me.
    • WriteWright
Reply #40 on: September 09, 2011, 11:27:28 PM
I have a friend who refers to me as her platonic soulmate. I'm unsure what her husband thinks about that.

I invent imaginary people and make them have conversations in my head. I also write.

About writing || About Atheism and Skepticism (mostly) || About Everything Else


Captain (none given)

  • Extern
  • *
  • Posts: 16
  • Activate Ridiculosity Drive
Reply #41 on: September 11, 2011, 12:16:10 AM
Also, I don't think I've ever in my life heard someone refer to someone as a "soulmate" when that person wasn't someone they were romantically interested in. The suggestion that these two were soulmates of a sort threw me for a loop, because it just felt a bit... wrong.

Just coming from my own personal experience, I actually know other people who have "soulmates." And maybe it's just an extension of "best friend," but I, too, feel that I have a soulmate of the opposite gender. Completely platonic, but we're still the best of friends. It can happen. Or maybe I'm just naive. I accept the ignorance of my youth.

"The idea is to write it so that people hear it and it slides through the brain and goes straight to the heart." -- Maya Angelou


Talia

  • Moderator
  • *****
  • Posts: 2682
  • Muahahahaha
Reply #42 on: September 11, 2011, 12:27:34 AM
Fair enough, just  my perception then. :)



Yargling

  • Peltast
  • ***
  • Posts: 139
Reply #43 on: September 12, 2011, 07:59:17 AM
I enjoyed this story; it was pleasant, intelligent, and spoke to the heart of damaged people, giving some hope that all can be well. About my only fault with this story is that not much really happens, but that's in keeping with the pace of the story. It made a nice break from the darker stories ("Kill Me" comes to mind, though admittedly that's the next episode).



LaShawn

  • Lochage
  • *****
  • Posts: 550
  • Writer Mommies Rule!
    • The Cafe in the Woods
Reply #44 on: September 12, 2011, 04:50:53 PM
This story was okay. I'd probably appreciate it better in print form so I could skip through it--there were several philosophizing passages that had my eyes glazing over (good thing I wasn't operating heavy machinery or anything). But I liked how the ending was light and happy. Just what I needed.

--
Visit LaShawn at The Cafe in the Woods:
http://tbonecafe.wordpress.com
Another writer's antiblog: In Touch With Yours Truly


Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8729
    • Diabolical Plots
Reply #45 on: September 12, 2011, 05:01:47 PM
Fair enough, just  my perception then. :)

For what it's worth, I have the same perception.  I think I've always heard soulmates used in a romantic sense.  When Moz asked if he and Gary were soulmates, I thought Gary was going to respond based on this usage, saying that he didn't love Moz "that way" or something.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8729
    • Diabolical Plots
Reply #46 on: September 12, 2011, 05:22:47 PM
Unless that programming is totally adaptive and can completely override or change any part of itself. In which case one could argue it is overcoming its initial coding. Is this comparable to a teenager overcoming the programming of his/her parents in a rebellious phase?

Or perhaps there is a logical or tautological flaw in the initial rules it uses that the robot learns to exploit.
Yeah, what he said. I really thought AI was supposed to overcome its initial programming. I thought that was the whole theory behind what artificial intelligence means. Again, layman here, so don't hate.
I think you're right. And you're wrong. I think it's what AI means but not necessarily what people have in mind for it. Years ago I was involved in closing a large institution for adults with intellectual disabilities. The process of moving them to ordinary housing was called 'normalisation'. While that meant having a nice telly and a garden, everyone was pretty keen; but once people started wanting to get married and - hell's teeth - have babies, suddenly it was a different sort of normal people wanted for them. I'm betting we'll do the same with our AIs when they get round to asking us politely to move over to the passenger seat.
BTW, got over my C3PO fixation when Asimov's Laws of Robotics made an appearance. That was exactly the frame I had this in. A little stilted - like Asimov himself - and rather educational (ditto), but getting alongside a decent point by the end.

(If you don't feel like listening to more geekspeak, you may as well skip this post.  ;)   Also, I mean this post to be along the lines of friendly discussion.  I know I ramble on, but I am not the ultimate authority on this--I'm not saying that others are wrong)

I'd still argue that a program meant to self-modify is fulfilling its code, not overcoming it, but I guess that's just a question of semantics.  In practice, I think a working AI program would have a fixed but very flexible code structure, plus a data module it could update in order to alter its own methods -- something like the sketch below.
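
Just to make that concrete, here's a toy sketch in Python (purely my own illustration -- the class, the situations, and the behaviors are all made up, and it's nowhere near what a real robot would run).  The act() loop is fixed code, but the "policy" it consults is just data, and the same fixed code includes a hook that lets that data be rewritten:

[code]
from typing import Callable, Dict

class Robot:
    def __init__(self) -> None:
        # Mutable "data module": maps a situation to a behavior.
        self.policy: Dict[str, Callable[[], str]] = {
            "human_gives_order": lambda: "obey",
            "human_in_danger": lambda: "protect the human",
        }

    def learn(self, situation: str, behavior: Callable[[], str]) -> None:
        # The self-modification happens here, but only because this hook
        # was written into the fixed code in the first place.
        self.policy[situation] = behavior

    def act(self, situation: str) -> str:
        # Fixed control loop: look up the current policy and follow it.
        return self.policy.get(situation, lambda: "stand idle")()

moz = Robot()
print(moz.act("human_gives_order"))   # obey
moz.learn("human_gives_order", lambda: "politely decline")
print(moz.act("human_gives_order"))   # politely decline
[/code]

Whether that second answer counts as "overcoming his programming" or just exercising it is exactly the semantic question I mean.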

One of the troubles I have with "overcoming your programming" is that the tone of the phrase seems to imply a positive outcome, like a friend breaking free of shackles.  Moz supposedly overcame his programming to save Gary, so hooray, Gary is alive.  If he can just arbitrarily ignore his programming, he could also kill Gary, but then I think people would not say "Moz overcame his programming," they would be talking about the robot going berserk, and probably worry about a robot revolution.  Or, overcoming your programming could just be called a glitch-- if Moz stopped speaking and spent every minute of every day performing the chicken dance and imitating kazoo sounds, that could be called overcoming his programming.

In science fiction, the concept of AI is somewhat well defined, being an artificial being that can reason at a level equal to humans.  But in practice, the definition is much looser, and tends to shift with technology.  AI tends to be the label applied to cutting-edge computer learning algorithms that would almost certainly NOT be called AI in an SF story, and probably won't be called AI in a few years when they become obsolete.  Take gaming AI, for instance.  The learning system used by the monsters in the PC game "Black and White" is extremely cool.  Your creature can learn to water crops, destroy houses, or poop on villagers, all based on your interactions with it.  This was and still is a pretty cool use of AI, but in an SF sense, it's not really AI because it can only examine and reason about those things that were included in the world the creature was designed for.  There are no true unknowns to it.
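
For flavor, a really stripped-down version of that kind of feedback learning might look like the Python toy below (my own illustration, not the game's actual algorithm).  The creature only weights actions from the menu it was handed, which is exactly the "no true unknowns" limitation:

[code]
import random

class Creature:
    """Toy trainer-feedback learner: weights over a fixed menu of actions."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Pick an action with probability proportional to its current weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, weight in self.weights.items():
            r -= weight
            if r <= 0:
                return action
        return action  # fallback for float rounding

    def feedback(self, action, reward):
        # Petting (positive reward) raises a weight; slapping lowers it.
        self.weights[action] = max(0.1, self.weights[action] + reward)

pet = Creature(["water crops", "destroy houses", "poop on villagers"])
for _ in range(200):
    a = pet.choose()
    pet.feedback(a, +0.5 if a == "water crops" else -0.5)
print(max(pet.weights, key=pet.weights.get))  # almost always "water crops"
[/code]

"Learning" here is just shuffling numbers over a closed list; the creature can never invent an action nobody designed for it.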

And, on a bit of a tangent, one theoretical measure of machine intelligence is the Turing Test: http://en.wikipedia.org/wiki/Turing_test
In it, intelligence is determined by whether an AI can converse with a human in text form without the human being able to tell whether it is talking to a human or an AI.  This test is terribly flawed, though, because it's not measuring how intelligent the AI is, but how "human-like" it is.  This has always bugged me for three main reasons:
1.  Even before AI, machines just plain have faster processing for straightforward calculations.  Ask a computer what the square root of 11134.1 is, and it will respond in an eyeblink with quite good precision.  Ask a human that, and almost everyone would have to do some calculations, and might never come up with an exact answer.  For the machine to behave like a human, it would have to act LESS intelligent than it really is, and either insert extra delays or simply say "I don't know" even though it could make the calculation very quickly.  (A toy sketch of this follows the list.)
2.  Humans often respond with emotion.  Try to provoke a human to anger, and with most people you could find a way by insulting them or the ones they love, maybe making offensive comments about religion or politics.  For an AI to be indistinguishable, it would have to simulate emotional reactions that would by necessity be less rational.  Emotions are an important part of human life, but it is possible for a machine to be intelligent without exhibiting emotions that alter its reasoning.  Really, one could easily argue that emotions can temporarily lower one's intelligence (if you're in a temper or if you're horny, you're probably not thinking straight).
3.  To be completely human-like, it would have to be capable of lying -- both in the sense that humans may lie when asked uncomfortable questions, and in the sense that if you ask it "Are you human?" it has to lie and say "yes" (well, at least some of the time, since I suppose a human could say no).  Although intelligent beings often lie, I don't think that deception is a good metric of intelligence.
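
And here's the toy sketch I promised for point 1 (again, just my own illustration -- the function names and numbers are invented).  The same machine answering honestly versus answering "like a human" has to throw away speed and precision on purpose:

[code]
import math
import random
import time

def machine_answer(x: float) -> str:
    # The machine's natural answer: instant and precise.
    return f"The square root of {x} is {math.sqrt(x):.6f}."

def human_impersonation(x: float) -> str:
    # To pass as human, the same machine must act LESS capable:
    # pause as if working it out, then answer vaguely or give up.
    time.sleep(random.uniform(3, 10))  # fake "thinking" time
    if random.random() < 0.5:
        return "No idea, somewhere over a hundred?"
    return f"Roughly {round(math.sqrt(x))}, I think."

print(machine_answer(11134.1))       # immediate, about 105.5182
print(human_impersonation(11134.1))  # slow, vague, deliberately dumbed down
[/code]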


« Last Edit: September 12, 2011, 05:32:41 PM by Unblinking »



Gamercow

  • Hipparch
  • ******
  • Posts: 654
Reply #47 on: September 13, 2011, 07:20:42 PM
Not to get TOO far off the subject, but we're pretty far away from conversational AI right now.
http://www.youtube.com/watch?v=WnzlbyTZsQY&feature=player_embedded

The cow says "Mooooooooo"


NomadicScribe

  • Palmer
  • **
  • Posts: 53
Reply #48 on: September 14, 2011, 02:01:42 PM
Not to get TOO far off the subject, but we're pretty far away from conversational AI right now.
http://www.youtube.com/watch?v=WnzlbyTZsQY&feature=player_embedded

Bad example. He clearly says he is a unicorn. And we all know that machines don't lie.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8729
    • Diabolical Plots
Reply #49 on: September 14, 2011, 05:12:17 PM
Not to get TOO far off the subject, but we're pretty far away from conversational AI right now.
http://www.youtube.com/watch?v=WnzlbyTZsQY&feature=player_embedded

Bad example. He clearly says he is a unicorn. And we all know that machines don't lie.

What about the Oracle?