Author Topic: EP428: Paradise Left  (Read 12240 times)

Fenrix

  • Curmudgeonly Co-Editor of PseudoPod
  • Editor
  • *****
  • Posts: 3996
  • I always lock the door when I creep by daylight.
Reply #25 on: January 13, 2014, 08:57:52 PM
This is probably the most chilling story I've heard on EscapePod this year. Nice and compact and quite effective at delivering the multi-layered and reinforced message.

It's interesting to see how many folks in this thread are perfectly willing to welcome their new AI overlords. I just don't think they've found their line yet. The alcohol restriction was a great line to draw, as it personalizes the whole thing; it's also relevant, and it's encroaching on the lives of ordinary folks.

Now, let's talk about one of the CDC's pushes for January 2014: getting alcohol use included as a mandatory part of health screenings. Here's one of the definitions of the box they want you to fit in: "If you do choose to drink, do so in moderation. This is defined as up to 1 drink a day for women or 2 for men." source The next step the story takes is to have these government regulations enforced, for our own good, by a benevolent AI.

Need more examples? How about the action in NYC to eliminate e-cigarettes because they look like smoking? Or placing a size restriction on the soda that you can purchase? Who programs the standards? Who draws the line? Where's your line? How loose do you want your collar to be and how long your leash?


All cat stories start with this statement: “My mother, who was the first cat, told me this...”


Windup

  • Hipparch
  • ******
  • Posts: 1226
Reply #26 on: January 14, 2014, 01:41:12 AM

I love when this question comes up and we're supposed to respond, "challenges and stresses are what make us human -- they're what make life worth living," but it's all garbage. No condition is permanent, no satisfaction complete. Frankly, a world where people's base needs are met and governments and companies don't ruin lives by the millions for the sake of overblown 6th-grader-type conflicts would be worth the servitude to the robots or aliens in charge. You're going to serve someone; it might as well be someone not actively seeking to harm you.


Matt, I'm curious about the contrast between the above statement and your position on the character who had the part of her brain that processes emotional pain deadened in "Loss, with Chalk Diagrams." You came down rather hard on the character for shutting herself off from the painful emotional experiences of being human. However, here you seem to be saying that the AI's removing the external causes of painful experiences is OK, or even desirable.

Is it that one is internal and one is external?  Or are you looking at it through some completely different lens?


"My whole job is in the space between 'should be' and 'is.' It's a big space."


matweller

  • EA Staff
  • *****
  • Posts: 678
Reply #27 on: January 14, 2014, 03:28:40 PM
In my opinion, we are the sum of our experiences and how we process them. It's what creates our souls. It's why the transporter didn't work in The Fly -- there's a difference between the existence of a steak and the flavor. Therefore, intentionally shutting oneself off from the world in any way is a hampering of that experience; a stunting of the soul. Of course, some people are hampered by the loss of a limb or mental difficulties, but those people at least try to compensate for the disability in some way, thereby building their experience. What was proposed in that story that I was against was the idea that people would somehow be better off by eliminating their ability to have any bad feelings. I might concur that there are memories that hurt more than help, and that being able to get rid of a specific memory might lead to a better end result than a life lived hampered by that hurt. But to remove the ability to hurt altogether? That's not the answer.

That's very different from whether or not the world at large is a more or less oppressive place. Would you trade places with someone who was born and died a slave? I mean, according to Maslow's theory, we don't get to enjoy the privileges of the higher functions of human existence until the baser levels are settled. Obviously, Maslow was a bit oversimplified, but I think he got it right at the core. And I think the natural extension of that is that if we consider all life to be valuable, then the very definition of "higher existence" would be to elevate as many lives as possible as far up the pyramid as possible, thereby freeing them to have the most experience possible. The robot-caretaker world posited in this story wouldn't end negative experience. People would still fall and get scrapes; death would not be eradicated; disappointments would still occur; experience would still happen.

Did it seem a little too utopian? Yes, the sweet spot is probably somewhere in the middle. I'm only saying this: Draw a number line with zero in the middle. Put current, real life {that hampers our evolution by bogging down a significant percentage of our population with base-level struggles} on the left side somewhere. Put robot-caretaker world {that hampers evolution because it's too cushy and doesn't allow us to bleed enough to learn} the same distance from zero on the right. You can see that the absolute value of both is the same, but in robot-caretaker world we get to evolve at the same pace, and nobody lives in a dumpster. The choice seems clear, right?

I don't see the two thoughts as being at odds. If anything, I see the latter as being an affirmation of the former. Let's get everybody out of the gutter so that we can all have more experiences and learn from the good and the bad, so that our species can attain the next level.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8729
    • Diabolical Plots
Reply #28 on: January 14, 2014, 04:50:40 PM
I'm with Matt. I have similar opinions of both stories' premises, and he summed it up well.



albionmoonlight

  • Matross
  • ****
  • Posts: 213
Reply #29 on: January 14, 2014, 09:08:59 PM
I remember hearing a thought experiment once (heck, it may have even been an outro from an EA podcast) where the question was whether you would want a world where poverty, hunger, etc. have been eliminated, but the trade-off is that the entire world looks like Southern California suburbs. No variety--just endless suburban tracts, forever and everywhere.

For most of the people reading this, I imagine you see what would be lost in that world--the vast majority of human culture.  And you may seriously debate, as I did, whether you would make those tradeoffs.

The point of the thought experiment, though, was to then say "Ok, now look at that question from the perspective of a child soldier in Africa.  Or a homeless woman with pneumonia on the streets of Atlanta.  Or someone living in a slum in Bangladesh."  Basically, to realize that something that I thought of as a real hard choice--eliminate poverty at the expense of homogenizing culture--would not be a hard choice at all for 90%+ of the human population.



Myrealana

  • Peltast
  • ***
  • Posts: 107
    • Bad Foodie
Reply #30 on: January 16, 2014, 04:28:07 PM
Do I have that much self-determination in my life right now?

Well, I suppose I could run out into traffic or drink myself silly. I could walk away from my job, my family, my security and wander the Earth like Cain.

But I'm not going to. What I really have is an illusion of choice. I don't want to go hungry. I don't want to die. I don't want to end up in prison. So, I conform.

Is Rob's existence really all that different? Except that, for all my conformity and effort, the chance exists that an outside force could pull all my security out from under me. All it would take is one tragic event - untimely death, house fire, cancer diagnosis - and everything I've built could fall apart.

I'd give up some of the illusion of freedom for the assurance that nothing can take away my lifestyle.

"You don't fix faith. Faith fixes you." - Shepherd Book


hardware

  • Matross
  • ****
  • Posts: 192
Reply #31 on: May 16, 2014, 05:04:12 PM
Fun story. I liked how it wasn't really judging any of the characters, including the AI. In a way, all three were kind of acting according to their programming, whether in computer code or social conditioning. It's an interesting question - to what extent will the initial AIs have 'free will', and how much will they still be determined by (and frustrated by) their programming? I doubt they will be a 'blank slate' - after all, no human is.