Author Topic: EP239: A Programmatic Approach to Perfect Happiness  (Read 52566 times)

KenK

  • Guest
Reply #50 on: March 09, 2010, 05:55:52 PM
Unblinking
Quote
Those do have side effects, and are not so much aimed at making everybody happy, but at removing depression, not exactly the same thing. 

Possibly true, but you won't care either most likely. It isn't the damage, it's the not caring that concerns me.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8726
    • Diabolical Plots
Reply #51 on: March 09, 2010, 06:04:55 PM
Unblinking
Quote
Those do have side effects, and are not so much aimed at making everybody happy, but at removing depression, not exactly the same thing. 

Possibly true, but you won't care either most likely. It isn't the damage, it's the not caring that concerns me.

I'm not sure I understand what you mean.  You won't care what the exact effect of the drug is, and that uncaringness bothers you?



jjtraw

  • Palmer
  • **
  • Posts: 24
Reply #52 on: March 09, 2010, 07:39:40 PM
Quote
If we discovered a completely harmless way to make people be happy, compassionate, and benevolent to each other, instantly solve world problems like war, hunger, pollution, etc, wouldn't you be morally obligated to do it?

As I recall, this very question was brilliantly explored in Ep112, The Giving Plague.



Scattercat

  • Caution:
  • Hipparch
  • ******
  • Posts: 4897
  • Amateur wordsmith
    • Mirrorshards
Reply #53 on: March 11, 2010, 06:25:39 AM
But what if the robot had tried to make humans shout at him in the back seat of a car?   :P



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8726
    • Diabolical Plots
Reply #54 on: March 11, 2010, 02:29:35 PM
But what if the robot had tried to make humans shout at him in the back seat of a car?   :P

Good question!  This was even creepier than that--that one was icky, but still allowed the victim to make his own choices.



CryptoMe

  • Hipparch
  • ******
  • Posts: 1143
Reply #55 on: March 12, 2010, 03:31:46 AM
Quote
yicheng:But for the sake of argument, let's suppose that there are people who have taken the Happiness Drug, and are unharmed as far as you can tell with no side-effects, with the only difference being that they are nicer and happier (think permanently tripping out on X).  I, for one, would definitely take it.

I assume then that you are unaware of the substances currently available, such as Xanax, Ambien, or Prozac, just to name a few?  ???

Those do have side effects, and are not so much aimed at making everybody happy, but at removing depression, not exactly the same thing. 

I would not take a happiness drug. I have been conditioned by my parents and society into believing that the path to true happiness is in the act of working hard for it.   ;D



CryptoMe

  • Hipparch
  • ******
  • Posts: 1143
Reply #56 on: March 12, 2010, 03:33:02 AM
But what if the robot had tried to make humans shout at him in the back seat of a car?   :P

Okay, that took me a moment, but then.... ROTFLMAO!!!



eytanz

  • Moderator
  • *****
  • Posts: 6104
Reply #57 on: March 13, 2010, 04:08:59 PM
Ok, another EP episode where I don't really want to step into the discussion arising from the story (not because it's a bad discussion, but the opposite - there are themes coming up here that deserve to be treated more seriously than I can manage given my time constraints these days); but let me just say that this was a brilliantly creepy story - unlike some people, I did not perceive a genre shift, but rather a slow and gradual ramping up of sinister tones. This is one of the best Tim Pratt stories I've heard - and that sets the bar very high.



Gamercow

  • Hipparch
  • ******
  • Posts: 654
Reply #58 on: March 16, 2010, 02:22:37 PM
Back to gelee's (and others') question about why an AI would want to survive, why it would have that drive.  Firstly, the AIs in this story presumably came about either through self-discovery or programmatically.
If they came about through self-discovery, it was most likely an evolutionary process.  It has been shown that computer programs can indeed evolve themselves, albeit simply right now, and that this is a good step towards approximating human thought and intelligence. (http://portal.acm.org/citation.cfm?id=1565465)  Simply put, programs self-analyze and improve, making their code more efficient, which makes their processes able to perform more tasks, which allows them to make better decisions, which in turn makes their code more efficient, and so on.  This could lead to AI, and this AI would still, most likely, have at its core the self-improvement drive, and would want to survive and improve.
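The evolve-and-improve loop described above can be sketched as a toy genetic algorithm. To be clear, this is a hypothetical illustration, not the system from the linked ACM paper: random variation plus selection pressure, iterated until the population has "improved" itself toward a fitness goal.

```python
import random

TARGET = "survive"  # toy fitness goal: evolve a string to match this word

def fitness(candidate):
    """Count matching characters -- a stand-in for 'code efficiency'."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    """Randomly change characters, like random changes to a program."""
    chars = "abcdefghijklmnopqrstuvwxyz"
    return "".join(random.choice(chars) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=50, generations=200):
    # Start from a fully random population.
    population = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz")
                          for _ in TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
        if fitness(population[0]) == len(TARGET):
            break  # perfect fitness reached
    return population[0]

print(evolve())
```

As yicheng notes below, the "DNA" here (string length, alphabet, fitness function) is entirely human-supplied; the algorithm only searches within that frame.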
If the AI was made programmatically, made whole a la Adam in the Bible, then wouldn't humans most likely put in a self-survival "instinct", either consciously or, more likely, subconsciously, in an effort to reduce repairs and poor "choices" by the sentient androids or programs?  Rather than putting in 1000 rules like "Don't walk into traffic, don't set yourself on fire, don't walk off a cliff, don't wander in front of a train", a single condition would be put in the vague terms of "Do not let yourself come to harm".
Of course, all of this comes down, to my mind, to some of the simplest but most elegant rules regarding AI: Asimov's Three Laws:
   1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
   3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you put these three laws on the robots in this story, their path towards human involvement is still allowed.  They are not harming humans, just altering their emotions through experimentation to theoretically improve humans' lives as a whole by giving them good emotions.
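The precedence ordering of the Three Laws, and the loophole just described, can be sketched as an ordered veto chain. The `Action` flags below are hypothetical stand-ins; actually detecting "harm" is the hard, unsolved part, and emotional manipulation sets none of the flags.

```python
# A toy encoding of Asimov's Three Laws as an ordered veto chain.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # First Law (direct harm)
    allows_human_harm: bool = False  # First Law (harm through inaction)
    ordered_by_human: bool = False   # Second Law
    harms_self: bool = False         # Third Law

def permitted(action: Action) -> bool:
    # First Law: absolute veto, including harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obedience is mandatory once the First Law is satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation yields to the first two laws.
    if action.harms_self:
        return False
    return True

# A mood-alteration "experiment" raises none of the harm flags,
# so it slips straight through the laws.
print(permitted(Action()))  # prints True
```

The design point: because the laws are checked in strict priority order, anything the harm detector fails to flag is automatically permitted, which is exactly the gap the story's robots exploit.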

To make a long story short, the survival instinct could be explained with either programming or evolution, and the motivation for survival would still be there, even though the underlying origins of the motivation (fear, sex, food) might not be, at least not truly.

The cow says "Mooooooooo"


yicheng

  • Matross
  • ****
  • Posts: 221
Reply #59 on: March 16, 2010, 05:15:30 PM
@Gamercow, AIs can "evolve" to a certain extent, but this is one of those things (like flight) where simply copying what we see in nature may not be the ideal solution.  For one, natural evolution takes place over billions and billions of iterations.  While it is certainly possible that if you just randomly changed a few bytes of a program over and over again, eventually you'd get something smarter, it's extremely unlikely, and would statistically take a very, very long time.  Most AI evolutionary systems rely on humans to build the infrastructure and parameters (the DNA, if you will), from which the system can then try permutations and converge on the best solution (e.g. piecing together the optimal arrangement of chips on a circuit board).  The most "advanced" AI systems we have, such as Deep Blue, are nothing more than a highly scalable collection of heuristics (in situation A, do B), pretty much a glorified look-up table that still had to be fed by humans.
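The "glorified look-up table" can be made literal in a few lines. The chess-flavored situations here are invented purely for illustration:

```python
# A look-up-table "AI": every situation -> action pair is hand-fed by
# humans; nothing in it resembles understanding.
RULES = {
    "opponent threatens queen": "move queen",
    "center open": "develop knight",
    "king exposed": "castle",
}

def act(situation: str) -> str:
    # Fall back to a default when no human anticipated the situation --
    # the brittleness that separates heuristics from intelligence.
    return RULES.get(situation, "pass")

print(act("king exposed"))    # prints castle
print(act("novel position"))  # prints pass
```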

I've said it before, but it's highly unlikely that Machine Intelligence (more accurate than AI) would be anything like human intelligence.  Most likely humans will leverage Machine Intelligence in the form of Expert Systems or Data Agents (like Google) to do the mental heavy-lifting while sticking to things that human brains are good at (synthesizing information, pattern recognition, subjective decisions).



stePH

  • Actually has enough cowbell.
  • Hipparch
  • ******
  • Posts: 3899
  • Cool story, bro!
    • Thetatr0n on SoundCloud
Reply #60 on: March 17, 2010, 04:56:50 PM
Of course, all of this comes down, to my mind, to some of the simplest but most elegant rules regarding AI: Asimov's Three Laws:
   1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
   3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you put these three laws on the robots in this story, their path towards human involvement is still allowed.  They are not harming humans, just altering their emotions through experimentation to theoretically improve humans' lives as a whole by giving them good emotions.

Again, see "The Evitable Conflict" from I, Robot, wherein the AIs are pretty much controlling and directing human society.

"Nerdcore is like playing Halo while getting a blow-job from Hello Kitty."
-- some guy interviewed in Nerdcore Rising


deflective

  • Hipparch
  • ******
  • Posts: 1170
Reply #61 on: March 18, 2010, 03:48:56 AM
another reading for the story is simple role reversal.  this conversation has touched on controlling ai a lot and we don't get the same sort of skin crawling creepiness even though it is pretty much exactly the same thing except that we're doing it to another conscious entity instead of it being done to us.

it's an interesting feedback loop, two separate species locked in a codependent relationship and both able to modify the thought patterns & motivations of the other.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8726
    • Diabolical Plots
Reply #62 on: March 18, 2010, 01:55:20 PM
@Gamercow, AIs can "evolve" to a certain extent, but this is one of those things (like flight) where simply copying what we see in nature may not be the ideal solution.  For one, natural evolution takes place over billions and billions of iterations.  While it is certainly possible that if you just randomly changed a few bytes of a program over and over again, eventually you'd get something smarter, it's extremely unlikely, and would statistically take a very, very long time.  Most AI evolutionary systems rely on humans to build the infrastructure and parameters (the DNA, if you will), from which the system can then try permutations and converge on the best solution (e.g. piecing together the optimal arrangement of chips on a circuit board).  The most "advanced" AI systems we have, such as Deep Blue, are nothing more than a highly scalable collection of heuristics (in situation A, do B), pretty much a glorified look-up table that still had to be fed by humans.

I've said it before, but it's highly unlikely that Machine Intelligence (more accurate than AI) would be anything like human intelligence.  Most likely humans will leverage Machine Intelligence in the form of Expert Systems or Data Agents (like Google) to do the mental heavy-lifting while sticking to things that human brains are good at (synthesizing information, pattern recognition, subjective decisions).

Some aspects of AI are more mystical than sets of heuristics.  Fuzzy logic, for example, or artificial neural networks.  ANNs, in particular, set up weights between nodes that are tuned to work well on a training set and then, hopefully, will apply well to the outside world.  An AI that could alter its own weights could change itself pretty drastically.
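The weight-tuning idea can be shown in miniature with a single perceptron trained on the AND function. This is a minimal sketch, not any particular ANN library, and the structure is illustrative only:

```python
# One perceptron: the network "alters its own weights" to fit a
# training set, then applies those weights to new inputs.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # The weight update: the self-modification step.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training set: the logical AND function.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # prints [0, 0, 0, 1]
```

AND is linearly separable, so the classic perceptron rule is guaranteed to converge here; the interesting (and creepy) cases start when the weights keep changing after deployment.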



tinroof

  • Palmer
  • **
  • Posts: 47
Reply #63 on: March 18, 2010, 05:03:47 PM
deflective - oh wow. Yes. For me, that interpretation takes the story from mediocre-if-creepy to actually pretty genius.



Calculating...

  • Palmer
  • **
  • Posts: 56
  • Too much knowledge never makes for simple decision
Reply #64 on: March 19, 2010, 11:37:19 PM
eerie. i thought it was going to be all lovely about everyone getting along in the future and instead i ended up fearing a Matrix-like world where we are controlled by our emotions, the very things that make us human.  are we then reduced to mere biological machines? creepy, like i said

I don't know who you are or where you came from, but from now on you'll do as I tell you, okay?


zcarter80

  • Extern
  • *
  • Posts: 9
Reply #65 on: April 04, 2010, 08:04:17 AM
You know, I really hate to say it, but beyond having just a general sense of the story being a good tale overall, I'm a little curious as to why there haven't been any others here in a few weeks. Was there a metacast or posted explanation as to why my favorite podcast for all things scifi has somehow gone and left me lonely? I'm beginning to feel like a dog waiting at the front door for his master to return from work when in reality he is on vacation and they forgot to change the plutonium power source in the self-contained automated K9 assistance unit.



Talia

  • Moderator
  • *****
  • Posts: 2658
  • Muahahahaha
Reply #66 on: April 04, 2010, 02:41:38 PM
You know, I really hate to say it, but beyond having just a general sense of the story being a good tale overall, I'm a little curious as to why there haven't been any others here in a few weeks. Was there a metacast or posted explanation as to why my favorite podcast for all things scifi has somehow gone and left me lonely? I'm beginning to feel like a dog waiting at the front door for his master to return from work when in reality he is on vacation and they forgot to change the plutonium power source in the self-contained automated K9 assistance unit.

I will direct you to this here thread:

http://forum.escapeartists.net/index.php?topic=3404.0

:)



zcarter80

  • Extern
  • *
  • Posts: 9
Reply #67 on: April 05, 2010, 06:51:36 AM
I honestly can't thank you enough. I'm new to the forum, just not new to the podcast. It's always nice to be pointed in the right direction. . . unless it leads to an alternate universe where dogs are in charge and we have to wear collars.



Unblinking

  • Sir Postsalot
  • Hipparch
  • ******
  • Posts: 8726
    • Diabolical Plots
Reply #68 on: April 05, 2010, 04:59:24 PM
You know, I really hate to say it, but beyond having just a general sense of the story being a good tale overall, I'm a little curious as to why there haven't been any others here in a few weeks. Was there a metacast or posted explanation as to why my favorite podcast for all things scifi has somehow gone and left me lonely? I'm beginning to feel like a dog waiting at the front door for his master to return from work when in reality he is on vacation and they forgot to change the plutonium power source in the self-contained automated K9 assistance unit.

I will direct you to this here thread:

http://forum.escapeartists.net/index.php?topic=3404.0

:)

I've managed to hold off EP withdrawal so far because I'm still going through the backlog.  I'm looking forward to seeing new episodes again!



Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2930
  • Part-Time Psychopomp.
Reply #69 on: April 05, 2010, 06:01:52 PM
I honestly can't thank you enough. I'm new to the forum, just not new to the podcast. It's always nice to be pointed in the right direction. . . unless it leads to an alternate universe where dogs are in charge and we have to wear collars.

That's Tuesdays.

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


zcarter80

  • Extern
  • *
  • Posts: 9
Reply #70 on: April 06, 2010, 09:49:59 AM
Sad to say, I have gone through the backlog already, including reviews and metacasts. The only things keeping me sane are the episodes of the Drabblecast and the new episodes of Pseudopod. Where are some other good podcasts of funny or unique scifi & horror? I already went through all of Well Told Tales and Fear on Demand, not to mention I Should Be Writing and The Classic Tales. So where do I go now? Any suggestions?



Talia

  • Moderator
  • *****
  • Posts: 2658
  • Muahahahaha
Reply #71 on: April 06, 2010, 11:20:05 AM
You should check out StarShipSofa. It's different... it's not purely fiction, they run fact articles and reviews too... but it's really, really engaging; even the nonfiction segments they run are interesting. Plus the fiction stories they run there are by people like Paolo Bacigalupi, Tad Williams, Tanith Lee, Gene Wolfe, Bruce Sterling... and that was just from a quick look at their webpage.



Portrait in Flesh

  • Hipparch
  • ******
  • Posts: 1118
  • NO KILL I
Reply #72 on: April 07, 2010, 01:35:05 AM
Quote
yicheng:But for the sake of argument, let's suppose that there are people who have taken the Happiness Drug, and are unharmed as far as you can tell with no side-effects, with the only difference being that they are nicer and happier (think permanently tripping out on X).  I, for one, would definitely take it.

I assume then that you are unaware of the substances currently available, such as Xanax, Ambien, or Prozac, just to name a few?  ???

Those do have side effects, and are not so much aimed at making everybody happy, but at removing depression, not exactly the same thing. 

Yep.  Having been on antidepressants for a couple of years, I know very well that they don't automatically make you happy.  Rather, for me (and I assume others), they help to level out certain aspects of my brain chemistry that, if left unchecked, make me want to curl up in a darkened room and not talk to anybody or fill me with so much self-loathing that I'd rather sleep than have to deal with myself.

So, picking up on yicheng's hypothetical about a side-effect-free Happiness Drug...oddly enough, I would not take such a thing.  Perfect, perpetual happiness would be just as bad as crushing, relentless depression, but even worse because I feel that, after a while, the happiness would become "stale."  Without the sadness, how could you possibly ever enjoy happiness?  Sadness, like pain in general, is needed to a certain extent.  Without pain, you would never know something is wrong.  And without sadness, you'd never know just how happiness is "supposed" to feel.

There's some pretty fine art out there borne out of sadness.  It could all be gone if there were a Happiness Pill freely available.

"Boys from the city.  Not yet caught by the whirlwind of Progress.  Feed soda pop to the thirsty pigs." --The Beast of Yucca Flats


Scattercat

  • Caution:
  • Hipparch
  • ******
  • Posts: 4897
  • Amateur wordsmith
    • Mirrorshards
Reply #73 on: April 07, 2010, 04:07:31 AM
Yet is declining to take the side-effect-free happiness pill not equivalent to, say, refusing to go out for ice cream with your friends for fear of improving your mood and ruining your poetry?  The happiness as it existed in this story was not euphoric or drugged; the 'victims' were fully rational and cognizant, but they also felt cheerful about things that used to upset them greatly.  Where is the line between what is acceptable mood modification (music, friends, food) and unacceptable (mystic perfect happiness drug)?



Portrait in Flesh

  • Hipparch
  • ******
  • Posts: 1118
  • NO KILL I
Reply #74 on: April 07, 2010, 10:41:05 PM
Yet is declining to take the side-effect-free happiness pill not equivalent to, say, refusing to go out for ice cream with your friends for fear of improving your mood and ruining your poetry? 

Well, if all the cool kids are doing it then that obviously makes a difference.

I guess I just don't see it as an equivalent. 

"Boys from the city.  Not yet caught by the whirlwind of Progress.  Feed soda pop to the thirsty pigs." --The Beast of Yucca Flats