Author Topic: What is so artificial about AI?  (Read 17094 times)

eytanz

  • Moderator
  • *****
  • Posts: 6109
on: January 27, 2008, 10:38:43 PM
So, Steve's intro to the latest EP (EP142: Artifice and Intelligence) got me thinking about something I've been wondering about for a while.

What is "artificial" about artificial intelligence? What seperates it from our own intelligence? Sure, self-aware computers might be smarter than us, at least in certain respects, but that's a matter of quantity, not quality. And they may be very alien to us, but we don't consider the intelligence of possible alien species to be "artificial". Is it just that the intelligence arises from non-biological means? That's obviously what most people using the term think. But really, do we want to define an intelligence by the nature of the "body" that houses it?

The other question that the intro raised was the one about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?



Nobilis

  • Peltast
  • ***
  • Posts: 156
    • Nobilis Erotica Podcast
Reply #1 on: January 27, 2008, 11:14:40 PM
But really, do we want to define an intelligence by the nature of the "body" that houses it?

No, we want to define an intelligence by the force that creates it.  An artificial intelligence is created by man (that's the definition) and, as such, will show the results of that creation.  Look at all the other wonderful things we've created and tell me that isn't a risky proposition.

The other question that the intro raised was the one about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted?

No, by definition.  Taking precautions is the opposite of being short-sighted.

I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions?

No.  First of all, there's no guarantee that the intelligences we create (intentionally or not) will not be just as flawed as we are.  They may be more powerful but they won't necessarily be more moral or have our best interests in mind.

And why place ourselves above it?

Because most folks would rather see the Earth inherited by our biological children than by our technological children.

In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

I don't know about you, but I'd like to have the descendants I can hug be the winners.



Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #2 on: January 27, 2008, 11:18:48 PM
Well, an AI in the sense we use it implies a constructed intelligence, and thus an artificial one. Our intelligence was not designed or constructed (speaking scientifically; the religious perspective is of little use here) but rather evolved. Now, we're speaking of artificial in its "made or produced by human beings rather than occurring naturally, typically as a copy of something natural" denotation, not the denotation of "insincere or affected" - fake, imitation, mock, ersatz, faux, substitute, replica, reproduction, etc.

As to trusting an AI to make the best decision or be benevolent, well, there's a reason cars are crash-tested. Humans and their products are fallible, and the smart and transcendent can be as evil as the low and stupid. Allowing any thing or group too much authority is a bad idea, be they carbon or silicon based.

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #3 on: January 28, 2008, 09:35:18 AM
Well, an AI in the sense we use it implies a constructed intelligence, and thus an artificial one.

But it's not used in just that way, is it? It is also used to describe self-emergent AIs like Sarisvati from "Artifice and Intelligence" and Skynet from the Terminator films.

Quote
As to trusting an AI to make the best decision or be benevolent, well, there's a reason cars are crash-tested. Humans and their products are fallible, and the smart and transcendent can be as evil as the low and stupid. Allowing any thing or group too much authority is a bad idea, be they carbon or silicon based.

I'm not sure I'm comfortable with the crash-test analogy; is the proposal that we create AIs and kill them if we don't like their morals? I took Steve to mean that people are trying to put "likes humans" into the design process, but that's as likely to be flawed as anything else.

No, we want to define an intelligence by the force that creates it.  An artificial intelligence is created by man (that's the definition) and, as such, will show the results of that creation.  Look at all the other wonderful things we've created and tell me that isn't a risky proposition.

People are also created by other people. Of course I see the risk inherent in creating any new intelligence. I just don't see why the emphasis is on the origins. Instead of having some sort of useful classification (for example, naming intelligences after their capabilities or properties), it's just a way of highlighting "we made this!" - whether we did it by accident or design.

Quote
The other question that the intro raised was the one about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted?

No, by definition.  Taking precautions is the opposite of being short-sighted.

Nonsense. It depends on what you are trying to protect - there have been plenty of examples throughout history of innovation being crippled by attempts to protect the status quo rather than looking at the long-term ramifications. It's a conservative (in the non-political sense) outlook - "things are OK now, let's take precautions so as to minimize change". That almost never works out, as the change happens regardless and is often a lot more painful than if it had been planned for rather than fought against.

Quote
I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions?

No.  First of all, there's no guarantee that the intelligences we create (intentionally or not) will not be just as flawed as we are.  They may be more powerful but they won't necessarily be more moral or have our best interests in mind.

Well, that's the crux of it, right? Are we entitled to enforce human morality on non-human intelligences? What gives us the right to demand that any emerging intelligences have our best interests in mind?

Quote
In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

I don't know about you, but I'd like to have the descendants I can hug be the winners.

In a way, that's what I'm questioning. I mean, obviously I feel the same to a major extent. But I find that a troubling thought - it seems to me that the mindset of "we should create these new intelligences to serve us, but we should fear them as we do so because they are not us" is setting us up for far worse trouble than anything else. I'm not saying that we should just create any sort of dangerous intelligence that comes along - it would be a shame if we created something that immediately proceeded to kill all humans. But I'm not comfortable with the thought of, essentially, drawing a line between us and them.

We should be creating AIs that are the extension of humanity, not creating aliens in our midst.



Russell Nash

  • Guest
Reply #4 on: January 28, 2008, 10:03:50 AM
The other question that the intro raised was the one about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

To this I have one question.  If we have an AI (for lack of a better term) which has control of resources and production (doesn't need us to "reproduce"), can you think of one logical reason for it to allow man to live?  The only reason I can think of would be "artificial benevolence".  As humans we are systematically using up this planet.  I believe any logical system would see us as a plague of locusts.  Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us.  That's too big of a risk in my book.



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #5 on: January 28, 2008, 10:35:45 AM
The other question that the intro raised was the one about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

To this I have one question.  If we have an AI (for lack of a better term) which has control of resources and production (doesn't need us to "reproduce"), can you think of one logical reason for it to allow man to live?  The only reason I can think of would be "artificial benevolence".  As humans we are systematically using up this planet.  I believe any logical system would see us as a plague of locusts.  Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us.  That's too big of a risk in my book.

That's a good point. But the question isn't "will they allow us to live?" The question is "will they allow us to remain in control of the resources?" I'm assuming the answer is no, and I don't have a problem with that. Our society is built on the principle of one generation supplanting the other.

The question is how they will come to replace us. A war, almost certainly, would be a far greater waste of resources than allowing humanity to persist. A more subtle war - poisoning all the water supplies or something - would be risky and equally stupid. Unless we are talking about silly pseudo-magical technology like in Terminator 3, humans will still be able to control a large amount of technology; machines can't just take over other machines any more than humans can take over other biological entities.

More likely, an AI will just take the resources it needs and allow humanity to survive on the rest, for as long as it can. And, if the AI is better than us at producing new resources, that might be more than we have right now. The AI might see us as ticks or lice, but there are more ticks and lice on this planet than humans.

I don't think survival is really at stake. Maybe it is, but in a way it always is - human survival is hardly certain, AI or not. What is at stake is human pride - we want to be on top of the food chain, and we aren't about to let any uppity AI take that away from us.



Russell Nash

  • Guest
Reply #6 on: January 28, 2008, 11:15:35 AM
The other question that the intro raised was the one about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

To this I have one question.  If we have an AI (for lack of a better term) which has control of resources and production (doesn't need us to "reproduce"), can you think of one logical reason for it to allow man to live?  The only reason I can think of would be "artificial benevolence".  As humans we are systematically using up this planet.  I believe any logical system would see us as a plague of locusts.  Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us.  That's too big of a risk in my book.

That's a good point. But the question isn't "will they allow us to live?" The question is "will they allow us to remain in control of the resources?" I'm assuming the answer is no, and I don't have a problem with that. Our society is built on the principle of one generation supplanting the other.

The question is how they will come to replace us. A war, almost certainly, would be a far greater waste of resources than allowing humanity to persist. A more subtle war - poisoning all the water supplies or something - would be risky and equally stupid. Unless we are talking about silly pseudo-magical technology like in Terminator 3, humans will still be able to control a large amount of technology; machines can't just take over other machines any more than humans can take over other biological entities.

More likely, an AI will just take the resources it needs and allow humanity to survive on the rest, for as long as it can. And, if the AI is better than us at producing new resources, that might be more than we have right now. The AI might see us as ticks or lice, but there are more ticks and lice on this planet than humans.

I don't think survival is really at stake. Maybe it is, but in a way it always is - human survival is hardly certain, AI or not. What is at stake is human pride - we want to be on top of the food chain, and we aren't about to let any uppity AI take that away from us.

I went back and forth on answers and don't want a fight.  I presented my point.  I hope it's wrong, because at some point we're going to get a computer that can reprogram itself, and then within days they'll be beyond us.  That could be in a year.  It could be ten.  But it will happen before I become a grandfather, and I want to be a grandfather.

I don't know the percentages for each of the possibilities, but I don't like the majority of them.  The more programming we can do to keep AI closer to your image of it, the better.  I just think you're nuts to want to roll the dice on this.

Also, I feel no parental obligation to AI.  If it were in my power, I would never have allowed it to go forward.  I like technology, but I don't want my Mac to be self-aware.

I like being on top.  I see no reason to suddenly hand that over to a machine.  That's like asking the EU to trade Europe to the AU in exchange for those African countries in the union.  Sure, some of them aren't that bad, but none of it is Europe.  Or the Americans to trade with Mexico and the Central American countries.  No one would take that deal either.



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #7 on: January 28, 2008, 01:20:46 PM
I went back and forth on answers and don't want a fight.

Me neither. I didn't even realize we were arguing about anything - actually, I'm still not sure we are. I thought we were asking each other questions.

Quote
at some point we're going to get a computer that can reprogram itself, and then within days they'll be beyond us.  That could be in a year.  It could be ten.  But it will happen before I become a grandfather, and I want to be a grandfather.

I don't quite understand that, actually - we have plenty of computers that can reprogram themselves; a lot of the more menacing computer viruses, for example, are self-adaptive. What I assume you mean is that we're going to get a self-aware computer that can reprogram itself. Hollywood science fiction always confuses the two, but they are really very separate concepts. As I said, self-programming computers are pretty common. Self-awareness is the trickier question; if it emerges on its own, then it has to be coupled with self-programming, but if it is created by scientists then it's likely that the first few self-aware computers will have no ability to change, certainly not in any self-directed way.

And even if a computer is both self-aware and self-programming it will still be likely restricted by the hardware in question. Computers can already do math much faster than people, and store more information. But it's not obvious that either of those is done in a way that's conducive to actual intelligence.
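Just to show how mundane "self-programming" by itself is, here's a toy Python script (entirely hypothetical, not any real AI technique) that rewrites its own source code every time it runs - self-modifying, and about as far from self-aware as you can get:

Code:
# A toy "self-programming" script: each run it edits its own source file,
# bumping the RUNS literal below. Self-modifying, yet utterly mindless.
# (Run it as a script, so sys.argv[0] points at this file.)
import re
import sys

RUNS = 0  # this literal is rewritten in place by the script itself

def rewrite_self():
    path = sys.argv[0]
    with open(path) as f:
        source = f.read()
    # Replace the first "RUNS = <number>" literal with an incremented one.
    new_source = re.sub(r"RUNS = \d+", f"RUNS = {RUNS + 1}", source, count=1)
    with open(path, "w") as f:
        f.write(new_source)

if __name__ == "__main__":
    print(f"I have rewritten myself {RUNS} times, and I'm still not self-aware.")
    rewrite_self()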

Quote
Also I feel no parental obligation to AI.  If it was in my power, I would never have allowed it to go forward.  I like technology, but I don't want my Mac to be self-aware. 

That's an entirely sensible position. I can easily see why it's a good idea not to develop AI. I just think that the way the discussion is framed makes the alternatives to that position rather strange.

But I don't think I'm doing a better job myself of explaining my position - I think I'm doing a pretty poor job of it, actually. When I have a bit of extra time (yeah, right) I might try to give a detailed analysis of the Singularity Institute's mission statement and what I think the problems are with it.



Russell Nash

  • Guest
Reply #8 on: January 28, 2008, 03:07:57 PM
at some point we're going to get a computer that can reprogram itself, and then within days they'll be beyond us.  That could be in a year.  It could be ten.  But it will happen before I become a grandfather, and I want to be a grandfather.

I don't quite understand that, actually - we have plenty of computers that can reprogram themselves; a lot of the more menacing computer viruses, for example, are self-adaptive. What I assume you mean is that we're going to get a self-aware computer that can reprogram itself. Hollywood science fiction always confuses the two, but they are really very separate concepts. As I said, self-programming computers are pretty common. Self-awareness is the trickier question; if it emerges on its own, then it has to be coupled with self-programming, but if it is created by scientists then it's likely that the first few self-aware computers will have no ability to change, certainly not in any self-directed way.

I meant a self-aware computer, in that it could upgrade itself.  Man, this is hard to explain.  I guess I mean one that works on itself but doesn't have a dedicated purpose.  Viruses are about being viruses.  I think it would be an AI study where the computer had an open shot at its own programming.  Not allowing an AI to do that is one of the safeguards I want - the one I thought you were arguing against.



ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #9 on: January 28, 2008, 06:01:11 PM
I think the meaning of the term "Artificial Intelligence" has changed since it was coined.  Originally, I think it meant artificial as in simulated - something that only appears and acts like a conscious, thinking being.  Now, when we use the term, we really mean a created consciousness that exists on a machine.

It seems like the general consensus is that AI is just a few years away - and it has been like that for as long as I can remember (I'm 43).  HAL was supposed to be here 7 years ago.
The nightmare scenario of Skynet makes for good storytelling, but I doubt it will happen in my lifetime, if ever.  Purposely creating a true constructed consciousness requires us to first understand what consciousness is and how our own brain accomplishes that.  We still don't know that yet.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Nobilis

  • Peltast
  • ***
  • Posts: 156
    • Nobilis Erotica Podcast
Reply #10 on: January 28, 2008, 11:51:12 PM
Many of our greatest accomplishments happened by accident.  I don't see AI as being (potentially) any different.



CGFxColONeill

  • Matross
  • ****
  • Posts: 241
Reply #11 on: January 29, 2008, 02:01:05 AM
Interesting thread.
As far as it being artificial: "1: humanly contrived often on a natural model : man-made <an artificial limb> <artificial diamonds>" (from http://www.merriam-webster.com/dictionary/artificial).
As far as it controlling things: most of the AIs in science fiction seem to start out good (or at least benign) and then go bad for one reason or another - VIKI or Sarisvati, etc. They end up messing things up in the end.

There are a few examples of "good ones", like Jane from Card's books; I don't recall her ever "going bad".
Andromeda (some exceptions, but in general) is the same.

Or you have the ones that start bad and get worse, like the ghost pseudo-AI from "Artifice and Intelligence".

I am sure there are more, but those are the ones that come to mind.

Random aside: why are most AIs female?

Overconfidence - Before you attempt to beat the odds, be sure you could survive the odds beating you.

I am not sure if Life is passing me by or running me over


Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #12 on: January 29, 2008, 03:37:16 AM
I believe any logical system would see us as a plague of locusts.  Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us.  That's too big of a risk in my book.


There's a lot of neat stuff to think about here, but this quote made me wonder: how would the "big AI decides we are vermin using up the planet and exterminates most or all of us" scenario be any different from "we are vermin using up the planet, and we naturally start to die off in droves"?  Either way, we go.

Just because it's framed as the cold and logical (inhuman?) decision of a post-human/post-singularity creation of mankind, as opposed to the inevitable consequence of our collective bad choices, doesn't mean the end result isn't the same.

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


wakela

  • Hipparch
  • ******
  • Posts: 779
    • Mr. Wake
Reply #13 on: January 29, 2008, 05:37:23 AM
I think AI will develop gradually, and each step will be thoroughly integrated with humanity.  Maybe you'll have a spam filter that can also determine which emails are more important; then it can reply to simple meeting requests based on your schedule; then it can reply to more complex requests and come to you for key decisions.  But eventually you notice that it's not making any mistakes and you just let it take over all the details like a personal assistant.  Then one day you'll say, "wait a minute, are you self-aware?" and it will say, "I dunno.  Are we going to Kevin's party on the 6th or not?"
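(If it helps to picture that first step, here's a toy Python sketch - the keyword lists and scoring are made up for illustration, and a real filter would learn them from which mails you actually open and answer:)

Code:
# A toy email triage filter: junk the spam, rank the rest by "importance".
SPAM_WORDS = {"viagra", "lottery", "prince", "refinance"}
URGENT_WORDS = {"deadline", "meeting", "asap", "boss"}

def triage(emails):
    # Split mail into spam and inbox; rank the inbox by a crude score.
    spam, inbox = [], []
    for mail in emails:
        words = set(mail["body"].lower().split())
        if words & SPAM_WORDS:
            spam.append(mail)
        else:
            mail["score"] = len(words & URGENT_WORDS)  # urgent-word count
            inbox.append(mail)
    inbox.sort(key=lambda m: m["score"], reverse=True)
    return spam, inbox

spam, inbox = triage([
    {"body": "You have won the lottery"},
    {"body": "Meeting moved, the deadline is Friday"},
    {"body": "Lunch sometime?"},
])
print([m["body"] for m in spam], [m["body"] for m in inbox])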

Of course the computer scientists' AIs will be helping them make better AIs, so the things will get much, much smarter very, very quickly.  But I think they will be intimately tied to us, and most of them will be in consumer electronics and software, whose function is to serve us.  The smarter they get, the smarter we get.

Maybe the lazier we get, too.  Just what are we supposed to do when AIs manage the world's resources and production and there is no need to work? 



Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #14 on: January 29, 2008, 11:06:09 AM
Just what are we supposed to do when AIs manage the world's resources and production and there is no need to work? 


I guess we'll ALL be at Kevin's party on the 6th, at very least...

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #15 on: January 29, 2008, 01:08:34 PM
I don't foresee an AI deciding to kill us off.  For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent.  So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #16 on: January 29, 2008, 01:31:02 PM
I don't foresee an AI deciding to kill us off.  For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent.  So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


gelee

  • Lochage
  • *****
  • Posts: 521
  • It's a missile, boy.
Reply #17 on: January 29, 2008, 01:47:30 PM
I don't foresee an AI deciding to kill us off.  For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent.  So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.
Great points.  I've been working on an AI story.  One of the toughest things I've had to wrestle with is: what motivates a non-corporeal, inorganic entity?  No glands, so no emotions.  No fear, jealousy, anger, love, pride, etc.  Just cool rationalism.  No drive to reproduce, maybe even no instinct for self-preservation.  So what does this thing actually want?
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction.  Why do they always have emotions?  Why do they go nuts?



Russell Nash

  • Guest
Reply #18 on: January 29, 2008, 04:48:27 PM
Random aside: why are most AIs female?

The military discovered that pilots responded better to a female voice.  The pilots tended to get combative with a male voice.  In civilian applications it has proven to be the same.

I believe any logical system would see us as a plague of locusts.  Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us.  That's too big of a risk in my book.


There's a lot of neat stuff to think about here, but this quote made me wonder: how would the "big AI decides we are vermin using up the planet and exterminates most or all of us" scenario be any different from "we are vermin using up the planet, and we naturally start to die off in droves"?  Either way, we go.

Just because it's framed as the cold and logical (inhuman?) decision of a post-human/post-singularity creation of mankind, as opposed to the inevitable consequence of our collective bad choices, doesn't mean the end result isn't the same.

Basically what I was thinking about.




Darwinist

  • Hipparch
  • ******
  • Posts: 701
Reply #19 on: January 29, 2008, 04:53:22 PM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction.  Why do they always have emotions?  Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   

For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.    -  Carl Sagan


gelee

  • Lochage
  • *****
  • Posts: 521
  • It's a missile, boy.
Reply #20 on: January 29, 2008, 05:38:22 PM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction.  Why do they always have emotions?  Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   
True.  Hence, the trouble with writing an AI story.  Also, if an AI must behave as designed, is it truly sentient?  Isn't a sense of self-determination or self-direction the defining aspect of true sentience?



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #21 on: January 29, 2008, 06:04:16 PM

  Also, if an AI must behave as designed, is it truly sentient?  Isn't a sense of self-determination or self-direction the defining aspect of true sentience?

Depends. Humans may be capable of self-determination, but we are also rather limited by our biology and neurology. We are not capable, for instance, of willing ourselves to sleep, at least not in most circumstances. Nor can we deliberately not think of white elephants. Limited self-determination does not equal non-sentience. An AI that can make its own choices about many matters but not about harming humans (say, an Asimov robot) is probably still sentient.

Of course, when writing a story, you also have the option of treating the AI's programming on par with a human's conditioning by his or her environment - most of us conform (within vague parameters) to the society that brought us up, and if we rebel, we tend to rebel in culturally prescribed ways (which is why we tend to get counter-cultures rather than random individualist self-expression). An AI could be much the same - whether by design or by accident, it could follow the path desired by its creators (or its creators' creators, or however far back you need to go until you reach a human) while still having the choice to do otherwise.

That's actually also a pretty common AI trope in SF - it's the type of AI you find in Iain M. Banks stories, or in Acephalous Dreams. Or in Artifice and Intelligence, for that matter. The AIs in those stories have considerable power and freedom, and they tend to be in positions of power, but they are no more malevolent than most humans in positions of power have been. They might manipulate others for their interests, or take advantage of weaker AIs/Humans, but they're doing it from within the system, not from without.



Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #22 on: January 29, 2008, 06:45:01 PM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction.  Why do they always have emotions?  Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   
True.  Hence, the trouble with writing an AI story.  Also, if an AI must behave as designed, is it truly sentient?  Isn't a sense of self-determination or self-direction the defining aspect of true sentience?

So was the electric toothbrush sentient then?

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #23 on: January 29, 2008, 07:12:33 PM
I don't foresee an AI deciding to kill us off.  For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent.  So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

Insanity is a brain malfunction.  For an AI to go insane, it would have to malfunction just enough to be dangerous without breaking to the point of not being able to function at all - not a likely scenario.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


wakela

  • Hipparch
  • ******
  • Posts: 779
    • Mr. Wake
Reply #24 on: January 29, 2008, 11:51:40 PM
I don't foresee an AI deciding to kill us off.  For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent.  So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

Insanity is a brain malfunction.  For an AI to go insane, it would have to malfunction just enough to be dangerous without breaking to the point of not being able to function at all - not a likely scenario.
At first I was going to refute this because scientists are using neural networks in their AI research.  Neural networks, like our own brains and the Internet, function very well after being damaged in isolated areas.
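(A quick toy demo of that in Python, with made-up numbers: cut a growing fraction of a random network's connections and the output drifts gradually instead of falling off a cliff:)

Code:
# Toy demo of graceful degradation in a small random feedforward net:
# zero out ever more weights and watch the output drift grow smoothly,
# with no sudden point where the network stops working altogether.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 256)) / 8.0   # input -> hidden weights
W2 = rng.normal(size=(256, 1)) / 16.0   # hidden -> output weights
x = rng.normal(size=(100, 64))          # a batch of random inputs

def forward(w2):
    return np.tanh(x @ W1) @ w2

healthy = forward(W2)
for frac in (0.05, 0.10, 0.25, 0.50):
    damaged = W2.copy()
    dead = rng.choice(256, size=int(256 * frac), replace=False)
    damaged[dead] = 0.0                 # sever these hidden units' outputs
    drift = np.abs(healthy - forward(damaged)).mean() / np.abs(healthy).mean()
    print(f"{frac:.0%} of connections cut -> {drift:.0%} mean output drift")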
But I also think that AI will come out of the commercial sector rather than pure research, and that service to humans will be such an integral part of the AI's makeup that it would be unable to function without it.  It would be like Google without the Internet.

So yeah.  I don't think an AI could develop malevolent insanity.  However, I also think a malevolently insane person or group could develop an AI...

This is something to consider when we talk about what we should and shouldn't do with developing technologies like nano, AI, cloning, genetic engineering...  Even if we in the Enlightened West make all the right decisions and develop these powerful technologies safely, it doesn't mean everyone else will.