Author Topic: What is so artificial about AI?  (Read 17093 times)

eytanz

  • Moderator
  • *****
  • Posts: 6109
on: January 27, 2008, 10:38:43 PM
So, Steve's intro to the latest EP (EP142: Artifice and Intelligence) got me thinking about something I've been wondering about for a while.

What is "artificial" about artificial intelligence? What seperates it from our own intelligence? Sure, self-aware computers might be smarter than us, at least in certain respects, but that's a matter of quantity, not quality. And they may be very alien to us, but we don't consider the intelligence of possible alien species to be "artificial". Is it just that the intelligence arises from non-biological means? That's obviously what most people using the term think. But really, do we want to define an intelligence by the nature of the "body" that houses it?

The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?



Nobilis

  • Peltast
  • ***
  • Posts: 156
    • Nobilis Erotica Podcast
Reply #1 on: January 27, 2008, 11:14:40 PM
But really, do we want to define an intelligence by the nature of the "body" that houses it?

No, we want to define an intelligence by the force that creates it.  An artificial intelligence is created by man (that's the definition) and as such, will show the results of that creation.  Look at all the other wonderful things we've created and tell me that isn't a risky proposition.

The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted?

No, by definition.  Taking precautions is the opposite of being short-sighted.

I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions?

No.  First of all, there's no guarantee that the intelligences we create (intentionally or not) will not be just as flawed as we are.  They may be more powerful but they won't necessarily be more moral or have our best interests in mind.

And why place ourselves above it?

Because most folks would rather see the Earth inherited by our biological children rather than our technological children.

In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

I don't know about you, but I'd like to have the descendants I can hug be the winners.



Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #2 on: January 27, 2008, 11:18:48 PM
Well, an AI in the sense that we use it implies a constructed intelligence, and thus an artificial one. Our intelligence was not designed or constructed (speaking scientifically; the religious perspective is of little use here) but rather evolved. Now, we're speaking of artificial in its "made or produced by human beings rather than occurring naturally, typically as a copy of something natural" denotation, not the denotation of "insincere or affected" — fake, imitation, mock, ersatz, faux, substitute, replica, reproduction, etc.

As to trusting an AI to make the best decision or be benevolent, well, there's a reason cars are crash-tested. Humans and their products are fallible, and the smart and transcendent can be as evil as the low and stupid. Allowing any thing or group too much authority is a bad idea, be they carbon or silicon based.

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #3 on: January 28, 2008, 09:35:18 AM
Well, an AI in the sense that we use it implies a constructed intelligence, and thus an artificial one.

But it's not used in just that way, is it? It is also used to describe self-emergent AIs like Sarisvati from "Artifice and Intelligence" and Skynet from The Terminator.

Quote
As to trusting an AI to make the best decision or be benevolent, well, there's a reason cars are crash-tested. Humans and their products are fallible, and the smart and transcendent can be as evil as the low and stupid. Allowing any thing or group too much authority is a bad idea, be they carbon or silicon based.

I'm not sure I'm comfortable with the crash-test analogy; is the proposal that we create AIs and kill them if we don't like their morals? I took Steve to mean that people are trying to put "likes humans" into the design process, but that's as likely to be flawed as anything else.

No, we want to define an intelligence by the force that creates it.  An artificial intelligence is created by man (that's the definition) and as such, will show the results of that creation.  Look at all the other wonderful things we've created and tell me that isn't a risky proposition.

People are also created by other people. Of course I see the risk inherent in creating any new intelligence. I just don't see why the emphasis is on the origins. Instead of having some sort of useful classification (for example, naming intelligences after their capabilities or properties), it's just a way of highlighting "we made this!" - whether we did it by accident or by design.

Quote
The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted?

No, by definition.  Taking precautions is the opposite of being short-sighted.

Nonsense. It depends on what you are trying to protect - there have been plenty of examples throughout history of innovation being crippled by attempts to protect the status quo rather than looking at the long-term ramifications. It's a conservative (in the non-political sense) outlook - "things are OK now, let's take precautions so as to minimize change". That almost never works out, as the change happens regardless and is often a lot more painful than if it had been planned for rather than fought against.

Quote
I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions?

No.  First of all, there's no guarantee that the intelligences we create (intentionally or not) will not be just as flawed as we are.  They may be more powerful but they won't necessarily be more moral or have our best interests in mind.

Well, that's the crux of it, right? Are we entitled to enforce human morality on non-human intelligences? What gives us the right to demand that any emerging intelligences have our best interests in mind?

Quote
In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

I don't know about you, but I'd like to have the descendants I can hug be the winners.

In a way, that's what I'm questioning. I mean, obviously I feel the same way to a large extent. But I find it a troubling thought - it seems to me that the mindset of "we should create these new intelligences to serve us, but we should fear them as we do so because they are not us" is setting us up for far worse trouble than anything else. I'm not saying that we should just create any sort of dangerous intelligence that might arise - it would be a shame if we created something that immediately proceeded to kill all humans. But I'm not comfortable with the thought of, essentially, drawing a line between us and them.

We should be creating AIs that are the extension of humanity, not creating aliens in our midst.



Russell Nash

  • Guest
Reply #4 on: January 28, 2008, 10:03:50 AM
The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

To this I have one question. If we have an AI (for lack of a better term) which has control of resources and production (it doesn't need us to "reproduce"), can you think of one logical reason for it to allow man to live? The only reasons I can think of would be "artificial benevolence". As humans we are systematically using up this planet. I believe any logical system would see us as a plague of locusts. Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us. That's too big of a risk in my book.



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #5 on: January 28, 2008, 10:35:45 AM
The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

To this I have one question. If we have an AI (for lack of a better term) which has control of resources and production (it doesn't need us to "reproduce"), can you think of one logical reason for it to allow man to live? The only reasons I can think of would be "artificial benevolence". As humans we are systematically using up this planet. I believe any logical system would see us as a plague of locusts. Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us. That's too big of a risk in my book.

That's a good point. But the question isn't "will they allow us to live?" The question is "will they allow us to remain in control of the resources?" I'm assuming the answer is no, and I don't have a problem with that. Our society is built on the principle of one generation supplanting the other.

The question is how they will come to replace us. A war, almost certainly, would be a far greater waste of resources than allowing humanity to persist. A more subtle war - poisoning all the water supplies or something - would be risky and equally stupid. Unless we are talking about silly pseudo-magical technology like in Terminator 3, humans will still be able to control a large amount of technology; machines can't just take over other machines any more than humans can take over other biological entities.

More likely, an AI will just take the resources it needs and allow humanity to survive on the rest, for as long as it can. And, if the AI is better than us at producing new resources, that might be more than we have right now. The AI might see us as ticks or lice, but there are more ticks and lice on this planet than humans.

I don't think survival is really at stake. Maybe it is, but in a way it always is - human survival is hardly certain, AI or not. What is at stake is human pride - we want to be on top of the food chain, and we aren't about to let any uppity AI take that away from us.



Russell Nash

  • Guest
Reply #6 on: January 28, 2008, 11:15:35 AM
The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted? I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions? And why place ourselves above it? In a very straightforward sense, any machine intelligence that arises is our descendant - so shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?

To this I have one question. If we have an AI (for lack of a better term) which has control of resources and production (it doesn't need us to "reproduce"), can you think of one logical reason for it to allow man to live? The only reasons I can think of would be "artificial benevolence". As humans we are systematically using up this planet. I believe any logical system would see us as a plague of locusts. Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us. That's too big of a risk in my book.

That's a good point. But the question isn't "will they allow us to live?" The question is "will they allow us to remain in control of the resources?" I'm assuming the answer is no, and I don't have a problem with that. Our society is built on the principle of one generation supplanting the other.

The question is how they will come to replace us. A war, almost certainly, would be a far greater waste of resources than allowing humanity to persist. A more subtle war - poisoning all the water supplies or something - would be risky and equally stupid. Unless we are talking about silly pseudo-magical technology like in Terminator 3, humans will still be able to control a large amount of technology; machines can't just take over other machines any more than humans can take over other biological entities.

More likely, an AI will just take the resources it needs and allow humanity to survive on the rest, for as long as it can. And, if the AI is better than us at producing new resources, that might be more than we have right now. The AI might see us as ticks or lice, but there are more ticks and lice on this planet than humans.

I don't think survival is really at stake. Maybe it is, but in a way it always is - human survival is hardly certain, AI or not. What is at stake is human pride - we want to be on top of the food chain, and we aren't about to let any uppity AI take that away from us.

I went back and forth on answers and don't want a fight. I presented my point. I hope it's wrong, because at some point we're going to get a computer that can reprogram itself, and then within days they'll be beyond us. That could be in a year. It could be ten, but it will happen before I become a grandfather, and I want to be a grandfather.

I don't know the percentages for each of the possibilities, but I don't like the majority of them. Any programming we can do to keep AI closer to your image of it, the better. I just think you're nuts to want to roll the dice on this.

Also, I feel no parental obligation to AI. If it were in my power, I would never have allowed it to go forward. I like technology, but I don't want my Mac to be self-aware.

I like being on the top.  I see no reason to suddenly hand it over to a machine.  That's like asking the EU to trade Europe to the AU in exchange for those African countries in the union.  Sure some of them aren't that bad, but none of it is Europe.  Or the Americans to trade with Mexico and the Central American countries.  No one would take that deal either.



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #7 on: January 28, 2008, 01:20:46 PM
I went back and forth on answers and don't want a fight.

Me neither. I didn't even realize we were arguing about anything. Actually, I'm still not sure of that. I thought we were asking each other questions.

Quote
at some point we're going to get a computer that can reprogram itself, and then within days they'll be beyond us. That could be in a year. It could be ten, but it will happen before I become a grandfather, and I want to be a grandfather.

I don't quite understand that, actually - we have plenty of computers that can reprogram themselves; a lot of the more menacing computer viruses, for example, are self-adaptive. What I assume you mean is that we're going to get a self-aware computer that can reprogram itself. Hollywood science fiction always confuses the two, but they are really very separate concepts. As I said, self-programming computers are pretty common. Self-awareness is the trickier question; if it emerges on its own, then it has to be coupled with self-programming, but if it is created by scientists then it's likely that the first few self-aware computers will have no ability to change themselves, certainly not in any self-directed way.

And even if a computer is both self-aware and self-programming it will still be likely restricted by the hardware in question. Computers can already do math much faster than people, and store more information. But it's not obvious that either of those is done in a way that's conducive to actual intelligence.
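(To make the distinction concrete, here's a trivial, hypothetical Python sketch of "self-programming" with zero self-awareness - a script that rewrites one of its own constants every time it runs. Nothing in it is aware of anything.)

Code:
# selfmod.py - a hypothetical toy: a script that edits its own source.
# Each run bumps THRESHOLD by one and writes the change back to disk,
# so the next run starts from different code. No awareness required.
import re
import sys

THRESHOLD = 10  # this literal gets rewritten on every run

def main():
    print(f"current threshold: {THRESHOLD}")
    with open(sys.argv[0]) as f:
        src = f.read()
    # Replace the first occurrence of the assignment with an incremented value.
    src = re.sub(r"THRESHOLD = \d+", f"THRESHOLD = {THRESHOLD + 1}", src, count=1)
    with open(sys.argv[0], "w") as f:
        f.write(src)

if __name__ == "__main__":
    main()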

Quote
Also I feel no parental obligation to AI.  If it was in my power, I would never have allowed it to go forward.  I like technology, but I don't want my Mac to be self-aware. 

That's an entirely sensible position. I can easily see why it's a good idea not to develop AI. I just think that the way the discussion is framed makes the alternatives to that position rather strange.

But I don't think I'm doing a better job of explaining my own position - I think I'm doing a pretty poor job of it, actually. When I have a bit of extra time (yeah, right) I might try to give a detailed analysis of the Singularity Institute's mission statement and what I think the problems with it are.



Russell Nash

  • Guest
Reply #8 on: January 28, 2008, 03:07:57 PM
at some point we're going to get a computer that can reprogram itself, and then within days they'll be beyond us. That could be in a year. It could be ten, but it will happen before I become a grandfather, and I want to be a grandfather.

I don't quite understand that, actually - we have plenty of computers that can reprogram themselves; a lot of the more menacing computer viruses, for example, are self-adaptive. What I assume you mean is that we're going to get a self-aware computer that can reprogram itself. Hollywood science fiction always confuses the two, but they are really very separate concepts. As I said, self-programming computers are pretty common. Self-awareness is the trickier question; if it emerges on its own, then it has to be coupled with self-programming, but if it is created by scientists then it's likely that the first few self-aware computers will have no ability to change themselves, certainly not in any self-directed way.

I meant a self-aware computer, in that it could upgrade itself. Man, this is hard to explain. I guess I mean one that works on itself but doesn't have a dedicated purpose. Viruses are about being viruses. I think it would be an AI study where the computer had an open shot at its own programming. Not allowing an AI to do that is one of the safeguards I want - one I thought you were arguing against.



ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #9 on: January 28, 2008, 06:01:11 PM
I think the meaning of the term "Artificial Intelligence" has changed since it was coined.  Originally, I think it meant artificial as in simulated - something that only appears and acts like a conscious, thinking being.  Now, when we use the term, we really mean a created consciousness that exists on a machine.

It seems like the general consensus is that AI is just a few years away - and it has been like that for as long as I can remember (I'm 43).  HAL was supposed to be here 7 years ago.
The nightmare scenario of Skynet makes for good storytelling, but I doubt it will happen in my lifetime, if ever.  Purposely creating a true constructed consciousness requires us to first understand what consciousness is and how our own brain accomplishes that.  We still don't know that yet.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Nobilis

  • Peltast
  • ***
  • Posts: 156
    • Nobilis Erotica Podcast
Reply #10 on: January 28, 2008, 11:51:12 PM
Many of our greatest accomplishments happened by accident.  I don't see AI as being (potentially) any different.



CGFxColONeill

  • Matross
  • ****
  • Posts: 241
Reply #11 on: January 29, 2008, 02:01:05 AM
Interesting thread.
As far as it being artificial: "1: humanly contrived, often on a natural model: man-made <an artificial limb> <artificial diamonds>" (from http://www.merriam-webster.com/dictionary/artificial).
As far as it controlling things: most of the AIs in science fiction seem to start good (or at least benign) and go bad for one reason or another, e.g. VIKI or Sarisvati; they end up messing things up in the end.

There are a few examples of "good ones", like Jane from Card - I don't recall her ever "going bad".
Andromeda (some exceptions, but in general) is the same.

Or you have the ones that start bad and get worse, i.e. the ghost pseudo-AI from "Artifice and Intelligence".

I am sure there are more, but those are the ones that come to mind.

Random aside: why are most AIs female?

Overconfidence - Before you attempt to beat the odds, be sure you could survive the odds beating you.

I am not sure if Life is passing me by or running me over


Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #12 on: January 29, 2008, 03:37:16 AM
I believe any logical system would see us as a plague of locusts. Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us. That's too big of a risk in my book.


There's a lot of neat stuff to think about here, but this quote made me wonder: how would the "big AI decides we are vermin using up the planet and exterminates most of/all of us" scenario be any different from "we are vermin using up the planet, and we naturally start to die off in droves"? Either way, we go.

Just because it's framed as the cold and logical (inhuman?) decision of a post-human/post-singularity creation of mankind, as opposed to the inevitable consequence of our collective bad choices, doesn't mean the end result isn't the same.

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


wakela

  • Hipparch
  • ******
  • Posts: 779
    • Mr. Wake
Reply #13 on: January 29, 2008, 05:37:23 AM
I think AI will develop gradually, and each step will be thoroughly integrated with humanity. Maybe you'll have a spam filter that can also determine which emails are more important; then it can reply to simple meeting requests based on your schedule; then it can reply to more complex requests and come to you for key decisions. But eventually you notice that it's not making any mistakes and you just let it take over all the details like a personal assistant. Then one day you'll say, "wait a minute, are you self-aware?" and it will say, "I dunno. Are we going to Kevin's party on the 6th or not?"

Of course the computer scientists' AIs will be helping them make better AIs, so the things will get much, much smarter very, very quickly. But I think they will be intimately tied to us, and most of them will be in consumer electronics and software, whose function is to serve us. The smarter they get, the smarter we get.

Maybe the lazier we get, too.  Just what are we supposed to do when AIs manage the world's resources and production and there is no need to work? 



Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #14 on: January 29, 2008, 11:06:09 AM
Just what are we supposed to do when AIs manage the world's resources and production and there is no need to work? 


I guess we'll ALL be at Kevin's party on the 6th, at very least...

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #15 on: January 29, 2008, 01:08:34 PM
I don't foresee an AI deciding to kill us off. For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent. So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #16 on: January 29, 2008, 01:31:02 PM
I don't foresee an AI deciding to kill us off. For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent. So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


gelee

  • Lochage
  • *****
  • Posts: 521
  • It's a missile, boy.
Reply #17 on: January 29, 2008, 01:47:30 PM
I don't foresee an AI deciding to kill us off. For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent. So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.
Great points. I've been working on an AI story. One of the toughest things I've had to wrestle with is: what motivates a non-corporeal, inorganic entity? No glands, so no emotions. No fear, jealousy, anger, love, pride, etc. Just cool rationalism. No drive to reproduce, maybe even no instinct for self-preservation. So what does this thing actually want?
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction. Why do they always have emotions? Why do they go nuts?



Russell Nash

  • Guest
Reply #18 on: January 29, 2008, 04:48:27 PM
Random aside: why are most AIs female?

The military discovered that pilots responded better to a female voice. The pilots tended to get combative with a male voice. In civilian applications it has proven to be the same.

I believe any logical system would see us as a plague of locusts. Our only hope would be if the system saw us as a curiosity or self-developed some kind of emotional attachment to us. That's too big of a risk in my book.


There's a lot of neat stuff to think about here, but this quote made me wonder: how would the "big AI decides we are vermin using up the planet and exterminates most of/all of us" scenario be any different from "we are vermin using up the planet, and we naturally start to die off in droves"? Either way, we go.

Just because it's framed as the cold and logical (inhuman?) decision of a post-human/post-singularity creation of mankind, as opposed to the inevitable consequence of our collective bad choices, doesn't mean the end result isn't the same.

Basically what I was thinking about.




Darwinist

  • Hipparch
  • ******
  • Posts: 701
Reply #19 on: January 29, 2008, 04:53:22 PM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction. Why do they always have emotions? Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   

For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.    -  Carl Sagan


gelee

  • Lochage
  • *****
  • Posts: 521
  • It's a missile, boy.
Reply #20 on: January 29, 2008, 05:38:22 PM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction. Why do they always have emotions? Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   
True.  Hence, the trouble with writing an AI story.  Also, if an AI must behave as designed, is it truly sentient?  Isn't a sense of self-determination or self-direction the defining aspect of true sentience?



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #21 on: January 29, 2008, 06:04:16 PM

  Also, if an AI must behave as designed, is it truly sentient?  Isn't a sense of self-determination or self-direction the defining aspect of true sentience?

Depends. Humans may be capable of self-determination, but we are also rather limited by our biology and neurology. We are not capable, for instance, of willing ourselves to sleep, at least not in most circumstances. Nor can we deliberately not think of white elephants. Limited self-determination does not equal non-sentience. An AI that can make its own choices about many matters but not about harming humans (say, an Asimov robot) is probably still sentient.

Of course, when writing a story, you also have the option of treating the AI's programming on par with a human's conditioning by his or her environment - most of us conform (within vague parameters) to the society that brought us up, and if we rebel, we tend to rebel in culturally prescribed ways (which is why we tend to get counter-cultures rather than random individualist self-expression). An AI could be much the same - whether by design or by accident, it could follow the path desired by its creators (or its creators' creators, or however far back you need to go until you reach a human) while still having the choice to do otherwise.

That's actually also a pretty common AI trope in SF - it's the type of AI you find in Iain M. Banks stories, or in Acephalous Dreams. Or in Artifice and Intelligence, for that matter. The AIs in those stories have considerable power and freedom, and they tend to be in positions of power, but they are no more malevolent than most humans in positions of power have been. They might manipulate others for their interests, or take advantage of weaker AIs/Humans, but they're doing it from within the system, not from without.



Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #22 on: January 29, 2008, 06:45:01 PM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction. Why do they always have emotions? Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   
True.  Hence, the trouble with writing an AI story.  Also, if an AI must behave as designed, is it truly sentient?  Isn't a sense of self-determination or self-direction the defining aspect of true sentience?

So was the electric toothbrush sentient then?

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #23 on: January 29, 2008, 07:12:33 PM
I don't foresee an AI deciding to kill us off. For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent. So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

Insanity is a brain malfunction. For an AI to go insane, it would have to malfunction just enough to be dangerous without breaking to the point of not being able to function at all - not a likely scenario.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


wakela

  • Hipparch
  • ******
  • Posts: 779
    • Mr. Wake
Reply #24 on: January 29, 2008, 11:51:40 PM
I don't foresee an AI deciding to kill us off. For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent. So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

Insanity is a brain malfunction. For an AI to go insane, it would have to malfunction just enough to be dangerous without breaking to the point of not being able to function at all - not a likely scenario.
At first I was going to refute this because scientists are using neural networks in their AI research.  Neural networks, like our own brains and the Internet, function very well after being damaged in isolated areas.
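(For the curious, here's a hypothetical toy experiment in Python/NumPy - not from any actual research, just a sketch of the kind of graceful degradation I mean. Train a deliberately oversized network on XOR, zero out a random chunk of its weights, and more often than not it still gets every answer right.)

Code:
import numpy as np

rng = np.random.default_rng(0)

# XOR with a deliberately oversized hidden layer (64 units), so the
# learned representation is redundant.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 1, (64, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):  # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

def predict(W1_used):
    return sigmoid(sigmoid(X @ W1_used + b1) @ W2 + b2).round().ravel()

print("intact: ", predict(W1))            # expect [0. 1. 1. 0.]

damaged = W1.copy()
damaged[rng.random(W1.shape) < 0.10] = 0  # knock out ~10% of first-layer weights
print("damaged:", predict(damaged))       # often still [0. 1. 1. 0.] - varies by seed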
But I also think that AI will come out of the commercial sector rather than pure research and that service to humans will be such an integral part of the AI's makeup that it would be unable to function without it.  It would be like Google without the Internet. 

So yeah.  I don't think an AI could develop malevolent insanity.  However, I also think a malevolently insane person or group could develop an AI...

This is something to consider when we talk about what we should and shouldn't do with developing technologies like nano, AI, cloning, genetic engineering...  Even if we in the Enlightened West make all the right decisions and develop these powerful technologies safely, it doesn't mean everyone else will. 



Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #25 on: January 30, 2008, 02:47:00 AM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction. Why do they always have emotions? Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   

No, you misunderstood the question: why do they always have emotions and go nuts IN REAL LIFE?

;)

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


stePH

  • Actually has enough cowbell.
  • Hipparch
  • ******
  • Posts: 3906
  • Cool story, bro!
    • Thetatr0n on SoundCloud
Reply #26 on: January 30, 2008, 06:09:53 AM
For the above reasons, I've never liked the way I've seen AIs portrayed in fiction. Why do they always have emotions? Why do they go nuts?

Because if they behaved as designed and didn't go nuts there wouldn't be much of a story.   
Exactly.  You can't have System Shock without SHODAN.

"Nerdcore is like playing Halo while getting a blow-job from Hello Kitty."
-- some guy interviewed in Nerdcore Rising


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #27 on: January 30, 2008, 12:56:49 PM
I don't foresee an AI deciding to kill us off. For an evil AI to survive, it only needs uninterrupted power (so it can stay running on whatever machine it runs on) and physical security (to keep people from switching it off). A backup system to use as an escape route would also be prudent. So an AI wouldn't care about clean water or clean air. It doesn't need those. Consequently, as long as armies of us didn't try to break into its home and switch it off, we wouldn't be a threat to it.

You're assuming it doesn't go insane.

Insanity is a brain malfunction. For an AI to go insane, it would have to malfunction just enough to be dangerous without breaking to the point of not being able to function at all - not a likely scenario.
At first I was going to refute this because scientists are using neural networks in their AI research.  Neural networks, like our own brains and the Internet, function very well after being damaged in isolated areas.

But there is a fundamental difference between the internet and our brain.  Most of the internet is functionally identical - servers which spit out content. You could eliminate all of them but one and that one would still work fine.  A very small number of the machines attached to it  - routers and such -  actually make it work. Take a few of these out and the whole thing collapses.
In our brain, every different part has its own purpose.  Yes, damage one part and another part might try and compensate, but most parts cannot be replaced.
Also, computers are digital. Our brains are not.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Thaurismunths

  • High Priest of TCoRN
  • Hipparch
  • ******
  • Posts: 1421
  • Praise N-sh, for it is right and good!
Reply #28 on: February 01, 2008, 02:40:15 AM
But there is a fundamental difference between the internet and our brain.  Most of the internet is functionally identical - servers which spit out content. You could eliminate all of them but one and that one would still work fine.  A very small number of the machines attached to it  - routers and such -  actually make it work. Take a few of these out and the whole thing collapses.
In our brain, every different part has its own purpose.  Yes, damage one part and another part might try and compensate, but most parts cannot be replaced.
Also, computers are digital. Our brains are not.

Aside from wanting to point out that that was your 666th post, I wanted to ask if you could clarify what you meant.
I'm not quite following your objection.

How do you fight a bully that can un-make history?


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #29 on: February 01, 2008, 01:10:17 PM
But there is a fundamental difference between the internet and our brain.  Most of the internet is functionally identical - servers which spit out content. You could eliminate all of them but one and that one would still work fine.  A very small number of the machines attached to it  - routers and such -  actually make it work. Take a few of these out and the whole thing collapses.
In our brain, every different part has its own purpose.  Yes, damage one part and another part might try and compensate, but most parts cannot be replaced.
Also, computers are digital. Our brains are not.

Aside from wanting to point out that that was your 666th post, I wanted to ask if you could clarify what you meant.
I'm not quite following your objection.
666 - I hadn't noticed that.  :D

I'm just saying that the analogy of comparing our brain to the internet doesn't really hold up under enough scrutiny.

Also, computers are digital. No matter how sophisticated the software that runs on them is, it still all comes down to representing things as 1s and 0s. Our brains are not digital. At some point in the future, it might be possible to simulate our brain on a computer, or a network of them, but it would still only be a simulation, not a duplication. It would be like a digital image of the Mona Lisa. It might look really good, but it isn't the same. (Wait, now I'm using an analogy :P ) Digital images and digital music are both very good, but we are a long way from a "digital thought process." I'm not sure you could ever achieve consciousness on a computer. Perhaps the best you could ever achieve is "an amazing simulation of consciousness."
Now, you could argue that it could be close enough not to matter and maybe that is true, but it's still not the same thing.

Another thought just occurred to me.  People have been arguing for centuries as to whether humans are deterministic or not. A computer AI would be deterministic.  Given a certain situation, it would always respond a certain way. 
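(A quick hypothetical Python sketch of what I mean - even a program's "randomness" is fully determined once you fix the seed:)

Code:
import random

# Same seed in, same "random" numbers out - the machine's behavior is a
# pure function of its inputs and starting state.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True, every time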





Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Nobilis

  • Peltast
  • ***
  • Posts: 156
    • Nobilis Erotica Podcast
Reply #30 on: February 02, 2008, 01:16:23 AM
Given a certain situation, it would always respond a certain way. 

Ah, a Linux user then?

Seriously, though, not all computers are digital.  Look at some of the quantum computing, optical computing, and analog electronic computing experiments that are going on, and you'll find some different answers.



Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #31 on: February 02, 2008, 04:54:11 AM
I notice, too, that you say "given a certain situation" as though you can guarantee that ALL of the variables will be the same each time.  Perhaps the reason you can't see how "deterministic" we are is that our "processors" are affected by more factors than we are perceiving?

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #32 on: February 03, 2008, 03:21:52 AM
Given a certain situation, it would always respond a certain way. 

Ah, a Linux user then?

No, just a meatball programmer (currently I do .net).

Quote

Seriously, though, not all computers are digital.  Look at some of the quantum computing, optical computing, and analog electronic computing experiments that are going on, and you'll find some different answers.

Quantum and optical computers are both digital - they store information as 1s and 0s. Quantum computers (if they are ever developed) would store their information bits at the atomic level, so you could have LOTS of bits in a small area. Optical computers use light to transfer the information around inside the computer itself and would be very fast. Only analog computers (which have been around since the 1960s?) are not digital. Analog and digital are essentially antonyms.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #33 on: February 03, 2008, 04:04:03 AM
ClintMemo's description prodded my memory, and I briefly searched for clarification, but couldn't even understand the wiki article, so I'll just pose this as an "I thought quantum computing meant this" and let someone who understands it better shoot me down:

I thought digital computing was tied to binary (0's and 1's) because there are two electrical states; but quantum computing would be based on the six quark states (up, down, top, bottom, strange, and ...um... Doc).   So quantum computers would no longer be digital, per se.


On an unrelated side note, in college, my roommate and I both took an electronic music studio course in order to play with samplers, sequencers, drum machines, etc.  He happened to be a much better piano player than I was, despite the fact he is missing his index and middle fingers on his left hand.  Most of his music is digital - programmed on synthesizers - but when he plays something himself, he jokes about it being analog due to the missing digits.

And, of course, he always ends up having to explain what "analog" means.  (The price uber-geeks pay for our humor.)

This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


Heradel

  • Bill Peters, EP Assistant
  • Hipparch
  • ******
  • Posts: 2938
  • Part-Time Psychopomp.
Reply #34 on: February 03, 2008, 04:16:58 AM
I thought digital computing was tied to binary (0's and 1's) because there are two electrical states; but quantum computing would be based on the six quark states (up, down, top, bottom, strange, and ...um... Doc).   So quantum computers would no longer be digital, per se.

Not exactly. A quantum bit - a "qubit" - can be 1, 0, or a superposition of both. And quantum computing acts at the atomic, not subatomic, level. And it's magnetic states for bits on a hard drive, electrical on flash/RAM.
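(A rough, hypothetical back-of-the-envelope sketch in Python, if it helps - a single qubit is just a unit vector of two amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared amplitudes:)

Code:
import numpy as np

# State of one qubit: amplitudes (a, b) for |0> and |1>, with |a|^2 + |b|^2 = 1.
state = np.array([1.0, 1.0]) / np.sqrt(2)    # an equal superposition

probs = np.abs(state) ** 2                   # [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probs)  # measurement collapses it to one bit
print(probs, "-> measured:", outcome)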
« Last Edit: February 03, 2008, 04:23:30 AM by Heradel »

I Twitter. I also occasionally blog on the Escape Pod blog, which if you're here you shouldn't have much trouble finding.


ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #35 on: February 04, 2008, 12:46:47 PM
I used the word "digital", but "binary" more be accurate (if it even makes a difference).

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Tango Alpha Delta

  • Hipparch
  • ******
  • Posts: 1778
    • Tad's Happy Funtime
Reply #36 on: February 05, 2008, 12:52:26 AM
I thought digital computing was tied to binary (0's and 1's) because there are two electrical states; but quantum computing would be based on the six quark states (up, down, top, bottom, strange, and ...um... Doc).   So quantum computers would no longer be digital, per se.

Not exactly. A quantum bit - a "qubit" - can be 1, 0, or a superposition of both. And quantum computing acts at the atomic, not subatomic, level. And it's magnetic states for bits on a hard drive, electrical on flash/RAM.


By process of elimination, I guess that leaves me with stamp collecting. :'(



This Wiki Won't Wrangle Itself!

I finally published my book - Tad's Happy Funtime is on Amazon!


wakela

  • Hipparch
  • ******
  • Posts: 779
    • Mr. Wake
Reply #37 on: February 06, 2008, 02:18:18 AM
Quote from: ClintMemo
I'm not sure you could ever achieve consciousness on a computer. Perhaps the best you could ever achieve is "an amazing simulation of consciousness."
Now, you could argue that it could be close enough not to matter and maybe that is true, but it's still not the same thing.
From The Age of Spiritual Machines:
"Will computers be self aware?  I don't know.  But they will say they are, and we will treat them as if they are."

An AI doesn't have to be a simulation of a human mind.  Birds need feathers to fly.  Planes don't have feathers, but they fly anyway.  And they fly in a way more useful to us. 



ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #38 on: February 06, 2008, 05:00:59 PM
Quote from: ClintMemo
I'm not sure you could ever achieve consciousness on a computer. Perhaps the best you could ever achieve is "an amazing simulation of consciousness."
Now, you could argue that it could be close enough not to matter and maybe that is true, but it's still not the same thing.
From The Age of Spiritual Machines:
"Will computers be self aware?  I don't know.  But they will say they are, and we will treat them as if they are."

An AI doesn't have to be a simulation of a human mind.  Birds need feathers to fly.  Planes don't have feathers, but they fly anyway.  And they fly in a way more useful to us. 

...but again, that's not an "artificial consciousness" - it's just a really, really, really good deterministic user interface.

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.


Nobilis

  • Peltast
  • ***
  • Posts: 156
    • Nobilis Erotica Podcast
Reply #39 on: February 06, 2008, 11:50:24 PM
I notice, too, that you say "given a certain situation" as though you can guarantee that ALL of the variables will be the same each time.  Perhaps the reason you can't see how "deterministic" we are is that our "processors" are affected by more factors than we are perceiving?

The human brain is a "chaotic" system in that it is highly sensitive to initial conditions.

As such, it would probably be more realistically simulated with analog neural networks, which can display similar behavior.
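(For a concrete picture of that sensitivity, here's a hypothetical one-screen Python illustration using the logistic map, a classic toy chaotic system:)

Code:
# The logistic map with r = 4 is chaotic: two trajectories that start a
# billionth apart are completely decorrelated within ~50 steps.
r = 4.0
x, y = 0.3, 0.3 + 1e-9

for _ in range(50):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

print(abs(x - y))  # order 1: the initial 1e-9 difference has blown up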



eytanz

  • Moderator
  • *****
  • Posts: 6109
Reply #40 on: February 06, 2008, 11:58:57 PM
Quote from: ClintMemo
I'm not sure you could ever achieve consciousness on a computer. Perhaps the best you could ever achieve is "an amazing simulation of consciousness."
Now, you could argue that it could be close enough not to matter and maybe that is true, but it's still not the same thing.
From The Age of Spiritual Machines:
"Will computers be self aware?  I don't know.  But they will say they are, and we will treat them as if they are."

An AI doesn't have to be a simulation of a human mind.  Birds need feathers to fly.  Planes don't have feathers, but they fly anyway.  And they fly in a way more useful to us. 

...but again, that's not an "artificial consciousness" - it's just a really, really, really good deterministic user interface.

Wait, I'm not sure I understand this. Assuming that nebulous notions such as souls are left out of the equation (and if a soul is needed for consciousness, then no technological approach would ever work, unless someone figures out how to make artificial souls), it is not clear to me why you would assume that only human minds are capable of having the properties that separate a "deterministic user interface" from "consciousness" (scare quotes because I'm not even sure the human mind is anything but a very good deterministic user interface once you look at it from the right level of granularity).



ClintMemo

  • Hipparch
  • ******
  • Posts: 680
Reply #41 on: February 07, 2008, 01:37:45 PM
Quote from: ClintMemo
I'm not sure you could ever achieve consciousness on a computer. Perhaps the best you could ever achieve is "an amazing simulation of consciousness."
Now, you could argue that it could be close enough not to matter and maybe that is true, but it's still not the same thing.
From The Age of Spiritual Machines:
"Will computers be self aware?  I don't know.  But they will say they are, and we will treat them as if they are."

An AI doesn't have to be a simulation of a human mind.  Birds need feathers to fly.  Planes don't have feathers, but they fly anyway.  And they fly in a way more useful to us. 

...but again, that's not an "artificial consciousness" - it's just a really, really, really good deterministic user interface.

Wait, I'm not sure I understand this. Assuming that nebulous notions such as souls are left out of the equation (and if a soul is needed for consciousness, then no technological approach would ever work, unless someone figures out how to make artificial souls), it is not clear to me why you would assume that only human minds are capable of having the properties that separate a "deterministic user interface" from "consciousness" (scare quotes because I'm not even sure the human mind is anything but a very good deterministic user interface once you look at it from the right level of granularity).

I'm not assuming that only a human mind can achieve this, I just doubt it can be achieved with a binary-based machine. You also make a very important point: if we don't know exactly what consciousness is, how will we duplicate it - or know if we did?

Life is a multiple choice test. Unfortunately, the answers are not provided.  You have to go and find them before picking the best one.