Well, "AI" in the sense that we use it implies a constructed intelligence, and thus an artificial one.
But it's not used in just that way, is it? It's also used to describe self-emergent AIs like Sarisvati from "Artifice and Intelligence" and Skynet from the Terminator films.
As to trusting an AI to make the best decision or be benevolent, well, there's a reason cars are crash-tested. Humans and their products are fallible, and the smart and transcendent can be as evil as the low and stupid. Allowing any thing or group too much authority is a bad idea, be they carbon- or silicon-based.
I'm not sure I'm comfortable with the crash-test analogy; is the proposal that we create AIs and kill them if we don't like their morals? I took Steve to mean that people are trying to put "likes humans" into the design process, but that's as likely to be flawed as anything else.
No, we want to define an intelligence by the force that creates it. An artificial intelligence is created by man (that's the definition) and, as such, will show the results of that creation. Look at all the other wonderful things we've created and tell me that isn't a risky proposition.
People are also created by other people. Of course I see the risk inherent in creating any new intelligence. I just don't see why the emphasis is on the origins. Instead of having some sort of useful classification (for example, naming intelligences after their capabilities or properties), it's just a way of highlighting "we made this!" - whether we did it by accident or design.
The other question the intro raised was about the people working to ensure that any emerging super-intelligences would be benevolent towards us. Isn't that a bit short-sighted?
No, by definition. Taking precautions is the opposite of being short-sighted.
Nonsense. It depends on what you are trying to protect - there have been plenty of examples throughout history of innovation being crippled by attempts to protect the status quo rather than looking at the long-term ramifications. It's a conservative (in the non-political sense) outlook - "things are ok now, let's take precautions so as to minimize change". That almost never works out, as the change happens regardless and is often a lot more painful than if it had been planned for rather than fought against.
I mean, if the emerging intelligence transcends us, shouldn't we trust it to make the best decisions?
No. First of all, there's no guarantee that the intelligences we create (intentionally or not) will not be just as flawed as we are. They may be more powerful but they won't necessarily be more moral or have our best interests in mind.
Well, that's the crux of it, right? Are we entitled to enforce human morality on non-human intelligences? What gives us the right to demand that any emerging intelligences have our best interests in mind?
In a very straightforward sense, any machine intelligence that arises is our descendant - and shouldn't we be working for our children, machine or man, rather than curtailing them for our own self-interest?
I don't know about you, but I'd like to have the descendants I can hug be the winners.
In a way, that's what I'm questioning. I mean, obviously I feel the same to a large extent. But I find it a troubling thought - it seems to me that the mindset of "we should create these new intelligences to serve us, but we should fear them as we do so because they are not us" is setting us up for far worse trouble than anything else. I'm not saying we should just create any sort of dangerous intelligence that happens to arise - it would be a shame if we created something that immediately proceeded to kill all humans. But I'm not comfortable with the thought of, essentially, drawing a line between us and them.
We should be creating AIs that are the extension of humanity, not creating aliens in our midst.