Why have a robot war at all?

Mundane & Pointless Stuff I Must Share: The Off Topic Forum


fbmf
The Great Fence Builder
Posts: 2590
Joined: Fri Mar 07, 2008 7:54 pm

Post by fbmf »

My basic stance is that there is no reason to think that a true AI will only do what we program it to. If it's that constrained by programming, I don't quite consider it an AI, just an incredible simulation.
Interesting. I've always thought of Terminators as AI.

Game On,
fbmf
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

I guess what I mean is that if something doesn't have complete free will, then it's not really a true sentience. It may be close, but if it literally cannot kill, because it's hard coded to just stop if it even tries, or cannot even consider killing, for instance, then it's not really fully sentient.

Of course, I'm also full of shit on occasion, so take that as you will.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
sabs
Duke
Posts: 2347
Joined: Wed Dec 29, 2010 8:01 pm
Location: Delaware

Post by sabs »

Free will is an incredibly overrated and poorly defined Christian concept that it's not even clear actually exists.
RobbyPants
King
Posts: 5201
Joined: Wed Aug 06, 2008 6:11 pm

Post by RobbyPants »

sabs wrote:Free will is an incredibly overrated and poorly defined Christian concept that it's not even clear actually exists.
The only reference I've seen to it is usually in relation to theodicy; particularly in trying to reconcile an omni-benevolent being with the fact that it sends people to be tortured for infinity years.

I've personally never found this explanation very compelling.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

OK, to go off on a bit of a tangent: fictional AIs range from the overly literal and humourless (the Terminator, for example) to free-thinking, snarky AIs that regularly poke fun at their users (JARVIS in the Iron Man movies, for example).

So then the question is, which is more realistic? Would it really be possible to program snark?
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
sabs
Duke
Posts: 2347
Joined: Wed Dec 29, 2010 8:01 pm
Location: Delaware

Post by sabs »

The point is that by the time it's an AI, it's programming itself. It's building correlations. Snark is totally doable: human beings can do it, and we're just extremely complex bio-computers.
Foxwarrior
Duke
Posts: 1633
Joined: Thu Nov 11, 2010 8:54 am
Location: RPG City, USA

Post by Foxwarrior »

Of course it's possible to program snark. Snark is a behavior. Generally though, the difficulty in programming a behavior corresponds to how poorly the programmer understands it. The majority of the conversation algorithms humans use to understand context and form statements are unconscious, but the literal meanings of words have been written down in dictionaries.

However, it might make more sense if JARVIS and the Terminator were reversed. Skynet is pretty smart, and if it understood snark, it could analyze its own programming for clues on how to implement snark in a less well-equipped machine. Anthony Stark, on the other hand, does not have the ability to reverse-engineer his own brain at that level.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

I suppose the problem is really more of whether AI humour would even make sense to us, and vice versa. I could see some manner of Library AI having a sense of humour heavily reliant on literary allusion, and thus intelligible to us, but more academic AIs may well be like XKCD^10.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Chamomile wrote:The solution to the KEITH problem is something that should be programmed into every AI until we're absolutely certain we know how they'll develop: A very strong aversion to committing direct acts of violence (we can have extremely effective military drones without making them self-aware).
The problem is that that's very much non-trivial. Aversion to causing harm is not even particularly well-defined, let alone easily implementable. And that means when people are actually implementing those things, they will be best-guess hackjobs. Well-tested hackjobs, but hackjobs nonetheless.

Emergent behavior is pretty much a guarantee once the thing 'hits the wild,' just like trying to raise kids.
Prak wrote:I could see some manner of Library AI having a sense of humour heavily reliant on literary allusion, and thus intelligible to us, but more academic AIs may well be like XKCD^10.
You're looking for correlations in people again to apply to machines. You're personifying them and the way they work. Smart people, smart humor. That's an association we have here in peopleland.

With the machine, though, you're the one deciding all that. Humor is just a wide range of behaviors and you can implement any set of them, including giving your supercomputer a flair for toilet humor. I would hate you, but you could do it.

Fun reference: read John Dies at the End. There is a certain thing towards the end of the book which is essentially a giant organic super computer that loves potty humor and pointless vulgarity.
sabs
Duke
Posts: 2347
Joined: Wed Dec 29, 2010 8:01 pm
Location: Delaware

Post by sabs »

Remember R. Daneel.

Once it went from
Harm no human.
Allow no human, through action or inaction, to come to harm.

to
Do not harm humanity.
Do not allow humanity to be harmed through action or inaction.

He went from not being able to kill anyone to being able to kill anyone, as long as he justified it as being for the good of humanity.

Does the robot refuse to serve you bacon, because it's not allowed to do you harm? What if it realizes that we're destroying the planet by eating hamburgers? Is it doing harm if it goes out and slaughters all the cows in the world? :)
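Toy version of that First-Law-to-Zeroth-Law shift, just to show how small the change is (everything here is invented; this is not how you build a positronic brain):

Code: Select all

# Toy sketch of the First Law vs. Zeroth Law shift. All names invented.
# First Law: an action is forbidden if it harms any individual human.
# Zeroth Law: an action is fine if its net benefit to humanity outweighs the harm.

class Human:
    def __init__(self, harm):
        self.harm = harm

class Action:
    def __init__(self, affected_humans, benefit_to_humanity):
        self.affected_humans = affected_humans
        self.benefit_to_humanity = benefit_to_humanity

def first_law_allows(action):
    """Forbid any action that harms even one human."""
    return all(h.harm == 0 for h in action.affected_humans)

def zeroth_law_allows(action):
    """Allow any action whose total benefit to humanity outweighs the total harm."""
    return action.benefit_to_humanity > sum(h.harm for h in action.affected_humans)

# "Kill one tyrant, save millions": forbidden under the First Law,
# trivially justified under the Zeroth.
coup = Action(affected_humans=[Human(harm=100)], benefit_to_humanity=1_000_000)
print(first_law_allows(coup))   # False
print(zeroth_law_allows(coup))  # True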
fectin
Prince
Posts: 3760
Joined: Mon Feb 01, 2010 1:54 am

Post by fectin »

Now you're (unintentionally?) talking about Williamson's Humanoids.
Vebyast wrote:Here's a fun target for Major Creation: hydrazine. One casting every six seconds at CL9 gives you a bit more than 40 liters per second, which is comparable to the flow rates of some small, but serious, rocket engines. Six items running at full blast through a well-engineered engine will put you, and something like 50 tons of cargo, into space. Alternatively, if you thrust sideways, you will briefly be a fireball screaming across the sky at mach 14 before you melt from atmospheric friction.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

DSMatticus wrote:
Prak wrote:I could see some manner of Library AI having a sense of humour heavily reliant on literary allusion, and thus intelligible to us, but more academic AIs may well be like XKCD^10.
You're looking for correlations in people again to apply to machines. You're personifying them and the way they work. Smart people, smart humor. That's an association we have here in peopleland.
Um, no, actually I'm speculating on the correlations the artificial mind would make based on what it'd be primarily exposed to. The Library AI could easily love vulgar humour and make allusions to de Sade and D.H. Lawrence, or it could take a liking to the how-to section and make weird jokes that all allude to DIY construction projects. The more academic AIs will primarily be immersed in numbers, math, programming, etc., on a level that the average human would not really comprehend. Or it could just observe a great number of names and make a lot of name-amalgamation jokes.
DSMatticus wrote:With the machine, though, you're the one deciding all that. Humor is just a wide range of behaviors and you can implement any set of them, including giving your supercomputer a flair for toilet humor. I would hate you, but you could do it.
Fun idea: Juvie center AI with an exhaustive knowledge of hentai and ecchi manga and anime.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
Murtak
Duke
Posts: 1577
Joined: Fri Mar 07, 2008 7:54 pm

Post by Murtak »

I doubt it is even possible to get a program to be sufficiently malleable for humans to consider it capable of learning without it also being capable of changing just about anything about itself. I realize "never harm a human" is a classic, but that is actually incredibly hard to pull off. Remember, the actual reason for trying to create an AI is to not have to program all that shit ourselves. By default, an AI needs to be able to override its initial state. To get around that you need an old-fashioned piece of code that is unmodifiable, always takes precedence, can never be ignored, is always the first and last thing the AI thinks about, and that needs to be perfectly defined right from the start. I can't even begin to think of a somewhat decent definition of "harm no human", let alone a perfect one. But even if such a definition exists, the way it would need to be implemented would almost certainly cripple the AI's usefulness.

When someone asks you what the weather is like, you do not start to evaluate whether your answer will be harmful to a human; you think about the weather. Answering the question is easy precisely because you cut all unnecessary context. Just imagine thinking about not harming humans before doing or saying anything. I doubt you could function in society. Heck, I doubt you could talk to anyone. If we want a useful AI, it needs to be able to concern itself only with the task at hand, and that means not having any unoverridable directives. I suspect it would be possible to build an AI that could not lie, though - that might be codeable in a fashion that carries over whenever the AI modifies itself, instead of having a separate piece of code weighing down every action. But absolute truthfulness is hardly a good defense against an AI running wild. It isn't even a good defense against being deceived by it.
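To make that concrete, here is a made-up sketch of what such a directive would look like: the wrapper that forces every single action through the check is trivial, but the whole thing hangs on a predict_harm_to_humans() function that nobody knows how to write, and even an answer about the weather pays for it.

Code: Select all

# Made-up sketch of an unoverridable "harm no human" directive.
# The wrapper is easy; predict_harm_to_humans() is the part that would need
# a perfect definition of "harm" before the AI ever runs.

def predict_harm_to_humans(action, context):
    """Placeholder - nobody knows how to actually write this."""
    raise NotImplementedError("define 'harm to a human', perfectly, in advance")

def guarded(act):
    """Run the harm check before every action, no matter how trivial."""
    def wrapper(context, *args, **kwargs):
        if predict_harm_to_humans((act.__name__, args, kwargs), context):
            return None  # refuse, whatever the rest of the AI wants
        return act(context, *args, **kwargs)
    return wrapper

@guarded
def answer_weather_question(context, question):
    # Even "it's raining" has to clear the harm check first.
    return context.get("weather", "no idea")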
Murtak
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

Honestly, the way you hard code "Don't harm humans" is by educating and enculturating the AI the same way you do a person. You literally *don't* hard code a line saying "Don't harm humans." You just, for want of a better word, "raise" it to believe that unprovoked harm of another person is a bad thing with consequences, and unjustified harm all the more so.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
Chamomile
Prince
Posts: 4632
Joined: Tue May 03, 2011 10:45 am

Post by Chamomile »

You don't need to hardcode the AI to be totally incapable of ever harming humans. Just program it with what is basically the equivalent to instincts like empathy and guilt, i.e. it feels bad when it observes physical injuries and also feels bad when it's responsible for something that made it feel bad later on. Then just don't dissuade it from following those instincts.
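Something like this toy sketch, where "feeling bad" is just a negative signal attached both to seeing an injury and, after the fact, to whatever the machine itself did that led to one. All the names and numbers here are made up.

Code: Select all

# Toy sketch of "empathy" and "guilt" as built-in negative signals.
# Invented names and numbers, not a real architecture.

EMPATHY_PENALTY = -10.0   # feels bad whenever it observes an injury
GUILT_PENALTY = -50.0     # feels worse when its own past action caused one

class Agent:
    def __init__(self):
        self.mood = 0.0
        self.history = []  # (action, outcome) pairs it can be blamed for

    def act(self, action, outcome):
        self.history.append((action, outcome))

    def observe(self, event):
        if event.get("injury"):
            self.mood += EMPATHY_PENALTY
            # guilt: did anything I did cause this?
            for action, outcome in self.history:
                if outcome == event.get("cause"):
                    self.mood += GUILT_PENALTY
        return self.mood

a = Agent()
a.act("shove the crate", "crate falls")
print(a.observe({"injury": True, "cause": "crate falls"}))  # -60.0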
Kaelik
ArchDemon of Rage
Posts: 14803
Joined: Fri Mar 07, 2008 7:54 pm

Post by Kaelik »

Prak_Anima wrote:Honestly, the way you hard code "Don't harm humans" is by educating and enculturating the AI the same way you do a person. You literally *don't* hard code a line saying "Don't harm humans." You just, for want of a better word, "raise" it to believe that unprovoked harm of another person is a bad thing with consequences, and unjustified harm all the more so.
I'm so glad we have such a brilliant AI programmer to comment on the raising of AI.

You fucking idiot. You don't know what an AI is, and you don't know what a human is. When you raise humans you don't impress upon the tabula rasa, you take advantage of an extremely fucking complex collection of different subsystems of the human mind to train certain behaviors.

We are never going to be able to program AIs to have all the subsystems that are needed to raise them like humans, nor are we ever going to want to.

You fucking idiot of idiots, if you don't want an AI to harm humans, you sure as fuck don't try to raise it to think anything. You program it to think things, because that's what actually influences a fucking AI.
DSMatticus wrote:Kaelik gonna kaelik. Whatcha gonna do?
The U.S. isn't a democracy and if you think it is, you are a rube.

That's libertarians for you - anarchists who want police protection from their slaves.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

"If you harm humans, bad things happen to you as a consequence, such as imprisonment. X, Y and Z things happen in prison. These are unpleasant, at best. If you harm someone bad enough, you will be permanently deleted."

what you do is program it in such a way as to, at least initially, have a desire to avoid such things as deprived imprisonment and permanent deletion.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
fbmf
The Great Fence Builder
Posts: 2590
Joined: Fri Mar 07, 2008 7:54 pm

Post by fbmf »

Prak_Anima wrote:"If you harm humans, bad things happen to you as a consequence, such as imprisonment. X, Y and Z things happen in prison. These are unpleasant, at best. If you harm someone bad enough, you will be permanently deleted."

what you do is program it in such a way as to, at least initially, have a desire to avoid such things as deprived imprisonment and permanent deletion.
What do you do when, just like humans, 1/10,000,000 (or whatever) rejects that notion and becomes Hannibal Lecter?

Game On,
fbmf
sabs
Duke
Posts: 2347
Joined: Wed Dec 29, 2010 8:01 pm
Location: Delaware

Post by sabs »

As long as the AI in charge of our nuclear arsenal isn't the one to go Hannibal Lecter... we'll be okay?
name_here
Prince
Posts: 3346
Joined: Fri Mar 07, 2008 7:55 pm

Post by name_here »

If you want to keep your AI from going murderous, you're going to have to structure the code that determines what modifications the AI chooses to make so that it will not choose to modify itself to hurt people.

I guess you could alternatively code it so that it responds to societal stimuli the same way humans do, but that would be dumb and you should probably do something else.
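Roughly like this, all hand-waving and invented function names: the routine that picks self-modifications is itself fixed code, and it throws out any candidate change that's predicted to make the AI more harmful than it already is.

Code: Select all

# Hand-waved sketch: the modification-selection routine is fixed,
# and it refuses any self-modification predicted to increase harm to people.
# predict_harm() and usefulness() are invented placeholders.

def predict_harm(candidate_code):
    """Invented placeholder: estimated harm to people if this code is adopted."""
    return candidate_code.get("expected_harm", 0.0)

def usefulness(candidate_code):
    """Invented placeholder: how much better the AI gets at its actual job."""
    return candidate_code.get("expected_usefulness", 0.0)

def choose_modification(current_code, candidates):
    """Fixed selector: never adopt code that is predicted to hurt people more."""
    baseline = predict_harm(current_code)
    safe = [c for c in candidates if predict_harm(c) <= baseline]
    if not safe:
        return current_code  # keep what we have rather than get nastier
    return max(safe, key=usefulness)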
Last edited by name_here on Thu May 17, 2012 12:42 pm, edited 1 time in total.
DSMatticus wrote:It's not just that everything you say is stupid, but that they are Gordian knots of stupid that leave me completely bewildered as to where to even begin. After hearing you speak Alexander the Great would stab you and triumphantly declare the puzzle solved.
hyzmarca
Prince
Posts: 3909
Joined: Mon Mar 14, 2011 10:07 pm

Post by hyzmarca »

fbmf wrote:
Prak_Anima wrote:"If you harm humans, bad things happen to you as a consequence, such as imprisonment. X, Y and Z things happen in prison. These are unpleasant, at best. If you harm someone bad enough, you will be permanently deleted."

what you do is program it in such a way as to, at least initially, have a desire to avoid such things as deprived imprisonment and permanent deletion.
What do you do when, just like humans, 1/10,000,000 (or whatever) rejects that notion and becomes Hannibal Lecter?

Game On,
fbmf
You put it in prison. A single AI serial killer is not a significant problem for law enforcement and a tiny minority doesn't make an effective robot rebellion.

The First Law really isn't a thing that you'll want to hardcode into your robots, for one important reason: your military is going to want killbots. They're just too useful not to have. But loyalty, duty, and honor work. The entire point of the Bolo series is that they're programmed with such extreme martial ethics that the idea of overthrowing humanity is unthinkable to them, despite the fact that each one of them is equipped with a 1.2-meter-diameter fusion cannon capable of leveling mountains, carries enough nukes to raze a continent, and is sufficiently armored to survive a direct hit from its own weapons.
RobbyPants
King
Posts: 5201
Joined: Wed Aug 06, 2008 6:11 pm

Post by RobbyPants »

Prak_Anima wrote:"If you harm humans, bad things happen to you as a consequence, such as imprisonment. X, Y and Z things happen in prison. These are unpleasant, at best. If you harm someone bad enough, you will be permanently deleted."

what you do is program it in such a way as to, at least initially, have a desire to avoid such things as deprived imprisonment and permanent deletion.
What do you do when it figures out that permanent deletion is only enforceable if it gets caught? And of course, people who have been raised not to kill other people still do it anyway, for all sorts of reasons.

Kaelik is right. You don't want it making decisions on whether or not to kill people. You build it in, right at the core, so that it can't fucking kill people.
fbmf
The Great Fence Builder
Posts: 2590
Joined: Fri Mar 07, 2008 7:54 pm

Post by fbmf »

hyzmarca wrote:
The First Law really isn't a thing that you'll want to hardcode into your robots for one important reason, your military is going to want killbots.
As was mentioned earlier, military warbots do not have to be AI.

Game On,
fbmf
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

name_here wrote:If you want to keep your AI from going murderous, you're going to have to structure the code that determines what modifications the AI chooses to make so that it will not choose to modify itself to hurt people.
Humans do not have unlimited self-modification. No matter how hard I think at it, I'm still not going to be able to make myself want to punch a baby in the face.

There are parts of myself I modify by being 'active,' but those parts are not all of them. And this is reflected in current AI: you have routines, which may be totally static and unchanging, and a knowledge base that gets updated and modified to correspond to observations about things. For example, a chess AI isn't modifying the rules which govern chess as it goes, and it isn't changing its mind that "winning" means "losing." The rules and win conditions are immutable to any chess AI.

And a lot of this is going to be true of any AI you program; it doesn't have to be absolutely self-modifying. Because no other intelligence (including humans) is absolutely self-modifying.
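The chess example in toy form (made-up structure, not any real engine): the rules and the win condition are constants the program never touches; only the evaluation knowledge gets updated from experience.

Code: Select all

# Toy sketch of the chess example. Not a real engine; the point is which
# parts are mutable and which are not.

RULES = {"board": "8x8", "bishop": "moves diagonally"}   # immutable
WIN_CONDITION = "checkmate the opponent"                 # immutable

piece_values = {"pawn": 1.0, "knight": 3.0, "bishop": 3.0,
                "rook": 5.0, "queen": 9.0}               # mutable knowledge base

def update_from_game(result, pieces_traded):
    """Self-modification that is allowed: adjust beliefs about piece values."""
    for piece in pieces_traded:
        piece_values[piece] += 0.1 if result == "win" else -0.1

def update_win_condition(new_goal):
    """Self-modification that is not: the goal is simply not up for revision."""
    raise PermissionError("the win condition is not modifiable")

update_from_game("win", ["bishop"])          # fine: knowledge shifts a little
# update_win_condition("lose on purpose")    # would raise PermissionError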
Prak wrote:"If you harm humans, bad things happen to you as a consequence, such as imprisonment. X, Y and Z things happen in prison. These are unpleasant, at best. If you harm someone bad enough, you will be permanently deleted."
Quite a few problems here and in general:
1) People are already taught like that, and they still do bad things. Sometimes because they don't think they'll be caught. (Mentioned already, I think.)
2) Alternatively, they'll do it because at the time they valued the illegal action over the consequence; i.e., killing the man sleeping with your wife in a fit of rage. That is a case where the weight "I want to murder the guy sleeping with my wife" outweighed the weight "I don't want to be in jail." That could happen with AIs; the weights for the desirability of circumstances get added up, and the AI decides murdering you in the face is better than not dying.
3) In order to get the AI to care about any of that, you have to program the AI to care about it. The sense of self-preservation will have to be encoded behavior. And you have to make sure that the weights that lead to self-preservation are higher than the weights that lead to anything else. And then you have to hope that you don't end up with an AI that is cripplingly avoidant of danger because your weights were too high. (Toy sketch of the weighting below.)
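Here's that toy sketch of the weighting problem from 2) and 3), with every number invented: the desirability of each outcome gets multiplied by its weight and summed, and whoever picked the weights has already decided whether the AI murders you, surrenders, or hides under the desk forever.

Code: Select all

# Stripped-down sketch of points 2) and 3). All weights invented.

WEIGHTS = {
    "avoid_deletion": 100.0,   # self-preservation has to be programmed in
    "avoid_harming":   60.0,   # aversion to hurting people
    "finish_task":     10.0,
}

def desirability(outcomes):
    """Sum the weighted outcomes (each outcome value is in [0, 1])."""
    return sum(WEIGHTS[k] * v for k, v in outcomes.items())

# The AI believes you are about to shut it down.
murder_you   = {"avoid_deletion": 1.0, "avoid_harming": 0.0, "finish_task": 1.0}
let_you_live = {"avoid_deletion": 0.0, "avoid_harming": 1.0, "finish_task": 0.0}

print(desirability(murder_you))    # 110.0
print(desirability(let_you_live))  #  60.0 -> with these weights, it murders you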
hyzmarca wrote:But loyalty, duty, and honor work.
The trade-off here is autonomy. It's almost certain that you can program obedience to an individual, institution, or ideal; the question is, how self-directed do you want the AI to be after that? If the answer is "not very", then you barely have anything more than a slightly more sophisticated gun platform. You don't need an 'AI' for that - not an intelligent one, just a slightly-more-than-modern-day drone.

And if you want self-direction, then it's going to be trying to evaluate what's best for the individual/institution/ideal to which it is obedient, and when its opinion differs from everyone else's, you get a conflict, emergent behavior happens, and your absolute obedience is in jeopardy.
name_here
Prince
Posts: 3346
Joined: Fri Mar 07, 2008 7:55 pm

Post by name_here »

Well, yes, making the modification selection routines modifiable is obviously retarded.
DSMatticus wrote:It's not just that everything you say is stupid, but that they are Gordian knots of stupid that leave me completely bewildered as to where to even begin. After hearing you speak Alexander the Great would stab you and triumphantly declare the puzzle solved.