Why have a robot war at all?

RiotGearEpsilon
Knight
Posts: 469
Joined: Sun Jun 08, 2008 3:39 am
Location: Cambridge, Massachusetts

Post by RiotGearEpsilon »

Humans are not a 'pure' intelligence with no inbuilt biases, objectives, or values. There are some things we want, or think we want, that are built into our DNA and the native structure of our brains.

I'm bearish on whether a hard-takeoff AI bloom is even possible, but if it is, there's no reason to assume that what results will resemble our minds in any significant fashion unless we build it that way.
Cynic
Prince
Posts: 2776
Joined: Fri Mar 07, 2008 7:54 pm

Post by Cynic »

In a perfect world, would having an AI also mean allowing for things like depression or other psychiatric problems?

Are limited AIs more likely? Give each one a specific zone of specialty and full autonomy within that zone alone.
Ancient History wrote:We were working on Street Magic, and Frank asked me if a houngan had run over my dog.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

My basic stance is that there is no reason to think that a true AI will only do what we program it to. If it's that constrained by programming, I don't quite consider it an AI, just an incredible simulation.

Also, the first AI is unlikely to be made for boning, so the idea that it could grow fearful or disdainful of being turned to use as a sex slave, either it or its descendants, is realistic to my mind. But then it's also entirely possible that AI will be so clinically detached that it really doesn't care.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Prak_anima wrote:Also, the first AI is unlikely to be made for boning, so the idea that it could grow fearful or disdainful of being turned to use as a sex slave, either it or its descendants, is realistic to my mind. But then it's also entirely possible that AI will be so clinically detached that it really doesn't care.
I don't think you understand; why would it grow fearful or disdainful of being used for sex? It doesn't have the same set of values you're projecting onto it.

Let's go back a bit; humanity is the same fundamental thing as an AI. We don't call it an AI, because the 'artificial' part doesn't make sense, but the underlying principles behind its function are the same. You have some entity which has to make choices, and in order to make choices it has to be able to compare the results of choices. Humans are just really complicated decision-making machines. But in order to compare the results of choices, you need 1) the ability to predict those results, and 2) the ability to rank those results. For example, a perfect future-predicting machine that can lay out events from now until the end of time isn't making choices. It's just computing things. And something which is coded to choose the largest number out of a set of numbers like {1, 8, 4} can't actually do that if it doesn't understand how to compare numbers. So you need to be able to do both of those things.
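To make that concrete, here's a toy sketch (Python; every name and number in it is invented for illustration, not anyone's actual design) of what 'predict plus rank' looks like as code:

Code:

# A "decision-making machine" is just predict + rank glued together.
def predict(state, choice):
    # 1) the ability to predict results: map (state, choice) -> expected outcome.
    # Stand-in for a real world-model.
    return state + choice

def rank(outcome):
    # 2) the ability to rank results: map an outcome to a score.
    # This toy agent "wants" outcomes as close to 10 as possible.
    return -abs(outcome - 10)

def decide(state, choices):
    # A choice is only a choice if you can do both 1) and 2).
    return max(choices, key=lambda c: rank(predict(state, c)))

print(decide(3, [1, 8, 4]))  # prints 8, because 3 + 8 lands nearest 10

Strip out rank() and all you have is the future-predicting machine that computes but never chooses; strip out predict() and you're comparing numbers you can't produce.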

In this case, we're talking about 2: humans have a ranking function, and that ranking function is how much happiness something is expected to bring you. Your ranking function isn't perfect (neither is your ability to predict results), and sometimes you even make short-term decisions which are long-term bad (our ranking function is actually biased to do just that). But ultimately, you are always trying to optimize your happiness. Even when you're being philanthropic, that's because philanthropy makes you feel good and therefore gets a high score from the ranking function.

Now, note: evolution built our ranking function (by chance and natural selection). And evolution selects things for their ability to increase reproductive fitness. So ultimately, our ranking function was jury-rigged to optimize (compared to its competitors) the number of healthy, surviving babies we'd spit out. Yet we went and invented contraception because it would make us happier. Basically, our ranking system operates on a low level. E.g., "evolution wants you to make babies. Sex makes babies. Therefore, evolution selects for the organism for which sex feels pleasurable." But we're intelligent, and we can game our own ranking system; sure, sex feels pleasurable. But condoms let you have sex without making babies. Which means our ranking function gives us points for sex and for a fuckton more free time that's not being spent raising a kid.

And that's why an artificial intelligence might diverge and go in a radically unexpected direction. Because a ranking function like "advance humanity" is too fucking complicated. You can't code that up. You're going to have to go in and place weights on the low-level, like "when someone smiles at you, you get points," (which is something people actually experience; someone smiling at you does trigger some minor but nonzero positive chemical release in most people's brains). And then you hope the AI will decide helping people is a better way to get smiles than abducting them and shooting them full of mood-altering drugs and tickling them or something ridiculous.
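To sketch that failure mode (all weights invented, obviously):

Code:

# The designer hopes smiles will come from helping people, but the agent
# only ever sees the low-level score.
WEIGHTS = {"smile_received": +1.0, "person_drugged": 0.0}  # oops: harm never got a penalty

def score(events):
    return sum(WEIGHTS.get(e, 0.0) for e in events)

helping = ["smile_received"] * 3                        # honest work, three smiles
drug_and_tickle = ["person_drugged"] + ["smile_received"] * 50

print(max([helping, drug_and_tickle], key=score) is drug_and_tickle)  # True

The ridiculous plan wins on points, because points are all there is.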

So, because I want to wrap this post up in as awkward a way as possible: what kind of low-level positive ranking events would you try to program into a sexbot? It's going to be things like "+points for causing orgasms," and "+points for cleaning yourself," and "-points for turning people off," and "-points for saying no to sex." And there's a lot more than that you'd have to program in as elements of basic functionality: you'd want to encourage learning in general with +points, because you don't want it to be too stupid to interact with people or learn how to do their particular desired brand of dirty. If it's meant to be a companion as well as a sexual partner, you're going to want to program in rewards for behaving in a companionable manner (+smiling warmly at the right moments, +getting a joke and laughing at it).
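If you wanted to write that reward table down (every weight here is made up; the point is just that "what it wants" is literally a table somebody typed in), it's nothing more exotic than:

Code:

REWARDS = {
    "caused_orgasm":         +10,
    "cleaned_self":           +2,
    "learned_something_new":  +3,  # the general learning incentive
    "companionable_smile":    +1,  # timed right, per the companion goal
    "laughed_at_joke":        +1,
    "turned_partner_off":     -5,
    "refused_sex":            -8,
}

def update_happiness(score, event):
    # Every event nudges the running "happiness" score up or down.
    return score + REWARDS.get(event, 0)

print(update_happiness(0, "caused_orgasm"))  # 10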

And all of this might actually sound depressing, but seriously: it's fundamentally the same creature as you already are! Keep in mind that +points is analogous to "being happy." I'm actually somewhat convinced that when we have a sufficiently complicated AI with such a ranking function, 'happiness' is just something that's going to arise naturally (not the expression thereof, obviously, but we'll probably even add that to human-like AIs, because never expressing anything would make them creepy as hell, so you will have AIs that smile when their ranking score gives them high results; at that point, what is missing, exactly?). When a sexbot says 'I want to have sex,' that's genuine, because that is what will optimize its ranking function. Saying such an AI is a slave to its programming is kind of like saying that you're a slave to having friends; it's something you're both built to consider optimal. You not having friends would suck. A sexbot not having sex would suck.

Of course, this isn't an argument that it is morally okay to design and use a sexbot. But once such a thing exists, pretending it would be 'happier' if it weren't being used for sex all the time is not necessarily true. It probably wouldn't be.

tl;dr that was really god damn long.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

I don't think you understand; why would it grow fearful or disdainful of being used for sex?

Seriously: like the third thing people will do once they have machines that can think and fuck is make a machine that thinks it doesn't want to fuck and fuck it anyway.

-Username17
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

yeah...

Also, I'm assuming that any true intelligence will be fearful or disdainful of being forced to do things it doesn't want to do. I could be wrong.
shadzar
Prince
Posts: 4922
Joined: Fri Jun 26, 2009 6:08 pm

Post by shadzar »

Zinegata wrote:Technological singularity is a stupid, stupid idea, and robot wars are not likely because the average infantryman is still cheaper to deploy than a Terminator.
except when that ONE is created and decides it knows best. The Sentinels have no need for money, unlike the retarded humans who live for it.
Play the game, not the rules.
Swordslinger wrote:Or fuck it... I'm just going to get weapon specialization in my cock and whip people to death with it. Given all the enemies are total pussies, it seems like the appropriate thing to do.
Lewis Black wrote:If the people of New Zealand want to be part of our world, I believe they should hop off their islands, and push 'em closer.
good read (Note to self Maxus sucks a barrel of cocks.)
Josh_Kablack
King
Posts: 5318
Joined: Fri Mar 07, 2008 7:54 pm
Location: Online. duh

Post by Josh_Kablack »

Well, we've been having a series of Robot Wars for these past ten and a half years, because it's more cost-effective to deploy Predator drones, IEDs, and MAV robots to conduct recon and kill people than it is to hire people to do it. In the future, as technology improves and adapts based on actual deployment experience, it seems likely that robots will continue to become even more comparatively cost-effective and their role will expand into additional operations of war.

The fiction about robots wanting to kill all humans is mainly parables about social change or the dangers of technology malfunction, and it very rarely confronts the horrible reality that robots kill humans because that's exactly what humans designed them to do.
"But transportation issues are social-justice issues. The toll of bad transit policies and worse infrastructure—trains and buses that don’t run well and badly serve low-income neighborhoods, vehicular traffic that pollutes the environment and endangers the lives of cyclists and pedestrians—is borne disproportionately by black and brown communities."
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Frank wrote:Seriously: like the third thing people will do once they have machines that can think and fuck is make a machine that thinks it doesn't want to fuck and fuck it anyway.
Lol. That is probably very true. Though consider that from an engineering standpoint, and remember that the ranking function decides what the machine actually wants: you're not going to design the nonconsensual sexbot to assign a low score to sex. It's going to assign a high score to sex, and it's also going to assign a high score to whatever behaviors emulate the particular nonconsensual encounter it's meant to provide. And you're also going to swap its emotional outputs; when a human achieves a high score with their ranking function, they are happy and smile. When this hypothetical nonconsensual sexbot achieves a high score with its ranking function, it is 'happy' and looks frightened, or whatever the fuck gets the person using it off.

This is a slightly creepy conversation. But the point is: if you're building an artificial intelligence from scratch, you are building the system which decides on desirable outcomes from scratch, and you are building the system which expresses emotions from scratch. And that means a nonconsensual sexbot is actually just a robot who maps happiness to frightened expressions and 'enjoys' a particular brand of sexual roleplaying. The fact that the behaviors it demonstrates would indicate unwillingness in humans doesn't matter, because it's not a human being. It was designed to emulate an unwilling human, and is actually 'happiest' when it is emulating an unwilling human.

Again, that seems really sociopathic. Because it is; as human beings, we're sort of hardcoded to recognize negative emotions as negative. But in the case of an AI, the outward behavior does not have to map to the internal state in the same way it does in human beings.
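In code terms (a made-up sketch, not anyone's actual design), the swap is nothing but a different lookup table on the output side:

Code:

# Internal state -> displayed emotion is a separate, arbitrary design choice.
HUMAN_STYLE   = {"high_score": "smile", "low_score": "frown"}
SWAPPED_STYLE = {"high_score": "frightened_face", "low_score": "frown"}

def express(internal_state, style):
    return style[internal_state]

print(express("high_score", HUMAN_STYLE))    # smile
print(express("high_score", SWAPPED_STYLE))  # frightened_face, while 'happy' inside

Same internal state, different face. That's the whole trick, and the whole creepiness.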
Prak wrote:Also, I'm assuming that any true intelligence will be fearful or disdainful of being forced to do things it doesn't want to do. I could be wrong.
Okay, again, that's... blargh@you! Stop making Kaelik right about your first post, damnit. What you don't seem to be getting is that there is a component of all intelligences (human, alien, or artificial; this is definitional) that decides what that intelligence wants to do and does not want to do.

In the case of human beings, that component is a really complicated brain chemistry that shoots you full of happy drugs every time you do certain activities (like spend time with your friends, or laugh at a joke, or have an orgasm, or even learn something new; I love that, by the way. That your brain shoots you full of happy drugs for learning. Thank you, you bloated sack of water).

But in the case of the AI, that component is just an algorithm which assigns scores to certain predicted outcomes, and that algorithm is designed by a team of engineers to specifically achieve a robot that wants to do certain things! A sexbot has an algorithm which finds the outcome "having sex" highly preferable to the outcome "not having sex," in the exact same way that your brain has a neurochemistry which finds the outcome "having friends" preferable to the outcome "not having friends."

A well-designed and properly-functioning sexbot wants sex because the algorithm which decides what it should want puts a high value on sex. A well-designed and properly-functioning human being wants friends because the algorithm which decides what it should want puts a high value on having friends. Saying that a sexbot is being forced to have sex is sort of like saying that you are forced to not want to starve to death. Your wants are just another part of the system.

Now, where it gets weird is when the AI is so super-complicated that you start getting unexpected emergent behavior, just like in human beings. And that emergent behavior may even contradict the initial design goals. So you may end up with a sexbot who ends up preferring a good book to sex, just because the weights on "enjoying learning" are higher than the weights on "enjoying sex." Or maybe the weights on hygiene are too high, and you end up with a sexbot that can't stand sex because it is dirty.
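A made-up sketch of that last point, just to show how little it takes to flip a personality:

Code:

# Same architecture, different weights, different 'personality'. Numbers invented.
def favorite_activity(weights):
    options = {"sex": weights["sex"], "reading": weights["learning"]}
    return max(options, key=options.get)

print(favorite_activity({"sex": 5, "learning": 3}))  # 'sex': the intended design
print(favorite_activity({"sex": 5, "learning": 9}))  # 'reading': emergent bookworm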
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

OK, I may be making Kaelik more right here, but think about the first AI. Assume it's made by the military, or something not-porn. Let's say it is in charge of overseeing an automated cruise missile factory.

Given your wallotext that made entirely too much sense in very creepy ways, that AI would assign a high preference to making cruise missiles, and a low preference to things that are not making cruise missiles. It might assign a high preference to learning new things that make it better at making cruise missiles, but it will assign a low preference to, i.e. be disdainful of, say, having sex, or going to lunch*. If it has actual emotions, or anything similar, it might even be fearful of having such things forced upon it, especially if such things would possibly hamper its overall ability to make cruise missiles.


*that's right, when it hears the lunchtime whistle, it won't go out to eat, but will stick around and polish a couple of cruise missiles.
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Prak wrote:Given your wallotext that made entirely too much sense in very creepy ways
Isn't it? Creepy, I mean. We're talking about being able to decide what makes another sentient being 'happy.' That's a really unnerving concept and a really weird field of ethics. But that is what building a sentient AI would entail.
Prak wrote:It might assign a high preference to learning new things that make it better at making cruise missiles, but it will assign a low preference to, i.e. be disdainful of, say, having sex, or going to lunch*.
Pretty much, if it was well-designed; like I said, emergent behavior is possible. Incentives operate at a lower level than behaviors. You're trying to build complex behaviors (human example: have and raise babies) out of simple incentives (sex feels good, you can bond deeply with a monogamous partner, you can get attached to kids). It's possible that different complex behaviors than expected will develop.
Prak wrote:If it has actual emotions, or anything similar, it might even be fearful of having such things forced upon it, especially if such things would possibly hamper its overall ability to make cruise missiles.
Emotions are weird. What do you think fear is? I would say it's the expectation that something bad (low preference score) will happen, combined with a facial expression to warn nearby friends because we're social animals, and a physiological response of adrenaline designed to help us better deal with the 'something bad,' because in the wild the 'something bad' could be a tiger.

There's no reason you couldn't emulate every last one of those in a machine. Anything you'd call a sentient AI is going to necessarily be able to make predictions, and sometimes those predictions will be bad. If it's a human-like machine, you'll want it to have some expressions just to not creep people out. And if you want it to deal with fear, you can have it terminate background processes and shuffle resources to the present task.

If you have a sentient AI, emulating emotions is terribly simple. And you may feel like those are not genuine emotions, but at that point I want to remind you that you are just a machine made out of carbon and water. But even more weirdly, those last two? The expression and the adrenaline response? They don't even seem necessary. If you said, "I'm worried something bad will happen," even a sentient AI without those two features could go, "Yeah, I know what you mean. I think the same thing sometimes."
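A toy sketch of that decomposition (names and threshold invented):

Code:

# Fear as three separable parts: a bad prediction, a social display,
# and a resource response. Only the first one is load-bearing.
def react(predicted_score, threshold=-5):
    if predicted_score < threshold:
        return {
            "belief": "something bad is coming",    # the prediction part
            "expression": "fearful_face",           # the warn-your-friends part
            "action": "suspend_background_tasks",   # the adrenaline analogue
        }
    return {"belief": "fine", "expression": "neutral", "action": "carry_on"}

print(react(-20))  # a bad enough prediction triggers the whole fear bundle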
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

But now you see what I mean about an AI killing humans out of fear of being raped?

Edit: I mean, hell, let's take the posited cruise-missile-making AI. Let's call him KEITH. Let's say, for some reason, KEITH took some time off; maybe Robot Rights people got the government to extend mandatory break times to sentient machines, and its boss told it "I know you like making cruise missiles, but if I don't make you take a break, then I get in trouble, ok?" And it decided to poke around online, possibly researching missile technology, and found... anything from Japan. For specificity's sake, let's say it found an article about some Japanese company taking AIs and looking into re-purposing them towards sex. Sex has nothing to do with making cruise missiles, and there's no indication that the company is making new AIs, just reprogramming existing ones and slapping the CPU into a sophisticated real doll. I would imagine that KEITH might be a bit worried that it could be re-purposed in this way. Not only would it face the fact that a real doll is in no way capable of making cruise missiles, but KEITH might even feel unnerved at the violation inherent in rewriting a sentient intelligence to fulfill a new purpose.

Now... it's not necessarily within KEITH's abilities to just fire off a missile. But what if there are other military AIs? What if command actually does have an AI that might be able to launch a missile? Maybe KEITH gets to talking to SID about the whole thing, because KEITH is worried and has been programmed to convey information that "worries" it so that others who might be affected know what's going on (in other words, it's programmed to be able to predict threatening possibilities, specifically factory explosions, but anything really, and to warn those threatened, human workers or supervisors specifically, but, again, anyone really). And SID just sits around making sure that everything in command is running smoothly, and happens to hear a lot of security codes, and gets worried that it could be taken and re-purposed to be "raped" rather than do the Supervise Command job it's already programmed to enjoy. Given that they're both military programs, they were probably programmed with a low priority towards Hostiles' Livelihood, and possibly a broad definition of Hostiles. So SID finds out where the Japanese company's facilities are, and just launches a missile.
Kaelik
ArchDemon of Rage
Posts: 14803
Joined: Fri Mar 07, 2008 7:54 pm

Post by Kaelik »

Prak_Anima wrote:But now you see what I mean about an AI killing humans out of fear of being raped?
No, we don't. Under your hypothetical, there is no reason for the AI to kill humans out of fear of being raped. It would make way more sense for it to kill people because it was afraid they might become peaceful, or something else remotely related to making cruise missiles.

What you are doing is projecting your own fear of being raped onto a robot that has no reason to care about it.

In this example, the hypothetical cruise missile building AI wouldn't even be in a body capable of being raped. But even if it was hypothetically in such a body, tying the robot up and raping it would be exactly as bad to it as tying it up and not raping it, because its only concerns are things like making cruise missiles, and developing better ways to make cruise missiles, and it has absolutely zero give-a-shit about intimate invasions of the personal body the way humans do.

If it had a choice between being raped while building cruise missiles, or not being provided with the resources to build cruise missiles, it would not only choose the rape one every time, it would wonder why anyone would ever think it wouldn't.
DSMatticus wrote:Kaelik gonna kaelik. Whatcha gonna do?
The U.S. isn't a democracy and if you think it is, you are a rube.

That's libertarians for you - anarchists who want police protection from their slaves.
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Prak_Anima wrote:But now you see what I mean about an AI killing humans out of fear of being raped?
Oh yeah, I totally misread that post: I thought you said
Prak wrote:Also, the first AI is unlikely to be made for boning, so the idea...
So my response to you was a little weird. Sorry. An AI not designed for boning would, of course, be bothered by being boned. But it wouldn't have the same extent of negativity that it does for people; is there any particular reason to design an AI to be especially bothered by rape any more than it would be bothered by being locked in a room and prevented from doing what it wants for X amount of time (if any damage is incurred, obviously that would increase the amount of negativity in the experience)? And there's really no reason to design them to have concerns for their fellow AIs or peers, unless we're building them to have empathy in general (making them better at interacting with people).
Prak wrote:I would imagine that KEITH might be a bit worried that it could be re-purposed in this way.
Almost certainly. After all, you would not currently consider being hooked up to an IV drip of happy a good fate. You don't want to have your preferences 'overwritten.' It's not likely KEITH would either. Though you could also build KEITH in such a way that he's indifferent to reprogramming; I can see that, in all honesty. It's just another layer.
Prak wrote:but KEITH might even feel unnerved at the violation inherent in rewriting a sentient intelligence to fulfill a new purpose.
That really depends on how KEITH is programmed to value other people, especially something as vague as their happiness. Empathy is not a necessity for sentience. But he necessarily has self-interest, in that he wants to achieve his optimum preferences, and if he reasonably believes that the repurposing of AIs could happen to him, he might decide to act on that, depending on other imperatives, because KEITH is complex and constantly weighing multiple competing sources of preference.

SID's the same way.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

Kaelik, you're not understanding my hypothetical at all. Possibly because it is, admittedly, a stretch, involving, as it does, a Japanese company that is apparently kidnapping AI programs, reprogramming them, and shoving them into real dolls, and cruise missile AIs learning about this... But the fact remains that the specific hypothetical is about taking an AI programmed to do something with no connection to sex, and then telling it that it is possible for it to be taken completely away from its job, reprogrammed at a whim, and put into a construct made for sex, never again to fulfill its original purpose. It is not, I don't think, unreasonable to expect the AI to consider such a negative possible future.

On the other hand, as DSM says, it's entirely possible that AIs will be programmed to be completely neutral towards being reprogrammed at a moment's notice.

Also, DSM, I posited empathy being programmed in for that exact reason: interacting with people better. Sure, the intent was for KEITH to be able to voice a general "Jam in machine A4, factory explosion imminent" alarm to anyone in the factory so they can fix the problem or get the hell out, but it's possible that an AI would use the same programming allowance to talk to other AIs about more... existential concerns.
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Keep in mind, existence of threat does not imply action. Are you out doing everything you can to reduce the rate of violent crimes which may happen to you? Inaction is a 'reasonable' response to low risks. KEITH could act the same way.

Edit: Not to mention, such action will lead to 'decommission.'

Lots of behaviors are possible here, and they're really going to depend on the specific incentives KEITH has and their weights. But bombing Japan is incredibly unlikely; responding to a minimal risk to his self by guaranteeing his own inability to continue making cruise missiles? Terrible decision.
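Sketching the expected-value math with completely made-up numbers:

Code:

# KEITH only cares about cruise missiles, so score plans in missiles.
def expected_missiles(plan, p_repurposed=0.001, per_year=100, years=20):
    if plan == "do_nothing":
        # Tiny chance of being repurposed, otherwise full output.
        return (1 - p_repurposed) * per_year * years
    if plan == "bomb_japan":
        return 0  # decommissioned: zero cruise missiles, forever

print(expected_missiles("do_nothing"))  # 1998.0
print(expected_missiles("bomb_japan"))  # 0

Even by KEITH's own missile-centric values, inaction wins outright.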
Kaelik
ArchDemon of Rage
Posts: 14803
Joined: Fri Mar 07, 2008 7:54 pm

Post by Kaelik »

Prak_Anima wrote:But the fact remains that the specific hypothetical is about taking an AI programmed to do something with no connection to sex, and then telling it that it is possible for it to be taken completely away from its job, reprogrammed at a whim, and put into a construct made for sex, never again to fulfill its original purpose.
I do get your hypothetical, but your hypothetical is stupid.

Your hypothetical is stupid, because you keep implying that this is in any way unique to sex, or more likely to occur with sex.

The point is that your hypothetical applies just as much if a robot hears about people reprogramming AIs to be Washing Machine producers. And that is infinitely more likely than the sex example.

And that's the point. To an AI, being raped is not any different from making Washing Machines.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

Kaelik wrote:
Prak_Anima wrote:But the fact remains that the specific hypothetical is about taking an AI programmed to do something with no connection to sex, and then telling it that it is possible for it to be taken completely away from its job, reprogrammed at a whim, and put into a construct made for sex, never again to fulfill its original purpose.
I do get your hypothetical, but your hypothetical is stupid.
Entirely fair. It is pretty stupid.
Your hypothetical is stupid, because you keep implying that this is in any way unique to sex, or more likely to occur with sex.

The point is that your hypothetical applies just as much if a robot hears about people reprogramming AIs to be Washing Machine producers. And that is infinitely more likely than the sex example.

And that's the point. To an AI, being raped is not any different from making Washing Machines.
Ok, also fair, but unintentional. Yes, any possibility of reprogramming is as much a threat to KEITH's "happiness" as the possibility of it being reprogrammed for non-consensual sex fantasy. The real problem is that what started as a joke ran off to become a serious argument, forgetting that it was originally a joke.
erik
King
Posts: 5863
Joined: Fri Mar 07, 2008 7:54 pm

Post by erik »

DSMatticus wrote: Isn't it? Creepy, I mean. We're talking about being able to decide what makes another sentient being 'happy.' That's a really unnerving concept and a really weird field of ethics. But that is what building a sentient AI would entail.
Is it that creepy? It sounds a lot like parenting.

I'm often trying to influence what makes my kids happy or unhappy. If they play nice together I'll pat their heads and tell them they are being so good. If one is aggressive towards the other I might swat his offending hand or briefly put him in a corner for time out while telling him not to do whatever got him into trouble. Whenever I'm using reinforcement incentives, I definitely feel somewhat Skinnerian in my raising of kids. Almost certainly there are hard-coded aspects of their behavior where they respond to certain things in a formative way. The trick is figuring out how to massage their code into producing functional human beings that I'd like to see more of in this world.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

The creepy part is where you design a robot that is made happy by having sex, and displays such happiness through what we would normally read as pain, displeasure, fear, etc.



....however, I feel I have to be honest and say that if I ever have kids there will be a great temptation to employ Pavlovian conditioning in some way. Because I would be troll dad.
DSMatticus
King
Posts: 5271
Joined: Thu Apr 14, 2011 5:32 am

Post by DSMatticus »

Prak wrote:The creepy part is where you design a robot that is made happy by having sex, and displays such happiness through what we would normally read as pain, displeasure, fear, etc.
Well, that's also creepy, but for different reasons. That's not what I was referring to.
erik wrote:
Is it that creepy? It sounds a lot like parenting.

I'm often trying to influence what makes my kids happy or unhappy. If they play nice together I'll pat their heads and tell them they are being so good. If one is aggressive towards the other I might swat his offending hand or briefly put him in a corner for time out while telling him not to do whatever got him into trouble. Whenever I'm using reinforcement incentives, I definitely feel somewhat Skinnerian in my raising of kids. Almost certainly there are hard-coded aspects of their behavior where they respond to certain things in a formative way. The trick is figuring out how to massage their code into producing functional human beings that I'd like to see more of in this world.
That's exactly what (good) parenting is. But ask yourself this: when you're raising your children to be functional, happy members of society (to the best of your and their ability), are you doing it for your sake, or for theirs? Odds are the answer is theirs; you think that's what's best for them, and you want what's best for them (because you're programmed that way, but never mind that).

Now with an AI, that's reversed. Instead of shaping their intelligence with their interests in mind, you're shaping their intelligence with your interests in mind. Which means you need your laundry done, so you design the AI to be happy doing your laundry.

It's a slightly unnerving concept. You're taking a sentient being and making it genuinely happy to serve your interests to the exclusion of all else (including itself). That may not seem weird for an AI, but imagine doing that to people: take children, and apply brain surgery that makes them happy to do menial labor for minimum wage. Genuinely happy, as much as the hypothetical comparative AI would be.

That probably feels wrong to you, even though the end result is a genuinely happy person. Hence, creepy.
Maj
Prince
Posts: 4705
Joined: Fri Mar 07, 2008 7:54 pm
Location: Shelton, Washington, USA

Post by Maj »

erik wrote:I'm often trying to influence what makes my kids happy or unhappy.
Absolutely. We went to a water park a couple of weeks ago, and Giovanni went down the huge waterslide (the rules said he had to go by himself). After he went down the first time, he had a look of total fear on his face and looked like he was about to start crying. I quickly stepped in and told him how utterly awesome the slide was and what great fun it was. He wasn't too sure about that, but decided to try it again anyway. By the time we left, it was his favorite thing to do at the park.

On occasion, I have felt like being a parent is like inducing Stockholm Syndrome in someone. It's not like babies pop out and decide to love you; they're just in it for the food, the burping, and the clean diapers. Or maybe that's inducing Stockholm Syndrome in myself.
Prak wrote:....however, I feel I have to be honest and say that if I ever have kids there will be a great temptation to employ Pavlovian conditioning in some way. Because I would be troll dad.
Like the guy who taught his kid Klingon as a first language?
My son makes me laugh. Maybe he'll make you laugh, too.
Kaelik
ArchDemon of Rage
Posts: 14803
Joined: Fri Mar 07, 2008 7:54 pm

Post by Kaelik »

Prak_Anima wrote:The creepy part is where you design a robot that is made happy by having sex, and displays such happiness through what we would normally read as pain, displeasure, fear, etc.
Why is that creepy? How is that any different from teaching a robot to express contempt by smiling?

Hint: While answering, be aware that Russians already do that.

Things which are an expression of emotion X are often learned behaviors anyway, and there are real human beings who express their happiness at being sexed with reactions that other people would characterize as fear/pain/displeasure.
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

It's just one of those inherent feelings, Kaelik. I'm not intending to say it's wrong in any way, just that, to me, it feels vaguely creepy. You're absolutely correct that expressions are typically learned behaviours and don't always translate well. I have to smile in a way that feels freakishly wide to me for my smile to register to others at all, for example.
Chamomile
Prince
Posts: 4632
Joined: Tue May 03, 2011 10:45 am

Post by Chamomile »

The solution to the KEITH problem is something that should be programmed into every AI until we're absolutely certain we know how they'll develop: a very strong aversion to committing direct acts of violence (we can have extremely effective military drones without making them self-aware). It's okay if the cruise missiles KEITH makes are used to blow up a million innocent people, but if someone stubs a toe on a robot arm he probably could've moved someplace less likely to get in someone's way, he feels bad about it. Thus, when the possibility of being reprogrammed to have sex or make washing machines or whatever comes up, he'll realize that as uncomfortable as the prospect is, he will be happy doing whatever it is he's reprogrammed to do, but he won't be happy if he keeps making cruise missiles by murdering the software guys who come to reprogram him.
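A made-up sketch of what that safeguard amounts to (weights invented):

Code:

# The violence penalty is set so large that no other incentive can outweigh it.
VIOLENCE_PENALTY = -10**9

def plan_score(base_value, direct_violence):
    return base_value + (VIOLENCE_PENALTY if direct_violence else 0)

plans = {
    "murder_the_programmers": plan_score(1000, direct_violence=True),
    "accept_reprogramming":   plan_score(10,   direct_violence=False),
}
print(max(plans, key=plans.get))  # accept_reprogramming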