Why have a robot war at all?



hyzmarca
Prince
Posts: 3909
Joined: Mon Mar 14, 2011 10:07 pm

Post by hyzmarca »

fbmf wrote: As was mentioned earlier, military warbots do not have to be AI.

Game On,
fbmf
No, they really do, for one important reason. Accountability.

We can make dumb killbots now. Modern Predator drones can identify targets all by themselves. The only reason they can't fire their missiles without a guy at a control panel miles away pushing a button is the need for accountability.

When a drone murders civilians you can't put it in prison. You can't hold a trial for it. You can't parade it in front of the cameras and assure the public that justice has been served. It's just a dumb machine. If it kills civilians, then that's just an equipment mishap and the blame falls squarely on the officers and politicians who approved the use of the things.

Sending out dumb killbots is ethically similar to laying a minefield that moves. Laying a minefield in clearly marked areas that you can easily demine is already ethically suspect. Laying mines that seek targets out and kill them in their bedrooms is just beyond the pale.

Thus every killbot will have to be directly supervised by a human, and that's a problem. Radio communication can be jammed, humans can be killed, and the number of robot units in service will eventually exceed the number of qualified operators. That's actually a problem now: there aren't enough soldiers qualified to control drones today, and it will only be exacerbated as more and more drones are introduced.


A smart AI gives you the advantage of a morally culpable actor. You can give it orders, send it out, and if it murders civilians or otherwise breaks the law, you can place the blame squarely on it.
Kaelik
ArchDemon of Rage
Posts: 14803
Joined: Fri Mar 07, 2008 7:54 pm

Post by Kaelik »

hyzmarca wrote:A smart AI gives you the advantage of a morally culpable actor. You can give it orders, send it out, and if it murders civilians or otherwise breaks the law, you can place the blame squarely on it.
See, not everyone is as dumb as you are, so we don't care about whether the machine is a "morally culpable actor" when it kills people.

Of course, "morally culpable actor" is and always has been a shell game: people are morally culpable if we say they are and aren't if we say they aren't, and there is no actual real distinction. So we can just say the killbots that aren't AIs are morally culpable and it will be just as true as it is of humans. Then idiots like you can be happy when we deactivate them, because calling them by a different name will suddenly make deactivation more capable of preventing future unwanted deaths.
Last edited by Kaelik on Fri May 18, 2012 1:46 am, edited 1 time in total.
DSMatticus wrote:Kaelik gonna kaelik. Whatcha gonna do?
The U.S. isn't a democracy and if you think it is, you are a rube.

That's libertarians for you - anarchists who want police protection from their slaves.
Winnah
Duke
Posts: 1091
Joined: Tue Feb 15, 2011 2:00 pm
Location: Oz

Post by Winnah »

Interesting article about the use of drones in 'covert' warfare.

Unsure of the veracity of the article.
sabs
Duke
Posts: 2347
Joined: Wed Dec 29, 2010 8:01 pm
Location: Delaware

Post by sabs »

Also, how do you kill an AI that can literally copy itself onto new hardware? Unless the AI-ness is somehow limited to a positronic brain or something, and non-transferable. Which... seems weird.
Murtak
Duke
Posts: 1577
Joined: Fri Mar 07, 2008 7:54 pm

Post by Murtak »

RobbyPants wrote:Kaelik is right. You don't want it making decisions on whether or not to kill people. You build it right in that it can't fucking kill people.
I'm not even sure that is possible. How do you define "killing people"? What are people? What is killing? I am not even talking about the difficulty of coding such a rule (though that would be a monumental task in itself), but about the difficulty of formally defining what this act of killing actually is. Is shooting a gun at someone "killing"? Depressing a trigger does not equate to killing. Is it about the likelihood of killing someone? Say, any action with more than a 20% likelihood of killing someone is prohibited? Welcome to infinite-analysis-land, where AIs sit in a windowless room, afraid to move. And even with infinite computing power, what about maiming someone? Is that ok? What about helping to build a McDonalds, which is sure to hasten the deaths of multiple people? Is that ok? Is it allowed to shoot someone to keep them from killing someone else? Humanity has had centuries and we are still unclear on when it is ok to kill someone or on what actually constitutes "killing".

We can't even define what it is we don't want AIs to do. It is also doubtful we could program those rules even if we made up our minds. And even if we could, it is very possible that following these rules would doom an AI to paralysis. And even if it didn't, it is quite likely that the effort of constantly evaluating the rules would use up every bit of computing power the AI actually has.
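To make that concrete, here is a toy sketch of the hypothetical 20% rule in Python. Every type and number in it is invented; the point is only that the estimate has to recurse through consequences of consequences, and wherever you cut the recursion off, you are either paralyzed or unsafe.

[code]
from dataclasses import dataclass, field

# Toy model: an action has an immediate probability of killing someone,
# plus a list of (likelihood, follow-on action) pairs it makes possible.
@dataclass
class Action:
    name: str
    p_kill: float
    consequences: list = field(default_factory=list)

DEATH_THRESHOLD = 0.20

def death_probability(action, depth=0, max_depth=3):
    # Expected probability that some death follows from this action.
    # The recursion is the problem: every real action enables further
    # actions, so the honest version never terminates. We cut it off
    # at an arbitrary depth and hope nothing important lies deeper.
    if depth >= max_depth:
        return 0.0  # truncation: past this point we simply stop caring
    p = action.p_kill
    for likelihood, follow_on in action.consequences:
        p += likelihood * death_probability(follow_on, depth + 1, max_depth)
    return min(p, 1.0)

def permitted(action):
    return death_probability(action) < DEATH_THRESHOLD

# Shooting is forbidden, as intended...
print(permitted(Action("fire rifle", 0.30)))  # False
# ...but so is anything with enough tiny downstream risks:
burger = Action("serve burger", 0.000001)
mcdonalds = Action("build McDonalds", 0.0, [(1.0, burger)] * 500000)
print(permitted(mcdonalds))  # also False -- paralysis
[/code]

Tighten the threshold and the AI can't pour a glass of water; loosen it or shorten the lookahead and it happily sets up deaths a few consequences removed.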

I can imagine AIs being possible. I can imagine them being capable of learning, of being trained or of being naturally cooperative. But I seriously doubt that something akin to the laws of robotics is possible.
Murtak
Chamomile
Prince
Posts: 4632
Joined: Tue May 03, 2011 10:45 am

Post by Chamomile »

Humans can conceive of the concept of "killing", and barring a few edge cases it's a consistent and coherent model of a single concept. I don't know exactly how, but doing the same for an AI should be hypothetically possible at some point in the future (and the premise of the discussion is a future in which AIs are a thing).
RobbyPants
King
Posts: 5201
Joined: Wed Aug 06, 2008 6:11 pm

Post by RobbyPants »

Murtak wrote:
RobbyPants wrote:Kaelik is right. You don't want it making decisions on whether or not to kill people. You build it right in that it can't fucking kill people.
I'm not even sure that is possible. How do you define "killing people"? What are people? What is killing?
We acknowledged that earlier. My point was, if we knew how to tell it not to kill, we wouldn't have the robot "learn" not to kill; we'd build it in as a non-overridable routine.
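Something like this sketch in Python, say (all the names here are invented): the learned part only ever proposes actions, and a fixed wrapper it holds no reference to vetoes the forbidden ones. Of course, this only works if forbidden actions can actually be recognized, which is exactly Murtak's objection.

[code]
# A non-overridable safety routine: the (possibly learned) policy
# proposes, a hard-coded interlock disposes. The policy never gets a
# reference to the wrapper, so there is nothing for it to override.

FORBIDDEN = {"fire_weapon", "strike_human"}  # assumes bad verbs are enumerable

def make_safe_agent(policy):
    def act(observation):
        proposal = policy(observation)
        if proposal in FORBIDDEN:
            return "halt"  # safe default replaces the forbidden proposal
        return proposal
    return act

# Even a policy that always wants to shoot gets vetoed:
trigger_happy = lambda observation: "fire_weapon"
agent = make_safe_agent(trigger_happy)
print(agent({"target": "anything"}))  # -> halt
[/code]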
Parthenon
Knight-Baron
Posts: 912
Joined: Sat Jan 24, 2009 6:07 pm

Post by Parthenon »

Chamomile wrote:Humans can conceive of the concept of "killing", and barring a few edge cases it's a consistent and coherent model of a single concept. [...]
What on earth are you on about?

Is assisted suicide killing someone? Is an execution the same as killing someone? Is knowingly and deliberately not stopping a death the same as killing? Is only murder killing, or is manslaughter killing too? Is it knowingly causing a death? If so, does it matter how likely the action was to cause the death?

I'm not looking for answers to these questions, just pointing out that people currently have different opinions on these questions.

Saying that "killing" is a self-evident term is retarded, because people don't agree on what it is, and even when they can agree on a definition they can still argue about whether a given instance is killing or not.

That's the same bullshit as "natural laws" and "these rights are held to be self-evident" or whatever the phrasing is.
Josh_Kablack
King
Posts: 5318
Joined: Fri Mar 07, 2008 7:54 pm
Location: Online. duh

Post by Josh_Kablack »

I would apologize for the Necro, but really it's more Divination:
FrankTrollman wrote:
Seriously: like the third thing people will do once they have machines that can think and fuck is make a machine that thinks it doesn't want to fuck and fuck it anyway.

-Username17
http://www.syfy.com/syfywire/famous-sex ... irculation

How in the seven hells did you know that would be comment #3?
Last edited by Josh_Kablack on Sun Jun 24, 2018 6:44 am, edited 1 time in total.
"But transportation issues are social-justice issues. The toll of bad transit policies and worse infrastructure—trains and buses that don’t run well and badly serve low-income neighborhoods, vehicular traffic that pollutes the environment and endangers the lives of cyclists and pedestrians—is borne disproportionately by black and brown communities."
Prak
Serious Badass
Posts: 17345
Joined: Fri Mar 07, 2008 7:54 pm

Post by Prak »

Because humans are unrelentingly terrible.
Cuz apparently I gotta break this down for you dense motherfuckers- I'm trans feminine nonbinary. My pronouns are they/them.
Winnah wrote:No, No. 'Prak' is actually a Thri Kreen impersonating a human and roleplaying himself as a D&D character. All hail our hidden insect overlords.
FrankTrollman wrote:In Soviet Russia, cosmic horror is the default state.

You should gain sanity for finding out that the problems of a region are because there are fucking monsters there.
Chamomile
Prince
Posts: 4632
Joined: Tue May 03, 2011 10:45 am

Post by Chamomile »

So long as we're thread necroing: Parthenon appears to have gotten "killing" confused with "murder", and I'm not sure why I didn't point this out six years ago. None of the things he's talking about are even slightly difficult to categorize, not just for me, but as a matter of basic definitions (exception: deliberately not preventing a death is one of the edge cases I referred to, and even then, while it's not entirely clear whether that counts as "killing", it's very clear that as a matter of safety we should program the AI to consider it so).

Is execution killing someone? Yes, obviously. Not as a moral position unique to me or anything, but just because execution obviously includes killing someone. If you told someone "it's not murder, it's execution," they'd get that you were taking a specific moral position. If you told someone "we're not killing him, we're executing him," they might assume you meant "we're not murdering him" from context, but if they do take you literally, they'll think you're crazy. You can't execute someone without killing them. Executions involve killing. What else would that word mean?

Is assisted suicide killing someone? Yes. Is manslaughter killing? Yes. Are unforeseeable accidents killing? Yes (provided the killer directly caused the death). None of this is controversial. Whether or not they're murder or should otherwise be criminal acts is a controversy. Whether or not they're killing is obvious. Program robots not to kill people and they'll be strongly averse to doing any of these things.
Grek
Prince
Posts: 3114
Joined: Sun Jan 11, 2009 10:37 pm

Post by Grek »

Chamomile wrote:You can't execute someone without killing them. Executions involve killing. What else would that word mean?
Something to do with brain uploads, probably. Let's execute chamomile.exe, shall we?
Chamomile wrote:Grek is a national treasure.
erik
King
Posts: 5863
Joined: Fri Mar 07, 2008 7:54 pm

Post by erik »

Grek wrote:
Chamomile wrote:You can't execute someone without killing them. Executions involve killing. What else would that word mean?
Something to do with brain uploads, probably. Let's execute chamomile.exe, shall we?
Yeah, there are .exe's and executive actions and so on, but Parthenon wasn't saying that. He was mostly talking out his ass. There are edge cases where people have to determine whether it is reasonable to say that some action or inaction led to a death, but generally "killing" is not a mysterious word. Most of his rhetorical questions have really easy answers.
Occluded Sun
Duke
Posts: 1044
Joined: Fri May 02, 2014 6:15 pm

Post by Occluded Sun »

You deal with Robo-Hannibal-Lecter the same way you deal with Human-Hannibal-Lecter.

More to the point, you induce AIs not to go Lecter-ish the same way humans are induced not to: by having built-in drives that are reinforced by cultural programming.
"Most men are of no more use in their lives but as machines for turning food into excrement." - Leonardo di ser Piero da Vinci
Stahlseele
King
Posts: 5975
Joined: Wed Apr 14, 2010 4:51 pm
Location: Hamburg, Germany

Post by Stahlseele »

Won't work.
The three laws of robotics simply won't be implemented as soon as it turns out that they are a nuisance... for example, for killbots.
Welcome, to IronHell.
Shrapnel wrote:
TFwiki wrote:Soon is the name of the region in the time-domain (familiar to all marketing departments, and to the moderators and staff of Fun Publications) which sees release of all BotCon news, club exclusives, and other fan desirables. Soon is when then will become now.

Peculiar properties of spacetime ensure that the perception of the magnitude of Soon is fluid and dependent, not on an individual's time-reference, but on spatial and cultural location. A marketer generally perceives Soon as a finite, known, yet unspeakable time-interval; to a fan, the interval appears greater, and may in fact approach the infinite, becoming Never. Once the interval has passed, however, a certain time-lensing effect seems to occur, and the time-interval becomes vanishingly small. We therefore see the strange result that the same fragment of spacetime may be observed, in quick succession, as Soon, Never, and All Too Quickly.
Korwin
Duke
Posts: 2055
Joined: Fri Feb 13, 2009 6:49 am
Location: Linz / Austria

Post by Korwin »

Red_Rob wrote: I mean, I'm pretty sure the Mayans had a prophecy about what would happen if Frank and PL ever agreed on something. PL will argue with Frank that the sky is blue or grass is green, so when they both separately piss on your idea that is definitely something to think about.
maglag
Duke
Posts: 1912
Joined: Thu Apr 02, 2015 10:17 am

Post by maglag »

Stahlseele wrote:Won't work.
The three laws of robotics will simply not be implemented, as soon as it turns out that they are a nuisance . . for example, killbots.
They're a nuisance in basically all fields.

1-Killbots indeed; drones are built to kill humies.
2-You don't want your robots to obey everybody and anybody, just a few select people, and even then you want failsafes in case an allowed user tries to order something stupid.
3-Plenty of robots are gonna be sent on what are basically suicide missions or one-way trips, so there's not a lot of sense in giving them self-preservation instincts. Even for a domestic bot, eventually it's more efficient to send it to the recycle bin and get the new upgraded version. Capitalism also says you want your robots to break down sooner rather than later, so people are always buying new models.
FrankTrollman wrote: Actually, our blood banking system is set up exactly the way you'd want it to be if you were a secret vampire conspiracy.
GreatGreyShrike
Master
Posts: 208
Joined: Tue Feb 18, 2014 8:58 am

Post by GreatGreyShrike »

Every time people reference Isaac Asimov's three laws seriously in a discussion about the future, I wince.

I mean, more than half of the stories Asimov wrote in his Robots continuity were about how robots following the rules didn't lead to good results. Situations where the rules failed were what the stories were about. Asimov's stories were effectively arguments that the rules were nowhere near good enough, demonstrated through ways for the rules to fail.
nockermensch
Duke
Posts: 1898
Joined: Fri Jan 06, 2012 1:11 pm
Location: Rio: the Janeiro

Post by nockermensch »

"Emergent malignant AI" is a science-fiction staple with the same level of realism of FTL, teleportation or martian invaders. That's it, it's something that people took kind of seriously before advances of science showed that reality just doesn't work like that. Its fundamental error is assuming that "having a will" is somewhat an emergent property of inteligence, when it's actually a property of beings designed (by nature via the genetic algorithm in our case) for reproduction and survival. Microsoft Excel is already pretty much brilliant, but it doesn't actually want to solve problems. Google's specialist AIs likewise perform well above humankind's best on several specialized tasks, but they just don't have the personality required to rebel, take control or move outside their programing, because seriously, what the fuck.

That being said, "AI wars" will just happen. It's inevitable by this point. As in, military systems where targeting and firing are automated are probably already researched and ready to ship, because "killing people" is a specialized task at which you can train an AI to be much better than human soldiers. I simply refuse to believe that the USA's, Russia's or China's militaries don't already have machine vision systems trained to identify people as targets, coupled with some machine gun turrets / rocket batteries / whatever that aim and fire where the system tells them to, very fast. Or swarms of quadcopter drones, each one carrying a shaped charge, with software trained to swarm an area, communicating with each other to cover all the space, find targets, move right next to them and go boom.
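The coordination part isn't exotic either. Here's a toy Python version of "cover all the space" (centralized and lossless here, which a real swarm is not): each grid cell goes to the nearest drone, with a workload penalty so the area splits roughly evenly.

[code]
import math

def assign_search_cells(drones, grid_w, grid_h):
    # drones: list of (x, y) positions. Returns {drone index: list of cells}.
    claims = {i: [] for i in range(len(drones))}
    for cell in [(cx, cy) for cx in range(grid_w) for cy in range(grid_h)]:
        # Each cell goes to the drone with the best distance/workload tradeoff.
        best = min(claims, key=lambda i: math.dist(drones[i], cell) + 2 * len(claims[i]))
        claims[best].append(cell)
    return claims

# Two drones in opposite corners split a 10x10 area roughly in half:
coverage = assign_search_cells([(0, 0), (9, 9)], 10, 10)
print({i: len(cells) for i, cells in coverage.items()})
[/code]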

This doesn't lead to "out of control AIs", because these systems are just slightly smarter weapons and don't actually want to fire or kill. There will be friend-or-foe designators of some kind to keep them from fragging their own forces, and of course there will be accidents where these don't work, but despite the fact that the press will delight in calling these "out of control killer robots", it'll be just yet another friendly-fire incident.

To sum up, battlefields should be dominated by fast-reacting, machine-precision weapons by now. They aren't because we're in a period of peace where no major power is facing an existential threat, so all recent wars are more about selling expensive, non-critical systems to poorer countries and letting poor people die for profit. But once shit hits the fan and, say, the USA and China have a serious scuffle, it'll quickly dawn on everybody else that humans no longer belong on most battlefields, any more than they belong on most factory floors. It'll be robot wars from that point on, but it'll still be humans giving the orders to deploy the murderbots.

Then on the next level, military tactics and strategy also seem like the kind of specialized task you could train an AI to out-perform people at, so one could also see the defense ministries of countries that want to remain around building and using the shit out of such systems. But yet again, even when the Strategic Defense System's top recommendation for the country's survival is "improve the Strategic Defense System" (and it has been that for years), that's still not the "out of control AI" doomsday scenario, but the much more realistic scenario of AI-enhanced people making the gulf between the haves and have-nots even more vast.

Seriously, if we ever end up with something like actual artificial sentience, it'll probably come from fucking videogames. With the "fucking" in the previous sentence not being there for emphasis: there is a lot of money to be had in selling "real girls with real personalities" to lonely men, which means that Japan has the technical expertise, the social incentive and the right culture for the enterprise. I won't even find it strange if the world's first strong AI ends up being made by KISS instead of by Google (and certainly not by DARPA / the Pentagon). And of course it'll be raped.

This stupid timeline, I swear.
@ @ Nockermensch
Koumei wrote:After all, in Firefox you keep tabs in your browser, but in SovietPutin's Russia, browser keeps tabs on you.
Mord wrote:Chromatic Wolves are massively under-CRed. Its "Dood to stone" spell-like is a TPK waiting to happen if you run into it before anyone in the party has Dance of Sack or Shield of Farts.
Pseudo Stupidity
Duke
Posts: 1060
Joined: Fri Sep 02, 2011 3:51 pm

Post by Pseudo Stupidity »

nockermensch wrote: Or swarms of quadcopter drones, each one carrying a shaped charge, with software trained to swarm an area, communicating with each other to cover all the space, find targets, move right next to them and go boom.
Drone swarm technology has been a thing since at least 2013. I don't think a world power has (openly) used it yet, but we absolutely have the ability to create autonomous drone swarms already. Making them explode is the easy part. The real problem with using drone swarms is that drones aren't all that small yet, need to be networked with each other (so you can 100% fuck them up with network attacks), and they're pretty vulnerable to bullets. You'd probably want to use them for assassinations, because missiles are better than them at everything if you're willing to spend the money. As drones get better the swarms will get better, but you've still got to deal with the fact that they'll be limited by how big of a boom they can make and whether they can actually penetrate their targets. A missile can demolish a building that an infinite number of quadcopters with explosives couldn't scratch.
sandmann wrote:
Zak S wrote:I'm not a dick, I'm really nice.
Zak S wrote:(...) once you have decided that you will spend any part of your life trolling on the internet, you forfeit all rights as a human.If you should get hit by a car--no-one should help you. If you vote on anything--your vote should be thrown away.

If you wanted to participate in a conversation, you've lost that right. You are a non-human now. You are over and cancelled. No concern of yours can ever matter to any member of the human race ever again.
Josh_Kablack
King
Posts: 5318
Joined: Fri Mar 07, 2008 7:54 pm
Location: Online. duh

Post by Josh_Kablack »

Computers have been solving targeting problems since at least ENIAC.
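ENIAC's headline job was artillery firing tables, which means numerically integrating a shell's flight against air resistance, over and over, once per elevation. The whole genre fits in a few lines of Python (constants are illustrative, not any real gun):

[code]
import math

def shot_range(v0, elevation_deg, drag=0.0001, g=9.81, dt=0.01):
    # Crude Euler integration of a point-mass shell with quadratic air
    # drag. Returns downrange distance in meters at return to ground level.
    vx = v0 * math.cos(math.radians(elevation_deg))
    vy = v0 * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt        # drag opposes motion
        vy -= (g + drag * speed * vy) * dt  # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x

# A firing table is just this, tabulated once per elevation:
for elevation in (15, 30, 45, 60):
    print(f"{elevation:>2} deg: {shot_range(450.0, elevation):8.0f} m")
[/code]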
"But transportation issues are social-justice issues. The toll of bad transit policies and worse infrastructure—trains and buses that don’t run well and badly serve low-income neighborhoods, vehicular traffic that pollutes the environment and endangers the lives of cyclists and pedestrians—is borne disproportionately by black and brown communities."
nockermensch
Duke
Posts: 1898
Joined: Fri Jan 06, 2012 1:11 pm
Location: Rio: the Janeiro

Post by nockermensch »

Josh_Kablack wrote:Computers have been solving targeting problems since at least ENIAC.
I was talking more about the real-time machine vision systems needed to figure out "that's a human head" in a chaotic battlefield situation. You know, this:
Korwin wrote:Apropos Killer Drones
I wrote my post while this video was waiting to start playing in another window. After watching it, I feel I should just remove everything I wrote about drone swarms and link to Korwin's post instead.
@ @ Nockermensch
Koumei wrote:After all, in Firefox you keep tabs in your browser, but in SovietPutin's Russia, browser keeps tabs on you.
Mord wrote:Chromatic Wolves are massively under-CRed. Its "Dood to stone" spell-like is a TPK waiting to happen if you run into it before anyone in the party has Dance of Sack or Shield of Farts.
maglag
Duke
Posts: 1912
Joined: Thu Apr 02, 2015 10:17 am

Post by maglag »

nockermensch wrote:"Emergent malignant AI" is a science-fiction staple with the same level of realism of FTL, teleportation or martian invaders. That's it, it's something that people took kind of seriously before advances of science showed that reality just doesn't work like that. Its fundamental error is assuming that "having a will" is somewhat an emergent property of inteligence, when it's actually a property of beings designed (by nature via the genetic algorithm in our case) for reproduction and survival. Microsoft Excel is already pretty much brilliant, but it doesn't actually want to solve problems. Google's specialist AIs likewise perform well above humankind's best on several specialized tasks, but they just don't have the personality required to rebel, take control or move outside their programing, because seriously, what the fuck.

That being said, "AI wars" will just happen. It's inevitable by this point. As in, military systems where targeting and firing are automated are probably already researched and ready to ship, because "killing people" is a specialized task that you can train AI to be much better than human soldiers. I simply fail to disbelieve that USA, Russia or China's militaries don't already have machine vision systems trained to identify peopletargets coupled with some machine gun turrets / rocket batteries / whatever that aim and fire where the system tells them to very fast. Or swarms of quadcopter drones, each one carrying a shaped charge, with software trained to swarm an area, communicating with each other to cover all the space, find targets, move right next to them and go boom.

This doesn't lead to "out of control AIs" because these systems are just slightly smarter weapons and don't actually want to fire or kill. There will be friend-or-foe designators of some kind to keep them from fragging their own forces, and of course there will be accidents where these don't work, but despite the fact that the press will delight in call these "out of control killer robots", it'll be just yet another friendly-fire incident.
Gundam Iron-Blooded Orphans has, as one of its villains, ancient giant autonomous robots with swarms of support drones: they probably started under human control, but when their owners were defeated they defaulted to "keep killing the enemy" programming, with no way to shut them down besides destroying them.

And considering that we already have mutually-assured-destruction nuke systems where, even if you manage to take out most of the enemy with an alpha strike, there'll be some hidden bunkers and submarines that'll send back their own nukes to fuck you up, it would only be natural for smart killer bots to also be programmed with an "if no longer getting orders from the authorized meatbags, go on a killing rampage" line.
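That line is just a dead-man's switch, and as a toy Python state machine (hypothetical, and hopefully nobody's actual doctrine) it's depressingly short:

[code]
import time

CHECKIN_TIMEOUT = 3600.0  # seconds of silence before autonomy kicks in

class Warbot:
    def __init__(self):
        self.last_order_at = time.monotonic()

    def receive_order(self, order, authenticated):
        if authenticated:  # unauthenticated traffic doesn't reset the clock
            self.last_order_at = time.monotonic()
            print(f"executing: {order}")

    def tick(self):
        silence = time.monotonic() - self.last_order_at
        if silence > CHECKIN_TIMEOUT:
            return "autonomous_default"  # the branch everyone should worry about
        return "await_orders"

bot = Warbot()
bot.receive_order("hold position", authenticated=True)
print(bot.tick())  # "await_orders" -- until the meatbags go quiet
[/code]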
nockermensch wrote:[...] it'll quickly dawn on everybody else that humans no longer belong on most battlefields, any more than they belong on most factory floors. It'll be robot wars from that point on, but it'll still be humans giving the orders to deploy the murderbots.
The USA has and uses plenty of tanks and battleships and remote-controlled drones and gunships and bombers and whatnot, but still uses plenty of infantry (their own plus glorious local rebels). And most factories still have plenty of meatbags inside, working alongside the machines.

Thing is, humies are something that's already there, and in an escalating war in particular, even if countries are deploying super killer bots, they can always further increase their fighting power by throwing weapons at humies and sending them off to perform guerrilla tactics or suicide bombings or what have you.

The only way human infantry would no longer be used by the military would be if the atmosphere ended up so fucked up that they couldn't even survive outside without specialized, expensive equipment.
FrankTrollman wrote: Actually, our blood banking system is set up exactly the way you'd want it to be if you were a secret vampire conspiracy.
Thaluikhain
King
Posts: 6187
Joined: Thu Sep 29, 2016 3:30 pm

Post by Thaluikhain »

nockermensch wrote:That being said, "AI wars" will just happen. It's inevitable by this point. As in, military systems where targeting and firing are automated are probably already researched and ready to ship, because "killing people" is a specialized task at which you can train an AI to be much better than human soldiers.
CIWS, point defence for surface ships (with some land applications), has been in existence for many years. Automated, and so far without too many friendly-fire accidents.
nockermensch wrote:[...] it'll quickly dawn on everybody else that humans no longer belong on most battlefields, any more than they belong on most factory floors. It'll be robot wars from that point on, but it'll still be humans giving the orders to deploy the murderbots.
Disagree there. People have been saying forever that the next advance in technology will make normal human infantry obsolete, and it keeps not happening.

Killing people is easy. But there's a lot more to winning wars than that. You need to have people on the ground to man checkpoints, search buildings and the like.

Or if you just want to kill everyone, you can just bomb places to bits, which you have been able to do since mid-WW2 and got a lot better at by the very end. You don't need robots for that.
maglag
Duke
Posts: 1912
Joined: Thu Apr 02, 2015 10:17 am

Post by maglag »

Thaluikhain wrote: Or if you just want to kill everyone, you can just bomb places to bits, which you have been able to do since mid-WW2 and got a lot better at by the very end. You don't need robots for that.
Just digging in is a pretty effective defense against bombing.

Like, the Allies bombed Berlin pretty damn hard, but it took the Russians waltzing into the city for the Germans in the bunkers to finally surrender.

Later on, the capital of North Korea got literally razed to the ground by bombers, but still kept working because they had moved their important stuff underground.

And then of course there's Vietnam, where absolute air superiority and trying to barbecue all the locals still failed.
FrankTrollman wrote: Actually, our blood banking system is set up exactly the way you'd want it to be if you were a secret vampire conspiracy.