
edesignuk
Moderator emeritus, original poster
Leave it to the military to dream big. In its recently released "Unmanned Aircraft Systems Flight Plan 2009-2047" report, the US Air Force details a drone that could fly over a target and then make the decision whether or not to launch an attack, all without human intervention. The Air Force says that increasingly, humans will monitor situations, rather than be deciders or participants, and that "advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input." Programming of the drone will be based on "human intent," with real actual humans monitoring the execution, while retaining the authority and ability to override the system. It's all still extremely vague, with literally no details on exactly how this drone will come into existence, but we do know this: the Air Force plans to have these dudes operational by 2047. We're just holding out to see what those "classified" pages are all about.
Engadget.

Are they completely insane? :rolleyes:

I know it's a way off, but at no point in time should a machine be making life-and-death decisions on its own. Absurd.
 
I really hope this does not come to pass. I am by no means anti-military. However, I can see no justification for a machine killing a human if that human only poses a threat to a machine; that is saying the machine's existence is worth more than a human's.

The only way I could see using a machine for this is to stop one human who is an immediate threat to another person's life, not for tactical threats or military objectives. If these machines were used for purely military objectives, where there is no risk to the life of the operator or their superiors, only the target, then I would say they would be without honor and should not disgrace the uniform by wearing it.

Though I am sure machines like this will come to pass, they should only be used for machine-to-machine combat: destruction of missiles and communications equipment. Once one side runs out of machines, both should resort to conventional warfare, with the loss of life it would entail.
 
Photo:

[Attachment: stealthpubl.jpg]
Robots that just harvest their own fuel supply, decision-making attack drones.



I'm almost looking forward to nuclear winter now.
 
It's just one of the most universally stupid things I could imagine anyone doing.

The worst part is that I'm certain they will achieve their goals, probably sooner than expected, by which time it will be too late to stop it.

It's that old line from Jurassic Park: "Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

Just sickening.

:(
 
What's even more scary - the entire system is written by Khalid Shaikh! :p

I wonder how they can make such a precise prediction, given that some of the people who'll work on that project aren't even born yet!
 
Wait. So the little piles of twigs and brush I carefully put out back by the stone wall between my meadow and backyard, for wrens, rabbits and whoever else likes to park there for safety and for raising young in a thickety sort of environment, this is what the fricken drones will decide to eat for lunch? And what about my 400 square feet of veggies growing innocently towards their place on my own dinner plate?

Nice. Now I have to go buy some stamps and write some serious mail to Schumer and Gillibrand et al. WTF. And here I was finally going to spring for some coffee Häagen-Dazs at the general store. Saved by fear of twig-eating attack drones, oy vey.

Well first things first. The pols are going on vacation so I'm off for that ice cream after all.
 
It's just one of the most universally stupid things I could imagine anyone doing.

The worst part is that I'm certain they will achieve their goals, probably sooner than expected, by which time it will be too late to stop it.

It's that old line from Jurassic Park: "Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

So is your problem with killing or is your problem with machines doing the killing?

As for that Jurassic Park quote, I always thought it was witless on multiple levels.

The only way I could see using a machine for this is to stop one human who is an immediate threat to another person's life, not for tactical threats or military objectives. If these machines were used for purely military objectives, where there is no risk to the life of the operator or their superiors, only the target, then I would say they would be without honor and should not disgrace the uniform by wearing it.

Honor in warfare? What are you talking about? Modern warfare is predicated on the idea of unfair combat. One of the first concepts my instructors drilled into me when I was in training was that if you're engaged in a fair fight with the enemy, you're doing it wrong.

In fact, it should be so unfair, the enemy would rather not fight in the first place.

You're naive if you think systems similar to what you're describing aren't in place already. There are AF guys who fly/monitor UCAVs, pull a trigger/push a button, then leave the office and stop by Taco Bell on their way home.

We do this kind of thing already. What do you think an ICBM does? Or better yet, a cruise missile with a pre-programmed flight path. Instead of being dumb and always hitting the target, you're giving the machine the ability to abort the mission on its own. You already metaphorically pulled the trigger when it left the airfield. Or when its weapons were armed, which I'm sure a human will still do.
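Just to make the comparison concrete: a pre-programmed flight path really is nothing more than a list of coordinates fixed before launch. A minimal Python sketch (the waypoints, speed, and function names are hypothetical, not any real guidance system):

```python
import math

def advance(position, waypoint, speed):
    """Move straight toward the current waypoint by one time step."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:  # waypoint reached this step
        return waypoint, True
    return (position[0] + speed * dx / dist,
            position[1] + speed * dy / dist), False

route = [(0.0, 10.0), (5.0, 12.0), (9.0, 7.0)]  # set at launch, never revisited
pos = (0.0, 0.0)
for wp in route:
    reached = False
    while not reached:
        pos, reached = advance(pos, wp, speed=1.0)
print("terminal point:", pos)  # the machine chose nothing en route
```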
 
That was a good movie, and a pretty good example of how awful this technology could become.

Here is also a concept photo of the drone:

[Image: irobot1.jpg]

That's not a concept of the drone, that's iRobot! :p

Don't these military fools watch the movies... iRobot, Matrix, Terminator, RoboCop (2? or 3?)... etc. SkyNet is coming!

Manned or unmanned - this is a bad idea. Like another poster said though, we won't have a civilization left to go to war with in 35 years at this rate.
 
Engadget.

Are they completely insane? :rolleyes:

I know it's a way off, but at no point in time should a machine be making life-and-death decisions on its own. Absurd.

If you have been following the open research over the last 40 years or so, you can pretty much guess what's up. It's always about pattern matching.

Weapons have been making "decisions" for decades. An anti-tank mine is told "explode when a massive iron object is nearby." Torpedoes can accept much more complex instructions that include circling and waiting.

What we are talking about here is a gradual expansion of the complexity of the instructions that weapons can understand.
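For a sense of just how simple those "decisions" have historically been, here's a toy Python sketch (the threshold, states, and names are all hypothetical, not real fuze specs): a magnetic-influence mine is effectively a single if-statement, and a programmable torpedo adds only a small state machine on top.

```python
MAGNETIC_THRESHOLD = 50.0  # hypothetical sensor units

def mine_should_detonate(magnetic_reading: float) -> bool:
    """The entire 'decision' an influence mine makes: one comparison."""
    return magnetic_reading > MAGNETIC_THRESHOLD

def torpedo_next_state(state: str, target_detected: bool) -> str:
    """A torpedo's richer instruction set: run out, circle and wait, attack."""
    if state == "transit":
        return "search"
    if state in ("search", "circle"):
        return "attack" if target_detected else "circle"
    return state  # "attack" is terminal

# The mine "decides" nothing a thermostat couldn't:
print(mine_should_detonate(72.3))  # True

# The torpedo circles until something shows up:
state = "transit"
for detected in (False, False, True):
    state = torpedo_next_state(state, detected)
print(state)  # "attack"
```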

One way the military tries to avoid friendly-fire incidents is to keep track of where their own people "can't be". For example, a unit is told to move to "A" but not to cross line "BC". The commander then knows he can (say) drop bombs on one side of "BC" but not the other. I assume they will continue to use controls like these.
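That "line BC" control is easy to picture in code. A minimal sketch (coordinates and names hypothetical): which side of the line a target sits on falls out of the sign of a 2-D cross product, and fires are cleared only on the side friendlies were ordered not to enter.

```python
def side_of_line(b, c, p):
    """>0 if p is left of the line from b to c, <0 if right, 0 if on it."""
    return (c[0] - b[0]) * (p[1] - b[1]) - (c[1] - b[1]) * (p[0] - b[0])

def fires_cleared(b, c, target, cleared_side: int) -> bool:
    """Permit engagement only if the target is strictly on the cleared side."""
    s = side_of_line(b, c, target)
    return s != 0 and (s > 0) == (cleared_side > 0)

# Example: line from B=(0,0) to C=(10,0); friendlies ordered to stay below it,
# so fires are cleared only on the upper (left) side.
print(fires_cleared((0, 0), (10, 0), (5, 3), cleared_side=+1))   # True
print(fires_cleared((0, 0), (10, 0), (5, -2), cleared_side=+1))  # False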
 
That's not a concept of the drone, that's iRobot! :p
..

and I'm pretty sure that's the joke. ;)



I think giving machines the ability to decide for themselves whether or not to attack is one of the dumbest ideas I've ever heard. I know, LOL, Terminator, iRobot, etc., but perhaps there is a little message to be learned from those. Surely it's not too much of a stretch to find a problem with a machine that can make decisions (about "launching an attack", no less) and repair itself. If the fit hit the shan, humans are not so equally matched against machines with capabilities like that.
Maybe I'm just getting old, but I for one don't really appreciate some of the technological "advances" we're making.
 
My problem is with a machine deciding [with apparent intelligence] completely on its own if someone needs to die or not. Not that unreasonable, I would have thought.

A human has already decided that someone needs to die. That's not meant to be callous.

This kind of thing is a step up from a "cruise missile": the type that is launched from very far away, flies a predetermined course, then explodes and kills something. When that missile was launched, a decision was made that this is what it was meant to do.

What if you gave the missile the ability to abort on its own recognizance? It gets there and realizes that this is a heavily populated area. Or it can't positively ID the target. Instead, it just flies home, not having flown into something. Or not having launched its missiles, I don't know.

If it did, the metaphorical trigger would have been pulled when it was armed and launched. A human made that decision, not a machine.
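A toy Python sketch of that asymmetry (the inputs and names are hypothetical): the human decision happens at arming and launch, and the only autonomous "decision" left to the machine afterward is whether to call it off.

```python
from dataclasses import dataclass

@dataclass
class SensorPicture:
    target_identified: bool  # did the weapon positively ID its assigned target?
    heavily_populated: bool  # is the area assessed as full of civilians?

def should_abort(picture: SensorPicture) -> bool:
    """The machine can only refuse to strike; it cannot pick a new target."""
    return (not picture.target_identified) or picture.heavily_populated

# Can't ID the target -> fly home instead of hitting something:
print(should_abort(SensorPicture(target_identified=False, heavily_populated=False)))  # True

# Clean picture -> the decision made at launch stands:
print(should_abort(SensorPicture(target_identified=True, heavily_populated=False)))   # False
```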
 
^ I don't think anyone doubts the potential, it's the "what if" factor that is rather alarming. I don't think it's THAT paranoid to distrust machines that can make decisions about attacks for themselves.

Generally speaking, I'm not really 'for' humans doing that either though.
 
^ I don't think anyone doubts the potential, it's the "what if" factor that is rather alarming. I don't think it's THAT paranoid to distrust machines that can make decisions about attacks for themselves.

Generally speaking, I'm not really 'for' humans doing that either though.

Agreed.
 