And when the drone decides to completely destroy some city in the Middle East, what happens?

And never give machines the ability to choose between life and death.
Cleaning houses and serving tea, yes, but not dropping bombs.
 
urgh...

I'd better get cranking on building my underground bunker for judgement day... 35 years should be enough time for me to build it...
 
Robots that harvest their own fuel supply, decision-making attack drones.

Ah yes. I can see the scenario now. A robot tank running low on fuel scans the area for potential fuel sources. It finds no dead bodies, only some civilians wandering about. Its program dictates that live bodies are not viable fuel, but dead bodies are. Being a military robot, its mission is its first priority. Bang bang. Viable fuel source. :eek::eek:
 
What if you gave the missile the ability to abort on its own recognizance? It gets there and realizes that this is a heavily populated area, or it can't positively ID the target. Instead, it just flies home, not having flown into something, or not having launched its missiles, I don't know.

well.. i don't want anything coming back with a load on and "something" having gone wrong at the original target.

What if the "something" was not so much that the target went bad, but some of the sensors on the thing went bad. Then on the way home, who knows what else degrades.

Down this path, trying to make it safer, lie ever more "failsafe" mechanisms that jack up the cost of a weapon until one of them costs enough to feed the poor for a year. If they have to build the thing, then let it be built to make a one way trip and hope it doesn't kill any more innocent civilians than get killed by less "smart" predators now.

I dunno, I was just throwing out hypotheticals off the top of my head.

Anyway, I read the document that was referenced - Google Docs has a copy, too - and people are getting their Terminator buttons pushed without really reading much about what capabilities and degree of autonomy these things will have.

These things are still human-controlled systems. The ground attack UAV that's orbiting the area still has its targets designated by a human. The autonomous wingman type UAV is still keying off of a manned vehicle. It just does the intermediary steps on its own. You know, kind of like how a USB drive will mount itself.

If people were really frightened of unmanned weapons autonomously killing humans in any real sense, they'd be more concerned about landmines, which are a reality in the here and now, vs. yet another future warfare concept from the propeller beanie crowd at the USAF that may or may not happen.

Oh man you got that so right. They are a true abomination.
 
Call for debate on killer robots

An international debate is needed on the use of autonomous military robots, a leading academic has said.

Noel Sharkey of the University of Sheffield said that a push toward more robotic technology used in warfare would put civilian life at grave risk.

Technology capable of distinguishing friend from foe reliably was at least 50 years away, he added.

However, he said that for the first time, US forces mentioned resolving such ethical concerns in their plans.

"Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas," Professor Sharkey said at a meeting in London.

"The problem is that this is all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction."
BBC.
 
Engadget.

Are they completely insane? :rolleyes:

I know it's a way off, but at no point in time should a machine be making life-and-death decisions on its own. Absurd.

"It's all still extremely vague,"

Uh, yeah, that's the whole 2047 aspect of things right there, only 38 years away, much ado about nothing, move along, move along.
 
Oh yeah, that Cold War is just so over, will never return, right?

"Russian Subs Patrolling Off East Coast of U.S."
http://www.nytimes.com/2009/08/05/world/05patrol.html?_r=1
Once among the world’s most powerful forces, the Russian Navy now has very few ships regularly deployed on the open seas...

How's the F-22 going to stop a submarine?

And yes, the Cold War era is over. New military "tensions" will not reinstate it and revert the world to a nuclear arms race and hyped fear-mongering about how the commies are coming.
 
I would be more concerned if they could actually keep the technology from being stolen... à la the F-22 specs already being in foreign hands, and they still do not know how exactly the theft occurred.
 