Alan Boyle has an interesting post on the ethics of killer robots.
"Asimov contributed greatly in the sense that he put up a straw man to get the debate going on robotics," Arkin said. "But it's not a basis for morality. He created [the Three Laws] deliberately with gaps so you could have some interesting stories."
Even without the Three Laws, there's plenty in today's debate over battlefield robotics to keep novelists and philosophers busy: Is it immoral to wage robotic war on humans? How many civilian casualties are acceptable when a robot is doing the fighting? If a killer robot goes haywire, who (or what) goes before the war-crimes tribunal?
Would John McCain be against the torture of robots to extract information?
Why don't you ask him? Personally, I'm concerned about the unintended consequences of (as a poster put it) "giving potentially intelligent machines all of the weapons".
If this were a Scholastically rigorous inquiry rather than a "yuck response," people would have freaked out about the Falcon AAM back in the 1950s.
(Of course, Summer Glau is more maneuverable than a Falcon.)
Summer Glau (considered as a "terminator" robot with considerable knowledge at her/its disposal) probably also knows how to make more Summer Glaus. That opens the door to a great deal of mischief. OTOH, the worst a Falcon AAM could do was hit the wrong target.