Military robots exist and are increasingly used, across various theatres of operation and in different media: air (drones), sea, ground and perhaps, soon, space. Their market is expanding strongly.
Used for observation, reconnaissance, mine clearance and even firing (e.g. armed drones), they offer the advantage of keeping the operator away from the battlefield, thus reducing friendly losses, and they are usually recoverable.
However, these robots raise important ethical issues that are novel compared with other armaments, especially when they are endowed with a high degree of autonomy (of action), including over firing. Their complexity may lead to a certain unpredictability, responsible for unacceptable “blunders”. Their ease of use, combined with the operator’s apparent “invulnerability” afforded by remoteness, may result in excessive violence: one speaks of a “joystick war”, a war treated like a video game!
Indeed, is this military robot not already outwardly endowed with a certain “consciousness”?
Methods are beginning to appear for improving the robot’s “consciousness” (including its “moral” aspects?), in particular through self-training.