Académie des technologies

Military robots exist and are increasingly used, in various theatres of operation and in every medium: air (drones), sea, ground and perhaps, soon, space. Their market is expanding rapidly.

Used for observation, reconnaissance, mine clearance and even firing (e.g. armed drones), they offer the advantage of keeping the operator away from the battlefield, thus reducing friendly losses, and are usually recoverable. Furthermore, equipment costs can be expected to fall, since the constraints linked to a human presence on board are removed. Drones also allow a permanence on site that a piloted aircraft could not provide.

However, compared with other armaments, these robots raise important and novel ethical issues, especially when they are endowed with great autonomy of action, including for firing. Their complexity may lead to a certain unpredictability, responsible for unacceptable “blunders”. Their ease of use, combined with the apparent “invulnerability” that remoteness confers on the operator, may result in excessive violence: one speaks of a “joystick war”, a war treated like a video game!

It is therefore essential to fully control the deployment of these robots at all levels, from the political decision to develop the weapon through to its operational use. This engages the responsibility of the various actors, and the legal aspects are important. Until now, the robot has been denied any “responsibility” of its own, but this position may evolve.
Indeed, is the military robot not already outwardly equipped with a certain “consciousness”?

Functional properties of human consciousness are already present in it. Some are worth developing, such as a precise representation of the environment (material and human), including a precise representation of targets, in order to avoid “blunders” as far as possible. Others could be introduced or, on the contrary, deliberately avoided: a robot devoid of “emotions” or “feelings” might, in some cases, prove more objective and more ethical than a human being. Methods are beginning to emerge for improving the robot’s “consciousness” (including its “moral” dimension?), in particular through self-training.