
Robotic Warfare

“Now I am become Death, the destroyer of worlds.” J. Robert Oppenheimer, father of the atomic bomb.

The evolution of technology has always been an intriguing topic in ethical philosophy: as with many other debates in ethics, countless views can be raised about technology, its rapid growth, and whether that growth is moral.

One aspect of this growth is weaponry and its constant improvement. Machines have played a crucial part in war, and since their creation their use has been supervised by humans. Rapid technical development, however, may make the manufacture of autonomous robots possible in the foreseeable future. Such robots would perform tasks on their own, without any human intervention, and when used as weapons they would have the power to make life-and-death decisions.

In the "Open Letter" on autonomous weapons, Elon Musk and his co-signatories discuss the dangers of killer robots and their use on the battlefield as soldiers. The signatories stress the need to ban these lethal autonomous weapons, which could have dangerous consequences and become "humanity's biggest existential threat," as Musk, one of the signatories, has put it.

I personally agree with the views stated in the letter, as I find the idea of robots acting autonomously on the battlefield and killing human beings in order to win a war troubling. I will explain why I believe the use of military robots is closer to immoral than moral by referring to normative ethical theories.

For a war to be declared, there must be many convincing reasons behind it for politicians to put their country at risk. The motivations that send soldiers into battle range from a thirst for conquest to the defense of one's own country against a foreign attack. According to Kant's rationalism, an action is right only if its motive is right; if we act for the wrong reasons, the action is morally wrong. If politicians wage war to defend their country from an outside threat, then using autonomous robots as the front line of their army could be morally acceptable, provided the robots are programmed to behave ethically (if that level is ever reached). But if the main motive of the war is territorial or economic gain, with the sole goal of fulfilling a country's selfish needs (oil, gold, and so on), then according to Kant the use of killer robots, which would increase one's chances of winning, would be morally wrong.

Additionally, Kant holds that we should not use people merely as means to our ends; that is, we should not use them to get what benefits us without respecting them. The question that arises is what value we assign to these machines. Currently, humans use robots as tools, so they have only instrumental value. But as scientists develop features that give these technologies a deeper level of involvement in our lives, will there come a time when they earn moral consideration of their own? For example, Sophia, a humanoid robot made by Hanson Robotics, was the first robot to be granted Saudi Arabian citizenship; does this open the way for robots in the near future to earn their own rights and become fully independent? Another problem that arises for deontological ethics concerns slavery. Military robots would be programmed to follow commands, achieving pre-programmed goals rather than their own. They would thus be slaves, mere means to human fulfillment.

Moreover, robots are known to have many advantages over our limited capabilities: they are smarter, quicker, more vigilant, emotionless, and immune to fatigue. But autonomous robots may not have the free will sometimes claimed for them. Artificial intelligence may be unable to determine whether something is trustworthy. And not only must we scrutinize the robots' decision making and its reliability; we must also make sure that the people developing these machines have no malicious intentions. What makes us so sure they will build them with ethical considerations, such as limiting unnecessary suffering and respecting human rights?

...
