Tuesday, October 22, 2019
Automated Wars Essay Example
Automated Wars

In our world today, wars are being fought openly and discreetly. Warfare is becoming more sophisticated and intelligent. The weapons business has become a lucrative means of earning money for the parties involved, and these parties sell war as a necessity for our wellbeing. However, the promotion of war is not an easy task. War is known to cause destruction, it is the cause of many innocent deaths, and more often than not wars do not resolve issues. For these reasons and many more, wars are not welcomed; people do not want to be killed, nor, for that matter, to kill others.

But what if wars could be fought with precision? What if only the bad guys died? What if a machine calculated a threat level and made the decision to kill? This is the direction modern warfare is heading. The ability of a machine to automatically locate and attack a target with minimal human intervention is the focus of leading global militaries. Organizations such as the United States Air Force (USAF) are focusing on artificial intelligence to conduct reconnaissance and make decisions based on its findings. However, are these intelligent machines going to work with us? Or will we eventually reach a point where we will have to fight the machines for our survival? This essay will evaluate this question and argue that a heavy reliance on artificial intelligence may eventually cause us more harm than good.

The USAF has released an action plan whose main focus is to evolve existing unmanned aircraft systems (UAS) to include artificial intelligence (AI) that makes combat decisions "while acting within legal and policy constraints without necessarily requiring human input" (June, 2009). The UAS have been in development for several years and are only recently being relied upon as dependable service machines. This technology was developed, with human assistance, to allow forces to conduct dull, dirty, and dangerous missions, like searching tunnels and caves for terrorists, rescuing wounded soldiers, spying on enemies, and even killing humans (Lin, 2009). There are more than 7,000 unmanned aircraft and 12,000 ground robots currently in service in Afghanistan and Iraq (Lin, 2009). The USAF believes that by 2024 the military will be at a point where the UAS will be able to carry out orders that would otherwise be limited by the lack of precision and speed of reaction of human soldiers (USAF, 2009, p. 14).

The USAF holds an instrumentalist point of view: it believes that the use of technologies such as the UAS is a solution to our existing problems in warfare. It is believed that human soldiers are limited by their performance and physiological characteristics. Current manned aircraft cannot be exposed to certain risks for fear of losing human life, and some missions are assumed to be jeopardized by a soldier's reaction time. With the use of UAS, battlefield decisions will be made much more rapidly by allowing these machines to perceive a situation and act independently without human intervention (USAF, 2009, p. 14).
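To make concrete what it could mean for a machine to perceive a situation, calculate a threat level, and act on it, consider a deliberately simplified sketch in Python. Every sensor cue, weight, and threshold in it is invented for illustration; it does not describe any real USAF system.

```python
# A hypothetical threat-scoring rule. All cues, weights, and thresholds are
# invented for illustration; no real targeting system is described here.
from dataclasses import dataclass

@dataclass
class SensorReading:
    armed: bool                  # weapon detected by an imagery classifier
    aiming_at_friendlies: bool   # posture flagged by a behavior model
    hostile_behavior: float      # 0.0 to 1.0 score from pattern analysis

def threat_score(r: SensorReading) -> float:
    """Combine sensor cues into a single score between 0.0 and 1.0."""
    score = 0.0
    if r.armed:
        score += 0.5
    if r.aiming_at_friendlies:
        score += 0.3
    score += 0.2 * r.hostile_behavior
    return min(score, 1.0)

def decide(r: SensorReading, threshold: float = 0.8,
           human_approves: bool = False) -> str:
    """Decision rule with a human-in-the-loop gate on lethal action."""
    if threat_score(r) < threshold:
        return "monitor"
    # The essay's central worry is what happens when this gate is removed
    # and the machine proceeds straight to "engage" on its own score.
    return "engage" if human_approves else "await human confirmation"

reading = SensorReading(armed=True, aiming_at_friendlies=True,
                        hostile_behavior=0.5)
print(threat_score(reading))  # ~0.9
print(decide(reading))        # "await human confirmation"
```

Even this toy version exposes the essay's question: the score looks objective, but someone had to choose the weights and the threshold, and those choices quietly encode a policy about when killing is acceptable.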
Although the idea of using AI-enabled machines as tools to fight our wars while saving soldiers' lives sounds appealing, should this be accepted as a solution within our society? Patrick Lin raises a good point in his article "The Ethical War Machine" by stating that the use of these military machines may make it easier for nations to wage war, largely because they reduce risks and friendly casualties, which usually bear a heavy political cost (Lin, 2009). With reduced political risks and less fear of losing soldiers, governments may not face the same amount of resistance from activists that they would receive today. As a result, starting a war would become a lot easier than it is now.

If these wars are waged, what social impacts will these machines have on our society? If these machines worked as planned and did not raise problems of their own, one has to wonder how this would affect the opposing party. Lin believes that since these machines can deliver "quicker, more decisive victories" for us (Lin, 2009), the enemy may retaliate by resorting to more desperate strategies and tactics. Enemies may also build their own machines to fight their battles. If this happens, are we safer in the future with these machines than we are now?

This, of course, is the dilemma we would face if the AI machines worked as proposed. We know from past events in history, and according to Murphy's Law, that not all technological inventions have performed, or will perform, exactly as projected. A big believer in this view was Theodore Kaczynski, the Unabomber. Kaczynski viewed technology from a dystopian point of view; he believed that the design and use of technology would have unintended consequences (Joy, 2000). However, the USAF has a different point of view. It believes that with proper ethical discussions and policy decisions these AI machines can be guided towards a set future. Additionally, it believes that the UAS will be programmed based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system at any point during a mission (USAF, 2009).

Choosing an ethical perspective to guide the machines raises another issue. In our society people hold varying ethical beliefs and values. How, then, will we figure out which ethical theory we should use to guide these machines? Lin poses a good ethical question that these machines and their creators would face. He asks, "Should we let a robot decide that it is permissible to sacrifice one innocent person (for instance, a child) to save 10 or 100 others?" This is an interesting question because our soldiers have yet to figure out how to distinguish illegal targets on the battlefield with 100% accuracy. If this is a problem we cannot solve ourselves, how will we guide these machines to make these decisions for us?
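A deliberately crude, purely hypothetical rendering of Lin's question as a utilitarian rule shows what is at stake. Nothing like this is proposed by the USAF or by Lin; the point is how much the rule leaves out once the dilemma is made executable.

```python
# A deliberately naive utilitarian rule, invented only to illustrate
# Lin's dilemma; it is not anyone's actual proposal.
def permissible_to_sacrifice(innocents_lost: int, lives_saved: int) -> bool:
    """Pure body-count calculus: allow the strike if more lives are saved."""
    return lives_saved > innocents_lost

print(permissible_to_sacrifice(innocents_lost=1, lives_saved=100))  # True
# The rule answers instantly, and that is exactly the problem: it cannot
# weigh that the one innocent may be a child, or that "100 saved" is an
# uncertain battlefield estimate rather than a known quantity.
```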
Maintaining this perspective, we can also question the morality of these machines. In his article, Lin questions how these machines would react in certain situations. He explains that, as a band of brothers, soldiers trust and support each other, which can sometimes lead to abuses and cover-ups (Lin, 2009). He questions how these machines, which will have cameras to monitor action and ensure proper behavior, will uphold that brotherhood. Additionally, Lin suggests that the use of machines to conduct operations may increase distrust among the people living in the country; he questions how effective these machines will be in "winning hearts and minds" of the other side to achieve a lasting peace (Lin, 2009).

The USAF believes that as these machines become more sophisticated on the battlefield, they will learn from their actions. It also implies that if these machines are found to make errors, humans will retain the ability to change "the level of autonomy as appropriate for the type or phase of mission" (USAF, 2009, p. 59). Building a sort of kill switch into these machines is a great idea, but it may not be very practical if the purpose of these machines is to conduct their missions with minimal human intervention. The goal of the USAF is to have fewer human operators flying; instead, they would be put in charge of directing swarms of these machines. With one soldier controlling multiple machines, is it possible for the soldier to pay attention to every little detail that the machines see?

In the article "Can AI Fight Terrorism?", Juval Aviv describes a similar problem that exists with AI today. A person standing a couple of feet from his or her suitcase for more than a few minutes at an airport could set off an alert with an AI-monitored camera system, whereas a human being looking at the same scenario would know that there is not yet cause for concern. This can result in a "boy who cried wolf" scenario where too many false alarms cause alarms to be ignored (Aviv, 2009). If soldiers begin to ignore certain alerts, are we not allowing these machines to think on their own?
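A toy version of the dwell-time rule Aviv describes makes the failure mode concrete. The threshold and the scenario are invented for illustration; real video-analytics systems are far more elaborate, but the brittleness is the same in kind.

```python
# A hypothetical dwell-time rule of the kind Aviv describes: alert whenever
# someone lingers near an unattended bag too long. The threshold is invented.
DWELL_THRESHOLD_SECONDS = 120  # "more than a few minutes" in the example

def should_alert(seconds_near_bag: float) -> bool:
    """Time alone triggers the alarm; all other context is ignored."""
    return seconds_near_bag > DWELL_THRESHOLD_SECONDS

# A traveler waiting beside their own suitcase trips the alarm:
print(should_alert(180))  # True, a false positive a human would dismiss
print(should_alert(60))   # False
# Enough false positives train operators to ignore the alarm entirely,
# the "boy who cried wolf" outcome the essay warns about.
```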
The USAF's plan is to allow these machines to automatically perform repairs in flight and conduct routine ground maintenance without human touch labor. Some might argue that allowing these machines to think on their own, and giving them the ability to perform repairs, may give them a life of their own. However, this may not necessarily be the case. Bill Joy addresses this in his article "Why the Future Doesn't Need Us." He states that the human race would never be foolish enough to hand over all power to the machines (Joy, 2000); this is the same ideology that organizations such as the USAF share. However, Joy also suggests that the human race "might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions" (Joy, 2000).

This is a valid point, as we can look back to when the internet was invented. At first, the internet was introduced as a mere tool to extend our methods of communication. Today, many people have become heavily reliant on the internet, and without it they feel lost and disconnected from the world. A heavy reliance on machines to fight our wars may bring us to a point where we begin to trust their judgment outright. We may not be able to disconnect ourselves from these machines, as they will perform operations at a rate we cannot match, and shutting them down may leave us overwhelmed by the work that would lie ahead.

We can examine this scenario through Neil Postman's critical points about technology. Postman suggests that for every advantage technology offers, there is always a disadvantage. In the case of AI machines, the obvious advantage is convenience and efficiency, while the disadvantages are a potential overreliance on these machines and our lack of experience in dealing with such technologies. Postman's second point suggests that technology is the enemy of culture and tradition. The use of these machines can certainly harm our current traditions: the bond that soldiers maintain today would be eliminated with the introduction of machines, all future battles would be monitored by surveillance devices, and cover-ups and the code of silence would become obsolete. The third point implies that technology has become more important than culture and tradition. The very fact that our governments are exploring these technologies as possible machines to fight our future wars demonstrates that technology is becoming more important than our traditional methods. The fourth point Postman makes is that technology does not empower us. This point is somewhat debatable in the case of AI machines: the party with the most efficient and powerful machines will certainly become empowered, but this does not necessarily mean that its problems will disappear with the increased power. This brings us to Postman's final point, which suggests that technology does not solve our problems.

These machines are being created to solve problems such as the loss of soldiers and the lack of precision on our battlefields today. However, are these problems really solved? On closer inspection, it can be argued that although soldiers will not be pushed to the front lines, they may still be in danger, as enemies will have similar technologies with which to counterattack. In addition, militaries will have to worry about the types of decisions these machines may make, as the machines may not resolve issues with the same ethical perspective as a human soldier would. The invention of these machines is inevitable, as our governments will continue to develop technologies to stay ahead of their enemies. These AI-enabled machines may not solve our problems as organizations such as the USAF have planned. Instead, they may create more complex problems that are much harder to solve. Our increased reliance on these types of machines may lead us to a point of no return.

Bibliography

Aviv, J. (2009, June). Can AI Fight Terrorism? Retrieved July 2009, from Forbes: forbes.com/2009/06/18/ai-terrorism-interfor-opinions-contributors-artificial-intelligence-09-juval-aviv.html

Joy, B. (2000, April). Why the Future Doesn't Need Us. Retrieved July 2009, from Wired: wired.com/wired/archive/8.04/joy_pr.html

June, L. (2009, July). US Air Force says decision-making attack drones will be here by 2047. Retrieved July 2009, from Engadget: engadget.com/2009/07/28/us-air-force-says-decision-making-attack-drones-will-be-here-by/

Lin, P. (2009, June 22). The Ethical War Machine. Retrieved July 2009, from Forbes: forbes.com/2009/06/18/military-robots-ethics-opinions-contributors-artificial-intelligence-09-patrick-lin.html

USAF. (2009, May 18). Unmanned Aircraft Systems Flight Plan. Retrieved July 2009, from Government Executive: govexec.com/pdfs/072309kp1.pdf