
Technology is advancing at an
ever-increasing rate, and new breakthroughs occur all the time. A
current topic of research is autonomous systems, especially weaponised systems,
which are causing considerable controversy and hype within scientific and
engineering circles and among the general public. We currently have
semi-autonomous drones which can make small decisions for themselves, such as
choosing to fly around obstacles, but the goal of this research is fully
autonomous drones run by an Artificial Intelligence (AI) system. As this
technology is being developed, there have been many debates over its use within
the military. There are some incredible benefits, such as the protection these
systems can provide to soldiers in the field and their ability to keep
civilians out of harm's way, but there are also some undeniable disadvantages,
including the risk of starting an AI arms race, security risks and the risk of
harm to people. Many scientists and engineers, such as Stephen Hawking and Elon
Musk, have acknowledged the perils of this technology in an open letter which
highlights both its dangers and its benefits. They believe the technology will
be beneficial to humanity, but that it should be banned once it reaches a stage
beyond human control (Future of Life Institute, 2015). Other powers, such as
the United Kingdom's governing bodies, believe there is no need for a ban, as
they will not create autonomous weapons and will only develop systems which
remain under human control. Making laws and regulations for autonomous weapon
systems is incredibly difficult and complex, as there is no universal
definition of an autonomous weaponised drone.

 

There are many definitions of
autonomous weaponised drones; each organisation has a slightly different idea.
Some definitions can encompass semi-autonomous drones, which have limited
decision-making programming but are still ultimately controlled by humans. The
chosen definition can be very important when forming regulations and laws for
the use of this technology. The US Department of Defence defines autonomous
systems as "those that, once activated, can select and engage targets without
further intervention by a human operator" (Thurnher, 2013). This is the
definition that will be used when referring to autonomous weaponised drones in
this essay. We have not yet reached this level of technology, but it is under
development.


 

Autonomous drones would still be
required to respond to requests from humans, but this does not mean the
artificial intelligence will comply. The target may be chosen, but there is
always the possibility of equipment failure, or the system may defy the
instruction, as it is a form of intelligence which can think for itself. It may
inadvertently identify the wrong person as the target, miss the target, or
experience a programming malfunction. If this occurs, who is at fault: human or
machine? By supplying artificial intelligence with weapons and ordering it to
do the killing, we will be teaching it to kill. No compassion, no ethics or
morals, just a killing machine. This has the potential to cause issues once the
system becomes fully autonomous and is without human control. When fully
autonomous artificial intelligence is developed, it will be learning from the
human race, which can have deadly consequences, as all we have done is educate
it to kill.

 

At the United Nations
Conference (year??), a total ban on autonomous weapons was proposed. There are
many arguments for the ban: it could cause an AI arms race; it would lower the
threshold for conflict, leading to more combat situations; weaponised drones
are easier and cheaper to build than most weapons; and once the technology for
AI is available, anyone could use these as incredibly devastating weapons
(Future of Life Institute, 2015). The United Kingdom opposed this, as it
believes there is sufficient regulation, and stated that it would not develop
systems which are not under human control. A UK Foreign Office spokesperson
stated, "we do not see the need for a prohibition as international humanitarian
law already provides sufficient regulation" (Bowcott, 2015). This can cause
issues, as not every country is a signatory and therefore not every country has
to follow these regulations.

 

There are currently no
universal laws to regulate the use of autonomous weapon systems; this leaves
each country to choose how, or whether, to regulate their use. So far there
have been no bans on this type of weapon, and soon they will be fully
developed. In Isaac Asimov's short story Runaround, written in 1942, well
before autonomous weapons were theorised, three laws of robotics were
established in an attempt to create a system which could govern autonomous
machines (a "zeroth" law was added by Asimov in later work):

0.    A robot may not harm humanity, or, by inaction, allow humanity to come to
harm.

1.    A robot may not injure a human being or, through inaction, allow a human
being to come to harm.

2.    A robot must obey the orders given to it by human beings, except where
such orders would conflict with the First Law.

3.    A robot must protect its own existence as long as such protection does
not conflict with the First or Second Law.

(Dvorsky, 2016)

These laws show one way to
protect humanity from what we are aiming to create. A system of universal
guidelines would be highly beneficial, but it could not be enforced, as it
would be neither regulation nor law. Such laws would also prove confusing to an
artificial intelligence system whose direct orders are to kill a human being.
Trying to build this level of protection for people into an autonomous
weaponised system would prove near impossible, as the system is specifically
designed to destroy.
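The contradiction described above can be made concrete with a minimal sketch of priority-ordered rule checking in the style of Asimov's laws (the function and field names here are hypothetical, purely for illustration): because the harm check has a higher priority than the obedience check, the one order a weaponised system exists to carry out is exactly the order the rules must reject.

```python
# Minimal sketch of Asimov-style priority-ordered rule checking.
# All names here are hypothetical illustrations, not any real system.

def evaluate_order(order: dict) -> str:
    """Check an order against the laws in strict priority order."""
    # Laws 0/1: reject anything that would harm humanity or a human being.
    if order.get("harms_human"):
        return "refused: conflicts with First Law"
    # Law 2: obey human orders that passed the harm checks above.
    if order.get("from_human"):
        return "obeyed"
    # Law 3: otherwise act only in self-preservation.
    return "self-preservation only"

# A weaponised system's core order is precisely the one the laws forbid:
print(evaluate_order({"from_human": True, "harms_human": True}))
# -> refused: conflicts with First Law
print(evaluate_order({"from_human": True, "harms_human": False}))
# -> obeyed
```

The point of the sketch is structural: no amount of reordering makes "obey the kill order" and "never harm a human" compatible, which is why bolting these protections onto a weapon is near impossible.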

 

International Humanitarian Law
can be applied to the use of autonomous weapons in combat situations, but it is
not specifically about autonomous systems, nor does it bind a country that is
not a signatory. International Humanitarian Law covers both the protection of
people who are not fighting and restrictions on weapons. These regulations are
mostly enforced by the country being served: it is the military's
responsibility to teach and uphold them and to punish breaches, though
international tribunals are also possible (International Committee of the Red
Cross, 2004). A drone could be programmed to recognise symbols such as a red
cross, or others provided by the International Committee of the Red Cross,
allowing relative safety for those injured or not fighting. This would not
always be possible, as making the symbols visible from all directions can be
challenging. Article 36 of the Additional Protocols to the Geneva Conventions
of 12 August 1949 states that it is up to the signatories to determine whether
a new weapon breaks the regulations (International Committee of the Red Cross,
2010), so this interpretation is left to the signatories. The basic rules
(Article 35) found in the same Additional Protocols also limit the types of
weapons which may be used: weapons which cause unnecessary suffering or
widespread environmental damage may not be used in warfare (International
Committee of the Red Cross, 2010).

Autonomous drones could also play a role in the protection of civilians and
soldiers, which is an essential part of the Geneva Conventions of 1949. They
could provide warning systems and evacuation alerts to civilians in places
where no other method of communication is available. They could be used to find
injured civilians and soldiers and to deliver medical supplies to remote or
difficult-to-reach locations, and they would be a far more efficient and safer
method of searching for soldiers who have gone missing in action. When
autonomous systems are fully developed, they may have the capability to defend
civilians from attacks by enemies who do not follow the protocols, which could
save innocent lives. It is undetermined whether autonomous weaponised systems
will have the target-identification capabilities to decide whether a soldier is
a threat to a civilian; this will require enormous amounts of testing to ensure
they do not kill those who pose no threat.

 

Each country may develop its own regulations, but there is
nothing to say these will not be disregarded in severe combat situations. The
US Department of Defence has a Directive on Autonomy in Weapon Systems
(3000.09). This covers the standard of testing required to ensure the systems
are safe to use, and to whom the directive applies (Department of Defence,
2012). The directive states that "Autonomous and semi-autonomous weapon systems
shall be designed to allow commanders and operators to exercise appropriate
levels of human judgment" (Department of Defence, 2012), but it does not
clearly define what an appropriate level of human interaction is, leaving this
open to the operator's interpretation. Point 4.c(1) of the directive states
only that semi-autonomous weapon systems should not be able to select their own
human targets; this implies that autonomous systems will have the capability to
choose their own targets if communication is lost.
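The distinction at stake, a system that may only engage with explicit human sign-off versus one that selects targets itself when the communications link drops, can be sketched in a few lines (all names here are hypothetical; this illustrates the design choice, not any real system):

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# Names and structure are illustrative only, not any real system.

def authorise_engagement(target: str, operator_confirmation: bool,
                         link_alive: bool) -> bool:
    """A semi-autonomous design may engage only with explicit human sign-off."""
    if not link_alive:
        # If the communications link is lost, a human-in-the-loop design
        # must abort rather than fall back to autonomous target selection.
        return False
    return operator_confirmation

# Engagement proceeds only when a human has confirmed the target:
print(authorise_engagement("target-A", operator_confirmation=True, link_alive=True))   # True
print(authorise_engagement("target-A", operator_confirmation=False, link_alive=True))  # False
print(authorise_engagement("target-A", operator_confirmation=True, link_alive=False))  # False
```

The ambiguity the directive leaves open is exactly which branch a fully autonomous system would take in the `link_alive == False` case: abort, as above, or select a target on its own.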

 

Data integrity is essential in the military; there may be
dire consequences if it is not properly protected. If military data is hacked,
it can cause fatalities when related to combat. Autonomous systems, if not
protected effectively, can provide enemy hackers with access to military
databases. Drones are portable and can be used as remote access points, which
can be hacked if not appropriately protected. If an autonomous weaponised
system is hacked, all an attacker has to do is change the target, as most of
the system's decisions will be made around that target. These risks are
incredibly dangerous if exploited. The US Department of Defence Directive on
Autonomy in Weapon Systems (3000.09) requires "safeties, anti-tamper
mechanisms, and information assurances in accordance with DoD Directive
8500.01E" (Department of Defence, 2012). This means some standard of protection
must exist before these autonomous systems can be deployed. It has been
proposed that the US military will move its data systems to a blockchain. This
would provide a more secure system in which everything is part of the same
database, but only authorised personnel can access the data relevant to them.
It would also provide a complete transaction history, allowing identification
of any attacks from within or outside the military (Jewitt, 2016).
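The tamper-evidence property that motivates this proposal can be illustrated with a minimal hash-chained log (a simplified sketch of the general idea, not the military's actual design): each record commits to the previous record's hash, so any later alteration of the history is detectable.

```python
# Minimal sketch of the tamper-evident property behind a blockchain-style
# transaction history. Purely illustrative; real blockchains add consensus,
# signatures and access control on top of this chaining idea.
import hashlib
import json

def add_record(chain: list, data: dict) -> None:
    """Append a record that commits to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({"data": data, "prev": prev_hash, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute every hash; any rewritten record breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"data": record["data"], "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != digest:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_record(chain, {"op": "set_target", "target": "A"})
add_record(chain, {"op": "set_target", "target": "B"})
print(verify(chain))              # True: history is intact
chain[0]["data"]["target"] = "C"  # an attacker rewrites the history...
print(verify(chain))              # False: the tampering is detectable
```

This is why a complete transaction history helps identify attacks: an intruder cannot silently change a past order, such as a target assignment, without invalidating every record that follows it.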

 

There are currently no universal regulations for
autonomous weaponised drones. Some regulations exist, such as International
Humanitarian Law, which covers its signatory countries, and each country can
have its own set of laws on autonomous weapons. As this technology looms, we
need to find a solution to this problem or we may be faced with serious risks.
A current project called ALPHA can "process sensor data and plan combat moves
for four drones in less than a millisecond, or over 250 times faster than the
eye can blink — reaction times far beyond human abilities" (Chamary, 2016).
That article was released two years ago, and significant advancements have
likely been made since, clearly highlighting the time-sensitive nature of
developing and implementing mandatory international laws. This technology could
be devastating if not developed and deployed in a safe manner, placing enormous
pressure on finding a timely solution to these regulations, as we may soon be
facing completely autonomous weaponised systems. A ban should not be placed on
all autonomous weaponised systems, as they can be highly beneficial; instead,
the aim should be to place strict, enforceable regulations on their use to
prevent destruction and unnecessary death.
