
Will AI help humanity evolve or lead to our destruction?

Artificial Intelligence (AI) is intelligence demonstrated by a machine. This technology is becoming increasingly common in our lives, whether we realise it or not. The most famous example is arguably the controversial Google self-driving car, yet AI is also used in more everyday applications such as Facebook’s facial recognition tool and fraud prevention. The subject is often approached from an ethical standpoint because of the consequences of AI that personifies robotics or could change the role of humans in society. In this article I will discuss the main ethical issues regarding AI, the ways in which scientists are developing AI to avoid these issues, and how we can ensure AI is used advantageously.


Currently, AI is being developed for use in self-driving cars and planes. Looking to the future, if these machines come to operate safely and reliably, they pose a risk of unemployment in the taxi, trucking and airline industries. Machines have already replaced humans in factories and, with further AI developments, lower-skilled jobs could begin to be done by programmes instead, possibly leading to mass unemployment. On the other hand, dependency on AI could be viewed as positive: a higher-skilled workforce would be needed to ensure AI is operating correctly, and more computer scientists and engineers would be required to develop the field further. Still, it is unlikely there would be enough of these jobs to accommodate everyone, and unreasonable to assume everyone has the skills to do them, so unemployment poses a real threat to the manufacturing and travel industries.

AI capable of social interaction could also threaten employment. Sophia is an AI robot developed by Hanson Robotics that has appeared for interviews on Good Morning Britain and the Tonight Show. Sophia is able to communicate with humans and respond to them adequately, showing how the technology can adapt to an audience. AI programmes similar to Sophia could take on many tasks for us. They could replace humans in call centres or in any customer-service-based job because, lacking human frailties, they run no risk of having an ‘off day’, being sick, turning up late and so on.




AI is a system that can adapt, but it is ultimately created by humans, so it risks developing bias when adapting around certain types of people, or being programmed without enough care to avoid security risks.


In May 2017 there were reports of a racist AI algorithm used for risk assessment by US courts, which was biased against black prisoners. It was found that the programme would wrongly flag black defendants as likely to re-offend more often than white defendants (45% of the time versus 24%). There was dispute over why this occurred, yet arguably it comes down to the data the algorithm relied on: arrest records, postcodes, income and so on, meaning such systems can reflect human prejudice. This shows that what and who an AI learns from determines how safe and ethical it can be.
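To make the 45% versus 24% figure concrete, here is a minimal sketch of how such a disparity can be measured. The tiny dataset, the group labels and the function name are all invented for illustration; a real audit would run the same calculation over thousands of court records.

```python
# Minimal sketch: measuring the false positive rate of a re-offending
# classifier separately for two groups. All records here are invented.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False),
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", False, True),
]

def false_positive_rate(records, group):
    """Share of people in `group` who did not re-offend but were
    still flagged as high risk by the classifier."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, false_positive_rate(records, group))
# A markedly higher rate for one group (like the reported 45% vs 24%)
# means the classifier's mistakes fall more heavily on that group.
```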


How an AI is instructed is particularly important when it is used for financial security and for solving complex human tasks. If there is incorrect communication with an AI, these programmes could act in a malicious way, as not every outcome is covered by the programmers. The Google car has often been quizzed with a modern version of the trolley problem proposed by Philippa Foot: if a child runs into the road and the car does not have enough time to stop, does it swerve onto the pavement where an old lady is standing? These responsibilities and outcomes still need to be covered by the programmers, as AI does not hold accountability, even if it was not explicitly programmed to do one thing or the other.


OpenAI is an independent research organisation in the AI industry, co-founded by Elon Musk, that aims to develop friendly, beneficial AI for humanity. Artificial General Intelligence (AGI) is being developed at this company: the intelligence of a machine that has the capacity to learn or understand any intellectual task that a human can. One of the main focuses of its research is technical safety improvements for AI, achieved by being more specific about the goals an AI has to complete, to reduce the unpredictability associated with such programmes. To do so, the company studies the way AI handles simpler tasks in order to learn how it adapts. AI has been tested on computer games, and research has found that this technology can play them at superhuman levels; this is used to understand how AI can learn so rapidly and the methods it uses. The company stresses that to understand how to control the unpredictable nature of AI in certain instances, it must research the most up-to-date developments so that it stays at the forefront of AI technology.
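As a rough illustration of what ‘being specific about goals’ means in practice, here is a minimal sketch of the kind of reinforcement-learning setup used to study agents on simple games. The toy corridor environment, the reward values and the learning parameters are all invented; this is not OpenAI’s actual code.

```python
import random

# Toy Q-learning agent in a 6-cell corridor; the goal is the last cell.
# The reward function below IS the goal specification: rewarding only
# the intended outcome leaves fewer loopholes than a vaguer signal.
N = 6
ACTIONS = (-1, +1)  # step left or step right

def reward(next_state):
    return 1.0 if next_state == N - 1 else 0.0

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly take the best-known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        # standard Q-learning update towards the observed reward
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward(s2) + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right (+1) in every cell.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)])
```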


The ultimate aim of AI is still to positively impact the human race, which is particularly highlighted when AI programmes are used to run simulations and find solutions to problems that require knowledge beyond human intelligence. Our greatest inventions and medicines have come from our intelligence, so a machine of even higher intelligence could produce even more ideas to benefit us.


One does have to question at this point whether AI would overpower humanity. If its intelligence is greater, AI could adapt better and gain an advantage over us equivalent to the one we hold over animals. This concept is known as the ‘singularity’: the point in time when humans are no longer the most intelligent beings on Earth. Additionally, at the ‘singularity’, would we have developed AI that possesses a conscience? If robots have the power to feel as humans do, this makes them responsible for their own actions and could mean assigning them a legal status on a par with humans.


Current AI issues and dangers still seem to be caused by us: if we programme AI incorrectly or fail to account for every detail, risks arise. The research being conducted offers hope of making AI a beneficial assistant, yet once a conscience is developed, it may no longer want to act as originally instructed. Why not just use humans for such tasks? A robot or AI with a conscience that can feel would potentially have the same human essence, and human-like errors could develop. It also may not be truly ethical to create such a machine, as it requires ‘playing God’; the AI would have a type of free will that could be malicious even though it has the potential to be good, like humanity.


The standards used to judge whether something is ethical are based on human principles of right and wrong, which differ from country to country and are reflected in law. AI is debated because of the unpredictability of how these programmes will be used and controlled in the future, and because of the difficulty of deciding who should bear responsibility for malicious outcomes.
