Ethical challenges of Artificial Intelligence (AI)
Artificial Intelligence (AI) has been described as the fourth industrial revolution. Here, Dr Attracta Lagan, leading Australian business ethicist and advisor to boards and CEOs, explores some of AI's ethical challenges.
From smartphones to smart cars, AI is shaping every aspect of our lives, changing how we relate to each other, how we work and how we understand our world.
The rapid progress being achieved in AI means that super-intelligent machines, which will learn to write code for themselves, are now anticipated as the next development. Because of its potential large-scale impact, machine learning has been described as humankind's fourth industrial revolution, and it is currently having its greatest impact in employment, workplace redesign, education, transportation, healthcare, public safety and security, the military and entertainment. Just as digital technologies have already reshaped our societies, machine learning will accelerate these social changes. In fact, AI is being likened to the advent of the Internet in how far and wide it will disrupt the status quo.
It is the emergence of efficient, smarter machine learning tools (the core algorithms through which AI systems develop their intelligence) that makes the spread of AI and its social impacts such an urgent ethical issue for business. These tools are being used across a range of applications, such as chatbots that perform increasingly sophisticated customer-interaction tasks by combining machine learning, natural language processing and speech generation.
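To ground the idea, here is a deliberately minimal sketch of the kind of pipeline such a chatbot rests on: a bag-of-words intent classifier feeding canned responses. The intents, phrases and responses are invented for illustration; real systems layer statistical language models and speech generation on top of this basic pattern.

```python
import string

# Hypothetical intents and example phrases; a production system would
# learn these from large volumes of real customer conversations.
INTENTS = {
    "check_balance": ["what is my balance", "how much do I owe", "account balance"],
    "make_claim":    ["I want to lodge a claim", "report an accident", "claim form"],
    "speak_human":   ["talk to a person", "human agent please", "operator"],
}

RESPONSES = {
    "check_balance": "Your current balance is available in the customer portal.",
    "make_claim":    "I can start a claim for you. What happened?",
    "speak_human":   "Connecting you to a human agent now.",
}

def tokens(text: str) -> set[str]:
    """Lower-case bag of words with punctuation stripped."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def classify(utterance: str) -> str:
    """Pick the intent whose example phrases share the most words with the utterance."""
    words = tokens(utterance)
    scores = {
        intent: max(len(words & tokens(phrase)) for phrase in phrases)
        for intent, phrases in INTENTS.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(RESPONSES[classify("Can I please talk to a person?")])
```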
Most people largely accept AI and its applications because it appears to make their lives easier. Benefits range from the convenience of smartphones and virtual assistants to ready access to information from search engines and service assistance from AI-powered devices in the home such as Amazon's Alexa, Apple's HomeKit and Google Home.
In education and healthcare, powerful applications have enabled research and medical breakthroughs that have enhanced our physical quality of life and extended our lives in pain-free ways. In workplaces, AI takes the form of analytical tools that enable predictive modelling to anticipate future customer behaviour, as well as algorithms that remove repetitive tasks. Accounting software, for example, is becoming more intelligent, performing automation and analysis previously done by humans, and some accounting practices are starting to implement this technology to simplify operations. Benefits include time savings, lower costs, higher productivity and better accuracy.
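As a hedged illustration of the predictive modelling mentioned above, the sketch below fits a logistic regression that estimates the likelihood of a customer lapsing, using scikit-learn and a tiny invented dataset. The feature names and figures are hypothetical, not drawn from any real portfolio.

```python
# Minimal predictive-modelling sketch: score the likelihood that a
# customer lapses, based on a handful of invented features.
from sklearn.linear_model import LogisticRegression

# Each row: [tenure_years, claims_last_year, premium_increase_pct]
X_train = [
    [1, 0, 15], [8, 1, 2], [2, 2, 20], [10, 0, 0],
    [3, 1, 18], [7, 0, 5], [1, 3, 25], [12, 1, 1],
]
y_train = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = customer lapsed, 0 = renewed

model = LogisticRegression().fit(X_train, y_train)

# Score a new customer: estimated probability of lapsing next year.
new_customer = [[2, 1, 22]]
print(f"Estimated lapse probability: {model.predict_proba(new_customer)[0][1]:.2f}")
```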
The potential downsides to these technologies include risks to personal privacy, employability and data security, and the potential for social re-engineering of how we relate to each other. For the moment there are no commonly agreed policies or accountability frameworks, yet in the last two years we have seen widespread use of drones, fingerprint technology, facial recognition, driverless cars and other significant AI breakthroughs that raise serious ethical issues about their social impacts.
The unknown mechanics of exactly how algorithms work, and our inability to predict their macro social impacts, are a key ethical concern. The rise and distribution of fake news created by bots, hate speech on platforms such as YouTube, Twitter, Facebook and Instagram, and the manipulation of data to interfere with political elections and referenda have undermined public trust in the integrity of our major political institutions and increased anxiety around who can be believed and trusted. The "low trust world" this has spawned is promoting societal fear and insecurity as well as redrawing the boundaries between generations and nations.
The absence of regulation, the current commoditisation, marketing and trafficking of personal data around the globe, and the way this enables data users to manipulate people's attitudes and behaviours are an ever-growing worldwide ethical concern.
Cybercrime, cyberbullying and curated pornography are amongst the dark forces already released by advances in AI technology. However, it is the very real potential for AI to totally reshape our existing social landscapes as well as our economic order that brings serious ethical challenges.
Professional Accountability in an AI world
Even though a new world of AI-powered applications is being forged, the fundamental ethical standards for professionals around integrity, duty of care, confidentiality and competency remain the same. It will be the professional's ability to ask and address the right questions around AI that will enable ethical standards to be maintained. Increasing flows of data will also come with increased ethical accountabilities around its integrity, ownership, management, transparency, storage and removal. Retaining trust in the professions will become even more important in an AI world where the "black box" nature of algorithms poses significant challenges around who can be trusted.
Ethical concerns also revolve around the potential biases of AI designers and coders. Women and ethnic minorities are greatly under-represented in the coding arena; will this result in a gender or ethnic bias in how AI reaches its recommendations? A failure to design systems that purposely challenge and remediate potential biases might deepen existing social divisions and concentrate AI's benefits unequally among different sections of society.
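One concrete way such bias can surface is in differing rates of favourable decisions across groups. The sketch below, using invented figures and a hypothetical tolerance, shows the kind of simple demographic-parity check a reviewer might run before a model is deployed.

```python
# Hypothetical demographic-parity check: compare the rate of favourable
# automated decisions across groups. Data and threshold are illustrative only.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Share of favourable decisions for one group."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}")

# Flag for human review if the gap exceeds an agreed tolerance (here 20 points).
if abs(rate_a - rate_b) > 0.2:
    print("Warning: approval rates diverge; investigate for potential bias.")
```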
The good news is that several initiatives are underway exploring how a governance regime might be put in place to guide AI's ongoing development. Leading the way is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which involves several hundred participants from six continents. These thought leaders from academia, industry, policy and government aim to find consensus on the development of ethical principles for AI. The UK's House of Lords has also released a guiding set of principles. Both sets of principles emphasise the importance of protecting human well-being and ensuring full transparency and accountability. The European Commission has also put the first major brake on how personal data can be exploited, with legislation reinforcing the principle that everyone has the right to the protection of personal data.
A universal concern is the disruption to jobs that will affect almost every sector of the economy. The OECD predicts that around 66 million jobs globally are at risk of automation from AI and robotics. It is AI's capacity to force large-scale workforce redesigns and employee redundancies that necessitates an immediate rethink of the ethical obligations business has towards current and future generations of employees, of its role in informing employees and their representatives about the likely impacts of the new technologies, and of how it can promote the new skills needed to keep people employable in an AI-powered economy.
As we saw in other developed economies such as Singapore and Ireland in the latter part of the 20th century, providing people with the skills for the new machine economy is key to keeping them employable and thriving.
AI offers the promise of tremendous productivity gains; how this machine-created wealth can be shared is one of its major ethical challenges. If AI brings about enormous savings as well as leaps in productivity, as is predicted, how will these gains be fairly shared across society? Will it be winner takes all, or will we recognise that losers need some form of compensation? Do business leaders have an ethical accountability to make provision for the social and economic hardships that may accompany their adoption of AI technologies?
These profound social impacts on the future of humanity that accompany business's embrace of new technologies have ethical considerations at their core. Already, for example, there is emerging public debate around the need to provide a universal wage when jobs disappear, and whose responsibility this might be.
A guiding principle, such as that suggested by the UK House of Lords, is that AI should operate on principles of intelligibility and fairness. It is the sort of guiding ethical principle professionals might consider. The high-profile loss of public trust in the Facebook brand, following its failure to protect its users' personal data, has forewarned others that AI is not ethically neutral. Rather, it demands a much higher level of public transparency because its impacts are so far-reaching.
Businesses will need an AI ethics code, or refreshed governance protocols, that outline what the machine is expected to do as well as its limitations. Ideally these should be shared with customers, so they can make informed choices about who they are doing business with. To reassure those affected, AI algorithms will need to be designed so that they can be reviewed by a third party to guard against manipulation or bias. Duty-of-care accountabilities will mean that AI managers must act proactively to ensure possible adverse impacts are identified before they become a problem for the organisation or its clients.
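What "reviewable by a third party" could look like in practice is sketched below: each automated decision is logged with its inputs, model version and rationale, so that an external reviewer can later reconstruct and challenge it. The fields and identifiers are hypothetical, not a prescribed standard.

```python
# Hypothetical decision-audit log: every automated decision is recorded
# with enough context for a third party to review it later.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str
    inputs: dict
    decision: str
    rationale: str
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append the decision as one JSON line to an auditable log file."""
    with open(path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_version="pricing-model-1.3",           # illustrative identifier
    inputs={"age": 42, "postcode": "2000"},      # invented applicant data
    decision="premium_quoted",
    rationale="risk score below referral threshold",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```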
AI is not here to replace us. It augments our abilities, enabling us to achieve greater efficiencies and to step up to higher levels of performance. However, its potential benefits for all can only be realised if we take the time now to consider and include the ethical accountabilities that come with AI at every stage, from design to implementation and review.
This is an edited version of a research paper written by the author on behalf of ICAA into the ethical issues accompanying the advent of AI.