Moral questions to guide and safeguard the development of AI technologies.

Mar 26 2018

Advances in AI, robotics and so-called autonomous technologies warrant special reflection, as they can diverge from the common good and pose ethical, societal and legal challenges. The European Group on Ethics in Science and New Technologies (EGE) has therefore released a statement calling for the launch of a process that would pave the way towards a common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems.

Self-driving cars and drones, robots in deep-sea and space exploration, weapon systems, software agents such as bots in financial trading, and deep learning in medical diagnosis are among the most prominent examples of autonomous systems. Artificial intelligence (AI), especially in the form of machine learning, and the increasing availability of large datasets from various domains of life are important drivers of these systems. The confluence of these digital technologies is rapidly making them more powerful; they are being applied in an increasing number of new products and can have both military and civilian applications. The advent of high-tech systems and software that can function increasingly independently of humans, and can execute tasks that would require intelligence when carried out by humans, warrants special reflection. These systems give rise to a range of important and hard moral questions, as recognised by the EGE statement:

  1. Questions about safety, security, the prevention of harm and the mitigation of risks. How can we make a world with interconnected AI and ‘autonomous’ devices safe and secure and how can we gauge the risks?
  2. Questions about human moral responsibility. Where is the morally relevant agency located in dynamic and complex socio-technical systems with advanced AI and robotic components? How should moral responsibility be attributed and apportioned, and who is responsible for untoward outcomes? Does it make sense to speak about ‘shared control’ and ‘shared responsibility’ between humans and smart machines?
  3. Questions about governance, regulation, design, development, inspection, monitoring, testing and certification. How should our institutions and laws be redesigned to ensure that AI and autonomous systems serve the welfare of individuals and society, and to make society safe for this technology?
  4. Questions regarding democratic decision-making. Investigations are being carried out across the globe to establish the extent to which citizens are taken advantage of through advanced nudging techniques based on the combination of machine learning, big data and behavioural science, which make possible the subtle profiling, micro-targeting, tailoring and manipulation of choice architectures for commercial or political purposes.
  5. Questions about the explainability and transparency of AI and autonomous systems. Which values do these systems effectively and demonstrably serve? And which values are we allowing to be undermined, openly or silently, in trade-offs between technological progress and utility? AI-driven ‘optimisation’ of social processes based on social scoring systems, with which some countries are experimenting, violates the basic ideas of equality and freedom in the same way caste systems do, because it constructs ‘different kinds of people’ where there are in reality only ‘different properties’ of people. How can attacks on democratic systems, and the use of scoring systems as a basis for dominance by those with access to these powerful technologies, be prevented?

The statement also sets out the following considerations from an ethical perspective for answering the above questions.

  1. Autonomy in the ethically relevant sense of the word can only be attributed to human beings. The terminology of ‘autonomous’ systems has nevertheless gained wide currency in the scientific literature and public debate, referring to the highest degree of automation and the highest degree of independence from human beings in terms of operational and decisional ‘autonomy’. Human beings ought to be able to determine which values are served by technology, what is morally relevant, and which final goals and conceptions of the good are worth pursuing. This cannot be left to machines, no matter how powerful they are.
  2. Moral responsibility, in whatever sense, cannot be allocated or shifted to ‘autonomous’ technology. In recent debates about Lethal Autonomous Weapons Systems (LAWS) and Autonomous Vehicles there seems to exist a broad consensus that Meaningful Human Control is essential for moral responsibility. The principle of Meaningful Human Control (MHC) was first suggested for constraining the development and utilisation of future weapon systems. This means that humans - and not computers and their algorithms - should ultimately remain in control, and thus be morally responsible.

While there is growing awareness of the need to address such questions, AI and robotics are currently advancing more rapidly than the process of finding answers to these thorny ethical, legal and societal questions. Current efforts represent a patchwork of disparate initiatives. The EGE, as expressed in its statement, is of the opinion that Europe should play an active and prominent role in defining a collective and inclusive process to propose a set of fundamental ethical principles and democratic prerequisites, which could in turn inform binding laws to prevent harms caused by autonomous systems and AI.
