Industry Lens: The 3 AI Challenges for Future Networks
Hi All,
"Responsible AI" concerns applications of AI whose actions need to be explainable and governed from both a legal and ethical standpoint because they are either safety critical or impact the lives of citizens in significant ways.
As AI and automated systems have come of age in recent years, they promise ever more powerful decision making, offering huge potential benefits to humankind by performing mundane yet sometimes safety-critical tasks, where they can outperform humans. Research and development in these areas will not abate, and functional progress is unstoppable. There is nonetheless a clear need for ethical consideration of, and regulatory governance over, these systems, and for AI safety in general: concerns over the responsibility and decision making of autonomous vehicles, threats to privacy, and potentially prejudicial or discriminatory behaviour of web applications have all been well publicised.
Influential figures such as Elon Musk and Stephen Hawking have voiced concerns over the potential threats of undisciplined AI, with Musk describing AI as an existential threat to human civilisation and calling for its regulation. Recent studies of the next generation of the Internet, such as those by Overton and Takahashi, concur that regulation and ethical governance of AI and automation are necessary, especially in safety-critical systems and critical infrastructures.
These issues and others are encapsulated in the "Asilomar AI Principles", a widely supported set of unifying principles intended to guide the development of beneficial AI. The open question is how these principles should be translated into a research agenda for the EC.