Managing the risks posed by Artificial General Intelligence

Artificial Intelligence (AI) or Artificial Narrow Intelligence (ANI) systems are in widespread use today. These systems are non-human agents with the intelligence required to undertake a specific task, such as playing chess, driving, or diagnosing medical conditions. The next generation of AI, known as Artificial General Intelligence (AGI), will be far more advanced. Equipped with advanced computational power, these systems will be able to perform all of the intellectual tasks that humans can: they will learn, solve problems, adapt and self-improve, and undertake tasks for which they were not originally designed. They will also have the capacity to control themselves autonomously, with their own thoughts, worries, feelings, strengths, weaknesses and predispositions. Whilst AGI systems do not yet exist, the consensus is that they will be with us by 2050. Whilst this may seem far off, research is required now to ensure that the risks associated with AGI systems are effectively managed during their design, implementation and operation.

The risks associated with AGI

There is no doubt that AGI systems could revolutionise humanity across work, rest and play. Their effect on humankind could be greater than that of the industrial and digital revolutions combined. However, it is widely acknowledged that a failure to implement appropriate controls could lead to catastrophic consequences, and the most pessimistic viewpoints suggest that AGI will eventually pose an existential threat. These risks arise primarily because AGI systems will have the capacity to learn, problem-solve, self-improve, evolve and self-organise, whilst possessing computational power that exceeds the limits of human cognition.

Based on our work in safety and risk management, we recently identified three forms of AGI system controls that urgently require development and testing (illustrated in the sketch after the list):

1. Controls to ensure AGI system designers and developers create safe AGI systems;

2. Controls that need to be built into the AGI systems themselves, such as “common sense”, morals, operating procedures, and decision rules; and

3. Controls that need to be added to the broader systems in which AGI will operate, such as regulation, codes of practice, standard operating procedures, monitoring and maintenance systems, and infrastructure.
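To make the three categories concrete, the following is a minimal Python sketch of a control register that tracks candidate controls against each category and flags categories that do not yet contain a tested control. This is an illustrative assumption only: the class and control names (ControlCategory, ControlRegister, “Developer safety certification”, and so on) are invented for this example and are not artefacts of the project itself.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ControlCategory(Enum):
    """The three control categories described above (names are illustrative)."""
    DESIGN = auto()          # controls on AGI designers and developers
    IN_BUILT = auto()        # controls built into the AGI system itself
    SOCIOTECHNICAL = auto()  # controls in the broader system (regulation, monitoring, etc.)


@dataclass
class Control:
    """A single candidate risk control and whether it has been tested."""
    name: str
    category: ControlCategory
    tested: bool = False


@dataclass
class ControlRegister:
    """Collects candidate controls and reports categories lacking a tested control."""
    controls: list[Control] = field(default_factory=list)

    def add(self, control: Control) -> None:
        self.controls.append(control)

    def untested_categories(self) -> set[ControlCategory]:
        # A category is covered once at least one of its controls has been tested.
        tested = {c.category for c in self.controls if c.tested}
        return set(ControlCategory) - tested


# Example usage: register a few hypothetical controls and report coverage gaps.
register = ControlRegister()
register.add(Control("Developer safety certification", ControlCategory.DESIGN, tested=True))
register.add(Control("In-built moral decision rules", ControlCategory.IN_BUILT))
register.add(Control("Regulatory code of practice", ControlCategory.SOCIOTECHNICAL))

print("Categories still lacking a tested control:",
      [c.name for c in register.untested_categories()])
```

Run as a script, this would report that the in-built and sociotechnical categories still lack a tested control; the same pattern would scale to whatever controls the project ultimately designs and tests.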

The research is a three-year, ARC-funded project that aims to design and test a series of controls in each category. It will provide researchers and practitioners with guidelines for developing and implementing controls in the domains in which AGI will be introduced.

Publications

Salmon, P. M., Carden, T., & Hancock, P. A. (2020). Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence. Human Factors and Ergonomics in Manufacturing & Service Industries.

Salmon, P. M., Read, G. J. M., Thompson, J., McLean, S., & Carden, T. (2020). The ghost of Christmas yet to come: How an AI ‘SantaNet’ might end up destroying the world. The Conversation, December 23, 2020. https://theconversation.com/the-ghost-of-christmas-yet-to-come-how-an-ai-santanet-might-end-up-destroying-the-world-151922

Salmon, P. M., Hancock, P. A., & Carden, T. (2019). To protect us from the risks of advanced artificial intelligence, we need to act now. The Conversation, January 25, 2019. https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615