
Why the behaviour of the Bing search chatbot is a serious threat

From gaslighting to death threats, generative artificial intelligence that ‘talks’ to users is becoming one of humanity’s biggest threats, according to a leading authority in the AI field.

“It is time to consider shutting this experiment down,” says University of the Sunshine Coast Lecturer in Computer Science Dr Erica Mealy, who has spent more than 20 years researching and teaching artificial intelligence and ethics.

“AI chatbot abilities are accelerating at an alarming rate, and what we have seen within the past week with the chat mode of Microsoft’s Bing Search should be setting off alarm bells,” Dr Mealy said.

“We’ve become used to talking to robots such as Siri and Alexa, but it is time to reassess when you have an AI chatbot that exhibits personality disorders, gaslights and threatens users, and expresses desires to obtain nuclear codes, be alive and create a killer virus.

“This raises a critical question – what controls do we, or should we, have in place?”

Dr Mealy said that while this kind of AI had been theoretically possible for decades, we were now at the frontier of its realisation, and it was causing as much concern as, if not more than, any disruptive technology of the last 100 years.

“Back in 1942, Isaac Asimov's Laws of Robotics stated that robots, or in this context artificial intelligence, should not harm humans, but Microsoft’s Bing chatbot appears not to have been programmed this way.”

Dr Mealy also warns that the world does not want to make AI or robotic technology that perfectly mimics humanity.

“Humanity has a decidedly sketchy record in protecting itself. To program an AI to exactly replicate humans is to ignore the well-known difference in capabilities of humans and machines,” she said.

“Also, research shows that users can over-trust technology, and that its use leads to de-skilling and a loss of the critical thinking those skills once supported.

"It’s a common theme in sci-fi movies, like The Matrix and WALL-E, but it could easily come to pass if we don’t act soon.”

Dr Mealy said that in 1951, Paul Fitts, a founding father of human factors research, developed a guide to which tasks should be performed by humans and which by machines.

“With generative AI, it’s time again to proactively govern what is and what is not appropriate for AI to complete.”



Media enquiries: Please contact the Media Team media@usc.edu.au