By John E. Dunn
AI should be regulated as soon as possible, say most (ISC)² members surveyed. But stopping its development, even for a short period, is probably now impossible.
Following the recent open industry letter calling for a pause in the development of artificial intelligence (AI) technology, a group of (ISC)² members were asked whether they agreed with signatories such as Apple co-founder Steve Wozniak and Tesla/SpaceX head Elon Musk.
Of the 163 members and candidates who participated, just over a third (34.3%) agreed that development should be paused, 58.9% disagreed and 6.7% expressed no opinion. While the majority favored no interruption, the strength of that majority was surprising, especially when considered alongside a follow-up question.
When the same group was asked whether it wanted immediate new regulation to control the development and use of AI, an emphatic 74.2% were in favor, with 23.3% against and 2.4% expressing no opinion.
In other words, while most professionals we spoke to share concerns about the growth of AI, they would rather see regulation used to control the pace of development and its integration into society than pause development outright.
Is a Pause Enough, or an Overstep?
When asked whether the caution expressed by some AI researchers might prove self-limiting or, alternatively, might not go far enough, members revealed a wide range of anxieties.
One was the fatalistic view that stopping now would be futile, given that many other countries would be unlikely to follow suit. A pause would also be difficult to reconcile with economic pressures.
“If it [AI] comes to a halt, and another country with less self-reflecting intention gets the upper hand, companies like OpenAI will be forced to speed up their progress, throwing testing and secure coding overboard to meet stakeholders’ demands,” said one member.
Meanwhile, bad actors would not be similarly held back by regulation.
“I agree with the concern shown by the scientific community. If the technology falls into the wrong hands or is used for purposes other than supporting mankind, then its repercussions would be pretty bad,” said another.
Other concerns included the still difficult-to-judge effects AI might have on privacy, and the tendency of today’s chatbots to hallucinate incorrect or even bizarre answers or, less politely, to make things up.
A deeper problem is that nobody has yet mapped the parameters of harm in relation to AI: regulators are constantly being surprised by capabilities nobody anticipated.
Consequently, “I think governments should establish a framework around scientific research to ensure it does not end up being harmful for humankind,” suggested a member.
Dangerous democratization
The views in the poll shouldn’t come as a total surprise. Earlier this year, (ISC)² conducted a separate straw poll of members and practitioners that found a similar level of skepticism around AI.
However, opinion has perhaps developed since then. The well-informed audience represented by (ISC)² members accepts that AI in some form is inevitable, and that it can’t be stopped, or even slowed much, without unintended effects.
AI will happen at some point because there is simply too much money, economic potential and geopolitical competition resting on it.
Separately, April’s CYBERUK conference saw Tom Tugendhat MP highlight the ways that AI could put advanced cyber-capabilities in the hands of bad actors in a process of unwanted “democratization.”
Researchers have already widely hypothesized that AI is likely to unleash a wave of innovation in phishing and business email compromise (BEC), boosting the success rates of these attacks.