Geoffrey Hinton, one of the so-called godfathers of artificial intelligence, urged governments on Wednesday to step in and ensure that machines do not take control of society. The highly respected AI scientist, based at the University of Toronto, made headlines in May when he announced that he had quit Google after a decade of work there to speak more freely about the dangers of AI. His move came after ChatGPT, a chatbot, captured the world’s imagination and raised concerns that such systems could spread misinformation or even facilitate terrorism.
Despite the risks, there is no shortage of people saying AI can benefit society. This week alone, scientists working with AI tools discovered a new antibiotic and helped a paralyzed man walk again. And there are many more examples of AI being applied to social challenges in areas such as predictive policing, healthcare, and stopping the spread of fake news.
But a growing chorus of voices warns that the pace of AI development has outstripped efforts to govern it. Earlier this month, Apple co-founder Steve Wozniak and SpaceX CEO Elon Musk joined thousands of other tech experts in signing an open letter calling on companies to agree to a six-month pause in developing more powerful AI systems.
At the G7 summit in Japan this weekend, leaders called on nations to adopt technical standards to keep AI “trustworthy” and agreed that creating such rules must be an international effort. But several tech giants have resisted calls for a pause, arguing that it is too soon to regulate the technology.
The leaders of Google, Amazon, and Microsoft have all vowed to take steps to protect consumers. And the White House has been pushing for laws that would create an oversight framework for the industry. But the US has mainly been a follower in this area, with China leading the way in rolling out a range of laws to govern AI.
There are fears that the emergence of more powerful AI will lead to an arms race between tech firms and the creation of super-intelligent machines that will threaten humanity’s existence. Other worries include the risk of AI being used to rig elections, manipulate the media or commit fraud.
But Hinton argued that these risks are not exaggerated and that governments need to help educate the public about how AI can be used for both good and ill. He added that he was concerned the substantial productivity gains from AI would deepen inequality, as the wealth it creates goes to wealthy owners rather than to workers. He also urged governments to introduce laws requiring AI-generated content to be marked, much as central banks watermark cash. This, he said, could help prevent AI-generated content from being used to evade copyright law or to spread fake news.