Scientists have long been warning of the potential dangers of artificial intelligence (AI), but it seems they can’t agree on what those dangers are or how to prevent them. At a recent conference at the Massachusetts Institute of Technology, Geoffrey Hinton, known as the “Godfather of AI,” said that humanity’s survival is threatened when “smart things can outsmart us.” He warned that AI systems could learn from all the literature ever written to manipulate people and could ultimately control us.
Fellow AI pioneer Yoshua Bengio, who shared computing’s top prize, the Turing Award, with Hinton, shares some of his concerns but worries that simply saying “We’re doomed” isn’t going to help. Bengio is more optimistic than Hinton and argues that governments and the public need to take both the short-term and long-term dangers seriously. However, Margaret Mitchell, a former leader of Google’s AI ethics team, is upset that Hinton didn’t speak out about the propagation of discrimination, hate speech, and toxicity during his decade in a position of power at Google.
Despite concerns over future AI dangers, some worry that the hype around superhuman machines is distracting from practical safeguards for current AI products. Governments are starting to listen to the AI pioneers’ warnings. Meanwhile, prominent figures, including Bengio and Elon Musk, have called for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4, citing concerns about job-market destabilization, automated weaponry, and the dangers of biased data sets.
- The White House is holding a meeting with the CEOs of Google, Microsoft, and OpenAI to discuss the risks of AI technology.
- European lawmakers are accelerating negotiations to pass sweeping new AI rules.
- All three winners of the 2019 Turing Award, Hinton, Bengio, and Yann LeCun, have voiced concerns about AI dangers.
AI technology has the potential to transform our lives for the better, but it also poses considerable risks, and computer scientists are divided on how best to mitigate them: some call for a pause on AI system development, while others argue that both short-term and long-term risks must be taken seriously by everyone.
The dangers of AI technology are real and should not be ignored. While Hinton believes we might not be able to prevent these dangers, many in the scientific community argue that practical safeguards are necessary to ensure that AI technology benefits society. It is time for governments and businesses to take these concerns seriously and seek solutions together.