The conversation about the dangers of AI and the potential problems it could bring has returned to the forefront following comments by billionaire entrepreneur Elon Musk, and now Bill Gates has added his voice to those advising caution. Shortly after Eric Horvitz, one of Microsoft’s leading researchers, went on the record as firmly believing that AI did not pose a threat, Gates said during a Reddit question-and-answer session that he believed differently. He went as far as to say that he found it difficult to believe that more people with knowledge of the field were not as concerned as he was.
Gates indicated that his concern was not for the immediate future, but that once AI became more common and more universally used, it could be only a matter of decades before machines evolved to a level of intelligence and autonomy that humans would not be able to control. Musk recently joined a group of renowned scientists in proposing that safeguards for AI research be established now, and that development of the technology be monitored closely so that human beings do not forfeit control. Gates, in his statement, said that he was in agreement with Musk and the others.
For many, it is difficult to make the leap from the current experience of AI programs, such as Apple’s Siri, to the science-fiction threats that Terminator or The Matrix have brought to the big screen, but the truth is that the technology is advancing quickly. Some programs have already moved beyond functioning as a talking search engine or playing elite chess to diagnosing illnesses and performing medical research. Perhaps not a threat yet, but there is even one AI purported to be the best poker player in the world. Extrapolating from how far the technology has come in the short time since it was first introduced lends credibility to the possibility that the future Gates is projecting could happen. It is a worst-case scenario being foretold, but one that is becoming increasingly less far-fetched.
Musk, Gates, and the others are not advising fear and panic, merely awareness and the establishment of controls. In the short term, AI systems present such huge potential for beneficial applications that most people see little reason to curtail the current pace of development. The long-term outlook, however, will depend, according to Stephen Hawking, on whether the technology can actually be controlled.
The question Gates and so many others are pondering, whether AI could really bring significant problems or even significantly alter the course of civilization, is a difficult one for most people to place in a present-day context. When so many minds who have been labelled visionary and insightful are reaching similar conclusions, however, there may be cause to take a closer look at what they are seeing.
By Jim Malone