Researchers are warning that advances in artificial intelligence could supercharge hacking and election meddling by 2020.
In a report released Wednesday, researchers at Oxford and Cambridge universities concluded that AI-enhanced software will allow people with little technical skill to easily produce audio and video that is nearly impossible to distinguish from the real thing.
“There is no obvious reason why the outputs of these systems could not become indistinguishable from genuine recordings, in the absence of specially designed authentication measures,” the authors warn. “Such systems would in turn open up new methods of spreading disinformation and impersonating others.”
Artificial intelligence will “set off a cat and mouse game between attackers and defenders, with the attackers seeming more human-like,” Miles Brundage, a research fellow at Oxford University’s Future of Humanity Institute and one of the authors of the report, told NBC News.
“Artificially intelligent systems don’t merely reach human levels of performance but significantly surpass it,” Brundage said, telling NBC News that AI also allows technology to identify targets and launch attacks more efficiently than humans can.
Some of this technology is already being used to create videos, with “deepfakes” gaining notoriety online earlier this month by allowing people with limited technical skills to create fake pornographic videos, the report noted.
“There has been a night-and-day transition between a few years ago and now,” Brundage said of advances. “It’s becoming easy to get copies of these systems. Deepfakes was a proof of concept posted on Reddit that was made easier and easier to use. Large amounts of people were able to download it.”
Some politicians have taken notice, including Sen. Mark Warner, D-Va., who also sounded an alarm about the technology.
“Fake News” may only be the beginning. We need to start thinking about the consequences of fake video technology and how we maintain trust in a digital-based economy when you may not be able to believe your own eyes anymore. https://t.co/aiZQYKXBSx
— Mark Warner (@MarkWarner) February 20, 2018
“It’s one thing to say this could happen, another to prevent it and lessen the damage,” Brundage told NBC News. “We need better detection of fake multimedia, more research approaches to make systems less vulnerable to attack, and changes to some norms.”