>> Davey Alba, The New York Times
Published: 2019-12-21 14:34:44 BdST
The accounts — including pages, groups and Instagram feeds meant to be seen in both the United States and Vietnam — presented a new wrinkle to researchers: fake profile photos generated with the help of artificial intelligence.
The idea that artificial intelligence could be used to create wide-scale disinformation campaigns has long been a fear of computer scientists. And they said it was worrying to see it already being used in a coordinated effort on Facebook.
While the technology used to create the fake profile photos was most likely a far cry from the sophisticated AI systems being created in labs at big tech companies like Google, the network of fake accounts showed “an eerie, tech-enabled future of disinformation,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab.
Scientists have already shown that machines can generate images and sounds that are indistinguishable from the real thing or spew vast volumes of fake text, which could accelerate the creation of false and misleading information. This year, researchers at a Canadian company even built a system that learned to imitate the voice of podcaster Joe Rogan by analysing audio from his old podcasts. It was a shockingly accurate imitation.
The people behind the network of 610 Facebook accounts, 89 Facebook Pages, 156 Groups and 72 Instagram accounts posted about political news and issues in the United States, including President Donald Trump’s impeachment, conservative ideology, political candidates, trade and religion.
“This was a large, brazen network that had multiple layers of fake accounts and automation that systematically posted content with two ideological focuses: support of Donald Trump and opposition to the Chinese government,” Brookie said in an interview.
The Atlantic Council’s lab and another company, Graphika, which also studies disinformation, released a joint report analysing the Facebook takedown.
The Epoch Media Group denied in an email sent to The New York Times that it was linked to the network targeted by Facebook and said that Facebook had not contacted the company before publishing its conclusions.
The people behind the network used artificial intelligence to generate profile pictures, Facebook said. They relied on a type of artificial intelligence called generative adversarial networks, in which two neural networks are pitted against each other: one generates images while the other judges how realistic they look. Through this process, a form of machine learning, the system teaches itself to create realistic images of faces that do not belong to any real person.
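The adversarial dynamic described above can be sketched in a toy example. The code below is a minimal illustration of the training loop only, not anything the network actually ran: a one-parameter "generator" learns to shift noise toward a 1-D Gaussian standing in for real face images, while a logistic-regression "discriminator" tries to tell the two apart. All names and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: a Gaussian centred at 4.0 (stand-in for real images)
    return rng.normal(4.0, 1.0, n)

# Generator: a single learnable shift applied to random noise
gen = {"mu": 0.0}

def generate(n):
    return gen["mu"] + rng.normal(0.0, 1.0, n)

# Discriminator: logistic regression on one feature
disc = {"w": 0.1, "b": 0.0}

def d_prob(x):
    # Probability the discriminator assigns to "this sample is real"
    return 1.0 / (1.0 + np.exp(-(disc["w"] * x + disc["b"])))

lr = 0.05
for _ in range(2000):
    # 1) Discriminator step: push D(real) toward 1 and D(fake) toward 0
    xr, xf = real_samples(64), generate(64)
    disc["w"] -= lr * (np.mean((d_prob(xr) - 1) * xr) + np.mean(d_prob(xf) * xf))
    disc["b"] -= lr * (np.mean(d_prob(xr) - 1) + np.mean(d_prob(xf)))

    # 2) Generator step: move fakes toward where the discriminator
    #    says "real" (gradient of -log D(x) w.r.t. mu is -(1 - D(x)) * w)
    xg = generate(64)
    gen["mu"] -= lr * np.mean(-(1.0 - d_prob(xg)) * disc["w"])

# After training, gen["mu"] has drifted toward the real mean, so
# generated samples become hard to distinguish from real ones.
```

Real face generators train deep convolutional networks on millions of images, but the push-and-pull between the two models is the same: the generator improves precisely because the discriminator keeps catching it.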
Nathaniel Gleicher, Facebook’s head of security policy, said in an interview that “using AI-generated photos for profiles” has been talked about for several months, but for Facebook, this is “the first time we’ve seen a systemic use of this by actors or a group of actors to make accounts look more authentic.”
He added that this AI technique did not actually make it harder for the company’s automated systems to detect the fakes, because the systems focus on patterns of behaviour among accounts.
Ben Nimmo, director of investigations at Graphika, said that “we need more research into AI-generated imagery like this, but it takes a lot more to hide a fake network than just the profile pictures.”
Facebook said the accounts masked their activities by using a combination of fake and authentic U.S. accounts to manage pages and groups on the platforms. The coordinated, inauthentic activity, Facebook said, revolved around the media outlet The BL — short for “The Beauty of Life” — which the fact-checking outlet Snopes said in November was “building a fake empire on Facebook and getting away with it.”
Gleicher said Facebook began its investigation into The BL in July and accelerated its efforts this fall, when the network's posting became more aggressive. It is continuing to investigate "other links and networks" tied to The BL, he said.
Facebook said the network had spent less than $9.5 million on Facebook and Instagram ads. On Friday, Facebook said The BL would be banned from the social network.
The Epoch Times and The BL have denied being linked, but Facebook said it had found links between the network's coordinated, inauthentic behaviour and the Epoch Media Group, as well as individuals in Vietnam working on its behalf.
The Epoch Media Group said in its email that The BL was founded by a former employee and employs some of its former employees. “However, that some of our former employees work for BL is not evidence of any connection between the two organisations,” the company said.
A Facebook spokeswoman said executives at The BL were active administrators on Epoch Media Group Pages as recently as Friday morning.
In August, Facebook banned advertising from The Epoch Times after NBC News published a report that said The Epoch Times had obscured its connection to Facebook ads promoting Trump and conspiracy content.
Twitter said Friday that the social network was also aware of The BL network and had already “identified and suspended approximately 700 accounts originating from Vietnam for violating our rules around platform manipulation.” A company spokeswoman added that its investigation was still open, but Twitter has not identified links between the accounts and state-backed actors.
Facebook also said Friday that it had taken down a network of more than 300 pages and 39 Facebook accounts engaged in coordinated, inauthentic activity around domestic political news in Georgia.
Facebook said the network tried to conceal its coordination, but the company found that the accounts were run by the Georgian government, which is led by the Georgian Dream party, and by Panda, a local advertising agency. The owners of the Facebook pages masqueraded as news organisations and impersonated public figures, political parties and activist groups.
In a related move, Twitter said it also took down 32 million tweets from nearly 6,000 accounts related to a Saudi Arabian social media marketing company called Smaat, which ran political and commercial influence operations.
Smaat was led in part by Ahmed Almutairi, a Saudi man wanted by the FBI on charges that he recruited two Twitter employees to search internal company databases for information about critics of the Saudi government, said Renee DiResta, a disinformation researcher at the Stanford Internet Observatory, which separately analysed Twitter’s takedown.
The operation was “extremely high volume” and automatically generated by “Twitter apps that made religious posts, posts about the weather” and other topics, DiResta said.
At times, the accounts were used for "more tailored purposes," including more than 17,000 tweets related to Jamal Khashoggi, a Saudi dissident and columnist for The Washington Post who was killed while visiting the Saudi consulate in Istanbul in October last year.
Many of the tweets claimed that those criticising the Saudi government for its involvement in the killing were doing so for their own political purposes.
© 2019 New York Times News Service