Not this photo, but a similar photo…
A decade ago, at a pro-immigration march on the steps of the Capitol building in Little Rock, Ark., community organizer Randi Romo saw a woman carrying a sign that read “no human being is illegal.” She took a photograph and sent it to an activist group, which uploaded it to photo-sharing site Flickr.
Last August, the same image—digitally altered so the sign read “give me more free shit”—appeared on a Facebook page, Secured Borders, which called for the deportation of undocumented immigrants. The image was liked or shared hundreds of times, according to cached versions of the page.
This use of doctored images was a crucial and deceptively simple technique used by Russian propagandists to spread fabricated information during the 2016 election, one that exposes a loophole in tech company defenses. Facebook Inc. and Alphabet Inc.’s Google have traps to detect misinformation, but struggle—then and now—to identify falsehoods posted directly on their platforms, in particular through pictures.
Facebook disclosed last fall that Secured Borders was one of 290 Facebook and Instagram pages created and run by Russia-backed accounts that sought to amplify divisive social issues, including immigration. Last week’s indictment secured by special counsel Robert Mueller cited the Secured Borders page as an example of how Russians invented fake personas in an effort to “sow discord in the U.S. political system.”
The campaigns conducted by some of those accounts, according to a Wall Street Journal review, often relied on images that were doctored or taken out of context.
(Continue reading: The Big Loophole That Helped Russia Exploit Facebook: Doctored Photos – WSJ.)
There is an advantage to having actual humans involved: not every judgment call can be outsourced to an algorithm. Tech companies like to cut costs by eliminating staff, but there are consequences.
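Part of why a lightly doctored photo slips past automated filters: an exact-match hash changes completely the moment a single pixel does, so platforms that want to catch near-duplicates have to use a perceptual hash that tolerates small edits. Below is a minimal sketch of one well-known technique, "average hashing" (aHash), operating on a toy 8x8 grayscale image. The pixel values, the edit, and the distance threshold are all illustrative assumptions, not any platform's actual pipeline.

```python
def average_hash(pixels):
    """Hash an 8x8 grayscale image (list of 64 ints, 0-255) into 64 bits.

    Each bit records whether that pixel is brighter than the image's
    average brightness, so small localized edits flip only a few bits.
    """
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes (0 = identical)."""
    return bin(a ^ b).count("1")

# Stand-in for a real downsampled photo.
original = [10 * i % 256 for i in range(64)]

# "Doctor" a small region, like altering the text on a protest sign.
doctored = list(original)
doctored[20:28] = [255] * 8

h1, h2 = average_hash(original), average_hash(doctored)
print(hamming(h1, h2))  # small distance: likely the same underlying photo
```

In practice two photos whose hashes differ in only a handful of bits are probably the same image with minor edits, while unrelated images tend to differ in roughly half their bits. The catch, as the article suggests, is that flagging a near-duplicate only tells you the image was altered, not whether the alteration is deceptive; that judgment still tends to require humans.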