There was a long list of horror stories that might have played out on social media on Election Day: rampant intervention by foreign governments, widespread hoaxes, deliberate misinformation about voting and more.
Some tech researchers said the worst possibilities didn't seem to materialize on Tuesday, though they weren't ready to give the tech platforms a pass, especially after President Donald Trump launched a new wave of misinformation early Wednesday by falsely declaring he had won.
A full accounting of how the campaign's final stretch played out on sites like Facebook, Twitter and YouTube will take time, as researchers and the companies themselves examine how people used the platforms. But initial reviews suggest that, at least visibly, social media was not a major problem on Election Day.
"We may learn more in the coming days, but I haven't seen evidence of anything significant yet," said Dipayan Ghosh, co-director of the Digital Platforms & Democracy Project at Harvard Kennedy School. Ghosh, a former Facebook adviser, is a frequent critic of the company.
"Given how big a target the U.S. election presented, we really haven't seen much yet, and the social media companies have been fairly effective at addressing problems," he said.
Whatever success the platforms achieved was not for lack of misinformation. There were plenty of examples of false information and dubious claims, even if their precise effect on the election results is uncertain.
Disinformation about voting in Pennsylvania appeared on social media and right-wing websites, while in Virginia, election officials said a misleading video circulating online showed a person burning sample ballots.
In one of the most viral videos of the day, posted by the publisher of a conservative news website, a poll watcher appeared to be turned away from a polling place in Philadelphia. It had been shared more than 33,000 times and drawn 3 million views by Wednesday, though there was no evidence of a broader or deeper problem.
And many more examples abounded as people posted false information about what was happening at polling places.
Some of the pre-election misinformation efforts did seem to gain traction, especially those targeting Latino voters in Florida and Black voters. Private messaging apps were also a concern, because misinformation passed between individuals or small groups can be difficult to track.
"We're not done," said Alex Stamos, Facebook's former security chief and now director of the Internet Observatory at Stanford University. This election season, he helped organize the Election Integrity Partnership, which brought together more than 120 analysts at several organizations to document misinformation.
"We will keep operating, looking for and flagging election misinformation as long as there is a significant chance the outcome remains in doubt," Stamos said. He said late Wednesday that the day had been as busy as Tuesday for his team.
YouTube, for example, faced questions Wednesday about a video making the unfounded claim that Democrats were stealing Republican votes. Misinformation also circulated on the video app TikTok, researchers said, although that platform, too, had announced efforts to limit it.
At least one example of misinformation on social media on Tuesday was self-inflicted. Some Instagram users reported seeing posts from the app itself reminding them to vote "tomorrow," a glitch the company attributed to users not having restarted the app.
"It's early to declare victory in many respects, including whether the platforms succeeded at addressing the problems they prepared for, but it looks like the catastrophic scenarios didn't happen," said Matt Perault, director of Duke University's Center on Science & Technology Policy and a former policy director at Facebook.
If the tech companies do end up earning high marks for their handling of the election, it could signal that their services have improved after four years of relentless criticism from lawmakers, users and their own employees.
Almost immediately after the 2016 election, executives such as Facebook CEO Mark Zuckerberg faced questions about whether their platforms had distorted political debate and given Trump a leg up.
Since then, tech companies have adopted a series of changes to counter online misinformation, such as investigating covert foreign networks more aggressively, restricting the kinds of targeting advertisers may use and revising their policies on posts that could result in voter suppression. They have also expanded their use of warning labels, though not always consistently.
In the weeks and months leading up to Election Day, the platforms scrambled to harden their services against those spreading false information and fomenting political violence. Facebook banned QAnon conspiracy accounts, and Twitter restricted their reach. Facebook also removed thousands of "militia" groups after events planned on the platform ended in real-world violence.
They even made some last-minute changes to how their social networks work: Facebook paused recommendations for political groups, Instagram limited hashtag search results and Twitter said it would restrict some posts containing misinformation.
The full extent of how people may have misused the technology platforms during the election may not be known for some time. Russian agents' purchases of Facebook ads around the 2016 election, for example, were not made public until September 2017.
Joan Donovan, research director at the Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy, said the platforms still operate mostly behind the scenes, with little transparency about how they enforce their own policies.
"We don't know the impact of activity on social media platforms, or of the actions these companies have taken over the past few weeks," Donovan said. And even when tech companies take down problematic content, she said, they don't always explain their actions clearly, feeding the narrative that they are suppressing speech.
The election is not over yet, however, and the platforms now face a sizable challenge in the president's false statements about voting by mail.
Both Facebook and Twitter acted swiftly early Wednesday when Trump first began making false claims, attaching warning labels to his posts. By Wednesday night, such actions had become almost routine.
And at the top of their feeds, the companies posted a proactive message: votes are still being counted.