Social Media Platforms Fight Anti-Vaccine Accounts and the Spread of Fake News

With vaccination against COVID-19 in full swing, social platforms like Facebook, Instagram and Twitter say they’ve stepped up their fight against misinformation that aims to undermine trust in the vaccines. But problems abound.

For years, the same platforms allowed anti-vaccination propaganda to flourish, making it difficult to stamp out such sentiments now. And their efforts to weed out other types of COVID-19 misinformation — often with fact-checks, informational labels and other restrained measures — have been woefully slow.

Twitter, for instance, announced this month that it will remove dangerous falsehoods about vaccines, much the same way it’s done for other COVID-related conspiracy theories and misinformation. But since April 2020, it has removed a grand total of 8,400 tweets spreading COVID-related misinformation — a tiny fraction of the avalanche of pandemic-related falsehoods tweeted out daily by popular users with millions of followers, critics say.

Efforts to crack down on vaccine misinformation now, though, are generating cries of censorship and prompting some posters to adopt sneaky tactics to avoid the axe.

False claims, fake news

The Associated Press identified more than a dozen Facebook pages and Instagram accounts, collectively boasting millions of followers, that have made false claims about the COVID-19 vaccine or discouraged people from taking it. Some of these pages have existed for years.

Of more than 15 pages identified by NewsGuard, a technology company that analyzes the credibility of websites, roughly half remain active on Facebook, the AP found.

One such page, The Truth About Cancer, has more than a million Facebook followers after years of posting baseless suggestions that vaccines could cause autism or damage children’s brains. The page was identified in November as a “COVID-19 vaccine misinformation super spreader” by NewsGuard.

Recently, the page stopped posting about vaccines and the coronavirus. It now directs people to sign up for its newsletter and visit its website as a way to avoid alleged “censorship.”

Facebook said it is taking “aggressive steps to fight misinformation across our apps by removing millions of pieces of COVID-19 and vaccine content on Facebook and Instagram during the pandemic.”

Facebook also banned ads that discourage vaccines and said it has added warning labels to more than 167 million pieces of additional COVID-19 content thanks to its network of fact-checking partners. (The Associated Press is one of Facebook’s fact-checking partners.)

YouTube, which has generally avoided the same type of scrutiny as its social media peers despite being a source of misinformation, said it has removed more than 30,000 videos since October, when it started banning false claims about COVID-19 vaccinations. Since February 2020, it has removed over 800,000 videos related to dangerous or misleading coronavirus information, said YouTube spokeswoman Elena Hernandez.

Prior to the pandemic, however, social media platforms had done little to stamp out misinformation, said Andy Pattison, manager of digital solutions for the World Health Organization. In 2019, as a measles outbreak slammed the Pacific Northwest and left dozens dead in Samoa, Pattison pleaded with big tech companies to take a closer look at tightening rules around vaccine misinformation that he feared might make the outbreak worse — to no avail.

It wasn’t until COVID-19 struck with a vengeance that many of those tech companies started listening. Now he meets weekly with Facebook, Twitter and YouTube to discuss trends on their platforms and policies to consider.

“When it comes to vaccine misinformation, the really frustrating thing is that this has been around for years,” Pattison said.

The targets of such crackdowns are often quick to adapt. Some accounts use intentionally misspelled words — like “vackseen” or “v@x” — to avoid bans. (Social platforms say they’re wise to this.) Other pages use more subtle messaging, images or memes to suggest that vaccines are unsafe or even deadly.

“When you die after the vaccine, you die of everything but the vaccine,” read one meme on an Instagram account with more than 65,000 followers. The post suggested that the government is concealing deaths from the COVID-19 vaccine.

Twitter said it is continuously reviewing its rules in the context of COVID-19 and changes them based on guidance from experts. Earlier this month, it added a strikes policy that threatens repeat spreaders of coronavirus and vaccine misinformation with bans.

Anti-vaccine COVID-19 falsehoods continue to pop up

Earlier this month, several articles circulating online claimed that more elderly Israelis who took the Pfizer vaccine were “killed” by the shot than those who died from COVID-19 itself. One such article from an anti-vaccination website was shared nearly 12,000 times on Facebook, leading earlier this month to a spike of nearly 40,000 mentions of “vaccine deaths” across social platforms and the internet, according to an analysis by media intelligence firm Zignal Labs.

“Vaccine hesitancy and misinformation could be a big barrier to getting enough of the population vaccinated to end the crisis,” said Lisa Fazio, a professor of psychology at Vanderbilt University.

Some health officials and academics generally believe that the social-platform efforts are helpful, at least on the margins. What’s not clear is how big of a dent they can put in the problem.

“If someone truly believes that the COVID vaccine is harmful and they feel a responsibility to share that with friends and family … they will find a way,” said Jeanine Guidry, a professor at Virginia Commonwealth University who studies social media and health information.

And some still blame business models that they say encouraged the platforms to serve up engaging, if false, coronavirus misinformation in order to profit from advertising.

