I think trying to limit offensive material is a mug's game, symptomatic of a society where people are quick to claim "victimhood" and display it as a badge of belonging.
I first began to think about this decades ago when "political correctness" was beginning to emerge. Practically any statement can offend someone. Hell, what right does the weather guy have to say that tomorrow will be a "beautiful sunny day" — is he unaware that many of us enjoy rainy days? That sort of thing. Believe me, it's just not worth keeping score of all the bitter barbs and calumnies flung your way by people who don't even know you.
You are making a very partisan argument.
Of course the opposite is true.
One thing we can be sure of is that for Trump and his followers there are not five stages of grief, leading from denial to acceptance. The furthest their sense of it can go is to the second stage, anger. Just as there is “long Covid,” there is long Trump. The staying power of his destructiveness lies in the way that disputed defeat suits him almost as much as victory. It vindicates the self-pity that he has encouraged among his supporters, the belief that everything is rigged against them, that the world is a plot to steal from them their natural due as Americans.
The fact is that statements that are offensive to transgendered people are often removed. Statements offensive to evangelical Christians are rarely removed.
That might have to do with the difference between the relative emotional and physical security of members of the two groups. Evangelical Christians have a long history in the USA, they have political power, their votes are courted by Republicans, they may attend mega-churches with thousands of others. Many people in the transgendered community are perceived as illegitimate, many are alienated and alone, some have been persecuted and suffer from physical and emotional trauma. So it's very possible that moderators see offensive messages directed at transgendered people as having greater potential to inflict psychological injury.
You may have good reason to choose some groups over others as worthy of special protection.
...there are tens of millions of White people who legitimately claim that they have been shut out of academic opportunities, are locked in shitty careers with no chance of advancement and are suffering.
The issue is fairness within information monopolies such as Facebook and Twitter. Using censorship to offer protection to specific identity groups doesn't fit with the goals of freedom of expression.
Conservatives have said for years that online social media platforms censor their views. But their evidence is largely anecdotal, and conservative accounts frequently perform extremely well online.
The charges of censorship will almost certainly play a central role in Wednesday’s hearing. Republicans like Senator Marsha Blackburn of Tennessee and Senator Ted Cruz of Texas are likely to criticize the chief executives about how their platforms have moderated content posted by conservative politicians or right-wing media outlets.
Conservatives have seized on individual instances of content moderation to claim that there is a systemic bias against them on the platforms. In some cases, the companies have said that the content violated their policies; in other instances they have said that the moderation was a mistake.
Recently, Republicans pointed to the decision by Twitter and Facebook to restrict the sharing of stories about Hunter Biden, the son of Joseph R. Biden Jr., the Democratic nominee for president. Twitter initially said that the story violated its policy against the sharing of hacked information, but later reversed itself. Facebook has said it is restricting the story’s reach while it waits for a third-party fact checker to evaluate the claims.
In 2017, Twitter took down an ad for Ms. Blackburn’s Senate campaign after the company deemed it “inflammatory” for a line that included a reference to “the sale of baby body parts,” saying the post violated its policies. The company changed its mind a day later.
In 2016, Facebook had to answer questions from conservatives about whether its Trending Topics section, which at the time was run by human curators, not the algorithms that power its News Feed, had suppressed conservative news. The company said it found no evidence that the accusations were true.
None of these cases unearthed evidence of a systemic bias against conservative content. A 2019 study by The Economist found that Google did not favor left-leaning websites. Posts from commentators like Ben Shapiro regularly rank among the most highly engaged on Facebook. Liberals have also had their posts flagged or removed from the platforms — groups that advocate for racial justice, for example, have said that Facebook has taken their content down.
Democrats have accused Republicans of raising the issue to manipulate Silicon Valley companies into being more cautious when it comes to moderating false or misleading information posted by conservatives.
“There’s simply no reason to have this hearing just prior to the election, except that it may intimidate the platforms, who have shown themselves to be vulnerable to political blunt force in the past,” Senator Brian Schatz, Democrat of Hawaii, wrote in a tweet this month about Wednesday’s hearing.
On 28 September 2018, Reddit made a big announcement.
Two of its most controversial subreddits, r/TheRedPill and r/Braincels, would immediately be quarantined.
Both the territory of men's rights activists and rife with misogynistic material, they wouldn't be banned outright, but they would be subject to tighter controls designed to limit the spread of hateful content.
Their posts would be scrapped from the Reddit front page. Gone from recommendations and subscription feeds, invisible in the search function. Users couldn't make money from them. They could only be accessed with the direct URL or from Google.
And even if people did find them, a warning flashed up: "[This subreddit] is dedicated to shocking or highly offensive content".
It was all part of the company's increasing shift away from its founding principle: radical free speech.
Reddit was being swept up in the growing pressure for social media companies to regulate hateful material on their platforms, and an increasing reliance on advertisers - who weren't thrilled to be associated with such ideas.
The quarantine policy was unique - different to the total ban approaches of other sites, like Facebook and YouTube.
And it hasn't proved to be a huge success, according to new research from the Australian National University.
In fact, it may have unintentionally made things worse when it comes to stopping the spread of hateful and misogynistic content, according to the study's author.
"What it says to me is that we need to think really carefully about the use of technological solutions to stop the spread of hateful material online," said PhD researcher Simon Copland.
The way many users responded - by taking their views to other, self-regulated platforms - is a huge concern, he told Hack.
Hack contacted Reddit for a response but did not hear back before publication.
Engagement drops off, but misogyny remains
r/Braincels - a community for incels - was eventually banned completely, but r/TheRedPill - a cesspool of coercive tricks to pick up women and rants against 'toxic feminism', amongst even more extreme content - is still quarantined.
Mr Copland wanted to see how the quarantines affected three things: the level of engagement, the use of misogynistic language, and how users responded to the changes.
He found the quarantine did not reduce the prevalence of misogynistic language in either subreddit.
"The percentage of comments that had misogynistic language in them stayed the same," Mr Copland said.
In more positive news, engagement levels did go down.
"There was an immediate, probably 50 per cent drop off, in terms of the amount of submissions and comments on those subreddits," he said.
But it was hard to say how much of that was because the subreddits had disappeared from places like subscription feeds and the front page, or because users started taking their chats elsewhere (we'll get back to this).
Mr Copland thinks it was a mix of both.
"So you see a drop, I suspect from those people who are casually engaged," he said.
"It would have taken time for them to even realise it's gone. Or to take the energy to get back involved."
"Then I think you would have had a second part of people who were really heavily engaged and really committed to these spaces, who would have gone well, 'This is no longer worth it, I'm going to go somewhere else'."
Users turn to self-moderating platforms
That somewhere else is an even more dangerous place: small, intense, self-moderated platforms.
When the quarantines were announced, the hierarchy of r/TheRedPill mobilised immediately.
They started actively pushing users to external forums. Moderators promoted specific sites - which we won't mention here - and then they set up an automatic response to every post, asking people to leave Reddit and join other communities.
This is what concerns Mr Copland most.
"These forums are watched far less closely and in turn allow hateful material to develop and spread more quickly," he said.
It's a trend he's seeing more and more these days, not just from Reddit, but in the wake of bans across social media platforms: when QAnon material was removed from YouTube and Facebook, or when r/The_Donald was banned earlier this year, for example.
"And what the research shows is that when you have those sorts of situations, ideas can become more extreme, more quickly."
As this kind of hateful material swells and intensifies on these smaller platforms, he thinks there's potential it could spill out into the real world.
We've already seen extremist violence, with attacks in Toronto and Christchurch.
"I worry about that real intensification of these ideas," Mr Copland said.
"The policies like quarantines intensify these ideas, which do have bad and potentially worse outcomes, than not implementing them in the first place."
'We need to take responsibility as a community'
Mr Copland said he'd like to see greater debate about how these platforms moderate their users, and their responsibilities in stopping the spread of hateful material.
In Reddit's case, the policy of quarantining only exacerbated existing distrust of institutions among some users.
"It was sort of seen as being implemented unfairly and unjustly and abruptly, so they didn't see any incentive to change their behaviour."
But he was wary about relying on these companies too much.
"Because they don't have the best interest of the community at heart," he said.
"They have their [own] interests at heart. And that is an interest to make money."
Mr Copland said hateful ideas don't spring up out of the blue on social media sites. Simply banning individuals and groups might only scratch the surface of a bigger issue.
"We also need to do work as ourselves as communities to try and address the root causes of these problems, rather than just relying on technological solutions to deal with them."
Are Aboriginal people misogynist because they practice their cultural tradition?
Right now, there are arguments over whether using the wrong pronouns is hate speech... in my opinion, this gets a little ridiculous.
I don't think you are disagreeing on this point.
However, I categorically reject the idea that sites which refuse to post unproven or debunked allegations from dubious sources are "censoring" political speech. And most of the belly-aching I've heard from conservatives has been about things like the Hunter Biden laptop and hydroxychloroquine. People like Giuliani want to use popular social media sites as their own propaganda platforms. Any media site has the right to protect itself against potential charges of spreading lies and disinformation and editing out such content is not just acceptable — it is their responsibility.
There is no "Freedom of speech" or expression on
privately owned internet sources. Twitter and Facebook have all the rights in
the world to censor anything it sees fit.
It is up to the free market to either over come, change it or go somewhere
The thing with Twitter is that you need to follow someone to see what they
post. If you don't like what someone is posting, you simply unfollow them and
poof! They are gone from your feed.
We may not like it, but private companies are under no obligation to cater to
The Radical Left is in total command & control of Facebook, Instagram, Twitter and Google.
At the end of August, for instance, Dan Bongino, a conservative commentator with millions of online followers, wrote on Facebook that Black Lives Matter protesters had called for the murder of police officers in Washington, D.C. Bongino’s social media posts are routinely some of the most shared content across Facebook, based on CrowdTangle’s data.
The claims — first made by a far-right publication that the Southern Poverty Law Center labeled as promoting conspiracy theories — were not representative of the actions of the Black Lives Matter movement. But Bongino’s post was shared more than 30,000 times, and received 141,000 other engagements such as comments and likes, according to CrowdTangle.
In contrast, the best-performing liberal post around Black Lives Matter — from DL Hughley, the actor — garnered less than a quarter of the Bongino post’s social media traction, based on data analyzed by POLITICO.
Reddit tried to stop the spread of hateful material. New research shows it may have made things worse