A few weeks after defeating Elon Musk’s attempt to silence it in court, the anti-hate research nonprofit the Center for Countering Digital Hate (CCDH) is back with a new piece of research on X (formerly Twitter). The study builds on earlier work investigating Musk’s impact on online speech by spotlighting how the policy changes he enacted are actively rewarding accounts that post hate speech with increased reach, engagement and even direct payouts through X’s subscriber feature.
CCDH studied the growth rates of 10 influential accounts that pay for X Premium and have posted anti-Jewish and/or anti-Muslim hate speech since October 7, 2023, when Hamas’ attack on Israel sparked the Israel-Gaza conflict. Some of these accounts had previously posted conspiracy theory content related to COVID-19, per the report.
The 10 accounts tracked for the study (titled “Hate Pays: How X accounts are exploiting the Israel-Gaza conflict to grow and profit”) are: Jackson Hinkle, Dr. Anastasia Maria Loupis, Censored Men, Jake Shields, Dr. Eli David, Radio Genoa, Ryan Dawson, Keith Woods, Way of the World, and Sam Parker.
The CCDH found these accounts were able to boost their reach on X after posting hateful content about the war. The report discusses examples of hate speech posted by the accounts, such as tweets depicting antisemitic tropes like the blood libel, or seeking to dehumanize Palestinians by depicting them as rats.
“Each of the accounts showed slow follower growth in the four months before October 7th, for a combined growth of approximately 1 million followers. However, in the four months after the outbreak of the conflict, they collectively gained 4 million new followers,” the CCDH wrote.
Growth rates for individual accounts that gained new followers over the period varied, with the highest growth multiple recorded being 9.6x (for Dawson’s account), followed by 8.3x (for Hinkle), and 7.1x (for Parker). At the lower end, Way of the World grew its followers 1.7x over the period.
X did not immediately respond to a request seeking comment on the report.
The report includes a history of the tracked accounts’ notoriety, noting, for example, that Hinkle is banned by WhatsApp, YouTube and PayPal, and that the anonymous Censored Men account used to mostly post defenses of toxic masculinity influencer Andrew Tate but has focused on the Israel-Gaza conflict since October 7. Dawson, a Holocaust denier who also believes the 9/11 terrorist attacks were carried out by Israel, was previously banned from X but had his account reinstated in 2023 under Musk.
Since taking over Twitter, as X was known back in October 2022, the billionaire has reversed a number of legacy account bans, including those of notorious white supremacists and neo-Nazis. Coupled with the policy changes Musk has pushed in areas like content moderation, account verification and premium features (such as prioritized ranking for paid accounts’ posts), this has resulted in a polarized platform where it’s increasingly difficult to distinguish genuine information from lies, and where the tone all too often skews towards outrage (or worse).
The CCDH contends this is a deliberate strategy by Musk to profit from tragedy. It accuses him of embracing hateful accounts and configuring X so that purveyors of hate speech are able, and encouraged, to turn war and human suffering into an opportunity to raise their profiles on the service and earn revenue from posts that exploit violence and misery.
Six of the 10 accounts the CCDH studied have enabled X’s subscriptions feature, which lets their followers pay them to access additional content. The report also quoted a post by Hinkle in early October, in which he shared a screenshot that showed him receiving $550 in ad revenue over the course of a month — directly profiting from engagement driven by his posts.
The CCDH said its analysis of the accounts showed that even activity critical of these posts, such as quote tweets denouncing hateful content, raised their visibility and reach (potentially boosting revenue-generating opportunities). Such critical reshares contributed as much as 28% of the reach of hateful posts, per the report, which suggested the figure is a conservative estimate because it does not account for X’s own algorithmic response to these reshares, which applies further amplification aimed at harvesting even more engagement for ad profit.
Ad-funded business models that earn revenue based on user engagement have been known to drive such anti-social outrage mechanisms. In X’s case, Musk’s erratic behavior has alienated some advertisers, but not all: The CCDH found ads being served alongside hateful posts made by all the tracked accounts. “We found ads for Oreos, the NBA, the FBI and even X itself placed near hateful posts,” the report said.
“Under Elon Musk’s ownership, X appears to be pursuing a strategy of hosting as much controversial content as possible,” a CCDH spokesperson told TechCrunch. “We know that this controversial content is addictive, not just for users who approve of it but also for users who criticize it, too. The potential benefit to X is that these controversies could ramp up user time spent on the platform and increase ad revenue — but only if brands are willing to pay for ads that could be displayed near toxic content.”
“The accounts studied by our report have grown sharply despite posting false or hateful content, showing that posting such content is no impediment to growth on X. This is not unique to the Israel-Gaza conflict, but it is the latest example of the problem. Our previous research into accounts that were reinstated following Musk’s takeover of Twitter shows that X stands to make significant ad revenue by welcoming users posting a range of topical hate and disinformation, from brutal misogyny to anti-vaccine conspiracies,” the spokesperson said.
Commenting on the report in a statement, Imran Ahmed, CEO and founder of the CCDH, said: “The public and advertisers need to know more about the symbiotic, profitable relationship between X and hate-peddling ‘influencers’. Lawmakers must act to enforce greater transparency and accountability from platforms and to allow these companies to be held responsible for harming the civil rights and safety of Jews, Muslims and other minority communities.”
Musk has previously claimed hate speech has decreased on his watch, but earlier CCDH research debunked his claim.
X is also currently under investigation in the European Union for a string of suspected breaches of the bloc’s online governance and content moderation regime, including for its response to illegal content, which may include hate speech. Penalties for confirmed breaches of the EU’s Digital Services Act can reach 6% of a company’s global annual turnover.