By: Emilia Stankeviciute
July 26, 2024
On July 18, 2024, the Harehills suburb of Leeds, U.K., saw significant violence after police responded to a domestic disturbance in which children were removed from their family by government agency workers. The incident escalated into a night of rioting, with assaults on police officers, vehicle arson, and widespread destruction.
In the aftermath of the unrest, misinformation proliferated on social media, erroneously attributing the riots to the Muslim community. Various posts falsely scapegoated Muslims for the violence, including a widely shared claim that the riots were due to the growing Muslim population in Leeds (archived here) and speculation about dire consequences if their numbers increased further (archived here). Other posts wrongly implicated Mothin Ali, a Muslim councilor who was actively working to de-escalate the situation. This spread of false information exacerbated existing tensions and hindered efforts to restore peace.
Accounts linked to Indonesia and India were found to have fueled and contributed to the wider spread of some of this misinformation around the events in Leeds. This cross-border spread of false and misleading narratives underscores the danger misinformation poses in situations like this and the consequences it can have for minority communities.
According to The Mirror, the riots followed the removal of four Romanian children, who were taken into care by police after a tip-off that they were about to be taken out of the U.K. The intervention occurred after a baby from the family was taken to the hospital with a head injury, leading authorities to fear for the children's safety.
A family court proceeding, part of a pilot scheme allowing media attendance, took place after last week's events. The judge permitted The Mirror to attend and report on the hearing. During the hearing, it was revealed that the children, aged between eight and 14, were initially placed in temporary foster care but have since been returned to live with their uncle.
The judge emphasized the importance of focusing on the children's welfare and urged the family to remain calm. The court heard that the initial removal of the children was due to concerns about their potential relocation to Romania or Cyprus. The incident escalated tensions in the community, as footage of the children being removed sparked public outrage.
A social media analysis by Logically Facts revealed a significant spike in bot-like activity promoting a disinformation campaign centered around a viral post on X (archived here) with more than 7.9 million views. The post stated that "hundreds of Muslims riot in Harehills, Leeds, smashing police vehicles and attacking officers" and linked this to the newly elected local councilor.
Bot activity on social media can usually be identified by distinct characteristics. Bots often post the same or similar content repeatedly, a pattern seen in spam messages or automated promotional content. Additionally, bots tend to generate and post content much more frequently than typical human users, and this rapid posting can flood feeds with messages within a short time frame. Lastly, bot profiles tend to lack personal information, feature generic profile pictures, and exhibit unusual follower-to-following ratios.
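To illustrate how such signals might be combined in practice, here is a minimal sketch in Python that scores a hypothetical account record against the three characteristics above. The field names, thresholds, and weighting are illustrative assumptions for demonstration only, not the methodology used in this analysis.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal, hypothetical snapshot of a social media account."""
    posts: list[str]          # text of recent posts
    posts_per_day: float      # average posting frequency
    followers: int
    following: int
    has_bio: bool
    has_custom_avatar: bool

def bot_likeness_score(acct: Account) -> float:
    """Combine the three heuristics into a rough 0-1 score.
    Thresholds are illustrative assumptions, not validated values."""
    score = 0.0

    # 1. Repetitive content: share of recent posts that are exact duplicates.
    if acct.posts:
        duplicate_ratio = 1 - len(set(acct.posts)) / len(acct.posts)
        score += min(duplicate_ratio, 1.0) / 3

    # 2. Posting frequency far above a typical human user (assumed cut-off).
    if acct.posts_per_day > 50:
        score += 1 / 3

    # 3. Sparse profile and skewed follower-to-following ratio.
    ratio = acct.followers / max(acct.following, 1)
    if not acct.has_bio or not acct.has_custom_avatar or ratio < 0.1:
        score += 1 / 3

    return round(score, 2)

# Example: an account that reposts the same text hundreds of times a day.
suspect = Account(posts=["riot in Leeds"] * 40, posts_per_day=300,
                  followers=3, following=900, has_bio=False,
                  has_custom_avatar=False)
print(bot_likeness_score(suspect))  # -> 0.99 (all three heuristics triggered)
```

A real detection pipeline would weigh many more signals, but even this crude scoring shows how repetitive text, posting volume, and thin profiles can jointly flag an account for review.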
An abnormal increase in social media activity related to the viral post was detected from July 18 to July 22, 2024, with a spike over the weekend. It was primarily driven by six bot-like accounts, which predominantly copied the post's text, enhancing its reach through repeated shares. The six accounts shared the post 6,649 times over this period.
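As a rough illustration of how such a spike can be surfaced, the sketch below flags days on which share counts exceed a multiple of a quiet-day baseline. The daily counts, baseline, and threshold are placeholder assumptions, not the underlying data behind the 6,649 figure.

```python
from datetime import date

# Hypothetical daily share counts for the viral post (placeholder values).
daily_shares = {
    date(2024, 7, 17): 12,
    date(2024, 7, 18): 410,
    date(2024, 7, 19): 980,
    date(2024, 7, 20): 2300,   # weekend spike
    date(2024, 7, 21): 2100,
    date(2024, 7, 22): 850,
}

def flag_spikes(series: dict, baseline: float, factor: float = 10.0) -> list:
    """Return the days on which activity exceeds `factor` times the baseline."""
    return [day for day, count in series.items() if count > factor * baseline]

# Baseline taken from the quiet day before the unrest (an assumption).
print(flag_spikes(daily_shares, baseline=12))
# -> every day from July 18 to July 22 is flagged
```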
The same text was shared repeatedly from multiple accounts (Source: X/Modified by Logically Facts)
These accounts also posted video screenshots, encouraging users to click play, which would open a suspicious link in a new browser tab. Their metadata and user information pointed to Indonesian IP addresses and time zones, suggesting the accounts were operated from Indonesia.
By Monday, the accounts had switched to posting explicit images while copying texts from a Turkish activist and journalist notorious for their strong stance against stray dogs and those who support them. Again, they shared nearly identical posts numerous times a day.
The abrupt transition to an entirely unrelated trending issue, coupled with the complete cessation of any anti-Muslim or U.K. political content, further suggests the presence of bot activity.
The accounts then switched topics. (Source: X/Modified by Logically Facts)
An intriguing aspect of this bot network was its interaction with other bot accounts. These accounts often engaged in discussions using nonsensical and non-sequitur statements that suggest they were drafted using a generative AI tool like ChatGPT. This created an illusion of genuine interaction, sustaining the narrative across multiple platforms and further confusing unsuspecting users.
During and after the riots, several accounts with a history of spreading Indian right-wing rhetoric were also identified as significant disseminators of Islamophobic misinformation. These accounts spread false narratives that blamed Muslims for the violence and linked the unrest to broader anti-Muslim sentiments. A notable example is a post from an account that falsely claimed a recently elected councilor in Leeds was a "Bangladeshi" and "actively involved in the riots" (archived here). The councilor in question, Mothin Ali, has in fact been credited with protecting police officers and putting out a fire started by the rioters.
The same video of Ali had been used by another India-based account to spread an Islamophobic narrative even before the violence. Following Ali's election in May, their post (archived here) used the phrase "Vote Jihad in England," suggested that the "next target" of such a programme was India, and encouraged people to avoid voting for liberal or pro-Muslim parties.
Another post on X (archived here) shared a video of men attempting to set fire to a bus while claiming, without evidence, that the men were Muslims. The post also sought to vilify the Muslim community more broadly, saying that if countries allow Muslims to enter, they will "burn your town and homes." This kind of misinformation can inflame tensions and contribute to an atmosphere of distrust and hostility.
Multiple Indian users also reshared the language used in the viral post widely shared by bot-like accounts.
Anti-Muslim misinformation and rhetoric are common features of the content posted by many Indian right-wing accounts on social media. While this usually relates to domestic affairs, they have also been seen to post such content in connection with international events. For example, following the October 7, 2023, attack by Hamas on Israel, several Indian and international fact-checkers and news organizations found India-based accounts spreading false and misleading information about the attack and its aftermath (see for example here, here, and here).
In September 2022, groups displaying similar behavior posted a significant share of content that escalated tensions in Leicester in the build-up to and during the violence there. The unrest in Leicester saw a convergence of local and international dynamics, with right-wing Indian rhetoric playing a key role in the online discourse. This highlights the transnational impact of these ideologies on local communities in the U.K.
Indonesia has also experienced its share of ethnic and religious violence, mainly targeting Chinese Indonesians and other minorities. Using bots to propagate divisive narratives in the U.K. could be seen as an extension of these domestic tensions. Furthermore, the strategic alliance between certain segments of the Tamil community in Indonesia and Indian right-wing discourse provides a framework for understanding this cross-border dissemination of misinformation.
Thus, the potential explanations for Indonesian bots' involvement in spreading misinformation about the U.K. riots are multifaceted. One possibility is the existence of commercial bot services that cater to clients worldwide, amplifying content regardless of its geographic relevance. Another is geopolitical interest, with state or non-state actors using bot networks to destabilize or influence foreign societies.
For instance, in 2015, Indonesian click farms were implicated in manipulating social media engagements and political campaigns, such as funneling millions of Facebook likes to candidates during elections. Recent reports also highlight sophisticated bot farms in Indonesia, which operate many fake social media accounts to manipulate online discussions and trends.
The use of such technology to spread misinformation has parallels with the manipulation seen in Indonesian politics, where bots have been utilized to amplify certain political narratives and exacerbate ethnic and religious divisions.
Moreover, as of 2023, Indonesia had over 200 million internet users, making it one of the largest online populations in the world.
For Indian nationals, the strategic interest may lie in reinforcing domestic political narratives. By linking the events in Leeds to broader anti-Muslim sentiments, these actors can strengthen their ideological positions and rally support. This is particularly pertinent in the wake of the 2024 Indian elections, in which, despite Prime Minister Narendra Modi's re-election, his party experienced a weakened vote share.
The Harehills riots in Leeds have thus become a focal point for false claims propagated by Indian right-wing accounts and Indonesian bots. This involvement underscores the global nature of modern misinformation strategies and the complex interplay of political narratives and technological tools.
By examining these interconnected issues, it becomes clear that the motivations and impacts of misinformation are complex and far-reaching, requiring a nuanced approach to address and mitigate their effects. The real motivation behind this misinformation campaign, however, remains unconfirmed.