Blurred Lines: Upscrolled And The Co-Option Of Legitimate Civic Discourse

By Sam David

UpScrolled, a social media platform for microblogging and short-form video sharing, experienced rapid growth in late January 2026 following disputes surrounding TikTok’s US operations. The expansion was driven largely by allegations that protest-related content was being suppressed on mainstream platforms; while independent verification of these claims is limited, the perception alone appears to have been sufficient to drive migration. UpScrolled quickly grew into a high-traffic alternative space framed around resistance to perceived censorship.

In the weeks following, UpScrolled hosted substantial volumes of lawful political discourse. Users debated domestic immigration enforcement, ongoing and emerging international conflicts, free speech, and related issues. Alongside this content, however, monitoring identified a significant presence of material supportive of designated terrorist and violent extremist (TVE) movements, as well as identity-exclusive and conspiratorial narratives.

The emergence of this content on UpScrolled is not unique: both TVE material and lawful-but-awful discourse are detectable on most large platforms. The central issue in the UpScrolled migration is the sustained proximity of such material to legitimate civic grievance discourse within a rapidly expanding and lightly moderated environment.

Platform Migration and Content Proximity

Observed activity suggests a pattern of narrative adjacency rather than dominance by TVE content. Discourse largely focused on lawful protest issues—such as immigration and foreign policy, policing practices, and civil liberties—but frequently contained parallel or downstream comments utilising language or imagery drawn from TVE frameworks (Figure 1). In some cases, TVE-coded discourse was embedded within conspiratorial and identity-exclusive narratives and tropes; for example, posts employed neo-Nazi or jihadist terminology and imagery within antisemitic/anti-Zionist framings. In others, explicit TVE material—namely media outlets aligned with Islamic State and al-Qaeda—appeared adjacent to protest content via discover feeds, shared hashtags, or single accounts that alternated between lawful political commentary and amplification of extremist material.

It is stressed here that there is no evidence of coordinated alignment between protest communities and TVE groups or organisations. Rather, the dynamic appears opportunistic. Malign actors seeking to advance violent, exclusionary, or accelerationist narratives appeared to position themselves within high-visibility grievance streams, where discourse related to institutional distrust and systemic degradation was elevated but still lawful and legitimate (Figure 2).

This distinction is analytically significant. The findings do not suggest or support claims that lawful political or protest movements are inherently extremist; rather, they indicate that periods of heightened institutional distrust generate discursively porous information environments vulnerable to exploitation by TVE groups or informal support networks. This vulnerability is exacerbated when moderation is uneven and growth is rapid, creating permissive conditions for content that would otherwise remain marginal to circulate within mainstream spaces.

Figure 1

Note. Single-account content collected through qualitative monitoring on UpScrolled on 10 February 2026 by Sam David. The content embedded overt TVE material (Islamic State) within broader protest discourse.

Figure 2

Note. Content collected through qualitative monitoring on UpScrolled between 31 January and 10 February 2026 by Sam David. Observed content was lawful and legitimate, containing elevated institutional distrust and systemic degradation narratives.

Structural Conditions

Several structural features appear to have shaped this environment. First, rapid user growth reduced the ratio of moderators to content volume. Public statements from UpScrolled leadership acknowledged capacity constraints during the surge period. In such conditions, a lag in enforcement is to be expected, and the presence of TVE content should not be interpreted as endorsement by the platform. UpScrolled is not alone in this struggle: both Bluesky and the Chinese-owned app XiaoHongShu (‘Little Red Book’, or ‘RedNote’) experienced similar strains in recent years.

Second, periods of mass digital migration can distort content visibility. When large volumes of users enter a platform simultaneously, recommendation systems are forced to operate on incomplete behavioural data. In early-stage and rapid-growth phases, these mechanisms may prioritise user retention by surfacing a higher volume of heterogeneous content, increasing the likelihood that ideologically divergent material appears in proximity. UpScrolled is particularly vulnerable here: the algorithm driving the platform’s ‘discover’ feed orders posts by engagement in conjunction with light temporal weighting, giving all content, legitimate discourse or not, a roughly equal opportunity to appear on a user’s feed.
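UpScrolled’s precise ranking logic is not public, but the mechanism described above can be made concrete with a minimal sketch. Everything below is an illustrative assumption (the `Post` fields, the engagement weights, the 12-hour half-life), not the platform’s actual implementation; the point is that a score built only from engagement and recency is indifferent to what the content says, which is exactly the proximity effect at issue.

```python
import time
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    reposts: int
    comments: int
    created_at: float  # Unix timestamp

def discover_rank(posts, half_life_hours=12.0, now=None):
    """Order posts by raw engagement, lightly decayed by age.

    Because the score ignores who posted and what the content says,
    extremist and legitimate posts with similar engagement end up
    adjacent in the same feed.
    """
    now = now or time.time()

    def score(p):
        engagement = p.likes + 2 * p.reposts + 3 * p.comments
        age_hours = (now - p.created_at) / 3600.0
        decay = 0.5 ** (age_hours / half_life_hours)  # light temporal weighting
        return engagement * decay

    return sorted(posts, key=score, reverse=True)

# Example: a fresh, modestly engaged post outranks an older viral one,
# keeping newly posted material of any ideological stripe visible.
posts = [
    Post("older_viral", likes=900, reposts=300, comments=200,
         created_at=time.time() - 48 * 3600),
    Post("new_modest", likes=120, reposts=40, comments=30,
         created_at=time.time() - 2 * 3600),
]
print([p.post_id for p in discover_rank(posts)])
```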

Third, grievance-driven migration concentrates diverse sets of actors within the same digital space. Civil liberties advocates, geopolitical activists, and conspiracy entrepreneurs may all interpret perceived censorship as validation of broader systemic critique, placing them in close digital proximity to TVE networks and supporters. Shared distrust functions as an opportunity for lowest-common-denominator symbolic alignment, even where substantive ideological positions and tactical methods diverge sharply.

Within such an ecosystem, TVE actors do not need to dominate discourse to achieve effect. Co-location with legitimate grievance content increases the chances of exposure and, while this does not equate to audience capture or ideological alignment, it can create avenues for user harm through algorithmic radicalisation or cross-platform migration.

Exposure and Conditioning

The significance of this lies less in immediate exposure and more in downstream effects. In this context, the concern is not ideologically explicit recruitment or operational signalling; rather, it is the cumulative normalisation of harmful narratives. Research has shown that narrative repetition within popular spaces can reduce the perceptual distance between issue-based mobilisation and actors seeking to instrumentalise similar grievance language for exclusionary projects. In a shifting global system, marked by compounding crises and competition across political, economic, and social domains, rising levels of institutional distrust can leave individuals more receptive to narratives and actors outside the status quo.

This does not collapse substantive distinctions in content. Lawful civic advocacy is analytically and normatively distinct from TVE movement and supporter materials. However, when context is degraded by content velocity or account-level convergence, avenues open for malign digital actors to embed within the broader discourse without ideological commitment from the audience. As these exposure pathways expand, users are more likely to encounter TVE-coded interpretations not through deliberate behaviour, but through proximity.

Why This Matters

The case of UpScrolled illustrates that rapid migration events can introduce visibility gaps as diverse sets of user groups shift across platforms. During these periods, monitoring coverage is likely to be uneven, creating permissive conditions for TVE actors and informal support networks to embed narratives within legitimate grievance streams.

This activity complicates moderation and monitoring, particularly where detection models rely on ideologically explicit branding or static indicators. In the contemporary online environment, characterised by a multitude of actors within a geopolitical landscape in flux, moderation thresholds are difficult to calibrate without increasing false positives or degrading analytical precision.
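To make the calibration difficulty concrete, consider a toy static-indicator detector. This is a deliberately simplified illustration, not a real moderation model; the indicator terms and thresholds are invented for the example. Raising the threshold misses TVE-coded posts that avoid branded vocabulary; lowering it starts flagging lawful grievance discourse that shares surface terms.

```python
# Toy static-indicator detector for TVE-coded language. Illustrative only:
# the indicator list and thresholds are invented, not a real moderation model.
INDICATORS = {"caliphate", "crusader", "collapse", "traitors", "accelerate"}

def indicator_score(post: str) -> float:
    """Fraction of known indicator terms appearing in the post."""
    text = post.lower()
    return sum(term in text for term in INDICATORS) / len(INDICATORS)

posts = {
    "explicit TVE post":     "the caliphate returns; accelerate the collapse, traitors beware",
    "lawful grievance post": "institutions collapse while traitors in office ignore us",
    "benign policy post":    "the new housing bill passed committee today",
}

# A high threshold catches only explicitly branded language; lowering it to
# catch more coded material starts flagging lawful grievance discourse too,
# and a non-branded post sharing no indicator terms is missed at any threshold.
for threshold in (0.6, 0.4):
    flagged = [name for name, text in posts.items()
               if indicator_score(text) >= threshold]
    print(f"threshold={threshold}: flagged {flagged}")
```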

Co-location also carries exposure risks. TVE content, including graphic imagery or explicit violence, appearing in proximity to legitimate political discourse increases the risk of incidental exposure to harm. Research has documented the scale and psychological impact of such exposure among both the research community and the general population. Further study should examine how structural proximity might affect the likelihood of non-deliberate exposure to violent content, and the potential implications of such exposure for user harm.


Sam David is a Director for the Canadian Association for Security and Intelligence Studies (CASIS) Vancouver and former graduate fellowship holder in Political Science at the University of British Columbia for research in human security and conflict. His work focuses on counterterrorism and counterextremism, with emphasis on mobilisation and online radicalisation within contemporary information environments.

Photo by Madison Oren on Unsplash

Analysing the Online Thugur of the Salafi-Jihadi Digital Ecosystem: Facebook, Instagram, and TikTok

By Alessandro Bolpagni, Eleonora Ristuccia, and Grazia Ludovica Giardini

More than ten years ago, Halummu launched an online campaign entitled “Supporting Ribat and Jihad” to urge IS munasirin to spread IS propaganda material “to as many platforms and accounts as possible”, underlining that the “ongoing war between the camp of kufr and the camp of faith is waged on the military and media levels.” Over the past decade, while big tech companies have worked to rid their platforms of Salafi-Jihadi (al-Qaeda – AQ – and IS) users and content, media mujahidin have remained not mere guests but true ‘habitués’, thanks to their dexterity and skill in exploiting the features and vulnerabilities of social media to avoid content moderation and to conduct da‘wa, share propaganda material, and recruit new supporters. This is mostly true of social networks, the online thughur (digital frontier) of the Salafi-Jihadi online ecosystem, where propaganda – especially on Facebook, Instagram, and TikTok – is ‘openly’ and ‘publicly’ accessible, without a user needing any particular links to reach it.

When it comes to the presence of Salafi-Jihadi propaganda on Facebook, Instagram, and TikTok, we are used to hearing that IS and AQ have been driven off these social networks. The reality tells another story. These platforms show an extensive presence of pro-AQ and pro-IS propaganda material shared by supporters who are gradually ‘settling on’ this digital frontier of the Salafi-Jihadi online ecosystem, which is no longer a place where propaganda is merely disseminated. This has led to the emergence of a horizontal propaganda stream across Facebook, Instagram, and TikTok, wherein pro-AQ and pro-IS users craft their propaganda according to each platform’s content typology and functioning. These different usage patterns reflect the very purpose of these social networks: while Facebook and Instagram are designed for intentional browsing, TikTok is a ‘tailor-made’ gambling slot machine. Accordingly, we are dealing with different platforms and familiar content, but in multiple forms. These social networks share several techniques exploited by the Salafi-Jihadi movement to disseminate propaganda material tailored to each network, including the use of the comments section, Direct Messages (DMs), stories, re-posts, broadcast channels, group chats, live propaganda discussions, ‘Account Hijacking’, hashtags, ‘Photo Mode’, editing and filters, and the ‘favourites’ section.

Facebook

Firstly, Facebook is a highly ‘textual’ social network, as can be inferred from its 63,206-character limit for post captions. Users are thus able to share large portions of text within the comments section and interact more extensively. With more than ten years of experience, Salafi-Jihadi propaganda on Facebook is mainly in Arabic and articulated in long textual posts, often ‘hidden’ in the comments section. This section is also employed to spread content and links to other Facebook posts, or outlinks to other social media. Furthermore, hashtags are widely employed as a functional tool for categorising Salafi-Jihadi content. Moreover, to ease access for sympathisers, many users employ the “mention” and “highlight comment” functionalities to reach many or all of their followers. Overall, Salafi-Jihadi users tend mainly to share institutional and Ansar Production propaganda, articulated in text transcriptions, videos, Salafi-Jihadi newspapers, and long textual discussions. Within the Salafi-Jihadi online ecosystem, Facebook acts as a content aggregator, presenting various internal links and outlinks that direct users to archive websites and other social media platforms.

Instagram

Instagram sits halfway between Facebook and TikTok. Content is generally more ‘polished’ and ‘curated’ than on TikTok, where non-branded propaganda is particularly prevalent. As to how extremist material is distributed, Salafi-Jihadi users exploit several Instagram features which enable content to be shared both textually and visually. Stories are regularly employed by supporters not only to spread propaganda, but also to disseminate outlinks leading to other social media platforms. Carousels are used to store Salafi-Jihadi content in a single post; this is the case, for instance, with entire issues of the An-Naba newspaper or series of bulletins by the Amaq News Agency. The ‘Highlights’ section likewise allows users to categorise propaganda material. Salafi-Jihadi supporters also engage in groups and broadcast channels, where institutional, Ansar Production, and non-institutional propaganda content can be found. More recently, supporters have been creating custom AI chatbots with which to interact and receive information on emblematic Salafi-Jihadi books, such as those of Anwar al-Awlaki. Besides this, Instagram AI chatbots are able to provide users with outlinks to the most well-known pro-IS directories collecting institutional and Ansar Production propaganda.

TikTok

On TikTok, users are constantly and often unwittingly flooded with all sorts of content due to its peculiar algorithm and features. For Salafi-Jihadi propaganda, TikTok represents a functional space for dissemination, giving prominence to a type of material that departs from the canonical Salafi-Jihadi propagandistic style. Alongside other typologies, the authors detected a particular type of content they describe as non-branded propaganda: audiovisual material devoid of any AQ or IS media house logo, yet directly attributable to Salafi-Jihadi groups or imagery. TikTok has also proven an ideal landing ground for the dissemination of so-called ‘Gaming-jihad propaganda’, which is produced using gaming platforms and focuses on the creation and reproduction of pro-Salafi-Jihadi propaganda. Monitoring how this propaganda is shared within the TikTok environment shows that jihadi supporters take advantage of all the features the platform provides to accomplish their communicative purposes. In particular, some TikTok users exploit live streams to share and discuss Salafi-Jihadi propaganda, creating a kind of safe interactional space within the platform. TikTok is thus the perfect representation of the ‘feed gambling machine’, providing several ways to disseminate content tailored to varied audiences and purposes, often stressing and/or crossing the so-called ‘halal-haram line.’

Adapting to increase capabilities and effectiveness

Overall, each social network, with its own audience, hosts specific content shared by Salafi-Jihadi supporters through the exploitation of the platform’s features. In this sense, Salafi-Jihadi supporters behave like the ‘average user’, using each social network according to its functioning. In other words, propaganda material on Facebook is highly textual because Facebook focuses on connecting people through people; propaganda on Instagram is centred on visual and textual content because Instagram connects people and content; and propaganda on TikTok is articulated entirely in audiovisual content because TikTok connects users through content. Based on this approach, and by spreading propaganda across multiple platforms, Salafi-Jihadi supporters elaborate and adapt their propaganda to the targeted social media to increase their reach and radicalisation effectiveness.


Alessandro Bolpagni is a Senior Analyst at the Italian Team for Security, Terroristic Issues & Managing Emergencies (ITSTIME) and a lecturer at the Università Cattolica del Sacro Cuore (UCSC). He also specialises in OSINT, SOCMINT, and Digital HUMINT, with a particular focus on the analysis and monitoring of the Salafi-jihadi online ecosystem.

Eleonora Ristuccia is a Junior Analyst at the Italian Team for Security, Terroristic Issues & Managing Emergencies (ITSTIME). She specialises in the application of OSINT and SOCMINT techniques. Specifically, she focuses on the study of Salafi-jihadi groups’ communication strategies and propaganda through various online platforms.

Grazia Ludovica Giardini is a Junior Researcher at the Italian Team for Security, Terroristic Issues, and Managing Emergencies (ITSTIME). She specialises in the application of Open Source Intelligence (OSINT), Social Media Intelligence (SOCMINT) techniques, and Digital Human Intelligence (Digital HUMINT). Her research activities are focused on monitoring Salafi-jihadi groups’ communication strategies and propaganda through various online platforms.

Image credit: Rodion Kutsaiev on Unsplash

Between Suspicion and Selection: How Virtual Extremist Communities Filter Newcomers

By Christopher V. David and Marten Risius

Extremist groups operate in an environment where trust is existential and suspicion is constant—every new member could be an ally or an informant. The Provisional IRA, for example, famously operated a dedicated squad tasked with tracking down and liquidating informers within its ranks. In a spectacular twist, it later turned out that the leader of this squad was himself a mole, “the golden egg” of British intelligence, who had devastated the organization’s operational security for years. Such examples illustrate the severe implications of ineffective filtering.

To prevent such infiltration, extremist groups rely on strong vetting and filtering mechanisms to select loyal members. In offline contexts, a central strategy for this is what Karsten Hundeide (2003) termed “deep commitment”: requiring recruits to perform certain acts that prove loyalty. In his ethnographic study of extremist youth groups, this included requiring newcomers to commit crimes that prevented them from easily disengaging at a later point. In a particularly gruesome example, the so-called Islamic State vetted its child soldiers by pushing them to commit violence against their own families.

In an online environment where anonymity reigns, such physical tests of loyalty are difficult. Consequently, many virtual spaces are vulnerable to infiltration, and openly accessible extremist web forums are regularly monitored by outsiders. For instance, antagonistic communities on platforms such as Reddit infiltrate incel forums to harvest content and ridicule their members. Faced with this constant threat, extremist communities are forced to devise protective mechanisms adapted to the digital world. In our recently published comparative case study based on ethnographic material, we examine how virtual extremist communities filter members. We find striking differences between two extremist communities: a misogynistic incel forum and a white supremacist forum.

Hostility as a deliberate selection mechanism for incels

The incel community filters members through a hostile process we call “Trial by fire”. On the forum, newcomers post directly to the general feed, where they are put through what one established member called a “wringer”. Newcomers are assigned a visibly low-status label and are often met with constant hostility. Their novice status is indicated by a greyed-out username, which remains in place until they have made 500 posts.

This hostility is not random trolling but rather a deliberate selection and onboarding mechanism. We observed multiple established members reflecting on the strategic purpose of their hostility. One said: “I think if [the forum] didn’t put people through the wringer (…) we would have even more infiltration. (…) I think it is a defence mechanism because outsiders/other groups/redditors etc. all hate us (…).” Another user agreed, stating that he says “things to weasel them {infiltrators} out”. The amount of hostility is moderated by specific criteria:

  • Formal Rule Compliance: While some rules are generic (such as do not spam), others are specific to incel subculture (e.g. do not brag about receiving female attention). In any case, rules are strictly enforced. For example, a newcomer who violated the ban on LGBTQ+ content in his first post was targeted for harassment and reported to moderators.
  • Implicit Assumptions: Newcomers are expected to pick up on unwritten community norms. For example, while hateful tirades against women are a standard posting practice, targeting law enforcement is met with suspicion. A recruit who directly attacked police was met with an emoji symbolizing a federal agent (a “glowie”), marking him as a suspected infiltrator. 
  • Hierarchy: Newcomers must “know their place”. While the forum is a “refuge for the wicked” for some established members, newcomers are denied any empathy. One recruit shared his fear of being expelled from university for making threatening remarks about women, and established members responded with severe verbal abuse and calls for self-harm. 

  • Ideological Engagement: We observed that hostility decreased if a newcomer demonstrated deep knowledge of or interest in the incels’ fatalistic ideology. For example, the post of one newcomer that compiled “scientific proof” for the forum’s underlying assumptions was moved to the must-read section.

Mission-driven selection in the white supremacist forum

On the white supremacist forum, the filtering process is vastly different and comparatively more structured. We refer to this process as “mission-driven”, due to its educational character. Newcomers are expected to introduce themselves in a dedicated subforum and detail their motivation for joining the “movement”. Self-study of white supremacist ideology is expected, with established members urging newcomers to “read, read, read” dedicated threads explaining central tenets of the ideology.

While a general “siege mentality” is present on the forum, with some members suspecting that a significant portion of users may be infiltrators or ideological opponents, recruits who follow this protocol are generally welcomed warmly. In their introductory threads, established members judged recruits primarily based on two key criteria: 

Ideological Alignment: A recruit’s stated motivation for joining is crucial. A newcomer from Australia who detailed how “mass migration” was destroying his country was welcomed for reflecting key community beliefs. Another user gained instant approval by using coded language (adding capitalized double SS in a sentence), signalling allegiance to National Socialist thought. 

Racial suitability: This is the primary filter criterion. The forum limits membership only to “100-percent white people of European descent”. There is constant suspicion of infiltration attempts by “inferior races”. When a self-declared ethnic Albanian introduced himself, he was immediately asked whether he was “racially white”. He attempted to prove his belonging by sharing a picture of himself, but his whiteness was deemed insufficient, and he was barred from participation. 

Anonymity and the lack of deep commitment

We believe that our results show two vastly different ways in which virtual extremist communities attempt to replace the “deep commitment” mechanisms of offline groups. The anonymous online environment leads both incels and white supremacists to be constantly wary of infiltration. However, the communities’ distinct goals influence their filtering strategies: 

  • The incel forum applies a “trial by fire” approach: They drive away both potential allies and infiltrators through fierce hostility. This is primarily because established members prioritize creating a “refuge for the wicked” instead of reaching clear political goals. They can therefore afford to filter out some genuine incels. Similar strategies of hostility as a filtering tactic have also been reported by Lee and Knott for a fascist forum, suggesting a potentially broader pattern of hostility as a selection mechanism. 
  • The white supremacist forum is mission-driven: The community hopes to awaken “racial consciousness” in as many white people as possible. Therefore, once a recruit passes initial racial and ideological screening, hostility is seen as counterproductive. 

Our findings highlight how extremist communities choose different strategies moderated by their broader community goals when adapting their vetting and filtering processes. For future research, it would be particularly promising to compare such filtering processes in virtual communities of contrasting ideologies (as incels and white supremacists do share certain overlaps), or in closed virtual spaces, in which moderators are primarily responsible for granting access to the group. 


Christopher V. David is a researcher and PhD candidate at the Neu-Ulm University of Applied Sciences, Germany. His work examines online extremism through a socio-technical lens, with a particular focus on how extremist communities communicate, learn, and adapt across digital environments. He holds a degree in Psychology (M.Sc.) and Security Studies (M.A.) https://bsky.app/profile/dsrc.bsky.social

Marten Risius is Professor for Digital Society and Online Engagement at Neu-Ulm University of Applied Sciences, Germany and Adjunct Senior Fellow at the School of Psychology at the University of Queensland, Brisbane, Australia. His position receives generous funding support from the Bavarian State Ministry of Science and the Arts through the Distinguished Professorship Program as part of the Bavarian High-Tech Agenda. https://bsky.app/profile/risius.bsky.social

Automatically Generating Counter-Speech: Opportunities and Challenges of Using LLMs for CVE

By Ellie Rogers

As technology has developed, extremist actors have found new ways to use it to their advantage. Large language models (LLMs) are one area that has been exploited by extremists to create and share content.

Introduction

LLMs use natural language processing (NLP), and often artificial intelligence (AI), to process and generate text for translation, information retrieval, conversational interactions, and summarisation. Countering violent extremism (CVE) efforts are also making use of LLMs by automatically generating counter-speech (interventions that directly address, or offer alternatives to, extremist content), with the aim of dissuading individuals from extremist narratives. Scholars have argued that LLMs may assist practitioners in identifying hate speech, creating targeted counter-speech content, and engaging with users to deliver counter-speech. This article outlines the potential for LLMs as a vehicle for counter-speech, whilst highlighting the challenges that must be overcome.

Potential Benefits

Using LLMs to automatically generate counter-speech can help meet the increasing demand for counter-speech online, as they can supplement the design and dissemination of counter-speech and may be more easily scalable than solely human-operated interventions. When identifying where counter-speech is needed, LLMs could automatically detect hate speech. Counter-speakers have also shown interest in using AI to gather relevant information to assist them in designing counter-speech content. Using LLMs to assist counter-speakers in identifying hate speech and constructing counter-speech responses could be a beneficial time-saving strategy.
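None of the sources above specify an implementation, but the assistive workflow they describe can be sketched as follows. The `classify_hate` and `llm_generate` functions are hypothetical stand-ins for whatever classifier and model a practitioner actually deploys, and the mandatory human-review step reflects the manual-review caveat discussed later in this article.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewItem:
    post_text: str
    draft_reply: str
    approved: bool = False  # a human reviewer must flip this before posting

def classify_hate(text: str) -> float:
    """Hypothetical stand-in for a hate-speech classifier; returns a score in [0, 1]."""
    markers = {"vermin", "go back", "subhuman"}  # toy keyword heuristic only
    return 1.0 if any(m in text.lower() for m in markers) else 0.0

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would query a model."""
    return f"[draft counter-speech for review, from prompt: {prompt[:50]}...]"

def propose_counter_speech(post_text: str, threshold: float = 0.8) -> Optional[ReviewItem]:
    """Draft counter-speech for posts flagged as hateful, queued for human review.

    Nothing is posted automatically: per the authenticity, accuracy, and
    backfire concerns discussed below, every draft awaits human approval.
    """
    if classify_hate(post_text) < threshold:
        return None  # no intervention proposed
    prompt = ("Write a calm, factual reply that challenges the following post "
              "without insults:\n" + post_text)
    return ReviewItem(post_text=post_text, draft_reply=llm_generate(prompt))

item = propose_counter_speech("These people are vermin and should go back.")
if item:
    print(item.draft_reply)  # a practitioner edits and approves before posting
```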

It is important to reduce the strain on counter-speakers, as they can become victims of hate speech themselves and experience negative impacts on their personal well-being. Because counter-speech requires substantial time and effort, and often involves addressing upsetting content, some counter-speakers have reported feeling overwhelmed and experiencing negative impacts on their mental health. As such, counter-speakers have suggested that using AI in counter-speech campaigns could protect their well-being, as it can offer them anonymity and reduce the amount of harmful content they must identify and review, helping them to remain more emotionally detached.

Potential Challenges

Whilst using LLMs for counter-speech can reduce the physical and mental load on counter-speakers, there are important practical limitations and ethical challenges that should be considered.

There are questions surrounding the functionality of LLMs and their ability to produce relevant and credible counter-speech for a variety of audiences. Generating counter-speech requires large amounts of data, language comprehension, emotional intelligence, and contextual knowledge, which can be challenging to achieve with machine learning alone. As such, some LLM-generated counter-speech has been found to include factual or grammatical errors, which may limit its effectiveness within a CVE campaign.

Counter-speakers have expressed concerns that a lack of human involvement during the design and dissemination of counter-speech may affect its authenticity and credibility. For instance, disclosing that counter-speech was written by AI has been found to significantly reduce users’ perceived trust in the counter-speech. Alternatively, if the use of AI for counter-speech is not disclosed, transparency concerns can arise from users being unaware that LLMs are involved. It is important that LLM-generated counter-speech is manually reviewed to ensure it is authentic, accurate, and appropriate for the target audience.

Using fully automated design and delivery methods for counter-speech raises a number of ethical considerations, including user privacy and bias. Privacy is a key ethical consideration for LLMs, which can gather highly personal information about individuals. For example, when conversing with a chatbot, individuals may not understand that their conversation can be read by real people. When designing counter-speech campaigns that use LLMs, user privacy should be carefully considered and protected as much as possible.

Biases can be built into LLMs by the developers, from the datasets used to train them, or during real-world implementation. Biases can result in harmful and inaccurate information being shared, which can encourage discrimination. For example, a translation LLM on Facebook incorrectly translated “good morning” written in Arabic to “hurt them” and “attack them”, resulting in a Palestinian individual being arrested by Israeli authorities. Diverse groups of individuals need to be involved in the design and dissemination of LLMs to ensure that biases are mitigated and LLMs are not generating harmful or incorrect content.

The effectiveness of LLM-generated counter-speech

There is limited empirical research that assesses the effectiveness of LLM-generated counter-speech as part of a CVE campaign. Most studies focus on the ability of LLMs to generate counter-speech, leaving measures of effectiveness mostly restricted to evaluations of the counter-speech’s written quality. There are, however, a smaller number of studies that do assess the practical effectiveness and impact of LLM-generated counter-speech.

Chung et al. (2021) tested an NLP tool that aimed to assist NGO practitioners in countering Islamophobia on Twitter in English, French, and Italian. The tool detected Islamophobia and then automatically composed counter-speech responses. The practitioners who tested the tool gave positive feedback and felt it was an innovative addition to counter-speech writing. Importantly, some of the practitioners emphasised that the tool should not entirely replace manual writing, but should instead assist them, as modifications to the generated counter-speech were often necessary. The tool was still considered useful, as writing counter-speech from scratch reportedly took longer than modifying the counter-speech that the tool generated.

Bilewicz et al. (2021) assessed whether counter-speech messages that were generated and delivered by a bot (disguised as a real male user) could be effective at reducing verbal aggression within two subreddits. When verbal aggression was detected, the bot delivered counter-speech by directly replying to the user who posted the aggressive content. User comments posted 60 days before and after the intervention were analysed and a control group of users from other subreddits, who were not targeted with the bot, were used for comparison. Users displayed a lower proportion of verbal aggression after the intervention than before, whereas the proportion of verbal aggression in the control group remained largely the same throughout. This study suggests that bots could be used as part of counter-speech interventions.
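As a toy illustration of that evaluation design (not the authors’ code, and with an invented keyword flag standing in for their aggression measure), the comparison amounts to a difference-in-differences: the change in the treated subreddits’ aggression rate minus the change in the control group’s.

```python
def aggression_rate(comments, is_aggressive):
    """Proportion of comments flagged as verbally aggressive."""
    if not comments:
        return 0.0
    return sum(is_aggressive(c) for c in comments) / len(comments)

def intervention_effect(treated_before, treated_after,
                        control_before, control_after, is_aggressive):
    """Difference-in-differences: change in the treated group minus change in control.

    A negative value is consistent with the bot intervention reducing aggression
    beyond any platform-wide trend captured by the control group.
    """
    treated_change = (aggression_rate(treated_after, is_aggressive)
                      - aggression_rate(treated_before, is_aggressive))
    control_change = (aggression_rate(control_after, is_aggressive)
                      - aggression_rate(control_before, is_aggressive))
    return treated_change - control_change

# Toy flag: a real study would use a trained classifier or human coding.
flag = lambda c: "idiot" in c.lower()
print(intervention_effect(
    ["you idiot", "idiot take", "fair point"],   # treated, 60 days before
    ["fair point", "disagree", "you idiot"],     # treated, 60 days after
    ["you idiot", "ok", "sure"],                 # control, before
    ["idiot", "ok", "sure"],                     # control, after
    flag,
))
```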

Bär et al. (2024) explored whether LLM-generated counter-speech was effective in reducing hate on Twitter (X). Compared to manually generated counter-speech, the LLM-generated counter-speech was found to be less effective in reducing online hate. The LLM-generated counter-speech was also associated with an increase in hate posts after users viewed it. The researchers suggest that users may have recognised that the counter-speech was LLM-generated, causing them to react negatively. These findings highlight the potential backfire effects that can arise from using LLMs to generate counter-speech.

Conclusion

LLMs can generate counter-speech in response to a range of extremist content, which can offer some protection to counter-speakers and may reduce the human resources needed to design and deliver counter-speech campaigns. However, there are important limitations that must be considered. The potentially limited functionality of LLMs can result in counter-speech that contains inaccuracies, and the design and use of LLMs raise user privacy and bias concerns that can have harmful implications.

The small number of studies that assess the practical effectiveness of LLM-generated counter-speech offer mixed findings. LLMs show some promise in assisting practitioners with counter-speech writing and may potentially help to reduce verbal aggression online. It is important to consider that using LLMs to generate counter-speech may backfire and create increased hostility. Any use of LLMs within counter-speech campaigns needs to be accompanied by rigorous evaluation, risk assessment, and human oversight to ensure that the counter-speech being generated is relevant, factual, and non-harmful.


This article is republished from the Centre for Research and Evidence on Security Threats (CREST) under a Creative Commons license. Read the original article.

Ellie Rogers is a PhD candidate at Swansea University within the Cyber Threats Research Centre (CYTREC). Her research focuses on the algorithmic amplification of counter-speech as a response to online extremism.

Image credit: Mediamodifier on Unsplash

Why Extremist Innovation Happens First in Sexualised Digital Spaces

By Mischa Gerrard

Online extremism research tends to treat gendered AI-enabled harms – such as non-consensual sexual deepfakes and synthetic child sexual abuse material (CSAM) – as peripheral to core radicalisation mechanisms. Yet these harms represent more than isolated safety problems. Rather, they function as early-stage enabling infrastructures through which new techniques of coercion, evidentiary evasion, and networked mobilisation are tested before diffusing into overtly political or violent online ecosystems. This matters for understanding how extremist innovation accumulates and spreads. 

Non-consensual sexual deepfakes, conceptualised as sexual digital forgeries, are particularly illustrative. They exploit gendered asymmetries of digital life to erode verification norms, degrade democratic participation, and generate the epistemic instability upon which broader forms of political manipulation depend. The fact that such content now circulates seamlessly across mainstream platforms underscores that this is an infrastructural problem rather than a marginal subcultural one.

AI-generated child sexual abuse material represents a parallel and more acute manifestation of the same dynamic. Where sexual digital forgeries destabilise trust and participation by rendering women’s bodies falsifiable, synthetic CSAM extends this epistemic disruption into the domain of transnational crime and child protection, severing the link between image, victim, and offender in ways that directly undermine investigative and governance infrastructures. 

What makes these environments particularly conducive to extremist innovation is the convergence of four structural conditions: high engagement, weak or uneven enforcement, platform architectures optimised for amplification, and the social devaluation of those most frequently targeted. Together, these conditions lower the costs of experimentation, normalise coercive and manipulative techniques, and allow new tactics to be refined before migrating into explicitly political or extremist domains.

Gendered Synthetic Abuse as an Enabling Condition 

Understanding this dynamic requires shifting attention from individual instances of harm to the structural conditions they generate. What matters for extremist innovation is not the specific form of abuse, but the capacities it normalises: deniability, scalability, attribution failure, and the strategic manipulation of uncertainty. For example, synthetic fabrication enables what has been described as the “liar’s dividend,” whereby authentic evidence can be dismissed as manipulated, diffusing accountability and complicating attribution. Gendered synthetic harms are especially effective at generating these conditions because they operate at the intersection of high engagement, weak enforcement, and social devaluation.

Unlike discrete acts of violence or harassment, synthetic abuse strikes at verification, attribution, and truth-finding, producing what can be understood as a condition of ambient insecurity. The harm does not reside solely in any single image or artifact, but in the persistent possibility of fabrication. When bodies become technically falsifiable, reputational harm becomes easier to inflict and harder to contest, denials become more plausible, and the boundary between authentic and fabricated material is rendered socially ambiguous. These conditions are not incidental to extremist activity; they are actively useful to movements that depend on intimidation, narrative manipulation, and the destabilisation of trust.

These dynamics are embedded within mainstream digital infrastructures rather than confined to fringe spaces. The same platform architectures that enable the rapid circulation of sexualised synthetic content – algorithmic amplification, low-friction sharing, and uneven moderation – also underpin extremist communication and mobilisation. In this sense, gendered synthetic abuse functions as infrastructural rehearsal: techniques of coercion, humiliation, and evidentiary evasion are refined in sexualised contexts before being redeployed in more overtly political settings, where the stakes are higher but the mechanisms are already familiar.

The Feminised Testbed Hypothesis and the Intimate Security Dilemma

The dynamics outlined above point to a broader structural pattern that can be described as the Feminised Testbed Hypothesis – a term I use to capture the tendency for destabilising digital technologies to mature first in gendered and sexualised environments before diffusing into political, criminal, and extremist domains. This pattern reflects how risk, vulnerability, and enforcement are unevenly distributed across digital ecosystems, leaving certain populations more exposed to experimental forms of harm and control.

Sexualised digital spaces offer particularly conducive conditions for technological and behavioural experimentation. They generate high volumes of engagement, produce abundant training data, and sit at the intersection of technological novelty and regulatory hesitation. As a result, violations in these spaces often provoke delayed or fragmented institutional responses, lowering the costs of innovation for those testing new tools or tactics. The techniques later associated with political disinformation, harassment campaigns, and extremist mobilisation can be refined in environments where abuse is normalised and accountability is diffuse. 

This testbed dynamic is reinforced by platform incentives. Recommendation systems optimised for attention rather than harm minimisation amplify transgressive content, while moderation frameworks struggle to distinguish between consensual material, abuse, and synthetic fabrication at scale. These conditions enable iterative experimentation: tools are refined, distribution strategies tested, and user responses calibrated before similar techniques appear in explicitly extremist contexts.

Building on classic understandings of security dilemmas in international relations, the security implications of this pattern are captured by what can be termed the Intimate Security Dilemma. In synthetic environments, insecurity emerges when the intimate foundations of identity – bodies, images, and personal evidence – become falsifiable. When verification collapses at the most personal level, this instability scales outward, corroding institutional trust, evidentiary standards, and the credibility of claims across digital public life. Extremist actors are particularly well positioned to exploit these conditions, which lower the costs of intimidation, grievance amplification, and narrative destabilisation. 

Implications for Extremism Research in the Age of Synthetic Insecurity

Taken together, these dynamics suggest that extremism research risks a systematic blind spot when it treats sexualised synthetic harms as adjacent to, rather than constitutive of, contemporary radicalisation processes. When analysis focuses only on overt ideology or explicitly political violence, it misses the upstream environments in which the tools, affordances, and behavioural scripts of extremist mobilisation are first developed.

Recent scholarship suggests that sexualised synthetic harms are often treated as technologically novel aberrations, positioned at the margins of security analysis because they sit at the intersection of taboo content and emerging AI capabilities. Framed as exceptional, exotic, or morally discrete, they are analysed as safety problems rather than as structural signals. This misclassification leaves digital security research inattentive to the infrastructural lessons these environments provide about how coercive techniques mature and diffuse.

If extremism research is to keep pace with synthetic technologies, it must look earlier, not later: to the gendered and sexualised digital spaces where new forms of coercion, evidentiary evasion, and manipulation are refined under conditions of weak enforcement and high engagement. This suggests that monitoring efforts, radicalisation research, and platform governance frameworks should treat sexualised synthetic ecosystems as early-warning environments for extremist innovation rather than solely as online safety issues. Treating these domains as peripheral does not contain risk; it delays recognition of how extremist innovation is already being incubated.


Mischa Gerrard is an MA researcher in Violence, Terrorism and Security at Queen’s University Belfast. Her work examines online extremism and technology-facilitated harms, with a focus on gendered radicalisation dynamics. (X: @MischaGerrard)

Masculinity and Militant Traditions: The Shaping of a Home-Grown Irish Far-Right

By Joshua Farrell-Molloy

Only a decade ago, Ireland’s far right was barely visible. Its current form is rooted in 1990s anti-abortion activism, which shaped the political careers of figures such as Justin Barrett and James Reynolds, who in 2016 co-founded the National Party, Ireland’s largest far-right party. The 2018 abortion referendum provided an early incubation space for the movement, bringing initial networks together, while it flourished during the COVID-19 pandemic, as anti-lockdown grievances fuelled the growth of an Irish far-right ecosystem.

In 2022, the movement easily latched onto localised anti-immigration protests with a “house the Irish first” narrative. Demonstrations spread nationwide in response to the placing of asylum seekers in emergency accommodation centres during a period marked by a national housing crisis and an influx of Ukrainian refugees. The protests were a turning point. Since they began, the number of influencers, groups and parties has grown. A distinct Irish far-right ideology has further crystallised, which draws on historical symbols, narratives and aesthetics from Ireland’s republican past.

This rootedness in national history was evidenced in early-December, when a video spread on social media, purportedly from a newly formed anti-immigrant dissident republican group in Northern Ireland, the “New Republican Movement”. In the footage, three men in balaclavas stood in front of an Irish tricolour flag armed with what appeared to be firearms. A speaker, who introduced the group as “proud men of Ireland” and “patriots”, declared their opposition to immigration, before threatening elected representatives: “We have your addresses and know your movements, every one of you are legitimate targets as of today.” 

In another incident only weeks earlier, two men belonging to the “Irish Defence Army”, who intended to attack mosques and asylum seeker accommodation, were charged with possession of explosives. Like the New Republican Movement, the Irish Defence Army recorded a video. Wearing balaclavas in front of a tricolour flag, members delivered an anti-immigration statement before claiming responsibility for upcoming intended attacks. 

While investigations are ongoing in each case, what is clear is that in both videos, through flags, masks, and choreography, there is a conscious mimicry of Irish republican paramilitary aesthetics typically associated with groups active during the conflict in Northern Ireland (1969-1998), like the Provisional Irish Republican Army (PIRA), or factions active in the dissident Irish republican campaign (1994–ongoing). Such performances represent an indigenised form of militant masculinity rooted in Ireland’s republican tradition of militancy.

Yet, much commentary and research on the Irish far-right focuses instead on transnational elements, such as connections to overseas actors, engagement with global far-right narratives or conspiracy theories. This distracts from the more significant emergence of a home-grown ideological project driven by politically maturing Irish groups, who nativise far-right ideology by embedding it in historical Irish nationalism.

Irish nationalism, martyrdom and masculinity

To understand the phenomenon of a home-grown Irish far-right, it is necessary to understand the relationship between masculinity and Irish nationalism. As historian Aidan Beatty points out, masculinity was central to nationalist politics, with an Irish manhood constructed in opposition to British racialised stereotypes and caricatures depicting the Irish as inferior and primitive. In response to the humiliating experience of colonial domination, nationalists projected an ideal of the proud Gaelic warrior as a means of restoring dignity.

This was especially pronounced during the Gaelic Revival. Cultural nationalism emphasised physical strength, discipline and virility as markers of Irish manhood, promoted through institutions such as the Gaelic Athletic Association and reinforced by mythological figures like Fionn Mac Cumhaill and Cúchulainn. From these cultural foundations emerged a political ideal linking the bearing of arms to honour, in which the Irish Volunteers and later the IRA, came to embody masculine virtues of courage, self-control and discipline, an image perhaps best captured in Seán Keating’s iconic painting of heroic IRA figures in Men of the South.

Martyrdom for Ireland was upheld as the highest example of masculine heroism. The commemoration of patriot dead framed sacrifice as a generational struggle, in which each failed uprising inherited the legacy of the last and, in turn, inspired the next. This narrative gained renewed potency after the Easter Rising of 1916, which rebellion leader Patrick Pearse in his proclamation explicitly situated within this lineage of armed rebellion, stretching from the 1641 rebellion through 1798 to the Fenian rising of 1867. As writings of so many involved in the Rising and the War of Independence attest, history was central to how they understood their actions, instilling a deeply felt sense of continuity, duty and belonging that continues to resonate within Irish republicanism.

Reanimation by the far right

Contemporary Irish far-right actors reanimate these very same cultural scripts, historical narratives and masculine ideals, by performing a disciplined, militarised masculinity that draws on a long republican repertoire, echoing the commitment of the Irish Volunteers or disciplined PIRA flag-bearers. They redeploy the familiar generational call to manhood, fusing their ideology with the struggle for Irish independence by framing their mobilisation as a continuation of earlier national struggles. Through this lens, far-right activism is cast as a masculine duty owed to Ireland.

This dynamic is visible in the online content of several groups. One video on Telegram of a National Party member delivering a fiery speech is set to a phonk-style instrumental track and interspersed with rapid-fire flashes of Easter Rising leaders and armed PIRA members, projecting a stylised performance of militant, masculine national pride. Neo-Nazi organisation Clann Éireann, formed after a split from the National Party, makes use of imagery associated with the 1798 rebellion in their propaganda, such as the ‘pikeman’ statue, representing an iconic rebel figure of the uprising armed with a pike, channelling the defiant legacy of 1798 to their own cause against mass immigration.

The Irish chapter of the Active Club network, Comhaltas na nGaedheal, invokes 1916 in recruitment posters and frames its activism as masculine duty in continuity with the same cause, a theme encapsulated in a three-image post from an affiliated Telegram channel featuring IRA gunmen from the 1920s, camouflaged PIRA figures from the 1990s, and Comhaltas members, captioned “Different eras, same struggle”. In each representation, historic Irish struggles are repurposed in opposition to immigration, with masculinity functioning as the glue that fuses past and present into a single narrative, depicting activists as standing up and embracing the duty of national defence.

A militarised masculinity is also performed in choreographed settings. Clann Éireann’s inaugural speech by group leader ‘Ceannaire’ Justin Barrett, was delivered in front of members wearing berets over balaclavas standing in disciplined formation. This beret-over-balaclava aesthetic mirrors the visual language of PIRA statements and commemorations during the conflict in Northern Ireland, signalling perceived continuity with a recognisable tradition of disciplined militant republican masculinity. More recently, the group has adopted a greenshirt-style uniform, invoking the paramilitary wing of the fascist National Corporate Party.

Through these performances as disciplined protectors of the nation, Clann Éireann resembles what scholar Elizabeth Pearson observed in the protest culture of the far-right group Britain First, whose adoption of uniforms, marching and displays of discipline enabled activists to assume a valorised masculine identity through activism. While Pearson focused on the British far right, Irish groups like Clann Éireann localise these same rituals, embedding them within Irish historical narratives and republican symbolism to position themselves as guardians of the nation.

Emasculation and boundary policing

Another central feature of this masculine project is the subordination of perceived rivals. Sinn Féin, historically the political wing of the Irish republican movement and closely associated with PIRA, adheres to left-wing ideologies, as do many dissident republican groups opposed to the peace process. These groups are in turn portrayed by the far right as weak and emasculated, accused of abandoning the Irish struggle in favour of ‘Marxism’ and of no longer being competent to defend Irish sovereignty. The Clann Éireann-affiliated group Republicans Against Antifa has this mission at its core, targeting left-wing republicans with graffiti and vandalism while distributing propaganda, often in republican heartlands in Northern Ireland.

This notion of masculinity as authenticity also structures how the far right responds to British or loyalist involvement in Irish protests. Irish activists who openly embrace such alliances tend to be politically immature; groups anchored in identitarian or neo-Nazi ideologies in Ireland instead engage in active boundary policing, distancing themselves from such individuals and performatively condemning any engagement. Rejecting collaboration becomes a way of reasserting Irish nationalist authenticity and masculine credibility, warding off the humiliation and emasculation of collaborating with historical oppressors.

Conclusion

The cases of the New Republican Movement and the Irish Defence Army illustrate how deeply rooted the contemporary Irish far-right is in local historical and political culture. Rather than relying solely on imported conspiracy theories or global narratives, the Irish far-right more broadly draws upon a long-standing repertoire of nationalist and republican symbolism, giving the movement cultural resonance and potentially greater staying power.

Masculinity functions as an ideological scaffolding for the Irish far right. Actors portray themselves as disciplined protectors of the nation, invoking a militant lineage rooted in sacrifice, generational duty and revolutionary continuity stretching back from 1916 through 1798 and beyond, while fear of emasculation informs their rivalry with other republican actors and their rejection of loyalist alliances. Rather than fixating on external influences or short-term symbolic convergences that may seem surprising, greater attention should be paid to the deeper ideological groundwork and identity formation shaping the Irish far right’s long-term trajectory.


This blog was first published in RightNow!, the blog of the Center for Research on Extremism (C-REX), University of Oslo.

Joshua Farrell-Molloy is a PhD student at the Department of Global Political Studies at Malmö University. His doctoral project focuses on digital subcultures and is part of the EU-funded Marie Skłodowska-Curie ‘VORTEX’ Doctoral Network, which focuses on developing new evidence-based innovative strategies to counter and prevent ideological and behavioural radicalisation.

Image: Men of the South by Seán Keating (1921)

The post Masculinity and Militant Traditions: The Shaping of a Home-Grown Irish Far-Right appeared first on VOX - Pol.

AI, Anger and Entitlement: What’s Fuelling Misogynistic Extremism Online https://voxpol.eu/misogynistic-extremism-online-ai/ Wed, 04 Feb 2026 12:00:00 +0000

By Bernadette Johnston

Misogyny and problematic attitudes towards women are not new phenomena. What is new however is the speed and scale at which misogynistic ideas and related behaviours now circulate – particularly online.

Problematic ‘nudification apps’, whose sole purpose is to undress and sexually transform photos of women online without their consent, have existed for quite some time. They have fuelled troubling stories about their use among schoolchildren, including widely reported incidents in Australia in 2024 and in Spain in 2023.

However, it was the rollout of an update to the AI chat tool Grok in late December 2025 that brought the issue to the general public and global media. The app update on X (formerly Twitter) produces sexualised and degrading images of women (and children) at the click of a button and for free, and it has been reported that over 3 million explicit images were created in the first 11 days of the rollout.

This widespread proliferation by users on a mainstream social media platform, coupled with the continued popularity of manosphere influencers and male-grievance content online, highlights how misogyny is increasingly embedded in digital cultures that reward entitlement, outrage and hostility.

Here in Ireland, home to the European headquarters of X (formerly Twitter), the Irish government’s Online Safety Act and parallel EU initiatives seek to address an increasing volume of online harms directed at women and children in particular. As such, an empirical understanding of the psychosocial drivers of misogynistic extremism has become an urgent concern.

Misogyny as an extremist belief system

Misogynistic extremism refers to belief systems that position women as inferior, manipulative, or untrustworthy, and which justify male dominance, exclusion, or punishment. These beliefs include not only overt hatred, but also entitlement to women’s bodies and narratives framing gender equality as primarily an attack on men.

Online, these ideas are most visibly clustered within the manosphere, a loose network of digital communities including forums, groups, and grievance-focused influencers. Research shows that engagement with these spaces can also act as a pathway to further radicalisation, often intersecting with far-right ideology, racism and homophobia.

This overlap is documented in multiple policy-focused reports, including the Center for Countering Digital Hate’s investigation into the incelosphere, which outlines how misogynistic communities normalise harassment, sexual entitlement and admiration for real-world attackers.

What does the evidence show?

Last November, I presented a systematic review and analysis of the existing literature on misogynistic extremism at the 2025 VOX-Pol Conference, held at Charles University in Prague. This work, undertaken as part of a now-completed MSc in Psychology at Dublin City University, synthesised nineteen eligible studies and is currently undergoing peer review. Despite substantial variation in methods and measurement, a consistent pattern emerged, with risk and motivating factors clustering around four key pillars.

1. Individual psychological vulnerabilities

Men who score higher on misogynistic attitudes are more likely to report depression, anxiety, loneliness, insecure attachment styles, and rigid thinking styles. Of course, these vulnerabilities do not cause misogyny on their own. However, they can increase susceptibility to grievance-based explanations that externalise blame and offer simple narratives of victimhood.

This aligns with broader extremism research emphasising the role of identity threat and emotional distress. Online environments optimised for engagement rather than wellbeing may intensify these dynamics by validating resentment rather than challenging it.

2. Entitlement and perceived rejection

One of the strongest findings across the literature relates to entitlement, particularly sexual and relational entitlement. Men who believe they are owed intimacy, attention or status are more likely to endorse hostile attitudes towards women when those expectations are unmet.

Experiences of romantic or sexual rejection are frequently reframed within manosphere narratives as collective injustice rather than individual circumstance and this framing is central to Incel ideology.

3. Masculine nostalgia and feminist “injustice”

Another recurring theme is resentment towards social change. Many men who endorse misogynistic beliefs perceive feminism as having gone “too far,” resulting in the cultural emasculation of men. This sense of loss, often described as masculine gender nostalgia, is strongly associated with hostile sexism and acceptance of violence against women.

These narratives closely mirror those found in far-right and authoritarian movements. The European Parliament’s briefing on gender and radicalisation notes that anti-feminism increasingly functions as a gateway ideology, normalising extremist worldviews through appeals to tradition, hierarchy and male authority.

4. In-group affiliation and identity fusion

More than half of the studies reviewed highlighted the importance of group belonging. Misogynistic communities provide validation, identity, and shared grievance, reinforcing hostility towards women and, in some cases, endorsement of harassment or violence.

Some studies point to identity fusion, where personal identity becomes tightly bound to group identity. In these contexts, defending misogynistic beliefs feels synonymous with defending the self. This is a mechanism also observed in other extremist movements and highlighted in Prevent and counter-terrorism guidance across the EU.

Why this matters now

These findings have clear contemporary relevance. If we want to address problematic behaviours, develop interventions and, better still, work towards prevention, then we need to understand the factors underpinning them.

Digital platforms increasingly blur the boundaries between mainstream and fringe content, while AI tools can reproduce and scale misogynistic tropes at unprecedented speed and grow more mainstream by the day. Yet misogynistic extremism is only now being treated as a distinct form of radicalisation within counter-extremism policy, indicating the need for further, focused empirical study.

Misogynistic extremism thrives where grievance is validated, complexity is rejected and belonging is conditional on blame. Addressing it requires much more than moderation alone. It demands sustained attention to the psychological, social, and digital conditions that allow these belief systems to take root and to spread.


Bernadette Johnston is a research assistant at Dublin City University, Ireland and holds an MSc and BA in Psychology, alongside a previous career background in journalism and broadcast media in Ireland. She has a focused interest in researching the psychological underpinnings of cognitive warfare, online extremist communities, and the development of empirically framed interventions to address them.

Image Credit: Zulfugar Karimov on Unsplash

The post AI, Anger and Entitlement: What’s Fuelling Misogynistic Extremism Online appeared first on VOX - Pol.

Online Terrorist Exploitation: Responding to Children as Victims and Perpetrators https://voxpol.eu/online-terrorist-exploitation-responding-to-children-as-victims-and-perpetrators/ Wed, 28 Jan 2026 12:00:00 +0000

By Gina Vale

In December 2021, terrorism charges against a 14-year-old British girl were dropped—not due to lack of engagement with extremist networks, but on account of the power dynamics of her digital relationships therein. The Home Office Single Competent Authority (SCA) determined that she was a victim of modern slavery in the UK for the specific purposes of criminal exploitation and sexual exploitation. Despite no physical interaction, her online communications with an adult male extremist in the USA were found to be sufficient evidence of her recruitment and digital control, which led her to download extreme right-wing propaganda and instructions for manufacturing a 3D-printed firearm.

This was the first—and so far, only—case in which the UK’s Modern Slavery Act 2015 provided a statutory defence for a child facing terrorism charges. Less than five months after the charges were discontinued, the girl died by suicide. Her story raises urgent questions: Are we prosecuting children who should be protected? And can our legal and policy frameworks recognise when terrorism and exploitation intersect?

In a recent article, I propose a cross-harm approach to children’s online recruitment and engagement in terrorism. Drawing on 30 interviews and two workshops with experts in counter-terrorism, anti-slavery, child protection, and digital safety, I argue that such online dynamics can, and should, be considered as a form of child criminal exploitation. Here, I outline the key challenges and promise of bringing together hitherto siloed theory and practice.

Rising Juvenile Terrorism Meets Digital Exploitation

The UK is experiencing unprecedented levels of child involvement in terrorism, with 42 arrests in the last year alone. Since 2016, 59 children have been convicted for offences ranging from downloading terrorist content to late-stage attack plotting. Most cases involve purely online activity. Yet, we face a fundamental problem: scholarship, policy, and practice treat exploitation and terrorism as separate issues when they are increasingly interconnected. Research has warned of a ‘collective social media blind spot’ concerning how technologies enable children’s involvement in serious violence.

Whilst early cases of radicalisation involving British teenagers were framed through the language of grooming, this narrative has shifted dramatically towards responsibilisation and prosecution. Only three UK legal cases have explicitly examined juvenile terrorism activity through dynamics of exploitation. However, in February 2024, the UK Home Secretary accepted recommendations to remove all terrorism offences from the purview of the statutory defence provided by section 45 of the Modern Slavery Act 2015, eliminating a vital legal safeguard for exploited children.

The Challenge and Promise of A Cross-Harm Approach

Children targeted for terrorist recruitment and indoctrination share characteristics with those exploited for sexual abuse—approximately 40% are neurodivergent or have special educational needs, and many have adverse childhood experiences. Moreover, the patterns of recruit-recruiter relationships mirror those in cases of sexual exploitation: identifying vulnerabilities, building trust, isolating from protective networks, and exercising coercive control through digital means. The central feature to all forms of exploitation—whether sexual, financial, or even ideological—is power imbalance. Importantly, this is not limited to the traditional dynamic of adult perpetrator and child victim, but can also encompass the increasing trend of peer-radicalisation among children and youth.

To tackle these intersecting harms upstream, current regulations for online platforms need to shift the focus of moderation from content to communications. The Online Safety Act 2023 has been praised for its aim to stymie the proliferation of, and access to, child sexual abuse material and branded terrorist propaganda. However, the legislation has faced considerable criticism for its application only to major platforms, and for an approach centred on the exclusion of children from potentially dangerous online spaces rather than child-safe redesign. In 2024, Ofcom rejected recommendations to extend protections against child sexual abuse (e.g. enhanced user controls and support for child users) to include terrorism, despite acknowledging that ‘the measure may mitigate the radicalisation of children in some cases where the targeted functionalities are used in a similar way to commit grooming’. Redesigning digital safety policy to respond to recruitment and communication patterns—not just individual content—will ensure that children do not fall between the cracks created by a ‘harm-by-harm’ approach.

Professional siloes are ineffective and create barriers to recognising victim-perpetrators. There is a lack of cross-harm dialogue and training across professions—counter-terrorism officers are not trained to recognise exploitation, whilst child protection workers can struggle to identify radicalisation dynamics, particularly when these are confined to purely online activity. While some law enforcement practitioners note informal connections being made between terrorism and modern slavery investigations, such collaborations ‘came down more to personalities’ than formalised strategy. These siloes have a knock-on effect for safeguarding. Unlike other serious crimes against children, terrorism cases do not automatically trigger exploitation assessments. The Home Office’s annual statistics for referrals to its National Referral Mechanism (NRM) do not include data for terrorism as a sub-type of child criminal exploitation. Rather, the approach to juvenile terrorism-related activity is led through Prevent, which despite a ‘safeguarding’ framing has been heavily criticised for securitising children and vulnerable youth. In the case of the 14-year-old girl, NRM requests were delayed, and she was never interviewed as a victim. In light of the missed opportunities leading to her death, her legal team advocates for automatic dual referral to Prevent and the NRM for all children suspected of involvement in terrorism, as well as the expansion of ‘first responders’ able to refer to the NRM to include lawyers, teachers, and other frontline community actors.

Conclusion

The intersection of online child exploitation and terrorism represents unfamiliar territory, but the evidence is clear: children are being recruited and controlled through digital channels for terrorist purposes. Their victimisation does not excuse or diminish their actions, but it must inform our response.

We face a choice. We can continue prosecuting victimised children as terrorists, missing opportunities for intervention and potentially contributing to tragic outcomes. Or, we can adopt a cross-harm approach that recognises complexity, prioritises child welfare, and develops more effective responses to both exploitation and radicalisation. Section 45 of the Modern Slavery Act 2015 is central to this paradigm shift. While not intended or proposed as a blanket ‘get out of jail free card’, the opportunity for a statutory defence acts as an important backstop against default criminalisation. The suggested exclusion of all terrorism offences from this protection is an urgent concern. Reliance on the public interest test for prosecution, or mitigation at sentencing, has thus far resulted in surging numbers of juvenile terrorism convictions.


This blog is based on the article Vale, G. (2025). Exploited for the Cause?: The Potential for a Cross-Harm Approach to Children’s Online Engagement in Terrorism. British Journal of Criminology, online first, 1-18.

Dr Gina Vale is a Lecturer of Criminology in the Department of Sociology, Social Policy and Criminology at the University of Southampton. She is also an Associate Fellow of the International Centre for Counter-Terrorism (ICCT) and a Member of the VOX-Pol Network of Excellence.

Image Credit: Photo by Karla Rivera on Unsplash


The post Online Terrorist Exploitation: Responding to Children as Victims and Perpetrators appeared first on VOX - Pol.

Far-right extremists have been organizing online since before the internet – and AI is their next frontier https://voxpol.eu/online-far-right-extremism-history-ai/ Wed, 21 Jan 2026 12:00:00 +0000

Michelle Lynn Kahn, University of Richmond

How can society police the global spread of online far-right extremism while still protecting free speech? That’s a question policymakers and watchdog organizations confronted as early as the 1980s and ’90s – and it hasn’t gone away.

Decades before artificial intelligence, Telegram and white nationalist Nick Fuentes’ livestreams, far-right extremists embraced the early days of home computing and the internet. These new technologies offered them a bastion of free speech and a global platform. They could share propaganda, spew hatred, incite violence and gain international followers like never before.

Before the digital era, far-right extremists radicalized each other primarily using print propaganda. They wrote their own newsletters and reprinted far-right tracts such as Adolf Hitler’s “Mein Kampf” and American neo-Nazi William Pierce’s “The Turner Diaries,” a dystopian work of fiction describing a race war. Then, they mailed this propaganda to supporters at home and abroad.

I’m a historian who studies neo-Nazis and far-right extremism. As my research shows, most of the neo-Nazi propaganda confiscated in Germany from the 1970s through the 1990s came from the United States. American neo-Nazis exploited their free speech under the First Amendment to bypass German censorship laws. German neo-Nazis then picked up this print propaganda and distributed it throughout the country.

This strategy wasn’t foolproof, however. Print propaganda could get lost in the mail or be confiscated, especially when crossing into Germany. Producing and shipping it was also expensive and time-consuming, and far-right organizations were chronically understaffed and strapped for cash.

Going digital

Computers, which entered the mass market in 1977, promised to help resolve these problems. In 1981, Matt Koehl, head of the National Socialist White People’s Party in the United States, solicited donations to “Help the Party Enter The Computer Age.” The American neo-Nazi Harold Covington begged for a printer, scanner and “serious PC” that could run WordPerfect word processing software. “Our multifarious enemies already possess this technology,” he noted, referring to Jews and government officials.

Soon, far-right extremists figured out how to connect their computers to one another. They did so by using online bulletin board systems, or BBSes, a precursor to the internet. A BBS was hosted on a personal computer, and other computers could dial in to the BBS using a modem and a terminal software program, allowing users to exchange messages, documents and software.

With BBSes, anyone interested in accessing far-right propaganda could simply turn on their computer and dial in to an organization’s advertised phone number. Once connected, they could read the organization’s public posts, exchange messages and upload and download files.

The first far-right bulletin board system, the Aryan Nations Liberty Net, was established in 1984 by Louis Beam, a high-ranking member of the Ku Klux Klan and Aryan Nations. Beam explained: “Imagine, if you can, a single computer to which all leaders and strategists of the patriotic movement are connected. Imagine further that any patriot in the country is able to tap into this computer at will in order to reap the benefit of all accumulative knowledge and wisdom of the leaders. ‘Someday,’ you may say? How about today?”

Then came violent neo-Nazi computer games. Neo-Nazis in the United States and elsewhere could upload and download these games via bulletin board systems, copy them onto disks and distribute them widely, especially to schoolchildren.

In the German computer game KZ Manager, players role-played as a commandant in a Nazi concentration camp that murdered Jews, Sinti and Roma, and Turkish immigrants. An early 1990s poll revealed that 39% of Austrian high schoolers knew of such games and 22% had seen them.

Arrival of the web

By the mid-1990s, with the introduction of the more user-friendly World Wide Web, bulletin boards fell out of favor. The first major racial hate website on the internet, Stormfront, was founded in 1995 by the American white supremacist Don Black. The civil rights organization Southern Poverty Law Center found that almost 100 murders were linked to Stormfront.

By 2000, the German government had discovered, and banned, over 300 German websites with right-wing content – a tenfold increase within just four years.

In response, American white supremacists again exploited their free speech rights to bypass German censorship bans. They gave international far-right extremists the opportunity to host their websites safely and anonymously on unregulated American servers – a strategy that continues today.

Up next: AI

The next frontier for far-right extremists is AI. They are using AI tools to create targeted propaganda, manipulate images, audio and videos, and evade detection. The far-right social network Gab created a Hitler chatbot that users can talk to.

AI chatbots are also adopting the far-right views of social media users. Grok, the chatbot on Elon Musk’s X, recently called itself “MechaHitler,” spewed antisemitic hate speech and denied the Holocaust.

Countering extremism

Combating online hate is a global imperative. It requires comprehensive international cooperation among governments, nongovernmental organizations, watchdog organizations, communities and tech corporations.

Far-right extremists have long pioneered innovative ways to exploit technological progress and free speech. Efforts to counter this radicalization are challenged to stay one step ahead of the far right’s technological advances.


Michelle Lynn Kahn, Associate Professor of History, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: Stephanie Keith/Getty Images

The post Far-right extremists have been organizing online since before the internet – and AI is their next frontier appeared first on VOX - Pol.

Bondi attack came after huge increase in online antisemitism: research https://voxpol.eu/antisemitism-australia-online-rise/ Wed, 14 Jan 2026 12:00:00 +0000

By Matteo Vergani, Deakin University

At least 16 people – including a ten-year-old child – are dead after two men opened fire on a crowd of people celebrating the Jewish holiday of Hanukkah on Sunday in a public park at Sydney’s Bondi Beach. Many more are injured.

I am horrified. But as a researcher who studies hate and extremist violence, I am sadly not surprised.

The Jewish community has been a top target for terrorist ideologies and groups for a long time. Many people working in this field have been expecting a serious attack on Australian soil.

Much remains unclear about the Bondi terrorist attack, and it’s too early to speculate about these gunmen specifically. The investigation is ongoing.

But what about antisemitic sentiment more broadly?

Our research – which is in the early stages and yet to be peer reviewed – has recorded a significant and worrying increase in antisemitic sentiment after October 7.

Our research

We have been training AI models to track online sentiment in social media targeting Australian communities, including Jewish people.

That means working with humans – including extremism experts and people in the Jewish community – to label content, teaching our model whether the content it encounters is hateful or not.

Based on definitions adopted by the Jewish community, we distinguished between two main types of antisemitism: “old” antisemitism and “new” antisemitism.

“Old” antisemitism targets Jews as Jews. It draws on entrenched myths and stereotypes that portray them as alien, dangerous, or morally corrupt.

“New” antisemitism shifts the focus from individual Jews to the state of Israel. It blames Jews collectively for Israel’s actions.

Many in the Jewish community see this as a modern continuation of historical antisemitism. Critics (both within and outside the Jewish community) contend it risks conflating legitimate opposition to Israeli policies with antisemitism.

Central to this debate is whether anti-Israel sentiment represents a continuation of age-old prejudices or a political response to the Israeli-Palestinian conflict.

In our research, we tracked both “old” and “new” antisemitism.
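
To make this concrete, here is a minimal sketch of how human-labelled posts can be used to train a classifier of this kind. It is illustrative only, not our actual pipeline: the library (scikit-learn), the toy texts and the label names are assumptions made purely for demonstration.

# Minimal sketch of supervised hate-speech classification; NOT the
# production models described above. Human annotators assign each
# post one of three hypothetical labels ("none", "old", "new"), and
# the classifier learns to reproduce those judgements at scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data standing in for expert-annotated posts.
posts = [
    "great swim at the beach this morning",
    "post repeating an age-old conspiracy stereotype",
    "post minimising or denying the Holocaust",
    "post blaming a local community for Israel's actions",
    "post holding all local Jews answerable for a foreign state",
    "photos from the Hanukkah celebration in the park",
]
labels = ["none", "old", "old", "new", "new", "none"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Once trained on far more data, the model can label new posts in bulk.
print(model.predict(["an unseen post to be screened"]))

In practice, much larger annotated datasets and transformer-based language models would be used, and annotator disagreement over borderline “new” antisemitism content is itself an important signal.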

A sharp increase

We found that both increased sharply after October 7.

For example, we studied posts on X (formerly Twitter) geolocated in Australia before and after October 7. We wanted to understand the size of the rise in antisemitism.

We found that “old” antisemitism rose from an average of 34 tweets a month in the year before October 7 to 2,021 in the following year.

“New” antisemitism increased even more, rising from an average of 505 a month in the year before October 7 to 21,724 in the year after.
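
For readers curious how such before-and-after figures are derived, the arithmetic is simple: count labelled, geolocated posts per calendar month, then average the monthly counts on either side of October 7. The sketch below illustrates this with pandas; the file name, column names and simplified whole-month windows are assumptions for illustration, and the numbers reported above come from our own data, not from this code.

# Illustrative monthly before/after comparison; the CSV, its columns
# and the whole-month cutoff windows are hypothetical.
import pandas as pd

CUTOFF = pd.Timestamp("2023-10-07")

df = pd.read_csv("labelled_tweets.csv", parse_dates=["created_at"])

for label in ["old_antisemitism", "new_antisemitism"]:
    subset = df[df["label"] == label]
    # Number of tweets per calendar month.
    monthly = subset.set_index("created_at").resample("MS").size()
    before = monthly[monthly.index < CUTOFF].tail(12)   # ~year before
    after = monthly[monthly.index >= CUTOFF].head(12)   # ~year after
    print(label,
          f"before: {before.mean():.0f}/month,",
          f"after: {after.mean():.0f}/month")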

Some examples of “old” antisemitism are explicit, such as calls to “get rid of all Jews” or “kill all Jews”.

Others are more indirect, including minimising or denying the Holocaust. Examples include posts claiming that “if the Holocaust of 6 million Jews were true, Israel could not exist today” or that the Nazis had only a minimal impact on the Jewish population.

Other forms of hate rely on conspiracy theories, such as claims that “Jews are paying to destroy Australia”.

However, the vast majority of the content our models identified as antisemitic fell into the category of “new” antisemitism. This included content that blamed the Jewish community for events in Israel, such as calling all Australian Jews “baby killers” or “Zionazi fu–wits”, regardless of their personal political views and opinions about the Israeli government and its actions.

(All examples here are drawn from real content, but the wording has been slightly modified to anonymise them and prevent identification of the original authors).

In other words, we have seen an overall escalation of hostilities against Jews online.

More extreme and explicit calls for violence rarely appear on mainstream platforms. They tend to circulate on fringe social media, such as Telegram.

On X, we have seen a collision of mainstream and fringe discourse due to the lack of moderation.

But antisemitism doesn’t always involve slurs, meaning it can also appear on mainstream platforms. Especially since the election of Trump and the relaxation of Meta’s moderation practices, we have also seen it on Instagram. This includes Instagram posts published after the Bondi attack.

Could more have been done?

The Jewish community, I am sure, will feel that not enough was done.

Jillian Segal, Australia’s first government-appointed special envoy for combating antisemitism, released her plan for addressing the issue back in July.

As I wrote at the time, the recommendations fell into three main categories:

  1. preventing violence and crime, including improved coordination between agencies, and new policies aimed at stopping dangerous individuals from entering Australia
  2. strengthening protections against hate speech, by regulating all forms of hate, including antisemitism, and increasing oversight of platform policies and algorithms
  3. promoting antisemitism-free media, education and cultural spaces, through journalist training, education programs, and conditions on public funding for organisations that promote or fail to address antisemitism.

The government has said it will consider the recommendations. Segal has now said government messaging combating antisemitism has “not been sufficient”.

Some might argue addressing points two and three could have helped prevent the Bondi attack. A common assumption is that a climate of widespread antisemitism can embolden violence.

The reality, however, is that this is hard to establish. People who commit terrorist acts – whether they self-radicalise or are recruited by terrorist organisations – do not necessarily respond to changes in broader public sentiment.

That said, there is obvious value in prevention work aimed at reducing hostility and antisemitic attitudes, even while small networks or individuals committed to violent terrorism may still exist.

Preventing terrorist violence of this scale relies primarily on effective law enforcement. This requires adequate resourcing and a clear legislative framework.

Education and broader cultural change matter. In the short term, however, they are less likely to be as effective at preventing acts of terrorism as measures such as firearm regulation, monitoring extremist networks, and disrupting plots before they turn into action.


Matteo Vergani, Associate Professor and Director of the Tackling Hate Lab, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: AAP Image/Mick Tsikas

The post Bondi attack came after huge increase in online antisemitism: research appeared first on VOX - Pol.
