Since the attacks of September 11, 2001, in the United States, many countries have implemented measures to prevent radicalisation, often called 'de-radicalisation' or 'disengagement' programmes. All of these measures are part of the fight against radicalisation, defined by the UN as a 'package of social, political, legal, educational and economic programmes specifically designed to deter disaffected (and possibly already-radicalized) individuals from crossing the line and becoming terrorists' (Task Force, 2006: 5). These measures target jihadists as well as far-right and far-left extremists.

However, the effectiveness of programmes to prevent violent extremism remains to be proven.


Many political representatives agree on the importance of evaluating these prevention programmes. However, making public funding conditional on demonstrated efficacy introduces a bias into evaluation practice. Organisations that evaluate themselves hope the results will vindicate their actions. This leads them to withhold some of their data and to reject the involvement of outsiders who could help them design evaluation tools. In doing so, they defeat the very purpose of evaluation, which is to reflect on one's own practices, adapt them and evolve.

The imposition of evaluations by political authorities produces the same effects: practitioners are cast as passive subjects and disengaged from reflecting on actions that directly concern them.

The outcomes are then experienced as an external judgement, which reduces the chances that practitioners will benefit from them.


It was this observation that prompted the founding of the International Team for the Evaluation of Violent Radicalization Prevention (ITERP).