
SOCIAL MEDIA AND TERRORISM: AN INITIATIVE AGAINST RADICALIZATION


The Internet, and social media above all, increasingly represent a space in which individuals share their feelings, including the fear of a danger such as terrorism. But what is the relationship between social media and online terrorism? And between social media and terrorist attacks? Let’s look at it together.


SOCIAL NETWORKS UNITED IN THE FIGHT AGAINST TERRORIST PROPAGANDA

Recently, the web giants (Facebook, Google, Microsoft and Twitter) have created a partnership to obstruct terrorist propaganda on social media. It consists of sharing information that lets each platform combat more effectively the spread of the same message across multiple social networks at the same time.
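The post does not describe how this information sharing works technically. Partnerships of this kind typically exchange digital fingerprints (hashes) of content that one platform has already classified as terrorist propaganda, so that the same file can be recognised when it is uploaded elsewhere. The following Python sketch is only an illustration of that idea under those assumptions; the database, digest and function names are invented, and real systems tend to use perceptual rather than exact hashes so that re-encoded copies still match.

import hashlib

# Hypothetical shared database of digests of content that a partner platform
# has already classified as terrorist propaganda (placeholder value only).
SHARED_HASH_DATABASE = {
    "0" * 64,  # stand-in for a real SHA-256 digest
}

def fingerprint(content: bytes) -> str:
    # Compute a stable digest of an uploaded file.
    return hashlib.sha256(content).hexdigest()

def matches_shared_database(content: bytes) -> bool:
    # True if this upload matches a fingerprint shared by the partner platforms.
    return fingerprint(content) in SHARED_HASH_DATABASE

# At upload time, each platform can check new content against the shared list
# and hold any match for review before publication.
if matches_shared_database(b"example upload"):
    print("Flag for review before publication")

A set lookup keeps the check cheap for every upload, but it only catches material that has already been identified somewhere; genuinely new propaganda still slips through, which is the difficulty described below.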

However, it remains an arduous undertaking, given that social media are the communication channels most used by terrorist organizations. According to the platforms, their control systems filter 99% of such content before it is published.

Monitoring of the Facebook pages of users who support terrorist organizations showed that only 38% of posts containing clear symbols of extremist groups are actually removed.

This demonstrates a real difficulty in blocking the dissemination of this kind of content and in controlling the dangerous uses of social media and online communication.

NEW TECHNOLOGIES AGAINST TERRORISM

Despite progress in artificial intelligence, these technologies are not yet able to automatically identify and filter the videos, images and texts linked to terrorist propaganda before they are published.

In other words, social networks today remain a genuine opportunity to proselytise with few limits, and new technologies such as AI are not yet ready to counter extremist propaganda.

FACEBOOK AND THE BATTLE AGAINST TERRORISM

In recent years, one of Facebook’s main objectives has been precisely to prevent the dissemination of content that incites racist or terrorist hatred and violence.

To this end, researchers monitored for five months the Facebook pages of more than three thousand users affiliated with extremist organizations designated by the United States government.

Nonetheless, much of the reported content, such as execution videos, images of beheadings and propaganda honouring martyred militants, is still available on Facebook.

Facebook’s measures to combat terrorism

So far, then, the social network has not been able to control this type of content uploaded to the platform, and sometimes it even produces, entirely involuntarily, videos that celebrate terrorism. How? For example, by stitching extremist and violent content into the automatic animations that summarise users’ activities.

Facebook admits the imperfections of its system and points out that it is improving its removal of terrorist content, although it is not yet able to find all of it in time.

This work is done automatically by artificial intelligence algorithms, which recognise and remove content linked to terrorist groups, supported by 30,000 human moderators. Mark Zuckerberg himself has highlighted the success of these tools, specifying that 99% of the material that is removed is taken down before users even see it.

This statistic, however, refers only to the content that is identified, while the percentage of extremist and terrorist material that Facebook actually manages to identify remains unknown. Telling, in this respect, is how easily researchers can find the profiles of users who praise terrorism simply by searching for basic keywords.
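As a purely illustrative example of how little sophistication such a keyword search requires, here is a minimal Python sketch; the profiles and the keyword list are invented for the example and merely stand in for whatever public profile text and search terms the researchers actually used.

# Invented example keywords and profiles, standing in for public profile text.
EXTREMIST_KEYWORDS = {"caliphate", "martyrdom"}

profiles = [
    {"name": "user_a", "bio": "Proud supporter of the caliphate"},
    {"name": "user_b", "bio": "Football and photography"},
]

def flag_profiles(profiles, keywords):
    # Return the names of profiles whose bio contains any of the keywords.
    flagged = []
    for profile in profiles:
        text = profile["bio"].lower()
        if any(keyword in text for keyword in keywords):
            flagged.append(profile["name"])
    return flagged

print(flag_profiles(profiles, EXTREMIST_KEYWORDS))  # prints ['user_a']

Real monitoring would of course have to handle multiple languages, misspellings and context, which is where the AI systems and human moderators mentioned above come in.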


The danger of automatic generation of company pages on Facebook

The most worrying aspect is that Facebook sometimes involuntarily helps terrorist organizations connect with potential new followers, starting from the job position that users themselves report.

When users indicated terrorist organizations such as Al Qaeda as their “employer”, Facebook automatically created company pages based on those entries. This means that on Facebook it is possible to “like” ISIS, allowing these organizations to consult a list of supporters from which to recruit new militants.

The automatic generation of company pages on Facebook does not only concern jihadist terrorists but also, for example, American groups promoting white power.

The profiles of the people involved contain an abundance of photos of swastikas and much more, so it should not be difficult to trace who is behind these movements. Why, then, does a giant like Facebook find it so difficult to remove such content, which is also prohibited by its own policy?

Social networks were not designed with these negative aspects in mind, since they emerged only later, so platforms now have to work backwards to find solutions to increasingly serious problems. To do so, they combine automated tools and human moderators in order to remove the dangerous content that infests Facebook and other platforms.

SHUTTING DOWN SOCIALS TO COMBAT TERRORISM

Following the attack claimed by ISIS, the Sri Lankan government decided to shut down all social networks. To understand this decision, it is necessary to remember the interreligious tensions that had triggered violence between Buddhists and Muslims. Before the attack, in fact, uncontrolled rumours had spread on WhatsApp.

Through these rumours, Buddhist groups accused Muslims of forcing people to convert to Islam and of vandalising Buddhist archaeological sites. The wave of violence led the government to turn off social networks temporarily, accusing Facebook of spreading hatred. After the attack, social networks were turned off again to prevent hate speech and propaganda from spreading with easily predictable consequences.

The negative effects of social networks in the world

While in the West social networks may have played a role in the rise of populism, and in India fake news circulating on WhatsApp triggered violence that killed 25 people in 2018, in Myanmar the Muslim minority is subject to persecution fuelled partly through social networks.

This may be because Facebook spread suddenly in that region thanks to the Free Basics program promoted by Mark Zuckerberg, which allows people to use Facebook free of charge and has been accused of digital neo-colonialism.

In many Asian countries, Free Basics spread social networks suddenly, without adequate awareness of their most negative and dangerous aspects. The program helped spread fake news, and Twitter also played a crucial role.

The positive effects of social networks in the world

Obviously, there are not only downsides. In Sri Lanka, Facebook and Twitter contributed positively to mobilising the population against the attempted coup of 2018.

Specifically, the coup’s supporters occupied the media, forcing newspapers to give up control of their presses, but did not take social networks into account. It was precisely on online platforms that critics of the coup mobilised the population, organizing the demonstrations and protests that stopped it.

Censorship and free information

On the one hand, social networks are tools that make it difficult to block the flow of free information, but on the other, they are a means of rapidly spreading propaganda and hatred, key themes of our digital age.

Therefore, the choice to turn off social networks might seem right, but in a country prone to authoritarianism it could be used for censorship purposes.

Obviously, turning off social networks can only be a temporary solution that must not be at the expense of freedom of information. So what is the solution?

If social networks really did find a way to prevent the circulation of the most dangerous content, other types of communication and information would also risk being censored.


PRECOBIAS, THE PROJECT TO PREVENT RADICALIZATION AND COUNTER PREJUDICE

As regards the relationship between social media and terrorism, Moka Adv (by P.M.F.) is a coordinator of the European project PRECOBIAS, designed to counter prejudice and hate messages on social networks.

The kick-off meeting was held in Brussels on February 18th, 2020, bringing together the coordinators of all the projects funded under the European Union’s ISFP program.

In particular, within the PRECOBIAS project our web agency handles the online campaign aimed at young people at risk of radicalization or already radicalized. The project focuses on the mental processes and cognitive biases that come into play when young people are exposed to the terrorist discourse of extremist currents.

The main goal of the project is to counter these processes by promoting users’ critical thinking and revealing the mental processes and cognitive biases that shape their interpretations and analyses.

Thus, PRECOBIAS aims to provoke a change in behaviour and to dissuade young people from promoting extremist content and hatred online.

Need a boost? Ask Moka Adv by filling out the Contact Form