The International Olympic Committee estimates that the Summer Games will generate more than half a billion social media posts. This has prompted the IOC to deploy Threat Matrix, an AI system, to fight social media abuse against athletes during the Paris Olympics.
The International Olympic Committee developed this system under its “Olympic AI agenda,” a program that aligns with the IOC’s mission for solidarity and inclusivity.
In a statement, the IOC explained that the Olympic AI agenda “has undertaken a broad review of the uses for AI in sport, and of high-impact areas where the IOC could inspire the use of AI.”
The main goal of the Olympic AI agenda is to safeguard athletes’ mental health
Online abuse in sports has become a significant issue. Fans sometimes forget that their favorite players and athletes are, first and foremost, human beings.
This increase in online abuse has caused athletes to call for more protection from event organizers and institutions, and this is exactly what the International Olympic Committee is focusing its AI agenda on.
Kirsty Burrows, head of the Safe Sport Unit at the IOC, explained that “AI isn’t a cure-all, but it’s a crucial part of the effort to fight back. It would be impossible to deal with the volume of data without the AI system.”
To help with this, the IOC is using a language model, Threat Matrix. This AI-powered language model analyzes large volumes of text and identifies the feelings and intentions of the writers, a process experts refer to as “sentiment analysis.”
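To give a rough sense of what sentiment analysis looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. It is illustrative only, not the IOC’s Threat Matrix: the model name and the example posts are assumptions, and the real system would work at a far larger scale and across many languages.

```python
# Minimal sentiment-analysis sketch -- illustrative only, not the IOC's Threat Matrix.
# Assumes the open-source Hugging Face `transformers` library and a public
# multilingual sentiment model as stand-ins for whatever the real system uses.
from transformers import pipeline

# Load a pretrained multilingual sentiment classifier (assumed example model).
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

# Hypothetical example posts directed at an athlete.
posts = [
    "What an incredible performance, congratulations!",
    "You embarrassed your whole country, never come back.",
]

# Each result contains a predicted label and a confidence score.
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```

A classifier like this only assigns a sentiment label and a confidence score; deciding whether a negative post actually constitutes abuse is a separate step, which is where the human review described below comes in.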
Sentiment analysis is not straightforward for machines, however. Something as simple as an emoji can be hard for an AI system to interpret, and the crying face and skull emojis are prime examples.
Experts say newer generations have made these into ironic emojis commonly used to communicate messages contrary to their intended meaning.
Threat Matrix will scan posts in 35 different languages during the 2024 Paris Olympics
The IOC has partnered with social media platforms such as Facebook, Instagram, TikTok, and X so that Threat Matrix can use AI to identify abusive online messages directed at athletes during the 2024 Paris Olympics.
The system will place flagged messages in a database where a human team reviews them. Although the IOC has explained that Threat Matrix does most of the work, human reviewers remain crucial to mitigating online abuse at the Paris Olympics.
Once Threat Matrix flags a message, the human response team examines the post’s context and then takes appropriate action.
These mitigating actions include reaching out to victims of online abuse, reporting users to platforms, or, in more extreme cases, contacting the police.
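As a rough illustration of that flag-and-review flow, the triage logic might be organized along these lines. This is a sketch under assumptions, not the IOC’s actual pipeline: the severity levels, class names, and action strings are all invented for the example.

```python
# Hypothetical triage sketch of the flag -> human review -> action flow described above.
# All names (Flag, Severity, route_flag) and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = "low"            # e.g. a mild insult
    HIGH = "high"          # e.g. sustained harassment
    CRITICAL = "critical"  # e.g. a credible threat


@dataclass
class Flag:
    post_id: str
    athlete: str
    severity: Severity
    context_reviewed: bool = False  # set once the human response team has looked at the post


def route_flag(flag: Flag) -> str:
    """Return the follow-up action for a flagged post."""
    if not flag.context_reviewed:
        return "queue for human review"
    if flag.severity is Severity.CRITICAL:
        return "contact law enforcement"
    if flag.severity is Severity.HIGH:
        return "report account to platform and reach out to athlete"
    return "reach out to athlete's support team"


print(route_flag(Flag("post-123", "example athlete", Severity.HIGH, context_reviewed=True)))
```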
In most cases, these steps are taken before the athlete ever sees the abusive post, so that the harm is mitigated as effectively as possible.