Bodyguard is a moderation solution that detects and moderates toxic and undesirable content for you in real time, both on social networks and in your own spaces (websites, apps).
1. What is Bodyguard?
The technology automatically detects and moderates hateful content (comments under posts and live streams) 24/7 and in multiple languages.
- Scope:
- Bodyguard is a preventive technology: each comment is analyzed and an action is taken within 100 ms.
- Bodyguard protects companies in a 360° way: it protects individuals (journalists, athletes) as well as entities and their communities.
- Bodyguard classifies comments under different categories:
- Neutral, Hate speech, Hateful, Undesirable, Criticism, Positive.
The different categories and classifications of comments can be found here.
2. The different steps of the Bodyguard technology
Bodyguard.ai reproduces all the steps of human moderation in an automated way thanks to Artificial Intelligence.
To determine whether a comment is hateful, the technology analyzes it contextually, taking into account the context of the comment, its severity (if hateful), and to whom it is directed.
The resulting action is applied automatically according to the moderation rules set up by the client.
As a reminder, the moderation rules are customizable (see here the different possible rules). Our team of experts will help you configure your moderation parameters from the start; after that, we take care of everything for you!
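The flow described above can be pictured as a small sketch: a comment is analyzed (category, severity, target), then a client-configured rule maps that analysis to an action. All names, thresholds, and rule shapes below are illustrative assumptions, not Bodyguard's actual API.

```python
# Hypothetical sketch of the analyze-then-act moderation flow.
# Categories come from the list above; everything else is illustrative.
from dataclasses import dataclass

CATEGORIES = ["Neutral", "Hate speech", "Hateful", "Undesirable", "Criticism", "Positive"]


@dataclass
class Analysis:
    category: str     # one of CATEGORIES
    severity: str     # e.g. "low" / "medium" / "high" (meaningful when hateful)
    directed_at: str  # e.g. "individual", "community", "entity"


def apply_rules(analysis: Analysis, rules: dict) -> str:
    """Return the moderation action for a comment, per client-defined rules."""
    # Rules map a (category, severity) pair to an action such as
    # "delete", "hide", or "keep"; unmatched comments are kept.
    return rules.get((analysis.category, analysis.severity), "keep")


# Example client configuration: delete high-severity hate speech,
# hide medium-severity hateful comments, keep everything else.
rules = {
    ("Hate speech", "high"): "delete",
    ("Hateful", "medium"): "hide",
}

print(apply_rules(Analysis("Hate speech", "high", "individual"), rules))  # delete
print(apply_rules(Analysis("Criticism", "low", "entity"), rules))         # keep
```

In practice this whole decision happens server-side within the 100 ms window mentioned above; the sketch only shows the logical mapping from analysis to action.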
3. The Bodyguard dashboard
You can view all your accounts and platforms (social networks or owned spaces) in a centralized, dedicated space. Access a detailed analysis of your community so you can take concrete actions to engage it.
You can find a detailed description of the dashboard by tab here.