
Detect a harmful user and take action with Bodyguard

You can use the Bodyguard Dashboard to detect hateful users and take appropriate action.

Define the target

Select the accounts you want to analyse, particularly those you believe are likely to be victims of cyberharassment. The analysis is easier if you focus on specific accounts.

Select a date range

Go to the Messages page

1. Filter the messages

You can use the filters to easily detect the most hateful messages.

2. Read the message details

On this page, you can see various details about each message and be redirected to the related content on the social media platform (the post or the author's page).

This lets you access all of the comment's information and identify its author.

Then, you can take several actions:

  • delete the comment manually
  • block the author from the dedicated page
  • report the comment or the author directly on the social media platform
  • consider filing a complaint, using the information above (a screenshot of the tweet, the author's name and ID)

Alternative: the Authors page

You can also use this page to easily detect the most hateful or toxic authors. You can sort them by the volume of hateful messages they have posted:


Then click on any author to see their comments in detail. You will be able to see whether the comments are particularly harmful and require action on your side.
