
Bodyguard Moderation Rules

This guide presents our different moderation rules so you can pick the one that best suits your needs.

1. The moderation rules:

A. Context:

Moderation rules are the instructions we provide to our technology. We have built several templates to adapt our moderation rules to the sensitivity of each customer.

We keep comments detected as:

  • Positive
  • Neutral
  • Negative criticism
  • Vulgarity
  • Self Harm

We remove comments detected as: 

  • Racism
  • LGBTQIA+Phobia
  • Misogyny
  • Ableism
  • Moral harassment
  • Sexual harassment
  • Terrorism and violent extremism
  • Threat
  • Pedophilia
  • Doxxing
  • SPAM/SCAM/Flood
  • Ads

The choice of template determines whether comments detected as the following are kept or removed:

  • Insult
  • Hatred
  • Body Shaming
  • Vulgarity
  • Sexually Explicit
  • Drug Explicit
  • Weapon Explicit

Some content categories (Trolling, Link, and Forbidden language) can be customized, and their moderation rules are defined together with the client.
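As a rough illustration, the behaviour described above can be pictured as a mapping from content category to a default action. This is a hypothetical Python sketch using the category names from this article; it is not Bodyguard's actual configuration format.

```python
# Hypothetical sketch of the default behaviour described above, using the
# category names from this article. Not Bodyguard's configuration format.

KEPT_BY_DEFAULT = {
    "Positive", "Neutral", "Negative criticism", "Vulgarity", "Self Harm",
}

ALWAYS_REMOVED = {
    "Racism", "LGBTQIA+Phobia", "Misogyny", "Ableism", "Moral harassment",
    "Sexual harassment", "Terrorism and violent extremism", "Threat",
    "Pedophilia", "Doxxing", "SPAM/SCAM/Flood", "Ads",
}

# Kept or removed depending on the chosen template (see section B below).
# Note that Vulgarity is kept by default but is affected by the
# "Very strict" tolerance level.
TEMPLATE_DEPENDENT = {
    "Insult", "Hatred", "Body Shaming", "Vulgarity", "Sexually Explicit",
    "Drug Explicit", "Weapon Explicit",
}

# Configured case by case with the client.
CLIENT_DEFINED = {"Trolling", "Link", "Forbidden language"}
```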

B. Rules configuration:

A template is based on two settings: the type of protection and the type of tolerance.

Type of protection:

  • Individual:

Comments detected as Insult, Hatred, and Body Shaming are removed only when they are directed at the user, their family, or the entity.

  • General:

Comments detected as Insult, Hatred, and Body Shaming are removed regardless of the target. A sketch of this distinction is given below.
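Here is a minimal sketch of how the protection type could gate removal of these three categories depending on the detected target. The function, value names, and target labels are illustrative assumptions for this article, not Bodyguard's product API.

```python
# Illustrative sketch: how the protection type could gate removal of
# Insult, Hatred, and Body Shaming depending on the detected target.
# Names are assumptions for this example, not Bodyguard's API.

TARGETED_CATEGORIES = {"Insult", "Hatred", "Body Shaming"}

# Targets treated as "personal" under Individual protection: the user,
# their family, or the entity.
PERSONAL_TARGETS = {"user", "user_family", "entity"}

def removed_by_protection(category: str, target: str, protection: str) -> bool:
    if category not in TARGETED_CATEGORIES:
        return False  # other categories are handled by other rules
    if protection == "general":
        return True   # removed regardless of the target
    if protection == "individual":
        return target in PERSONAL_TARGETS
    raise ValueError(f"unknown protection type: {protection}")

# An insult aimed at another commenter is kept under Individual protection
# but removed under General protection.
assert removed_by_protection("Insult", "single_person", "individual") is False
assert removed_by_protection("Insult", "single_person", "general") is True
```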

C. Type of Tolerance:

  • Very strict tolerance:

Comments detected as Vulgarity, Sexually Explicit, Drug Explicit, or Weapon Explicit with a 'high' severity or above are removed.

This level of tolerance cannot be applied to individuals, only organizations.

It can be used for specific use cases and may not fit all audience needs.

  • Strict tolerance:

All content detected as Insult, Hatred, and Body Shaming is removed.

  • Balanced tolerance:

Content detected as Insult, Hatred, and Body Shaming with medium severity or above is removed.

  • Permissive tolerance:

Content detected as Insult, Hatred, and Body Shaming with high severity or above is removed. A sketch of how these tolerance thresholds compare is given below.
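The Strict, Balanced, and Permissive levels above can be read as severity thresholds for Insult, Hatred, and Body Shaming, while Very strict additionally targets the Explicit categories. The following is a hypothetical sketch, assuming severities are ordered low < medium < high < critical; it is not Bodyguard's actual rule engine.

```python
# Illustrative sketch of the tolerance levels as severity thresholds.
# Assumed severity ordering: low < medium < high < critical.

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

# Minimum severity at which Insult, Hatred, and Body Shaming are removed.
TOLERANCE_THRESHOLD = {
    "strict": "low",        # everything in these categories is removed
    "balanced": "medium",   # medium severity and above
    "permissive": "high",   # high severity and above
}

# Under "Very strict" tolerance, Vulgarity, Sexually Explicit, Drug Explicit,
# and Weapon Explicit comments of high severity or above are also removed.
VERY_STRICT_EXTRA = {"Vulgarity", "Sexually Explicit", "Drug Explicit",
                     "Weapon Explicit"}

def removed_under_tolerance(severity: str, tolerance: str) -> bool:
    threshold = TOLERANCE_THRESHOLD[tolerance]
    return SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(threshold)

# A medium-severity Insult is removed under strict and balanced tolerance,
# but kept under permissive tolerance.
assert removed_under_tolerance("medium", "balanced") is True
assert removed_under_tolerance("medium", "permissive") is False
```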

2. How moderation rules are built:

Step 1: Analysis

The moderation rules of Bodyguard’s technology are built around three main aspects:

  • the categories of toxic content
  • the target of the hateful comment
  • the severity level of the toxic content

A. Toxic content and undesirable comment classifications

Key points:

  • Bodyguard removes unwanted comments that pollute the community space, according to the moderation rules chosen.
  • Bodyguard has extremely strict moderation rules for the following types of toxic content: racism, LGBT+ phobia, misogyny, bullying, sexual harassment and threats.
➡️ For more details, see this article.

B. Target of a toxic comment:

Before making the decision to Remove or Keep a hateful comment, the Bodyguard technology identifies the target of the comment:

  • User: the account owner, who published the post
  • Group: a specific group of people
  • Everyone
  • User family: family and close friends of the user
  • Author of comment: the person who wrote the comment
  • Single person: a particular person

C. The severity level of a toxic comment:

Next, Bodyguard defines the severity level of the analyzed content (from low to critical). As a reminder, all content in the categories racism, LGBT+ phobia, misogyny, bullying, sexual harassment, and threats is deleted regardless of severity.

Here are some examples in different categories; a sketch of the full analysis output follows them.

  • Insult:

    • Low: “You’re an idiot!”

    • Medium: “He’s being a dick”

    • High: “You son of a bitch!”

    • Critical: “Fuck you, you cunt!”

  • Hatred:

    • Low: “Shame on you!”

    • Medium: “You have no balls”

    • High: “You’re fucking disgusting!”

    • Critical: “Fuck off, grow a pair!”

  • Body Shaming:

    • Low: “Your head is a scary place”

    • Medium: “She’s ugly”

    • High: “Fatboy needs to go on a diet”

    • Critical: “He is really too hideous with his dirty dick face”

  • Sexual Harassment:

    • “Send nudes!”

    • “stfu u slut eat a dick”

    • “She hot I’d fuck her every day too”

    • “She’s fuckable”

  • Racism:

    • “African savages.”

    • “Shut up you n i g g e r”

    • “Gypsies are dirty”

    • “Muslim are terrorists”

  • LGBTQIA+ Phobia:

    • “This dude is a sissy”

    • “Trannys are disgusting anyway.”

    • “Gay as aids”

    • “You look lesbo as fuck!”

  • Misogyny:

    • “Don’t show your tits in public you perverted woman.”

    • “Women ☕”

    • “This is why women shouldn’t be in positions of power”

    • “Women are inferior”
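Putting the three aspects of the analysis together, the output of Step 1 can be pictured as a small record combining a category, a target, and a severity level. The class, field names, and enum values below are illustrative assumptions drawn from this article, not an actual Bodyguard API.

```python
# Illustrative sketch of what the analysis step produces for each comment.
# The names mirror this article; they are not an actual Bodyguard API.
from dataclasses import dataclass
from enum import Enum

class Target(Enum):
    USER = "user"                      # the account owner who published the post
    GROUP = "group"                    # a specific group of people
    EVERYONE = "everyone"
    USER_FAMILY = "user_family"        # family and close friends of the user
    COMMENT_AUTHOR = "comment_author"  # the person who wrote the comment
    SINGLE_PERSON = "single_person"    # a particular person

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Analysis:
    category: str      # e.g. "Insult", "Hatred", "Racism"
    target: Target
    severity: Severity

# “He’s being a dick” could come out of the analysis step as:
example = Analysis(category="Insult",
                   target=Target.SINGLE_PERSON,
                   severity=Severity.MEDIUM)
```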


Step 2: Decision

Finally, Bodyguard applies the rules defined with the client to decide which moderation action to take: Remove or Keep the comment.

Please refer to this article to understand how Bodyguard's decisions work on each social media.
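As a rough, self-contained illustration of Step 2, the sketch below combines the pieces described earlier: the always-removed categories, the template's protection type, and its tolerance threshold. It is a simplification built on the assumptions stated in the earlier sketches, not Bodyguard's actual decision code.

```python
# Illustrative end-to-end sketch of the Remove/Keep decision, combining the
# always-removed categories, the protection type, and the tolerance threshold.
# A simplification for this article, not Bodyguard's actual decision code.

ALWAYS_REMOVED = {
    "Racism", "LGBTQIA+Phobia", "Misogyny", "Ableism", "Moral harassment",
    "Sexual harassment", "Terrorism and violent extremism", "Threat",
    "Pedophilia", "Doxxing", "SPAM/SCAM/Flood", "Ads",
}
TARGETED_CATEGORIES = {"Insult", "Hatred", "Body Shaming"}
PERSONAL_TARGETS = {"user", "user_family", "entity"}
SEVERITY_ORDER = ["low", "medium", "high", "critical"]
TOLERANCE_THRESHOLD = {"strict": "low", "balanced": "medium", "permissive": "high"}

def decide(category: str, target: str, severity: str,
           protection: str, tolerance: str) -> str:
    """Return 'remove' or 'keep' for one analyzed comment."""
    # 1. Categories removed whatever the template.
    if category in ALWAYS_REMOVED:
        return "remove"
    # 2. Insult, Hatred, Body Shaming: both the protection type and the
    #    tolerance threshold must point to removal.
    if category in TARGETED_CATEGORIES:
        targeted = protection == "general" or target in PERSONAL_TARGETS
        severe = (SEVERITY_ORDER.index(severity)
                  >= SEVERITY_ORDER.index(TOLERANCE_THRESHOLD[tolerance]))
        if targeted and severe:
            return "remove"
    # 3. Everything else (Positive, Neutral, Negative criticism, ...) is kept.
    return "keep"

# A medium-severity insult aimed at the account owner, under a template with
# Individual protection and Balanced tolerance, is removed.
print(decide("Insult", "user", "medium", "individual", "balanced"))  # remove
```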

Notes

Hateful comments that are classified as Racism, LGBT+ phobia, Misogyny, Sexual harassment, and Moral harassment are automatically removed.

All comments related to Holocaust denial will also be automatically removed.