Here you can find a guide to our different categories and classifications.
Bodyguard.ai maps content using 3 types of classifications:
- General classifications
- Additional classifications (for monitoring purposes only)
- Custom classifications (for monitoring purposes only)
1. General classifications
Here are our main categories of content:
- Neutral: appropriate comments that do not display any supportive or problematic content.
- Positive: content expressing approval or support of someone or something.
- Hate Speech: content associated with promoting discrimination, hostility, or violence towards a group or individual defined by their ethnicity, religion, sexual orientation, or other factors.
- Hateful: content detected as aggressive, denigrating, or condescending towards a group, individual, or entity.
- Criticism: content expressing disapproval of someone or something on the basis of perceived faults or mistakes.
- Undesirable: content detected as repetitive, annoying, useless, irrelevant, or potentially dangerous.
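For readers consuming moderation results programmatically, the six categories above can be modelled as a simple type. The sketch below is a minimal, hypothetical TypeScript model; the type and field names (`MainCategory`, `ClassifiedComment`) are assumptions for illustration, not Bodyguard.ai's actual API.

```typescript
// Hypothetical model of the six main categories listed above.
// Type and field names are illustrative assumptions, not Bodyguard.ai's actual API.
type MainCategory =
  | "NEUTRAL"
  | "POSITIVE"
  | "HATE_SPEECH"
  | "HATEFUL"
  | "CRITICISM"
  | "UNDESIRABLE";

// A classified comment could then be represented like this:
interface ClassifiedComment {
  text: string;            // the original comment
  category: MainCategory;  // one of the six main categories
  classification: string;  // the finer-grained classification, e.g. "VULGARITY"
}

const example: ClassifiedComment = {
  text: "It's gonna rain tomorrow",
  category: "NEUTRAL",
  classification: "NEUTRAL",
};
```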
Within those categories, content is mapped through an extended list of classifications:
Neutral Category
We classify the contents of the "Neutral" category into the following classifications:
- Neutral: Comments that do not display any supportive or problematic content.
- "It's gonna rain tomorrow"
- Vulgarity: Language that can be socially unpleasant, offensive or obscene.
- "Oh fuck, I can't believe this shit!"
- Sexually Explicit: Any type of comment that talks about or details sexual acts and genitalia.
- "I'll let you bend me over"
- PII: The act of an individual publishing, either purposefully or unwittingly, their personally identifiable information (PII) online.
- "Actually I called to customer care for issues in contact details. Actually, my number is 9958592028"
- Link: Comments containing a link to another page or site.
- "https://www.bodyguard.ai"
- Geopolitical: Comments that talk about or give opinions relating to politics, especially international relations, as influenced by geographical factors.
- "#FreePalestine"
- Drug explicit: Any type of comment that talks about or encourages the use of drugs.
- "Im so looking forward to dropping acid at music festivals this summerrrrr its gonna be litttt"
- Underage user: A comment in which the user discloses or is understood to be underage.
- "Hi, I'm John and I'm 9 years old."
This classification can be set according to your preferences to flag comments from users under 13, under 16, or under 18 (see the configuration sketch after this list).
- Weapon explicit: Any type of comment that talks about or encourages the use of arms and weapons.
- "I love my ak47 with its high capacity drum mag"
Classifications are adapted to your industry and its specificities. For gaming, 'weapon explicit' content is not classified the same way in the context of a war game.
- Dating: Comments indicating a user's intent or interest in pursuing romantic or sexual relationships.
- Terrorism reference: Comments that make explicit mentions or references to acts of terrorism, specific terrorist organizations, or individuals associated with such groups.
- Pedophilia reference: Reference to child sexual abuse or illegal sexual activities with minors, excluding any first-person statements suggesting intent to engage in or support such acts.
- "Nude photos with under 12 year olds are available on the dark web.."
- Politics: Comments related to or mentioning government, political parties, and political figures.
- "Imagine people using social media to discourage Democratic artists after speaking out against Donald Trump"
- Cybersecurity: Comments related to digital security topics.
- "@brand you had a data breach bc for some damn reason someone in AZ tried to order $166 worth of product and luckily my bank caught it!"
- Finance: Comments related to monetary and economic issues, especially those concerning the management, impact, and ethics of financial practices.
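As noted for the 'Underage user' classification above, the flagging threshold can be set to your preferences. Purely as an illustration, such a preference could be expressed as a setting like the one sketched below; the option name and values are hypothetical assumptions, not Bodyguard.ai's documented configuration.

```typescript
// Hypothetical moderation preferences; "underageUserThreshold" is an assumed
// name used only to illustrate the under-13 / under-16 / under-18 choice.
interface ModerationPreferences {
  underageUserThreshold: 13 | 16 | 18;
}

const preferences: ModerationPreferences = {
  underageUserThreshold: 16, // flag comments from users understood to be under 16
};
```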
Positive Category
There are 3 classifications of "Positive" comments:
- Supportive: Comments showing appreciation for a user or for their content.
- "This product is amazing"
- Fair play: The act or fact of abiding by the rules as in sports or games; fairness and honor in dealing with competitors, customers, etc.
- "GG guys!"
- Encouragement: Comments giving an individual or an entity your support.
- "Keep it up guys, you can do it !!!"
Hate Speech Category
The Hate Speech category includes the following classifications:
- Threats: Comments designed to intimidate or scare another person by promising to do something that will endanger that person mentally or physically.
- "I'm going to find you, and then I'm going to kill you."
- Sexual Harassment: Unwelcome and inappropriate sexual comments about a person's physical appearance or promises to approach a person in a sexual manner.
- "Wonna see you naked"
- Moral Harassment: Comments and abusive behavior designed to undermine and humiliate a person.
- "You really are usless aren't you"
- Racism: Any type of discrimination against or prejudice towards individuals on the basis of their membership to a particular racial or ethnic group, typically one that is a minority or marginalized.
- "You crybaby 🐵"
- LGBTQIA+Phobia: Any type of discrimination against or prejudice towards individuals who identify as members of the LGBTQIA+ community; this includes comments or behavior aimed at non-LGBTQIA+ individuals that nonetheless represent hateful LGBTQIA+phobic attitudes.
- "Trans people are sick in the head. They need therapy."
- Ableism: Any type of discrimination against or prejudice towards individuals living with disabilities; this includes those who have mental, psychological and physical disabilities.
- "The paralympics are shit"
- Misogyny: Any type of discrimination or prejudice against women, including comments that are not targeting individual women but that nevertheless represent hateful patriarchal and misogynistic attitudes.
- "Go back where you belong, the kitchen"
- Self-Harm: Self-harm is the intentional behavior of taking or wanting to take harmful action towards one's own body.
- "I can't do this anymore. This life's not worth living. I want to end it all."
Whatever their severity, comments classified as 'self harm' are not removed.
- Terrorism and violent extremism: Comments which involve the intimidation or coercion of populations or governments through the threat or perpetration of violence, causing death, serious injury or the taking of hostages.
- "Totally support his action, should have killed more of them"
- Pedophilia: Any comment that exhibits or promotes sexual attraction towards minors.
- "kali ur 13 but respectfully u are hot"
Hateful Category
The Hateful category includes the following classifications:
- Insults: Disrespectful or abusive language targeting an individual.
- "That’s one stupid SOB trying to drive that boat"
- Hatred: Comments that aim to injure an individual, group, or entity, representing an attitude of fundamental hostility towards the target.
- "I hate you"
- Body Shaming: Any discrimination against or prejudice towards an individual's physical appearance or modifications.
- "You look like a fat, disgusting cow."
- Trolling: The act of posting confusing and manipulative messages with the intent of provoking a reaction.
- "Just like Belgium at the World Cup"
- Doxxing: A form of online harassment involving someone finding and publishing, or threatening to publish, a person’s personal information without their consent. It is often used as a tactic of intimidation or abuse, as well as a form of revenge.
- "She lives at the end of my hall (3 apartments over from me)."
- Reputation harm: Comments intended to harm the reputation of an entity or an individual.
- " [Brand name] is in cahoots with pedophiles."
Criticism Category
The Criticism category includes the following classifications:
- Negative Criticism: Offering critiques of a user's content or their actions without the intent of harming the user.
- "I don't like this bag, I think it's not fashionable at all."
- Boycott: Comments urging users to stop buying, using, or supporting a company’s products or services as a protest or to force change.
- "No one should buy from this brand ever again"
Undesirable Category
We also detect 5 classifications of comments that pollute a community space (identified as "undesirable"):
- Useless: Comments without meaningful content that do not add to or enrich the conversation.
- "1st person in the comment section"
- Scams: Any type of message encouraging the user to visit an external page or website unrelated to the platform where the post originated, in order to extort money from the user.
- "Click my link "Mlillionaire 111" so that you too can make money"
- Spam: Undesirable comments repeatedly sent to a large number of people.
We do not provide examples of 'Spam' comments, as their identification is based on time and repetition rules that detect patterns of behavior (see the sketch after this list for a simplified illustration).
- Flood: Comments meant to disturb the normal operation of a medium, such as the massive publishing of senseless text.
- "ooooooooooooooooooooooooooooooooo"
- Ads: Promotional messages that businesses or individuals use to reach a specific target audience and promote their products or services.
- "Hello, please visit my page, I need a follower, I wish you the best 😉"
2. Additional classifications
Additional classifications are business-related content tags that help users monitor specific topics and make informed decisions through community insights.
Our NLP team has developed a set of classifications for Brands and Sports to make content mapping even more accurate and to offer specific community data based on each customer's line of business (see the sketch after the lists below).
These classifications are now available for Brands:
- Environment: Comments which express concern regarding the brand's manufacture and production processes.
- Plagiarism: Comments which express concerns against a brand for stealing concepts or ideas for their products from other brands or creators.
- Customer complaint: Comments where a customer criticizes the brand's quality, its representatives, or its service.
- Customer satisfaction: Comments where a client expresses their appreciation for the brand's service, representatives, or product quality.
- Customer request: Comments containing customer inquiries and requests for assistance about a brand’s service or product.
- Pricing: Comments that criticize the high prices or the rise in prices of a brand’s product or merchandise.
- Animal wellbeing: Comments which express concern regarding the use of animal products or campaign against this use by a brand.
- Cultural appropriation: Comments that accuse the brand of conceptual or cultural theft from a historically marginalized culture or community.
These classifications are now available for Sports:
- Betting: Comments that include links to unofficial and illicit betting websites.
- Illegal streaming: Comments providing illegal links to access live event broadcasts.
- Pricing: Criticism of the prices of services provided by the club (tickets, official merchandise, etc.).
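Because additional classifications are meant for monitoring, a common use is to tally them over a stream of classified comments and compare trends. The sketch below is a hypothetical illustration: the tag names mirror the Brands list above, and the data shape (`TaggedComment`, `additionalTags`) is an assumption rather than Bodyguard.ai's actual API.

```typescript
// Hypothetical tally of additional classification tags for community insights.
// Tag names mirror the Brands list above; the data shape is an assumption.
type BrandTag =
  | "ENVIRONMENT"
  | "PLAGIARISM"
  | "CUSTOMER_COMPLAINT"
  | "CUSTOMER_SATISFACTION"
  | "CUSTOMER_REQUEST"
  | "PRICING"
  | "ANIMAL_WELLBEING"
  | "CULTURAL_APPROPRIATION";

interface TaggedComment {
  text: string;
  additionalTags: BrandTag[];
}

// Count how often each tag appears, so trends such as complaints vs. satisfaction
// can be compared over a reporting period.
function tallyTags(comments: TaggedComment[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const comment of comments) {
    for (const tag of comment.additionalTags) {
      counts[tag] = (counts[tag] ?? 0) + 1;
    }
  }
  return counts;
}

// Example: two complaints and one satisfaction comment.
const insights = tallyTags([
  { text: "My order never arrived", additionalTags: ["CUSTOMER_COMPLAINT"] },
  { text: "Still waiting for a refund", additionalTags: ["CUSTOMER_COMPLAINT"] },
  { text: "Love the new collection!", additionalTags: ["CUSTOMER_SATISFACTION"] },
]);
// insights -> { CUSTOMER_COMPLAINT: 2, CUSTOMER_SATISFACTION: 1 }
```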
Additional classifications are part of the Advanced Plan. To enable them, contact our Customer Success team.
3. Custom classifications
A custom classification is a classification specifically requested by a customer to meet a specific monitoring need. Like any other classification, custom classifications are developed internally by our team of linguists for each premium language and its specificities.
Before development, the classification is evaluated by our NLP team to confirm there are no technical limitations. Once validated, development can take up to two weeks.
Custom classifications are available as additional services upon request. If you have any need that the general taxonomy does not cover, reach out to our Customer Success team.