Here are my additional thoughts for today:
First, I would like to define a few terms:
- Cyberbullying:
An umbrella term (like “online harassment”) meant to encompass a number of harassing online behaviors. Like physical bullying, “cyberbullying” is generally aimed at young people and may involve threats, embarrassment, or humiliation in an online setting.
- Cyber-Mob Attacks:
When a large group gathers online to try to collectively shame, harass, threaten, or discredit a target.
- Hateful speech and online threats:
Hateful speech is a form of expression attacking a specific aspect of a person’s identity, such as one’s race, ethnicity, gender identity, religion, sexual orientation, or disability. Hateful speech online often takes the form of ad hominem attacks, which invoke prejudicial feelings over intellectual arguments in order to avoid discussion of the topic at hand by attacking a person’s character or attributes. Threats issued online can be just as frightening as they are offline, and are frequently meant to be physically or sexually intimidating.
- Online sexual harassment:
Targeted at women at a far higher rate than men, online sexual harassment encompasses a wide range of sexual misconduct on digital platforms. More specific examples include:
- Non-consensual sharing of intimate images and videos: Often referred to as “revenge porn,” this type of abuse is defined as the public distribution of sexually explicit images without the consent of the victim. Note: this has happened to players who go beyond wolf, and it happens on sites such as Kik or Instagram.
- Exploitation, coercion, and threats: A person receiving sexual threats, being coerced to participate in sexual behavior online, or blackmailed with sexual content.
(see source: https://onlineharassmentfieldmanual.pen ... -of-terms/)
- Unwanted sexualization: When a person receives unwelcome sexual requests, comments, and content. This is by far the most common form I've seen.
Minorities are consistently failed when an online game company and the people it hires do nothing about in-game harassment. Currently, reporting targeted harassment to game moderators results in inaction. Moderators committed to doing nothing will suggest that targets just “use the mute function,” putting the onus on the player receiving abuse. This is poor advice: a minority in an unprotected space can mute legions of people and still not see an end to the hate speech. If nothing is done to punish hate speech, it will continue to fester within these public spaces, which is exactly the sort of social trend that an effective system of moderation is supposed to prevent. Furthermore, there is no reason to assume this inaction is mere indifference; moderators are just as prone to bigoted beliefs as the players they oversee.