Elections are not the only thing Big Tech is “protecting” this summer: athletes competing in major sporting events have also made the list.

Meta has announced that the “protection measures” rolling out across its apps (Facebook, Instagram, and Threads) will also extend to fans.

However Meta phrases it, the objective is clearly to censor whatever the giant decides qualifies as abusive behavior, bullying, and hate speech.

The events drawing Meta’s particular attention are the European Football Championship (EURO 2024) and the Olympic and Paralympic Games.

To prove how serious it is about implementing censorship in general, the company revealed that it has invested more than $20 billion in the “safety and security” segment (spending that often results in unrestrained stifling of speech and deplatforming).

Coincidentally or not, this investment began in 2016, and since then, what Meta calls its safety and security team has grown to 40,000 members, 15,000 of whom work as “content reviewers.”

Before explaining how it intends to “keep athletes and fans safe,” Meta also summed up the results of this spending and activity: 95% of whatever it deemed “hate speech” and the like was censored before anyone even reported it, while AI was used to automatically warn users that their comments “might be offensive.”

Now, Meta says users will be able to turn off DM requests on Instagram, walling themselves off from anyone they don’t follow. This is supposed to “protect” athletes, presumably from unhappy fans, and there is also a feature called “Hidden Words.”

“When turned on, this feature automatically sends DM requests — including Story replies — containing offensive words, phrases and emojis to a hidden folder so you don’t have to see them,” the blog post explained, adding, “It also hides comments with these terms under your posts.”

This is just one of the features on Facebook and Instagram that effectively allow people to use these platforms for influence and/or monetary gain without interacting with anyone they don’t follow, even indirectly via comments (which will be censored, aka “moderated”).

Meta is not only out to “protect athletes” but also to “educate” other users, this time on how to display “supportive behavior.” It doesn’t bode well that notoriously error-prone algorithms (AI) appear to have been given a key role in detecting “abusive” or “offensive” comments and then warning people they “may be breaking our rules.”

But this “training” of users works, according to Meta, which shared tests of what it refers to as interventions showing that “people edited or deleted their comment 50% of the time after seeing these warnings.”