CS:GO’s Toxicity-Busting Minerva Chat AI Banned 20,000 Players in Just One Month


Artificial intelligence appears to be making headway in ensuring that the online multiplayer experience is a heavily sanitized one. Within its first weeks, FACEIT and Google’s Minerva AI banned 20,000 Counter-Strike: Global Offensive players after analyzing over 200,000,000 chat messages and deeming 7,000,000 of them “toxic.” Toxic language has reportedly dropped by as much as 20% since the AI was introduced.

“If a message is perceived as toxic in the context of the conversation,” FACEIT explains in a blog post, “Minerva issues a warning for verbal abuse, while similar messages in a chat are flagged as spam. Minerva is able to take a decision just a few seconds after a match has ended: if an abuse is detected it sends a notification containing a warning or ban to the abuser.”
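FACEIT hasn’t published Minerva’s internals, but the post-match flow it describes — score each message, warn first-time offenders, escalate repeat offenders to a ban — could be sketched roughly as follows. Everything here is an illustrative assumption: the keyword “classifier,” the function names, and the escalation rule are stand-ins, not FACEIT’s actual system, which uses a trained model rather than a word list.

```python
# Illustrative sketch of a post-match chat-moderation pass.
# All names, thresholds, and the toy "classifier" are assumptions,
# not a description of Minerva's real implementation.

TOXIC_TERMS = {"idiot", "trash", "uninstall"}  # stand-in for a trained model


def is_toxic(message: str) -> bool:
    """Toy classifier: flags a message containing a known toxic term."""
    return any(term in message.lower() for term in TOXIC_TERMS)


def review_match(chat_log, prior_warnings):
    """Run seconds after a match ends; returns an action per offender.

    chat_log: list of (player, message) tuples from the finished match.
    prior_warnings: dict mapping player -> count of past warnings.
    """
    toxic_counts = {}
    for player, message in chat_log:
        if is_toxic(message):
            toxic_counts[player] = toxic_counts.get(player, 0) + 1

    actions = {}
    for player in toxic_counts:
        # First offense earns a warning; players already on record
        # are escalated to a ban.
        if prior_warnings.get(player, 0) > 0:
            actions[player] = "ban"
        else:
            actions[player] = "warning"
    return actions
```

For example, a player with no prior warnings who sends an abusive message would receive a warning notification, while the same message from a player already warned would trigger a ban.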


Tsing Mui
Tsing has been writing the news for over 5 years, first at [H]ard|OCP and now at The FPS Review. He has a background in journalism and makes sure to give his readers the relevant context to why each news post matters.
