GGWP raises $10M for AI-based platform to moderate multiplayer games

GGWP has announced that it has raised $10 million in funding for its AI-based platform for moderating community behavior in multiplayer games.

GGWP also said it is introducing a free-to-use model: all developers will get free access to its moderation tools, with paid tiers for additional services.

It focuses on using AI to scan text chat in multiplayer games for toxic behavior and then notifying community moderators when action is needed, such as banning players, said Dennis “Thresh” Fong, CEO of GGWP (which stands for “good game, well played”), in an interview with GamesBeat.

“We’re a comprehensive game moderation platform. And so what that means is we have solutions that we think are best in class across a number of different factors,” Fong said. “When some people talk about moderation, they’re usually talking about either chat or voice chat moderation. So we have a solution in text chat that is a lot more than just a keyword filter.”

GGWP has raised $10 million to fight toxicity.

He added, “We use contextual AI to figure out and detect behaviors that are typically very difficult to detect.”

The company has been working with dozens of developers over the past year and is now ready to make the platform available to everyone. Samsung Ventures and SK Telecom Ventures led the round.

GGWP was founded by games entrepreneur Fong, a former top professional gamer; Crunchyroll founder Kun Gao; and data and AI expert George Ng.

“We have felt from the start that all developers should be able to support the healthy online experiences that players deserve,” said Fong. “Even the best-staffed developers can’t keep up with even a fraction of reported incidents through human moderation. Leveraging AI and a sophisticated platform of tools, we address the vast number of incidents. And now with GGWP going free, we hope the whole industry will join us on our mission to help make our game communities safe and enjoyable social spaces.”

GGWP’s AI-driven technology has been successful in fighting toxicity, with teams using the platform able to triage an average of 98% of player reports. The platform flags the most serious reports for additional human review, in what the company calls the most complete and effective model for community moderation available today.

Fong said that the company can automatically detect behaviors such as intentionally going AFK (away from keyboard), leaving a match early in a team-based game, friendly fire, feeding, griefing, trolling and more.

Fong said the most popular games in the world receive billions of user-submitted reports a year and are able to respond to less than 0.1% of them. GGWP revolutionizes report handling by automatically validating, triaging, and prioritizing reports, improving the response rate to more than 98%.
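To make that pipeline concrete, here is a minimal sketch of how automated validation and triage of player reports might look. The function names, categories, and thresholds are illustrative assumptions, not GGWP’s actual API:

```python
# Hypothetical sketch of automated report triage; not GGWP's actual API.
from dataclasses import dataclass

@dataclass
class Report:
    reporter_id: str
    target_id: str
    category: str          # e.g. "chat_abuse", "afk", "griefing"
    evidence_score: float  # 0..1 confidence from automated detectors

SERIOUS_CATEGORIES = {"hate_speech", "child_grooming"}

def triage(report: Report, reporter_credibility: float) -> str:
    """Return an action for one report: dismiss, auto-action, or escalate."""
    # Reports from low-credibility accounts are discarded outright.
    if reporter_credibility < 0.2:
        return "dismiss"
    # The most serious categories always go to a human moderator.
    if report.category in SERIOUS_CATEGORIES:
        return "escalate_to_human"
    # Incidents the detectors can validate are handled automatically.
    if report.evidence_score >= 0.8:
        return "auto_action"
    return "dismiss"
```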

Of course, if there are 100 million reports about toxic play, then that means there are still millions of reports for humans to handle. And that’s just too many. Still, GGWP brings down the numbers dramatically and makes things more manageable for developers.

All developers will now have access to GGWP’s entire suite of products, with flexible pay-as-you-go pricing and no minimum commitments.

GGWP’s solutions work on any platform, including web, PC, mobile, and consoles.

A big difference in reputation

Dennis Fong is CEO of GGWP.

Fong argues that GGWP is the first comprehensive game moderation platform, working to holistically combat harmful and disruptive player behavior through a variety of best-in-class solutions that can be used together or separately.

“This is where there are no other vendors in our space, which is what people typically call player behavior,” Fong said. “We call it more specifically ‘in-game actions.’ One of the most frustrating things is to queue up for a competitive match and then have one of your teammates go AFK, or leave the match early or ‘rage quit’ or something like that. And so we actually uniquely have AI models that can detect whether someone did that intentionally or not, and then respond appropriately automatically. Likewise, we can detect things like griefing and feeding and cheating, like intentional friendly fire, and so forth.”

Fong said the company isn’t providing an anti-cheat solution, which companies like Riot Games or Activision build on their own. But GGWP can take anti-cheat data as an input into its system, he said.

GGWP’s solutions include player reports, usernames, chat moderation, player behavior detection, a reputation platform, game-specific context, and a Discord bot. The most interesting part to me is the reputation system, which assigns a reputation to each player account, rather than to a particular person (which could be a privacy violation).

Many toxic players will gang up on a good player and file false reports of toxicity in hopes of getting that player banned. But if the player has a good reputation score, GGWP wouldn’t automatically flag them for banning despite the reports. It would also check whether the reports are coming from accounts with poor reputations.

Every incident, whether it’s AI-detected, user-reported, or identified through an external source, flows through the GGWP platform and affects a user’s reputation score. Credibility ratings are also generated based on how often GGWP validates a player’s reports.
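As a rough sketch of how such scoring could work (the weights, names, and update rules below are invented for illustration; the article does not describe GGWP’s actual math):

```python
# Hypothetical scoring sketch; weights and update rules are illustrative only.
SOURCE_WEIGHT = {"ai_detected": 1.0, "external": 0.8, "user_reported": 0.6}

def update_reputation(reputation: float, severity: float, source: str) -> float:
    """Lower a 0..1 reputation in proportion to incident severity and source trust."""
    penalty = 0.1 * severity * SOURCE_WEIGHT[source]
    return max(0.0, reputation - penalty)

def update_credibility(credibility: float, report_validated: bool) -> float:
    """Nudge a 0..1 reporter credibility toward 1 when a report is validated,
    toward 0 when it is not (a simple exponential moving average)."""
    target = 1.0 if report_validated else 0.0
    return credibility + 0.05 * (target - credibility)
```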

GGWP tracks player behavior inside games over time.

In theory, player reporting features enable and empower users to identify the bad actors in the ecosystem. The reality, of course, is that 40% to 50% of filed reports are fake or invalid, Fong said. He cited a small game with 30,000 players that received over 100,000 reports in its first month.

“You can imagine that it just becomes completely untenable very, very quickly,” Fong said. “So we built a system that can actually automatically triage and process these reports. And how we do that, of course, is deploy AI that can detect toxicity in chat, and detect it in in-game actions and behaviors automatically, which means that we can actually validate when something has happened.”

The company uses that validation to build a credibility score for every user account, based on how often its reports of other users’ bad behavior prove to be credible.

If a report comes in from a user with a poor credibility rating, GGWP will toss out the report.

“And likewise, we can also help identify who the really good actors are, those who are helping you police your community,” Fong said. “Everything that we’re tracking feeds into a reputation score for the user, which I think is pretty self-explanatory. And that reputation score we provide back to the company. So it’s obviously used by all of our models. But it’s also provided back to the customers for free.”

And Fong said usernames are more than just identifiers; they are the first interaction point between players within a game’s community. GGWP catches the unnatural language, l33tspeak, and workarounds that gamers use to create inappropriate usernames.

“As you can imagine, that’s usually the first thing that a user will try to do, picking an inappropriate username, so we use AI for that as well,” Fong said. “And then we also have a Discord bot, that uses our same chat and reputation models that you can just deploy on your Discord server for moderation as well.”
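For illustration, here is a naive sketch of the character-substitution problem that keyword blocklists miss; GGWP says it uses AI for this, so the substitution table and placeholder blocklist below are stand-in assumptions:

```python
# Hypothetical l33tspeak normalization before screening a username.
# A production system would use ML models; this only shows why plain
# blocklists fail against character substitutions.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s"})
BLOCKLIST = {"badword"}  # placeholder entries

def is_username_allowed(name: str) -> bool:
    normalized = name.lower().translate(SUBSTITUTIONS)
    # Collapse separators often used to dodge exact matches.
    collapsed = normalized.replace("_", "").replace("-", "").replace(".", "")
    return not any(term in collapsed for term in BLOCKLIST)

print(is_username_allowed("xX_b4dw0rd_Xx"))  # False: normalizes to "xxbadwordxx"
```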

GGWP also goes beyond simple keyword-based filters by using contextual AI to identify and respond to difficult-to-detect behaviors like sarcasm, gameplay criticism, bullying, hate speech, child grooming, and more.

GGWP gives community moderators a dashboard for monitoring players.

GGWP’s Discord bot also leverages the same chat and reputation models to help moderate Discord servers and conveniently brings Discord incidents into the same view on the GGWP platform. And the system uniquely takes into account additional factors to help determine player intention and context for every incident.

For example, it won’t penalize people for what appears to be toxic chat if they’re goofing off with their friends. Other factors include the player’s rank and skill, activity level, and whether it’s a casual or ranked match.
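A hedged sketch of how such context might adjust a decision (the factors and thresholds here are invented for illustration, not GGWP’s actual logic):

```python
# Hypothetical context-weighting sketch; thresholds are illustrative only.
def should_penalize(toxicity: float, in_friend_group: bool,
                    ranked_match: bool) -> bool:
    """Decide whether a chat incident warrants a penalty, given match context."""
    threshold = 0.9 if in_friend_group else 0.6  # banter among friends gets slack
    if ranked_match:
        threshold -= 0.1  # competitive matches are held to a stricter bar
    return toxicity >= threshold
```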

By focusing on player behavior, a developer can learn how to govern a community. For instance, a player may play 100 games in a day, reach a boiling point, and do something toxic. That player might be told to relax rather than be banned outright, as everybody has a breaking point, Fong said.

“Instead of having a moderator look it up and review it, our system will just process it automatically,” Fong said.

GGWP’s platform has been backed by leading investors and game companies, including Bitkraft Ventures, Makers Fund, Griffin Gaming Partners, and Riot Games. GGWP has 40 people.

One thing that is helpful is when a company has more than one game and can identify a toxic user across multiple games. Sadly, for competitive or privacy reasons, such information can’t be shared between companies that are rivals. That gives toxic players a chance to jump from one game to another, griefing players in one game until they are booted and then doing the same in the next. Fong said there isn’t really a solution for toxic players who bounce around like that, except that they may eventually run out of popular games to jump to.

An alliance of companies might stop such players, but that’s not something that GGWP is pursuing today, Fong said. Maybe that’s something the whole industry should think about for the future.
