Richard Dent is studying for a PhD in digital sociology at the University of Cambridge. This post originally appeared on his website http://www.richardjdent.com.
Improbable’s second round of funding gives them the opportunity to break down the technical barrier to a new generation of massively multiplayer virtual worlds. If Improbable’s SpatialOS works as advertised, it will create a new virtual society where perhaps millions will socially interact in a ‘frictionless’ virtual environment.
This kind of virtual world was envisioned by Neal Stephenson in his novel Snow Crash. The Metaverse is coming. Alongside the people in these virtual worlds will be AI bots indistinguishable from real people, and these AIs could play a pivotal role in the success or failure of these SpatialOS-powered spaces. What else could govern such a massive online user base?
This raises important questions. What are the appropriate rules for such a massive online social world? Where should the social boundaries lie? What is the right balance? Currently, it's not clear.
"AIs could play a pivotal role in the success or failure of these SpatialOS-powered spaces"
Digital society has many similarities to real-world society. Both have rules, both have social norms and both have social inequalities. But the way these manifest can be radically different on the Internet. Understanding these similarities and differences may be crucial to the success of Improbable's new digital world-making technology. If you build it, they will come. But will they stay?
After all, we've seen the damage done by trolling. This is partially driven by what some academics call the online disinhibition effect: individuals, hiding behind anonymous profiles, believe they can get away with more extreme behaviour, and they often do. Social elites can also form online. Early adopters and the more digitally literate have an advantage, and this can lead to insider/outsider dynamics that put new users off.
People bring their shyness and confidence with them when they go online. This can make a major difference in a virtual world where users are expected to talk to one another. Online role-playing games can be dominated by less skilled players who happen to be confident, strong communicators. Getting the newbies to stay will be part of the challenge. Can pro-social AIs help? A recent article in Nature implies that they might.
The social opportunities are significant. Virtual worlds can host genuinely welcoming and engaging communities. People who suffer from illness find friendship and companionship there that the real world can't always offer. Some virtual worlds even offer a sense of social levelling and equality, especially when users play by the rules and respect others.
Moderation, especially when intelligently applied, is an important factor in the success of large virtual communities. Look at Reddit's voting system or the more carefully moderated chat rooms on Twitch. Positive virtual communities thrive when the right boundaries are set. We can teach AIs the ethical and social boundaries that make real-world societies grow and prosper, or we can customise that programming to different tastes.
I'm not proposing making every Improbable world a kids-only safe space. Perish the thought. The lack of traditional social boundaries gives the current online gaming world a Wild West feel, and that is welcomed by many. In fact, we can learn from these spaces what the right boundaries are. Nobody wants overzealous AI police timing people out for innocent mistakes, minor infractions or the odd meme.
Millions, perhaps billions, may want to escape the real world for a virtual world created by Improbable’s technology, if only for a short time. I’m one of them. I hope that we find the right balance to make large online gaming and social spaces genuinely fun and socially engaging. If successful, these virtual worlds could become positive virtual communities for many of us. How we program AI could be part of this success.