At Meta, we believe in the power of technology to collapse physical distance and help people connect with those who matter most. Today, virtual reality lets us immerse ourselves in vibrant digital spaces with a surprising sense of social presence—the feeling that you’re right there with someone else, no matter where in the world they happen to be. The augmented reality glasses of the future have the potential to help us stay more present and engaged with the world around us, rather than having our attention pulled away by our devices. And our vision of the metaverse—a set of interconnected digital spaces—is an inclusive one, where everyone can enjoy the full richness that AR, VR, and the internet have to offer.
But technology that opens up new possibilities can also be used to cause harm, and we must be mindful of that as we design, iterate, and bring products to market.
We often have frank conversations internally and externally about the challenges we face, the trade-offs involved, and the potential outcomes of our work. There are tough societal and technical problems at play, and we grapple with them daily. It’s table stakes: To build the next computing platform that puts people at the center, there needs to be collaboration among companies, experts, and policy makers to develop new tools that help keep those people safe.
Harassment in digital spaces is nothing new, and it’s something we and others in the industry have been working to address for years. That work is ongoing and will likely never be finished. It’s continually evolving, though its importance remains constant. It’s an incredibly daunting task, but it’s also a crucial opportunity to improve the online experience for millions—if not billions—of people.
We want everyone to feel like they’re in control of their VR experience and to feel safe on our platform. Full stop.
There are several tools already in place to help keep people safe while in VR.
If you don’t want to see someone, you can block them at the platform level. Once you’ve blocked someone, they won’t be able to add you as a friend, invite you to a game or party, or search for you.
You can report abusive content or behavior from inside a game or app, or you can report it outside of the headset from our web tool. Our goal is to empower people to report the things that upset them easily, reliably, and with minimal friction.
As a platform, our role is two-fold: to help provide safety and security to the people who use our devices, and to equip developers with the best possible tools for moderating the experiences they build.
Developers understand their communities best, so we need to partner closely with them in a combined effort to help keep people safe. We support reviews for all apps in the Oculus Store, and we can also provide warnings—of increasing severity—to developers whose apps have been identified as potentially toxic and to consumers who repeatedly engage in toxic behavior across those apps. We’re working to raise the visibility of our minimum acceptable blocking, muting, and banning standards for developers. And we already require in-app reports to be fed back to us, which lets us flag apps that receive a disproportionate number of reports. Beyond that, we encourage apps to integrate with our identity system (even if it’s just behind the scenes) so that blocks and mutes can be persisted across virtual worlds and personas more effectively.
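To make the idea of identity-backed block persistence concrete, here is a minimal, purely illustrative sketch. Nothing here reflects Meta’s actual API; the class and method names are hypothetical. The key idea is that blocks are keyed to a stable platform account ID rather than a per-app persona, so a block made in one app can be honored in every other app that resolves personas back to the same account.

```python
class BlockRegistry:
    """Hypothetical platform-level block list keyed by stable account IDs."""

    def __init__(self):
        # Maps a blocker's account ID to the set of account IDs they've blocked.
        self._blocks = {}

    def block(self, blocker_id, target_id):
        # Record the block once, at the platform level.
        self._blocks.setdefault(blocker_id, set()).add(target_id)

    def is_blocked(self, blocker_id, target_account_id):
        # Each app resolves an in-app persona to its platform account ID
        # before calling this, so the block follows the person across apps.
        return target_account_id in self._blocks.get(blocker_id, set())


registry = BlockRegistry()
registry.block("account_alice", "account_bob")

# A block made in one app applies to the same account's persona elsewhere.
print(registry.is_blocked("account_alice", "account_bob"))    # → True
print(registry.is_blocked("account_alice", "account_carol"))  # → False
```

The design choice worth noting is the indirection: apps never store blocks against their own persona identifiers, only against the shared account ID, which is what lets a single block survive name changes and world hopping.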
Of course, there are limitations to what we can do. For example, we can’t record everything that happens in VR indefinitely—it would be a violation of people’s privacy, and at some point, the headset would run out of memory and power. That said, we’ve developed a solution in our Horizon Worlds experience: a rolling buffer stored locally on the device and overwritten over time. When you submit a report in Horizon Worlds, it automatically includes information captured by that buffer as evidence of what happened, as explained in the Supplemental Terms of Service, which means you can submit a report without having to relive the experience. And when we see that something is going wrong based on blocks, mutes, and reports, we may send a trained safety specialist to remotely observe the session and, if needed, remove a user from it. We will also send a support message to let everyone know they were observed and why.
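The rolling buffer described above is a standard fixed-capacity data structure: new events continually overwrite the oldest ones, so only a recent window is ever retained. As an illustrative sketch only (the names and capacity here are invented, and real evidence capture in Horizon Worlds is far more involved), it might look like:

```python
from collections import deque


class RollingBuffer:
    """Fixed-capacity event buffer: the oldest entries are overwritten
    as new ones arrive, so only a recent window is ever stored."""

    def __init__(self, capacity):
        self._events = deque(maxlen=capacity)

    def record(self, event):
        # Appending to a full deque silently evicts the oldest entry.
        self._events.append(event)

    def snapshot(self):
        # Copy out the current window, e.g. to attach to an abuse report.
        return list(self._events)


buf = RollingBuffer(capacity=3)
for event in ["event_1", "event_2", "event_3", "event_4"]:
    buf.record(event)

print(buf.snapshot())  # → ['event_2', 'event_3', 'event_4']
```

Because old entries are discarded automatically, storage use stays bounded and nothing outside the recent window is retained—the property that makes this pattern a reasonable compromise between evidence gathering and privacy.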
As we help build the metaverse, creating ways for user protection to operate not just across applications but also across platforms will be important. The metaverse is a long-term vision—it will take work over many years with many other companies and creators before it reaches scale. That said, our tools for reporting are already improving and will continue to improve in the future.
We’re investing heavily in this. For example, the $10 billion investment in Reality Labs that Meta recently announced includes our efforts to address safety and integrity challenges. We’re also collaborating with industry partners, civil rights groups, government agencies, nonprofits, and academic institutions to think through tough issues in the metaverse, including safety, integrity, equity, and inclusion. In September, we announced a $50 million XR Programs and Research Fund, a two-year investment to work with experts in government, academia, and industry to help us anticipate risks and get it right. Specifically, we’re facilitating independent external research with institutions across the globe such as Seoul National University and The University of Hong Kong, which will focus on research into safety, ethics, and responsible design.
Meta is also a founding member of the XR Association (XRA) to help build responsible XR, which includes virtual reality, augmented reality, mixed reality, and future immersive technology. And we joined the XR Advisory Council in October, alongside policy makers, experts, and academics, to collaboratively work to address key issues facing the XR ecosystem.
The road to the metaverse is a long and necessarily collaborative one, and it can be easy to forget that VR is still a relatively nascent medium. Shared norms around acceptable behavior in these spaces are still evolving. That’s why we’re committed to working together with developers, creators, academics, and more as we have these conversations and strive to get this right. Through considered and conscientious enforcement today, we can ideally shift the broader online culture towards a more open, inclusive, and safe space for everyone.