What lessons can we learn from the recent past, and how do we avoid making similar mistakes in the metaverse? Here are some aspects to consider:
The design of smartphones and social media platforms has fueled an epidemic of screen addiction, which has been linked to rising rates of depression, most worryingly among teenagers. The addictiveness of these platforms may not have been intentional, but it wasn’t inevitable. It was the result of business models predicated on maximizing user engagement: that goal, pursued through relentless A/B testing, produced design features, from infinite scroll to push notifications, that foster addiction.
What impact will the business models and design decisions powering the metaverse have on tech addictions? While we are still in the early days and there are many visions of the metaverse, a common thread through many of them is persistence. The stated goal is to design environments that are always on, and in which people spend virtually all of their time. Will the goal of an always-on metaverse, like the goal of maximizing engagement in the social media era, drive a new wave of user addictions?
Exercise and mental health
Next, consider another design choice: locomotion within metaverse platforms. Physical exercise demonstrably lowers depression and stress while improving quality of sleep. So, experiences in which people spend lots of time while remaining sedentary will predictably worsen mental health outcomes. Unfortunately, designing metaverse experiences in which people get real exercise, for instance by walking or running through the metaverse on their own legs, isn’t yet practical.
Our bodies still inhabit the physical world, with all its walls to walk into and furniture to trip over. Solutions such as omnidirectional treadmills are cumbersome and require significant user investment — making them unlikely to gain widespread adoption. Will designers and engineers crack the code of physical locomotion in the metaverse? Or will a new generation of immersive and sedentary experiences lead to negative health outcomes?
Polarization and disconnection
Social media has played a significant role in fueling political polarization and diminishing social trust. Behavioral economists have extensively documented the psychological underpinnings of our tribal behaviors. While technology did not create these instincts, it weaponized them. Social media platforms enabled echo chambers and filter bubbles in which people hear only from like-minded individuals. Meanwhile, algorithms seeking to maximize engagement discovered by trial and error that an effective way of engaging people is to feed them moral outrage about the opposing political camp.
Without careful consideration of design choices, the metaverse could supercharge polarization and filter bubbles. Imagine not just different metaverse platforms for different political persuasions, but infinitely personalized experiences within the same platform. A liberal and a conservative walking through the same metaverse neighborhood could be shown different retailers, avatars, bots, and experiences — all customized to their political persuasion.
If the metaverse becomes an environment in which people spend most of their waking hours, this also raises the prospect of people becoming increasingly disconnected from reality — especially if these spaces are designed to conform to people’s worldviews. If social media monetized outrage, the metaverse might evolve to monetize numbing — building spaces that are escapes from the real world at a time when increasingly urgent societal challenges (climate change, economic inequality, authoritarian political movements) demand more attention, not less.
Misinformation and critical thinking
It’s no secret that social media has a misinformation problem. Despite increased efforts, misinformation has proven very difficult to eradicate, for two reasons: social networks generate vast amounts of information, and decisions about what to take down often involve nuanced judgment calls. As a result, while automated systems play a role (video hashing, for example, lets AI instantly take down duplicates of a known conspiracy-theory video), content moderation remains a labor-intensive task that often delivers imprecise results.
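The duplicate-matching idea behind video hashing can be illustrated with a perceptual "difference hash" (dHash), a common near-duplicate detection technique. This is a minimal sketch, not any platform's actual pipeline: the grid size and the idea of hashing one pre-downscaled grayscale frame are simplifying assumptions (real systems hash many frames and use more robust fingerprints).

```python
def dhash(pixels, hash_size=8):
    """Difference hash: compare each pixel to its right-hand neighbor.

    `pixels` is a grayscale frame already downscaled to
    hash_size rows x (hash_size + 1) columns of brightness values.
    """
    bits = 0
    for row in pixels:
        for c in range(hash_size):
            # Each bit records whether brightness drops left-to-right.
            bits = (bits << 1) | (1 if row[c] > row[c + 1] else 0)
    return bits  # a 64-bit fingerprint when hash_size=8

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A frame and a lightly edited copy hash to nearby values, so a small
# Hamming distance flags a likely duplicate without exact-match hashing.
frame = [[10 * c for c in range(9)] for _ in range(8)]
tweaked = [row[:] for row in frame]
tweaked[0][0] = 100  # minor edit: one pixel brightened

print(hamming(dhash(frame), dhash(tweaked)))  # → 1 (near-duplicate)
```

Because the hash captures coarse brightness gradients rather than exact bytes, re-encoded or slightly cropped copies of a flagged video still land within a small Hamming distance of the original fingerprint, which is what makes automated takedown of duplicates fast.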
The metaverse could be poised to magnify the misinformation problem. For one, the move to the metaverse will lead to an unprecedented explosion in the volume of information generated. Imagine multiple online worlds in which information is communicated in real time via speech, video, text overlay, facial expressions, gestures and more.
Information shared on social media platforms is relatively static: posts, images, and videos do not change once created and can be inspected at any time. Information generated in a metaverse, by contrast, will be far more fluid, dynamic, and fleeting, consisting largely of real-time conversations and interactions between individuals. That makes it much harder to track. Many design features of metaverse platforms could also encourage and enable anonymity, empowering adversaries to spread misinformation with greater ease.