In the years to come, consumers will spend a significant portion of their lives in virtual and augmented worlds. This migration into the metaverse could be a magical transformation, expanding what it means to be human. Or it could be a deeply oppressive twist that gives corporations unprecedented control over humanity.
I do not make this warning lightly.
I’ve been a champion of virtual and augmented reality for over 30 years, starting as a researcher at Stanford, NASA, and the US Air Force, and going on to found a number of virtual and augmented reality companies. After surviving several cycles of hype, I believe the moment has finally arrived: the metaverse will happen, and it will have a significant impact on society over the next five years. Unfortunately, the lack of regulatory protections worries me deeply.
Indeed, metaverse providers will have unprecedented power to profile and influence their users. While consumers are aware that social media platforms track where they click and who their friends are, metaverse platforms (virtual and augmented) will have much deeper capabilities, monitoring where users go, what they do, who they are with, what they are looking at, and even how long their gaze lingers. The platforms will also be able to track users’ posture, gait, facial expressions, voice inflections and vital signs.
Invasive surveillance is a privacy issue, but the dangers increase dramatically when you consider that targeted advertising in the metaverse will shift from flat media to immersive experiences that will soon become indistinguishable from genuine encounters.
For these reasons, it is important that policymakers consider the extreme power that metaverse platforms could wield over society and strive to secure a basic set of “immersive rights.” Many guarantees are necessary, but as a starting point, I offer the following three basic protections:
1. The right to experiential authenticity
Promotional content permeates the physical and digital worlds, but most adults can easily identify advertisements. This allows individuals to view the material in the proper context – as paid messaging – and bring healthy skepticism when considering the information. In the metaverse, advertisers could subvert our ability to contextualize messages by subtly altering the world around us, injecting targeted promotional experiences that are indistinguishable from genuine encounters.
For example, imagine walking down the street in a virtual or augmented world. You notice a parked car that you have never seen before. As you pass, you overhear the owner telling a friend how much he loves the car, a notion that subtly influences the way you think, consciously or unconsciously. What you don’t realize is that the encounter was entirely promotional, placed there for you to see the car and hear the interaction. It was also targeted – only you saw the exchange, chosen according to your profile and personalized for maximum impact, from the color of the car to the gender, voice and clothing of the virtual spokespersons used.
While this type of covert publicity might seem benign, simply influencing opinions about a new car, the same tools and techniques could be used to fuel political propaganda, misinformation, and outright lies. To protect consumers, immersive tactics such as virtual product placements and virtual spokespersons need to be regulated.
At the very least, regulations should protect the fundamental right to authentic immersive experiences. This could be achieved by requiring promotional artifacts and promotional people to be visually and audibly distinct in an overt manner, allowing users to perceive them in the appropriate context. This would prevent consumers from confusing modified promotional experiences with authentic experiences.
2. The right to emotional intimacy
We humans have evolved the ability to express emotions through our faces, voices, postures, and gestures. It is a basic form of communication that complements verbal language. Recently, machine learning has enabled software to identify human emotions in real time from faces, voices, and posture, as well as from vital signs such as respiratory rate, heart rate, and blood pressure. While this allows computers to engage in non-verbal communication with humans, it can easily cross the line into predatory privacy violations.
This is because computers can detect emotions from cues that are imperceptible to humans. A human observer cannot easily read heart rate, respiratory rate, or blood pressure, which means these signals may reveal emotions the observed individual did not intend to convey. Computers can also detect facial “micro-expressions” that are too brief or subtle for humans to perceive, and can even infer emotions from subtle blood-flow patterns in the face that people cannot see, in both cases revealing emotions that were never meant to be expressed.
At a minimum, consumers should have the right not to be emotionally evaluated at levels beyond human capacity. This means prohibiting the use of vital signs and micro-expressions for emotion detection. Additionally, regulators should consider banning emotional analysis for promotional purposes altogether. Personally, I don’t want to be targeted by an AI-driven chatbot that adjusts its promotional tactics based on emotions inferred from my blood pressure and respiratory rate, both of which can now be tracked by consumer technologies.
3. The right to behavioral confidentiality
In virtual and augmented worlds, tracking of location, posture, gait, and line of sight is necessary to simulate immersive experiences. Although detailed, this data is only needed in real time; there is no need to store it for long periods. This matters because stored behavioral data can be used to create detailed behavioral profiles that document users’ daily actions with extreme granularity.
With machine learning, this data can be used to predict how individuals will act and react under a wide range of circumstances in their daily lives. And because platforms will have the ability to modify environments for persuasive purposes, predictive algorithms could be used by paying sponsors to preemptively manipulate user behaviors.
For these reasons, policymakers should consider banning the storage of immersive data over time, thereby preventing platforms from generating behavioral profiles. Additionally, metaverse platforms should not be allowed to correlate emotional data with behavioral data, as this would allow them to deliver modified promotional experiences that not only influence what users do in immersive worlds, but skillfully manipulate how they feel while doing it.
Immersive rights are necessary and urgent
The metaverse is coming. While many of its impacts will be positive, we need to protect consumers from harm with basic immersive rights. Policymakers should consider guaranteeing fundamental rights in immersive worlds. At a minimum, everyone should have the right to trust the authenticity of their experiences, without fear that third parties will modify their environment for promotional purposes without their knowledge and consent. Without such basic regulations, the metaverse may not be a safe or trusted place for anyone.
Whether you’re looking forward to the metaverse or not, it could be the biggest change in how society interacts with information since the invention of the internet. We can’t wait for the industry to mature to put safeguards in place. Waiting too long could make it impossible to fix the issues, as they will become part of the core business practices of the major platforms.
For those interested in a safe metaverse, I point to Metaverse Safety Week, an international community effort taking place in December 2022. I sincerely hope it becomes an annual tradition, and that people around the world strive to make our immersive future safe and magical.
Louis Rosenberg, PhD, is an early pioneer in the fields of virtual and augmented reality. His work began more than 30 years ago in laboratories at Stanford and NASA. In 1992, he developed the first immersive augmented reality system at the Air Force Research Laboratory. In 1993, he founded the first virtual reality company, Immersion Corporation (listed on Nasdaq), and in 2004 he founded the first AR company, Outland Research. He received his PhD from Stanford University, has been awarded more than 300 patents for VR, AR, and AI technologies, and was a professor at California State University.