Converging the physical and digital spaces has long been the stuff of sci-fi, but it is only now that technology can materialize it, and businesses want to monetize it.
The rather prosaic world of technology has suddenly started to speak in verses. Microsoft talked about its Enterprise Metaverse some time ago, Nvidia’s Omniverse has been around since December 2020, and the latest to jump on the bandwagon is Facebook with its copycat Metaverse concept, a parallel universe described as a “set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.” Stiff competition is brewing among companies to innovate on the concept of merging the physical and digital spaces to create new economic and business value. No wonder that, in a bid to reposition its brand identity, Facebook even changed the name of its parent entity to “Meta” this week!
The concept of a mashup of the physical and digital spaces has been around for several years, but it is only now that foundation technologies exist and business drivers are strong enough to monetize it with real-world solutions. Nvidia has already launched its Omniverse, a virtual environment where teams can collaborate to co-create solutions. It has an interesting demo on its site that shows a group of architects, engineers, and space designers collaborating to create an office complex.
The pandemic has proved to be a tipping point in the development of virtual collaborative platforms, as remote working has become the norm today. Everyone is trying to solve remote work and the idea of telepresence. We have been missing the aroma of the coffee beans in the dispenser at our office; the soft touch of a loved one’s fingers; the fragrance of the flowers in a friend’s garden in a distant land; or maybe even the smell of the sea at a beachside holiday resort.
A parallel world
Cracking the idea of creating a parallel world that is almost like the real one, engaging the other senses apart from the visual and auditory, has been tantalizing for the technology industry. The real impetus came when people began to miss physical contact in a near-total virtual world necessitated by the pandemic. We are at the threshold of an emerging future in which the physical and digital worlds converge in the most realistic way imaginable. It goes well beyond augmented or virtual reality; it is extended reality (XR). New use cases leveraging XR are coming out every day.
XR will radically change the way people access information, the way media and news are delivered, and the way society collectively understands reality. From a business perspective, XR will greatly facilitate training and education through digital twins, informational overlays, and remote meetings, and will reduce the time spent traveling for information and knowledge gathering.
Digital consulting firm Kalypso, a Rockwell Automation company, recently published a case study of its work with a consumer-packaged-goods company to design an AR platform to provide manufacturing equipment training for workers with immersive experiences that could be used anywhere in the world at any time on any equipment. Kalypso designed a customizable training program using AR based on existing digital content, including CAD models, simulations, IoT data, video recordings, animations, and other media. Users didn’t have to own a VR headset to interact with the content and could rely on a smartphone or tablet.
Overcoming the latency barrier
While the concept of XR is not new, the most critical barrier to its accessibility and widespread adoption has been the inability to process information at high enough speeds to allow successful engagement on existing and widely available internet networks. Successful implementations of XR require headsets to process large amounts of visual, spatial, and audio data in near real time. Any latency (i.e., delay) between an end user’s actions in XR and the system’s response can be uncomfortable, or even harmful, to end users.
Most existing headsets perform their computations in the headset with large graphics cards or via computers tethered to the headset, resulting in bulky devices with large batteries or a limiting connection to a high-powered processing unit. XR compute could be offloaded to the network, but given current mobile network latency (e.g., 3G/4G LTE), devices must reduce their frames per second (FPS). This can severely reduce the quality of the experience and even cause end users to suffer motion sickness.
By the same token, network speed limits the bandwidth for streaming visual, audio, and other data types. The net effect is that superimposed augmented- and mixed-reality content has much lower visual quality than the observed environment. With one-millisecond end-to-end latency and 20 Gbps speeds available in the next five years, 5G NR overcomes these two critical barriers. Ericsson Research’s Consumer and IndustryLab forecasts that national deployment of 5G NR will lead consumers and industries to rapidly adopt the technology over the next five years.
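To see why network latency, rather than raw bandwidth alone, is the deciding factor, consider a rough motion-to-photon budget. The sketch below is purely illustrative: the ~20 ms comfort threshold and the ~11 ms of local processing (roughly one frame at 90 FPS) are commonly cited ballpark assumptions, not figures from this article, and the round-trip times are simplified stand-ins for 4G and 5G.

```python
# Illustrative motion-to-photon budget for offloading XR rendering to the network.
# All figures below are assumptions for the sake of the arithmetic, not measurements.

MOTION_TO_PHOTON_BUDGET_MS = 20.0  # often-cited comfort threshold for XR
LOCAL_PROCESSING_MS = 11.0         # sensor read + render + display, ~one frame at 90 FPS

def remaining_budget(network_rtt_ms: float) -> float:
    """Milliseconds of slack left after local processing and one network round trip."""
    return MOTION_TO_PHOTON_BUDGET_MS - LOCAL_PROCESSING_MS - network_rtt_ms

for label, rtt_ms in [("4G LTE (~50 ms RTT)", 50.0), ("5G NR (~1 ms RTT)", 1.0)]:
    slack = remaining_budget(rtt_ms)
    verdict = "fits the budget" if slack >= 0 else "blows the budget"
    print(f"{label}: {slack:+.1f} ms slack -> {verdict}")
```

Under these assumptions, a ~50 ms round trip overshoots the comfort budget by tens of milliseconds, which is why offloaded rendering over 4G forces devices to drop frame rates, while a ~1 ms round trip leaves headroom.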
Our next episode will discuss the Internet of Senses (IoS) and how it can be pivotal in creating a parallel world.
(To be continued)