Metacenter Announces Exciting New Immersive Event

XR Today’s David Dungay hosts David Adelson, CEO of Innovate Orlando, and Nathan Pettyjohn, President of VRARA, to discuss the new combined event involving Immerse Global Summit and Synapse.

In this conversation, the panellists discuss the following:

  • The nature of the relationship and the combined event.

  • What attendees can expect from the exhibitors and speakers.

  • How the latest Generative AI and Apple Vision Pro trends will shape the agenda.

  • Why businesses should attend.


3M helped make virtual reality headsets smaller. Next step? More consumer demand

This post originally appeared in the Star Tribune; written by Brooks Johnson.

David Ylitalo imagines one day opening this newspaper, scanning for a story and having the text pulled right up to his eyes for easy reading. There is no paper, however, just a virtual reality app that mimics the real thing.

"We're right on the precipice of this becoming the next way people consume visual information from a computer," Ylitalo said. "Content that supports all these different uses — that's what's going to make it the next big thing."

Ylitalo is vice president of R&D for 3M's Display Materials Division, which has been supporting VR headset makers for a decade.

3M's "pancake optics" help shrink the size of headsets while improving display quality, both key product improvements for VR's quest to get more consumers to buy into the tech.

As Minnesota-based 3M prepares to spin off its health care business and reposition the remaining company for growth, the industrial giant is embedding its materials and technology in a number of next-big-things: electric vehicles, industrial automation, climate tech and virtual and augmented reality.

Sales have slowed for traditional consumer electronics like phones, TVs and computers — a core business segment for 3M that typically generates more than $3 billion in yearly sales. Electronics revenue is down 23% for the first half of the year amid weak consumer demand, especially in China.

Meanwhile, numerous market reports predict a multibillion-dollar spike in VR hardware sales over the coming years.

"Much like our customers, we're waiting for this to really take off, and we're already working on the next generation and the next-next generation of this technology," Ylitalo said.

A Citi report last year said that by 2030 there could be trillions of dollars spent on and in the metaverse, which the bank defines broadly as a highly immersive internet across a wide variety of devices.

"We believe that the metaverse will eventually help us find new enhanced ways to do all of our current activities, including commerce, entertainment and media, education and training, manufacturing and enterprise in general," the report said.

The promise of the metaverse has been touted for years, drawing more attention during the pandemic as workplaces and communities explored new ways to interact online. Lately, though, it's faced setbacks from tech company layoffs and resources shifting to artificial intelligence.

"It goes through its own hype cycles, like a lot of industries do," said Nick Roseth, Minneapolis chapter president of the VR/AR Association trade group. "The two biggest issues are: There aren't enough devices on the market, and content is still expensive."

The announcement of Apple's Vision Pro this summer was seen as a breakthrough moment — but at $3,500 it will be used mostly by developers to continue pushing the boundaries of what the tech can be used for, Roseth said.

He expects it will be another 18 to 24 months before real progress is made on affordability and accessibility for consumers.

"I have to remind myself that 90% of the population doesn't realize this technology exists," Roseth said. "It's a slow burn."

It took five years for 3M to find ways to improve VR headsets after being approached by companies at the Consumer Electronics Show in 2013.

"They simply asked us if we could make their headsets smaller," Susan Kent, R&D lab director at 3M, said earlier this year. "We shortly realized that we could and make the image quality ... better and look less cartoony."

After 3M combined pancake lenses with its patented reflective polarizer technology, headsets could bring screens closer to a user's face, making them smaller while also enabling crisp text.

3M has also developed optical films for heads-up displays — like digital data displayed on a car windshield. That type of augmented reality, as opposed to a fully immersive virtual headset, has already seen wide adoption.

"We're already living with augmented reality on our phones," Roseth said, pointing to Pokemon Go, Ikea Place and fashion try-on apps. "That blends information with the real world."

Headsets, heads-up displays and more were on display last month at the 3M Open in Blaine. As the golf tournament's sponsor, 3M's fan experience tent focused on how its technology is connecting the physical and digital worlds — a hands-on look at all things "phygital."

The golf games — including an augmented-reality putting tool — were especially popular.

"From here, it's about doing this at a large scale," Ylitalo said, "at a volume and cost that allows our customers to put these on not millions of faces but hundreds of millions or billions."

Banuba Increases the Performance of Virtual Backgrounds by 10X by Upgrading Neural Network Architecture

Banuba implemented a series of technical updates to Face AR SDK that drastically increase the maximum frame rate of live video with the background separation effect. Depending on the platform and hardware, this results in up to 10 times higher maximum FPS.


Virtual backgrounds are a must-have feature for any modern video communication software and social media. They help alleviate camera shyness, protect users’ privacy and prevent potentially embarrassing situations like pets walking in during a serious business meeting. After the COVID-19 pandemic caused a massive increase in remote work, the demand for virtual backgrounds skyrocketed, so improving them has been one of Banuba’s top priorities. 


The results were achieved thanks to three main additions:


  • New neural network architecture with improved utilization of the CoreML Neural Engine (on Apple devices with Bionic processors);

  • Algorithm optimization on Windows and Web, allowing the neural network to process every other frame instead of every frame (sketched below);

  • Improved anti-jitter algorithms that demand fewer resources from the device.

This effect is available as part of Banuba Face AR SDK (for live streaming and video communication) and Video Editor SDK (for prerecorded videos).
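
To make the frame-skipping and anti-jitter ideas concrete, here is a minimal illustrative sketch in Python. It is not Banuba's implementation (the SDK's internals are not public), and the dummy model, class name, and parameter values are hypothetical stand-ins: inference runs only on every other frame, and an exponential moving average over consecutive masks damps silhouette jitter.

```python
import numpy as np

def dummy_segmentation_model(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a real person-segmentation network.

    Returns a float mask in [0, 1]; here just a brightness threshold,
    purely so the sketch runs end to end.
    """
    luma = frame.mean(axis=-1)
    return (luma > 96).astype(np.float32)

class VirtualBackground:
    """Alternate-frame inference with temporal mask smoothing."""

    def __init__(self, background: np.ndarray, smoothing: float = 0.6):
        self.background = background.astype(np.float32)  # same HxWx3 shape as frames
        self.smoothing = smoothing   # weight given to the previous mask
        self.prev_mask = None
        self.frame_index = 0

    def process(self, frame: np.ndarray) -> np.ndarray:
        if self.frame_index % 2 == 0 or self.prev_mask is None:
            mask = dummy_segmentation_model(frame)
            if self.prev_mask is not None:
                # Anti-jitter: blend with the previous mask (exponential moving average).
                mask = self.smoothing * self.prev_mask + (1.0 - self.smoothing) * mask
            self.prev_mask = mask
        else:
            mask = self.prev_mask    # skipped frame: reuse the last mask, no inference
        self.frame_index += 1
        m = mask[..., None]          # broadcast the mask over the RGB channels
        return (m * frame + (1.0 - m) * self.background).astype(np.uint8)

# Usage: compositor = VirtualBackground(background_image)
#        composited = compositor.process(camera_frame)
```

Skipping inference on alternate frames roughly halves the per-frame compute, which is where most of the FPS headroom comes from; the smoothing pass is cheap by comparison.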


Other updates include:

  • Facial feature editing – a new functionality that allows changing the size and shape of any part of the face;

  • Optimized hand tracking and gesture recognition;

  • Better lip segmentation. This is especially noticeable at the corners of the mouth and near the philtrum;

  • Acne removal for photos and an option to change the size of the area affected by the effect.

About Banuba

Banuba is an augmented reality company with over 7 years on the market, pioneering face tracking, virtual try-on, and virtual background technologies. Besides online try-on solutions for makeup, headwear, jewelry, glasses, and more, it offers a Face filters SDK and Video Editor SDK – ready-made modules to apply effects and edit videos.


New whitepaper! VR/AR To Address Staffing Challenges of the Energy Sector (Download)

Our Energy Industry Committee has produced this whitepaper to guide energy organizations in leveraging VR/AR solutions to address challenges in staff acquisition, skills training, and talent retention. This document outlines key considerations in identifying use cases, specifications, functionalities, and hardware selection. Additionally, it addresses VR/AR solution deployment and change management.

These papers (Part 1 and Part 2) serve as reference documentation for both end users and VR/AR solution providers. They facilitate productive engagement by establishing a shared language and understanding of requirements.

Virtual and Augmented Reality (VR and AR) have the potential to revolutionize learning and training in the energy sector. They offer immersive experiences that enhance understanding of complex concepts, procedures, and equipment in a safe environment. These technologies are engaging and impactful throughout the employment cycle, from recruiting to reskilling and provide access to virtual training environments worldwide, reducing carbon footprints and promoting sustainability. 

In 2021, the VR/AR Association Energy Committee released the first whitepaper in a series, titled “VR/AR in the Energy Sector,” providing insights on VR and AR utilization in the industry. The goal was to offer insights to the VRARA Energy community, representing stakeholder organizations and technology suppliers, on how VR and AR solutions can be used to overcome critical business challenges facing our industry. 

Table of Contents

Authors & Contributors

1. Introduction

2. Principal Considerations Before Getting Started

3. Setting the Right Learning Objectives

4. Type of Content to Develop

4.1 Planning the Content

4.2 Classification of VR/AR Use Cases for Training

4.3 Types of Simulation Modules

5. Hardware Equipment

5.1 Types of Headset Systems

5.2 Mobile Devices

5.3 Desktop Deployments

5.4 Room-Scale Immersive Systems

5.5 Characteristics of VR/AR Hardware Devices

5.6 Selection of VR/AR Hardware for Application Development

6. Content Production and Distribution

6.1 Internalize Capabilities

6.2 Third-Party Partnerships

7. Data and Scoring

8. Integrating into the Enterprise

8.1 Integration Support

8.2 Content Licensing and Intellectual Property

9. Physical Considerations for a Virtual World

9.1 Training Space

9.2 Audience Preparation

10. Conclusion

Sony will showcase a new product at our IGS during Metacenter Global Week in Orlando, Oct 17-19

In addition to its newest Spatial Reality Displays, Sony will also showcase mocopi at our IGS during Metacenter Global Week in Orlando on Oct 17-19!

Mocopi is a revolutionary phone-based 3D motion capture system for controlling virtual avatars that lets you easily track and record your full-body motion; it’s great for use in the metaverse! Mocopi is fully wireless and only requires a Bluetooth connection to your phone (iOS or Android), so you can use it anywhere. It consists of six small sensors and a dedicated app that together, using Sony’s proprietary technologies, enable full-body motion tracking. Those in the industry know that traditional motion capture systems require pricey studios and trained operators; mocopi instead relies on Sony’s unique algorithm for accurate motion measurement with only small, lightweight sensors and a smartphone.

Sony’s mocopi system makes motion capture and virtual content creation easy

“With the dedicated mocopi app, users can create movies of their avatar in motion with a compatible smartphone, using the data obtained from the sensors attached to their body. In addition to pre-installed avatars, users can import custom avatars. Recorded avatar movies can be exported as mp4 files or motion data from the mobile app,” said Sony.

Mocopi is ultimately designed to record a user’s movements and then mirror them in digital environments — hence the mashup of “motion” and “copy.” There are plenty of use cases for this kind of tech, from allowing animators to rig 3D characters with more realistic motions, to allowing VTubers to replicate their movements in real time across streams and virtual reality platforms like VRChat.

Mocopi provides some major benefits for the niche communities that will be willing to cough up the cash to buy it. While there are some affordable VR headsets like the Meta Quest 2 that can be similarly utilized in VR applications, these won’t provide the finesse of a dedicated motion capture tool, especially when it comes to lower body tracking.

Sony’s Mobile Motion Capture

Motion capture is a technique that digitizes the movements of a real person or object and imports them into a computer. This allows you to reproduce more lifelike, human movements with a computer-generated character in video production. The technique is widely used in movies, animation, and game content within the Sony Group.

Typical motion capture requires studio facilities with many cameras, as well as a tight full-body suit worn by the actor with many markers attached. In contrast, we have realized a new technology that enables motion capture using only small, lightweight sensors. We call this "Mobile Motion Capture." With this technology, you can readily digitize a person's movements while they wear everyday clothes, whether indoors, outdoors, or anywhere else, and apply those movements to a computer-generated character.
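
For developers who want to reuse the exported motion data, here is a small sketch. It assumes the export is a standard BVH file, a common interchange format for full-body motion capture that mocopi is reported to support (check Sony's documentation for the exact export options); the function is illustrative and not part of any Sony SDK.

```python
def parse_bvh_joints(path: str) -> list:
    """Collect (joint name, channel count) pairs from a BVH file's HIERARCHY block."""
    joints = []
    pending = None  # joint name waiting for its CHANNELS line
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] in ("ROOT", "JOINT"):
                pending = tokens[1]
            elif tokens[0] == "CHANNELS" and pending is not None:
                joints.append((pending, int(tokens[1])))
                pending = None
            elif tokens[0] == "MOTION":
                break  # per-frame data follows; the hierarchy is done
    return joints

# Usage: for name, channels in parse_bvh_joints("capture.bvh"):
#            print(name, channels)
```

From the joint hierarchy and the per-frame channel values that follow the MOTION keyword, an engine can drive the corresponding bones of a rigged avatar.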

If you haven't yet, get tickets to Metacenter Global Week.

Also, there is still time to become a sponsor or exhibitor. Apply today to get the best speaking and/or expo placement!

How Siemens utilizes Virtual Reality to enhance the employee experience

With nearly 300 production and manufacturing facilities and more than 385,000 employees, the global technology powerhouse Siemens is Europe’s largest industrial manufacturer and one of the most famous enterprises worldwide. Setting the highest standards throughout its lines of business using innovative concepts and technologies, the company is constantly striving for new ways to improve existing concepts.

EHS & QHSE are crucial for employee & factory security

Ensuring the safety and health of employees is an important and ever-present matter for companies worldwide. Industrial enterprises in the manufacturing business, whose employees work with heavy machinery in factories, especially face the challenge of continuously sensitizing their staff to issues such as plant security and occupational safety. For this reason, EHS training provides staff with valuable insights about their workplace, its underlying processes and the safety measures in place.

Since these measures are so crucial for the safety of their staff, companies are constantly looking for ways to make EHS training more efficient. Immersive technology like Virtual Reality elevates the learning effect, helping employees grasp training content faster and apply it more confidently. Siemens has recognized exactly this and successfully uses Virtual Reality to train its employees.

What is EHS?

EHS (short for Environment, Health and Safety) is a discipline that focuses on implementing practical aspects of environmental protection, risk reduction and safety at work. When it is combined with Quality Management, it is commonly referred to as QHSE. Other common acronyms include OHS, SHE, HSSE, QEHS and QHSSE.

Embracing the “new normal” with Virtual Reality

In collaboration with VRdirect, Siemens created a virtual tour through one of its industrial facilities, digitally depicting the different workplaces. In this first use case, the virtual tour was complemented by additional, important information on EHS concepts. Users can explore the immersive, interactive training environment on their own while actively engaging with the necessary information. By adding new features, the Virtual Reality project was continuously developed into a virtual escape game, presenting users with a timed challenge in which they must apply everything they have learned to escape a fire emergency scenario.

This way, the Virtual Reality experience not only made users engage more actively with the learning materials but also allowed them to immediately test their knowledge in a fun and entertaining way. Using the VRdirect platform, Siemens can easily publish Virtual Reality projects to various devices: VR headsets, mobile devices and PCs. The Escape Game was presented by Siemens at the Health & Safety Week, an internal event focusing on all topics regarding EHS. Staff members from many different departments were able to experience the project via Virtual Reality headsets but could also try out the web version on a PC.

Wide range of applications of Virtual Reality in EHS & QHSE

The unique way Siemens tackles the challenge of training staff for plant security and occupational safety shows the potential of Virtual Reality for EHS training as well as Quality Assurance. A broad spectrum of possibilities opens up when using immersive technology. Virtual Reality allows users to experience virtual surroundings up close and interactively. This makes the technology viable for virtual simulations (for training and onboarding purposes, for example) as well as for simulations of real circumstances. The latter is especially suited for quality control or workplace inspections that can be done remotely.

There are many possible use cases for Virtual Reality in EHS and QHSE, for example:

  • Easy onboardings in lifelike workplace surroundings

  • Workplace instructions

  • EHS & QHSE training sessions with integrated quizzes

  • Visitor Center trainings

  • Remote workplace inspections

  • Quality Assurance

Another huge benefit of Virtual Reality solutions is that they are constantly available via a multitude of devices. Once developed, a Virtual Reality application created for EHS training in a specific scenario can be used by employees anytime from any place without further preparation or supervision, greatly reducing the effort required to properly train staff members. Employees who are responsible for occupational safety can gain insights on different workplaces without the need to physically be there – all that is needed are a series of 360° captures and a platform to create and publish an immersive Virtual Reality experience.

Virtual Reality experiences engage employees for better EHS/QHSE training results

The Escape Game was very well received by the participating Siemens employees at the event. Through the VRdirect platform, solutions like the Escape Game can furthermore be distributed via all common devices to specific users – without restrictions regarding time and place. Especially in times of the COVID-19 pandemic, when personal contact is limited to a minimum and opportunities to conduct offline training are rare, the constant availability of Virtual Reality experiences is a huge benefit. This way, applications can be offered to employees remotely, regardless of which device the specific target groups can or want to use.

Virtual Reality allows for immersive experiences even beyond training scenarios

Thanks to its easy-to-use approach, the VRdirect platform allows the use of Virtual Reality not only for EHS and QHSE, but also for a myriad of departments and use cases, for example Sales & Marketing, Human Resources, training in general as well as on- and offline events. With no special development skills needed, Siemens departments can create complete Virtual Reality applications quickly and easily on their own. The broad feature set of the platform allows for the creation of immersive Virtual Reality projects that are not limited to virtual tours only, but allow for countless fields of application. In only a short amount of time, divisions can create Virtual Reality experiences tailored to their own specific needs with nothing more than a clear idea of a story and a couple of 360° images or videos. With the VRdirect platform, projects can also be constantly updated or developed further in real-time.

The potential of one Virtual Reality platform as a mainstream tool for various departments

Besides the success of the EHS Escape Game, a number of other Siemens departments have already implemented or are currently developing Virtual Reality use cases with a similar approach using the VRdirect platform. Besides a web portal, the Siemens VR app (available in the company’s internal app store soon) serves as a central hub that allows code-protected access to the Virtual Reality projects. Siemens IT APD GLS in Munich has acquired the platform as a potential mainstream tool for internal Virtual Reality projects:

“We needed a solution that allowed a company-wide roll-out, meaning quick and easy implementation and distribution of stable Virtual Reality projects. With VRdirect, multiple businesses are now starting with Virtual Reality, publishing to the one internal Siemens VR app – and they don’t need expert knowledge or a complex technical setup.”

Daniela Peine

IT APD GLS

Alongside VRdirect, Siemens IT APD GLS remains the internal contact for the solution, allowing departments worldwide to realize straightforward Virtual Reality use cases.

Now Enrolling: Extended Reality (XR) Developer Apprenticeship Program at Cañada College

Background

Cañada College, in conjunction with the Bay Area Community College Consortium, California Student Aid Commission, and the State of California Division of Apprenticeship Standards (DAS), is seeking employers for the Extended Reality (XR) Developer Apprenticeship Program (2023-2024) cohort. Funded by a California Apprenticeship Initiative (CAI) grant, the Cañada College XR Apprenticeship Program trains, facilitates placement of, and supports eligible candidates in entry-level positions with partner employers. Some occupations in XR studios include Junior Developer, Game Designer, and Production Manager.

 

Eligible apprentices from the Developer Apprenticeship program will have completed either:

  • (a) An approved DAS pre-apprenticeship program, or

  • (b) A certificate, associate degree and/or bachelor's degree in a related field.

In addition, applicants for art-related apprenticeships will have passed a portfolio assessment conducted by developer apprenticeship faculty consultants.

 

Employer Role

Once placed in a position with an employer, the apprentice works to fulfill their On-the-Job Training (OJT) requirements and receives Related Supplemental Instruction (RSI) through Cañada College. Successful completion of the program is expected to take six to eighteen months, depending on studio production cycles and the individual schedules of apprentices. While further employment with a partner employer is not guaranteed, a graduating apprentice is certified by the State of California in their chosen occupation and well positioned in the industry, with hours of real-world experience and technical training by industry experts.

 

Funding and Apprentice Support

Cañada College’s Learning-Aligned Employment Program (LAEP) provides funds to offer eligible students opportunities to earn money while gaining career-related experience in their fields of study. For learning-aligned employment positions with for-profit employers, the program can provide up to 50 percent of the student’s compensation. In addition, Cañada College will provide mentors and additional apprentice support free of charge to employers.

 

For more information, visit:

The Fate of Apple's Vision Pro | Part I

Today we’re featuring a guest post from Evan Helda, the Principal Specialist for Spatial Computing at AWS, where he does business development and strategy for all things immersive tech: real-time 3D, AR, and VR. 

Come see Amazon AWS at our IGS at Metacenter Global Week!

Evan has been at the forefront of the immersive technology industry for the last 7 years. His experience spans numerous layers of the AR/VR/3D tech stack: from AR headsets and apps (the OG Meta), to simulation and game engines at Improbable, to cloud and edge computing/5G (AWS).

Evan also writes a newsletter called Medium Energy, where he explores the impact of exponential technology on the human experience. 

We recently came across Evan's writing and thought you might enjoy his perspective on the Apple Vision Pro. If you do like this piece, we encourage you to check out more of his content over at MediumEnergy.io!


####

Today was the big day.

The fateful day our tired and battle-worn industry has waited for, for a long, long time. So many troughs of disillusionment, so many clunky demos, so many shattered startup dreams...

We all sat with bated breath, leaning forward with anticipation.

The backdrop was out of a movie: dozens of industry experts, leaders, investors, and entrepreneurs, sprawled across rows of couches on the beach-side deck of a Malibu mansion.

To our right, waves crashed rhythmically, bringing sea foam right up to our feet. To our left, a sprawling spread of breakfast delicacies and of course, champagne. Copious amounts of champagne. The extent to which it would be popped & consumed? TBD... Directly ahead was a massive flat screen TV unfolding what we all hoped would be our industry's 'big bang'.

And then, the moment finally arrived. Apple CEO Tim Cook reappeared and said those historic words: "But wait... there's just one... more... thing".

Our small crowd erupted with hoots, hollers, and applause. My skin erupted with goose bumps.


As the Apple Vision Pro faded onto the screen, it felt like a dream. And for a split second I did dream, flashing back to another fateful day.... five years prior.

The Office of the Future (Spring 2018)

Today was the big day.

The fateful day our augmented reality startup, Meta (the original Meta…), would finally fulfill the promise our CEO had made to the world: to throw away our computer monitors and replace them with a more natural human-computer interface, an AR headset that would blend the physical world with the digital.

Meta 2 AR Headset

We called it 'spatial computing'.

Our CEO made this promise on the grand stage that is TED (worth the 10 mins to watch here). And in about a month, Bloomberg was set to visit our office. Their tech reporters wanted to see this bold exclamation for themselves and write an article on the outcome.

CEO, Meron Gribetz, on the TED stage

We were being held accountable. The boats were burned. There was nowhere to hide.

Today was the dress rehearsal for that Bloomberg visit. All 100 employees would finally taste the fruits of our labor; three years of blood, sweat, and tears towards building a fully vertical AR stack: our own display system, our own sensor array for positional tracking, our own SLAM algorithms, our own hand tracking algorithms, our own SDK, and most importantly... our own 'spatial operating system'.

This was no ordinary OS. It was meant to be the 'OS of the Mind': one that would conform to how our brains have naturally evolved, guided by what we called 'the principles of spatial design'.

(If you watched the Vision Pro announcement... sound familiar? It's no coincidence. Apple seriously considered buying Meta back in 2017. Our 'spatial design principles' and vision for a SpatialOS were a big reason why. Oh, what could have been…)

We would place virtual monitors all around us at limitless scale. We would free 3D models from the confines of 2D screens, visualizing and interacting with them as they were always intended: spatially.

Gone were the days of the mouse & keyboard. Thanks to computer vision, we would use our hands to more naturally & directly interact with the digital world, just as we do the physical.

This same computer vision tech would turn the world into our desktop background, understanding the environment and anchoring actualized figments of imagination all around us.

Meta, the OG Meta... was going to build the true iteration of Steve Jobs' 'bicycle of the mind': a computer that grandma or a child could pick up and intuitively know how to use, with zero learning curve.

Oh, how beautiful the vision was...

But oh... how naive we were.

The Revenge of the Meta 2

The office that day of the ‘Bloomberg rehearsal’ was buzzing with anticipation.

For the first time, we each received our own headset. The packaging was a work of art; a beautiful piece of engineering and design, accompanied by a thoughtful developer guide and a document outlining our 'spatial design principles'.

The first step: plugging the tethered device into a computer and then into the wall for power (yes, the sequencing mattered...).

It was a collective stumble right out of the gates. Our computers failed to recognize the device. For the next hour, we twiddled our thumbs as the engineers scrambled to fix a bug and re-distribute the SDK (software development kit).

Hot start.

Once the 'Spatial OS' finally launched, the user was tasked with calibrating the sensors and mapping the world.

A graphical UI instructed you to look up, look down, look left, look right.

The next 5-10 minutes were a comical display of 100+ people in a state of manic indecision; stuck between vigorous yes's and no's; shaking our heads this way and that; waiting, hoping, yearning for the cameras to lock on to our physical surroundings.

Some devices registered the real world within a few minutes. Other poor souls were left doing neck exercises for the next 5-10 minutes.

If you were lucky enough to create your environment map, then the OS would finally launch. The OS interface looked like a holographic bookshelf, each shelf holding floating orbs that represented a variety of spatial apps.

But upon launch, exactly where this holographic shelf appeared in space was anyone's guess.

For some, it was down on the floor. For others, it was off in the distant horizon or behind them. The next 10 minutes we collectively embarked on a holographic treasure hunt at our desks; searching up, down, and all around for our 'app launcher'.

My holographic shelf was above me, anchored to the ceiling.

Now the primary way to interact with these holograms was with your hands. You had to reach out and grab them. But doing so was quite the art... it required your hand being in the perfect spot, at just the right proximity to the hologram. When you found that magic zone, a circle would appear.

Then, and only then, could you close your hand and 'grab' the hologram. The camera on the headset needed to see a very distinct gesture: a wide-open hand and then a distinctively closed fist. When the cameras saw this movement, the circle UI would become a dot, confirming the hologram was secured.
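
As an aside for the technically curious, the interaction just described implies a small state machine. The sketch below is entirely illustrative (a toy reconstruction, not Meta's actual code, and the threshold value is invented): the circle corresponds to a HOVER state entered by proximity with an open hand, and the dot to a GRABBED state entered by closing the fist.

```python
from dataclasses import dataclass

GRAB_RADIUS = 0.15  # meters; invented threshold for the "magic zone"

@dataclass
class HandPose:
    distance_to_hologram: float  # palm-to-hologram distance in meters
    is_fist: bool                # True if the cameras see a closed fist

class GrabInteraction:
    """IDLE -> HOVER (circle UI) -> GRABBED (dot UI) and back."""

    def __init__(self):
        self.state = "IDLE"

    def update(self, pose: HandPose) -> str:
        if self.state == "IDLE":
            # The circle appears only for an open hand inside the magic zone.
            if pose.distance_to_hologram < GRAB_RADIUS and not pose.is_fist:
                self.state = "HOVER"
        elif self.state == "HOVER":
            if pose.distance_to_hologram >= GRAB_RADIUS:
                self.state = "IDLE"      # hand drifted out of the zone
            elif pose.is_fist:
                self.state = "GRABBED"   # circle becomes a dot; hologram secured
        elif self.state == "GRABBED":
            if not pose.is_fist:
                self.state = "IDLE"      # opening the hand releases the hologram
        return self.state
```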

This led to yet another comical sight; an entire office of people, waving their hands in the air, trying to materialize that circle. Everyone was flailing about, groping the air and repeatedly trying to turn that circle into a dot. We became a horde of perverts molesting invisible objects of desire.

I stood up and reached longingly into the air for my holographic shelf, only to be immediately yanked back into my chair by the tether.

Screw it. I resorted to using the mouse we so vehemently vowed to replace. It was a fallback form of input, controlling a 'spatial cursor' that allowed me to click on the 3D shelf and pull it closer.

Finally, I could start pulling out little apps & experiences, placing them all around me at my desk. For a split second I was living in the future.

There were virtual monitors showcasing the future of productivity, with PowerPoint, web browsing, and spreadsheets. But I could barely read the text. It was blurry, and the eye strain was very real. There was a beating heart for the future of education. There was a virtual jukebox to showcase our (attempts at) spatial audio. There was a 3D model of a Tesla, hinting at the future of immersive design or e-commerce.

And my personal favorite... a box with an image of a butterfly. When you touched it, the box exploded into a cloud of 3D butterflies, fluttering vigorously this way and that. When you held out your hand, they would come land and gently rest.

For many, the mind would play tricks. You could feel the tickle of the butterfly's little legs on your hand...

This… this to me is the magic of spatial computing; mind merging with machine, tapping into the mystery of how the brain has naturally evolved to interact with and understand the real world.

Imagine the impact of this for communication, education, collaboration, and creation. We were passionately driven by this potential. We were mission obsessed, and mission bound.

But that moment in the future was short lived. After a few minutes, nausea set in from the motion-to-photon latency (aka the time between head movement and the display’s output/reaction). Then, the virtual shelf suddenly started to jitter and float away, carrying with it our collective hopes & dreams.

Alas, my headset lost world tracking entirely and holographic chaos ensued.

Before I knew it, holograms were flying all over the place. The virtual heart shot past my head, the virtual monitor turned upside down and shot through my desk, and the 3D shelf/OS UI zoomed right back up to its original home; the ceiling.

Next to me sat my sales colleague and dear friend, Connor McGill. We looked at each other, let out massive sighs, and just laughed. What else could we do?

We had spent the last 18 months traveling the world, from LA to NYC, Shanghai to London, Amsterdam to Rome, giving thousands of demos and convincing the world’s largest companies that spatial computing was the future, and that it was imminent with the Meta 2: Nike and Adidas, Lockheed Martin and Boeing, Exxon and Shell, Ford and Tesla, Disney and Universal, Dell and Lenovo. The list goes on.

This was going to make for some awkward conversations.

Welp... at least we had good packaging.

Dell Technologies President, Jeff Clarke, celebrating the deal to become a Meta 2 reseller.

Meta 2 @ The Pantheon in Rome

Kate Middleton & Prince William

Even Bert wanted in on the action

When the Apple Vision Pro presentation ended, I was in awe. They seemed to have absolutely nailed it.

The display quality— near perfection; making it seem like you’re viewing the world through a pane of glass. The pixel density— mind blowing; making text perfectly legible, at last. The innovation with the R1 chip— a sci-fi feat; processing data from 12 cameras to produce zero perceived latency and making nausea a thing of the past. The world tracking— immediate and flawless, anchoring holograms perfectly and elevating them to first-class citizens in the real world. The input/interaction— pure magic, creating the illusion of mind control with the perfect tandem of eye and hand tracking.

The list goes on... they seemed to think of every little detail, and thoughtfully addressed the majority of paper cuts that have plagued AR/VR for decades.

When I left the viewing party that day, I half-expected there to be a ‘spatial computing parade’ in the streets.

The tech we’ve all been waiting for was finally here! A cause worth celebrating, for sure. Heck, I was ready to take the day off and paint the town red! (And that is exactly what a few of us did…)

Spatial Squad

But when I integrated back into the real world the next day, my enthusiasm wasn’t quite the norm.

The first friend I talked to about the announcement said “it made me want to put my feet in the grass and hide in the woods”.

Okay, considering some of the concept videos, I get it… (we’ll address those later).

And then there were the mainstream media pundits, spewing all kinds of nonsense: ‘Apple has lost its way’, ‘this product will never sell’, ‘no one needs this’, etc. etc.

As the weeks went by, the criticism kept pouring in…

  • It's too expensive!

  • What is this good for?

  • People won’t wear something on their face

  • It's too isolating

  • The digital eyes are creepy

  • Only a two-hour battery life?!

When I first heard these critiques, my blood boiled. I couldn’t help but think… What the hell is wrong with these people? How can they not see what I see? Do they not get the magnitude of these technical feats and this product’s potential impact?

With my emotions at the helm, I realized I needed to take a step back, think objectively, and question my beliefs… turns out inherent career bias is a helluva drug.

Why was I so triggered? Am I the crazy one here? Or is everyone else missing it, and my bullishness is indeed warranted?

Over the last month, I’ve done the inner work to remove my XR fanboy cap and think more deeply about Apple’s strategy, along with the importance of this moment.

With my bias officially on the shelf, I remain convinced: the pundits are wildly wrong, consumers don’t know what they don’t know, and this moment truly does matter.

And turns out, I’m not alone… while I’ve not tried the device myself, I have listened & talked to those who have. Upon hearing their feedback, I feel (a bit) less crazy. My initial perceptions & instincts seem to hold true.

The best part? Their favorite moment of the Vision Pro demo is a holographic butterfly that lands on your hand. They too could feel the tickle of its legs…

(A coincidence? I think not… Apple hired many of Meta’s top engineers after the company failed, including the ones who built that original ‘office rehearsal’ demo)

Now, before exploring the impact of this moment, why Apple’s strategy is the right one, and why you should care… come with me back in time once more, to a moment chock-full of lessons & predictions of what’s to come.

General Magic

They say history doesn't repeat, but it certainly rhymes.

The Meta journey is a reflection of a similar story, with a similar outcome for an eerily similar company: General Magic.

If you like documentaries, this doc about General Magic is a must watch, even if tech & business is not your thing. It's just a compelling story; of ambition and courage; of how to blaze new trails; and of how to cope with heartbreak and shattered dreams.

If you're unfamiliar: General Magic attempted to create the iPhone back in 1989/1990. The vision and the use cases were exactly the same: a personal computer in your pocket acting as a remote control for your life.

General Magic Design

The team was perfect, the vision was prophetic, and much of the technology existed. But the timing was wrong and the tech couldn’t yet merge to make the whole greater than the sum of its parts.

While the individual pieces were there, they weren’t mature enough to yield a compelling user experience. A lot of technology still needed to be invented, and there were numerous UI/UX rough edges to be smoothed over. Very similar to the Meta 2 headset.

There also wasn’t a fertile ecosystem. The internet wasn't ubiquitous, telco connectivity wasn't mature, and 'mobile developers' didn't really exist. There were very few builders and businesses with properly primed imaginations or business models.

Perhaps most important... consumer behavior wasn't properly evolved. They didn't see the point. The use cases didn't quite click and people weren't quite sure what this thing was good for. The whole thing just seemed… silly.

Sadly, the General Magic dream came to an end in 2002.

Fast forward to June 29th, 2007 (five years after General Magic shut down). Apple launches the iPhone and changes the world.

Apple was watching, studying, and learning from General Magic all along. They even hired some of their best/most talented employees (e.g. Tony Fadell). They had blueprints and prototypes all along the way. But it took 17 years to get the technology just right; polishing, testing, debating.

And boy, did they nail it.

In hindsight, it's easy to say the iPhone's future impact was obvious when it launched.

But was it?

Sure, it launched with some killer apps: calls, email/messaging, web browsing, and music. But these things weren't entirely new. They were things we were already doing, just done better on multiple vectors. Very few people, if any, saw the App Store coming and all the innovation that would follow…

Fast forward to 2023 (also five years after Meta closed its doors), and Apple uses the exact same playbook.

For 10-15 years… They were watching, learning, iterating, polishing, debating, and polishing some more.

Apple then launches the Vision Pro and changes… well, that depends on who you ask. To most, its impact is far from obvious.

And so, the stage is set for perhaps a similar story, albeit with obvious differences. This is a bit more dramatic than going from a flip phone to a touch screen.

We’re now breaking through the screen and entering the machine. Of course, the tradeoffs, roll out plan, and adoption cycle must be radically different.

In Part II, we’ll explore those differences and analyze Apple’s strategy, diving deeper into the tradeoffs, why the skeptics are wrong, and how this adoption curve might unfold over the next 3-5 years.

Until then…

Thanks for taking the time to read Evan's essay. Let us know what you think about this perspective, and if you want to check out some more of Evan’s writing, here are some of our personal favorites:

- How to defend the metaverse

- Finding solace in the age of AI

- The Ultimate Promise of the Metaverse

Matt Thompson Appointed President of St. Louis VRARA Chapter

We are excited to welcome Matt Thompson, the new president of the St. Louis Chapter of the VRARA!

Matt is an experienced product leader in the worlds of SaaS and IP with experience working with companies like Disney, Warner, Universal, Sony, Meta, and YouTube. He is also a rabid gaming and VR/AR enthusiast.

It's incredibly exciting to lead a new Chapter for the VRARA here in St. Louis! I'm looking forward to bringing outside knowledge into our area and taking local knowledge to other areas in the industry. 

-Matt Thompson

Now Hiring: The Department of Interactive Media, University of Miami

VRARA Member The University of Miami is hiring!

The Department of Interactive Media at the University of Miami (https://lnkd.in/gRf3CQ7c) is currently seeking applications for a full-time, nine-month, tenure-track or tenure-eligible faculty position at the Assistant Professor level for faculty conducting research in immersive media. We are dedicated to fostering a diverse and inclusive academic community and enthusiastically encourage applications from individuals who can contribute to this mission. The candidate must possess a Ph.D. in Communication, Human-Computer Interaction, Media and Technology, Media and Society, Media Arts, Emerging Media, Computer Science, or a related discipline by the beginning of the appointment, August 15, 2024.

Learn more and apply at https://lnkd.in/gt5jNdWA.

The True Impact of XR on Education: Beyond the Hype

Written by XPANCEO founder Roman Axelrod.

There is tremendous excitement about how AR/VR technologies can transform education. Just as in the movies, one may imagine children taking virtual trips to ancient Rome, studying interactive 3D models of the Solar System, and even observing molecules or frogs coming to life before their eyes. Research, such as this study published in Nature, shows that AR and VR effectively enhance knowledge acquisition, retention, and skills development. However, merely achieving these benefits is insufficient to revolutionize education. To make a significant difference, new technologies must not only change how individuals interact with materials but also transform the entire sector of the economy.

Still, XR does have the potential to reshape the education system. In this article, we will explore where and how this shift will happen across different fields of education.

Why making education “cooler” is not enough

As in any other industry, educators are often preoccupied with concerns about funding, safety, equality, and curriculum. Therefore, for any new technology to gain widespread acceptance, it must bring about a fundamental transformation rather than merely making existing processes more user-friendly.

For instance, let's consider school education. As various studies have highlighted, AR and VR could potentially enhance students' learning experiences; however, the question arises whether the complexities and costs of implementing such technology in a relatively rigid, government-controlled system are justified. Even personal computers, despite their long presence, have struggled to fully integrate into school education. According to the Cambridge International survey, 48% of students use computers for their education, yet 90% still use pen and paper.

In the case of higher education, we can expect more viable opportunities. For example, giving students in medical or engineering schools better tools through XR applications can significantly improve the effectiveness of their courses. In this realm, funding sources are also clearer, with employers and key vendors showing interest in investing. However, even these advancements fall short. The challenge lies in the fact that such changes only represent a small portion of the overall educational process. While such training aids in acquiring practical skills faster, it cannot replace the complete educational journey.

Where the real change can take place

The most promising ground for XR to provide a complete solution is vocational training, where a global shift in approach is required. Vocational training is education that prepares people for a specific career, primarily in the blue-collar sector. Studies highlight that both the demand for low-skilled workers and their salaries are decreasing. In contrast, there is rising demand for mid-skilled jobs that require specialized training and qualifications to operate and maintain various equipment. Mid-skilled roles encompass varied tasks: operating heavy construction equipment and machines, and working with cash registers, inventory scanners, and robotized cleaners. As technology advances, devices become more prevalent, with embedded computers becoming a standard feature. Consequently, re-training becomes essential due to evolving equipment and changing job requirements.

What will the changes be

With the known benefits of AR and VR applications in education, such as increased engagement, improved focus, and better knowledge retention, we can see their significant impact on vocational training. Going beyond merely accelerating the learning process, XR's potential for fundamental changes is promising:

  1. Redefining training and work. AR applications can truly blur the lines between training, apprenticeship, and on-the-job performance. With them, students or workers can be smoothly transferred from simulations to augmented or real-life work once the system determines they are ready to move on to the next learning stage. This is especially the case for smart contact lenses, which would make education maximally realistic without any additional devices.

  2. Enhancing workplace safety. XR can make professional certifications fully digital, so people can learn to work with complex machines without any risk. At the same time, XR enables instant certification that cannot be faked, giving employers effective means to ensure that a person is ready to work with the equipment.

  3. Increasing workforce mobility. With quick and cost-effective training, employers can address workforce shortages and adapt to changing market demands. This fosters investment and growth in manufacturing and other industries, benefiting both employers and employees alike.

Vocational training is discussed less often, but the opportunities there are much brighter than in more hyped areas such as school or university education. Moreover, some companies already offer hardware simulators for heavy equipment, for example Caterpillar machines, or VR training courses. More will follow as XR equipment becomes more affordable, but only with truly wearable gadgets like smart contact lenses will the full potential of these tools be unleashed.

 If you want to learn more about the smart contact lens developed by XPANCEO, contact dragon@xpanceo.com.



Call for Speakers: VRARA New York Chapter - Showcase Your Work!

Dear VRARA New York Chapter Members,

We are excited about our upcoming events being organized right now. The New York Chapter of the VRARA is now looking for engaging and passionate speakers who are willing to share their work, insights, and experiences with fellow industry professionals and enthusiasts at our in-person networking socials and demo days.

We are seeking speakers who:

  • Are current members of the VRARA.

  • Have cutting-edge knowledge and expertise in VR/AR.

  • Have developed groundbreaking projects, tools, or methodologies in the field.

  • Can deliver an engaging and informative presentation.

Perks of participating:

  • Showcase your work.

  • Network with fellow industry experts, business leaders, and VR/AR enthusiasts.

  • Receive recognition and build your professional reputation.

  • Obtain valuable feedback and spark new collaborations.


How to Apply:

To submit your interest or proposal, please contact the NY Chapter President, Cindy Mallory, at cindy@thevrara.com.

Spaces are limited, and we encourage you to apply soon to secure your opportunity to contribute to upcoming VRARA NY Chapter events, including a Manhattan event in early September. Should you have any questions or need additional information, please don't hesitate to contact Cindy via email or LinkedIn.

Declan Johnson Appointed as Co-President of VRARA Utah Chapter.

We are excited to welcome Declan Johnson, who is joining me as co-president of the Utah Chapter of the VRARA.

Declan is a graduate of Brigham Young University. It was there that he developed a passion for XR and devoted time to the Mixed Reality Lab for two years, developing projects for students and departments across campus. Declan took that excitement for the field and started his own consulting company, prototyping XR experiences for clients in Unity.

Declan then joined Continuum XR, a leading team of expert XR developers and 3D artists, and began working exclusively on 8th Wall web AR projects. He has created over 300 web AR experiences, most with a retail focus for nationally esteemed brands, and had a helping hand in growing the company to where it is today. This has led to his current role as Continuum’s Business Development Representative.

"I am absolutely thrilled to take on the role of Co-President for the Utah Chapter of the VRAR Association. I am eager to contribute my passion and expertise in organizing exciting events, fostering growth within the chapter, and spreading the association's influence throughout the vibrant tech landscape of Silicon Slopes."

—Declan Johnson

Dan McConnell Appointed as President of VRARA DC Chapter.


We are thrilled to have Dan McConnell serve as Chapter President, VR/AR Association Washington DC!

Dan is an action-oriented, visionary technologist with over 20 years of experience as a leader in both government and industry. Currently, Dan is Chief Technologist at Booz Allen Hamilton and leader of the spatial computing capability within the firm’s Bright Labs emerging technology incubator in the office of the CTO. Dan also serves as Co-Chair of the VR/AR Association Industrial Metaverse and Digital Twin Committee. Previously Dan was a strategy consultant at The Cohen Group, and he also spent nearly a decade in uniform in the US Army. Dan holds a bachelor’s degree from the United States Military Academy at West Point and graduate degrees in public policy and technology from Harvard Kennedy School of Government and the University of Virginia McIntire School of Commerce.


Virtualware launches VIROO 2.4, the new version of its virtual reality as a service (VRaaS) platform

Spanish publicly traded company Virtualware (EPA:MLVIR), one of the leaders in virtual reality, today announced the release of version 2.4 of its VRaaS platform, VIROO, which incorporates, among other capabilities, Mixed Reality (MR) and VR CAVE integration as standout novelties.

Version 2.4 introduces a groundbreaking new feature by integrating virtual reality (VR) and mixed reality (MR) technologies into its sessions. This combination provides a seamless and collaborative experience, allowing multiple users to connect from different locations and use various devices, establishing genuine platform interoperability.

Among the new features, the most significant are:

• Mixed Reality capabilities: VIROO can blend VR and MR technologies within its sessions, offering true cross-platform interoperability.
• VR CAVE integration: VIROO is now compatible with multi-projection systems such as CAVEs.
• VIROO Studio for Unity: VIROOʼs low-code VR creation tool for Unity becomes VIROO Studio.
• VIROO Room offline configuration: A new feature that allows immersive multiuser content to be deployed in a VIROO Room without an internet connection.
• VIROO content updates: New scenes have been created and updated for any VIROO 2.4 user to make use of.
• Latest headset compatibility: VIROO is fully compatible with the latest enterprise VR headsets.
• Identity management: VIROO adds identity management to enhance security throughout the platform.
• Data visualization and UI/UX improvements: More content information and better usability.

“VIROO 2.4 is cutting-edge virtual reality technology that offers businesses a significant competitive edge. With its enhanced graphics, seamless interactions, improved performance, and expanded capabilities, VIROO 2.4 empowers businesses to deliver innovative solutions that exceed customer expectations. This not only opens new revenue possibilities but also attracts customers who are seeking immersive experiences,” said Sergio Barrera, CTO of Virtualware.

Virtualware’s flagship product VIROO, the world’s pioneering VR as a Service (VRaaS) platform, makes Virtual Reality accessible to companies and institutions of all sizes and sectors. It is an all-in-one digital solution that enables the development and remote deployment of multi-user Virtual Reality applications.

Headquartered in Bilbao, Spain, Virtualware is a global pioneer in developing virtual reality solutions for major industrial, educational, and healthcare conglomerates. Since its founding in 2004, the company has garnered widespread recognition for its accomplishments. In 2021, Virtualware was acknowledged as the world’s most Innovative VR Company.

With a diverse client base that includes GE Hitachi Nuclear Energy, Ontario Power Generation, Petronas, Iberdrola, Alstom, Guardian Glass, Gestamp, Danone, Johnson & Johnson, Biogen, Bayer, ADIF, the Spanish Ministry of Defense, Invest WindsorEssex, McMaster University, University of El Salvador and EAN University, and a network of partners worldwide, Virtualware is poised for further global expansion.

VRARA Member Warp VR helps Meliora VR redefine health & safety training with blended learning

This Customer Success Story was originally published by VRARA member Warp VR.

Customer intro

In 2017, a number of companies, educational institutions and healthcare organizations in the Netherlands joined forces to experiment with innovative training methods for critical situations in healthcare. The group has worked together ever since to develop applications using VR and 360° video and to evaluate them in practice.

In 2018, Meliora VR (part of Saasen Groep) emerged from this initiative, providing an innovation platform on which the cooperating organizations develop digital products for practicing and testing competencies related to safety, healthcare, and more.

Challenge

Many companies, especially in healthcare and other knowledge-intensive industries, face staff shortages and limited time for training. Certain environments needed for training, such as operating rooms, aren’t easily accessible, and critical job situations that are dangerous, impossible, counterproductive or too expensive to replicate in real life are hard to train for.

The pandemic amplified this by making it difficult or impossible to attend courses in person, so a new way to reach students was needed.

Solution

Virtual reality is becoming increasingly accessible. Better, cheaper VR headsets and a growing range of content providers are making it more attractive as a tool for learning and behavior change. Immersion in an environment is what distinguishes VR from other methods and digital tools.

The learning products Meliora VR develops align with the principles of Miller's Pyramid and consist of a mix of VR, e-learning, animation, and systems for workplace input (such as digital safety tracking systems). The VR products include virtual tours, ‘Real Learning’ (a combination of animation and real objects like CPR dummies), and ‘Right Choice Learning’ (realistic simulations using 360° video).

Meliora VR uses Warp VR to power its Right Choice Learning offerings because of the platform’s ease of use and scalability. Pico VR headsets are managed automatically, so trainees don’t have to wrestle with technology; they only need to click on their own name to start a scenario. Users who aren’t comfortable wearing a VR headset can play on a tablet instead. After each session, students keep access to the scenario on their smartphones for one month of additional practice.

Competencies are trained in pre-built scenarios: again and again, a trainee faces new choices and directly experiences the consequences of each choice they make. Ten scenarios have already been developed for emergency response, and Meliora VR’s system is set up so that customers can easily develop custom scenarios covering other competencies.

Meliora VR has its own production team and actively cooperates with educational institutions like universities of applied sciences Fontys and De Kempel, MBO St. Lucas and Ter Aa College. Students from these organizations help with creating immersive experiences, providing feedback on learning effectiveness, and researching new use cases.

Customers mainly come from healthcare (e.g. Anna Zorggroep) and education (e.g. Fontys University of Applied Sciences). Most use cases focus on health & safety (e.g. responding to smoke and fire, providing first aid, and working with dangerous substances) and soft skills (e.g. anti-aggression training and decision-making).

Results

Users are enthusiastic about the realistic, 360° video-based simulations and regard them as an engaging addition to more traditional learning methods. Because trainees directly experience the consequences of wrong choices, retention of the training material is also significantly improved.

Meliora VR and B&V partners received a grant from the MKB !dee project ‘VR learning culture in safety and health’ to promote a VR learning culture within Fontys and other educational institutions.

Another example is Anna Ziekenhuis, which ordered over 1,100 cardboard VR viewers to scale VR learning within its organization, including for onboarding new employees and safety procedure training.

Quotes from users:

"It felt like I got away from the classroom for a while. What a fun way to learn!"

"It all gets really close. It feels like you're right on top of it."

"I was quite worried about using a VR headset, but fortunately no controllers were needed. It all felt very natural and familiar."

"The biggest strength of the system is its scalability. I can reach everyone with the system very quickly."

How to add spatial tracking to 3D objects and deliver attention data insights

Coffee, like spatial computing, is a universal passion for most XR developers, so let’s look at three coffee machines as example 3D objects and apply spatial tracking to deliver attention insights. The objects could be product demos (retail, automotive, etc.), virtual try-ons (fashion, furniture, etc.), digital twins (manufacturing, construction, etc.) or training, advertising and game experiences. For these coffee machines, we want to know which one performs best, how we can optimise the user experience and what actions will drive business objectives. Spatial tracking, and the AR/VR attention data it generates, can be applied to any 3D object, whether an individual AR model or multiple objects in a VR scene. So, now that we have your attention, let’s jump in, show you how it works and walk through the type of attention data insights you get.

Add CORTEXR spatial tracking plug-in to 3D engine


Our plug-in for 3D engines (Unity, A-frame, etc.) adds spatial tracking to 3D objects without needing code or data analytics experience. Setting up spatial tracking and configuring 3D objects takes less than 10 minutes for developers with intermediate experience of their preferred XR platform. The plug-in places a cube around the 3D object, scaled in this case to the dimensions of the coffee machine, with each face of the cube generating attention data based on where users looked. Multiple 3D objects can be tracked in a scene (e.g. a kitchen environment with various objects), but today we’re focused on individual 3D models and the insights our XR data analytics platform generates. Attention data outputs are similar to eye tracking but far more scalable, with spatial tracking revealing deeper insights about behaviour in spatial environments than standard web data analytics tools like Google and Unity.
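To make the mechanics concrete, here is a minimal sketch of the underlying technique: cast the user’s gaze ray against an axis-aligned bounding cube each frame and credit viewing time to the face the ray enters first. This is a generic Python illustration, not CORTEXR’s plug-in API; the face labels and axis conventions are assumptions.

import numpy as np

FACES = ("front", "back", "left", "right", "top", "bottom")
attention = {face: 0.0 for face in FACES}  # accumulated seconds per face

def hit_face(origin, direction, box_min, box_max):
    # Ray/AABB intersection: return the face the gaze ray enters first, or None.
    direction = np.where(direction == 0, 1e-12, direction)  # avoid division by zero
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near, t_far = np.minimum(t1, t2), np.maximum(t1, t2)
    if t_near.max() > t_far.min() or t_far.min() < 0:
        return None  # gaze ray misses the cube entirely
    axis = int(np.argmax(t_near))         # the slab crossed last determines the entry face
    max_side = bool(direction[axis] < 0)  # moving in -axis means entering the max face
    return {(0, False): "left",   (0, True): "right",
            (1, False): "bottom", (1, True): "top",
            (2, False): "back",   (2, True): "front"}[(axis, max_side)]

def on_frame(gaze_origin, gaze_dir, dt, box_min, box_max):
    # Call once per rendered frame to credit dt seconds to the viewed face.
    face = hit_face(np.asarray(gaze_origin, float), np.asarray(gaze_dir, float),
                    np.asarray(box_min, float), np.asarray(box_max, float))
    if face is not None:
        attention[face] += dt

In a real Unity or A-frame project the engine’s own raycasting would replace the maths above; the point is simply that per-face attention data reduces to a ray-versus-cube test run every frame.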

Identify best performing object with highest attention levels

Almost 20% more attention is given to the red coffee machine, so people found this 3D object more appealing: it held their attention better than the yellow and white models. Attention Total is the amount of attention users give to the 3D model, with data split by each face visible to users (front, right, back, left, top); we’ve also added the brand logo as a tracked element because it matters to this project’s objectives. Attention Distribution shows the percentage split, so you can analyse attention for each face and how it differs between 3D objects. This extra insight layer of spatial tracking on 3D objects, on top of standard data analytics (dwell time, interactions, etc.), gives you a complete picture of user behaviour to improve the performance of your AR/VR project, whether it’s pre-testing, prototyping, A/B testing, live monitoring or post-analysis.
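As a toy illustration of these two metrics, the sketch below computes Attention Total and Attention Distribution from per-face dwell times; the numbers are invented for this example, not CORTEXR output, and a tracked logo would simply be one more key alongside the five faces.

attention_secs = {  # invented dwell time (seconds) per visible face
    "red":    {"front": 41.0, "right": 9.5, "back": 2.0, "left": 7.5, "top": 6.0},
    "yellow": {"front": 38.0, "right": 6.0, "back": 1.5, "left": 5.0, "top": 4.5},
    "white":  {"front": 35.0, "right": 5.5, "back": 1.0, "left": 4.0, "top": 3.5},
}

def attention_total(faces):
    # Total attention for an object: the sum over its tracked faces.
    return sum(faces.values())

def attention_distribution(faces):
    # Percentage split of attention across faces, rounded to one decimal.
    total = attention_total(faces)
    return {face: round(100 * secs / total, 1) for face, secs in faces.items()}

best = max(attention_secs, key=lambda name: attention_total(attention_secs[name]))
print(best, attention_total(attention_secs[best]))   # red 66.0 (20% above yellow's 55.0)
print(attention_distribution(attention_secs[best]))  # {'front': 62.1, 'right': 14.4, ...}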

Insights and actions:

Identify red as the winning object: placing it early in the experience will increase overall user engagement and e-commerce ‘buy now’ funnel metrics.


Optimise attention across all sides of 3D objects to improve interactions with areas (like the right side) that get attention but record no user actions or events.


Analyse successful elements and dial up these features, e.g. increase the performance of the red model by adding coffee cups on top, as this works well on the yellow model.

Analyse user behaviour around 3D objects with attention heatmaps

Over 70% of user attention (on average) is focused on the front face of the 3D objects, as people spend time understanding the functional controls and ergonomic features of the coffee machines. Our Attention Areas heatmap shows spatial tracking on 3D objects as an unboxed cube, so you can examine attention on each face of the 3D model and, importantly, whether user attention drifts outside the 3D object. Heatmaps in our prebuilt dashboards are more than pretty pictures: hovering over each cell shows its data values, letting you dig into specific elements, export data and analyse the pattern of user attention and its impact on project objectives.
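A heatmap cell is just a spatial bin. The sketch below shows one plausible way to aggregate gaze hit points on a single face into cells; the 8x8 grid, coordinate convention and synthetic data are assumptions for illustration, not the dashboard’s internals.

import numpy as np

def face_heatmap(hits_uv, weights=None, grid=8):
    # hits_uv: (N, 2) gaze hit points on one face in face-local [0, 1] coords.
    # Returns a grid x grid array of attention per cell (counts or dwell time).
    uv = np.clip(np.asarray(hits_uv, float), 0.0, 1.0 - 1e-9)
    cells = (uv * grid).astype(int)                 # map uv to integer cell indices
    w = np.ones(len(cells)) if weights is None else np.asarray(weights, float)
    heat = np.zeros((grid, grid))
    np.add.at(heat, (cells[:, 1], cells[:, 0]), w)  # row = v, column = u
    return heat

rng = np.random.default_rng(0)
front_hits = rng.beta(2, 2, size=(500, 2))  # synthetic gaze, clustered mid-face
heat = face_heatmap(front_hits)
print(heat.sum(), heat.max())  # 500 samples in total; the hottest cell's raw value

Hovering over a cell in the dashboard surfaces exactly this kind of raw per-cell value.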


Insights and actions:

Identify high visual attention areas on each face of the object (e.g. functional controls on front) to ensure UX design and UI interactions meet objectives.


Optimise low attention areas with nascent appeal (e.g. coffee cups and water tank) to drive exploration around 3D objects to increase dwell time and engagement.


Analyse hierarchy of attention against commercial goals (e.g. brand logo receives only 5% of attention) to improve brand recall, call-to-action buttons etc.


Optimise order of events with Attention Priority maps

User attention is mainly on the front of these objects: users see the front first and aren’t strongly motivated to explore the other sides. This could be an issue if important information were being missed on the right/back/left/top faces, but it’s a successful result for this project, as aesthetics on a kitchen counter and the ergonomics of functional elements are the main business goals. Attention Priority sequence maps show the order of user attention around 3D objects to reveal additional insights (white is the first view, purple the last). Here we dig into the results on the front face to understand what’s going on. The first view is often the most important, but the last view is sometimes more interesting, as it adds another layer of diagnosis to improve project performance.
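Conceptually, a priority map ranks regions by when they were first viewed. A minimal sketch, assuming a timestamped gaze-event log (the log format and region names here are invented):

from collections import OrderedDict

gaze_log = [  # (seconds into session, region viewed), in arrival order
    (0.4, "control panel"), (0.9, "logo"), (1.3, "control panel"),
    (2.1, "nozzle"), (3.0, "right side"), (4.2, "water tank"),
]

def attention_priority(events):
    # Regions ordered by first view: first maps to white, last to purple.
    first_seen = OrderedDict()
    for t, region in events:
        first_seen.setdefault(region, t)
    return list(first_seen)

print(attention_priority(gaze_log))
# ['control panel', 'logo', 'nozzle', 'right side', 'water tank']

Note how the logo can rank early in the sequence even when its share of total attention is small, which is exactly the kind of insight called out below.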

Insights and actions:

Analyse intuitive user behaviour against intended content experience (e.g. control panel ranks first as ergonomics is primary objective) so this is a successful result.

Optimise narrative and user flow around objects (e.g. users moved around to the right because nozzle position on the right led their attention in that direction).

Identify unexpected orders of attention (e.g. logo gets attention early in the experience despite having low overall attention) so brand validation is important.

The metrics of human behaviour with spatial data analytics


Analysing human behaviour with 3D spatial content shows it doesn’t always follow the UX/UI design once an AR/VR experience is released into the wild. User behaviour in 6DOF spatial environments can be unpredictable and surprising, so spatial tracking on 3D objects helps you build, test, monitor, analyse and grow your XR projects. Are people viewing the areas you expect them to, which areas do they look at first (versus last), and does this support your business objectives? The CORTEXR data analytics platform gives you the answers so that, combined with standard data analytics (Google, Unity, etc.), you have the full picture to grow your business.

Find out more 

Schedule a demo 

Banuba Helped sMedio Deliver a Video Calling App for a Fortune 500 Company

sMedio, a major software development company from Japan, integrated the Banuba AR Face Filters SDK into the video conferencing app it built for a major electronics manufacturer. By the end of 2023, this app, intended for business use, will have been installed on 500,000 devices.

sMedio took charge of developing the core functionality, while Banuba contributed its cutting-edge technologies for face touch-up and background replacement. These features are considered indispensable in a contemporary video calling application:


  1. Professional Appearance: Virtual backgrounds play a vital role in avoiding awkward situations such as interruptions by children or pets during a video call. With this feature, only the person in front of the camera remains visible, while everything else can be replaced with static pictures, GIFs, videos, or even immersive 3D environments (a minimal compositing sketch follows this list).


  2. Alleviating Anxiety: Many individuals feel self-conscious about their appearance during video calls. Thanks to the discreet face touch-up feature, users can effortlessly remove skin imperfections, brighten their teeth, correct camera distortions, and enhance their overall appearance, helping them look their best during the call.
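At its core, background replacement is alpha compositing with a person-segmentation mask. The minimal Python sketch below shows the general technique only; it is not Banuba’s SDK, and the mask is assumed to come from any off-the-shelf segmentation model.

import numpy as np

def replace_background(frame, person_mask, background):
    # frame, background: (H, W, 3) uint8 images; person_mask: (H, W) floats in
    # [0, 1] from a segmentation model (1 = person, 0 = background).
    alpha = person_mask[..., None]  # broadcast the mask across colour channels
    out = alpha * frame.astype(float) + (1.0 - alpha) * background.astype(float)
    return out.astype(np.uint8)

Feathering (slightly blurring) the mask edge before compositing avoids a hard cut-out around hair and shoulders.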


Creating these features from the ground up would demand specialized expertise and entail additional time, substantially inflating the final product's expenses. Opting to use the Banuba Face AR SDK proved to be a more cost-effective choice.


"Banuba provided high-quality products, proven technology, and solid support, leading us to make our OEM customer’s products competitive in a limited time and cost," Sadanori Iwamoto, CEO of sMedio, Inc. said.

About sMedio

sMedio is a Japanese company that specializes in software for consumer electronics and portable devices. Its main focus lies in multimedia and wireless connectivity technologies. sMedio is a respected player in the market, having secured contracts with several Fortune 500 companies.

About Banuba


Banuba is an augmented reality company with over 7 years on the market, pioneering face tracking, virtual try-on, and virtual background technologies. Besides online try-on solutions for makeup, headwear, jewelry, glasses, and more, it offers a Face Filters SDK and a Video Editor SDK – ready-made modules for applying effects and editing videos.