What to know about Niantic's new SDK for the most amazing AR experiences

Come see Niantic at our Immerse Global Summit during Metacenter Global Week in Orlando Oct 17-19

Ahead of the general availability of Niantic Lightship 3.0, Justin Sneddon, Group Product Manager on Niantic Lightship, will present how this SDK can help you build the most amazing AR experiences at our Immerse Global Summit during Metacenter Global Week in Orlando.

Here’s a sneak peek!

Beyond the AR Horizon

Lightship ARDK 3.0 takes what ARKit and ARCore offer in Unity via ARFoundation and cranks it up a notch. But that’s just the beginning. Lightship’s tools are designed to fill in the gaps and push the boundaries of computer vision technology. Buckle up, because we’re about to take you through some game-changing features like Depth, Meshing, Semantics, Navigation, Shared AR (Multiplayer), Visual Positioning (VPS), and Debugging Tools (Playback and Mocking).

Depth - The Foundation of AR Awesomeness

Depth is the secret sauce behind every AR experience. It’s what helps us figure out where to place objects and how they should interact with the real world. Lightship’s depth is something truly special. Why, you ask? Well, it all comes down to our passion for getting people outdoors.

Lightship’s depth is trained on vast outdoor environments, which means it can provide incredibly accurate depth from a single camera. Plus, it’s not limited to a short range like Lidar on iPhone Pros. Lightship’s depth can reach a whopping 40+ meters, and it works on all AR-capable phones—yes, that includes all iPhones and most Androids!

And why does that extended range matter? Imagine summoning a massive dragon into your AR world, a creature with a wingspan that far exceeds Lidar’s roughly 5-meter range. With Lightship’s long-range depth, you can place it 10 to 20 meters away from your camera and capture every breathtaking detail.

What else can you do with this supercharged depth? Let me break it down for you (with a quick sketch of the underlying math after the list):

  • Placement: Convert a screen point to a real-world position and place digital objects there.

  • Measurement: Know the distance to anything on your screen.

  • Occlusion: Use depth information to seamlessly blend digital objects into the real world.
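
To make those capabilities concrete, here is a minimal sketch of the math that placement and measurement rest on, written in plain Python with standard pinhole-camera geometry rather than Lightship's actual Unity API; the intrinsics and pixel values are made-up illustrative numbers:

```python
import numpy as np

def unproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert a screen pixel (u, v) plus metric depth into a 3D point
    in camera space using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placement: put a digital object where the user tapped. The intrinsics
# (fx, fy, cx, cy) are illustrative values, not real device calibration.
dragon_anchor = unproject(u=640, v=360, depth_m=15.0,
                          fx=1400.0, fy=1400.0, cx=640.0, cy=360.0)

# Measurement: the distance to whatever is under the tap is simply the
# length of that camera-space vector.
print(f"Dragon placed {np.linalg.norm(dragon_anchor):.1f} m away")

# Occlusion builds on the same data: hide any virtual pixel whose distance
# exceeds the real-world depth sampled at that pixel.
```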

But wait, there’s more! When you combine depth with semantics (stay tuned for that!), the possibilities become endless. Visual effects like pulse effects, depth of field, toon filters, edge detection, and more come to life. I’ll walk you through how to create the first two experiences.
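
As a taste, here is one common way a depth pulse effect can work, sketched in plain Python/NumPy rather than Lightship's actual implementation (the function and its parameters are illustrative):

```python
import numpy as np

def pulse_mask(depth_map, t, speed=5.0, width=0.3):
    """Boolean mask selecting pixels whose real-world depth falls inside an
    expanding ring. Tinting those pixels on the camera image produces a
    'pulse' that sweeps outward through the scene."""
    radius = (t * speed) % depth_map.max()  # ring grows over time, then wraps
    return np.abs(depth_map - radius) < width

# Toy usage: a fake 4x4 depth map (meters) and a timestamp in seconds.
depth = np.array([[1.0, 2.0, 8.0, 20.0]] * 4)
print(pulse_mask(depth, t=0.4))  # True where the pulse ring currently sits

# Combined with a semantic mask (e.g. ground pixels only), the same idea
# drives the depth-plus-semantics effects mentioned above.
```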

And, there you have it, folks! Niantic Lightship is all about taking your AR game to new heights. If you’re as excited as I am, you can dig deeper into these features with my upcoming blog posts, complete with examples and source code.

Register now!

Roxana Nagy Appointed as Co-Chair for VRAR Association’s Generative AI Committee

We are thrilled to have Roxana Nagy help lead our community for Generative AI.

Roxana is an experienced Creative Technologist and Senior Mobile Engineer with a passion for immersive technologies. In her role as Director of Immersive Technologies at Monstarlab, Roxana helps top companies around the world create strategies and build innovative projects using metaverse, AR, and VR technologies.

With a love for giving back to the community, Roxana co-leads Women in Tech Dubai, a group focused on creating a safe space and community for women working in tech. She is also a member of the Murdoch University Dubai Industry Advisory Panel, which was created to bring critical industry perspectives on employers’ expectations for graduate outcomes.

She is a thought leader on Augmented Reality, from both a technical and a business perspective, writing specialized articles and speaking at high-profile international conferences and on podcasts.

 

“I'm excited to step into the role of co-chair for the VRAR Association’s Generative AI committee. This committee stands at the intersection of Generative AI and XR – two domains that, when combined, can dramatically revolutionize our digital interactions, workflows, and experiences.

 By seamlessly integrating advanced natural language processing and generation capabilities, we will witness a new era in which virtual worlds become increasingly engaging, interactive, and personalized.

 I look forward to bringing my expertise to the table, collaborating with industry pioneers, and helping shape the next chapter of XR.”

- Roxana Nagy


Augmented Reality at the stadium - redefining fan engagement with Immersal

Come see Immersal at our Immerse Global Summit during Metacenter Global Week in Orlando Oct 17-19

How do you better engage fans at the stadium? How do you get the audience to spend more time and money at the event? Immersal, a visual positioning company, shared its know-how from AR stadium experiences in the US and Japan: an app for baseball fans at an All-Star event and a community AR experience for a rock band’s fans in Japan, both built on Immersal VPS. To learn more, visit the Immersal booth at IGS, and don’t miss Immersal’s presentation at the conference, which will explore the potential of AI and VPS working together and include a simple demo to make that potential real!

With Immersal's visual positioning system (VPS) you can create centimeter-accurate, large-scale indoor and outdoor AR experiences. Navigation, entertainment, industrial applications like asset management and maintenance assistance, and AR information systems are just a few of the many applications VPS can power.

You can read more about AR stadium experiences powered by Immersal here.

If you want to start testing the Immersal SDK and create your own AR experiences, download the free SDK here.

Campfire is coming to Meta Quest 3!

Campfire for Quest uses passthrough to make technical communications for design reviews and training easier than ever. 

 

With full-color passthrough on Meta Quest 3, you’ll feel like you are interacting with content in front of you and team members around you. Whether you are presenting concepts in a design review or explaining complex assembly procedures, this is the next best thing to being there! 

Not only is the Campfire app easy to learn for first-time Quest users, but it also comes with a virtual assistant named Spark. Spark will guide you through an interactive tutorial, combining humor and how-to instruction to make your experience even more enjoyable! 

Campfire for Quest will be available in the Meta Quest Store on Nov 1 for Quest 3, Quest Pro, and Quest 2. The app is also available for PC, Mac, and iPad.  To learn more, schedule a demo here.

 

How Does Orlando Compare to Other National Tech Hubs?

From tourism to lifestyle, Orlando is a destination that millions around the globe travel to each year to experience our world-class entertainment, while thousands more move here weekly to make this amazing metro area their home.

Next month, a significant event will be held here for the first time, bringing local businesses together with mega global brands, such as Meta, Amazon, and more.

What’s bringing them here? Immersive technology.

Kyle Morrand is the CEO of the gaming technology company 302 Interactive.

Growing up in Miami, Kyle found his way to Orlando when he chose to attend UCF. After experiencing life in Orlando, he chose to stay.

Kyle and his team at 302 Interactive are getting ready for MetaCenter Global Week, a three-day event that puts Orlando front and center with immersive technology brands from all over.

We met up with Kyle at Creative Village in downtown Orlando to talk about the event, his thoughts on the future of immersive technology, and what sets Orlando apart from other national tech hubs.

For the full interview, click here.


source: The Orlando Life

Adam Kornuth Announced as Co-Chair for VRAR Association’s Generative AI Committee

We are delighted to have Adam Kornuth help lead our community for Generative AI. 

Adam brings a wealth of knowledge and experience to his role as Co-Chair of the VRARA Generative AI Committee. He has an extensive track record of collaborating with world-class brands in the retail, media, and technology sectors, including renowned companies such as Coca Cola, AT&T, Toyota, IHG, and TEGNA, along with innovative agencies and emerging technology labs. His multifaceted career has encompassed roles in Strategy, Business Development, Marketing, Account Leadership, and Advisory positions. Kornuth is well-equipped to guide the committee towards innovative solutions that harness the potential of Generative AI for industry leaders, researchers, technology enthusiasts, and organizations worldwide.

“I’m honored to take on the role of Co-Chair for the VRARA Generative AI Committee. Virtual and Augmented reality are on the cusp of a transformative era, and the integration of Generative AI will play a pivotal role in shaping the future of these as well as so many other industries. I look forward to collaborating with experts in the field to drive innovation and foster cross-industry strategic partnerships.”

– Adam Kornuth



Lenovo and F1® Team Up to Virtually Put Fans in the Driver’s Seat with the ThinkReality VRX

Post originally appearing on Lenovo Storyhub by Vishal Shah.

Get Ready Race Fans

Lenovo and Formula 1® are working together to constantly improve the fan experience: from content production to live broadcasting, including the use of cutting-edge technologies like augmented reality (AR) and virtual reality (VR). The Lenovo ThinkReality VRX F1 project is a further step in this direction, allowing F1 fans in the Paddock Club to test themselves on the track.

Using the new ThinkReality VRX all-in-one headset, race fans at the upcoming FORMULA 1 LENOVO JAPANESE GRAND PRIX 2023 in Suzuka and FORMULA 1 LENOVO UNITED STATES GRAND PRIX 2023 in Austin will be able to play an exclusive VR F1 mini-game.

Based on the iconic slot car racing games many of us played as kids, users drive an F1 car around a replica of the Suzuka International Racing Course or the Circuit of the Americas, controlling its speed with the ThinkReality VRX controllers’ buttons. While playing, the user can view the track from different perspectives, as if they were observing a toy car track placed on a table. The experience is just like a traditional electric slot car set: drivers can drift around corners, but if they accelerate too much in a curve, the car can leave the track, and even fall off the table!

The goal is to achieve the best lap time: the scoreboard displays the results and can be projected onto an external screen.

Throughout the production of the ThinkReality F1 experience, generative AI was used to design elements of the racecars and track, as well as for voiceovers and programming assistance. 

The Lenovo ThinkReality VRX F1 game will also be available at Lenovo’s Tech World 2023, taking place on October 23-24 in Austin, Texas.

It’s Not Just Fun and Games

The racing game is an opportunity to showcase the power and value of VR experiences and the ThinkReality VRX, Lenovo’s new all-in-one virtual reality (VR) headset engineered for the enterprise.

From employee training and virtual collaboration to 3D design and engineering, XR technologies are becoming more important than ever for businesses and organizations, enabling people to do more, faster, and with less cost.

The ThinkReality VRX offers the market something truly unique: an end-to-end XR solution for the enterprise. It includes not only cutting-edge hardware, but also the software and services that make enterprise XR deployments easier and quicker to deliver ROI.

The ThinkReality ISV ecosystem is tailored to the core use cases that show real results and ROI at scale. Hard skills training to create muscle memory and support employees to learn by doing, and to fail safely. Soft skills training to help workers communicate better, grow their potential, and learn about their organization’s values. Collaboration tools to enhance team meetings, review digital twins, and to hold special events. Spatial computing applications like virtual monitors and AI-supported workflow applications that help expand workspaces and supercharge productivity. Even wellness platforms to help employees reset, and recenter, both physically and mentally.

The ThinkReality VRX is supported by a broad portfolio of professional services. This includes flexible device management with the ThinkReality cloud software platform, and ThinkReality xR Services spanning consulting and content creation through deployment support. Like many enterprise solutions from Lenovo, the ThinkReality VRX is also supported by Lenovo’s Integrated Solution Support (LISS) for around-the-clock global customer service, as well as Device as a Service (DaaS) financing through Lenovo TruScale.

Lenovo believes smarter technology can revolutionize the way people train, work, and communicate. Once again, our partnership with Formula 1 helps us to showcase the capabilities of Lenovo’s technologies, services and solutions on a global stage.

Orlando Mayor Dyer: Metacenter Global Week to showcase Orlando to the world

Originally appearing on orlandonews.com by Marco Santana.

Orlando Mayor Buddy Dyer has watched the city’s economy undergo multiple transformations since his first election in 2003.

Sometimes, it’s a matter of necessity.

As the coronavirus battered Orlando’s tourism industry the previous three years, he turned his attention to economic diversification.

At the same time, he noticed the tech industry thriving, even as COVID-19 completely hamstrung the city’s 500-plus hotels.

So, among other things, he leaned into the Orlando tech industry.

He famously held his 2022 State of Downtown address in virtual reality in December.

Then, in May, he threw his support in early by announcing MetaCenter Global Week during his annual State of the City.

As that weeklong celebration and showcase of Orlando’s tech community approaches – as the major tech event in Orlando next month – Orlando Tech News caught up with him to get his thoughts on the industry and the upcoming event.

What is it about MetaCenter Global Week that has you excited?

It’s a great opportunity to raise the profile of Orlando’s reputation as a tech community. We have a thriving tech ecosystem here with both big companies and small ones, but I don’t know that the world is necessarily aware of it. The whole notion of the MetaCenter (Global Week) is having all the people come to Orlando and exposing them to what we have to offer.

What could it mean to Orlando’s tech community?

It gives our companies and entities the opportunity to meet with people they might eventually collaborate with, perhaps. It lets the outside world know about our educational institutions and various industries and how well we have parlayed our industry clusters. These include military, simulation and training, Creative Village, Lake Nona and some of these tech-focused businesses at incubators like Starter Studio.

Can you talk about the significance of Innovate Orlando becoming its own thing recently?

The whole notion of having an entity like Innovate Orlando break out of the Orlando Economic Partnership and stand on its own is certainly significant in terms of demonstrating where we stand as a tech community.  We have gained notoriety around the country in terms of what we have to offer here. That’s continuing to get exposure by having a week focused on Orlando’s innovation and tech offerings.

The industry we are known for is actually one of the original tech industries.

If you think about this in terms of our tourism industry, some of the high-tech aspects of the theme parks absolutely go hand-in-hand. These are some of the forebears of these technology innovations. Modeling, simulation and training and the live experiences offered at theme parks go hand-in-hand in terms of the type of people that would have that expertise and it’s transferable between industries.

Global Week is a combo of offerings. What could the future of the event bring?

Combining Synapse and the Immerse Global Summit into one week was a big deal. What we need to do is look around us. I think this will grow. It might be reminiscent of South by Southwest, which didn’t really know what they would become in the early days. I am hoping in 2043 we can say, ‘Gosh, remember what this was like in 2023?’ 

Can you talk a little more about Innovate Orlando’s presence now?

I think it’s a big deal that it happened. It’s not unlike the fact that Visit Orlando was once a part of the chamber a long time ago and then came out to stand on its own. In some sense, this is a similar move. It’s cool to see. Visit Orlando was there to help a growing economy and has since become a huge part of our economy. I believe Innovate Orlando could serve that same purpose.

How big was it that Orlando had a thriving tech community during the pandemic while COVID hammered tourism?

The growth in our tech community continued during the pandemic as if it weren’t a pandemic. We always talk about diversifying the economy and having a segment that can continue to grow and thrive while other pieces are impacted. Having that is important to the overall health of the community.

PIXO VR Announces Strategic Partnership with Verse Foresight as its Middle East Affiliate

PIXO VR is delighted to announce a strategic partnership with Verse Foresight as its Middle East Affiliate.

This groundbreaking affiliation combines PIXO’s expertise in VR training with Verse Foresight's profound knowledge of learning needs and clients across the Middle East, extending PIXO VR's reach into the region.

"We are excited to partner with Verse Foresight as our Middle East affiliate," said Sean Hurwitz, CEO of Pixo VR. "The Middle East region's growing appetite for innovation aligns seamlessly with our mission to make work safer and more enriching through VR Training. Together with Verse Foresight, we aim to offer unmatched enterprise-grade immersive learning solutions to businesses in the Middle East.”

Mostafa Nassef, CEO of Verse Foresight, expressed equal enthusiasm: "Our partnership with PIXO VR as their affiliate in the Middle East is a significant step towards enhancing immersive learning experiences in the region. Leveraging PIXO’s exceptional VR content creation expertise and enterprise-grade platform, we are poised to empower businesses in the Middle East to engage their audiences in profoundly impactful ways."

As PIXO VR’s Middle East affiliate, Verse Foresight will focus on providing PIXO’s solutions to a diverse range of industries, including energy, construction, education, healthcare, and more. Clients in the Middle East can anticipate transformative immersive learning experiences that transcend cultural and geographical boundaries.

Stay tuned for updates as PIXO VR and Verse Foresight embark on this transformative journey together, dedicated to bringing the power of VR to the Middle East through their partnership.

Rooom receives significant investment of 17 million euros!

From rooom.com:

We are absolutely thrilled as this exciting development marks an important milestone and will undoubtedly strengthen our ability to continue to provide innovative solutions for our customers.

 

The investment received comes from the Munich-based financial investor Marondo Capital, TGFS Technologiegründerfonds Sachsen, as well as our long-time supporters bm|t beteiligungsmanagement thüringen GmbH and other existing investors.

We are grateful that they have our back to keep pushing our ideas and innovations. This is how we realize our mission to create groundbreaking 3D visualizations and metaverse solutions. Trust us when we say we have big plans and are eager to create even more unique solutions for you and your customers.


SynergyXR Announces the Launch of Version 2.5: A Game-Changer in Immersive Training & Collaboration

Aarhus, Denmark – 18 September 2023 - SynergyXR, a pioneering leader in the augmented and virtual reality sector for business, is thrilled to announce the release of SynergyXR 2.5. This latest version is set to redefine the standards of virtual collaboration and training, making extended realities like VR and AR more accessible and user-centric for businesses. 

What's New in SynergyXR 2.5? 

  • LMS-Integration: A groundbreaking feature that revolutionizes training management. Users can now seamlessly export VR content in the coveted SCORM 1.2 format, ensuring effortless integration with leading Learning Management Systems (see the sketch after this list). 

  • iOS Quick-AR: Designed for on-the-go professionals, this feature eliminates the need for 3D scanning first, allowing users to dive straight into augmented reality. 

  • QR-Code Support: Enhancing user experience, this feature ensures swift access to Spaces. A simple scan is all it takes to dive into a desired virtual space. 

  • Mac Support: In a move towards inclusivity, SynergyXR 2.5 now offers full support for Mac platforms, ensuring a seamless immersive experience for all users. 
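
For readers unfamiliar with SCORM, a SCORM 1.2 package is essentially a zip archive with an imsmanifest.xml at its root describing the course and its launchable resources. The sketch below (plain Python; the identifiers, titles, and launch file are hypothetical, and this is not SynergyXR's actual export code) shows the shape of such a package:

```python
import zipfile

# Minimal SCORM 1.2 manifest for a single launchable module. Identifiers,
# titles, and the launch file are illustrative placeholders.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="com.example.vr-training" version="1.2"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>VR Safety Training</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>Module 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent"
              adlcp:scormtype="sco" href="index.html"/>
  </resources>
</manifest>
"""

# A SCORM 1.2 package is just a zip with imsmanifest.xml at its root.
with zipfile.ZipFile("vr-training.zip", "w") as pkg:
    pkg.writestr("imsmanifest.xml", MANIFEST)
    pkg.writestr("index.html", "<html><body>Launch VR module</body></html>")
```

Because the manifest follows the SCORM 1.2 schema, any compliant LMS can import the zip and launch the referenced resource.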

 

Sune Wolff, CTO and co-founder of SynergyXR, commented on the release, "SynergyXR 2.5 isn't just another update; it's a leap. By incorporating invaluable feedback and industry insights, we've crafted an XR experience that's not only more powerful and versatile but also centered around the user." 

 

About SynergyXR: 

Based in Aarhus, Denmark, SynergyXR is at the forefront of making augmented and virtual reality tools accessible for modern businesses. With a strong foundation in the manufacturing and energy sectors, SynergyXR understands the challenges that contemporary businesses face. Their commitment to people-first solutions ensures that extended realities like VR and AR become everyday tools for businesses. 

 

For more information about SynergyXR 2.5 and its groundbreaking features, visit the SynergyXR 2.5 release page.

 

Press Contact: 

Andy Grantham 

Head of Marketing, SynergyXR 

Email: ag@synergyxr.com 

Phone: +45 31239681 

Meet Obsess in Orlando, Oct 17-19 at Metacenter Global Week! Launched over 250 virtual stores for Ralph Lauren, Charlotte Tilbury, J.Crew, Revlon, Maybelline, J&J, NBC Universal, Mattel

Obsess is a Sponsor and Exhibitor at our Immerse Global Summit at Metacenter Global Week on Oct 17-19 in Orlando, the metacenter for the building of the metaverse.


Obsess is the leading experiential e-commerce platform that enables brands and retailers to create immersive, branded, discovery-driven virtual stores on their websites. The mission of the company is to create the next-generation online shopping interface that transforms the traditional e-commerce thumbnail grid into a 3D, interactive, social and highly engaging experience. Obsess has launched over 250 virtual stores and experiences for brands such as Ralph Lauren, Charlotte Tilbury, J.Crew, Revlon, Maybelline, Johnson & Johnson, NBC Universal, Mattel, and more—driving consumer engagement, brand loyalty and conversion.


The Fate of Apple's Vision Pro: Part II

Today, we’re featuring Part II of Evan Helda’s series, ‘The Fate of Apple’s Vision Pro’. 


Evan Helda is the Principal Specialist for Spatial Computing at AWS, where he does business development and strategy for all things immersive tech: real-time 3D, AR, and VR. Evan also writes a newsletter called Medium Energy, where he explores the impact of exponential technology on the human experience. 

If you've not read Part I, we suggest doing so for complete context & appreciation. Here’s the link. 

In Part I Evan reflects on his time at the original Meta, an AR startup that was building an Apple Vision Pro competitor, albeit 5 years prior. 

Here in Part II, Evan dives into the critiques from Vision Pro skeptics and dissects why they might be wrong.

If you enjoy this piece, we encourage you to check out more of his content over at MediumEnergy.io! You can also follow Evan on X/Twitter: @EvanHelda

----

Have you ever cried in a business setting?

I have.

It was the first time I ever experienced really good AR. And I mean... really, really good. Like, take-your-breath-away, blow-your-mind good.

Needless to say, they were tears of joy.

The sad part? That experience was 5 years ago.

To this day, I still haven't seen anything that comes close. And I've tried it all; just about every headset and every top application.

Of course, this experience was just a demo. An absolute Frankenstein of a demo at that. To create something this good, we had to duct tape together the best-in-class components of the AR tech stack. It was kludgy as hell, but it accomplished our goal: to showcase the art-of-the-possible if we could get everything right.

We used the best display system (with the largest field-of-view and highest resolution, aka: the Meta 2); the best hand tracking (a Leap Motion sensor); the best positional tracking (a Vive lighthouse rig, with a Vive controller hot-glued to the top of the headset to track head movement... yeah... like I said, kludgy...); an intuitive interface and custom software via Unity that allowed you to move between 2D creation (using a Dell Canvas to draw a shoe) and 3D consumption (software to 'pull' the 2D shoe out into the world as a 3D object for a design review).

Meta 2 AR headset

Leap Motion hand tracking sensor

Vive Lighthouse Tracking Gear

Dell Canvas for design


But it wasn't just these tech components. We also had some of the world's most talented developers, 3D artists, and UI/UX designers build that demo. And that's the other thing we've been missing as an industry: the world's brightest minds. They just haven't entered this space yet en masse because they know... AR/VR isn’t yet worthy of their talent.

We made this demo for Nike in partnership with Dell (who was reselling our headset). It was a re-creation of the CGI from this two-minute concept video, with the exact same 3D assets, same workflow, and same UI/UX.

Nike designer using Meta 2

Virtual prototype in AR

This Nike video still drives me to this day.

When I put that duct-taped contraption on my head, the kludginess disappeared. I was captivated by the future. Or rather, my childhood fantasy. Like a wizard at Hogwarts, I was using my voice to summon different design iterations. I was grabbing orbs out of the air, each one representing a different color or texture. These orbs could be dragged with your fingers and dropped onto the 3D model, changing the aesthetic by tossing invisible objects onto other invisible objects.

POV shot of the demo/video

Except they weren't invisible. Not to me. You could see every detail: the stitching, the fabric, the glow of the materials.

I could explain it in more detail, but just watch the video and then... imagine.

Imagine this type of workflow and collaboration, between both humans and AI, allowing us to move from imagination to reality in the blink of an eye.

The video's script poetically says it all:

"It starts with a question, followed by an idea. On how to make things simpler.

Better.

Or more beautiful.

But it’s not just about what it looks like. It’s how it works.

Which means trying... and failing... and trying again.

To be a designer (or creator) means not being bound by the limits of your tools. But instead, being inspired by them; so that you can focus on what only YOU can do; being creative, being curious, and being critical; exploring the union between function and form, until suddenly...

You know.

And when you're ready to share your work, make sure everyone can see... that the world is a little simpler, better, and more beautiful."

I've seen this video over a hundred times. But those words never fail to stir my soul. And with the advent of generative AI, combined with the promise of spatial computing, they’re more poignant than ever.

People sometimes ask me... when will this tech be here? When will we know it’s arrived?

I answer by showing the Nike video. When that experience exists, in the form of a real product, with a real application, in a real production setting... that's when we've arrived.

Now, I've yet to try the Apple Vision Pro (AVP). But that's why I'm so excited, and why I think you, dear reader, should be as well. Because the AVP seems to be the industry's FIRST device that will yield an experience of such magical magnitude.

Something that suspends disbelief and blows your mind. Something that compels the world’s top talent to experiment and re-invent human-computer interaction.

Apple Vision Pro

But, as I mentioned in Part I, not everyone has the same level of optimism about AVP.

Immediately after the announcement, the Luddites grabbed their pitchforks and the skeptics had a field day.

Listen, I get it... the notion of being inside the computer is strange and some of Apple's portrayals had Black Mirror vibes (which we'll address later). Such objections are nothing new. Every major tech epoch faced similar doubt in droves.

But when you put the primary objections under a microscope, they just don't hold up. Especially over the fullness of time.

As I also said in Part I, I'm obviously biased. But I've done the work to produce an objective lens. And upon analyzing the major objections, I remain convinced: most haters/pundits are woefully wrong, lacking the right perspective, foresight, and an understanding/appreciation for the nuances of Apple's strategy, timing, and approach.

That's not meant to be a knock. Most people’s and pundits’ perspectives are just limited, lacking exposure to the tech, the impactful use cases, and the problems they'll address.

In my opinion, everything Apple is doing makes perfect sense: the form factor, the timing, the positioning, the use cases; all of it has been meticulously ruminated, debated, and patiently executed upon.

So before explaining why, let's do a quick recap of the most common objections/concerns:

  1. It's too expensive! (Price)

  2. I don't want to wear that thing on my face (UI/UX)

  3. What’s the point? What is this good for? (Use Cases)

  4. This is going to ruin humanity! (Societal Impact)

While not an exhaustive list, I view these as the ‘big rocks’ in the proverbial jar of eventual truth (aka: Evan’s optimistic opinions). Let’s dive in.

It's too expensive! (Price)

News flash: the Apple Vision Pro is not going to be a commercial success. Everyone knows this. Including Apple. Regardless, Wall Street is going to be disappointed, the critics will say I told you so, and they will all be missing the forest for the trees.

Making money isn't Apple's goal. Nor is it their metric of success.

Their goal is twofold. First, to attack the most challenging barrier: consumer behavior and imagination. Second, to get into the market, learn, and iterate; all in the wake of consumer inspiration and rising sentiment/demand due to a premium/mind blowing UI/UX.

Apple's strategy can be summarized by a tweet Palmer Luckey wrote in 2016: 

“Before VR can become something that everyone can afford, it must become something that everyone wants”.

Towards that end... Apple had a choice. They could have waited until the price point was perfect, along with the form factor, the battery power, etc. But are these things their biggest challenge?

No. These things: price, battery, weight, size, etc... they're all bound to be solved by the natural progress of technology. You'd rather be deficient on these vectors, as they will naturally take care of themselves.

Where you can't afford to be deficient is usability, utility, and delight. In other words, it's better to go high end and be super compelling, than low end as another 'me too' device competing in a red ocean, doomed to gather dust (just like every other affordable device). That device is just not worth making, and as Luckey alluded, it won't make AR/VR something everyone wants.

Thus, Apple chose to reach 5 years into the future, spare no expense, and pull next-gen technology into the present. Hence, $3,499.

There's also a simpler argument: the price is irrelevant.

Apple is targeting very early adopters: bougie prosumers and power users with very low price sensitivity. These are folks who would pay $5,000 - $10,000 for the AVP. They just want the latest/greatest.

That said, even at this price point, the complaints remain overblown. Especially on a relative basis (both historically and currently).

Case in point is Apple's origins: the Lisa, one of the world’s first personal computers with a graphical user interface.

Similarly, this was the first time most consumers saw innovations like the modern GUI and the mouse; innovations that would shape the future of computing. Innovations that also begged similar questions… why does the average home need this? At the time, most consumers had no idea.

As a result, the Lisa was an abject commercial failure. But it paved the way for Apple's success with the Apple II and Mac. It also awakened the world to the potential of personal computing.

The Lisa cost $10k in 1983, about $29,400 in today's prices. Not to mention, the Macintosh, Apple's most iconic breakthrough, was $2,495 in 1984. That’s roughly $7,000 in today's prices…

From a more local & relative perspective: the Magic Leap 2 AR headset is $3,299, and the Microsoft HoloLens 2 is $3,500. The Varjo, the most direct comparison as a mixed reality passthrough device, is $7,100!

The AVP is right in the ballpark at $3,499, and vastly superior on just about every dimension.

It's also worth considering what the AVP strives to replace: powerful workstations, laptops, and high-end displays. People in their target market spend $2k - $5k on nice workstations/laptops, and up to $2k - $3k on high end displays; all without blinking an eye.

The AVP can replace these products, and then do SO much more...

And so, I rhetorically ask... is the AVP really that expensive?

I don't want to wear something on my face (UI/UX)

My response will seem trite, but I think it will prove true.

This is a classic case of "don't knock it ‘till you try it."

I know, I know. I haven't even tried the AVP myself. But I've spoken with people who have: from grounded analysts to XR skeptics. They’ve all had a similar response, falling somewhere along the lines of...

  • “Holy shit”

  • "I felt like I had super powers"

  • "It was remarkable and exceeded my wildest expectations"

I'll have to circle back on this after I try it, but here’s my bet: the user experience is going to be so compelling that it trumps the awkwardness/friction of wearing something on your face.

At least in the contexts they've optimized for: productivity & visualization.

The input modality is said to be the most mesmerizing part, i.e., the eyes and hand tracking in lieu of a mouse, keyboard, or screen taps.

With the AVP, you just follow your instincts, using your eyes and subtle hand motions to control virtual objects, as if they are actually in the real world. It feels like you have magical powers and it just works. Your intuition is the controller.


Sure, the jury remains out until it ships. But this seems like the first AR/VR product that is just... buttery.

What the hell do I mean by buttery?

It comes from Nick Grossman’s 'butter thesis' (Nick is a partner at Union Square Ventures). The thesis describes product interactions & experiences that just absolutely nail it. What 'it' is exactly is hard to describe… but you know it when you see it. It's just frictionless: intuitive, smooth, and delightful.

AR/VR today is cool and novel. But I don’t think anyone would call it buttery. It’s plagued with all kinds of UI/UX paper cuts that make it very hard to do real work or consume content for hours on end.

As much as I love AR/VR, I'm still painfully aware of the brick on my face, constantly sliding off, noticeably heavy, hot, with non-intuitive controllers, etc.

Now, I'm sure the AVP will have its edges. All V1.0 products do. But fortunately ... this is Apple we're talking about.

Unlike other players in this space, most people will give Apple the benefit of the doubt.

More so than perhaps any other company, Apple knows how to make things desirable. Which is a key pillar of their strategy: social engineering. They’re going to make this thing cool and they have a plan to do so. One example is hyper personalization.

Apple is going to ensure your AVP fits like a glove, while also offering the opportunity for self-expression and style via custom aesthetics.

At first, you’ll have to make an appointment at an Apple retail store to buy an AVP. They're carving out entire sections of the stores for headset demos and sizing, allowing associates to select and customize the right accessories for the buyer. This will ensure a snug-fitting headband (of which there will be many styles), the perfect light seal (informed by a facial scan), and the right prescription lenses.


Considering the amount of inventory, the number of variations, the in-store logistics, the demos, etc… this will be the most complex retail roll out in Apple's history. A feat few beyond Apple could pull off, and a compelling story line to monitor through 2024.

What’s the point? What is this good for? (Use Cases)

Steve Jobs famously said, "You've got to start with the customer experience and work backwards to the technology. You can't start with technology and then try to figure out where to sell it."

Many people think the AVP flies in the face of this wisdom. They think this is fancy tech looking for a problem. I think the AVP strategy falls somewhere in the middle, largely because it has to; the form factor and UI/UX are just too new for anyone to have all the answers.

In Part I, I said the following: "In hindsight, it's easy to say the iPhone's impact was obvious at launch. But was it? Sure, it launched with what became killer apps: calls, email/messaging, browsing, and music. But, similar to the VisionPro's focus use cases, these things weren't entirely new. It was things we were already doing, just better on multiple vectors."

Similar to the iPhone strategy, Apple is starting with a simple and practical use case: screen replacement, aka: doing things some people are already doing, just better.

The use cases for 'infinite display' will be compelling for a lot of people: remote workers, digital nomads, software programmers, finance traders, data analysts, 3D artists/designers, gamers, movie buffs, the list goes on.

Virtual Displays

The total addressable market for these folks alone is in the tens of millions, if not hundreds of millions. Upon realizing they can become Tom Cruise from Minority Report, these people are going to line up in droves to buy the AVP.

Now, this use case doesn’t come without its haters, garnering comments along the lines of "ugh, but it’s so isolating". But this response feels silly to me. Isolation is the point. Many of these customers work remotely from home, alone, and often on the road. What they do requires 'deep work' that is inherently isolating. If anything, the AVP is a device that could help close us off from endless distraction & interruption, allowing us to more easily tap into states of flow & ideal work conditions.

But if 'isolation' is your concern, know that collaboration will be the killer feature of spatial computing’s killer apps.

To be sure, it was odd that Apple barely showed 'multiplayer' use cases in their keynote. Quite odd. Collaboration is where the true magic happens. Particularly in AR, when you both have a completely shared context within both the physical and digital realms, bonded over a shared hallucination.

These 'shared hallucinations' are going to be most impactful within work settings.

Whether Apple likes it or not (because they don’t really care about enterprise), the enterprise will be their biggest/best opportunity short term, i.e., corporations buying the device for use cases like training, design, and sales & marketing.

Across these use cases today, even the world's most advanced companies are stuck in the 90s.

CAD designers create 3D things with 3D design tools, but go right back to 2D pictures in PowerPoint when it comes time to share/present.

Training departments use laughable videos, PowerPoints, and text-filled PDFs with static pictures to explain complex and sometimes dangerous procedures. And they wonder why they can’t recruit, inspire, and retain digitally native, 'experience craving' millennials/Gen Z…

Sales reps all too often take a similar approach. Their customers are better off just reading the same content online or getting a product analysis from ChatGPT.

Across all of these examples, people are fundamentally trying to transfer knowledge by conveying an 'experience' in a woefully non-experiential way. This ‘knowledge transfer’ problem becomes increasingly acute in the face of an aging workforce, worker displacement (as AI eats more jobs), and within an era of customization/personalization in product design & sales.

This is why I prefer to call spatial computing, 'experiential computing'.


Within endless scenarios (be it work, education, or play) the goal is to capture, understand, or convey an 'experience' of some kind: what it's like to wear a shoe, what it's like to navigate a factory floor, what it's like to put your hands behind the wheel of the car.

We can try to use a bevy of words, images, and videos to spark imagination. And maybe imagination will get you 10-50% of the way there.

But what if we could turn imagination into reality? What if we could directly experience the thing itself, in its entirety? What if we could transfer knowledge at 80-100% levels of fidelity, without information loss?

Speaking of ‘direct experience’… Apple’s other focus use case might also be enough to sell out the AVP in year 1, and that’s immersive sports & live entertainment (e.g. any kind of live show/performance, music, plays, comedy acts, etc.)

They’re investing heavily in this area, with their own camera hardware for 360/volumetric capture, their own file format for this media type, and their own streaming platform via the acquisition of NextVR.

NextVR 360 camera


I thought NextVR was by far the most compelling consumer VR app to date. It was also a major driver of Oculus Quest sales, putting users courtside at NBA games, on the sidelines of NFL games, or in the front row of a Taylor Swift concert (don’t judge).

These ‘real-world’ tickets cost anywhere from $1,000 to $10,000. Taylor rocks, and so do live sports, but I’m not paying that. Neither are 9 out of 10 people.

If you told me I could be courtside, alongside friends from around the world, week in week out, for $3,499 and a small monthly subscription? I’m all over it. And I think many, many other people will feel the same way.

So as far as use cases are concerned, ‘infinite display’, collaboration + knowledge transfer, and live sports/entertainment alone will be enough to drive demand and establish product market fit. 

But this is the tip of the iceberg. Just like no one predicted the App Store, and the ensuing explosion of new apps, we can’t predict all the innovation that’s brewing amidst the long tail of Apple developers who are already diving into the AVP SDK & developer docs.

Out of the millions of apps in the App Store, a healthy chunk is brainstorming as we speak about what their apps could look and feel like in a spatial world. And I can’t wait to see the results…

This is going to ruin humanity!

I beg to differ.

To the contrary, spatial/experiential computing just might be a key ingredient to humanity’s salvation, especially with the advent of AI.

There’s a variety of philosophical and practical reasons why. I’ll just hit my two favorites.

Philosophically, consider all the complex and daunting problems we face in the world. Most of them lack answers, and in our search for solutions, it’s hard to say where to start.

But one place that is hard to refute, and that will certainly help us find the right answers/solutions, is better communication & collaboration: between employees, executives, & scientists. Between countries, companies, and local governments. Between political groups, their leaders, and their polarized constituents.

Poor communication & collaboration sits at the heart of all our issues, causing a lack of empathy, understanding, and ultimately, poor decision making, low alignment, and very little progress.

To illustrate the power of spatial computing for communication & collaboration, I fall back to a section from my essay, ‘How to Defend the Metaverse’.

It quotes one of the cyberspace/metaverse OG's: Terence McKenna.

McKenna says, "Imagine if we could see what people actually meant when they spoke. It would be a form of telepathy. What kind of impact would this have on the world?"

McKenna goes on to describe language in a simple but eye-opening way, reflecting on how primitive language really is.

He says, "Language today is just small mouth noises, moving through space. Mere acoustical signals that require the consulting of a learned dictionary. This is not a very wideband form of communication. But with virtual/augmented realities, we'll have a true mirror of the mind. A form of telepathy that could dissolve boundaries, disagreement, conflict, and a lack of empathy in the world."

This form of ‘telepathy’… i.e. a higher-bandwidth, more visual form of communication, the ability to more directly see or experience an idea, an action, a potential future… this will not just benefit human-to-human communication, but also human-to-machine. Which brings us to my practical response du jour.

Practically, we need to consider how humans evolve and keep up in the age of AI.

We’re briskly moving from the age of information to the age of intelligence. But intelligence for whom?

Machines are inhaling all of human knowledge. As a result, every person and every company will have the ultimate companion; capable of producing all the answers, all the options, and all the insights…

How do we compete and remain relevant? Or perhaps better said… How do we become a valuable companion to AI in return?

Just like machines leveled up via transformers and neural nets, we too need better ways to consume, analyze, and ‘experience’ information. Especially the information AIs produce, which will come in droves and in a myriad of formats.

AI is going to produce answers, insights, and truth for all kinds of things: new ideas, products, stories, moments in time, scenarios, plans, and, my personal favorite, the things that remain abstract and unseen by most: space, stars, planets, the deep sea, the deep forest, the inner workings of the human body & mind; the list goes on.

AI is going to reveal things previously mysterious, complex, and otherwise impossible to fully grasp.

As it does so… how can AI best communicate its findings back to humans? And how can we fully grok, parse through, and become fully empowered to act?

More often than not, our answer back to the AI is going to be, “don’t tell me, damnit, show me”.

Spatial computing will be the ultimate tool for helping AIs ‘show’, and helping humans ‘know’, ushering in an age of ‘experience’ in tandem with the age of ‘intelligence’.

As a result, humans will be empowered to better remain in the loop as the final decision maker, fully empowered to add the human touch and tweak the final output in a way that only humans know how, i.e. through feeling, intuition, and empathy, a la the essay ‘How to find solace in the age of AI: don’t think, feel’.

Tech vs. Tech

In closing, there is one more common concern within this realm that I admittedly don’t have the best answer to. At least not yet. And that is… once we’re ‘in the loop’ with AI, and spending more time ‘in the machine’ with spatial computing… how do we retain the best parts of humanity that are obviously negatively impacted by technology?

Things like our attention and mental health, or our physical movement, social skills, and time in nature.

My prediction is that we’re going to get increasingly good at using tech to combat tech.

Meaning… there are apps and tools that we can build to shape our relationship with tech, negate its afflictions, and build better habits & social connections.

Apple is already doing this today, and I thought it was one of the more compelling parts of the WWDC presentation. They showcased apps for journaling to aid with emotional awareness. Meditation for mindfulness. Fitness & outdoor hobbies of all types, with unique ways to measure, gamify, and socialize/connect with others, boosting motivation & consistency along the way. 

I think this trend is going to accelerate over the coming years. It’s already a cottage industry, with startups such as TrippVR for meditation & mental health, and FitXR for VR fitness.

The AVP’s arrival is going to enhance and legitimize these use cases, and over time, shift people’s relationship with technology while reducing the afflictions born of abstracted, ‘flat computing’. Or the afflictions born of boxes tethered to walls and TVs (aka: an Xbox or PS5). These current form factors are what keep kids/people stuck inside, isolated, and socially inept. 

In contrast… AR, in its ultimate form, will free kids from the confines of a screen and a living room with an outlet, thrusting them back into nature, back into face-to-face contact, and back into a world longed for by prior generations. A world of scratched knees from a treasure hunt in the park, of youthful pride from a fort forged in the woods, or of confidence from winning an argument while playing make-believe in the backyard.

Except this time, the treasure becomes real, the forts become labyrinths, and the figments of make-believe become not so make-believe…

Thanks for taking the time to read Evan's essay. Let us know what you think about this perspective. And if you enjoyed this piece, don’t forget to check out more of his essays and subscribe over at MediumEnergy.io. Here are some of our personal favorites:

- How to defend the metaverse

- Finding solace in the age of AI

- The Ultimate Promise of the Metaverse

SpectreXR Joins the VRARA

The hand-tracking firm joined the VR/AR Association (VRARA) in a bid to push the boundaries of XR technology and scale the delivery of its hand-tracking solutions.

The firm noted in a social media post that joining the VRARA “aligns perfectly” with the firm’s mission to contribute to the “growth and development of the XR industry,” with a focus on innovating how XR users interact with digital objects and environments.

As the XR space is growing at a rapid pace, it's important for us to be a part of amazing XR communities! We are thrilled to announce that SpectreXR is now a proud member of the VR/AR Association (VRARA)!  At SpectreXR, we're committed to pushing the boundaries of XR technology and delivering innovative solutions. Joining the VRARA aligns perfectly with our mission to contribute to the growth and development of the XR industry putting our focus on how we interact with digital objects and environments. As a member, we look forward to collaborating with fellow industry leaders and experts, sharing insights, and shaping the future of XR. Together, we aim to drive innovation, foster new partnerships, and create exceptional experiences for users across various sectors. Stay tuned for more updates as we embark on this exciting journey as a VR/AR Association member! 

The news follows a partnership between SpectreXR and HTC Vive earlier this year to drive innovation, immersion, and realistic interaction. SpectreXR is now part of HTC VIVE’s Developer Partner Program to promote “synergy” between the firms and improve the usability of HTC VIVE devices.

The move also sees OctoXR benefit from the HTC VIVE partnership, as the deal provides the headset vendor with OctoXR hand-tracking for its software development kit (SDK), further spreading OctoXR’s hand-tracking capabilities to more headsets alongside the recently adopted Pico portfolio.

SpectreXR CEO on the Future of Input

In a recent roundtable with XR Today, Ivan Rajković noted that his firm dedicated over 17,000 hours of R&D to hand-tracking technology, aiming “to provide the most realistic and intuitive hand interactions inside VR and AR environments.”

Rajković also said that hand-tracking technology can bring value to various industries by providing a “natural and intuitive way of interacting with digital interfaces,” which can lead to improved productivity, accuracy, and efficiency outcomes for workers.

The CEO also added:

Tracking solutions have become an essential component of modern digital interfaces, providing a more engaging and intuitive experience for end-users. Enterprise adopters can benefit greatly from incorporating tracking solutions into their products, creating a user-friendly and inclusive experience that enhances the overall value proposition of their offerings.

Moreover, the CEO noted that a “big change is happening on the industry side,” with Rajković explaining how many companies are “recognizing the importance of hand-tracking technology.”

Rajković noted:

With the continued advancements in hardware, hand tracking could become, and we believe that it will become the default way how we are interacting in virtual and augmented reality.

The CEO continued by explaining how SpectreXR is “excited” about the future of hand tracking and how the technology will affect AR/VR/MR applications: “we will continue to invest in research and development to ensure that our technology remains at the forefront of this rapidly evolving field,” Rajković remarked. 

How Hotelschool The Hague uses VR to improve hospitality education

We are proud to feature VRARA Member Warp VR's latest success story!

Customer intro

Hotelschool The Hague was founded and funded in 1929 by the hospitality industry to create a hub where students can develop, conduct research, and share their hospitality knowledge and skills in a realistic setting, honing their leadership talent for a successful career in the industry. Since then, it has become one of the top 10 hospitality management schools worldwide.

Hotelschool The Hague has several programmes on offer, including a four-year bachelor of arts in hospitality management, a fast-track bachelor programme, and master’s programmes in international hospitality management and hotel transformation.

Challenge

Higher education faces many financial, political, social, and technological challenges, and increasing competition. To attract and engage students, the education sector has a long history of adopting emerging technologies to supplement traditional pedagogical methods. From smartboards to laptops and even the internet itself, there have been many examples of technologies that have profoundly altered the way educators and students teach and learn.

Virtual reality helps to transform education by delivering meaningful teaching and learning experiences that enhance engagement and retention, and promote inclusivity.

Virtual reality lets students learn through experience. With VR, learning goes beyond textbooks and lectures, and students become active participants in their own education. By providing learners with engaging, memorable and impactful experiences, it helps them learn more effectively, build important skills such as empathy and collaboration, and retain what they learned long after they leave the classroom.

As an educational technology, VR using 360° video is great for placing learners in a different environment outside of the classroom, allowing them to experience a context and visualize concepts and situations in an immersive way. Perhaps most importantly, 360° video allows the learner to experience a situation or environment in the first person, allowing for delivery of emotion and encouraging agency as well as personal, real and active learning.

This medium is ideal for transferring emotion, allowing a learner to feel the emotion in a scenario and develop empathy by taking the perspective of someone different from themselves. 360° video also allows the learner to be placed in a new simulated environment that may be inaccessible, unsafe or expensive to experience in real life, allowing for contextual learning and visualization of concepts and context.

As a result, it is a natural fit for safety training, as well as soft-skills training such as leadership and guest relations, where learners experience difficult situations and build emotional intelligence through exposure. 360° video can also be used effectively for developing customer experience prototypes of new environments as well as visualizing standard operating procedures. Furthermore, 360° video can be used to create immersive case studies that place the learner in a first-person perspective and present them with a problem to resolve.

In hospitality, VR can enhance event planning, training, customer experience prototyping and marketing. Using 360° video, VR can recreate induction experiences, familiarising new employees with the work environment and letting them practice standard operating procedures before their first day. For educators and creators of educational content, 360° video is relatively easy to learn and a nice entry point into VR and immersive technology.

Solution

Hotelschool The Hague works together with other international hospitality universities to prepare students and professionals for a changing profession through immersive real-life learning experiences.

In one of the first projects, prompted by the pandemic, the Future of Work minor experimented with a 360° VR training scenario on dealing with difficult conversations. By making decisions about how to handle a challenging team member, learners build leadership skills through trial and error in a safe virtual environment, experience that can inform future situations they may face as leaders in the future of work.

To get up and running quickly, they sourced the scenario from an external content provider and provided Google Cardboard fold-out viewers to students to play it on their own mobile phones. Learners described the experience as being stimulating and impactful, with many feeling like they were actually present in the room.

In a subsequent project, students created their own scenarios around unconscious bias and diversity and inclusion. In a course on change management, learners experience resistance to change in the first person. Within the context of a fictional coffee company, learners take the role of a change consultant meeting a difficult branch manager. The company aims to enhance customer experience through training for frontline staff, which conflicts with the manager’s profit motive, resulting in continuous resistance. Learners face the challenge of making decisions that influence the branch manager to get on board with the change.

The school selected Warp VR to create and distribute realistic, 360° video-based experiences that support story branching and are easy to play on both VR headsets and smartphones. After each experience, reflection through experiential learning was facilitated to link back to change management theory while building competencies in soft skills such as decision making, critical thinking and perspective taking.
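
At its core, such a branched scenario is just a small graph of video clips connected by learner choices. Below is a minimal sketch of how one might be modelled; the scene names, fields and clip paths are purely illustrative and are not Warp VR’s actual data model.

    # A minimal sketch of a branched 360-degree scenario: scenes (video clips)
    # connected by learner choices. Names, fields and clip paths are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Scene:
        video: str                  # path or URL of the 360-degree clip
        prompt: str                 # question shown at the decision point
        choices: dict = field(default_factory=dict)  # choice label -> next scene id

    scenario = {
        "intro": Scene("intro.mp4",
                       "The branch manager resists the change. How do you respond?",
                       {"Listen to their concerns": "listen", "Push the agenda": "push"}),
        "listen": Scene("listen.mp4", "The manager opens up. What next?",
                        {"Propose a pilot": "pilot"}),
        "push": Scene("push.mp4", "Resistance hardens. What next?",
                      {"Step back and listen": "listen"}),
        "pilot": Scene("pilot.mp4", "The manager agrees to trial the training.", {}),
    }

    def walk(scenario, start="intro"):
        """Demo traversal: follow the first choice at each branch and print the path."""
        node = scenario[start]
        while node.choices:
            label, next_id = next(iter(node.choices.items()))
            print(f"[{node.video}] {node.prompt} -> chose: {label}")
            node = scenario[next_id]
        print(f"[{node.video}] End of scenario.")

    walk(scenario)

In a real authoring tool the choices would be driven by the learner’s gaze or controller input rather than a scripted walk, but the branching structure is the same.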

A related project feeding into this work is the Transforming Hospitality Education through Tech Abilities (THETA) Erasmus+ research project, on which Hotelschool The Hague works together with four partner institutions. THETA aspires to enable real-life learning through immersive technologies on mobile phones to enhance hospitality education, and to develop guidelines for educators and learners on creating their own easy-to-use, low-tech AR and VR experiences to enrich teaching and learning.

The THETA project is developing four prototypes using immersive technologies, which are refined using student feedback. One of these is Branched Storytelling, which uses 360° video for soft-skills training, giving students the chance to experience the emotion of a difficult conversation with a guest while testing decision-making and critical-thinking skills. Another prototype, The Outlets, allows students to feel present in hospitality outlets such as the kitchen and front desk without having to be there physically, while providing an introduction to equipment and processes.

“It has been an absolute pleasure working with the Warp VR team over the past two years who are always willing to help and provided support when needed. The platform is easy to use, looks professional, is reliable and works well even when distributing to large classes (over 120 students).” - Che Govender, Lecturer VR/AR in Education at Hotelschool The Hague

Results

Hotelschool The Hague has introduced immersive technologies into eight courses of its curriculum thus far. After the delivery of an immersive experience in the classroom, the didactic process shifts to structured reflection through an experiential learning model, relating and applying classroom theory to solve the problem faced in the VR/AR scenario. The experience is turned into a learning moment through critical in-class reflection and group coaching approaches.

The school uses 10 PICO headsets that can be screencast to digiboards for live group reflection. Adoption by other departments within the school is encouraged through informal workshops, dissemination presentations, collaboration with industry partners on developing immersive learning experiences, and manuals that promote both student- and educator-created content.

Initial research results indicated that students preferred highly interactive experiences with gamification elements. In addition, for AR experiences, rather than viewing 2D video on cutting techniques within an immersive platform, students wanted to view 3D objects (e.g. the knife) from different angles in order to maximize the benefits of the medium. Furthermore, the 360° video on dealing with a difficult guest was perceived as highly engaging.

To learn more about using VR for education, watch our webinars How to use VR in education and How to use VR for healthcare education, or read our blog post VR training in education: a game-changer for learning.

Download customer story

Click here to download this customer story in PDF format (no registration required).

Illuminating Immersion: UV-C's Impact in Virtual Reality - Uncovering Risks and Benefits

Come meet Uvisan at Immerse Global Summit | Metacenter Global Week in Orlando Oct 17-19

Introduction

In recent years, Virtual Reality (VR) has evolved from a niche technology into a mainstream phenomenon, captivating users with immersive digital experiences that blur the boundaries between the real and the virtual. As the demand for more lifelike and engaging VR content surges, so does the need to address the crucial aspect of hygiene within this dynamic realm. Enter Ultraviolet-C (UV-C) technology, a potent and versatile tool that is revolutionising the way we approach cleanliness and sanitisation in virtual reality.

UV-C, a short-wavelength ultraviolet light with germicidal properties, has long been recognised for its efficacy in sterilisation and disinfection applications in various industries. Now, its remarkable potential to elevate VR experiences through improved hygiene standards is taking centre stage. In this article, we delve into the pivotal role of UV-C in VR, focusing on its ability to transform the way we perceive and maintain hygiene within virtual environments. From sterilising VR equipment to mitigating the risks of shared experiences, UV-C technology is paving the way for a cleaner, safer, and more enjoyable virtual reality landscape. There are, of course, risks to implementing UV-C technology; in this article we explore both the benefits and the risks of using UV-C, and offer guidance on how to minimise any potential risk that may come from using a UV-C product.

Understanding UV-C

Understanding UV-C (Ultraviolet-C) involves exploring the intriguing world of short-wavelength ultraviolet light, a powerful and unique form of electromagnetic radiation. Falling within the 100 to 280 nanometer range, UV-C possesses exceptional germicidal properties, making it an effective tool in the battle against harmful microorganisms. The key to UV-C's potency lies in its ability to disrupt the DNA and RNA of bacteria, viruses, and other pathogens, rendering them unable to reproduce and thus neutralising their harmful effects. This property has led to the widespread application of UV-C in various industries, including water and air purification, healthcare, and food processing. In recent times, UV-C has also found its way into the realm of Virtual Reality (VR) and Augmented Reality (AR), where its benefits extend beyond mere sterilisation. UV-C technology is being harnessed to improve the hygiene of VR equipment, ensuring a safer and cleaner user experience. However, it is crucial to recognise that UV-C exposure poses potential dangers to living organisms, including humans. As we integrate UV-C into the realm of VR and AR, a balanced understanding of its capabilities and limitations becomes essential to harness its power effectively and responsibly.

History of UV in VR / AR

In the mid-2010s, the first instances of UV-C implementation within Virtual Reality (VR) emerged as companies and researchers explored its germicidal potential. An early example was the introduction of automated UV-C cleaning stations in VR arcades and public VR spaces. These stations allowed users to disinfect VR headsets and controllers between sessions, minimising the risk of infections spreading among different users. This was, of course, pre-COVID.

The Virtual Reality (VR) industry experienced a significant negative impact due to the COVID-19 pandemic. Before the outbreak, VR arcades, amusement parks, and entertainment venues were thriving, offering consumers a chance to experience VR in a social and interactive setting. However, with strict social distancing measures and lockdowns in place, these public VR spaces faced closure, leading to revenue losses and business uncertainties. The fear of potential virus transmission through shared VR equipment deterred many customers from visiting these establishments, further exacerbating the industry's struggles. Consequently, VR arcade operators and businesses had to adapt rapidly to the changing landscape, investing in rigorous sanitisation protocols, implementing UV-C disinfection systems, and adhering to strict hygiene standards to regain public trust. Despite the challenges, the resilience of the VR industry and the implementation of UV-C technology played a vital role in the gradual recovery of public VR spaces, fostering a safer and cleaner environment for users eager to experience the joy of VR in shared settings once again.

Post-COVID, the popularity of UV-C technology skyrocketed with the adoption of UV-C disinfection systems like Uvisan cabinets. Uvisan cabinets, equipped with powerful UV-C lamps, offered an automated and efficient way to sanitise VR headsets and, importantly, controllers between users. These cabinets use UV-C radiation to deactivate harmful pathogens, ensuring a clean and safe experience for each participant. These pioneering applications showcased the potential of UV-C technology to revolutionise VR hygiene, providing users with a worry-free, germ-free, and highly enjoyable virtual experience. Suffice it to say, while COVID may have been the catalyst for UV-C technology’s popularity within VR/AR, adoption has now gone far beyond that, with hygiene itself, rather than COVID-specific prevention, as the focal point.

Benefits of UV-C in VR / AR

One of the most significant advantages of UV-C is its powerful germicidal properties, which make it highly effective in disinfecting VR equipment and accessories. Automated UV-C cleaning systems, such as Uvisan cabinets, provide a quick and efficient way to sanitise VR headsets, controllers, and other shared accessories, reducing the risk of cross-contamination in public VR spaces. By eradicating harmful pathogens, UV-C ensures a safer and more hygienic environment for users, instilling confidence in their virtual experiences. Moreover, UV-C technology helps to extend the lifespan of VR equipment by keeping it free from harmful microbes without the need for chemicals and mechanical cleaning, leading to cost savings and reduced equipment maintenance. The implementation of UV-C in VR/AR not only elevates the overall hygiene standards but also contributes to a more enjoyable and worry-free immersive experience for users, ultimately advancing the adoption and growth of these transformative technologies.

In the vast expanse of virtual possibilities, UV-C and VR appear to be the ideal pairing, their stars aligned. However, as with any celestial match, there are cosmic concerns to navigate. UV-C, while incredibly beneficial, carries its share of hazards. Integrating UV-C into VR establishments, or any industry for that matter, demands a comprehensive understanding of the risks involved. Selecting UV-C equipment requires careful consideration, ensuring it meets stringent safety standards and is equipped with proper shielding measures to protect users from its potent radiation. Before embracing the union of UV-C and VR, it is crucial to recognise the importance of responsible implementation and the vigilance required in safeguarding the well-being of those venturing into the virtual realms.

Dangers of UV-C

UV-C, despite its remarkable benefits, presents a range of potential dangers that must be addressed with utmost care and attention. When handled responsibly, these hazards can be effectively managed to ensure the well-being and safety of users. However, any missteps in implementation could result in severe risks to health and safety.

The Harmful Impact of UV-C on Our Eyes

Ultraviolet C (UV-C) radiation, with wavelengths ranging from 100 to 280 nanometers, is the most energetic and harmful type of ultraviolet light. While natural UV-C radiation is mostly absorbed by the Earth's atmosphere, artificial sources like germicidal lamps pose a significant risk to our eyes. Below we explore the harmful impact of UV-C on our eyes, paying attention to the specific health risks and providing in-depth academic references to support the information presented.

Acute Photokeratitis: UV-C's Painful Consequence

Acute photokeratitis, also known as "welder's flash" or "snow blindness," is a painful eye condition caused by overexposure to UV-C radiation. This condition affects the cornea, the transparent outer layer of the eye, and can result in the following symptoms: eye pain, redness, excessive tearing, light sensitivity, and a feeling of grittiness. Prolonged exposure to UV-C radiation, even for a short duration, can lead to acute photokeratitis.

Academic Reference:

  • Pitts, D. G., & Cullen, A. P. (2000). UV and Infrared Absorption Spectra, Ultraviolet (UV) Radiation Properties, and UV Radiation-Induced Injury. Survey of Ophthalmology, 45(4), 349-361. doi:10.1016/S0039-6257(00)00169-5

Corneal Damage: A Serious Concern

The cornea is highly susceptible to damage caused by UV-C radiation. Direct exposure to UV-C rays can lead to corneal injuries, which may result in pain, blurry vision, and potential long-term vision impairment. Corneal damage requires immediate medical attention to prevent further complications and promote proper healing.

Academic Reference:

  • McCarty, C. A., Taylor, H. R., & Key, S. N. (2000). Corneal Light Shielding and UV-B-Induced Ocular Surface Squamous Neoplasia. Archives of Ophthalmology, 118(3), 392-393. doi:10.1001/archopht.118.3.392

Conjunctival Irritation: An Inflammation Risk

The conjunctiva, the thin, transparent membrane covering the whites of the eyes and the inner eyelids, can also suffer from UV-C-induced irritation. Prolonged UV-C exposure can cause inflammation and discomfort in the conjunctiva, making it red, swollen, and potentially leading to temporary vision disturbances.

Academic Reference:

  • Kuckelkorn, R., Redbrake, C., & Reim, M. (2001). Acute Ultraviolet-B-Induced Conjunctivitis and Its Mechanism. Investigative Ophthalmology & Visual Science, 42(6), 1429-1434. PMID: 11381087

Long-term Vision Issues

While the acute effects of UV-C exposure are painful, long-term UV-C exposure can result in chronic vision issues. Prolonged exposure causes cumulative damage to the cornea and other eye structures, potentially resulting in irreversible vision problems, including reduced visual acuity and other visual impairments.

Academic Reference:

  • Feldman, R. M., & Schultz, R. O. (1982). Ultraviolet Light-Induced Corneal Changes. Transactions of the American Ophthalmological Society, 80, 173-191. PMID: 6758506

The Harmful Impact of UV-C on Our Eyes - Summary

Ultraviolet C radiation poses significant risks to our eyes, with acute photokeratitis, corneal damage, conjunctival irritation, and potential long-term vision issues being some of the adverse effects. It is essential to be cautious and take appropriate safety measures, especially when dealing with artificial UV-C sources like germicidal lamps. The academic references provided support the scientific understanding of the harmful impact of UV-C on our eyes, emphasising the importance of protecting our eyes from this potent form of ultraviolet radiation.

The Harmful Impact of UV-C on Our Skin

Skin Burns: The Immediate Consequence of UV-C Exposure

Accidental direct exposure of the skin to UV-C radiation can result in skin burns that are similar to sunburns. These burns are characterised by redness, pain, swelling, and blistering. The severity of the burn depends on the duration and intensity of UV-C exposure.

Academic Reference:

  • Litchfield, D. J. (2005). Skin Cancer and UVR Exposure. In: Sunscreens: Development, Evaluation, and Regulatory Aspects. New York: Marcel Dekker, Inc. pp. 491-507. ISBN: 9780824757914.

Premature Aging: UV-C's Silent Impact

Repeated exposure to ultraviolet radiation can lead to premature aging of the skin (photoaging). While the sun’s UV-C is absorbed by the atmosphere before it reaches us, direct exposure to artificial UV-C sources adds to the skin’s cumulative UV burden.

Academic Reference:

  • Fisher, G. J., & Kang, S. (2002). Mechanisms of Photoaging and Chronological Skin Aging. Archives of Dermatology, 138(11), 1462-1470. doi:10.1001/archderm.138.11.1462

Skin Cancer: A Long-term Risk

Overexposure to UV radiation can lead to DNA damage in skin cells, increasing the risk of developing skin cancers like melanoma, basal cell carcinoma, and squamous cell carcinoma.

Academic Reference:

  • Lomas, A., Leonardi-Bee, J., & Bath-Hextall, F. (2012). A systematic review of worldwide incidence of nonmelanoma skin cancer. British Journal of Dermatology, 166(5), 1069-1080. doi:10.1111/j.1365-2133.2012.10830.x

Immunomodulation: Compromising Skin's Defense

UV-C radiation can also weaken the skin's immune system, reducing its ability to defend against infections and environmental stressors. This immunomodulatory effect can make the skin more vulnerable to various diseases and ailments.

Academic Reference:

  • Ullrich, S. E. (2005). Mechanisms underlying UV-induced immune suppression. Mutation Research/Fundamental and Molecular Mechanisms of Mutagenesis, 571(1-2), 185-205. doi:10.1016/j.mrfmmm.2004.10.018


The Harmful Impact of UV-C on Our Skin - Summary

Ultraviolet C radiation, though naturally blocked by the Earth's atmosphere, can have harmful consequences when our skin is directly exposed to it through artificial sources like germicidal lamps. Skin burns, premature aging, and the potential long-term risk of skin cancer are among the concerning effects of UV-C exposure on our skin. It is vital to be cautious and take appropriate safety measures when handling UV-C-emitting devices to protect our skin from this potent form of ultraviolet radiation. The academic references provided serve as evidence of the harmful impact of UV-C on our skin, emphasising the significance of protecting skin from this potentially dangerous radiation.

Navigating the Dangers of UV-C

As Ultraviolet-C (UV-C) technology gains traction across various industries, it brings with it a range of benefits, from sterilisation to improved hygiene. However, the potential dangers of UV-C radiation cannot be ignored. To ensure the safe utilisation of UV-C, particularly in scenarios such as Virtual Reality (VR) equipment sanitation, it's crucial to adopt precautionary measures. In this article, we delve into the methods and guidelines for protecting oneself from the potential hazards of UV-C exposure. Unfortunately, there is no governing body and there are no official safety guidelines, but the points below give a comprehensive overview of what to look for when assessing the safety of a UV-C product.

IEC 62471 - A Crucial Benchmark for Safety

One of the cornerstones of protecting yourself from UV-C dangers is to ensure that the equipment in use adheres to recognised safety standards. The IEC 62471 standard specifically addresses photobiological safety, including UV-C radiation. It establishes exposure limits for various wavelength ranges and outlines the measurement techniques used to determine potential risks. Prior to implementing UV-C technology, it's imperative to verify that the equipment bears the appropriate certifications, indicating compliance with IEC 62471. Relying on certified equipment provides a crucial baseline for minimising the risks associated with UV-C exposure. If a product has holes or a direct line of sight to the bulbs, that is a strong warning sign it is not certified or safe (glass or any other transparent material absorbs UV-C, so any light reaching you directly has not been blocked). Uvisan cabinets are in the exempt category, indicating zero leakage of UV-C light from the cabinets.
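
To build intuition for what such exposure limits mean, here is a toy check based on the general dose relationship (dose = irradiance × time). The limit and leakage values below are illustrative placeholders, not figures from IEC 62471; consult the standard and your equipment's certification for real numbers.

    # Toy exposure check in the spirit of photobiological limits:
    # dose (mJ/cm^2) = irradiance (mW/cm^2) x time (s).
    # The limit and leakage values are illustrative placeholders, NOT figures
    # from IEC 62471 -- consult the standard and the equipment's certification.

    def uvc_dose_mj_per_cm2(irradiance_mw_cm2, seconds):
        """Radiant exposure accumulated at constant irradiance."""
        return irradiance_mw_cm2 * seconds

    EXAMPLE_DAILY_LIMIT = 6.0   # mJ/cm^2, placeholder for illustration only
    leak = 0.002                # mW/cm^2, hypothetical measured leakage

    for minutes in (1, 10, 60, 480):
        dose = uvc_dose_mj_per_cm2(leak, minutes * 60)
        verdict = "over" if dose > EXAMPLE_DAILY_LIMIT else "within"
        print(f"{minutes:>3} min -> {dose:6.2f} mJ/cm^2 ({verdict} the example limit)")

The point of the sketch is that even a tiny leak accumulates with time, which is why equipment in the exempt category, with no measurable leakage at all, is the benchmark to look for.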

Opt for Quality Bulbs: Prioritising Safety and Ozone Mitigation

When it comes to protecting yourself from the potential dangers of UV-C radiation, the quality of the bulbs you choose plays a pivotal role. Opting for bulbs manufactured by reputable and well-established companies is essential not only for maximising sterilisation effectiveness but also for mitigating the risks associated with UV-C exposure. A crucial factor to consider alongside quality is the bulb's potential to produce ozone. UV-C radiation can interact with oxygen molecules in the air, resulting in the generation of ozone, which can have adverse effects on respiratory health. High-quality bulbs are designed with measures to minimise ozone production, ensuring that the benefits of UV-C technology are realised without compromising air quality or personal safety. Prioritising both quality and ozone mitigation is key to harnessing the advantages of UV-C while safeguarding your well-being.

Shining Light on Safety

In the ever-expanding realm of UV-C technology, safeguarding oneself from potential hazards is paramount. The journey begins with ensuring equipment adheres to certifications like IEC 62471, establishing a baseline for safe usage. Investing in quality bulbs from reputable manufacturers not only boosts the efficacy of UV-C applications but also minimises exposure risks. By cultivating an acute awareness of signs of poor manufacture and exercising caution, individuals can actively protect themselves from the potential dangers of UV-C radiation. As UV-C technology continues to redefine industries, responsible use becomes the guiding principle, ensuring its transformative benefits come without compromising safety.

Uvisan cabinets are rigorously tested and fully certified, holding certificates for:

  • IEC 62471

  • ISO 9001

  • ISO 14001

  • CE

  • RoHS

Only high-grade UV-C bulbs are used in all Uvisan products.



#ICYMI: What’s Taking Shape in the Industrial Metaverse?

VRARA's Kevin O'Donovan shared his vision for digital twins and the industrial metaverse with XR Today.

With NVIDIA’s recent announcement of massive updates for its CloudXR and Omniverse technologies, the industrial metaverse has seen a huge jump in interest.

Across the headlines, the company boasted a roughly 10 percent share-price spike on Wednesday last week. This led to renewed confidence in the Santa Clara-based firm's industrial ambitions and restored faith in the metaverse, albeit in its enterprise incarnation.

XR Today spoke to Kevin O’Donovan, Co-Chair, VRARA Industrial Metaverse and Digital Twin Committee, to examine the potential of the industrial and enterprise metaverse. He is a tech evangelist based in Nice, France, and serves as an expert in blockchain technologies for the European Commission.

O’Donovan has also regularly contributed to XR Today’s Big News Show since its inception.

Following the London-based Enterprise Metaverse Summit in late June, O’Donovan shared his thoughts on the industrial metaverse, digital twins, and immersive XR.

Siemens attended the event to speak on the need to scale up industrial and enterprise XR. O'Donovan discussed his firsthand experience with the massive German infrastructure firm.

XR Today: What can you tell us about your experience with Siemens’ industrial XR technologies?

Kevin O’Donovan: I’ve been collaborating with Siemens for the past four or five years, and for those familiar with the company, Siemens is one of the worldwide leaders in industrial automation, energy systems, healthcare products, and many others.

They’ve discussed ‘the physical meets the digital’ for five to eight years. Honestly, they have been talking about digitising industries and creating digitalisation technologies, leading to digital twins. over the past couple of years.

They have design tools, a simulation suite (Simcenter), InEx, and many automation software tools. If you're designing a factory, you can design it as a 3D model, and Siemens has been in this space for years.


They also have Internet of Things (IoT) systems and grid modelling software, but it was just over a year ago that Siemens made a big announcement around their Xcelerator strategy, looking to team up in a more structured way with many new and existing partners.

They focused on interoperability because everyone knows you can't do it alone if you want to digitalise factories, grids, and industries. With the new system, you can mix and match with various partners. They also announced the Industrial Metaverse, which has been a key strategic imperative for the company for the past 12 months.

In a big announcement, they revealed a month ago that they had invested about two billion dollars in a new factory in Singapore — a completely automated industrial metaverse factory.

They also announced a 500 million euro investment in Erlangen, just north of Nuremberg, one of their big campuses where they’re building a new technology research centre for automation, digital twins, and the industrial metaverse to take the technologies to the next level.

I’ve been collaborating with them and plan to visit their campus in Erlangen, where they opened the new Industrial Metaverse Experience Centre last week.

Over the last 12 months, we’ve seen them talk about the industrial metaverse along with NVIDIA, Nokia, and many other system integrators.

If you’re at Hanover Messe, everyone’s talking about digital twins and generative AI and how they can bring [the technologies] together to create what we’ve found regarding industry 4.0 for many years. That’s where it’s headed.

XR Today: What is the value of digital twins to industries and enterprises?

Kevin O’Donovan: Firstly, it depends on how you define a digital twin. Many people in the industry will say that they’re not new. At their basic level, a digital twin is a digital representation of something in the real world.

This could be a 3D computer-aided design (CAD) model, real-time data from my IoT systems, or a real-time data graphical user interface that’s a digital twin for my current production.
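
As a rough illustration of that basic level, the sketch below models a twin as an object that mirrors live IoT readings for a single asset and derives simple state from them. The asset, sensor names and thresholds are invented for illustration; this is not any vendor's actual API.

    # A minimal digital twin sketch: mirror live IoT readings for one physical
    # asset and derive simple state. Asset, sensors and thresholds are invented.
    from dataclasses import dataclass, field

    @dataclass
    class PumpTwin:
        asset_id: str
        readings: dict = field(default_factory=dict)  # latest telemetry by sensor

        def ingest(self, sensor, value):
            """Update the twin with a fresh reading pushed from an IoT gateway."""
            self.readings[sensor] = value

        def needs_maintenance(self):
            """Toy rule: flag the pump when vibration or temperature runs high."""
            return (self.readings.get("vibration_mm_s", 0.0) > 7.1
                    or self.readings.get("temp_c", 0.0) > 85.0)

    twin = PumpTwin("pump-17")
    twin.ingest("vibration_mm_s", 8.3)   # e.g. streamed from the shop floor
    twin.ingest("temp_c", 71.0)
    print(twin.asset_id, "maintenance flag:", twin.needs_maintenance())

A production twin would add history, 3D geometry and simulation on top, but the core idea is the same: a live digital mirror of a real asset that you can query and reason about.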

We’re seeing that, given the advancements in core technologies, whether from Intel, AMD, ARM, NVIDIA, and others, require more and more graphics, AI, and compute capabilities. This takes more data from the real world and digital twins by pooling data from multiple silos across different applications, and they don’t talk to each other.

We’re also seeing the next generation of digital twin technology, and many companies are adding more and more to boost simulation capabilities, generative AI to generate synthetic data for more simulations and scenario planning, and getting more data from real-time IoT systems.

However, we’re starting to see data pooled from multiple siloed digital twins, and that’s what platforms like NVIDIA’s Omniverse, Siemens, Bentley iCloud, and others are doing with many of their partners that we need to pool data from those different sources.

You then get this next-generation digital twin that offers a holistic view of everything in a digital format with 3D spatial interfaces.

These can perform scenario planning for business resiliency, maintenance, and optimisation for grids, factories, product designs, and recycling. It’s like digitisation on steroids.

Additionally, in our world, we like coming up with new terms. We had the embedded business for many years before we called it the industrial IoT (IIoT), and now the metaverse is the next game in town, with digital twins as the foundational building block.

You then speak about terms like XR, VR, AR, generative AI, 5G networks, and the latest edge computing from Intel, AMD, and NVIDIA. After bringing all of this together, we’re now at a stage where we’re taking digital twins to the next level.

I often tell people, “Look, digital twins mean different things to different people.” Great things are taking shape at the Digital Twin Consortium, where people can see digital twin maturity models.

These frameworks allow people to determine what digital twins mean, which exist today, and what problems they solve. As cool as the technology is, we must see if it makes you more money, saves money, or increases efficiency.

XR Today: There are a lot of use cases developing for digital twins, namely for companies like NVIDIA, Unity, Unreal Engine, and GE Digital. How are they being implemented in real use cases?

Kevin O’Donovan: People may say that digital twins aren’t new. We’re taking them to the next level now that we have a platform and immersive experiences.

This doesn’t always mean you’re in VR or XR, but you could instead view a 3D model of your factory [and] see what’s going on. This can allow you to reconfigure things and determine if you can boost production based on real-time data from the current production line.

We can also see if anything will break, if new shifts are needed, or if systems require predictive maintenance before speeding up production.

Conversely, two of us could be in different parts of the world and collaborate in the same environment. We’re not looking at two SAP screens but are actually in immersive environments.

It’s also not like a Zoom or Teams call anymore. We’ve recorded data for years that stuff sticks as we live in an immersive world if you’re trained in immersive ways.

So, as long as we use these technologies from Industry 4.0—the industrial metaverse—we can stay competitive as a company, industry, or country. Where we’re headed with automation, design, virtual worlds, and other things can also add to your sustainability story.

All the new infrastructure is being built for our [sustainable] energy transition, whether electric vehicle (EV) factories, new grids, wind farms, hydrogen plants, or carbon capture plants. Everything is now being done in a digital twin model, so everything can be planned before infrastructure is physically built.

However, if you’re at an existing factory and have equipment from the last 10 to 15 years, your first step on that digital transformation journey is to put in all the IoT equipment to record real-time data in order to measure predictive maintenance.

That’s the journey we’re all on. It’s fascinating times, [and] people should not ignore this stuff.

XR Today: What did you think of PricewaterhouseCoopers' Four Pillars to the Metaverse? Do they resonate with how the industrial metaverse is developing?

Kevin O’Donovan: PwC’s four pillars—employee experience, training, client experience, and metaverse adoption—are key performance indicators (KPIs). Anybody in the industry wanting to invest will ask about the return on investment (ROI) level.

This can happen through happier employees, better collaboration with metaverse technologies, and other metrics. However, other companies may use other methodologies, with different ways to measure the success of digitalisation projects in a company, city, or country.

These KPIs let people know what success looks like and align everyone around the same goals. [However], if it doesn't help client or employee experiences, people must consider why they use it.

Such frameworks are key. We’ve seen that, in the industry, people install technologies because they’re ‘cool.’ They have to have an ROI, and that’s one of the key drivers for why the industrial metaverse is not going away.

Digitalisation will become the only game in town, leading to better digital twins, resiliency, simulation capabilities, and ‘what if’ scenarios—all in real-time.

XR Today: How have digital twins and the industrial metaverse evolved over the years to improve infrastructure?

Kevin O’Donovan: I often chat with people in the industry, and they say, “Haven’t we been doing that for years? We don’t just build wind farms and hope they’ll work.” I agree with this.

There’s a lot of experience, competence, Excel models, and simulations that go into these projects. How do you put the mooring lines for offshore, floating wind turbines?

In the past, we didn’t have the computing, algorithms, or AI to generate more synthetic data and just run with it. Previously, we’d run ten simulations, some of which were paper-based.

Now, you can run hundreds of thousands of simulations. Using these simulations, we can explore 'what if' scenarios like tide, temperature, and climate changes, including that 'one-in-a-hundred-year storm.'

Almost every utility on the planet uses software to design distribution grids, allowing engineers to simulate what happens if another ten people plug in their electric vehicles, its effects on substations, and other issues.
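
To make that kind of 'what if' study concrete, here is a toy Monte Carlo sketch: how often does adding N home EV chargers push one substation past its rating? Every figure (rating, load statistics, charger power, charging probability) is invented for illustration and is not from any utility's actual tooling.

    # Toy Monte Carlo "what if": how often does adding N EV chargers overload
    # a substation? All figures are invented for illustration.
    import random

    SUBSTATION_RATING_KW = 400.0
    BASE_LOAD_KW = (250.0, 40.0)   # assumed mean, std-dev of existing evening load
    CHARGER_KW = 7.2               # assumed home charger power

    def overload_probability(extra_evs, trials=50_000):
        overloads = 0
        for _ in range(trials):
            load = random.gauss(*BASE_LOAD_KW)
            # each EV independently has a 60% chance of charging at peak (assumed)
            load += sum(CHARGER_KW for _ in range(extra_evs) if random.random() < 0.6)
            if load > SUBSTATION_RATING_KW:
                overloads += 1
        return overloads / trials

    for n in (0, 10, 20, 30):
        print(f"{n:>2} extra EVs -> P(overload) ~ {overload_probability(n):.3f}")

Real grid-planning tools model the network's physics rather than a single summed load, but the principle is the one O'Donovan describes: run huge numbers of randomised scenarios and read off the risk.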

This stuff is happening, and we can’t ignore it, especially the efforts from PwC, Siemens, NVIDIA, Nokia, and many others. While we talk about the Apple Vision Pro, Meta Quest Pro and 3, and [metaverse platforms like] Decentraland, the real story is happening in the industry and enterprise.

Keep an eye on it because it’s not going away.