Red 6, Lockheed, KAI Partner for ATARS Fighter Pilot Trainer

Post originally appearing on xrtoday.com by Demond Cureton.

Red 6 and two major defence and aerospace companies have joined forces to develop an ecosystem of augmented reality (AR) trainers and solutions, it was revealed on Tuesday.

Lockheed Martin, Korea Aerospace Industries, and Red 6 formed the partnership to create a tech framework to implement the latter’s Advanced Tactical Augmented Reality System (ATARS).

The new toolkit will integrate into Lockheed Martin’s T-50 fighter jet and Prepar3D software simulation suite. Building on the partners’ previous solutions, the latest innovation aims to extend pilot experience and embed training programmes directly into the T-50.

Red 6’s ATARS solution provides multi-node, all-domain AR technologies for outdoor synthetic training environments.

Pilots using the multi-user trainer can experience flight simulations with bespoke environments, scalable scenarios, and other variable factors. Trainees can tackle combat scenarios repeatedly and on the fly, adjusting parameters while monitoring progress with key analytics.

Comments on Red 6 Partnership

Daniel Robinson, Founder and Chief Executive, Red 6, said the ATARS system addressed “critical training inefficiencies” in current training platforms.

He added:

“There has never been a training environment in which you can combine virtual assets being visually represented in the real outdoor world, and the opportunity to overlay this training into ground-based training, until now.”

Additionally, Aimee Burnett, Vice-President of Business Development for the Integrated Fighter Group, Lockheed Martin, added,

“Our vision is to help our customers leverage emerging technologies to seamlessly and securely connect all assets for joint missions, and enable fast and decisive action. Lockheed Martin has made significant advances in digital engineering and built strategic partnerships that are enabling us to accelerate development across our platforms.”

Citing examples, Burnett said that Lockheed aimed to “build advanced 21st Century Security capabilities” to support customer needs. Her firm could achieve this via “continued integration” with Red 6’s training technologies.

Speaking further, she said that Lockheed Martin’s T-50 programme remains in demand globally, with high-profile customers such as the US Air Force (USAF) leveraging the fighter jet for their trainer programmes.

Recently, the USAF outlined three near-term training missions: tactical training, tactical fighter surrogate, and adversary air support. The defence firm’s TF-50A, a light attack fighter jet, includes numerous upgrades, including electronic warfare systems, radar, and tactical data links.

These meet the USAF’s Air Combat Command requirements, and plans to collaborate on the US Navy’s trainer programme are ongoing.

Red 6 hopes to expand its collaboration with the Bethesda, Maryland-based firm’s family of fighter jets, including the F-16, F-22, and its latest innovation, the F-35.

Red 6 Trains in Santa Monica

The news comes after the Orlando, Florida-based enterprise announced in June last year that it had completed a successful test of its fighter jet trainer.

Using artificial intelligence (AI) and AR content, the training platform allows pilots to train with virtual combatants in real-time and with analytical performance monitoring.

At the time, Red 6 showcased the trainer on Berkut 540 aircraft at Santa Monica Airport. Furthermore, it leveraged its Combined Augmented Reality Battlespace Operation Network (CARBON) to provide simultaneous interactivity with RT3D assets.

According to reports, Red 6 aims to test its AR and AI-fuelled dogfighting capabilities next year.

Robinson on Training Future Fighter Pilots

The news follows reports that the United Kingdom’s Royal Air Force (RAF) also aims to leverage Red 6 tools to empower the British fighting force.

Speaking to the Express, Robinson, a former Tornado pilot and the first foreign national to fly the F-22 Raptor, urged the RAF to explore “readiness, lethality, and training” against “peer adversaries.”

He said at the time: “We don’t want to go anywhere near a conflict right? But we need to be credible and to be credible, we need to be lethal. We need to train and we need to train rather than every single day at scale.”

One reason he founded the company was to address shortfalls in at-scale training and a persistent shortage of fighter pilots.

Red 6 Fighter Jets Train at the ARCADE

Additionally, Red 6 inked a massive deal with Boeing in September last year to boost its ATARS training platform. Under the partnership, Boeing will integrate ATARS on its next-generation T-7 and F-15EX aircraft.

Pilots can use the solution to train with unmanned aircraft and simulated threats in real-time. Training modules can also incorporate numerous simulated procedures, including manoeuvres, refuelling, and target engagement.

At the time Robinson said: “Readiness and lethality are critical if our warfighters are to prevail against peer adversaries. Boeing’s next-generation platforms will be the first aircraft in the world that are capable of entering our augmented reality training environment.”

To track performance metrics, Red 6 will offer its AR Command and Analytic Data Environment (ARCADE) for Boeing environments. This allows instructors to debrief, plan, and analyse pilot performance with recorded metrics.

Actionable insights with AR/VR attention data

Spatial tracking reveals actionable insights to optimise AR/VR projects

Understanding which specific areas of an AR or VR experience get the most attention is essential insight for XR companies and their clients. Are people spending time viewing the areas you expect them to? Which areas do they look at first (versus last), and does this support your business objectives? Freeform behaviour with 3D spatial content doesn’t always follow the UX/UI design once an AR/VR experience is released into the wild. Human behaviour in 6DOF spatial environments is often unpredictable and surprising, so CORTEXR spatial tracking surfaces actionable attention insights to optimise the success of AR and VR projects.

Attention heatmaps deliver a complete picture of AR/VR user behaviour

Attention area heatmaps show the time spent viewing specific areas of a scene or object (including individual objects tracked within a scene). Attention priority heatmaps track the sequence of attention across scenes and objects to measure the order in which people viewed different areas. CORTEXR attention metrics add invaluable insight to existing data on total session/dwell time and clicks/interactions to deliver a complete picture of user behaviour. The areas which don’t get much attention are often as insightful as those that grab the most, so we’ve highlighted the key insights which help our customers measure their AR/VR projects against business goals.
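To make the two heatmap metrics concrete, here is a minimal sketch of how dwell time per object (attention area) and first-view order (attention priority) can be derived from time-ordered view samples. The sample format and function names are illustrative assumptions, not the CORTEXR API.

```python
# Sketch: derive "attention area" (dwell time) and "attention priority"
# (first-view order) from per-frame view samples. Illustrative only:
# the record format is an assumption, not the CORTEXR data model.
from collections import defaultdict

def attention_metrics(samples):
    """samples: time-ordered (timestamp_seconds, object_id or None) pairs."""
    dwell = defaultdict(float)  # object_id -> total seconds in view
    first_seen = {}             # object_id -> rank of first view
    prev_t, prev_obj = None, None
    for t, obj in samples:
        if prev_obj is not None:
            dwell[prev_obj] += t - prev_t   # credit elapsed time to prior object
        if obj is not None and obj not in first_seen:
            first_seen[obj] = len(first_seen) + 1
        prev_t, prev_obj = t, obj
    return dict(dwell), first_seen

# A user views the logo, moves to the product, then glances back at the logo.
dwell, order = attention_metrics(
    [(0.0, "logo"), (1.2, "product"), (3.7, "logo"), (4.0, None)])
print(dwell)  # {'logo': 1.5, 'product': 2.5}
print(order)  # {'logo': 1, 'product': 2}
```

Low dwell time combined with an early first-view rank is exactly the kind of signal described above: people found an area quickly but didn’t linger on it.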


Optimise AR/VR storytelling to deliver marketing and advertising objectives 

The narrative for AR/VR marketing and advertising projects needs to meet campaign objectives such as brand awareness and conversion rates. Number of views, dwell time and interactions are fine for desktop/mobile computing but don’t give you the full picture when it comes to user behaviour in spatial computing. Actionable insights include:

  • Communication objectives: the level of attention given to different parts of the storytelling experience provides a hierarchy of communication against conversion targets and other campaign goals. Optimising the most important messages based on campaign objectives drives KPIs in the right direction, e.g. ensuring actions/events have high attention drives higher conversion rates.

  • Brand asset recognition (logos, text etc) as a percentage of the overall experience to measure brand recall and whether brand standout needs to be dialled up/down. Creative executions sometimes dominate brand messages (and vice versa), so optimising brand asset placement and timings delivers higher brand recall.

  • Interactive elements (buttons, menus etc) and the priority of these actions shows whether people see these in the optimal sequence. Millisecond visual scanning of a scene or object doesn’t always follow the desired order of events so optimising interactive elements by position, size and timing increases engagement with the right elements at the right time.


Analyse Digital Twin and AR/VR enterprise projects with attention metrics

Virtual versions of products, equipment, factories, buildings and cities enable companies to improve designs, reduce maintenance costs and increase operational efficiencies. Adding attentional data to the IoT sensor and interaction data of digital twin assets delivers an invaluable layer of insight on user behaviour in real-world conditions. Actionable insight examples:

  • Design comprehension based on distribution of attention across all areas of the asset validates ‘big picture’ objectives where people need to understand the entire experience e.g. architectural designs or construction sites in the context of existing buildings.

  • Diagnostic details on high versus low attention areas highlight whether the most important components naturally receive people’s focus e.g. aerospace engine or manufacturing process which needs operatives to identify and test a maintenance issue.

  • Risk assessment: activities which should receive high attention on specific tasks or important components are flagged as needing improvement where they don’t, e.g. medical equipment or an emergency procedure with blind spots which need more attention to quickly spot a problem and respond to a situation.

10 benefits of spatial tracking for actionable AR/VR attention insights

CORTEXR data analytics plug-in enables all AR/VR companies to access actionable attentional insights as standard practice. From Education, Training and Research to Healthcare, Manufacturing and Retail, advanced data analytics are at your fingertips.

  1. Spatial tracking for spatial computing unlocks attention data analytics across all headsets and handsets

  2. Analyse real-time attention across all devices and platforms with prebuilt dashboards and heatmaps

  3. Standardised attention metrics deliver insights across all projects without needing data analytics expertise

  4. Track attention levels throughout project lifecycle with data insights pre/during/post project live dates 

  5. Create more realistic and immersive experiences by optimising attention levels across all scenes and objects

  6. Improve UX/UI by optimising natural attention towards specific content areas, events and interfaces

  7. Analyse hierarchy of user attention to optimise priority areas and order of views against business objectives

  8. Boost user engagement, interaction and conversion rates by combining attention data with Google Analytics

  9. Identify and diagnose unexpected user behaviours and poor performing elements to improve business goals

  10. Benchmark attention measures across all projects to drive innovation and increase performance levels 



VRARA London Sponsors The Economist's Impact Enterprise Metaverse Summit

The VRARA London Chapter is a proud sponsor of the Economist Impact’s Enterprise Metaverse Summit 2023, which will help senior leaders build and benefit from digital twins to enable productivity gains and sustainable innovation.


A digital twin is a 3D replica of a physical workspace and its assets, and can replicate anything from a single screw to an entire factory. The insights this digital copy provides can help firms enhance resilience, adaptability and efficiency. Digital twins will underpin the enterprise metaverse and are forecast to become a US$48bn business by 2026 (McKinsey). Businesses can also use mixed reality (augmented and virtual) to enhance collaboration, improve the safety of working environments and increase the effectiveness of training.

However, the enterprise metaverse may not become mainstream as fast as its champions expect, if at all. Will it be based on proprietary technology or on open standards? How should firms build their first digital twin and take advantage of the opportunities presented by mixed reality? Enterprise Metaverse Summit will offer informed and nuanced views from forward-thinking companies who are already gaining value from immersive reality.

Join The VR/AR Association London at The Economist's Impact Enterprise Metaverse Summit on June 28th-29th in London, in-person and online. Get a balanced, informed perspective from industry leaders that have already gained real value from the technology, and get the tools you need to benefit from mixed reality and digital twins. Register now (including free passes for eligible attendees): https://bit.ly/3Lyb6t5.

LMS365 partners with SynergyXR - Enhanced organizational learning through VR

From VRARA Member SynergyXR:

We are thrilled to announce our partnership with LMS365, a leading learning management solution, aimed at revolutionizing organizational learning through immersive XR technology.

With this collaboration, we are bringing extended reality (XR) tools to enhance employee training and engagement. By combining LMS365’s recognized learning management solution with our powerful no-code, enterprise XR platform, organizations can create deeply engaging learning and training content for their employees.

Our partnership enables learning administrators to immerse employees in an extended reality training environment, made possible by SynergyXR. Learners can interact and engage with training courses in a virtual environment, resulting in improved training speed, learning confidence, and emotional connection to the content. Studies have shown the transformative impact of virtual reality on soft skills training (source: PwC).

This partnership expands the capabilities of learning and training management. LMS365 offers a comprehensive learning management solution integrated into the Microsoft 365 digital toolset. By integrating this solution with SynergyXR’s XR platform, organizations can now extend learning and training into virtual environments, making interaction with course content more tangible and expansive.

3D Cloud by Marxent Announces Early Access Program for 3D Cloud Room Scanner

ST. PETERSBURG, Fla., June 12, 2023 /PRNewswire/ -- Today, 3D Cloud by Marxent, the leader in 3D kitchen design and furniture planning, is excited to announce an early access program for the highly anticipated 3D Cloud Room Scanner. Retailers can now sign up to be among the first to deploy the game-changing capabilities of the company's LiDAR room scan-to-design solution for capturing room measurements that can then be directly dropped into 3D Cloud Room Planner for designing with real, buyable products.

3D Cloud Room Scanner seamlessly connects precise measurements to 3D design. Designed with furniture and home improvement retailers in mind, 3D Cloud Room Scanner makes it easy to capture accurate measurements, design in 3D Cloud Room Planner, and buy an entire furnished room or kitchen remodeling project. Drop room scans directly into 3D Cloud Room Planner to speed up in-home measuring appointments and show off impressive designs faster.

The 3D Cloud Room Scanner addresses a significant pain point in the industry by revolutionizing the process of obtaining precise floorplan measurements for interior spaces and then turning those measurements into buyable projects with speed and efficiency.

"With 3D Cloud Room Scanner, users can effortlessly capture accurate floorplan measurements and seamlessly import them into our comprehensive suite of design applications, including the 3D Cloud Kitchen Designer, 3D Cloud Office Planner, 3D Cloud Bathroom Designer, and 3D Cloud Room Planner," said Beck Besecker, CEO and Co-Founder of 3D Cloud by Marxent.

This limited opportunity to participate in the 3D Cloud Room Scanner early access program allows eligible enterprise furniture and home improvement retailers to be among the first to market with this groundbreaking customer experience.

Key benefits of the 3D Cloud Room Scanner include:

  • Simplified and accurate measurements: Say goodbye to expensive measuring appointments that don't convert. Capture precise floorplan measurements with ease and send them directly to design software.

  • Seamless integration with 3D Cloud Room Planner: The 3D Cloud Room Scanner seamlessly integrates with 3D Cloud Room Planner, empowering users to turn their floorplan scans into immersive 3D designs effortlessly.

  • Streamlined design-to-purchase experience: Rapidly transform floorplan scans into fully customized spaces, complete with real products, enabling a quick and seamless transition from design to purchase.

"We are thrilled to offer this exclusive preview opportunity for the 3D Cloud Room Scanner," said Beck Besecker, CEO and Co-founder of 3D Cloud by Marxent. "With early access, retailers will have the chance to experience the future of in-house interior design programs firsthand, unlocking new levels of efficiency, accuracy, and creativity. We are excited to collaborate with our clients and gather their invaluable feedback as we continue to shape the future of 3D design technology."

Request an invitation to the early access program for 3D Cloud Room Scanner today. Limited spots are available to eligible retailers, so early registration is encouraged.

About 3D Cloud™ by Marxent

3D Cloud™ by Marxent is the global leader in 3D e-commerce for furniture, kitchen, bath, outdoor, office furniture, closets, and storage. The 3D Cloud™ platform allows retailers and brands to build endless applications from a single 3D product catalog. With 3D Cloud™, 3D content is created, managed, and published to all 3D applications for consistency across every touchpoint in the customer journey. Applications that run on 3D Cloud™ include 3D Product Configurators, 3D Sectional Configurator, 3D Room Planner with Design from Photo, 360 Product Spins, 3D Renders, WebAR, Augmented Reality retail apps, and Virtual Reality retail apps. 3D Cloud by Marxent has offices in Miamisburg, Ohio, and St. Petersburg, Florida, as well as an international presence with offices in London, England; Paris, France; and Auckland, New Zealand. Clients include a major U.S.-based home improvement retailer, Kingfisher plc, PlaceMakers, Mico, Macy's, Ashley, HNI Corporation, La-Z-Boy, Joybird, and John Lewis and Partners. For more information, visit 3dcloud.com.

Jason Harrison Appointed as Utah Chapter President

We are thrilled to have Jason Harrison as our Utah Chapter President.

Jason Harrison is a dynamic XR leader and educator with over 20 years of experience in software engineering, technical consulting, team leadership, and immersive technology. He's owned a thriving tech business and consultancy, collaborated with Fortune 100 companies, and taught for Microsoft and Unity - creating rippling industry impact.

Jason sparks innovation and drives positive change while engaging with leaders across all levels. As a Unity Certified Expert and Instructor, he designs and delivers captivating XR workshops and Unity courses, empowering others to unleash their potential in the fast-paced and changing tech landscape.

"Utah has a thriving XR industry and a strong academic core. I'm excited to unite the already-established forces with the energetic new innovators to bring engaging growth opportunities to the state."

— Jason Harrison: Utah@thevrara.com.

Apple showcases its Vision Pro headset with a medical app. Join our Healthcare Forum on June 29.

Apple enters the spatial computing headset market!

The XR healthcare market is projected to reach $9.5B by 2028.

On June 29, join our annual Healthcare Forum as we host 40+ speakers and 800+ attendees.

Apple showcases its Vision Pro headset with a medical app during WWDC

Although still a small market ($1.8B in 2021), health-focused XR has shown potential in treating a range of conditions, from phobias to chronic pain.

Already, XR is helping treat patients. One of the most successful areas has been post-traumatic stress disorder (the Veterans Health Administration was an early adopter), and XR has also been used to reduce pain and anxiety without medication. XR has become an important educational tool, allowing students to practice procedures and prepare for high-risk scenarios (surgeons trained with XR have shown significantly fewer errors than those trained with traditional methods). VRARA member UCF has implemented XR for nurses over the past year, and Arizona State has replaced all intro biology labs with VR labs.

On June 29, our annual Healthcare Forum will have 40+ speakers, 800+ attendees from Microsoft, Bloomberg, Magic Leap, Tel Aviv Medical Center, Merck, Thermo Fisher, Harvard, Compal, Edwards, Pacific Neurocenter, Canon, MIT, NYU, CAE, Sony, Intel, TAMP (with doctors from US, Europe, Brazil, Nigeria, India), and many others.

Overlay introduces Asset Vision, new AI identification tool

Access the video recordings from our Forum here

Post by Matt Collins originally appearing on geoweeknews.com.

Last week, the VR/AR Association held their VR Enterprise and Training Forum, discussing the increasingly viable business applications for mixed reality technology. As part of the one-day event, Overlay CEO and co-founder Christopher Morace gave a keynote talk in which he introduced the company’s new Asset Vision feature, which utilizes artificial intelligence (AI) within the augmented reality (AR) app to quickly and automatically identify features in water utility spaces. Recently, Morace spoke with Geo Week News to discuss the new feature as well as to provide more background on the company as a whole.

Prior to founding Overlay, Morace had spent his career primarily working with enterprise software, including with a social collaboration platform called Jive. There, he says he got experience working with Fortune 500 companies and began to “understand what it takes to really transform a business.” This was about a decade ago, as many different tools were starting to be developed for enterprise uses, and Morace started to become intrigued by technology like AI and AR. However, he says, he and Overlay co-founder and CTO Josh Ricau “felt pretty strongly that the white collar workers inside the four walls had been a bit overserved.” 

He continued, “They were kind of drowning in technology, and everybody out in the real world trying to keep the internet up and running, and keep water flowing, they’re just struggling.” Morace notes that technology designed for those in the field did exist, but it was extremely expensive. That led to the ultimate development of Overlay, providing simple-to-use technology to water agencies for solving their problems in the field.

As alluded to above, Overlay uses AR and AI to provide easy access to crucial data in the field right on a user’s phone. For example, information from GIS systems can be brought into Overlay, and then a user can hover over an asset – such as a sewer in a road – with their iPhone and instantly receive all available metadata for that asset. Additionally, Overlay takes advantage of all of the sensors available on iPhones, including the relatively recent addition of lidar, to enable 3D scanning and modeling to, among other things, cut down on return trips for professionals in the field – a significant and persistent issue in the industry – and create digital twins of assets by incorporating Internet of Things (IoT) data.
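As a rough illustration of that hover-to-metadata flow, the sketch below matches a phone’s position and compass heading against a toy GIS layer and returns the record for the asset in view. Everything here (asset records, field names, thresholds) is a hypothetical stand-in, not Overlay’s implementation or API.

```python
# Hypothetical sketch of matching a phone's pose against GIS asset records.
# Positions are metres in a local plane; not Overlay's actual code or data.
import math

ASSETS = [
    {"id": "sewer-114", "pos": (12.0, 5.0), "meta": {"installed": 1998, "material": "PVC"}},
    {"id": "valve-07",  "pos": (40.0, -3.0), "meta": {"installed": 2011, "material": "steel"}},
]

def asset_in_view(device_pos, heading_deg, max_range=30.0, fov_deg=30.0):
    """Return the nearest asset record inside the camera's view cone."""
    best = None
    for asset in ASSETS:
        dx = asset["pos"][0] - device_pos[0]
        dy = asset["pos"][1] - device_pos[1]
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360   # 0 deg = +y axis
        off_axis = abs((bearing - heading_deg + 180) % 360 - 180)
        if dist <= max_range and off_axis <= fov_deg / 2:
            if best is None or dist < best[0]:
                best = (dist, asset)
    return best[1] if best else None

print(asset_in_view(device_pos=(10.0, 0.0), heading_deg=22.0))
# -> the "sewer-114" record: roughly 5.4 m away, almost dead ahead
```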

The ability to do all of this with just an iPhone was a crucial piece of Overlay’s foundation. Morace acknowledges that scanning with these mobile devices certainly doesn’t solve every issue and there are still plenty of projects that require high-end laser scanners, but for many of their users working on water utilities, projects are small enough to take advantage of simpler technology, to say nothing of the accessibility (or lack thereof) of more traditional devices. He told Geo Week News, “It was important to us that everything was off-the-shelf, inexpensive hardware like an iPhone, because we feel like when real transformation happens, it happens because you can put the device in everyone’s pocket.”

That brings us to the recent development of Asset Vision, which as mentioned above utilizes AI to identify assets automatically and keep an inventory for agencies. Morace notes that as he and his colleagues started going out in the field and seeing the real problems faced by professionals, he learned from utilities that they don't actually have a great idea of what assets they have. They know, of course, where their pump stations are, for example, but not necessarily every asset within the stations. That’s a clear barrier for maximum efficiency. As Morace puts it, “It’s great to have IoT sensors, but IoT sensors aren’t that valuable if you don’t know what it’s on.” 

That’s where Asset Vision comes in. Trained to recognize assets typical of these types of water utility stations, the tool is able to take a 3D model scanned using the Overlay app and automatically recognize and register assets within the model, thereby creating the accurate inventory that the industry has largely been lacking. While the AI can’t necessarily identify exactly which asset it is looking at – just the type – it can read things like serial numbers and QR codes, and is location-aware, which allows users to subsequently attach identified assets to those IoT sensors, maximizing the value of that real-time data and creating a functional digital twin.

In a conversation about the current AI boom as well as some of its shortcomings – like “hallucinations” with ChatGPT – Morace noted that this is obviously a different kind of AI than the generative tools that are dominating the mainstream news cycle. One of the crucial features of Asset Vision is that it reports a percentage of how sure it is that it has correctly identified an asset, providing some guidance as to when a little more human intervention may be needed.
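A minimal sketch of that confidence gating is below: detections above a threshold are registered automatically, while the rest are queued for a person to confirm. The threshold and record fields are illustrative assumptions, not Asset Vision internals.

```python
# Sketch: route AI detections by confidence. High-confidence detections are
# auto-registered; the rest go to human review. Values are illustrative.
AUTO_ACCEPT = 0.90  # assumed threshold; a real system would tune this

def triage(detections):
    """detections: list of {'asset_type': str, 'confidence': float}."""
    registered, needs_review = [], []
    for det in detections:
        (registered if det["confidence"] >= AUTO_ACCEPT else needs_review).append(det)
    return registered, needs_review

registered, review = triage([
    {"asset_type": "pump",  "confidence": 0.97},   # auto-registered
    {"asset_type": "valve", "confidence": 0.62},   # flagged for a human
])
print([d["asset_type"] for d in registered], [d["asset_type"] for d in review])
# -> ['pump'] ['valve']
```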

Morace also talked about the process for identifying asset types that are not already in the Overlay database, something that they say is significantly more efficient than more traditional AI training. Within Asset Vision, if there is an asset that is not identified by the AI, a user can simply put a digital box around the asset within the 3D model and enter the asset type within the app. After just a few seconds, that asset is in the system and will be identified moving forward. Additionally, Morace mentioned that if a utility has some sort of “proprietary relationship to an asset,” they would be able to quarantine that to just their business’ account, though he notes that since most of these assets are third-party purchases they have yet to come across that scenario.

We’re in a time where there is more pressure on utilities than perhaps at any other point, for a variety of reasons: climate change looms large over water and energy utilities around the world, while ever-increasing reliance on remote work and global connectivity puts pressure on communication utilities. It’s something that is on the mind of the Overlay team, which is why they are looking to take advantage of recent technological developments and lean on relatively simple tools to complete complex tasks. Morace reiterated the points above about using phones for digital twin creation, looking back at when iPhones first came out.

He said, “When the iPhone first came out, they had these really terrible cameras, and everyone was like, ‘Why would I use that camera? It’s so terrible.’ And the answer became, the best camera is the one you have with you. We see the same approach to this technology, which is: The best technology you can have is the one that’s going to be in your truck or in your pocket.”

They are using this mantra to try and address the issues in the field being experienced by those in the pressurized utility fields. “I think at this time when there’s so much pressure on all these spaces – energy and water and communication – we all just need to get a lot more with a lot less, and we think this type of technology can help play that role.”

Asset Vision will officially launch on June 1.

Call for Speakers: VRARA Summer West Fest

Summer West Fest will bring experts from the Pacific Northwest of the US and Canada together under one roof (virtually) to share knowledge and network. The July 19th event is the first in a series. Interested in speaking? See the details and themes of this first event below!

Preliminary Schedule

11:00am: Opening by Sarah/Roberto and introduction to the Pacific Northwest Chapter Presidents

11:20-11:45am: Featured speaker 1

11:45am-12:15pm: Virtual networking

12:15-1:00pm: Featured speaker 2

Themes for Speakers:

AI revolution: What does it mean for XR?

Apple glasses

XR Analytics

If your expertise covers these themes, please contact vancouver@thevrara.com for more information on applying as a speaker for this series!

Banuba Enhances Facial Feature Editing in Face AR SDK

Facial feature editing, or face morphing, was already a feature in the Banuba Face Augmented Reality SDK prior to version 1.7. However, with the latest update, the capabilities have been significantly enhanced. Users can now effortlessly customize the following features, as demonstrated in a short video:

  • Eyebrows (spacing, height, bend)

  • Eyes (shape, size, spacing, squint, lower eyelid size & position)

  • Nose (width, length, tip size)

  • Mouth (size, smile, shape, lip thickness)

  • Cheeks (size of the cheeks and cheekbones, dimples/sunken cheeks)

  • Chin (length, width, V-shape)


The inclusion of these new features opens up numerous potential applications. One notable example is the ability to simulate the outcomes of plastic surgery. Additionally, it enables users to edit photos, alter their appearance during video calls, and explore a wide range of possibilities. Moreover, face morphing can be combined with other similar features, such as face touch-up, to achieve even more impressive effects.
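To picture how an app might drive such controls, here is a small hypothetical sketch of a morph-parameter structure with values normalised to -1..1. The parameter names are invented for illustration and are not Banuba Face AR SDK identifiers.

```python
# Hypothetical morph-parameter container; names are illustrative only and
# do not correspond to Banuba Face AR SDK identifiers.
from dataclasses import dataclass, replace

@dataclass
class FaceMorph:
    eyebrows_spacing: float = 0.0   # -1 closer .. +1 wider
    eyes_size: float = 0.0
    nose_width: float = 0.0
    mouth_smile: float = 0.0
    cheeks_size: float = 0.0
    chin_width: float = 0.0

    def clamped(self):
        """Return a copy with every parameter clipped to the -1..1 range."""
        clip = lambda v: max(-1.0, min(1.0, v))
        return replace(self, **{k: clip(v) for k, v in self.__dict__.items()})

# e.g. a subtle preset for previewing a rhinoplasty-style change
preset = FaceMorph(nose_width=-0.3, chin_width=0.2).clamped()
print(preset)
```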

Face feature editing comes as part of the v1.7 update, which also brought along a massive performance boost.


Virtual backgrounds, one of the most popular features, have also received a performance update. It will be especially noticeable on Apple devices – up to 10x faster processing.


The full release notes can be found in the Banuba blog.



VR Healthcare to reach $9.5B by 2028

On June 29, join our annual Healthcare Forum

Although still a small market ($1.8B in 2021), health-focused VR has shown potential in treating a range of conditions, from phobias to chronic pain. The market is projected to reach $9.5B by 2028 (Washington Post).

Already, VR is helping treat patients. One of the most successful areas has been post-traumatic stress disorder (the Veterans Health Administration was an early adopter), and VR has also been used to reduce pain and anxiety without medication. VR has become an important educational tool, allowing students to practice procedures and prepare for high-risk scenarios (surgeons trained with VR have shown significantly fewer errors than those trained with traditional methods). VRARA member UCF has implemented VR/AR for nurses over the past year, and Arizona State has replaced all intro biology labs with VR labs.

On June 29, our annual Healthcare Forum will have 40+ speakers, 800+ attendees from Microsoft, Bloomberg, Magic Leap, Tel Aviv Medical Center, Merck, Thermo Fisher, Harvard, Compal, Edwards, Pacific Neurocenter, Canon, MIT, NYU, CAE, Sony, Intel, TAMP (with doctors from US, Europe, Brazil, Nigeria, India), and many others.

The all-new Campfire App and Campsite Starter Kit are now available

Collaborating in 3D has never been easier. The new Campfire app is simply the best way to communicate 3D information for design reviews and training. Instead of traveling for a meeting or shipping physical equipment, you can collaborate digitally and get work done at a fraction of the cost and time. You're also likely to save your team expensive rework in the future.

 

Sound good? Request your invite now. There's a free plan to get started, and an enterprise plan to grow with.

 

The Campsite Starter Kit combines the Campfire App and Campfire Headsets in a plug-and-play solution to jump-start your POC for design reviews, training, and a wide range of shared 3D experiences.

Use your existing PCs, Macs, iPads, and iPhones (for headset controllers) and get everything running in less time than it takes to pitch a tent. Really. 

 

Don't expect IT hassles either. The Campfire Headset plugs into your Windows PC with Thunderbolt 3, just like a monitor. In fact, we've made it seamless with Dell Precision Mobile Workstations. Just plug it in, download the Campfire App, and get to work.

 

Coming to AWE? Visit our real-world Campsite at Booth 125. We'll be running live demos of the Campfire App along with the Campsite Starter Kit. We'll also be previewing the Campfire App on Quest Pro.

AR Post: "A Very Interesting VR/AR Association Enterprise & Training Forum"

The VR/AR Association held a VR Enterprise and Training Forum yesterday, May 24. The one-day event hosted on the Hopin remote conference platform, brought together a number of industry experts to discuss the business applications of a number of XR techniques and topics including digital twins, virtual humans, and generative AI.

The VR/AR Association Gives Enterprise the Mic

The VR/AR Association hosted the event. In addition to keynotes, talks, and panel discussions, the event included opportunities for networking with other remote attendees.

“Our community is at the heart of what we do: we spark innovation and we start trends,” said VR/AR Association Enterprise Committee Co-Chair, Cindy Mallory, during a welcome session.

While there were some bona fide “technologists” on the panels, most speakers were people using the technology in industry themselves. While hearing from “the usual suspects” is nice, VR/AR Association fora are rare opportunities for industry professionals to hear from one another on how they approach problems and solutions in a rapidly changing workplace.

“I feel like there are no wrong answers,” VR/AR Association Training Committee Co-Chair, Bobby Carlton, said during the welcome session. “We’re all explorers asking where these tools fit in and how they apply.”

The Convergence

One of the reasons that the workplace is changing so rapidly has to do with not only the pace with which technologies are changing, but with the pace with which they are becoming reliant on one another. This is a trend that a number of commentators have labeled “the convergence.”

“When we talk about the convergence, we’re talking about XR but we’re also talking about computer vision and AI,” CGS Inc President of Enterprise Learning and XR, Doug Stephen, said in the keynote that opened the event, “How Integrated XR Is Creating a Connected Workplace and Driving Digital Transformation.”

CGS Australia Head, Adam Shah, was also a speaker. Together the pair discussed how using XR with advanced IT strategies, AI, and other emerging technologies creates opportunities as well as confusion for enterprise. Both commented that companies can only seize the opportunities provided by these emerging technologies through ongoing education.

“When you put all of these technologies together, it becomes harder for companies to get started on this journey,” said Shah. “Learning is the goal at the end of the day, so we ask ‘What learning outcomes do you want to achieve?’ and we work backwards from there.”

The convergence isn’t only changing how business is done, it’s changing who’s doing what. That was much of the topic of the panel discussion “What Problem Are You Trying to Solve For Your Customer? How Can Generative AI and XR Help Solve It? Faster, Cheaper, Better!”

“Things are becoming more dialectical between producers and consumers, or that line is melting where consumers can create whatever they want,” said Virtual World Society Executive Director Angelina Dayton. “We exist as both creators and as consumers … We see that more and more now.”

“The Journey” of Emerging Technology

The figure of “the journey” was also used by Overlay founder and CEO, Christopher Morace, in his keynote “Asset Vision – Using AI Models and VR to get more out of Digital Twins.” Morace stressed that we have to talk about the journey because a number of the benefits that the average user wants from these emerging technologies still aren’t practical or possible.

“The interesting thing about our space is that we see this amazing future and all of these visionaries want to start at the end,” said Morace. “How do we take people along on this journey to get to where we all want to be while still making the most out of the technology that we have today?”

Morace specifically cited ads by Meta showing software that barely exists running on hardware that’s still a few years away (though other XR companies have been guilty of this as well). The good news is that extremely practical XR technologies do exist today, including for enterprise – we just need to accept that they’re on mobile devices and tablets right now.

Digital Twins and Virtual Humans

We might first think of digital twins of places or objects – and that’s how Morace was speaking of them. However, there are also digital twins of people. Claire Hedgespeth, Head of Production and Marketing at Avatar Dimension, addressed their opportunities and obstacles in her talk, “Business of Virtual Humans.”

“The biggest obstacle for most people is the cost. … Right now, 2D videos are deemed sufficient for most outlets but I do feel that we’re missing an opportunity,” said Hedgespeth. “The potential for using virtual humans is only as limited as your imagination.”

The language of digital twins was also used on a global scale by AR Mavericks founder and CEO, William Wallace, in his talk “Augmented Reality and the Built World.” Wallace presented a combination of AR, advanced networks, and virtual positioning coming together to create an application layer he calls “The Tagisphere.”

“We can figure out where a person is so we can match them to the assets that are near them,” said Wallace. “It’s like a 3D model that you can access on your desktop, but we can bring it into the real world.”

It may sound a lot like the metaverse to some, but that word is out of fashion at the moment.

And the Destination Is … The Metaverse?

“We rarely use the M-word. We’re really not using it at all right now,” Qualcomm’s XR Senior Director, Martin Herdina, said in his talk “Spaces Enabling the Next Generation of Enterprise MR Experiences.”

In his discussion of immersive technology, Herdina put extra emphasis on computing advancements like cloud computing over the usual topics of visual experience and form factor. He also presented modern AR as a stepping stone to a largely MR future for enterprise.

“We see MR being a total game changer,” said Herdina. “Companies who have developed AR, who have tested those waters and built experience in that space, they will be first in line to succeed.”

VR/AR Association Co-Chair, Mark Gröb, expressed similar sentiments regarding “the M-word” in his VRARA Enterprise Committee Summary, which closed out the event.

“Enterprise VR had a reality check,” said Gröb. “The metaverse really was a false start. The hype redirected to AI-generated tools may or may not be a bad thing.”

Gröb further commented that people in the business of immersive technology specifically may be better able to get back to business with some of that outside attention drawn toward other things.

“Now we’re focusing on the more important thing, which was XR training,” said Gröb. “All of the business cases that we talked about today, it’s about consistent training.”

Business as Usual in the VR/AR Association

There has been a lot of discussion recently regarding “the death of the metaverse” – a topic which, arguably, hadn’t yet been born in the first place. Whether it was always just a gas and the extent to which that gas has been entirely replaced by AI is yet to be seen.

While there were people talking about “the enterprise metaverse” – particularly referring to things like remote collaboration solutions – the metaverse is arguably more of a social technology anyway. While enterprise does enterprise, someone else will build the metaverse (or whatever we end up calling it) – and they’ll probably come from within the VR/AR Association as well.

Post originally appearing on Arpost.co by Jon Jaehnig.

930 People from 55 countries joined our annual Enterprise & Training Forum. See top Sessions, Speakers, and Video Recordings.

Access the video recordings here

Our annual forum was a full day of best practices, guidelines, and insights as we brought together industry leaders in VR/AR, digital twins, industrial metaverse, AI, and digital transformation.

We had 60 speakers from Qualcomm, CGS Inc, Overlay, Microsoft, ArborXR, Boeing, Michelin, US Air Force, Bridgestone, Magic Leap, UPS, Accenture, Lenovo, Varjo, Raytheon, Unity, Booz Allen, Porsche, Schlumberger, Veteran Affairs, and more.

Access the video recordings here.


CGS to Keynote VRARA Forum on Creating a Connected Workplace and Accelerating Digital Transformation with Extended Reality (XR)

Post originally appearing on GlobeNewswire.

NEW YORK, May 23, 2023 (GLOBE NEWSWIRE) -- CGS, a global provider of business applications, enterprise learning, and outsourcing services, announced that its keynote at the VRARA Enterprise & Training Forum, on Wednesday, May 24, 2023, will address the business opportunity of combining XR with other next-generation technologies. To illustrate this, especially for those new to enterprise-level XR, the presentation will introduce Immersive Learning as a Service (ILaaS™) and describe how it removes barriers for enterprise organizations as they deploy and scale learning experiences with the support of virtual reality (VR) and augmented reality (AR). The keynote will also showcase LaunchpadXR™, a framework designed for companies new to XR, which is wrapped around the ILaaS solution to help accelerate its adoption and integration.

How Integrated XR is Creating a Connected Workplace and Driving Digital Transformation will be presented by Doug Stephen, President of Enterprise Learning and XR at CGS, and Adam Shah of VMG Connect. Their remarks will educate audiences about the fast-growing market for augmented reality (AR) and virtual reality (VR) and how these technologies are driving digital transformation and improved productivity in the workplace.

“We’re on the precipice of extraordinary changes in how businesses leverage the best in their human capital through support by advancement in extended reality, the Internet of Things, and ChatGPT”, said Doug Stephen, President of Enterprise Learning and XR at CGS. “For any business leader interested in understanding how this race for innovation can positively impact your business and its people, I strongly encourage you to attend this session.”

Participants will also learn how implementing Immersive Learning as a Service (ILaaS) removes barriers for enterprise organizations as they strategize, launch, deploy and scale learning experiences with virtual reality (VR) and augmented reality (AR).

The keynote takes place between 10:55 AM-11:15 AM. Click here to register.

About CGS

For nearly 40 years, CGS has enabled global enterprises, regional companies, and government agencies to drive breakthrough performance through business applications, enterprise learning and outsourcing services. CGS is wholly focused on creating comprehensive solutions that meet clients' complex, multi-dimensional needs, and support clients' most fundamental business activities. Headquartered in New York City, CGS has offices across North America, South America, Europe, the Middle East, and Asia. TeamworkAR, Inc. is a wholly owned subsidiary of Computer Generated Solutions Canada Ltd. For more information, please visit www.cgsinc.com and follow us on Twitter at @CGSinc and @LearningCGS and on LinkedIn

TeamworkAR: Make Everyone a Genius™
TeamworkAR is a platform that brings real-time digital transformation to on-the-job training, learning and support for any company, anywhere. From knowledge capture and transfer to collaborating with and assisting workers across skill levels, our goal is to make everyone a genius. By moving your workforce from a training room to real-world work in days rather than weeks, TeamworkAR increases productivity and success. Enhancing your own custom content through augmented reality, you can change how work gets done — for better.

With over 35 years’ expertise providing award-winning custom learning and development solutions for dozens of Fortune 500 companies including McDonald’s, Comcast, Toshiba, Medtronic and Maersk, CGS is a trusted partner in aligning learning strategy to measurable business results.

MEDIA CONTACTS
Escalate PR for CGS
cgs@escalatepr.com

Mark D. Tullio
mtullio@cgsinc.com

New Masterclass from AREYES: Helping You Level Up Your AR Skills

VRARA members receive 25% discount, email info@thevrara.com to obtain code.

AREYES, a creative technology studio that focuses on new formats of visual communication and builds unique digital experiences using AR/VR, has announced a new AR Masterclass: Development of a Commercial AR Product. See how the process of creating a commercial AR product works from the inside, sharpen your AR skills, and supercharge your professional development.

Beginning mid-summer and running for four weeks, this masterclass is perfect for:

  •  Creative professionals or AR creators who want to improve their hard skills and build a solid career in the new economy

  •  Small teams looking for further growth opportunities

  •  Mature studios that want to set up or improve their XR department

Register today! VRARA members receive 25% discount, email info@thevrara.com to obtain code.

CORTEXR spatial tracking scales attention metrics in XR industry

What is ‘attention’ and why is it important to the XR industry?

‘Attention’ as a cognitive process has played a surprisingly important role in the evolutionary success of humans. Our attentional operating system gave our ancestors an unfair advantage as they migrated out of Africa thousands of years ago, and it remains fundamental to our understanding of the world and our communication with each other. Auditory attention is always on, in standby mode, as an early warning system against potential dangers – even when we’re asleep. Visual attention, on the other hand, is a serious bit of hardware which commands two-thirds of the brain’s computing power to concentrate on specific things and gain deeper understanding. Visual attention (referred to from here on simply as ‘attention’) is part of the broader Attention Economy and a priority for the XR industry. Attention is therefore one of the key metrics of human behaviour in XR which we need to understand. What are people looking at in AR and VR environments, and how can we use attention metrics to grow the XR industry?

How is visual attention measured in AR/VR environments?

The two main approaches to measuring attention in Extended Reality are eye tracking (also known as gaze tracking) and spatial tracking (device movements in 3D space). Eye tracking typically uses inward-facing sensors (i.e. internal cameras) in HMDs to collect data on pupil size, gaze vector, eye openness etc. Spatial tracking uses device sensors (i.e. accelerometer and gyroscope) to collect 6DOF spatial data on head (HMD) and hand (mobile) orientation, location and movement. Both eye tracking and spatial tracking are used by the XR industry to measure visual attention in multiple sectors, from Education and Training to Media and Entertainment, albeit in very different ways. Eye tracking offers qualitative data in select HMDs at a high price point whilst spatial tracking delivers quantitative data across all HMDs and mobile devices at a low price point. So how do the different technologies compare?

How is eye tracking used to measure attention in XR?

Eye tracking software helps optimise HMD performance with dynamic foveated rendering, makes avatar interactions more realistic, and is used successfully in specific enterprise use cases. The human eye moves 2-3 times per second to build an understanding of a scene from target areas about the size of a coin at arm’s length, and internal HMD cameras collect data on this visual scanning. The technology pre-dates XR, so there are challenges with eye tracking as an attentional measurement tool in VR (let alone AR) applications. Firstly, there isn’t empirical evidence that gaze tracking corresponds with cognitive processing, i.e. just because someone’s eyes point at something doesn’t mean they’re actively processing the information. Secondly, eye tracking software isn’t scalable, with limited HMD integration and no availability on 1 billion+ AR-enabled mobile devices. Thirdly, eye tracking requires specialist software and advanced data analytics skills to extract insight. This means the adoption of eye tracking technology to measure attention in Extended Reality experiences is relatively low, with the Future of XR and Metaverse Measurement Study finding only 4% of companies involved in XR are using eye tracking as a measurement tool. So what is the alternative?

How is spatial tracking delivering attention metrics in XR?


Spatial tracking is an Internet of Behaviour (IoB) approach to scaling data analytics in the XR industry. Accelerometer and gyroscope sensors available in all HMD and mobile devices collect millisecond data on user location, orientation and movement in 3D space. XYZ device movements track head (e.g. VR HMD) and hand (e.g. AR mobile) positions in 6DOF experiences to deliver surprisingly accurate data on what people are looking at. This is achieved with AI and Cognitive Science which makes sense of the velocity, variety and volume of spatial data. The good news for the XR industry is that spatial tracking is a proven approach to delivering attention metrics at scale with companies like Coca-Cola, WPP, Clear Channel, Mondelēz International and Yahoo successfully measuring visual attention in AR and VR experiences. Identifying which areas in VR scenes or AR objects get the most attention – and the order and sequence of areas viewed – is essential insight which is now available for all XR projects.
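The core mechanic can be sketched in a few lines: treat the device’s orientation as a view ray and test which scene object it points at each frame. The geometry and names below are illustrative assumptions; a production pipeline like CORTEXR’s adds the AI and Cognitive Science layers described above.

```python
# Sketch: derive "what is the user looking at" from device orientation.
# Yaw/pitch come from the device's sensor fusion; the scene geometry is a
# toy example and the function names are assumptions, not the CORTEXR API.
import math

def view_direction(yaw_deg, pitch_deg):
    """Unit forward vector for a device with the given yaw/pitch (degrees)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def object_in_view(device_pos, yaw_deg, pitch_deg, objects, max_angle_deg=10.0):
    """Return the id of the object closest to the centre of the view ray."""
    d = view_direction(yaw_deg, pitch_deg)
    best = None
    for obj_id, centre in objects.items():
        v = tuple(c - p for c, p in zip(centre, device_pos))
        norm = math.sqrt(sum(x * x for x in v))
        cos_a = sum(a * b for a, b in zip(d, v)) / norm
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle <= max_angle_deg and (best is None or angle < best[0]):
            best = (angle, obj_id)
    return best[1] if best else None

scene = {"billboard": (0.0, 1.6, 5.0), "kiosk": (4.0, 1.0, 2.0)}
print(object_in_view((0.0, 1.6, 0.0), yaw_deg=0.0, pitch_deg=0.0, objects=scene))
# -> "billboard": straight ahead along +z at eye height
```

Logging one such sample per frame yields exactly the dwell-time and view-order series that the attention heatmaps earlier in this newsletter are built from.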

What are the benefits of spatial data analytics for XR industry? 

Spatial tracking is a scalable attention measurement tool which can be implemented – today – across all Extended Reality projects. CORTEXR is leading the way with an end-to-end data analytics solution which is:

  1. Agnostic of device (HMD and mobile), platform (3D engines and XR platforms) and content (AR, VR and Metaverse). 

  2. Scalable across all projects via plug-in for 3D engines and XR platforms (Unity is live, with Three.js and A-Frame in the pipeline).

  3. Standardised metrics of human behaviour in XR to measure, analyse and optimise performance of large data sets across all projects.

  4. Accessible to everyone as no coding is required and prebuilt dashboards deliver insights regardless of level of data analytics expertise.

Attention, a cognitive process refined over millennia, now offers an evolutionary leap for the XR industry, with studies showing that spatial tracking produces insights similar to eye tracking. From a Cognitive Science perspective, the deliberate physical movements captured by spatial tracking correspond with higher levels of cognitive processing, i.e. if someone actively looks at something, they’re making a cognitive effort to understand it. Eye tracking tends to grab headlines, but spatial tracking is fast evolving as the scalable solution to data analytics for AR, VR and the Metaverse.