Zanni Joins NVIDIA Inception to Bring XR to Mass Audiences and Live Productions

Zanni recently announced it has joined NVIDIA Inception, a program designed to nurture startups revolutionizing industries with advancements in AI and data sciences.

Zanni brings a unique blend of extensive theatre production and technical virtual production experience to deliver XR solutions to mass audiences. Its team draws on a vast network of partners in immersive technologies to create the most robust solutions for producers in theater, sports, and entertainment. The company recently announced the coming release of Ovees™, a proprietary handheld mixed reality viewer that enables XR-enhanced performances by integrating augmented reality content into live productions for mass audiences.

NVIDIA Inception will allow Zanni to deliver scalable XR solutions for mass audiences through Ovees™ using the NVIDIA CloudXR platform for streaming virtual reality (VR), augmented reality (AR), and mixed reality (MR) content on a remote server. Zanni will also expand its offering with the opportunity to collaborate with industry-leading experts and other AI-driven organizations.

“This is an exciting time for XR technology to allow live entertainment to really reach beyond the physical limitations of production design,” says David S Rodriguez, CEO and Founder of Zanni. “We are excited to partner with NVIDIA to bring creatives the capabilities and tools to think outside the box of the physical limitations of reality to create incredible, memorable and engaging experiences for simultaneous live audiences, at scale.”

NVIDIA Inception helps startups during critical stages of product development, prototyping and deployment. Every NVIDIA Inception member gets a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, marketing support, and technology assistance, which provides startups with the fundamental tools to help them grow.



About Zanni
Zanni’s mission is to transform the way event producers and performers tell stories and engage audiences by blending traditional storytelling with innovative immersive technology. We use techniques in Virtual Production and Technical Theatre to bring the audience right into the story and experience live entertainment in ways that have never been done before. With 30 years of professional experience in theatre and event production, we know how to design immersive, technology-driven experiences with the audience in mind and give them incredible memories that will last forever.

Website: https://zannixr.com/

What’s new for Theorem-XR?

We recently sat down with Consultancy Director Ryan Dugmore and Sales Director Chris Litchfield to discuss what’s new with Theorem-XR and the latest hot topics surrounding Extended Reality (XR) in engineering.

What is Theorem-XR?
Powered by the Theorem Visualization Pipeline, Theorem-XR enables companies to visualize their 3D CAD data in context at full scale, in Augmented, Mixed and Virtual Reality (collectively known as Extended Reality – XR).

“We’ve developed a suite of applications which are fundamentally based on a server solution – the Visualization Pipeline. This is about optimizing and preparing CAD data for use in XR applications. We understand that our customers have needs for XR devices and we’ve delivered a number of off-the-shelf solutions for customers to use. Similarly, we’ve worked with customers who develop their own applications, but want to use the pipeline capability to furnish them” – Ryan Dugmore on Theorem-XR and the Visualization Pipeline

Data and Device Agnostic
Chris Litchfield (CL): With this being such a young industry and devices appearing on almost a monthly basis, where are Theorem in the space of supporting devices?

Ryan Dugmore (RD): “As with our historic CAD Translation products, we’re CAD and device agnostic. We can read any data and push it to the downstream system you want – and we’ve gone for that approach for Theorem-XR as well. We understand that the market is heavily changing and that there are new devices released day-on-day. The way we’ve architected our experiences is to be able to support these new devices quickly as they come to market, and as others fall off. It’s an ever-changing market that we are working in, and we have to be prepared to support customers as the devices change.”

CL: Where do you see the benefit of the Pipeline sitting when you’re talking to companies?

RD: “I think everyone’s really on an exploratory session – understanding where VR/MR/AR can give you the best ROI in your business. What the Pipeline does is enable people, at an enterprise level, to optimise and prepare data for usage, no matter their use case. The real benefit of the Pipeline is that it’s robust. We can support systems that come along, prepare data and make it ready for any particular use case that arrives.”

Optimizing data for XR
CL: Companies quite often start their journey with the device, then try to shoehorn the rest of their process into it. Something that is really overlooked until they come up against the problem is the optimization process and why you need to be flexible in that approach…

RD: “As you’ve touched on there, the device and the use case defines what optimization is required. Some devices you need to heavily optimize and prepare data so it’s usable. The aim of the Pipeline’s flexibility is to enable you to optimize data for your use case. Or, if you’re streaming data down from a remote rendering service, you can keep a high fidelity model and keep the quality. It’s purely based on the use case, and we work flexibly with the Pipeline to understand what a customer needs and hopefully prepare it based on that.”

Azure Remote Rendering (ARR) and HoloLens 2
CL: What capabilities have we got surrounding Azure Remote Rendering and what’s new in the HoloLens 2?

RD: “At the recent release, we’ve updated our HoloLens 2 offerings. We’ve now ported our experiences into Azure. What that enables the user to do, is to take much higher volumes of data with much higher quality. If we put it through our Pipeline, it optimizes it, prepares it, pushes it to a Remote Rendering farm, where it sits up in the cloud. Then, when you access it from within our application, it streams that down live to the device.”

“We can work on that collaboratively. Multiple users in a HoloLens or in VR can be working in the same model space, streaming high quality, high volumes of data down. We’re a Microsoft Silver Partner so we’ve worked closely with them to develop this capability, and we’re very proud of what it can give. It’s enabled us to take all of our standard applications, but take out the limitation of processing or rendering everything locally on the device - it’s purely rendered on the cloud. That said, if there’s still a need to pull in a file locally, you can pull that in alongside the rendered data. So, again, it’s about the flexibility of the device and the application to enable you to do the task required.”

Device Collaboration
CL: One of the topics that became a ‘hot topic’ early pandemic was collaboration, and the ability to collaborate. What devices can you bring into a collaborative session?

RD: “The nature of the Pipeline is that it controls it all from a central location. The server controls everything and the clients link in, so as long as there is network access, you can link in from any device. VR, MR, and even desktop users can see what’s going on, look at the Design Review that’s happening, apply comments etc. Anything that you can do singularly, you can do in this collaborative environment.”

Newly supported VR devices
CL: What’s the latest offering from Theorem regarding device support in VR?

RD: “We attempt to support devices as they’re released. Obviously, we’ve got a development cycle so we include what we can. Here, we have the VIVE Focus 3. This supports wireless business streaming, so being able to stream to the VR device, you’re no longer tethered.”

“When new devices are released, our architecture enables us, without much work, to bring that device alongside. We don’t mind what devices you buy. This one’s been released, we’ve coded for it, and it’s now available to use. Similarly, we’ve got others in the works that we are porting our software to.”

CL: So, it’s based on what’s available and who’s requesting it?

RD: “There are some big players in the market that I’m sure this year will bring out more devices - and we’ve got to be ready to support customers with them.”

Theorem-AR
CL: What are Theorem’s offerings in the Augmented Reality (AR) area?

RD: “At this release, we’ve refreshed all of our AR offerings. We’ve essentially brought them up to the latest spec, with similar menus and features to those we offer on other devices. There are two main methods that you can use with the AR offering…

“Firstly, the visual digital twin, which is being able to recognise data. If you have a physical object and you’ve uploaded the digital version to the server, you can snap to the physical object using the tablet app. It’s about being able to view the data in the context of its physical object. Secondly, we’ve got the AR Viewer capability, which uses the capabilities provided by Android and Apple with ARCore and ARKit. It’s about being able to place your objects in recognition of the space that you’re in.”

“We’ve essentially refreshed our AR offering as more people begin to use tablets and phones on the shop floor and ask if they can have this capability available to them. So, we’ve listened to the market and brought AR up to spec at this release.”

More info https://www.theorem.com/extended-reality

Nextech AR Launches its Public Version of ARitize 3D SaaS For Ecommerce

Everyone can now use ARitize 3D to turn their 2D products into 3D/AR Visualizations 

Nextech AR Solutions Corp. (“Nextech” or the “Company”) (OTCQB: NEXCF) (NEO: NTAR) (CSE: NTAR) (FSE: N29), a Metaverse Company and leading provider of augmented reality (“AR”) experience technologies and services, is pleased to announce the launch of its ARitize 3D SaaS offering to the public. With this launch, Nextech extends 3D model creation to an unlimited range of customers, including small, medium and large ecommerce businesses who want to quickly scale the creation of 3D models in a cost-effective way. Nextech believes it is first to market with this self-service AR SaaS platform for ecommerce, which offers scalability, affordability, ease of use, and the highest quality 3D models.

Sign up via the Company’s website - click here

ARitize 3D SaaS was first released as a beta version in November; the Company has now moved the platform out of beta and into public use. This SaaS launch expands Nextech’s revenue opportunity into a no-touch self-service model with a growing base of monthly recurring revenue (MRR) and brings Nextech’s state-of-the-art ARitize 3D modeling factory to the public. Nextech’s pricing plans are much more affordable than those of other platforms, separating it from competitors.

ARitize 3D is the One-Stop-Shop 3D + AR solution for ecommerce that is:

  • Affordable - lowest cost provider

  • Scalable - fastest, seamless, high quality

  • Frictionless - requires low implementation effort

  • AI & ML powered - automated 3D model creation

  • End to End - from model creation to CMS & AR visualization

Nextech is working on integrations with all the major ecommerce aggregators, including Shopify, BigCommerce, WooCommerce, Wix, and Magento. With these integrations, users will have an affordable, frictionless, and seamless one-click SaaS solution for 3D model making and AR product visualizations.

With ARitize 3D SaaS, whether a customer wants to create 5 models or 5000 models, they just have to sign up, select the applicable pricing plan, enter a credit card payment method, and enable WebAR for their website. After signing up, using Nextech’s artificial intelligence-powered technology, customers can create the 3D/AR models in 3 simple steps.

Watch a video to preview the process of creating a 3D model:


The Company also offers an array of enhanced 3D model and AR visualizations, including product hotspots, animations, 360+ exploded views, 3D swirl ads, configurator, 3D carousel, virtual staging, and room decorator. These enhanced features allow companies to provide even further detail and context to 3D models, including changing colours and textures, breaking apart the model into an “exploded view” to see all the individual parts of the product, providing animation to any part of the product that requires movement, and much more. These enhancements provide the Company with significant high-margin revenue opportunities.

ARitize 3D has been gaining substantial market traction and will continue to expand with this latest SaaS offering. More and more ecommerce businesses recognize that the future of online commerce rests in product visualizations through 3D models, which have been proven to increase conversions and reduce returns. Nextech believes it is becoming a leader in the 3D modeling industry through ARitize 3D, which is already supplying many notable retailers with 3D models and AR product visualizations, including Kohl’s, Lighting Plus New Zealand, Pier 1, Kmart Australia, Ezooza, NorthByNorth, The Office Group, Just Recliners, Never Summer, Mitra10, Cle Privee, Seville Classics, Bizrklan Eyewear, Source for Sports, Bothwell Cheese, FKA Brands, Visolab, O2 Vape, Poly & Bark, Skate One, Adler Jewelers and many others, as demand for this technology grows across a variety of sectors.

Watch a video of ARitize 3D: - click here

Nextech AR CEO Evan Gappelberg commented, “This is an extremely exciting time for Nextech, as the launch of our ARitize 3D SaaS solution for eCommerce is now available to the masses. We are poised to become the world’s leading 3D + AR model factory. No other platform can match our quality, scalability, affordability, and now ease of use in the global 3D model making market.” He continues, “We are driven to keep adding more advanced offerings, including ARitize CAD (CAD files to 3D models for industrial manufacturers), and enhanced features like the color configurator and NFT minting of 3D models, cementing ourselves as the de facto state-of-the-art, one-stop solution for 3D modeling. Why would a company or brand go to several other platforms when they can get everything they need with Nextech’s integrated tech stack? We are seeing great demand for 3D models from enterprise companies as well as small and medium-sized ecommerce businesses, and I believe this demand will only accelerate in 2022 with this public launch.”

ARitize CAD
Next, the Company plans to bring its ARitize CAD solution to the public as a SaaS offering. Nextech believes that with the combination of ARitize 3D and ARitize CAD, which enables manufacturers to convert CAD files into 3D AR models at scale, the Company has a major competitive edge in the 3D modeling market and is well positioned to become the world’s leading 3D modeling factory.

Stock Compensation
Paul Duffy, President, has taken restricted shares in lieu of cash for services rendered by Moonshot Inc. (Paul Duffy), in the amount of CAD $66,666.67 for 43,011 common shares at a deemed price of CAD $1.55 per share. All securities issued in this transaction will be subject to a 4-month hold period in Canada and are subject to Exchange approvals.

To learn more, please follow us on Twitter, YouTube, Instagram, LinkedIn, and Facebook, or visit our website: https://www.Nextechar.com

For further information, please contact:

Investor Relations Contact
Lindsay Betts
investor.relations@Nextechar.com  
866-ARITIZE (274-8493) Ext 7201  

About Nextech AR  
Nextech AR Solutions is a Metaverse Company that develops and operates augmented reality (“AR”) platforms, transporting three-dimensional (“3D”) product visualizations, human holograms and 360° portals to its audiences altering e-commerce, digital advertising, hybrid virtual events (events held in a digital format blended with in-person attendance) and learning and training experiences. 

Nextech focuses on developing AR solutions for the Metaverse; however, most of the Company’s revenues are derived from three e-commerce platforms: vacuumcleanermarket.com (“VCM”), infinitepetlife.com (“IPL”) and Trulyfesupplements.com (“TruLyfe”). VCM offers product sales of residential vacuums, supplies and parts, and small home appliances sold on Amazon.

Forward-looking Statements
The CSE and the NEO have not reviewed and do not accept responsibility for the adequacy or accuracy of this release. 

Certain information contained herein may constitute “forward-looking information” under Canadian securities legislation. Generally, forward-looking information can be identified by the use of forward-looking terminology such as, “will be” or variations of such words and phrases or statements that certain actions, events or results “will” occur. Forward-looking statements regarding the completion of the transaction are subject to known and unknown risks, uncertainties and other factors. There can be no assurance that such statements will prove to be accurate, as future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements and forward-looking information. Nextech will not update any forward-looking statements or forward-looking information that are incorporated by reference herein, except as required by applicable securities laws. 

makeSEA joins VR/AR Association (VRARA)

Expanding outreach to all members, entrepreneurs, partners, and professionals in virtual and augmented reality

San Luis Obispo, California, January 20, 2022 - makeSEA, a content management and collaboration platform for mixed reality, has become a member of the VR/AR Association (VRARA). The VRARA is an international industry organization for virtual and augmented reality designed to connect member organizations, encourage research and education, help develop industry best practices, and foster collaboration between solution providers and end-users. The VRARA membership currently consists of 4,000 companies, brands, and schools, plus over 60,000 industry professionals.

“makeSEA is excited to be a member of the VR/AR Association,” said Chris Stavros, CEO and Founder at makeSEA. “As we see accelerated adoption of the use of VR and AR in all segments of business as well as educational institutions, we feel members of the Association will help foster the generation of new ideas and collaboration in the technology. makeSEA is focused on providing easy and economical cross-platform solutions that provide rapid value and measurable benefits through user experiences tied to client content.”

makeSEA developers are currently working on new features for the cross-platform solution. With makeSEA + Catapult, you can reduce production and refresh times for mixed reality apps by 10x–100x and cut implementation time to weeks, with a publishing and update workflow that reduces development and operating costs by more than 80% compared to traditional production methods.

makeSEA puts the power of sharing and live collaboration in the hands of everyone: any educator, any business. makeSEA is the fastest way to get your content onto your XR device, with no programming required and real-time updates. The California-based company serves multiple educational institutions, a diverse range of business clientele, and individual creators and designers.

Companies, organizations, and individuals seeking to explore how to use AR and VR technology should contact makeSEA to learn how easy and affordable XR content production can be using the makeSEA Content Management & Collaboration Platform.

For more information see https://www.makeSEA.com   

Contact: Chris Stavros

CEO

cstavros@makesea.com

1-800-803-1050

https://www.facebook.com/makeSEA

https://www.linkedin.com/company/makesea/

https://twitter.com/makesea

https://www.instagram.com/makesea_pics/


Sensorium Galaxy Makes a Major Step Towards Public Launch, Reveals New Content and Plans for Expanding Closed Beta

Earlier this year, Sensorium launched a closed beta for its metaverse which allowed testers to explore for the first time two virtual worlds — PRISM, dedicated to extraordinary music events, and MOTION, an underwater-themed world focusing on mindfulness and relaxation. As the end of year approaches, Sensorium is unveiling a major series of updates, bringing the metaverse closer to a public launch, including significant content enhancements and the introduction of a multiplayer mode.

Starting today, Sensorium’s first users will be able to experience concerts performed by AI-driven DJs in PRISM. These virtual artists, developed in partnership with Mubert, create generative music in real-time, adapting to a wide array of environments and the mood of concertgoers. Using artificial intelligence, they combine individual elements — drum beats, synth pads, bass line, etc. — to create algorithmically generated, infinite music streams that change continuously and can neither end nor ever repeat.

Following the public launch of Sensorium, virtual artists will host shows in the metaverse alongside world-famous artists such as David Guetta, Armin van Buuren, and Carl Cox. For the first time in history, AI-powered artists will complement the natural talent of real-life performers — a balanced mix of human and AI-generated art. One of Sensorium’s upcoming virtual stars, Kàra Màr, has already released their first album across major music streaming platforms.


A second major update will see AI-driven virtual beings added to the beta version. While exploring PRISM and MOTION, users can now chat and befriend some of the smartest conversational AI-driven characters available to date. Each virtual being has a unique personality, possesses long-term memory and is capable of supporting unscripted conversations without losing track of context. These creatures were unveiled by Sensorium earlier this year and have already attracted the attention of VentureBeat, Forbes, PCGamer, Fast Company, among other media outlets. Apart from being trusted companions and guides inside the metaverse, these virtual beings can facilitate connections between real-life users.


While interactions with AI virtual beings for the VR and Desktop modes are currently only offered to users within the closed beta, a wider audience can already chat with them through the Sensorium Galaxy Mobile App (available on the App Store and Google Play). This application also gives access to additional features of the Sensorium Galaxy metaverse in AR mode, including dancing with virtual beings, as well as staging unique choreographies. All user-generated content can be easily shared across social platforms.


Finally, Sensorium has polished its multiplayer content distribution platform, so that beta testers can now tune in to Sensorium Galaxy from different devices and share immersive experiences.


All these updates set the stage for the welcoming of new beta testers early next year, with the public launch slated to take place upon the completion of beta-testing.

More info https://sensoriumxr.com

Contact:

Name: Elena Rudovskaya

Email Address: elena.rudovskaya@sensoriumxr.com

Varjo Brings World’s First Human-Eye Resolution VR/XR Cloud Streaming Capability to Its Reality Cloud Platform

With today’s announcement of the addition of cloud streaming to its Varjo Reality Cloud platform, select early access users, like electric vehicle manufacturer Rivian, can deploy virtual and mixed reality applications and experiences entirely from the cloud for the first time and stream human-eye resolution VR/XR content directly to Varjo headsets.

Professionals across industries can move into immersive workflows easier and faster than ever before by leveraging the infinite compute power of the Varjo Reality Cloud, powered by Amazon Web Services (AWS) and NVIDIA GPUs. By streaming content directly from Varjo Reality Cloud, local computing requirements are significantly reduced, and the need to have supported software applications installed on every user’s local PC is diminished. Instead, users can simply put on any Varjo headset (XR-3, VR-3, or Aero) and, with a simple link, join a cloud-hosted session to begin collaborating instantly across the globe.

Early access customers such as Rivian are already using the new platform to improve the scalability of their immersive workflows 

 

Jan. 19, 2022 – Helsinki, Finland – Varjo, the industry-leading provider of professional-grade VR/XR hardware and software, today announced the addition of cloud streaming to its Varjo Reality Cloud platform, marking progress toward the company’s vision of bringing real-life collaboration into the metaverse. With the new service, select early access users can deploy virtual and mixed reality applications and experiences entirely from the cloud for the first time and stream human-eye resolution VR/XR content directly to Varjo headsets. This unlocks new levels of scalability and productivity as professionals look to expand their use of immersive workflows.

 

“Being able to achieve the same quality experience through Varjo Reality Cloud with less powerful local PCs is a game-changer for companies looking to scale their use of virtual and mixed reality,” said Urho Konttori, founder and CTO of Varjo. “Now, with our new cloud streaming service, users can join photorealistic virtual experiences with almost any laptop with a dedicated NVIDIA GPU and a Varjo headset and start collaborating in an immersive environment.”

 


 

By utilizing Varjo’s proprietary foveated transport algorithm, users can stream immersive content from Varjo Reality Cloud to VR/XR devices with a bandwidth of only 35 megabits per second. Additionally, all traffic between the local PC and the servers, including the stream itself, is encrypted and follows industry best practices.
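For a rough sense of scale (back-of-envelope arithmetic, not a figure from Varjo), a sustained 35 Mbit/s stream works out to roughly 15.75 GB of network traffic per hour:

```python
# Back-of-envelope: hourly data volume of a constant-rate stream.
# Assumes decimal units (1 Mbit = 10^6 bits, 1 GB = 10^9 bytes).

def stream_gb_per_hour(mbps: float) -> float:
    """Gigabytes transferred per hour at a sustained rate of `mbps` Mbit/s."""
    bytes_per_second = mbps * 1_000_000 / 8  # bits -> bytes
    return bytes_per_second * 3600 / 1e9     # one hour, in GB

print(stream_gb_per_hour(35))  # 15.75
```

Actual consumption will vary with session length and any rate adaptation, but this gives an order-of-magnitude estimate for network planning.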

 

Rivian, the electric vehicle manufacturer, is among the first customers who have received early access to Varjo Reality Cloud to conduct automotive design reviews with Autodesk VRED software. Varjo has been working closely with the Rivian, Autodesk VRED, and AWS teams to transform the automaker’s design review process into a cloud-streamed, immersive workflow.  

 

With Varjo Reality Cloud, Rivian can now create collaborative VRED sessions on demand and enable key decision-makers to join. More specifically, one user can create a cloud-hosted session running on Varjo Reality Cloud and send it to other users across geographies. Once users click on the session link and put on a Varjo headset, they can all see and collaborate on the same high-resolution 3D car model through VRED without having to download it or install the application locally.

 

“With Varjo Reality Cloud, we are able to make high-fidelity immersion a key part of our design development and scale it effectively across locations,” said Trevor Greene, Lead of Visualization Design at Rivian. “This is a turn-key solution that allows users with very different skill levels to be brought into an immersive environment to collaborate – something that hasn’t been possible before.”

 

Varjo Reality Cloud is still under development and only available to select existing Varjo customers in early access, with commercial availability expected during the first half of this year. Varjo plans to extend its cloud streaming capability for other relevant software and workflows across industries throughout 2022 and invites interested partners and customers to get in touch with their Varjo contact to inquire about early access. 

 

Supporting Customer & Partner Quotes: 

 

“Varjo Reality Cloud enables professional users to easily access and benefit from the highly scalable NVIDIA A10G Tensor Core GPUs to power the photorealistic and collaborative visualization service,” said Lisa Bell-Cabrera, Director of Business Development XR at NVIDIA. “This is the first time Varjo software runs in the cloud on an NVIDIA GPU, and it’s a great step forward in bringing scalable, true-to-life virtual reality experiences to professionals across industries.” 

 

“We’re excited to partner with Varjo to help cutting-edge automotive manufacturers such as Rivian bring their design review process in VRED into the cloud,” said Lukas Fäth, Senior Product Manager at Autodesk. “With Varjo Reality Cloud, users can create collaborative VRED sessions on-demand and easily invite key decision-makers to join. When paired with the human-eye resolution built into Varjo’s headsets, users are able to see a realistic, real-scale virtual model of the car they’re working on and collaboratively review it in real-time, making immersive workflows more efficient than ever before.” 

 

###

 

About Varjo  

Varjo (pronounced var-yo) makes revolutionary VR/XR hardware and software that together allow you to see and experience virtual and augmented content just as clearly as you see the real world. Our virtual and mixed reality products take you to another level of performance and emotional immersion – recreating the exact feeling and conditions of real life, allowing you to perform better and learn faster. www.varjo.com

Point72 is looking for a Metaverse Research Engineer, Technology Innovation in New York or London!

More info and apply here

A Career with Point72’s Technology Innovation Group

Point72’s Technology Innovation Team

The Technology Innovation team at Point72 focuses on researching and experimenting with a broad range of technologies that could bring transformational impact to our firm. We conduct research in areas such as applied quantum technology, novel non-traditional computing paradigms, applied artificial intelligence, heterogeneous architectures, digital reality, machine-brain interfaces, and others.


What you’ll do

We are looking for a Metaverse Research Engineer to join our team. In this role, you will prioritize areas of research based on the likely impact of digital reality/metaverse and human augmentation interfaces on financial services and the asset management industry, collaborate with internal teams to identify use cases, collaborate with academia and commercial firms to foster research partnerships, and develop relationships with the wider tech industry and startup ecosystems. For the successful candidate, we are flexible on the location of this role.

Specifically, you will:

  • Explore and understand trends in digital reality, the metaverse, DeFi, NFTs, and human augmentation interfaces; identify opportunities, use cases, and areas of impact and transformation for our business

  • Experiment with creating digital content, digital spaces, and DeFi/NFT immersive spaces using the AR/VR/MR hardware and software technology stack

  • Conduct in-house research and offer thought leadership in a collaborative framework with in-house teams and external parties

  • Collaborate with academia, industry, and startup ecosystems to conduct joint research and experimentation. Your academic contact network will enable you to make significant headway in this space

  • Publish whitepapers, position papers, tech reports, and research papers on digital reality/metaverse and human augmentation technologies

  • Champion and foster a culture of innovation at all levels of the organization

  • Partner internally as needed to ensure adherence to applicable regulatory requirements and Point72 Compliance Policies.


What’s required

  • PhD or Master’s background in computer science or other engineering disciplines, including recent graduates and post-doctorate candidates

  • Experience in one or more of the following areas: 3D/AR/VR/MR/XR technologies, AR cloud/Spatial Computing, Computer Vision, Immersive user collaboration experiences, Immersive Smart Spaces, Affective Computing/Emotional AI interfaces, Digital Humans/Twins, Brain Machine Interfaces, or similar areas

  • Experience with or knowledge of DeFi and NFTs is a big plus

  • Excellent written & interpersonal communication skills and the ability to communicate complex concepts

  • Be curious, embrace uncertainty, and be comfortable with failure while exploring new ideas

  • Commitment and adherence to the highest ethical standards


We take care of our people

We invest in our people, their careers, their health, and their well-being. We want you to concentrate on success and leave the rest to us. When you work here, we provide:

  • Fully-paid health care benefits

  • Generous parental and family leave policies

  • Mental and physical wellness programs

  • Volunteer opportunities

  • Support for employee-led affinity groups representing women, minorities and the LGBT+ community

  • Tuition assistance

  • A 401(k) savings program with an employer match and more


About Point72

Point72 Asset Management is a global firm led by Steven Cohen that invests in multiple asset classes and strategies worldwide. Building on more than a quarter century of investing experience, we seek to be the industry’s premier asset manager by delivering superior risk-adjusted returns, adhering to the highest ethical standards, and offering the greatest opportunities to the industry’s brightest talent. We’re inventing the future of finance by revolutionizing how we develop our people and how we use data to shape our thinking. For more information, visit www.Point72.com/working-here

More info and apply here


Listen to Kàra Màr’s AI-Generated LP “Anthropic Principle” in PRISM, Sensorium Galaxy’s virtual world dedicated to extraordinary music events

Virtual artist Kàra Màr has released a debut LP titled “Anthropic Principle”, a fully AI-generated collection of eight tracks. The 33-minute album blends techno genres and showcases the power of technology that can generate human-like music. The sounds were produced with Mubert, a platform that lets artists create their own music with the support of artificial intelligence.

Fans will soon be able to experience Kàra Màr’s performances live in PRISM – one of Sensorium Galaxy’s virtual worlds dedicated to extraordinary music events. Sensorium Galaxy is scheduled to go live in several months, while the Sensorium app is already available for download.

More info:

https://sensoriumxr.com

Name: Elena Rudovskaya

Email Address: elena.rudovskaya@sensoriumxr.com

Call for Speakers & Sponsors for our METAVERSE 2.0 virtual event on March 9 & 10

Our first METAVERSE event was a huge success, with thousands of attendees and hundreds of speakers.

For METAVERSE 2.0, apply to speak or sponsor here.

Get ready for two days of disruptive ideas and ground-breaking insights as we bring together the most revolutionary minds and reflect on best practices for entertainment and business.

We’re here to collaborate for a better metaverse! Our event will provide the audience a chance to:

* Learn what the best minds in the industry are focusing on

* Skip the cliches and dive deep into the real issues

* Get an insider view on how the leading companies see the future, and how they’re planning to get there

* Connect & network with others (60K+ have already been invited!)

Topics can include:

  • Metaverse platforms

  • Avatars and Meta Humans

  • Volumetric 

  • Blockchain

  • NFTs

  • Cryptocurrency

  • Digital Fashion

  • Ethics & Law

  • Other

Why the Military Needs a Metaverse - ATARS, the Gateway into the Military Metaverse

Who is Red 6 and Why?

In response to a national security crisis in military flight training that represents an annual $24B total addressable market, Red 6 developed ATARS (the Airborne Tactical Augmented Reality System). ATARS is a technological breakthrough: the first wide field-of-view, full-color augmented reality solution demonstrated to work in dynamic outdoor environments.

Red 6's technology is the first and only to put virtual airplanes into the real world: real pilots in real airplanes, flying against synthetic enemies brought into a digital AR world, up in the sky. This removes the need for the military to provide physical airplanes and pilots as adversary forces to train against. The initial focus of ATARS was to provide all allied combat pilots across the globe with single-aircraft, within-visual-range air combat maneuvering training against synthetic threats.

Why?
By providing a dedicated synthetic adversary training resource to every squadron, ATARS enables unlimited 1v1 training against the actual enemy capability pilots will see in battle: synthetic aircraft doing exactly what a state-of-the-art enemy aircraft would be able to do.

This bears repeating: ATARS will enable US military pilots to fight against our adversaries’ most advanced fighter jets in synthetic form so that when the day comes, we will be able to exceed the capabilities of anyone we face.

Like chess masters training with chess software, ATARS also enables pilots to be trained at all stages of their careers, from initial flight training through advanced leadership of large groups of air combat forces. As a result, the US military needs fewer instructor pilots and fewer aircraft, with correspondingly lower maintenance needs and operational expenses. We can now rapidly increase the production of new combat aviators while simultaneously achieving compelling financial offsets across the Combat Air Force.

More info http://www.red6ar.com

Contact:

Name: Christina Babbitt

Email Address: christina.babbitt@red6ar.com

Case MUJI & Varjo: Using VR in retail to achieve a deeper customer connection

Together with Varjo, the Japanese retail company MUJI has harnessed virtual reality to achieve a deeper connection with their in-store customers.

Since November 2021, MUJI's European flagship store in Helsinki has featured a virtual reality experience centered on nature. As part of a physical in-store installation, customers can try on the new Varjo Aero virtual reality headset while sitting on one of the store’s signature pine wood beds.

Filmed with a 360° camera, the VR experience instantly transports participants to the heart of nature. “Currently virtual reality is often being considered for use in e-commerce in the marketing areas, but MUJI doesn’t think that way. We think of how we can get closer to our customers and offer them a journey through MUJI’s philosophical world," says Miho Takagi, Managing Director, MUJI Finland.

Read the full case study: https://varjo.com/blog/case-muji-vr-in-retail/

makeSEA Brings Live Broadcast Spatial Content Production, Co-Presence and 360-Surround Livestream Event Experiences to the Metaverse

makeSEA.com has joined forces with LEVR.tv to make history by offering a new sense of reality at live sporting and entertainment events. For the first time, users can attend live and prerecorded pay-per-view event experiences on VR and AR devices, alongside others, as if physically present together at the venue. Attendees experience live content produced by LEVR and delivered through the LEVR TV app, developed by makeSEA and built on top of the makeSEA Content Management & Collaboration Platform and the Catapult app for AR & VR (XR). Users can watch events live with friends, interact with the general audience and shared content, and move between lounge and 360-degree VRENA℠ spaces. They view live-action events in streaming 360-surround video at up to 8K resolution, enhanced with live-produced, interactive spatial content.

The inaugural broadcast was produced for Celebrity Championship Boxing, broadcast live from the US Virgin Islands on October 23, 2021. Attendees could view the event ringside in surround video, or from the comfort of a swanky VR lounge adorned with multiple screens streaming the live event, together with friends who were in different physical locations across the US and abroad, as if sharing the same space. Attendees used Meta (Oculus) Quest devices as part of their Metaverse experience, with exceptional experiential quality and resolution. The venue is reusable and the content replaceable, making disposable, single-use XR applications a thing of the past. The Platform also supports live content production (injection, triggering, manipulation), live interaction with the audience using spatial content for VR and AR, and virtually unlimited audience sizes with automatic scaling and proximity-based communications.

Chris Stavros, CEO at makeSEA stated that, "makeSEA + Catapult is a cross-device Content Management and Collaboration Platform for AR and VR that reduces production and refresh times for mixed reality apps by 10x - 100x. For the LEVR TV app, using makeSEA and Catapult helped reduce implementation time to weeks, providing a publishing and update workflow that has reduced the development and operating costs for LEVR by more than 80% as compared to traditional production methods."

makeSEA puts the power of sharing and live collaboration in the hands of everyone: any educator and any business. makeSEA is the fastest way to get your content onto your XR device, with no programming required and real-time updates. The California-based company serves multiple educational institutions, a diverse range of business clientele, and individual creators and designers.

More info https://www.makeSEA.com

Contact:

Name: Chris Stavros

Email Address: cstavros@makesea.com

Tess McKinney of University of Nebraska Medical Center appointed as VRARA Healthcare Committee Co-Chair

We are thrilled to have Tess McKinney join our Healthcare Committee as a Co-Chair.

Tess McKinney currently serves as Instructional Technologist II supporting VR, AR, and XR in education at the University of Nebraska Medical Center’s Global Center for Health Security (NICS). Tess is also working with the UNL College of Computer Science and Engineering, the UNMC College of Public Health, and the UNMC College of Nursing on a project (Rollover Ranch) that uses virtual reality to teach agriculture and farm equipment safety to K-12 students. She formerly served as Enterprise AV Technologist for the University of Nebraska Medical Center College of Nursing, Lincoln Campus.

The Cool Kids group, led by Tess, is a gathering of professionals from Nebraska across a variety of disciplines who all have some interest or hand in creating immersive media. Tess is also the Nebraska VR Network for Education & Research (NeVRNER) Co-leader/outreach coordinator and serves on other organizations and committees in her community.

I am very excited to co-chair the Healthcare Committee for the VRARA Community. I look forward to sharing my expertise in Virtual and Augmented Reality and Innovation with the Healthcare sector. I love to network, and I hope to influence & bring together the renegades, pioneers, and educators by introducing new and upcoming innovations by top vendors in our monthly meetings. I am all about sharing as much information as I can, and helping people connect to achieve their goals. Hope everyone can join us in the VRARA Healthcare Forums. 
— Tess McKinney

SoftServe joins VR/AR Association (VRARA)

Company will expand outreach to member organizations, entrepreneurs, and professionals in virtual and augmented reality

AUSTIN, Texas—January 10, 2021—SoftServe, a leading digital authority and consulting company, has become a member of the VR/AR Association (VRARA). The VR/AR Association is an international industry organization for virtual and augmented reality designed to connect member organizations, encourage research and education, help develop industry best practices, and foster collaboration between solution providers and end-users. The VR/AR Association currently has over 4,000 companies, brands, and schools, and over 60,000 industry professionals.

“SoftServe is excited to be a member of the VR/AR Association,” said Rich Herrington, EVP of Client Success at SoftServe. “We are witnessing accelerated adoption of these technologies in all enterprise segments of our business and feel the Association will help foster ideation and collaboration. Through our partnership network of the leading cloud and platform companies SoftServe is focused on developing solutions that show rapid value and measurable benefits through dynamic user experiences tied to existing, complex, and disparate enterprise data sets.”

SoftServe’s R&D department, responsible for high-tech solutions and scientific research, is currently working on a new AR Remote Assistance solution powered by Magic Leap, Unity, and ServiceNow. It is an equipment service and maintenance solution that makes significant strides in training, technician support, problem diagnosis, maintenance, and asset management. With quick access to analytics, equipment history, and forecasting over the cloud, this unique integration is not just a training and consulting solution, but a working tool for field management and maintenance. It helps solve complex technical operations by connecting specialists and providing them with a unified view of the problem and a large set of tools to solve it.

The Remote Assistance solution creates AI models that monitor and predict when equipment will need maintenance. Using practical AR applications, field technicians can reference articles, videos, manuals, checklists, even collaborate hands-free with a remote expert. With back-end and data source integrations, it offers analytics for better issue diagnostics and maintenance decision-making. The solution provides aggregated data, analytics, and checklists needed to detect and resolve maintenance on the go.
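The release does not describe how these predictive models work internally. Purely as an illustrative sketch of the general idea, not SoftServe's implementation, a minimal check that flags equipment whose recent sensor readings drift above a tolerance band might look like this (all names and threshold values are made up):

```python
def needs_maintenance(readings, limit=5.0, window=3):
    """Flag equipment when the mean of the last `window` readings
    exceeds `limit`. Hypothetical thresholds for illustration only."""
    recent = readings[-window:]
    return sum(recent) / len(recent) > limit

# Vibration levels trending upward -> schedule service.
vibration = [3.1, 3.3, 3.2, 4.9, 5.6, 6.2]
print(needs_maintenance(vibration))  # True
```

A production system would of course replace this fixed threshold with a trained model, but the input/output shape (a stream of readings in, a maintenance flag out) is the same.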

About SoftServe

SoftServe is a digital authority that advises and provides at the cutting-edge of technology. We reveal, transform, accelerate, and optimize the way enterprises and software companies do business. With expertise across healthcare, retail, energy, financial services, software, and more, we implement end-to-end solutions to deliver the innovation, quality, and speed that our clients’ users expect.


SoftServe delivers open innovation, from generating compelling new ideas to developing and implementing transformational products and services. Our work and client experience are built on a foundation of empathetic, human-focused experience design that ensures continuity from concept to release. We empower enterprises and software companies to (re)identify differentiation, accelerate solution development, and vigorously compete in today’s digital economy, no matter where you are in your journey.


Visit our website, blog, LinkedIn, Facebook, and Twitter pages.


SoftServe Contact

PJ Powell

Extended Reality Practice Lead 

ppowell@softserveinc.com

VR/AR Association (VRARA) launches the Metaverse Committee to create best practices for the ecosystem and industry

After our hugely successful METAVERSE event, we want to continue the momentum, discussions, and collaboration and have created the Metaverse Committee.

The Metaverse Committee will set best practices and enable collaboration among the global providers, enablers, brands and other industry partners to create and deploy engaging technologies and solutions for the Metaverse and other Web 3.0 activities, programs and platforms.

The Committee will host Online Meets every two weeks. Topics will include: Metaverse platforms, Avatars and Meta Humans, Volumetric, Blockchain, NFTs, Cryptocurrency, and Privacy, among others.

Join us to:

  • Learn what the best minds in the industry are focusing on

  • Skip the cliches and dive deep into the real issues

  • Get an insider view on how the leading companies see the future, and how they’re planning to get there

  • Connect & network with others

More info: Metaverse Committee

Email info@thevrara.com if you want to be added to the meeting invites!

Aftermath Islands Metaverse Begins Phase 3 of Virtual Islands

Since going live in early November, over 3,000 unique customers have acquired virtual land and other related NFTs (non-fungible tokens) as part of the Aftermath Islands Metaverse program, accounting for almost 6,400 plots and parcels of virtual land, with over 22% making multiple purchases. Over 89% of Phase 1 and 2 Estate Islands have already sold out, and many other islands are down to low inventory levels.


Phase 3 introduces new theme-based virtual island play: Smash Island will open future Player-versus-Player (PvP) capabilities, allowing players to take refuge and sanctuary on their owned properties; Collector Aisle will feature a range of sports, entertainment, and comic book collectible programs geared to enthusiasts; and Elven Inlet will create a land of wonder and magic.


Additionally, Phase 3 will introduce 10 new Estate Islands with a comic book era theme paying homage to some of the greatest heroes and villains of the industry universes. These include Parker Place, Banner Bay, Wayne Hideaway, Kent Enclave, Stark Shelter, Luthor Lagoon, Odinson Rising, Grimm Gates, Doom Reef, and Logan Refuge.

Contact:

Name: Cara Buckspan

Email Address: cara.buckspan@liquidavatar.com

ThirdEye Announces Razor MR Glasses, Expanding into the Consumer Metaverse with a New Lightweight Solution and 100+ Apps

Going beyond the enterprise metaverse, the Razor MR Glasses will be ThirdEye’s first foray into the consumer market, offering 100+ apps, hands-free features and a wide field of vision.


ThirdEye, a leader in augmented and mixed reality (AR/MR) solutions, today announces its first consumer mixed reality glasses, the Razor MR Glasses. Building on technology that has proven successful in the enterprise, ThirdEye is expanding its lineup of hardware solutions by introducing a new product aimed directly at consumers.

With the Razor MR Glasses’ lightweight, all-day-wearable form factor, consumers can experience a fully immersive metaverse solution. The applications available on the consumer MR glasses range from gaming and entertainment to telehealth and remote assistance. Game developers are creating multiplayer metaverse apps in which Razor MR Glasses wearers can view digital information overlaid onto a cityscape. Users can also watch movies or their favorite TV shows with spatial audio.

Repairs and appointments can also be handled via the MR glasses. Consumers can use existing ThirdEye software, such as RemoteEye, to get real-time help from maintenance crews for home repairs, or to take an inventory of household assets for insurance purposes. ThirdEye’s RespondEye platform can also be used to communicate with doctors or caregivers remotely, allowing a remote doctor to view the patient in real time with AR annotations.

“Through the feedback we’ve received from customers since we launched in 2016, we’ve found there to be a great desire to bring our lightweight solutions and user-friendly applications, like RemoteEye, to home use as well,” said Nick Cherukuri, Founder and CEO of ThirdEye. “For the Razor MR Glasses, we wanted to accommodate a variety of needs. For example, these mixed reality glasses are lightweight and myopia-friendly, allowing nearsighted users to adjust the Razor MR Glasses from zero to negative five diopters with a single twist of a knob on the side of the glasses. Now, no one will need to stack multiple eyewear pieces, as is needed with VR solutions, making the glasses extremely comfortable for daily use.”

In addition, the Razor MR Glasses already support many metaverse applications available in ThirdEye’s app store, including RemoteEye for remote assistance and the HIPAA-certified RespondEye for telehealth. The Razor MR Glasses feature a 70 Hz refresh rate and two noise-canceling microphones to prevent lag and enable clear communication. They can connect with most Android and iOS devices, including all phones that support DisplayPort (DP) output, laptops and tablets with a USB-C port, and gaming consoles through HDMI adapters.

Foldable and lightweight at 85 grams, the Razor MR Glasses are comfortable to wear on the go or at home for extended periods of time. The glasses allow users to remain hands-free in a variety of activities, including interacting on social media, using a multi-purpose assistant, exercising with a personal trainer via a heads-up-display coach, and immersing themselves in mixed reality games. The Razor MR Glasses run on the Android 9.0 operating system, boast a 43-degree field of vision (FOV), equivalent to a 120-inch display, and have an 8-hour battery life. Additional features include voice control and a dual high-definition (HD) directional sound system.
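The "equivalent to a 120-inch display" figure only holds at a particular viewing distance, which the announcement does not state. As a quick sanity check (an illustration, not a ThirdEye specification), the distance at which a 120-inch diagonal fills a 43-degree diagonal FOV works out to roughly 3.9 meters:

```python
import math

def implied_viewing_distance_m(fov_deg: float, diag_in: float) -> float:
    """Distance at which a flat screen of diagonal `diag_in` (inches)
    subtends a diagonal field of view of `fov_deg` degrees."""
    half = math.radians(fov_deg / 2)
    distance_in = (diag_in / 2) / math.tan(half)  # right-triangle geometry
    return distance_in * 0.0254                    # inches -> meters

print(round(implied_viewing_distance_m(43, 120), 2))  # ~3.87 m
```

In other words, the virtual screen appears the size a 120-inch TV would from about four meters away.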

The Razor MR Glasses have already received preorders from leading consumer and telecom companies.

The new Razor MR Glasses are currently in production and will be shipping later this year. Users can pre-order or receive more information at www.thirdeyegen.com or by contacting sales@thirdeyegen.com.

About ThirdEye
ThirdEye is a leader in smart glasses and AR/MR software development. While many companies today provide only hardware (smart glasses) or only software, ThirdEye provides a full end-to-end ecosystem for its customers, which makes deployment easier for our partners and end users. It has hundreds of software developers creating apps ranging from games to entertainment to enterprise applications, and its products retail around the world. From everyday consumers to Fortune 500 companies, ThirdEye is bringing the power of mixed reality to the globe. Mixed reality has the potential to change the way the world operates, and ThirdEye's vision is to help generate the future.

Media Contact
Iman Scott
Uproar PR for ThirdEye
marketing@thirdeyegen.com
321.236.0102


Terrasolid maps the world in 3D using 3D PluraView monitors that visualize in Stereo

For consistent and precise digital GIS and photogrammetry workflows, raw data must first be converted into integrable, and thus valuable, information components that meet the requirements of the respective application environments. The 3D point cloud processing modules from the Finnish software provider Terrasolid, such as TerraScan, TerraModeler, TerraMatch, TerraPhoto and TerraStereo, are highly developed, intelligent and powerful applications. They can process and model laser points with their XYZ coordinates at high speed and display the results in 3D-stereo. Over the last 20 years, the capabilities of available LiDAR hardware have developed rapidly together with those of the processing software, with Terrasolid applications at the forefront.
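To give a flavor of what processing XYZ laser points involves, here is a deliberately tiny sketch in plain NumPy on made-up coordinates (a generic illustration, not Terrasolid's API): a crude ground/non-ground split of the kind that point-cloud classification tools automate at scale.

```python
import numpy as np

# Hypothetical XYZ laser returns in meters; a real project would load
# millions of points from LAS/LAZ files rather than hard-coding them.
points = np.array([
    [0.0, 0.0, 101.2],
    [1.0, 0.5, 101.4],
    [1.2, 0.8, 109.8],  # high return: likely vegetation or a building
    [2.0, 1.0, 101.3],
])

# Crude classification: treat returns within 0.5 m of the lowest
# elevation as ground, everything else as non-ground.
ground_mask = points[:, 2] <= points[:, 2].min() + 0.5
print(int(ground_mask.sum()))  # 3 ground points, 1 non-ground
```

Production classifiers use far more sophisticated criteria (surface continuity, return number, intensity), but the underlying operation is the same: labeling each XYZ point so that downstream modeling and visualization can treat terrain, vegetation and structures separately.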

The variety of LiDAR applications has grown rapidly within just a few years: surveyors and civil engineers use 3D point clouds for terrain modeling; for the construction and monitoring of bridges, dams and high-voltage power lines; for determining the quality of road surfaces; and for millimeter-accurate measurement of railway tracks and the entire railway infrastructure. City planners receive very precise information about existing vegetation in cities, while precise reference points for building measurements are recorded at the same time. Archaeologists use precise, RGB-textured LiDAR data to record and reconstruct important cultural monuments. Following the massive fire at Notre-Dame, the medieval cathedral in the center of Paris, the remaining structure was completely surveyed with high-resolution LiDAR instruments. These are just a few examples of the use of LiDAR data, which has been processed for many years with the versatile and constantly evolving Terrasolid application tools.


New use cases emerge with the increasing diversity and rapid development of LiDAR data acquisition scanners. Very small but precise airborne laser scanners are available for unmanned aerial systems (UAS), often referred to as drones. In larger single- or twin-engine aircraft, very powerful LiDAR instruments are used at greater altitudes and for covering large areas. On the ground (terrestrial), LiDAR data acquisition ranges from very small hand-held, ultra-mobile scanners to somewhat larger, very precise laser measurement heads installed on tripods with a range of several hundred meters. These are used in combination with digital cameras, often mounted on vehicle roofs, for fast, mobile LiDAR data acquisition.


The dual-screen, stereoscopic 3D PluraView monitors from Schneider Digital visualize these point clouds in the highest 3D-stereo display quality and are ‘plug & play’ compatible with the TerraStereo software by Terrasolid. Users benefit from flicker-free, pixel-precise visualization with resolutions of up to 4K per screen and eye. The 3D PluraView monitors are the perfect visualization solution for comfortable work with high-resolution LiDAR data in all 3D-stereo and VR / AR desktop application areas. The compatibility of TerraStereo with the 3D PluraView monitor family has now been officially certified by the manufacturer Schneider Digital.

The LiDAR applications from the Finnish software company Terrasolid are the world's leading platform for the processing and visualization of point clouds. As a company, Terrasolid has been successfully established in the geodata market for more than 30 years and has been the global market leader specifically with LiDAR software solutions for more than 20 years now. Independent of LiDAR data sources and sensors, Terrasolid offers versatile and powerful tools for the editing of 3D point clouds, feature extractions, terrain representations and point cloud visualizations. With TerraStereo, even very large point clouds with more than 50 billion points can be realistically visualized in 3D-stereo at very high speed with freely selectable point textures.


Visualize, analyze, calculate and extract LiDAR data

TerraStereo is specifically tailored to the 3D-stereo visualization of data from every Terrasolid workflow and can be combined with all other Terrasolid software products. A Terrasolid workflow usually begins with TerraScan, which manages, processes and visualizes all types of point clouds and offers various import and project-structuring tools for very large amounts of data. With this powerful tool, complex buildings, landscapes, and road and cable networks can be reliably measured, vectorized and precisely modeled in 3D. In combination with additional Terrasolid applications such as TerraModeler, TerraMatch, TerraPhoto, TerraSlave and other products, users have all the tools they need for the highly automated processing of LiDAR point clouds and the creation of 3D vector datasets. TerraStereo can visualize buildings, topographies and entire railway and tram infrastructures in 3D-stereo; it is used, for instance, to visualize power lines and corridor analyses, to represent dangerous objects in 3D-stereo, and to efficiently represent road surface conditions for risk assessment.


Terrasolid's software products combine the processing of LiDAR and RGB-I image data from terrestrial and airborne laser scanning systems. Terrasolid is neither limited to certain data applications nor to specific laser scanning or camera systems. From calibration to comparing and merging of input data to the creation of final 3D vector models, ortho-images with TerraPhoto, terrain representations with and without natural vegetation, the software applications are highly flexible and offer powerful solutions, e.g. for surveying and construction, cartography, photogrammetry and surface analysis, but also for archeology, research and urban development. TerraScan converts every point cloud precisely, quickly and in high quality into a corrected 3D model or to CAD vector elements, which saves time and money and optimizes workflows. Fast data export is a great advantage, for example in smart city and urban planning applications that are being used on desktop, mobile and web platforms.


For the spatial viewing and measurement of 3D models, TerraStereo relies on the detailed, high-contrast 3D-stereo display of passive, double-screen beamsplitter systems: the 3D PluraView series from Schneider Digital. These high-end displays are the de facto industry standard for all stereo-capable LiDAR, photogrammetry and GIS applications. The color representation of RGB-textured point clouds is truly impressive on these powerful, innovative 3D-stereo display systems. With a razor-sharp display in real time, they are completely flicker-free and can be used ‘plug & play’ for all TerraStereo functionality, such as precise 3-axis measurement in 3D space.

Impressively present intelligent point clouds and edit precisely

With screen diagonals up to 28", the 3D PluraView monitors deliver highly detailed, stereoscopic 3D visualizations. Thanks to one monitor per eye, they offer up to 4K stereoscopic resolution and brilliant image brightness. Their optimal ergonomics and passive polarization filter technology ensure fatigue-free work even in normal daylight office conditions. Not only Terrasolid users appreciate the easy handling of the 3D PluraView monitors: 3D models can be easily displayed and easily measured or edited with a 3D mouse. With the 3D PluraView monitors, Terrasolid users benefit from mature 3D visualization technology, established throughout the geospatial industry for many years.

Schneider Digital is the world's leading manufacturer and distributor of customized hardware solutions for graphics-intensive computer applications and offers complete workplace solutions for the calculation and visualization of large datasets to professional users in the areas of GIS and photogrammetry. The powerful performance of Schneider Digital workstations, in combination with innovative high-end displays, has ensured fast and precise workflows in geospatial applications for over 25 years. Any 3D modeling workflow, no matter how complex, turns into an impressive presentation and a remarkable, high-resolution working model on a 3D PluraView monitor. Together with the high-end displays from Schneider Digital, professional users of the high-performance Terrasolid software suite get a complete package that is fine-tuned at the highest level. Measuring, capturing, analyzing and visualizing 3D data with these combined, innovative technologies is not only convenient, it also adds real value to any demanding GIS workflow. For this reason, Schneider Digital has now officially certified TerraStereo for 3D-stereo visualization with its 3D PluraView monitors.


More information at:

https://www.3d-pluraview.com/en/application-field/3d-pluraview-in-geo-applications




Schneider Digital direct contact:

Schneider Digital

Josef J. Schneider e.K.

Maxlrainer Straße 10

D-83714 Miesbach

Tel.: +49 (8025) 99 300

Mail: info@schneider-digital.com   


Schneider Digital – The company:

Schneider Digital is a global full-service solution provider for professional 3D-stereo, 4K/8K and VR/AR hardware. Based on its 25 years of industry and product experience as well as its excellent relationships with leading manufacturers, Schneider Digital offers innovative, sophisticated professional hardware products and customized complete solutions for professional use. Qualified advice and committed after-sales service are the company's own standards.

The Schneider Digital product portfolio includes the right professional hardware solution for the respective requirements in these areas: high-resolution 4K/8K displays up to multi-display walls. Schneider Digital is the manufacturer of its own powerwall solution, smartVR-Wall, and the passive stereo monitor 3D PluraView. Performance workstations and professional graphics cards from AMD and NVIDIA, as well as innovative hardware peripherals (tracking, input devices, etc.), round off the product range. Many items are kept in stock, guaranteeing fast delivery and project realization.


Schneider Digital is an authorised service distributor of AMD FirePRO/Radeon Pro, PNY/NVIDIA Quadro, 3Dconnexion, Stealth int., Planar and EIZO. Schneider Digital products are used primarily in graphics-intensive computer applications such as CAD/CAM/CAE, FEM, CFD, simulation, GIS, architecture, medicine and research, film, TV, animation and digital imaging.


Further information is available at www.schneider-digital.com and www.PluraView.com.



Schneider Digital press contact:

LEAD Industrie-Marketing GmbH

André Geßner

Hauptstr. 46

D-83684 Tegernsee

Tel.: +49 80 22 - 91 53 188

E-Mail: agessner@lead-industrie-marketing.de

Internet: www.lead-industrie-marketing.de