U.Group Glimpses the Future of Mixed Reality

A preview of the HoloLens 2 and Azure Kinect

What if the world could be a canvas for digital experiences? Imagine being hands-free, untethered, and able to interact with our applications the same way we interact with each other: through voice and gestures, with holographic data visualized in the context of where we are and what we’re trying to do.

The first-generation HoloLens gave us a first look at the potential of mixed reality and spatial computing, but after three years the technology has come a long way. We recently attended Microsoft’s first Mixed Reality Dev Days event, where we had our first opportunity to check out the new HoloLens 2 and the Azure Kinect camera. After two days of intensive sessions and hands-on workshops, here’s our summary of what’s next for mixed reality.

The HoloLens 2 experience

First, let’s talk Field of View (FoV). Yes, it’s bigger—more than double that of the HoloLens 1—but more importantly for experience designers, the display area is now taller, closer to a square than the original’s widescreen ratio. Spatial computing means placing content in the world relative to the wearer, and the HoloLens 2 provides a substantially larger canvas for taller models and for interactions positioned both near and far.

FoV comparison chart. Source: MSPowerUser

My first experience in the HoloLens 2 was a warm volumetric video welcome from Alex Kipman, Technical Fellow at Microsoft for AI & Mixed Reality and inventor of the HoloLens. I thought back to the first time I watched Star Wars and saw a hologram deliver a crucial message of hope, but this time the hologram was standing in front of me at eye level, with spatial audio that made it sound like we were in the same room.

I completely forgot to check the limitations of the FoV until after exiting the first demo. On the HoloLens 1, one of the first things people often notice is the frame created by the displays, which immediately highlights the FoV constraints. With the new waveguide displays, that frame disappears entirely. Holograms blend more naturally into the environment and the sense of immersion is drastically improved.

Built for comfort. Ready for enterprise.

Ergonomics have also been greatly improved. While the HoloLens 1 and 2 weigh almost the same, the weight is now distributed 50/50 between the front and the back. The device feels much lighter as a result, putting less strain on the neck. This is especially beneficial for industrial applications that involve looking down frequently while working.


The flip-up visor is another thoughtful addition. It makes it easier for front-line workers to drop in and out of holographic experiences as needed without removing the device. Both of these features help improve comfort and efficiency over longer periods of time.

Instinctual interactions: manipulating holograms like real-world objects

MRTK v2 UI & interaction elements. Source: Microsoft

Ever since Microsoft User Interface (UI) Engineer Julia Schwarz’s incredible demo of the new HoloLens 2 interaction elements at Mobile World Congress, we’ve been excited to check them out, and they didn’t disappoint. After years of designing around simple gestures like air-tap and tap-and-hold, the HoloLens 2 opens up a whole new range of possibilities for how we can interact with digital content.

Watching a holographic hummingbird fly from hand to hand is delightful. Source: Twitter User @MxdRealityDev

Instinctual interactions represent an evolution of Microsoft’s concept of a natural user interface. Multiple technologies come together to create fluid, instinctive digital experiences that behave the way we expect the physical world to behave. In addition to more accurate hand tracking, these patterns draw on the HoloLens 2’s inside-out tracking, voice recognition, and eye tracking to better understand the wearer’s context and help creators predict their intent.

Objects can anticipate being interacted with when we gaze at them. Hand- and finger-level tracking enables new interface patterns for manipulating, resizing, and interacting with holograms. Buttons can be pushed. Keyboards can be played. Switches can be flipped. Even better, we can reach out to distant holographic content and manipulate it as if we had telekinetic powers. Mixed reality interfaces can either replicate real-world controls or completely change the way we think about them.

Holographic buttons can be pushed and behave like their physical counterparts. Source: Microsoft
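For developers curious what sits behind a "pushable" hologram, here’s a rough, simplified sketch of the idea. It’s plain Python with made-up names and numbers—not the MRTK or HoloLens APIs, which live in C#/Unity—showing the basic geometry: track the fingertip, measure its travel along the button’s pressing axis, and fire once it crosses a threshold.

```python
# Illustrative sketch only: NOT the MRTK or HoloLens API (those are C#/Unity).
# It shows the kind of geometry behind a pushable holographic button: project the
# tracked fingertip onto the button's pressing axis and fire once a press-depth
# threshold is crossed. All names and values here are hypothetical.
import numpy as np

class PushableButton:
    def __init__(self, face_center, press_axis, press_depth=0.015, radius=0.03):
        self.face_center = np.asarray(face_center, dtype=float)  # front face center (meters)
        self.press_axis = np.asarray(press_axis, dtype=float)    # unit vector pointing "into" the button
        self.press_axis /= np.linalg.norm(self.press_axis)
        self.press_depth = press_depth                            # travel needed to trigger (meters)
        self.radius = radius                                      # lateral tolerance around the face center
        self.pressed = False

    def update(self, fingertip):
        """Call each frame with the tracked index fingertip position (meters)."""
        offset = np.asarray(fingertip, dtype=float) - self.face_center
        depth = np.dot(offset, self.press_axis)                    # travel along the pressing axis
        lateral = np.linalg.norm(offset - depth * self.press_axis) # distance off-axis
        on_button = lateral <= self.radius
        if on_button and depth >= self.press_depth and not self.pressed:
            self.pressed = True
            print("button pressed")
        elif depth < self.press_depth * 0.5:                        # hysteresis so it doesn't re-fire
            self.pressed = False

# Example: a button half a meter in front of the wearer, pressed as the fingertip
# moves about two centimeters past its face.
button = PushableButton(face_center=[0.0, 0.0, 0.5], press_axis=[0.0, 0.0, 1.0])
for z in (0.48, 0.50, 0.52):  # fingertip approaching, touching, then pressing
    button.update([0.0, 0.0, z])
```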

Eye tracking: visualizing user intent in 3D space

In addition to helping anticipate the wearer’s actions and intentions, eye tracking gives us the ability to visualize their attention and actions in 3D space. Eye tracking has long been an invaluable tool for understanding the user experience of traditional screen applications, and we can now leverage eye tracking studies for better insight into the effectiveness and performance of our mixed reality experiences. For training and simulation applications, we’ll be able to identify patterns and data points in a spatial context that may help us create more impactful training, further amplifying the benefits of immersive training for enterprise use cases.

3D spatial heatmap. Source: Microsoft
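To make that concrete, here’s a rough sketch (plain Python with invented data, not a Microsoft API) of one way gaze data could be rolled up into a 3D heatmap like the one above: log the points where the gaze ray hits content, bin them into voxels, and count hits per voxel.

```python
# Illustrative sketch, not a Microsoft API: turn logged eye-tracking hit points
# (where the gaze ray met a hologram or surface, in meters) into a coarse 3D
# heatmap by counting hits per voxel. The session log and voxel size are made up.
import numpy as np

def gaze_heatmap(hit_points, voxel_size=0.05):
    """Bin 3D gaze hit points into voxels and return {voxel_index: hit_count}."""
    pts = np.asarray(hit_points, dtype=float)
    voxels = np.floor(pts / voxel_size).astype(int)  # integer voxel coordinates
    counts = {}
    for v in map(tuple, voxels):
        counts[v] = counts.get(v, 0) + 1
    return counts

# Hypothetical session log: most fixations cluster around a control near (0.1, 0.0, 0.5).
log = [(0.10, 0.01, 0.50), (0.11, 0.00, 0.51), (0.09, 0.02, 0.49), (0.40, 0.30, 0.80)]
heatmap = gaze_heatmap(log)
hottest = max(heatmap, key=heatmap.get)
print("hottest voxel:", hottest, "hits:", heatmap[hottest])
```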

The bigger picture

It was exciting—and well worth the wait—to see the HoloLens 2 in action, but a bigger takeaway from MR Dev Days was that its full potential is unleashed as part of a larger ecosystem of cloud computing and connected devices. New Azure services and devices focused on mixed reality enable business applications that understand where you are and what you’re trying to do, along with new opportunities for collaboration. These services include:

Collaborative mixed reality experiences shared across multiple devices with Azure Spatial Anchors.

Spatial Anchors. Holographic content can now be anchored to real-world locations and easily shared across HoloLens, iOS, and Android devices. Potential applications include visualizing the Internet of Things, wayfinding, and creating persistent real-world markers enhanced by digital content. The physical world becomes a global canvas for an entirely new class of spatial computing experiences.

Remote Rendering. Higher-quality holograms mean higher levels of immersion, which in turn means more effective training and simulation applications. Our early experiments on the original HoloLens often involved a lot of iteration to strike a balance between quality and performance in our 3D models. With the new Remote Rendering service, high-quality models are rendered in Azure and streamed to mobile devices like the HoloLens, phones, and tablets.

Azure Kinect. If you’ve used the original Kinect on an Xbox, you have a sense of how accurately it can track and respond to your movements in the physical space around the camera. The new Azure Kinect adds AI sensors for powerful computer vision and speech applications. 4K resolution, advanced depth sensors, and a comprehensive collection of developer tools let us gain more real-world understanding of an environment than ever before, and work cooperatively with people wearing devices like the HoloLens 2. This adds further awareness and context to mixed reality applications. Even cooler, multiple cameras can be networked together into a larger sensor array that captures even more data to drive enterprise applications (see the sketch below).
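As a rough illustration of that multi-camera idea, the sketch below (plain Python with made-up numbers, not the Azure Kinect SDK) merges depth points from two cameras into one shared world frame, assuming each camera’s position and orientation have already been calibrated.

```python
# Illustrative sketch, not the Azure Kinect SDK: fuse point clouds from two depth
# cameras into one shared world frame, assuming each camera's pose (rotation R,
# translation t) has already been calibrated. All data here is made up.
import numpy as np

def to_world(points_cam, R, t):
    """Transform Nx3 points from a camera's frame into the world frame: p_w = R @ p_c + t."""
    return points_cam @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)

# Hypothetical extrinsics: camera A sits at the world origin; camera B is two meters
# away along the x-axis, rotated 180 degrees so both look at the same work area.
R_a, t_a = np.eye(3), np.zeros(3)
R_b = np.array([[-1.0, 0.0,  0.0],
                [ 0.0, 1.0,  0.0],
                [ 0.0, 0.0, -1.0]])
t_b = np.array([2.0, 0.0, 0.0])

# Fake per-camera point clouds (in each camera's own frame, meters).
cloud_a = np.array([[0.2, 0.0, 1.0], [0.3, 0.1, 1.1]])
cloud_b = np.array([[1.0, 0.0, 1.0], [0.9, 0.1, 0.9]])

# Merge both views into a single cloud for downstream scene understanding.
fused = np.vstack([to_world(cloud_a, R_a, t_a), to_world(cloud_b, R_b, t_b)])
print(fused)
```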

Getting started

As an early Microsoft Mixed Reality Agency Partner, U.Group has been at the forefront of exploring the potential for spatial computing with wearable devices, connecting people and data to transform how we work, play, and collaborate.

How can we bring the physical and digital worlds together for you? Contact U.Group to schedule a demo and explore what mixed reality can do for your team!
