HoloLens – Spectator view – allowing others to see what you are seeing

Microsoft just announced an update around the HoloLens that allows you to share what you are seeing (from a first-person perspective) with others, making it more interactive. This combines MRC (Mixed Reality Capture), which already exists, with some new updates that address some of the shortcomings of MRC – especially when working with an audience.

The main use case of spectator view – as the name suggests – is to allow those in the room not wearing a device to see not only the holograms but also the interactions that the folks wearing a HoloLens have with their mixed reality experience.

You can use this to capture a mixed-reality scene, live stream the content (say in a meeting / conference), or shoot/record video. This is essentially the 'cheap' version of the special camera rig that Microsoft uses for keynote presentations.

It is not as straightforward as you might imagine; but at the same time, if you do this 'properly', it isn't all that complex either. You need some special equipment, need to change some configuration, and need to add details to your apps to account for this.


You do need a special DSLR camera (with HDMI output) and some other hardware – details can be found here. You can also 3D print the mount (the STP file can be found here).

And in addition, there are a bunch of other steps you need to do – from calibrating (to get the offset between the camera and the HoloLens) to the Compositor (a Unity extension that allows you to record the video and change hologram opacity, spatial mapping data details, etc.).
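To give a sense of what the calibration output is for: it is essentially a rigid offset (rotation + translation) between the DSLR lens and the HoloLens it is mounted on, which the Compositor applies to the in-app camera every frame. Below is a minimal, illustrative sketch of composing such an offset in pure Python – the actual toolchain computes these values for you, and the numbers here are made up.

```python
# Illustrative only: applying a camera-to-HoloLens calibration offset.
# The real calibration tool produces these values; the numbers below are made up.

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Pose of the HoloLens in world space (identity here, for simplicity)...
hololens_world = translation(0, 0, 0)

# ...and the calibration offset: say the DSLR sits 5 cm below and 2 cm behind
# the HoloLens cameras (hypothetical values, in meters).
camera_offset = translation(0, -0.05, -0.02)

# World pose of the DSLR = HoloLens pose composed with the offset.
camera_world = mat_mul(hololens_world, camera_offset)
print([row[3] for row in camera_world[:3]])  # camera position: [0, -0.05, -0.02]
```

In the real setup the offset also includes a rotation, and the calibration app derives it from checkerboard images seen by both the DSLR and the HoloLens cameras.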

All the detailed steps can be found here. And if this is all new, then I highly recommend checking out the Holograms 240 course. Below is an example of what this can all look like.

Real-time performance capture – HoloPortation?

Some of the folks from the PPI and HoloPortation teams at MSR left to set up a new company called PerceptiveIO.

They have recently published a paper called Fusion4D: Real-time performance capture of challenging scenes. In it, they cover some of the work around multi-view performance capture, and the raw depth acquisition and preprocessing that needs to be done around that. Interestingly, this also handles deformation changes (e.g. taking off a jacket or a scarf); these are non-rigid and much more difficult to handle, but they are done beautifully.


Combining this with the likes of HoloLens would make it quite interesting. If you want to see more, check out the video below showing the examples and transitions. Perhaps one day it will allow us to see and experience events from afar. 🙂

HoloLens–Device Portal (Part 2)

In addition to the HoloLens Device Portal (see Part 1), another option is the UAP HoloLens companion app, which you can install from the Store. I think this is a little more end-user friendly, and perhaps a little less developer focused. It exposes a subset of the same functionality.

Once you install it, you connect more or less in the same manner; I think most people will like the live streaming option. There is a bit of latency between the device and what is shown, but that could be partly because of our (possibly crappy) wireless, which was overloaded with many folks at work.

Store option

Once you connect and set things up, you see the above screen. Of course, you can manage multiple devices from here.

Once you log in, you see a lot of the same information as you saw in Part 1.

You can see the live stream as shown here; what might not be obvious is that both sound and video are streamed. In this screenshot you can see my (work) login screen, with the password login being a hologram. Here it is 'floating' over the window, and you can see a flavor of the mixed reality.

As you would expect, you can capture either a photo or a video of what is being seen via the device.

The photos or videos that you take show up here. I suppose they are saved on the device, and you would want to take them off there.

The virtual keyboard, again, I think is one of the best features – saving so much time (and your arms) from air-tapping. 🙂

App Manager can do some elements of management, but not as much as the web version.

And finally, you can see some details on the device. I think the Shutdown and Reboot options are probably the most useful.

All in all, this is a little more polished and end-user friendly – useful when demoing the mixed reality solutions you are building.

HoloLens–Device Portal (Part 1)

One of the advantages of running Windows 10 on the HoloLens is that it has all the regular features that you would expect. From a developer's perspective, one of those is the Device Portal, which is awesome. It is essentially a web server hosted on the device that allows you to manage it over WiFi and USB.

It is a must-have if you want to stream your apps (including holograms) so that others can see them, or alternatively record and then share. And of course there are details for various debug situations, and the virtual input saves your fingers from getting tired! 🙂 You also use this to side-load the apps you build. There are REST APIs you can use if you want to program against it, and there is also a UAP app in the Store (more on that in Part 2).
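Since everything the portal's web UI does goes through those REST APIs, you can script against them too. Here is a small sketch using only Python's standard library; note the endpoint path (`/api/os/info`) and the basic-auth scheme are assumptions on my part – check the Device Portal API reference for your build before relying on them.

```python
# A sketch of calling the Device Portal REST API from Python.
# The endpoint path and auth scheme are assumptions - verify against the
# Device Portal API documentation for your device's OS build.
import base64
import urllib.request

def portal_request(host, path, user, password):
    """Build an authenticated request for the Device Portal at `host`."""
    url = "https://{}{}".format(host, path)
    req = urllib.request.Request(url)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# Example against a device on your network (IP and credentials are placeholders):
#   req = portal_request("192.168.1.42", "/api/os/info", "admin", "p@ssw0rd")
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())  # JSON with OS/device details
```

The same pattern should work for any of the portal's endpoints, since they all sit behind the same authenticated web server.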

To get to this, you browse to the device's IP address. Below are a few screenshots from my playing around, which show you the various aspects of the portal and what you can do. And the beauty of this is that, as a Windows developer, this should all be very familiar and nothing new. 🙂

Home screen – once you log in

3D View settings

Mixed reality capture – one of the key elements that lets you share the magic with others

Perf tracing and the various levels you can set as part of the Windows Performance Toolkit. This is WPR/WPA support in System.Diagnostics.Tracing – see this post for more details.

Process details – you can sort by the relevant column.

Process details #1 – showing various details from power to framerate to I/O, memory, etc.

Process details #2

Process details #3

App Manager, which is where you side-load apps and manage them

Crash data – the name says it all

Kiosk mode – this is really interesting; you can 'lock' into one app and use just that. I wonder how one breaks out of it when done with this mode and wanting to get back to 'regular' use.

All the ETW (Event Tracing for Windows) details and the providers you want. Again, pretty standard stuff.

Simulation – not sure if this is used for regression or playback in another setting, where the room capture would help. It does open up interesting possibilities. I think it might allow one to capture the spatial mapping of a room, which you might then be able to use in the emulator (as someone has done here).

Networking configuration, where you go to manage this.

Virtual input – a great time saver.

And finally, some of the security settings – to ensure no one on the same subnet is mucking with you, or, when there is more than one device, that you are talking to the right one.

Creative Coding

As we start to play and explore with new AR/VR mediums like Oculus and HoloLens, there is a stronger shift away from the traditional model of working – transactional, with a known outcome – to a more expressive and exploratory one. In the context of many enterprises this is a bigger shift – albeit one some of them have started seeing with mobility, though it's still not the same.

I really like how Rick explains and expresses this, both in terms of definition and thinking. The clay analogy, I think, really helps.