Development notes

SVVR update + 360 panorama capture in VR: 360 video and the perils of Apply()

I attended SVVR 2015 this week, which was lots of fun! I saw panels and talks, visited the expo, had lunch with the indie devs, and got my first demo of the eye-tracking VR headset FOVE, which I subsequently backed in their Kickstarter. Just like last year I took lots of photos (Karl Krantz actually gave me a free press pass!) and I’ve started uploading them to a new eVRdayVR Flickr account. It’ll take a while to get through them, since processing photos is time-consuming as hell, but when I’m done I’ll post them to reddit.


Meanwhile, I got a VR demo going of my 360 panorama capture Unity plugin. You walk around Viking Village in DK2, and at any time can press P to capture a 360 panorama still. It fades to black, captures the screens, fades back in, and then continues processing on the CPU for about a minute (converting to equirectangular projection) before saving/uploading. It’s fully asynchronous, so you can play while it’s processing. You can try out the demo here.

I had an issue where, as soon as it finished the reprojection processing, it would stutter severely for a couple of seconds. Eventually I pinned this down to the Texture2D.Apply() call, which I was calling after executing all the SetPixel() calls on my final destination texture containing the reprojected image. On such a large texture (8192×4096 in this demo) this is a very expensive call, taking multiple frames at the very least. I circumvented the issue by simply writing my result image into a byte array instead of a Texture2D, then converting it directly into a C# Bitmap via Bitmap.LockBits() (I had to import the System.Drawing.dll assembly, see this post). I can then save out the result to an image file on another thread (EncodeToJPEG() only works on the main thread in Unity, but Bitmap.Save() can run wherever). This has made the experience a lot smoother, with no detectable lag during processing even in the Rift.
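The Unity-side details live in C#, but the underlying pattern is language-agnostic: hand the raw pixel bytes to a worker thread and do the expensive encode/save there, so the render thread never blocks. Here is a minimal sketch of that pattern in Python, with zlib.compress standing in for JPEG encoding (the function name save_panorama_async is my own, for illustration):

```python
import threading
import zlib

def save_panorama_async(pixels: bytes, path: str) -> threading.Thread:
    """Encode and write a panorama on a worker thread.

    The caller returns immediately, so the main (render) loop is never
    blocked by the expensive encode/write step. zlib.compress stands in
    for JPEG encoding here; in the Unity plugin the equivalent work is
    done off the main thread via System.Drawing's Bitmap.Save.
    """
    def worker() -> None:
        encoded = zlib.compress(pixels)  # the expensive part
        with open(path, "wb") as f:
            f.write(encoded)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Keeping a handle on the returned thread lets the game join it (or poll is_alive()) before shutdown, so a capture in flight is never lost.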

I also used my plugin along with some basic replay functionality to produce a short 360 video of some Unity content. I recorded the position of the player in each FixedUpdate() for replay, and used ParticleSystem.Simulate() to advance the torches and such one frame at a time. I used the GPU reprojection, which is more resource-intensive but about 30 times faster. Even with this it still took about an hour to complete processing for this simple 17-second video – partly because I was capturing everything in 8192×4096 and downscaling to 3840×1920, partly just because there’s a lot more room for optimization. But it’s still super cool to be able to share a complete play environment on YouTube, and to re-play the 360 video in VR in Virtual Desktop. I think this is the beginning of how Let’s Plays of actual full games will be done in VR.
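The replay side is essentially a list of positions captured at Unity's fixed timestep, with playback indexing into it by time. A hypothetical sketch of that record/sample logic (Python for illustration; positions as plain tuples, class name ReplayTrack my own):

```python
FIXED_DT = 0.02  # Unity's default FixedUpdate interval, in seconds

class ReplayTrack:
    """Records one position per fixed-timestep tick, then plays it back."""

    def __init__(self):
        self.frames = []

    def record(self, position):
        """Call once per FixedUpdate with the player's current position."""
        self.frames.append(position)

    def sample(self, t):
        """Return the recorded position nearest to time t (in seconds)."""
        if not self.frames:
            raise ValueError("nothing recorded")
        i = min(int(round(t / FIXED_DT)), len(self.frames) - 1)
        return self.frames[i]
```

During playback, the same fixed step drives everything: each captured output frame advances the particle systems by one tick (ParticleSystem.Simulate in Unity) so effects stay deterministic regardless of how slowly frames are captured.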

Development notes

Update on 360 capture for VRCHIVE

This week I’ve been working on a small contract for 360-degree panorama hosting website VRCHIVE to create a free Unity plugin that enables the player to capture and upload 360-degree panorama screenshots. You can check out a sample here:

In principle it’s pretty straightforward – I use RenderToCubemap to create a cubemap, then transform it into the equirectangular projection required by VRCHIVE, then upload it using the Chevereto API.

This approach works just fine for small panos, but there are a number of scalability issues as I move to larger panos: the time required for the equirectangular transform grows linearly with the number of pixels, and even an optimized CPU version can take on the order of minutes. I can make it run asynchronously with the game using coroutines, but this slows it down even further.
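The transform itself is pure per-pixel math: each output pixel maps to a latitude/longitude, that maps to a 3D view direction, and the direction maps to one cube face plus 2D coordinates on it. A sketch of the core mapping in Python – note the face indexing and axis conventions here are illustrative assumptions, not necessarily Unity's cubemap layout:

```python
import math

def direction_from_equirect(u, v, width, height):
    """Map an output pixel (u, v) to a 3D view direction.

    u runs left-to-right (longitude -pi..pi), v top-to-bottom
    (latitude +pi/2..-pi/2).
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

def face_and_uv(x, y, z):
    """Select which cube face the direction hits, plus (u, v) in [0, 1].

    Faces are indexed 0..5 as +X, -X, +Y, -Y, +Z, -Z; the in-face axis
    orientation is an assumption for illustration.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = 0 if x > 0 else 1
        u, v, m = (-z, y, ax) if x > 0 else (z, y, ax)
    elif ay >= ax and ay >= az:
        face = 2 if y > 0 else 3
        u, v, m = (x, -z, ay) if y > 0 else (x, z, ay)
    else:
        face = 4 if z > 0 else 5
        u, v, m = (x, y, az) if z > 0 else (-x, y, az)
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)
```

The CPU version runs this pair of functions once per output pixel (hence the linear growth in time with pixel count); the GPU version runs effectively the same math with one compute-shader thread per pixel.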

I created a version of the transform that runs on the GPU using compute shaders; it’s about 30x faster and can capture an 8192×4096 pano in about 2 seconds on my GTX 980. But compute shaders aren’t universally available, and I’m encountering some issues where the capture fails for people on other GPUs (weirdly, it works for me both on my fancy GTX 980 desktop PC and my 5-year-old laptop – I may have to try switching to integrated graphics for testing). I believe texture memory consumption may be at fault, as I can reproduce similar failures when Unity’s RAM usage is very high – so I’m investigating various ways to capture the pano using less storage. Unfortunately, these too result in processing slowdown.

Transforming the panos to equirectangular server-side would be more reliable and faster for the player, but it is also relatively expensive for a small startup site like VRCHIVE. Another plausible alternative is direct support for cubemap panos in the VRCHIVE viewer.

I haven’t yet addressed the complications raised by e.g. the use of multiple camera rigs with different layer settings, which are prevalent in several VR titles that I know of. I can’t directly call RenderToCubemap on those cameras because they have the wrong settings for it – so I may have to create a separate cubemap camera and clone their layer settings.

My current focus is fixing the failures on other GPUs – luckily, my eVRydayVR fan chat has lots of great testers, so I think I can find ways to fix this!

Meanwhile, SVVR is starting tomorrow and I have to prepare for that too. And I just got back this week from the Vive Jam where I got to help with Unity scripting on Electro Smash, a collaboration with several others there (mostly from Virtuix). So many exciting things going on!

Development notes

VR Speed Reader Milestone 6

This is a big update – I’ve got the Jam entry into a submittable state, collected a little alpha testing feedback, and acted upon it. Main changes:

  • Added a menu at the beginning: four walls around you, three of which show sample texts, while the fourth shows the splash screen.
  • Added animations which smoothly transition from selecting a sample text to reading that sample text. I also show the page dividing into the 3 copies, to help emphasize what is going on (although it may still not be entirely clear).
  • Added a pause menu which allows you to view your WPM, go back in the text 1 or 5 lines (as suggested by Metatron testers), or exit back to the menu (which can also be done with the back button). Rather than holding the touchpad, you now just tap it to enter the pause menu and tap on the Continue button to continue reading.
  • Mitchell from Metatron made me a sweet app icon.
  • Renamed the project from VirtuaReader to VR Speed Reader, to better differentiate it from other reading related entries and make its purpose clear.

Main things I learned from this:

  • Learned all about how to do simple animations with Mecanim, and sequenced them using state machines. This turned out to be surprisingly complex – I had to set up the animations and animation controllers, set up an idle state and a trigger to move to the animation state, and set up my script to invoke the trigger. In some ways just scripting animations by hand is easier. But it’s a very flexible system (e.g. does blending, manages state machine visually) and worth keeping in mind for the future.
  • Raycasts and colliders are super handy for doing eye gaze selection and menus, but to make them work with Unity world-space UI elements, I need to first add a BoxCollider to them and size it manually.
  • I was having trouble with stuttering while loading up the texture resources for the selected sample text. Blair of Technolust fame gave me some helpful advice which works well on Gear VR: fade most of the scene to black except for a small central element, and then load. The time warp artifacts (black at the edges) become invisible when this is done.
    To accomplish this, I smoothly translated the selected menu item way back into the distance, to decrease its angular size, while also fading its background color. Then I loaded. Then I restored the default background color.
  • I use a trick where I (invisibly) rotate the menu and player together so that the selected item is always in the same location before proceeding with the animation and reading. This simplifies the rest of the task.
    However, I got confused because Unity told me not to use RotateAround(), saying it was deprecated. Turns out RotateAround() was totally the right thing to use – what was actually deprecated was the overload that takes only an axis and a rotation amount. You need to specify a point as well.
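For the record, rotating a point about an axis through a pivot is just translate, rotate, translate back. Here is a sketch of what RotateAround(point, axis, angle) computes for axis = Vector3.up, in Python for illustration (the sign convention is assumed to match Unity's left-handed coordinates, where +90° takes forward, +Z, to right, +X):

```python
import math

def rotate_around_y(position, pivot, angle_deg):
    """Rotate `position` by angle_deg about a vertical axis through `pivot`.

    Mirrors Transform.RotateAround(point, axis, angle) for axis = up:
    translate so the pivot is the origin, rotate in the XZ plane,
    then translate back. Y is unchanged for a vertical axis.
    """
    a = math.radians(angle_deg)
    x = position[0] - pivot[0]
    z = position[2] - pivot[2]
    rx = x * math.cos(a) + z * math.sin(a)
    rz = -x * math.sin(a) + z * math.cos(a)
    return (rx + pivot[0], position[1], rz + pivot[2])
```

Applying the same rotation to both the menu and the player preserves their relative layout exactly, which is why rotating them together is imperceptible.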

This is mostly done and may be my final Jam submission! I haven’t implemented holding back to go to the universal menu but may not bother with this right now. I still need to do a narrated video and a reddit post and dev forum post about it tomorrow. Let me know any thoughts you have!

New opening menu: 4 short walls around you present three sample texts and a splash screen.
New pause menu. Allows you to go back in the text and return to the main menu.


Try it yourself! Download apk


Producing high-quality 1080p60 video of Oculus Rift DK2 gameplay

Viewers and producers on YouTube have a variety of preferences for how to structure Oculus Rift DK2 virtual reality gameplay content. This post summarizes some of the most popular methods and best practices for how to produce 1080p60 videos of each of them with high image quality. 1080p60 is 1920×1080 at 60 FPS, which is currently the highest-resolution 60 FPS format supported by YouTube (I highly recommend uploading all Rift videos in 60 FPS because they generally involve frequent, fast rotations of the camera).

Note: this guide is focused on video publishing rather than streaming, which requires different tools. Since this guide went up, the unwarpvr tool has gained support for Gear VR; see the release page.


  • For warped stereoscopic video intended to be viewed in the Rift DK2, directly publish your original 1080p60 recording.
  • For unwarped stereoscopic video, which compromises between viewing in the Rift and monitor viewing, record at 1080p60 or higher and then use ffmpeg-unwarpvr to unwarp.
  • For unwarped monoscopic video, which provides a view similar to monitor-based titles, play and record the game at 1440p60 (2560×1440 at 60 FPS), then use ffmpeg-unwarpvr to generate the 1080p60 monoscopic view.

Continue reading

Development notes

VR Speed Reader Milestone 5

In this milestone I doubled the distance to the page and simultaneously cranked up the Native Texture Scale (which scales the render target) from 1 to 1.5, allowing the text to remain clear and readable without hurting the frame rate (at scale = 2 the frame rate dropped substantially).

Cropped screenshot at 100% scale. Increasing render target size improved quality markedly, while increased distance increased the amount of text on screen and decreased vection.

I also changed the background color to match the page, avoiding distracting page edges near the beginning/end of the text: the text now has neither pages nor lines. I modified the time per word so that it is proportional to the width of the word. This results in a more consistent speed of motion at all times and seems to let me read at higher speeds, but it is a bit odd at first because it seems to “freeze” momentarily on long words. There might be some better way to do this. I also added some instructions at the beginning of the task, and made some small fixes (the touchpad no longer rotates the view, and the red/green glow follows the user’s head pitch/roll correctly).
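The width-proportional timing boils down to: pick a per-word weight, then scale the weights so the total still matches the requested words-per-minute rate. A hypothetical Python sketch (the app uses rendered pixel width as the weight; character count stands in here, and the function name word_durations is my own):

```python
def word_durations(words, wpm=400.0):
    """Per-word display times proportional to word length.

    Character count stands in for rendered pixel width. The total
    reading time still matches the requested words-per-minute rate,
    so long words linger while short words flash by.
    """
    total_seconds = 60.0 * len(words) / wpm
    weights = [max(len(w), 1) for w in words]
    scale = total_seconds / sum(weights)
    return [w * scale for w in weights]
```

One possible fix for the momentary "freeze" on long words would be clamping each weight to some maximum, at the cost of a slightly less uniform scroll speed.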

Because there are a few other reader entries in the Jam I’m considering a name change. Here are a few random ones I was thinking about:

  • Wall of Text: Since you’re literally looking at a giant wall covered in text – also a pun on the slang term – but the name is already taken by a piece of web software
  • TubeReader: Since the text is conceptually wrapped around the inside of a cylinder or tube. Might be confused with “tube” as in “television” though.
  • Readscape or Textscape: Since it’s like a textual landscape. Except vertically oriented. These names appear to be in use; “ReadScape” is a company and “TextScape” is in use by a small App Store app.
  • Let me know your ideas! Maybe something related to speed reading?