Development notes

Update on 360 capture for VRCHIVE

This week I’ve been working on a small contract for VRCHIVE, a 360-degree panorama hosting website, to create a free Unity plugin that lets players capture and upload 360-degree panorama screenshots. You can check out a sample here:

http://alpha.vrchive.com/image/rC

In principle it’s pretty straightforward – I use RenderToCubemap to create a cubemap, convert it to the equirectangular projection required by VRCHIVE, then upload it using the Chevereto API.
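
The capture side is only a few lines; the resampling is where all the work happens. Here’s a minimal sketch of the idea (not the plugin’s actual code – the face-selection signs in SampleCubemap follow the usual cubemap convention and may need flipping for Unity’s layout):

```csharp
using UnityEngine;

public class PanoCapture : MonoBehaviour
{
    public Camera captureCamera;   // camera used for the cubemap render
    public int cubemapSize = 2048; // pixels per cubemap face

    public Texture2D CaptureEquirect(int outWidth, int outHeight)
    {
        // Render all six faces of the scene into a cubemap.
        var cubemap = new Cubemap(cubemapSize, TextureFormat.RGB24, false);
        captureCamera.RenderToCubemap(cubemap);

        // Resample into an equirectangular image: each output pixel
        // maps to a (longitude, latitude) pair on the sphere.
        var output = new Texture2D(outWidth, outHeight, TextureFormat.RGB24, false);
        for (int y = 0; y < outHeight; y++)
        {
            float lat = Mathf.PI * ((y + 0.5f) / outHeight - 0.5f); // -pi/2..pi/2
            for (int x = 0; x < outWidth; x++)
            {
                float lon = 2f * Mathf.PI * ((x + 0.5f) / outWidth - 0.5f); // -pi..pi
                var dir = new Vector3(
                    Mathf.Cos(lat) * Mathf.Sin(lon),
                    Mathf.Sin(lat),
                    Mathf.Cos(lat) * Mathf.Cos(lon));
                output.SetPixel(x, y, SampleCubemap(cubemap, dir));
            }
        }
        output.Apply();
        return output;
    }

    // Pick the face from the dominant axis of the direction, project onto
    // that face, and read the texel.
    static Color SampleCubemap(Cubemap cube, Vector3 d)
    {
        float ax = Mathf.Abs(d.x), ay = Mathf.Abs(d.y), az = Mathf.Abs(d.z);
        CubemapFace face;
        float u, v, ma;
        if (ax >= ay && ax >= az)
        {
            ma = ax;
            face = d.x > 0 ? CubemapFace.PositiveX : CubemapFace.NegativeX;
            u = d.x > 0 ? -d.z : d.z;
            v = -d.y;
        }
        else if (ay >= az)
        {
            ma = ay;
            face = d.y > 0 ? CubemapFace.PositiveY : CubemapFace.NegativeY;
            u = d.x;
            v = d.y > 0 ? d.z : -d.z;
        }
        else
        {
            ma = az;
            face = d.z > 0 ? CubemapFace.PositiveZ : CubemapFace.NegativeZ;
            u = d.z > 0 ? d.x : -d.x;
            v = -d.y;
        }
        int size = cube.width;
        int px = Mathf.Clamp((int)((u / ma * 0.5f + 0.5f) * size), 0, size - 1);
        int py = Mathf.Clamp((int)((v / ma * 0.5f + 0.5f) * size), 0, size - 1);
        return cube.GetPixel(face, px, py);
    }
}
```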

This approach works just fine for small panos, but it scales poorly: the time required for the equirectangular transform grows linearly with the number of pixels, and even an optimized CPU version can take on the order of minutes for large captures. I can make it run asynchronously alongside the game using coroutines, but that slows it down even further.
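
The coroutine version just slices the pixel loop across frames; a sketch, where FillRow is a hypothetical helper standing in for one row of the inner loop above:

```csharp
using System.Collections;
using UnityEngine;

public class AsyncTransform : MonoBehaviour
{
    // Spread the resampling across frames: process a slice of rows, then
    // yield so the game can render. Smaller rowsPerFrame stutters less
    // but takes longer overall to finish.
    public IEnumerator TransformAsync(Cubemap cubemap, Texture2D output,
                                      int rowsPerFrame)
    {
        for (int y = 0; y < output.height; y += rowsPerFrame)
        {
            int yEnd = Mathf.Min(y + rowsPerFrame, output.height);
            for (int row = y; row < yEnd; row++)
                FillRow(cubemap, output, row); // hypothetical: one row of the loop
            yield return null; // hand control back to the game for a frame
        }
        output.Apply();
    }
}
```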

I created a version of the transform that runs on the GPU using compute shaders. It’s about 30x faster and can capture an 8192×4096 pano in about 2 seconds on my GTX 980, but compute shaders aren’t universally available, and I’m seeing failures on other people’s GPUs (weirdly, it works both on my fancy GTX 980 desktop PC and my 5-year-old laptop – I may have to try switching to integrated graphics for testing). I believe texture memory consumption may be at fault, since I can reproduce similar failures when Unity’s RAM usage is very high, so I’m investigating ways to capture the pano using less storage. Unfortunately these approaches also slow down processing.
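
The C# side of the compute path is thin, since the per-pixel math moves into the kernel. A sketch – the kernel name and thread-group size here are placeholders, not the plugin’s actual shader:

```csharp
using UnityEngine;

public static class GpuPanoTransform
{
    // Let a compute kernel write the equirectangular image straight into a
    // RenderTexture, skipping the CPU pixel loop (and most of the CPU time).
    public static RenderTexture Run(ComputeShader shader, Cubemap cubemap,
                                    int width, int height)
    {
        var result = new RenderTexture(width, height, 0);
        result.enableRandomWrite = true; // required for RWTexture2D output
        result.Create();

        int kernel = shader.FindKernel("CubemapToEquirect"); // placeholder name
        shader.SetTexture(kernel, "_Cubemap", cubemap);
        shader.SetTexture(kernel, "_Result", result);
        // Group counts must match [numthreads(8,8,1)] declared in the kernel.
        shader.Dispatch(kernel, width / 8, height / 8, 1);
        return result;
    }
}
```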

Transforming the panos to equirectangular server-side would be more reliable and faster for the player, but it’s also relatively expensive for a small startup site like VRCHIVE. Another plausible alternative is direct support for cubemap panos in the VRCHIVE viewer.

I haven’t yet addressed the complications raised by, for example, multiple camera rigs with different layer settings, which several VR titles I know of use. I can’t call RenderToCubemap directly on those cameras because their settings are wrong for it, so I may have to create a separate cubemap camera and clone their layer settings.
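
A sketch of that workaround, using Camera.CopyFrom to clone the rig camera’s settings (including its culling mask) onto a throwaway capture camera – untested against the multi-rig titles in question:

```csharp
using UnityEngine;

public static class RigCapture
{
    // Capture from a temporary camera that mimics the game camera's
    // settings, so RenderToCubemap sees the same layers without
    // disturbing the rig itself.
    public static bool CaptureFromRig(Camera gameCamera, Cubemap cubemap)
    {
        var go = new GameObject("CubemapCaptureCamera");
        var cam = go.AddComponent<Camera>();
        cam.CopyFrom(gameCamera);  // clones culling mask, clear flags, FOV, etc.
        cam.transform.position = gameCamera.transform.position;
        cam.enabled = false;       // render only on demand, never to screen

        bool ok = cam.RenderToCubemap(cubemap);
        Object.Destroy(go);
        return ok;
    }
}
```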

My current focus is fixing the failures on other GPUs – luckily my eVRydayVR fan chat has lots of great testers, so I think I can track this down!

Meanwhile, SVVR starts tomorrow and I have to prepare for that too. And I just got back this week from the Vive Jam, where I helped with Unity scripting on Electro Smash, a collaboration with several others there (mostly from Virtuix). So many exciting things going on!

Development notes

VR Speed Reader Milestone 6

This is a big update – I’ve got the Jam entry into a submittable state, collected a little alpha testing feedback, and acted upon it. Main changes:

  • Added an opening menu: four walls around you, three of which present sample texts while the fourth shows the splash screen.
  • Added animations that smoothly transition from selecting a sample text to reading it. I also show the page dividing into the three copies, to help emphasize what is going on (although it may still not be entirely clear).
  • Added a pause menu that lets you view your WPM, go back in the text 1 or 5 lines (as suggested by Metatron testers), or exit to the menu (which can also be done with the back button). Rather than holding the touchpad, you now just tap it to enter the pause menu and tap the Continue button to resume reading.
  • Mitchell from Metatron made me a sweet app icon.
  • Renamed the project from VirtuaReader to VR Speed Reader, to better differentiate it from other reading-related entries and make its purpose clear.

Main things I learned from this:

  • Learned all about how to do simple animations with Mecanim and how to sequence them using state machines. This turned out to be surprisingly complex – I had to set up the animations and animation controllers, set up an idle state and a trigger to move to the animation state, and set up my script to invoke the trigger (a minimal version follows this list). In some ways just scripting animations by hand is easier, but it’s a very flexible system (e.g. it does blending and manages the state machine visually) and worth keeping in mind for the future.
  • Raycasts and colliders are super handy for eye-gaze selection and menus, but to make them work with Unity world-space UI elements, I first need to add a BoxCollider to each element and size it manually (see the gaze-selection sketch after this list).
  • I was having trouble with stuttering while loading the texture resources for the selected sample text. Blair of Technolust fame gave me some helpful advice that works well on Gear VR: fade most of the scene to black except for a small central element, and then load. The time warp artifacts (black at the edges) become invisible when this is done.
    To accomplish this, I smoothly translated the selected menu item far back into the distance, to decrease its angular size, while also fading its background color. Then I loaded. Then I restored the default background color.
  • I use a trick where I (invisibly) rotate the menu and player together so that the selected item always ends up in the same location before proceeding with the animation and reading. This simplifies the rest of the task.
    However, I got confused because Unity told me not to use RotateAround() since it was deprecated. It turns out RotateAround() was totally the right thing to use – what was actually deprecated was the overload that takes only an axis and a rotation amount; you need to specify a point as well (see the snippet below).
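
For reference, the scripting side of the Mecanim setup is tiny once the controller exists; a sketch, where the trigger name is whatever you defined in the Animator window:

```csharp
using UnityEngine;

public class SampleTextWall : MonoBehaviour
{
    Animator animator;

    void Awake() { animator = GetComponent<Animator>(); }

    // The Animator controller idles until this trigger fires, then
    // transitions into the page-split/zoom animation state.
    public void BeginReading() { animator.SetTrigger("StartReading"); }
}
```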
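
And the gaze-selection pattern mentioned above, in minimal form – a ray from the center of view against the manually sized BoxColliders:

```csharp
using UnityEngine;

public class GazeSelector : MonoBehaviour
{
    void Update()
    {
        // Cast a ray straight out from the user's view; the first collider
        // hit is the gazed-at menu item. Each world-space UI element needs
        // its own manually sized BoxCollider for this to work.
        var head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, 100f))
        {
            // OnGazeSelect is a hypothetical handler on the menu item.
            hit.collider.SendMessage("OnGazeSelect",
                                     SendMessageOptions.DontRequireReceiver);
        }
    }
}
```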
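
And the RotateAround() fix, spelled out – the current overload takes the pivot point explicitly:

```csharp
using UnityEngine;

public class MenuAligner : MonoBehaviour
{
    public Transform menuRoot;  // parent of the menu walls
    public Transform playerRig; // the camera rig

    // Rotate menu and player together about the menu's center so the
    // selected wall lands at a canonical heading; since both rotate by
    // the same angle around the same point, the user perceives no motion.
    public void Align(float angleDegrees)
    {
        Vector3 pivot = menuRoot.position;
        // Deprecated: RotateAround(axis, angle).
        // Current overload: RotateAround(point, axis, angle).
        menuRoot.RotateAround(pivot, Vector3.up, angleDegrees);
        playerRig.RotateAround(pivot, Vector3.up, angleDegrees);
    }
}
```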

This is mostly done and may be my final Jam submission! I haven’t implemented holding back to go to the universal menu, but I may not bother with that right now. I still need to do a narrated video, a reddit post, and a dev forum post about it tomorrow. Let me know any thoughts you have!

New opening menu: 4 short walls around you present three sample texts and a splash screen.
New pause menu. Allows you to go back in the text and return to the main menu.

Video:

Try it yourself! Download apk

Guides

Producing high-quality 1080p60 video of Oculus Rift DK2 gameplay

Viewers and producers on YouTube have a variety of preferences for how to structure Oculus Rift DK2 virtual reality gameplay content. This post summarizes some of the most popular methods and best practices for how to produce 1080p60 videos of each of them with high image quality. 1080p60 is 1920×1080 at 60 FPS, which is currently the highest-resolution 60 FPS format supported by YouTube (I highly recommend uploading all Rift videos in 60 FPS because they generally involve frequent, fast rotations of the camera).

Note: this guide is focused on video publishing rather than streaming, which requires different tools. Since this guide went up, the unwarpvr tool has gained support for Gear VR; see the release page.

Summary

  • For warped stereoscopic video intended to be viewed in the Rift DK2, directly publish your original 1080p60 recording.
  • For unwarped stereoscopic video, which compromises between viewing in the Rift and monitor viewing, record at 1080p60 or higher and then use ffmpeg-unwarpvr to unwarp.
  • For unwarped monoscopic video, which provides a view similar to monitor-based titles, play and record the game at 1440p60 (2560×1440 at 60 FPS), then use ffmpeg-unwarpvr to generate the 1080p60 monoscopic view.


Development notes

VR Speed Reader Milestone 5

In this milestone I doubled the distance to the page and simultaneously cranked up the Native Texture Scale (which scales the render target) from 1 to 1.5, allowing the text to remain clear and readable without hurting the frame rate (at scale 2 the frame rate dropped substantially).

Cropped screenshot at 100% scale. Increasing render target size improved quality markedly, while increased distance increased the amount of text on screen and decreased vection.

I also changed the background color to match the page, avoiding distracting page edges near the beginning and end of the text: the text now has neither pages nor lines. I modified the time per word so that it is proportional to the width of the word. This gives a more consistent speed of motion at all times and seems to let me read at higher speeds, but it feels a bit odd at first, as the text seems to “freeze” momentarily on long words. There might be some better way to do this. I also added some instructions at the beginning of the task, and made some small fixes (the touchpad no longer rotates the view, and the red/green glow now follows the user’s head pitch/roll correctly).
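
A sketch of the width-proportional timing – the glyph-advance lookup and the tuning constant are illustrative, not the exact code in the app:

```csharp
using UnityEngine;

public static class WordTiming
{
    // Give each word display time proportional to its rendered width, so
    // the text appears to move at a constant speed regardless of word length.
    public static float SecondsForWord(string word, Font font, int fontSize,
                                       float secondsPerPixel)
    {
        font.RequestCharactersInTexture(word, fontSize);
        float width = 0f;
        foreach (char c in word)
        {
            CharacterInfo info;
            if (font.GetCharacterInfo(c, out info, fontSize))
                width += info.advance; // horizontal advance of this glyph
        }
        return width * secondsPerPixel;
    }
}
```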

Because there are a few other reader entries in the Jam I’m considering a name change. Here are a few random ones I was thinking about:

  • Wall of Text: Since you’re literally looking at a giant wall covered in text – also a pun on the slang term, but the name is already taken by a piece of web software.
  • TubeReader: Since the text is conceptually wrapped around the inside of a cylinder or tube. Might be confused with “tube” as in “television” though.
  • Readscape or Textscape: Since it’s like a textual landscape. Except vertically oriented. These names appear to be in use; “ReadScape” is a company and “TextScape” is in use by a small App Store app.
  • Let me know your ideas! Maybe something related to speed reading?

Guides

Disabling the Health & Safety Warning on the Oculus Rift DK2

About the Health & Safety Warning

The Health & Safety Warning appears in every Oculus Rift application, every time you start it, and has done so since SDK version 0.4.0, the first version with full DK2 support. As of SDK version 0.4.3, it reads:

HEALTH & SAFETY WARNING

Read and follow all warnings and instructions included with the Headset before use. Headset should be calibrated for each user. Not for use by children under 13. Stop use if you experience any discomfort or health reactions.

More: www.oculus.com/warnings

Press any key to acknowledge

This warning helps stop users from doing dumb things and shields Oculus from legal liability. It is a good idea for anyone using your headset to read and understand it. Once. You really don’t need to keep seeing it every time you launch an app, especially if you’re a developer who is launching tests on a frequent basis.

Development notes

VR Speed Reader Milestone 4: color cues for acceleration + WPM display

The most noticeable change in this new version is that when you accelerate your WPM by looking right, a green halo appears around your vision, and when you decelerate by looking left, a red halo appears. When no halo is visible, your WPM is not changing. The halo (and the rate of acceleration/deceleration) becomes more pronounced the further you look to the left or right. It was hard to visually estimate acceleration before, and this should help.

Because I needed to fade the color halo in and out, implementing this required a (relatively simple) custom shader that handles both the texture’s transparency and an alpha multiply. I call it “Unlit/Transparent Alpha” and you can grab it at pastebin.
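
The halo quad’s alpha (and the WPM change rate) are both driven by head yaw; a sketch, assuming the script sits on the camera and with the saturation angle as a made-up constant:

```csharp
using UnityEngine;

public class HaloController : MonoBehaviour
{
    public Renderer haloRenderer; // quad using the Unlit/Transparent Alpha shader
    public Color accelerateColor = Color.green;
    public Color decelerateColor = Color.red;
    public float maxYawDegrees = 30f; // yaw at which the effect saturates

    void Update()
    {
        // Signed yaw of the head relative to the text, wrapped to -180..180.
        float yaw = Mathf.DeltaAngle(0f, transform.localEulerAngles.y);
        float t = Mathf.Clamp(yaw / maxYawDegrees, -1f, 1f);

        // Look right: green halo, WPM rises; look left: red halo, WPM falls.
        Color c = t >= 0f ? accelerateColor : decelerateColor;
        c.a = Mathf.Abs(t); // fades in as the user looks further aside
        haloRenderer.material.color = c;
    }
}
```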

I replaced the blur for now with a simpler rectangular highlight around the current word. This encourages the user to focus on a particular point while still letting them use their peripheral vision to its full potential. I’m not sure yet, though, whether it might lead their attention to wander away from the target word.

I added a delay to the first word at the beginning of each paragraph. This helps deal with the momentary disorientation and the (illusory) perceived speedup that occurs when jumping from one paragraph to the next. The associated code refactoring also allows me to arbitrarily customize the amount of time spent on each word.

I added a WPM informational pop-up to the pause screen that appears when touching the touchpad, using Unity 4.6’s world-space UI.

Here is the current prototype:

I experimented a bit with combining short, common words into a single unit, like “in the”, so they could be read together. Results were mixed: sometimes it seemed to help and sometimes it was confusing. After further reflection, I think the right thing to do here is to group together 2-grams that occur frequently enough in written text. This raises the interesting idea of having the time per word depend on how “expected” the word is (e.g. its likelihood in context under a simple n-gram model), rather than being constant. More time would be spent on unfamiliar words and less on familiar ones, and words that are frequent in the text, like “Alice”, would get less and less time as the text went on. This might be something to revisit at a later time.
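
A sketch of that idea – the bigram table and the scaling knob are placeholders, assuming counts built from some corpus (or from the text itself as it is read):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class ContextTiming
{
    // Scale time per word by how expected it is in context: frequent
    // bigrams get less time, surprising words get more.
    public static float SecondsForWordInContext(string previous, string word,
        Dictionary<string, int> bigramCounts, float baseSeconds)
    {
        int count;
        bigramCounts.TryGetValue(previous + " " + word, out count);
        // Log scale, so very common pairs don't collapse to zero time.
        float familiarity = Mathf.Log(1f + count);
        return baseSeconds / (1f + 0.3f * familiarity); // 0.3 is a made-up knob
    }
}
```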

Development notes

VR Speed Reader Milestone 3: Continuous paragraph reading + controls

New VR Speed Reader milestone! Nate suggested it was distracting when the view jumps from the end of a line to the beginning of the next while reading a paragraph. To avoid this, I placed two copies of the text on either side: the one on the left shifted down one line, and the one on the right shifted up one line. I then justified the text (so it’s straight and has the same margin on both the left and right sides). The effect is that when you jump to the next line, it feels as though you’re simply continuing to read horizontally, while retaining the context of the surrounding lines. It’s as if the text were wrapped around a cylinder.
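
The layout itself is just three copies of the same justified text, offset by one line; a rough sketch of the positioning, with textWidth and lineHeight standing in for the real measurements:

```csharp
using UnityEngine;

public static class TextLayout
{
    // Place two extra copies of the text beside the main one, each offset
    // by one line, so reading off the right edge continues seamlessly onto
    // the next line (and off the left edge back onto the previous one).
    public static void LayOutCopies(Transform center, Transform left,
        Transform right, float textWidth, float lineHeight)
    {
        left.position = center.position
                      + Vector3.left * textWidth    // one page-width left
                      + Vector3.down * lineHeight;  // shifted down one line
        right.position = center.position
                       + Vector3.right * textWidth
                       + Vector3.up * lineHeight;   // shifted up one line
    }
}
```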

Continuous paragraph reading in action: when you jump to the next line, the app appears to seamlessly continue moving horizontally.

I also implemented basic controls: you can now look right or left to speed up or slow down the reading, and your eyes continue tracking the text while you do so. You can press the touchpad to reset orientation, and hold the touchpad to pause reading and look around. The app starts at a standstill and must be started by looking right; this avoids disorientation and helps develop familiarity with the controls. See the video below for a demonstration.

Small fixes:

  • I modified the text to avoid dashes/hyphens which were resulting in some long words. Later on I’ll break these up programmatically.
  • Reduced the blur of surrounding text to something more reasonable. I’m still considering other ways of highlighting the active word, such as rendering it in another color or dimming the surrounding text (black on grey).

Next steps:

  • Right now the words-per-minute (WPM) value is hidden. It should be shown while paused.
  • Need to implement Gear VR requirements like holding the back button for the universal menu. This should also pause reading while the back button is held.
  • Pressing the back button should present a zoomed-out, sharp view of the document, allowing the reader to change the current reading location. The app should begin in this zoomed-out mode as well, and the two side copies should vanish in it.
  • Consider grouping small common words as units (e.g. “in the”).
  • Maybe some kind of visual indicator for when the reader is speeding up or slowing down, like colors around the edge of vision.

Let me know if you have any thoughts!