Development notes, Reusable code

Compute shaders and Camera.Render in Unity: workaround for re-ordering

While recently working on the 1.3 release of my Panorama Capture script, I encountered a mysterious and difficult-to-reproduce issue on certain older GPUs such as the NVIDIA GTX 675M: I made a sequence of calls to Camera.Render() and ComputeShader.Dispatch(), and Unity would blithely re-order them without regard for the read/write dependencies between them, resulting in very strange panorama images like this:

[Image: distorted panorama output]

The function of the compute shader was to take the result of the rendering and store it in a compute buffer, so that the RenderTexture could then be re-used. This is roughly what my code looked like:

// One shared render target; RenderTexture's constructor needs a depth-buffer argument
renderTexture = new RenderTexture(width, height, 24);
// Room for 10 images, one 4-byte uint per pixel
ComputeBuffer computeBuffer = new ComputeBuffer(10 * width * height, 4);
copyShader.SetTexture(kernelIdx, "source", renderTexture);
copyShader.SetBuffer(kernelIdx, "result", computeBuffer);
cam.targetTexture = renderTexture;
for (int i = 0; i < 10; i++) {
  // Set cam.transform.position/rotation based on i
  cam.Render();
  // Tell the copy kernel where this image starts in the buffer
  copyShader.SetInt("startIdx", i * width * height);
  copyShader.Dispatch(kernelIdx, (width  + threadsX - 1) / threadsX,
                                 (height + threadsY - 1) / threadsY, 1);
}
pixels = new uint[10 * width * height];
computeBuffer.GetData(pixels); // blocking readback of all 10 images

The goal is to render 10 images and copy each one into computeBuffer in order. But on some GPUs, the Render() and Dispatch() calls were executed out of order. Sometimes all the Render() calls ran before any of the Dispatch() calls, leaving all 10 images in the computeBuffer identical; other times a Dispatch() ran one iteration early or late, shifting the resulting images up or down in the buffer or duplicating certain images. I don't know whether this is a Unity bug or a GPU memory-model limitation, but I needed to find a workaround.
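To give a concrete picture of one possible approach (a rough sketch of the general idea, not necessarily the fix the full post arrives at): inserting a small blocking readback after each Dispatch() makes the CPU wait until that dispatch has actually completed, which keeps the Render()/Dispatch() pairs from sliding past each other, at the cost of a pipeline stall per iteration:

// Sketch only: force ordering with a tiny blocking readback per iteration.
uint[] sync = new uint[1];
for (int i = 0; i < 10; i++) {
  // Set cam.transform.position/rotation based on i
  cam.Render();
  copyShader.SetInt("startIdx", i * width * height);
  copyShader.Dispatch(kernelIdx, (width  + threadsX - 1) / threadsX,
                                 (height + threadsY - 1) / threadsY, 1);
  // Read back one element this dispatch wrote; GetData() blocks until the
  // GPU has finished, so later calls can't be re-ordered ahead of it.
  computeBuffer.GetData(sync, 0, i * width * height, 1);
}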

Continue reading

Development notes, Reusable code

Unity Bug workaround: ReadPixels() and RenderTexture.antiAliasing > 1

This is a fairly nefarious bug in Unity, reported as Issue ID 681089 ([TEXTURE2D] TEXTURE2D.READPIXELS() FAILS IF RENDERTEXTURE HAS ANTI-ALIASING SET), that was causing some serious problems for my Panorama Capture plug-in, since it prevented me from enabling MSAA anti-aliasing. If I tried, my output renders came out solid black.
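The usual way around this class of problem (a sketch of the general technique; the full post covers what I actually did) is to resolve the MSAA surface before reading it back: blit the anti-aliased RenderTexture into a non-anti-aliased temporary, then call ReadPixels() on the resolved copy.

// Sketch: resolve MSAA into a plain temporary before ReadPixels().
// "src" is the anti-aliased RenderTexture and "tex" a matching Texture2D
// (names assumed for illustration).
RenderTexture resolved = RenderTexture.GetTemporary(src.width, src.height, 0);
Graphics.Blit(src, resolved); // resolves the MSAA samples
RenderTexture previous = RenderTexture.active;
RenderTexture.active = resolved;
tex.ReadPixels(new Rect(0, 0, src.width, src.height), 0, 0);
tex.Apply();
RenderTexture.active = previous;
RenderTexture.ReleaseTemporary(resolved);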

Continue reading

Reusable code

Unity Asset Store package: 360 panorama capture


VRCHIVE is a website (currently in alpha) for sharing static 360-degree panoramas and viewing them in VR. I was asked by VRCHIVE to create a Unity script to capture monoscopic 360-degree screenshots in-game (in both traditional and VR games) and upload them to the VRCHIVE website.

In addition to or instead of uploading, it can save the panoramas to disk as image files in equirectangular and/or cubemap projections. It is designed to capture high-resolution panoramas (typically 8192×4096) without causing FPS drops or other issues that are problematic in VR applications, and can be used both by developers and by end users who purchase their games.

Install from Unity Asset Store (follow included README to set up)

Continue reading

Reusable code

Unity 5.x package: fade screen in/out

This is a tiny package I whipped up for a friend who wanted to be able to fade the screen to black and then fade back in in a Unity 5.x application (this is particularly useful in VR, since tracking issues are invisible while the screen is faded).

Download: Mirror 1 | Mirror 2 | Mirror 3

Usage: Create an empty game object, assign the ScreenFader script to it, and adjust the parameters. Leave “Fade in” checked. At runtime, when you toggle the “Fade in” parameter, it will either fade out (when disabling it) or fade in (when enabling it). It can be toggled from scripts, from Playmaker, or via the editor.
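From a script, toggling looks something like this (the field name is my assumption based on the "Fade in" inspector label; check the ScreenFader source for the real name):

using UnityEngine;

public class FadeExample : MonoBehaviour
{
    ScreenFader fader;

    void Start()
    {
        // "fadeIn" is a hypothetical name inferred from the "Fade in"
        // checkbox shown in the inspector.
        fader = GetComponent<ScreenFader>();
    }

    public void FadeOut() { fader.fadeIn = false; } // fade out to black
    public void FadeIn()  { fader.fadeIn = true;  } // fade back in
}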

Continue reading

Reusable code

Efficiently finding two vectors both orthogonal/normal to a given vector

[Diagram: a normal vector and two orthogonal vectors spanning its plane]

This is a code snippet I wrote earlier for some work on light field mesh parameterizations. I had a normal vector and wanted to find two orthogonal vectors spanning the plane that the vector is normal to, in order to project another vector into that plane. This is an underspecified problem: given one vector, there are many pairs of vectors that are orthogonal to that vector and to each other. This extra degree of freedom can be used to construct a numerically stable procedure that also uses fewer operations than computing a cross product.

In the C# code snippet below, n is the input vector and b1 and b2 are the output vectors.

// Branch on the component of n with the largest magnitude; dividing by that
// component keeps the quotients bounded by 1, which is what gives stability.
if (Math.Abs(n.x) >= Math.Abs(n.y) &&
    Math.Abs(n.x) >= Math.Abs(n.z)) {
    b1.x = n.y / n.x; b1.y = -1.0; b1.z = 0.0;
    double d = n.x * n.x + n.y * n.y;
    b2.x = n.x * n.z / d; b2.y = n.y * n.z / d; b2.z = -1.0;
}
else if (Math.Abs(n.y) >= Math.Abs(n.x) &&
         Math.Abs(n.y) >= Math.Abs(n.z)) {
    b1.x = -1.0; b1.y = n.x / n.y; b1.z = 0.0;
    double d = n.x * n.x + n.y * n.y;
    b2.x = n.x * n.z / d; b2.y = n.y * n.z / d; b2.z = -1.0;
}
else { // Math.Abs(n.z) >= Math.Abs(n.x) &&
       // Math.Abs(n.z) >= Math.Abs(n.y)
    b1.x = -1.0; b1.y = 0.0; b1.z = n.x / n.z;
    double d = n.x * n.x + n.z * n.z;
    b2.x = n.x * n.y / d; b2.y = -1.0; b2.z = n.y * n.z / d;
}
// Optionally normalize b1 and b2 to unit length here
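For completeness, here is how the basis can be used for the projection mentioned above (my own illustrative snippet, assuming b1 and b2 have been normalized to unit length):

// Project v into the plane normal to n using the orthogonal basis (b1, b2).
double s1 = v.x * b1.x + v.y * b1.y + v.z * b1.z; // coordinate along b1
double s2 = v.x * b2.x + v.y * b2.y + v.z * b2.z; // coordinate along b2
projected.x = s1 * b1.x + s2 * b2.x;
projected.y = s1 * b1.y + s2 * b2.y;
projected.z = s1 * b1.z + s2 * b2.z;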

Continue reading

Uncategorized

Response to Editorial: Why VR Is Going To Be An Enormous Flop

John Walker of Rock, Paper, Shotgun recently published an editorial arguing that consumer VR would fail to take off as an industry. While dissenting viewpoints about VR are always welcome, and the failure of consumer VR does remain a possibility, I've seen these particular arguments raised and refuted many times before. A point-by-point rebuttal follows.

Continue reading

Development notes

SVVR update + 360 panorama capture in VR: 360 video and the perils of Apply()

I attended SVVR 2015 this week, which was lots of fun! I saw panels and talks, visited the expo, had lunch with the indie devs, and got my first demo of the eye-tracking VR headset FOVE, which I subsequently backed in their Kickstarter. Just like last year, I took lots of photos (Karl Krantz actually gave me a free press pass!) and I've started uploading them to a new eVRdayVR Flickr account. It'll take a while to get through them, since processing photos is time-consuming as hell, but when I'm done I'll post them to reddit.


Meanwhile, I got a VR demo going of my 360 panorama capture Unity plugin. You walk around Viking Village in a DK2, and at any time you can press P to capture a 360 panorama still. It fades to black, captures the screens, fades back in, and then continues processing on the CPU for about a minute (converting to equirectangular projection) before saving/uploading. It's fully asynchronous, so you can keep playing while it processes. You can try out the demo here.

I had an issue where, as soon as the reprojection processing finished, the demo would stutter severely for a couple of seconds. Eventually I pinned this down to the Texture2D.Apply() call, which I was making after executing all the SetPixel() calls on my final destination texture containing the reprojected image. On such a large texture (8192×4096 in this demo) this is a very expensive call, taking multiple frames at the very least. I circumvented the issue by writing my result image into a byte array instead of a Texture2D, then converting it directly into a C# Bitmap via Bitmap.LockBits() (this required importing the System.Drawing.dll assembly; see this post). I can then save the result to an image file on another thread (EncodeToJPEG() only works on the main thread in Unity, but Bitmap.Save() can run anywhere). This made the experience a lot smoother, with no detectable lag during processing, even in the Rift.
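The LockBits() conversion is straightforward; here is a minimal sketch of the idea (assuming a 32-bit-per-pixel byte array and that System.Drawing.dll has been imported):

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Threading;

// Sketch: wrap a raw pixel array in a Bitmap via LockBits(), then save it
// off the main thread (Bitmap.Save() has no main-thread restriction,
// unlike Unity's EncodeToJPEG()).
static void SaveImageAsync(byte[] pixels, int width, int height, string path)
{
    Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
    Rectangle rect = new Rectangle(0, 0, width, height);
    BitmapData data = bitmap.LockBits(rect, ImageLockMode.WriteOnly,
                                      PixelFormat.Format32bppArgb);
    Marshal.Copy(pixels, 0, data.Scan0, pixels.Length); // raw row-major copy
    bitmap.UnlockBits(data);

    new Thread(() => {
        bitmap.Save(path, ImageFormat.Jpeg);
        bitmap.Dispose();
    }).Start();
}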

I also used my plugin along with some basic replay functionality to produce a short 360 video of some Unity content. I recorded the position of the player in each FixedUpdate() for replay, and used ParticleSystem.Simulate() to make the torches and such advance one frame at a time. I used the GPU reprojection, which is more resource-intensive but about 30 times faster. Even so, it still took about an hour to finish processing this simple 17-second video, partly because I was capturing everything at 8192×4096 and downscaling to 3840×1920, and partly because there's still a lot of room for optimization. But it's still super cool to be able to share a complete play environment on YouTube, and to re-play the 360 video in VR in Virtual Desktop. I think this is the beginning of how Let's Plays of actual full games will be done in VR.
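A rough sketch of the replay mechanism (my own reconstruction of the idea described above, with assumed names): record the transform every FixedUpdate(), then during capture apply one recorded frame at a time and step the particle systems by one fixed timestep with ParticleSystem.Simulate():

using System.Collections.Generic;
using UnityEngine;

public class ReplayRecorder : MonoBehaviour
{
    struct Frame { public Vector3 position; public Quaternion rotation; }
    List<Frame> frames = new List<Frame>();

    // Recording: one entry per physics step
    void FixedUpdate()
    {
        frames.Add(new Frame { position = transform.position,
                               rotation = transform.rotation });
    }

    // Playback: apply frame i, then advance particles by one fixed step
    public void PlayFrame(int i, ParticleSystem[] systems)
    {
        transform.position = frames[i].position;
        transform.rotation = frames[i].rotation;
        foreach (ParticleSystem ps in systems)
            ps.Simulate(Time.fixedDeltaTime, true, false); // advance, don't restart
    }
}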