The first milestone was to get a sample text running on Gear VR, presenting one word at a time to the user. This was fairly straightforward: I wrote a separate external C# application which took in some of the public-domain text of Alice’s Adventures in Wonderland and spat out two things. First, a 4096×4096 texture image:
and second, a text file giving the position and size of each word in the text, in order (words are delimited by spaces; positions and sizes are in texture space):
0.02441406 0.02441406 0.03021049 0.01472854
0.05462455 0.02441406 0.02536964 0.01472854
0.07999419 0.02441406 0.05639457 0.01472854
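Loading that metadata back in is just line-by-line parsing. Here is a minimal sketch (the type and method names are my illustration, not the actual tool's code); it assumes each line holds exactly four space-separated floats, x y width height, written with invariant formatting:

```csharp
using System.Globalization;
using System.IO;
using System.Linq;
using UnityEngine;

public static class WordLayoutLoader
{
    // Parse the per-word metadata file into texture-space rects,
    // one Rect per word, in reading order.
    public static Rect[] Load(string path)
    {
        return File.ReadAllLines(path)
            .Where(line => !string.IsNullOrWhiteSpace(line))
            .Select(line =>
            {
                float[] f = line.Split(' ')
                    .Select(s => float.Parse(s, CultureInfo.InvariantCulture))
                    .ToArray();
                return new Rect(f[0], f[1], f[2], f[3]); // x, y, width, height
            })
            .ToArray();
    }
}
```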
I then created a Gear VR app in which the texture is applied to a quad, with a script attached that moves the player in front of each word in sequence. The specified words-per-minute rate is maintained by timing each word transition with Unity's Time class. The only tricky bit was that OVRPlayerController offsets the camera a bit based on the eye height in the user's profile; this can be corrected by lowering the camera height by OVRManager.profile.eyeHeight/2.
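The word-stepping script could look roughly like this. It's a sketch, not the actual script: the field names are mine, the quad is assumed to be 1 unit square with texture space mapped onto its local XY plane, and the eye-height correction line is commented out because it needs the Oculus SDK:

```csharp
using UnityEngine;

public class WordStepper : MonoBehaviour
{
    public float wordsPerMinute = 250f;
    public Transform player;       // OVRPlayerController root (assumed)
    public Rect[] words;           // per-word rects in texture space
    public float viewingDistance = 0.3f;

    int index;
    float nextWordTime;

    void Start()
    {
        // OVRPlayerController raises the camera by the profile eye
        // height; compensate so the view lines up with the quad:
        // player.position += Vector3.down * (OVRManager.profile.eyeHeight / 2f);
        nextWordTime = Time.time;
    }

    void Update()
    {
        if (Time.time < nextWordTime || index >= words.Length) return;

        // Map the word's texture-space center onto the quad
        // (texture (0,0) is top-left; quad local space is centered).
        Rect w = words[index++];
        Vector3 center = transform.TransformPoint(
            new Vector3(w.center.x - 0.5f, 0.5f - w.center.y, 0f));

        // Park the player in front of the word, facing the quad.
        player.position = center - transform.forward * viewingDistance;

        // Schedule the next word from the WPM rate (60 s / WPM per word).
        nextWordTime += 60f / wordsPerMinute;
    }
}
```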
Here is a screenshot (50% actual size, running live on Gear VR):
And here is a demo video:
Things that remain to do:
- Don’t treat hyphenated phrases as a single word
- Highlight the word the user is supposed to be watching more clearly (e.g. by blurring other words or using color). This is probably easiest to do with a shader, which will have properties telling it which bits of the texture to modify and which bits to pass through. To avoid doing the blur at runtime in the shader, I can have a pre-blurred texture and just select which texture to pull from at runtime based on texture coordinates.
- Allow user to control Words Per Minute rate (currently it’s just a public property on the document quad)
- Add menu, allow user to select where to begin reading from, add animation zooming into document
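For the highlighting item above, the pre-blurred-texture approach could be wired up from C# along these lines. This is a hypothetical sketch: the shader property names (_BlurTex, _WordRect) are made up, and the assumed shader would sample the sharp atlas inside the rect and the blurred copy everywhere else:

```csharp
using UnityEngine;

public class WordHighlighter : MonoBehaviour
{
    public Texture2D sharpAtlas;    // the rendered document texture
    public Texture2D blurredAtlas;  // pre-blurred copy of the same atlas

    Material mat;

    void Start()
    {
        mat = GetComponent<Renderer>().material;
        mat.SetTexture("_MainTex", sharpAtlas);
        mat.SetTexture("_BlurTex", blurredAtlas);  // assumed shader property
    }

    // Called whenever the reader advances to a new word.
    public void Highlight(Rect wordRect)
    {
        // Pack the word's texture-space bounds into one vector so the
        // shader can test each fragment's UV against it.
        mat.SetVector("_WordRect", new Vector4(
            wordRect.xMin, wordRect.yMin, wordRect.xMax, wordRect.yMax));
    }
}
```

Since the blur is baked into the second texture, the shader's per-fragment work is just a rect test and one of two texture fetches, which should be cheap enough for mobile VR.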