WebVR

Exporting An Indie Unity Game to WebVR

WebVR holds the key to the future of VR content access – instant gratification without any downloads or installs. Or, at least we think so! We’re building a multi-platform digital game subscription service called Boondogl that delivers native web games to desktop, mobile, console, and VR devices, and we’ve bet our entire business on native web technologies – HTML5, WebGL, JS, and soon WebAssembly. We set out to demonstrate how powerful the web will be for virtual reality, by building an Oculus Rift WebVR game for Boondogl. We built SECVRITY in a month. With such a short window, we didn’t have time to dive into the WebVR API to build it natively on the web. So, we built the game in our engine of choice – Unity 5.

SECVRITY is probably best described as “Whac-A-Mole for viruses”. You play as a computer security specialist trying to thwart a barrage of incoming viruses on your panoramic monitor setup. To disable viruses, you have to – you guessed it – look at the screen currently being attacked and click on it. While the potential for whiplash is fairly high, the potential for fun ended up being even higher, as evidenced by the barrage of people playing it in Mozilla’s Booth at GDC!

SECVRITY at GDC

Back to the technology though – Unity supports both WebGL and VR, but we quickly discovered that they were mutually exclusive and Unity did not have WebVR on their immediate roadmap. We started searching for ways to bridge this gap. Since Unity’s WebGL export spits the game out in website form, there had to be a way to connect the WebVR API to our Unity game and pipe the WebVR input into the engine. We were really hoping not to have to write that bridge ourselves in our one-month window.

Luckily, one brave soul who goes by gtk2k on GitHub decided to build this bridge for everyone, almost a full year ago. His method is straightforward: he built a Unity WebGL template which includes JS files to handle WebVR input via the API, then he piped that code into Unity through one simple script. To implement the script properly in Unity, he created a camera prefab that houses 3 different cameras – a standard view camera, which is a normal Unity camera; and two stereo cameras that display side-by-side with slightly adjusted x-positions and viewport rects. The developer simply has to replace the main camera in their scene with this prefab, attach StereoCamera.cs to it, and watch the magic work. gtk2k’s bridge very cleverly makes the switch from standard camera to stereo cameras when the user hits the “Enter VR” button in the customized WebVR Unity template.
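
To give a rough idea of the web side of that bridge, here is an illustrative sketch (not gtk2k’s actual code; the object and method names are made up, and it borrows the WebVR API shape described later in this post): the template’s JavaScript reads the headset pose each frame and forwards it into Unity through the SendMessage helper exposed by Unity’s WebGL output.

// Illustrative sketch only; 'WebVRCameraSet' and 'SetRotation' are hypothetical names,
// and StereoCamera.cs would be responsible for parsing the value on the C# side.
function forwardPoseToUnity (vrDisplay) {
  var pose = vrDisplay.getPose();

  if (pose.orientation) {
    // SendMessage(objectName, methodName, value) is exposed by Unity's WebGL output.
    SendMessage('WebVRCameraSet', 'SetRotation', JSON.stringify({
      x: pose.orientation[0],
      y: pose.orientation[1],
      z: pose.orientation[2],
      w: pose.orientation[3]
    }));
  }

  vrDisplay.requestAnimationFrame(function () {
    forwardPoseToUnity(vrDisplay);
  });
}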

Download a Sample Unity WebVR Project or grab the UnityPackage to import the necessary files into your own project.

To try out the template yourself, here’s what you’ll need to do:

  • Get your hands on an Oculus Rift. Ensure that you enable running apps from external sources.
  • Download and install Firefox Nightly.
  • Install the WebVR enabler.
  • Grab either the entire sample project or just the UnityPackage above.
  • Open Unity (either the sample project or your own project with the UnityPackage added) and replace your MainCamera with the WebVRCameraSet prefab.
  • Make sure StereoCamera.cs is attached to the parent node of the prefab.
  • From File > Build Settings, select WebGL as the platform (but leave Development Build unchecked).
  • Open Edit > Project Settings > Player to access the Player settings; under Resolution and Presentation, select WebVR as your WebGL Template.
  • In the same project settings, under the Publishing Settings section, ensure your WebGL Memory Size is set to a minimum of 512 MB to avoid out-of-memory errors. (For SECVRITY, we set it to 768 MB.)
  • Build to WebGL, and give it a shot in Firefox Nightly!
    • You can test local builds or upload the build to your favorite web host.

Hopefully that will get you up and running with your first Unity WebVR build! To test in the editor, you’ll need to enable the standard features for desktop VR builds. Go back to Edit > Project Settings > Player, select the Standalone tab (indicated by the down-arrow icon) above the Resolution and Presentation section, navigate to Other Settings, and check the boxes for both Stereoscopic Rendering and Virtual Reality Supported. These settings aren’t necessary for the WebVR build itself, but you’ll need them to test in the editor.

To supplement the template from a design perspective, we added explicit instructions to properly get the user into VR mode in their browser window. We also decided to give the user a choice whether to play in VR or with a mouse. This is where things got tricky.

We wanted non-VR users on desktop to be able to play SECVRITY since, well, it’s in a browser! We supported mouse controls before we supported VR, so mouse control in itself was simple. However, mouse control while VR input was being detected caused some incredibly wonky results. Essentially, the mouse movement would throw off the viewport of the VR headset, causing the user to: A. get completely lost, and B. get super sick. We had to detect whether or not the user was in VR and then disable mouse control to solve this.

Our solution is to completely disable mouse control, whether the player is using VR input or not, until they explicitly select “mouse” control from the main menu. The user must now select their input method of choice via arrow keys or controller joystick before playing. If the user chooses “mouse” while in a VR headset, then the sickness-inducing issues begin. Caveat player: we built this game in one month! Auto-detection will resolve this in future iterations.
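
For what it’s worth, that auto-detection can lean on the browser side: the page can check whether a headset is actually presenting and pass a flag into the game so mouse look gets disabled automatically. A minimal sketch (the receiving object and method names here are hypothetical, not SECVRITY’s actual code):

// Sketch only: report whether any connected headset is currently presenting.
function reportVRStatus () {
  if (!navigator.getVRDisplays) {
    SendMessage('GameController', 'SetVRActive', 0); // hypothetical Unity-side receiver
    return;
  }

  navigator.getVRDisplays().then(function (displays) {
    var presenting = displays.some(function (d) { return d.isPresenting; });
    SendMessage('GameController', 'SetVRActive', presenting ? 1 : 0);
  });
}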

We learned some valuable lessons in building for WebVR via Unity, mainly in designing for hybrid VR/non-VR experiences. A lot of our troubles should be solved in an official WebVR export from the engine. But even when that comes, it’s still important to understand what your user may or may not do to break your game, especially when the control inputs are so drastically different. We had to make a few tweaks to gtk2k’s code for the enter VR flows to work with recent changes to Firefox Nightly, but his codebase largely worked as advertised with very little effort on our end. That man is our hero.

The web is the future of gaming, and Boondogl, armed with games like SECVRITY, will prove it to the world. Web gaming provides almost instant access to games on desktop, mobile, console, VR headsets, and other devices, with no permanent downloads or installs required for users. The web can already deliver near-native speeds and, with WebGL 2.0 and WebAssembly on the horizon, we’ll start seeing near-current-generation graphics as well. Boondogl hopes to help drive the web revolution forward and make the web the ultimate home for games on all devices. If you want to follow Boondogl’s progress, you can sign up for our newsletter and for a beta key at boondogl.com. And if you want to play SECVRITY right now, you can find it as a demo on mozvr.com! Take it from us: the web will revolutionize the gaming industry. And WebVR will play an important role in showcasing the web’s power to both developers and users by providing instant access to beautiful virtual reality right from a browser.


Introducing the WebVR 1.0 API Proposal

2016 is shaping up to be a banner year for Virtual Reality. Many consumer VR products will finally be available and many top software companies are ramping up to support these new devices. The new medium has also driven demand for web-enabled support from browser vendors. Growth in WebVR has centered on incredible viewing experiences and the tools used to create online VR content.

The Mozilla VR team has been working hard to support the online creation and display of VR content in the browser. This week marks a WebVR milestone. Working closely with Brandon Jones of the Google Chrome team, the Mozilla team is excited to announce the version 1.0 release of the WebVR API proposal.

Recent VR technology advances and community feedback have allowed us to improve the API to address developer needs.

Some of the improvements include:

  • VR-specific handling of device rendering and display.
  • The ability to traverse links between WebVR pages.
  • An input handling scheme that can enumerate VR inputs, including six degrees of freedom (6DoF) motion controllers.
  • Accommodation of both sitting and standing experiences.
  • Suitability for both desktop and mobile usage.

We are excited to share these improvements to the API. Keep in mind the list above represents a small sample of what has changed. For all the details, take a look at the full API draft and check out Brandon’s blog post.

This article is focused on basic usage of the proposed API, which requires an understanding of some complex concepts like matrix math. As an alternative, you can get a quick start in WebVR by looking at A-Frame or the WebVR boilerplate, both built on top of the API.

Before we dive in, we’d like to give special thanks to Chris Van Wiemeersch (Mozilla), Kearwood “Kip” Gilbert (Mozilla), Brandon Jones (Google), and Justin Rogers (Microsoft) for contributing to the creation of this specification.

Implementation roadmap

We plan to land a stable implementation of the 1.0 APIs in Firefox Nightly in the first half of the year. You can follow along on Bugzilla for all the details, or see status updates on platform support on iswebvrready.org.

Want to get started today? Currently developers can experiment with a proof-of-concept implementation of the new API using Brandon Jones’ experimental builds of Chromium.

Both three.js and the WebVR Polyfill (used by the WebVR Boilerplate mentioned above) have open pull requests to support the latest APIs.

Components of a VR experience

Let’s take a look at the key components required for any VR experience:

  1. The VR display that we are rendering content to.
  2. The user’s pose: the orientation and position of the headset in space.
  3. Eye parameters that define stereo separation and field of view.

Here’s a look at the workflow sequence for getting content into the headset:

  1. Use navigator.getVRDisplays() to retrieve a VRDisplay.
  2. Create a <canvas> element which we will use to render content.
  3. Use VRDisplay.requestPresent() to pass in the canvas element.
  4. Create the VR device-specific animation loop in which we will perform content rendering.
    1. Use VRDisplay.getPose() to update the user’s pose.
    2. Perform calculations and rendering.
    3. Use VRDisplay.submitFrame() to indicate to the compositor when the canvas element content is ready to be presented in the VR display.

The following sections describe each of these actions in detail.
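
Before looking at each step in isolation, here is a condensed sketch of how they fit together (drawScene() is a hypothetical helper that does your actual WebGL drawing):

var vrDisplay;
var webglCanvas = document.querySelector('#webglcanvas');

navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) { return; }
  vrDisplay = displays[0];

  // Presentation must start from a user gesture (see below).
  document.querySelector('#entervr').addEventListener('click', function () {
    vrDisplay.requestPresent({source: webglCanvas}).then(function () {
      vrDisplay.requestAnimationFrame(onAnimationFrame);
    });
  });
});

function onAnimationFrame () {
  vrDisplay.requestAnimationFrame(onAnimationFrame);

  var pose = vrDisplay.getPose();
  drawScene(pose); // render the left and right eye views using the pose

  // Hand the finished frame to the VR compositor.
  vrDisplay.submitFrame(pose);
}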

Working with VR displays

Devices that display VR content have very specific display requirements for frame rate, field of view, and content presentation that are handled separately from standard desktop displays.

Enumerating VR displays

To retrieve VR displays available to the browser, use the navigator.getVRDisplays() method, which returns a Promise that resolves with an array of VRDisplay objects:

navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) {
    // WebVR is supported, but no VRDisplays are found.
    return;
  }

  // Handle VRDisplay objects. (Exposing as a global variable for use elsewhere.)
  vrDisplay = displays[0];
}).catch(function (err) {
  console.error('Could not get VRDisplays', err.stack);
});

Keep in mind:

  • You must have your VR headset plugged in and powered on before any VR devices will be enumerated.
  • If you do not have a VR headset, you can simulate a device by opening about:config and setting dom.vr.cardboard.enabled to true.
  • Users of Firefox Nightly for Android or Firefox for iOS will enumerate a Cardboard VR device for use with Google Cardboard.

Creating a render target

Your render target (i.e., your <canvas> element) must be large enough to hold both the left and right eye viewports. To find the size (in pixels) of each eye:

// Use 'left' or 'right'.
var eyeParameter = vrDisplay.getEyeParameters('left');

var width = eyeParameter.renderWidth;
var height = eyeParameter.renderHeight;
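
Putting the two eye viewports side by side, the combined render target size can then be derived from both sets of eye parameters. A small sketch (reusing the webglCanvas element introduced in the next section):

var leftEye = vrDisplay.getEyeParameters('left');
var rightEye = vrDisplay.getEyeParameters('right');

// Wide enough for both eye viewports side by side, tall enough for the larger eye.
webglCanvas.width = leftEye.renderWidth + rightEye.renderWidth;
webglCanvas.height = Math.max(leftEye.renderHeight, rightEye.renderHeight);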

Presenting content into the headset

To present content into the headset, you’ll need to use the VRDisplay.requestPresent() method. This method takes a WebGL <canvas> element as a parameter which represents the viewing surface to be displayed.

To ensure that the API is not abused, the browser requires a user-initiated event in order for a first-time user to enter VR mode. In other words, a user must choose to enable VR, and so we wrap this into a click event handler on a button labeled “Enter VR”.

// Select WebGL canvas element from document.
var webglCanvas = document.querySelector('#webglcanvas');
var enterVRBtn = document.querySelector('#entervr');

enterVRBtn.addEventListener('click', function () {
  // Request to present WebGL canvas into the VR display.
  vrDisplay.requestPresent({source: webglCanvas});
});

// To later discontinue presenting content into the headset.
vrDisplay.exitPresent();

Device-specific requestAnimationFrame

Now that we have our render target set up and the necessary parameters to render and present the correct view into the headset, we can create a render loop for the scene.

We will want to do this at an optimized refresh rate for the VR display. We use the VRDisplay.requestAnimationFrame callback:

var id = vrDisplay.requestAnimationFrame(onAnimationFrame);

function onAnimationFrame () {
  // Render loop.
  id = vrDisplay.requestAnimationFrame(onAnimationFrame);
}

// To cancel the animation loop.
vrDisplay.cancelAnimationFrame(id);

This usage is identical to the standard window.requestAnimationFrame() callback that you may already be familiar with. We use this callback to apply position and orientation pose updates to our content and to render to the VR display.

Retrieving pose information from a VR display

We will need to retrieve the orientation and position of the headset using the VRDisplay.getPose() method:

var pose = vrDisplay.getPose();

// Returns a quaternion.
var orientation = pose.orientation;

// Returns a three-component vector of absolute position.
var position = pose.position;

Please note:

  • pose.orientation and pose.position return null if they cannot be determined.
  • See VRStageCapabilities and VRPose for details.

Projecting a scene to the VR display

For proper stereoscopic rendering of the scene in the headset, we need eye parameters such as the offset (based on interpupillary distance or IPD) and field of view (FOV).

// Pass in either 'left' or 'right' eye as parameter.
var eyeParameters = vrDisplay.getEyeParameters('left');

// After translating world coordinates based on VRPose, transform again by negative of the eye offset.
var eyeOffset = eyeParameters.offset;

// Project with a projection matrix.
var eyeMatrix = makeProjectionMatrix(vrDisplay, eyeParameters);


// Apply eyeMatrix to your view.
// ...

/**
 * Generates projection matrix
 * @param {object} display - VRDisplay
 * @param {object} eye - VREyeParameters
 * @returns {Float32Array} 4×4 projection matrix
 */
function makeProjectionMatrix (display, eye) {
  var d2r = Math.PI / 180.0;
  var upTan = Math.tan(eye.fieldOfView.upDegrees * d2r);
  var downTan = Math.tan(eye.fieldOfView.downDegrees * d2r);
  var rightTan = Math.tan(eye.fieldOfView.rightDegrees * d2r);
  var leftTan = Math.tan(eye.fieldOfView.leftDegrees * d2r);
  var xScale = 2.0 / (leftTan + rightTan);
  var yScale = 2.0 / (upTan + downTan);

  var out = new Float32Array(16);
  out[0] = xScale;
  out[1] = 0.0;
  out[2] = 0.0;
  out[3] = 0.0;
  
  out[4] = 0.0;
  out[5] = yScale;
  out[6] = 0.0;
  out[7] = 0.0;
  
  out[8] = -((leftTan - rightTan) * xScale * 0.5);
  out[9] = (upTan - downTan) * yScale * 0.5;
  out[10] = -(display.depthNear + display.depthFar) / (display.depthFar - display.depthNear);
  out[11] = -1.0;

  out[12] = 0.0;
  out[13] = 0.0;
  out[14] = -(2.0 * display.depthFar * display.depthNear) / (display.depthFar - display.depthNear);
  out[15] = 0.0;

  return out;
}
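
The eye offset feeds into the view matrix rather than the projection matrix. As a sketch of one common approach (using the gl-matrix library purely for illustration, as the stereoscopic rendering article below does), the per-eye view matrix can be built from the pose and the eye offset like this:

// Sketch only: assumes the gl-matrix library (mat4, vec3, quat) is available.
function makeEyeViewMatrix (pose, eyeParameters) {
  var orientation = pose.orientation || quat.create(); // identity if untracked
  var position = pose.position || vec3.create();       // origin if untracked

  // Build the head matrix from the pose...
  var headMatrix = mat4.fromRotationTranslation(mat4.create(), orientation, position);

  // ...then move out to the eye by its offset from the center of the head.
  mat4.translate(headMatrix, headMatrix, eyeParameters.offset);

  // The view matrix is the inverse of the eye's world matrix, which is
  // where the "negative of the eye offset" mentioned above comes in.
  return mat4.invert(mat4.create(), headMatrix);
}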

Submitting frames to the headset

VR is optimized to minimize the discontinuity between the user’s movement and the content rendered into the headset, which is important for a comfortable (non-nauseating) experience. The API gives you direct control over this timing through the VRDisplay.getPose() and VRDisplay.submitFrame() methods:

// Rendering and calculations not dependent on pose.
// ...

var pose = vrDisplay.getPose();

// Rendering and calculations dependent on pose. Apply your generated eye matrix here to views.
// Try to minimize operations done here.
// ...

vrDisplay.submitFrame(pose);

// Any operations done to the frame after submission do not increase VR latency. This is a useful place to render another view (such as mirroring).
// ...

The general rule is to call VRDisplay.getPose() as late as possible and VRDisplay.submitFrame() as early as possible.

Demos, feedback and resources

Looking for ways to get started? Here’s a collection of example apps that use the WebVR 1.0 API. Also, check out the resources listed below.

And please keep sharing your feedback!

The development of this API proposal has been in direct response to evolving VR technology, and also from community feedback and discussion. We’re off to a good start, and your ongoing feedback can help us make it better.

We invite you to submit issues and pull requests to the WebVR API specification GitHub repository.

Resources


Stereoscopic Rendering in WebVR

At Mozilla, a small recon team has been toying with the idea of blending the best features of the web such as interconnectedness, permissionless content creation, and safe execution of remote code with the immersive interaction model of Virtual Reality.

By starting out with support for Oculus’s DK2 headset, we’ve enabled those interested to begin experimenting with VR.  As a quick introduction, I wanted to show some of the differences in rendering techniques developers have to account for when building their first VR experience.  For this post, we’ll focus on describing rendering with the WebGL set of APIs.

My first rendering in WebVR, the Stanford dragon. Firefox handles the vignetting effect, spatial, and chromatic distortion for us just by entering fullscreen.

Multiple Views of the Same Scene

An important first distinction to make: With traditional viewing on a monitor or screen, we’re flattening our three-dimensional scene onto a plane (the view-port).  While objects may have different distances from the view-port, everything is rendered from a single point of view.  We may have multiple draw calls that build up our scene, but we usually render everything using one view matrix and one projection matrix that’s calculated at scene creation time.

For example, the view matrix might contain information such as the position of our virtual camera relative to everything else in the scene, as well as our orientation (Which way is forward? Which way is up?).  The projection matrix might encode whether we want a perspective or orthographic projection, view-port aspect ratio, our field of view (FOV), and draw distance.

As we move from rendering the scene from one point of view to rendering on a head-mounted display (HMD), suddenly we have to render everything twice from two different points of view!

Diagrams: monoscopic rendering vs. stereoscopic rendering.

In the past, you might have used one view matrix and one projection matrix, but now you’ll need a pair of each.  Rather than having the choice of field of view (FOV), you now must query the headset for the user’s FOV setting for each eye. As anyone who’s visited an eye doctor or had an eye exam lately can attest, your eyes each have their own FOV!  This is not something you need to correct for when rendering to a faraway monitor, since the monitor usually occupies only a subset of your field of view, whereas a head-mounted display (HMD) encompasses the entire field of view (FOV).

The Oculus SDK has a configuration utility where the user can set an individual FOV per eye and their interpupillary distance (IPD), essentially the space between the eyes, measured from pupil to pupil.

The unique fields of view give us two unique projection matrices.  Because your eyes are also offset from one another, they also have different positions or translations from the position of the viewer. This gives us two different view matrices (one per eye) as well.  It’s important to get these right so that the viewer’s brain is able to correctly fuse two distinct images into one.

Without accounting for the IPD offset, a proper parallax effect cannot be achieved.  Parallax is very important for differentiating distances to various objects and depth perception.  Parallax is the appearance of objects further away from you moving slower than closer objects when panning side to side.  Github’s 404 page is a great example of parallax in action.

That’s why some 360 degree video shot from a single view/lens per direction tends to smear objects in the foreground with objects farther away.  For more info on 360 degree video issues, this eleVR post is a great read.

We’ll also have to query the HMD to see what the size of the canvas should be set to for the native resolution.

When rendering a monoscopic view, we might have code like this:


function init () {
  // using gl-matrix for linear algebra
  var viewMatrix = mat4.lookAt(mat4.create(), eye, center, up);
  var projectionMatrix = mat4.perspective(mat4.create(), fov, aspect, near, far);
  var mvpMatrix = mat4.multiply(mat4.create(), projectionMatrix, viewMatrix);
  gl.uniformMatrix4fv(uniforms.uMVPMatrixLocation, false, mvpMatrix);
};
function update (t) {
  gl.clear(flags);
  gl.drawElements(mode, count, type, offset);
  requestAnimationFrame(update);
};

in JS and in our GLSL vertex shader:


uniform mat4 uMVPMatrix;
attribute vec4 aPosition;

void main () {
  gl_Position = uMVPMatrix * aPosition;
}

…but when rendering from two different viewpoints with WebVR, reusing the previous shader, our JavaScript code might look more like:


function init () {
  // hypothetical function to get the list of
  // attached HMDs and position sensors.
  initHMD();
  initModelMatrices();
};
function update () {
  gl.clear(flags);

  // hypothetical function that
  // uses the WebVR APIs to update view matrices
  // based on orientation provided by the HMD's
  // accelerometer, and position provided by the
  // position sensor camera.
  readFromHMDPS();

  // left eye
  gl.viewport(0, 0, canvas.width / 2, canvas.height);
  mat4.multiply(mvpMatrix, leftEyeProjectionMatrix, leftEyeViewMatrix);
  gl.uniformMatrix4fv(uniforms.uMVPMatrixLocation, false, mvpMatrix);
  gl.drawElements(mode, count, type, offset);

  // right eye
  gl.viewport(canvas.width / 2, 0, canvas.width / 2, canvas.height);
  mat4.multiply(mvpMatrix, rightEyeProjectionMatrix, rightEyeViewMatrix);
  gl.uniformMatrix4fv(uniforms.uMVPMatrixLocation, false, mvpMatrix);
  gl.drawElements(mode, count, type, offset);

  requestAnimationFrame(update);
};

In a follow-up post, once the WebVR API has had more time to bake, we’ll take a look at some more concrete examples and explain things like quaternions!  With WebGL2’s multiple render targets (WebGL1’s WEBGL_draw_buffers extension, currently with less than 50% browser support), or WebGL2’s instancing (WebGL1’s ANGLE_instanced_arrays extension, currently at 89% browser support), it should be possible to avoid explicitly calling draw twice.

For more info on rendering differences, Oculus docs are also a great reference.

90 Hz Refresh Rate and Low Latency

When rendering, we’re limited in how fast we can show updates and refresh the display by the hardware’s refresh rate. For most monitors, this rate is 60 Hz.  This gives us 16.66 ms to draw everything in our scene (minus a little for the browser’s compositor). requestAnimationFrame will limit how quickly we can run our update loops, which prevents us from doing more work than is necessary.

The Oculus DK2 has a max refresh rate of 75 Hz (13.33 ms per frame) and the production version currently slated for a Q1 2016 release will have a refresh rate of 90 Hz (11.11 ms per frame).

So, not only do we need to render everything twice from two different viewpoints, but we only have two-thirds the time to do it (16.66 ms * 2 / 3 == 11.11)! While this seems difficult, hitting a lower frame time is doable by various tricks (lower scene complexity, smaller render target plus upscaling, etc). On the other hand, reducing the latency imposed by hardware is much more challenging!

Not only do we have to concern ourselves with frame rate, but also with latency on user input. The major difference between real-time rendering and pre-rendering is that a real-time scene is generated dynamically usually with input from the viewer. When a user moves their head or repositions themselves, we want to have a tight feedback loop between when they move and when they see the results of their movement displayed to them. This means we want to get our rendering results displayed sooner, but then we run into the classic double buffering vs screen tearing issue. As Oculus Chief Scientist Michael Abrash points out, we want sub 20 ms latency between user interaction and feedback presentation.

Whether or not current desktop, let alone mobile, graphics hardware is up to the task remains to be seen!

To get more info or get involved in WebVR:

* MozVR download page (everything you need to get up and running with WebVR in Firefox)

* WebVR spec (in flux, subject to change, things WILL break.)

* MDN docs (in progress, will change when spec is updated)

* web-vr-discuss public mailing list

* /r/webvr subreddit
