WebGL

WebGL Off the Main Thread

We’re happy to announce WebGL in Web Workers in Firefox 44+! Using the new OffscreenCanvas API you can now create a WebGL context off of the main thread.

To follow along, you’ll need a copy of Firefox 44 or newer (currently Firefox Developer Edition or Firefox Nightly). You’ll have to enable this API by navigating to about:config in Firefox, searching for gfx.offscreencanvas.enabled, and setting it to true. You can grab the code examples from GitHub or preview them here in Firefox 44+ with gfx.offscreencanvas.enabled set to true. This functionality is not yet available on Windows pending ANGLE support; however, AlteredQualia points out things are running great on Windows in Firefox Nightly 46. Shame on me for not verifying!

Use Cases

This API is the first that allows a thread other than the main thread to change what is displayed to the user. This allows rendering to progress no matter what is going on in the main thread. You can see more use cases in the working group’s in-progress specification.

Code Changes

Let’s take a look at a basic example of WebGL animation from my Raw WebGL talk. We’ll port this code to run in a worker, rather than the main thread.

WebGL in Workers

The first step is moving all of the code from WebGL context creation to draw calls into a separate file.


<script src="gl-matrix.js"></script>
<script>
  // main thread
  var canvas = document.getElementById('myCanvas');
  ...
  gl.useProgram(program);
  ...

becomes:


// main thread
var canvas = document.getElementById('myCanvas');
if (!('transferControlToOffscreen' in canvas)) {
  throw new Error('webgl in worker unsupported');
}
var offscreen = canvas.transferControlToOffscreen();
var worker = new Worker('worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);
...

Note that we’re calling HTMLCanvasElement.prototype.transferControlToOffscreen, then transferring the result to a newly constructed worker thread. transferControlToOffscreen returns a new object that is an instance of OffscreenCanvas, as opposed to HTMLCanvasElement. While similar, you can’t access properties like offscreen.clientWidth and offscreen.clientHeight, but you can access offscreen.width and offscreen.height. By listing it in the second argument to postMessage (the transfer list), we transfer ownership of the object to the worker thread.

Now in the worker thread, we’ll wait to receive the message from the main thread with the canvas element, before trying to get a WebGL context. The code for getting a WebGL context, creating and filling buffers, getting and setting attributes and uniforms, and drawing does not change.


// worker thread
importScripts('gl-matrix.js');

onmessage = function (e) {
  if (e.data.canvas) {
    createContext(e.data.canvas);
  }
};

function createContext (canvas) {
  var gl = canvas.getContext('webgl');
  ...

OffscreenCanvas adds one new method to WebGLRenderingContext.prototype, called commit. The commit method pushes the rendered image to the canvas element that created the OffscreenCanvas used by the WebGL context.

Animation Synchronization

Now to get the code animating, we can proxy requestAnimationFrame timings from the main thread to the worker with postMessage.


// main thread
(function tick (t) {
  worker.postMessage({ rAF: t });
  requestAnimationFrame(tick);
})(performance.now());

and onmessage in the worker becomes:


// worker thread
onmessage = function (e) {
  if (e.data.rAF && render) {
    render(e.data.rAF);
  } else if (e.data.canvas) {
    createContext(e.data.canvas);
  }
};

and our render function now has a final gl.commit(); statement rather than setting up another requestAnimationFrame loop.


// main thread
function render (dt) {
  // update
  ...
  // render
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.drawArrays(gl.TRIANGLES, 0, n);
  requestAnimationFrame(render);
};

becomes:


// worker thread
function render (dt) {
  // update
  ...
  // render
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.drawArrays(gl.TRIANGLES, 0, n);
  gl.commit(); // new for webgl in workers
};

Limitations with this approach

While the example code doesn’t do proper velocity-based animation (by ignoring the value passed from requestAnimationFrame, it does frame-rate-dependent animation, as opposed to the more correct, frame-rate-independent velocity-based animation), we still have an issue with this approach.

Assume we moved the rendering logic off the main thread to avoid pauses from the JavaScript Virtual Machine’s Garbage Collector (GC pauses). GC pauses on the main thread will slow down invocations of requestAnimationFrame. Since calls to gl.drawArrays and gl.commit are triggered asynchronously in the worker thread by postMessage calls from a requestAnimationFrame loop on the main thread, GC pauses in the main thread will still block rendering in the worker thread. Note: GC pauses in the main thread should not block progress in a worker thread (at least they don’t in Firefox’s SpiderMonkey Virtual Machine); GC pauses are per worker in SpiderMonkey.

While we could try to do something clever in the worker to account for this, the solution will be to make requestAnimationFrame available in a Worker context. The bug tracking this work can be found here.
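
As an interim workaround (a sketch, not the approach this article takes), the worker could drive its own loop with setTimeout, which is available in workers, at the cost of losing alignment with the display’s refresh:

// worker thread: self-driven loop, not synchronized to the display's refresh
function tick() {
  render(performance.now());
  setTimeout(tick, 16); // roughly 60fps; requestAnimationFrame in workers would replace this
}
tick();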

Summary

Developers will now be able to render to the screen without blocking on the main thread, thanks to the new OffscreenCanvas API. There’s still more work to do with getting requestAnimationFrame on Workers. I was able to port existing WebGL code to run in a worker in a few minutes. For comparison, see animation.html vs. animation-worker.html and worker.js.

View full post on Mozilla Hacks – the Web developer blog


WebGL Deferred Shading

WebGL brings hardware-accelerated 3D graphics to the web. Many features of WebGL 2 are available today as WebGL extensions. In this article, we describe how to use the WEBGL_draw_buffers extension to create a scene with a large number of dynamic lights using a technique called deferred shading, which is popular among top-tier games.

live demo | source code

Today, most WebGL engines use forward shading, where lighting is computed in the same pass that geometry is transformed. This makes it difficult to support a large number of dynamic lights and different light types.

Forward shading can use a pass per light. Rendering a scene looks like:

foreach light {
  foreach visible mesh {
    if (light volume intersects mesh) {
      render using this material/light shader;
      accumulate in framebuffer using additive blending;
    }
  }
}

This requires a different shader for each material/light-type combination, which adds up. From a performance perspective, each mesh needs to be rendered (vertex transform, rasterization, material part of the fragment shader, etc.) once per light instead of just once. In addition, fragments that ultimately fail the depth test are still shaded, but with early-z and z-cull hardware optimizations and a front-to-back sorting or a z-prepass, this is not as bad as the cost of adding lights.

To optimize performance, light sources that have a limited effect are often used. Unlike real-world lights, we allow the light from a point source to travel only a limited distance. However, even if a light’s volume of effect intersects a mesh, it may only affect a small part of the mesh, but the entire mesh is still rendered.

In practice, forward shaders usually try to do as much work as they can in a single pass leading to the need for a complex system of chaining lights together in a single shader. For example:

foreach visible mesh {
  find lights affecting mesh;
  Render all lights and materials using a single shader;
}

The biggest drawback is the number of shaders required since a different shader is required for each material/light (not light type) combination. This makes shaders harder to author, increases compile times, usually requires runtime compiling, and increases the number of shaders to sort by. Although meshes are only rendered once, this also has the same performance drawbacks for fragments that fail the depth test as the multi-pass approach.

Deferred Shading

Deferred shading takes a different approach than forward shading by dividing rendering into two passes: the g-buffer pass, which transforms geometry and writes positions, normals, and material properties to textures called the g-buffer, and the light accumulation pass, which performs lighting as a series of screen-space post-processing effects.

// g-buffer pass
foreach visible mesh {
  write material properties to g-buffer;
}
 
// light accumulation pass
foreach light {
  compute light by reading g-buffer;
  accumulate in framebuffer;
}

This decouples lighting from scene complexity (number of triangles) and only requires one shader per material and per light type. Since lighting takes place in screen-space, fragments failing the z-test are not shaded, essentially bringing the depth complexity down to one. There are also downsides such as its high memory bandwidth usage and making translucency and anti-aliasing difficult.

Until recently, WebGL had a roadblock for implementing deferred shading. In WebGL, a fragment shader could only write to a single texture/renderbuffer. With deferred shading, the g-buffer is usually composed of several textures, which meant that the scene needed to be rendered multiple times during the g-buffer pass.

WEBGL_draw_buffers

Now with the WEBGL_draw_buffers extension, a fragment shader can write to several textures. To use this extension in Firefox, browse to about:config and turn on webgl.enable-draft-extensions. Then, to make sure your system supports WEBGL_draw_buffers, browse to webglreport.com and verify it is in the list of extensions at the bottom of the page.

To use the extension, first initialize it:

var ext = gl.getExtension('WEBGL_draw_buffers');
if (!ext) {
  // ...
}

We can now bind multiple textures, tx[] in the example below, to different framebuffer color attachments.

var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, tx[0], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, tx[1], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL, gl.TEXTURE_2D, tx[2], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT3_WEBGL, gl.TEXTURE_2D, tx[3], 0);

For debugging, we can check to see if the attachments are compatible by calling gl.checkFramebufferStatus. This function is slow and should not be called often in release code.

if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  // Can't use framebuffer.
  // See http://www.khronos.org/opengles/sdk/docs/man/xhtml/glCheckFramebufferStatus.xml
}

Next, we map the color attachments to draw buffer slots that the fragment shader will write to using gl_FragData.

ext.drawBuffersWEBGL([
  ext.COLOR_ATTACHMENT0_WEBGL, // gl_FragData[0]
  ext.COLOR_ATTACHMENT1_WEBGL, // gl_FragData[1]
  ext.COLOR_ATTACHMENT2_WEBGL, // gl_FragData[2]
  ext.COLOR_ATTACHMENT3_WEBGL  // gl_FragData[3]
]);

The maximum size of the array passed to drawBuffersWEBGL depends on the system and can be queried by calling gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL). In GLSL, this is also available as gl_MaxDrawBuffers.
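
For example, a quick capability check before committing to a four-attachment g-buffer might look like this sketch (the fallback strategy is left to the application):

var maxDrawBuffers = gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL);
if (maxDrawBuffers < 4) {
  // Not enough simultaneous color attachments for this g-buffer layout;
  // fall back to writing the g-buffer in multiple passes or pack values more tightly.
}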

In the deferred shading geometry pass, the fragment shader writes to multiple textures. A trivial pass-through fragment shader is:

#extension GL_EXT_draw_buffers : require
precision highp float;
void main(void) {
  gl_FragData[0] = vec4(0.25);
  gl_FragData[1] = vec4(0.5);
  gl_FragData[2] = vec4(0.75);
  gl_FragData[3] = vec4(1.0);
}

Even though we initialized the extension in JavaScript with gl.getExtension, the GLSL code still needs to include #extension GL_EXT_draw_buffers : require to use the extension. With the extension, the output is now the gl_FragData array that maps to framebuffer color attachments, not gl_FragColor, which is traditionally the output.

g-buffers

In our deferred shading implementation the g-buffer is composed of four textures: eye-space position, eye-space normal, color, and depth. Position, normal, and color use the floating-point RGBA format via the OES_texture_float extension, and depth uses the unsigned-short DEPTH_COMPONENT format.
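
As a rough sketch, creating one of the floating-point g-buffer textures and the depth texture could look like the following. This assumes OES_texture_float and, for the depth attachment, WEBGL_depth_texture are available (the article only specifies the formats, not the exact setup), and that width and height match the drawing buffer:

gl.getExtension('OES_texture_float');

var positionTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, positionTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);

gl.getExtension('WEBGL_depth_texture');

var depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
              gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);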

Figures: position texture, normal texture, color texture, depth texture, and light accumulation using the g-buffers.

This g-buffer layout is simple for our testing. Although four textures is common for a full deferred shading engine, an optimized implementation would try to use the least amount of memory by lowering precision, reconstructing position from depth, packing values together, using different distributions, and so on.

With WEBGL_draw_buffers, we can use a single pass to write each texture in the g-buffer. Compared to using a single pass per texture, this improves performance and reduces the amount of JavaScript code and GLSL shaders. As shown in the graph below, as scene complexity increases so does the benefit of using WEBGL_draw_buffers. Since increasing scene complexity requires more drawElements/drawArrays calls, more JavaScript overhead, and transforms more triangles, WEBGL_draw_buffers provides a benefit by writing the g-buffer in a single pass, not a pass per texture.

All performance numbers were measured using an NVIDIA GT 620M, which is a low-end GPU with 96 cores, in Firefox 26.0 on Windows 8. In the above graph, 20 point lights were used. The light intensity decreases proportionally to the square of the distance between the current position and the light position. Each Stanford Dragon is 100,000 triangles and requires five draw calls so, for example, when 25 dragons are rendered, 125 draw calls (and related state changes) are issued, and a total of 2,500,000 triangles are transformed.


WEBGL_draw_buffers test scene, shown here with 100 Stanford Dragons.

Of course, when scene complexity is very low, like the case of one dragon, the cost of the g-buffer pass is low so the savings from WEBGL_draw_buffers are minimal, especially if there are many lights in the scene, which drives up the cost of the light accumulation pass as shown in the graph below.

Deferred shading requires a lot of GPU memory bandwidth, which can hurt performance and increase power usage. After the g-buffer pass, a naive implementation of the light accumulation pass would render each light as a full-screen quad and read the entirety of each g-buffer. Since most light types, like point and spot lights, attenuate and have a limited volume of effect, the full-screen quad can be replaced with a world-space bounding volume or tight screen-space bounding rectangle. Our implementation renders a full-screen quad per light and uses the scissor test to limit the fragment shader to the light’s volume of effect.
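
A sketch of that last step, using hypothetical helpers for the bounds computation and the per-light quad draw:

gl.enable(gl.SCISSOR_TEST);
lights.forEach(function (light) {
  var rect = lightScreenSpaceBounds(light);  // hypothetical helper: light's bounding rectangle in pixels
  gl.scissor(rect.x, rect.y, rect.width, rect.height);
  drawFullScreenQuad(light);                 // hypothetical helper: accumulate this light's contribution
});
gl.disable(gl.SCISSOR_TEST);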

Tile-Based Deferred Shading

Tile-based deferred shading takes this a step farther and splits the screen into tiles, for example 16×16 pixels, and then determines which lights influence each tile. Light-tile information is then passed to the shader and the g-buffer is only read once for all lights. Since this drastically reduces memory bandwidth, it improves performance. The following graph shows performance for the sponza scene (66,450 triangles and 38 draw calls) at 1024×768 with 32×32 tiles.
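
The light-binning step can be sketched like this (illustrative only; lightScreenSpaceBounds is a hypothetical helper returning a light’s projected bounding rectangle in pixels):

var tileSize = 32;
var tilesX = Math.ceil(canvasWidth / tileSize);
var tilesY = Math.ceil(canvasHeight / tileSize);
var tileLights = [];
for (var i = 0; i < tilesX * tilesY; i++) {
  tileLights.push([]);
}

lights.forEach(function (light, lightIndex) {
  var r = lightScreenSpaceBounds(light); // hypothetical helper
  var x0 = Math.max(0, Math.floor(r.minX / tileSize));
  var x1 = Math.min(tilesX - 1, Math.floor(r.maxX / tileSize));
  var y0 = Math.max(0, Math.floor(r.minY / tileSize));
  var y1 = Math.min(tilesY - 1, Math.floor(r.maxY / tileSize));
  for (var y = y0; y <= y1; y++) {
    for (var x = x0; x <= x1; x++) {
      tileLights[y * tilesX + x].push(lightIndex);
    }
  }
});

// tileLights is then packed (e.g. into a texture or uniforms) and read by the lighting shader.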

Tile size affects performance. Smaller tiles require more JavaScript overhead to create light-tile information, but less computation in the lighting shader. Larger tiles have the opposite tradeoff. Therefore, choosing a suitable tile size is important for performance. The figure below shows the relationship between tile size and performance with 100 lights.

A visualization of the number of lights in each tile is shown below. Black tiles have no lights intersecting them and white tiles have the most lights.


Shaded version of tile visualization.

Conclusion

WEBGL_draw_buffers is a useful extension for improving the performance of deferred shading in WebGL. Check out the live demo and our code on GitHub.

Acknowledgements

We implemented this project for the course CIS 565: GPU Programming and Architecture, which is part of the computer graphics program at the University of Pennsylvania. We thank Liam Boone for his support and Eric Haines and Morgan McGuire for reviewing this article.

View full post on Mozilla Hacks – the Web developer blog


WebGL & CreateJS for Firefox OS

This is a guest post by the developers at gskinner. Mozilla has been working with the CreateJS.com team at gskinner to bring new features to their open-source libraries and make sure they work great on Firefox OS.

Here at gskinner, it has always been our philosophy to contribute our solutions to the dev community; the last four years have been focused on web standards in HTML and JavaScript. Our CreateJS libraries provide approachable, modular, cross-browser-and-platform APIs for building rich interactive experiences on the open, modern web. We think they’re awesome.

For example, the CreateJS CDN typically receives hundreds of millions of impressions per month, and Adobe has selected CreateJS as their official framework for creating HTML5 documents in Flash Professional CC.

Firefox OS is a perfect fit for CreateJS content. It took us little effort to ensure that the latest libraries are supported and are valuable tools for app and game creation on the platform.

We’re thrilled to welcome Mozilla as an official sponsor of CreateJS, along with some exciting announcements about the libraries!

WebGL

As WebGL becomes more widely supported in browsers, we’re proud to announce that after working in collaboration with Mozilla, a shiny new WebGL renderer for EaselJS is now in early beta! Following research, internal discussions, and optimizations, we’ve managed to pump out a renderer that draws a subset of 2D content anywhere from 6x to 50x faster than is currently possible on the Canvas 2D Context. It’s fully supported in both the browser and in-app contexts of Firefox OS.

We thought about what we wanted to gain from a WebGL renderer, and narrowed it down to three key goals:

  1. Very fast performance for drawing sprites and bitmaps
  2. Consistency and integration with the existing EaselJS API
  3. The ability to fall back to Context2D rendering if WebGL is not available

Here’s what we came up with:

SpriteStage and SpriteContainer

Two new classes, SpriteStage and SpriteContainer, enforce restrictions on the display list to enable aggressively optimized rendering of bitmap content. This includes images, spritesheet animations, and bitmap text. SpriteStage is built to automatically make additional draw calls per frame as needed, avoiding any fixed maximum on the number of elements that can be included in a single draw call.

These new classes extend existing EaselJS classes (Stage and Container), so creating WebGL content is super simple if you’re familiar with EaselJS. Existing content using EaselJS can be WebGL-enabled with a few keystrokes.
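
As a rough sketch of that migration (assuming the beta SpriteStage constructor mirrors Stage’s, as the class names suggest; the names below are illustrative):

var canvas = document.getElementById('gameCanvas');

// Was: var stage = new createjs.Stage(canvas);
var stage = new createjs.SpriteStage(canvas);

// The display list API is unchanged; sprite and bitmap content now renders via WebGL.
stage.addChild(spriteContainer);

createjs.Ticker.addEventListener('tick', function () {
  stage.update();
});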

Layering Renderers

This approach allows WebGL and Context2D content to be layered on screen, and mouse/touch interactions can pass seamlessly between the layers. For example, an incredibly fast game engine using WebGL rendering can be displayed under a UI layer that leverages the more robust capabilities of the Context2D renderer. You can even swap assets between a WebGL and Context2D layer.

Finally, WebGL content is fully compatible with the existing Context2D renderer. On devices or browsers that don’t support WebGL, your content will automatically be rendered via canvas 2D.

While it took some work to squeeze every last iota of performance out of the new renderer, we’re really happy with this new approach. It allows developers to build incredibly high performance content for a wide range of devices, and also leverage the extremely rich existing API and toolchain surrounding CreateJS. Below, you’ll find a few demos and links that show off its capabilities.

Example: Bunnymark

A very popular (though limited) benchmark for web graphics is Bunnymark. This benchmark simply measures the maximum number of bouncing bunny bitmap sprites (try saying that 5 times fast) a renderer can support at 60fps.

Bunnymark

The following table compares Bunnymark scores using the classic Context2D renderer and the new WebGL renderer. Higher numbers are better.

Environment                                                        Context2D     WebGL   Change
2012 MacBook Pro, Firefox 26                                             900    46,000      51x
2012 MacBook Pro, Chrome 31                                            2,300    60,000      26x
2012 Win 7 laptop, IE11 (x64 NVIDIA GeForce GT 630M, 1 GB VRAM)        1,900     9,800       5x
Firefox OS 1.2.0.0-prerelease (early 1.2 device)                          45       270       6x
Nexus 5, Firefox 26                                                      225     4,400      20x
Nexus 5, Chrome 31                                                       230     4,800      21x

Since these numbers show maximum sprites at 60fps, the above numbers can increase significantly if a lower framerate is allowed. It’s worth noting that the only Firefox OS device we have in house is an early Firefox OS 1.2 device (with a relatively low-powered GPU), yet we’re still seeing significant performance gains.

Example: Sparkles Benchmark

This very simple demo was made to test the limits of how many particles could be put on screen while pushing the browser to 24fps.

Sparkles

Example: Planetary Gary

We often use the Planetary Gary game demo as a test bed for new capabilities in the CreateJS libraries. In this case, we retrofitted the existing game to use the new SpriteStage and SpriteContainer classes for rendering the game experience in WebGL.

Planetary Gary

This was surprisingly easy to do, requiring only three lines of changed or added code, and demonstrates the ease of use and consistency of the new APIs. It’s a particularly good example because it shows how the robust feature set of the Context2D renderer can be used for user interface elements (e.g. the start screen) in cooperation with the superior performance of the WebGL renderer (e.g. the game).

Even better, the game art is packaged as vector graphics, which are drawn to sprite sheets via the Context2D renderer at run time (using EaselJS’s SpriteSheetBuilder), then passed to the WebGL renderer. This allows for completely scalable graphics with minimal file size (~85kb over the wire) and incredible performance!

Roadmap

We’ve posted a public preview of the new WebGL renderer on GitHub to allow the community to take it for a test drive and provide feedback. Soon it will be included in the next major release.

Follow @createjs and @gskinner on twitter to stay up to date with the latest news and let us know what you think — thanks for reading!

View full post on Mozilla Hacks – the Web developer blog


Live editing WebGL shaders with Firefox Developer Tools

If you’ve seen Epic Games’ HTML5 port of ‘Epic Citadel’, you have no doubt been impressed by the amazing performance and level of detail. A lot of the code that creates the cool visual effects you see on screen is written as shaders linked together into programs – specialized programs that are evaluated directly on the GPU to provide high-performance, real-time visual effects.

Writing vertex and fragment shaders is an essential part of developing 3D on the web, even if you are using a library; in fact, the Epic Citadel demo includes over 200 shader programs. This is because most rendering is customised and optimised to fit a game’s needs. Shader development is currently awkward for a few reasons:

  • Seeing your changes requires a refresh
  • Some shaders are applied under very specific conditions

Here is a screencast that shows how to manipulate shader code using a relatively simple WebGL demo:

Starting in Firefox 27, we’ve introduced a new tool called the ‘Shader Editor’ that makes working with shader programs much simpler: the editor lists all shader programs running in the WebGL context, and you can live-edit shaders and see immediate results without interrupting any animations or state. Additionally, editing shaders should not impact WebGL performance.

Enabling the Shader Editor

The Shader Editor is not shown by default, because not all the web pages out there contain WebGL, but you can easily enable it:

  1. Open the Toolbox by pressing either F12 or Ctrl/Cmd + Shift + I.
  2. Click on the ‘gear’ icon near the top edge of the Toolbox to open the ‘Toolbox Options’.
  3. On the left-hand side under ‘Default Firefox Developer Tools’ make sure ‘Shader Editor’ is checked. You should immediately see a new ‘Shader Editor’ Tool tab.

Using the Shader Editor

To see the Shader Editor in action, just go to a WebGL demo such as this one and open the toolbox. When you click on the shader editor tab, you’ll see a reload button you will need to click in order to get the editor attached to the WebGL context. Once you’ve done this you’ll see the Shader Editor UI:

The WebGL Shader Editor

  • On the left is a list of programs; each program has a corresponding vertex and fragment shader, whose source is displayed and syntax-highlighted in the editors on the right.
  • The shader type is displayed underneath each editor.
  • Hovering a program highlights the geometry drawn by its corresponding shaders in red – this is useful for finding the right program to work on.
  • Clicking on the eyeball right next to each program hides the rendered geometry (useful in the likely case that an author wants to focus on some geometry but not the rest, or to hide overlapping geometry).
  • The tool is responsive when docked to the side.

Editing Shader Programs

The first thing you’ll notice about shader program code is that it is not JavaScript. For more information on how shader programs work, I highly recommend you start with the WebGL demo on the Khronos wiki and/or Paul Lewis’ excellent HTML5 Rocks post. There are also some great long-standing tutorials on the Learning WebGL blog. The Shader Editor gives you direct access to the programs so you can play around with how they work:

  • Editing code in any of the editors will compile the source and apply it as soon as the user stops typing;
  • If an error was made in the code, the rendering won’t be affected, but an error will be displayed in the editor, highlighting the faulty line of code; hovering the icon gutter will display a tooltip describing the error.

Errors in shaders

Learn more about the Shader Editor on the Mozilla Developer Network.

Here is a second screencast showing how you could directly edit the shader programs in the Epic Citadel demo:

View full post on Mozilla Hacks – the Web developer blog


The concepts of WebGL

This post is not going to be yet another WebGL tutorial: there already are enough great ones (we list some at the end).

We are just going to introduce the concepts of WebGL, which are basically just the concepts of any general, low-level graphics API (such as OpenGL or Direct3D), to a target audience of Web developers.

What is WebGL?

WebGL is a Web API that allows low-level graphics programming. “Low-level” means that WebGL commands are expressed in terms that map relatively directly to how a GPU (graphics processing unit, i.e. hardware) actually works. That means that WebGL allows you to really tap into the feature set and power of graphics hardware. What native games do with OpenGL or Direct3D, you can probably do with WebGL too.

WebGL is so low-level that it’s not even a “3D” graphics API, properly speaking. Just like your graphics hardware doesn’t really care whether you are doing 2D or 3D graphics, neither does WebGL: 2D and 3D are just two possible usage patterns. When OpenGL 1.0 came out in 1992, it was specifically a 3D API, aiming to expose the features of the 3D graphics hardware of that era. But as graphics hardware evolved towards being more generic and programmable, so did OpenGL. Eventually, OpenGL became so generic that 2D and 3D would be just two possible use cases, while still offering great performance. That was OpenGL 2.0, and WebGL is closely modeled after it.

That’s what we mean when we say that WebGL is a low-level graphics API rather than a 3D API specifically. That is the subject of this article; and that’s what makes WebGL so valuable to learn even if you don’t plan to use it directly. Learning WebGL means learning a little bit of how graphics hardware works. It can help developing an intuition of what’s going to be fast or slow in any graphics API.

The WebGL context and framebuffer

Before we can properly explain anything about the WebGL API, we have to introduce some basic concepts. WebGL is a rendering context for the HTML Canvas element. You start by getting a WebGL context for your canvas:

var gl;
try {
  gl = canvas.getContext("experimental-webgl");
} catch(e) {}
if (!gl) {
  // WebGL is not available in this browser/device
}

From there, you perform your rendering by calling WebGL API functions on the gl context object obtained above. WebGL is never single-buffered, meaning that the image that you are currently rendering is never the one that is currently displayed in the Canvas element. This ensures that half-rendered frames never show up in the browser’s window. The image being rendered is called the WebGL framebuffer or backbuffer. Talking of framebuffers is made more complicated by the fact that WebGL also allows additional off-screen framebuffers, but let’s ignore that in this article. The image currently being displayed is called the frontbuffer. Of course, the contents of the backbuffer will at some point be copied into the frontbuffer — otherwise WebGL drawing would have no user-visible effect!

But that operation is taken care of automatically by the browser, and in fact, the WebGL programmer has no explicit access to the frontbuffer whatsoever. The key rule here is that the browser may copy the backbuffer into the frontbuffer at any time except during the execution of JavaScript. What this means is that you must perform the entire WebGL rendering of a frame within a single JavaScript callback. As long as you do that, correct rendering is ensured and the browser takes care of the very complex details of multi-buffered compositing for you. You should, in addition, let your WebGL-rendering callback be a requestAnimationFrame callback: if you do so, the browser will also take care of the complex details of animation scheduling for you.
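
In practice, that means structuring your rendering like this minimal sketch:

function drawFrame(timestamp) {
  // All WebGL commands for this frame happen inside this single callback.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // ... draw calls go here ...

  // Then request the next frame; the browser handles compositing and scheduling.
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);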

WebGL as a general, low-level graphics API

We haven’t yet described how WebGL is a low-level graphics API where 2D and 3D are just two possible usage patterns. In fact, the very idea that such a general graphics API may exist is non-trivial: it took the industry many years to arrive to such APIs.

WebGL lets you draw points, line segments, or triangles. Triangles are of course what’s used most of the time, so we will focus entirely on them in the rest of this article.

WebGL’s triangle rendering is very general: the application provides a callback, called the pixel shader or fragment shader, that will be called on each pixel of the triangle, and will determine the color in which it should be drawn.

So suppose that you’re coding an old-school 2D game. All you want to do is draw rectangular bitmap images. As WebGL can only draw triangles (more on this below), you decompose your rectangle into two triangles as follows,

A rectangle decomposed as two triangles.

and your fragment shader, i.e. the program that determines the color of each pixel, is very simple: it will just read one pixel from the bitmap image, and use it as the color for the pixel currently being rendered.

Suppose now that you’re coding a 3D game. You have tesselated your 3D shapes into triangles. Why triangles? Triangles are the most popular 3D drawing primitive because any 3 points in 3D space are the vertices of a triangle. By contrast, you cannot just take any 4 points in 3D space to define a quadrilateral — they would typically fail to lie exactly in the same plane. That’s why WebGL doesn’t care for any other kind of polygons besides triangles.

So your 3D game just needs to be able to render 3D triangles. In 3D, it is a little bit tricky to transform 3D coordinates into actual canvas coordinates — i.e. to determine where in the canvas a given 3D object should end up being drawn. There is no one-size-fits-all formula there: for example, you could want to render fancy underwater or glass refraction effects, that would inevitably require a custom computation for each vertex. So WebGL allows you to provide your own callback, called the vertex shader, that will be called for each vertex of each triangle you will render, and will determine the canvas coordinates at which it should be drawn.

One would naturally expect these canvas coordinates to be 2D coordinates, as the canvas is a 2D surface; but they are actually 3D coordinates, where the Z coordinate is used for depth testing purposes. Two points differing only by their Z coordinate correspond to the same pixel on screen, and the Z coordinates are used to determine which one hides the other one. All three axes go from -1.0 to +1.0. It’s important to understand that this is the only coordinate system natively understood by WebGL: any other coordinate system is only understood by your own vertex shader, where you implement the transformation to canvas coordinates.

The WebGL canvas coordinate system.

Once the canvas coordinates of your 3D triangles are known (thanks to your vertex shader), your triangles will be painted, like in the above-discussed 2D example, by your fragment shader. In the case of a 3D game though, your fragment shader will typically be more intricate than in a 2D game, as the effective pixel colors in a 3D game are not as easily determined by static data. Various effects, such as lighting, may play a role in the effective color that a pixel will have on screen. In WebGL, you have to implement all these effects yourself. The good news is that you can: as said above, WebGL lets you specify your own callback, the fragment shader, that determines the effective color of each pixel.

Thus we see how WebGL is a general enough API to encompass the needs of both 2D and 3D applications. By letting you specify arbitrary vertex shaders, it allows implementing arbitrary coordinate transformations, including the complex ones that 3D games need to perform. By accepting arbitrary fragment shaders, it allows implementing arbitrary pixel color computations, including subtle lighting effects as found in 3D games. But the WebGL API isn’t specific to 3D graphics and can be used to implement almost any kind of realtime 2D or 3D graphics — it scales all the way down to 1980s era monochrome bitmap or wireframe games, if that’s what you want. The only thing that’s out of reach of WebGL is the most intensive rendering techniques that require tapping into recently added features of high-end graphics hardware. Even so, the plan is to keep advancing the WebGL feature set as is deemed appropriate to keep the right balance of portability vs features.

The WebGL rendering pipeline

So far we’ve discussed some aspects of how WebGL works, but mostly incidentally. Fortunately, it doesn’t take much more to explain in a systematic way how WebGL rendering proceeds.

The key metaphor here is that of a pipeline. It’s important to understand it because it’s a universal feature of all current graphics hardware, and understanding it will help you instinctively write code that is more hardware-friendly, and thus, runs faster.

GPUs are massively parallel processors, consisting of a large number of computation units designed to work in parallel with each other, and in parallel with the CPU. That is true even in mobile devices. With that in mind, graphics APIs such as WebGL are designed to be inherently friendly to such parallel architectures. On typical work loads, and when correctly used, WebGL allows the GPU to execute graphics commands in parallel with any CPU-side work, i.e. the GPU and the CPU should not have to wait for each other; and WebGL allows the GPU to max out its parallel processing power. It is in order to allow running on the GPU that these shaders are written in a dedicated GPU-friendly language rather than in JavaScript. It is in order to allow the GPU to run many shaders simultaneously that shaders are just callbacks handling one vertex or one pixel each — so that the GPU is free to run shaders on whichever GPU execution unit and in whichever order it pleases.

The following diagram summarizes the WebGL rendering pipeline:

The WebGL rendering pipeline

The application sets up its vertex shader and fragment shader, and gives WebGL any data that these shaders will need to read from: vertex data describing the triangles to be drawn, bitmap data (called “textures”) that will be used by the fragment shader. Once this is set up, the rendering starts by executing the vertex shader for each vertex, which determines the canvas coordinates of triangles; the resulting triangles are then rasterized, which means that the list of pixels to be painted is determined; the fragment shader is then executed for each pixel, determining its color; finally, some framebuffer operation determines how this computed color affects the final framebuffer’s pixel color at this location (this final stage is where effects such as depth testing and transparency are implemented).

GPU-side memory vs main memory

Some GPUs, especially on desktop machines, use their own memory that’s separate from main memory. Other GPUs share the same memory as the rest of the system. As a WebGL developer, you can’t know what kind of system you’re running on. But that doesn’t matter, because WebGL forces you to think in terms of dedicated GPU memory.

All that matters from a practical perspective is that:

  • WebGL rendering data must first be uploaded to special WebGL data structures. Uploading means copying data from general memory to WebGL-specific memory. These special WebGL data structures are called WebGL textures (bitmap images) and WebGL buffers (generic byte arrays).
  • Once that data is uploaded, rendering is really fast.
  • But uploading that data is generally slow.

In other words, think of the GPU as a really fast machine, but one that’s really far away. As long as that machine can operate independently, it’s very efficient. But communicating with it from the outside takes very long. So you want to do most of the communication ahead of time, so that most of the rendering can happen independently and fast.

Not all GPUs are actually so isolated from the rest of the system — but WebGL forces you to think in these terms so that your code will run efficiently no matter what particular GPU architecture a given client uses. WebGL data structures abstract the possibility of dedicated GPU memory.
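
Concretely, here is a minimal sketch of the “upload once, render many times” pattern for a WebGL buffer:

// One-time setup: copy vertex data into a WebGL buffer (the slow part).
var positions = new Float32Array([
  -1, -1,   1, -1,   1,  1,   // triangle 1
  -1, -1,   1,  1,  -1,  1    // triangle 2
]);
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// Every frame: render from the already-uploaded buffer (the fast part).
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
// gl.vertexAttribPointer(...) and gl.drawArrays(...) as usual.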

Some things that make graphics slow

Finally, we can draw from what was said above a few general ideas about what can make graphics slow. This is by no means an exhaustive list, but it does cover some of the most usual causes of slowness. The idea is that such knowledge is useful to any programmer ever touching graphics code — regardless of whether they use WebGL. In this sense, learning some concepts around WebGL is useful for much more than just WebGL programming.

Using the CPU for graphics is slow

There is a reason why GPUs are found in all current client systems, and why they are so different from CPUs. To do fast graphics, you really need the parallel processing power of the GPU. Unfortunately, automatically using the GPU in a browser engine is a difficult task. Browser vendors do their best to use the GPU where appropriate, but it’s a hard problem. By using WebGL, you take ownership of this problem for your content.

Having the GPU and the CPU wait for each other is slow

The GPU is designed to be able to run in parallel with the CPU, independently. Inadvertently causing the GPU and CPU to wait for each other is a common cause of slowness. A typical example is reading back the contents of a WebGL framebuffer (the WebGL readPixels function). This may require the CPU to wait for the GPU to finish any queued rendering, and may then also require the GPU to wait for the CPU to have received the data. So as far as you can, think of the WebGL framebuffer as a write-only medium.
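
For example, this kind of readback forces exactly that round trip (shown only to illustrate what to avoid in per-frame code):

// The CPU waits for queued GPU work to finish, then waits again for the transfer.
var pixels = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);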

Sending data to the GPU may be slow

As mentioned above, GPU memory is abstracted by WebGL data structures such as textures. Such data is best uploaded once to WebGL and then used many times. Uploading new data too frequently is a typical cause of slowness: the uploading is slow by itself, and if you upload data right before rendering with it, the GPU has to wait for the data before it can proceed with rendering — so you’re effectively gating your rendering speed on slow memory transfers.

Small rendering operations are slow

GPUs are intended to be used to draw large batches of triangles at once. If you have 10,000 triangles to draw, doing it in one single operation (as WebGL allows) will be much faster than doing 10,000 separate draw operations of one triangle each. Think of a GPU as a very fast machine with a very long warm-up time. Better warm up once and do a large batch of work, than pay for the warm-up cost many times. Organizing your rendering into large batches does require some thinking, but it’s worth it.
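
Sketched with the numbers from the paragraph above:

// Slow: 10,000 separate draw operations of one triangle each.
for (var i = 0; i < 10000; i++) {
  gl.drawArrays(gl.TRIANGLES, i * 3, 3);
}

// Fast: the same 10,000 triangles in a single draw call.
gl.drawArrays(gl.TRIANGLES, 0, 30000);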

Where to learn WebGL

We intentionally didn’t write a tutorial here because there already exist so many good ones:

Allow me to also mention that talk I gave, as it has some particularly minimal examples.

View full post on Mozilla Hacks – the Web developer blog


Interview: Paul Brunt, WebGL Dev Derby winner

Paul Brunt won the WebGL Dev Derby with SnappyTree, his incredibly powerful (and even a little addicting) 3D tree designer. SnappyTree provides a wonderful example of what we can do with the Web today — it even has an export function for using your trees in native applications (move over Blender), not that we will need native 3D applications if progress like this continues.

Recently, I had the opportunity to learn more about Paul and his work. In our interview, Paul shared thoughts on the past, present, and future of web development and gave advice relevant to developers of all levels of experience.

The Interview

How did you become interested in web development?

I first became interested in web development in the late ’90s when I went on the internet for the first time. After a little bit of browsing around the web I thought I’d have a go at making a site myself. With the free space provided by the ISP and with a little assistance from Frontpage I managed to put together something you might tentatively call a website. Unfortunately that first site was short lived, the result of an accidental deletion. But, the seed had been planted so I built another, then another, then somewhere along the way I ended up doing it for a living.

Tell us about developing SnappyTree. Was anything especially exciting, challenging, or rewarding?

It had taken a few days of coding before I was in the position to start drawing anything to the screen. So, the most exciting part was seeing something that resembled a tree appear in the browser for the first time; although it wasn’t much to look at, and there were very obviously issues.

Getting skinning working correctly was extremely challenging; there were many horrible twisted-branch iterations while trying to get it right, but I’m pretty happy with the end result.

Can you tell us a little about how SnappyTree works?

Snappy Tree works by taking an initial branch (the trunk) which then splits into two new branches. The direction of these new branches is determined by several user-configurable factors: symmetry, droopiness, etc. This process is repeated N times to produce the basic tree structure.

After the basic tree has been constructed, a skin is generated around the branches before finally adding planes at the ends of the branches, which are used for leaves and twigs. The final mesh data is then piped into WebGL for rendering and used to generate the Collada or Wavefront files for export.

What makes the web an exciting platform for you?

Increasingly it seems to be getting quicker and easier to develop for the web than any native platform, so I think the current rate of progress is the most exciting part. With so many new technologies emerging seeing how developers use them is always fun and often downright awe inspiring.

What up-and-coming web technologies are you most excited about?

I think the most exciting thing emerging right now is WebRTC. I’m really looking forward to seeing what uses developers can come up with. I can see a lot of potential outside of the obvious and it’s going to be a lot of fun discovering interesting uses for it.

If you could change one thing about the web, what would it be?

There are lots of little things I’d love to tweak in CSS and HTML; but, if it’s limited to just one thing I think I would change the “www.” convention. It’s difficult to pronounce, takes far too long to say and just sounds horrible.

What advice would you give to aspiring web developers?

Jump in the deep end and be ambitious. The best way to learn a new technology is to start a project around that technology. Even if you haven’t got a clue what you’re doing when you start you certainly will by the end.

View full post on Mozilla Hacks – the Web developer blog


Sencha Labs releases open source framework for WebGL development

Sencha Labs has announced the availability of a new open source framework for WebGL development. The framework, which is called PhiloGL, makes it easier for developers to adopt WebGL and integrate its functionality in Web applications. The framework is distributed under the permissive MIT license. WebGL is an emerging standard that allows developers to seamlessly integrate 3D content in Web …

View full post on web development – Yahoo! News Search Results
