WebGL and OpenGL
WebGL is a DOM API for creating 3D graphics in a Web browser. Based on OpenGL ES 2.0, WebGL uses the OpenGL shading language, GLSL, and offers the familiarity of the standard OpenGL API. In addition, because it is fully integrated into the browser, a WebGL application can take advantage of the JavaScript infrastructure and Document Object Model (DOM) fundamental to any HTML document. WebGL is essentially another rendering context on the <canvas> element, so it can be cleanly combined with HTML and other web content that is layered on top of or underneath the 3D content.
Integrating 3D Graphics with the DOM
The code examples discussed here illustrate some of the basic advantages of WebGL's integration with the DOM:
- Image loading. Although numerous image loading facilities have been developed for OpenGL, no current standard exists. A WebGL application can simply use the browser's image loading facilities directly, as shown below in the texture loading example (Loading Images).
- Event handling. WebGL uses the standard browser event handling mechanism. A WebGL application can set a callback function on any JavaScript event. See Handling Events for code that passes mouse events to a camera controller.
- Seamless compositing of web content. WebGL uses the standard <canvas> element, which is automatically integrated with the other elements on the web page; a minimal sketch of obtaining a WebGL context from a <canvas> element follows this list.
- Automatic memory management. In OpenGL, memory is explicitly allocated and deallocated. In WebGL, memory management is handled automatically.
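The examples that follow assume a WebGL rendering context stored in a variable named gl. A minimal sketch of how such a context might be obtained (the element id, canvas size, and error handling here are illustrative assumptions, not part of the original examples) looks roughly like this:
<canvas id="scene" width="640" height="480"></canvas>
<script>
// The canvas is an ordinary HTML element and composites with the rest of the page.
var canvas = document.getElementById("scene");
// Ask the canvas for a WebGL rendering context; some older browsers only
// answer to the "experimental-webgl" name.
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
if (!gl) {
  // WebGL is unavailable; the page can fall back to other content here.
  alert("WebGL is not supported in this browser.");
}
</script>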
Loading Images
The image-texture-test example, shown below, illustrates how simple it is for a WebGL program to use the browser's image loading capabilities. The loadTexture() function contains all the code necessary to load an image from the Web and add it to a 3D scene:
// Loads a texture from the absolute or relative URL "src".
// Returns a WebGLTexture object.
// The texture is downloaded in the background using the browser's
// built-in image handling. Upon completion of the download, our
// onload event handler will be called, which uploads the image into
// the WebGLTexture.
function loadTexture(src) {
  // Create and initialize the WebGLTexture object.
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  // Create a DOM image object.
  var image = new Image();
  // Set up the onload handler for the image, which will be called by
  // the browser at some point in the future once the image has
  // finished downloading.
  image.onload = function() {
    // This code is not run immediately, but at some point in the
    // future, so we need to re-bind the texture in order to upload
    // the image. Note that we use the JavaScript language feature of
    // closures to refer to the "texture" and "image" variables in the
    // containing function.
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // Flip the image's Y axis to match WebGL's texture coordinate space.
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    checkGLError();
    draw();
  };
  // Start downloading the image by setting its source.
  image.src = src;
  // Return the WebGLTexture object immediately.
  return texture;
}
Let's look at the loadTexture() function more closely. First, notice that the WebGL calls are almost identical to their OpenGL counterparts, with a few exceptions. In particular, gl.createTexture() uses the singular form of "Texture" (not "Textures"), and it uses the verb "create" in place of the OpenGL "Gen" (so glGenTextures() in OpenGL becomes gl.createTexture() in WebGL). The OpenGL call glGenTextures() simply generates a numeric texture id, whereas the WebGL call gl.createTexture() creates a WebGLTexture object to wrap that id.
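As a rough illustration of this difference (a sketch assuming the same gl context used above, not code from image-texture-test), the handle returned by gl.createTexture() is an opaque WebGLTexture object rather than an integer id, and it is released with gl.deleteTexture():
// gl.createTexture() returns a WebGLTexture object, not a numeric id.
var texture = gl.createTexture();
console.log(texture instanceof WebGLTexture);  // true
// gl.isTexture() reports true only once the object has been bound at least once.
console.log(gl.isTexture(texture));            // false
gl.bindTexture(gl.TEXTURE_2D, texture);
console.log(gl.isTexture(texture));            // true
// Analogous to glDeleteTextures() in OpenGL, but takes the object itself.
gl.deleteTexture(texture);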
The gl.createTexture() call allocates a texture object but does not initialize it yet. The gl.bindTexture() call binds this texture object to the TEXTURE_2D binding point. (This call is analogous to OpenGL, where a texture object is bound to either a TEXTURE_2D or a TEXTURE_CUBE_MAP binding point.) As in OpenGL, the gl.texParameteri() call is used to set the minification and magnification filters (in this case, to gl.LINEAR).
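The linear filters chosen here work for an image of any size. If mipmapping is desired, WebGL (like OpenGL ES 2.0) requires the texture to have power-of-two dimensions; a hedged sketch of that alternative setup, again assuming the same gl context and texture object, might look like this:
// Alternative parameters for a power-of-two image: mipmapped minification
// and texture coordinates clamped at the edges.
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// generateMipmap() can only run after image data has been uploaded with
// texImage2D(), for example inside the onload handler shown above.
gl.generateMipmap(gl.TEXTURE_2D);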
The following code in loadTexture() starts downloading the image located at src. This image is loaded in the background and, in this case, will be used in the 3D context rather than directly on the Web page. The application uses the browser's image.onload event to do the WebGL-specific work associated with the image loading:
image.onload = function() {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  checkGLError();
  draw();
};
image.src = src;
When the image has finished loading, the onload event handler is called, which uploads the image into the WebGLTexture object. Next, the application's draw() function is called, which forces a redraw of the scene and causes this texture image to "pop" into place.
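To tie the pieces together, a hypothetical caller might look roughly like the following. The texture URL, the program variable, and the uSampler uniform name are assumptions for illustration; loadTexture(), checkGLError(), and draw() are the functions from the example above:
// Kick off the download immediately; the WebGLTexture object is usable right away.
var brickTexture = loadTexture("textures/brick.jpg");  // assumed URL

function draw() {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // Bind the texture each frame. Until the download completes the texture has
  // no image data, so WebGL samples it as opaque black.
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, brickTexture);
  // "program" and "uSampler" are assumed names for the shader program and its
  // sampler uniform.
  gl.uniform1i(gl.getUniformLocation(program, "uSampler"), 0);
  // ... issue the actual draw calls here, e.g. gl.drawElements(...).
}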
Handling Events
The shiny-teapot example illustrates another benefit of WebGL: the WebGL JavaScript program flow is completely integrated with the browser's event handling system. In this example, the camera controller uses standard browser events, so it works in any Web browser on any platform. It tracks the x and y coordinates of incoming mouse events and updates the camera coordinates used to render the scene. The shiny-teapot example, shown below, also illustrates compositing 3D graphics over standard HTML content.
Here is the CameraController code, which handles the standard onmousedown, onmouseup, and onmousemove events with callbacks that automatically update the corresponding camera controller variables:
// A simple camera controller which uses an HTML element as the event
// source for constructing a view matrix. Assign an "onchange"
// function to the controller as follows to receive the updated X and
// Y angles for the camera:
//
// var controller = new CameraController(canvas);
// controller.onchange = function(xRot, yRot) { ... };
//
// The view matrix is computed elsewhere.
function CameraController(element) {
  var controller = this;
  this.onchange = null;
  this.xRot = 0;
  this.yRot = 0;
  this.scaleFactor = 3.0;
  this.dragging = false;
  this.curX = 0;
  this.curY = 0;
  // Assign a mouse down handler to the HTML element.
  element.onmousedown = function(ev) {
    controller.dragging = true;
    controller.curX = ev.clientX;
    controller.curY = ev.clientY;
  };
  // Assign a mouse up handler to the HTML element.
  element.onmouseup = function(ev) {
    controller.dragging = false;
  };
  // Assign a mouse move handler to the HTML element.
  element.onmousemove = function(ev) {
    if (controller.dragging) {
      // Determine how far we have moved since the last mouse move
      // event.
      var curX = ev.clientX;
      var curY = ev.clientY;
      var deltaX = (controller.curX - curX) / controller.scaleFactor;
      var deltaY = (controller.curY - curY) / controller.scaleFactor;
      controller.curX = curX;
      controller.curY = curY;
      // Update the X and Y rotation angles based on the mouse motion.
      controller.yRot = (controller.yRot + deltaX) % 360;
      controller.xRot = (controller.xRot + deltaY);
      // Clamp the X rotation to prevent the camera from going upside
      // down.
      if (controller.xRot < -90) {
        controller.xRot = -90;
      } else if (controller.xRot > 90) {
        controller.xRot = 90;
      }
      // Send the onchange event to any listener.
      if (controller.onchange != null) {
        controller.onchange(controller.xRot, controller.yRot);
      }
    }
  };
}
In WebGL, as in OpenGL, the application must explicitly compute the camera, view, and projection matrices. See the code in shiny-teapot as an example.
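As a rough sketch of what that might involve (not the actual shiny-teapot code), the controller's angles could drive a redraw, and the projection matrix could be computed by hand; the canvas, program, and uniform names below, as well as the field-of-view and clip-plane values, are assumptions for illustration:
// Hypothetical wiring: redraw whenever the CameraController reports new angles.
var controller = new CameraController(canvas);
controller.onchange = function(xRot, yRot) {
  // A real application would rebuild its view matrix here from xRot and yRot
  // (for example, a rotation about X, a rotation about Y, and a translation
  // away from the object) before redrawing.
  draw();
};

// Column-major 4x4 perspective projection matrix, in the layout expected by
// gl.uniformMatrix4fv(). fovDegrees is the vertical field of view.
function perspective(fovDegrees, aspect, near, far) {
  var f = 1.0 / Math.tan(fovDegrees * Math.PI / 360.0);
  var rangeInv = 1.0 / (near - far);
  return new Float32Array([
    f / aspect, 0, 0,                          0,
    0,          f, 0,                          0,
    0,          0, (near + far) * rangeInv,   -1,
    0,          0, 2 * near * far * rangeInv,  0
  ]);
}

// Upload the projection matrix before drawing.
var projectionLoc = gl.getUniformLocation(program, "projectionMatrix");
gl.uniformMatrix4fv(projectionLoc, false,
                    perspective(45, canvas.width / canvas.height, 0.1, 100.0));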