It’s been some time since I started coding to provide CAAT with transparent, seamless WebGL integration.
A 2D rendering engine is not one of the best scenarios for 3D acceleration, so here I’m showing some of the techniques I had to develop to provide the best acceleration possible.
It should be noted that CAAT will use hardware acceleration if available, and will transparently fall back to canvas if not. Here you can find a mathmayhem game implementation which tries to use WebGL as its renderer. The game needed no changes at all: you simply tell CAAT to use WebGL by calling the Director’s initializeGL method, and all the other development issues are covered by CAAT itself.
First of all, some notes about what WebGL needs. However thorough we developers are about our class hierarchies and animation actor collections, the hardware acceleration layer is neither aware of nor interested in our data organization. WebGL is a rasterizer (a very efficient one indeed), so to keep it running at high FPS rates I had to switch from object-oriented to data-oriented development. In any professional 3D engine it is a must to keep shader switch operations to a minimum, that is, to sort your scene assets by shader and by geometry. Achieving this objective has been priority number one in CAAT.
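The sorting idea can be sketched in a few lines. This is a hypothetical illustration, not CAAT’s actual code: actors carry a shader id and a texture page id (both invented names here), and sorting the render list by those keys makes actors sharing GPU state get drawn consecutively, so state switches happen only at group boundaries.

```javascript
// Hypothetical sketch: sort a flat render list so actors sharing a shader
// and a texture page are drawn consecutively, minimizing state switches.
// The fields shaderId and texturePageId are illustrative, not CAAT's API.
function sortForRendering(actors) {
  return actors.slice().sort(function (a, b) {
    if (a.shaderId !== b.shaderId) return a.shaderId - b.shaderId;
    return a.texturePageId - b.texturePageId;
  });
}

var actors = [
  { name: 'a', shaderId: 1, texturePageId: 0 },
  { name: 'b', shaderId: 0, texturePageId: 1 },
  { name: 'c', shaderId: 0, texturePageId: 0 }
];
var sorted = sortForRendering(actors);
// sorted order by name: c, b, a
```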
One of the most obvious ways to lose performance with WebGL rendering is using tons of textures applied to different scene actors. Instrumenting your shaders to change from texture to texture will immediately drop your scene’s FPS. Besides, not every texture size is suitable for WebGL. Firefox 4 complains about non-power-of-two texture sizes (which are needed for mipmapping, by the way) and Chrome drops performance when such sizes are not used (Mac OS X, Chrome 10 stable).
CAAT’s solution is to package textures automatically into GL textures of the programmer’s desired size (2048×2048 by default). The system will try to pack textures into different ‘texture pages’, transparently keeping track of each texture element’s position in GL texture space, so that when selecting a texture for a sprite the programmer has nothing to take into account. This process absolutely minimizes texture switching, reaching its best efficiency when only one texture page is created (a 2048×2048 texture space should be enough). The Mathmayhem game, despite not having its images optimized in size at all, fit perfectly in one single texture page.
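A minimal sketch of what such a texture page could look like, assuming a simple shelf-packing strategy (the names and the algorithm here are illustrative, not CAAT’s actual implementation): images are packed left to right on horizontal shelves inside a fixed-size page, and each image’s placement is recorded so a sprite can later look up its rectangle in GL texture space.

```javascript
// Illustrative 'texture page' packer (hypothetical, not CAAT's internals).
function TexturePage(width, height) {
  this.width = width;
  this.height = height;
  this.x = 0;            // cursor inside the current shelf
  this.y = 0;            // top of the current shelf
  this.shelfHeight = 0;  // tallest image on the current shelf
  this.placements = {};  // image id -> {x, y, w, h} in page space
}

TexturePage.prototype.pack = function (id, w, h) {
  if (this.x + w > this.width) {   // shelf full: open a new one below
    this.y += this.shelfHeight;
    this.x = 0;
    this.shelfHeight = 0;
  }
  if (this.y + h > this.height) return false; // page full
  this.placements[id] = { x: this.x, y: this.y, w: w, h: h };
  this.x += w;
  this.shelfHeight = Math.max(this.shelfHeight, h);
  return true;
};

var page = new TexturePage(2048, 2048);
page.pack('hero', 300, 200);
page.pack('enemy', 400, 250);
// page.placements.enemy → { x: 300, y: 0, w: 400, h: 250 }
```

In a real implementation each placement would be converted to UV coordinates by dividing by the page size, and all the source images would be drawn into one 2048×2048 GL texture at those offsets.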
A 2D rendering engine must perform back-to-front pixel-overwriting rendering, so the Z depth test must be disabled. Blending must be enabled so that transparent/translucent images can be properly rendered. One oddity about blending is that the background color of the DOM element the canvas is embedded in will make the blending function saturate toward that color. So if you just have a page with the canvas and the body is white, your blending will show wrongly. You should embed the canvas in a div and set that div’s background color to black (#000); otherwise, your blending won’t show as expected.
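Expressed as raw WebGL state (assuming `gl` is a WebGLRenderingContext obtained from the canvas), that setup is just a small configuration fragment:

```javascript
// Depth testing off: a 2D engine paints back to front, overwriting pixels.
gl.disable(gl.DEPTH_TEST);

// Blending on, standard alpha blending for transparent/translucent images.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
```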
The bad news about a 2D rendering engine is the fact that every single sprite on screen can have its own transformation. This means a huge amount of processor time devoted to matrix computations. CAAT has translation, rotation around a rotation pivot, and scaling around a scale pivot. That means seven matrix multiplications for each Actor on a Scene. In fact, these matrix operations MUST be performed regardless of the scene’s complexity. In a scenario where you have, say, a 100K-polygon fixed model, only one matrix must be calculated and uploaded to the graphics card via uniform parameters. But in a Scene with 1000 different actors (only 2000 triangles), 1000 matrices must be calculated and uploaded to the graphics card. That will undoubtedly kill your frame rate, so CAAT’s approach is different.
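To make the seven-matrix chain concrete, here is a sketch (hypothetical function names, not CAAT’s actual code) of composing translation, rotation around a rotation pivot, and scale around a scale pivot as 3×3 matrices:

```javascript
// 3x3 row-major matrix helpers for 2D affine transforms.
function mul(a, b) {
  var r = new Array(9);
  for (var i = 0; i < 3; i++)
    for (var j = 0; j < 3; j++)
      r[i * 3 + j] = a[i * 3] * b[j] + a[i * 3 + 1] * b[3 + j] + a[i * 3 + 2] * b[6 + j];
  return r;
}
function translate(tx, ty) { return [1, 0, tx, 0, 1, ty, 0, 0, 1]; }
function rotate(a) {
  var c = Math.cos(a), s = Math.sin(a);
  return [c, -s, 0, s, c, 0, 0, 0, 1];
}
function scale(sx, sy) { return [sx, 0, 0, 0, sy, 0, 0, 0, 1]; }

// Compose the seven matrices the article mentions for one actor.
function modelView(x, y, angle, rpx, rpy, sx, sy, spx, spy) {
  var m = translate(x, y);              // 1. actor position
  m = mul(m, translate(rpx, rpy));      // 2. move to rotation pivot
  m = mul(m, rotate(angle));            // 3. rotate
  m = mul(m, translate(-rpx, -rpy));    // 4. move back from pivot
  m = mul(m, translate(spx, spy));      // 5. move to scale pivot
  m = mul(m, scale(sx, sy));            // 6. scale
  m = mul(m, translate(-spx, -spy));    // 7. move back from pivot
  return m;
}

// With identity rotation/scale the result is a pure translation:
var m = modelView(10, 20, 0, 0, 0, 1, 1, 0, 0);
// m → [1, 0, 10,  0, 1, 20,  0, 0, 1]
```

Doing this chain per actor per frame is exactly the CPU cost the following paragraphs try to avoid.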
CAAT simply transforms 2D coordinates to screen/world space in JavaScript code and buffers these coordinates in a GL buffer. The buffered coordinates are rendered at once whenever an actor requests an image from a texture page other than the currently selected one, an actor’s paintGL method requests a flush, or an alpha value different from the currently set one is requested. I’ve made some tests and this is by far more efficient than uploading matrices via uniform parameters.
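The batching policy can be sketched as a small state machine (all names here are hypothetical; the `flush` callback stands in for the actual buffer upload and draw call):

```javascript
// Hypothetical geometry batcher: accumulate pre-transformed vertices and
// flush only when texture page or alpha changes, or on explicit request.
function GeometryBatcher(flush) {
  this.vertices = [];
  this.texturePage = null;
  this.alpha = 1;
  this.flush = flush; // stands in for gl.bufferSubData + gl.drawArrays
}
GeometryBatcher.prototype.add = function (quad, texturePage, alpha) {
  if ((this.texturePage !== null && texturePage !== this.texturePage) ||
      alpha !== this.alpha) {
    this.flushNow(); // state change: draw what we have so far
  }
  this.texturePage = texturePage;
  this.alpha = alpha;
  Array.prototype.push.apply(this.vertices, quad);
};
GeometryBatcher.prototype.flushNow = function () {
  if (this.vertices.length) this.flush(this.vertices, this.texturePage, this.alpha);
  this.vertices = [];
};

var flushes = 0;
var batcher = new GeometryBatcher(function () { flushes++; });
batcher.add([0, 0, 1, 0, 1, 1, 0, 1], 'page0', 1);
batcher.add([2, 2, 3, 2, 3, 3, 2, 3], 'page0', 1); // same state: no flush
batcher.add([4, 4, 5, 4, 5, 5, 4, 5], 'page1', 1); // page switch: flush
batcher.flushNow();                                // end-of-frame flush
// flushes === 2
```

With all sprites packed into one texture page and sharing the same alpha, an entire frame can go to the GPU in a single draw call.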
Every actor in CAAT is drawn with two triangles, so using triangle strips is not a realistic possibility.
Also, to minimize the processor sink of matrix calculation, CAAT tracks two different transformation matrices for each Scene actor: one for the local transformation, called modelViewMatrix, which tracks translate/rotate/scale, and one for the world transformation, called worldModelViewMatrix, which handles positioning in world coordinates. In a 2D world, a common scenario is that of containers of actors which compose their transformation with that of their contained children. So whenever a container is, for example, rotated, this transformation must be propagated to every contained actor. CAAT implements matrix caching plus dirtiness flags which keep matrix operations to a minimum.
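The dirty-flag idea boils down to this minimal sketch (hypothetical names, not CAAT’s internals): an actor recomputes its matrices only when a transform property actually changed, and a container propagates its dirtiness down to its children so their world matrices are rebuilt lazily too.

```javascript
// Hypothetical dirty-flag propagation for cached transform matrices.
function Actor() {
  this.x = 0;
  this.dirty = true;    // matrices must be rebuilt on first update
  this.children = [];
  this.recomputes = 0;  // counter just for demonstration
}
Actor.prototype.setX = function (x) {
  if (x !== this.x) {
    this.x = x;
    this.invalidate();
  }
};
Actor.prototype.invalidate = function () {
  this.dirty = true;
  for (var i = 0; i < this.children.length; i++) this.children[i].invalidate();
};
Actor.prototype.update = function () {
  if (this.dirty) {
    this.recomputes++;  // the real code would rebuild model-view matrices here
    this.dirty = false;
  }
  for (var i = 0; i < this.children.length; i++) this.children[i].update();
};

var container = new Actor();
var child = new Actor();
container.children.push(child);

container.update();  // first frame: both recompute
container.update();  // nothing changed: no recomputation
container.setX(100); // moving the container dirties the child too
container.update();
// container.recomputes === 2, child.recomputes === 2
```

In a mostly static scene this means the matrix chain from the previous section runs only for the handful of actors that actually moved that frame.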
A test in which 1000 sprites are rotating/scaling is not a realistic scenario but a stress test.
CAAT does the whole process with one single shader which can render either from texture space (calculated by the texture packer) or from a plain color. These values can be complemented by the alpha composition component. The idea, as before, is to interact with the shader as little as possible to avoid GL driver penalties.
One more action I recommend when using this GL implementation is creating sprite sheets with different alpha levels pre-applied for small images. This avoids flushing the current geometry just to set the alpha channel; instead, you select another image index from the sprite sheet. This should only be done for small textures, though.
At this point, I can only suggest you go to the game implementation and watch all these processes in action. Let me know how sumon-webgl performs for you. Needless to say, all these processes are worthless if we face a bad GL driver implementation.