Happy Chinese New Year !!!

To all the Chinese people out there:

Happy New Year !!!


Multi-resolution (HTML5 or not) games.

How do you conform your game content to the available screen space?
How do you appropriately handle Retina display resources?
How do you deal with the million different screen resolutions out there and have your game gracefully scale up or down?

This post describes how this problem is addressed in Cocos2d-html5 V4, the API available for this purpose, and how to handle special cases like orientation change requests.
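Before getting into details, here is a minimal sketch of the general idea, assuming a plain canvas rather than the actual Cocos2d-html5 V4 API (fitCanvas is a made-up name): letterbox the game into the available screen space while preserving the design aspect ratio, and size the backing store for retina displays.

    // Illustrative only: scale a design-resolution canvas to fit the window.
    function fitCanvas(canvas: HTMLCanvasElement, designWidth: number, designHeight: number): void {
        // uniform scale that fits the design resolution into the window (letterbox)
        var scale = Math.min(window.innerWidth / designWidth,
                             window.innerHeight / designHeight);
        var dpr = window.devicePixelRatio || 1;
        // CSS size is what the user sees; the backing store is the CSS size times
        // the pixel ratio, so retina displays get native-resolution pixels.
        canvas.style.width  = (designWidth  * scale) + 'px';
        canvas.style.height = (designHeight * scale) + 'px';
        canvas.width  = designWidth  * scale * dpr;
        canvas.height = designHeight * scale * dpr;
    }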

Less Bunnymarks, more features

Yesterday, I was speaking with some friends about the latest state-of-the-art WebGL game engines (and/or renderers).

Not long ago I saw people mesmerized by pushing 1000 sprites to the screen. Of course, that was using Javascript, and a few years back. Nowadays, we can see 40K sprites, or bunnies, pushed to the screen on a 2013 MacBook Air at a steady 60fps. IMO, we are pushing the envelope, if not in the wrong direction, at least in a not-too-fair one.

Let me explain. The bunnymark test, which has been seen as a reference for WebGL performance in some 2D engines (even in mine), showcases an ideal scenario. It is no more than a shiny marketing ad where conditions are truly unreal. A scenario with just one texture, one shader and gpu-calculated matrices is showcased as an engine’s performance, while it is indeed a bare-bones test which just measures your GPU’s buffer sub-data speed.

Don’t get me wrong, this is not a rant. I have nothing against Pixi. I believe it is actually the de-facto standard for fast 2D WebGL-backed graphics for the web. A reference for many of us. Brilliant work. Bonus points for the fact that the @Goodboy guys are awesome.

Almost 4 years ago I wrote a post on how to make 2D WebGL faster. Even today, there’s little I’d add on top of that post, except for buffer pinpointing, multi-texturing, triangles vs quads and probably the use of bufferData instead of bufferSubData. With these ‘old’ guidelines, this is the result of a Bunnymark test at my current daily job:
CocosJSBunnymark (original viewport 800x600)
The original viewport is 800×600 pixels and yes, the sprites are tinted on the fly while translating and rotating.
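As a reminder of one of the guidelines above, here is a minimal sketch of the bufferData-over-bufferSubData idea (illustrative, not CocosJS code): re-specifying the whole buffer lets the driver orphan the old storage instead of stalling on an in-place update.

    // Upload a fresh copy of the vertex batch each frame with bufferData.
    function uploadBatch(gl: WebGLRenderingContext, buffer: WebGLBuffer, vertices: Float32Array): void {
        gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
        // bufferData re-specifies the buffer store, so the driver can keep
        // drawing from the old copy instead of synchronizing with the GPU.
        gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);
    }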

Is the result good or bad…? IMO, it is meaningless.

I can’t avoid having the same sentiment: ‘hey, 100K bunnies @60fps on the screen’ === ‘an ideal scenario, like a pure marketing campaign’.
We are lucky enough to be in a moment where rendering is not our game bottleneck (at least on desktops).

I’d love to see more production tools and fewer rendering wars. And this, I think, would be a good measure of an engine’s performance: how fast I can deliver while working with it.

So next time someone asks me ‘can you do 100K bunnies/sec @60fps?‘ I’d say: YES, of course (if the conditions are ideal). And if not, I could always use Pixi 🙂

Typescript

It’s not the first time I’ve had to deal with a gigantic Javascript codebase. While most of the side projects I’ve been writing are no more than 5K LOC, CAAT was near 30K, and in my current job I will likely end up managing a codebase of between 50-100K LOC.
It is fairly complicated to keep things up and running in such a big codebase when error checking must be deferred to the runtime environment and different people contribute code. The nature of the code I write is mostly visual. Things that a human being can’t easily spot as right or wrong tend to be quite complicated to automate, either at the unit or the functional testing level. For example, is that gradient-filled shape right?, etc.
While writing CAAT, I had to manage all the basic platform stuff on my own: (prototypical) inheritance, module management, dependencies, … all had to be solved either by myself or by external libraries, not by the language itself. The good news is that Javascript can mostly fix itself.
Basic features of a typed language, like code refactoring or generating VM-correct, performance-friendly code, are so tied to the core of my development that I simply don’t want to deal with them on my own.
For this and so many other reasons I took a shot at Typescript.

Conclusion: it pays for itself after 10 minutes of use, especially when the stress is put on long-term maintainability and code quality.

Let me enumerate some of the goodies I get while writing Typescript code:

  • Refactoring: my IDE understands the code semantically. It allows for deep on-the-fly refactoring of types and symbols.
  • ES6: Typescript brings many ES6 language features to the table. By the time ES6 is around, almost all my Typescript code will be a one-to-one mapping which does not need any change.
  • Strongly typed. Modern Javascript VMs perform deep type inference to JIT the code. While Typescript is not executable in the browser and has to be transpiled to Javascript, being strongly typed it guarantees type consistency, something that will surely help your codebase’s performance.
  • Javascript VM-friendly transpiled code. Some time ago the recommendation was to define properties in prototypes while keeping prototype chains short; today, if you want to take advantage of hidden classes and already-optimized code, properties should be initialized in constructor functions.
  • Solved extension mechanism. Classes transpile to a very simple prototypical extension helper.
  • Solved modularity. Module information will be written for you.
  • Super light overhead. Resulting code is just the original code without the type information.
  • Since 1.4, union types and type aliases leverage Typescript’s capabilities further, letting you switch from type any to a proper union type (see the sketch after this list).
  • No learning curve at all.
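As a small taste of the union types mentioned above (made-up names, Typescript 1.4+):

    // A union type plus a type alias replace a former `any` parameter.
    type Key = string | number;

    function cellId(key: Key): string {
        // the compiler narrows the union per branch
        if (typeof key === "number") {
            return "row-" + key.toString();
        }
        return key;
    }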

Nowadays, there are Typescript mapping files for all major libraries around: webgl, nodejs, (node io ?? :), webaudio, etc. And being myself so close to HTML5 gaming, most modern HTML5 engines are either written in Typescript, like CocosJS or Egret, or have Typescript mappings, like Phaser.

I’d rather have a compiler on my side. Always.

Dynamically Proxying Objects and Wrapping Functions.

Some time ago I needed a mechanism to make debugging CAAT and other Javascript projects I’m working on easier. I have a strong Java background, and I’m used to dynamically proxying most of my core objects, mainly for safety reasons, but also for ease of development and to do some aspect-oriented programming.
So I’ve come up with a Javascript proxy solution which, upon a call to any object’s method, allows me to hook that call in at least three stages:

  • Pre-method call, which allows wrapping, changing or modifying the method call’s parameters.
  • Post-method call, which allows modifying the method call’s return value.
  • On-exception call, which allows returning a value on exception, preventing the exception from propagating.

    Of course, these are just some examples of why I would be willing to use proxies. One more compelling reason is logging code activity without sprouting alert/console.log sentences all around my beautifully crafted code.

    While in Java we have incredibly powerful introspection mechanisms as well as an InvocationHandler ready to be used, the Javascript implementation will be a little less ambitious; I mean, the code security concerns of Java’s dynamic proxy will be left out of the implementation.
    The use of this proxy to wrap a whole object would be as follows:

    // Define a module/namespace whatever you call it.
    var Meetup= Meetup || {};
    
    // Define an 'object class'
    (function() {
     Meetup.C1= function(p) {
       this.a= p;
       return this;
     }
    
     Meetup.C1.prototype= {
       a: 5,
       b: function() {
         return this.a*this.a;
       },
       c: function(p1,p2) {
          return this.a+p1/p2;
       },
       err: function() {
           throw 'mal';
       }
     };
     })();
    
    var c0= new Meetup.C1(10);
    
    // Instantiate and wrap the object into a proxy:
    // This cp0 object will behave exactly as c0 object does.
    var cp0= proxy(
            c0,
            function(ctx) {
                console.log('pre method on object: ',
                        ctx.object.toString(),
                        ctx.method,
                        ctx.arguments );
            },
            function(ctx) {
                console.log('post method on object: ',
                        ctx.object.toString(),
                        ctx.method,
                        ctx.arguments );
    
            },
            function(ctx) {
                console.log('exception on object: ',
                        ctx.object.toString(),
                        ctx.method,
                        ctx.arguments,
                        ctx.exception);
    
                return -1;
            });
    
            

    With this code, calling cp0.b(); will trigger the following method calls:

  • Call the anonymous pre-method function.
  • Call the proxied object’s ‘b’ method.
  • If the proxied object’s method call went right, call the anonymous post-method function; otherwise (on exception toss) call the anonymous on-method-exception function.

    This, with a little extra work, could be considered aspect-oriented function calls. These hook functions receive as their parameter one object of the form:


    {
      object: the-proxied-object,
      method: evaluated-proxied-object-method-name,
      arguments: the-evaluated-method-arguments,
      exception: if-exception-thrown-on-error-hook--the-exception-thrown
    }

    It is up to the developer what to do with this information in each hook function, but in the example, they have been set up as activity-logging functions. An example of the result of executing cp0.c(1,2) would be:


    pre method on object: Meetup.C1 {a: 10} c [ 1, 2 ]
    post method on object: Meetup.C1 {a: 10} c [ 1, 2 ]
    10.5

    (which should be read as: pre-method call on object Meetup.C1 {a: 10}, method ‘c’, with method arguments ‘[1, 2]’.)

    When proxying a simple function, the code downgrades to simple function wrapping. In this case I’m keeping the same call scheme. An example would be as follows:

    function ninja(){
      console.log("ninja running" );
    };
    
    var pninja= proxy(
            ninja,
            function(context) {
                console.log('pre method on function: ',
                        context.fn,
                        context.arguments );
            },
            function(context) {
                console.log('post method on function: ',
                        context.fn,
                        context.arguments );
            },
            function(context) {
                console.log('exception on function: ',
                        context.fn,
                        context.arguments );
                return -1;
            });
            

    As you can see, when wrapping functions, the context supplied to hook functions is of the form:

    {
      fn: the-wrapped-function,
      arguments: the-wrapped-function-arguments
    }

    The proxy function itself is the following:

    function proxy(object, preMethod, postMethod, errorMethod) {
    
        // proxy a function
        if ( typeof object=='function' ) {
    
            if ( object.__isProxy ) {
                return object;
            }
    
            return (function(fn) {
                var proxyfn= function() {
                    if ( preMethod ) {
                        preMethod({
                                fn: fn,
                                arguments:  Array.prototype.slice.call(arguments)} );
                    }
                    var retValue= null;
                    try {
                        // apply original function call with itself as context
                        retValue= fn.apply(fn, Array.prototype.slice.call(arguments));
                        // everything went right on function call, then call
                        // post-method hook if present
                        if ( postMethod ) {
                            postMethod({
                                    fn: fn,
                                    arguments:  Array.prototype.slice.call(arguments)} );
                        }
                    } catch(e) {
                        // an exception was thrown; call the exception-method hook if
                        // present and return its result as the execution result.
                        if( errorMethod ) {
                            retValue= errorMethod({
                                fn: fn,
                                arguments:  Array.prototype.slice.call(arguments),
                                exception:  e} );
                        } else {
                            // since there's no error hook, just throw the exception
                            throw e;
                        }
                    }
    
                    // return original returned value to the caller.
                    return retValue;
                }
                proxyfn.__isProxy= true;
                return proxyfn;
    
            })(object);
        }
    
    /**
     * If not a function, then only non-primitive objects can be proxied.
     * If it is a previously created proxy, return the proxy itself.
     */
    if ( typeof object!=="object" ||
            object.constructor==Array ||
            object.constructor==String ||
            object.__isProxy ) {
    
            return object;
        }
    
        // Our proxy object class.
        var cproxy= function() {};
        // A new proxy instance.
        var proxy= new cproxy();
        // hold the proxied object as member. Needed to assign proper
        // context on proxy method call.
        proxy.__object= object;
        proxy.__isProxy= true;
    
        // For every element in the object to be proxied
        for( var method in object ) {
            // only function members
            if ( typeof object[method]=="function" ) {
                // add to the proxy object a method of equal signature to the
                // method present at the object to be proxied.
                // cache references of object, function and function name.
                proxy[method]= (function(proxy,fn,method) {
                    return function() {
                        // call pre-method hook if present.
                        if ( preMethod ) {
                            preMethod({
                                    object:     proxy.__object,
                                    method:     method,
                                    arguments:  Array.prototype.slice.call(arguments)} );
                        }
                        var retValue= null;
                        try {
                            // apply original object call with proxied object as
                            // function context.
                            retValue= fn.apply( proxy.__object, arguments );
                            // everything went right on the function call, then call
                            // the post-method hook if present
                            if ( postMethod ) {
                                postMethod({
                                        object:     proxy.__object,
                                        method:     method,
                                        arguments:  Array.prototype.slice.call(arguments)} );
                            }
                        } catch(e) {
                            // an exception was thrown; call the exception-method hook if
                            // present and return its result as the execution result.
                            if( errorMethod ) {
                                retValue= errorMethod({
                                    object:     proxy.__object,
                                    method:     method,
                                    arguments:  Array.prototype.slice.call(arguments),
                                    exception:  e} );
                            } else {
                                // since there's no error hook, just throw the exception
                                throw e;
                            }
                        }
    
                        // return original returned value to the caller.
                        return retValue;
                    }
                })(proxy,object[method],method);
            }
        }
    
    // return our newly created proxy object, populated with functions.
        return proxy;
    }
            

    Despite being a powerful beast, this proxy model has some drawbacks. For example, attributes (an object’s non-function members) can’t be proxied, since they are not evaluated through a function call (I will have to take a look at the __defineGetter__ and __defineSetter__ functions, though; a small sketch of that idea follows below). Also, methods dynamically added to an object won’t be proxied by an already-created proxy object, but you always have the opportunity of creating a new proxy for that object.
    Also, not every object type can be proxied. Concretely, the String and Array primitive object types can’t, so the proxy function checks first whether the supplied object is eligible to be proxied. If not, the unchanged object is returned.
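    For completeness, here is a minimal sketch, separate from the proxy above, of how attribute access could be hooked with accessors in the spirit of __defineGetter__/__defineSetter__ (watchAttribute is a made-up name):

        // Redefine a plain attribute as an accessor pair, so reads keep working
        // and writes trigger a hook. Illustrative only.
        function watchAttribute(obj: any, name: string, onChange: (v: any) => void): void {
            var value = obj[name];
            Object.defineProperty(obj, name, {
                get: function() { return value; },
                set: function(v) { onChange(v); value = v; }
            });
        }

        // watchAttribute(c0, 'a', function(v) { console.log('a set to', v); });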

    One more thing to say about the proxy function is its own definition. It is defined as a global function because it makes no sense to add it to Object’s prototype since, as I said, not every object type is eligible to be proxied.

    This technique has proven very useful to me. While developing, I just pass objects’ proxies back and forth through the code, and they seamlessly behave as the regular objects. Given that one single object can have more than one proxy, I have the ability to hook different extra behaviors to the original object by passing different proxies in different situations. In the end, when I’m finished with the development/debug phase, I simply disable the proxy functionality by supplying a proxy function which just returns the passed object, like the sketch below. Maybe this technique is only suitable for very object-centric developments, but I’ve found it very valuable.
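    A sketch of that release-time switch: a pass-through replacement with the same signature, so call sites don’t change.

        // Drop-in replacement to disable proxying in release builds: the hooks
        // are ignored and the original object is handed back untouched.
        function proxy(object, preMethod, postMethod, errorMethod) {
            return object;
        }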

    Maybe I’m wrong, let me know what you think.

    Fishpond

    The fishpond is a CAAT-based experiment.
    It is a test of building procedurally generated content, in this case fish. Each fish is composed of 4 bezier curves: 2 cubic curves to make the body, and 2 quadratic curves to build the head.
    Here is a sketch of the first fish type where its curves can be seen:

    These simple curves lead to a rich, parameterized procedural fish, where tail width and height, body width and height, head curvature, etc. can be procedurally set to create many different fish types.
    With the very same curves, and only by moving some control points around, new fish breeds can be created very easily. For example, with the following curve setup a sort of fish-cat can be created:

    Parameters such as eye size, fin position, fin size and fin movement are also procedurally set. Tail movement frequency and bending angle are parametric as well.

    All these handles altogether lead to a really rich result. In addition, every fish on screen follows a cubic bezier path which is itself randomly generated (a sketch of the curve evaluation follows below). Here’s a screenshot of the whole result. I’m playing with new fish breeds which will enrich the fishpond with new life forms. Here are two examples of such new creatures:


    A screenshot of the fishpond:

    See the fishpond in action.
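    Since everything above is built from the same primitive, here is a minimal sketch of evaluating a point on a cubic bezier at t in [0, 1], the kind of curve both the fish bodies and the randomly generated swim paths rely on (Point and cubicBezier are made-up names for illustration):

        // Bernstein-form evaluation of the cubic bezier defined by control
        // points p0..p3 at parameter t in [0, 1].
        interface Point { x: number; y: number; }

        function cubicBezier(p0: Point, p1: Point, p2: Point, p3: Point, t: number): Point {
            var u = 1 - t;
            var b0 = u*u*u, b1 = 3*u*u*t, b2 = 3*u*t*t, b3 = t*t*t;
            return {
                x: b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
                y: b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y
            };
        }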

    CAAT’s WebGL implementation notes.

    I’ve been coding for some time now to provide CAAT with transparent, seamless WebGL integration.
    A 2D rendering engine is not one of the best scenarios for 3D acceleration, and here I’m showing some of the techniques I had to develop to provide the best acceleration scenario possible.

    It needs to be said that CAAT will use hardware acceleration if available, and will transparently fall back to canvas if not. Here you can find a mathmayhem game implementation which tries to use WebGL as its renderer. The game needed no changes at all. You can tell CAAT to use WebGL just by calling the Director’s initializeGL method. All the other development issues will be covered by CAAT itself.

    First of all, some notes about WebGL’s needs. Despite us (developers) being thorough regarding our class hierarchies and animation actor collections, the hardware acceleration layer is not aware of, or interested in, our data organization at all. WebGL is a rasterizer (a very efficient one indeed), so to keep it at high FPS rates I had to switch from object-oriented to data-oriented development. In any professional 3D engine it is a must to keep shader switch operations to a minimum, that is, sorting your scene assets by shader and by geometry. Achieving this objective has been priority nº 1 in CAAT.

    Texture packing

    One of the most obvious ways to lose performance with WebGL rendering is using tons of textures to apply to different scene actors. Instrumenting your shaders to change from texture to texture will immediately drop your scene’s FPS. Besides, not every texture size is suitable for WebGL. Firefox 4 complains about non-2^n texture sizes (needed for mipmapping, btw) and Chrome drops performance when such sizes are not used (MacOS X, Chrome 10 stable).

    CAAT’s solution is to pack textures automatically into glTextures of the programmer’s desired size (2048×2048 by default). The system will try to pack images into different ‘texture pages’, transparently keeping track of each element’s position in glTexture space, so that when selecting a texture for a sprite the programmer has nothing to take into account. This process absolutely minimizes texture switching, reaching its best efficiency when only one texture page is created (a 2048×2048 texture space should be enough). The Mathmayhem game, despite not having its images optimized in size at all, fits perfectly in one single texture page.
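    To make the ‘texture page’ idea concrete, here is a naive sketch of a shelf packer, not CAAT’s actual implementation: images are placed left to right in rows inside one big texture, and their normalized UV rectangles are remembered for the sprites (TexturePage and Rect are made-up names):

        // Place w x h images into shelves of a size x size texture page and
        // return their normalized UV rectangles, or null when the page is full.
        interface Rect { u0: number; v0: number; u1: number; v1: number; }

        class TexturePage {
            private x = 0;
            private y = 0;
            private shelfHeight = 0;

            constructor(private size: number = 2048) {}

            add(w: number, h: number): Rect | null {
                if (this.x + w > this.size) {      // current shelf full: open a new one
                    this.x = 0;
                    this.y += this.shelfHeight;
                    this.shelfHeight = 0;
                }
                if (this.y + h > this.size) {      // page full: caller creates a new page
                    return null;
                }
                var r = { u0: this.x / this.size,       v0: this.y / this.size,
                          u1: (this.x + w) / this.size, v1: (this.y + h) / this.size };
                this.x += w;
                this.shelfHeight = Math.max(this.shelfHeight, h);
                return r;
            }
        }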

    ZOrder

    A 2D rendering engine must perform back-to-front, pixel-overwriting rendering, so the depth test must be disabled. Blending must be enabled so that transparent/translucent images can be properly rendered. One oddity about blending is that the background color of the DOM element the canvas is embedded in will make the blending function saturate towards that color. So if you just have a page with the canvas, a white body will make your blending function show wrongly. You should embed the canvas in a div and set that div’s background color to black (#000); otherwise, your blending won’t show as expected.
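    A minimal sketch of that state setup with standard WebGL calls (the blend function shown is the usual non-premultiplied-alpha choice, not necessarily the exact one CAAT uses):

        // Painter's algorithm: no depth test, classic alpha blending.
        function setup2DState(gl: WebGLRenderingContext): void {
            gl.disable(gl.DEPTH_TEST);
            gl.enable(gl.BLEND);
            gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
        }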

    Transformations

    The bad news about a 2D rendering engine is the fact that every single sprite on screen can have its own transformation. This means a huge amount of processor time devoted to matrix computations. CAAT has translation, rotation around a rotation pivot and scale around a scale pivot. That means 7 matrix multiplications for each Actor on Scene. In fact, the matrix operations MUST be performed regardless of the scene’s complexity. In a scenario in which you have, let’s say, a fixed 100K-polygon model, only one matrix must be calculated and uploaded to the graphics card via uniform parameters. But in a Scene with 1000 different actors (2000 triangles), 1000 matrices must be calculated and uploaded to the graphics card. That will undoubtedly kill your frame rate. So CAAT’s approach is different.

    CAAT simply transforms 2D coordinates to Screen/World space via JS code, and buffers these coordinates in a GL buffer. The buffered coordinates are rendered at once whenever an actor requests an image from a different texture page than the currently selected one, an actor’s paintGL method requests a flush, or an alpha value different from the currently set one is requested. I’ve made some tests and this is by far more efficient than uploading matrices via uniform parameters.
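    A rough sketch of that scheme, with illustrative names rather than CAAT’s internals (texture coordinates omitted for brevity): quads are transformed on the CPU, accumulated into a typed array, and flushed with a single draw call when one of those state changes forces it.

        // Accumulate CPU-transformed quads (two triangles each) and flush the
        // whole batch with one bufferData upload and one draw call.
        class QuadBatch {
            private data = new Float32Array(1024 * 6 * 2); // 1024 quads, 6 xy vertices each
            private count = 0;

            // corners: [x0,y0, x1,y1, x2,y2, x3,y3]; m: row-major 2x3 affine matrix
            pushQuad(corners: number[], m: number[]): void {
                var order = [0, 1, 2, 0, 2, 3];            // split the quad in 2 triangles
                for (var i = 0; i < order.length; i++) {
                    var x = corners[order[i] * 2], y = corners[order[i] * 2 + 1];
                    // the 2D affine transform happens here, in JS, instead of
                    // uploading a per-actor matrix uniform
                    this.data[this.count++] = m[0] * x + m[1] * y + m[2];
                    this.data[this.count++] = m[3] * x + m[4] * y + m[5];
                }
            }

            // called on texture page change, alpha change, or a paintGL request
            flush(gl: WebGLRenderingContext, buffer: WebGLBuffer): void {
                if (this.count === 0) return;
                gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
                gl.bufferData(gl.ARRAY_BUFFER, this.data.subarray(0, this.count), gl.DYNAMIC_DRAW);
                gl.drawArrays(gl.TRIANGLES, 0, this.count / 2);
                this.count = 0;
            }
        }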

    Every actor in CAAT is drawn with 2 triangles, so using triangle strips is not a real possibility.

    Also, to minimize the processor sink of matrix calculation, CAAT tracks two different transformation matrices for each Scene actor: one for the local transformation, called modelViewMatrix, which tracks translate/rotate/scale, and one for the world transformation, called worldModelViewMatrix, which handles positioning in world coordinates. In a 2D world, a normal scenario is that of containers of actors which compose their transformation with that of their contained children. So whenever a container is, for example, rotated, this transformation must be propagated to every contained actor. CAAT implements matrix caching plus dirty flags to keep matrix operations to a minimum (a sketch of the idea follows).
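    A minimal sketch of the dirty-flag idea, with made-up names and not CAAT’s actual classes: world matrices are recomposed only when the local transform, or an ancestor’s, actually changed.

        // 2x3 affine matrices stored as [m00,m01,m02, m10,m11,m12].
        function multiply(a: number[], b: number[]): number[] {
            return [a[0]*b[0] + a[1]*b[3], a[0]*b[1] + a[1]*b[4], a[0]*b[2] + a[1]*b[5] + a[2],
                    a[3]*b[0] + a[4]*b[3], a[3]*b[1] + a[4]*b[4], a[3]*b[2] + a[4]*b[5] + a[5]];
        }

        class Node2D {
            children: Node2D[] = [];
            local = [1, 0, 0, 0, 1, 0];   // cached local transform (modelViewMatrix role)
            world = [1, 0, 0, 0, 1, 0];   // cached world transform (worldModelViewMatrix role)
            dirty = true;

            setLocal(m: number[]): void {
                this.local = m;
                this.dirty = true;        // invalidate; recompute lazily on update
            }

            updateWorld(parentWorld: number[], parentDirty: boolean): void {
                var mustRecompute = this.dirty || parentDirty;
                if (mustRecompute) {
                    this.world = multiply(parentWorld, this.local);
                }
                for (var i = 0; i < this.children.length; i++) {
                    // children recompute only if they, or an ancestor, changed
                    this.children[i].updateWorld(this.world, mustRecompute);
                }
                this.dirty = false;
            }
        }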

    A test in which 1000 sprites are rotating/scaling is not a real-world scenario but a stress test.

    Shaders

    CAAT does the whole process with one single shader which is able to render either from texture space calculated by the texture packer, or from a flat color. These values can be complemented by the composed alpha component. The idea, as before, is to interact with the shader as little as possible to avoid GL driver penalties.
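    To illustrate the idea (this is not CAAT’s actual shader source), a single fragment shader can branch between the texture-page sample and a flat color via a uniform, modulated by the composed alpha:

        // Illustrative GLSL: one program renders both textured and flat-color
        // geometry, so no shader switches are needed between the two.
        const fragmentShaderSource = `
            precision mediump float;
            uniform sampler2D uTexturePage; // one of the packed texture pages
            uniform bool      uUseColor;    // texture vs. flat color switch
            uniform vec4      uColor;
            uniform float     uAlpha;       // composed alpha component
            varying vec2      vUV;          // coordinates in texture-page space

            void main() {
                vec4 c = uUseColor ? uColor : texture2D(uTexturePage, vUV);
                gl_FragColor = vec4(c.rgb, c.a * uAlpha);
            }
        `;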

    One more action I recommend when using this GL implementation is creating sprite sheets with different alpha values applied to small images, thus avoiding flushing the current geometry just to set the alpha channel, and instead selecting another image index from the sprite sheet. This should only be done for small textures, though.

    At this point, I can just suggest you go to the game implementation and watch all these processes in action. Let me know how sumon-webgl performs for you. Needless to say, all these processes are worthless if we face a bad GL driver implementation.