WebDev, March 4th

On March 4th, we held the first WebDevs edition at the Universidad de Deusto. Between 60 and 70 people attended, and the venue was spectacular: the conference hall of the Deusto Business School.
Thanks to Txipi and the Universidad de Deusto for treating us like royalty. And of course, thanks to everyone for attending. For me it was one of the best meetups I have ever been to.

The speakers were:

Naiara Abaroa, @nabaroa, gave an impeccable presentation on CSS topics such as Grid Layout, CSS Modules and more. Here's a link to her slides, beautifully crafted and spectacular.


Hugo Biarge, @hbiarge, with a powerful presentation on components in Angular 1.5 and their ins and outs. You can get the slides here.


Myself, Ibon Tolosana: I talked about how to take advantage of some not-so-hidden secrets of the Chrome dev tools. I made no slides, since it was mostly live coding, so I opened the Chrome console and worked directly on it. You can heckle me at @hyperandroid.


Future editions?

The meetup format is open. There are no official organizers, no bosses. As we already said, the motivation is nothing more than giving a little back to the community. We constantly benefit from open source, frameworks and public knowledge, and with this meetup I hope we have managed to balance the karma a bit in that regard.

I'd love there to be more editions, and to attend as part of the audience. I'd like to see presentations in the same format, 25-30 minutes (I know, I went over time by a looooot), dense, deep or introductory, it doesn't matter. On topics as diverse as:

  • WebVR
  • an overview of the JavaScript ecosystem (webpack, gulp, browserify, babel, TS, etc. etc.)
  • Three.js
  • CSS3 transitions and transformations live coding
  • Vert.x
  • Node internals
  • Pixi.js
  • React+Redux internals
  • hybrid JS-native applications
  • V8 for dummies or pros
  • WebComponents
  • etc. etc.

Come on, let's keep the wheel turning. See you on April 4th?

 

Efficient WebGL stroking

While working on WebGL rendering, you soon realize that a canvas-like API, despite its opinionated implementation, is extremely useful and easy to work with. Concretely, I am referring to its stroking capabilities. While writing a WebGL stroke implementation I came up with the following solution, which requires very simple math, is fast, and can be implemented with nothing more than 2D vectors.

The Canvas API offers support for plenty of cool stuff like line width, line cap and line join values, etc., and so will we. Here's an example of the implementation:

Efficient WebGL stroking live demo:

https://hypertolosana.github.io/efficient-webgl-stroking/index.html

  • Check ‘show lines’ to see the internally calculated anchors and vectors used to set up the tessellation.
  • The red points and the green arrows can be dragged to see live results.
  • ‘Draw with canvas’ is enabled so that the calculated triangles are superimposed, for visual comparison between the algorithm and the canvas API.
  • Select different line cap and join values.
  • The pink line (the intersection point subtracted from the center point) will disappear when the angle is too narrow to base the tessellation on the intersection point. Note that an intersection point does not always exist.

For the implementation to work, a vector of Points and some line attributes are necessary. These points should define a poly-line, which is the contour we want tessellated. In Cocos2d-html5 v4 we have a general-purpose Path class, which classifies its segments into contours, and it could be a good starting point if nothing else is around.

First steps, just stroke the poly-line

If we were to create quads for each line segment of our contour, we would end up with wrong results: heavy overdraw and no smooth corners. To avoid this, the supplied data must be preprocessed a bit if smooth line joins and correctly rendered closed contours are a requirement. Specifically, we want to split each contour segment (except the first and last ones) into two different segments by creating a new middle point.
This is the result of a straightforward algorithm that creates a straight line for every two points; thus, for each 3 points, we'll get two straight line segments.

Data setup will be as follows:

  var i;
  var midp = [];   // the new point collection, with middle points added

  for ( i = 0; i < points.length - 1; i++) {
    if ( i===0 ) {
      midp.push( points[0] );
    } else if ( i===points.length-2 ) {
      midp.push( points[points.length-1] );
    } else {
      midp.push( Point.Middle(points[i], points[i + 1]) );
    }
  }

  // for every middle point, create all necessary triangles for the two segments it represents.
  for ( i = 1; i < midp.length; i++) {
    createTriangles( 
      midp[i - 1], 
      points[i], 
      midp[i], 
      resulting_array_of_points, 
      lineWidth,
      join, 
      cap, 
      miterLimit );
  }

Second, we must calculate the correct line width based on the current transformation matrix. To do so, we create two vectors, Vector( 0, 0 ) and Vector( lineWidth, 0 ), both of which will be transformed by the current transformation matrix. The magnitude (module) of the vector between the two transformed points will be our actual line width:

  var v0= new Vector();
  var v1= new Vector(lineWidth, 0);
  currentMatrix.transformPoint( v0 );
  currentMatrix.transformPoint( v1 );

  lineWidth= Vector.sub( v0, v1 ).getModule();
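
Both snippets above lean on small 2D point/vector helpers. A minimal sketch of them, assuming names consistent with the ones used in the text (Point.Middle, Vector.sub, getModule), could be:

  function Point(x, y) { this.x = x || 0; this.y = y || 0; }

  // middle point between two points
  Point.Middle = function(p0, p1) {
    return new Point( (p0.x + p1.x)/2, (p0.y + p1.y)/2 );
  };

  function Vector(x, y) { this.x = x || 0; this.y = y || 0; }

  // vector from v0 to v1
  Vector.sub = function(v0, v1) {
    return new Vector( v1.x - v0.x, v1.y - v0.y );
  };

  // magnitude (module) of the vector
  Vector.prototype.getModule = function() {
    return Math.sqrt( this.x*this.x + this.y*this.y );
  };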

The result from stroking 3 points at (100,100), (300,100), and (300,300) with a line width of 80, and assuming an identity transformation matrix would be the following:

And for 4 points at (100,100), (300,100), (300,300), and (100,300) we’d get the following:

In this example the resulting triangles include a filler triangle for the union of both segments, corresponding to a "bevel" type line join.
It is important to note that, in the examples, with 3 points no mid point is created, but with 4 points there is one, leading to two 3-point (or two-line-segment) tessellation blocks. More points will create more mid points.

But how is all this calculated?

From each line segment, a vector is calculated (blue), and normal vectors with a length of half the line width are created too (green). These normal vectors are added to the segment points (red circles), creating 6 new points, which are the tips of the green arrows.
Between each pair of the newly created external points, a line is traced (red). The intersection point of the two red lines, and the result of subtracting that intersection point from the middle point, are very important (blue dots). If the lines do not cross, the segments are parallel, and the tessellation is straightforward: just create the two triangles that compose it.

With the segment points, the ‘bottom’ blue point, and the calculated normal points for the line segments, we can already tessellate the segment. The result will be the one shown in the previous images.

For example, the first tessellated triangle would be: (p0+t0), (p0-t0), (p1+anchor), where:

  • p0, p1: the first two line segment points. (red dots)
  • t0: the external perpendicular vector to each p0 and p1 (light red arrow)
  • anchor: the vector resulting from subtracting the intersection point from the last (p1) line segment point.

And another one would be: (p0+t0), (p1+t0), (p1+anchor)

It is just a matter of continuing to divide the space for the rest of the points.
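
As a sketch of how those two triangles could be emitted (emitFirstTriangles and the flat vertices array are illustrative names, not the library's actual code; the Point helper is the one sketched earlier):

  // p0, p1: the segment's points. t0: perpendicular vector of half the
  // line width. anchor: the intersection point subtracted from p1.
  function emitFirstTriangles(p0, p1, t0, anchor, vertices) {
    // (p0+t0), (p0-t0), (p1+anchor)
    vertices.push(
      new Point( p0.x + t0.x, p0.y + t0.y ),
      new Point( p0.x - t0.x, p0.y - t0.y ),
      new Point( p1.x + anchor.x, p1.y + anchor.y ) );

    // (p0+t0), (p1+t0), (p1+anchor)
    vertices.push(
      new Point( p0.x + t0.x, p0.y + t0.y ),
      new Point( p1.x + t0.x, p1.y + t0.y ),
      new Point( p1.x + anchor.x, p1.y + anchor.y ) );
  }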

My two lines do not intersect

Sometimes there's no intersection point (red lines). In this case, or when the angle between the two segments is very narrow (the vector from the center point to the intersection, which is magenta in the playground, is longer than any of the segments), the algorithm switches to a different way of tessellating, since the general case would otherwise fail. Since there's no intersection point, some elements like the miter must be calculated differently.

Line Joins

Line joins are the tessellation between every two line segments. The most basic one is the bevel line join, which is trivially a single triangle whose tessellation information is already available:
(image: bevel join)

A round join draws an arc centered at p1 (the middle point), with a radius of the distance between p1 and p1+t0, for example. The result would be:
(image: round join)
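
A minimal sketch of such a round join as a triangle fan (illustrative code; angleFrom and angleTo would be the directions of the two join edges, swept through the minimum angle as noted later, and numSegments derived as in the 'Trivial optimizations' section below):

  function emitRoundJoin(p1, radius, angleFrom, angleTo, numSegments, vertices) {
    var step = (angleTo - angleFrom) / numSegments;
    for (var i = 0; i < numSegments; i++) {
      var a0 = angleFrom + i*step;
      var a1 = a0 + step;
      // one fan triangle: center, arc point, next arc point
      vertices.push(
        new Point( p1.x, p1.y ),
        new Point( p1.x + radius*Math.cos(a0), p1.y + radius*Math.sin(a0) ),
        new Point( p1.x + radius*Math.cos(a1), p1.y + radius*Math.sin(a1) ) );
    }
  }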

Another line join type is miter. The following image describes it:
(image: miter join)
This type of line join is controlled by a parameter, the miter limit. The miter is the ratio between the length of the anchor vector (magenta) and half the line width. (Remember the line width is used to grow the tessellation segment in both directions from the line segment points.) It is also important to note that the miter limit calculation must be clamped to an integer (something not even mentioned in the Canvas spec).

For segments with a narrow angle between them, the miter would have to be very long to cover the join area, leading to odd visuals like in the example:
(image: miter too high)
This can be controlled by the miter limit parameter, which will prevent the miter join (switching to bevel) when the ratio is too high.
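
The test itself is simple. A sketch, where emitBevelJoin and emitMiterJoin are hypothetical placeholders for the corresponding tessellation paths:

  // miter ratio: anchor length (magenta) vs. half the line width.
  var miterRatio = anchor.getModule() / (lineWidth / 2);
  if ( miterRatio > miterLimit ) {
    emitBevelJoin();   // ratio too high: fall back to bevel
  } else {
    emitMiterJoin();
  }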

Line cap

Line caps control how the start and the end of a poly-line look. With a butt value, the line looks just as it is tessellated, with no extra triangles, like in the image:
(image: butt cap)

Other cap types are available: square
(image: square cap)

And round:
(image: round cap)

Triangle signed area

In all cases, the signs and directions of the perpendicular or line cap vectors must be taken into account. For example, it is mandatory to get the winding direction of the triangle created by each 3 points before tessellating, since otherwise odd visual results will arise. If this is not addressed, the following result will be seen:
(image: incorrect tessellation when the winding direction is ignored)
What happens here is that the intersection lines (red lines in the previous examples) are being calculated from the inner side instead of the outer one. It is important to correctly calculate the signed area of the line segments' triangle, which gives not only the area of the triangle defined by the 3 points, but also a sign, describing whether it is defined clockwise or counterclockwise.
In our case, if the signed area is greater than 0, describing a counterclockwise triangle, we invert the vectors that define the intersection lines. We always want these vectors pointing outwards (light green).
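
The signed area is half the 2D cross product of two of the triangle's edges. A minimal sketch, where t0 and t2 stand for the perpendicular vectors of the two segments (hypothetical names):

  // > 0: counterclockwise winding, < 0: clockwise.
  function signedArea(p0, p1, p2) {
    return ( (p1.x - p0.x) * (p2.y - p0.y) -
             (p2.x - p0.x) * (p1.y - p0.y) ) / 2;
  }

  if ( signedArea(p0, p1, p2) > 0 ) {
    // invert the perpendiculars so the intersection lines
    // are calculated from the outer side.
    t0 = new Vector( -t0.x, -t0.y );
    t2 = new Vector( -t2.x, -t2.y );
  }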

Analogously, the vectors for line caps must be calculated with their direction in mind, so that the caps point the right way.

Also, when round caps or joins are used, the arc angle must be calculated as the minimum angle.

Closed contours

In order to have closed contours properly tessellated, some extra work must be done. A closed contour is one whose first and last points are the same, either the same reference or the same coordinates. The process is simple (see the sketch after the list):

  • calculate a new point which is going to be the middle between the first and second contour points.
  • remove the first point.
  • add the calculated point twice: at the beginning and at the end of the contour points.
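
A sketch of that preprocessing, reusing the hypothetical Point helper from before:

  function preprocessClosedContour(points) {
    // middle point between the first and second contour points
    var mid = Point.Middle( points[0], points[1] );
    points.shift();        // remove the first point
    points.unshift( mid ); // add the new point at the beginning...
    points.push( mid );    // ...and again at the end
    return points;
  }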

Without preprocessing, the result would be:
(image: closed path, tessellation error)

And after preprocessing the collection of points:
(image: closed path, correct tessellation)

It is important to note that closed contours don't have any line cap applied.

Trivial optimizations

As far as I can tell, the most trivial optimization is switching to bevel joins and butt caps when the calculated line width is too small (< 1.5).
It is also important to calculate the number of triangles for rounded arcs based on the length of the arc under the current transformation matrix. You don't want super perfectly tessellated arcs with 200 triangles each.
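
A sketch of that idea; the 5-pixel step and the clamping range are arbitrary assumptions, not the actual values used by the implementation:

  // scaledRadius: the arc radius transformed by the current matrix,
  // the same way the line width was scaled earlier.
  function arcSegments(scaledRadius, angle) {
    var arcLength = Math.abs(angle) * scaledRadius;
    // roughly one segment every 5 on-screen pixels, within sane bounds
    return Math.max( 3, Math.min( Math.ceil(arcLength / 5), 32 ) );
  }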

Limitations

The algorithm is totally unoptimized.
It is also not suited for situations where many segments overlap the same area, for example with very twisted poly-lines. In those cases, the tessellation won't be correct.
Sometimes the generated triangles overlap; if transparency is used, that overdraw becomes visible. You could first draw the triangles to the stencil buffer though (a weak solution, I know).

Playground

Efficient WebGL stroking playground live demo:

https://hypertolosana.github.io/efficient-webgl-stroking/playground.html

Finite State Machine

A finite-state machine, or finite automaton, is a model of computation: an abstract machine defined by a set of States and a set of events that trigger State changes, called Transitions. The model states that the Machine can be in only one State at any given time, which is called the CurrentState.

It's been a long time since I first met FSMs, probably around 2000, when I was CTO at a social multiplayer games site, where people played against people in real-time. Every game context had to be held remotely, had to be as secure as possible, and had to be deterministic, to name a few of the core features. Modeling a whole game (cards, table, etc.) while fulfilling such requirements is not an easy task unless you rely on a powerful framework. The best solution I came up with was FSMs, which allowed me to build whole game models declaratively.

Some years ago I developed a formal FSM implementation in JavaScript, as a client-side library and npm-hosted node package. I am currently using it in some multiplayer games, and the Automata code is still in good shape. These are some features of the library:

Declarative definition

FSMs must be defined declaratively. You must be able to declare the set of States and Transitions in a JSON object. The FSM definition must be unique and non-instantiable; for example, there's only one definition of the Dominoes game.
The FSM model must allow for deep FSM nesting, so that a State can spawn a whole new FSM. Think of a nested behavior, where a game table's ‘playing’ State is a whole ‘dominoes’ FSM.
Each stacked FSM will have its own FSM context, like a stack trace of States.

An example declaration could be:

context.registerFSM( {

    name    : "Test2",
    logic   : function() { 
        // convention function called when State 'a' exits.
        this.aExit = function() { /* ... */ };

        // convention function called when State 'b' enters.
        this.bEnter = function() { /* ... */ };
    },

    state  : [
        {
            name    : "a",
            initial : true,
            onTimer : {         // Timed transition.
                timeout: 4000,  // after 4 seconds.
                event: {
                    msgId: "ab" // when it expires, fire transition "ab".
                }
            }
        },
        {
            name    : "b"
        },
        {
            name    : "c"
        }
    ],

    transition : [
        {
            event       : "ab",
            from        : "a",
            to          : "b"
        },
        {
            event   : "bc",
            from    : "b",
            to      : "c"
        }
    ]
} );

Session

For each dominoes game played, a game Session is created. Sessions have common initial conditions and change when Transitions are triggered. A Session exposes full lifecycle events for:

  • context creation. Each time a new FSM is pushed to the Session.
  • context destruction. Each time an FSM pops.
  • final state reached. Once the final State is reached, the Session is empty and no further interaction can happen on it.
  • state changed. The CurrentState changes. It could be an auto-transition, which means the previous and Current States will be the same.
  • custom event exposure. Whenever you want to notify the external world of an event.

This lifecycle allows for full FSM traceability and deterministic behavior. Whenever the ‘state changed’ callback is invoked, you can save the event that triggered the change. If all the events that fire a state change are saved sequentially, and later fed to a freshly created Session, you'll always get the same results. If a user reports a bug and you have previously saved the game's ‘state change’ message collection, you'll magically get to the same bug. The FSM framework guarantees code traceability. Black Magic !!
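
As a sketch of the record/replay idea (the API names createSession, addListener, stateChanged and dispatch are hypothetical; the actual Automata calls may differ):

// record every event that triggered a state change
var recorded = [];
var session = context.createSession( "Test2" );
session.addListener( {
    stateChanged : function( event ) {
        recorded.push( event );
    }
} );

// ... play the game ...

// later: feed the saved events to a fresh session to reproduce
// the exact same sequence of states (and the bug, if any).
var replay = context.createSession( "Test2" );
recorded.forEach( function( event ) {
    replay.dispatch( event );
} );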

As an implementation note, it is desirable for the Session object to be serializable in JSON format. Saving and restoring a game Session on the fly is definitely a feature you want to have around.

Lifecycle

FSM elements expose a full lifecycle:

  • enter State
  • exit State
  • transition fired
  • pre and post guard events

These callback hooks will be invoked by the FSM engine to notify of the named events. The ‘transition fired’ event is normally the point where the Session data changes. ‘Enter/exit State’ events are typically the points to manage the Session: setting timers, ending the Session, initializing a sub-State FSM, etc.

Timed transitions

It is desirable to have a per-State event scheduler. A very common use case is the following: a game must start within a certain amount of time. When the FSM enters the ‘start game’ State, a timer is set. If the FSM does not change to the ‘playing’ State within (for example) 30 seconds, the CurrentState automatically receives a transition event to the ‘end game’ State, instead of keeping players locked at the game table forever.
Another thing to note is that the server hosting the remote FSM can time out after, for example, one minute, and the game clients in half that time. Legit clients will have a local timeout of 30 seconds to start the game, and will request ‘go to the lobby’ otherwise. If an unfair client is connected to the FSM and does not send the ‘go to the lobby’ event, the server FSM will trigger it on its side anyway. So the game server is secure.

Guards

Transition guards are FSM Transition vetoes. They can either cancel the Transition event, or force the Event to be an auto-transition instead of a State change.

  • pre-transition guards. A pre-transition guard nulls the Transition event, as if it never happened. Though this is not defined in any FSM literature I've read, I have found it invaluable under some circumstances, which is why it's been added to the framework.
  • post-transition guards (regular guards). This kind of guard coerces the transition from State A to State B into an auto-transition from State A to State A. Since a transition is still fired, the sequence of actions ExitA -> TransitionAB -> EnterA runs. The use case for this is a counting semaphore (see the sketch after this list). For example, a game table is in the ‘waiting for players’ State and needs 4 people to start. The FSM will only change to the ‘playing’ State once there are 4 people. The post-transition guard makes the ‘waiting for players’ -> ‘playing’ Transition fail until the counter reaches 4. Bonus points: the table can wait at most one minute after each player enters. Whenever the ‘waiting for players’ State exits, a timer is canceled, and whenever it enters, the timer is set again. The post-transition guard guarantees this behaviour since it still fires the transition. A pre-transition guard would inevitably fail in this scenario.
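
A sketch of that counting-semaphore guard in the declarative style shown earlier; the onGuard hook and the veto-by-throw convention are assumptions for illustration, not necessarily Automata's exact syntax:

{
    event   : "players-ready",
    from    : "waiting for players",
    to      : "playing",
    onGuard : function( session, transition ) {
        if ( session.playerCount < 4 ) {
            // veto: the transition downgrades to an auto-transition,
            // so 'waiting for players' exits and re-enters,
            // resetting its one-minute timer.
            throw "not enough players yet";
        }
    }
}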

No dependencies

Automata is a standalone package. It has no dependencies and works on both the client and the server.

FSM masking

Though not an FSM framework requirement, remote Session content masking is a must for multiplayer games. For security purposes, we don't want to share the remote game Session content with every connected client. For example, in Dominoes, we don't want to share each player's tiles with every connected user/player.
Masking makes sure only the necessary information is shared with each player.
This process is not integrated into the FSM framework, but all the tools are already there: there's a Session listener, and information is only sent to the connected clients whenever a state change happens. So the masking rules are not something inherent to the FSM itself, but external rules added on top of the events reflected from the core of the framework.

These are all the elements that for years have kept the multiplayer spaces I've worked on secure, traceable and deterministic. Let me know about your own experiences.

Trello test

I was about to create a Trello board when I saw the positions tab. Like a moth attracted to the light, I clicked on it (after all, I love the product and was curious about what positions were being offered).
And BOOM, there it was: nodejs developer. Curious, I clicked the link, and found that to apply, I needed to solve the following problem:

Write JavaScript (or CoffeeScript) code to find a 9 letter string of characters that contains only letters from: ‘acdegilmnoprstuw’ such that the hash(the_string) is 956446786872726.

Normally, I would not pay attention to this kind of problem. I was not even thinking of applying, but I don't know why, the problem got me engaged and I solved it in background thinking. Apparently the hash function gave higher numbers for longer strings, and higher values for letters later in the alphabet (or so it seemed). 30 minutes of coding and testing after that led to the following solution. It works, and it makes a very low number of tests to get the solution: 88 comparisons for the sample, actually. It could make even fewer comparisons by using a binary traversal of the alphabet. And it does not take the expected word length hint into account.
The solution is written in the worst JavaScript ever: imperative, but with no side effects. I am more curious about what they'd expect from a solution than about the solution itself. Algorithm complexity ? Functional code ? …

Is this a general inverse hash algorithm ? Of course not. This is only possible because it is a very simple hash function.


/**
 * Original hash function.
 * @param alphabet {string} alphabet composing the guessed words
 * @param s {string} string to get hash for
 *
 * @return {number} string hash value
 */
function hash (alphabet, s) {
    var h = 7;
    for( var i = 0; i < s.length; i++ ) {
        h = h*37 + alphabet.indexOf( s[i] );
    }
    return h;
}

/**
 * Find the word whose hash is the supplied value.
 * @param alphabet {string} alphabet composing the guessed words
 * @param find {number} the hash value to reverse
 */
function find (alphabet, find) {

    var letters= [];
    var hashValue;
    var word;

    // guess the hashed word's length: grow a word made of the first
    // alphabet letter until its hash value exceeds the one to find.
    while( true ) {
        letters.push( alphabet[0] );
        word= letters.join('');
        hashValue= hash( alphabet, word );
        if ( hashValue > find ) {
            letters.pop();
            break;
        } else if ( hashValue===find ) {
            // lucky boy
            return word;
        }
    }
    
    // guess the hashed word, one character position at a time
    for( var currentCharIndex=0; currentCharIndex < letters.length; currentCharIndex++ ) {

        for( var index=0; index < alphabet.length; index++ ) {
            letters[ currentCharIndex ]= alphabet[index];
            hashValue= hash( alphabet, letters.join('') );
            if ( hashValue===find ) {
                return letters.join('');
            } else if ( hashValue > find ) {
                letters[ currentCharIndex ]= alphabet[index-1];
                break;
            }
        }
    }

    return "not found :(";
}

// get the word with the given hash. 
// takes 88 iterations on the double nested for loop.
// result: trellises
console.log( find( "acdegilmnoprstuw", 956446786872726 ) );

// takes 44 iterations.
// result: leepadg
console.log( find( "acdegilmnoprstuw", 680131659347 ) );

Less Bunnymarks, more features

Yesterday, I was speaking with some friends about the latest state-of-the-art WebGL game engines (and/or renderers).

It was not long ago that I saw people mesmerized by pushing 1000 sprites to the screen. Of course that was using JavaScript, and a few years back. Nowadays, we can see 40K sprites, or bunnies, pushed to the screen on a 2013 MBAir at a steady 60fps. IMO, we are pushing the envelope, if not in the wrong direction, in a not too fair one.

Let me explain. The bunnymark test, which has been used as a reference for WebGL performance in some 2D engines (even in mine), showcases an ideal scenario. It is no more than a shiny marketing ad where conditions are truly unreal. A scenario with just one texture, one shader and gpu-calculated matrices is being presented as an engine's performance, while it is indeed a bare-bones test which just measures your GPU's buffer sub-data speed.

Don't take me wrong, this is not a rant. I have nothing against Pixi. I believe it is actually the de-facto standard for fast 2D WebGL-backed graphics for the web. A reference for many of us. Brilliant work. Bonus points for the fact that the @Goodboy guys are awesome.

Almost 4 years ago I wrote a post on how to make 2D WebGL faster. Even today, there's little I'd add on top of that post, except for buffer pinpointing, multi-texturing, triangles vs. quads, and probably the use of bufferData instead of bufferSubData. With these ‘old’ guidelines, this is the result of a Bunnymark test from my current daily job:
(image: CocosJS Bunnymark, original viewport 800×600)
The original viewport is 800×600 pixels and yes, the sprites are tinted on the fly while translating and rotating.

Is the result good or bad…? IMO, it is meaningless.

I can't help having the same sentiment: ‘hey, 100k bunnies @60fps on the screen’ === ‘an ideal scenario, like a pure marketing campaign’.
We are lucky enough to be at a moment where rendering is not our game bottleneck (at least on desktops).

I'd love to see more production tools, and fewer rendering wars. And this, I think, would be a good engine performance measure: how fast I can deliver working with it.

So next time someone asks me ‘can you do 100k bunnies/sec @60fps?’ I'd say: YES, of course (if the conditions are ideal). And if not, I could always use Pixi 🙂

Typescript

It's not the first time I've had to deal with a gigantic JavaScript codebase. While most of the side projects I've written are no more than 5K loc, CAAT was near 30K, and at my current job I will likely end up managing a codebase of between 50-100K loc.
It is fairly complicated to keep things up and running in such a big codebase when error checking has to be deferred to the runtime environment and different people contribute code. The nature of the code I write is mostly visual, and things only a human being can judge as right or wrong tend to be quite complicated to automate, either at the unit or functional testing level. For example, is that gradient-filled shape right ?, etc.
While writing CAAT, I had to manage all the basic platform stuff on my own: (prototypical) inheritance, module management, dependencies… all had to be solved either by myself or by external libraries, not by the language itself. The good news is that JavaScript can mostly fix itself.
Basic features of a typed language, like code refactoring, or keeping the emitted code VM-friendly and hinted for performance, are so tied to the core of my development that I simply don't want to deal with them on my own.
For these and so many other reasons, I took a shot at TypeScript.

Conclusion: it pays for itself after 10 minutes of use, especially when putting the stress on long-term maintainability and code quality.

Let me enumerate some of the goodies I get while writing TypeScript code:

  • Refactoring: my IDE understands the code semantically. It allows for deep type and element refactoring on the fly.
  • ES6: TypeScript brings many ES6 language features to the table. By the time ES6 is around, all my TypeScript code will be almost a one-to-one mapping that needs no changes.
  • Strongly typed: modern JavaScript VMs do deep type inference to JIT the code. While TypeScript is not executable in the browser and has to be transpiled to JavaScript, being strongly typed it guarantees type consistency, something that will surely help your codebase's performance.
  • JavaScript VM-friendly transpiled code: while some time ago the advice was to define properties on prototypes (while keeping prototype chains short), today, if you want to take advantage of hidden classes and already-optimized code, properties should be assigned in constructor functions (see the sketch after this list).
  • Solved extension mechanism: it ships with a very simple prototypical extension mechanism.
  • Solved modularity: module information is written for you.
  • Super light overhead: the resulting code is just the original code without the type information.
  • Since 1.4, union types and type aliases leverage TypeScript's capabilities, letting you switch from type:any to a union type.
  • No learning curve at all.
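
A quick sketch of that hidden-classes point, in plain JavaScript rather than TypeScript's literal emitted output:

// older advice: properties on the prototype
function SpriteOld() {}
SpriteOld.prototype.x = 0;
SpriteOld.prototype.y = 0;

// VM-friendly: assign properties in the constructor, the way
// transpiled TypeScript member initializers do. Every instance
// gets the same shape, so the JIT can specialize property access.
function Sprite(x, y) {
    this.x = x;
    this.y = y;
}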

Nowadays, there are TypeScript mapping files for all the major libraries around: webgl, nodejs, (node io ?? :), webaudio, etc. And being myself so close to HTML5 gaming, most modern HTML5 engines are either written in TypeScript, like CocosJS or Egret, or have TypeScript mappings, like Phaser.

I’d rather have a compiler on my side. Ever.

Dynamically Proxying objects and Wrapping functions.

Some time ago I needed a mechanism to make debugging CAAT and the other JavaScript projects I'm working on easier. I have a strong Java background, and I'm used to dynamically proxying most of my core objects, mainly for safety reasons, but also for ease of development and to do some aspect-oriented programming.
So I've come up with a JavaScript proxy solution which, upon a call to any of an object's methods, lets me hook that call in at least three stages:

  • Pre method call, which allows wrapping, changing or modifying the method call's parameters.
  • Post method call, which allows modifying the method call's return value.
  • On exception thrown, which allows returning a value on exception, preventing the exception from propagating.

Of course, these are just some examples of why I would be willing to use proxies. One more compelling reason is logging code activity without sprouting alert/console.log sentences all around my beautifully crafted code.

While in Java we have incredibly powerful introspection mechanisms, as well as an InvocationHandler ready to be used, the JavaScript implementation will be a little less ambitious; I mean, the code security concerns of Java's dynamic proxies will be left out of the implementation.
The use of this proxy to wrap a whole object is as follows:

    // Define a module/namespace whatever you call it.
    var Meetup= Meetup || {};
    
    // Define an 'object class'
    (function() {
     Meetup.C1= function(p) {
       this.a= p;
       return this;
     }
    
     Meetup.C1.prototype= {
       a: 5,
       b: function() {
         return this.a*this.a;
       },
       c: function(p1,p2) {
          return this.a+p1/p2;
       },
       err: function() {
           throw 'mal';
       }
     };
     })();
    
    var c0= new Meetup.C1(10);
    
    // Instantiate and wrap the object into a proxy:
    // This cp0 object will behave exactly as c0 object does.
    var cp0= proxy(
            c0,
            function(ctx) {
                console.log('pre method on object: ',
                        ctx.object.toString(),
                        ctx.method,
                        ctx.arguments );
            },
            function(ctx) {
                console.log('post method on object: ',
                        ctx.object.toString(),
                        ctx.method,
                        ctx.arguments );
    
            },
            function(ctx) {
                console.log('exception on object: ',
                        ctx.object.toString(),
                        ctx.method,
                        ctx.arguments,
                        ctx.exception);
    
                return -1;
            });
    
            

With this code, by calling cp0.b(), the following method calls are performed:

  • Call the anonymous pre-method function.
  • Call the proxied object's ‘b’ method.
  • If the proxied object's method call went right, call the anonymous post-method function; otherwise (on exception toss), call the anonymous on-method-exception function.

This, with some little extra work, could be considered aspect-oriented function calls. These hook functions receive as their parameter an object of the form:


    {
      object: the-proxied-object,
      method: evaluated-proxied-object-method-name,
      arguments: the-evaluated-method-arguments,
      exception: if-exception-thrown-on-error-hook--the-exception-thrown
    }

It is up to the developer what to do with this information in each hook function, but in the example, they've been set up as activity-logging functions. An example of the result of executing cp0.c(1,2) would be:


    pre method on object: Meetup.C1 {a: 10} c [ 1, 2 ]
    post method on object: Meetup.C1 {a: 10} c [ 1, 2 ]
    10.5

(which should be read as: pre-method call on object Meetup.C1 {a: 10}, method ‘c’, with method arguments [1, 2].)

When proxying a simple function, the code downgrades to function wrapping. In this case I'm keeping the same call scheme. An example would be as follows:

    function ninja(){
      console.log("ninja running" );
    };
    
    var pninja= proxy(
            ninja,
            function(context) {
                console.log('pre method on function: ',
                        context.fn,
                        context.arguments );
            },
            function(context) {
                console.log('post method on function: ',
                        context.fn,
                        context.arguments );
            },
            function(context) {
                console.log('exception on function: ',
                        context.fn,
                        context.arguments );
                return -1;
            });
            

As you can see, when wrapping functions, the context supplied to the hook functions is of the form:

    {
      fn: the-wrapped-function,
      arguments: the-wrapped-function-arguments
    }

The proxy function itself is the following:

    function proxy(object, preMethod, postMethod, errorMethod) {
    
        // proxy a function
        if ( typeof object=='function' ) {
    
            if ( object.__isProxy ) {
                return object;
            }
    
            return (function(fn) {
                var proxyfn= function() {
                    if ( preMethod ) {
                        preMethod({
                                fn: fn,
                                arguments:  Array.prototype.slice.call(arguments)} );
                    }
                    var retValue= null;
                    try {
                        // apply original function call with itself as context
                        retValue= fn.apply(fn, Array.prototype.slice.call(arguments));
                        // everything went right on function call, then call
                        // post-method hook if present
                        if ( postMethod ) {
                            postMethod({
                                    fn: fn,
                                    arguments:  Array.prototype.slice.call(arguments)} );
                        }
                    } catch(e) {
                    // an exception was thrown, call exception-method hook if
                        // present and return its result as execution result.
                        if( errorMethod ) {
                            retValue= errorMethod({
                                fn: fn,
                                arguments:  Array.prototype.slice.call(arguments),
                                exception:  e} );
                        } else {
                            // since there's no error hook, just throw the exception
                            throw e;
                        }
                    }
    
                    // return original returned value to the caller.
                    return retValue;
                }
                proxyfn.__isProxy= true;
                return proxyfn;
    
            })(object);
        }
    
        /**
         * If not a function, then only non-primitive objects can be proxied.
         * If it is a previously created proxy, return the proxy itself.
         */
        if ( typeof object!=="object" ||
                object.constructor==Array ||
                object.constructor==String ||
                object.__isProxy ) {
    
            return object;
        }
    
        // Our proxy object class.
        var cproxy= function() {};
        // A new proxy instance.
        var proxy= new cproxy();
        // hold the proxied object as member. Needed to assign proper
        // context on proxy method call.
        proxy.__object= object;
        proxy.__isProxy= true;
    
        // For every element in the object to be proxied
        for( var method in object ) {
            // only function members
            if ( typeof object[method]=="function" ) {
                // add to the proxy object a method of equal signature to the
                // method present at the object to be proxied.
                // cache references of object, function and function name.
                proxy[method]= (function(proxy,fn,method) {
                    return function() {
                        // call pre-method hook if present.
                        if ( preMethod ) {
                            preMethod({
                                    object:     proxy.__object,
                                    method:     method,
                                    arguments:  Array.prototype.slice.call(arguments)} );
                        }
                        var retValue= null;
                        try {
                            // apply original object call with proxied object as
                            // function context.
                            retValue= fn.apply( proxy.__object, arguments );
                            // everything went right on function call, the call
                            // post-method hook if present
                            if ( postMethod ) {
                                postMethod({
                                        object:     proxy.__object,
                                        method:     method,
                                        arguments:  Array.prototype.slice.call(arguments)} );
                            }
                        } catch(e) {
                            // an exception was thrown, call exception-method hook if
                            // present and return its result as execution result.
                            if( errorMethod ) {
                                retValue= errorMethod({
                                    object:     proxy.__object,
                                    method:     method,
                                    arguments:  Array.prototype.slice.call(arguments),
                                    exception:  e} );
                            } else {
                                // since there's no error hook, just throw the exception
                                throw e;
                            }
                        }
    
                        // return original returned value to the caller.
                        return retValue;
                    }
                })(proxy,object[method],method);
            }
        }
    
        // return our newly created proxy object, populated with functions.
        return proxy;
    }
            

Despite being a powerful beast, the proxy model has some drawbacks. For example, attributes (an object's non-function elements) can't be proxied, since they aren't evaluated via a function call (I'll have to take a look at the __defineGetter__ and setter functions though). Also, methods dynamically added to an object afterwards won't be proxied by an already-defined proxy object, but you always have the opportunity to create a new proxy for that object.
Also, not every object type can be proxied. Concretely, the String and Array primitive object types can't, so the proxy function checks in the first instance whether the supplied object is eligible to be proxied. If not, the unchanged object is returned.

One more thing to tell about the proxy function is its own definition. It is defined as a global function because it makes no sense to add it to Object's prototype since, as I said, not every object type is eligible to be proxied.

This technique has proven very useful to me. While developing, I'm just passing objects' proxies back and forth through the code, and they seamlessly behave as the regular objects. Given that one single object can have more than one proxy, I have the ability to hook different extra behaviors to the original object by passing different proxies in different situations. In the end, when I'm finished with the development/debug phase, I simply disable the proxy functionality by supplying a proxy function which just returns the passed object. Maybe this technique is only suitable for very object-centric developments, but I've found it very valuable.

Maybe I'm wrong, let me know what you think.

CAAT's WebGL implementation notes.

It's been some time since I started coding to provide CAAT with transparent, seamless WebGL integration.
A 2D rendering engine is not one of the best scenarios for 3D acceleration, and here I'm showing some of the techniques I've had to develop to provide the best acceleration scenario possible.

It needs to be said that CAAT will use hardware acceleration if available, and will transparently fall back to canvas if not. Here you can find a mathmayhem game implementation which tries to use WebGL as its renderer. The game suffered no changes at all: you can just tell CAAT to use WebGL by calling the Director's initializeGL method. All the other development issues are covered by CAAT itself.

First of all, some notes about WebGL's needs. Despite us (developers) being thorough regarding our class hierarchies and animation actor collections, the hardware acceleration layer is not aware of, or interested in, our data organization at all. WebGL is a rasterizer (a very efficient one indeed), so to keep it at high FPS rates, I had to switch from object-oriented to data-oriented development. In any professional 3D engine it is a must to keep shader switch operations to a minimum, that is, sorting your scene assets by shader and by geometry. Achieving this objective has been priority nº 1 in CAAT.

Texture packing

One of the most obvious ways to lose performance with WebGL rendering is using tons of textures for the different scene actors. Instrumenting your shaders to change from texture to texture will immediately drop your scene's FPS. Besides, not every texture size is suitable for WebGL. Firefox 4 complains about non-2^n texture sizes (needed for mipmapping, btw) and Chrome drops performance when such texture sizes are not used (MacOS X, Chrome 10 stable).

CAAT's solution is to pack textures automatically into glTextures of the programmer's desired size (2048×2048 by default). The system will try to pack textures into different ‘texture pages’, transparently keeping track of each texture element's position in glTexture space, so that when selecting a texture for a sprite the programmer has nothing to take into account. This process absolutely minimizes texture switching, reaching its best efficiency when only one texture page is created (a 2048×2048 texture space should be enough). The mathmayhem game, despite not having its images size-optimized at all, fits perfectly in one single texture page.

ZOrder

A 2D rendering engine must perform back-to-front pixel overwriting, so the Z depth test must be disabled. Blending must be enabled so that transparent/translucent images are properly rendered. One oddity about blending is that the background color of the DOM element the canvas is embedded in will make the blending function saturate toward that color. So if you just have a page with the canvas and the body is white, your blending function will show wrongly. You should embed the canvas in a div and set that div's background color to black (#000); otherwise, your blending won't show as expected.
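
A minimal sketch of that GL state setup, assuming non-premultiplied alpha (the exact blend function depends on how the textures are uploaded):

gl.disable( gl.DEPTH_TEST );                            // back-to-front, no Z test
gl.enable( gl.BLEND );                                  // translucent images
gl.blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA );   // standard alpha blending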

Transformations

The bad news about a 2D rendering engine is that every single sprite on screen can have its own transformation. This means a huge amount of processor time devoted to matrix computations. CAAT has translation, rotation around a rotation pivot, and scale around a scale pivot. That means 7 matrix multiplications for each Actor in the Scene. In fact, the matrix operations MUST be performed regardless of the scene's complexity. In a scenario where you have, let's say, a fixed 100K-polygon model, only one matrix must be calculated and uploaded to the graphics card via uniform parameters. But in a Scene with 1000 different actors (2000 triangles), 1000 matrices must be calculated and uploaded to the graphics card. That will undoubtedly kill your frame rate. So CAAT's approach is different.

CAAT simply transforms 2D coordinates to Screen/World space via JS code, and buffers these coordinates in a GL buffer. The buffered coordinates are rendered at once whenever an actor requests an image from a different texture page than the currently selected one, an actor's paintGL method requests a flush, or an alpha value different from the currently set one is requested. I've made some tests and this is by far more efficient than uploading matrices via uniform parameters.

Every actor in CAAT is drawn as 2 triangles, so using triangle strips is not a real possibility.
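
As a sketch of the approach: transform each actor's quad on the CPU and append it to a shared coordinate buffer, flushing with a single draw call when one of the conditions above is met. The matrix fields a, b, c, d, tx, ty assume a 2D affine matrix, and bufferQuad/flush are illustrative names, not CAAT's actual code:

function bufferQuad( matrix, w, h, coords, offset ) {
    // two triangles per actor (strips are not an option, as noted above)
    var corners = [ [0,0], [w,0], [w,h],   [0,0], [w,h], [0,h] ];
    for( var i = 0; i < 6; i++ ) {
        var x = corners[i][0], y = corners[i][1];
        coords[offset++] = matrix.a * x + matrix.c * y + matrix.tx;
        coords[offset++] = matrix.b * x + matrix.d * y + matrix.ty;
    }
    return offset;
}

function flush( gl, coords, count ) {
    // upload the pre-transformed coordinates and draw them all at once
    gl.bufferData( gl.ARRAY_BUFFER, coords.subarray( 0, count ), gl.STREAM_DRAW );
    gl.drawArrays( gl.TRIANGLES, 0, count / 2 );
}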

Also, to minimize the matrix calculation processor sink, CAAT tracks two different transformation matrices for each Scene actor: one for the local transformation, called modelViewMatrix, which tracks translate/rotate/scale, and one for the world transformation, called worldModelViewMatrix, which handles positioning in world coordinates. In a 2D world, a normal scenario is that of containers of actors which compose their transformation with that of their contained children. So whenever a container is, for example, rotated, this transformation must be propagated to every contained actor. CAAT implements matrix caching plus dirtiness flags, which keep matrix operations to a minimum.

A test in which 1000 sprites are rotating/scaling is not a real one, but a stress test.

Shaders

CAAT does the whole process with one single shader, which is able to render either from texture space (calculated by the texture packer) or from a color, complemented by the alpha composition component set. The idea, as before, is interacting with the shader as little as possible to avoid the GL driver penalty.

One more thing I recommend when using this GL implementation is creating sprite sheets with different alpha values applied to small images, thus avoiding flushing the current geometry just to set the alpha channel, and instead selecting another image index from the sprite sheet. This should be done for small textures only, though.

At this point, I can only suggest going to the game implementation and watching all these processes in action. Let me know how sumon-webgl performs for you. Needless to say, all these processes are worthless if we face a bad GL driver implementation.