WebDev, March 4th

On March 4th we held the first WebDevs edition at the Universidad de Deusto. Between 60 and 70 people attended, and the venue was spectacular: the conference room of the Deusto Business School.
Thanks to Txipi and the Universidad de Deusto for treating us like royalty. And of course, thanks to everyone for attending. For me, it was one of the best meetups I have ever been to.

The speakers were:

Naiara Abaroa, @nabaroa, gave an impeccable presentation on CSS topics such as GridLayout, CSS Modules, and more. Here is a link to her slides, carefully crafted and spectacular.


Hugo Biarge, @hbiarge, with a very powerful presentation on components in Angular 1.5 and their hidden corners. You can get the slides here.


Myself, Ibon Tolosana, talking about how to take advantage of some not-so-hidden secrets of the Chrome dev tools. I made no slides, since it was mostly live coding, so I opened the Chrome console and worked directly in it. You can heckle me at @hyperandroid.


Future editions?

The meetup format is open. There are no official organizers, no bosses. As we already said, our motivation is nothing but the simple wish to give a little back to the community. We constantly benefit from open source code, frameworks and public knowledge, and with this meetup I hope we have balanced the karma a bit in that regard.

I would love there to be more editions, and to attend as part of the audience. I would like to see presentations in the same format, 25-30 minutes (I know, I went waaaay over time), whether dense, deep or introductory, it doesn't matter. On topics as diverse as:

  • WebVR
  • an overview of the JavaScript ecosystem (webpack, gulp, browserify, babel, TS, etc. etc.)
  • three js
  • css3 transitions and transformation live coding
  • vertx
  • node internals
  • Pixi.js
  • React+Redux internals
  • Hybrid JS-native applications
  • V8 for dummies or pros.
  • WebComponents
  • etc. etc.

Come on, let's keep the wheel turning. See you on April 4th?

 

Efficient WebGL stroking

While working on WebGL rendering you soon realize that a canvas-like API, despite its opinionated implementation, is extremely useful and easy to work with. Specifically, I am referring to its stroking capabilities. While writing a WebGL stroke implementation I came up with the following solution, which requires very simple math, is fast, and can be implemented with nothing more than 2D vectors.

The Canvas API offers support for plenty of cool stuff like line width, line cap and line join values, etc., and so will we. Here's an example of the implementation:

Efficient WebGL stroking live demo:

https://hypertolosana.github.io/efficient-webgl-stroking/index.html

  • Check ‘show lines’ to see the internally calculated anchors and vectors used to set up the tessellation.
  • The red points and the green arrows can be dragged to see live results.
  • ‘Draw with canvas’ is enabled so that the calculated triangles are superimposed, for visual comparison between the algorithm and the canvas API.
  • Select different line cap and line join values.
  • The pink line (the intersection point subtracted from the center point) will disappear when the angle is too narrow to base the tessellation on the intersection point. Note that an intersection point does not always exist.

For the implementation to work, a vector of Points and some line attributes are needed. These points define a poly-line: a contour we want to tessellate. In Cocos2d-html5 v4 we have a general-purpose Path class, which classifies its segments into contours, and it could be a good starting point if nothing else is around.

First steps, just stroke the poly-line

If we were to create quads for each line segment of our contour, we would end up with wrong results: too much overdraw, and no fancy corners. To avoid this, the supplied data must be preprocessed a bit if smooth line joins and correctly rendered closed contours are a requirement. Specifically, we want to split each contour segment (except the first and last ones) into two different segments by creating a new middle point.
This is the result of a straightforward algorithm that creates a straight line for every two points; thus, for every 3 points, we get two straight line segments.

Data setup will be as follows:

  var i;

  for ( i = 0; i < points.length - 1; i++) {
    if ( i===0 ) {
      midp.push( points[0] );
    } else if ( i===points.length-2 ) {
      midp.push( points[points.length-1] );
    } else {
      midp.push( Point.Middle(points[i], points[i + 1]) );
    }
  }

  // for every middle point, create all the necessary triangles for the two segments it represents.
  for ( i = 1; i < midp.length; i++) {
    createTriangles( 
      midp[i - 1], 
      points[i], 
      midp[i], 
      resulting_array_of_points, 
      lineWidth,
      join, 
      cap, 
      miterLimit );
  }

Second, we must calculate the correct line width based on the current transformation matrix. To do so, we create two vectors, Vector( 0, 0 ) and Vector( lineWidth, 0 ), both of which are transformed by the current transformation matrix. The magnitude of the vector between the two transformed points will be our actual line width:

  var v0= new Vector();
  var v1= new Vector(lineWidth, 0);
  currentMatrix.transformPoint( v0 );
  currentMatrix.transformPoint( v1 );

  lineWidth= Vector.sub( v0, v1 ).getModule();

The result from stroking 3 points at (100,100), (300,100), and (300,300) with a line width of 80, and assuming an identity transformation matrix would be the following:

And for 4 points at (100,100), (300,100), (300,300), and (100,300) we’d get the following:

In this example the resulting triangles include a filler triangle for the union of the two segments, corresponding to a “bevel” line join.
It is important to note that in these examples no middle point is created for 3 points, but one is created for 4 points, leading to two 3-point (that is, two-line-segment) tessellation blocks. More points will create more middle points.

But how is all this calculated?

From each line segment, a vector is calculated (blue), and normal vectors with a size of half the line width are created too (green). These normal vectors are added to the segment points (red circles), creating 6 new points, which are the tips of the green arrows.
Between each pair of the newly created external points, a line is traced (red). The intersection point of the two red lines, and the point obtained by subtracting it from the middle point, are very important (blue dots). If the lines do not cross, the segments are parallel, and the tessellation is straightforward: just create the two triangles that compose it.
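
The intersection of the two red lines can be computed with a couple of 2D cross products. A minimal sketch (the helper name and the plain {x, y} point shape are my assumptions, not the post's actual code):

```javascript
// Intersection of two infinite 2D lines, each given by a point and a
// direction vector. Returns null when the lines are (nearly) parallel.
function intersectLines(p0, d0, p1, d1) {
  // 2D cross product of the two directions; zero means parallel lines.
  var det = d0.x * d1.y - d0.y * d1.x;
  if (Math.abs(det) < 1e-12) {
    return null; // parallel segments: just emit the two quad triangles
  }
  // Solve p0 + t*d0 === p1 + s*d1 for t.
  var t = ((p1.x - p0.x) * d1.y - (p1.y - p0.y) * d1.x) / det;
  return { x: p0.x + t * d0.x, y: p0.y + t * d0.y };
}
```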

With the segment points, the ‘bottom’ blue point, and the calculated normal points for the line segments, we can already simply tessellate the segment. The result will be the one we have in the previous images.

For example, the first tessellated triangle would be: (p0+t0), (p0-t0), (p1+anchor), being:

  • p0, p1: the first two line segment points. (red dots)
  • t0: the external perpendicular vector to each p0 and p1 (light red arrow)
  • anchor: the vector resulting from subtracting the intersection point from the last (p1) line segment point.

And another one would be: (p0+t0), (p1+t0), (p1+anchor)

From there, it is just a matter of continuing to divide the space for the rest of the points.

My two lines do not intersect

Sometimes there is no intersection point (red lines). In this case, or when the angle between the two segments is very narrow (the vector from the center point to the intersection, shown in magenta in the playground, is longer than either of the segments), the algorithm switches to a different tessellation strategy, since otherwise this would be a failure case. Since there is no intersection point, some elements, like the miter, must be calculated differently.

Line Joins

Line joins are the tessellation between every two line segments. The most basic one is the bevel line join, which is trivially a single triangle whose tessellation information is already available:
[image: bevel join]

A round join needs to draw an arc with its center at p1 (the middle point) and a radius of the distance between p1 and p1+t0, for example. The result would be:
[image: round join]
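
A round join (and a round cap) can be emitted as a triangle fan. A hedged sketch, with names of my own choosing rather than the post's actual code:

```javascript
// Emit a triangle fan approximating an arc: all triangles share `center`,
// sweeping from angle a0 to a1 with the given radius. `out` receives the
// triangle vertices, 3 per triangle.
function roundFan(center, radius, a0, a1, segments, out) {
  var step = (a1 - a0) / segments;
  for (var i = 0; i < segments; i++) {
    var b0 = a0 + i * step;
    var b1 = a0 + (i + 1) * step;
    out.push(center);
    out.push({ x: center.x + radius * Math.cos(b0),
               y: center.y + radius * Math.sin(b0) });
    out.push({ x: center.x + radius * Math.cos(b1),
               y: center.y + radius * Math.sin(b1) });
  }
  return out;
}
```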

Another line join type is the miter. The following image describes it:
[image: miter join]
This type of line join is controlled by a parameter, the miter limit. The miter is the ratio between the length of the anchor vector (magenta) and half the line width. (Remember that the line width is used to grow the tessellation segment in both line segment point directions.) It is also important to note that the miter limit calculation must be clamped to an integer (something not mentioned even in the Canvas spec).
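
The miter-limit check described above can be sketched like this (function and parameter names are mine, not the post's actual API):

```javascript
// Decide whether a miter join must fall back to bevel: compare the ratio
// between the anchor (miter) length and half the line width against the
// miter limit. The ratio is truncated to an integer, as noted above.
function miterExceedsLimit(anchorLength, lineWidth, miterLimit) {
  var miterRatio = anchorLength / (lineWidth / 2);
  return Math.floor(miterRatio) > miterLimit;
}
```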

For segments with a narrow angle between them, the miter would have to be very long to cover the join area, leading to odd visuals like in the example:
[image: miter too long]
This can be controlled with the miter limit parameter, which will prevent the miter join (switching to bevel) if the value is too high.

Line cap

Line caps control how the start and the end of a poly-line look. With a butt value, the line looks exactly as tessellated, with no extra triangles, like in the image:
[image: butt cap]

Other cap types are available: square
[image: square cap]

And round:
[image: round cap]
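
A square cap, for instance, just extends the end quad by half the line width along the segment direction. A sketch under my own naming assumptions:

```javascript
// Quad (as 4 corner points) for a 'square' line cap. p is the end point of
// the poly-line, dir a unit vector pointing out of the line, and t the
// perpendicular half-line-width vector at p.
function squareCapQuad(p, dir, t, halfWidth) {
  // extend the end point outwards by half the line width.
  var q = { x: p.x + dir.x * halfWidth, y: p.y + dir.y * halfWidth };
  return [
    { x: p.x + t.x, y: p.y + t.y },
    { x: p.x - t.x, y: p.y - t.y },
    { x: q.x + t.x, y: q.y + t.y },
    { x: q.x - t.x, y: q.y - t.y }
  ];
}
```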

Triangle signed area

In all cases, the signs and directions of the perpendicular or line cap vectors must be taken into account. For example, it is mandatory to get the winding direction of the triangle formed by each set of 3 points before tessellation, since otherwise odd visual results will arise. If this is not addressed, the following result will be seen:
[image: wrong winding artifact]
What happens here is that the intersection lines (the red lines in the previous examples) are being calculated on the inner side instead of the outer one. It is important to appropriately calculate the signed area of the triangle defined by the 3 line segment points, which gives not only the area of the triangle but also a sign, describing whether the points are defined clockwise or counterclockwise.
In our case, if the signed area is greater than 0, describing a counterclockwise triangle, we invert the vectors that define the intersection lines. These vectors are always shown pointing outwards in light green.
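
The signed-area test itself is tiny; a minimal sketch assuming plain {x, y} points:

```javascript
// Signed area of the triangle (p0, p1, p2): positive for counterclockwise
// winding, negative for clockwise, zero for collinear points.
function signedArea(p0, p1, p2) {
  return 0.5 * ((p1.x - p0.x) * (p2.y - p0.y) -
                (p2.x - p0.x) * (p1.y - p0.y));
}
```

If the result is greater than 0 for the 3 points of a tessellation block, the perpendicular vectors are inverted so the intersection lines fall on the outer side.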

Analogously, the line cap vectors must be calculated appropriately so that the caps point in the right direction.

Also, when round caps or joins are used, the arc angle must be calculated as the minimum angle.

Closed contours

For closed contours to be properly tessellated, some extra work must be done. A closed contour is one whose first and last points are the same, either the same reference or the same coordinates. The process is simple:

  • calculate a new point, the middle point between the first and second contour points.
  • remove the first point.
  • add the calculated point twice: at the beginning and at the end of the contour points.
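
The three steps above can be sketched as follows (plain {x, y} points; `middle` stands in for the Point.Middle helper used earlier in the post):

```javascript
function middle(a, b) {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// Preprocess a closed contour (first point equal to last point) so joins
// are tessellated correctly all around, including at the seam.
function preprocessClosedContour(points) {
  // 1. middle point between the first and second contour points.
  var m = middle(points[0], points[1]);
  // 2. remove the first point.
  var rest = points.slice(1);
  // 3. add the middle point at both the beginning and the end.
  return [m].concat(rest, [m]);
}
```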

These are the results. Without processing:
[image: closed contour, wrong tessellation]

And after processing the collection of points:
[image: closed contour, correct tessellation]

It is important to note that closed contours don't get any line cap applied.

Trivial optimizations

As far as I can tell, the most trivial optimization is switching to bevel joins and butt caps when the calculated line width is too small (<1.5).
It is also important to tessellate rounded arcs with a triangle count based on the length of the arc under the current transformation matrix. You don't want perfectly tessellated arcs with 200 triangles each.
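
That heuristic could look like this (the pixels-per-segment constant and the clamp range are arbitrary choices of mine):

```javascript
// Choose how many triangles to spend on a round join/cap from its on-screen
// arc length: angle * radius * current matrix scale.
function arcSegments(radius, angle, scale) {
  var screenLength = Math.abs(angle) * radius * scale;
  // roughly one segment every 5 pixels, clamped to a sane range.
  return Math.max(4, Math.min(64, Math.ceil(screenLength / 5)));
}
```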

Limitations

The algorithm is totally unoptimized.
It is also not suited for situations where many segments overlap the same area, for example very twisted poly-lines. In those cases, the tessellation won't be correct.
There is also some overdraw in the generated triangles. If transparency is used, the overdraw will show. You could first draw the triangles to the stencil buffer, though (a weak solution, I know).

Playground

Efficient WebGL stroking playground live demo:

https://hypertolosana.github.io/efficient-webgl-stroking/playground.html

Finite State Machine

A finite-state machine, or finite automaton, is a model of computation: an abstract machine defined by a set of States and the set of events that trigger State changes, called Transitions. The model states that the Machine can be in only one State at any given time, which is called the CurrentState.

It's been a long time since I first met FSMs, probably the early 2000s, when I was CTO at a social multiplayer games site where people played against people in real time. Every game context had to be held remotely, and had to be as secure and deterministic as possible, to name a few of the core features. Modeling a whole game, cards, table, etc., and making it fulfill those features is not an easy task unless you rely on a powerful framework. The best solution I came up with was FSMs, which allowed me to build whole game models declaratively.

Some years ago I developed a formal FSM implementation in JavaScript, available as a client-side library or an npm-hosted node package. I am currently using it in some multiplayer games, and the Automata code is still in good shape. These are some features of the library:

Declarative definition

The FSM must be defined declaratively. You must be able to declare the set of States and Transitions in a JSON object. The FSM definition must be unique and non-instantiable: for example, there's only one definition of the Dominoes game.
The FSM model must allow deep FSM nesting, so that a State can spawn a whole new FSM. Think of a nested behavior, where a game table's 'playing' state is a whole 'dominoes' FSM.
Each stacked FSM will have its own FSM context, like a stack trace of States.

An example declaration could be:

context.registerFSM( {

    name    : "Test2",
    logic   : function() { 
        // convention function called when State 'a' exits.
        this.aExit = function(...) { ... };

        // convention function called when State 'b' enters.
        this.bEnter = function(...) { ... };
    },

    state  : [
        {
            name    : "a",
            initial : true,
            onTimer : {         // Timed transition.
                timeout: 4000,  // after 4 seconds.
                event: {
                    msgId: "ab" // on timeout, fire transition "ab".
                }
            }
        },
        {
            name    : "b"
        },
        {
            name    : "c"
        }
    ],

    transition : [
        {
            event       : "ab",
            from        : "a",
            to          : "b"
        },
        {
            event   : "bc",
            from    : "b",
            to      : "c"
        }
    ]
} );

Session

For each dominoes game played, a game Session is created. Sessions share common initial conditions and change when Transitions are triggered. A Session exposes full lifecycle events for:

  • context creation: each time a new FSM is pushed to the session.
  • context destruction: each time an FSM pops.
  • final state reached: once the final state is reached, the Session is empty and no further interaction can happen on it.
  • state changed: the CurrentState changes. It could be an auto-transition, which means the previous and Current States will be the same.
  • custom event exposure: whenever you want to notify the external world of an event.

This lifecycle allows for full FSM traceability and deterministic behavior. Whenever the 'state changed' callback is invoked, you can save the event that triggered the change. If all the events that fire a state change are saved sequentially, and later fed to a freshly created session, you will always get the same results. If a user reports a bug and you have previously saved the game's 'state change' message collection, you will magically reproduce the same bug. The FSM framework guarantees code traceability. Black magic!!
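
To illustrate the record-and-replay idea, here is a toy sketch (NOT Automata's actual API) of a session that applies transitions from a declarative definition and logs every event that caused a state change:

```javascript
// Minimal event-sourced session: dispatching events walks the transition
// table, and the log of applied events fully determines the final state.
function createSession(transitions, initialState) {
  var current = initialState;
  var log = [];
  return {
    dispatch: function (event) {
      for (var i = 0; i < transitions.length; i++) {
        var t = transitions[i];
        if (t.event === event && t.from === current) {
          current = t.to;
          log.push(event); // persist this log to replay the whole game later
          return;
        }
      }
    },
    state: function () { return current; },
    events: function () { return log.slice(); }
  };
}

// Same transition table as the "Test2" FSM declared earlier.
var transitions = [
  { event: "ab", from: "a", to: "b" },
  { event: "bc", from: "b", to: "c" }
];

var live = createSession(transitions, "a");
live.dispatch("ab");
live.dispatch("bc");

// Feeding the saved events to a fresh session reproduces the exact state.
var replayed = createSession(transitions, "a");
live.events().forEach(function (e) { replayed.dispatch(e); });
```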

As an implementation note, it is desirable for the Session object to be serializable to JSON. Saving and restoring the game session on the fly is definitely a feature you want at hand.

Lifecycle

FSM elements expose a full lifecycle:

  • enter State
  • exit State
  • transition fired
  • pre and post guard events

These callback hooks are invoked by the FSM engine to notify of the named events. The 'transition fired' event is normally the point where the Session data changes. The 'enter/exit State' events are typically the points to manage the Session: setting timers, ending the Session, initializing sub-State FSMs, etc.

Timed transitions

It is desirable to have a per-state event scheduler. A very common use case is the following: a game must start within a certain amount of time. When the FSM enters the 'start game' state, a timer is triggered. If the FSM does not change state to 'playing' within (for example) 30 seconds, the CurrentState automatically receives a transition event to the 'end game' state, instead of keeping the players locked at the game table forever.
Another thing to note is that the server hosting the remote FSM can time out in, for example, 1 minute, and the game clients in half that time. Legit clients will have a local timeout of 30 seconds to start the game, and will request 'go to the lobby' otherwise. If an unfair client is connected to the FSM and does not send the 'go to the lobby' event, the server FSM will trigger it on its side anyway. So the game server stays secure.

Guards

Transition guards are FSM Transition vetoes. They can either cancel the Transition event, or force it to be an auto-transition instead of a State change.

  • pre-transition guards. A pre-transition guard nulls the Transition event as if it never happened. Though this is not defined in any FSM literature I've read, I have found it invaluable under some circumstances, which is why it's been added to the framework.
  • post-transition guards (regular guards). This kind of guard coerces the transition from State A to State B into an auto-transition from State A to State A. Since a transition is still fired, the sequence of actions ExitA -> TransitionAB -> EnterA is executed. The use case for this is a counting semaphore. For example, a game table is in the 'waiting for players' state and needs 4 people to start. The FSM will only change to the 'playing' State once there are 4 people. The post-transition guard makes the 'waiting for players' -> 'playing' Transition fail until the counter reaches 4. Bonus points: the table can wait at most one minute after each player enters. Whenever the 'waiting for players' State exits, a timer is canceled, and whenever it enters, the timer is set. The post-transition guard guarantees this behavior, since it still fires the transition. A pre-transition guard would inevitably fail in this scenario.
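
The counting-semaphore use case can be sketched like this (a toy illustration of the behavior, not Automata's literal guard API):

```javascript
// Guard for 'waiting for players' -> 'playing': the transition is vetoed
// until `needed` players have joined.
function makePlayersGuard(needed) {
  var players = 0;
  return {
    playerJoined: function () { players++; },
    // A post-transition guard: returning false coerces A -> B into the
    // auto-transition exit A -> transition -> enter A, which re-arms the
    // per-state timer described above.
    check: function () { return players >= needed; }
  };
}
```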

No dependencies

Automata is a standalone package. It has no dependencies and works on both the client and server sides.

FSM masking

Though not an FSM framework requirement, remote Session content masking is a must for multiplayer games. For security purposes, we don't want to share the remote game Session content with every connected client. For example, in Dominoes we don't want to share each player's tiles with every connected user/player.
Masking makes sure that only the necessary per-player information is shared with each of them.
This process is not integrated into the FSM framework, but all the tools are already there: there's a session listener, and information is only sent to the connected clients whenever a state change happens. So the masking rules are not inherent to the FSM itself, but external rules added on top of the events reflected at the core of the framework.

These are all the elements that for years have kept the multiplayer space I've been working on secure, traceable and deterministic. Let me know about your own experiences.

Trello test

I was about to create a Trello board when I saw the positions tab. Like a moth attracted to the light, I clicked on it (after all, I love the product and was curious about the open positions).
And BOOM, there it was: Node.js developer. Curious, I clicked on the link, and found that to apply I needed to solve the following problem:

Write JavaScript (or CoffeeScript) code to find a 9 letter string of characters that contains only letters from: ‘acdegilmnoprstuw’ such that the hash(the_string) is 956446786872726.

Normally, I would not pay attention to this kind of problem. I was not even thinking of applying but, I don't know why, I got engaged with the problem and solved it in the back of my mind. Apparently the hash function gives higher values for longer strings, and higher values for using later letters of the alphabet (or so it seemed). 30 minutes of coding and testing later, I reached the following solution. It works, and it makes a very low number of tests to reach the answer: 88 comparisons for the sample, actually. It could make even fewer comparisons by using a binary traversal of the alphabet. And it does not take the expected word length hint into account.
The solution is written in the worst JavaScript ever: imperative, but with no side effects. I am more curious about what they'd expect from a solution than about the solution itself. Algorithm complexity? Functional code? …

Is this a general inverse hash algorithm? Of course not. This is only possible because it is a very simple hash function.


/**
 * Original hash function.
 * @param alphabet {string} alphabet composing the guessed words
 * @param s {string} string to get hash for
 *
 * @return {number} string hash value
 */
function hash (alphabet, s) {
    var h = 7;
    for( var i = 0; i < s.length; i++ ) {
        h = h * 37 + alphabet.indexOf( s[i] );
    }
    return h;
}

/**
 * Invert the hash: guess the word whose hash is `target`.
 * @param alphabet {string} alphabet composing the guessed words
 * @param target {number} hash value to invert
 *
 * @return {string} the guessed word
 */
function find (alphabet, target) {
    var letters = [];
    var hashValue;

    // guess the word length: keep appending the lowest-valued letter
    // until the hash exceeds the target.
    while( true ) {
        letters.push( alphabet[0] );
        hashValue = hash( alphabet, letters.join("") );
        if ( hashValue > target ) {
            letters.pop();
            break;
        } else if ( hashValue===target ) {
            // lucky boy
            return letters.join("");
        }
    }

    // guess the hashed word, one character at a time.
    for( var currentCharIndex=0; currentCharIndex < letters.length; currentCharIndex++ ) {

        for( var index=0; index < alphabet.length; index++ ) {
            letters[ currentCharIndex ]= alphabet[index];
            hashValue = hash( alphabet, letters.join("") );
            if ( hashValue===target ) {
                return letters.join("");
            } else if ( hashValue > target ) {
                // one step too far: keep the previous letter.
                letters[ currentCharIndex ]= alphabet[index-1];
                break;
            }
        }
    }

    return "not found :(";
}

// get the word with the given hash.
// takes 88 iterations on the double nested for loop.
// result: trellises
console.log( find( "acdegilmnoprstuw", 956446786872726 ) );

// result: leepadg
console.log( find( "acdegilmnoprstuw", 680131659347 ) );

Less Bunnymarks, more features

Yesterday, I was speaking with some friends about the latest state-of-the-art WebGL game engines (and/or renderers).

It was not long ago that I saw people mesmerized by pushing 1000 sprites to the screen. Of course that was using JavaScript, and a few years back. Nowadays, we can see 40K sprites, or bunnies, pushed to the screen on a 2013 MacBook Air at a steady 60fps. IMO, we are pushing the envelope, if not in the wrong direction, at least in a not too fair one.

Let me explain. The bunnymark test, which has been taken as a reference for WebGL performance in some 2D engines (even in mine), showcases an ideal scenario. It is no more than a shiny marketing ad where conditions are truly unreal. A scenario with just one texture, one shader and GPU-calculated matrices is being showcased as an engine's performance, while it is indeed a bare-bones test which just measures your GPU's buffer sub-data speed.

Don't get me wrong, this is not a rant. I have nothing against Pixi. I believe it is actually the de-facto standard for fast 2D WebGL-backed graphics on the web. A reference for many of us. Brilliant work. Bonus points for the fact that the @Goodboy guys are awesome.

Almost 4 years ago I wrote a post on how to make 2D WebGL faster. Even today, there's little I'd add on top of that post, except for buffer pinpointing, multi-texturing, triangles vs quads, and probably the use of bufferData instead of bufferSubData. With these 'old' guidelines, this is the result of a Bunnymark test at my current daily job:
[image: CocosJS Bunnymark, original viewport 800×600]
The original viewport is 800×600 pixels and yes, the sprites are tinted on the fly while they translate and rotate.

Is the result good or bad…? IMO, it is meaningless.

I can't avoid having the same sentiment: 'hey, 100k bunnies @60fps on the screen' === 'an ideal scenario, like a pure marketing campaign'.
We are lucky enough to be at a moment where rendering is not our game bottleneck (at least on desktops).

I'd love to see more production tools, and fewer rendering wars. And this, I think, would be a good measure of engine performance: how fast I can deliver while working with it.

So next time someone asks me 'can you do 100k bunnies/sec @60fps?' I'll say: YES, of course (if the conditions are ideal). And if not, I can always use Pixi 🙂