Less Bunnymarks, more features

Yesterday, I was speaking with some friends about the latest state-of-the-art WebGL game engines (and/or renderers).

It was not long ago that I saw people mesmerized by pushing 1,000 sprites to the screen. Of course, that was using JavaScript, and a few years back. Nowadays, we can see 40K sprites, or bunnies, pushed to the screen on a 2013 MacBook Air at a steady 60fps. IMO, we are pushing the envelope, if not in the wrong direction, then in a not-too-fair one.

Let me explain. The bunnymark test, which has been seen as a reference for WebGL performance in some 2D engines (even in mine), showcases an ideal scenario. It is no more than a shiny marketing ad where the conditions are truly unreal. It is a scenario where just one texture, one shader and GPU-calculated matrices are showcased as an engine’s performance, while it is in fact a bare-bones test which just measures your GPU’s buffer sub-data upload speed.
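To make that concrete, here is a minimal sketch (in TypeScript, and not any particular engine’s real code) of what a bunnymark frame boils down to; every name and parameter below is made up for illustration:

```typescript
// Minimal sketch of a bunnymark frame: one shader, one texture, one
// interleaved vertex buffer, one draw call. The index buffer is assumed
// to be bound and filled once at start-up.
function renderBunnies(gl: WebGLRenderingContext,
                       program: WebGLProgram,
                       texture: WebGLTexture,
                       buffer: WebGLBuffer,
                       vertexData: Float32Array,   // rebuilt on the CPU each frame
                       bunnyCount: number): void {
    gl.useProgram(program);                  // never switches: a single shader
    gl.bindTexture(gl.TEXTURE_2D, texture);  // never switches: a single texture
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);

    // The line the whole benchmark ends up measuring:
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, vertexData);

    // One draw call for every bunny on screen (6 indices per quad).
    gl.drawElements(gl.TRIANGLES, bunnyCount * 6, gl.UNSIGNED_SHORT, 0);
}
```

No shader switches, no texture switches, no state changes: just one upload and one draw per frame.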

Don’t get me wrong, this is not a rant. I have nothing against Pixi. I believe it is actually the de-facto standard for fast 2D WebGL-backed graphics on the web. A reference for many of us. Brilliant work. Bonus points for the fact that the @Goodboy guys are awesome.

Almost 4 years ago I wrote a post on how to make 2D WebGL faster. Even today, there’s little I’d add on top of that post, except for buffer pinpointing, multi-texturing, triangles vs. quads, and probably the use of bufferData instead of bufferSubData (sketched below). With these ‘old’ guidelines, this is the result of a Bunnymark test at my current daily job:
CocosJS Bunnymark (original viewport 800×600)
The original viewport is 800×600 pixels and yes, the sprites are tinted on-the-fly while translating and rotating.
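Of those tips, the bufferData-vs-bufferSubData one is the easiest to show in isolation. This is only a sketch of the idea, assuming a dynamic vertex buffer rebuilt on the CPU every frame (the function name is made up):

```typescript
// Sketch of the bufferData-over-bufferSubData tip for streamed vertex data.
// Re-specifying the buffer's storage every frame lets the driver orphan the
// old memory instead of stalling until the GPU has finished reading it,
// which is what writing into an in-flight buffer with bufferSubData can cause.
function uploadDynamicVertices(gl: WebGLRenderingContext,
                               buffer: WebGLBuffer,
                               vertexData: Float32Array): void {
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    // Allocate fresh storage and upload this frame's data in one call.
    gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.DYNAMIC_DRAW);
}
```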

Is the result good or bad? IMO, it is meaningless.

I can’t help having the same sentiment: ‘hey, 100k bunnies @60fps on the screen’ === ‘an ideal scenario, like a pure marketing campaign’.
We are lucky enough to be living in a moment where rendering is not our games’ bottleneck (at least on desktops).

I’d love to see more production tools, and fewer rendering wars. And this, I think, would be a good measure of an engine’s performance: how fast I can deliver while working with it.

So next time someone asks me ‘can you do 100k bunnies/sec @60fps?’ I’d say: YES, of course (if the conditions are ideal). And if not, I could always use Pixi 🙂


TypeScript

It’s not the first time I’ve had to deal with a gigantic JavaScript codebase. While most of the side projects I’ve written are no more than 5K LOC, CAAT was near 30K, and in my current job I will likely end up managing a codebase of between 50 and 100K LOC.
It is fairly complicated to keep things up and running in such a big codebase when I have to defer error checking to the runtime environment and different people contribute code. The nature of the code I write is mostly visual, and things a human being can’t readily tell are right or wrong tend to be quite complicated to automate, either at the unit or the functional testing level. For example: is that gradient-filled shape rendered correctly? Etc.
While writing CAAT, I had to manage all the basic platform stuff on my own: (prototypal) inheritance, module management, dependencies, … all of it had to be solved either by hand or with external libraries, not by the language itself. The good news is that JavaScript can mostly fix itself.
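To give an idea of what that meant in practice, this is roughly the kind of boilerplate I’m talking about; the names are illustrative, not actual CAAT code, and the pattern is plain JavaScript (which TypeScript happily accepts):

```typescript
// Hand-rolled prototypal inheritance, the pre-TypeScript way (illustrative).
function Actor(x, y) {
    this.x = x;
    this.y = y;
}
Actor.prototype.paint = function () { /* ... */ };

function SpriteActor(x, y, image) {
    Actor.call(this, x, y);          // manual "super" call
    this.image = image;
}
// Manually wire the prototype chain and repair the constructor reference.
SpriteActor.prototype = Object.create(Actor.prototype);
SpriteActor.prototype.constructor = SpriteActor;
```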
Basic features of a typed language, like code refactoring or keeping the generated code VM-friendly and hinted for performance, are so tied to the core of my development that I simply don’t want to deal with them on my own.
For these and so many other reasons, I took a shot at TypeScript.

Conclusion: it pays for itself after 10 minutes of use, especially when the stress is put on long-term maintainability and code quality.

Let me enumerate some of the good things I get while writing TypeScript code:

  • Refactoring: my IDE recognizes the code semantically, which allows for deep, on-the-fly refactoring of types and other elements.
  • ES6: TypeScript brings many ES6 language features to the table. By the time ES6 is around, almost all of my TypeScript code will map one-to-one and won’t need any change.
  • Strongly typed: modern JavaScript VMs perform deep type inference to JIT the code. While TypeScript is not executable in the browser and has to be transpiled to JavaScript, being strongly typed it guarantees type consistency, something that will certainly help your codebase’s performance.
  • JavaScript VM-friendly transpiled code: while some time ago the advice was to define properties on prototypes while keeping prototype chains short, today, if you want to take advantage of hidden classes and already-optimized code, properties should be set in the constructor function, which is what TypeScript’s emitted code does (see the sketch after this list).
  • Solved extension mechanism: it ships with a very simple prototype-based extension mechanism.
  • Solved modularity: module information is written for you.
  • Super light overhead: the resulting code is just the original code without the type information.
  • Since 1.4, union types and type aliases further leverage TypeScript’s capabilities, letting you switch from type:any to a union type (also shown in the sketch after this list).
  • No learning curve at all.
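As a small illustration of the extension, typing and transpilation points above, here is a hedged sketch; the class names are made up and not taken from any real engine:

```typescript
// A type alias plus a union type (both available since TypeScript 1.4):
// no more type:any for "it can be this or that".
type TextureSource = HTMLImageElement | HTMLCanvasElement;

class Sprite {
    // Property initializers end up as plain assignments inside the emitted
    // constructor function, which plays nicely with the VM's hidden classes.
    x: number = 0;
    y: number = 0;

    constructor(public source: TextureSource) {}

    paint(): void {
        // ...
    }
}

// `extends` replaces the hand-rolled prototype chain: the compiler emits a
// small prototype-based __extends helper in the generated ES5 for you.
class Bunny extends Sprite {
    rotation: number = 0;

    paint(): void {
        super.paint();
        // ...
    }
}
```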

Nowadays, there are TypeScript mapping files for all the major libraries around: WebGL, Node.js (io.js? :), Web Audio, etc. And being so close to HTML5 gaming myself, most modern HTML5 engines are either written in TypeScript, like CocosJS or Egret, or have TypeScript mappings, like Phaser.
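As an example of wiring one of those mapping files in: the path below is illustrative, and the constructor call is a from-memory sketch of the classic Phaser API, so treat it as an assumption rather than a recipe.

```typescript
// Pull a declaration file (e.g. from DefinitelyTyped) into the project so
// the compiler can type-check calls into a plain-JavaScript library.
/// <reference path="typings/phaser/phaser.d.ts" />

// From here on, Phaser calls are checked against the .d.ts, and the IDE
// can autocomplete and refactor across them.
var game = new Phaser.Game(800, 600, Phaser.AUTO, 'game-container');
```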

I’d rather have a compiler on my side. Always.