Dev Blog

Dev-Blog 128: Color Grading: Another cool rendering trick



Welcome back, followers of the fearsome!

On Monday we showed our game off at the Seattle Indie Expo, and it was super fun to meet a bunch of Seattle indie developers, as well as all the people checking out our game! We've gotten a lot of feedback, and we're going to incorporate it to make the game even better. Here's a quick pic I snapped of Caley and Jesse and our setup.


Alright, now to the tech stuff. This week I put in something I've been wanting to add to the game for a long time: color grading. Color grading isn't new, and I'm definitely not claiming this is something unique. However, it is very cool, and I wish more people knew about it. Every artist I show this to reacts as if their mind is blown, and every programmer is like 'yeah, I knew that'. Many games already use this, like The Witness and a lot of AAA titles. There's a great description of how to use it on Code Laboratorium, and this week I'll give you my version here as well.

The basic idea of color grading is to map every possible color to another (color graded) color. In other words, you basically want to be able to call a function in the form of:

Color GetColorGradedColor(Color rawColor)


One way to map each possible color to another color is to use a huge array. If we're using 8 bits per color channel, that means 256 steps per channel, so we'd need an array of 256 × 256 × 256 = 16,777,216 mapped colors (about 48 MB at 3 bytes per color). That's a lot of memory, and a lot of cache misses to deal with! Luckily there is an easier and faster way.

Imagine that there was a way to store all these mapped colors in a cube: the X axis could be the Red value, the Y axis the Green value, and the Z axis the Blue value. Now, given a color, we could just use its RGB value as an XYZ coordinate in the cube, and read out the color graded color. Also imagine that the color grading is fairly smooth, so neighboring colors in the cube map to colors that are very close together. That means we could use a much lower resolution cube, and interpolate for any colors in between.

I've just described the exact behavior of a 3D texture. For my implementation I'm using a 3D texture of 16×16×16 pixels as my color grading look-up texture (also called a LUT). I'll get into how to create these textures later, but for now, imagine we've got a color grading 3D texture completely set up. What actually needs to happen to get the screen to show up completely color graded? It's simple: render the entire screen to a render target, then render that render target to the screen using the color grading pixel shader.

The color grading pixel shader is very simple (this is GLSL, but it should be pretty easy to convert this to HLSL):

uniform sampler2D TextureMap;
uniform sampler3D ColorGradingLUT;
varying vec2 f_texCoord;
const float lutSize = 16.0;
const vec3 scale = vec3((lutSize - 1.0) / lutSize);
const vec3 offset = vec3(1.0 / (2.0 * lutSize));
void main()
{
    vec4 rawColor = texture2D(TextureMap, f_texCoord);
    gl_FragColor = texture3D(ColorGradingLUT, scale * rawColor.rgb + offset);
}

Note that the alpha of the color isn't used in this shader, as the screen is drawn without alpha blending. The scale and offset parameters exist because of the way textures are sampled: they keep the lookup coordinates on the texel centers, so the outermost colors don't get interpolated past the edges of the LUT. There's a great explanation of this in GPU Gems.
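To see what those constants actually do, here's a quick numeric check (a Python sketch, not engine code): with a 16-texel LUT, the scale and offset remap an input of 0.0 to the center of the first texel and 1.0 to the center of the last texel.

```python
lut_size = 16.0
scale = (lut_size - 1.0) / lut_size    # 15/16, same as the shader constant
offset = 1.0 / (2.0 * lut_size)        # 1/32, i.e. half a texel

def lut_coord(v):
    # Remap a raw channel value in [0, 1] onto the LUT's texel centers.
    return scale * v + offset

print(lut_coord(0.0))  # 0.03125 (= 1/32, center of texel 0)
print(lut_coord(1.0))  # 0.96875 (= 31/32, center of texel 15)
```

Without this remap, inputs near 0.0 or 1.0 would sample between the edge texel and the texture border, washing out the extremes of the grade.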

Alright, so now that we're able to combine an input texture (from your render target) and a 3D color grading look-up texture into a final color graded output image, we need to worry about how to create these color grading textures. (This is the part that usually blows the artist's mind.) The basic process is this:

1) Create a screenshot of your game, and insert the color cube information into the image.

2) Load the screenshot into Photoshop, Aperture, Paint.NET, GIMP, or whatever color correction application you prefer, and change the colors of the image using whatever plugin you want.

3) Save the screenshot once you're done.

4) Load the screenshot back into your tool, and extract the color cube information.
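The bookkeeping in steps 1 and 4 can be sketched in a few lines of Python (the layout and helper names here are my own illustration, not the actual editor code): lay an identity 16×16×16 cube out as a 256×16 strip of RGB values, then read the strip back into a cube after the artist has color corrected the image.

```python
LUT_SIZE = 16
STEP = 255 // (LUT_SIZE - 1)  # 17: maps index 0..15 onto 0..255 exactly

def make_identity_strip():
    # 16 slices laid side by side: within slice b, x is the red index and
    # y the green index; blue is constant per slice. Untouched, this strip
    # maps every color to itself.
    strip = [[None] * (LUT_SIZE * LUT_SIZE) for _ in range(LUT_SIZE)]
    for b in range(LUT_SIZE):
        for g in range(LUT_SIZE):
            for r in range(LUT_SIZE):
                strip[g][b * LUT_SIZE + r] = (r * STEP, g * STEP, b * STEP)
    return strip

def extract_cube(strip):
    # Read the (possibly color-corrected) strip back in cube[b][g][r] order,
    # ready for upload as a 3D texture.
    return [[[strip[g][b * LUT_SIZE + r] for r in range(LUT_SIZE)]
             for g in range(LUT_SIZE)]
            for b in range(LUT_SIZE)]
```

In the real tool you'd paste these pixels into a corner of the screenshot before saving, and read them back out of the saved file; whatever color correction the artist applied to the screenshot gets applied to the strip pixels too, which is exactly the mapping the LUT needs to capture.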

That’s it! Pretty simple, right? Here it is in pictures:


This is the interface we have in the editor for generating the color grading textures. The button 'Generate Screenshot' will create a screenshot of the game, attach the color cube information at the top, and save it out to disk as a PNG (you'll want a lossless image format; don't use JPG!). This is the image it generated:

color grading test


Notice the color grading information in the top left. To generate this screenshot, all existing color correction and bloom in the engine was turned off, so the image is as close to the raw data as you can get.

Now, I've loaded this image into an image editor, changed the colors, and saved it out as this image:

color grading test after


I didn’t change these colors to look particularly good, just enough to show a difference between the incoming and outgoing image. Next, this image is loaded back into the editor, which extracts the colors from the image, and puts them in a 3d texture to use as the color grading LUT:

editor after

That's all there is to it! I've added a few extra things, like being able to blend between color gradings within a level, which creates really cool mood changes when you walk into a cave or arrive at a village. Here are a few more extreme color gradings:

color grading collage
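For the blending between gradings, one straightforward approach (a sketch of the idea, not necessarily what our engine does) is to note that two LUTs of the same size line up entry for entry, so you can lerp them per entry on the CPU, or equivalently sample both 3D textures in the shader and mix the results:

```python
def blend_luts(lut_a, lut_b, t):
    # Per-entry linear interpolation between two LUT cubes; t in [0, 1].
    # t = 0 gives lut_a, t = 1 gives lut_b, and values in between fade
    # one grade into the other (e.g. while walking into a cave).
    return [[[tuple(a + (b - a) * t for a, b in zip(col_a, col_b))
              for col_a, col_b in zip(row_a, row_b)]
             for row_a, row_b in zip(slice_a, slice_b)]
            for slice_a, slice_b in zip(lut_a, lut_b)]
```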

Alright, that’s it, hope it was helpful. Keep color grading them games!


Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk


Posted by: Under: Tech,Vikingsquad Comments: 4

Dev-Blog 122: Those textures need some polygonizin’, son.



Welcome back followers of the fearsome!

You may remember a blog-post I did a long time ago about the texture polygons we created for Shellrazer. If not, here it is. The system we used at the time required hand-editing of each texture, and this quickly becomes WAY too much work. Especially at the rate Jesse is able to crank out art for Viking Squad!

Now, Viking Squad is not meant for mobile devices, but we're still generating texture polygons. The difference is that we're only using fully opaque texture polygons for Viking Squad, and we have removed alpha blending from most triangles that get rendered. This is mostly in preparation for proper shadows and lighting (which I haven't implemented yet). Also, by enabling full screen anti-aliasing, the edges of our models are really nice and crisp, especially on a 1080p TV.

The whole process of generating the texture polygons is also fully automated now. The basic algorithm is fairly simple: I first trace the outline of the mesh by checking the alpha of each 2×2 pixel block, adding an edge segment wherever opaque pixels meet transparent ones, and connecting them all together into an outline. After this step, the mesh basically looks like this:



That's a LOT of points! For clarification: on the right is the image, with blue circles representing the vertices and green triangles representing fully opaque triangles. On the left is the resulting mesh rendered. As you can see, the edges get a bit blocky with this many points, something we're trying to avoid.
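The tracing step can be sketched like this (a simplified per-pixel Python version with names of my own; the real tool works on 2×2 blocks): every boundary between an opaque pixel and a transparent neighbour contributes one axis-aligned edge segment, and those segments chain together into the outline.

```python
def trace_edges(alpha):
    # alpha[y][x] holds the per-pixel alpha. Emit one unit-length edge
    # segment for every opaque/transparent boundary; pixels outside the
    # image count as transparent.
    h, w = len(alpha), len(alpha[0])

    def opaque(x, y):
        return 0 <= x < w and 0 <= y < h and alpha[y][x] > 0

    edges = []
    for y in range(h):
        for x in range(w):
            if not opaque(x, y):
                continue
            if not opaque(x - 1, y): edges.append(((x, y), (x, y + 1)))
            if not opaque(x + 1, y): edges.append(((x + 1, y), (x + 1, y + 1)))
            if not opaque(x, y - 1): edges.append(((x, y), (x + 1, y)))
            if not opaque(x, y + 1): edges.append(((x, y + 1), (x + 1, y + 1)))
    return edges
```

A lone opaque pixel yields its four surrounding edges; two adjacent opaque pixels yield the six edges of their shared rectangle, since the interior boundary cancels out.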

The next step is to simplify the edge while trying to retain the shape of the outline we just created. This is done by a basic edge-collapse algorithm: each turn, every potential collapse is given a 'cost' based on how much it would alter the original outline, and the one with the lowest cost is collapsed. Then another turn is started. This continues until the cheapest collapse is above a given threshold value. The end result looks something like this:


As you can see here, the steppy-blocky edges are pretty much all gone, and they have been smoothed out into nice sharp looking edges.
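To give a feel for the scoring, here's a minimal Python sketch of the same greedy idea (using point removal with a triangle-area cost as the "how much does this alter the outline" measure; the actual editor's cost function may differ):

```python
def tri_area(a, b, c):
    # Area of triangle (a, b, c): how much the outline changes if the
    # middle point b is removed and its neighbours are joined directly.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify(outline, max_cost):
    # outline: list of (x, y) points forming a closed loop.
    pts = list(outline)
    while len(pts) > 3:
        # Score every point by the area its removal would cut or add.
        cost, i = min((tri_area(pts[i - 1], pts[i], pts[(i + 1) % len(pts)]), i)
                      for i in range(len(pts)))
        if cost > max_cost:
            break  # cheapest change is already above the threshold
        del pts[i]
    return pts
```

Points along a straight run cost nothing and vanish first, so the blocky staircase detail melts away while real corners survive.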

The last step in the process is triangulating the outline. I use OpenTK in our editor (a C# wrapper for OpenGL, OpenAL and OpenCL), and luckily that has access to the GLU tessellation functions, which create a tessellation of a given outline. This works pretty well!

Here's another example, this time a tree:


As you can see, we're generating quite a few triangles for each object, and in a whole scene this adds up to quite a lot of triangles, but they are all nice and crisp and ready for some awesome lighting (which, again, I still need to implement). Check out this wireframe/solid slidable image to see how many triangles we're putting on screen. (The second image might take a while to load, just give it some time. It's mostly because I can't even HTML.)


(Btw, the very cool slider thing we used in the ‘Click me’ image above is from here)

Alright, that’s it for this week. Until next time, and keep those triangles wireframing.


Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk



Posted by: Under: Slick Entertainment,Tech,Vikingsquad Comments: 1

Dev Blog 117: A tour of the Editor



Welcome back followers of the fearsome!

Alright, this week we'll give you a closer look into how we develop Viking Squad. We develop the entire game in C# using .Net 4.0, and later port the game code to C++. The game is pretty much entirely developed in an editor that I've been changing and optimizing since N+. The only part that isn't inside the main editor is the UI editor, which is a separate editor for UI screens. Come to think of it, I should actually integrate the UI editor into the main editor as well… hmmm. Ok, first finish this blog.

I’ve created a base editor that I extend to fit the current game we’re developing. Here’s a screenshot:


So as you can see, there's a big main window on the left showing whatever we're currently editing, and on the right is the tools pane. The main window is basically just an OpenGL viewport that runs our proprietary engine to render whatever we need to render. We use OpenTK to get OpenGL rendering in the main viewport with C#. The editor used to require XNA, but when Microsoft announced they would abandon it we decided to switch to OpenGL. This switch actually helped us port Scrap Metal and Shellrazer to mobile devices at the same time!

There are 14 tabs at the top of the tools pane. Activating a tab puts the editor in the proper ‘mode’ for that tab. So when you’re editing the general settings, it will just show a list of settings. If you’re editing a puppet, the main view will change to show only the selected puppet, and provide a UI for easy editing of the puppet. Here’s a list of tabs:

1) Global Settings: This is a tab that just holds a big list of global settings, with things like the camera settings, physics engine settings, etc.

2) Destination: A destination is one 'level' in our game. It consists of multiple chunks that are randomly put together. In this tab you can select which chunks are eligible to be randomly selected in the destination, which order they should be in, etc. You are also able to set some settings for the destination, such as which music to play, which background to use, and which puppet the destination should use in the world map view.

3) Chunk: A chunk is a small piece of level. In this tool pane you can edit the layout of the chunk by painting lanes onto a grid:

Chunk Editor

You can also place entities and triggers. Entities are basically anything walking around or sitting on top of the lanes. Triggers can be used to create waves of enemies coming at you.

Chunk Editor Entities

4) Lane: In the lane editor you can define which different types of lanes there are, and what the settings are for each of these lane types. You can also select a procedural to represent the lane.

5) Procedural: Procedurals are procedurally generated meshes, which we use for the lanes, backgrounds, and anything else that needs to be generated differently every time. I've blogged about this before here. This is where you can do stuff like this:

6) Background: This is where we edit the different backgrounds for our destinations. This is also where you define the water/wave settings.

7) Entity: This is where we set up individual entities, adjust their behaviour, select which puppet they use to render, etc.

8) Item: In our game, items are objects that can be equipped on an entity. The object can alter the current skin used by the entity, as well as alter the entity's stats, such as health, stamina, etc.

9) Dojo: This is where we can test our combat in an easy setting. Caley blogged about tuning the fighting before, and he basically uses the dojo to do this. When in the dojo, the joystick can be used to control your character, while the tools pane can be used to spawn any type of enemy.

10) WorldMap: This is where we edit our world map. We haven't blogged about this much, but it is basically where you row your boat around to different islands. More on this in future dev-blogs!

11) Play: This is the magic button. It starts the game with a full UI flow, basically playing the game as an actual game rather than as a bunch of separate setups in the editor. We're able to play local multiplayer games in here, and we're working on getting multiplayer over the internet working as well.

12) Puppet Editor: This is an editor I've built to create and animate skeletal sprites (which we call puppets). You can also attach sounds, visual effects, and camera shakes to specific points in an animation. Another cool feature I added is the ability to set certain bones to be physics controlled. Even though it's very simple Verlet-style physics, it still gives a neat effect when things move in a physical way.

13) Visual Effects Editor: This is where visual effects can be created and modified. I should probably do a post about how our visual effects system works one day. I think it's pretty neat, and there is an enormous number of possibilities with it.

14) Texture Editor: This is an editor I added for Viking Squad. All our textures are converted to a set of opaque polygons. We do this to get crisper edges, and it also gives us the ability to do proper shadows and lighting, which I haven't implemented yet. Once I do it will be cool though! :)


Alright, I think it’s time to end the tour. If there is anything you’d like to hear more about, please leave us some comments!

Till next time,


Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk


Posted by: Under: Tech Comments: Comments Off

Dev-Blog 111: Procedural Goodness Part 5



Welcome back, followers of the fearsome!

Let’s talk about some more procedural tech stuff. First off, we’re not using procedural generation to generate everything in the world. We’re using procedural generation to drastically speed up the handcrafting of levels, and as a nice side benefit, the procedurals generate a slightly different outcome each time we place a chunk of level in the game. So basically we ‘program’ the way we want a chunk to look.

Let’s start off with a screenshot of our editor, and then I’ll talk about some stuff you see in there:


This is an example of a procedural lane for Viking Squad. The game creates multiple of these next to each other to generate a level.

On the right you can see a treeview of the procedural. The way the procedural generation works is that each node in the tree gets a list of transforms from its parent node. A transform is a matrix holding the position, rotation, scale, etc., for that particular point. A node can do anything it wishes to the transforms: add new ones, remove some, change them, and so on. When it's done, it passes the transforms on to its child nodes, and each child node can modify the transforms again in turn. Some nodes can also use the transforms to place an object: a mesh, sprite, image, sound, effect, etc. That's pretty much it in a nutshell, but I'll go through the treeview in the image above a bit more.
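The shape of that pipeline fits in a few lines of Python (heavily simplified: transforms are plain 1D positions instead of matrices, and all names here are my own illustration, not the editor's): every node rewrites the incoming transform list, hands the result to its children, and leaves "place" something per transform.

```python
import random

class Node:
    def __init__(self, *children):
        self.children = children

    def process(self, transforms):
        return transforms              # default: pass through unchanged

    def generate(self, transforms, placed):
        transforms = self.process(transforms)
        if not self.children:
            placed.extend(transforms)  # leaf: "place" one object per transform
        for child in self.children:
            child.generate(transforms, placed)

class Repeat(Node):
    # Like the 'Repeat every 1 meters' node: fill the span between the first
    # and last incoming transform with evenly spaced transforms.
    def __init__(self, step, *children):
        super().__init__(*children)
        self.step = step

    def process(self, transforms):
        a, b = transforms[0], transforms[-1]
        n = int(abs(b - a) / self.step)
        return [a + i * self.step for i in range(n + 1)]

class Jitter(Node):
    # Like the 'Random Transform' node: scramble each transform a little.
    def __init__(self, amplitude, *children):
        super().__init__(*children)
        self.amplitude = amplitude

    def process(self, transforms):
        return [t + random.uniform(-self.amplitude, self.amplitude)
                for t in transforms]

# The root passes two endpoint transforms down the tree.
root = Node(Repeat(1.0, Jitter(0.1)))
placed = []
root.generate([0.0, 5.0], placed)   # six jittered positions, roughly 1 apart
```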

It all starts with two transforms from the Root. These transforms are shown as the two little red spheres on the left and right ends of the lane. In the procedural editor, these two points are movable for testing purposes, but in the game these transforms get created based on where we want the lanes to be.

Right underneath the Root node, there’s a LaneMesh node. The lane mesh basically draws the base mesh between the two transforms. If we were to just draw the lane mesh, you’d see the left image below. There’s a bit more to the lane mesh than meets the eye. I’ll get into that a bit later. First, let’s continue with the procedural stuff.

ProceduralBase ProceduralTop

Next there’s an ‘InLane’ folder node. We use these folder nodes purely for organizational purposes, they don’t do anything for the procedural generation.

Then there's a 'Repeat every 1 meters' node. This node grabs the two transforms that came from the parent node (in this case the Root), and generates multiple transforms in between them, spaced 1 meter (adjustable) apart. So now we have a bunch of transforms that get passed down to the 'Random Transform' node.

The Random transform node alters each of the incoming transforms by offsetting, rotating, or scaling the transform by a random range. Now we’re left with the same amount of transforms, but they are all scrambled up into random positions. We’re almost getting to the point of actually placing some stuff!

Next there's a Random (one of) node. This one is a bit special. It basically selects one of its child nodes to use each time the procedural is generated. There's a subtlety to it though. If the Random (one of) node were to select one of the nodes underneath and pass all its transforms to it, then each time the procedural gets generated, ONE child node would handle ALL transforms. Sometimes, though, we want a different image on each transform! The Random (one of) node therefore has a flag to split the transforms up into individual segments, and pass each segment to a random child. Now we can get a random image placed on each incoming transform. Sweet!
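The split flag boils down to this (a toy Python sketch with made-up names, transforms again simplified to plain values):

```python
import random

def route_transforms(transforms, children, split):
    # Without split: pick ONE child per generation; it handles ALL transforms.
    # With split: every transform independently picks a child, so a different
    # image can land on each one.
    if not split:
        child = random.choice(children)
        return [(child, t) for t in transforms]
    return [(random.choice(children), t) for t in transforms]
```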

Then there's a Place Image node, which does exactly that: it places an image on each incoming transform. It has a few parameters such as a random color multiplier, random size multipliers, etc. All in all this means the procedural can look drastically different each time it gets generated, which is exactly what we want! The result is shown in the right image above.

Alright, now on to lane meshes. If you look at the images above, you might have noticed that the perspective we use is PFU (Pretty Fucked Up™). It's almost isometric, but not quite. The camera uses a 5 degree field of view, but also a skew on the depth axis (the Y axis in our case), so that any lines along the X axis are perfectly horizontal. This weird perspective made drawing textures for the lane meshes quite a challenge. The lane mesh is actually rendered as a cube stretched along the X axis. Below is an image of the lane with a default coloured texture on it:


We first tried drawing individual textures for the tops, sides, and fronts, but we could never get them to fit properly without looking stretched, or without lines being discontinuous. We needed a different approach. Whenever Jesse creates concept art, the lanes look really cool, and that's the look we were going after. However, drawing the different textures in this faux-3d space created quite a few problems. To mitigate this, I created a little tool that generates a template texture for Jesse to draw over. When he's done, he can import this texture back into the editor, which grabs each piece, unskews it, crops it, and exports it as a separate texture. Here's the texture the editor generates for Jesse to draw over top of. I've added text to show what each part is:


As you can see, there are two top lane parts. This is because we want every odd lane to look different from every even lane (so there's more distinction between lanes).

When Jesse draws over top of this, this is the result:


This texture gets processed in the editor when he imports it, cutting the hand-drawn texture up into separate parts to wrap over our 3d cube. It's hard to explain how this works exactly, but here's a video that shows it all:

First, I show how the procedural generates a new version every time you move the nodes on the sides. Obviously in the game we only generate it once, and then use that for the entire game. Next, I show a normal perspective camera view, and you can see how the texture Jesse drew is wrapped over the 3d mesh. It’s kind of a weird optical illusion, but it works perfectly from the default game camera! :)

Alright, that’s it for this week, gotta get back to work. I hope I didn’t confuse you too much! :)

– Nick

Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk


Posted by: Under: Tech Comments: 1

Dev-Blog 102: Water world



Welcome back followers of the fearsome!

GDC is coming up, so we're working hard to get *something* running that we can show off. As we're making a viking game, most of it will have a lot to do with the ocean and islands, so we need a decent looking ocean. It needs to be able to display many different styles of water: large ocean-like waves, smaller bay-like waves, high winds, low winds, etc. It also needs to look cartoony, to fit the style of the rest of the game.

I started looking into shaders for ocean rendering, and found that a lot of methods use a lot of math and large shaders to make them work. That's great, but not really something I want to spend a ton of time on at the moment. The one page that I thought had a pretty good explanation of ocean rendering is this one: The interesting part of that page (besides all the cool videos) is the image with all the circles on it. I figured this was worth a test run in a very simple shader.

So I started working on a subdivided plane, with a vertex shader to alter the positions of the vertices over time. I slapped an internet test texture on it, and got to work building a simple vertex shader to move the points.


Now, to get some wave action, we'll need to add an offset to the position in the shader. Instead of just adding a sine wave to the Z (up), I'm also adding a sine wave to the X (sideways), with a 90 degree phase shift. The phase shift is simply an offset added to the sine evaluation for the X direction. In a GLSL shader, it looks something like this:

attribute vec4 v_position;
attribute vec2 v_uv;
uniform mat4 wvp;
uniform float time;
uniform float phaseshift;
uniform float xamplitude;
uniform float zamplitude;
varying vec2 f_texCoord;
void main()
{
    vec4 pos = v_position;
    pos.x += cos(time + phaseshift) * xamplitude;
    pos.z += cos(time) * zamplitude;
    gl_Position = wvp * pos;
    f_texCoord = v_uv;
}

As you can see, there are a few variables to play with: phaseshift, xamplitude and zamplitude. Here are some examples:


The image above has a phase shift of 90 degrees (1/2 PI), so each point will traverse a nice circle. It generates big rolling ocean waves.
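You can verify the circular motion with a few lines (Python here, mirroring the shader math): with a 90 degree phase shift and equal amplitudes, x = cos(t + π/2) = -sin(t) and z = cos(t), so every vertex traces a circle of radius equal to the amplitude.

```python
import math

def displace(t, phaseshift, xamp, zamp):
    # The same offsets the vertex shader applies, evaluated on the CPU.
    return (math.cos(t + phaseshift) * xamp, math.cos(t) * zamp)

# 90 degree phase shift, equal amplitudes: x*x + z*z stays constant,
# which is exactly the equation of a circle.
for t in [0.0, 0.7, 1.3, 2.9, 4.2]:
    x, z = displace(t, math.pi / 2.0, 1.0, 1.0)
    print(round(x * x + z * z, 6))  # always 1.0
```

Change the phase shift or make the amplitudes unequal and the circle deforms into an ellipse-like loop, which is what produces the steeper breaking-wave shapes below.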

sine with phaseshift 

When you change the phase shift to about 125 degrees, and make the xamplitude larger than the zamplitude, you can generate waves that are a bit more like breaking waves (like the image above), with a steeper angle on one side than the other. You can go way crazy with this and generate something like this:


In the image above you can see one more thing: blending between textures based on where the vertex is in its cycle. I'm using two textures, one quiet water texture and one frothy, foamy water texture (also a test image from the internetz), and blend between them using another cosine to get the blend factor.

The last thing to note about these images is that the water appears to flatten out at the back, and the color fades to a different color. This is because I'm fading out the wave heights towards the back, and fading the color to the skybox color behind it.

When all is said and done, it looks like this:

And here’s the whole shader with a ton of tunables in case you’re interested. (Make sure to right-click and save!)

Alright, that’s it for this week, I’ve gotta get back to implementing cool stuff!

– Nick

Follow us on twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Follow us on Google: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk

Like us on facebook:

Posted by: Under: Tech,Vikingsquad Comments: 4
