
Dev-Blog 197: Networking



Welcome back, followers of the fearsome!

Today I’ll be giving a little update on the progress of making the online multiplayer part of our game work.

The networking stack for Viking Squad consists of two layers. The bottom layer is the low-level communication layer; on top of it sits the game layer, which is probably worth an entire blog post at some point in the future. Today I’ll describe the communication layer.

The communication layer

This layer is built on top of UDP, and adds connections, reliable messages, and data-blobs (for lack of a better name). I started developing this layer a long time ago, but it turned out to be extremely similar to Glenn Fiedler’s excellent network layer, and if you are at all interested in game networking, you should most definitely read his write-ups. I seriously can’t recommend them enough; he knows what he’s talking about.

When you want to send a large file over the internet, you probably just want to send parts of the file over a reliable connection until all the parts have been received on the other side and the file can be reconstructed. A TCP connection is a reliable connection that acts like a pipe (data in on one side, data out on the other), and it is well suited for this purpose. When dealing with packet loss, TCP ensures reliability by re-sending packets until they have arrived on the other side before continuing. Awesome! We don’t even need to do anything, it takes care of everything! Well, not so fast now (har har): re-sending packets causes delays in the communication, and while that may be fine for sending a large file, it can be devastating for games.

The network functionality I actually want from the communication layer is as follows:

  1) A very fast way to send the latest data, covering roughly 95% of the total traffic (unreliable is fine).
  2) A reliable way to send data, covering roughly 5% of the total traffic.
  3) A way to automatically replicate data.
  4) Optimization for small data packets, on the order of hundreds of bytes, not thousands.

I’ll go into a bit more detail on these points, and how they were implemented in the communication layer.

Sending the latest data fast

For most network traffic in games, only the very latest information matters. A simple example is the position of a player. When you send a message ‘player is at location XYZ‘ and it somehow doesn’t arrive, would you really want to resend that same information? No, of course not. You’d want to update it with the latest location of the player before sending it again. In fact, you probably want to just send the latest position of the player a couple of times per second and ignore any missed packets entirely. This is where UDP comes in. UDP is connection-less and provides no reliability out of the box, other than a checksum for data integrity. UDP can send messages between clients very fast, but not reliably. That doesn’t make it unusable, though; the communication layer just needs to be built to handle the lack of reliability.

When I started building the communication layer, I started with a simple UDP connection between two machines. One of the first things I added was a bad-connection-simulator. Traffic on a local area network is fast and pretty reliable, and I wanted to make sure to build the game on a really shitty connection, so I added a piece of code that simulates about 5% packet loss and a lag of anywhere between 200 and 300ms. At this point I was able to send unreliable messages; in about 95% of cases they would arrive after 200-300ms.
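For the curious, the simulator itself doesn’t need to be fancy. Here’s a minimal C++ sketch of the idea: randomly drop outgoing packets, and hold the survivors in a queue until their fake lag has elapsed (the names and structure here are illustrative, not our actual code):

    #include <cstdint>
    #include <queue>
    #include <random>
    #include <vector>

    // Sits between the game and the socket: randomly drops outgoing packets,
    // and delays the survivors by a random amount before delivering them.
    class BadConnectionSimulator {
    public:
        BadConnectionSimulator(float lossChance, double minLagSec, double maxLagSec)
            : m_loss(lossChance), m_lag(minLagSec, maxLagSec) {}

        // Call instead of sending straight to the socket.
        void Send(const std::vector<uint8_t>& packet, double nowSec) {
            if (m_chance(m_rng) < m_loss)
                return;  // dropped: simulates packet loss
            m_inFlight.push({nowSec + m_lag(m_rng), packet});
        }

        // Call every tick; hands over packets whose lag has elapsed.
        // (Note: this delivers in send order; a real network can also reorder.)
        template <typename DeliverFn>
        void Update(double nowSec, DeliverFn deliver) {
            while (!m_inFlight.empty() && m_inFlight.front().deliverAt <= nowSec) {
                deliver(m_inFlight.front().data);
                m_inFlight.pop();
            }
        }

    private:
        struct Delayed { double deliverAt; std::vector<uint8_t> data; };
        std::queue<Delayed> m_inFlight;
        std::mt19937 m_rng{std::random_device{}()};
        std::uniform_real_distribution<float> m_chance{0.0f, 1.0f};
        float m_loss;
        std::uniform_real_distribution<double> m_lag;
    };

    // The settings described above: 5% loss, 200-300ms of lag.
    // BadConnectionSimulator sim(0.05f, 0.200, 0.300);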

Reliable messages

The next thing to add was the reliable message structure. I’m not going to describe the exact implementation of this layer, because it is excellently described in the Glenn Fiedler write-ups I mentioned above. In a nutshell, it uses ack/nak bits and re-sends messages that weren’t acknowledged as received by the other end, until they are acknowledged. This is reminiscent of the way TCP works, so you should expect that reliable messages may arrive later than you’d like. I do want to point out that while this reliable message layer acts like TCP, the rest of the traffic keeps flowing even if a packet gets lost. Cool, now we’re able to send reliable messages!
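For a flavor of the mechanism (this follows the scheme from Fiedler’s articles rather than our exact wire format): every packet carries its own sequence number, the newest remote sequence number seen, and a bitfield acknowledging the 32 packets before that one.

    #include <cstdint>

    // Ack header carried by every packet (a sketch, not our exact format).
    struct PacketHeader {
        uint16_t sequence;  // this packet's sequence number
        uint16_t ack;       // newest sequence number received from the other side
        uint32_t ackBits;   // bit n set => packet (ack - 1 - n) was also received
    };

    // On receive: check whether one of our earlier packets was acknowledged.
    // Reliable messages that rode on un-acked packets get re-sent on future
    // packets until this returns true for them.
    inline bool WasAcked(const PacketHeader& h, uint16_t seq) {
        if (seq == h.ack)
            return true;
        uint16_t diff = static_cast<uint16_t>(h.ack - seq);  // wraps correctly
        return diff >= 1 && diff <= 32 && (h.ackBits & (1u << (diff - 1)));
    }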

Data replication

In previous games I’ve worked on I had to create new net-messages for each new piece of data I wanted to send, as well as code for handling this message when it arrives. This became quite a chore, and I wanted to fix this for Viking Squad. The new system I am using may not be as sophisticated as the replication system used in Unreal (for example), but it works well for my use, and the network traffic required is minimal.

The data-blob system has 8 local slots for data. Whenever you want a piece of data to be available to other clients, you submit a byte array to one of the 8 slots, and this data is communicated to the other clients in the network automatically. This system uses a similar ack/nak scheme to the one described for the reliable message buffer. On top of this, it uses a binary difference algorithm to make sure only the changes in the data are sent over the network, rather than the entire data blob. Most data in the data-blob slots is fairly coherent between frames, so the binary diffs are quite small. Whenever a new data-blob comes in from a client, an event is sent to the game layer notifying it of the newly arrived data.
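A slot boils down to a few buffers and revision counters. This sketch is my shorthand for the idea, with BuildDelta standing in for the binary difference algorithm; the real implementation details may differ:

    #include <cstdint>
    #include <vector>

    // Implemented elsewhere: produces a binary diff from 'from' to 'to'.
    std::vector<uint8_t> BuildDelta(const std::vector<uint8_t>& from,
                                    const std::vector<uint8_t>& to);

    // One of the 8 local data-blob slots. The slot remembers which revision
    // the remote side has acked, so deltas can be built against a baseline
    // that both sides are known to have.
    struct DataBlobSlot {
        std::vector<uint8_t> current;   // latest bytes submitted by the game
        std::vector<uint8_t> baseline;  // bytes of the last acked revision
        std::vector<uint8_t> inFlight;  // bytes of the revision we last sent
        uint32_t revision = 0;          // bumped on every submit
        uint32_t sentRevision = 0;
        uint32_t ackedRevision = 0;

        void Submit(std::vector<uint8_t> bytes) {
            current = std::move(bytes);
            ++revision;
        }

        bool IsDirty() const { return revision != ackedRevision; }

        // Build the next delta-compressed message against the acked baseline.
        std::vector<uint8_t> MakeMessage() {
            inFlight = current;
            sentRevision = revision;
            return BuildDelta(baseline, current);
        }

        // Remote confirmed it reconstructed the sent revision: new baseline.
        void OnRemoteAck(uint32_t rev) {
            if (rev == sentRevision) {
                ackedRevision = rev;
                baseline = inFlight;
            }
        }
    };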

Small data packets

I touched on this in the previous parts, but the messages generated by the game are kept as small as possible. The main message pump sends a steady number of packets per second, and each packet can contain multiple unreliable, reliable, and data-blob messages. The send rate of the message pump adapts to the detected packet loss and round trip time, in an effort to control the flow of data and prevent clogs.
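Something like this (the thresholds here are illustrative; I’m not quoting our tuned values): when the measured RTT or packet loss crosses a line, the pump drops to a low send rate, and it creeps back up once the connection looks healthy again.

    #include <algorithm>

    // Sketch of simple flow control for the message pump: throttle hard when
    // the connection looks bad, recover slowly when it looks good again.
    struct MessagePump {
        float sendRateHz = 30.0f;

        void UpdateSendRate(float rttMs, float packetLossPct) {
            const bool bad = (rttMs > 250.0f) || (packetLossPct > 10.0f);
            if (bad)
                sendRateHz = 10.0f;  // back off to let the clog drain
            else
                sendRateHz = std::min(30.0f, sendRateHz + 0.5f);  // creep back up
        }
    };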

To make sure data is stored in the smallest way possible I’ve implemented a BitStreamWriter and BitStreamReader. They allow for dense bit-packing, and can considerably reduce message sizes. For example, naively writing a ‘bool’ to the average stream takes 8 bits, since the smallest unit it deals with is a byte; the bit stream uses only 1 bit. Writing an unsigned int with a known range of 0 to 127 would still take 32 bits in a ‘normal’ stream, but the BitStreamWriter stores it in only 7 bits. Large savings, if you know the range of your variables!
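The core of such a writer fits in a few lines. This is a minimal sketch, not our actual interface:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Minimal bit-packing writer: values take exactly the number of bits
    // asked for, instead of being rounded up to whole bytes.
    class BitStreamWriter {
    public:
        void WriteBits(uint32_t value, int numBits) {
            assert(numBits > 0 && numBits <= 32);
            for (int i = 0; i < numBits; ++i) {
                if (m_bitPos == 0)
                    m_bytes.push_back(0);  // start a fresh byte
                if (value & (1u << i))
                    m_bytes.back() |= static_cast<uint8_t>(1u << m_bitPos);
                m_bitPos = (m_bitPos + 1) & 7;
            }
        }

        void WriteBool(bool b) { WriteBits(b ? 1u : 0u, 1); }  // 1 bit, not 8

        // A value with a known range [0, maxValue] needs only enough bits
        // to represent maxValue: 7 bits for 0..127 instead of 32.
        void WriteRangedUint(uint32_t v, uint32_t maxValue) {
            int bits = 1;
            while (((1ull << bits) - 1) < maxValue)
                ++bits;
            WriteBits(v, bits);
        }

        const std::vector<uint8_t>& Data() const { return m_bytes; }

    private:
        std::vector<uint8_t> m_bytes;
        int m_bitPos = 0;  // next free bit within the last byte
    };

    // BitStreamWriter w;
    // w.WriteBool(true);            // 1 bit
    // w.WriteRangedUint(100, 127);  // 7 bits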

Alright, I’ve been rambling on a bit, but I wanted to show you one more thing: how do you test all this stuff? Well, I made a tool. Here’s a screenshot:

Here you can see all of the things described above. On the left is one client, on the right is another, and in the middle is the bad-connection-simulator. At the moment of this screenshot, it was sending 30 reliable and 30 unreliable messages from both sides. That’s more reliable traffic than I’d generally use in a game, but this is a test case, right?

In the middle you can see that I’ve set the lag to 100-130ms for both send and receive, resulting in a total round trip time (RTT) of around 200-260ms. As you can see in the ‘RTT’ graph, which shows the average RTT, this is exactly what’s happening (the number that is cut off is 256.0). The detected packet loss varies roughly between 4% and 8%, which is pretty close to the 5% I’ve set in the bad-connection-simulator.

Also interesting are the graphs for ‘Reliable xfer time’ and ‘Unreliable xfer time’. Here you can clearly see that sending unreliable messages is consistently fast (limited only by lag), while a reliable message can take much longer (a few multiples of the lag), depending on both packet loss and lag.

The last thing I wanted to mention is the ‘Local Data Blobs’ and ‘Remote Data Blobs’ sections. When you type into the Local Data Blobs fields, every change is treated as a separate ‘revision’, signified by the first number that follows. The second number is the remotely acked revision, which is used to generate delta-compressed messages that bring the other side up to the latest version.

Alright, I hope that gives you a bit of insight into how our lower level network layer is set up! Until next week, and don’t forget the art stream later today!




Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc




Dev-Blog 187: PAX Prime 2015!



Welcome back, followers of the fearsome!

I am typing this from my couch at home, still tired, but very satisfied with the PAX showing we just had. We were showing Viking Squad in the Indie MEGABOOTH, and it was a lot of fun. We met many, many cool gamers, and it was a blast to see everybody enjoy our game!

We set up a GoPro to make a cool time-lapse of all the people coming through our corner of the Indie MEGABOOTH:

There was a bit of a last-minute shake-up in our plans before we went to PAX. We bought two big TVs to show the game on, but due to some trouble with transport and border crossings, we ended up renting three Intel-sponsored TVs from the MEGABOOTH at the last minute. Sony was kind enough to lend us three PlayStation test kits, and we set up every TV with one. At first we thought we’d keep two TVs as playable stations and have the third TV looping our trailer. After the show opened, we very quickly found out that it was a very good idea to have each TV set up as a playable station. We turned off the trailer and turned on the game on the third TV, and each TV was basically occupied for the entire duration of the show! Talk about a lucky break!

Also, the average time people had to wait to try the game was less than 5 minutes. This is a drastic improvement over our PAX East setup, where we had one big TV and people lining up for up to an hour. Don’t get me wrong, it’s great that people want to line up for that long, but it also makes me feel bad for taking up people’s time with waiting in a line! So having three stations where everybody can play the game within minutes is great!

In other news, Ookpixels wrote a GREAT article about us this week! Eli came by our office and interviewed us, and then he did a ton of research into what we were saying. He’s a great guy, and we’re honoured he wanted to write an article about us!

Alright, that’s it for this week. We’re getting back into the swing of things now, and we’ll be back with our weekly dev-stream in a few hours! Come hang out!



Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc




Dev-Blog 181: The room-scale Vive VR Jam!



Welcome back, followers of the fearsome!

Last weekend Caley and I took part in the coolest game-jam we’ve ever attended: the room-scale Vive VR Jam! Valve and HTC came to Vancouver and set up 3 complete VR rooms of about 12’x10′. There were about 100 attendees making several VR games, and everybody shared the rooms for testing during the two days of the jam.


My personal goal for the jam wasn’t to make an actual game, though. I decided I wanted to get my engine up and running with the Vive! Even though Scrap Metal was the last real 3D game we made, and Shellrazer and Viking Squad are mostly 2D, the guts of the engine are still all 3D. Of the roughly 100 people at the jam, only 3 were working on their own tech; the rest were all using Unity and Unreal. As a programmer this makes me sad.

Getting the engine converted to render to the Vive was actually very straightforward. The VR library that Steam provides is free to download, and it even works with the Oculus DK2! I was able to test small things using just my laptop and the DK2, and when I created a build to test on the Vive setup, it worked straight away! I had a few problems with the transform matrix coming from the controllers (different coordinate spaces), but once that was solved it was all working!
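For anyone hitting the same snag: the pose matrices come back row-major with the translation in the last column, while an engine may store column-major 4x4 matrices (mine does). The axis conventions can also differ per engine, so treat this as a sketch of the transpose-and-extend step only; the names are mine:

    // Converts a row-major 3x4 pose (as the VR runtime hands back) into a
    // column-major 4x4. Axis flips, if your engine needs them, go on top.
    struct Mat4 {
        float m[4][4];  // column-major: m[column][row]
    };

    Mat4 FromVRPose(const float pose[3][4]) {
        Mat4 out = {};
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 4; ++col)
                out.m[col][row] = pose[row][col];  // transpose into columns
        out.m[3][3] = 1.0f;  // bottom row becomes (0, 0, 0, 1)
        return out;
    }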

Working with the Vive, we learned a few things very quickly. First off, you have to run at 90 frames per second; anything lower than that, and you will start feeling sick very quickly. Most console games nowadays run at 1920×1080, either at 60 or sometimes at 30 fps. At 60fps, you have about 16 milliseconds per frame to fill 2,073,600 pixels. The Vive has two screens of 1200×1080, and each needs to run at 90 fps, meaning you have 11 milliseconds to fill 2,592,000 pixels. To really put that in perspective: for a 1080p game at 60fps you need to fill just over 124 million pixels per second, while for a Vive game at 90fps that number is 233 million(!). So you need to think seriously about what you’re rendering, and how it’s being rendered!

The second thing I learned was that the controllers are a HUGE part of the VR experience. I drove down to Valve to test their prototype VR setup about half a year ago, and at the time they were only showing the headset. That experience blew my mind, because I could feel my brain being conflicted about whether this was real or not. Of course I could reason ‘this is not real, I am just in a room with a headset on’, but at the same time every nerve in my body was resisting when I was asked to step off a virtual ledge. This weekend was the first time I got to try out the controllers, and it blew my mind all over again. When you put the headset on, you can see the controllers in VR on the floor below you. It’s just so natural to bend your knees and grab them and not even think about it twice. That is amazing! And now suddenly you have all this interactivity in the scene. You can grab things, you can throw things, etc. I think the controllers are an essential part of making VR actually work properly. Selling just the headset won’t cut it (looking at you Oculus!)

Alright, so what did we end up making? Well, I just made a little toy where you can play around with our particle systems. But in my own engine! :) Each controller has the same 5 particle effects, which you can cycle through by pressing the touch-pad on the controller. All there is to do is play around with the particles! Somehow it’s pretty fun and satisfying though. :) Here’s a photo of Colin Northway of Northway Games playing around with the particle systems:


I forgot to record the actual output to the Vive in a video, and since I don’t have a Vive here, I modified the code a bit to place a few controllers and move them around in circles. So this is an example of what he might have seen, except in 3D and using the actual Vive controllers. :)

Caley used Unity to make a super sweet VR tennis game where a ball-launcher shoots balls at you, and you try to hit them with a racket and bounce them back over the net to score a point. He also forgot to Fraps it. Maybe he’ll do another blog post in the future if we can somehow record some footage on a real Vive at some point.

In conclusion, it was a super fun weekend, and I cannot wait to get my hands on an actual Vive to experiment with for our future projects. Thanks again to Valve, HTC, Radial Games, Cloudhead Games, Nvidia, Unity and Unreal for putting this on!

Also, check out this super cool video by Carl White:

Vancouver Room-Scale Vive VR Jam! (07|15)

That’s it for this week. Remember to come by our dev stream later today!



Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk





Dev-Blog 179: Managing 2000+ textures



Welcome back, followers of the fearsome!

This week Jesse is on a sweet trip to Japan with our good friend Ryan Clark of Brace Yourself Games, so I’ll take over and do another tech blog. This one is about the texture tool I built a while ago. This tool fills a very important role in our pipeline, but because it’s all behind-the-scenes it doesn’t really get much attention. This is its time to shine!

First, a bit of history so the requirements of this tool become clear. When I first started building my engine, we were making N+, a game that doesn’t have a lot of textures, so no special care was taken to manage textures in an optimal way. Then, when we started building Scrap Metal, and later Shellrazer, the number of textures increased drastically, and the need for an extra step in our pipeline became clear. We needed a tool that could convert all of the source art to a format more optimized for the final game, while giving us the control to convert/change/compress the textures as we saw fit.

The first step in optimizing the source data is to create texture atlases. The problem with many separate textures is that each of them needs a separate draw call, which means you become draw-call bound very quickly. A common fix is to combine multiple textures into texture atlases. This way you can batch all the calls that use the same atlas into one draw call, drastically reducing the number of calls needed to draw your frame and generally resulting in a faster frame rate. Here’s a screenshot of our texture tool showing how it combined a bunch of textures into a texture atlas:
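The packing itself can be done in many ways; a classic starting point is ‘shelf’ packing, sketched below (I’m not claiming this is the exact algorithm in our tool): sort textures by height, then fill the atlas in rows.

    #include <algorithm>
    #include <vector>

    struct Rect { int x, y, w, h; };  // placement of one texture in the atlas

    // Simple "shelf" packing: tallest textures first, placed left to right;
    // when a row is full, start a new shelf below it.
    std::vector<Rect> PackShelf(std::vector<Rect> texs, int atlasWidth) {
        std::sort(texs.begin(), texs.end(),
                  [](const Rect& a, const Rect& b) { return a.h > b.h; });
        int x = 0, y = 0, shelfHeight = 0;
        for (Rect& t : texs) {
            if (x + t.w > atlasWidth) {  // row full: open a new shelf
                x = 0;
                y += shelfHeight;
                shelfHeight = 0;
            }
            t.x = x;
            t.y = y;
            x += t.w;
            shelfHeight = std::max(shelfHeight, t.h);
        }
        return texs;
    }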

How do you decide which textures to combine into an atlas? If you just randomly combine textures, you can still end up doing lots of draw calls if you need a texture from atlas A, then one from atlas B, then another from atlas A, and so on. So you want to combine textures into one atlas if they are likely to be drawn in order. In our case, our puppets use many separate textures, but they are all rendered in one go, so they can easily be combined. Luckily, Jesse creates a new directory for each character he creates, so the first filter we added groups textures by the sub-directory they are located in.

There are also textures that generally belong together, such as textures that are used only in the world map. So we added a way to set the category of a directory (recursively), or of an individual file. This allows us to easily combine all ‘user interface’ textures into the same atlas, so that the user interface can be drawn in a single draw call.

The last filter we need is of course how the texture should be compressed. Some textures require RGBA8888 (such as visual effects that do multiple layers of overdraw), and some are fine compressed with DXT5. The entire atlas texture is compressed as one, so all the textures within an atlas must use the same compression. The required compression is again set on a directory basis, or on an individual file basis.


Another wish for this tool was a way to easily resize textures manually. Some textures are used in ways where cutting their size in half isn’t noticeable in the final result; for example, textures that were blurred can easily be reduced to 50% or even 25% of their original size. Keep in mind that all of this happens while building the game package, so none of the original art is resized or altered. The resize factor can be set manually per texture.

Because some of the art we use is drawn at a higher resolution than it will ever be shown at in the game, I came up with a way to reduce each texture to exactly the size it needs on screen. The way this works is that each entity is drawn at the size it appears in the game, and the size at which each texture is rendered is recorded as this is going on. The tool keeps the average size each texture was rendered at, as well as the standard deviation and min/max values. In the end, the recorded sizes, as well as a recommended resize factor, are exported. These values are visible in the image above: the bow image is resized to 57.43% of its original size.
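Recording those statistics is cheap if you update them online, so no per-draw history has to be stored. A sketch using Welford’s running mean/variance (the names are illustrative):

    #include <algorithm>
    #include <cmath>

    // Per-texture render-size statistics, updated every time the texture is
    // drawn. Welford's online algorithm keeps mean and variance without
    // storing the individual samples.
    struct RenderSizeStats {
        long long count = 0;
        double mean = 0.0, m2 = 0.0;
        double minSize = 1.0e30, maxSize = 0.0;

        void OnDraw(double onScreenSize) {
            ++count;
            const double delta = onScreenSize - mean;
            mean += delta / count;
            m2 += delta * (onScreenSize - mean);
            minSize = std::min(minSize, onScreenSize);
            maxSize = std::max(maxSize, onScreenSize);
        }

        double StdDev() const {
            return count > 1 ? std::sqrt(m2 / (count - 1)) : 0.0;
        }
    };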

When I inspected the game with Nvidia’s awesome Nsight, I could see that it draws large chunks of level data in single draw calls:


The lanes are drawn using multiple separate textures, but because they were combined in a texture atlas, we were able to draw them in one draw call.

The same goes for the puppet of the shield guy. One draw call is all it needs to draw the entire puppet, which consists of 30 or so individual textures.

All in all, this tool saves us large amounts of texture space, and it really optimizes the way we draw our frames.

Oh and as a reminder, because Jesse’s in Japan, we won’t do a dev-stream today. We did one yesterday though! Check it out here and here.


Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk





Dev-Blog 177: Taking out the trash!



Welcome back, followers of the fearsome!

This week it’s time for a rather mundane task: taking out the trash. Every project I’ve worked on seems to collect a lot of assets that were used in a test, or assets that are simply no longer used in the game. In most cases it was rather difficult to figure out what is being used and what isn’t. Often I’d delete an asset, and two weeks later somebody would suddenly notice that it is missing in an obscure part of the game.


I started thinking about how to fix this. Luckily the Slick file system references files in a standard way: every file is stored relative to a data root. For example, if a texture is located at C:\Games\VikingSquad\Data\Textures\MyTexture.png, the game code and data will always reference it as DATA:Textures\MyTexture.png. The conversion from a full path to an aliased path is done in the editor, and it guarantees that all data files properly reference other data files.
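The conversion itself is just a prefix swap. A tiny sketch (the root path is from the example above; the function name is mine):

    #include <string>

    // Turns a full path into an aliased path, e.g.
    //   C:\Games\VikingSquad\Data\Textures\MyTexture.png
    // becomes
    //   DATA:Textures\MyTexture.png
    std::string MakeAliasedPath(const std::string& fullPath,
                                const std::string& dataRoot) {
        if (fullPath.compare(0, dataRoot.size(), dataRoot) == 0)
            return "DATA:" + fullPath.substr(dataRoot.size());
        return fullPath;  // not under the data root; leave it alone
    }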

So, I figured, I should be able to create a quick C# tool that scans every data and source file for this DATA: tag and stores the links. It would then be able to tell whether a file is referenced or not, and as an added bonus it could show which resource is referencing another resource.

The code simply recursively scans all directories and files (excluding specific files and directories, such as PSD files or svn directories), and for each file it scans the binary data for the string “DATA:”. When it finds the string, it reads the rest of the null-terminated string to determine which file is being referenced, and stores this reference in memory. The tool scans the entire data directory, and it also scans the entire source code directory, so that any hard-coded references are found as well. Very handy!
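The actual tool is C#, but the scanning logic is simple enough to sketch; here’s the same idea in C++ (the exclusion rules are abbreviated):

    #include <filesystem>
    #include <fstream>
    #include <iterator>
    #include <map>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    // Scans every file under 'root' for the string "DATA:" and records the
    // null-terminated path that follows as a file -> references edge.
    std::map<std::string, std::vector<std::string>> ScanReferences(const fs::path& root) {
        std::map<std::string, std::vector<std::string>> refs;
        const std::string tag = "DATA:";
        for (const auto& entry : fs::recursive_directory_iterator(root)) {
            if (!entry.is_regular_file() || entry.path().extension() == ".psd")
                continue;  // the real tool also skips .svn directories, etc.
            std::ifstream in(entry.path(), std::ios::binary);
            std::string bytes((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());
            for (size_t pos = bytes.find(tag); pos != std::string::npos;
                 pos = bytes.find(tag, pos + 1)) {
                const size_t end = bytes.find('\0', pos);  // null-terminated string
                if (end != std::string::npos)
                    refs[entry.path().string()].push_back(bytes.substr(pos, end - pos));
            }
        }
        return refs;
    }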

Well, I coded it up, and it works really well! The scanning process takes about 10 to 15 seconds to complete, but it scans every single source and data file we have (about 7000 files). The final result isn’t very pretty to look at, but it gives lots of useful information. Click on the screenshot below to see what I mean:


In this case, I’ve selected a background file called DEBUG_NICK. It has 1 parent (meaning one other resource uses this file) and 6 children. At the bottom you can see which resource is the parent (in this case Debug_Nick.destination), and which resources are used by this background (in this case a few PNGs, a few puppets, and a lighting setup).

To find out which resources are no longer used, you can simply sort by the NrParents column and go through all of the resources that have 0 parents. If you find a texture that looks like a test texture, you can locate it easily by typing the name into the text-box at the very top, and the list will filter based on the typed name. Then you can see who is still referencing it, and fix the problem. When you’ve determined an asset can be removed, you can delete it right from this tool, which calls the proper SVN delete command.

In conclusion, it still takes a bit of manual labour to go through the assets and determine whether they are used, but it’s a hell of a lot better than guessing whether something can be deleted. We can now delete with confidence.

Alright, that’s it for this week. Keep throwing out the trash!

Also, remember: As always we’ll be Dev-Streaming today at 4pm-6pm PST. We’ll be working on the game and talking to our followers and answering any questions that come up to the best of our ability! It’s been a lot of fun and really rewarding doing the dev-stream so come on down and say hey!



Twitter: Nick: @nickwaanders Jesse: @jouste Caley: @caleycharchuk SlickEntertainment: @SlickEntInc

Google+: Nick: +NickWaanders Jesse: +JesseTurner Caley: +CaleyCharchuk



