I am typing this from my couch at home, still tired, but very satisfied with the PAX showing we just had. We were showing Viking Squad in the Indie MEGABOOTH, and it was a lot of fun. We met many, many cool gamers, and it was a blast to see everybody enjoy our game!
We set up a GoPro to make a cool time-lapse of all the people coming through our corner of the Indie MEGABOOTH:
There was a bit of a last-minute shake-up in our plans before we went to PAX. We had bought two big TVs to show the game on, but due to some trouble with transport and border crossings, we ended up renting three Intel-sponsored TVs from the Megabooth at the last minute. Sony was kind enough to lend us three PlayStation test kits, and we set up every TV with one. At first we thought we'd keep two TVs as playable stations and have the third loop our trailer. Once the show opened, we very quickly found out that having every TV set up as a playable station was the way to go. We turned off the trailer and turned on the game on the third TV, and each TV was occupied for basically the entire duration of the show! Talk about a lucky break!
Also, the average time people had to wait to try the game was less than 5 minutes. That's a drastic improvement over our PAX East setup, where we had one big TV and people lining up for up to an hour. Don't get me wrong, it's great that people want to line up for that long, but at the same time it makes me feel bad for taking up so much of their time in a line! So having three stations where everybody gets to play within minutes is great!
In other news, Ookpixels wrote a GREAT article about us this week! Eli came by our office and interviewed us, and then he did a ton of research into what we were saying. He’s a great guy, and we’re honoured he wanted to write an article about us!
Alright, that’s it for this week. We’re getting back into the swing of things now, and we’ll be back with our weekly dev-stream in a few hours! Come hang out!
Last weekend Caley and I took part in the coolest game jam we've ever attended: the room-scale Vive VR jam! Valve and HTC came to Vancouver and set up three complete VR rooms of about 12′×10′. There were about 100 attendees making several VR games, and everybody shared the rooms for testing during the two days of the jam.
My personal goal for the jam wasn't to make an actual game, though. I decided I wanted to get my engine up and running with the Vive! Even though Scrap Metal was the last real 3D game we made, and Shellrazer and Viking Squad are mostly 2D, the guts of the engine are still all in 3D. Of the roughly 100 people at the jam, only three were working on their own tech; the rest were all using Unity and Unreal. As a programmer, this makes me sad.
Getting the engine converted to render with the Vive was actually very straightforward. The VR library that Steam provides is free to download, and it even works with the Oculus DK2! I was able to test small things using just my laptop and the DK2, and when I created a build to test on the Vive setup, it worked straight away! I had a few problems with the transform matrix coming from the controllers (different coordinate spaces), but once that was solved it was all working!
Working with the Vive, we learned a few things very quickly. First off, you have to run at 90 frames per second. Anything lower than that and you will start feeling sick very quickly. Most console games nowadays run at 1920×1080, either at 60 or sometimes at 30 fps. At 60 fps, you have about 16 milliseconds per frame to fill 2,073,600 pixels. The Vive has two screens of 1200×1080, and each needs to run at 90 fps, meaning you have about 11 milliseconds to fill 2,592,000 pixels. To really put that in perspective: a 1080p game at 60 fps needs to fill just over 124 million pixels per second, while a Vive game at 90 fps needs 233 million(!). So you need to give some serious consideration to what you're rendering, and how it's being rendered!
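Those fill-rate numbers are easy to sanity-check with a quick back-of-the-envelope calculation (plain Python, nothing engine-specific):

```python
# Pixels that must be filled per second for a given display setup.
def pixels_per_second(width, height, screens, fps):
    return width * height * screens * fps

console = pixels_per_second(1920, 1080, 1, 60)  # 1080p at 60 fps
vive = pixels_per_second(1200, 1080, 2, 90)     # two 1200x1080 panels at 90 fps

print(console)    # 124416000 -> just over 124 million
print(vive)       # 233280000 -> 233 million(!)
print(1000 / 90)  # ~11.1 ms frame budget at 90 fps
```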
The second thing I learned was that the controllers are a HUGE part of the VR experience. I drove down to Valve to test their prototype VR setup about half a year ago, and at the time they were only showing the headset. That experience blew my mind, because I could feel my brain being conflicted about whether this was real or not. Of course I could reason 'this is not real, I am just in a room with a headset on', but at the same time every nerve in my body resisted when I was asked to step off a virtual ledge. This weekend was the first time I got to try out the controllers, and it blew my mind all over again. When you put the headset on, you can see the controllers in VR on the floor below you. It's so natural to bend your knees and grab them that you don't even think about it twice. That is amazing! And suddenly you have all this interactivity in the scene: you can grab things, you can throw things, and so on. I think the controllers are an essential part of making VR actually work properly. Selling just the headset won't cut it (looking at you, Oculus!).
Alright, so what did we end up making? Well, I just made a little toy where you can play around with our particle systems. But in my own engine! :) Each controller has the same 5 different particle effects you can rotate through by pressing the touch-pad on the controller. All there is to do is to play around with the particles! Somehow it’s pretty fun and satisfying though. :) Here’s a photo of Colin Northway of Northway Games playing around with the particle systems:
I forgot to record the actual output to the Vive in a video, and since I don't have a Vive here, I modified the code a bit to place a few controllers and move them around in circles. So this is an example of what he might have seen, except in 3D and with the actual Vive controllers. :)
Caley made a super sweet VR tennis game in Unity, where a ball launcher shoots balls at you and you try to hit them with a racket and bounce them back over the net to score a point. He also forgot to Fraps it. Maybe he'll do another blog post in the future if we can record some footage on a real Vive at some point.
In conclusion, it was a super fun weekend, and I cannot wait to get my hands on an actual Vive to experiment with for our future projects. Thanks again to Valve, HTC, Radial Games, Cloudhead games, Nvidia, Unity and Unreal for putting this on!
This week Jesse is on a sweet trip to Japan with our good friend Ryan Clark of Brace Yourself Games, so I'll take over and do another tech blog. This one is about the Texture Tool I built a while ago. This tool fills a very important role in our pipeline, but because it's all behind the scenes it doesn't really get much attention. This is its time to shine!
First a bit of history so the requirements of this tool become clear. When I first started building my engine, we were making N+, a game that doesn’t have a lot of textures, so no special care was taken to manage the textures in an optimal way. Then, when we started building Scrap Metal, and later Shellrazer, the number of textures increased drastically. The need for an extra step in our pipeline became clear. We needed a tool that could convert all of the source art to a format that is more optimized for the final game, while giving us the control to convert/change/compress the textures as we saw fit.
The first step in optimizing the source data is to create texture atlases. The thing with many separate textures is that each of them needs a separate draw call, which in turn means you get draw-call bound very quickly. A common way to fix this is to combine multiple textures into texture atlases. This way you can batch all the calls that use the same texture atlas into one draw call, drastically reducing the number of calls needed to draw your frame and generally resulting in a faster frame rate. Here's a screenshot of our texture tool showing how it combined a bunch of textures into a texture atlas:
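As a rough illustration of why atlasing helps (this isn't our engine's actual batching code, just the idea): if consecutive sprites in the draw order share the same texture page, they can be merged into one draw call, so routing many source textures through one atlas page collapses the count.

```python
# Count draw calls for a draw list, assuming consecutive sprites
# that sample the same texture page can be batched together.
def count_draw_calls(draw_list):
    calls = 0
    previous = None
    for texture in draw_list:
        if texture != previous:  # a texture switch forces a new draw call
            calls += 1
            previous = texture
    return calls

# Four separate textures: four draw calls (hypothetical file names).
print(count_draw_calls(["head.png", "arm.png", "torso.png", "arm.png"]))  # 4

# After atlasing, all four sprites sample the same atlas page: one call.
print(count_draw_calls(["characters_atlas"] * 4))  # 1
```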
How do you decide which textures to combine into an atlas? If you just randomly start combining textures, there’s still a chance of doing lots of draw-calls if you require a texture that’s in atlas A, and then a texture that is in atlas B, then another texture from atlas A, and so on. So you will want to combine textures into one atlas if they are likely to be drawn in order. In our case, our puppets use many separate textures, but they will all be rendered in one go, so they could easily be combined. Luckily, Jesse creates a new directory for each character he creates, so the first filter we added is to group textures by the sub-directory they are located in.
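The directory filter itself is simple; here's a minimal sketch (the paths are hypothetical, and the real tool is written for our own pipeline while this sketch is Python):

```python
from collections import defaultdict
from pathlib import PurePosixPath

# Group texture paths by the sub-directory they live in, so each
# character's textures end up in the same atlas candidate group.
def group_by_subdirectory(paths):
    groups = defaultdict(list)
    for path in paths:
        groups[str(PurePosixPath(path).parent)].append(path)
    return dict(groups)

textures = [
    "Characters/Viking/head.png",
    "Characters/Viking/shield.png",
    "Characters/Troll/club.png",
]
print(group_by_subdirectory(textures))
# {'Characters/Viking': ['Characters/Viking/head.png', 'Characters/Viking/shield.png'],
#  'Characters/Troll': ['Characters/Troll/club.png']}
```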
There are also textures that generally belong together, such as textures that are used only in the world map. So we added a way to set the category of a directory (recursively), or an individual file. This allows us to easily combine all ‘user interface’ textures into the same texture atlas, so that the user interface can be drawn in a single draw call.
The last filter is, of course, how the texture should be compressed. Some textures require RGBA8888 (such as visual effects that do multiple layers of overdraw), and some are fine to be compressed using DXTC5. The atlas texture is compressed as a whole, so all the textures within an atlas must use the same compression. The required compression is again set per directory, or per individual file.
Another wish for this tool was a way to easily resize textures manually. Some textures are used in ways where cutting their size in half isn't noticeable in the final result. For example, textures that were blurred can easily be reduced to 50% or even 25% of their original size. Keep in mind that all of this happens while building the game package, so none of the original art is resized or altered. The resize factor can be set manually per texture.
Because some of the art we use is actually drawn at a higher resolution than it will ever be shown at in the game, I came up with a way to try and reduce all textures to exactly the size they require on screen. The way this works is that each entity is drawn at the size it appears in the game, and the size at which each texture is rendered is recorded as this is going on. The tool keeps the average size each texture was rendered at, as well as the standard deviation and min/max values. In the end, the recorded sizes and a recommended resize factor are exported. These values are visible in the image above: the bow image is resized to 57.43% of its original size.
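A sketch of that measurement, with hypothetical names (the real engine records this while the game is actually being played, not in a standalone class):

```python
import math

# Accumulates the on-screen scale a texture was rendered at, so a
# recommended resize factor can be exported at the end of a session.
class RenderSizeStats:
    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.total_sq = 0.0
        self.minimum = float("inf")
        self.maximum = 0.0

    def record(self, scale):
        self.count += 1
        self.total += scale
        self.total_sq += scale * scale
        self.minimum = min(self.minimum, scale)
        self.maximum = max(self.maximum, scale)

    def mean(self):
        return self.total / self.count

    def stddev(self):
        return math.sqrt(max(self.total_sq / self.count - self.mean() ** 2, 0.0))

    def recommended_resize(self):
        # Conservative: never shrink below the largest scale actually seen,
        # and never recommend upscaling past the source resolution.
        return min(self.maximum, 1.0)

stats = RenderSizeStats()
for scale in (0.55, 0.60, 0.57):
    stats.record(scale)
print(round(stats.recommended_resize(), 4))  # 0.6
```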
When I checked the game with NVIDIA's awesome Nsight, I could see that it draws large chunks of level data in single draw calls:
The lanes are drawn using multiple separate textures, but because they were combined in a texture atlas, we were able to draw it in one draw call.
The same goes for the shield guy's puppet. One draw call is all it takes to draw the entire puppet, which consists of 30 or so individual textures.
All in all, this tool saves us large amounts of texture space, and it really optimizes the way we draw our frames.
Oh and as a reminder, because Jesse’s in Japan, we won’t do a dev-stream today. We did one yesterday though! Check it out here and here.
This week it's time for a rather mundane task: taking out the trash. Every project I've worked on seems to collect a lot of assets that were used in a test, or that are simply no longer used in the game. In most cases it was rather difficult to figure out what is being used and what isn't. Often I'd delete an asset, and two weeks later somebody would suddenly notice it was missing from an obscure part of the game.
I started thinking about how to fix this. Luckily the Slick file system uses a standard way to reference files: Every file is stored relative to a data root. For example, if a texture is located in C:\Games\VikingSquad\Data\Textures\MyTexture.png, the game code and data will always reference it as DATA:Textures\MyTexture.png. The conversion of a full path to an aliased path is done in the editor, and it guarantees that all data files are properly referencing other data files.
So, I figured, I should be able to create a quick C# tool that scans every data and source file for this DATA: tag, and stores the links. It should therefore be able to see whether a file is referenced or not, and as an added bonus it should be able to see which resource is referencing another resource.
The code simply recursively scans all directories and files (excluding specific files and directories, such as PSD files, or svn directories), and for each file it scans the binary file data for the string “DATA:”. When it finds the string, it tries to read the rest of the null-terminated string to determine which file it’s referencing, and stores this reference in memory. The tool scans the entire data directory, and it also scans the entire source code directory, so that any hard-coded references are found as well. Very handy!
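A minimal sketch of that scanner (the actual tool is C#; this Python version shows the same idea, using a simplified pattern instead of reading true null-terminated strings):

```python
import os
import re
from collections import Counter

# Matches a DATA:-aliased path embedded in binary or source data.
DATA_REF = re.compile(rb"DATA:[A-Za-z0-9_\-./\\]+")

def scan_references(root):
    """Map each file under `root` to the DATA: paths it references."""
    references = {}
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".svn"]  # skip svn metadata
        for name in filenames:
            if name.lower().endswith(".psd"):  # skip source art
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as handle:
                blob = handle.read()
            references[path] = [m.decode("ascii") for m in DATA_REF.findall(blob)]
    return references

def count_parents(references):
    """Invert the map: how many files reference each DATA: path?"""
    parents = Counter()
    for targets in references.values():
        parents.update(set(targets))
    return parents
```

Inverting the reference map like this is also what makes the "0 parents" query in the tool cheap: any DATA: path with no entry in the counter is a deletion candidate.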
Well, I coded it up, and it works really great! The scanning process takes about 10 to 15 seconds to complete, but it scans every single source and data file we have (about 7000 files). The final result isn’t very pretty to look at, but it does give lots of useful information. Click on the screenshot below to see what I mean:
In this case, I’ve selected a background file called DEBUG_NICK. It has 1 parent (meaning, one other resource is using this file), and 6 children. At the bottom you can see which resource is the parent (in this case Debug_Nick.destination), and which are the resources used by this background (in this case a few png’s, a few puppets, and a lighting setup).
To find out which resources are no longer used, you can simply sort by the NrParents column and go through all of the resources that have 0 parents. If you find a texture that looks like a test texture, you can locate it easily by typing its name in the text box at the very top, and the list will filter based on the typed name. Then you can see who is still referencing it, and fix the problem. When you've determined an asset can be removed, you can delete it right from this tool, which will call the proper SVN delete command.
In conclusion, it still takes a bit of manual labour to go through the assets and determine if they are used or not, but it’s a hell of a lot better than guessing whether it can be deleted. We can now delete with confidence.
Alright, that’s it for this week. Keep throwing out the trash!
Also, remember: As always we’ll be Dev-Streaming today at 4pm-6pm PST. We’ll be working on the game and talking to our followers and answering any questions that come up to the best of our ability! It’s been a lot of fun and really rewarding doing the dev-stream so come on down and say hey!
This week, a bit more tech stuff. When we were playtesting our game, we had a problem with the screen being too busy and attention-grabbing at times. It was hard to know what to focus on, as your eyes would get drawn to parts of the screen that didn't necessarily matter to the gameplay. A trick we used in Shellrazer was to blur the background to make the foreground elements stand out more, so I decided to add it to Viking Squad to see how it looks.
The process is fairly easy. We mark the elements in the world as ‘foreground’, ‘background’, or ‘normal’. Now that the elements are marked, we use a number of steps to get the blurred effect we want:
Step 1: Background pass. All background elements are rendered to a render target, and the render target is blurred. The render target is two-thirds the size of the full frame. The reason it's smaller is to save video memory, and once the image is blurred you can't really tell it's at a lower resolution anymore.
Step 2: Foreground pass. All the foreground elements are rendered to a render target that is half the size of the full frame. Note that the render target is cleared with RGBA 0x00000000, so that the alpha information stays correct for later. The render target is then blurred.
Step 3: Normal pass (this pass uses MSAA). In the normal pass, the background pass texture is first rendered as a full-screen quad (z-write off). Then the entire scene is rendered on top of this. After the entire scene is done rendering, the foreground pass texture is rendered over top as a full-screen quad using alpha blending.
After these steps, the rest of the post processing steps are performed, like Color Grading, bloom, etc.
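Incidentally, clearing the foreground target to RGBA 0x00000000 in Step 2 is what makes the final blend work: pixels the foreground pass never touched keep alpha 0, so they leave the scene underneath unchanged. A per-pixel sketch of that blend, assuming straight (non-premultiplied) alpha:

```python
# Standard 'over' alpha blend of one foreground-pass pixel onto the scene.
def alpha_over(src_rgba, dst_rgb):
    r, g, b, a = src_rgba
    return tuple(s * a + d * (1.0 - a) for s, d in zip((r, g, b), dst_rgb))

# An untouched foreground pixel (alpha 0) leaves the scene as-is:
print(alpha_over((0.0, 0.0, 0.0, 0.0), (0.2, 0.5, 0.8)))  # (0.2, 0.5, 0.8)

# A fully opaque blurred foreground pixel replaces the scene pixel:
print(alpha_over((1.0, 0.5, 0.0, 1.0), (0.2, 0.5, 0.8)))  # (1.0, 0.5, 0.0)
```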
The result is subtle in a static image, but when moving around it's a lot better than before! Mouse over the images below to see the before and after (click to go fullscreen).
A small addition, but the combination of all these little bits will make the game more and more pretty. Hopefully. :)
Alright, that’s it for this week. Until next time! And don’t forget our dev-stream later today!
Welcome back followers of the fearsome! This week we’ll do up a re-cap on last weekend’s very cool Twitch Con 2015! Twitch’s Blog covers the event really well right here but we’d like to talk a bit about our experience there as well! First off, this con was definitely not one that we were used to. […]