And now for something completely different. What?! Another change? Yes! That’s the beauty of being an indie developer; we can work on whatever we want! :) In any case, we’ve been working on this procedural generation stuff so much lately that we feel the need to let it settle and crystallize a bit before we can make up our minds about what the next most important feature will be. It’s not done and over with, it’s just that we didn’t feel like working on it this past week.
This week we decided to return to the idea of Star Razer. The awesome Chris Hadfield just returned from space, so it may be that we were inspired by his awesomeness. Since the last time we touched Star Razer we’ve had a number of cool ideas on how to change and improve the game. We’re going to take the strategy elements out because they weren’t really working for us. We’re also going to take out the procedural tunnel-like levels; the barriers they generated made moving your ships an annoyance rather than fun.
We’re going to treat working on this game almost like a mini game jam: cram the features into the game very quickly and see if they work. The idea we’re pursuing is actually fairly similar to Shellrazer. The difference is that you constantly collect new guns and ball them together into a giant katamari ball, and of course that it’s all in SPACE! So we expect it to be deliciously chaotic, and hopefully not too confusing! We’ll get back to you next week on whether it actually worked the way we envisioned it.
For now, here’s our entire game design doc:
And yes, we hate game design docs. We prefer visual design cartoons like the one above. It does help to have the almighty Drawbarian around to whip up these awesome little drawings.
Alright, that’s it for this week. Keep those starcannons blasting!
This week is a bit of a dev-intermission. We don’t have an intermission banner, so I just put the dev-blog of the beast banner at the top. I’m currently in Memphis visiting my sister who does amazing research work at St Jude Children’s Research Hospital. So I’ve been busy catching up with her, and seeing the sights around Memphis. I visited Graceland, Sun Studios, the Gibson factory, and lots more. It’s a very neat place! Music is everywhere.
On the weekdays I’m trying to get some work done, though. As most of you probably know, I usually write everything in C# first, and then convert the code to C++ once I’ve iterated on it enough to be stable. Over the last half year we’ve been making a bunch of prototypes that required a few changes in the render engine, but the changes were only made on the C# side. So this week I am trying to catch up on all the changes I’ve made in the C# version of the render engine, and port them into the C++ engine. I guess this is the part that most people see as the main disadvantage of my C#–>C++ method: maintaining both sides at the same time. With proper SVN logs it’s not too hard, though. I just look up when the last change was in the C++ engine code, and then grab all the changes from the C# branch until now. I usually remember exactly what changed, so even just seeing the filenames is often enough to remember what needed to be changed, but of course I can always see the actual changes in the files as well. This process takes me only a few days even if there were lots of changes, so the time saved iterating at breakneck speed in C# vastly makes up for it.
There weren’t too many changes. I changed my shaders from .FX (DirectX) files to generic XML files that can hold the OpenGL shader code, as well as the DirectX shader code if required. I’ve taken everything DirectX-specific out of my engine though; it’s all OpenGL at the moment (as pretty much all platforms support OpenGL), so currently my XML shaders only support OpenGL. The XML files get converted to ‘final’ binary shader files in the build process, which allows for fast loading in the C++ engine.
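The actual element names in those XML shader files aren’t public, but a hypothetical layout along these lines shows the idea of keeping per-API shader code in one generic file:

```xml
<!-- Hypothetical layout, not the actual file format: one generic shader
     file that can hold code for multiple graphics APIs side by side. -->
<shader name="Tint">
  <opengl>
    <vertex><![CDATA[
      // GLSL vertex shader source goes here
    ]]></vertex>
    <fragment><![CDATA[
      // GLSL fragment shader source goes here
    ]]></fragment>
  </opengl>
  <!-- A <directx> block could sit alongside if that back-end returns. -->
</shader>
```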
Another thing I changed is the full screen post effects. I’ve streamlined the code so that it’s easier to turn parts on and off to accommodate slower devices. The post effects we’re doing aren’t really that complex: there’s a screen distortion pass (can be used to create shimmers above fire, blast waves for explosions, etc.), a tinting pass (can be used to alter the hue, saturation, brightness and contrast of the screen), and a stencil pass (used for the line-of-sight system in the space game we were working on, and might return to at some point). I used to have a bunch of extra passes like edge detection and smoothing, but I’ve taken out this code for now as it unnecessarily complicates things. I can always put it back in if needed.
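To give a feel for what a tinting pass does, here’s a minimal Python sketch of the per-pixel math (the real thing runs as a fragment shader on the GPU; the parameter names here are assumptions for illustration):

```python
# Sketch of a tint pass applied to a single RGB pixel with components in 0..1.
# Desaturation blends toward luminance, contrast pivots around mid-grey,
# and brightness scales the result.

def tint(rgb, saturation=1.0, brightness=1.0, contrast=1.0):
    r, g, b = rgb
    # Luminance-based desaturation (Rec. 601 weights).
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    r = luma + (r - luma) * saturation
    g = luma + (g - luma) * saturation
    b = luma + (b - luma) * saturation

    def adjust(c):
        c = (c - 0.5) * contrast + 0.5   # contrast around mid-grey
        return min(max(c * brightness, 0.0), 1.0)  # brightness, then clamp

    return (adjust(r), adjust(g), adjust(b))
```

With `saturation=0.0` every channel collapses to the luminance, which is exactly the grayscale effect you’d expect from such a pass.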
The last thing I changed was the sound system. When we were using XNA for our tools, we were pretty much required to use XACT as the sound engine. So for the C++ sound engine I built a converter that could read the XACT file format, convert it to our own sound data and play that in our C++ sound engine which was based on OpenAL. Now that I’ve converted all our C# tools to use OpenTK (a mix of OpenGL, OpenAL and OpenCL), we no longer have the dependency on XACT, so I spent some time to implement all the sound curves, random cues, etc in our own editor. Below is a screenshot.
As you can see, we have pretty basic controls for sound cues: volume, pitch (as well as variations for both), polyphony, attenuation, random samples, multiple tracks, and a ‘strength’ dependency for volume and pitch (which is off-screen at the bottom). This is something we set from the code, so when a sample is played, a strength between 0 and 1 always needs to be passed. This allows for nicely scaled impact sounds depending on the severity of the impact, for example. We also used this for the engine sounds in Scrap Metal, where 0 was 0 rpm, and 1 was the max rpm of the motor.
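The strength dependency boils down to interpolating cue parameters. Here’s a minimal Python sketch of the idea (the class and field names are made up for illustration, not the actual editor or engine code):

```python
import random

def lerp(a, b, t):
    return a + (b - a) * t

class SoundCue:
    """Toy sound cue: strength (0..1) interpolates volume and pitch ranges."""

    def __init__(self, volume_range=(0.2, 1.0), pitch_range=(0.8, 1.2),
                 volume_variation=0.0, rng=random.random):
        self.volume_range = volume_range
        self.pitch_range = pitch_range
        self.volume_variation = volume_variation
        self.rng = rng

    def play_params(self, strength):
        # Clamp so out-of-range strengths from gameplay code stay sane.
        strength = min(max(strength, 0.0), 1.0)
        volume = lerp(*self.volume_range, strength)
        # Optional random variation, like the editor's variation sliders.
        volume += (self.rng() * 2.0 - 1.0) * self.volume_variation
        pitch = lerp(*self.pitch_range, strength)
        return volume, pitch
```

For the engine-sound case above, the caller would simply pass `strength = rpm / max_rpm` every frame.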
Alright, that’s it for this week. Like I said before, not too many changes, but I hope this was useful to somebody out there.
This week’s dev-blog was typed very slowly because I cut part of my finger off while making salad… Yeah… You don’t win friends with salad… Anyway, I’d like to talk a bit more about procedural content generation. We’re working on this large procedural world created using Voronoi math and assigning biomes, etc. (See blog posts 52 and 53.) Currently the generated island looks like this (well, one random version of it anyway):
A little less patchy than it was a while back, and now there are some entities on the terrain like trees, people, and of course rabbits. There are still some things missing though: roads and rivers!
To be able to create nice looking roads and rivers, I came up with a modular system that allows for cool automated content generation along a path. To create a road between two points in the world, I do a standard A* pathfind, then feed the path into the procedural road generator, which draws the road on the terrain and adds entities where it sees fit. The procedural generator works with paths, and modules that can modify a path or perform an action on it. Here is a shot from the procedural-editor:
There are two points with an interpolated path in between. As you can see I’ve added a ‘Repeat’ node, which repeats any action underneath it every x meters. Currently there’s only a ‘Paint Surface’ action, which draws tiles onto our terrain within a given radius. As you can see, the tiles underneath the yellow line have been changed to a stony looking surface.
Ok, so now we can draw a road on the terrain. Now it gets more interesting. What if we wanted trees along the road, evenly spaced? Well, that’s easy too:
I’ve added another ‘Repeat’ node, but this time it repeats every 8 meters. The first child node is a ‘Transform’ node, which allows me to offset everything to the side of the road. Then we have a ‘Place Entity’ node, which can place any entity in the world. I chose a Pine tree for now. As pine trees can’t live on solid rock, I decided to paint the surface underneath the tree to soil.
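The node setup above can be sketched in a few lines of Python. This is a toy version with made-up names (`walk_path`, `place_along_road`), not the actual editor code: a ‘Repeat’ walks the path every `spacing` meters, a ‘Transform’ offsets to the side of the road, and a ‘Place Entity’ records a placement.

```python
import math

def walk_path(points, spacing):
    """Yield (position, direction) every `spacing` meters along a polyline."""
    travelled = 0.0
    next_stop = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0.0:
            continue
        dx, dy = (x1 - x0) / seg, (y1 - y0) / seg
        while next_stop <= travelled + seg:
            t = next_stop - travelled
            yield (x0 + dx * t, y0 + dy * t), (dx, dy)
            next_stop += spacing
        travelled += seg

def place_along_road(points, spacing, side_offset, entity, out):
    """Repeat every `spacing` meters -> Transform sideways -> Place Entity."""
    for (x, y), (dx, dy) in walk_path(points, spacing):
        # The perpendicular of the travel direction is "the side of the road".
        px, py = -dy, dx
        out.append((entity, (x + px * side_offset, y + py * side_offset)))
```

A ‘Paint Surface’ action would work the same way, stamping tiles within a radius of each visited point instead of appending an entity.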
We can do this another time and add a few stones to the right of the road:
This was done in pretty much the same style as above. Other node types include random offsets, in-between nodes, and nodes that apply only at the start or end of the path, and I’m sure I’ll be adding more in the near future.
The nice thing about doing things procedurally is that you can change the start and end points, and it will still look cool:
Alright, that’s it for this week. Next week we’re getting new standing desks delivered (yay!!), and to allow for the delivery, we’re going to be out of the office. In fact, I’ll be visiting my beautiful and super smart sister in Memphis, I can’t wait! The dev-blog next week might just feature Elvis, Music and BBQ!
Today we’re doing another episode of the Waanderful tech posts. One question I get asked a lot is how we do our 2d sprite animation. Well, when I was working on N+ for XBLA we decided to invest some time in a sprite editor that uses 2d skeletal animation. This editor has evolved over time, and it is now fully integrated in our game editor (still called TrackEditor because of Scrap Metal :) ).
If you’re not familiar with the concept of skeletal animation, it basically means nodes linked to other nodes, like your hand is connected to your lower arm, which is connected to your upper arm. If you were to rotate only your upper arm, your lower arm and hand would move as well, because they are linked.
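That linking is just transforms building on top of parent transforms. A toy Python illustration (not the editor’s actual code), where each bone stores a length and a local rotation relative to its parent:

```python
import math

def world_positions(bones):
    """bones: list of (length, local_angle_radians), root bone first.

    Each bone's world angle is its parent's angle plus its own local angle,
    so rotating an early bone moves every bone further down the chain.
    """
    x = y = angle = 0.0
    positions = []
    for length, local_angle in bones:
        angle += local_angle              # inherit the parent's rotation
        x += math.cos(angle) * length     # walk along the bone
        y += math.sin(angle) * length
        positions.append((x, y))
    return positions
```

Rotating only the first bone (the “upper arm”) by 90 degrees swings the whole chain upward, while the local angles of the other bones stay untouched.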
Below you can see a screenshot of the TrackEditor, which Jesse uses to create all the animations for the game. (Click on the image for a larger version)
On the right at the top you can see the basic animation controls, where you can add/remove/rename animations in the sprite, set the looping mode, change the duration, and preview it by pressing the play button. The looping mode controls how the animation loops in the game. The options are:
Loops the animation until you switch to a new animation
Plays the animation once and sticks on the last frame.
Doesn’t progress the animation at all, which is usually used for finely controlling the animation through code, for example for an aiming animation that aims up or down. We know the exact angle we want to aim at, so we can tell the animation to go to a specific time.
Below is a tree view control that shows the hierarchy of nodes. There are a number of different nodes:
(the little green & red arrow icon).
(the little bitmap icon). Draws a quad with a given texture/color.
(not shown). Draws a vertex coloured polygon.
(not shown). Allows us to link and animate a visual effect inside the sprite, for example sparks when hitting something, etc.
(not shown). Allows us to link a sound to a specific time in the animation.
(not shown). Allows us to link a camera shake to a specific time in the animation.
Linking visual effects, sounds and camera shakes to the sprites is super useful. Before I had this, I had to manually spawn these through code, and it just made things quite messy at times. Now I just call a certain animation in the sprite, and everything just works. Nice!
Below the tree view are the animation key-frames. The yellow line is the current time in the animation, which can be dragged left and right to scrub through the animation. As you can see, each node can animate the position, rotation and scale, and we can enable/disable linking to the parent for each of the components. Frankly we don’t use this much, but it’s nice to have for those times when you do need it. Because the selected node is a quad node, we can also animate the sprite index. The sprite index can be used to do flip-frame animation if you link more than one texture to the quad.
Then there’s Move Forward/Backward. This is used to alter the render order for each node, without changing the hierarchy. This really is quite important to have; there are many instances where you want a certain hierarchy, but a different render order.
When Jesse plays around with this for a few minutes, he comes up with cool animations like this:
In this video the skeleton is enabled, depicted by the yellow lines. Each green-red axis is a separate part of the sprite.
Because the animation keyframes aren’t snapped to specific frames, but instead store an arbitrary high-precision time value, we interpolate between the keyframes smoothly, adjusted by the playback speed or framerate. This makes for super smooth animations, with a minimal texture load compared to flip-frame animation. And the best thing is that you can still do flip-frame animation for certain parts if you want to. Best of both worlds.
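Sampling a track at an arbitrary float time boils down to finding the two surrounding keys and interpolating. A minimal Python sketch of the idea (illustrative names, not the actual engine code):

```python
def sample(keys, t):
    """Linearly sample a keyframe track at an arbitrary float time.

    keys: list of (time, value) pairs sorted by time.
    Times before the first key or after the last clamp to the end values.
    """
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)    # 0..1 between the two keys
            return v0 + (v1 - v0) * u
```

The clamping behaviour at the ends also matches the ‘stick on the last frame’ play-once mode described earlier, and driving `t` directly from code is how an aiming animation can be scrubbed to an exact angle.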
We just got back from an awesome GDC, and we’re actually a little bit down now that we’re back in the office. It was great to meet and hang out with a ton of interesting people and watch great talks about their experiences; we wanted it to last forever. Some people called it the post-GDC-depression, and as far as I am concerned, that’s a pretty accurate description!
We had an Independent Games Summit pass, which includes access to the Independent Games Summit (of course), as well as the expo hall for the rest of the week. Since becoming indie, I’ve found that the IGS has way more relevant talks than the main GDC. The main GDC talks are still interesting of course, but most of them are so specific that they are not very useful to an indie who has to do all the programming for an entire game. The IGS talks are less instructional, but way more inspirational, which is why I go to GDC in the first place nowadays. I found it very interesting to hear about the development process of FRACT, the art and lifestyle of Incredipede, the sobering and raw talk about the development of Retro Grade, and many others.
After the IGS ended, we spent the rest of the week hanging around the IGF booth, and partying at night. We attended the Mojang party, where Skrillex and Diplo were playing an awesome back-to-back set. I’ve been a fan of Skrillex for a while, and heard Diplo through my lady’s music list, but we’d never seen them live. Needless to say, great times were had.
On the Friday night we were hanging around in the Hilton lobby and we met Daniel Cook (of Spry Fox‘s game Triple Town), and we ended up showing a few tech demos. I was showing him the Voronoi generation code we’re using for the procedural landscape generation, and it turned out he knew the guy who wrote the page I based it on. Sweet! Dan, being the awesome designer he is, started jamming on the idea of using the Voronoi app as the base for a sheep herding game. We may just put it together during a game jam one day!
Alright, that’s it for this week, keep those turtle cannons blazing!
VANCOUVER, British Columbia – July 26, 2012 – Independent development studio Slick Entertainment is proud to announce Shellrazer for iPad, iPod Touch, and iPhone, released on the App Store today for a temporary launch sale of $0.99 (regular $2.99). RIDE YOUR GIANT WAR TURTLE TO VICTORY OVER THE NASTY GOBLINS!! You are a Klang Clan [...]
Welcome back followers of the fearsome! After some shifting around we are back into the groove of rapid prototyping! It’s been refreshing having something as clear to work on as our space title and we’ve been having a blast drumming up ideas for an exciting space shooter. This week we have some basic weapon ideas [...]