Major gameplay systems done!

Hi guys,

at least as far as the major systems go, I’d say the game is feature complete. That’s often the point at which a game is said to be in its alpha stage (unless you are an AAA dev who wants to push out a free multiplayer publicity event and labels it “alpha” so people can’t complain about bugs).

All the underlying scripts work for these elements:

  • World Map
  • Dialogue / Event manager
  • Car Combat (dynamic and static camera)
  • Inventory

So basically I think most of the necessary stuff has a very solid foundation now.

I put up a short video with the different systems:

The main gameplay is as follows:

  • You have a main goal of finding a certain person at the end of the world map.
  • You choose a path on the world map.
  • Along the way you encounter random events, enemies, friends. But you also have certain points of interest marked on your map, and you can’t finish the game without stopping by some of them.
  • Sometimes diplomacy fails and you have to engage in combat.
  • Combat plays out a bit like a party action RPG. You have a group of cars with differently skilled operators and different sets of weapons. Use skill and positioning to win.
  • Loot destroyed enemy cars, upgrade your own.
  • Repeat until game over, either by finishing the game or by losing all your men and women.

Obviously content is the biggest hurdle for most indie developers.

And, sadly, the artist I used to collaborate with can’t find any time right now. So don’t expect a lot of new 3D models in the near future.

If you are an artist who would like to collaborate, PM me here or on Twitter please :)

So yeah, that’s it for now. Not a lot of production value in the video, but I wanted to push it out after a lot of coding.

Geometry trails / Tire Tracks Tutorial

Lots of T’s in the title :O


Hi guys and girls,

today I’d like to talk a little bit about geometry trails (is this even the right name?), which can replace particles and can be used, for example, for trails left by spaceship engines or tire tracks in a racing game.

Since it’s a fairly easy thing to implement, but for some reason not many tutorials can be found on the topic, I decided to write up a little bit about it. As you might have learned from past blog entries I really enjoy writing about this stuff, even if the write-up takes longer than the sloppy implementation.

The code bits and pieces are in C# and were implemented in MonoGame.


In my case I use this stuff for tire tracks – see how the cars leave tracks in the sand!


So the basic idea is: Let’s create a strip of triangles that “follow” a certain path (for example the cursor, or a ship, a car etc.)

We do not want to animate the triangles – only the last segment stretches a bit until it reaches our defined maximum length. Then we remove the oldest segment and create a new one at the front.
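The segment rule above can be sketched in a few lines. This is a language-agnostic Python sketch with made-up names (`update_trail`, `max_segments`), not the article's actual C# implementation:

```python
# Sketch of the segment-advance rule: the newest segment stretches toward
# the target until it reaches segment_length, then it is frozen and a new
# one is started; the oldest segment is dropped once we exceed the maximum.

def update_trail(points, target, segment_length, max_segments):
    """points[-1] is the 'live' end that follows the target."""
    points = list(points)
    points[-1] = target                      # stretch the newest segment
    ax, ay = points[-2]
    bx, by = points[-1]
    length = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    if length >= segment_length:             # freeze it, start a new one
        points.append(target)
        if len(points) > max_segments + 1:   # n segments need n+1 points
            points.pop(0)                    # drop the oldest segment
    return points
```

Each call either just stretches the live segment or commits it and recycles the oldest one, which is exactly the behavior in the animation.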


In this example I limit the trail to 3 segments.

In-engine this would look like this:


It’s easy to notice that the curves are not really smooth yet. We have to change the direction of the segment before the one we currently draw so that it faces the half-vector between the direction to the second-to-last segment and the direction of the new segment.

Now we get stuff like this
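The half-vector correction can be sketched like this (a 2D Python sketch; `joint_direction` and the tuple math are my own hypothetical names, not the article's code):

```python
# Corner smoothing: the shared edge between two segments is oriented along
# the normalized half-vector of the incoming and outgoing segment
# directions, so the joint bisects the corner.

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return (v[0] / length, v[1] / length)

def joint_direction(prev_point, joint_point, next_point):
    d_in = normalize((joint_point[0] - prev_point[0],
                      joint_point[1] - prev_point[1]))
    d_out = normalize((next_point[0] - joint_point[0],
                       next_point[1] - joint_point[1]))
    # half-vector of the two directions
    return normalize((d_in[0] + d_out[0], d_in[1] + d_out[1]))
```

For a 90° corner the joint ends up facing the 45° bisector, which is what removes the visible kinks.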

Finally we want the trail to fade out smoothly in the end.

The idea is pretty simple.

Let’s say we want to fade out over 2 segments (we can also use a world unit length and convert it to segment amount).

Our trail has a visibility term (alpha) which goes from 0 (transparent) to 1 (fully visible).

If all our segments have full length then it’s pretty easy.
Our first 3 segments have these visibility values:

0 - - - 0.5 - - - 1 - - - 1 - - - 1 ...
^         ^       ^       ^       ^
#0        1       2       3       4 ...

Makes sense right?

But what we really want is a smooth fade depending on the length of the newest (not full-length) segment.
Let’s say that it has reached half the length of a full segment … where does our ramp start and end?

Well, obviously it starts halfway between segments 0 and 1 and it finishes halfway between segments 2 and 3.

So this is basically just some simple linear math.


To get the visibility at our current segment i we can use this formula:

y = 1/fadeOutSegments * x - percentOfFinalSegment

If we want the ramp to start somewhere in decimal numbers we have to use the range {-1, 2} for our visibility term and then clamp to {0, 1} in the pixel shader.

Because our graphics card only accepts floats between 0 and 1 for this vertex attribute, we “encode” our y value like this

visibility = (visibility + 1)/3.0f

to map from {-1,2} to {0,1}. Later we can decode the other way around.
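Putting the ramp and the encoding together in a quick sketch (the same formulas as above; Python is used purely for illustration):

```python
# The fade-out math: the ramp value can leave [0, 1] (range {-1, 2}),
# so it is remapped to [0, 1] before being stored in the vertex and
# decoded again (and clamped) on the GPU.

def ramp(i, fade_out_segments, percent_of_final_segment):
    # y = 1/fadeOutSegments * x - percentOfFinalSegment
    return (1.0 / fade_out_segments) * i - percent_of_final_segment

def encode_visibility(v):
    # map {-1, 2} -> {0, 1}
    return (v + 1.0) / 3.0

def decode_visibility(v):
    # map {0, 1} -> {-1, 2}
    return v * 3.0 - 1.0
```

With `fade_out_segments = 2` and the newest segment at half length (`percent = 0.5`), segment 0 gets -0.5 and segment 2 gets 0.5, i.e. the ramp starts and ends halfway between segments, exactly as described.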


Looks pretty smooth, right?

Final Modification

So that’s basically it.

Now we need to bring it to the screen and there are just a few things left to say.

First of all: your trail’s segment points don’t have to be equidistant. It makes sense to create more, smaller segments in curves and larger ones on long straight lines.

Another thing: if you want floating trails, for example lines following some projectile, it would be a good idea to modify the position of both vertices (per segment) in the vertex shader so they always face the camera (like billboards, stuff like lens flares, distant trees etc.)

If we use them as tire tracks it would be a good idea to project them onto our geometry.
Here is a great blog article about decal projection (by David Rosen from Wolfire games)

This is not trivial and, depending on geometry density, not cheap either – but it is the proper way!

If you happen to work with a deferred engine making decals can be easier, there are tons of good resources if you search for “deferred decals” :)

In my case I went a different route.

Since I know I only want to have tire tracks on terrain I simply draw the terrain and then draw the lines on top without any depth check. Since the terrain is rather low frequency it’s a pretty plausible looking solution.

Afterwards I draw all the other models. The obvious downside to this method is that I have a little bit of additional overdraw since I draw the terrain before drawing the models that obstruct/hide parts of it.

However, the effect on frame time is really minimal and the effort of implementing it is really low, so I’ll take that.
With the visibility term I can also ensure that cars that are currently not touching the ground do not leave a visible tire track, which is pretty useful.



Let’s initialize our class

public class Trail
{
    //our buffers
    private DynamicVertexBuffer _vBuffer;
    private IndexBuffer _iBuffer;

    private TrailVertex[] vertices;

    //How many segments are already initialized?
    private int _segmentsUsed = 0;

    //How many segments does our strip have?
    private int _segments;

    //How long is each segment in world units?
    private float _segmentLength;

    //The world coordinates of our last static segment end
    private Vector3 _lastSegmentPosition;

    //If we fade out - over how many segments?
    private int fadeOutSegments = 4;

    private float _width = 1;

    private float _minLength = 0.01f;


    public Trail(Vector3 startPosition, float segmentLength, int segments, float width, GraphicsDevice graphicsDevice)
    {
        _lastSegmentPosition = startPosition;
        _segmentLength = segmentLength;
        _segments = segments;
        _width = width;

        _vBuffer = new DynamicVertexBuffer(graphicsDevice, TrailVertex.VertexDeclaration, _segments * 2, BufferUsage.None);
        _iBuffer = new IndexBuffer(graphicsDevice, IndexElementSize.SixteenBits, (_segments - 1) * 6, BufferUsage.WriteOnly);

        vertices = new TrailVertex[_segments * 2];

        FillIndexBuffer();
    }

    private void FillIndexBuffer()
    {
        short[] bufferArray = new short[(_segments - 1) * 6];
        for (var i = 0; i < _segments - 1; i++)
        {
            bufferArray[0 + i * 6] = (short)(0 + i * 2);
            bufferArray[1 + i * 6] = (short)(1 + i * 2);
            bufferArray[2 + i * 6] = (short)(2 + i * 2);
            bufferArray[3 + i * 6] = (short)(1 + i * 2);
            bufferArray[4 + i * 6] = (short)(3 + i * 2);
            bufferArray[5 + i * 6] = (short)(2 + i * 2);
        }
        //Upload once - the indices never change
        _iBuffer.SetData(bufferArray);
    }

Pretty simple so far right?

We use a dynamic vertex buffer where we store the vertex information. A dynamic vertex buffer plays nicely with our goal of changing the geometry constantly.
On the other hand, we do not need a dynamic index buffer, since the relationship between the vertices always stays the same, so we can initialize it once at the start. (Actually we don’t even have to do that for each instance of our trail; we can make the index buffer static if we use the same amount of segments/vertices for all our trails/tracks.)
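The index pattern the (static) index buffer holds can be reproduced in a few lines (Python, purely to illustrate the layout):

```python
# The static index pattern from FillIndexBuffer: each segment i reuses the
# vertex pair (2i, 2i+1) of the previous edge plus the new pair
# (2i+2, 2i+3), forming two triangles per segment.

def trail_indices(segments):
    indices = []
    for i in range(segments - 1):
        base = i * 2
        indices += [base, base + 1, base + 2,      # first triangle
                    base + 1, base + 3, base + 2]  # second triangle
    return indices
```

Because the pattern only depends on the segment count, the result is identical for every trail, which is why a single shared index buffer suffices.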

Now let’s move on to the other 2 parts that are pretty trivial: the Draw() function and a Dispose() function (since graphics resources are not handled by our garbage collector, we need to delete them manually).

public void Draw(GraphicsDevice graphics, Effect effect)
{
    //The technique name must match the one defined in the .fx file
    effect.CurrentTechnique = effect.Techniques["AmbientTexturedTrail"];

    //Upload the current vertex data and assign our buffers
    _vBuffer.SetData(vertices);
    graphics.SetVertexBuffer(_vBuffer);
    graphics.Indices = _iBuffer;

    effect.CurrentTechnique.Passes[0].Apply();
    graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _segmentsUsed * 2);
}

public void Dispose()
{
    _vBuffer?.Dispose();
    _iBuffer?.Dispose();
}

I think these 2 should make sense, right?

Now comes the main part – the Update() function, which we call from our target with the new position.

public void Update(Vector3 newPosition, float visibility)
{
    if (!GameSettings.DrawTrails) return;

    //Initialize the first segment, we have no indication for the direction, so just displace the 2 vertices to the left/right
    if (_segmentsUsed == 0)
    {
        vertices[0].Position = _lastSegmentPosition + Vector3.Left;
        vertices[0].TextureCoordinate = new Vector2(0, 0);

        vertices[1].Position = _lastSegmentPosition + Vector3.Right;
        vertices[1].TextureCoordinate = new Vector2(0, 1);

        _segmentsUsed = 1;
    }

    Vector3 directionVector = newPosition - _lastSegmentPosition;
    float directionLength = directionVector.Length();

    //If the distance between our newPosition and our last segment is greater than our assigned
    //_segmentLength we have to delete the oldest segment and make a new one at the other end
    if (directionLength > _segmentLength)
    {
        Vector3 normalizedVector = directionVector / directionLength;

        //Normal to the direction. In our case the trail always faces the sky so we can use the cross product
        //with (0,0,1)
        Vector3 normalVector = Vector3.Cross(Vector3.UnitZ, normalizedVector);

        //How many segments are we in?
        int currentSegment = _segmentsUsed;

        //If we are already at max #segments we need to delete the oldest one and shift the rest down
        if (currentSegment >= _segments - 1)
        {
            ShiftDownSegments();
            currentSegment = _segments - 1;
        }
        else
        {
            _segmentsUsed++;
        }

        //Update our latest segment with the new position
        vertices[currentSegment * 2].Position = newPosition + normalVector * _width;
        vertices[currentSegment * 2].TextureCoordinate = new Vector2(1, 0);
        vertices[currentSegment * 2 + 1].Position = newPosition - normalVector * _width;
        vertices[currentSegment * 2 + 1].TextureCoordinate = new Vector2(1, 1);

        //Fade out
        //We can't have more fadeout segments than initialized segments!
        int max_fade_out_segments = Math.Min(fadeOutSegments, currentSegment);

        for (var i = 0; i < max_fade_out_segments; i++)
        {
            //Linear function y = 1/max * x. Need to check against prior visibility, might be lower (if car jumps for example)
            float visibilityTerm = Math.Min(1.0f / max_fade_out_segments * i, DecodeVisibility(vertices[i * 2].Visibility));
            visibilityTerm = EncodeVisibility(visibilityTerm);

            vertices[i * 2].Visibility = visibilityTerm;
            vertices[i * 2 + 1].Visibility = visibilityTerm;
        }

        //Our last segment's position is the current position now. Go on from there
        _lastSegmentPosition = newPosition;
    }
    //If we are not further than a segment's length but further than the minimum distance to change something
    //(We don't want to recalculate everything when our target didn't move from the last segment)
    //Alternatively we can save the last position where we calculated stuff and have a minimum distance from that, too.
    else if (directionLength > _minLength)
    {
        Vector3 normalizedVector = directionVector / directionLength;

        Vector3 normalVector = Vector3.Cross(Vector3.UnitZ, normalizedVector);

        int currentSegment = _segmentsUsed;

        vertices[currentSegment * 2].Position = newPosition + normalVector * _width;
        vertices[currentSegment * 2].TextureCoordinate = new Vector2(1, 0);
        vertices[currentSegment * 2].Visibility = EncodeVisibility(visibility);
        vertices[currentSegment * 2 + 1].Position = newPosition - normalVector * _width;
        vertices[currentSegment * 2 + 1].TextureCoordinate = new Vector2(1, 1);
        vertices[currentSegment * 2 + 1].Visibility = EncodeVisibility(visibility);

        //We have to adjust the orientation of the last vertices too, so we can have smooth curves!
        if (currentSegment >= 2)
        {
            Vector3 directionVectorOld = vertices[(currentSegment - 1) * 2].Position -
                                         vertices[(currentSegment - 2) * 2].Position;

            //NormalizeLocal() and Saturate() are small custom extension methods
            Vector3 normalVectorOld = Vector3.Cross(Vector3.UnitZ, directionVectorOld.NormalizeLocal());

            normalVectorOld = normalVectorOld + (1 - Vector3.Dot(normalVectorOld, normalVector).Saturate()) * normalVector;

            vertices[(currentSegment - 1) * 2].Position = _lastSegmentPosition + normalVectorOld * _width;
            vertices[(currentSegment - 1) * 2 + 1].Position = _lastSegmentPosition - normalVectorOld * _width;
        }

        //Visibility

        //Fade out the trail to the back
        int max_fade_out_segments = Math.Min(fadeOutSegments, currentSegment);

        //Get the percentage of advance towards the next _segmentLength when we need to change vertices again
        float percent = directionLength / _segmentLength / max_fade_out_segments;

        for (var i = 0; i < max_fade_out_segments; i++)
        {
            //Linear function y = 1/max * x - percent. Need to check against prior visibility, might be lower (if car jumps for example)
            float visibilityTerm = Math.Min(1.0f / max_fade_out_segments * i - percent, DecodeVisibility(vertices[i * 2].Visibility));
            visibilityTerm = EncodeVisibility(visibilityTerm);

            vertices[i * 2].Visibility = visibilityTerm;
            vertices[i * 2 + 1].Visibility = visibilityTerm;
        }
    }
}

I hope that is relatively clear. The helper functions used are here:

private float EncodeVisibility(float visibility)
{
    return (visibility + 1) / 3.0f;
}

private float DecodeVisibility(float visibility)
{
    return (visibility * 3) - 1.0f;
}

private void ShiftDownSegments()
{
    for (var i = 0; i < _segments - 1; i++)
    {
        vertices[i * 2] = vertices[i * 2 + 2];
        vertices[i * 2 + 1] = vertices[i * 2 + 3];
    }
}

Our Vertex Declaration looks like this

public struct TrailVertex
{
    // Stores the vertex position
    public Vector3 Position;

    // Stores TexCoords
    public Vector2 TextureCoordinate;

    // Visibility term
    public float Visibility;

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3,
            VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector2,
            VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(20, VertexElementFormat.Single,
            VertexElementUsage.TextureCoordinate, 1) //usage index 1 -> TEXCOORD1 in the shader
    );
}

The final remaining part is the HLSL code. Here you go

float4x4 WorldViewProjection;

float4 GlobalColor;

struct VertexShaderTexturedOutput
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD0;
    float4 Color : COLOR0;
};

Texture2D texMapLine;
sampler LinearSampler = sampler_state
{
    MinFilter = Linear;
    MagFilter = Point;
    AddressU = Wrap;
    AddressV = Wrap;
};

// Simple trails

VertexShaderTexturedOutput VertexShaderTrailFunction(float4 Position : SV_POSITION, float2 TexCoord : TEXCOORD0, float Visibility : TEXCOORD1)
{
    VertexShaderTexturedOutput output;

    output.Position = mul(Position, WorldViewProjection);

    //Decode the visibility from {0,1} back to {-1,2} and clamp to {0,1}
    float vis = saturate(Visibility * 3 - 1);
    output.Color = GlobalColor * vis * float4(0.65f, 0.65f, 0.65f, 0.5f);
    output.TexCoord = TexCoord;
    return output;
}

float4 PixelShaderTrailFunction(VertexShaderTexturedOutput input) : SV_TARGET0
{
    float4 textureColor = 1 - texMapLine.Sample(LinearSampler, input.TexCoord);
    return input.Color * textureColor;
}

technique AmbientTexturedTrail
{
    pass Pass1
    {
        VertexShader = compile vs_5_0 VertexShaderTrailFunction();
        PixelShader = compile ps_5_0 PixelShaderTrailFunction();
    }
}

Deferred Engine Progress part2

So a few basic features made it in. I’ll go in order of implementation.


In the .gif above you can see the renderTargets for the rendering

  • Albedo (base texture/color)
  • World Space Normals
  • Depth (contrast enhanced)
  • Diffuse light contribution (contrast enhanced)
  • Specular light contribution (contrast enhanced)
  • skull models for the “hologram” effect (half resolution, grayscale – saved only as Red)
  • composed deferred image + glass dragon rendered on top

Variance Shadow Mapping

So far I have only used PCF shadow mapping, but since this was a nice Sponza test scene I really wanted to have very soft shadows.

A possible solution for my needs is Variance Shadow Maps; here is a link to an NVIDIA paper about it:
and the original paper/website by Donnelly and Lauritzen

You can find a detailed description of the process in these papers. The short idea: store both depth and depth squared in the shadow map. We can use them to calculate the variance:
σ² = E[x²] - E[x]²

Chebyshev’s inequality states that

P(x >= t) <= pmax(t) = σ² / (σ² + (t - μ)²)

which basically means: the probability that x is greater than or equal to t is at most pmax, which is where our variance comes in. In our case x is the occluder depth distribution stored in the shadow map and t is the depth of the pixel we are currently shading.
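Numerically, the bound can be sketched like this (a hedged Python sketch; the names `moment1`/`moment2` and the `min_variance` clamp are my own choices, not necessarily the engine's code):

```python
# VSM shading sketch: E[x] and E[x^2] are the two moments stored in the
# shadow map; p_max is an upper bound on the probability that the
# occluder depth reaches the receiver depth t, used as the lit fraction.

def vsm_shadow(moment1, moment2, t, min_variance=1e-5):
    if t <= moment1:
        return 1.0                           # receiver in front: fully lit
    variance = max(moment2 - moment1 * moment1, min_variance)
    d = t - moment1
    return variance / (variance + d * d)     # Chebyshev's inequality
```

The `min_variance` clamp is a common trick to avoid numerical problems when the stored moments are nearly identical.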

So the cool part is that by calculating this number we don’t get a binary yes/no shadow, but a gradient.

The biggest benefit of VSMs is that they can be blurred at the texture level (because we are not dealing with binary responses), which is much cheaper than sampling many times in the lighting stage (which we normally do).


Another little cool trick is that we can offset the depth a little when drawing transparent/translucent meshes. That way the variance will be off a little bit for the whole mesh and the shadowed pixels will never be fully in shadow.

VSMs have their own share of problems, namely light leaking and inaccuracies at the edges when a pixel should be shadowed by multiple objects. This problem gets worse with the transparency “just shift around some numbers” idea, but meh it works and most of the time it’s good looking.


Environment Mapping (Cubemaps)

So I am pretty happy with the lighting response from the models in the test scene. However, the lighting was still pretty flat. I don’t use any hemispherical diffuse this time around, so basically all the color comes from the light sources.

This is ok for some stuff, but when I added some glass and metal (with the helmets, see below) I knew I needed some sort of environment cubemap.

So I just generate one at the start of the program (or manually at the press of a button whenever I want to update the perspective).

I basically render the scene from the point of view of the camera in 6 directions (top, bottom, left, right, front, back) and save these 6 images to a texture array (TextureCube).

I use 512×512 resolution per texture slice, I think the quality is sufficient.

I can update the cubemap every frame, but that basically means rendering 7 perspectives per frame and updating all the rendertargets in between (since my main backbuffer has a different resolution than the 512×512 I use for the cubemap) and I only get around 27 FPS. Keep in mind the whole engine is not very optimized (using large, high precision renderTargets and expensive pixel shader functions with lots of branching and no lookup tables etc.)

When creating the cubemap I enable automatic mip-map chain generation (creating smaller downsampled textures off of the original one –> 256, 128, 64, 32, 16, 8, 4, 2, 1), which I will use later. Note: Because of MonoGame/SlimDX limitations I cannot create the mip maps manually and have to go with simple bilinear downsampling. If I had manual access I would use some nice Gaussian blurring (which would be even more expensive to do at runtime).

When it comes to the deferred lighting pass I add one fullscreen quad which applies the environment map to all the objects (Note: Engines often apply environment mapping just like deferred lights with sphere models around them)

Depending on the roughness of the material I select different mip levels of the environment map:
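The exact roughness-to-mip mapping isn't spelled out in the post; the simplest choice, and an assumption on my part rather than the engine's actual formula, is a linear mapping:

```python
import math

# Hypothetical roughness -> mip selection: roughness 0 samples the sharp
# base level, roughness 1 the blurriest 1x1 level of the mip chain.

def roughness_to_mip(roughness, base_resolution=512):
    num_mips = int(math.log2(base_resolution)) + 1   # 512 -> 10 levels
    clamped = min(max(roughness, 0.0), 1.0)
    return clamped * (num_mips - 1)                  # fractional mip is fine
```

A fractional result is intentional: trilinear sampling on the GPU blends between the two neighbouring mip levels, so rough materials smoothly pick blurrier reflections.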


Note: I noticed a lot of specular aliasing with very smooth surfaces. I implemented a sort of normal variance edge detection shader to tackle the issue, but it wasn’t very sophisticated. The idea was to check the neighbour pixels and compare their normals. If there was little difference, no problem. But if they faced vastly different directions then I sampled at a higher mip level to avoid aliasing.


Hologram effect on helmets

I got inspired by this concept art by johnsonting (DeviantArt)

The helmets on these guys have an optical effect which overlays some sort of skull on top of their visors which makes them look really badass in my opinion.

Here is a closer look

So my basic idea was to render skulls to a different render target and then, when composing the final deferred image, sample this render target at pixels with a certain material id (the visor/glass).
With some basic trickery I made this pixel look. Note that I do not need to render the skulls at full resolution since they will be undersampled anyways in the final image.

First attempt: (click on the gif for better resolution)


Later, while trying some Gaussian blurring, I found that the soft, non-pixelated look is also interesting.

Here I show them both:


It was hard finding appropriate free models for the helmets, but eventually I found something usable from the talented Anders Lejczak. Here is the artist’s website


Some other ideas floating around in my head

Hybrid Supersampling

Maybe use double (any power-of-two multiplier) resolution for the G-buffer. Should be relatively cheap since there is almost no pixel shader work.
Then use the nearest-neighbour base resolution of the G-buffer for lighting calculations.
Upsample the lighting with depth information (bilateral upsampling) to the double resolution.
Downsample the whole thing again. Have antialiasing. Success? Need to try that out.

Light Refractions/Caustics for Glass Objects

For the glass dragon it would be nice to have light refraction/caustics behind the model on the ground. But we can’t use photon mapping or the like.

We know from the light’s perspective, with normals/refraction/depth, where the projected light pixel should end up, but we can’t do that in a rasterizer; we can only write to the pixel we started with. BUT we can manipulate the vertices.
Conveniently, the Stanford dragon has almost a million of these.

-> maybe: From the light’s perspective, displace the vertices of the model by the correct amount given refraction and the normal information (like you would displace pixels at the corresponding positions). The depth/amount depends on the depth buffer (shadow map).
-> this distorted model is then saved in another shadow/depth map (plus depth information), which could live in the .gb channels of the original shadow map.
-> reconstruct light convergence/caustics during light pass with the map
-> possible?


Deferred Engine Progress

So far so simple. I added some new features, namely a transparent/glass shader, which is done in forward rendering:


The refractions (and, when the angle of incidence is high, reflections) are simply some distortion of the background image. I am still trying to figure out screen space ray marching, but I have run into some depth conversion issues. I will try more in the future.


Apart from that I implemented VSM-shadow mapping for a spotlight. By playing with the values I can make the shadow from the glass dragon a little less dark.

The gif above is a breakdown of the deferred rendering path minus the depth buffer, since it’s almost black.

-> albedo, normals, diffuse, specular, final composite


I have been toying with the idea of writing a deferred shading or lighting renderer for a while, and it’s really easy to do with the myriad of tutorials online.

The main thing I wanted to try was an idea I had floating around in my head for realtime global illumination, or at least a simple light bounce.

Not very feasible for a game or anything, but something I wanted to try.


My main idea was for a single light source first.

What if we store a fullscreen reflection vector rendertarget (not the object normals, but the direction the light bounces off of them)? In a different renderTarget, or potentially just the alpha channel of the first one, I then store the distance to the light source for each pixel.

The idea then would be – for each pixel I know which path the light will take and I know how much it has traveled already. With specular information gathered in some alpha channel of the g-buffer I can estimate how reflective and how “rough/smooth” the current pixel is.

My first idea was I could just ray march from this pixel until I hit another pixel and add some color to it. Doesn’t have to be full resolution.

And if the pixel is relatively rough I just go in a random direction, depending on how unlike a mirror the surface is. Would be a bit noisy, but for a lot of pixels and maybe some temporal blending it would be ok, maybe?

Well, and here the whole thing fell apart. I forgot that I can’t write to just any pixel. I can sample from a certain point and write my own pixel, but I can’t manipulate others (unless I use compute shaders or CUDA etc.; I’d love to, but I felt like staying with MonoGame since it’s so fast to set up, and stuff like this is not supported).

So basically RIP.

I thought I might at least finish what I started, so I decided to go with a gathering approach. For each pixel I check a chunk of its neighbours. If their reflection vector points towards me and they still have some light distance “left”, I tint my pixel a bit with their color multiplied by the light color.
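A toy version of that gather test might look like this (2D positions and my own names such as `cos_threshold`; the engine does this per pixel in screen space):

```python
# Gather test: a neighbour contributes bounced light to the current pixel
# only if its reflection vector points (roughly) toward that pixel.

def contributes(neighbor_pos, neighbor_reflection, pixel_pos, cos_threshold=0.9):
    to_pixel = (pixel_pos[0] - neighbor_pos[0], pixel_pos[1] - neighbor_pos[1])
    length = (to_pixel[0] ** 2 + to_pixel[1] ** 2) ** 0.5
    if length == 0.0:
        return False                      # the pixel itself never contributes
    to_pixel = (to_pixel[0] / length, to_pixel[1] / length)
    # cosine between the neighbour's reflection and the direction to us
    dot = to_pixel[0] * neighbor_reflection[0] + to_pixel[1] * neighbor_reflection[1]
    return dot >= cos_threshold
```

In the real shader the remaining "light distance" would be checked as well before accumulating the neighbour's color.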

Looks like an expensive bloom/glow now. Mission fucking accomplished.


Recent Updates

Hi guys,

haven’t posted here in a while, so let’s catch up!

Most important:

I added drones and motorcycles.
Here’s a close-up. You’ll notice the motorcycle is fully textured already! (Don’t mind the shadow bias and the glossy wheels)

From concept:


To model:


The movement has to be special and the motorcycles should lean a lot into the curves (maybe more than they need to, since it should be visible from far away)

Motorcycles should be important in my game. I didn’t want to create a world where the player ends up commanding 8 tanks by the time the game is almost over. Smaller vehicles like this one can flank and position themselves in front of the crowd in order to drop mines or other grenades.
The gunner can only fire backwards and is probably not hyper effective, but I think I’ll allow rocket launchers to be carried as well.


Another cool rendering feature is some near-terrain dust clouds moving over the sand.
Visible here if you pay attention. It’s basically just two tiled maps overlaid (and multiplied), and I think it looks pretty good already.


I tried out how specular highlights from point lights would work, and I’m pretty happy with the results. I think a rainy/wet version of the game would be pretty easy to realize.


A while ago, I modified the smoke even further.



Hi guys,

I will note some small things that I wish I’d learned earlier; hopefully one or two of you can appreciate this list of significant and not-so-significant things to consider.

Object Oriented Programming

So, yeah, I love OOP. And so do the tools I use, like ReSharper for Visual Studio (this thing is a godsend, heavily recommended).

I have worked with OOP in mind for many years now, so it was pretty obvious to me how to structure things. That later turned out to be great for clean programming, but maybe not optimal for applications which need to crank out maximum performance at thousands of frames per minute.

Now, it is worth noting that on modern hardware, especially when not CPU bound, these things don’t matter as much. Many times the clarity of the program and the ease of working with a proper OOP setup save the programmer enough time to make them worth using anyway (your time vs. program runtime).

Just a quick list of what to avoid, especially if called hundreds of times per frame.

  • foreach –> use “for” instead; foreach can create unnecessary garbage. This does not apply to all foreach loops, there are quite some articles out there on this.
  • LINQ expressions –> look awesome (and make the code look sophisticated to absolute beginners) but are even worse in terms of garbage generation.
  • Events –> if your projectiles call an event on all the enemies to check whether they were hit, you might as well rewrite all your code. Again, unnecessary garbage creation.
  • Interfaces are apparently very slow. Call directly instead. Something I had to learn at a point where basically my whole infrastructure implements interfaces and the calls are made via these :(
    Indirect calls in general are not great.
    Check these out:
  • There is other stuff, like inlining everything and using fields instead of properties, but I cannot comment on that and I don’t think it’s worth obsessing over.
  • Lists. Lists are awesome, I use lists a lot. But if you have big lists which change a lot, use pools or arrays instead. Again, to avoid unnecessary garbage.
  • Can you think of others? Leave a comment!
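As an illustration of the "pools instead of churning lists" point, a minimal object pool might look like this (a language-agnostic sketch in Python; the idea carries over to C# unchanged):

```python
# Minimal object pool: instead of allocating and discarding objects every
# frame (feeding the garbage collector), dead objects are handed back and
# reused on the next acquire.

class Pool:
    def __init__(self, factory, size):
        self._factory = factory
        self._free = [factory() for _ in range(size)]   # preallocate

    def acquire(self):
        # reuse a recycled object if available, allocate only when empty
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        # hand the object back instead of dropping it
        self._free.append(obj)
```

The caller is responsible for resetting a recycled object's state before reuse; the payoff is that steady-state gameplay allocates nothing per frame.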


Don’t use “SetRenderTarget”

“Why not? Every tutorial ever uses it!” you may say, but hear me out.

One of the major problems, as well as benefits, of using .NET as an underlying platform is garbage collection. It’s awesome because you don’t have to allocate memory manually and you can’t really forget to release said memory later, so memory leaks can basically never happen (well, they can in some cases; I talk about that later).

But – the garbage collector will cause a small frame spike every time it decides enough useless stuff has piled up and needs to be disposed.

So we must avoid creating a lot of data (arrays of data!) every frame. However, if you want to switch your current render target, you do just that. Calling

_graphicsDevice.SetRenderTarget(myRenderTarget);

will create a new array containing “myRenderTarget” every frame (internally the single-target overload allocates a new binding array)! And renderTargets are not small at all. In fact, render target changes alone accounted for more than 40% of my whole garbage! So we should avoid it.

Usually, somewhere in your Initialize() and Resize() functions, you create your renderTarget like this:

myRenderTarget = new RenderTarget2D(_graphicsDevice,
    (int)(width * GameSettings.SuperSample),
    (int)(height * GameSettings.SuperSample));

Now, what you should do as well is the following:

Have a new field in your render class of the type RenderTargetBinding[] – like this

private readonly RenderTargetBinding[] _myRenderTargetBinding =
new RenderTargetBinding[1];

and assign it like this under your renderTarget creation:

_myRenderTargetBinding[0] = new RenderTargetBinding(_myRenderTarget);

When assigning the current renderTarget to the GPU, use:

_graphicsDevice.SetRenderTargets(_myRenderTargetBinding);

The important thing here is to use SetRenderTargets instead of SetRenderTarget (note the plural!).

Obviously when using multiple render targets, the same thing applies, your binding array just contains more items.

The image below captures the “hot path” of memory allocations. Note the difference.



In case you use geometry instancing to submit a lot of individual copies of the same mesh in one call, you have to use SetVertexBuffers.

Usually like this:

graphicsDevice.SetVertexBuffers(_myMeshVertexBufferBinding, _myInstancesVertexBufferBinding);

This will create crazy amounts of garbage since, again, every frame a new array is set up and filled with these bindings.

Have a field called something like

private readonly VertexBufferBinding[] _vertexBuffers = new VertexBufferBinding[2];

and then fill it like this

_vertexBuffers[0] = _myMeshVertexBufferBinding;

_vertexBuffers[1] = _myInstancesVertexBufferBinding;

Obviously, filling _vertexBuffers[0] should only happen once, since the mesh of the model does not change. Only the second slot needs updating every frame. Then call:

graphicsDevice.SetVertexBuffers(_vertexBuffers);




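As a compact sketch, the garbage-free instancing path could look like this – assuming the two VertexBufferBindings from above already exist; the method names and primitive counts are illustrative:

```csharp
// Allocated once; slot 0 holds the mesh, slot 1 the per-instance data.
private readonly VertexBufferBinding[] _vertexBuffers = new VertexBufferBinding[2];

private void Initialize()
{
    // The mesh never changes, so bind it a single time.
    _vertexBuffers[0] = _myMeshVertexBufferBinding;
}

private void DrawInstanced(GraphicsDevice graphicsDevice,
                           int primitiveCount, int instanceCount)
{
    // Only the instance buffer may change per frame.
    _vertexBuffers[1] = _myInstancesVertexBufferBinding;

    // Reuses the cached array – no per-frame allocation.
    graphicsDevice.SetVertexBuffers(_vertexBuffers);
    graphicsDevice.DrawInstancedPrimitives(
        PrimitiveType.TriangleList, 0, 0, primitiveCount, instanceCount);
}
```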
Which tools do I recommend?

Well, you are most likely using Visual Studio. It’s really great just by itself (minus the occasional crash), especially things like the deep Git integration and the nice variety of profilers.

But you can improve your experience a lot. Stuff I would heavily recommend:

  • ReSharper – essentially a big upgrade for IntelliSense. It helps a ton with refactoring and gives advice on optimizations. Hard to work without it once you are hooked.
  • HLSL Tools for Visual Studio – if you write your shaders in Visual Studio, this one is a godsend. You can find it inside Visual Studio under “Extensions and Updates”.
  • Intel Graphics Performance Analyzers (GPA) – I have tried many GPU profilers, including RenderDoc, AMD GPU PerfStudio, the default Visual Studio ones and some more. All of them are good, but Intel GPA is the most capable and comprehensible one for me.

Final words

I may or may not add some other stuff, but I hope the content that is here will help you regardless. If you have more tips or improvements for this entry, feel free to leave a comment here (or, if you don’t like WordPress, you can hit me up on Twitter as well).

Obviously links to other beginner tips would be appreciated a lot!

UPDATE 1st MAY 2016

Welcome to another update!

So I re-added the tank model (you can see it on the left – no turret, just a gunner) and I added a basic drone / quadcopter model to the game.


Why a drone, you ask? Well, this is my take on having a “control mage”, so to speak.

It will be an alternate armament for a car – instead of having a basic rifle, a Gatling gun, a rocket/torpedo launcher etc., the car will have a drone operator and some drones to deploy.

The idea is to have some “Area of Effect” modifier for a set time.

With different drones you can buff certain areas, for example:

  • a smaller / different version could help make repairs on the cars mid-fight
  • they could give better vision and help line up more precise shots
  • they could be equipped with some light firearms to lay down suppressive fire
  • they could shoot down enemy rockets/torpedoes/other projectiles

So yeah, what do you think about that?


Another visual thing I finally implemented is geometry-based muzzle flashes. Together with some lights and the bloom effect, it looks pretty good, I think.


Some additional visual changes:

  • the flags now get more and more torn the lower the car’s hitpoints are
  • I added support for streaking out bloom (as I already did with lens flares). Not sure how useful it is, but at least one can play around with it


All of that comes on top of some big background changes: I changed how I order meshes and send them to the GPU, and I implemented instancing, for example for the smoke. I expected this to be non-trivial since I compute position, speed and other properties on the GPU, but it worked out fine.

I don’t know if any MonoGame developers would be interested in this, but basically I made a library where each mesh and its texture are registered. Then I draw ordered by texture and by mesh type. This allows me to have as few state switches as possible.
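The ordering itself can be as simple as sorting draw entries by a composite key before submitting them – a sketch; the DrawEntry type and id fields are illustrative, not the actual implementation:

```csharp
// One entry per submesh instance queued for drawing this frame.
struct DrawEntry
{
    public int TextureId;  // id assigned when the texture is registered
    public int MeshId;     // id assigned when the mesh is registered
    public Matrix World;   // per-instance transform
}

// Sort by texture first, then by mesh, so consecutive draw calls
// share as much GPU state (texture bindings, vertex buffers) as possible.
drawEntries.Sort((a, b) =>
{
    int byTexture = a.TextureId.CompareTo(b.TextureId);
    return byTexture != 0 ? byTexture : a.MeshId.CompareTo(b.MeshId);
});
```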

Another thing is the instancing of the smoke. I noticed that the world matrix, which is unique for each instance, has a lot of free slots (i.e. zeroes), because I don’t do anything fancy with my smoke isosphere before handing it over to the GPU apart from a simple translation.

All the transformations – size, noise displacement, displacement from wind, color and alpha value – are done on the GPU. For each individual smoke instance I need the current time and the max time, as well as an initial random value. I can pass these in the empty slots of my 4×4 transformation matrix, thereby saving some bandwidth.
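In XNA/MonoGame a pure translation matrix stores its offset in M41/M42/M43, so the fourth column (M14, M24, M34) is always zero – those are the free slots. A sketch of the packing, with illustrative field names; the matching unpack happens in the instance vertex shader:

```csharp
// Per-instance world matrix: only a translation, so the fourth
// column (M14/M24/M34) is unused and would otherwise be zero.
Matrix world = Matrix.CreateTranslation(smokePosition);

// Smuggle per-instance parameters into the free slots instead of
// sending a second vertex stream – saves instance-buffer bandwidth.
world.M14 = currentTime;  // how long this instance has lived
world.M24 = maxTime;      // total lifetime of the instance
world.M34 = randomSeed;   // initial random value for the noise

// The vertex shader reads these three values back out of the matrix
// (and zeroes them again) before using it as a transform.
```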

Interesting note: because I have massive overdraw on the smoke (all these alpha-blended meshes sorted from back to front), I didn’t even consider per-pixel lighting for so many local light sources. So welcome to 1999: vertex lighting it is. It performs great, and I don’t think the visual quality suffers noticeably.

So yeah – that’s it for now.

Feedback would be appreciated as always :)

UPDATE 27. April 2016 – Bounty Road

Welcome to another update.
This one is more on the graphics side, but just watch!

UPDATE 17. April 2016 – Bounty Road

Hello and welcome to another little update.


The most important thing about this update is most likely that we settled on a title. This may still change, but probably won’t: Bounty Road.

What is Bounty Road?

You take on the role of an infamous bounty hunter. An anonymous source offered a hefty amount of credits for the execution of the self-declared “Lord of the Desert”, a rebel leader hiding far away in the land of sand.
You gather your gang of loyal road warriors to take on the job and prepare for a long journey full of hazards.

While locals, bandits and wildlife are some of the enemies you’ll encounter on this adventure, there are also rumours about other bounty hunters making their way to the desert…

Bounty Road is a combination of classic party RPGs with the roguelike genre. You control a group of cars, each of which has unique attributes depending on chassis, armament, armor and drivers/gunners. Light motorcycles will stay just as relevant as heavier vehicles through their unique loadouts.


Bounty Road – Motorcycle concept by Tobias Schmithals

You will depend on resources like food/water, gas and spare repair parts to make your way through. The people that come along will improve in skill, but without proper tactics you won’t win the fight.

I hope you liked this little intro into the world of our game.

UPDATE 17.April 2016

Features shown in the video:

  • Standard game mode: “Always on the road” – we thought it would be exciting to have the world move all the time. This gives a much better “road warrior” feeling and works pretty well.
  • Sounds are implemented, but obviously not final. The system works, however.
  • Range and Direction indicators are now generated meshes with moving textures
  • Torpedo prototype
  • Inventory now highlights corresponding item slots.
  • New hit markers show the damage dealt and when armor is destroyed on a certain side of the vehicle
  • Spark particles
  • Smoke shadows

Features not shown specifically:

  • Rework of all systems to accommodate the dynamic moving game world.
  • Flags are now alpha tested and are rugged and torn. Flag shadows also use this alpha test information
  • Flags/beam physics improvements
  • Garbage Collector and general frame variance improvements
  • Optimization for fullscreen: Mouse contained
  • Several new options for shadow smoothness.
  • Use of adaptive Vsync instead of default Vsync






