Screen Space Emissive Materials

Hi guys,

today I want to talk about screen space emissive materials, a relatively simple technique I implemented that allows for some real-time lighting from emissive meshes in a deferred engine (along with some problems).

Emissive materials in real life would be, for example, fluorescent tubes. But they can also be used for lamp/light shapes that are hard to approximate with simple point lights. You can see some examples at the end of this blog entry.


So, I was just coming off of implementing screen space ambient occlusion (SSAO) into my deferred engine, along with trying to make screen space reflections (SSR) work.

I hadn't worked with ray marching in screen space before, but its power became apparent immediately.

So I went ahead and implemented something I have been thinking about for quite some time – screen space emissive materials.

The idea is pretty easy – just ray march diffuse and specular contribution for each pixel.

Per Pixel Operations

First question would be – which pixels?


The pixels used are bound by a sphere – similar to normal point lights in a deferred rendering engine (We don’t want to check every pixel on the screen when the model is only covering a small fraction). I simply take the Bounding Sphere of the model (see the smaller circle around the dragon) and multiply it by some factor, depending on the emissive properties of the material.

Then, to get the diffuse contribution, I ray march a number of times along random vectors in a hemisphere around the pixel's normal. If a ray hits the emissive material I add some diffuse contribution to the pixel.

For the specular contribution I reflect the incidence vector (camera direction) around the normal, ray march along it and check whether I hit something. I actually use more than one reflection vector – depending on the roughness of the material this becomes more of a cone.
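To make this a bit more concrete, here is a rough HLSL sketch of such a per-pixel ray march. This is not the engine's actual shader: the G-buffer layout (a linear depth target storing distance from the camera, a material id channel marking the emissive mesh) and all the names used here (ViewProjection, CameraPosition, DepthMap, MaterialMap, EmissiveMaterialId, EmissiveColor, RandomUnitVector, ...) are assumptions for illustration only.

float3 MarchEmissiveRay(float3 startWS, float3 dirWS, int steps, float stepSize)
{
    [loop]
    for (int i = 1; i <= steps; i++)
    {
        float3 sampleWS = startWS + dirWS * stepSize * i;

        //project the world space sample into screen space to read the G-buffer there
        float4 clipPos = mul(float4(sampleWS, 1.0f), ViewProjection);
        float2 uv = clipPos.xy / clipPos.w * float2(0.5f, -0.5f) + 0.5f;

        float storedDepth = DepthMap.SampleLevel(PointSampler, uv, 0).r;   //linear distance from camera (assumed)
        float sampleDepth = length(sampleWS - CameraPosition);

        //the ray moved behind the surface stored at this pixel -> check what we hit
        if (sampleDepth > storedDepth)
        {
            float materialId = MaterialMap.SampleLevel(PointSampler, uv, 0).a;
            return abs(materialId - EmissiveMaterialId) < 0.01f ? EmissiveColor.rgb : 0;
        }
    }
    return 0;
}

//inside the lighting pixel shader (RandomUnitVector is a hypothetical noise helper):
//float3 diffuseDir  = normalize(normalWS + RandomUnitVector(seed));                                     //hemisphere sample
//float3 specularDir = normalize(reflect(incidentDirWS, normalWS) + roughness * RandomUnitVector(seed)); //cone sample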









First results

Here is an early result. I think it looks pretty convincing.


Now, there are 3 major problems with the way I described it above:

  • If the emissive material is obstructed there is no lighting happening around it (think of a pillar in front)
  • If the emissive material is outside the screen space there is no lighting happening
  • The results are very noisy (see above)



There is a pretty easy solution for the first problem (obstruction) – draw the emissive mesh to another rendertarget and check against that one when ray marching.

In my case I went with the approach of saving world space coordinates for the meshes (translated by the origin of the mesh, so precision stays good). I draw the model to a new rendertarget, so the scene depth is not considered and cannot obstruct it.

One could go with a depth map here, but I went with this approach this time.

This makes depth comparison pretty trivial, but it may not be the most efficient solution.
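Because the emissive mesh lives in its own world-space position target, the hit test during the ray march boils down to a distance comparison. A minimal sketch of that idea, with assumed names (EmissivePositionMap, CameraPositionWS) and the assumption that a free channel flags pixels the emissive mesh actually covers:

bool HitsEmissive(float3 samplePosWS, float2 uv)
{
    float4 emissive = EmissivePositionMap.SampleLevel(PointSampler, uv, 0);

    //no emissive mesh was rendered at this pixel
    if (emissive.a <= 0.0f)
        return false;

    //the ray sample counts as a hit when it is at or behind the stored emissive surface
    float3 toEmissive = emissive.xyz - CameraPositionWS;
    float3 toSample = samplePosWS - CameraPositionWS;
    return dot(toSample, toSample) >= dot(toEmissive, toEmissive);
}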

Note: For each emissive light source I clear the emissive depth/world-position map, draw the object, then calculate the lighting and add it to the lighting buffer. This way emissives cannot obstruct each other and I can optimize the lighting steps for each individual mesh.



Instead of sampling in completely random directions, we can sample only in the direction of the bounding box/sphere of the model. With the same amount of samples we get much smoother results.
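One way to do this is to generate directions uniformly inside the cone that the bounding sphere subtends from the current pixel. This is only a sketch of that idea; SpherePositionWS and SphereRadius are assumed shader constants and rand is a pair of per-sample random numbers in [0,1):

float3 SampleTowardsSphere(float3 pixelPosWS, float2 rand)
{
    float3 toCenter = SpherePositionWS - pixelPosWS;
    float dist = length(toCenter);
    toCenter /= dist;

    //half angle of the cone covered by the sphere
    float sinCone = saturate(SphereRadius / dist);
    float cosCone = sqrt(1.0f - sinCone * sinCone);

    //uniform direction inside that cone
    float cosTheta = lerp(cosCone, 1.0f, rand.x);
    float sinTheta = sqrt(1.0f - cosTheta * cosTheta);
    float phi = rand.y * 6.2831853f;

    //orthonormal basis around the cone axis
    float3 up = abs(toCenter.z) < 0.99f ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 tangent = normalize(cross(up, toCenter));
    float3 bitangent = cross(toCenter, tangent);

    return tangent * (cos(phi) * sinTheta) + bitangent * (sin(phi) * sinTheta) + toCenter * cosTheta;
}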

Apart from that – all of the techniques that help SSAO and SSR can be applied here. Bilateral blur would be a prime example here.

Another often used solution that helps here is to actually change the noisy vectors per frame and then use some temporal accumulation to smooth out the results.

Simply taking more samples per pixel is the most obvious solution, but performance limitations often do not allow for that.

Screen Space Limitations


The good old "it only applies to screen space" problem:

As soon as the materials aren't visible any more the whole thing basically breaks, since we can't ray march against anything any more.

Philippe Rollin on twitter (@prollin) suggested to “render a bigger frame but show only a part of it”

This would be a performance problem if we had to render the whole frame in a bigger resolution, but since we draw the emissive material to another texture we can use a neat trick here:

We can draw the emissive materials with another view projection – specifically with a larger field of view than the normal camera field of view (for example factor 2).

Then, when calculating the lighting, we reproject our screen coordinates to the new view*projection matrix to sample from there. Barely any cost.

Now, the local resolution goes down a bit, but, for a factor of 2 for example, it is not noticeable at all.

To address this issue one could change the alternate field of view depending on how much “out of view” the meshes are, but I found the results to be good enough with a constant factor 2.

Note: Simply changing the FOV is pretty naive. It would be more beneficial to also change the aspect ratio so the amount of additional coverage to the top/bottom is equal to the sides. A larger FOV gives proportionally more coverage in x-direction than in y direction if the aspect ratio is > 1. This should be adjusted for.
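In the lighting shader this reprojection is only a handful of instructions. A minimal sketch, assuming EmissiveViewProjection is the emissive pass's view matrix combined with the wider-FOV (and ideally aspect-corrected) projection:

float2 WorldToEmissiveUV(float3 positionWS)
{
    float4 clipPos = mul(float4(positionWS, 1.0f), EmissiveViewProjection);
    float2 ndc = clipPos.xy / clipPos.w;          //[-1,1] inside the wide-FOV frustum
    return ndc * float2(0.5f, -0.5f) + 0.5f;      //to texture coordinates
}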

That is all great, but it won’t help when the emissive mesh is behind the camera. There is no way to beat around the bush, you won’t have any effect then.


Can anything be done about that?

Well you can always draw several emissive rendertargets with different orientations/projections and then check against each one of them (as is suggested for so many screen space effects), but this is honestly not viable in terms of performance.

What I would rather suggest is a fade to a deferred light with similar strength. Not optimal, but people overlook so many rendering errors and discontinuities it might work? I don’t know.


So yeah, that’s all thanks for reading, I hope you all enjoyed it. Bye :)


(click on the image above for a large view. Note that SSAO creates black shadows below the dragon, which obviously doesn’t make any sense with an emissive material)

Performance is relatively bad – right now. As you can see in the image above the emissive effect (which at that proximity covers all pixels) costs ~15 ms for one material only at ~1080p (on a Radeon R9 280)

The SSEM is rendered in full resolution with 16 samples at 8 ray march steps each for diffuse and 4 samples at 4 steps for specular.

There is a lot of room for improvements – as mentioned above. Especially diffuse doesn’t have to be rendered full-resolution, a half or even quarter resolution with bilateral blur and upscale would most likely have little impact on visual fidelity.

A smaller sample count makes the results more noisy – but that can be helped with some blur also, especially for diffuse.


In this picture we have many more emissive materials, again at 26ms total at ~1080p. Because the screen coverage / overdraw is relatively low the performance is not much worse.

Conclusion and further research

I presented a basic solution for rendering emissive materials in real time applications and proposed possible solutions / work-arounds for typical issues with screen space ray marching based algorithms. It does not need precomputation and all meshes can be transformed during runtime.

I am not sure whether or not this can actually be viable in high performance applications, but I am confident the rendering cost can be much improved.

I am sorry for not providing any code or pseudo code snippets, maybe I’ll update the article eventually.

A possible extension of this method would be to have textured materials and read out the color at the sampling position to add to the lighting contribution. This would greatly extend the use of the implementation but bring a number of new problems with it, for example when some color values are occluded.

Making the rendering more physically based would be another goal, currently there are many issues with accuracy and false positives/negatives based on wrong ray marching assumptions in my version.

I hope you enjoyed the read, and if you like you can track the progress of the test environment/engine here and find a download on github here:


Homefront 2 is the better Crysis 2

[This is a Homefront: The Revolution single player review/opinion piece. It is written by a long-time Crysis fan, someone who was let down by the Crysis sequels and pleasantly surprised by Homefront]

(skip to The Homefront: Reloaded Revolutions if you are not interested in the backstory)


The prequel to our story

Hi guys,

somewhere after 2011 Crytek lost me. The German game development power house known for games like the original Far Cry and Crysis – these were the guys I always wanted to be a part of. My dream job: Creating the next Crysis experience, intense action and hyper realistic graphics of course.

Apparently they were very much a fan of themselves, too, and around the release of Crysis they decided to take drastic steps to become one of the largest independent game developers in the world by buying and creating several studios around the globe, rapidly approaching the 1000 employee mark.

They had a German studio, a Ukrainian studio, a Bulgarian studio, a Hungarian studio, an English one … two American ones … they founded smaller studios in Korea, China, Turkey. It seems almost unreal how many different studios Crytek had.

They had only produced 2 games at that time. Far Cry and Crysis (plus a standalone addon in 2008).

Crysis was moderately successful, but it came out in 2007 to compete with the likes of Call Of Duty Modern Warfare, Stalker, Unreal Tournament 3, Bioshock and Halo 3. Amazing year for first person shooters, really.
Not super great if your game was a PC exclusive which barely ran well on any PC at the time though.

Either way, with all these new studios Crytek very much relied on their next game being a hit.


That game would, of course, be Crysis 2.

A quick recap of the original Crysis: A strange energy reading is picked up on a Pacific island, so the United States send their "nano suit" iron man supersoldiers to investigate. The island is occupied by North Korea though, who are themselves interested in what this is about. Turns out: Aliens.

So the player uses his incredible exo suit to deal with hundreds of North Korean soldiers by either sneaking around like a chameleon or punching like a gorilla, until one eventually encounters the Aliens, who turn the palm island into frozen hell.
The general consensus was that fighting human soldiers with plausible AI and group behaviour is more fun than violating the rights of some strange tentacle ice aliens.

Ah yes and it looked good, too. Plus some wicked fast controls and intense unscripted battles.

The management at Crytek saw the player reception and decided to change a lot of things for the upcoming sequel.

Therefore, a big part of the enemy composition in Crysis 2 is still human. And actually the Aliens became bipedal and humanoid too. And the ice stuff is gone – they shoot different lasers and plasma grenades now.

So far so Halo good.


But it turns out Crysis players were only a small part of the audience Crytek was studying. Players appreciated Call of Duty. A lot of players did. They wanted to play blockbuster titles on their console. Linear action fests.
And they loved the motivational "level up!" multiplayer. With classes and perks.

So Crysis 2 had to compete. It had to be catapulted from an appreciated game to a global franchise. A top seller. A Call of Duty killer?

And this conflict on some remote island, that was a joke, wasn't it? Yep. Where do disaster movies usually take place? Yes, of course, New York. With an epic soundtrack by Hans Zimmer. And a big marketing campaign where known pop artists perform their version of Sinatra's New York New York. A renowned science fiction author writes a tie-in novel alongside the game.

All the ingredients for the global smash hit are there, right? (I did forget multiplayer DLC, but Crytek/EA didn’t, so that was another novelty for the sequel)

Oh and by the way: The multiplayer component for Crysis 2 was made by Crytek's British arm – Crytek UK. Formerly known as Free Radical Design and renowned for their last-gen console classics TimeSplitters 1-3, the studio was now to create a Modern Warfare 1/2 multiplayer clone (with a twist! The nanosuit!)

Now my write-up might have seemed very cynical so far, but I think they actually did a good job. I do enjoy Call of Duty multiplayer very much and I did enjoy Crysis 2 multiplayer, too.

It should be noted though that the player base was much smaller, there were fewer maps, and only a fraction of weapons/upgrades/unlocks and killstreaks. The long term motivation was much lower, especially for players that also own Modern Warfare.

Crysis 2 releases in spring of 2011, in a time slot where it does not have to compete with the big hitters. It sells at least 3 million copies, but it is fair to assume that it didn’t outsell its predecessor, which sold at least 4 million by that time and had longer lasting appeal according to some steam sale numbers.


Unlike Crysis 1, the direct competitors in terms of release slot are not the Call of Dutys or Bioshocks. It’s actually Homefront. Not a big hitter, not a sequel, not a big developer behind it.

Homefront, a game which basically tries to sell this ridiculous story of North Korea becoming a global superpower and invading the weakened United States.

The publisher, THQ, was, like Crytek, desperately trying to become one of the big guys and made some bad decisions along the way, eventually leading to its downfall.

Homefront, however, was not one of these mistakes. It actually sold 2.6 million copies in 2 months, well above expectations and actually comparable to Crysis 2.

I should note at this point that a big selling point was the innovative mixed vehicle/infantry multiplayer, though I cannot say from my own experience whether it actually was great.


THQ decided to shut down the developer, KAOS, but still wanted a sequel since the first one sold so well. Crytek had a lot of studios, but nothing to do with them now that Crysis 3 definitely wouldn’t be the biggest release of the decade.

So THQ contracts Crytek to do it for them. Crytek then assigns its second biggest studio: Crytek UK, based in Nottingham. THQ files for bankruptcy. Crytek buys the IP. Crytek’s house of cards falls apart. They almost file for bankruptcy and sell IP and development studio to Koch Media / Deep Silver.

What a mess.

And so we come to Homefront: The Revolution and what it has to do with Crytek’s Crysis.



The Homefront: Reloaded Revolutions

Homefront 2 is a soft reboot of the first game. It's still: China (well, North Korea) has invaded America, but the story is a bit more "believable" in that China (North Korea) produced all the technical gadgets and finally weapons for America, and when the post-Donald-Trump USA couldn't pay up its debts the Chinese (Koreans) turned their toys off and invaded instead.

Small twist: Homefront was originally meant to feature a hostile China (a more plausible threat), but THQ feared for their Asian market. So it was banned in South Korea instead (Stupid western media trying to make propaganda for the North? Get out!)


The game releases in May 2016, sells really badly and gets absolutely torn apart in reviews. The metascores across the systems are around 50. If you search for Homefront TR reviews on YouTube there will be dozens of videos with "worst game ever" in the title.

That’s impressive by itself, really. I mean the “worst game ever”. Wow.

So I, the brilliant genius that I am, decide not to purchase this title. I was looking forward to some nice screenshots of the title (it’s done by basically Crytek – with CryEngine!), but since almost no one bought the game I didn’t get to see any.

So then September comes along and with it comes a free weekend of Homefront: The Revolution.

I see a 35 GB download, hesitate a bit, but decide to give it a try.

The Revolution is off to a bad start

[NOTE: These are not heavy spoilers, all of this takes place in the first hour of the game]

North Korea has long won and you sit around with your anarchy buddies watching TV when suddenly the house gets raided by angry fascists / North Koreans.

They try to convince the player – Ethan Brady – to reveal the whereabouts of rebel leader Walker. Even though they kill his friends in front of him they get nothing out of our man Brady, since he is a silent protagonist. Unlucky.

Especially since that information is basically useless as Walker comes along to rescue Brady. You guys leave the house, Walker tells you to manipulate some radio transmitter, you come back and so do the North Koreans and capture your leader. Not the best day.


So you try to reach the other parts of the resistance, but you get captured and almost cut up because this really crazy bitch (yes) thinks you are a Korean spy. Again, Brady can’t talk, so he can’t defend himself and only keeps his balls because the now de facto resistance leader – Jack Parrish – comes along and says he knows who you are. Mrs. Crazy is disappointed, since she’s a sadist.

Great day so far for our friend Mr. Brady.

He then gets sent out to some barren industrial zone to capture some random points. You get there and there is nothing but more or less bombed-out buildings, which are either empty or serve as some sort of quarters for the KPA (Korean People's Army), and which you can capture for the resistance.

Similar to the latest Far Cry titles this will fast forward the time a bit and then these control points are all graffiti’d up and filled with resistance fighters.

The game almost lost me here. This is not exciting. This is Far Cry 4 with a smaller map and no honey badgers.

Brave New World

Thankfully you get into a “yellow” – civilian sector for the next part of the game. I should clarify – this game is separated into areas which unlock with story progression. The size of these is roughly comparable to S.T.A.L.K.E.R.’s maps but much more detailed.


As with Stalker some of them serve only a story purpose and cannot be revisited later, but most of them are sandboxes which you can travel to and from at any time.

Anyways, does the game get better? Yes – a lot. In the civilian area you generally do not run around armed.
Yep. Weapons hidden. Just walking around minding your business.
Mingle with the civilians. Avoid patrols and avoid looking enemies in the eye too much. Watch out for evil scanner drones. Watch out for CCTV.
If you unlock the silencer you can take out these cameras from hiding and have a good chance not to get detected.

Dealing with enemies in this area usually involves sneaking up on them and silently cutting their throat. If you get detected it usually means running away – OR – gunning down the KPA witnesses and then running away. If you are out of sight you slowly lose the enemies' interest, much like in AC/GTA. It's faster if you hide in a container / bin / toilet. There are no haystacks or spray shops in this game unfortunately.

Like in most open world games there is some sort of general level of accomplishment needed to “finish” an area. The same is true for Homefront.

The metric of choice here is called "Hearts and Minds" and you fill it up by sabotaging the oppressors, helping out/saving civilians and capturing control points.

If you get 100% the people will start a local “revolution” / an uprising and you can usually make your attack on the local police station which is tied to the last objective in the civilian areas.


“Better than playing stupid violent video games, I say!”

Now – that sounds like grind. Hear me out.

I play open world games like this – if I come across something interesting or some small activity I can do while on my way, I will do that.


In Homefront these small things can be: cutting the cable to a power generator, saving a local from harassment by some soldier, giving money to beggars, assassinating some soldier, destroying a truck, tuning a radio to your “Free America” frequency.

These things I can do “on the fly” and it makes my path more interesting.

Sometimes my mates will tell me something about “a rooftop garden around here” – I look up and see a lot of planks across balconies and sides of buildings. This is exciting – I wanna get there. This I will cover later, because in my opinion the platforming puzzles are Homefront’s greatest strength.

So when the final mission for the area comes around and my objective is to get the people to riot, I usually sit at 80-85% progress already. This usually means one more control point and maybe killing an abusive Korean and I am done.


“Fry me to the moon”

This is great. Another metric I can show you is this: Every civilian area has at least 8 or 10 control points. These points can become safe houses, give you information about secret stashes/radios etc. and they change the look and population of the area. In each sector I usually had 3 or 4 of them captured and it was enough. Most of the time some captures were tied to missions anyways.

This is not grind. It’s not a super long game this way (played through in 14 hours) and the open world is just giving you choice of movement. I liked this a lot about Homefront.

The real strengths will be discussed in comparison to Crysis 2.



If you incite a revolution the city scape will change even more drastically than after converting certain areas with control points. People will smarten up and smash their neighbor’s car while beating up the other neighbor’s kids.

That was actually a joke, a realistic dystopian video game future obviously features no kids. They are talked about though, but can’t leave the house or they might cut themselves on all this broken glass. Just a guess.


Red (abandoned / military) and Yellow (civilian) areas usually alternate. The red ones feature more "run here, do this" stuff, but they are usually much shorter when it comes to time spent. I think the balance is pretty nice, because you can just shoot down patrols and go ham in the red areas; enemies do not always call in reinforcements. The variety in civilian zones is really nice, all of them create a vastly different mood.

Unless one of those cliché airships/zeppelins spots you, then the game is basically just a Forrest Gump simulator.

But you can avoid them easily and at no point was there a mission where you are just “on the watch – 5 stars – all the respawning world hunts you” by default, which I very much applaud.


One last point here: Story / pacing.

I think both are alright, the story won’t win awards. Like in any good game the last few hours of the game are a wild ride without much pause, some twists and drama. It’s a lot of shooting then and less of anything else, as you can probably expect, but thankfully I like the shooting.

Now a problem I have seen many mention is the characters being unlikeable. This goes for the whole game actually. Yes you have a great revolution, but afterwards the slums look even worse and many innocent people have died. This is picked up by some and dismissed by others. The characters themselves are sometimes really stubborn to the point where you hate them for their stupid actions, but so do other characters, which softens this issue a bit.

Homefront 2 is the better Crysis

Ah so we come to the point! Finally, you might say!

Why do I feel like a comparison is even fair?

Because of what was advertised and what fans were looking for versus what was released.


“The Urban Jungle”

“The Urban Jungle” was the term Crytek used for Crysis 2 over and over again. At interviews Crytek insisted different play styles and paths were possible and even encouraged.

Now I can't say they openly lied – they didn't. The result is just not what I and many others wanted from a Crysis game.

Crysis, much like Far Cry, is a game where you can set your own pace. You usually have a wide open space with some sort of cities / villages and a lot of vegetation / terrain and water in between.

Cat and mouse playstyles are encouraged, especially on higher difficulties. Go ham –> get spotted, sneak behind some wall or lay prone in the tall grass until your energy is refilled –> reposition with stealth –> go around the enemy –> go ham again.

Or do some creative (encouraged) things like breaking the palm tree someone is lying below. Or stick C4 to a barrel and throw it at a helicopter with super strength.

This playstyle is not possible in Crysis 2. Creativity is super limited, flanking is almost impossible, running out of range and then reassessing the situation… all impractical.

It's very obvious with the changes to the binoculars as well. You can basically just have them turned on (normal view, binoc overlay) or zoom to 1.4. That's it. They are absolutely useless. You don't need to scout the area in Crysis 2 because you don't need to think about the outcome in advance.


What I expected Crysis 2 to be

The Urban jungle.

Two or 3 streets wide, you can get through to the other street in small alleys and through houses, you can go into some windows, climb some stairs then get out somewhere else.
Only some, not all houses are modeled internally. But they have interesting paths, secrets and are interconnected somehow.
You find civilians in the houses sometimes. Sometimes, like in Crysis, you have small side objectives. You can trick the enemies into ambushes, you can use sleeping darts like in Crysis.

All of the gameplay simplifications are fine, really. But I wish the world was still a Crysis semi-open sandbox.

But – amazingly – Crytek UK had something like this in mind for their Homefront.

You cannot enter every house – but you can enter so many that it always opens up unexpected unmarked routes when escaping. And there are so many houses where you can access something on the second floor or through the rooftop, jump to the balcony of the house next door and continue your journey.

There is another thing. I really hoped Crytek would use the opportunity given by the nanosuit to create interesting platforming puzzles.

This, to me, is mind-boggling. You have this versatile suit that can jump so high and pull itself up on everything, there could be so many possibilities of making interesting platforming puzzles. Hide secrets in locations where you really have to figure out the path first!

And here is where Homefront shines the brightest. In my opinion of course. I loved these kinds of small challenges in Far Cry's radio towers or Assassin's Creed's secrets.
But Homefront really takes it to the next level and I cannot think of a first person open world game that does it so well. It is simply fun looking up and seeing some wooden plank going across houses and looking for the entry point to slowly climb the way up to the house.


Let me give you an example during the midgame.

You see there is a control point on your map, it's in a car park on the second floor. You enter and you notice the gate to the first floor is opened just a bit. So you slide through. You find yourself on the first floor and notice the gate to the next one is closed, but there is a blue motorcycle painted next to it. You notice a ramp outside, so you go out. There is a locked container, you break the lock and obviously you find a motorcycle. You then drive up the ramp and jump through to the first floor. There you find a generator which you power up with your motorcycle – it opens the gate. On the way up there is a closed door, but you find a way up through a window. Unfortunately it leads to a gated window – you can however shoot the lock off the door. You go back and make it through the door. This is fun.

However, in the last civilian section these become really hard sometimes; I admit I had to Google the solution in 2 cases. Not that these control points were necessary for the game progression, I could have gotten my area approval up in other ways.

Another neat thing about Homefront is that the civilian area feels like a lived-in world. You can see people leaving their homes (where you can't go, cause they lock up) and you can even see how they actually spray new graffiti on the walls.



The original Crysis, let's be honest, was not super varied in terms of environments and set pieces. The first few hours were very similar, but since the gameplay was so fun it was alright. Then later we got some tank battles, VTOL flying (never again), zero gravity, ice age, nuclear age.

While the variety picked up later, it’s the first few levels that are usually remembered most fondly because they were the most fun.

Crysis 2 I remember more for the varied locations than for the gameplay. It has two vehicle sequences, one of these is on rails, the other is as linear as can be.

But it does have good pacing, similar to Crysis 1, in terms of weapons that slowly get exposed to the player throughout the levels. This is well made in Crysis 2 I feel like. Plus giving players showoff weapons like the Majestic revolver is awesome.

Homefront has this horrible unlockable pacing problem. Basically you have all you want after the first hour of gameplay. You can buy new weapons for tech points, which you get automatically by capturing control points. Same goes for attachments.

But that's it. In my opinion one of the largest strengths of a semi-open world was left out here. You can make players do extra stuff very happily if the reward is special. There are no special rewards here.
There are no special weapons here either. You have your weapons, you can change them around, make an SMG out of a pistol and a flamethrower out of a crossbow, but that's just unlocked by buying stuff, not by discovering.

If Homefront tied some upgrades – or even better, unique weapons – to certain stashes / missions or secret locations it would have been much better in my opinion. This is a part where S.T.A.L.K.E.R. shines a lot and I feel like Dambuster Studios could learn from it.

For example, instead of a mission where you free a control point, you could have the player helping two different weaponsmiths, one giving you a special assault rifle upgrade, the other a sniper one, etc.

Speaking of variety though, the missions are not very varied and often do have a “do this 3 times” pattern – like check the 3 houses for the remaining prisoners. This has to be said, if you despise games like let’s say Mad Max, you won’t be happy here.


Oh it is a shooter after all, so let’s talk about that. I am a player that will always prefer the single fire to the full auto, so the “battle rifle” was for me. It’s perfect, really, felt absolutely right. The only option for the assault rifle was also an M4, which makes sense as the “American Rifle” but I just hate the feel of it. The system of having few “chassis” on which we can attach bigger mods is cool, but the resulting lack of variety of each type hurts. Crysis 2 is not much better unfortunately.

What I can commend them on is the weapon sounds. When you get a headshot a deep bass will join your shot and it is really satisfying. Something that I haven’t noticed in other games.


Crysis 2 had only limited scopes/attachments and the movement when aiming was totally off, I simply cannot play with the 4x assault scope for example because it moves so fast. But they put so much work into the weapon animations, this is something that really brings me joy. Every weapon will reload differently depending on stance. Prophet treats his weapons like a hammer when in armor mode, but gently like a baby when in stealth. Impressive.

But yeah, what’s up with only having such a limited amount of scopes per weapon? Why can’t I just put a sniper scope on my scarab? I can do that in Homefront luckily.

Enemies? Human enemies are not super varied in either game, but they are fun to fight. This has been nailed I think by both games.



Is Homefront a better game than Crysis 2? I don’t know, maybe not. But it is a better realization of the Crysis gameplay and feeling. Is it a Crysis successor? No, not at all. But it is a vision of an urban jungle that Crysis 2 failed to deliver.

For players that take the open world "as they go" I think it is a fun ride. I am baffled by the metascore, this is by any means an above average production. I often start games just to put them down again. I cannot stay for long with most releases. It's almost ironic that a relatively generic open world game would make me spend my weekend playing through, but I liked it. It also helped that I was thinking about taking screenshots so often.

I would think many bad reviews had to do with technical problems, which I, many months after release, did not experience at all. I cannot say whether or not consoles have fewer problems now.

Graphics are great, but not relevant in the comparison to C2, which has aged rather poorly. I think they are on par with Assassin’s Creed in terms of lighting, especially when overcast and with rain. Really well implemented dynamic global illumination system.

A final note: The game never crashed on me, I once experienced a bug where my weapon had the wrong scope attached after a cutscene. It ran very smooth on a Radeon 280, which is to be expected.

I used the HUD Toggle from NeoGAF user The Janitor to create some of these screenshots.

Implementation of PBR and SOME DEPTH OF FIELD

Hi guys,

really quick update, I haven't posted in a while. In my game I have further improved my rendering model so that it is consistent with Substance Painter, which by the way is an awesome program.

Here you can see the texturing of my main truck with PBR materials, done in substance painter. I hope you like it.
For the environment maps I actually implemented some roughness calculation tools, I might write about that later.


I also implemented some Depth of Field effect to better showcase my assets in screenshots/trailers/cutscenes

I have been playing a lot of Warhammer 40K games lately so I was looking to import a WH40k model into my engine and create some PBR materials for it.

I found this one and did some texturing work on it to make it look like it walked in the desert for decades :)


Besides I implemented some sort of particle editor and made a custom GUI for it. Will come in handy sooner or later :)



A friend and I also made a quick game in basically one evening. The idea was to be an overpowered cyberninja slashing through evil robots. Fun. Done in MonoGame from scratch.


Major gameplay systems done!

Hi guys,

at least in terms of major systems I'd say the game is feature complete. That's often the point at which a game is said to be in its alpha stage (unless you are an AAA dev who wants to push out a free multiplayer event for publicity and labels it "alpha" so people can't complain about bugs)

All the underlying scripts work for these elements:

  • World Map
  • Dialogue / Event manager
  • Car Combat (dynamic and static camera)
  • Inventory

So basically I think most of the necessary stuff has a very solid foundation now.

I put up a short video with the different systems:

The main gameplay is as follows:

  •  You have a main goal of finding a certain person at the end of the world map
  •  You choose a path on the world map.
  •  Along the way you encounter random events, enemies, friends. But you also have certain points of interest marked on your map and you can’t finish the game without stopping by some.
  • Sometimes diplomacy fails and you have to engage in combat.
  • Combat plays out a bit like a party action RPG. You have a group of cars with differently skilled operators and different sets of weapons. Use skill and positioning to win.
  • Loot destroyed enemy cars, upgrade your own.
  • repeat until game over, either through finishing the game or losing all your men and women.

Obviously content is the biggest hurdle for most indie developers.

And, sadly, the artist I used to collaborate with can't find any time right now. So don't expect a lot of new 3d models in the near future.

If you are an artist who would like to collaborate, PM me here or on Twitter please :)

So yeah, that’s it for now. Not a lot of production value in the video, but I wanted to push it out after a lot of coding.

Geometry trails / Tire Tracks Tutorial

Lots of T's in the title :O


Hi guys and girls,

today I’d like to talk a little bit about geometry trails (is this even the right name?), which can replace particles and be used for example for trails left by space ship engines or tire tracks in a racing game.

Since it’s a fairly easy thing to implement but for some reason not many tutorials can be found on the topic I decided to write up a little bit about it. As you might have learned from past blog entries I really enjoy writing about that stuff, even if the write-up takes longer than the sloppy implementation.

The code bits and pieces are in C# and were implemented in MonoGame.


In my case I use this stuff for tire tracks – see how the cars leave tracks in the sand!


So the basic idea is: Let’s create a strip of triangles that “follow” a certain path (for example the cursor, or a ship, a car etc.)

We do not want to animate the triangles – only the last segment stretches a bit until it reaches our defined maximum length. Then we remove the oldest segment and create a new one at the front.


In this example I limit the trail to 3 segments.

In-engine this would look like this:


It's easy to notice that the curves are not really smooth yet. We have to reorient the segment before the one we are currently drawing so that it faces the half-vector between the direction to the second-to-last segment and the direction of the new segment.

Now we get stuff like this

Finally we want the trail to fade out smoothly in the end.

The idea is pretty simple.

Let’s say we want to fade out over 2 segments (we can also use a world unit length and convert it to segment amount).

Our trail has a visibility term (alpha) which goes from 0 (transparent) to 1 (fully visible).

If all our segments have full length then it's pretty easy:
Our first 3 segments have visibility:

visibility:   0 ----- 0.5 ----- 1 ----- 1 ----- 1 ...
segment #:    0        1        2       3       4 ...

Makes sense right?

But what we really want is a smooth fade depending on the length of the newest (not full-length) segment.
Let’s say that it has reached half the length of a full segment … where does our ramp start and end?

Well, obviously it starts halfway between segment 0 and 1, and it finishes halfway between segment 2 and 3.

So this is basically just some simple linear math.


To get the visibility at our current segment i we can use this formula:

y = 1/fadeOutSegments * x - percentOfFinalSegment

If we want the ramp to start somewhere in decimal numbers we have to use the range {-1, 2} for our visibility term and then clamp to {0, 1} in the pixel shader.

Because our graphics card only accepts floats between 0 and 1 we “encode” our y value like this

visibility = (visibility + 1)/3.0f

to map from {-1,2} to {0,1}. Later we can decode the other way around.


Looks pretty smooth, right?

Final Modification

So that’s basically it.

Now we need to bring it to the screen and there are just a few things left to say.

First of all – your trails don’t have to have equidistant segment points. It makes sense to make more, smaller segments when processing curves and use larger ones when having a long straight line.

Another thing – if you want to have floating trails, for example lines following some projectile, it would be a good idea to modify the position of both vertices (per segment) in our vertex shader so they always face the camera (like billboards, stuff like lens flares, distant trees etc.)
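For camera-facing trails, one possible vertex-shader approach is to pass the segment center plus the trail direction and expand the two vertices along a vector perpendicular to both the trail direction and the view direction. This is only a sketch under that assumption; CameraPosition and HalfWidth are placeholder parameters and the vertex layout differs from the tire-track one used below:

float3 BillboardTrailVertex(float3 centerPos, float3 segmentDirection, float side) //side is -1 or +1
{
    float3 toCamera = normalize(CameraPosition - centerPos);
    float3 right = normalize(cross(segmentDirection, toCamera));
    return centerPos + right * side * HalfWidth;
}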

If we use them as tire tracks it would be a good idea to project them onto our geometry.
Here is a great blog article about decal projection (by David Rosen from Wolfire games)

This is not trivial and, depending on geometry density, not cheap either – but it is the proper way!

If you happen to work with a deferred engine making decals can be easier, there are tons of good resources if you search for “deferred decals” :)

In my case I went a different route.

Since I know I only want to have tire tracks on terrain I simply draw the terrain and then draw the lines on top without any depth check. Since the terrain is rather low frequency it’s a pretty plausible looking solution.

Afterwards I draw all the other models. The obvious downside to this method is that I have a little bit of additional overdraw since I draw the terrain before drawing the models that obstruct/hide parts of it.

However, the effect on frame time is really minimal and the effort of implementing the thing is really low, so I'll take that.
With the visibility term I can also ensure that cars that currently do not touch ground do not contribute a visible tire track, which is pretty useful.



Let’s initialize our class

public class Trail
{
    //our buffers
    private DynamicVertexBuffer _vBuffer;
    private IndexBuffer _iBuffer;

    private TrailVertex[] vertices;

    //How many segments are already initialized?
    private int _segmentsUsed = 0;

    //How many segments does our strip have?
    private int _segments;

    //How long is each segment in world units?
    private float _segmentLength;

    //The world coordinates of our last static segment end
    private Vector3 _lastSegmentPosition;

    //If we fade out - over how many segments?
    private int fadeOutSegments = 4;

    private float _width = 1;

    private float _minLength = 0.01f;

    public Trail(Vector3 startPosition, float segmentLength, int segments, float width, GraphicsDevice graphicsDevice)
    {
        _lastSegmentPosition = startPosition;
        _segmentLength = segmentLength;
        _segments = segments;
        _width = width;

        _vBuffer = new DynamicVertexBuffer(graphicsDevice, TrailVertex.VertexDeclaration, _segments * 2, BufferUsage.None);
        _iBuffer = new IndexBuffer(graphicsDevice, IndexElementSize.SixteenBits, (_segments - 1) * 6, BufferUsage.WriteOnly);

        vertices = new TrailVertex[_segments * 2];

        //the triangle layout never changes, so the index buffer can be filled once
        FillIndexBuffer();
    }

    private void FillIndexBuffer()
    {
        short[] bufferArray = new short[(_segments - 1) * 6];
        for (var i = 0; i < _segments - 1; i++)
        {
            //two triangles per segment quad
            bufferArray[0 + i * 6] = (short)(0 + i * 2);
            bufferArray[1 + i * 6] = (short)(1 + i * 2);
            bufferArray[2 + i * 6] = (short)(2 + i * 2);
            bufferArray[3 + i * 6] = (short)(1 + i * 2);
            bufferArray[4 + i * 6] = (short)(3 + i * 2);
            bufferArray[5 + i * 6] = (short)(2 + i * 2);
        }
        _iBuffer.SetData(bufferArray);
    }

Pretty simple so far right?

We use a dynamic vertex buffer where we store the vertex information. A dynamic vertex buffer plays nicely with our goal of changing the geometry constantly.
On the other hand we do not need a dynamic index buffer since the relationship of the vertices always stays the same, so we can initialize it from the start. (Actually we don’t have to do that for each instance of our trail, we can make the index buffer static if we use the same amount of segments/vertices for all our trails/tracks).

Now let’s move to the other 2 parts that are pretty trivial – the draw() function and a dispose() function (since graphics recourses are not handled by our garbage collector we need to delete them manually)

    public void Draw(GraphicsDevice graphics, Effect effect)
    {
        effect.CurrentTechnique = effect.Techniques["TexturedTrail"];
        effect.CurrentTechnique.Passes[0].Apply();
        _vBuffer.SetData(vertices);              //upload the latest vertex data
        graphics.SetVertexBuffer(_vBuffer);
        graphics.Indices = _iBuffer;
        graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _segmentsUsed * 2);
    }

    public void Dispose()
    {
        _vBuffer.Dispose();                      //graphics resources are not garbage collected
        _iBuffer.Dispose();
    }

I think these 2 should make sense, right?

Now comes the main part – the Update() class, which we call from our target with the new position.

    public void Update(Vector3 newPosition, float visibility)
    {
        if (!GameSettings.DrawTrails) return;

        //Initialize the first segment, we have no indication for the direction, so just displace the 2 vertices to the left/right
        if (_segmentsUsed == 0)
        {
            vertices[0].Position = _lastSegmentPosition + Vector3.Left;
            vertices[0].TextureCoordinate = new Vector2(0, 0);

            vertices[1].Position = _lastSegmentPosition + Vector3.Right;
            vertices[1].TextureCoordinate = new Vector2(0, 1);

            _segmentsUsed = 1;
        }

        Vector3 directionVector = newPosition - _lastSegmentPosition;
        float directionLength = directionVector.Length();

        //If the distance between our newPosition and our last segment is greater than our assigned
        //_segmentLength we have to delete the oldest segment and make a new one at the other end
        if (directionLength > _segmentLength)
        {
            Vector3 normalizedVector = directionVector / directionLength;

            //normal to the direction. In our case the trail always faces the sky so we can use the cross product
            //with (0,0,1)
            Vector3 normalVector = Vector3.Cross(Vector3.UnitZ, normalizedVector);

            //how many segments are we in?
            int currentSegment = _segmentsUsed;

            //if we are already at max #segments we need to delete the oldest one
            if (currentSegment >= _segments - 1)
            {
                //drop the oldest vertex pair by shifting everything down one slot
                ShiftDownSegments();
                currentSegment--;
                _segmentsUsed--;
            }

            //Update our latest segment with the new position
            vertices[currentSegment * 2].Position = newPosition + normalVector * _width;
            vertices[currentSegment * 2].TextureCoordinate = new Vector2(1, 0);
            vertices[currentSegment * 2 + 1].Position = newPosition - normalVector * _width;
            vertices[currentSegment * 2 + 1].TextureCoordinate = new Vector2(1, 1);

            //Fade out
            //We can't have more fadeout segments than initialized segments!
            int max_fade_out_segments = Math.Min(fadeOutSegments, currentSegment);

            for (var i = 0; i < max_fade_out_segments; i++)
            {
                //Linear function y = 1/max * x. Need to check with prior visibility, might be lower (if car jumps for example)
                float visibilityTerm = Math.Min(1.0f / max_fade_out_segments * i, DecodeVisibility(vertices[i * 2].Visibility));
                visibilityTerm = EncodeVisibility(visibilityTerm);

                vertices[i * 2].Visibility = visibilityTerm;
                vertices[i * 2 + 1].Visibility = visibilityTerm;
            }

            //Our last segment's position is the current position now. Go on from there with a fresh live segment
            _lastSegmentPosition = newPosition;
            _segmentsUsed++;
        }
        //If we are not further than a segment's length but further than the minimum distance to change something
        //(We don't want to recalculate everything when our target didn't move from the last segment)
        //Alternatively we can save the last position where we calculated stuff and have a minimum distance from that, too.
        else if (directionLength > _minLength)
        {
            Vector3 normalizedVector = directionVector / directionLength;

            Vector3 normalVector = Vector3.Cross(Vector3.UnitZ, normalizedVector);

            int currentSegment = _segmentsUsed;

            vertices[currentSegment * 2].Position = newPosition + normalVector * _width;
            vertices[currentSegment * 2].TextureCoordinate = new Vector2(1, 0);
            vertices[currentSegment * 2].Visibility = EncodeVisibility(visibility);
            vertices[currentSegment * 2 + 1].Position = newPosition - normalVector * _width;
            vertices[currentSegment * 2 + 1].TextureCoordinate = new Vector2(1, 1);
            vertices[currentSegment * 2 + 1].Visibility = EncodeVisibility(visibility);

            //We have to adjust the orientation of the last vertices too, so we can have smooth curves!
            if (currentSegment >= 2)
            {
                Vector3 directionVectorOld = vertices[(currentSegment - 1) * 2].Position -
                                             vertices[(currentSegment - 2) * 2].Position;

                Vector3 normalVectorOld = Vector3.Cross(Vector3.UnitZ, directionVectorOld.NormalizeLocal());

                normalVectorOld = normalVectorOld + (1 - Vector3.Dot(normalVectorOld, normalVector).Saturate()) * normalVector;

                vertices[(currentSegment - 1) * 2].Position = _lastSegmentPosition + normalVectorOld * _width;
                vertices[(currentSegment - 1) * 2 + 1].Position = _lastSegmentPosition - normalVectorOld * _width;
            }

            // Visibility

            //Fade out the trail to the back
            int max_fade_out_segments = Math.Min(fadeOutSegments, currentSegment);

            //Get the percentage of advance towards the next _segmentLength when we need to change vertices again
            float percent = directionLength / _segmentLength / max_fade_out_segments;

            for (var i = 0; i < max_fade_out_segments; i++)
            {
                //Linear function y = 1/max * x - percent. Need to check with prior visibility, might be lower (if car jumps for example)
                float visibilityTerm = Math.Min(1.0f / max_fade_out_segments * i - percent, DecodeVisibility(vertices[i * 2].Visibility));
                visibilityTerm = EncodeVisibility(visibilityTerm);

                vertices[i * 2].Visibility = visibilityTerm;
                vertices[i * 2 + 1].Visibility = visibilityTerm;
            }
        }
    }


I hope that is relatively clear. The helper functions used are here:

    private float EncodeVisibility(float visibility)
    {
        //map from {-1, 2} to {0, 1}
        return (visibility + 1) / 3.0f;
    }

    private float DecodeVisibility(float visibility)
    {
        return (visibility * 3) - 1.0f;
    }

    private void ShiftDownSegments()
    {
        //move every vertex pair one slot towards the start, dropping the oldest pair
        for (var i = 0; i < _segments - 1; i++)
        {
            vertices[i * 2] = vertices[i * 2 + 2];
            vertices[i * 2 + 1] = vertices[i * 2 + 3];
        }
    }

Our Vertex Declaration looks like this

public struct TrailVertex
{
    // Stores the position of the vertex.
    public Vector3 Position;

    // Stores TexCoords
    public Vector2 TextureCoordinate;

    // Visibility term
    public float Visibility;

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3,
            VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector2,
            VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(20, VertexElementFormat.Single,
            VertexElementUsage.TextureCoordinate, 1)   //usage index 1 -> TEXCOORD1 in the shader
    );
}

The final remaining part is the HLSL code. Here you go

float4x4 WorldViewProjection;

float4 GlobalColor;

struct VertexShaderTexturedOutput
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD0;
    float4 Color : COLOR0;
};

Texture2D texMapLine;
sampler LinearSampler = sampler_state
{
    MinFilter = Linear;
    MagFilter = Point;
    AddressU = Wrap;
    AddressV = Wrap;
};

// Simple trails

VertexShaderTexturedOutput VertexShaderTrailFunction(float4 Position : SV_POSITION, float2 TexCoord : TEXCOORD0, float Visibility : TEXCOORD1)
{
    VertexShaderTexturedOutput output;

    float4 worldPosition = mul(Position, WorldViewProjection);
    output.Position = worldPosition;

    //decode the visibility term back from {0,1} to {-1,2} and clamp
    float vis = saturate(Visibility * 3 - 1);
    output.Color = GlobalColor * vis * float4(0.65f, 0.65f, 0.65f, 0.5f);
    output.TexCoord = TexCoord;
    return output;
}

float4 PixelShaderTrailFunction(VertexShaderTexturedOutput input) : SV_TARGET0
{
    float4 textureColor = 1 - texMapLine.Sample(LinearSampler, input.TexCoord);
    return input.Color * textureColor;
}

technique AmbientTexturedTrail
{
    pass Pass1
    {
        VertexShader = compile vs_5_0 VertexShaderTrailFunction();
        PixelShader = compile ps_5_0 PixelShaderTrailFunction();
    }
}

Deferred Engine Progress part2

So a few basic features made it in. I'll go through them in the order of implementation.


In the .gif above you can see the renderTargets for the rendering

  • Albedo (base texture/color)
  • World Space Normals
  • Depth (contrast enhanced)
  • Diffuse light contribution (contrast enhanced)
  • Specular light contribution (contrast enhanced)
  • skull models for the “hologram” effect (half resolution, grayscale – saved only as Red)
  • composed deferred image + glass dragon rendered on top

Variance Shadow Mapping

So yeah, basically I had only used PCF shadow mapping before, but since this is a nice Sponza test scene I really wanted to have very soft shadows.

A possible solution for my needs is Variance Shadow Maps; here is a link to an NVIDIA paper about it:
and the original paper/website by Donnelly and Lauritzen

You can find a detailed description of the process in these papers. The short idea: store both depth and depth squared in the shadow map. We can use them to calculate the variance:
σ² = E(x²) - E(x)²

Chebyshev's Inequality states that

P(x >= t) <= pmax(t) = σ² / (σ² + (t - μ)²)

which basically means: the probability that x is greater than or equal to t is at most pmax, and this is where our variance comes in. In our case x is the occluder depth stored in the shadow map (described by the two moments) and t is the depth of the pixel we are currently shading.
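In the shadow lookup this is just a few lines. A sketch of the standard Chebyshev upper bound (moments.x = E[x] and moments.y = E[x²] read from the blurred shadow map; MinVariance is an assumed small tweakable to fight numerical issues):

float ChebyshevUpperBound(float2 moments, float receiverDepth)
{
    //fully lit if the receiver is in front of the mean occluder depth
    if (receiverDepth <= moments.x)
        return 1.0f;

    float variance = moments.y - moments.x * moments.x;   //sigma^2 = E[x^2] - E[x]^2
    variance = max(variance, MinVariance);

    float d = receiverDepth - moments.x;                  //t - mu
    return variance / (variance + d * d);                 //pmax(t)
}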

So the cool part is that by calculating this number we don't get a binary yes/no shadow, but a gradient.

The biggest benefit of VSMs is that they can be blurred at the texture level (because we are not dealing with binary responses), and that is much cheaper than sampling a lot of times at the lighting stage (which we normally do).


Another little cool trick is that we can offset the depth a little when drawing transparent/translucent meshes. That way the variance will be off a little bit for the whole mesh and the shadowed pixels will never be fully in shadow.

VSMs have their own share of problems, namely light leaking and inaccuracies at the edges when a pixel should be shadowed by multiple objects. This problem gets worse with the transparency “just shift around some numbers” idea, but meh it works and most of the time it’s good looking.


Environment Mapping (Cubemaps)

So I am pretty happy with the lighting response from the models in the test scene. However, the lighting was still pretty flat. I don't use any hemispherical diffuse this time around, so basically all the colors come from the light sources.

This is ok for some stuff, but when I added some glass and metal (with the helmets, see below) I knew I needed some sort of environment cubemap.

So I just generate one at the start of the program (or manually by the press of the button whenever I want to update the perspective).

I basically render the scene from the point of view of the camera in 6 directions (top, bottom, left, right, front, back) and save these 6 images to a texture array (TextureCube).

I use 512×512 resolution per texture slice, I think the quality is sufficient.

I can update the cubemap every frame, but that basically means rendering 7 perspectives per frame and updating all the rendertargets in between (since my main backbuffer has a different resolution than the 512×512 I use for the cubemap) and I only get around 27 FPS. Keep in mind the whole engine is not very optimized (using large, high precision renderTargets and expensive pixel shader functions with lots of branching and no lookup tables etc.)

When creating the cubemap I enable automatic mip-map chain generation (creating smaller downsampled textures off of the original one –> 256, 128, 64, 32, 16, 8, 4, 2, 1) which I will use later. Note: Because of Monogame/Slim.dx limitations I cannot manually create the mip maps and have to go with simple bilinear downsampling. If I had manual access I would use some nice gauss blurring (which would be even more expensive to do at runtime).

When it comes to the deferred lighting pass I add one fullscreen quad which applies the environment map to all the objects (Note: Engines often apply environment mapping just like deferred lights with sphere models around them)

Depending on the roughness of the material I select different mip levels of the environment map:
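In the shader this selection can be as simple as scaling the roughness by the number of available mip levels. A minimal sketch (names are assumptions; a 512×512 face gives 10 mip levels):

float3 SampleEnvironment(float3 reflectionDirWS, float roughness)
{
    float mip = roughness * (NumberOfMipLevels - 1);
    return EnvironmentCube.SampleLevel(LinearSampler, reflectionDirWS, mip).rgb;
}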


Note: I noticed a lot of specular aliasing with very smooth surfaces. I implemented a sort of normal variance edge detection shader to tackle the issue, but it wasn’t very sophisticated. The idea was to check the neighbour pixels and compare their normals. If there was little difference, no problem. But if they faced vastly different directions then I sampled at a higher mip level to avoid aliasing.


Hologram effect on helmets

I got inspired by this concept art by johnsonting (DeviantArt)

The helmets on these guys have an optical effect which overlays some sort of skull on top of their visors which makes them look really badass in my opinion.

Here is a closer look

So my basic idea was to render skulls to a different render target and then, when composing the final deferred image, sample this render target at pixels with a certain material id (the visor/glass).
With some basic trickery I made this pixel look. Note that I do not need to render the skulls at full resolution since they will be undersampled anyways in the final image.
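As a sketch, the composition step could look roughly like this (SkullRenderTarget, MaterialIdMap, HologramMaterialId and HologramColor are assumed names; the material id convention is whatever the G-buffer uses for the visor):

float3 ComposeHologram(float3 sceneColor, float2 uv)
{
    float skull = SkullRenderTarget.SampleLevel(LinearSampler, uv, 0).r;  //half resolution, stored in red only
    float idValue = MaterialIdMap.SampleLevel(PointSampler, uv, 0).a;
    float isVisor = abs(idValue - HologramMaterialId) < 0.01f ? 1.0f : 0.0f;
    return sceneColor + skull * HologramColor.rgb * isVisor;
}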

First attempt: (click on the gif for better resolution)


Later, through trying some Gaussian blurring, I found that the soft, non-pixelated look is also interesting.

Here I show them both:


It was hard finding appropriate free models for the helmets, but eventually I found something usable from the talented Anders Lejczak. Here is the artist’s website


Some other ideas floating around in my head

Hybrid Supersampling

  • Maybe use double (or any 2^n multiplier) resolution for the g-buffer. Should be relatively cheap since there is almost no pixel shader work.
  • Then do the lighting calculations at base resolution, sampling the g-buffer with nearest neighbour.
  • Upsample the lighting to the double resolution with the help of depth information (bilateral upsampling).
  • Downsample the whole thing again. Have antialiasing. Success? Need to try that out.

Light Refractions/Caustics for Glass Objects

For the glass dragon it would be nice to have light refraction/caustics behind the model on the ground. But we can't use photon mapping or anything of the sort.

We know from the light's perspective, with normals/refraction/depth, where the refracted light should end up, but we can't do that in a rasterizer; we can only write to the pixel we started out with. BUT we can manipulate the vertices.
Conveniently, the Stanford dragon has almost a million of these.

-> maybe: From the light's perspective, displace the vertices of the model by the correct amount given the refraction and normal information (like you would displace pixels at the corresponding positions). The depth/amount depends on the depth buffer (shadow map).
-> this distorted model is then saved in another shadow/depth map (plus depth information), which could be stored in the .gb channels of the original shadow map.
-> reconstruct light convergence/caustics during light pass with the map
-> possible?


Deferred Engine Progress

So far so simple. I added some new features, namely a transparent/glass shader which is rendered in a forward pass:


The refractions (and, at high angles of incidence, reflections) are simply a distortion of the background image. I am still trying to figure out screen space ray marching, but I have run into some depth conversion issues. I will try more in the future.


Apart from that I implemented VSM-shadow mapping for a spotlight. By playing with the values I can make the shadow from the glass dragon a little less dark.

The gif above is a breakdown of the deferred rendering path minus the depth buffer, since it’s almost black.

-> albedo, normals, diffuse, specular, final composite


I have been toying with the idea of writing a deferred shading or lighting renderer for a while, and it's really easy to do with the myriad of tutorials online.

The main thing I wanted to try was an idea I had floating around in my head for real-time global illumination, or at least a simple light bounce.

Not very feasible for a game or anything, but something I wanted to try.


My main idea was for a single light source first.

What if we store a fullscreen reflection vector rendertarget (not the normals of the objects, but the direction the light bounces off of them)? In a different renderTarget, or potentially just the alpha channel of the first one, I then store the distance from each pixel to the light source.

The idea then would be – for each pixel I know which path the light will take and I know how much it has traveled already. With specular information gathered in some alpha channel of the g-buffer I can estimate how reflective and how “rough/smooth” the current pixel is.

My first idea was I could just ray march from this pixel until I hit another pixel and add some color to it. Doesn’t have to be full resolution.

And if the pixel is relatively rough I just go in a more random direction, depending on how unlike a mirror the surface is. Would be a bit noisy, but with a lot of pixels and maybe some temporal blending it would be ok, maybe?

Well, and here the whole thing fell apart. I forgot that I can't write to just any pixel. I can sample from a certain point and write my own pixel, but I can't manipulate others (unless I use compute shaders or CUDA etc. I'd love to, but I felt like staying with MonoGame since it's so fast to set up, and stuff like this is not supported).

So basically RIP.

I thought I might at least finish what I started, so I decided to go with a gathering approach instead. For each pixel I check a chunk of its neighbours. If their reflection vector points towards my pixel and they still have some light distance "left", I color my pixel a bit with their color multiplied by the light color.

Looks like an expensive bloom/glow now. Mission fucking accomplished.


Recent Updates

Hi guys,

haven’t posted here in a while, so let’s catch up!

Most important:

I added drones and motorcycles.
Here’s a close-up. You’ll notice the motorcycle is fully textured already! (Don’t mind the shadow bias and the glossy wheels)

From concept:


To model:


The movement has to be special, and the motorcycles should lean a lot into the curves (maybe more than they need to, so that it is visible from far away).

Motorcycles should be important in my game. I didn't want to create a world where the player ends up commanding 8 tanks by the time the game is almost over. Smaller vehicles like this one can flank and position themselves in front of the crowd in order to drop mines or other grenades.
The gunner can only fire backwards and is probably not hyper effective, but I think I’ll allow rocket launchers to be carried as well.


Another cool rendering feature is some near-terrain dust clouds moving over the sand.
Visible here if you pay attention. It's basically just two tiled maps overlaid (and multiplied), and I think it looks pretty good already.


I tried out how specular highlights from point lights would work, and I'm pretty happy with the results. I think a rainy / wet version of the game would be pretty easy to realize.


A while ago, I modified the smoke even further.



Hi guys,

I will note some small things that I wish I'd learned earlier; hopefully one or two of you can appreciate this list of more and less significant things to consider.

Object Oriented Programming

So, yeah, I love OOP. And so do the tools I use, like ReSharper for Visual Studio (this thing is a godsend, highly recommended).

I have worked with OOP in mind for many years now, so it was pretty obvious to me how to do stuff. That later turned out to be great for clean programming, but maybe not optimal for applications that need to crank out maximum performance at thousands of frames per minute.

It is worth noting that on modern hardware, especially when not CPU bound, these things don't matter as much. Many times the cleanliness of the program and the ease of working with a proper OOP setup save the programmer enough time to make them worth using anyway (your time vs. program runtime).

Just a quick list of what to avoid, especially if called hundreds of times per frame.

  • foreach –> use “for” instead; foreach can create unnecessary garbage. This does not apply to all foreach loops, there are quite a few articles out there on this.
  • LINQ Expressions –> look awesome (and make the code look sophisticated to absolute beginners) but are even worse in terms of garbage generation.
  • Events –> if your projectiles call an event on all the enemies to check whether they have hit anything, you might as well rewrite all your code. Again, unnecessary garbage creation.
  • Interfaces are apparently very slow. Call directly instead. Something I had to learn at a point where basically my whole infrastructure implemented interfaces and the calls were made through them :(
    Indirect calls in general are not great.
  • There is other stuff like inlining everything and using fields instead of properties, but I cannot comment on that and I don't think it's worth obsessing over.
  • Lists. Lists are awesome, I use lists a lot. But if you have big lists which change a lot, use pools or arrays instead (see the sketch after this list). Again, to avoid unnecessary garbage.
  • can you think of others? Leave a comment!
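To illustrate the kind of change I mean, here is a minimal sketch (Projectile and ProjectileSystem are made-up example types, not from my engine): a preallocated array walked with a plain for loop, so nothing new is allocated per frame.

using Microsoft.Xna.Framework;

// Made-up example type, purely for illustration.
public class Projectile
{
    public Vector2 Position;
    public Vector2 Velocity;

    public void Update(float dt)
    {
        Position += Velocity * dt;
    }
}

public class ProjectileSystem
{
    private const int MaxProjectiles = 1024;

    // Allocated once; nothing here is created per frame, so the garbage collector stays quiet.
    private readonly Projectile[] _pool = new Projectile[MaxProjectiles];
    private int _activeCount;

    public void Spawn(Vector2 position, Vector2 velocity)
    {
        if (_activeCount >= MaxProjectiles) return;
        // Reuse the instance already sitting in this slot, create it only the first time.
        if (_pool[_activeCount] == null) _pool[_activeCount] = new Projectile();
        _pool[_activeCount].Position = position;
        _pool[_activeCount].Velocity = velocity;
        _activeCount++;
    }

    public void Update(float dt)
    {
        // Plain for over a preallocated array: no enumerator, no LINQ, no per-frame allocations.
        for (int i = 0; i < _activeCount; i++)
        {
            _pool[i].Update(dt);
        }
    }
}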


Don’t use “SetRenderTarget”

“Why not? Every tutorial ever uses it!” you may say, but hear me out.

One of the major problems as well as benefits of using .NET as an underlying platform is garbage collection. It's awesome because you don't have to bother allocating memory manually and you can't really forget to release said memory later, so memory leaks can basically never happen (well, they can in some cases; I talk about that later).

But – the garbage collector will cause a small frame spike every time it decides enough useless stuff has piled up and needs to be disposed.

So we must avoid creating a lot of data (arrays of data!) every frame. However, if you switch your current render target the way every tutorial does, you do just that:


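_graphicsDevice.SetRenderTarget(myRenderTarget);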
will create a new array containing “myRenderTarget” every frame! And renderTargets are not small at all. In fact, render target changes accounted for more than 40% of my whole garbage alone! So we should avoid it.

Usually, somewhere in the initialize() function and the resize() function you create your renderTarget like this:

// Created once in Initialize()/resize(), not every frame.
myRenderTarget = new RenderTarget2D(_graphicsDevice,
(int)(width * GameSettings.SuperSample),
(int)(height * GameSettings.SuperSample));

Now, what you should do as well is the following:

Have a new field in your render class of the type RenderTargetBinding[] – like this

private readonly RenderTargetBinding[] _myRenderTargetBinding =
new RenderTargetBinding[1];

and assign it like this under your renderTarget creation:

_myRenderTargetBinding[0] = new RenderTargetBinding(_myRenderTarget);

when assigning the current renderTarget to the GPU use:
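_graphicsDevice.SetRenderTargets(_myRenderTargetBinding);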


The important thing here is to use SetRenderTargets instead of SetRenderTarget (note the plural!).

Obviously when using multiple render targets, the same thing applies, your binding array just contains more items.

The image below captures the “hot path” of memory allocations. Note the difference.



In case you use geometry instancing to draw a lot of instances of the same mesh in one call, you have to use SetVertexBuffers.

Usually like this:

graphicsDevice.SetVertexBuffers(_myMeshVertexBufferBinding, _myInstancesVertexBufferBinding);

This will create crazy amounts of garbage since, again, every frame we set up a new array and fill it with these bindings.

Instead, have a field called something like

private readonly VertexBufferBinding[] _vertexBuffers = new VertexBufferBinding[2];

and then fill it like this

_vertexBuffers[0] = _myMeshVertexBufferBinding;

_vertexBuffers[1] = _myInstancesVertexBufferBinding;

Obviously, filling _vertexBuffers[0] should only happen once, since the mesh of the model does not change. So only update the second entry every frame. Then call:
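graphicsDevice.SetVertexBuffers(_vertexBuffers);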






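Visual Studio Extensions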
Well, you are most likely using Visual Studio. And it’s really great just by itself (minus the crashes). Especially stuff like the deep Git integration and a nice variety of profilers.

But you can improve your experience a lot. Stuff I would heavily recommend:

  • ReSharper, simply an improvement for IntelliSense. Helps you tons with refactoring and gives advice on optimization. Hard to work without it once you get hooked.
  • HLSL Tools for Visual Studio. If you use Visual Studio to write your shaders this is a godsend. You can find it in Visual Studio itself under “Extensions and Plugins”.
  • Intel Graphics Performance Analyzers (GPA) – I have tried many profilers for the GPU side of things, including RenderDoc, AMD GPUPerf, the default VS ones and some more. All of these are good, but Intel GPA is the most competent and comprehensible for me.

Final words

I may or may not add some other stuff, but I hope the content that is there will help you regardless. If you have more tips or improvements for this entry, feel free to leave a comment here (or, if you don't like WordPress, you can hit me up on Twitter as well).

Obviously links to other beginner tips would be appreciated a lot!