Culling, normals and face winding

The internet is an amazing resource for programmers, especially when using a language or platform that has a massive community behind it. It’s almost guaranteed that someone has experienced your current problem and found a solution to it, a solution that’s shared on a forum post or mailing list somewhere. Unity3D is one such platform.

Unfortunately though, if you don’t use the correct search terms you can end up going in circles for a while, and such a thing happened to me last week when trying to extrude my OSM buildings into 3D. I kept telling myself that it was not important for gameplay (and it isn’t) but I was so curious to see what it looked like that I’ve spent just over a week’s worth of free time trying to implement it.

Inevitably trying something new (or even doing something that was done ages ago but is now being reimplemented in some new manifestation) results in all sorts of problems, puzzles, frustrations, mindless hacking and misinterpretations.

Mesh extrusion should be easy, right? Just duplicate the initial mesh, offset it, then create new triangles between the two layers to form the walls?


Fun with normals

It turns out that doing a ‘sandwich’ style join between two layers of the 2D mesh (one at a y offset to the other) was a bad idea because of the way shading (from lighting) is done in the graphics pipeline. If there is only one normal per vertex, the buildings look like they have rounded edges instead of flat-shaded (hard) edges. What’s required is to have three copies of a vertex in order to hold the three normals, one for each face that meets there. Consider a cube. There aren’t 8 vertices; there are really 24 vertices with 24 corresponding normals (3 per corner).
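To make the idea concrete, here’s a minimal sketch of per-face vertex duplication (in Python rather than the project’s C#, and with illustrative names): every face gets its own copies of its corner positions, all carrying that face’s normal, so shared corners don’t blend normals across faces.

```python
# Sketch: flat shading by duplicating vertices per face. Names and
# structure are illustrative, not the actual project code.

def face_normal(a, b, c):
    """Unnormalised normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def flat_shade(triangles):
    """triangles: list of (a, b, c) 3D points.
    Returns (vertices, normals): 3 duplicated vertices per triangle,
    each carrying that triangle's face normal."""
    vertices, normals = [], []
    for a, b, c in triangles:
        n = face_normal(a, b, c)
        vertices += [a, b, c]
        normals += [n, n, n]  # hard edge: no normal sharing between faces
    return vertices, normals
```

With shared vertices the rasteriser interpolates one averaged normal per corner and the edges look rounded; with duplicates, each face shades uniformly and the edges stay hard.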

I had to break the code down into its most basic form and try to extrude just a box into a cube so that I could figure out where my code was breaking.

Fun with normals part 2

Oh, so Unity uses a left-hand coordinate system? That right-hand rule for the cross product that I learnt at uni, and that’s all over the internet, doesn’t actually apply here? Grrr. If I’d been more diligent I’d have noticed that the Unity3D docs clearly state the use of the left-hand rule when using Vector3.Cross, but it took a while to ‘get it’. My Vector3 normal = Vector3.Cross(b-a, d-a).normalized; should have been Vector3 normal = Vector3.Cross(d-a, b-a).normalized; (I didn’t want to multiply by -1).
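A quick sanity check of that argument-order fix, sketched in Python with a made-up ground quad: the cross product maths is engine-agnostic; swapping the two arguments simply negates the result, and it’s the handedness convention that dictates which order gives you the outward-facing normal.

```python
# Illustrative sketch, not project code: a, b, d are three corners of a
# hypothetical ground quad (y up), b along +x and d along +z.

def cross(u, v):
    """Standard 3D cross product."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(p, q):
    return tuple(p[i] - q[i] for i in range(3))

a, b, d = (0, 0, 0), (1, 0, 0), (0, 0, 1)

wrong = cross(sub(b, a), sub(d, a))  # (0, -1, 0): points down
fixed = cross(sub(d, a), sub(b, a))  # (0,  1, 0): points up, as wanted
```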

With the normals facing the wrong way, the walls would look dark despite the intensity of the lighting.

Fun with normals and culling

Upon investigating the ‘why are the walls dark’ issue, I experimented with some Unity3D provided shaders, but oh no!

Why oh why are some of the walls of my buildings being culled out?! The normals are definitely outward facing!

I stumbled across this forum post (see, someone had a similar issue). Normals are only used for shading and colour; the triangle winding order determines which triangles are back facing.

*sigh* I felt like I’d had that very issue before when my collision meshes didn’t work; the triangles had to be drawn with CCW winding instead of CW (or was it vice versa) for the raycasts to ‘hit’. Anyway, duh! I felt stupid for wasting an evening.

Fun with normals and culling part 2

But holy crap! It still didn’t work! Some of the buildings had walls disappearing but others were fine.

Think, think…

Oh, the OSM data has some polygons defined in CCW order and others in CW order. I had to make them use the same ordering. More searching and I found this. Yay for pseudo code. It works. Ship it.
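The usual trick for this (and, as far as I can tell, what that pseudo code amounts to) is the shoelace formula: the sign of the polygon’s signed area tells you its winding, so you can reverse any polygon that doesn’t match. A Python sketch:

```python
def signed_area(poly):
    """Shoelace formula over a list of (x, y) points.
    Positive for CCW winding in a standard x-right / y-up plane."""
    area = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return area / 2.0

def force_ccw(poly):
    """Return the polygon with counter-clockwise winding."""
    return poly if signed_area(poly) > 0 else list(reversed(poly))
```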

And now for a screenshot.

Oh look, I’ve got placeholder art from Shenandoah Studio’s Battle of the Bulge; I hope they don’t mind. Their games look amazing.


Sphere Casts

There are so many instances where I write a pleading forum post asking for suggestions/solutions to a problem I have, only to discover the answer soon after I post it. I’m hedging my bets in a way, hoping that someone else has a solution in case I don’t eventually find one myself.

This almost happened again in regards to collision detection in Unity3D. I wanted to do something that seemed really basic: find out when two collision meshes collided (overlapped). You’d think this would be simple, right? Nah, maybe… kind of…

After the initial hunting around the documentation I tried implementing the OnCollisionEnter/Exit approach. Mmmm, didn’t work. So I hunted around the forums and found this.

Oh, so I need rigid bodies to detect a collision? Seriously? That’s a bit overkill when I’m not using any physics.

Oh, wait, it turns out I should have been using raycasts.

That I can do because I’d been using them for touch selection (mouse picking).

…but oh, wait… What’s this sphere cast?


SphereCasts are just fat raycasts, but they feel a little heavy-handed for detecting collisions. Every time a unit moves (such as a tank or infantry) I’m having to sphere cast its bounds against the 2D world to see if it hits anything. I’d rather the engine had been smart enough to tell me there was a collision rather than me having to ask. I feel like I have to hand-hold it.

In my case I’m trying to determine if a unit is inside one of the OpenStreetMap-defined areas such as a building. If I were building this system from scratch I’d use the fantastic sort and sweep algorithm that is described in Real-Time Collision Detection by Christer Ericson, but I have to trust that Unity is using some cleverly optimised collision detection algorithm that quickly determines collisions between the sphere cast and the collision meshes.
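For reference, the broad phase of sort and sweep is simple enough to sketch. Here’s a toy 1D version in Python (Ericson’s treatment sweeps the axes of the objects’ bounding boxes; this is just the gist, and has nothing to do with Unity’s internals):

```python
def sort_and_sweep(boxes):
    """Toy 1D sort-and-sweep broad phase.
    boxes: dict of name -> (min_x, max_x) interval.
    Returns the set of overlapping pairs."""
    # Sweep the intervals in order of their starting coordinate.
    items = sorted(boxes.items(), key=lambda kv: kv[1][0])
    active, pairs = [], set()
    for name, (lo, hi) in items:
        # Drop intervals that ended before this one starts.
        active = [(n, h) for (n, h) in active if h >= lo]
        # Everything still active overlaps the current interval.
        for other, _ in active:
            pairs.add(frozenset((name, other)))
        active.append((name, hi))
    return pairs
```

Sorting makes the sweep cheap: each interval only has to be tested against the ones still “open” when it starts, not against every other object.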

That’s why I’m using an engine. It took quite a while to make that algorithm work for Mobile Assault.

Path Complete

Whoops, that was a tangent. After all those experiments with creating user defined paths (and posting a number of articles about it), the actual solution is conceptually quite simple. It was not until I discussed the pathfinding problem with a work colleague that I realised what Autumn Dynasty’s path creation was actually doing, especially when it came to avoiding world obstacles and impassable terrain.

The solution

When the user draws a path, the points making up the path are initially exactly as they are drawn. There is no resampling or fancy pathfinding going on; the unit just follows the path that the user traces with their finger. This happens until the path hits impassable terrain.

As soon as the path hits impassable terrain, the traced path turns into a line segment that goes from the last point that was in passable terrain to the current location of the touch (cursor). As the user drags their finger around, the line segment’s final endpoint follows the finger.

I’ve kept it simple and made the line-segment-mode persist until touch release at which point the two endpoints of the line segment are used by the pathfinding library to create a new path around the obstacles.
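Put together, the drawing logic is only a handful of lines. A Python sketch with hypothetical names (`passable` stands in for whatever terrain query the game actually uses):

```python
def build_drawn_path(points, passable):
    """Follow user-traced points until one lands in impassable terrain;
    from then on, collapse to [last passable point, current touch point]
    ('line-segment mode'), which persists until touch release."""
    path = []
    segment_mode = False
    for p in points:
        if segment_mode:
            path[1] = p              # endpoint keeps following the finger
        elif passable(p):
            path.append(p)           # trace the finger exactly
        else:
            # First impassable point: switch to a single straight segment.
            anchor = path[-1] if path else p
            path = [anchor, p]
            segment_mode = True
    return path, segment_mode
```

On touch release, the two endpoints of the segment would then be handed to the pathfinding library to route around the obstacle.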

z-fighting fixed

I even managed to fix the z-fighting with proper use of render queues.

Potential problem

To say this method is simple is a half-truth. In practice there is one big problem in that the resolution of the points read in by the system as the user traces a path may not be sufficient. If the touch gesture happens very quickly there is a chance that not enough points are read in and an obstacle may be bypassed. The solution is just to subdivide the line segments but I’ll deal with this if it becomes a problem later. For now it seems okay.
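If it does become a problem, the subdivision fix is cheap. A sketch of the idea in Python (illustrative, not project code): cap the length of every segment so a fast flick can’t step over a thin obstacle.

```python
import math

def subdivide(points, max_len):
    """Insert intermediate points so no segment of the traced path is
    longer than max_len, guarding against sparse samples from fast
    flick gestures."""
    out = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dist = math.hypot(x2 - x1, y2 - y1)
        steps = max(1, math.ceil(dist / max_len))
        for i in range(steps):
            t = i / steps
            out.append((x1 + (x2 - x1) * t, y1 + (y2 - y1) * t))
    out.append(points[-1])  # keep the final traced point
    return out
```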

In practice

Given the high density of obstacles (e.g. buildings being obstacles to tanks) it’s actually quite difficult for the user to trace complicated, exact routes/paths. That being said, the ability to do curvy lines is new for touch screens. With a mouse controlled RTS, shift-clicking out waypoints is used for creating exact paths but in most instances players just set single point destinations for a selection of units.

If the player is going to quickly flick out a straight path, the line is going to be in line-segment-mode almost immediately and you end up with a single-click-destination mouse based RTS. The player is therefore not missing out on much.

All this has reminded me that if I’m going to make a PC version, I’m going to have to actually implement shift+click waypoint path creation.

Paths are hard

I almost never estimate correctly; it’s freak’n difficult. That being said, for a project like mine it doesn’t really matter because all progress is done linearly by just me. The only stakeholder is me and only I am suffering from blown out time frames.

I consider Failed State (as I’m currently making it) a first draft app. I’ll hurriedly make it to some kind of limited scope, ship it, then look back on it and see how it can be edited, e.g. better code integrity, better modularisation etc. For now, I don’t care too much because:

  1. I’m just learning Unity so naturally I’m unfamiliar with best practice still, and
  2. I just need to ship something. I’m not after creating some kind of architectural masterpiece. This is not medical software so bugs aren’t going to be detrimental.

This past week my expectations of progress have been dashed by path drawing. It turns out it’s not that simple. Well, conceptually it’s simple, but once you try to code it and see the result being rendered using those naive assumptions, it quickly becomes apparent that more thought is required.

Work flow (simple version)

  • The user clicks on (touches) a unit and drags a path out from that unit.
  • The pathfinding library calculates a new path in case some impassable objects have to be avoided. If there are no obstacles, the user drawn path and the path-lib path will be the same.
  • A rendered line follows that new path.
  • As the path is being ‘drawn’ by the user/player, the unit begins to follow that path even if the path has not been completed yet.
  • As the path is followed by the unit, the rendered line is cropped as it is ‘consumed’ by the unit that is following it.

Unfortunately it’s not that simple.

Work flow (more complex version)

  • The user clicks/touches a unit. This only selects the unit, i.e. making it “ready to be given a path to follow”.
  • A drag start event creates the first major waypoint; we’ll call it the 0th major waypoint. Major waypoints are the points that make up the user drawn line and ordinary waypoints are what the pathfinding library generates.
  • With the number of major waypoints greater than zero and the targetMajorWaypoint still at zero, this special case means that the pathfinding lib will calculate a path between the unit’s current position and that 0th major waypoint.
  • This will provide a list of points that make up the actual path that the unit will follow. A line can be rendered along those points.
  • As more path calculations occur between the 0th and 1st major waypoint, then the 1st and 2nd major waypoint etc, the resulting paths (or legs between waypoints) can be added together to form a complete rendered path line to represent the path the unit is following. As the unit tracks through each waypoint of the mega-path, the rendered line is cropped so that it only renders from the current waypoint up to the final destination waypoint.
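The leg bookkeeping in that last step can be sketched like this (Python, hypothetical function names): each leg from the pathfinding lib starts where the previous one ended, so the duplicated join point is dropped when concatenating, and cropping is just a slice from the current waypoint onwards.

```python
def assemble_path(legs):
    """Concatenate per-leg point lists into one path. After the first
    leg, drop each leg's first point (it duplicates the previous leg's
    endpoint)."""
    path = []
    for leg in legs:
        path.extend(leg if not path else leg[1:])
    return path

def crop_path(path, current_waypoint_index):
    """Render only from the waypoint currently being approached onward."""
    return path[current_waypoint_index:]
```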

Touch-dragged paths

The above steps work great if the major waypoints are well defined all at once. Consider a modern RTS game where you can shift-click out a bunch of waypoints for the unit to follow; when you release the shift-key and mouse press, the path that navigates through all the major waypoints is created in one go.

This isn’t applicable for touch screens. Games like Flight Control have created the expectation that the unit will move as soon as you start dragging. The problem with this is that the drag motion creates a huge number of points which need to be filtered. It would be a bit of a waste of processing power to feed the pathfinding lib a multitude of paths to work out.

Here’s what is going to happen:

  • The user starts dragging and the pathfinding lib calculates between the unit’s pos and the drag point.
  • The user continues to drag and x number of major waypoints get detected. This is where things get a bit crazy…
  • …The pathfinding lib does its processing in a background thread and a callback notifies when it’s done processing each path. There’s a chance that many major waypoints have been added before the pathfinding lib has completed the path between the first 2 points it was given! This is an opportunity to do some resampling.
  • Take all the major waypoints from [targetMajorWaypoint, last major waypoint] and use the Douglas-Peucker Line Approximation Algorithm to resample those points into something that is a lot simpler (fewer points).
  • This creates a complication for our rendered line. It now comprises both the calculated paths that the library has processed and the remaining resampled points that make up what the user/player has drawn on the screen.
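Douglas-Peucker itself is short enough to include. This is a generic Python rendition (the C# library I grabbed will differ in detail): keep the interior point farthest from the start-end chord if it deviates more than epsilon, and recurse on both halves.

```python
import math

def douglas_peucker(points, epsilon):
    """Douglas-Peucker line simplification over a list of (x, y) points."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0  # guard against a degenerate chord
    # Find the interior point farthest (perpendicular) from the chord.
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= epsilon:
        # Everything is within tolerance: collapse to the two endpoints.
        return [points[0], points[-1]]
    # Keep the farthest point and simplify each half around it.
    left = douglas_peucker(points[:best_i + 1], epsilon)
    right = douglas_peucker(points[best_i:], epsilon)
    return left[:-1] + right  # left's last point == right's first point
```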

Pathfinding errors

Ignoring the obvious z-fighting issues, there is one glaring problem, and it stems partly from trying to interpret user intent. I’ve drawn a path that goes through the middle of the buildings, but because of the number of major waypoints being recorded and the resampler not taking enough points away, some of those major waypoints have ended up in the streets between buildings. The result is a u-curve path around buildings as the pathfinding lib calculates a path from one side of the building to the other. A much simpler path is required, certainly not one with u-curves in it.

Consuming the line

The last piece of the puzzle is the rendered line being consumed. There is one giant issue that occurs when the distance between two waypoints is quite large. As the unit starts moving toward the next waypoint, the line segment/leg between the previously reached waypoint and the target/destination waypoint is “consumed”. A suddenly disappearing line segment is going to be really obvious to the user. I see a couple of solutions:

  1. Resample the rendered path so that there are even more points; essentially subdividing each line segment of the path. This new line should probably exist as a different list of points but because this subdivided path perfectly maps on top of the path the unit is following, the unit will transition through each of the multitude of points of the rendered line. A distance check between the unit and each of those points making up the line will dictate when the segments are lopped off.
  2. Alternatively, the line segment/leg that the unit is currently following could become semi-transparent. In fact, the amount of transparency could increase the closer the unit gets to the next waypoint. This could look quite good.
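Option 2 is basically a one-liner per frame. A sketch in Python (illustrative names): fade the leg the unit is currently walking based on how much of it remains.

```python
import math

def leg_alpha(unit_pos, prev_wp, next_wp):
    """Transparency of the leg the unit is currently walking: 1.0 when
    it has just left prev_wp, falling to 0.0 as it reaches next_wp."""
    total = math.hypot(next_wp[0] - prev_wp[0], next_wp[1] - prev_wp[1])
    if total == 0:
        return 0.0  # degenerate leg: nothing left to render
    remaining = math.hypot(next_wp[0] - unit_pos[0], next_wp[1] - unit_pos[1])
    return max(0.0, min(1.0, remaining / total))
```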

Weekly goals and realities

I got sick this week. I blame it on the salsa dancing, where I go from a stupidly hot and humid dance floor out into the Winter night; it cools you down, but I’m pretty sure the rapid temperature changes mess with the immune system to the point where the bugs get you.

And that’s what happened to me this week. Nothing crippling but enough to lose the appetite, have the usually pleasant coffee smell make you feel queasy and just general lethargy. Still, I went to the day job and managed to drag myself to the gym once my gut didn’t feel like crunches were going to make me throw up.

How’s that for an intro?

This week it was lots of preparatory work for being able to move my placeholder units around the screen. The goal was to actually get the ‘drag-to-path’ system in place using the style used in Autumn Dynasty, but good old Unity-learning-curve and life’s distractions only made this a partial success. I finally made my own Unity prefabs (they’re so fundamental it’s a wonder I haven’t used them until now) to represent each unit, and it’s gotten to the point where a press/release will select/deselect them. I even have a picture…

Super basic unit selection.

The white circle is a touch-pressed unit and all the placeholder white lines are the paths that the units are currently following. Really basic and with lots of fun z-fighting problems.

The next step is to drag a path from the unit to a desired location so that the unit traverses it. Currently I’ve just got hardcoded destinations.

Sounds simple, but here’s what I observed from Autumn Dynasty.

  • When dragging the path line from the unit, the path line continually resamples itself, whether that be via simplification, smoothing or both.
  • When the final path line is created, it might totally redraw itself depending on impassable terrain. For example, a straight line drawn across an impassable mountain range will change to a line that moves around the mountains.
  • Multiunit selection and moving will draw multiple lines; the lines will converge quite early on.
  • A path that has an end point in impassable terrain will crop the end of the path.
  • An eventual path that the unit follows will be consumed as the unit follows it.

It turns out dragged paths are quite involved, but Autumn Dynasty has a winning formula that’s worthy of copying (shamelessly). Thankfully there are some line simplification libraries out there. I’ve already gotten my hands on a C# implementation of the Douglas-Peucker Line Approximation Algorithm.

Version Control in Unity – RTFM

I could probably create an entire RTFM series because of my terrible track record of ‘getting it’ the first time around. I could have sworn I’d read this Unity article about version control, especially considering I’d changed the settings so that the .meta files showed up in my project. Strangely though, I somehow never actually committed those .meta files to version control despite the article explicitly stating to do so. Perhaps I’d read some misinformed post somewhere else that said not to.

The realisation came when I decided to clone my repository just to make sure I’d been committing all the files I was supposed to have. I’ve had some misadventures with getting Unity command line builds to work and had given up on my continuous integration aspirations; the consequence being that I never got to the part where I could make the project build from the files submitted to the repo…

…It turns out the repo build was pretty broken.

The primary culprit was all the missing meta files, but there were a few other little tweaks that were required. All fixed now. Unfortunately no one wants to answer my post as to whether or not command line builds are possible using the free version of Unity. At this rate I will have to apply for a 30 day Pro trial to see if I can get it working and thus answer my own forum question.

What makes a gun battle?

As I evaluate the pros and cons of Aron Granberg’s A* Pathfinding library over those of the RAIN{Indie} package, my mind continually gets distracted with the idea of what is going to constitute a gun battle in Failed State. Is it going to be a straight numbers game where the troops with the greatest numbers will prevail (like Galcon)? Will there be a rock-paper-scissors balance like Age of Empires or Autumn Dynasty? Will a squad be broken up into individual entities (e.g. soldiers of a squad) that have a smaller granularity of info such as ammo, and whether they’re aiming, loading or suppressed, much like Close Combat and Wargame: European Escalation? How will tactics play a part in making the game more interesting? And once all these decisions are made, how do I balance the game mechanics so that no one strategy is infinitely better than the others?

I’m getting too far ahead of myself.

I play paintball quite regularly and simply put, paintball is a game of suppression and assault. If you take more ground you get better angles on your opponent and the best way to make ground is to keep their head down with suppressive fire.

Suppressive fire isn’t necessarily directed at a specific foe like a sniper taking a shot at an individual. Instead it’s firing on an area to force those within that area to take cover. There is no ‘clear shot’ so to speak.

That’s something that I want to be able to capture in Failed State. Like in Close Combat I want artillery units to potentially suppress infantry in and around the firing zone. If a machine gun has a 200m shot at a building with troops in it, I want that machine gun to be able to suppress the troops inside. This offers a tactical advantage because other infantry units can move up on the opponent with reduced chance of being seen and/or fired upon.
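One way such an area-suppression mechanic could be modelled, as a toy sketch in Python (every name and constant here is made up for illustration; nothing is decided for Failed State yet):

```python
def update_suppression(level, incoming_fire, dt,
                       gain=0.5, decay=0.2, cap=1.0):
    """Hypothetical model: incoming fire on a unit's area raises its
    suppression level; the level decays over time once the fire stops.
    All constants are illustrative placeholders."""
    level += incoming_fire * gain * dt   # being shot at keeps heads down
    level -= decay * dt                  # nerves recover when it's quiet
    return max(0.0, min(cap, level))

def is_suppressed(level, threshold=0.6):
    """Above the threshold the unit keeps its head down (e.g. reduced
    accuracy and spotting), letting other units manoeuvre on it."""
    return level >= threshold
```

The appeal of something this simple is that artillery, machine guns and rifles could all feed the same `incoming_fire` value, just at different rates.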

Those are the types of gun battles that I want to be able to bring to handheld devices.