Sphere Casts

There are so many instances where I write a pleading forum post asking for suggestions/solutions to a problem I have only to discover the answer soon after I post it. I’m hedging bets in a way, hoping that someone else has a solution in case I don’t eventually find one myself.

This almost happened again in regards to collision detection in Unity3D. I wanted to do something that seemed really basic: find out when two collision meshes collided (overlapped). You’d think this would be simple, right? Nah, maybe… kind of…

After an initial hunt through the documentation I tried implementing the OnCollisionEnter/Exit approach. Mmmm, didn’t work. So I hunted around the forums and found this.

Oh, so I need rigid bodies to detect a collision? Seriously? That’s a bit overkill when I’m not using any physics.

Oh, wait, it turns out I should have been using raycasts.

That I can do because I’d been using them for touch selection (mouse picking).

…but oh, wait… What’s this sphere cast?


SphereCasts are just fat raycasts, but they feel a little heavy-handed for detecting collisions. Every time a unit moves (such as a tank or infantry) I’m having to sphere cast its bounds against the 2D world to see if it hits anything. I’d rather the engine have been smart enough to tell me there was a collision rather than me having to ask. I feel like I have to hand-hold it.

In my case I’m trying to determine if a unit is inside one of the OpenStreetMap-defined areas such as a building. If I had to build this system from scratch I’d use the fantastic sort and sweep algorithm described in Real-Time Collision Detection by Christer Ericson, but I have to trust that Unity is using some cleverly optimised collision detection algorithm that quickly determines collisions between the sphere cast and the collision meshes.
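For the curious, the core of sort and sweep along one axis is pretty compact. Here’s a rough Python sketch of the idea from Ericson’s book (this is my own illustration, nothing to do with Unity’s internals, which are a black box to me):

```python
def sort_and_sweep(boxes):
    """Find overlapping pairs of axis-aligned intervals (1D sort and sweep).

    boxes: list of (id, min_x, max_x) tuples.
    Returns the set of id pairs whose x-intervals overlap.
    """
    # Sort endpoints as (x, is_end, id). At equal x, starts (0) sort before
    # ends (1), so touching intervals count as overlapping.
    events = []
    for bid, lo, hi in boxes:
        events.append((lo, 0, bid))  # interval start
        events.append((hi, 1, bid))  # interval end
    events.sort()

    active = set()    # intervals the sweep line is currently inside
    overlaps = set()
    for _, is_end, bid in events:
        if is_end:
            active.discard(bid)
        else:
            for other in active:
                overlaps.add(frozenset((bid, other)))
            active.add(bid)
    return overlaps
```

Sorting dominates at O(n log n), and in a mostly-static world you can keep the event list nearly sorted between frames, which is what makes the technique so cheap in practice.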

That’s why I’m using an engine. It took quite a while to make that algorithm work for Mobile Assault.


Path Complete

Whoops, that was a tangent. After all those experiments with creating user-defined paths (and posting a number of articles about it), the actual solution is conceptually quite simple. It was not until I discussed the pathfinding problem with a work colleague that I realised what Autumn Dynasty’s path creation was actually doing, especially when it came to avoiding world obstacles and impassable terrain.

The solution

When the user draws a path, the points making up the path are initially exactly as they are drawn. There is no resampling or fancy pathfinding going on; the unit just follows the path that the user traces with their finger. This happens until the path hits impassable terrain.

As soon as the path hits impassable terrain, the traced path turns into a line segment that goes from the last point that was in passable terrain to the current location of the touch (cursor). As the user drags their finger around, the line segment’s final endpoint follows the finger.

I’ve kept it simple and made the line-segment-mode persist until touch release, at which point the two endpoints of the line segment are used by the pathfinding library to create a new path around the obstacles.
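In pseudocode-ish Python, that mode switch looks something like this (a sketch of my own; `is_passable` stands in for whatever terrain query the engine provides, and none of these names are real API):

```python
def update_drawn_path(path, touch_pos, is_passable, segment_mode):
    """Feed the latest touch position into the user-drawn path.

    path:         list of (x, y) points traced so far
    touch_pos:    current (x, y) of the finger/cursor
    is_passable:  predicate for "is this point on passable terrain?"
    segment_mode: True once the trace has hit impassable terrain

    Returns the updated (path, segment_mode).
    """
    if not segment_mode:
        if is_passable(touch_pos):
            path.append(touch_pos)  # normal tracing: keep every point
            return path, False
        # Trace just hit impassable terrain: collapse to a segment whose
        # anchor is the last point that was still in passable terrain.
        anchor = path[-1] if path else touch_pos
        return [anchor, touch_pos], True
    # Already in segment mode: the far endpoint just follows the finger
    # until touch release, when the pathfinding lib takes over.
    return [path[0], touch_pos], True
```

On touch release, the two endpoints of the segment are what get handed to the pathfinding library.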

z-fighting fixed

I even managed to fix the z-fighting with proper use of render queues.

Potential problem

To say this method is simple is a half-truth. In practice there is one big problem: the resolution of the points read in by the system as the user traces a path may not be sufficient. If the touch gesture happens very quickly there is a chance that not enough points are read in and an obstacle may be skipped over entirely. The solution is just to subdivide the line segments, but I’ll deal with this if it becomes a problem later. For now it seems okay.
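If it ever does become a problem, the subdivision fix is only a few lines. A sketch (the `max_spacing` threshold is an assumption that would need tuning against whatever resolution the obstacle test needs):

```python
import math

def subdivide(points, max_spacing):
    """Insert extra points so no two consecutive points are further apart
    than max_spacing. Guards against a fast flick producing a trace so
    sparse that it steps straight over an obstacle."""
    if len(points) < 2:
        return list(points)
    result = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, math.ceil(dist / max_spacing))
        for i in range(1, steps + 1):
            t = i / steps  # interpolate along the segment, endpoint included
            result.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return result
```

Run this over the trace before testing each point against the terrain and a fast flick can no longer jump a whole building.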

In practice

Given the high density of obstacles (e.g. buildings being obstacles to tanks) it’s actually quite difficult for the user to trace complicated, exact routes/paths. That being said, the ability to draw curvy lines is new for touch screens. With a mouse-controlled RTS, shift-clicking out waypoints is used for creating exact paths, but in most instances players just set single-point destinations for a selection of units.

If the player is going to quickly flick out a straight path, the line is going to be in line-segment-mode almost immediately and you end up with the single-click destination of a mouse-based RTS. The player is therefore not missing out on much.

All this has reminded me that if I’m going to make a PC version, I’m going to have to actually implement shift+click waypoint path creation.

Paths are hard

I almost never estimate correctly; it’s freak’n difficult. That being said, for a project like mine it doesn’t really matter because all progress is done linearly by just me. The only stakeholder is me and only I am suffering from blown out time frames.

I consider Failed State (as I’m currently making it) a first-draft app. I’ll hurriedly make it to some kind of limited scope, ship it, then look back on it and see how it can be edited, e.g. better code integrity, better modularisation, etc. For now, I don’t care too much because:

  1. I’m just learning Unity so naturally I’m still unfamiliar with best practice, and
  2. I just need to ship something. I’m not after creating some kind of architectural masterpiece. This is not medical software so bugs aren’t going to be detrimental.

This past week my expectations of progress have been dashed by path drawing. It turns out it’s not that simple. Well, conceptually it’s simple, but when trying to code it and seeing the result rendered using those naive assumptions, it quickly becomes apparent that more thought is required.

Work flow (simple version)

  • The user clicks on (touches) a unit and drags a path out from that unit.
  • The pathfinding library calculates a new path in case some impassable objects have to be avoided. If there are no obstacles, the user-drawn path and the path-lib path will be the same.
  • A rendered line follows that new path.
  • As the path is being ‘drawn’ by the user/player, the unit begins to follow that path even if the path has not been completed yet.
  • As the path is followed by the unit, the rendered line is cropped as it is ‘consumed’ by the unit that is following it.

Unfortunately it’s not that simple.

Work flow (more complex version)

  • The user clicks/touches a unit. This only selects the unit, i.e. making it “ready to be given a path to follow”.
  • A drag start event creates the first major waypoint; we’ll call it the 0th major waypoint. Major waypoints are the points that make up the user drawn line and ordinary waypoints are what the pathfinding library generates.
  • With the number of major waypoints greater than zero and the targetMajorWaypoint still at zero, this special case means that the pathfinding lib will calculate a path between the unit’s current position and that 0th major waypoint.
  • This will provide a list of points that make up the actual path that the unit will follow. A line can be rendered along those points.
  • As more path calculations occur between the 0th and 1st major waypoint, then the 1st and 2nd major waypoint etc, the resulting paths (or legs between waypoints) can be added together to form a complete rendered path line to represent the path the unit is following. As the unit tracks through each waypoint of the mega-path, the rendered line is cropped so that it only renders from the current waypoint up to the final destination waypoint.
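The leg-stitching part of that last step can be sketched like so (my own Python sketch; each ‘leg’ being the point list the pathfinding lib returns for one pair of major waypoints):

```python
def stitch_legs(legs):
    """Join per-leg point lists from the pathfinding lib into one
    'mega-path'. Each leg ends on the waypoint the next leg starts on,
    so drop that duplicated joint while concatenating."""
    path = []
    for leg in legs:
        if path and leg and path[-1] == leg[0]:
            leg = leg[1:]  # skip the shared joint point
        path.extend(leg)
    return path
```

The rendered line is then just this mega-path, drawn from the unit’s current waypoint onward.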

Touch-dragged paths

The above steps work great if the major waypoints are well defined all at once. Consider a modern RTS game where you can shift-click out a bunch of waypoints for the unit to follow; when you release the shift-key and mouse press, the path that navigates through all the major waypoints is created in one go.

This isn’t applicable to touch screens. Games like Flight Control have created the expectation that the unit will move as soon as you start dragging. The problem with this is that the drag motion creates a huge number of points which need to be filtered. It would be a bit of a waste of processing power to feed the pathfinding lib a multitude of paths to work out.

Here’s what is going to happen:

  • The user starts dragging and the pathfinding lib calculates between the unit’s pos and the drag point.
  • The user continues to drag and x number of major waypoints get detected. This is where things get a bit crazy…
  • …The pathfinding lib does its processing in a background thread and a callback notifies when it’s done processing each path. There’s a chance that many major waypoints have been added before the pathfinding lib has completed the path between the first 2 points it was given! This is an opportunity to do some resampling.
  • Take all the major waypoints from [targetMajorWaypoint, last major waypoint] and use the Douglas-Peucker line approximation algorithm to resample those points into something a lot simpler (fewer points).
  • This creates a complication for our rendered line. It now comprises both the calculated paths that the library has processed and the remaining resampled points that make up what the user/player has drawn on the screen.
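For reference, Douglas-Peucker itself is short enough to sketch in full. A Python version (epsilon being the maximum deviation a point can have from the simplified line before it must be kept):

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline by dropping points that deviate from the
    straight line between the endpoints by no more than epsilon."""
    if len(points) < 3:
        return list(points)

    (x0, y0), (x1, y1) = points[0], points[-1]
    chord = math.hypot(x1 - x0, y1 - y0)

    # Find the interior point furthest from the endpoint-to-endpoint chord.
    max_dist, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        if chord == 0:
            dist = math.hypot(px - x0, py - y0)  # degenerate chord
        else:
            # Perpendicular distance from the point to the chord.
            dist = abs((x1 - x0) * (y0 - py) - (x0 - px) * (y1 - y0)) / chord
        if dist > max_dist:
            max_dist, index = dist, i

    if max_dist <= epsilon:
        # Everything in between is within tolerance: keep only the ends.
        return [points[0], points[-1]]

    # Keep the furthest point and simplify the two halves recursively.
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right  # left ends on the point right starts on
```

Tuning epsilon is the art: too small and the pathfinding lib still drowns in waypoints, too large and the resampled line stops resembling what the player drew.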

Pathfinding errors

Ignoring the obvious z-fighting issues, there is one glaring problem, and it stems partly from trying to interpret user intent. I’ve drawn a path that goes through the middle of the buildings, but because of the number of major waypoints being recorded and the resampler not taking enough points away, some of those major waypoints have ended up in the streets between buildings. The result is a u-curve path around buildings as the pathfinding lib calculates a path from one side of a building to the other. A much simpler path is required, certainly not one with u-curves in it.

Consuming the line

The last piece of the puzzle is the rendered line being consumed. There is one giant issue that occurs when the distance between two waypoints is quite large. As the unit starts moving toward the next waypoint, the line segment/leg between the previously reached waypoint and the target/destination waypoint is “consumed”. A suddenly disappearing line segment is going to be really obvious to the user. I see a couple of solutions:

  1. Resample the rendered path so that there are even more points; essentially subdividing each line segment of the path. This new line should probably exist as a different list of points but because this subdivided path perfectly maps on top of the path the unit is following, the unit will transition through each of the multitude of points of the rendered line. A distance check between the unit and each of those points making up the line will dictate when the segments are lopped off.
  2. Alternatively, the line segment/leg that the unit is currently following could become semi-transparent. In fact, the amount of transparency could increase the closer the unit gets to the next waypoint. This could look quite good.
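Option 1 boils down to cropping the polyline by the distance the unit has travelled along it. A rough Python sketch of my own (not engine code):

```python
import math

def crop_by_distance(points, travelled):
    """Return the portion of the polyline beyond `travelled` units of arc
    length, interpolating a new start point mid-segment so the rendered
    line shrinks smoothly instead of whole segments vanishing at once."""
    remaining = travelled
    for i in range(len(points) - 1):
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        seg = math.hypot(x1 - x0, y1 - y0)
        if remaining < seg:
            # The cut lands partway along this segment: synthesise a new
            # start point there and keep the untouched tail of the path.
            t = remaining / seg
            start = (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
            return [start] + points[i + 1:]
        remaining -= seg
    return []  # path fully consumed
```

Each frame you feed it the unit’s accumulated travel distance and re-render the result, so the line is eaten continuously rather than a segment at a time.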

Version Control in Unity – RTFM

I could probably create an entire RTFM series because of my terrible track record of ‘getting it’ the first time around. I could have sworn I’d read this Unity article about version control, especially considering I’d changed the settings so that the .meta files showed up in my project. Strangely though, I somehow never got around to actually committing those .meta files to version control despite the article explicitly stating to do so. Perhaps I’d read some misinformed post somewhere else that said not to.

The realisation came when I decided to clone my repository just to make sure I’d been committing all the files I was supposed to have. I’ve had some misadventures with getting Unity command line builds to work and had given up on my continuous integration aspirations; the consequence being that I never got to the part where I could make the project build from the files submitted to the repo…

…It turns out the repo build was pretty broken.

The primary culprit was all the missing .meta files, but there were a few other little tweaks required. All fixed now. Unfortunately no one wants to answer my post as to whether or not command line builds are possible using the free version of Unity. At this rate I will have to apply for a 30-day Pro trial to see if I can get it working and thus answer my own forum question.

Like taking a jellyfish for a walk

My highschool orchestra conductor had a saying:

“It’s like taking a jellyfish for a walk.”

He was referring to our playing (more specifically our string section’s playing) where the pace would gradually slow irreparably. So much for animato.

That’s what my Failed State game feels like at the moment: a freak’n jellyfish that just won’t slop over fast enough because of slimy, slippery impediments that stuff up the flow.

Here’s a list of annoyances:

  • Fun with z-fighting billboards. – I tried to get some billboarded text in front of a billboarded background in Unity. It seems the 3DText GameObject and my custom Mesh just don’t want to get along. I’m probably doing something stupid.
  • Freak’n Unity keeps crashing when I try to use breakpoints! Seriously! And the bug reporter seems to keep hanging! Argh! It turns out that I’m running Unity 4.2.1 and they’re up to 4.3.4. Let’s hope that updating makes a difference (it doesn’t seem like that long ago that I updated, so maybe they released a crap build).
  • MonoDevelop gets into this weird state where the CPU maxes out and everything grinds to a halt. One forum post suggested turning off the version control system but that doesn’t do anything. *sigh*
  • Started trying RAIN{Indie} as an alternative pathfinding library. I’ve had to spend a few evenings figuring that out. The pathfinding seems promising but I need to get it to calculate the navmesh programmatically.
  • And work drains me to the point of computer apathy. No amount of exercise is fixing that.

So here I am, bitching about my misadventures. Hopefully I can look back on this post and laugh at it once I figure this mess out.

Now it’s off to applying a bunch of OSX updates and installing a new version of Unity.

Random Ramblings Regarding Recent Wangling

Wangle (verb): to bring about, accomplish, or obtain by scheming or underhand methods.

…but to say that’s what I have been doing all week would be rather hyperbolic, unless one were to suggest that I’m trying to cheat time but time will always win. In fact, time probably just toys with us.

I set myself the ambitious goal of trying to get a Continuous Integration build server set up using TeamCity so I can auto-create builds to give to friends and to make sure that I’m not breaking things along the way.

I suppose it was crazy of me to think this would be straightforward.

I set up TeamCity once before for an iOS-only build of my Breezy Bubbles game, more to prove that I could than out of being really thorough about the game’s integrity (it was a quick ‘throw away’ title, so to speak). I don’t remember the process being that painful because it only required two steps.

  • Hook it up to my git repo.
  • Create a build step using the Xcode build step template which hooked into my project settings.

I was under the optimistic impression that the Unity Runner plugin for TeamCity would afford the same simplicity but alas, that wasn’t the case. I got so far as getting TeamCity pulling my project from my git repo but the build runner was going nowhere fast.

I got the impression that I could sink a week into something like this. Yes, there is documentation on how to use Unity’s command line arguments to create a build, but I couldn’t shake the feeling it was going to be easier said than done. So I did what any cautious developer does and bailed.

I bailed and went back to coding my unit circles (the little circles that are going to represent infantry, tanks, vehicles etc.). I also made a Unity Inspector element whereby I can edit the unit circle’s size and colour during runtime.

Combined with the billboard-like 3D labels which I ripped off from that Auckland International Airport video, I can represent each unit on the map.

Unfortunately even that is causing a lot of drama. The unit labels and associated text are billboarded so that they always face the camera. That bit’s fine (I found some code for that quite a while ago) but I’m having ‘fun times’ trying to get the text correctly positioned and in front of the label background.


I feel like I’m wasting time making it look pretty but the reality is that what I’m trying to create could be considered the bare necessity to represent a unit on the map. Once I get that done I’ll post a victory screenshot.

Inspirational Images

Pinterest is your friend when it comes to discovering great UI ideas. Google images is great too but Pinterest provides the ability to save (pin) all the interesting images rather than having to actually download them and store them somewhere yourself. This week I’ve spent time looking up map UI, tactical map, airstrike and hud and collating a modest collection of ideas for the visual aesthetic of Failed State.

Just to prove that good ideas can come from all sorts of diverse places, I happened to see a video about Auckland International Airport showing off their expansion plans up to the year 2044. I found it thanks to transportblog.co.nz which is a great source of all things NZ transport infrastructure related.

What was interesting about it (beyond its content and great graphics) was some of the coloured overlays that were being used to define locations and areas on the map. Also of interest was the camera angle, but that’s an altogether different topic.

The overlays in question looked something like this…

…and what was especially of interest was the way the labelled overlays ‘pinned’ to the terrain. I was trying to think of a way of anchoring an overlay representing an in-game unit (e.g. a tank) to a map location. This is not a 3D game as such and the zoom level is too high, so there is not going to be a concept of an actual 3D tank model on the map; it will probably just be a circle with a label. The problem is conveying enough info with something like this. I don’t want to use a Google Maps type pin or anything else synonymous with mapping apps, so seeing this AIA video was a lucky break; I’m not above copying it.

Making higher-res mock-ups is in violation of my idea of quickly getting a first draft done, but hi-res is good for eliciting a different type of feedback than hand-drawn sketches do. It also helps me visualise screen real estate and how one might interact with units (e.g. tanks).

I’m still trying to find unit identifiers similar to what World in Conflict and Wargame: European Escalation use when the map is zoomed out (gosh, did my MacBook Pro just go absolutely nuts running European Escalation just so I could get a screenshot; meh, I don’t even need it, just take my word for it that those games look amazing). I’d like to use icons to represent units but for now I’ll just use plain text, e.g. an M1-A1 tank will just use that very text on the overlay unit label.

Regardless, here’s something to show off. There is an expectation that touch devices require bold UI elements and simple gameplay. Failed State is certainly not going to be a Eugen Systems game, but one has to start somewhere, right?