Chauvinism, Criticism and Harassment – my comment on the topic.

Last week was the week the game industry showed its lack of maturity by essentially bitching about the drama surrounding Anita Sarkeesian and Zoe Quinn. I even saw another article about Phil Fish and naturally the comment section was going all out, saying things they’d never say to people’s faces.

It’s all a bit embarrassing really and I’m not sure why people care so much. Yet, here I am writing a blog post about it so obviously I care? Or do I? Yes, I seem to care enough to get involved in the mud slinging, even though what I try to do is actually pass around some bags so people can clean up their mess.

Here was my contribution to one comment section [edited for typos]…

“This comment section has basically reinforced the point that there is essentially a lot of hate directed at certain professionals in the industry. The problem is that commenters continually attack the player instead of the ball and continually try to find fault (which may be minimal) instead of trying to find what may be insightful.

Case in point: the ridiculous accusations directed at Quinn. All I’m going to say about that is that people should look up Kotaku’s response to the allegations. It’s almost like people want or expect there to be something salacious, and to hell with what really happened, so long as it *could* have happened.

And the hate directed at Sarkeesian is mindless in that it’s not like many people offer rebuttals to what she has to say. In fact, most of her videos are matter-of-fact in that they merely report instances of tropes and create an awareness of their existence. Culturally we get so used to the status quo that we need the efforts of people such as Sarkeesian to enlighten our thinking a bit more.

In regards to Phil Fish, I think people forget how much that guy toiled to get his game released (watch Indie Game: The Movie). Not everyone can handle media (and social media) in a way that is dry enough to be palatable to everyone in the world. A more recent example is the maker of Flappy Wings, who basically hid while all the conspiracy theories died down.”

*sigh* I don’t know why I bother. I find that people want to discover a big conspiracy, just like in a Hollywood movie, but more often than not the simplest explanations will suffice.

I endeavour to be level-headed in these debates. I like jousting with my left-leaning friends and conspiratorial friends alike, but I do so with the knowledge that I’m as biased as everyone else. I like to appreciate all points of view (re: Ukraine and Russia) but in the end I still feel compelled to pick a side. Will I get all bitchy with someone who disagrees with me? No, but I’ll still try and make a thought-provoking remark on an Al Jazeera Facebook post about the Ukraine conflict (which is a cultural and geopolitical puzzle that I suspect doesn’t bode well for Ukraine).


Come on guys. The rest of the world must be thinking what the hell is wrong with our industry given the way we’re behaving at the moment.

Better pinch-to-zoom

The horrific pinch-to-zoom implementation that I had in my code up until recently was interfering with user testing. When I say user testing, I really mean that I show the game to my work colleagues and get feedback on what is or isn’t working.

The problem is that I had to apologise all the time about the crappy camera pan and zoom, which was annoyingly jerky and nothing like Google or Apple Maps.

I needed to find a solution and in the process found some posts like this and this, both of which followed my own naive approach to pinch-to-zoom, i.e. useless for zooming.

Field of View

One of the things that really confused me was that people were often changing the field of view in order to achieve the zoom. I thought moving the camera closer or further away made more sense, but maybe that’s because I don’t use SLR cameras very often (I’m no photographer). That’s essentially what changing the FOV mimics: the camera stays still and the lens zooms in/out.

Perhaps I figured that if a camera pan is achieved by moving the camera then a camera zoom would also move the camera. After all, I want to ‘fly’ the camera around the scene. Plus I didn’t actually want to mess with the FOV because it distorts the 3D world (or at least changes the perspective).

It was educational though, even if what people said on forums created some confusion. It turns out that a camera dolly has nothing to do with a dolly zoom.

Never immediate, always transitioning

Finally I found a gem. The author wasn’t even writing about camera zoom but rather about linear interpolation; I can’t even remember how I came to be looking into LERP.

Ignoring his grammar, the author’s conclusion struck a chord…

In general, is a fairly common mistake in beginner videogame developers that elements in the games are either in position A or B. For us, programmers, is much easier the world were things are never in a “travelling” or “transition” status. [sic] – Juan Gomez

The jerkiness of my camera zoom was because I was moving the camera immediately from A to B rather than letting the update loop transition the camera to the new position. Position B needed to be a requested destination toward which the camera moved every clock tick. The LERP function smoothed this transition further because it can be set to accelerate away from A and decelerate into B.
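The fix is easy to show outside of Unity; here’s a Python sketch of the ‘requested destination’ idea (the names are mine, not from my actual code). Each tick moves the camera a fraction of the remaining distance toward the target, so it naturally decelerates into position B:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by fraction t."""
    return a + (b - a) * t

def update_zoom(current, requested, smoothing=0.15):
    """Called every clock tick: ease the camera toward the requested zoom.

    Repeatedly lerping toward a fixed target covers a fraction of the
    remaining distance each tick, so the motion decelerates into B."""
    return lerp(current, requested, smoothing)

zoom = 1.0        # current camera zoom (A)
requested = 5.0   # requested destination (B)
for _ in range(60):  # roughly one second of update ticks
    zoom = update_zoom(zoom, requested)
```

In Unity terms this is just Mathf.Lerp (or Vector3.Lerp for position) called from Update with the requested destination stored on the camera controller.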

And with that, another problem was solved. Yay.

Why just iOS?

A colleague asked me recently why I was making Failed State just for iOS. The reality is that I don’t intend for it to work exclusively on iOS, as it would make an excellent PC game (given that it’s basically a rip-off of PC games), but mobiles are just so ubiquitous and portable.

My clarinet and saxophone students know that I make games and sometimes inquire as to my progress. With a phone it’s a simple matter of handing it to them to have a play. I’m not someone who always keeps their laptop with me, despite my high regard for the retina MacBook Pro. Mobile phones, which we take almost everywhere these days, are the perfect platform to share things on, person to person. They’re the portable, interactive, digital scrapbook.

The fabled pot of gold

There’s a lot of commentary on the net now (especially on Gamasutra) about the struggles of indie developers to make a living. I’ll be totally honest in saying that publishing Mobile Assault is never going to make me much (if any) money, for the following reasons.

  • It’s just me and I’m too proud to share the work with anyone other than close friends (none of whom are interested)
  • I’m no risk taker so I’m not going to quit my job to make this game and speed up development
  • I’m not exactly rich (and again, not a risk taker) so I won’t be hiring developers and artists. I may pay for some art late in development though.
  • I’m making a very niche game. This is not a Doodle Jump, Cut the Rope or Angry Birds. Mobile Assault is only going to appeal to a select group.

As a result, Mobile Assault is going to end up being a slow slog, a labour of love, a game I’ve wanted to make for many years. The only thing that could kill my satisfaction at release is if someone does the exact same thing first… yeah… why am I writing all this on a public blog again?

Will I do a PC build? Yeah, why not. It makes sense because I have to debug the thing on it anyway. I really should get Unity exporting to web.

Line of Sight

Typically, my foray into trying to detect line of sight using 2D objects turned into a misadventure. It turns out that Unity only does 2D in the x, y plane, and this matters because my entire world has up along the Y axis and is thus rendered along the x, z plane.

But hope was not lost: it turns out that for all my GameObjects that needed some 2D collision checks but already had a 3D collision mesh attached, I could attach a child GameObject and then add a PolygonCollider2D to that (you can’t have 2D and 3D collision meshes attached to the same GameObject).

The result (if you look at the Scene view) is a perpendicular world where all the 2D collision polygons are visible along the x, y plane. It doesn’t matter though, because after some Vector3 to Vector2 conversions a 2D Raycast works just fine and the collisions are detected. The Vector2 can then be converted back to Vector3 and used for whatever purpose was intended. Phew. Why is this important? Well, I got a line-of-sight indicator working for when making a unit target something, and I figured 2D checks would be cheaper to calculate. Here’s a screenshot.
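Stripped of the Unity specifics, the 2D line-of-sight test boils down to simple segment geometry. Here’s a Python sketch of the idea (Physics2D.Raycast does the equivalent work against the collider shapes for me): a shot is blocked if the segment from shooter to target crosses any edge of an obstacle polygon.

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p, q, a, b):
    """True if segments p-q and a-b properly cross each other."""
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def has_line_of_sight(shooter, target, polygons):
    """The shot is clear unless it crosses an edge of any obstacle polygon."""
    for poly in polygons:
        for i in range(len(poly)):
            a, b = poly[i], poly[(i + 1) % len(poly)]
            if segments_intersect(shooter, target, a, b):
                return False
    return True
```

So a building footprint like `[(4, -1), (6, -1), (6, 1), (4, 1)]` blocks a shot from (0, 0) to (10, 0), but not one to (3, 5).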

Marvel at my awesome placeholder UI (and just ignore the fact that some tanks have been randomly placed in the water).

Combat Engine

What is combat? What is a combat engine?

I suppose (because I’m really just guessing) a combat engine for a game is anything from a turn-based system all the way to a high-accuracy physics engine with projectiles colliding with physics bodies. With Failed State’s game world being merely a 2.5D environment where terrain is flat and the height of buildings is contrived, it makes little sense to try to perform accurate projectile physics.

In fact, there may be no point in having bullets fly around the map at all. The original Command and Conquer series handled this quite well with its infantry: one animated sprite showed the soldier shooting, and another showed kicks of dirt around the target.

That and health bars which I don’t intend to use.

That being said, C&C did use projectiles for its tanks and missiles, yet even then they weren’t there to perform high-accuracy point-to-mesh collision detection; there’s a good chance it was known in advance whether there was going to be a hit or not.

Level of accuracy

Failed State only requires a certain level of combat accuracy to be exposed to the player. Although seeing individual bullets might not be required, there is something to be said for seeing missiles with vapour trails. This video of M1A1 Abrams tanks doing live fire exercises clearly shows projectiles in motion, and it would be a shame not to see some of those in the game world.

I am definitely going to try and pull some audio from that video for sound effects.

But even if there are projectiles along a 2D plane, they needn’t “hit” anything; not in a physics sense at least. The system I’m interested in making involves the idea that something shoots at something else and that there is a chance of hitting the target. You could say I want to conduct a dice roll in realtime.

Here’s a really basic example…

One soldier fires on an opponent soldier. Either the target is hit or it is not. Now, what are the chances of the hit occurring? What affects the likelihood of hitting the target?

  • The skill of the shooter
  • Dumb luck

Let’s expand upon this. The skill of the shooter alone is not enough to determine if a target will be hit. Here are things that could make the shot more difficult.

  • The target could be moving.
  • The target could be far away.
  • The target could be behind cover, partially or fully obscuring them.
  • The cover could be of various strengths. A bullet might pass through wood but not steel.


There’s a possibility of lots of shooting and not much hitting, especially considering modern combat tactics: suppressive fire (and modern weapons) use a lot of ammo. What will make Failed State interesting is if these modern combat strategies can be realised. Suppressive machine gun fire or artillery can make a target ineffective (they can’t move, shoot back or even see their opponent approaching their position), and different weapons would create differing amounts of suppression. Implementing such a system would bring some strategic elements to the game because when suppressed, a unit’s awareness of their surroundings is compromised. They may also take time to become effective again as the shock of being shelled diminishes. A unit with high morale and/or experience would recover quicker than a ‘green’ unit.

By taking the above into account, a rudimentary combat system can be constructed.

Let’s say there is a valid target.

  • Every 5 seconds the soldier takes a shot at the target
  • Typically this soldier has a hit ratio based on some non-linear relationship with distance (and skill), i.e. when really close they’re more likely to hit, but when further away they’re more likely to miss.
  • Assuming this curve exists, let’s say that the soldier has a 50% chance of hitting
  • …but the target is behind cover, reducing the chance of hitting to 20%
  • Also, the soldier is in turn under fire, reducing their effectiveness and thus reducing their chance of hitting to 5%.
  • So the soldier mathematically needs about 20 shots on average to hit the target, which equates to about 100 seconds of shooting before the target is hit.
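The worked example above, as a Python sketch (the modifier values and names are illustrative, not final):

```python
import random

SHOT_INTERVAL = 5.0  # seconds between shots

def hit_chance(base, cover_mod=1.0, suppression_mod=1.0):
    """Multiply the base hit probability by situational modifiers."""
    return base * cover_mod * suppression_mod

def shot_hits(p, rng=random):
    """The real-time dice roll: a single shot either hits or misses."""
    return rng.random() < p

# the worked example: 50% base, cover drops it to 20%, suppression to 5%
p = hit_chance(0.5, cover_mod=0.4, suppression_mod=0.25)
expected_shots = 1.0 / p                        # ~20 shots on average
expected_time = expected_shots * SHOT_INTERVAL  # ~100 seconds of shooting
```

Whether the modifiers should multiply like this, or stack some other way, is exactly the kind of thing that will need tuning.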

But then there are other considerations. What if the soldier doing the shooting is part of a squad? Surely a squad would be more efficient at eliminating a target than a lone, isolated (non-sniper) soldier?


As much as I want to avoid it, projectiles are important. They take time to reach a target. It would be a strange sight in a game to see one unit fire at another and the target get hit immediately; it would look like there was a disruption in the time-space continuum.

When dealing with straight-line or artillery projectiles, the time for a projectile to reach a target can be calculated in advance; all the visuals are just cosmetic. Yet despite there being no need for collision detection (because the start and end points of the projectile are known), this does complicate the timing of the combat engine. Unit A could fire at Unit B while Unit B’s projectile is already in flight.

It’s as if the game is turn-based, but each turn is 0.01 seconds long.
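That tiny-turn idea could be sketched as a scheduled event queue (Python, with made-up names): firing pushes an impact event at the projectile’s precomputed arrival time, and each fixed-length tick resolves whatever lands in that window.

```python
import heapq

def fire(events, now, distance, speed, payload):
    """Queue an impact: arrival time is known the moment the shot is fired."""
    heapq.heappush(events, (now + distance / speed, payload))

def tick(events, now, dt=0.01):
    """One fixed-length 'turn': resolve every impact landing in this window."""
    hits = []
    while events and events[0][0] <= now + dt:
        hits.append(heapq.heappop(events)[1])
    return hits

events = []
fire(events, now=0.0, distance=10, speed=100, payload='tank shell')  # lands at t = 0.1
```

This way Unit B’s in-flight projectile keeps its own arrival time no matter who fires in the meantime.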

Attack step and damage resolution

Thankfully the attacking step and the resolving-of-damage step are independent of each other.

Attacking modifiers:

  • Range
  • How good a shot are the units? Are the weapons accurate?
  • What type of weapon? Small arms or a tank shell?
  • Are they currently under fire (suppressed)?

Damage resolution modifiers (NB: resolution is delayed by the projectile’s flight time, i.e. distance / velocity):

  • Cover
  • Moving (will probably affect cover)
  • Armoured? Infantry with kevlar or tanks with steel all have an effect
  • Do they generally suck at keeping their head down (skill/experience level)?
  • Are they being shot at from all sides? This is probably part of the suppression.

Note that suppression probably only affects a unit’s ability to see and shoot back rather than a likelihood of being hit.

First draft complete

And with that brain dump to a blog post, all my initial thoughts for a combat engine have been collated. There’s a good chance that many of these ideas will ultimately be flawed and will have to be modified but one has to start somewhere.

Culling, normals and face winding

The internet is an amazing resource for programmers, especially when using a language or platform that has a massive community behind it. It’s almost guaranteed that someone has experienced your current problem and found a solution to it, a solution that’s shared on a forum post or mailing list somewhere. Unity3D is one such platform.

Unfortunately though, if you don’t use the correct search terms you can end up going in circles for a while, and such a thing happened to me last week when trying to extrude my OSM buildings into 3D. I kept telling myself that it was not important for gameplay (and it isn’t) but I was so curious to see what it looked like that I’ve spent just over a week’s worth of free time trying to implement it.

Inevitably trying something new (or even doing something that was done ages ago but is now being reimplemented in some new manifestation) results in all sorts of problems, puzzles, frustrations, mindless hacking and misinterpretations.

Mesh extrusion should be easy, right? Just duplicate the initial mesh, offset it then create new triangles between the two layers to create the walls?


Fun with normals

It turns out that doing a ‘sandwich’ style join between two layers of the 2D mesh (one at a y offset to the other) was a bad idea because of the way shading (from lighting) is done in the graphics pipeline. If there is only one normal per vertex, the buildings look like they have rounded edges instead of flat-shaded (hard) edges. What’s required is three copies of each vertex in order to hold the three normals, one per adjoining face. Consider a cube: there aren’t 8 vertices, there are really 24 vertices with 24 corresponding normals (3 per corner).

I had to break the code down into its most basic form and try to extrude just a box into a cube so that I could figure out where my code was breaking.

Fun with normals part 2

Oh, so Unity uses a left-handed coordinate system? The right-hand rule for cross products that I learnt at uni, and that is all over the internet, doesn’t actually apply here? Grrr. If I’d been more diligent I’d have noticed that the Unity3D docs clearly state the use of the left-hand rule for Vector3.Cross, but it took a while to ‘get it’. My Vector3 normal = Vector3.Cross(b-a, d-a).normalized; should have been Vector3 normal = Vector3.Cross(d-a, b-a).normalized; (I didn’t want to multiply by -1).
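The gotcha is easy to demonstrate with plain tuples (a Python sketch, no Unity required): swapping the argument order of a cross product flips the resulting normal, which is exactly the b-a / d-a mix-up above.

```python
def sub(a, b):
    """Component-wise vector subtraction a - b."""
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    """3D cross product u x v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# three corners of a quad lying flat on the ground (y is up)
a, b, d = (0, 0, 0), (1, 0, 0), (0, 0, 1)

n1 = cross(sub(b, a), sub(d, a))  # points down
n2 = cross(sub(d, a), sub(b, a))  # points up: same vectors, reversed order
```

The maths of the cross product itself doesn’t change between conventions; what changes is which argument order yields an outward-facing normal for your winding.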

With the normals facing the wrong way, the walls looked dark no matter the intensity of the lighting.

Fun with normals and culling

Upon investigating the ‘why are the walls dark’ issue, I experimented with some Unity3D provided shaders, but oh no!

Why oh why are some of the walls of my buildings being culled out?! The normals are definitely outward facing!

I stumbled across this forum post (see, someone had a similar issue). Normals are only used for shading and colour; it’s the triangle winding order that determines which triangles are back-facing.

*sigh* I felt like I’d had that very issue before, when my collision meshes didn’t work; the triangles had to be drawn with CCW winding instead of CW (or was it vice versa?) for the raycasts to ‘hit’. Anyway, duh! I felt stupid for wasting an evening.

Fun with normals and culling part 2

But holy crap! It still didn’t work! Some of the buildings had walls disappearing but others were fine.

Think, think…

Oh, the OSM data has some polygons defined in CCW order and others in CW order. I had to make them use the same ordering. More searching and I found this. Yay for pseudo code. It works. Ship it.
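The pseudo code boils down to the shoelace formula. As a Python sketch: the sign of a polygon’s signed area tells you its winding, so flipping any clockwise ring gives every polygon a consistent CCW ordering.

```python
def signed_area(poly):
    """Shoelace formula: positive for CCW vertex order, negative for CW."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return s / 2.0

def make_ccw(poly):
    """Flip a clockwise ring so every polygon shares the same winding."""
    return poly if signed_area(poly) > 0 else list(reversed(poly))
```

Run every OSM footprint through `make_ccw` before extrusion and the wall triangles all wind the same way.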

And now for a screenshot.

Oh look, I’ve got placeholder art from Shenandoah Studio’s Battle of the Bulge; I hope they don’t mind. Their games look amazing.

Sphere Casts

There are so many instances where I write a pleading forum post asking for suggestions/solutions to a problem I have only to discover the answer soon after I post it. I’m hedging bets in a way, hoping that someone else has a solution in case I don’t eventually find one myself.

This almost happened again in regards to collision detection in Unity3D. I wanted to do something that seemed really basic: find out when two collision meshes collided (overlapped). You’d think this would be simple, right? Nah, maybe… kind of…

After the initial hunting around the documentation I tried implementing the OnCollisionEnter/Exit approach. Mmmm, didn’t work. So I hunted around the forums and found this.

Oh, so I need rigid bodies to detect a collision? Seriously? That’s a bit overkill when I’m not using any physics.

Oh, wait, it turns out I should have been using raycasts.

That I can do because I’d been using them for touch selection (mouse picking).

…but oh, wait… What’s this sphere cast?


SphereCasts are just fat raycasts, but they feel a little heavy-handed for detecting collisions. Every time a unit moves (such as a tank or infantry) I have to sphere cast its bounds against the 2D world to see if it hits anything. I’d rather the engine were smart enough to tell me there was a collision than me having to ask. I feel like I have to hand-hold it.

In my case I’m trying to determine if a unit is inside one of the OpenStreetMap-defined areas, such as a building. If I were building this system from scratch I’d use the fantastic sort and sweep algorithm described in Real-Time Collision Detection by Christer Ericson, but I have to trust that Unity is using some cleverly optimised collision detection algorithm that quickly determines collisions between the sphere cast and the collision meshes.
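For the curious, the broad phase of sort and sweep is short enough to sketch along a single axis (Python, illustrative names, not Ericson’s exact code): sort the intervals by their start, sweep left to right, and only intervals still ‘active’ when a new one begins can possibly overlap it.

```python
def sweep_pairs(intervals):
    """1D sort and sweep broad phase.

    intervals: dict of id -> (lo, hi) extents along one axis.
    Returns candidate overlapping pairs; anything not paired here
    cannot possibly be colliding, so the narrow phase skips it."""
    items = sorted(intervals.items(), key=lambda kv: kv[1][0])
    active, pairs = [], []
    for ident, (lo, hi) in items:
        active = [(i, h) for (i, h) in active if h >= lo]  # drop expired
        pairs.extend((other, ident) for (other, _) in active)
        active.append((ident, hi))
    return pairs
```

The full algorithm sweeps the axis of greatest variance and keeps the sorted list between frames, which is what makes it fast for mostly-static worlds.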

That’s why I’m using an engine. It took quite a while to make that algorithm work for Mobile Assault.