Thursday, June 21, 2007

Lessons From My First Job

Ah, I finally found an interesting topic to write about – at least to me. I was going to write about the best company I ever worked for (not Google, unfortunately). However, upon further inspection of the themes I really wanted to cover, I decided that it's really all about my first professional experience as an employee.

Everybody has a different experience of their first job. For some, it's the realization that money is REALLY hard to earn. For others, it's the realization that they can finally do their thing and stop worrying about irrelevant schoolwork. And there are those unfortunate ones who realize that this 'work' thing isn't so exciting after all – at least not worth all those years in school...

So I suppose I was lucky. I found a job that, just like the cliché being bandied about, I would probably pay others just so I could do it. Looking back now, I appreciate it even more because it got me off on the right foot as far as the business of software engineering goes. In fact, from what I hear around, some of the things I learned there are not even regular practices here in my country to this day – or maybe I just got to know some bad companies later. Here's the list...

Use version control software. One of the first things I had to learn during my first week was the venerable CVS. Every project in our small company got enrolled there. Those who postponed enrolling their projects (mostly waiting until they had at least a running prototype) suffered the consequences of lost sources (overwritten files, or hard disk crashes). Before working there, I didn't really use any version control; floppy disk space was so precious back then. (The entry-level hard drive with the largest capacity was 650MB – some blank CDs today hold more than that!)

When I eventually moved to other jobs, I had become so used to having a VCS around that I had a hard time forcing myself NOT to use one. I even use it for my personal projects. I really can't figure out why anyone would NOT use one. I've been in companies doing enterprise-size software without the benefit of a VCS. All the developers' lives are hell – especially when sources go missing during project turnovers from departing employees (which are frequent).

Have a tool developer. In my experience, software shops function a lot more efficiently if there is someone around who can build tools to automate repetitive or rote processes. It's even better if everyone has this mindset, because not only will everyone make tools for everyone else's use, they will also be on the lookout for things that CAN be turned into a tool. My first job was chock full of people who think this way, and it's infectious. Another side effect of the can-it-be-a-tool mentality is that it steers people toward the real problems – the ones that need creative human solutions – and thus avoids wasting precious effort.

One good example was when we needed to track down a network packet problem (Windows 95 was new then; DOS was still king, and so were packet drivers) that was causing malfunctions in one of our programs. We needed something to intercept the packets and log them for later viewing so we could trace the exchanges. Since no existing tool was available, we made one ourselves. We also made it work across Ethernet and ARCNET so we wouldn't have to do fancy riggings to feed one onto the other just for snooping purposes.

Another instance was when the generic Delphi (1.0-3.0) UI controls just weren't working for everyone. Since they're all generic controls, there's not much specialization provided. If you wanted, say, an edit box for entering quantity information, you pretty much had to check the input AND range-check the resulting value just to be sure. And this is only one specialization problem! So we decided to build tools and libraries to remove those troublesome issues. It was a lot of upfront work, but it paid off. All the other UI work from then on was simply a matter of dropping components and changing their properties.

Have coding and library standards. This may be obvious, but there are still shops that ignore it. Maybe they fear reining in the programmers' creative juices? Or maybe some other nebulous reason exists, but what do we know? As far as I'm concerned, this is necessary to promote smooth operations all around. You don't want anyone arguing about the placement of braces, the capitalization of variable names, or things of that nature. You also don't want programmer A changing programmer B's code to conform to his own coding style. A global coding standard simply forces both of them to format code the same way.

As for code libraries, a standard helps a lot in knowing what a fresh developer's machine needs. It's no good having only Joe Developer hold the FTP libraries, and the know-how to install them, if you actually need them company-wide. Just make it a standard and be done with it.

Create fixes, not patches. Nowadays, patches are ubiquitous. I'm not saying that patches are bad, mind you. What I think has gone out of control is the workaround mentality: it's better to find out why something is not working and fix the source of the problem. Too often, problems are just worked around and tagged with some TODO for later investigation. The only problem is, later never comes, and the problem gets laid at the feet of whoever the next maintainer is. One reason for the proliferation of this mindset is the way software folks are scheduled to work, which is the next lesson.

People DO have optimum work hours. Because we have been so used to working 9-5, we tend to ignore this one. Good thing my first job had a flexible schedule. We were just required to be around the office between 10am and 2pm, and even that could be waived if you pulled an all-nighter the day before. The boss simply recognized that there are hours when you're so focused you can plow through several days' worth of work – and these may not fall within the 9-5 cycle at all.

In fact, outside of work we all acknowledge this optimum-hours thing. For the other tasks we want to do, we 'know' when we're ready physically, mentally, or even psychologically. We sometimes put off reading that book because we're not in the proper mindset yet. We sometimes reschedule writing that blog post because we just don't have the spirit at the moment. Any non-time-sensitive activity (and maybe some that are time-sensitive) simply gets moved to a more convenient time because we're just not ready for it yet. When we see this in others, we automatically think they're procrastinating, and yet we do it ourselves. But in the long run, this is just more efficient. Ever witnessed a team forced to do their thing at fixed hours? They tend to 'procrastinate' a lot more trying to find their groove before they can actually be productive.

The IT field moves too fast to be complacent. This is a biggie. In fact, there are several implications once you accept the truth of this statement. For one, you simply cannot stop learning. Some of the things you are using and/or doing can become obsolete before you're even done! And when I say obsolete, it's not always the same cause, nor the same effect. Something might become moot because another product comes out doing exactly what you set out to do. It might also become useless because the technology wasn't accepted (example: there was a POS standardization initiative from the company in Redmond that was swept under the rug due to low acceptance from POS manufacturers). Whatever the reason, people must at least stay updated in order to react accordingly.

Another effect of this, which is lost on most HR types, is that it doesn't matter where you studied, or when you finished studying. What matters is your willingness to learn continuously, maybe even on the fly. To have a productive, long-term career, you must be able to evolve all the time. At one of my previous jobs, I met a guy who was hired because he was an expert in language X (specifying which language might pinpoint the actual person) and refused to evolve even when all the products ended up being written in language Y. Would you want a teammate who will not adapt at all?

Flatter organizations are more efficient. It's not that I have something against large companies with multi-level hierarchies. It's just that I've found that the flatter an organization is, the less bureaucracy there is, and the more personally invested the workers are in what they do. The fact that everyone knows who to report to, and that departments seldom have overlapping concerns, is just icing on the cake. If you're in a flat organization, try to keep it that way. And avoid promoting technical people to management positions just to give them a career ladder of sorts. Doing this over time leads to ever-increasing hierarchies – you'll have department heads, vice department heads, assistant department heads – and you'll no longer be as efficient as before. If technical people want career progression of sorts, there are other, more acceptable solutions. At my first job, we had a rather uncreative but viable one – we just got promoted to higher levels within the same position (i.e., engineer I, engineer II, engineer III...).

(Disclaimer: Note that I didn't say NOT to promote technical folks to management. I said not to do it just for the sake of giving them a career ladder. Some technical people can be good management material and can be promoted as such. Just be certain that they want the promotion in the first place, and that you can stand to lose one of your better technical resources to management.)

Monday, April 9, 2007

Procedural Methods

Introduction

Recently, I've been doing research and prototyping for an economic simulation game. Yep, another one of those. However, for reasons I can't disclose, it's not exactly just another one of those. I could, however, discuss some of the stuff I came up with for this project.

One of the things we really want our system to have is randomly generated towns. Our simulation will have a concept of levels, but we don't want the base data for a level to be fixed and unchanging. Fixed maps tend to have optimal strategies for winning, and players WILL find them and SHARE them with each other – making the game easier for later players. 'Random' covers not only the map and the town layout, but also the demographics. In this post, I'll discuss how I came up with the random systems we require.

Random Demographics

For random demographics, I found this website invaluable in jump-starting my knowledge of economics – at least the parts I needed to make a realistic-enough simulation. I started on the classic consumer theory essay to nail down the basics. That essay had a link to an alternative approach using agents, which looks a lot more like something we can use. Of course, prior to reading this stuff, I had already studied a couple of economics textbooks for a solid overall grounding.

First, I built a couple of quick prototypes using the classical methods: indifference curves, aggregate demand and supply. It worked fine, but it felt like all I was writing was, basically, code to solve equations involving some knowns and unknowns. This wasn't so hard, it turned out. It was easy enough that I had time left to include price elasticities of supply and demand, and to break the demographics down three ways: income scale, gender, and age. Having created these prototypes, I felt we could do better.
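
To give a feel for what those classical prototypes boil down to, here's a minimal sketch that solves for the price where demand meets supply, using constant-elasticity curves. The coefficients and elasticities are made up for illustration; the actual prototypes were richer (demographic breakdowns, separate curves per segment).

```python
# Sketch: equilibrium price for constant-elasticity demand and supply.
#   Demand: Qd = a * P**(-ed)   (ed = price elasticity of demand, > 0)
#   Supply: Qs = b * P**(es)    (es = price elasticity of supply, > 0)
# Setting Qd == Qs and solving gives P* = (a / b) ** (1 / (ed + es)).

def equilibrium(a, b, ed, es):
    price = (a / b) ** (1.0 / (ed + es))
    quantity = a * price ** (-ed)  # demand at the equilibrium price
    return price, quantity

# Illustrative numbers only: at P* the demanded and supplied
# quantities should match.
price, qty = equilibrium(a=1000.0, b=10.0, ed=1.2, es=0.8)
```

With closed-form curves like these, the "simulation" really is just equation solving, which is why the classical approach felt mechanical.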

In the next iteration, I tried the agent-based consumer simulation. That, too, wasn't too hard. As a bonus, I got both the behavior trumpeted by the article (see link above) AND the classical behavior of demand elasticity relative to price. There was only one hitch: it's sloooow! We're creating a web-based game, and this just won't cut it. I had to be a bit creative if I wanted to use this approach. But first, find out why it's slow.

In the agent-based simulation, the code essentially generates the per-agent information once and persists it in a database. During my testing and prototyping, this turned out to be the speed bottleneck. To work around the problem, I modified the simulation so that each agent represents one household instead of one individual, but it didn't help much. More testing finally revealed that generating the per-agent information is faster than streaming it in and out of the database! Not that it was surprising; I just didn't think it would be such a significant problem. Finally, I modified it again to store just the RNG (random-number generator) seed and use that to recreate the agents' information every time it is needed.
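
The seed trick can be sketched like this (Python for brevity; the field names and distributions are hypothetical stand-ins for whatever the real simulation tracks). The key property is determinism: the same seed always reconstructs exactly the same agent population, so only the seed and the count need to be stored.

```python
import random

def make_agents(seed, count):
    # Recreate the same household agents deterministically from a stored
    # seed, instead of persisting every agent row to the database.
    rng = random.Random(seed)
    agents = []
    for _ in range(count):
        agents.append({
            "income": rng.lognormvariate(10.0, 0.6),  # skewed income curve
            "age": rng.randint(18, 80),
            "size": rng.randint(1, 6),                # household size
        })
    return agents

# Only (seed, count) get stored; the full population is rebuilt on demand.
first = make_agents(seed=42, count=1000)
again = make_agents(seed=42, count=1000)
```

Since regeneration beat database round-trips in my tests, this trades a little CPU for a lot of avoided I/O.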

Random Terrain

I must confess, this part of my research was perilously close to being an embellishment – until a bit later. Because a bit later, I found a pretty good way to generate random towns, and that one requires a height-map terrain – which is exactly what this terrain generator outputs. I also figured that, hey, we might someday want a way to generate random height-map terrain procedurally. When that time comes (I hope it does, of course, so I can work on this thing more :)), we'll already have one ready for the task.

I started by using random numbers all over the grid. It didn't work out well, of course, but I had to start somewhere! Several research ticks later I found the diamond-square midpoint-displacement terrain generation method (a mouthful, that) and coded an implementation. My first few attempts were awful, but I eventually figured it out. I also used neighbor-smoothing and band-smoothing to remove the worst of the blockiness. Much later, I simply used an erosion filter (not very good yet, but it works) and neighbor-smoothing. As for the latter, it turns out I had overlooked a bug in my first implementation, so I fixed it.
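
For anyone who hasn't met diamond-square before, here's a compact sketch of the algorithm plus one neighbor-smoothing pass (Python for illustration; my actual implementation, parameters, and smoothing passes differ). The grid must be (2^n + 1) on a side; each round fills square centers from their corners (diamond step), then edge midpoints from their diamond neighbors (square step), halving the random displacement as the step shrinks.

```python
import random

def diamond_square(n, roughness=1.0, seed=None):
    """Midpoint-displacement heightmap on a (2**n + 1) square grid."""
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    # Seed the four corners with random heights.
    for y in (0, size - 1):
        for x in (0, size - 1):
            h[y][x] = rng.uniform(-1.0, 1.0)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: each square's center = corner average + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: edge midpoints = average of in-bounds diamond
        # neighbors + noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale / 2.0
    return h

def neighbor_smooth(h):
    """One 3x3 averaging pass to knock down the worst blockiness."""
    size = len(h)
    out = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

terrain = neighbor_smooth(diamond_square(5, seed=7))  # 33x33 grid
```

The bug I'd overlooked in my own first smoothing pass was of exactly the boundary-handling kind the `count` bookkeeping above guards against.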

All in all, the terrain generation portion worked out very well.

Random Towns

This one is a lot harder than it first appears – mostly because the literature on procedural towns is either 3D-oriented or much too vague to easily translate into code. The one I settled on was the latter. But hey, if it's too easy, it's probably not worth doing at all. :)

Anyway, this article discusses the method I wound up using. It's basically an agent-based city generator that takes a height-map terrain as input. Its output is reasonably realistic, and the generation can be stopped at any time (for instance, when the desired densities are reached). The only drawback is that there is no reference implementation available. The only thing I had to go on was the article and nothing more. So that's what I used. I simply read and re-read it until I had distilled the stuff I could use.

My implementation, however, does not use road-related agents. I 'cheated' and simply pre-generated rectilinear road grids ready to be populated with zones. This worked because the method, as discussed in the article, also 'knows' how to populate cities with pre-generated road networks.

While coding this random-town generator, I first used if-else clauses when laying down zones. This turned out to be a bad idea, as oscillations eventually bogged down the town generation. That is, one cycle a zone would be residential, the next cycle it would become commercial, the cycle after that it would go back to residential, and so on and on. So I recoded the zoning portion into a pseudo-agent model. I followed the article's approach of generating a normalized score for the prospective tile and deciding based on that. Things went well after that, and I even had time to add park zoning.
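
The score-based decision can be sketched roughly like this. Everything here is hypothetical for illustration – the factor names, weights, and threshold are made up; the real criteria come from the article's agent rules (slope, road access, neighboring zones, and so on). The point is that each tile gets a normalized score per zone type and the zone is committed once, instead of if-else rules that flip-flop a tile between zone types every cycle.

```python
# Hypothetical scoring sketch: tile factors are assumed to be
# pre-normalized into [0, 1], and each zone's weights sum to 1,
# so the resulting score is also in [0, 1].

WEIGHTS = {
    "residential": {"flatness": 0.5, "road_access": 0.3, "water_view": 0.2},
    "commercial":  {"flatness": 0.3, "road_access": 0.6, "water_view": 0.1},
    "industrial":  {"flatness": 0.6, "road_access": 0.4, "water_view": 0.0},
}

def zone_score(tile, zone):
    w = WEIGHTS[zone]
    return sum(w[k] * tile[k] for k in w)

def pick_zone(tile, threshold=0.5):
    # Commit to the best-scoring zone, but only if it clears the
    # threshold; otherwise leave the tile unzoned for now.
    best = max(WEIGHTS, key=lambda z: zone_score(tile, z))
    return best if zone_score(tile, best) >= threshold else None

tile = {"flatness": 0.9, "road_access": 0.8, "water_view": 0.2}
```

Because the decision is a single scored commitment rather than a chain of mutually overriding rules, a tile can't ping-pong between zone types from one generation cycle to the next.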

Downloads

Here's a demo program that generates the terrain, town, and zonings. It is not bug-free, and the usual disclaimer applies: I'm not responsible for any shenanigans that may happen while running it. It ran all right on my system, but YMMV. I will appreciate any comments, though – especially constructive ones.

Also, the demo program requires the DLL files from the FreeSL project. I didn't really use any sound, but the framework I built on uses them.

Other Notes

I haven't had time to write documentation, but here are some notes for those who download the files and run them.

* Zones are color-coded thus: yellow - residential, cyan - commercial, red - industrial, pink - parks.
* Regenerating roads also resets the zone development.
* Regenerating the terrain, or applying the filter or the smoothing, resets everything!
* Terrain generation sometimes takes a bit more time, since terrain with more than 30% water is discarded. New terrain is generated until one with 30% or less water is found.
* The blue road is the main road and can span anything, including bodies of water. All other roads can only span land.
* All roads MUST be accessible from the main roads (the blue ones).

Saturday, December 30, 2006

C# Nitpick

A lot of modern languages support optional parameters, or parameters with default values. I've read in this article that .NET actually supports them, but for some inane reason, the C# designers did not build in support for it. The article mentions something about optional arguments being just syntactic sugar that can be emulated using overloaded methods. This might be true, but the code you have to write for this workaround is usually too much, and quite frankly just silly.

Consider constructors. It's true you can write overloaded constructors anyway, but what if I have to do some initialization that's quite involved? What I usually do in these cases is create a private CommonInit() method to handle the involved initialization, and the constructors call it as their first, and usually only, line of code. Here's an, admittedly, contrived example:

class Contrived
{
    public Contrived(int maxLines)
    { _maxLines = maxLines; }

    public Contrived()
    { _maxLines = 5; }

    private int _maxLines;
}

In the example above, the parameterless constructor actually provides a default value of 5. Imagine if, prior to assigning to _maxLines, we had to do more stuff, whatever it is. Where do you put that code? This is usually what I do:

class Contrived
{
    private void CommonInit(int maxLines)
    {
        // involved code here
        _maxLines = maxLines;
    }

    public Contrived(int maxLines)
    { CommonInit(maxLines); }

    public Contrived()
    { CommonInit(5); }

    private int _maxLines;
}

Looks OK, but imagine if default values were supported: we wouldn't have to jump through all these hoops just to support multiple ways of constructing objects of this class. Had optional arguments been supported, things would be this clean:

class Contrived
{
    public Contrived(int maxLines = 5)
    {
        // involved code here
        _maxLines = maxLines;
    }

    private int _maxLines;
}

See how short and concise it is now? Here's hoping that a later version of C# will support this type of declaration. Having to write all those CommonInits for every class I want to construct in various ways consumes more time than I'm willing to waste working around 'syntactic sugar', or the lack of it.

Tuesday, December 5, 2006

A Page On Properties

Whew! Has it been two weeks since my last post? Time sure flies fast when you're busy doing interesting things... As for my schedule change, it got a bit chaotic due to unforeseen events (typhoons and family matters). However, I can say that it's now successful. I did, however, tone down playing basketball from every day to 4-5 times a week. I find that I can't quite sustain the physical abuse seven mornings a week -- not if I want to spend some time with my wife and son.

I've been re-building my personal code libraries for the past two months, and it has been going well for the most part. Because of this, building some common libraries for work wasn't that much harder. I mostly just break them down into reusable classes so I don't have to write the same thing more than once -- OK, twice. But along the way, I ran into this weird property thing. In Delphi and C++, it's powerful and straightforward. It turns out it's not so simple and/or elegant in C# and VB.

In VB, it looks kind of ugly to use the () pair to index stuff, as opposed to the [] pair I'm used to, but I can adapt to a lot of styles. What surprised me was that C#, which I thought was a pretty good language to begin with, does not have indexed properties! Wow! I wasn't quite prepared for that one. I know, I know, you can create a class indexer as this[], but it's not the same thing. What I really wanted was something like this.sublist[] -- and, well, it's not built in. I did find a way around it, however, thanks to a bit of googling.

For anyone interested, what you do is create a class whose main purpose is to do the indexing. This indexer class gets instantiated inside the host class and is returned as a regular property. Hmmm, that wasn't quite as clear as I wanted, so let's have an example.

In my case, I was writing a class to act as a section of an *.ini file (yeah, quaint compared to XML, but it's simpler) and I wanted to be able to do the following at a minimum:

section s = new section(sectionName)
s[keyName] -> returns key=value pair where key=keyName
s[index as int] -> returns key=value pair at position index
s.keys[index as int] -> returns the key at position index
s.values[keyName] -> returns value for keyName
s.values[index as int] -> returns value at position index


In order to support the likes of s.keys[] stuff, I created extra classes to be indexers. Here's how I did it:

public class KeyIndexer
{
    public KeyIndexer(section owner)
    {
        _owner = owner;
    }

    public string this[int index]
    {
        // KeyAt() is a method declared in section
        get { return _owner.KeyAt(index); }
    }

    private section _owner;
}


The host object simply declares an instance of this class and returns it whenever the property Keys is accessed. From the host class, this is how it's used:

public class section
{
    public section(...)
    {
        ...
        _ki = new KeyIndexer(this);
        ...
    }
    ...
    public string KeyAt(int index)
    {
        ... code to find and return the key at index ...
    }
    ...
    public KeyIndexer Keys
    { get { return _ki; } }
    ...
    private KeyIndexer _ki;
}


With this done, you can do things like s.Keys[index] to get the key at index! It's a lot more code than I would have liked, but it's what it took to make the client code much more readable. Just like most everyone else I've read, I have no idea why C# does not have indexed properties. It seems like a major oversight considering that array-like properties are not that rare.

Monday, November 20, 2006

Almost Normal

The schedule change is going very well. I'm now getting up a little before six in the morning. My first activity is a one-hour exercise, sort of. I'm actually playing basketball for the first hour of my day. I figured that doing a routine exercise would bore me to death and I wouldn't be able to sustain the activity. Basketball, at least, is something I enjoy doing. I've played more basketball in the last six days than in the last six months! What can I say? Programming is too chair-bound a career.

In other areas, I finally got a working Arkanoid clone. It only has one level, but it is easy to modify since the layout is just a text file listing positions where bricks ought to be. Still, it is enough to validate that the engine port is doing well. The next module up for porting is the windowing system. Or maybe I should do the sound modules first? I'm not sure yet.

Friday, November 17, 2006

Wrapping the Windows message loop

Back when my engine was written in Delphi, I created the Application class. Its job is to wrap the Windows message loop so I won't have to worry about it elsewhere. It's basically a thin wrapper, though, which is useful if you want to work at a lower level. The problem with that version is that you have to subclass it to actually use it. That wasn't much trouble then, since I was the only one using it.

Now, in C++, I have redesigned it to make use of patterns. It's still a thin wrapper. However, instead of the dispatching being embedded and usable only by subclassing, it now uses an observer model. A separate class manages the list of observers. This made the Application class design cleaner. It also better supports multiple dispatch targets, simply by registering multiple observers.
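
The observer model itself is language-agnostic, so here's a tiny sketch of the idea in Python rather than C++ (class and event names are hypothetical, not my engine's actual API): observers register for named application events, and the dispatcher fans each event out to everyone registered for it.

```python
# Minimal observer-dispatch sketch: the Application class would own one
# of these and push window-state events through it.

class Dispatcher:
    def __init__(self):
        self._observers = {}  # event name -> list of callables

    def register(self, event, observer):
        self._observers.setdefault(event, []).append(observer)

    def dispatch(self, event, *args):
        # Fan out to every observer registered for this event;
        # events nobody registered for are simply ignored.
        for observer in self._observers.get(event, []):
            observer(*args)

log = []
d = Dispatcher()
d.register("minimized", lambda: log.append("pause rendering"))
d.register("restored", lambda: log.append("resume rendering"))
d.dispatch("minimized")
d.dispatch("restored")
```

Registering a second observer for the same event is all it takes to get multiple dispatch targets, which is exactly the property that made this cleaner than subclassing.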

Part of the design is to insulate client code from caring how the application window changes state (minimized, selected, etc). To facilitate this, an enum was created, which is what gets passed around during observer registration and event dispatching. The raw Windows messages are still available for interception -- though observers must explicitly register to receive them.

If anyone at all is interested, I may post the code. I'd probably have to clean it up some, since it has dependencies I may not necessarily want to post. My library/engine is mostly Unicode, configured to use UTF-32 characters, so it may not mesh instantly with anybody else's code.

What to post, what to post....

Hmmm, there doesn't seem to be enough time in a day to do all my stuff and still post about it. But I did promise that this blog would be used, so I will. I will.

I've been busy recoding my old isometric engine in C++ -- it was originally written in Delphi. Some of the code had to be rethought, some went practically unchanged, and still other parts just got discarded (those Delphi classes I wrote to support containers seem to have gotten the worst of the shafting). I'm already at the stage where the port is usable. I have written the beginnings of an Arkanoid clone, and it's running. As this library/engine is quite a big project, I'll just discuss it in chunks in upcoming posts. Maybe starting with how I wrapped the Windows application message loop?