Issue 8 - Ranking I.T. Staff


De Luca on FDD Newsletter Issue 8

Ranking I.T. Staff


As a project manager I often get involved in a company's H.R. practices when doing a project with them. It might be to help with hiring; it should always be to be involved in firing; and it might be to help with appraisals - i.e., rating and ranking staff. On many occasions, I have had direct responsibility for all of these. There are many topics related to H.R. and I.T. that warrant discussion, but I want to focus on ranking staff in this newsletter.

Here's how the process usually works. Let's assume a mid-size development organization. A series of managers report to the head of development, and each manager has some number of programmers reporting to him.

Each manager appraises and ranks each of his staff. Then, they all get together (usually in a series of sessions) to rank all the workers in the organization.

A common way this is done is to rank within the levels or titles. For example, let's say the levels are associate programmer, programmer and senior programmer. The managers will first rank all the associate programmers. That is, the outcome is a ranking report of all associate programmers in the development organization. Which manager an associate programmer reports to is irrelevant to this ranking report. Let's say there are 25 associate programmers in the organization. Manager #1 happens to have 3 reporting to her, Manager #2 has 4 reporting to him, and so on. The outcome of this first stage is a report that ranks all the associate programmers in the development organization from 1 to 25.

The managers will then rank all the programmers, and finally they will rank all the senior programmers. The key point here is that the ranking is done by, or within, each job level.
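To make the shape of this concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each appraisal boils down to a single numeric score the managers have agreed on - in reality the ordering comes out of discussion, not a formula - and the names and scores are made up.

    from collections import defaultdict

    # Hypothetical appraisal data: each person has a job level and an
    # agreed appraisal score (illustrative only - real rankings come
    # out of the managers' discussions, not a single number).
    staff = [
        {"name": "Ana", "level": "associate programmer", "score": 71},
        {"name": "Ben", "level": "programmer", "score": 84},
        {"name": "Cho", "level": "associate programmer", "score": 90},
        {"name": "Dee", "level": "senior programmer", "score": 78},
    ]

    def rank_within_levels(staff):
        """Produce one ranking report per job level, best first."""
        by_level = defaultdict(list)
        for person in staff:
            by_level[person["level"]].append(person)
        return {
            level: sorted(group, key=lambda p: p["score"], reverse=True)
            for level, group in by_level.items()
        }

    for level, report in rank_within_levels(staff).items():
        print(level, "->", [p["name"] for p in report])

Notice that each report is sealed inside its own level: nothing in the output tells you whether the best associate programmer outperformed the worst senior programmer. That is exactly the information the alternative process described below recovers.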

Why do companies produce ranking reports? A ranking report can be produced for many reasons including as a key indicator for promotions and salary increases.

Here's how that process usually works. The ranking reports are passed to H.R., who have the final say on who gets what. They typically already have the numbers for promotions by job level and also the numbers for salary increases. They will take the ranking reports by job level, normalize them, and come back with their list of promotions - e.g., here are the 3 associate programmers that will be promoted to programmer.

I've left out a lot of details, as they can vary, but this basic shape is common in larger companies. I have seen it several times.

So, what's wrong with it? Well, plenty if you ask me. Ranking staff within their levels loses a great deal of valuable information that management could obtain if they ranked in a different way (which I'll explain shortly). Furthermore, H.R. applying bell curves and normalizing within levels is not just sub-optimal; it is often plain unfair. It assumes that staff numbers, skills, and performance are normally distributed within every level across the organization. You can't just say we'll promote 3 associate programmers this year and 2 programmers. What if the associates mostly performed poorly (which can easily happen if the hiring was not good)? 3 associates get promoted no matter what. What if there is a much higher percentage of outstanding programmers than there is of outstanding associate programmers this year? (Again, this can easily happen.) The programmers are penalized, as only 2 will get promoted no matter what.

There are a lot of problems here, and doing things by or within job levels is a big contributor to them.

An alternative I've seen, one that reduces conflict and provides better reward and motivation, comes from IBM back in the '80s.

Each manager appraises all their staff. Then each manager ranks his own staff irrespective of their level. For example, manager #1 has 3 associate programmers, 5 programmers, and 2 senior programmers reporting to him - a total of 10 staff. This manager, after appraising all 10 staff, ranks them all from 1 to 10 irrespective of their level. (Yes, this is subjective - of course it is. Yes, it can be hard, and yes, it can be temporarily influenced by who happens to be on the critical project at this point in time, and so on. All of this is true, and all of these concerns apply equally regardless of which ranking process is being used. The purpose here is to show you a better ranking process, not to explain the issues that are common to any ranking process.)

Each manager does the same. Then, all the managers meet to rank all the programmers across the organization irrespective of their level. Here's how to do it.

Let's say there are 8 managers meeting in a room with a whiteboard. Each manager puts their number 1 employee's name on the board. Thus, we have 8 names on the board, which are the number 1 ranked employees for each manager. The managers now discuss and decide who is number 1 in the organization from this list of 8 names. That person's name is recorded as #1 on the organization ranking report and is removed from the list of 8 on the board - leaving 7 names. The manager of that person then writes their number 2 ranked employee on the board, making 8 names again. All the managers then discuss and decide who is number 2 in the organization from this list of 8 names. That person's name is recorded as #2 on the organization ranking report and is removed from the board - leaving 7 names - and the manager of that person writes the name of their next highest ranked employee on the board, making 8 names again. And so the process goes.
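For those who like to see a process as code, here is a minimal sketch of the whiteboard procedure in Python. The managers' discussion is modelled as a choose_best() callback - in reality that decision is human judgement, not a computation - and the manager names and employee lists are made up.

    def rank_organization(manager_lists, choose_best):
        """Merge per-manager rankings into one organization-wide ranking.

        manager_lists: dict mapping manager -> list of names, best first.
        choose_best:   function taking the current candidate names (the
                       names on the whiteboard, one per manager) and
                       returning the one the managers agree is best.
        """
        queues = {m: list(names) for m, names in manager_lists.items()}
        # The whiteboard: the highest remaining name from each manager.
        board = {m: q.pop(0) for m, q in queues.items() if q}
        report = []
        while board:
            winner = choose_best(list(board.values()))
            manager = next(m for m, name in board.items() if name == winner)
            report.append(winner)
            # The winning manager's next-ranked employee goes on the board.
            if queues[manager]:
                board[manager] = queues[manager].pop(0)
            else:
                del board[manager]
        return report

    # Example with three managers; a stand-in choose_best picks
    # alphabetically, purely so the sketch runs end to end.
    managers = {
        "mgr1": ["Ana", "Raj", "Tom"],
        "mgr2": ["Ben", "Sue"],
        "mgr3": ["Cho", "Ed", "Lee", "Zia"],
    }
    print(rank_organization(managers, choose_best=min))

Structurally this is a merge of the per-manager rankings: each round of discussion compares only the 8 names currently on the board, never the whole organization at once, which is what keeps the sessions manageable.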

The outcome is a ranking report that ranks all programmers irrespective of their level and who they report to. If a manager happens to have a higher percentage of top performers, those employees are in no way disadvantaged in terms of their placement in the ranking report. Similarly, if a manager happens to have a high percentage of average or poor performers, they are not artificially and incorrectly ranked higher than they should be. The same applies to levels. If the percentage of outstanding programmers is greater than the percentage of outstanding associate programmers, the programmers are in no way disadvantaged in terms of their placement in the ranking report.

This ranking report also makes for far more insightful observations. You can see, for example, that a particular associate programmer has ranked well above most other programmers and even some senior programmers. Actually, in either process, it is pretty easy to spot a single star performer like this. Instead, what you need to envisage is a ranking report of, say, 250 names (a modest-size development organization) and the mix of levels in that report. You can see those associates who are extremely valued right now, those seniors who are not, and so on. And, of course, the unfairness of bell curves and normalization by level against H.R. quotas is removed. H.R. will still have their quotas, but in a process like this they will not be by level. Furthermore, this kind of ranking report makes for a far more compelling argument to try and get that extra promotion or two. I'll contrive an extreme case to make the point.

Say there are 5 associate programmers who ranked within the top 10 of this 250 person organization. Under the common process I described earlier, only 3 will get promoted (as the H.R. number for promotions from associate to programmer was 3). Not so with the ranking report process I have described. The promotion decisions are now not tied to levels, and even if the total promotion allocation or quota has been exhausted, a case like this, where 5 associates rank in the top 10, makes for a compelling reason to alter the mix or get a few more promotion slots.


© Nebulon Pty. Ltd.

With thanks to the review team: Phil Bradley, David Bye.