I bet you could point to the weakest software developer in your department with little hesitation. But if I asked you to identify your criteria, it's unlikely that "metrics" would play any part in that judgment.
Since the earliest days of software development, the Suits have grasped at metrics to evaluate programmer performance. They started out by looking at the number of lines of code generated, and then the methods got even sillier. Every discussion about programming metrics begins by saying, "We know this is only slightly useful, but...."
The purported goal in collecting such measurements is to judge team members' output. With the best of intentions, a manager wants to use that data to strengthen and reward top performers, and also to identify weaker team members so they can be trained and improved. (Right. Its usefulness in deciding whom to fire or lay off would never be relevant, I'm sure.)
The problem is... the metrics don't really help managers accomplish that. Just as you shouldn't confuse the map with the territory, looking too hard at metrics—any metrics—causes you to put your attention on the ruler, not the object being measured. Looking only at the yardstick makes you lose sight of the mission, which is probably something like "create happy users" or "deploy useful software on time and on budget."
I do understand the goal. Nobody wants to work with someone who is bad at what they do. Managers want to be fair and... well, not dispassionate, exactly, but at least even-handed. But judging a team member's contributions isn't easy. It's even harder if you're not looking at the developer's code day-to-day, or if a lack of technical expertise keeps you from understanding what you're looking at. (That's okay; you're a manager. Developers don't expect you to write code anymore.)
Yet, I think you could do a quiet survey in any programming shop, asking everyone who the weakest contributor is—and probably get the same answer from most team members. (My gut tells me that you'd get less consensus from "Who's the best programmer?" but that's another discussion.) This all came to mind recently when, in the comments to James Turner's When Agile Projects Go Bad, a few Agile un-fans recommended that companies just "fire the bad programmers." Okay, maybe—but how do you know who they are?
That was the question I posed on LinkedIn, and almost immediately I got a few dozen responses. You can read through the entire discussion if you like (I think it's fascinating), but in the rest of this blog post I summarize the wisdom I gained from some very smart software developers, and then follow with a few thoughts of my own. You'll notice one important thing: metrics aren't part of the criteria.
Developers whom others label as bad:
Don't care about the quality of their work. This shows in both personal attitude and in code quality (such as a lack of software testing). "The worst programmers tend to not buy into the project emotionally, intellectually and professionally," wrote one computer consultant. "They tend to show up, do the bare minimum to get by, and then leave." Their lack of engagement usually hurts the project, he says, in big ways (such as misses in requirements) and subtle ones, too. Or, as another put it concisely, "The one [who] makes most money out of least work."
Attitude matters a lot to team members, much more than technical skill does. (Ignorance is curable, after all.) "There is a difference between someone who has potential vs. someone who just does not get it," wrote a project manager, "even if the latter may be stalled in development but temporarily appears superior to the one with potential." Wrote one woman, "The worst programmer on the team is the person who creates a counterproductive environment for other team members. Regardless of this person's technical ability or volume of output, he/she has a net negative effect on the team's productivity."