Ratings Game

Function point analysis (FPA), a method of quantifying software's functional requirements, has been widely discredited for some purposes, such as calculating the productivity of an IT group. But Nielsen Media Research Inc. has been using FPA successfully in a new way. About three years ago, Nielsen Media began using an FPA-based productivity model to estimate the size, duration and staffing requirements of software projects up front. Today Nielsen Media also uses the model to help negotiate contracts with an outsourcer.

Nielsen Media Research needs little introduction: It is the pre-eminent supplier of data on measured TV viewing. The New York City-based company's ratings help advertisers and TV producers understand the value of individual programs and guide the buying of US$45 billion worth of TV advertising every year.

Nielsen Media entered the Internet ratings business more than a year ago when it launched a joint venture with NetRatings of Milpitas, Calif. Nielsen Media has since become the majority owner of Nielsen/NetRatings. The service does for the Internet what Nielsen Media has traditionally done for TV: measure usage from the end user's perspective. With the two media converging, the same ad buyers are now interested in both TV and Internet viewing.

One of Nielsen Media's core competencies is its ability to develop systems to acquire, measure, manage and distribute viewing data, says CIO Kim Ross, who joined Nielsen Media eight years ago as vice president of software development. Accordingly, the company maintains an operations and development center with 300 software developers and 100 systems administrators in Dunedin, Fla.

When Ross became CIO about three years ago, he wanted to understand why software development projects at the company took, on average, twice as long as expected. Nielsen Media used standard techniques to plan projects, including estimating lines of code and task-based Gantt charts. But lines of code bear little relation to software function. And while Gantt charts were OK for tracking tasks, they were inadequate for estimating size. Indeed, "the Gantt chart is most accurate at the end of the project," Ross notes wryly.

Instead, Ross turned to KnowledgePlan, a productivity modeling tool from Software Productivity Research (SPR). By analysing a given project's size, duration and staffing together, the model spells out the optimal resource requirements for the project. Users can also adjust the three variables to generate what-if scenarios.


KnowledgePlan relies on function point analysis (FPA) for the input it uses as a measure of the size of a project. FPA measures software size by assigning a numeric value to each function provided to the user. For example, spell checking a word in a word processing document is a function; so is spell checking the entire document. The latter is more complex, however, and involves more data files, so it gets a higher numeric value than the former. Because function points (FPs) measure the things users actually do with a piece of software, they relate directly to its business value. The people who count FPs examine each business function and, guided by standardised criteria from the International Function Point Users Group (IFPUG), assign a numeric value to each one. Counting is done by hand. With the help of KnowledgePlan, these numbers are totalled and normalised to produce a single number that represents the size and complexity of the software.
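The counting step can be sketched in miniature. The weights below are the published IFPUG weights for the five standard function types; the word-processor functions and their complexity ratings are hypothetical, chosen only to echo the spell-checking example above.

```python
# Illustrative sketch of an unadjusted function point count.
# The weights are the standard IFPUG weights for the five function
# types; the example functions below are hypothetical.

IFPUG_WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # external interface files
}

def count_function_points(functions):
    """Sum IFPUG weights over a list of (name, type, complexity) tuples."""
    return sum(IFPUG_WEIGHTS[ftype][complexity]
               for _, ftype, complexity in functions)

# Hypothetical functions echoing the word-processor example:
functions = [
    ("spell-check one word",       "EQ",  "low"),      # weight 3
    ("spell-check whole document", "EO",  "high"),     # weight 7
    ("dictionary file",            "ILF", "average"),  # weight 10
]
print(count_function_points(functions))  # 20
```

The per-function weights are what the standardised IFPUG criteria supply; the hand-counting the article describes is the judgment of classifying each function's type and complexity.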

The productivity model doesn't end with the FP count, however. More than half of the KnowledgePlan model is based on more than 100 questions the project planners must answer, such as "Is the software brand-new or an upgrade?"; "What platform is it being built on?"; and "How familiar are staff with that platform and the programming languages to be used?"
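KnowledgePlan's question-and-answer model is proprietary, so it cannot be reproduced here, but IFPUG's published value adjustment factor (VAF) illustrates the general mechanism of scaling a raw count by qualitative answers about the project. This is a sketch of the IFPUG formula, not of KnowledgePlan itself, and the ratings are invented.

```python
# IFPUG's value adjustment factor (VAF): 14 general system
# characteristics, each rated 0-5, scale the unadjusted count.
# This is the published IFPUG mechanism, shown for illustration;
# KnowledgePlan's own 100+-question model is proprietary.

def adjusted_function_points(unadjusted_fp, gsc_ratings):
    """Apply the IFPUG VAF to an unadjusted function point count."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # VAF ranges from 0.65 to 1.35
    return unadjusted_fp * vaf

# Hypothetical ratings (e.g. heavy data communications, little reuse):
ratings = [4, 3, 3, 2, 1, 2, 3, 2, 1, 0, 2, 3, 1, 2]  # sum = 29
print(round(adjusted_function_points(500, ratings), 2))  # 470.0
```

The point either way is the same: the raw size number is moderated by answers about platform, staff familiarity and other project conditions before it drives staffing and duration.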

Before applying the SPR tool to live projects, Ross used it about three years ago on six completed projects to establish a baseline, which Nielsen Media continues to update. Next, Ross used the methodology to size, price and schedule several new software projects; he started with six and has since used it on about 20 more. Ross encourages his project managers to use the methodology on every project, but the final decision is the manager's. Since Ross adopted the methodology, it has been used on about half of all projects.

"The big question in any project is, how do you know how big it is going to be at the outset?" says Ross. "You never know with great accuracy. But with this method you can take an educated guess." By the time Nielsen Media has a detailed design early in the life cycle of a project, the scope estimate is reasonably accurate, Ross believes. Even so, the scope grows in nearly every project, he admits.


One of the most recent projects on which Nielsen Media is using FPA is a strategic one--an upgrade to the Global Sample Management System (GSMS) for Nielsen/NetRatings, which keeps track of the Internet ratings sample, a group of 40,000 homes nationwide that volunteer to participate in ongoing ratings. "The current system is generation zero, weak and not up to the job," says Ross.

For this project, Nielsen Media has teamed up with Teaneck, N.J.-based Cognizant Technology Solutions Corp. (CTS), an outsourcer whose programmers are in Chennai, India. Nielsen Media has a five-year relationship with CTS, which also uses FPA to assess projects. The intent was to negotiate a fixed-price contract. Ross prefers the term "firm price," because the outsourcer does not guarantee the so-called fixed price; the actual cost can vary based on scope creep. "The secret is for both parties to agree on the project definition at the beginning," says Ross. "If you don't do that, you run into trouble."

In late fall of 1998, Nielsen Media and CTS each did their own FP counts based on the current GSMS. Ross says FPA is always easier and more accurate when there is an existing system. Nielsen Media had its counters--three quality assurance staff--do their FPA, and then input their count into KnowledgePlan. CTS did its own FP count and used its own productivity model.

There was bad news: Speaking in representative terms (Ross declined to disclose actual counts for the project), Nielsen Media tallied 300 FPs, and CTS counted 500. But there was also good news: "When CTS used its FP count and its own productivity model, it came up with the same duration and staffing that we did using our FP count and KnowledgePlan," says Ross.

Next, a discussion ensued among the counters to understand the difference in numbers. The problem: Nielsen Media uses IFPUG-certified counting standards; CTS doesn't. "The certified standards are a much better practice and produce consistent results," says Ross. "But CTS's standards were still better than working with an outsourcing firm that doesn't use this metric at all."

Once he knew why CTS's count differed--and given that CTS estimated the same staffing and duration as Nielsen Media did--Ross believed CTS understood the project's scope and that its bid was credible. The two were very close on dollars per FP. They agreed to use CTS's FP counts throughout the project. "Once we understood why their count was bigger we agreed they could use their technique for this project," says Ross.

The FP count of 500, however, was based on the existing system; in the upgrade, Nielsen Media business users expected new functionality. In other projects, Nielsen Media has taken the time to delve deeper into the specs of the new functionality to come up with a more accurate FP count. Time was of the essence in the GSMS project, however, so Nielsen Media and CTS simply agreed to 500 FPs plus 20 percent--or 100 FPs--for new functionality.

This ran counter to Ross's rule of thumb, which is that the FP count of an upgraded system will be twice that of the old one. He later kicked himself for not raising expectations that the new project would run about 1,000 FPs. "If I had, everyone would be happy," he says.


In early 1999, seven CTS programmers began writing code for the system, which they were expected to complete in 10 months. As November approached, CTS was adding staff to try to meet the deadline. When it became evident that the deadline would not be met, a new FP count was conducted. The GSMS had grown to 900 FPs. Besides the original 500 plus 20 percent, there were 300 FPs attributable to features and functions that had crept into the project.
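Using the representative figures Ross cites (not actual counts), the recount arithmetic against the contractual baseline looks like this:

```python
# Tracking scope growth against the contractual baseline, using the
# representative numbers from the GSMS recount (not actual counts).

baseline_fp = 500                       # agreed count from the existing system
buffer_fp = round(0.20 * baseline_fp)   # 20% allowance for new functionality
contract_fp = baseline_fp + buffer_fp   # 600 FPs covered by the contract

recount_fp = 900                        # count taken at the missed deadline
creep_fp = recount_fp - contract_fp     # 300 FPs of scope creep

growth = creep_fp / contract_fp
print(contract_fp, creep_fp, f"{growth:.0%}")  # 600 300 50%
```

The baseline is what lets both parties quantify the growth rather than argue about it.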

How did that happen? The way it always does: The developers and users had added a button here, a new feature there, and soon the project was much larger than it was originally. But Nielsen Media and CTS had put a stake in the ground at the beginning from which they could measure growth along the way. When the business users learned the project had grown, they said they needed the extra functionality, says Ross.

Nielsen Media and CTS agreed that 600 FPs were covered by the original contract, and that 300 FPs represented new functionality. The business users agreed to pare back the extra FPs by 10 percent to 15 percent, to pay for the additional functionality they decided to keep and to accept a later deadline. As a result, CTS nearly doubled the number of programmers working on the project. It was given another 10 months and, at press time, was in its final stages. Ross declines to divulge the total cost of the project.

Clearly, Ross's methodology does not keep projects from growing. "This doesn't eliminate scope creep," he readily admits. However, using the tool during the project's life cycle does help manage the problem, as the GSMS example illustrates. Establishing up front a benchmark and a methodology for checking against that FP benchmark eliminates a lot of ill feelings and nasty negotiations midstream.

Internally, when Ross attends project meetings with business users, he has data to explain why a project is running late and to propose alternatives. Sometimes the business users reduce the functionality. More often they pay for additional staff or an extended deadline. Whether he's dealing with his own staff or with the outsource team, Ross says, "If you don't have quantitative numbers, then you just get into arguments. This way, the developers and the business users understand the project and collaboratively manage it."

As always, there were lessons learned. The best practice, Ross says, is to run the FPA and productivity model at the project's launch and again when there is a full list of functional requirements. Then do another analysis anytime there is a major modification in the functional definition of the project. For the GSMS, they used the method only at the beginning and when they realised there was a problem.

Ross wants to use FPA and KnowledgePlan in future negotiations. CTS is currently the only Nielsen Media outsourcer that uses FPA methodology, but it also does more work for Nielsen Media than any of the others. The two companies are now using the methodology on another CTS project currently in the proposal stage.

Ross thinks the methodology is better than anything else he's seen. "I would be surprised to find a contract using FPs that had a bid widely different from our expectations," Ross says. "After we worked out the one initial difference with CTS, we were in the ballpark."



Nielsen Media may have found a sound application for function point analysis (FPA). Using FPA to manage outsourcing is certainly an improvement on less metrics-based agreements.

One issue I have with KnowledgePlan specifically and FPA in general, though, is that they don't provide statistically useful results--namely, a probability distribution showing the chance that software costs will fall between certain values. When I last reviewed KnowledgePlan, the accuracy of estimates was reported simply as "high," "medium" or "low." The problem is that the user doesn't know what that really means. Does "high" accuracy mean a 90 percent chance that the estimate will be within plus or minus 20 percent of the actual figure, for example? Or does it mean something else? If this were made clear in statistical terms, it might have been possible to determine whether the differences between Nielsen Media's and Cognizant's estimates were within the expected range of statistical error or due to more profound differences.

Note that the International Function Point Users Group often claims an error rate of only plus or minus 11 percent with FPA. But what it really means is that FP counters are within this range of each other's counts--not that this is how accurately their cost estimates compare to actuals. My research with clients who use FPA shows that, realistically, an estimate based on FPA will get you within 20 percent of the actual effort only about 55 percent of the time. Of course, that's certainly an improvement over a best guess.
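What a statistically expressed estimate might look like can be sketched by modelling the ratio of actual to estimated effort as lognormal and reporting the probability that the actual lands within a given band of the estimate. The sigma value below is an assumed calibration parameter chosen to roughly reproduce the 55 percent hit rate cited above; it is not a published KnowledgePlan or IFPUG figure.

```python
# Sketch of a statistically useful estimate: assume
# ln(actual / estimate) ~ Normal(mu, sigma) and report the chance
# the actual falls within +/- band of the estimate. sigma here is
# an assumed calibration parameter, not a published figure.
import math

def p_within(band, sigma, mu=0.0):
    """P(actual within +/- band of estimate) under a lognormal error model."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo, hi = math.log(1.0 - band), math.log(1.0 + band)
    return phi((hi - mu) / sigma) - phi((lo - mu) / sigma)

# sigma = 0.27 roughly reproduces a 55% chance of landing within +/-20%:
print(round(p_within(0.20, sigma=0.27), 2))  # 0.55
```

A vendor reporting "high accuracy" could, in these terms, state the sigma (or the full distribution) behind the label, letting a buyer judge whether two differing estimates are within expected error of each other.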

In terms of managing scope creep, FP provides only half the picture at best. The real cost of scope creep is much more than the additional development costs. I routinely do "scope control" analysis on IT projects for which I am asked to do a risk/return analysis. Usually, the main cost of scope creep is deferring the benefits of the initially defined scope, since the benefits of all the features must wait to be realised while the project takes longer; on average, this makes up about half the true cost of adding new features. About 20 percent is increased future maintenance, and about 1 percent is an increased risk of project cancellation (the probability of cancellation is directly related to project duration). Telling the business that the cost of scope creep is only the cost of extra development makes new features look much cheaper than they really are.
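A back-of-envelope sketch makes the point concrete: if development is only the remaining share (roughly 29 percent) of the true cost of added scope, quoting the development cost alone understates the feature by a factor of more than three. The percentages are the rough averages cited above; the quoted cost is hypothetical.

```python
# Back-of-envelope gross-up of scope-creep cost. Shares are the rough
# averages cited in the commentary; development is the remainder.
SHARES = {
    "development":        0.29,  # the only part usually quoted
    "deferred_benefits":  0.50,  # initially scoped benefits arrive later
    "future_maintenance": 0.20,  # more code to maintain
    "cancellation_risk":  0.01,  # longer projects fail more often
}

def true_cost(dev_cost):
    """Gross up a quoted development cost to the full cost of added scope."""
    return dev_cost / SHARES["development"]

quoted = 100_000  # hypothetical development cost of the added features
print(round(true_cost(quoted)))  # 344828: over 3x the quoted figure
```

The division is crude, but it shows why a business case built on development cost alone makes new features look far cheaper than they are.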

Bottom line: Nielsen Media may have found a good way to manage outsourcing contracts with FPA. But it should keep in mind that FPs are just another type of measurement, and that all real measurements have error that should be expressed statistically. Finally, FPA looks at only part of the picture--labor costs--so Nielsen Media should be careful about using FP alone for scope control.

Valuation Flow Chart

Nielsen Media's negotiation using FPA

1. Perform function point count; outsourcer performs its own.
2. Input count into productivity model; outsourcer does the same.
3. Compare counts and resource requirements.
4. Identify cause of differences.
5. Agree upon final count and requirements.
6. Reassess count when major changes occur.
7. Renegotiate scope and requirements.

Have a value methodology you'd like to share and have analysed? Contact us at

Bill Roberts, a freelance writer based in Los Altos, Calif., covers business, technology and management issues.

Douglas Hubbard is president of Hubbard Decision Research in Glen Ellyn, Ill., and inventor of the applied information economics method for valuing IT projects. He can be reached at 630 858-2789 or
