Phase 2: 2001-2003 - Pilot Systems and Productivity Benchmarking
Next, Rude added the real world to the mix. Ideally when he starts a pilot, Rude wants to visit other sites that have adopted the technology already and learn from them so that the pilot program will avoid simple mistakes that could derail the project. "It was difficult finding enterprise DVR customers at that time," Rude recalls. "But the gaming industry was adopting it. So we visited a lot of casinos."
Rude also had to choose between a greenfield and a legacy setting for the pilot. Testing the new system in new buildings would have been easier; deploying in older facilities meant more retrofit headaches. Still, he chose older facilities rather than a greenfield, for two reasons: first, it made a fairer comparison with the time-lapse VCR systems already running in his other old facilities, and second, if the pilot proved the value of DVR technology there, he would have done the hardest work already.
Rude had 808 cameras connected to dozens of DVRs at four sites, chosen for reasons both practical and arbitrary. Mainly, he wanted them to roughly match in size the VCR-equipped sites he would compare them with.
Rude's team then began collecting data for his productivity benchmark, calculated site by site. He based his metrics on the main ongoing expense of video surveillance (after capital expense, which will come up in Phase 3): man-hours spent retrieving and reviewing footage (see "The Productivity Bump", below).
Once he had this figure for each site, he averaged all the sites of each type (DVR or VCR) together to get one average monthly viewing time per camera.
The per camera per month numbers, however, are small, hard to apply and not very useful, because most deployments for Intel will number in the hundreds of cameras. So Rude made his data more real-world and executive-friendly by calculating the viewing time for 100 cameras over the course of a full year. He took his per camera per month number and multiplied it by 100 (for the camera total) and then by 12 (for a full year).
This gave him the time staff spent looking at tape over a full year at a small Intel site (100 cameras), a figure that could be easily adjusted for larger sites, those with, say, 500 cameras.
Still, dollars are what matter, so Rude had to attach an expense to all of this camera viewing. To do that he made one final determination: an hourly cost of dealing with video surveillance events. He chose $US50 ($A75) per hour. While it wasn't a perfectly scientific number, it wasn't a capricious one either. To arrive at the figure, Rude factored in the cost of paying staffers to retrieve the video, the cost of devoting investigation time and energy to an event (thus taking those away from other work), the cost of any related investigations coming out of the event, and other factors.
By multiplying his annual hours of viewing time for 100 cameras by $US50 per hour, Rude arrived at his productivity benchmark: a full 33 percent savings by using digital video surveillance.
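The benchmark pipeline described above can be sketched as a short calculation. The per-site viewing hours below are hypothetical placeholders (chosen so the comparison lands on the reported 33 percent figure); the article gives only the method, the $50-per-hour rate, and the final result, not Intel's raw measurements.

```python
# Sketch of Rude's productivity benchmark, as described in the article.
# HOURLY_COST and the 100-camera/12-month normalization come from the text;
# the per-site figures are invented for illustration.

HOURLY_COST = 50   # $US per hour of dealing with surveillance events
CAMERAS = 100      # normalize to a small, 100-camera Intel site
MONTHS = 12        # annualize the monthly figure

def annual_viewing_cost(monthly_hours_per_camera_by_site):
    """Average the per-camera monthly viewing hours across sites of one
    system type, scale to 100 cameras over a year, and price at $50/hr."""
    sites = monthly_hours_per_camera_by_site
    avg_per_camera_month = sum(sites) / len(sites)
    annual_hours = avg_per_camera_month * CAMERAS * MONTHS
    return annual_hours * HOURLY_COST

# Hypothetical per-camera monthly viewing hours at each of four sites
vcr_sites = [1.6, 1.4, 1.5, 1.5]   # time-lapse VCR sites
dvr_sites = [1.0, 1.1, 0.9, 1.0]   # DVR pilot sites

vcr_cost = annual_viewing_cost(vcr_sites)   # 1.5 hrs avg -> $90,000/yr
dvr_cost = annual_viewing_cost(dvr_sites)   # 1.0 hrs avg -> $60,000/yr
savings_pct = (vcr_cost - dvr_cost) / vcr_cost * 100   # ~33 percent
```

The figure for a 500-camera site follows by scaling the 100-camera result linearly, as the article notes.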
It's important to note that life expectancy of systems, maintenance and, most important, the capital expense of installing a new system were not factored in here. Rude's pilot was done to prove productivity gains once the system was installed. Now came the hard part for Rude: He had to prove that the large-scale capital investment and associated costs of maintaining digital video surveillance would save his company money on the bottom line.