To remain PCI-compliant, I conduct quarterly security assessments of our infrastructure. These assessments consist of external testing of our internet-facing PCI resources by an approved scanning vendor (ASV), plus what I call internal PCI full-population scans.
We do the external scanning every month, even though PCI requires only quarterly scanning. If there is a problem, I want to catch it sooner than two months or more after it first cropped up. And the frequency isn’t a problem, because conducting the scans is easy, even though I use three different ASVs.
Most ASVs make it easy to configure and run assessments, and there aren’t many options in the scan profile as defined by PCI. I simply use a web browser to log into each ASV site; enter the PCI resources’ URLs; set the schedule for date, time and frequency; and that’s about it. While several items, such as running weak versions of SSL, needed attention the first couple of times we did this, our environment now typically scans clean. I am notified by email when the scans are completed, and then I log in and download the scan results to a secure location. Any problems are sent along to the right people for remediation.
I use Tenable Nessus to conduct the internal PCI full-population scans, but because of server proliferation, we can manage these scans only on a quarterly basis. When I first started at this company, we had perhaps 400 servers responsible for serving our commercial application. Over the past couple of years, with expanding services, that number has grown to approximately 1,000 production and development servers handling the web, applications, databases, load balancing, monitoring, logging, email, storage and firewalls. Of those, roughly 500 are in scope for PCI, making for a hefty scan workload.
To avoid outages and streamline reporting, I break up the servers into groups that are scanned on a rotating schedule on weekends throughout each quarter.
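The rotation itself is simple bookkeeping. As a rough sketch (the hostnames, group count and twelve-weekend quarter are my own illustrative assumptions, not our actual inventory), splitting the in-scope servers round-robin across the weekends of a quarter might look like this:

```python
# Split PCI in-scope hosts into rotating weekend scan groups.
# Hostnames and the number of weekend windows are illustrative assumptions.

def scan_groups(hosts, windows):
    """Assign each host to one of `windows` weekend scan slots, round-robin."""
    groups = [[] for _ in range(windows)]
    for i, host in enumerate(sorted(hosts)):
        groups[i % windows].append(host)
    return groups

# ~500 in-scope servers spread over ~12 weekends in a quarter
in_scope = [f"srv{n:03d}.example.internal" for n in range(1, 501)]
weekends = scan_groups(in_scope, 12)
print(len(weekends[0]))  # first weekend's group: 42 hosts
```

Round-robin assignment keeps each weekend's scan load roughly equal, so no single maintenance window risks overwhelming the network or the scanner.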
Then there’s the fact that the scans never come up clean. And because system administrators are constantly tweaking servers and inadvertently bumping them out of compliance, scans yield different issues from one quarter to the next. I want each class of server (web, applications, databases, etc.) to conform to a secure baseline configuration so that scans will produce similar findings for all like servers. Now I’m getting some help there from a tool called Chef, which the operations team recently deployed to manage large numbers of servers.
As often happens when new technology is introduced into our environment, security wasn’t a major component of our initial Chef deployment. But now that I see the potential in this technology, we will start to use it to help address my current security problem.
Using Chef, whenever a new server is needed, the team simply deploys a baseline installation of Linux with a special agent that calls into a central Chef server to obtain configuration information. So if the resource will be used as a web server, the Chef server will ensure that the Linux server is configured with the latest operating system patches, required services, Apache and so on.
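To give a flavor of how that works (this is an illustrative sketch, not one of our actual cookbooks; the package, file paths and template names are assumptions), a Chef recipe for the web-server role might pin the baseline like this:

```ruby
# Illustrative Chef recipe for a web-server role (names are hypothetical).
# Keeps Apache current, enables the required service, and enforces a
# baseline configuration file pulled from the central Chef server.

package 'httpd' do
  action :upgrade            # keep Apache at the latest packaged version
end

service 'httpd' do
  action [:enable, :start]   # required service for the web role
end

template '/etc/httpd/conf/httpd.conf' do
  source 'httpd.conf.erb'    # baseline config template from the Chef server
  mode   '0644'
  notifies :restart, 'service[httpd]'
end
```

Because the agent converges the server back to this recipe on every run, a sysadmin's ad hoc tweak to the config file gets overwritten at the next check-in, which is exactly the drift problem my scans keep catching.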
Another aid here is that we recently adopted the Center for Internet Security (CIS) benchmarks to ensure that server OS, Apache, Oracle and other major applications and infrastructure conform to a standard secure baseline. I will ask that the defined CIS benchmark configurations be included in the Chef templates for each type of resource.
This is going to take a lot of time, resources and testing to get things right. But my hope is that after the initial hard work, only minor changes will need to be made from time to time to keep the templates current. And for compliance, this effort should result in clean PCI scans.
This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at firstname.lastname@example.org.