Security and Google Apps
- 22 August, 2011 22:36
How does Google’s approach to security differ from traditional models?
It’s a bit of a different model from trying to protect the endpoint devices, the laptop, the desktop. We focus on protecting the data, and keep very limited data at the endpoint.
We are an internet company — born and raised on the internet — and we have built security as part of the core DNA of our products. I find it a model that is really scalable to millions and millions of users, both from a technology and operations perspective. We rely heavily on data replication and no single point of failure. We have learned that IT systems fail – that’s their nature – so how do you build something that heals itself and doesn’t depend on any single failure?
Where do you think CIOs and CISOs are spending most of their time, money and effort in terms of their infrastructure?
One of the things they still tell me – and I’m a little surprised, to be honest – is that patching is still a huge problem. Most organisations have a very heterogeneous environment: multiple operating systems, multiple versions of those operating systems, and multiple applications and user stores, each with different versions. And we all know software vendors issue security patches on a regular basis. It is [the CISO’s] job to know whether those patches are applicable and get them deployed – all before the bad guys reverse engineer them.
According to Microsoft, companies take between 25 and 56 days to deploy an OS patch after it has been released. That’s a pretty scary number – to have a known vulnerability exposed for that long. And most of the CIOs I meet with say they wish they were in that category; they feel they’re at least one or two deltas beyond that. So it’s still a bit of a problem, and it’s a problem that can go away if you move into the Cloud, because now you have no more servers to patch. So it’s very important that your Cloud provider doesn’t have this problem and that you’re not just handing the same problem over to somebody else.
The other issue CISOs raise with me is around the data and where the data resides. Sixty per cent of data still resides on unprotected laptops and desktops. One out of every 12 laptops is lost or stolen within the first 12 months of purchase. Those USB keys we find so convenient to use – 66 per cent of us admit to losing them, with 60 per cent of those carrying private corporate data. These aren’t, typically, malicious users; these are users who are really trying to get the job done, so they’re taking the data with them.
What CISOs have found is that they’re not making it easy for users to do the right thing. By putting the data in the Cloud and making it available anytime, anywhere, you don’t have to worry about these issues and user behaviour starts to change.
Even the FBI, one of the most security-conscious organisations around, has admitted to losing close to 150 laptops in the last four years. Two million laptops are lost or stolen every year.
And really when you think about the Cloud, you almost need to start thinking about a different paradigm. It’s changing that mindset.
How does Google approach the security issue internally?
We actually have one of the largest security practices I know of, with over 250 people dedicated to security. I don’t know of too many organisations, other than perhaps intelligence agencies, that have that kind of security resource. I can’t guarantee it, but when you move into the Google Cloud, you are getting 250 people looking over the security of your data. And I’m very lucky to work alongside some of the world’s leading experts in things like drive-by downloads and malware. And that doesn’t count our internal audit and engineering compliance teams, our physical security teams and the security people who actually sit within the product teams.
Do you find Google is a honeypot for attacks because of your size and who you are?
I think we’re being challenged on a regular basis. People asked Jesse James why he robbed banks, and he said, ‘that’s where the money is’. So I believe Cloud providers, or anybody who has large amounts of data, are going to be potential targets. It’s something that we have realised, and we manage that threat. We monitor new threats and we react accordingly.
But I think also the scale on which we operate gives us an advantage. For example, we process about 2 billion emails every day, and with that comes a tremendous amount of knowledge about new attacks and new viruses. [It means] we can protect our users from that without them installing any software or any updating signatures files. So yes, there is obviously the target, but with that also comes some good knowledge of what is going on.
From a physical security perspective, most of our data centres are undisclosed and unmarked, although we have publicly announced the locations of some. I’d say we have the typical security you would expect from a world-class data centre: 24x7 guard coverage, access logs, video surveillance, thermal imaging cameras, multiple backups et cetera. Regardless of which data centre it is, it has to meet the minimum security standard that we audit all our data centres against.
One of the sticking points with Cloud computing for many CIOs is the location of data, particularly for government and risk-averse organisations. How do you tackle that issue?
My personal opinion is that it is not as important where the data is located, but how it is protected and who has access. So it doesn’t matter if the data is physically stored in the United States or the Netherlands or in Ireland et cetera.
I can understand the discomfort associated with moving away from a server that I know exactly where it is, to being in the Cloud, but does it really make a difference?
How do you overcome any regulatory issues with Cloud computing?
I am finding it more in the government sector because of some laws – not specific to Australia, but in general. And I can see why government feels that by keeping it in country it helps more with economics. I think the banking industry is changing. We have banking industry customers all over the world that are using our services. I think it’s more important that you have access to the data in the country so if you have a regulatory compliance request you can respond to it from that country. I also think there is a little bit of a misconception that if the data is not stored in a country then authorities would not have access to it. That’s not the case; Google’s a US company and regardless of where the data is stored it is our responsibility to comply with US laws and all other applicable laws from other jurisdictions.
Do you make use of Safe Harbor provisions?
We are Safe Harbor compliant. It is also our policy to notify customers wherever legally possible that there’s a request for their data. And specifically in the Google Apps for business area, we prefer customers respond to those requests themselves, and we have given them the tools to do so.
How often would you have that kind of interaction?
We have been very public with our data transparency report, where we show, by country, how many requests we get for data. They typically fall into two buckets: requests to remove data and requests to produce data. While the numbers are public, what a lot of people don’t do is reverse-calculate the percentages. If you consider Gmail has a couple of hundred million users and look at the number of requests for data, you realise it’s a tiny percentage.
What are the benefits of Cloud computing from a security standpoint?
When we think about security, we think about it from a people, process and technology perspective. One of the core benefits of moving specifically to the Google Cloud is the way in which we store data. I use mail as an example, but it is true of all the applications in the Google enterprise suite. In a typical on-premise environment, all my mail would be sitting on a single mail server along with everybody else’s mail. And if I compromised that single mail server, I would get everybody’s mail. You might replicate that server from a redundancy and availability perspective.
We’ve taken a different approach. We’ve taken all of my mail, for example, and we’ve broken it into small pieces, which we have spread across our environment. We have done the same with all our consumers and business users.
Now, I don’t have a dedicated server; I have a series of servers. Rather than having just one copy, I have multiple copies, within a single data centre and within a secondary data centre. So I’m not only not dependent on a single server, I’m not even dependent on a single data centre. We built our infrastructure to accept that servers will go down, that entire racks will go down, and even entire data centres – hopefully without users ever knowing. We have zero scheduled downtime for our services. In fact, in 2010 Gmail was available 99.984 per cent of the time – that’s less than five minutes of downtime a month – and so far in 2011 it has been 99.999. If you compare that to an on-premise solution, which you have to take down at least once a month for patching and so on, it really blows that out of the water.
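The chunk-and-replicate approach described above can be sketched in a few lines. This is a toy illustration of the principle, not Google’s actual design; the chunk size, replica count and server names are all invented assumptions:

```python
import random

# Toy parameters: illustrative assumptions, not real figures.
CHUNK_SIZE = 64   # bytes per chunk (tiny, for demonstration)
REPLICAS = 3      # copies kept of each chunk
SERVERS = {       # hypothetical servers spread across two data centres
    "dc1": ["dc1-s1", "dc1-s2", "dc1-s3"],
    "dc2": ["dc2-s1", "dc2-s2", "dc2-s3"],
}

def shard(data: bytes) -> list:
    """Break one user's mail into small fixed-size pieces."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def place(chunks: list) -> dict:
    """Assign every chunk replicas in BOTH data centres, so no single
    server (and no single data centre) is a point of failure."""
    plan = {}
    for idx, chunk in enumerate(chunks):
        targets = [random.choice(SERVERS["dc1"]),  # at least one replica per DC
                   random.choice(SERVERS["dc2"])]
        while len(targets) < REPLICAS:
            dc = random.choice(list(SERVERS))
            targets.append(random.choice(SERVERS[dc]))
        plan[idx] = {"data": chunk, "replicas": targets}
    return plan

mailbox = b"From: alice\nSubject: hello\n" * 20  # stand-in for a user's mail
chunks = shard(mailbox)
plan = place(chunks)

# Losing any one server, or even one whole data centre, still leaves a copy:
assert all({t.split("-")[0] for t in p["replicas"]} == {"dc1", "dc2"}
           for p in plan.values())
```

The key property is in the final assertion: every chunk always has at least one replica in each data centre, which is what makes an entire data-centre failure survivable.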
What this also means is we have a recovery point objective of zero and a recovery time objective of one minute: if we fail over from one data centre to another, we aim to lose zero data and to complete the recovery within one minute. This is not a primary data centre with a backup; it is a primary and a secondary data centre, and they may change to balance the load and based on where users are coming from.
There is another benefit over the traditional on-premises environment. With almost every customer I have spoken to, the data on the mail server is sitting in the clear. Rather than doing that, we obfuscate – not encrypt, but obfuscate – that data so that it is not humanly readable. So if I somehow get access to a disk, I do not know how to read the data, nor do I know which data belongs to which user. In a typical environment it would be simple to understand; it would probably be called something like Eranmail.db. At Google, each of these files is given a truly random file name, so I can’t map it back, nor can I read the contents. It’s very different from the traditional storage model.
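The two properties described above – random file names and non-readable contents – can be sketched as follows. The XOR “pad” and file layout here are toy assumptions, not Google’s real mechanism; the point is simply that the bytes on disk are unreadable and the names reveal nothing without a separately held index:

```python
import os
import secrets
import tempfile

PAD = 0x5A  # fixed byte pad: reversible scrambling, deliberately NOT encryption

def obfuscate(data: bytes) -> bytes:
    # XOR is its own inverse, so the same function also de-obfuscates.
    return bytes(b ^ PAD for b in data)

def store(chunks, directory):
    """Write each chunk under a truly random name; only the returned
    index (held elsewhere) can map files back to a user."""
    index = []
    for chunk in chunks:
        name = secrets.token_hex(16)
        with open(os.path.join(directory, name), "wb") as f:
            f.write(obfuscate(chunk))
        index.append(name)
    return index

def load(index, directory) -> bytes:
    return b"".join(
        obfuscate(open(os.path.join(directory, n), "rb").read()) for n in index
    )

# Round trip with hypothetical chunks:
with tempfile.TemporaryDirectory() as d:
    idx = store([b"Dear Bob, ", b"see attached."], d)
    raw = open(os.path.join(d, idx[0]), "rb").read()
    assert raw != b"Dear Bob, "                 # not readable on disk
    assert load(idx, d) == b"Dear Bob, see attached."
```

An attacker who lifts the disk alone sees only randomly named files of scrambled bytes, with no way to tell which files belong to which user; the index that reassembles them lives elsewhere.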
Why obfuscation rather than encryption?
Encryption is a valuable, great technology and it has its purpose. We do encrypt data where it’s appropriate; we were revolutionary in encrypting all transactions with Gmail and with Docs. And we are still one of the major Cloud providers that, by default, turns on encryption for all communications between Google and the users, using SSL. But we didn’t see much of a security benefit from encrypting the data at rest in the data centre. It was going to require a lot of key management, and going back and forth to get those keys: every time you want to do something simple, like search your inbox for a keyword, I would have to decrypt each of those small files – go to the key store, get the key, decrypt it, hand it back and move on to the next one. It was going to introduce a lot more complexity and latency, and we really felt that obfuscation gave us the risk mitigation we needed while balancing those other aspects.
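The latency argument can be made concrete with a back-of-the-envelope model. Every number below is an invented, illustrative assumption, not a measured figure; the point is only that per-chunk encryption turns a single inbox search into thousands of key-store round trips, while obfuscation is a cheap local transform:

```python
# Hypothetical cost model for searching a mailbox stored as N small chunks.
KEY_FETCH_MS = 2.0     # assumed round trip to a key store, per chunk
DECRYPT_MS = 0.1       # assumed per-chunk decryption cost
DEOBFUSCATE_MS = 0.01  # assumed cost to reverse a simple obfuscation

def search_cost_ms(num_chunks: int, encrypted: bool) -> float:
    """Time to touch every chunk once during a keyword search."""
    per_chunk = (KEY_FETCH_MS + DECRYPT_MS) if encrypted else DEOBFUSCATE_MS
    return num_chunks * per_chunk

# For a 10,000-chunk mailbox, the key-store round trips dominate:
print(search_cost_ms(10_000, encrypted=True))   # tens of seconds of work
print(search_cost_ms(10_000, encrypted=False))  # a fraction of a second
```

Under these assumed numbers the encrypted search is roughly two hundred times more expensive, and almost all of that cost is the per-chunk trip to the key store rather than the decryption itself.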