Weaknesses in Password Strength

Can password complexity be reduced without sacrificing security?

Password strength is a perennial topic because it is difficult to balance password strength with usability.  Password resets are typically the single biggest source of helpdesk calls, suggesting that passwords are too complex, change too frequently, or both.  Can we make passwords less complex without compromising security?

Password strength is often discussed in terms of entropy. Put simply, the greater the number of characters and the wider the variety of characters used in a password, the higher the entropy and the harder the password is to crack. This is why password policies typically require a minimum length and a mix of lower-case, upper-case, numeric and symbol characters, all of which increase the entropy of the password.
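To make the relationship concrete, here is a minimal sketch of the standard entropy estimate, assuming each character is drawn uniformly at random from its character set (real user-chosen passwords have far less entropy than this formula suggests, as discussed below):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Estimate password entropy in bits, assuming each character
    is chosen uniformly at random from the character set."""
    return length * math.log2(charset_size)

# An 8-character lowercase-only password vs. an 8-character password
# drawn from the ~94 printable ASCII characters (upper, lower,
# digits and symbols).
print(round(entropy_bits(8, 26), 1))   # ~37.6 bits
print(round(entropy_bits(8, 94), 1))   # ~52.4 bits
```

Adding length helps more than adding character classes: each extra character multiplies the search space, which is why longer passphrases can beat short complex passwords.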

Passwords have had their weaknesses.  Passwords used to be stored in the clear on UNIX systems; you just had to know where they were stored.  The next method used to increase security was to hash the passwords with a well-known hashing algorithm, such as MD5, and store the hashes.  To check whether the correct password had been supplied, the supplied password was hashed with the same algorithm and the result compared to the stored hash value.  If they matched, the supplied password must have matched the original.
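The historical hash-and-compare scheme can be sketched in a few lines. MD5 is used here only because the article mentions it; it is broken and should never be used for real password storage:

```python
import hashlib

def store_password(password: str) -> str:
    # MD5 shown purely to illustrate the historical scheme; it is
    # cryptographically broken and unsuitable for real systems.
    return hashlib.md5(password.encode()).hexdigest()

def check_password(supplied: str, stored_digest: str) -> bool:
    # Hash the supplied password the same way and compare digests;
    # the plaintext password itself is never stored.
    return hashlib.md5(supplied.encode()).hexdigest() == stored_digest

digest = store_password("hunter2")
print(check_password("hunter2", digest))  # True
print(check_password("hunter3", digest))  # False
```

Note that without a salt, two users with the same password get identical digests, which is exactly the weakness rainbow tables exploit.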

To break these passwords, crackers created “rainbow tables”, effectively a database of passwords and their hash values for a given algorithm.  If the table of hashed passwords was available, and very often it was by default, the cracker just had to look up the hash value in the rainbow table to recover the original password.  To counteract this, “salt” was introduced.  A salt is a value combined with the password before hashing, so that the resulting hash values are unique to the salt being used.  Each machine is supposed to use different salt values, so rainbow tables again become computationally expensive to calculate, as each system should produce unique hash values for the same password.
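A minimal sketch of salting is below. It uses a random per-password salt and SHA-256 for illustration (modern practice favours per-password salts over the per-machine salts described above, and a deliberately slow algorithm such as bcrypt or PBKDF2 rather than a plain hash):

```python
import hashlib
import os

def store_salted(password: str) -> tuple[bytes, str]:
    # A random salt means identical passwords produce different
    # digests, defeating precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest  # salt is stored alongside the digest

def check_salted(supplied: str, salt: bytes, stored_digest: str) -> bool:
    # Re-combine the stored salt with the supplied password.
    return hashlib.sha256(salt + supplied.encode()).hexdigest() == stored_digest

salt, digest = store_salted("hunter2")
print(check_salted("hunter2", salt, digest))  # True
# The same password salted again yields a different digest:
print(store_salted("hunter2")[1] != digest)   # True
```

The salt is not secret; its job is simply to make every stored hash unique, so a precomputed table built for one salt is useless against any other.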

Hopefully, with these techniques in place a cracker has no shortcuts to working out a password.  The cracker now has to use a “rote” approach, systematically guessing: trying, for example, “a”, then “b”, through to “z”, then “aa”, “ab”, and so on. If minimum length and complexity requirements are known, the guesses are tailored to meet those requirements.  Seen this way, it is obvious how entropy relates directly to how hard a password is to work out.
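The rote enumeration described above can be sketched as follows, assuming a `target_check` callback that tells the attacker whether a guess is correct (the names here are illustrative, not a real tool's API):

```python
import itertools
import string

def brute_force(target_check, alphabet=string.ascii_lowercase, max_len=4):
    """Systematically try 'a', 'b', ..., 'z', then 'aa', 'ab', ...,
    in order of increasing length, until target_check accepts a
    guess. Returns the guess, or None if max_len is exhausted."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if target_check(guess):
                return guess
    return None

secret = "cab"
print(brute_force(lambda g: g == secret))  # cab
```

The search space grows as charset_size ** length, which is why each additional character of length, or each additional character class, multiplies the attacker's work.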

However, crackers are aware of how passwords are thought up. Firstly, they are aware of the habit of users choosing standard dictionary words, names and dates, so they try those first.  If there are complexity requirements, such as at least one upper-case letter and one number, crackers know that the upper-case letter is likely to be at the beginning of the dictionary word, and the number at the end. They know the symbol substitution patterns used when symbols are required, e.g., “@” for “a”, and will tailor the attack as required.  Crackers study lists of released passwords, and will try the most popular ones first (why is “monkey” in the top twenty?).  Such known patterns are not easily taken into account when calculating the entropy of a given password.
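Cracking tools exploit these habits with mangling rules. A toy sketch of the idea, using a small assumed substitution table and the common trailing digits, shows how few guesses are needed to cover the "complex-looking" variants of one dictionary word:

```python
import itertools

# Common symbol substitutions attackers try (illustrative subset).
SUBS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def pattern_guesses(word: str):
    """Generate predictable variants of a dictionary word:
    capitalised first letter, symbol substitutions, trailing digits."""
    bases = {word, word.capitalize()}
    mangled = set()
    for base in bases:
        # For each character, offer the original and (if any) its
        # substitution, then take every combination.
        options = [(c, SUBS[c]) if c in SUBS else (c,) for c in base]
        for combo in itertools.product(*options):
            mangled.add("".join(combo))
    # Append the digits users most often tack on the end.
    return sorted(m + d for m in mangled for d in ["", "1", "123"])

guesses = pattern_guesses("password")
print("P@ssword1" in guesses)  # True
print(len(guesses))
```

A password like “P@ssword1” satisfies most complexity policies yet falls inside this tiny, highly predictable search space, which is why its effective entropy is far below its theoretical value.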

This behaviour-based cracking greatly reduces the time needed to get into a system where users do not use truly complex passwords.  I am not going to discuss which techniques should be used to create or securely store passwords; there is little I can add to that debate.  Instead, I will introduce a seldom-discussed factor: choosing password strength in relation to the other controls that exist.

Security policies (or, more correctly, security standards) often stipulate minimum password complexity requirements.  However, these seldom take account of the other controls that exist.

Highly complex passwords must still be used for data that can be taken offline, such as encrypted tape backups and encrypted files in transit.  Such data can be attacked without any further protective controls coming into play.  Password complexity can be reduced, however, where controls such as account lockout after a small number of incorrect password attempts are in place.  Combined with controls such as password expiry, log capture and monitoring of such events, this mitigates the risk of weaker passwords; it is unlikely that an attack will go unnoticed before it succeeds.  Similar protection is provided by “poison pills”: data in systems that is securely destroyed after a few incorrect passwords are supplied.
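An account-lockout control of the kind described above can be sketched as follows. This is a hypothetical illustration, not a production design (a real implementation would persist state, rate-limit by source, and raise alerts for monitoring):

```python
class LockoutGuard:
    """Hypothetical sketch: lock an account after a small number
    of consecutive failed password attempts."""

    def __init__(self, max_attempts: int = 5):
        self.max_attempts = max_attempts
        self.failures: dict[str, int] = {}

    def attempt(self, user: str, password_ok: bool) -> str:
        if self.failures.get(user, 0) >= self.max_attempts:
            return "locked"          # stays locked even for a correct password
        if password_ok:
            self.failures[user] = 0  # reset the counter on success
            return "success"
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_attempts:
            return "locked"          # this is where an alert would be logged
        return "failed"

guard = LockoutGuard(max_attempts=3)
print(guard.attempt("alice", password_ok=False))  # failed
print(guard.attempt("alice", password_ok=False))  # failed
print(guard.attempt("alice", password_ok=False))  # locked
print(guard.attempt("alice", password_ok=True))   # locked
```

With a three-attempt lockout, the online brute-force attack described earlier gets three guesses, not billions, which is precisely why the password itself can afford to be simpler in that context.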

By relying on the whole control environment, and carefully choosing password complexity in relation to the presence or absence of those controls throughout the lifecycle of the information being protected, password complexity requirements can be reduced for the majority of our users.  That means fewer calls to our helpdesks and happier staff.

About the author

Charles Wale CISSP, CISA, CRISC, QSA is a freelance information security consultant with 15 years’ experience in the industry.  He is a Director of Lee Douglas & Associates.
