As a rule, CAPTCHAs should be a last resort. They tend to be annoying, often aren't human-solvable, most of them are ineffective against bots, all of them are ineffective against cheap third-world labor (according to OWASP, the current sweatshop rate is $12 per 500 tests), and some implementations are technically illegal in some countries (see link number 1 from the MUST-READ list). If you must use a CAPTCHA, use reCAPTCHA, since it is OCR-hard by definition (it uses book scans that OCR software has already failed to decipher).
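If you do end up adding one, keep the verification strictly server-side. As a rough sketch (assuming the standard reCAPTCHA `siteverify` endpoint, the third-party `requests` library, and a secret key you configure yourself):

```python
# Rough sketch: server-side reCAPTCHA verification.
# RECAPTCHA_SECRET is your own configuration value, not something reCAPTCHA names.
import requests

RECAPTCHA_SECRET = "your-secret-key-here"

def captcha_passed(recaptcha_response, remote_ip):
    """Return True only if the verification service confirms the response."""
    reply = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET,
              "response": recaptcha_response,
              "remoteip": remote_ip},
        timeout=5,
    )
    return reply.json().get("success", False)
```

Never trust a client-side "the CAPTCHA was solved" flag; only the server-to-server check counts.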
It is possible to prevent browsers from storing/retrieving a password with the autocomplete attribute on forms/input fields. However, in the real world your customers will have many accounts on different systems; it compromises their security if they use the same password for every site. Can you expect them to remember a different password for every site? There are some good password managers out there; however, there are also bad ones - and those will become a target for attackers.
The only (currently practical) way to protect against login interception (packet sniffing) during login is by using a certificate-based encryption scheme (for example, SSL) or a proven & tested challenge-response scheme (for example, the Diffie-Hellman-based SRP). Any other method can be easily circumvented by an eavesdropping attacker. On that note: hashing the password client-side (for example, with JavaScript) is useless unless it is combined with one of the above - that is, either securing the line with strong encryption or using a tried-and-tested challenge-response mechanism (if you don't know what that is, just know that it is one of the most difficult to prove, most difficult to design, and most difficult to implement concepts in digital security). Hashing the password is effective against password disclosure, but not against replay attacks, Man-In-The-Middle attacks / hijackings, or brute-force attacks (since we are handing the attacker the username, the salt, and the hashed password).
After sending the authentication tokens, the system needs a way to remember that you have been authenticated - this fact should only ever be stored server-side in the session data. A cookie can be used to reference the session data. Wherever possible, the cookie should have the secure and HttpOnly flags set when sent to the browser. The HttpOnly flag provides some protection against the cookie being read by an XSS attack. The secure flag ensures that the cookie is only sent back via HTTPS, and therefore protects against network sniffing attacks. The value of the cookie should not be predictable. Where a cookie referencing a non-existent session is presented, its value should be replaced immediately to prevent session fixation.
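To make that concrete, here is a minimal sketch using only the Python standard library (the session store itself is assumed to live elsewhere) of issuing an unpredictable session identifier with both flags set:

```python
# Minimal sketch: an unpredictable session cookie with Secure and HttpOnly set.
# `session_store` stands in for whatever server-side session storage you use.
import secrets
from http.cookies import SimpleCookie

def new_session_cookie(session_store):
    session_id = secrets.token_urlsafe(32)        # ~256 bits, not guessable
    session_store[session_id] = {"authenticated": False}

    cookie = SimpleCookie()
    cookie["SESSIONID"] = session_id
    cookie["SESSIONID"]["secure"] = True          # only sent over HTTPS
    cookie["SESSIONID"]["httponly"] = True        # not readable from JavaScript
    cookie["SESSIONID"]["path"] = "/"
    return cookie.output(header="Set-Cookie:")    # header line to send back
```

If a request arrives with a SESSIONID that isn't in the store, issue a fresh one exactly like this rather than adopting the presented value.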
Persistent Login Cookies ("remember me" functionality) are a danger zone; on the one hand, they are just as safe as conventional logins when users understand how to handle them; on the other hand, they are an enormous security risk in the hands of most users, who use them on public computers, forget to log out, don't know what cookies are or how to delete them, etc.
Personally, I want my persistent logins for the web sites I visit on a regular basis, but I know how to handle them safely. If you are positive that your users know the same, you can use persistent logins with a clean conscience. If not - well, then you're more like me; subscribing to the philosophy that users who are careless with their login credentials brought it upon themselves if they get hacked. It's not like we go to our users' houses and tear off all those facepalm-inducing Post-It notes with passwords they have lined up on the edge of their monitors, either. If people are idiots, then let them eat idiot cake.
Of course, some systems can't afford to have any accounts hacked; for such systems, there is no way you can justify having persistent logins.
If you DO decide to implement persistent login cookies, this is how you do it:
- First, follow Charles Miller's 'Best Practices' article. Do not get tempted to follow the 'Improved' Best Practices linked at the end of his article. Sadly, the 'improvements' to the scheme are easily thwarted (all an attacker has to do when stealing the 'improved' cookie is remember to delete the old one; this will force the legitimate user to re-login, creating a new series identifier and leaving the stolen one valid).
- And DO NOT STORE THE PERSISTENT LOGIN COOKIE (TOKEN) IN YOUR DATABASE, ONLY A HASH OF IT! The login token is Password Equivalent, so if an attacker got his hands on your database, he/she could use the tokens to log in to any account, just as if they were cleartext login-password combinations. Therefore, use strong salted hashing (bcrypt / phpass) when storing persistent login tokens - a minimal sketch follows this list.
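The sketch below assumes the third-party `bcrypt` package and uses an in-memory dict where your database table would be; the series-identifier + token split is the one from Miller's scheme:

```python
# Minimal sketch of persistent login tokens where only a HASH is stored.
# `remember_me` stands in for a database table: series_id -> (user, token hash).
import secrets
import bcrypt

remember_me = {}

def issue_persistent_login(user_id):
    series_id = secrets.token_urlsafe(16)
    token = secrets.token_urlsafe(32)
    remember_me[series_id] = {
        "user": user_id,
        "token_hash": bcrypt.hashpw(token.encode(), bcrypt.gensalt()),
    }
    # Only the cookie ever carries the cleartext token.
    return f"{series_id}:{token}"

def check_persistent_login(cookie_value):
    series_id, _, token = cookie_value.partition(":")
    row = remember_me.get(series_id)
    if row and bcrypt.checkpw(token.encode(), row["token_hash"]):
        return row["user"]           # valid - rotate the token at this point
    if row:
        del remember_me[series_id]   # known series, wrong token: assume theft
    return None
```

Even a full database dump then yields nothing an attacker can paste straight into a cookie.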
Don't. Never ever use 'secret questions'. Read the paper from link number 5 of the MUST-READ list. You can ask Sarah Palin about that one, after her Yahoo! email account got hacked during the presidential campaign because the answer to her 'security' question was... (wait for it) ... "Wasilla High School"!
Even with user-specified questions, it is highly likely that most users will choose one of the following:
- A 'standard' secret question like mother's maiden name or favourite pet
- A simple piece of trivia that anyone could lift from their blog, LinkedIn profile, or similar
- Any question that is easier to answer than guessing their password. Which, for any decent password, is every question conceivable.
In conclusion, security questions are inherently insecure in all their forms and variations, and should never be employed in an authentication scheme for any reason.
A secondary question is often considered adequate for fulfilling a requirement for two-factor authentication. While capturing some of the response via clicks rather than typing in theory provides protection against keylogger attacks, it is still just an extension of the password mechanism - and when users are presented with a text box instead of drop-downs on a phishing site, they rarely perceive this as abnormal. Note that you may be able to fulfill your two-factor obligations by using a long-lasting cookie (granted on submission of multiple authentication questions) in place of a security question asked each and every time - but at the expense of user convenience.
The only reason anyone still uses security questions by choice is that it saves the cost of a few support calls from users who can't remember their email passwords to get to their reactivation codes. At the expense of security and Sarah Palin's reputation, that is. Worth it? You be the judge.
I already mentioned why you should never use security questions for handling forgotten/lost user passwords. There are at least two more all-too-common pitfalls to avoid in this field:
- Don't RESET users' passwords no matter what - 'reset' passwords are harder for the user to remember, which means he/she MUST either change it OR write it down - say, on a bright yellow Post-It on the edge of his monitor. Instead, just let users pick a new one right away - which is what they want to do anyway.
- Always hash the lost password code/token in the database. AGAIN, this code is another example of a Password Equivalent, so it MUST be hashed in case an attacker gets his hands on your database. When a lost password code is requested, send the plaintext code to the user's email address, then hash it, save the hash in your database - and throw away the original. Just like a password or a persistent login token.
A final note: always make sure your interface for entering the 'lost password code' is at least as secure as your login form itself, or an attacker will simply use this to gain access instead. Making sure you generate very long 'lost password codes' (for example, 16 case-sensitive alphanumeric characters) is a good start, but consider adding the same throttling that you do for logins.
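A minimal sketch of that reset flow (again assuming the `bcrypt` package; the in-memory dict and the mail-sending callback stand in for your own table and mailer):

```python
# Minimal sketch: generate a long reset code, mail the cleartext,
# store only a hash, and throw the original away.
import secrets
import bcrypt

reset_codes = {}   # user_id -> bcrypt hash of the reset code

def request_password_reset(user_id, email, send_mail):
    code = secrets.token_urlsafe(24)          # comfortably beyond 16 alphanumerics
    send_mail(email, f"Your password reset code: {code}")
    reset_codes[user_id] = bcrypt.hashpw(code.encode(), bcrypt.gensalt())
    # The cleartext code now exists only in the user's mailbox.

def redeem_password_reset(user_id, presented_code):
    stored = reset_codes.get(user_id)
    if stored and bcrypt.checkpw(presented_code.encode(), stored):
        del reset_codes[user_id]              # single use
        return True
    return False
```

Treat the redemption endpoint exactly like a login form: same throttling, same logging.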
First, you'll want to read this small article for a reality check: The 500 most common passwords
Okay, so maybe the list isn't the canonical list of most common passwords on any system anywhere ever, but it's a good indication of how poorly people will choose their passwords when there is no enforced policy in place. Plus, the list looks frighteningly close to home when you compare it to the publicly available analyses of 40,000+ recently stolen MySpace passwords.
Well, enough MySpace-bashing for now. Moving on...
So: With no minimum password strength requirements, 2% of users use one of the top 20 most common passwords. Meaning: if an attacker gets just 20 attempts, 1 in 50 accounts on your website will be crackable.
Thwarting this requires calculating the entropy of a password and then applying a threshold. The National Institute of Standards and Technology (NIST) Special Publication 800-63 has a set of very good suggestions. That, when combined with a dictionary and keyboard layout analysis (for example, 'qwertyuiop' is a bad password), can reject 99% of all poorly selected passwords at a level of 18 bits of entropy. Simply calculating password strength and showing a visual strength meter to a user is insufficient. Unless it is enforced, users will ignore it.
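As a rough sketch of the enforcement side (the entropy estimate here is deliberately crude, and the 18-bit threshold plus tiny blacklist are only illustrative; NIST SP 800-63's estimation rules are more involved):

```python
# Rough sketch: reject passwords that are blacklisted or fall below an
# entropy threshold. Load a real common-password list instead of COMMON.
import math
import string

COMMON = {"password", "123456", "qwerty", "letmein"}

def estimated_entropy_bits(pw):
    pool = 0
    pool += 26 if any(c.islower() for c in pw) else 0
    pool += 26 if any(c.isupper() for c in pw) else 0
    pool += 10 if any(c.isdigit() for c in pw) else 0
    pool += 32 if any(c in string.punctuation for c in pw) else 0
    return len(pw) * math.log2(pool) if pool else 0.0

def acceptable_password(pw, minimum_bits=18):
    return pw.lower() not in COMMON and estimated_entropy_bits(pw) >= minimum_bits
```

Run the check server-side on registration and password change; the strength meter in the browser is just a courtesy.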
First, have a look at the numbers: Password Recovery Speeds - How long will your password stand up
If you don't have the time to look through the tables in that link, here's the gist of them:
- It takes virtually no time to crack a weak password, even if you're cracking it with an abacus
- It takes virtually no time to crack an alphanumeric 9-character password, if it is case insensitive
- It takes virtually no time to crack an intricate, symbols-and-letters-and-numbers, upper-and-lowercase password, if it is less than 8 characters long (a desktop PC can search the entire keyspace up to 7 characters in a matter of days or even hours)
- It would, however, take an inordinate amount of time to crack even a 6-character password, if you were limited to one attempt per second!
So what can we learn from these numbers? Well, lots, but we can focus on the most important part: the fact that preventing large numbers of rapid-fire successive login attempts (i.e. the brute force attack) really isn't that difficult. But doing it right isn't as easy as it seems.
Generally speaking, you have three choices that are all effective against brute-force attacks (and dictionary attacks, but since you are already employing a strong password policy, they shouldn't be an issue):
- Presenting a CAPTCHA after N failed attempts (annoying as hell and often ineffective -- but I'm repeating myself here)
- Locking accounts and requiring email verification after N failed attempts (this is a DoS attack waiting to happen)
- And finally, login throttling: that is, setting a time delay between attempts after N failed attempts (yes, DoS attacks are still possible, but at least they are far less likely and a lot more complicated to pull off)
Best practice #1: A short time delay that increases with the number of failed attempts, like:
- 1 failed attempt = no delay
- 2 failed attempts = 2 sec delay
- 3 failed attempts = 4 sec delay
- 4 failed attempts = 8 sec delay
- 5 failed attempts = 16 sec delay
- etc.
DoS attacking this scheme would be very impractical, but on the other hand, potentially devastating, since the delay increases exponentially. A DoS attack lasting a few days could suspend the user for weeks.
To clarify: The delay is not a delay before returning the response to the browser. It is more like a timeout or refractory period during which login attempts to a specific account or from a specific IP address will not be accepted or evaluated at all. That is, correct credentials will not result in a successful login, and incorrect credentials will not trigger a delay increase.
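A minimal sketch of that refractory-period behaviour (in-memory, keyed per account; the same idea applies per IP address, and `verify` stands in for your real credential check):

```python
# Minimal sketch of scheme #1: an exponentially growing refractory period.
# During the window, attempts are not evaluated at all.
import time

failures = {}   # username -> (failed_count, locked_until_timestamp)

def attempt_login(username, password, verify):
    count, locked_until = failures.get(username, (0, 0.0))
    if time.time() < locked_until:
        return False                     # inside the window: don't even check

    if verify(username, password):
        failures.pop(username, None)     # success clears the counter
        return True

    count += 1
    delay = 0 if count < 2 else 2 ** (count - 1)   # 0, 2, 4, 8, 16, ... seconds
    failures[username] = (count, time.time() + delay)
    return False
```

In production you would keep this state in your database or cache rather than in process memory, so it survives restarts and works across web servers.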
Best practice #2: A medium length time delay that goes into effect after N failed attempts, like:
- 1-4 failed attempts = no delay
- 5 failed attempts = 15-30 min delay
DoS attacking this scheme would be quite impractical, but certainly doable. Also, it might be relevant to note that such a long delay can be very annoying for a legitimate user. Forgetful users will dislike you.
Best practice #3: Combining the two approaches - either a fixed, short time delay that goes into effect after N failed attempts, like:
- 1-4 failed attempts = no delay
- 5+ failed attempts = 20 sec delay
Or, an increasing delay with a fixed upper bound, like:
- 1 failed attempt = 5 sec delay
- 2 failed attempts = 15 sec delay
- 3+ failed attempts = 45 sec delay
This final scheme was taken from the OWASP best-practices suggestions (link 1 from the MUST-READ list), and should be considered best practice, even if it is admittedly on the restrictive side.
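Expressed as code, the capped variant is just a different policy function; as a sketch (values taken from the table above):

```python
# Sketch of the OWASP-style capped delay from scheme #3.
def delay_for(failed_attempts):
    if failed_attempts <= 0:
        return 0
    return {1: 5, 2: 15}.get(failed_attempts, 45)   # seconds; 3+ attempts cap at 45
```

Plug it into the same refractory-period logic sketched earlier in place of the exponential formula.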
As a rule of thumb however, I would say: the stronger your password policy is, the less you have to bug users with delays. If you require strong (case-sensitive alphanumerics + required numbers and symbols) 9+ character passwords, you could give the users 2-4 non-delayed password attempts before activating the throttling.
DoS attacking this final login throttling scheme would be very impractical. And as a final touch, always allow persistent (cookie) logins (and/or a CAPTCHA-verified login form) to pass through, so legitimate users won't even be delayed while the attack is in progress. That way, the very impractical DoS attack becomes an extremely impractical attack.
Additionally, it makes sense to do more aggressive throttling on admin accounts, since those are the most attractive entry points.
Just as an aside, more advanced attackers will try to circumvent login throttling by 'spreading their activities':
- Distributing the attempts over a botnet to prevent IP address flagging
- Rather than picking one user and trying the 50,000 most common passwords (which they can't, because of our throttling), they will pick THE most common password and try it against 50,000 users instead. That way, not only do they get around maximum-attempts measures like CAPTCHAs and login throttling, their chance of success increases as well, since the number 1 most common password is far more likely than number 49,995
- Spacing the login requests for each user account, say, 30 seconds apart, to sneak under the radar
Here, the best practice would be logging the number of failed logins, system-wide, and using a running average of your site's bad-login frequency as the basis for an upper limit that you then impose on all users.
Too abstract? Let me rephrase:
Say your site has had an average of 120 bad logins per day over the past 3 months. Using that (running average), your system might set the global limit to 3 times that -- i.e. 360 failed attempts over a 24 hour period. Then, if the total number of failed attempts across all accounts exceeds that number within one day (or even better, monitor the rate of acceleration and trigger on a calculated threshold), it activates system-wide login throttling - meaning short delays for ALL users (still, with the exception of cookie logins and/or backup CAPTCHA logins).
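A minimal sketch of that global tripwire (in-memory counters; the 3x multiplier and the 24-hour window are the example values from above, and maintaining the running daily average is assumed to happen elsewhere):

```python
# Minimal sketch of a site-wide tripwire: compare today's failed logins
# against 3x the running daily average; when exceeded, switch on global
# throttling (short delays for everyone except cookie/CAPTCHA logins).
import time

WINDOW_SECONDS = 24 * 3600
failed_login_times = []        # one timestamp per failed login, site-wide
running_daily_average = 120.0  # maintained elsewhere from historical data

def record_failed_login():
    now = time.time()
    failed_login_times.append(now)
    while failed_login_times and failed_login_times[0] < now - WINDOW_SECONDS:
        failed_login_times.pop(0)          # forget failures older than 24 hours

def global_throttling_active():
    return len(failed_login_times) > 3 * running_daily_average
```

Check `global_throttling_active()` in the login path and, when it returns True, apply a short fixed delay to every non-cookie, non-CAPTCHA login attempt.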
I also posted a question with more details and a really good discussion of how to avoid tricky pitfalls in fending off distributed brute force attacks.
Credentials can be compromised, whether by exploits, passwords being written down and lost, laptops with keys being stolen, or users entering logins into phishing sites. Logins can be protected with two-factor authentication, which uses out-of-band factors such as single-use codes received via a phone call, SMS message, or dongle. Several providers offer two-factor authentication services.
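If you roll the out-of-band code yourself, treat it like every other secret in this answer: short-lived, single use, and compared server-side. A minimal sketch (the SMS delivery callback is hypothetical):

```python
# Minimal sketch of an out-of-band single-use login code.
import hashlib
import hmac
import secrets
import time

pending_codes = {}   # user_id -> (sha256 of code, expiry timestamp)

def send_second_factor(user_id, deliver_sms):
    code = f"{secrets.randbelow(10**6):06d}"                 # six digits
    pending_codes[user_id] = (hashlib.sha256(code.encode()).hexdigest(),
                              time.time() + 300)             # valid for 5 minutes
    deliver_sms(user_id, f"Your login code is {code}")

def check_second_factor(user_id, presented):
    digest, expires = pending_codes.pop(user_id, (None, 0))  # single use
    if digest is None or time.time() > expires:
        return False
    presented_digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, presented_digest)
```

Throttle this form as aggressively as the password form, or the six digits become the weakest link.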
Authentication can be completely delegated to a single-sign-on service such as OAuth, OpenID or Persona (née BrowserID), where another provider handles collecting credentials. This pushes the problem to a trusted third party. Twitter is an example of an OAuth provider, while Facebook provides a similar proprietary solution.
The MUST-READ list referenced throughout:
- OWASP Guide To Authentication
- Dos and Don’ts of Client Authentication on the Web (very readable MIT research paper)
- Charles Miller's Persistent Login Cookie Best Practice
- Wikipedia: HTTP cookie
- Personal knowledge questions for fallback authentication: Security questions in the era of Facebook (very readable Berkeley research paper)
from http://stackoverflow.com/questions/549/the-definitive-guide-to-forms-based-website-authentication