WebApp Sec mailing list archives

Re: Account Lockouts


From: Michael Silk <michaelsilk () gmail com>
Date: Sat, 4 Dec 2004 14:13:08 +1100

Mark,

 Re: "Long term strategy".

 Sure, maybe they aren't a long-term strategy, but any security
measure you implement should be reviewed at appropriate intervals to
see if it is still adequate. If that review process were in place,
there would be no problem.

 Anyway, there are other things that aren't necessarily "long term"
but are still used ... i.e. consider AES (Rijndael) ... there is an
attack on reduced-round forms of it that may lead to other interesting
attacks down the track (or may not), but that doesn't mean that we
should not use it.

 We just need to review it.


 Re: "Secret Question"

but you must be careful with that type of
implementation. Before you ask a secret
question, you first must know who the user
is.

 Note that the OP said that the usernames were well known, but even
if username exposure does concern you, you could always make up a
question anyway when someone tries a username that doesn't exist.

 The system would then look it up (the SQL might be:
 "where username = '' and password = '' and question1answer = ''") and
simply find no results.

 The point is - you don't need a valid username to ask a secret question.
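 A minimal sketch of that idea (the account data, question list, and
function names here are invented for illustration, not from any real
system):

```python
import hashlib

# Hypothetical pool of generic questions; a real system would store
# each user's chosen question in their profile.
QUESTIONS = [
    "What was your first pet's name?",
    "What street did you grow up on?",
    "What is your mother's maiden name?",
]

# Toy account store, for illustration only.
USERS = {"alice": {"password": "s3cret",
                   "question": QUESTIONS[0],
                   "answer": "rex"}}

def question_for(username):
    """Return a secret question whether or not the username exists.

    For unknown usernames we pick a question deterministically from a
    hash of the username, so repeated probes always see the same
    question and can't distinguish real accounts from fake ones.
    """
    user = USERS.get(username)
    if user:
        return user["question"]
    digest = hashlib.sha256(username.encode()).digest()
    return QUESTIONS[digest[0] % len(QUESTIONS)]

def login(username, password, answer):
    """Check all three factors; an unknown username simply finds
    no matching record, just like the SQL lookup above."""
    user = USERS.get(username)
    if not user or user["password"] != password or user["answer"] != answer:
        return False
    return True
```

 The deterministic choice matters: if a made-up question changed on
every request, an attacker could detect nonexistent usernames by
asking twice.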

-- Michael


-----Original Message----- 
From: Mark Burnett [mailto:mb () xato net] 
Sent: Sat 4/12/2004 7:07 AM 
To: webappsec () securityfocus com; secprog () securityfocus com 
Cc: 
Subject: Re: Account Lockouts

There has been some talk of CAPTCHAs in this thread and I wanted to
comment on them further. Although CAPTCHAs are very effective at
blocking automated abuse, in their current form they are not an
effective long-term strategy. The problem is that with our current
image enhancement, OCR, and AI technology, they can be cracked with
quite good accuracy. Their limited use and proprietary implementations
still make them useful for now, but once someone releases a script
kiddie tool to automate CAPTCHA cracking, they will become mostly
ineffective.

Furthermore, I have seen many CAPTCHA implementations that are simply
flawed. For example, instructing the user to select one of three
choices means that a script guessing randomly will still be right 33%
of the time. I have also seen e-mail systems that ship with a
collection of photos for CAPTCHA use, but everyone with that program
has the exact same photos; it would be trivial to build a tool to
bypass that product's CAPTCHA feature. And I have seen image
manipulations, such as excessive use of color or added noise, which
might seem confusing to humans but are quite meaningless to a computer
program and do little to prevent automated interpretation.

To test out CAPTCHA images, try this in an image editing application:
1.      Convert to grayscale
2.      Apply noise removing, deinterlacing, despeckling, and other filters
3.      Increase brightness and contrast

The results are quite surprising, often producing something that an
average OCR program can accurately read.
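As a rough sketch of those three steps on raw pixel data (pure Python
over a nested list of RGB tuples; a real attempt would use an imaging
library, and the median-filter despeckle here is just one plausible
choice of noise filter):

```python
def preprocess(pixels):
    """Grayscale -> despeckle -> contrast-stretch, on an image given
    as a list of rows of (r, g, b) tuples with values 0-255."""
    # 1. Convert to grayscale using standard luminance weights.
    gray = [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]

    # 2. Crude despeckle: replace each interior pixel with the median
    #    of its 3x3 neighbourhood, wiping out isolated noise dots.
    h, w = len(gray), len(gray[0])
    smoothed = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hood = sorted(gray[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            smoothed[y][x] = hood[4]  # median of the 9 values

    # 3. Stretch brightness/contrast so the darkest pixel maps to 0
    #    and the brightest to 255.
    lo = min(min(row) for row in smoothed)
    hi = max(max(row) for row in smoothed)
    scale = 255 / (hi - lo) if hi > lo else 1
    return [[round((v - lo) * scale) for v in row] for row in smoothed]
```

The point of the sketch is how little machinery is involved: a few
dozen lines erase exactly the kind of noise and color tricks many
CAPTCHAs rely on, leaving a clean high-contrast image for OCR.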

It turns out that building a secure CAPTCHA that is universally
usable and stands the test of time is quite difficult. Image-based
CAPTCHAs obviously won't work for everyone. Language-based CAPTCHAs
face language barriers. There are also cultural, literacy, morality,
education, environmental, perception, and interpretation barriers that
might come into play.

Another issue is how you handle failed CAPTCHA input. To be most
effective against an automated attack, you must be consistent in
their use, and you should return the same error message for a failed
username, password, or CAPTCHA response. This prevents a script from
knowing why the login failed, but it may also confuse users who don't
know exactly what they did wrong.
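A sketch of that uniform-error rule (the account store and function
name are made up for illustration):

```python
# Every failure mode collapses into one generic message, so a script
# probing the form can't tell which check rejected it.
GENERIC_ERROR = "Login failed. Please check your details and try again."

ACCOUNTS = {"alice": "s3cret"}  # username -> password, toy data

def check_login(username, password, captcha_ok):
    if username not in ACCOUNTS:
        return GENERIC_ERROR       # unknown user: same message
    if ACCOUNTS[username] != password:
        return GENERIC_ERROR       # wrong password: same message
    if not captcha_ok:
        return GENERIC_ERROR       # failed CAPTCHA: same message
    return "OK"
```

The usability cost described above is visible here: a legitimate user
who only mistyped the CAPTCHA gets no hint of that.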

Someone earlier mentioned adding a secret question to the CAPTCHA, but
you must be careful with that type of implementation. Before you ask a
secret question, you first must know who the user is. This leads to
continuity problems and might allow attackers to brute-force
usernames, which can also be a problem.

So while current CAPTCHAs are interesting and, for now, quite
effective, you still need to keep them in perspective as a short-term
solution that relies heavily on obscurity for its security. They are
the right idea; we just haven't yet figured out the best
implementation, or whether there even is one.


Mark Burnett



--------------------------------------------------------------------
Hacking The Code: ASP.NET Web Application Security
http://www.hackingthecode.com



On Thu, 02 Dec 2004 17:49:31 -0500, Valdis.Kletnieks () vt edu wrote:
On Fri, 03 Dec 2004 09:38:28 +1100, Michael Silk said:

And you can only "beat" the captcha in this scenario by getting the password
_right_. That would mean sending out a captcha image for each password
you attempt.

But remember - once you set it up, it's the same effort for one or a thousand.

I can't believe you think captchas add "no" security here. They add a
great deal of complication for someone trying to annoy the site -
probably far too much to bother with.

Well.. "too much to bother with".  That's OK - *IF* your threat model consists
only of attacks by people who will give up if it gets difficult, and doesn't
include the possibility that you're being attacked by somebody who is seriously
determined to make life difficult for you.

And remember - if they know enough about your system to know that such a script
would do *anything*, they're either (a) an (probably very disgruntled) insider
determined to do you harm or (b) an outsider who's *already* invested all the
effort in figuring out *this* much about your setup.

Remember - we're *NOT* discussing "how to secure it against the bugtraq exploit
du jour".  We're specifically discussing how to secure it against somebody who
is *already* doing a one-off customized script to do this attack....

If you're not assuming an infinite amount of determination (you're allowed to
assume finite supplies of resources and technical clue, of course) on the part
of such an attacker, you need to do a re-examination of your threat model...

