Although many systems are adopting authentication mechanisms that replace or complement the traditional username/password approach, passwords remain at the center of internet security today. And the best way to ensure a password's strength is to make it long enough and random enough that brute-forcing it becomes extremely difficult.
Achieving optimal randomness often necessitates the use of password generators. But while strategies like the Diceware method help, computer-generated passwords can be difficult to remember and typically require a password manager, which many users opt out of.
Thus, for users, understanding and following best practices is crucial to ensure a password's resilience. Concurrently, for developers, strength meters serve as a defensive measure ensuring that users don't jeopardize system security with easily crackable passwords.
The Shortcomings of Composition Rules
According to the NIST guidelines on memorized secrets, both the length and the complexity of a password are crucial. But while a minimum-length policy undoubtedly strengthens security, the same cannot be said about enforcing composition rules.
"...users respond in very predictable ways to the requirements imposed by composition rules..."
Such rules introduce three main flaws:
An increase in rules leads to a decrease in the total number of possible passwords.
Humans exhibit predictable behaviors in attempting to meet these rules.
Constraints can sometimes make passwords more challenging to remember.
With such weaknesses, enforcing composition rules can reduce a password's inherent unpredictability and consequently increase its vulnerability. Furthermore, it propagates the false notion that a password can only be strong if it is free of predictable patterns.
Consider a password like Done_Wind_Brown1234_Pa$sword. Despite containing predictable components such as "1234" and "Pa$sword", it is a robust password overall thanks to its varied composition, lack of an obvious pattern, and 28-character length. Binary policies that simply reject passwords for including things like sequential numbers, or for lacking numbers, can therefore be counterproductive.
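To make the problem concrete, here is a minimal sketch of such a binary policy. The specific rules and regexes are my own illustration, not any particular system's checks:

```typescript
// A hypothetical binary composition policy (illustrative rules only):
// require a digit, an uppercase letter, and a symbol, and reject any
// ascending run of digits.
function passesCompositionRules(password: string): boolean {
  const hasDigit = /\d/.test(password);
  const hasUpper = /[A-Z]/.test(password);
  const hasSymbol = /[^A-Za-z0-9]/.test(password);
  const hasSequentialDigits = /012|123|234|345|456|567|678|789/.test(password);
  return hasDigit && hasUpper && hasSymbol && !hasSequentialDigits;
}

// The long, hard-to-guess password is rejected because it contains "1234"...
console.log(passesCompositionRules("Done_Wind_Brown1234_Pa$sword")); // false
// ...while a short, highly predictable one satisfies every rule.
console.log(passesCompositionRules("P@ssw0rd!")); // true
```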
Entropy: A Simplified Explanation
Entropy is a direct reflection of a password's strength. Put simply, entropy is the degree of unpredictability, evaluated as the total number of bits needed to represent all possible passwords drawn from a given character pool at a given length.
The formula for calculating it is E = log_2(N^L), where N is the size of the character pool and L is the password length.
Assuming all passwords are equally likely, there are N^L possible combinations. The base-2 logarithm tells us how many bits are required to represent a number in binary. Hence, log_2(N^L) gives the number of bits needed to represent all possible passwords.
This figure denotes the unpredictability of a randomly chosen password: the more bits needed to represent all possible passwords from a given pool and length, the more possible passwords there are. In essence, higher entropy means a password is harder to crack because, on average, it requires more guesses.
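Here is a minimal sketch of that calculation in TypeScript. Estimating the pool size from the character classes that appear in the password is a common convention rather than a universal rule, and the class sizes below are assumptions:

```typescript
// Naive entropy estimate: E = log_2(N^L) = L * log_2(N),
// where N is the pool size and L is the password length.
function naiveEntropyBits(poolSize: number, length: number): number {
  return length * Math.log2(poolSize);
}

// Estimate the pool from the character classes that actually appear.
// The class sizes (26/26/10/33) are a common convention, assumed here.
function estimatePoolSize(password: string): number {
  let pool = 0;
  if (/[a-z]/.test(password)) pool += 26;
  if (/[A-Z]/.test(password)) pool += 26;
  if (/[0-9]/.test(password)) pool += 10;
  if (/[^A-Za-z0-9]/.test(password)) pool += 33; // printable ASCII symbols
  return pool;
}

const pw = "Done_Wind_Brown1234_Pa$sword";
const bits = naiveEntropyBits(estimatePoolSize(pw), pw.length);
console.log(bits.toFixed(1)); // ≈ 184.0 bits for this 28-character password
```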
You have probably seen this relationship between bits of entropy and password strength in the brilliant XKCD comic "Password Strength" (the "correct horse battery staple" strip).
The Entropy Paradox
While entropy is a robust measure, it has its limitations when applied to human-generated passwords. Why? Because it presumes all characters within a given pool are equally likely to be selected—which is far from the truth when a human is doing the selection.
Consider the overly simplistic password 1111111111111111111111. From a purely entropy-based perspective, it appears robust, with a high value. But its predictability makes it a sitting duck for rule-based attacks, which is why many systems started incorporating composition rules in the first place.
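Plugging the numbers into the naive formula makes the mismatch obvious:

```typescript
// 22 identical digits: the pool is the 10 digits, the length is 22.
const naiveBits = 22 * Math.log2(10);
console.log(naiveBits.toFixed(1)); // ≈ 73.1 bits on paper
// Yet a rule-based attack guesses "1111111111111111111111" almost instantly,
// so the naive estimate wildly overstates the real-world strength.
```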
Keeping in mind the factors at play in the entropy computation, and the limitations of relying on it blindly, how do we measure a password's strength without enforcing binary composition rules?
One approach is to implement a scoring system that adds and deducts points based on the elements in your password. However, the problem with such systems is that they can easily be fooled by not-so-strong passwords like AAbbccdd112233.
Making Entropy More Reliable
Here's a question for you:
What if, instead of scoring, we could reintroduce a factor of randomness into the evaluated password so that the computed entropy becomes more reliable?
If we run a series of steps that gradually strip the password of predictable patterns, we end up with a sanitized version that represents the password at its core. The added randomness comes from the reduced predictability and reduced "filler slots" in the password.
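Here is a rough sketch of what such a sanitization pass could look like. It only handles repeated characters and ascending runs, a deliberate simplification of the idea rather than the full set of patterns a real profiler would target:

```typescript
// Sketch of a sanitization pass: strip obvious sources of predictability
// before estimating entropy.

// Collapse runs of the same character, case-insensitively: "AAbbcc" -> "Abc".
function collapseRepeats(password: string): string {
  return password.replace(/(.)\1+/gi, "$1");
}

// Collapse ascending alphabetic or numeric runs to their first character,
// case-insensitively: "Abcd123" -> "A1".
function collapseSequences(password: string): string {
  let result = "";
  let prevCode = Number.NaN;
  for (const ch of password) {
    const code = ch.toLowerCase().charCodeAt(0);
    if (code !== prevCode + 1) result += ch; // keep only run starters
    prevCode = code;
  }
  return result;
}

function sanitize(password: string): string {
  return collapseSequences(collapseRepeats(password));
}

console.log(sanitize("AAbbccdd112233")); // "A1"
```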
Going back to our example, AAbbccdd112233 is indeed predictable. Yet it is not strictly sequential enough to be rejected, nor repetitive enough to trip a rule, since most systems can't realistically prevent you from using double letters.
So why is it predictable? Because it's really just Abcd123. That's what your human eyes see when they look at it! We can confirm its predictability by the fact that its lowercase version, aabbccdd112233, already appears multiple times in databases of breached passwords.
By identifying and eliminating patterns that act as sources of predictability, we not only increase the randomness of the sanitized password but also counteract the deceptive effect of length introduced by these patterns.
If we remove the repeated characters from our example, we end up with the core version Abcd123. If we then remove the sequential characters, we are left with A1. Factoring this version into the entropy computation leads to a far more reliable number.
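To put numbers on that claim, here is the same naive estimate applied to the raw password and to its sanitized core. The pool convention follows the earlier sketch, and the figures are illustrative:

```typescript
// Naive estimate for the raw password vs. its sanitized core.
const entropy = (pool: number, len: number) => len * Math.log2(pool);

// Raw "AAbbccdd112233": lower + upper + digits -> pool of 62, length 14.
console.log(entropy(62, 14).toFixed(1)); // ≈ 83.4 bits, looks strong
// Sanitized core "A1": upper + digits -> pool of 36, length 2.
console.log(entropy(36, 2).toFixed(1)); // ≈ 10.3 bits, far closer to reality
```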
Bringing It All Together: A New Verifier in Town
I played around with this idea over the past couple of weeks and ended up creating what I believe to be a versatile library, which I called pass-profiler. It is robust enough by default to require good practices in creating passwords without enforcing any particular policy. It is also highly configurable, allowing developers to define their own criteria for "predictable".
The library is written in TypeScript and is compatible with both Node.js and browser environments.
I believe this to be a novel approach to measuring password strength, one that, despite being a work in progress in need of extensive testing, is already proving much harder to trick than many of the other strength meters available.