Monday, June 16, 2014

A Recipe For Password Security

Several months ago I helped architect a password security scheme for a client. During that process I learned quite a bit about how to encrypt passwords in a secure fashion.

Encryption vs. Hashing

Most developers have heard the term "encryption", which means that data is encoded in such a way that it is not human-readable. But in the context of password security the word "encryption" implies that the encoding can be decoded; that is, it's "two-way" encryption. While it may be advantageous to decode a user's password, especially in situations where they have forgotten it, it opens up a security hole. Simply put, if someone attacking your security implementation can work out the algorithm and parameters used to encrypt passwords they can then decrypt every password in your system! At that point you have the equivalent of passwords stored in plaintext – not an excellent approach.

A much more secure method for storing passwords is to use a cryptographically secure hash. A "hash" is an algorithm that takes a block of data and generates a value from it such that if any of the data changes the hashed value changes as well. The block of data is generally called a "message" and the hashed value is called a "digest". What is valuable about cryptographic hashes with regard to password security is that they are "one-way": once the password has been hashed it cannot be decoded back to its original plaintext form. This eliminates the security vulnerability that exists with two-way encryption.

By now I’m sure some of you have thought, “Great, if I have this hashed value how to I validate that against the plaintext password typed in by the user?” The answer is, you don’t. When the user types in their password you hash the value that they entered using the same hash algorithm. You then compare that hashed value with the hashed password stored in your system. If they match then the user is authenticated.

Adding Some Salt

So we now have a process for storing passwords in our system in a secure form that cannot be decrypted, thus closing the door that allows attackers access to all the passwords stored in the system. But determined attackers are not so easily thwarted. They will use a rainbow of methods to gain access to your systems, which segues (in a ham-handed fashion) into the next topic, rainbow tables.

Since they can no longer decrypt your passwords attackers will try the next best thing. They’ll take a large list of common words and passwords and hash them using some of the well-known standard algorithms. They’ll then compare this list of hashed words to your password list. Any matches will immediately indicate a successful password search. Given users’ penchant for commonly used passwords the chances are good that the attacker will end up with quite a few successes.

The generally accepted practice for thwarting this attack is to use a "password salt". A salt value is just a randomly generated value that is combined with the user's password before hashing. The salt value is then stored with the user's hashed password so that the authentication method can use it to hash a password entered by the user.

Now I’m sure some of you are wondering how this prevents rainbow table attacks if the salt value is easily accessible. What the salt value does is require the attacker to regenerate all the values in their rainbow table using the specified salt value. Even if they have a match it will only work for the one user for which that particular salt value was used. While it doesn’t prevent a successful attack it certainly limits it to one success and makes it very slow and cumbersome for the attacker to make additional attempts on other passwords.

Needs Some Pepper

So how can we make it even more difficult for the determined attacker? Well, we can add a "secret salt" value to the password before we hash it – one that is not stored in the database. This value would be well known to the system, so that it can reproduce it as necessary for authentication, but it never appears in the database. This type of value is commonly known as a "pepper" value. The fact that it is not published or stored makes it even more difficult for an attacker to guess what the plaintext value was before hashing. Unless they have access to the source code that supplies the pepper value they may never be able to generate a successful rainbow table.
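As a sketch, the only change from plain salting is mixing in a constant known only to the application before hashing (the name and value here are purely illustrative):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class PepperedHash
{
    // The pepper lives in code or protected configuration, never in the
    // database; this value is purely illustrative.
    private const string Pepper = "example-pepper-value";

    public static byte[] Hash(string password, byte[] salt)
    {
        // Mix the pepper into the plaintext, then salt and hash as usual.
        byte[] pwd = Encoding.UTF8.GetBytes(Pepper + password);
        var message = new byte[salt.Length + pwd.Length];
        Buffer.BlockCopy(salt, 0, message, 0, salt.Length);
        Buffer.BlockCopy(pwd, 0, message, salt.Length, pwd.Length);
        using (var sha = SHA256.Create())
            return sha.ComputeHash(message);
    }
}
```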

Simmer Slowly

So it seems like we've covered all the bases. But we can't forget about Moore's Law. As CPUs and GPUs get faster and faster it becomes easier to generate multiple rainbow tables, letting an attacker take many guesses at a hashed password list. What's a poor, security-minded developer to do?

Well, how about we purposely slow them down?

There are several well-known cryptographic hash algorithms, such as the Message Digest derivatives (MD2, MD4, MD5) and the Secure Hash Algorithms from the National Security Agency (SHA-1, SHA-256), but many of these were designed to run quickly. In some cases, like MD5, the algorithm is now considered "cryptographically broken". What we really need is a hash algorithm that can be tuned so that it is slow enough to discourage the generation of multiple rainbow tables but fast enough to hash a single password quickly after a user types it in for authentication.

Enter bcrypt. Bcrypt is a hashing function based on the well-regarded Blowfish encryption algorithm that includes an iteration count to make it process more slowly. Even if the attacker knows that bcrypt is the algorithm in use, a properly selected iteration count makes the generation of rainbow tables very expensive. Furthermore, the iteration count is stored in the hashed result, so the scheme adapts over time: as computing power continues to increase, new passwords can be hashed with a higher iteration count, and existing hashes can be upgraded the next time each user successfully logs in, so that the generation of rainbow tables continues to be expensive.
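Bcrypt itself isn't part of the base .Net Framework (third-party packages such as BCrypt.Net provide it), but the same tunable-work-factor idea can be sketched with the framework's built-in PBKDF2 implementation, Rfc2898DeriveBytes. The iteration count below is illustrative, not a recommendation:

```csharp
using System;
using System.Security.Cryptography;

static class SlowHash
{
    // The "cook time": raise this as hardware gets faster.
    private const int Iterations = 10000;

    public static byte[] Hash(string password, byte[] salt)
    {
        // PBKDF2 repeatedly applies an HMAC, Iterations times, making
        // each guess proportionally more expensive for an attacker.
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
            return kdf.GetBytes(32);   // 256-bit derived value
    }
}
```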

A Spicy Meatball

So by using a combination of the right spices (salt and pepper) and the proper cook time (iterations) we can end up with an excellently prepared plate of hash. It’s not perfect - no security approach ever is - but we can certainly make our systems less vulnerable to the point where an attacker will look for victims that are less well-protected. And that’s all we can really hope for, that they look somewhere else.

Additional References

Coding Horror: You're Probably Storing Passwords Incorrectly

Sunday, June 15, 2014

Throwing a Great Block

Last year I was working on a cloud-hosted Windows service for a client that contained an application-specific logging implementation. The existing architecture posted log entries at various processing points: file discovery, pickup, dropoff, and download. The logging code would post a message to Microsoft Message Queuing (MSMQ) and a separate database writer service would dequeue those messages and post them to a series of tables in SQL Server.

Lagging The Play

While this setup worked perfectly well, it had one minor issue - the queueing of a log message to MSMQ happened sequentially. That means that while the service was attempting to post a log message to the queue all other file processing was temporarily suspended. Since posting a log message to MSMQ is an inter-process communication there will be a noticeable lag imposed on the calling thread. Add to that the possibility that the MSMQ service could be located on another server and you've now imposed network lag on the calling process as well. That's potentially alotta-lag! In the worst possible case, if MSMQ cannot be reached for some reason, file processing could be suspended for a very long time. For a platform that expects to process thousands of messages a day this was clearly not going to work as a long-term solution. However, the client wanted to retain MSMQ as a persistent message forwarding mechanism so that log messages would not be lost if the writer service was unavailable.

Block For Me

It seemed clear that what was needed was some way for the service to save log messages internally, for near-term posting to MSMQ, in a way that would minimally impact file processing. What came to mind initially was an internal Queue object on which the service could store log messages, to be dequeued and posted to MSMQ by another thread. It's the classic Producer-Consumer pattern. While this threading pattern is not terribly difficult to implement, it has some subtleties that make it non-trivial. First, all access to the Queue object has to be thread-safe. Second, the MSMQ posting thread needs to enter a low-CPU-load wait loop while it's waiting for a log message to be queued. Wouldn't it be nice if there was something built into the .Net Framework to do all this?

Well, sometimes Microsoft gets it right. In the .Net Framework 4 release Microsoft added the BlockingCollection class, which does exactly what we needed. It allows for thread-safe Producer-Consumer patterns that do not consume CPU resources when there is nothing on the queue.

Here's an example of how to implement it in a simple console application.

First, we'll need a message class. In the service for the client the log information message was more complex, but this should give you the general idea.
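A minimal version might look like this (the property names are illustrative, reconstructed from the sample output later in the post):

```csharp
public class LogMessage
{
    public int Id { get; set; }
    public string Text { get; set; }

    public override string ToString()
    {
        return string.Format("Message with ID {0} and value {1}", Id, Text);
    }
}
```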

The real "meat" of the operation is in the class that encapsulates the blocking collection. Here's the first portion of the class definition.
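The original listing isn't reproduced here, but based on the description it would look something like this sketch:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public class LogMessageQueue : IDisposable
{
    // A BlockingCollection backed by a ConcurrentQueue: thread-safe, FIFO.
    private readonly BlockingCollection<LogMessage> _queue =
        new BlockingCollection<LogMessage>(new ConcurrentQueue<LogMessage>());

    private readonly Thread _dequeueThread;
    private bool _disposed;

    public LogMessageQueue()
    {
        // Start the consumer thread as soon as the queue is created.
        _dequeueThread = new Thread(DequeueMessageThread) { IsBackground = true };
        _dequeueThread.Start();
    }

    ~LogMessageQueue()
    {
        // The parameter indicates this call came from the destructor.
        Dispose(true);
    }

    // AddLog, DequeueMessageThread, and the Dispose methods are covered
    // in the rest of the post.
}
```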
You'll notice that the class implements the IDisposable interface. This is so that the thread that dequeues the messages from the blocking collection can clean up after itself. This will be seen in another section of the code for this class.

You'll also notice that when the BlockingCollection is defined we specify the class of objects that will be placed on the collection. However, when we instantiate the collection we signify that it should use a ConcurrentQueue object as the backing data store for the blocking collection. This ensures that the items placed in the collection will be handled in a thread-safe manner on a first-in, first-out (FIFO) basis.

The finalizer method merely calls our Dispose method with a parameter indicating that it was called from the class' destructor, a common pattern for IDisposable implementations. The Dispose methods will be shown in their entirety later in this post.
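Reconstructed from the description, those two methods might look like this (the real service would post the dequeued message to MSMQ rather than write to the console):

```csharp
public void AddLog(LogMessage message)
{
    // Thread-safe enqueue; returns quickly so the caller isn't delayed.
    _queue.Add(message);
}

private void DequeueMessageThread()
{
    try
    {
        while (!_queue.IsCompleted)
        {
            // Take blocks in a low-CPU wait until a message arrives.
            LogMessage message = _queue.Take();
            Console.WriteLine("Dequeueing: {0}", message);
        }
    }
    catch (InvalidOperationException)
    {
        // The collection was marked complete while Take was waiting.
    }
    catch (ThreadAbortException)
    {
        // Dispose timed out waiting for this thread and aborted it.
    }
}
```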

The AddLog method is very simple; it invokes the blocking collection's Add method to enqueue the message in a thread-safe manner. The DequeueMessageThread method appears to be an endless loop that keeps attempting to dequeue a message, which looks like it should cause a CPU spike from the tight looping. But here's where the magic of the blocking collection comes into play. The Take method of the blocking collection enters a low-CPU wait state if nothing is found on the queue, blocking the loop from proceeding. As soon as a message is enqueued the Take method returns from the wait state and the loop proceeds. Note that Take also returns immediately - by throwing an InvalidOperationException - if the blocking collection has been closed down, indicating completion; hence the IsCompleted check that controls the loop.

The exception handler in the method captures two specific exceptions:
  1. The InvalidOperationException will be signaled if the blocking collection is stopped. We'll see this in the Dispose method;
  2. The ThreadAbortException will be signaled if the thread had to be killed because the Dispose method timed out waiting for the thread to finish.
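Reconstructed from the description, the pair of Dispose methods might look like this (the five-second timeout and console messages are illustrative):

```csharp
// Public IDisposable entry point; not called from the destructor.
public void Dispose()
{
    Dispose(false);
}

// calledFromFinalizer distinguishes destructor calls from explicit disposal.
private void Dispose(bool calledFromFinalizer)
{
    if (_disposed)
        return;
    _disposed = true;

    Console.WriteLine("Shutting down queue. Waiting for dequeue thread completion.");

    // Disallow further additions so the dequeue thread can drain and exit.
    _queue.CompleteAdding();

    // Wait for the thread, but not forever; kill it if it times out.
    if (_dequeueThread.Join(TimeSpan.FromSeconds(5)))
        Console.WriteLine("Dequeue thread complete.");
    else
        _dequeueThread.Abort();

    if (!calledFromFinalizer)
        GC.SuppressFinalize(this);
}
```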
In this code snippet the first Dispose method is our public interface that satisfies the requirement for IDisposable implementation. It simply calls our private Dispose method that takes a parameter indicating whether it was called from the class destructor method.

The second, private Dispose method is where some housekeeping for the blocking collection and dequeue thread happens. First we call the blocking collection's CompleteAdding method. This disallows any further additions to the queue, eliminating the chance that the dequeue thread will never end because messages keep being added. We then wait for the thread to complete by calling the thread's Join method, specifying a timeout value. If the thread has not completed within the specified timeout we forcibly destroy it and exit. Finally, if we were not called from the class' destructor we suppress the garbage collector's finalization of the object, since cleanup has already been done.

To utilize a producer-consumer queue like this one is quite simple:
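A sketch of the calling side, matching the sample run that follows:

```csharp
static void Main()
{
    using (var queue = new LogMessageQueue())
    {
        for (int i = 1; i <= 100; i++)
        {
            var message = new LogMessage { Id = i, Text = "Message text # " + i };
            Console.WriteLine("Enqueueing: {0}", message);
            queue.AddLog(message);
        }
    }   // Dispose runs here, stopping the dequeue thread.
}
```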
The using statement ensures that the queue's Dispose method is invoked upon completion, thereby stopping the dequeuing thread. When executed in a loop like this one that enqueues 100 messages, the tail end of the output looks like this:

Enqueueing: Message with ID 92 and value Message text # 92.
Enqueueing: Message with ID 93 and value Message text # 93.
Enqueueing: Message with ID 94 and value Message text # 94.
Dequeueing: Message with ID 88 and value Message text # 88.
Dequeueing: Message with ID 89 and value Message text # 89.
Dequeueing: Message with ID 90 and value Message text # 90.
Dequeueing: Message with ID 91 and value Message text # 91.
Enqueueing: Message with ID 95 and value Message text # 95.
Enqueueing: Message with ID 96 and value Message text # 96.
Enqueueing: Message with ID 97 and value Message text # 97.
Enqueueing: Message with ID 98 and value Message text # 98.
Dequeueing: Message with ID 92 and value Message text # 92.
Dequeueing: Message with ID 93 and value Message text # 93.
Dequeueing: Message with ID 94 and value Message text # 94.
Dequeueing: Message with ID 95 and value Message text # 95.
Enqueueing: Message with ID 99 and value Message text # 99.
Enqueueing: Message with ID 100 and value Message text # 100.
Dequeueing: Message with ID 96 and value Message text # 96.
Dequeueing: Message with ID 97 and value Message text # 97.
Dequeueing: Message with ID 98 and value Message text # 98.
Dequeueing: Message with ID 99 and value Message text # 99.
Dequeueing: Message with ID 100 and value Message text # 100.
Shutting down queue. Waiting for dequeue thread completion.
Dequeue thread complete.

As you can see the dequeue process slightly lags the enqueue process, as you would expect for processes running in separate threads. The messages are interspersed as the threads compete for the shared resource.

Finishing It Off

So what we've demonstrated is a way to implement a producer-consumer pattern without writing a lot of thread management code. While this pattern is not applicable in a great many situations it certainly has its uses. Any time you need to queue up items for processing but don't want to slow down the primary process give this pattern a try.