
Thu, May. 19th, 2011, 09:16 am
Practical Cryptography Corrected

The book 'Practical Cryptography' is perfectly good for giving an overview of basic concepts in cryptography, but its immediate practical advice to implementers is not terribly to the point or accurate. Here is much more to the point and accurate advice.

  • For a block cipher you should use AES-128. If you don't understand your protocol well enough to know whether there are birthday attacks on your keys, you have bigger issues. (Shame on Schneier for still trying to revisit the AES design competition by yammering on about Twofish.)

  • For an encryption mode, you should always use CTR, and always use a nonce of zero, and never reuse keys. (There's a sketch of how these recommendations fit together after this list.)

  • For a hash function, you should use sha-256 until the winner of the sha-3 design competition is announced, and then you should use sha-3.

  • You should always do encryption as a layer outside of authentication.

  • For entropy, you should always do whatever the Python os.urandom() call does on the local platform.

  • For a data format, you should use JSON. (Not XML!)

  • For an RSA exponent, you should always use 2. Technically that's Rabin-Williams, and requires a slightly different implementation, but that actually works in its favor. Rabin-Williams has a reduction to factoring, RSA does not.

  • You should never use the same key for both encryption and authentication. If you need both encryption and authentication, you should use two keys. One for encryption, and one for authentication.

  • If you're going to be using RSA, you should learn about encoding for it! This is by far the most historically error-prone part of crypto protocols, and Practical Cryptography bizarrely doesn't even mention it as an issue.

  • You should not parameterize your protocols! That just creates compatibility problems. If necessary you can parameterize it by having two values, one good for the next twenty years, and one good until the end of time, but key sizes have gotten big enough that skipping that first one should be the default.
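
To make this concrete, here is a rough Python sketch of how the pieces above fit together. It assumes the third-party pyca/cryptography package for AES (the standard library has HMAC and SHA-256 but no AES), it is an illustration of the advice rather than vetted code, and it punts entirely on how the freshly generated keys get to the other side.

    # Illustrative only: fresh keys from os.urandom, separate keys for
    # authentication and encryption, HMAC-SHA-256 on the inside, AES-128-CTR
    # with a zero nonce on the outside, JSON as the data format.
    import os
    import json
    import hmac
    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_message(payload):
        # Data format: JSON, not XML.
        plaintext = json.dumps(payload).encode()

        # Entropy: whatever os.urandom() does on the local platform.
        # Never reuse keys, and never use one key for both jobs.
        mac_key = os.urandom(16)
        enc_key = os.urandom(16)

        # Authentication on the inside: HMAC-SHA-256 over the plaintext.
        tag = hmac.new(mac_key, plaintext, hashlib.sha256).digest()

        # Encryption as the outer layer: AES-128-CTR with the nonce fixed at
        # zero, which is fine because enc_key is used exactly once.
        encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(b"\x00" * 16)).encryptor()
        ciphertext = encryptor.update(plaintext + tag) + encryptor.finalize()

        return enc_key, mac_key, ciphertext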


Maybe someday Schneier will write a book which I can recommend to people who are getting started in cryptography, but I'm not holding my breath.



Are you a good programmer? Try this coding challenge.

Thu, May. 19th, 2011 08:39 pm (UTC)
dossy

The coding challenge implicitly asks for a brute-force search based solution?

It also seems to assume that it's a straight line or at least a one-dimensional space - a curve is certainly a line. If it's not a straight line, then there isn't sufficient information to solve the problem in two dimensions.

Overall, this problem sounds an awful lot like a variant of the 8 Queens puzzle.

Thu, May. 19th, 2011 08:49 pm (UTC)
bramcohen

Yes, the test is to see if you can write the code to brute force it. There's no gimmick.

Fri, May. 20th, 2011 01:50 pm (UTC)
jered

Right, this is an optimal Golomb ruler; it's NP-complete.

Fri, May. 20th, 2011 04:37 pm (UTC)
bramcohen

Not exactly an optimal Golomb ruler - that would disallow the same distance between any two pairs, whereas this is just disallowing arithmetic progressions of length 3.
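
For anyone wondering what "brute force it" might look like, here is a toy sketch against a stand-in problem used purely for illustration (the real challenge is the one linked in the post, not this): find the largest subset of 0..n-1 containing no three-term arithmetic progression.

    # Toy brute force, exponential and only sensible for tiny n.
    from itertools import combinations

    def ap_free(nums):
        """True if no three elements of nums form an arithmetic progression."""
        present = set(nums)
        for a, b in combinations(sorted(nums), 2):
            # a < b, so an integer midpoint strictly between them is a
            # distinct third element completing a progression.
            if (a + b) % 2 == 0 and (a + b) // 2 in present:
                return False
        return True

    def largest_ap_free_subset(n):
        """Try subsets of range(n) from largest to smallest, return the first hit."""
        for size in range(n, 0, -1):
            for candidate in combinations(range(n), size):
                if ap_free(candidate):
                    return candidate
        return ()

    # Example: largest_ap_free_subset(5) -> (0, 1, 3, 4)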

Thu, May. 19th, 2011 08:43 pm (UTC)
allonymist

For an RSA modulus, you should always use 2.

I don't think you mean modulus here.

Thu, May. 19th, 2011 08:48 pm (UTC)
bramcohen

Yes, of course I meant exponent. Corrected now, thanks for pointing that out.

Thu, May. 19th, 2011 09:08 pm (UTC)
cypherpunk95

You should always do encryption as a layer outside of authentication.

Strongly disagree.

Authenticate-then-encrypt (AtE) is subject to attacks that encrypt-then-authenticate (EtA) is not. How practical those attacks are depends on exactly what you're using for your encryption, but they can occur surprisingly often, and you don't want to have to worry about them.

The most prevalent class of such attack (in my opinion) is when you've got a (possibly public-key) encryption system that isn't reaction resistant. The encryption might be somewhat homomorphic, in that the adversary can modify the ciphertext in such a way as to make it come out to the same plaintext some of the time, and a different plaintext (or ⊥) some of the time. Then the inner authentication will fail, and the adversary will know which happened. Lattice-based crypto often has this problem, as does McEliece (see our 1999 paper on this). Other encryption systems may have this or similar problems, as protecting against this isn't generally something that's a design goal of semantically secure encryption. (If your encryption is IND-CCA2, you're safe, but then you've basically got the authentication built in, anyway.)

Authenticating the ciphertext avoids the whole issue.

Of course, it may have its own issues, if you're using signatures (and not MACs) for authentication, and you want, say, to hide the information of who's signing the message. If you have complex requirements like that, though, you might think twice about designing the crypto protocol yourself.

Thu, May. 19th, 2011 09:25 pm (UTC)
bramcohen

RSA in particular has vicious attacks when you do authentication checks the wrong way, but that's really an argument in favor of doing authentication properly rather than changing the order, particularly because checking that high order byte is something a naive implementer might do anyway.

That more complicated stuff you mention really shouldn't be done by people who don't already know these issues well, and my advice is for people who don't really know what they're doing but are trying to do something simple (which still might not be a good idea, but they're better off with good advice than without).

Thu, May. 19th, 2011 09:29 pm (UTC)
cdcash [cseweb.ucsd.edu]

I certainly agree that Practical Cryptography feels outdated, and readers should be warned about the problematic parts. Here are a few thoughts and questions from a theoretical crypto POV:

For an encryption mode, you should always use CTR, and always use a nonce of zero, and never reuse keys.

I'm interested to hear more about why this is true. Certainly there are times when you want to reuse keys and use modes that provide more security than that :-).

You should always do encryption as a layer outside of authentication.

It sounds like you are suggesting the opposite of what you should do, which is encrypt first, and then authenticate the ciphertext. Authenticating the plaintext before encrypting is not safe.

For an RSA exponent, you should always use 2. Technically that's Rabin-Williams, and requires slightly different implementation, but that actually works in its favor. Rabin-Williams has a reduction to factoring, RSA does not.

I think suggesting that practitioners dig into implementations of the number-theoretic algorithms (as would be necessary here) is likely to lead to exploitable bugs. Moreover, even a proper implementation of Rabin-Williams would not increase security in practice, unless the sky falls and we find algorithms for inverting the RSA trapdoor function that do something other than factoring. And what's worse is that in practice you'll be using a padding scheme like PKCS#1v1.5, which means the reduction no longer applies. Even if you used OAEP instead, the reduction would be in the random oracle model, further clouding the significance of a reduction to factoring vs RSA inversion.

This is all ignoring the fact that low-exponent (e=3) RSA has led to vulnerabilities in the past...

You should never use the same key for both encryption and authentication. If you need both encryption and authentication, you should use two keys. One for encryption, and one for authentication.

This is good advice, unless you're using an authenticated encryption mode like GCM. It would be good to be more complete on when it applies.

Thu, May. 19th, 2011 11:36 pm (UTC)
bramcohen

CTR is the simplest, easiest to analyze and least error-prone mode there is, and it doesn't require padding and allows for random access. The added security of other modes is mostly fallacious, and the ones where it isn't are way too complicated for me to feel comfortable using them.

I don't know what your issue with encrypting outside of authenticating is. If nothing else, it leaks authentication information which one may want to keep secret, and this is one case where I'm agreeing with Practical Cryptography, just making the recommendation more straightforwardly. My simplest explanation for this is that doing encryption on the outside allows for clean layering of the software implementation, where the other way does not.

RSA exponents greater than 2 are at least twice as slow, which should not be simply ignored for speculative reasons. Encoding issues should be dealt with by using a good encoding, which is a dangerous issue in any case, as I explained.

I might start recommending GCM in the future, after more chips support it and I've spent some time analyzing its implications. For the time being, simply using a secure hash and CTR is the reasonable conservative approach.
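
To illustrate the random access point, here is a small sketch (it assumes the pyca/cryptography package and the zero-nonce convention from the post, where the initial counter block is all zeros; not vetted code):

    # Block i of the CTR keystream is AES(key, counter=i), so decryption can
    # start at any 16-byte boundary by setting the initial counter block to
    # that block's index.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16

    def decrypt_from_block(key, ciphertext, first_block):
        """Decrypt ciphertext[first_block * 16:] without touching earlier blocks."""
        counter = first_block.to_bytes(BLOCK, "big")  # zero nonce plus block index
        decryptor = Cipher(algorithms.AES(key), modes.CTR(counter)).decryptor()
        return decryptor.update(ciphertext[first_block * BLOCK:]) + decryptor.finalize()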

Fri, May. 20th, 2011 01:32 am (UTC)
dossy

Interestingly, I just came across this article from 2 years ago --

http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html

Seems somewhat similar at times to what you wrote. Interesting how the recommendations then are still pretty much the same as the ones you made above, now.

Fri, May. 20th, 2011 02:35 am (UTC)
snappywan: What are the guarantees of Python os.urandom?

Can you explain/elaborate on the guarantees of Python's os.urandom()?

Fri, May. 20th, 2011 12:23 pm (UTC)
bramcohen: Re: What are the guarantees of Python os.urandom?

Mostly that it actually calls whatever the appropriate underlying native API is.

Fri, May. 20th, 2011 07:03 am (UTC)
ciphergoth

I would directly recommend an AEAD mode like GCM rather than even mentioning CTR mode.

Fri, May. 20th, 2011 12:25 pm (UTC)
bramcohen

I still view those as too complicated and speculative to recommend, given that the CPU overhead of separate encrypt and hash passes generally isn't a bottleneck.

Fri, May. 20th, 2011 11:32 am (UTC)
shinigami31

The CTR nonce MUST be random. It cannot be the same value, or zero as you have suggested. Doing so leads to chosen-plaintext attacks.

Authentication should be done after encryption because this is the way that can be proved to work for any encryption and authentication schemes.

Fri, May. 20th, 2011 12:30 pm (UTC)
bramcohen

If you never reuse keys, as I recommended, then the nonce can be fixed. Trying to reuse keys and change the nonce is just doing the same thing, where the nonce is really acting as a key, and with a whole lot of bits of security removed because of related blocks being used throughout the message.

I have no idea what 'proved to work' you're babbling about.

Fri, May. 20th, 2011 12:52 pm (UTC)
shinigami31

I see. But there are many scenarios where not reusing keys is impractical. And even if you never reuse keys, why not use a random nonce?

As I said, encrypt-then-authenticate is the only way mathematically proven to work with any secure encryption and authentication schemes. See the proof in, e.g., "Introduction to Modern Cryptography" by Katz & Lindell. If you use authenticate-then-encrypt, you may get an insecure scheme even if using secure encryption & authentication schemes. (It may be secure for particular schemes, but it's much safer to use the route that is guaranteed to work.)

Fri, May. 20th, 2011 04:34 pm (UTC)
bramcohen

There are two meanings of 'key': one is a key that you use for encryption, the other is an input into CTR mode. You can renegotiate the input to CTR mode each time you use it, using the encryption key to do so.

You could use a random nonce, but it complicates things a bit and gives you more bits than you need - one-time key has 128 bits, random nonce has somewhere south of 256 bits.

I'm fairly skeptical of proofs of security, because historically they haven't really demonstrated the things they claimed reliably, and every time I've seen a protocol which tries to do authentication outside of encryption it's a mess, and I put a big premium on simplicity and analyzability when it comes to protocol design, because those have a real effect on likelihood of breakage.
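
Concretely, something like this is what I have in mind (a rough sketch with made-up names, not a spec; HMAC-SHA-256 as the derivation step is just one reasonable choice):

    # The long-term key is never fed to CTR directly; it is only used to
    # derive a fresh one-time key per message, so the CTR nonce stays zero.
    import hashlib
    import hmac
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def one_time_key(long_term_key, message_number):
        """Derive a fresh 128-bit AES key for a single message."""
        return hmac.new(long_term_key, message_number.to_bytes(8, "big"),
                        hashlib.sha256).digest()[:16]

    def encrypt_nth_message(long_term_key, message_number, plaintext):
        key = one_time_key(long_term_key, message_number)  # used exactly once
        encryptor = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()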

Sat, May. 21st, 2011 01:17 pm (UTC)
shinigami31

Sorry, I don't follow you. CTR has 3 inputs: the message, the key, and the nonce/IV. Are you calling the nonce a key? It doesn't make sense to me, because the nonce is public, a key is not.

I understand that not reusing keys and using the same nonce can be secure, but it is impractical.

I agree that proofs may not be enough, but they don't hurt, either. If I have a choice between something proven secure and something that is probably secure, but may not be, it's a clear choice to me.
If the protocols you've seen are a mess, then it's the protocols' fault, not the technique's.

Fri, May. 20th, 2011 03:14 pm (UTC)
lensassaman: I almost entirely agree; some nits:

I usually recommend e=2^16+1, but as you pointed out, you're really saying "don't use RSA, use Rabin-Williams."

Ian brought up some of the issues with Authenticate-then-Encrypt; frankly, I'm a fan of the way we handle it in RFC 4880, with the user-verifiable authentication, then encryption, then message authentication.

So, it looks like the only remaining thing I have an issue with is your comments on parameterization; I strongly argue that one should design one's protocol in a parameterized fashion, but provide only one option per needed component. E.g., you need a hash function; fine; design your protocol such that it supports multiple hash functions, but only specify SHA-2. By building parameterization into the protocol, transitioning to SHA-3 becomes a much less painful process.

I say this in part because of the awful experience of transitioning to 160-bit hashes after assuming MD5 in a non-parameterized fashion. Parameterization does not automatically imply multiple choices.
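
A toy example of the shape I mean, illustrative only: the format carries an algorithm label, but the specification admits exactly one value today, so there is nothing to negotiate until SHA-3 arrives.

    # Parameterized format, single permitted parameter value.
    import hashlib
    import json

    SUPPORTED_HASHES = {"sha-256": hashlib.sha256}  # exactly one entry, on purpose

    def make_digest_record(data):
        return json.dumps({"hash": "sha-256",
                           "digest": hashlib.sha256(data).hexdigest()})

    def check_digest_record(record, data):
        msg = json.loads(record)
        algo = SUPPORTED_HASHES[msg["hash"]]  # unknown labels simply fail
        return msg["digest"] == algo(data).hexdigest()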

Fri, May. 20th, 2011 04:36 pm (UTC)
bramcohen: Re: I almost entirely agree; some nits:

Funny how I'm getting the most pushback about authenticate-then-encrypt, when it's a point where I basically agree with what Practical Cryptography says.

Having a generic handshake at the beginning of a protocol where dictionaries are exchanged to help with future upgrades is a very good idea, but I don't think that making specific hooks for particular upgrades beyond that makes life any easier in the future.

Fri, May. 20th, 2011 08:10 pm (UTC)
monocular35: Re: I almost entirely agree; some nits:

Haha, I was going to complain that they actually do recommend AtE, except they hem and haw for a couple of pages first.

Then I was going to ask how many books published in 2003 recommend (or even mention) JSON, but then I took a look at the second edition of Practical Cryptography, which was renamed to Cryptography Engineering for some reason, and it still recommends XML. Whoops.

Sat, May. 21st, 2011 01:16 am (UTC)
themusicgod1: 'that's entropy and I hope that you're all down with it'

> For entropy, you should always do whatever the Python os.urandom() call does on the local platform.

Disagreed. If you need entropy, you should use real random numbers. If that isn't what os.urandom() does on your platform, you should rewrite your platform to make it do that.

If that incurs too much overhead you should talk to your ISP, random.org, your local network admin, the person who designed your computer or whatever. But if you need random numbers there is a source for them.

( bramcohen -> jwz -> exoskeleton -> masuimi -> ruthanolis -> themusicgod1 -> caponex -> pope_guilty -> bramcohen )

Sat, May. 21st, 2011 02:34 am (UTC)
bramcohen: Re: 'that's entropy and I hope that you're all down with it'

Wow, that service is one truly awful security horror which it never even occurred to me someone might do.