Also toyed with:
- Possibility of single var derefs at `freeze`/`thaw` call.
  Abandoned since it'd be a big change, and slower with opts destructuring.
- Possibility of consolidating all config into a single var.
  Abandoned since it'd be a breaking change, and slower with opts destructuring.
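For context, a rough sketch of what those approaches look like vs. the current opts-map style (all names and signatures here are hypothetical, not Nippy's actual internals):

```clojure
;; Hypothetical sketch only - not Nippy's actual vars or signatures.

;; (a) Individual dynamic vars, deref'd at each freeze/thaw call (abandoned):
(def ^:dynamic *default-compressor* :lz4)
(def ^:dynamic *default-encryptor*  :aes-gcm)

;; (b) All config consolidated into a single var (abandoned):
(def ^:dynamic *default-opts* {:compressor :lz4, :encryptor :aes-gcm})

;; (c) Current style: explicit per-call opts map, destructured in the signature:
(defn freeze*
  [x & [{:keys [compressor encryptor password] :as opts}]]
  ;; serialization elided - just return the effective opts for illustration
  (merge {:compressor :lz4, :encryptor :aes-gcm} opts))
```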
Why?
- AES-GCM is faster and can be more secure; see e.g. https://goo.gl/Dsc9mL.
- AES-GCM is an authenticated[1] encryption mechanism, providing
  automatic integrity checks. This is relevant to [#101].
What's the issue with #101?
- We compress then encrypt on freeze; the reverse order would make
  compression useless.
- So we decrypt then decompress on thaw.
Attempting CBC decryption with the wrong password will often, but not
*always*, throw. That means decompression could end up being attempted
against a junk byte array. And this can cause some decompressors to fail
in a destructive way, including large allocations (DoS) or even taking
down the JVM in extreme cases.
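To make that failure mode concrete, here's a minimal REPL sketch using plain `javax.crypto` (not Nippy's actual encryptor code): decrypting with a wrong key usually throws a padding exception, but a small fraction of attempts quietly return junk bytes instead:

```clojure
(import '(javax.crypto Cipher)
        '(javax.crypto.spec SecretKeySpec IvParameterSpec))

(defn rand-bytes ^bytes [n]
  (let [ba (byte-array n)]
    (.nextBytes (java.security.SecureRandom.) ba)
    ba))

(defn cbc-decrypt ^bytes [^bytes key ^bytes iv ^bytes data]
  (let [cipher (Cipher/getInstance "AES/CBC/PKCS5Padding")]
    (.init cipher Cipher/DECRYPT_MODE (SecretKeySpec. key "AES")
           (IvParameterSpec. iv))
    (.doFinal cipher data)))

;; Encrypt with one key, then try decrypting with many random (wrong) keys.
;; Most attempts throw BadPaddingException, but roughly 1 in 256 happens to
;; end in valid-looking padding and quietly returns junk bytes - exactly the
;; bytes that would then be handed to the decompressor.
(let [iv (rand-bytes 16)
      ct (let [c (Cipher/getInstance "AES/CBC/PKCS5Padding")]
           (.init c Cipher/ENCRYPT_MODE (SecretKeySpec. (rand-bytes 16) "AES")
                  (IvParameterSpec. iv))
           (.doFinal c (.getBytes "compressed payload")))]
  (frequencies
    (repeatedly 2000
      #(try (cbc-decrypt (rand-bytes 16) iv ct) :junk
            (catch javax.crypto.BadPaddingException _ :threw)))))
;; => e.g. {:threw 1993, :junk 7}
```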
Possible solutions?
- We could add our own HMAC, etc.
- And/or we could use something like AES-GCM, which offers built-in
  integrity and throws an AEADBadTagException on failure (see the sketch
  below).
There may indeed be reasons [2,3,4] to consider adding a custom HMAC -
and that's still on the cards for later.
But in the meantime, the overall balance of pros/cons seems to lean
in the direction of choosing AES-GCM as a reasonable default.
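For comparison, the same experiment with AES-GCM via plain `javax.crypto` (again, a sketch rather than Nippy's actual encryptor code): a wrong key reliably fails tag verification, so junk bytes never reach the decompressor:

```clojure
(import '(javax.crypto Cipher AEADBadTagException)
        '(javax.crypto.spec SecretKeySpec GCMParameterSpec))

(defn gcm-encrypt ^bytes [^bytes key ^bytes iv ^bytes data]
  (let [c (doto (Cipher/getInstance "AES/GCM/NoPadding")
            (.init Cipher/ENCRYPT_MODE (SecretKeySpec. key "AES")
                   (GCMParameterSpec. 128 iv)))]
    (.doFinal c data)))

(defn gcm-decrypt ^bytes [^bytes key ^bytes iv ^bytes data]
  (let [c (doto (Cipher/getInstance "AES/GCM/NoPadding")
            (.init Cipher/DECRYPT_MODE (SecretKeySpec. key "AES")
                   (GCMParameterSpec. 128 iv)))]
    (.doFinal c data)))

;; Reusing `rand-bytes` from the CBC sketch above.
;; Right key round-trips; wrong key reliably throws AEADBadTagException.
(let [k1 (rand-bytes 16), k2 (rand-bytes 16), iv (rand-bytes 12)
      ct (gcm-encrypt k1 iv (.getBytes "compressed payload"))]
  [(String. (gcm-decrypt k1 iv ct))
   (try (gcm-decrypt k2 iv ct)
        (catch AEADBadTagException _ :bad-tag))])
;; => ["compressed payload" :bad-tag]
```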
Note that the change in this commit is made in a backward-compatible way
using Nippy's versioned header: new payloads will be written with AES-GCM
by default, but old payloads already written with AES-CBC will continue
to be read with that scheme.
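The general shape of that kind of versioned dispatch looks something like this (an illustrative sketch only; the ids and byte layout are hypothetical, not Nippy's actual header format):

```clojure
;; Hypothetical layout - not Nippy's actual header format.
(def ^:const id-aes-cbc 1)
(def ^:const id-aes-gcm 2)

(defn wrap-payload
  "Prepends a one-byte scheme id. New payloads get the AES-GCM id."
  ^bytes [^bytes encrypted-ba]
  (let [out (byte-array (inc (alength encrypted-ba)))]
    (aset-byte out 0 id-aes-gcm)
    (System/arraycopy encrypted-ba 0 out 1 (alength encrypted-ba))
    out))

(defn unwrap-payload
  "Reads the scheme id so old AES-CBC payloads are still decrypted with
  the scheme they were originally written with."
  [^bytes ba]
  {:scheme  (case (long (aget ba 0)) ; ids as written by `wrap-payload`
              1 :aes-cbc
              2 :aes-gcm)
   :payload (java.util.Arrays/copyOfRange ba 1 (alength ba))})
```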
References
[1] https://en.wikipedia.org/wiki/Authenticated_encryption
[2] https://www.daemonology.net/blog/2009-06-24-encrypt-then-mac.html
[3] https://blog.cryptographyengineering.com/2011/12/04/matt-green-smackdown-watch-are-aead/
[4] HMAC vs AEAD integrity, https://crypto.stackexchange.com/q/24379
[5] AES-GCM vs HMAC-SHA256 integrity, https://crypto.stackexchange.com/q/30627
Motivation for changing this default:
v1 compatibility requires that in the event of a thaw failure, a fallback
attempt is made using v1 options. This must include an attempt at Snappy
decompression.
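Roughly, that fallback path looks like this (an illustrative sketch of the pattern, not Nippy's actual thaw code):

```clojure
;; Illustrative pattern only - not Nippy's actual thaw implementation.
;; If the normal thaw path fails, the v1 fallback hands the (possibly
;; arbitrary, non-Snappy) bytes to the Snappy decompressor.
(defn thaw-with-v1-fallback [thaw-current thaw-v1-snappy ^bytes ba]
  (try (thaw-current ba)
       (catch Exception _
         (thaw-v1-snappy ba))))
```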
But the version of Snappy we're using has a major bug that can segfault
and crash the JVM when decompression is attempted against non-Snappy data:
https://github.com/dain/snappy/issues/20
I'd switch to an alternative Snappy implementation, but the only other
implementation I'm aware of uses JNI, which can introduce troublesome
compatibility issues even for people who don't want Snappy support.
I'd hoped that the Snappy bug would eventually get fixed, but that's
looking unlikely.
Nippy v2 was released on July 22nd 2013 (2 years, 2 months ago), so I'm
hoping that the majority of lib users will no longer need v1 data thaw
support at this point.
Those that do can re-enable v1 thaw support with this flag.
If a better solution ever presents itself (e.g. the Snappy bug is fixed,
an alternative implementation turns up, or we write a util to reliably
identify Snappy-compressed data), we can re-enable this flag by default.