Before this commit:
- When freezing an item WITHOUT a native Nippy implementation,
Nippy may try to use (1) Java Serializable or (2) Clojure's reader.
If these also fail, an ex-info will be thrown.
The ex-info does NOT include any info about possible exceptions
from (1) or (2).
After this commit:
- The thrown ex-info now includes info about possible exceptions
from (1) and (2). These can be useful, e.g. when they indicate
an OOM error.
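The improved error shape can be sketched in plain Clojure. Note that the key names below are illustrative assumptions, not Nippy's actual ex-data keys:

```clojure
;; Illustrative sketch only: key names are assumptions,
;; not Nippy's actual ex-data keys.
(def ex
  (ex-info "Failed to freeze type"
    {:type :my.app/unsupported
     ;; Possible exceptions from the fallback attempts:
     :serializable-cause (Exception. "Java Serializable failed") ; (1)
     :readable-cause     (Exception. "Clojure reader failed")})) ; (2)

;; Callers can now inspect the fallback failures:
(keys (ex-data ex)) ;=> (:type :serializable-cause :readable-cause)
```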
When support is added for a new type in Nippy version X, it necessarily means
that data containing that new type and frozen with Nippy version X is unthawable
with Nippy versions < X.
Earlier versions of Nippy will throw an exception on thawing affected data:
"Unrecognized type id (<n>). Data frozen with newer Nippy version?"
This can present a challenge when updating to new versions of Nippy, e.g.:
- Rolling updates could lead to old and new versions of Nippy temporarily co-existing.
- Data written with new types could limit your ability to revert a Nippy update.
There's no easy solution to this in GENERAL, but we CAN at least help reduce the
burden related to CHANGES in core data types by introducing changes over 2 phases:
1. Nippy vX reads new (changed) type, writes old type
2. Nippy vX+1 writes new (changed) type
When relevant, we can then warn users in the CHANGELOG not to leapfrog
(e.g. Nippy vX -> Nippy vX+2) when doing rolling updates.
This commit bootstraps the new compatibility feature by initially targeting core type
compatibility with Nippy v3.2.0 (2022-07-18).
A future Nippy version (e.g. v3.5.0) will then target v3.4.0, with an appropriate
CHANGELOG instruction to update in phases for environments that involve rolling
updates.
Details:
- Nippy will continue to support thawing OLD data that was originally compressed with Snappy.
- But Nippy will no longer support freezing NEW data with Snappy.
Motivation:
- The current Snappy implementation can cause JVM crashes in some cases [1].
- The only alternative JVM implementation that seems to be safe [2] uses JNI and
so would introduce possible incompatibility issues even for folks not using Snappy.
- Nippy already moved to the superior LZ4 as its default compression scheme in v2.7.0,
more than 9 years ago.
[1] Ref. <https://github.com/airlift/aircompressor/issues/183>
[2] Ref. <https://github.com/xerial/snappy-java>
BREAKING for the very small minority of folks who use `nippy/stress-data`.
Changes:
1. Make `nippy/stress-data` a function
It's unnecessarily wasteful to generate and store all this data in the
common case where it's not being used.
2. Make data deterministic
The stress data will now generally be stable by default between different versions
of Nippy, etc. This will help support an upcoming test for stable serialized output.
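The deterministic approach can be sketched in plain Clojure. This is illustrative only, not Nippy's actual implementation, and the `:seed` opt is hypothetical:

```clojure
;; Illustrative sketch (NOT Nippy's actual implementation): deterministic
;; stress data can be produced by seeding a PRNG, so repeated calls yield
;; identical data across runs, JVMs, and Nippy versions.
(defn stress-data-sketch
  "Returns a small map of generated test data. `:seed` fixes the PRNG
  so output is stable (hypothetical opt, for illustration only)."
  [{:keys [seed] :or {seed 0}}]
  (let [rng (java.util.Random. seed)]
    {:long   (.nextLong rng)
     :double (.nextDouble rng)
     :string (apply str (repeatedly 8 #(char (+ 97 (.nextInt rng 26)))))}))

;; Calls with the same seed produce identical data:
(= (stress-data-sketch {}) (stress-data-sketch {})) ;=> true
```

Calling a function (rather than reading a pre-built var) also means the data is only generated on demand, matching point 1 above.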
Note: also considered (but ultimately rejected) the idea of a separate
`*thaw-mapfn*` opt that operates directly on every `thaw-from-in!`
result.
This (transducer) approach is more flexible, and covers the most
common use cases just fine. Having both seems excessive.
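What the transducer approach enables can be sketched conceptually in plain Clojure. How Nippy actually wires the transducer into thawing is not shown here; this only demonstrates the flexibility of a transducer over individual elements:

```clojure
(require 'clojure.string)

;; Conceptual sketch: a transducer can transform and/or filter individual
;; elements as collections are rebuilt during thaw.
(def xform
  (comp
    (map    (fn [x] (if (string? x) (clojure.string/upper-case x) x)))
    (remove nil?))) ; e.g. drop elements deemed unwanted

;; Equivalent post-processing on an already-thawed collection:
(into [] xform ["a" 1 nil "b" 2]) ;=> ["A" 1 "B" 2]
```

A single transducer composes mapping, filtering, etc., which is why a separate map-only opt would be redundant.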
Before:
Longs in [ -128, 127] use 1 byte
Longs in [-32768, 32767] use 2 bytes
etc.
After:
Longs in [ -255, 255] use 1 byte
Longs in [-65535, 65535] use 2 bytes
etc.
I.e. this doubles the range of longs that can be stored in 1, 2, and 4 bytes.
This change saves:
- 1 byte  per long in [  128,   255] or [  -255,   -129]
- 2 bytes per long in [32768, 65535] or [-65535, -32769]
- 4 bytes per long ...
Is this advantage worth the extra complexity? Probably yes, given how
common longs (and colls of longs) are in Clojure.
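The savings can be sketched in plain Clojure. This is illustrative only, not Nippy's actual encoder: the idea is that storing the sign separately (e.g. via the type id) lets the payload hold an unsigned magnitude, doubling each range:

```clojure
;; Illustrative only — not Nippy's actual implementation.
;; Old scheme: payload is a signed 1/2/4/8-byte integer.
(defn old-byte-cost [^long n]
  (cond (<= -128 n 127)               1
        (<= -32768 n 32767)           2
        (<= -2147483648 n 2147483647) 4
        :else                         8))

;; New scheme: sign lives in the type id, payload is an unsigned magnitude.
(defn new-byte-cost [^long n]
  (let [m (if (neg? n) (- n) n)] ; magnitude (ignoring Long/MIN_VALUE edge case)
    (cond (<= m 255)        1
          (<= m 65535)      2
          (<= m 4294967295) 4
          :else             8)))

(old-byte-cost 200)    ;=> 2
(new-byte-cost 200)    ;=> 1
(old-byte-cost -40000) ;=> 4
(new-byte-cost -40000) ;=> 2
```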
This reverts commit f1af0cae674f7dea29d460c5b630a58c59c7dcab.
Motivation for revert:
At least one user has reported depending on the current `cache`
feature, and implementing it manually (i.e. outside of Nippy) can
be non-trivial.
Rather than risk breaking folks, I'll take some more time to
consider alternatives. There's no urgency on this.
This commit is BREAKING for those still actively using `nippy/cache`.
Data previously frozen using `nippy/cache` can still be thawed, though
support for thawing may also be removed in a future Nippy version.
Motivation for removal:
This cache feature (marked as experimental) was always a bit dubious.
The use cases were very limited, and the complexity quite significant.
I don't believe that the feature has ever had much (any?) public
adoption, so I'm removing it here.
PLEASE LET ME KNOW if this removal negatively impacts you.