by Guest on 2019/03/22 02:14:08 PM
Speaking of xz, a better option seems to be:
xz -T0 -9e --lzma2=dict=256M,lc=2,pb=0,nice=273 --block-size=384M
(That's for a machine with 4 GB of RAM; with more RAM, it's advisable to increase the dict and block sizes.)
But overall, I find Facebook's zstd to perform really well with respect to speed vs. size. Plain "zstdmt -19" usually suffices for short-term storage; use -13 or -14 if there's a Need for Speed(tm). Alternatively, for a smaller size at the cost of longer compression time and somewhat higher RAM usage, use:
zstdmt --ultra -21
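For context, here's a minimal sketch of how the zstd invocation above might be used on a tarball. The directory and file names are illustrative, not from the post, and `zstdmt` is just shorthand for `zstd -T0`:

```shell
#!/bin/sh
# Sketch only: tar a directory, compress with multithreaded zstd, verify,
# and round-trip it back. All paths here are made up for the example.
set -e
command -v zstd >/dev/null 2>&1 || { echo "zstd not installed; skipping"; exit 0; }
work=$(mktemp -d)
mkdir "$work/data" "$work/restore"
printf 'hello, archive\n' > "$work/data/file.txt"

# compress (zstdmt -19 is the same as zstd -T0 -19)
tar -C "$work" -cf - data | zstd -T0 -19 -q -o "$work/data.tar.zst"

zstd -t -q "$work/data.tar.zst"                      # integrity check

# decompress and compare
zstd -d -q -c "$work/data.tar.zst" | tar -C "$work/restore" -xf -
cmp "$work/data/file.txt" "$work/restore/data/file.txt" && echo "round trip OK"
```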
by Guest on 2019/03/24 06:10:01 AM
Speaking of long-term data storage, I'd go with a segmented, encrypted 7-Zip archive with a couple of parity volumes added.
The 7z format is stable enough for my tastes (15+ years, no problem, and it warns you fairly when you do need that parity applied), open source (so you can easily find old versions if the need arises), encrypts (as long as you trust AES), compresses well (using quite a bit of RAM depending on options), and decompresses fast enough (though zstd is faster). Heck, xz can be viewed as a fork of it, and in fact it uses one of its algorithms!
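A sketch of what that segmented, encrypted archive might look like with p7zip's CLI. The file names, password, and the 100 MB volume size are my examples, not the poster's; `-mhe=on` additionally encrypts the file-name headers:

```shell
#!/bin/sh
# Sketch: segmented, encrypted 7z archive. Names and sizes are illustrative.
set -e
command -v 7z >/dev/null 2>&1 || { echo "7z not installed; skipping"; exit 0; }
work=$(mktemp -d)
mkdir "$work/data"
printf 'precious bits\n' > "$work/data/file.txt"

# -p...    : password (use a real secret; shown inline only for the sketch)
# -mhe=on  : encrypt the headers too, so file names aren't readable
# -v100m   : split into 100 MB volumes -> archive.7z.001, .002, ...
( cd "$work" && 7z a -mhe=on -p'hunter2' -v100m archive.7z data >/dev/null )

# verify: point 7z at the first volume
7z t -p'hunter2' "$work/archive.7z.001" >/dev/null && echo "archive OK"
```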
For parity, I'd go with the PAR 2.0 format because, well, is there anything better?... There was even a version with Threading Building Blocks support, which is still cached somewhere, though I'm not aware of its commit history. It's old, probably older than your smoothie-age GitHubs. So sue both Solomon and his friend Reed for not coming up with the next algorithm!
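With par2cmdline, bolting parity onto an archive volume might look like this. The file names and the 10% redundancy figure are my assumptions for the sketch:

```shell
#!/bin/sh
# Sketch: create PAR 2.0 recovery data for an archive, then verify it.
set -e
command -v par2 >/dev/null 2>&1 || { echo "par2 not installed; skipping"; exit 0; }
work=$(mktemp -d)
printf 'pretend this is a real archive volume\n' > "$work/archive.7z.001"
cd "$work"

par2 create -r10 -q archive.7z.par2 archive.7z.001   # -r10 = 10% redundancy
par2 verify -q archive.7z.par2 && echo "parity OK"
# after bit rot, recovery would be: par2 repair archive.7z.par2
```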
But that's just me.
Can't recall off the top of my head whether lzip segments and encrypts, but if it does, and does it multithreaded - well then, add some parity and you'll be fine.
OTOH, encrypting with another tool makes for four-plus commands (including key management and key storage), and four is larger than two. This either calls for a wrapper, or I personally CBB (can't be bothered) to use that pipeline.
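For what that wrapper could look like, here's a hedged sketch using xz plus gpg's symmetric mode. The tool choice, flags, and file names are my assumptions (the poster didn't name a pipeline), and key storage is still your problem:

```shell
#!/bin/sh
# Sketch: archive + compress + encrypt in one pipeline, and the reverse.
# gpg -c does symmetric (passphrase) encryption; key handling not shown.
set -e
command -v gpg >/dev/null 2>&1 || { echo "gpg not installed; skipping"; exit 0; }
work=$(mktemp -d)
mkdir "$work/data" "$work/restore"
printf 'secret payload\n' > "$work/data/file.txt"
printf 'correct horse battery staple\n' > "$work/key.txt"   # example passphrase

# pack: tar -> xz -> gpg, one pipeline instead of four separate commands
tar -C "$work" -cf - data \
  | xz -T0 -6 \
  | gpg --batch --pinentry-mode loopback --passphrase-file "$work/key.txt" \
        -c -o "$work/backup.tar.xz.gpg"

# unpack: gpg -> xz -> tar
gpg --batch --quiet --pinentry-mode loopback --passphrase-file "$work/key.txt" \
    -d "$work/backup.tar.xz.gpg" \
  | xz -dc | tar -C "$work/restore" -xf -

cmp "$work/data/file.txt" "$work/restore/data/file.txt" && echo "round trip OK"
```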