
On Monday 26 December 2011 18:50:56 Claudio Freire wrote:
> On Mon, Dec 26, 2011 at 6:47 PM, Anders Johansson <ajh@nitio.de> wrote:
>> Really? I'm not aware of any codecs that can handle data corruption,
>> except for a few that have been developed for usenet that involve
>> massive redundancy. Could you give an example?
> bzip2. You can lose blocks, but that only loses you a block; all
> blocks in bzip2 are independent. So you lose only a portion of the
> compressed data.
Interesting, I didn't know that. I knew the algorithm broke on corruption; I didn't know the format does a keyframe-style setup to get around it. I see the default block size is 900K, though, so that's still a huge chunk of log gone.
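For what it's worth, the bzip2 distribution ships bzip2recover, which splits a damaged .bz2 into its individual blocks so the intact ones can still be decompressed. A rough Python sketch of the same idea, at the coarser granularity of whole concatenated streams rather than 900K blocks (the log chunks are made up):

```python
import bz2

# Compress each log chunk as its own bzip2 stream, then concatenate.
chunks = [b"jan: all quiet\n", b"feb: disk died\n", b"mar: recovered\n"]
streams = [bz2.compress(c) for c in chunks]
blob = b"".join(streams)

# The whole file still decompresses normally; Python's bz2 handles
# multiple concatenated streams in one call.
assert bz2.decompress(blob) == b"".join(chunks)

# ...but each stream also decompresses on its own, so a damaged stream
# costs only its own chunk. bzip2recover exploits the same independence
# at the block level inside a single stream.
for s, c in zip(streams, chunks):
    assert bz2.decompress(s) == c
```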
> For lzma, I think (though I'm not as sure as with bzip2) blocks exist
> just as well, only in a different form.
Good question. The bzip2 man page discusses it, but the lzma/xz man pages don't. Still, it's not a feature of the algorithms themselves; they fail completely on corrupt data. The important bit is how it's implemented in a tool. Maybe compress line by line?

Anders
--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
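Compressing line by line (or every few lines) is doable with zlib's full-flush points, at some cost in compression ratio. A hedged sketch, assuming Python's zlib module (the log lines are invented): Z_FULL_FLUSH byte-aligns the output and resets the compressor's dictionary, so each flushed segment can be decoded without the segments before it, and corruption only takes out the segments it actually touches.

```python
import zlib

lines = [b"2011-12-26 18:50 service started\n",
         b"2011-12-26 18:51 service crashed\n"]

comp = zlib.compressobj()
out = b""
for line in lines:
    out += comp.compress(line)
    # Z_FULL_FLUSH emits all pending output and resets the dictionary,
    # making each segment independently decodable.
    out += comp.flush(zlib.Z_FULL_FLUSH)
out += comp.flush()  # finish the stream

# Undamaged, the stream decompresses as one piece.
assert zlib.decompress(out) == b"".join(lines)
```

Flushing on every line hurts the ratio badly; flushing every N lines or every few hundred KB would be the obvious middle ground between bzip2's 900K blocks and per-line granularity.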