
E2compr doesn't seem to compress as well as gzip.

First of all, check that the "best" (i.e. most compressing) compression algorithm and cluster size are being used.

  $ chattr -c myfile
  $ chattr +c -m gzip9 -b 32 myfile

Now take a look at how it compares with straight `gzip -9'.

  $ e2ratio -l myfile
  1488    721      48.5%  myfile
  $ gzip -9 < myfile > myfile.gz
  $ du -k myfile.gz
  601     myfile.gz

There is still a difference (721 versus 601 1KB blocks). The difference arises because e2compr divides a file into "clusters" (sections of a fixed size, in this case 32KB) and compresses each cluster independently, so any similarity between the contents of different clusters is ignored. Things were done this way to keep random access fast: with gzip, there is no way to read the last byte of the decompressed stream without decompressing everything before it. Compression methods better suited to random access are conceivable, but designing a good one is a serious undertaking.
