Here are a few examples you may want to type in order to experiment with e2compr. These examples assume that the e2compr versions of lsattr and chattr have been installed. `$ ' is the shell prompt.
First we create some temporary directories, and copy some files (from an old e2compr distribution, in this case) to them. `tmp1' will be the reference directory in this example.
$ mkdir tmp1
$ cp /sbin/lsattr HOWTO lsattr.c e2compr.patch tmp1
$ ls -l tmp1
total 144
-rw-r--r--   1 root     root        17864 Mar 25 23:20 HOWTO
-rw-r--r--   1 root     root        67149 Mar 25 23:20 e2compr.patch
-rwxr-xr-x   1 root     root        51524 Mar 25 23:20 lsattr
-rw-r--r--   1 root     root         5811 Mar 25 23:20 lsattr.c
$ du tmp1
145     tmp1
$ cp -r tmp1 tmp2
Then we compress some files in directory `tmp2':
$ chattr +c tmp2/*
$ lsattr tmp2
--c----  8 lzv1    HOWTO
--c----  8 lzv1    e2compr.patch
--c----  8 lzv1    lsattr
--c----  8 lzv1    lsattr.c
           ^^^^--------------- algorithm
         ^-------------------- cluster size (here 8 blocks per cluster)
  ^--------------------------- file should be compressed
$ du tmp2
94      tmp2
Ok, we gained 51 blocks. We didn't specify a compression algorithm or cluster size to `chattr', so it used the system default (which in the above example was <LZV1, 8>). Now we decompress the files that we just compressed:
$ chattr -c tmp2/*
$ lsattr tmp2
-------  - -       HOWTO
-------  - -       e2compr.patch
-------  - -       lsattr
-------  - -       lsattr.c
$ du tmp2
145     tmp2
We can use different cluster sizes with the `-b' option:
$ chattr +c -b 16 tmp2/*
$ lsattr tmp2
--c---- 16 lzv1    HOWTO
--c---- 16 lzv1    e2compr.patch
--c---- 16 lzv1    lsattr
--c---- 16 lzv1    lsattr.c
$ du tmp2
88      tmp2
Of course, lsattr reports the new cluster size. As you can see, a bigger cluster size seems to compress better (but it is slower, particularly for random access).
Supported cluster sizes are 4, 8, 16 or 32 blocks. (However, if the filesystem has a block size different from the usual 1024 bytes, then there is the additional constraint that the cluster size must be less than 32KB. If you don't understand this parenthetical remark then it probably doesn't apply to you, but see the section Block size if you're interested.)
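To see whether this constraint affects you, check the filesystem's block size and multiply it by the blocks per cluster. For example (a sketch only, run as root; tune2fs comes from e2fsprogs, and /dev/hda1 is just a placeholder for your own device):

# tune2fs -l /dev/hda1 | grep -i 'block size'

With 4096-byte blocks, for instance, a 16-block cluster would be 64KB and a 32-block cluster 128KB, both over the 32KB limit, so only the smaller cluster sizes would be usable on such a filesystem.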
Now we decompress the files again and compress them with another algorithm, using the `-m' option. (Note: the following example will not work if your kernel was not configured to use LZRW3A. Also note that if you have an older version of chattr that doesn't support `-m', i.e. if chattr complains that it doesn't understand your `-m' option, then you should use `-A' in its place.)
$ chattr -c tmp2/*
$ chattr +c -b 16 -m lzrw3a tmp2/*
$ lsattr tmp2
--c---- 16 lzrw3a  HOWTO
--c---- 16 lzrw3a  e2compr.patch
--c---- 16 lzrw3a  lsattr
--c---- 16 lzrw3a  lsattr.c
$ du tmp2
81      tmp2
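If you have one of the older chattr versions mentioned above that only understands `-A', the second command would be spelled like this (a sketch; the other arguments are unchanged):

$ chattr +c -b 16 -A lzrw3a tmp2/*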
Well, lzrw3a compresses better. Now we create another directory and mark it as compressed:
$ mkdir tmp3
$ chattr +c -b 16 tmp3
$ lsattr -d tmp3
--c---- 16 lzv1    tmp3
$ cp tmp1/* tmp3
$ lsattr tmp3
--c---- 16 lzv1    HOWTO
--c---- 16 lzv1    e2compr.patch
--c---- 16 lzv1    lsattr
--c---- 16 lzv1    lsattr.c
Files copied into `tmp3' have automatically been compressed with the same cluster size and algorithm as the directory itself. Good.
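The directory's parameters need not be the defaults, either. Assuming your kernel also supports the gzip9 method used later in this document, something along these lines (a sketch, not a recorded session) should give copied files a 32-block cluster and gzip9 compression:

$ mkdir tmp4
$ chattr +c -b 32 -m gzip9 tmp4
$ cp tmp1/* tmp4
$ lsattr tmp4

lsattr should then report <gzip9, 32> for each of the copied files.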
For skeptics:
$ diff tmp1/HOWTO tmp3/HOWTO
$ ./tmp3/lsattr tmp3
--c---- 16 lzv1    HOWTO
--c---- 16 lzv1    e2compr.patch
--c---- 16 lzv1    lsattr
--c---- 16 lzv1    lsattr.c
Compressed files hold the same data as their uncompressed counterparts! And compressed binaries behave just like uncompressed ones.
And now, the ultimate test (run as root, hence the `# ' prompt):
# cd /usr/src/linux-1.1.76
# make clean
# time make depend
81.66user 50.67system 2:24.37elapsed 91%CPU ...
# du -s
8753    .
# time make
1006.18user 138.49system 19:37.92elapsed 97%CPU ...
# du -s
11842   .
# chattr +c -R -b 16 -m lzrw3a /usr/src/linux-1.1.76
# cd /usr/src/linux-1.1.76
# make clean
# time make depend
77.27user 70.95system 2:45.50elapsed 89%CPU ...
# du -s
4269    .
# time make
1002.23user 173.16system 20:01.90elapsed 97%CPU ...
# du -s
6449    .
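To put some numbers on it: after `make depend' the tree shrinks from 8753 to 4269 blocks (about 51% smaller), and the fully built tree from 11842 to 6449 blocks (about 45% smaller), while the elapsed build time grows only from roughly 19:38 to 20:02.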
(pjm: A clean linux 2.1.42 tree with compression method <gzip9, 32> compresses as follows:
$ e2ratio -s linux-2.1.42
  32720   11491  35.1%  linux-2.1.42
)
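Presumably the first two columns are the uncompressed and compressed sizes in blocks and the percentage is their ratio (11491/32720 is indeed about 35.1%), but check the e2ratio documentation for the exact meaning.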