For over 99% of files, fragmentation is no worse than in an uncompressed filesystem.(3) As the file is being written, we compress whenever we reach the end of a cluster. When we start allocating blocks for the next cluster, we try to allocate them right next to the blocks of the previous (compressed) cluster. (Don't worry, the ext2 block allocation strategy does the right thing concerning holes.)
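The effect of this strategy on a sequentially written file can be illustrated with a small user-space simulation (this is not the e2compr kernel code; the cluster size and the `compressed_blocks()' helper are made up for illustration):

    #include <stdio.h>

    /* Hypothetical stand-in for the compressor: how many blocks a
       cluster occupies on disk once compressed.  Purely illustrative. */
    static int compressed_blocks(int cluster)
    {
        return 3 + cluster % 4;   /* somewhere between 3 and 6 blocks */
    }

    int main(void)
    {
        int goal = 0;   /* block number we try to allocate next */

        /* Write a 5-cluster file sequentially.  Each cluster is
           compressed as soon as it is complete, and the next cluster's
           blocks are allocated immediately after the previous
           (compressed) cluster, so the file stays contiguous. */
        for (int cluster = 0; cluster < 5; cluster++) {
            int nblocks = compressed_blocks(cluster);
            printf("cluster %d: blocks %d..%d (%d blocks)\n",
                   cluster, goal, goal + nblocks - 1, nblocks);
            goal += nblocks;   /* allocation goal for the next cluster */
        }
        return 0;
    }

Running this prints one contiguous run of block numbers, which is the point: for files written start to finish, compression changes how many blocks each cluster takes but not whether they sit next to each other.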
It is only when you write over a cluster other than the last in the file that compression can cause extra fragmentation. This is because the new data might compress to a different number of blocks than before, so you either get a gap in allocation (if the new data takes up fewer blocks) or you get a block that has to be written out of sequence (if the new data takes up more blocks than previously allocated to the cluster). The only examples of this sort of file that I can think of are large (more than one cluster) database files. (`updatedb' doesn't count, because (I believe) it gets truncated to zero length before being written over.)
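A sketch of the rewrite case (again a toy simulation, with made-up block counts rather than anything taken from e2compr) shows where the gap or out-of-sequence block comes from:

    #include <stdio.h>

    int main(void)
    {
        /* A 2-cluster file: cluster 0 was originally compressed to
           5 blocks (0..4), and cluster 1 follows at blocks 5..9. */
        int c0_start = 0, c0_blocks = 5;
        int c1_start = 5;

        /* Case 1: the rewritten cluster 0 compresses to fewer blocks,
           leaving unused blocks between it and cluster 1. */
        int new_blocks = 3;
        printf("cluster 0 now uses blocks %d..%d; blocks %d..%d become a gap\n",
               c0_start, c0_start + new_blocks - 1,
               c0_start + new_blocks, c1_start - 1);

        /* Case 2: it compresses to more blocks than the 5 originally
           allocated.  The extra blocks cannot go at blocks 5.. because
           cluster 1 lives there, so they end up out of sequence
           somewhere else on the disk. */
        new_blocks = 7;
        printf("cluster 0 needs %d blocks but only %d fit in place; "
               "%d block(s) allocated out of sequence\n",
               new_blocks, c0_blocks, new_blocks - c0_blocks);
        return 0;
    }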
As someone else (kragen@pobox.com) pointed out, compression can actually reduce fragmentation in some cases, simply because we don't fill up the disk as quickly.
Nevertheless, people who are short on disk space (as many e2compr users are) tend to have a high turnover of files (deleting files to make way for new files, which have to be written in the gaps left by the files just deleted), which causes high fragmentation. See section `Can I still use a defragmenter?' for comments on using a defragmenter.