Specifically, the patent in question claims to compress any input data by at least one bit, with no loss of information. The patent also claims that this process can be applied recursively, making multiple passes over a file until the desired level of compression is reached. Taken together, these claims imply that you could run the compression enough times to reduce any input to a single bit. Now, suppose I compress 10 different files in this way -- each of them compresses to a single '1' or '0' (a single bit). By the pigeonhole principle, at least five of those files must compress to the very same bit. How, then, can we decompress a '1' back into many different files? Where does that extra information come from? Obviously, this is a non-starter.
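The counting argument can be checked mechanically. This is just a sketch of the pigeonhole reasoning, not anything from the patent itself: there are 2^n bit strings of length n, but only 2^n - 1 strings strictly shorter than n, so a compressor that shrinks *every* input must map two distinct inputs to the same output.

```python
# Sketch of the counting (pigeonhole) argument against a compressor
# that losslessly shrinks EVERY input by at least one bit.
from itertools import product

n = 8  # an arbitrary example length; the argument holds for any n

# All bit strings of length n: there are 2^n of them.
inputs = [''.join(bits) for bits in product('01', repeat=n)]

# All possible shorter outputs (lengths 0 through n-1): 2^n - 1 in total.
shorter_outputs = sum(2**k for k in range(n))

print(f"{len(inputs)} inputs of length {n}, "
      f"but only {shorter_outputs} possible shorter outputs")

# More inputs than outputs: two distinct inputs must share an output,
# so decompression cannot recover both -- information is lost.
assert len(inputs) > shorter_outputs
```

Running this for n = 8 reports 256 inputs against 255 possible shorter outputs; the gap exists for every n, which is why no fixed scheme can compress all inputs.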
The sad fact is that anyone who has studied even rudimentary computer science should immediately know this patent is impossible; every introductory information theory course makes this plain within the first week of lectures. But you don't have to be a computer scientist to see the impossibility of these claims through simple logic. We keep hearing that USPTO examiners are experts in their fields, yet the patent office keeps granting patents just like this one. I'll leave it to you, gentle reader, to draw your own conclusions.
Perhaps, though, it really is time to "open up the examination process to those beyond the single PTO employee doing the examination, and ... let adversarial forces (competitors, existing players) use their own survival as an incentive to participate. And let's let the poor overworked patent examiner act more as a judge or referee in this activity (instead of adversary, advocate, AND judge)" (from "More Examiners = Better Patents?").