Seems he has not included my rough PKWare decompressor, but then again it was incomplete (it could not decompress text files, though the WC3 release never had any...).
It also still pointlessly makes a temporary copy of the archive when one might just want to read from it. Any speed improvement from copying is illusory; it only appears faster because the freshly written copy is still sitting in the file cache. A read-only mode is kind of important, both to make sure one does not accidentally modify an MPQ archive when that is not desirable, and because one might not have write access to some MPQ files yet still be able to read them.
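As a minimal sketch of what a read-only mode could look like (the class and method names here are hypothetical, not part of the library), the archive could simply be opened as a `FileChannel` with only the `READ` option, so no temporary copy is made and the OS itself rejects accidental writes:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class MpqChannels {
    // Open the archive strictly for reading: no temporary copy, and any
    // accidental write through this channel fails at the OS level.
    public static FileChannel openReadOnly(Path archive) throws IOException {
        return FileChannel.open(archive, StandardOpenOption.READ);
    }

    // Open for in-place editing only when modification is explicitly requested.
    public static FileChannel openWritable(Path archive) throws IOException {
        return FileChannel.open(archive,
                StandardOpenOption.READ, StandardOpenOption.WRITE);
    }
}
```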
Although memory mapping may seem a clever solution to reading chunks, it is not. The actual memory mapping calls are very slow and so only pay off if one maps entire large files with a long map retention time (e.g. for a high-performance database engine). For the small reads involved here it would be better to read into a freshly allocated ByteBuffer. This also removes the dependency on memory mapping support in the file system, which would help pave the way for reading directly from the archive file.
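A sketch of that alternative, assuming the block offset and size are already known from the block table (the helper name is mine, not the library's): a positioned read from the channel into a fresh ByteBuffer, with no call to FileChannel.map at all.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public final class BlockReader {
    /**
     * Reads one block (blockSize bytes at blockOffset) into a freshly
     * allocated ByteBuffer. No memory mapping is involved, so this also
     * works on file systems without mmap support.
     */
    public static ByteBuffer readBlock(FileChannel channel, long blockOffset, int blockSize)
            throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(blockSize);
        // Positioned reads do not move the channel's own position, so other
        // readers of the same channel are unaffected.
        while (buffer.hasRemaining()) {
            int read = channel.read(buffer, blockOffset + buffer.position());
            if (read < 0) {
                throw new IOException("Unexpected end of archive while reading block");
            }
        }
        buffer.flip();
        return buffer;
    }
}
```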
Manipulating the actual files should be done through some sort of FileChannel or SeekableByteChannel. This allows partial reads to process only the required file blocks instead of all of them, and it also fits well with standard IO code. When writing, all writes are pushed directly to a temporary file, and all reads then come from that temporary file (but only once at least one byte has been written). On closure the temporary file is written back into the MPQ as a file; see the sketch below.
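Here is a rough sketch of that write-through design under stated assumptions: the `ArchiveWriter` callback and its `insertFile` method are hypothetical stand-ins for whatever the library uses to add a file to the MPQ, and the non-dirty read path is stubbed out rather than wired to the original archive block.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Writes spill to a temp file; reads come from it once dirty; close pushes it into the MPQ. */
public final class SpillingFileChannel implements SeekableByteChannel {

    public interface ArchiveWriter {
        // Hypothetical hook into the MPQ writer.
        void insertFile(String name, Path content) throws IOException;
    }

    private final Path tempFile;
    private final FileChannel temp;
    private final ArchiveWriter archive;
    private final String nameInArchive;
    private boolean dirty = false;
    private boolean open = true;

    public SpillingFileChannel(ArchiveWriter archive, String nameInArchive) throws IOException {
        this.archive = archive;
        this.nameInArchive = nameInArchive;
        this.tempFile = Files.createTempFile("mpq", ".tmp");
        this.temp = FileChannel.open(tempFile,
                StandardOpenOption.READ, StandardOpenOption.WRITE);
    }

    @Override
    public int write(ByteBuffer src) throws IOException {
        dirty = true;                 // from now on, reads are served from the temp file
        return temp.write(src);
    }

    @Override
    public int read(ByteBuffer dst) throws IOException {
        if (!dirty) {
            return -1;                // a real implementation would decode the original MPQ blocks here
        }
        return temp.read(dst);
    }

    @Override public long position() throws IOException { return temp.position(); }
    @Override public SeekableByteChannel position(long newPosition) throws IOException {
        temp.position(newPosition);
        return this;
    }
    @Override public long size() throws IOException { return temp.size(); }
    @Override public SeekableByteChannel truncate(long size) throws IOException {
        temp.truncate(size);
        return this;
    }
    @Override public boolean isOpen() { return open; }

    @Override
    public void close() throws IOException {
        if (!open) return;
        open = false;
        temp.close();
        if (dirty) {
            archive.insertFile(nameInArchive, tempFile);  // push the edited file back into the MPQ
        }
        Files.deleteIfExists(tempFile);
    }
}
```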
Unfortunately MPQs are not really designed for random modifications. This is one of the reasons Blizzard moved to CASC for all their modern game data.