How I pulled this directory down from the web.

# Mirror the blog and convert links for local viewing (-m = mirror,
# -k = convert links):
wget -mk http://drugwarvictim.blogspot.com/?m=1
cd drugwarvictim.blogspot.com

# Drop the duplicate copies of each page; only the ?m=1 variants are kept:
find . -name '*.html' | xargs rm
find . -name '*.html?m=0' | xargs rm

# Then I renamed the *.html?m=1 files to *.html, in emacs.

rm -rf feeds

# Strip "?m=1" from the links and percent-encode "?show", so the links
# match the renamed files.  ("sed -i .bak" with a space is the
# BSD/macOS form; GNU sed wants "-i.bak" with no space.)
find . -type f | xargs sed -i .bak -e 's/\?m=1//g'
find . -type f | xargs sed -i .bak -e 's/\?show/\%3Fshow/g'
find . -name '*.bak' | xargs rm
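
The rename I did by hand in emacs could also be sketched in shell, using the same find pattern as the other commands. This is a hypothetical alternative, shown here against a throwaway demo directory rather than the real mirror:

```shell
# Demo setup: a file carrying the "?m=1" suffix that wget leaves behind.
mkdir -p demo
touch 'demo/index.html?m=1'

# Rename every *.html?m=1 to plain *.html.  "${f%\?m=1}" strips the
# literal "?m=1" suffix (the backslash keeps "?" from acting as a glob).
find demo -name '*.html?m=1' | while IFS= read -r f; do
    mv -- "$f" "${f%\?m=1}"
done

ls demo    # prints: index.html
```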

I tried with -E (--adjust-extension) and
"--restrict-file-names=windows", in an attempt to get the "?"
characters out of the file names, so that the mirror was viewable in a
web browser. That changed the file names as expected, but the links
inside the files weren't updated to point at the new names, so the
site was unusable. Note that even with the approach above, I did have
to go through all the files changing "?m=1" to "" and "?show" to
"%3Fshow", to make the links work in a local copy browsed from the
file system.
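
On the "%3Fshow" rewrite: %3F is the percent-encoding of "?", so a browser opening the local copy treats it as part of the file name rather than as the start of a query string. The byte value is easy to confirm from the shell:

```shell
# POSIX printf: a leading quote makes the argument's first character
# print as its numeric code; %x formats that code in hex.
printf '%x\n' "'?"    # prints: 3f
```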
