## PulseAudio: How to Decouple Application Volumes And Global Volume

I wondered why e.g. my VLC volume kept getting lowered. As it turns out, there was a change.

PulseAudio seems to have copied one of the most annoying features of Windows Vista and later (NT 6+), at least as far as the media framework is concerned: flat volumes.

Quick refresher: This is that annoying thing that Windows (and now, PulseAudio, by default) does, where turning up the volume in an application will increase the master system volume alongside it. This has the side-effect that any application which sets its own volume can commandeer the master volume of your system. Why is this bad? The short answer is headphones.

It’s not as if it’s only headphones that can blare at ridiculously loud volumes. Anyway, a quick search came up with this helpful suggestion regarding the flat-volumes setting.

PulseAudio supports per-application volume control, but by default this doesn’t do much, as you can only control these volumes from the PulseAudio Volume Control utility. Meaning that in an application like Audacious, when the output device is set to PulseAudio and the volume control is set to hardware, it will adjust the master volume control, not the per-application volume control.

To fix this behavior, set the following in /etc/pulse/daemon.conf:

```
flat-volumes = no
```

Now whenever Audacious goes to adjust the volume, it will adjust the Audacious-only volume, and thus you won’t have multiple applications fighting over the master volume control.
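If you find yourself applying this on more than one machine, the edit is easy enough to script. Here’s a minimal sketch; it runs against a scratch copy, but on a real system you’d point the sed line at /etc/pulse/daemon.conf (with sudo) and then restart PulseAudio with pulseaudio -k:

```shell
# Demonstrated on a sample file; substitute /etc/pulse/daemon.conf on a real system.
printf '; flat-volumes = yes\n' > daemon.conf.sample
# Uncomment the setting if necessary and force it to "no".
sed -i 's/^;* *flat-volumes *=.*/flat-volumes = no/' daemon.conf.sample
cat daemon.conf.sample   # → flat-volumes = no
```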

What a horribly annoying new default.
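For what it’s worth, those per-application volumes can also be poked at from a terminal rather than only from the volume control utility. A sketch using pactl, which ships with PulseAudio; the sink-input index 42 is hypothetical, you’d read the real one off the list first:

```shell
# List the currently playing streams ("sink inputs") and their indices.
pactl list short sink-inputs
# Then adjust one specific stream, e.g. index 42, without touching the master volume.
pactl set-sink-input-volume 42 50%
```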

## Working around the broken Creative HS-720 headset

A few years ago I received a Creative HS-720 as a gesture of good will. I wasn’t displeased, but since I didn’t need it I didn’t really investigate. Recently I’ve been wanting to use it as headphones and noticed that even at the lowest possible volume, it was still significantly too loud. What’s really crazy is that there are actually positive reviews for the product out there. Read this negative one instead. That’s all you need to know. Avoid this product. Ideally I’d acquire something like an Asus Xonar U3, a Creative Sound Blaster Play! 2 or a Creative Sound Blaster E1 in combination with proper headphones (although the HS-720 certainly doesn’t make me want to buy another Creative product), but I figured there just had to be a software solution.

Some searching gave me “Fix for USB Audio is Too Loud and Mutes at Low Volume in Ubuntu.” The title isn’t quite accurate, because it’s a workaround. No matter. It requires modifying the file /usr/share/pulseaudio/alsa-mixer/paths/analog-output.conf.common. But we might as well take a look at what else there is to play with while we’re at it.

```
/usr/share/pulseaudio/alsa-mixer/paths$ ls
analog-input-aux.conf                  analog-input-mic.conf               analog-output-lineout.conf
analog-input.conf                      analog-input-mic.conf.common        analog-output-mono.conf
analog-input.conf.common               analog-input-mic-line.conf          analog-output-speaker-always.conf
analog-input-dock-mic.conf             analog-input-rear-mic.conf          analog-output-speaker.conf
analog-input-fm.conf                   analog-input-tvtuner.conf           hdmi-output-0.conf
analog-input-front-mic.conf            analog-input-video.conf             hdmi-output-1.conf
analog-input-headphone-mic.conf        analog-output.conf                  hdmi-output-2.conf
analog-input-headset-mic.conf          analog-output.conf.common           hdmi-output-3.conf
analog-input-internal-mic-always.conf  analog-output-desktop-speaker.conf  iec958-stereo-output.conf
analog-input-internal-mic.conf         analog-output-headphones-2.conf
analog-input-linein.conf               analog-output-headphones.conf
```

As you can see there’s a bunch of PulseAudio profiles. In my case I might be able to adjust one of the headphones files without changing the entire system, but as luck would have it I use a digital IEC958 output for my main sound system, so I could afford to mess up all handling of analog output for the sake of these headphones. I’ll quote part of Chris Jean’s guide in case linkrot ever strikes:

> Search for the text “Element PCM”. You should see the following text:
>
> ```
> [Element PCM]
> switch = mute
> volume = merge
> override-map.1 = all
> override-map.2 = all-left,all-right
> ```
>
> Update this section of text to look like the following (the changed lines are volume and volume-limit):
>
> ```
> [Element PCM]
> switch = mute
> volume = ignore
> volume-limit = 0.01
> override-map.1 = all
> override-map.2 = all-left,all-right
> ```
>
> Note that the value 0.01 can be adjusted as needed to change how quiet and loud the volume is. Making the number smaller reduces the max volume while making the number larger increases the max volume.

I tested out 0.05 and found that the max volume was much louder than I would ever use. I also decided that 0.01 was technically louder than I’d ever use. I ended up with a value of 0.0075 (0.005 was too quiet), which I felt gave a good maximum volume and resulted in better overall control over the volume range.

Then run killall pulseaudio (or pulseaudio -k, but why bother with something non-generic) to get it to work. You can do some more volume play using alsamixer, but you’ll have to figure out which device to use first.

```
$ pacmd list-sources | grep -e device.string -e 'name:'
name:
device.string = "1"
name:
device.string = "hw:2"
name:
device.string = "0"
name:
device.string = "3"
```

As you can see the headset is device 3. You can print some more info using amixer.

```
$ amixer -c 3
Simple mixer control 'PCM',0
  Capabilities: pvolume pswitch pswitch-joined
  Playback channels: Front Left - Front Right
  Limits: Playback 0 - 38
  Mono:
  Front Left: Playback 9 [24%] [-21.67dB] [on]
  Front Right: Playback 9 [24%] [-21.67dB] [on]
Simple mixer control 'Mic',0
  Capabilities: cvolume cvolume-joined cswitch cswitch-joined
  Capture channels: Mono
  Limits: Capture 0 - 16
  Mono: Capture 14 [88%] [20.83dB] [on]
Simple mixer control 'Auto Gain Control',0
  Capabilities: pswitch pswitch-joined
  Playback channels: Mono
  Mono: Playback [on]
```

And using alsamixer -c 3 you can play around with the volume a bit more, too. My only regret is that I haven’t been able to find something like Identifier for xorg.conf. Oh well, it’ll save me some money in the short term.

PS On Windows, try EqualizerAPO (source).

## Just a Star

Messing about a little in Inkscape with my wife’s Wacom CTH-680S tablet on Linux 4.1, after first trying it in Xournal. It seems to be functioning a fair bit better than a few kernel versions ago. The tablet is really good. I’d recommend it.

## Alt + Print screen in Xfce

Perhaps it’s merely a fluke of Debian Xfce, but the Print screen key does nothing by default. If you just want to use Print screen, that’s easy to rectify through Settings > Keyboard > Application Shortcuts > Add. Use the command xfce4-screenshooter -f or -w for full screen and window respectively.

As it turns out, that interface doesn’t support the key combination of Alt + Print screen thanks to some kernel feature. The suggestion is to disable the kernel feature, but interestingly enough it works when you add the shortcut manually. Remember how we regained control of Ctrl + F1–F12? Once again, edit ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml.
Look for the custom section, which should look a little something like this:

```
<property name="custom" type="empty">
  <property name="XF86Display" type="string" value="xfce4-display-settings --minimal"/>
  <property name="<Primary><Alt>Delete" type="string" value="xflock4"/>
  <property name="<Primary>Escape" type="string" value="xfdesktop --menu"/>
  <property name="<Alt>F2" type="string" value="xfrun4"/>
  <property name="override" type="bool" value="true"/>
  <property name="<Super>p" type="string" value="xfce4-display-settings --minimal"/>
</property>
```

Next, we add in our custom setting:

```
<property name="<Alt>Print" type="string" value="xfce4-screenshooter -w"/>
```

Now it should look like this:

```
<property name="custom" type="empty">
  <property name="XF86Display" type="string" value="xfce4-display-settings --minimal"/>
  <property name="<Primary><Alt>Delete" type="string" value="xflock4"/>
  <property name="<Primary>Escape" type="string" value="xfdesktop --menu"/>
  <property name="<Alt>F2" type="string" value="xfrun4"/>
  <property name="override" type="bool" value="true"/>
  <property name="<Super>p" type="string" value="xfce4-display-settings --minimal"/>
  <property name="<Alt>Print" type="string" value="xfce4-screenshooter -w"/>
</property>
```

You’re going to have to log out and back in (or restart) for the changes to take effect. I have to admit it’s probably more useful to bind Print screen to take window screenshots by default, but on the other hand it might be a good idea to stick to the global standard so you can still use desktop environments other than your own without feeling hampered.

## Custom page number count in Prince

Prince makes it really easy to do all of the usual things with page numbers, like a different numbering scheme in the front matter and whatnot. Unfortunately you can’t counter-increment on @page, but thanks to Prince.addScriptFunc() you’ve got something better.
```
h2 { counter-reset: page 50 }

@page {
  @bottom-left {
    content: prince-script(fixpagenum, counter(page));
    margin-left: 2cm;
  }
}
```

In this CSS, instead of passing regular generated content, we’re passing a prince-script. That script has to be defined somewhere, like this:

```
Prince.addScriptFunc("fixpagenum", function(pagenum) {
    pagenum = Number(pagenum);
    pagenum = pagenum + pagenum - 50;
    return pagenum;
});
```

The rationale in this case was to generate two separate documents, starting at page 50, one with only left pages and the other with only right pages: the page counter runs 50, 51, 52, … while the printed numbers need to be 50, 52, 54, …, which is exactly what 2n − 50 produces. (Of course, the other document started at page 51.) I combined them with pdftk’s shuffle command.

```
pdftk left.pdf right.pdf shuffle output combined.pdf
```

I don’t think there’s a way to do something like this purely in Prince using CSS, but I’d love to be proved wrong.

## LaTeX: combining added margins with hanging indents

Since I’m using KOMA, the obvious method would seem to be:

```
\begin{addmargin}[1cm]{0cm}
Yada.
\end{addmargin}
```

Unfortunately, that doesn’t seem to combine with the hanging environment. So I did it a little more manually, which will probably have someone shaking their head while I’m stuck feeling pretty clever about it:

```
\parindent=1cm\hangindent=2cm Yada.
```

## Xfce: Keep Windows from Switching Workspaces

I thought applications switching workspaces was an issue in all window managers, but it never bothered me quite enough to investigate changing it. But what do you know, in Xfwm you can change it.

When using several workspaces in Xfce, I had the problem that when I pressed a link in Claws-Mail, it opened Iceweasel, but it also moved my Iceweasel from workspace 3 to workspace 1 (where Claws-Mail was). To avoid this, start the xfce4-settings-editor, select xfwm4 in the list, go to general->activate_action (in my case it is the topmost entry), and change the string value from “bring” to “switch”. Thanks to Gusnan.

## Introducing Nimbler: A Window Switcher

When I discovered SmartTab.org about a decade ago, I was quite happy.
Never before had my window switching been so fast and nimble. The most important feature was of course the list-based view of window titles, rather than the standard mysterious icons that only coughed up their secret window titles once you landed on them. But one thing I hadn’t yet conceived of was switching to a far-off window without pressing Tab a dozen times or more. SmartTab.org quickly became so ingrained into my workflow that I even resisted changing my operating system because of it. If Windows 7 would mean giving up on all of my trusty tools like SmartTab.org and ASD Clock, I might as well upgrade to something completely different. In 2011 I switched to Debian Squeeze as my main OS and I haven’t looked back, barring perhaps the occasional game. To me, Linux is just so much more user-friendly these days. But enough about that. You can get Nimbler here, or you can continue reading about my window switching philosophy.

Even before I switched, I looked for SmartTab.org alternatives. I discovered that both Openbox and KWin can display sane window lists, but superswitcher was much closer to what I wanted. Unfortunately, its C-based code was too complicated for me at the time; I’d pretty much have to learn C first. Fast forward a few years and Fuzzy Window Switcher came out. It was a lovely little application, and it managed to scratch my itch. Someone quipped that a Compiz plugin could perform the same task. I replied that “The Compiz (plugins) source had absolutely no documentation when I last checked, nor was it so obvious that none was needed. Above all else I see [Fuzzy Window Switcher] as a great place to start for hacking together your own thing. Besides, not everyone uses Compiz. This can be useful regardless of whether you use Compiz, Mutter, xfwm4, OpenBox, KWin or whatever else there is.” Still, I wasn’t ready. I had to learn the much easier Python first, which I didn’t start with until a few months later.

By that time I was too busy, and when my schedule finally opened up again I’d forgotten about my plans. Until a couple of weeks ago a bug report reminded me that I could actually mimic what I liked about SmartTab.org. Several hours of coding later, I can present Nimbler. Bear with me if you actually know GTK+ 3; I’m figuring this stuff out as I go along. Long story short, it presents a list of your windows, each with its own identifier.

So how does it work? I like to think it couldn’t be much more intuitive. You press the shortcut — at the moment I use <Super>grave — and the window pops up. Then you can immediately switch to any window using its identifier, the arrow keys or the mouse. You can also switch workspaces using F1-F12. If you press colon (:), a text input box is added to the window, although it’s currently non-functional. At some undefined point in the future I probably intend to couple this with Fuzzy Window Switcher’s fuzzy matching code, but don’t hold your breath. 😉

If I do get around to implementing something like that, I figure there should also be a configuration setting for this fuzzy mode where you don’t have to press colon first. This sounds like a wasteful duplication of Fuzzy Window Switcher itself, but my thought is that if the text input is merely one character, no fuzzy matching would occur and it would instead just be treated as a window identifier. Theoretically I suppose you could also use such functionality to introduce double-character identifiers, easily quadrupling the number of potential options. Quadrupling, you ask? The current number of identifiers is a little more than 90, so you would expect to easily get over 8,000 unique identifiers out of it. However, for usability purposes aa is much faster than aA, let alone a0. Besides, who could possibly keep so many windows straight?

I hope someone besides me finds this useful. Enjoy!
## Fixing Up Scanned PDFs with Scan Tailor

Scanned PDFs come my way quite often and I don’t infrequently wish they were nicer to use on digital devices. One semi-solution might include running off to the library and rescanning them personally, but there is a middle road between doing nothing and doing too much: digital manipulation. The knight in shining armor is called Scan Tailor.

Note that this post is not about merely cropping away some black edges. When you’re just looking for a tool to cut off some unwanted edges, I’d recommend PDF Scissors instead. If you just want to fix some incorrect rotation once and for all, try the pdftools found in texlive-extra-utils, which give you simple shorthands like pdf90, pdf180 and pdf270. This post is about splitting up double scanned pages, increasing clarity, and adding an OCR layer on top. With that out of the way, if you’re impatient, you can skip to the script I wrote to automate the process.

### Coaxing Scan Tailor

Unfortunately Scan Tailor doesn’t directly load scanned PDFs, which is what copiers seem to produce by default and what you’re most likely to receive from other people. Luckily this is easy to work around. If you want to use the program on documents you scan yourself, selecting e.g. TIFF in the output options could be a better choice.

To extract the images from PDF files, I use pdfimages. I believe it tends to come preinstalled, but if not, grab poppler-utils with sudo apt install poppler-utils.

```
pdfimages -png filename.pdf outputname
```

You might want to take a look at man pdfimages. The -j flag makes sure JPEG files are output as is rather than being converted to something else, for instance, while the -tiff option would convert the output to TIFF. Like PNG, that is lossless. What might also be of interest are -ccitt and -all, but in this case I’d want the images as JPEG, PNG or TIFF because that’s what Scan Tailor takes as input.
At this point you could consider cropping your images to aid processing with Scan Tailor, but I’m not entirely sure how to automate it out of the way. Perhaps unpaper with a few flags could work to remove (some) black edges, but functionally speaking Scan Tailor is pretty much unpaper with a better (G)UI. In any case, this could be investigated. You’ll want to use pdfimages once more to obtain the DPI of your images for use with Scan Tailor, unless you like to calculate the DPI yourself using the document dimensions and the number of pixels. Both PNG and TIFF support this information, but unfortunately pdfimages doesn’t write it.

```
$ pdfimages -list filename.pdf
page   num  type   width height color comp bpc  enc interp  object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
   1     0 image    1664  2339  gray    1   1  ccitt  no         4  0   200   200  110K  23%
   2     1 image    1664  2339  gray    1   1  ccitt  no         9  0   200   200  131K  28%
```

Clearly our PDF was scanned at a somewhat disappointing 200 DPI. Now you can start Scan Tailor, create a new project based on the images you just extracted, enter the correct DPI, and just follow the very intuitive steps. For more guidance, read the manual. With any setting you apply, take care to apply it to all pages if you wish, because by default the program quite sensibly applies it only to one page. Alternatively you could run scantailor-cli to automate the process, which could reduce your precious time spent to practically zero. I prefer to take a minute or two to make sure everything’s alright. I’m sure I’ll make up the difference by not having to scroll left and right and whatnot afterwards.

By default Scan Tailor wants to output to 600 DPI, but with my 200 DPI input file that just seemed odd. Apparently it has something to do with the conversion to pure black and white, which necessitates a higher DPI to preserve some information. That being said, 600 DPI seems almost laughably high for 200 DPI input. Perhaps “merely” twice the input DPI would be sufficient. Either way, be sure to use mixed mode on pages with images.

Scan Tailor’s output is not a PDF yet. It’ll require a little bit of post-processing.

### Simple Post-Processing

The way I usually go about trying to find new commands already installed on my computer is simply by typing the relevant phrase, in this case tiff. Press Tab for autocomplete. If that fails, you could try apt search tiff, although I prefer a GUI like Synaptic for that. The next stop is a search engine, where you can usually find results faster by focusing on the Arch, Debian or Ubuntu Wikis. On the other hand, blog and forum posts often contain useful information.
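As an aside, the Tab trick can be performed non-interactively too. A small sketch using compgen, a bash builtin that lists the command names matching a prefix:

```shell
# List every command in $PATH whose name starts with "tiff" (compgen is a bash builtin).
compgen -c tiff | sort -u
```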

```
$ tiff
tiff2bw     tiff2ps     tiffcmp     tiffcrop    tiffdump    tiffmedian  tiffsplit
tiff2pdf    tiff2rgba   tiffcp      tiffdither  tiffinfo    tiffset     tifftopnm
```

tiff2pdf sounds just like what we need. Unfortunately it only processes one file at a time. That’s easy to fix with a simple shell script, but rtfm (man tiff2pdf) for useful info:

> If you have multiple TIFF files to convert into one PDF file then use tiffcp or other program to concatenate the files into a multiple page TIFF file.

```
tiffcp *.tif out.tif
```

You could easily stop there, but for sharing or use on devices a PDF (or DjVu) file is superior. My phone doesn’t even come with a TIFF viewer by default, and the one in Dropbox — why does that app open almost all documents by itself anyway? — just treats it as a collection of images, which is significantly less convenient than your average document viewer. Meanwhile, apps like the appropriately named Document Viewer deal well with both PDF and DjVu.

```
tiff2pdf -o bla.pdf out.tif
```

Wikipedia suggests the CCITT compression used for black and white text is lossless, which is nice. Interestingly, a 1.8 MB low-quality 200 DPI PDF more than doubled in size with this treatment, but a 20 MB 400 DPI document was reduced to 13 MB. Anyway, for most purposes you could consider compressing it with JBIG2, for instance using jbig2enc. Another option might be to ignore such PDF difficulties and use pdf2djvu, or to compile a DjVu document directly from the TIFF files. At this point we’re tentatively done.

### Harder but Neater Post-Processing

After I’d already written most of this section, I came across this Spanish page that pretty much explains it all. So it goes. Because of that page I decided to add a little note about checkinstall, a program I’ve been using for years but apparently always failed to mention.

You’re going to need jbig2enc. You can grab the latest source or an official release.
But first let’s get some generic stuff required for compilation:

```
sudo apt install automake autotools-dev libtool
```

And the jbig2enc-specific dependencies:

```
sudo apt install libleptonica-dev libjpeg8-dev libpng12-dev libtiff5-dev zlib1g-dev
```

In the jbig2enc-master directory, compile however you like. I tend to do something along these lines:

```
./autogen.sh
mkdir build
cd build
../configure
make
```

Now you can sudo make install to install, but then you’ll have to keep the source directory around if you want to run sudo make uninstall later. Instead you can use checkinstall (sudo apt install checkinstall, you know the drill). Be careful with this stuff, though.

```
sudo checkinstall make install
```

You have to enter a name such as jbig2enc, a proper version number (e.g. 0.28-0 instead of 0.28), and that’s about it. That wasn’t too hard.

At this point you could produce significantly smaller PDFs using jbig2enc itself (some more background information):

```
jbig2 -b outputbasename -p -s whatever-*.tif
pdf.py outputbasename > output.pdf
```

However, it doesn’t deal with mixed images as well as tiff2pdf does.

And while we’re at it, we might just as well set up our environment for some OCR goodness. Mind you, the idea here is just to add a little extra value with no extra time spent after the initial setup. I have absolutely no intention of doing any kind of proofreading or some such on this stuff. The simple fact is that the Scan Tailor treatment drastically improves the chances of OCR success, so it’d be crazy not to do it.

There’s a tool called pdfbeads that can automatically pull it all together, but it needs a little setup first. You need to install ruby-rmagick, ruby-hpricot if you want to do stuff with OCRed text (which is kind of the point), and ruby-dev.

```
sudo apt install ruby-rmagick ruby-hpricot ruby-dev
```

Then you can install pdfbeads:

```
sudo gem install pdfbeads
```

Apparently there are some issues with iconv or something?
Whatever it is, I have no interest in Ruby at the moment and the problem can be fixed with a simple sudo gem install iconv. If iconv is added to the pdfbeads dependencies, or if it switches to whatever method Ruby prefers these days, this shouldn’t be an issue in the future.

At this point we’re ready for the OCR. sudo apt install tesseract-ocr and whatever languages you might want, such as tesseract-ocr-nld. The -l switch is irrelevant if you just want English, which is the default.

```
parallel tesseract -l eng+nld {} {.} hocr ::: *.tif
```

GNU Parallel speeds this up by automatically running as many different tesseracts as you’ve got CPU cores. Install it with sudo apt install parallel if you don’t have it, obviously. I’m pretty patient about however much time this stuff might take as long as it proceeds by itself without requiring any attention, but on my main computer this will make everything proceed almost four times as quickly. Why wait any longer than you have to?

The OCR results are actually of extremely high quality: it has some issues with italics and that’s pretty much it. It’s not an issue with the characters; it just doesn’t seem to detect the spaces between the words. But what do I care, other than that minor detail it’s close to perfect, and this wasn’t even part of the original plan. It’s a very nice bonus.

Once that’s done, we can produce our final result:

```
pdfbeads *.tif > out.pdf
```

My 20 MB input file is now a more usable and legible 3.7 MB PDF with decent OCR to boot. Neat. A completely JPEG-based file I tried went from 46.8 MB to 2.6 MB.

Now it’s time to automate the workflow with some shell scripting.

### ReadablePDF, the script

Using the following script you can automate the entire workflow described above, although I’d always recommend double-checking Scan Tailor’s automated results. The better the input, the better the machine output, but even so there might just be one misdetected page hiding out.
The script could still use a few refinements here and there, so I put it up on GitHub. Feel free to fork and whatnot. I licensed it under the GNU General Public License version 3.

```
#!/bin/bash
# readablepdf
# ReadablePDF streamlines the effort of turning a not so great PDF into
# a more easily readable PDF (or of course a pretty decent PDF into an
# even better one). This script depends on poppler-utils, imagemagick,
# scantailor, tesseract-ocr, jbig2enc, and pdfbeads.
#
# Unfortunately only the first four are available in the Debian repositories.
# sudo apt install poppler-utils imagemagick scantailor tesseract-ocr
#
# For more background information and how to install jbig2enc and pdfbeads,
# see http://fransdejonge.com/2014/10/fixing-up-scanned-pdfs-with-scan-tailor#harder-post-processing
#
# GNU Aspell and GNU Parallel are recommended but not required.
# sudo apt install aspell parallel
#
# Aspell dictionaries tend to be called things like aspell-en, aspell-nl, etc.

BASENAME=${@%.pdf} # or basename "$@" .pdf

# It would seem that at some point in its internal processing, pdfbeads has issues with spaces.
# Let's strip them and perhaps some other special characters so as still to provide
# meaningful working directory and file names.
BASENAME_SAFE=$(echo "${BASENAME}" | tr ' ' '_') # Replace all spaces with underscores.
#BASENAME_SAFE=$(echo "${BASENAME_SAFE}" | tr -cd 'A-Za-z0-9_-') # Strip other potentially harmful chars just in case?

SCRIPTNAME=$(basename "$0" .sh)
TMP_DIR=${SCRIPTNAME}-${BASENAME_SAFE}

TESSERACT_PARAMS="-l eng+nld"

# If the project file exists, change directory and assume everything's in order.
# Else do the preprocessing and initiation of a new project.
if [ -f "${TMP_DIR}/${BASENAME_SAFE}.ScanTailor" ]; then
	echo "File ${TMP_DIR}/${BASENAME_SAFE}.ScanTailor exists."
	cd "${TMP_DIR}"
else
	echo "File ${TMP_DIR}/${BASENAME_SAFE}.ScanTailor does not exist."
	# Let's get started.
	mkdir "${TMP_DIR}"
	cd "${TMP_DIR}"

	# Only output PNG to prevent any potential further quality loss.
	pdfimages -png "../${BASENAME}.pdf" "${BASENAME_SAFE}"

	# This is basically what happens in https://github.com/virantha/pypdfocr as well.
	# Get the x-dpi; no logic for different X and Y DPI or different DPI within the PDF file.
	# y-dpi would be pdfimages -list out.pdf | sed -n 3p | awk '{print $14}'
	DPI=$(pdfimages -list "../${BASENAME}.pdf" | sed -n 3p | awk '{print $13}')

	#<<'end_long_comment'
	# TODO Skip all this based on a rotation command-line flag!
	# Adapted from http://stackoverflow.com/a/9778277
	# Scan Tailor says it can't automatically figure out the rotation.
	# I'm not a programmer, but I think I can do well enough by (ab)using OCR. :)
	file="${BASENAME_SAFE}-000.png"

	TMP="/tmp/rotation-calc"
	mkdir ${TMP}

	# Make copies in all four orientations (the src file is 0; copy it to make
	# things less confusing).
	north_file="${TMP}/0"
	east_file="${TMP}/90"
	south_file="${TMP}/180"
	west_file="${TMP}/270"

	cp "$file" "$north_file"
	convert -rotate 90 "$file" "$east_file"
	convert -rotate 180 "$file" "$south_file"
	convert -rotate 270 "$file" "$west_file"

	# OCR each (just append ".txt" to the path/name of the image).
	north_text="$north_file.txt"
	east_text="$east_file.txt"
	south_text="$south_file.txt"
	west_text="$west_file.txt"

	# tesseract appends .txt automatically.
	tesseract "$north_file" "$north_file"
	tesseract "$east_file" "$east_file"
	tesseract "$south_file" "$south_file"
	tesseract "$west_file" "$west_file"

	# Get the word count for each txt file (least 'words' == least whitespace junk
	# resulting from vertical lines of text that should be horizontal).
	wc_table="$TMP/wc_table"
	echo "$(wc -w ${north_text}) ${north_file}" > $wc_table
	echo "$(wc -w ${east_text}) ${east_file}" >> $wc_table
	echo "$(wc -w ${south_text}) ${south_file}" >> $wc_table
	echo "$(wc -w ${west_text}) ${west_file}" >> $wc_table

	# Spellcheck. The lowest number of misspelled words is most likely the
	# correct orientation.
	misspelled_words_table="$TMP/misspelled_words_table"
	while read record; do
		txt=$(echo "$record" | awk '{ print $2 }')
		# This is harder to automate away; pretend we only deal with English and Dutch for now.
		misspelled_word_count=$(< "${txt}" aspell -l en list | aspell -l nl list | wc -w)
		echo "$misspelled_word_count $record" >> $misspelled_words_table
	done < $wc_table

	# Do the sort, overwrite the input file, save out the text.
	winner=$(sort -n $misspelled_words_table | head -1)
	rotated_file=$(echo "${winner}" | awk '{ print $4 }')
	rotation=$(basename "${rotated_file}")
	echo "Rotating ${rotation} degrees"

	# Clean up.
	if [ -d ${TMP} ]; then
		rm -r ${TMP}
	fi
	# TODO end skip

	if [[ ${rotation} -ne 0 ]]; then
		mogrify -rotate "${rotation}" "${BASENAME_SAFE}"-*.png
	fi
	#end_long_comment

	# consider --color-mode=mixed --despeckle=cautious
	scantailor-cli --dpi="${DPI}" --margins=5 --output-project="${BASENAME_SAFE}.ScanTailor" ./*.png ./
fi

while true; do
	read -p "Please ensure automated detection proceeded correctly by opening the project file ${TMP_DIR}/${BASENAME_SAFE}.ScanTailor in Scan Tailor. Enter [Y] to continue now and [N] to abort. If you restart the script, it'll continue from this point unless you delete the directory ${TMP_DIR}. " yn
	case $yn in
		[Yy]* ) break;;
		[Nn]* ) exit;;
		* ) echo "Please answer yes or no.";;
	esac
done

# Use GNU Parallel to speed things up if it exists.
if command -v parallel >/dev/null; then
	parallel tesseract {} {.} ${TESSERACT_PARAMS} hocr ::: *.tif
else
	for i in ./*.tif; do tesseract "$i" "${i%.*}" ${TESSERACT_PARAMS} hocr; done
fi

# pdfbeads doesn't play nice with filenames with spaces. There's nothing we can do
# about that here, but that's why ${BASENAME_SAFE} is generated up at the beginning.
#
# Also pdfbeads ./*.tif > "${BASENAME_SAFE}.pdf" doesn't work,
# so you're in trouble if your PDF's name starts with "-".
# See http://www.dwheeler.com/essays/filenames-in-shell.html#prefixglobs
pdfbeads *.tif > "${BASENAME_SAFE}.pdf"
#OUTPUT_BASENAME=${BASENAME}-output@DPI${DPI}
mv "${BASENAME_SAFE}.pdf" ../"${BASENAME}-readable.pdf"
```
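The naming dance at the top of the script is perhaps its least obvious part, so here’s how it plays out for a hypothetical input (assuming the script was saved as readablepdf.sh, so SCRIPTNAME becomes readablepdf):

```shell
# What the script derives from a hypothetical input of "My Scan.pdf":
INPUT="My Scan.pdf"
BASENAME="${INPUT%.pdf}"                        # "My Scan"
BASENAME_SAFE=$(echo "$BASENAME" | tr ' ' '_')  # "My_Scan", safe for pdfbeads
echo "working directory: readablepdf-${BASENAME_SAFE}"
echo "final output: ${BASENAME}-readable.pdf"
```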


### Alternatives

If you’re not interested in the space savings of JBIG2 because the goal of ease of use and better legibility has already been achieved (and you’d be quite right; digging further is just something I like to do), after tiff2pdf you could still consider tossing in pdfsandwich. You might as well, for the extra effort only consists of installing an extra package. Alternatively, OCRmyPDF might also work, or perhaps even plain Tesseract 3.03 and up; pdfsandwich just takes writing the wrapper out of your hands. But again, this part is just a nice bonus.

pdfsandwich -lang nld+eng filename.pdf -o filename-ocr.pdf

The resulting file doesn’t strike my fancy after playing with the tools mentioned above, but hey, it takes less time to set up and it works.

#### DjVu

DjVu is probably a superior alternative to PDF, so it ought to be worth investigating. This link might help.

A very useful application, found in the Debian repositories to boot, is djvubind. It works very similarly to ReadablePDF, but produces DjVu files instead. For sharing these may be less ideal, but for personal use they seem to be even smaller (something that could probably be tuned via the dictionary-size choices) while displaying even faster.

### Other Matters of Potential Interest

Note that I’m explicitly not interested in archiving a book digitally or some such. That is, I want to obtain a digital copy of a document or book that avoids combining the worst of both digital and paper into one document, but I’m not interested in anything beyond that unless it can be done automatically. Moreover, attempting to replicate original margins would actually make the digital files less usable. For digital archiving you’ll obviously have to redo that not-so-great 200 DPI scan and do a fair bit more to boot. It looks like Spreads is a great way to automate the kind of workflow desired in that case. This link dump might offer some further inspiration.

### Conclusion

My goal has been achieved. Creating significantly improved PDFs shouldn’t take more than a minute or two of my time from now on, depending a bit on the quality of the input document. Enjoy.

## Connecting to Belgacom FON: It’s Still Possible

I was happily using Belgacom FON Autologin instead of the behemoth of an official Belgacom app, but ever since Belgacom updated their portal code I’ve barely been able to connect at all. That’s Belgacom’s fault, not the app’s. I haven’t been able to connect through the web interface or the official app either, because it just times out or gives mysterious errors. Unfortunately Belgacom FON Autologin pretends to be a browser by sending information over the HTTP protocol, so it equally fails to connect.

Luckily I just came across FON AccessFon, an app that utilizes the WISPr protocol. In a magnificent 33 kB it manages to connect to Belgacom FON quickly and efficiently, and for every FON router rather than just Belgacom’s to boot.
