The One with the Thoughts of Frans

Workarounds for Turning Off Your DisplayPort Monitor

Sometimes you jot down a few quick notes for yourself without bothering to turn them into a blog post that might be useful to others. This is one of those notes. First, I’ll introduce my computer monitor workflow as it’s been since time immemorial, also known as ’95 or ’96. Just like how I turn off lights I don’t use, I’ve always turned off my monitor when I wasn’t using it. This was never a problem until early 2015, when I had to use DisplayPort for the first time. If you want an UltraHD monitor, which you do if you care even the tiniest bit about sharpness and clarity, you have to use DisplayPort.

But DisplayPort isn’t nice. Turning off your monitor is treated the same as disconnecting it. In Windows this means everything resets itself to some absurdly low resolution, whereas in Linux the consequences can be even worse (like having to SSH in from another computer to run an xrandr command to reactivate the monitor). This means you either face a colossal waste of energy or continuous annoyance at the fact that your monitor has turned itself off yet again. In my view monitor timeouts should be at least twenty minutes, just as a failsafe in the extremely unlikely event that you forgot to turn it off. Luckily I found two reasonable workarounds within the first week or two of having acquired my UHD monitor.

The first?

xset dpms force off

This has the same effect as your monitor timeout, only at your volition. I tend to find the blinking light on the monitor somewhat annoying, but this nevertheless remains your best bet to quickly turn the screen off as part of your regular workflow.
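
If you bind that command to a key or run it from a terminal, releasing the key can wake the monitor right back up again. The usual trick, and the one I’d suggest, is to put a short sleep in front:

sleep 1 && xset dpms force off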

The second method consists of actually turning the monitor off. Besides getting rid of the blinking light I figure it saves just a tiny bit more electricity to boot. Which is useful if you want to keep your computer active, but not your monitor. For this method you have to switch to TTY (Ctrl + Alt + F1-6) before turning your monitor off. Then when you turn your monitor back on, X won’t know it’s been missing. Switch back with Ctrl + Alt + F7.

I’m still hopeful that there might simply be an xorg.conf setting I’ve overlooked, but in any case these workarounds serve their purpose. Note that xset dpms force off is also tremendously useful on laptops that don’t have a function key for turning off the screen. Standby often just isn’t what you want.
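
On the subject of timeouts: if you’d rather push the DPMS blanking out to the twenty minutes I mentioned above instead of relying on these workarounds, xset can do that too. The three values are the standby, suspend and off timeouts in seconds (0 disables one), and xset q shows the current settings:

xset dpms 0 0 1200
xset q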


Scanning with an HP MFP (multi-function peripheral) on Debian

You need to install hplip. It looks like something’s still off about the colors compared to Ubuntu and Windows, and I can’t figure out what the difference is. Alas. :/ Also, don’t buy HP.
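
For the record, the install amounts to the following. If I remember correctly the SANE backend lives in a separate package, which you may need before frontends like xsane or simple-scan see the scanner:

sudo apt install hplip
sudo apt install libsane-hpaio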


Less is More

A minimal Debian install comes without the ability to view man pages. Fair enough, it’s minimal after all. But they can be quite useful. A sudo apt install man later results in man pages being shown. That’s all, folks? Unfortunately not, because the man pages are shown using the more command, which doesn’t allow for scrolling up and down with the arrow keys or j and k, Pg Up and Pg Dn, and all the other usual niceties. To fix this you need to sudo apt install less, a “pager program similar to more.” And better, at least on any machine with sufficient RAM. Meaning anything anyone is likely to use in 2016, or probably also in 1990 for that matter.
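
If some tool still insists on more after installing less, you can point the system pager at less explicitly. A sketch, assuming the usual Debian pager alternatives group and that the tool in question respects PAGER:

sudo update-alternatives --config pager
export PAGER=less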


Let’s Encrypt on Debian/Jessie

Wow, that was easy. I’ve been reading about Let’s Encrypt all over the place, and I wouldn’t like any snooping on my feeds password, now would I?

  1. Add the jessie-backports repository.
  2. sudo apt install -t jessie-backports certbot python-certbot-apache
  3. sudo letsencrypt --apache -d example.com [-d subdomain.example.com]

This stuff expires every 90 days, so you still need a cron job to renew.

sudo crontab -e

Let’s say 4 at night every Sunday.

0 4 * * 0 letsencrypt renew >> /var/log/letsencrypt-renew.log
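
Before trusting the cron job I’d run the renewal once by hand. Recent versions of the client also understand a --dry-run flag that tests against the staging servers without touching your real certificates; check whether your version has it:

sudo letsencrypt renew
sudo letsencrypt renew --dry-run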

Neat!


xorg.conf: EmulateWheel stopped working on libinput update

I didn’t spot it in the Debian changelog, but apparently the latest libinput10 update on Debian/stretch (unstable) broke my EmulateWheel option. Because the scroll ring on my trackball is broken, it’s all I’ve got. It’s also rather nice on trackballs without any kind of scrolling functionality at all, such as the Logitech Trackman Marble.

Let’s start by examining my current xorg.conf:

$ cat /etc/X11/xorg.conf 
Section "InputClass"
	Identifier	"Kensington Trackball"
	MatchProduct	"Kensington Expert Mouse"
	Option		"SendCoreEvents" "True"
	Option		"ButtonMapping" "0 1 2 4 5 6 7 3"
	Option		"EmulateWheel" "True"
	Option		"EmulateWheelButton" "1"
EndSection

Scanning man libinput doesn’t list any entries for those options anymore, but it does contain the following:

Option "ScrollButton" "int"
Designates a button as scroll button. If the ScrollMethod is button and the button is logically held down, x/y axis movement is converted into scroll events.
Option "ScrollMethod" "string"
Enables a scroll method. Permitted values are none, twofinger, edge, button. Not all devices support all options. If an option is unsupported, the default scroll option for this device is used.

Note how this would allow you to disable two-finger scroll on e.g. our Wacom drawing tablet if you don’t like it. (But I do!) In any case, adjusting my xorg.conf accordingly:

Section "InputClass"
        Identifier      "Kensington Trackball"
        MatchProduct    "Kensington Expert Mouse"
        Option          "SendCoreEvents" "True"
        Option          "ButtonMapping" "0 1 2 4 5 6 7 3"
        Option          "ScrollMethod" "button"
        Option          "ScrollButton" "1"
EndSection

Works like a charm. Better yet, it now also scrolls horizontally. Which can be disabled with Option "HorizontalScrolling" "false" if you so desire. All’s well that ends well.
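
If you want to try this without restarting X, the same knobs are exposed as input device properties. Something along these lines should work, though the exact device and property names on your system are best checked with xinput list and xinput list-props first:

xinput list-props "Kensington Expert Mouse"
xinput set-prop "Kensington Expert Mouse" "libinput Scroll Method Enabled" 0 0 1
xinput set-prop "Kensington Expert Mouse" "libinput Button Scrolling Button" 1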


Fixing Up Scanned PDFs with Scan Tailor

Scanned PDFs come my way quite often and I don’t infrequently wish they were nicer to use on digital devices. One semi-solution might include running off to the library and rescanning them personally, but there is a middle road between doing nothing and doing too much: digital manipulation. The knight in shining armor is called Scan Tailor. Note that this post is not about merely cropping away some black edges. When you’re just looking for a tool to cut off some unwanted edges, I’d recommend PDF Scissors instead. If you just want to fix some incorrect rotation once and for all, try the pdftools found in texlive-extra-utils, which gives you simple shorthands like pdf90, pdf180 and pdf270. This post is about splitting up double scanned pages, increasing clarity, and adding an OCR layer on top. With that out of the way, if you’re impatient, you can skip to the script I wrote to automate the process.

Coaxing Scan Tailor

Unfortunately Scan Tailor doesn’t directly load scanned PDFs, which is what seems to be produced by copiers by default and what you’re most likely to receive from other people. Luckily this is easy to work around. If you want to use the program on documents you scan yourself, selecting e.g. TIFF in the output options could be a better choice.

To extract the images from PDF files, I use pdfimages. I believe it tends to come preinstalled, but if not grab poppler-utils with sudo apt install poppler-utils.

pdfimages -png filename.pdf outputname

You might want to take a look at man pdfimages. The -j flag makes sure JPEG files are output as is rather than being converted to something else, for instance, while the -tiff option would convert the output to TIFF. Like PNG, that is lossless. What might also be of interest are -ccitt and -all, but in this case I’d want the images as JPEG, PNG or TIFF because that’s what Scan Tailor takes as input.

At this point you could consider cropping your images to aid processing with Scan Tailor, but I’m not entirely sure how to automate it out of the way. Perhaps unpaper with a few flags could work to remove (some) black edges, but functionally speaking Scan Tailor is pretty much unpaper with a better (G)UI. In any case, this could be investigated.

You’ll want to use pdfimages once more to obtain the DPI of your images for use with Scan Tailor, unless you like to calculate the DPI yourself using the document dimensions and the number of pixels. Both PNG and TIFF support this information, but unfortunately pdfimages doesn’t write it.

$ pdfimages -list filename.pdf
page   num  type   width height color comp bpc  enc interp  object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
   1     0 image    1664  2339  gray    1   1  ccitt  no         4  0   200   200  110K  23%
   2     1 image    1664  2339  gray    1   1  ccitt  no         9  0   200   200  131K  28%
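
If you’d rather grab that number programmatically, the one-liner the script at the end of this post uses works fine, with the caveat that it only looks at the x-ppi of the first image:

DPI=$(pdfimages -list filename.pdf | sed -n 3p | awk '{print $13}')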

Clearly our PDF was scanned at a somewhat disappointing 200 DPI. Now you can start Scan Tailor, create a new project based on the images you just extracted, enter the correct DPI, and just follow the very intuitive steps. For more guidance, read the manual. With any setting you apply, take care to apply it to all pages if you wish, because by default the program quite sensibly applies it only to one page. Alternatively you could run scantailor-cli to automate the process, which could reduce your precious time spent to practically zero. I prefer to take a minute or two to make sure everything’s alright. I’m sure I’ll make up the difference by not having to scroll left and right and whatnot afterwards.
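
For reference, the scantailor-cli invocation the script below ends up using looks roughly like this; the DPI comes from pdfimages -list as shown above, and its usage output lists the many other options it shares with the GUI:

scantailor-cli --dpi=200 --margins=5 --output-project=filename.ScanTailor ./*.png ./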

By default Scan Tailor wants to output to 600 DPI, but with my 200 DPI input file that just seemed odd. Apparently it has something to do with the conversion to pure black and white, which necessitates a higher DPI to preserve some information. That being said, 600 DPI seems almost laughably high for 200 DPI input. Perhaps “merely” twice the input DPI would be sufficient. Either way, be sure to use mixed mode on pages with images.

Scan Tailor’s output is not a PDF yet. It’ll require a little bit of post-processing.

Simple Post-Processing

The way I usually go about trying to find new commands already installed on my computer is simply by typing the relevant phrase, in this case tiff. Press Tab for autocomplete. If that fails, you could try apt search tiff, although I prefer a GUI like Synaptic for that. The next stop is a search engine, where you can usually find results faster by focusing on the Arch, Debian or Ubuntu Wikis. On the other hand, blog and forum posts often contain useful information.

$ tiff
tiff2bw     tiff2ps     tiffcmp     tiffcrop    tiffdump    tiffmedian  tiffsplit   
tiff2pdf    tiff2rgba   tiffcp      tiffdither  tiffinfo    tiffset     tifftopnm

tiff2pdf sounds just like what we need. Unfortunately it only processes one file at a time. Easy to fix with a simple shell script, but rtfm (man tiff2pdf) for useful info. “If you have multiple TIFF files to convert into one PDF file then use tiffcp or other program to concatenate the files into a multiple page TIFF file.”

tiffcp *.tif out.tif

You could easily stop there, but for sharing or use on devices a PDF (or DjVu) file is superior. My phone doesn’t even come with a TIFF viewer by default and the one in Dropbox — why does that app open almost all documents by itself anyway? — just treats it as a collection of images, which is significantly less convenient than your average document viewer. Meanwhile, apps like the appropriately named Document Viewer deal well with both PDF and DjVu.

tiff2pdf -o bla.pdf out.tif

Wikipedia suggests the CCITT compression used for black and white text is lossless, which is nice. Interestingly, a 1.8 MB low-quality 200 DPI PDF more than doubled in size with this treatment, but a 20 MB 400 DPI document was reduced in size to 13 MB. Anyway, for most purposes you could consider compressing it with JBIG2, for instance using jbig2enc. Another option might be to ignore such PDF difficulties and use pdf2djvu or to compile a DjVu document directly from the TIFF files. At this point we’re tentatively done.
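
Should you go the pdf2djvu route just mentioned, a minimal invocation looks like this, reusing the bla.pdf produced above:

pdf2djvu -o bla.djvu bla.pdf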

Harder but Neater Post-Processing

After I’d already written most of this section, I came across this Spanish page that pretty much explains it all. So it goes. Because of that page I decided to add a little note about checkinstall, a program I’ve been using for years but apparently always failed to mention.

You’re going to need jbig2enc. You can grab the latest source or an official release. But first let’s get some generic stuff required for compilation:

sudo apt install build-essential automake autotools-dev libtool

And the jbig2enc-specific dependencies:

sudo apt install libleptonica-dev libjpeg8-dev libpng-dev libtiff5-dev zlib1g-dev

In the jbig2enc-master directory, compile however you like. I tend to do something along these lines:

./autogen.sh
mkdir build
cd build
../configure
make

Now you can sudo make install to install, but you’ll have to keep the source directory around if you want to run sudo make uninstall later. Instead you can use checkinstall (sudo apt install checkinstall, you know the drill). Be careful with this stuff though.

sudo checkinstall make install

You have to enter a name such as jbig2enc, a proper version number (e.g. 0.28-0 instead of 0.28) and that’s about it. That wasn’t too hard.

At this point you could produce significantly smaller PDFs using jbig2enc itself (some more background information):

jbig2 -b outputbasename -p -s whatever-*.tif
pdf.py outputbasename > output.pdf

However, it doesn’t deal with mixed images as well as tiff2pdf does. And while we’re at it, we might just as well set up our environment for some OCR goodness. Mind you, the idea here is just to add a little extra value with no extra time spent after the initial setup. I have absolutely no intention of doing any kind of proofreading or some such on this stuff. The simple fact is that the Scan Tailor treatment drastically improved the chances of OCR success, so it’d be crazy not to do it. There’s a tool called pdfbeads that can automatically pull it all together, but it needs a little setup first.

You need to install ruby-rmagick, ruby-hpricot if you want to do stuff with OCRed text (which is kind of the point), and ruby-dev.

sudo apt install ruby-rmagick ruby-hpricot ruby-dev

Then you can install pdfbeads:

sudo gem install pdfbeads

Apparently there are some issues with iconv or something? Whatever it is, I have no interest in Ruby at the moment and the problem can be fixed with a simple sudo gem install iconv. If iconv is added to the pdfbeads dependencies or if it switches to whatever method Ruby would prefer, this shouldn’t be an issue in the future.

At this point we’re ready for the OCR. sudo apt install tesseract-ocr and whatever languages you might want, such as tesseract-ocr-nld. The -l switch is irrelevant if you just want English, which is the default.

parallel tesseract -l eng+nld {} {.} hocr ::: *.tif

GNU Parallel speeds this up by automatically running as many different tesseracts as you’ve got CPU cores. Install with sudo apt install parallel if you don’t have it, obviously. I’m pretty patient about however much time this stuff might take as long as it proceeds by itself without requiring any attention, but on my main computer this will make everything proceed almost four times as quickly. Why wait any longer than you have to? The OCR results are actually of extremely high quality: the only real issue is with italics, and that’s pretty much it. Even there it’s not the characters themselves that it gets wrong; it just doesn’t seem to detect the spaces in between the words. But what do I care? Other than that minor detail it’s close to perfect, and this wasn’t even part of the original plan. It’s a very nice bonus.

Once that’s done, we can produce our final result:

pdfbeads *.tif > out.pdf

My 20 MB input file now is a more usable and legible 3.7 MB PDF with decent OCR to boot. Neat. A completely JPEG-based file I tried went from 46.8 MB to 2.6 MB. Now it’s time to automate the workflow with some shell scripting.

ReadablePDF, the script

Using the following script you can automate the entire workflow described above, although I’d always recommend double-checking Scan Tailor’s automated results. The better the input, the better the machine output, but even so there might just be one misdetected page hiding out. The script could still use a few refinements here and there, so I put it up on GitHub. Feel free to fork and whatnot. I licensed it under the GNU General Public License version 3.

#!/bin/bash
# readablepdf
# ReadablePDF streamlines the effort of turning a not so great PDF into
# a more easily readable PDF (or of course a pretty decent PDF into an
# even better one). This script depends on poppler-utils, imagemagick,
# scantailor, tesseract-ocr, jbig2enc, and pdfbeads.
#
# Unfortunately only the first four are available in the Debian repositories.
# sudo apt install poppler-utils imagemagick scantailor tesseract-ocr
#
# For more background information and how to install jbig2enc and pdfbeads,
# see //fransdejonge.com/2014/10/fixing-up-scanned-pdfs-with-scan-tailor#harder-post-processing
#
# GNU Aspell and GNU Parallel are recommended but not required.
# sudo apt install aspell parallel
#
# Aspell dictionaries tend to be called things like aspell-en, aspell-nl, etc.

BASENAME=${@%.pdf} # or `basename "$@" .pdf`

# It would seem that at some point in its internal processing, pdfbeads has issues with spaces.
# Let's strip them and perhaps some other special characters so as still to provide
# meaningful working directory and file names.
BASENAME_SAFE=$(echo "${BASENAME}" | tr ' ' '_') # Replace all spaces with underscores.
#BASENAME_SAFE=$(echo "${BASENAME_SAFE}" | tr -cd 'A-Za-z0-9_-') # Strip other potentially harmful chars just in case?

SCRIPTNAME=$(basename "$0" .sh)
TMP_DIR=${SCRIPTNAME}-${BASENAME_SAFE}

TESSERACT_PARAMS="-l eng+nld"


# If project file exists, change directory and assume everything's in order.
# Else do the preprocessing and initiation of a new project.
if [ -f "${TMP_DIR}/${BASENAME_SAFE}.ScanTailor" ]; then
	echo "File ${TMP_DIR}/${BASENAME_SAFE}.ScanTailor exists."
	cd "${TMP_DIR}"
else
	echo "File ${TMP_DIR}/${BASENAME_SAFE}.ScanTailor does not exist."
	
	# Let's get started.
	mkdir "${TMP_DIR}"
	cd "${TMP_DIR}"
	
	# Only output PNG to prevent any potential further quality loss.
	pdfimages -png "../${BASENAME}.pdf" "${BASENAME_SAFE}"
	
	# This is basically what happens in https://github.com/virantha/pypdfocr as well
	# get the x-dpi; no logic for different X and Y DPI or different DPI within PDF file
	# y-dpi would be pdfimages -list out.pdf | sed -n 3p | awk '{print $14}'
	DPI=$(pdfimages -list "../${BASENAME}.pdf" | sed -n 3p | awk '{print $13}')
	
#<<'end_long_comment'
	# TODO Skip all this based on a rotation command-line flag!
	# Adapted from http://stackoverflow.com/a/9778277
	# Scan Tailor says it can't automatically figure out the rotation.
	# I'm not a programmer, but I think I can do well enough by (ab)using OCR. :)
	file="${BASENAME_SAFE}-000.png"
	
	TMP="/tmp/rotation-calc"
	mkdir ${TMP}

	# Make copies in all four orientations (the src file is 0; copy it to make 
	# things less confusing)
	north_file="${TMP}/0"
	east_file="${TMP}/90"
	south_file="${TMP}/180"
	west_file="${TMP}/270"

	cp "$file" "$north_file"
	convert -rotate 90 "$file" "$east_file"
	convert -rotate 180 "$file" "$south_file"
	convert -rotate 270 "$file" "$west_file"

	# OCR each (just append ".txt" to the path/name of the image)
	north_text="$north_file.txt"
	east_text="$east_file.txt"
	south_text="$south_file.txt"
	west_text="$west_file.txt"

	# tesseract appends .txt automatically
	tesseract "$north_file" "$north_file"
	tesseract "$east_file" "$east_file"
	tesseract "$south_file" "$south_file"
	tesseract "$west_file" "$west_file"

	# Get the word count for each txt file (least 'words' == least whitespace junk
	# resulting from vertical lines of text that should be horizontal.)
	wc_table="$TMP/wc_table"
	echo "$(wc -w ${north_text}) ${north_file}" > $wc_table
	echo "$(wc -w ${east_text}) ${east_file}" >> $wc_table
	echo "$(wc -w ${south_text}) ${south_file}" >> $wc_table
	echo "$(wc -w ${west_text}) ${west_file}" >> $wc_table

	# Spellcheck. The lowest number of misspelled words is most likely the 
	# correct orientation.
	misspelled_words_table="$TMP/misspelled_words_table"
	while read record; do
		txt=$(echo "$record" | awk '{ print $2 }')
		# This is harder to automate away, pretend we only deal with English and Dutch for now.
		misspelled_word_count=$(< "${txt}" aspell -l en list | aspell -l nl list | wc -w)
		echo "$misspelled_word_count $record" >> $misspelled_words_table
	done < $wc_table

	# Do the sort, overwrite the input file, save out the text
	winner=$(sort -n $misspelled_words_table | head -1)
	rotated_file=$(echo "${winner}" | awk '{ print $4 }')
	
	rotation=$(basename "${rotated_file}")
	
	echo "Rotating ${rotation} degrees"

	# Clean up.
	if [ -d ${TMP} ]; then
		rm -r ${TMP}
	fi
	# TODO end skip
	
	if [[ ${rotation} -ne 0 ]]; then
		mogrify -rotate "${rotation}" "${BASENAME_SAFE}"-*.png
	fi
#end_long_comment
	
	# consider --color-mode=mixed --despeckle=cautious
	scantailor-cli --dpi="${DPI}" --margins=5 --output-project="${BASENAME_SAFE}.ScanTailor" ./*.png ./
fi

while true; do
	read -p "Please ensure automated detection proceeded correctly by opening the project file ${TMP_DIR}/${BASENAME_SAFE}.ScanTailor in Scan Tailor. Enter [Y] to continue now and [N] to abort. If you restart the script, it'll continue from this point unless you delete the directory ${TMP_DIR}. " yn
	case $yn in
		[Yy]* ) break;;
		[Nn]* ) exit;;
		* ) echo "Please answer yes or no.";;
	esac
done

# Use GNU Parallel to speed things up if it exists.
if command -v parallel >/dev/null; then
	parallel tesseract {} {.} ${TESSERACT_PARAMS} hocr ::: *.tif
else
	for i in ./*.tif; do tesseract "$i" "${i%.tif}" ${TESSERACT_PARAMS} hocr; done
fi

# pdfbeads doesn't play nice with filenames with spaces. There's nothing we can do
# about that here, but that's why ${BASENAME_SAFE} is generated up at the beginning.
# 
# Also pdfbeads ./*.tif > "${BASENAME_SAFE}.pdf" doesn't work,
# so you're in trouble if your PDF's name starts with "-".
# See http://www.dwheeler.com/essays/filenames-in-shell.html#prefixglobs
pdfbeads *.tif > "${BASENAME_SAFE}.pdf"

#OUTPUT_BASENAME=${BASENAME}-output@DPI${DPI}
mv "${BASENAME_SAFE}.pdf" ../"${BASENAME}-readable.pdf"

Alternatives

If you’re not interested in the space savings of JBIG2 because the goal of ease of use and better legibility has been achieved (and you’d be quite right; digging further is just something I like to do), after tiff2pdf you could still consider tossing in pdfsandwich. You might as well, for the extra effort only consists of installing an extra package. Instead, OCRmyPDF might also work, or perhaps even plain Tesseract 3.03 and up. pdfsandwich just takes writing the wrapper out of your hands. But again, this part is just a nice bonus.

pdfsandwich -lang nld+eng filename.pdf -o filename-ocr.pdf

The resulting file doesn’t strike my fancy after playing with the tools mentioned above, but hey, it takes less time to set up and it works.

DjVu

DjVu is probably a superior alternative to PDF, so it ought to be worth investigating. This link might help.

A very useful application, found in the Debian repositories to boot, is djvubind. It works very similarly to ReadablePDF, but produces DjVu files instead. For sharing these may be less ideal, but for personal use they seem to be even smaller (something that could probably be affected by the choice of dictionary size) while displaying even faster.
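
I haven’t scripted it, but if I recall correctly the basic usage is simply pointing djvubind at a directory of processed images, something along these lines (check man djvubind for the OCR and encoder options; the directory name here is just whatever holds your Scan Tailor output):

djvubind ./out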

Other Matters of Potential Interest

Note that I’m explicitly not interested in archiving a book digitally or some such. That is, I want to obtain a digital copy of a document or book that avoids combining the worst of both digital and paper into one document, but I’m not interested in anything beyond that unless it can be done automatically. Moreover, attempting to replicate original margins would actually make the digital files less usable. For digital archiving you’ll obviously have to redo that not-so-great 200 DPI scan and do a fair bit more to boot. It looks like Spreads is a great way to automate the kind of workflow desired in that case. This link dump might offer some further inspiration.

Conclusion

My goal has been achieved. Creating significantly improved PDFs shouldn’t take more than a minute or two of my time from now on, depending a bit on the quality of the input document. Enjoy.


Debian: International Fonts

Ubuntu comes with a large swath of international fonts installed by default, but Debian requires a little more attention. Although I can’t read the languages, I can recognize which script is which. Besides, boxes are just ugly.

  • East Asian: apt-get install ttf-arphic-uming ttf-wqy-zenhei ttf-sazanami-mincho ttf-sazanami-gothic ttf-unfonts-core (source)
  • Indic: apt-get install ttf-indic-fonts (source)
  • All together: apt-get install ttf-arphic-uming ttf-wqy-zenhei ttf-sazanami-mincho ttf-sazanami-gothic fonts-unfonts-core ttf-indic-fonts

These are merely the ones that I missed the most. I may update this post in the future.


Ubuntu/Linux Tips That I Can’t Do Without

This post is more of a reference for myself than for other people to read, but some of it might be useful. I’m currently using Ubuntu 9.10.

Audio

My biggest problem with Linux is still audio-related issues. Luckily they are mostly fairly trivial to fix – at least on my laptop. I haven’t figured out how to make my desktop output 5.1 audio through optical out, so I’m still using Windows there.

Crackling Sound

If I boot into KDE4 instead of Gnome then my audio is messed up afterward. I have no idea why, but to fix it I can run alsamixer -Dhw and turn the PCM volume all the way up (or at least higher than 0). Source.

Another issue I’ve noticed is that after adjusting the volume in Gnome or KDE, I can never quite get the volume back up to what was previously 100% (i.e. the same max as in Windows). Starting alsamixer shows that the front and headphone volumes get stuck at about 70%.

Audio input/output

Apparently the Ubuntu System > Preferences > Sound doesn’t properly set the default inputs and outputs for PulseAudio. To use different inputs or outputs for audio for programs you can use pavucontrol. It makes the latest betas of Skype usable on Linux.
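
pavucontrol isn’t installed by default:

sudo apt-get install pavucontrol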

Last.FM submission (in Quod Libet and Perhaps Other Applications)

You need to install the Last.FM submission daemon (sudo apt-get install lastfmsubmitd). Also see this bug report.

Browsers

Chromium

I like having a bunch of browsers at my disposal and Chromium comes right after Opera, Links2, Firefox, Epiphany, and Konqueror in my list of favorite browsers. That puts it ahead of most notably Safari and IE. 😉

deb http://ppa.launchpad.net/chromium-daily/ppa/ubuntu karmic main
deb-src http://ppa.launchpad.net/chromium-daily/ppa/ubuntu karmic main

Install using sudo apt-get install chromium-browser. If you forget about the browser suffix then you’ll end up with some kind of Space Invaders clone. It’s quite nice actually, but I keep forgetting about its existence.

Links 2

Always nice to have. sudo apt-get install links2.

Opera

No OS is fully functional if it doesn’t have Opera. Download the latest Qt4 build from FTP because the repository, which is primarily aimed at Debian, doesn’t seem to be working properly right now—and if it did, it would install the Qt3 rather than the Qt4 version. I find that Tango CL does a pretty good job of blending my Opera in with various types of Gnome and KDE looks, although there may be more specialized skins available to take this even further. I use the purple color scheme because it seems to fit in better with the blue looks of my system than the blue color scheme does.

If Flash doesn’t want to work on YouTube in Opera, get rid of all the flashplugin-alternative.so files (or at least make sure that Opera doesn’t see them).
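
To track those files down, something like this should do; the exact locations depend on which Flash packages you have installed:

sudo find / -name 'flashplugin-alternative.so' 2>/dev/null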

Miscellaneous

Audio Player

Rhythmbox is some kind of iTunes clone full of bugs. Utterly useless unless you want to listen to one of the predefined Internet radio channels while Ubuntu is installing. Get Amarok with sudo apt-get install amarok. First things first, go to Settings > Configure Amarok and uncheck Show splash screen on startup. While very self-explanatory, still very annoying. I mostly use Amarok for playing internet sources, such as Librivox, Last.FM and various internet radio channels. For my local music I prefer something like foobar2000, which probably translates best to Quod Libet in a Linux context. These days I would not use Amarok at all, though: Goggles Music Manager is my current audio player of choice in Linux, even if it leaves much to be desired compared to foobar2000.

Background

I like to use a touch of red as my background on Ubuntu. Source.

Circular scrolling

Install GSynaptics using sudo apt-get install gsynaptics. Go to System > Preferences > Touchpad. Go to the tab Scrolling. Enable circular scrolling. Much better use of the touchpad.

Compositing

I haven’t yet figured out what window manager I want to use. I do like Compiz, but its application switching capabilities are pure bile. Metacity has compositing, but it feels slower than with compositing turned off and you can’t seem to configure anything. I don’t want shadows and all that junk; I just don’t want my windows to take half a second to appear when I switch desktops. I could try to use Metacity compositing in combination with superswitcher, but it just lacks some of that nice 3D accelerated flair. If only the Compiz plugins were properly annotated, perhaps I could take a stab at writing a SmartTab.org clone myself. It’s a pity that with all of Ubuntu’s usability improvements over Windows, application switching isn’t one of them. Perhaps I’ll have to use Kwin, which is much better but feels somewhat out of place.

Compiling Software

Don’t forget that when a ./autogen.sh or a ./configure is complaining about missing a package it’s talking about package-dev. Gave me a headache a couple of times, but I don’t suppose I’ll break my head over it again.

exiv2

Very useful command line utility for taking care of the metadata of your photos. Find out more with man exiv2, my post highlighting some of my favorite options, and at exiv2’s official website.

Glipper

xclipboard forgets what you copied if you close the application from which you were copying; luckily the situation is easy to rectify. sudo apt-get install glipper, right-click on a panel, click “Add to Panel,” select the entry named “Clipboard manager” and click “Add.”

Grub2

Ubuntu 9.10 comes with Grub2. The relevant command to make it dance and sing is update-grub. Things like default boot entry can be set in /etc/default/grub. Don’t forget to add parentheses. Recovering is horribly complicated compared to ye olde grub.
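
As a sketch, changing the default entry comes down to editing that file and regenerating the configuration; GRUB_DEFAULT is zero-indexed, so 2 means the third menu entry:

sudo nano /etc/default/grub   # e.g. set GRUB_DEFAULT=2
sudo update-grub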

Keyboard Disabling for External USB Keyboard

It depends on the specific hardware and drivers, but the generic principle may still apply.

To disable internal laptop keyboard: sudo sh -c 'echo -n "i8042" > /sys/bus/platform/drivers/i8042/unbind'

To enable it again: sudo sh -c 'echo -n "i8042" > /sys/bus/platform/drivers/i8042/bind'

If it disables more than intended, at worst you’ll have to reboot.

Keyboard Settings

In System > Preferences > Keyboard go to the tab Layouts. I tend to use USA International (AltGR dead keys), but these settings would probably yield real usability improvements in any layout (most notably the plain USA one). In Layout Options, check the following boxes. I’ve also made a screenshot of my settings.

  • Adding EuroSign to certain keys: 5
  • Compose key position: Right Ctrl
  • Key to choose 3rd level: Right Alt

Now you can type the € sign using Right Alt + 5, type various accents like é using either Right Alt + e or Right Ctrl > ' > e and do other fun things like typing the en dash using Right Ctrl > --. and the em dash using Right Ctrl > ---. There’s an extensive compose key reference available for the characters that you can produce with the compose key; the characters that you can type with the Right Alt modifier depend on your keyboard setup. This can be viewed by utilizing the Add Keyboard interface, but there has got to be an easier way to view the current keyboard layout.

In KDE the same can be achieved by going to the KDE System Settings > Regional & Language > Keyboard Layout configuration. Under the Layout tab, select Enable keyboard layouts. Then go to Advanced, where you can set the same options as outlined above.

MS Core Fonts

Don’t forget about them. They make browsing more pleasant because many websites use MS fonts like Verdana. Install using sudo apt-get install msttcorefonts.

SciTE

Gedit is insufficient and Kate is too slow. Grab the latest version because Ubuntu 9.10 comes with the ancient 1.71. I do like to sudo apt-get install scite first because it fixes up the icons, menu entries and such—although it fails to properly register it for all the file types that it automatically registers Gedit and Kate for. Note that after make install it results in a /usr/bin/SciTE binary. I simply delete the remaining scite and then rename the SciTE binary to scite, but there are probably some good reasons not to do it like that—too bad.
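
In concrete terms that comes down to something like this, assuming make install put the binary in /usr/bin alongside the packaged one:

sudo rm /usr/bin/scite
sudo mv /usr/bin/SciTE /usr/bin/scite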

Some settings that I like for my .SciTEUser.properties.

position.width=700
position.height=800
# Indentation
tabsize=2
indent.size=2
use.tabs=1
#indent.auto=1
indent.automatic=1
indent.opening=0
indent.closing=0
#tab.indents=0
#backspace.unindents=0
statusbar.visible=1

# Sizes and visibility in edit pane
line.margin.visible=1
line.margin.width=4
margin.width=16

# Wrapping of long lines
wrap=1

Screen Capture

You can make videos of applications, regions on your screen, or your entire screen with recordMyDesktop. Install with sudo apt-get install gtk-recordmydesktop.

Temperature sensors

Install sensors-applet to be able to monitor temperatures of various hardware components in a Gnome panel. This depends on lm-sensors, which is of course installed automatically. To make it actually detect all available sensors run sudo sensors-detect. Without doing that, I can’t monitor the temperature of my CPU cores.

Network

Enter Password to Unlock Keyring

To get rid of this annoying behavior, check Available to all users in the settings for the respective network. I prefer to have my network available ASAP, as do many others.

Internet Connection Sharing

I don’t know what the guys at the Ubuntu documentation are smoking, but all you need to do is right click on the NetworkManager icon, Edit Connections, pick the one you want to share through, and pick Shared to other computers in the Method drop-down. They really scared me with that when I wanted to share my laptop’s wireless connection with my desktop to download some updates (not that they got the Linksys wireless USB to work properly).

