Archive for Operating Systems
Back in November I decided to try Aard 2 on my laptop. I followed the instructions and it worked. Then I created a launcher with the following command and suddenly it did not.
java -Dslobber.browse=true -jar ~/programs/aard2/aard2-web-0.7.jar ~/programs/aard2/slobs/*.slob
A different strategy, passing a command to Bash, did the trick.
bash -c "java -Dslobber.browse=true -jar ~/programs/aard2/aard2-web-0.7.jar ~/programs/aard2/slobs/*.slob"
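Why the difference? A launcher executes its command directly rather than through a shell, so the ~ and the *.slob glob are never expanded and java receives them literally. The difference is easy to see for yourself (a quick sketch, nothing Aard-specific):

```shell
# A shell expands ~ and * before the program ever runs; without a shell,
# the program receives the characters verbatim.
env echo '~/programs/aard2/slobs/*.slob'   # literal string, no expansion
bash -c 'echo ~'                           # the shell substitutes your home directory
```

Hence bash -c: it puts a shell back between the launcher and java.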
Enjoy your fully functional launcher! 😉
Over the past few years I acquired a bad habit of using search engines for basic calculations and conversions. I’m not talking about the stuff you should just do in your head (it’s not quite that bad), but about the fact that most Linux distros don’t seem to ship with a calculator by default. So I finally got around to testing some programs, and Qalculate! does all I want. You can install it on Debian using
sudo apt install qalculate-gtk. There’s a list of features on the website. Enjoy a few screenshots.
I hope you’ll like it too!
On the forum I administer, I am forced to run a tight attachment policy. Disk space doesn’t grow on trees. Occasionally this leads to questions about the small attachment size limit of 50 KiB. This guide is intended to clarify that this is not nearly as tiny as you might think. Note that although I’ll mention commands without much explanation for the sake of brevity, you’re always recommended to further explore the possibilities offered by those commands with the
--help flag as well as by running man.
First you need to ask yourself what kind of file type is appropriate, if you have the choice. In screenshots, the most common type of attachment on my forum, you’ll often encounter large areas of uniform background colors. PNG is therefore almost invariably the right choice. Crop out everything but what’s relevant. JPEG is appropriate for more dynamic pictures such as photographs. If you want to do a lot with photographs, you might want to consider an external hosting service. My wife likes SmugMug. Still, for thumbnails you might be able to do a fair bit more within a few hundred KiB than you might think. Finally, the vector graphics in SVG result in pictures that always look sharp. You’ll typically have drawn these in a program like Inkscape or Adobe Illustrator.
Table of Contents
- 1. Optimizing JPEG
- 2. Optimizing PNG
- 3. Optimizing SVG
- Addendum A: Scanned Documents
- Addendum B: Video
1. Optimizing JPEG
Often you’ll want to crop your file. Do not edit your JPEG and resave it, because every resave reduces quality! You can crop losslessly with cropgui. On Windows you can use IrfanView.
If you don’t want to crop, and also potentially for some post-cropgui optimization, use
jpegtran -copy none -progressive -optimize file.jpg > file-opt.jpg. Note that this will get rid of all metadata, which may be undesirable. If so, use
jpegtran -copy all -progressive -optimize file.jpg > file-opt.jpg.
Of course if you want to scale down your JPEG there’s no point in mucking about with lossless cropping first. After scaling down, check how low your quality can go (also see a little helper script I wrote). In any case, you should avoid introducing any unnecessary compression steps with associated quality loss. Here are some results:
- The original 11.jpg at 2.19 MB.
- Losslessly cropped 11-crop.jpg at 1.11 MB.
- 11-crop-opt.jpg after -copy all -progressive -optimize at 1.04 MB.
-copy none would’ve saved an extra whopping 40-some KiB, which on this kind of file size has little benefit, and besides, I quite like the metadata. For thumbnail-sized files the balance is likely to be different. For example, the 52.2 KiB SmugMug auto-generated thumbnail below can be insignificantly reduced to 51.1 KiB with
-copy all, but to 48.2 KiB with
-copy none. I think an 8% reduction is not too shabby, plus it brings the file size down to under the arbitrary 50 KiB limit on my forum.
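For the record, the arithmetic behind that figure:

```shell
# (52.2 - 48.2) / 52.2 * 100 ≈ 7.7, which I'm rounding up to 8%.
awk 'BEGIN { printf "%.1f%%\n", (52.2 - 48.2) / 52.2 * 100 }'   # 7.7%
```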
2. Optimizing PNG
As I wrote in the introduction, for screenshots PNG is typically the right choice. If you want to use lossless PNG, use
optipng -o7. In my experience it’s ever so slightly smaller than other solutions like
pngcrush. But as long as you use a PNG optimizer it shouldn’t much matter which one you fancy. Also see this comparison.
If you don’t care about potentially losing some color accuracy, use
pngquant instead. To top it off, if you really want to squeeze out your PNG, you can pass quality settings with
--quality min-max, meaning you can pass
--quality 30-50 or just
--quality 10. Here are some quick results for the screenshot in the SVG section below, but be sure to check out the pngquant website for some impressive examples.
$ du -h --apparent-size inkscape-plain-svg.png
27K inkscape-plain-svg.png
$ du -h --apparent-size inkscape-plain-svg-fs8\ default.png
7.6K inkscape-plain-svg-fs8 default.png
$ du -h --apparent-size inkscape-plain-svg-fs8\ quality\ 10.png
4.3K inkscape-plain-svg-fs8 quality 10.png
In this case there is no visual difference between the original PNG and the default pngquant settings. The quality 10 result is worse, but so slightly that you have to look closely to notice, so I didn’t bother to include a sample.
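As an aside, the --apparent-size flag in the listing above is deliberate: plain du reports allocated blocks, which round up to the filesystem block size, whereas the apparent size is the actual byte count, i.e. the number an attachment limit cares about. A tiny demonstration:

```shell
# A one-byte file still occupies a whole filesystem block on disk.
printf 'x' > tiny.txt
du -h tiny.txt                   # allocated size, typically 4.0K
du -h --apparent-size tiny.txt   # 1, i.e. one byte
rm tiny.txt
```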
3. Optimizing SVG
For using SVG on the web, I imagine I don’t have to tell you that in Inkscape, you should save your file as Plain SVG.
What you may not know is that just like there are lossy PNGs, you can also create what amounts to lossy SVGs. There are some command-line tools to optimize SVGs, including (partially thanks to this SO answer):
- Scour is probably the best command line tool for some quick optimization. You can just use the defaults like
scour < in.svg > out.svg or
scour -i in.svg -o out.svg. But I recommend you go further.
- SVGO (SVG Optimizer)
- SVG-optimiser (by Peter Collingridge)
- SVG-editor (by Peter Collingridge)
My personal preference for squeezing out every last byte goes toward the web-based version of the SVG-editor by Peter Collingridge. By running it in a browser with inferior SVG support such as Firefox, you’ll be sure that your optimized SVG still works properly afterward. The command line tools can only safely be used for basic optimizations, whereas the effects of going lossy (such as lowering precision) can only be fully appreciated graphically.
Addendum A: Scanned Documents
Scanned documents are a different item altogether. The best format for private use is DjVu, but for public sharing PDF is probably preferable. To achieve the best results, you should scan your documents in TIFF or PNG, followed by processing with unpaper or ScanTailor. If you’ve already got a PDF you’d like to improve, you can use pdfsandwich or my own readablepdf.
Addendum B: Video
I’m not aware of any lossless optimization for video compression such as offered by jpegtran, but you can often losslessly cut video. In the general purpose editor Avidemux, simply make sure both video and audio are set to copy. There is also a dedicated cross-platform app for lossless trimming of videos called, unsurprisingly, LosslessCut. If you do want to introduce loss for a smaller file size you can use the very same Avidemux with a different setting, ffmpeg, mpv, VLC, and so forth. You can get reasonable quality that’ll play many places with something like:
ffmpeg -i input-file.ext -c:v libx264 -crf 19 -preset slow -c:a libfaac -b:a 192k -ac 2 output-file.mp4
For the open WebM format, you can use something along these lines:
ffmpeg -i input.mp4 -c:v libvpx -b:v 1M -c:a libvorbis output.webm
More examples on the ffmpeg wiki. Note that in many cases you should just copy the audio using
-acodec copy, but of course that’s not always an option. Extra compression artifacts in audio detract significantly more from the experience than low-quality video.
After acquiring a new laptop in October ’16, I was surprised to find how fast the old Intel Core 2 laptop still felt. To dig a little deeper, I decided to shoddily compare the performance of the first (SATA) SSD I ever bought back in 2010 to the (M.2) SSD in my new 2016 ASUS UX305C. The old laptop did not feel faster than the new one as such, but between an ’09 AMD Phenom II and an Intel i7 there was a really noticeable speed increase on the same SSD. Yet this new laptop actually seemed to be slower at installing programs.
Obviously in 2016 and beyond I’d strongly consider upgrading to a larger model, but that it could also be worthwhile to upgrade for performance reasons saddens me. The write rate is particularly bad, and this can be felt in software installation taking longer than on the old laptop. NB The old laptop originally came with a significantly slower HDD, so I suppose I shouldn’t be surprised that my old desktop SSD performed better… Still, I was expecting more. As long as it’s better than your average HDD I suppose I can’t complain.
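For what it’s worth, you don’t need a fancy benchmark suite for a rough sequential write figure; plain dd gets you in the ballpark. The file name below is arbitrary, and conv=fdatasync makes dd flush to disk before reporting, so the page cache doesn’t flatter the result:

```shell
# Write 64 MiB of zeros; dd reports the effective rate on stderr.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync
rm ddtest.bin
```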
After some update or other, Broken Age refused to start.
$ ./start.sh
Running Broken Age
libGL error: unable to load driver: radeonsi_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: radeonsi
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 155 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 91
Current serial number in output stream: 92
Oh well, let’s give it a little hand, shall we?
LD_PRELOAD='/usr/lib/i386-linux-gnu/libstdc++.so.6 /lib/i386-linux-gnu/libgpg-error.so.0' ./start.sh
This preloads the system libraries before any others, overriding the incompatible copies shipped with the program in question. The same trick also works for Steam. If gaming is your goal, you should probably stick to whatever version of Ubuntu is supported best. I’m just pleased that I can play the occasional game like Oxenfree (no preloading required, mind you) or Broken Age on my workhorse without having to install any stability-reducing binary blobs.
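If you’re wondering how to tell which bundled library is the troublemaker in cases like this, ldd lists what a binary actually resolves; point it at the game’s executable (I’m using ls below purely as a stand-in):

```shell
# Shows each shared library the binary links against and where it
# resolves from; "not found" lines are the smoking gun.
ldd "$(command -v ls)" | grep libc
```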
I run Debian Stretch (testing) as my daily driver, and at some point I stopped being able to start programs like Synaptic, GParted, etc. without manually typing
gksu(do). The solution is as simple as it is seemingly unnecessary and stupid:
sudo apt install policykit-1-gnome
The problem is apparent upon reading the description:
This implementation was originally designed for GNOME 2, but most
GNOME-based desktop environments, including GNOME 3, GNOME Flashback,
MATE and Cinnamon, have their own built-in PolicyKit agents and no
longer use this one. The remaining users of this implementation
are XFCE and Unity.
Reported as Debian bug #843224. My first?
D’oh, I wrote this on November 5, 2016. I’ll publish it anyway in case it helps someone searching for a solution.
UNetbootin has been broken for many, many years, but just today (a few years after the fact) I discovered that the previous GUI option to show all drives was re-added as a command line option. So if the program doesn’t want to detect your drive, just pass that option on the command line.
And voilà, it’s working. I have no idea why it should have to be so difficult. The program categorically refuses to detect any of my USB flash drives or hard drives, so since the removal of show all drives it’s been utterly useless.
PS This is basically only for Windows ISOs. For everything else you can just use, e.g.,
dd. Much easier.
Sometimes you jot down a few quick notes for yourself without bothering to turn them into a blogpost that might be useful to others. This is one of those notes. First, I’ll introduce my computer monitor workflow as it’s been since time immemorial, also known as ’95 or ’96. Just like how I turn off lights I don’t use, I’ve always turned off my monitor when I wasn’t using it. This was never a problem until early 2015, when I had to use DisplayPort for the first time. If you want an UltraHD monitor, which you do if you care even the tiniest bit about sharpness and clarity, you have to use DisplayPort.
But DisplayPort isn’t nice. Turning off your monitor is treated the same as disconnecting it. In Windows this means everything resets itself to some absurdly low resolution, whereas in Linux the consequences can be even worse (like having to SSH in from another computer to run an xrandr command to reactivate the monitor). This means you either face a colossal waste of energy or continuous annoyance at the fact that your monitor has turned itself off yet again. In my view monitor timeouts should be at least twenty minutes, just as a failsafe in the extremely unlikely event that you forgot to turn it off. Luckily I found two reasonable workarounds within the first week or two of having acquired my UHD monitor.
xset dpms force off
This has the same effect as your monitor timeout, only at your volition. I tend to find the blinking light on the monitor somewhat annoying, but this nevertheless remains your best bet to quickly turn the screen off as part of your regular workflow.
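One caveat if you bind this to a key or alias: the key-release event can wake the display right back up, so add a short delay first. A sketch (the function name is my own invention):

```shell
# Hypothetical wrapper: the pause lets the key-release event pass
# before DPMS turns the display off.
monoff() {
    sleep 1
    xset dpms force off
}
```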
The second method consists of actually turning the monitor off. Besides getting rid of the blinking light, I figure it saves just a tiny bit more electricity to boot, which is useful if you want to keep your computer active but not your monitor. For this method you have to switch to a TTY (Ctrl + Alt + F1-6) before turning your monitor off. Then when you turn your monitor back on, X won’t know it’s been missing. Switch back with Ctrl + Alt + F7.
I’m still hopeful that there might simply be an xorg.conf setting I’ve overlooked, but in any case these workarounds serve their purpose. Note that
xset dpms force off is also tremendously useful on laptops that don’t have a function key for turning off the screen. Standby often just isn’t what you want.
You need to install
hplip. It looks like something’s still off about the colors compared to Ubuntu and Windows, and I can’t figure out what the difference is. Alas. :/ Also, don’t buy HP.