Save tarball space with "make dist-xz"

I read about Ubuntu considering it for 13.04 and I think compressing with XZ really makes a difference. When creating a tarball for Liferea 1.9.6 the following compression ratios can be achieved:

Compression       Size     Extract Tarball With
uncompressed      8.0MB
make dist         1.88MB   tar zxf ...
make dist-bzip2   1.35MB   tar jxf ...
make dist-lzma    1.16MB   tar --lzma -xf ...
make dist-xz      1.14MB   tar Jxf ...

Both XZ and LZMA are supported by automake starting with version 1.12.
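If you want the XZ tarball to be produced by default, you can enable the option in configure.ac (a minimal sketch, assuming your project already uses AM_INIT_AUTOMAKE):

# Require automake >= 1.12 and create a .tar.xz tarball
# in addition to the default .tar.gz on "make dist"
AM_INIT_AUTOMAKE([1.12 dist-xz])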

Also check here for an in-depth speed and efficiency comparison of the Linux compression zoo.

Filtering dmesg Output

Many administrators just run "dmesg" to check a system for problems and do not bother with its options. But it is actually worth knowing about the filtering and output options of the most recent versions (note: older distros, e.g. CentOS 5, might not ship these options yet!).

You might always want to use "-T" to show human-readable timestamps:

$ dmesg -T
[...]
[Wed Oct 10 20:31:22 2012] Buffer I/O error on device sr0, logical block 0
[Wed Oct 10 20:31:22 2012] sr 1:0:0:0: [sr0]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[...]

Additionally the severity and source of the messages are interesting (option -x):

$ dmesg -xT
[...]
kern  :err   : [Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
kern  :info  : [Wed Oct 10 20:31:21 2012] sr 1:0:0:0: [sr0]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[...]

Now we see that only one of those example message lines was an actual error. But we can even filter for errors or worse, ignoring all the boot messages (option -l):

$ dmesg -T -l err,crit,alert,emerg
[...]
[Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
[...]

In the same way it is possible to filter by facility (the first column in the -x output). For example this could return:

$ dmesg -T -f daemon
[...]
[Wed Oct 10 19:57:50 2012] udevd[106]: starting version 175
[Wed Oct 10 19:58:08 2012] udevd[383]: starting version 175
[...]

In any case it might be worth remembering:

  • -xT for a quick overview with readable timestamps
  • -T -l err,crit,alert,emerg to just check for errors

I recently created a simple dmesg Nagios plugin to monitor for important kernel messages. You can find it here.

How to Test for Colors in Shell Scripts

When watching thousands of log lines from some long-running script you might want to have color coding to highlight new sections of the process or to make errors stand out when scrolling through the terminal buffer.

Using colors in a script with tput or escape sequences is quite easy, but you also want to know when not to use colors, to avoid messing up terminals that do not support them or polluting log files.

How to Check for Terminal Support

There are at least the following two ways to check for color support. The first variant is using infocmp:

$ TERM=linux infocmp -L1 | grep color
[...]
	max_colors#8,
[...]

or using tput:

$ TERM=vt100 tput colors
-1

$ TERM=linux tput colors
8

tput is probably the best choice.

Checking the Terminal

So a sane check for color support, along with a check for output redirection, could look like this:

#!/bin/bash

use_colors=1

# Check whether stdout is redirected
if [ ! -t 1 ]; then
    use_colors=0
fi

# tput prints the number of supported colors (or -1 / nothing on dumb terminals)
max_colors=$(tput colors 2>/dev/null)
if [ "${max_colors:--1}" -lt 8 ]; then
    use_colors=0
fi

[...]

This should ensure that no ANSI sequences end up in your logs while still printing colors on every capable terminal.
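With that flag in place you can define your colors once via tput and use them everywhere, for example:

# Define colors only if the terminal supports them
if [ $use_colors -eq 1 ]; then
    red=$(tput setaf 1)
    green=$(tput setaf 2)
    reset=$(tput sgr0)
else
    red="" green="" reset=""
fi

echo "${red}error:${reset} something went wrong"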

Use More Colors!

And finally, if normal colors are not enough for you: use the secret 256 color mode of your terminal! I'm not sure how to test for this, but it seems to be related to the "max_pairs" terminal capability listed by infocmp.
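If you just want to see it in action you can emit the escape sequences directly (this assumes an xterm-compatible terminal; no capability check is done here):

# Print a string in color 208 of the 256 color palette
printf '\033[38;5;208mhello\033[0m\n'

# Render the whole palette as background colors
for i in $(seq 0 255); do
    printf '\033[48;5;%dm %3d ' "$i" "$i"
done
printf '\033[0m\n'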

Liferea 1.9.6 released

This release fixes mass downloading of enclosures, introduces support for downloading enclosures with the Steadyflow download manager, removes the curl/wget support, improves the spacing of browser tab buttons and prevents DnD problems with Google Reader.

Download the recent Liferea source from: liferea.sf.net

Overview of Automated Linux Package Vulnerability Scanning

I got some really helpful comments on my recent post Scan Linux for Vulnerable Packages. The suggestions on how to do it on Debian and Redhat made me wonder: which distributions provide tools and what are they capable of? So the goal is to check whether each distribution has a way to automatically check for vulnerable packages that need upgrades.

Below you find an overview of the tools I've found and the distributions that might not have a good solution yet.

Distribution          Scanner                Rating   Description
Debian                debsecan               superb   Easy to use. Maintained by the Debian testing team. Lists packages, CVE numbers and details.
Ubuntu                debsecan               useless  They just packaged the Debian scanner without providing a database for it! And since 2008 there is a bug about it being 100% useless.
CentOS/Fedora/Redhat  "yum list-security"    good     Provides package name and CVE number. Note: on older systems there is only "yum list updates".
OpenSuSE              "zypper list-patches"  ok       Provides package names with security-relevant updates. You need to filter the list yourself or use the "--cve" switch to limit to CVEs only.
SLES                  "rug lu"               ok       Provides package names with security-relevant updates. Similar to zypper you need to do the filtering yourself.
Gentoo                glsa-check             bad      There is a dedicated scanner, but no documentation.
FreeBSD               Portaudit              superb   No Linux? Still a nice solution... Lists vulnerable ports and vulnerability details.
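To give an idea of the usage, these are the typical invocations (a sketch only; check the respective manpages, as options vary between releases):

# Debian: list known vulnerabilities of installed packages
$ debsecan

# CentOS/Fedora/Redhat: list pending security updates
$ yum list-security

# OpenSuSE: list pending patches for CVEs only
$ zypper list-patches --cve

# Gentoo: list GLSAs affecting the system
$ glsa-check -l affected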

I know I didn't cover all Linux distributions and I rely on your comments for details I've missed.

Ubuntu doesn't look good here, but maybe there will be some solution one day :-)

Scan Linux for Vulnerable Packages

How do you know whether your Linux server (which has no desktop update notifier or unattended security updates running) needs to be updated? Of course an

apt-get update && apt-get --dry-run upgrade

might give an indication. But which of the package upgrades address security risks and which are only simple bugfixes you do not care about?

Check using APT

One useful possibility is apticron, which will tell you which packages should be upgraded and why. It presents you the package changelog so you can decide whether you want to upgrade a package or not. Similar, but with less detail, is cron-apt, which also informs you of new package updates.
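Setting up apticron is mostly about telling it where to send its mails. A minimal sketch of /etc/apticron/apticron.conf (the address is of course an example):

# /etc/apticron/apticron.conf
# Address(es) notified about pending package upgrades
EMAIL="root@example.com"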

Analyze Security Advisories

Now with all those CERT newsletters, security mailing lists and even security news feeds out there: why can't we check the other way around? Why not find out:

  1. Which security advisories do affect my system?
  2. Which ones I have already complied with?
  3. And which vulnerabilities are still there?

My mad idea was to take those security news feeds (as a start I tried the ones from Ubuntu and CentOS), parse out the package versions and compare them to the installed packages. The result was a script producing the following output:

[Screenshot of lpvs-scan.pl output]

In the output you see lines starting with "CEBA-2012-xxxx", which is the CentOS security advisory naming scheme (Ubuntu uses USN-xxxx-x). Yellow means the security advisory doesn't apply because the relevant packages are not installed. Green means the most recent package version is installed and the advisory shouldn't affect the system anymore. Red, of course, means that the machine is vulnerable.
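At its core such a check boils down to comparing the installed package version with the fixed version from the advisory. On Debian/Ubuntu this can be sketched with dpkg (package name and version strings are made-up examples):

#!/bin/bash

# Version of the installed package (empty if not installed)
installed=$(dpkg-query -W -f='${Version}' openssl 2>/dev/null)

# Fixed version as parsed from the advisory feed (example value)
fixed="1.0.1-4ubuntu5.5"

# dpkg knows how to compare Debian version strings correctly
if [ -z "$installed" ]; then
    echo "not installed - advisory does not apply"
elif dpkg --compare-versions "$installed" lt "$fixed"; then
    echo "VULNERABLE: openssl $installed < $fixed"
else
    echo "OK: openssl $installed"
fi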

Does it Work Reliably?

The script producing this output can be found here. I'm not yet satisfied with how it works and I'm not sure whether it can be maintained at all, given the brittle nature of the arbitrarily formatted news feeds the distributions provide. But I like how it gives a clear indication of current advisories and their effect on the system.

Maybe persuading the Linux distributions to use a common feed format with easy-to-parse metadata would be a good idea...

How do you check your systems? What do you think of a package scanner using XML security advisory feeds?

visudo: #includedir sudoers.d

WTF. Today I fell for this sudo madness and uncommented this "comment" in /etc/sudoers:

#includedir /etc/sudoers.d

which gives a

visudo: >>> /etc/sudoers: syntax error near line 28 <<<

Let's check the "sudoers" manpage again: it is full of EBNF notation! But there is nothing in the EBNF about comments sometimes being directives. At least under "Other special characters and reserved words" one finds:

The pound sign ('#') is used to indicate a comment (unless it is part
of a #include directive or unless it occurs in the context of a user
name and is followed by one or more digits, in which case it is treated
as a uid).  Both the comment character and any text after it, up to the
end of the line, are ignored.

So "#includedir" is not a comment at all but a directive, and removing the pound sign is exactly what breaks it. Couldn't this be done in a better way?

How-to Write a Chef Recipe for Editing Config Files

Most Chef recipes are about installing new software including all config files. Even when they are configuration recipes they usually overwrite the whole file and provide a completely recreated configuration. If you have used cfengine or puppet with augtool before, you will miss a way to edit existing files.

In cfengine2...

You could write

editfiles:
{ home/.bashrc
   AppendIfNoSuchLine "alias rm='rm -i'"
}

While in puppet...

You'd have:

augeas { "sshd_config":
  context => "/files/etc/ssh/sshd_config",
  changes => [
    "set PermitRootLogin no",
  ],
}

Now how to do it in Chef?

Maybe I missed the correct way to do it so far (please comment if this is the case!), but there seems to be no way to use, for example, augtool with Chef, and there is no built-in cfengine-like editing. The only way I've seen so far is to use Ruby as a scripting language to change files through the Ruby runtime, or to use the Script resource, which allows running other interpreters like bash, csh, perl, python or ruby.

To use it you define a block named after the interpreter you need and add a "code" attribute with a "here doc" operator (e.g. <<-EOT) containing the commands. Additionally you can specify a working directory and a user for the script to be executed with. Example:

bash "some_commands" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
end

While it is not a one-liner as in cfengine, it is very flexible. The Script resource is widely used in the community cookbooks to perform ad-hoc source compilation and installation, but we can also use it for standard file editing.

Finally, to do conditional editing, use not_if/only_if clauses at the end of the Script resource block, as shown below.
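For example, to make the .bashrc edit from above idempotent, one could guard it like this (a sketch; the grep pattern is deliberately simplistic):

bash "append_rm_alias" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
    # Skip the edit if the alias line is already present
    not_if "grep -q \"alias rm='rm -i'\" /root/.bashrc"
end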

Use a graphical Linux Editor for VCS Commits

1. Using gedit

If your main editor is the graphical GNOME editor gedit you can also use it when doing version control system commits, crontab edits, visudo and other things on the command line. All you need to do is set it in the $EDITOR environment variable.

The only thing you need to be careful about is whether the editor detaches from the terminal right after starting. This must not happen, as the calling command (e.g. "git commit -a") would never know when the editing is finished. So for gedit you have to add the "-w" switch to keep it waiting:

export EDITOR="gedit -w"
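If you only need this for git, you can alternatively set it in the git configuration instead of the environment:

git config --global core.editor "gedit -w"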

2. Using Emacs

For XEmacs simply set

export EDITOR=xemacs

With Emacs itself you also have the possibility to use server mode by running emacsclient (see "Starting emacs automatically" in the EmacsWiki). To do so set

export ALTERNATE_EDITOR=emacs EDITOR=emacsclient

3. Using Other Editors

If you use the good old Nirvana Editor nedit you can simply

export EDITOR=nedit

the same goes for the KDE editor kate:

export EDITOR=kate

and if you want it to really hurt, try something like this:

export EDITOR=eclipse
export VISUAL=oowriter

4. $EDITOR and sudo

When using sudo you need to pass $EDITOR along to the root environment. This can be done using "sudo -E", e.g.

sudo -E visudo

Whether passing the environment to root is a good idea might be a good question though...

Have fun!

How-to: Migrate Linux applications to XDG directories

If you have been maintaining a Linux Glib-based or GTK application for some time you might want to migrate it to the XDG way of laying out user data. This is something I recently had to do for Liferea (around since 2003). Also when creating a new application you might ask yourself where to put the user data. This post tries to describe how to access the correct paths using Glib.

1. Preparation: Categorize your data

Determine what types of data you have. The specification knows three major directories:

  1. $XDG_DATA_HOME: usually ~/.local/share
  2. $XDG_CONFIG_HOME: usually ~/.config
  3. $XDG_CACHE_HOME: usually ~/.cache

In each of the directories your application should create a subfolder with the unique name of the application and place relevant files there. While volatile cache files go into ~/.cache, persistent important data should go to ~/.local/share and all configuration to ~/.config.
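For a hypothetical application named "coolApp" this could result in a layout like the following (the file names are just examples):

~/.config/coolApp/preferences.xml
~/.local/share/coolApp/feeds.db
~/.cache/coolApp/favicons/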

2. Migrate the code

The simple part is rewriting the old code, which built directory paths in some arbitrary way, to use XDG-style directory paths instead. As the specification is non-trivial when it comes to finding the directory base paths (via multiple paths in $XDG_DATA_DIRS and $XDG_CONFIG_DIRS) it is a good idea to rely on a library for this.

2.1 Using Glib

When developing for GTK, or maybe only using Glib, you already get support, since GTK uses Glib and Glib 2.6 introduced support for the XDG base directory specification. So with Glib use the following methods to find the target directories:

$XDG_DATA_HOME     g_get_user_data_dir()
$XDG_CONFIG_HOME   g_get_user_config_dir()
$XDG_CACHE_HOME    g_get_user_cache_dir()

Assuming your application is named "coolApp" and you want to create a cache file named "render.dat", you could use the following C snippet:

g_build_filename (g_get_user_cache_dir (), "coolApp", "render.dat", NULL);

to produce a path. Most likely you'll get something like "/home/joe/.cache/coolApp/render.dat".
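Note that the application subfolder is not created automatically. With Glib you can ensure it exists before writing to it, for example (a minimal sketch, error handling kept simple):

/* Build ~/.cache/coolApp and create it (with parents) using mode 0700 */
gchar *dir = g_build_filename (g_get_user_cache_dir (), "coolApp", NULL);

if (g_mkdir_with_parents (dir, 0700) != 0)
    g_warning ("Failed to create cache directory %s", dir);

g_free (dir);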

2.2 Using wxWidgets

When programming for wxWidgets you need to use the wxStandardPaths class. The methods are:

$XDG_DATA_HOME     wxStandardPaths::GetDataDir()
$XDG_CONFIG_HOME   wxStandardPaths::GetConfigDir()
$XDG_CACHE_HOME    wxStandardPaths::GetLocalDataDir()

2.3 With KDE

Since KDE 3.2 it also supports the XDG base directory specification. But honestly: googling or trying to browse the KDE API I couldn't find any pointers on how to do it. If you know how, please leave a comment!
