chef-server "Failed to authenticate."

If your Chef web UI suddenly stops working and you see an exception like the following in both server.log and server-webui.log:

merb : chef-server (api) : worker (port 4000) ~ Failed to authenticate. Ensure that your client key is valid. - (Merb::ControllerExceptions::Unauthorized)
/usr/share/chef-server-api/app/controllers/application.rb:56:in `authenticate_every'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:352:in `send'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:352:in `_call_filters'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:344:in `each'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:344:in `_call_filters'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:286:in `_dispatch'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:284:in `catch'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:284:in `_dispatch'

Then try stopping all Chef processes, removing


and starting everything again. This will regenerate the keys.

The downside is that you have to run

knife configure -i

at all your knife setup locations again.

Chef: Which nodes have role X / recipe Y

Why is it so hard to find out which nodes have a given role or recipe in chef?

The only way seems to be looping over all nodes yourself:

for node in $(knife node list); do
   if knife node show -r $node | grep 'role\[base\]' >/dev/null; then
     echo $node
   fi
done

Did I miss some other obvious way? I'd like to have some "knife run_list filter ..." command!

Added Google Chrome to Browser Preferences


After reading this recent review of Liferea over at TechNewsWorld I found that we somehow missed adding Google Chrome to the browser preferences!

Thanks to review author Jack M. Germain this is now added in the 1.9 git branch and will be released with 1.9.7.

As a workaround you can still use Chrome: either select "Default Browser" (if your system's default browser is Chrome) or specify the following "Manual Command" in the browser settings tab:

google-chrome "%s"

Have fun!

PHP preg_match() Examples

This post gives some simple examples for using regular expressions with preg_match() in PHP scripts.

1. Syntax of preg_match

While the full syntax is

int preg_match ( string $pattern , string $subject 
     [, array &$matches [, int $flags = 0 [, int $offset = 0 ]]] )

you will probably use preg_match() mostly with two parameters for simple match checks, or with three to extract matches.

You probably won't use the 4th and 5th parameters, which can be used to return match offsets and to limit matching to a given offset in the string.

2. Simple String Checks with preg_match()

Here are some syntax examples that check strings for certain content:

Basic matching

preg_match("/PHP/", "PHP")       # Match for an unbound literal
preg_match("/^PHP/", "PHP")      # Match literal at start of string
preg_match("/PHP$/", "PHP")      # Match literal at end of string
preg_match("/^PHP$/", "PHP")     # Match for exact string content
preg_match("/^$/", "")           # Match empty string

Using different regex delimiters

preg_match("/PHP/", "PHP")                # / as commonly used delimiter
preg_match("@PHP@", "PHP")                # @ as delimiter
preg_match("!PHP!", "PHP")                # ! as delimiter

Changing the delimiter becomes useful in some cases

preg_match("/http:\/\//", "http://");     # match http:// protocol prefix with / delimiter
preg_match("#http://#",   "http://")      # match http:// protocol prefix with # delimiter

Case sensitivity

preg_match("/PHP/", "PHP")                # case-sensitive string matching
preg_match("/php/i", "PHP")               # case-insensitive string matching

Matching with wildcards

preg_match("/P.P/",     "PHP")            # match a single character
preg_match("/P.*P/",    "PHP")            # match multiple characters
preg_match("/P[A-Z]P/", "PHP")            # match from character range A-Z
preg_match("/[PH]*/",   "PHP")            # match from character set P and H
preg_match("/P\wP/",    "PHP")            # match one word character
preg_match("/\bPHP\b/", "regex in PHP")   # match the word "PHP", but not as part of a larger word

Using quantifiers

preg_match("/[PH]{3}/",    "PHP")         # match exactly 3 characters from set [PH]
preg_match("/[PH]{3,3}/",  "PHP")         # match exactly 3 characters from set [PH]
preg_match("/[PH]{0,3}/",  "PHP")         # match at most 3 characters from set [PH]
preg_match("/[PH]{3,}/",   "PHP")         # match at least 3 characters from set [PH]

Note: all of the examples above should work (please comment if you find an error)!

3. Extracting Data with preg_match()

To extract data using regular expressions we have to use the capture/grouping syntax.

Some basic examples

# Extract everything after the literal "START"
preg_match("/START(.*)/", $string, $results)   

# Extract the numbers from a date string
preg_match("/(\d{4})-(\d{2})-(\d{2})/", "2012-10-20", $results)

# Nesting of capture groups, extract full name, and both parts...
preg_match("/name is ((\w+), (\w+))/", "name is Doe, John", $results)

So you basically just enclose the sub-patterns you want to extract in parentheses and fetch the results by passing a third parameter, which preg_match() will fill as an array.

Named Capture Groups

# Extract the numbers from a date string using named groups
preg_match("/(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})/", "2012-10-20", $results)

Now the $results array will, in addition to the positional matches 1, 2 and 3, contain the keys "year", "month" and "day". The advantage: you never have to think about the capture positions again when you modify the expression!

4. Check for preg_match() Processing Errors!

While it might often seem unimportant, be aware that applying a regular expression can fail due to PCRE constraints. This usually happens when matching overly long strings or strings with faulty encoding.

The only way to notice that preg_match() was not able to check the string is by calling

preg_last_error()

Only if it returns PREG_NO_ERROR did you get a safe result! Consider this when using preg_match() for security purposes.

How to Vacuum SQLite

This post is a summary on how to effectively VACUUM SQLite databases. Open source projects like Firefox and Liferea have been significantly hurt by not efficiently VACUUMing their SQLite databases. For Firefox this was caused by the Places database containing bookmarks and history; in the case of Liferea it was the feed cache database. Both projects suffered from fragmentation caused by frequent insertion and deletion without vacuuming the database. This of course caused much frustration with end users, and workarounds to vacuum manually.

In the end both projects started to automatically vacuum their SQLite databases on demand, based on a free list threshold, thereby solving the performance issues. Read on to learn how to perform a VACUUM and why not to use auto-vacuum in those cases!

1. Manual VACUUM

First for the basics: with SQLite 3 you simply vacuum by running:

sqlite3 my.db "VACUUM;"

Depending on the database size and the time of the last vacuum run it might take a while for sqlite3 to finish. Using this you can perform manual VACUUM runs (e.g. nightly) or on-demand runs (for example on application startup).

2. Using Auto-VACUUM

Note: SQLite Auto-VACUUM does not do the same as VACUUM! It only moves free pages to the end of the database thereby reducing the database size. By doing so it can significantly fragment the database while VACUUM ensures defragmentation. So Auto-VACUUM just keeps the database small!

You can enable/disable SQLite auto-vacuuming with the following pragmas:

PRAGMA auto_vacuum = NONE;
PRAGMA auto_vacuum = INCREMENTAL;
PRAGMA auto_vacuum = FULL;

So effectively you have two modes: full and incremental. In full mode free pages are removed from the database on each transaction. In incremental mode no pages are freed automatically; only the metadata needed to free them is kept. At any time you can call

PRAGMA incremental_vacuum(n);

to free up to n pages and shrink the database by that amount of pages.

To check the auto-vacuum setting in a sqlite database run

sqlite3 my.db "PRAGMA auto_vacuum;"

which should return a number from 0 to 2, meaning: 0=None, 1=Full, 2=Incremental.

3. On Demand VACUUM

Another possibility is to VACUUM on demand based on the fragmentation level of your SQLite database. Compared to periodic or auto-vacuum this is probably the best solution, as (depending on your application) it might only rarely be necessary. You could for example decide to perform an on-demand VACUUM on startup when the empty page ratio reaches a certain threshold, which you can determine by running

PRAGMA page_count;
PRAGMA freelist_count;

Both PRAGMA statements return a number of pages, which together give you a rough guess at the fragmentation ratio. As far as I know there is currently no exact measurement of table fragmentation, so we have to go with the free list ratio.
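
As a sketch of such an on-demand check (the should_vacuum helper name and the 10% threshold are my own choices; the page counts would come from the two PRAGMA statements above, e.g. via the sqlite3 CLI):

```shell
# Decide whether a VACUUM is worthwhile: succeeds (exit 0) when the
# free list exceeds the given percentage of all pages.
# Usage: should_vacuum PAGE_COUNT FREELIST_COUNT THRESHOLD_PERCENT
should_vacuum() {
    page_count=$1
    freelist_count=$2
    threshold=$3
    [ "$page_count" -gt 0 ] && \
        [ $((freelist_count * 100 / page_count)) -ge "$threshold" ]
}

# Hypothetical wiring against a real database file:
#   pc=$(sqlite3 my.db "PRAGMA page_count;")
#   fc=$(sqlite3 my.db "PRAGMA freelist_count;")
#   should_vacuum "$pc" "$fc" 10 && sqlite3 my.db "VACUUM;"
```

Guarding against a zero page count also keeps the check safe on a freshly created, still empty database.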

How-to Dump Keys from Memcache

You've already spent 50GB on the memcache cluster, but you still see many evictions and the cache hit ratio hasn't looked good for a few days. The developers swear they didn't change the caching recently; they checked the code twice and found no problem.

What now? How do you get some insight into the black box of memcached? One way would be to add logging to the application to see and count what is being read and written, and then guess at the cache efficiency from that. But to debug what's happening we really need to see how the cache keys are used by the application.

An Easier Way

Memcached itself provides a means to peek into its content. The memcache protocol provides commands to peek into the data, which is organized by slabs (categories of data of a given size range). There are some significant limitations though:

  1. You can only dump keys per slab class (keys with roughly the same content size)
  2. You can only dump one page per slab class (1MB of data)
  3. This is an unofficial feature that might be removed anytime.

The second limitation is probably the hardest one, because 1MB out of several gigabytes is almost nothing. Still, it can be useful to watch how a subset of your keys is used. But this might depend on your use case.

If you don't care about the technical details, just skip to the tools section to learn what tools allow you to easily dump everything. Alternatively, follow the guide below and try the commands via telnet against your memcached setup.

How it Works

First you need to know how memcached organizes its memory. If you start memcached with the option "-vv" you see the slab classes it creates. For example:

$ memcached -vv
slab class   1: chunk size        96 perslab   10922
slab class   2: chunk size       120 perslab    8738
slab class   3: chunk size       152 perslab    6898
slab class   4: chunk size       192 perslab    5461

In the configuration printed above memcached will fit 6898 pieces of data of between 121 and 152 bytes into a single slab of 1MB size (6898*152 = 1,048,496 bytes). All slabs are sized as 1MB per default. Use the following command to print all currently existing slabs:

stats slabs

If you've added a single key to an empty memcached 1.4.13 with

set mykey 0 60 1

you'll now see the following result for the "stats slabs" command:

stats slabs
STAT 1:chunk_size 96
STAT 1:chunks_per_page 10922
STAT 1:total_pages 1
STAT 1:total_chunks 10922
STAT 1:used_chunks 1
STAT 1:free_chunks 0
STAT 1:free_chunks_end 10921
STAT 1:mem_requested 71
STAT 1:get_hits 0
STAT 1:cmd_set 2
STAT 1:delete_hits 0
STAT 1:incr_hits 0
STAT 1:decr_hits 0
STAT 1:cas_hits 0
STAT 1:cas_badval 0
STAT 1:touch_hits 0
STAT active_slabs 1
STAT total_malloced 1048512

The example shows that we have only one active slab class, #1. Our key, being just one byte large, fits into this smallest possible chunk size. The slab statistics show that currently only one page of the slab class exists and only one chunk is used.

Most importantly, it shows a counter for each write operation (set, incr, decr, cas, touch) and one for gets. Using those you can estimate a hit ratio!
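
As a sketch, these counters can be pulled out of the "stats slabs" output with a little awk. The summarize_slabs helper name is mine, and the live data would come from something like `printf 'stats slabs\r\nquit\r\n' | nc localhost 11211` (assuming memcached listens there); the parser itself works on any saved output:

```shell
# Summarize get_hits and cmd_set per slab class from "stats slabs"
# output read on stdin. Lines look like "STAT 1:get_hits 5", so we
# split on both spaces and colons.
summarize_slabs() {
    awk -F'[ :]' '
        $1 == "STAT" && $3 == "get_hits" { hits[$2] = $4 }
        $1 == "STAT" && $3 == "cmd_set"  { sets[$2] = $4 }
        END {
            for (s in hits)
                printf "slab %s: %d gets, %d sets\n", s, hits[s], sets[s]
        }'
}
```

Per-slab statistics do not include a get_misses counter, so a true hit ratio still has to come from the global "stats" output; this only shows the read/write mix per content size.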

You can also fetch another set of details using "stats items", with interesting counters concerning evictions and out-of-memory events.

stats items
STAT items:1:number 1
STAT items:1:age 4
STAT items:1:evicted 0
STAT items:1:evicted_nonzero 0
STAT items:1:evicted_time 0
STAT items:1:outofmemory 0
STAT items:1:tailrepairs 0
STAT items:1:reclaimed 0
STAT items:1:expired_unfetched 0
STAT items:1:evicted_unfetched 0

What We Can Guess Already...

Given the statistics per slab class we can already guess a lot of things about the application's behaviour:

  1. How is the cache ratio for different content sizes?
    • How good is the caching of large HTML chunks?
  2. How much memory do we spend on different content sizes?
    • How much do we spend on simple numeric counters?
    • How much do we spend on our session data?
    • How much do we spend on large HTML chunks?
  3. How many large objects can we cache at all?

Of course to answer the questions you need to know about the cache objects of your application.

Now: How to Dump Keys?

Keys can be dumped per slab class using the "stats cachedump" command:

stats cachedump <slab class> <number of items to dump>

To dump our single key in class #1 run

stats cachedump 1 1000
ITEM mykey [1 b; 1350677968 s]

The "cachedump" command returns one item per line. The first number in the brackets gives the size in bytes, the second the timestamp of its creation. Given the key name you can now also dump its value using

get mykey
VALUE mykey 0 1

That's it: iterate over all the slab classes you want, extract the key names, and if needed dump their contents.
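
A minimal sketch of that iteration, assuming memcached answers on localhost:11211 and `nc` is available. The cachedump_keys helper (my name) only parses the "ITEM ..." lines, so it also works on saved telnet output:

```shell
# Extract just the key names from "stats cachedump" output read on stdin.
cachedump_keys() {
    awk '/^ITEM/ { print $2 }'
}

# Hypothetical live loop over slab classes 1..42:
#   for slab in $(seq 1 42); do
#       printf 'stats cachedump %s 1000\r\nquit\r\n' "$slab" \
#           | nc localhost 11211 | cachedump_keys
#   done
```

Remember the 1MB-per-slab-class limitation: the loop above still only sees one page per class.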

Dumping Tools

There are various dumping tools, sometimes just scripts, out there that help you with printing memcache keys:

  • PHP: simple script that prints key names.
  • Perl: simple script that prints keys and values.
  • Ruby: simple script that prints key names.
  • Perl: memdump tool in the CPAN module Memcached-libmemcached.
  • PHP: memcache.php, a memcache monitoring GUI that also allows dumping keys.
  • libmemcached: peep. Warning: peep freezes your memcached process! Be careful when using it in production. Still, with peep you can work around the 1MB limitation and really dump all keys.

Save tarball space with "make dist-xz"

I read about Ubuntu considering it for 13.04 and I think compressing with XZ really makes a difference. When creating a tarball for Liferea 1.9.6 the following compression ratios can be achieved:

Compression        Size     Extract Tarball With
uncompressed       8.0MB    -
make dist          1.88MB   tar zxf ...
make dist-bzip2    1.35MB   tar jxf ...
make dist-lzma     1.16MB   tar --lzma -xf ...
make dist-xz       1.14MB   tar Jxf ...

XZ as well as LZMA is supported by automake starting with version 1.12.

Also check here for an in-depth speed and efficiency comparison of the Linux compression zoo.

Filtering dmesg Output

Many administrators just run "dmesg" to check a system for problems and do not bother with its options. But it is actually worth knowing about the filtering and output options of the most recent versions (note: older distros, e.g. CentOS 5, might not ship these options yet!).

You might always want to use "-T" to show human readable timestamps:

$ dmesg -T
[Wed Oct 10 20:31:22 2012] Buffer I/O error on device sr0, logical block 0
[Wed Oct 10 20:31:22 2012] sr 1:0:0:0: [sr0]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

Additionally, the severity and source of the messages are interesting (option -x):

$ dmesg -xT
kern  :err   : [Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
kern  :info  : [Wed Oct 10 20:31:21 2012] sr 1:0:0:0: [sr0]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

Now we see that only one of those example message lines was an actual error. But we can even filter for errors or worse, ignoring all the boot messages (option -l):

$ dmesg -T -l err,crit,alert,emerg
[Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0

In the same way it is possible to filter by facility (the first column in the -x output). For example this could return:

$ dmesg -T -f daemon
[Wed Oct 10 19:57:50 2012] udevd[106]: starting version 175
[Wed Oct 10 19:58:08 2012] udevd[383]: starting version 175

In any case it might be worth remembering:

  • -xT for a quick overview with readable timestamps
  • -T -l err,crit,alert,emerg to just check for errors
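
When you only have captured `dmesg -x` output (e.g. saved to a log file), the -l severity filter can be approximated with grep. A small sketch; the dmesg_errors helper name is mine:

```shell
# Keep only err-or-worse lines from "dmesg -x" style output on stdin,
# i.e. lines like "kern  :err   : [timestamp] message".
dmesg_errors() {
    grep -E '^[a-z]+ *:(err|crit|alert|emerg) *:'
}

# Hypothetical usage on a saved log:
#   dmesg_errors < /var/log/dmesg-x-output.log
```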

I recently created a simple dmesg Nagios plugin to monitor for important kernel messages. You can find it here.

How to Test for Colors in Shell Scripts

When watching thousands of log lines from some long-running script you might want color coding to highlight new sections of the process, or to make errors stand out when scrolling through the terminal buffer.

Using colors in a script with tput or escape sequences is quite easy, but you also want to check when not to use colors, to avoid messing up terminals that do not support them or when logging to a file.

How to Check for Terminal Support

There are at least the following two ways to check for color support. The first variant uses infocmp

$ TERM=linux infocmp -L1 | grep color

or using tput

$ TERM=vt100 tput colors
-1

$ TERM=linux tput colors
8

tput is probably the best choice.

Checking the Terminal

So a sane check for color support, along with a check for output redirection, could look like this:

use_colors=1

# Check whether stdout is redirected (e.g. to a log file)
if [ ! -t 1 ]; then
    use_colors=0
fi

# Check whether the terminal supports at least 8 colors
max_colors=$(tput colors)
if [ "$max_colors" -lt 8 ]; then
    use_colors=0
fi
This should ensure that no ANSI sequences end up in your logs, while still printing colors on every capable terminal.

Use More Colors!

And finally if normal colors are not enough for you: use the secret 256 color mode of your terminal! I'm not sure how to test for this but it seems to be related to the "max_pairs" terminal capability listed by infocmp.
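
For reference, 256-color output works via extended SGR escape sequences on xterm-compatible terminals. This sketch (color256 is a made-up helper, and the numbers are xterm palette indexes) prints colored text without any capability check, so in a real script gate it behind a check like the one above:

```shell
# Print TEXT in one of the 256 xterm palette colors using a raw
# "38;5;N" SGR escape sequence (no capability check here!).
# Usage: color256 PALETTE_INDEX TEXT
color256() {
    printf '\033[38;5;%sm%s\033[0m\n' "$1" "$2"
}

# Example: 196 is bright red, 46 is bright green in the xterm palette
color256 196 "error"
color256 46  "done"
```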

Liferea 1.9.6 released

Fixes mass downloading of enclosures. Introduces support for downloading using the steadyflow download manager. Removes support for curl/wget. Improves spacing of browser tab buttons. Prevents DnD problems with Google Reader.

Download the recent Liferea source from:
