HowTo: Munin and rrdcached on Ubuntu 12.04

Let's assume you already have Munin installed and working and you want to reduce disk I/O and improve responsiveness by adding rrdcached. Here are the complete steps to integrate rrdcached:

Basic Installation

First install the stock package

apt-get install rrdcached

and integrate it with Munin:

  1. Enable the rrdcached socket line in /etc/munin/munin.conf
  2. Disable munin-html and munin-graph calls in /usr/bin/munin-cron
  3. Create /usr/bin/munin-graph with
    #!/bin/bash
    
    nice /usr/share/munin/munin-html "$@" || exit 1
    
    nice /usr/share/munin/munin-graph --cron "$@" || exit 1

    and make it executable

  4. Add a cron job (e.g. to /etc/cron.d/munin) to start munin-graph:
    10 * * * *      munin if [ -x /usr/bin/munin-graph ]; then /usr/bin/munin-graph; fi
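
The rrdcached socket line from step 1 looks like the following; a commented-out version may already exist in /etc/munin/munin.conf (the socket path here is assumed to match the rrdcached default used further below):

```
# /etc/munin/munin.conf
rrdcached_socket /var/run/rrdcached.sock
```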

The Critical Stuff

To get Munin to use rrdcached on Ubuntu 12.04, make sure to follow these vital steps:

  1. Add "-s <webserver group>" to $OPT in /etc/init.d/rrdcached (in front of the first -l switch)
  2. Change "-b /var/lib/rrdcached/db/" to "-b /var/lib/munin" (or wherever you keep your RRDs)


So on a default Debian/Ubuntu system with Apache, a patched /etc/init.d/rrdcached would have

OPTS="-s www-data -l unix:/var/run/rrdcached.sock"
OPTS="$OPTS -j /var/lib/rrdcached/journal/ -F"
OPTS="$OPTS -b /var/lib/munin/ -B"

If you do not set the socket user with "-s" you will see "Permission denied" errors in /var/log/munin/munin-cgi-graph.log:

[RRD ERROR] Unable to graph /var/lib/munin/
cgi-tmp/munin-cgi-graph/[...].png : Unable to connect to rrdcached: 
Permission denied

If you do not change the rrdcached working directory you will see "rrdc_flush" errors in /var/log/munin/munin-cgi-graph.log:

[RRD ERROR] Unable to graph /var/lib/munin/
cgi-tmp/munin-cgi-graph/[...].png : 
rrdc_flush (/var/lib/munin/[...].rrd) failed with status -1.

Some details on this can be found in the Munin wiki.

Liferea Code Repo Moved to github

I moved the source repo away from SourceForge to GitHub.
It is currently located here:

https://github.com/lwindolf/liferea

If in doubt always follow the "Code" link from the website to find the repo.

Sorry if this causes trouble for you. I'll contact everyone with current git write
access directly to see how we can continue on GitHub and who will be able
to merge.

Please keep contributing! I think with GitHub this can actually become
easier, and more developers are familiar with its best practices.

GLib GRegex Regular Expression Cheat Sheet

GLib supports PCRE-based regular expressions since v2.14 with the GRegex class.

Usage

GError *err = NULL;
GMatchInfo *matchInfo;
GRegex *regex;
   
regex = g_regex_new ("text", 0, 0, &err);
// check for compilation errors here!
     
g_regex_match (regex, "Some text to match", 0, &matchInfo);

Note how g_regex_new() receives the pattern as its first parameter, without any regex delimiters. As the regex is compiled separately it can and should be reused.

Checking if a GRegex did match

The above example just ran the regular expression, but did not test whether it matched. To simply test for a match add something like this:

if (g_match_info_matches (matchInfo))
    g_print ("Text found!\n");

Extracting Data

If you are interested in the matched data you need to use capture groups and iterate over the matches in the GMatchInfo structure. Here is an example (without any error checking):

regex = g_regex_new (" mykey=(\\w+) ", 0, 0, &err);
g_regex_match (regex, content, 0, &matchInfo);

while (g_match_info_matches (matchInfo)) {
   gchar *result = g_match_info_fetch (matchInfo, 1);   /* group 1 = the (\w+) capture */

   g_print ("mykey=%s\n", result);
         
   g_match_info_next (matchInfo, &err);
   g_free (result);
}

Easy String Splitting

Another nice feature in Glib is regex based string splitting with g_regex_split() or g_regex_split_simple():

gchar **results = g_regex_split_simple ("\\s+", 
       "White space separated list", 0, 0);

Use g_regex_split() for a precompiled regex, or use the "simple" variant to just pass the pattern.

Chef: How To Debug Active Attributes

If you experience problems with attribute inheritance on a Chef client and are watching the chef-client output without knowing which attributes are effective, you can either look at the Chef GUI or do the same on the console using "shef" (called "chef-shell" in newer Chef releases).

So run

chef-shell -z

The "-z" is important: it makes chef-shell load the currently active run list for the node, just as a "chef-client" run would.

Then enter "attributes" to switch to attribute mode

chef > attributes
chef:attributes >

and query anything you like by specifying the attribute path as you do in recipes:

chef:attributes > default["authorized_keys"]
[...]
chef:attributes > node["packages"]
[...]

By just querying for "node" you get a full dump of all attributes.

Never Forget _netdev with GlusterFS Mounts

When adding a GlusterFS share to /etc/fstab do not forget to add "_netdev" to the mount options. Otherwise your system will just hang on the next boot!
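
A minimal sketch of such an fstab entry (server and volume names are made up):

```
# /etc/fstab
gluster1:/myvolume  /mnt/gluster  glusterfs  defaults,_netdev  0  0
```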

Actually there doesn't seem to be a timeout. That would be nice too.

As a side note: do not forget that Ubuntu 12.04 ignores "_netdev" entirely, so the network is not guaranteed to be up when mounting and an additional upstart task or init script is needed anyway. But you still need "_netdev" to prevent hanging on boot.

I also have the impression that this only happens with the stock 3.8.x kernel and not with 3.4.x!

Splunk Cheat Sheet

Basic Searching Concepts

Simple searches look like the following examples. Note that there are literals with and without quoting and that there are field selections with an "=":

Exception                # just the word
One Two Three            # those three words in any order
"One Two Three"          # the exact phrase

# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal

Basic Filtering

Two important filters are "rex" and "regex".

"rex" is for extracting a pattern and storing it as a new field. This is why you need to specify a named extraction group in Perl-like manner "(?P<name>...)", for example

source="some.log" Fatal | rex "(?i) msg=(?P<FIELDNAME>[^,]+)"

When running the above query, check the list of "interesting fields": it should now have an entry "FIELDNAME" listing the top 10 fatal messages from "some.log".
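
Building on that, a hypothetical next step could aggregate the extracted field directly, e.g. to chart the most frequent messages:

```
source="some.log" Fatal
  | rex "(?i) msg=(?P<FIELDNAME>[^,]+)"
  | top 10 FIELDNAME
```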

So what is the difference to "regex"? Well, "regex" is like grep. Actually you can rephrase

source="some.log" Fatal

to

source="some.log" | regex _raw=".*Fatal.*"

and get the same result. The syntax of "regex" is simply "<field>=<regex>". Using it makes sense once you want to filter on a specific field.
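
For example, to keep only events where a previously extracted field matches a pattern (field name and value are hypothetical):

```
source="some.log"
  | rex "(?i) msg=(?P<FIELDNAME>[^,]+)"
  | regex FIELDNAME="^Timeout"
```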

Simple Useful Examples

Splunk usually auto-detects access.log fields so you can do queries like:

source="/var/log/nginx/access.log" HTTP 500
source="/var/log/nginx/access.log" HTTP (200 OR 30*)
source="/var/log/nginx/access.log" status=404 | sort - uri 
source="/var/log/nginx/access.log" | head 1000 | top 50 clientip
source="/var/log/nginx/access.log" | head 1000 | top 50 referer
source="/var/log/nginx/access.log" | head 1000 | top 50 uri
source="/var/log/nginx/access.log" | head 1000 | top 50 method
...

Emailing Results

By appending "sendemail" to any query you get the results by mail!

... | sendemail to="john@example.com"

Silencing the Nagios Plugin check_ntp_peer

The Nagios plugin "check_ntp_peer" from the Debian package "nagios-plugins-basic" is not very nice. After a machine reboots it shouts at you about the LI_ALARM bit and negative jitter all the time, despite everything actually being fine. The following wrapper script silences these false alarms:

#!/bin/bash

result=$(/usr/lib/nagios/plugins/check_ntp_peer "$@")
status=$?

if echo "$result" | egrep 'jitter=-1.00000|has the LI_ALARM' >/dev/null; then
	echo "Unknown state after reboot."
	exit 0
fi

echo "$result"
exit $status

Using above wrapper you get rid of the warnings.
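
If the check runs via NRPE, wiring it up could look like this sketch (wrapper path and command name are examples; -w/-c are check_ntp_peer's usual offset thresholds):

```
# /etc/nagios/nrpe.cfg
command[check_ntp_peer]=/usr/local/lib/nagios/check_ntp_peer_wrapper -H ntp.example.org -w 0.5 -c 1.0
```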

How to Get TinyTinyRSS Categories

If you are using TinyTinyRSS and want a hierarchic subscription list you need to explicitly enable categories in the preferences! Make sure to enable the "Enable feed categories" check box. Then save and open the "Feeds" tab, which now allows you to add categories. All existing feeds are put in the category "Uncategorized".

Preferences Screenshot

Missing Roles in "knife node show" Output

Sometimes the knife output can be really confusing:

$ knife node show myserver
Node Name:   myserver1
Environment: _default
FQDN:        myserver1
IP:          
Run List:    role[base], role[mysql], role[apache]
Roles:       base, nrpe, mysql
Recipes:     [...]
Platform:    ubuntu 12.04
Tags:        

Noticed the difference between "Run List" and "Roles"? The run list says "role[apache]", but the list of "Roles" has no Apache. This is because the role has not yet been applied on the server. So a

ssh root@myserver chef-client

solves the issue and Apache appears in the roles list.

The lesson: do not use "knife node show" to get the list of configured roles!

How To Debug PgBouncer

When you use Postgres with pgbouncer and run into database problems, you want to have a look at pgbouncer too. To inspect pgbouncer operation, make sure to add at least one user defined in the user credentials file (on Debian by default /etc/pgbouncer/userlist.txt) to the "stats_users" key in pgbouncer.ini:

stats_users = myuser

Now reload pgbouncer and use this user "myuser" to connect to pgbouncer with psql by requesting the special "pgbouncer" database:

psql -p 6432 -U myuser -W pgbouncer

At the psql prompt list the supported pgbouncer commands with

SHOW HELP;

PgBouncer will present all statistics and configuration options:

pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:  
	SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|VERSION
	SHOW STATS|FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
	SET key = arg
	RELOAD
	PAUSE [<db>]
	SUSPEND
	RESUME [<db>]
	SHUTDOWN

The "SHOW" commands are all self-explanatory. Very useful are the "SUSPEND" and "RESUME" commands when you use pools.
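
A typical maintenance sequence at the psql "pgbouncer" prompt could look like this (the database name "mydb" is made up): first inspect the pools, then pause the database, which waits for active queries to finish and holds new clients until you resume.

```
SHOW POOLS;
PAUSE mydb;
RESUME mydb;
```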
