GLib GRegex Regular Expression Cheat Sheet

GLib has supported PCRE-based regular expressions since v2.14 via the GRegex API.

Usage

GError *err = NULL;
GMatchInfo *matchInfo;
GRegex *regex;
   
regex = g_regex_new ("text", 0, 0, &err);
// check for compilation errors here!
     
g_regex_match (regex, "Some text to match", 0, &matchInfo);

Note how g_regex_new() takes the pattern as its first parameter, without any regex delimiters. As the regex is compiled separately it can and should be reused.
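To fill in the error check from above: g_regex_new() returns NULL on an invalid pattern and sets the GError. A minimal sketch, including the cleanup calls g_match_info_free() and g_regex_unref():

regex = g_regex_new ("text", 0, 0, &err);
if (regex == NULL) {
    // err->message describes the compilation problem
    g_printerr ("Regex compilation failed: %s\n", err->message);
    g_clear_error (&err);
    return;
}

g_regex_match (regex, "Some text to match", 0, &matchInfo);
// ... evaluate matchInfo ...
g_match_info_free (matchInfo);
g_regex_unref (regex);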

Checking if a GRegex did match

The above example just ran the regular expression but did not check whether it matched. To simply test for a match add something like this:

if (g_match_info_matches (matchInfo))
    g_print ("Text found!\n");

Extracting Data

If you are interested in the matched data you need to use match groups and iterate over the matches in the GMatchInfo structure. Here is an example (without any error checking):

regex = g_regex_new (" mykey=(\\w+) ", 0, 0, &err);
g_regex_match (regex, content, 0, &matchInfo);

while (g_match_info_matches (matchInfo)) {
   gchar *result = g_match_info_fetch (matchInfo, 1);  // group 1 is the (\w+) subpattern, group 0 the whole match

   g_print ("mykey=%s\n", result);
         
   g_match_info_next (matchInfo, &err);
   g_free (result);
}
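GRegex also supports named subpatterns, which make the extraction more readable. The same loop with a named group; the group name "value" is just illustrative:

regex = g_regex_new (" mykey=(?<value>\\w+) ", 0, 0, &err);
g_regex_match (regex, content, 0, &matchInfo);

while (g_match_info_matches (matchInfo)) {
   gchar *result = g_match_info_fetch_named (matchInfo, "value");

   g_print ("mykey=%s\n", result);

   g_match_info_next (matchInfo, &err);
   g_free (result);
}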

Easy String Splitting

Another nice feature in GLib is regex-based string splitting with g_regex_split() or g_regex_split_simple():

gchar **results = g_regex_split_simple ("\\s+",
       "White space separated list", 0, 0);

Use g_regex_split() for a precompiled regex or use the "simple" function to just pass the pattern.
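The returned vector is NULL-terminated and owned by the caller, so iterate the "results" array from above and free it with g_strfreev():

gchar **p;
for (p = results; *p != NULL; p++)
    g_print ("token: %s\n", *p);
g_strfreev (results);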

Chef: How To Debug Active Attributes

If you experience problems with attribute inheritance on a Chef client and the chef-client output does not tell you which attributes are effective, you can either look at the Chef GUI or check on the console using "shef", renamed to "chef-shell" in newer Chef releases.

So run

chef-shell -z

The "-z" is important to get chef-shell to load the currently active run list for the node that a "chef-client" run would use.

Then enter "attributes" to switch to attribute mode:

chef > attributes
chef:attributes >

and query anything you like by specifying the attribute path as you do in recipes:

chef:attributes > default["authorized_keys"]
[...]
chef:attributes > node["packages"]
[...]

By just querying for "node" you get a full dump of all attributes.

Never Forget _netdev with GlusterFS Mounts

When adding a GlusterFS share to /etc/fstab do not forget to add "_netdev" to the mount options. Otherwise your system will just hang on the next boot!
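A complete fstab line could look like this (server and volume names are placeholders):

gluster1.example.com:/myvolume  /mnt/myvolume  glusterfs  defaults,_netdev  0  0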

Actually there doesn't seem to be a timeout either; one would be nice.

As a side-note: do not forget that Ubuntu 12.04 ignores the "_netdev" option entirely. So the network is not guaranteed to be up when mounting, and an additional upstart task or init script is needed anyway (see the sketch below). But you still need "_netdev" to prevent hanging on boot.
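Such an upstart job could be sketched roughly like this; the file name and trigger condition are assumptions, not a tested recipe:

# /etc/init/mount-glusterfs.conf
start on (local-filesystems and net-device-up IFACE!=lo)
task
exec mount -a -t glusterfs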

I also have the impression that this only happens with stock kernel 3.8.x and not with 3.4.x!

Splunk Cheat Sheet

Basic Searching Concepts

Simple searches look like the following examples. Note that there are literals with and without quoting and that there are field selections with an "=":

Exception                # just the word
One Two Three            # those three words in any order
"One Two Three"          # the exact phrase

# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal

Basic Filtering

Two important filters are "rex" and "regex".

"rex" is for extraction a pattern and storing it as a new field. This is why you need to specifiy a named extraction group in Perl like manner "(?...)" for example

source="some.log" Fatal | rex "(?i) msg=(?P[^,]+)"

When running the above query, check the list of "interesting fields": it should now have an entry "FIELDNAME" listing the top 10 fatal messages from "some.log".
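To list those top messages explicitly you can pipe the extracted field into "top":

source="some.log" Fatal | rex "(?i) msg=(?P<FIELDNAME>[^,]+)" | top limit=10 FIELDNAME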

What is the difference to "regex" then? Well, "regex" works like grep. Actually you can rephrase

source="some.log" Fatal

to

source="some.log" | regex _raw=".*Fatal.*"

and get the same result. The syntax of "regex" is simply "<field>=<regex>". Using it makes sense once you want to filter on a specific field.
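For example, to keep only events whose "status" field contains a server error code (the field name here is just an assumption about your data):

source="some.log" | regex status="^5[0-9][0-9]$"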

Calculations

Sum up a field and do some arithmetics:

... | stats sum(<field>) as result | eval result=(result/1000)
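For example, to sum up the response sizes of an access log in kilobytes (assuming a "bytes" field is extracted):

source="/var/log/nginx/access.log" | stats sum(bytes) as result | eval result=(result/1000)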

Determine the size of log events by checking len() of _raw. The p10() and p90() functions return the 10th and 90th percentiles:

| eval raw_len=len(_raw) | stats avg(raw_len), p10(raw_len), p90(raw_len) by sourcetype

Simple Useful Examples

Splunk usually auto-detects access.log fields so you can do queries like:

source="/var/log/nginx/access.log" HTTP 500
source="/var/log/nginx/access.log" HTTP (200 or 30*)
source="/var/log/nginx/access.log" status=404 | sort - uri 
source="/var/log/nginx/access.log" | head 1000 | top 50 clientip
source="/var/log/nginx/access.log" | head 1000 | top 50 referer
source="/var/log/nginx/access.log" | head 1000 | top 50 uri
source="/var/log/nginx/access.log" | head 1000 | top 50 method
...

Emailing Results

By appending "sendemail" to any query you get the result by mail!

... | sendemail to="user@example.com"
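"sendemail" accepts further options, e.g. a subject line:

... | sendemail to="user@example.com" subject="Splunk search results"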

Timecharts

Create a timechart from a single field that should be summed up:

... | table _time, <field> | timechart span=1d sum(<field>)
... | table _time, <field>, name | timechart span=1d sum(<field>) by name

Index Statistics

List All Indices

 | eventcount summarize=false index=* | dedup index | fields index
 | eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
 | rest /services/data/indexes | table title
 | rest /services/data/indexes | table title splunk_server currentDBSizeMB frozenTimePeriodInSecs maxTime minTime totalEventCount

On the command line you can call:

$SPLUNK_HOME/bin/splunk list index

To query the write volume per index, the metrics.log can be used:

index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series

MB per day per indexer / index

index=_internal metrics kb series!=_* "group=per_host_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

index=_internal metrics kb series!=_* "group=per_index_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

Silencing the Nagios Plugin check_ntp_peer


The Nagios plugin "check_ntp_peer" from the Debian package "nagios-plugins-basic" is not very nice: after a machine reboots it keeps complaining about the LI_ALARM bit and negative jitter although everything is actually fine. The following wrapper script silences these bogus warnings:

#!/bin/bash
# Wrapper for check_ntp_peer that suppresses bogus post-reboot warnings

result=$(/usr/lib/nagios/plugins/check_ntp_peer "$@")
status=$?

if echo "$result" | egrep 'jitter=-1.00000|has the LI_ALARM' >/dev/null; then
	echo "Unknown state after reboot."
	exit 0
fi

echo "$result"
exit $status

Using the above wrapper you get rid of the warnings.
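If you run the check via NRPE, point the command definition at the wrapper instead of the original plugin; the wrapper path and NTP host below are just placeholders:

# e.g. in /etc/nagios/nrpe.d/ntp.cfg
command[check_ntp_peer]=/usr/local/lib/nagios/plugins/check_ntp_peer_wrapper -H ntp.example.com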

How to Get TinyTinyRSS Categories


If you are using TinyTinyRSS and want a hierarchic subscription list you need to explicitly enable categories in the preferences! Make sure to enable the "Enable feed categories" check box. Then save and open the "Feeds" tab, which now allows you to add categories. All existing feeds are presented in the category "Uncategorized".

Preferences Screenshot

Missing Roles in "knife node show" Output

Sometimes the knife output can be really confusing:

$ knife node show myserver
Node Name:   myserver1
Environment: _default
FQDN:        myserver1
IP:          
Run List:    role[base], role[mysql], role[apache]
Roles:       base, nrpe, mysql
Recipes:     [...]
Platform:    ubuntu 12.04
Tags:        

Notice the difference between "Run List" and "Roles"? The run list says "role[apache]", but the list of "Roles" has no Apache. This is because the role has not yet been run on the server. So running

ssh root@myserver chef-client

solves the issue and Apache appears in the roles list.

The lesson: do not use "knife node show" to get the list of configured roles!
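To see what is actually configured, query the run list directly via knife's attribute selection:

knife node show myserver -a run_list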

How To Debug PgBouncer

When you use Postgres with PgBouncer and run into database problems, you want to have a look at PgBouncer too. To inspect PgBouncer operation, ensure to add at least one user defined in the user credentials file (e.g. on Debian per default /etc/pgbouncer/userlist.txt) to the "stats_users" key in pgbouncer.ini:

stats_users = myuser

Now reload pgbouncer and use this user "myuser" to connect to pgbouncer with psql by requesting the special "pgbouncer" database:

psql -p 6432 -U myuser -W pgbouncer

At the psql prompt, list the supported pgbouncer commands with:

SHOW HELP;

PgBouncer will present all statistics and configuration options:

pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:  
	SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|VERSION
	SHOW STATS|FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
	SET key = arg
	RELOAD
	PAUSE [<db>]
	SUSPEND
	RESUME [<db>]
	SHUTDOWN

The "SHOW" commands are all self-explanatory. Very useful are the "SUSPEND" and "RESUME" commands when you use pools.

MySQL Dump Skip Event Table

If your MySQL backup tool or self-written script complains about an event table, then you have run into an issue with newer MySQL versions (>5.5.30), whose mysqldump warns about the "event" table in the internal "mysql" schema.

If you run into this you need to decide whether you want to include or exclude the event table when dumping your database.

To skip it: due to MySQL bug #68376 you have two choices. You can follow the documentation and add the logical option

--skip-events

which will cause the event table not to be exported. But the warning won't go away. To also get rid of the warning you need to use this instead:

--events --ignore-table=mysql.event
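A complete invocation could then look like this (credentials and dump scope are placeholders):

mysqldump --events --ignore-table=mysql.event --all-databases -u root -p > dump.sql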

And of course you can also choose to just dump the event table: add the option

--events

to your "mysqldump" invocation. If you use a tool that invokes "mysqldump" indirectly check if the tool allows to inject additional parameters.

/etc/sudoers.d Pitfalls

From the sudoers manpage:

[...] sudo will read each file in /etc/sudoers.d, skipping file names 
that end in ~ or contain a . character to avoid causing problems with 
package manager or editor temporary/backup files. [...]

This means if you have a Unix user like "lars.windolf" you do not want to create a file

/etc/sudoers.d/lars.windolf

The evil thing is that neither sudo nor visudo warns you about the mistake; the rules just do not work. And if you have other definition files with the same rule under a file name without a dot, you might well wonder about your sanity :-)
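The simple fix is a file name without a dot:

mv /etc/sudoers.d/lars.windolf /etc/sudoers.d/lars_windolf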
