Memcache Monitoring GUIs

When using memcached or memcachedb everything is fine as long as it is running. From an operations perspective, however, memcached is a black box: there is no real logging, and only the -v/-vv/-vvv switches (usable when not running in daemon mode) show what your instance is doing. Things get even more complex if you run multiple or distributed memcache instances on different hosts and ports.

So the question is: How to monitor your distributed memcache setup?

There are not many tools out there, but a few useful ones exist. We'll have a look at the following tools. Note that some can monitor multiple memcached instances, while others can only monitor a single instance at a time.

Name               Multi-Instance   Complexity/Features
telnet             no               Simple CLI via telnet
memcache-top       yes              CLI
statsproxy         yes              Simple Web GUI
memcache.php       yes              Simple Web GUI
PhpMemcacheAdmin   yes              Complex Web GUI
Memcache Manager   yes              Complex Web GUI

1. Use telnet!

memcached already brings its own statistics, which are accessible via telnet for manual interaction and are the basis for all the other monitoring tools. Read more about using the telnet interface.
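For a quick manual check you can telnet directly to an instance and issue the "stats" command (output shortened, values illustrative):

telnet localhost 11211
stats
STAT pid 1234
STAT uptime 86400
STAT curr_connections 10
STAT get_hits 4711
STAT get_misses 815
[...]
END
quit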

2. Live Console Monitoring with memcache-top

You can use memcache-top for live monitoring of one or more memcached instances. It will give you the I/O throughput, the number of evictions, the current hit ratio and, when run with "--commands", also the number of GET/SET operations per interval.
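A typical invocation could look like this (option names as of memcache-top v0.6; check --help for your version):

./memcache-top --instances 10.50.11.5 --commands --sleep 3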

memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)

INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s 
10.50.11.5:11211        88.9%   69.7%   1661    0.9ms   0.3     47      9       13.9K   9.8K    
10.50.11.5:11212        88.8%   69.9%   2121    0.7ms   1.3     168     10      17.6K   68.9K   
10.50.11.5:11213        88.9%   69.4%   1527    0.7ms   1.7     48      16      14.4K   13.6K   
[...]

AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      11      13.5K   30.3K   

TOTAL:          19.9GB/ 23.4GB          20.0K   11.7ms  15.3    826     132     162.6K  363.6K  
(ctrl-c to quit.)

(Example output)

3. Live Browser Monitoring with statsproxy

Using the statsproxy tool you get browser-based statistics for multiple memcached instances. The basic idea of statsproxy is to provide the unmodified memcached statistics via HTTP. It also provides a synthetic health check for service monitoring tools like Nagios.

To compile statsproxy on Debian:

# Ensure you have bison
sudo apt-get install bison

# Download the statsproxy-1.0.tgz tarball, then unpack and build it
tar zxvf statsproxy-1.0.tgz
cd statsproxy-1.0
make

Now you can run the "statsproxy" binary, but it will inform you that it needs a configuration file. I suggest redirecting its output to a new file, e.g. "statsproxy.conf", removing the informational text at the top and bottom, and then modifying the configuration section as needed.

./statsproxy > statsproxy.conf 2>&1

Add as many "proxy-mapping" sections as you have memcached instances. In each "proxy-mapping" section, point "backend" to your memcached instance and "frontend" to a port on your webserver where you want to access the statistics for this backend, as sketched below.
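A mapping could look roughly like this (an illustrative sketch only, built from the key names above; derive the exact syntax from the template the binary prints):

proxy-mapping:
    backend = 10.50.11.5:11211
    frontend = 0.0.0.0:8080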

Once finished, run:

./statsproxy -F statsproxy.conf

Below you find a screenshot of what statsproxy looks like:

4. Live Browser Monitoring with memcache.php

Using this PHP script you can quickly add memcached statistics to a webserver of your choice. Most useful is the global memory usage graph, which helps to identify problematic instances in a distributed setup.

Here is how it should look (screenshot from the project homepage):

When using this script, ensure access is protected and that the "flush_all" menu option cannot be triggered by accident. Also, on large memcached instances, refrain from dumping the keys as this can cause noticeable load on your server.
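One simple way to protect the script, assuming Apache with mod_auth_basic and an existing htpasswd file (paths and names are examples):

<Location "/memcache.php">
    AuthType Basic
    AuthName "Memcache Stats"
    AuthUserFile /etc/apache2/htpasswd
    Require valid-user
</Location>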

Acer Aspire One - Linux Flash Video Performance


There are dozens of guides on how to run the Acer Aspire One netbook with Ubuntu 12.04 (and derivatives like Lubuntu, Xubuntu & Co), which provides reasonably good hardware support. For performance reasons most of them suggest installing the AMD Catalyst driver, and this is good advice because it allows playing HD videos without problems on this otherwise small netbook.

Still Flash video doesn't work!

Most guides do not include this hint, but the solution is quite simple: one also needs to install the XvBA Linux support. On Debian and Ubuntu this means:

sudo apt-get install xvba-va-driver libva-glx1 libva-egl1 vainfo

This is also described on the Ubuntu BinaryDriverHowto page, but it is missing from almost all other tutorials on getting Radeon chips to work on Linux.

So again: Install XvBA!!!

If you are unsure whether it is working on your system, just run

vainfo

and check whether it lists "Supported profile and entrypoints". If it does not, or if the tool doesn't exist, you are probably running without hardware acceleration for Flash.

Confluence: Query data from a different space with {content-reporter}

You can use the {content-reporter} macro (provided by the commercial Customware Confluence plugin) to access pages in different spaces, but to make it work you need to:

  1. Add "space=<space name>" to the {content-reporter} parameters
  2. Add "<space name>:" in front of the page path in the scope parameter

Example to query "somefield" from all child pages of "ParentPage" in "THESPACE":

{report-table}
{content-reporter:space=THESPACE|scope="THESPACE:ParentPage" > descendents}
{text-filter:data:somefield|required=true}
{content-reporter}
{report-column:title=somefield}{report-info:data:somefield}{report-column}
{report-table}

Configure Hadoop to use Syslog on Ubuntu

If you came here searching for a good description of how to use syslog with Hadoop, you might have run into this issue:

As documented on apache.org (HowToConfigurate) you have set up the log4j configuration similar to this:

# Log at INFO level to DRFAAUDIT, SYSLOG appenders
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,DRFAAUDIT,SYSLOG

# Do not forward audit events to parent appenders (i.e. namenode)
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

# Configure local appender
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=/var/log/audit.log
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Configure syslog appender
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=loghost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1

It is important to have "SYSLOG" in the "...FSNamesystem.audit" definition at the top and to define a matching "SYSLOG" appender below with "log4j.appender.SYSLOG"; that is where you configure your loghost and facility.
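Note that the log4j SyslogAppender sends plain UDP syslog packets, so the syslog daemon on "loghost" must accept remote UDP input. Assuming rsyslog on the loghost, a minimal receiving configuration could look like this (file name and target path are examples):

# /etc/rsyslog.d/hadoop.conf
$ModLoad imudp
$UDPServerRun 514
local1.*    /var/log/hadoop-audit.log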

Now it might be that you still do not get anything into syslog on your loghost when using Hadoop versions 0.18 up to at least 0.20. I found a solution only in a Japanese blog post, which suggested modifying the Hadoop start helper script /usr/lib/hadoop/bin/hadoop-daemon.sh to make it work.

You need to change the environment variables

export HADOOP_ROOT_LOGGER="INFO,DRFA"
export HADOOP_SECURITY_LOGGER="INFO,DRFAS"

to include "SYSLOG":

export HADOOP_ROOT_LOGGER="INFO,SYSLOG,DRFA"
export HADOOP_SECURITY_LOGGER="INFO,SYSLOG,DRFAS"

After making this change, syslog logging will work.
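To verify the transport without restarting Hadoop, you can send a test message with the same facility directly to the loghost (the -n/-P switches require a reasonably recent util-linux logger) and check that it arrives in the configured target file:

logger -n loghost -P 514 -p local1.info "hadoop syslog test"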

Windows Terminal Server: List connections

Use this somewhat obscure command to list the users logged on to a Windows terminal server:

qwinsta
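The output looks roughly like this (illustrative):

 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
 services                                    0  Disc
>console           Administrator             1  Active
 rdp-tcp#0         jdoe                      2  Active  rdpwd
 rdp-tcp                                 65536  Listen

The companion command "rwinsta <session id>" resets a session.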

Linux: Half-Maximize Applications like in Windows

There is a really useful shortcut in Windows which allows you to align a window to the left or right half of the screen. This way you can use your 16:9 widescreen efficiently with the keyboard, without any mouse resizing.

This is possible on Ubuntu Unity, GNOME, KDE and XFCE too!

Just with different mechanisms...

Desktop                        Half Maximize Left         Half Maximize Right
Windows                        [Windows]+[Left]           [Windows]+[Right]
Ubuntu Unity                   [Ctrl]+[Super]+[Left]      [Ctrl]+[Super]+[Right]
GNOME + Compiz Grid plugin     Drag to left border        Drag to right border
GNOME/XFCE/KDE + Brightside    Drag to configured edge    Drag to configured edge
XFCE 4.10                      [Super]+[Left]             [Super]+[Right]

GTK+: How to select the nth entry in a tree view?

If you don't build your own GtkTreePaths and thus cannot simply construct the path to select, then, at least for flat lists, you can use gtk_tree_model_iter_nth_child(), which returns the nth child of a given GtkTreeIter. By passing NULL for the parent you get the nth row at the top level of the tree.

The code looks like this:

GtkTreeIter iter;

/* Fetch the nth top-level row (NULL parent = top level). */
if (gtk_tree_model_iter_nth_child (treemodel, &iter, NULL, position)) {
   GtkTreeSelection *selection = gtk_tree_view_get_selection (treeview);

   if (selection)
      gtk_tree_selection_select_iter (selection, &iter);
}

Ubuntu 12.04 on Xen: blkfront: barrier: empty write xvda op failed

When you run an Ubuntu 12.04 VM on XenServer 6.0 (kernel 2.6.32.12) you may get the following errors, and your local file systems will mount read-only. Also see Debian bug #637234 or Ubuntu bug #824089.

blkfront: barrier: empty write xvda op failed
blkfront: xvda: barrier or flush: disabled

You also won't be able to remount read-write using "mount -o remount,rw /", as this will only produce a kernel message like this:

EXT4-fs (xvda1): re-mounted. Opts: errors=remount-ro

This problem sporadically affects paravirtualized Ubuntu 12.04 VMs. Note that Ubuntu 12.04 is not officially listed as a supported system in the Citrix documentation, and that the problem doesn't affect fully virtualized VMs.

The Solution:

  • According to the Debian bug report the correct solution is to upgrade the dom0 kernel to at least 2.6.32-41.
  • To solve the problem without upgrading the dom0 kernel: reboot until you get a writable filesystem and add "barrier=0" to the mount options of all your local filesystems (see the example fstab entry below).
  • Alternatively: do not use paravirtualization :-)
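For the "barrier=0" workaround the relevant /etc/fstab entry could look like this (device and other options are examples; keep your existing options and just append barrier=0):

# <file system>  <mount point>  <type>  <options>                    <dump>  <pass>
/dev/xvda1       /              ext4    errors=remount-ro,barrier=0  0       1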

Patch to Create Custom FLV Tags with Yamdi

As described in the Comparison of FLV and MP4 metadata tagging tools (injectors) post, yamdi is probably the best and fastest open source FLV metadata injector.

Still, yamdi lacks the possibility to add custom FLV tags. I posted a patch upstream some months ago with no feedback so far. So if you need custom tags as provided by flvtool2, you might want to merge this patch against the yamdi sources (tested with 1.8.0).

--- ../yamdi.c	2010-10-17 20:46:40.000000000 +0200
+++ yamdi.c	2010-10-19 11:32:34.000000000 +0200
@@ -105,6 +105,9 @@
 	FLVTag_t *flvtag;
 } FLVIndex_t;
 
+// max number of user defined tags
+#define MAX_USER_DEFINED	10
+
 typedef struct {
 	FLVIndex_t index;
 
@@ -168,6 +171,8 @@
 
 	struct {
 		char creator[256];		// -c
+		char *user_defined[MAX_USER_DEFINED];	// -a
+		int user_defined_count;		// number of user defined parameters
 
 		short addonlastkeyframe;	// -k
 		short addonlastsecond;		// -s, -l (deprecated)
@@ -288,8 +293,16 @@
 
 	initFLV(&flv);
 
-	while((c = getopt(argc, argv, "i:o:x:t:c:lskMXh")) != -1) {
+	while((c = getopt(argc, argv, "a:i:o:x:t:c:lskMXh")) != -1) {
 		switch(c) {
+			case 'a':
+				if(flv.options.user_defined_count == MAX_USER_DEFINED) {
+					fprintf(stderr, "ERROR: too many -a options\n");
+					exit(1);
+				}
+				printf("Adding tag >>>%s<<<\n", optarg);
+				flv.options.user_defined[flv.options.user_defined_count++] = strdup(optarg);
+				break;
 			case 'i':
 				infile = optarg;
 				break;
@@ -1055,6 +1067,7 @@
 
 int createFLVEventOnMetaData(FLV_t *flv) {
 	int pass = 0;
+	int j;
 	size_t i, length = 0;
 	buffer_t b;
 
@@ -1073,6 +1086,21 @@
 	if(strlen(flv->options.creator) != 0) {
 		writeBufferFLVScriptDataValueString(&b, "creator", flv->options.creator); length++;
 	}
+	
+	printf("Adding %d user defined tags\n", flv->options.user_defined_count);
+	for(j = 0; j < flv->options.user_defined_count; j++) {
+		char *key = strdup (flv->options.user_defined[j]);
+		char *value = strchr(key, '=');
+		if(value != NULL) {
+			*value++ = 0;
+			printf("Adding tag #%d %s=%s\n", j, key, value);
+			writeBufferFLVScriptDataValueString(&b, key, value);
+			length++;
+		} else {
+			fprintf(stderr, "ERROR: Invalid key value pair: >>>%s<<<\n", key);
+		}
+		free(key);
+	} 
 
 	writeBufferFLVScriptDataValueString(&b, "metadatacreator", "Yet Another Metadata Injector for FLV - Version " YAMDI_VERSION "\0"); length++;
 	writeBufferFLVScriptDataValueBool(&b, "hasKeyframes", flv->video.nkeyframes != 0 ? 1 : 0); length++;

Using the patch you can then add up to 10 custom tags via the new "-a" switch. The syntax is:

-a <key>=<value>
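For example, to inject metadata plus two custom tags (file names are placeholders):

./yamdi -i input.flv -o output.flv -a author="John Doe" -a copyright=ACME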

How to Merge CSV Files

When reading the meager manpage of the "join" command, many Unix users probably give up immediately. Still, it can be worth using it instead of scripting the same task in your favourite scripting language.

Here is an example on how to merge 2 CSV files:

CSV File 1 "employees.csv"

# <employee id>;<name>;<age>;<location>
1;Anton;37;Geneva
2;Marie;28;Paris
3;Tom;25;London

CSV File 2 "tasks.csv"

# <task id>;<employee id>;<task description>
1000;2;Setup new Project
1001;3;Call customer X
1002;3;Get package from post office

And now some action:

A naive try...

The following command

join employees.csv tasks.csv

... doesn't produce any output. This is because join expects the shared key to be in the first column of both files, which is not the case here. Also, the default field separator for join is whitespace, not a semicolon.

Full Join

join -t ";" -1 1 -2 2 employees.csv tasks.csv

We need to run join with '-t ";"' to tell it that the fields are separated by semicolons. Then, to avoid the pitfall of not having the common key in the first column, we need to tell join where the join key is in each file: the switch "-1" gives the key field index for the first file and "-2" for the second file.

2;Marie;28;Paris;1000;Setup new Project
3;Tom;25;London;1001;Call customer X
3;Tom;25;London;1002;Get package from post office

Print only name and task

join -o1.2,2.3 -t ";" -1 1 -2 2 employees.csv tasks.csv

We use "-o" to limit the fields to be printed. "-o" takes a comma-separated list of "<file nr>.<field nr>" definitions. So we only want the second field of the first file (1.2) and the third field of the second file (2.3)...

Marie;Setup new Project
Tom;Call customer X
Tom;Get package from post office
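Note that join expects both inputs to be sorted on the join key. Our example files already are; if yours are not, you can sort them on the fly with bash process substitution:

join -t ";" -1 1 -2 2 <(sort -t ";" -k 1,1 employees.csv) <(sort -t ";" -k 2,2 tasks.csv)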

Summary

While the syntax of join is not that straightforward, it allows you to quickly do things that one is often tempted to implement in a script. It is quite easy to convert batch input data to CSV format; using join it can then easily be grouped and reduced according to your task.

If this got you interested, you can find more examples, including non-CSV ones, on this site.
