Acer Aspire One - Linux Flash Video Performance

There are dozens of guides on how to run the Acer Aspire One netbook with Ubuntu 12.04 (and derivatives like Lubuntu, Xubuntu & Co), which provides reasonably good hardware support. For performance reasons most of them suggest installing the AMD Catalyst driver, and rightly so: it allows playing HD videos without problems on this otherwise small netbook.

Still Flash video doesn't work!

Most guides omit this hint, but the solution is quite simple: you also need to install the XvBA Linux support. On Debian and Ubuntu this means

sudo apt-get install xvba-va-driver libva-glx1 libva-egl1 vainfo

This is also described on the Ubuntu BinaryDriverHowto page but missing in almost all other tutorials on how to get Radeon chips working on Linux.

So again: Install XvBA!!!

If you are unsure whether it is working on your system, just run

vainfo

and check whether it lists "Supported profile and entrypoints". If it does not, or the tool doesn't exist, you are probably running without hardware acceleration for Flash.

Confluence: Query data from different space with {content-reporter}

You can use the {content-reporter} macro (provided by the commercial Customware Confluence plugin) to access pages in different spaces, but you need to

  1. Add "space=<space name>" to the {content-reporter} parameters
  2. Add "<space name>:" in front of the page path in the scope parameter

to make it work.

Example to query "somefield" from all child pages of "ParentPage" in "THESPACE":

{report-table}
{content-reporter:space=THESPACE|scope="THESPACE:ParentPage" > descendents}
{text-filter:data:somefield|required=true}
{content-reporter}
{report-column:title=somefield}{report-info:data:somefield}{report-column}
{report-table}

Configure Hadoop to use Syslog on Ubuntu

If you came here searching for a good description of how to use syslog with Hadoop, you might have run into this issue:

As documented on apache.org (HowToConfigurate), you will have set up your log4j configuration similar to this

# Log at INFO level to DRFAAUDIT, SYSLOG appenders
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,DRFAAUDIT,SYSLOG

# Do not forward audit events to parent appenders (i.e. namenode)
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

# Configure local appender
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=/var/log/audit.log
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Configure syslog appender
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=loghost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1

It is important to have "SYSLOG" in the "...FSNamesystem.audit" definition at the top and to define such a "SYSLOG" appender below with "log4j.appender.SYSLOG", where you configure your loghost and facility.

Now it might be that you still do not get anything into syslog on your loghost when using Hadoop versions 0.18 up to at least 0.20. I found a solution only in a Japanese blog post, which suggested modifying the Hadoop start helper script /usr/lib/hadoop/bin/hadoop-daemon.sh to make it work.

You need to change the environment variables

export HADOOP_ROOT_LOGGER="INFO,DRFA"
export HADOOP_SECURITY_LOGGER="INFO,DRFAS"

to include "SYSLOG":

export HADOOP_ROOT_LOGGER="INFO,SYSLOG,DRFA"
export HADOOP_SECURITY_LOGGER="INFO,SYSLOG,DRFAS"

After making this change the syslog logging will work.
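If messages still don't show up, check the receiving side: log4j's SyslogAppender sends plain UDP syslog datagrams to port 514, which many distributions do not accept by default. With rsyslog on the loghost a sketch could look like this (the drop-in file name and target log file are assumptions):

```shell
# On the loghost: make rsyslog listen on UDP/514 and write
# everything arriving on facility LOCAL1 to its own file.
cat <<'EOF' | sudo tee /etc/rsyslog.d/30-hadoop.conf
$ModLoad imudp
$UDPServerRun 514
local1.*    /var/log/hadoop-audit.log
EOF
sudo service rsyslog restart
```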

Windows Terminal Server: List connections

Use this strange command to list the users logged on to a Windows terminal server

qwinsta
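qwinsta also works against remote machines, and its counterpart rwinsta resets (logs off) a session. The server name and session ID below are just examples:

```shell
qwinsta /server:TS01      # list sessions on the remote terminal server TS01
rwinsta 3 /server:TS01    # reset the session with ID 3 on TS01
```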

Linux: Half-Maximize Applications like in Windows

There is a really useful shortcut in Windows which allows you to align a window to the left or right half of the screen. This way you can use your 16:9 widescreen efficiently with the keyboard, without any mouse resizing.

This is possible on Ubuntu Unity, GNOME 2/3, KDE and XFCE too!

Just with different mechanisms...

Desktop                         Half Maximize Left        Half Maximize Right
Windows                         [Windows]+[Left]          [Windows]+[Right]
Ubuntu Unity                    [Ctrl]+[Super]+[Left]     [Ctrl]+[Super]+[Right]
GNOME 3                         Drag to left edge         Drag to right edge
GNOME 2 + Compiz Grid plugin    Drag to left border       Drag to right border
GNOME 2/XFCE/KDE + Brightside   Drag to configured edge   Drag to configured edge
XFCE 4.10                       [Super]+[Left]            [Super]+[Right]

GTK+: How to select the nth entry in a tree view?

If you don't build your own GtkTreePaths and thus cannot simply calculate the path to select, then at least for flat lists you can use the GtkTreeModel function gtk_tree_model_iter_nth_child(), which returns the nth child of a given GtkTreeIter. By passing NULL as the parent you get the nth row at the top level of the tree.

The code looks like this:

GtkTreeIter iter;

if (gtk_tree_model_iter_nth_child (treemodel, &iter, NULL, position)) {
   GtkTreeSelection *selection = gtk_tree_view_get_selection (treeview);

   if (selection) 
      gtk_tree_selection_select_iter (selection, &iter);
}

Ubuntu 12.04 on Xen: blkfront: barrier: empty write xvda op failed

When you run an Ubuntu 12.04 VM on XenServer 6.0 (kernel 2.6.32.12) you can get the following errors, and your local file systems will be mounted read-only. Also see Debian bug #637234 or Ubuntu bug #824089.

blkfront: barrier: empty write xvda op failed
blkfront: xvda: barrier or flush: disabled

You also won't be able to remount the filesystem read-write using "mount -o remount,rw /", as this will only produce a kernel message like this

EXT4-fs (xvda1): re-mounted. Opts: errors=remount-ro

This problem affects paravirtualized Ubuntu 12.04 VMs more or less sporadically. Note that Ubuntu 12.04 is not officially listed as a supported system in the Citrix documentation, and that fully virtualized VMs are not affected.

The Solution:

  • According to the Debian bug report the correct solution is to upgrade the dom0 kernel to at least 2.6.32-41.
  • To solve the problem without upgrading the dom0 kernel: reboot until you get a writable filesystem and add "barrier=0" to the mount options of all your local filesystems.
  • Alternatively: do not use paravirtualization :-)
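For the "barrier=0" workaround the option has to end up in the mount options in /etc/fstab. A sketch for an ext4 root filesystem follows; the sed pattern assumes the usual "errors=remount-ro" options string, so double-check the result before rebooting:

```shell
# Back up fstab, then append barrier=0 to the ext4 root's mount options
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i 's|\(ext4[[:space:]]*errors=remount-ro\)|\1,barrier=0|' /etc/fstab
grep barrier /etc/fstab    # verify the change
```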

Patch to Create Custom FLV Tags with Yamdi

As described in the Comparison of FLV and MP4 metadata tagging tools (injectors) post, yamdi is probably the best and fastest Open Source FLV metadata injector.

Still, yamdi lacks the possibility to add custom FLV tags. I posted a patch upstream some months ago, with no feedback so far. So if you need custom tags as provided by flvtool2, you might want to apply this patch against the yamdi sources (tested with 1.8.0).

--- ../yamdi.c	2010-10-17 20:46:40.000000000 +0200
+++ yamdi.c	2010-10-19 11:32:34.000000000 +0200
@@ -105,6 +105,9 @@
 	FLVTag_t *flvtag;
 } FLVIndex_t;
 
+// max number of user defined tags
+#define MAX_USER_DEFINED	10
+
 typedef struct {
 	FLVIndex_t index;
 
@@ -168,6 +171,8 @@
 
 	struct {
 		char creator[256];		// -c
+		char *user_defined[MAX_USER_DEFINED];	// -a
+		int user_defined_count;		// number of user defined parameters
 
 		short addonlastkeyframe;	// -k
 		short addonlastsecond;		// -s, -l (deprecated)
@@ -288,8 +293,16 @@
 
 	initFLV(&flv);
 
-	while((c = getopt(argc, argv, "i:o:x:t:c:lskMXh")) != -1) {
+	while((c = getopt(argc, argv, "a:i:o:x:t:c:lskMXh")) != -1) {
 		switch(c) {
+			case 'a':
+				if(flv.options.user_defined_count == MAX_USER_DEFINED) {
+					fprintf(stderr, "ERROR: too many -a options\n");
+					exit(1);
+				}
+				printf("Adding tag >>>%s<<<\n", optarg);
+				flv.options.user_defined[flv.options.user_defined_count++] = strdup (optarg);
+				break;
 			case 'i':
 				infile = optarg;
 				break;
@@ -1055,6 +1067,7 @@
 
 int createFLVEventOnMetaData(FLV_t *flv) {
 	int pass = 0;
+	int j;
 	size_t i, length = 0;
 	buffer_t b;
 
@@ -1073,6 +1086,21 @@
 	if(strlen(flv->options.creator) != 0) {
 		writeBufferFLVScriptDataValueString(&b, "creator", flv->options.creator); length++;
 	}
+	
+	printf("Adding %d user defined tags\n", flv->options.user_defined_count);
+	for(j = 0; j < flv->options.user_defined_count; j++) {
+		char *key = strdup (flv->options.user_defined[j]);
+		char *value = strchr(key, '=');
+		if(value != NULL) {
+			*value++ = 0;
+			printf("Adding tag #%d %s=%s\n", j, key, value);
+			writeBufferFLVScriptDataValueString(&b, key, value);
+			length++;
+		} else {
+			fprintf(stderr, "ERROR: Invalid key value pair: >>>%s<<<\n", key);
+		}
+		free(key);
+	} 
 
 	writeBufferFLVScriptDataValueString(&b, "metadatacreator", "Yet Another Metadata Injector for FLV - Version " YAMDI_VERSION "\0"); length++;
 	writeBufferFLVScriptDataValueBool(&b, "hasKeyframes", flv->video.nkeyframes != 0 ? 1 : 0); length++;

Using the patch you then can add up to 10 custom tags using the new "-a" switch. The syntax is

-a <key>=<value>
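With the patch applied, a call could look like this (file names and tag values are just examples):

```shell
# Inject the keyframe index plus two custom metadata tags
yamdi -i video.flv -o video-tagged.flv -a copyright=ACME -a title="Some Title"
```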

How to Merge CSV Files

When reading the meager manpage of the "join" command, many Unix users probably give up immediately. Still, it can be worth using it instead of scripting the same task in your favourite scripting language.

Here is an example on how to merge 2 CSV files:

CSV File 1 "employees.csv"

# <employee id>;<name>;<age>;<location>
1;Anton;37;Geneva
2;Marie;28;Paris
3;Tom;25;London

CSV File 2 "tasks.csv"

# <task id>;<employee id>;<task description>
1000;2;Setup new Project
1001;3;Call customer X
1002;3;Get package from post office

And now some action:

A naive try...

The following command

join employees.csv tasks.csv

... doesn't produce any output. This is because join expects the shared key to reside in the first column of both files, which is not the case here. Also, the default separator for 'join' is whitespace, not a semicolon.

Full Join

join -t ";" -1 1 -2 2 employee.csv tasks.csv

We need to run join with '-t ";"' to tell it that we have CSV format. Then, to avoid the pitfall of not having the common key in the first column, we need to tell join where the join key is in each file: the switch "-1" sets the key field for the first file and "-2" for the second file.

2;Marie;28;Paris;1000;Setup new Project
3;Tom;25;London;1001;Call customer X
3;Tom;25;London;1002;Get package from post office
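Note that join expects both inputs to be sorted on the join field; the sample files above already are. For unsorted data you can sort on the fly using bash process substitution:

```shell
# Sort each file on its join field before handing it to join
join -t ";" -1 1 -2 2 \
    <(sort -t ";" -k 1,1 employees.csv) \
    <(sort -t ";" -k 2,2 tasks.csv)
```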

Print only name and task

join -o1.2,2.3 -t ";" -1 1 -2 2 employees.csv tasks.csv

We use "-o" to limit the fields to be printed. "-o" takes a comma separated list of "<file nr>.<field nr>" definitions. So we only want the second file of the first file (1.2) and the third field of the second file (2.3)...

Marie;Setup new Project
Tom;Call customer X
Tom;Get package from post office

Summary

While the syntax of join is not that straightforward, it still allows doing things quickly that one is often tempted to implement in a script. It is quite easy to convert batch input data to CSV format, and using join it can then easily be grouped and reduced according to the task at hand.

If this got you interested you can find more and non-CSV examples on this site.

Determine Memory Configuration of HP servers

Use dmidecode like this

dmidecode 2>&1 |grep -A17 -i "Memory Device" |egrep "Memory Device|Locator: PROC|Size" |grep -v "No Module Installed" |grep -A1 -B1 "Size:"

The "Locator:" line gives you the slot assignements as listed in the HP documentation, e.g. the HP ProLiant DL380 G7 page.

Of course you can also look this up in the iLO GUI.
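If the long pipeline is hard to remember, dmidecode can also filter by DMI type directly; type 17 corresponds to "Memory Device":

```shell
# Show only the size and slot locator of each memory device (needs root)
sudo dmidecode -t 17 | grep -E "Size|Locator"
```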
