
How to get openshift scc uid ranges

A typical problem when using Helm charts in Openshift is handling security context UID ranges. As Helm charts usually target Kubernetes, they do not force you to set a proper security context.
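
A quick way to look up the UID range Openshift assigned to a project is to read the namespace annotations (a sketch; "myproject" is a placeholder):
oc get namespace myproject -o yaml | grep sa.scc
The openshift.io/sa.scc.uid-range annotation then shows the range your securityContext values have to fall into.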

Disable sleep key on keyboard

Do you also have this utterly useless “sleep” key on your keyboard right above the keypad? At the right upper corner of the keyboard? Right where you accidentally hit it in the midst of a meeting when reaching over for something?
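
One way to neutralize it (a sketch, assuming a systemd-based desktop; not necessarily the approach described in the post) is to tell logind to ignore the key in /etc/systemd/logind.conf:
HandleSuspendKey=ignore
and then restart the service:
systemctl restart systemd-logind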

Accessing grafeas with a swagger client

This is a short howto on the workarounds needed to use the artifact metadata DB Grafeas (developed by Google and JFrog). While the upstream project does provide Swagger definitions, those do not work out-of-the-box with, for example, Swagger 2.4. The Grafeas server is implemented in Golang using Protobuf for API bindings, so the offered Swagger bindings are not really used and are thus untested.

Store multi-line secrets in Azure DevOps Pipeline Libraries

Azure DevOps wants you to provide secrets to pipelines using a so-called pipeline library. You can store single-line strings as secrets in the pipeline library. You cannot, however, store multi-line strings as secrets without messing up the line breaks.

Easily fix async video with ffmpeg

1. Correcting Audio that is too slow/fast

This can be done using the -async parameter of ffmpeg which, according to the documentation, “stretches/squeezes” the audio stream to match the timestamps. The parameter takes a numeric value for the samples per second to enforce.

ffmpeg -async 25 -i input.mpg <encoding options> -r 25

Try slowly increasing the -async value until audio and video match.

2. Auto-Correcting Time-Shift

2.1 Audio is ahead

When audio is ahead of video: as a special case the -async switch auto-corrects the start of the audio stream when passed as -async 1. So try running

ffmpeg -async 1 -i input.mpg <encoding options>

2.2 Audio lags behind

Instead of using -async you need to use -vsync to drop/duplicate frames in the video stream. There are two methods in the manual page, “-vsync 1” and “-vsync 2”, and an auto-detection mode with “-vsync -1”. Using “-map” it is also possible to specify the stream to sync against.

ffmpeg -vsync 1 -i input.mpg <encoding options>
ffmpeg -vsync 2 -i input.mpg <encoding options>

Interestingly Google shows people using -async and -vsync together. So it might be worth experimenting a bit to achieve the intended result :-)
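
A combined invocation could look like this (a sketch only, using the same placeholders as above):
ffmpeg -async 1 -vsync 1 -i input.mpg <encoding options>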

3. Manually Correcting Time-Shift

If you have a constantly shifted sound/video track that the previous fix doesn’t work with, but you know the time shift that needs to be corrected, then you can easily fix it with one of the following two commands:

3.1 Audio is ahead

Example to shift by 3 seconds:

ffmpeg -i input.mp4 -itsoffset 00:00:03.0 -i input.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 output_shift3s.mp4

Note how you specify your input file 2 times with the first one followed by a time offset. Later in the command there are two -map parameters which tell ffmpeg to use the time-shifted video stream from the first -i input.mp4 and the audio stream from the second one.

I also added -vcodec copy -acodec copy to avoid re-encoding the video and losing quality. These parameters have to be added after the second input file and before the mapping options. Otherwise one runs into mapping errors.

3.2 Audio lags behind

Again an example to shift by 3 seconds:

ffmpeg -i input.mp4 -itsoffset 00:00:03.0 -i input.mp4 -vcodec copy -acodec copy -map 1:0 -map 0:1 output_shift3s.mp4

Note how the command is nearly identical to the previous command with the exception of the -map parameters being switched. So from the time-shifted first -i input.mp4 we now take the audio instead of the video and combine it with the normal video.

Adding custom ca certificates in openshift

This post documents quite some research through the honestly quite sad Openshift documentation. The content below roughly corresponds to Openshift releases 4.4 to 4.7.

Jenkins search for unmasked passwords

When you run a larger multi-tenant Jenkins instance you might wonder if everyone properly hides secrets from logs. The script below needs to be run as admin and will uncover all unmasked passwords in any pipeline job build:

ffmpeg video transcoding from Nautilus

I want to share this little video conversion script for the GNOME file manager Nautilus. As Nautilus supports custom scripts being executed for selected files, I wrote a script to allow video transcoding from Nautilus.

How to use custom css with jekyll minima theme

When providing this blog with some custom CSS to better format code examples I had trouble applying several of the online suggestions on how to add custom CSS in a Jekyll setup with the Minima theme active.

Helm Best Practices

As a reminder to myself I have compiled this list of opinionated best practices (in no particular order) to follow when using Helm seriously:

Helm Checking Keys

It is quite impressive how hard it is to check a map key in Go templates to do some simple if conditions in your Helm charts or other kubernetes templates.
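
One approach that usually works is Sprig's hasKey function (a sketch; .Values.config and someKey are placeholder names):
{{- if hasKey .Values.config "someKey" }}
  someKey: {{ .Values.config.someKey }}
{{- end }}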

Why decoding aac with ffmpeg doesn't work

Update: The workaround for the problem doesn't work for ffmpeg versions more recent than 20.06.2011 as libfaad support was dropped in favour of the now stable native ffmpeg AAC decoder! If you still have a separate compilation of libfaad you can work around the problem using the "faad" decoder tool as described in this post. If you are using recent ffmpeg versions to decode a .MOV file you might get the following error:
Stream #0.0(eng): Audio: aac, 48000 Hz, 2 channels, s16
Stream #0.1(eng): Video: h264, yuv420p, 1280x530, PAR 1:1 DAR 128:53, 25 tbr, 25 tbn, 50 tbc
Output #0, flv, to 'test.flv':
Stream #0.0(eng): Video: flv (hq), yuv420p, 400x164 [PAR 101:102 DAR 050:2091], 
q=2-31, 300 kb/s, 1k tbn, 25 tbc
Stream #0.1(eng): Audio: libmp3lame, 22050 Hz, 2 channels, s16, 64 kb/s
Stream mapping:
Stream #0.1 -> #0.0
Stream #0.0 -> #0.1
Press [q] to stop encoding
[aac @ 0x80727a0]channel element 1.0 is not allocated
Error while decoding stream #0.0
Error while decoding stream #0.0
Error while decoding stream #0.0
Error while decoding stream #0.0
Error while decoding stream #0.0
Error while decoding stream #0.0
[...]
The message "Error while decoding stream #0.0" is repeated continuously. The resulting video is either unplayable or has no sound. Still the input video is playable in all standard players (VLC, in Windows...). The reason for the problem as I understood it is that the ffmpeg-builtin AAC codec cannot handle an audio stream stream with index "1.0". This is documented in various bugs (see ffmpeg issues #800, #871, #999, #1733...). It doesn't look like this will be handled by ffmpeg very soon. In fact it could well be that they'll handle it as an invalid input file. Solution: Upgrade to latest ffmpeg and faad library version and add " -acodec libfaad " in front of the "-i" switch. This uses the libfaad AAC decoder, which is said to be a bit slower than the ffmpeg-builtin, but which decodes the AAC without complaining. For example:
ffmpeg -acodec libfaad -i input.mov -b 300kbit/s -ar 22050 -o test.flv
The "-acodec" preceding the "-i" option only influences the input audio decoding, not the audio encoding.

Ssh login without interaction

This is a short summary what you need to avoid any type of interaction when accessing a machine by SSH.

Interaction Pitfalls:

  • Known hosts entry is missing.
  • Known hosts entry is incorrect.
  • Public key is incorrect or missing.
  • Keyboard Authentication is enabled when public key failed.
  • stdin is connected and the remote command waits for input.

Here is what you need to do to circumvent everything:

  • Ensure to use the correct public key (if necessary pass it using -i)
  • Ensure to pass "-o UserKnownHostsFile=/dev/null" to avoid termination when the known hosts key has changed (Note: this is highly insecure when used for untrusted machines! But it might make sense in setups without correctly maintained known_hosts)
  • Ensure to pass "-o StrictHostKeyChecking=no" to avoid SSH complaining about missing known host keys (caused by using /dev/null as input).
  • Pass "-o PreferredAuthentications=publickey" to avoid password querying when the public key doesn't work
  • Pass "-n" to avoid remote interaction

    Example command line:

    ssh -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host -n "/bin/ls"
    

Nagios check plugin for nofile limit

Following the recent post on how to investigate limit-related issues, which gave instructions on what to check if you suspect a system limit is being hit, I want to share this Nagios check covering the open file descriptor limit. Note that existing Nagios plugins like this either only check the global limit, only check one application or do not output all problems. So here is my solution which does:
  1. Check the global file descriptor limit
  2. Use lsof to check all processes against their "nofile" hard limit
It has two simple parameters -w and -c to specify a percentage threshold. An example call:
./check_nofile_limit.sh -w 70 -c 85
could result in the following output indicating two problematic processes:
WARNING memcached (PID 2398) 75% of 1024 used
CRITICAL apache (PID 2392) 94% of 4096 used
Here is the check script doing this:
#!/bin/bash
# MIT License
#
# Copyright (c) 2017 Lars Windolf
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

Curl and http 1.1 keepalive test traffic

Curl is really helpful for debugging when, after recognizing a problem, you need to decide whether it is a network issue, a DNS problem, an app server problem or an application performance problem.
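
For a quick keep-alive test you can pass the same URL twice in one curl invocation (a sketch; example.com is a placeholder) and watch the verbose output for connection re-use:
curl -sv -o /dev/null -o /dev/null https://example.com/ https://example.com/
If HTTP 1.1 keep-alive works, the second request is sent over the already established connection instead of opening a new one.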

Filter aws ec2 json with jq

The AWS CLI is fine, but dumping stuff becomes a pain once there is a lot of stuff in your account and you want to extract multiple things that depend on each other.
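
A minimal example of the pattern (a sketch; the field names follow the describe-instances output structure):
aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | .InstanceId + " " + .State.Name'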

Network split test scripts

Today I want to share two simple scripts for simulating a network split and rejoin between two groups of hosts. The split is done by adding per-host network blackhole routes on each host for all hosts of the other group.
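
The core of such a split is just one blackhole route per host of the other group, roughly like this (a sketch; 192.0.2.10 is a placeholder address):
ip route add blackhole 192.0.2.10/32
and the rejoin removes it again:
ip route del blackhole 192.0.2.10/32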

Sequence definitions with kwalify

After a lot of guess-trying on how to define a simple sequence in kwalify (which I use as a JSON/YAML schema validator) I want to share this solution for a YAML schema.

Puppet agent settings issue

Experienced a strange puppet agent 4.8 configuration issue this week. To properly distribute the agent runs over time to even out puppet master load I wanted to configure the splay settings properly. There are two settings:
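As a sketch, such agent settings live in puppet.conf and could look like this (assuming the splay and splaylimit settings; values are placeholders):
[agent]
splay = true
splaylimit = 1800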

Using linux keyring secrets from your scripts

When you write scripts that need to perform remote authentication you don't want to include passwords in plain text in the script itself. And if the credentials are personal credentials you cannot deliver them with the script anyway.
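
With libsecret's secret-tool you can store and look up a password from a script roughly like this (a sketch; the attribute names are arbitrary placeholders, and "store" prompts for the password interactively):
secret-tool store --label='My service' service myservice user lars
password=$(secret-tool lookup service myservice user lars)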

Openshift s2i and spring profiles

When porting Springboot applications to Openshift using S2I (source to image) directly from a git repo you cannot rely on a start script passing the proper -Dspring.profiles.active=<profile name> parameter like this

Openshift ultra Fast bootstrap

Today I want to share some hints on ultra-fast bootstrapping developers to use Openshift. Given that adoption in your organisation depends on developers daring and wanting to use Kubernetes/Openshift I believe showing a clear and easy migration path is the way to go.

Docker disable ext4 journaling

Noteworthy point from the Remind Ops: when running docker containers on ext4 consider disabling journaling. Why? Because a throw-away, almost read-only filesystem doesn't need recovery on crash.
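
Disabling the journal on an existing ext4 filesystem is a one-liner (a sketch; /dev/sdb1 is a placeholder and the filesystem must be unmounted):
tune2fs -O ^has_journal /dev/sdb1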

Puppet dry run

To do a "dry run" in Puppet you need to invoke the agent in noop mode:

How to search confluence for macro usage

When you want to find all pages in Confluence that embed a certain macro you cannot simply use the search field, as it seemingly only searches the resulting content. A normal search query does not check the markup for the macro code.

Solving d3.scale is undefined

When porting older code and examples of d3.js visualizations you might encounter the following exception:

Match structured facts in mcollective

If you are using Facter 2+, which is what you do when you run at least Puppet4, then you have structured facts (meaning nested values) like those:
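for example the "os" fact (a sketch; the exact fields vary by Facter version):
os => {
  family => "Debian",
  name => "Ubuntu",
  release => {
    full => "16.04",
    major => "16.04"
  }
}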

How to fix debsecan for wheezy

Someone at Debian increased security for all Debian servers by breaking debsecan a while ago for everything before Jessie by moving the vulnerability definitions from

More changes after the dyn ddos attack

Looking at NS records again of all scanned top 500 Alexa domains after the recent Dyn DDoS attack, 13 of the 14 Dyn customers which previously relied solely on Dyn have now either switched away entirely or added additional non-Dyn DNS servers.

Changes after the dyn dns outage

Looking at NS records today, some of yesterday's affected companies decided to change things after the DoS on Dyn. As the NS record is not really a customer-facing feature this is more an indication of Dyn's customers' expectations. One could argue switching away from Dyn could mean fear of more downtimes to come.

Whom you can ddos via dyndns

After today's outage of various major websites you might ask who else is affected along with well-known sites such as Amazon, Twitter or Github.

Sharing screen with multiple users

How to detect screen sessions of other users:

screen -ls <user name>/

How to open screen to other users:

  1. Ctrl-A :multiuser on
  2. Ctrl-A :acladd <user to grant access>

Attach to other users screen session:

With session name
screen -x <user name>/<session name>
With PID and tty
screen -x <user name>/<pid>.<ptty>.<host>

Hiera eyaml gpg troubleshooting

When using Hiera + Eyaml + GPG as Puppet configuration backend one can run into a multitude of really bad error messages. The problem here is mostly the obscene layering of libraries, e.g. Eyaml on top of Eyaml-GPG on top of either GPGME or Ruby GPG on top of GnuPG. Most errors originate from/are reported by GnuPG and are badly unspecific.

Workaround openssh 7.0 problems

OpenSSH 7+ deprecates the weak key exchange algorithm diffie-hellman-group1-sha1 and DSA public keys for both host and user keys, which leads to the following error messages:

Scan linux for vulnerable packages

How do you know whether your Linux server (which has no desktop update notifier or unattended security updates running) needs to be updated? Of course an
apt-get update && apt-get --dry-run upgrade
might give an indication. But which of the package upgrades represent security risks and which are only simple bugfixes you do not care about?

Check using APT

One useful possibility is apticron, which will tell you which packages should be upgraded and why. It presents you the package ChangeLog to decide whether you want to upgrade a package or not. Similar but less detailed is cron-apt which also informs you of new package updates.

Analyze Security Advisories

Now with all those CERT newsletters, security mailing lists and even security news feeds out there: why can't we check the other way around? Why not find out:
  1. Which security advisories do affect my system?
  2. Which ones I have already complied with?
  3. And which vulnerabilities are still there?
My mad idea was to take those security news feeds (as a start I tried with the ones from Ubuntu and CentOS) and parse out the package versions and compare them to the installed packages. The result was a script producing the following output: [screenshot of lpvs-scan.pl] In the output you see lines starting with "CEBA-2012-xxxx" which is the CentOS security advisory naming scheme (while Ubuntu has USN-xxxx-x). Yellow color means the security advisory doesn't apply because the relevant packages are not installed. Green means the most recent package version is installed and the advisory shouldn't affect the system anymore. Finally red, of course, means that the machine is vulnerable.

Does it Work Reliably?

The script producing this output can be found here. I'm not yet satisfied with how it works and I'm not sure if it can be maintained at all given the brittle nature of the arbitrarily formatted/rich news feeds provided by the distros. But I like how it gives a clear indication of current advisories and their effect on the system. Maybe persuading the Linux distributions to use a common feed format with easy-to-parse metadata might be a good idea... How do you check your systems? What do you think of a package scanner using XML security advisory feeds?

Gerrit howto remove changes

Sometimes you want to delete a Gerrit change to make it invisible for everyone (for example when you committed an unencrypted secret...). AFAIK this is only possible via the SQL interface which you can enter with
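for example (assuming a Gerrit 2.x setup where the admin-only SSH command "gsql" is available; user and host are placeholders):
ssh -p 29418 admin@gerrit-host gerrit gsql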

SSH ProxyCommand Examples

Use "ProxyCommand" in your ~/.ssh/config to easily access servers hidden behind port knocking and jump hosts.

Nagios plugin for dmesg monitoring

So far I found no easy solution to monitor for Linux kernel messages. So here is a simple Nagios plugin to scan dmesg output for interesting stuff:
#!/bin/bash

SEVERITIES="err,alert,emerg,crit"
WHITELIST="microcode: |\
Firmware Bug|\
i8042: No controller|\
Odd, counter constraints enabled but no core perfctrs detected|\
Failed to access perfctr msr|\
echo 0 > /proc/sys"

# Check for critical dmesg lines from this day
date=$(date "+%a %b %e")
output=$(dmesg -T -l "$SEVERITIES" | egrep -v "$WHITELIST" | grep "$date" | tail -5)

if [ "$output" == "" ]; then
	echo "All is fine."
	exit 0
fi

echo "$output" | xargs
exit 1
"Features" of the script above:
  • It gives you the 5 most recent messages from today
  • It allows to whitelist common but useless errors in $WHITELIST
  • It uses "dmesg" to work when you already have disk I/O errors and to be faster than syslog parsing
This script helped a lot to early on detect I/O errors, recoverable as well as corruptions. It even worked when entire root partition wasn't readable anymore, because then the Nagios check failed with "NRPE: unable to read output" which indicated that dmesg didn't work anymore. By always showing all errors from the entire day one cannot miss recovered errors that happened in non-office hours. Another good thing about the check is detecting OOM kills or fast spawning of processes.

Visualizing configuration drift with polscan

In the last two months I've spent quite some time revising and improving visualizations of a larger number of findings when managing many hosts. Aside from result tables that you can ad-hoc filter and group by some attribute, host maps grouped by certain attributes are the most useful.

Usage scenarios for polscan

The generic sysadmin policy scanner for Debian based distros "Polscan" (https://github.com/lwindolf/polscan) I wrote about recently is coming further along. Right now I am focusing on how to get it really useful in daily work with a lot of systems, which usually means a lot of findings. And the question is: how does the presentation of the findings help you with working on all of them?

Debugging hiera Eyaml encryption, decryption failed

When Hiera works without any problems everything is fine. But when it does not, it is quite hard to debug why. Here is a troubleshooting list for Hiera when used with hiera-eyaml-gpg.

Building a generic sysadmin policy scanner

After writing the same scripts several times I decided it is time for a generic solution to check Debian servers for configuration consistency. As incidents and mistakes happen, each organization collects a set of learnings (let's call them policies) that should be followed in the future. And one important truth is that the free automation and CM tools we use (Chef, Puppet, Ansible, cfengine, Saltstack...) allow us to implement policies, but do not seem to care much about proving the automation is correct.

Debugging dovecot acl shared mailboxes not showing in thunderbird

When you can't get ACL shared mailboxes visible with Dovecot and Thunderbird here are some debugging tips:
  1. Thunderbird fetches the ACLs on startup (and maybe at some other interval). So for testing restart Thunderbird on each change you make.
  2. Ensure the shared mailboxes index can be written. You probably have it configured like
    plugin {
      acl_shared_dict = file:/var/lib/dovecot/db/shared-mailboxes.db
    }
    Check if such a file was created and is populated with new entries when you add ACLs from the mail client. As long as entries do not appear here, nothing can work.
  3. Enable debugging in the dovecot log or use the "debug" flag and check the ACLs for the user who should see a shared mailbox like this:
    doveadm acl debug -u [email protected] shared/users/box
    • Watch out for missing directories
    • Watch out for permission issues
    • Watch out for strangely created paths; this could hint at a misconfigured namespace prefix

The damage of one second

Update: According to the AWS status page the incident was a problem related to BGP route leaking. AWS does not hint at a leap second related incident as originally suggested by this post!

Chef Editing Config Files

Most chef recipes are about installing new software including all config files. Even if they are configuration recipes they usually overwrite the whole file and provide a completely recreated configuration. When you have used cfengine and puppet with augtool before you'll be missing the agile editing of config files.

Puppet apply only specific classes

If you want to apply Puppet changes in a selective manner you can run
puppet apply -t --tags Some::Class
on the client node to only run the single class named "Some::Class". Why does this work? Because Puppet automatically creates tags for all classes you have. Ensure to upper-case all parts of the class name, because even if your actual Ruby class is "some::class" the Puppet tag will be "Some::Class".

Puppet agent noop pitfalls

The puppet agent command has a --noop switch that allows you to perform a dry-run of your Puppet code.
puppet agent -t --noop
It doesn't change anything, it just tells you what it would change. This is more or less exact, due to dependencies that might come into existence through runtime changes. But it is pretty helpful and all Puppet users I know use it from time to time.

Unexpected Things

But there are some unexpected things about the noop mode:
  1. A --noop run does trigger the report server.
  2. The --noop run rewrites the YAML state files in /var/lib/puppet
  3. And there is no state on the local machine that gives you the last "real" run result after you overwrite the state files with the --noop run.

Why might this be a problem?

Or the other way around: why does Puppet think this is not a problem? Probably because Puppet as an automation tool should overwrite and the past state doesn't really matter. If you use PE or Puppet with PuppetDB or Foreman you have reporting for past runs anyway, so there is no need to have a history on the Puppet client.

Why I still do not like it: it prevents safe and simple local Nagios checks. Using the state YAML you might want to build a simple script checking for run errors, because you might want a Nagios alert about all errors that appear, or about hosts that did not run Puppet for quite some time (for example I wanted to disable Puppet on a server for some action and forgot to re-enable it). Such a check reports false positives each time someone does a --noop run until the next normal run. This hides errors.

Of course you can build all this with cool Devops style SQL/REST/... queries to PuppetDB/Foreman, but checking state locally seems a bit more the old-style robust and simpler sysadmin way. Actively asking the Puppet master or report server for the client state seems wrong. The client should know too. From a software usability perspective I do not expect a tool to change its state when I pass --noop. It's unexpected. Of course the documentation is carefully phrased:
Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes.

Getting rid of bash ctrl R

Today was a good day, as I stumbled over this post (at http://architects.dzone.com) hinting at the following bash key bindings:
bind '"\e[A":history-search-backward'
bind '"\e[B":history-search-forward'
It changes the behaviour of the up and down cursor keys to not go blindly through the history but only through items matching the current prompt. Of course this comes at the disadvantage of having to clear the line to go through the full history. But as this can be achieved by a Ctrl-C at any time it is still preferable to Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R ....

Visudo: #includedir sudoers.d

WTF. Today I fell for this sudo madness and uncommented this "comment" in /etc/sudoers:
#includedir /etc/sudoers.d
which gives a
visudo: >>> /etc/sudoers: syntax error near line 28 <<<
Let's check the "sudoers" manpage again: full of EBNF notations! But nothing in the EBNF about comments being commands. At least under Other special characters and reserved words one finds
The pound sign ('#') is used to indicate a comment (unless it is part
of a #include directive or unless it occurs in the context of a user
name and is followed by one or more digits, in which case it is treated
as a uid).  Both the comment character and any text after it, up to the
end of the line, are ignored.
Cannot this be done in a better way?

Strptime() implementation in javascript

If you need a simple strptime() implementation for Javascript feel free to use the following. I needed this for more sane date formatting in SpurTracer. If you find this useful or find bugs please post a comment!
// Copyright (c) 2012 Lars Lindner <[email protected]>
//
// GPLv2 and later or MIT License - http://www.opensource.org/licenses/mit-license.php

var dayName = new Array("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun");
var monthName = new Array("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec");
   
/* simulates some of the format strings of strptime() */
function strptime(format, date) {
	var last = -2;
	var result = "";
	var hour = date.getHours();

	/* Expand aliases */
	format = format.replace(/%D/, "%m/%d/%y");
	format = format.replace(/%R/, "%H:%M");
	format = format.replace(/%T/, "%H:%M:%S");

	/* Note: we fail on strings without format characters */

	while(1) {
		/* find next format char */
		var pos = format.indexOf('%', last + 2);

		if(-1 == pos) {
			/* dump rest of text if no more format chars */
			result += format.substr(last + 2);
			break;
		} else {
			/* dump text after last format code */
			result += format.substr(last + 2, pos - (last + 2));

			/* apply format code */
			formatChar = format.charAt(pos + 1);
			switch(formatChar) {
				case '%':
					result += '%';
					break;
				case 'C':
					/* %C is the century (first two digits of the year) */
					result += Math.floor(date.getFullYear() / 100);
					break;
				case 'H':
				case 'k':
					if(hour < 10) result += "0";
					result += hour;
					break;
				case 'M':
					if(date.getMinutes() < 10) result += "0";
					result += date.getMinutes();
					break;
				case 'S':
					if(date.getSeconds() < 10) result += "0";
					result += date.getSeconds();
					break;
				case 'm':
					/* getMonth() is zero-based, %m is 1-12 */
					if(date.getMonth() + 1 < 10) result += "0";
					result += (date.getMonth() + 1);
					break;
				case 'a':
				case 'A':
					/* getDay() is 0 for Sunday, dayName starts with Monday */
					result += dayName[(date.getDay() + 6) % 7];
					break;
				case 'b':
				case 'B':
				case 'h':
					result += monthName[date.getMonth()];
					break;
				case 'Y':
					result += date.getFullYear();
					break;
				case 'd':
				case 'e':
					if(date.getDate() < 10) result += "0";
					result += date.getDate();
					break;
				case 'w':
					result += date.getDay();
					break;
				case 'p':
				case 'P':
					if(hour < 12) {
						result += "am";
					} else {
						result += "pm";
					}
					break;
				case 'l':
				case 'I':
					/* 12-hour clock: hour 0 must map to 12 */
					var hour12 = hour % 12;
					if(hour12 == 0) hour12 = 12;
					if(hour12 < 10) result += "0";
					result += hour12;
					break;
			}
		}
		last = pos;
	}
	return result;
}

Sqlite commands

Sqlite Command Line Client

You can access any sqlite database file, as long as there is no other client locking it, using the command line client to perform SQL commands:
sqlite3 <database file>

Sqlite List Schema

To list the schema of a sqlite database run
SELECT name FROM sqlite_master;
It will dump all schema elements. One field of the result table contains the SQL used to create each schema element. If you only want a list of tables use the client command ".tables"
.tables

Sqlite Export/Dump Database as SQL

To dump the entire database content in SQL for example for a backup run:
sqlite3 <database file> .dump >output.sql
See the next sections for how to dump a single table as SQL or CSV.

Sqlite Dump Table as SQL

To dump the SQL to create a table and its values run the command line client using the ".dump" command and redirect the output:
sqlite3 <database file> ".dump <table name>" >output.sql

Sqlite Dump Table as CSV

To dump a table as CSV run the command line client using the ".mode" command to enable CSV output and then perform a "SELECT" on the table you want to dump. By echoing this to the CLI you can easily redirect the output:
echo ".mode csv
select * from <table name>;" | sqlite3 <database file> >output.csv

Sqlite Cleanup with Vacuum

To run a one time cleanup just run the following command on an sqlite database file. Ensure there is no program accessing the database file. If there is it will fail and do nothing:

sqlite3 my.db "VACUUM;"

Sqlite Configure Auto-Vacuum

If you want sqlite to perform vacuum on-demand you can set the "auto_vacuum" pragma to either "INCREMENTAL" or "FULL":
PRAGMA auto_vacuum = INCREMENTAL;
PRAGMA auto_vacuum = FULL;
To query the current "auto_vacuum" setting run
PRAGMA auto_vacuum;
Read more in this detailed post about sqlite vacuuming!

Rsync: buffer_append_space: alloc 10512504 not supported

If an rsync call gives you the following:
buffer_append_space: alloc 10512504 not supported
rsync: writefd_unbuffered failed to write 4092 bytes [sender]: Broken pipe (32)
rsync: connection unexpectedly closed (36 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(635) [sender=3.0.3]
don't bother to debug rsync! This is an SSH problem. You need to upgrade your SSH version (which is probably some 4.3 or older).

Pgbouncer "pooler error: auth failed"

If connections to your pgbouncer setup fail with "Pooler Error: Auth failed" check the following configuration values in your pgbouncer.ini
  • auth_file = ... : Ensure to point this path to your pg_auth file in your Postgres setup.
  • auth_type = ... : Ensure to set the correct authentication type. E.g. "md5" for MD5 hashed passwords.
  • Check if your pg_auth file has the needed password entries.
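A minimal pgbouncer.ini sketch with the settings above (paths and values are placeholders):
[pgbouncer]
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt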

Libfaac 1.28 compilation fails with: mpeg4ip.h:126:58: error: new declaration ‘char* strcasestr(const char*, const char*)’

When compiling libfaac with GCC you get:
g++ -DHAVE_CONFIG_H -I. -I../.. -I../../include -Wall -g -O2 -MT 3gp.o -MD -MP -MF .deps/3gp.Tpo -c -o 3gp.o 3gp.cpp
In file included from mp4common.h:29:0,
                 from 3gp.cpp:28:
mpeg4ip.h:126:58: error: new declaration ‘char* strcasestr(const char*, const char*)’
/usr/include/string.h:369:28: error: ambiguates old declaration ‘const char* strcasestr(const char*, const char*)’
make[3]: *** [3gp.o] Error 1
The solution is to remove the declaration of strcasestr() in common/mp4v2/mpeg4ip.h (suggested here).

Flvtool2 1.0.6 bugs

Crash Variant #1
Sometimes flvtool2 1.0.6 crashes on FLVs created by ffmpeg or mencoder. The FLV video itself is playable without the metadata and looks fine, still flvtool2 crashes like this:
$ flvtool2 -kUP -metadatacreator:'some label' video.flv
ERROR: EOFError
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/amf_string_buffer.rb:37:in `read'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/amf_string_buffer.rb:243:in `read__STRING'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/audio_tag.rb:56:in `read_header'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/audio_tag.rb:47:in `after_initialize'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/tag.rb:56:in `initialize'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/stream.rb:447:in `new'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/stream.rb:447:in `read_tags'
ERROR: /usr/lib/ruby/site_ruby/1.8/flv/stream.rb:58:in `initialize'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:272:in `new'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:272:in `open_stream'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:238:in `process_files'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:225:in `each'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:225:in `process_files'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:44:in `execute!'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2.rb:168:in `execute!'
ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2.rb:228
ERROR: /usr/bin/flvtool2:2:in `require'
ERROR: /usr/bin/flvtool2:2
$
In the Wowza Media Server support forum is a hint on how to patch flvtool2 to solve the issue:
--- /usr/local/lib/site_ruby/1.8/flv/audio_tag.rb	2009-11-12 10:46:13.000000000 +0100
+++ lib/flv/audio_tag.rb	2010-03-17 11:25:35.000000000 +0100
@@ -44,7 +44,9 @@
     
     def after_initialize(new_object)
       @tag_type = AUDIO
-      read_header
+      if data_size > 0
+      	read_header
+      end
     end
 
     def name
Crash Variant #2
Here is another crashing variant:
$ flvtool2 -kUP -metadatacreator:'some label' video.flv
ERROR: EOFError
ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:37:in `read'
ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:75:in `read__AMF_string'
ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:90:in `read__AMF_mixed_array'
ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:134:in `read__AMF_data'
ERROR: /usr/local/lib/site_ruby/1.8/flv/meta_tag.rb:40:in `after_initialize'
ERROR: /usr/local/lib/site_ruby/1.8/flv/tag.rb:56:in `initialize'
ERROR: /usr/local/lib/site_ruby/1.8/flv/stream.rb:451:in `new'
ERROR: /usr/local/lib/site_ruby/1.8/flv/stream.rb:451:in `read_tags'
ERROR: /usr/local/lib/site_ruby/1.8/flv/stream.rb:58:in `initialize'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:272:in `new'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:272:in `open_stream'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:238:in `process_files'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:225:in `each'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:225:in `process_files'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:44:in `execute!'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2.rb:168:in `execute!'
ERROR: /usr/local/lib/site_ruby/1.8/flvtool2.rb:228
ERROR: /usr/local/bin/flvtool2:2:in `require'
ERROR: /usr/local/bin/flvtool2:2
$
I have not yet found a solution... Update: I have found crash variant #2 to often happen with larger files. Using flvtool++ instead of flvtool2 always solved the problem. Using flvtool++ is also a good idea as it is much faster than flvtool2. Still both tools have their problems. More about this in the Comparison of FLV and MP4 metadata tagging tools.

Ffmpeg aac "can not resample 6 channels..."

When you try to encode with ffmpeg and you end up with such an error
Resampling with input channels greater than 2 unsupported.
Can not resample 6 channels @ 48000 Hz to 6 channels @ 48000
you are probably trying to encode from AAC with 5.1 audio to less than 6 channels or different audio sampling rate. There are three solutions:
  1. As a solution either keep the number of audio channels and the audio sampling rate unchanged, or convert the audio with faad first.
  2. Apply one of the available ffmpeg patches to fix the AAC 6 channel issue...
  3. Split video and audio and convert audio separately.
The third solution can be done as following:
  1. Extract audio with ffmpeg:
    ffmpeg -y -i source.avi -acodec copy source.6.aac
  2. Convert audio with faad:
    faad -d -o source.2.pcm source.6.aac
  3. Merge video and audio again with ffmpeg:
    ffmpeg -y -i source.avi -i source.2.pcm -map 0:0 -map 1:0 -vcodec copy -acodec copy output.avi
Update: As hinted by a fellow commenter the big disadvantage is the quality loss as faad can only convert into PCM 16bit.

Ffmpeg + mt + svq3 video = argh

Try decoding a video with the SVQ3 video codec with multithreading enabled (e.g. -threads 4) and ffmpeg r25526 simply refuses to decode it:
Stream #0.0(eng): Video: svq3, yuvj420p, 640x476, 1732 kb/s, 25 fps, 25 tbr, 600 tbn, 600 tbc
...
[svq3 @ 0x806bfe0] SVQ3 does not support multithreaded decoding, patch welcome! (check latest SVN too)
...
Error while opening decoder for input stream #0.0
Instead of simply using only one thread and just working, ffmpeg bails out. What a pain. You need to specify "-threads 1" or no threads option at all for decoding to work.

/etc/sudoers.d/ pitfalls

From the sudoers manpage:
[...] sudo will read each file in /etc/sudoers.d, skipping file names 
that end in ~ or contain a . character to avoid causing problems with 
package manager or editor temporary/backup files. [...]
This means if you have a Unix user like "lars.windolf" you do not want to create a file
/etc/sudoers.d/lars.windolf
The evil thing is neither sudo nor visudo warns you about the mistake and the rules just do not work. And if you have some other definition files with the same rule and just a file name without a dot you might wonder about your sanity :-)

Chef Server "failed to authenticate."

If your chef GUI suddenly stops working and you see something like the following exception in both server.log and server-webui.log:
merb : chef-server (api) : worker (port 4000) ~ Failed to authenticate. Ensure that your client key is valid. - (Merb::ControllerExceptions::Unauthorized)
/usr/share/chef-server-api/app/controllers/application.rb:56:in `authenticate_every'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:352:in `send'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:352:in `_call_filters'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:344:in `each'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:344:in `_call_filters'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:286:in `_dispatch'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:284:in `catch'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:284:in `_dispatch'
[...]
Then try stopping all chef processes, remove
/etc/chef/webui.pem
/etc/chef/validation.pem
and start everything again. It will regenerate the keys. The downside is that you have to
knife configure -i
all your knife setup locations again.

Why nm Applet does not show up

When searching online for answers on why the Network Manager applet doesn't show up in the GNOME notification area one finds hundreds of confused forum posts (mostly Ubuntu). There are only two reasons:
  1. Your Network Manager setup is somehow broken
  2. There is no network device to manage
The second case is probably what is going on in most cases. When you check your /etc/network/interfaces and see something like:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
... it cannot work; for Network Manager to manage your connections it needs to look like:
auto lo
iface lo inet loopback
Restart Network Manager (e.g. "/etc/init.d/network-manager restart") for the nm-applet icon to show up.

When rsync Delete doesn't work

David Grant explains why rsync might silently refuse to delete anything. It happens when you call rsync without a trailing slash in the source path like this:
rsync -az -e ssh --delete /data server:/data
It just won't delete anything. It will when running it like this:
rsync -az -e ssh --delete /data/ server:/data

Use a graphical linux editor for vcs commits

1. Using gedit

If your main editor is the graphical GNOME editor gedit you can also use it when doing version control system commits, crontab edits, visudo and other things on the command line. All you need is to set it in the $EDITOR environment variable. The only thing you need to be careful about is whether the editor detaches from the terminal right after starting. This must not happen as the calling command (e.g. "git commit -a") would never know when the editing is finished. So for gedit you have to add the "-w" switch to keep it waiting.
export EDITOR="gedit -w"

2. Using Emacs

For XEmacs simply set
export EDITOR=xemacs
With Emacs itself you also have the possibility to use server mode by running emacs-client (see "Starting emacs automatically" in the EmacsWiki). To do so set
export ALTERNATE_EDITOR=emacs EDITOR=emacsclient

3. Using Other Editors

If you use the good old Nirvana Editor nedit you can simply
export EDITOR=nedit
the same goes for the KDE editor kate:
export EDITOR=kate
and if you want it really hurting try something like this:
export EDITOR=eclipse
export VISUAL=oowriter

4. $EDITOR and sudo

When using sudo you need to pass $EDITOR along to the root environment. This can be done by using "sudo -e" e.g.
sudo -e visudo
Whether passing the environment to root is a good idea might be a good question though... Have fun!

Ubuntu, apache and ulimit

Setting ulimits on Ubuntu is strange enough, as upstart ignores /etc/security/limits.conf. You need to use the "limit" stanza to change any limit. But try it with Apache and it won't help, as Debian invented another way to ensure Apache ulimits can be changed. So you need to always check
/etc/apache2/envvars
which in a fresh installation contains a line
#APACHE_ULIMIT_MAX_FILES="ulimit -n 65535"
Uncomment it to set any max file limit you want. Restart Apache and verify the process limit in /proc/<pid>/limits which should give you something like
$ egrep "^Limit|Max open files" /proc/3643/limits
Limit Soft Limit Hard Limit Units 
Max open files 1024 4096 files
$

Ubuntu 12.04 on xen: blkfront: barrier: empty write xvda op failed

When you run an Ubuntu 12.04 VM on XenServer 6.0 (kernel 2.6.32.12) you can get the following errors and your local file systems will mount read-only. Also see Debian #637234 or Ubuntu #824089.
blkfront: barrier: empty write xvda op failed
blkfront: xvda: barrier or flush: disabled
You also won't be able to remount read-write using "mount -o remount,rw /" as this will give you a kernel error like this
EXT4-fs (xvda1): re-mounted. Opts: errors=remount-ro
This problem more or less sporadically affects paravirtualized Ubuntu 12.04 VMs. Note that officially Ubuntu 12.04 is not listed as a supported system in the Citrix documentation. Note that this problem doesn't affect fully virtualized VMs. The Solution:
  • According to the Debian bug report the correct solution is to upgrade the dom0 kernel to at least 2.6.32-41.
  • To solve the problem without upgrading the dom0 kernel: reboot until you get a writable filesystem and add "barrier=0" to the mount options of all your local filesystems (see the fstab example below).
  • Alternatively: do not use paravirtualization :-)
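A corresponding /etc/fstab line could look like this (a sketch; device and remaining options are placeholders):
/dev/xvda1  /  ext4  barrier=0,errors=remount-ro  0  1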

The Debian/Ubuntu ulimit check list

This is a check list of everything you can do wrong when trying to set limits on Debian/Ubuntu. The hints might apply to other distros too, but I didn't check. If you have additional suggestions please leave a comment!

Always Check Effective Limit

The best way to check the effective limits of a process is to dump

/proc/<pid>/limits
which gives you a table like this
Limit  Soft Limit Hard Limit Units 
Max cpu time unlimited unlimited seconds 
Max file size unlimited unlimited bytes 
Max data size unlimited unlimited bytes 
Max stack size 10485760 unlimited bytes 
Max core file size 0 unlimited bytes 
Max resident set unlimited unlimited bytes 
Max processes 528384 528384 processes 
Max open files 1024 1024 files 
Max locked memory 32768 32768 bytes 
Max address space unlimited unlimited bytes 
Max file locks unlimited unlimited locks 
Max pending signals 528384 528384 signals 
Max msgqueue size 819200 819200 bytes 
Max nice priority 0 0 
Max realtime priority 0 0 
Running "ulimit -a" in the shell of the respective user rarely tells something because the init daemon responsible for launching services might be ignoring /etc/security/limits.conf as this is a configuration file for PAM only and is applied on login only per default.

Do Not Forget The OS File Limit

If you suspect a limit hit on a system with many processes also check the global limit:
$ cat /proc/sys/fs/file-nr
7488	0	384224
$
The first number is the number of all open files of all processes, the third is the maximum. If you need to increase the maximum:
# sysctl -w fs.file-max=500000
Ensure to persist this in /etc/sysctl.conf to not lose it on reboot.
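The corresponding /etc/sysctl.conf line:
fs.file-max = 500000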

Check "nofile" Per Process

Just checking the number of files per process often helps to identify bottlenecks. For every process you can count open files using lsof:

lsof -n -p <pid> | wc -l
So a quick check on a burning system might be:
lsof -n 2>/dev/null | awk '{print $1 " (PID " $2 ")"}' | sort | uniq -c | sort -nr | head -25
which returns the top 25 file descriptor eating processes
 139 mysqld (PID 2046)
 105 httpd2-pr (PID 25956)
 105 httpd2-pr (PID 24384)
 105 httpd2-pr (PID 24377)
 105 httpd2-pr (PID 24301)
 105 httpd2-pr (PID 24294)
 105 httpd2-pr (PID 24239)
 105 httpd2-pr (PID 24120)
 105 httpd2-pr (PID 24029)
 105 httpd2-pr (PID 23714)
 104 httpd2-pr (PID 3206)
 104 httpd2-pr (PID 26176)
 104 httpd2-pr (PID 26175)
 104 httpd2-pr (PID 26174)
 104 httpd2-pr (PID 25957)
 104 httpd2-pr (PID 24378)
 102 httpd2-pr (PID 32435)
 53 sshd (PID 25607)
 49 sshd (PID 25601)
The same, more comfortably, including the hard limit:
lsof -n 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c | sort -nr | head -25 | while read nr name pid ; do printf "%10d / %-10d %-15s (PID %5s)\n" $nr $(cat /proc/$pid/limits | grep 'open files' | awk '{print $5}') $name $pid; done
returns
 105 / 1024 httpd2-pr (PID 5368)
 105 / 1024 httpd2-pr (PID 3834)
 105 / 1024 httpd2-pr (PID 3407)
 104 / 1024 httpd2-pr (PID 5392)
 104 / 1024 httpd2-pr (PID 5378)
 104 / 1024 httpd2-pr (PID 5377)
 104 / 1024 httpd2-pr (PID 4035)
 104 / 1024 httpd2-pr (PID 4034)
 104 / 1024 httpd2-pr (PID 3999)
 104 / 1024 httpd2-pr (PID 3902)
 104 / 1024 httpd2-pr (PID 3859)
 104 / 1024 httpd2-pr (PID 3206)
 102 / 1024 httpd2-pr (PID 32435)
 55 / 1024 mysqld (PID 2046)
 53 / 1024 sshd (PID 25607)
 49 / 1024 sshd (PID 25601)
 46 / 1024 dovecot-a (PID 1869)
 42 / 1024 python (PID 1850)
 41 / 1048576 named (PID 3130)
 40 / 1024 su (PID 25855)
 40 / 1024 sendmail (PID 3172)
 40 / 1024 dovecot-a (PID 14057)
 35 / 1024 sshd (PID 3160)
 34 / 1024 saslauthd (PID 3156)
 34 / 1024 saslauthd (PID 3146)

Upstart doesn't care about limits.conf!

The most common mistake is believing upstart behaves like the Debian init script handling. When on Ubuntu a service is being started by upstart /etc/security/limits.conf will never apply! To get upstart to change the limits of a managed service you need to insert a line like

limit nofile 10000 20000
into the upstart job file in /etc/init.

When Changing /etc/security/limits.conf Re-Login!

After you apply a change to /etc/security/limits.conf you need to log in again to have the change applied to your next shell instance by PAM. Alternatively you can use sudo -i to switch to the user whose limits you modified and simulate a login.
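For example (the username is a placeholder):
sudo -i -u someuser
ulimit -n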

It never works with start-stop-daemon

Do not expect ulimits to work with init scripts using start-stop-daemon. In such cases add "ulimit" statements before any start-stop-daemon invocation in the init script!
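A sketch of what this looks like in a typical Debian init script (the variable names follow the usual /etc/init.d skeleton and are placeholders):
ulimit -n 65536
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS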

Special Debian Apache Handling

The Debian Apache package which is also included in Ubuntu has a separate way of configuring "nofile" limits. If you run the default Apache in 12.04 and check /proc/<pid>/limits of an Apache process you'll find it is allowing 8192 open file handles, no matter what you configured elsewhere. This is because Apache defaults to 8192 files. If you want another setting for "nofile" then you need to edit /etc/apache2/envvars.

For Emergencies: prlimit!

Starting with util-linux-2.21 there will be a new "prlimit" tool which allows you to easily get/set limits for running processes. Sadly Debian is and will be for some time on util-linux-2.20. So what do we do in the meantime? The prlimit(2) manpage which is for the system call prlimit() gives a hint: at the end of the page there is a code snippet to change the CPU time limit. You can adapt it to any limit you want by replacing RLIMIT_CPU with any of
  • RLIMIT_NOFILE
  • RLIMIT_OFILE
  • RLIMIT_AS
  • RLIMIT_NPROC
  • RLIMIT_MEMLOCK
  • RLIMIT_LOCKS
  • RLIMIT_SIGPENDING
  • RLIMIT_MSGQUEUE
  • RLIMIT_NICE
  • RLIMIT_RTPRIO
  • RLIMIT_RTTIME
  • RLIMIT_NLIMITS
You might want to check "/usr/include/$(uname -i)-linux-gnu/bits/resource.h". Check the next section for a ready-made example for "nofile".

Build Your Own set_nofile_limit

The per-process limit most often hit is probably "nofile". Imagine your production database suddenly running out of file handles, and imagine a tool that can fix it instantly without restarting the DB! Copy the following code to a file "set_limit_nofile.c":

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);    /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set the new "nofile" limit (if given) and retrieve the previous one */
    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
           (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Read the limit again to display the now effective values */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
           (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);
}
and compile it with
gcc -o set_limit_nofile set_limit_nofile.c
And now you have a tool to change any process's "nofile" limit. To dump the current limit just pass a PID:
$ ./set_limit_nofile 17006
Previous limits: soft=1024; hard=1024
New limits: soft=1024; hard=1024
$
To change limits pass PID and two limits:
# ./set_limit_nofile 17006 1500 1500
Previous limits: soft=1024; hard=1024
New limits: soft=1500; hard=1500
# 
And the production database is saved.

Static code analysis of any autotools project with oclint

The following is a HowTo describing the setup of OCLint for any C/C++ project using autotools.

1. OCLint Setup

The first step is downloading OCLint. As there are no packages so far, it's just a matter of extracting the tarball somewhere in $HOME. Check out the latest release link on http://archives.oclint.org/releases/.
cd
wget "http://archives.oclint.org/releases/0.8/oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz"
tar zxvf oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz 
This should leave you with a copy of OCLint in ~/oclint-0.8.1

2. Bear Setup

As projects usually consist of a lot of source files in different subdirectories it is hard for a linter to know where to look for files. While "cmake" has support for dumping a list of the source files it processes during a run, "make" doesn't. This is where the "Bear" wrapper comes into play: instead of
make
you run
bear make
so "bear" can track all files being compiled. It will dump a JSON file "compile_commands.json" which OCLint can use to do analysis of all files. To setup Bear do the following
cd
git clone https://github.com/rizsotto/Bear.git
cd Bear
cmake .
make

3. Analyzing Code

Now we have all the tools we need. Let's download some autotools project like Liferea. Before doing code analysis it should be downloaded and built at least once:
git clone https://github.com/lwindolf/liferea.git
cd liferea
sh autogen.sh
make
Now we collect all code file compilation instructions with bear:
make clean
bear make
And if this succeeds we can start a complete analysis with
~/oclint-0.8.1/bin/oclint-json-compilation-database
which will run OCLint with the input from "compile_commands.json" produced by "bear". Don't call "oclint" directly as you'd need to pass all compile flags manually. If all went well you should see code analysis lines like these:
[...]
conf.c:263:9: useless parentheses P3 
conf.c:274:9: useless parentheses P3 
conf.c:284:9: useless parentheses P3 
conf.c:46:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 33 exceeds limit of 10
conf.c:157:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 12 exceeds limit of 10
conf.c:229:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 30 exceeds limit of 10
conf.c:78:1: long method P3 Method with 55 lines exceeds limit of 50
conf.c:50:2: short variable name P3 Variable name with 2 characters is shorter than the threshold of 3
conf.c:52:2: short variable name P3 Variable name with 1 characters is shorter than the threshold of 3
[...]

Splunk cheat sheet

Basic Searching Concepts

Simple searches look like the following examples. Note that there are literals with and without quoting and that there are field selections with an "=":
Exception                # just the word
One Two Three            # those three words in any order
"One Two Three"          # the exact phrase

# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal

Basic Filtering

Two important filters are "rex" and "regex". "rex" is for extracting a pattern and storing it as a new field. This is why you need to specify a named extraction group in Perl-like manner "(?P<FIELDNAME>...)", for example
source="some.log" Fatal | rex "(?i) msg=(?P<FIELDNAME>[^,]+)"
When running the above query check the list of "interesting fields": it now should have an entry "FIELDNAME" listing you the top 10 fatal messages from "some.log". What is the difference to "regex" now? Well, "regex" is like grep. Actually you can rephrase
source="some.log" Fatal
to
source="some.log" | regex _raw=".*Fatal.*"
and get the same result. The syntax of "regex" is simply "<field>=<regex>". Using it makes sense once you want to filter for a specific field.

Calculations

Sum up a field and do some arithmetics:
... | stats sum(<field>) as result | eval result=(result/1000)
Determine the size of log events by checking len() of _raw. The p10() and p90() functions are returning the 10 and 90 percentiles:
| eval raw_len=len(_raw) | stats avg(raw_len), p10(raw_len), p90(raw_len) by sourcetype

Simple Useful Examples

Splunk usually auto-detects access.log fields so you can do queries like:
source="/var/log/nginx/access.log" HTTP 500
source="/var/log/nginx/access.log" HTTP (200 or 30*)
source="/var/log/nginx/access.log" status=404 | sort - uri 
source="/var/log/nginx/access.log" | head 1000 | top 50 clientip
source="/var/log/nginx/access.log" | head 1000 | top 50 referer
source="/var/log/nginx/access.log" | head 1000 | top 50 uri
source="/var/log/nginx/access.log" | head 1000 | top 50 method
...

Emailing Results

By appending "sendemail" to any query you get the result by mail!
... | sendemail to="[email protected]"

Timecharts

Create a timechart from a single field that should be summed up
... | table _time, <field> | timechart span=1d sum(<field>)
... | table _time, <field>, name | timechart span=1d sum(<field>) by name

Index Statistics

List All Indices
 | eventcount summarize=false index=* | dedup index | fields index
 | eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
 | REST /services/data/indexes | table title
 | REST /services/data/indexes | table title splunk_server currentDBSizeMB frozenTimePeriodInSecs maxTime minTime totalEventCount
on the command line you can call
$SPLUNK_HOME/bin/splunk list index
To query the amount of data written per index, metrics.log can be used:
index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series
MB per day per indexer / index
index=_internal metrics kb series!=_* "group=per_host_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

index=_internal metrics kb series!=_* "group=per_index_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

Solving chef Client errors

Problem:
merb : chef-server (api) : worker (port 4000) ~ Connection refused - connect(2) - (Errno::ECONNREFUSED)
Solution: Check why solr is not running and start it
/etc/init.d/chef-solr start

Problem:
merb : chef-server (api) : worker (port 4000) ~ Net::HTTPFatalError: 503 "Service Unavailable" - (Chef::Exceptions::SolrConnectionError)
Solution: You need to check solr log for error. You can find
  • the access log in /var/log/chef/2013_03_01.jetty.log (adapt the date)
  • the solr error log in /var/log/chef/solr.log
Hopefully you find an error trace there.
Problem:
# chef-expander -n 1
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- http11_client (LoadError)
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/em-http.rb:8:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/em-http-request.rb:1:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/solrizer.rb:24:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/vnode.rb:26:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/vnode_supervisor.rb:28:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/cluster_supervisor.rb:25:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/bin/chef-expander:27:in `
'
Solution: This is a gems dependency issue with the HTTP client gem. Read about it here: http://tickets.opscode.com/browse/CHEF-3495. You might want to check the active Ruby version you have on your system e.g. on Debian run
update-alternatives --config ruby
to find out and change it. Note that the emhttp package from Opscode might require a special version. You can check by listing the package files:
dpkg -L libem-http-request-ruby
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/libem-http-request-ruby
/usr/share/doc/libem-http-request-ruby/changelog.Debian.gz
/usr/share/doc/libem-http-request-ruby/copyright
/usr/lib
/usr/lib/ruby
/usr/lib/ruby/vendor_ruby
/usr/lib/ruby/vendor_ruby/em-http.rb
/usr/lib/ruby/vendor_ruby/em-http-request.rb
/usr/lib/ruby/vendor_ruby/em-http
/usr/lib/ruby/vendor_ruby/em-http/http_options.rb
/usr/lib/ruby/vendor_ruby/em-http/http_header.rb
/usr/lib/ruby/vendor_ruby/em-http/client.rb
/usr/lib/ruby/vendor_ruby/em-http/http_encoding.rb
/usr/lib/ruby/vendor_ruby/em-http/multi.rb
/usr/lib/ruby/vendor_ruby/em-http/core_ext
/usr/lib/ruby/vendor_ruby/em-http/core_ext/bytesize.rb
/usr/lib/ruby/vendor_ruby/em-http/mock.rb
/usr/lib/ruby/vendor_ruby/em-http/decoders.rb
/usr/lib/ruby/vendor_ruby/em-http/version.rb
/usr/lib/ruby/vendor_ruby/em-http/request.rb
/usr/lib/ruby/vendor_ruby/1.8
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/em_buffer.so
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/http11_client.so
The listing above for example indicates ruby1.8.

Solving 100% cpu usage of chef beam.smp (rabbitmq)

Search for the chef 100% CPU issue and you will find a lot of suggestions, ranging from rebooting the server, to restarting RabbitMQ, to checking the kernel max file limit. None of those help! What does help is checking RabbitMQ with
rabbitmqctl report | grep -A3 file_descriptors
and have a look at the printed limits and usage. Here is an example:
 {file_descriptors,[{total_limit,8900},
                    {total_used,1028},
                    {sockets_limit,8008},
                    {sockets_used,2}]},
In my case the 100% CPU usage was caused by all of the file handles being used up, which for some reason causes RabbitMQ 2.8.4 to go into a crazy endless loop, rarely responding at all. The "total_limit" value is the "nofile" limit for the maximum number of open files, which you can check using "ulimit -n" as the RabbitMQ user. Increase it permanently by defining a RabbitMQ-specific limit, for example in /etc/security/limits.d/rabbitmq.conf:
rabbitmq    soft   nofile   10000
or using for example
ulimit -n 10000
from the start script or login scripts. Then restart RabbitMQ. The CPU usage should be gone. Update: This problem only affects RabbitMQ releases up to 2.8.4 and should be fixed starting with 2.8.5.

Solaris administration commands

This is a list of non-trivial Solaris commands and can be used as a cheat sheet or link collection. If you find errors or want to add something please post a comment below!

Debugging

  • mdb - Analysing core files:
    $ mdb core.xxxx        # Open core file
    > ::status             # Print core summary
    
    > ::stacks             # Backtrace
    > ::stacks -v          # Backtrace verbose
    
    > ::quit
    
  • Changing Solaris Kernel Parameters:
    # mdb -kw
    > maxusers/W 500
    > $q
    
  • Library Dependencies of a running process:
    pldd <pid>
  • Details of Memory Usage - pmap:
    # pmap 19463
    19463:  -sh
    08047000       4K rw---    [ stack ]
    08050000      76K r-x--  /sbin/sh
    08073000       4K rw---  /sbin/sh
    08074000      16K rw---    [ heap ]
    FEE60000      24K r-x--  /lib/libgen.so.1
    FEE76000       4K rw---  /lib/libgen.so.1
    FEE80000    1072K r-x--  /lib/libc.so.1
    FEF90000      24K rwx--    [ anon ]
    FEF9C000      32K rw---  /lib/libc.so.1
    FEFA4000       8K rw---  /lib/libc.so.1
    FEFC4000     156K r-x--  /lib/ld.so.1
    FEFF0000       4K rwx--    [ anon ]
    FEFF7000       4K rwxs-    [ anon ]
    FEFFB000       8K rwx--  /lib/ld.so.1
    FEFFD000       4K rwx--  /lib/ld.so.1
     total      1440K
    
  • kstat API: Accessing Solaris kernel statistics using C-API
  • infocmp - Compare terminal settings: This is not Solaris specific, but you need it quite often.
    infocmp -L
  • DTraceToolkit: Useful dtrace scripts for all types of debugging tasks.
  • How to kill a Solaris machine

Network

  • snoop vs. tcpdump: How to use snoop:
    snoop -v -d qfe0 -x0 host 192.168.1.87
    snoop -v -d qfe0 -x0 port 22
    
  • Show installed NICs:
    dladm show-dev 
    dladm show-link
  • iSCSI on Solaris
  • Find unknown NIC: When you do not know the network interface name and don't want to guess: simply plumb all unplumbed NICs with
    ifconfig plumb -a

Legacy

  • Extend 256 file descriptor limit for 32bit binaries: This requires preloading a helper library
    % ulimit -n 256
    
    % echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'  # determine allowed maximum
    65536
    
    % ulimit -n 65536
    
    % export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
  • Determine if Solaris is 32 or 64 bit:
    isainfo -b

Monitoring

  • SEtoolkit: Performance data gathering script collection based on orcallator.
  • Orcallator: Provides a variety of Solaris specific probes.
  • NICstat: Source (C) for monitoring NICs in Solaris. A vmstat/iostat-like command line client.
  • Munin on Solaris

Package Installation

  • Resolve File to Package:
    pkgchk -l -p /usr/bin/ls

Service Management

  • svcs - List Service Infos
    svcs -a              # List all installed services and their current state
    svcs -d <service>    # List all dependencies of a service
    svcs -D <service>    # List who is depending on a service
    svcs -xv             # List why something is failed
    
  • svcadm - Control Services
    svcadm enable <service>
    svcadm disable <service>
    svcadm refresh <service>    # like init reload
    svcadm restart <service>
    
    svcadm clear <service>      # Clear errors: try starting again...
    

General

  • Jumpstart HowTo
  • SUNWdhcs DHCPd Setup
  • Sun Packaging Guide
  • Solaris Event Notification API
  • Suns OpenBoot PROM reference manual
  • Solaris IPv6 Administration Guide
  • ALOM/iLOM - Get OS Console:
    start /SP/console
    If the console is already in use you can kill it with
    stop /SP/console
  • ALOM - Set/Get Infos from CLI: When you want to fetch infos or change settings from a running system (e.g. from scripts) you can use the scadm (or rscadm) command. Examples:
    # Show log
    scadm loghistory
    
    # Send a normal or critical console message
    scadm send_event "Important"
    scadm send_event -c "Critical!"
    
    # Dump all or single settings
    scadm show 
    scadm show sc_customerinfo
    
  • Dump HW Infos:
    prtconf -v
  • ZFS Cheat Sheet:
    # Analysis
    zpool list             # List pools
    zpool status -v        # Tree like summary of all disks
    zpool iostat 1         # iostat for all ZFS pools
    zpool history          # Show recent commands
    
    # Handling properties
    zfs get all z0
    zfs get all z0/data
    zfs set sharenfs=on z0/data
    zfs set sharesmb=on z0/data
    zfs set compression=on z0/data
    
    # Mounting 
    zfs mount               # List all ZFS mount points
    zfs set mountpoint=/export/data z0/data
    zfs mount /export/data
    zfs unmount /export/data
    
    # NFS Shares
    zfs set sharenfs=on z1/backup/mydata         # Enable as NFS share
    zfs get sharenfs z1/backup/mydata            # List share options
    zfs sharenfs="<options>" z1/backup/mydata    # Overwrite share options
    
    # Create and load snapshots
    zfs snapshot z0/data@backup-20120601
    zfs rollback z0/data@backup-20120601
    

Simple chef to nagios hostgroup export

When you are automating with chef and use plain Nagios for monitoring you will find yourself duplicating quite some configuration. One large part is the hostgroup definitions, which usually map to many of the chef roles. So if the roles are defined in chef anyway they should be synced to Nagios. Using "knife" one can extract the roles of a node like this:
knife node show -a roles $node | grep -v "^roles:"

Scripting The Role Dumping

Note though that knife only shows roles that were already applied on the server. But this shouldn't be a big problem for a synchronization solution. The next step is to create a usable hostgroup definition in Nagios. To avoid colliding with existing hostgroups let's prefix the generated hostgroup names with "chef-". The only challenge is regrouping the role lists given per node by chef into host name lists per role. In Bash 4, using an associative array, this could be done like this:
declare -A roles

for node in $(knife node list); do
   for role in $(knife node show -a roles $node | grep -v "^roles:"); do
      roles["$role"]=${roles["$role"]}"$node "
   done
done
Given this it is easy to dump Icinga hostgroup definitions. For example
for role in ${!roles[*]}; do
   echo "define hostgroup {
   hostgroup_name chef-$role
   members ${roles[$role]}
}
"
done
That makes ~15 lines of shell script and a cronjob entry to integrate Chef with Nagios. Of course you also need to ensure that each host name provided by chef has a Nagios host definition. If you know how it resolves you could just dump a host definition while looping over the host list (see the sketch below). In any case there is no excuse not to export the chef config :-)
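Such a host definition dump could be a rough sketch like the following, assuming the chef node names resolve via DNS and a "generic-host" template exists in your Nagios setup:
for node in $(knife node list); do
   echo "define host {
   use generic-host
   host_name $node
   address $node
}
"
done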

Easy Migrating

Migrating to such an export is easy by using the "chef-" namespace prefix for generated hostgroups. This allows you to smoothly migrate existing Nagios definitions at your own pace. Be sure to only reload Nagios (and not restart it) from the cron job and to do it at a reasonable time to avoid breaking things.

Silencing the nagios plugin check_ntp_peer

The Nagios plugin "check_ntp_peer" from the Debian package "nagios-plugins-basic" is not very nice. It shouts at you about the LI_ALARM bit and negative jitter all the time after a machine reboots, despite everything actually being fine. The following wrapper silences these bogus warnings:
#!/bin/bash

result=$(/usr/lib/nagios/plugins/check_ntp_peer "$@")
status=$?

if echo "$result" | egrep 'jitter=-1.00000|has the LI_ALARM' >/dev/null; then
	echo "Unknown state after reboot."
	exit 0
fi

echo "$result"
exit $status
Using above wrapper you get rid of the warnings.

Shell ansi color matrix

The following script will dump an ANSI color matrix that allows you to easily choose colours. I ripped this off somewhere, just forgot where... This post is for conservation only. Don't read it!
#!/bin/sh

T='gYw'   # The test text

echo -e "\n                 40m     41m     42m     43m\
     44m     45m     46m     47m";

for FGs in '    m' '   1m' '  30m' '1;30m' '  31m' '1;31m' '  32m' \
           '1;32m' '  33m' '1;33m' '  34m' '1;34m' '  35m' '1;35m' \
           '  36m' '1;36m' '  37m' '1;37m';
  do FG=${FGs// /}
  echo -en " $FGs \033[$FG  $T  "
  for BG in 40m 41m 42m 43m 44m 45m 46m 47m;
    do echo -en " \033[$FG\033[$BG  $T  \033[0m";
  done
  echo;
done
echo

Screen tmux cheat sheet

Here is a side by side comparison of screen and tmux commands and hotkeys.
Function            Screen                                 tmux
Start instance      screen / screen -S <name>              tmux
Attach to instance  screen -r <name> / screen -x <name>    tmux attach
List instances      screen -ls / screen -ls <user name>/   tmux ls
New Window          ^a c                                   ^b c
Switch Window       ^a n / ^a p                            ^b n / ^b p
List Windows        ^a "                                   ^b w
Name Window         ^a A                                   ^b ,
Split Horizontal    ^a S                                   ^b "
Split Vertical      ^a |                                   ^b %
Switch Pane         ^a Tab                                 ^b o
Kill Pane           ^a x                                   ^b x
Paging              -                                      ^b PgUp / ^b PgDown
Scrolling Mode      ^a [                                   ^b [

Removing newlines with sed

My goal for today: I want to remember the official sed FAQ solution to replace multiple newlines:
sed ':a;N;$!ba;s/\n//g' file
to avoid spending a lot of time on it when I need it again.

Regex in postgres update statement

Want to use regular expressions in Postgres UPDATE statements?
BEGIN;
UPDATE table SET field=regexp_replace(field, 'match pattern', 'replace string', 'g');
END;

Redis performance debugging

Here are some simple hints on debugging Redis performance issues.

Monitoring Live Redis Queries

Run the "monitor" command to see queries as they are sent against a Redis instance. Do not use it on a high-traffic instance!
redis-cli monitor
The output looks like this
redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"

Analyzing Slow Commands

When there are too many queries better use "slowlog" to see the top slow queries running against your Redis instance:
slowlog get 25		# print top 25 slow queries
slowlog len		
slowlog reset

Debugging Latency

If you suspect latency to be an issue use "redis-cli" built-in support for latency measuring. First measure system latency on your Redis server with
redis-cli --intrinsic-latency 100
and then sample from your Redis clients with
redis-cli --latency -h <host> -p <port>
If you have problems with high latency check if transparent huge pages are disabled. Disable it with
echo never > /sys/kernel/mm/transparent_hugepage/enabled

Check Background Save Settings

If your instance seemingly freezes periodically you probably have background dumping enabled.
grep ^save /etc/redis/redis.conf
Comment out all save lines and set up a cron job to do the dumping (see the sketch below), or a Redis slave that can dump whenever it wants to. Alternatively you can try to mitigate the effect using the "no-appendfsync-on-rewrite" option (set to "yes") in redis.conf.
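A minimal sketch of such a cron job, assuming a nightly dump at 03:00 is acceptable (file name and redirection are just an example):
# /etc/cron.d/redis-dump: trigger a background dump instead of automatic saves
0 3 * * * root redis-cli BGSAVE >/dev/null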

Check fsync Setting

By default Redis runs fsync() every second. Other possibilities are "always" and "no".
grep ^appendfsync /etc/redis/redis.conf
So if you do not care about DB corruption you might want to set "no" here.

Reasons for ffmpeg "av_interleaved_write_frame(): io error occurred"

If you are unlucky you might see the following ffmpeg error message:
Output #0, image2, to 'output.ppm':
    Stream #0.0: Video: ppm, rgb24, 144x108, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
Stream mapping:
  Stream #0.0 -> #0.0
Press [q] to stop encoding
av_interleaved_write_frame(): I/O error occurred
Usually that means that the input file is truncated and/or corrupted. The above error message was produced with a command like this:
ffmpeg -v 0 -y -i 'input.flv' -ss 00:00:01 -vframes 1 -an -sameq -vcodec ppm -s 140x100 'output.ppm'
There are several possible reasons for the error message "av_interleaved_write_frame(): I/O error occurred":
  1. You are extracting a thumb and forgot to specify to extract a single frame only (-vframes 1)
  2. You have a broken input file.
  3. And finally: The target file cannot be written.
The above was caused by problem three. After a lot of trying I found that the target directory did not exist. Quite confusing.

Puppet: list changed files

If you want to know which files were changed by puppet in the last days:
cd /var/lib/puppet
for i in $(find clientbucket/ -name paths); do
	echo "$(stat -c %y $i | sed 's/\..*//')       $(cat $i)";
done | sort -n
will give you an output like
[...]
2015-02-10 12:36:25       /etc/resolv.conf
2015-02-17 10:52:09       /etc/bash.bashrc
2015-02-20 14:48:18       /etc/snmp/snmpd.conf
2015-02-20 14:50:53       /etc/snmp/snmpd.conf
[...]

Puppet check erbs for dynamic scoping

If you ever need to upgrade a code base to Puppet 3.0 and strip all dynamic scoping from your templates:
for file in $( find . -name "*.erb" | sort); do 
    echo "------------ [ $file ]"; 
    if grep -q "<%[^>]*$" $file; then 
        content=$(sed '/<%/,/%>/!d' $file); 
    else
        content=$(grep "<%" $file); 
    fi;
    echo "$content" | egrep "(.each|if |%=)" | egrep -v "scope.lookupvar|@|scope\["; 
done

This is of course just a fuzzy match, but it should catch quite a few of the dynamic scope expressions out there. The limits of this solution are:
  • false positives on loop and locally declared variables that must not be scoped
  • and false negatives when correct and missing scope are mixed in the same line.
So use with care.

Postgres administration commands

Getting Help in psql

It doesn't matter if you do not remember a single command as long as you follow the hints given:
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit
While many know their way around SQL, you might want to always use \? to find the specific psql commands.

Using Regular Expressions in Postgres

You can edit a column using regular expressions by running regexp_replace():
UPDATE table SET field=regexp_replace(field, 'match pattern', 'replace string', 'g');

List Postgres Clusters

Under Debian use the pg_wrapper command
pg_lsclusters

List Postgres Settings

SHOW ALL;

List Databases and Sizes

SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;

Analyze Queries in Postgres

EXPLAIN ANALYZE <sql statement>;

Show Running Queries in Postgres

SELECT * FROM pg_stat_activity;

Show Blocking Locks

 SELECT bl.pid AS blocked_pid, a.usename AS blocked_user, 
         kl.pid AS blocking_pid, ka.usename AS blocking_user, a.current_query AS blocked_statement
  FROM pg_catalog.pg_locks bl
       JOIN pg_catalog.pg_stat_activity a
       ON bl.pid = a.procpid
       JOIN pg_catalog.pg_locks kl
            JOIN pg_catalog.pg_stat_activity ka
            ON kl.pid = ka.procpid
       ON bl.transactionid = kl.transactionid AND bl.pid != kl.pid
  WHERE NOT bl.granted ;

Show Table Usage

If you want to know accesses or I/O per table or index you can use the pg_stat_*_tables and pg_statio_*_tables relations. For example:
SELECT * FROM pg_statio_user_tables;
to show the I/O caused by your relations. Or for the number of accesses and scan types and tuples fetched:
SELECT * FROM pg_stat_user_tables;

Kill Postgres Query

First find the query and its PID:
SELECT procpid, current_query FROM pg_stat_activity;
And then kill the PID on the Unix shell. Or use
SELECT pg_terminate_backend('12345');

Kill all Connections to a DB

The following was suggested here. Replace "TARGET_DB" with the name of the database whose connections should be killed.
SELECT pg_terminate_backend(pg_stat_activity.procpid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'TARGET_DB';

Checking Replication

Compared to MySQL, checking for replication delay is rather hard. It is usually good to script this or use ready-made monitoring tools (e.g. a Nagios Postgres check). Still it can be done manually by running this command on the master:
SELECT pg_current_xlog_location();
and those two commands on the slave:
SELECT pg_last_xlog_receive_location();
SELECT pg_last_xlog_replay_location()
The first query gives you the most recent xlog position on the master, while the other two queries give you the most recently received xlog and the replay position in this xlog on the slave. A Nagios check plugin could look like this:
#!/bin/bash

# Checks master and slave xlog difference...
# Pass slave IP/host via $1

PSQL="psql -A -t "

# Get master status
master=$(echo "SELECT pg_current_xlog_location();" | $PSQL)

# Get slave receive location
slave=$(echo "select pg_last_xlog_replay_location();" | $PSQL -h$1)

master=$(echo "$master" | sed "s/\/.*//")
slave=$(echo "$slave" | sed "s/\/.*//")

master_dec=$(echo "ibase=16; $master" | bc)
slave_dec=$(echo "ibase=16; $slave" | bc)
diff=$(expr $master_dec - $slave_dec)

if [ "$diff" == "" ]; then
	echo "Failed to retrieve replication info!"
	exit 3
fi

# Choose some good threshold here...
status=0
if [ $diff -gt 3 ]; then
	status=1
fi
if [ $diff -gt 5 ]; then
	status=2
fi

echo "Master at $master, Slave at $slave , difference: $diff"
exit $status

Postgres Backup Mode

To be able to copy Postgres files e.g. to a slave or a backup you need to put the server into backup mode.
SELECT pg_start_backup('label', true);
SELECT pg_stop_backup();
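A rough sketch of a file-level copy wrapped in backup mode could look like this (the data directory, backup target and label are placeholders):
psql -c "SELECT pg_start_backup('nightly', true);"
rsync -a /var/lib/postgresql/9.1/main/ backuphost:/backup/pgdata/
psql -c "SELECT pg_stop_backup();"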
Read more: Postgres - Set Backup Mode

Debugging PgBouncer

To inspect pgbouncer operation ensure to add at least one user you defined in the user credentials (e.g. /etc/pgbouncer/userlist.txt) to the "stats_users" key in pgbouncer.ini:
stats_users = myuser
Use this user to connect to pgbouncer with psql by requesting the "pgbouncer" database:
psql -p 6432 -U myuser -W pgbouncer
At psql prompt list supported commands
SHOW HELP;
PgBouncer will present all statistics and configuration options:
pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:  
	SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|VERSION
	SHOW STATS|FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
	SET key = arg
	RELOAD
	PAUSE [<db>]
	SUSPEND
	RESUME [<db>]
	SHUTDOWN
The "SHOW" commands are all self-explanatory. Very useful are the "SUSPEND" and "RESUME" commands when you use pools.

PgBouncer Online Restart

If you ever need to restart pgbouncer under traffic load use "-R" to avoid disconnecting clients. This option gets the new process to reuse the Unix sockets of the old one. A possible use case could be that you think pgbouncer has become stuck, overloaded or unstable.
pgbouncer -R
Aside from this in most cases SIGHUP should be fine.


Patch to create custom flv tags with yamdi

As described in the Comparison of FLV and MP4 metadata tagging tools (injectors) post yamdi is probably the best and fastest Open Source FLV metadata injector.

Still yamdi is missing the possibility to add custom FLV tags. I posted a patch upstream some months ago with no feedback so far. So if you need custom tags as provided by flvtool2 you might want to merge this patch against the yamdi sources (tested with yamdi 1.8.0).

--- ../yamdi.c	2010-10-17 20:46:40.000000000 +0200
+++ yamdi.c	2010-10-19 11:32:34.000000000 +0200
@@ -105,6 +105,9 @@
 	FLVTag_t *flvtag;
 } FLVIndex_t;
 
+// max number of user defined tags
+#define MAX_USER_DEFINED	10
+
 typedef struct {
 	FLVIndex_t index;
 
@@ -168,6 +171,8 @@
 
 	struct {
 		char creator[256];		// -c
+		char *user_defined[MAX_USER_DEFINED];	// -a
+		int user_defined_count;		// number of user defined parameters
 
 		short addonlastkeyframe;	// -k
 		short addonlastsecond;		// -s, -l (deprecated)
@@ -288,8 +293,15 @@
 
 	initFLV(&flv);
 
-	while((c = getopt(argc, argv, "i:o:x:t:c:lskMXh")) != -1) {
+	while((c = getopt(argc, argv, "a:i:o:x:t:c:lskMXh")) != -1) {
 		switch(c) {
+			case 'a':
+				if(flv.options.user_defined_count + 1 == MAX_USER_DEFINED) {
+					fprintf(stderr, "ERROR: to many -a options\n");
+					exit(1);
+				}
+				printf("Adding tag >>>%s<<<\n", optarg);
+				flv.options.user_defined[flv.options.user_defined_count++] = strdup (optarg);
 			case 'i':
 				infile = optarg;
 				break;
@@ -1055,6 +1067,7 @@
 
 int createFLVEventOnMetaData(FLV_t *flv) {
 	int pass = 0;
+	int j;
 	size_t i, length = 0;
 	buffer_t b;
 
@@ -1073,6 +1086,21 @@
 	if(strlen(flv->options.creator) != 0) {
 		writeBufferFLVScriptDataValueString(&b, "creator", flv->options.creator); length++;
 	}
+	
+	printf("Adding %d user defined tags\n", flv->options.user_defined_count);
+	for(j = 0; j < flv->options.user_defined_count; j++) {
+		char *key = strdup (flv->options.user_defined[j]);
+		char *value = strchr(key, '=');
+		if(value != NULL) {
+			*value++ = 0;
+			printf("Adding tag #%d %s=%s\n", j, key, value);
+			writeBufferFLVScriptDataValueString(&b, key, value);
+			length++;
+		} else {
+			fprintf(stderr, "ERROR: Invalid key value pair: >>>%s<<<\n", key);
+		}
+		free(key);
+	} 
 
 	writeBufferFLVScriptDataValueString(&b, "metadatacreator", "Yet Another Metadata Injector for FLV - Version " YAMDI_VERSION "\0"); length++;
 	writeBufferFLVScriptDataValueBool(&b, "hasKeyframes", flv->video.nkeyframes != 0 ? 1 : 0); length++;

Using the patch you then can add up to 10 custom tags using the new "-a" switch. The syntax is

-a <key>=<value>

Overview on automated linux package vulnerability scanning

I got some really helpful comments on my recent post Scan Linux for Vulnerable Packages. The suggestions on how to do it on Debian and Redhat made me wonder: which distributions provide tools and what are they capable of? So the goal is to check whether each distribution has a way to automatically check for vulnerable packages that need upgrades. Below you find an overview of the tools I've found and the distributions that might not have a good solution yet.
  • Debian: debsecan (superb). Easy to use. Maintained by the Debian testing team. Lists packages, CVE numbers and details.
  • Ubuntu: debsecan (useless). They just packaged the Debian scanner without providing a database for it! And since 2008 there is a bug about it being 100% useless.
  • CentOS / Fedora / Redhat: "yum list-security" (good). Provides package name and CVE number. Note: On older systems there is only "yum list updates".
  • OpenSuSE: "zypper list-patches" (ok). Provides package names with security relevant updates. You need to filter the list yourself or use the "--cve" switch to limit to CVEs only.
  • SLES: "rug lu" (ok). Provides package names with security relevant updates. Similar to zypper you need to do the filtering yourself.
  • Gentoo: glsa-check (bad). There is a dedicated scanner, but no documentation.
  • FreeBSD: Portaudit (superb). No Linux? Still a nice solution... Lists vulnerable ports and vulnerability details.
I know I didn't cover all Linux distributions and I rely on your comments for details I've missed. Ubuntu doesn't look good here, but maybe there will be some solution one day :-)

Never forget _netdev with glusterfs mounts

When adding a GlusterFS share to /etc/fstab do not forget to add "_netdev" to the mount options. Otherwise on the next boot your system will just hang! Actually there doesn't seem to be a timeout, which would be nice to have. As a side note: Ubuntu 12.04 doesn't even honour "_netdev", so the network is not guaranteed to be up when mounting and an additional upstart task or init script is needed anyway. But you still need "_netdev" to prevent hanging on boot. I also have the impression that this only happens with the stock kernel 3.8.x and not with 3.4.x!
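A matching fstab entry might look like this (server, volume and mount point are placeholders):
myglusterserver:/myvolume  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0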

Nfs administration commands

Update Exported Shares

After editing /etc/exports run
exportfs -a

List Exported Shares

# showmount -e
Export list for myserver:
/export/home       10.1.0.0/24
#

List Mounts on NFS-Server

# showmount 
Hosts on myserver:
10.1.0.15
#

List active Protocols/Services

To list local services run:
# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  48555  status
    100024    1   tcp  49225  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  51841  nlockmgr
    100021    3   udp  51841  nlockmgr
    100021    4   udp  51841  nlockmgr
    100021    1   tcp  37319  nlockmgr
    100021    3   tcp  37319  nlockmgr
    100021    4   tcp  37319  nlockmgr
    100005    1   udp  57376  mountd
    100005    1   tcp  37565  mountd
    100005    2   udp  36255  mountd
    100005    2   tcp  36682  mountd
    100005    3   udp  54897  mountd
    100005    3   tcp  51122  mountd
Above output is from an NFS server. You can also run it for remote servers by passing an IP. NFS clients usually just run status and portmapper:
# rpcinfo -p 10.1.0.15
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  44152  status
    100024    1   tcp  53182  status

Mounting NFSv4 Shares

The difference in mounting is that you need to provide "nfs4" and transport and port options like this:
mount -t nfs4 -o proto=tcp,port=2049 server:/export/home /mnt

Ensure NFSv4 Id Mapper Running

When using an NFSv4 share ensure the id mapper is running on all clients. On Debian you need to explicitly start it:
service idmapd start

Configure NFSv4 User Mappings

You might want to set useful NFSv4 default mappings and some explicit mappings for unknown users:
#cat /etc/idmapd.conf
[...]
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup

[Static]
someuser@otherserver = localuser

Tuning NFS Client Mount Options

Try the following client mount option changes (an example fstab entry follows the list):
  • Use "hard" instead of "soft"
  • Add "intr" to allow for dead server and killable client programs
  • Increase "mtu" to maximum
  • Increase "rsize" and "wsize" to maximum supported by clients and server
  • Remove "sync"
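Put together, a client fstab entry could look roughly like this (server, export, mount point and sizes are placeholders; check what is actually negotiated with "nfsstat -m"):
10.1.0.16:/data  /data  nfs  hard,intr,rsize=1048576,wsize=1048576  0  0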
After changing and remounting check for effective options using "nfsstat -m" which will give you a list like this:
$ nfsstat -m
/data from 10.1.0.16:/data
 Flags:	rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.0.16,local_lock=none,addr=10.1.0.15
$

Tuning NFS Server

For the exported filesystem mount options (see the sketch after this list):
  • Use "noatime"
  • Use "async" if you can (risk of data corruption)
  • Use "no_subtree_check"
Other than that:
  • Use CFQ I/O scheduler
  • Increase /sys/block/sda/device/block/sda/queue/max_sectors_kb
  • Check /sys/block/sda/device/block/sda/queue/read_ahead_kb
  • Increase number of nfsd threads

Getting NFS Statistics

Use "nfsstat" for detailed NFS statistics! The options are "-c" for client and "-s" for server statistics. On the server caching statistics are most interesting,
# nfsstat -o rc
Server reply cache:
hits       misses     nocache
0          63619      885550  
#
on the client probably errors and retries. Also note that you can get live per-interval results when running with "--sleep=<interval>". For example
# nfsstat -o fh --sleep=2

Mysql dump skip event table

If your MySQL backup tool or self-written script complains about an event table then you have run into an issue caused by newer MySQL versions (>5.5.30), which warn about the "event" table in the internal mysql schema. If you run into this you need to decide whether you want to include or exclude the event table when dumping your database. To skip it: due to MySQL bug #68376 you have two choices. You can check the documentation and add the logical option
--skip-events
which will cause the event table not to be exported. But the warning won't go away. To also get rid of the warning you need to use this instead:
--events --ignore-table=mysql.event
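A full dump invocation with this workaround could then look like this (database name is a placeholder):
mysqldump --single-transaction --events --ignore-table=mysql.event -u root -p mydatabase > mydatabase.sql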
And of course you can also choose just to dump the events table: Add the option
--events
to your "mysqldump" invocation. If you use a tool that invokes "mysqldump" indirectly check if the tool allows to inject additional parameters.

Mysql administration commands

Below you find an unordered list of task-based solutions useful for a MySQL DBA:

Live Monitoring of MySQL

There are two useful tools:
  • mytop
  • innotop
with "mytop" being an own Debian package, while "innotop" is included in the "mysql-client" package. From both innotop has the more advanced functionality. Both need to be called with credentials to connect to the database:
mytop -u <user> -p<password>
innotop -u <user> -p<password>
Alternatively you can provide a .mytop file to provide the credentials automatically.
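A minimal ~/.mytop sketch (user, password and host are placeholders):
user=monitoring
pass=secret
host=localhost
db=mysql
delay=5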

Show MySQL Status

You can get a very simple status by just entering "\s" in the "mysql" command line client prompt:
mysql> \s
You can show the replication status using
SHOW SLAVE STATUS \G
SHOW MASTER STATUS \G
Note that the "\G" instead of ";" just makes the output more readable. If you have configured slaves to report names you can list them on the master with:
SHOW SLAVE HOSTS;

Check InnoDB status

show /*!50000 ENGINE*/ INNODB STATUS;

List Databases/Tables/Colums

You can either use the "mysqlshow" tool:
mysqlshow                         # List all databases
mysqlshow <database>              # List all tables of the given database
mysqlshow <database> <table>      # List all columns of the given table in the given DB
And you can also do it using queries:
SHOW DATABASES;

USE <database>;
SHOW TABLES;
DESCRIBE <table>;

Check and Change Live Configuration Parameters

Note that you cannot change all existing parameters. Some, like innodb_buffer_pool_size, require a DB restart.
show variables;                          # List all configuration settings
show variables like 'key_buffer_size';   # List a specific parameter

set global key_buffer_size=100000000;    # Set a specific parameter

# Finally ensure to edit my.cnf to make the change persistent

MySQL Parameter Optimization

You can check the MySQL parameters of a running instance using one of the common tuning scripts. Also have a look at this MySQL config parameter explanation.

Remote MySQL Dump and Import

The following command allows dumping a database from one source host that doesn't see the target host when executed on a third host that can access both. If both hosts can see each other and one has SSH access to the other you can simply drop one of the ssh calls.
ssh <user@source host> "mysqldump --single-transaction -u root --password=<DB root pwd> <DB name>" | ssh <user@target host> "mysql -u root --password=<DB root pwd> <DB name>"

How to solve: Could not find target log during relay log initialization

This happens on corrupted/missing relay logs. To get the DB working:
  • Stop MySQL
  • Remove /var/lib/mysql/relay-log-index.*
  • Remove all relay log files
  • Remove relog log file index
  • Start MySQL

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table

This is caused by timeouts when copying overly large database tables. The network timeouts are very short by default. So you can work around this by increasing the network timeouts:
set global net_write_timeout = 100000;
set global net_read_timeout = 100000;

Forgotten MySQL root Password

# 1. Stop MySQL and start without grant checks

/usr/bin/mysqld_safe --skip-grant-tables &
mysql --user=root mysql

# 2. Change root password
UPDATE user SET password=PASSWORD('xxxxx') WHERE user = 'root';

Import a CSV file into MySQL

LOAD DATA INFILE '<CSV filename>' INTO TABLE <table name> FIELDS TERMINATED BY ',' (<name of column #1>,<name of column #2>,<...>);

MySQL Pager - Output Handling

Using "PAGER" or \P you can control output handling. Instead of having 10k lines scrolling by you can write everything to a file or use "less" to scroll through it for example. To use less issue
pager less
Page output into a script
pager /home/joe/myscript.sh
Or if you have Percona installed get a tree-like "EXPLAIN" output with
pager mk-visual-explain
and then run the "EXPLAIN" query.

MySQL - Check Query Cache

# Check if enabled
SHOW VARIABLES LIKE 'have_query_cache';

# Statistics
SHOW STATUS LIKE 'Qcache%';

Check for currently running MySQL queries

show processlist;
show full processlist;
Filter items in process list by setting grep as a pager. The following example will only print replication connections:
\P grep system
show processlist;
To abort/terminate a statement determine its id and kill it:
kill <id>;    # Kill running queries by id from process listing

Show Recent Commands

SHOW BINLOG EVENTS;
SHOW BINLOG EVENTS IN '<some bin file name>';

Inspect a MySQL binlog file

There is an extra tool to inspect bin logs:
mysqlbinlog <binary log file>

Skip one statement on replication issue HA_ERR_FOUND_DUPP_KEY

If replication stops with "HA_ERR_FOUND_DUPP_KEY" you can skip the current statement and continue with the next one by running:
STOP SLAVE;
 SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

Changing Replication Format

When you want to change the replication format of a running setup you might want to follow these steps:
  1. Ensure you have a database backup
  2. Make master read-only by running
    FLUSH TABLES WITH READ LOCK;
  3. Wait until all slaves do catch up
  4. Stop all slaves (shutdown MySQL)
  5. On master:
    FLUSH LOGS;
    SET GLOBAL binlog_format='xxxxxx';
    FLUSH LOGS;
    UNLOCK TABLES;
    (ensure to replace 'xxxxxx' with for example 'ROW')
  6. Start all slaves
  7. Ensure to put the new binlog_format in all /etc/mysql/my.cnf
Note: the second "FLUSH LOGS;" ensures that a new binary log is opened on the master with the new binlog_format. The stopping of the slaves ensures that they open a new relay log matching the new binlog_format.

Munin MySQL Plugin Setup on Debian

apt-get install libcache-cache-perl

for i in `./mysql_ suggest`
do
   ln -sf /usr/share/munin/plugins/mysql_ $i;
done

/etc/init.d/munin-node reload

Fix Slow Replication

When replication is slow check the status of the replication connection. If it is too often in the "invalidating query cache" status you need to decrease your query cache size. You might even consider disabling the query cache for the moment if the DB load allows it:
set global query_cache_size=0;

Debug DB Response Time

There is a generic TCP response time analysis tool developed by Percona called tcprstat. Download the binary from Percona, make it executable and run it like
tcprstat -p 3306 -t 1 -n 0
to get continuous statistics on the response time. This is helpful each time some developer claims the DB doesn't respond fast enough!


Most important redis commands for sysadmins

When you encounter a Redis instance and quickly want to learn about the setup, you just need a few simple commands to peek into it. Of course it doesn't hurt to look at the official full command documentation, but below is a listing just for sysadmins.

Accessing Redis

First thing to know is that you can use "telnet" (usually on the default port 6379)
telnet localhost 6379
or the Redis CLI client
redis-cli
to connect to Redis. The advantage of redis-cli is that you have a help interface and command line history.

Scripting Redis Commands

For scripting just pass commands to "redis-cli". For example:
$ redis-cli INFO | grep connected
connected_clients:2
connected_slaves:0
$

Server Statistics

The statistics command is "INFO" and will give you an output as following:
$ redis-cli INFO
redis_version:2.2.12
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:8353
uptime_in_seconds:2592232
uptime_in_days:30
lru_clock:809325
used_cpu_sys:199.20
used_cpu_user:309.26
used_cpu_sys_children:12.04
used_cpu_user_children:1.47
connected_clients:2
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:6596112
used_memory_human:6.29M
used_memory_rss:17571840
mem_fragmentation_ratio:2.66
use_tcmalloc:0
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1371241671
bgrewriteaof_in_progress:0
total_connections_received:118
total_commands_processed:1091
expired_keys:441
evicted_keys:0
keyspace_hits:6
keyspace_misses:1070
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=91,expires=88

Changing Runtime Configuration

The command
CONFIG GET *
gives you a list of all active configuration variables you can change. The output might look like this:
redis 127.0.0.1:6379> CONFIG GET *
 1) "dir"
 2) "/var/lib/redis"
 3) "dbfilename"
 4) "dump.rdb"
 5) "requirepass"
 6) (nil)
 7) "masterauth"
 8) (nil)
 9) "maxmemory"
10) "0"
11) "maxmemory-policy"
12) "volatile-lru"
13) "maxmemory-samples"
14) "3"
15) "timeout"
16) "300"
17) "appendonly"
18) "no"
19) "no-appendfsync-on-rewrite"
20) "no"
21) "appendfsync"
22) "everysec"
23) "save"
24) "900 1 300 10 60 10000"
25) "slave-serve-stale-data"
26) "yes"
27) "hash-max-zipmap-entries"
28) "512"
29) "hash-max-zipmap-value"
30) "64"
31) "list-max-ziplist-entries"
32) "512"
33) "list-max-ziplist-value"
34) "64"
35) "set-max-intset-entries"
36) "512"
37) "slowlog-log-slower-than"
38) "10000"
39) "slowlog-max-len"
40) "64"
Note that keys and values are alternating and you can change each key by issuing a "CONFIG SET" command like:
CONFIG SET timeout 900
Such a change will be effective instantly. When changing values consider also updating the redis configuration file.
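This also works non-interactively from the shell, for example (the maxmemory value is just an illustration):
redis-cli CONFIG GET maxmemory
redis-cli CONFIG SET maxmemory 268435456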

Multiple Databases

Redis has a concept of separated namespaces called "databases". You can select the database number you want to use with "SELECT". By default the database with index 0 is used. So issuing
redis 127.0.0.1:6379> SELECT 1
OK
redis 127.0.0.1:6379[1]>
switches to the second database. Note how the prompt changed and now has a "[1]" to indicate the database selection. To find out how many databases there are you might want to run redis-cli from the shell:
$ redis-cli INFO | grep ^db
db0:keys=91,expires=88
db1:keys=1,expires=0

Dropping Databases

To drop the currently selected database run
FLUSHDB
to drop all databases at once run
FLUSHALL

Checking for Replication

To see if the instance is a replication slave or master issue
redis 127.0.0.1:6379> INFO
[...]
role:master
and watch for the "role" line which shows either "master" or "slave". Starting with version 2.8 the "INFO" command also gives you per slave replication status looking like this
slave0:ip=127.0.0.1,port=6380,state=online,offset=281,lag=0

Enabling Replication

If you quickly need to set up replication just issue
SLAVEOF <IP> <port>
on a machine that you want to become a slave of the given IP. It will immediately get values from the master. Note that this instance will still be writable. If you want it to be read-only change the redis config file (only available in recent versions, e.g. not on Debian). To revert the slave setting run
SLAVEOF NO ONE

Dump Database Backup

As Redis allows RDB database dumps in background, you can issue a dump at any time. Just run:
BGSAVE
When running this command Redis will fork and the new process will dump into the "dbfilename" configured in the Redis configuration without the original process being blocked. Of course the fork itself might cause an interruption. Use "LASTSAVE" to check when the dump file was last updated. For a simple backup solution just backup the dump file. If you need a synchronous save run "SAVE" instead of "BGSAVE".

Listing Connections

Starting with version 2.4 you can list connections with
CLIENT LIST
and you can terminate connections with
CLIENT KILL <IP>:<port>

Monitoring Traffic

Probably the most useful command, compared to memcached where you need to trace network traffic, is the "MONITOR" command, which will dump incoming commands in real time.
redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"

Checking for Keys

If you want to know if an instance has a key or keys matching some pattern use "KEYS" instead of "GET" to get an overview.
redis 127.0.0.1:6379> KEYS test*
1) "testkey2"
2) "testkey3"
3) "testkey"
On production servers use "KEYS" with care as you cannot limit it and it will cause a full scan of all keys!

Missing roles in knife node show output

Sometimes the knife output can be really confusing:
$ knife node show myserver
Node Name:   myserver1
Environment: _default
FQDN:        myserver1
IP:          
Run List:    role[base], role[mysql], role[apache]
Roles:       base, nrpe, mysql
Recipes:     [...]
Platform:    ubuntu 12.04
Tags:        
Noticed the difference between "Run List" and "Roles"? The run list says "role[apache]", but the list of "Roles" has no Apache. This is because the role has not been applied on the server yet. So running
ssh root@myserver chef-client
solves the issue and Apache appears in the roles list. The lesson: do not use "knife node show" to get the list of configured roles!

Mind map of the Fundamental Modelling Concepts (FMC)

This mind map gives a short overview of the terminology of the Fundamental Modelling Concepts (FMC) standard, which was developed at the Hasso Plattner Institute of the University of Potsdam for system modelling above the level of UML. Like many good efforts, FMC unfortunately never gained wide adoption.

Systemmodellierung und FMC

  • + -System
    • ein konkretes oder konkret vorstellbares Gebilde aus Komponenten
    • zeigt Verhalten als Ergebnis des Zusammenwirkens der Systemteile
  • + -Modellierung
    • Ziel: Erfassen der wesentlichen bzw. interessierenden Merkmale eines Systems zwecks Bewahrung oder Weitergabe dieses Wissens

      schafft die Grundlage für die effiziente Kommunikation bei der arbeitsteiligen Softwareentwicklung

      erstellt eine Beschreibung für Menschen Sinn ist die Komplexitätsbeherrschung

      Vorgang des Modellierens

      • 1. Finden/Festlegen eines/mehrerer Systemmodelle

        2. Darstellung des/der Modelle und damit Erstellung einer Beschreibung ein oder mehrerer für menschliches Verständnis optimierter Teile (Pläne)

        bei komplexen Systemen sind stets mehrere, unterschiedliche Systemmodelle gegeben, entsprechend der unterschiedlichen Abstraktionsebenen

        die Betrachtungsebene ergibt sich aus dem jeweiligen Interesse: - nur ein bestimmter Teil des Systems - nur ein bestimmter Anwendungsfall - gewählte Abstraktionshöhe

      besondere Anforderungen

      • + - Anschaulichkeit

        • primär graphische Darstellung
        • semiformal
        • anschauliche begriffliche Basis

        + - Einfachheit

        • wenige einfache grafische Elemente bzw. Grundbegriffe

        + - Universalität

        • Grundelemente so einfach aber auch so vielseitig als möglich
        • möglichst alle Systeme (Typen) sollen beschreibbar sein

        "Separation of Concerns"

        • gedanklich trennbare Aspekte müssen getrent beschreibbar sein

        + - Ästhetik

        • anschauliches klares Layout steigert Akzeptanz und Nutzen der Beschreibung

        im Gegensatz dazu: Maschinenbeschreibung: - formal, Syntax und Semantik explizit   formal festgelegt - Vollständigkeit - konsistent, keine Widersprüche

  • System model
    • fundamental aspects
      • distinguishing different system models and their relationships

        structure types _within one_ model: - behaviour (dynamic system) - composition (dynamic system) - value structures (informational system!)

    • 3 structure types
      • present at a single point in time
        • Compositional structure
          • How is the system composed?
        • Value structure
          • which (kind of) information is present in the system
      • observable over time intervals
        • Dynamic structure
          • Which processes happen in the system?
      • these 3 types are also found in FMC
    • Implementation
      • a model B is the implementation of model A if model B is derived from A through design decisions.

        making a choice among alternative means

        elementary vs. non-elementary design decisions...

        independent vs. dependent design decisions: two design decisions are independent if neither presupposes the other

    • task-complete vs. task-incomplete models...
  • Architecture
    • following Hofmeister, Nord & Soni: "Applied Software Architecture"
    • key statements about good software architecture
      • a critical factor for product success (manageability/comprehensibility for all stakeholders must be guaranteed)

        the "design plan" or "blueprint" of the system (negotiating/fixing the requirements of the system, an important basis for the division of labour)

        serves to master complexity

        reflects the current state of the design

    • there are essentially 4 views
    • Code Architecture View
      • structuring of the sources with regard to their form (software packaging)
      • format of the components
        • object code
        • library
        • binary
        • byte code
      • partitioning/storage (as files, in directories, which versioning...)
      • affects...
        • CM
        • installation
        • changeability during operation
        • reuse (of components)
    • Module Architecture View
      • structuring of the sources by content = MODULARISATION
      • division into
        • procedures
        • classes
        • interfaces
      • encapsulation
      • (call) layering
      • affects...
        • changeability of the source
        • division of labour
        • reusability
    • Execution Architecture View
      • system structures at a low "technical" level (corresponds to a "production-complete model")
      • mapping of conceptual, abstract model elements onto...
        • hardware structure: - computing nodes - communication means
        • software structure: - OS processes, threads - shared memory - locks
      • affects...
        • distribution
        • load balancing
        • replication
        • failure behaviour
    • Conceptual Architecture View
      • task- and application-oriented view
      • system components (agents) with a certain functionality
      • typically consists of interacting components (components and connectors)
      • abstracts from the hardware
      • shows in principle how the system fulfils its tasks/requirements
  • FMC
    • Fundamental Modelling Concepts

      captures the _system architecture_, i.e. all system models of interest and their relationships. Not, however, the visualisation of code structures (the software architecture)!

      basic elements of the notation: FMC plans

      • Compositional structure
        • has active system components = AGENTS

          has passive system components - locations to which agents have access = STORAGES - locations over which information is exchanged = CHANNELS

          Program net

          • represents program structures compactly and language-independently; its defining property is that every place in the program net can be mapped unambiguously to a position in the program (a particular program counter state)

          Action net

          • serves to generalise program structures, but is not a program net! At least one place in the action net cannot be mapped unambiguously to a position in the program!

      • Value (range) structure
        • objects that have attributes and can be related to each other = ENTITIES + ATTRIBUTES
        • relationships between these objects = RELATIONS
      • Dynamic structure
        • elementary activity of an agent = OPERATION: the agent reads information/values from one or more locations and writes a derived value (the operation result) to a location.

          a value change at a location at a point in time = EVENT

          CAUSAL RELATIONSHIPS: events trigger operations, which in turn can cause further events.

      System architecture

      • the set of system models (each with composition, behaviour and value range) and their relationships to each other

        the driving force when setting up/describing system architectures: comprehensibility and traceability of the system constructions

        the notion of a composition of interacting components is widespread

        • components in FMC: primarily operational agents
        • connectors in FMC: primarily controlling agents and communication agents, but also locations

        the functionality can be localised within the system

      Software architecture

  • About this mind map
    • Author: Lars Windolf

      Digitisation of notes taken in the lecture Systemmodellierung 3 by Prof. Tabeling at the HPI.

      last update: 18.01. 14:30

  • Terminology
    • Module
      • a software unit that contains a design decision (Parnas 1972).
      • in OOP: classes
    • Component
      • a "binary reusable" software part (better: one available in a form that is directly executable/installable on the target system)

        reusable here means universally and broadly usable

        there is a "market" for the component

        according to Szyperski: - unit of independent deployment - unit of third party composition - has no (externally) observable state

    • Interface
      • there are software interfaces and system interfaces

        the part of a module that can be referenced from other modules and in which the implemented procedures, data types... are named.

        but also: the place where a system can be conceptually "cut", i.e. separated into active system components.

    • ADT
      • a storage type with a certain value range
      • and with a certain repertoire of permitted operation types
    • Event
      • elementary with respect to observability
      • takes place at a point in time
      • takes place at a location
      • causes a value change
    • Operation
      • elementary with respect to processing

        an agent determines a "new" value (the operation result)

        at a location (the target location of the operation)

        the result depends on values which are usually (random values being the exception) read from at least one location.

    • Agent
      • active (activity-carrying) component
    • Location
      • passive component
      • Storage
        • for persistent information storage
      • Channels
        • for volatile information exchange
    • Transition
      • represents types of
      • operations
      • events
      • activities (several operations)
    • Places
      • control states
      • additional conditions
    • Object
      • also called instance or exemplar
      • a physical or an abstract object
      • uniquely identifiable
      • uniquely distinguishable
      • has values for its attributes
      • is related to other objects
    • Attribute
      • always belongs to an object
      • not always sharply distinguishable from objects
    • Aspect
      • a selected (single) matter at a certain level of consideration: - composition - behaviour - value range

        but also (examples): - description of a part of the composition - description of an operating phase

Memcache monitoring guis

When using memcached or memcachedb everything is fine as long as it is running. But from an operating perspective memcached is a black box. There is no real logging; you can only use the -v/-vv/-vvv switches when not running in daemon mode to see what your instance does. And it becomes even more complex if you run multiple or distributed memcache instances on different hosts and ports. So the question is: how do you monitor your distributed memcache setup? There are not many tools out there, but some useful ones exist. We'll have a look at the following tools. Note that some can monitor multiple memcached instances, while others can only monitor a single instance at a time.
Name               Multi-Instances  Complexity/Features
telnet             no               Simple CLI via telnet
memcache-top       no               CLI
stats-proxy        yes              Simple Web GUI
memcache.php       yes              Simple Web GUI
PhpMemcacheAdmin   yes              Complex Web GUI
Memcache Manager   yes              Complex Web GUI

1. Use telnet!

memcached already brings its own statistics, which are accessible via telnet for manual interaction and are the basis for all other monitoring tools. Read more about using the telnet interface.
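For a quick start, a minimal interactive session could look like this (assuming memcached listens on the default port 11211 on localhost):
telnet localhost 11211
stats        # general counters: hits, misses, memory usage
stats items  # per-slab item counters
stats slabs  # slab allocator details
quit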

2. Live Console Monitoring with memcache-top

You can use memcache-top for live-monitoring a single memcached instance. It will give you the I/O throughput, the number of evictions, the current hit ratio and if run with "--commands" it will also provide the number of GET/SET operations per interval.
memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)

INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s 
10.50.11.5:11211        88.9%   69.7%   1661    0.9ms   0.3     47      9       13.9K   9.8K    
10.50.11.5:11212        88.8%   69.9%   2121    0.7ms   1.3     168     10      17.6K   68.9K   
10.50.11.5:11213        88.9%   69.4%   1527    0.7ms   1.7     48      16      14.4K   13.6K   
[...]

AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      11      13.5K   30.3K   

TOTAL:          19.9GB/ 23.4GB          20.0K   11.7ms  15.3    826     132     162.6K  363.6K  
(ctrl-c to quit.)
(Example output)

3. Live browser monitoring with statsproxy

Using the statsproxy tool you get a browser-based statistics tool for multiple memcached instances. The basic idea of statsproxy is to provide the unmodified memcached statistics via HTTP. It also provides a synthetic health check for service monitoring tools like Nagios. To compile statsproxy on Debian:
# Ensure you have bison
sudo apt-get install bison

# Download tarball
tar zxvf statsproxy-1.0.tgz
cd statsproxy-1.0
make
Now you can run the "statsproxy" binary, but it will inform you that it needs a configuration file. I suggest redirecting its output to a new file, e.g. "statsproxy.conf", removing the informational text at the top and bottom and then modifying the configuration section as needed.
./statsproxy > statsproxy.conf 2>&1
Make sure to add as many "proxy-mapping" sections as you have memcached instances. In each "proxy-mapping" section ensure that "backend" points to your memcached instance and "frontend" to a port on your webserver where you want to access the information for this backend. Once finished run:
./statsproxy -F statsproxy.conf
Below you find a screenshot of what stats-proxy looks like:

4. Live browser monitoring with memcache.php

Using this PHP script you can quickly add memcached statistics to a webserver of your choice. Most useful is the global memory usage graph which helps to identify problematic instances in a distributed environment. Here is how it should look (screenshot from the project homepage). When using this script ensure that access is protected and that the "flush_all" menu option is not triggered by accident. Also, on large memcached instances refrain from dumping the keys as it might cause some load on your server.

Memcache alternatives

If you are using memcached or are planning to, you might wonder whether, given that memcached is now over 10 years old, it is still the right tool for a scalable, robust and easy to use key-value store. Below is a list of tools competing with memcached in some manner and a (probably subjective) rating of each.
  • memcached: the baseline. Use it because it is simple and fast.
  • memcachedb (persistence with BDB): as simple and fast as memcached and allows easy persistence and backup. But not maintained anymore since 2008!
  • BDB (simple and old): use it when you want an embedded database. Rarely used for web platforms. Has replication.
  • CouchDB, CouchBase (HTTP transport; tries to find a middle ground between a heavy RDBMS and a key-value store cache): sharding, replication and online rebalancing. Often found in small Hadoop setups. Easy drop-in for memcached caching in nginx.
  • DynamoDB (HTTP transport, Amazon cloud): if you are in AWS anyway and want sharding and persistence.
  • Redis (hashes, lists, scanning for keys, replication): great bindings, good documentation, flexible yet simple data types. Slower than memcached (read more).
  • Riak (sharded partitioning in a commercial cloud): key-value store as a service. Transparent scaling, automatic sharding, map reduce support.
  • Sphinx (search engine with SQL query caching): supports sharding and full text search. Useful for static medium-sized data sets (e.g. web site product search).
  • MySQL 5.6 (full RDBMS with memcached API): because you can run queries against the DB via the memcached protocol.
There are many more key-value stores. If you wonder what else is out there look at the db-engines.com rankings.

Media player with gstreamer and pygi

When trying to implement a media plugin for Liferea I learned a lot from Laszlo Pandy's session slides from the Ubuntu Opportunistic Developer Week (PDF, source). The only problem was that they target PyGtk, the GTK binding for Python, which is now more or less deprecated in favour of PyGI, the GTK+ 3.0 GObject-introspection-based binding. While it is easy to convert all pieces manually following the Novacut/GStreamer 1.0 documentation, I still want to share a complete music player source example that everyone interested can copy from. I hope this saves one or the other some time in guessing how to write something like "gst.FORMAT_TIME" in PyGI (actually it is "Gst.Format.TIME"). So here is the example code (download file):
from gi.repository import GObject
from gi.repository import GLib
from gi.repository import Gtk
from gi.repository import Gst

class PlaybackInterface:

    def __init__(self):
        self.playing = False

        # A free example sound track
        self.uri = "http://cdn02.cdn.gorillavsbear.net/wp-content/uploads/2012/10/GORILLA-VS-BEAR-OCTOBER-2012.mp3"

        # GTK window and widgets
        self.window = Gtk.Window()
        self.window.set_size_request(300, 50)

        vbox = Gtk.Box(Gtk.Orientation.HORIZONTAL, 0)
        vbox.set_margin_top(3)
        vbox.set_margin_bottom(3)
        self.window.add(vbox)

        self.playButtonImage = Gtk.Image()
        self.playButtonImage.set_from_stock("gtk-media-play", Gtk.IconSize.BUTTON)
        self.playButton = Gtk.Button.new()
        self.playButton.add(self.playButtonImage)
        self.playButton.connect("clicked", self.playToggled)
        Gtk.Box.pack_start(vbox, self.playButton, False, False, 0)

        self.slider = Gtk.HScale()
        self.slider.set_margin_left(6)
        self.slider.set_margin_right(6)
        self.slider.set_draw_value(False)
        self.slider.set_range(0, 100)
        self.slider.set_increments(1, 10)

        Gtk.Box.pack_start(vbox, self.slider, True, True, 0)

        self.label = Gtk.Label(label='0:00')
        self.label.set_margin_left(6)
        self.label.set_margin_right(6)
        Gtk.Box.pack_start(vbox, self.label, False, False, 0)

        self.window.show_all()

        # GStreamer Setup
        Gst.init_check(None)
        self.IS_GST010 = Gst.version()[0] == 0
        # note: the playback element is named "playbin2" in GStreamer 0.10 and "playbin" in 1.0
        self.player = Gst.ElementFactory.make("playbin2", "player")
        fakesink = Gst.ElementFactory.make("fakesink", "fakesink")
        self.player.set_property("video-sink", fakesink)
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self.on_message)
        self.player.connect("about-to-finish", self.on_finished)

    def on_message(self, bus, message):
        t = message.type
        if t == Gst.MessageType.EOS:
            self.player.set_state(Gst.State.NULL)
            self.playing = False
        elif t == Gst.MessageType.ERROR:
            self.player.set_state(Gst.State.NULL)
            err, debug = message.parse_error()
            print("Error: %s %s" % (err, debug))
            self.playing = False

        self.updateButtons()

    def on_finished(self, player):
        self.playing = False
        self.slider.set_value(0)
        self.label.set_text("0:00")
        self.updateButtons()

    def play(self):
        self.player.set_property("uri", self.uri)
        self.player.set_state(Gst.State.PLAYING)
        GObject.timeout_add(1000, self.updateSlider)

    def stop(self):
        self.player.set_state(Gst.State.NULL)

    def playToggled(self, w):
        self.slider.set_value(0)
        self.label.set_text("0:00")

        if self.playing == False:
            self.play()
        else:
            self.stop()

        self.playing = not self.playing
        self.updateButtons()

    def updateSlider(self):
        if self.playing == False:
            return False  # cancel timeout

        try:
            if self.IS_GST010:
                nanosecs = self.player.query_position(Gst.Format.TIME)[2]
                duration_nanosecs = self.player.query_duration(Gst.Format.TIME)[2]
            else:
                nanosecs = self.player.query_position(Gst.Format.TIME)[1]
                duration_nanosecs = self.player.query_duration(Gst.Format.TIME)[1]

            # block seek handler so we don't seek when we set_value()
            # self.slider.handler_block_by_func(self.on_slider_change)

            duration = float(duration_nanosecs) / Gst.SECOND
            position = float(nanosecs) / Gst.SECOND
            self.slider.set_range(0, duration)
            self.slider.set_value(position)
            self.label.set_text("%d" % (position / 60) + ":%02d" % (position % 60))

            # self.slider.handler_unblock_by_func(self.on_slider_change)

        except Exception as e:
            # pipeline is probably not ready yet and does not know the position
            print(e)

        return True

    def updateButtons(self):
        if self.playing == False:
            self.playButtonImage.set_from_stock("gtk-media-play", Gtk.IconSize.BUTTON)
        else:
            self.playButtonImage.set_from_stock("gtk-media-stop", Gtk.IconSize.BUTTON)

if __name__ == "__main__":
    PlaybackInterface()
    Gtk.main()
And this is how it should look if everything goes well (see the example player screenshot). Please post comments below if you have improvement suggestions!

Making flvtool++ work with large files

flvtool++ by Facebook is a fast FLV metadata tagger, but at least up to v1.2.1 it lacks large file support. Here is a simple patch to make it work with large files:
--- flvtool++.orig/fout.h	2009-06-19 05:06:47.000000000 +0200
+++ flvtool++/fout.h	2010-10-12 15:51:37.000000000 +0200
@@ -21,7 +21,7 @@
   void open(const char* fn) {
     if (fp) this->close();
 
-    fp = fopen(fn, "wb");
+    fp = fopen64(fn, "wb");
     if (fp == NULL) {
       char errbuf[256];
       snprintf(errbuf, 255, "Error opening output file \"%s\": %s", fn, strerror(errno));

--- flvtool++.orig/mmfile.h	2009-06-19 05:29:43.000000000 +0200
+++ flvtool++/mmfile.h	2010-10-12 15:46:00.000000000 +0200
@@ -16,7 +16,7 @@
 public:
   mmfile() : fd(-1) {} 
   mmfile(char* fn) {
-    fd = open(fn, O_RDONLY);
+    fd = open(fn, O_RDONLY | O_LARGEFILE);
     if (fd == -1) throw std::runtime_error(string("mmfile: unable to open file ") + string(fn));
     struct stat statbuf;
     fstat(fd, &statbuf);
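Alternatively, instead of patching every call site, you can often get the same effect by enabling transparent large file support at compile time. A sketch, assuming a glibc-based 32-bit build and a Makefile that honours CXXFLAGS:
make CXXFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE"
With _FILE_OFFSET_BITS=64 glibc maps fopen()/open() to their 64-bit variants, so the explicit fopen64()/O_LARGEFILE changes above become unnecessary.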
Note: While this patch helps you to process large files, flvtool++ will still load the entire file into memory! Given this you might want to use a different injector like yamdi. For a comparison of existing tools have a look at the Comparison of FLV and MP4 metadata tagging tools.

List of unix linux it security news feeds

This is a collection of all major known security advisory feeds per operating system or topic. It is a superset of the 2010 post at geekscrap.com which misses Linux specific feeds. The idea is for you to grab the links below and drop those you need in your favourite feed reader (even Thunderbird will do). Alternatively you can pay others to do it for you :-) If you find broken links please post a comment below!

Unix / Linux Distributions

Application Specific

Platforms/Middleware

Collections

By Vendor

How to get a feed for software not listed here?

1. Check cvedetails.com

When you need a security feed not listed above visit http://cvedetails.com and search for the product or vendor you are interested in. If you find it and recent CVEs are listed, click on "Vulnerability Feeds and Widgets", which opens up a dialog where you can configure a filter and click "Generate RSS Feed". Note: If you don't find the "Vulnerability Feeds and Widgets" link ensure you are on the "Vulnerability Statistics" page of the product/vendor!

2. Check gmane.org

If it is an open source product you are looking for and it has a security related mailing list, chances are high that it is being tracked by gmane.org, which provides RSS feeds for each mailing list.

Linux: Half maximize applications like in Windows

There is a really useful shortcut in Windows which allows you to align a window to the left or the right half of the screen. This way you can use your 16:9 wide screen efficiently using the keyboard, without any mouse resizing. This is possible on Ubuntu Unity, GNOME 2, KDE and XFCE too! Just with different mechanisms...
Desktop                         Half Maximize Left       Half Maximize Right
Windows                         [Windows]+[Left]         [Windows]+[Right]
Ubuntu Unity                    [Ctrl]+[Super]+[Left]    [Ctrl]+[Super]+[Right]
GNOME 3                         Drag to left edge        Drag to right edge
GNOME 2 + Compiz Grid plugin    Drag to left border      Drag to right border
GNOME 2/XFCE/KDE + Brightside   Drag to configured edge  Drag to configured edge
XFCE 4.10                       [Super]+[Left]           [Super]+[Right]
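If your desktop is not listed above you can usually script the same effect with wmctrl, assuming an EWMH-compliant window manager. A rough sketch for the left half of a 1920x1080 screen (adjust the geometry to your resolution and bind it to a shortcut of your choice):
# un-maximize the active window, then move/resize it to the left half
wmctrl -r :ACTIVE: -b remove,maximized_vert,maximized_horz
wmctrl -r :ACTIVE: -e 0,0,0,960,1080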

Linux sysadmin links

This is a list of non-trivial Linux administration commands and can be used as a cheat sheet or link collection. If you find errors or want to add something please post a comment below!

Automation Products

Which automation tools are actually out there?
  • Bcfg2: Alternative to puppet and cfengine by Argonne National Laboratory. (IMO out-dated)
  • cfengine (active, commercially backed, large user base)
  • Chef: Alternative to puppet (Ruby, active, commercially backed, large user base)
  • JuJu: mostly for Ubuntu, service orchestration tool (Python, commercially backed)
  • Puppet (Ruby-like + Ruby, active, commercially backed, large user base)
  • slaughter (Perl, active, small user base)
  • Sprinkle (Ruby, quite recent)
  • Wikipedia Comparison Chart: Check here for other less known and new tools!

Automation

  • Augeas: Very flexible file editor to be used with Puppet or standalone. Could also work with cfengine.
    $ augtool
    augtool> set /files/etc/ssh/sshd_config/PermitRootLogin no
    augtool> save
  • Augeas - in Puppet: Using Puppet with Augeas
    augeas { "sshd_config":
     changes => [
     "set /files/etc/ssh/sshd_config/PermitRootLogin no",
     ],
    }
  • cfengine: Force running shortly after a recent execution
    cfagent -K
  • cfengine - Design Center: Git repository with sketches and examples for cfengine.
  • cfengine - cf-sketch: Find and install sketches from the Design Center repository
  • detox: Tool for recursive cleanup of file names.
    detox -v -r <directory>
  • Chef - List Nodes per Role:
    knife search node 'roles:<role name>'
  • Chef - Fix RabbitMQ 100% CPU usage
  • Chef - Edit Files: using a Script resource.
  • Chef - Manage Amazon EC2 instances
  • Chef - Tutorial on how to Setup Nagios in EC2
  • puppet: Debugging deployment and rules on a local machine. This only makes sense in "one time" mode running in one of the following variants:
    puppetd --test # enable standard debugging options
    puppetd --debug # enable full debugging
    puppetd --one-time --detailed-exitcodes # Enable exit codes:
               # 2=changes applied
               # 4=failure
    

Database

Debian

  • Build Kernel Package: How to build kernel packages with make-pkg
    cd /usr/src/linux && make-kpkg clean && make-kpkg --initrd --revision=myrev kernel_image
  • Setup Keyring: How to solve "The following packages cannot be authenticated"
     apt-get install debian-archive-keyring
    apt-get update
  • Force remove broken "reportbug": This can happen during dist-upgrades from Etch/Sarge to Lenny.
  • Packages - Reconfigure after installation:
    dpkg-reconfigure -a
  • dpkg Cheat-Sheet: Query package infos
    # Resolve file to package
    dpkg -S /etc/fstab
    
    # Print all files of a package
    dpkg -L passwd # provided files
    dpkg -c passwd # owned files
    
    # Find packages by name
    dpkg -l gnome*
    
    # Package details
    dpkg -p passwd
    
  • Ubuntu - Access Repositories for older releases. Once a release is deprecated it is moved to old-releases.ubuntu.com. You need to adapt /etc/apt/sources.list to fetch packages from there
    sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/' /etc/apt/sources.list

Debugging / Performance Tools

  • Drop Filesystem Cache
    echo 1 > /proc/sys/vm/drop_caches
  • dmesg - block IO debugging:
    echo 1 > /proc/sys/vm/block_dump
    
    # wait some time...
    echo 0 > /proc/sys/vm/block_dump
    
    # Now check syslog for block dump lines
    
  • dmesg - Filtering Output:
    dmesg -T      # Enable human readable timestamps
    dmesg -x      # Show facility and log level
    dmesg -f daemon     # Filter for facility daemon
    dmesg -l err,crit,alert,emerg # Filter for errors
    
  • lslk - Find file locks: Use lslk to find which PID is blocking an flock() to a file.
  • lsof - Find owners of open file handles:
    lsof      # Complete list
    lsof -i :22    # Filter single TCP port
    lsof -i@<host>:22 # Filter single connection endpoint
    lsof -u <user>   # Filter per user
    lsof -c <name>   # Filter per process name
    lsof -p 12345    # Filter by PID
    lsof /etc/hosts   # Filter single file
    
  • Perf Tutorial: 2.6+ generic kernel performance statistics tool.
    perf stat -B some_command
  • dstat: Replaces vmstat, iostat, netstat and ifstat and allows to determine PID that is most CPU and most I/O expensive
    dstat -a --top-bio --top-cpu
  • iotop: Python script to monitor I/O like top
  • PHP - How to setup the APD debugger

Filesystem / Partitioning

  • uNetBootin: Create bootable media for any distribution. Most useful with USB sticks.
  • Convert ext2 to ext3:
    tune2fs -j /dev/hda1
  • Convert ext3 to ext4:
    tune2fs -O extents,uninit_bg,dir_index /dev/sda1
  • Determine Inode Count:
    tune2fs -l /dev/sda1 | grep Inode
  • Disable ext4 barriers: Add "barrier=0" to the mount options.
  • LVM - Add another disk: How to add a disk to an existing volume
    # Setup partition with (use parted for >2TB)
    (parted) mklabel gpt       # only when >2TB
    (parted) mkpart primary lvm 0 4T    # setup disk full size (e.g. 4TB)
    
    pvcreate /dev/sdb1       # Create physical LVM disk
    vgextend vg01 /dev/sdb1      # Add to volume group
    vgextend -L +4t /dev/mapper/vg01-lvdata  # Extend your volume 
    resize2fs /dev/mapper/vg01-lvdata   # Auto-resize file system
  • rsync - --delete doesn't work: It happens when you call rsync without a trailing slash in the source path like this:
    rsync -az -e ssh --delete /data server:/data
    It just won't delete anything. It will when running it like this:
    rsync -az -e ssh --delete /data/ server:/data

Mail

Middleware

  • Heartbeat - Manual IP Failover
    # Either run on the node that should take over
    /usr/share/heartbeat/hb_failover
    
    # Or run on the node to should stop working
    /usr/share/heartbeat/hb_standby
  • Pacemaker - Setup Steps
  • RabbitMQ - Commands
    rabbitmqctl list_vhosts   # List all defined vhosts
    rabbitmqctl list_queues <vhost> # List all queues for the vhost
    
    rabbitmqctl report    # Dump detailed report on RabbitMQ instance  
    
  • RabbitMQ - Fix Chef 100% CPU usage
  • RabbitMQ - Setup Clustering

Monitoring

  • Munin - Test Plugins:
    /usr/sbin/munin-run <plugin name> # for values
    /usr/sbin/munin-run <plugin name> config # for configuration
  • Munin - Manual Update Run:
    sudo -u munin /usr/bin/munin-cron
  • Munin - Test available plugins
    /usr/sbin/munin-node-configure --suggest
    
    # and enable them using
    /usr/sbin/munin-node-configure --shell | sh

Network

  • ethtool - Usage
    ethtool eth0                       # Print general info on eth0
    ethtool -i eth0                    # Print kernel module info
    ethtool -S eth0                    # Print eth0 traffic statistics
    ethtool -a eth0                    # Print RX, TX and auto-negotiation settings
    
    # Changing NIC settings...
    ethtool -s eth0 speed 100
    ethtool -s eth0 autoneg off
    ethtool -s eth0 duplex full
    ethtool -s eth0 wol g               # Turn on wake-on-LAN
    
    Do not forget to make changes permanent in e.g. /etc/network/interfaces.
  • NFS - Tuning Secrets: SGI Slides on NFS Performance
  • nttcp - TCP performance testing
    # On sending host
    nttcp -t -s
    
    # On receiving host
    nttcp -r -s
    
  • tcpdump - Be verbose and print full package hex dumps:
     tcpdump -i eth0 -nN -vvv -xX -s 1500 port <some port>
  • SNMP - Dump all MIBs: When you need to find the MIB for an object known only by name try
    snmpwalk -c public -v 1 -O s <myhost> .iso | grep <search string>
  • Hurricane Electric - BGP Tools: Statistics on all AS as well as links to their looking glasses.

Package Management

  • Debian
    apt-get install <package> 
    apt-get remove <package> # Remove files installed by <package>
    apt-get purge <package>  # Remove <package> and all the files it did create
    
    apt-get upgrade    # Upgrade all packages
    apt-get install <package> # Upgrade an install package
    
    apt-get dist-upgrade  # Upgrade distribution
    
    apt-cache search <package> # Check if there is such a package name in the repos
    apt-cache clean    # Remove all downloaded .debs
    
    dpkg -l      # List all installed/known packages
    
    # More dpkg invocations above in the "Debian" section!
    
  • Ubuntu (like Debian) with the addition of
    do-release-upgrade   # For Ubuntu release upgrades
  • OpenSuSE
    zypper install <package> 
    
    zypper refresh    # Update repository infos
    
    zypper list-updates
    zypper repos    # List configured repositories
    
    zypper dist-upgrade   # Upgrade distribution
    zypper dup     # Upgrade distribution (alias)
    
    zypper search <package>  # Search for <package>
    zypper search --search-descriptions <package>
    
    zypper clean      # Clean package cache
    
    # For safe updates:
    zypper mr --keep-packages --remote # Enable caching of packages
    zypper dup -D      # Fetch packages using a dry run
    zypper mr --all --no-refresh  # Set cache usage for following dup
    zypper dup      # Upgrade!
    
  • Redhat:
    up2date
  • Centos:
    yum update     # Upgrade distro
    yum install <package>  # Install <package>

RAID

  • mdadm - Commands
    cat /proc/mdstat   # Print status
    
    mdadm --detail /dev/md0  # Print status per md
    
    mdadm --manage -r /dev/md0 /dev/sda1 # Remove a disk
    mdadm --zero-superblock /dev/sda1  # Initialize a disk
    mdadm --manage -a /dev/md0 /dev/sda1 # Add a disk
    
    mdadm --manage --set-faulty /dev/md0 /dev/sda1
    
  • hpacucli - Commands
    # Show status of all arrays on all controllers
    hpacucli all show config
    hpacucli all show config detail
    
    # Show status of specific controller
    hpacucli ctrl=0 pd all show
    
    # Show Smart Array status
    hpacucli all show status
    
  • LSI MegaRAID - Commands
    # Get number of controllers
    /opt/MegaRAID/MegaCli/MegaCli64 -adpCount -NoLog
    
    # Get number of logical drives on controller #0
    /opt/MegaRAID/MegaCli/MegaCli64 -LdGetNum -a0 -NoLog
    
    # Get info on logical drive #0 on controller #0
    /opt/MegaRAID/MegaCli/MegaCli64 -LdInfo -L0 -a0 -NoLog
    

Security

Shell

  • date: Convert To Unix Timestamp:
    date -d "$date" +%s
  • date: Convert From Unix Timestamp:
    date -d "1970-01-01 1234567890 sec GMT"
  • date: Calculate Last Day of Month:
    cal $(date "+%m %Y") | grep -v ^$ | tail -1 | sed 's/^.* \([0-9]*\)$/\1/'
  • bash: Extend Completion: How to setup your own bash completion schemas.
    complete -W 'add branch checkout clone commit diff grep init log merge mv pull push rebase rm show status tag' git
  • bash - Pass a file descriptor instead of a file name: Process substitution can be used with all tools that demand a file name parameter:
    diff <(echo abc;echo def) <(echo abc;echo abc)
  • bash - Regexp matching:
    if [[ "$string" =~ ^[0-9]+$ ]]; then 
        echo "Is a number"
    fi
  • bash - Regexp match extraction variant #1: Note how you need to set the regexp into a variable because you must not quote it in the if condition!
    REGEXP="2013:06:23 ([0-9]+):([0-9]+)"
    if [[ "$string" =~ $REGEXP ]]; then
        echo "Hour ${BASH_REMATCH[1]} Minute ${BASH_REMATCH[2]}"
    fi
  • bash - Regexp match extraction variant #2: Actually using "expr" can be much simpler, especially when only one value is to be extracted:
    hour=$(expr match "$string" '2013:06:23 \([0-9]\+\)')
    
  • bash - kill all children on exit:
    trap true TERM
    kill -- -$$
  • bash - Control History Handling:
    unset HISTFILE      # Stop logging history in this bash instance
    HISTIGNORE="[ ]*"      # Do not log commands with leading spaces
    HISTIGNORE="&"      # Do not log a command multiple times
    
    HISTTIMEFORMAT="%Y-%m-%d %H:%M:%S" # Log with timestamps
    
  • bash - apply /etc/security/limits.conf change immediately:
    sudo -i -u <user>
  • Mail Attachments: Dozens of variants to mail attachments using Unix tools.
  • tail -f until removed: When you want to tail a file until it gets removed
    tail --follow=name myfile
  • join - DB-like joining of CSV files:
    join -o1.2,2.3 -t ";" -1 1 -2 2 employee.csv tasks.csv
  • shell - list all commands:
    compgen -c |sort -u
  • shell - Check for interactive terminal: Run "tty" in silent mode and check the exit code
    tty -s
  • shell - ANSI color matrix
  • Sorting column: Use the -k switch of "sort" to sort lines by a column. E.g.
    cat access.log | sort -k 1
  • watch: periodically re-run a command and highlight changes in its output
    watch -d ls -l
  • Shell - Unbuffer Output:
    stdbuf -i0 -o0 -e0 <some command>  # Best solution
    
    unbuffer <some command>     # Wrapper script from expect
    
  • dos2unix with vi:
    :%s/^V^M//g

SSH

  • authorized_keys HowTo: Syntax and options...
  • Easy Key Copying: Stop editing authorized_keys remotely. Use the standard OpenSSH ssh-copy-id instead.
    ssh-copy-id [-i keyfile] user@machine
  • ProxyCommand: Run SSH over a gateway and forward to other hosts and/or perform some type of authentication. In .ssh/config you can have:
    Host unreachable_host
      ProxyCommand ssh gateway_host exec nc %h %p
  • Transparent Multi-Hop:
    ssh host1 -A -t host2 -A -t host3 ...
  • 100% non-interactive SSH: What parameters to use to avoid any interaction.
    ssh -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host -n "/bin/ls"
  • SFTP chroot with umask: How to enforce a umask with SFTP
    Subsystem sftp /usr/libexec/openssh/sftp-server -u 0002
  • Agent Forwarding explained with pictures! Configured in /etc/ssh_config with
    Host *
    ForwardAgent yes
  • How to use a SOCKS Proxy: on the client start the proxy with
    ssh -D <port> <remote host>

Webserver Stack

Automation Products

Which automation tools are actually out there?
  • Bcfg2: Alternative to puppet and cfengine by Argonne National Laboratory. (IMO out-dated)
  • cdist: configuration with shell scripting
  • cfengine (active, commercially backed, large user base)
  • Chef: Alternative to puppet (Ruby, active, commercially backed, large user base)
  • EMC UIM - Unified Infrastructure Manager, VCE VBlock (enterprise, commercial)
  • Puppet (Ruby-like + Ruby, active, commercially backed, large user base)
  • slaughter (Perl, active, small user base)
  • Sprinkle (Ruby, quite recent)
  • Rundeck - Workflow manager for node - role systems like EC2, chef, puppet ...
  • SaltStack - (Python, semi-commercial, new, small user base)

Finally it is worth checking the Wikipedia Comparison Chart for other less known and new tools!

Automation

  • Augeas: Very flexible file editor to be used with Puppet or standalone. Could also work with cfengine.
    $ augtool
    augtool> set /files/etc/ssh/sshd_config/PermitRootLogin no
    augtool> save
  • Augeas - in Puppet: Using Puppet with Augeas
    augeas { "sshd_config":
     changes => [
     "set /files/etc/ssh/sshd_config/PermitRootLogin no",
     ],
    }
  • cfengine: Force running shortly after a recent execution
    cfagent -K
  • cfengine - Design Center: Git repository with sketches and examples for cfengine.
  • cfengine - cf-sketch: Find and install sketches from the Design Center repository
  • detox: Tool for recursive cleanup of file names.
    detox -v -r <directory>
  • Chef - Dry Run:
    chef-client -Fmin --why-run
  • Chef - List System Info:
    ohai
  • Chef - List Node Info:
    knife node show <node>
  • Chef - List Nodes per Role:
    knife search node 'roles:<role name>'
  • Chef - Fix RabbitMQ 100% CPU usage
  • Chef - knife + SSH:
    knife ssh -a ipaddress name:server1 "chef-client"
    you can also use patterns:
    knife ssh -a ipaddress name:www* "uptime"
  • Chef - Edit Files: using a Script resource.
  • Chef - Manage Amazon EC2 instances
  • Chef - Tutorial on how to Setup Nagios in EC2
  • puppet: Debugging deployment and rules on a local machine. This only makes sense in "one time" mode running in one of the following variants:
    puppetd --test # enable standard debugging options
    puppetd --debug # enable full debugging
    puppetd --one-time --detailed-exitcodes # Enable exit codes:
               # 2=changes applied
               # 4=failure
    

Software Firewalls, LBs

Install Servers

  • Cobbler
  • MAAS - Ubuntu "Metal As A Service" install server

Orchestration Tools

  • JuJu: mostly for Ubuntu, service orchestration tool (Python, commercially backed)
  • Maestro (enterprise, commercial)
  • mcollective - Puppet parallelizing and orchestration framework
  • SaltStack

Database

Debian

  • Build Kernel Package: How to build kernel packages with make-pkg
    cd /usr/src/linux && make-kpkg clean && make-kpkg --initrd --revision=myrev kernel_image
  • Setup Keyring: How to solve "The following packages cannot be authenticated"
    apt-get install debian-archive-keyring
    apt-get update
  • Force remove broken "reportbug": This can happen during dist-upgrades from Etch/Sarge to Lenny.
  • Packages - Reconfigure after installation:
    dpkg-reconfigure -a
  • dpkg Cheat-Sheet: Query package infos
    # Resolve file to package
    dpkg -S /etc/fstab
    
    # Print all files of a package
    dpkg -L passwd # provided files
    dpkg -c passwd # owned files
    
    # Find packages by name
    dpkg -l gnome*
    
    # Package details
    dpkg -p passwd
    
  • Ubuntu - Access Repositories for older releases. Once a release is deprecated it is moved to old-releases.ubuntu.com. You need to adapt /etc/apt/sources.list to fetch packages from there
    sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/' /etc/apt/sources.list

Debugging / Performance Tools

  • Reboot when /sbin is unusable
    echo b >/proc/sysrq-trigger
  • List Context Switches per Process
    pidstat -w
  • Drop Filesystem Cache
    echo 1 > /proc/sys/vm/drop_caches
  • dmesg - block IO debugging:
    echo 1 > /proc/sys/vm/block_dump
    
    # wait some time...
    echo 0 > /proc/sys/vm/block_dump
    
    # Now check syslog for block dump lines
    
  • Apply changed sysctl settings (re-read /etc/sysctl.conf):
    sysctl -p
  • dmesg - Filtering Output:
    dmesg -T      # Enable human readable timestamps
    dmesg -x      # Show facility and log level
    dmesg -f daemon     # Filter for facility daemon
    dmesg -l err,crit,alert,emerg # Filter for errors
    
  • lslk - Find file locks: Use lslk to find which PID is blocking an flock() to a file.
  • lsof - Find owners of open file handles:
    lsof      # Complete list
    lsof -i :22    # Filter single TCP port
    lsof -i@<host>:22 # Filter single connection endpoint
    lsof -u <user>   # Filter per user
    lsof -c <name>   # Filter per process name
    lsof -p 12345    # Filter by PID
    lsof /etc/hosts   # Filter single file
    
  • Perf Tutorial: 2.6+ generic kernel performance statistics tool.
    perf stat -B some_command
  • dstat: Replaces vmstat, iostat, netstat and ifstat and allows to determine PID that is most CPU and most I/O expensive
    dstat -a --top-bio --top-cpu
  • iotop: Python script to monitor I/O like top
  • PHP - How to setup the APD debugger

Filesystem / Partitioning

  • uNetBootin: Create bootable media for any distribution. Most useful with USB sticks.
  • Convert ext2 to ext3:
    tune2fs -j /dev/hda1
  • Convert ext3 to ext4:
    tune2fs -O extents,uninit_bg,dir_index /dev/sda1
  • Determine Inode Count:
    tune2fs -l /dev/sda1 | grep Inode
  • Disable ext4 barriers: Add "barrier=0" to the mount options (a short fstab sketch follows after this list).
  • LVM - Add another disk: How to add a disk to an existing volume
    # Setup partition with (use parted for >2TB)
    (parted) mklabel gpt       # only when >2TB
    (parted) mkpart primary lvm 0 4T    # setup disk full size (e.g. 4TB)
    
    pvcreate /dev/sdb1       # Create physical LVM disk
    vgextend vg01 /dev/sdb1      # Add to volume group
    vgextend -L +4t /dev/mapper/vg01-lvdata  # Extend your volume 
    resize2fs /dev/mapper/vg01-lvdata   # Auto-resize file system
  • rsync - --delete doesn't work: It happens when you call rsync without a trailing slash in the source path like this:
    rsync -az -e ssh --delete /data server:/data
    It just won't delete anything. It will when running it like this:
    rsync -az -e ssh --delete /data/ server:/data
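For the ext4 barrier option mentioned in the list above, a minimal sketch of the corresponding /etc/fstab entry (device and mount point are only placeholders):
/dev/sda1  /data  ext4  defaults,barrier=0  0  2
# or change it on a mounted filesystem without a reboot:
mount -o remount,barrier=0 /data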

Hosting

  • Hoster Lookup: whoishostingthis.com, who-hosts.com
  • iplist.net: Simple reverse lookup of neighbour IPs
  • Hoster Status: Status Channels for different hosters:
    • Rackspace:
    • CloudFlare:
    • Hetzner:

Hardware Info

  • HP - Find Installed Memory:
    dmidecode 2>&1 |grep -A17 -i "Memory Device" |egrep "Memory Device|Locator: PROC|Size" |grep -v "No Module Installed" |grep -A1 -B1 "Size:"

Mail

Middleware

  • Heartbeat - Manual IP Failover
    # Either run on the node that should take over
    /usr/share/heartbeat/hb_failover
    
    # Or run on the node to should stop working
    /usr/share/heartbeat/hb_standby
  • keepalived: Simple VRRP solution
  • Pacemaker - Commands
    # Cluster Resource Status
    crm_mon
    crm_mon -1
    crm_mon -f   # failure count
    
    # Dump and Import Config
    cibadmin --query --obj_type resources >file.xml
    cibadmin --replace --obj_type resources --xml-file file.xml
    
    # Resource Handling
    crm resource stop <name>
    crm resource start <name>
    crm resource move <name> <node>
    
    # Put entire cluster in maintenance
    crm configure property maintenance-mode=true
    crm configure property maintenance-mode=false
    
    # Unmanaged Mode for single services
    crm resource unmanage <name>
    crm resource manage <name>
    
  • Pacemaker - Setup Steps
  • RabbitMQ - Commands
    rabbitmqctl list_vhosts   # List all defined vhosts
    rabbitmqctl list_queues <vhost> # List all queues for the vhost
    
    rabbitmqctl report    # Dump detailed report on RabbitMQ instance  
    
    # Plugin management
    /usr/lib/rabbitmq/bin/rabbitmq-plugins enable <name>
    /usr/lib/rabbitmq/bin/rabbitmq-plugins list   
    
  • RabbitMQ - Fix Chef 100% CPU usage
  • RabbitMQ - Setup Clustering
  • wackamole - Commands
    wackatrl -l     # List status
    wackatrl -f     # Remove node from cluster
    wackatrl -s     # Add node to cluster again
    

Monitoring

Network Administration Commands

Package Management

  • Debian
    apt-get install <package> 
    apt-get remove <package> # Remove files installed by <package>
    apt-get purge <package>  # Remove <package> and all the files it did create
    
    apt-get upgrade    # Upgrade all packages
    apt-get install <package> # Upgrade an install package
    
    apt-get dist-upgrade  # Upgrade distribution
    
    apt-cache search <package> # Check if there is such a package name in the repos
    apt-cache clean    # Remove all downloaded .debs
    
    dpkg -l      # List all installed/known packages
    
    # More dpkg invocations above in the "Debian" section!
    
  • Ubuntu (like Debian) with the addition of
    # 1. Edit settings in  /etc/update-manager/release-upgrades
    # e.g. set "Prompt=lts"
    
    # 2. Run upgrade
    do-release-upgrade -d   # For Ubuntu release upgrades
  • OpenSuSE
    zypper install <package> 
    
    zypper refresh    # Update repository infos
    
    zypper list-updates
    zypper repos    # List configured repositories
    
    zypper dist-upgrade   # Upgrade distribution
    zypper dup     # Upgrade distribution (alias)
    
    zypper search <package>  # Search for <package>
    zypper search --search-descriptions <package>
    
    zypper clean      # Clean package cache
    
    # For safe updates:
    zypper mr --keep-packages --remote # Enable caching of packages
    zypper dup -D      # Fetch packages using a dry run
    zypper mr --all --no-refresh  # Set cache usage for following dup
    zypper dup      # Upgrade!
    
  • Redhat:
    up2date
  • Centos:
    yum update     # Upgrade distro
    yum install <package>  # Install <package>

RAID

  • mdadm - Commands
    cat /proc/mdstat   # Print status
    
    mdadm --detail /dev/md0  # Print status per md
    
    mdadm --manage -r /dev/md0 /dev/sda1 # Remove a disk
    mdadm --zero-superblock /dev/sda1  # Initialize a disk
    mdadm --manage -a /dev/md0 /dev/sda1 # Add a disk
    
    mdadm --manage --set-faulty /dev/md0 /dev/sda1
    
  • hpacucli - Commands
    # Show status of all arrays on all controllers
    hpacucli all show config
    hpacucli all show config detail
    
    # Show status of specific controller
    hpacucli ctrl=0 pd all show
    
    # Show Smart Array status
    hpacucli all show status
    
  • LSI MegaRAID - Commands
    # Get number of controllers
    /opt/MegaRAID/MegaCli/MegaCli64 -adpCount -NoLog
    
    # Get number of logical drives on controller #0
    /opt/MegaRAID/MegaCli/MegaCli64 -LdGetNum -a0 -NoLog
    
    # Get info on logical drive #0 on controller #0
    /opt/MegaRAID/MegaCli/MegaCli64 -LdInfo -L0 -a0 -NoLog
    

Security

Shell Scripting - Cheat Sheet

SSH

  • SSH Escape Key: Pressing "~?" (directly following a newline) gives a menu for escape sequences:
    Supported escape sequences:
      ~.  - terminate connection (and any multiplexed sessions)
      ~B  - send a BREAK to the remote system
      ~C  - open a command line
      ~R  - Request rekey (SSH protocol 2 only)
      ~^Z - suspend ssh
      ~#  - list forwarded connections
      ~&  - background ssh (when waiting for connections to terminate)
      ~?  - this message
      ~~  - send the escape character by typing it twice
    (Note that escapes are only recognized immediately after newline.)
    
  • SSH Mounting remote filesystem:
    # To mount a remote home dir 
    sshfs user@server: /mnt/home/user/
    
    # Unmount again with
    fuserumount -u /mnt/home/user
  • authorized_keys HowTo: Syntax and options...
  • Automatic Jump Host Proxying: Use the following ~/.ssh/config snippet and create ~/.ssh/tmp before using it
    ControlMaster auto
    ControlPath /home/<user name>/.ssh/tmp/%h_%p_%r
     
    Host <your jump host>
      ForwardAgent yes
      Hostname <your jump host>
      User <your user name on jump host>
    
    # Note the server list can have wild cards, e.g. "webserver-* database*"
    Host <server list>
      ForwardAgent yes
      User <your user name on all these hosts>
      ProxyCommand ssh -q <your jump host> nc -q0 %h 22
    
  • Easy Key Copying: Stop editing authorized_keys remote. Use the standard OpenSSH ssh-copy-id instead.
    ssh-copy-id [-i keyfile] user@maschine
  • ProxyCommand: Run SSH over a gateway and forward to other hosts based and/or perform some type of authentication. In .ssh/config you can have:
    Host unreachable_host
      ProxyCommand ssh gateway_host exec nc %h %p
  • Transparent Multi-Hop:
    ssh host1 -A -t host2 -A -t host3 ...
  • 100% non-interactive SSH: What parameters to use to avoid any interaction.
    ssh -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host -n "/bin/ls"
  • SFTP chroot with umask: How to enforce a umask with SFTP
    Subsystem sftp /usr/libexec/openssh/sftp-server -u 0002
  • Agent Forwarding explained with pictures! Configured in /etc/ssh_config with
    Host *
    ForwardAgent yes
  • How to use a SOCKS Proxy On the client start proxy by
    ssh -D <port> <remote host>
  • Parallel SSH on Debian
    apt-get install pssh
    and use it like this
    pssh -h host_list.txt <args>
  • Clustered SSH on Debian
    apt-get install clusterssh
    and use it like this
    cssh server1 server2

Webserver Stack

Automation - Products

Framework   DSL          CM       CM Encryption   Orchestration
cfengine    Proprietary  ?        ?               Enterprise Only
Puppet      Ruby         Hiera    Hiera Eyaml     mcollective
Chef        Ruby         Builtin  Builtin         Pushy (knife plugin + ZeroMQ)
Saltstack   Python       Builtin  Builtin         Builtin
Other tools
  • Bcfg2: Alternative to puppet and cfengine by Argonne National Laboratory. (IMO out-dated)
  • cdist: configuration with shell scripting
  • EMC UIM - Unified Infrastructure Manager, VCE VBlock (enterprise, commercial)
  • slaughter (Perl, active, small user base)
  • Sprinkle (Ruby, quite recent)
  • Rundeck - Workflow manager for node - role systems like EC2, chef, puppet ...
  • IBM Tivoli

Finally it is worth checking the Wikipedia Comparison Chart for other lesser-known and new tools!

Automation

  • Augeas: Very flexible file editor to be used with Puppet or standalone. Could also work with cfengine.
    $ augtool
    augtool> set /files/etc/ssh/sshd_config/PermitRootLogin no
    augtool> save
  • Augeas - in Puppet: Using Puppet with Augeas
    augeas { "sshd_config":
     changes => [
     "set /files/etc/ssh/sshd_config/PermitRootLogin no",
     ],
    }
  • cfengine: Force running shortly after a recent execution
    cfagent -K
  • cfengine - Design Center: Git repository with sketches and examples for cfengine.
  • cfengine - cf-sketch: Find and install sketches from the Design Center repository

Automation - Chef

  • Chef - Dry Run:
    chef-client -Fmin --why-run
  • Chef - List System Info:
    ohai
  • Chef - Bootstrap client:
    knife bootstrap <FQDN/IP>
  • Chef - Change Run List:
    knife node run_list <add|remove> <node> <cookbook>::<recipe>
  • Chef - List Node Info:
    knife node show <node>
  • Chef - List Nodes per Role:
    knife search node 'roles:<role name>'
  • Chef - Fix RabbitMQ 100% CPU usage
  • Chef - knife + SSH:
    knife ssh -a ipaddress name:server1 "chef-client"
    you can also use patterns:
    knife ssh -a ipaddress name:www* "uptime"
  • Chef - Edit Files: using a Script resource.
  • Chef - Manage Amazon EC2 instances
  • Chef - Tutorial on how to Setup Nagios in EC2
  • puppet: Debugging deployment and rules on a local machine. This only makes sense in "one time" mode running in one of the following variants:
    puppetd --test # enable standard debugging options
    puppetd --debug # enable full debugging
    puppetd --one-time --detailed-exitcodes # Enable exit codes:
               # 2=changes applied
               # 4=failure
    

Automation - Puppet

  • Bootstrap client
    puppet agent -t --server <puppet master> [<options>]
    
  • Managing Certificates (on master)
    puppet cert list
    puppet cert list --all
    puppet cert sign <name>
    puppet cert clean <name>   # removes cert
    
  • Managing Modules
    puppet module list
    puppet module install <name>
    puppet module uninstall <name>
    puppet module upgrade <name>
    puppet module search <name>
    
  • Inspecting Resources/Types
    puppet describe -l
    puppet resource <type name>
    
    # Querying Examples
    puppet resource user john.smith
    puppet resource service apache
    puppet resource mount /data
    puppet resource file /etc/motd
    puppet resource package wget
    
  • Gepetto: Puppet IDE
  • puppet - Correctly using Roles and Profiles
  • eyaml usage
    eyaml encrypt -f <filename>
    eyaml encrypt -s <string>
    eyaml encrypt -p      # Encrypt password, will prompt for it
    
    eyaml decrypt -f <filename>
    eyaml decrypt -s <string>
    
    eyaml edit -f <filename>    # Decrypts, launches in editor and reencrypts
    
  • mcollective commands
    mco ping
    mco ping -W "/some match pattern/"
    mco ping -S "<some select query>"
    
    # List agents, queries, plugins...
    mco plugin doc
    mco plugin doc <name>
    
    mco rpc service start service=httpd
    mco rpc service stop service=httpd
    
    mco facts <keyword>
    
    mco inventory <node name>
    
    # With shell plugin installed
    mco shell run <command>
    mco shell run --tail <command>
    
    mco shell start <command>    # Returns an id
    mco shell watch <id>
    mco shell kill <id>
    mco shell list
    

Software Firewalls, LBs

Install Servers

Orchestration Tools

  • JuJu: mostly for Ubuntu, service orchestration tool (Python, commercially backed)
  • Maestro (enterprise, commercial)
  • mcollective - Puppet parallelizing and orchestration framework
  • SaltStack

Database

Debian

  • Check for security upgrades
    # With apt-show-versions
    apt-show-versions | grep "security upgradeable"
    
    # With aptitude
    aptitude search '?and(~U,~Asecurity)'
    
  • Build Kernel Package: How to build kernel packages with make-kpkg
    cd /usr/src/linux && make-kpkg clean && make-kpkg --initrd --revision=myrev kernel_image
  • Setup Keyring: How to solve "The following packages cannot be authenticated"
    apt-get install debian-archive-keyring
    apt-get update
  • Force remove broken "reportbug": This can happen during dist-upgrades from Etch/Sarge to Lenny.
  • Packages - Reconfigure after installation:
    dpkg-reconfigure -a
  • dpkg Cheat-Sheet: Query package infos
    # Resolve file to package
    dpkg -S /etc/fstab
    
    # Print all files of a package
    dpkg -L passwd # provided files
    dpkg -c passwd # owned files
    
    # Find packages by name
    dpkg -l gnome*
    
    # Package details
    dpkg -p passwd
    
  • Ubuntu - Access Repositories for older releases. Once a release is deprecated it is moved to old-releases.ubuntu.com. You need to adapt /etc/apt/sources.list to fetch packages from there
    sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/' /etc/apt/sources.list
  • Ubuntu - List Security Updates
    # Print summary
    /usr/lib/update-notifier/apt-check --human-readable
    
    # Print package names
    /usr/lib/update-notifier/apt-check -p
  • Ubuntu - Upgrade Security Fixes Only
    apt-get dist-upgrade -o Dir::Etc::SourceList=/etc/apt/sources.security.repos.only.list
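    The referenced sources list is not a stock file; you have to create it yourself
    containing only the security pockets. A minimal sketch (the release name "focal"
    is just an example, adjust it to your release) could look like:
    
    # /etc/apt/sources.security.repos.only.list
    deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse
    deb-src http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse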

Debugging / Performance Tools

  • Reboot when /sbin is unusable
    echo b >/proc/sysrq-trigger
  • List Context Switches per Process
    pidstat -w
  • Drop Filesystem Cache
    echo 1 > /proc/sys/vm/drop_caches
  • dmesg - block IO debugging:
    echo 1 > /proc/sys/vm/block_dump
    
    # wait some time...
    echo 0 > /proc/sys/vm/block_dump
    
    # Now check syslog for block dump lines
    
  • Check for changed sysctl() settings:
    sysctl -p
  • dmesg - Filtering Output:
    dmesg -T      # Enable human readable timestamps
    dmesg -x      # Show facility and log level
    dmesg -f daemon     # Filter for facility daemon
    dmesg -l err,crit,alert,emerg # Filter for errors
    
  • lslk - Find file locks: Use lslk to find which PID is blocking an flock() to a file.
  • lsof - Find owners of open file handles:
    lsof      # Complete list
    lsof -i :22    # Filter single TCP port
    lsof -i@<remote host>:22   # Filter single connection endpoint
    lsof -u <user>   # Filter per user
    lsof -c <name>   # Filter per process name
    lsof -p 12345    # Filter by PID
    lsof /etc/hosts   # Filter single file
    
  • Perf Tutorial: 2.6+ generic kernel performance statistics tool.
    perf stat -B some_command
  • dstat: Replaces vmstat, iostat, netstat and ifstat and allows to determine PID that is most CPU and most I/O expensive
    dstat -a --top-bio --top-cpu
  • iotop: Python script to monitor I/O like top
  • PHP - How to setup the APD debugger
  • PHP - How to build Debian package for modules from PECL
    apt-get install dh-make-php
    dh-make-pecl <module name>
    cd <source directory>
    debuild
    # .deb package will be in ...
    
  • Sysdig: Some of the project examples
    sysdig fd.name contains /etc
    sysdig -c topscalls_time    # Top system calls
    sysdig -c topfiles_time proc.name=httpd    # Top files by process
    sysdig -c topfiles_bytes     # Top I/O per file
    sysdig -c fdcount_by fd.cip "evt.type=accept"   # Top connections by IP
    sysdig -c fdbytes_by fd.cip  # Top bytes per IP
    
    # Sick MySQL check via Apache
    sysdig -A -c echo_fds fd.sip=192.168.30.5 and proc.name=apache2 and evt.buffer contains SELECT
    
    sysdig -cl # List plugins
    sysdig -c bottlenecks  # Run bottlenecks plugin
    

Filesystem / Partitioning

  • detox: Tool for recursive cleanup of file names.
    detox -v -r <directory>
  • Fast File Deletion:
    perl -e 'for(<*>){((stat)[9]<(unlink))}'
  • POSIX ACLs:
    getfacl <file>  # List ACLs for file 
    setfacl -m user:joe:rwx dir # Modify ACL
    ls -ld <file>    # Check for active ACL (indicates a "+")
  • uNetBootin: Create bootable media for any distribution. Most useful with USB sticks.
  • Convert ext2 to ext3:
    tune2fs -j /dev/hda1
  • Convert ext3 to ext4:
    tune2fs -O extents,uninit_bg,dir_index /dev/sda1
  • Determine Inode Count:
    tune2fs -l /dev/sda1 | grep Inode
  • Disable ext4 barriers: Add "barrier=0" to the mount options (see the fstab sketch after this list).
  • LVM - Add another disk: How to add a disk to an existing volume
    # Setup partition with (use parted for >2TB)
    (parted) mklabel gpt       # only when >2TB
    (parted) mkpart primary lvm 0 4T    # setup disk full size (e.g. 4TB)
    
    pvcreate /dev/sdb1       # Create physical LVM disk
    vgextend vg01 /dev/sdb1      # Add to volume group
    lvextend -L +4t /dev/mapper/vg01-lvdata  # Extend your logical volume
    resize2fs /dev/mapper/vg01-lvdata   # Auto-resize file system
  • rsync - --delete doesn't work: It happens when you call rsync without a trailing slash in the source path like this:
    rsync -az -e ssh --delete /data server:/data
    It just won't delete anything. It will when running it like this:
    rsync -az -e ssh --delete /data/ server:/data
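
As referenced in the ext4 barrier item above, a minimal sketch of how "barrier=0" could be applied (device and mount point names are just examples, adjust to your setup):

    # /etc/fstab - mount /data without write barriers
    /dev/sda1  /data  ext4  defaults,barrier=0  0  2
    
    # Or apply it to an already mounted filesystem without a reboot
    mount -o remount,barrier=0 /data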

Hosting

  • Hoster Lookup: whoishostingthis.com, who-hosts.com
  • iplist.net: Simple reverse lookup of neighbour IPs
  • Hoster Status: Status Channels for different hosters:
    • Rackspace:
    • CloudFlare:
    • Hetzner:

Hardware Info

  • HP - Find Installed Memory:
    dmidecode 2>&1 |grep -A17 -i "Memory Device" |egrep "Memory Device|Locator: PROC|Size" |grep -v "No Module Installed" |grep -A1 -B1 "Size:"

Mail

Middleware

  • Heartbeat - Manual IP Failover
    # Either run on the node that should take over
    /usr/share/heartbeat/hb_failover
    
    # Or run on the node that should stop working
    /usr/share/heartbeat/hb_standby
  • keepalived: Simple VRRP solution
  • Pacemaker - Commands
    # Cluster Resource Status
    crm_mon
    crm_mon -1
    crm_mon -f   # failure count
    
    # Dump and Import Config
    cibadmin --query --obj_type resources >file.xml
    cibadmin --replace --obj_type resources --xml-file file.xml
    
    # Resource Handling
    crm resource stop <name>
    crm resource start <name>
    crm resource move <name> <node>
    
    # Put entire cluster in maintenance
    crm configure property maintenance-mode=true
    crm configure property maintenance-mode=false
    
    # Unmanaged Mode for single services
    crm resource unmanage <name>
    crm resource manage <name>
    
  • Pacemaker - Setup Steps
  • RabbitMQ - Commands
    rabbitmqctl list_vhosts   # List all defined vhosts
    rabbitmqctl list_queues <vhost> # List all queues for the vhost
    
    rabbitmqctl report    # Dump detailed report on RabbitMQ instance  
    
    # Plugin management
    /usr/lib/rabbitmq/bin/rabbitmq-plugins enable <name>
    /usr/lib/rabbitmq/bin/rabbitmq-plugins list   
    
  • RabbitMQ - Fix Chef 100% CPU usage
  • RabbitMQ - Setup Clustering
  • wackamole - Commands
    wackatrl -l     # List status
    wackatrl -f     # Remove node from cluster
    wackatrl -s     # Add node to cluster again
    

Monitoring

Network Administration Commands

Package Management

  • Debian File Diversion:
    # Register diverted path and move away
    dpkg-divert --add --rename --divert <renamed file path> <file path>
    
    # Remove a diversion again (remove file first!)
    dpkg-divert --rename --remove <file path>
    
  • Debian
    apt-get install <package> 
    apt-get remove <package> # Remove <package> (keeps its config files)
    apt-get purge <package>  # Remove <package> including its config files
    
    apt-get upgrade    # Upgrade all packages
    apt-get install <package> # Upgrade an installed package
    
    apt-get dist-upgrade  # Upgrade distribution
    
    apt-cache search <package> # Check if there is such a package name in the repos
    apt-get clean    # Remove all downloaded .debs
    
    dpkg -l      # List all installed/known packages
    
    # More dpkg invocations above in the "Debian" section!
    
  • Ubuntu (like Debian) with the addition of
    # 1. Edit settings in  /etc/update-manager/release-upgrades
    # e.g. set "Prompt=lts"
    
    # 2. Run upgrade
    do-release-upgrade -d   # For Ubuntu release upgrades
  • Ubuntu: Unattended Upgrades
    apt-get install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades 
    # and maybe set notification mail address in /etc/apt/apt.conf.d/50unattended-upgrades
  • OpenSuSE
    zypper install <package> 
    
    zypper refresh    # Update repository infos
    
    zypper list-updates
    zypper repos    # List configured repositories
    
    zypper dist-upgrade   # Upgrade distribution
    zypper dup     # Upgrade distribution (alias)
    
    zypper search <package>  # Search for <package>
    zypper search --search-descriptions <package>
    
    zypper clean      # Clean package cache
    
    # For safe updates:
    zypper mr --keep-packages --remote # Enable caching of packages
    zypper dup -D      # Fetch packages using a dry run
    zypper mr --all --no-refresh  # Set cache usage for following dup
    zypper dup      # Upgrade!
    
  • Redhat:
    up2date
  • Centos:
    yum update     # Upgrade distro
    yum install <package>  # Install <package>

RAID

  • mdadm - Commands
    cat /proc/mdstat   # Print status
    
    mdadm --detail /dev/md0  # Print status per md
    
    mdadm --manage -r /dev/md0 /dev/sda1 # Remove a disk
    mdadm --zero-superblock /dev/sda1  # Initialize a disk
    mdadm --manage -a /dev/md0 /dev/sda1 # Add a disk
    
    mdadm --manage --set-faulty /dev/md0 /dev/sda1
    
  • hpacucli - Commands
    # Show status of all arrays on all controllers
    hpacucli all show config
    hpacucli all show config detail
    
    # Show status of specific controller
    hpacucli ctrl=0 pd all show
    
    # Show Smart Array status
    hpacucli all show status
    
    # Create new Array
    hpacucli ctrl slot=0 create type=logicaldrive drives=1I:1:3,1I:1:4 raid=1
    
  • LSI MegaRAID - Commands
    # Get number of controllers
    /opt/MegaRAID/MegaCli/MegaCli64 -adpCount -NoLog
    
    # Get number of logical drives on controller #0
    /opt/MegaRAID/MegaCli/MegaCli64 -LdGetNum -a0 -NoLog
    
    # Get info on logical drive #0 on controller #0
    /opt/MegaRAID/MegaCli/MegaCli64 -LdInfo -L0 -a0 -NoLog
    

Security

Shell Scripting - Cheat Sheet

SSH

  • SSH Escape Key: Pressing "~?" (directly following a newline) gives a menu for escape sequences:
    Supported escape sequences:
      ~.  - terminate connection (and any multiplexed sessions)
      ~B  - send a BREAK to the remote system
      ~C  - open a command line
      ~R  - Request rekey (SSH protocol 2 only)
      ~^Z - suspend ssh
      ~#  - list forwarded connections
      ~&  - background ssh (when waiting for connections to terminate)
      ~?  - this message
      ~~  - send the escape character by typing it twice
    (Note that escapes are only recognized immediately after newline.)
    
  • SSH Mounting remote filesystem:
    # To mount a remote home dir 
    sshfs user@server: /mnt/home/user/
    
    # Unmount again with
    fusermount -u /mnt/home/user
  • authorized_keys HowTo: Syntax and options...
  • Automatic Jump Host Proxying: Use the following ~/.ssh/config snippet and create ~/.ssh/tmp before using it
    ControlMaster auto
    ControlPath /home/<user name>/.ssh/tmp/%h_%p_%r
     
    Host <your jump host>
      ForwardAgent yes
      Hostname <your jump host>
      User <your user name on jump host>
    
    # Note the server list can have wild cards, e.g. "webserver-* database*"
    Host <server list>
      ForwardAgent yes
      User <your user name on all these hosts>
      ProxyCommand ssh -q <your jump host> nc -q0 %h 22
    
  • Easy Key Copying: Stop editing authorized_keys remotely. Use the standard OpenSSH ssh-copy-id instead.
    ssh-copy-id [-i keyfile] user@machine
  • ProxyCommand: Run SSH over a gateway and forward to other hosts and/or perform some type of authentication. In .ssh/config you can have:
    Host unreachable_host
      ProxyCommand ssh gateway_host exec nc %h %p
  • Transparent Multi-Hop:
    ssh host1 -A -t host2 -A -t host3 ...
  • 100% non-interactive SSH: What parameters to use to avoid any interaction.
    ssh -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host -n "/bin/ls"
  • SFTP chroot with umask: How to enforce a umask with SFTP
    Subsystem sftp /usr/libexec/openssh/sftp-server -u 0002
  • Agent Forwarding explained with pictures! Configured in /etc/ssh_config with
    Host *
    ForwardAgent yes
  • How to use a SOCKS Proxy: On the client start the proxy with
    ssh -D <port> <remote host>
  • Parallel SSH on Debian
    apt-get install pssh
    and use it like this
    pssh -h host_list.txt <args>
  • Clustered SSH on Debian
    apt-get install clusterssh
    and use it like this
    cssh server1 server2
  • Vim Remote File Editing:
    vim scp://user@host//some/directory/file.txt

Webserver Stack

Linux network administration commands

Basics

  • Resolve a name via nsswitch
    getent hosts <host name>
  • CloudShark: Sharing network traces

Configuration

  • ethtool - Usage
    ethtool eth0                       # Print general info on eth0
    ethtool -i eth0                    # Print kernel module info
    ethtool -S eth0                    # Print eth0 traffic statistics
    ethtool -a eth0                    # Print RX, TX and auto-negotiation settings
    
    # Changing NIC settings...
    ethtool -s eth0 speed 100
    ethtool -s eth0 autoneg off
    ethtool -s eth0 duplex full
    ethtool -s eth0 wol g               # Turn on wake-on-LAN
    
    Do not forget to make changes permanent in e.g. /etc/network/interfaces (see the sketch after this list).
  • miitool - Show Link Infos
    # mii-tool -v
    eth0: negotiated 100baseTx-FD flow-control, link ok
      product info: vendor 00:07:32, model 17 rev 4
      basic mode:   autonegotiation enabled
      basic status: autonegotiation complete, link ok
      capabilities: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
      advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
      link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
    
  • Enable Jumbo Frames
    ifconfig eth1 mtu 9000
  • ipsets - Using IP sets for simpler iptables rules
    ipset create smtpblocks hash:net counters
    ipset add smtpblocks 27.112.32.0/19
    ipset add smtpblocks 204.8.87.0/24
    iptables -A INPUT -p tcp --dport 25 -m set --match-set smtpblocks src -j DROP
    
  • iptables - Loopback Routing:
    iptables -t nat -A POSTROUTING -d <internal web server IP> -s <internal network address> -p tcp --dport 80 -j SNAT --to-source <external web server IP>
  • NFS - Tuning Secrets: SGI Slides on NFS Performance
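
For the ethtool item above: a rough sketch of making such NIC settings permanent via post-up hooks in /etc/network/interfaces (interface name, addresses and the chosen ethtool options are just examples):

    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        # Re-apply the NIC settings on every ifup
        post-up /sbin/ethtool -s eth0 speed 100 duplex full autoneg off
        post-up /sbin/ethtool -s eth0 wol g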

Troubleshooting

  • Black Hole Route: To block IPs create route on loopback
    route add -net 91.65.16.0/24 gw 127.0.0.1 lo   # for a subnet
    route add  91.65.16.4 gw 127.0.0.1 lo   # for a single IP
  • Quick Access Log IP Top List
    tail -100000 access.log | awk '{print $1}' | sort | uniq -c |sort -nr|head -25
  • Find out if IP is used before configuring it
    arping <IP>
  • Traceroute with AS and network name lookup
    lft -AN www.google.de
  • Manually lookup AS
  • dailychanges.com: Tracks DNS changes

Measuring

  • vnstat - Short term measurement bytes/packets min/avg/max:
    vnstat -l      # Live listing until Ctrl-C and summary
    vnstat -tr     # 5s automatic traffic sample
  • vnstat - Long term statistics:
    vnstat -h      # last hours (including ASCII graph)
    vnstat -d      # last days
    vnstat -w      # last weeks
    vnstat -m     # last months
    
    vnstat -t       # top 10 days

Discovery

  • nmap commands
    # Network scan
    nmap -sP 192.168.0.0/24
    
    # Host scan
    nmap <ip>
    nmap -F <ip>      # fast
    nmap -O <ip>     # detect OS
    nmap -sV <ip>     # detect services and versions
    nmap -sU <ip>     # detect UDP services
    
    # Alternative host discovery
    nmap -PS <ip>     # TCP SYN scan
    nmap -PA <ip>     # TCP ACK scan
    nmap -PO <ip>     # IP ping
    nmap -PU <ip>     # UDP ping
    
    # Alternative service discovery
    nmap -sS <ip>      
    nmap -sT <ip>
    nmap -sA <ip>
    nmap -sW <ip>
    
    # Checking firewalls
    nmap -sN <ip>
    nmap -sF <ip>
    nmap -sX <ip>
    

Debugging

  • X-Trace - Multi-protocol tracing framework
  • iptraf - Real-time statistics in ncurses interfaces
  • mtr - Debug routing/package loss issues
  • netstat - The different modes
    # Typically used modes
    netstat -rn          # List routes
    netstat -tlnp       # List listening TCP sockets with their process
    netstat -tlnpc      # Continuously do the above
    netstat -tulpen    # Extended listing of listening TCP/UDP sockets
    netstat -a           # List all sockets
    
    # And more rarely used
    netstat -s            # List per protocol statistics
    netstat -su          # List UDP statistics
    netstat -M           # List masqueraded connections
    netstat -i            # List interfaces and counters
    netstat -o           # Watch time/wait handling
    
  • nttcp - TCP performance testing
    # On sending host
    nttcp -t -s
    
    # On receiving host
    nttcp -r -s
    
  • List Kernel Settings
    sysctl net
  • tcpdump - Be verbose and print full package hex dumps:
     tcpdump -i eth0 -nN -vvv -xX -s 1500 port <some port>
  • SNMP - Dump all MIBs: When you need to find the MIB for an object known only by name try
    snmpwalk -c public -v 1 -O s <myhost> .iso | grep <search string>
  • Hurricane Electric - BGP Tools: Statistics on all AS as well as links to their looking glasses.
  • tcpdump - Tutorial: Many usage examples.
    # Filter port
    tcpdump port 80
    tcpdump src port 1025 
    tcpdump dst port 389
    tcpdump portrange 21-23
    
    # Filter source or destination IP
    tcpdump src 10.0.0.1
    tcpdump dst 10.0.0.2
    
    # Filter everything on a network
    tcpdump net 1.2.3.0/24
    
    # Logical operators
    tcpdump src port 1025 and tcp 
    
    # Provide full hex dump of captured HTTP packages
    tcpdump -s0 -x port 80
    
    # Filter TCP flags (e.g. RST)
    tcpdump 'tcp[13] & 4!=0'
    

NFS Administration Commands

Linux ha architectures

When you want to use free cluster solutions in Debian/Ubuntu you will wonder about their complexity and the big question: which one to use. With this post I want to shed some light on the different software stacks and give some opinions on their advantages and possible pitfalls.

Heartbeat

I think heartbeat is probably the oldest still existing widespread free failover solution on Linux. It must date back to 1998 or earlier. These days Debian Wheezy has heartbeat2 included, which comes bundled together with Pacemaker, which will be discussed below. Still you can use Heartbeat standalone as a simple IP-failover solution. And that's all heartbeat does. From the manpage:
[...] heartbeat is a basic heartbeat subsystem for Linux-HA. It will 
run scripts at initialisation, and when machines go up or down. This 
version will also perform IP address takeover using gratuitous 
ARPs. It works correctly for a 2-node configuration, and is 
extensible to larger configurations. [...]
So without any intermediate layer heartbeat manages virtual IPs on multiple nodes which communicate via unicast/broadcast/multicast UDP or ping, so they can be used as a cluster IP by any service on top. To get some service-like handling you can hook scripts to be triggered on failover, so you could start/stop services as needed. So if you just want to protect yourself against physical or network layer problems, heartbeat might work out.
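As a rough sketch of such a standalone setup (classic heartbeat 1 style configuration; node name, IP and service are made up), /etc/ha.d/haresources could tie a virtual IP and an init script to a preferred node:
node1 IPaddr::192.168.0.100/24/eth0 apache2
On failover the surviving node takes over the virtual IP via gratuitous ARP and starts the listed init script.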

Wackamole

Wackamole is probably as complex as heartbeat, but a bit younger, dating from 2001. It delegates the problem of detecting peer failures and visibility to the Spread toolkit. Spread is used by other applications e.g. for database replication or Apache SSL session cache sharing. For wackamole it is used to detect the availability of the peers. The special thing about wackamole is that you have one virtual IP per peer, and if a peer disappears this VIP fails over to one that is still in the Spread group. On the homepage the core difference is expressed as follows:
[...]Wackamole is quite unique in that it operates in a completely peer-to-peer mode within the cluster. Other products that provide the same high-availability guarantees use a "VIP" method. A networking appliance assumes a single virtual IP address and "maps" requests to that IP address to the machines in the cluster. This networking appliance is a single point of failure by itself, so most industry accepted solutions incorporate classic master-slave failover or bonding between two identical appliances. [...]
My experience with wackamole is that with certain network problems you can run into split-brain situations, with an isolated node grabbing all virtual IPs and, given its visibility in the network, killing the traffic by doing so. So when running wackamole you will from time to time have to restart all peers just to get a working Spread group again.

Pacemaker

Linux desktop feed reader usage declining?

When working on your open source pet project there is always the ego boost of asking yourself how popular this thing you are building actually is. Who is using it? Why is there no feedback? Or why are there suddenly so many bug reports? So what is the number of users of Liferea and other feed readers and how is it changing?

Popcon

Well, for Debian and Ubuntu there is the famous popularity contest which tracks installation and usage count per package. Let's look into the statistics over the years. Note that while the Debian graph is the official one from popcon.debian.org, the Ubuntu graph is from lesbonscomptes.com/upopcon as Ubuntu itself doesn't provide graphs. Also the Ubuntu graph only covers the time from 2010 until now, while the Debian graph dates back to 2004.

Liferea and Akregator

The two widely used feed readers under Debian/Ubuntu are Liferea (GTK) and Akregator (KDE). While it is possible that more people use Thunderbird or Firefox for feed reading, it cannot be measured using popcon as there is no dedicated package for Thunderbird or Firefox that could indicate the feed reading habits of their users. So we can only look at the standalone feed reader packages. The Debian and Ubuntu graphs indicate a decline from a peak of over 4000 users on each distribution to recently roughly 1000 Debian users and 700 Ubuntu users. Interesting is the difference: Debian had more Liferea users until 2014, while Ubuntu always had more Akregator users.

Other Market Shares

Of course there are several other news readers for Debian and Ubuntu. Note that Ubuntu has RSSOwl, which Debian doesn't. Snownews isn't listed in the Debian graph anymore as it was dropped with Wheezy. As the Debian and Ubuntu graphs show, all other feed readers on Ubuntu count roughly 250+ votes in 2010 and roughly 80 in 2014.

Installation Base

Usage is falling. Well, there are mobile devices and the reduced visibility of syndication techniques in the browser (we do not have the feed button in the browser location bar anymore)... So people might just not install feed readers anymore. Is this the case? Let's limit the analysis to just Liferea and Akregator. On Debian the install base of Liferea is rapidly declining as it is not in the default desktop selection, while Akregator installations still grow, maybe due to the kde-pim integration. The situation is different on Ubuntu: both feed readers are in the default desktop packages of their respective desktop environments, so installations seem to scale linearly upwards along with the growth of Ubuntu installations. Update: Jeff Fortin hinted that only Akregator is in a default package selection on Ubuntu. This makes the linear growth of the Liferea install base strange.

It looks bleak... Checking the baseline!

Well, let's do some verification of the results to be sure popcon can be trusted. Let's have a look at a basic Debian package needed during installation like "debianutils", which users do not unselect and which is automatically used. And let's also add a basic GNOME package like "gnome-session", which will always be used if GNOME is used. Looking at the Ubuntu popcon results for both, it looks (most obviously with "debianutils") like there was a 50% reduction of the popcon votes over 2013. Please note that the staircase steps in all the Ubuntu curves probably indicate that there are only 2 samples in the given time range! I guess the decline was rather continuous, as can be seen in the Debian curve. When checking the installations at the same time there is no drop. So some mechanism in the popcon vote counting could have changed. I found no hints online why this is the case so far.

Conclusion

At this point I think the results are too confusing to actually read much into it. I believe all graphs indicate a decline of the feed reader usage over the years, especially after the peak in 2010, and at the same time the graphs indicate changes in the vote counting with differences in Ubuntu and Debian.

Lpvs: the linux package vulnerability scanner

About

This LPVS Linux package vulnerability scanner uses public security news feeds provided by the Linux distributions to detect out-of-date packages that could pose a threat for your server. The scanner currently runs on
  • Ubuntu
  • CentOS
Additional distributions might be added...

Limitations

Please know that the scanner works by comparing complex package version numbers and is therefore limited to overly exact matches. It works best on an almost up-to-date installation, for example where you run the latest Ubuntu LTS release and do weekly or on-demand updates. The current goal of the scanner is to avoid false positives and to be useful for daily analysis of a large number of systems. Note: When on Debian use debsecan instead! On FreeBSD use Portaudit.

Installation + Running

Download lpvs-scan.pl version 0.2, put it anywhere you like and run it like this

./lpvs-scan.pl
No need to run as root, any user will do. Please keep in mind that this is an experimental script which might report false positives and negatives!

Screenshots

Below you find a screenshot from a CentOS setup. Green lines indicate security advisory covering packages that are installed and up-to-date. Yellow lines indicate security advisories not applicable as the related packages are not installed. Red ones of course indicate a vulnerability.

Howto munin and rrdcached on ubuntu 12.04

Let's assume you already have Munin installed and working, and you want to reduce disk I/O and improve responsiveness by adding rrdcached... Here are the complete steps to integrate rrdcached:

Basic Installation

First install the stock package
apt-get install rrdcached
and integrate it with Munin:
  1. Enable the rrdcached socket line in /etc/munin/munin.conf
  2. Disable munin-html and munin-graph calls in /usr/bin/munin-cron
  3. Create /usr/bin/munin-graph with
    #!/bin/bash
    
    nice /usr/share/munin/munin-html $@ || exit 1
    
    nice /usr/share/munin/munin-graph --cron $@ || exit 1 
    and make it executable
  4. Add a cron job (e.g. to /etc/cron.d/munin) to start munin-graph:
    10 * * * *      munin if [ -x /usr/bin/munin-graph ]; then /usr/bin/munin-graph; fi

The Critical Stuff

To get Munin to use rrdcached on Ubuntu 12.04 ensure to follow these vital steps:
  1. Add "-s <webserver group>" to $OPT in /etc/init.d/rrdcached (in front of the first -l switch)
  2. Change "-b /var/lib/rrdcached/db/" to "-b /var/lib/munin" (or wherever you keep your RRDs)
So a patched default Debian/Ubuntu with Apache /etc/init.d/rrdcached would have
OPTS="-s www-data -l unix:/var/run/rrdcached.sock"
OPTS="$OPTS -j /var/lib/rrdcached/journal/ -F"
OPTS="$OPTS -b /var/lib/munin/ -B"
If you do not set the socket user with "-s" you will see "Permission denied" in /var/log/munin/munin-cgi-graph.log
[RRD ERROR] Unable to graph /var/lib/munin/
cgi-tmp/munin-cgi-graph/[...].png : Unable to connect to rrdcached: 
Permission denied
If you do not change the rrdcached working directory you will see "rrdc_flush" errors in your /var/log/munin/munin-cgi-graph.log
[RRD ERROR] Unable to graph /var/lib/munin/
cgi-tmp/munin-cgi-graph/[...].png : 
rrdc_flush (/var/lib/munin/[...].rrd) failed with status -1.
Some details on this can be found in the Munin wiki.

Howto consistent hashing with different memcached bindings

A short HowTo on memcached consistent hashing. Of course this also works with memcached-protocol-compatible software such as CouchBase, MySQL...

Papers

Papers to read to learn about what consistent hashing is about:

Consistent Hashing with nginx

 upstream somestream {
      consistent_hash $request_uri;
      server 10.0.0.1:11211;
      server 10.0.0.2:11211;
      ...
    }

Consistent Hashing with PHP

Note: the order of setOption() and addServers() is important. When using OPT_LIBKETAMA_COMPATIBLE the hashing is compatible with all other runtimes using libmemcached.
$memcached = new Memcached();
$memcached->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$memcached->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$memcached->addServers($servers);

Consistent Hashing in Perl

As in PHP the order of setOptions() and addServers() matters. After all both languages use the same library in the background, so behaviour is the same.
$m = new Memcached('mymemcache');
$m->setOptions(array(
   ...
   Memcached::OPT_LIBKETAMA_COMPATIBLE => true,
   Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
   ...
));
$m->addServers(...);

How To: migrate linux applications to xdg directories

If you are maintaining a Linux Glib-based or GTK application for some time you might want to migrate it to the XDG way of user data layout. This is something I had to do for Liferea (around since 2003) recently. Also when creating a new application you might ask yourself where to put the user data. This post tries to describe how to access the correct paths using Glib.

1. Preparation: Categorize your data

Determine what types of data you have. The specification knows three major directories:
  1. $XDG_DATA_HOME: usually ~/.local/share
  2. $XDG_CONFIG_HOME: usually ~/.config
  3. $XDG_CACHE_HOME: usually ~/.cache
In each of the directories your application should create a subfolder with the unique name of the application and place relevant files there. While volatile cache files go into ~/.cache, persistent important data should go to ~/.local/share and all configuration to ~/.config.
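As a quick shell illustration of the spec-defined fallbacks ("coolApp" is just a placeholder application name):
echo "data:   ${XDG_DATA_HOME:-$HOME/.local/share}/coolApp"
echo "config: ${XDG_CONFIG_HOME:-$HOME/.config}/coolApp"
echo "cache:  ${XDG_CACHE_HOME:-$HOME/.cache}/coolApp"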

2. Migrate the code

The simple task is to rewrite the old code, which created directory paths in some arbitrary way, to use XDG style directory paths now. As the specification is non-trivial when it comes to finding the directory base paths (via multiple paths in $XDG_DATA_DIRS and $XDG_CONFIG_DIRS) it might be good to rely on a library for doing this.

2.1 Using Glib

When developing for GTK or maybe only using Glib one already gets support since GTK uses Glib and Glib 2.6 introduced support for the XDG base directory specification. So with Glib use the following methods to find the target directories:
$XDG_DATA_HOME     g_get_user_data_dir()
$XDG_CONFIG_HOME   g_get_user_config_dir()
$XDG_CACHE_HOME    g_get_user_cache_dir()
Given your application being named "coolApp" and you want to create a cache file named "render.dat" you could use the following C snippet:
g_build_filename (g_get_user_cache_dir (), "coolApp", "render.dat", NULL);
to produce a path. Most likely you'll get something like "/home/joe/.cache/coolApp/render.dat".

2.2 Using wxWidgets

When programming for wxWidgets you need to use the wxStandardPaths class. The methods are
$XDG_DATA_HOME     wxStandardPaths::GetDataDir()
$XDG_CONFIG_HOME   wxStandardPaths::GetConfigDir()
$XDG_CACHE_HOME    wxStandardPaths::GetLocalDataDir()

2.3 With KDE

Since KDE 3.2 it also supports the XDG base specification. But honestly: googling or trying to browse the KDE API I couldn't find any pointers on how to do it. If you know, please leave a comment!

How To write a chef recipe for editing config files

Most chef recipes are about installing new software including all config files. Also, if they are configuration recipes, they usually overwrite the whole file and provide a completely recreated configuration. When you have used cfengine or puppet with augtool before, you'll miss the possibility to edit files.

In cfengine2...

You could write
editfiles:
{ home/.bashrc
   AppendIfNoSuchLine "alias rm='rm -i'"
}

While in puppet...

You'd have:
augeas { "sshd_config":
  context => "/files/etc/ssh/sshd_config",
  changes => [
    "set PermitRootLogin no",
  ],
}

Now how to do it in Chef?

Maybe I missed the correct way to do it until now (please comment if this is the case!) but there seems to be no way to use for example augtool with chef, and there is no built-in cfengine-like editing. The only way I've seen so far is to use Ruby as a scripting language to change files using the Ruby runtime, or to use the Script resource which allows running other interpreters like bash, csh, perl, python or ruby. To use it you can define a block named like the interpreter you need and add a "code" attribute with a "here doc" operator (e.g. <<-EOT) describing the commands. Additionally you specify a working directory and a user for the script to be executed with. Example:
bash "some_commands" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
end
While it is not a one-liner statement as in cfengine, it is very flexible. The Script resource is widely used to perform ad-hoc source compilation and installations in the community cookbooks, but we can also use it for standard file editing. Finally, to do conditional editing use not_if/only_if clauses at the end of the Script resource block.

How To dump keys from memcache

You already spent 50GB on the memcache cluster, but you still see many evictions and the cache hit ratio hasn't looked good for a few days. The developers swear that they didn't change the caching recently, they checked the code twice and have found no problem. What now? How to get some insight into the black box of memcached? One way would be to add logging to the application to see and count what is being read and written and then to guess from this about the cache efficiency. But to debug what's happening we really need to see how the cache keys are used by the application.

An Easier Way

Memcache itself provides a means to peek into its content. The memcache protocol provides commands to peek into the data that is organized by slabs (categories of data of a given size range). There are some significant limitations though:
  1. You can only dump keys per slab class (keys with roughly the same content size)
  2. You can only dump one page per slab class (1MB of data)
  3. This is an unofficial feature that might be removed anytime.
The second limitation is probably the hardest, because 1MB out of several gigabytes is almost nothing. Still it can be useful to watch how you use a subset of your keys. But this might depend on your use case. If you don't care about the technical details just skip to the tools section to learn what tools allow you to easily dump everything. Alternatively follow the guide below and try the commands using telnet against your memcached setup.

How it Works

First you need to know how memcache organizes its memory. If you start memcache with option "-vv" you see the slab classes it creates. For example
$ memcached -vv
slab class   1: chunk size        96 perslab   10922
slab class   2: chunk size       120 perslab    8738
slab class   3: chunk size       152 perslab    6898
slab class   4: chunk size       192 perslab    5461
[...]
In the configuration printed above memcache will fit 6898 pieces of data of between 121 and 152 bytes into a single slab of 1MB size (6898*152 ≈ 1MB). All slabs are sized as 1MB per default. Use the following command to print all currently existing slabs:
stats slabs
If you've added a single key to an empty memcached 1.4.13 with
set mykey 0 60 1
1
STORED
you'll now see the following result for the "stats slabs" command:
stats slabs
STAT 1:chunk_size 96
STAT 1:chunks_per_page 10922
STAT 1:total_pages 1
STAT 1:total_chunks 10922
STAT 1:used_chunks 1
STAT 1:free_chunks 0
STAT 1:free_chunks_end 10921
STAT 1:mem_requested 71
STAT 1:get_hits 0
STAT 1:cmd_set 2
STAT 1:delete_hits 0
STAT 1:incr_hits 0
STAT 1:decr_hits 0
STAT 1:cas_hits 0
STAT 1:cas_badval 0
STAT 1:touch_hits 0
STAT active_slabs 1
STAT total_malloced 1048512
END
The example shows that we have only one active slab class #1. Our key, being just one byte large, fits into this smallest possible chunk size. The slab statistics show that currently only one page of the slab class exists and that only one chunk is used. Most importantly they show a counter for each write operation (set, incr, decr, cas, touch) and one for gets. Using those you can determine a hit ratio! You can also fetch another set of infos using "stats items" with interesting counters concerning evictions and out-of-memory conditions.
stats items
STAT items:1:number 1
STAT items:1:age 4
STAT items:1:evicted 0
STAT items:1:evicted_nonzero 0
STAT items:1:evicted_time 0
STAT items:1:outofmemory 0
STAT items:1:tailrepairs 0
STAT items:1:reclaimed 0
STAT items:1:expired_unfetched 0
STAT items:1:evicted_unfetched 0
END

What We Can Guess Already...

Given the statistics infos per slabs class we can already guess a lot of thing about the application behaviour:
  1. How is the cache ratio for different content sizes?
    • How good is the caching of large HTML chunks?
  2. How much memory do we spend on different content sizes?
    • How much do we spend on simple numeric counters?
    • How much do we spend on our session data?
    • How much do we spend on large HTML chunks?
  3. How many large objects can we cache at all?
Of course to answer the questions you need to know about the cache objects of your application.
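For example, a rough global hit ratio (not broken down per slab class) can be derived from the plain "stats" command; a minimal sketch assuming memcached listens on localhost:11211 and netcat (nc) is installed:
printf 'stats\r\nquit\r\n' | nc localhost 11211 | \
  awk '/STAT get_hits/ {h=$3} /STAT get_misses/ {m=$3} END {if (h+m > 0) printf "hit ratio: %.1f%%\n", 100*h/(h+m); else print "no gets yet"}'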

Now: How to Dump Keys?

Keys can be dumped per slabs class using the "stats cachedump" command.
stats cachedump <slab class> <number of items to dump>
To dump our single key in class #1 run
stats cachedump 1 1000
ITEM mykey [1 b; 1350677968 s]
END
The "cachedump" returns one item per line. The first number in the braces gives the size in bytes, the second the timestamp of the creation. Given the key name you can now also dump its value using
get mykey
VALUE mykey 0 1
1
END
That's it: iterate over all slab classes you want, extract the key names and, if needed, dump their contents.
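As a rough sketch of such an iteration (assuming memcached on localhost:11211, netcat installed as nc, and an arbitrary limit of 1000 keys per class):
#!/bin/bash
# Dump key names from all active slab classes via "stats cachedump"
HOST=localhost
PORT=11211

# Determine the active slab class ids from "stats items"
classes=$(printf 'stats items\r\nquit\r\n' | nc $HOST $PORT | awk -F: '/STAT items/ {print $2}' | sort -nu)

for class in $classes; do
  echo "=== slab class $class ==="
  # Dump up to 1000 key names per class (still limited to one 1MB page)
  printf "stats cachedump $class 1000\r\nquit\r\n" | nc $HOST $PORT | awk '/^ITEM/ {print $2}'
done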

Dumping Tools

There are different dumping tools, sometimes just scripts, out there that help you with printing memcache keys:
PHP           simple script   Prints key names.
Perl          simple script   Prints keys and values
Ruby          simple script   Prints key names.
Perl          memdump         Tool in CPAN module Memcached-libmemcached
PHP           memcache.php    Memcache Monitoring GUI that also allows dumping keys
libmemcached  peep            Does freeze your memcached process!!!

Be careful when using this in production. Still, using it you can work around the 1MB limitation and really dump all keys.

How to write gobject introspection based plugins

This is a short introduction to how to write plugins (in this case Python plugins) for a GTK+ 3.0 application. One of the major new features in GTK+ 3.0 is GObject Introspection which allows applications to be accessed from practically any scripting language out there. The motivation for this post is that when I tried to add plugin support to Liferea with libpeas it took me three days to work my way through the somewhat sparse documentation, which at no point is a good HowTo on how to proceed step by step. This post tries to give one...

1. Implement a Plugin Engine with libpeas

For the integration of libpeas you need to write a lot of boilerplate code for initialisation and plugin path registration. Take the gtranslator gtr-plugins-engine.c implementation as an example. Most important are the path registrations with peas_engine_add_search_path:
  peas_engine_add_search_path (PEAS_ENGINE (engine),
                               gtr_dirs_get_user_plugins_dir (),
                               gtr_dirs_get_user_plugins_dir ());

  peas_engine_add_search_path (PEAS_ENGINE (engine),
                               gtr_dirs_get_gtr_plugins_dir (),
                               gtr_dirs_get_gtr_plugins_data_dir ());
It is useful to have two registrations: one pointing to some user-writable subdirectory in $HOME and a second one for package-installed plugins in a path like /usr/share/<application>/plugins. Finally, ensure to call the init method of this boilerplate code during your initialization.

2. Implement Plugin Preferences with libpeasgtk

To libpeas also belongs a UI library providing a plugin preference tab that you can add to your preferences dialog (see the implementation in Liferea for a screenshot). To add such a tab, add a "Plugins" tab to your preferences dialog and the following code to the plugin dialog setup:
#include <libpeas-gtk/peas-gtk-plugin-manager.h>

[...]

/* assuming "plugins_box" is an existing tab container widget */

GtkWidget *alignment;

alignment = gtk_alignment_new (0., 0., 1., 1.);
gtk_alignment_set_padding (GTK_ALIGNMENT (alignment), 12, 12, 12, 12);

widget = peas_gtk_plugin_manager_new (NULL);
gtk_container_add (GTK_CONTAINER (alignment), widget);
gtk_box_pack_start (GTK_BOX (plugins_box), alignment, TRUE, TRUE, 0);
At this point you can already compile everything and test it. The new tab with the plugin manager should show up empty but working.

3. Define Activatable Class

We've initialized the plugin library in step 1. Now we need to add some hooks to the program, so-called "Activatables", which we'll use in the code to create a PeasExtensionSet representing all plugins providing this interface. For example gtranslator has a GtrWindowActivatable interface for plugins that should be triggered when a gtranslator window is created. It looks like this:
struct _GtrWindowActivatableInterface
{
  GTypeInterface g_iface;

  /* Virtual public methods */
  void (*activate) (GtrWindowActivatable * activatable);
  void (*deactivate) (GtrWindowActivatable * activatable);
  void (*update_state) (GtrWindowActivatable * activatable);
};
The activate() and deactivate() methods are to be called by convention using the "extension-added" / "extension-removed" signals emitted by the PeasExtensionSet. The additional method update_state() is called in the gtranslator code when user interactions happen and the plugin needs to reflect them. Add as many methods as you need; many plugins do not need special methods as they can connect to application signals themselves. So keep the Activatable interface simple! As for how many Activatables to add: in the most simple case of a single main window application you could just implement a single Activatable for the main window, and all plugins, no matter what they do, initialize with the main window.

4. Implement and Use the Activatable Class

Now we've defined Activatables and need to implement and use the corresponding class. The interface implementation itself is just a lot of boilerplate code: check out gtr-window-activatable.c implementing GtrWindowActivatable. In the class the Activatable belongs to (in case of gtranslator GtrWindowActivatable belongs to GtrWindow) a PeasExtensionSet needs to be initialized:
window->priv->extensions = peas_extension_set_new (PEAS_ENGINE (gtr_plugins_engine_get_default ()),
                                                     GTR_TYPE_WINDOW_ACTIVATABLE,
                                                     "window", window,
                                                     NULL);

  g_signal_connect (window->priv->extensions,
                    "extension-added",
                    G_CALLBACK (extension_added),
                    window);
  g_signal_connect (window->priv->extensions,
                    "extension-removed",
                    G_CALLBACK (extension_removed),
                    window);
The extension set instance, representing all plugins implementing the interface, is used to trigger the methods on all or only selected plugins. One of the first things to do after creating the extension set is to initialize all plugins using the signal "extension-added":
  peas_extension_set_foreach (window->priv->extensions,
                              (PeasExtensionSetForeachFunc) extension_added,
                              window);
As there might be more than one registered extension we need to implement a PeasExtensionSetForeachFunc method handling each plugin. This method uses the previously implemented interface. Example from gtranslator:
static void
extension_added (PeasExtensionSet *extensions,
                 PeasPluginInfo   *info,
                 PeasExtension    *exten,
                 GtrWindow        *window)
{
  gtr_window_activatable_activate (GTR_WINDOW_ACTIVATABLE (exten));
}
Note: Up until libpeas version 1.1 you'd simply call peas_extension_call() to issue the name of the interface method to trigger instead.
peas_extension_call (extension, "activate");
Ensure to
  1. Initially call the "extension-added" signal handler for each plugin registered at startup using peas_extension_set_foreach()
  2. Implement and connect the "extension-added" / "extension-removed" signal handlers
  3. Implement one PeasExtensionSetForeachFunc for each additional interface method you defined in step 3
  4. Provide a caller method running peas_extension_set_foreach() for each of those interface methods.

5. Expose some API

Now you are almost ready to code a plugin. But for it to access business logic you might want to expose some API from your program. This is done using markup in the function/interface/class definitions and running g-ir-scanner on the code to create GObject introspection metadata (one .gir and one .typelib file per package). To learn about the markup check the Annotation Guide and other projects for examples. During compilation g-ir-scanner will issue warnings on incomplete or wrong syntax.

6. Write a Plugin

When writing plugins you always have to create two things:
  • A .plugin file describing the plugin
  • At least one executable/script implementing the plugin
Those files you should put into a separate "plugins" directory in your source tree as they need an extra install target. Assuming you'd want to write a Python plugin named "myplugin.py" you'd create a "myplugin.plugin" with the following content
[Plugin]
Module=myplugin
Loader=python
IAge=2
Name=My Plugin
Description=My example plugin for testing only
Authors=Joe, Sue
Copyright=Copyright © 2012 Joe
Website=...
Help=...
Now for the plugin: in Python you'd import packages from the GObject Introspection repository like this
from gi.repository import GObject
from gi.repository import Peas
from gi.repository import PeasGtk
from gi.repository import Gtk
from gi.repository import <your package prefix>
The imports of GObject, Peas, PeasGtk and your package are mandatory. Others depend on what you want to do with your plugin. Usually you'll want to interact with Gtk. Next you need to implement a simple class with all the interface methods we defined earlier:
class MyPlugin(GObject.Object, <your package prefix>.<Type>Activatable):
    __gtype_name__ = 'MyPlugin'

    object = GObject.property(type=GObject.Object)

    def do_activate(self):
        print "activate"

    def do_deactivate(self):
        print "deactivate"

    def do_update_state(self):
        print "updated state!"
Ensure to fill in the proper package prefix for your program and the correct Activatable name (like GtkWindowActivatable). Now flesh out the methods. That's all. Things to know:
  • Your binding will use some namespace separation schema. Python uses dots to separate the elements in the inheritance hierarchy. If unsure check the unofficial online API
  • If you have a syntax error during activation libpeas will permanently deactivate your plugin in the preferences. You need to manually reenable it.
  • You can disable/enable your plugin multiple times to debug problems during activation.
  • To avoid endless "make install" calls register a plugin engine directory in your home directory and edit experimental plugins there.

7. Setup autotools Install Hooks

If you use automake extend the Makefile.am in your sources directory by something similar to
if HAVE_INTROSPECTION
-include $(INTROSPECTION_MAKEFILE)
INTROSPECTION_GIRS = Gtranslator-3.0.gir

Gtranslator-3.0.gir: gtranslator
INTROSPECTION_SCANNER_ARGS = -I$(top_srcdir) --warn-all --identifier-prefix=Gtr
Gtranslator_3_0_gir_NAMESPACE = Gtranslator
Gtranslator_3_0_gir_VERSION = 3.0
Gtranslator_3_0_gir_PROGRAM = $(builddir)/gtranslator
Gtranslator_3_0_gir_FILES = $(INST_H_FILES) $(libgtranslator_c_files)
Gtranslator_3_0_gir_INCLUDES = Gtk-3.0 GtkSource-3.0

girdir = $(datadir)/gtranslator/gir-1.0
gir_DATA = $(INTROSPECTION_GIRS)

typelibdir = $(libdir)/gtranslator/girepository-1.0
typelib_DATA = $(INTROSPECTION_GIRS:.gir=.typelib)

CLEANFILES =	\
	$(gir_DATA)	\
	$(typelib_DATA)	\
	$(BUILT_SOURCES) \
	$(BUILT_SOURCES_PRIVATE)
endif
Ensure to
  1. Pass all files you want to have scanned to xxx_gir_FILES
  2. Provide a namespace prefix in INTROSPECTION_SCANNER_ARGS with --identifier-prefix=xxx
  3. Add --accept-unprefixed to INTROSPECTION_SCANNER_ARGS if you have no common prefix
Next create an install target for the plugins you have:
plugindir = $(pkglibdir)/plugins
plugin_DATA = \
        plugins/one_plugin.py \
        plugins/one_plugin.plugin \
        plugins/another_plugin.pl \
        plugins/another_plugin.plugin
Additionally add package dependencies and GIR macros to configure.ac
pkg_modules="[...]
       libpeas-1.0 >= 1.0.0
       libpeas-gtk-1.0 >= 1.0.0"

GOBJECT_INTROSPECTION_CHECK([0.9.3])
GLIB_GSETTINGS

8. Try to Compile Everything

Check that when running "make"
  1. Everything compiles
  2. g-ir-scanner doesn't complain too much
  3. A .gir and .typelib file is placed in your sources directory
Check that when running "make install"
  1. Your .gir file is installed in <prefix>/share/<package>/gir-1.0/
  2. Your plugins are installed to <prefix>/lib/<package>/plugins/
Launch the program and
  1. Enable the plugins using the preferences for the first time
  2. If in doubt always check if the plugin is still enabled (it will get disabled on syntax errors during activation)
  3. Add a lot of debug output to your plugin and watch it print things on the console the program is running in
This should do. Please post comments if you miss anything or find errors! I hope this tutorial helps one or the other reader.

How to dry Run with chef Client

The answer is simple: do not "dry-run", do "why-run"!
chef-client --why-run
chef-client -W
And the output looks nicer when using "-Fmin"
chef-client -Fmin -W
As with all other automation tools, the dry-run mode is not very predictive. Still it might indicate some of the things that will happen.

How to decrease the munin log level

Munin tends to spam the log files it writes (munin-update.log, munin-graph.log...) with many lines at INFO log level. It also doesn't respect syslog log levels as it uses Log4perl. Unlike the munin-node configuration (munin-node.conf) there is no "log_level" setting in munin.conf, at least in the versions I'm using right now. So let's fix the code. In /usr/share/munin/munin-<update|graph> find the following lines:
logger_open($config->{'logdir'});
logger_debug() if $config->{debug} or defined($ENV{CGI_DEBUG});
and change it to
logger_open($config->{'logdir'});
logger_debug() if $config->{debug} or defined($ENV{CGI_DEBUG});
logger_level("warn"); 
As parameter to logger_level() you can provide "debug", "info", "warn", "error" or "fatal" (see the manpage "Munin::Master::Logger"). And finally: silent logs!

How to vacuum sqlite

This post is a summary on how to effectively VACUUM SQLite databases. Actually open source projects like Firefox and Liferea were significantly hurt by not efficiently VACUUMing their SQLite databases. For Firefox this was caused by the Places database containing bookmarks and the history. In case of Liferea it was the feed cache database. Both projects suffered from fragmentation caused by frequent insertion and deletion while not vacuuming the database. This of course caused much frustration with end users and workarounds to vacuum manually. In the end both projects started to automatically vacuum their sqlite databases on demand based on a free-list threshold, thereby solving the performance issues. Read on to learn how to perform vacuum and why not to use auto-vacuum in those cases!

1. Manual VACUUM

First for the basics: with SQLite 3 you simply vacuum by running:
sqlite3 my.db "VACUUM;"
Depending on the database size and the last vacuum run it might take a while for sqlite3 to finish with it. Using this you can perform manual VACUUM runs (e.g. nightly) or on demand runs (for example on application startup).

2. Using Auto-VACUUM

Note: SQLite Auto-VACUUM does not do the same as VACUUM! It only moves free pages to the end of the database thereby reducing the database size. By doing so it can significantly fragment the database while VACUUM ensures defragmentation. So Auto-VACUUM just keeps the database small! You can enable/disable SQLite auto-vacuuming by the following pragmas:
PRAGMA auto_vacuum = NONE;
PRAGMA auto_vacuum = INCREMENTAL;
PRAGMA auto_vacuum = FULL;
So effectively you have two modes: full and incremental. In full mode free pages are removed from the database upon each transaction. In incremental mode no pages are freed automatically; only metadata is kept to help free them. At any time you can call
PRAGMA incremental_vacuum(n);
to free up to n pages and resize the database by this amount of pages. To check the auto-vacuum setting in a sqlite database run
sqlite3 my.db "PRAGMA auto_vacuum;"
which should return a number from 0 to 2 meaning: 0=None, 1=Incremental, 2=Full.

3. On Demand VACUUM

Another possibility is to VACUUM on demand based on the fragmentation level of your sqlite database. Compared to periodic or auto-vacuum runs this is probably the best solution as (depending on your application) it might only rarely be necessary. You could for example decide to perform an on demand VACUUM upon startup when the empty page ratio reaches a certain threshold, which you can determine by running
PRAGMA page_count;
PRAGMA freelist_count;
Both PRAGMA statements return a number of pages which together give you a rough guess at the fragmentation ratio. As far as I know there is currently no real measurement for the exact table fragmentation so we have to go with the free list ratio.
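As a sketch (the database file name and the 10% threshold are just example values), such an on-demand check could look like this in a shell script:

db=my.db
page_count=$(sqlite3 "$db" "PRAGMA page_count;")
freelist_count=$(sqlite3 "$db" "PRAGMA freelist_count;")

# VACUUM only when more than 10% of all pages are on the free list
if [ $((freelist_count * 100 / page_count)) -gt 10 ]; then
    sqlite3 "$db" "VACUUM;"
fi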

How to test for colors in shell scripts

When watching thousands of log lines from some long running script you might want to have color coding to highlight new sections of the process or to have errors standing out when scrolling through the terminal buffer. Using colors in a script with tput or escape sequences is quite easy, but you also want to check when not to use colors to avoid messing up terminals not supporting them or when logging to a file.

How to Check for Terminal Support

There are at least the following two ways to check for color support. The first variant is using infocmp
$ TERM=linux infocmp -L1 | grep color
[...]
	max_colors#8,
[...]
or using tput
$ TERM=vt100 tput colors
-1

$ TERM=linux tput colors
8
tput is probably the best choice.

Checking the Terminal

So a sane check for color support along with a check for output redirection could look like this
#!/bin/bash

use_colors=1

# Check whether stdout is redirected
if [ ! -t 1 ]; then
    use_colors=0
fi

max_colors=$(tput colors)
if [ $max_colors -lt 8 ]; then 
    use_colors=0
fi

[...]
This should ensure no ANSI sequences ending up in your logs while still printing colors on every capable terminal.
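Building on the use_colors flag from above, a minimal sketch for actually emitting colors (the variable names are of course just an example) could look like this:

if [ $use_colors -eq 1 ]; then
    red=$(tput setaf 1)
    green=$(tput setaf 2)
    reset=$(tput sgr0)
else
    red=""; green=""; reset=""
fi

# colored when supported, plain text otherwise
echo "${green}Starting next section...${reset}"
echo "${red}ERROR: something failed${reset}"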

Use More Colors!

And finally if normal colors are not enough for you: use the secret 256 color mode of your terminal! I'm not sure how to test for this but it seems to be related to the "max_pairs" terminal capability listed by infocmp.

How to run vacuum on sqlite

On any SQLite based database (e.g. Firefox or Liferea) you can run "VACUUM" to reduce the database file size. To do this you need the SQLite command line client which can be run using
sqlite3 <database file>
When called like this you get a query prompt. To directly run "VACUUM" just call
sqlite3 <database file> "VACUUM;"
Ensure that the program using the database file is not running!

Alternatives to Manual VACUUM

If you are unsure how to do it manually you can also use a helper tool like BleachBit which along many other cleanup jobs also performs SQLite database compaction.

How to quickly set up squid

Ever needed to test your HTTP client app's proxy support? Need an instant proxy test setup? Here is a fast way to set up a local proxy server on Debian using squid:
  1. Install squid:
    # apt-get install squid
  2. Edit the squid configuration /etc/squid/squid.conf
    a) Edit the ACLs. Ensure to have something like the following:
      acl all src all
      acl users proxy_auth REQUIRED
    b) Edit the access definitions. You need (order is important):
      http_access allow users
      http_access deny all
    c) Set up a dummy authentication module:
      auth_param basic program /usr/local/bin/squid_dummy_auth
      auth_param basic children 5
      auth_param basic realm Squid proxy-caching web server
      auth_param basic credentialsttl 2 hours
      auth_param basic casesensitive off
  3. Create an authentication script:
    # vi /usr/local/bin/squid_dummy_auth
    Insert something like:
#!/bin/sh

while read dummy;
do
   echo OK
done
    Make the script executable:
    # chmod a+x /usr/local/bin/squid_dummy_auth
  4. Restart squid:
    # /etc/init.d/squid restart
With this you have a working Basic Auth proxy test setup running on localhost:3128.
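To verify the setup you could for example send a request through the new proxy with curl; user and password can be anything, since the dummy authenticator accepts all credentials:

# any credentials will do because squid_dummy_auth always answers OK
curl -x http://localhost:3128 --proxy-user test:test http://www.example.com/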

How to munin graph jvm memory usage with ubuntu tomcat

The following description works when using the Ubuntu "tomcat7" package: Grab the "java/jstat__heap" plugin from munin-contrib @ github and place it into "/usr/share/munin/plugins/jstat__heap". Link the plugin into /etc/munin/plugins
ln -s /usr/share/munin/plugins/jstat__heap /etc/munin/plugins/jstat_myname_heap
Choose some useful name instead of "myname". This allows monitoring multiple JVM setups. Configure each link you created, for example in a new plugin config file named "/etc/munin/plugin-conf.d/jstat", which should contain one section per JVM looking like this
[jstat_myname_heap]
user tomcat7
env.pidfilepath /var/run/tomcat7.pid
env.javahome /usr/
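To check a link without waiting for the next munin update cycle you can, assuming a standard munin-node installation, run the plugin once by hand:

# first print the graph configuration, then the current values
munin-run jstat_myname_heap config
munin-run jstat_myname_heap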

How to merge csv files

When reading the meager manpage of the "join" command many Unix users probably give up immediately. Still it can be worth using it instead of scripting the same task in your favourite scripting language. Here is an example of how to merge 2 CSV files:

CSV File 1 "employee.csv"

# <employee id>;<name>;<age>;<location>
1;Anton;37;Geneva
2;Marie;28;Paris
3;Tom;25;London

CSV File 2 "tasks.csv"

# <task id>;<employee id>;<task description>
1000;2;Setup new Project
1001;3;Call customer X
1002;3;Get package from post office
And now some action:

A naive try...

The following command

join employee.csv tasks.csv

... doesn't produce any output. This is because it expects the shared key to reside in the first column of both files, which is not the case. Also the default separator for 'join' is whitespace.

Full Join

join -t ";" -1 1 -2 2 employee.csv tasks.csv

We need to run join with '-t ";"' to tell it that we have CSV format. Then to avoid the pitfall of not having the common key in the first column we need to tell join where the join key is in each file. The switch "-1" gives the key index for the first file and "-2" for the second file.

2;Marie;28;Paris;1000;Setup new Project
3;Tom;25;London;1001;Call customer X
3;Tom;25;London;1002;Get package from post office

Print only name and task

join -o1.2,2.3 -t ";" -1 1 -2 2 employee.csv tasks.csv

We use "-o" to limit the fields to be printed. "-o" takes a comma separated list of "<file nr>.<field nr>" definitions. So we only want the second field of the first file (1.2) and the third field of the second file (2.3)...

Marie;Setup new Project
Tom;Call customer X
Tom;Get package from post office
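Note that join expects both inputs to be sorted on their join fields. The example files above happen to be sorted already; for arbitrary input you could, as a sketch, sort on the fly using bash process substitution:

# sort each file on its join field before handing it to join
join -t ";" -1 1 -2 2 <(sort -t ";" -k 1,1 employee.csv) <(sort -t ";" -k 2,2 tasks.csv)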

Summary

While the syntax of join is not that straightforward, it still allows doing things quite quickly that one is often tempted to implement in a script. It is quite easy to convert batch input data to CSV format. Using join it can be easily grouped and reduced according to your task.

If this got you interested you can find more and non-CSV examples on this site.

How to get flash working with webkitgtk3

With the switch to GTK3 all Webkit using applications like Epiphany, Liferea, devhelp, yelp and others lost Flash support. The reason is that Linux-Flash is GTK2 only! And of course there won't be new releases from Adobe ever. So we have the following compatibility situation for Liferea
Release Line | Uses | Flash | Status
1.6 / 1.8 | GTK2 + WebkitGTK2 | any native Flash | Works
1.10 | GTK3 + WebkitGTK3 v1.8 | 32bit native Flash | Broken
1.10 | GTK3 + WebkitGTK3 v1.8 | 64bit native Flash | Broken
1.10 | GTK3 + WebkitGTK3 v1.8 | 32bit Flash + nspluginwrapper | Works
1.10 | GTK3 + WebkitGTK3 v2.0 | any native Flash | Works
The WebkitGTK+ solution for the Flash problem was implemented in version 2.0 by having a second process linked against GTK2 to run browser plugins while Webkit itself is linked to GTK3. This makes Flash work again. But the currently widely distributed WebkitGTK3 v1.8 does not have this feature yet and fails to use the native flash.

nspluginwrapper Workaround

The only workaround is to use nspluginwrapper to run the 32bit version of Flash. This is guaranteed to work on 64bit platforms. It might not work on 32bit hardware, sometimes also because nspluginwrapper is not available there. The steps to install it are:
  1. Install nspluginwrapper. On Debian
    apt-get install nspluginwrapper
  2. Download 32bit Flash .tar.gz from Adobe
  3. Extract /usr files according to the Adobe instructions
  4. In the tarball directory run
    nspluginwrapper -i -a -v -n libflashplayer.so
    to install the plugin
Now all WebkitGTK3 using applications should be able to run Flash. Ensure to restart them and check command line output for any plugin errors.
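To double-check the wrapper installation you can also list the plugins nspluginwrapper knows about (if your nspluginwrapper version does not support the -l switch, check its --help output):

nspluginwrapper -v -l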

Upgrading to WebkitGTK3 2.0

If you can, try upgrading to WebkitGTK3 2.0 (e.g. from Debian Experimental).

How to debug pgbouncer

When you use Postgres with pgbouncer and run into database problems you want to have a look at pgbouncer too. To inspect pgbouncer operation ensure to add at least one user defined in the user credentials file (e.g. on Debian per default /etc/pgbouncer/userlist.txt) to the "stats_users" key in pgbouncer.ini:
stats_users = myuser
Now reload pgbouncer and use this user "myuser" to connect to pgbouncer with psql by requesting the special "pgbouncer" database:
psql -p 6432 -U myuser -W pgbouncer
At the psql prompt list the supported pgbouncer commands with
SHOW HELP;
PgBouncer will present all statistics and configuration options:
pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:  
	SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|VERSION
	SHOW STATS|FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
	SET key = arg
	RELOAD
	PAUSE []
	SUSPEND
	RESUME []
	SHUTDOWN
The "SHOW" commands are all self-explanatory. Very useful are the "SUSPEND" and "RESUME" commands when you use pools.

How common are http security headers really?

A recent issue of the German iX magazin featured an article on improving end user security by enabling HTTP security headers
  • X-XSS-Protection,
  • X-Content-Type-Options (against MIME type sniffing),
  • Content-Security-Policy,
  • X-Frame-Options,
  • and HSTS Strict-Transport-Security.
The article gave the impression that all of them are quite common and that a good DevOps engineer would be unreasonable not to implement them immediately if the application supports them without problems. This led me to check my monthly domain scan results of April 2014 to see who is actually using which header on their main pages. Results are, as always, limited to the top 200 Alexa sites and all larger German websites.

Usage of X-XSS-Protection

The header is visible for only 14 of 245 (5%) of the scanned websites. As 2 of them just disable the setting, only 4% of the websites actually enable it.
Website | Header
www.adcash.com | X-XSS-Protection: 1; mode=block
www.badoo.com | X-XSS-Protection: 1; mode=block
www.blogger.com | X-XSS-Protection: 1; mode=block
www.blogspot.com | X-XSS-Protection: 1; mode=block
www.facebook.com | X-XSS-Protection: 0
www.feedburner.com | X-XSS-Protection: 1; mode=block
www.github.com | X-XSS-Protection: 1; mode=block
www.google.de | X-XSS-Protection: 1; mode=block
www.live.com | X-XSS-Protection: 0
www.meinestadt.de | X-XSS-Protection: 1; mode=block
www.openstreetmap.org | X-XSS-Protection: 1; mode=block
www.tape.tv | X-XSS-Protection: 1; mode=block
www.xing.de | X-XSS-Protection: 1; mode=block; report=https://www.xing.com/tools/xss_reporter
www.youtube.de | X-XSS-Protection: 1; mode=block; report=https://www.google.com/appserve/security-bugs/log/youtube

Usage of X-Content-Type-Options

Here 15 of 245 websites (6%) enable the option.
Website | Header
www.blogger.com | X-Content-Type-Options: nosniff
www.blogspot.com | X-Content-Type-Options: nosniff
www.deutschepost.de | X-Content-Type-Options: NOSNIFF
www.facebook.com | X-Content-Type-Options: nosniff
www.feedburner.com | X-Content-Type-Options: nosniff
www.github.com | X-Content-Type-Options: nosniff
www.linkedin.com | X-Content-Type-Options: nosniff
www.live.com | X-Content-Type-Options: nosniff
www.meinestadt.de | X-Content-Type-Options: nosniff
www.openstreetmap.org | X-Content-Type-Options: nosniff
www.spotify.com | X-Content-Type-Options: nosniff
www.tape.tv | X-Content-Type-Options: nosniff
www.wikihow.com | X-Content-Type-Options: nosniff
www.wikipedia.org | X-Content-Type-Options: nosniff
www.youtube.de | X-Content-Type-Options: nosniff

Usage of Content-Security-Policy

Actually only 1 website among the top 200 Alexa ranked websites uses CSP and this lonely site is Github. The problem with CSP obviously is the necessity to have a clear structure for the origin domains of the site elements. And the fewer advertisements and tracking pixels you have the easier it becomes...
Website | Header
www.github.com | Content-Security-Policy: default-src *; script-src https://github.global.ssl.fastly.net https://ssl.google-analytics.com https://collector-cdn.github.com; style-src 'self' 'unsafe-inline' 'unsafe-eval' https://github.global.ssl.fastly.net; object-src https://github.global.ssl.fastly.net

Usage of X-Frame-Options

The X-Frame-Options header is currently delivered by 43 of 245 websites (17%).
Website | Header
www.adcash.com | X-Frame-Options: SAMEORIGIN
www.adf.ly | X-Frame-Options: SAMEORIGIN
www.avg.com | X-Frame-Options: SAMEORIGIN
www.badoo.com | X-Frame-Options: DENY
www.battle.net | X-Frame-Options: SAMEORIGIN
www.blogger.com | X-Frame-Options: SAMEORIGIN
www.blogspot.com | X-Frame-Options: SAMEORIGIN
www.dailymotion.com | X-Frame-Options: deny
www.deutschepost.de | X-Frame-Options: SAMEORIGIN
www.ebay.de | X-Frame-Options: SAMEORIGIN
www.facebook.com | X-Frame-Options: DENY
www.feedburner.com | X-Frame-Options: SAMEORIGIN
www.github.com | X-Frame-Options: deny
www.gmx.de | X-Frame-Options: deny
www.gmx.net | X-Frame-Options: deny
www.google.de | X-Frame-Options: SAMEORIGIN
www.groupon.de | X-Frame-Options: SAMEORIGIN
www.imdb.com | X-Frame-Options: SAMEORIGIN
www.indeed.com | X-Frame-Options: SAMEORIGIN
www.instagram.com | X-Frame-Options: SAMEORIGIN
www.java.com | X-Frame-Options: SAMEORIGIN
www.linkedin.com | X-Frame-Options: SAMEORIGIN
www.live.com | X-Frame-Options: deny
www.mail.ru | X-Frame-Options: SAMEORIGIN
www.mozilla.org | X-Frame-Options: DENY
www.netflix.com | X-Frame-Options: SAMEORIGIN
www.openstreetmap.org | X-Frame-Options: SAMEORIGIN
www.oracle.com | X-Frame-Options: SAMEORIGIN
www.paypal.com | X-Frame-Options: SAMEORIGIN
www.pingdom.com | X-Frame-Options: SAMEORIGIN
www.skype.com | X-Frame-Options: SAMEORIGIN
www.skype.de | X-Frame-Options: SAMEORIGIN
www.softpedia.com | X-Frame-Options: SAMEORIGIN
www.soundcloud.com | X-Frame-Options: SAMEORIGIN
www.sourceforge.net | X-Frame-Options: SAMEORIGIN
www.spotify.com | X-Frame-Options: SAMEORIGIN
www.stackoverflow.com | X-Frame-Options: SAMEORIGIN
www.tape.tv | X-Frame-Options: SAMEORIGIN
www.web.de | X-Frame-Options: deny
www.wikihow.com | X-Frame-Options: SAMEORIGIN
www.wordpress.com | X-Frame-Options: SAMEORIGIN
www.yandex.ru | X-Frame-Options: DENY
www.youtube.de | X-Frame-Options: SAMEORIGIN

Usage of HSTS Strict-Transport-Security

HSTS headers can only be found on a few front pages (8 of 245). Maybe it is more visible on login pages and avoided on front pages for performance reasons, maybe not. That would require further analysis. What can be said is that only some larger technology leaders are brave enough to use it on the front page:
Website | Header
www.blogger.com | Strict-Transport-Security: max-age=10893354; includeSubDomains
www.blogspot.com | Strict-Transport-Security: max-age=10893354; includeSubDomains
www.facebook.com | Strict-Transport-Security: max-age=2592000
www.feedburner.com | Strict-Transport-Security: max-age=10893354; includeSubDomains
www.github.com | Strict-Transport-Security: max-age=31536000
www.paypal.com | Strict-Transport-Security: max-age=14400
www.spotify.com | Strict-Transport-Security: max-age=31536000
www.upjers.com | Strict-Transport-Security: max-age=47336400

Conclusion

Security headers are not widespread, at least not on website front pages. Most used is the X-Frame-Options header to prevent clickjacking. Next comes X-Content-Type-Options to prevent MIME sniffing. Both of course are easy to implement as they most probably do not change your website's behaviour. I'd expect to see more HSTS on bank and other online payment service websites, but it might well be that the headers appear only on subsequent redirects when logging in, which this scan doesn't do. With CSP being the hardest to implement, as you need to have complete control over all domain usage by application content and partner content you embed, it is no wonder that only Github.com has implemented it. For me it is an indication of how clean their web application actually is.

Hotkeys to efficiently use bash in emacs mode

When watching other people familiar with Linux type commands on a bash shell, what one often needs is a lot of patience. It seems that many of us (and I include myself) do not know or regularly forget about the biggest timesavers. Here is a list of maybe trivial things I consider absolutely helpful to efficiently use Bash in Emacs mode.

1. Jump Words not Characters!

When scrolling through a long command: save time by holding Ctrl while using the cursor keys to jump words instead of characters. Alternatively press ESC before each cursor key press or use Alt-B and Alt-F.

2. Kill Words not Characters!

The same when deleting stuff. Don't press Backspace 30 times. Use a hotkey to do it in one step:
  • Alt-Backspace or Ctrl-w to delete a word backwards (the difference lies in delimiters)
  • Alt-D to delete a word forward from the cursor

3. Lookup Filenames while Writing

Do not cancel writing commands just because you forgot a path somewhere on the system. Look it up with tab completion while writing the command. If paths are in quotes, temporarily remove the quotes to make completion work again.

4. Save Cancelled Commands

If you still decide to cancel a command because you want to look something up, then keep it in the history. Just type a "#" in front of it and Enter to safely store it for later reference.

5. Use Ctrl-A and Ctrl-E

Most beginners scroll around a lot for no good reason. This hurts on slow connections. Never scroll character by character to the start/end of the command. Use Ctrl-A for the start of line and Ctrl-E for the end of line. When in screen use Ctrl-A A instead of Ctrl-A!

6. Do not overuse Ctrl-R!

Many users heavily rely on Ctrl-R for reverse history search and use it to go back >50 times to find a command of 10 characters! Just type the 10 characters! Or if you thought it was only a few steps away in the history, just cancel when you do not find it early on. This is mostly a psychological thing: "it might be around the corner", but it wastes a lot of time. When your command is only a few steps away in the history use the Up/Down keys until you find it. Cancel once you have pressed the key as many times as your command is long!

7. Use Undo!

Avoid rewriting stuff if you mess up. Case 1: If you delete something and want to restore it use Ctrl+_ (Ctrl+Shift+Dash) to undo the last change. Case 2: If you have edited a history entry you got using Ctrl-R or Up/Down navigation and want to restore it to its original value, use Alt-R. In general try to avoid editing history commands as this is confusing!

8. Use History with Line Numbers

Often I see people calling "history" and then copy&pasting a command they find with the mouse. The faster thing to do is to type "!<number>" to execute a specific command. That is usually just 4-5 characters and eliminates the possibility of pasting the wrong copy buffer or a command with line breaks after a lot of fiddling with the mouse.

What Else?

There might be other useful commands, but I think I've listed the time savers above. Feel free to comment your suggestions!

Getting apd to run properly

This is a short summary of everything that is a precondition for running APD as a PHP profiler. The description applies to PHP 5.6.2 and APD 1.0.1 and may be incorrect for other PHP/APD versions.

Absolute Preconditions

Do not start if you didn't ensure the following:

  • Deactivate the Zend platform or any other PHP optimizer. In general it is wise to disable all Zend extensions.
  • Install a debugging enabled PHP version (compiled using --enable-debug)

Correct APD Compilation

If you have a working PEAR setup you might want to set up APD as described in this Linux Journal article. Also try distribution packages. Otherwise APD is built as follows:

  • Extract tarball.
  • Change working directory into tarball.
  • Run
    <apache root>/bin/phpize
  • Run
    ./configure
    Add "--with-php-config=<apache root>/bin/php-config" if configure fails.
  • Compile and install everything using
    make
    make install
  • Edit php.ini and add at least
    zend_extension=<path from make install>/apd.so
    apd.statement=1
    apd.tracedir=/tmp/apd-traces
  • Create the output directory specified in php.ini.
Now before restarting your Apache ensure that the APD extension works properly. To do so simply run PHP
<apache root>/bin/php
Once you run it, no message should be printed if the extension is loaded properly. If you get an error message here that the "apd.so" extension could not be loaded you have a problem: ensure that you compiled against the correct PHP/Apache version and are using the same PHP runtime right now.
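Another quick check is to list the modules of the CLI binary and grep for APD:

<apache root>/bin/php -m | grep -i apd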

If PHP doesn't complain about anything enter

<?php
phpinfo();
?>
and check for some lines about the APD. If you find them you are ready for work.

Getting Some Traces

To start tracing first restart your Apache to allow the PHP module to load APD. Next you need to identify a script to trace. Add the APD call at the top of the script:

apd_set_pprof_trace();
Then make some requests and remove the statement again to avoid causing further harm.

Now have a look at the trace directory. You should find files with a naming scheme of "pprof[0-9]*.[0-9]" here. Decode them using the "pprofp" tool from your APD source tarball. Example:

<apache root>/bin/php <apd source root>/pprofp -u <trace file>
Redirect stdout if necessary. Use -t instead of -u (summary output) to get calling trees.

Tracing Pitfalls

When you create traces with -t you get a summary output too, but it doesn't contain the per-call durations. I suggest always creating both a call tree and a summary trace.

Get recipes from the cfengine design center

Today I learned that the makers of cfengine have launched a "Design Center" which essentially is a git repository with recipes (called "sketches") for cfengine. This really helps learning the cfengine syntax and getting quick results. It also saves googling wildly for hints on how to do special stuff. Besides just copy&pasting the sketches, which cfengine guarantees to be runnable without modifications, the design center wiki explains how to directly install sketches from the repository using cf-sketch. Using cf-sketch you can search for installable recipes:
# cf-sketch --search utilities
Monitoring::nagios_plugin_agent /tmp/design-center/sketches/utilities/nagios_plugin_agent
[...]
...and install them:
# cf-sketch --install Monitoring::nagios_plugin_agent
cf-sketch itself is a Perl program that needs to be set up separately by running
git clone https://github.com/cfengine/design-center/
cd design-center/tools/cf-sketch
make install

Gtk+: how to select the nth entry in a tree view?

If you don't build your own GtkTreePaths and thus cannot simply calculate the one to select, then for flat lists at least you can use the GtkTreeModel function that returns the nth child of a given GtkTreeIter. By passing NULL for the parent you get the nth child on the top-level of the tree. The code looks like this:
GtkTreeIter iter;

if (gtk_tree_model_iter_nth_child (treemodel, &iter, NULL, position)) {
   GtkTreeSelection *selection = gtk_tree_view_get_selection (treeview);

   if (selection) 
      gtk_tree_selection_select_iter (selection, &iter);
}

Gtk tray statusicon example with pygi

Here is an example on how to build a GtkStatusIcon using PyGI (Python GObject). The code actually implements a libpeas plugin that could be used with any GTK+ project that allows GI plugins. The tray icon could respond to left clicking by toggling the application window like many instant messengers do. On right clicks it presents a menu with the options to toggle the application window or quit the application.
from gi.repository import GObject, Peas, PeasGtk, Gtk

class TrayiconPlugin (GObject.Object, Peas.Activatable):
    __gtype_name__ = 'TrayiconPlugin'

    object = GObject.property (type=GObject.Object)

    def do_activate (self):
        self.staticon = Gtk.StatusIcon ()
        self.staticon.set_from_stock (Gtk.STOCK_ABOUT)
        self.staticon.connect ("activate", self.trayicon_activate)
        self.staticon.connect ("popup_menu", self.trayicon_popup)
        self.staticon.set_visible (True)

    def trayicon_activate (self, widget, data = None):
        print "toggle app window!"

    def trayicon_quit (self, widget, data = None):
        print "quit app!"

    def trayicon_popup (self, widget, button, time, data = None):
        self.menu = Gtk.Menu ()

        menuitem_toggle = Gtk.MenuItem ("Show / Hide")
        menuitem_quit = Gtk.MenuItem ("Quit")

        menuitem_toggle.connect ("activate", self.trayicon_activate)
        menuitem_quit.connect ("activate", self.trayicon_quit)

        self.menu.append (menuitem_toggle)
        self.menu.append (menuitem_quit)

        self.menu.show_all ()
        self.menu.popup(None, None, lambda w,x: self.staticon.position_menu(self.menu, self.staticon), self.staticon, 3, time)

    def do_deactivate (self):
        self.staticon.set_visible (False)
        del self.staticon

Glib gregex regular expression cheat sheet

Glib supports PCRE based regular expressions since v2.14 with the GRegex class.

Usage

GError *err = NULL;
GMatchInfo *matchInfo;
GRegex *regex;
   
regex = g_regex_new ("text", 0, 0, &err);
// check for compilation errors here!
     
g_regex_match (regex, "Some text to match", 0, &matchInfo);
Note how g_regex_new() gets the pattern as first parameter without any regex delimiters. As the regex is created separately it can and should be reused.

Checking if a GRegex did match

Above example just ran the regular expression, but did not test for matching. To simply test for a match add something like this:
if (g_match_info_matches (matchInfo))
    g_print ("Text found!\n");

Extracting Data

If you are interested in data matched you need to use matching groups and need to iterate over the matches in the GMatchInfo structure. Here is an example (without any error checking):
regex = g_regex_new (" mykey=(\\w+) ", 0, 0, &err);
g_regex_match (regex, content, 0, &matchInfo);

while (g_match_info_matches (matchInfo)) {
   gchar *result = g_match_info_fetch (matchInfo, 1);

   g_print ("mykey=%s\n", result);
         
   g_match_info_next (matchInfo, &err);
   g_free (result);
}

Easy String Splitting

Another nice feature in Glib is regex based string splitting with g_regex_split() or g_regex_split_simple():
gchar **results = g_regex_split_simple ("\\s+", 
       "White space separated list", 0, 0);
Use g_regex_split for a precompiled regex or use the "simple" function to just pass the pattern.

Gcc linking and mixed static and dynamic linking

GCC syntax schema to link some libraries statically and others dynamically:
gcc <options> <sources> -o <binary> -Wl,-Bstatic <list of static libs> -Wl,-Bdynamic <list of dynamic libs>
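As a concrete sketch (the library names are made up for illustration), linking libfoo statically while keeping pthread and libm dynamic could look like this. Note that ending the command with -Wl,-Bdynamic also ensures that the implicitly added system libraries are still linked dynamically:

# libfoo.a is picked up statically, libpthread/libm stay shared
gcc main.c -o myapp -Wl,-Bstatic -lfoo -Wl,-Bdynamic -lpthread -lm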

Follow file with tail until it gets removed

Instead of
tail -f /var/log/myserver.log
use
tail --follow=name /var/log/myserver.log
Using the long form --follow instead of -f you can tell tail to watch the file name and not the file descriptor. So shortly after the file name was removed tail will notice it and terminate itself.

Fix broken text encoding

You have a text file with broken encoding? You want to strip all invalid characters from it? Here is how to do it:
iconv -c -t ASCII input.txt
The result will be printed to stdout. The -c switch does the stripping. Using -t you can select any target encoding you like.

Filtering dmesg output

Many administrators just run "dmesg" to check a system for problems and do not bother with its options. But actually it is worth knowing about the filtering and output options of the most recent versions (Note: older distros e.g. CentOS5 might not yet ship these options!). You always might want to use "-T" to show human readable timestamps:
$ dmesg -T
[...]
[Wed Oct 10 20:31:22 2012] Buffer I/O error on device sr0, logical block 0
[Wed Oct 10 20:31:22 2012] sr 1:0:0:0: [sr0]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[...]
Additionally the severity and source of the messages is interesting (option -x):
$ dmesg -xT
[...]
kern  :err   : [Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
kern  :info  : [Wed Oct 10 20:31:21 2012] sr 1:0:0:0: [sr0]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[...]
Now we see that only one of those example message lines was an actual error. But we can even filter for errors or worse ignoring all the boot messages (option -l):
$ dmesg -T -l err,crit,alert,emerg
[...]
[Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
[...]
In the same way it is possible to filter the facility (the first column in the -x output). For example this could return:
$ dmesg -T -f daemon
[...]
[Wed Oct 10 19:57:50 2012] udevd[106]: starting version 175
[Wed Oct 10 19:58:08 2012] udevd[383]: starting version 175
[...]
In any case it might be worth remembering:
  • -xT for a quick overview with readable timestamps
  • -T -l err,crit,alert,emerg to just check for errors
I recently created a simple dmesg Nagios plugin to monitor for important messages with Nagios. You can find it here.
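If you just want a quick look at the most recent problems you can of course combine the error filter with other tools, for example:

# show only the last 20 error-or-worse messages with readable timestamps
dmesg -T -l err,crit,alert,emerg | tail -n 20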

Filesystem sync + cloud storage + android app

I want to share some ideas I have for after the 1.10 release. One of the more requested features is better syncing. Well... Google Reader did provide a choice, but will be dead soon. And TinyTinyRSS is an option but requires sysadmin skills to set it up. Native Liferea synchronization would be a better choice. The following post is a collection of some first ideas on how to do it in a very easy, simple and light-weight way.

Sync Services

IMO the easiest solution often suggested by users would be an SFTP / WebDAV / cloud storage based storage that the user mounts and Liferea just syncs files to. Some well known options would be
Service | Free Quota | Client Support
Ubuntu One | 5GB free | Native in Ubuntu, known to work elsewhere too
Dropbox | 2GB free | Packages for Debian, Ubuntu, Fedora (link)
SpiderOak | 2GB free | Packages for Debian, Fedora, Slackware (link)
Wuala | 2GB free | Installer for Debian, Ubuntu, Fedora, Redhat, Centos, OpenSuse (link)

Sync Schema

So at the moment I'm wondering how to implement a synchronization schema that can sync several Liferea instances and a mobile client. Consider the simple schema from the sync concept chart as a starting point. The most important implicit points:
  1. We do not sync all items. Just the reading state!!!
  2. Users want synchronization to not read things twice
  3. Users want synchronization to never lose their subscriptions
  4. Users want synchronization to keep important stuff (flagged items and newsbin)
  5. Only one client at a time. Locking with lock expiration.
  6. We rely on RSS/Atom GUIDs for having synchronous item ids amongst different sync clients
  7. We simplify read state sync by only listing unread ranges

XML File Layout

So an implementation might be working with a set of simple XML files in a directory structure like this:
clients/
clients/01d263e0-dde9-11e2-a28f-0800200c9a66.xml     (Client 1 state)
clients/0b6f7af0-dde9-11e2-a28f-0800200c9a66.xml      (Client 2 state)
clients/lock.xml                                                         (might be missing)
data/feedlist.opml
data/read-states.xml
data/items/flagged-chunk1.xml
data/items/flagged-chunk2.xml
data/items/flagged-chunk3.xml
data/items/newsbin-wtwxzo34-chunk1.xml
data/items/newsbin-wtwxzo34-chunk2.xml

Sync Logic

Each client can read the files at any time and relies on them being written atomically. As it is XML, an incomplete file simply won't parse; then the client can cancel the sync and read again later. If a client can obtain all files it should:
  • Check if sync replay is needed (another client did write more recently)
  • Merge changes in the feed list, fetch new feeds initially
  • Merge all read states that do not match yet
  • Merge all new flagged items chunks.
  • Merge all new newsbin chunks.
If a clients wants to sync (e.g. periodically, or on user request or on shutdown) it should:
  • Acquire the lock (even if this might not make sense on delayed sync'ed directories)
  • Update the client meta data
  • Update read states
  • Add new flagged/newsbin item chunks if needed.
  • Remove items from older chunks if needed.
  • Join chunk files if there are too many.
  • Release the lock.

Determine memory configuration of hp servers

Use dmidecode like this
dmidecode 2>&1 |grep -A17 -i "Memory Device" |egrep "Memory Device|Locator: PROC|Size" |grep -v "No Module Installed" |grep -A1 -B1 "Size:"
The "Locator:" line gives you the slot assignments as listed in the HP documentation, e.g. the HP ProLiant DL380 G7 page. Of course you can also look this up in the ILO GUI.

Detecting a dark theme in gtk

When implementing an improved unread news counter rendering for Liferea I found it necessary to detect whether a light or a dark GTK theme is active. The reason for this was that I didn't want to use the foreground and background colors, which are often black and something very bright. So instead I looked at the GtkStyle struct, which looks like this
typedef struct {
  GdkColor fg[5];
  GdkColor bg[5];
  GdkColor light[5];
  GdkColor dark[5];
  GdkColor mid[5];
  GdkColor text[5];
  GdkColor base[5];
  GdkColor text_aa[5];          /* Halfway between text/base */
[...]
I decided to use the "dark" and "bg" colors with "dark" for the background and "bg" for the number text. For a light standard theme this mostly results in a white number on some shaded background. This is how it looks (e.g. the number "4" behind the feed "Boing Boing"):

Inverse Colors For Dark Theme Are Hard!

The problem is when you use for example the "High Contrast Inverse" dark theme. Then "dark" suddenly is indistinguishable from "bg", which of course makes sense. So we need to choose different colors with dark themes. Actually the implementation uses "bg" as foreground and "light" for background.

How to Detect Dark Themes

To do the color switching I first googled for an official GTK solution but found none. If you know of one please let me know! In the meantime I implemented the following simple logic:
	gint		textAvg, bgAvg;

	textAvg = style->text[GTK_STATE_NORMAL].red / 256 +
	        style->text[GTK_STATE_NORMAL].green / 256 +
	        style->text[GTK_STATE_NORMAL].blue / 256;

	bgAvg = style->bg[GTK_STATE_NORMAL].red / 256 +
	        style->bg[GTK_STATE_NORMAL].green / 256 +
	        style->bg[GTK_STATE_NORMAL].blue / 256;

	if (textAvg > bgAvg)
		darkTheme = TRUE;
As the "text" color and "background" color should always be contrasting colors, the comparison of the sums of their RGB components should produce a useful result. If the theme is a colorful one (e.g. a very saturated red theme) it might sometimes cause the opposite result than intended, but background and foreground will still be contrasting enough that the result stays readable; only the number background will not contrast well with the widget background. For light or dark themes the comparison should always work well and produce optimal contrast. Now it is up to the Liferea users to decide whether they like it or not.

Debugging the openemm bounce handling setup

The following is a short summary of things to configure to get OpenEMM bounce handling to work. The problem is mostly setting up the chain from your sendmail setup through the milter plugin provided by OpenEMM, which then communicates with another daemon "bavd" that, as I understand it, keeps per-mail-address statistics and writes the bounce results into the DB.

The things that can be the cause for problems are these:

  1. Your sendmail setup.
  2. The bav* python scripts not working.
  3. The bavd daemon not running/working.
  4. DB access not working
  5. Missing bounce filter in OpenEMM.
  6. Missing bounce alias in bav config.

The real good thing is that OpenEMM is so very well documented that you just need to look up the simple howto documentation and everything will work within 5min... Sorry, just kidding! They seem to want to make money on books and support, somehow don't write documentation and rely on endless forum posts of clueless users.

Enough of the rant. Below you find some hints on how to work around the problem causes mentioned above:

Setup Preconditions:

  1. Within OpenEMM a bounce filter has to be configured. Name and description do not matter, it just needs to exist.
  2. The sendmail setup must have milter running the OpenEMM "bav" filter. So /etc/mail/sendmail.mc should have a line like
    INPUT_MAIL_FILTER(`bav', `S=unix:/var/run/bav.sock, F=T')dnl
  3. The sendmail log (e.g. /var/log/mail.log) must be readable by your OpenEMM user

Setup:

  1. Define mail alias matching the bounce filter: Edit your bav.conf-local (e.g. /etc/openemm/conf/bav/bav.conf-local) and add something like
    ext_6@<your-domain> alias:ext_12@<your-domain>
    with "ext_6" being the sender address and ext_12 the bounce filter address.

Check List:

  1. Ensure the "bavd" daemon is running (its log file can be found in /var/log/*-<host>-bavd.log)
    • Ensure bavd server port 5166 is open
    • Ensure mails are passed to bavd (look for "scanMessage" lines in log file)
    • Ensure both soft and hard bounces are found (DSN 4.x.x and DSN 5.x.x)
  2. Ensure the bav filter is running. Check "ps -ef" output for "bav -L INFO"
  3. Ensure bounces show up in your OpenEMM DB instance:
    select count(*) from customer_1_binding_tbl where user_status=2;
    The meaning of the user_status value is as follows:
    Value | Meaning
    1 | active
    2 | hard bounce
    3 | opt out by admin
    4 | opt out by user
    (see OpenEMM Handbuch)

    Also remember that hard bounces might not be generated immediately. In case of soft bounces OpenEMM waits for up to 7 bounces to consider the mail address as bounced.
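To get a quick overview of all states at once you could run something like the following (assuming the MySQL command line client and that your OpenEMM database is named "openemm"; adjust both to your setup):

# "openemm" is an assumed database name, adjust to your installation
mysql openemm -e "select user_status, count(*) from customer_1_binding_tbl group by user_status;"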

Create random passwords

In Debian there is a package "mkpasswd" which allows creating passwords like this:
echo | mkpasswd -s

Confluence: query data from different space with {content Reporter}

You can use the {content-reporter} macro (provided by the commercial Customware Confluence plugin) to access pages in different spaces, but you need to
  1. Add "space=<space name>" to the {content-reporter} parameters
  2. Add "<space name>:" in front of the page path in the scope parameter
to make it work. Example to query "somefield" from all child pages of "ParentPage" in "THESPACE":
{report-table}
{content-reporter:space=THESPACE|scope="THESPACE:ParentPage" > descendents}
{text-filter:data:somefield|required=true}
{content-reporter}
{report-column:title=somefield}{report-info:data:somefield}{report-column}
{report-table}

Configure hadoop to use syslog on ubuntu

If you come here and search for a good description on how to use syslog with Hadoop you might have run into this issue: As documented on apache.org (HowToConfigurate) you have setup the log4j configuration similar to this

# Log at INFO level to DRFAAUDIT, SYSLOG appenders
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,DRFAAUDIT,SYSLOG

# Do not forward audit events to parent appenders (i.e. namenode)
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

# Configure local appender
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=/var/log/audit.log
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Configure syslog appender
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=loghost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1

It is important to have "SYSLOG" in the "...FSNamesystem.audit" definition at the top and to define such a "SYSLOG" appender below with "log4j.appender.SYSLOG". There you configure your loghost and facility. Now it might be that you still do not get anything in syslog on your loghost when using Hadoop version 0.18 up to at least 0.20. I found a solution to this only at this Japanese blog post which suggested to modify the Hadoop start helper script /usr/lib/hadoop/bin/hadoop-daemon.sh to make it work. You need to change the environment variables

export HADOOP_ROOT_LOGGER="INFO,DRFA"
export HADOOP_SECURITY_LOGGER="INFO,DRFAS"

to include "SYSLOG":

export HADOOP_ROOT_LOGGER="INFO,SYSLOG,DRFA"
export HADOOP_SECURITY_LOGGER="INFO,SYSLOG,DRFAS"

After making this change the syslog logging will work.
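To verify the syslog transport independently of Hadoop you can send a test message to the LOCAL1 facility by hand (this assumes your local syslog daemon forwards LOCAL1 to the loghost):

# should show up on the loghost if forwarding of LOCAL1 works
logger -p local1.info "hadoop syslog test"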

Comparison of flv and mp4 metadata tagging tools (injectors)

This post is a comparison of the performance of different tools available to tag FLV and MP4 containers with specific metadata (e.g. title, keyframes, generator or other custom fields...). For FLV containers flvtool2, flvtool++ and yamdi are compared. For the MP4 container MP4box, AtomicParsley and ffmpeg are compared.

Here are the IMO three most important FLV taggers tested on a 125MB FLV:

Name | Duration | Large Files | In Memory | Custom Tags | Command
flvtool2 1.0.6 | 3min 11s | no | no | yes | flvtool2 -UP -band:Test -user:Test -date:1995 -genres:pop test.flv
flvtool++ 1.2.1 | 3s | no | yes | yes | flvtool++ test.flv -tag band "Test" -tag user "Test" -tag date "1995" -tag genres "pop" test2.flv
yamdi 1.6 | 1.5s | yes | no | no (patch) | yamdi -i test.flv -o test2.flv -c "Test"

The performance of flvtool2 is horrendous. For films of 120min it will take hours to process. Therefore: Do not use it! Use Facebook's flvtool++ instead. I guess the bad performance results from it being built in Ruby. Also notice the "Large Files" column indicating large file support, which officially only yamdi supports (by adding the compile flag -D_FILE_OFFSET_BITS=64). Another important point is the "In Memory" column indicating that flvtool++ loads the entire file into memory when tagging, which is problematic when tagging large files. Given these results only yamdi should be used for FLV tagging!

Now for the MP4 tagging. Here you can select between a lot of tools from the net, but only a few of them are command line based and available for Unix. The MP4 test file used is 100MB large.

Name | Duration | Command
AtomicParsley | 0.6s | AtomicParsley test.mp4 --artist "Test" --genre "Test" --year "1995"
MP4Box | 0.6s | MP4Box -itags Name=Test:Artist=Me:disk=95/100 test.mp4
ffmpeg 0.6 | 0.8s | ffmpeg -i test.mp4 -metadata title="Test" -metadata artist="Test" -metadata date="1995" -acodec copy -vcodec copy test2.mp4

Given that recent ffmpeg brings the tagging for MP4 out of the box (it doesn't for FLV though), you do not even need an external tool to add the metadata.

Chef which nodes have role

Why is it so hard to find out which nodes have a given role or recipe in chef? The only way seems to be looping yourself:
for node in $(knife node list); do
   if knife node show -r $node | grep 'role\[base\]' >/dev/null; then       
     echo $node;
   fi;
done
Did I miss some other obvious way? I'd like to have some "knife run_list filter ..." command!

Chef how to debug active attributes

If you experience problems with attribute inheritance on a chef client and watch the chef-client output without knowing what attributes are effective, you can either look at the chef GUI or do the same on the console using "shef" (or "chef-shell" in newer chef releases). So run
chef-shell -z
The "-z" is important to get chef-shell to load the currently active run list for the node that a "chef-client" run would use. Then enter "attributes" to switch to attribute mode
chef > attributes
chef:attributes >
and query anything you like by specifying the attribute path as you do in recipes:
chef:attributes > default["authorized_keys"]
[...]
chef:attributes > node["packages"]
[...]
By just querying for "node" you get a full dump of all attributes.

Chef gets push in q12014

Sysadvent features a puppetlabs sponsored article (yes, honestly, check the bottom of the page!) about chef enterprise getting push support. It is supposed to be included in the open source release in Q1/2014. With this change you can use a push jobs cookbook to define jobs and an extended "knife" with new commands to start and query jobs:
knife job start ...
knife job list
and
knife node status ...
will tell about the job execution status on the remote node. At first glance it seems nice. Then again I feel worried when this is intended to get rid of SSH keys. Why do we need to get rid of them exactly? And in exchange for what?

Cfagent refuses to run on new client

Sometimes Cfengine refuses to run on a new client and simply does not explain what the problem is. The error message (which you only see when running cfagent with -v) looks like this:
cfengine:client:/var/opt/dsau/cfengine/inputs/update.conf:194: Warning:
actionsequence is empty
cfengine:client:/var/opt/dsau/cfengine/inputs/update.conf:194: Warning: perhaps
cfagent.conf/update.conf have not yet been set up? 
The message "actionsequence is empty" really means that the cfagent.conf is empty because it could not be retrieved. The question then is why it could not be retrieved. Here is a check list:
  1. update.conf is somehow broken
  2. Master host cannot be contacted in network
  3. Master host and client resolve the hostname to different IPs
In my case the problem was caused by #3: due to a different /etc/hosts the hostname did not resolve to the same IP as on the cfengine master. Usability of cfengine sucks...
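A quick way to check cause #3 is to compare the name resolution on both sides, for example:

# run this on both the client and the cfengine master and compare the output
getent hosts <client hostname>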

Corba mind map

An old mind map on CORBA from University time...

CORBA

  • + -C++ Language Mapping
    • + - client side
      • + - interface
        • interface maps to abstract classes
        • actual proxy classes are subclasses
      • + - interface inheritance
        • object references can be widened and narrowed
        • + - widening
          • no type conversion
          • no duplication
        • + - narrowing
          • requires explicit operation that may create a new stub object
          • always triggers duplication
          • potential remote communication to validate target type
          • returns _nil on failure
      • + - object references
        • local reference counting
        • xxx_ptr for pointer-like types
        • + - xxx_var for auto-managed references
          • constructor invokes duplicate
          • destructor invokes release
          • can be used like pointers (typically overload operator->)
        • CORBA::release(obj)
        • xxx::_duplicate to create new object
          • implemented by the interface class
        • + - lifecycle
          • a client never creates object references
          • + - a client can get an object reference
            • as result/out parameter of operation
            • initial reference
            • narrowing
            • nil reference
          • a client keeps references and may duplicate it
          • eventually the client releases the references
      • + - ORB
        • + - ORB_init
          • may get cmd line parameters
        • + - operations to create stringified object references
          • IOR
          • corbaloc
          • corbaname
      • + - operations
        • map to member functions with same name
        • + - parameter passing
          • must consider memory mgmt
          • must differ between fixed- and variable-sized types
          • must consider parameter directionality
          • + - out parameters in general
            • _out suffix
            • fixed size -> &
            • variable size -> T_out performs memory mgmt
          • + - passing of simple types
            • + - in
              • pass by value
            • + - out
              • passed as _out type
            • + - inout
              • passed by reference
            • + - result
              • passed as return value
          • + - passing of fixed-size types
            • goal: avoid copy
            • + - in
              • pass as const &
            • + - out
              • as _out type (can also pass _var)
            • + - inout
              • reference
          • + - passing of variable-length string types
            • + - in
              • const char *
            • + - out
              • CORBA::String_out
            • + - inout
              • char *
          • + - passing of variable-length complex types
            • + - in
              • const & (to avoid copying)
            • + - out
              • as _out type
              • caller must explicitly deallocate using delete
            • + - inout
              • references
          • + - passing of object reference
            • + - in
              • as _ptr type
            • + - out
              • as _ptr reference or _var type
            • + - inout
              • as _ptr reference
          • + - passing _var types
            • + - in
              • const _var &
            • + - out
              • _var type
            • + - inout
              • _var reference
      • + - attributes
        • map to pair of get/set methods overloaded on attribute name
      • + - exception
        • mapped to own classes
    • + - server side
      • the server side mapping is a generalization of the client side mapping
      • + - skeleton class for each interface
        • class with prefix POA_
      • servant classes inherit from skeleton class
      • + - ORB initialization steps
        • + - 1.Initialize ORB
          • CORBA::ORB_init();
        • + - 2. get RootPOA
          • orb->resolve_initial_references("RootPOA");
        • 3. Narrow RootPOA to PortableServer::POA
        • + - 4. Obtain servant manager
          • poa->the_POAManager();
        • + - 5. Activate servant manager
          • poa->activate();
        • 6. Create servant instance
        • + - 7. Activate servant
          • servant->_this();
        • + - 8. Start ORB mainloop
          • orb->run()
      • + - parameter passing to servants
        • + - simple types
          • pass by value of reference (depending on direction)
        • + - fixed-length complex types
          • (const) reference
          • _out type for output parameters
        • + - strings
          • pass as char * or references
          • String_out for output
        • + - complex types & any
          • + - in
            • pass by const reference
          • + - out
            • pass by reference
            • use _out type
          • + - result
            • return pointer
        • + - object references
          • pass _ptr value or reference
      • + - raising exceptions
        • throw by value
        • catch by reference
      • ORB deallocates all in and inout parameters
    • + - types
      • here you find a node for each IDL construct and a child node for the C++ mapping
      • + - module
        • namespace
      • + - enum
        • enum
        • dummy values allowed to enforces 32bit enum values
      • + - const
        • static const (in classes)
        • const (outside classes)
      • + - primitive types
        • + - boolean
          • CORBA::Boolean
        • + - strings
          • + - char
            • CORBA::Char
          • + - wchar
            • CORBA:WChar
          • helper functions for allocation, duplication and freeing
          • terminated with \0
        • + - short
          • CORBA:Short
        • + - float
          • CORBA::Float
        • ...
      • + - variable length types
        • + - memory mgmt convention
          • producer allocates
          • consumer deallocates
          • example: results on client side: ORB allocates, caller deallocates
        • + - _var types are generated
          • wrappers for lower-level mapped types
          • manages storage for lower-level type
          • generated for both fixed-size and variable-sized types
        • example string -> CORBA::String_var
      • + - struct
        • fixed-length fields are mapped like primitive types
        • variable-length fields are mapped to memory-managed classes
      • + - sequence
        • maps to vector-like type
      • + - union
        • does not map to C++ unions because they have no discriminator
        • union discriminator is set by using one of the access methods
        • rw access methods provided for all possible types
        • for default case there is a _default() method
  • + -CCM
    • CORBA Component Model, introduced with CORBA 3
    • includes IDL extension (CIDL)
    • implementation framework
    • container framework
    • simplifies state mgmt and persistence
    • packaging + deployment specification
  • about this mindmap
  • + -IDL
    • + - Interface
      • + - defines object services
        • horizontal services
      • + - defines domain interfaces
        • vertical services
      • defines application interfaces
      • no private/protected possible
      • smallest unit of distribution
      • can have exceptions
      • is a type (can be passed with operations, but passing is always done per reference; valuetypes can be used for per-value passing)
      • + - interfaces can be derived from other interfaces
        • but still no function overloading allowed
        • multiple inheritance allowed
    • + - IDL user roles
      • IDL developer
        • write IDL specification
      • server developer
        • provides server implementation
      • application developer
        • implements client application
    • + - Exceptions
      • IDL developer can define new ones
      • + - CORBA provides system exceptions
        • need not be defined to be thrown
      • can have members like structures
    • defines an interface contract between client and server
    • compiled by IDL compiler provided with language mapping
    • purely declarative
    • Java/C++ syntax
    • only IDL types can be used for client/server communication
    • + - Types
      • there are a lot of standard types
      • a struct statement can only define a type and no data; the defined type has to be used somewhere to have any effect
      • recursive type definitions can only be created by using sequences
    • + - Valuetype
      • creates an interface that, when used, is passed by value and not by reference
      • to prevent one communication round-trip per object data access
      • + - use cases
        • extensible structures
        • recursive and cyclic structures
        • allows complete implementation of Java RMI with CORBA
      • needs a factory to create an instance in the receiving ORB; the application must register the value factory
      • receiving ORB must know the complete structure
      • method definitions require local implementations
      • can be truncatable (receiving ORB omits unknown fields)
      • can be recursive
      • can have private fields
      • value boxes: can be used to create valuetypes from standard types; this is useful for building recursive valuetypes containing standard type fields
    • + - Operations
      • have a name
      • have a return type
      • + - can have parameters
        • + - parameters have directionality
          • in
          • out
          • inout
      • can throw exceptions
      • can be oneway
      • + - can have context clauses
        • a feature to pass something similar to Unix environment vars
      • no function overloading
    • + - Attributes
      • cause the generation of set/get operations
      • can be read-only (no get operation)
    • + - Modules
      • to define namespaces
      • can be extended
  • + -architecture
    • OO RPC's
    • + - Object Services
      • + - what are object services?
        • Object Services are infrastructure services, independent from specific applications.
        • Are defined in IDL
        • can be implemented independently from a specific CORBA implementation (cf. ORB services)
      • + - Naming Service
        • associates names with object references
        • can be used instead of IORs to find the "initial" application object
        • ORB provides a standard interface to locate the name service: orb->resolve_initial_references("NameService");
        • + - binding
          • inserting a name
        • + - unbinding
          • deleting a name
        • + - names
          • have an id
          • have a kind
          • a name defines a path relative to the naming context
        • + - URLs
          • + - corbaloc
            • syntax: host, port, object key
          • + - corbaname
            • string version of name service entry
            • syntax: host, default object key, #, path
        • there were proprietary approaches...
        • + - and there is INS
          • Interoperable Naming Service
          • std cmd line -ORBInitRef NameService=...
          • shorter URLs
      • Trading Services
      • Property Service
      • + - Event Service
        • alternative for call-based client-server architecture
        • simple decoupled communication
        • 1:n communication possible
        • n:n medium may be used
        • source of messages does not know consumers
        • emitting messages is typically non-blocking
        • + - OMG standardized interfaces
          • for event consumers and suppliers
          • for push and pull communication
          • for typed and untyped communication
          • also available: event channel interfaces
      • + - Notification Service
        • extension of Event Service
      • (Transactions)
      • (Persistency)
    • + - interoperability protocols
      • CDR (Common Data Representation)
      • GIOP (General Inter-ORB Protocol)
      • IIOP (Internet Inter-ORB Protocol)
      • + - alternative protocols
        • + - ESIOP: environment specific
          • e.g. DCE-CIOP
        • SCCP (Signalling Connection Control Part) IOP
        • pluggable protocols
    • + - standard control flow
      • + - 1. client sends request
        • client ORB analyses object reference
        • connects to server if necessary
        • sends request
      • + - 2. ORB receives request
        • activates server
        • activates target servant
        • invokes method
        • sends back reply
  • + -ORB
    • + - POA
      • replaces BOA (CORBA 2.1)
      • POA interfaces are local
      • + - details object activation
        • POA has an active object map
      • on creation each POA is assigned to a POA manager
      • a POA defines a namespace for servants
      • + - servant
        • implementation object, determines run-time semantics of one or more CORBA objects
        • has an ObjectID (unique within a POA)
        • can be active or inactive
        • + - default servant
          • servant associated with all ObjectIDs
        • + - etherealization
          • the action of detaching a servant from an ObjectID
        • + - incarnation
          • the action of creating or specifying a servant for a given ObjectID
          • object reference needs to be given
        • + - activation
          • creates object reference
          • servant must be given
      • + - POA manager
        • controls flow of requests for one or multiple POAs
        • + - has an activity state
          • + - active
            • requests are processed
          • + - holding
            • requests are saved
          • + - discarding
            • requests are refused with TRANSIENT exception
          • + - inactive
            • requests rejected
            • connections are shut down
      • + - responsibilities
        • assigns object references to servants
        • transparently activates servants
        • + - assigns policy to servants
          • all servants within a POA have the same implementation characteristics (policies)
          • the root POA has a standardized set of policies
        • determines relevant servant for incoming requests and invokes the requested operation at the servant
      • + - object references
        • + - consists of
          • repository id
          • transport address
          • + - object key
            • POA Name
            • ObjectID
            • the object key is a specific format for each ORB
    • + - BOA
      • deprecated
  • + -open, standardized and portable middleware platform
    • no more socket programming!
    • programming conventions and design patterns
    • + - advantages
      • vendor independent
      • platform independent
      • solves repetitive tasks for developers of distributed systems
      • there are a lot of language mappings
    • + - disadvantages
      • no reference implementation
      • consensus/compromise architecture
      • not perfect
      • can be overkill
  • + -CORBA objects
    • have interfaces defined in IDL
    • implemented in programming language mapping
    • + - objects are accessed through object references
      • + - reference
        • identifies exactly one object
        • more than one reference may refer to one object
        • is strongly typed
        • + - references are opaque
          • IOR (interoperable object ref) needed to use between incompatible ORBs
        • can be "nil" (does not yet refer to an object)
        • can dangle (object doesn't exist anymore)
        • allows late binding
        • may be persistent
  • + -is transparent in terms of
    • language
    • location
    • service
    • implementation
    • architecture
    • operating system
    • (protocol)
    • (transport)
  • + -CORBA 2 vs. 3
    • + - in v2 objects are restricted to single interface
      • separate service and mgmt interface
      • async communication hard to implement
    • handling a large number of objects requires significant book-keeping in application
    • deployment of applications is difficult

Bad and good extraction with regular expressions in perl

Again and again I find myself writing stuff like this:
if($str =~ /(\w+)\s+(\w+)(\s+(\w+))?/) {
      $result{id} = $1;
      $result{status} = $2;
      $result{details} = $4 if(defined($4));
}
when I should write:
if($str =~ /(?<id>\w+)\s+(?<status>\w+)(\s+(?<details>\w+))?/) {
      %result = %+;
}
as described in the perlre manual: Capture group contents are dynamically scoped and available to you outside the pattern until the end of the enclosing block or until the next successful match, whichever comes first. (See Compound Statements in perlsyn.) You can refer to them by absolute number (using "$1" instead of "\g1" , etc); or by name via the %+ hash, using "$+{name}".
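To see %+ in action without writing a full script, a quick one-liner like this should do (the sample input line and the field names are just for illustration):
# sample input "job42 RUNNING verbose"; id/status/details are made-up field names
echo "job42 RUNNING verbose" | perl -MData::Dumper -ne 'print Dumper(\%+) if /(?<id>\w+)\s+(?<status>\w+)(\s+(?<details>\w+))?/'
It prints a hash with the keys id, status and details.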

Adding timestamps to legacy script output

Imagine a legacy script. Very long, complex and business critical. No time for a rewrite and no fixed requirements. But you want to have timestamps added to the output it produces.

Search and Replace

One way to do this is to find each and every echo and replace it.
echo "Doing X with $file."
becomes
log "Doing X with $file."
and you implement log() as a function that prefixes the timestamp (see the sketch below). The danger here is accidentally replacing an echo whose output is redirected somewhere else and must not be touched.
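A minimal sketch of such a log() function could look like this (assuming all output goes to stdout and date(1) formatting is good enough):
log() {
    # prefix every message with a timestamp
    echo "$(date '+%Y-%m-%d %H:%M:%S') $*"
}

log "Doing X with $file."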

Alternative: Wrap in a Subshell

Instead of modifying all echo calls one could do the following and just "wrap" the whole script:
#!/bin/bash

(

<the original script body>

) | stdbuf -i0 -o0 -e0 awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'
You can drop the stdbuf invocation that unbuffers the pipe if you do not need 100% exact timestamps. Of course you can also use this with any slow-running command on the shell, no matter how complex: just append the pipe. The obvious advantage: you do not touch the legacy code and can be 100% sure the script still works after adding the timestamps.
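The same awk pipe works for ad-hoc use on a single command as well, for example (long_running_command is just a placeholder):
long_running_command | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'   # long_running_command is a placeholder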

Acer aspire one linux flash video performance

There are dozens of guides on how to run the Acer Aspire One netbook with Ubuntu 12.04 (and its derivatives Lubuntu, Xubuntu & Co), which provides reasonably good hardware support. For performance reasons most of them suggest installing the AMD Catalyst driver. This is good because it allows playing HD videos without problems on this otherwise small netbook. Still, Flash video doesn't work! Most guides do not mention this, but the solution is quite simple: one also needs to install the XvBA Linux support. In Debian and Ubuntu this means
sudo apt-get install xvba-va-driver libva-glx1 libva-egl1 vainfo
This is also described on the Ubuntu BinaryDriverHowto page but is missing in almost all other tutorials on how to get Radeon chips working on Linux. So again: install XvBA! If you are unsure whether it is working on your system, just run
vainfo
and check if it lists "Supported profile and entrypoints". If it does not, or the tool doesn't exist, you are probably running without hardware acceleration for Flash.
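If you prefer a scripted check, something along these lines should work (the grep pattern is an assumption based on typical vainfo output, adjust it for your driver):
# VAEntrypointVLD is the usual marker for hardware decoding entrypoints in vainfo output
vainfo 2>/dev/null | grep -q VAEntrypointVLD && echo "VA-API decoding available" || echo "no VA-API decoding found"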

Access gnomekeyring with python gir

Since GTK+ 3.0 and the broad introduction of GObject Introspection (GIR) one can now switch from the existing GnomeKeyring Python module to direct GIR-based access, which removes a Python runtime dependency. Below you find a simple keyring access script that unlocks a keyring named "test", adds a new entry and dumps all entries in the keyring. The code uses the generic secret keyring type and was originally written for a Liferea plugin that allows Liferea to store feed passwords in GnomeKeyring:
from gi.repository import GObject
from gi.repository import GnomeKeyring

keyringName = 'test'

def unlock():
    print 'Unlocking keyring %s...' % keyringName
    GnomeKeyring.unlock_sync(keyringName, None)

def dump_all():
    print "Dump all keyring entries..."
    (result, ids) = GnomeKeyring.list_item_ids_sync(keyringName)
    for id in ids:
        (result, item) = GnomeKeyring.item_get_info_sync(keyringName, id)
        if result != GnomeKeyring.Result.OK:
            print '%s is locked!' % (id)
        else:
            print '  => %s = %s' % (item.get_display_name(), item.get_secret())

def do_query(id):
    print 'Fetch secret for id %s' % id
    attrs = GnomeKeyring.Attribute.list_new()
    GnomeKeyring.Attribute.list_append_string(attrs, 'id', id)
    result, value = GnomeKeyring.find_items_sync(GnomeKeyring.ItemType.GENERIC_SECRET, attrs)
    if result != GnomeKeyring.Result.OK:
        return

    print '  => password %s = %s' % (id, value[0].secret)
    print '     keyring id  = %s' % value[0].item_id

def do_store(id, username, password):
    print 'Adding keyring entry for id %s' % id
    GnomeKeyring.create_sync(keyringName, None)
    attrs = GnomeKeyring.Attribute.list_new()
    GnomeKeyring.Attribute.list_append_string(attrs, 'id', id)
    GnomeKeyring.item_create_sync(keyringName, GnomeKeyring.ItemType.GENERIC_SECRET, repr(id), attrs, '@@@'.join([username, password]), True)
    print '  => Stored.'

# Our test code...
unlock()
dump_all()
do_store('id1', 'User1', 'Password1')
do_query('id1')
dump_all()
For simplicity the username and password are stored together as the secret token separated by "@@@". According to the documentation it should be possible to store them separately, but my limited Python knowledge and the missing GIR documentation made me use this simple method. If I find a better way I'll update this post. If you know how to improve the code please post a comment! The code should raise a keyring password dialog when run for the first time in the session and give an output similar to this:
Unlocking keyring test...
Dump all keyring entries...
  => 'id1' = TestA@@@PassA
Adding keyring entry for id id1
  => Stored.
Fetch secret for id id1
  => password id1 = TestA@@@PassA
     keyring id  = 1
Dump all keyring entries...
  => 'id1' = TestA@@@PassA
You can also check the keyring contents using the seahorse GUI where you should see the "test" keyring with an entry with id "1" as in the screenshot below.

1000 problems when compiling ffmpeg and mplayer

This is a short compilation of ffmpeg/mplayer compilation pitfalls:

libx264: If compilation fails with an error about the number of parameters in common/cpu.c you need to check which glibc version is used. Remove the second parameter to sched_getaffinity() if necessary and recompile.

ffmpeg+x264: ffmpeg configure fails with:
ERROR: libx264 not found
If you think configure made a mistake, make sure you are using the latest
version from SVN.  If the latest version fails, report the problem to the
[email protected] mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.err" produced by configure as this will help
solving the problem.
This can be caused by two effects:
  • An unintended library is used for linking. Check whether you have different ones installed; avoid this and uninstall them if possible. If necessary use LD_LIBRARY_PATH or --extra-ldflags to change the search order.
  • Incompatible combination of ffmpeg and libx264. Older libx264 versions provide a method x264_encoder_open which older ffmpeg versions check for. More recent libx264 versions add a version number to the method name. So when you build an older ffmpeg against a new libx264, the libx264 detection that relies on the symbol name fails. As a workaround you could hack the configure script to check for "x264_encoder_open_78" instead of "x264_encoder_open" (given that 78 is the libx264 version you use); the check below shows how to find out which suffix your libx264 actually exports.
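To find out which versioned symbol your installed libx264 actually exports, a quick check like this should help (the library path is just an example, adjust it to your system):
nm -D /usr/lib/libx264.so | grep x264_encoder_open   # path is an example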

ffmpeg+x264: ffmpeg compilation fails on AMD64 with:
libavcodec/svq3.c: In function 'svq3_decode_slice_header':
libavcodec/svq3.c:721: warning: cast discards qualifiers from pointer target type
libavcodec/svq3.c:724: warning: cast discards qualifiers from pointer target type
libavcodec/svq3.c: In function 'svq3_decode_init':
libavcodec/svq3.c:870: warning: dereferencing type-punned pointer will break strict-aliasing rules
/tmp/ccSySbTo.s: Assembler messages:
/tmp/ccSySbTo.s:10644: Error: suffix or operands invalid for `add'
/tmp/ccSySbTo.s:10656: Error: suffix or operands invalid for `add'
/tmp/ccSySbTo.s:12294: Error: suffix or operands invalid for `add'
/tmp/ccSySbTo.s:12306: Error: suffix or operands invalid for `add'
make: *** [libavcodec/h264.o] Error 1
This post explains that this is related to a glibc issue and how to patch it.

ffmpeg+x264: ffmpeg compilation fails with:
libavcodec/libx264.c: In function 'encode_nals':
libavcodec/libx264.c:60: warning: implicit declaration of function 'x264_nal_encode'
libavcodec/libx264.c: In function 'X264_init':
libavcodec/libx264.c:169: error: 'x264_param_t' has no member named 'b_bframe_pyramid'
make: *** [libavcodec/libx264.o] Error 1
This means you are using incompatible ffmpeg and libx264 versions. Try to upgrade ffmpeg or to downgrade libx264.

ffmpeg+video4linux:
/usr/include/linux/videodev.h:55: error: syntax error before "ulong"
/usr/include/linux/videodev.h:71: error: syntax error before '}' token
Workaround:
--- configure.ac.080605 2005-06-08 21:56:04.000000000 +1200
+++ configure.ac        2005-06-08 21:56:42.000000000 +1200
@@ -1226,6 +1226,7 @@
 AC_CHECK_HEADERS(linux/videodev.h,,,
 [#ifdef HAVE_SYS_TIME_H
 #include <sys/time.h>
+#include <sys/types.h>
 #endif
 #ifdef HAVE_ASM_TYPES_H
 #include <asm/types.h>
(see http://www.winehq.org/pipermail/wine-devel/2005-June/037400.html)
Or as a workaround configure with: --disable-demuxer=v4l --disable-muxer=v4l --disable-demuxer=v4l2 --disable-muxer=v4l2

ffmpeg+old make:
make: *** No rule to make target `libavdevice/libavdevice.so', needed by `all'.  Stop.
Problem: GNU make is too old, you need at least v3.81 (see http://www.mail-archive.com/[email protected]/msg01284.html).
make: *** No rule to make target `install-libs', needed by `install'.  Stop.
Problem: GNU make is too old, you need at least v3.81 (see http://ffmpeg.arrozcru.org/forum/viewtopic.php?f=1&t=833).

MPlayer+old make:
make: expand.c:489: allocated_variable_append: Assertion `current_variable_set_list->next != 0' failed.
Problem: GNU make is too old, you need at least v3.81.

MPlayer:
i386/dsputil_mmx.o i386/dsputil_mmx.c
i386/dsputil_mmx.c: In function `transpose4x4':
i386/dsputil_mmx.c:621: error: can't find a register in class `GENERAL_REGS' while reloading `asm'
Workaround: Add the following to your configure call: --extra-cflags="-O3 -fomit-frame-pointer"

Note: if this somehow helped you and you know something to be added, feel free to post a comment!

Frame exact splitting with ffmpeg

When preparing videos for Apple's HTTP streaming for iPad/iPhone you need to split your video into 10s chunks and provide a playlist for QuickTime to process. The problem lies with frame-exact splitting of arbitrary video input material. Whether you split the file using ffmpeg or the Apple segmenter tool, you often end up with
  • asynchronous audio in some or all segments
  • missing video frames at the start of each segment
  • audio glitches between two segments
  • missing audio+video between otherwise audio-synchronous consecutive segments
When using the Apple segmenter the only safe way to split files is to convert them into an intermediate format which allows frame-exact splitting. As the segmenter only supports transport streams, only MPEG-2 TS and MPEG-4 TS make sense. To allow frame-exact splitting on problematic input files the easiest way is to blow them up to consist only of I-frames. The parameter for this depends on the output video codec. An ffmpeg command line for MPEG-2 TS can look like this:
ffmpeg -i inputfile -vcodec mpeg2video -pix_fmt yuv422p -qscale 1 -qmin 1 -intra outputfile
The relevant piece is the "-intra" switch. For MPEG-4 TS something like the following should work:
ffmpeg -i inputfile -vcodec libx264 -vpre slow -vpre baseline -acodec libfaac -ab 128k -ar 44100 -intra -b 2000k -minrate 2000k -maxrate 2000k outputfile
Note: It is important to watch the resulting muxing overhead, which might lower the effective bitrate a lot! The resulting output files should be safe to be passed to the Apple segmenter.
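To verify that the intermediate file really consists of I-frames only, a quick ffprobe check like the following should work (assuming a reasonably recent ffprobe; outputfile.ts is just a placeholder name):
ffprobe -show_frames -select_streams v outputfile.ts 2>/dev/null | grep pict_type | sort | uniq -c   # outputfile.ts is a placeholder
If everything is intra-coded the output should only list pict_type=I.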