TheOldReader Sync Implemented

I originally intended to make AOL Reader work, but failed due to the undocumented and probably not yet published authentication API. I'll ask AOL about it, but if anyone knows more please drop me a mail.

As I could not work on AOL Reader, I chose to work on TheOldReader (http://theoldreader.com) instead, which I found very pleasant and stable. The API is close to Google Reader, with the limitation that it sometimes only has JSON support and no XML. As we already use JSON for TinyTinyRSS syncing I could build on existing code.

So git master now has working experimental TheOldReader support!

This will be part of the upcoming initial 1.10 release.

So Liferea now has the following online synchronization support:

Name            Status                                     Import Google Reader
Google Reader   Deprecated                                 -
TinyTinyRSS     Implemented, Stable                        No
TheOldReader    Implemented, Experimental                  No
AOL Reader      Planned, API not complete                  Yes
Digg Reader     Planned, API not public                    Yes
NewsBlur        Not Planned, API too different             Yes
Feedly          I registered Liferea as interested client  Yes

In the long term I'd like to support the new Google Reader clones to give Liferea users some choice, along with TinyTinyRSS as the self-hosting option.

Detecting a Dark Theme in GTK

When implementing an improved unread news counter rendering for Liferea I found it necessary to detect whether a light or a dark GTK theme is active. The reason for this was that I didn't want to use the foreground and background colors, which are often black and something very bright. So instead, from the GtkStyle struct, which looks like this

typedef struct {
  GdkColor fg[5];
  GdkColor bg[5];
  GdkColor light[5];
  GdkColor dark[5];
  GdkColor mid[5];
  GdkColor text[5];
  GdkColor base[5];
  GdkColor text_aa[5];          /* Halfway between text/base */
[...]

I decided to use the "dark" and "bg" colors, with "dark" for the background and "bg" for the number text. For a standard light theme this mostly results in a white number on a shaded background. This is how it looks (e.g. the number "4" behind the feed "Boing Boing"):

Inverse Colors For Dark Theme Are Hard!

The problem arises when you use for example the "High Contrast Inverse" dark theme. Then "dark" suddenly is indistinguishable from "bg", which of course makes sense. So we need to choose different colors for dark themes. For those the implementation uses "bg" as foreground and "light" as background.

How to Detect Dark Themes

To do the color switching I first googled for an official GTK solution but found none. If you know of one please let me know! In the meantime I implemented the following simple logic:

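	/* Compare the brightness of the "text" and "bg" colors: if the text
	   color is brighter than the background we assume a dark theme */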
	gint		textAvg, bgAvg;

	textAvg = style->text[GTK_STATE_NORMAL].red / 256 +
	        style->text[GTK_STATE_NORMAL].green / 256 +
	        style->text[GTK_STATE_NORMAL].blue / 256;

	bgAvg = style->bg[GTK_STATE_NORMAL].red / 256 +
	        style->bg[GTK_STATE_NORMAL].green / 256 +
	        style->bg[GTK_STATE_NORMAL].blue / 256;

	if (textAvg > bgAvg)
		darkTheme = TRUE;

As "text" color and "background" color should always be contrasting colors the comparison of the sum of their RGB components should produce a useful result. If the theme is a colorful one (e.g. a very saturated red theme) it might sometimes cause the opposite result than intended, but still background and foreground will be contrasting enough that the results stays readable, only the number background will not contrast well to the widget background.

For light or dark themes the comparison should always work well and produce optimal contrast. Now it is up to the Liferea users to decide whether they like it or not.

Most Important Redis Commands for Sysadmins

When you encounter a Redis instance and quickly want to learn about the setup, you just need a few simple commands to peek into it. Of course it doesn't hurt to look at the official full command documentation, but below is a listing just for sysadmins.

Accessing Redis

First thing to know is that you can use "telnet" (usually on the default port 6379)

telnet localhost 6379

or the Redis CLI client

redis-cli

to connect to Redis. The advantage of redis-cli is that you have a help interface and command line history.
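
If the instance is not running on localhost with the default port, you can for example pass host and port explicitly and verify the connection with "PING" (the IP below is just a placeholder):

$ redis-cli -h 192.168.1.10 -p 6379 PING
PONG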

Scripting Redis Commands

For scripting just pass commands to "redis-cli". For example:

$ redis-cli INFO | grep connected
connected_clients:2
connected_slaves:0
$

Server Statistics

The statistics command is "INFO" and will give you output like the following:

$ redis-cli INFO
redis_version:2.2.12
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:8353
uptime_in_seconds:2592232
uptime_in_days:30
lru_clock:809325
used_cpu_sys:199.20
used_cpu_user:309.26
used_cpu_sys_children:12.04
used_cpu_user_children:1.47
connected_clients:2
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:6596112
used_memory_human:6.29M
used_memory_rss:17571840
mem_fragmentation_ratio:2.66
use_tcmalloc:0
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1371241671
bgrewriteaof_in_progress:0
total_connections_received:118
total_commands_processed:1091
expired_keys:441
evicted_keys:0
keyspace_hits:6
keyspace_misses:1070
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=91,expires=88

Changing Runtime Configuration

The command

CONFIG GET *

gives you a list of all active configuration variables you can change. The output might look like this:

redis 127.0.0.1:6379> CONFIG GET *
 1) "dir"
 2) "/var/lib/redis"
 3) "dbfilename"
 4) "dump.rdb"
 5) "requirepass"
 6) (nil)
 7) "masterauth"
 8) (nil)
 9) "maxmemory"
10) "0"
11) "maxmemory-policy"
12) "volatile-lru"
13) "maxmemory-samples"
14) "3"
15) "timeout"
16) "300"
17) "appendonly"
18) "no"
19) "no-appendfsync-on-rewrite"
20) "no"
21) "appendfsync"
22) "everysec"
23) "save"
24) "900 1 300 10 60 10000"
25) "slave-serve-stale-data"
26) "yes"
27) "hash-max-zipmap-entries"
28) "512"
29) "hash-max-zipmap-value"
30) "64"
31) "list-max-ziplist-entries"
32) "512"
33) "list-max-ziplist-value"
34) "64"
35) "set-max-intset-entries"
36) "512"
37) "slowlog-log-slower-than"
38) "10000"
39) "slowlog-max-len"
40) "64"

Note that keys and values are alternating and you can change each key by issuing a "CONFIG SET" command like:

CONFIG SET timeout 900

Such a change will be effective instantly. When changing values consider also updating the redis configuration file.
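
For example, to make the timeout change from above survive a restart you would also set it in the configuration file (the path depends on your distribution, e.g. /etc/redis/redis.conf on Debian/Ubuntu):

# e.g. in /etc/redis/redis.conf
timeout 900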

Multiple Databases

Redis has a concept of separated namespaces called "databases". You can select the database number you want to use with "SELECT". By default the database with index 0 is used. So issuing

redis 127.0.0.1:6379> SELECT 1
OK
redis 127.0.0.1:6379[1]>

switches to the second database. Note how the prompt changed and now has a "[1]" to indicate the database selection.

To find out how many databases there are you might want to run redis-cli from the shell:

$ redis-cli INFO | grep ^db
db0:keys=91,expires=88
db1:keys=1,expires=0

Dropping Databases

To drop the currently selected database run

FLUSHDB

To drop all databases at once run

FLUSHALL

Checking for Replication

To see if the instance is a replication slave or master issue

redis 127.0.0.1:6379> INFO
[...]
role:master

and watch for the "role" line which shows either "master" or "slave".

Starting with version 2.8 the "INFO" command also gives you a per-slave replication status looking like this:

slave0:ip=127.0.0.1,port=6380,state=online,offset=281,lag=0

Enabling Replication

If you quickly need to set up replication just issue

SLAVEOF <IP> <port>

on the machine that you want to become a slave of the given IP. It will immediately start fetching values from the master. Note that this instance will still be writable. If you want it to be read-only, change the Redis config file (only possible in recent versions, e.g. not in the Redis shipped with Debian).
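
If your Redis is recent enough, the directive to look for is "slave-read-only" (introduced around Redis 2.6, so verify it exists in your version):

# in redis.conf on the slave
slave-read-only yes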

To revert the slave setting run

SLAVEOF NO ONE

Dump Database Backup

As Redis allows RDB database dumps in background, you can issue a dump at any time. Just run:

BGSAVE

When running this command Redis will fork and the new process will dump into the "dbfilename" configured in the Redis configuration without the original process being blocked. Of course the fork itself might cause an interruption.

Use "LASTSAVE" to check when the dump file was last updated. For a simple backup solution just backup the dump file.

If you need a synchronous save run "SAVE" instead of "BGSAVE".
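
Building on this, a minimal scripted backup could look like the following sketch; the dump directory and file name are assumptions, verify them with "CONFIG GET dir" and "CONFIG GET dbfilename" first:

#!/bin/bash
# Trigger a background dump, wait for it to finish, then copy the dump away
last=$(redis-cli LASTSAVE)
redis-cli BGSAVE
while [ "$(redis-cli LASTSAVE)" = "$last" ]; do
	sleep 1
done
cp /var/lib/redis/dump.rdb "/backup/dump-$(date +%F).rdb"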

Listing Connections

Starting with version 2.4 you can list connections with

CLIENT LIST

and you can terminate connections with

CLIENT KILL <IP>:<port>

Monitoring Traffic

Probably the most useful command, compared to memcached where you need to trace network traffic, is "MONITOR", which dumps incoming commands in real time.

redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"

Checking for Keys

If you want to know if an instance has a key or keys matching some pattern use "KEYS" instead of "GET" to get an overview.

redis 127.0.0.1:6379> KEYS test*
1) "testkey2"
2) "testkey3"
3) "testkey"

On production servers use "KEYS" with care: you cannot limit it and it will always cause a full scan of all keys!

Nagios Check Plugin for "nofile" Limit

Following the recent post on how to investigate limit related issues, which gave instructions on what to check if you suspect a system limit to be hit, I want to share this Nagios check for the open file descriptor limit. Note that existing Nagios plugins like this one only check the global limit, only check one application, or do not output all problems. So here is my solution, which does the following:

  1. Check the global file descriptor limit
  2. Check all processes' "nofile" hard limits using lsof

It has two simple parameters -w and -c to specify percentage thresholds. An example call:

./check_nofile_limit.sh -w 70 -c 85

could result in the following output indicating two problematic processes:

WARNING memcached (PID 2398) 75% of 1024 used
CRITICAL apache (PID 2392) 94% of 4096 used

Here is the check script doing this:

#!/bin/bash

# Check "nofile" limit for all running processes using lsof

MIN_COUNT=0	# default "nofile" limit is usually 1024, so no checking for 
		# processes with much less open fds needed

WARN_THRESHOLD=80	# default warning:  80% of file limit used
CRITICAL_THRESHOLD=90	# default critical: 90% of file limit used

while getopts "hw:c:" option; do
	case $option in
		w) WARN_THRESHOLD=$OPTARG;;
		c) CRITICAL_THRESHOLD=$OPTARG;;	
		h) echo "Syntax: $0 [-w <warning percentage>] [-c <critical percentage>]"; exit 1;;
	esac
done

results=$(
# Check global limit
global_max=$(cat /proc/sys/fs/file-nr 2>&1 |cut -f 3)
global_cur=$(cat /proc/sys/fs/file-nr 2>&1 |cut -f 1)
ratio=$(( $global_cur * 100 / $global_max))

if [ $ratio -ge $CRITICAL_THRESHOLD ]; then
	echo "CRITICAL global file usage $ratio% of $global_max used"
elif [ $ratio -ge $WARN_THRESHOLD ]; then
	echo "WARNING global file usage $ratio% of $global_max used"
fi

# We use the following lsof options:
#
# -n 	to avoid resolving network names
# -b	to avoid kernel locks
# -w	to avoid warnings caused by -b
# +c15	to get somewhat longer process names
#
lsof -wbn +c15 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c |\
while read count name pid remainder; do
	# Never check anything above a sane minimum
	if [ $count -gt $MIN_COUNT ]; then
		# Extract the hard limit from /proc
		limit=$(cat /proc/$pid/limits 2>/dev/null| grep 'open files' | awk '{print $5}')

		# Check if we got something, if not the process must have terminated
		if [ "$limit" != "" ]; then
			ratio=$(( $count * 100 / $limit ))
			if [ $ratio -ge $CRITICAL_THRESHOLD ]; then
				echo "CRITICAL $name (PID $pid) $ratio% of $limit used"
			elif [ $ratio -ge $WARN_THRESHOLD ]; then
				echo "WARNING $name (PID $pid) $ratio% of $limit used"
			fi
		fi
	fi
done
)

if echo $results | grep CRITICAL; then
	exit 2
fi
if echo $results | grep WARNING; then
	exit 1
fi

echo "All processes are fine."

Use the script with caution! At the moment it has no protection against a hanging lsof. So the script might mess up your system if it hangs for some reason. If you have ideas how to improve it please share them in the comments!
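
As a quick safeguard you could wrap the plugin call (e.g. in your Nagios or NRPE command definition) with the coreutils "timeout" command, which kills the check if it runs too long:

# abort the check after 30 seconds (timeout then exits with code 124)
timeout 30 ./check_nofile_limit.sh -w 70 -c 85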

The Debian/Ubuntu ulimit Check List

This is a check list of all you can do wrong when trying to set limits on Debian/Ubuntu. The hints might apply to other distros too, but I didn't check. If you have additional suggestions please leave a comment!

Always Check Effective Limit

The best way to check the effective limits of a process is to dump

/proc/<pid>/limits

which gives you a table like this

Limit                     Soft Limit   Hard Limit   Units
Max cpu time              unlimited    unlimited    seconds
Max file size             unlimited    unlimited    bytes
Max data size             unlimited    unlimited    bytes
Max stack size            10485760     unlimited    bytes
Max core file size        0            unlimited    bytes
Max resident set          unlimited    unlimited    bytes
Max processes             528384       528384       processes
Max open files            1024         1024         files
Max locked memory         32768        32768        bytes
Max address space         unlimited    unlimited    bytes
Max file locks            unlimited    unlimited    locks
Max pending signals       528384       528384       signals
Max msgqueue size         819200       819200       bytes
Max nice priority         0            0
Max realtime priority     0            0

Running "ulimit -a" in the shell of the respective user rarely tells something because the init daemon responsible for launching services might be ignoring /etc/security/limits.conf as this is a configuration file for PAM only and is applied on login only per default.

Do Not Forget The OS File Limit

If you suspect a limit hit on a system with many processes also check the global limit:

$ cat /proc/sys/fs/file-nr
7488	0	384224
$

The first number is the number of all open files of all processes, the third is the maximum. If you need to increase the maximum:

# sysctl -w fs.file-max=500000

Make sure to also persist this in /etc/sysctl.conf so the setting is not lost on reboot.
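
For example by adding a line like this to /etc/sysctl.conf (or a file under /etc/sysctl.d/):

fs.file-max = 500000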

Check "nofile" Per Process

Just checking the number of open files per process often helps to identify bottlenecks. For every process you can count the open files using lsof:

lsof -n -p <pid> | wc -l

So a quick check on a burning system might be:

lsof -n 2>/dev/null | awk '{print $1 " (PID " $2 ")"}' | sort | uniq -c | sort -nr | head -25

which returns the top 25 file-descriptor-consuming processes:

 139 mysqld (PID 2046)
 105 httpd2-pr (PID 25956)
 105 httpd2-pr (PID 24384)
 105 httpd2-pr (PID 24377)
 105 httpd2-pr (PID 24301)
 105 httpd2-pr (PID 24294)
 105 httpd2-pr (PID 24239)
 105 httpd2-pr (PID 24120)
 105 httpd2-pr (PID 24029)
 105 httpd2-pr (PID 23714)
 104 httpd2-pr (PID 3206)
 104 httpd2-pr (PID 26176)
 104 httpd2-pr (PID 26175)
 104 httpd2-pr (PID 26174)
 104 httpd2-pr (PID 25957)
 104 httpd2-pr (PID 24378)
 102 httpd2-pr (PID 32435)
 53 sshd (PID 25607)
 49 sshd (PID 25601)

The same, more conveniently, including the hard limit:

lsof -n 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c | sort -nr | head -25 | while read nr name pid ; do printf "%10d / %-10d %-15s (PID %5s)\n" $nr $(cat /proc/$pid/limits | grep 'open files' | awk '{print $5}') $name $pid; done

returns

 105 / 1024 httpd2-pr (PID 5368)
 105 / 1024 httpd2-pr (PID 3834)
 105 / 1024 httpd2-pr (PID 3407)
 104 / 1024 httpd2-pr (PID 5392)
 104 / 1024 httpd2-pr (PID 5378)
 104 / 1024 httpd2-pr (PID 5377)
 104 / 1024 httpd2-pr (PID 4035)
 104 / 1024 httpd2-pr (PID 4034)
 104 / 1024 httpd2-pr (PID 3999)
 104 / 1024 httpd2-pr (PID 3902)
 104 / 1024 httpd2-pr (PID 3859)
 104 / 1024 httpd2-pr (PID 3206)
 102 / 1024 httpd2-pr (PID 32435)
 55 / 1024 mysqld (PID 2046)
 53 / 1024 sshd (PID 25607)
 49 / 1024 sshd (PID 25601)
 46 / 1024 dovecot-a (PID 1869)
 42 / 1024 python (PID 1850)
 41 / 1048576 named (PID 3130)
 40 / 1024 su (PID 25855)
 40 / 1024 sendmail (PID 3172)
 40 / 1024 dovecot-a (PID 14057)
 35 / 1024 sshd (PID 3160)
 34 / 1024 saslauthd (PID 3156)
 34 / 1024 saslauthd (PID 3146)

Upstart doesn't care about limits.conf!

The most common mistake is believing that upstart behaves like the Debian init script handling. When a service on Ubuntu is started by upstart, /etc/security/limits.conf never applies! To get upstart to change the limits of a managed service you need to insert a line like

limit nofile 10000 20000

into the upstart job file in /etc/init.
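
For illustration, a complete upstart job with such a limit stanza might look like this; the job name and command are just placeholders:

# /etc/init/myservice.conf
description "example service with a raised nofile limit"
start on runlevel [2345]

limit nofile 10000 20000

exec /usr/bin/myservice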

When Changing /etc/security/limits.conf Re-Login!

After you apply a change to /etc/security/limits.conf you need to log in again to have PAM apply the change to your next shell instance. Alternatively you can use sudo -i to switch to the user whose limits you modified and thereby simulate a login.

Special Debian Apache Handling

The Debian Apache package, which is also included in Ubuntu, has a separate way of configuring "nofile" limits. If you run the default Apache in 12.04 and check /proc/<pid>/limits of an Apache process you'll find that it allows 8192 open file handles, no matter what you configured elsewhere.

This is because Apache defaults to 8192 files. If you want another setting for "nofile" then you need to edit /etc/apache2/envvars.

For Emergencies: prlimit!

Starting with util-linux-2.21 there will be a new "prlimit" tool which allows you to easily get/set limits for running processes. Sadly Debian is and will be for some time on util-linux-2.20. So what do we do in the meantime?

The prlimit(2) manpage which is for the system call prlimit() gives a hint: at the end of the page there is a code snippet to change the CPU time limit. You can adapt it to any limit you want by replacing RLIMIT_CPU with any of

  • RLIMIT_NOFILE
  • RLIMIT_OFILE
  • RLIMIT_AS
  • RLIMIT_NPROC
  • RLIMIT_MEMLOCK
  • RLIMIT_LOCKS
  • RLIMIT_SIGPENDING
  • RLIMIT_MSGQUEUE
  • RLIMIT_NICE
  • RLIMIT_RTPRIO
  • RLIMIT_RTTIME
  • RLIMIT_NLIMITS

You might want to check "/usr/include/$(uname -i)-linux-gnu/bits/resource.h". See the next section for a ready-made example for "nofile".

Build Your Own set_nofile_limit

The per-process limit most often hit is probably "nofile". Imagine your production database suddenly running out of file handles. Imagine a tool that can instantly fix it without restarting the DB!

Copy the following code to a file "set_limit_nofile.c"

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);        /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set the new "nofile" limit (if given) and fetch the previous one */
    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
           (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Read back the now effective limits */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
           (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);
}

and compile it with

gcc -o set_limit_nofile set_limit_nofile.c

And now you have a tool to change any process's "nofile" limit. To dump the limits just pass a PID:

$ ./set_limit_nofile 17006
Previous limits: soft=1024; hard=1024
New limits: soft=1024; hard=1024
$

To change limits pass PID and two limits:

# ./set_limit_nofile 17006 1500 1500
Previous limits: soft=1024; hard=1024
New limits: soft=1500; hard=1500
# 

And the production database is saved.

Ubuntu + Apache + ulimit -n

Setting ulimits on Ubuntu is strange enough, as upstart ignores /etc/security/limits.conf; you need to use the "limit" stanza to change any limit.

But try it with Apache and it won't help, as Debian invented yet another way to ensure Apache ulimits can be changed. So you always need to check

/etc/apache2/envvars

which in a fresh installation contains a line

#APACHE_ULIMIT_MAX_FILES="ulimit -n 65535"

Uncomment it to set any max file limit you want. Restart Apache and verify the process limit in /proc/<pid>/limits which should give you something like

$ egrep "^Limit|Max open files" /proc/3643/limits
Limit                     Soft Limit   Hard Limit   Units
Max open files            1024         4096         files
$

Solving chef-client Errors

Problem:

merb : chef-server (api) : worker (port 4000) ~ Connection refused - connect(2) - (Errno::ECONNREFUSED)

Solution: Check why solr is not running and start it

/etc/init.d/chef-solr start

Problem:

merb : chef-server (api) : worker (port 4000) ~ Net::HTTPFatalError: 503 "Service Unavailable" - (Chef::Exceptions::SolrConnectionError)

Solution: You need to check the solr log for errors. You can find

  • the access log in /var/log/chef/2013_03_01.jetty.log (adapt the date)
  • the solr error log in /var/log/chef/solr.log

Hopefully you find an error trace there.


Problem:

# chef-expander -n 1
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- http11_client (LoadError)
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/em-http.rb:8:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/em-http-request.rb:1:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/solrizer.rb:24:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/vnode.rb:26:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/vnode_supervisor.rb:28:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/cluster_supervisor.rb:25:in `'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/bin/chef-expander:27:in `'

Solution: This is a gem dependency issue with the HTTP client gem. Read about it here: http://tickets.opscode.com/browse/CHEF-3495. You might want to check the active Ruby version on your system, e.g. on Debian run

update-alternatives --config ruby

to find out and change it. Note that the emhttp package from Opscode might require a special version. You can check by listing the package files:

dpkg -L libem-http-request-ruby
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/libem-http-request-ruby
/usr/share/doc/libem-http-request-ruby/changelog.Debian.gz
/usr/share/doc/libem-http-request-ruby/copyright
/usr/lib
/usr/lib/ruby
/usr/lib/ruby/vendor_ruby
/usr/lib/ruby/vendor_ruby/em-http.rb
/usr/lib/ruby/vendor_ruby/em-http-request.rb
/usr/lib/ruby/vendor_ruby/em-http
/usr/lib/ruby/vendor_ruby/em-http/http_options.rb
/usr/lib/ruby/vendor_ruby/em-http/http_header.rb
/usr/lib/ruby/vendor_ruby/em-http/client.rb
/usr/lib/ruby/vendor_ruby/em-http/http_encoding.rb
/usr/lib/ruby/vendor_ruby/em-http/multi.rb
/usr/lib/ruby/vendor_ruby/em-http/core_ext
/usr/lib/ruby/vendor_ruby/em-http/core_ext/bytesize.rb
/usr/lib/ruby/vendor_ruby/em-http/mock.rb
/usr/lib/ruby/vendor_ruby/em-http/decoders.rb
/usr/lib/ruby/vendor_ruby/em-http/version.rb
/usr/lib/ruby/vendor_ruby/em-http/request.rb
/usr/lib/ruby/vendor_ruby/1.8
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/em_buffer.so
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/http11_client.so

The listing above for example indicates ruby1.8.

How to decrease the Munin log level

Munin tends to spam the log files it writes (munin-update.log, munin-graph.log...) with many lines at INFO log level. It also doesn't respect syslog log levels as it uses Log4perl.

Unlike the munin-node configuration (munin-node.conf), there is no "log_level" setting in munin.conf, at least in the versions I'm using right now.

So let's fix the code. In /usr/share/munin/munin-<update|graph> find the following lines:

logger_open($config->{'logdir'});
logger_debug() if $config->{debug} or defined($ENV{CGI_DEBUG});

and change them to

logger_open($config->{'logdir'});
logger_debug() if $config->{debug} or defined($ENV{CGI_DEBUG});
logger_level("warn"); 

As parameter to logger_level() you can provide "debug", "info", "warn", "error" or "fatal" (see the manpage of "Munin::Master::Logger").

And finally: silent logs!

PHP: How to correctly use strpos()

Here is a short summary of what you can do wrong when using strpos() and which other matching functions exist in PHP.

Important: Use the Correct Comparison Operator!

Most PHP beginners fall for this:

WRONG:

if (strpos($str, $substr) != false) {
if (strpos($str, $substr) == false) {

CORRECT:

if (strpos($str, $substr) === false) {
if (strpos($str, $substr) !== false) {

The reason for using the identical operators "===" or "!==" (which also check the type of the value) is that strpos() might return 0 for a match at the beginning of the string or false for no match, and those cannot be distinguished by "==" or "!=".

Simple rule: with the strpos() family of functions always compare against "false" using the identical operators.

Case Sensitivity

It is simple:

strpos() case sensitive searching
stripos() case insensitive searching

Search From Offset

If you want to continue a search or skip something at the start, pass an offset as the third parameter:

if (strpos($str, $substr, 5) === false) {

By using the return value of strpos() you can do repeated searches:

$pos1 = strpos($str, $substr);
$pos2 = strpos($str, $substr, $pos1 + strlen($substr));

Search Backwards

There is an additional set of methods for backwards searching:

strrpos() case sensitive backwards searching
strripos() case insensitive backwards searching

Extract Found Strings

If you want to extract the string at the position you've found you need to use the substr() function.

An example:

$pos = strpos($str, $substr);
$result = substr($str, $pos);

When not to use strpos()?

Do not use strpos() if you want to do the following things:

  1. Test for a pattern at the start or end of a string -> use preg_match()
  2. Extract multiple substrings -> use preg_match()
  3. Match a string for a complex pattern -> use preg_match()
  4. Split a string at delimiters -> use strpbrk()

About Performance

If you are wondering how fast strpos() is compared to using regular expressions check this post: strpos() vs preg_match()

Adding Timestamps to Legacy Script Output

Imagine a legacy script. Very long, complex and business critical. No time for a rewrite and no fixed requirements. But you want to have timestamps added to the output it produces.

Search and Replace

One way to do this is to find each and every echo and replace it.

echo "Doing X with $file."

becomes

log "Doing X with $file."

and you implement log() as a function that prefixes the timestamp. The danger here is accidentally replacing an echo whose output needs to be redirected somewhere else.
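
A minimal log() implementation for this could look like:

log() {
	echo "$(date '+%Y-%m-%d %H:%M:%S') $*"
}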

Alternative: Wrap in a Subshell

Instead of modifying all echo calls one could do the following and just "wrap" the whole script:

#!/bin/bash

(

<the original script body>

) | stdbuf -i0 -o0 -e0 awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'

You can drop the stdbuf invocation, which unbuffers the pipe, if you do not need 100% exact timestamps. Of course you can also use this with any slow running command on the shell, no matter how complex; just add the pipe command.

The obvious advantage: you do not touch the legacy code and can be 100% sure the script is still working after adding the timestamps.
