
Liferea 1.10-RC4 Released

There is a new release candidate for 1.10 with several contributed bug
fixes. It also introduces a feature to convert Google Reader
subscriptions to local feeds, allowing everyone to keep their item history.

Read more about it in the previous blog post!

The Changes:

* Added an option to convert Google Reader subscriptions
  to local feeds (Lars Windolf)
* Fixes SF #1080: segfault opening attachment due to incorrect g_free()
  (reported by Adam Nielsen)
* Fixes SF #1075: GLib warnings of "string != NULL" assertion failure
  (reported by Simon Kågedal Reimer)
* Fixes missing shading in 2-pane mode rendering
  (reported by Zoho Vignochi)
* Fixes search folders including comment items
  (reported by David Willmore)

A corresponding maintenance release for 1.8 will follow!

Save Your Google Reader Data with Liferea!

If you were using Google Reader synchronization with Liferea, do not delete your subscription! Instead convert it to local subscriptions and keep all your item history.

To do so choose the option "Convert To Local Subscriptions" from the feed list context menu:

Screenshot: converting a Google Reader subscription

I hope this feature helps everyone who cares about the item history of their synchronized Google Reader subscriptions.

Two Important Notes

  1. This feature will be included starting with releases 1.8.15 and 1.10-RC4. Don't worry: you can migrate all data even after Google Reader shuts down, as all data is kept locally in the Liferea sqlite database. Just do not delete the Google Reader subscription in the meantime.
  2. While the conversion is quite simple, it is not perfect. There will be some item duplication afterwards because the item id matching is not consistent: Google Reader item ids do not match the original item ids, so Liferea will produce duplicates for the items overlapping at the moment of conversion. Please just mark them as read!

Liferea 1.8.14 and 1.10-RC3 Released

Following the previous releases, these new stable and unstable releases fix a problem with the TinyTinyRSS support which caused broken headline rendering.

Please upgrade if you use TinyTinyRSS!

Bug: Endless TinyTinyRSS Updates

If you experience never-ending TinyTinyRSS feed updates or high load when using TinyTinyRSS subscriptions, please upgrade to the newest releases 1.8.13 and 1.10-RC2, which solve this bug.

Nagios Check Plugin for "nofile" Limit

Following the recent post on how to investigate limit-related issues, which gave instructions on what to check if you suspect a system limit is being hit, I want to share this Nagios check covering the open file descriptor limit. Note that existing Nagios plugins like this one either check only the global limit, check only a single application, or do not report all problems. So here is my solution, which:

  1. Checks the global file descriptor limit
  2. Uses lsof to check all processes against their "nofile" hard limit

It has two simple parameters, -w and -c, to specify the warning and critical percentage thresholds. An example call:

./check_nofile_limit.sh -w 70 -c 85

could result in the following output indicating two problematic processes:

WARNING memcached (PID 2398) 75% of 1024 used
CRITICAL apache (PID 2392) 94% of 4096 used

Here is the check script doing this:

#!/bin/bash

# Check "nofile" limit for all running processes using lsof

MIN_COUNT=0	# processes with this many open fds or fewer are skipped;
		# raise it to reduce noise, as the default "nofile" limit is
		# usually 1024 anyway

WARN_THRESHOLD=80	# default warning:  80% of file limit used
CRITICAL_THRESHOLD=90	# default critical: 90% of file limit used

while getopts "hw:c:" option; do
	case $option in
		w) WARN_THRESHOLD=$OPTARG;;
		c) CRITICAL_THRESHOLD=$OPTARG;;	
		h) echo "Syntax: $0 [-w <warning percentage>] [-c <critical percentage>]"; exit 1;;
	esac
done

results=$(
# Check global limit
global_max=$(cat /proc/sys/fs/file-nr 2>/dev/null | cut -f 3)
global_cur=$(cat /proc/sys/fs/file-nr 2>/dev/null | cut -f 1)
ratio=$(( $global_cur * 100 / $global_max))

if [ $ratio -ge $CRITICAL_THRESHOLD ]; then
	echo "CRITICAL global file usage $ratio% of $global_max used"
elif [ $ratio -ge $WARN_THRESHOLD ]; then
	echo "WARNING global file usage $ratio% of $global_max used"
fi

# We use the following lsof options:
#
# -n 	to avoid resolving network names
# -b	to avoid kernel locks
# -w	to avoid warnings caused by -b
# +c15	to get somewhat longer process names
#
lsof -wbn +c15 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c |\
while read count name pid remainder; do
	# Only check processes above the configured minimum fd count
	if [ $count -gt $MIN_COUNT ]; then
		# Extract the hard limit from /proc
		limit=$(cat /proc/$pid/limits 2>/dev/null| grep 'open files' | awk '{print $5}')

		# Check if we got something, if not the process must have terminated
		if [ "$limit" != "" ]; then
			ratio=$(( $count * 100 / $limit ))
			if [ $ratio -ge $CRITICAL_THRESHOLD ]; then
				echo "CRITICAL $name (PID $pid) $ratio% of $limit used"
			elif [ $ratio -ge $WARN_THRESHOLD ]; then
				echo "WARNING $name (PID $pid) $ratio% of $limit used"
			fi
		fi
	fi
done
)

if echo "$results" | grep CRITICAL; then
	exit 2
fi
if echo "$results" | grep WARNING; then
	exit 1
fi

echo "All processes are fine."

Use the script with caution! At the moment it has no protection against a hanging lsof, so stuck check runs might pile up and put load on your system if lsof hangs for some reason. If you have ideas on how to improve it please share them in the comments!
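
One possible safeguard, assuming the coreutils timeout command is available on your system, is to give the lsof call a deadline so a hanging lsof aborts the check instead of blocking forever:

# instead of calling lsof directly, give it a 30 second deadline
# (timeout is part of GNU coreutils; exit status 124 means it was killed)
timeout 30 lsof -wbn +c15 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c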

The Debian/Ubuntu ulimit Check List

This is a checklist of everything you can do wrong when trying to set limits on Debian/Ubuntu. The hints might apply to other distributions too, but I didn't check. If you have additional suggestions please leave a comment!

Always Check Effective Limit

The best way to check the effective limits of a process is to dump

/proc/<pid>/limits

which gives you a table like this:

Limit                     Soft Limit  Hard Limit  Units
Max cpu time              unlimited   unlimited   seconds
Max file size             unlimited   unlimited   bytes
Max data size             unlimited   unlimited   bytes
Max stack size            10485760    unlimited   bytes
Max core file size        0           unlimited   bytes
Max resident set          unlimited   unlimited   bytes
Max processes             528384      528384      processes
Max open files            1024        1024        files
Max locked memory         32768       32768       bytes
Max address space         unlimited   unlimited   bytes
Max file locks            unlimited   unlimited   locks
Max pending signals       528384      528384      signals
Max msgqueue size         819200      819200      bytes
Max nice priority         0           0
Max realtime priority     0           0

Running "ulimit -a" in the shell of the respective user rarely tells something because the init daemon responsible for launching services might be ignoring /etc/security/limits.conf as this is a configuration file for PAM only and is applied on login only per default.

Do Not Forget The OS File Limit

If you suspect a limit hit on a system with many processes also check the global limit:

$ cat /proc/sys/fs/file-nr
7488	0	384224
$

The first number is the number of all open files of all processes, the third is the maximum. If you need to increase the maximum:

# sysctl -w fs.file-max=500000

Ensure to persist this in /etc/sysctl.conf so you do not lose it on reboot.
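
For example, add a line like this to /etc/sysctl.conf:

fs.file-max = 500000

and reload the settings with:

# sysctl -p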

Check "nofile" Per Process

Just checking the number of open files per process often helps to identify bottlenecks. For every process you can count its open files using lsof:

lsof -n -p <pid> | wc -l

So a quick check on a burning system might be:

lsof -n 2>/dev/null | awk '{print $1 " (PID " $2 ")"}' | sort | uniq -c | sort -nr | head -25

which returns the top 25 file descriptor consuming processes:

 139 mysqld (PID 2046)
 105 httpd2-pr (PID 25956)
 105 httpd2-pr (PID 24384)
 105 httpd2-pr (PID 24377)
 105 httpd2-pr (PID 24301)
 105 httpd2-pr (PID 24294)
 105 httpd2-pr (PID 24239)
 105 httpd2-pr (PID 24120)
 105 httpd2-pr (PID 24029)
 105 httpd2-pr (PID 23714)
 104 httpd2-pr (PID 3206)
 104 httpd2-pr (PID 26176)
 104 httpd2-pr (PID 26175)
 104 httpd2-pr (PID 26174)
 104 httpd2-pr (PID 25957)
 104 httpd2-pr (PID 24378)
 102 httpd2-pr (PID 32435)
 53 sshd (PID 25607)
 49 sshd (PID 25601)

The same, more comfortably, including the hard limit:

lsof -n 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c | sort -nr | head -25 | while read nr name pid ; do printf "%10d / %-10d %-15s (PID %5s)\n" $nr $(cat /proc/$pid/limits | grep 'open files' | awk '{print $5}') $name $pid; done

returns

 105 / 1024 httpd2-pr (PID 5368)
 105 / 1024 httpd2-pr (PID 3834)
 105 / 1024 httpd2-pr (PID 3407)
 104 / 1024 httpd2-pr (PID 5392)
 104 / 1024 httpd2-pr (PID 5378)
 104 / 1024 httpd2-pr (PID 5377)
 104 / 1024 httpd2-pr (PID 4035)
 104 / 1024 httpd2-pr (PID 4034)
 104 / 1024 httpd2-pr (PID 3999)
 104 / 1024 httpd2-pr (PID 3902)
 104 / 1024 httpd2-pr (PID 3859)
 104 / 1024 httpd2-pr (PID 3206)
 102 / 1024 httpd2-pr (PID 32435)
 55 / 1024 mysqld (PID 2046)
 53 / 1024 sshd (PID 25607)
 49 / 1024 sshd (PID 25601)
 46 / 1024 dovecot-a (PID 1869)
 42 / 1024 python (PID 1850)
 41 / 1048576 named (PID 3130)
 40 / 1024 su (PID 25855)
 40 / 1024 sendmail (PID 3172)
 40 / 1024 dovecot-a (PID 14057)
 35 / 1024 sshd (PID 3160)
 34 / 1024 saslauthd (PID 3156)
 34 / 1024 saslauthd (PID 3146)

Upstart doesn't care about limits.conf!

The most common mistake is believing that upstart behaves like the Debian init script handling. When a service on Ubuntu is started by upstart, /etc/security/limits.conf will never apply! To get upstart to change the limits of a managed service you need to insert a line like

limit nofile 10000 20000

into the upstart job file in /etc/init.
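
A minimal sketch of what such a job file could look like (the service name and paths are made up for illustration):

# /etc/init/myservice.conf -- hypothetical example job
description "my service"

start on runlevel [2345]
stop on runlevel [016]

# soft limit 10000, hard limit 20000 for "nofile"
limit nofile 10000 20000

exec /usr/bin/myservice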

When Changing /etc/security/limits.conf Re-Login!

After you apply a change to /etc/security/limits.conf you need to log in again to have PAM apply the change to your next shell instance. Alternatively you can use sudo -i to switch to the user whose limits you modified and simulate a login.
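
For example, to verify the new "nofile" limits after editing the file (the user name is just a placeholder):

# simulate a login for the affected user, then print the limits
sudo -i -u someuser
ulimit -Sn    # effective soft limit
ulimit -Hn    # effective hard limit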

Special Debian Apache Handling

The Debian Apache package, which is also included in Ubuntu, has a separate way of configuring "nofile" limits. If you run the default Apache in 12.04 and check /proc/<pid>/limits of an Apache process, you'll find it allowing 8192 open file handles, no matter what you configured elsewhere.

This is because Apache defaults to 8192 files. If you want another setting for "nofile" you need to edit /etc/apache2/envvars.

For Emergencies: prlimit!

Starting with util-linux-2.21 there will be a new "prlimit" tool which allows you to easily get and set limits of running processes. Sadly Debian is, and will be for some time, on util-linux-2.20. So what do we do in the meantime?

The prlimit(2) manpage, which documents the prlimit() system call, gives a hint: at the end of the page there is a code snippet to change the CPU time limit of a process. You can adapt it to any limit you want by replacing RLIMIT_CPU with any of

  • RLIMIT_NOFILE
  • RLIMIT_OFILE
  • RLIMIT_AS
  • RLIMIT_NPROC
  • RLIMIT_MEMLOCK
  • RLIMIT_LOCKS
  • RLIMIT_SIGPENDING
  • RLIMIT_MSGQUEUE
  • RLIMIT_NICE
  • RLIMIT_RTPRIO
  • RLIMIT_RTTIME
  • RLIMIT_NLIMITS

You might want to check "/usr/include/$(uname -i)-linux-gnu/bits/resource.h". Check the next section for a ready-made example for "nofile".
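
To see which of these constants your system actually defines, a quick grep over that header works (using the same multiarch path as above):

grep -o "RLIMIT_[A-Z]*" "/usr/include/$(uname -i)-linux-gnu/bits/resource.h" | sort -u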

Build Your Own set_nofile_limit

The per-process limit most often hit is probably "nofile". Imagine your production database suddenly running out of file handles. Imagine a tool that can fix this instantly without restarting the DB!

Copy the following code to a file "set_limit_nofile.c"

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg)    do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);        /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set the new limit (if given) and retrieve the previous one */
    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
           (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Read back the now effective limit */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
           (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);
}

and compile it with

gcc -o set_limit_nofile set_limit_nofile.c

And now you have a tool to change any process's "nofile" limit. To dump the current limits just pass a PID:

$ ./set_limit_nofile 17006
Previous limits: soft=1024; hard=1024
New limits: soft=1024; hard=1024
$

To change limits pass PID and two limits:

# ./set_limit_nofile 17006 1500 1500
Previous limits: soft=1024; hard=1024
New limits: soft=1500; hard=1500
# 

And the production database is saved.

Ubuntu + Apache + ulimit -n

Setting ulimits on Ubuntu is strange enough, as upstart ignores /etc/security/limits.conf; you need to use the "limit" stanza in the job file to change any limit.

But try this with Apache and it won't help, as Debian invented yet another way to ensure Apache ulimits can be changed. So you always need to check

/etc/apache2/envvars

which in a fresh installation contains a line

#APACHE_ULIMIT_MAX_FILES="ulimit -n 65535"

Uncomment it to set any max file limit you want. Restart Apache and verify the process limit in /proc/<pid>/limits which should give you something like

$ egrep "^Limit|Max open files" /proc/3643/limits
Limit                     Soft Limit  Hard Limit  Units
Max open files            1024        4096        files
$

Google Reader Shutting Down


As it was widely covered in the media, most of you have probably heard that Google is shutting down Google Reader after July 1st, 2013.

Will the Google Reader data be lost?

No. As it is implemented now, once Google Reader is not available Liferea will present you only a Google Reader feed list element without a subscriptions subtree. This can happen when you have no network connectivity, for example. The subscriptions are only displayed after a successful Google account login. Even when the subscription list is not shown, Liferea still has all data in the local sqlite database, so no headlines will be lost.

To allow everyone to still access the items a code change is needed. With it, it will be possible to access the old headlines from the local cache DB even after the shutdown, so you lose nothing. I hope to find time to work on this in the next weeks.
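
If you are curious you can already peek at the cached headlines directly. A rough sketch, assuming the default 1.8 cache location and that the items table and title column match your Liferea version:

# adjust the path and schema to your setup; this is only an illustration
sqlite3 ~/.liferea_1.8/liferea.db "SELECT title FROM items LIMIT 5;"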

Any Migration Suggestions?

Well, the best choice is surely self-hosting a TinyTinyRSS instance. Since the shutdown announcement there also seems to be a huge move to Feedly, but be aware that this is the same vendor lock-in as Google was! Remember the API shutdown of Bloglines more than two years ago?

Solving chef-client Errors

Problem:

merb : chef-server (api) : worker (port 4000) ~ Connection refused - connect(2) - (Errno::ECONNREFUSED)

Solution: Check why solr is not running and start it:

/etc/init.d/chef-solr start
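
To verify it came up, you can check whether solr is listening (8983 is the standard Solr port; adjust if your setup differs):

netstat -tlnp | grep 8983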

Problem:

merb : chef-server (api) : worker (port 4000) ~ Net::HTTPFatalError: 503 "Service Unavailable" - (Chef::Exceptions::SolrConnectionError)

Solution: You need to check the solr logs for errors. You can find

  • the access log in /var/log/chef/2013_03_01.jetty.log (adapt the date)
  • the solr error log in /var/log/chef/solr.log

Hopefully you find an error trace there.
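
A quick, rough way to scan the error log for traces (the pattern and context size are just a starting point):

grep -i -B1 -A3 "exception" /var/log/chef/solr.log | tail -50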


Problem:

# chef-expander -n 1
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- http11_client (LoadError)
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/em-http.rb:8:in `<top (required)>'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/em-http-request.rb:1:in `<top (required)>'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/solrizer.rb:24:in `<top (required)>'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/vnode.rb:26:in `<top (required)>'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/vnode_supervisor.rb:28:in `<top (required)>'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/vendor_ruby/chef/expander/cluster_supervisor.rb:25:in `<top (required)>'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /usr/bin/chef-expander:27:in `<main>'

Solution: This is a gem dependency issue with the HTTP client gem. Read about it here: http://tickets.opscode.com/browse/CHEF-3495. You might want to check the active Ruby version on your system, e.g. on Debian run

update-alternatives --config ruby

to find it out and change it. Note that the em-http package from Opscode might require a specific Ruby version. You can check by listing the package files:

dpkg -L libem-http-request-ruby
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/libem-http-request-ruby
/usr/share/doc/libem-http-request-ruby/changelog.Debian.gz
/usr/share/doc/libem-http-request-ruby/copyright
/usr/lib
/usr/lib/ruby
/usr/lib/ruby/vendor_ruby
/usr/lib/ruby/vendor_ruby/em-http.rb
/usr/lib/ruby/vendor_ruby/em-http-request.rb
/usr/lib/ruby/vendor_ruby/em-http
/usr/lib/ruby/vendor_ruby/em-http/http_options.rb
/usr/lib/ruby/vendor_ruby/em-http/http_header.rb
/usr/lib/ruby/vendor_ruby/em-http/client.rb
/usr/lib/ruby/vendor_ruby/em-http/http_encoding.rb
/usr/lib/ruby/vendor_ruby/em-http/multi.rb
/usr/lib/ruby/vendor_ruby/em-http/core_ext
/usr/lib/ruby/vendor_ruby/em-http/core_ext/bytesize.rb
/usr/lib/ruby/vendor_ruby/em-http/mock.rb
/usr/lib/ruby/vendor_ruby/em-http/decoders.rb
/usr/lib/ruby/vendor_ruby/em-http/version.rb
/usr/lib/ruby/vendor_ruby/em-http/request.rb
/usr/lib/ruby/vendor_ruby/1.8
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/em_buffer.so
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/http11_client.so

The listing above, for example, indicates Ruby 1.8.

First Liferea 1.10 Release Candidate!

Today sees the first release candidate for the new 1.10 series. Compared to the previous unstable release 1.9.7 it now provides a clean migration to GSettings, fixed support for TinyTinyRSS 1.6 and an improved player plugin.

Please help testing this release to get out 1.10!

The detailed changes:

    * Patch SF #3407290: Migrate to GSettings
      (by Mikel Olasagasti)
    * Patch SF #3579177: Change .desktop category to News;Feed;
      (by Stanislav Brabec)
    * Fix for Debian #668197: x-www-browser preference not working
      (David Smith)
    * Added slider and time display to media player plugin.
    * Added Google Plus to social bookmarking options.
    * Removing deprecated g_thread_init() call
    * Auto-enable plugins on migration
    * Added missing -a option to manpage
    * Updated manpage to reflect XDG path migration
    * Changing GSettings path from /apps/liferea to /org/gnome/liferea
    * Changes default download thread concurrency from 2 to 3
    * Fixes regression about using the GNOME default font
    * Improves all item/link launching menus to consistently provide
      three options: Tab, Browser and External Browser
    * Fixes SF #1037: Incorrect notifications for Google Reader
      (patch by David Smith)
    * Fixes SF #1048: Removed all feedvalidator.org references from FAQ
      and XSLT as it was reported to host malware.
      (reported by bkat)
    * Fixes SF #1041: Some GPLv2 license headers were outdated
      (reported by Emmanuel Seyman)
    * Fixes SF #1044: tt-rss API changed (we now support only 1.6 API)
      (patch by Sebastian Noel)
    * Fixes assertion when creating new tt-rss subscriptions
    * Fixes XHTML errors caused by extra tags returned by tt-rss
    * Fixes missing item list update when browsing item URLs in Liferea 