Scan Linux for Vulnerable Packages

How do you know whether your Linux server (which has no desktop update notifier or unattended security updates running) needs to be updated? Of course an

apt-get update && apt-get --dry-run upgrade

might give an indication. But which of the pending upgrades fix security issues, and which are only simple bugfixes you do not care about?
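For a quick manual check the simulation output already contains the needed hint: each pending upgrade is printed as an "Inst" line naming the archive it comes from. A rough sketch (the sample line is canned; on a real system you would pipe `apt-get -s upgrade` into the same awk call):

```shell
# "apt-get -s upgrade" prints one "Inst ..." line per pending upgrade,
# including the archive it comes from. Filtering for "security" separates
# security fixes from plain bugfix updates ("-security" is the Ubuntu
# pocket naming convention).
sample='Inst openssl [1.0.1-4ubuntu3] (1.0.1-4ubuntu5.5 Ubuntu:12.04/precise-security [amd64])'
echo "$sample" | awk '/^Inst/ && /security/ {print $2}'
```

On a real system: `apt-get -s upgrade | awk '/^Inst/ && /security/ {print $2}'`.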

Check using APT

One useful possibility is apticron, which will tell you which packages should be upgraded and why. It presents you the package changelog so you can decide whether you want to upgrade a package or not. Similar, but with less detail, is cron-apt, which also informs you of new package updates.

Analyze Security Advisories

Now with all those CERT newsletters, security mailing lists and even security news feeds out there: why can't we check the other way around? Why not find out:

  1. Which security advisories affect my system?
  2. Which ones have I already complied with?
  3. And which vulnerabilities are still there?

My mad idea was to take those security news feeds (as a start I tried the ones from Ubuntu and CentOS), parse out the package versions and compare them to the installed packages. The result was a script producing the following output:

[Screenshot: script output listing security advisories in red/yellow/green]

In the output you see lines starting with "CEBA-2012-xxxx", which is the CentOS security advisory naming scheme (while Ubuntu uses USN-xxxx-x). Yellow means the security advisory doesn't apply because the relevant packages are not installed. Green means the most recent package version is installed and the advisory shouldn't affect the system anymore. And red, of course, means the machine is vulnerable.
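The core of such a script is deciding whether the installed package version is older than the advisory's fixed version. A minimal sketch with made-up version strings, using sort -V, which only approximates dpkg/rpm version ordering (a real implementation should use dpkg --compare-versions or the rpm library):

```shell
# Hypothetical versions: the advisory announces a fixed version, the package
# manager reports what is installed. If the installed version sorts below the
# fixed one, the machine would be flagged red (vulnerable).
installed="1.0.1-4ubuntu3"
fixed="1.0.1-4ubuntu5.5"
if [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | tail -n1)" = "$installed" ]; then
    echo "patched"
else
    echo "vulnerable"
fi
```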

Does it Work Reliably?

The script producing this output can be found here. I'm not yet satisfied with how it works, and I'm not sure it can be maintained at all given the brittle nature of the arbitrarily formatted news feeds the distributions provide. But I like how it gives a clear indication of current advisories and their effect on the system.

Maybe persuading the Linux distributions to use a common feed format with easy-to-parse metadata would be a good idea...

How do you check your systems? What do you think of a package scanner using XML security advisory feeds?

visudo: #includedir sudoers.d

WTF. Today I fell for this sudo madness and uncommented this "comment" in /etc/sudoers:

#includedir /etc/sudoers.d

which gives a

visudo: >>> /etc/sudoers: syntax error near line 28 <<<

Let's check the "sudoers" manpage again: full of EBNF notation! But nothing in the EBNF about comments being directives. At least under "Other special characters and reserved words" one finds:

The pound sign ('#') is used to indicate a comment (unless it is part
of a #include directive or unless it occurs in the context of a user
name and is followed by one or more digits, in which case it is treated
as a uid).  Both the comment character and any text after it, up to the
end of the line, are ignored.

Can't this be done in a better way?
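To be fair, the directive is useful once you know it exists: it pulls in drop-in files like the following hypothetical fragment, which you can validate with "visudo -cf <file>" before deploying:

```
# /etc/sudoers.d/joe -- picked up by the #includedir directive
joe ALL=(ALL) NOPASSWD: /usr/sbin/service
```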

How-to Write a Chef Recipe for Editing Config Files

Most Chef recipes are about installing new software including all config files. Even the pure configuration recipes usually overwrite the whole file with a completely recreated configuration. If you have used cfengine or puppet with augtool before, you'll miss the possibility to edit files in place.

In cfengine2...

You could write

{ home/.bashrc
   AppendIfNoSuchLine "alias rm='rm -i'"
}

While in puppet...

You'd have:

augeas { "sshd_config":
  context => "/files/etc/ssh/sshd_config",
  changes => [
    "set PermitRootLogin no",
  ],
}

Now how to do it in Chef?

Maybe I have missed the correct way to do it so far (please comment if this is the case!), but there seems to be no way to use, for example, augtool with Chef, and there is no built-in cfengine-like editing. The only way I've seen so far is to use Ruby code to change files via the Ruby runtime, or to use the Script resource, which allows running other interpreters like bash, csh, perl, python or ruby.

To use it you define a block named after the interpreter you need and add a "code" attribute with a here-doc operator (e.g. <<-EOT) containing the commands. Additionally you can specify a working directory and a user for the script to be executed as. Example:

bash "some_commands" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
end

While it is not a one-liner as in cfengine, it is very flexible. The Script resource is widely used in the community cookbooks to perform ad-hoc source compilation and installation, but we can also use it for standard file editing.

Finally, for conditional editing, use not_if/only_if clauses at the end of the Script resource block.
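The guard you pass to not_if is itself just a shell command, and the underlying idiom is the classic grep-before-append that makes the edit idempotent. A standalone sketch of that idiom (using a temp file instead of /root/.bashrc):

```shell
# Append the alias only if it is not already present -- running this twice
# must not duplicate the line (that is what not_if buys you in Chef).
f=$(mktemp)
grep -q "alias rm='rm -i'" "$f" || echo "alias rm='rm -i'" >> "$f"
grep -q "alias rm='rm -i'" "$f" || echo "alias rm='rm -i'" >> "$f"
grep -c "alias rm" "$f"   # the line exists exactly once
rm -f "$f"
```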

Use a graphical Linux Editor for VCS Commits

1. Using gedit

If your main editor is the graphical GNOME editor gedit you can also use it when doing version control commits, crontab edits, visudo and other things on the command line. All you need to do is set it in the $EDITOR environment variable.

The only thing you need to be careful about is whether the editor detaches from the terminal right after starting. This must not happen, as the calling command (e.g. "git commit -a") would never know when the editing is finished. So for gedit you have to add the "-w" switch to make it wait.

export EDITOR="gedit -w"

2. Using Emacs

For XEmacs simply set

export EDITOR=xemacs

With Emacs itself you also have the possibility to use server mode by running emacsclient (see "Starting emacs automatically" in the EmacsWiki). To do so set

export ALTERNATE_EDITOR=emacs EDITOR=emacsclient

3. Using Other Editors

If you use the good old Nirvana Editor nedit you can simply

export EDITOR=nedit

the same goes for the KDE editor kate:

export EDITOR=kate

and if you want it to really hurt, try something like this:

export EDITOR=eclipse
export VISUAL=oowriter

4. $EDITOR and sudo

When using sudo you need to pass $EDITOR along to the root environment. This can be done using "sudo -E", e.g.

sudo -E visudo

Whether passing the environment to root is a good idea might be a good question though...

Have fun!

How-to: Migrate Linux applications to XDG directories

If you have been maintaining a Linux Glib-based or GTK application for some time, you might want to migrate it to the XDG way of user data layout. This is something I had to do recently for Liferea (around since 2003). Also when creating a new application you might ask yourself where to put the user data. This post tries to describe how to access the correct paths using Glib.

1. Preparation: Categorize your data

Determine what types of data you have. The specification knows three major directories:

  1. $XDG_DATA_HOME: usually ~/.local/share
  2. $XDG_CONFIG_HOME: usually ~/.config
  3. $XDG_CACHE_HOME: usually ~/.cache

In each of the directories your application should create a subfolder with the unique name of the application and place relevant files there. While volatile cache files go into ~/.cache, persistent important data should go to ~/.local/share and all configuration to ~/.config.
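The base directory lookup follows a simple fallback rule: use the environment variable when set, otherwise the documented default. In shell form (with the hypothetical application name "coolApp"):

```shell
# The fallback rule of the spec: the environment variable wins when set,
# otherwise the documented default (~/.cache) applies.
XDG_CACHE_HOME=/tmp/xdg-demo
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/coolApp"
echo "$cache_dir"
```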

2. Migrate the code

The simple task is to rewrite the old code that creates directory paths in some arbitrary way to use XDG-style directory paths. As the specification is non-trivial when it comes to finding the directory base paths (via multiple paths in $XDG_DATA_DIRS and $XDG_CONFIG_DIRS), it might be good to rely on a library for this.

2.1 Using Glib

When developing for GTK, or just using Glib, you already get support, since Glib 2.6 introduced support for the XDG base directory specification. So with Glib use the following methods to find the target directories:

$XDG_DATA_HOME     g_get_user_data_dir()
$XDG_CONFIG_HOME   g_get_user_config_dir()
$XDG_CACHE_HOME    g_get_user_cache_dir()

Given that your application is named "coolApp" and you want to create a cache file named "render.dat", you could use the following C snippet:

g_build_filename (g_get_user_cache_dir (), "coolApp", "render.dat", NULL);

to produce a path. Most likely you'll get something like "/home/joe/.cache/coolApp/render.dat".

2.2 Using wxWidgets

When programming for wxWidgets you need to use the wxStandardPaths class. The methods are

$XDG_DATA_HOME     wxStandardPaths::GetDataDir()
$XDG_CONFIG_HOME   wxStandardPaths::GetConfigDir()
$XDG_CACHE_HOME    wxStandardPaths::GetLocalDataDir()

2.3 With KDE

Since KDE 3.2 it also supports the XDG base directory specification. But honestly: googling or trying to browse the KDE API, I couldn't find any pointers on how to do it. If you know, please leave a comment!

Get Recipes from the cfengine Design Center

Today I learned that the makers of cfengine have launched a "Design Center" which essentially is a git repository with recipes (called "sketches") for cfengine. This really helps with learning the cfengine syntax and getting quick results. It also saves wild googling for hints on how to do special stuff.

Besides just copy&pasting the sketches, which cfengine guarantees to be runnable without modifications, the design center wiki explains how to install sketches directly from the repository using cf-sketch.

Using cf-sketch you can search for installable recipes:

# cf-sketch --search utilities
Monitoring::nagios_plugin_agent /tmp/design-center/sketches/utilities/nagios_plugin_agent

...and install them:

# cf-sketch --install Monitoring::nagios_plugin_agent

cf-sketch itself is a Perl program that needs to be set up separately by running

git clone
cd design-center/tools/cf-sketch
make install

How to write GObject Introspection based Plugins

This is a short introduction to writing plugins (in this case Python plugins) for a GTK+ 3.0 application. One of the major new features around GTK+ 3.0 is GObject Introspection, which allows applications to be accessed from practically any scripting language out there.

The motivation for this post is that when I tried to add plugin support to Liferea with libpeas, it took me three days to work through the somewhat sparse documentation, which at no point offers a good step-by-step HowTo. This post tries to give one...

1. Implement a Plugin Engine with libpeas

For the integration of libpeas you need to write a lot of boilerplate code for initialisation and plugin path registration. Take the gtranslator gtr-plugins-engine.c implementation as an example.

Most important are the path registrations with peas_engine_add_search_path:

  peas_engine_add_search_path (PEAS_ENGINE (engine),
                               gtr_dirs_get_user_plugins_dir (),
                               gtr_dirs_get_user_plugins_dir ());

  peas_engine_add_search_path (PEAS_ENGINE (engine),
                               gtr_dirs_get_gtr_plugins_dir (),
                               gtr_dirs_get_gtr_plugins_data_dir ());

It is useful to have two registrations: one pointing to a user-writable subdirectory in $HOME, and a second one for package-installed plugins in a path like /usr/share/<application>/plugins. Finally, ensure you call the init method of this boilerplate code during your initialization.

2. Implement Plugin Preferences with libpeasgtk

libpeas also comes with a UI library providing a plugin preference tab that you can add to your preferences dialog. Here is a screenshot from the implementation in Liferea:

To add such a tab, add a "Plugins" tab to your preferences dialog and the following code to the plugin dialog setup:

#include <libpeas-gtk/peas-gtk-plugin-manager.h>


/* assuming "plugins_box" is an existing tab container widget */

GtkWidget *alignment, *widget;

alignment = gtk_alignment_new (0., 0., 1., 1.);
gtk_alignment_set_padding (GTK_ALIGNMENT (alignment), 12, 12, 12, 12);

widget = peas_gtk_plugin_manager_new (NULL);
gtk_container_add (GTK_CONTAINER (alignment), widget);
gtk_box_pack_start (GTK_BOX (plugins_box), alignment, TRUE, TRUE, 0);

At this point you can already compile everything and test it. The new tab with the plugin manager should show up empty but working.

3. Define Activatable Class

We've initialized the plugin library in step 1. Now we need to add some hooks to the program, so-called "Activatables", which we'll use in the code to create a PeasExtensionSet representing all plugins implementing this interface.

For example, gtranslator has a GtrWindowActivatable interface for plugins that should be triggered when a gtranslator window is created. It looks like this:

struct _GtrWindowActivatableInterface
{
  GTypeInterface g_iface;

  /* Virtual public methods */
  void (*activate) (GtrWindowActivatable * activatable);
  void (*deactivate) (GtrWindowActivatable * activatable);
  void (*update_state) (GtrWindowActivatable * activatable);
};

The activate() and deactivate() methods are by convention called via the "extension-added" / "extension-removed" signals emitted by the PeasExtensionSet. The additional method update_state() is called in the gtranslator code when user interactions happen and the plugin needs to reflect them.

Add only as many methods as you need. Many plugins do not need special methods at all, as they can connect to application signals themselves. So keep the Activatable interface simple!

As for how many Activatables to add: in the simplest case of a single-main-window application, you could implement just one Activatable for the main window, and all plugins, no matter what they do, initialize with the main window.

4. Implement and Use the Activatable Class

Now we've defined Activatables and need to implement and use the corresponding class. The interface implementation itself is mostly boilerplate code: check out gtr-window-activatable.c implementing GtrWindowActivatable.

In the class the Activatable belongs to (in case of gtranslator, GtrWindowActivatable belongs to GtrWindow), a PeasExtensionSet needs to be initialized:

window->priv->extensions = peas_extension_set_new (PEAS_ENGINE (gtr_plugins_engine_get_default ()),
                                                   GTR_TYPE_WINDOW_ACTIVATABLE,
                                                   "window", window,
                                                   NULL);

g_signal_connect (window->priv->extensions, "extension-added",
                  G_CALLBACK (extension_added), window);
g_signal_connect (window->priv->extensions, "extension-removed",
                  G_CALLBACK (extension_removed), window);

The extension set instance, representing all plugins implementing the interface, is used to trigger the methods on all or only selected plugins. One of the first things to do after creating the extension set is to initialize all plugins using the signal "extension-added":

  peas_extension_set_foreach (window->priv->extensions,
                              (PeasExtensionSetForeachFunc) extension_added,
                              window);

As there might be more than one registered extension we need to implement a PeasExtensionSetForeachFunc method handling each plugin. This method uses the previously implemented interface. Example from gtranslator:

static void
extension_added (PeasExtensionSet *extensions,
                 PeasPluginInfo   *info,
                 PeasExtension    *exten,
                 GtrWindow        *window)
{
  gtr_window_activatable_activate (GTR_WINDOW_ACTIVATABLE (exten));
}

Note: up until libpeas version 1.1 you'd simply call peas_extension_call() with the name of the interface method to trigger instead:

peas_extension_call (extension, "activate");

Ensure to

  1. Initially call the "extension-added" signal handler for each plugin registered at startup using peas_extension_set_foreach()
  2. Implement and connect the "extension-added" / "extension-removed" signal handlers
  3. Implement one PeasExtensionSetForeachFunc for each additional interface method you defined in step 3
  4. Provide a caller method running peas_extension_set_foreach() for each of those interface methods.

5. Expose some API

Now you are almost ready to code a plugin. But for it to access business logic you might want to expose some API from your program. This is done by annotating the function/interface/class comments and running g-ir-scanner on the code to create GObject Introspection metadata (one .gir and one .typelib file per package).

To learn about the annotations, check the Annotation Guide and other projects for examples. During compilation g-ir-scanner will issue warnings on incomplete or wrong syntax.

6. Write a Plugin

When writing plugins you always have to create two things:

  • A .plugin file describing the plugin
  • At least one executable/script implementing the plugin

You should put those files into a separate "plugins" directory in your source tree as they need an extra install target. Assuming you want to write a Python plugin, you'd create a "myplugin.plugin" file with the following content:

[Plugin]
Module=myplugin
Loader=python
Name=My Plugin
Description=My example plugin for testing only
Authors=Joe, Sue
Copyright=Copyright © 2012 Joe

Now for the plugin: in Python you'd import packages from the GObject Introspection repository like this

from gi.repository import GObject
from gi.repository import Peas
from gi.repository import PeasGtk
from gi.repository import Gtk
from gi.repository import <your package prefix>

The imports of GObject, Peas, PeasGtk and your package are mandatory. Others depend on what you want to do with your plugin. Usually you'll want to interact with Gtk.

Next you need to implement a simple class with all the interface methods we defined earlier:

class MyPlugin(GObject.Object, <your package prefix>.<Type>Activatable):
    __gtype_name__ = 'MyPlugin'

    object = GObject.property(type=GObject.Object)

    def do_activate(self):
        print "activate"

    def do_deactivate(self):
        print "deactivate"

    def do_update_state(self):
        print "updated state!"

Ensure to fill in the proper package prefix for your program and the correct Activatable name (like GtkWindowActivatable). Now flesh out the methods. That's all.

Things to know:

  • Your binding will use some namespace separation scheme. Python uses dots to separate the elements in the inheritance hierarchy. If unsure, check the unofficial online API documentation.
  • If you have a syntax error during activation, libpeas will permanently disable your plugin in the preferences. You need to manually re-enable it.
  • You can disable/enable your plugin multiple times to debug problems during activation.
  • To avoid endless "make install" calls register a plugin engine directory in your home directory and edit experimental plugins there.

7. Setup autotools Install Hooks

If you use automake, extend the Makefile.am in your sources directory with something similar to

INTROSPECTION_GIRS = Gtranslator-3.0.gir

Gtranslator-3.0.gir: gtranslator
INTROSPECTION_SCANNER_ARGS = -I$(top_srcdir) --warn-all --identifier-prefix=Gtr
Gtranslator_3_0_gir_NAMESPACE = Gtranslator
Gtranslator_3_0_gir_VERSION = 3.0
Gtranslator_3_0_gir_PROGRAM = $(builddir)/gtranslator
Gtranslator_3_0_gir_FILES = $(INST_H_FILES) $(libgtranslator_c_files)
Gtranslator_3_0_gir_INCLUDES = Gtk-3.0 GtkSource-3.0

girdir = $(datadir)/gtranslator/gir-1.0
gir_DATA = $(INTROSPECTION_GIRS)

typelibdir = $(libdir)/gtranslator/girepository-1.0
typelib_DATA = $(INTROSPECTION_GIRS:.gir=.typelib)

CLEANFILES = \
	$(gir_DATA) \
	$(typelib_DATA)

Ensure to

  1. Pass all files you want to have scanned to xxx_gir_FILES
  2. To provide a namespace prefix in INTROSPECTION_SCANNER_ARGS with --identifier-prefix=xxx
  3. To add --accept-unprefixed to INTROSPECTION_SCANNER_ARGS if you have no common prefix

Next create an install target for the plugins you have:

plugindir = $(pkglibdir)/plugins
plugin_DATA = \
        plugins/ \
        plugins/one_plugin.plugin \
        plugins/ \

Additionally add the package dependencies and GIR macros to configure.ac:

       libpeas-1.0 >= 1.0.0
       libpeas-gtk-1.0 >= 1.0.0


8. Try to Compile Everything

Check that when running "make"

  1. Everything compiles
  2. g-ir-scanner doesn't complain too much
  3. A .gir and .typelib file is placed in your sources directory

Check that when running "make install"

  1. Your .gir file is installed in <prefix>/share/<package>/gir-1.0/
  2. Your plugins are installed to <prefix>/lib/<package>/plugins/

Launch the program and

  1. Enable the plugins using the preferences for the first time
  2. If in doubt always check whether the plugin is still enabled (it will get disabled on syntax errors during activation)
  3. Add a lot of debug output to your plugin and watch it tell you things on the console the program was started from

This should do it. Please post comments if you miss stuff or find errors! I hope this tutorial helps one or the other reader.

Access GnomeKeyring with Python via GObject Introspection (GIR)

Since GTK+ 3.0 and the broad introduction of GObject Introspection (GIR), one can now switch from the existing GnomeKeyring Python module to direct GIR-based access. This allows reducing the Python runtime dependencies.

Below you find a simple keyring access script that unlocks a keyring named "test", adds a new entry and dumps all entries in the keyring. This code uses the generic secret keyring type and was originally written for a Liferea plugin that allows Liferea to store feed passwords in GnomeKeyring:

from gi.repository import GObject
from gi.repository import GnomeKeyring

keyringName = 'test'

def unlock():
	print 'Unlocking keyring %s...' % keyringName
	GnomeKeyring.unlock_sync(keyringName, None)

def dump_all():
	print "Dump all keyring entries..."
	(result, ids) = GnomeKeyring.list_item_ids_sync(keyringName)
	for id in ids:
		(result, item) = GnomeKeyring.item_get_info_sync(keyringName, id)
		if result != GnomeKeyring.Result.OK:
			print '%s is locked!' % (id)
		else:
			print '  => %s = %s' % (item.get_display_name(), item.get_secret())

def do_query(id):
	print 'Fetch secret for id %s' % id
	attrs = GnomeKeyring.Attribute.list_new()
	GnomeKeyring.Attribute.list_append_string(attrs, 'id', id)
	result, value = GnomeKeyring.find_items_sync(GnomeKeyring.ItemType.GENERIC_SECRET, attrs)
	if result != GnomeKeyring.Result.OK:
		return
	print '  => password %s = %s' % (id, value[0].secret)
	print '     keyring id  = %s' % value[0].item_id

def do_store(id, username, password):
	print 'Adding keyring entry for id %s' % id
	GnomeKeyring.create_sync(keyringName, None)
	attrs = GnomeKeyring.Attribute.list_new()
	GnomeKeyring.Attribute.list_append_string(attrs, 'id', id)
	GnomeKeyring.item_create_sync(keyringName, GnomeKeyring.ItemType.GENERIC_SECRET, repr(id), attrs, '@@@'.join([username, password]), True)
	print '  => Stored.'

# Our test code...
unlock()
dump_all()
do_store('id1', 'User1', 'Password1')
do_query('id1')
dump_all()

For simplicity the username and password are stored together as the secret token, separated by "@@@". According to the documentation it should be possible to store them separately, but my limited Python knowledge and the missing GIR documentation made me use this simple method. If I find a better way I'll update this post. If you know how to improve the code, please post a comment!
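Recovering the two parts later is then a simple split on the separator. A sketch (the "@@@" convention is this post's ad-hoc choice, not a GnomeKeyring feature):

```python
# Split the combined secret back into username and password. The maxsplit
# of 1 keeps any "@@@" inside the password itself intact.
secret = 'User1@@@Password1'
username, password = secret.split('@@@', 1)
print(username)
print(password)
```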

The code should raise a keyring password dialog when run for the first time in the session and give an output similar to this:

Unlocking keyring test...
Dump all keyring entries...
  => 'id1' = TestA@@@PassA
Adding keyring entry for id id1
  => Stored.
Fetch secret for id id1
  => password id1 = TestA@@@PassA
     keyring id  = 1
Dump all keyring entries...
  => 'id1' = TestA@@@PassA

You can also check the keyring contents using the seahorse GUI where you should see the "test" keyring with an entry with id "1" as in the screenshot below.

Memcache Monitoring GUIs

When using memcached or memcachedb everything is fine as long as it is running. But from an operations perspective memcached is a black box. There is no real logging; you can only use the -v/-vv/-vvv switches when not running in daemon mode to see what your instance does. And it becomes even more complex if you run multiple or distributed memcache instances on different hosts and ports.

So the question is: How to monitor your distributed memcache setup?

There are not many tools out there, but some useful ones exist. We'll have a look at the following tools. Note that some can monitor multiple memcached instances, while others can only monitor a single instance at a time.

Name              Multi-Instance  Complexity/Features
telnet            no              Simple CLI via telnet
memcache-top      no              CLI
stats-proxy       yes             Simple Web GUI
memcache.php      yes             Simple Web GUI
PhpMemcacheAdmin  yes             Complex Web GUI
Memcache Manager  yes             Complex Web GUI

1. Use telnet!

memcached already brings its own statistics which are accessible via telnet for manual interaction and are the basis for all other monitoring tools. Read more about using the telnet interface.
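The stats protocol is simple enough to script: send "stats" and parse the "STAT <name> <value>" lines up to "END". The parsing below runs on a canned sample; against a live instance the same text could come from e.g. `printf 'stats\nquit\n' | nc localhost 11211`:

```shell
# memcached answers "stats" with one "STAT <name> <value>" line per counter,
# terminated by "END". The sample stands in for a live server's response.
sample='STAT get_hits 4242
STAT get_misses 999
END'
echo "$sample" | awk '$1 == "STAT" {print $2 "=" $3}'
```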

2. Live Console Monitoring with memcache-top

You can use memcache-top for live monitoring of a single memcached instance. It will give you the I/O throughput, the number of evictions, the current hit ratio, and, if run with "--commands", also the number of GET/SET operations per interval.

memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)

INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
                        88.9%   69.7%   1661    0.9ms   0.3     47      9       13.9K   9.8K
                        88.8%   69.9%   2121    0.7ms   1.3     168     10      17.6K   68.9K
                        88.9%   69.4%   1527    0.7ms   1.7     48      16      14.4K   13.6K

AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      11      13.5K   30.3K

TOTAL:          19.9GB/ 23.4GB          20.0K   11.7ms  15.3    826     132     162.6K  363.6K
(ctrl-c to quit.)

(Example output)

3. Live browser monitoring with statsproxy

Using the statsproxy tool you get a browser-based statistics view for multiple memcached instances. The basic idea of statsproxy is to provide the unmodified memcached statistics via HTTP. It also provides a synthetic health check for service monitoring tools like Nagios.

To compile statsproxy on Debian:

# Ensure you have bison
sudo apt-get install bison

# Download tarball
tar zxvf statsproxy-1.0.tgz
cd statsproxy-1.0

Now you can run the "statsproxy" binary, but it will inform you that it needs a configuration file. I suggest redirecting its output to a new file, e.g. "statsproxy.conf", removing the informational text at top and bottom, and then modifying the configuration section as needed.

./statsproxy > statsproxy.conf 2>&1

Ensure to add as many "proxy-mapping" sections as you have memcached instances. In each "proxy-mapping" section ensure that "backend" points to your memcached instance and "frontend" to a port on your webserver where you want to access the information for this backend.

Once finished run:

./statsproxy -F statsproxy.conf

Below you find a screenshot of what statsproxy looks like:

4. Live browser monitoring with memcache.php

Using this PHP script you can quickly add memcached statistics to a webserver of your choice. Most useful is the global memory usage graph which helps to identify problematic instances in a distributed environment.

Here is how it should look (screenshot from the project homepage):

When using this script, ensure access is protected and do not trigger the "flush_all" menu option by accident. Also, on large memcached instances, refrain from dumping the keys as it might cause significant load on your server.

Acer Aspire One - Linux Flash Video Performance


There are dozens of guides on how to run the Acer Aspire One netbook with Ubuntu 12.04 (and derivatives Lubuntu, Xubuntu & Co), which provides reasonably good hardware support. For performance reasons most of them suggest installing the AMD Catalyst driver. And this is good, because it allows playing HD videos without problems on this otherwise small netbook.

Still Flash video doesn't work!

Most guides do not mention it, but the solution is quite simple: one also needs to install the XvBA Linux support. On Debian and Ubuntu this means:

sudo apt-get install xvba-va-driver libva-glx1 libva-egl1 vainfo

This is also described on the Ubuntu BinaryDriverHowto page but missing in almost all other tutorials on how to get Radeon chips working on Linux.

So again: Install XvBA!!!

If you are unsure whether it is working on your system just run

vainfo

and check if it lists "Supported profile and entrypoints". If it does not, or the tool doesn't exist, you probably run without hardware acceleration for Flash.
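The same check can be scripted, e.g. as a post-install sanity test (the grep pattern is an assumption based on vainfo's entrypoint listing; adjust as needed):

```shell
# Exit-code based check: vainfo lists VAEntrypoint... lines when hardware
# acceleration is available; otherwise (or if the tool is missing) we fall
# through to the else branch.
if vainfo 2>/dev/null | grep -q "VAEntrypoint"; then
    echo "VA-API acceleration available"
else
    echo "no VA-API acceleration"
fi
```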
