Recent Posts

More Changes after the Dyn DDoS Attack

Looking again at the NS records of all scanned top 500 Alexa domains after the recent Dyn DDoS attack, 13 of the 14 Dyn customers that previously relied solely on Dyn have now either switched away entirely or added additional non-Dyn DNS servers.

Who Switched Entirely

about.com, addthis.com, exoclick.com, github.com/.io, quora.com, speedtest.net, zendesk.com

Who Switched to "Multi"-DNS

etsy.com, paypal.com, shutterstock.com, theverge.com, weebly.com

Details

Here is a summary of the NS record changes. To allow automatic comparison of the server names, all numbers in the DNS server names have been stripped in the following table (see the example command after the table):

Site             | Before (15.10.)  | After (24.10.)
about.com        | ns.p.dynect.net. | dns.p.nsone.net.
addthis.com      | ns.p.dynect.net. | matt.ns.cloudflare.com., wanda.ns.cloudflare.com.
etsy.com         | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org., ns.p.dynect.net.
exoclick.com     | ns.p.dynect.net. | dns.p.nsone.net., ns.p.dynect.net.
github.com       | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org.
github.io        | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org.
paypal.com       | ns.p.dynect.net. | ns.p.dynect.net., pdns.ultradns.com., pdns.ultradns.net.
quora.com        | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org.
shutterstock.com | ns.p.dynect.net. | a.verisigndns.com., ns.p.dynect.net.
speedtest.net    | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org.
theverge.com     | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org., ns.p.dynect.net.
weebly.com       | ns.p.dynect.net. | ns-.awsdns-.co.uk., ns-.awsdns-.com., ns-.awsdns-.net., ns-.awsdns-.org., ns.p.dynect.net.
zendesk.com      | ns.p.dynect.net. | pdns.ultradns.biz., pdns.ultradns.co.uk., pdns.ultradns.com., pdns.ultradns.info., pdns.ultradns.net., pdns.ultradns.org.
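
The number stripping used for this comparison can be reproduced with a one-liner like the following (a rough sketch; the domain is just an example):
dig +short NS etsy.com | sed 's/[0-9]//g' | sort -u
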
The noteworthy non-changer is Twitter, which is still exclusively at Dyn. Everyone else seems to have mitigated: some by using two providers, most of them by switching to AWS DNS, some to UltraDNS exclusively.

Changes after the Dyn DNS Outage

Looking at NS records today, some of yesterday's affected companies decided to change things after the DoS on Dyn. As the NS record is not really a customer-facing feature, this is more an indication of the expectations of Dyn's customers. One could argue that switching away from Dyn means fear of more downtimes to come.

Here is a summary of changed NS records so far:
Site          | Before (15.10.) | After (22.10.)
about.com     | dynect.net      | nsone.com
etsy.com      | dynect.net      | dynect.net, awsdns
github.com    | dynect.net      | awsdns
paypal.com    | dynect.net      | dynect.net, ultradns.org, paypal.com
xhamster.com  | dynect.net      | anycastns*.org
zendesk.com   | dynect.net      | ultradns.*
speedtest.net | dynect.net      | awsdns
I only checked some NS records for changes, just several of the top sites. There are two noteworthy non-changers, Twitter and Github; everyone else seems to have mitigated: some by using two providers, several by switching to AWS DNS or UltraDNS exclusively.

Whom you can DDoS via DynDNS

After today's outage of various major websites you might ask who else is affected along with the well-known sites such as Amazon, Twitter or Github.

Using the results of a monthly scan I automatically run on the top 500 Alexa sites it is easy to find out. The only thing you need to know is that the Dyn DNS server domain is dynect.net (detailed results).
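
To check a single domain yourself, a quick look at its NS records is enough (a minimal sketch; the domain is just an example):
dig +short NS twitter.com | grep -i dynect.net && echo "relies on Dyn"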

Top Affected Sites

6pm.com about.com adcash.com addthis.com amazon.ca amazon.cn amazon.co.jp amazon.co.uk amazon.com amazon.de amazon.es amazon.fr amazon.in amazon.it answers.com bitly.com businessinsider.com chase.com chip.de disqus.com ebay.co.uk ebay.com ebay.com.au ebay.de ebay.in ebay.it etsy.com evernote.com exoclick.com github.com github.io goodreads.com hostgator.com huffingtonpost.com imdb.com indeed.com indiatimes.com jmpdirect01.com moz.com nytimes.com outbrain.com overstock.com pandora.com paypal.com photobucket.com pornhub.com quora.com redtube.com scribd.com shutterstock.com soundcloud.com speedtest.net stumbleupon.com t.co theguardian.com theverge.com tripadvisor.com trovi.com tube8.com tumblr.com twimg.com twitch.tv twitter.com uploaded.net webmd.com weebly.com wikia.com wix.com xhamster.com youporn.com zendesk.com zillow.com

Probably Different Impact

Note that not all of these sites were equally affected, as some of them, like Amazon, are using multiple DNS providers. The NS records of the Amazon main domains point to both Dyn and UltraDNS. In the same way probably none of the major adult sites was down, as they also relied on at least two providers.

So while Amazon users probably got to the website after one DNS timeout and a switch-over to UltraDNS, Twitter and Github users were not so lucky and had to hope for Dyn to respond. It will be interesting to see whether Twitter and Github will add a second DNS provider as a result of this.

The Need For "Multi-DNS"

Reading different reports on this incident it seems to me the headlines are focusing on those sites using just Dyn's DNS and not on those having a "Multi-DNS" setup.

Detailed results on who is using which DNS domain can be found in the monthly DNS usage index.

Apply Changes to limits.conf Immediately

See also ulimit - Cheat Sheet

Sometimes you need to increase the open file limit for an application server or the maximum shared memory for your ever-growing master database. In such a case you edit your /etc/security/limits.conf and then wonder how to make the changed limits visible to check whether you have set them correctly. You do not want to find out that they were wrong after your master DB doesn't come up after some incident in the middle of the night...
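
For reference, entries in /etc/security/limits.conf follow the pattern "<domain> <type> <item> <value>", for example (user name and values are arbitrary examples):
myuser  soft  nofile  4096
myuser  hard  nofile  8192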

Instant Applying Limits to Running Processes

In addition to changing /etc/security/limits.conf you might actually want to apply the changes directly to a running process. Recent Linux distributions (e.g. Debian Jessie) ship a tool "prlimit" to get/set limits.

Usage for changing limits for a PID is

prlimit --pid <pid> --<limit>=<soft>:<hard>
for example
prlimit --pid 12345 --nofile=1024:2048
If you are unlucky and do not have prlimit yet, check out this instruction to compile your own version, because even though the user-space tool may be missing, the prlimit() system call has been in the kernel for quite a while (since 2.6.36).

Alternative #1: Re-Login with "sudo -i"

If you do not have prlimit yet and want a changed limit configuration to become visible, you might want to try "sudo -i". The reason: you need to re-login, as limits from /etc/security/* are only applied on login!

But wait: what about users without login? In such a case you log in as root (which might not share their limits) and sudo into the user, so there is no real login as the user. In this case you must ensure to use the "-i" option of sudo:
sudo -i -u <user>
to simulate an initial login with sudo. This will apply the new limits.
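
To verify that the new limits are really in effect for such a user, a quick check could look like this ("appuser" is just a placeholder):
sudo -i -u appuser ulimit -n    # prints the effective open file soft limit for that user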

Alternative #2: Make it work for sudo without "-i"

Whether you need "-i" depends on the PAM configuration of your Linux distribution. If you need it, then PAM probably loads "pam_limits.so" only in /etc/pam.d/login, which means at login time but not on sudo. This was introduced in Ubuntu Precise for example. By adding this line

session    required   pam_limits.so
in /etc/pam.d/sudo, limits will also be applied when running sudo without "-i". Still, using "-i" might be easier.

Finally: Always Check Effective Limits

The best way is to change the limits and check them by running
prlimit               # for current shell
prlimit --pid <pid>   # for a running process
because it shows both soft and hard limits together. Alternatively call
ulimit -a                # for current shell
cat /proc/<pid>/limits   # for a running process
with the affected user.

Sharing Screen With Multiple Users

How to detect screen sessions of other users:

screen -ls <user name>/

How to open screen to other users:

  1. Ctrl-A :multiuser on
  2. Ctrl-A :acladd <user to grant access>

Attach to another user's screen session:

With session name
screen -x <user name>/<session name>
With PID and tty
screen -x <user name>/<pid>.<ptty>.<host>
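
A complete walk-through could look like this (user names and the session name are made up; attaching to another user's session usually requires the screen binary to be setuid root):
# alice, inside her running screen session:
#   Ctrl-A :multiuser on
#   Ctrl-A :acladd bob
# bob, from his own shell, lists and attaches:
screen -ls alice/
screen -x alice/mysession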

Linux HTML Rendering Widgets

In 2010 I compiled a summary of HTML rendering widgets useful for embedding in Linux applications. Given recent changes and Liferea switching from Webkit to Webkit2, I felt it is time to post an updated version.

The following table gives a summary of the different HTML renderers, some long gone, some fully maintained:
Name        | Toolkit            | Platform     | Derived From | Driving Force     | Active
KHTML       | QT                 | %            | KDE          | KDE               | Yes
wxHtml      | wxWidgets          | GTK, Windows | KHTML        | wxWidgets         | Yes
GtkHtml     | GTK+ 1.0           | GNOME 1      | KHTML        | GNOME 1           | No, long gone
GtkHtml2    | GTK+ 2.0           | GNOME 2      | GtkHtml      | GNOME 2           | No, v2.11: Aug 2007
GtkHtml3    | GTK+ 2.0           | GNOME 2      | GtkHtml      | Ximian, Evolution | No, v3.14: May 2008
GtkHtml4    | GTK+ 3.0           | GNOME 3      | GtkHtml      | Ximian, Evolution | No, v4.6.6: Jul 2013
GtkMozEmbed | GTK+ 2.0           | Gecko        | %            | Mozilla           | No
WebKitGtk   | GTK+ 2.0, GTK+ 3.0 | Webkit       | KHTML/Webkit | Apple Safari      | No
WebKitGtk2  | GTK+ 3.0           | Webkit       | Webkit       | Apple Safari      | Yes
Note: My summary somewhat complements this Wikipedia list. Still, it focuses more on Linux renderers and correctly distinguishes the rather mad history of GtkHtml*.

Given the list above one could conclude that the only acceptable renderers are KHTML, wxHtml and WebkitGtk, simply based on project activity. Still, other renderers like GtkHtml2 and GtkHtml3 have come a long way and provide limited but stable functionality.

But the important question is: What features are supported by the different renderers?
Name        | Widget Embed | Full HTML | CSS        | JS | Java/Flash | Editor | MathML
KHTML       | y            | y         | 1,2,3      | y  | y          | n      | n
wxHtml      | y            | n         | none       | n  | n          | n      | n
GtkHtml     | y            | y         | none       | n  | n          | y      | n
GtkHtml2    | y            | y         | 1,2 inline | n  | n          | n      | n
GtkHtml3    | y            | y         | none       | n  | n          | y      | n
GtkHtml4    | y            | y         | none       | n  | n          | y      | n
GtkMozEmbed | n            | y         | 1,2,3      | y  | y          | n      | y
WebKitGtk   | n            | y         | 1,2,3      | y  | y          | n      | n
WebKitGtk2  | n            | y         | 1,2,3      | y  | y          | n      | in work
The feature matrix along with the platform listing explains why a lot of those old renderer libraries are still around. If you just want to render simple markup in an email client you might still choose wxHtml or GtkHtml4, with the latter providing an HTML editor for rich mail editing. Of course when you want to allow your users fully fledged inline browsing you need to use either KHTML or Webkit. If you are developing for GTK you need to use Webkit; on KDE you will probably use KHTML.

If you find mistakes or have something to add please post a comment!

Hiera EYAML GPG Troubleshooting

When using Hiera + Eyaml + GPG as Puppet configuration backend one can run into a multitude of really bad error messages. The problem here is mostly the excessive layering of libraries, e.g. Eyaml on top of Eyaml-GPG on top of either GPGME or Ruby GPG on top of GnuPG. Most errors originate from or are reported by GnuPG and are badly unspecified.

This post gives some hints on some of the errors.

[hiera-eyaml-core] General error

This is one of the worst errors you can get. One common cause is an expired GPG key. Check for it using
LANG=C gpg -k | grep expired
and remove the expired key with
gpg --delete-key <name>
As the error label indicates this can have other causes. In such a case check out the GPGME Debugging section below.

[hiera-eyaml-core] no such file to load -- hiera/backend/eyaml/encryptors/gpg

If you got this you probably forgot to install the Ruby GEM. Fix it by running
gem install hiera-eyaml-gpg

[hiera-eyaml-core] GPG command (gpg --homedir /home/lars/.gnupg --quiet --no-secmem-warning --no-permission-warning --no-tty --yes --decrypt) failed with: gpg: Sorry, no terminal at all requested - can't get input

This error indicates a problem getting your secret key password. As Eyaml triggers GPG in the background, no password prompt can be issued. So the only way to get one is the GPG agent. In this case it might be dead. Check if one is running:
pgrep -fl gpg-agent

[gpg] !!! Fatal: Failed to decrypt ciphertext (check settings and that you are a recipient) [hiera-eyaml-core] !!! Decryption failed

If you get this error message you might want to check whether you have a matching private key for one of the GPG recipients using
gpg -K

GPGME Debugging

No matter what error message you get: if you cannot solve it, consider enabling debug traces by setting
export GPGME_DEBUG=9
Then run "eyaml" and check the output for sections of "_gpgme_io_read" that indicate the GnuPG responses like this one:
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_run_io_cb: call: item=0x2363d70, handler (0x21abc30, 7)
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: enter: fd=0x7, buffer=0x238b6c0, count=1024
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: check: 5b474e5550473a5d 20494e565f524543 [GNUPG:] INV_REC
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: check: 5020302035444136 3939343530393537 P 0 5DA699450957
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: check: 3346354543394341 4138413232433134 3F5EC9CAA8A22C14
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: check: 3846433938453339 374335430a5b474e 8FC98E397C5C.[GN
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: check: 5550473a5d204641 494c55524520656e UPG:] FAILURE en
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: check: 6372797074203533 0a               crypt 53.
GPGME 2016-06-16 12:33:55 <0x45b7>    _gpgme_io_read: leave: result=89
If you look past the bad wrapping you see the following info here:
INV_RECP 0 5DA699450957.... FAILURE encrypt 53
Google for those messages and you often get a GnuPG-related result hinting at the cause. The above trace is about an invalid key with fingerprint 5DA699450957...., which you can find by listing your GPG keys and checking for expiration messages.
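
A quick way to spot such a key is listing all keys with fingerprints and grepping for expiry (a rough sketch):
LANG=C gpg --list-keys --fingerprint | grep -B1 -A2 expired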

Workaround OpenSSH 7.0 Problems

OpenSSH 7+ deprecates the weak key exchange algorithm diffie-hellman-group1-sha1 and DSA public keys for both host and user keys, which leads to the following error messages:

Unable to negotiate with 172.16.0.10 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1
or a simple "Permission denied" when using a DSA user public key, or
Unable to negotiate with 127.0.0.1: no matching host key type found.
Their offer: ssh-dss
when connecting to a host with a DSA host key.

Workaround

Allow the different deprecated features in ~/.ssh/config:

Host myserver
  # To make pub ssh-dss keys work again
  PubkeyAcceptedKeyTypes +ssh-dss
  # To make host ssh-dss keys work again
  HostkeyAlgorithms +ssh-dss
  # To allow weak remote key exchange algorithm
  KexAlgorithms +diffie-hellman-group1-sha1
Alternatively pass those three options using -o. For example, to allow the weak key exchange when running SSH:
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 <host>

Solution

Replace all your DSA keys to avoid keys stopping to work. And upgrade all SSH versions to avoid offering legacy key exchange algorithms.
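
Generating a modern replacement user key and checking a server for remaining DSA host keys could look like this (key type, file names and paths are just typical examples):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ls -l /etc/ssh/ssh_host_dsa_key* 2>/dev/null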

Scan Linux for Vulnerable Packages

How do you know whether your Linux server (which has no desktop update notifier or unattended security updates running) needs to be updated? Of course an

apt-get update && apt-get --dry-run upgrade
might give an indication. But which of the package upgrades address security risks and which are only simple bugfixes you do not care about?

Check using APT

One useful possibility is apticron, which will tell you which packages should be upgraded and why. It presents you the package ChangeLog so you can decide whether you want to upgrade a package or not. Similar but less detailed is cron-apt, which also informs you of new package updates.

Analyze Security Advisories

Now with all those CERT newsletters, security mailing lists and even security news feeds out there: why can't we check the other way around? Why not find out:
  1. Which security advisories do affect my system?
  2. Which ones have I already complied with?
  3. And which vulnerabilities are still there?
My mad idea was to take those security news feeds (as a start I tried with the ones from Ubuntu and CentOS) and parse out the package versions and compare them to the installed packages. The result was a script producing the following output:

[Screenshot of lpvs-scan.pl output]

In the output you see lines starting with "CEBA-2012-xxxx", which is the CentOS security advisory naming scheme (while Ubuntu has USN-xxxx-x). Yellow color means the security advisory doesn't apply because the relevant packages are not installed. Green means the most recent package version is installed and the advisory shouldn't affect the system anymore. Finally red, of course, means that the machine is vulnerable.
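
The core of such a check is comparing the installed package version against the version named in an advisory, which on Debian/Ubuntu could be sketched like this (package name and advisory version are made up):
installed=$(dpkg-query -W -f '${Version}' openssl 2>/dev/null)
if [ -n "$installed" ] && dpkg --compare-versions "$installed" lt "1.0.1f-1ubuntu2.21"; then
    echo "openssl $installed is older than the advisory version => vulnerable"
fi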

Does it Work Reliably?

The script producing this output can be found here. I'm not yet satisfied with how it works and I'm not sure whether it can be maintained at all, given the brittle nature of the arbitrarily formatted news feeds provided by the distributions. But I like how it gives a clear indication of current advisories and their effect on the system.

Maybe persuading the Linux distributions to use a common feed format with easy-to-parse metadata would be a good idea...

How do you check your systems? What do you think of a package scanner using XML security advisory feeds?

Do Not List Iptables NAT Rules Without Care

What I do not want to do ever again is running

iptables -L -t nat
on a core production server with many many connections.

And why?

Well, because running "iptables -L" auto-loads the table-specific iptables kernel module, which for the "nat" table is "iptable_nat", which in turn depends on "nf_conntrack".

While "iptables_nat" doesn't do anything when there are no configured iptables rules, "nf_conntrack" immediately starts to drop connections as it cannot handle the many many connections the server has.

Probably the only safe way to check for NAT rules is:
grep -q ^nf_conntrack /proc/modules && iptables -L -t nat

Gerrit Howto Remove Changes

Sometimes you want to delete a Gerrit change to make it invisible for everyone (for example when you committed an unencrypted secret...). AFAIK this is only possible via the SQL interface, which you can enter with

ssh -p 29418 <gerrit host> gerrit gsql
and issue a delete with:
update changes set status='d' where change_id='<change id>';
For more Gerrit hints check out the Gerrit Cheat Sheet

SSH ProxyCommand Examples

Use "ProxyCommand" in your ~/.ssh/config to easily access servers hidden behind port knocking and jump hosts.

Also check out the SSH - Cheat Sheet.

Use Gateway/Jumphost

Host unreachable_host
  ProxyCommand ssh gateway_host exec nc %h %p

Automatic Jump Host Proxying

Host <your jump host>
  ForwardAgent yes
  Hostname <your jump host>
  User <your user name on jump host>

# Note the server list can have wild cards, e.g. "webserver-* database*"
Host <server list>
  ForwardAgent yes
  User <your user name on all these hosts>
  ProxyCommand ssh -q <your jump host> nc -q0 %h 22

Automatic Port Knocking

Host myserver
   User myuser
   Hostname myserver.com
   ProxyCommand bash -c '/usr/bin/knock %h 1000 2000 3000 4000; sleep 1; exec /bin/nc %h %p'

Nagios Plugin for dmesg Monitoring

So far I have found no easy solution to monitor for Linux kernel messages. So here is a simple Nagios plugin to scan the dmesg output for interesting stuff:

#!/bin/bash

SEVERITIES="err,alert,emerg,crit"
WHITELIST="microcode: |\
Firmware Bug|\
i8042: No controller|\
Odd, counter constraints enabled but no core perfctrs detected|\
Failed to access perfctr msr|\
echo 0 > /proc/sys"

# Check for critical dmesg lines from this day
date=$(date "+%a %b %e")
output=$(dmesg -T -l "$SEVERITIES" | egrep -v "$WHITELIST" | grep "$date" | tail -5)

if [ "$output" == "" ]; then
    echo "All is fine."
    exit 0
fi

echo "$output" | xargs
exit 1
"Features" of the script above: This script helped a lot to early on detect I/O errors, recoverable as well as corruptions. It even worked when entire root partition wasn't readable anymore, because then the Nagios check failed with "NRPE: unable to read output" which indicated that dmesg didn't work anymore. By always showing all errors from the entire day one cannot miss recovered errors that happened in non-office hours.

Another good thing about the check is that it detects OOM kills or fast spawning of processes.
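
To hook the script into Nagios it can be registered as an NRPE command, roughly like this (path and command name are placeholders):
command[check_dmesg]=/usr/local/lib/nagios/check_dmesg.sh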

Providing Links into Grafana Templates

As a Grafana user it is not obvious how to share links to template-based dashboards.

Grafana does not change the request URI to reflect template variables you might enter (e.g. the server name).

Solution

There is a hidden feature: you can pass all template values via URL parameters in the following syntax
var-<parameter name>=<value>
Example link:
http://mygrafana.local/#/dashboard/db/mydashboard?var-server=web01
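
Multiple template variables can be combined with "&" just like normal URL parameters (dashboard and variable names here are made-up examples):
http://mygrafana.local/#/dashboard/db/mydashboard?var-server=web01&var-env=production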

Hubot Setup Problems

When setting up Hubot you can run into

Error: EACCES, permission denied '/home/xyz/.config/configstore/insight-yo.yml'
when installing Hubot with yeoman (check out Github #1292).

The solution is simple:

Recent Node.js with Hubot Hipchat Adapter

Today I had a strange issue when setting up Hubot with Hipchat according to the installation instructions from hubot-hipchat.

The build with

yo hubot --adapter hipchat
fails because it downloads the most recent hubot-hipchat NPM package 2.12.0 and then tries to extract 2.7.5, which of course fails.

The simple workaround is

Port Knocking And SSH ProxyCommand

When you use a port knocker like knockd you might want to do the knocking automatically from your ~/.ssh/config using "ProxyCommand".

Example Config

Host myserver
   User myuser
   Hostname myserver.com
   ProxyCommand bash -c '/usr/bin/knock %h 1000 2000 3000 4000; sleep 1; exec /bin/nc %h %p'
It is important not to forget the "exec" before invoking netcat!

See also SSH - Cheat Sheet