Someone at Debian increased security for all Debian servers a while ago by breaking debsecan for everything before Jessie: the vulnerability definitions were moved from secure-testing.debian.net to security-tracker.debian.org.
Of course there was no way to issue a security fix for the Wheezy debsecan...
Workaround 1: Hotfix
So if you still want to scan your Wheezy systems you can hotfix debsecan before running it like this:
sed -i "s/http:\/\/secure-testing.debian.net\/debian-secure-testing/https:\/\/security-tracker.debian.org\/tracker/;s/project\/debsecan\/release\/1\//debsecan\/release\/1\//" /usr/bin/debsecan
Workaround 2: Pass Config
You can also pass the new source as an inline config file:
debsecan --config <(echo SOURCE="https://security-tracker.debian.org/tracker/debsecan/release/1/")
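With either workaround in place a scan might look like this (the suite name and output format are just examples; check the debsecan man page for the options your version supports):

debsecan --suite wheezy --format detail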
Looking again at the NS records of all scanned top 500 Alexa domains after the recent Dyn DDoS attack: 13 of the 14 Dyn customers that previously relied solely on Dyn have now either switched away entirely or added additional non-Dyn DNS servers.
Who Switched Entirely
about.com, addthis.com, exoclick.com, github.com/.io, quora.com, speedtest.net, zendesk.com
Who Switched to "Multi"-DNSetsy.com, paypal.com, shutterstock.com, therverge.com, weebly.com
Details
Here is a summary of the NS record changes. To allow automatic comparison of the server names, all numbers in the DNS server names have been stripped in the following table:
| Site | Before (15.10.) | After (24.10.) |
Looking at NS records today, some of yesterday's affected companies decided to change things after
the DoS on Dyn. As the NS record is not really a customer-facing feature,
this is more an indication of Dyn's customers' expectations. One could argue
that switching away from Dyn indicates fear of more downtimes to come.
Here is a summary of changed NS records so far:
| Site | Before (15.10.) | After (22.10.) |
After today's outages of various major websites you might ask who else was affected besides well-known sites such as Amazon, Twitter or GitHub.
Using the results of a monthly scan I automatically run on the top 500 Alexa sites, it is easy to find out. The only thing you need to know is that the Dyn DNS server domain is dynect.net (detailed results).
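If you want to check a single domain yourself, listing its NS records is enough and a quick dig query will do (the domain below is just an example):

dig +short NS twitter.com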
Top Affected Sites
6pm.com about.com adcash.com addthis.com amazon.ca amazon.cn amazon.co.jp amazon.co.uk amazon.com amazon.de amazon.es amazon.fr amazon.in amazon.it answers.com bitly.com businessinsider.com chase.com chip.de disqus.com ebay.co.uk ebay.com ebay.com.au ebay.de ebay.in ebay.it etsy.com evernote.com exoclick.com github.com github.io goodreads.com hostgator.com huffingtonpost.com imdb.com indeed.com indiatimes.com jmpdirect01.com moz.com nytimes.com outbrain.com overstock.com pandora.com paypal.com photobucket.com pornhub.com quora.com redtube.com scribd.com shutterstock.com soundcloud.com speedtest.net stumbleupon.com t.co theguardian.com theverge.com tripadvisor.com trovi.com tube8.com tumblr.com twimg.com twitch.tv twitter.com uploaded.net webmd.com weebly.com wikia.com wix.com xhamster.com youporn.com zendesk.com zillow.com
Probably Different Impact
Note that not all of these sites were equally affected, as some of them, like Amazon, are using multiple DNS providers. The Amazon main domain's NS records point to both Dyn and UltraDNS. In the same way probably none of the major adult sites was down, as they also relied on at least two providers.
So while Amazon users probably got to the website after one DNS timeout and a switch-over to UltraDNS, Twitter and GitHub users were not so lucky and had to hope for Dyn to respond. It will be interesting to see whether Twitter and GitHub will add a second DNS provider as a result of this.
The Need For "Multi-DNS"Reading different reports on this incident it seems to me the headlines are focussing on those sites using just Dyns DNS and not on those having a "Multi-DNS".
Detailed results on who is using which DNS domain can be found in the monthly DNS usage index.
Sometimes you need to increase the open file limit for an application server or the maximum shared memory for your ever-growing master database. In such a case you edit your /etc/security/limits.conf and then wonder how to make the changed limits visible to check whether you have set them correctly. You do not want to find out that they were wrong when your master DB doesn't come up after some incident in the middle of the night...
Instantly Applying Limits to Running Processes
Actually you might want to apply the changes directly to a running process in addition to changing /etc/security/limits.conf. Recent Linux distributions (e.g. Debian Jessie) ship the tool "prlimit" to get/set limits.
Usage for changing limits for a PID is
prlimit --pid <pid> --<limit>=<soft>:<hard>

for example
prlimit --pid 12345 --nofile=1024:2048

If you are unlucky and do not have prlimit yet, check out this instruction to compile your own version, because despite the missing user-space tool the prlimit() system call has been in the kernel for quite a while (since 2.6.36).
Alternative #1: Re-Login with "sudo -i"
If you do not have prlimit yet and want a changed limit configuration to become visible, you might want to try "sudo -i". The reason: you need to re-login, as limits from /etc/security/* are only applied on login!
But wait: what about users without login? In such a case you log in as root (which might not share their limits) and sudo into the user: so no real login as the user. In this case you must make sure to use the "-i" option of sudo:
sudo -i -u <user>

to simulate an initial login with sudo. This will apply the new limits.
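To verify that the new limits are really picked up you can run a quick check like this (the user name "appuser" is just an assumption, replace it with your service user):

sudo -i -u appuser bash -c 'ulimit -Sn; ulimit -Hn'   # soft and hard open file limits after a simulated login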
Alternative #2: Make it work for sudo without "-i"
Whether you need "-i" depends on the PAM configuration of your Linux distribution. If you need it, then PAM probably loads "pam_limits.so" only in /etc/pam.d/login, which means at login time but not on sudo. This was introduced in Ubuntu Precise for example. By adding this line
session required pam_limits.so

in /etc/pam.d/sudo limits will also be applied when running sudo without "-i". Still, using "-i" might be easier.
Finally: Always Check Effective Limits
The best way is to change the limits and check them by running
prlimit               # for current shell
prlimit --pid <pid>   # for a running process

because it shows both soft and hard limits together. Alternatively call
ulimit -a               # for current shell
cat /proc/<pid>/limits  # for a running process

with the affected user.
How to detect screen sessions of other users:
screen -ls <user name>/
How to open screen to other users:
- Ctrl-A :multiuser on
- Ctrl-A :acladd <user to grant access>
Attach to another user's screen session:
With session name
screen -x <user name>/<session name>

With PID and tty
screen -x <user name>/<pid>.<ptty>.<host>
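Putting it all together, a session for user "alice" sharing a screen with user "bob" could look like this (user and session names are assumptions; note that multiuser mode usually requires the screen binary to be setuid root):

# alice starts a named session and opens it up
screen -S support
# inside screen: Ctrl-A :multiuser on
# inside screen: Ctrl-A :acladd bob

# bob attaches to alice's session
screen -x alice/support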
In 2010 I compiled a summary of HTML rendering widgets useful for embedding in Linux applications. Given recent changes and the switch of Liferea from WebKit to WebKit2 I felt it was time to post an updated version.
The following table gives a summary of the different HTML renderers, some long gone, some fully maintained:
| Name | Toolkit | Platform | Derived From | Driving Force | Active |
|------|---------|----------|--------------|---------------|--------|
| GtkHtml | GTK+ 1.0 | GNOME 1 | KHTML | GNOME 1 | No, long gone |
| GtkHtml2 | GTK+ 2.0 | GNOME 2 | GtkHtml | GNOME 2 | No, v2.11: Aug 2007 |
| GtkHtml3 | GTK+ 2.0 | GNOME 2 | GtkHtml | Ximian, Evolution | No, v3.14: May 2008 |
| GtkHtml4 | GTK+ 3.0 | GNOME 3 | GtkHtml | Ximian, Evolution | No, v4.6.6: Jul 2013 |
| WebKitGtk2 | GTK+ 3.0 | Webkit | Webkit | Apple Safari | Yes |
Given the list above one could conclude the only acceptable renderers are KHTML, wxHtml and WebkitGtk, simply based on project activity. Still, other renderers like GtkHtml2 and GtkHtml3 have come a long way and provide limited but stable functionality.
But the important question is: What features are supported by the different renderers?
If you find mistakes or have something to add please post a comment!
When using Hiera + Eyaml + GPG as Puppet configuration backend one can run into a multitude of really bad error messages. The problem here is mostly the obscene layering of libraries, e.g. Eyaml on top of Eyaml-GPG on top of either GPGME or Ruby GPG on top of GnuPG. Most errors originate from or are reported by GnuPG and are badly unspecified.
This post gives some hints on some of these errors.
[hiera-eyaml-core] General error
This is one of the worst errors you can get. One common cause is an expired GPG key. Check for it using
LANG=C gpg -k | grep expired

and remove the expired key with
gpg --delete-key <name>

As the error label indicates this can have other causes. In such a case check out the GPGME Debugging section below.
[hiera-eyaml-core] no such file to load -- hiera/backend/eyaml/encryptors/gpg
If you got this you probably forgot to install the Ruby GEM. Fix it by running
gem install hiera-eyaml-gpg
[hiera-eyaml-core] GPG command (gpg --homedir /home/lars/.gnupg --quiet --no-secmem-warning --no-permission-warning --no-tty --yes --decrypt) failed with: gpg: Sorry, no terminal at all requested - can't get input
This error indicates a problem getting your secret key password. As Eyaml triggers GPG in the background, no password prompt can be issued. So the only way to get one is the GPG agent. In this case it might be dead. Check if one is running:
pgrep -fl gpg-agent
[gpg] !!! Fatal: Failed to decrypt ciphertext (check settings and that you are a recipient) [hiera-eyaml-core] !!! Decryption failed
If you get this error message you might want to check whether one of the configured GPG recipients actually matches a private key in your keyring, e.g. by listing your secret keys with

gpg -K
GPGME Debugging
No matter what error message you get: if you cannot solve it, consider enabling debug traces by setting
export GPGME_DEBUG=9

Then run "eyaml" and check the output for sections of "_gpgme_io_read" that indicate the GnuPG responses, like this one:
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_run_io_cb: call: item=0x2363d70, handler (0x21abc30, 7)
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: enter: fd=0x7, buffer=0x238b6c0, count=1024
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: check: 5b474e5550473a5d 20494e565f524543 [GNUPG:] INV_REC
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: check: 5020302035444136 3939343530393537 P 0 5DA699450957
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: check: 3346354543394341 4138413232433134 3F5EC9CAA8A22C14
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: check: 3846433938453339 374335430a5b474e 8FC98E397C5C.[GN
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: check: 5550473a5d204641 494c55524520656e UPG:] FAILURE en
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: check: 6372797074203533 0a crypt 53.
GPGME 2016-06-16 12:33:55 <0x45b7> _gpgme_io_read: leave: result=89

If you look past the bad wrapping you see the following info here:
INV_RECP 0 5DA699450957.... FAILURE encrypt 53

Google for those messages and you often get a GnuPG-related result hinting at the cause. The above trace is about an invalid key with fingerprint 5DA699450957...., which you can find by listing your GPG keys and checking for expiration messages.
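A possible debugging workflow could look like this (the file name is just an example; "eyaml decrypt -f" is the hiera-eyaml CLI, and stderr is redirected to a file to capture the trace):

export GPGME_DEBUG=9
eyaml decrypt -f secrets.eyaml 2>gpgme.log
# extract only the GnuPG status lines from the trace
grep '_gpgme_io_read' gpgme.log | grep 'check:'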
OpenSSH 7+ deprecates the weak key exchange algorithm diffie-hellman-group1-sha1 and DSA public keys for both host and user keys, which leads to the following error messages:
Unable to negotiate with 172.16.0.10 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

or a simple permission denied when using a user DSA public key or
Unable to negotiate with 127.0.0.1: no matching host key type found. Their offer: ssh-dss

when connecting to a host with a DSA host key.
Workaround
Allow the different deprecated features in ~/.ssh/config:
Host myserver
    # To make pub ssh-dss keys work again
    PubkeyAcceptedKeyTypes +ssh-dss

    # To make host ssh-dss keys work again
    HostkeyAlgorithms +ssh-dss

    # To allow weak remote key exchange algorithm
    KexAlgorithms +diffie-hellman-group1-sha1

Alternatively pass those three options using -o. For example to allow the weak key exchange when running SSH:

ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 <host>
Solution
Replace all your DSA keys to avoid keys suddenly stopping to work. And upgrade all SSH versions to avoid offering legacy key exchange algorithms.
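One way to replace a user DSA key (key type, paths and target host below are just examples, adapt as needed):

# generate a new ed25519 key and distribute it before removing the old DSA key
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@myserver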
How do you know whether your Linux server (which has no desktop update notifier or unattended security updates running) needs to be updated? Of course an
apt-get update && apt-get --dry-run upgrade

might give an indication. But which of the package upgrades address security risks and which are only simple bugfixes you do not care about?
Check using APT
One useful possibility is apticron, which will tell you which packages should be upgraded and why. It presents you the package ChangeLog to decide whether you want to upgrade a package or not. Similar but less detailed is cron-apt, which also informs you of new package updates.
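Getting apticron running is quick; a minimal sketch (the mail address is an example, the config path is the usual Debian location):

apt-get install apticron
# have the daily report mailed to you
echo 'EMAIL="root@example.com"' >> /etc/apticron/apticron.conf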
Analyze Security Advisories
Now with all those CERT newsletters, security mailing lists and even security news feeds out there: why can't we check the other way around? Why not find out:
- Which security advisories do affect my system?
- Which ones I have already complied with?
- And which vulnerabilities are still there?
In the output you see lines starting with "CEBA-2012-xxxx", which is the CentOS security advisory naming scheme (while Ubuntu has USN-xxxx-x). Yellow color means the security advisory doesn't apply because the relevant packages are not installed. Green means the most recent package version is installed and the advisory shouldn't affect the system anymore. Finally red, of course, means that the machine is vulnerable.
Does it Work Reliably?
The script producing this output can be found here. I'm not yet satisfied with how it works and I'm not sure if it can be maintained at all given the brittle nature of the arbitrarily formatted/rich news feeds provided by the distros. But I like how it gives a clear indication of current advisories and their effect on the system.
Maybe persuading the Linux distributions into using a common feed format with easy to parse metadata might be a good idea...
How do you check your systems? What do you think of a package scanner using XML security advisory feeds?
What I do not want to do ever again is running
iptables -L -t nat

on a core production server with many many connections.
And why?
Well, because running "iptables -L" auto-loads the table-specific iptables kernel module, which for the "nat" table is "iptables_nat", which has a dependency on "nf_conntrack".
While "iptables_nat" doesn't do anything when there are no configured iptables rules, "nf_conntrack" immediately starts to drop connections as it cannot handle the many many connections the server has.
The probably only safe way to check for NAT rules is:
grep -q ^nf_conntrack /proc/modules && iptables -L -t nat
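Wrapped into a tiny script the check could look like this (a sketch; it simply refuses to touch the NAT table when nf_conntrack is not already loaded):

#!/bin/bash
# Only query the NAT table if nf_conntrack is already loaded,
# otherwise listing it would auto-load the module and enable connection tracking.
if grep -q '^nf_conntrack ' /proc/modules; then
    iptables -L -t nat
else
    echo "nf_conntrack not loaded - skipping NAT table listing"
fi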
Sometimes you want to delete a Gerrit change to make it invisible for everyone (for example when you did commit an unencrypted secret...). AFAIK this is only possible via the SQL interface which you can enter with
ssh -p 29418 <gerrit host> gerrit gsql

and issue a delete with:
update changes set status='d' where change_id='<change id>';

For more Gerrit hints check out the Gerrit Cheat Sheet.
Use "ProxyCommand" in your ~/.ssh/config to easily access servers hidden behind port knocking and jump hosts.
Host unreachable_host
    ProxyCommand ssh gateway_host exec nc %h %p
Automatic Jump Host Proxying
Host <your jump host>
    ForwardAgent yes
    Hostname <your jump host>
    User <your user name on jump host>
# Note the server list can have wild cards, e.g. "webserver-* database*"
Host <server list>
    ForwardAgent yes
    User <your user name on all these hosts>
    ProxyCommand ssh -q <your jump host> nc -q0 %h 22
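With such a block in ~/.ssh/config, connecting to any matching host transparently hops through the jump host (the host names below are just examples):

ssh webserver-042
scp backup.tar.gz database01:/tmp/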
Automatic Port Knocking
Host myserver
    User myuser
    Hostname myserver.com
    ProxyCommand bash -c '/usr/bin/knock %h 1000 2000 3000 4000; sleep 1; exec /bin/nc %h %p'
So far I found no easy solution to monitor for Linux kernel messages. So here is a simple Nagios plugin to scan dmesg output for interesting stuff:
#!/bin/bash

SEVERITIES="err,alert,emerg,crit"
WHITELIST="microcode: |\
Firmware Bug|\
i8042: No controller|\
Odd, counter constraints enabled but no core perfctrs detected|\
Failed to access perfctr msr|\
echo 0 > /proc/sys"

# Check for critical dmesg lines from this day
date=$(date "+%a %b %e")
output=$(dmesg -T -l "$SEVERITIES" | egrep -v "$WHITELIST" | grep "$date" | tail -5)

if [ "$output" == "" ]; then
    echo "All is fine."
    exit 0
fi

echo "$output" | xargs
exit 1

"Features" of the script above:
- It gives you the 5 most recent messages from today
- It allows to whitelist common but useless errors in $WHITELIST
- It uses "dmesg" to work when you already have disk I/O errors and to be faster than syslog parsing
Another good thing about the check is detecting OOM kills or fast spawning of processes.
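If you want to run it via Nagios NRPE, a check definition along these lines should work (script path and command name are assumptions):

# /etc/nagios/nrpe.cfg
command[check_dmesg]=/usr/local/lib/nagios/plugins/check_dmesg.sh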
As a Grafana user it is not obvious how to share links of template based dashboards.
Grafana does not change the request URI to reflect template variables you might enter (e.g. the server name).
Solution
There is a hidden feature: you can pass all template values via URL parameters using the following syntax
var-<parameter name>=<value>

Example link:
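For instance, assuming a dashboard with a template variable named "server", a shareable link could look like this (host and dashboard name are made up):

https://grafana.example.com/dashboard/db/my-dashboard?var-server=web01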
When setting up Hubot you can run into
Error: EACCES, permission denied '/home/xyz/.config/configstore/insight-yo.yml'

when installing Hubot with yeoman (check out Github #1292).
The solution is simple:
- Do not install the NPM modules globally
- Or properly use sudo when installing
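If you go the "no sudo" route, one way (paths are just an example) is to point npm's global prefix into your home directory before installing yeoman and the Hubot generator:

npm config set prefix "$HOME/.npm-global"
export PATH="$HOME/.npm-global/bin:$PATH"
npm install -g yo generator-hubot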
Today I had a strange issue when setting up Hubot with Hipchat
according to the installation instructions from hubot-hipchat.
The build with
yo hubot --adapter hipchat

fails because it downloads the most recent hubot-hipchat NPM package 2.12.0 and then tries to extract 2.7.5, which of course fails.
The simple workaround is
- To patch the package.json of the partial installation and add an explicit hubot-hipchat requirement for 2.12.0 (see the sketch after this list).
- Rerun the "yo" command and say no when being asked to overwrite package.json