Recent Posts

Puppet Solve Invalid byte sequence in US-ASCII

When you run "puppet agent" and get
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: invalid byte 
sequence in US-ASCII at /etc/puppet/modules/vendor/
or run "puppet apply" and get
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not 
parse for environment production: invalid byte sequence in US-ASCII at /etc/puppet/manifests/someclass.pp:1
then the root cause is probably the currently configured locale. Check the effective Ruby locale with
ruby -e 'puts Encoding.default_external'
Ensure that it returns a UTF-8 capable locale. If needed, set one and rerun Puppet:
export LANG=de_DE.utf-8
export LC_ALL=de_DE.utf-8
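To verify the fix before rerunning Puppet (a quick sanity check; the locale must of course exist on the system, so adjust to any UTF-8 locale you have):
locale -a | grep -i 'de_DE'                  # ensure the locale is generated at all
ruby -e 'puts Encoding.default_external'     # should now print UTF-8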

Benchmarking Redis and Memcached

If you ever need some meaningful facts for a possible Redis vs. memcached discussion, you might want to benchmark both on your target system.

While Redis ships with its own tool, redis-benchmark, memcached doesn't. But Redis author Salvatore Sanfilippo ported the Redis benchmark to memcached! So it is possible to measure quite similar metrics, using the same math and result summaries, for both key-value stores.

Benchmarking Redis

So set up Redis in cluster mode, master/slave, whatever you like, and run the Redis benchmark:
apt-get install redis-tools	# available starting with Wheezy backports
redis-benchmark -h <host>

Benchmarking Memcached

And do the same for memcached by compiling the memcached port of the benchmark:
apt-get install build-essential libevent-dev
git clone https://github.com/antirez/mc-benchmark.git
cd mc-benchmark
make
and running it with
./mc-benchmark -h <host>
The benchmark output has the same structure, with more output in the Redis version than in the memcached variant, as each command type is tested and the Redis protocol knows many more commands.
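For comparable numbers it may help to pin both tools to the same client count, request volume and payload size. A sketch, assuming mc-benchmark kept redis-benchmark's -c/-n/-d options (it is a direct port, but double-check its usage output):
redis-benchmark -h <host> -p 6379 -c 50 -n 100000 -d 32
./mc-benchmark -h <host> -p 11211 -c 50 -n 100000 -d 32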

Web Developer Solution Index

After a friend of mine suggested reading "The things you need to know to do web development", I felt the need to compile a solution index for the experiences described. In this interesting blog post the author describes his view of the typical learning curve of a web developer and the tools, solutions and concepts he discovers on his way to becoming a successful developer.

I do not want to summarize the post, but I wanted to compile a list of those solutions and concepts affecting the life of a web developer.

Markup Standards Knowledge
  HTML, CSS, JSON, YAML

Web Stack Layering
  Basic knowledge about:
  • Using TCP as transport protocol
  • Using HTTP as application protocol
  • Using SSL to encrypt the application layer with HTTPS
  • Using SSL certificates to prove identity for websites
  • Using (X)HTML for the application layer
  • Using DOM to access/manipulate application objects

Web Development Concepts
  • 3-tier server architecture
  • Distinction of static/dynamic content
  • Asynchronous CSS, JS loading
  • Asynchronous networking with Ajax
  • CSS box models
  • CSS media queries
  • Content delivery networks
  • UX, usability...
  • Responsive design
  • Page speed optimization
  • HTTP/HTTPS content mixing
  • Cross-domain content mixing
  • MIME types
  • API patterns: RPC, SOAP, REST
  • Localization and internationalization

Developer Infrastructure
  • Code version repo: usually Git. Hosted on github.com or self-hosted, e.g. GitLab
  • Continuous integration: Jenkins, Travis
  • Deployment: Jenkins, Travis, Fabric, Bamboo, CruiseControl

Frontend JS Frameworks
  Mandatory knowledge of jQuery, plus one or more JS frameworks such as Bootstrap, Foundation, React, Angular, Ember, Backbone, Prototype, GWT, YUI

Localization and Internationalization
  Frontend: usually in a JS lib, e.g. LocalePlanet or Globalize
  Backend: often gettext or a similar mechanism

Precompiling Resources
  For Javascript: Minify
  For CSS:
  For Images: ImageMagick
  Test everything with Google PageSpeed Insights

Backend Frameworks
  By language:
  • PHP: CakePHP, CodeIgniter, Symfony, Seagull, Zend, Yii (choose one)
  • Python: Django, Tornado, Pylons, Zope, Bottle (choose one)
  • Ruby: Rails, Merb, Camping, Ramaze (choose one)

Web Server Solutions
  nginx, Apache
  For load balancing: nginx, haproxy
  As PHP webserver: nginx+PHP-FPM

RDBMS
  MySQL (maybe Percona, MariaDB), Postgres

Caching/NoSQL
  Without replication: memcached, memcachedb, Redis
  With replication: Redis, Couchbase, MongoDB, Cassandra
  Good comparisons: #1 #2

Hosting
  If you are unsure about self-hosting vs. cloud hosting, have a look at the Cloud Calculator.

Blogs
  Do not try to self-host blogs. You will fail at keeping them secure and up-to-date, and sooner or later they get hacked. Use a blog hoster right from the start.

Chef: Editing Config Files

Most Chef recipes are about installing new software including all config files. Even when they are configuration recipes, they usually overwrite the whole file and provide a completely recreated configuration. If you have used cfengine or Puppet with augtool before, you'll miss the agile editing of config files.

In cfengine2...

You could write
editfiles:
{ home/.bashrc
   AppendIfNoSuchLine "alias rm='rm -i'"
}

While in Puppet...

You'd have:
augeas { "sshd_config":
  context => "/files/etc/ssh/sshd_config",
  changes => [
    "set PermitRootLogin no",
  ],
}

Now how to do it in Chef?

Maybe I have missed the correct way to do it so far (please comment if this is the case!), but there seems to be no way to use, for example, augtool with Chef, and there is no built-in cfengine-like editing. The only way I've seen so far is to use Ruby as a scripting language to change files using the Ruby runtime, or to use the Script resource which allows running other interpreters like bash, csh, perl, python or ruby.

To use it you define a block named after the interpreter you need and add a "code" attribute with a "here doc" operator (e.g. <<-EOT) containing the commands. Additionally you specify a working directory and a user for the script to be executed as. Example:
bash "some_commands" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
end
While it is not a one-liner as in cfengine, it is very flexible. The Script resource is widely used for ad-hoc source compilation and installation in community cookbooks, but we can also use it for standard file editing.

Finally, to do conditional editing, use not_if/only_if clauses at the end of the Script resource block, as in the following sketch.
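For example, here is a sketch that guards the .bashrc edit from above so it is only applied once (the resource name and the grep guard are made up for illustration):
bash "add_rm_alias" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
    not_if "grep -q \"alias rm='rm -i'\" /root/.bashrc"
end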

Puppet Apply Only Specific Classes

If you want to apply Puppet changes in a selective manner, you can run
puppet agent -t --tags Some::Class
on the client node to only run the single class named "Some::Class".

Why does this work? Because Puppet automatically creates tags for all classes you have. Ensure to upper-case all parts of the class name, because even if your actual Ruby class is "some::class", the Puppet tag will be "Some::Class".

Puppet Agent Noop Pitfalls

The puppet agent command has a --noop switch that allows you to perform a dry-run of your Puppet code.
puppet agent -t --noop
It doesn't change anything; it just tells you what it would change. This is more or less exact, due to the nature of dependencies that might only come into existence through runtime changes. But it is pretty helpful, and all Puppet users I know use it from time to time.

Unexpected Things

But there are some unexpected things about the noop mode:
  1. A --noop run does trigger the report server.
  2. A --noop run rewrites the YAML state files in /var/lib/puppet.
  3. And there is no state on the local machine that gives you the last "real" run result after you overwrite the state files with a --noop run.

Why might this be a problem?

Or the other way around: why does Puppet think this is not a problem? Probably because Puppet, as an automation tool, is expected to overwrite, and the past state doesn't really matter. If you use PE, or Puppet with PuppetDB or Foreman, you have reporting for past runs anyway, so there is no need for a history on the Puppet client.

Why I still do not like it: it prevents safe and simple local Nagios checks. Using the state YAML you might want to build a simple script checking for run errors, because you might want a Nagios alert about all errors that appear, or about hosts that did not run Puppet for quite some time (for example, I once disabled Puppet on a server for some work and forgot to re-enable it). Such a check reports false positives after each --noop run, until the next normal run. This hides errors.
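Such a check could be as simple as the following sketch, assuming the default state file location of the open source Puppet agent (paths and thresholds will need adjusting):
#!/bin/bash
# Nagios-style check against Puppet's last_run_summary.yaml
SUMMARY=/var/lib/puppet/state/last_run_summary.yaml
MAX_AGE=7200    # seconds until a missing run becomes a WARNING

[ -r "$SUMMARY" ] || { echo "UNKNOWN: cannot read $SUMMARY"; exit 3; }

failed=$(awk '$1 == "failed:" {print $2; exit}' "$SUMMARY")
last_run=$(awk '$1 == "last_run:" {print $2; exit}' "$SUMMARY")
age=$(( $(date +%s) - last_run ))

if [ "$failed" != "0" ]; then
    echo "CRITICAL: $failed resources failed in last Puppet run"; exit 2
elif [ "$age" -gt "$MAX_AGE" ]; then
    echo "WARNING: last Puppet run ${age}s ago"; exit 1
fi
echo "OK: last run ${age}s ago, no failures"; exit 0
And exactly this kind of check is what a --noop run silently resets.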

Of course you can build all this with cool DevOps-style SQL/REST/... queries to PuppetDB/Foreman, but checking state locally seems a bit more like the old-style, robust and simple sysadmin way. Actively asking the Puppet master or report server for the client state seems wrong. The client should know its state too.

From a software usability perspective I do not expect a tool to change its state when I pass --noop. It's unexpected. Of course the documentation is carefully phrased:
Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes.

Getting rid of Bash Ctrl-R

Today was a good day, as I stumbled over this post (at http://architects.dzone.com) hinting at the following bash key bindings:
bind '"\e[A":history-search-backward'
bind '"\e[B":history-search-forward'
It changes the behaviour of the up and down cursor keys to not go blindly through the history, but only through items matching the current prompt. The disadvantage, of course, is having to clear the line to go through the full history. But as this can be achieved by a Ctrl-C at any time, it is still preferable to Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R ...
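To make this permanent, the same mappings can go into ~/.inputrc in plain readline syntax (without the bind wrapper):
"\e[A": history-search-backward
"\e[B": history-search-forward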

Redis Performance Debugging

Here are some simple hints on debugging Redis performance issues.

Monitoring Live Redis Queries

Run the "monitor" command to see queries as they are sent against an Redis instance. Do not use on high traffic instance!
redis-cli monitor
The output looks like this:
redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"

Analyzing Slow Commands

When there are too many queries, better use "slowlog" to see the top slow queries running against your Redis instance:
slowlog get 25		# print top 25 slow queries
slowlog len		# number of entries in the slow log
slowlog reset		# clear the slow log

Debugging Latency

If you suspect latency to be an issue, use the built-in latency measuring support of redis-cli. First measure system latency on your Redis server with
redis-cli --intrinsic-latency 100
and then sample from your Redis clients with
redis-cli --latency -h <host> -p <port>
If you have problems with high latency, check whether transparent huge pages are disabled. If not, disable them with
echo never > /sys/kernel/mm/transparent_hugepage/enabled

Check Background Save Settings

If your instance seemingly freezes periodically, you probably have background dumping enabled.
grep ^save /etc/redis/redis.conf
Comment out all save lines and set up a cron job for dumping, or a Redis slave that can dump whenever it wants to.

Alternatively you can try to mitigate the effect using the "no-appendfsync-on-rewrite" option (set to "yes") in redis.conf.

Check fsync Setting

By default Redis runs fsync() every second. Other possibilities are "always" and "no".
grep ^appendfsync /etc/redis/redis.conf
So if you do not care about DB corruption you might want to set "no" here.
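For reference, the relevant redis.conf directives side by side; a sketch with illustrative values, only meaningful when AOF persistence is enabled:
appendonly yes                   # enable AOF persistence
appendfsync everysec             # default; alternatives: always, no
no-appendfsync-on-rewrite yes    # avoid fsync stalls during background rewrites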

Python re.sub Examples

Examples of re.sub() usage in Python.

Syntax

import re


result = re.sub(pattern, repl, string, count=0, flags=0)

Simple Examples

num = re.sub(r'abc', '', input)              # Delete pattern abc
num = re.sub(r'abc', 'def', input)           # Replace pattern abc -> def
num = re.sub(r'\s+', ' ', input)             # Collapse duplicate whitespace to single spaces
num = re.sub(r'abc(def)ghi', r'\1', input)   # Replace a string with a part of itself

Advanced Usage

Replacement Function

Instead of a replacement string you can provide a function performing dynamic replacements based on the match object, like this:
def my_replace(m):
    # illustrative condition: upper-case short words, keep everything else
    if len(m.group()) <= 4:
        return m.group().upper()
    return m.group()


result = re.sub(r"\w+", my_replace, input)
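A quick check of the illustrative function above (with a made-up input string):
print(re.sub(r"\w+", my_replace, "make short words loud"))
# prints: MAKE short words LOUD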

Count Replacements

When you want to know how many replacements happened, use re.subn() instead. It returns a tuple of the new string and the substitution count:
result = re.subn(pattern, replacement, input)
print('Result:', result[0])
print('Replacements:', result[1])

Puppet: List Changed Files

If you want to know which files were changed by Puppet in the last days:
cd /var/lib/puppet
for i in $(find clientbucket/ -name paths); do
	echo "$(stat -c %y $i | sed 's/\..*//')       $(cat $i)";
done | sort -n
will give you output like:
[...]
2015-02-10 12:36:25       /etc/resolv.conf
2015-02-17 10:52:09       /etc/bash.bashrc
2015-02-20 14:48:18       /etc/snmp/snmpd.conf
2015-02-20 14:50:53       /etc/snmp/snmpd.conf
[...]
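If you also need to roll back to one of those earlier versions, the local filebucket should be able to restore it; a sketch, where <md5> is the content hash stored as directory name next to the paths file:
puppet filebucket --local --bucket /var/lib/puppet/clientbucket restore /etc/resolv.conf <md5>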

How to Munin Graph JVM Memory Usage with Ubuntu Tomcat

The following description works when using the Ubuntu "tomcat7" package:

Grab the "java/jstat__heap" plugin from munin-contrib @ github and place it into "/usr/share/munin/plugins/jstat__heap".

Link the plugin into /etc/munin/plugins
ln -s /usr/share/munin/plugins/jstat__heap /etc/munin/plugins/jstat_myname_heap
Choose some useful name instead of "myname". This allows monitoring multiple JVM setups.

Configure each link you created, for example in a new plugin config file named "/etc/munin/plugin-conf.d/jstat", which should contain one section per JVM looking like this:
[jstat_myname_heap]
user tomcat7
env.pidfilepath /var/run/tomcat7.pid
env.javahome /usr/
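Finally test the plugin manually and restart munin-node so it gets picked up:
munin-run jstat_myname_heap      # should print heap values
service munin-node restart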

Puppet Check ERBs for Dynamic Scoping

If you ever need to upgrade a code base to Puppet 3.0 and strip all dynamic scoping from your templates:
for file in $( find . -name "*.erb" | sort); do 
    echo "------------ [ $file ]"; 
    if grep -q "<%[^>]*$" $file; then 
        content=$(sed '/<%/,/%>/!d' $file); 
    else
        content=$(grep "<%" $file); 
    fi;
    echo "$content" | egrep "(.each|if |%=)" | egrep -v "scope.lookupvar|@|scope\["; 
done


This is of course just a fuzzy match, but it should catch quite a few of the dynamic scope expressions out there. The solution has its limits, so use it with care.
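To illustrate what the script hunts for, a sketch with a made-up variable name: in Puppet 3.0 templates a bare variable lookup breaks, and has to become an @ reference or an explicit scope lookup:
<%# dynamically scoped, breaks in Puppet 3.0: %>
<%= ssh_port %>
<%# explicit alternatives that keep working: %>
<%= @ssh_port %>
<%= scope.lookupvar('ssh::params::port') %>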

Removing newlines with sed

My goal for today: I want to remember the official sed FAQ solution for removing all newlines from a file:
sed ':a;N;$!ba;s/\n//g' file
to avoid spending a lot of time on it when I need it again.
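And for the record, what the one-liner actually does:
sed ':a;N;$!ba;s/\n//g' file
# :a        define a label "a"
# N         append the next input line to the pattern space
# $!ba      if not on the last line, branch back to label "a"
# s/\n//g   finally delete all embedded newlines at once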

Splunk Cheat Sheet

Basic Searching Concepts

Simple searches look like the following examples. Note that there are literals with and without quoting, and that field selections use an "=":
Exception                # just the word
One Two Three            # those three words in any order
"One Two Three"          # the exact phrase


# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal

Basic Filtering

Two important filters are "rex" and "regex".

"rex" is for extraction a pattern and storing it as a new field. This is why you need to specifiy a named extraction group in Perl like manner "(?...)" for example
source="some.log" Fatal | rex "(?i) msg=(?P[^,]+)"
When running above query check the list of "interesting fields" it now should have an entry "FIELDNAME" listing you the top 10 fatal messages from "some.log"

What is the difference to "regex" now? Well "regex" is like grep. Actually you can rephrase
source="some.log" Fatal
to
source="some.log" | regex _raw=".*Fatal.*"
and get the same result. The syntax of "regex" is simply "<field>=<regex>". Using it makes sense once you want to filter on a specific field.

Calculations

Sum up a field and do some arithmetics:
... | stats sum(<field>) as result | eval result=(result/1000)
Determine the size of log events by checking len() of _raw. The p10() and p90() functions return the 10th and 90th percentiles:
| eval raw_len=len(_raw) | stats avg(raw_len), p10(raw_len), p90(raw_len) by sourcetype

Simple Useful Examples

Splunk usually auto-detects access.log fields so you can do queries like:
source="/var/log/nginx/access.log" HTTP 500
source="/var/log/nginx/access.log" HTTP (200 or 30*)
source="/var/log/nginx/access.log" status=404 | sort - uri 
source="/var/log/nginx/access.log" | head 1000 | top 50 clientip
source="/var/log/nginx/access.log" | head 1000 | top 50 referer
source="/var/log/nginx/access.log" | head 1000 | top 50 uri
source="/var/log/nginx/access.log" | head 1000 | top 50 method
...

Emailing Results

By appending "sendemail" to any query you get the result by mail!
... | sendemail to="[email protected]"

Timecharts

Create a timechart from a single field that should be summed up
... | table _time, <field> | timechart span=1d sum(<field>)
... | table _time, <field>, name | timechart span=1d sum(<field>) by name

Index Statistics

List All Indices
 | eventcount summarize=false index=* | dedup index | fields index
 | eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
 | REST /services/data/indexes | table title
 | REST /services/data/indexes | table title splunk_server currentDBSizeMB frozenTimePeriodInSecs maxTime minTime totalEventCount
on the command line you can call
$SPLUNK_HOME/bin/splunk list index
To query the amount of data written per index, the metrics.log can be used:
index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series
MB per day per indexer / index:
index=_internal metrics kb series!=_* "group=per_host_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb
index=_internal metrics kb series!=_* "group=per_index_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

Static Code Analysis of any Autotools Project with OCLint

The following is a HowTo describing the setup of OCLint for any C/C++ project using autotools.

1. OCLint Setup

The first step is downloading OCLint. As there are no packages so far, it's just extracting the tarball somewhere in $HOME. Check out the latest release link on http://archives.oclint.org/releases/.
cd
wget "http://archives.oclint.org/releases/0.8/oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz"
tar zxvf oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz 
This should leave you with a copy of OCLint in ~/oclint-0.8.1

2. Bear Setup

As projects usually consist of a lot of source files in different subdirectories, it is hard for a linter to know where to look for files. While "cmake" has support for dumping a list of source files it processes during a run, "make" doesn't. This is where the "Bear" wrapper comes into play: instead of
make
you run
bear make
so "bear" can track all files being compiled. It will dump a JSON file "compile_commands.json" which OCLint can use to do analysis of all files.

To set up Bear do the following:
cd
git clone https://github.com/rizsotto/Bear.git
cd Bear
cmake .
make
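Note that this only builds Bear. For "bear make" to work as shown below, the binary has to end up in your PATH, e.g. via the usual CMake install target:
sudo make install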

3. Analyzing Code

Now we have all the tools we need. Let's download some autotools project, like Liferea. Before doing code analysis it should be downloaded and built at least once:
git clone https://github.com/lwindolf/liferea.git
cd liferea
sh autogen.sh
make
Now we collect all code file compilation instructions with bear:
make clean
bear make
And if this succeeds we can start a complete analysis with
~/oclint-0.8.1/bin/oclint-json-compilation-database
which will run OCLint with the input from the "compile_commands.json" produced by "bear". Don't call "oclint" directly, as you'd need to pass all compile flags manually.

If all went well you should see code analysis lines like these:
[...]
conf.c:263:9: useless parentheses P3 
conf.c:274:9: useless parentheses P3 
conf.c:284:9: useless parentheses P3 
conf.c:46:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 33 exceeds limit of 10
conf.c:157:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 12 exceeds limit of 10
conf.c:229:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 30 exceeds limit of 10
conf.c:78:1: long method P3 Method with 55 lines exceeds limit of 50
conf.c:50:2: short variable name P3 Variable name with 2 characters is shorter than the threshold of 3
conf.c:52:2: short variable name P3 Variable name with 1 characters is shorter than the threshold of 3
[...]

How Common Are HTTP Security Headers Really?

A recent issue of the German iX magazine featured an article on improving end-user security by enabling HTTP security headers. The article gave the impression that all of them are quite common, and that a good DevOps would be unreasonable not to implement them immediately if the application supports them without problems.

This led me to check my monthly domain scan results of April 2014 to see who is actually using which header on their main pages. Results are, as always, limited to the top 200 Alexa sites and all larger German websites.
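If you want to check a single site yourself, the response headers are one curl call away (header names are case-insensitive):
curl -sI https://www.example.com/ | grep -iE 'x-xss-protection|x-content-type-options|content-security-policy|x-frame-options|strict-transport-security'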

Usage of X-XSS-Protection

The header is visible for only 14 of 245 (5%) of the scanned websites. As 2 of them just disable the setting, only 4% of the websites actually enable it.
Website                    Header
www.adcash.com             X-XSS-Protection: 1; mode=block
www.badoo.com              X-XSS-Protection: 1; mode=block
www.blogger.com            X-XSS-Protection: 1; mode=block
www.blogspot.com           X-XSS-Protection: 1; mode=block
www.facebook.com           X-XSS-Protection: 0
www.feedburner.com         X-XSS-Protection: 1; mode=block
www.github.com             X-XSS-Protection: 1; mode=block
www.google.de              X-XSS-Protection: 1; mode=block
www.live.com               X-XSS-Protection: 0
www.meinestadt.de          X-XSS-Protection: 1; mode=block
www.openstreetmap.org      X-XSS-Protection: 1; mode=block
www.tape.tv                X-XSS-Protection: 1; mode=block
www.xing.de                X-XSS-Protection: 1; mode=block; report=https://www.xing.com/tools/xss_reporter
www.youtube.de             X-XSS-Protection: 1; mode=block; report=https://www.google.com/appserve/security-bugs/log/youtube

Usage of X-Content-Type-Options

Here 15 of 245 websites (6%) enable the option.
Website                    Header
www.blogger.com            X-Content-Type-Options: nosniff
www.blogspot.com           X-Content-Type-Options: nosniff
www.deutschepost.de        X-Content-Type-Options: NOSNIFF
www.facebook.com           X-Content-Type-Options: nosniff
www.feedburner.com         X-Content-Type-Options: nosniff
www.github.com             X-Content-Type-Options: nosniff
www.linkedin.com           X-Content-Type-Options: nosniff
www.live.com               X-Content-Type-Options: nosniff
www.meinestadt.de          X-Content-Type-Options: nosniff
www.openstreetmap.org      X-Content-Type-Options: nosniff
www.spotify.com            X-Content-Type-Options: nosniff
www.tape.tv                X-Content-Type-Options: nosniff
www.wikihow.com            X-Content-Type-Options: nosniff
www.wikipedia.org          X-Content-Type-Options: nosniff
www.youtube.de             X-Content-Type-Options: nosniff

Usage of Content-Security-Policy

Actually only 1 website among the top 200 Alexa-ranked websites uses CSP, and this lonely site is GitHub. The problem with CSP obviously is the necessity of a clear structure for the origin domains of the site elements. And the fewer advertisements and tracking pixels you have, the easier it becomes...
Website                    Header
www.github.com             Content-Security-Policy: default-src *; script-src https://github.global.ssl.fastly.net https://ssl.google-analytics.com https://collector-cdn.github.com; style-src 'self' 'unsafe-inline' 'unsafe-eval' https://github.global.ssl.fastly.net; object-src https://github.global.ssl.fastly.net

Usage of X-Frame-Options

The X-Frame-Options header is currently delivered by 43 of 245 websites (17%).
Website                    Header
www.adcash.com             X-Frame-Options: SAMEORIGIN
www.adf.ly                 X-Frame-Options: SAMEORIGIN
www.avg.com                X-Frame-Options: SAMEORIGIN
www.badoo.com              X-Frame-Options: DENY
www.battle.net             X-Frame-Options: SAMEORIGIN
www.blogger.com            X-Frame-Options: SAMEORIGIN
www.blogspot.com           X-Frame-Options: SAMEORIGIN
www.dailymotion.com        X-Frame-Options: deny
www.deutschepost.de        X-Frame-Options: SAMEORIGIN
www.ebay.de                X-Frame-Options: SAMEORIGIN
www.facebook.com           X-Frame-Options: DENY
www.feedburner.com         X-Frame-Options: SAMEORIGIN
www.github.com             X-Frame-Options: deny
www.gmx.de                 X-Frame-Options: deny
www.gmx.net                X-Frame-Options: deny
www.google.de              X-Frame-Options: SAMEORIGIN
www.groupon.de             X-Frame-Options: SAMEORIGIN
www.imdb.com               X-Frame-Options: SAMEORIGIN
www.indeed.com             X-Frame-Options: SAMEORIGIN
www.instagram.com          X-Frame-Options: SAMEORIGIN
www.java.com               X-Frame-Options: SAMEORIGIN
www.linkedin.com           X-Frame-Options: SAMEORIGIN
www.live.com               X-Frame-Options: deny
www.mail.ru                X-Frame-Options: SAMEORIGIN
www.mozilla.org            X-Frame-Options: DENY
www.netflix.com            X-Frame-Options: SAMEORIGIN
www.openstreetmap.org      X-Frame-Options: SAMEORIGIN
www.oracle.com             X-Frame-Options: SAMEORIGIN
www.paypal.com             X-Frame-Options: SAMEORIGIN
www.pingdom.com            X-Frame-Options: SAMEORIGIN
www.skype.com              X-Frame-Options: SAMEORIGIN
www.skype.de               X-Frame-Options: SAMEORIGIN
www.softpedia.com          X-Frame-Options: SAMEORIGIN
www.soundcloud.com         X-Frame-Options: SAMEORIGIN
www.sourceforge.net        X-Frame-Options: SAMEORIGIN
www.spotify.com            X-Frame-Options: SAMEORIGIN
www.stackoverflow.com      X-Frame-Options: SAMEORIGIN
www.tape.tv                X-Frame-Options: SAMEORIGIN
www.web.de                 X-Frame-Options: deny
www.wikihow.com            X-Frame-Options: SAMEORIGIN
www.wordpress.com          X-Frame-Options: SAMEORIGIN
www.yandex.ru              X-Frame-Options: DENY
www.youtube.de             X-Frame-Options: SAMEORIGIN

Usage of HSTS Strict-Transport-Security

HSTS headers can only be found on a few front pages (8 of 245). Maybe they are more visible on login pages and avoided on front pages for performance reasons, maybe not; that would require further analysis. What can be said is that only some larger technology leaders are brave enough to use HSTS on the front page:
Website                    Header
www.blogger.com            Strict-Transport-Security: max-age=10893354; includeSubDomains
www.blogspot.com           Strict-Transport-Security: max-age=10893354; includeSubDomains
www.facebook.com           Strict-Transport-Security: max-age=2592000
www.feedburner.com         Strict-Transport-Security: max-age=10893354; includeSubDomains
www.github.com             Strict-Transport-Security: max-age=31536000
www.paypal.com             Strict-Transport-Security: max-age=14400
www.spotify.com            Strict-Transport-Security: max-age=31536000
www.upjers.com             Strict-Transport-Security: max-age=47336400

Conclusion

Security headers are not widespread, at least on website front pages. Most used is the X-Frame-Options header to prevent clickjacking, followed by X-Content-Type-Options to prevent MIME sniffing. Both of course are easy to implement, as they most probably do not change your website's behaviour. I'd expect to see more HSTS on bank and other online payment service websites, but it might well be that the headers appear only on subsequent redirects when logging in, which this scan doesn't do. With CSP being the hardest to implement, as you need complete control over all domain usage by application content and partner content you embed, it is no wonder that only github.com has implemented it. For me it is an indication of how clean their web application actually is.

Sharing Screen With Multiple Users

How to detect screen sessions of other users:

screen -ls <user name>/

How to open screen to other users:

  1. Ctrl-A :multiuser on
  2. Ctrl-A :acladd <user to grant access>

Attach to another user's screen session:

With session name
screen -x <user name>/<session name>
With PID and tty
screen -x <user name>/<pid>.<ptty>.<host>
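Note that attaching across different accounts usually requires the screen binary to be setuid root; this is distribution-specific, so treat the following as a sketch:
chmod u+s /usr/bin/screen
chmod 755 /var/run/screen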