Recent Posts

The damage of one second

Update: According to the AWS status page the incident was a problem related to BGP route leaking. AWS does not hint at a leap second related incident as originally suggested by this post!

Tonight we had another leap second, and not without some suffering. At the end of the post you can find two screenshots of outages suggested by downdetector.com. The screenshots were taken shortly after midnight UTC and you can easily spot the sites with problems by the distinct peak at the right side of the graph.

AWS Outage

What is common to many of the affected sites: they are hosted at AWS, which had some problems.

Quote:
[RESOLVED] Internet connectivity issues

Between 5:25 PM and 6:07 PM PDT we experienced an Internet connectivity issue with a provider outside of our network which affected traffic from some end-user networks. The issue has been resolved and the service is operating normally.

The root cause of this issue was an external Internet service provider incorrectly accepting a set of routes for some AWS addresses from a third-party who inadvertently advertised these routes. Providers should normally reject these routes by policy, but in this case the routes were accepted and propagated to other ISPs affecting some end-user’s ability to access AWS resources. Once we identified the provider and third-party network, we took action to route traffic around this incorrect routing configuration. We have worked with this external Internet service provider to ensure that this does not reoccur.

Incident Details

Graphs from downdetector.com

Note that those graphs indicate user reported issues:

Using the memcached telnet interface

This is a short summary of everything important that helps to inspect a running memcached instance. You need to know that memcached requires you to connect to it via telnet. The following post describes the usage of this interface.

How To Connect

Use "ps -ef" to find out which IP and port were passed when memcached was started and use the same with telnet to connect to memcached. Example:

telnet 10.0.0.2 11211
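
If you are unsure which address memcached listens on, the process list usually reveals the -l and -p options it was started with (the output line below is just an illustration):

ps -ef | grep memcached
# memcache  3451     1  0 10:00 ?  00:00:01 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 10.0.0.2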

Supported Commands

The supported commands (the official ones and some unofficial) are documented in the doc/protocol.txt document.

Sadly the syntax description isn't really clear and a simple help command listing the existing commands would be much better. Here is an overview of the commands you can find in the source (as of 16.12.2008):

Command     Description                                            Example
get         Reads a value                                          get mykey
set         Sets a key unconditionally                             set mykey 0 60 5
add         Adds a new key                                         add newkey 0 60 5
replace     Overwrites an existing key                             replace key 0 60 5
append      Appends data to an existing key                        append key 0 60 15
prepend     Prepends data to an existing key                       prepend key 0 60 15
incr        Increments a numerical key value by the given number   incr mykey 2
decr        Decrements a numerical key value by the given number   decr mykey 5
delete      Deletes an existing key                                delete mykey
flush_all   Invalidates all items immediately                      flush_all
            Invalidates all items in n seconds                     flush_all 900
stats       Prints general statistics                              stats
            Prints memory (slab) statistics                        stats slabs
            Prints malloc statistics                               stats malloc
            Prints higher level allocation statistics              stats items
            Controls/dumps detailed per-key statistics             stats detail
            Prints item size distribution statistics               stats sizes
            Resets statistics                                      stats reset
version     Prints the server version                              version
verbosity   Increases the log level                                verbosity
quit        Terminates the telnet session                          quit

Traffic Statistics

You can query the current traffic statistics using the command

stats
You will get a listing which shows the number of connections, bytes in/out and much more.

Example Output:

STAT pid 14868
STAT uptime 175931
STAT time 1220540125
STAT version 1.2.2
STAT pointer_size 32
STAT rusage_user 620.299700
STAT rusage_system 1545.703017
STAT curr_items 228
STAT total_items 779
STAT bytes 15525
STAT curr_connections 92
STAT total_connections 1740
STAT connection_structures 165
STAT cmd_get 7411
STAT cmd_set 28445156
STAT get_hits 5183
STAT get_misses 2228
STAT evictions 0
STAT bytes_read 2112768087
STAT bytes_written 1000038245
STAT limit_maxbytes 52428800
STAT threads 1
END
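
If you want the same listing non-interactively, e.g. from a shell script, you can pipe the commands through netcat instead of telnet (assuming nc is installed and memcached listens on 10.0.0.2:11211 as in the example above):

printf "stats\r\nquit\r\n" | nc 10.0.0.2 11211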

Memory Statistics

You can query the current memory statistics using

stats slabs

Example Output:

STAT 1:chunk_size 80
STAT 1:chunks_per_page 13107
STAT 1:total_pages 1
STAT 1:total_chunks 13107
STAT 1:used_chunks 13106
STAT 1:free_chunks 1
STAT 1:free_chunks_end 12886
STAT 2:chunk_size 100
STAT 2:chunks_per_page 10485
STAT 2:total_pages 1
STAT 2:total_chunks 10485
STAT 2:used_chunks 10484
STAT 2:free_chunks 1
STAT 2:free_chunks_end 10477
[...]
STAT active_slabs 3
STAT total_malloced 3145436
END

If you are unsure whether you have enough memory for your memcached instance, always look out for the "evictions" counter given by the "stats" command. If you have enough memory for the instance the "evictions" counter should be 0, or at least not increasing.

Which Keys Are Used?

There is no builtin function to directly determine the current set of keys. However you can use the

stats items
command to determine how many keys exist.
stats items
STAT items:1:number 220
STAT items:1:age 83095
STAT items:2:number 7
STAT items:2:age 1405
[...]
END
This at least helps to see if any keys are used. To dump the key names from a PHP script that already does the memcache access you can use the PHP code from 100days.de.
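
There is also the unofficial "stats cachedump <slab class> <limit>" command which dumps up to <limit> keys of one slab class. It is not part of the official protocol and may change or disappear, but as a quick check the dialogue can look roughly like this (key and sizes taken from the examples above):

stats cachedump 1 10
ITEM mykey [5 b; 1220540125 s]
END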

Puppet Solve Invalid byte sequence in US-ASCII

When you run "puppet agent" and get
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: invalid byte 
sequence in US-ASCII at /etc/puppet/modules/vendor/
or run "puppet apply" and get
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not 
parse for environment production: invalid byte sequence in US-ASCII at /etc/puppet/manifests/someclass.pp:1
then the root cause is probably the currently configured locale. Check the effective Ruby locale with
ruby -e 'puts Encoding.default_external'
Ensure that it returns a UTF-8 capable locale. If needed, set it and rerun Puppet:
export LANG=de_DE.utf-8
export LC_ALL=de_DE.utf-8
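
On Debian/Ubuntu you can make the setting persistent across logins with update-locale (the German locale is just the example from above, any UTF-8 locale works):

sudo update-locale LANG=de_DE.UTF-8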

Benchmarking Redis and Memcached

If you ever need to get some meaningful facts in a possible Redis vs memcached discussion you might want to benchmark both on your target system.

While Redis comes with a benchmark tool, redis-benchmark, memcached doesn't ship one. But Redis author Salvatore Sanfilippo ported the Redis benchmark to memcached! So it is possible to measure quite similar metrics, using the same math and result summaries, for both key-value stores.

Benchmarking Redis

So set up Redis in cluster mode, master/slave, or whatever you like, and run the Redis benchmark
apt-get install redis-tools	# available starting with Wheezy backports
redis-benchmark -h <host>

Benchmarking Memcached

And do the same for memcached by compiling the memcached port of the benchmark
apt-get install build-essential libevent-dev
git clone https://github.com/antirez/mc-benchmark.git
cd mc-benchmark
make
and running it with
./mc-benchmark -h <host>
The benchmark output has the same structure, with more output in the Redis version than in the memcached variant, as each command type is tested and the Redis protocol knows many more commands.
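
To keep the numbers comparable you probably want to run both benchmarks with the same request count and client concurrency. As mc-benchmark is a direct port, both tools should accept the -n (requests) and -c (clients) switches; the values below are just examples:

redis-benchmark -h <host> -n 100000 -c 50
./mc-benchmark -h <host> -n 100000 -c 50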

Web Developer Solution Index

After a friend of mine suggested reading "The things you need to know to do web development" I felt the need to compile a solution index for the experiences described. In this interesting blog post the author describes his view of the typical learning curve of a web developer and the tools, solutions and concepts he discovers on the way to becoming a successful developer.

I do not want to summarize the post but I wanted to compile a list of those solutions and concepts affecting the life of a web developer.

Markup Standards Knowledge: HTML, CSS, JSON, YAML
Web Stack Layering: basic knowledge about
  • Using TCP as transport protocol
  • Using HTTP as application protocol
  • Using SSL to encrypt the application layer with HTTPS
  • Using SSL certificates to proof identity for websites
  • Using (X)HTML for the application layer
  • Using DOM to access/manipulate application objects
Web Development Concepts:
  • 3 tier server architecture
  • Distinction of static/dynamic content
  • Asynchronous CSS, JS loading
  • Asynchronous networking with Ajax
  • CSS box models
  • CSS Media Queries
  • Content Delivery Networks
  • UX, Usability...
  • Responsive Design
  • Page Speed Optimization
  • HTTP/HTTPS content mixing
  • Cross domain content mixing
  • MIME types
  • API-Pattern: RPC, SOAP, REST
  • Localization and Internationalization
Developer Infrastructure:
  • Code Version Repo: usually Git, hosted at github.com or self-hosted e.g. with GitLab
  • Continuous Integration: Jenkins, Travis
  • Deployment: Jenkins, Travis, fabric, Bamboo, CruiseControl
Frontend JS Frameworks: mandatory knowledge of jQuery, plus one or more JS frameworks such as Bootstrap, Foundation, React, Angular, Ember, Backbone, Prototype, GWT, YUI
Localization and Internationalization: frontend usually via a JS lib, e.g. LocalePlanet or Globalize; backend often via gettext or a similar mechanism
Precompiling Resources:
  • For Javascript: Minify
  • For CSS:
  • For Images: ImageMagick
  • Test everything with Google PageSpeed Insights
Backend Frameworks: by language
  • PHP: CakePHP, CodeIgniter, Symfony, Seagull, Zend, Yii (choose one)
  • Python: Django, Tornado, Pylons, Zope, Bottle (choose one)
  • Ruby: Rails, Merb, Camping, Ramaze (choose one)
Web Server Solutions: nginx, Apache
  • For loadbalancing: nginx, haproxy
  • As PHP webserver: nginx + PHP-FPM
RDBMS: MySQL (maybe Percona, MariaDB), Postgres
Caching/NoSQL:
  • Without replication: memcached, memcachedb, Redis
  • With replication: Redis, Couchbase, MongoDB, Cassandra
  • Good comparisons: #1 #2
Hosting: If you are unsure about self-hosting vs. cloud hosting have a look at the Cloud Calculator.
Blogs: Do not try to self-host blogs. You will fail to keep them secure and up-to-date and sooner or later they will be hacked. Use a blog hoster right from the start: Choose provider

Chef: Editing Config Files

Most chef recipes are about installing new software including all config files. Even if they are configuration recipes they usually overwrite the whole file and provide a completely recreated configuration. If you have used cfengine or puppet with augtool before, you'll miss the agile editing of config files.

In cfengine2...

You could write
editfiles:
{ home/.bashrc
   AppendIfNoSuchLine "alias rm='rm -i'"
}

While in puppet...

You'd have:
augeas { "sshd_config":
  context => "/files/etc/ssh/sshd_config",
  changes => [
    "set PermitRootLogin no",
  ],
}

Now how to do it in Chef?

Maybe I missed the correct way to do it so far (please comment if this is the case!), but there seems to be no way to use for example augtool with Chef, and there is no built-in cfengine-like editing. The only way I've seen so far is to use Ruby as a scripting language to change files using the Ruby runtime, or to use the Script resource, which allows running other interpreters like bash, csh, perl, python or ruby.

To use it you define a block named after the interpreter you need and add a "code" attribute with a "here doc" operator (e.g. <<-EOT) containing the commands. Additionally you can specify a working directory and a user for the script to be executed with. Example:
bash "some_commands" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
end
While it is not a one-liner as in cfengine, it is very flexible. The Script resource is widely used to perform ad-hoc source compilation and installation in the community cookbooks, but we can also use it for standard file editing.

Finally to do conditional editing use not_if/only_if clauses at the end of the Script resource block.
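
For example, a guard can keep the .bashrc line from being appended on every run; a minimal sketch based on the block above (the grep condition is just one way to express the guard):

bash "some_commands" do
    user "root"
    cwd "/tmp"
    code <<-EOT
       echo "alias rm='rm -i'" >> /root/.bashrc
    EOT
    not_if 'grep -q "alias rm=" /root/.bashrc'
end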

Puppet Apply Only Specific Classes

If you want to apply Puppet changes in a selective manner you can run
puppet agent -t --tags Some::Class
on the client node to only run the single class named "Some::Class".

Why does this work? Because Puppet automatically creates tags for all classes you have. Ensure to upper-case all parts of the class name, because even if your actual class is "some::class" the Puppet tag will be "Some::Class".
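
The --tags option also takes a comma-separated list, so you can apply a couple of classes in one run (the second class name is just a placeholder):

puppet agent -t --tags Some::Class,Other::Class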

Puppet Agent Noop Pitfalls

The puppet agent command has a --noop switch that allows you to perform a dry-run of your Puppet code.
puppet agent -t --noop
It doesn't change anything, it just tells you what it would change. This is more or less exact, as dependencies might only come into existence through runtime changes. But it is pretty helpful and all Puppet users I know use it from time to time.

Unexpected Things

But there are some unexpected things about the noop mode:
  1. A --noop run does trigger the report server.
  2. The --noop run rewrites the YAML state files in /var/lib/puppet.
  3. And there is no state on the local machine that gives you the last "real" run result after you overwrite the state files with a --noop run.

Why might this be a problem?

Or the other way around: why does Puppet think this is not a problem? Probably because Puppet as an automation tool is expected to overwrite, and the past state doesn't really matter. If you use PE or Puppet with PuppetDB or Foreman you have reporting for past runs anyway, so there is no need to keep a history on the Puppet client.

Why I still do not like it: it prevents safe and simple local Nagios checks. Using the state YAML you might want to build a simple script checking for run errors, because you might want a Nagios alert about all errors that appear, or about hosts that did not run Puppet for quite some time (for example after I disabled Puppet on a server for some task and forgot to re-enable it). Such a check reports false positives each time someone does a --noop run, until the next normal run. This hides errors.

Of course you can build all this with cool DevOps-style SQL/REST/... queries to PuppetDB/Foreman, but checking state locally seems a bit more like the old-style, robust and simple sysadmin way. Actively asking the Puppet master or report server for the client state seems wrong. The client should know too.
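
For illustration, such a local check could be as simple as the following sketch, assuming the default state file location and the usual last_run_summary.yaml keys (paths, keys and thresholds need to be adapted to your setup). Note that it suffers from exactly the --noop pitfall described here:

#!/bin/bash
# Sketch of a local Puppet run check for Nagios: alert on failed resources
# or on a last run older than 2 hours (file path and YAML keys are assumptions)
SUMMARY=/var/lib/puppet/state/last_run_summary.yaml

FAILED=$(awk '/^ *failed:/ {print $2; exit}' "$SUMMARY")
LAST_RUN=$(awk '/^ *last_run:/ {print $2; exit}' "$SUMMARY")

if [ "${FAILED:-1}" -gt 0 ]; then
    echo "CRITICAL - $FAILED failed resources in last Puppet run"
    exit 2
fi
if [ $(( $(date +%s) - ${LAST_RUN:-0} )) -gt 7200 ]; then
    echo "WARNING - last Puppet run is older than 2 hours"
    exit 1
fi
echo "OK - last Puppet run had no failures"
exit 0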

From a software usability perspective I do not expect a tool to change its state when I pass --noop. It's unexpected. Of course the documentation is carefully phrased:
Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes.

Getting rid of Bash Ctrl-R

Today was a good day, as I stumbled over this post (at http://architects.dzone.com) hinting at the following bash key bindings:
bind '"\e[A":history-search-backward'
bind '"\e[B":history-search-forward'
It changes the behaviour of the up and down cursor keys to not go blindly through the history but only through items matching the current prompt. This comes at the disadvantage of having to clear the line to go through the full history. But as this can be achieved with a Ctrl-C at any time, it is still preferable to Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R ....
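
To make the bindings permanent put the two bind lines into your ~/.bashrc, or equivalently into ~/.inputrc in plain readline syntax:

"\e[A": history-search-backward
"\e[B": history-search-forward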

Simple Chef to Nagios Hostgroup Export

When you are automating with chef and use plain Nagios for monitoring you will find yourself duplicating quite some configuration. One large part is the hostgroup definitions, which usually map to many of the chef roles. So if the roles are defined in chef anyway they should be synced to Nagios.

Using "knife" one can extract the roles of a node like this
knife node show -a roles $node | grep -v "^roles:"

Scripting The Role Dumping

Note though that knife only shows roles that were already applied on the server. But this shouldn't be a big problem for a synchronization solution. The next step is to create a usable hostgroup definition in Nagios. To avoid colliding with existing hostgroups let's prefix the generated hostgroup names with "chef-". The only challenge is regrouping the role lists given per node by chef into host name lists per role. In Bash 4, using an associative array, this could be done like this:
declare -A roles

for node in $(knife node list); do
    for role in $(knife node show -a roles $node | grep -v "^roles:"); do
        roles["$role"]=${roles["$role"]}"$node "
    done
done
Given this it is easy to dump Icinga hostgroup definitions. For example
for role in ${!roles[*]}; do
   echo "define hostgroup {
   hostgroup_name chef-$role
   members ${roles[$role]}
}
"
done
That makes ~15 lines of shell script and a cronjob entry to integrate Chef with Nagios. Of course you also need to ensure that each host name provided by chef has a Nagios host definition. If the host names resolve you could just dump a host definition while looping over the host list, as sketched below. In any case there is no excuse not to export the chef config :-)
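
Such a host definition dump could look like the following sketch ("generic-host" is just a placeholder for whatever host template your Nagios setup uses):

for node in $(knife node list); do
   echo "define host {
   use generic-host
   host_name $node
   address $node
}
"
done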

Easy Migrating

Migrating to such an export is easy thanks to the "chef-" namespace prefix for generated hostgroups. It allows you to smoothly migrate existing Nagios definitions at your own pace. Be sure to only reload Nagios (not restart it) via cron, and to do it at a reasonable time to avoid breaking things.

How to Munin Graph JVM Memory Usage with Ubuntu tomcat

The following description works when using the Ubuntu "tomcat7" package:

Grab the "java/jstat__heap" plugin from munin-contrib @ github and place it into "/usr/share/munin/plugins/jstat__heap".

Link the plugin into /etc/munin/plugins
ln -s /usr/share/munin/plugins/jstat__heap /etc/munin/plugins/jstat_myname_heap
Choose some useful name instead of "myname". This allows monitoring multiple JVM setups.

Configure each link you created, for example in a new plugin config file named "/etc/munin/plugin-conf.d/jstat", which should contain one section per JVM looking like this
[jstat_myname_heap]
user tomcat7
env.pidfilepath /var/run/tomcat7.pid
env.javahome /usr/
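
Afterwards you can check the plugin manually and restart munin-node to activate it (using the plugin name chosen above):

munin-run jstat_myname_heap
service munin-node restart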

Splunk Cheat Sheet

Basic Searching Concepts

Simple searches look like the following examples. Note that there are literals with and without quoting and that there are field selections with an "=":
Exception                # just the word
One Two Three            # those three words in any order
"One Two Three"          # the exact phrase


# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal

Basic Filtering

Two important filters are "rex" and "regex".

"rex" is for extracting a pattern and storing it as a new field. This is why you need to specify a named extraction group in Perl-like manner "(?P<name>...)", for example
source="some.log" Fatal | rex "(?i) msg=(?P<FIELDNAME>[^,]+)"
When running the above query check the list of "interesting fields": it should now have an entry "FIELDNAME" listing the top 10 fatal messages from "some.log"

What is the difference to "regex" now? Well "regex" is like grep. Actually you can rephrase
source="some.log" Fatal
to
source="some.log" | regex _raw=".*Fatal.*"
and get the same result. The syntax of "regex" is simply "<field>=<regular expression>". Using it makes sense once you want to filter on a specific field.

Calculations

Sum up a field and do some arithmetics:
... | stats sum(<field>) as result | eval result=(result/1000)
Determine the size of log events by checking len() of _raw. The p10() and p90() functions are returning the 10 and 90 percentiles:
| eval raw_len=len(_raw) | stats avg(raw_len), p10(raw_len), p90(raw_len) by sourcetype

Simple Useful Examples

Splunk usually auto-detects access.log fields so you can do queries like:
source="/var/log/nginx/access.log" HTTP 500
source="/var/log/nginx/access.log" HTTP (200 or 30*)
source="/var/log/nginx/access.log" status=404 | sort - uri 
source="/var/log/nginx/access.log" | head 1000 | top 50 clientip
source="/var/log/nginx/access.log" | head 1000 | top 50 referer
source="/var/log/nginx/access.log" | head 1000 | top 50 uri
source="/var/log/nginx/access.log" | head 1000 | top 50 method
...

Emailing Results

By appending "sendemail" to any query you get the result by mail!
... | sendemail to="[email protected]"

Timecharts

Create a timechart from a single field that should be summed up
... | table _time, <field> | timechart span=1d sum(<field>)
... | table _time, <field>, name | timechart span=1d sum(<field>) by name

Index Statistics

List All Indices
 | eventcount summarize=false index=* | dedup index | fields index
 | eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
 | REST /services/data/indexes | table title
 | REST /services/data/indexes | table title splunk_server currentDBSizeMB frozenTimePeriodInSecs maxTime minTime totalEventCount
on the command line you can call
$SPLUNK_HOME/bin/splunk list index
To query the amount of data written per index, metrics.log can be used:
index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series
MB per day per indexer / index
index=_internal metrics kb series!=_* "group=per_host_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb


index=_internal metrics kb series!=_* "group=per_index_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

Static Code Analysis of any Autotools Project with OCLint

The following is a HowTo describing the setup of OCLint for any C/C++ project using autotools.

1. OCLint Setup

The first step is downloading OCLint. As there are no packages so far, this means just extracting the tarball somewhere in $HOME. Check out the latest release link on http://archives.oclint.org/releases/.
cd
wget "http://archives.oclint.org/releases/0.8/oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz"
tar zxvf oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz 
This should leave you with a copy of OCLint in ~/oclint-0.8.1

2. Bear Setup

As projects usually consist of a lot of source files in different subdirectories, it is hard for a linter to know where to look for files. While "cmake" has support for dumping a list of source files it processes during a run, "make" doesn't. This is where the "Bear" wrapper comes into play: instead of
make
you run
bear make
so "bear" can track all files being compiled. It will dump a JSON file "compile_commands.json" which OCLint can use to do analysis of all files.

To setup Bear do the following
cd
git clone https://github.com/rizsotto/Bear.git
cd Bear
cmake .
make

3. Analyzing Code

Now we have all the tools we need. Let's download some autotools project like Liferea. Before doing code analysis it should be downloaded and built at least once:
git clone https://github.com/lwindolf/liferea.git
cd liferea
sh autogen.sh
make
Now we collect all code file compilation instructions with bear:
make clean
bear make
And if this succeeds we can start a complete analysis with
~/oclint-0.8.1/bin/oclint-json-compilation-database
which will run OCLint with the input from "compile_commands.json" produced by "bear". Don't call "oclint" directly as you'd need to pass all compile flags manually.
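
If you want to keep the findings, the wrapper forwards everything after a "--" separator to oclint, so something like the following should work to write them into a file (treat the exact flag as an assumption and check oclint -help for your version):

~/oclint-0.8.1/bin/oclint-json-compilation-database -- -o oclint-report.txt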

If all went well you could see code analysis lines like those:
[...]
conf.c:263:9: useless parentheses P3 
conf.c:274:9: useless parentheses P3 
conf.c:284:9: useless parentheses P3 
conf.c:46:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 33 exceeds limit of 10
conf.c:157:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 12 exceeds limit of 10
conf.c:229:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 30 exceeds limit of 10
conf.c:78:1: long method P3 Method with 55 lines exceeds limit of 50
conf.c:50:2: short variable name P3 Variable name with 2 characters is shorter than the threshold of 3
conf.c:52:2: short variable name P3 Variable name with 1 characters is shorter than the threshold of 3
[...]

PHP ini_set() Examples

Syntax of ini_set()

The ini_set() syntax is simple:
string ini_set ( string $varname , string $newvalue )
it is just a key/value setter. The important question is which values can be set. Below you find a list of working examples. Please note that you cannot change all php.ini options, especially those that need to be set before PHP initializes.

Useful Working ini_set() Examples

1. Enabling error display

On production sites you typically do not show errors in the page for usability and security reasons. But when you debug something live you might want to enable error display temporarily and just for you:
# Assuming 5.44.33.22 is your IP...
if ( $_SERVER["REMOTE_ADDR"] == "5.44.33.22") {
    ini_set('display_errors', '1');
}
Note: you may want to combine this with
error_reporting(E_ALL | E_STRICT);

2. Changing Memory Limit

When you need to increase memory from within the code:
ini_set("memory_limit","1000M");
Note though that this might be prevented by a Suhosin-hardened PHP installation.

3. Adding include paths

Normally this shouldn't be necessary. It is way cleaner to do it in php.ini, but if you bundle libraries and your administrator doesn't know:
<?php ini_set('include_path',ini_get('include_path').':../my-libs:');  ?>

When You Cannot Use ini_set()

For most php.ini settings you can't use ini_set(). To work around this, consider deploying a .htaccess along with your code, as a .htaccess can provide all PHP options to overwrite the default php.ini settings.

For example to change the HTTP POST limit add this line to a .htaccess read by your webserver:
php_value post_max_size 2500000
Note how the "php_value" prefix indicates settings for PHP. So the simple syntax is
php_value <key name> <value>
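
A few more .htaccess override examples of this kind (boolean options use "php_flag" instead; the values are just placeholders):

php_value memory_limit 128M
php_value upload_max_filesize 20M
php_flag display_errors off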

Never Forget _netdev with GlusterFS Mounts

When adding a GlusterFS share to /etc/fstab do not forget to add "_netdev" to the mount options. Otherwise on next boot your system will just hang!
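
Such a fstab entry could look like this sketch (server and volume name are placeholders):

server1:/myvolume  /mnt/gluster  glusterfs  defaults,_netdev  0  0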

Actually there doesn't seem to be a timeout. That would be nice too.

As a side note: do not forget that Ubuntu 12.04 even ignores "_netdev". So the network is not guaranteed to be up when mounting, and an additional upstart task or init script is needed anyway. But you still need "_netdev" to prevent hanging on boot.

I also have the impression that this only happens with stock kernel 3.8.x and not with 3.4.x!

PHP preg_replace() Examples

This post gives some simple examples for using regular expressions with preg_replace() in PHP scripts.

1. Syntax of preg_replace

The full syntax is
mixed preg_replace ( mixed $pattern , mixed $replacement , mixed $subject [, int $limit = -1 [, int &$count ]] )

2. Simple Replacing with preg_replace()

$result = preg_replace('/abc/', 'def', $string);   # Replace all 'abc' with 'def'
$result = preg_replace('/abc/i', 'def', $string);  # Replace with case insensitive matching
$result = preg_replace('/\s+/', '', $string);      # Strip all whitespaces

3. Advanced Usage of preg_replace()

Multiple replacements:

$result = preg_replace(
    array('/pattern1/', '/pattern2/'),
    array('replace1', 'replace2'),
    $string
);

Replacement Back References:

$result = preg_replace('/abc(def)hij/', '/\\1/', $string);
$result = preg_replace('/abc(def)hij/', '/$1/', $string);
$result = preg_replace('/abc(def)hij/', '/${1}/', $string);

Do only a finite number of replacements:

# Perform a maximum of 5 replacements
$result = preg_replace('/abc/', 'def', $string, 5);

Multi-line replacement

# Strip an HTML <tag>...</tag> block spanning multiple lines (the s modifier lets . match newlines)
$result = preg_replace('#<tag>.*?</tag>#s', '', $string);

Sharing Screen With Multiple Users

How to detect screen sessions of other users:

screen -ls <user name>/

How to open screen to other users:

  1. Ctrl-A :multiuser on
  2. Ctrl-A :acladd <user to grant access>

Attach to other users screen session:

With session name
screen -x <user name>/<session name>
With PID and tty
screen -x <user name>/<pid>.<ptty>.<host>
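
Note: attaching to another user's screen session typically also requires the screen binary to be setuid root; on Debian/Ubuntu that means something like the following (paths may differ on your distribution):

chmod u+s /usr/bin/screen
chmod 755 /var/run/screen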