Feb 28, 2024
ArgoCD: debugging reconciliation loops
This is a post to document a somewhat troublesome ArgoCD performance optimization result I experienced.
Jan 18, 2024
A typical problem when using Helm charts on Openshift is handling security context UID ranges. As Helm charts usually target kubernetes, they do not force you to set a proper security context.
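To see which UID range your chart's pods have to fit into, you can look at the namespace annotations Openshift maintains. A minimal sketch (the project name "myproject" is a placeholder):

# Show the UID range Openshift assigned to the project; under the restricted
# SCC a pod's runAsUser must fall into this range
oc get namespace myproject -o yaml | grep sa.scc.uid-range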
May 02, 2023
Do you also have this utterly useless "sleep" key on your keyboard right above the keypad? At the right upper corner of the keyboard? Right where you accidentally hit it in the midst of a meeting when reaching over for something?
Mar 25, 2022
The Github “Pull requests” link that you can find on top menu bar leads you to four useful pull request queries:
Apr 06, 2021
This is a short howto on workarounds needed to use the artifact metadata DB Grafeas (developed by Google and JFrog). While the upstream project does provide Swagger definitions, those do not work out-of-the-box with, for example, Swagger 2.4. The Grafeas server is implemented in Go using Protobuf for API bindings, so the offered Swagger bindings are not really used and are thus untested.
Mar 03, 2021
Azure DevOps wants you to provide secrets to pipelines using a so-called pipeline library. You can store single-line strings as secrets in the pipeline library. You cannot, though, store multi-line strings as secrets without messing up the line breaks.
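A workaround I would reach for (a sketch, not from the original post; the file and variable names are placeholders) is to base64-encode the multi-line value so it becomes a single-line secret, and decode it again inside the pipeline:

base64 -w0 key.pem                        # encode locally, store the output as the secret
echo "$MY_SECRET" | base64 -d > key.pem   # decode again in a pipeline step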
Mar 01, 2021
Feb 17, 2021
This can be done using the -async parameter of ffmpeg which, according to the documentation, "stretches/squeezes" the audio stream to match the timestamps. The parameter takes a numeric value for the samples per second to enforce.
ffmpeg -async 25 -i input.mpg <encoding options> -r 25
Try slowly increasing the -async value until audio and video matches.
When audio is ahead of video: as a special case the -async switch auto-corrects the start of the audio stream when passed as -async 1. So try running
ffmpeg -async 1 -i input.mpg <encoding options>
Instead of using -async you need to use -vsync to drop/duplicate frames in the video stream. There are two methods in the manual page, "-vsync 1" and "-vsync 2", and auto-detection with "-vsync -1". Using "-map" it is possible to specify the stream to sync against.
ffmpeg -vsync 1 -i input.mpg <encoding options>
ffmpeg -vsync 2 -i input.mpg <encoding options>
Interestingly Google shows people using -async and -vsync together. So it might be worth experimenting a bit to achieve the intended result :-)
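Such a combined invocation could look like this (just a sketch of the experiment, not a recommendation from the ffmpeg documentation):

ffmpeg -async 1 -vsync 1 -i input.mpg <encoding options> output.mpg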
If you have a constantly shifted sound/video track that the previous fix doesn’t work with, but you know the time shift that needs to be corrected, then you can easily fix it with one of the following two commands:
Example to shift by 3 seconds:
ffmpeg -i input.mp4 -itsoffset 00:00:03.0 -i input.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 output_shift3s.mp4
Note how you specify your input file 2 times with the first one followed by a time offset. Later in the command there are two -map parameters which tell ffmpeg to use the time-shifted video stream from the first -i input.mp4 and the audio stream from the second one.
I also added -vcodec copy -acodec copy to avoid re-encoding the video and losing quality. These parameters have to be added after the second input file and before the mapping options. Otherwise one runs into mapping errors.
Again an example to shift by 3 seconds:
ffmpeg -i input.mp4 -itsoffset 00:00:03.0 -i input.mp4 -vcodec copy -acodec copy -map 1:0 -map 0:1 output_shift3s.mp4
Note how the command is nearly identical to the previous command with the exception of the -map parameters being switched. So from the time-shifted first -i input.mp4 we now take the audio instead of the video and combine it with the normal video.
Feb 15, 2021
This post documents quite some research through the honestly quite sad Openshift documentation. The content below roughly corresponds to Openshift releases 4.4 to 4.7.
Dec 01, 2020
When you run a larger multi-tenant Jenkins instance you might wonder if everyone properly hides secrets from logs. The script below needs to be run as admin and will uncover all unmasked passwords in any pipeline job build:
Nov 19, 2020
I want to share this little video conversion script for the GNOME file manager Nautilus. As Nautilus supports custom scripts being executed for selected files I wrote a script to allow video transcoding from Nautilus.
Nov 17, 2020
With Helm 3, find releases in an unexpected state (see the sketch below):
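The exact command is truncated in this excerpt; a sketch matching the idea (Helm 3 flags, filtering out the expected states):

helm list -A -a | grep -Ev 'deployed|superseded'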
May 12, 2020
This post provides a summary of the possibilities to perform HTTP requests from a Jenkins pipeline.
Mar 30, 2020
After researching this for some hours I want to document it:
Mar 12, 2020
When you use Docker’s new BuildKit build engine either by
Mar 10, 2020
When using the Jenkins kubernetes plugin you can list all active pod templates like this
Mar 10, 2020
When providing this blog with some custom CSS to better format code examples I had trouble applying several of the online suggestions on how to add custom CSS in a Jekyll setup with the Minima theme active.
Feb 11, 2020
The old GitHub-based Helm chart repository is going to be deprecated soon and charts might vanish depending on how this goes.
Feb 11, 2020
As a reminder to myself I have compiled this list of opinionated best practices (in no particular order) to follow when using Helm seriously:
Feb 05, 2020
If checking a Redis database you can list all entries using
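A sketch of the usual approach (note that KEYS blocks the server and should be avoided on large production instances; --scan is the non-blocking variant):

redis-cli keys '*'
redis-cli --scan     # non-blocking alternative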
Nov 13, 2019
When using Helm > 2.14.3 you can suddenly end up with
Jul 03, 2019
It is quite impressive how hard it is to check a map key in Go templates to do some simple if conditions in your Helm charts or other kubernetes templates.
Jun 03, 2019
Stream #0.0(eng): Audio: aac, 48000 Hz, 2 channels, s16
Stream #0.1(eng): Video: h264, yuv420p, 1280x530, PAR 1:1 DAR 128:53, 25 tbr, 25 tbn, 50 tbc
Output #0, flv, to 'test.flv':
    Stream #0.0(eng): Video: flv (hq), yuv420p, 400x164 [PAR 101:102 DAR 050:2091], q=2-31, 300 kb/s, 1k tbn, 25 tbc
    Stream #0.1(eng): Audio: libmp3lame, 22050 Hz, 2 channels, s16, 64 kb/s
Stream mapping:
  Stream #0.1 -> #0.0
  Stream #0.0 -> #0.1
Press [q] to stop encoding
[aac @ 0x80727a0]channel element 1.0 is not allocated
Error while decoding stream #0.0
Error while decoding stream #0.0
Error while decoding stream #0.0
[...]

The message "Error while decoding stream #0.0" is repeated continuously. The resulting video is either unplayable or has no sound. Still the input video is playable in all standard players (VLC, in Windows...). The reason for the problem as I understood it is that the ffmpeg-builtin AAC codec cannot handle an audio stream with index "1.0". This is documented in various bugs (see ffmpeg issues #800, #871, #999, #1733...). It doesn't look like this will be handled by ffmpeg very soon. In fact it could well be that they'll handle it as an invalid input file.

Solution: Upgrade to the latest ffmpeg and faad library version and add "-acodec libfaad" in front of the "-i" switch. This uses the libfaad AAC decoder, which is said to be a bit slower than the ffmpeg-builtin one, but which decodes the AAC without complaining. For example:
ffmpeg -acodec libfaad -i input.mov -b 300kbit/s -ar 22050 -o test.flv

The "-acodec" preceding the "-i" option only influences the input audio decoding, not the audio encoding.
Jun 03, 2019
This is a short summary of what you need to avoid any type of interaction when accessing a machine by SSH.
Interaction Pitfalls:
Here is what you need to do to circumvent everything:
Example command line:
ssh -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host -n "/bin/ls"
May 29, 2019
When you unit test your bash scripts with bats and have tests like
May 29, 2019
When you write a Helm template and want to create a list using range use the following syntax
May 29, 2019
In all cases ensure you have the ansi-color plugin installed
May 05, 2019
./check_nofile_limit.sh -w 70 -c 85

could result in the following output indicating two problematic processes:

WARNING memcached (PID 2398) 75% of 1024 used
CRITICAL apache (PID 2392) 94% of 4096 used

Here is the check script doing this:

#!/bin/bash
# MIT License
#
# Copyright (c) 2017 Lars Windolf
Jan 09, 2019
Oct 30, 2018
Oct 30, 2018
Jun 20, 2018
Mar 27, 2018
Mar 27, 2018
Mar 17, 2018
Dec 27, 2017
Dec 22, 2017
Dec 20, 2017
Dec 20, 2017
Dec 19, 2017
Dec 18, 2017
Nov 22, 2017
Nov 14, 2017
Nov 13, 2017
Nov 12, 2017
Nov 07, 2017
Oct 24, 2017
Oct 23, 2017
Oct 19, 2017
Oct 15, 2017
Oct 15, 2017
Oct 11, 2017
Sep 30, 2017
Aug 07, 2017
Mar 28, 2017
Mar 01, 2017
Feb 27, 2017
Feb 06, 2017
Oct 25, 2016
Oct 22, 2016
Oct 21, 2016
Oct 09, 2016
screen -ls <user name>/
screen -x <user name>/<session name>

With PID and tty:
screen -x <user name>/<pid>.<ptty>.<host>
Oct 09, 2016
Oct 07, 2016
Jul 14, 2016
May 23, 2016
apt-get update && apt-get --dry-run upgrade

might give an indication. But which of the package upgrades stand for security risks and which are only simple bugfixes you do not care about?
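A rough first filter (just a sketch; it relies on the security repository showing up in the candidate version's origin, which holds for the Debian and Ubuntu security pockets):

apt-get --dry-run upgrade | grep -i security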
May 17, 2016
Apr 01, 2016
Mar 31, 2016
Mar 29, 2016
#!/bin/bash

SEVERITIES="err,alert,emerg,crit"
WHITELIST="microcode: |\
Firmware Bug|\
i8042: No controller|\
Odd, counter constraints enabled but no core perfctrs detected|\
Failed to access perfctr msr|\
echo 0 > /proc/sys"

# Check for critical dmesg lines from this day
date=$(date "+%a %b %e")
output=$(dmesg -T -l "$SEVERITIES" | egrep -v "$WHITELIST" | grep "$date" | tail -5)

if [ "$output" == "" ]; then
    echo "All is fine."
    exit 0
fi

echo "$output" | xargs
exit 1

"Features" of the script above:
Jan 15, 2016
Jan 13, 2016
Jan 13, 2016
Dec 11, 2015
Dec 11, 2015
Sep 18, 2015
Sep 02, 2015
Sep 02, 2015
Aug 09, 2015
plugin {
  acl_shared_dict = file:/var/lib/dovecot/db/shared-mailboxes.db
}

Check if such a file was created and is populated with new entries when you add ACLs from the mail client. As long as entries do not appear here, nothing can work.
doveadm acl debug -u [email protected] shared/users/box
Jul 02, 2015
Jun 20, 2015
Apr 15, 2015
Mar 28, 2015
Mar 27, 2015
Mar 25, 2015
puppet apply -t --tags Some::Class

on the client node to only run the single class named "Some::Class". Why does this work? Because Puppet automatically creates tags for all classes you have. Ensure to upper-case all parts of the class name, because even if your actual Ruby class is "some::class" the Puppet tag will be "Some::Class".
Mar 25, 2015
puppet agent -t --noop

It doesn't change anything, it just tells you what it would change. More or less exactly, due to the nature of dependencies that might come into existence through runtime changes. But it is pretty helpful and all Puppet users I know use it from time to time.
Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes.
Mar 25, 2015
bind '"\e[A":history-search-backward' bind '"\e[B":history-search-forward'It changes the behaviour of the up and down cursor keys to not go blindly through the history but only through items matching the current prompt. Of course at the disadvantage of having to clear the line to go through the full history. But as this can be achieved by a Ctrl-C at any time it is still preferrable to Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R Ctrl+R ....
Mar 20, 2015
Mar 20, 2015
#includedir /etc/sudoers.d

which gives a

visudo: >>> /etc/sudoers: syntax error near line 28 <<<

Let's check the "sudoers" manpage again: full of EBNF notations! But nothing in the EBNF about comments being commands. At least under "Other special characters and reserved words" one finds

The pound sign ('#') is used to indicate a comment (unless it is part of a #include directive or unless it occurs in the context of a user name and is followed by one or more digits, in which case it is treated as a uid). Both the comment character and any text after it, up to the end of the line, are ignored.

Cannot this be done in a better way?
Mar 20, 2015
// Copyright (c) 2012 Lars Lindner <[email protected]>
//
// GPLv2 and later or MIT License - http://www.opensource.org/licenses/mit-license.php

var dayName = new Array("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun");
var monthName = new Array("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec");

/* simulates some of the format strings of strptime() */
function strptime(format, date) {
    var last = -2;
    var result = "";
    var hour = date.getHours();

    /* Expand aliases */
    format = format.replace(/%D/, "%m/%d/%y");
    format = format.replace(/%R/, "%H:%M");
    format = format.replace(/%T/, "%H:%M:%S");

    /* Note: we fail on strings without format characters */
    while(1) {
        /* find next format char */
        var pos = format.indexOf('%', last + 2);

        if(-1 == pos) {
            /* dump rest of text if no more format chars */
            result += format.substr(last + 2);
            break;
        } else {
            /* dump text after last format code */
            result += format.substr(last + 2, pos - (last + 2));

            /* apply format code */
            var formatChar = format.charAt(pos + 1);
            switch(formatChar) {
                case '%':
                    result += '%';
                    break;
                case 'C':
                    result += date.getYear();
                    break;
                case 'H':
                case 'k':
                    if(hour < 10) result += "0";
                    result += hour;
                    break;
                case 'M':
                    if(date.getMinutes() < 10) result += "0";
                    result += date.getMinutes();
                    break;
                case 'S':
                    if(date.getSeconds() < 10) result += "0";
                    result += date.getSeconds();
                    break;
                case 'm':
                    /* getMonth() is zero-based, +1 to get the month number */
                    if(date.getMonth() + 1 < 10) result += "0";
                    result += date.getMonth() + 1;
                    break;
                case 'a':
                case 'A':
                    /* getDay() returns 0 for Sunday, map it onto the Mon-first array */
                    result += dayName[(date.getDay() + 6) % 7];
                    break;
                case 'b':
                case 'B':
                case 'h':
                    result += monthName[date.getMonth()];
                    break;
                case 'Y':
                    result += date.getFullYear();
                    break;
                case 'd':
                case 'e':
                    if(date.getDate() < 10) result += "0";
                    result += date.getDate();
                    break;
                case 'w':
                    result += date.getDay();
                    break;
                case 'p':
                case 'P':
                    if(hour < 12) {
                        result += "am";
                    } else {
                        result += "pm";
                    }
                    break;
                case 'l':
                case 'I':
                    if(hour % 12 < 10) result += "0";
                    result += (hour % 12);
                    break;
            }
        }
        last = pos;
    }

    return result;
}
Mar 20, 2015
sqlite3 <database file>
SELECT name FROM sqlite_master;

It will dump all schema elements. One field of the result table contains the SQL used to create each schema element. If you only want a list of tables use the client command ".tables"
.tables
sqlite3 <database file> .dump >output.sql

See the next sections on how to dump a single table or export to CSV.
sqlite3 <database file> ".dump <table name>" >output.sql
echo ".mode csv select * from <table name>;" | sqlite3 >output.sql
To run a one time cleanup just run the following command on an sqlite database file. Ensure there is no program accessing the database file. If there is it will fail and do nothing:
sqlite3 my.db "VACUUM;"
PRAGMA auto_vacuum = INCREMENTAL;
PRAGMA auto_vacuum = FULL;

To query the current "auto_vacuum" setting run
PRAGMA auto_vacuum;

Read more in this detailed post about sqlite vacuuming!
Mar 20, 2015
buffer_append_space: alloc 10512504 not supported
rsync: writefd_unbuffered failed to write 4092 bytes [sender]: Broken pipe (32)
rsync: connection unexpectedly closed (36 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(635) [sender=3.0.3]
Don't bother to debug rsync! This is an SSH problem. You need to upgrade your SSH version (which is probably some 4.3 or older).

Mar 20, 2015
Mar 20, 2015
g++ -DHAVE_CONFIG_H -I. -I../.. -I../../include -Wall -g -O2 -MT 3gp.o -MD -MP -MF .deps/3gp.Tpo -c -o 3gp.o 3gp.cpp
In file included from mp4common.h:29:0,
                 from 3gp.cpp:28:
mpeg4ip.h:126:58: error: new declaration 'char* strcasestr(const char*, const char*)'
/usr/include/string.h:369:28: error: ambiguates old declaration 'const char* strcasestr(const char*, const char*)'
make[3]: *** [3gp.o] Error 1
Solution is to remove the declaration of strcasestr() in common/mp4v2/mpeg4ip.h (suggested here).

Mar 20, 2015
$ flvtool2 -kUP -metadatacreator:'some label' video.flv ERROR: EOFError ERROR: /usr/lib/ruby/site_ruby/1.8/flv/amf_string_buffer.rb:37:in `read' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/amf_string_buffer.rb:243:in `read__STRING' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/audio_tag.rb:56:in `read_header' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/audio_tag.rb:47:in `after_initialize' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/tag.rb:56:in `initialize' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/stream.rb:447:in `new' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/stream.rb:447:in `read_tags' ERROR: /usr/lib/ruby/site_ruby/1.8/flv/stream.rb:58:in `initialize' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:272:in `new' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:272:in `open_stream' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:238:in `process_files' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:225:in `each' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:225:in `process_files' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2/base.rb:44:in `execute!' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2.rb:168:in `execute!' ERROR: /usr/lib/ruby/site_ruby/1.8/flvtool2.rb:228 ERROR: /usr/bin/flvtool2:2:in `require' ERROR: /usr/bin/flvtool2:2 $
In the Wowza Media Server support forum is a hint on how to patch flvtool2 to solve the issue:

--- /usr/local/lib/site_ruby/1.8/flv/audio_tag.rb	2009-11-12 10:46:13.000000000 +0100
+++ lib/flv/audio_tag.rb	2010-03-17 11:25:35.000000000 +0100
@@ -44,7 +44,9 @@
 
   def after_initialize(new_object)
     @tag_type = AUDIO
-    read_header
+    if data_size > 0
+      read_header
+    end
   end
 
   def name

Crash Variant #2

Here is another crashing variant:
$ flvtool2 -kUP -metadatacreator:'some label' video.flv ERROR: EOFError ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:37:in `read' ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:75:in `read__AMF_string' ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:90:in `read__AMF_mixed_array' ERROR: /usr/local/lib/site_ruby/1.8/flv/amf_string_buffer.rb:134:in `read__AMF_data' ERROR: /usr/local/lib/site_ruby/1.8/flv/meta_tag.rb:40:in `after_initialize' ERROR: /usr/local/lib/site_ruby/1.8/flv/tag.rb:56:in `initialize' ERROR: /usr/local/lib/site_ruby/1.8/flv/stream.rb:451:in `new' ERROR: /usr/local/lib/site_ruby/1.8/flv/stream.rb:451:in `read_tags' ERROR: /usr/local/lib/site_ruby/1.8/flv/stream.rb:58:in `initialize' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:272:in `new' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:272:in `open_stream' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:238:in `process_files' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:225:in `each' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:225:in `process_files' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2/base.rb:44:in `execute!' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2.rb:168:in `execute!' ERROR: /usr/local/lib/site_ruby/1.8/flvtool2.rb:228 ERROR: /usr/local/bin/flvtool2:2:in `require' ERROR: /usr/local/bin/flvtool2:2 $
I have not yet found a solution... Update: I have found crash variant #2 to often happen with larger files. Using flvtool++ instead of flvtool2 always solved the problem. Using flvtool++ is also a good idea as it is much faster than flvtool2. Still both tools have their problems. More about this in the Comparison of FLV and MP4 metadata tagging tools.

Mar 20, 2015
Resampling with input channels greater than 2 unsupported.
Can not resample 6 channels @ 48000 Hz to 6 channels @ 48000

you are probably trying to encode from AAC with 5.1 audio to less than 6 channels or a different audio sampling rate. There are three solutions:
ffmpeg -y -i source.avi -acodec copy source.6.aac
faad -d -o source.2.pcm source.6.aac
ffmpeg -y -i source.avi -i source.2.pcm -map 0:0 -map 1:0 -vcodec copy -acodec copy output.avi
Mar 20, 2015
Stream #0.0(eng): Video: svq3, yuvj420p, 640x476, 1732 kb/s, 25 fps, 25 tbr, 600 tbn, 600 tbc
...
[svq3 @ 0x806bfe0] SVQ3 does not support multithreaded decoding, patch welcome! (check latest SVN too)
...
Error while opening decoder for input stream #0.0
Instead of simply using only one thread and just working, ffmpeg bails out. What a pain. You need to specify "-threads 1" or no threads option at all for decoding to work.

Mar 20, 2015
[...] sudo will read each file in /etc/sudoers.d, skipping file names that end in ~ or contain a . character to avoid causing problems with package manager or editor temporary/backup files. [...]

This means if you have a Unix user like "lars.windolf" you do not want to create a file

/etc/sudoers.d/lars.windolf

The evil thing is neither sudo nor visudo warns you about the mistake and the rules just do not work. And if you have some other definition files with the same rule and just a file name without a dot you might wonder about your sanity :-)
Mar 20, 2015
merb : chef-server (api) : worker (port 4000) ~ Failed to authenticate. Ensure that your client key is valid. - (Merb::ControllerExceptions::Unauthorized)
/usr/share/chef-server-api/app/controllers/application.rb:56:in `authenticate_every'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:352:in `send'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:352:in `_call_filters'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:344:in `each'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:344:in `_call_filters'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:286:in `_dispatch'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:284:in `catch'
/usr/lib/ruby/vendor_ruby/merb-core/controller/abstract_controller.rb:284:in `_dispatch'
[...]

Then try stopping all chef processes, remove

/etc/chef/webui.pem
/etc/chef/validation.pem

and start everything again. It will regenerate the keys. The downside is that you have to run

knife configure -i

at all your knife setup locations again.
Mar 20, 2015
qwinsta
Mar 20, 2015
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

... it cannot work. For Network Manager to manage your connections it needs to look like:

auto lo
iface lo inet loopback

Restart Network Manager (e.g. "/etc/init.d/network-manager restart") for the nm-applet icon to show up.
Mar 20, 2015
rsync -az -e ssh --delete /data server:/data
It just won't delete anything. It will when running it like this:

rsync -az -e ssh --delete /data/ server:/data
Mar 20, 2015
export EDITOR="gedit -w"
export EDITOR=xemacs

With Emacs itself you also have the possibility to use server mode by running emacs-client (see "Starting emacs automatically" in the EmacsWiki). To do so set
export ALTERNATE_EDITOR=emacs EDITOR=emacsclient
export EDITOR=nedit

the same goes for the KDE editor kate:

export EDITOR=kate

and if you want it really hurting try something like this:

export EDITOR=eclipse
export VISUAL=oowriter
sudo -e visudo

Whether passing the environment to root is a good idea might be a good question though... Have fun!
Mar 20, 2015
/etc/apache2/envvarswhich in a fresh installation contains a line
#APACHE_ULIMIT_MAX_FILES="ulimit -n 65535"

Uncomment it to set any max file limit you want. Restart Apache and verify the process limit in /proc/<pid>/limits which should give you something like

$ egrep "^Limit|Max open files" /proc/3643/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files
$
Mar 20, 2015
blkfront: barrier: empty write xvda op failed
blkfront: xvda: barrier or flush: disabled

You also won't be able to remount read-write using "mount -o remount,rw /" as this will give you a kernel error like this

EXT4-fs (xvda1): re-mounted. Opts: errors=remount-ro

This problem more or less sporadically affects paravirtualized Ubuntu 12.04 VMs. Note that officially Ubuntu 12.04 is not listed as a supported system in the Citrix documentation. Note that this problem doesn't affect fully virtualized VMs. The Solution:
Mar 20, 2015
This is a check list of all you can do wrong when trying to set limits on Debian/Ubuntu. The hints might apply to other distros too, but I didn't check. If you have additional suggestions please leave a comment!
The best way to check the effective limits of a process is to dump
/proc/<pid>/limits

which gives you a table like this

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            10485760             unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             528384               528384               processes
Max open files            1024                 1024                 files
Max locked memory         32768                32768                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       528384               528384               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0

Running "ulimit -a" in the shell of the respective user rarely tells something because the init daemon responsible for launching services might be ignoring /etc/security/limits.conf as this is a configuration file for PAM only and is applied on login only per default.

$ cat /proc/sys/fs/file-nr
7488    0       384224
$

The first number is the number of all open files of all processes, the third is the maximum. If you need to increase the maximum:
# sysctl -w fs.file-max=500000

Ensure to persist this in /etc/sysctl.conf to not lose it on reboot.
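For example (a sketch of the persisting step):

echo 'fs.file-max = 500000' >> /etc/sysctl.conf
sysctl -p     # re-apply the settings immediately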
Just checking the number of files per process often helps to identify bottlenecks. For every process you can count open files using lsof:
lsof -n -p <pid> | wc -l

So a quick check on a burning system might be:

lsof -n 2>/dev/null | awk '{print $1 " (PID " $2 ")"}' | sort | uniq -c | sort -nr | head -25

which returns the top 25 file descriptor eating processes

139 mysqld (PID 2046)
105 httpd2-pr (PID 25956)
105 httpd2-pr (PID 24384)
105 httpd2-pr (PID 24377)
105 httpd2-pr (PID 24301)
105 httpd2-pr (PID 24294)
105 httpd2-pr (PID 24239)
105 httpd2-pr (PID 24120)
105 httpd2-pr (PID 24029)
105 httpd2-pr (PID 23714)
104 httpd2-pr (PID 3206)
104 httpd2-pr (PID 26176)
104 httpd2-pr (PID 26175)
104 httpd2-pr (PID 26174)
104 httpd2-pr (PID 25957)
104 httpd2-pr (PID 24378)
102 httpd2-pr (PID 32435)
 53 sshd (PID 25607)
 49 sshd (PID 25601)

The same more comfortable including the hard limit:

lsof -n 2>/dev/null | awk '{print $1,$2}' | sort | uniq -c | sort -nr | head -25 |\
while read nr name pid; do
    printf "%10d / %-10d %-15s (PID %5s)\n" $nr $(cat /proc/$pid/limits | grep 'open files' | awk '{print $5}') $name $pid
done

returns

105 / 1024     httpd2-pr  (PID  5368)
105 / 1024     httpd2-pr  (PID  3834)
105 / 1024     httpd2-pr  (PID  3407)
104 / 1024     httpd2-pr  (PID  5392)
104 / 1024     httpd2-pr  (PID  5378)
104 / 1024     httpd2-pr  (PID  5377)
104 / 1024     httpd2-pr  (PID  4035)
104 / 1024     httpd2-pr  (PID  4034)
104 / 1024     httpd2-pr  (PID  3999)
104 / 1024     httpd2-pr  (PID  3902)
104 / 1024     httpd2-pr  (PID  3859)
104 / 1024     httpd2-pr  (PID  3206)
102 / 1024     httpd2-pr  (PID 32435)
 55 / 1024     mysqld     (PID  2046)
 53 / 1024     sshd       (PID 25607)
 49 / 1024     sshd       (PID 25601)
 46 / 1024     dovecot-a  (PID  1869)
 42 / 1024     python     (PID  1850)
 41 / 1048576  named      (PID  3130)
 40 / 1024     su         (PID 25855)
 40 / 1024     sendmail   (PID  3172)
 40 / 1024     dovecot-a  (PID 14057)
 35 / 1024     sshd       (PID  3160)
 34 / 1024     saslauthd  (PID  3156)
 34 / 1024     saslauthd  (PID  3146)
The most common mistake is believing upstart behaves like the Debian init script handling. When on Ubuntu a service is being started by upstart /etc/security/limits.conf will never apply! To get upstart to change the limits of a managed service you need to insert a line like
limit nofile 10000 20000

into the upstart job file in /etc/init.

After you apply a change to /etc/security/limits.conf you need to log in again to have the change applied to your next shell instance by PAM. Alternatively you can use sudo -i to switch to the user whose limits you modified and simulate a login.

The Debian Apache package which is also included in Ubuntu has a separate way of configuring "nofile" limits. If you run the default Apache in 12.04 and check /proc/<pid>/limits of an Apache process you'll find it is allowing 8192 open file handles. No matter what you configured elsewhere. This is because Apache defaults to 8192 files. If you want another setting for "nofile" then you need to edit /etc/apache2/envvars (see /blog/Ubuntu,-Apache-and-ulimit).

The per-process limit most often hit is probably "nofile". Imagine your production database suddenly running out of file handles. Imagine a tool that can instant-fix it without restarting the DB! Copy the following code to a file "set_limit_nofile.c"
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);        /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set new limit (if one was given) and fetch the previous one */
    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Retrieve and display the new limits */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);   /* was EXIT_FAILURE in the flattened original, a copy bug */
}

and compile it with

gcc -o set_limit_nofile set_limit_nofile.c

And now you have a tool to change any process's "nofile" limit. To dump the limit just pass a PID:

$ ./set_limit_nofile 17006
Previous limits: soft=1024; hard=1024
New limits: soft=1024; hard=1024
$

To change limits pass PID and two limits:

# ./set_limit_nofile 17006 1500 1500
Previous limits: soft=1024; hard=1024
New limits: soft=1500; hard=1500
#

And the production database is saved.
Mar 20, 2015
cd wget "http://archives.oclint.org/releases/0.8/oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz" tar zxvf oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gzThis should leave you with a copy of OCLint in ~/oclint-0.8.1
make

you run

bear make

so "bear" can track all files being compiled. It will dump a JSON file "compile_commands.json" which OCLint can use to do analysis of all files. To setup Bear do the following

cd
git clone https://github.com/rizsotto/Bear.git
cd Bear
cmake .
make
git clone https://github.com/lwindolf/liferea.git
cd liferea
sh autogen.sh
make

Now we collect all code file compilation instructions with bear:

make clean
bear make

And if this succeeds we can start a complete analysis with

~/oclint-0.8.1/bin/oclint-json-compilation-database

which will run OCLint with the input from "compile_commands.json" produced by "bear". Don't call "oclint" directly as you'd need to pass all compile flags manually. If all went well you could see code analysis lines like those:

[...]
conf.c:263:9: useless parentheses P3
conf.c:274:9: useless parentheses P3
conf.c:284:9: useless parentheses P3
conf.c:46:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 33 exceeds limit of 10
conf.c:157:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 12 exceeds limit of 10
conf.c:229:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 30 exceeds limit of 10
conf.c:78:1: long method P3 Method with 55 lines exceeds limit of 50
conf.c:50:2: short variable name P3 Variable name with 2 characters is shorter than the threshold of 3
conf.c:52:2: short variable name P3 Variable name with 1 characters is shorter than the threshold of 3
[...]
Mar 20, 2015
Exception          # just the word
One Two Three      # those three words in any order
"One Two Three"    # the exact phrase

# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal
source="some.log" Fatal | rex "(?i) msg=(?PWhen running above query check the list of "interesting fields" it now should have an entry "FIELDNAME" listing you the top 10 fatal messages from "some.log" What is the difference to "regex" now? Well "regex" is like grep. Actually you can rephrase[^,]+)"
source="some.log" Fatalto
source="some.log" | regex _raw=".*Fatal.*"and get the same result. The syntax of "regex" is simply "
... | stats sum(<field>) as result | eval result=(result/1000)

Determine the size of log events by checking len() of _raw. The p10() and p90() functions are returning the 10 and 90 percentiles:
| eval raw_len=len(_raw) | stats avg(raw_len), p10(raw_len), p90(raw_len) by sourcetype
source="/var/log/nginx/access.log" HTTP 500 source="/var/log/nginx/access.log" HTTP (200 or 30*) source="/var/log/nginx/access.log" status=404 | sort - uri source="/var/log/nginx/access.log" | head 1000 | top 50 clientip source="/var/log/nginx/access.log" | head 1000 | top 50 referer source="/var/log/nginx/access.log" | head 1000 | top 50 uri source="/var/log/nginx/access.log" | head 1000 | top 50 method ...
... | sendemail to="[email protected]"
... | table _time, <field> | timechart span=1d sum(<field>)
... | table _time, <field>, name | timechart span=1d sum(<field>) by name
| eventcount summarize=false index=* | dedup index | fields index
| eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
| REST /services/data/indexes | table title
| REST /services/data/indexes | table title splunk_server currentDBSizeMB frozenTimePeriodInSecs maxTime minTime totalEventCount

on the command line you can call

$SPLUNK_HOME/bin/splunk list index

To query the write volume per index the metrics.log can be used:

index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series

MB per day per indexer / index:

index=_internal metrics kb series!=_* "group=per_host_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

index=_internal metrics kb series!=_* "group=per_index_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb
Mar 20, 2015
merb : chef-server (api) : worker (port 4000) ~ Connection refused - connect(2) - (Errno::ECONNREFUSED)

Solution: Check why solr is not running and start it
/etc/init.d/chef-solr start
merb : chef-server (api) : worker (port 4000) ~ Net::HTTPFatalError: 503 "Service Unavailable" - (Chef::Exceptions::SolrConnectionError)

Solution: You need to check the solr log for errors. You can find

# chef-expander -n 1
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- http11_client (LoadError)
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/vendor_ruby/em-http.rb:8
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/vendor_ruby/em-http-request.rb:1
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/vendor_ruby/chef/expander/solrizer.rb:24
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/vendor_ruby/chef/expander/vnode.rb:26
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/vendor_ruby/chef/expander/vnode_supervisor.rb:28
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/vendor_ruby/chef/expander/cluster_supervisor.rb:25
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
	from /usr/bin/chef-expander:27

Solution: This is a gems dependency issue with the HTTP client gem. Read about it here: http://tickets.opscode.com/browse/CHEF-3495. You might want to check the active Ruby version you have on your system, e.g. on Debian run

update-alternatives --config ruby

to find out and change it. Note that the emhttp package from Opscode might require a special version. You can check by listing the package files:

dpkg -L libem-http-request-ruby
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/libem-http-request-ruby
/usr/share/doc/libem-http-request-ruby/changelog.Debian.gz
/usr/share/doc/libem-http-request-ruby/copyright
/usr/lib
/usr/lib/ruby
/usr/lib/ruby/vendor_ruby
/usr/lib/ruby/vendor_ruby/em-http.rb
/usr/lib/ruby/vendor_ruby/em-http-request.rb
/usr/lib/ruby/vendor_ruby/em-http
/usr/lib/ruby/vendor_ruby/em-http/http_options.rb
/usr/lib/ruby/vendor_ruby/em-http/http_header.rb
/usr/lib/ruby/vendor_ruby/em-http/client.rb
/usr/lib/ruby/vendor_ruby/em-http/http_encoding.rb
/usr/lib/ruby/vendor_ruby/em-http/multi.rb
/usr/lib/ruby/vendor_ruby/em-http/core_ext
/usr/lib/ruby/vendor_ruby/em-http/core_ext/bytesize.rb
/usr/lib/ruby/vendor_ruby/em-http/mock.rb
/usr/lib/ruby/vendor_ruby/em-http/decoders.rb
/usr/lib/ruby/vendor_ruby/em-http/version.rb
/usr/lib/ruby/vendor_ruby/em-http/request.rb
/usr/lib/ruby/vendor_ruby/1.8
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/em_buffer.so
/usr/lib/ruby/vendor_ruby/1.8/x86_64-linux/http11_client.so

The listing above for example indicates ruby1.8.
Mar 20, 2015
rabbitmqctl report | grep -A3 file_descriptors

and have a look at the printed limits and usage. Here is an example:

{file_descriptors,[{total_limit,8900},
                   {total_used,1028},
                   {sockets_limit,8008},
                   {sockets_used,2}]},

In my case the 100% CPU usage was caused by all of the file handles being used up, which for some reason causes RabbitMQ 2.8.4 to go into a crazy endless loop, rarely responding at all. The "total_limit" value is the "nofile" limit for the maximum number of open files you can check using "ulimit -n" as RabbitMQ user. Increase it permanently by defining a RabbitMQ specific limit for example in /etc/security/limits.d/rabbitmq.conf:

rabbitmq soft nofile 10000

or using for example

ulimit -n 10000

from the start script or login scripts. Then restart RabbitMQ. The CPU usage should be gone. Update: This problem only affects RabbitMQ releases up to 2.8.4 and should be fixed starting with 2.8.5.
Mar 20, 2015
$ mdb core.xxxx    # Open core file
> ::status         # Print core summary
> ::stacks         # Backtrace
> ::stacks -v      # Backtrace verbose
> ::quit
# mdb -kw
> maxusers/W 500
> $q
pldd <pid>
# pmap 19463
19463:  -sh
08047000       4K rw---    [ stack ]
08050000      76K r-x--  /sbin/sh
08073000       4K rw---  /sbin/sh
08074000      16K rw---    [ heap ]
FEE60000      24K r-x--  /lib/libgen.so.1
FEE76000       4K rw---  /lib/libgen.so.1
FEE80000    1072K r-x--  /lib/libc.so.1
FEF90000      24K rwx--    [ anon ]
FEF9C000      32K rw---  /lib/libc.so.1
FEFA4000       8K rw---  /lib/libc.so.1
FEFC4000     156K r-x--  /lib/ld.so.1
FEFF0000       4K rwx--    [ anon ]
FEFF7000       4K rwxs-    [ anon ]
FEFFB000       8K rwx--  /lib/ld.so.1
FEFFD000       4K rwx--  /lib/ld.so.1
 total      1440K
infocmp -L
snoop -v -d qfe0 -x0 host 192.168.1.87
snoop -v -d qfe0 -x0 port 22
dladm show-dev
dladm show-link
ifconfig plumb -a
% ulimit -n
256
% echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'    # determine allowed maximum
65536
% ulimit -n 65536
% export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
isainfo -b
pkgchk -l -p /usr/bin/ls
svcs -a            # List all installed services and their current state
svcs -d <service>  # List all dependencies of a service
svcs -D <service>  # List who is depending on a service
svcs -xv           # List why something is failed
svcadm enable <service>
svcadm disable <service>
svcadm refresh <service>   # like init reload
svcadm restart <service>
svcadm clear <service>     # Clear errors: try starting again...
start /SP/console

If the console is already in use you can kill it with
stop /SP/console
# Show log
scadm loghistory

# Send a normal or critical console message
scadm send_event "Important"
scadm send_event -c "Critical!"

# Dump all or single settings
scadm show
scadm show sc_customerinfo
prtconf -v
# Analysis
zpool list          # List pools
zpool status -v     # Tree like summary of all disks
zpool iostat 1      # iostat for all ZFS pools
zpool history       # Show recent commands

# Handling properties
zfs get all z0
zfs get all z0/data
zfs set sharenfs=on z0/data
zfs set sharesmb=on z0/data
zfs set compression=on z0/data

# Mounting
zfs mount           # List all ZFS mount points
zfs set mountpoint=/export/data z0/data
zfs mount /export/data
zfs unmount /export/data

# NFS Shares
zfs set sharenfs=on z1/backup/mydata        # Enable as NFS share
zfs get sharenfs z1/backup/mydata           # List share options
zfs sharenfs="<options>" z1/backup/mydata   # Overwrite share options

# Create and load snapshots
zfs snapshot z0/data@backup-20120601
zfs rollback z0/data@backup-20120601
Mar 20, 2015
knife node show -a roles $node | grep -v "^roles:"
declare -A roles

for node in $(knife node list); do
    for role in $(knife node show -a roles $node | grep -v "roles"); do
        roles["$role"]=${roles["$role"]}"$node "
    done
done

Given this it is easy to dump Icinga hostgroup definitions. For example

for role in ${!roles[*]}; do
    echo "define hostgroup {
    hostgroup_name chef-$role
    members ${roles[$role]}
}
"
done

That makes ~15 lines of shell script and a cronjob entry to integrate Chef with Nagios (see the sketch below). Of course you also need to ensure that each host name provided by chef has a Nagios host definition. If you know how it resolves you could just dump a host definition while looping over the host list. In any case there is no excuse not to export the chef config :-)
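Such an export could run from cron, for example with an /etc/cron.d style entry like this (a sketch; the script name, target path and reload command are made up for illustration):

*/15 * * * * root /usr/local/bin/chef-hostgroups.sh > /etc/icinga/conf.d/chef-hostgroups.cfg && service icinga reload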
Mar 20, 2015
#!/bin/bash

result=$(/usr/lib/nagios/plugins/check_ntp_peer $@)
status=$?

if echo "$result" | egrep 'jitter=-1.00000|has the LI_ALARM' >/dev/null; then
    echo "Unknown state after reboot."
    exit 0
fi

echo $result
exit $status

Using above wrapper you get rid of the warnings.
Mar 20, 2015
#!/bin/sh

T='gYw'   # The test text

echo -e "\n                 40m     41m     42m     43m\
     44m     45m     46m     47m";

for FGs in '    m' '   1m' '  30m' '1;30m' '  31m' '1;31m' '  32m' \
           '1;32m' '  33m' '1;33m' '  34m' '1;34m' '  35m' '1;35m' \
           '  36m' '1;36m' '  37m' '1;37m';
  do FG=${FGs// /}
  echo -en " $FGs \033[$FG  $T  "
  for BG in 40m 41m 42m 43m 44m 45m 46m 47m;
    do echo -en "$EINS \033[$FG\033[$BG  $T  \033[0m";
  done
  echo;
done
echo
Mar 20, 2015
| Function | Screen | tmux |
|---|---|---|
| Start instance | screen / screen -S <name> | tmux |
| Attach to instance | screen -r <name> / screen -x <name> | tmux attach |
| List instances | screen -ls / screen -ls <user name>/ | tmux ls |
| New Window | ^a c | ^b c |
| Switch Window | ^a n / ^a p | ^b n / ^b p |
| List Windows | ^a " | ^b w |
| Name Window | ^a A | ^b , |
| Split Horizontal | ^a S | ^b " |
| Split Vertical | ^a \| | ^b % |
| Switch Pane | ^a Tab | ^b o |
| Kill Pane | ^a x | ^b x |
| Paging | | ^b PgUp / ^b PgDown |
| Scrolling Mode | ^a [ | ^b [ |
Mar 20, 2015
sed ':a;N;$!ba;s/\n//g' file

to avoid spending a lot of time on it when I need it again.
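For this specific case (removing all newlines) a simpler equivalent, just as a side note, is:

tr -d '\n' < file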
Mar 20, 2015
BEGIN;
UPDATE table SET field=regexp_replace(field, 'match pattern', 'replace string', 'g');
END;
Mar 20, 2015
redis-cli monitor

The output looks like this

redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"

slowlog get 25   # print top 25 slow queries
slowlog len
slowlog reset

redis-cli --intrinsic-latency 100

and then sample from your Redis clients with

redis-cli --latency -h <host> -p <port>

If you have problems with high latency check if transparent huge pages are disabled. Disable it with
echo never > /sys/kernel/mm/transparent_hugepage/enabled
grep ^save /etc/redis/redis.conf

Comment out all save lines and set up a cron job to do dumping or a Redis slave who can dump whenever he wants to. Alternatively you can try to mitigate the effect using the "no-appendfsync-on-rewrite" option (set to "yes") in redis.conf.

grep ^appendfsync /etc/redis/redis.conf

So if you do not care about DB corruption you might want to set "no" here.
Mar 20, 2015
Output #0, image2, to 'output.ppm':
    Stream #0.0: Video: ppm, rgb24, 144x108, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
Stream mapping:
  Stream #0.0 -> #0.0
Press [q] to stop encoding
av_interleaved_write_frame(): I/O error occurred
Usually that means that input file is truncated and/or corrupted.

The above error message was produced with a command like this:

ffmpeg -v 0 -y -i 'input.flv' -ss 00:00:01 -vframes 1 -an -sameq -vcodec ppm -s 140x100 'output.ppm'

There are several possible reasons for the error message "av_interleaved_write_frame(): I/O error occurred".

Mar 20, 2015
cd /var/lib/puppet
for i in $(find clientbucket/ -name paths); do
    echo "$(stat -c %y $i | sed 's/\..*//')  $(cat $i)";
done | sort -n

will give you an output like

[...]
2015-02-10 12:36:25 /etc/resolv.conf
2015-02-17 10:52:09 /etc/bash.bashrc
2015-02-20 14:48:18 /etc/snmp/snmpd.conf
2015-02-20 14:50:53 /etc/snmp/snmpd.conf
[...]
Mar 20, 2015
for file in $(find . -name "*.erb" | sort); do
    echo "------------ [ $file ]";
    if grep -q "<%[^>]*$" $file; then
        content=$(sed '/<%/,/%>/!d' $file);
    else
        content=$(grep "<%" $file);
    fi;
    echo "$content" | egrep "(.each|if |%=)" | egrep -v "scope.lookupvar|@|scope\[";
done

This is of course just a fuzzy match, but should catch quite some of the dynamic scope expressions there are. The limits of this solution are:
Mar 20, 2015
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

While many know their way around SQL, you might want to always use \? to find the specific psql commands.
UPDATE table SET field=regexp_replace(field, 'match pattern', 'replace string', 'g');
pg_lsclusters
SHOW ALL;
SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;
EXPLAIN ANALYZE <sql statement>;
SELECT * FROM pg_stat_activity;
SELECT bl.pid          AS blocked_pid,
       a.usename       AS blocked_user,
       kl.pid          AS blocking_pid,
       ka.usename      AS blocking_user,
       a.current_query AS blocked_statement
  FROM pg_catalog.pg_locks bl
  JOIN pg_catalog.pg_stat_activity a ON bl.pid = a.procpid
  JOIN pg_catalog.pg_locks kl
  JOIN pg_catalog.pg_stat_activity ka ON kl.pid = ka.procpid
    ON bl.transactionid = kl.transactionid AND bl.pid != kl.pid
 WHERE NOT bl.granted;
SELECT * FROM pg_statio_user_tables;

to show the I/O caused by your relations. Or for the number of accesses and scan types and tuples fetched:
SELECT * FROM pg_stat_user_tables;
SELECT procpid, current_query FROM pg_stat_activity;

And then kill the PID on the Unix shell. Or use
SELECT pg_terminate_backend('12345');
SELECT pg_terminate_backend(pg_stat_activity.procpid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'TARGET_DB';
SELECT pg_current_xlog_location();

and those two commands on the slave:

SELECT pg_last_xlog_receive_location();
SELECT pg_last_xlog_replay_location();

The first query gives you the most recent xlog position on the master, while the other two queries give you the most recently received xlog and the replay position in this xlog on the slave. A Nagios check plugin could look like this:

#!/bin/bash

# Checks master and slave xlog difference...
# Pass slave IP/host via $1

PSQL="psql -A -t "

# Get master status
master=$(echo "SELECT pg_current_xlog_location();" | $PSQL)

# Get slave receive location
slave=$(echo "select pg_last_xlog_replay_location();" | $PSQL -h$1)

master=$(echo "$master" | sed "s/\/.*//")
slave=$(echo "$slave" | sed "s/\/.*//")

master_dec=$(echo "ibase=16; $master" | bc)
slave_dec=$(echo "ibase=16; $slave" | bc)
diff=$(expr $master_dec - $slave_dec)

if [ "$diff" == "" ]; then
    echo "Failed to retrieve replication info!"
    exit 3
fi

# Choose some good threshold here...
status=0
if [ $diff -gt 3 ]; then
    status=1
fi
if [ $diff -gt 5 ]; then
    status=2
fi

echo "Master at $master, Slave at $slave, difference: $diff"
exit $status
SELECT pg_start_backup('label', true);
SELECT pg_stop_backup();

Read more: Postgres - Set Backup Mode

stats_users = myuser

Use this user to connect to pgbouncer with psql by requesting the "pgbouncer" database:

psql -p 6432 -U myuser -W pgbouncer

At the psql prompt list supported commands

SHOW HELP;

PgBouncer will present all statistics and configuration options:

pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:
	SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|VERSION
	SHOW STATS|FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
	SET key = arg
	RELOAD
	PAUSE [<db>]
	SUSPEND
	RESUME [<db>]
	SHUTDOWN

The "SHOW" commands are all self-explanatory. Very useful are the "SUSPEND" and "RESUME" commands when you use pools.

pgbouncer -R

Aside from this in most cases SIGHUP should be fine.
Mar 20, 2015
As described in the Comparison of FLV and MP4 metadata tagging tools (injectors) post, yamdi is probably the best and fastest Open Source FLV metadata injector.
Still yamdi is missing the possibility to add custom FLV tags. I posted a patch upstream some months ago with no feedback so far. So if you need custom tags as provided by flvtool2 you might want to merge this patch against the yamdi sources (tested with 1.8.0).
--- ../yamdi.c	2010-10-17 20:46:40.000000000 +0200
+++ yamdi.c	2010-10-19 11:32:34.000000000 +0200
@@ -105,6 +105,9 @@
 	FLVTag_t *flvtag;
 } FLVIndex_t;
 
+// max number of user defined tags
+#define MAX_USER_DEFINED 10
+
 typedef struct {
 	FLVIndex_t index;
 
@@ -168,6 +171,8 @@
 	struct {
 		char creator[256];	// -c
 
+		char *user_defined[MAX_USER_DEFINED];	// -a
+		int user_defined_count;	// number of user defined parameters
 		short addonlastkeyframe;	// -k
 		short addonlastsecond;	// -s, -l (deprecated)
@@ -288,8 +293,15 @@
 
 	initFLV(&flv);
 
-	while((c = getopt(argc, argv, "i:o:x:t:c:lskMXh")) != -1) {
+	while((c = getopt(argc, argv, "a:i:o:x:t:c:lskMXh")) != -1) {
 		switch(c) {
+			case 'a':
+				if(flv.options.user_defined_count + 1 == MAX_USER_DEFINED) {
+					fprintf(stderr, "ERROR: to many -a options\n");
+					exit(1);
+				}
+				printf("Adding tag >>>%s<<<\n", optarg);
+				flv.options.user_defined[flv.options.user_defined_count++] = strdup (optarg);
 			case 'i':
 				infile = optarg;
 				break;
@@ -1055,6 +1067,7 @@
 
 int createFLVEventOnMetaData(FLV_t *flv) {
 	int pass = 0;
+	int j;
 	size_t i, length = 0;
 	buffer_t b;
 
@@ -1073,6 +1086,21 @@
 	if(strlen(flv->options.creator) != 0) {
 		writeBufferFLVScriptDataValueString(&b, "creator", flv->options.creator); length++;
 	}
+
+	printf("Adding %d user defined tags\n", flv->options.user_defined_count);
+	for(j = 0; j < flv->options.user_defined_count; j++) {
+		char *key = strdup (flv->options.user_defined[j]);
+		char *value = strchr(key, '=');
+		if(value != NULL) {
+			*value++ = 0;
+			printf("Adding tag #%d %s=%s\n", j, key, value);
+			writeBufferFLVScriptDataValueString(&b, key, value);
+			length++;
+		} else {
+			fprintf(stderr, "ERROR: Invalid key value pair: >>>%s<<<\n", key);
+		}
+		free(key);
+	}
 
 	writeBufferFLVScriptDataValueString(&b, "metadatacreator", "Yet Another Metadata Injector for FLV - Version " YAMDI_VERSION "\0"); length++;
 	writeBufferFLVScriptDataValueBool(&b, "hasKeyframes", flv->video.nkeyframes != 0 ? 1 : 0); length++;
Using the patch you can then add up to 10 custom tags using the new "-a" switch. The syntax is:
-a <key>=<value>
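A full invocation with the patched binary might look like this (the tag names are just examples):

yamdi -i input.flv -o output.flv -a title="Some Title" -a author="Jane Doe"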
Mar 20, 2015
Distribution | Scanner | Rating | Description |
---|---|---|---|
Debian | debsecan | superb | Easy to use. Maintained by the Debian testing team. Lists packages, CVE numbers and details. |
Ubuntu | debsecan | useless | They just packaged the Debian scanner without providing a database for it! And since 2008 there is a bug about it being 100% useless. |
CentOS/Fedora/RedHat | "yum list-security" | good | Provides package name and CVE number. Note: On older systems there is only "yum list updates". |
OpenSuSE | "zypper list-patches" | ok | Provides packages names with security relevant updates. You need to filter the list yourself or use the "--cve" switch to limit to CVEs only. |
SLES | "rug lu" | ok | Provides packages names with security relevant updates. Similar to zypper you need to do the filtering yourself. |
Gentoo | glsa-check | bad | There is a dedicated scanner, but no documentation. |
FreeBSD | Portaudit | superb | No Linux? Still a nice solution... Lists vulnerable ports and vulnerability details. |
Mar 20, 2015
exportfs -a
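For context, a minimal /etc/exports line that would produce the export shown below (the option set is an assumption):

/export/home 10.1.0.0/24(rw,sync,no_subtree_check)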
# showmount -e
Export list for myserver:
/export/home 10.1.0.0/24
#
# showmount
Hosts on myserver:
10.1.0.15
#
# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  48555  status
    100024    1   tcp  49225  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  51841  nlockmgr
    100021    3   udp  51841  nlockmgr
    100021    4   udp  51841  nlockmgr
    100021    1   tcp  37319  nlockmgr
    100021    3   tcp  37319  nlockmgr
    100021    4   tcp  37319  nlockmgr
    100005    1   udp  57376  mountd
    100005    1   tcp  37565  mountd
    100005    2   udp  36255  mountd
    100005    2   tcp  36682  mountd
    100005    3   udp  54897  mountd
    100005    3   tcp  51122  mountd
Above output is from an NFS server. You can also run it for remote servers by passing an IP. NFS clients usually just run status and portmapper:
# rpcinfo -p 10.1.0.15
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  44152  status
    100024    1   tcp  53182  status
mount -t nfs4 -o proto=tcp,port=2049 server:/export/home /mnt
service idmapd start
# cat /etc/idmapd.conf
[...]
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
[Static]
someuser@otherserver = localuser
$ nfsstat -m
/data from 10.1.0.16:/data
 Flags: rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.0.16,local_lock=none,addr=10.1.0.15
$
# nfsstat -o rc
Server reply cache:
hits       misses     nocache
0          63619      885550
#
On the client it is probably errors and retries you want to watch instead. Also note that you can get live per-interval results when running with "--sleep=<interval>". For example
# nfsstat -o fh --sleep=2
Mar 20, 2015
--skip-events
which will cause the event table not to be exported. But the warning won't go away. To also get rid of the warning you need to use this instead:
--events --ignore-table=mysql.events
And of course you can also choose just to dump the events table: add the option
--events
to your "mysqldump" invocation. If you use a tool that invokes "mysqldump" indirectly, check if the tool allows injecting additional parameters.
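Putting it together, a complete dump without the warning could look like this (the target filename is just an example):

mysqldump --all-databases --events --ignore-table=mysql.events > dump.sql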
Mar 20, 2015
mytop -u <user> -p<password>
innotop -u <user> -p<password>
Alternatively you can provide a .mytop file to supply the credentials automatically.
mysql> \s
You can show the replication status using
SHOW SLAVE STATUS \G
SHOW MASTER STATUS \G
Note that the "\G" instead of ";" just makes the output more readable. If you have configured slaves to report names you can list them on the master with:
SHOW SLAVE HOSTS;
show /*!50000 ENGINE*/ INNODB STATUS;
mysqlshow                      # List all databases
mysqlshow <database>           # List all tables of the given database
mysqlshow <database> <table>   # List all columns of the given table in the given DB
And you can also do it using queries:
SHOW DATABASES;
USE <database>;
SHOW TABLES;
DESCRIBE <table>;
show variables;                          # List all configuration settings
show variables like 'key_buffer_size';  # List a specific parameter
set global key_buffer_size=100000000;   # Set a specific parameter
# Finally ensure to edit my.cnf to make the change persistent
ssh <user@source host> "mysqldump --single-transaction -u root --password=<DB root pwd> <DB name>" | ssh <user@target host> "mysql -u root --password=<DB root pwd> <DB name>"
set global net_write_timeout = 100000; set global net_read_timeout = 100000;
# 1. Stop MySQL and start without grant checks
/usr/bin/mysqld_safe --skip-grant-tables &
mysql --user=root mysql
# 2. Change root password
UPDATE user SET password=PASSWORD('xxxxx') WHERE user = 'root';
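One step the snippet leaves implicit: after editing the grant tables directly you should flush the privileges before restarting mysqld normally. A sketch of the missing step:

# 3. Make the change effective, then restart mysqld without --skip-grant-tables
FLUSH PRIVILEGES;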
LOAD DATA INFILE '<CSV filename>' INTO TABLE <table name> FIELDS TERMINATED BY ',' (<name of column #1>, <name of column #2>, ...);
pager less
Page output into a script
pager /home/joe/myscript.sh
Or if you have Percona installed get a tree-like "EXPLAIN" output with
pager mk-visual-explain
and then run the "EXPLAIN" query.
# Check if enabled
SHOW VARIABLES LIKE 'have_query_cache';
# Statistics
SHOW STATUS LIKE 'Qcache%';
show processlist;
show full processlist;
Filter items in the process list by setting grep as a pager. The following example will only print replication connections:
\P grep system
show processlist;
To abort/terminate a statement determine its id and kill it:
kill <id>; # Kill running queries by id from process listing
SHOW BINLOG EVENTS; SHOW BINLOG EVENTS IN '<some bin file name>';
mysqlbinlog <binary log file>
STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
SET GLOBAL binlog_format='xxxxxx';
FLUSH LOGS;
UNLOCK TABLES;
(ensure to replace 'xxxxxx' with for example 'ROW')
apt-get install libcache-cache-perl
cd /etc/munin/plugins    # assumption: the active plugin directory on Debian/Ubuntu
for i in $(/usr/share/munin/plugins/mysql_ suggest); do
    ln -sf /usr/share/munin/plugins/mysql_ mysql_$i
done
/etc/init.d/munin-node reload
set global query_cache_size=0;
tcprstat -p 3306 -t 1 -n 0
to get continuous statistics on the response time. This is helpful each time some developer claims the DB doesn't respond fast enough!
Mar 20, 2015
telnet localhost 6379
or the Redis CLI client
redis-cli
to connect to Redis. The advantage of redis-cli is that you have a help interface and command line history.
$ redis-cli INFO | grep connected
connected_clients:2
connected_slaves:0
$
$ redis-cli INFO
redis_version:2.2.12
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:8353
uptime_in_seconds:2592232
uptime_in_days:30
lru_clock:809325
used_cpu_sys:199.20
used_cpu_user:309.26
used_cpu_sys_children:12.04
used_cpu_user_children:1.47
connected_clients:2
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:6596112
used_memory_human:6.29M
used_memory_rss:17571840
mem_fragmentation_ratio:2.66
use_tcmalloc:0
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1371241671
bgrewriteaof_in_progress:0
total_connections_received:118
total_commands_processed:1091
expired_keys:441
evicted_keys:0
keyspace_hits:6
keyspace_misses:1070
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=91,expires=88
CONFIG GET *
gives you a list of all active configuration variables you can change. The output might look like this:
redis 127.0.0.1:6379> CONFIG GET *
 1) "dir"
 2) "/var/lib/redis"
 3) "dbfilename"
 4) "dump.rdb"
 5) "requirepass"
 6) (nil)
 7) "masterauth"
 8) (nil)
 9) "maxmemory"
10) "0"
11) "maxmemory-policy"
12) "volatile-lru"
13) "maxmemory-samples"
14) "3"
15) "timeout"
16) "300"
17) "appendonly"
18) "no"
19) "no-appendfsync-on-rewrite"
20) "no"
21) "appendfsync"
22) "everysec"
23) "save"
24) "900 1 300 10 60 10000"
25) "slave-serve-stale-data"
26) "yes"
27) "hash-max-zipmap-entries"
28) "512"
29) "hash-max-zipmap-value"
30) "64"
31) "list-max-ziplist-entries"
32) "512"
33) "list-max-ziplist-value"
34) "64"
35) "set-max-intset-entries"
36) "512"
37) "slowlog-log-slower-than"
38) "10000"
39) "slowlog-max-len"
40) "64"
Note that keys and values are alternating and you can change each key by issuing a "CONFIG SET" command like:
CONFIG SET timeout 900
Such a change will be effective instantly. When changing values consider also updating the redis configuration file.
redis 127.0.0.1:6379> SELECT 1
OK
redis 127.0.0.1:6379[1]>
switches to the second database. Note how the prompt changed and now has a "[1]" to indicate the database selection. To find out how many databases there are you might want to run redis-cli from the shell:
$ redis-cli INFO | grep ^db
db0:keys=91,expires=88
db1:keys=1,expires=0
FLUSHDB
drops the currently selected database; to drop all databases at once run
FLUSHALL
redis 127.0.0.1:6379> INFO
[...]
role:master
and watch for the "role" line which shows either "master" or "slave". Starting with version 2.8 the "INFO" command also gives you per-slave replication status looking like this:
slave0:ip=127.0.0.1,port=6380,state=online,offset=281,lag=0
SLAVEOF <IP> <port>
on a machine that you want to become a slave of the given IP. It will immediately get values from the master. Note that this instance will still be writable. If you want it to be read-only change the redis config file (only available in recent versions, e.g. not on Debian). To revert the slave setting run:
SLAVEOF NO ONE
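On Redis 2.6 or newer you should also be able to toggle read-only mode at runtime instead of editing the config file; a sketch:

redis-cli CONFIG SET slave-read-only yes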
BGSAVE
When running this command Redis will fork and the new process will dump into the "dbfilename" configured in the Redis configuration without the original process being blocked. Of course the fork itself might cause an interruption. Use "LASTSAVE" to check when the dump file was last updated. For a simple backup solution just back up the dump file. If you need a synchronous save run "SAVE" instead of "BGSAVE".
CLIENT LIST
and you can terminate connections with
CLIENT KILL <IP>:<port>
redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"
redis 127.0.0.1:6379> KEYS test*
1) "testkey2"
2) "testkey3"
3) "testkey"
On production servers use "KEYS" with care: you cannot limit it and it will cause a full scan of all keys!
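Starting with Redis 2.8 the cursor-based "SCAN" command is the safer alternative; redis-cli even wraps it for you:

redis-cli --scan --pattern 'test*'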
Mar 20, 2015
$ knife node show myserver
Node Name:   myserver1
Environment: _default
FQDN:        myserver1
IP:
Run List:    role[base], role[mysql], role[apache]
Roles:       base, nrpe, mysql
Recipes:     [...]
Platform:    ubuntu 12.04
Tags:
Noticed the difference between "Run List" and "Roles"? The run list says "role[apache]", but the list of "Roles" has no Apache. This is because the role has not yet been applied on the server. So a
ssh root@myserver chef-client
solves the issue and Apache appears in the roles list. The lesson: do not use "knife node show" to get the list of configured roles!
Mar 20, 2015
Goal: capture the essential or interesting characteristics of a system in order to preserve this knowledge or pass it on
creates the basis for efficient communication in software development based on division of labor
produces a description for humans; the purpose is mastering complexity
The process of modeling:
1. Finding/defining one or more system models
2. Representing the model(s), thereby creating a description of one or more parts (plans) optimized for human understanding
with complex systems there are always several different system models, corresponding to the different levels of abstraction
the level of consideration follows from the respective interest: - only a specific part of the system - only a specific use case - the chosen level of abstraction
special requirements:
- clarity
- simplicity
- universality
"Separation of Concerns"
- aesthetics
in contrast: machine descriptions are - formal, with syntax and semantics explicitly and formally defined - complete - consistent, free of contradictions
Distinguishing different system models and their relationships
structure types _within a_ model: - behavior (dynamic system) - composition (dynamic system) - value structures (informational system!)
a model B is the implementation of model A if model B derives from A through design decisions.
making a choice among alternative means
elementary vs. non-elementary design decisions...
independent vs. dependent design decisions: two design decisions are independent if neither presupposes the other
a critical factor for product success (manageability/understandability for all stakeholders must be ensured)
the "design plan" or "blueprint" of the system (negotiating/fixing the requirements on the system, an important basis for division of labor)
serves to master complexity
reflects the current state of the design
Fundamental Modelling Concepts
captures the _system architecture_, i.e. all system models of interest and their relationships. Not, however, the visualization of code structures (the software architecture)!
- basic elements of the notation: FMC plans
has active system components = AGENTS
has passive system components - locations that agents have access to = STORAGES - locations via which information is exchanged = CHANNELS
- program net
represents program structures compactly and language-independently; the defining property: every place in the program net can be mapped unambiguously to a position in the program (a specific program counter state)
- action net
serves to generalize program structures, but is not a program net! At least one place in the action net cannot be mapped unambiguously to a position in the program!
elementary activity of an agent = OPERATION: the agent reads information/values from one or more locations and writes a derived value (the operation result) to a location.
a change of value at a location at a point in time = EVENT
CAUSAL RELATIONSHIPS: events trigger operations, which in turn can trigger further events.
System architecture
the set of system models (each covering composition, behavior and value range) and their relationships to one another
the driving force when establishing/describing system architectures: understandability and traceability of the system constructions
the notion of a composition of interacting components is widespread
functionality can be localized within the system
Software architecture
Author: Lars Windolf
Digitized notes from the lecture Systemmodellierung 3 by Prof. Tabeling at HPI.
last update: 18.01. 14:30
a "binary reusable" software component (better: one available in a form that is directly executable/installable on the target system)
reusable here means universally and broadly applicable
there is a "market" for the component
according to Szyperski: - unit of independent deployment - unit of third party composition - has no (externally) observable state
there are software interfaces and system interfaces
the part of a module that can be referenced from other modules and in which implemented procedures, data types... are named.
but also: a place where a system can conceptually be "cut", i.e. separated into active system components.
elementary with respect to processing
an agent determines a "new" value (the operation result)
at a location (the target location of the operation)
the result depends on values that are usually read from at least one location (random values being the exception).
a selected (single) aspect at a specific level of consideration: - composition - behavior - value range
but also (examples): - description of a part of the composition - description of an operating phase
Mar 20, 2015
Name | Multi-Instances | Complexity/Features |
---|---|---|
telnet | no | Simple CLI via telnet |
memcached-top | no | CLI |
stats-proxy | yes | Simple Web GUI |
memcache.php | yes | Simple Web GUI |
PhpMemcacheAdmin | yes | Complex Web GUI |
Memcache Manager | yes | Complex Web GUI |
memcache-top v0.6       (default port: 11211, color: on, refresh: 3 seconds)

INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
10.50.11.5:11211        88.9%   69.7%   1661    0.9ms   0.3     47      9       13.9K   9.8K
10.50.11.5:11212        88.8%   69.9%   2121    0.7ms   1.3     168     10      17.6K   68.9K
10.50.11.5:11213        88.9%   69.4%   1527    0.7ms   1.7     48      16      14.4K   13.6K
[...]

AVERAGE:                84.7%   72.9%   1704    1.0ms   1.3     69      11      13.5K   30.3K

TOTAL:          19.9GB/ 23.4GB  20.0K   11.7ms  15.3    826     132     162.6K  363.6K
(ctrl-c to quit.)
(Example output)
# Ensure you have bison
sudo apt-get install bison

# Download tarball
tar zxvf statsproxy-1.0.tgz
cd statsproxy-1.0
make
Now you can run the "statsproxy" binary, but it will inform you that it needs a configuration file. I suggest redirecting its output to a new file, e.g. "statsproxy.conf", removing the informational text at top and bottom, and then modifying the configuration section as needed.
./statsproxy > statsproxy.conf 2>&1
Ensure to add as many "proxy-mapping" sections as you have memcached instances. In each "proxy-mapping" section ensure that "backend" points to your memcached instance and "frontend" to a port on your webserver where you want to access the information for this backend. Once finished run:
./statsproxy -F statsproxy.conf
Below you find a screenshot of what stats-proxy looks like:
Mar 20, 2015
Name | Difference | Why [Not] Use It? |
---|---|---|
memcached | % | Because it is simple and fast |
memcachedb | Persistence with BDB | Because it is as simple and fast as memcached and allows easy persistence and backup. But not maintained anymore since 2008! |
BDB | Simple and old | Use when you want an embedded database. Rarely used for web platforms. Has replication. |
CouchDB, CouchBase | HTTP transport. Tries to find a middle ground between a heavy RDBMS and a key-value store cache. | Sharding, replication and online rebalancing. Often found in small Hadoop setups. Easy drop-in for memcached caching in nginx. |
DynamoDB | HTTP transport, Amazon cloud | If you are in AWS anyway and want sharding and persistency |
Redis | Hashes, Lists, Scanning for Keys, Replication | Great bindings. Good documentation. Flexible yet simple data types. Slower than memcached (read more). |
Riak | Sharded partitioning in a commercial cloud. | Key-value store as a service. Transparent scaling. Automatic sharding. Map reduce support. |
Sphinx | Search Engine with SQL query caching | Supports sharding and full text search. Useful for static medium data sets (e.g. web site product search) |
MySQL 5.6 | Full RDBMS with memcached API | Because you can run queries against the DB via memcached protocol. |
Mar 20, 2015
from gi.repository import GObject
from gi.repository import GLib
from gi.repository import Gtk
from gi.repository import Gst

class PlaybackInterface:
    def __init__(self):
        self.playing = False

        # A free example sound track
        self.uri = "http://cdn02.cdn.gorillavsbear.net/wp-content/uploads/2012/10/GORILLA-VS-BEAR-OCTOBER-2012.mp3"

        # GTK window and widgets
        self.window = Gtk.Window()
        self.window.set_size_request(300, 50)

        vbox = Gtk.Box(Gtk.Orientation.HORIZONTAL, 0)
        vbox.set_margin_top(3)
        vbox.set_margin_bottom(3)
        self.window.add(vbox)

        self.playButtonImage = Gtk.Image()
        self.playButtonImage.set_from_stock("gtk-media-play", Gtk.IconSize.BUTTON)
        self.playButton = Gtk.Button.new()
        self.playButton.add(self.playButtonImage)
        self.playButton.connect("clicked", self.playToggled)
        Gtk.Box.pack_start(vbox, self.playButton, False, False, 0)

        self.slider = Gtk.HScale()
        self.slider.set_margin_left(6)
        self.slider.set_margin_right(6)
        self.slider.set_draw_value(False)
        self.slider.set_range(0, 100)
        self.slider.set_increments(1, 10)
        Gtk.Box.pack_start(vbox, self.slider, True, True, 0)

        self.label = Gtk.Label(label='0:00')
        self.label.set_margin_left(6)
        self.label.set_margin_right(6)
        Gtk.Box.pack_start(vbox, self.label, False, False, 0)

        self.window.show_all()

        # GStreamer Setup
        Gst.init_check(None)
        self.IS_GST010 = Gst.version()[0] == 0
        self.player = Gst.ElementFactory.make("playbin2", "player")
        fakesink = Gst.ElementFactory.make("fakesink", "fakesink")
        self.player.set_property("video-sink", fakesink)
        bus = self.player.get_bus()
        #bus.add_signal_watch_full()
        bus.connect("message", self.on_message)
        self.player.connect("about-to-finish", self.on_finished)

    def on_message(self, bus, message):
        t = message.type
        if t == Gst.Message.EOS:
            self.player.set_state(Gst.State.NULL)
            self.playing = False
        elif t == Gst.Message.ERROR:
            self.player.set_state(Gst.State.NULL)
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.playing = False
        self.updateButtons()

    def on_finished(self, player):
        self.playing = False
        self.slider.set_value(0)
        self.label.set_text("0:00")
        self.updateButtons()

    def play(self):
        self.player.set_property("uri", self.uri)
        self.player.set_state(Gst.State.PLAYING)
        GObject.timeout_add(1000, self.updateSlider)

    def stop(self):
        self.player.set_state(Gst.State.NULL)

    def playToggled(self, w):
        self.slider.set_value(0)
        self.label.set_text("0:00")

        if (self.playing == False):
            self.play()
        else:
            self.stop()

        self.playing = not (self.playing)
        self.updateButtons()

    def updateSlider(self):
        if (self.playing == False):
            return False  # cancel timeout

        try:
            if self.IS_GST010:
                nanosecs = self.player.query_position(Gst.Format.TIME)[2]
                duration_nanosecs = self.player.query_duration(Gst.Format.TIME)[2]
            else:
                nanosecs = self.player.query_position(Gst.Format.TIME)[1]
                duration_nanosecs = self.player.query_duration(Gst.Format.TIME)[1]

            # block seek handler so we don't seek when we set_value()
            # self.slider.handler_block_by_func(self.on_slider_change)

            duration = float(duration_nanosecs) / Gst.SECOND
            position = float(nanosecs) / Gst.SECOND
            self.slider.set_range(0, duration)
            self.slider.set_value(position)
            self.label.set_text("%d" % (position / 60) + ":%02d" % (position % 60))

            #self.slider.handler_unblock_by_func(self.on_slider_change)
        except Exception as e:
            # pipeline must not be ready and does not know position
            print e
            pass

        return True

    def updateButtons(self):
        if (self.playing == False):
            self.playButtonImage.set_from_stock("gtk-media-play", Gtk.IconSize.BUTTON)
        else:
            self.playButtonImage.set_from_stock("gtk-media-stop", Gtk.IconSize.BUTTON)

if __name__ == "__main__":
    PlaybackInterface()
    Gtk.main()
And this is how it should look if everything goes well. Please post comments below if you have improvement suggestions!
Mar 20, 2015
--- flvtool++.orig/fout.h 2009-06-19 05:06:47.000000000 +0200 +++ flvtool++/fout.h 2010-10-12 15:51:37.000000000 +0200 @@ -21,7 +21,7 @@ void open(const char* fn) { if (fp) this->close(); - fp = fopen(fn, "wb"); + fp = fopen64(fn, "wb"); if (fp == NULL) { char errbuf[256]; snprintf(errbuf, 255, "Error opening output file \"%s\": %s", fn, strerror(errno)); --- flvtool++.orig/mmfile.h 2009-06-19 05:29:43.000000000 +0200 +++ flvtool++/mmfile.h 2010-10-12 15:46:00.000000000 +0200 @@ -16,7 +16,7 @@ public: mmfile() : fd(-1) {} mmfile(char* fn) { - fd = open(fn, O_RDONLY); + fd = open(fn, O_RDONLY | O_LARGEFILE); if (fd == -1) throw std::runtime_error(string("mmfile: unable to open file ") + string(fn)); struct stat statbuf; fstat(fd, &statbuf);
Note: While this patch helps you to process large files flvtool++ will still load the entire file into memory! Given this you might want to use a different injector like yamdi. For a comparison of existing tools have a look at the Comparison of FLV and MP4 metadata tagging tools.
Mar 20, 2015
Desktop | Half Maximize Left | Half Maximize Right |
---|---|---|
Windows | [Windows]+[Left] | [Windows]+[Right] |
Ubuntu Unity | [Ctrl]+[Super]+[Left] | [Ctrl]+[Super]+[Right] |
GNOME 3 | Drag to left edge | Drag to right edge |
GNOME 2 + Compiz Grid plugin | Drag to left border | Drag to right border |
GNOME 2/XFCE/KDE + Brightside | Drag to configured edge | Drag to configured edge |
XFCE 4.10 | [Super]+[Left] | [Super]+[Right] |
Mar 20, 2015
# On sending host
nttcp -t -s
# On receiving host
nttcp -r -s
tcpdump -i eth0 -nN -vvv -xX -s 1500 port <some port>
snmpwalk -c public -v 1 -O s <myhost> .iso | grep <search string>
date -d "$date" +%s
date -d "1970-01-01 1234567890 sec GMT"
cal $(date "+%M %y") | grep -v ^$ | tail -1 | sed 's/^.* \([0-9]*\)$/\1/'
complete -W 'add branch checkout clone commit diff grep init log merge mv pull push rebase rm show status tag' git
diff <(echo abc;echo def) <(echo abc;echo abc)
if [[ "$string" =~ ^[0-9]+$ ]]; then echo "Is a number" fi
REGEXP="2013:06:23 ([0-9]+):([0-9]+)" if [[ "$string" =~ $REGEXP ]]; then echo "Hour ${BASH_REMATCH[1]} Minute ${BASH_REMATCH[2]}" fi
hour=$(expr match "$string" '2013:06:23 \([0-9]\+\)')
trap true TERM
kill -- -$$
unset HISTFILE                        # Stop logging history in this bash instance
HISTIGNORE="[ ]*"                     # Do not log commands with leading spaces
HISTIGNORE="&"                        # Do not log a command multiple times
HISTTIMEFORMAT="%Y-%m-%d %H:%M:%S"    # Log with timestamps
sudo -i -u <user>
tail --follow=name myfile
join -o1.2,2.3 -t ";" -1 1 -2 2 employee.csv tasks.csv
compgen -c |sort -u
tty -s
cat access.log | sort -k 1
watch -d ls -l
stdbuf -i0 -o0 -e0 <some command> # Best solution unbuffer <some command> # Wrapper script from expect
:%s/^V^M//g
Framework | DSL | CM | CM Encryption | Orchestration |
---|---|---|---|---|
cfengine | Proprietary | ? | ? | Enterprise Only |
Puppet | Ruby | Hiera | Hiera Eyaml | mcollective |
Chef | Ruby | Builtin | Builtin | Pushy (knife plugin + ZeroMQ) |
Saltstack | Python | Builtin | Builtin | Builtin |
Finally it is worth checking the Wikipedia Comparison Chart for other lesser known and new tools!
$ augtool augtool> set /files/etc/ssh/sshd_config/PermitRootLogin no augtool> save
augeas { "sshd_config": changes => [ "set /files/etc/ssh/sshd_config/PermitRootLogin no", ], }
cfagent -K
chef-client -Fmin --why-run
ohai
knife bootstrap <FQDN/IP>
knife node run_list <add|remove> <node> <cookbook>::<recipe>
knife node show <node>
knife search node 'roles:<role name>'
knife ssh -a ipaddress name:server1 "chef-client"
you can also use patterns:
knife ssh -a ipaddress name:www* "uptime"
puppetd --test # enable standard debugging options puppetd --debug # enable full debugging puppetd --one-time --detailed-exitcodes # Enable exit codes: # 2=changes applied # 4=failure
puppet agent -t --server <puppet master> [<options>]
puppet cert list puppet cert list --all puppet cert sign <name> puppet cert clean <name> # removes cert
puppet module list puppet module install <name> puppet module uninstall <name> puppet module upgrade <name> puppet module search <name>
puppet describe -l puppet resource <type name> # Querying Examples puppet resource user john.smith puppet resource service apache puppet resource mount /data puppet resource file /etc/motd puppet resource package wget
eyaml encrypt -f <filename> eyaml encrypt -s <string> eyaml encrypt -p # Encrypt password, will prompt for it eyaml decrypt -f <filename> eyaml decrypt -s <string> eyaml edit -f <filename> # Decrypts, launches in editor and reencrypts
mco ping mco ping -W "/some match pattern/" mco ping -S "<some select query>" # List agents, queries, plugins... mco plugin doc mco plugin doc <name> mco rpc service start service=httpd mco rpc service stop service=httpd mco facts <keyword> mco inventory <node name> # With shell plugin installed mco shell run <command> mco shell run --tail <command> mco shell start <command> # Returns an id mco shell watch <id> mco shell kill <id> mco shell list
# With apt-show-versions apt-show-versions | grep "security upgradeable" # With aptitude aptitude search '?and(~U,~Asecurity)'
cd /usr/src/linux && make-kpkg clean && make-kpkg --initrd --revision=myrev kernel_image
apt-get install debian-archive-keyring apt-get update
dpkg-reconfigure -a
# Resolve file to package dpkg -S /etc/fstab # Print all files of a package dpkg -L passwd # provided files dpkg -c passwd # owned files # Find packages by name dpkg -l gnome* # Package details dpkg -p passwd
sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/' /etc/apt/sources.list
# Print summary /usr/lib/update-notifier/apt-check --human-readable # Print package names /usr/lib/update-notifier/apt-check -p
apt-get dist-upgrade -o Dir::Etc::SourceList=/etc/apt/sources.security.repos.only.list
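The referenced list simply contains only the security repositories; a hypothetical example for Debian:

# /etc/apt/sources.security.repos.only.list
deb http://security.debian.org/ wheezy/updates main contrib non-free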
echo b >/proc/sysrq-trigger
pidstat -w
echo 1 > /proc/sys/vm/drop_caches
echo 1 > /proc/sys/vm/block_dump # wait some time... echo 0 > /proc/sys/vm/block_dump # Now check syslog for block dump lines
sysctl -p
dmesg -T # Enable human readable timestamps dmesg -x # Show facility and log level dmesg -f daemon # Filter for facility daemon dmesg -l err,crit,alert,emerg # Filter for errors
lsof # Complete list lsof -i :22 # Filter single TCP port lsof [email protected]:22 # Filter single connection endpoint lsof -u <user> # Filter per user lsof -c <name> # Filter per process name lsof -p 12345 # Filter by PID lsof /etc/hosts # Filter single file
perf stat -B some_command
dstat -a --top-bio --top-cpu
apt-get install dh-make-php dh-make-pecl <module name> cd <source directory> debuild # .deb package will be in ...
sysdig fd.name contains /etc
sysdig -c topscalls_time                        # Top system calls
sysdig -c topfiles_time proc.name=httpd         # Top files by process
sysdig -c topfiles_bytes                        # Top I/O per file
sysdig -c fdcount_by fd.cip "evt.type=accept"   # Top connections by IP
sysdig -c fdbytes_by fd.cip                     # Top bytes per IP
# Sick MySQL check via Apache
sysdig -A -c echo_fds fd.sip=192.168.30.5 and proc.name=apache2 and evt.buffer contains SELECT
sysdig -cl                                      # List plugins
sysdig -c bottlenecks                           # Run bottlenecks plugin
detox -v -r <directory>
perl -e 'for(<*>){((stat)[9]<(unlink))}'
getfacl <file> # List ACLs for file setfacl -m user:joe:rwx dir # Modify ACL ls -ld <file> # Check for active ACL (indicates a "+")
tune2fs -j /dev/hda1
tune2fs -O extents,uninit_bg,dir_index /dev/sda1
tune2fs -l /dev/sda1 | grep Inode
# Setup partition with (use parted for >2TB) (parted) mklabel gpt # only when >2TB (parted) mkpart primary lvm 0 4T # setup disk full size (e.g. 4TB) pvcreate /dev/sdb1 # Create physical LVM disk vgextend vg01 /dev/sdb1 # Add to volume group vgextend -L +4t /dev/mapper/vg01-lvdata # Extend your volume resize2fs /dev/mapper/vg01-lvdata # Auto-resize file system
rsync -az -e ssh --delete /data server:/data
It just won't delete anything. It will when running it like this:
rsync -az -e ssh --delete /data/ server:/data
dmidecode 2>&1 |grep -A17 -i "Memory Device" |egrep "Memory Device|Locator: PROC|Size" |grep -v "No Module Installed" |grep -A1 -B1 "Size:"
postsuper -d ALL postsuper -d ALL deferred
# Either run on the node that should take over /usr/share/heartbeat/hb_failover # Or run on the node to should stop working /usr/share/heartbeat/hb_standby
# Cluster Resource Status crm_mon crm_mon -1 crm_mon -f # failure count # Dump and Import Config cibadmin --query --obj_type resources >file.xml cibadmin --replace --obj_type resources --xml-file file.xml # Resource Handling crm resource stop <name> crm resource start <name> crm resource move <name> <node> # Put entire cluster in maintenance crm configure property maintenance-mode=true crm configure property maintenance-mode=false # Unmanaged Mode for single services crm resource unmanage <name> crm resource manage <name>
rabbitmqctl list_vhosts # List all defined vhosts rabbitmqctl list_queues <vhost> # List all queues for the vhost rabbitmqctl report # Dump detailed report on RabbitMQ instance # Plugin management /usr/lib/rabbitmq/bin/rabbitmq-plugins enable <name> /usr/lib/rabbitmq/bin/rabbitmq-plugins list
wackatrl -l # List status wackatrl -f # Remove node from cluster wackatrl -s # Add node to cluster again
/usr/sbin/munin-run <plugin name> # for values /usr/sbin/munin-run <plugin name> config # for configuration
/usr/sbin/munin-node-configure --suggest # and enable them using /usr/sbin/munin-node-configure --shell | sh
sudo -u munin /usr/bin/munin-cron
# Register diverted path and move away
dpkg-divert --add --rename --divert <renamed file path> <file path>
# Remove a diversion again (remove file first!)
dpkg-divert --rename --remove <file path>
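A classic example, assuming you want to protect a locally modified /etc/issue from being overwritten on upgrades:

dpkg-divert --add --rename --divert /etc/issue.distrib /etc/issue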
apt-get install <package>
apt-get remove <package>     # Remove files installed by <package>
apt-get purge <package>      # Remove <package> and all the files it did create
apt-get upgrade              # Upgrade all packages
apt-get install <package>    # Upgrade an installed package
apt-get dist-upgrade         # Upgrade distribution
apt-cache search <package>   # Check if there is such a package name in the repos
apt-get clean                # Remove all downloaded .debs
dpkg -l                      # List all installed/known packages
# More dpkg invocations above in the "Debian" section!
# 1. Edit settings in /etc/update-manager/release-upgrades # e.g. set "Prompt=lts" # 2. Run upgrade do-release-upgrade -d # For Ubuntu release upgrades
apt-get install unattended-upgrades dpkg-reconfigure -plow unattended-upgrades # and maybe set notification mail address in /etc/apt/apt.conf.d/50unattended-upgrades
zypper install <package>
zypper refresh                    # Update repository infos
zypper list-updates
zypper repos                      # List configured repositories
zypper dist-upgrade               # Upgrade distribution
zypper dup                        # Upgrade distribution (alias)
zypper search <package>           # Search for <package>
zypper search --search-descriptions <package>
zypper clean                      # Clean package cache
# For safe updates:
zypper mr --keep-packages --remote   # Enable caching of packages
zypper dup -D                        # Fetch packages using a dry run
zypper mr --all --no-refresh         # Set cache usage for following dup
zypper dup                           # Upgrade!
up2date
yum update # Upgrade distro yum install <package> # Install <package>
cat /proc/mdstat # Print status mdadm --detail /dev/md0 # Print status per md mdadm --manage -r /dev/md0 /dev/sda1 # Remove a disk mdadm --zero-superblock /dev/sda1 # Initialize a disk mdadm --manage -a /dev/md0 /dev/sda1 # Add a disk mdadm --manage --set-faulty /dev/md0 /dev/sda1
# Show status of all arrays on all controllers hpacucli all show config hpacucli all show config detail # Show status of specific controller hpacucli ctrl=0 pd all show # Show Smart Array status hpacucli all show status # Create new Array hpacucli ctrl slot=0 create type=logicaldrive drives=1I:1:3,1I:1:4 raid=1
# Get number of controllers /opt/MegaRAID/MegaCli/MegaCli64 -adpCount -NoLog # Get number of logical drives on controller #0 /opt/MegaRAID/MegaCli/MegaCli64 -LdGetNum -a0 -NoLog # Get info on logical drive #0 on controller #0 /opt/MegaRAID/MegaCli/MegaCli64 -LdInfo -L0 -a0 -NoLog
debsecan --suite=sid
portaudit -Fda
openssl x509 -text -in my.crt
By replacing "x509" with "ca" or "crl" you can dump other file types too.
Supported escape sequences: ~. - terminate connection (and any multiplexed sessions) ~B - send a BREAK to the remote system ~C - open a command line ~R - Request rekey (SSH protocol 2 only) ~^Z - suspend ssh ~# - list forwarded connections ~& - background ssh (when waiting for connections to terminate) ~? - this message ~~ - send the escape character by typing it twice (Note that escapes are only recognized immediately after newline.)
# To mount a remote home dir
sshfs user@server: /mnt/home/user/
# Unmount again with
fusermount -u /mnt/home/user
ControlMaster auto
ControlPath /home/<user name>/.ssh/tmp/%h_%p_%r

Host <your jump host>
    ForwardAgent yes
    Hostname <your jump host>
    User <your user name on jump host>

# Note the server list can have wild cards, e.g. "webserver-* database*"
Host <server list>
    ForwardAgent yes
    User <your user name on all these hosts>
    ProxyCommand ssh -q <your jump host> nc -q0 %h 22
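With that in ~/.ssh/config any matching host is reached transparently through the jump host, and the control socket makes repeated logins fast. A quick way to verify it works (the host name is a placeholder):

ssh webserver-01              # first login builds the master connection
ssh webserver-01 uptime       # reuses the control socket, near-instant
ssh -O check webserver-01     # ask the master connection for its status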
ssh-copy-id [-i keyfile] user@machine
Host unreachable_host
    ProxyCommand ssh gateway_host exec nc %h %p
ssh host1 -A -t host2 -A -t host3 ...
ssh -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host -n "/bin/ls"
Subsystem sftp /usr/libexec/openssh/sftp-server -u 0002
Host *
    ForwardAgent yes
ssh -D <port> <remote host>
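This opens a local SOCKS proxy tunnelled through the remote host. Tools that speak SOCKS can then be pointed at it, for example curl (port and host are placeholders):

ssh -D 1080 user@remotehost
# In a second terminal: route a request through the tunnel
curl --socks5-hostname localhost:1080 http://example.com/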
apt-get install pssh
and use it like this
pssh -h host_list.txt <args>
apt-get install clusterssh
and use it like this
cssh server1 server2
vim scp://user@host//some/directory/file.txt
http://data.alexa.com/data?cli=10&url=$DOMAIN
RewriteCond %{REQUEST_FILENAME} (.*)\.(html|htm)$
RewriteCond %{HTTP_USER_AGENT} (iPhone|iPad)
EnableExceptionHook on
LoadModule logio_module modules/mod_logio.so

<IfModule mod_logio.c>
    CustomLog "| some-script.sh" "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""
</IfModule>
# Turning it on/off globally
<meta http-equiv="x-dns-prefetch-control" content="off">
# Turning it on per-domain
<link rel="dns-prefetch" href="http://www.spreadfirefox.com/">
# Connect with
zkCli.sh -server 127.0.0.1:2181

# Commands
ls <path>
get <path>
set <path> <data>
delete <path>
TLS_ECDHE_RSA_WITH_RC4_128_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
stats cachedump <slab class> <number of keys to dump>
db_archive
to clean unused log files
# Define a control flag
set $extra_handling = 0;

# Set the control flag when needed
if ($variable1 ~* pattern) {
    set $extra_handling = 1;
}

# Unset the flag if needed
if ( $variable2 = 1 ) {
    set $extra_handling = 0;
}

if ( $extra_handling = 1 ) {
    # Trigger intended behaviour
}
ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on;
# Ensure to maximize available ports
cat /proc/sys/net/ipv4/ip_local_port_range
echo 1024 65535 >/proc/sys/net/ipv4/ip_local_port_range
and set sockets to reuse
# sysctl -p
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
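To survive reboots the same settings belong into /etc/sysctl.conf (or a snippet below /etc/sysctl.d/), e.g. as in this sketch. Note that net.ipv4.tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12, so use it with care:

# Persist the tuning and apply it immediately
cat >>/etc/sysctl.conf <<'EOF'
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
EOF
sysctl -p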
ip route change default via <gateway> dev eth0 initcwnd 10
Consider also increasing net.ipv4.tcp_wmem[1]
Mar 20, 2015
getent hosts <host name>
ethtool eth0           # Print general info on eth0
ethtool -i eth0        # Print kernel module info
ethtool -S eth0        # Print eth0 traffic statistics
ethtool -a eth0        # Print RX, TX and auto-negotiation settings

# Changing NIC settings...
ethtool -s eth0 speed 100
ethtool -s eth0 autoneg off
ethtool -s eth0 duplex full
ethtool -s eth0 wol g    # Turn on wake-on-LAN

Do not forget to make changes permanent in e.g. /etc/network/interfaces.
# mii-tool -v
eth0: negotiated 100baseTx-FD flow-control, link ok
  product info: vendor 00:07:32, model 17 rev 4
  basic mode:   autonegotiation enabled
  basic status: autonegotiation complete, link ok
  capabilities: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
  advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
  link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
ifconfig eth1 mtu 9000
ipset create smtpblocks hash:net counters
ipset add smtpblocks 27.112.32.0/19
ipset add smtpblocks 204.8.87.0/24
iptables -A INPUT -p tcp --dport 25 -m set --match-set smtpblocks src -j DROP
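Note that ipsets live only in kernel memory; to keep them across reboots dump and restore them, e.g. from an init script that runs before the iptables rules are loaded:

ipset save > /etc/ipset.rules      # dump all sets
ipset restore < /etc/ipset.rules   # recreate them at boot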
iptables -t nat -A POSTROUTING -d <internal web server IP> -s <internal network address> -p tcp --dport 80 -j SNAT --to-source <external web server IP>
route add -net 91.65.16.0/24 gw 127.0.0.1 lo    # for a subnet
route add 91.65.16.4 gw 127.0.0.1 lo            # for a single IP
tail -100000 access.log | awk '{print $1}' | sort | uniq -c |sort -nr|head -25
arping <IP>
lft -AN www.google.de
vnstat -l     # Live listing until Ctrl-C and summary
vnstat -tr    # 5s automatic traffic sample
vnstat -h    # last hours (including ASCII graph)
vnstat -d    # last days
vnstat -w    # last weeks
vnstat -m    # last months
vnstat -t    # top 10 days
# Network scan
nmap -sP 192.168.0.0/24

# Host scan
nmap <ip>
nmap -F <ip>     # fast
nmap -O <ip>     # detect OS
nmap -sV <ip>    # detect services and versions
nmap -sU <ip>    # detect UDP services

# Alternative host discovery
nmap -PS <ip>    # TCP SYN scan
nmap -PA <ip>    # TCP ACK scan
nmap -PO <ip>    # IP ping
nmap -PU <ip>    # UDP ping

# Alternative service discovery
nmap -sS <ip>
nmap -sT <ip>
nmap -sA <ip>
nmap -sW <ip>

# Checking firewalls
nmap -sN <ip>
nmap -sF <ip>
nmap -sX <ip>
# Typically used modes
netstat -rn        # List routes
netstat -tlnp      # List all open TCP connections
netstat -tlnpc     # Continuously do the above
netstat -tulpen    # Extended connection view
netstat -a         # List all sockets

# And more rarely used
netstat -s         # List per protocol statistics
netstat -su        # List UDP statistics
netstat -M         # List masqueraded connections
netstat -i         # List interfaces and counters
netstat -o         # Watch time/wait handling
# On sending host
nttcp -t -s
# On receiving host
nttcp -r -s
sysctl net
tcpdump -i eth0 -nN -vvv -xX -s 1500 port <some port>
snmpwalk -c public -v 1 -O s <myhost> .iso | grep <search string>
# Filter port
tcpdump port 80
tcpdump src port 1025
tcpdump dst port 389
tcpdump portrange 21-23

# Filter source or destination IP
tcpdump src 10.0.0.1
tcpdump dst 10.0.0.2

# Filter everything on network
tcpdump net 1.2.3.0/24

# Logical operators
tcpdump src port 1025 and tcp

# Provide full hex dump of captured HTTP packages
tcpdump -s0 -x port 80

# Filter TCP flags (e.g. RST)
tcpdump 'tcp[13] & 4!=0'
Mar 20, 2015
[...] heartbeat is a basic heartbeat subsystem for Linux-HA. It will run scripts at initialisation, and when machines go up or down. This version will also perform IP address takeover using gratuitous ARPs. It works correctly for a 2-node configuration, and is extensible to larger configurations. [...]

So without any intermediate layer heartbeat manages virtual IPs on multiple nodes which communicate via unicast/broadcast/multicast UDP or ping, so they can be used as a cluster IP by any service on top. To get some service-like handling you can hook scripts to be triggered on failover, which could start/stop services as needed. So if you just want to protect yourself against physical or network layer problems heartbeat might work out.
[...] Wackamole is quite unique in that it operates in a completely peer-to-peer mode within the cluster. Other products that provide the same high-availability guarantees use a "VIP" method. A networking appliance assumes a single virtual IP address and "maps" requests to that IP address to the machines in the cluster. This networking appliance is a single point of failure by itself, so most industry accepted solutions incorporate classic master-slave failover or bonding between two identical appliances. [...]

My experience with wackamole is that with certain network problems you can run into split brain situations, with an isolated node grabbing all virtual IPs and, given its visibility in the network, killing the traffic by doing so. So running wackamole, from time to time you will have to restart all peers just to get a working Spread group again.
Mar 20, 2015
Mar 20, 2015
Download lpvs-scan.pl version 0.2, put it anywhere you like and run it like this
./lpvs-scan.pl
No need to run as root, any user will do. It just needs
Mar 20, 2015
apt-get install rrdcached
and integrate it with Munin:
#!/bin/bash
nice /usr/share/munin/munin-html $@ || exit 1
nice /usr/share/munin/munin-graph --cron $@ || exit 1
and make it executable
10 * * * * munin if [ -x /usr/bin/munin-graph ]; then /usr/bin/munin-graph; fi
OPTS="-s www-data -l unix:/var/run/rrdcached.sock" OPTS="$OPTS -j /var/lib/rrdcached/journal/ -F" OPTS="$OPTS -b /var/lib/munin/ -B"If you do not set the socket user with "-s" you will see "Permission denied" in /var/log/munin/munin-cgi-graph.log
[RRD ERROR] Unable to graph /var/lib/munin/cgi-tmp/munin-cgi-graph/[...].png : Unable to connect to rrdcached: Permission denied
If you do not change the rrdcached working directory you will see "rrdc_flush" errors in your /var/log/munin/munin-cgi-graph.log
[RRD ERROR] Unable to graph /var/lib/munin/cgi-tmp/munin-cgi-graph/[...].png : rrdc_flush (/var/lib/munin/[...].rrd) failed with status -1.
Some details on this can be found in the Munin wiki.
Mar 20, 2015
upstream somestream {
    consistent_hash $request_uri;
    server 10.0.0.1:11211;
    server 10.0.0.2:11211;
    ...
}
$memcached = new Memcached();
$memcached->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$memcached->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$memcached->addServers($servers);
$m = new Memcached('mymemcache');
$m->setOptions(array(
    ...
    Memcached::OPT_LIBKETAMA_COMPATIBLE => true,
    Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
    ...
));
$m->addServers(...);
Mar 20, 2015
Environment Variable | GLib Function |
---|---|
$XDG_DATA_HOME | g_get_user_data_dir() |
$XDG_CONFIG_HOME | g_get_user_config_dir() |
$XDG_CACHE_HOME | g_get_user_cache_dir() |
g_build_filename (g_get_user_cache_dir (), "coolApp", "render.dat", NULL);
to produce a path. Most likely you'll get something like "/home/joe/.cache/coolApp/render.dat".
Environment Variable | wxWidgets Function |
---|---|
$XDG_DATA_HOME | wxStandardPaths::GetDataDir() |
$XDG_CONFIG_HOME | wxStandardPaths::GetConfigDir() |
$XDG_CACHE_HOME | wxStandardPaths::GetLocalDataDir() |
Mar 20, 2015
editfiles:
  { home/.bashrc
    AppendIfNoSuchLine "alias rm='rm -i'"
  }
augeas { "sshd_config": context => "/files/etc/ssh/sshd_config", changes => [ "set PermitRootLogin no", ], }
bash "some_commands" do user "root" cwd "/tmp" code <<-EOT echo "alias rm='rm -i'" >> /root/.bashrc EOT endWhile it is not a one-liner statement as possible as in cfengine it is very flexible. The Script resource is widely used to perform ad-hoc source compilation and installations in the community codebooks, but we can also use it for standard file editing. Finally to do conditional editing use not_if/only_if clauses at the end of the Script resource block.
Mar 20, 2015
$ memcached -vv
slab class   1: chunk size        96 perslab   10922
slab class   2: chunk size       120 perslab    8738
slab class   3: chunk size       152 perslab    6898
slab class   4: chunk size       192 perslab    5461
[...]
In the configuration printed above memcached will fit 6898 pieces of data between 121 and 152 bytes into a single slab of 1MB size (6898*152). All slabs are sized as 1MB per default. Use the following command to print all currently existing slabs:
stats slabs
If you've added a single key to an empty memcached 1.4.13 with
set mykey 0 60 1
1
STORED
you'll now see the following result for the "stats slabs" command:
stats slabs
STAT 1:chunk_size 96
STAT 1:chunks_per_page 10922
STAT 1:total_pages 1
STAT 1:total_chunks 10922
STAT 1:used_chunks 1
STAT 1:free_chunks 0
STAT 1:free_chunks_end 10921
STAT 1:mem_requested 71
STAT 1:get_hits 0
STAT 1:cmd_set 2
STAT 1:delete_hits 0
STAT 1:incr_hits 0
STAT 1:decr_hits 0
STAT 1:cas_hits 0
STAT 1:cas_badval 0
STAT 1:touch_hits 0
STAT active_slabs 1
STAT total_malloced 1048512
END
The example shows that we have only one active slab class #1. Our key, being just one byte large, fits into this as the smallest possible chunk size. The slab statistics show that currently only one page of the slab class exists and that only one chunk is used. Most importantly it shows a counter for each write operation (set, incr, decr, cas, touch) and one for gets. Using those you can determine a hit ratio! You can also fetch another set of infos using "stats items" with interesting counters concerning evictions and out of memory situations.
stats items
STAT items:1:number 1
STAT items:1:age 4
STAT items:1:evicted 0
STAT items:1:evicted_nonzero 0
STAT items:1:evicted_time 0
STAT items:1:outofmemory 0
STAT items:1:tailrepairs 0
STAT items:1:reclaimed 0
STAT items:1:expired_unfetched 0
STAT items:1:evicted_unfetched 0
END
stats cachedump <slab class> <number of items to dump>
To dump our single key in class #1 run
stats cachedump 1 1000
ITEM mykey [1 b; 1350677968 s]
END
The "cachedump" returns one item per line. The first number in the brackets gives the size in bytes, the second the timestamp of the creation. Given the key name you can now also dump its value using
get mykey
VALUE mykey 0 1
1
END
This is it: iterate over all slab classes you want, extract the key names and if needed dump their contents.
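Scripted from the shell the whole iteration could look like this minimal sketch (assuming a memcached on localhost:11211 and a Debian-style netcat that understands -q):

# List up to 1000 key names from every slab class found via "stats items"
for class in $(echo "stats items" | nc -q1 localhost 11211 | \
               sed -n 's/^STAT items:\([0-9]*\):number.*/\1/p' | sort -un); do
    echo "stats cachedump $class 1000" | nc -q1 localhost 11211 | \
        awk '/^ITEM/ {print $2}'
done

Alternatively there are several ready-made scripts and tools for this: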
Language | Tool | Notes |
---|---|---|
PHP | simple script | Prints key names. |
Perl | simple script | Prints keys and values |
Ruby | simple script | Prints key names. |
Perl | memdump | Tool in CPAN module Memcached-libmemcached |
PHP | memcache.php | Memcache Monitoring GUI that also allows dumping keys |
libmemcached | peep | Freezes your memcached process! Be careful when using this in production. Still, using it you can work around the 1MB limitation and really dump all keys. |
Mar 20, 2015
peas_engine_add_search_path (PEAS_ENGINE (engine),
                             gtr_dirs_get_user_plugins_dir (),
                             gtr_dirs_get_user_plugins_dir ());
peas_engine_add_search_path (PEAS_ENGINE (engine),
                             gtr_dirs_get_gtr_plugins_dir (),
                             gtr_dirs_get_gtr_plugins_data_dir ());
It is useful to have two registrations: one pointing to some user-writable subdirectory in $HOME and a second one for package-installed plugins in a path like /usr/share/<application>/plugins. Finally ensure to call the init method of this boilerplate code during your initialization.
#include <libpeas-gtk/peas-gtk-plugin-manager.h>

[...]

/* assuming "plugins_box" is an existing tab container widget */
GtkWidget *alignment;

alignment = gtk_alignment_new (0., 0., 1., 1.);
gtk_alignment_set_padding (GTK_ALIGNMENT (alignment), 12, 12, 12, 12);

widget = peas_gtk_plugin_manager_new (NULL);
gtk_container_add (GTK_CONTAINER (alignment), widget);
gtk_box_pack_start (GTK_BOX (plugins_box), alignment, TRUE, TRUE, 0);
At this point you can already compile everything and test it. The new tab with the plugin manager should show up empty but working.
struct _GtrWindowActivatableInterface
{
  GTypeInterface g_iface;

  /* Virtual public methods */
  void (*activate)     (GtrWindowActivatable *activatable);
  void (*deactivate)   (GtrWindowActivatable *activatable);
  void (*update_state) (GtrWindowActivatable *activatable);
};
The activate() and deactivate() methods are to be called by convention using the "extension-added" / "extension-removed" signals emitted by the PeasExtensionSet. The additional method update_state() is called in the gtranslator code when user interactions happen and the plugin needs to reflect them. Add as many methods as you need; many plugins do not need special methods as they can connect to application signals themselves. So keep the Activatable interface simple! As for how many Activatables to add: in the most simple case of a single main window application you could just implement a single Activatable for the main window, and all plugins, no matter what they do, initialize with the main window.
window->priv->extensions = peas_extension_set_new (PEAS_ENGINE (gtr_plugins_engine_get_default ()),
                                                   GTR_TYPE_WINDOW_ACTIVATABLE,
                                                   "window", window,
                                                   NULL);

g_signal_connect (window->priv->extensions, "extension-added",
                  G_CALLBACK (extension_added), window);
g_signal_connect (window->priv->extensions, "extension-removed",
                  G_CALLBACK (extension_removed), window);
The extension set instance, representing all plugins implementing the interface, is used to trigger the methods on all or only selected plugins. One of the first things to do after creating the extension set is to initialize all plugins using the signal "extension-added":
peas_extension_set_foreach (window->priv->extensions,
                            (PeasExtensionSetForeachFunc) extension_added,
                            window);
As there might be more than one registered extension we need to implement a PeasExtensionSetForeachFunc method handling each plugin. This method uses the previously implemented interface. Example from gtranslator:
static void
extension_added (PeasExtensionSet *extensions,
                 PeasPluginInfo   *info,
                 PeasExtension    *exten,
                 GtrWindow        *window)
{
  gtr_window_activatable_activate (GTR_WINDOW_ACTIVATABLE (exten));
}
Note: Up until libpeas version 1.1 you'd simply call peas_extension_call() to issue the name of the interface method to trigger instead.
peas_extension_call (extension, "activate");
Ensure to
[Plugin]
Module=myplugin
Loader=python
IAge=2
Name=My Plugin
Description=My example plugin for testing only
Authors=Joe, Sue
Copyright=Copyright © 2012 Joe
Website=...
Help=...
Now for the plugin: in Python you'd import packages from the GObject Introspection repository like this
from gi.repository import GObject
from gi.repository import Peas
from gi.repository import PeasGtk
from gi.repository import Gtk
from gi.repository import <your package prefix>
The imports of GObject, Peas, PeasGtk and your package are mandatory. Others depend on what you want to do with your plugin. Usually you'll want to interact with Gtk. Next you need to implement a simple class with all the interface methods we defined earlier:
class MyPlugin(GObject.Object, <your package prefix>.<Type>Activatable):
    __gtype_name__ = 'MyPlugin'

    object = GObject.property(type=GObject.Object)

    def do_activate(self):
        print "activate"

    def do_deactivate(self):
        print "deactivate"

    def do_update_state(self):
        print "updated state!"
Ensure to fill in the proper package prefix for your program and the correct Activatable name (like GtkWindowActivatable). Now flesh out the methods. That's all. Things to note:
if HAVE_INTROSPECTION
-include $(INTROSPECTION_MAKEFILE)
INTROSPECTION_GIRS = Gtranslator-3.0.gir

Gtranslator-3.0.gir: gtranslator
INTROSPECTION_SCANNER_ARGS = -I$(top_srcdir) --warn-all --identifier-prefix=Gtr
Gtranslator_3_0_gir_NAMESPACE = Gtranslator
Gtranslator_3_0_gir_VERSION = 3.0
Gtranslator_3_0_gir_PROGRAM = $(builddir)/gtranslator
Gtranslator_3_0_gir_FILES = $(INST_H_FILES) $(libgtranslator_c_files)
Gtranslator_3_0_gir_INCLUDES = Gtk-3.0 GtkSource-3.0

girdir = $(datadir)/gtranslator/gir-1.0
gir_DATA = $(INTROSPECTION_GIRS)

typelibdir = $(libdir)/gtranslator/girepository-1.0
typelib_DATA = $(INTROSPECTION_GIRS:.gir=.typelib)

CLEANFILES = \
        $(gir_DATA) \
        $(typelib_DATA) \
        $(BUILT_SOURCES) \
        $(BUILT_SOURCES_PRIVATE)
endif
Ensure to
plugindir = $(pkglibdir)/plugins
plugin_DATA = \
        plugins/one_plugin.py \
        plugins/one_plugin.plugin \
        plugins/another_plugin.pl \
        plugins/another_plugin.plugin
Additionally add package dependencies and GIR macros to configure.ac
pkg_modules="[...] libpeas-1.0 >= 1.0.0 libpeas-gtk-1.0 >= 1.0.0" GOBJECT_INTROSPECTION_CHECK([0.9.3]) GLIB_GSETTINGS
Mar 20, 2015
chef-client --why-run
chef-client -W
And the output looks nicer when using "-Fmin"
chef-client -Fmin -W
As with all other automation tools, the dry-run mode is not very predictive. Still it might indicate some of the things that will happen.
Mar 20, 2015
logger_open($config->{'logdir'});
logger_debug() if $config->{debug} or defined($ENV{CGI_DEBUG});
and change it to
logger_open($config->{'logdir'});
logger_debug() if $config->{debug} or defined($ENV{CGI_DEBUG});
logger_level("warn");
As parameter to logger_level() you can provide "debug", "info", "warn", "error" or "fatal" (see manpage "Munin::Master::Logger"). And finally: silent logs!
Mar 20, 2015
sqlite3 my.db "VACUUM;"Depending on the database size and the last vacuum run it might take a while for sqlite3 to finish with it. Using this you can perform manual VACUUM runs (e.g. nightly) or on demand runs (for example on application startup).
PRAGMA auto_vacuum = NONE;
PRAGMA auto_vacuum = INCREMENTAL;
PRAGMA auto_vacuum = FULL;
So effectively you have two modes: full and incremental. In full mode free pages are removed from the database upon each transaction. When in incremental mode no pages are freed automatically, but only metadata is kept to help freeing them. At any time you can call
PRAGMA incremental_vacuum(n);
to free up to n pages and resize the database by this amount of pages. To check the auto-vacuum setting in a sqlite database run
sqlite3 my.db "PRAGMA auto_vacuum;"which should return a number from 0 to 2 meaning: 0=None, 1=Incremental, 2=Full.
PRAGMA page_count;
PRAGMA freelist_count;
Both PRAGMA statements return a number of pages which together give you a rough guess at the fragmentation ratio. As far as I know there is currently no real measurement for the exact table fragmentation, so we have to go with the free list ratio.
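From the shell a quick fragmentation estimate might look like this sketch (my.db is a placeholder):

# Ratio of free pages to total pages; values near 1 mean heavy fragmentation
free=$(sqlite3 my.db "PRAGMA freelist_count;")
total=$(sqlite3 my.db "PRAGMA page_count;")
echo "free=$free total=$total ratio=$(echo "scale=2; $free/$total" | bc)"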
Mar 20, 2015
$ TERM=linux infocmp -L1 | grep color
[...]
	max_colors#8,
[...]
or using tput
$ TERM=vt100 tput colors
-1
$ TERM=linux tput colors
8
tput is probably the best choice.
#!/bin/bash

use_colors=1

# Check whether stdout is redirected
if [ ! -t 1 ]; then
    use_colors=0
fi

max_colors=$(tput colors)
if [ $max_colors -lt 8 ]; then
    use_colors=0
fi

[...]
This should ensure no ANSI sequences ending up in your logs while still printing colors on every capable terminal.
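Building on the use_colors flag, one way to actually emit colors is again via tput, which keeps the escape sequences terminal-specific; a minimal sketch (the helper name is made up):

# Print in red only when colors are allowed
print_error() {
    if [ $use_colors -eq 1 ]; then
        echo "$(tput setaf 1)ERROR: $1$(tput sgr0)"
    else
        echo "ERROR: $1"
    fi
}

print_error "something went wrong"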
Mar 20, 2015
sqlite3 <database file>
When called like this you get a query prompt. To directly run "VACUUM" just call
sqlite3 <database file> "VACUUM;"Ensure that the program using the database file is not running!
Mar 20, 2015
acl all src all
acl users proxy_auth REQUIRED
2.) b) Edit access definitions. You need (order is important):

http_access allow users
http_access deny all
2.) c) Setup a dummy authentication module

auth_param basic program /usr/local/bin/squid_dummy_auth
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
3.) Create an authentication script

# vi /usr/local/bin/squid_dummy_auth
Insert something like:

#!/bin/sh

while read dummy; do
	echo OK
done
# chmod a+x /usr/local/bin/squid_dummy_auth
4.) Restart squid

# /etc/init.d/squid restart
With this you have a working Basic Auth proxy test setup running on localhost:3128.

Mar 20, 2015
ln -s /usr/share/munin/plugins/jstat__heap /etc/munin/plugins/jstat_myname_heap
Choose some useful name instead of "myname". This allows monitoring multiple JVM setups. Configure each link you created in, for example, a new plugin config file named "/etc/munin/plugin-conf.d/jstat", which should contain one section per JVM looking like this
[jstat_myname_heap]
user tomcat7
env.pidfilepath /var/run/tomcat7.pid
env.javahome /usr/
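To verify a link works without waiting for the next munin-node poll, run the plugin by hand with munin-run (using the example link name from above):

munin-run jstat_myname_heap config    # graph configuration
munin-run jstat_myname_heap           # current values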
Mar 20, 2015
# <employee id>;<name>;<age>;<location>
1;Anton;37;Geneva
2;Marie;28;Paris
3;Tom;25;London
# <task id>;<employee id>;<task description>
1000;2;Setup new Project
1001;3;Call customer X
1002;3;Get package from post office
And now some action:
The following command
join employee.csv tasks.csv
... doesn't produce any output. This is because it expects the shared key to reside in the first column of both files, which is not the case here. Also the default separator for 'join' is whitespace.
join -t ";" -1 1 -2 2 employee.csv tasks.csv
We need to run join with '-t ";"' to tell it that we have CSV format. Then to avoid the pitfall of not having the common key in the first column we need to tell join where the join key is in each file. The switch "-1" sets the key index for the first file and "-2" for the second file.
2;Marie;28;Paris;1000;Setup new Project
3;Tom;25;London;1001;Call customer X
3;Tom;25;London;1002;Get package from post office
join -o1.2,2.3 -t ";" -1 1 -2 2 employee.csv tasks.csv
We use "-o" to limit the fields to be printed. "-o" takes a comma separated list of "<file nr>.<field nr>" definitions. So we only want the second file of the first file (1.2) and the third field of the second file (2.3)...
Marie;Setup new Project
Tom;Call customer X
Tom;Get package from post office
While the syntax of join is not that straightforward, it still allows quickly doing things one is often tempted to implement in a script. It is quite easy to convert batch input data to CSV format. Using join it can be easily grouped and reduced according to your task.
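One pitfall worth mentioning: join expects both inputs to be sorted on the join field (the example files above happen to be). For unsorted data you can sort on the fly with process substitution:

join -t ";" -1 1 -2 2 \
    <(sort -t ";" -k1,1 employee.csv) \
    <(sort -t ";" -k2,2 tasks.csv)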
If this got you interested you can find more and non-CSV examples on this site.
Mar 20, 2015
Release Line | Uses | Flash | Status |
---|---|---|---|
1.6 1.8 | GTK2 + WebkitGTK2 | any native Flash | Works |
1.10 | GTK3 + WebkitGTK3 v1.8 | 32bit native Flash | Broken |
1.10 | GTK3 + WebkitGTK3 v1.8 | 64bit native Flash | Broken |
1.10 | GTK3 + WebkitGTK3 v1.8 | 32bit Flash + nspluginwrapper | Works |
1.10 | GTK3 + WebkitGTK3 v2.0 | any native Flash | Works |
apt-get install nspluginwrapper
nspluginwrapper -i -a -v -n libflashplayer.so
to install the plugin
Mar 20, 2015
stats_users = myuser
Now reload pgbouncer and use this user "myuser" to connect to pgbouncer with psql by requesting the special "pgbouncer" database:
psql -p 6432 -U myuser -W pgbouncer
At the psql prompt list the supported pgbouncer commands with
SHOW HELP;
PgBouncer will present all statistics and configuration options:
pgbouncer=# SHOW HELP;
NOTICE:  Console usage
DETAIL:
        SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|VERSION
        SHOW STATS|FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
        SET key = arg
        RELOAD
        PAUSE [<db>]
        SUSPEND
        RESUME [<db>]
        SHUTDOWN
The "SHOW" commands are all self-explanatory. Very useful are the "SUSPEND" and "RESUME" commands when you use pools.
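For scripting and quick checks single console commands can also be fired non-interactively (user and port as configured above):

psql -p 6432 -U myuser -c "SHOW STATS;" pgbouncer
psql -p 6432 -U myuser -c "SHOW POOLS;" pgbouncer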
Mar 20, 2015
Website | Header |
---|---|
www.adcash.com | X-XSS-Protection: 1; mode=block |
www.badoo.com | X-XSS-Protection: 1; mode=block |
www.blogger.com | X-XSS-Protection: 1; mode=block |
www.blogspot.com | X-XSS-Protection: 1; mode=block |
www.facebook.com | X-XSS-Protection: 0 |
www.feedburner.com | X-XSS-Protection: 1; mode=block |
www.github.com | X-XSS-Protection: 1; mode=block |
www.google.de | X-XSS-Protection: 1; mode=block |
www.live.com | X-XSS-Protection: 0 |
www.meinestadt.de | X-XSS-Protection: 1; mode=block |
www.openstreetmap.org | X-XSS-Protection: 1; mode=block |
www.tape.tv | X-XSS-Protection: 1; mode=block |
www.xing.de | X-XSS-Protection: 1; mode=block; report=https://www.xing.com/tools/xss_reporter |
www.youtube.de | X-XSS-Protection: 1; mode=block; report=https://www.google.com/appserve/security-bugs/log/youtube |
Website | Header |
---|---|
www.blogger.com | X-Content-Type-Options: nosniff |
www.blogspot.com | X-Content-Type-Options: nosniff |
www.deutschepost.de | X-Content-Type-Options: NOSNIFF |
www.facebook.com | X-Content-Type-Options: nosniff |
www.feedburner.com | X-Content-Type-Options: nosniff |
www.github.com | X-Content-Type-Options: nosniff |
www.linkedin.com | X-Content-Type-Options: nosniff |
www.live.com | X-Content-Type-Options: nosniff |
www.meinestadt.de | X-Content-Type-Options: nosniff |
www.openstreetmap.org | X-Content-Type-Options: nosniff |
www.spotify.com | X-Content-Type-Options: nosniff |
www.tape.tv | X-Content-Type-Options: nosniff |
www.wikihow.com | X-Content-Type-Options: nosniff |
www.wikipedia.org | X-Content-Type-Options: nosniff |
www.youtube.de | X-Content-Type-Options: nosniff |
Website | Header |
---|---|
www.github.com | Content-Security-Policy: default-src *; script-src https://github.global.ssl.fastly.net https://ssl.google-analytics.com https://collector-cdn.github.com; style-src 'self' 'unsafe-inline' 'unsafe-eval' https://github.global.ssl.fastly.net; object-src https://github.global.ssl.fastly.net |
Website | Header |
---|---|
www.adcash.com | X-Frame-Options: SAMEORIGIN |
www.adf.ly | X-Frame-Options: SAMEORIGIN |
www.avg.com | X-Frame-Options: SAMEORIGIN |
www.badoo.com | X-Frame-Options: DENY |
www.battle.net | X-Frame-Options: SAMEORIGIN |
www.blogger.com | X-Frame-Options: SAMEORIGIN |
www.blogspot.com | X-Frame-Options: SAMEORIGIN |
www.dailymotion.com | X-Frame-Options: deny |
www.deutschepost.de | X-Frame-Options: SAMEORIGIN |
www.ebay.de | X-Frame-Options: SAMEORIGIN |
www.facebook.com | X-Frame-Options: DENY |
www.feedburner.com | X-Frame-Options: SAMEORIGIN |
www.github.com | X-Frame-Options: deny |
www.gmx.de | X-Frame-Options: deny |
www.gmx.net | X-Frame-Options: deny |
www.google.de | X-Frame-Options: SAMEORIGIN |
www.groupon.de | X-Frame-Options: SAMEORIGIN |
www.imdb.com | X-Frame-Options: SAMEORIGIN |
www.indeed.com | X-Frame-Options: SAMEORIGIN |
www.instagram.com | X-Frame-Options: SAMEORIGIN |
www.java.com | X-Frame-Options: SAMEORIGIN |
www.linkedin.com | X-Frame-Options: SAMEORIGIN |
www.live.com | X-Frame-Options: deny |
www.mail.ru | X-Frame-Options: SAMEORIGIN |
www.mozilla.org | X-Frame-Options: DENY |
www.netflix.com | X-Frame-Options: SAMEORIGIN |
www.openstreetmap.org | X-Frame-Options: SAMEORIGIN |
www.oracle.com | X-Frame-Options: SAMEORIGIN |
www.paypal.com | X-Frame-Options: SAMEORIGIN |
www.pingdom.com | X-Frame-Options: SAMEORIGIN |
www.skype.com | X-Frame-Options: SAMEORIGIN |
www.skype.de | X-Frame-Options: SAMEORIGIN |
www.softpedia.com | X-Frame-Options: SAMEORIGIN |
www.soundcloud.com | X-Frame-Options: SAMEORIGIN |
www.sourceforge.net | X-Frame-Options: SAMEORIGIN |
www.spotify.com | X-Frame-Options: SAMEORIGIN |
www.stackoverflow.com | X-Frame-Options: SAMEORIGIN |
www.tape.tv | X-Frame-Options: SAMEORIGIN |
www.web.de | X-Frame-Options: deny |
www.wikihow.com | X-Frame-Options: SAMEORIGIN |
www.wordpress.com | X-Frame-Options: SAMEORIGIN |
www.yandex.ru | X-Frame-Options: DENY |
www.youtube.de | X-Frame-Options: SAMEORIGIN |
Website | Header |
---|---|
www.blogger.com | Strict-Transport-Security: max-age=10893354; includeSubDomains |
www.blogspot.com | Strict-Transport-Security: max-age=10893354; includeSubDomains |
www.facebook.com | Strict-Transport-Security: max-age=2592000 |
www.feedburner.com | Strict-Transport-Security: max-age=10893354; includeSubDomains |
www.github.com | Strict-Transport-Security: max-age=31536000 |
www.paypal.com | Strict-Transport-Security: max-age=14400 |
www.spotify.com | Strict-Transport-Security: max-age=31536000 |
www.upjers.com | Strict-Transport-Security: max-age=47336400 |
Mar 20, 2015
This is a short summary of everything that is a precondition to being able to run APD as a PHP profiler. The description applies to PHP 5.6.2 and APD 1.0.1 and might be incorrect for other PHP/APD versions.
Absolute Preconditions
Do not start if you didn't ensure the following:
Correct APD Compilation
If you have a working PEAR setup you might want to set up APD as described in this Linux Journal article. Also try distribution packages. Otherwise APD is built as follows:
<apache root>/bin/phpize
./configure
Add "--with-php-config=<apache root>/bin/php-config" if configure fails.
make
make install
zend_extension=<path from make install>/apd.so
apd.statement=1
apd.tracedir=/tmp/apd-traces
<apache root>/bin/php
Once entered, no message should be given if the extension is loaded properly. If you get an error message here that the "apd.so" extension could not be loaded you have a problem. Ensure that you compiled against the correct PHP/Apache version and are using the same PHP runtime right now.
If PHP doesn't complain about anything enter
<?php phpinfo(); ?>
and check for some lines about APD. If you find them you are ready for work.
Getting Some Traces
To start tracing first restart your Apache to allow the PHP module to load APD. Next you need to identify a script to trace. Add the APD call at the top of the script:
apd_set_pprof_trace();
Then make some requests and remove the statement again to avoid causing further harm.
Now have a look at the trace directory. You should find files with a naming scheme of "pprof[0-9]*.[0-9]" here. Decode them using the "pprofp" tool from your APD source tarball. Example:
<apache root>/bin/php <apd source root>/pprofp -u <trace file>
Redirect stdout if necessary. Use -t instead of -u (summary output) to get calling trees.
Tracing Pitfalls
When you create traces with -t you get a summary output too, but it doesn't contain the per-call durations. I suggest always creating both a call tree and a summary trace.
Mar 20, 2015
# cf-sketch --search utilities
Monitoring::nagios_plugin_agent /tmp/design-center/sketches/utilities/nagios_plugin_agent
[...]
...and install them:
# cf-sketch --install Monitoring::nagios_plugin_agent
cf-sketch itself is a Perl program that needs to be set up separately by running
git clone https://github.com/cfengine/design-center/
cd design-center/tools/cf-sketch
make install
Mar 20, 2015
GtkTreeIter iter;

if (gtk_tree_model_iter_nth_child (treemodel, &iter, NULL, position)) {
    GtkTreeSelection *selection = gtk_tree_view_get_selection (treeview);

    if (selection)
        gtk_tree_selection_select_iter (selection, &iter);
}
Mar 20, 2015
from gi.repository import GObject, Peas, PeasGtk, Gtk

class TrayiconPlugin (GObject.Object, Peas.Activatable):
    __gtype_name__ = 'TrayiconPlugin'

    object = GObject.property (type=GObject.Object)

    def do_activate (self):
        self.staticon = Gtk.StatusIcon ()
        self.staticon.set_from_stock (Gtk.STOCK_ABOUT)
        self.staticon.connect ("activate", self.trayicon_activate)
        self.staticon.connect ("popup_menu", self.trayicon_popup)
        self.staticon.set_visible (True)

    def trayicon_activate (self, widget, data = None):
        print "toggle app window!"

    def trayicon_quit (self, widget, data = None):
        print "quit app!"

    def trayicon_popup (self, widget, button, time, data = None):
        self.menu = Gtk.Menu ()

        menuitem_toggle = Gtk.MenuItem ("Show / Hide")
        menuitem_quit = Gtk.MenuItem ("Quit")

        menuitem_toggle.connect ("activate", self.trayicon_activate)
        menuitem_quit.connect ("activate", self.trayicon_quit)

        self.menu.append (menuitem_toggle)
        self.menu.append (menuitem_quit)
        self.menu.show_all ()
        self.menu.popup(None, None, lambda w,x: self.staticon.position_menu(self.menu, self.staticon), self.staticon, 3, time)

    def do_deactivate (self):
        self.staticon.set_visible (False)
        del self.staticon
Mar 20, 2015
GError *err = NULL;
GMatchInfo *matchInfo;
GRegex *regex;

regex = g_regex_new ("text", 0, 0, &err);
// check for compilation errors here!

g_regex_match (regex, "Some text to match", 0, &matchInfo);
Note how g_regex_new() gets the pattern as first parameter without any regex delimiters. As the regex is created separately it can and should be reused.
if (g_match_info_matches (matchInfo))
    g_print ("Text found!\n");
regex = g_regex_new (" mykey=(\\w+) ", 0, 0, &err);
g_regex_match (regex, content, 0, &matchInfo);

while (g_match_info_matches (matchInfo)) {
    gchar *result = g_match_info_fetch (matchInfo, 0);
    g_print ("mykey=%s\n", result);
    g_match_info_next (matchInfo, &err);
    g_free (result);
}
gchar **results = g_regex_split_simple ("\\s+", "White space separated list", 0, 0);
Use g_regex_split for a precompiled regex or use the "simple" function to just pass the pattern.
Mar 20, 2015
gcc <options> <sources> -o <binary> -Wl,-Bstatic <list of static libs> -Wl,-Bdynamic <list of dynamic libs>
Mar 20, 2015
tail -f /var/log/myserver.log
use tail --follow=name /var/log/myserver.log
Using the long form --follow instead of -f you can tell tail to watch the file name and not the file descriptor. So shortly after the file name is removed tail will notice it and terminate itself.

Mar 20, 2015
iconv -c -t ASCII input.txt
The result will be printed to stdout. The -c switch does the stripping. Using -t you can select every target encoding you like.

Mar 20, 2015
$ dmesg -T
[...]
[Wed Oct 10 20:31:22 2012] Buffer I/O error on device sr0, logical block 0
[Wed Oct 10 20:31:22 2012] sr 1:0:0:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[...]
Additionally the severity and source of the messages is interesting (option -x):
$ dmesg -xT
[...]
kern  :err   : [Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
kern  :info  : [Wed Oct 10 20:31:21 2012] sr 1:0:0:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[...]
Now we see that only one of those example message lines was an actual error. But we can even filter for errors or worse, ignoring all the boot messages (option -l):
$ dmesg -T -l err,crit,alert,emerg
[...]
[Wed Oct 10 20:31:21 2012] Buffer I/O error on device sr0, logical block 0
[...]
In the same way it is possible to filter the facility (the first column in the -x output). For example this could return:
$ dmesg -T -f daemon
[...]
[Wed Oct 10 19:57:50 2012] udevd[106]: starting version 175
[Wed Oct 10 19:58:08 2012] udevd[383]: starting version 175
[...]
In any case it might be worth remembering:
Mar 20, 2015
Service | Free Space | Linux Support |
---|---|---|
Ubuntu One | 5GB free | Native in Ubuntu, known to work elsewhere too |
Dropbox | 2GB free | Packages Debian,Ubuntu,Fedora (link) |
SpiderOak | 2GB free | Packages Debian, Fedora, Slackware (link) |
Wuala | 2GB free | Installer for Debian, Ubuntu, Fedora, Redhat, Centos; OpenSuse (link) |
clients/
clients/01d263e0-dde9-11e2-a28f-0800200c9a66.xml    (Client 1 state)
clients/0b6f7af0-dde9-11e2-a28f-0800200c9a66.xml    (Client 2 state)
clients/lock.xml                                    (might be missing)
data/feedlist.opml
data/read-states.xml
data/items/flagged-chunk1.xml
data/items/flagged-chunk2.xml
data/items/flagged-chunk3.xml
data/items/newsbin-wtwxzo34-chunk1.xml
data/items/newsbin-wtwxzo34-chunk2.xml
Mar 20, 2015
dmidecode 2>&1 | grep -A17 -i "Memory Device" | egrep "Memory Device|Locator: PROC|Size" | grep -v "No Module Installed" | grep -A1 -B1 "Size:"
The "Locator:" line gives you the slot assignments as listed in the HP documentation, e.g. the HP ProLiant DL380 G7 page. Of course you can also look this up in the ILO GUI.
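On reasonably recent dmidecode versions the built-in type filter shortens this considerably; a sketch (field names can vary with the firmware):

dmidecode -t memory | grep -E "Locator|Size" | grep -v "No Module Installed"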
Mar 20, 2015
typedef struct {
    GdkColor fg[5];
    GdkColor bg[5];
    GdkColor light[5];
    GdkColor dark[5];
    GdkColor mid[5];
    GdkColor text[5];
    GdkColor base[5];
    GdkColor text_aa[5];    /* Halfway between text/base */
    [...]
I decided to use the "dark" and "bg" colors, with "dark" for the background and "bg" for the number text. For a light standard theme this mostly results in a white number on some shaded background. This is how it looks (e.g. the number "4" behind the feed "Boing Boing"):
gint textAvg, bgAvg;

textAvg = style->text[GTK_STATE_NORMAL].red / 256 +
          style->text[GTK_STATE_NORMAL].green / 256 +
          style->text[GTK_STATE_NORMAL].blue / 256;

bgAvg = style->bg[GTK_STATE_NORMAL].red / 256 +
        style->bg[GTK_STATE_NORMAL].green / 256 +
        style->bg[GTK_STATE_NORMAL].blue / 256;

if (textAvg > bgAvg)
    darkTheme = TRUE;
As "text" color and "background" color should always be contrasting colors, the comparison of the sums of their RGB components should produce a useful result. If the theme is a colorful one (e.g. a very saturated red theme) it might sometimes cause the opposite result than intended, but still background and foreground will be contrasting enough that the result stays readable; only the number background will not contrast well with the widget background. For light or dark themes the comparison should always work well and produce optimal contrast. Now it is up to the Liferea users to decide whether they like it or not.
Mar 20, 2015
The following is a short summary of things to configure to get OpenEMM bounce handling to work. The problem is mostly setting up the connection from your sendmail setup through the milter plugin provided by OpenEMM, which then communicates with another daemon "bavd" that, as I understand it, keeps per mail address statistics and writes the bounce results into the DB.
The things that can cause problems are these:
The really good thing is that OpenEMM is so very well documented that you just need to look up the simple how-to documentation and everything will work within 5min... Sorry, just kidding! They seem to want to make money on books and support and somehow don't write documentation, relying instead on endless forum posts of clueless users.
Enough of the rant; below you find some hints on how to work around the problem causes mentioned above:
Setup Preconditions:
INPUT_MAIL_FILTER(`bav', `S=unix:/var/run/bav.sock, F=T')dnl
Setup:
[email protected] alias:[email protected]
with "ext_6" being the sender address and "ext_12" the bounce filter address.
Check List:
select count(*) from customer_1_binding_tbl where user_status=2;
The meaning of the user_status value is as follows:
Value | Meaning |
---|---|
1 | active |
2 | hard bounce |
3 | opt out by admin |
4 | opt out by user |
Also remember that hard bounces might not be generated immediately. In case of soft bounces OpenEMM waits for up to 7 bounces to consider the mail address as bounced.
Mar 20, 2015
echo | mkpasswd -s
Mar 20, 2015
{report-table}
{content-reporter:space=THESPACE|scope="THESPACE:ParentPage" > descendents}
{text-filter:data:somefield|required=true}
{content-reporter}
{report-column:title=somefield}{report-info:data:somefield}{report-column}
{report-table}
Mar 20, 2015
# Log at INFO level to DRFAAUDIT, SYSLOG appenders
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,DRFAAUDIT,SYSLOG

# Do not forward audit events to parent appenders (i.e. namenode)
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

# Configure local appender
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=/var/log/audit.log
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Configure syslog appender
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=loghost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.SYSLOG.Facility=LOCAL1
It is important to have "SYSLOG" in the "...FSNamesystem.audit" definition at the top and to define such a "SYSLOG" appender below with "log4j.appender.SYSLOG". There you configure your loghost and facility. Now it might be that you still do not get anything in syslog on your loghost when using Hadoop version 0.18 up to at least 0.20. I found a solution to this only at this Japanese blog post which suggested to modify the Hadoop start helper script /usr/lib/hadoop/bin/hadoop-daemon.sh to make it work. You need to change the environment variables export HADOOP_ROOT_LOGGER="INFO,DRFA" export HADOOP_SECURITY_LOGGER="INFO,DRFAS"
to include "SYSLOG": export HADOOP_ROOT_LOGGER="INFO,SYSLOG,DRFA" export HADOOP_SECURITY_LOGGER="INFO,SYSLOG,DRFAS"
After making this change the syslog logging will work.

Mar 20, 2015
This post is a comparison of the performance of different tools available to tag FLV and MP4 containers with specific metadata (e.g. title, keyframes, generator or other custom fields...). For FLV containers flvtool2, flvtool++ and yamdi are compared. For the MP4 container MP4box, AtomicParsley and ffmpeg are compared.
Here are the IMO three most important FLV taggers tested on a 125MB FLV:
Name | Duration | Large Files | In Memory | Custom Tags | Command |
---|---|---|---|---|---|
flvtool2 1.0.6 | 3min 11s | no | no | yes | flvtool2 -UP -band:Test -user:Test -date:1995 -genres:pop test.flv |
flvtool++ 1.2.1 | 3s | no | yes | yes | flvtool++ test.flv -tag band "Test" -tag user "Test" -tag date "1995" -tag genres "pop" test2.flv |
yamdi 1.6 | 1.5s | yes | no | no (patch) | yamdi -i test.flv -o test2.flv -c "Test" |
The performance of flvtool2 is horrendous. For films of 120min it will take hours to process. Therefore: do not use it! Use Facebook's flvtool++ instead. I guess the bad performance results from it being built in Ruby. Also notice the "Large Files" column indicating large file support, which officially only yamdi supports (by adding compile flag -D_FILE_OFFSET_BITS=64). Another important point is the "In Memory" column indicating that flvtool++ loads the entire file into memory when tagging, which is problematic when tagging large files. Given these results only yamdi should be used for FLV tagging!
Now for the MP4 tagging. Here you can select between a lot of tools from the net, but only a few of them are command line based and available for Unix. The MP4 test file used is 100MB large.
Name | Duration | Command |
---|---|---|
AtomicParsley | 0.6s | AtomicParsley test.mp4 --artist "Test" --genre "Test" --year "1995" |
mp4box | 0.6s | MP4Box -itags Name=Test:Artist=Me:disk=95/100 test.mp4 |
ffmpeg 0.6 | 0.8s | ffmpeg -i test.mp4 -metadata title="Test" -metadata artist="Test" -metadata date="1995" -acodec copy -vcodec copy test2.mp4 |
Given that recent ffmpeg brings the tagging for MP4 out of the box (it doesn't for FLV though) you do not even need an external tool to add the metadata.
Mar 20, 2015
for node in $(knife node list); do
    if knife node show -r $node | grep 'role\[base\]' >/dev/null; then
        echo $node
    fi
done
Did I miss some other obvious way? I'd like to have some "knife run_list filter ..." command!
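For reuse the loop wraps nicely into a small shell function (the function name is made up). Using grep -F avoids having to escape the brackets in the role name:

# Usage: knife_nodes_with 'role[base]'
knife_nodes_with() {
    for node in $(knife node list); do
        knife node show -r "$node" | grep -qF "$1" && echo "$node"
    done
}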
Mar 20, 2015
chef-shell -z
The "-z" is important to get chef-shell to load the currently active run list for the node that a "chef-client" run would use. Then enter "attributes" to switch to attribute mode
chef > attributes
chef:attributes >
chef:attributes > default["authorized_keys"]
[...]
chef:attributes > node["packages"]
[...]
By just querying for "node" you get a full dump of all attributes.
Mar 20, 2015
knife job start ...
knife job list
and
knife node status ...
will tell about job execution status on the remote node. At a first glance it seems nice. Then again I feel worried when this is intended to get rid of SSH keys. Why do we need to get rid of them exactly? And in exchange for what?
Mar 20, 2015
cfengine:client:/var/opt/dsau/cfengine/inputs/update.conf:194: Warning: actionsequence is empty
cfengine:client:/var/opt/dsau/cfengine/inputs/update.conf:194: Warning: perhaps cfagent.conf/update.conf have not yet been set up?
The message "actionsequence is empty" really means that the cfagent.conf is empty, because it could not be retrieved. The question then is why it couldn't be retrieved. Here is a check list:
Mar 20, 2015
Creator: Lars Windolf
~ heavily based on a CCM lecture by Martin Löwis (HPI, Uni Potsdam)
last modified: 29.09.2004
+ - defines object services
+ - defines domain interfaces
defines application interfaces
no private/protected possible
smallest unit of distribution
can have exceptions
is a type (can be passed with operations, but passing is always done per reference; valuetypes can be used for per-value passing)
+ - interfaces can be derived from other interfaces
multiple inheritance allowed
there are a lot of standard types
a struct statement can only define a type and no data; the defined type has to be used somewhere to have any effect
recursive type definitions can only be created by using sequences
creates an interface that when used is passed by value and not by reference
to prevent one communication round-trip per object data access
+ - use cases
needs a factory to create an instance in the receiving ORB, application must register the value factory
receiving ORB must know complete structure
method definitions require local implementations
can be truncatable (receiving ORB omits unknown fields)
can be recursive
can have private fields
value boxes: can be used to create valuetypes from standard types, this is useful for building recursive valuetypes containing standard type fields
associates names with object references
can be used instead of IORs to find the "initial" application object
ORB provides standard interface to locate name service : orb->resolve_initial_references("NameService");
+ - binding
+ - unbinding
+ - names
+ - URLs
there were proprietary approaches...
+ - and there is INS
Mar 20, 2015
if($str =~ /(\w+)\s+(\w+)(\s+(\w+))?/) {
    $result{id} = $1;
    $result{status} = $2;
    $result{details} = $4 if(defined($4));
}
when I should write:
if($str =~ /(?<id>\w+)\s+(?<status>\w+)(\s+(?<details>\w+))?/) {
    %result = %+;
}
as described in the perlre manual: Capture group contents are dynamically scoped and available to you outside the pattern until the end of the enclosing block or until the next successful match, whichever comes first. (See Compound Statements in perlsyn.) You can refer to them by absolute number (using "$1" instead of "\g1", etc); or by name via the %+ hash, using "$+{name}".
Mar 20, 2015
echo "Doing X with $file."becomes
log "Doing X with $file."and you implement log() as a function prefixing the timestamp. The danger here is to not replace some echo that needs being redirected somewhere else.
#!/bin/bash
$(
<the original script body>
) | stdbuf -i0 -o0 -e0 awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'
You can drop the stdbuf invocation that unbuffers the pipe if you do not need 100% exact timestamps. Of course you can use this also with any slow running command on the shell no matter how complex, just add the pipe command. The obvious advantage: you do not touch the legacy code and can be 100% sure the script is still working after adding the timestamps.
Mar 20, 2015
sudo apt-get install xvba-va-driver libva-glx1 libva-egl1 vainfo
This is also described on the Ubuntu BinaryDriverHowto page but missing in almost all other tutorials on how to get Radeon chips working on Linux. So again: install XvBA! If you are unsure whether it is working on your system just run
vainfo
and check if it lists "Supported profile and entrypoints". If it does not or the tool doesn't exist you probably run without hardware acceleration for Flash.
Mar 20, 2015
from gi.repository import GObject
from gi.repository import GnomeKeyring

keyringName = 'test'

def unlock():
    print 'Unlocking keyring %s...' % keyringName
    GnomeKeyring.unlock_sync(keyringName, None)

def dump_all():
    print "Dump all keyring entries..."
    (result, ids) = GnomeKeyring.list_item_ids_sync(keyringName)
    for id in ids:
        (result, item) = GnomeKeyring.item_get_info_sync(keyringName, id)
        if result != GnomeKeyring.Result.OK:
            print '%s is locked!' % (id)
        else:
            print '   => %s = %s' % (item.get_display_name(), item.get_secret())

def do_query(id):
    print 'Fetch secret for id %s' % id
    attrs = GnomeKeyring.Attribute.list_new()
    GnomeKeyring.Attribute.list_append_string(attrs, 'id', id)
    result, value = GnomeKeyring.find_items_sync(GnomeKeyring.ItemType.GENERIC_SECRET, attrs)
    if result != GnomeKeyring.Result.OK:
        return
    print '   => password %s = %s' % (id, value[0].secret)
    print '      keyring id = %s' % value[0].item_id

def do_store(id, username, password):
    print 'Adding keyring entry for id %s' % id
    GnomeKeyring.create_sync(keyringName, None)
    attrs = GnomeKeyring.Attribute.list_new()
    GnomeKeyring.Attribute.list_append_string(attrs, 'id', id)
    GnomeKeyring.item_create_sync(keyringName, GnomeKeyring.ItemType.GENERIC_SECRET, repr(id), attrs, '@@@'.join([username, password]), True)
    print '   => Stored.'

# Our test code...
unlock()
dump_all()
do_store('id1', 'User1', 'Password1')
do_query('id1')
dump_all()
For simplicity the username and password are stored together as the secret token separated by "@@@". According to the documentation it should be possible to store them separately, but my limited Python knowledge and the missing GIR documentation made me use this simple method. If I find a better way I'll update this post. If you know how to improve the code please post a comment! The code should raise a keyring password dialog when run for the first time in the session and give an output similar to this:
Unlocking keyring test...
Dump all keyring entries...
   => 'id1' = TestA@@@PassA
Adding keyring entry for id id1
   => Stored.
Fetch secret for id id1
   => password id1 = TestA@@@PassA
      keyring id = 1
Dump all keyring entries...
   => 'id1' = TestA@@@PassA
You can also check the keyring contents using the seahorse GUI where you should see the "test" keyring with an entry with id "1" as in the screenshot below.
Mar 20, 2015
ERROR: libx264 not found
If you think configure made a mistake, make sure you are using the latest
version from SVN. If the latest version fails, report the problem to the
[email protected] mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.err" produced by configure as this will help
solving the problem.
This can be caused by two effects:
libavcodec/svq3.c: In function 'svq3_decode_slice_header':
libavcodec/svq3.c:721: warning: cast discards qualifiers from pointer target type
libavcodec/svq3.c:724: warning: cast discards qualifiers from pointer target type
libavcodec/svq3.c: In function 'svq3_decode_init':
libavcodec/svq3.c:870: warning: dereferencing type-punned pointer will break strict-aliasing rules
/tmp/ccSySbTo.s: Assembler messages:
/tmp/ccSySbTo.s:10644: Error: suffix or operands invalid for `add'
/tmp/ccSySbTo.s:10656: Error: suffix or operands invalid for `add'
/tmp/ccSySbTo.s:12294: Error: suffix or operands invalid for `add'
/tmp/ccSySbTo.s:12306: Error: suffix or operands invalid for `add'
make: *** [libavcodec/h264.o] Error 1
This post explains that this is related to a glibc issue and how to patch it.

ffmpeg+x264

ffmpeg compilation fails with:
libavcodec/libx264.c: In function 'encode_nals':
libavcodec/libx264.c:60: warning: implicit declaration of function 'x264_nal_encode'
libavcodec/libx264.c: In function 'X264_init':
libavcodec/libx264.c:169: error: 'x264_param_t' has no member named 'b_bframe_pyramid'
make: *** [libavcodec/libx264.o] Error 1
This means you are using incompatible ffmpeg and libx264 versions. Try to upgrade ffmpeg or to downgrade libx264.

ffmpeg+video4linux
/usr/include/linux/videodev.h:55: error: syntax error before "ulong"
/usr/include/linux/videodev.h:71: error: syntax error before '}' token
Workaround:
--- configure.ac.080605 2005-06-08 21:56:04.000000000 +1200
+++ configure.ac        2005-06-08 21:56:42.000000000 +1200
@@ -1226,6 +1226,7 @@
 AC_CHECK_HEADERS(linux/videodev.h,,,
 [#ifdef HAVE_SYS_TIME_H
 #include <sys/time.h>
+#include <sys/types.h>
 #endif
 #ifdef HAVE_ASM_TYPES_H
 #include <asm/types.h>
http://www.winehq.org/pipermail/wine-devel/2005-June/037400.html

Or as a workaround: --disable-demuxer=v4l --disable-muxer=v4l --disable-demuxer=v4l2 --disable-muxer=v4l2

ffmpeg+old make
make: *** No rule to make target `libavdevice/libavdevice.so', needed by `all'. Stop.
Problem: GNU make is too old, you need at least v3.81
http://www.mail-archive.com/[email protected]/msg01284.html
make: *** No rule to make target `install-libs', needed by `install'. Stop.
Problem: GNU make is too old, you need at least v3.81
http://ffmpeg.arrozcru.org/forum/viewtopic.php?f=1&t=833

Mplayer+old make
make: expand.c:489: allocated_variable_append: Assertion `current_variable_set_list->next != 0' failed.
Problem: GNU make is too old, you need at least v3.81

MPlayer
i386/dsputil_mmx.o i386/dsputil_mmx.c
i386/dsputil_mmx.c: In function `transpose4x4':
i386/dsputil_mmx.c:621: error: can't find a register in class `GENERAL_REGS' while reloading `asm'
Workaround: Add the following to your configure call
--extra-cflags="-O3 -fomit-frame-pointer"
Note: if this somehow helped you and you know something to be added feel free to post a comment!
Jan 15, 2012
ffmpeg -i inputfile -vcodec mpeg2video -pix_fmt yuv422p -qscale 1 -qmin 1 -intra outputfile
The relevant piece is the "-intra" switch. For MPEG-4 TS something like the following should work:

ffmpeg -i inputfile -vcodec libx264 -vpre slow -vpre baseline -acodec libfaac -ab 128k -ar 44100 -intra -b 2000k -minrate 2000k -maxrate 2000k outputfile
Note: It is important to watch the resulting muxing overhead which might lower the effective bitrate a lot! The resulting output files should be safe to be passed to the Apple segmenter.
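To double-check that the result really is intra-only, ffprobe can count the picture types; a sketch (entry names may differ slightly between ffprobe versions):

# Every video frame should report pict_type=I
ffprobe -v error -select_streams v -show_entries frame=pict_type -of csv outputfile | sort | uniq -c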