Using the previous user_count table, we could just select * from user_count where value > 1000, but a fixed threshold like that does not allow for variation over time.
with user_count as (
select
date_trunc('day', created_at)::date as day,
count(1) as value
from users
group by 1
), user_count_with_pct as (
select
day,
value,
value / (avg(value) over ()) as pct_of_mean
from user_count
order by 1
)
The line value / (avg(value) over ()) uses a window function to divide each row’s value by the average value for the entire table.
With select * from user_count_with_pct where pct_of_mean >= 2.0, we see the days where we had 200%+ of the average signup rate.
This, however, requires choosing the threshold manually.
Outlier detection with standard deviation
First, we need to pick a z-score (number of standard deviations) threshold. This page from Boston University has a good explanation and z-scores for different probabilities.
For example, if we care about high or low values that occur only 5% of the time by random chance (two-tailed), we'd use a z-score threshold of +/- 1.96. If we want a 5% threshold exclusively for high values (one-tailed), we'd pick 1.645.
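These thresholds can also be derived directly with Python's statistics.NormalDist instead of a lookup table (a small sketch, assuming Python 3.8+):

```python
from statistics import NormalDist

# Two-tailed 5%: split the 5% across both tails, 2.5% in each
two_tailed = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96
# One-tailed 5%: the whole 5% in the upper tail
one_tailed = NormalDist().inv_cdf(1 - 0.05)      # ~1.645

print(round(two_tailed, 2), round(one_tailed, 3))
```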
with data as (
select
date_trunc('day', created_at)::date as day,
count(1) as value
from disk_usage
group by 1
), data_with_stddev as (
select
day,
value,
(value - avg(value) over ())
/ (stddev(value) over ()) as zscore
from data
order by 1
)
The first part of the calculation is (value - avg(value) over ()) which calculates how much a single datapoint deviates from the mean.
The second part / (stddev(value) over ()) divides the deviation by the standard deviation, to measure how many standard deviations the data point is from the mean.
Here’s the outlier query for a two-tailed 5% threshold: select * from data_with_stddev where abs(zscore) >= 1.96
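The same two-part calculation, sketched in plain Python with made-up daily counts (the numbers are hypothetical; only the mechanics match the SQL):

```python
from statistics import mean, stdev

values = [100, 110, 90, 105, 95, 300]  # hypothetical daily signup counts

mu = mean(values)
sigma = stdev(values)

# deviation from the mean, divided by the standard deviation
zscores = [(v - mu) / sigma for v in values]

# two-tailed 5% threshold
outliers = [v for v, z in zip(values, zscores) if abs(z) >= 1.96]
print(outliers)  # [300]
```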
Oracle
Install sqlplus on Linux:
First of all you need to download the Instant Client packages. Install the alien tool so you can install rpm packages, by typing the following command in a terminal.
sudo apt-get install alien
Once that is done, go to the folder where the rpm files are located and install them; with alien that is typically:
sudo alien -i *.rpm
EXPLAIN PLAN
SET STATEMENT_ID = 'st1' FOR
SELECT last_name FROM employees;

EXPLAIN PLAN
SET STATEMENT_ID = 'st1'
INTO my_plan_table
FOR
SELECT last_name FROM employees;

SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY('MY_PLAN_TABLE', 'st1', 'TYPICAL'));
Describe a table inside a standard JDBC connection
select column_name, data_type from all_tab_columns where table_name = 'TABLE_NAME';
Find columns matching the data type
select
TABLE_NAME,
COLUMN_NAME,
DATA_TYPE,
DATA_TYPE_MOD,
DATA_LENGTH,
DATA_PRECISION,
DATA_UPGRADED
from ALL_TAB_COLUMNS
where DATA_TYPE = 'LONG';
List long running operations and their percentage (operations in the explain plan)
SELECT sid, to_char(start_time, 'hh24:mi:ss') stime, message, (sofar/totalwork)*100 percent
FROM v$session_longops
WHERE sofar/totalwork < 1;
Get the size in megabytes of tables for a particular owner
SELECT owner, table_name, TRUNC(sum(bytes)/1024/1024) "SIZE (MB)"
FROM (
  SELECT segment_name table_name, owner, bytes
  FROM dba_segments
  WHERE segment_type = 'TABLE'
  UNION ALL
  SELECT i.table_name, i.owner, s.bytes
  FROM dba_indexes i, dba_segments s
  WHERE s.segment_name = i.index_name
    AND s.owner = i.owner
    AND s.segment_type = 'INDEX'
  UNION ALL
  SELECT l.table_name, l.owner, s.bytes
  FROM dba_lobs l, dba_segments s
  WHERE s.segment_name = l.index_name
    AND s.owner = l.owner
    AND s.segment_type = 'LOBINDEX'
)
WHERE owner = 'ONDEV'
GROUP BY table_name, owner
HAVING SUM(bytes)/1024/1024 > 10
ORDER BY SUM(bytes) DESC;
Get the constraint (foreign key)
SELECT a.table_name,
       a.column_name,
       a.constraint_name,
       c.owner,
       -- referenced pk
       c.r_owner,
       c_pk.table_name r_table_name,
       c_pk.constraint_name r_pk
FROM all_cons_columns a
JOIN all_constraints c
  ON a.owner = c.owner
 AND a.constraint_name = c.constraint_name
JOIN all_constraints c_pk
  ON c.r_owner = c_pk.owner
 AND c.r_constraint_name = c_pk.constraint_name
WHERE c.constraint_type = 'R'
  AND c.constraint_name = :ContraintName;
-- AND a.table_name = :TableName;
select 'create sequence ' || sequence_name ||
       ' increment by ' || increment_by ||
       ' start with ' || last_number ||
       ' maxvalue ' || max_value ||
       decode(cycle_flag, 'N', ' NOCYCLE ', ' CYCLE ') ||
       decode(cache_size, 0, 'NOCACHE ', 'CACHE ' || cache_size)
from user_sequences;
Get DDL statements
select DBMS_METADATA.get_ddl('TABLE', 'TEST') from DUAL;
select DBMS_METADATA.get_ddl('SEQUENCE', 'SEQ_AB_ADDRESS') from DUAL;
select DBMS_METADATA.get_ddl('VIEW', 'MY_TABLES') from DUAL;
Display all rows that have duplicate values
select * from (select
SERVICE_ID,
USER_ID,
CREATION_DATE,
MODIFICATION_DATE,
count(*)
over (partition by USER_ID) CNT
from CR_SERVICES)
where CNT > 1;
select
USER_ID,
count(USER_ID)
from CR_SERVICES
group by USER_ID
having count(USER_ID) > 1;
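The same duplicate check can be sketched in Python with collections.Counter (the USER_ID values below are made up):

```python
from collections import Counter

user_ids = [1, 2, 2, 3, 3, 3, 4]  # hypothetical CR_SERVICES.USER_ID values

# equivalent of GROUP BY USER_ID HAVING count(USER_ID) > 1
duplicates = {uid: n for uid, n in Counter(user_ids).items() if n > 1}
print(duplicates)  # {2: 2, 3: 3}
```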
MySQL
Grant access to all machines
GRANT ALL PRIVILEGES ON *.* TO 'some_user'@'%' IDENTIFIED BY '9302ty09fuy8CHIOp90u9IYFVKHi8h'; -- '%' grants access from any host
Identifiers #1-14 are all "commit-ish", because they all lead to commits, but because commits also point to directory trees, they all ultimately lead to (sub)directory tree objects, and can therefore also be used as "tree-ish".
#15 can also be used as tree-ish when it refers to a (sub)directory, but it can also be used to identify specific files. When it refers to files, I'm not sure if it's still considered "tree-ish", or if it acts more like "blob-ish" (Git refers to files as "blobs").
Use custom SSH command with git (since Git 2.3.0)
To pass a custom key, for example:
GIT_SSH_COMMAND="ssh -i $HOME/id_rsa" git ...
Clone with submodules
With version 1.6.5 of Git and later, you can use:
git clone --recursive git://github.com/foo/bar.git
cd bar
For already cloned repos, or older Git versions, just use:
git clone git://github.com/foo/bar.git
cd bar
git submodule update --init --recursive
Open file in another branch
git show the_branch:path/to/file | mate
git show the_branch:path/to/file > exported_file
the_branch can be any reference (tag, branch, commit, HEAD, etc.)
Fix mistakes in a previous commit (say 0f0d8a27622e7bf7f008983c4b8ee23bfb9843ab) on 'master'
git checkout 0f0d8a27622e7bf7f008983c4b8ee23bfb9843ab
your_editor path/to/file
git add path/to/file
git commit --amend -v
git rebase --onto HEAD 0f0d8a27622e7bf7f008983c4b8ee23bfb9843ab master
Stash specific changes
git stash --patch
Working with submodules
Since Git 1.8.4, git submodule update can optionally clone the submodule repositories shallowly.
git submodule add -- repo path # Add submodule (sub repo may be already cloned in path)
git submodule add --depth 1 -- repo path # Add submodule repo with depth 1
git submodule update --depth 1 -- [paths] # Update submodule keeping a depth of 1
git submodule update --recursive --remote # Update recursively all submodules (since 1.8.2)
Update a tag's SHA-1
# Delete the tag on any remote before you push
git push origin :refs/tags/<tagname>
# Replace the tag to reference the most recent commit
git tag -fa <tagname>
# Push the tag to the remote origin
git push origin master --tags
git reset --soft HEAD~1 # remove last commit locally (but keep changed files)
git reset --hard HEAD~3 # remove last 3 commits locally (and clear changed files)
git reset --hard # return to the last committed state
git reset --hard origin/master # reset local repo to match remote branch, use this on failed merge
File history
git log -- the.file # simple file history
git log --follow -p the.file # show the entire history of the file (including history beyond renames and with diffs for each change).
Files changed in a directory between two commits
git slog --follow --name-status bytebuddy-mockmaker~31..bytebuddy-mockmaker -- test
Apply a patch (without committing) of specified files from another commit
# create a patch file
git format-patch -1 10d5a55 -- README README.md
# apply the patch without committing
git format-patch --stdout -1 10d5a55 -- README README.md | git apply
# apply the filtered patch and commit using the same message
git format-patch --stdout -1 10d5a55 -- README README.md | git am
Checkout changes from a commit
git checkout -p bc66559 # interactive checkout from a commit (and history)
git checkout -p bc66559 -- path/file.java # interactive checkout from a file in a commit
git show topic:main.cpp > old_main.cpp # _checkout_ other revision of a file under a new name
When --since or --until are not available, it's possible to use the @{...} construct
git revert master@{"1 month ago"} # Revert the repo as it was 1 month ago
git diff master@{"yesterday"} master@{"1 year 6 months ago"} # Difference between master of yesterday with master of 1 year and 6 months ago
Note:
Date references such as master@{1979-02-26 18:30:00} are resolved against the reflog of the local repository, and reflog entries expire after 90 days by default.
In order to get a reference for any date, it's possible to use the git-rev-list tool.
git rev-list -n 1 --before="2009-07-27" master # output the sha-1 available at this date
Now it's possible to use it anywhere a reference is needed
e.g. revert the last 3 commits; with --no-commit the reverting changes stay uncommitted.
git revert [--no-commit] HEAD~3..
Unstage
git rm --cached the.file # Removes the file from the staging area, leaving the file uncommitted and untracked
git reset HEAD the.file # Removes changes on tracked the.file from the staging area, leaving changes on the filesystem
Patches
git format-patch A^..B # creates range of patches (lowest wanted commit hash ^.. highest wanted commit hash)
git format-patch master --stdout > the.patch # creates a patch file containing changes from current branch to master (excluded)
git am -3 *.patch # apply the patches with 3-way merge
git am --resolved # if there is a conflict: resolve it, then continue with this
git apply --stat the.patch # look at the patch without applying it
git apply --check the.patch # dry run of the patch
git am --signoff < the.patch # apply the patch using sign-off (to keep the original committer)
Dry runs
For git merge
git merge --no-commit --no-ff <BRANCH> # avoid commit and fast-forward commit
git merge --abort # Then to unstage automatically merged files
List branches containing a commit
git branch -r --contains <commit>
Commit difference between two branches containing the same changes (useful with cherry-pick)
Set the upstream branch
git branch -u upstream/foo # in current branch
git branch -u upstream/foo foo # if local branch foo is not the current branch
Same but longer version
git branch --set-upstream-to=origin/release-s17 # in current local branch
git branch --set-upstream-to=origin/release-s17 release-s17 # in specified local branch
If commit C needs to go away, it is possible to do an interactive rebase with git rebase -i C~1 and remove the line referencing commit C.
There's a quicker, non-interactive way using git rebase --onto: it changes the base of a commit; in other words, it rebases it.
git rebase --onto B C # makes commit B the new base of commit D, commit C being the previous base of D
Of course it is possible to use backreferences :
git rebase --onto B D~1 # makes commit B the new base of commit D, D~1 being the previous base of D
After the rebase the repo will be in the following state, meaning everything between B (not included) and D (not included) goes away; in this case, only commit C.
master
↓
A--B--D'--E'
D' and E' being the rewritten commits of D and E (new SHA1s).
To move a branch to another base
Given the following structure, suppose topicA is a feature branch of versionA, on top of commit J.
However, topicA's history needs to be reworked (commits too big, unneeded files, changes not related to topicA, etc.), so a rewritten branch proper-history-topicA now has a completely different history.
Now commits in topicB should be moved on top of proper-history-topicA and they should become part of the proper-history-topicA branch.
Using
git rebase --onto proper-history-topicA I~1 proper-history-topicA
Where proper-history-topicA is the newbase, I~1 is the oldbase, and proper-history-topicA is the reference for what HEAD of proper-history-topicA will become.
The five commits from topicB (I through M), get played on top of proper-history-topicA, starting from where topicB diverged, to create I’, J’, K’, L’, and M’.
The git rebase command allows passing multiple --exec commands; however, the rebase
process will stop if files are left modified by one of those commands, or if a command fails.
In this case, since $COMMAND modifies files, it is necessary to commit
the changes in the same --exec.
Move / set branch reference to specific commit
git branch -f branch-name new-tip-commit # Force the branch-name head to new-tip-commit
For any reference (those that are not branches)
git update-ref -m "reset: branch-name to new-tip-commit" branch-name new-tip-commit
Checkout Github's pull requests locally
See this gist. Also take a look at this github help page.
Locate the section for your github remote in the .git/config file. It looks like this:
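From memory, the gist's trick is a second fetch refspec on the origin remote (the url below is a placeholder); every pull request head is then fetched as a pr/* remote branch:

```ini
[remote "origin"]
    url = git@github.com:user/repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```

After a git fetch, checking out pr/999 works as shown below.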
$ git checkout pr/999
Branch pr/999 set up to track remote branch pr/999 from origin.
Switched to a new branch 'pr/999'
Getting the current branch
git rev-parse --abbrev-ref HEAD # display the current branch
git symbolic-ref --short HEAD # same
git rev-parse --symbolic-full-name --abbrev-ref @{u} # display remote branch
Commit on a specified date
The following allows defining the date for both the author date and the commit date, as the --date option only sets the author date.
GIT_AUTHOR_DATE="Wed Oct 30 10:51:12 CET 2013" GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE" git commit ....
Checkout (and merge) a branch from a fork
On the target repo and branch, create the new branch from the target (for example master)
# start bisect current HEAD is bad, v2.0.26-beta is good
git bisect start
git bisect bad
git bisect good v2.0.26-beta
# or shorter
git bisect start HEAD v2.0.26-beta
# then automate the search with a script that exits 0 if the project is good or non-0 if the project is bad
git bisect run ./gradlew :test --tests "org.mockitousage.bugs.ConfusedSignatureTest"
git bisect reset
Search lost commits
Searching in the reflog
git log -g --grep="<some string from your commit message>" # search the reflog for commits matching the given text
git log --all --grep="<some string from your commit message>" # search all commits in every branch matching the given text
git fsck --full --no-reflogs --unreachable --lost-found # find every unreachable commits, or blobs that are not commits
ls -1 .git/lost-found/commit/ | xargs -n 1 git log -n 1 --pretty=oneline
It might be useful to have a look at the reflog as well.
git reflog
Search changed lines containing a string since a date
git rev-list master branch-name will give all commits reachable from both master and branch-name, which is not what we want.
With git rev-list there's a special syntax to exclude commits reachable from a branch; place a ^ in front of the branch. git rev-list ^master branch-name will show all commits reachable from branch-name but not from master. As we want the first commit of branch-name, the command is piped to tail -1.
git clean --force -d --dry-run # dry run: clean files and directories
git clean -fdn # same
git clean --force -d -x # clean files and directories, including untracked files ignored via .gitignore
Ignore changes in a tracked file
Suppose some tracked file needs to be modified locally, with changes that are not to be committed. To ignore such a file:
git update-index --skip-worktree the.file # or: git update-index --assume-unchanged the.file
Split a subdirectory into its own repository
# Go into the project root
cd ~/my-project
# Create a branch which only contains commits for the children of 'foo'
git subtree split --prefix=foo --branch=foo-only
# Remove 'foo' from the project
git rm -rf ./foo
# Create a git repo for 'foo' (assuming we already created it on github)
mkdir foo
pushd foo
git init
git remote add origin [email protected]:my-user/new-project.git
git pull ../ foo-only
git push origin -u master
popd
# Add 'foo' as a git submodule to `my-project`
git submodule add [email protected]:my-user/new-project.git foo
git worktree add -b bugfix-1234 ../bugfix origin/master # new worktree at ../bugfix, creating a new branch bugfix-1234 starting from origin/master
git worktree prune
Tig
sample commands
tig pom.xml # history of a specific file
tig show # last commit patch
View switching
m Show main view
d Show diff view
l Show log view
t Show tree view
B Show blame view
H Show branch view
h Show help view
c Show stage view
y Show stash view
Some commands
[ Decrease the diff context
] Increase the diff context
, Move to parent. In the tree view, this means switch to the parent directory. In the blame view it will load blame for the parent commit. For merges the parent is queried.
Gradle
Does Gradle natively support integration tests => nope
Custom source set and configuration for integration test
plugins {
id 'java'
}
repositories {
jcenter()
}
sourceSets {
slowTest {
// necessary, otherwise Gradle doesn't know where the main output is
compileClasspath = main.output
runtimeClasspath = main.output
}
}
configurations {
// necessary to acquire the same dependencies as main
slowTestImplementation.extendsFrom implementation
slowTestRuntime.extendsFrom runtime
}
dependencies {
implementation ""
runtime ""
slowTestImplementation "junit:junit:4.12"
}
task slowTest(type: Test) {
classpath = sourceSets.slowTest.runtimeClasspath
testClassesDirs = sourceSets.slowTest.output.classesDirs
}
==> src/slowTest/java
Add but do not apply plugin
plugins {
// Add Asciidoctor plugin, but do not apply it.
id 'org.asciidoctor.convert' version '1.5.3' apply false
}
configurations {
convert
}
repositories {
jcenter()
}
dependencies {
convert 'org.asciidoctor:asciidoctorj:1.5.4'
}
// Use of Asciidoctor task from the Asciidoctor plugin.
task convert(type: org.asciidoctor.gradle.AsciidoctorTask) {
classpath = configurations.convert
}
Or
subprojects {
if (name.endsWith('-doc')) {
apply plugin: 'org.asciidoctor.convert'
}
}
Custom plugin repository
In the settings.gradle :
// First statement of the settings.gradle file
pluginRepositories {
maven { url 'http://intranet/artifactory/libs-release/' }
gradlePluginPortal() // Include public Gradle plugin portal
}
Incremental
Incremental tasks
tasks not executed b/c
inputs not changed
outputs not changed
... ?
How:
hash the inputs and outputs
hash the content of the input/output folders
serialize the input properties
Annotate the task implementation with @Input / @OutputFile / @OutputDirectory annotations.
Incremental task input
e.g. check which file changed
Extensibility
properties
gradle.properties
org.gradle.parallel=true
deployUrl = ...
can be overridden with ./gradlew -Dorg.gradle.parallel=false -PdeployUrl=...
-D system (gradle) property
-P project property
Best approach because, it allows every interested consumer of the sourceset
to be aware of the generated code. (Compare to adding the source set in the compile task.)
Dependency
.m2 does not say from where the dependency came from, gradle cache works per project
use the dependencyInsight task on a dependency to explain the reason it is there
1.0 forever, range/dynamic cached for 24h, 1.0-SNAPSHOT, cached for 24h
Your high-powered server is suddenly running dog slow, and you need to remember
the troubleshooting steps again. Bookmark this page for a ready reminder the next
time you need to diagnose a slow server.
Get on top of it
Linux's top command provides a wealth of troubleshooting information, but you
have to know what you're looking for. Reference this diagram as you go through
the steps below:
Step 1: Check I/O WAIT and CPU IDLE TIME
How: use top - look for wa (I/O wait) and id (CPU idletime)
Why: checking I/O wait is the best initial step to narrow down the root cause of
server slowness. If I/O wait is low, you can rule out disk access in your diagnosis.
I/O Wait represents the amount of time the CPU spends waiting for disk or network I/O.
Waiting is the key here - if your CPU is waiting, it's not doing useful work.
It's like a chef who can't serve a meal until he gets a delivery of ingredients.
Anything above 10% I/O wait should be considered high.
On the other hand, CPU idle time is a metric you want to be high -- the higher
this is, the more bandwidth your server has to handle whatever else you throw at it.
If your idle time is consistently above 25%, consider it high enough.
Step 2: IO WAIT is low and IDLE TIME is low: Check CPU USER TIME
How: use top again -- look for the %us column (first column), then look for a
process or processes that is doing the damage.
Why: at this point you expect the user time percentage to be high; there's most
likely a program or service you've configured on your server that's hogging CPU.
Checking the % user time just confirms this. When you see that the % user time
is high, it's time to see what executable is monopolizing the CPU.
Once you've confirmed that the % usertime is high, check the process list (also
provided by top). By default, top sorts the process list by %CPU, so you can
just look at the top process or processes.
If there's a single process hogging the CPU in a way that seems abnormal, it's an
anomalous situation that a service restart can fix. If there are multiple
processes taking up CPU resources, or if there's one process that takes lots of
resources while otherwise functioning normally, then your setup may just be
underpowered. You'll need to upgrade your server (add more cores), or split
services out onto other boxes. In either case, you have a resolution:
if situation seems anomalous: kill the offending processes.
if situation seems typical given history: upgrade server or add more servers.
This is an area where historical context can be a huge help in understanding what's
going on. If you're using Scout, check out the historical charts for these metrics.
A flat line for % user time followed by a huge increase in the last 10 minutes
tells a much different story than smooth, steady increase over the last 6 months.
Step 3: IO WAIT is low and IDLE TIME is high
Your slowness isn't due to CPU or IO problems, so it's likely an application-specific
issue. It's also possible that the slowness is being caused by another server in
your cluster, or by an external service you rely on.
start by checking important applications for uncharacteristic slowness (the DB
is a good place to start),
think through which parts of your infrastructure could be slowed down externally.
For example, do you use an externally hosted email service that could slow down
critical parts of your application?
If you suspect another server in your cluster, strace and lsof can provide information
on what the process is doing or waiting on. Strace will show you which file descriptors
are being read or written to (or being attempted to be read from) and lsof can give
you a mapping of those file descriptors to network connections.
Step 4: IO WAIT is high: Check your SWAP usage
How: use top or free -m
Why: if your box is swapping out to disk a lot, the cache swaps will monopolize the
disk, and processes with legitimate IO needs will be starved for disk access. In other
words, checking disk swap separates real IO wait problems from what are actually
RAM problems that "look like" IO wait problems.
An alternative to top is free -m -- this is useful if you find top's frequent
updates frustrating to use, and you don't have any console log of changes.
Step 5: Swap usage is high
High swap usage means that you are actually out of RAM. See step 7 below.
Step 6: Swap usage is low
Low swap means you have a real IO wait problem. The next step is to see what's
hogging your IO.
How: iotop
iotop is an awesome tool for identifying IO offenders. Two things to note:
unless you've already installed iotop, it's probably not already on your system.
Recommendation: install it before you need it -- it's no fun trying to install a
troubleshooting tool on an overloaded machine.
iotop requires a Linux kernel of 2.6.20 or above
Step 7: Check memory usage
How: use top. Once top is running, press the M key - this will sort applications
by the memory used.
Important: don't look at the free memory -- it's misleading. To get the actual memory
available, subtract the cached memory from the used memory. This is because Linux
caches things liberally, and often the memory can be freed up when it's needed.
Read here for more info.
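That arithmetic can be sketched in Python against /proc/meminfo-style text; the sample input is made up, and on newer kernels the MemAvailable field already reports this directly:

```python
def available_kb(meminfo_text):
    """Roughly 'really available' memory: free + buffers + cached (kB)."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return fields.get("MemFree", 0) + fields.get("Buffers", 0) + fields.get("Cached", 0)

sample = """MemTotal:        8000000 kB
MemFree:          500000 kB
Buffers:          100000 kB
Cached:          2400000 kB"""

print(available_kb(sample))  # 3000000
```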
Once you've identified the offenders, the resolution will again depend on whether their
memory usage seems business-as-usual or not. For example, a memory leak can be
satisfactorily addressed by a one-time or periodic restart of the process.
if memory usage seems anomalous: kill the offending processes.
if memory usage seems business-as-usual: add RAM to the server, or split high-memory
using services to other servers.
A HANDY FLOW CHART TO TIE IT ALL TOGETHER
Additional tips
vmstat is also a very handy tool, because it shows past values instead of an in-place
update like top. Running vmstat 1 shows concise metrics on memory, swap, io, and
CPU every second.
Track your disk IO latency and compare to IOPS (I/O operations per second). Sometimes
it's not activity in your own server causing the disk IO to be slow in a cloud/virtual
environment. Proving this is hard, and you really want to have graphs of historical
performance to show your provider!
Increasing IO latency can mean a failing disk or bad sectors. Keep an eye on this
before it escalates to data corruption or complete failure of the disk.
Wrapping it up
Having concrete steps at your fingertips makes slow server troubleshooting a little easier.
top is a powerful tool that provides a wealth of metrics to help you narrow down the cause
of server slowness.
The metrics you'll be looking at are io wait, cpu idle %, user %, memory free (taking
into account the file cache), and swap usage. Depending on whether conditions are a one-off
or the result of growing demands on your infrastructure, you may be able to solve the
slowdown by restarting services, or you may need to upgrade your servers.
Historical context can be very useful in establishing what's normal for your machines.
-e /patterntoexclude/d you can exclude some unwanted patterns from the list using the d command of sed
-e s/:compile//p -e s/:runtime//p removes :compile and :runtime but print related lines
| sort | uniq for duplicate entries in multi-module
VM properties
java -XshowSettings:properties -version
java -XshowSettings:system -version # On Linux only
java -XshowSettings:vm -version
Final flags
java -XX:+PrintFlagsFinal -version
Running commands
jcmd <pid> help
jcmd <pid> VM.flags -all # equivalent to -XX:+PrintFlagsFinal on the jvm with the given PID
jcmd <pid> VM.info
ps -o rss,vsz,sz <pid> # Resident Set Size of the PID
Heap dumps
Make a heap dump of live objects in the hprof format, which can be opened in jVisualVM via File → Load → Heap Dumps
It may be relevant to set up manual proxy settings in the Preferences / Options dialog: a SOCKS proxy using localhost and port 9696.
And in visual vm: add new remote connection, specify remote as host and the port for jstatd (1099 for default, or what you specified with -p when running jstatd)
(function () {
  if (!window.jQuery) {
    var s = document.createElement('script');
    s.type = 'text/javascript';
    s.async = true;
    // you can change this url to point at the latest jQuery version
    s.src = '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(s);
  }
}());
Maven instructs developers who deploy artifacts to sign them and to publish a public key on a public key server, usually pgp.mit.edu. With gpg, if the key is not present it must be imported; then one can verify the artifact. With the following command line, gpg is instructed to automatically import the author's public key if found (from pgp.mit.edu, a single point of failure).
Note that the protocol's default port is 11371, which could be a problem behind most firewalls; however, it is possible to use port 80.
Using frames 34 through 99 from a video, take every 7th frame and overlay it into a composite image. The convert command is part of the ImageMagick package.
Hash a string with SHA-1 / HMAC SHA-1 / HMAC SHA-256
echo -n "text to hash" | openssl sha1
echo -n "text to hash" | openssl sha1 -hmac "secret"
echo -n "text to hash" | openssl dgst -sha256 -hmac "secret"
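As a cross-check, the same digests can be computed with Python's hashlib and hmac modules:

```python
import hashlib
import hmac

msg = b"text to hash"

sha1 = hashlib.sha1(msg).hexdigest()
hmac_sha1 = hmac.new(b"secret", msg, hashlib.sha1).hexdigest()
hmac_sha256 = hmac.new(b"secret", msg, hashlib.sha256).hexdigest()

print(sha1)
print(hmac_sha1)
print(hmac_sha256)
```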
Generates random data
openssl rand -base64 128 # write 128 random bytes of base64-encoded data to stdout
openssl rand -out random-data.bin 1024 # write 1024 bytes of binary random data to a file
# seed openssl with semi-random bytes from the browser cache
cd $(find ~/.mozilla/firefox -type d -name Cache)
openssl rand -rand $(find . -type f -printf '%f:') -base64 1024
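For scripts, Python's secrets module offers similar random material without shelling out to openssl:

```python
import base64
import secrets

raw = secrets.token_bytes(128)        # 128 random bytes
b64 = base64.b64encode(raw).decode()  # base64-encoded, like `openssl rand -base64 128`
token = secrets.token_urlsafe(16)     # handy for throwaway passwords/tokens

print(len(raw), len(b64), len(token))
```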
Generate random passwords
openssl passwd MySecret # generate a random crypted password
openssl passwd -salt 8E MySecret # generate the crypted password for the given secret and salt
Newer Unix systems use a more secure MD5-based hashing mechanism that uses an eight-character salt (as compared to the two-character salt in traditional crypt()-style hashes).
Generating them is still straightforward using the -1 option.
openssl passwd -1 MySecret # generate random shadow password
openssl passwd -1 -salt sXiKzkus MySecret # generate the shadow password for the given secret and salt
Benchmark system performance for different algorithms
-P -n prevents lsof from doing name resolution, and it doesn't block. Missing either one of these options can make lsof be very slow.
For UDP: sudo lsof -iUDP -P -n | egrep -v '(127|::1)'. Again without -n and -P, it takes a long time.
Reminder: This does not include firewall settings.
ngrep -q 'HTTP' 'tcp'
ngrep -q 'HTTP' 'udp'
ngrep -q 'HTTP' 'icmp'
ngrep -q 'HTTP' 'host 192.168' # matches all headers containing the string 'HTTP' sent to or from ip addresses starting with 192.168
ngrep -q 'HTTP' 'dst host 192.168' # match a destination host
ngrep -q 'HTTP' 'src host 192.168' # match a source host
ngrep -q 'HTTP' 'port 80' # match a port
Make any command stay active when terminal is closed
This is done by ignoring the SIGHUP signal using nohup; this signal is sent to a process when its controlling terminal is closed.
For example (note the & to run in the background and display the process id):
-f removing_patterns to get patterns from removing_patterns, for fixed string use -F <file>
Get a specific line of a file
sed -n '2p' < file.txt # print 2nd line
sed -n '2011p' < file.txt # print 2011th line
sed -n '10,33p' file.txt # print lines 10 through 33
sed -n '1p;3p' file.txt # print 1st and 3rd lines
Using pipes cat some.xml | sed -n '256p' | xml format | mate
Replace content
sed -i.bak s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html
Negative lookahead with sed
sed --in-place '/non-wanted/! s/\(before\)\(after\)/\1to be inserted\2/' files*
--in-place changes the file in place
/non-wanted/! only perform the next action on lines not matching non-wanted (the ! negates the address)
s/\(before\)\(after\)/\1to be inserted\2/ the usual sed action
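Python's re module has a true negative lookahead, which can be clearer than the sed address trick; a sketch on hypothetical input lines:

```python
import re

lines = ["beforeafter", "non-wanted beforeafter"]

# (?!.*non-wanted) rejects lines containing 'non-wanted' up front,
# like the /non-wanted/! address in sed
out = [re.sub(r"^(?!.*non-wanted)(.*before)(after.*)$", r"\1to be inserted\2", l)
       for l in lines]
print(out)  # ['beforeto be insertedafter', 'non-wanted beforeafter']
```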
Fun with awk
# find in a csv every value matching a regex and print the column position
# the column separator is '","|"' for a CSV file in the format "a","b","c"
awk -F '","|"' '{ for (i=1; i<=NF; i++) if ($i ~ "some regex") print i ":" $i }'
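An equivalent sketch in Python, letting the csv module handle the quoting instead of the '","|"' separator hack (the row and regex below are made up):

```python
import csv
import io
import re

row_text = '"a","b","some value","c"'  # hypothetical CSV row
pattern = re.compile(r"value")         # stand-in for "some regex"

row = next(csv.reader(io.StringIO(row_text)))

# print 1-based column positions and contents, like the awk one-liner
matches = [(i, cell) for i, cell in enumerate(row, start=1) if pattern.search(cell)]
print(matches)  # [(3, 'some value')]
```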
Using screen
screen -R session_name # Create a screen session
screen -x -R session_name # Attach to an existing screen session without detaching (multi-display)
screen -d -R session_name # Attach to a screen session, and detach the previous screen
ctrl+a ctrl+d to detach from an active session
Using XMLStarlet
Selecting nodes
File containing some garbage and one-line XMLs (identified by the standard XML declaration <?xml?>).
First, a subshell with ( commands ) selects only the XML lines, removes the XML declaration from each line (s/<\?xml[^>]+>//), and creates a dumb XML root foobar.
Then XMLStarlet is used to get the values of the nodes at the xpath /foobar/roster/contact; additional commands allow selecting transformed lines that didn't have any data in the two last nodes, first-name and last-name.
And find the thread native id by converting the decimal to hex:
echo "obase=16; <tid>" | bc
in thread dump :
"TP-Processor234786" daemon prio=10 tid=0x00002aaad8024800 nid=0x2035 runnable [0x00002aaadef29000]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap.get(HashMap.java:303)
at ......
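For instance, nid=0x2035 in the dump above is decimal 8245 (assuming bc is installed; printf works too):

```shell
echo "obase=16; 8245" | bc   # prints: 2035
printf '%x\n' 8245           # printf alternative, prints: 2035
```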
Parameter (variable) expansion
For parameter expansion ${...} see ZSH and/or Bash documentation.
More to come...
History reuse tricks
!! repeat last command
!$ last argument of last command
!^ first argument of last command
Here's more
!-3 # Execute the command that was executed 3 commands ago
!! # Execute the content of the previous command
cd !$ # Use the last arg of the previous command
vi !^ # Use the first arg of the previous command
vi !-3^ # Use the first argument of the command executed 3 commands ago
!keyword # Execute or copy the last command **beginning** with the keyword
!?keyword # Execute or copy the last command with the keyword appearing **anywhere**
!tail:p # Adding :p at the end will only display the command
!:0 is the previous command name
!^, !:2, !:3, …, !$ are the arguments of the previous command
!* is all the arguments of the previous commands
!-2^, !-2:2, !-2$, !-2* are arguments of the command executed 2 commands ago
!# repeat everything written so far in this line
!find:5 # display the fifth word from the last command beginning with !find
!find:$ # display the last word from the same command
!find:* # same as !find:1-$ - the complete command except for the first word
ls path/to/project/pom.xml
ls !$:h # Use the path part of the last argument => path/to/project
ls !$:t # Use the file part of the last argument => pom.xml
ls !$:r # Use the whole path + file without the file extension of the last argument => path/to/project/pom
ls !$:t:q # 2 modifiers, :q will quote the modified string => 'pom.xml'
dig www.leftcolumn.net A # enquire for IPv4 entries
dig www.leftcolumn.net AAAA # enquire for IPv6 entries
dig www.leftcolumn.net SOA +multiline # enquire for Start of Authority entry
dig www.leftcolumn.net CNAME # look for CNAME entry
dig www.leftcolumn.net MX # enquire for mail domains
dig -t ANY google.co.nz # retrieve all available stuff in the DNS Zone for a domain
dig mockito.org +nostats +nocomments +nocmd
# +nostats    : don't show DNS server statistics
# +nocomments : don't show comments (section comments, ...)
# +nocmd      : don't show the command
-L Local : redirect a distant port visible by the remote host to a port on the local machine
-R Remote : redirect a distant machine port to a port visible by the local machine
-D Dynamic : dynamic SOCKS-based port forwarding (useful to tunnel client/server applications through a firewall; for a concrete application, see the browser proxy configuration in putty, or the proxy configuration in GNOME)
remote_port the remote port on ssh_host that will be forwarded
visible_host the host on the current machine, either localhost or a host accessible by current machine
visible_host_port the port on the visible host to forward to
local_bind_address is the network interface on which the forward should listen; by default it is bound to the loopback interface. To bind to a different network interface use the relevant IP; to bind to all interfaces :
Note that the (OpenSSH) sshd server config should have the GatewayPorts option set to either yes or clientspecified to enable binding the port to interfaces other than loopback.
-N will not issue command on remote host, it just opens ports.
-f will make the ssh command go into the background; don't forget to kill it after use!
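Putting the options together, a few hedged sketches (user, ssh_host and all the ports below are hypothetical):

```shell
ssh -N -f -L 8080:localhost:80 user@ssh_host          # localhost:8080 -> port 80 as seen from ssh_host
ssh -N -f -L 0.0.0.0:8080:localhost:80 user@ssh_host  # same, bound on all local interfaces
ssh -N -f -R 9090:localhost:3000 user@ssh_host        # ssh_host:9090 -> local port 3000
ssh -N -f -D 1080 user@ssh_host                       # SOCKS proxy on localhost:1080
```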
SSH Escape sequences
Normal keys are forwarded over the ssh session, so ctrl+c, ctrl+d or ctrl+z won't work. Instead, use the SSH escape sequences.
For example, to kill the current session type, in sequence, Enter ↵, ~, .
More of these escape sequences can be listed with Enter ↵, ~, ?:
Supported escape sequences:
~. : terminate session
~B : send a BREAK to the remote system
~R : Request rekey (SSH protocol 2 only)
~# : list forwarded connections
~? : this message
~~ : send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)
Check if certificate is signed by key
diff <(openssl rsa -in my.key -modulus | grep Modulus) <(openssl x509 -in my.crt -modulus | grep Modulus) # Check if cert was signed by key
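To try this end-to-end with a throwaway self-signed pair (file names and subject are just examples):

```shell
# generate a key and a certificate made from it
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/my.key -out /tmp/my.crt 2>/dev/null
# identical Modulus lines => the cert matches the key
diff <(openssl rsa -in /tmp/my.key -noout -modulus) \
     <(openssl x509 -in /tmp/my.crt -noout -modulus) && echo match
# prints: match
```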
Start a simple HTTP server from the current dir
python -m SimpleHTTPServer 8000   # Python 2
python3 -m http.server 8000       # Python 3 equivalent
Shortcuts on the terminal
At the moment these don't work with oh-my-zsh.
ctrl-A : moves to the start of the line
ctrl-E : moves to the end of the line
ctrl-B : move back one character
ctrl-F : move forward one character
esc-B : move back one word
esc-F : move forward one word
ctrl-U : delete from the cursor to the beginning of the line
ctrl-K : delete from the cursor to the end of the line
find ./ -type f -exec sed -i '' -e "s/192.168.20.1/new.domain.com/" {} \;
On the OSX version of sed, the -i option expects an extension argument, so the command is actually parsed as the extension argument and the file path is interpreted as the command code.
Passing the -e argument explicitly for the search/replace command and giving '' as the argument to -i solves the issue.
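A form that works on both GNU and BSD sed is to attach a backup suffix directly to -i (the sample file below is made up):

```shell
printf 'host=192.168.20.1\n' > /tmp/demo.conf
sed -i.bak -e 's/192\.168\.20\.1/new.domain.com/' /tmp/demo.conf   # '-i.bak', no space
cat /tmp/demo.conf   # prints: host=new.domain.com
```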
opensnoop uses DTrace to show you all of the files that are being accessed on your system, you need to execute it with superuser privileges.
sudo opensnoop
sudo opensnoop -p PID # watch a particular process
sudo opensnoop -f /etc/passwd # watch who is accessing a particular file
Networking
networksetup -getairportnetwork en0 # ESSID on OSX
networksetup -listallnetworkservices # Network services
networksetup -getdnsservers "Ethernet"# name servers
networksetup -setairportpower "Wi-Fi" on # switches the airport power on
networksetup -setairportpower "Wi-Fi" off # switches the airport power off
pmset -g
sudo pmset displaysleep 15 # Put display to sleep after 15 minutes of inactivity
sudo pmset sleep 30 # Put computer to sleep after 30 minutes of inactivity
sudo pmset repeat wakeorpoweron MTWRF 7:00:00 # Wake up every morning at 7am
# create the ramdisk device: returns a device with 720 MB (in 512-byte blocks)
ramdisk=$(hdiutil attach -nomount ram://$((720*1024*1024/512)))
# mount the volume to /Volumes/tmp
diskutil erasevolume HFS+ "tmp" $ramdisk
# unmount the volume
diskutil unmount $ramdisk
# detach the ramdisk device
hdiutil detach $ramdisk
Read system information
sysctl -a
sysctl -n machdep.cpu.brand_string
Free memory cache
Mac OS X keeps apps in memory for a while after you close them, so they will open fast if you open them again. Purge will remove them from memory and give your free memory back.
purge
Installing pandoc
Step 1: Install Haskell Platform
Using Homebrew, install the Haskell Platform.
brew install haskell-platform
This takes a few minutes so you will need to be patient.
Also if you are replacing a previous version of haskell-platform, you may want
to unregister packages belonging to the old version. You can find broken
packages using:
Note that for A4 paper : 1684 pixels at 144 dpi, 3508 pixels at 300 dpi, 4678 pixels at 400 dpi.
Also see Color Sync utility, though added filters are not visible in other apps (they are added in ~/Library/Filters).
iotop is in fact a dtrace script, and dtrace isn't allowed on El Capitan; this script will fail with a weird error.
One has to re-enable dtrace by relaxing SIP (System Integrity Protection).
With OSX tooling
sudo fs_usage -f filesys
However iotop requires dtrace, and dtrace is not activated on El Capitan due to the rootless (SIP) mode.
Measures the number of context switches
latency measures the number of context switches and interrupts of the system
❯ fluxctl list-images --k8s-fwd-ns flux --workload security:helmrelease/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD CONTAINER IMAGE CREATED
security:helmrelease/security-auth chart-image eu.gcr.io/bbc-registry/security-auth
'-> 1.20200304.160755-a1aa0d1 04 Mar 20 16:08 UTC
    1.20200304.150021-92ecfbf 04 Mar 20 15:00 UTC
    1.20200303.125505-57ae9dd 03 Mar 20 12:55 UTC
    1.20200302.233522-5f5253c 02 Mar 20 23:35 UTC
    1.20200302.230556-4024625 02 Mar 20 23:06 UTC
    1.20200302.225035-898a035 02 Mar 20 22:50 UTC
    1.20200302.222412-9bb62a8 02 Mar 20 22:24 UTC
    1.20200302.215626-fb438b5 02 Mar 20 21:56 UTC
    1.20200302.172527-fb438b5 02 Mar 20 17:25 UTC
    1.20200302.143544-88f4c29 02 Mar 20 14:35 UTC
❯ fluxctl list-images --k8s-fwd-ns flux --workload security:deployment/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD CONTAINER IMAGE CREATED
security:deployment/security-auth security-auth eu.gcr.io/bbc-registry/security-auth
| 1.20200304.160755-a1aa0d1 04 Mar 20 16:08 UTC
| 1.20200304.150021-92ecfbf 04 Mar 20 15:00 UTC
| 1.20200303.125505-57ae9dd 03 Mar 20 12:55 UTC
'-> 1.20200302.233522-5f5253c 02 Mar 20 23:35 UTC
    1.20200302.230556-4024625 02 Mar 20 23:06 UTC
    1.20200302.225035-898a035 02 Mar 20 22:50 UTC
    1.20200302.222412-9bb62a8 02 Mar 20 22:24 UTC
    1.20200302.215626-fb438b5 02 Mar 20 21:56 UTC
    1.20200302.172527-fb438b5 02 Mar 20 17:25 UTC
    1.20200302.143544-88f4c29 02 Mar 20 14:35 UTC
❯ k get event --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security --field-selector involvedObject.name=security-auth-65754f9589-tzw5x
LAST SEEN TYPE REASON OBJECT MESSAGE
37m Normal Scheduled pod/security-auth-65754f9589-tzw5x Successfully assigned security/security-auth-65754f9589-tzw5x to gke-prod-1-n1-standard-32-cos-898812f7-b9kg
37m Normal Pulled pod/security-auth-65754f9589-tzw5x Container image "istio/proxyv2:1.4.4" already present on machine
37m Normal Created pod/security-auth-65754f9589-tzw5x Created container istio-init
37m Normal Started pod/security-auth-65754f9589-tzw5x Started container istio-init
37m Normal Pulling pod/security-auth-65754f9589-tzw5x Pulling image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m Normal Pulled pod/security-auth-65754f9589-tzw5x Successfully pulled image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m Normal Created pod/security-auth-65754f9589-tzw5x Created container security-auth
36m Normal Started pod/security-auth-65754f9589-tzw5x Started container security-auth
36m Normal Pulled pod/security-auth-65754f9589-tzw5x Container image "istio/proxyv2:1.4.4" already present on machine
36m Normal Created pod/security-auth-65754f9589-tzw5x Created container istio-proxy
36m Normal Started pod/security-auth-65754f9589-tzw5x Started container istio-proxy
36m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: HTTP probe failed with statuscode: 503
36m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: HTTP probe failed with statuscode: 503
36m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: Get http://10.208.13.73:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
36m Normal Killing pod/security-auth-65754f9589-tzw5x Stopping container security-auth
36m Normal Killing pod/security-auth-65754f9589-tzw5x Stopping container istio-proxy
35m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: Get http://10.208.13.73:15020/healthz/ready: dial tcp 10.208.13.73:15020: connect: connection refused
❯ k rollout status deployment/security-auth --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 9 of 10 updated replicas are available...
deployment "security-auth" successfully rolled out
# requires iproute2
ss -nlp
ss --listening --numeric --process --unix --tcp
❯ docker run test-edge-api
Picked up JAVA_TOOL_OPTIONS:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Configuration file not found. The agent will attempt to read required values from environment variables.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Using default collector host: collector.newrelic.com
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic ERROR: Unable to start the New Relic Agent. Your application will continue to run but it will not be monitored.
com.newrelic.agent.config.ConfigurationException: The agent requires an application name. Check the app_name setting in newrelic.yml
at com.newrelic.agent.config.ConfigServiceFactory.validateConfig(ConfigServiceFactory.java:64) ~[newrelic-agent.jar:5.8.0]
at com.newrelic.agent.config.ConfigServiceFactory.createConfigService(ConfigServiceFactory.java:27) ~[newrelic-agent.jar:5.8.0]
at com.newrelic.agent.service.ServiceManagerImpl.<init>(ServiceManagerImpl.java:121)~[newrelic-agent.jar:5.8.0]
at com.newrelic.agent.Agent.tryToInitializeServiceManager(Agent.java:194) [newrelic-agent.jar:5.8.0]
at com.newrelic.agent.Agent.continuePremain(Agent.java:137) [newrelic-agent.jar:5.8.0]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:140) [newrelic-agent.jar:5.8.0]
at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:77) [newrelic-agent.jar:5.8.0]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:513) [?:?]
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:525) [?:?]
10:14:53.566 [main] INFO org.springframework.core.KotlinDetector - Kotlin reflection implementation not found at runtime, related features won't be available.
2020-03-20 10:14:55.616 [] WARN --- [kground-preinit] o.s.h.c.j.Jackson2ObjectMapperBuilder : For Jackson Kotlin classes support please add "com.fasterxml.jackson.module:jackson-module-kotlin" to the classpath...
As the container runs java with native memory tracking (-XX:NativeMemoryTracking=summary),
it’s possible to ask the JVM some information about JVM memory zones other than heap.
This shows what the JVM reserved for memory, 7168324 KB (~7.1 GB), and what is actually used
by the jvm process, 4456448 KB (~4.45 GB).
heap arena: note the reserved and committed values are the same, 4456448 KB; this is in fact
the same figure as the VM flag -XX:MaxHeapSize=4563402752, which is expressed in bytes (4456448 KB × 1024 = 4563402752 B)
~165 MB of class metadata
how many classes have been loaded : 28431
674 threads are using ~81 MB out of 696 MB reserved
Code cache area (assembly of the used methods) ~105 MB out of 251 MB which matches with -XX:ReservedCodeCacheSize=251658240
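The heap figure above can be reconciled with the flag by a unit conversion; NMT reports KB while -XX:MaxHeapSize is in bytes:

```shell
echo $((4456448 * 1024))   # prints: 4563402752, i.e. exactly -XX:MaxHeapSize
```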
There are also the MappedByteBuffers, the files mapped to the virtual memory of a process.
NMT does not track them; however, MappedByteBuffers can also take physical memory, and there is
no simple way to limit how much they can take. It is possible, though, to see the actual usage
in a process's memory map: pmap -x <pid>
That’s a lot of information, let’s refine that with more
knowledge about /proc/<pid>/maps,
each map is associated with a set of modes:
r : readable memory mapping
w: writable memory mapping
x: executable memory mapping
s or p : shared memory mapping or private mapping. /proc/<pid>/maps shows both,
but pmap only shows the s flag.
Also, pmap has another mapping mode which is barely referenced anywhere:
R: if set, the map has no swap space reserved (MAP_NORESERVE flag of mmap).
This means that we can get a segmentation fault by accessing that memory if it has not
already been mapped to physical memory, and the system is out of physical memory.
So what interests us at this point are the process's memory-mapped (shared) files
Total memory = Heap + Code Cache + Metaspace + Symbol tables
             + Compiler + Other JVM structures + Thread stacks
             + Direct buffers + Mapped files
             + Native Libraries + Malloc overhead + ...
Heap                                           4456448
Code Cache                                     105201
Metaspace                                      165788
Symbol tables                                  28915
Compiler                                       5914
Other JVM structures
  (Internal + NMT + smaller areas)             24460 + 8433 + 217 + 7 + 19 + 1362 + 837 + 8 + 32
Thread stacks                                  85455
Direct buffers (Other)                         267034
Mapped files                                   36060 + 4 + 4 + 8 + 4 + 4 + 12 + 8 + 4 + 12 + 28
Native Libraries                               unaccounted at this time
Malloc overhead                                accounted in NMT
…
Total                                          5186278 KB
5186278 KB is just a tad under 5 GB (5242880 KB).
More important is the actual non-heap usage :
5186278 - 4456448 = 729830 KB
Non heap    5186278 - 4456448 = 729830    ~14 %
Heap        4456448                       ~85 %
Total       5186278                       100 %
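The percentages above can be re-derived with shell arithmetic:

```shell
total=5186278 heap=4456448
echo "non heap: $(( (total - heap) * 100 / total ))%"   # prints: non heap: 14%
echo "heap: $(( heap * 100 / total ))%"                 # prints: heap: 85%
```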
This means the application needs at least 730 MB plus the heap to run.
The heap committed memory is 4563402752 B (set via -XX:MaxRAMPercentage=85.000000),
but the heap usage may have a different figure :
$ jcmd $(pgrep java) GC.heap_info
6:
garbage-first heap total 4456448K, used 925702K [0x00000006f0000000, 0x0000000800000000)
region size 2048K, 387 young (792576K), 12 survivors (24576K)
Metaspace used 154131K, capacity 160610K, committed 160976K, reserved 1189888K
class space used 18070K, capacity 20474K, committed 20556K, reserved 1048576K
Successive executions may give different results for the used memory
$ jcmd 6 GC.heap_info
6:
garbage-first heap total 4456448K, used 1245902K [0x00000006f0000000, 0x0000000800000000)
region size 2048K, 543 young (1112064K), 12 survivors (24576K)
Metaspace used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
class space used 18071K, capacity 20476K, committed 20556K, reserved 1048576K
$ jcmd 6 GC.heap_info
6:
garbage-first heap total 4456448K, used 2421454K [0x00000006f0000000, 0x0000000800000000)
region size 2048K, 1117 young (2287616K), 12 survivors (24576K)
Metaspace used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
class space used 18071K, capacity 20476K, committed 20556K, reserved 1048576K
The heap went from 925702 KB to 2421454 KB! Following the trend of the heap usage
can lead to the actual memory usage for this app (in the given cluster topology):
2.5 GB of used heap + 0.8 GB of non-heap + 0.2 GB margin = 3.5 GB
Which leads to setting -XX:MaxRAMPercentage=71.0 if we want a lower memory footprint.
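The 71% figure can be sanity-checked against the container limit implied by the 85% flag (pure arithmetic, not a measured value):

```shell
awk 'BEGIN {
  limit = 4563402752 / 0.85                      # implied container memory limit
  printf "limit: %.0f B\n", limit                # prints: limit: 5368709120 B (= 5 GiB)
  printf "heap at 71%%: %.0f B\n", limit * 0.71  # ~3.8 GB
}'
```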
RSS ⇒ amount of physical memory allocated & used by a process
current memory usage ~4.9GB, but it’s recommended to read cache+rss+swap values in memory.stat
limit on the memory usage (~5.3GB)
current memory and swap usage (~4.9 GB)
limit on memory and swap (~5.3GB)
Note the memory.limit_in_bytes and memory.memsw.limit_in_bytes values are the same,
which means that the processes in the cgroup can use all the memory before swapping;
however it is not impossible for the process to use the swap before this limit is reached:
in fact, due to the swappiness value, the kernel may try to reclaim memory.
There are other parameters related to the kernel and tcp allocations.
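The figures above come from the cgroup filesystem; a sketch assuming the cgroup v1 memory controller at its usual mount point (paths differ under cgroup v2):

```shell
cg=/sys/fs/cgroup/memory
cat $cg/memory.usage_in_bytes         # current memory usage
cat $cg/memory.limit_in_bytes         # memory limit
cat $cg/memory.memsw.usage_in_bytes   # memory + swap usage
cat $cg/memory.memsw.limit_in_bytes   # memory + swap limit
grep -E '^(cache|rss|swap) ' $cg/memory.stat   # the recommended values to read
```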
❯ fluxctl list-images --k8s-fwd-ns flux --workload security:helmrelease/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD CONTAINER IMAGE CREATED
security:helmrelease/security-auth chart-image eu.gcr.io/bbc-registry/security-auth
'-> 1.20200304.160755-a1aa0d1 04 Mar 20 16:08 UTC 1.20200304.150021-92ecfbf 04 Mar 20 15:00 UTC 1.20200303.125505-57ae9dd 03 Mar 20 12:55 UTC 1.20200302.233522-5f5253c 02 Mar 20 23:35 UTC 1.20200302.230556-4024625 02 Mar 20 23:06 UTC 1.20200302.225035-898a035 02 Mar 20 22:50 UTC 1.20200302.222412-9bb62a8 02 Mar 20 22:24 UTC 1.20200302.215626-fb438b5 02 Mar 20 21:56 UTC 1.20200302.172527-fb438b5 02 Mar 20 17:25 UTC 1.20200302.143544-88f4c29 02 Mar 20 14:35 UTC
❯ fluxctl list-images --k8s-fwd-ns flux --workload security:deployment/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD CONTAINER IMAGE CREATED
security:deployment/security-auth security-auth eu.gcr.io/bbc-registry/security-auth
| 1.20200304.160755-a1aa0d1 04 Mar 20 16:08 UTC
| 1.20200304.150021-92ecfbf 04 Mar 20 15:00 UTC
| 1.20200303.125505-57ae9dd 03 Mar 20 12:55 UTC
'-> 1.20200302.233522-5f5253c 02 Mar 20 23:35 UTC 1.20200302.230556-4024625 02 Mar 20 23:06 UTC 1.20200302.225035-898a035 02 Mar 20 22:50 UTC 1.20200302.222412-9bb62a8 02 Mar 20 22:24 UTC 1.20200302.215626-fb438b5 02 Mar 20 21:56 UTC 1.20200302.172527-fb438b5 02 Mar 20 17:25 UTC 1.20200302.143544-88f4c29 02 Mar 20 14:35 UTC
❯ k get event --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security --field-selector involvedObject.name=security-auth-65754f9589-tzw5x
LAST SEEN TYPE REASON OBJECT MESSAGE
37m Normal Scheduled pod/security-auth-65754f9589-tzw5x Successfully assigned security/security-auth-65754f9589-tzw5x to gke-prod-1-n1-standard-32-cos-898812f7-b9kg
37m Normal Pulled pod/security-auth-65754f9589-tzw5x Container image "istio/proxyv2:1.4.4" already present on machine
37m Normal Created pod/security-auth-65754f9589-tzw5x Created container istio-init
37m Normal Started pod/security-auth-65754f9589-tzw5x Started container istio-init
37m Normal Pulling pod/security-auth-65754f9589-tzw5x Pulling image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m Normal Pulled pod/security-auth-65754f9589-tzw5x Successfully pulled image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m Normal Created pod/security-auth-65754f9589-tzw5x Created container security-auth
36m Normal Started pod/security-auth-65754f9589-tzw5x Started container security-auth
36m Normal Pulled pod/security-auth-65754f9589-tzw5x Container image "istio/proxyv2:1.4.4" already present on machine
36m Normal Created pod/security-auth-65754f9589-tzw5x Created container istio-proxy
36m Normal Started pod/security-auth-65754f9589-tzw5x Started container istio-proxy
36m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: HTTP probe failed with statuscode: 503
36m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: HTTP probe failed with statuscode: 503
36m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: Get http://10.208.13.73:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
36m Normal Killing pod/security-auth-65754f9589-tzw5x Stopping container security-auth
36m Normal Killing pod/security-auth-65754f9589-tzw5x Stopping container istio-proxy
35m Warning Unhealthy pod/security-auth-65754f9589-tzw5x Readiness probe failed: Get http://10.208.13.73:15020/healthz/ready: dial tcp 10.208.13.73:15020: connect: connection refused
❯ k rollout status deployment/security-auth --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 9 of 10 updated replicas are available...
deployment "security-auth" successfully rolled out
# requires iproute2
ss -nlp
ss --listening --numeric --process --unix --tcp
❯ docker run test-edge-api
Picked up JAVA_TOOL_OPTIONS:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Configuration file not found. The agent will attempt to read required values from environment variables.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Using default collector host: collector.newrelic.com
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic ERROR: Unable to start the New Relic Agent. Your application will continue to run but it will not be monitored.
com.newrelic.agent.config.ConfigurationException: The agent requires an application name. Check the app_name setting in newrelic.yml
at com.newrelic.agent.config.ConfigServiceFactory.validateConfig(ConfigServiceFactory.java:64) ~[newrelic-agent.jar:5.8.0]
at com.newrelic.agent.config.ConfigServiceFactory.createConfigService(ConfigServiceFactory.java:27) ~[newrelic-agent.jar:5.8.0]
at com.newrelic.agent.service.ServiceManagerImpl.<init>(ServiceManagerImpl.java:121)~[newrelic-agent.jar:5.8.0]
at com.newrelic.agent.Agent.tryToInitializeServiceManager(Agent.java:194) [newrelic-agent.jar:5.8.0]
at com.newrelic.agent.Agent.continuePremain(Agent.java:137) [newrelic-agent.jar:5.8.0]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:140) [newrelic-agent.jar:5.8.0]
at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:77) [newrelic-agent.jar:5.8.0]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:513) [?:?]
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:525) [?:?]
10:14:53.566 [main] INFO org.springframework.core.KotlinDetector - Kotlin reflection implementation not found at runtime, related features won't be available.2020-03-20 10:14:55.616 [] WARN --- [kground-preinit] o.s.h.c.j.Jackson2ObjectMapperBuilder : For Jackson Kotlin classes support please add "com.fasterxml.jackson.module:jackson-module-kotlin" to the classpath...
As the container runs java with native memory tracking (-XX:NativeMemoryTracking=summary),
it’s possible to ask the JVM some information about JVM memory zones other than heap.
This shows what the JVM reserved for memory 7168324 KB (~7.1 GB) and what is actually used
by the jvm process 4456448 KB (~4.45 GB).
heap arena, note reserved and committed values are the same 4456448 KB, I’m not sure why this
number is different from the VM flags -XX:MaxHeapSize=4563402752
~165 MB of class metadata
how many classes have been loaded : 28431
674 threads are using ~81 MB out of 696 MB reserved
Code cache area (assembly of the used methods) ~105 MB out of 251 MB which matches with -XX:ReservedCodeCacheSize=251658240
There’s also the MappedByteBuffers, these are the files mapped to virtual memory of a process.
NMT does not track them, however, MappedByteBuffers can also take physical memory. And there is
no a simple way to limit how much they can take. However it’s possible to see the actual usage
of a process memory map: pmap -x <pid>
That’s a lot of information, let’s refine that with more
knowledge about /proc/<pid>/maps,
each map is associated with a set of modes:
r-: readable memory mapping
w: writable memory mapping
x: executable memory mapping
s or p : shared memory mapping or private mapping. /proc/<pid>/maps shows both
but pmap only show the s flag.
Also, pmap has another mapping mode which I barely found any reference of,
here’s one and here
R: if set, the map has no swap space reserved (MAP_NORESERVE flag of mmap).
This means that we can get a segmentation fault by accessing that memory if it has not
already been mapped to physical memory, and the system is out of physical memory.
So what’s interesting us at this time are the process’s memory mapped (shared) files
Total memory = Heap + Code Cache + Metaspace + Symbol tables
+ Compiler + Other JVM structures + Thread stacks
+ Direct buffers + Mapped files +
+ Native Libraries + Malloc overhead + ...
Heap
4456448
Code Cache
105201
Metaspace
165788
Symbol tables
28915
Compiler
5914
Other JVM structures
(Internal + NMT + smaller area)
24460 + 8433 + 217 + 7 + 19 + 1362 + 837 + 8 + 32
Thread stacks
85455
Direct buffers (Other)
267034
Mapped files
36060 + 4 + 4 + 8 + 4 + 4 + 12 + 8 + 4 + 12 + 28
Native Libraries
unaccounted at this time
Malloc overhead
accounted in NMT
…
Total
5186278 KB
5186278 KB is just tad under 5 GB (5242880 KB).
More importantly is the actual non heap usage :
5186278 - 4456448 = 729830 KB
Non heap
5186278 - 4456448 = 729830
~14 %
Heap
4456448
~85 %
Total
5186278
100 %
This means the application needs at least 730 MB plus the heap to run.
The heap committed memory is 4563402752 B (set via -XX:MaxRAMPercentage=85.000000),
but the heap usage may have a different figure :
$ jcmd $(pgrep java) GC.heap_info
6:
garbage-first heap total 4456448K, used 925702K [0x00000006f0000000, 0x0000000800000000)
region size 2048K, 387 young (792576K), 12 survivors (24576K)
Metaspace used 154131K, capacity 160610K, committed 160976K, reserved 1189888K
class space used 18070K, capacity 20474K, committed 20556K, reserved 1048576K
Successive execution may give different results about the used memory
$ jcmd 6 GC.heap_info
6:
garbage-first heap total 4456448K, used 1245902K [0x00000006f0000000, 0x0000000800000000)
region size 2048K, 543 young (1112064K), 12 survivors (24576K)
Metaspace used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
class space used 18071K, capacity 20476K, committed 20556K, reserved 1048576K
$ jcmd 6 GC.heap_info
6:
garbage-first heap total 4456448K, used 2421454K [0x00000006f0000000, 0x0000000800000000)
region size 2048K, 1117 young (2287616K), 12 survivors (24576K)
Metaspace used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
class space used 18071K, capacity 20476K, committed 20556K, reserved 1048576K
The heap went from 925702 KB to 2421454 KB! Following the trend of the heap usage
can lead to the actual memory requirement for this app (in the given cluster topology):
2.5 GB of used heap + 0.8 GB of non heap + 0.2 GB margin = 3.5 GB
Which leads to setting -XX:MaxRAMPercentage=71.0 if we want a lower memory footprint.
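As a sanity check on that percentage, the arithmetic can be sketched in shell (the KB figures are just the 2.5 GB and 3.5 GB from above):

```shell
# 2.5 GB target heap out of a 3.5 GB total budget -> MaxRAMPercentage
heap_kb=2621440    # 2.5 GB
total_kb=3670016   # 3.5 GB
awk -v h="$heap_kb" -v t="$total_kb" 'BEGIN { printf "%.1f\n", 100 * h / t }'
# prints 71.4
```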
- RSS ⇒ amount of physical memory allocated and used by a process
- memory.usage_in_bytes : current memory usage (~4.9 GB); it is recommended to read the cache + rss + swap values in memory.stat instead
- memory.limit_in_bytes : limit on the memory usage (~5.3 GB)
- memory.memsw.usage_in_bytes : current memory and swap usage (~4.9 GB)
- memory.memsw.limit_in_bytes : limit on memory and swap (~5.3 GB)
Note the memory.limit_in_bytes and memory.memsw.limit_in_bytes values are the same,
which means the processes in the cgroup can use all the memory before swapping;
however it is not impossible for the process to use swap before this limit is reached.
In fact, due to the swappiness value, the kernel may try to reclaim memory.
There are other parameters related to the kernel and TCP allocations.
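A quick way to eyeball those counters is to express usage as a percentage of the limit. A minimal sketch; the helper name is made up, and the commented paths assume a cgroup v1 hierarchy mounted under /sys/fs/cgroup/memory (cgroup v2 uses different file names):

```shell
# Hypothetical helper: percentage of a memory limit currently in use
mem_pct() {
  awk -v u="$1" -v l="$2" 'BEGIN { printf "%.0f\n", 100 * u / l }'
}

# In a cgroup v1 container one would feed it the real counters, e.g.:
#   usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
#   limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
mem_pct 5261334937 5690831667   # ~4.9 GB of ~5.3 GB, prints 92
```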
* Mouse-mode has been rewritten. There's now no longer options for:
  - mouse-resize-pane
  - mouse-select-pane
  - mouse-select-window
  - mode-mouse
  Instead there is just one option: 'mouse', which turns on mouse support entirely.
Possibly add this to ~/.tmux.conf to handle pre 2.1 and post 2.1 :
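A sketch of such a conditional block; the version test is an assumption and may need adjusting for your tmux version string:

```
# ~/.tmux.conf — old mouse options on tmux < 2.1, the single 'mouse' option otherwise
if-shell 'tmux -V | grep -qE "^tmux (1\.|2\.0)"' \
  'set -g mode-mouse on; set -g mouse-resize-pane on; set -g mouse-select-pane on; set -g mouse-select-window on' \
  'set -g mouse on'
```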
In tmux key syntax, C-m means carriage return; one could use Enter instead.
And the resulting session should look like that.
------------
| tail |
|----------|
| | top |
------------
Now, to sub-divide the bottom-left pane again, switch back to it either with last-pane or, in more complex windows, with select-pane -t 1, where 1 is the pane index in creation order, starting at 0.
Basically, knowing your way around split-window and select-pane is all you need. It's also handy to pass -p 75 to split-window to give the new pane a percentage size, for more control over the pane sizes.
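Putting it together, a layout like the diagram above can be scripted; a sketch, where the session name and the commands run in the panes are just examples:

```shell
tmux new-session -d -s demo 'tail -f /var/log/syslog'   # full-width top pane
tmux split-window -v -p 25                              # bottom pane, 25% of the height
tmux split-window -h 'top'                              # bottom-right pane running top
tmux select-pane -t 1                                   # jump back to the bottom-left pane
tmux attach -t demo
```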
g* : search for partial word under cursor (repeat with n)
ctrl-o, ctrl-i : go through jump locations
[I : show lines with matching word under cursor
Search and replace...
:%s/search_for_this/replace_with_this/ : search whole file and replace
:%s/search_for_this/replace_with_this/gc : confirm each replace
Selecting
shift-v : selects entire lines
v : selects range of text
ctrl-v : selects columns
gv : reselect block
shift-i : change column text (after a ctrl-v block selection)
Indentation
:set tabstop=8 : tabs are at proper location
:set expandtab : don't use actual tab character (ctrl-v)
:set shiftwidth=4 : indenting is 4 spaces
:set autoindent : turns it on
:set smartindent : does the right thing (mostly) in programs
:set cindent : stricter rules for C programs
To indent the current line, or a visual block:
ctrl-t, ctrl-d : indent current line forward, backwards (insert mode)
visual > or < : indent block by sw (repeat with . )
To stop indenting when pasting with the mouse, add this to your .vimrc:
:set pastetoggle=<f5>
Changing line ending
In the current view
:e ++ff=dos
:e ++ff=mac
:e ++ff=unix
While saving
:w ++ff=dos
:w ++ff=mac
:w ++ff=unix
And you can use it from the command-line
for file in *.cpp
do
    vi +':w ++ff=unix' +':q' "${file}"
done
Saving read-only file
:w !sudo tee %
Open file at a particular position
vim +commandHere file # open vim and execute the vim command
vim + file # open vim at the end of file
vim +362 file # open file at line 362
vim +/searched_term file # open vim at the first line matching 'searched_term'