The Dockerfile should include the JProfiler installation:
RUN wget <JProfiler file location> -P /tmp/ && \
    tar -xzf /tmp/<JProfiler file> -C /usr/local && \
    rm /tmp/<JProfiler file>
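As a concrete illustration, a hypothetical version might look like the following; the URL, archive name, install path, and agent port are assumptions to be replaced with the details of your actual JProfiler release:

# Hypothetical values throughout: download URL, archive name, install path,
# and the agent port 8849 are assumptions; substitute your own release.
RUN wget https://example.com/jprofiler_linux_13_0.tar.gz -P /tmp/ && \
    tar -xzf /tmp/jprofiler_linux_13_0.tar.gz -C /usr/local && \
    rm /tmp/jprofiler_linux_13_0.tar.gz
# Attach the agent so the containerized JVM can be profiled remotely
ENV JAVA_TOOL_OPTIONS="-agentpath:/usr/local/jprofiler13.0/bin/linux-x64/libjprofilerti.so=port=8849,nowait"
EXPOSE 8849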
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                         0.5 ns
Branch mispredict                          5   ns
L2 cache reference                         7   ns              14x L1 cache
Mutex lock/unlock                         25   ns
Main memory reference                    100   ns              20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy           3,000   ns     3 us
Send 1K bytes over 1 Gbps network     10,000   ns    10 us
Read 4K randomly from SSD*           150,000   ns   150 us     ~1GB/sec SSD
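These figures are mainly useful for back-of-envelope estimates; a small sketch (in Python, with an illustrative workload) plugging in the values above:

# Values copied from the table above, in nanoseconds.
L1_REF_NS = 0.5
MAIN_MEMORY_REF_NS = 100
SSD_RANDOM_4K_READ_NS = 150_000

# One million dependent memory references cost ~100 ms from main memory,
# versus ~0.5 ms if everything stays in L1 cache.
print(1_000_000 * MAIN_MEMORY_REF_NS / 1e6, "ms from main memory")   # 100.0 ms
print(1_000_000 * L1_REF_NS / 1e6, "ms from L1 cache")               # 0.5 ms

# Reading 1 GB as serial random 4K reads from the SSD above takes ~39 s,
# while the table's ~1 GB/sec sequential figure implies ~1 s: access pattern matters.
pages = (1 << 30) // (4 << 10)                                        # 262,144 reads
print(pages * SSD_RANDOM_4K_READ_NS / 1e9, "s of random 4K reads")   # ~39 s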
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval (in minutes)
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled (true|false)
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags (comma-separated list of key=value)
service.beta.kubernetes.io/aws-load-balancer-backend-protocol (http|https|ssl|tcp)
service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled (true|false)
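For reference, a minimal sketch of how these annotations are attached to a Service of type LoadBalancer; the service name, selector, ports, and chosen values are illustrative assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-service            # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "team=web,env=staging"
spec:
  type: LoadBalancer
  selector:
    app: my-app               # illustrative selector
  ports:
    - port: 80
      targetPort: 8080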
$ uname -r
DECLARE E INT DEFAULT 0;
DECLARE M TEXT DEFAULT NULL;
DECLARE CONTINUE HANDLER FOR 1000 SET E=1000, M="hashchk";
DECLARE CONTINUE HANDLER FOR 1001 SET E=1001, M="isamchk";
DECLARE CONTINUE HANDLER FOR 1002 SET E=1002, M="NO";
DECLARE CONTINUE HANDLER FOR 1003 SET E=1003, M="YES";
DECLARE CONTINUE HANDLER FOR 1004 SET E=1004, M="Can't create file '%s' (errno: %d)";
DECLARE CONTINUE HANDLER FOR 1005 SET E=1005, M="Can't create table '%s' (errno: %d)";
DECLARE CONTINUE HANDLER FOR 1006 SET E=1006, M="Can't create database '%s' (errno: %d)";
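A minimal sketch of how handlers like these could be used inside a procedure body; the procedure name, the table, and the choice of error 1146 (table doesn't exist) are illustrative assumptions:

DELIMITER //
CREATE PROCEDURE capture_error_demo()
BEGIN
  DECLARE E INT DEFAULT 0;
  DECLARE M TEXT DEFAULT NULL;
  -- 1146 = "Table '%s.%s' doesn't exist"
  DECLARE CONTINUE HANDLER FOR 1146 SET E=1146, M="Table '%s.%s' doesn't exist";

  INSERT INTO missing_table (id) VALUES (1);   -- triggers 1146 if the table is absent
  SELECT E AS error_code, M AS error_message;  -- 0 / NULL when nothing went wrong
END //
DELIMITER ;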
If you want, I can try and help with pointers as to how to improve the indexing speed you get. It's quite easy to really increase it by using some simple guidelines, for example:
- Use create in the index API (assuming you can).
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the Elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and then once the bulk loading is done, increase it to the value you want using the update_settings API (see the sketch after this list). This will improve things as possibly fewer shards will be allocated to each machine.
- Increase the number of machines you have so
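A hedged illustration of the replica tip above using the index settings API; the index name products, the endpoint, and the chosen values are assumptions, and the exact setting names differ between Elasticsearch versions:

# Bulk load with 0 replicas and a relaxed refresh interval, then restore them afterwards.
curl -XPUT 'http://localhost:9200/products/_settings' -H 'Content-Type: application/json' -d '
{"index": {"number_of_replicas": 0, "refresh_interval": "30s"}}'

# ... run the bulk indexing ...

curl -XPUT 'http://localhost:9200/products/_settings' -H 'Content-Type: application/json' -d '
{"index": {"number_of_replicas": 1, "refresh_interval": "1s"}}'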
// installed Clojure packages:
//
// * BracketHighlighter
// * lispindent
// * SublimeREPL
// * sublime-paredit
{
"word_separators": "/\\()\"',;!@$%^&|+=[]{}`~?", | |
"paredit_enabled": true, |
{:user {:dependencies [[org.clojure/tools.namespace "0.2.3"]
                       [spyscope "0.1.3"]
                       [criterium "0.4.1"]]
        :injections [(require '(clojure.tools.namespace repl find))
                     ; try/catch to work around an issue where `lein repl` outside a project dir
                     ; will not load reader literal definitions correctly:
                     (try (require 'spyscope.core)
                          (catch RuntimeException e))]
        :plugins [[lein-pprint "1.1.1"]
                  [lein-beanstalk "0.2.6"]
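For context, a rough sketch of what these profile entries provide at the REPL; the specific forms below are assumptions about typical usage of tools.namespace, spyscope, and criterium, not part of the profile itself:

;; Reload changed namespaces (org.clojure/tools.namespace)
(require '[clojure.tools.namespace.repl :refer [refresh]])
(refresh)

;; Print an intermediate value in place via spyscope's reader tag
(+ 10 #spy/p (* 2 3))   ; prints 6, returns 16

;; Quick micro-benchmark (criterium)
(require '[criterium.core :refer [quick-bench]])
(quick-bench (reduce + (range 1000)))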
For companies that work with online advertising, more precisely DSPs and their real-time bidding (RTB) platforms, it is very important to collect information about the behavior and interests of users while they surf the internet. In this graph gist, I therefore describe a basic approach that can be used to analyze such data, considering a certain period of time and the products viewed by each user. Let's say some characters from Breaking Bad surfed the internet a few days ago, found some interesting products (chemical elements), and are thinking about buying them. Such information is extremely important when making a bid in an advertising auction, since it tells us the profile and interests of a given user. We therefore store these users, the products, and the dates of the views so we can extract this information in the future.
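A minimal Cypher sketch of that model; the labels, relationship type, property names, and values are illustrative assumptions rather than the gist's actual schema:

// A user, a product, and a dated view relationship between them
CREATE (walter:User {name: "Walter White"})
CREATE (lithium:Product {name: "Lithium"})
CREATE (walter)-[:VIEWED {date: "2014-05-01"}]->(lithium)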