Name | Definition | Example | Characteristics | Organization |
---|---|---|---|---|
Connectivity models | Data points closer together in data space are more similar than those farther apart | Hierarchical clustering | Easy to interpret but do not scale well | Hierarchical |
Centroid models | Iterative; similarity is interpreted as the proximity of a data point to the cluster centroid | K-means | The final number of clusters must be specified in advance | Non-hierarchical |
Distribution models | Based on the probability that data points in a cluster belong to the same distribution | EM algorithm (Expectation-Maximization) | Prone to overfitting | Non-hierarchical |
Density models | Isolate regions of different density as the basis for clustering | Density-Based Spatial Clustering of Applications with Noise (DBSCAN) | Struggles with high-dimensional data and clusters of varying density | Non-hierarchical |
```yaml
# global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  - job_name: 'any-name-you-want'
    static_configs:
      - targets: ['localhost:8080'] # the address of an application that exposes metrics for Prometheus
```
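To try this out with the official image, a minimal sketch, assuming the config above is saved as `prometheus.yml` in the current directory (the image reads its config from `/etc/prometheus/prometheus.yml`):

```sh
docker run -d --name prometheus \
  -p 9090:9090 \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
```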
```conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["a-kafka-topic"]
  }
}

filter {
  json {
    source => "message"
  }
}
```
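The snippet above stops at the filter block; to actually do something with the parsed events, the pipeline needs an output section as well. A minimal sketch that just prints events to the console for debugging:

```conf
output {
  stdout { codec => rubydebug }
}
```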
If the base image used to build the Docker container does not include CA certificates, any HTTPS (or other TLS-secured) connection attempted from inside the container will fail certificate verification. The fix is to install the CA certificates in the container. For example, if you are using Alpine, you can do it as follows:
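A minimal sketch of the relevant Dockerfile layer, assuming an Alpine-based image:

```dockerfile
# Install the system CA certificate bundle so TLS connections can be verified
RUN apk add --no-cache ca-certificates
```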
```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-ip:9090
```
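Grafana loads datasource provisioning files like this one from `/etc/grafana/provisioning/datasources` at startup. A sketch of mounting it into the official image, assuming the file above is saved as `datasource.yml`:

```sh
docker run -d --name grafana \
  -p 3000:3000 \
  -v $(pwd)/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml \
  grafana/grafana
```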
version: "3.1" | |
services: | |
spark-master: | |
image: bde2020/spark-master:2.4.0-hadoop2.7 | |
container_name: spark-master | |
ports: | |
- "8080:8080" | |
- "7077:7077" | |
volumes: | |
- ${PWD}/spark/metrics.properties:/spark/conf/metrics.properties |
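The compose file above only shows the master; a worker service looks similar. A sketch assuming the matching bde2020 worker image, where `SPARK_MASTER` points the worker at the master container, and the same metrics.properties is mounted so worker metrics are reported too:

```yaml
  spark-worker:
    image: bde2020/spark-worker:2.4.0-hadoop2.7
    container_name: spark-worker
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
    volumes:
      - ${PWD}/spark/metrics.properties:/spark/conf/metrics.properties
```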
```sh
docker run --name spark-app \
  -e ENABLE_INIT_DAEMON=false \
  --link spark-master:spark-master \
  -v path/to-your/metrics.properties:/spark/conf/metrics.properties \
  --network=same-network-as-master-and-worker \
  spark-app
```
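The application container has to join the same Docker network as the master and the worker so that hostnames like `spark-master` and `graphite_exporter` resolve; the mounted metrics.properties is what tells Spark to push its metrics to the Graphite sink configured below.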
```properties
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite_exporter
*.sink.graphite.port=9109
*.sink.graphite.period=10
*.sink.graphite.unit=seconds

# Enable JvmSource for instance master, worker, driver and executor
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```
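Spark's GraphiteSink cannot talk to Prometheus directly; the `graphite_exporter` host referenced above is typically the `prom/graphite-exporter` bridge, which accepts Graphite data on port 9109 and re-exposes it for Prometheus to scrape on port 9108. A sketch of running it on the shared network:

```sh
docker run -d --name graphite_exporter \
  --network=same-network-as-master-and-worker \
  -p 9108:9108 -p 9109:9109 -p 9109:9109/udp \
  prom/graphite-exporter
```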