If you are hosting KVM guests with libvirt and want to monitor them with Prometheus and Grafana, there are three exporter implementations to be found. Be aware that they follow different conventions and expose different metrics.
- libvirt_exporter (Go)
- grafana json + prometheus recording rule
  - Enable the recording rule in the Prometheus configuration file (a sketch of such a rule follows after this list).
  - The job name is
libvirt_exporter
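A minimal sketch of what that configuration might look like, assuming the exporter is scraped under a job named libvirt and exposes a per-domain CPU time counter; the file name, job name, and metric name below are assumptions, not taken from the note.

# prometheus.yml (excerpt): point Prometheus at the rule file
rule_files:
  - "rules/libvirt.rules.yml"

# rules/libvirt.rules.yml: precompute a per-domain CPU usage rate for Grafana panels
groups:
  - name: libvirt
    rules:
      - record: job:libvirt_domain_cpu_usage:rate5m
        expr: rate(libvirt_domain_info_cpu_time_seconds_total{job="libvirt"}[5m])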
# Read a Newick tree and derive sample names from the tip labels
library(ape)    # read.tree()
library(dplyr)
library(ggtree)
tree <- read.tree("Benchmarking_tree_07Dec21.nwk")
tips <- tree$tip.label %>%
  tibble(tip = .) %>%
  mutate(sample = gsub("_[a-zA-Z\\_]+", "", tip))   # strip the trailing suffix to get the sample name
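A possible continuation, assuming the goal is to plot the tree with the derived sample names as tip labels; this sketch is not part of the original snippet.

# Attach the tip/sample table to the tree and label tips by sample name
p <- ggtree(tree) %<+% tips +
  geom_tiplab(aes(label = sample), size = 2)
p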
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

// Build a channel of (sample_id, [read1, read2], contigs) tuples from a CSV sample sheet
workflow {
    ch_pilon = Channel.fromPath(params.sample_sheet)
        .splitCsv(header: true)
        .map { row -> tuple(row.sample_id, [row.sr1, row.sr2], row.contigs) }

    ch_pilon.view()
}
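For reference, a sample sheet this snippet would accept might look like the following; the column names come from the map closure above, while the paths are made up for illustration.

sample_id,sr1,sr2,contigs
sampleA,reads/sampleA_R1.fastq.gz,reads/sampleA_R2.fastq.gz,assemblies/sampleA.contigs.fa
sampleB,reads/sampleB_R1.fastq.gz,reads/sampleB_R2.fastq.gz,assemblies/sampleB.contigs.fa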
# Load the benchmarking results table, keep the feature columns,
# and strip assembler suffixes from the sample names
library(tidyverse)
library(data.table)
library(janitor)
csv <- fread("Ebenn_code_data_21Jul21_18.08.csv")
features <- names(csv)[-c(1, 2, 4)]   # drop the non-feature columns
sample_names <- csv$Name %>%
  gsub("_flye_[a-z\\_]*|_hybrid", "", .) %>%
# Generate barcode indexes for the NBD96 kit
rm(list = ls())
library(dplyr)
library(data.table)
# Plate-style positions, transposed so the vector runs A1..A6, B1..B6, ..., H1..H6
# NOTE: 1:6 yields 48 positions; a full 96-barcode layout would use 1:12
index <- sapply(1:6, function(x) paste0(LETTERS[1:8], x)) %>% t() %>% as.vector()
bc <- list()
letter_length <- 8
#!/usr/bin/env python3
"""
Running this script is (intended to be) equivalent to running the following Snakefile:

include: "pipeline.conf"  # Should be an empty file
shell.prefix("set -euo pipefail;")

rule all:
    input:
Bootstrap: docker
From: ubuntu:xenial

%labels
    Author: Thanh Le Viet
    Software: pangolin
    Description: "Pangolin: Software package for assigning SARS-CoV-2 genome sequences to global lineages"
    Notes: "This singularity definition is based on https://github.com/StaPH-B/docker-builds"
# Cheatsheet for the Linux find command.
# Taken from http://alvinalexander.com/unix/edu/examples/find.shtml

# basic 'find file' commands
# --------------------------
find / -name foo.txt -type f -print   # full command
find / -name foo.txt -type f          # -print isn't necessary
find / -name foo.txt                  # don't have to specify "type==file"
find . -name foo.txt                  # search under the current dir
find . -name "foo.*"                  # wildcard
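A few more patterns in the same spirit; these are standard find options, not taken from the original list.

find . -iname "foo*"                   # case-insensitive name match
find . -type d -name build             # find directories named 'build'
find . -name "*.log" -mtime -7         # .log files modified in the last 7 days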
# Required libraries
library(tidyverse)
library(rgdal)
library(rgeos)
library(broom)
library(maps)

# Postcode spatial data
# You need the postcode map from https://www.opendoorlogistics.com/wp-content/uploads/Data/UK-postcode-boundaries-Jan-2015.zip
england <- readOGR(
  dsn = "./osm/postcode/Distribution/",
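The snippet breaks off inside the readOGR() call. A plausible continuation, given that broom and ggplot2 are loaded, would be to flatten the polygons and draw them; this is a sketch, not part of the original.

# Hypothetical continuation once `england` has been read:
england_df <- broom::tidy(england)      # flatten polygons to long/lat/group rows
ggplot(england_df, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "grey90", colour = "grey40") +
  coord_equal() +
  theme_void()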
#!/usr/bin/env bash
# Author: Thanh Le Viet
# This script splits consecutive fast5 files into batch file lists for basecalling with guppy.
# It is used for "live" basecalling while the sequencing run is still in progress on another machine.
# Command: bash ./run_basecalling.sh
# summary_file="sequencing_summary_FAO15487_23198198.txt"
# Usage: run watch_and_basecalling.sh sequencing_summary_FAO15487_23198198.txt
# Note: each run has a different summary_file name.

summary_file=$1
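The gist is cut off after reading the summary file argument. A rough sketch of the kind of loop the header describes might look like the following; the batch size, the column holding the fast5 filename, and the hand-off to guppy are all assumptions, not recovered from the original.

batch_size=100            # assumed number of fast5 files per batch list
batch_dir="batches"       # assumed output directory for the batch lists
mkdir -p "$batch_dir"

# Pull the fast5 filename column (assumed to be column 1) from the summary,
# skip the header, de-duplicate, and split into fixed-size batch lists.
awk 'NR > 1 {print $1}' "$summary_file" | sort -u \
  | split -l "$batch_size" - "$batch_dir/batch_"

# Each file under $batch_dir is then a list of fast5 files that could be handed
# to a basecalling job (the actual guppy invocation is not shown here).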