
@sirselim
Last active August 16, 2024 07:37
a collection of my notes while working on nanopore basecalling on the Jetson Xavier

Jetson Xavier basecalling notes

initial basecalling runs

'fast' flip-flop calling on the Jetson Xavier

guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg -i flongle_fast5_pass/ -s flongle_test2 -x 'auto' --recursive 
high-accuracy calling with base modifications on the Jetson Xavier
guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg --fast5_out -i flongle_fast5_pass/ -s flongle_hac_fastq -x 'auto' --recursive 
$ guppy_basecaller --compress_fastq -c dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg -i flongle_fast5_pass/ -s flongle_hac_fastq -x 'auto' --recursive
ONT Guppy basecalling software version 3.4.1+213a60d0
config file:        /opt/ont/guppy/data/dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg
model file:         /opt/ont/guppy/data/template_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.jsn
input path:         flongle_fast5_pass/
save path:          flongle_hac_fastq
chunk size:         1000
chunks per runner:  512
records per file:   4000
fastq compression:  ON
num basecallers:    1
gpu device:         auto
kernel path:
runners per device: 4

Found 105 fast5 files to process.
Init time: 2790 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 2493578 ms, Samples called: 3970728746, samples/s: 1.59238e+06
Finishing up any open output files.
Basecalling completed successfully.

So from the above we can see that in high-accuracy mode it takes the Xavier ~41 minutes to complete the basecalling using the default configuration file. For reference, the fast calling mode took ~8 minutes.

optimising settings for Jetson Xavier

  • When performing GPU basecalling there is always one CPU support thread per GPU caller, so the number of callers (--num_callers) dictates the maximum number of CPU threads used.
  • Max chunks per runner (--chunks_per_runner): The maximum number of chunks which can be submitted to a single neural network runner before it starts computation. Increasing this figure will increase GPU basecalling performance when it is enabled.
  • Number of GPU runners per device (--gpu_runners_per_device): The number of neural network runners to create per CUDA device. Increasing this number may improve performance on GPUs with a large number of compute cores, but will increase GPU memory use. This option only affects GPU calling.

There is a rough equation to estimate the amount of GPU memory required:

runners * chunks_per_runner * chunk_size < 100000 * [max GPU memory in GB]

For example, a GPU with 8 GB of memory would require:

runners * chunks_per_runner * chunk_size < 800000
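
As a quick sanity check before launching a run, the rule of thumb above can be evaluated in the shell. The sketch below just plugs in the 8 GB example with placeholder settings, so swap in your own GPU memory and Guppy parameters (and note the Xavier shares its memory between CPU and GPU, so treat the limit as approximate):

# rough GPU memory budget check for Guppy settings (values are illustrative placeholders)
GPU_GB=8                # memory available to the GPU, in GB
RUNNERS=4               # --gpu_runners_per_device
CHUNKS_PER_RUNNER=160   # --chunks_per_runner
CHUNK_SIZE=1000         # --chunk_size

BUDGET=$(( 100000 * GPU_GB ))
USAGE=$(( RUNNERS * CHUNKS_PER_RUNNER * CHUNK_SIZE ))

if [ "$USAGE" -lt "$BUDGET" ]; then
  echo "OK: $USAGE < $BUDGET"
else
  echo "over budget: $USAGE >= $BUDGET - reduce runners, chunks per runner or chunk size"
fi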

some suggested settings from ONT

NVIDIA Jetson TX2
--num_callers 1
--gpu_runners_per_device 2
--chunks_per_runner 48

from hac config file (dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg)

chunk_size                          = 1000
gpu_runners_per_device              = 4
chunks_per_runner                   = 512
chunks_per_caller                   = 10000
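
Those defaults can be checked directly in the installed config file (the path Guppy reports when it starts up); a quick grep pulls out the relevant chunk/runner settings:

grep -E 'chunk_size|runners_per_device|chunks_per_runner|chunks_per_caller' \
  /opt/ont/guppy/data/dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg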

modified testing

'fast' flip-flop calling on the Jetson Xavier

guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg -i flongle_fast5_pass/ \
  -s flongle_test2 -x 'auto' --recursive --num_callers 4 --gpu_runners_per_device 8 --chunks_per_runner 256
$ guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
-i flongle_fast5_pass/ -s flongle_test2 -x 'auto' --recursive --num_callers 4 \
--gpu_runners_per_device 8 --chunks_per_runner 256
ONT Guppy basecalling software version 3.4.1+213a60d0
config file:        /opt/ont/guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /opt/ont/guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass/
save path:          flongle_test2
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    4
gpu device:         auto
kernel path:
runners per device: 8

Found 105 fast5 files to process.
Init time: 880 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 428745 ms, Samples called: 3970269916, samples/s: 9.26021e+06
Finishing up any open output files.
Basecalling completed successfully.

I was able to shave a minute off the fast model on the Xavier (above) getting it down to ~7 minutes.

(screenshot: jtop output on the Jetson Xavier during GPU basecalling)

Update: (13th Dec 2019)

Just modifying the number of chunks per runner has allowed me to get the time down to under 6.5 mins (see table below).

chunks_per_runner    time
160 (default)        ~8 mins
256                  7 mins 6 secs
512                  6 mins 28 secs
1024                 6 mins 23 secs

It looks like we might have reached an optimal point here. Next I'll test some of the other parameters and see if we can speed this up further.
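
If you want to reproduce this kind of sweep, a minimal sketch is below. The output directory names are placeholders, and it assumes GNU time is installed at /usr/bin/time for the wall-clock timing:

# sweep --chunks_per_runner and report wall-clock time for each run
for CPR in 160 256 512 1024; do
  /usr/bin/time -f "chunks_per_runner=${CPR}: %e seconds" \
    guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
      -i flongle_fast5_pass/ -s flongle_test_cpr_${CPR} -x 'auto' --recursive \
      --num_callers 4 --gpu_runners_per_device 8 --chunks_per_runner ${CPR}
done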

high-accuracy calling with base modifications on the Jetson Xavier

guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg \
  --num_callers 4 --gpu_runners_per_device 8 --fast5_out -i flongle_fast5_pass/ \
  -s flongle_hac_basemod_fastq -x 'auto' --recursive
increased number of callers
$ guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
  -i flongle_fast5_pass/ -s flongle_test2 -x 'auto' --recursive --num_callers 8 \
  --gpu_runners_per_device 8 --chunks_per_runner 1024
ONT Guppy basecalling software version 3.4.1+213a60d0
config file:        /opt/ont/guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /opt/ont/guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass/
save path:          flongle_test2
chunk size:         1000
chunks per runner:  1024
records per file:   4000
fastq compression:  ON
num basecallers:    8 
gpu device:         auto
kernel path:
runners per device: 8 

Found 105 fast5 files to process.
Init time: 897 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 383865 ms, Samples called: 3970269916, samples/s: 1.03429e+07
Finishing up any open output files.
Basecalling completed successfully.
increased chunk size
$ guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
  -i flongle_fast5_pass/ -s flongle_test2 -x 'auto' --recursive --num_callers 4 \
  --gpu_runners_per_device 8 --chunks_per_runner 1024 --chunk_size 2000
ONT Guppy basecalling software version 3.4.1+213a60d0
config file:        /opt/ont/guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /opt/ont/guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass/
save path:          flongle_test2
chunk size:         2000
chunks per runner:  1024
records per file:   4000
fastq compression:  ON
num basecallers:    4
gpu device:         auto
kernel path:
runners per device: 8

Found 105 fast5 files to process.
Init time: 1180 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 503532 ms, Samples called: 3970269916, samples/s: 7.88484e+06
Finishing up any open output files.
Basecalling completed successfully.
increased runners per device and number of callers
$ guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
  -i flongle_fast5_pass/ -s flongle_test2 -x 'auto' --recursive --num_callers 8 \
  --gpu_runners_per_device 16 --chunks_per_runner 1024 --chunk_size 1000
ONT Guppy basecalling software version 3.4.1+213a60d0
config file:        /opt/ont/guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /opt/ont/guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass/
save path:          flongle_test2
chunk size:         1000
chunks per runner:  1024
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         auto
kernel path:
runners per device: 16

Found 105 fast5 files to process.
Init time: 1113 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 383466 ms, Samples called: 3970269916, samples/s: 1.03536e+07
Finishing up any open output files.
Basecalling completed successfully.

current 'optimal' parameters

The parameters below seem to provide the 'optimal' speed increase, with a resulting run time of 6 mins and 23 secs.

$ guppy_basecaller --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
  -i flongle_fast5_pass/ -s flongle_test2 -x 'auto' --recursive --num_callers 4 \
  --gpu_runners_per_device 8 --chunks_per_runner 1024 --chunk_size 1000
ONT Guppy basecalling software version 3.4.1+213a60d0
config file:        /opt/ont/guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /opt/ont/guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass/
save path:          flongle_test2
chunk size:         1000
chunks per runner:  1024
records per file:   4000
fastq compression:  ON
num basecallers:    4
gpu device:         auto
kernel path:
runners per device: 8

Found 105 fast5 files to process.
Init time: 926 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 382714 ms, Samples called: 3970269916, samples/s: 1.0374e+07
Finishing up any open output files.
Basecalling completed successfully.

exploring portable batteries and power modes

We are currently using a 27000mAh AC Portable Charger from Ravpower.

Below: Ravpower Xtreme Series 27000mAh AC Portable Charger (image: the battery package). This battery bank/charger has a built-in 220V AC outlet, 1 USB-C port and 2 USB 3.1 outputs.

Below: powerbank charging from the wall (image: the battery on charge). Ravpower claims this powerbank will charge a smartphone 11 times, a tablet 4 times or a laptop 3 times.

Below: running our first portable Xavier GPU basecalling of nanopore data! (image: the Xavier running on battery power)

changing power modes

Running the Xavier in different power states obviously influences the amount of run time on the battery.

Power mode           Time
10W                  33.4 mins
15W                  14.3 mins
30W 2 cores          10.8 mins
30W 4 cores          10.8 mins
30W MAX (8 cores)    7.5 mins

The above benchmarks were performed on data generated from a flongle run (~0.5 Gb of sequence, or ~5.5 GB of raw fast5 data).
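
For reference, power modes on the Jetson boards are switched with nvpmodel (and clocks can optionally be pinned with jetson_clocks). The mode IDs below are assumptions - they differ between boards and JetPack releases - so confirm them in /etc/nvpmodel.conf first:

# query the currently active power mode
sudo nvpmodel -q

# switch power mode by ID (IDs are board/JetPack specific - check /etc/nvpmodel.conf)
sudo nvpmodel -m 0   # e.g. MAXN on the Xavier AGX
sudo nvpmodel -m 1   # e.g. a 10W profile

# optionally lock clocks at the maximum for the selected mode
sudo jetson_clocks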


potential V100 examples

V100 config example for high accuracy model (note: the --device parameter below should now scale nicely across both cards, though I haven't checked)
guppy_basecaller \
--disable_pings \
--compress_fastq \
-c dna_r9.4.1_450bps_modbases_dam-dcm-cpg_hac.cfg \
--ipc_threads 16 \
--num_callers 8 \
--gpu_runners_per_device 4 \
--chunks_per_runner 512 \
--device "cuda:0 cuda:1" \ # this parameter should now scale nicely across both cards, I haven't checked though
--recursive \
--fast5_out \
-i fast5_input \
-s fastq_output
V100 config example for fast calling model
guppy_basecaller \
--disable_pings \
--compress_fastq \
-c dna_r9.4.1_450bps_fast.cfg \
--ipc_threads 16 \
--num_callers 8 \
--gpu_runners_per_device 64 \
--chunks_per_runner 256 \
--device "cuda:0 cuda:1" \
--recursive \
-i fast5_input \
-s fastq_output

Guppy basecalling benchmarking on a Titan RTX

There has been some discussion about the recent release of Guppy (3.4.1 and 3.4.2) in terms of speed. I was interested in running some benchmarks across different versions. I had a hunch it may have been something to do with the newly introduced compression of the fast5 files...

Test parameters

The only things I am changing are the version of Guppy being used, and in the case of 3.4.3 I am trying with and without vbz compression of the fast5 files. Everything else is as below:
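
For reference, the vbz-compressed copy of the input (the flongle_compressed directory used in the runs below) can be generated with the compress_fast5 tool from ONT's ont-fast5-api package. Flags may differ between versions, so treat this as a sketch and check compress_fast5 --help:

# install ONT's fast5 tooling (assumes pip is available)
pip install ont-fast5-api

# write vbz-compressed copies of the fast5 files to a new directory
compress_fast5 --input_path flongle_fast5_pass --save_path flongle_compressed \
  --compression vbz --threads 8 --recursive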

System:

  • Debian Sid (unstable)
  • 2x 12-Core Intel Xeon Gold 5118 (48 threads)
  • 256 GB RAM
  • Titan RTX
  • Nvidia drivers: 418.56
  • CUDA Version: 10.1

Guppy GPU basecalling parameters:

  • --disable_pings
  • --compress_fastq
  • -c dna_r9.4.1_450bps_fast.cfg
  • --num_callers 8
  • --gpu_runners_per_device 64
  • --chunks_per_runner 256
  • --device "cuda:0"
  • --recursive

For each Guppy version I ran the basecaller three times in an attempt to ensure that results were consistent*.

Note: I chose the fast basecalling model as I wanted to do a quick set of benchmarks. If I feel up to it I may do the same thing for the high accuracy caller...

* Spoiler, I didn't originally do this and it proved misleading...
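
For anyone wanting to do the same, a minimal sketch of scripting such repeat runs is below (the guppy path and output prefix are placeholders, and GNU time is assumed for the timing):

# run the same benchmark three times to check for consistency
for RUN in 1 2 3; do
  /usr/bin/time -f "run ${RUN}: %e seconds" \
    ~/Downloads/software/guppy/3.4.3/ont-guppy/bin/guppy_basecaller \
      --disable_pings --compress_fastq -c dna_r9.4.1_450bps_fast.cfg \
      --num_callers 8 --gpu_runners_per_device 64 --chunks_per_runner 256 \
      --device "cuda:0" --recursive \
      -i flongle_fast5_pass -s testrun_fast_3.4.3_run${RUN}
done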

Results

guppy version                      time (seconds)   samples/s
3.1.5#                             93.278           4.25638e+07
3.2.4#                             94.141           4.21737e+07
3.3.0#                             94.953           4.1813e+07
3.3.3#                             95.802           4.14425e+07
3.4.1 (no vbz compressed fast5)    90.895           4.36797e+07
3.4.1 (vbz compressed fast5)       79.913           4.96824e+07
3.4.3 (no vbz compressed fast5)    90.674           4.37862e+07
3.4.3 (vbz compressed fast5)       82.877           4.79056e+07

# these versions of Guppy did not support vbz compression of fast5 files (pre 3.4.X from memory).

Summary to date

I initially thought that there was something off with the compression implementation in 3.4.3, as my first run on uncompressed data was ~3x slower than the run on the compressed data. When I grabbed 3.4.1 to perform the same check I noticed that it was fairly consistent between compressed and not. So I went back, was more rigorous, and performed 3 iterations of each run for each version, ditto for versions 3.4.X compressed and not. This proved that the initial run was an anomaly and should be disregarded.

What was quite interesting is that running on vbz-compressed fast5 data appears to be in the range of 8-10 seconds faster than running on uncompressed data. So there is a slight added speed benefit on top of the nice reduction in file size - which is also a little kinder on the SSD/HDD.

So at this stage I can't confirm any detrimental speed issues when using Guppy version 3.4.X, but this needs to be caveated with all the usual disclaimers:

  • all systems are different (I'm not on Ubuntu for instance).
  • drivers are different (I need to update).
  • GPUs are very different, i.e. many people (including me) are using 'non-supported' GPUs - in my case a Titan RTX which is no slouch.
    • For what it's worth, I can add a comment here saying that I haven't had any speed issues with basecalling on our Nvidia Jetson Xaviers using Guppy 3.4.1.
  • our 'little' Linux server isn't exactly a slouch either - so laptop/desktop builds could be very different.
  • I ran the fast basecaller (I'm currently flat out and can't wait for the high accuracy caller) - I may take a subset of data and revisit with hac at some stage.

You can view the 'raw' results/output for each run below:

Guppy 3.1.5

~/Downloads/software/guppy/3.1.5/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_fast5_pass \
    -s testrun_fast_3.1.5

ONT Guppy basecalling software version 3.1.5+781ed57
config file:        /home/miles/Downloads/software/guppy/3.1.5/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.1.5/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.1.5
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 1000 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 93278 ms, Samples called: 3970269916, samples/s: 4.25638e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.2.4

~/Downloads/software/guppy/3.2.4/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_fast5_pass \
    -s testrun_fast_3.2.4

ONT Guppy basecalling software version 3.2.4+d9ed22f
config file:        /home/miles/Downloads/software/guppy/3.2.4/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.2.4/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.2.4
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 836 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 94141 ms, Samples called: 3970269916, samples/s: 4.21737e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.3.0

~/Downloads/software/guppy/3.3.0/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_fast5_pass \
    -s testrun_fast_3.3.0

ONT Guppy basecalling software version 3.3.0+ef22818
config file:        /home/miles/Downloads/software/guppy/3.3.0/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.3.0/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.3.0
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 722 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 94953 ms, Samples called: 3970269916, samples/s: 4.1813e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.3.3

~/Downloads/software/guppy/3.3.3/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_fast5_pass \
    -s testrun_fast_3.3.3

ONT Guppy basecalling software version 3.3.3+fa743a6
config file:        /home/miles/Downloads/software/guppy/3.3.3/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.3.3/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.3.3
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 726 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 95802 ms, Samples called: 3970269916, samples/s: 4.14425e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.4.1 (not compressed)

~/Downloads/software/guppy/3.4.1/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_fast5_pass \
    -s testrun_fast_3.4.1

ONT Guppy basecalling software version 3.4.1+ad4f8b9
config file:        /home/miles/Downloads/software/guppy/3.4.1/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.4.1/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.4.1
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 728 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 90895 ms, Samples called: 3970269916, samples/s: 4.36797e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.4.1 (compressed)

~/Downloads/software/guppy/3.4.1/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_compressed \
    -s testrun_fast_3.4.1

ONT Guppy basecalling software version 3.4.1+ad4f8b9
config file:        /home/miles/Downloads/software/guppy/3.4.1/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.4.1/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_compressed
save path:          testrun_fast_3.4.1
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 725 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 79913 ms, Samples called: 3970269916, samples/s: 4.96824e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.4.3 (not compressed)

~/Downloads/software/guppy/3.4.3/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_fast5_pass \
    -s testrun_fast_3.4.3_uncompressed
first run (it looks like this was an anomaly)
ONT Guppy basecalling software version 3.4.3+f4fc735
config file:        /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.4.3_uncompressed
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:
runners per device: 64

Found 105 fast5 files to process.
Init time: 738 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 270953 ms, Samples called: 3970269916, samples/s: 1.4653e+07
Finishing up any open output files.
Basecalling completed successfully.
second run
ONT Guppy basecalling software version 3.4.3+f4fc735
config file:        /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.4.3_uncompressed
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 705 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 90674 ms, Samples called: 3970269916, samples/s: 4.37862e+07
Finishing up any open output files.
Basecalling completed successfully.

third run

ONT Guppy basecalling software version 3.4.3+f4fc735
config file:        /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_fast5_pass
save path:          testrun_fast_3.4.3_uncompressed3
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:
runners per device: 64

Found 105 fast5 files to process.
Init time: 719 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 94516 ms, Samples called: 3970269916, samples/s: 4.20063e+07
Finishing up any open output files.
Basecalling completed successfully.

Guppy 3.4.3 (compressed)

~/Downloads/software/guppy/3.4.3/ont-guppy/bin/guppy_basecaller \
    --disable_pings \
    --compress_fastq \
    -c dna_r9.4.1_450bps_fast.cfg \
    --num_callers 8 \
    --gpu_runners_per_device 64 \
    --chunks_per_runner 256 \
    --device "cuda:0" \
    --recursive \
    -i flongle_compressed \
    -s testrun_fast_3.4.3

ONT Guppy basecalling software version 3.4.3+f4fc735
config file:        /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg
model file:         /home/miles/Downloads/software/guppy/3.4.3/ont-guppy/data/template_r9.4.1_450bps_fast.jsn
input path:         flongle_compressed
save path:          testrun_fast_3.4.3
chunk size:         1000
chunks per runner:  256
records per file:   4000
fastq compression:  ON
num basecallers:    8
gpu device:         cuda:0
kernel path:        
runners per device: 64

Found 105 fast5 files to process.
Init time: 721 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Caller time: 82877 ms, Samples called: 3970269916, samples/s: 4.79056e+07
Finishing up any open output files.
Basecalling completed successfully.
@hanfan1803

Hi @sirselim

My OS is Ubuntu 18.04, and the ONT repo was added with echo "deb http://mirror.oxfordnanoportal.com/apt bionic-stable non-free" | sudo tee /etc/apt/sources.list.d/nanoporetech.sources.list. The latest version of minion-nc (21.06) was also installed through apt-get. I simply remove/purge it with apt-get, then install it again.

Han

@hanfan1803

Yes, I know it's a generic error, but every time I try removing and then installing a new version of MinKNOW I have this exact issue. The only solution so far has been to re-install the OS. My PC is also set up with tons of other bioinformatics software. It's really annoying if I have to re-install the OS.

@hanfan1803

Hi,

I already fixed this issue with apt-get. Thank you for the instructions on setting up live basecalling with Guppy.

Han

@neuropathbasel

Hi @hanfan1803,

we have had this issue for a long time, i.e. things change in MinKNOW / guppy so frequently that it is impossible for me and my (small) scientific team to keep up. We have also developed software around MinKNOW and hence use "frozen" versions, i.e. we keep all required *.deb files in a safe place. For our applications, this is good enough. Our strategy also allows for exact setup replication on new computer systems. The one thing you must not perform is updates. One way (at least with the versions we use) to disable updates is to remove the ONT repo from /etc/apt/sources... and, in addition, to (functionally) remove the Ubuntu update manager by removing its executable permissions.

This may not be a solution for staying up to date; however, if you would like to produce reproducible data and be sure you obtain it through the exact same setup over a period of several years, this will be the only way for the time being. Version freezes have been announced by ONT but were never made available to me.

@Tetrakis

Tetrakis commented Apr 28, 2022

Finally getting a chance to get back to this. First off I've got to say thanks so much Miles (and other contributors) for your dedication to this. Like I said, I have not had time to dedicate to this, and what has been documented here and in the main Nanopore Sequencing with the Xavier page has saved my butt several times. So thanks!

Recently we started running into space problems, even though there was plenty of room on the SSD I'd installed (but the main boot SD only had 8GB of space left). I thought it was a misdirected swap file because the sequencer would error out after 8GB worth of sequencing. Now looking over this gist again, I wonder if it had something to do with some of the settings in the guppy.service file. My first try was redirecting the swap to the SSD, but no dice. So I decided to try something that almost bricked the Jetson before and make the Xavier boot off the SSD. As of JetPack 4.6 this should be possible. I found this wonderful site (https://github.com/jetsonhacks) on GitHub that has step-by-step instructions on how to do it. And without too much fiddling it actually worked! So now I wanted to install the new Mk1C update because I wanted to check out short fragment mode and play with adaptive sequencing. I dug around in the ONT forums and it looked like guppy 6.0.6 should work with software release 22.03.2 for the Mk1C (in retrospect I probably should have picked 6.0.7, but I missed it). I changed the part of the Mk1C setup instructions to reflect the version of guppy downloaded. Other than that I followed the instructions pretty much to the letter, and right now I'm sequencing away with live basecalling!

Two things of note: 1) I think it's basecalling slower than it was before (averaging 50% on super-high accuracy with 25% of the pores sequencing). This may be the whole 6.0.6 vs. 6.0.7 problem. I want to explore more, but my student may kill me if I don't get this setup back. 2) Several of the features mentioned on ONT's site aren't there (the System overview, the basecall model in the fastq header and, sadly, the short fragment mode). I need to get the versions of all the components. Is there a simple command/place to look for getting version numbers for everything MinKNOW is using?

Anyway I count the booting off the SSD and live basecalling with 6.0.6 as some kind of victory. I'll write again when I find out more. Thanks all.

-John

Edit: I'm an idiot. I installed the Xenial builds. So this is 21.11.7 running with guppy 6.0.6, not 22.03.2 as I previously said. I will fix it and we'll see what happens.

@Tetrakis

Well one step forward and two steps back. First I removed guppy and got version 6.0.7. Then I removed the Xenial builds and set the repos to the bionic repos in /etc/apt/sources.list.d/nanopore.sources.list. That didn't work because of some certificate problems. I set them to trusted so the lines looked like this

deb [trusted=yes arch=arm64] https://cdn.oxfordnanoportal.com/apt bionic-stable-mk1c non-free
deb [trusted=yes arch=arm64] https://mirror.oxfordnanoportal.com/apt bionic-stable-mk1c non-free

after which things downloaded nicely. Then I went through the Mk1C setup guide and everything seemed fine. MinKNOW booted up fine, but now plugging in the Mk1B resulted in a high fan noise and it not being recognized. I went back to the part of the setup where a permissions problem with the minknow service was mentioned as causing the Mk1B not to be recognized. Sadly that didn't allow it to be recognized either. I restarted; it still didn't recognize the MinION. Interestingly, there was a program that crashed in the background called ont-get-product-info.

I remember having to deal with these connection problems way in the beginning of looking at using the Xavier as a MinIT like device, but for the life of me I can't remember how they were solved. Anyone got any ideas? Thanks.

-John

@sirselim
Author

Just wanted to touch base here after a long hiatus. I'm about to update the Jetson Nanopore GitHub repo. This update will provide instructions for getting the latest Mk1C software running on the Jetson boards. The big issue was the large update from Xenial to Bionic and a large amount of code refactoring with recent updates to MinKNOW. Plus I haven't had the time or hardware since changing jobs to look into these issues.

So apologies to anyone that has been struggling, but hopefully updating to the latest versions of MinKNOW and Guppy will fix things. A nice benefit is that, now that things are in line again (i.e. the Mk1C running Bionic and the Jetsons running Bionic), Guppy can be pulled straight from the repos and doesn't require manual downloading and configuration. 🥳

I'm hoping to have the update live in the next few days (here).

@pablosoup1

It's great to hear you've almost completed the transition. Following your past instructions and hard work, I've had great success running the MinION with fast basecalling for all the 16S analyses I do. Unfortunately, last week something strange happened (MinKNOW GUI no longer shows the different kits), so I've been forced to use my M1 MacBook... and the time differences are tremendous.

@thalljiscience

thalljiscience commented Jan 26, 2023 via email

@pablosoup1

Thanks so much for pointing me in the right direction! Here's ONT's official response/fix:

https://community.nanoporetech.com/posts/certificate-expiry-for-min

And the first image they show is exactly what I saw in the GUI, with no real error codes showing up.

@thalljiscience

thalljiscience commented Jan 26, 2023 via email

@pablosoup1

BTW, I just downloaded the certs using the link provided for Mac/PC, then moved it into the appropriate spot on my Jetson. Worked like a charm (because I'd forced the system to quit updating the ONT packages, this was a quick workaround).

@pablosoup1

ONT just announced that they are removing the old authentication system.

https://community.nanoporetech.com/posts/retiring-old-minknow-authe

I take it this will probably crater the Jetson nano?

@sirselim
Author

sirselim commented Aug 1, 2023

@pablosoup1 - not sure what you mean by "crater the Jetson nano"?

The authentication change shouldn't cause any issues as long as you are using the latest MinION Mk1C software branch, as this is up to date and includes the change. Now, if you are still using the old MinIT branch then yes this will stop working. But there is no reason to be on that software when the Mk1C software runs well and gives you all the latest features.

On that note, the next version of MinKNOW sees Guppy being replaced by Dorado (even on the Mk1C). This is good for many reasons, but mainly: 1) dorado has matured and is much faster than Guppy now, 2) dorado has native ARM builds, including for the newer Jetson Orin architecture. So for all those out there with Orin boards, you should be able to get a fully working sequencing stack in the next month or so, as soon as MinKNOW has Dorado integrated.

@bokkoman

bokkoman commented Aug 15, 2024

A colleague here is looking into doing basecalling. We are currently using a workstation with an RTX A4500. Was wondering if a Jetson Orin AGX would be faster. Can you give me any advice on this?

@sirselim
Author

@bokkoman The workstation with the RTX A4500 is much faster than a Jetson Orin.

@bokkoman

bokkoman commented Aug 15, 2024

@bokkoman The workstation with the RTX A4500 is much faster than a Jetson Orin.

Thanks for the quick response! I guess we will keep using that. Might be adding a second A4500. Cause it still takes quite some time to finish.

@sirselim
Author

A second card will make a difference. However, those A4500 cards are pretty limited in their performance. A couple of RTX4090s would run circles around them, but I understand that sometimes it's hard for Institutes to purchase gaming cards.

@bokkoman

bokkoman commented Aug 15, 2024

The problem is not the purchase, the problem is it doesn't fit in the workstation. Those cards are massive.
It would mean we have to create a DIY workstation, not sure my manager really likes that.
And how many circles are we talking about??

@sirselim
Author

One RTX4090 would likely offer nearly the same performance as two RTX A4500s for many use cases. But I understand the issue: they are big, and they are power hungry. The reality, though, is that a single RTX4090 is very close in performance to the A100, for a fraction of the price.
