Updated 19/10/2018
Here's my experience of installing the NVIDIA CUDA kit 9.0 on a fresh install of Ubuntu Desktop 16.04.4 LTS.
So you've cloned somebody's repo from GitHub, but now you want to fork it and contribute back. Never fear!

Technically, when you fork, "origin" should be your fork and "upstream" should be the project you forked; however, if you're willing to break this convention then it's easy.

*Off the top of my head:*

1. Fork their repo on GitHub
2. In your local clone, add a new remote pointing at your fork; then fetch it, and push your changes up to it:

   git remote add my-fork git@github...my-fork.git
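The steps above can be sketched end to end like so. Everything here is a placeholder: a local bare repo stands in for the GitHub fork, so swap its path for your real git@github.com:... fork URL.

```shell
# Sketch of step 2, with a local bare repo standing in for the fork.
work=$(mktemp -d)
git init --bare -q "$work/my-fork.git"        # stand-in for your GitHub fork
git init -q "$work/clone" && cd "$work/clone"
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "my change"
git remote add my-fork "$work/my-fork.git"    # normally a github.com URL
git fetch -q my-fork
git push -q my-fork "$(git symbolic-ref --short HEAD)"
git remote -v                                 # "origin" (if set) stays the original project
```

After the push, your branch lives on the fork and you can open a pull request from there.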
First, list all folders modified in the specified time interval:

find the/root/folder/* -type d -newermt "2018-10-15 00:00:00" ! -newermt "2018-11-01 00:00:00" -ls

Second, delete all those folders! Note that find's -delete only removes empty directories (it uses rmdir under the hood), so for folders with contents use -exec rm -rf instead:

find the/root/folder/* -type d -newermt "2018-10-15 00:00:00" ! -newermt "2018-11-01 00:00:00" -exec rm -rf {} +
import imgaug.augmenters as iaa

class ImageBaseAug(object):
    def __init__(self):
        sometimes = lambda aug: iaa.Sometimes(0.5, aug)
        self.seq = iaa.Sequential(
            [
                # Blur each image with varying strength using
                # gaussian blur (sigma between 0 and 3.0),
                # average/uniform blur (kernel size between 2x2 and 7x7), or
                # median blur (kernel size between 3x3 and 11x11).
                iaa.OneOf([
                    iaa.GaussianBlur(sigma=(0, 3.0)),
                    iaa.AverageBlur(k=(2, 7)),
                    iaa.MedianBlur(k=(3, 11)),
                ]),
            ]
        )
rsync -av -n --progress --exclude folder_to_exclude/ source_folder/* dest_folder/

Use -n for a dry run to see what will be copied, then remove -n to do the real copy.
az vm create --resource-group bushi-RG1 \
  --name glm-bushi-2 --nics bushi-nic-2 \
  --size Standard_DS1_v2 --os-type Linux \
  --attach-os-disk glm-bushi-2 --attach-data-disks glm-bushi-disk-2 \
  --plan-name linuxdsvmubuntu --plan-product linux-data-science-vm-ubuntu \
  --plan-publisher microsoft-ads
ls | parallel -n2000 'mkdir {#}; mv {} {#}'

-n2000 passes 2000 filenames at a time and {#} is the sequence number of the job, so each batch of 2000 files gets moved into its own numbered folder.
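If GNU parallel isn't available, the same chunking can be sketched in plain bash. The demo below runs in a temp directory with a few sample files and a chunk size of 2; for the real job, run it in your target directory with chunk=2000:

```shell
# Plain-bash sketch of the parallel one-liner: move files into
# numbered folders, $chunk files per folder.
demo=$(mktemp -d) && cd "$demo"
touch a b c d e        # sample files for the demo
chunk=2                # use 2000 for the real job
n=0; dir=0
for f in *; do
  [ -f "$f" ] || continue                 # skip anything that isn't a file
  if [ $((n % chunk)) -eq 0 ]; then
    dir=$((dir + 1)); mkdir -p "$dir"     # start a new numbered folder
  fi
  mv "$f" "$dir/"
  n=$((n + 1))
done
```

With five files and chunk=2, this produces folders 1/{a,b}, 2/{c,d}, and 3/{e}.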
For FP: find a sweet spot for the confidence threshold that corresponds to X mean FP/study (e.g. X = 0.3), in order to surface predicted bboxes that may be missing annotations in the GT.
Note: raising the confidence threshold lowers mean FP/study and total FP, meaning less review time and effort, but it can also discard more good predictions.
For FN: find a sweet spot for the confidence threshold that corresponds to X mean FN/study (X = ?), in order to show the GT boxes that the model fails to detect or detects badly.
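Picking that sweet spot can be sketched as a simple sweep over candidate thresholds. The data format here is hypothetical: each prediction is a tuple of (study_id, confidence, matches_gt), where matches_gt is False for a false positive.

```python
# Sketch: pick the confidence threshold whose mean FP/study is
# closest to a target (e.g. 0.3). Input format is a hypothetical
# list of (study_id, confidence, matches_gt) tuples.
def pick_threshold(preds, n_studies, target_fp_per_study=0.3):
    thresholds = sorted({conf for _, conf, _ in preds})
    best_t, best_gap = None, float("inf")
    for t in thresholds:
        # Count predictions kept at this threshold that don't match GT.
        fp = sum(1 for _, conf, ok in preds if conf >= t and not ok)
        gap = abs(fp / n_studies - target_fp_per_study)
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t

preds = [
    ("s1", 0.9, True), ("s1", 0.5, False),
    ("s2", 0.8, True), ("s2", 0.2, False),
]
print(pick_threshold(preds, n_studies=2, target_fp_per_study=0.5))  # 0.5
```

The same sweep works for the FN side by counting GT boxes with no surviving prediction instead of unmatched predictions.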