Andrew Shaw (bearpelican)
Forward-Looking Statements
 
In addition to current and historical information, this Annual Report on Form 10-K contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. These statements relate to our future operations, prospects, potential products, services, developments, and business strategies. These statements can, in some cases, be identified by the use of terms such as “may,” “will,” “should,” “could,” “would,” “intend,” “expect,” “plan,” “anticipate,” “believe,” “estimate,” “predict,” “project,” “potential,” or “continue,” the negative of such terms, or other comparable terminology. This Annual Report on Form 10-K includes, among others, forward-looking statements regarding:
 
 
- expectations regarding the pending transaction with Verizon Communications Inc.;
 
 
Command line run:
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=4 --node_rank=0 --master_addr=192.168.47.71 --master_port=6006 train_imagenet_nv.py ~/data/imagenet --save-dir ~/data/training/nv/2018-07-01T18:29:18-cluster_4_region_c_spot-0-lr1d6-e55 --loss-scale 512 --fp16 -b 192 --sz 224 -j 8 --lr 1.6 --epochs 55 --small --dist-url env:// --dist-backend nccl --distributed
~~epoch hours top1Accuracy
Distributed: init_process_group success
Loaded model
Defined loss and optimizer
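For reference, an illustrative sketch (an assumption about the launcher, not part of the log): `torch.distributed.launch` starts `--nproc_per_node` worker processes per machine, and each worker's global rank is derived from `--node_rank` and its local rank. The helper name `global_ranks` is hypothetical.

```python
def global_ranks(node_rank, nproc_per_node):
    """Global ranks assigned to the workers launched on one node.

    world_size = nnodes * nproc_per_node; node k owns ranks
    [k * nproc_per_node, (k + 1) * nproc_per_node).
    """
    return [node_rank * nproc_per_node + local_rank
            for local_rank in range(nproc_per_node)]

# The 4-node, 8-GPU-per-node run above (--nnodes=4 --nproc_per_node=8):
# node 0 owns ranks 0..7 out of a world size of 32.
print(global_ranks(0, 8))
```

Each worker then calls `init_process_group` with the `env://` method (the `--dist-url env://` flag above), reading `MASTER_ADDR`, `MASTER_PORT`, `RANK`, and `WORLD_SIZE` from the environment the launcher sets, which is what produces the repeated "Distributed: init_process_group success" lines.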
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=8 --node_rank=1 --master_addr=192.168.18.149 --master_port=6006 train_imagenet_nv.py ~/data/imagenet --save-dir ~/data/training/nv/2018-07-03T01:13:29-cluster_8_region_b_spot-1-lr12-e68-bs256-warmup-4 --loss-scale 512 --fp16 -b 256 --sz 224 -j 8 --lr 3.2 --warmup 4 --epochs 68 --small --dist-url env:// --dist-backend nccl --distributed
~~epoch hours top1Accuracy
Distributed: init_process_group success
Loaded model
Defined loss and optimizer
Created data loaders
Begin training
~~epoch hours top1Accuracy
Distributed: init_process_group success
Loaded model
Created data loaders
Defined loss and optimizer
Begin training
~~0 0.01757196611111111 6.052
* Prec@1 1.864 Prec@5 6.052
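The `Prec@1` / `Prec@5` lines report top-1 and top-5 accuracy: the percentage of validation samples whose true label appears among the model's k highest-scoring classes. A minimal pure-Python sketch (the function name `precision_at_k` is hypothetical, not taken from `train_imagenet_nv.py`):

```python
def precision_at_k(scores, targets, ks=(1, 5)):
    """Percent of samples whose true label is in the top-k scored classes.

    scores:  one list of per-class scores per sample
    targets: the true class index per sample
    """
    out = []
    for k in ks:
        hits = 0
        for row, label in zip(scores, targets):
            # indices of the k largest scores in this row
            topk = sorted(range(len(row)), key=row.__getitem__, reverse=True)[:k]
            hits += label in topk
        out.append(100.0 * hits / len(targets))
    return out

# Toy example: 2 samples, 3 classes.
print(precision_at_k([[0.1, 0.9, 0.0], [0.8, 0.15, 0.05]], [1, 2], ks=(1, 3)))
```

Early in training these numbers are near chance (1000 ImageNet classes gives ~0.1% top-1 at random), which is why epoch 0 shows Prec@1 of only 1.864.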
bearpelican / 64gpu_93_aspect_ratio.txt
Last active July 9, 2018 05:11
Training logs for 8 machines (p3.16xlarge). High learning rate warmup with batchnorm set to 0.
~~epoch hours top1Accuracy
Distributed: init_process_group success
Loaded model
Defined loss and optimizer
Created data loaders
Begin training
Changing LR from None to 1.5
~~0 0.015269303611111111 8.536
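The "Changing LR from None to 1.5" line suggests the script ramps the learning rate up at the start of training (cf. the `--warmup 4` flag in the 8-node run). A minimal sketch of how such a linear warmup might produce that log line; the helper names `warmup_lr` and `set_lr` are hypothetical, not the script's actual API:

```python
def warmup_lr(epoch, warmup_epochs, target_lr):
    """Linear warmup: ramp the LR from 0 up to target_lr over warmup_epochs."""
    if warmup_epochs <= 0 or epoch >= warmup_epochs:
        return target_lr
    return target_lr * epoch / warmup_epochs

def set_lr(param_groups, new_lr, old_lr=None):
    """Apply new_lr to every optimizer param group, logging the change."""
    print("Changing LR from {} to {}".format(old_lr, new_lr))
    for group in param_groups:
        group["lr"] = new_lr

# With --lr 3.2 --warmup 4, halfway through warmup the LR is 1.6:
groups = [{"lr": None}]
set_lr(groups, warmup_lr(2, 4, 3.2))  # prints: Changing LR from None to 1.6
```

In PyTorch the same assignment is done on `optimizer.param_groups`, which is why the logged "old" value can be `None` before the first schedule step.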