Fix image classification scripts and Improve Fp16 tutorial #11533
Conversation
docs/faq/float16.md
Outdated
Batch size | Data type | Top-1 validation accuracy | Time to train | Speedup
--- | --- | --- | --- | ---
1024 | float16 | 76.34% | 7.3 hrs | 1.62x
2048 | float16 | 76.29% | 6.5 hrs | 1.82x

![Training curves of Resnet50 v1 on Imagenet 2012](https://github.com/rahul003/web-data/blob/d415abf4a1c6df007483169c81807c250135f9a5/mxnet/tutorials/mixed-precision/resnet50v1b_imagenet_fp16_fp32_training.png?raw=true)
Can we not use personal repo for images?
Unfortunately I have a hard time getting anything merged into DMLC repos. There have been more than 25 commits merged there since I opened dmlc/web-data#79 four weeks back, but it is still unnoticed!
docs/faq/float16.md
Outdated
@@ -102,9 +102,17 @@ python fine-tune.py --network resnet --num-layers 50 --pretrained-model imagenet
```

## Example training results
Here is a plot to compare the training curves of a Resnet50 v1 network on the Imagenet 2012 dataset. These training jobs ran for 95 epochs with a batch size of 1024 using a learning rate of 0.4 decayed by a factor of 1 at epochs 30,60,90 and used Gluon. The only changes made for the float16 job when compared to the float32 job were that the network and data were cast to float16, and the multi-precision mode was used for optimizer. The final accuracies at 95th epoch were **76.598% for float16** and **76.486% for float32**. The difference is within what's normal random variation, and there is no reason to expect float16 to have better accuracy than float32 in general. This run was approximately **65% faster** to train with float16.
Let us consider training a Resnet50 v1 model on the Imagenet 2012 dataset. For this model, GPU memory usage is close to the capacity of a V100 GPU with a batch size of 128 when using float32. Using float16 allows a batch size of 256. Shared below are results using 8 V100 GPUs. Let us compare the three scenarios that arise here: float32 with a batch size of 1024, float16 with a batch size of 1024, and float16 with a batch size of 2048. These jobs trained for 90 epochs using a learning rate of 0.4 for the 1024 batch size and 0.8 for the 2048 batch size, decayed by a factor of 0.1 at the 30th, 60th and 80th epochs. The only changes made for the float16 jobs compared to the float32 job were that the network and data were cast to float16, and the multi-precision mode was used for the optimizer. The final accuracy at the 90th epoch and the time to train are tabulated below for these three scenarios. The top-1 validation errors at the end of each epoch are also plotted below.
It's better to be specific about the overall hardware setup (it's not done on a DGX):

"Shared below are results using 8 V100 GPUs"
->
"Shared below are results using 8 V100 GPUs on an AWS p3.16xlarge instance."
Ok yeah
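The recipe the tutorial text above describes — cast the network and the data to float16 and enable multi-precision in the optimizer — looks roughly like this in Gluon. This is a minimal sketch, not the tutorial's actual code; the model choice and hyperparameters are illustrative:

```python
import mxnet as mx
from mxnet import autograd, gluon, nd

ctx = mx.gpu(0)  # float16 training needs a GPU

# Cast the network parameters to float16
net = gluon.model_zoo.vision.resnet50_v1()
net.initialize(mx.init.Xavier(), ctx=ctx)
net.cast('float16')

# multi_precision keeps a float32 master copy of the weights for the update
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.4, 'momentum': 0.9,
                         'multi_precision': True})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

# Cast each data batch to float16 as well (a dummy batch is shown here)
data = nd.random.uniform(shape=(16, 3, 224, 224), ctx=ctx).astype('float16')
label = nd.zeros((16,), ctx=ctx).astype('float16')  # match the output dtype

with autograd.record():
    out = net(data)
    loss = loss_fn(out, label)
loss.backward()
trainer.step(data.shape[0])
```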
example/gluon/data.py
Outdated
transposed = nd.transpose(cropped, (2, 0, 1))
image = mx.nd.cast(image, dtype)
return image, label
transposed = mx.nd.transpose(cropped, (2, 0, 1))
is dtype casting no longer necessary?
Not in this script, as it calls `astype()` later. That is also more general, since it works for any dataset iterator.
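A tiny sketch of that pattern — casting once per batch at the top of the training loop instead of inside the dataset transform. The loop and shapes here are illustrative, not the PR's actual code:

```python
import mxnet as mx
from mxnet import nd

dtype = 'float16'

# Stand-in for any data iterator / DataLoader yielding (data, label) pairs
batches = [(nd.random.uniform(shape=(8, 3, 32, 32)), nd.zeros((8,)))]

for data, label in batches:
    # One cast per batch; works the same no matter which iterator produced it
    data = data.astype(dtype, copy=False)
    # forward/backward pass would follow here
```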
@hetong007 please confirm the resnet-aug change to turn on `random_mirror`.
@eric-haibin-lin The training curve image is now in the dmlc repository.
For cifar training, the standard augmentation for benchmarking is:

```python
import mxnet as mx

# Per-channel statistics of the CIFAR-10 training set
mean_rgb = [125.307, 122.961, 113.8575]
std_rgb = [51.5865, 50.847, 51.255]

train_data = mx.io.ImageRecordIter(
    path_imgrec=rec_train,          # path to the training .rec file
    path_imgidx=rec_train_idx,      # path to the training .idx file
    preprocess_threads=num_workers,
    shuffle=True,
    batch_size=batch_size,
    data_shape=(3, 32, 32),
    mean_r=mean_rgb[0],
    mean_g=mean_rgb[1],
    mean_b=mean_rgb[2],
    std_r=std_rgb[0],
    std_g=std_rgb[1],
    std_b=std_rgb[2],
    rand_mirror=True,               # random horizontal flip
    pad=4,                          # zero-pad to 40x40 ...
    fill_value=0,
    rand_crop=True,                 # ... then take a random crop
    max_crop_size=32,               # crop size pinned to 32x32, so only
    min_crop_size=32,               # the crop position is random
)
val_data = mx.io.ImageRecordIter(
    path_imgrec=rec_val,
    path_imgidx=rec_val_idx,
    preprocess_threads=num_workers,
    shuffle=False,
    batch_size=batch_size,
    data_shape=(3, 32, 32),
    mean_r=mean_rgb[0],
    mean_g=mean_rgb[1],
    mean_b=mean_rgb[2],
    std_r=std_rgb[0],
    std_g=std_rgb[1],
    std_b=std_rgb[2],
)
```

Can you change accordingly?
So @hetong007, should the `set_resnet_aug` function be `set_imagenet_aug` or `set_resnet_imagenet_aug`?
Are the above augmentations used for all Cifar models?
@rahul003 I think it is more appropriate to call it `set_imagenet_aug`. And yes, the above params are standard for cifar model training & performance comparison.
aug.set_defaults(random_mirror=1, pad=4, fill_value=0, random_crop=1)
aug.set_defaults(min_random_size=32, max_random_size=32)
a few blank lines (also in train_imagenet.py L26-27), otherwise lgtm.
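For context, a runnable sketch of where those two `set_defaults` lines slot in. The surrounding argparse flags and their initial defaults are stand-ins (the real scripts define them in the example's shared helpers), and `set_cifar_aug` is a plausible name, not quoted from the PR:

```python
import argparse

# Minimal stand-in for the script's augmentation argument group
aug = argparse.ArgumentParser()
aug.add_argument('--random-mirror', type=int, default=0)
aug.add_argument('--pad', type=int, default=0)
aug.add_argument('--fill-value', type=int, default=127)
aug.add_argument('--random-crop', type=int, default=0)
aug.add_argument('--min-random-size', type=int, default=-1)
aug.add_argument('--max-random-size', type=int, default=-1)

def set_cifar_aug(aug):
    # Standard CIFAR benchmark augmentation: random mirror plus
    # pad-and-crop (values follow hetong007's comment above)
    aug.set_defaults(random_mirror=1, pad=4, fill_value=0, random_crop=1)
    aug.set_defaults(min_random_size=32, max_random_size=32)

set_cifar_aug(aug)
args = aug.parse_args([])
print(args.random_mirror, args.pad)  # -> 1 4
```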
@eric-haibin-lin @hetong007 Could you merge this?
Commits:

* Replace cublassgemm with cublassgemmex for >= 7.5
* Add comment for cublassgemmex

Remove fixed seed for test_sparse_nd_save_load (apache#11920)
* Remove fixed seed for test_sparse_nd_save_load
* Add comments related to the commit

Corrections to profiling tutorial (apache#11887)
Corrected a race condition with stopping profiling. Added mx.nd.waitall to ensure all operations have completed, including GPU operations that might otherwise be missing. Also added alternative code for context selection GPU vs CPU, that had an error before on machines with nvidia-smi.

Fix image classification scripts and Improve Fp16 tutorial (apache#11533)
* fix bugs and improve tutorial
* improve logging
* update benchmark_score
* Update float16.md
* update link to dmlc web data
* fix train cifar and add random mirroring
* set aug defaults
* fix whitespace
* fix typo