Long Dang

14 comments of Long Dang

Hello ART, may I know if there is any update on this? Many thanks.

You can reduce the training set size to 4,000 as the authors of Faster AA show in their paper.

@hx621 did you use the subset of the training set for training the sub-policies? With 4000 images, I trained 20 sub-policies with 4 operations for 200 epochs and it took...
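For anyone reproducing the reduced training set mentioned above, here is a minimal sketch in plain Python (the helper name is hypothetical) of sampling 4,000 indices from the 50,000 CIFAR-10 training images; the resulting index list could then be handed to a dataset wrapper such as `torch.utils.data.Subset`:

```python
import random

def sample_subset_indices(dataset_size, subset_size, seed=0):
    """Draw a reproducible random subset of dataset indices without replacement."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(dataset_size), subset_size))

# CIFAR-10 has 50,000 training images; Faster AA trains sub-policies on a
# 4,000-image subset.
indices = sample_subset_indices(50_000, 4_000)
print(len(indices))  # 4000
```

Fixing the seed keeps the subset identical across runs, so sub-policy searches remain comparable.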

@creafz: If I want to extend the base code for multiple gpu processing, where should I start? Also, can you help reupload Tensorboard logs for the CIFAR10, ImageNet, and Pascal...

> parallelization of client training in each round I also have the same question. Thanks.

Hello, today I ran into an issue when accessing the autoalbument documentation at https://albumentations.ai/docs/autoalbument/. Can you help check whether this issue can be fixed? Many thanks.

@ternaus thanks, and I made a one-time donation because I am a student. Can you help check the first reference link for the CIFAR-10 dataset at [this URL](https://tensorboard.dev/experiment/ZleMHe73QCGzPeDCRpFLfA/) in the following...

There might be some syntax errors. You need to set the HYDRA_FULL_ERROR environment variable to 1 using this command: `export HYDRA_FULL_ERROR=1`.
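For reference, a minimal shell sketch of enabling the full traceback before rerunning the search (the rerun command at the end is only an example):

```shell
# Make Hydra print the full stack trace instead of the truncated error message.
export HYDRA_FULL_ERROR=1
echo "$HYDRA_FULL_ERROR"

# Then rerun your command, e.g.:
#   autoalbument-search --config-dir </path/to/config>
```

With the variable set, the Hydra error output includes the complete Python traceback, which usually points to the actual line that failed.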

@TimandXiyu: can you provide more information about your issue? It is best if you can provide detailed information about the implementation. Thanks.

Hi, to calculate the average training accuracy over all users (stored in the variable "idxs_users") participating in a training round, we should compute the training accuracy of each user....
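As a minimal sketch of that idea in plain Python (hypothetical function names; it assumes each sampled user reports a `(correct, total)` pair for the round):

```python
def user_accuracy(correct, total):
    """Training accuracy of a single user for this round."""
    return correct / total

def round_average_accuracy(per_user_stats):
    """Average the per-user training accuracies over the users
    sampled this round (i.e. the users in idxs_users)."""
    accuracies = [user_accuracy(c, t) for c, t in per_user_stats]
    return sum(accuracies) / len(accuracies)

# Example: two users sampled this round.
print(round_average_accuracy([(80, 100), (90, 100)]))  # 0.85
```

Note this is an unweighted mean over users; if users hold different amounts of data, you may instead want to weight each user's accuracy by their sample count.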