
A benchmark framework for TensorFlow

102 benchmark issues, sorted by most recently updated.

The existing implementation uses L2 regularization on all model variables (including batch-norm variables and biases). This is quite different from TF-Slim models, which usually regularize only conv2d weights,...
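A minimal sketch of the TF-Slim-style policy the issue describes: filter the trainable variables by name and apply weight decay only to conv2d kernels. The variable names below are illustrative, not taken from the repo.

```python
import re

# Hypothetical variable names, mimicking what tf.trainable_variables()
# might return for a small conv net (illustrative only).
variable_names = [
    "conv1/conv2d/kernel:0",
    "conv1/conv2d/bias:0",
    "conv1/batchnorm/gamma:0",
    "conv1/batchnorm/beta:0",
    "fc1/dense/kernel:0",
    "fc1/dense/bias:0",
]

def l2_candidates(names):
    """Keep only conv2d kernels: biases and batch-norm parameters
    are excluded from weight decay, as in typical TF-Slim models."""
    return [n for n in names if re.search(r"conv2d/kernel", n)]

print(l2_candidates(variable_names))  # ['conv1/conv2d/kernel:0']
```

The same filter could be applied before summing `tf.nn.l2_loss` terms, so that the regularization loss ignores everything except convolution weights.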

Hi authors, the speed I achieved on AlexNet with the CIFAR-10 dataset is only ~7000 images/sec using a TITAN X Pascal GPU. May I know what speed you...

stat:awaiting tensorflower

Hi, thanks for the wonderful code! I found that the summaries added during image preprocessing are lost after calling [ds_iterator.get_next()](https://github.com/tensorflow/benchmarks/blob/9381e972bfe9f0ae0f68384a6f67d7c4b4f5ff12/scripts/tf_cnn_benchmarks/preprocessing.py#L503). I identified this issue by adding `tmp = tf.get_collection(key=tf.GraphKeys.SUMMARIES)` right after...

Hi, thanks a lot for sharing this awesome project. I wonder whether the code currently supports a hyperparameter like Caffe's "iter_size", that is, accumulating gradients over "iter_size" batches...
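The Caffe `iter_size` behaviour the question asks about can be sketched framework-independently: sum the gradients of several micro-batches and apply one averaged update. This NumPy sketch is an assumption about the intended semantics, not code from the repo.

```python
import numpy as np

def accumulated_step(grads_per_microbatch, iter_size, lr=0.1):
    """Sum gradients over `iter_size` micro-batches, then return a single
    update computed from the averaged gradient (Caffe `iter_size` style)."""
    assert len(grads_per_microbatch) == iter_size
    accum = np.zeros_like(grads_per_microbatch[0])
    for g in grads_per_microbatch:
        accum += g                      # accumulate instead of updating
    return -lr * accum / iter_size      # one update with the mean gradient

# Two micro-batches of gradients for a 2-parameter model.
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
update = accumulated_step(grads, iter_size=2)
print(update)  # [-0.2 -0.3]
```

The effective batch size becomes `iter_size * batch_size`, which is why the trick is popular when GPU memory limits the per-step batch.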

Firstly, the _eval function currently doesn't support the 'variable_update=parameter_server' and 'variable_update=distributed_replicated' modes, and there are errors when using the 'replicated' mode to restore parameters from...

I hit a hang when running `tf_cnn_benchmarks.py` in distributed mode; I think `global_step` should be protected by a lock at [this line](https://github.com/tensorflow/benchmarks/blob/master/scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py#L934).
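The locking the issue suggests can be illustrated with a plain Python counter: guard the read-modify-write of a shared step variable so concurrent workers cannot lose increments. This is a generic sketch, not the repo's actual `global_step` handling.

```python
import threading

class GlobalStep:
    """A shared step counter whose read-modify-write is serialized
    by a lock, so concurrent increments are never lost."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # serialize the read-modify-write
            self._value += 1
            return self._value

    def value(self):
        with self._lock:
            return self._value

step = GlobalStep()
threads = [
    threading.Thread(target=lambda: [step.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(step.value())  # 4000 -- no increments lost
```

In real distributed TensorFlow the equivalent fix is an atomic update (e.g. an assign-add on the step variable) rather than a Python lock, since workers live in separate processes.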

stat:awaiting response

I am running `tf_cnn_benchmarks.py`, but I want to change the GPU utilization during the run. How should I change the code? I changed the batch size, but utilization does not change.

After TensorFlow 2.13, `tf.keras` actually points to the Keras 3 library.