PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.

75 PocketFlow issues

**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
The command to reproduce the issue:
```
./scripts/run_local.sh nets/mobilenet_at_ilsvrc12_run.py --mobilenet_version 2 --ws_prune_ratio_prtl uniform --ws_prune_ratio 0.75...
```

I read the code in `export_pb_tflite_models.py`. The following code compresses the model, but I think these operations will be saved in the `pb` file, and the new output `x` relies on...
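
For context, a minimal sketch of how a frozen `.pb` is typically exported in TF 1.x, which is why any compression/transform ops wired in front of the exported output get serialized into the file together with it. The checkpoint path and the `output_node` name are placeholders, not PocketFlow's actual values.

```python
# Illustrative TF 1.x freeze-and-export sketch (placeholder names, not
# PocketFlow's actual export code).
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session(graph=tf.Graph()) as sess:
  saver = tf.train.import_meta_graph('model.ckpt.meta')
  saver.restore(sess, 'model.ckpt')
  # Every op reachable from the listed output nodes -- including any
  # compression ops inserted before export -- is baked into the graph def.
  frozen = graph_util.convert_variables_to_constants(
      sess, sess.graph.as_graph_def(), output_node_names=['output_node'])
  with tf.gfile.GFile('model.pb', 'wb') as f:
    f.write(frozen.SerializeToString())
```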

I am trying to manually calculate the accuracy of a model trained with the uniform-tf learner. After calling `export_quant_tflite_model` (`python ./tools/conversion/export_quant_tflite_model.py --model_dir ./models_uqtf_eval`), a `.pb` file was generated. I am trying to...
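
As a rough sketch of how such a frozen `.pb` can be evaluated by hand in TF 1.x: load the graph def, look up the input and output tensors, and run the evaluation batches through a session. The tensor names and the `eval_batches` iterator below are placeholders, not the tool's actual names.

```python
# Hypothetical manual evaluation of a frozen .pb in TF 1.x.
import numpy as np
import tensorflow as tf

def eval_batches():
  """Placeholder: yield (images, labels) batches from your eval set."""
  raise NotImplementedError

graph_def = tf.GraphDef()
with tf.gfile.GFile('model_original.pb', 'rb') as f:
  graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
  tf.import_graph_def(graph_def, name='')
  x = graph.get_tensor_by_name('net_input:0')        # placeholder name
  logits = graph.get_tensor_by_name('net_output:0')  # placeholder name

with tf.Session(graph=graph) as sess:
  correct, total = 0, 0
  for images, labels in eval_batches():
    preds = sess.run(logits, feed_dict={x: images})
    correct += np.sum(np.argmax(preds, axis=1) == labels)
    total += labels.shape[0]
  print('top-1 accuracy: %.4f' % (float(correct) / total))
```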

I can't get the model to converge with any learner, on either ImageNet or CIFAR-10. Any suggestions about the hyper-parameters? Thanks.

Please make sure that this is a documentation issue. **System information** - PocketFlow version: - Doc Link: **Describe the documentation issue** **We welcome contributions by users. Will you be able...

I want to know why the model is restored at the end of `__train_pruned_model`, since there is no operation after this.
```
def __calc_grads_pruned(self, grads_origin):
  ......
  if self.__is_primary_worker():
    with self.pruner.model.g.as_default():
      self.pruner.saver = tf.train.Saver()
      #...
```
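
For what it's worth, the snippet resembles the common TF 1.x pattern where only the primary worker owns a `tf.train.Saver`, checkpoints the best model during training, and restores it at the end so that whatever runs afterwards (export, evaluation) starts from the best weights. Below is a generic sketch of that pattern, not PocketFlow's actual code; all names are illustrative.

```python
# Generic "save best on primary worker, restore at the end" pattern (TF 1.x).
import tensorflow as tf

def train_and_keep_best(sess, train_op, eval_fn, is_primary_worker,
                        ckpt_path='./best_model.ckpt', nb_iters=1000):
  saver = tf.train.Saver() if is_primary_worker else None
  best_acc = 0.0
  for _ in range(nb_iters):
    sess.run(train_op)
    acc = eval_fn(sess)                       # user-supplied evaluation
    if is_primary_worker and acc > best_acc:
      best_acc = acc
      saver.save(sess, ckpt_path)             # checkpoint the best model so far
  if is_primary_worker:
    saver.restore(sess, ckpt_path)            # leave the session holding the best weights
```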

minor bug

When I use one GPU it finishes without any problem, but when using multiple GPUs it hangs when running the bcast operation. I don't know how to solve it. Code: ...
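
A frequent cause of this kind of hang is that a broadcast is a collective op and must be executed by every rank; if only one rank reaches it, the others never join and it blocks forever. Below is a minimal sketch of the standard Horovod TF 1.x broadcast pattern, assuming the multi-GPU path is Horovod-based; it is illustrative, not PocketFlow's actual code.

```python
# Minimal Horovod broadcast sketch (TF 1.x).
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Pin each process to its own GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

with tf.Session(config=config) as sess:
  sess.run(tf.global_variables_initializer())
  # Run the broadcast on ALL ranks, never inside an `if primary_worker:`
  # branch, otherwise the participating ranks hang waiting for the rest.
  sess.run(hvd.broadcast_global_variables(root_rank=0))
```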

I prune resnet_20 on cifar_10 with ChannelPrunedLearner, but when I read `original_model.ckpt`, `pruned_model.ckpt` and `best_model.ckpt`, the weight sizes are the same. `origin val` is the weight size in `original_model.ckpt`, `pruned...
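
One way to check whether the pruned checkpoint actually stores smaller tensors is to compare per-variable shapes between the two checkpoints directly. A small sketch below; the checkpoint paths are placeholders for the files mentioned above.

```python
# Compare variable shapes between two TF checkpoints.
import tensorflow as tf

def shape_map(ckpt_path):
  reader = tf.train.NewCheckpointReader(ckpt_path)
  return reader.get_variable_to_shape_map()

orig = shape_map('./models/original_model.ckpt')    # placeholder path
pruned = shape_map('./models/pruned_model.ckpt')     # placeholder path

# Print only the variables whose shapes actually changed after pruning.
for name in sorted(set(orig) & set(pruned)):
  if orig[name] != pruned[name]:
    print('%s: %s -> %s' % (name, orig[name], pruned[name]))
```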

Hi @psyyz10, it's really hard to train the DDPG agent; I have tried many combinations of hyper-parameters but only got 66.38% top-1 accuracy on MobileNetV1 with 50% FLOPs. I will appreciate...

I compress the model with ChannelPrunedLearner. When execution reaches `self.sess_train.run(self.train_op)` in the `__train_pruned_model(self, finetune=False)` function, the program does not continue. I don't know whether this is because my machine is too slow...