Xu Zhao
You need the following code snippet: `(setq-default custom-safe-themes t)` `(load-theme 'airline-solarized-alternate-gui t)`
@vyorkin Looks good, will give it a shot. Much obliged!
Sorry, but I can't reproduce your result:
```
$ python run.py yolov3 -d cpu -t train --bs 1
Running train method from yolov3 on cpu in eager mode with input...
Closed as can't reproduce.
Correct, I agree that we should use small batch sizes for CPU inference. In addition, we should have different scales for GPU inference, because some users use small GPUs,...
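The idea above could be sketched as a per-device batch-size table. All names and values here are illustrative assumptions, not from the benchmark suite:

```python
# Hypothetical sketch: small batch sizes for CPU inference,
# several scales for GPUs of different memory capacities.
BATCH_SIZES = {
    "cpu": [1],
    "cuda-small": [1, 4],      # e.g. GPUs with limited memory
    "cuda-large": [1, 16, 64],
}

def batch_sizes_for(device: str) -> list:
    # Fall back to the most conservative setting for unknown devices.
    return BATCH_SIZES.get(device, [1])
```

Keeping the fallback at `[1]` means an unrecognized device never runs an oversized batch.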
> I'm also noticing very low CPU utilization. Usually only 1 thread active. Did something change in how we setup threading?

I don't think anything changed in the threading setup. Can...
> ```
> ./torchbench.py --nothing -n100 -k alexnet
> ```

@jansel For the low CPU utilization, I tried stress testing alexnet using the newly added `--stress` option (https://github.com/pytorch/benchmark/pull/1002): `$...
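One thing worth checking when only one thread is active is whether intra-op parallelism has been pinned by environment variables such as `OMP_NUM_THREADS`. A minimal diagnostic sketch (the helper name is hypothetical):

```python
import os

# Env vars that commonly pin PyTorch's intra-op thread pool to one thread.
THREAD_ENV_VARS = ["OMP_NUM_THREADS", "MKL_NUM_THREADS"]

def diagnose_thread_env(env=os.environ) -> dict:
    """Report the thread-related env vars, marking unset ones."""
    return {var: env.get(var, "<unset>") for var in THREAD_ENV_VARS}

print(diagnose_thread_env({"OMP_NUM_THREADS": "1"}))
```

If `OMP_NUM_THREADS=1` is inherited from a launcher script, every benchmark in the process will run single-threaded regardless of the machine's core count.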
The PR is blocked by a memory leak issue in wlm_lstm_train_cuda: https://app.circleci.com/pipelines/github/pytorch/benchmark/4705/workflows/b9b7e022-0a50-41eb-8088-4e9ca5d4f169/jobs/4836 @robieta I suspect this is because of cyclic references in the LSTM model (since wlm_transformer_train_cuda doesn't have this problem)....
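As background on why cyclic references can look like a leak: CPython's reference counting alone never reclaims a cycle, so the objects (and any GPU memory they keep alive) survive until the cycle collector runs. A minimal stdlib sketch of that behavior:

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle, as can happen when a model object
# holds hooks or closures that point back at the model.
a, b = Node(), Node()
a.ref, b.ref = b, a
probe = weakref.ref(a)
del a, b

# Refcounting alone cannot reclaim the cycle...
assert probe() is not None
# ...only the cycle collector frees it.
gc.collect()
assert probe() is None
```

In a benchmark harness that measures memory right after a run, such objects are still alive unless `gc.collect()` is called explicitly between iterations.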
Workflow: https://github.com/pytorch/benchmark/actions/runs/2841567257
Result:
```
{
  "start": "e1007950484aa1df4a2f87c9c14b514ffd7736a5",
  "end": "3aeb5e4ff9d56ecd680401cfa3f23e97a279efbe",
  "threshold": 7,
  "timeout": 120,
  "torchbench_branch": "v2.0",
  "result": [
    {
      "commit1": "017ecb782d2",
      "commit1_time": "2022-08-10 21:50:13 +0000",
      "commit1_digest": {
        "test_train[hf_BigBird-cpu-eager]": 14.196933696744964
      },
      "commit2":...
@malfet @mingfeima Looks like [4e9b969baa6](https://github.com/pytorch/pytorch/commit/4e9b969baa6) slows down hf_BigBird test by 10%.
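For reference, the reported slowdown is just the relative latency increase between the two commits. A sketch with illustrative numbers (the "before" latency below is made up for the example, not taken from the bisection output):

```python
def pct_slowdown(before: float, after: float) -> float:
    """Percent increase in latency from `before` to `after`."""
    return (after - before) / before * 100.0

# Illustrative: 12.9s -> 14.19s is exactly a 10% slowdown.
print(pct_slowdown(12.9, 14.19))
```

A regression is flagged when this value exceeds the configured `threshold` (7% in the bisection config above).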