tfjs
[Optimization target] The inference time of DeepLabV3's cityscapes architecture is long
DeepLabV3 with the cityscapes architecture takes a long time to run inference; we need to figure out why before adding it.
Run on a Mac Pro 2019 with the WebGL backend: the computer crashed.
Run on gLinux (16 CPUs) with the CPU backend: ~14 min.
Run on Windows with a high-performance video card on the WebGL backend: 543 ms.
@qjia7 @gyagp
As discussed with @lina128, since the cityscapes architecture for the DeepLabV3 model takes too much time to run inference and is outdated, we are considering not adding this architecture to DeepLabV3 in the benchmark tools. Please let me know if you have any suggestions.
(Will also delete the row for it in the spreadsheet.)
@Linchenn Just tried webgpu (with #6760 ) on Intel TGL, about 700ms. Agree to not add it to DeepLabV3 in benchmarks for now. But there maybe some optimization opportunities. Can we keep this issue or file a new issue to optimize this model?
Sure, we can keep it, and I have updated the title of this issue accordingly.