Customize setTimeout
If the setTimeout nesting level is greater than 5 and the timeout is less than 4 ms, the browser clamps the timeout to 4 ms, which hurts performance. This PR provides a custom setTimeout to mitigate that impact.
BUG: #6687
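For context, the usual way to avoid the 4 ms clamping is to dispatch zero-delay callbacks through postMessage instead of setTimeout. Below is a minimal sketch of that idea; the function name, message tag, and fallback behavior are illustrative and not necessarily what this PR implements.

```ts
// Callbacks waiting to be run via the postMessage path.
const pendingCallbacks: Array<() => void> = [];

// Messages posted to the window are not subject to the 4 ms clamping
// that deeply nested setTimeout calls receive.
window.addEventListener('message', (event) => {
  if (event.source === window && event.data === 'tfjs-set-timeout-custom') {
    const callback = pendingCallbacks.shift();
    if (callback != null) {
      callback();
    }
  }
});

function setTimeoutCustom(callback: () => void, delayMs: number) {
  if (delayMs > 0) {
    // Non-zero delays still go through the regular timer.
    setTimeout(callback, delayMs);
    return;
  }
  pendingCallbacks.push(callback);
  window.postMessage('tfjs-set-timeout-custom', '*');
}
```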
This PR is still a WIP. @qjia7 @xhcao PTAL and let me know if you have any high-level comments. Do you have ideas for writing test cases for this?
The implementation looks good to me. How about enabling setTimeoutWPM behind a flag for a while? For the test case, maybe add a new file util_base_test.js: call setTimeoutWPM repeatedly, say 20 (or more) times, and expect the average time of the last 15 calls to be less than 4 ms.
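A sketch of the suggested test, assuming Jasmine-style tests (as used in tfjs) and a setTimeoutCustom helper; the names and numbers follow the suggestion above but are illustrative:

```ts
describe('setTimeoutCustom', () => {
  it('keeps the average delay of repeated calls well under 4 ms', async () => {
    const numCalls = 20;
    const durations: number[] = [];
    for (let i = 0; i < numCalls; i++) {
      const start = performance.now();
      await new Promise<void>(resolve => setTimeoutCustom(resolve, 0));
      durations.push(performance.now() - start);
    }
    // Ignore the first few calls so warm-up does not skew the average,
    // keeping the last 15 measurements.
    const tail = durations.slice(numCalls - 15);
    const avg = tail.reduce((a, b) => a + b, 0) / tail.length;
    expect(avg).toBeLessThan(4);
  });
});
```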
The function could be called through tf.profile if you want to write unit tests. You could also check the total time of the tfjs-models demos; it should be reduced.
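For reference, a rough sketch of exercising that path through tf.profile; the model and input here are placeholders:

```ts
import * as tf from '@tensorflow/tfjs';

// tf.profile runs the given function and reports memory and kernel
// information; running inference this way goes through the async
// scheduling paths that use the timeout helper.
async function profileOnce(model: tf.GraphModel, input: tf.Tensor) {
  const info = await tf.profile(() => model.predict(input) as tf.Tensor);
  console.log('peak bytes:', info.peakBytes);
  console.log('number of kernels:', info.kernels.length);
}
```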
With the above changes, the unit test reports:
LOG: 'averageTime of setTimeoutWPM is 0.06666664282480876 ms'
LOG: 'averageTime of setTimeout is 4 ms'
Thank you all for the comments! I think I have addressed all of them. Could you please take another look? I also tested bodypix-MobileNetV1-image-0.25 and got the data below (best time, average time, in ms):

[setTimeout]
16.5, 22.5
15.3, 20.7
15.8, 21.4
17.6, 23.2
17.4, 22.5

[setTimeoutCustom]
13.1, 16.0
13.2, 18.3
13.6, 15.8
12.7, 17.0
11.7, 15.2

The data shows we can get 2-4 ms back.
It seems the bots are still not satisfied, but I couldn't figure out why after checking the detailed log. Could you please point me to the cause? Thanks!
@pyu10055 Can you review again the latest changes? Thanks!
I love this improvement, as it results in quite significant performance gains, but at the moment it conflicts with @tensorflow/tfjs-backend-wasm as soon as that module is loaded, regardless of whether it is actually used (IMO this is due to bad behavior by Emscripten, but it's a conflict nonetheless). See https://github.com/tensorflow/tfjs/issues/6687#issuecomment-1209286270 for details.
Thanks @vladmandic , I will investigate this.
@pyu10055 @mattsoulanille, given that the related Emscripten issues have been fixed (thanks @vladmandic!) and @mattsoulanille upgraded Emscripten to include the fixes, can we merge this change now?