qkeras
Add support for qdense_batchnorm in QKeras
Add support for qdense_batchnorm by folding qdense kernel with batchnorm parameters, then computing qdense_batchnorm output using the qdense inputs and folded kernel
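For context, the standard batch-norm folding math that a layer like this typically builds on can be sketched in plain NumPy. This is an illustrative sketch only, not the PR's actual implementation; the function name and the epsilon default are assumptions:

```python
import numpy as np

def fold_dense_batchnorm(kernel, bias, gamma, beta, moving_mean, moving_var,
                         eps=1e-3):
    """Fold BatchNorm parameters into a Dense layer's kernel and bias.

    y = gamma * (x @ kernel + bias - mean) / sqrt(var + eps) + beta
      = x @ (kernel * scale) + ((bias - mean) * scale + beta)
    """
    scale = gamma / np.sqrt(moving_var + eps)  # one scale per output unit
    folded_kernel = kernel * scale             # broadcasts over the last axis
    folded_bias = beta + (bias - moving_mean) * scale
    return folded_kernel, folded_bias

# Sanity check: Dense followed by BatchNorm equals Dense with folded weights.
x = np.array([[1.0, 2.0]])
kernel = np.array([[0.5, -1.0], [1.5, 2.0]])
bias = np.array([0.1, -0.2])
gamma, beta = np.array([1.2, 0.8]), np.array([0.0, 0.5])
mean, var = np.array([0.3, -0.1]), np.array([1.0, 4.0])

dense_out = x @ kernel + bias
bn_out = gamma * (dense_out - mean) / np.sqrt(var + 1e-3) + beta
fk, fb = fold_dense_batchnorm(kernel, bias, gamma, beta, mean, var)
print(np.allclose(bn_out, x @ fk + fb))  # True
```

In QDenseBatchnorm the folded kernel is additionally passed through the layer's weight quantizer before the matmul, which is where small discrepancies against a separate QDense + BatchNormalization pair can arise.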
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
:memo: Please visit https://cla.developers.google.com/ to sign.
Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.
What to do if you already signed the CLA
Individual signers
- It's possible we don't have your GitHub username or you're using a different email address on your commit. Check your existing CLA data and verify that your email is set on your git commits.
Corporate signers
- Your company has a Point of Contact who decides which employees are authorized to participate. Ask your POC to be added to the group of authorized contributors. If you don't know who your Point of Contact is, direct the Google project maintainer to go/cla#troubleshoot (Public version).
- The email used to register you as an authorized contributor must be the email used for the Git commit. Check your existing CLA data and verify that your email is set on your git commits.
- The email used to register you as an authorized contributor must also be attached to your GitHub account.
ℹ️ Googlers: Go here for more info.
@julesmuhizi thank you so much for your PR, could you sign the CLA first?
@zhuangh I'm waiting on employer authorization for the CLA. Will do as soon as I get authorized.
@googlebot I signed it!
@julesmuhizi thanks! Could you also add a test for your code change?
@zhuangh here is a test --> https://gist.github.com/nicologhielmetti/84df61987476b031eb8fc6103f7e2915 @julesmuhizi and I compared the performance against a QDense + BatchNorm layer to see how large the gap between the two is. It turned out to be small, probably due to slight differences in the quantization operations between the two versions.
@julesmuhizi Don't you also need to add the new layer to bn_folding_utils.py?
@zhuangh Related to the folding of Dense + BatchNorm: convert_to_folded_model does not include Dense layers. Is that a bug?
thanks @vloncar
@lishanok has been reviewing this PR. we are thinking whether to add a follow-up code change for that or do it in this PR.
@julesmuhizi Thank you for the commit. I reviewed it and it looked good. I'm not sure why your test generates different output values between the folded and non-folded models. Can you write a test similar to bn_folding_test.py/test_same_training_and_prediction(), where the weights are set to values for which quantization does not lose precision, and make sure the two versions produce the same results?
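The reason such a test can assert exact agreement is that fixed-point quantization is lossless for weights that already lie on the quantization grid. A minimal NumPy sketch of that idea follows; this is a simplified illustrative quantizer, not QKeras's actual quantized_bits implementation:

```python
import numpy as np

def quantize(x, bits=8, integer=0):
    """Simplified quantized_bits-style fixed-point quantizer (illustrative)."""
    frac = bits - integer - 1                    # one bit reserved for the sign
    scale = 2.0 ** frac
    max_val = (2.0 ** (bits - 1) - 1) / scale    # largest representable value
    return np.clip(np.round(x * scale) / scale, -max_val, max_val)

# Weights chosen on the 2^-7 grid: quantization changes nothing, so a folded
# and a non-folded model built from them should produce identical outputs.
w = np.array([0.25, -0.5, 0.125, 0.0078125])
print(np.array_equal(quantize(w), w))  # True
```

Picking weights like these (exact sums of powers of two within range) removes quantization error as a variable, so any remaining mismatch between the folded and non-folded models points at the folding logic itself.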
@vloncar There are quite a number of utility functions to modify in order to support a new folded layer type, e.g. convert_folded_model_to_normal, qtools, print_qmodel_summary, bn_folding_test, model_quantize, etc. Regarding tests, I would suggest writing tests similar to qpooling_test.py (tests for regular new layers) and bn_folding_test.py (tests specific to bn-folding layers) to check that all the utility functions are updated to support the new layer.
courtesy ping @julesmuhizi
In case you missed it, there is a suggestion from @lishanok regarding the test case.
thanks
Hi, I have been occupied with another project but will review and begin addressing the issues in the comment above. Thanks for the ping @zhuangh
Hi, I would like to know if this thread is still active. I'm interested in having the QDenseBatchNorm layer :) Thank you for your work on QKeras; it is really straightforward to use.
Hi @boubasse thanks for the reminder. @lishanok could you take a look?
Hi,
This thread is active and the layer is implemented on a separate fork/branch that’s not been merged yet as I don’t know how to format the unit tests.
https://github.com/julesmuhizi/qkeras/blob/qdense_batchnorm/qkeras/qdense_batchnorm.py
@lishanok @zhuangh Sorry for the delay. We've added the requested test.
An (unrelated) autoqkeras test was also failing (presumably also on master) due to the same legacy optimizer issue that was fixed in 5b1fe849f4a5e9126d0bd12a7b92bcc1a1d1b3e3. So we adopted a similar solution to that for the Adam optimizer. Let us know if you want us to split that into a separate PR.
We think this is ready to be merged, and a follow-up PR should handle updating the utility functions to support a new folded layer type (convert_folded_model_to_normal, qtools, print_qmodel_summary, model_quantize, etc.) and additional tests (similar to qpooling_test.py and bn_folding_test.py).
Thanks, Javier
Hi Shan and Daniele, could you take a look? @lishanok and @danielemoro
Hi all, any chance you could take a look? Thanks!
@zhuangh @lishanok @danielemoro
Sorry about the delay. We will take a look as soon as possible. -Shan
Hi, @zhuangh @lishanok @danielemoro any chance you could take a look? Thank you!
Hi @zhuangh @lishanok @danielemoro are you able to merge this?
hi @lishanok can you take a look at this? thanks!
@lishanok thanks for looking! CI tests pass after rebase.
Hi. My team used this feature, and it worked well for us. We'd love to see it merged. Are there any blockers?