
Add support for qdense_batchnorm in QKeras

Open · julesmuhizi opened this pull request 4 years ago · 29 comments

Add support for qdense_batchnorm by folding the qdense kernel with the batchnorm parameters, then computing the qdense_batchnorm output from the qdense inputs and the folded kernel.

julesmuhizi avatar Apr 27 '21 19:04 julesmuhizi
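For reference, the kernel/bias folding the PR describes can be sketched in plain Python. This is a minimal illustration of the standard BatchNorm-folding identity, not the actual QKeras implementation; the function names are hypothetical. Applying BatchNorm after a dense layer is algebraically equivalent to a single dense layer with a rescaled kernel and a shifted bias:

```python
import math

def fold_bn_into_dense(w, b, gamma, beta, mean, var, eps=1e-3):
    """Fold BatchNorm parameters into a dense kernel and bias.

    BatchNorm(Dense(x)) = gamma * (x @ w + b - mean) / sqrt(var + eps) + beta
                        = x @ (w * scale) + ((b - mean) * scale + beta)
    where scale = gamma / sqrt(var + eps), applied per output unit.
    """
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    w_folded = [[wij * scale[j] for j, wij in enumerate(row)] for row in w]
    b_folded = [(bj - mj) * sj + betaj
                for bj, mj, sj, betaj in zip(b, mean, scale, beta)]
    return w_folded, b_folded

def dense(x, w, b):
    # x: input vector; w: (in x out) kernel; b: per-unit bias
    return [sum(xi * w[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]
```

At inference time the folded layer computes the same function as the Dense + BatchNorm pair; in a quantized variant such as the proposed QDenseBatchnorm, the folded kernel is additionally passed through the layer's weight quantizer.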

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

:memo: Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.


What to do if you already signed the CLA

Individual signers
Corporate signers

ℹ️ Googlers: Go here for more info.

googlebot avatar Apr 27 '21 19:04 googlebot


@julesmuhizi thank you so much for your PR, could you sign CLA first?

zhuangh avatar Apr 27 '21 19:04 zhuangh

@zhuangh I'm waiting on employer authorization for the CLA. Will do as soon as I get authorized.

julesmuhizi avatar Apr 27 '21 20:04 julesmuhizi

@googlebot I signed it!

julesmuhizi avatar Apr 27 '21 21:04 julesmuhizi

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

google-cla[bot] avatar Apr 27 '21 21:04 google-cla[bot]

@julesmuhizi thanks! Could you also add a test for your code change?

zhuangh avatar Apr 29 '21 18:04 zhuangh


@zhuangh here is a test --> https://gist.github.com/nicologhielmetti/84df61987476b031eb8fc6103f7e2915. @julesmuhizi and I compared the performance against a QDense + BatchNorm layer pair to see how large the gap between the two is. The gap turned out to be small, probably due to slight differences in the quantization operations between the two versions.

nghielme avatar Apr 30 '21 16:04 nghielme

@julesmuhizi Don't you also need to add the new layer to bn_folding_utils.py?

@zhuangh Related to the folding of dense+bn, convert_to_folded_model will not include Dense layers. Is this a bug?

vloncar avatar May 05 '21 14:05 vloncar

thanks @vloncar

@lishanok has been reviewing this PR. we are thinking whether to add a follow-up code change for that or do it in this PR.

zhuangh avatar May 05 '21 16:05 zhuangh

@julesmuhizi Thank you for the commit. I reviewed it and it looks good. I'm not sure why your test generates different output values for the folded and non-folded models. Can you write a test similar to bn_folding_test.py/test_same_training_and_prediction() where the weights are set to values for which quantization won't result in a loss of precision, and make sure the two versions produce the same results?

@vloncar There are quite a number of utility functions to modify in order to support a new folded layer type, for example convert_folded_model_to_normal, qtools, print_qmodel_summary, bn_folding_test, model_quantize, etc. Regarding tests, I would suggest writing tests similar to qpooling_test.py (tests for regular new layers) and bn_folding_test.py (tests specific to bn-folding layers) to check that all the utility functions are updated to support the new layer.

lishanok avatar May 05 '21 23:05 lishanok
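A minimal sketch of the kind of check suggested above, in pure Python rather than with the actual QKeras quantizers (the quantizer and all values here are hypothetical): if the weights, BatchNorm parameters, and resulting folded weights all land exactly on the fixed-point grid, quantization is lossless and the folded and unfolded computations must agree exactly.

```python
def quantize_fixed(x, frac_bits=3):
    """Round to the nearest multiple of 2**-frac_bits (a toy stand-in for a
    quantized_bits-style quantizer; saturation is omitted for brevity)."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

# Values chosen as exact multiples of 1/8, with a BN scale that is a power
# of two (gamma / sqrt(var + eps) = 4 / 2 = 2), so folding stays on the grid.
w = [0.25, -0.5, 1.125]                  # one output unit, three inputs
b, gamma, beta = 0.125, 4.0, -0.25
mean, var, eps = 0.5, 3.9375, 0.0625     # var + eps = 4.0 exactly

scale = gamma / (var + eps) ** 0.5       # exactly 2.0
w_folded = [quantize_fixed(wi * scale) for wi in w]
b_folded = quantize_fixed((b - mean) * scale + beta)

x = [1.0, -0.5, 0.25]
z = sum(xi * quantize_fixed(wi) for xi, wi in zip(x, w)) + b
y_unfolded = gamma * (z - mean) / (var + eps) ** 0.5 + beta
y_folded = sum(xi * wi for xi, wi in zip(x, w_folded)) + b_folded

assert quantize_fixed(w[0]) == w[0]      # no precision loss on the grid
assert abs(y_folded - y_unfolded) < 1e-12
```

Because every value is a dyadic rational, all arithmetic here is exact in binary floating point, so any mismatch between the two paths would point at a genuine folding bug rather than quantization noise.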

courtesy ping @julesmuhizi

In case you missed it, there is a suggestion regarding a test case from @lishanok above.

thanks

zhuangh avatar Jun 17 '21 04:06 zhuangh

Hi, I have been occupied with another project but will review and begin addressing the issues in the comment above. Thanks for the ping @zhuangh

julesmuhizi avatar Jun 28 '21 01:06 julesmuhizi

Hi, I would like to know if this thread is still active. I'm interested in having QDenseBatchNorm :) Thank you for your work on QKeras; it is really straightforward to use.

boubasse avatar May 13 '22 12:05 boubasse

Hi @boubasse thanks for the reminder. @lishanok could you take a look?

zhuangh avatar May 13 '22 15:05 zhuangh

Hi,

This thread is active and the layer is implemented on a separate fork/branch that hasn't been merged yet, as I don't know how to format the unit tests.

https://github.com/julesmuhizi/qkeras/blob/qdense_batchnorm/qkeras/qdense_batchnorm.py


julesmuhizi avatar May 13 '22 20:05 julesmuhizi

@lishanok @zhuangh Sorry for the delay. We've added the requested test.

An (unrelated) autoqkeras test was also failing (presumably also on master) due to the same legacy optimizer issue that was fixed in 5b1fe849f4a5e9126d0bd12a7b92bcc1a1d1b3e3. So we adopted a similar solution to that for the Adam optimizer. Let us know if you want us to split that into a separate PR.

We think this is ready to be merged, and a follow-up PR should handle updating the utility functions to support a new folded layer type (convert_folded_model_to_normal, qtools, print_qmodel_summary, model_quantize, etc.) and additional tests (similar to qpooling_test.py and bn_folding_test.py).

jmduarte avatar Oct 25 '22 01:10 jmduarte

Thanks, Javier

Hi Shan and Daniele, could you take a look? @lishanok and @danielemoro

zhuangh avatar Oct 25 '22 03:10 zhuangh

Hi all, any chance you could take a look? Thanks!

@zhuangh @lishanok @danielemoro

jmduarte avatar Feb 16 '23 19:02 jmduarte

Sorry about the delay. We will take a look as soon as possible. -Shan


lishanok avatar Feb 17 '23 20:02 lishanok

Hi, @zhuangh @lishanok @danielemoro any chance you could take a look? Thank you!

jmduarte avatar Mar 20 '23 18:03 jmduarte

Hi @zhuangh @lishanok @danielemoro are you able to merge this?

jmduarte avatar Jul 14 '23 15:07 jmduarte

hi @lishanok can you take a look at this? thanks!

jmduarte avatar Oct 20 '23 15:10 jmduarte

@lishanok thanks for looking! CI tests pass after rebase.

jmduarte avatar Oct 24 '23 01:10 jmduarte

Hi. My team used this feature, and it worked well for us. We'd love to see it merged. Are there any blockers?

AdrianAlan avatar Jan 26 '24 11:01 AdrianAlan