
NNtrainer is a software framework for training neural network models on devices.

Results: 221 nntrainer issues

- This unittest has been disabled since commit f519056 in #1281. (The last commit where this unittest works is fe87882c.) Now that we have finished refactoring the layers, it needs to...

The Daily Build also needs to release a stable version of nntrainer, not just the latest.

Commit 1: [mol attention] Adjust tensor lifespan to save memory
- Modify the tensor lifespan of fc_out to FORWARD_FUNC_LIFESPAN
- Remove the unused enum updated_state
- Change the param enum name from AttentionParams...

We can optimize the memory consumption of the multi-head attention layer by combining layers; doing so could reduce memory further. 1. Compute the heads one by one....

Layer realization should inherit properties from its base layer, for example `trainable`. This is currently missing in model creation.

bug

1. Layer semantics test (testing whether a given layer object obeys the semantics, e.g., what we shouldn't validate, that valid properties are valid, and so on)
2. Layer properties test (90% gonna...

enhancement

Tensor::sum() has come a long way, and needs more unittests because of its complicated implementation.

good first issue

This patch enables AddressSanitizer for the Debian build, for the CI only. Resolves #1480. Signed-off-by: Parichay Kapoor

DO NOT MERGE

As the pytorch package is available in GBS, we don't have to save the golden data in our repo; instead, we can generate it at runtime so that we can test...

Epic

- This commit adds a `scaling` parameter to LoRA (fc)
- In the original paper, they adopted `alpha (int)` as a parameter to derive the scaling factor internally, i.e., scaling =...

Need Review