tiny-cuda-nn
Can I use half precision for training and inference data?
Is there any difference between feeding half precision and full precision inputs/outputs when the network internally uses half precision for its weights, or can I safely keep my data in half precision to reduce memory consumption, without consequences?
OK, this does not seem to be possible: `Encoding` enforces float (full precision) input.
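For reference, here is a minimal sketch using the PyTorch bindings (`tinycudann`), assuming the question concerns the torch extension rather than the native C++ API. The config values are placeholders; the point is only the dtypes: the input tensor stays float32 (the encoding expects float input), while the module's output typically comes back in the network's internal precision (half), so it is cast back to float before computing the loss.

```python
import torch
import tinycudann as tcnn

n_input_dims, n_output_dims, batch_size = 3, 3, 2**14

# Placeholder config; any valid encoding/network combination behaves the same
# way with respect to input/output dtypes.
config = {
    "encoding": {
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
    "network": {
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
}

model = tcnn.NetworkWithInputEncoding(
    n_input_dims, n_output_dims, config["encoding"], config["network"]
)

# Inputs are kept in full precision (float32); the encoding expects float input.
x = torch.rand(batch_size, n_input_dims, device="cuda", dtype=torch.float32)

y = model(x)
print(y.dtype)  # typically torch.float16 when the network runs in half precision

# Cast the half precision output back to float32 for a full precision loss.
target = torch.rand(batch_size, n_output_dims, device="cuda")
loss = torch.nn.functional.mse_loss(y.float(), target)
loss.backward()
```

So the memory saving that half precision offers applies to what the module produces and stores internally; the data you feed into the encoding still has to be float32.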