About the randomness introduced during training
During my training, I found that even if I fixed all random seeds and used a deterministic algorithm, I still got similar but different results after several epochs of training. Which part of the algorithm might be introducing randomness?
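For context, a minimal sketch of the kind of seed-fixing meant above, including the per-worker DataLoader seeding from the PyTorch reproducibility notes, which is one commonly missed source of randomness; the dataset and loader parameters here are illustrative assumptions, not the original setup:

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    # Each DataLoader worker process derives its own seed from the base seed
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

# Shuffling order is driven by this generator, so it is reproducible too
g = torch.Generator()
g.manual_seed(0)

# Hypothetical dataset, just to make the sketch runnable
dataset = TensorDataset(torch.randn(128, 4), torch.randn(128, 1))
loader = DataLoader(dataset, batch_size=16, shuffle=True,
                    num_workers=2, worker_init_fn=seed_worker, generator=g)
```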
I have the same issue. Did you ever find a solution to this problem? If you figure out how to handle it, please let me know. Thanks in advance.
I'm sorry, I still don't have a good way to resolve the randomness of KAN.
I found a way to settle it. You need to add the following code:
```python
import random

import numpy as np
import torch

def setup_seed(seed):
    # Seed every RNG in play: PyTorch (CPU and all GPUs), NumPy, Python stdlib
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)
    random.seed(seed)
    # Disable cuDNN autotuning and force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # Error out on any op that has no deterministic implementation
    torch.use_deterministic_algorithms(True)
```
The last line, `torch.use_deterministic_algorithms(True)`, is what actually enforces determinism. But my results were dramatically worse than before!
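For anyone trying this on CUDA: per the PyTorch reproducibility notes, `torch.use_deterministic_algorithms(True)` additionally requires a fixed cuBLAS workspace, or some ops will raise a RuntimeError. A usage sketch, assuming the `setup_seed` defined above; the seed value is arbitrary:

```python
import os

# Must be set before any CUDA context is created, otherwise
# deterministic cuBLAS operations raise a RuntimeError.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

setup_seed(42)  # call once, before building the model and data loaders
```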