MaitHad
```
activation = "swish"
initializer = "Glorot uniform"
net = dde.nn.ResNet(1, 1, 32, 3, activation, initializer)
model = dde.Model(data, net)

model.compile("L-BFGS-B", loss_weights=[0.01, 10, 10])
model.train()

learning_rate = 0.001
model.compile("adam", lr=learning_rate, loss_weights=[0.01, 10, 10])
```
For an FNN, in some cases the gradient becomes vanishingly small, effectively preventing the weights from updating, so it is better to use ResNet. To solve the...
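As a rough illustration of why the skip connection helps (a minimal NumPy sketch, not DeepXDE's actual implementation), a residual block adds its input back to the transformed output, so signals and gradients have a direct identity path around the nonlinear transform:

```python
import numpy as np

def swish(x):
    # swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def residual_block(x, W1, W2):
    # Two-layer transform followed by a skip connection.
    # The "+ x" identity branch passes the input (and, in
    # backprop, the gradient) straight through, even when the
    # transform's contribution is tiny (the vanishing case).
    h = swish(x @ W1)
    return swish(h @ W2) + x  # skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))
# Near-zero weights: the transform contributes almost nothing,
# yet the block still behaves close to the identity map.
W1 = rng.standard_normal((32, 32)) * 0.01
W2 = rng.standard_normal((32, 32)) * 0.01
y = residual_block(x, W1, W2)
print(y.shape)  # (4, 32)
```

With the weights near zero the output stays close to the input, which is exactly the regime where a plain FNN would pass almost no gradient back.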
You can compile with L-BFGS first and then Adam. I had a similar problem and this worked:
```
model.compile("L-BFGS-B", loss_weights=[0.01, 0.01, 0.001, 0.001, 100, 100, 100, 100])
model.train()

learning_rate = 0.0001
model.compile("adam", lr=learning_rate, ...
```