
How to get a better result?

whtwu opened this issue 2 years ago · 6 comments

Hello @lululxvi, I have tried many ways to optimize the model, but unfortunately I can't get a more accurate result. I would be grateful for any suggestions, thanks a lot!

```python
import deepxde as dde
import numpy as np
from deepxde.backend import tf
import matplotlib.pyplot as plt


def main():
    # Beam and load parameters (SI units)
    b = 0.06              # cross-section width
    h = 0.002             # cross-section height
    e = 2e11              # Young's modulus
    i = 4 * pow(10, -11)  # second moment of area
    rho = 7850            # density
    s = 12e-05            # cross-sectional area
    l = 1                 # beam length
    f = 10                # load amplitude
    omiga = 1             # load frequency

    def ddx(x, y):
        return dde.grad.hessian(y, x, i=0, j=0)

    def ddt(x, y):
        # Second derivative w.r.t. t (input component 1). Note: i=0, j=1
        # would give the mixed derivative d2y/dxdt, not d2y/dt2.
        return dde.grad.hessian(y, x, i=1, j=1)

    def pde(x, y):
        # Euler-Bernoulli beam: E*I*y_xxxx + rho*S*y_tt = f*sin(omiga*t)
        f_xt = f * tf.sin(omiga * x[:, 1:])
        dy_tt = ddt(x, y)
        dy_xx = ddx(x, y)
        dy_xxxx = dde.grad.hessian(dy_xx, x, i=0, j=0)
        return e * i * dy_xxxx + rho * s * dy_tt - f_xt

    def boundary_l(x, on_boundary):
        return on_boundary and np.isclose(x[0], 0)

    def boundary_r(x, on_boundary):
        return on_boundary and np.isclose(x[0], l)

    def func(x):
        # Analytical series solution, truncated at 21 modes.
        total = 0
        for n in range(1, 22):
            omiga_i = ((n ** 2 * np.pi ** 2) / l ** 2) * (e * i / (rho * s)) ** 0.5
            total += (
                2 / (rho * s * l) * f / (omiga_i ** 2 - omiga ** 2) * 2 * l / (n * np.pi)
                * np.sin((n * np.pi * x[:, 0:1]) / l)
                * (np.sin(omiga * x[:, 1:]) - omiga / omiga_i * np.sin(omiga_i * x[:, 1:]))
            )
        return total

    geom = dde.geometry.Interval(0, 1)
    timedomain = dde.geometry.TimeDomain(0, 1)
    geomtime = dde.geometry.GeometryXTime(geom, timedomain)

    # bc1 = dde.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
    # bc2 = dde.OperatorBC(geomtime, lambda x, y, _: ddx(x, y), lambda _, on_boundary: on_boundary)

    # Simply supported beam: zero deflection (Dirichlet) and zero bending
    # moment (OperatorBC on y_xx) at both ends.
    bc1 = dde.DirichletBC(geomtime, lambda x: 0, boundary_l)
    bc2 = dde.OperatorBC(geomtime, lambda x, y, _: ddx(x, y), boundary_l)
    bc3 = dde.DirichletBC(geomtime, lambda x: 0, boundary_r)
    bc4 = dde.OperatorBC(geomtime, lambda x, y, _: ddx(x, y), boundary_r)

    # x_ = np.linspace(0.3, 0.7, num=200)
    # t_ = np.linspace(0, 1, 10)
    # xx_, tt_ = np.meshgrid(x_, t_)
    # observe_x = np.vstack((np.ravel(xx_), np.ravel(tt_))).T

    data = dde.data.TimePDE(
        geomtime,
        pde,
        [bc1, bc2, bc3, bc4],
        num_domain=400,
        num_boundary=60,
        # anchors=observe_x,
        # train_distribution="uniform",
        solution=func,
        num_test=4000,
    )

    layer_size = [2] + [50] * 4 + [1]
    activation = "tanh"
    initializer = "Glorot uniform"
    net = dde.nn.FNN(layer_size, activation, initializer)
    # net = dde.nn.STMsFFN(layer_size, activation, initializer, sigmas_x=[1], sigmas_t=[1, 10])

    # Hard constraint: the output vanishes at x = 0, x = 1, and t = 0.
    net.apply_output_transform(lambda x, y: x[:, 0:1] * (1 - x[:, 0:1]) * x[:, 1:] * y)

    model = dde.Model(data, net)
    model.compile("adam", lr=0.001, metrics=["l2 relative error"])

    # pde_residual_resampler = dde.callbacks.PDEResidualResampler(period=1)
    model.train(epochs=20000, display_every=2000)
    # model.train(epochs=20000, callbacks=[pde_residual_resampler], display_every=2000)

    model.compile("L-BFGS", metrics=["l2 relative error"])
    losshistory, train_state = model.train()
    dde.saveplot(losshistory, train_state, issave=True, isplot=True)

    # Compare the prediction with the analytical solution at t = 0.2 s.
    x = np.linspace(0, 1, 20, endpoint=True)
    t = [0.2]
    xx, tt = np.meshgrid(x, t)
    X = np.vstack((np.ravel(xx), np.ravel(tt))).T
    y_pred = model.predict(X)
    y_exact = func(X)
    plt.plot(x, y_pred, color="blue", linewidth=2, linestyle="solid", label="predict")
    plt.plot(x, y_exact, color="red", linewidth=2.5, linestyle="dotted", label="exact")
    plt.legend()
    plt.xlabel("x")
    plt.xticks(np.arange(0, 1.1, 0.1))
    plt.ylabel("y")
    plt.title("t=0.2s")
    plt.show()


if __name__ == "__main__":
    main()
```

Training output:

```
Step Train loss Test loss Test metric
0 [1.14e+01, 0.00e+00, 3.05e-02, 0.00e+00, 1.34e+00] [8.76e+00, 0.00e+00, 3.05e-02, 0.00e+00, 1.34e+00] [7.24e+00]
2000 [2.36e-04, 0.00e+00, 1.88e-04, 0.00e+00, 2.12e-04] [1.68e-04, 0.00e+00, 1.88e-04, 0.00e+00, 2.12e-04] [6.91e-02]
4000 [1.05e-04, 0.00e+00, 5.39e-05, 0.00e+00, 2.02e-05] [8.56e-05, 0.00e+00, 5.39e-05, 0.00e+00, 2.02e-05] [6.07e-02]
6000 [4.67e-04, 0.00e+00, 3.83e-05, 0.00e+00, 2.52e-05] [3.57e-04, 0.00e+00, 3.83e-05, 0.00e+00, 2.52e-05] [6.91e-02]
8000 [5.68e-05, 0.00e+00, 1.58e-05, 0.00e+00, 1.69e-06] [4.85e-05, 0.00e+00, 1.58e-05, 0.00e+00, 1.69e-06] [6.67e-02]
10000 [6.12e-05, 0.00e+00, 1.81e-05, 0.00e+00, 1.54e-06] [4.97e-05, 0.00e+00, 1.81e-05, 0.00e+00, 1.54e-06] [6.67e-02]
12000 [2.41e-04, 0.00e+00, 2.88e-05, 0.00e+00, 1.84e-05] [2.42e-04, 0.00e+00, 2.88e-05, 0.00e+00, 1.84e-05] [9.14e-02]
14000 [4.24e-05, 0.00e+00, 1.62e-05, 0.00e+00, 3.11e-06] [3.57e-05, 0.00e+00, 1.62e-05, 0.00e+00, 3.11e-06] [6.79e-02]
16000 [3.89e-05, 0.00e+00, 1.60e-05, 0.00e+00, 3.17e-06] [3.30e-05, 0.00e+00, 1.60e-05, 0.00e+00, 3.17e-06] [6.78e-02]
18000 [2.43e-04, 0.00e+00, 3.28e-05, 0.00e+00, 9.17e-05] [2.15e-04, 0.00e+00, 3.28e-05, 0.00e+00, 9.17e-05] [1.21e-01]
20000 [3.43e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [2.92e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [6.75e-02]

Best model at step 20000: train loss: 5.21e-05 test loss: 4.70e-05 test metric: [6.75e-02]

'train' took 786.180204 s

Compiling model... 'compile' took 4.721393 s

Training model...

Step Train loss Test loss Test metric
20000 [3.43e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [2.92e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [6.75e-02]
21000 [5.80e-06, 0.00e+00, 3.88e-06, 0.00e+00, 2.78e-06]
21915 [4.72e-06, 0.00e+00, 2.21e-06, 0.00e+00, 1.93e-06] [3.49e-06, 0.00e+00, 2.21e-06, 0.00e+00, 1.93e-06] [5.94e-02]

Best model at step 21915: train loss: 8.86e-06 test loss: 7.63e-06 test metric: [5.94e-02]
```

whtwu, Mar 04 '22

Have you checked the FAQ entry "Q: I failed to train the network or get the right solution, e.g., large training loss, unbalanced losses."?
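If the losses are unbalanced (here the PDE residual starts around 1e+01 while the BC terms are much smaller), one remedy from that FAQ entry is to reweight the loss terms when compiling. A minimal sketch; the weights are illustrative and need tuning, one per term in the order [PDE, bc1, bc2, bc3, bc4]:

```python
# Illustrative, untuned weights: [PDE, bc1, bc2, bc3, bc4]
model.compile(
    "adam",
    lr=0.001,
    loss_weights=[1, 10, 10, 10, 10],
    metrics=["l2 relative error"],
)
```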

lululxvi, Mar 09 '22

Thanks for your reply. Absolutely yes, I have checked the FAQ and other issues. The time domain is (0, 1), and the result gets better the closer t is to 1.

whtwu, Mar 10 '22

@lululxvi, looking forward to your suggestions.

whtwu, Mar 11 '22

You may try hard constraints for the BC.
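Concretely, the Dirichlet part can be built into the network output through an output transform, as the script above already does; a minimal sketch (note this leaves the zero-moment conditions, bc2 and bc4, as soft OperatorBC losses):

```python
# y_hat = x * (1 - x) * t * N(x, t) vanishes at x = 0, x = 1, and t = 0,
# so bc1 and bc3 (and the zero initial displacement) hold exactly and
# can be dropped from the loss.
net.apply_output_transform(lambda x, y: x[:, 0:1] * (1 - x[:, 0:1]) * x[:, 1:] * y)
```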

lululxvi, Mar 15 '22

Is there any update on this issue? I would like to know whether the results improved when using hard constraints for the BCs.

I'm having the same problem, and I ended up not using hard constraints because I found it hard to represent my initial and boundary conditions that way without negatively impacting my results.

Looking forward to the conclusion of this discussion!

ShaikhaTheGreen, Mar 16 '22

These are my BCs: [image of the boundary conditions]

In the code above, I used the hard constraint `net.apply_output_transform(lambda x, y: x[:, 0:1] * (1 - x[:, 0:1]) * x[:, 1:] * y)` to satisfy the Dirichlet BCs, but I don't know how to satisfy the OperatorBC at the same time (one possible construction is sketched below).

whtwu, Mar 18 '22
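For reference, one construction that would satisfy both the Dirichlet and the zero-moment conditions exactly (a sketch under stated assumptions, not something verified in this thread): let the network output time-dependent coefficients of sine modes, since with l = 1 every sin(k*pi*x) and its second x-derivative vanish at x = 0 and x = 1. The number of modes `n_modes` is an illustrative choice, and the coefficients must depend only on t; otherwise y_xx would pick up x-derivatives of the coefficients at the ends.

```python
import numpy as np
import deepxde as dde
from deepxde.backend import tf

n_modes = 10  # illustrative choice; tune for your problem

# layer_sizes[0] = 2 matches the raw (x, t) input; the feature transform
# then drops x, so the mode coefficients are functions of t only.
net = dde.nn.FNN([2] + [50] * 4 + [n_modes], "tanh", "Glorot uniform")
net.apply_feature_transform(lambda x: x[:, 1:2])  # (x, t) -> t

def output_transform(x, y):
    # y_hat(x, t) = t * sum_k c_k(t) * sin(k * pi * x).
    # Each mode and its second x-derivative vanish at x = 0 and x = 1,
    # so y = 0 and y_xx = 0 hold exactly at both ends; the factor t also
    # enforces the zero initial displacement, as in the original transform.
    modes = tf.concat(
        [tf.sin(k * np.pi * x[:, 0:1]) for k in range(1, n_modes + 1)], axis=1
    )
    return x[:, 1:2] * tf.reduce_sum(y * modes, axis=1, keepdims=True)

net.apply_output_transform(output_transform)
```

With all four BCs hard-constrained this way, bc1 through bc4 can be dropped from `TimePDE`, leaving only the PDE residual in the loss. The ansatz also matches the structure of the analytical solution `func`, which is itself a sine series with time-dependent coefficients.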