Separate train_x_all and train_x_bc in PDE

vl-dud opened this issue 9 months ago

Currently, the point arrays train_x_all and train_x_bc overlap: train_x_all already contains the boundary points, so when train_x is assembled from both arrays the boundary points end up duplicated, which can be costly for large point sets (see the simplified sketch below).
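To show where the duplication comes from, here is a minimal, self-contained sketch in plain NumPy. It is an illustration only, not the actual DeepXDE internals: train_x_all already includes the boundary points, so stacking train_x_bc on top of it repeats them.

import numpy as np

# Simplified illustration only, not the library code.
train_x_bc = np.array([[0.0], [1.0]])           # boundary points
train_x_all = np.array([[0.0], [0.5], [1.0]])   # interior + boundary points
train_x = np.vstack((train_x_bc, train_x_all))  # 0.0 and 1.0 now appear twice
print(train_x.shape)  # (5, 1)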

Look at the example (the pde function below is just a placeholder residual so the snippet runs end to end):

import deepxde as dde

def pde(x, y):
    # Placeholder PDE residual (dy/dx = 0), added only so the example runs.
    return dde.grad.jacobian(y, x, i=0, j=0)

geom = dde.geometry.Interval(0, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(
    geom,
    pde,
    [bc],
    num_domain=50,
    num_boundary=25
)

Shapes before the PR: train_x (100, 1), train_x_all (75, 1), train_x_bc (25, 1)
Shapes after the PR: train_x (75, 1), train_x_all (50, 1), train_x_bc (25, 1)
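For reference, these shapes can be printed directly from the data object, assuming the example above has been run:

print(data.train_x.shape, data.train_x_all.shape, data.train_x_bc.shape)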

Another example, with num_domain equal to 0:

geom = dde.geometry.Interval(0, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(
    geom,
    pde,
    [bc],
    num_domain=0,
    num_boundary=25
)

Shapes before the PR: train_x (50, 1), train_x_all (25, 1), train_x_bc (25, 1)
Shapes after the PR: train_x (25, 1), train_x_all (0, 1), train_x_bc (25, 1)
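One way to check the duplication directly is to count unique rows in train_x. A small sketch with NumPy, assuming one of the examples above has been run:

import numpy as np

# With duplicated boundary points, the number of unique rows is smaller
# than the total number of rows in train_x.
n_total = len(data.train_x)
n_unique = len(np.unique(data.train_x, axis=0))
print(n_total, n_unique)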

vl-dud (May 7, 2024)

Check this: https://github.com/lululxvi/deepxde/pull/1113

lululxvi (May 7, 2024)

These pull requests are certainly related, but in my opinion they are not the same thing. The current PR specifically addresses the duplication of points; introducing train_x_pde can be considered as a next step.

vl-dud (May 8, 2024)

This PR changes the intended behavior of the code. In the current design, both interior points and BC points are used for training the PDE losses; that is why it is called train_x_all.
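To make the naming concrete, here is a purely conceptual sketch of the loss split described above. The residual and error functions are hypothetical placeholders, not the library internals: the PDE loss is evaluated on train_x_all (interior plus boundary points), while the BC loss is evaluated on train_x_bc only.

import numpy as np

def pde_residual(x):
    # Hypothetical placeholder for the PDE residual, for illustration only.
    return np.zeros(len(x))

def bc_error(x):
    # Hypothetical placeholder for the boundary-condition error, for illustration only.
    return np.zeros(len(x))

loss_pde = np.mean(pde_residual(data.train_x_all) ** 2)  # interior + boundary points
loss_bc = np.mean(bc_error(data.train_x_bc) ** 2)        # boundary points only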

lululxvi (Jun 23, 2024)

I see; in that case I will close this PR.

vl-dud (Jun 24, 2024)