Separate train_x_all and train_x_bc in PDE
Currently, the point arrays train_x_all and train_x_bc are mixed: train_x_all already contains the boundary points, so some points end up duplicated in train_x, which can be costly when the number of training points is large. Consider the following example:
import deepxde as dde

def pde(x, y):
    # Placeholder residual, for illustration only: y'' = 0.
    return dde.grad.hessian(y, x)

geom = dde.geometry.Interval(0, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(
    geom,
    pde,
    [bc],
    num_domain=50,
    num_boundary=25,
)
Shapes before PR:
train_x      (100, 1)
train_x_all  (75, 1)
train_x_bc   (25, 1)

Shapes after PR:
train_x      (75, 1)
train_x_all  (50, 1)
train_x_bc   (25, 1)
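To see where the duplication comes from, one can inspect the arrays directly. The following is a minimal sketch, assuming the data object from the example above and the pre-PR behavior:

# Before the PR, train_x is train_x_bc stacked on top of train_x_all,
# and train_x_all already includes the boundary points, so the boundary
# collocation points appear twice in train_x.
print(data.train_x_all.shape)  # (75, 1): 50 domain points + 25 boundary points
print(data.train_x_bc.shape)   # (25, 1): BC collocation points
print(data.train_x.shape)      # (100, 1) = 75 + 25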
Another example, with num_domain equal to 0:
geom = dde.geometry.Interval(0, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(
    geom,
    pde,
    [bc],
    num_domain=0,
    num_boundary=25,
)
Shapes before PR:
train_x      (50, 1)
train_x_all  (25, 1)
train_x_bc   (25, 1)

Shapes after PR:
train_x      (25, 1)
train_x_all  (0, 1)
train_x_bc   (25, 1)
Check this: #1113 (https://github.com/lululxvi/deepxde/pull/1113)
These pull requests are certainly related, but in my opinion they are not the same thing. The current PR specifically addresses the duplication of points; the introduction of train_x_pde could be considered as a next step.
This PR changes the intended behavior of the code. In the current design, both interior points and BC points are used for training the PDE losses, which is why the array is called train_x_all.
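Roughly, the assembly in the current design can be pictured as follows. This is a simplified sketch of the idea, not the library's exact code, and the helper name assemble_train_x is made up for illustration:

import numpy as np

def assemble_train_x(train_x_all, train_x_bc):
    # Sketch of the current design: BC collocation points are stacked in front
    # of train_x_all, and the PDE residual is evaluated on both the domain and
    # boundary points in train_x_all (hence the name). Since train_x_bc is
    # drawn from the boundary points already in train_x_all, stacking the two
    # duplicates those points in train_x.
    return np.vstack((train_x_bc, train_x_all))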
I see; in that case I will close this PR.