Results 94 comments of Kirill Zubov

You need only one NN because you have only one unknown variable, `u`.

So if `Dx(u(x,y)) ~ 0`, then `Dxx(u(x,y)) + Dyy(u(x,y)) ~ (delP/mu) + ((rho/mu)*u(x,y)*Dx(u(x,y)))` simplifies to `Dxx(u(x,y)) + Dyy(u(x,y)) ~ (delP/mu)`.
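The simplification above can be restated in standard notation (assuming the equation is a steady channel-flow form with pressure gradient `delP` written as $\Delta P$):

```latex
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}
  = \frac{\Delta P}{\mu} + \frac{\rho}{\mu}\, u \frac{\partial u}{\partial x}
% If the flow is fully developed, \partial u / \partial x \approx 0,
% so the nonlinear convective term drops out:
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}
  = \frac{\Delta P}{\mu}
```

That is, with $\partial u/\partial x \approx 0$ the problem reduces to a linear Poisson equation for $u(x, y)$.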

Yeah, in this form, `Dxx(u(x,y)) + Dyy(u(x,y)) ~ (delP/mu) + ((rho/mu)*u(x,y)*Dx(u(x,y)))`, the optimisation converges to the trivial solution `u(x,y) -> 0`. That makes it a good test for convergence.

@v-chau try using more points in `train_data`, or use `QuasiRandomTraining`. It should work, I guess.
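A minimal sketch of switching to `QuasiRandomTraining` in NeuralPDE.jl, using a generic 2D Poisson-type problem as a stand-in (the equation, boundary conditions, network size, and point count here are illustrative assumptions, not the original problem):

```julia
using NeuralPDE, Lux, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# Placeholder PDE and boundary conditions; substitute your own.
eq = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -1.0
bcs = [u(0, y) ~ 0.0, u(1, y) ~ 0.0, u(x, 0) ~ 0.0, u(x, 1) ~ 0.0]
domains = [x ∈ Interval(0.0, 1.0), y ∈ Interval(0.0, 1.0)]

# One chain, since there is one unknown variable u.
chain = Lux.Chain(Dense(2, 16, tanh), Dense(16, 16, tanh), Dense(16, 1))

# Quasi-random (low-discrepancy) sampling; raising the point count
# is the other knob to try when training stalls.
strategy = QuasiRandomTraining(512)
discretization = PhysicsInformedNN(chain, strategy)

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
res = Optimization.solve(prob, BFGS(); maxiters = 500)
```

The design point is that the training strategy is orthogonal to the PDE definition: you only swap the `strategy` object passed to `PhysicsInformedNN`.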

Since we generate the data internally, every iteration uses a new set of points: random or quasi-random sampling draws a fresh set each time, while adaptive quadrature adaptively selects the set and the size of...

I think it is probably a bad idea to inject data at the level of the equations; using the additional loss is best. And probably the issue can be...
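A sketch of the additional-loss approach, assuming ground-truth measurements of `u` are available (the data arrays `xs` and `u_data` and their sizes are hypothetical placeholders):

```julia
using NeuralPDE, Lux

# Hypothetical measured data: 2×N sample points and 1×N values of u there.
xs = rand(2, 100)       # placeholder (x, y) sample locations
u_data = rand(1, 100)   # placeholder measured values of u

# Data-mismatch term added on top of the PDE residual loss.
# NOTE: the exact callback signature may differ between NeuralPDE versions;
# check the docs for the release you are on.
function data_loss(phi, θ)
    # Mean squared error between network prediction and measurements.
    return sum(abs2, phi(xs, θ) .- u_data) / size(xs, 2)
end

chain = Lux.Chain(Dense(2, 16, tanh), Dense(16, 1))
discretization = PhysicsInformedNN(chain, QuasiRandomTraining(256);
                                   additional_loss = data_loss)
```

This keeps the symbolic PDE clean: the data enters only through the extra loss term, weighted against the physics residual by the optimiser rather than baked into the equations.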

It would be cool to have an example with ground-truth data, like interpolating some operator in a PDE from real-world data. But the big question is how to connect it all together...

@vboussange how can I help with this issue?