Adding a function to the automatic differentiation part of deepxde for a PDE
Hello @lululxvi and other researchers.
Hopefully, all are safe and well.
I am trying to solve

∇ · (K(x, y) ∇u) = 0

with

K(x, y) = 10 if 0.1 < x < 0.3 and 0.2 < y < 0.25, and K(x, y) = 1 otherwise,

over a unit square along with some boundary conditions. While running the code below, I face errors such as 'OperatorNotAllowedInGraphError' or 'TypeError: Expected float32, got type function'. By the way, the code runs properly without including K. I would highly appreciate it if you could assist me. Warm regards,
import tensorflow.compat.v1 as tf
import deepxde as dde
dde.config.disable_xla_jit()
from deepxde.backend import tf
import numpy as np
geom = dde.geometry.Rectangle([0,0],[1,1])
def source(x, y):
    if 0.1 < x < 0.3 and 0.2 < y < 0.25:
        K = 10
    else:
        K = 1
    return K
def pde(x, y):
    dy_x = tf.gradients(y, x)[0]
    dy_x, dy_y = dy_x[:, 0:1], dy_x[:, 1:2]
    dy_xxs = tf.gradients(source(x, y) * dy_x, x)[0][:, 0:1]
    dy_yys = tf.gradients(source(x, y) * dy_y, x)[0][:, 1:2]
    return dy_xxs + dy_yys
def func_1(x): return 100*np.ones((len(x),1))
def func_2(x): return 50*np.ones((len(x),1))
def func_3(x): return np.zeros((len(x),1))
def boundary_t(x, on_boundary): return on_boundary and np.isclose(x[1], 1)
def boundary_b(x, on_boundary): return on_boundary and np.isclose(x[1], 0)
def boundary_r(x, on_boundary): return on_boundary and np.isclose(x[0], 1)
def boundary_l(x, on_boundary): return on_boundary and np.isclose(x[0], 0)
bc_l = dde.DirichletBC(geom, func_1, boundary_l)
bc_r = dde.DirichletBC(geom, func_2, boundary_r)
bc_t = dde.NeumannBC(geom, func_3, boundary_t)
bc_b = dde.NeumannBC(geom, func_3, boundary_b)
data = dde.data.PDE(geom, pde, [bc_t, bc_b, bc_r, bc_l], num_domain=1000, num_boundary=100)
layer_size = [3] + [15] + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
model = dde.Model(data, net)
model.compile("adam", lr=0.001)
losshistory, train_state = model.train(epochs=10000)
dde.saveplot(losshistory, train_state, issave=True, isplot=True)
In pde(x, y), x and y are Tensors. You need to modify source.
Dear @lululxvi, would you please explain more? Maybe I am not allowed to define a discontinuous function for source(x, y), but there is no other option for me.
You cannot use the following with Tensors:
if 0.1 < x < 0.3 and 0.2 < y < 0.25:
    K = 10
else:
    K = 1
The following code should help you:

xx = x[:, 0:1]
yy = x[:, 1:2]
# Boolean masks for the box 0.1 < x < 0.3, 0.2 < y < 0.25
x_left = tf.greater(xx, 0.1)
x_right = tf.less(xx, 0.3)
y_left = tf.greater(yy, 0.2)
y_right = tf.less(yy, 0.25)
condition = x_left & x_right & y_left & y_right
# K = 10 inside the box, 1 elsewhere
K = tf.where(condition, 10.0, 1.0)
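As a quick sanity check (a minimal sketch, assuming the TF2 backend in eager mode, with made-up sample points), you can evaluate the mask on a couple of inputs to confirm it reproduces the intent of the original if/else:

import tensorflow as tf

# Hypothetical test points: the first lies inside the box
# (0.1, 0.3) x (0.2, 0.25), the second outside.
x = tf.constant([[0.2, 0.22], [0.5, 0.5]])
xx, yy = x[:, 0:1], x[:, 1:2]
condition = tf.greater(xx, 0.1) & tf.less(xx, 0.3) & tf.greater(yy, 0.2) & tf.less(yy, 0.25)
K = tf.where(condition, 10.0, 1.0)
print(K.numpy())  # expected: [[10.], [1.]]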
Dear @forxltk, thanks so much. Should it be yy = x[:, 0:2] or yy = x[:, 1:2]? And x_left_condition & ... or x_left & ...? Should I put this in def pde(x, y) or define it as a separate function?
Yes, in the pde function, and use x_left & ....
I used the code below but received an error: ValueError: Shapes must be equal rank, ...
def pde(x, y):
    xx = x[:, 0:1]
    yy = x[:, 1:2]
    x_left = tf.greater(xx, 0.1)
    x_right = tf.less(xx, 0.3)
    y_left = tf.greater(yy, 0.2)
    y_right = tf.less(yy, 0.25)
    condition = x_left & x_right & y_left & y_right
    K = tf.where(condition, 10.0, 1.0)
    dy_x = tf.gradients(y, x)[0]
    dy_x, dy_y = dy_x[:, 0:1], dy_x[:, 1:2]
    dy_xxs = tf.gradients(K * dy_x, x)[0][:, 0:1]
    dy_yys = tf.gradients(K * dy_y, x)[0][:, 1:2]
    return dy_xxs + dy_yys
layer_size = [3] + [15] + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
There are only x and y, so it should be [2] + [15] + [1]. And you should try more hidden layers.
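For instance (just a sketch; the exact width and depth are a matter of tuning), a wider and deeper net could look like:

layer_size = [2] + [50] * 4 + [1]  # 2 inputs (x, y), 4 hidden layers of 50 neurons, 1 output
net = dde.maps.FNN(layer_size, "tanh", "Glorot uniform")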
@forxltk You are right. I corrected this but again got the same error.
Seems that tf.where works differently in tf1 and tf2.
- tf1: K = tf.where(condition, 10.0 * tf.ones_like(xx), 1.0 * tf.ones_like(xx))
- Or use import tensorflow as tf instead of import tensorflow.compat.v1 as tf.
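For reference, the difference is that tf.compat.v1.where expects the two value tensors to match the shape of the condition, while TF2's tf.where broadcasts scalar branches. A minimal illustration (assuming a TF2 install, where both APIs are available):

import tensorflow as tf  # TF2

cond = tf.constant([[True], [False]])
# TF2-style where: scalar branches are broadcast automatically.
K2 = tf.where(cond, 10.0, 1.0)  # shape (2, 1)
# TF1-style where: the branches must match the condition's shape.
K1 = tf.compat.v1.where(cond, 10.0 * tf.ones_like(cond, tf.float32),
                        1.0 * tf.ones_like(cond, tf.float32))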
Unfortunately, it did not work either. Maybe there is an incompatibility between xx = x[:, 0:1], yy = x[:, 1:2], and dy_x...
Dear @forxltk, thanks so much. Eventually, tf.where(condition, 10 * tf.ones_like(xx), ...) worked for my case. Please also help me to define the below:

if (0.1 < x < 0.3 and 0.2 < y < 0.25) or (0.35 < x < 0.4 and 0.35 < y < 0.45):
    K = 10
else:
    K = 1
You should learn something from the above code. BTW, you can also use tf.logical_or and tf.logical_and; then everything should be easy.
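For completeness, a sketch of the two-box condition written with the explicit logical ops (equivalent to the & and | operators; this goes inside pde, where xx and yy are defined):

in_box1 = tf.logical_and(tf.logical_and(tf.greater(xx, 0.1), tf.less(xx, 0.3)),
                         tf.logical_and(tf.greater(yy, 0.2), tf.less(yy, 0.25)))
in_box2 = tf.logical_and(tf.logical_and(tf.greater(xx, 0.35), tf.less(xx, 0.4)),
                         tf.logical_and(tf.greater(yy, 0.35), tf.less(yy, 0.45)))
condition = tf.logical_or(in_box1, in_box2)
K = tf.where(condition, 10.0 * tf.ones_like(xx), 1.0 * tf.ones_like(xx))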
OK, I will try to do that. Once again, thank you so much @forxltk and @lululxvi!
@forxltk Would you please let me know whether the below is correct:

xx = x[:, 0:1]
yy = x[:, 1:2]
x_left = tf.greater(xx, 0.1)
x_right = tf.less(xx, 0.3)
y_left = tf.greater(yy, 0.2)
y_right = tf.less(yy, 0.25)
x_left1 = tf.greater(xx, 0.35)
x_right1 = tf.less(xx, 0.4)
y_left1 = tf.greater(yy, 0.35)
y_right1 = tf.less(yy, 0.45)
condition = x_left & x_right & y_left & y_right | x_left1 & x_right1 & y_left1 & y_right1
K = tf.where(condition, 10.0 * tf.ones_like(xx), 1.0 * tf.ones_like(xx))
@forxltk Would you please help me? I tried different ways, but was not successful.
Use parentheses to group the two boxes:

condition = (x_left & x_right & y_left & y_right) | (x_left1 & x_right1 & y_left1 & y_right1)
@lululxvi @forxltk I tried different activation functions, epochs, numbers of neurons, and so on, but the result does not change. I mean the pattern of the results is the same as in the case where there is no condition. Any suggestions, please?
import tensorflow.compat.v1 as tf
import deepxde as dde
dde.config.disable_xla_jit()
from deepxde.backend import tf
import numpy as np
geom = dde.geometry.Rectangle([0,0],[1,1])
def pde(x, y):
    xx = x[:, 0:1]
    yy = x[:, 1:2]
    x_left = tf.greater(xx, 0.1)
    x_right = tf.less(xx, 0.3)
    y_left = tf.greater(yy, 0.2)
    y_right = tf.less(yy, 0.25)
    x_left1 = tf.greater(xx, 0.35)
    x_right1 = tf.less(xx, 0.4)
    y_left1 = tf.greater(yy, 0.35)
    y_right1 = tf.less(yy, 0.45)
    condition = (x_left & x_right & y_left & y_right) | (x_left1 & x_right1 & y_left1 & y_right1)
    K = tf.where(condition, 10.0 * tf.ones_like(xx), 1.0 * tf.ones_like(xx))
    dy_x = tf.gradients(y, x)[0]
    dy_x, dy_y = dy_x[:, 0:1], dy_x[:, 1:2]
    dy_xxs = tf.gradients(K * dy_x, x)[0][:, 0:1]
    dy_yys = tf.gradients(K * dy_y, x)[0][:, 1:2]
    return dy_xxs + dy_yys
def func_1(x): return 100*np.ones((len(x),1))
def func_2(x): return 50*np.ones((len(x),1))
def func_3(x): return np.zeros((len(x),1))
def boundary_t(x, on_boundary): return on_boundary and np.isclose(x[1], 1)
def boundary_b(x, on_boundary): return on_boundary and np.isclose(x[1], 0)
def boundary_r(x, on_boundary): return on_boundary and np.isclose(x[0], 1)
def boundary_l(x, on_boundary): return on_boundary and np.isclose(x[0], 0)
bc_l = dde.DirichletBC(geom, func_1, boundary_l)
bc_r = dde.DirichletBC(geom, func_2, boundary_r)
bc_t = dde.NeumannBC(geom, func_3, boundary_t)
bc_b = dde.NeumannBC(geom, func_3, boundary_b)
data = dde.data.PDE(geom, pde, [bc_t, bc_b, bc_r, bc_l], num_domain=10000, num_boundary=100)
layer_size = [2] + [15] + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
model = dde.Model(data, net)
model.compile("adam", lr=0.001)
losshistory, train_state = model.train(epochs=10000)
dde.saveplot(losshistory, train_state, issave=True, isplot=True)
x = geom.uniform_points(1000, True)
y = model.predict(x)
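A quick way to inspect the prediction (a sketch, assuming matplotlib is installed) is to scatter-plot the predicted field over these points:

import matplotlib.pyplot as plt

plt.scatter(x[:, 0], x[:, 1], c=y[:, 0], s=5)
plt.colorbar(label="predicted u")
plt.xlabel("x")
plt.ylabel("y")
plt.show()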
Hi everyone!
I have kind of the same issue, but what I want to do is make the variable 'K' depend on the value of 'x'. It is not a function; it is more like so, so many conditions. Are there some functions in TensorFlow that can help me, or does anyone have an idea how I can solve my problem? I cannot find a solution to my problem :( Thank you so much!!!
I tried different activation functions, epochs, numbers of neurons, and so on, but the result does not change. I mean the pattern of the results is the same as in the case where there is no condition. Any suggestions, please?
I usually suggest starting with a similar but simpler problem to obtain some experience.
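For example, a hypothetical 1D analogue of the same equation, d/dx(K du/dx) = 0 with a single jump in K, keeps every ingredient of the 2D problem but is much easier to debug (the jump location and values below are illustrative, not from this thread):

import numpy as np
import deepxde as dde
from deepxde.backend import tf

geom = dde.geometry.Interval(0, 1)

def pde(x, u):
    # Conductivity with one jump: K = 10 for x > 0.5, else 1.
    K = tf.where(tf.greater(x, 0.5), 10.0 * tf.ones_like(x), 1.0 * tf.ones_like(x))
    du_x = tf.gradients(u, x)[0]
    return tf.gradients(K * du_x, x)[0]  # residual of d/dx(K du/dx) = 0

bc_l = dde.DirichletBC(geom, lambda x: 100 * np.ones((len(x), 1)),
                       lambda x, on_boundary: on_boundary and np.isclose(x[0], 0))
bc_r = dde.DirichletBC(geom, lambda x: 50 * np.ones((len(x), 1)),
                       lambda x, on_boundary: on_boundary and np.isclose(x[0], 1))

data = dde.data.PDE(geom, pde, [bc_l, bc_r], num_domain=200, num_boundary=2)
net = dde.maps.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=0.001)
model.train(epochs=10000)

Since the flux K du/dx is constant here, the exact solution is piecewise linear with a kink at x = 0.5, so it is easy to see whether the network captures the jump before moving to 2D.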
It is not a function; it is more like so, so many conditions.
Don't understand your question.
I can immediately say that you are not going to solve this problem with PINNs. Your PDE is nothing but a 2D steady-state heat conduction problem with space-dependent conductivity. If the variation is discontinuous, then you are out of luck.
PINNs have yet to reach such maturity. Can you please show your predicted and expected results? Also, please attach the loss plot. I might give you a problem-specific fix, but no promises.
Dear @praksharma,
You are generally right. I am supposed to solve this problem using PINNs for different cases, i.e., slightly/moderately/highly heterogeneous porous media. When a medium is homogeneous (i.e., there is only one value of conductivity), the PINN predicts very well. When it comes to heterogeneous media, the story is different: depending on the structure of the NN, the number of boundary data, etc., each time a distribution is obtained that does not follow the actual pattern correctly. I have seen that extended PINNs (XPINNs) are used in some projects, but I think that does not work for my case, mainly because for a highly heterogeneous medium the number of loss-function terms would be very high. A medium with its actual pattern is attached.