Learning-Scientific_Machine_Learning_Residual_Based_Attention_PINNs_DeepONets

May I ask how to represent a system of differential equations?

squarefaceyao opened this issue 2 years ago · 3 comments

I would like to ask whether you think a PINN can be used to solve this problem.

We will solve a simple ODE system:

$$ {\frac{dV}{dt}}=10- {G_{Na}m^3h(V-50)} - {G_{K}n^4(V+77)} - {G_{L}(V+54.387)}$$

$${\frac{dm}{dt}}=\left(\frac{0.1{(V+40)}}{1-e^\frac{-V-40}{10}}\right)(1-m) - \left(4e^{\frac{-V-65}{18}}\right)m $$

$$\frac{dh}{dt}= {\left(0.07e^{\frac{-V-65}{20}}\right)(1-h)} - \left(\frac{1}{1+e^\frac{-V-35}{10}}\right)h$$

$$\frac{dn}{dt}= {\left(\frac{0.01(V+55)}{1-e^\frac{-V-55}{10}}\right)}(1-n) - \left(0.125e^{\frac{-V-65}{80}}\right)n$$

$$\qquad \text{where} \quad t \in [0,7],$$


with the initial conditions
$$V(0) = -65, \quad m(0) = 0.05, \quad h(0) = 0.6, \quad n(0) = 0.32$$

The reference solution is here, where the parameters $G_{Na}, G_{K}, G_{L}$ are the maximal conductances; their true values are 120, 36, and 0.3, respectively.

The code below loads the data:

```python
import numpy as np

data = np.load('g_Na=120_g_K=36_g_L=0.3.npz')
t = data['t']  # time points
v = data['v']  # membrane potential V
m = data['m']  # gating variables m, h, n
h = data['h']
n = data['n']
```
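
If the `.npz` file is not at hand, a reference solution can be regenerated with a standard ODE solver; the sketch below uses `scipy.integrate.solve_ivp` with the equations and initial conditions stated above (the 701-point grid and tolerances are my assumptions, not taken from the original data):

```python
import numpy as np
from scipy.integrate import solve_ivp

# True parameter values (to be recovered later by the inverse PINN)
g_Na, g_K, g_L = 120.0, 36.0, 0.3

def hh_rhs(t, y):
    """Right-hand side of the Hodgkin-Huxley system written above."""
    V, m, h, n = y
    dV = 10.0 - g_Na * m**3 * h * (V - 50.0) - g_K * n**4 * (V + 77.0) - g_L * (V + 54.387)
    dm = (0.1 * (V + 40.0) / (1.0 - np.exp((-V - 40.0) / 10.0))) * (1.0 - m) \
         - 4.0 * np.exp((-V - 65.0) / 18.0) * m
    dh = 0.07 * np.exp((-V - 65.0) / 20.0) * (1.0 - h) \
         - h / (1.0 + np.exp((-V - 35.0) / 10.0))
    dn = (0.01 * (V + 55.0) / (1.0 - np.exp((-V - 55.0) / 10.0))) * (1.0 - n) \
         - 0.125 * np.exp((-V - 65.0) / 80.0) * n
    return [dV, dm, dh, dn]

# Integrate on t in [0, 7] from the stated initial conditions and save
t_eval = np.linspace(0.0, 7.0, 701)
sol = solve_ivp(hh_rhs, (0.0, 7.0), [-65.0, 0.05, 0.6, 0.32],
                t_eval=t_eval, rtol=1e-8, atol=1e-8)
np.savez('g_Na=120_g_K=36_g_L=0.3.npz',
         t=sol.t, v=sol.y[0], m=sol.y[1], h=sol.y[2], n=sol.y[3])
```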

squarefaceyao avatar Aug 21 '22 14:08 squarefaceyao

Sure, you can solve it with a PINN.

First, make sure the last layer of your NN contains 4 neurons (one each for $V$, $m$, $h$, and $n$). You can review this example for reference: https://github.com/alexpapados/Physics-Informed-Deep-Learning-Solid-and-Fluid-Mechanics/blob/main/Compressible%20Hydrodynamic%20Shock-Tube%20Problems%20(1-D%20Compressible%20Euler%20Equations)/Forward%20Problems%20W-PINNs-DE/Euler_Eq_Reverse_Sod_Shock_Tube_Problem.py

Then, using automatic differentiation, you can obtain each time derivative $\frac{dV}{dt}$, $\frac{dm}{dt}$, $\frac{dh}{dt}$, and $\frac{dn}{dt}$.

Set up the four ODE residuals as loss terms (remember each residual should be driven to zero).

The total residual loss is the sum of the four terms:

$$L_{ODE}=L_{ODE_1}+L_{ODE_2}+L_{ODE_3}+L_{ODE_4}$$
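
These steps can be sketched in PyTorch as follows (the network size, collocation grid, and use of the known parameter values are illustrative assumptions, not a definitive implementation):

```python
import torch

# Small fully connected network: input t, output (V, m, h, n)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 4),
)

# Collocation points on [0, 7]; requires_grad enables d/dt via autograd
t = torch.linspace(0.0, 7.0, 200).reshape(-1, 1).requires_grad_(True)
y = net(t)
V, m, h, n = y[:, 0:1], y[:, 1:2], y[:, 2:3], y[:, 3:4]

# Time derivatives via automatic differentiation
dV = torch.autograd.grad(V, t, torch.ones_like(V), create_graph=True)[0]
dm = torch.autograd.grad(m, t, torch.ones_like(m), create_graph=True)[0]
dh = torch.autograd.grad(h, t, torch.ones_like(h), create_graph=True)[0]
dn = torch.autograd.grad(n, t, torch.ones_like(n), create_graph=True)[0]

g_Na, g_K, g_L = 120.0, 36.0, 0.3  # known values for the forward problem

# Residuals of the four equations (each should be driven to zero)
r1 = dV - (10.0 - g_Na * m**3 * h * (V - 50.0)
           - g_K * n**4 * (V + 77.0) - g_L * (V + 54.387))
r2 = dm - ((0.1 * (V + 40.0) / (1.0 - torch.exp((-V - 40.0) / 10.0))) * (1.0 - m)
           - 4.0 * torch.exp((-V - 65.0) / 18.0) * m)
r3 = dh - (0.07 * torch.exp((-V - 65.0) / 20.0) * (1.0 - h)
           - h / (1.0 + torch.exp((-V - 35.0) / 10.0)))
r4 = dn - ((0.01 * (V + 55.0) / (1.0 - torch.exp((-V - 55.0) / 10.0))) * (1.0 - n)
           - 0.125 * torch.exp((-V - 65.0) / 80.0) * n)

# Total residual loss: sum of the four mean-squared residuals
loss_res = (r1**2).mean() + (r2**2).mean() + (r3**2).mean() + (r4**2).mean()
```

In practice this residual loss is combined with an initial-condition (and, for the inverse problem, a data-misfit) term before optimization.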

I hope it helps!

jdtoscano94 avatar Aug 21 '22 15:08 jdtoscano94

Thank you very much for your help. Have a nice day.

squarefaceyao avatar Aug 22 '22 02:08 squarefaceyao

Hello,

My problem is the ODE system described above, with the same initial conditions; the true parameter values are $G_{Na}=120$, $G_{K}=36$, and $G_{L}=0.3$.

My code cannot recover the parameters correctly. How should I modify the loss function so that they are identified?

```python
# General loss function
def loss_func(self):
    y_pred = self.net_y(self.t)
    # NN outputs for V, m, h, n
    v_nn, m_pred, h_pred, n_pred = y_pred[:, 0], y_pred[:, 1], y_pred[:, 2], y_pred[:, 3]

    # Reshape to column vectors
    m_pred = m_pred.reshape(len(m_pred), 1)
    h_pred = h_pred.reshape(len(h_pred), 1)
    n_pred = n_pred.reshape(len(n_pred), 1)
    v_nn = v_nn.reshape(len(v_nn), 1)

    # Right-hand side of dV/dt with the trainable parameters g1, g2, g3
    v_pred = 10.0 - (self.g1 * m_pred**3 * h_pred * (v_nn - 50.0)) - \
             (self.g2 * n_pred**4 * (v_nn - 77.0)) - (self.g3 * (v_nn - 54.387))

    # Total loss
    loss = torch.mean((self.m - m_pred) ** 2) + torch.mean((self.h - h_pred) ** 2) + \
           torch.mean((self.n - n_pred) ** 2) + torch.mean((self.v - v_pred) ** 2)
    self.optimizer.zero_grad()
    loss.backward()

    self.iter += 1
    print('Loss: %.3f, g1_PINNs: %.5f, g2_PINNs: %.5f, g3_PINNs: %.5f' %
          (loss.item(), self.g1.item(), self.g2.item(), self.g3.item()))
    return loss

# Train the network by minimizing the loss function w.r.t. theta and gamma
def train(self, nIter):
    self.dnn.train()
    # optimizer.step with a closure, as used by L-BFGS
    self.optimizer.step(self.loss_func)
```
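
For comparison, the earlier reply suggests enforcing the equation itself through automatic differentiation. A minimal sketch of a $dV/dt$ residual term with $G_{Na}, G_{K}, G_{L}$ as trainable parameters (the architecture, initial guesses, and collocation grid below are my assumptions, not the code from the link):

```python
import torch

class InverseHH(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder network: input t, output (V, m, h, n)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))
        # Learnable conductances, initialized away from the true values
        self.g1 = torch.nn.Parameter(torch.tensor(100.0))  # G_Na
        self.g2 = torch.nn.Parameter(torch.tensor(30.0))   # G_K
        self.g3 = torch.nn.Parameter(torch.tensor(0.1))    # G_L

    def residual_V(self, t):
        """Residual dV/dt - RHS, with dV/dt from automatic differentiation."""
        t = t.requires_grad_(True)
        y = self.net(t)
        V, m, h, n = y[:, 0:1], y[:, 1:2], y[:, 2:3], y[:, 3:4]
        dV = torch.autograd.grad(V, t, torch.ones_like(V), create_graph=True)[0]
        rhs = (10.0 - self.g1 * m**3 * h * (V - 50.0)
               - self.g2 * n**4 * (V + 77.0) - self.g3 * (V + 54.387))
        return dV - rhs

model = InverseHH()
t = torch.linspace(0.0, 7.0, 50).reshape(-1, 1)
# Residual loss; the data-misfit terms for V, m, h, n would be added alongside
loss = (model.residual_V(t) ** 2).mean()
loss.backward()
```

The same pattern extends to the $m$, $h$, and $n$ equations, so that the residual loss identifies the parameters while the data terms pin the trajectories.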

My complete code is in this link https://github.com/squarefaceyao/pinn_inverse_pes/blob/main/HH_inverse__pytorch.py

Thank you for your help.

squarefaceyao avatar Aug 28 '22 03:08 squarefaceyao