Next, let’s jump into a Self-Adaptive PINN example, where we demonstrate some of the capabilities of self-adaptive training. You may notice that the interface doesn’t change much; all we need to do is define weight vectors, in the form of tf.Variables, for the collocation and initial-condition weights.
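Conceptually, self-adaptive training turns the loss into a minimax problem: the network parameters descend a pointwise-weighted loss while the self-adaptive weights ascend it, so hard-to-fit collocation points accumulate emphasis. The NumPy sketch below illustrates one such update step; the names (`sa_weights`, the toy residuals) are illustrative only and are not part of the TensorDiffEq API:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy residuals at 5 collocation points (stand-ins for the PDE residual f_u)
residuals = rng.normal(size=5)

# one trainable self-adaptive weight per collocation point
sa_weights = rng.uniform(size=5)
lr = 0.1

# weighted loss: mean over points of w_i * r_i^2
loss = np.mean(sa_weights * residuals ** 2)

# the network minimizes this loss, but the weights are updated by gradient
# ASCENT, so points with large residuals accumulate large weights over time
grad_w = residuals ** 2 / residuals.size  # d(loss)/d(w_i)
sa_weights = sa_weights + lr * grad_w
```

Since the per-point gradient is the squared residual itself, a point that is fit poorly for many iterations keeps growing its weight until the optimizer is forced to attend to it.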

A full example is shown below for the Allen-Cahn PDE:

import math

import numpy as np
import tensorflow as tf
import tensordiffeq as tdq
from tensordiffeq.domains import DomainND
from tensordiffeq.boundaries import IC, periodicBC
from tensordiffeq.models import CollocationSolverND

Domain = DomainND(["x", "t"], time_var='t')

# Bounds and fidelity for each variable (assumed here: the standard
# Allen-Cahn setup, x in [-1, 1] with 512 points and t in [0, 1])
Domain.add("x", [-1.0, 1.0], 512)
Domain.add("t", [0.0, 1.0], 201)

N_f = 50000
Domain.generate_collocation_points(N_f)

def func_ic(x):
    return x ** 2 * np.cos(math.pi * x)

## Weights initialization
# Dictionary with keys "residual" and "BCs". Each value must be a list with length
# equal to the number of residuals and boundary conditions, respectively
init_weights = {"residual": [tf.random.uniform([N_f, 1])],
                "BCs": [100 * tf.random.uniform([512, 1]), None]}

# Conditions to be considered at the boundaries for the periodic BC
def deriv_model(u_model, x, t):
    u = u_model(tf.concat([x, t], 1))
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    u_xxx = tf.gradients(u_xx, x)[0]
    u_xxxx = tf.gradients(u_xxx, x)[0]
    return u, u_x, u_xxx, u_xxxx

init = IC(Domain, [func_ic], var=[['x']])
x_periodic = periodicBC(Domain, ['x'], [deriv_model])

BCs = [init, x_periodic]

# We must select which loss functions will have adaptive weights
# "residual" should be a list, to cover the case of multiple residual equations
# "BCs" entries must follow the same order as the previously defined BCs list
dict_adaptive = {"residual": [True],
                 "BCs": [True, False]}
# So, in this case, we are telling the SA-PINN to put adaptive weights on the
# residual and the initial condition, but not on the periodic BC

def f_model(u_model, x, t):
    u = u_model(tf.concat([x, t], 1))
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    u_t = tf.gradients(u, t)[0]
    c1 = tdq.utils.constant(.0001)
    c2 = tdq.utils.constant(5.0)
    f_u = u_t - c1 * u_xx + c2 * u * u * u - c2 * u
    return f_u

# The self-adaptive weight vectors themselves are just trainable tf.Variables:
col_weights = tf.Variable(tf.random.uniform([N_f, 1]), trainable=True, dtype=tf.float32)
u_weights = tf.Variable(100 * tf.random.uniform([512, 1]), trainable=True, dtype=tf.float32)

layer_sizes = [2, 128, 128, 128, 128, 1]

model = CollocationSolverND()

# Now we just need to include the dict_adaptive and init_weights in the compile call
model.compile(layer_sizes, f_model, Domain, BCs, isAdaptive=True,
              dict_adaptive=dict_adaptive, init_weights=init_weights)
model.fit(tf_iter=10000, newton_iter=10000)


Let's break this script up and discuss it a bit. First, we define the domain and everything associated with it; in this case we have a problem that depends only on x and t.

Domain = DomainND(["x", "t"], time_var='t')

# Bounds and fidelity for each variable (assumed here: the standard
# Allen-Cahn setup, x in [-1, 1] with 512 points and t in [0, 1])
Domain.add("x", [-1.0, 1.0], 512)
Domain.add("t", [0.0, 1.0], 201)

N_f = 50000
Domain.generate_collocation_points(N_f)
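generate_collocation_points is handled internally by TensorDiffEq, but conceptually it amounts to drawing N_f random points from the space-time domain. A NumPy sketch of that idea (the [-1, 1] x [0, 1] bounds are an assumption, matching the usual Allen-Cahn setup):

```python
import numpy as np

N_f = 50000
rng = np.random.default_rng(42)

# draw N_f collocation points uniformly from the (x, t) domain
# (bounds assumed: x in [-1, 1], t in [0, 1])
x_f = rng.uniform(-1.0, 1.0, size=(N_f, 1))
t_f = rng.uniform(0.0, 1.0, size=(N_f, 1))
X_f = np.hstack([x_f, t_f])  # one (x, t) row per collocation point
```

Each of these 50,000 rows is a point where the PDE residual f_u will be evaluated and penalized during training.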


Notice that we take more collocation points for this problem than we did in the previous, simpler example.

Next, let's take a look at defining the initial condition and the periodic BC derivative model. We then collect those conditions into a list to pass to the solver.

def func_ic(x):
    return x ** 2 * np.cos(math.pi * x)

# Conditions to be considered at the boundaries for the periodic BC
def deriv_model(u_model, x, t):
    u = u_model(tf.concat([x, t], 1))
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    u_xxx = tf.gradients(u_xx, x)[0]
    u_xxxx = tf.gradients(u_xxx, x)[0]
    return u, u_x, u_xxx, u_xxxx

init = IC(Domain, [func_ic], var=[['x']])
x_periodic = periodicBC(Domain, ['x'], [deriv_model])

BCs = [init, x_periodic]
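The periodic BC ties the solution (and the derivatives returned by deriv_model) together at the two spatial edges of the domain. As a hedged sketch of that matching penalty for u and u_x (higher derivatives follow the same pattern), using finite differences in place of tf.gradients and assuming the [-1, 1] spatial bounds; this is illustrative, not TensorDiffEq internals:

```python
import numpy as np

def u(x, t):
    # manufactured candidate solution, periodic in x on [-1, 1]
    return np.cos(np.pi * x) * np.exp(-t)

def periodic_penalty(u, t, h=1e-5):
    # mismatch of u between the left (x = -1) and right (x = +1) edges
    mismatch_u = u(-1.0, t) - u(1.0, t)
    # central finite differences approximate u_x at each edge
    ux_left = (u(-1.0 + h, t) - u(-1.0 - h, t)) / (2 * h)
    ux_right = (u(1.0 + h, t) - u(1.0 - h, t)) / (2 * h)
    mismatch_ux = ux_left - ux_right
    return mismatch_u ** 2 + mismatch_ux ** 2

penalty = periodic_penalty(u, t=0.5)  # ~0 for this periodic candidate
```

Because cos(pi*x) has period 2, both edge values and edge slopes agree, and the penalty is numerically zero; a non-periodic candidate would be penalized.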


Next, we define the physics:

def f_model(u_model, x, t):
    u = u_model(tf.concat([x, t], 1))
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    u_t = tf.gradients(u, t)[0]
    c1 = tdq.utils.constant(.0001)
    c2 = tdq.utils.constant(5.0)
    f_u = u_t - c1 * u_xx + c2 * u * u * u - c2 * u
    return f_u
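As a quick sanity check on the physics: the constant states u = 1 and u = -1 are equilibria of the Allen-Cahn equation, since u_t and u_xx vanish and 5u^3 - 5u = 0, so the residual should be exactly zero there. A small sketch of the same residual using finite differences instead of tf.gradients (independent of TensorDiffEq; the helper names are made up for illustration):

```python
def allen_cahn_residual(u, x, t, h=1e-4):
    # f_u = u_t - 0.0001 * u_xx + 5 * u^3 - 5 * u, via central finite differences
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    val = u(x, t)
    return u_t - 0.0001 * u_xx + 5 * val ** 3 - 5 * val

def const_one(x, t):
    # equilibrium candidate: u identically equal to 1
    return 1.0

f_at_one = allen_cahn_residual(const_one, x=0.3, t=0.5)  # -> 0.0
```

This is the quantity the SA-PINN drives to zero at every collocation point.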


Following the definition of f_model, we define the initial condition weights and collocation point weights, and compile the model:

col_weights = tf.Variable(tf.random.uniform([N_f, 1]), trainable=True, dtype=tf.float32)
u_weights = tf.Variable(100 * tf.random.uniform([512, 1]), trainable=True, dtype=tf.float32)

layer_sizes = [2, 128, 128, 128, 128, 1]

model = CollocationSolverND()
model.compile(layer_sizes, f_model, Domain, BCs, isAdaptive=True,
              dict_adaptive=dict_adaptive, init_weights=init_weights)
model.fit(tf_iter=10000, newton_iter=10000)

This will train a solution $$u(x,t)$$ to the Allen-Cahn PDE using self-adaptive training.