Parameter transformation for the Stillinger-Weber potential
Parameters in an empirical interatomic potential are often restricted by some physical constraints. As an example, in the Stillinger-Weber (SW) potential, the energy scaling parameters (e.g., A and B) and the length scaling parameters (e.g., sigma and gamma) are constrained to be positive.
Due to these constraints, we might want to work with the log of the parameters, i.e., log(A), log(B), log(sigma), and log(gamma), when doing the optimization. After the optimization, we can transform them back to the original parameter space using an exponential function, which guarantees the positiveness of the parameters.
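The log/exp round trip behind this idea can be sketched in a few lines (the value of A below is the SW value for Si echoed later in this tutorial):

```python
import numpy as np

# Work in log space: the optimizer is free to pick any real number,
# and exp maps it back to a strictly positive parameter.
A_original = 15.28484792       # SW energy-scale parameter for Si
theta = np.log(A_original)     # the value the optimizer actually sees

# Even if the optimizer drives theta far negative, the
# back-transformed parameter stays positive by construction.
for t in [theta, -10.0, -100.0]:
    assert np.exp(t) > 0.0

# The round trip recovers the original value.
assert np.isclose(np.exp(np.log(A_original)), A_original)
```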
In this tutorial, we show how to apply parameter transformation to the SW potential for silicon that is archived on OpenKIM. Compare this with Train a Stillinger-Weber potential.
To start, let’s first install the SW model:
$ kim-api-collections-management install user SW_StillingerWeber_1985_Si__MO_405512056662_006
See also
This installs the model and its driver into the User Collection. See Install a model for more information about installing KIM models.
import numpy as np
from kliff.calculators import Calculator
from kliff.dataset import Dataset
from kliff.dataset.weight import Weight
from kliff.loss import Loss
from kliff.models import KIMModel
from kliff.models.parameter_transform import LogParameterTransform
from kliff.utils import download_dataset
Before creating a KIM model for the SW potential, we first instantiate the parameter transformation class that we want to use. kliff has a built-in log-transformation; however, extending it to other parameter transformations can be done by creating a subclass of ParameterTransform.
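As an illustration of what such a subclass needs to provide, here is a standalone sketch of an alternative positivity-preserving transform (softplus) with a matching inverse. The method names transform and inverse_transform are assumptions for illustration only; check the ParameterTransform base class for the exact interface kliff expects:

```python
import numpy as np

class SoftplusTransform:
    """Illustrative transform enforcing positivity via softplus.

    Same idea as LogParameterTransform: `transform` maps original
    (positive) parameters into an unconstrained space, and
    `inverse_transform` maps them back.  (Method names are chosen for
    illustration; consult the ParameterTransform base class for the
    actual interface.)
    """

    def transform(self, x):
        # inverse softplus: log(exp(x) - 1), defined for x > 0
        return np.log(np.expm1(np.asarray(x, dtype=float)))

    def inverse_transform(self, y):
        # softplus: log(1 + exp(y)) > 0 for any real y
        return np.log1p(np.exp(np.asarray(y, dtype=float)))

t = SoftplusTransform()
x = np.array([0.60222456, 2.0951])
# The round trip recovers the original positive parameters.
assert np.allclose(t.inverse_transform(t.transform(x)), x)
```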
To make a direct comparison to Train a Stillinger-Weber potential, in this tutorial we will apply the log-transformation to the parameters A, B, sigma, and gamma, which correspond to energy and length scales.
transform = LogParameterTransform(param_names=["A", "B", "sigma", "gamma"])
model = KIMModel(
model_name="SW_StillingerWeber_1985_Si__MO_405512056662_006",
params_transform=transform,
)
model.echo_model_params(params_space="original")
Out:
#================================================================================
# Available parameters to optimize.
# Parameters in `original` space.
# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006
#================================================================================
name: A
value: [15.28484792]
size: 1
name: B
value: [0.60222456]
size: 1
name: p
value: [4.]
size: 1
name: q
value: [0.]
size: 1
name: sigma
value: [2.0951]
size: 1
name: gamma
value: [2.51412]
size: 1
name: cutoff
value: [3.77118]
size: 1
name: lambda
value: [45.5322]
size: 1
name: costheta0
value: [-0.33333333]
size: 1
model.echo_model_params(params_space="original") above prints the parameter values in the original, untransformed space, i.e., the original parameterization of the model. If we instead supply the argument params_space="transformed", then the printed parameter values are given in the transformed space, e.g., log space (below). The values of the other parameters are not changed.
model.echo_model_params(params_space="transformed")
Out:
#================================================================================
# Available parameters to optimize.
# Parameters in `transformed` space.
# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006
#================================================================================
name: A
value: [2.72686202]
size: 1
name: B
value: [-0.50712488]
size: 1
name: p
value: [4.]
size: 1
name: q
value: [0.]
size: 1
name: sigma
value: [0.73960128]
size: 1
name: gamma
value: [0.92192284]
size: 1
name: cutoff
value: [3.77118]
size: 1
name: lambda
value: [45.5322]
size: 1
name: costheta0
value: [-0.33333333]
size: 1
Comparing the output of params_space="transformed" and params_space="original", you can see that the values of A, B, sigma, and gamma are in log space after the transformation.
Next, we will set up the initial guess of the parameters to optimize. A value of "default" means the initial guess will be taken directly from the value already in the model.
Note
The parameter values we initialize, as well as the lower and upper bounds, are in transformed space (i.e. log space here).
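Because log is strictly increasing, transforming the bounds preserves their ordering, so a valid lower < initial < upper triple in the original space remains valid in log space. A quick check, using the original-space bounds for A (lower 1.0, initial guess 5.0, upper 20.0) that we set below:

```python
import numpy as np

# Bounds for A in original space.
lower, initial, upper = 1.0, 5.0, 20.0

# log is strictly increasing, so the ordering is preserved
# in the transformed space.
t_lower, t_init, t_upper = np.log(lower), np.log(initial), np.log(upper)
assert t_lower < t_init < t_upper

# Transforming back with exp recovers the original bounds.
assert np.allclose(np.exp([t_lower, t_init, t_upper]), [lower, initial, upper])
```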
model.set_opt_params(
    A=[[np.log(5.0), np.log(1.0), np.log(20)]],  # [initial guess, lower bound, upper bound]
    B=[["default"]],  # start from the value already in the model
    sigma=[[np.log(2.0951), "fix"]],  # keep fixed during optimization
    gamma=[[np.log(1.5)]],  # initial guess only, no bounds
)
model.echo_opt_params()
Out:
#================================================================================
# Model parameters that are optimized.
# Note that the parameters are in the transformed space if
# `params_transform` is provided when instantiating the model.
#================================================================================
A 1
1.6094379124341003e+00 0.0000000000000000e+00 2.9957322735539909e+00
B 1
-5.0712488263019628e-01
sigma 1
7.3960128493182953e-01 fix
gamma 1
4.0546510810816438e-01
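The echoed numbers can be verified by hand: each one is simply np.log of the corresponding original-space value:

```python
import numpy as np

# Values echoed by model.echo_opt_params() above, compared with
# np.log of the original-space numbers they come from.
assert np.isclose(np.log(5.0), 1.6094379124341003)           # A initial guess
assert np.isclose(np.log(1.0), 0.0)                          # A lower bound
assert np.isclose(np.log(20), 2.9957322735539909)            # A upper bound
assert np.isclose(np.log(0.60222456), -0.50712488263019628)  # B ("default")
assert np.isclose(np.log(2.0951), 0.73960128493182953)       # sigma (fixed)
assert np.isclose(np.log(1.5), 0.40546510810816438)          # gamma
```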
We can show the parameters we’ve just set with model.echo_opt_params().
Note
model.echo_opt_params() always displays the parameter values in the transformed space, and it only shows the parameters specified to be optimized. To show all the parameters, use model.echo_model_params(params_space="transformed").
Once we have set up the model and the parameter transformation scheme, further calculations, e.g., training the model, are performed in the transformed space and can be done in the same way as in Train a Stillinger-Weber potential.
# Training set
dataset_path = download_dataset(dataset_name="Si_training_set")
weight = Weight(energy_weight=1.0, forces_weight=0.1)
tset = Dataset(dataset_path, weight)
configs = tset.get_configs()
# Calculator
calc = Calculator(model)
_ = calc.create(configs)
# Loss function and model training
steps = 100
loss = Loss(calc, nprocs=2)
loss.minimize(method="L-BFGS-B", options={"disp": True, "maxiter": steps})
model.echo_model_params(params_space="original")
Out:
2022-04-28 11:08:19.996 | INFO | kliff.dataset.dataset:_read:397 - 1000 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set
2022-04-28 11:08:23.706 | INFO | kliff.calculators.calculator:create:107 - Create calculator for 1000 configurations.
2022-04-28 11:08:23.707 | INFO | kliff.loss:minimize:290 - Start minimization using method: L-BFGS-B.
2022-04-28 11:08:23.707 | INFO | kliff.loss:_scipy_optimize:406 - Running in multiprocessing mode with 2 processes.
2022-04-28 11:09:37.865 | INFO | kliff.loss:minimize:292 - Finish minimization using method: L-BFGS-B.
#================================================================================
# Available parameters to optimize.
# Parameters in `original` space.
# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006
#================================================================================
name: A
value: [14.93863379]
size: 1
name: B
value: [0.58740269]
size: 1
name: p
value: [4.]
size: 1
name: q
value: [0.]
size: 1
name: sigma
value: [2.0951]
size: 1
name: gamma
value: [2.2014612]
size: 1
name: cutoff
value: [3.77118]
size: 1
name: lambda
value: [45.5322]
size: 1
name: costheta0
value: [-0.33333333]
size: 1
The optimized parameter values from this model training are very close to, if not the same as, those in Train a Stillinger-Weber potential. This is expected for the simple tutorial example considered here. For more complex models, however, training in a transformed space can make it much easier for the optimizer to navigate the parameter space.
Total running time of the script: (1 minute 20.550 seconds)