Train a Lennard-Jones potential
In this tutorial, we train a Lennard-Jones potential that is built into KLIFF (i.e. not a model archived on OpenKIM). From a user's perspective, a KLIFF built-in model is no different from a KIM model.
Compare this with tut_kim_sw.
from kliff.calculators import Calculator
from kliff.dataset import Dataset
from kliff.loss import Loss
from kliff.models import LennardJones
from kliff.utils import download_dataset
# training set
dataset_path = download_dataset(dataset_name="Si_training_set_4_configs")
tset = Dataset(dataset_path)
configs = tset.get_configs()
# calculator
model = LennardJones()
model.echo_model_params()
# fitting parameters
model.set_opt_params(sigma=[["default"]], epsilon=[["default"]])
model.echo_opt_params()
calc = Calculator(model)
calc.create(configs)
# loss
loss = Loss(calc, nprocs=1)
result = loss.minimize(method="L-BFGS-B", options={"disp": True, "maxiter": 10})
# print optimized parameters
model.echo_opt_params()
model.save("kliff_model.yaml")
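Under the hood, `Loss` builds a least-squares objective from the residuals between the calculator's predictions and the reference data, and hands it to SciPy's L-BFGS-B minimizer. The following is a minimal conceptual sketch of that idea (not KLIFF's actual implementation): it fits the two LJ parameters to a small set of hypothetical pair energies.

```python
# Conceptual sketch of a least-squares fit like the one Loss performs.
# The reference distances/energies below are made-up toy data, not the
# Si training set used in the tutorial.
import numpy as np
from scipy.optimize import minimize

r_ref = np.array([2.2, 2.4, 2.6, 3.0])     # hypothetical pair distances
e_ref = np.array([0.5, -0.9, -1.0, -0.6])  # hypothetical reference energies

def lj(params, r):
    """Lennard-Jones 6-12 pair energy: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sigma, eps = params
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def loss(params):
    """Sum of squared residuals between model predictions and references."""
    return np.sum((lj(params, r_ref) - e_ref) ** 2)

# Start from the same defaults the tutorial uses (sigma=2, epsilon=1).
res = minimize(loss, x0=[2.0, 1.0], method="L-BFGS-B")
print(res.x)
```

The real KLIFF loss additionally weights energy and force residuals per configuration and can parallelize over `nprocs`, but the optimization loop is the same least-squares idea.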
2023-08-01 21:59:15.496 | INFO | kliff.dataset.dataset:_read:398 - 4 configurations read from /Users/mjwen.admin/Packages/kliff/docs/source/tutorials/Si_training_set_4_configs
2023-08-01 21:59:15.499 | INFO | kliff.calculators.calculator:create:107 - Create calculator for 4 configurations.
2023-08-01 21:59:15.499 | INFO | kliff.loss:minimize:310 - Start minimization using method: L-BFGS-B.
2023-08-01 21:59:15.500 | INFO | kliff.loss:_scipy_optimize:427 - Running in serial mode.
This problem is unconstrained.
#================================================================================
# Available parameters to optimize.
# Parameters in `original` space.
# Model: LJ6-12
#================================================================================
name: epsilon
value: [1.]
size: 1
name: sigma
value: [2.]
size: 1
name: cutoff
value: [5.]
size: 1
#================================================================================
# Model parameters that are optimized.
# Note that the parameters are in the transformed space if
# `params_transform` is provided when instantiating the model.
#================================================================================
sigma 1
2.0000000000000000e+00
epsilon 1
1.0000000000000000e+00
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 2 M = 10
At X0 0 variables are exactly at the bounds
At iterate 0 f= 6.40974D+00 |proj g|= 2.92791D+01
At iterate 1 f= 2.98676D+00 |proj g|= 3.18782D+01
At iterate 2 f= 1.56102D+00 |proj g|= 1.02614D+01
At iterate 3 f= 9.61567D-01 |proj g|= 8.00167D+00
At iterate 4 f= 3.20489D-02 |proj g|= 7.63379D-01
At iterate 5 f= 2.42400D-02 |proj g|= 5.96998D-01
At iterate 6 f= 1.49914D-02 |proj g|= 6.87782D-01
At iterate 7 f= 9.48615D-03 |proj g|= 1.59376D-01
At iterate 8 f= 6.69609D-03 |proj g|= 1.14378D-01
At iterate 9 f= 4.11024D-03 |proj g|= 3.20712D-01
At iterate 10 f= 2.97209D-03 |proj g|= 7.03411D-02
2023-08-01 21:59:16.968 | INFO | kliff.loss:minimize:312 - Finish minimization using method: L-BFGS-B.
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
2 10 13 1 0 0 7.034D-02 2.972D-03
F = 2.9720927488600178E-003
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT
#================================================================================
# Model parameters that are optimized.
# Note that the parameters are in the transformed space if
# `params_transform` is provided when instantiating the model.
#================================================================================
sigma 1
2.0629054951532582e+00
epsilon 1
1.5614850326987884e+00
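As a quick sanity check on the fitted values, we can evaluate the LJ 6-12 form directly with the optimized sigma and epsilon printed above. For this potential the pair-energy minimum sits at r = 2^(1/6)·sigma with depth −epsilon (the snippet below is a standalone illustration, ignoring the cutoff):

```python
# Evaluate the Lennard-Jones 6-12 pair energy with the fitted parameters
# from the optimization above (cutoff shift not applied).
sigma = 2.0629054951532582
eps = 1.5614850326987884

def lj_energy(r: float) -> float:
    """E(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# The well minimum is at r_min = 2**(1/6) * sigma, where E(r_min) = -eps.
r_min = 2 ** (1 / 6) * sigma
print(r_min, lj_energy(r_min))
```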