kliff.loss#

kliff.loss.energy_forces_residual(identifier, natoms, weight, prediction, reference, data)[source]#

A residual function using both energy and forces.

The residual is computed as

weight.config_weight * wi * (prediction - reference)

where wi can be weight.energy_weight or weight.forces_weight, depending on the property.

Parameters
  • identifier (str) – (unique) identifier of the configuration for which to compute the residual. This is useful when you want to weigh some configurations differently.

  • natoms (int) – number of atoms in the configuration

  • weight (Weight) – an instance that computes the weight of the configuration in the loss function.

  • prediction (array) – prediction computed by calculator, 1D array

  • reference (array) – reference data for the prediction, 1D array

  • data (Dict[str, Any]) – additional data for calculating the residual. Supported key-value pairs are:

    - normalize_by_atoms: bool (default: True). If True, the residual is divided by the number of atoms in the configuration.

Returns

1D array of the residual

Note

prediction and reference have the same length (call it S), which depends on use_energy and use_forces in Calculator. Assume the configuration contains N atoms.

1. If use_energy == True and use_forces == False, then S = 1. prediction[0] is the potential energy computed by the calculator, and reference[0] is the reference energy.

2. If use_energy == False and use_forces == True, then S = 3N. prediction[3*i+0], prediction[3*i+1], and prediction[3*i+2] are the x, y, and z component of the forces on atom i in the configuration, respectively. Correspondingly, reference is the 3N concatenated reference forces.

3. If use_energy == True and use_forces == True, then S = 3N + 1. prediction[0] is the potential energy computed by the calculator, and reference[0] is the reference energy. prediction[3*i+1], prediction[3*i+2], and prediction[3*i+3] are the x, y, and z component of the forces on atom i in the configuration, respectively. Correspondingly, reference is the 3N concatenated reference forces.
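The residual described above can be sketched in plain Python. This is a minimal illustration of cases in the note, not kliff's actual implementation; SimpleWeight is a stand-in for kliff's Weight class, assumed here to expose config_weight, energy_weight, and forces_weight attributes.

```python
# Stand-in for kliff's Weight: holds the three weights used in the residual.
class SimpleWeight:
    def __init__(self, config_weight=1.0, energy_weight=1.0, forces_weight=1.0):
        self.config_weight = config_weight
        self.energy_weight = energy_weight
        self.forces_weight = forces_weight


def energy_forces_residual_sketch(identifier, natoms, weight, prediction, reference, data):
    # Case 3 of the note: index 0 is the energy, the remaining 3*natoms
    # entries are the force components, so S = 3N + 1.
    wi = [weight.energy_weight] + [weight.forces_weight] * (3 * natoms)
    residual = [
        weight.config_weight * w * (p - r)
        for w, p, r in zip(wi, prediction, reference)
    ]
    # normalize_by_atoms (default: True) divides the residual by natoms.
    if data.get("normalize_by_atoms", True):
        residual = [x / natoms for x in residual]
    return residual
```

For a two-atom configuration, prediction and reference each have length 3*2 + 1 = 7.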

kliff.loss.energy_residual(identifier, natoms, weight, prediction, reference, data)[source]#

A residual function using just the energy.

See the documentation of energy_forces_residual() for the meaning of the arguments.

kliff.loss.forces_residual(identifier, natoms, weight, prediction, reference, data)[source]#

A residual function using just the forces.

See the documentation of energy_forces_residual() for the meaning of the arguments.
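A function with this same signature can also be supplied by the user through the residual_fn argument of Loss. A hedged sketch of a custom residual that up-weights one configuration by its identifier ("special_config" is an illustrative name, not anything kliff defines):

```python
# Custom residual_fn with the signature the residual functions above use:
# (identifier, natoms, weight, prediction, reference, data) -> 1D residual.
def my_residual(identifier, natoms, weight, prediction, reference, data):
    # Up-weight one particular configuration by its identifier.
    extra = 10.0 if identifier == "special_config" else 1.0
    return [extra * (p - r) / natoms for p, r in zip(prediction, reference)]
```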

class kliff.loss.Loss(calculator, nprocs: int = 1, residual_fn: Optional[Callable] = None, residual_data: Optional[Dict[str, Any]] = None)[source]#

Loss function class to optimize the potential parameters.

This is a wrapper over LossPhysicsMotivatedModel and LossNeuralNetworkModel to provide a unified interface. You can also use the two classes directly.

Parameters
  • calculator – a calculator object that exchanges information (e.g. predictions and references) with the model.

  • nprocs (int) – number of parallel processes to use.

  • residual_fn (Optional[Callable]) – function to compute the residual, with the same signature as energy_forces_residual().

  • residual_data (Optional[Dict[str, Any]]) – additional data passed to the residual function.

class kliff.loss.LossPhysicsMotivatedModel(calculator, nprocs=1, residual_fn=None, residual_data=None)[source]#

Loss function class to optimize the physics-based potential parameters.

Parameters
  • calculator – a calculator object that exchanges information (e.g. predictions and references) with the model.

  • nprocs (int) – number of parallel processes to use.

  • residual_fn (Optional[Callable]) – function to compute the residual, with the same signature as energy_forces_residual().

  • residual_data (Optional[Dict[str, Any]]) – additional data passed to the residual function.

scipy_minimize_methods = ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'COBYLA', 'SLSQP', 'trust-constr', 'dogleg', 'trust-ncg', 'trust-exact', 'trust-krylov']#
scipy_minimize_methods_not_supported_args = ['bounds']#
scipy_least_squares_methods = ['trf', 'dogbox', 'lm', 'geodesiclm']#
scipy_least_squares_methods_not_supported_args = ['bounds']#
minimize(method='L-BFGS-B', **kwargs)[source]#

Minimize the loss.

Parameters
  • method (str) – a SciPy optimization method; available ones are listed in scipy_minimize_methods and scipy_least_squares_methods. See also: https://docs.scipy.org/doc/scipy/reference/optimize.html

  • kwargs – extra keyword arguments passed to the SciPy optimizer.
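Methods in scipy_minimize_methods are dispatched to scipy.optimize.minimize. A standalone toy sketch of that call for a simple quadratic loss (plain SciPy, not kliff's internals):

```python
from scipy.optimize import minimize

# Toy scalar "loss", minimized at x = [1, 2].
def toy_loss(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# L-BFGS-B is the default method of minimize() above.
result = minimize(toy_loss, x0=[0.0, 0.0], method="L-BFGS-B")
```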
class kliff.loss.LossNeuralNetworkModel(calculator, nprocs=1, residual_fn=None, residual_data=None)[source]#

Loss function class to optimize the ML potential parameters.

Parameters
  • calculator – a calculator object that exchanges information (e.g. predictions and references) with the model.

  • nprocs (int) – number of parallel processes to use.

  • residual_fn (Optional[Callable]) – function to compute the residual, with the same signature as energy_forces_residual().

  • residual_data (Optional[Dict[str, Any]]) – additional data passed to the residual function.

torch_minimize_methods = ['Adadelta', 'Adagrad', 'Adam', 'SparseAdam', 'Adamax', 'ASGD', 'LBFGS', 'RMSprop', 'Rprop', 'SGD']#
minimize(method='Adam', batch_size=100, num_epochs=1000, start_epoch=0, **kwargs)[source]#

Minimize the loss.

Parameters
  • method (str) – a PyTorch optimization method; available ones are: [Adadelta, Adagrad, Adam, SparseAdam, Adamax, ASGD, LBFGS, RMSprop, Rprop, SGD]. See also: https://pytorch.org/docs/stable/optim.html

  • batch_size (int) – Number of configurations used in each minimization step.

  • num_epochs (int) – Number of epochs to carry out the minimization.

  • start_epoch (int) – The starting epoch number. This is typically 0, but when continuing a previous training, it is useful to set it to the last epoch number of that training.

  • kwargs – Extra keyword arguments that can be used by the PyTorch optimizer.
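The interplay of batch_size, num_epochs, and start_epoch can be sketched as a plain bookkeeping loop. This is a schematic of the epoch/batch accounting only, not kliff's actual PyTorch training loop:

```python
import math

def train(num_configs, batch_size=100, num_epochs=1000, start_epoch=0):
    """Schematic of the epoch/batch bookkeeping behind minimize()."""
    steps_per_epoch = math.ceil(num_configs / batch_size)
    steps = 0
    # Epoch numbering resumes at start_epoch when continuing a run.
    for epoch in range(start_epoch, start_epoch + num_epochs):
        for _batch in range(steps_per_epoch):
            steps += 1  # one optimizer step per batch
    return steps, epoch  # total steps taken, last epoch number
```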

save_optimizer_state(path='optimizer_state.pkl')[source]#

Save the state dict of the optimizer to disk.

load_optimizer_state(path='optimizer_state.pkl')[source]#

Load the state dict of the optimizer from file.
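save_optimizer_state() and load_optimizer_state() round-trip the optimizer's state through a pickle file. The same pattern can be reproduced with the standard library; this is a generic sketch assuming the state is a picklable dict, not kliff's exact code:

```python
import os
import pickle
import tempfile

def save_state(state, path):
    # Serialize the state dict to disk with pickle.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_state(path):
    # Restore a previously saved state dict.
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip a toy "optimizer state".
state = {"lr": 1e-3, "step": 42}
path = os.path.join(tempfile.mkdtemp(), "optimizer_state.pkl")
save_state(state, path)
restored = load_state(path)
```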

exception kliff.loss.LossError(msg)[source]#
args#
with_traceback()#

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.