candle.clr_keras_utils.CyclicLR

class candle.clr_keras_utils.CyclicLR(base_lr=0.001, max_lr=0.006, step_size=2000.0, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle')

This callback implements a cyclical learning rate policy (CLR). The method cycles the learning rate between two boundaries with some constant frequency.

  • base_lr: initial learning rate, which is the lower boundary in the cycle.

  • max_lr: upper boundary in the cycle. Functionally, it defines the cycle amplitude (max_lr - base_lr). The lr at any cycle is the sum of base_lr and some scaling of the amplitude; therefore max_lr may not actually be reached, depending on the scaling function.

  • step_size: number of training iterations per half cycle. The authors suggest setting step_size to 2-8 times the number of training iterations per epoch.

  • mode: one of {triangular, triangular2, exp_range}. Default 'triangular'. Values correspond to the policies detailed below. If scale_fn is not None, this argument is ignored.

  • gamma: constant in the 'exp_range' scaling function: gamma**(cycle iterations).

  • scale_fn: custom scaling policy defined by a single-argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If set, the mode parameter is ignored.

  • scale_mode: {‘cycle’, ‘iterations’}. Defines whether scale_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). Default is ‘cycle’.

The amplitude of the cycle can be scaled on a per-iteration or per-cycle basis. This class has three built-in policies, as put forth in the paper.

  • “triangular”: A basic triangular cycle w/ no amplitude scaling.

  • “triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.

  • “exp_range”: A cycle that scales initial amplitude by gamma**(cycle iterations) at each cycle iteration.

For more detail, please see the paper.
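As a sketch of the cycle math, the default 'triangular' policy can be written as a standalone function. This mirrors the formula from the CLR paper; the function name is illustrative and is not part of the callback's API:

```python
import math

def triangular_lr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000.0):
    """Learning rate at a given training iteration under the 'triangular' policy."""
    # Which cycle we are in (1-indexed); a full cycle spans 2 * step_size iterations.
    cycle = math.floor(1 + iteration / (2 * step_size))
    # Position within the cycle, mapped so x == 0 at the peak and x == 1 at the boundaries.
    x = abs(iteration / step_size - 2 * cycle + 1)
    # Interpolate between base_lr and max_lr; amplitude is (max_lr - base_lr).
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

With the defaults, the rate starts at base_lr, peaks at max_lr after step_size iterations, and returns to base_lr after a full cycle of 2 * step_size iterations.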

# Example for CIFAR-10 w/ batch size 100:

clr = CyclicLR(base_lr=0.001, max_lr=0.006,
               step_size=2000., mode='triangular')
model.fit(X_train, Y_train, callbacks=[clr])

The class also supports custom scaling functions:

import numpy as np

clr_fn = lambda x: 0.5*(1+np.sin(x*np.pi/2.))
clr = CyclicLR(base_lr=0.001, max_lr=0.006,
               step_size=2000., scale_fn=clr_fn,
               scale_mode='cycle')
model.fit(X_train, Y_train, callbacks=[clr])
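For instance, a scale_fn that halves the amplitude each cycle should reproduce the built-in 'triangular2' policy. This is a hypothetical equivalent, assuming scale_mode='cycle' and 1-indexed cycle numbers:

```python
# Hypothetical scale_fn reproducing the built-in 'triangular2' policy:
# the multiplier applied to the cycle amplitude halves after each
# completed cycle (x is the 1-indexed cycle number when scale_mode='cycle').
triangular2_fn = lambda x: 1.0 / (2.0 ** (x - 1))
```

Passing triangular2_fn as scale_fn with scale_mode='cycle' should behave like mode='triangular2'.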

# References

  • Leslie N. Smith. "Cyclical Learning Rates for Training Neural Networks." arXiv:1506.01186 (2017).

__init__(base_lr=0.001, max_lr=0.006, step_size=2000.0, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle')

Methods

__init__([base_lr, max_lr, step_size, mode, ...])

clr()

on_batch_begin(batch[, logs])

A backwards compatibility alias for on_train_batch_begin.

on_batch_end(epoch[, logs])

A backwards compatibility alias for on_train_batch_end.

on_epoch_begin(epoch[, logs])

Called at the start of an epoch.

on_epoch_end(epoch[, logs])

Called at the end of an epoch.

on_predict_batch_begin(batch[, logs])

Called at the beginning of a batch in predict methods.

on_predict_batch_end(batch[, logs])

Called at the end of a batch in predict methods.

on_predict_begin([logs])

Called at the beginning of prediction.

on_predict_end([logs])

Called at the end of prediction.

on_test_batch_begin(batch[, logs])

Called at the beginning of a batch in evaluate methods.

on_test_batch_end(batch[, logs])

Called at the end of a batch in evaluate methods.

on_test_begin([logs])

Called at the beginning of evaluation or validation.

on_test_end([logs])

Called at the end of evaluation or validation.

on_train_batch_begin(batch[, logs])

Called at the beginning of a training batch in fit methods.

on_train_batch_end(batch[, logs])

Called at the end of a training batch in fit methods.

on_train_begin([logs])

Called at the beginning of training.

on_train_end([logs])

Called at the end of training.

set_model(model)

set_params(params)