pyttb.gcp.optimizers
Optimizer Implementations for GCP.
- class pyttb.gcp.optimizers.StochasticSolver(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1)[source]
Bases:
ABC
Interface for Stochastic GCP Solvers.
General Setup for Stochastic Solvers.
- Parameters:
rate – Initial rate of descent, proportional to the step size.
decay – Multiplicative factor by which the step size is decreased after a failed epoch.
max_fails – Maximum number of failed epochs before the solve terminates.
epoch_iters – Number of steps to take per epoch.
f_est_tol – Tolerance on the change in the function estimate; the solve terminates once the change drops below this value.
max_iters – Maximum number of epochs.
printitn – Controls verbosity: progress is printed every printitn epochs.
- __init__(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1)[source]
Parameters are as documented on the class above.
- abstract update_step(model: ktensor, gradient: List[ndarray], lower_bound: float) Tuple[List[ndarray], float] [source]
Calculate the update step for the solver.
- Parameters:
model – Current decomposition.
gradient – Computed gradient, one array per factor matrix.
lower_bound – Minimum value for the decomposition.
- Returns:
The update to apply to the decomposition (the caller is responsible for applying it).
The step size used.
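Concrete solvers override this hook. For illustration, a minimal hypothetical subclass performing fixed-rate, clipped descent might look like the following sketch (the class and step value are illustrative, not part of pyttb):

```python
import numpy as np
from pyttb.gcp.optimizers import StochasticSolver

class FixedStepDescent(StochasticSolver):
    """Hypothetical solver: fixed-rate descent, clipped at the lower bound."""

    def update_step(self, model, gradient, lower_bound):
        step = 1e-3  # fixed step; real solvers adapt this from internal state
        update = [
            np.maximum(lower_bound, factor - step * grad)
            for factor, grad in zip(model.factor_matrices, gradient)
        ]
        return update, step  # the caller installs `update` into the model
```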
- solve(initial_model: ktensor, data: tensor | sptensor, function_handle: Callable[[ndarray, ndarray], ndarray], gradient_handle: Callable[[ndarray, ndarray], ndarray], lower_bound: float = -np.inf, sampler: GCPSampler | None = None) Tuple[ktensor, Dict] [source]
Run solver until completion.
- Parameters:
initial_model – Initial guess for the decomposition.
data – Tensor to decompose.
function_handle – Callable that evaluates the element-wise objective on sampled data and model values.
gradient_handle – Callable that evaluates the element-wise gradient on sampled data and model values.
lower_bound – Lower bound on the model values.
sampler – Sampler that selects which tensor entries are used for function and gradient estimates.
- Returns:
The final decomposition and a dictionary of solve details.
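For example, a Gaussian (least-squares) loss can be supplied directly as element-wise callables. A minimal sketch, assuming a recent pyttb where tensor and ktensor accept numpy arrays; the shapes and hyper-parameters are illustrative:

```python
import numpy as np
import pyttb as ttb
from pyttb.gcp.optimizers import SGD

rng = np.random.default_rng(0)
shape = (4, 5, 6)

# Dense data tensor and a random rank-2 initial guess.
data = ttb.tensor(rng.standard_normal(shape))
initial_model = ttb.ktensor([rng.standard_normal((n, 2)) for n in shape])

# Element-wise Gaussian loss f(x, m) and its derivative with respect to m.
def gaussian(data_vals, model_vals):
    return (model_vals - data_vals) ** 2

def gaussian_grad(data_vals, model_vals):
    return 2.0 * (model_vals - data_vals)

solver = SGD(rate=1e-3, epoch_iters=100, max_iters=20)
result, info = solver.solve(initial_model, data, gaussian, gaussian_grad)
```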
- class pyttb.gcp.optimizers.SGD(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1)[source]
Bases:
StochasticSolver
General Stochastic Gradient Descent.
Parameters are as documented on StochasticSolver.
- update_step(model: ktensor, gradient: List[ndarray], lower_bound: float) Tuple[List[ndarray], float] [source]
Calculate the update step for the solver.
- Parameters:
model – Current decomposition.
gradient – Computed gradient, one array per factor matrix.
lower_bound – Minimum value for the decomposition.
- Returns:
The update to apply to the decomposition (the caller is responsible for applying it).
The step size used.
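For plain SGD the step size is simply the configured rate. A single step can be examined directly; a hedged sketch (standalone use of update_step is illustrative here, since it is normally driven by solve):

```python
import numpy as np
import pyttb as ttb
from pyttb.gcp.optimizers import SGD

rng = np.random.default_rng(1)
model = ttb.ktensor([rng.random((3, 2)) for _ in range(2)])
gradient = [np.ones((3, 2)) for _ in range(2)]  # a fabricated gradient

update, step = SGD(rate=0.01).update_step(model, gradient, lower_bound=0.0)
# Each returned factor moved against the gradient and was clipped at 0.0;
# for plain SGD, `step` is expected to equal the configured rate.
```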
- class pyttb.gcp.optimizers.Adam(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1, beta_1: float = 0.9, beta_2: float = 0.999, epsilon: float = 1e-8)[source]
Bases:
StochasticSolver
Adam Optimizer.
General Setup for Adam Solver.
- Parameters:
rate – Initial rate of descent, proportional to the step size.
decay – Multiplicative factor by which the step size is decreased after a failed epoch.
max_fails – Maximum number of failed epochs before the solve terminates.
epoch_iters – Number of steps to take per epoch.
f_est_tol – Tolerance on the change in the function estimate; the solve terminates once the change drops below this value.
max_iters – Maximum number of epochs.
printitn – Controls verbosity: progress is printed every printitn epochs.
beta_1 – Adam-specific decay rate for the first-moment (momentum) estimate.
beta_2 – Adam-specific decay rate for the second-moment estimate.
epsilon – Small Adam-specific constant that avoids division by zero in the update.
- __init__(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1, beta_1: float = 0.9, beta_2: float = 0.999, epsilon: float = 1e-8)[source]
Parameters are as documented on the class above.
- update_step(model: ktensor, gradient: List[ndarray], lower_bound: float) Tuple[List[ndarray], float] [source]
Calculate the update step for the solver.
- Parameters:
model – Current decomposition.
gradient – Computed gradient, one array per factor matrix.
lower_bound – Minimum value for the decomposition.
- Returns:
The update to apply to the decomposition (the caller is responsible for applying it).
The step size used.
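The step follows the standard Adam recipe of bias-corrected first- and second-moment estimates. A hedged sketch for a single factor matrix, not pyttb's exact internals:

```python
import numpy as np

def adam_step(factor, grad, m, v, t, rate=1e-3, beta_1=0.9, beta_2=0.999,
              epsilon=1e-8, lower_bound=-np.inf):
    """Textbook Adam update for one factor matrix (illustrative only)."""
    m = beta_1 * m + (1 - beta_1) * grad      # first-moment estimate
    v = beta_2 * v + (1 - beta_2) * grad**2   # second-moment estimate
    m_hat = m / (1 - beta_1**t)               # bias corrections at step t
    v_hat = v / (1 - beta_2**t)
    step = rate * m_hat / (np.sqrt(v_hat) + epsilon)
    return np.maximum(lower_bound, factor - step), m, v
```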
- class pyttb.gcp.optimizers.Adagrad(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1)[source]
Bases:
StochasticSolver
Adagrad Optimizer.
Parameters are as documented on StochasticSolver.
- __init__(rate: float = 1e-3, decay: float = 0.1, max_fails: int = 1, epoch_iters: int = 1000, f_est_tol: float = -inf, max_iters: int = 1000, printitn: int = 1)[source]
Parameters are as documented on StochasticSolver.
- update_step(model: ktensor, gradient: List[ndarray], lower_bound: float) Tuple[List[ndarray], float] [source]
Calculate the update step for the solver.
- Parameters:
model – Current decomposition.
gradient – Computed gradient, one array per factor matrix.
lower_bound – Minimum value for the decomposition.
- Returns:
The update to apply to the decomposition (the caller is responsible for applying it).
The step size used.
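Adagrad's only state is an accumulated squared gradient, which shrinks the effective step size over time. A hedged sketch for a single factor matrix, not pyttb's exact internals:

```python
import numpy as np

def adagrad_step(factor, grad, gnormsum, rate=1e-3, lower_bound=-np.inf):
    """Textbook Adagrad update for one factor matrix (illustrative only)."""
    gnormsum = gnormsum + grad**2            # accumulate squared gradient
    step = rate / np.sqrt(gnormsum + 1e-10)  # element-wise; guard the root
    return np.maximum(lower_bound, factor - step * grad), gnormsum
```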
- class pyttb.gcp.optimizers.LBFGSB(m: int | None = None, factr: float = 1e7, pgtol: float | None = None, epsilon: float | None = None, iprint: int | None = None, disp: int | None = None, maxfun: int | None = None, maxiter: int = 1000, callback: Callable[[ndarray], None] | None = None, maxls: int | None = None)[source]
Bases:
object
Simple wrapper around scipy's L-BFGS-B implementation (scipy.optimize.fmin_l_bfgs_b).
NOTE: If used for publications, please see the scipy documentation for how to cite the underlying implementation.
Prepare all hyper-parameters for the solver.
See the scipy documentation for details and standard defaults; several defaults are set specifically for gcp_opt.
- __init__(m: int | None = None, factr: float = 1e7, pgtol: float | None = None, epsilon: float | None = None, iprint: int | None = None, disp: int | None = None, maxfun: int | None = None, maxiter: int = 1000, callback: Callable[[ndarray], None] | None = None, maxls: int | None = None)[source]
Hyper-parameters are as documented on the class above.
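Construction mirrors scipy's keyword arguments; the values below are illustrative, and the configured instance is typically passed to a GCP driver (for example, pyttb's gcp_opt) as its optimizer:

```python
from pyttb.gcp.optimizers import LBFGSB

# Tighter tolerances than the gcp-specific defaults; each keyword maps to the
# identically named scipy.optimize.fmin_l_bfgs_b parameter.
optimizer = LBFGSB(m=10, factr=1e5, pgtol=1e-8, maxiter=2000)
```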