NBeatsGenericModel¶
- class NBeatsGenericModel(input_size: int, output_size: int, loss: Union[Literal['mse'], Literal['mae'], Literal['smape'], Literal['mape'], torch.nn.modules.module.Module] = 'mse', stacks: int = 30, layers: int = 4, layer_size: int = 512, lr: float = 0.001, window_sampling_limit: Optional[int] = None, optimizer_params: Optional[dict] = None, train_batch_size: int = 1024, test_batch_size: int = 1024, trainer_params: Optional[dict] = None, train_dataloader_params: Optional[dict] = None, test_dataloader_params: Optional[dict] = None, val_dataloader_params: Optional[dict] = None, split_params: Optional[dict] = None, random_state: Optional[int] = None)[source]¶
Bases: etna.models.nn.nbeats.nbeats.NBeatsBaseModel
Generic N-BEATS model.
Paper: https://arxiv.org/pdf/1905.10437.pdf
Official implementation: https://github.com/ServiceNow/N-BEATS
Init generic N-BEATS model.
- Parameters
input_size (int) – Input data size.
output_size (int) – Forecast size.
loss (Union[Literal['mse'], Literal['mae'], Literal['smape'], Literal['mape'], torch.nn.Module]) – Optimisation objective. The loss function should accept three arguments: y_true, y_pred and mask. The last argument is a binary mask that denotes which points are valid forecasts. There are several implemented loss functions available in the etna.models.nn.nbeats.metrics module.
stacks (int) – Number of block stacks in the model.
layers (int) – Number of inner layers in each block.
layer_size (int) – Inner layers size in blocks.
lr (float) – Optimizer learning rate.
window_sampling_limit (Optional[int]) – Size of history for sampling training data. If set to None, the full series history is used for sampling.
optimizer_params (Optional[dict]) – Additional parameters for the Adam optimizer (api reference torch.optim.Adam).
train_batch_size (int) – Batch size for training.
test_batch_size (int) – Batch size for testing.
trainer_params (Optional[dict]) – Pytorch lightning trainer parameters (api reference pytorch_lightning.trainer.trainer.Trainer).
train_dataloader_params (Optional[dict]) – Parameters for the train dataloader, e.g. a sampler (api reference torch.utils.data.DataLoader).
test_dataloader_params (Optional[dict]) – Parameters for the test dataloader.
val_dataloader_params (Optional[dict]) – Parameters for the validation dataloader.
split_params (Optional[dict]) – Dictionary with parameters for torch.utils.data.random_split() for train-test splitting:
train_size: (float) value from 0 to 1 – fraction of samples to use for training
generator: (Optional[torch.Generator]) – generator for reproducible train-test splitting
torch_dataset_size: (Optional[int]) – number of samples in the dataset, in case the dataset does not implement __len__
random_state (Optional[int]) – Random state for train batches generation.
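The loss parameter above accepts any callable with the (y_true, y_pred, mask) contract. A minimal sketch of such a loss, written in plain Python (no torch) purely to illustrate the signature — a real implementation would operate on torch tensors:

```python
# Sketch of a custom loss following the contract described above:
# it takes y_true, y_pred and a binary mask marking valid forecast points.
# Plain-Python stand-in for illustration only; a real loss for this model
# would work on torch tensors and return a torch scalar.

def masked_mae(y_true, y_pred, mask):
    """Mean absolute error over the points where mask == 1."""
    total = sum(abs(t - p) * m for t, p, m in zip(y_true, y_pred, mask))
    count = sum(mask)
    return total / count if count else 0.0

loss = masked_mae([1.0, 2.0, 3.0], [1.0, 1.0, 5.0], [1, 1, 0])
# only the first two points are valid: (|1-1| + |2-1|) / 2 = 0.5
```

The mask lets the model ignore padded or missing points when windows are sampled from series of different lengths.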
- Inherited-members
Methods
fit(ts) – Fit model.
forecast(ts, prediction_size[, ...]) – Make predictions.
get_model() – Get model.
load(path) – Load an object.
params_to_tune() – Get default grid for tuning hyperparameters.
predict(ts, prediction_size[, return_components]) – Make predictions.
raw_fit(torch_dataset) – Fit model on torch like Dataset.
raw_predict(torch_dataset) – Make inference on torch like Dataset.
save(path) – Save the object.
set_params(**params) – Return new object instance with modified parameters.
to_dict() – Collect all information about etna object in dict.
Attributes
context_size – Context size of the model.
- fit(ts: etna.datasets.tsdataset.TSDataset) etna.models.base.DeepBaseModel ¶
Fit model.
- Parameters
ts (etna.datasets.tsdataset.TSDataset) – TSDataset with features
- Returns
Model after fit
- Return type
etna.models.base.DeepBaseModel
- forecast(ts: etna.datasets.tsdataset.TSDataset, prediction_size: int, return_components: bool = False) etna.datasets.tsdataset.TSDataset ¶
Make predictions.
This method will make autoregressive predictions.
- Parameters
ts (etna.datasets.tsdataset.TSDataset) – Dataset with features and expected decoder length for context
prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.
return_components (bool) – If True additionally returns forecast components
- Returns
Dataset with predictions
- Return type
etna.datasets.tsdataset.TSDataset
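The autoregressive behaviour of forecast can be sketched with a toy one-step predictor: each predicted point is appended to the context and fed back in for the next step. The one_step function below (mean of the last 3 points) is a hypothetical stand-in, not the actual N-BEATS network:

```python
# Toy illustration of autoregressive forecasting: a one-step model is
# applied repeatedly, with each prediction fed back into the context.
# `one_step` is a made-up stand-in (mean of the last 3 points), not the
# real N-BEATS forward pass.

def one_step(context):
    window = context[-3:]
    return sum(window) / len(window)

def autoregressive_forecast(history, prediction_size):
    context = list(history)
    for _ in range(prediction_size):
        context.append(one_step(context))  # prediction becomes context
    return context[-prediction_size:]      # keep only the new points

preds = autoregressive_forecast([1.0, 2.0, 3.0], prediction_size=2)
# step 1 uses [1, 2, 3] -> 2.0; step 2 uses [2, 3, 2.0] -> 7/3
```

This is why errors can compound over long horizons: later steps consume earlier predictions, not true values (contrast with predict below).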
- get_model() etna.models.base.DeepBaseNet ¶
Get model.
- Returns
Torch Module
- Return type
etna.models.base.DeepBaseNet
- classmethod load(path: pathlib.Path) typing_extensions.Self ¶
Load an object.
Warning
This method uses dill module which is not secure. It is possible to construct malicious data which will execute arbitrary code during loading. Never load data that could have come from an untrusted source, or that could have been tampered with.
- Parameters
path (pathlib.Path) – Path to load object from.
- Returns
Loaded object.
- Return type
typing_extensions.Self
- params_to_tune() Dict[str, etna.distributions.distributions.BaseDistribution] [source]¶
Get default grid for tuning hyperparameters.
This grid tunes parameters: stacks, layers, lr, layer_size. Other parameters are expected to be set by the user.
- Returns
Grid to tune.
- Return type
Dict[str, etna.distributions.distributions.BaseDistribution]
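Conceptually, the returned grid maps parameter names to distributions that a tuner samples from. A rough sketch of the idea, with plain (low, high) tuples standing in for etna's BaseDistribution objects — the parameter names come from the docstring above, but the ranges here are invented for demonstration:

```python
import random

# Illustrative stand-in for a tuning grid. The parameter names match the
# docstring (stacks, layers, lr, layer_size); the ranges are made up for
# demonstration, and tuples replace etna's distribution objects.
grid = {
    "stacks": (1, 30),         # int range
    "layers": (2, 6),          # int range
    "layer_size": (64, 1024),  # int range
    "lr": (1e-4, 1e-2),        # float range; log-uniform in practice
}

def sample_params(grid, rng):
    """Draw one hyperparameter configuration from the sketch grid."""
    params = {}
    for name, (low, high) in grid.items():
        if isinstance(low, int):
            params[name] = rng.randint(low, high)
        else:
            params[name] = rng.uniform(low, high)
    return params

config = sample_params(grid, random.Random(0))
```

A sampled config could then be applied via set_params(**config) before fitting.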
- predict(ts: etna.datasets.tsdataset.TSDataset, prediction_size: int, return_components: bool = False) etna.datasets.tsdataset.TSDataset ¶
Make predictions.
This method will make predictions using true values instead of predicted on a previous step. It can be useful for making in-sample forecasts.
- Parameters
ts (etna.datasets.tsdataset.TSDataset) – Dataset with features and expected decoder length for context
prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.
return_components (bool) – If True additionally returns prediction components
- Returns
Dataset with predictions
- Return type
etna.datasets.tsdataset.TSDataset
- raw_fit(torch_dataset: torch.utils.data.dataset.Dataset) etna.models.base.DeepBaseModel ¶
Fit model on torch like Dataset.
- Parameters
torch_dataset (torch.utils.data.dataset.Dataset) – Torch like dataset for model fit
- Returns
Model after fit
- Return type
etna.models.base.DeepBaseModel
- raw_predict(torch_dataset: torch.utils.data.dataset.Dataset) Dict[Tuple[str, str], numpy.ndarray] ¶
Make inference on torch like Dataset.
- Parameters
torch_dataset (torch.utils.data.dataset.Dataset) – Torch like dataset for model inference
- Returns
Dictionary with predictions
- Return type
Dict[Tuple[str, str], numpy.ndarray]
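The return value is a dict keyed by (str, str) tuples. Assuming the key layout is (segment, column) — an assumption not confirmed by this page — the predictions can be regrouped per segment like this (plain lists stand in for numpy arrays to keep the sketch dependency-free):

```python
# Regrouping a raw_predict-style result. The (segment, column) key layout
# is an assumption for illustration; plain lists stand in for numpy arrays.

raw = {
    ("segment_a", "target"): [1.0, 2.0, 3.0],
    ("segment_b", "target"): [4.0, 5.0, 6.0],
}

def group_by_segment(raw_predictions):
    """Turn {(segment, column): values} into {segment: {column: values}}."""
    grouped = {}
    for (segment, column), values in raw_predictions.items():
        grouped.setdefault(segment, {})[column] = values
    return grouped

per_segment = group_by_segment(raw)
# per_segment["segment_a"]["target"] == [1.0, 2.0, 3.0]
```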
- save(path: pathlib.Path)¶
Save the object.
- Parameters
path (pathlib.Path) – Path to save object to.
- set_params(**params: dict) etna.core.mixins.TMixin ¶
Return new object instance with modified parameters.
Method also allows to change parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.
Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.
- Parameters
**params – Estimator parameters
self (etna.core.mixins.TMixin) –
params (dict) –
- Returns
New instance with changed parameters
- Return type
etna.core.mixins.TMixin
Examples
>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )
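The dotted-key addressing in keys like "model.lag" and "transforms.0.value" can be sketched as a walk over attribute and index components. This is a simplified illustration only — etna's real set_params returns a new instance rather than reading in place, and the toy classes below merely mimic the shape of a pipeline:

```python
# Simplified sketch of "<component_1>.<...>.<parameter>" addressing.
# Numeric components index into sequences; others are attribute lookups.
# The Model/Pipeline classes here are local stand-ins, not etna's.

def resolve(obj, dotted_key):
    """Walk the components of a dotted key and return the final value."""
    for part in dotted_key.split("."):
        if part.isdigit():        # e.g. "0" in "transforms.0.value"
            obj = obj[int(part)]
        else:                     # e.g. "model", "lag"
            obj = getattr(obj, part)
    return obj

class Model:
    def __init__(self, lag):
        self.lag = lag

class Pipeline:
    def __init__(self, model, transforms):
        self.model = model
        self.transforms = transforms

pipe = Pipeline(Model(lag=1), transforms=["t0", "t1"])
value = resolve(pipe, "model.lag")     # -> 1
first = resolve(pipe, "transforms.0")  # -> "t0"
```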
- to_dict()¶
Collect all information about etna object in dict.
- property context_size: int¶
Context size of the model.