Random Forests
Random Forest Classifier
- class snapml.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_leaf=1, max_features='auto', bootstrap=True, n_jobs=1, random_state=None, verbose=False, use_histograms=False, hist_nbins=256, use_gpu=False, gpu_ids=[0], compress_trees=False)
Random Forest Classifier
This class implements a random forest classifier using the IBM Snap ML library. It can be used for binary and multi-class classification problems.
- Parameters:
- n_estimators : integer, default=10
This parameter defines the number of trees in the forest.
- criterion : string, default="gini"
The function that measures the quality of a split. The only criterion currently supported is "gini".
- max_depth : integer or None, default=None
The maximum depth of the tree. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_leaf samples.
- min_samples_leaf : int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
- max_features : int, float, string or None, default='auto'
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
If "auto", then max_features=sqrt(n_features).
If "sqrt", then max_features=sqrt(n_features).
If "log2", then max_features=log2(n_features).
If None, then max_features=n_features.
- bootstrap : boolean, default=True
This parameter determines whether bootstrap samples are used when building trees.
- n_jobs : integer, default=1
The number of jobs used to run the fit function in parallel.
- random_state : integer or None, default=None
If integer, random_state is the seed used by the random number generator. If None, the random number generator is the RandomState instance used by np.random.
- verbose : boolean, default=False
If True, print debugging information while training. Warning: this will increase the training time. For performance evaluation, use verbose=False.
- use_histograms : boolean, default=False
Use histogram-based splits rather than exact splits.
- hist_nbins : int, default=256
Number of histogram bins.
- use_gpu : boolean, default=False
Use GPU acceleration (only supported for histogram-based splits).
- gpu_ids : array-like of int, default=[0]
Device IDs of the GPUs to be used when GPU acceleration is enabled.
- compress_trees : bool, default=False
Compress trees after training for fast inference.
- Attributes:
- feature_importances_ : array-like, shape=(n_features,)
Feature importances computed across trees.
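A minimal end-to-end usage sketch for the classifier. Since Snap ML mirrors the scikit-learn estimator API, the example falls back to sklearn's `RandomForestClassifier` when `snapml` is not installed; only parameters shared by both implementations are used, and the dataset is synthetic (`make_classification`), not from the original documentation.

```python
# Hedged sketch: prefer Snap ML if available, otherwise use the
# API-compatible scikit-learn estimator (shared parameters only;
# snapml-specific options such as use_histograms are omitted).
try:
    from snapml import RandomForestClassifier
except ImportError:
    from sklearn.ensemble import RandomForestClassifier

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem.
X, y = make_classification(n_samples=200, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = RandomForestClassifier(n_estimators=10, max_depth=6,
                             random_state=42, n_jobs=1)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)  # shape (n_samples,)
```

After fitting, `feature_importances_` holds one importance value per feature, aggregated across the trees of the forest.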
- export_model(output_file, output_type='pmml')
Export model trained in snapml to the given output file using a format of the given type.
Currently only PMML is supported as export format. The corresponding output file type to be provided to the export_model function is ‘pmml’.
- Parameters:
- output_file : str
Output filename.
- output_type : {'pmml'}
Output file type.
- fit(X_train, y_train, sample_weight=None)
Fit the model according to the given train data.
- Parameters:
- X_train : dense matrix (ndarray)
Train dataset.
- y_train : array-like, shape = (n_samples,)
The target vector corresponding to X_train.
- sample_weight : array-like, shape = (n_samples,) or None
Sample weights. If None, samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
- Returns:
- self : object
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- import_model(input_file, input_type, tree_format='auto', X=None)
Import a pre-trained forest ensemble model and optimize the trees for fast inference.
Supported import formats include PMML and ONNX. The corresponding input file types to be provided to the import_model function are 'pmml' and 'onnx', respectively.
Depending on how the tree_format argument is set, this function will return a different optimized model format. This format determines which inference engine is used for subsequent calls to ‘predict’ or ‘predict_proba’.
If tree_format is set to ‘compress_trees’, the model will be optimized for execution on the CPU, using our compressed decision trees approach. Note: if this option is selected, an optional dataset X can be provided, which will be used to predict node access characteristics during node clustering.
If tree_format is set to ‘zdnn_tensors’, the model will be optimized for execution on the IBM z16 AI accelerator, using a matrix-based inference algorithm leveraging the zDNN library.
By default, tree_format is set to 'auto'. A check is performed and, if the IBM z16 AI accelerator is available, the model will be optimized according to 'zdnn_tensors'; otherwise it will be optimized according to 'compress_trees'. The selected optimized tree format can be read from the attribute self.optimized_tree_format_.
Note: If the input file contains features that are not supported by the import function, then an exception is thrown indicating the feature and the line number within the input file containing the feature.
- Parameters:
- input_file : str
Input filename.
- input_type : {'pmml', 'onnx'}
Input file type.
- tree_format : {'auto', 'compress_trees', 'zdnn_tensors'}
Tree format.
- X : dense matrix (ndarray)
Dataset used for compressing trees.
- Returns:
- self : object
- optimize_trees(tree_format='auto', X=None)
Optimize the trees in the ensemble for fast inference.
Depending on how the tree_format argument is set, this function will return a different optimized model format. This format determines which inference engine is used for subsequent calls to ‘predict’ or ‘predict_proba’.
If tree_format is set to ‘compress_trees’, the model will be optimized for execution on the CPU, using our compressed decision trees approach. Note: if this option is selected, an optional dataset X can be provided, which will be used to predict node access characteristics during node clustering.
If tree_format is set to ‘zdnn_tensors’, the model will be optimized for execution on the IBM z16 AI accelerator, using a matrix-based inference algorithm leveraging the zDNN library.
By default, tree_format is set to 'auto'. A check is performed and, if the IBM z16 AI accelerator is available, the model will be optimized according to 'zdnn_tensors'; otherwise it will be optimized according to 'compress_trees'. The selected optimized tree format can be read from the attribute self.optimized_tree_format_.
- Parameters:
- tree_format : {'auto', 'compress_trees', 'zdnn_tensors'}
Tree format.
- X : dense matrix (ndarray)
Dataset used for compressing trees.
- Returns:
- self : object
- predict(X, n_jobs=None)
Predict class labels (classification) or regression estimates for the samples in X.
- Parameters:
- X : dense matrix (ndarray) or memmap (np.memmap)
Dataset used for predicting class/regression estimates.
- n_jobs : int, default=None
Number of threads used to run inference. By default the value of the class attribute is used.
- Returns:
- pred : array-like, shape = (n_samples,)
The predicted classes/values of the samples.
- predict_proba(X, n_jobs=None)
Predict class probabilities.
- Parameters:
- X : dense matrix (ndarray)
Dataset used for predicting probabilities.
- n_jobs : int, default=None
Number of threads used to run inference. By default the value of the class attribute is used.
- Returns:
- proba : array-like, shape = (n_samples, n_classes)
The predicted class probabilities of the samples.
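A short sketch of `predict_proba`: the result has one row per sample and one column per class, with each row summing to 1. The example uses sklearn's `RandomForestClassifier` as an API-compatible stand-in when `snapml` is not installed, and a synthetic dataset invented for illustration.

```python
try:
    from snapml import RandomForestClassifier  # Snap ML, if available
except ImportError:
    from sklearn.ensemble import RandomForestClassifier  # stand-in

import numpy as np
from sklearn.datasets import make_classification

# Synthetic binary problem; two classes, so n_classes == 2.
X, y = make_classification(n_samples=120, n_features=5, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

proba = clf.predict_proba(X)
# One row per sample, one column per class; rows sum to 1.
assert proba.shape == (120, 2)
assert np.allclose(proba.sum(axis=1), 1.0)
```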
- score(X, y, sample_weight=None)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Test samples.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
Mean accuracy of self.predict(X) w.r.t. y.
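As documented, `score` is simply the mean accuracy of `predict` against the true labels; the check below demonstrates that equivalence. The sklearn estimator serves as a stand-in when `snapml` is unavailable, and the data is synthetic.

```python
# Verify score() == mean accuracy of predict(), per the docstring.
try:
    from snapml import RandomForestClassifier
except ImportError:
    from sklearn.ensemble import RandomForestClassifier  # stand-in

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=6, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

clf = RandomForestClassifier(n_estimators=10, random_state=7).fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
manual = float(np.mean(clf.predict(X_te) == y_te))
assert abs(acc - manual) < 1e-12  # identical by definition
```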
- set_fit_request(*, X_train: bool | None | str = '$UNCHANGED$', sample_weight: bool | None | str = '$UNCHANGED$', y_train: bool | None | str = '$UNCHANGED$') -> RandomForestClassifier
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- X_train : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the X_train parameter in fit.
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in fit.
- y_train : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the y_train parameter in fit.
- Returns:
- self : object
The updated object.
- set_params(**params)
Set the parameters of this model.
Valid parameter keys can be listed with get_params().
- Returns:
- self : object
- set_predict_proba_request(*, n_jobs: bool | None | str = '$UNCHANGED$') -> RandomForestClassifier
Request metadata passed to the predict_proba method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict_proba.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- n_jobs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the n_jobs parameter in predict_proba.
- Returns:
- self : object
The updated object.
- set_predict_request(*, n_jobs: bool | None | str = '$UNCHANGED$') -> RandomForestClassifier
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- n_jobs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the n_jobs parameter in predict.
- Returns:
- self : object
The updated object.
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') -> RandomForestClassifier
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in score.
- Returns:
- self : object
The updated object.
Random Forest Regressor
- class snapml.RandomForestRegressor(n_estimators=10, criterion='mse', max_depth=None, min_samples_leaf=1, max_features='auto', bootstrap=True, n_jobs=1, random_state=None, verbose=False, use_histograms=False, hist_nbins=256, use_gpu=False, gpu_ids=[0], compress_trees=False)
Random Forest Regressor
This class implements a random forest regressor using the IBM Snap ML library. It can be used for regression tasks.
- Parameters:
- n_estimators : integer, default=10
This parameter defines the number of trees in the forest.
- criterion : string, default="mse"
The function that measures the quality of a split. The only criterion currently supported is "mse".
- max_depth : integer or None, default=None
The maximum depth of the tree. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_leaf samples.
- min_samples_leaf : int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
- max_features : int, float, string or None, default='auto'
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
If "auto", then max_features=n_features.
If "sqrt", then max_features=sqrt(n_features).
If "log2", then max_features=log2(n_features).
If None, then max_features=n_features.
- bootstrap : boolean, default=True
This parameter determines whether bootstrap samples are used when building trees.
- n_jobs : integer, default=1
The number of jobs used to run the fit function in parallel.
- random_state : integer or None, default=None
If integer, random_state is the seed used by the random number generator. If None, the random number generator is the RandomState instance used by np.random.
- verbose : boolean, default=False
If True, print debugging information while training. Warning: this will increase the training time. For performance evaluation, use verbose=False.
- use_histograms : boolean, default=False
Use histogram-based splits rather than exact splits.
- hist_nbins : int, default=256
Number of histogram bins.
- use_gpu : boolean, default=False
Use GPU acceleration (only supported for histogram-based splits).
- gpu_ids : array-like of int, default=[0]
Device IDs of the GPUs to be used when GPU acceleration is enabled.
- compress_trees : bool, default=False
Compress trees after training for fast inference.
- Attributes:
- feature_importances_ : array-like, shape=(n_features,)
Feature importances computed across trees.
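A minimal usage sketch for the regressor, mirroring the classifier example. It falls back to sklearn's `RandomForestRegressor` when `snapml` is not installed, uses only shared parameters, and trains on a synthetic dataset (`make_regression`) invented for illustration.

```python
try:
    from snapml import RandomForestRegressor
except ImportError:
    from sklearn.ensemble import RandomForestRegressor  # stand-in

import numpy as np
from sklearn.datasets import make_regression

# Synthetic regression problem with mild noise.
X, y = make_regression(n_samples=150, n_features=6, noise=0.1,
                       random_state=1)

reg = RandomForestRegressor(n_estimators=10, max_depth=8,
                            random_state=1).fit(X, y)

pred = reg.predict(X)        # shape (n_samples,)
r2 = reg.score(X, y)         # in-sample R^2; high for a flexible forest
```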
- export_model(output_file, output_type='pmml')
Export model trained in snapml to the given output file using a format of the given type.
Currently only PMML is supported as export format. The corresponding output file type to be provided to the export_model function is ‘pmml’.
- Parameters:
- output_file : str
Output filename.
- output_type : {'pmml'}
Output file type.
- fit(X_train, y_train, sample_weight=None)
Fit the model according to the given train data.
- Parameters:
- X_train : dense matrix (ndarray)
Train dataset.
- y_train : array-like, shape = (n_samples,)
The target vector corresponding to X_train.
- sample_weight : array-like, shape = (n_samples,) or None
Sample weights. If None, samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
- Returns:
- self : object
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- import_model(input_file, input_type, tree_format='auto', X=None)
Import a pre-trained forest ensemble model and optimize the trees for fast inference.
Supported import formats include PMML and ONNX. The corresponding input file types to be provided to the import_model function are 'pmml' and 'onnx', respectively.
Depending on how the tree_format argument is set, this function will return a different optimized model format. This format determines which inference engine is used for subsequent calls to ‘predict’ or ‘predict_proba’.
If tree_format is set to ‘compress_trees’, the model will be optimized for execution on the CPU, using our compressed decision trees approach. Note: if this option is selected, an optional dataset X can be provided, which will be used to predict node access characteristics during node clustering.
If tree_format is set to ‘zdnn_tensors’, the model will be optimized for execution on the IBM z16 AI accelerator, using a matrix-based inference algorithm leveraging the zDNN library.
By default, tree_format is set to 'auto'. A check is performed and, if the IBM z16 AI accelerator is available, the model will be optimized according to 'zdnn_tensors'; otherwise it will be optimized according to 'compress_trees'. The selected optimized tree format can be read from the attribute self.optimized_tree_format_.
Note: If the input file contains features that are not supported by the import function, then an exception is thrown indicating the feature and the line number within the input file containing the feature.
- Parameters:
- input_file : str
Input filename.
- input_type : {'pmml', 'onnx'}
Input file type.
- tree_format : {'auto', 'compress_trees', 'zdnn_tensors'}
Tree format.
- X : dense matrix (ndarray)
Dataset used for compressing trees.
- Returns:
- self : object
- optimize_trees(tree_format='auto', X=None)
Optimize the trees in the ensemble for fast inference.
Depending on how the tree_format argument is set, this function will return a different optimized model format. This format determines which inference engine is used for subsequent calls to ‘predict’ or ‘predict_proba’.
If tree_format is set to ‘compress_trees’, the model will be optimized for execution on the CPU, using our compressed decision trees approach. Note: if this option is selected, an optional dataset X can be provided, which will be used to predict node access characteristics during node clustering.
If tree_format is set to ‘zdnn_tensors’, the model will be optimized for execution on the IBM z16 AI accelerator, using a matrix-based inference algorithm leveraging the zDNN library.
By default, tree_format is set to 'auto'. A check is performed and, if the IBM z16 AI accelerator is available, the model will be optimized according to 'zdnn_tensors'; otherwise it will be optimized according to 'compress_trees'. The selected optimized tree format can be read from the attribute self.optimized_tree_format_.
- Parameters:
- tree_format : {'auto', 'compress_trees', 'zdnn_tensors'}
Tree format.
- X : dense matrix (ndarray)
Dataset used for compressing trees.
- Returns:
- self : object
- predict(X, n_jobs=None)
Predict class labels (classification) or regression estimates for the samples in X.
- Parameters:
- X : dense matrix (ndarray) or memmap (np.memmap)
Dataset used for predicting class/regression estimates.
- n_jobs : int, default=None
Number of threads used to run inference. By default the value of the class attribute is used.
- Returns:
- pred : array-like, shape = (n_samples,)
The predicted classes/values of the samples.
- score(X, y, sample_weight=None)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
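The \(R^2\) definition above can be computed by hand and checked against `sklearn.metrics.r2_score`; no Snap ML is required, and the small arrays are illustrative values only.

```python
# Compute R^2 = 1 - u/v from its definition and cross-check
# against sklearn.metrics.r2_score.
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
r2_manual = 1 - u / v

assert np.isclose(r2_manual, r2_score(y_true, y_pred))
print(round(r2_manual, 4))  # 0.9486
```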
- set_fit_request(*, X_train: bool | None | str = '$UNCHANGED$', sample_weight: bool | None | str = '$UNCHANGED$', y_train: bool | None | str = '$UNCHANGED$') -> RandomForestRegressor
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- X_train : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the X_train parameter in fit.
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in fit.
- y_train : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the y_train parameter in fit.
- Returns:
- self : object
The updated object.
- set_params(**params)
Set the parameters of this model.
Valid parameter keys can be listed with get_params().
- Returns:
- self : object
- set_predict_request(*, n_jobs: bool | None | str = '$UNCHANGED$') -> RandomForestRegressor
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- n_jobs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the n_jobs parameter in predict.
- Returns:
- self : object
The updated object.
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') -> RandomForestRegressor
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in score.
- Returns:
- self : object
The updated object.