Releases: tidymodels/tune
tune 0.1.4
- Fixed an issue in `finalize_recipe()`, which failed during the tuning of recipe steps that contain multiple `tune()` parameters in a single step.
- Changed `conf_mat_resampled()` to return the same type of object as `yardstick::conf_mat()` when `tidy = FALSE` (#370). See the sketch after this list.
- The automatic parameter machinery for `sample_size` with the C5.0 engine was changed to use `dials::sample_prop()`.
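As a quick illustration of the `conf_mat_resampled()` change, here is a minimal sketch; the `two_class_dat` data (from modeldata) and the logistic regression are illustrative assumptions:

```r
# Resample a classifier with predictions saved, then average the confusion
# matrix across resamples. With tidy = FALSE the result is a
# yardstick::conf_mat object, so yardstick's print/autoplot methods apply.
library(tidymodels)

set.seed(1)
folds <- vfold_cv(two_class_dat, v = 5)

res <- fit_resamples(
  logistic_reg(),
  Class ~ .,
  resamples = folds,
  control = control_resamples(save_pred = TRUE)
)

conf_mat_resampled(res, tidy = FALSE)
```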
tune 0.1.3
- The `rsample::pretty()` methods were extended to `tune_results` objects.
- Added `pillar` methods for formatting `tune` objects in list columns.
- A method for `.get_fingerprint()` was added. This helps determine if `tune` objects used the same resamples; see the sketch after this list.
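A minimal sketch of `.get_fingerprint()`, assuming `res_a` and `res_b` are two hypothetical `tune_results` objects:

```r
# Matching fingerprints indicate that both sets of results were computed on
# the same resamples, so their performance estimates are directly comparable.
identical(.get_fingerprint(res_a), .get_fingerprint(res_b))
```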
tune 0.1.2
Bug Fixes
- `last_fit()` and `workflows::fit()` will now give identical results for the same workflow when the underlying model uses random number generation (#300). See the sketch after this list.
- Fixed an issue where recipe tuning parameters could be randomly matched to the tuning grid incorrectly (#316).
- `last_fit()` no longer accidentally adjusts the random seed (#264).
- Fixed two bugs in the acquisition function calculations.
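A minimal sketch verifying the first fix; the ranger random forest (an engine that relies on random number generation) and `mtcars` are illustrative assumptions:

```r
# With the same seed, last_fit() and workflows::fit() should now agree,
# even though the ranger engine uses random numbers during fitting.
library(tidymodels)

wf <- workflow() %>%
  add_formula(mpg ~ .) %>%
  add_model(rand_forest(mode = "regression") %>% set_engine("ranger"))

set.seed(2)
split <- initial_split(mtcars)

set.seed(3)
via_last_fit <- last_fit(wf, split)

set.seed(3)
via_fit <- fit(wf, data = training(split))

identical(
  predict(via_last_fit$.workflow[[1]], testing(split)),
  predict(via_fit, testing(split))
)
```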
Other Changes
- New `parallel_over` control argument to adjust the parallel processing method that tune uses.
- The `.config` column that appears in the tibble returned from tuning and fitting resamples has changed slightly. It is now always of the form `"Preprocessor<i>_Model<j>"`.
- `predict()` can now be called on the workflow returned from `last_fit()` (#294, #295, #296).
- tune now supports setting the `event_level` option from yardstick through the control objects (e.g., `control_grid(event_level = "second")`) (#240, #249). See the sketch after this list.
- tune now supports workflows created with the new `workflows::add_variables()` preprocessor.
- Better control of the random number streams in parallel for `tune_grid()` and `fit_resamples()` (#11).
- Allow `...` to pass options from `tune_bayes()` to `GPfit::GP_fit()`.
- Additional checks are done for the initial grid that is given to `tune_bayes()`. If the initial grid is small relative to the number of model terms, a warning is issued. If the grid is a single point, an error occurs (#269).
- Formatting of some messages created by `tune_bayes()` now respects the width and wraps lines using the new `message_wrap()` function.
- tune functions (`tune_grid()`, `tune_bayes()`, etc.) will now error if a model specification or model workflow is not given as the first argument (the soft deprecation period is over).
- An `augment()` method was added for objects generated by `tune_*()`, `fit_resamples()`, and `last_fit()`.
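A minimal sketch combining several of the items above: the yardstick `event_level` option and the new `parallel_over` option are set through the control object, and `predict()` is called on the workflow that `last_fit()` returns. The data, model, and penalty value are illustrative assumptions:

```r
library(tidymodels)

set.seed(1)
split <- initial_split(two_class_dat)
folds <- vfold_cv(training(split), v = 5)

ctrl <- control_grid(
  event_level = "second",       # treat the second factor level as the event
  parallel_over = "everything"  # parallelize over resamples x parameters
)

res <- tune_grid(
  logistic_reg(penalty = tune()) %>% set_engine("glmnet"),
  Class ~ .,
  resamples = folds,
  grid = 5,
  control = ctrl
)

final <- last_fit(
  logistic_reg(penalty = 0.01) %>% set_engine("glmnet"),
  Class ~ .,
  split
)

# predict() now works on the returned workflow:
predict(final$.workflow[[1]], head(testing(split)))
```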
tune 0.1.1
Breaking Changes
- `autoplot.tune_results()` now requires objects made by version 0.1.0 or higher of tune.
- `tune` objects no longer keep the `rset` class that they have from the `resamples` argument.
Other Changes
- `autoplot.tune_results()` now produces a different plot when the tuning grid is a regular grid (i.e., factorial or nearly factorial in nature). If there are 5+ parameters, the standard plot is produced. Non-regular grids are plotted in the same way (although see the next bullet point). See `?autoplot.tune_results` for more information.
- `autoplot.tune_results()` now transforms the parameter values for the plot. For example, if the `penalty` parameter was used for a regularized regression, the points are plotted on the log-10 scale (its default transformation). For non-regular grids, the facet labels show the transformation type (e.g., `"penalty (log-10)"` or `"cost (log-2)"`). For regular grids, the x-axis is scaled using `scale_x_continuous()`.
- Finally, `autoplot.tune_results()` now shows the parameter labels in a plot. For example, if a k-nearest neighbors model was used with `neighbors = tune()`, the parameter will be labeled as `"# Nearest Neighbors"`. When an ID was used, such as `neighbors = tune("K")`, it is used to identify the parameter.
- In other plotting news, `coord_obs_pred()` has been included for regression models. When plotting the observed and predicted values from a model, this forces the x- and y-axes to have the same range and uses an aspect ratio of 1. See the sketch after this list.
- The outcome names are saved in an attribute called `outcomes` on objects with class `tune_results`. Also, several accessor functions (named `.get_tune_*()`) were added to more easily access such attributes.
- `conf_mat_resampled()` computes the average confusion matrix across resampling statistics for a single model.
- `show_best()` and the `select_*()` functions will now use the first metric in the metric set if no metric is supplied.
- `filter_parameters()` can trim the `.metrics` column of unwanted results (as well as the `.predictions` and `.extracts` columns) from `tune_*` objects.
- In concert with dials > 0.0.7, tuning engine-specific arguments is possible. Many known engine-specific tuning parameters are handled automatically.
- If a grid is given, parameters do not need to be finalized to be used in the `tune_*()` functions.
- Added a `save_workflow` argument to the `control_*` functions that will result in the workflow object used to carry out tuning/fitting (regardless of whether a formula or recipe was given as input to the function) being appended to the resulting `tune_results` object in a `workflow` attribute. The new `.get_tune_workflow()` function can be used to access the workflow, as shown in the sketch after this list.
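A minimal sketch of `coord_obs_pred()` and `.get_tune_workflow()`, assuming `reg_res` is a hypothetical regression resampling result created with `save_pred = TRUE` and `save_workflow = TRUE` in the control object, and that the outcome column is named `mpg`:

```r
library(tidymodels)

# Observed vs. predicted with matching axis ranges and an aspect ratio of 1:
collect_predictions(reg_res) %>%
  ggplot(aes(x = mpg, y = .pred)) +
  geom_abline(lty = 2) +
  geom_point(alpha = 0.5) +
  coord_obs_pred()

# Retrieve the workflow stored via save_workflow = TRUE:
.get_tune_workflow(reg_res)
```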
tune 0.1.0
Breaking Changes
- The arguments to the main tuning/fitting functions (`tune_grid()`, `tune_bayes()`, etc.) have been reordered to better align with parsnip's `fit()`. The first argument to all of these functions is now a model specification or model workflow. The previous versions are soft-deprecated as of 0.1.0 and will be deprecated as of 0.1.2. See the sketch below.
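A minimal sketch of the new argument order, using `fit_resamples()` with an illustrative linear regression on `mtcars`:

```r
library(tidymodels)

set.seed(1)
res <- fit_resamples(
  linear_reg() %>% set_engine("lm"),  # the model specification comes first
  mpg ~ .,                            # then the preprocessor
  resamples = vfold_cv(mtcars, v = 5)
)
```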
Other Changes
- Added more packages to be fully loaded in the workers when run in parallel using `doParallel` (#157, #159, #160).
- `collect_predictions()` gains two new arguments. `parameters` allows for pre-filtering of the hold-out predictions by tuning parameter values. If you are only interested in one sub-model, this makes things much faster. The other option is `summarize`; it is used when the resampling method has training set rows that are predicted in multiple holdout sets. See the sketch after this list.
- `select_best()`, `select_by_one_std_err()`, and `select_by_pct_loss()` no longer have a redundant `maximize` argument (#176). Each metric set in yardstick now has a direction (maximize vs. minimize) built in.
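A minimal sketch of the new `collect_predictions()` arguments, assuming `res` is hypothetical `tune_grid()` output created with `save_pred = TRUE` and an RMSE metric:

```r
# Keep only the hold-out predictions for the single best candidate and
# average any rows that were predicted in more than one assessment set.
best <- select_best(res, metric = "rmse")
collect_predictions(res, parameters = best, summarize = TRUE)
```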
Bug Fixes
- `tune_bayes()` no longer errors when a recipe that has tuning parameters is combined with a parameter set whose defaults contain unknown values (#168).
old-workflows
- Last version before removing references to external workflow functions.
- Added a vignette page describing optimizations.