library(mlr3torch)
library(mlr3tuning)
library(mlr3learners)
learner = lrn("classif.mlp",
  # define the tuning space via the to_tune() tokens
  # use either 16, 32, or 64
  batch_size = to_tune(c(16, 32, 64)),
  # tune the dropout probability in the interval [0.1, 0.9]
  p = to_tune(0.1, 0.9),
  # tune the epochs using early stopping (internal = TRUE)
  epochs = to_tune(upper = 1000L, internal = TRUE),
  # configure the early stopping / validation
  validate = 0.3,
  measures_valid = msr("classif.acc"),
  patience = 10,
  device = "cpu"
)
at = auto_tuner(
  learner = learner,
  tuner = tnr("grid_search"),
  resampling = rsmp("cv"),
  measure = msr("classif.acc"),
  term_evals = 10
)
task = tsk("iris")
at$train(task)
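After training, the tuned configuration can be inspected; a sketch using the standard AutoTuner fields from mlr3tuning:

```r
# best hyperparameter configuration found during tuning
at$tuning_result

# archive of all evaluated configurations
as.data.table(at$archive)

# score the refit final model on the task (in-sample, for illustration only)
at$predict(task)$score(msr("classif.acc"))
```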
future::plan("multisession")
design = benchmark_grid(
  tasks = tsk("iris"),
  learners = list(at, lrn("classif.ranger")),
  resamplings = rsmp("cv", folds = 10)
)
benchmark(design)
# parallelize the outer resampling, not the inner resampling
# 1. apply learner at to fold 1 of iris (outer)
# 2. apply learner at to fold 2 of iris (outer)
#    (the autotuner itself can also parallelize execution (inner))
# ...
# 10. apply learner at to fold 10 of iris (outer)
# 11. apply learner ranger to fold 1 of iris (outer)
# ...
# 20. apply learner ranger to fold 10 of iris (outer)
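To actually compare the two learners once the benchmark finishes, the result object can be aggregated over the outer folds, e.g.:

```r
bmr = benchmark(design)

# mean classification accuracy per learner across the 10 outer folds
bmr$aggregate(msr("classif.acc"))
```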
cxzhang4 changed the title from "simple use case: xgboost vs torch: no factors, no missing values" to "simple use case: rf vs torch: no factors, no missing values" on Oct 17, 2024
Goal: Compare random forests with a simple multi-layer perceptron in a simple benchmark experiment.
Use three small, simple classification tasks from OpenML: https://mlr3book.mlr-org.com/chapters/chapter11/large-scale_benchmarking.html. "Simple" means no missing values and no factor variables, i.e. only numeric features.
classif.ranger (no hyperparameter tuning)
classif.mlp with hyperparameter tuning: for that, we need to wrap the classif.mlp learner in an AutoTuner. You need to define a tuner, a resampling strategy, a performance measure, and a terminator.
We also need to parallelize the experiment execution using the future package: https://mlr3book.mlr-org.com/chapters/chapter10/advanced_technical_aspects_of_mlr3.html#sec-parallel-learner
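One way to parallelize only the outer benchmark loop while keeping the inner tuning sequential (an assumption about the intended setup) is a nested future topology:

```r
library(future)

# outer level (benchmark resampling) runs in parallel sessions,
# inner level (the AutoTuner's tuning loop) stays sequential
plan(list("multisession", "sequential"))
```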