geocompr
Ch12 comments
Continuing my read through the book and examples, I have some minor things :) All of this is probably best for @jannes-m.
- The task initialization syntax seems to have changed and the spatial arguments don't need to be passed in a list.
> task = mlr3spatiotempcv::TaskClassifST$new(
+ id = "ecuador_lsl",
+ backend = mlr3::as_data_backend(lsl),
+ target = "lslpts",
+ positive = "TRUE",
+ extra_args = list(
+ coordinate_names = c("x", "y"),
+ coords_as_features = FALSE,
+ crs = "EPSG:32717"
+ ))
Error in .__TaskClassifST__initialize(self = self, private = private, :
argument "coordinate_names" is missing, with no default
> # create task
> task = mlr3spatiotempcv::TaskClassifST$new(
+ id = "ecuador_lsl",
+ backend = mlr3::as_data_backend(lsl),
+ target = "lslpts",
+ positive = "TRUE",
+ coordinate_names = c("x", "y"),
+ coords_as_features = FALSE,
+ crs = "EPSG:32717"
+ )
>
- When differentiating between hyperparameters and parameters, a 'machine mastery' blog post is referred to, but no link or detailed reference is present (note 78, sect 12.5.2).
- Copy-pasting the SVM code in 12.5.2 and running it in RStudio results in an AUROC of 0.5 in all 5 cases; I'm not sure what's going on. I'm pasting my sessionInfo() below in case some package version differences explain this...
- Still to do: the exercises.
> sessionInfo()
R version 4.2.0 (2022-04-22 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22000)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.utf8
[2] LC_CTYPE=English_United States.utf8
[3] LC_MONETARY=English_United States.utf8
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.utf8
attached base packages:
[1] stats graphics grDevices utils datasets
[6] methods base
other attached packages:
[1] terra_1.5-34 sf_1.0-7
[3] progressr_0.10.1 mlr3viz_0.5.9
[5] mlr3tuning_0.13.1 paradox_0.9.0
[7] mlr3spatiotempcv_2.0.1 mlr3learners_0.5.3
[9] mlr3_0.13.4 lgr_0.4.3
[11] future_1.27.0 dplyr_1.0.9
loaded via a namespace (and not attached):
[1] uuid_1.1-0 backports_1.4.1
[3] lwgeom_0.2-8 plyr_1.8.7
[5] igraph_1.3.2 sp_1.5-0
[7] crosstalk_1.2.0 listenv_0.8.0
[9] leaflet_2.1.1 usethis_2.1.6
[11] ggplot2_3.3.6 digest_0.6.29
[13] htmltools_0.5.3 tiff_0.1-11
[15] fansi_1.0.3 magrittr_2.0.3
[17] checkmate_2.1.0 memoise_2.0.1
[19] remotes_2.4.2 link2GI_0.4-7
[21] globals_0.15.1 prettyunits_1.1.1
[23] jpeg_0.1-9 colorspace_2.0-3
[25] xfun_0.31 leafem_0.2.0
[27] callr_3.7.0 crayon_1.5.1
[29] RCurl_1.98-1.7 roxygen2_7.2.0
[31] glue_1.6.2 mlr3measures_0.4.1
[33] stars_0.5-5 gtable_0.3.0
[35] webshot_0.5.3 kernlab_0.9-31
[37] pkgbuild_1.3.1 future.apply_1.9.0
[39] BiocGenerics_0.42.0 abind_1.4-5
[41] scales_1.2.0 DBI_1.1.3
[43] GGally_2.1.2 Rcpp_1.0.9
[45] viridisLite_0.4.0 progress_1.2.2
[47] palmerpenguins_0.1.0 units_0.8-0
[49] proxy_0.4-27 stats4_4.2.0
[51] htmlwidgets_1.5.4 RColorBrewer_1.1-3
[53] ellipsis_0.3.2 pkgconfig_2.0.3
[55] reshape_0.8.9 XML_3.99-0.10
[57] farver_2.1.1 locfit_1.5-9.5
[59] utf8_1.2.2 tidyselect_1.1.2
[61] labeling_0.4.2 rlang_1.0.4
[63] tmaptools_3.1-1 munsell_0.5.0
[65] tools_4.2.0 cachem_1.0.6
[67] mlr3extralearners_0.5.45 cli_3.3.0
[69] generics_0.1.3 devtools_2.4.3
[71] evaluate_0.15 stringr_1.4.0
[73] fastmap_1.1.0 fftwtools_0.9-11
[75] yaml_2.3.5 processx_3.7.0
[77] knitr_1.39 fs_1.5.2
[79] purrr_0.3.4 satellite_1.0.4
[81] readbitmap_0.1.5 bbotk_0.5.3
[83] pracma_2.3.8 xml2_1.3.3
[85] brio_1.1.3 compiler_4.2.0
[87] rstudioapi_0.13 curl_4.3.2
[89] png_0.1-7 e1071_1.7-11
[91] testthat_3.1.4 tibble_3.1.8
[93] stringi_1.7.8 ps_1.7.1
[95] desc_1.4.1 lattice_0.20-45
[97] classInt_0.4-7 vctrs_0.4.1
[99] pillar_1.8.0 lifecycle_1.0.1
[101] data.table_1.14.2 bitops_1.0-7
[103] raster_3.5-21 mapview_2.11.0
[105] R6_2.5.1 imager_0.42.13
[107] KernSmooth_2.23-20 bmp_0.3
[109] mlr3misc_0.10.0 parallelly_1.32.1
[111] sessioninfo_1.2.2 codetools_0.2-18
[113] dichromat_2.0-0.1 MASS_7.3-56
[115] pkgload_1.2.4 rprojroot_2.0.3
[117] withr_2.5.0 hms_1.1.1
[119] parallel_4.2.0 EBImage_4.38.0
[121] grid_4.2.0 class_7.3-20
[123] rmarkdown_2.14 pROC_1.18.0
[125] base64enc_0.1-3
Thanks for these additional comments and agreed Jannes is best placed to respond. Keep the feedback coming!
Thanks for reviewing! And yes, you are right, the interface changed, will update accordingly!
As a follow-up, I went through the exercises and had one more thing to fix up. I'll add that I sent the chapter to another lab member, and he already plans to incorporate the spatial CV methods into the paper he's working on, so great work introducing it and publicizing it to the community!
- The reference to benchmarking in Exercise E4 should point to https://mlr3book.mlr-org.com/03-perf-benchmarking.html; the current link (https://mlr3book.mlr-org.com/perf-eval-cmp.html#benchmarking) fails.
@Lvulis thank you again so much for your comments. I have addressed them all.
The task initialization syntax seems to have changed and the spatial arguments don't need to be passed in a list.
Thanks for noting! Finally changed.
When differentiating between hyperparameters and parameters a 'machine mastery' blog post is referred to but no link/detailed reference is present (note 78, sect 12.5.2)
The link has been commented out, I suppose because it was in a footnote: when compiling the PDF, all links in the main text are converted to footnote links, so a link that is already inside a footnote might have led to trouble (just guessing, @Nowosad?). In any case, I have now put the link into the main text.
Copy pasting the SVM code in 12.5.2 and running it in RStudio results in an AUROC of 0.5 in all 5 cases, not sure what's going on.
I hadn't specified the type argument. Setting it to "C-svc" makes sure that ksvm() solves a classification task. This was probably the default value previously; however, not setting it now results in an error, which in turn makes the fallback learner run, and that yields an AUROC of 0.5 for all runs. So thanks for spotting this, good catch!
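The fix described above can be sketched as follows (a minimal sketch, assuming mlr3, mlr3learners and kernlab are installed; the learner id "classif.ksvm" and parameter names follow the mlr3learners interface for the kernlab SVM):

```r
library(mlr3)
library(mlr3learners)

# Construct the SVM learner and set type explicitly, so that ksvm()
# fits a classification SVM instead of erroring out (an error would
# trigger the fallback learner and produce an AUROC of 0.5 in every fold).
lrn_svm = lrn("classif.ksvm",
  predict_type = "prob",
  kernel = "rbfdot",
  type = "C-svc"  # classification SVM; no longer safe to leave unset
)
```

With type set, the tuned SVM should again produce informative (non-0.5) AUROC values in the spatial cross-validation.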
The reference to benchmarking in Exercise E4 should be to https://mlr3book.mlr-org.com/03-perf-benchmarking.html, the current link (https://mlr3book.mlr-org.com/perf-eval-cmp.html#benchmarking) fails.
Thanks for noting. The link has changed once again, and this one we are using now.
I suppose because it was a link in a footnote because when compiling the pdf all links in the main text have been converted to footnote links, so I suppose that a link in a footnote might have led to trouble (just guessing, @Nowosad?)
I do not remember. We will see soon ;)
Don't think this will happen as I have now moved the link to the main text.