Fleming Kretschmer

11 comments by Fleming Kretschmer

lsp and the python-shell-interpreter already work with tramp when `conda-env-current-dir` is set manually, with

```emacs-lisp
;; remove "/ssh:host:" from conda dir
(setq conda-tramp-path
      (replace-regexp-in-string ".*:" "" (format "%s/bin" conda-env-current-dir)))
(add-to-list...
```

Unfortunately, BERTax needs an older version of TensorFlow due to its `keras-bert` dependency. Perhaps the Docker version can help here: https://github.com/f-kretschmer/bertax#docker?

Hi, sorry for the delay in answering. Unfortunately, I think you will have to do comparisons based on the plots, as I do not have this data in another format.

I've adapted the PR above to introduce only the `threads` option.

Hello Peter, many thanks for your tests and suggestions! I haven't looked into runtime optimization that much so far, so I think there are definitely some improvements that can be...

Sorry for the late answer.

1. Since we did our evaluations, the NCBI taxonomy has likely had some changes. Here is a taxdump for the version we used: https://upload.uni-jena.de/data/656deff28d9cd2.73093822/taxdump.tar.gz. It...

I'm sorry, I think the `taxdump.tar.gz` is the incorrect version; this must be the correct one: https://ftp.ncbi.nih.gov/pub/taxonomy/taxdump_archive/new_taxdump_2021-04-01.zip

Hi! Both the balanced accuracy calculation ([sklearn.metrics](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics).balanced_accuracy_score) and the average precision calculation ([sklearn.metrics](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics).precision_score) are used for all ranks.
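
For illustration, a minimal sketch (not the actual evaluation code) of how the two sklearn functions named above could be applied per rank; the `predictions` dict, its example labels, and the macro averaging for `precision_score` are assumptions:

```python
# Sketch only: per-rank balanced accuracy and (macro-averaged) precision with sklearn.
# `predictions` is an assumed mapping of each taxonomic rank to (y_true, y_pred) lists.
from sklearn.metrics import balanced_accuracy_score, precision_score

predictions = {
    "superkingdom": (["Bacteria", "Viruses", "Bacteria"],
                     ["Bacteria", "Bacteria", "Bacteria"]),
    "phylum": (["Proteobacteria", "Firmicutes", "Firmicutes"],
               ["Proteobacteria", "Firmicutes", "Proteobacteria"]),
}

for rank, (y_true, y_pred) in predictions.items():
    bacc = balanced_accuracy_score(y_true, y_pred)
    # the averaging mode is an assumption; precision_score needs one for multi-class labels
    prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
    print(f"{rank}: balanced accuracy = {bacc:.3f}, precision = {prec:.3f}")
```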

In this table it is Average Precision (AveP), but we also have Precision-Recall plots, ROC curves, and balanced accuracy.

The "final" dataset has a lot more data and also an additional output layer for "genus" prediction. Everything is detailed in the section "Performance of Final BERTax Model" in the...