Aleksei Petrenko
Fair enough! I guess I never really used split models with RNNs after we introduced this optimization. Thank you for pointing that out! I'm trying to work on a major update...
For now the plans are relatively modest. I want to make the API more modular so that several types of online algorithms can be supported. Specifically, I want to add the following use...
@gebob19 btw any additional feedback is welcome!
Hi @LinxiFan! Thank you for reporting this! Did you install from a Python package or build the code yourself? We're looking into this! @BoyuanLong FYI
I found one machine where I have the exact same problem after installing from a prebuilt package. It helped to just install from source, or build the package myself. Can...
@LinxiFan can you please send your version of OS (i.e. Ubuntu release) and version of GLM? (`dpkg -l | grep glm` on Ubuntu for the GLM version). Also, can you just...
@tushartk I assigned you for now, please let me know if you're interested in working on this. Could be a really cool feature to have.
Hi Eugene! There's currently no such feature, but it should be relatively straightforward to add. You'd need to introduce a new CLI parameter for this, and modify the code here:...
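
A minimal sketch of what adding such a CLI parameter could look like, assuming argparse-style configuration; the parameter name `--my_new_param` and the helper `add_custom_args` are placeholders, not the actual argument-parsing code in the repo:

```python
import argparse

def add_custom_args(parser: argparse.ArgumentParser):
    # Hypothetical new parameter; replace the name, type, and help text
    # with whatever behavior you actually want to control.
    parser.add_argument(
        "--my_new_param", type=int, default=0,
        help="placeholder for the new behavior you want to control",
    )

parser = argparse.ArgumentParser()
add_custom_args(parser)
cfg = parser.parse_args(["--my_new_param", "42"])
print(cfg.my_new_param)  # 42
```

The value would then be available on the config object wherever the relevant code path needs it.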
Sorry, the topic of interpretability in RL is rather obscure to me. I didn't even know about https://captum.ai/. From their tutorial on the website, I can't see why this can't...
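
Judging only from their tutorial, something along these lines looks possible. This is just a rough sketch with a toy policy network, not code from this repo; Captum's `IntegratedGradients` is used to attribute the chosen action's logit back to the observation features:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy policy: observation -> action logits (placeholder for a real model).
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))

obs = torch.randn(1, 16, requires_grad=True)
action = policy(obs).argmax(dim=-1).item()

ig = IntegratedGradients(policy)
attributions = ig.attribute(obs, target=action)
print(attributions.shape)  # same shape as the observation
```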
Hi! Provided that you can run PyTorch on M1, I don't see why this wouldn't work. But M1 is a CPU, not a dedicated GPU that is currently used to...
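
Just to illustrate what this would amount to in practice (a generic PyTorch sketch, not Sample Factory code): without a CUDA GPU everything simply falls back to the CPU device.

```python
import torch

# Pick the training device, falling back to CPU when no CUDA GPU is available
# (e.g. on an Apple M1 machine without a dedicated NVIDIA GPU).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(16, 4).to(device)
obs = torch.randn(8, 16, device=device)
print(model(obs).shape, "on", device)
```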