Elijah Rippeth
Excellent, thanks for the clarification. One idea that's kind of messy is using `globals()` to populate the module with `as_` or fully qualified name. The issue is that it will...
So you want to stick with the current approach, but wrap up the try/except logic?

```python
class XXXDataPipe:
    def __init__(self, ...):
        self._abc = lazy_import("abc")

    def __iter__(self):
        self._abc
```
I asked [here](https://stackoverflow.com/questions/70582704/how-can-i-lazily-import-a-module-in-python?noredirect=1#comment124772376_70582704) and it seems like [pandas has something similar](https://github.com/pandas-dev/pandas/blob/a6c1f6cccee6bbccfb29488a94664ed07db024d9/pandas/compat/_optional.py#L65) to what we want. They still need to import where used so it's not terribly different, but it prevents...
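A stripped-down version of that pandas-style helper might look like the following (the name `import_optional` and the error message are illustrative, not pandas' actual API); the point is that callers still import at the use site, but missing optional dependencies fail with an actionable message:

```python
import importlib


def import_optional(name, extra_hint=""):
    """Import `name`, raising a friendlier ImportError if it's missing."""
    try:
        return importlib.import_module(name)
    except ImportError as err:
        raise ImportError(
            f"Missing optional dependency '{name}'. {extra_hint}"
        ) from err


# Usage at the call site, just like pandas does internally:
math = import_optional("math", extra_hint="Install it via your package manager.")
print(math.sqrt(9))
```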
You can perform forced decoding with the following script:

```python
#!/usr/bin/env python3
import torch
from fairseq.sequence_scorer import SequenceScorer
from fairseq.models.transformer import TransformerModel

if __name__ == "__main__":
    sent_src = "Hello world!"
    ...
```
Reddit seems like a classic choice for negative examples. 🙃
Note that this was discussed previously in #168.
Only slightly related, but is there something we (non-meta'ers) can read about the sunsetting of torchscript?
I've handled simple conda packaging before, but complexity seems to increase with more targets (like CUDA toolkits). I'm happy to work with someone more experienced to push this through. It...
Worth mentioning that triton is an _optional_ dependency here, not that it makes a huge difference from the condafication perspective.
See [here](https://github.com/facebookresearch/xformers/blob/b582882cfd9ca526068843b9debf5bd905c66425/xformers/triton/__init__.py#L9) -- basically if CUDA is available, triton is enabled for fusing layers. For CPU-only, triton is not required.
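The gating pattern there is roughly the following (a sketch; the function name is illustrative and not copied from xformers): triton is enabled only when it imports cleanly *and* a CUDA device is present, so CPU-only installs fall back without error.

```python
def triton_available():
    """Return True only if triton imports and a CUDA device is usable."""
    try:
        import triton  # noqa: F401  # optional dependency; may be absent
        import torch

        return torch.cuda.is_available()
    except ImportError:
        # CPU-only install: triton isn't required, just skip fused layers.
        return False
```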