罗崚骁 (LUO Lingxiao)

Results: 20 issues by 罗崚骁 (LUO Lingxiao)

Currently, importing `xformers.ops` implicitly initializes a CUDA context. This has the unpleasant side effect that we cannot use the "fork" multiprocessing start method. The line of code that initializes the CUDA context is...
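A quick way to observe the effect (a minimal sketch, assuming a CUDA-enabled PyTorch build with xformers installed):

```python
import torch

# No CUDA context should exist before the import.
print(torch.cuda.is_initialized())  # expected: False

import xformers.ops  # noqa: F401

# If the import implicitly creates a CUDA context, this flips to True,
# and child processes started with the "fork" method inherit a CUDA
# context they cannot safely use.
print(torch.cuda.is_initialized())
```

Until the import is made lazy, a common workaround is to use the "spawn" start method instead of "fork".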

Hello, I would like to pretrain Medical Net with my custom datasets and settings. Any plan on releasing code for pretraining?

NumPy has deprecated aliases of builtin types like `np.int` [since v1.20](https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations). With a recent version of NumPy, an exception is raised when a deprecated dtype is accessed. For example,...
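For illustration (a minimal sketch; the exact call site in the affected code may differ):

```python
import numpy as np

# np.int was deprecated in NumPy 1.20 and removed in 1.24:
#   np.array([1, 2, 3], dtype=np.int)  # AttributeError on recent NumPy
# Use the builtin int or an explicit dtype such as np.int64 instead:
x = np.array([1, 2, 3], dtype=np.int64)
print(x.dtype)  # int64
```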

Hello, thank you so much for sharing this repository! I want to ask a question about the implementation. According to the code, the `ModelCheckpoint` callback is used to save the...
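For reference, a minimal usage sketch, assuming the `ModelCheckpoint` meant here is PyTorch Lightning's (which may not match the repository in question; the monitored metric name is hypothetical):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

# Keep only the checkpoint with the lowest validation loss.
checkpoint_cb = ModelCheckpoint(monitor="val_loss", mode="min", save_top_k=1)
trainer = Trainer(max_epochs=10, callbacks=[checkpoint_cb])
# trainer.fit(model, datamodule=dm)  # model/datamodule omitted here
```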

**Describe the bug** `apply_affine_to_boxes` does not handle flipping properly, due to the left-closed and right-open nature of the boxes. That is to say, when a box is flipped, the left-closed...
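With half-open boxes `[start, end)`, flipping an axis of size `W` should map `[x0, x1)` to `[W - x1, W - x0)`, not `[W - x0, W - x1)`. A minimal sketch of the expected behaviour, independent of the MONAI implementation (the helper and box layout below are hypothetical, for illustration only):

```python
import numpy as np

def flip_boxes_halfopen(boxes: np.ndarray, spatial_size: int, axis: int = 0) -> np.ndarray:
    """Flip half-open boxes along one axis.

    `boxes` has shape (N, 2 * ndim): start coordinates first, then end coordinates.
    """
    ndim = boxes.shape[1] // 2
    start = boxes[:, axis].copy()
    end = boxes[:, axis + ndim].copy()
    flipped = boxes.copy()
    # The new start comes from the old *end*, which keeps the interval half-open.
    flipped[:, axis] = spatial_size - end
    flipped[:, axis + ndim] = spatial_size - start
    return flipped

boxes = np.array([[0, 2]])            # covers indices {0, 1} on a size-5 axis
print(flip_boxes_halfopen(boxes, 5))  # [[3 5]], i.e. indices {3, 4}
```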

### Preflight Checklist
* [x] I agree to follow the [Code of Conduct](https://github.com/jgraph/drawio/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for a feature request...

### Bug description
Gradients do not appear to be synchronized when using manual optimization together with `DDPStrategy` and `static_graph=True`.
### What version are you seeing the problem on?
v2.0.5
### How to reproduce the bug
...
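A minimal sketch of the kind of setup described (the toy model and data are hypothetical, not the original reproduction):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.strategies import DDPStrategy


class ManualOptModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # manual optimization, as in the report
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.manual_backward(loss)  # gradients should be all-reduced across ranks here
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    data = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
    trainer = Trainer(
        max_epochs=1,
        devices=2,
        strategy=DDPStrategy(static_graph=True),  # the setting suspected in the report
    )
    trainer.fit(ManualOptModel(), DataLoader(data, batch_size=8))
```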

bug
needs triage
ver: 2.0.x

### 📚 Documentation
The docs for manual optimization give an example of gradient clipping (added by #16023):
```python
from lightning.pytorch import LightningModule


class SimpleModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        ...
```
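The pattern that doc example illustrates looks roughly like the following sketch (the model, loss, and clipping values here are illustrative, not the documentation's exact code):

```python
import torch
from lightning.pytorch import LightningModule


class ClippedManualOptModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.manual_backward(loss)
        # Clip manually before stepping; clip_gradients defers to the precision plugin.
        self.clip_gradients(opt, gradient_clip_val=0.5, gradient_clip_algorithm="norm")
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```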

bug
precision: amp

To be brief, I just have no idea why `torch.bool` (and other integer dtypes, except for `torch.int32`) is considered unsupported by PyTorch (it seems to work for...
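If the check in question concerns autocast/AMP (an assumption here), note that autocast only affects floating-point operations, so boolean and integer tensors pass through unchanged; a minimal sketch:

```python
import torch

mask = torch.zeros(4, dtype=torch.bool)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = mask | ~mask  # boolean ops are not touched by autocast
print(out.dtype)        # torch.bool
```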

enhancement
Contribution wanted
Feature request