Bhimraj Yadav
Closing this as it has been fixed by [#507](https://github.com/Lightning-AI/litData/pull/507). Please feel free to reopen the issue if it still persists.
@ysjprojects, any thoughts on this?
Hi @philgzl, thanks for creating this PR and adding support for parallel streaming! Just wanted to share a runtime error I encountered when testing with `batch_size >= 3` and...
For the macOS test, I believe it will require a rerun by the admin to allow it to run beyond 35 minutes, as it's currently exceeding the limit. @philgzl, you...
> Does anyone have any recommendations for alternative frameworks that allow per-model user-provided code like torchserve's `handler.py`? Hi @geodavic, I’d recommend [LitServe](https://github.com/Lightning-AI/LitServe) as a great alternative. As a contributor, I...
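For context, here is a minimal sketch of how per-model user code looks in LitServe, playing the role of TorchServe's `handler.py`. The `MyModelAPI` class name and the identity "model" are placeholders, not a real implementation:

```python
import litserve as ls

class MyModelAPI(ls.LitAPI):
    # All per-model logic lives in this class, analogous to handler.py.
    def setup(self, device):
        # Load the model once per worker; `device` is selected by LitServe.
        self.model = lambda x: x  # placeholder model

    def decode_request(self, request):
        # Extract the payload from the incoming JSON request.
        return request["input"]

    def predict(self, x):
        # Run inference on the decoded input.
        return self.model(x)

    def encode_response(self, output):
        # Shape the JSON response returned to the client.
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(MyModelAPI())
    server.run(port=8000)
```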
Hi @yuzhichang, By design, LitServe is kept simple yet performant. Btw, you can easily configure devices, GPUs, and workers while setting up the LitServer (see: [LitServer Devices](https://lightning.ai/docs/litserve/api-reference/litserver#devices)). For multiple endpoints,...
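To illustrate the device/worker configuration mentioned above, a minimal sketch; `MyModelAPI` is assumed to be a `ls.LitAPI` subclass, and the parameter values are arbitrary examples:

```python
import litserve as ls

server = ls.LitServer(
    MyModelAPI(),           # any ls.LitAPI subclass
    accelerator="gpu",      # or "cpu" / "auto"
    devices=2,              # number of GPUs to use
    workers_per_device=2,   # worker processes per device
)
server.run(port=8000)
```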
The docs do seem to mention [limited support](https://docs.pytorch.org/docs/stable/tensors.html#id12), though. cc: @robmarkcole @tchaton
Hi @laclouis5, You can disable the client file generation by passing an argument like this:

```python
server.run(port=8000, generate_client_file=False)
```

However, I agree that it would be a good idea to...
Hi @xinsir6, thanks for the detailed issue! Currently, `StreamingDataLoader` doesn’t support custom batch samplers. As a workaround, you can optimize your data into separate datasets based on the sizes and...
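A minimal sketch of that workaround, assuming two hypothetical size buckets (`small`/`large`), hypothetical file lists, and a placeholder `load_sample` helper; each bucket becomes its own optimized dataset that can then be streamed with a uniform batch size:

```python
from litdata import optimize, StreamingDataset, StreamingDataLoader

# Hypothetical file lists, pre-bucketed by sample size.
small_paths = ["raw/a.bin", "raw/b.bin"]
large_paths = ["raw/c.bin"]

def load_sample(path):
    # Placeholder helper: read and return one sample from disk.
    return {"path": path}

# Optimize each size bucket into its own dataset (run once, offline).
for bucket, paths in {"small": small_paths, "large": large_paths}.items():
    optimize(
        fn=load_sample,
        inputs=paths,
        output_dir=f"data/{bucket}",
        chunk_bytes="64MB",
    )

# At training time, each bucket streams with its own batch size.
small_loader = StreamingDataLoader(StreamingDataset("data/small"), batch_size=32)
large_loader = StreamingDataLoader(StreamingDataset("data/large"), batch_size=8)
```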
### Additional Context: Need for passing CORSMiddleware

A user reported an issue on Discord where he was unable to call the LitServe API from his frontend application due to CORS...
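As a sketch of the fix under discussion, assuming the underlying FastAPI app is reachable as `server.app`; the origin list is a placeholder:

```python
import litserve as ls
from fastapi.middleware.cors import CORSMiddleware

server = ls.LitServer(MyModelAPI())  # any ls.LitAPI subclass

# Attach CORS middleware to the underlying FastAPI app so browser
# frontends served from other origins can call the API.
server.app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://my-frontend.example.com"],  # placeholder origin
    allow_methods=["*"],
    allow_headers=["*"],
)

server.run(port=8000)
```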