Aaron Pham


[TensorRT docs](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html) · [onnxruntime Dockerfile](https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/Dockerfile.tensorrt)

We will consider this integration after BentoML 1.0.

cc @ssheng @sauyon @parano

> On CLI, should we use the `--grpc` option or `serve-grpc` sub-command? Using `--grpc` makes sense if most of the other options are also applicable to...

> I'm in favor of having a separate gRPC port configuration value.
>
> I think `--grpc` is probably fine here; most of the options are shared, right?

Well...

Then we should also merge `bentoml.torchscript` into `bentoml.pytorch.save_model(..., save_as_torchscript=True)`, since the runner implementations are also the same.
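A minimal sketch of what that consolidation could look like; `save_as_torchscript` is the hypothetical flag proposed above, not an existing parameter:

```python
import bentoml
import torch


class Adder(torch.nn.Module):
    """Toy module used only to illustrate the two save paths."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1


model = Adder()

# Today: TorchScript models go through a dedicated framework module.
scripted = torch.jit.script(model)
bentoml.torchscript.save_model("adder_ts", scripted)

# Proposed: a single entry point under bentoml.pytorch, where the
# hypothetical `save_as_torchscript` flag would script the module
# before saving it to the model store.
bentoml.pytorch.save_model("adder", model, save_as_torchscript=True)
```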

I think this is pretty low priority for triage.

I wonder what the advantages of this are compared to statsmodels? It looks to me like a more powerful statsmodels. I would like to learn more about your use case with darts.

Hi @sotte, sorry for the late response. We are currently working on some internal design changes and will get back ASAP on how you can contribute.

I will close this for now, as it is lower priority on the list.

I will revisit this once we have an environment manager.