stable-diffusion-webui
MPS backend out of memory
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
macOS. The web UI at http://127.0.0.1:7860/ loads fine, but generating an image fails with this error: RuntimeError: MPS backend out of memory (MPS allocated: 5.05 GB, other allocations: 2.29 GB, max allowed: 6.77 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Steps to reproduce the problem
Install MPS
What should have happened?
RuntimeError: MPS backend out of memory (MPS allocated: 5.05 GB, other allocations: 2.29 GB, max allowed: 6.77 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Commit where the problem happens
python: 3.10.10 • torch: 1.12.1 • xformers: N/A • gradio: 3.16.2 • commit: 0cc0ee1b • checkpoint: bf864f41d5
What platforms do you use to access the UI ?
MacOS
What browsers do you use to access the UI ?
Apple Safari
Command Line Arguments
NO
List of extensions
NO
Console logs
RuntimeError: MPS backend out of memory (MPS allocated: 5.05 GB, other allocations: 2.29 GB, max allowed: 6.77 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Additional information
No response
The 8 GB Macs don't have enough memory for MPS acceleration, and PyTorch 2.0 / MPS only works on macOS 13+.
Intel Core i7, 16 GB
I have also experienced this runtime error while running the open-source version of Whisper on a 2019 MacBook built on an Intel i9 8-core CPU with 16 GB RAM and an AMD Radeon Pro 5500M.
I had previously been running a decoder simulation that runs perfectly on Google Colab, which is when the error we've both experienced first appeared. Reducing batch sizes massively made no difference, and the error then started appearing in Whisper runs on audio files of negligible size. So I concluded that it wasn't really a memory error at all, whatever the error message may say.
However, I extracted the Whisper code to another Jupyter notebook and it ran perfectly on the GPU using the latest releases from Apple and PyTorch on macOS Ventura 13.3 (with 13.0, as @elisezhu123 says, the minimum requirement). So the problem has "gone away" rather than been solved, but I'd suggest rerunning your code in a clean notebook as a first step. The suggested "fix" with the environment variable is dangerous, and probably unnecessary, but if you do use it, I'd try a value other than 0.0; I think the default is 0.7, i.e. the GPU can use 70% of memory, so maybe raise it a bit. But I really don't think memory is the problem; there's a "glitch" somewhere that changing notebooks fixes. Obviously very happy to be corrected on this if I am mistaken.
So I can only switch to another computer, right?
No - a misunderstanding of "notebook". I meant that moving the code to another Jupyter (Anaconda3) notebook (not another physical Mac notebook) sorted the problem out for me, but since writing that it has come back again, so I am not sure that what I did solved it at all. There are suggestions elsewhere that there may be an issue with macOS Ventura 13.3, but I am not in a position to explore that.
It is just a bug in 13.3… 13.2 works.
Excuse me, could you please tell me how to activate MPS mode? I don't quite understand this.
On a Mac, CUDA doesn't work because there is no dedicated NVIDIA GPU, so we have to install a specific build of PyTorch to use the Metal Performance Shaders (MPS) backend. This page from Apple explains it best.
After installing that build of PyTorch, you should be able to simply call the MPS backend. Personally, I use this line of code: device = torch.device('mps'). You can check by evaluating device; if it gives you back 'mps', you are good to go.
Hope this helps.
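A minimal sketch of that check, assuming a PyTorch build with MPS support (1.12 or later), with a CPU fallback:

import torch

# Prefer MPS when the build and hardware support it; otherwise fall back to CPU.
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
print(device)  # prints 'mps' on a supported Mac, 'cpu' otherwise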
Same problem here, any solution? Running from transformers import Blip2Processor, Blip2ForConditionalGeneration and import torch for Salesforce/blip2-opt-2.7b, on a 2019 MacBook with 16 GB RAM, an i9 and the Radeon.
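In case it helps anyone hitting this with BLIP-2, a minimal sketch of that setup on MPS; the model name comes from the comment above, and loading in float16 is my assumption to roughly halve the memory footprint:

import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

processor = Blip2Processor.from_pretrained('Salesforce/blip2-opt-2.7b')
# float16 is an assumption: the 2.7B model in float32 is unlikely to fit
# alongside other allocations within the MPS memory limit on a 16 GB Mac.
model = Blip2ForConditionalGeneration.from_pretrained(
    'Salesforce/blip2-opt-2.7b', torch_dtype=torch.float16
).to(device)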
I'm experiencing this with the latest commit of automatic and PyTorch v2 on my M1 8 GB running on macOS Ventura 13.3.1 (a).
Stack trace:
Traceback (most recent call last):
File "/Users/honza/Projects/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/honza/Projects/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/modules/img2img.py", line 181, in img2img
processed = process_images(p)
File "/Users/honza/Projects/stable-diffusion-webui/modules/processing.py", line 515, in process_images
res = process_images_inner(p)
File "/Users/honza/Projects/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/modules/processing.py", line 604, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "/Users/honza/Projects/stable-diffusion-webui/modules/processing.py", line 1084, in init
self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
File "/Users/honza/Projects/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/Users/honza/Projects/stable-diffusion-webui/modules/sd_hijack_utils.py", line 26, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/modules/sd_hijack_unet.py", line 76, in <lambda>
first_stage_sub = lambda orig_func, self, x, **kwargs: orig_func(self, x.to(devices.dtype_vae), **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 830, in encode_first_stage
return self.first_stage_model.encode(x)
File "/Users/honza/Projects/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 83, in encode
h = self.encoder(x)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 526, in forward
h = self.down[i_level].block[i_block](hs[-1], temb)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 131, in forward
h = self.norm1(h)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 273, in forward
return F.group_norm(
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2956, in native_group_norm
out, mean, rstd = _normalize(input_reshaped, reduction_dims, eps)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2914, in _normalize
biased_var, mean = torch.var_mean(
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2419, in var_mean
m = mean(a, dim, keepdim)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2373, in mean
result = true_divide(result, nelem)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 220, in _fn
result = fn(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 130, in _fn
result = fn(**bound.arguments)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 926, in _ref
return prim(a, b)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 1619, in true_divide
return prims.div(a, b)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 278, in _prim_impl
meta(*args, **kwargs)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 400, in _elementwise_meta
return TensorMeta(device=device, shape=shape, strides=strides, dtype=dtype)
File "/Users/honza/Projects/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 256, in TensorMeta
return torch.empty_strided(shape, strides, dtype=dtype, device=device)
RuntimeError: MPS backend out of memory (MPS allocated: 4.13 GB, other allocations: 5.24 GB, max allowed: 9.07 GB). Tried to allocate 512 bytes on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
While normal image generation works, this often occurs when I try to use ControlNet, but not always; I couldn't really figure out what the differentiator is. I have almost all other apps closed to leave maximum RAM free.
What are my options to avoid this? I've noticed @brkirch is posting to discussions about Apple performance and has a fork at https://github.com/brkirch/stable-diffusion-webui/ that is 14 commits ahead. Is this something that could speed up my poor performance or solve the "MPS backend out of memory" problem? Will it ever be merged upstream? 🤔
I also keep having this issue if I scale the images on my M1 8 GB Mac mini.
Is there any way to work around the issue? Would the recommended solution from the error help, and how do I apply it?
Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit
This seems to help, at least in my case:
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --precision full --no-half
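If you run Python scripts directly rather than through webui.sh, the same knob can be set from inside the script; a sketch, assuming it runs before torch touches MPS:

import os

# Set before importing torch so the MPS allocator picks it up.
os.environ['PYTORCH_MPS_HIGH_WATERMARK_RATIO'] = '0.7'

import torch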
Where do you put that line of code? device = torch.device('mps')
I recommend reading the very good documentation on the PyTorch website, which has examples showing how to use the MPS device and how to load data onto it: https://pytorch.org/docs/stable/notes/mps.html
I think the latest automatic release with PyTorch 2 already does this for you?
I am not sure what you mean. PyTorch 2 has MPS support through torch.mps, and the PyTorch nightly (now at 2.1.0.dev20230512) also has it, but unless I have missed something, the MPS device must still be deliberately invoked, because some hardware systems don't have it. Please let me know if I am mistaken!
So the line of code device = torch.device('mps') merely initializes the device as MPS instead of the default CPU. If we don't run this line, PyTorch will just place its data and parameters on the CPU. The line can be run anywhere in the code, but whether in a Jupyter notebook or a Python script, I recommend making sure it runs at the very top, where you import all your necessary libraries.
Without this line run first, when you move your model and data to the device with .to(device=device), that data won't be placed on MPS.
If you are new to PyTorch and to using MPS on a Mac, I encourage you to read about loading data onto MPS here. It is important to know how to load data and model parameters onto devices if you wish to run large models quickly; without that, it would probably take you hours or even days to run just one epoch.
Hope this helps!
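To make that pattern concrete, a minimal sketch (the model and tensor are just placeholders):

import torch
import torch.nn as nn

# Pick the device once, at the top, next to the imports.
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

model = nn.Linear(16, 4).to(device)    # parameters now live on MPS
x = torch.randn(8, 16, device=device)  # data created directly on MPS
print(model(x).device)                 # mps:0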
What about this?
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/b08500cec8a791ef20082628b49b17df833f5dda/modules/devices.py#LL38C21
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --no-half (without --precision full) works perfectly for me. Since I added PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 I haven't encountered the bug, and the 4 performance cores of my MacBook M1 are used much more than before.
Total noob here. Trying to use Stable Diffusion with the Deforum extension. Where exactly do I put the PYTORCH_MPS_HIGH_WATERMARK setting?
In the terminal, type: cd ~/stable-diffusion-webui; PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --no-half
Lifesaver. Thank you. It works now.
tyvm sir, this works, but it is painfully slow: 2-3 hours to upscale an image 2x from 640x950. Is there any way to speed this up? Which highres. fix settings should I adjust?
Have you tried all the Apple optimisation suggestions at https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon? In the last paragraph there are specific suggestions about timing and how to improve it.
I see what you mean. I was misunderstanding you to be suggesting that PyTorch 2 automatically selects the MPS device, which I don't think it does. Sorry for the confusion!
@pudepiedj no problem!
Regarding the settings, you can put the environment variables in your webui-user.sh as well. This is how mine looks right now:
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################
# Install directory without trailing slash
#install_dir="/home/$(whoami)"
# Name of the subdirectory
#clone_dir="stable-diffusion-webui"
# PyTorch settings
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7
export PYTORCH_ENABLE_MPS_FALLBACK=1
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --no-half --opt-sub-quad-attention --use-cpu interrogate"
# python3 executable
#python_cmd="python3"
... file continues unchanged ...
Then all you need to run your web UI is a plain ./webui.sh; everything gets applied automatically.
Does this in fact implement and use the MPS device? I've been investigating over the weekend using the Activity Monitor "GPU History" display and I don't think my GPU is being used at all; stable-diffusion is just running on the CPU. This of course may explain why I am not getting the "MPS Backend Out of Memory" error, too! :)
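For a quick sanity check from Python (it won't prove the web UI is using the GPU, but it does rule out a CPU-only install):

import torch
print(torch.backends.mps.is_built())      # was PyTorch compiled with MPS support?
print(torch.backends.mps.is_available())  # does the OS/hardware expose the device?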
Not sure, exactly. I'm just cargo-culting the command-line options based on whatever I read around the discussions and issues.