
Default quantization - True or False in SparseGPT

Open sriyachakravarthy opened this issue 1 year ago • 6 comments

Hi! In the recipe, if I do not want to quantize and only want to perform structured pruning, is it OK to set quantize: false as below and not provide a QuantizationModifier in the recipe?

SparseGPTModifier:
  sparsity: 0.5
  block_size: 128
  sequential_update: true
  quantize: false
  percdamp: 0.01
  mask_structure: "16:32"
  targets: ["re:model.layers.\\d+$"]
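For context, mask_structure: "16:32" requests semi-structured N:M sparsity: in every contiguous block of 32 weights, 16 are set to zero. A minimal pure-Python sketch of that masking rule (simple magnitude pruning for illustration only, not SparseGPT's actual Hessian-based weight selection):

```python
def nm_prune(weights, n=16, m=32):
    """Zero the n smallest-magnitude weights in each contiguous block of m."""
    assert len(weights) % m == 0, "length must be a multiple of the block size"
    out = list(weights)
    for start in range(0, len(out), m):
        block = range(start, start + m)
        # pick the n entries with the smallest magnitude in this block
        drop = sorted(block, key=lambda i: abs(out[i]))[:n]
        for i in drop:
            out[i] = 0.0
    return out

w = [float(i + 1) for i in range(64)]   # toy "weights" 1.0 .. 64.0
pruned = nm_prune(w)
print(sum(v == 0.0 for v in pruned))    # 32: half of every 32-weight block is zeroed
```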

sriyachakravarthy avatar Oct 03 '24 10:10 sriyachakravarthy

Hi @sriyachakravarthy,

Thank you for reaching out and opening an issue on SparseML!

The SparseGPTModifier no longer accepts a quantize argument, so you can safely remove that line from your recipe. Since no QuantizationModifier is present, your model will remain unquantized, and the pruning behavior is unaffected.
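For reference, the same recipe with the quantize line dropped (all other hyperparameters unchanged from your snippet):

SparseGPTModifier:
  sparsity: 0.5
  block_size: 128
  sequential_update: true
  percdamp: 0.01
  mask_structure: "16:32"
  targets: ["re:model.layers.\\d+$"]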

Additionally, I’d recommend considering our latest framework, LLMCompressor, which offers enhanced capabilities for model compression. If you're open to using it, the recipe would look slightly different:

oneshot_stage:
  pruning_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      block_size: 128
      sequential_update: true
      percdamp: 0.01
      mask_structure: "16:32"
      targets: ["re:model.layers.\\d+$"]

rahul-tuli avatar Oct 03 '24 11:10 rahul-tuli

Thank you, @rahul-tuli, will try!

sriyachakravarthy avatar Oct 03 '24 12:10 sriyachakravarthy

Also, will the llm-compressor run on an AMD machine?

sriyachakravarthy avatar Oct 03 '24 12:10 sriyachakravarthy

Hi @sriyachakravarthy, I'd like to clarify a bit more about this. Our LLM Compressor flows currently target vLLM and our GPU compression pathways, and are specifically for Transformers models. SparseML is still used to create compressed ONNX models that can run in DeepSparse and ONNX Runtime for NLP, NLG, and CV models.

For AMD, SparseML will work for AMD CPUs, and LLM Compressor will work for AMD GPUs.

Hope this helps!

markurtz avatar Oct 03 '24 13:10 markurtz

Yes, Thanks!!

sriyachakravarthy avatar Oct 03 '24 14:10 sriyachakravarthy

Hi! I do not see a model size reduction after pruning using the llm-compressor framework. Kindly help.
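(For anyone hitting the same question: semi-structured pruning zeroes weights in place but keeps the tensors dense, so a checkpoint saved in a dense format serializes to the same number of bytes. The on-disk size only shrinks when the model is stored in a sparse/compressed format or quantized. A tiny framework-independent illustration:)

```python
import struct

# Pruning sets values to zero but keeps the dense tensor shape,
# so serializing the dense weights takes the same number of bytes.
dense  = [0.13, -0.70, 0.02, 0.90]
pruned = [0.00, -0.70, 0.00, 0.90]   # 50% sparsity, same shape

print(len(struct.pack("4f", *dense)))   # 16 bytes
print(len(struct.pack("4f", *pruned)))  # 16 bytes: no size reduction
```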

sriyachakravarthy avatar Oct 04 '24 19:10 sriyachakravarthy

Per the main README announcement, SparseML is being deprecated as of June 2, 2025. Closing the issue, and thank you for the input and support!

jeanniefinks avatar May 09 '25 17:05 jeanniefinks