AdvancedLiterateMachinery
Is there a quantized model for VGT/DocLayNet?
The VGT models are significantly larger than the DocXLayout models: roughly 1 GB vs. 75 MB.
Could the VGT models be quantized? If so, how do they perform post-quantization?
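For context, here is a hedged sketch of what I have in mind: PyTorch's dynamic INT8 quantization, which rewrites `Linear` layers to store int8 weights and typically shrinks transformer-style checkpoints roughly 4x. The `nn.Sequential` model below is only a toy stand-in, not the actual VGT architecture, and whether VGT's accuracy holds up after this is exactly the open question.

```python
import io

import torch
import torch.nn as nn


def quantize_dynamic_int8(model: nn.Module) -> nn.Module:
    """Quantize Linear layers to int8 weights; activations stay float."""
    return torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )


def serialized_size(model: nn.Module) -> int:
    """Size in bytes of the serialized state_dict."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.tell()


# Toy stand-in to illustrate the size effect; VGT itself is assumed here,
# not loaded. A real attempt would quantize the released VGT checkpoint.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))
qmodel = quantize_dynamic_int8(model)

print(f"fp32:  {serialized_size(model)} bytes")
print(f"int8:  {serialized_size(qmodel)} bytes")
```

If this kind of post-training quantization degrades layout-detection quality too much, quantization-aware training would be the fallback, but that needs the training pipeline.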