Results 52 comments of Alexander Kozlov

> model = OVModelForVisualCausalLM.from_pretrained(model_id, trust_remote_code=True)
>
> processor = VLChatProcessor.from_pretrained(model_id)

To clarify, I am just looking at the code in the PR description and wondering why it could not look...

There was a discussion about this some time ago. Maybe @slyalin remembers the context.

`OVMultiQuantizationConfig` sounds too generic. Since you mentioned that this is for pipeline quantization, I think it makes sense to rename it to `OVPipelineQuantizationConfig` in the future.

I am fine with this reshuffle, but I am a bit concerned about the changes to the import system and backward compatibility. I noticed you had to change imports in the tests anyway. Can...

Can we have a usage example? It could clarify everything around the API.

> I suppose the right place is the GenAI repo? There we will switch to nightly releases soon, so we can perform integration tests E2E.

The idea is to understand the status...

Can we proceed with the merge? The test suite passes on Linux CPU/GPU and Windows CPU. There is a crash when using GPU on Windows, but this is some runtime...

Details in Ref. 154161.

I think the PR is ready. The Windows issue was fixed on the CI side.

Gentle ping to review and merge.