Abhilash Majumder

Results: 32 issues by Abhilash Majumder

This PR has a minor modification to remove the error caused by TensorFlow version (>=2) incompatibility in optimizer.py. Due to the deprecation of 'tf.train.Optimizer', an improvement is made which allows the...
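A minimal sketch of the kind of version guard this snippet describes, assuming the original code subclassed `tf.train.Optimizer` (the subclass name here is purely illustrative):

```python
# Minimal sketch: pick an optimizer base class that exists under both TF1 and TF2.
# Assumption: the original code subclassed tf.train.Optimizer, which is gone in TF >= 2,
# while the compat.v1 alias of the same class is still available.
import tensorflow as tf

if hasattr(tf, "train") and hasattr(tf.train, "Optimizer"):
    # TensorFlow 1.x: legacy base class is still present.
    _OptimizerBase = tf.train.Optimizer
else:
    # TensorFlow 2.x: fall back to the compat.v1 alias.
    _OptimizerBase = tf.compat.v1.train.Optimizer


class MyOptimizer(_OptimizerBase):  # hypothetical subclass for illustration
    """Custom optimizer base that imports cleanly under both TF1 and TF2."""
    pass
```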

Changes addressed: - In DNC, util_test.py, assertRaisesIncompatibleShapesError is only present for TF>=2.5. Added conditions so that TF
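A hedged sketch of the kind of conditional this snippet describes: use `assertRaisesIncompatibleShapesError` only when the installed TensorFlow provides it (per the note above, TF >= 2.5), and fall back to a generic assertion otherwise. The test class and tensors are illustrative, not the actual DNC test.

```python
# Illustrative version guard inside a tf.test.TestCase-based test.
import tensorflow as tf
from packaging import version


class UtilTest(tf.test.TestCase):  # hypothetical test class for illustration

    def test_shape_mismatch(self):
        a = tf.ones([2, 3])
        b = tf.ones([4, 5])
        if version.parse(tf.__version__) >= version.parse("2.5"):
            # Newer TF exposes a dedicated assertion for shape errors.
            with self.assertRaisesIncompatibleShapesError():
                _ = a + b
        else:
            # Older TF: fall back to a generic assertion on the raised error.
            with self.assertRaises((tf.errors.InvalidArgumentError, ValueError)):
                _ = a + b
```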


The script at "[test_weight_prepack.py](https://github.com/intel/intel-extension-for-pytorch/blob/master/tests/cpu/test_weight_prepack.py)" has some errors, as follows: - ipex.optimize has no attribute 'sample_input': `ipex.optimize(origin_model1, dtype=dtype, optimizer=origin_optimizer1, level='O1', sample_input=x)` - Issues with 3D torch tensor for NWC...
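A hedged sketch of the workaround implied above: only pass `sample_input` when the installed `ipex.optimize` actually accepts it. The model, optimizer, and input here are placeholders, not the ones from the test script.

```python
# Illustrative guard around the sample_input keyword of ipex.optimize.
import inspect
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(4, 8)

kwargs = dict(dtype=torch.float32, optimizer=optimizer, level="O1")
# Only pass sample_input when the installed ipex.optimize supports it.
if "sample_input" in inspect.signature(ipex.optimize).parameters:
    kwargs["sample_input"] = x
model, optimizer = ipex.optimize(model, **kwargs)
```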

Motivation: This PR highlights the correct attention output/dense GEMM on AutoTP for 2 models - 1. [FunnelTransformer](https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/funnel/modeling_funnel.py) - `post_proj` 2. [TransformerXL](https://github.com/huggingface/transformers/blob/main/src/transformers/models/transfo_xl/modeling_transfo_xl.py) - `o_net` The configuration for FunnelTransformer: ``` FunnelModel(...
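For context, a hedged sketch of how an attention output projection such as `post_proj` could be mapped for tensor parallelism via DeepSpeed's `injection_policy`. The attention class name is taken from the linked modeling file; using it as a policy key and the chosen model id/dtype are assumptions for illustration, not the PR's actual change.

```python
# Illustrative tensor-parallel injection for the attention output projection
# discussed above (post_proj for FunnelTransformer). This is a sketch, not the PR's code.
import torch
import deepspeed
from transformers import AutoModel
from transformers.models.funnel.modeling_funnel import FunnelRelMultiheadAttention

model = AutoModel.from_pretrained("funnel-transformer/small")
ds_model = deepspeed.init_inference(
    model,
    mp_size=2,                      # number of tensor-parallel ranks (illustrative)
    dtype=torch.float32,
    injection_policy={FunnelRelMultiheadAttention: ("post_proj",)},
)
```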

Motivation for This PR: In [engine.py](https://github.com/microsoft/DeepSpeed/blob/5c6da1f001f936234a31a238e71ca386e34eb51a/deepspeed/runtime/engine.py#L1408) there is a dependency on contiguous_gradients for MoE in Stage 1, which would imply that even with "contiguous_gradients" enabled, Stage 1 would still default to...
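For reference, a minimal sketch of where `contiguous_gradients` sits in a ZeRO Stage 1 config; the values are illustrative, and the MoE interaction described above lives in the linked engine.py code, not in this sketch.

```python
# Minimal DeepSpeed config sketch: ZeRO Stage 1 with contiguous_gradients enabled.
# Only the keys relevant to the discussion are shown; values are placeholders.
ds_config = {
    "train_batch_size": 16,
    "zero_optimization": {
        "stage": 1,
        "contiguous_gradients": True,   # the flag whose Stage-1/MoE handling is discussed above
        "reduce_bucket_size": 5e8,
    },
}
```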

Motivation: Fix for the reproducible issue #3837 on CPU. On CPUs, direct invocation of torch.cpu.tensor leads to a dtype mismatch. Another way would be to have something like: ["torch.DoubleTensor" if device_type...
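A hedged sketch of the device-conditional construction hinted at by the truncated snippet above; the exact expression in the PR is elided, so the helper name, the `device_type` parameter, and the float64-on-CPU choice here are assumptions for illustration only.

```python
# Illustrative device-conditional tensor construction to avoid a dtype mismatch on CPU.
import torch

def make_tensor(data, device_type="cpu"):
    # Assumption: use double precision on CPU, single precision elsewhere.
    dtype = torch.float64 if device_type == "cpu" else torch.float32
    return torch.tensor(data, dtype=dtype, device=device_type)

t = make_tensor([1.0, 2.0, 3.0], device_type="cpu")
```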

Thanks for creating the project. This is an initial effort to upstream SYCL kernel support for our Intel GPUs (C++ compiler).

Motivation: Thanks for creating this repository. There is an ongoing effort planned to collaborate from the Intel GPU side to enable out-of-the-box runtime functionality of Code Llama on...


From the thread https://github.com/ggerganov/llama.cpp/issues/2555: initial support for the AMX bf16 ISA in the build. @ggerganov, could you take a look? Thanks

As part of the plan to support an extensive ecosystem spanning Huggingface and the LoRA/LLaMA families, this addition would enable this great framework to run seamlessly on our GPU cards. The discussion started...
