Ling Jing

Results: 6 issues by Ling Jing

Added an 'xpu' device type, which represents Intel GPUs. Using an Intel GPU for training and inference requires configuring oneAPI and IPEX.
oneAPI: https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?operatingsystem=linux&distributions=offline
IPEX: https://github.com/intel/intel-extension-for-pytorch
Example commands for running nerfstudio on...
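As a rough illustration (assuming oneAPI and IPEX are already installed; this is not nerfstudio code), selecting the new 'xpu' device from PyTorch looks roughly like this:

```python
# Minimal sketch: using the Intel GPU ("xpu") device from PyTorch.
# Importing intel_extension_for_pytorch registers the xpu backend.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401

device = torch.device("xpu" if torch.xpu.is_available() else "cpu")

model = torch.nn.Linear(16, 4).to(device).eval()
x = torch.randn(8, 16, device=device)

# ipex.optimize applies IPEX-specific optimizations to the module.
model = ipex.optimize(model)
with torch.no_grad():
    print(model(x).shape, device)
```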

Add zipnerf-pytorch to the documentation as an external method. I've submitted a [PR](https://github.com/SuLvXiangXin/zipnerf-pytorch/pull/98) to the [original zipnerf-pytorch repo](https://github.com/SuLvXiangXin/zipnerf-pytorch) to add nerfstudio support; I'll update the link in the docs once it's merged.

To stay compatible with the original layout of the 360_v2 dataset without reprocessing it, I added an option to colmapDataParser that rounds the image size up when downscaling...
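A toy sketch of what "round up when downscaling" means (hypothetical names, not the actual colmapDataParser code):

```python
# Toy sketch: flooring vs. rounding up when computing the downscaled image size.
import math

def downscaled_size(width: int, height: int, factor: int, round_up: bool = False):
    if round_up:
        # Matches pre-downscaled datasets whose smaller copies were produced
        # by rounding up, so the existing images can be used as-is.
        return math.ceil(width / factor), math.ceil(height / factor)
    return width // factor, height // factor

print(downscaled_size(1237, 823, 4))                 # (309, 205)
print(downscaled_size(1237, 823, 4, round_up=True))  # (310, 206)
```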

# What does this PR do?

Actually, I think it would be better to handle `cache_idx` in `prepare_inputs_for_generation()`. But since many models already implement `--bucket_internal`, I just simplified the implementation and tried...
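A rough sketch of that alternative (not the code in this PR, and deliberately simplified): `cache_idx` could be attached to the inputs dict that `prepare_inputs_for_generation()` returns, so the forward pass receives it without extra branching.

```python
# Simplified sketch of the alternative mentioned above; not the PR's code.
# cache_idx is assumed to be passed in by the caller via model kwargs.
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
    if past_key_values is not None:
        # After the first step only the newest token is fed to the model.
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        "past_key_values": past_key_values,
        "use_cache": kwargs.get("use_cache", True),
        # Forward cache_idx so attention can be limited to the filled bucket.
        "cache_idx": kwargs.get("cache_idx"),
    }
```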

# What does this PR do?

Modified the kv_cache initialization method and optimized performance. Co-author: @atakaha

Test command:
```bash
python run_generation.py --model_name_or_path mosaicml/mpt-7b --use_hpu_graphs --use_kv_cache --limit_hpu_graph --batch_size 128 --max_input_tokens 128...
```
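For context, the general idea behind this kind of kv_cache initialization is to allocate the cache buffers once at their maximum size so later decode steps only write into them. The sketch below is conceptual only (names such as `allocate_kv_cache` are made up, not the optimum-habana MPT code):

```python
# Conceptual sketch: pre-allocating a static KV cache with fixed shapes,
# then writing each step's keys/values in place at cache_idx.
import torch

def allocate_kv_cache(batch_size, num_heads, max_seq_len, head_dim,
                      num_layers, dtype=torch.bfloat16, device="cpu"):
    shape = (batch_size, num_heads, max_seq_len, head_dim)
    return [
        (torch.zeros(shape, dtype=dtype, device=device),   # keys
         torch.zeros(shape, dtype=dtype, device=device))   # values
        for _ in range(num_layers)
    ]

def write_step(kv_cache, layer, key, value, cache_idx):
    """Write this step's K/V at cache_idx in place; buffer shapes never change."""
    k_buf, v_buf = kv_cache[layer]
    k_buf[:, :, cache_idx:cache_idx + key.shape[2]] = key
    v_buf[:, :, cache_idx:cache_idx + value.shape[2]] = value

cache = allocate_kv_cache(batch_size=2, num_heads=8, max_seq_len=256,
                          head_dim=64, num_layers=4)
k = torch.randn(2, 8, 1, 64, dtype=torch.bfloat16)
write_step(cache, layer=0, key=k, value=k, cache_idx=10)
```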

synapse1.17

# What does this PR do?

The existing `bucket_internal` support for the MPT model only handles the first token; this PR adds handling for subsequent tokens. Although throughput has...
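The bucketing idea for the decode steps can be summarized as follows (illustrative only, not the optimum-habana implementation): round the visible cache length up to the nearest bucket so subsequent-token steps reuse a small set of fixed shapes instead of triggering a new HPU graph for every length.

```python
# Rough sketch of bucketed decode lengths.
import math

def bucketed_length(current_len: int, bucket_size: int) -> int:
    return bucket_size * math.ceil(current_len / bucket_size)

# With bucket_size=128, steps 129..256 all attend over a 256-slot window,
# so they can share one compiled graph.
for step in (1, 127, 128, 129, 255, 256, 257):
    print(step, "->", bucketed_length(step, 128))
```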

review wip