ChiYeung Law

39 comments by ChiYeung Law

Could you provide your prompt so that we can test it on our machine? If so, you can send it to [[email protected]](mailto:[email protected]).

Yes, it is fine to use torch>=2.0.0.

We have checked the SFT training set. The HumanEval test set does not leak into it.
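For anyone who wants to run a similar check themselves, a minimal sketch is below. It only looks for exact substring overlap between HumanEval prompts and the SFT data; the file path and field names are placeholders, not our actual data layout, and this is not necessarily the exact check we ran.

```python
import json
from human_eval.data import read_problems  # from the openai/human-eval package

# Load the SFT training set; "sft_data.json" and the "instruction"/"output"
# fields are placeholders for whatever format your data uses.
with open("sft_data.json") as f:
    sft_records = json.load(f)
sft_text = "\n".join(r["instruction"] + r["output"] for r in sft_records)

# Flag any HumanEval prompt that appears verbatim in the SFT data.
leaked = [
    task_id
    for task_id, problem in read_problems().items()
    if problem["prompt"].strip() in sft_text
]
print(f"{len(leaked)} HumanEval prompts found verbatim in the SFT data: {leaked}")
```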

This is an excellent research direction, but we haven't focused much on this topic.

Please follow [human-eval](https://github.com/openai/human-eval/tree/master) to install `human_eval`.
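After installing, the generations need to be written to a JSONL file of completions before scoring. The snippet below follows the human-eval README; `generate_one_completion` is a placeholder for your own model call.

```python
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Placeholder: call WizardCoder (or any model) and return only the completion.
    raise NotImplementedError

problems = read_problems()
samples = [
    dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
# Then score with: evaluate_functional_correctness samples.jsonl
```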

WizardCoder is based on StarCoder. The maximum sequence length is 8192 tokens (input + output).
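In practice this means the prompt and the generated tokens have to share the 8192-token budget. A rough sketch with the `transformers` API (the checkpoint name and prompt are just examples) is:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "WizardLM/WizardCoder-15B-V1.0"  # example checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Input and output share the 8192-token window, so cap the new tokens accordingly.
max_context = 8192
max_new = max_context - inputs["input_ids"].shape[1]
outputs = model.generate(**inputs, max_new_tokens=max_new)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```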

Maybe you can try some hierarchical methods (a rough sketch follows below):

1. Review each file -> summary of each file
2. Combine the summaries to get the final review

Or you can try some retrieval...
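A minimal sketch of the hierarchical idea, where `generate(prompt)` is a placeholder for however you call the model (the inference script, an API, etc.):

```python
def generate(prompt: str) -> str:
    # Placeholder: call your code model and return its text output.
    raise NotImplementedError

def review_repo(files: dict[str, str]) -> str:
    """Hierarchical review: summarize each file, then combine the summaries."""
    # Step 1: review each file -> per-file summary.
    summaries = {
        path: generate(f"Review the following file and summarize its issues:\n\n{code}")
        for path, code in files.items()
    }
    # Step 2: combine the per-file summaries into a final review.
    combined = "\n\n".join(f"# {path}\n{summary}" for path, summary in summaries.items())
    return generate(f"Combine these per-file reviews into one overall review:\n\n{combined}")
```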

```
CUDA_VISIBLE_DEVICES=6,7 python src/inference_wizardcoder.py \
    --base_model "WizardCoder-15B-V1.0" \
    --input_data_path "data.jsonl" \
    --output_data_path "result.jsonl"
```

This works fine on our machine. Which version of transformers do you use?

```
torch==2.0.1
transformers==4.29.2
python==3.10
cuda==11.4
2x V100 32GiB
```