6 comments by Switchsyj

Just be sure you use Python 3.7. Higher versions may not be able to find dgl 0.4.0.

My script `run_codegen.sh` is listed above, and here is the entire process:

Since the max_model_length of the base model is 16K, I think that is possibly the reason you mentioned. So, can I limit the number of tokens generated with the HF backend...

Okay, I checked it a bit. `max_new_tokens` is indeed passed into `model.generate`. However, I think it should also be accepted by `HuggingFaceDecoder` (defined in `__init__.py`), where `self.max_new_tokens` is set.
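A hypothetical sketch of the change being suggested: let the decoder class receive `max_new_tokens` in its constructor and forward it to the underlying generate call, rather than fixing it internally. The names `HuggingFaceDecoder` and `self.max_new_tokens` mirror the discussion above, but the real evalplus code may differ:

```python
class HuggingFaceDecoder:
    """Sketch only: a decoder that accepts max_new_tokens at construction."""

    def __init__(self, model, max_new_tokens=512):
        self.model = model
        # Received here (not hardcoded), so callers can cap generation length.
        self.max_new_tokens = max_new_tokens

    def generate(self, inputs):
        # Forwarded to the underlying HF-style generate call.
        return self.model.generate(inputs, max_new_tokens=self.max_new_tokens)
```

With this shape, the cap flows from the constructor all the way into `model.generate`, which is the plumbing the comment says is missing.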

Hi, ganler, thanks for your detailed comment : ). I have worked around the above problem by hardcoding `max_new_tokens=512` to speed things up (as mentioned in [this issue](https://github.com/evalplus/evalplus/issues/39)); however, I only get 0.207...

Sorry, I would like to find out why my pass@1 is lower than the leaderboard reports. So the question is: how can I get the error log to debug...