
Any plans on using/testing with the latest DeepSeek distilled models with thinking capabilities?

Open · Greatz08 opened this issue 10 months ago · 1 comment

Pretty interesting project. I was wondering whether there are plans to test it with DeepSeek's latest distilled models, which are much more capable; many people are fine-tuning them on different datasets and achieving better results.

Greatz08 · Feb 10 '25 15:02

Thank you for your interest in our project!

Using deepseek-distill checkpoints: Absolutely. We plan to use these checkpoints as starting points for fine-tuning our decompilation model. The models we currently use (llm4decompile 1.3B, 6.7B, and 33B) are based on the 2023 release of deepseek-coder, and incorporating the latest models is definitely on our roadmap.

Regarding the distillation technique: No. Our evaluations with larger models, for example GPT-4, have shown limited effectiveness on decompilation tasks. As highlighted in Table 1 of our paper, GPT-4o achieves only a 16% pass@1 rate, whereas our LLM4Decompile-1.3B model reaches a 27% success rate on the HumanEval-Decompile benchmark (see https://aclanthology.org/2024.emnlp-main.203.pdf). We therefore do not currently consider distillation for this particular application.
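(For anyone unfamiliar with the pass@1 numbers above: pass@k is the standard unbiased estimator from the HumanEval/Codex evaluation setup, computed per task from n sampled outputs of which c pass the tests, then averaged over tasks. The sketch below is illustrative only; the sample counts are made up and this is not the project's actual evaluation harness.)

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n candidates, of which c
    are correct, passes. For k=1 this reduces to c/n."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical results: (n samples, c passing) per decompilation task.
# In a decompilation benchmark, "passing" means the decompiled source
# recompiles and the binary passes the task's test cases.
results = [(10, 3), (10, 0), (10, 10)]
score = sum(pass_at_k(n, c, 1) for n, c in results) / len(results)
print(f"pass@1 = {score:.4f}")  # mean of 0.3, 0.0, 1.0
```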

albertan017 · Feb 11 '25 02:02