
NameError: name 'autotune' is not defined

Open nlp-enthusiast opened this issue 1 year ago • 12 comments

The problem still occurs even after installing autotune-0.0.3 via pip.

nlp-enthusiast avatar Apr 24 '23 15:04 nlp-enthusiast

On my end I solved it by copying the custom_autotune.py file from the models directory into ~/.cache/huggingface/modules/transformers_modules/local/.

cnsky2016 avatar Apr 25 '23 01:04 cnsky2016

On my end I solved it by copying the custom_autotune.py file from the models directory into ~/.cache/huggingface/modules/transformers_modules/local/.

That fixed the autotune problem, but now this one appears:

File "/home/.cache/huggingface/modules/transformers_modules/local/custom_autotune.py", line 93, in run
    self.cache[key] = builtins.min(timings, key=timings.get)
TypeError: '<' not supported between instances of 'tuple' and 'float'

nlp-enthusiast avatar Apr 25 '23 05:04 nlp-enthusiast
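For context, the TypeError happens because the timings dict ends up holding mixed value types: a successful benchmark stores a tuple of timing percentiles, while a failed run stores float('inf'), and Python cannot compare a tuple with a float. A minimal reproduction (the dict contents here are illustrative, not the real data from custom_autotune.py):

```python
import builtins

# Illustrative stand-in for the timings dict in custom_autotune.run():
# a failed benchmark stores float('inf'), a successful one stores a
# tuple of timing percentiles (the values below are made up)
timings = {"config_b": float("inf"), "config_a": (0.12, 0.11, 0.13)}

try:
    builtins.min(timings, key=timings.get)  # compares a tuple with a float
except TypeError as e:
    print(e)  # '<' not supported between instances of 'tuple' and 'float'
```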

On my end I solved it by copying the custom_autotune.py file from the models directory into ~/.cache/huggingface/modules/transformers_modules/local/.

That fixed the autotune problem, but now this one appears:

File "/home/.cache/huggingface/modules/transformers_modules/local/custom_autotune.py", line 93, in run
    self.cache[key] = builtins.min(timings, key=timings.get)
TypeError: '<' not supported between instances of 'tuple' and 'float'

Solved! Some of the values in timings are inf; after deleting the items containing inf, it runs normally.

nlp-enthusiast avatar Apr 25 '23 06:04 nlp-enthusiast
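The fix described above (dropping the inf entries before taking the min) can be sketched like this; timings is again an illustrative stand-in, not the actual variable in custom_autotune.py:

```python
import builtins
import math

# Illustrative timings dict: failed runs yield float('inf'),
# successful runs yield tuples of timing percentiles
timings = {
    "config_a": (0.12, 0.11, 0.13),
    "config_b": float("inf"),
    "config_c": (0.09, 0.08, 0.10),
}

# Keep only entries whose value is not an inf float, so min()
# never has to compare a tuple against a float
filtered = {k: v for k, v in timings.items()
            if not (isinstance(v, float) and math.isinf(v))}

best = builtins.min(filtered, key=filtered.get)
print(best)  # config_c
```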

Hey, how exactly should the code be changed to get it running? I ran into this problem too.

ColorfulDick avatar Apr 25 '23 08:04 ColorfulDick

Hey, how exactly should the code be changed to get it running? I ran into this problem too.

Change the failing code to the following (bench each config once, store the result directly, and take the min over temp):

temp = {}
for config in pruned_configs:
    timing = self._bench(*args, config=config, **kwargs)
    if isinstance(timing, float):  # failed runs return float('inf')
        continue
    temp[config] = timing
bench_end = time.time()
self.bench_time = bench_end - bench_start
self.cache[key] = builtins.min(temp, key=temp.get)

nlp-enthusiast avatar Apr 25 '23 15:04 nlp-enthusiast

You can also try clearing the local Hugging Face cache and re-downloading from the Hugging Face Hub. The model files have been updated; with the latest files, running the quantized model by following the steps in the README should work.

xiami2019 avatar Apr 26 '23 01:04 xiami2019

On my machine there is no local folder under ~/.cache/huggingface/modules/transformers_modules. Am I supposed to create local myself?

sun1092469590 avatar Apr 26 '23 09:04 sun1092469590

https://github.com/linonetwo/MOSS-DockerFile

I solved all of these problems in the Dockerfile; related notes: https://onetwo.ren/wiki/#调研GPU上运行的语言模型

linonetwo avatar Apr 26 '23 09:04 linonetwo

Hey, how exactly should the code be changed to get it running? I ran into this problem too.

Change the failing code to the following (bench each config once, store the result directly, and take the min over temp):

temp = {}
for config in pruned_configs:
    timing = self._bench(*args, config=config, **kwargs)
    if isinstance(timing, float):  # failed runs return float('inf')
        continue
    temp[config] = timing
bench_end = time.time()
self.bench_time = bench_end - bench_start
self.cache[key] = builtins.min(temp, key=temp.get)

I can't make out the indentation here. Could you paste the source code? Thanks a lot.

66li avatar May 05 '23 08:05 66li

Hey, how exactly should the code be changed to get it running? I ran into this problem too.

Change the failing code to the following (bench each config once, store the result directly, and take the min over temp):

temp = {}
for config in pruned_configs:
    timing = self._bench(*args, config=config, **kwargs)
    if isinstance(timing, float):  # failed runs return float('inf')
        continue
    temp[config] = timing
bench_end = time.time()
self.bench_time = bench_end - bench_start
self.cache[key] = builtins.min(temp, key=temp.get)

I can't make out the indentation here. Could you paste the source code? Thanks a lot.

[image: screenshot of the corrected code]

nlp-enthusiast avatar May 05 '23 08:05 nlp-enthusiast

Yes, this solved it for me too.

zls130921 avatar Jun 16 '23 03:06 zls130921

Hey, how exactly should the code be changed to get it running? I ran into this problem too.

Change the failing code to the following (bench each config once, store the result directly, and take the min over temp):

temp = {}
for config in pruned_configs:
    timing = self._bench(*args, config=config, **kwargs)
    if isinstance(timing, float):  # failed runs return float('inf')
        continue
    temp[config] = timing
bench_end = time.time()
self.bench_time = bench_end - bench_start
self.cache[key] = builtins.min(temp, key=temp.get)

I can't make out the indentation here. Could you paste the source code? Thanks a lot.

[image: screenshot of the corrected code]

@txsun1997 Could the maintainers fix this issue? A lot of people are running into it. Thanks!

nezhazheng avatar Jun 19 '23 07:06 nezhazheng