Shun Lu

Results: 24 comments by Shun Lu

@Pcyslist Thanks for pointing out this error. I noticed that a transfer-learning procedure was missing in my training, and I will repeat this experiment. In summary, I should first...

> I met the same problem. Have you solved this problem? I would really appreciate if anyone could give me some help.
> ```python
> Traceback (most recent call last):
>   File "train_supernet_spos.py",...
> ```

I met the same error when loading the pre-trained LLaMA-2-7b tokenizer with `AutoTokenizer.from_pretrained()`. After setting `use_fast=True`, the error was fixed.
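For reference, a minimal sketch of that fix, assuming the `transformers` library is installed and the LLaMA-2-7b-hf files are available locally or on the Hub (the model id below is illustrative):

```python
from transformers import AutoTokenizer

# use_fast=True selects the fast (Rust-based) tokenizer implementation,
# which avoids the loading error seen with the slow tokenizer here.
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    use_fast=True,
)
```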

Finally, I re-downloaded the tokenizer-related files and solved this issue. Taking LLaMA-2-7b-hf as an example, update the files below:

* special_tokens_map.json
* tokenizer_config.json
* tokenizer.json
* tokenizer.model

Thanks for the tips from @Maystern. Here is the code for using a proxy in get_vtab1k.py:

```python
import os

# Replace with your own proxy's IP address and port.
os.environ['HTTP_PROXY'] = 'http://your_proxy_ip:your_proxy_port'
os.environ['HTTPS_PROXY'] = 'http://your_proxy_ip:your_proxy_port'
```

And don't forget to remove...

I want to share a more convenient way to download the dataset. Thanks to the efforts of [RepAdapter](https://github.com/luogen1996/RepAdapter#repadapter), we can use the Google Drive link they provide to download the dataset...

Thanks for your timely reply. After trying your suggested method, I still cannot solve the problem. I also tried re-installing the plugin from the Chrome store, and it...

Hi, since the files officially provided by NAS-Bench-201 are very slow to read, this file extracts the information from the official files (mainly the model architectures in the search space and their corresponding accuracies), so it can be read very quickly, which makes it convenient to compute ranking correlation. If you apply our method to a new dataset, you will need to extract a similar file for that dataset.

You need to collect the architectures in your self-defined search space together with their accuracies on the new dataset, store them in a dictionary, and finally save the dictionary as an .npy file.