FileNotFoundError: [Errno 2] No such file or directory: '../datasets/rplan/list.txt'
how to solve it?
Hey, you should write a script to generate it, for example:
import os

def get_image_number(name):
    # Sort numerically by the number before the extension, e.g. "12.json" -> 12
    return int(name.split(".")[0])

def write_filenames_to_txt(directory, txt_file):
    json_names = os.listdir(directory)
    json_names = sorted(json_names, key=get_image_number)
    with open(txt_file, 'w') as f:
        for filename in json_names:
            f.write(filename + '\n')

write_filenames_to_txt("plan_json", "list.txt")
@YuP2905 Hi, thanks for the suggestion. I tried the script and made the list.txt file, then placed it inside the rplan folder. But I am getting this issue:
PS E:\HouseDiffusion\house_diffusion> python image_train.py --dataset rplan --batch_size 32 --set_name train --target_set 8
Logging to ckpts\openai_2024_04_06_02_24_29_449396
creating model and diffusion...
Number of model parameters: 26541330
COSINE
creating data loader...
training...
loading train of target set 8
Traceback (most recent call last):
File "E:\HouseDiffusion\house_diffusion\image_train.py", line 90, in <module>
main()
File "E:\HouseDiffusion\house_diffusion\image_train.py", line 47, in main
TrainLoop(
File "E:\HouseDiffusion\house_diffusion\house_diffusion\train_util.py", line 160, in run_loop
batch, cond = next(self.data)
File "E:\HouseDiffusion\house_diffusion\house_diffusion\rplanhg_datasets.py", line 31, in load_rplanhg_data
dataset = RPlanhgDataset(set_name, analog_bit, target_set)
File "E:\HouseDiffusion\house_diffusion\house_diffusion\rplanhg_datasets.py", line 117, in __init__
with open(f'{base_dir}/list.txt') as f:
FileNotFoundError: [Errno 2] No such file or directory: '../datasets/rplan/list.txt'
I think you should check your path: the loader opens list.txt relative to the directory you run the training script from, not relative to the rplan folder itself.
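Per the traceback, the dataset class opens f'{base_dir}/list.txt' with a relative base_dir of '../datasets/rplan', so the file it actually looks for depends on your working directory. A quick sanity check (a sketch; expected_list_path is a hypothetical helper, not part of the repo):

```python
import os

def expected_list_path(cwd):
    # Resolve '../datasets/rplan/list.txt' the same way the loader would,
    # starting from the directory the training script is launched in.
    return os.path.normpath(os.path.join(cwd, "..", "datasets", "rplan", "list.txt"))

# Running from E:\HouseDiffusion\house_diffusion, this prints the absolute
# path where list.txt must exist for the FileNotFoundError to go away.
print(expected_list_path(os.getcwd()))
```

If the printed path doesn't contain your list.txt, either move the file there or launch the script from the matching directory.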
Hi, I have the same problem. The error message is exactly the same as yours. In which path did you put the list.txt file to solve this problem?
Sorry, I haven't worked on this project in a long time. But I think you can find the original dataset; that's how I solved this issue at the time.