Pixel2Mesh
How does converting the .xyz file to a .dat file work?
If I have understood everything correctly, generate_data.py generates the .xyz files in the rendering folder. But these files can't be used for training yet; they need to be converted to .dat files. In another issue I read that the .dat files are just a binary wrapper for the .xyz files and that you should use pickle to create them. But if I use pickle as follows:
import pickle

content = []
with open('00.xyz', 'r') as f:
    for line in f.readlines():
        content.append(line.split(' '))
with open('00.dat', 'wb') as f:
    pickle.dump(content, f)
the created .dat file looks different. If you open a .dat file from the ShapeNet training data provided by the developers via Google Drive, you see that every .dat file starts with
cnumpy.core.multiarray
_reconstruct
but my generated .dat file doesn't.
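That header is a clue about the format: the leading "c" is the pickle GLOBAL opcode, and numpy.core.multiarray / _reconstruct is the module and function pickle records so it can rebuild a NumPy array on load. In other words, the official .dat files contain pickled NumPy arrays, while the snippet above pickles a plain list of strings. A minimal sketch of the difference (nothing here is specific to Pixel2Mesh, and the exact module path in the header can vary with the NumPy version):

```python
import pickle
import numpy as np

# Pickling a plain list of strings (what the snippet above produces)
# does not reference numpy's array-reconstruction function at all.
list_bytes = pickle.dumps([['1.0', '2.0', '3.0']], protocol=2)
assert b'_reconstruct' not in list_bytes

# Pickling a NumPy float array does: protocol 2 writes a GLOBAL opcode
# ('c') followed by the module and function used to rebuild the array,
# which is where the "cnumpy.core.multiarray" / "_reconstruct" header
# at the start of the official .dat files comes from.
arr_bytes = pickle.dumps(np.array([[1.0, 2.0, 3.0]]), protocol=2)
assert b'_reconstruct' in arr_bytes
```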
A snippet of code that converts the .xyz files into .dat files usable for training would be great. Does anyone know what I am doing wrong?
Thanks in advance.
Have you solved the problem of dataset generation? May I ask you for help?
I have the same problem and want to generate .dat files for the dataset. Have you solved it? Can you share the solution?
Yes, I think I solved it. I wrote a short Python program that converts the .xyz files into .dat files. You pass the path to your .xyz file as a command-line argument. The program uses pickle to convert the .xyz file into a .dat file and writes that .dat file into the same directory where your .xyz file is located.
import pickle
import numpy as np
import sys

def myFunc():
    # Read the whitespace-separated values of the .xyz file into a list of rows.
    # (split() instead of split(' ') also strips the trailing newline.)
    content = []
    with open(sys.argv[1], 'r') as f:
        for line in f.readlines():
            content.append(line.split())
    print(content[0][0])
    # Convert every value from string to float.
    data = []
    for i in range(len(content)):
        data.append([])
        for k in range(len(content[i])):
            data[i].append(float(content[i][k]))
    # Write the .dat file next to the input, swapping the extension.
    output_path = sys.argv[1][:sys.argv[1].rfind('.')] + ".dat"
    print(output_path)
    with open(output_path, 'wb') as f:
        try:
            # Pickle the points as a NumPy array with protocol 2, matching
            # the format of the provided ShapeNet .dat files.
            data = np.array(data)
            print(data.shape)
            pickle.dump(data, f, 2)
        except pickle.PicklingError:
            print('Error while pickling the object. Object is not picklable')

myFunc()
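As a quick sanity check of the conversion above, you can round-trip a tiny .xyz file and confirm the resulting .dat loads back as a float array (the sample values below are made up; np.loadtxt does the same parsing as the manual split/float loop):

```python
import os
import pickle
import tempfile

import numpy as np

# Write a tiny two-point .xyz file (x, y, z plus a normal per line).
tmp = tempfile.mkdtemp()
xyz_path = os.path.join(tmp, '00.xyz')
with open(xyz_path, 'w') as f:
    f.write('0.1 0.2 0.3 0.0 0.0 1.0\n'
            '0.4 0.5 0.6 0.0 1.0 0.0\n')

# Convert: parse to a float array and pickle it with protocol 2.
points = np.loadtxt(xyz_path)          # shape (2, 6)
dat_path = xyz_path[:-len('.xyz')] + '.dat'
with open(dat_path, 'wb') as f:
    pickle.dump(points, f, 2)

# Verify: the .dat loads back as the same float array.
with open(dat_path, 'rb') as f:
    loaded = pickle.load(f)
assert loaded.shape == (2, 6)
assert np.allclose(loaded, points)
```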
Hi, I find that generate_data.py can also sample points from the ground-truth surface. Do you know the difference between the sampling operations in generate_data.py and 1_sample_points.txt? I guess the sampling in generate_data.py is uniform. If I want to generate the ground-truth data in point form to compare with the prediction, do I just need to run generate_data.py? Or do I need to follow the steps including 1_sample_points.txt, 2_generate_normal.py, and 3_camera_transform.py?
Hi, have you figured out the difference between the sampling in generate_data.py and the pipeline of 1_sample_points.txt, 2_generate_normal.py, and 3_camera_transform.py? They seem to achieve the same function.