deepmind-research
How to generate train.tfrecord?
Dear author: Hello! I am a graduate student from Wuhan Textile University in China. I have recently been studying your learning_to_simulate project and am very interested in it. Could you provide the source dataset files used to generate train.tfrecord? My email address is: [email protected]. Thank you very much for reading, and I look forward to your reply.
Thank you for your message.

> Could you provide the source dataset files used to generate train.tfrecord?

Could you clarify which datasets you are referring to? All datasets should be downloadable by name using the download script, as per the instructions in the README:
mkdir -p /tmp/datasets
bash ./learning_to_simulate/download_dataset.sh WaterRamps /tmp/datasets
Hope this helps.
Thank you very much for your answer! I want to try to make a cloth dataset for simulation experiments, but it is difficult for me to see the contents of the .tfrecord file. When I open it directly, I only see garbled binary data. I'm confused now. Please give me some advice.
Thanks for your reply. If you want human-readable access, I would recommend using Python to read the dataset and iterate through the examples. You may use TF2 to do this:
import functools
import json
import os
import tensorflow as tf
from learning_to_simulate import reading_utils

# Dataset directory as created by download_dataset.sh, e.g. WaterRamps.
data_path = '/tmp/datasets/WaterRamps'
split = 'train'  # or 'valid' / 'test'
# Each dataset ships with a metadata.json (bounds, sequence length, stats).
with open(os.path.join(data_path, 'metadata.json')) as fp:
  metadata = json.load(fp)

ds = tf.data.TFRecordDataset([os.path.join(data_path, f'{split}.tfrecord')])
ds = ds.map(functools.partial(
    reading_utils.parse_serialized_simulation_example, metadata=metadata))
for element in ds.as_numpy_iterator():
  print(element)
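For reference (this is my reading of reading_utils, so treat the exact keys as an assumption rather than documented API): each parsed element comes back as a (context, features) pair, where context['particle_type'] holds one integer material ID per particle and features['position'] holds the whole trajectory as a float32 array of shape [sequence_length + 1, num_particles, dim]. A quick way to check, continuing from the snippet above where ds is already built:

for context, features in ds.as_numpy_iterator():
  print(context['particle_type'].shape)  # (num_particles,)
  print(features['position'].shape)      # (sequence_length + 1, num_particles, dim)
  break  # inspect only the first trajectory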
I have a similar question. I want to create a test dataset with ramps that I set myself. What should I do? How can I create such a test.tfrecord file?
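Not the author, but here is a minimal sketch of how such a file could be written, based on how reading_utils parses the examples; everything dataset-specific (the array shapes, the material IDs, the file name) is an assumption for illustration. Positions go into the 'position' feature list as raw float32 bytes, one entry per time step, and particle types go into the 'particle_type' context feature as raw int64 bytes:

import numpy as np
import tensorflow as tf

def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

# Hypothetical trajectory: 601 frames (= sequence_length 600 + 1),
# 500 particles, 2-D positions.
positions = np.random.rand(601, 500, 2).astype(np.float32)
particle_types = np.zeros([500], dtype=np.int64)  # material IDs are dataset-specific

example = tf.train.SequenceExample(
    context=tf.train.Features(feature={
        'key': tf.train.Feature(int64_list=tf.train.Int64List(value=[0])),
        'particle_type': _bytes_feature(particle_types.tobytes()),
    }),
    feature_lists=tf.train.FeatureLists(feature_list={
        'position': tf.train.FeatureList(
            feature=[_bytes_feature(step.tobytes()) for step in positions]),
    }))

with tf.io.TFRecordWriter('test.tfrecord') as writer:
  writer.write(example.SerializeToString())

You would write one SequenceExample per trajectory, and you would also need a matching metadata.json (see below) whose sequence_length and dim agree with the arrays you serialize.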
What is metadata in the snippet above? Do I need to change reading_utils for different datasets?
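For what it's worth, metadata is just the parsed metadata.json that ships alongside each dataset's tfrecord files (it is loaded with json in the snippet above), and reading_utils should not need changes as long as a new dataset uses the same serialization format. Below is a sketch of the fields the WaterRamps-style metadata carries; all numbers are placeholders, and the velocity/acceleration statistics are per dimension and must be computed from your own trajectories:

# Contents of metadata.json, expressed as a Python dict for annotation.
metadata = {
    'bounds': [[0.1, 0.9], [0.1, 0.9]],    # simulation domain, per dimension
    'sequence_length': 600,                # positions carry sequence_length + 1 frames
    'default_connectivity_radius': 0.015,  # neighbour radius for graph building
    'dim': 2,                              # spatial dimensionality
    'dt': 0.0025,                          # time step between frames
    'vel_mean': [0.0, 0.0], 'vel_std': [1.0, 1.0],  # normalization statistics
    'acc_mean': [0.0, 0.0], 'acc_std': [1.0, 1.0],
}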