Broken Offline NeRFCapture
Thanks for sharing your paper and code first of all, they look really cool.
## My approach
I tried to run the code on my own computer, but I don't know anything about cyclonedds, so I could only try to run SplaTAM by manually providing the rgb, depth, and transforms.json files captured by NeRFCapture. Here is what I did:
- Transfer the files captured by NeRFCapture to your computer, and unzip them at `<SplaTAM>/experiments/iPhone_Captures/offline_demo`.
- Create new `rgb` and `depth` folders; the file structure should now look like:
```
offline_demo
├───depth
├───rgb
├───images
└───transforms.json
```
- Go to the `images` folder and run the following script; it will move the images into the `depth` and `rgb` folders respectively.
```python
import os
import shutil

# current working directory (run this inside the images folder)
current_dir = os.getcwd()

# target folders (created in the previous step)
depth_dir = os.path.join(current_dir, '..', 'depth')
rgb_dir = os.path.join(current_dir, '..', 'rgb')

# iterate over all files in the current folder
for filename in os.listdir(current_dir):
    if filename.endswith('.png'):
        if '.depth' in filename:
            # move it to the depth folder
            shutil.move(os.path.join(current_dir, filename), depth_dir)
        else:
            # move it to the rgb folder
            shutil.move(os.path.join(current_dir, filename), rgb_dir)
```
- Go to the `depth` folder (the depth PNGs were moved there in the previous step) and run this script to collapse the three-channel depth maps into a single channel.
```python
import os
import cv2
import numpy as np

folder_path = os.path.dirname(os.path.abspath(__file__))
depth_scale = 10.0
full_res_width = 1920
full_res_height = 1440

for filename in os.listdir(folder_path):
    if filename.endswith(".jpg") or filename.endswith(".png"):
        image_path = os.path.join(folder_path, filename)
        image = cv2.imread(image_path)
        # collapse the three BGR channels into a single gray channel
        gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # depth16_img = cv2.convertScaleAbs(gray_img, alpha=65535.0/255.0, beta=0.0)
        dimensions = gray_img.ndim
        print(f"Image: {filename}, Dimensions: {dimensions}")
        # overwrite the original file
        cv2.imwrite(image_path, gray_img)
```
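Note that collapsing a depth PNG to 8-bit grayscale discards most of the metric precision, which may itself explain bad depth. For reference, here is a minimal sketch of a lossless-enough round trip using the full uint16 range, assuming depth in meters and the same `depth_scale = 10.0` maximum as the script above (the `encode_depth`/`decode_depth` names are my own, not part of SplaTAM):

```python
import numpy as np

DEPTH_SCALE = 10.0  # assumed maximum depth in meters, matching depth_scale above

def encode_depth(depth_m: np.ndarray) -> np.ndarray:
    """Encode metric depth (float32, meters) into a uint16 image."""
    return np.clip(depth_m * 65535.0 / DEPTH_SCALE, 0, 65535).astype(np.uint16)

def decode_depth(depth_u16: np.ndarray) -> np.ndarray:
    """Recover metric depth from the uint16 encoding."""
    return depth_u16.astype(np.float32) * DEPTH_SCALE / 65535.0

depth = np.array([[0.5, 1.0], [2.5, 9.99]], dtype=np.float32)
recovered = decode_depth(encode_depth(depth))
# quantization step is 10 m / 65535 ≈ 0.15 mm, so the round trip is near-exact
print(np.abs(recovered - depth).max())
```

With only 8 bits, the quantization step over the same 10 m range would be about 4 cm per gray level, which is far too coarse for tracking.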
- Change the file names from `<num>.depth.png` to `<num>.png`.
- Return to the `offline_demo` folder and run this script to update the image paths in `transforms.json`.
```python
import json

with open('transforms.json', 'r') as f:
    data = json.load(f)

for frame in data['frames']:
    # print(frame)
    if '.png' not in frame['file_path']:
        frame['file_path'] += '.png'
    frame['file_path'] = frame['file_path'].replace('images', 'rgb')
    print(frame['file_path'])
    if '.depth.png' in frame['depth_path']:
        frame['depth_path'] = frame['depth_path'].replace('.depth.png', '.png')
    frame['depth_path'] = frame['depth_path'].replace('images', 'depth')
    print(frame['depth_path'])

with open('transforms.json', 'w') as f:
    json.dump(data, f, indent=4)
```
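After rewriting the JSON it is worth checking that every referenced file actually exists on disk, since a misnamed path can fail silently in a dataloader. A small check, run from inside `offline_demo` (the `find_missing` helper is my own, not part of SplaTAM):

```python
import json
import os

def find_missing(frames, root='.'):
    """Return the referenced rgb/depth paths that do not exist under root."""
    missing = []
    for frame in frames:
        for key in ('file_path', 'depth_path'):
            if not os.path.exists(os.path.join(root, frame[key])):
                missing.append(frame[key])
    return missing

# Usage, from inside offline_demo:
#   with open('transforms.json') as f:
#       data = json.load(f)
#   print(find_missing(data['frames']) or 'all paths OK')
```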
## Question
The code runs without any problems, but unfortunately the results are very poor (even though I have GT Poses for Tracking turned on).
Without GT poses, the generated Gaussian clusters are all squashed together, and both Ground Truth Depth and Rasterized Depth in the eval show 0. I'm wondering what I'm doing wrong. One possibility I can think of is that I'm producing the wrong depth maps, since the same code behaves fine on the Replica data.
Do you have any suggestions for adjusting the NeRFCapture depth maps? Or any other way to get depth from images/videos?
Appreciate it.
Hi, thanks for trying out our code!
As suggested by this comment, the offline mode in NeRFCapture is broken and tends to give erroneous depth maps: https://github.com/spla-tam/SplaTAM/issues/7#issuecomment-1845065246.
I believe that this is the problem you are currently facing. We don't have an alternative for the NeRFCapture setup yet. We hope to release an updated variant of the demo soon.
Another possibility is to use apps such as Record3D (though the input format would need to be looked into).
I tried to fix the offline mode of NeRFCapture (https://github.com/Zhangyangrui916/NeRFCapture). It outputs depth as raw binary (float32). This is the script I ran on my PC to process it:
```python
import json
import os

import cv2
import numpy as np

directory = 'Z:/240223175130/'

# read the transforms.json file
with open(directory + 'transforms.json') as f:
    transformsJson = json.load(f)

def readDepthImage(file_path):
    with open(file_path, 'rb') as f:
        data = f.read()
    arr = np.frombuffer(data, dtype=np.float32)
    arr = arr.reshape(transformsJson['depth_map_height'], transformsJson['depth_map_width'])
    return arr  # 1 unit = 1 meter

os.mkdir(directory + 'rgb/')
os.mkdir(directory + 'depth/')

paths = os.listdir(directory + 'images/')
for path in paths:
    if path.endswith('.png'):
        os.rename(directory + 'images/' + path, directory + 'rgb/' + path)
    elif path.endswith('.depth'):
        depthMap = readDepthImage(directory + 'images/' + path)
        # scale meters into the uint16 range (10 m -> 65535)
        save_depth = (depthMap * 65535 / float(10)).astype(np.uint16)
        cv2.imwrite(directory + 'depth/' + path.replace('.depth', '.png'), save_depth)

for frame in transformsJson['frames']:
    frame['file_path'] = frame['file_path'].replace('images', 'rgb')
    frame['file_path'] += '.png'
    frame['depth_path'] = frame['depth_path'].replace('.depth.png', '.png')
    frame['depth_path'] = frame['depth_path'].replace('images', 'depth')

with open(directory + 'transforms.json', 'w') as f:
    json.dump(transformsJson, f)
```
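As a quick sanity check on the converted PNGs, it helps to confirm they contain non-zero depth in a plausible metric range, since an all-zero map reproduces the "Rasterized Depth shows 0" symptom from the question. A numpy-only sketch, assuming the same 65535/10 scaling as the script above (the `depth_stats` helper is my own):

```python
import numpy as np

def depth_stats(depth_u16: np.ndarray, depth_scale: float = 10.0) -> dict:
    """Summarize a converted uint16 depth map in meters.

    Zero pixels usually mean invalid depth, so the valid fraction
    is the first thing to check.
    """
    meters = depth_u16.astype(np.float32) * depth_scale / 65535.0
    valid = meters[meters > 0]
    return {
        'valid_fraction': valid.size / meters.size,
        'min_m': float(valid.min()) if valid.size else 0.0,
        'max_m': float(valid.max()) if valid.size else 0.0,
    }
```

A healthy indoor capture should report a valid fraction near 1.0 and depths of a few meters at most.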
This is what I get:
For those who are still struggling with offline data, please use https://spectacularai.github.io/docs/sdk/tools/nerf.html instead of NeRFCapture.
Thanks for sharing this, looks very cool! We will update our README to reflect the same.
> For those who are still struggling with offline data, please use https://spectacularai.github.io/docs/sdk/tools/nerf.html instead of NeRFCapture.
Hi, I also noticed that NeRFCapture's offline mode is broken. I tested the Spectacular AI app with nerfstudio and it works. For SplaTAM, do I need to modify the dataloader or other parts of SplaTAM to run the recorded data from Spectacular AI? Thanks!