Hi authors, thanks for the interesting work.
I want to ask how you built the scene for the first frame, as described in the paper.
I tried to build the first-frame scene myself, but it fails whether I use COLMAP or the provided point cloud and camera poses.
My guess is that the initial point cloud and camera poses come from the CMU Panoptic Dataset (HD videos and calibration data), which is a different setup from the standard 3DGS pipeline.
Could you kindly explain how you build the first-frame scene? The current code only seems to show how to move the point cloud, not how to create the Gaussians.
Thanks for your time again!
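In case it helps others stuck at the same step, here is a minimal sketch of how one might build an initial point cloud file for the first frame from a COLMAP sparse reconstruction. The output file name `init_pt_cld.npz` and the (N, 7) layout (xyz, rgb in [0, 1], segmentation flag) are my assumptions about what the training code expects, not something the authors have confirmed.

```python
# Minimal sketch: convert COLMAP's points3D.txt for the first frame into an
# init_pt_cld.npz file. The file name and (N, 7) "data" layout (xyz, rgb, seg)
# are assumptions about the loader, not confirmed by the authors.
import numpy as np

def read_colmap_points3d(path):
    """Parse COLMAP's points3D.txt into (N, 3) xyz and (N, 3) rgb arrays."""
    xyz, rgb = [], []
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            vals = line.split()
            # points3D.txt: POINT3D_ID X Y Z R G B ERROR TRACK[...]
            xyz.append([float(v) for v in vals[1:4]])
            rgb.append([int(v) for v in vals[4:7]])
    return np.array(xyz, dtype=np.float32), np.array(rgb, dtype=np.float32) / 255.0

xyz, rgb = read_colmap_points3d("sparse/0/points3D.txt")
seg = np.ones((xyz.shape[0], 1), dtype=np.float32)  # assumed: 1 = foreground
np.savez("init_pt_cld.npz", data=np.concatenate([xyz, rgb, seg], axis=1))
```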
@robot0321 were you able to get it working? I am in a similar position with my custom dataset: I get a sparse reconstruction from COLMAP but can't get the final pipeline to work.
Another thing to note: if you try to reverse-engineer the train_meta.json provided with the Panoptic datasets, you get many unique K matrices. If you create train_meta.json yourself from COLMAP output with the "share camera intrinsics" option, that will not be the case.
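For reference, here is a minimal sketch of deriving per-camera K and world-to-camera matrices from COLMAP's `cameras.txt` / `images.txt` and writing them into a train_meta.json-style file. The key names (`k`, `w2c`, `fn`, `cam_id`) are assumptions based on my reading of the Panoptic metadata, and the parser assumes the PINHOLE camera model; it is not an official converter.

```python
# Minimal sketch: build a train_meta.json-style file from a COLMAP sparse model.
# Key names ('k', 'w2c', 'fn', 'cam_id') are assumptions, not an official format.
import json
import numpy as np

def quat_to_rotmat(qw, qx, qy, qz):
    """Convert a COLMAP quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

# cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS...
# With shared intrinsics every image maps to the same camera, hence a single K.
cams = {}
with open("sparse/0/cameras.txt") as f:
    for line in f:
        if line.startswith("#") or not line.strip():
            continue
        v = line.split()
        fx, fy, cx, cy = map(float, v[4:8])  # assumes PINHOLE model
        cams[int(v[0])] = {"w": int(v[2]), "h": int(v[3]),
                           "k": [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]}

# images.txt: IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME (+ a 2D-points line)
meta = {"k": [], "w2c": [], "fn": [], "cam_id": []}
with open("sparse/0/images.txt") as f:
    lines = [l for l in f if not l.startswith("#") and l.strip()]
for line in lines[::2]:  # skip the 2D-point line that follows each image line
    v = line.split()
    R = quat_to_rotmat(*map(float, v[1:5]))
    t = np.array(list(map(float, v[5:8])))
    w2c = np.eye(4)
    w2c[:3, :3], w2c[:3, 3] = R, t  # COLMAP already stores world-to-camera
    cam = cams[int(v[8])]
    meta["k"].append(cam["k"])
    meta["w2c"].append(w2c.tolist())
    meta["fn"].append(v[9])
    meta["cam_id"].append(int(v[8]))

with open("train_meta.json", "w") as f:
    json.dump(meta, f)
```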