Hey, this is really cool work on dynamic modeling. I was trying to reproduce the paper and got significantly worse results than your reported values. I was digging through your code because I didn't expect the L2 loss alone to be able to capture the dynamics accurately, and found the tv loss. If I understand correctly, it's an L2 loss between time steps, evaluated from the same camera position to ensure temporal smoothness. I was wondering whether the results in the paper use this loss? A rough sketch of my understanding is below.
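For concreteness, here's a minimal sketch of what I understand the loss to compute; `model`, `rays`, and the time arguments are my own placeholders rather than your actual API, so correct me if I've misread the code:

```python
def temporal_tv_loss(model, rays, t0, t1):
    """L2 penalty between renders of the same rays at two adjacent time steps.

    `model(rays, t=...)` is a hypothetical stand-in for however the repo
    renders a batch of rays (torch tensors) at a given time. Only the time
    input varies while the camera/rays stay fixed, so the difference
    measures temporal change rather than viewpoint change.
    """
    rgb_t0 = model(rays, t=t0)  # render at time t0
    rgb_t1 = model(rays, t=t1)  # render at time t1, same camera position
    return ((rgb_t1 - rgb_t0) ** 2).mean()
```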
I was also curious what tv stands for, as I'm not super familiar with this.
Thanks!
@violetteshev I did not directly reproduce the results; I re-implemented this in my own repo without a coarse/fine approach. Instead, it might be worth trying to run NR-NeRF on the dataset and seeing how it performs.