Dear Qianqian,
I've realized that, after the implementation of photon sharing on the GPU, the data saved at the detector seems inconsistent. This is probably related to issue #101, where the photon-data test fails after replay.
I'm preparing some exercises for my class in which I compare the analytical solution of the diffusion equation (DE) for a simple semi-infinite slab to the solution obtained at the detector after binning the detected photons and applying the correct weight via the Beer-Lambert law.
I attach the code as a zip archive; the core post-processing step is sketched below.
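In case it helps, the post-processing in the exercise boils down to this step. A minimal sketch, assuming the standard mcxlab conventions for cfg.prop (one row per medium, columns [mua mus g n]) and detp.ppath (one column of partial pathlengths per medium); the variable names are mine:

```matlab
% Minimal sketch of the binning/weighting step (my own variable names;
% assumes the standard mcxlab layout of cfg.prop and detp.ppath).
c0  = 299.792458;               % speed of light in vacuum [mm/ns]
mua = cfg.prop(2:end, 1);       % absorption coefficient per medium [1/mm]
n   = cfg.prop(2:end, 4);       % refractive index per medium

w   = exp(-detp.ppath * mua);   % Beer-Lambert weight of each detected photon
tof = detp.ppath * (n / c0);    % time of flight of each photon [ns]

edges = linspace(0, cfg.tend * 1e9, 101);   % time-gate edges [ns]
bin   = discretize(tof, edges);
ok    = ~isnan(bin);                        % drop photons outside the gates
tpsf  = accumarray(bin(ok), w(ok), [numel(edges) - 1, 1]);  % weighted TPSF
```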
What I've found is that, after the implementation of photon sharing on the GPU, the agreement with the analytical solution no longer holds. From the plot, you can see that this is not a matter of normalization but something more complicated.
The problem happens on the GPU; no issues occur when running on the CPU. (Attachment: Exercise_2.zip)
Moreover, on my Apple Silicon M1 the problem is amplified, because detp.ppath contains paths that are much longer than the maximum allowed by cfg.tend = 5e-9. If you run histogram(detp.ppath), you can see this in the output (histogram screenshot attached).
Based on the settings in the code, the maximum allowed path should be lmax = c0 / n * cfg.tend ≈ 1000 mm.
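For example, a quick check against this bound (here n = 1.4 is my assumption; take the actual value from cfg.prop in the attached script):

```matlab
% Quick check of the pathlength bound (n = 1.4 is my assumption here;
% take the actual refractive index from cfg.prop in the attached script).
c0   = 299.792458;                  % speed of light in vacuum [mm/ns]
n    = 1.4;
lmax = c0 / n * (cfg.tend * 1e9);   % ~1071 mm for cfg.tend = 5e-9 s

ltot = sum(detp.ppath, 2);          % total pathlength per detected photon [mm]
fprintf('%d of %d detected photons exceed lmax\n', nnz(ltot > lmax), numel(ltot));
```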
I would greatly appreciate your support!
Thank you very much!
Best regards!
Andrea
hi @andreafarina, thanks for reporting both problems. I have started debugging these issues and will update you soon. I suspect it is related to the shared-memory buffer use, because I had to change its format and length when adding photon-sharing support.
Thanks a lot! I don't know if it helps with debugging, but I've noticed that if you run the demo_photon_replay.m example with 10^5 photons, the replay is successful, although I'm not sure about the consistency of detp.ppath.
If you keep re-running the same example, the replay fails on the 4th run, as if the memory were somehow saturated. If you then issue the command clear mex, the next execution succeeds again (see the sketch below). This may also be connected to the well-known NVIDIA GPU memory-saturation problem when using mmclab. In fact, after the first execution, MATLAB seems to keep the mex file loaded in memory to speed up the next run.
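For reference, this is roughly the sequence that triggers and then clears the failure (a sketch; the demo script ships with mcxlab):

```matlab
% Roughly the sequence I used (demo_photon_replay ships with mcxlab)
for k = 1:4
    demo_photon_replay;   % on my machine the replay fails at the 4th run
end
clear mex                 % unloads the cached mex module, freeing GPU memory
demo_photon_replay;       % ...and the next execution succeeds again
```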
Thanks a lot!
Regards
Andrea