New render function with support for material properties and multiple light sources #75
Conversation
Supports material properties and multiple light sources
The rendering of shreks_donut looks really nice! Great work. But man is it ever slow. On my 2010 MacBook Air, shreks_donut renders at 60 FPS using the old algorithm, but at only 3 FPS using the new algorithm. The mandelbulb renders at some small fraction of 1 FPS; I need to fix a bug in the FPS calculator to report fractional FPS numbers before I can quantify how slow it is. The original render code by Inigo Quilez is fast because it is straight-line code with no loops or branches.
As much as I like the new rendering output, speed is paramount for interactive editing and parameter exploration; anything below 20 FPS is going to be problematic. To me, this shows how important it is to have "pluggable renderers" in Curv. The default one could favour speed over quality, and then we could choose an alternative, like the one by @p-e-w, if we want better quality output. Also, I think it would be much better to have the improved Phong shading implemented in Curv, so that we could tweak it on a per-sketch basis.
@doug-moen It seems that there is once again a GPU/driver issue here, as I did test this and there is no way it should be anywhere near as slow as it is for you. On my four-year-old laptop with only an integrated GPU and open source drivers, I am seeing 60 FPS with both the old and the new implementation at the default viewer window size. If I make the viewer window full screen and zoom in a lot, the framerate drops to 9 FPS for the new render function and 27 FPS for the old one. So for me, the new code is at most 3 times slower than the old one, not 20+ times like it is for you. And at my regular window size, it makes essentially no difference. Having used the new renderer for a few days, I find it hard to go back to the old one because it looks so much better. I really want something like it to become the default, so I will try and see if I can achieve a similar effect while maintaining higher framerates. However, given the huge performance difference between your setup and mine, it's plausible that some of it is not due to the code itself but to driver limitations or bugs. We will see.
I just tested your branch today and the rendering is better for some examples, but worse for others. For instance, the mandelbulb would require tweaking: here it looks like the ambient term is too high and the colors are overexposed. The performance is adequate at preview size (my machine is from 2018 and has an Intel GPU), but degrades quickly once the window covers more than half the screen, though it does not seem as bad as what @doug-moen experienced. Personally, I think it's best to have a default rendering that looks good and is reasonably fast, and then let authors tweak specific parameters and configuration to get the best results. For people who want to use Curv only for modeling, speed and the ability to show surface details would be the key factors of the rendering algorithm. That's something I like about Thingiverse's rendering: it removes styling from the equation and displays all models on an equal basis. Maybe your implementation can be tweaked to have more consistent quality across the examples? The ability to have more light sources and dynamic materials definitely seems nice.
In the current renderer, there is an interaction between ambient occlusion and the […]. At this point, I think that the default lighting model should have one light source and fake shadows, based on however the current code works. To get better shadows (at a performance cost), you would change the lighting model or change its parameters. The current lighting model is based on code written by Inigo Quilez. He has written many blog posts about lighting; check out his web site, for example his post on shadow rendering.
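The core of the soft-shadow technique from that post is small enough to sketch here. This is illustrative code, not the current Curv shader; it assumes `map` is the scene's signed distance function and `k` controls how hard the penumbra is:

```glsl
// Soft shadows in the style described in Quilez's shadow article: march
// from the surface point towards the light and darken based on how
// closely the ray grazes other geometry along the way.
float softShadow(vec3 ro, vec3 rd, float mint, float maxt, float k)
{
    float res = 1.0;
    float t = mint;
    for (int i = 0; i < 64; i++) {
        if (t >= maxt) break;
        float h = map(ro + rd * t);   // distance to the nearest surface
        if (h < 0.001) return 0.0;    // the ray is blocked: full shadow
        res = min(res, k * h / t);    // penumbra estimate from the closest miss
        t += h;
    }
    return clamp(res, 0.0, 1.0);
}
```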
There is a lot of diversity in Quilez's rendering techniques, too much to be captured in a simple set of parameters. It would be interesting to build a high level rendering library instead. I haven't studied Quilez's rendering techniques enough to be able to propose what such a library would look like, so I don't know how to design the API. The low level 'shadertoy' API that I discussed, the 'view' function, would allow the rendering library to be prototyped in Curv. As I mentioned elsewhere, this low level API would eventually be removed once the 'new renderer' is designed, and the rendering library would be ported to the new renderer.
Thank you for your feedback, @doug-moen and @sebastien! Just a quick update: I'm a bit short on time at the moment, but I'm definitely still working on this. I have already improved the algorithm to the point where it is just 10-15% slower than the existing one, while looking even better than the "New render function" screenshot above. Hopefully, I'll be able to refine and push the changes next week.
I regret to say that I won't have time to work on this PR for the foreseeable future, so I'm closing it so as not to be in the way or create a false impression of ongoing activity. As stated before, I did make some more progress, but every attempt that looked even halfway decent was slower than the existing code. Along the way, I managed to explore some of the Shadertoy community. While I am absolutely in awe of the incredible stuff that people create there, I found it rather difficult to learn from, because the overwhelming majority of the code is undocumented, if not outright obfuscated. Many authors seem to be (re)using code they do not completely understand. A prime example is the RNG one-liner you mentioned in #59:
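Presumably this is the ubiquitous one-line GLSL hash, reproduced here for reference:

```glsl
// The widely copied Shadertoy "random" one-liner. Nobody seems to know
// where the constants 12.9898, 78.233 and 43758.5453 originally came from.
float rand(vec2 co)
{
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
```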
At least half of all Shadertoy shaders seem to contain this line, so I was eager to find out where it originates from and what the constants mean. It turns out that apparently nobody knows. People just copy and paste mystery code from one shader to another. In that world of secrets, Inigo Quilez's articles are a ray of light (no pun intended). The shadow rendering post you linked to above was extremely helpful, and I did implement the penumbra techniques described therein. The result looks amazing, though as mentioned, performance is worse than the existing code. But those articles didn't bring me any closer to understanding what exactly the existing code does. It's quite unfortunate that the naming is so unclear (what are `fre` and `dom` supposed to mean?).
Thanks for the contribution, even if you aren't able to carry the work forward. I agree with your comments about the code found on shadertoy. I would like to get rid of the "magic" code in Curv. It's possible that "fre" stands for "Fresnel" and "dom" might be "Discrete Ordinate Methods". I've recently been reading about the BRDF material model, and "physically based rendering". For example, this code is documented and has lots of useful links: https://github.com/McNopper/OpenGL/blob/master/Example32/shader/brdf.frag.glsl I'm keeping this PR open until I have a chance to make some use of the code in an updated Curv shader. |
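If `fre` really is a Fresnel term, the standard formulation in BRDF-based shaders is Schlick's approximation; a sketch for reference, which is not necessarily the exact expression used in the existing code:

```glsl
// Schlick's approximation of Fresnel reflectance: reflectivity increases
// towards grazing angles. F0 is the reflectance at normal incidence and
// cosTheta is dot(normal, viewDir) (or dot(halfVector, viewDir) in a
// microfacet BRDF).
vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
```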
@doug-moen Since you indicated that you might be interested in using some of this code in the future, here are some more observations from my now-abandoned investigations:
|
The new rendering code has been added to the master branch. There is a new rendering parameter called […].
This is a complete rewrite of the existing render code, using only this Wikipedia article as a reference.
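Assuming the referenced article is the Wikipedia page on the Phong reflection model (the Phong shading mentioned earlier in the thread), the heart of such a renderer is a per-light accumulation over material coefficients. A purely illustrative sketch follows; none of these names are the actual identifiers from this PR:

```glsl
// Illustrative Phong-style shading with multiple light sources and a
// material record. All names here are made up for this sketch.
struct Material {
    vec3  ambient_reflectivity;
    vec3  diffuse_reflectivity;
    vec3  specular_reflectivity;
    float shininess;
};

struct Light {
    vec3 direction;   // normalized, pointing from the surface towards the light
    vec3 color;
};

const int NUM_LIGHTS = 2;

vec3 shade(vec3 normal, vec3 toViewer, Material m,
           Light lights[NUM_LIGHTS], vec3 ambientLight)
{
    vec3 c = m.ambient_reflectivity * ambientLight;           // ambient term
    for (int i = 0; i < NUM_LIGHTS; i++) {
        float diff = max(dot(normal, lights[i].direction), 0.0);
        vec3  refl = reflect(-lights[i].direction, normal);    // mirrored light ray
        float spec = pow(max(dot(refl, toViewer), 0.0), m.shininess);
        c += lights[i].color * (m.diffuse_reflectivity * diff
                              + m.specular_reflectivity * spec);
    }
    return c;
}
```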
Compared to the previous implementation, it has:

- Support for multiple light sources
- Support for material properties, controlled by a `material` function (which for now is hardcoded)

And here is how it looks:
Old render function
New render function
I think it is clear from this comparison that the new implementation generates a superior illusion of depth, with much more complex shadows made possible by multiple independent light sources casting overlapping shadow regions. The color is also noticeably different, and while I prefer the more "lively" colors, I realize this might not be everyone's preference. If you want muted colors closer to the old renderer, you can e.g. reduce the material's `ambient_reflectivity` (sketched below).

This PR paves the way for #73 and #74. I did see that you and @sebastien are currently discussing a much more far-reaching overhaul of the rendering system, but whatever additional improvements come in the future, having rendering code that we can actually understand is surely the first step.
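To make the description concrete, the hardcoded `material` function might look something like the following. Everything except the `ambient_reflectivity` name is invented for this sketch, which reuses the illustrative `Material` struct from the earlier sketch:

```glsl
// Hypothetical hardcoded material in the spirit of the description above.
// Reducing ambient_reflectivity moves the output towards the more muted
// look of the old renderer. The position argument is ignored while the
// material is hardcoded.
Material material(vec3 position)
{
    Material m;
    m.ambient_reflectivity  = vec3(0.4);   // lower this for muted colors
    m.diffuse_reflectivity  = vec3(0.7);
    m.specular_reflectivity = vec3(0.3);
    m.shininess             = 16.0;
    return m;
}
```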