Avoid repeated resolving of singleton beans through @Lazy proxy #33841

Comments
I should also mention the wish from my colleague here. Maybe we could extend this accordingly. I haven't really looked into it deeply, as I don't have time to provide PRs these days, but I wanted to at least forward this idea :)
Generally speaking, the target bean can be of any scope, hence the fresh retrieval. However, it is certainly very common for the target to be a lazily resolved singleton bean, so we should be able to add a local cache specifically for the singleton case.
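A framework-side cache for the singleton case could look roughly like the sketch below. This is hypothetical: `CachingProvider` is a made-up name, and the `Supplier` stands in for the actual resolution through `DefaultListableBeanFactory.doResolveDependency`; it only illustrates the "cache if singleton, resolve freshly otherwise" idea.

```java
import java.util.function.Supplier;

// Hypothetical sketch of a singleton-aware cache in front of dependency
// resolution. The Supplier stands in for the actual (expensive) resolution.
final class CachingProvider<T> implements Supplier<T> {

    private final Supplier<T> resolver;
    private final boolean targetIsSingleton;
    private volatile T cached;

    CachingProvider(Supplier<T> resolver, boolean targetIsSingleton) {
        this.resolver = resolver;
        this.targetIsSingleton = targetIsSingleton;
    }

    @Override
    public T get() {
        if (!targetIsSingleton) {
            return resolver.get(); // prototype/scoped targets: always resolve freshly
        }
        T result = this.cached;
        if (result == null) {
            result = resolver.get(); // resolve once
            this.cached = result;    // benign race: singleton resolution is idempotent
        }
        return result;
    }
}
```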
I don't think that would cover the collection/array case, though.
Good point, a collection/array entirely consisting of singleton beans qualifies as well.
FWIW, a related mechanism is already in place elsewhere.

There is an additional concern here with the mutability of collections and arrays. Strictly speaking, we would have to clone the collection/array of singleton beans before we return it, since it could have been mutated after a previous retrieval. Is that overhead something you would be concerned about? Or should we rather trust the calling code not to mutate the result?

That said, there is also the question of the idiomatic way of dealing with multiple beans in the first place.
I'm not sure if you want input from me here or if you're just thinking out loud? :) Whether the cloning of the collection is something to be concerned about is not something I can answer offhand; that would need measuring. If the additional allocations drive up GC activity, that might outweigh the other CPU gains. Hard to imagine (there are other allocations, like constructing ResolvableType instances, that we would save), but it is something worth measuring.
That was a bit of both, I guess :-) The effect would be similar to the above code doing such a copy itself.
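The mutability concern from the exchange above can be made concrete with a small stand-alone sketch (`BeanListProvider` is a made-up name): handing out the same cached `List` instance lets one caller's mutation leak into every later retrieval, while a defensive copy avoids that at the cost of an allocation per call.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the trade-off between returning a shared cached collection
// of beans and cloning it on every retrieval.
final class BeanListProvider {

    private final List<String> cachedBeans = new ArrayList<>(List.of("beanA", "beanB"));

    List<String> getShared() {
        return cachedBeans; // shared instance: mutations are visible to later callers
    }

    List<String> getCopy() {
        return new ArrayList<>(cachedBeans); // cloned on every retrieval
    }
}
```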
To the second (idiomatic) part: if that's the preferred way of doing things, I would be fine with optimizing the retrieval of singleton beans accordingly.

As part of this ticket I would see extending the documentation with notes about the performance characteristics, the pros and cons, and what the preferred way should be.

I'm not sure if your experiments involve the @Lazy annotation or if that goes through the same code paths eventually? The annotation suffers from the same characteristics.
I'm reducing this ticket to caching for the @Lazy proxy. That said, I'm also tightening related parts along the way.
Sounds fair. Could we nonetheless make a note in the docs about the performance characteristics of the different approaches?
Hi 👋 ,
we have discovered that one of our applications spends a considerable share of CPU cycles (10-20%) in DefaultListableBeanFactory.doResolveDependency at runtime. We found that this is caused by repeated resolving of beans through e.g. ObjectProvider.getObject (@Lazy annotated fields suffer from the same problem).

I've spent a few minutes creating a reproducer example here: https://github.com/dreis2211/objectprovider-example

But essentially it's this:
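The actual reproducer is in the linked repository; as a minimal stand-alone sketch of the pattern (all names here are made up, and a counting `Supplier` stands in for `ObjectProvider.getObject()`), every call re-runs the dependency resolution even though the target is a singleton:

```java
import java.util.function.Supplier;

// Sketch of the problematic pattern: each handle() call resolves the
// dependency again, which is where doResolveDependency shows up in profiles.
final class LookupController {

    static int resolutions = 0; // counts simulated dependency resolutions

    private final Supplier<String> serviceProvider = () -> {
        resolutions++;          // stands in for a full doResolveDependency pass
        return "singletonService";
    };

    String handle() {
        return serviceProvider.get(); // resolved on every request
    }
}
```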
A little wrk benchmark shows the following results. Caching those beans, e.g. in a class-local field (also provided in the example already), yields the following results:
As you can see, the latter is considerably better in terms of throughput.
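The class-local caching workaround mentioned above could look roughly like this. This is a hypothetical sketch: `CachedController` is a made-up name, and a minimal stand-in interface replaces Spring's `org.springframework.beans.factory.ObjectProvider` so the block compiles without Spring on the classpath.

```java
// Minimal stand-in for org.springframework.beans.factory.ObjectProvider.
interface ObjectProvider<T> {
    T getObject();
}

// Sketch of the workaround: resolve the singleton once and keep it in a
// field instead of calling getObject() on every request.
final class CachedController {

    private final ObjectProvider<String> provider;
    private volatile String cachedService; // class-local cache

    CachedController(ObjectProvider<String> provider) {
        this.provider = provider;
    }

    String service() {
        String s = cachedService;
        if (s == null) {
            s = provider.getObject(); // single resolution instead of one per call
            cachedService = s;
        }
        return s;
    }
}
```

This is exactly the kind of boilerplate the issue argues should not be necessary for singleton targets.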
As the application at hand makes use of @Lazy or ObjectProvider in many places, a colleague of mine spent some time writing a CustomAutowireConfigurer that avoids the repeated resolving for us, so we don't have to clutter our code with class-local caches. That, however, feels like overkill to me.

Is there any specific reason why the result of deferred bean resolving is not cached by default? I have the feeling the current behaviour is a little counter-intuitive and might not be known to everybody. A quick look into the docs also doesn't turn up any note about the runtime performance characteristics of such approaches.

In case there is nothing to be done about the performance in general, I'd at least vote for an additional hint in the documentation about this.
Thank you :)
Cheers,
Christoph