Too many logs produced by container #564
@nilekhc I am installing using yaml, and inside I don't see this arg specified …
You can use the arg …
Thanks, @nilekhc, I will put an arg -v=X, but how do I choose the right value, so that: …
How did you come up with -v=10? Why not 9 or 11?
Also, this one provides no solution: #358
@nilekhc, you know that -v=10 will make it worse, right? The higher the value, the more logs are included; this is the opposite of what I am used to in other logging libraries ...
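For context on the -v semantics being discussed: klog follows the upstream Kubernetes convention where higher -v values enable more output, not less. A minimal sketch (not the provider's actual code) illustrating that convention:

```go
// Minimal sketch of klog's verbosity convention (not the provider's code).
// A message guarded by klog.V(n) is emitted only when the process runs
// with -v=n or higher, so lower -v values mean fewer logs.
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil) // registers -v, -logtostderr, etc. on flag.CommandLine
	flag.Parse()

	klog.Info("always printed (verbosity 0)")
	klog.V(2).Info("printed only with -v=2 or higher")
	klog.V(10).Info("printed only with -v=10 or higher (very chatty)")
	klog.Flush()
}
```

Run with -v=0 for the quietest output; -v=10 enables nearly everything, which is why it makes the volume problem worse rather than better.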
This is really not nice, and it leaves customers with 2 options: …
Not nice ...
@deyanp Thank you for the feedback. The current logs generated for the provider are for each secret/key/cert object defined in the …
@aramase If I set the level to error, I expect errors in the logs but not informational messages ... I am not really sure why I would need an informational log every 2 minutes saying that secret x has been refreshed. Actually, the informational logs are currently written regardless of whether the secret changed or not; I would expect a log message saying the secret was refreshed only when the underlying secret changes. Why would I need an info log every 2 minutes saying that the same (unchanged) secret has been refreshed?
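What @deyanp is asking for amounts to change-gated logging: log at default verbosity only when the fetched value actually differs from the last one seen, and push the "no change" message up to a high verbosity level. A hypothetical sketch, not the provider's rotation code; recordRotation and lastSeen are invented names for illustration:

```go
// Hypothetical sketch of change-gated rotation logging; NOT the
// provider's actual code. recordRotation and lastSeen are made-up
// names used only to illustrate the idea.
package main

import (
	"crypto/sha256"
	"encoding/hex"

	"k8s.io/klog/v2"
)

// lastSeen maps an object name to the hash of its last observed value.
var lastSeen = map[string]string{}

func recordRotation(name string, value []byte) {
	sum := sha256.Sum256(value)
	h := hex.EncodeToString(sum[:])
	if lastSeen[name] == h {
		// Unchanged: only visible when running with -v=5 or higher.
		klog.V(5).Infof("secret %q checked, no change", name)
		return
	}
	lastSeen[name] = h
	klog.Infof("secret %q rotated: content changed", name)
}

func main() {
	recordRotation("db-password", []byte("hunter2")) // logs: content changed
	recordRotation("db-password", []byte("hunter2")) // quiet unless -v>=5
	klog.Flush()
}
```

With this shape, a 2-minute rotation poll that finds nothing new stays silent at the default verbosity.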
This issue is stale because it has been open 14 days with no activity. Please comment or this will be closed in 7 days.
@deyanp the problem is that the bug is inside the Klog library that many Kubernetes components use. To solve the problem, Klog needs to be fixed. Ref: kubernetes/klog#212
@pierluigilenoci, check the issue you referenced yourself: dims is implying there that the issue is not with klog but with the client applications (secrets-store-csi-driver-provider-azure) ...
@deyanp I have read the Klog code and I have also proposed a PR.
This issue is stale because it has been open 14 days with no activity. Please comment or this will be closed in 7 days.
🚀
This issue is stale because it has been open 14 days with no activity. Please comment or this will be closed in 7 days.
@pierluigilenoci, could you then please answer dims's question in kubernetes/klog#212 and convince him that this is an issue in klog and that your PR makes sense?
@deyanp I've already tried. They basically said it is a known bug, but there is no intention to fix it, because a fix would change the logger's behavior for practically all Kubernetes components, and any fix must maintain backward compatibility. Obviously this is not possible: the logger works badly, and fixing it necessarily means changing its behavior, which is not allowed. So we're stuck.
As soon as I have a few hours I will try to find a solution that is backward compatible, but I don't promise anything. 😞
@pierluigilenoci, appreciated; I understand that you may not have time for this! @aramase, as a workaround, if I change …
Ok, I guess this is going nowhere, so I found an alternative solution (in case another poor soul needs it); see below.
Basically, we stopped using the CSI driver in 99% of cases, as it was used mainly for mapping secrets from Key Vault to env vars.
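One possible shape of that alternative is sketched below, under assumptions: the Azure SDK for Go's azsecrets package (module path github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azsecrets) is used to read a secret at application startup instead of mounting it through the CSI driver; the vault URL, secret name, and env var name are placeholders, and this is not the actual solution from the thread:

```go
// Illustrative sketch: read a Key Vault secret directly at startup
// instead of mounting it via the CSI driver. Vault URL, secret name,
// and env var name are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azsecrets"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("credential: %v", err)
	}
	client, err := azsecrets.NewClient("https://myvault.vault.azure.net", cred, nil)
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	// An empty version string selects the latest version of the secret.
	resp, err := client.GetSecret(context.Background(), "my-secret", "", nil)
	if err != nil {
		log.Fatalf("get secret: %v", err)
	}
	if err := os.Setenv("MY_SECRET", *resp.Value); err != nil {
		log.Fatalf("setenv: %v", err)
	}
	fmt.Println("secret loaded into MY_SECRET")
}
```

Reading the secret in-process also sidesteps the per-mount rotation logging entirely, since there is no driver polling on a 2m interval.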
Have you …
Yes
What steps did you take and what happened:
Looked at Log Analytics -> Logs with the following query: …
and saw in the top 2 places: …
This seems to be because of the secret rotation interval of 2m (I have this active), and probably because I have more than 50 pods in my cluster using CSI/AAD Pod Identity.
What did you expect to happen:
Wondering how to decrease the number of logs ...
Anything else you would like to add:
No
Which access mode did you use to access the Azure Key Vault instance:
Pod Identity
Environment:
Kubernetes version (use kubectl version and kubectl get nodes -o wide): 1.20.5