proxy not syncing latest upstream versions #11

Open
HaveFun83 opened this issue Mar 19, 2024 · 5 comments
HaveFun83 (Contributor)
We have an odd issue.
After running for some time, the proxy stops syncing the latest upstream versions and only serves the cached versions.
When a manual pod restart is triggered, the sync works again for a while.

Example: grafana chart
harbor.log

2024-03-19T00:46:40Z [INFO] [/controller/replication/transfer/image/transfer.go:182]: 
copying grafana.github.io/helm-charts/grafana:[<omit>,7.3.2,7.3.3,7.3.4,7.3.5,7.3.6](source registry)
to helm-oci/grafana:[<omit>,7.3.2,7.3.3,7.3.4,7.3.5,7.3.6](destination registry)...

The latest upstream version is 7.3.7.
This has happened several times with different charts now.

Any suggestion what is happening here and how to fix it?

@HaveFun83 HaveFun83 changed the title proxy not syncing latest upstream version proxy not syncing latest upstream versions Mar 19, 2024
HaveFun83 (Contributor, Author) commented Mar 28, 2024

This is still an issue. I enabled the debug log, but nothing looks suspicious. Can someone help here?

I dug a bit through the logs and saw that the following download index event only occurs once, right after the proxy pod is restarted:

proxy-2024/03/28 14:45:22 download index: https://metallb.github.io/metallb/index.yaml
proxy-2024/03/28 14:45:22 downloading : https://metallb.github.io/metallb/index.yaml

The next time the replication is triggered, the index is not downloaded again. Maybe this is a lead.

HaveFun83 (Contributor, Author)
Maybe something like #3 (comment)?

I tried setting INDEX_CACHE_TTL: 1, but nothing changed. It looks like the index.yaml is cached forever and never gets refreshed from upstream after the first download.
@Vad1mo maybe you can help here?
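
To illustrate what I think is going on, here is a minimal standalone sketch of a cache-or-download flow (the getIndex function, the map-based cache, and the TTL variable are my own made-up illustration, not the proxy's actual code). The point is that a "download index" log line only appears on a cache miss, so if the stored entry never expires, or the configured TTL is not applied when the entry is stored, every later replication is served from the stale cache:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

type cachedIndex struct {
	body     []byte
	storedAt time.Time
}

var (
	indexCache = map[string]cachedIndex{}
	// what INDEX_CACHE_TTL: 1 is presumably meant to control (assumption)
	indexCacheTTL = 1 * time.Second
)

// getIndex serves index.yaml from the cache unless the entry has expired.
func getIndex(url string) ([]byte, error) {
	if entry, ok := indexCache[url]; ok && time.Since(entry.storedAt) < indexCacheTTL {
		// cache hit: no "download index" line, upstream is never contacted
		return entry.body, nil
	}
	fmt.Println("download index:", url) // matches the line that only shows up after a pod restart
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	indexCache[url] = cachedIndex{body: body, storedAt: time.Now()}
	return body, nil
}

func main() {
	if _, err := getIndex("https://metallb.github.io/metallb/index.yaml"); err != nil {
		fmt.Println("error:", err)
	}
}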

Vad1mo (Contributor) commented Mar 29, 2024

ok, this issue is on our agenda now.

HaveFun83 (Contributor, Author) commented Apr 10, 2024

JFI: I did some tests and changed the following parameter:

--- a/registry/manifest/charts.go
+++ b/registry/manifest/charts.go
@@ -171,7 +171,7 @@ func (m *Manifests) GetIndex(repoURLPath string) (*repo.IndexFile, error) {
                        // cache error too to avoid external resource exhausting
                        ttl = m.config.IndexErrorCacheTTl
                }
-               m.cache.SetWithTTL(repoURLPath, res, 1000, ttl)
+               m.cache.SetWithTTL(repoURLPath, res, 100000, ttl)
                return res.c, res.err
        }
 
@@ -239,7 +239,7 @@ func (m *Manifests) getIndexBytes(url string) ([]byte, error) {
                        // cache error too to avoid external resource exhausting
                        ttl = m.config.IndexErrorCacheTTl
                }
-               m.cache.SetWithTTL(url, res, 1000, ttl)
+               m.cache.SetWithTTL(url, res, 100000, ttl)
                return res.c, res.err
        }

Now the upstream index.yaml gets downloaded every time the replication within Harbor is triggered.
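
For anyone wondering why changing the cost has this effect: the SetWithTTL(key, value, cost, ttl) signature looks like dgraph-io/ristretto, though I am only assuming that is the cache in use. In ristretto, an item whose cost is larger than the cache's MaxCost is never admitted, so with a cost of 100000 every lookup misses and the proxy falls back to downloading index.yaml from upstream. A small standalone sketch (the MaxCost of 10000 is a hypothetical value, just to show the admission behaviour; the proxy's real budget may differ):

package main

import (
	"fmt"
	"time"

	"github.com/dgraph-io/ristretto"
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 100000, // keys to track for admission/eviction statistics
		MaxCost:     10000,  // hypothetical cost budget for the whole cache
		BufferItems: 64,
	})
	if err != nil {
		panic(err)
	}

	// Cost 1000 fits the budget: the entry is admitted and served from cache
	// until the TTL expires, so upstream is not contacted again.
	cache.SetWithTTL("https://metallb.github.io/metallb/index.yaml", "index bytes", 1000, time.Hour)
	cache.Wait() // let the asynchronous admission settle
	_, hit := cache.Get("https://metallb.github.io/metallb/index.yaml")
	fmt.Println("cost 1000 cached:", hit) // true

	// Cost 100000 exceeds MaxCost: the entry is rejected, every Get misses,
	// and the caller re-downloads the index on each replication run.
	cache.SetWithTTL("https://grafana.github.io/helm-charts/index.yaml", "index bytes", 100000, time.Hour)
	cache.Wait()
	_, hit = cache.Get("https://grafana.github.io/helm-charts/index.yaml")
	fmt.Println("cost 100000 cached:", hit) // false
}

If that is right, the cost bump effectively bypasses the cache for the index rather than fixing the TTL handling.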

HaveFun83 (Contributor, Author)
@Vad1mo @tpoxa I have been testing this cost change for several days now and it works as expected.
Should I open a PR for the cost change within the cache, or is there another solution?
