Hi! Thanks for building LiveSync and LiveSync-Bridge!
I've been running LiveSync for a few months and experimenting on and off with LiveSync-Bridge, and ran into an issue: if LiveSync-Bridge needs to download a lot of updates (for example today I started it after a few weeks of not running it), it can overwhelm my upstream nginx proxy that is fronting CouchDB:
```
2024/12/14 16:38:19 [alert] 5616#5616: *45850 1024 worker_connections are not enough while connecting to upstream,
client: 192.168.1.115, server: obsync.example.com, request: "GET /notes/h%3A%2B36i4ey8kwxlb2? HTTP/1.1",
upstream: "https://192.168.1.201:8443/notes/h%3A%2B36i4ey8kwxlb2?", host: "obsync.example.com"
```
In general, increasing `worker_connections` in nginx.conf might work around this problem, but for an Obsidian vault with a large number of files/chunks it seems it would always be possible to reach whatever limit is set. Is there a good place to insert a rate limit on the client side to avoid opening too many connections at once? Happy to help with a pull request if this is useful to others.
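For reference, the proxy-side workaround would look roughly like this (the value is illustrative; this only raises the ceiling rather than removing it):

```nginx
# nginx.conf -- raise the per-worker connection cap (default is often 512 or 1024)
events {
    worker_connections  4096;
}
```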
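To sketch what I mean by a client-side rate limit: a simple concurrency gate that caps the number of in-flight requests would keep a large backlog of chunk downloads from exhausting the proxy's connection pool. This is a standalone illustration, not existing LiveSync-Bridge code; the `Semaphore` class and the limit of 16 are made up for the example:

```typescript
// Minimal concurrency limiter (hypothetical helper, not part of LiveSync-Bridge).
// At most `limit` wrapped promises run at once; the rest queue up.
class Semaphore {
  private readonly limit: number;
  private active = 0;
  private waiters: (() => void)[] = [];

  constructor(limit: number) {
    this.limit = limit;
  }

  private async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active++;
      return;
    }
    // Queue up; a finishing task hands its slot over in release().
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  private release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(); // pass the slot directly to the next waiter; `active` is unchanged
    } else {
      this.active--;
    }
  }

  // Run fn with at most `limit` concurrent executions.
  async run<T>(fn: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await fn();
    } finally {
      this.release();
    }
  }
}
```

Every chunk download would then be wrapped in the gate, e.g. `const gate = new Semaphore(16); const res = await gate.run(() => fetch(chunkUrl));`, so nginx never sees more than 16 connections from the bridge regardless of how many updates are pending.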