Statefulset vs. Deployment + PVC? #10
If you're interested, I have a working PR in my fork that I could merge upstream.
Hey @chrisbsmith, I would really be interested in your change proposals, and I think you are completely right that the StatefulSet fits the application. Please open a new Pull Request with your changes 🚀
FYI: You had the right approach in the beginning with Deployment + PVC. StatefulSets are used to scale out stateful workloads, since you define volumeClaimTemplates which let the K8s controller create a PVC per replica as you increase the replica count. Since uptime-kuma uses SQLite, it only allows for a single instance. Once support for PgSQL (#959) or MySQL (#953) is added, StatefulSets would make sense, but you'd also want to add the PgSQL/MySQL DB as a dependency.
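For illustration, here's a minimal sketch of the volumeClaimTemplates mechanism described above. The resource names, image tag, mount path, and storage size are assumptions for the example, not values taken from this chart:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uptime-kuma            # illustrative name
spec:
  serviceName: uptime-kuma     # headless Service required by StatefulSets
  replicas: 1                  # SQLite limits uptime-kuma to a single instance today
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1   # upstream image; the tag is an assumption
          volumeMounts:
            - name: data
              mountPath: /app/data        # uptime-kuma data directory (assumption)
  volumeClaimTemplates:        # the controller stamps out one PVC per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 4Gi       # illustrative size
```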
Thanks for this explanation @maxirus. So in the current state, it's really a wash? Is there a benefit to reverting to Deployment + PVC? I should've done more research on the uptime-kuma model, but I don't see any harm in leaving it as a StatefulSet, especially if a team is working to add SQL support.
@chrisbsmith Yes. "Weird" things will start to happen. I say weird in the sense that it's not what a typical user would expect. For example, you'd want …
@maxirus, got it. These make sense now that you describe them. The events you outlined aren't critical when the data is being persisted to a remote service. But with a remote storage solution like SQL, you wouldn't need to put uptime-kuma in a STS anyway; you'd do that with the backend instead. @dirsigler apologies for pushing this change. At least we got to learn a little something here.
Correct; unless for some reason they decide to persist config data, cache, etc. to disk. I'm not too familiar with the project as I don't intend to use it until a proper DB is supported. IMHO, it's important that the tool doing the monitoring be HA. In a world with Docker Compose and K8s, using MySQL/PgSQL for that becomes trivial.
@chrisbsmith no worries! I would then say we revert the changes later on and again spin up a Deployment and PVC. EDIT: and big thanks @maxirus for the insights ❤️
@dirsigler my suggestion would be not to revert the changes, but to add a switch that toggles the workload between a Deployment and a StatefulSet, with Deployment as the default. That makes it easy to change later, as soon as Uptime Kuma is able to handle it, and anyone who wants to use a StatefulSet before then has an easy way of doing so. Of course, a breaking change like flipping a default value must be communicated, but that's a bridge we'll cross when we come to it.
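A rough sketch of what such a switch could look like, assuming a hypothetical `useDeploy` value; the actual flag name and template layout would be up to the chart maintainers:

```yaml
# values.yaml (hypothetical key; the default keeps today's behaviour)
useDeploy: true   # true = Deployment + PVC, false = StatefulSet

# templates/uptime-kuma.yaml (sketch of the conditional)
apiVersion: apps/v1
{{- if .Values.useDeploy }}
kind: Deployment
{{- else }}
kind: StatefulSet
{{- end }}
metadata:
  name: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  {{- if not .Values.useDeploy }}
  serviceName: uptime-kuma    # only the StatefulSet branch needs a Service name
  {{- end }}
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
  # the Deployment branch would mount a standalone PVC here, while the
  # StatefulSet branch would declare volumeClaimTemplates instead
```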
Interesting approach, indeed. With these changes we default to a Deployment but keep the StatefulSet "design" for the future.
I'm just getting started with Uptime Kuma and was looking for a helm chart for it (thanks btw!) and came across your repo. However, I was wondering why you chose to use a deployment + pvc instead of a statefulset to deploy the pod? I've had issues in the past where a new pod was unable to start because the terminating pod was still running and hanging onto the PVC.
My understanding is that this is the exact situation where a Statefulset is useful. Maybe I don't understand the uptime-kuma infrastructure requirements well enough though.
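For context on that failure mode: with a ReadWriteOnce PVC, a rolling update starts the replacement pod while the old one still holds the volume. A common Deployment-side mitigation (a sketch with assumed names, not necessarily what this chart does) is the Recreate strategy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma            # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate             # terminate the old pod (releasing the PVC) before starting the new one
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          volumeMounts:
            - name: data
              mountPath: /app/data        # uptime-kuma data directory (assumption)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma        # a pre-created PVC; name is illustrative
```

A StatefulSet avoids the overlap differently: the controller waits for the old pod to be fully gone before creating its replacement with the same identity, so the two never contend for the volume.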