Restore command #146
Comments
Writing down a few thoughts on this before I forget them.

Goals
Non-Goals
API

This assumes the archive is available on the host that issues the compose command. Is this a problem?
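For illustration, a host-side invocation could look roughly like this; the `restore` subcommand and all paths are hypothetical, sketching the interface under discussion rather than anything that exists:

```
# Hypothetical: run a one-off restore in the backup container,
# reading an archive that lives on the host issuing the command.
docker compose run --rm \
  -v /path/on/host/backup.tar.gz:/archive/backup.tar.gz:ro \
  backup restore /archive/backup.tar.gz
```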
Alternatively the archive could be mounted as well:
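The mounted-archive alternative could be sketched in the compose file like this (service and image names are placeholders, not the project's actual configuration):

```
services:
  backup:
    image: the-backup-image  # placeholder
    volumes:
      - data:/backup/data
      # mount the archive read-only so the restore command can find it
      - /path/on/host/backup.tar.gz:/archive/backup.tar.gz:ro
```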
Error handling

How cautious does error handling need to be? Should the command stash the previous contents so it can always roll back to the pre-restoration state?
I like the stepped approach: first concentrate on the copy/restore process using the docker compose command, and add the download etc. later.
As a restore is done either in a testing scenario (non-critical) or when it's actually needed (often critical), any sources of issues should be limited as much as possible imo. I'm thinking of something like this right now:
Edit: Atomic file writes are only possible on Linux-based systems, not Windows: https://github.com/google/renameio
Restoring workflow concept (draft, open for discussion)

I'd prefer to choose one approach where multiple options exist and not let the user choose (or only where it really makes sense), so as not to make everything too huge and complex. Also, we might do the more complicated stuff later for a restore v2, including downloading the archive from the specified storage backend. This is also by no means final, more like notes to build up a restore strategy step by step.

Edit: Replaced the long list with a flow chart for easier understanding and thinking.
Some thoughts without having worked through your write-up in all detail:
I.e. I'd personally maybe focus on a) fast copy/extraction, b) robust recovery.
Not necessarily, no. It would just save time. With many gigabytes of data, errors could occur minutes to hours after starting, as insufficient space or (partially) missing permissions could cause a failure in the middle or end stage of the recovery. It could also lead to unstable system behavior (storage full).
Without any checksums provided in the backup - which would be a nice addition - we could only stream the extracted contents of files through checksum calculation (block-wise for large files which don't fit into memory?) and, once writing of a file is done, check whether writing was successful and complete (verify against the calculated checksum). But that's arguably a lot of complexity and too much effort.
Atomic copy is basically just doing
Updated the workflow above, without checksums or crazy amounts of pre-checks. Also visual now, way easier on the eyes - at least for me.
A container based off this image could also expose a command to restore a volume from a tar archive. This would have to be run manually in the context of the existing Docker setup, providing an archive location (this could probably also be remote).
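Such a manual run in the context of an existing setup might look roughly like this; the image name, `restore` command, and flags are all illustrative, not an existing interface:

```
# Hypothetical one-off restore container: mount the target volume and
# the archive, then run the (not yet existing) restore command.
docker run --rm \
  -v my_data:/backup/my_data \
  -v /path/on/host/backup.tar.gz:/archive/backup.tar.gz:ro \
  the-backup-image restore /archive/backup.tar.gz
```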