
Allow configuration of s3 region for registry backend #10208

Closed · wants to merge 2 commits

Conversation

@nandajavarma (Contributor) commented May 24, 2022

Description

This PR adds an option to configure the S3 bucket region for the registry backend. It also ensures the registry secret is created even when the registry is set to inCluster, since an S3 backend can be used with the in-cluster registry.
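For illustration, a minimal installer config with the new option could look like this (the s3storage nesting and certificate shape follow the struct diff below; bucket and secret names are made up):

containerRegistry:
  inCluster: true
  s3storage:
    bucket: my-registry-bucket
    region: eu-central-1     # new: bucket region added by this PR
    certificate:             # reference to the secret holding S3 credentials
      kind: secret
      name: s3-credentials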

Related Issue(s)

Fixes #9800

How to test

Use S3 as the in-cluster storage provider in the KOTS config UI. You will see an option to set the region for S3.

Release Notes

[kots] Add option to set the `S3` bucket region used for registry backend

Documentation

  • /werft no-preview
  • /werft publish-to-kots

@nandajavarma nandajavarma requested a review from a team May 24, 2022 07:45
@github-actions github-actions bot added the team: delivery Issue belongs to the self-hosted team label May 24, 2022
@corneliusludmann (Contributor):

Do we need to change the region here as well?

Region: context.Config.Metadata.Region,

@@ -201,6 +201,7 @@ type ContainerRegistryExternal struct {
 type S3Storage struct {
 	Bucket      string    `json:"bucket" validate:"required"`
 	Certificate ObjectRef `json:"certificate" validate:"required"`
+	Region      string    `json:"region"`
Contributor:
Would it make sense to set cfg.Config.Metadata.Region as default value (especially for backwards compatibility)?
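A sketch of what that fallback would mean, assuming the default applies when the field is left empty (values are illustrative):

metadata:
  region: eu-central-1
containerRegistry:
  s3storage:
    bucket: registry-bucket
    # region not set: with the suggested default it would fall back to
    # metadata.region (eu-central-1), keeping existing setups working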

Contributor:

By the way: Couldn't we just set the region of metadata instead?

metadata:
  region: local

What is the reason that we need a specific S3 storage region that differs from the metadata region?

As far as I can see in the code, metadata.region is used for the storage provider, and here:

{Name: "GITPOD_REGION", Value: cfg.Metadata.Region},

I don't know what the env variable is used for, though. Maybe it's also for the storage provider?

nandajavarma (Author):

I understand what you mean; this is probably true, and we don't need a separate region field for each S3 storage. In my mind I was convinced that people might want to use buckets in different regions for different use cases.

Contributor:

If the user decides not to configure the object storage and uses MinIO in-cluster, this will remain as local and will error.

@@ -52,6 +52,13 @@ spec:
         when: '{{repl (ConfigOptionEquals "reg_incluster_storage" "s3") }}'
         help_text: The name of the bucket to act as your S3 storage backend.

+      - name: reg_incluster_storage_s3_region
+        title: S3 bucket region
+        type: text
Contributor:

Can we set local as default value?

nandajavarma (Author):

Very good point! I will add it!

Contributor:

Suggested change:
-        type: text
+        type: text
+        value: "local"

I think it's value that we need here, right?
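Put together, the complete config item with the default might look like this (the help_text here is illustrative, not from the PR):

- name: reg_incluster_storage_s3_region
  title: S3 bucket region
  type: text
  value: "local"
  when: '{{repl (ConfigOptionEquals "reg_incluster_storage" "s3") }}'
  help_text: The region of the S3 bucket used as the registry backend.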

@nandajavarma (Author) commented May 25, 2022

Do we need to change the region here as well?

Region: context.Config.Metadata.Region,

@corneliusludmann This is a good question! But in the case of S3 storage (not the registry), it already accepts the region option and stores it in metadata.region:

yq e -i ".metadata.region = \"{{repl ConfigOption "store_region" }}\"" "${CONFIG_FILE}"

The reason I added a new struct field is that the user might want to use two different S3 regions for the registry bucket and the storage bucket. Not sure if this makes much sense, but the idea of keeping a global configuration value for the region didn't seem right.
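For example, the scenario I had in mind (hypothetical values):

metadata:
  region: eu-west-1          # region used for the storage bucket
containerRegistry:
  s3storage:
    bucket: registry-bucket
    region: us-east-1        # different region just for the registry bucket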

@corneliusludmann (Contributor):

OK, I see, this is for the registry buckets only. Sorry, I missed that. 😇

Thank you for explaining your reasoning. Makes sense and I see the benefit of having a dedicated config value! 👍

I see the following issues with having a second region config for the registry only:

  • Backwards compatibility is harder: currently, setting metadata.region changes both the bucket storage and the registry storage. When a user has changed the metadata.region value but has not set a value for the registry bucket region, this becomes a breaking change. At the very least, we need to make sure that the default value equals metadata.region when the registry bucket region is not set.
  • The meaning of metadata.region is not very obvious once we have a dedicated registry region. Maybe we should think about deprecating metadata.region and adding dedicated storage and registry regions instead.

As long as we don't have an urgent requirement for separate regions for bucket storage and registry storage, I would prefer to just change the KOTS UI so that the S3 region can be changed even when it's used for the internal registry only. And if we decide later that separate configs for both would be beneficial, we should introduce a v2 version of the config for the June release that replaces the metadata.region value.
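Just as a sketch of that idea (this shape is hypothetical; nothing like it exists in the current config):

# v2 idea: dedicated regions instead of a single metadata.region
objectStorage:
  region: eu-west-1          # dedicated storage region
containerRegistry:
  s3storage:
    region: us-east-1        # dedicated registry region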

@mrzarquon What do you think?

@nandajavarma (Author):

Thanks for your detailed thoughts, @corneliusludmann! For now, I will roll back to using metadata.region. We can take this up and add more flexibility later!

@nandajavarma nandajavarma marked this pull request as draft May 25, 2022 09:11
@nandajavarma nandajavarma marked this pull request as ready for review May 25, 2022 09:14
@corneliusludmann (Contributor) commented May 25, 2022

/werft run

👍 started the job as gitpod-build-nvn-fix-9800.7
(with .werft/ from main)

@corneliusludmann (Contributor) left a comment:

There is an issue with the current implementation:

The user could set different values for Container registry → S3 bucket region and Object storage → Storage region. However, we don't support different values and one value will be overwritten.

As far as I know, KOTS doesn't support field validation, so we cannot simply reject the config when users enter different values in the two fields.

What other options do we have?

  1. Move the storage region to a common section at the bottom and make it visible when one of the two (storage or registry) is configured.
  2. Move the storage region to the general section at the top, make it always visible, and call it just “region” (this would fit better with the installer config, where it is just metadata.region).
  3. Live with it as it currently is and find a better approach later.

@lucasvaltl What do you think from a product perspective?

I personally would prefer (2). With a proper help text, we could make clear what it is for. In my opinion, (1) does not fit very well with what we currently have, and it could be confusing when the text field appears/disappears.
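For option (2), the config item could live in the general section along these lines (the name and help text are made up for illustration):

- name: region
  title: Region
  type: text
  value: "local"
  help_text: Region used for both the object storage and the registry S3 bucket (maps to metadata.region in the installer config).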


@lucasvaltl (Contributor):

With very limited context, I agree that (2) sounds like the best approach.

I do have questions, though: is the original problem here that we don't allow setting a region for S3 when S3 is used as the storage for the (in-cluster?) container registry? (If yes, how many customers are using this right now?) And right now we just reuse the value from metadata.region for this, which means metadata.region serves both as the S3 region for the container registry and as the region where the normal blob storage lives, which leads to the UX issues in the KOTS installer UI mentioned above?

@nandajavarma nandajavarma marked this pull request as draft May 25, 2022 14:08
@@ -9,7 +9,7 @@ metadata:
     app: gitpod
     component: gitpod-installer
   annotations:
-    kots.io/when: '{{repl and (ConfigOptionEquals "reg_incluster" "0") (ConfigOptionEquals "reg_incluster_storage" "s3") }}'
+    kots.io/when: '{{repl ConfigOptionEquals "reg_incluster_storage" "s3" }}'
Contributor:
I ran into this today, and this worked!


Successfully merging this pull request may close these issues:

KOTS incluster registry S3 dialog doesn't request region
6 participants