template: Deploy oauth-proxy sidecar for TLS + authentication #54

Merged: 2 commits, Apr 24, 2018

Conversation

@elad661 commented Apr 16, 2018

This PR changes our template to deploy oauth-proxy to perform TLS termination and authentication.

The end goal is that only the Alertmanager service account will be able to trigger heals, and only the Prometheus service account will be able to scrape metrics.

cc @zgalor @moolitayer @jhernand your feedback would be appreciated.

@openshift-ci-robot openshift-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 16, 2018
template.yml Outdated
@@ -157,7 +196,40 @@ objects:
- name: config
configMap:
name: ${NAME}-config
- name: proxy-tls
secret:
secretName: proxy-tls
Contributor

I think this should be ${NAME}-proxy-tls.

template.yml Outdated
secretName: proxy-tls
- name: proxy-cookie
secret:
secretName: ${NAME}-proxy-secret
Contributor

Can this be ...proxy-cookie? That way there is one less name.

Author

I'll change it to ${NAME}-proxy-cookie; that's more consistent with the TLS one.

protocol: TCP
port: 9099
targetPort: 9099
port: 443
Contributor

Why 443? Nothing in this pod is using port 443.

Author

port is not the port on the pod, but rather the port the Service exposes. The Service forwards incoming connections to targetPort on the pod. I set it to 443 because that's the standard port for HTTPS, but it can be any port number you want.
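To illustrate the mapping, here is a minimal Service sketch (the name and selector are hypothetical; 9099 is the pod-side port from the original template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: autoheal          # hypothetical name
spec:
  ports:
  - protocol: TCP
    port: 443             # the port clients connect to on the Service
    targetPort: 9099      # the pod port the Service forwards to
  selector:
    app: autoheal         # hypothetical selector
```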

Contributor

OK, got it, thanks.

template.sh Outdated
@@ -21,11 +21,12 @@

oc process \
--filename=template.yml \
--param=PROXY_SECRET="$(dd if=/dev/urandom count=1 bs=32 2>/dev/null | base64 --wrap=0)" \
Contributor

Can this be OAUTH_COOKIE_SECRET? Or something similar that conveys the fact that this is related to authentication, not to the proxy used to talk to the AWX proxy.

template.yml Outdated
@@ -36,6 +36,10 @@ parameters:
description: |-
The namespace where the auto-heal service will be created.
value: openshift-autoheal
- name: PROXY_SECRET
description: |-
Secret for oauth-proxy authentication cookie.
Contributor

Secret used to encrypt OAuth session cookies?
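Together with the OAUTH_COOKIE_SECRET rename suggested above, the parameter entry might read like this (a sketch of the suggested change, not the merged template):

```yaml
- name: OAUTH_COOKIE_SECRET
  description: |-
    Secret used to encrypt OAuth session cookies.
```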

@elad661 commented Apr 16, 2018

I've addressed the feedback in a separate commit for easier reviewing. I'll squash it when this PR goes out of the "wip" state.

@elad661 commented Apr 16, 2018

Regarding the SAR question, would a SAR that only allows SAs with read access to secrets in the openshift-monitoring namespace to connect to autoheal be considered secure enough?

This is assuming that nobody other than cluster operators or prometheus/alertmanager can read those secrets, and therefore it's supposedly "safe".

@jhernand commented Apr 16, 2018

That subject access review looks OK to me. However, maybe we should create a dummy (and empty) secret in the openshift-autoheal namespace, and then use it instead. This would allow a more explicit set of permissions, and would make them independent of the implementation details of the openshift-monitoring namespace. I mean create a secret like this:

apiVersion: v1
kind: Secret
metadata:
  name: autoheal-access-key
  namespace: openshift-autoheal

Then a role that gives read access to this dummy/empty secret:

apiVersion: authorization.openshift.io/v1
kind: Role
metadata:
  name: autoheal-access
  namespace: openshift-autoheal
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  resourceNames:
  - autoheal-access-key
  verbs:
  - get

Then, in the OAuth configuration, we could use a subject access review that checks if the subject can read this dummy secret:

{
  "namespace": "openshift-autoheal",
  "resource": "secrets",
  "resourceName": "autoheal-access-key",
  "verb": "get",
}

With this we can control who can send alerts explicitly, by granting the autoheal-access role.

@jhernand

Actually, I think that the dummy/empty secret doesn't even need to exist.

@elad661 commented Apr 17, 2018

My problem with that approach is that it means adding a new role and role binding to the resources created by cluster-monitoring-operator. It feels a bit "weird" to touch another component "from the outside", and if we add that role and binding to the cluster-monitoring-operator itself, it blurs the boundaries between the components.

@jhernand

I think that none of the objects need to be created in the openshift-monitoring namespace. The autoheal-access role can be a cluster role, like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: autoheal-access
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  resourceNames:
  - autoheal-access-key
  verbs:
  - get

The role binding can (and must) be created in the openshift-autoheal namespace, so there is no need to interfere with the operator at all:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: openshift-autoheal
  name: alertmanager-autoheal-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: autoheal-access
subjects:
- kind: ServiceAccount
  namespace: openshift-monitoring
  name: <the name of the service account used by the alert manager>

This gives us the freedom to decide exclusively within the openshift-autoheal namespace who has permission to send alerts to the auto-heal service.

@elad661 commented Apr 17, 2018

That sounds reasonable to me; I will update the PR accordingly.

@openshift-ci-robot openshift-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 17, 2018
@elad661 commented Apr 17, 2018

This now uses the custom cluster role as suggested. I still consider this WIP because I haven't yet documented how to configure Alertmanager and Prometheus to perform authentication, and I haven't tested this properly yet.

@elad661 commented Apr 18, 2018

It turns out that there's a problem with the oauth approach: Alertmanager doesn't support this yet. http_config (which is needed for bearer_token_file) is only supported in the 0.15 series of Alertmanager, which is not considered stable yet.

This means that until we can ship a newer Alertmanager, we can't use oauth for authenticating with the autoheal.

@jhernand @ironcladlou we need some other form of authentication, and to do TLS termination without oauth-proxy for now. Our options are either basic auth or a key as a GET parameter.
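For context, this is roughly the receiver configuration that 0.15's http_config would enable (a sketch assuming the 0.15 schema; the URL is illustrative, and the ca_file path matches the one used later in the README):

```yaml
receivers:
- name: autoheal
  webhook_configs:
  - url: https://autoheal.openshift-autoheal.svc/alerts   # illustrative endpoint
    http_config:
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```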

@jhernand commented Apr 18, 2018

@brancz do you know if we will include Alertmanager 0.15 in OpenShift 3.11? If so, I think we should continue working assuming the support for http_config and bearer_token_file will be there.

@ironcladlou

> It turns out that there's a problem with the oauth approach: Alertmanager doesn't support this yet. http_config (which is needed for bearer_token_file) is only supported in the 0.15 series of Alertmanager, which is not considered stable yet.

My inclination is to say let's start deploying with the "unstable" Alertmanager and focus on fixing whatever's broken.

@brancz commented Apr 18, 2018

@ironcladlou normally I'd agree, but in the 0.15 release the entire gossip mechanism was rewritten, so unless we are prepared to spend a significant amount of time stabilizing it, I'm not sure it's the greatest idea. I know @simonpasquier has been working on it a lot lately; maybe he can comment on its stability. (We could also consider this part of the QA exercise to get 0.15 out the door more quickly, since we would start using it.)

@simonpasquier

From my POV, 0.15 RC is quite stable on its own. The main risk is making sure that the cluster is set up properly in a Kubernetes deployment. There's been at least one report (prometheus/alertmanager#1307) of a failed setup due to a DNS issue. On the other hand, users have already tested 0.15 with the Prometheus operator (prometheus/alertmanager#1312), so the situation isn't unknown either.

And maybe another aspect to consider is the lack of encryption support for the cluster traffic (prometheus/alertmanager#1322). But AFAIU it wasn't supported by the Prometheus operator for AM versions prior to 0.15 either.

@brancz commented Apr 18, 2018

Indeed, we had to fix an issue, so we will need to bump both prometheus-operator and Alertmanager if we want to use that version (see https://github.com/coreos/prometheus-operator/releases/tag/v0.18.1).

@elad661 commented Apr 23, 2018

I've managed to deploy Alertmanager 0.15, but I'm having problems with the SAR in oauth-proxy:

authorizer reason: User "system:serviceaccount:openshift-monitoring:alertmanager-main" cannot get secrets in project "openshift-autoheal"

Looks like it ignores the resourceName setting. I tried a bunch of options (including removing -openshift-sar and leaving only openshift-delegate-urls) and none of them worked.

Perhaps we need to use NonResourceURLs for this instead, as seen here.

Or maybe I'm missing something obvious. Any advice would be welcome.

@@ -111,6 +115,33 @@ objects:
username: ${AWX_USER}
password: ${AWX_PASSWORD}

- apiVersion: rbac.authorization.k8s.io/v1beta1


Nit: you should be able to use the system:auth-delegator ClusterRole, which already exists in OpenShift/Kubernetes and is equivalent to the role created here.
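For reference, adopting the existing role would reduce the template to a single binding, roughly like this (a sketch; the binding name and subject are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${NAME}-auth-delegator       # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator        # pre-existing in OpenShift/Kubernetes
subjects:
- kind: ServiceAccount
  namespace: openshift-autoheal
  name: ${NAME}                      # assumed: the service account the proxy runs as
```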

Author

done!

template.yml Outdated
- --tls-key=/etc/tls/private/tls.key
- -http-address=
- -email-domain=*
- '-openshift-sar={"resource": "", "verb": "get"}'


Just a random thought, but have you tried this?

-openshift-sar={"resource": "secrets", "verb": "get", "resourceName": "autoheal-access-key", "namespace": "openshift-autoheal"}

I've used a similar approach in a template of mine and it worked fine (see here).

AFAIU service accounts use tokens so they are only affected by openshift-sar, not openshift-delegate-urls.

Author

Tried that, no luck.

Contributor

@elad661 can you share the actual definitions of the autoheal-access role and the alertmanager-autoheal-access role binding? I would like to check that they have been created as specified, and that nothing has been ignored. I mean the output of `oc get ... -o yaml`.

It would also be nice to check the permission with `oc policy who-can get secret autoheal-access-key -n openshift-autoheal`.

It is also useful to run `oc auth can-i get secret/autoheal-access-key`.

If you use the `--v=8` option with those commands, you will also see the request and response bodies sent to the API server, which can help you build the right SAR to send.

Author

With openshift-delegate-urls, I'm getting authorizer reason: User "system:serviceaccount:openshift-monitoring:alertmanager-main" cannot get secrets in project "openshift-autoheal"

Without it (and with the SAR mentioned by @simonpasquier) I don't get any log lines in oauth-proxy, but Alertmanager still gets HTTP 403.

`oc get clusterrole autoheal-access -o yaml`:

apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"autoheal-access","namespace":""},"rules":[{"apiGroups":[""],"resourceNames":["autoheal-access-key"],"resources":["secrets"],"verbs":["get"]}]}
  creationTimestamp: 2018-04-24T09:32:02Z
  name: autoheal-access
  resourceVersion: "3838"
  selfLink: /apis/authorization.openshift.io/v1/clusterroles/autoheal-access
  uid: 58240e0d-47a2-11e8-aede-42010a8e0002
rules:
- apiGroups:
  - ""
  attributeRestrictions: null
  resourceNames:
  - autoheal-access-key
  resources:
  - secrets
  verbs:
  - get

`oc get rolebinding alertmanager-autoheal-access -o yaml`:

apiVersion: authorization.openshift.io/v1
groupNames: null
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"alertmanager-autoheal-access","namespace":"openshift-autoheal"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"autoheal-access"},"subjects":[{"kind":"ServiceAccount","name":"alertmanager-main","namespace":"openshift-monitoring"}]}
  creationTimestamp: 2018-04-24T09:32:02Z
  name: alertmanager-autoheal-access
  namespace: openshift-autoheal
  resourceVersion: "3839"
  selfLink: /apis/authorization.openshift.io/v1/namespaces/openshift-autoheal/rolebindings/alertmanager-autoheal-access
  uid: 586f8e4a-47a2-11e8-aede-42010a8e0002
roleRef:
  name: autoheal-access
subjects:
- kind: ServiceAccount
  name: alertmanager-main
  namespace: openshift-monitoring
userNames:
- system:serviceaccount:openshift-monitoring:alertmanager-main

Could this be because one of them is a RoleBinding and the other is a ClusterRole? Should I change this to a ClusterRoleBinding?

 oc policy who-can get secret autoheal-access-key -n openshift-autoheal --v=8
I0424 10:52:55.146753    3139 loader.go:357] Config loaded from file /root/.kube/config
I0424 10:52:55.166488    3139 request.go:874] Request Body: {"kind":"LocalResourceAccessReview","apiVersion":"authorization.openshift.io/v1","namespace":"","verb":"get","resourceAPIGroup":"","resourceAPIVersion":"","resource":"secrets","resourceName":"autoheal-access-key","path":"","isNonResourceURL":false,"content":null}
I0424 10:52:55.166592    3139 round_trippers.go:383] POST https://internal-api.elad-dev.origin-gce.dev.openshift.com:8443/apis/authorization.openshift.io/v1/namespaces/openshift-autoheal/localresourceaccessreviews
I0424 10:52:55.166622    3139 round_trippers.go:390] Request Headers:
I0424 10:52:55.166632    3139 round_trippers.go:393]     Accept: application/json, */*
I0424 10:52:55.166641    3139 round_trippers.go:393]     Content-Type: application/json
I0424 10:52:55.166649    3139 round_trippers.go:393]     User-Agent: oc/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8
I0424 10:52:55.180299    3139 round_trippers.go:408] Response Status: 201 Created in 13 milliseconds
I0424 10:52:55.180336    3139 round_trippers.go:411] Response Headers:
I0424 10:52:55.180346    3139 round_trippers.go:414]     Content-Type: application/json
I0424 10:52:55.180354    3139 round_trippers.go:414]     Content-Length: 1189
I0424 10:52:55.180362    3139 round_trippers.go:414]     Date: Tue, 24 Apr 2018 10:52:55 GMT
I0424 10:52:55.180372    3139 round_trippers.go:414]     Cache-Control: no-store
I0424 10:52:55.180414    3139 request.go:874] Response Body: {"kind":"ResourceAccessReviewResponse","apiVersion":"authorization.openshift.io/v1","namespace":"openshift-autoheal","users":["system:admin","system:kube-controller-manager","system:serviceaccount:kube-system:clusterrole-aggregation-controller","system:serviceaccount:kube-system:generic-garbage-collector","system:serviceaccount:kube-system:namespace-controller","system:serviceaccount:kube-system:persistent-volume-binder","system:serviceaccount:openshift-infra:build-controller","system:serviceaccount:openshift-infra:cluster-quota-reconciliation-controller","system:serviceaccount:openshift-infra:ingress-to-route-controller","system:serviceaccount:openshift-infra:service-serving-cert-controller","system:serviceaccount:openshift-infra:serviceaccount-pull-secrets-controller","system:serviceaccount:openshift-infra:template-instance-controller","system:serviceaccount:openshift-infra:template-service-broker","system:serviceaccount:openshift-monitoring:alertmanager-main","system:serviceaccount:openshift-monitoring:clu [truncated 165 chars]
Namespace: openshift-autoheal
Verb:      get
Resource:  secrets

Users:  system:admin
        system:kube-controller-manager
        system:serviceaccount:kube-system:clusterrole-aggregation-controller
        system:serviceaccount:kube-system:generic-garbage-collector
        system:serviceaccount:kube-system:namespace-controller
        system:serviceaccount:kube-system:persistent-volume-binder
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:cluster-quota-reconciliation-controller
        system:serviceaccount:openshift-infra:ingress-to-route-controller
        system:serviceaccount:openshift-infra:service-serving-cert-controller
        system:serviceaccount:openshift-infra:serviceaccount-pull-secrets-controller
        system:serviceaccount:openshift-infra:template-instance-controller
        system:serviceaccount:openshift-infra:template-service-broker
        system:serviceaccount:openshift-monitoring:alertmanager-main
        system:serviceaccount:openshift-monitoring:cluster-monitoring-operator
        system:serviceaccount:openshift-monitoring:prometheus-operator

Groups: system:cluster-admins
        system:masters

And when logged in as the alertmanager-main serviceaccount:

# oc auth can-i get secret/autoheal-access-key -n openshift-autoheal
yes

@jhernand commented Apr 24, 2018

Yes, I think that the problem is that OpenShift is ignoring the kind of role that you specified in the role binding. You said this:

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: autoheal-access

But it only remembered this:

roleRef:
  name: autoheal-access

So that role binding only applies to a non-existent autoheal-access role inside the openshift-autoheal namespace.

When I tested this I did it in a Kubernetes 1.10 cluster. It may be that the behaviour has not been backported, or that it just isn't available in OpenShift 3.10.

@ironcladlou do you know who can best help us with this issue?

@elad661 if the above is accurate, and if you are logged in with the alert manager service account, then the response should have been no. Are you sure you used that service account? Can you share the --v=8 output of that command?

Author

# oc whoami
system:serviceaccount:openshift-monitoring:alertmanager-main
# oc auth can-i get secret/autoheal-access-key -n openshift-autoheal --v=8
I0424 11:09:50.783564   10534 loader.go:357] Config loaded from file /root/.kube/config
I0424 11:09:50.803465   10534 request.go:874] Request Body: {"kind":"SelfSubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"resourceAttributes":{"namespace":"openshift-autoheal","verb":"get","resource":"secrets","name":"autoheal-access-key"}},"status":{"allowed":false}}
I0424 11:09:50.803568   10534 round_trippers.go:383] POST https://internal-api.elad-dev.origin-gce.dev.openshift.com:8443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews
I0424 11:09:50.803581   10534 round_trippers.go:390] Request Headers:
I0424 11:09:50.803590   10534 round_trippers.go:393]     Accept: application/json, */*
I0424 11:09:50.803600   10534 round_trippers.go:393]     Content-Type: application/json
I0424 11:09:50.803616   10534 round_trippers.go:393]     User-Agent: oc/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8
I0424 11:09:50.803628   10534 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhbGVydG1hbmFnZXItbWFpbi10b2tlbi04NnhtNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhbGVydG1hbmFnZXItbWFpbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI2MTQ0NDc3LTQ3YTEtMTFlOC1hZWRlLTQyMDEwYThlMDAwMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpvcGVuc2hpZnQtbW9uaXRvcmluZzphbGVydG1hbmFnZXItbWFpbiJ9.i7MhpIbZ2T2RZVYXBIZLtcJSuRZ6z5lNOiSbwe_w7vaB3lyapstX91EtnjlyY58whV4vJyg5E5yynhSBX7XLof3qvNQnkHbtLdXFw8GK2GnlY1jm6sSKq5wzZOm99SjYHGgqmA_IQntrwfCJHiUpUmnFMLF8BNKEMj_Rvsub620Ed6kIEQJh09HMGH8ciVCKROBQSTWRCX19tuAHfqA32HxW5AxFiplmKK8KEvnFbQqPCg64pD8B_tbqNBBLTPUprPXly1kP04GJJbIwq6K57K-qf8zf7l3wUZee7aIzvCv2Xjr-dyT_VxVXjnUcHwtGqzso_hS9_9JBxj45W2ELrA
I0424 11:09:50.817093   10534 round_trippers.go:408] Response Status: 201 Created in 13 milliseconds
I0424 11:09:50.817157   10534 round_trippers.go:411] Response Headers:
I0424 11:09:50.817167   10534 round_trippers.go:414]     Date: Tue, 24 Apr 2018 11:09:50 GMT
I0424 11:09:50.817176   10534 round_trippers.go:414]     Cache-Control: no-store
I0424 11:09:50.817188   10534 round_trippers.go:414]     Content-Type: application/json
I0424 11:09:50.817196   10534 round_trippers.go:414]     Content-Length: 309
I0424 11:09:50.817242   10534 request.go:874] Response Body: {"kind":"SelfSubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"resourceAttributes":{"namespace":"openshift-autoheal","verb":"get","resource":"secrets","name":"autoheal-access-key"}},"status":{"allowed":true,"reason":"allowed by openshift authorizer"}}
yes

Author

Aha! It should be `name`, not `resourceName`!
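In other words, the flag that ends up working is the SAR suggested earlier, with `name` in place of `resourceName`:

```
-openshift-sar={"namespace": "openshift-autoheal", "resource": "secrets", "name": "autoheal-access-key", "verb": "get"}
```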

@elad661 commented Apr 24, 2018

With the last commit, alerts are properly passed from Alertmanager to Autoheal! It turns out SARs should use `name`, not `resourceName`, and all the examples that use `resourceName` are incorrect.

I'm removing the WIP label. I can squash the commits later or right now, whatever you prefer for easier review.

@elad661 elad661 changed the title [WIP] template: Deploy oauth-proxy sidecar for TLS + authentication template: Deploy oauth-proxy sidecar for TLS + authentication Apr 24, 2018
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 24, 2018
@jhernand

Great @elad661! Can you squash now, please?

@elad661 commented Apr 24, 2018

@jhernand squashed.

@jhernand left a comment

Looks good. Just some very minor comments.

README.md Outdated
to configure alerts.

For reference, here is an example Alertmanager configuration that sends
an alert to Autoheal with authentication. This example assumes autoheal
Contributor

... to the auto-heal service with ...

That is how we call ourselves: the auto-heal service.

Author

Done.

README.md Outdated
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```

When using the cluster-monitoring-operator, save the configuration as `alertmanager.yaml` and use this command to apply it:
Contributor

Try to wrap lines at 80 columns.

Author

Done.

README.md Outdated

When using the cluster-monitoring-operator, save the configuration as `alertmanager.yaml` and use this command to apply it:

`oc -n openshift-monitoring create secret generic alertmanager-main --from-literal=alertmanager.yaml="$(< alertmanager.yaml)" --dry-run -oyaml | oc -n openshift-monitoring replace secret --filename=-`
Contributor

This is more readable if you use a code block, and split the lines:

```
oc create secret generic alertmanager-main \
--namespace openshift-monitoring \
--from-literal=... \
--dry-run \
--output=yaml \
| \
oc replace secret \
--namespace=openshift-monitoring \
--filename=-
```

Author

I copied this from the CoreOS monitoring documentation (and just modified it for OpenShift). I think it's better as a one-liner because it's easier to copy and paste, but I can change this.

Author

Done.

template.yml Outdated
metadata:
name: ${NAME}-access
rules:
- apiGroups:
Contributor

Can you use the same indenting style as in the rest of the files?

rules:
- apiGroups:
  - ""
  resources:
...

Author

Done

@jhernand jhernand added lgtm Indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Apr 24, 2018
@openshift-merge-robot openshift-merge-robot merged commit 7f71db0 into openshift:master Apr 24, 2018
fengxsong pushed a commit to fengxsong/autoheal that referenced this pull request Jan 18, 2024
template: Deploy oauth-proxy sidecar for TLS + authentication
Former-commit-id: 7f71db0