template: Deploy oauth-proxy sidecar for TLS + authentication #54

Merged: 2 commits, Apr 24, 2018

50 changes: 50 additions & 0 deletions README.md
@@ -270,6 +270,56 @@ awxJob:
}
```

### Alertmanager Configuration

Follow the upstream [Prometheus Alertmanager documentation](https://prometheus.io/docs/alerting/configuration/)
to configure alerts.

For reference, here is an example Alertmanager configuration that sends
alerts to the auto-heal service with authentication. The example assumes
that autoheal and Alertmanager run on the same OpenShift cluster, and it
requires Alertmanager 0.15 or newer.

```yaml
global:
  resolve_timeout: 1m

route:
  group_wait: 1s
  group_interval: 1s
  repeat_interval: 5m
  receiver: autoheal
  routes:
  - match:
      alertname: DeadMansSwitch
    repeat_interval: 5m
    receiver: autoheal
receivers:
- name: default
- name: deadmansswitch
- name: autoheal
  webhook_configs:
  - url: https://autoheal.openshift-autoheal.svc/alerts
    http_config:
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```
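
Before loading the file into the cluster, it can be worth validating it
locally. A minimal sketch, assuming the `amtool` utility that ships with
recent Alertmanager releases is on your `PATH`:

```sh
# Validate the Alertmanager configuration file before applying it.
amtool check-config alertmanager.yaml
```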

When using the cluster-monitoring-operator, save the configuration as
`alertmanager.yaml` and use this command to apply it:

```
oc create secret generic alertmanager-main \
--namespace=openshift-monitoring \
--from-literal=alertmanager.yaml="$(< alertmanager.yaml)" \
--dry-run -oyaml \
| \
oc replace secret \
--namespace=openshift-monitoring \
--filename=-
```
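
To confirm that the secret now carries the new configuration, you can read
it back and decode it. A minimal check, assuming the `oc` client and the
same `openshift-monitoring` namespace as above:

```sh
# Read the stored configuration back out of the secret and base64-decode it.
oc get secret alertmanager-main \
  --namespace=openshift-monitoring \
  --output=jsonpath='{.data.alertmanager\.yaml}' \
  | base64 --decode
```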

## Building

To build the binary, run this command:
3 changes: 2 additions & 1 deletion template.sh
@@ -21,11 +21,12 @@

oc process \
--filename=template.yml \
--param=OAUTH_PROXY_SECRET="$(dd if=/dev/urandom count=1 bs=32 2>/dev/null | base64 --wrap=0)" \
--param=AWX_ADDRESS="https://my-awx.example.com/api" \
--param=AWX_USER="$(echo -n 'autoheal' | base64 --wrap=0)" \
--param=AWX_PASSWORD="$(echo -n 'redhat123' | base64 --wrap=0)" \
| \
oc create --filename=-
oc apply --filename=-

# Add a line like this if you have a `ca.crt` file containing the CA
# certificates needed to connect to the AWX server:
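The concrete line is hidden in the collapsed part of this diff. A
hypothetical form, inferred from the `ca.crt: ${AWX_CA}` entry in
template.yml and the base64-encoded parameters above:

```sh
# Hypothetical: pass the CA bundle base64-encoded, like the other
# secret-backed parameters (the actual line is collapsed in this diff).
--param=AWX_CA="$(base64 --wrap=0 < ca.crt)" \
```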
94 changes: 90 additions & 4 deletions template.yml
@@ -36,6 +36,10 @@ parameters:
  description: |-
    The namespace where the auto-heal service will be created.
  value: openshift-autoheal
- name: OAUTH_PROXY_SECRET
  description: |-
    Secret used to encrypt OAuth session cookies.
  value: openshift-monitoring
- name: AWX_ADDRESS
  description: |-
    The URL of the AWX API endpoint, including the `/api` suffix, but not the
@@ -111,6 +115,19 @@ objects:
    username: ${AWX_USER}
    password: ${AWX_PASSWORD}

- apiVersion: rbac.authorization.k8s.io/v1beta1

Reviewer:

Nit: you should be able to use the system:auth-delegator ClusterRole which already exists in OpenShift/Kubernetes and is equivalent to the role created here.

Author:

done!

  kind: ClusterRoleBinding
  metadata:
    name: ${NAME}-auth-delegator
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:auth-delegator
  subjects:
  - kind: ServiceAccount
    name: ${NAME}
    namespace: ${NAMESPACE}

- apiVersion: v1
  kind: Secret
  metadata:
@@ -119,6 +136,42 @@
  data:
    ca.crt: ${AWX_CA}

- apiVersion: v1
  kind: Secret
  metadata:
    namespace: ${NAMESPACE}
    name: ${NAME}-proxy-cookie
  data:
    session_secret: ${OAUTH_PROXY_SECRET}

- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ${NAME}-access
  rules:
  - apiGroups:
    - ""
    resources:
    - secrets
    resourceNames:
    - ${NAME}-access-key
    verbs:
    - get

- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    namespace: ${NAMESPACE}
    name: alertmanager-${NAME}-access
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: ${NAME}-access
  subjects:
  - kind: ServiceAccount
    namespace: openshift-monitoring
    name: alertmanager-main

- apiVersion: v1
  kind: ConfigMap
  metadata:
@@ -157,7 +210,38 @@
      - name: config
        configMap:
          name: ${NAME}-config
      - name: ${NAME}-proxy-tls
        secret:
          secretName: ${NAME}-proxy-tls
      - name: ${NAME}-proxy-cookie
        secret:
          secretName: ${NAME}-proxy-cookie
      containers:
      - name: oauth-proxy
        image: openshift/oauth-proxy:v1.1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8443
          name: public
        args:
        - --https-address=:8443
        - --provider=openshift
        - --openshift-service-account=${NAME}
        - --upstream=http://localhost:9099
        - --tls-cert=/etc/tls/private/tls.crt
        - -email-domain=*
        - '-openshift-sar={"resource": "secrets", "verb": "get", "name": "${NAME}-access-key", "namespace": "${NAMESPACE}"}'
        - '-openshift-delegate-urls={"/": {"resource": "secrets", "verb": "get", "name": "${NAME}-access-key", "namespace": "${NAMESPACE}"}}'
        - -tls-key=/etc/tls/private/tls.key
        - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
        - -cookie-secret-file=/etc/proxy/secrets/session_secret
        - -openshift-ca=/etc/pki/tls/cert.pem
        - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        volumeMounts:
        - mountPath: /etc/tls/private
          name: ${NAME}-proxy-tls
        - mountPath: /etc/proxy/secrets
          name: ${NAME}-proxy-cookie
      - name: service
        image: openshift/origin-autoheal:latest
        imagePullPolicy: IfNotPresent
@@ -171,15 +255,17 @@
        - --config-file=/etc/autoheal/autoheal.yml
        - --logtostderr

- kind: Service
- apiVersion: v1
  kind: Service
  metadata:
    namespace: ${NAMESPACE}
    name: ${NAME}
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: ${NAME}-proxy-tls
  spec:
    selector:
      app: ${NAME}
    ports:
    - name: autoheal
      protocol: TCP
      port: 9099
      targetPort: 9099
      port: 443
Contributor:

Why 443? Nothing in this pod is using port 443.

Author:

`port` is not the port on the pod, but rather the port the Service exposes; the Service forwards incoming connections to `targetPort` on the pod. I set it to 443 because that's the standard port for HTTPS, but it can be any port number you want.

Contributor:

OK, got it, thanks.

      targetPort: 8443
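
As a final sanity check, a request against the Service should now terminate
TLS at the oauth-proxy sidecar and succeed only with a token that passes the
SAR configured above. A hypothetical smoke test, run from a pod inside the
cluster and assuming the default template parameters:

```sh
# Hypothetical smoke test: the Service listens on 443 and forwards to
# oauth-proxy on 8443, which proxies authorized requests to autoheal on
# localhost:9099. The token's owner must be allowed to `get` the
# ${NAME}-access-key secret, per the -openshift-delegate-urls flag.
curl --insecure \
  --header "Authorization: Bearer $(oc whoami --show-token)" \
  https://autoheal.openshift-autoheal.svc/alerts
```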