title | authors | creation-date | last-updated | status
---|---|---|---|---
Support Knative Service for Triggers EventListener Pod | | 2020-07-28 | 2020-08-25 | implemented
- Summary
- Motivation
- Proposal
- Design Details
- Advantages
- Test Plan
- Alternatives
- Open Points
- Implementation PRs
This proposal enables users to deploy the Triggers EventListener as a Knative Service alongside the existing Kubernetes Deployment. Users can also bring their own CRD, provided it follows a defined contract for spec and status.
Original Google Doc proposal visible to members of tekton-dev@: design doc
Triggers, in conjunction with Pipelines, enables the creation of full-fledged CI/CD systems. The Triggers EventListener processes incoming HTTP-based events with JSON payloads, so it should be able to handle bursts of requests without dropping any of them, scale up and down automatically based on load, and consume no resources while idle. To achieve this, Triggers should be flexible enough to support different deployment approaches.
The first goal is to get the benefits of serverless features.
The second goal is to give users the flexibility to bring their own CRD, with a specified spec and status contract, to deploy the Triggers EventListener pod.
The third goal is to make use of the PodSpecable duck type for the existing Kubernetes-based Deployment.
Deploying and maintaining dependencies is not a goal of this proposal. For example, deploying and maintaining Knative will not be part of Triggers.
Allow Triggers to deploy the EventListener pod as a Knative Service, alongside the existing Kubernetes Deployment, in order to support serverless functionality.
Knative (pronounced kay-nay-tiv) is an open source community project which adds components for deploying, running, and managing serverless, cloud-native applications to Kubernetes. (In short, serverless lets developers focus on their code and mostly ignore the infrastructure.)
Find more information on Knative here.
With Knative Service support, Triggers gets serverless features without any additional configuration.
Users deploy Tekton Triggers and configure their application to perform some action based on events. Consider some scenarios for GitHub actions:
- A user configures a Triggers EventListener to watch for GitHub actions that happen infrequently, for example closing a PR. For these kinds of events the EventListener pod keeps running when deployed as a Kubernetes Deployment and consumes resources (e.g. CPU, memory). To avoid unnecessary resource usage, Triggers should give the user the flexibility to deploy the EventListener as a Knative Service, which scales the instance down to 0 while idle.
- A user configures a Triggers EventListener to watch for GitHub actions on an active repository where things execute very frequently. If the EventListener pod is deployed as a Kubernetes Deployment, autoscaling needs to be handled by Triggers explicitly (maybe via an HPA) in order to support the maximum number of requests, whereas a Knative Service solves the autoscaling problem by default (using the KPA). This is a somewhat rare case, though, because Triggers is currently in alpha and the requests-per-second usage is not yet known.
To summarize, Knative promises not only scale-out (and, let's be honest, there likely won't be millions of events) but also scale-to-zero. As a serverless solution, Knative is pay-per-use, so cost ideally follows actual usage.
Knative is built on top of Kubernetes and its YAML looks like a Deployment, so users should have no difficulty with respect to usage. Knative also handles Kubernetes Service creation by default with public accessibility, which in turn is exposed as part of the EventListener status address.
Triggers gives users the flexibility to deploy a Knative Service, but not the installation of Knative itself; it is the user's responsibility to have Knative running beforehand.
For backward compatibility, the default behavior will remain as it is for a few releases.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
name: github-listener-interceptor
spec:
serviceAccountName: tekton-triggers-example-sa
serviceType: NodePort
podTemplate:
nodeSelector:
app: test
tolerations:
- key: key
value: value
operator: Equal
effect: NoSchedule
triggers:
- name: foo-trig
interceptors:
- github:
secretRef:
secretName: foo
secretKey: bar
eventTypes:
- pull_request
bindings:
- ref: pipeline-binding
template:
name: pipeline-template
This is exactly the same as what we have right now by default.
The reason to move serviceAccountName and podTemplate to the kubernetesResource field is that they are part of the WithPodSpec{} duck type, which makes it possible to support any pod and container field without hardcoding it in podTemplate.
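As a rough sketch (field names here are illustrative, not necessarily the final API), the kubernetesResource field can embed the WithPodSpec duck type from knative.dev/pkg so that any PodTemplateSpec field is accepted without being hardcoded:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	duckv1 "knative.dev/pkg/apis/duck/v1"
)

// Resources selects how the EventListener sink is deployed.
// (A customResource option is introduced later in this proposal.)
type Resources struct {
	KubernetesResource *KubernetesResource `json:"kubernetesResource,omitempty"`
}

// KubernetesResource embeds the WithPodSpec duck type, so serviceAccountName,
// nodeSelector, tolerations, and other pod/container fields come for free
// instead of being hardcoded in podTemplate.
type KubernetesResource struct {
	ServiceType        corev1.ServiceType `json:"serviceType,omitempty"`
	duckv1.WithPodSpec `json:"spec,omitempty"`
}
```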
1. If the user specifies podSpec fields:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
name: github-listener-interceptor
spec:
triggers:
- name: foo-trig
interceptors:
- github:
secretRef:
secretName: foo
secretKey: bar
eventTypes:
- pull_request
bindings:
- ref: pipeline-binding
template:
name: pipeline-template
resources:
kubernetesResource:
serviceType: NodePort
spec:
template:
metadata:
annotations:
k8s.based.annotation: "value"
spec:
serviceAccountName: tekton-triggers-github-sa
nodeSelector:
app: test
tolerations:
- key: key
value: value
operator: Equal
effect: NoSchedule
2. If the user wants to go with the default values for the podSpec fields, there is no need to specify resources; in that case Triggers deploys a Kubernetes Deployment with default values, and the YAML looks something like below.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
name: github-listener-interceptor
spec:
triggers:
- name: foo-trig
interceptors:
- github:
secretRef:
secretName: foo
secretKey: bar
eventTypes:
- pull_request
bindings:
- ref: pipeline-binding
template:
name: pipeline-template
To support Knative Service along with Kubernetes Deployment, we use customResource raw data so that it can be any CRD, such as serving.knative.dev (a sketch of the corresponding Go field follows the example below).
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
name: github-listener-interceptor
spec:
triggers:
- name: foo-trig
interceptors:
- github:
secretRef:
secretName: foo
secretKey: bar
eventTypes:
- pull_request
bindings:
- ref: pipeline-binding
template:
name: pipeline-template
resources:
customResource:
apiVersion: serving.knative.dev/v1 #It can be any CRD (foo.bar.com)
kind: Service
metadata:
labels:
serving.knative.dev/visibility: "cluster-local"
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
serviceAccountName: tekton-triggers-github-sa
nodeSelector:
app: test
tolerations:
- key: key
value: value
operator: Equal
effect: NoSchedule
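A rough sketch of how the customResource field could be represented (illustrative; the idea is simply to carry the raw manifest as a Kubernetes runtime.RawExtension so Triggers does not have to compile in the CRD's Go types):

```go
package v1alpha1

import "k8s.io/apimachinery/pkg/runtime"

// CustomResource carries the raw manifest of any CRD (for example a
// Knative Service) that satisfies the spec/status contract described
// in the next section. Because it is stored as raw bytes, Triggers does
// not need to vendor the CRD's Go types.
type CustomResource struct {
	runtime.RawExtension `json:",inline"`
}
```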
The main goal of this TEP is to make Triggers flexible enough to accept any CRD in order to create an EventListener pod.
Kubernetes Deployment and Knative Service (or any custom CRD) have PodSpec as a common sub-field, so using the WithPodSpec and WithPod{} duck types, respectively, helps users configure podSpec fields. Note: Duck typing in computer programming is an application of the duck test: "If it walks like a duck and it quacks like a duck, then it must be a duck"... -Wikipedia
Right now the Triggers EventListener is focused on supporting Kubernetes Deployment and Knative Service, but in the future there may be a need to support a new CRD (e.g. foo.bar.com). To standardize the implementation, the following contract has been created, so whoever implements support for a new CRD in Triggers must make that CRD satisfy the contract below.
A Knative Service or any new custom CRD should satisfy the WithPod{} duck type for its spec (a Go sketch of this shape follows the skeleton):
spec:
template:
metadata:
spec:
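For reference, the WithPod{} shape referred to above looks roughly like this (as defined in knative.dev/pkg/apis/duck/v1; reproduced here as a sketch):

```go
package v1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodSpecable is any template that looks like a PodTemplateSpec, which is
// what Deployment, StatefulSet, and the Knative Service revision template
// all have in common.
type PodSpecable corev1.PodTemplateSpec

// WithPod is the duck-typed shell: a CRD satisfies the spec contract if its
// spec.template deserializes into this shape.
type WithPod struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec WithPodSpec `json:"spec,omitempty"`
}

// WithPodSpec wraps the pod template within WithPod.
type WithPodSpec struct {
	Template PodSpecable `json:"template,omitempty"`
}
```

The status side of the contract is the Addressable status shown below: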
type EventListenerStatus struct {
duckv1beta1.Status `json:",inline"`
// EventListener is Addressable. It currently exposes the service DNS
// address of the EventListener sink
duckv1alpha1.AddressStatus `json:",inline"`
}
Below are a few basic high-level validations:
- If no resources field is specified as part of the EventListener, the Kubernetes Deployment will be created with default values, matching the existing behavior, because serviceAccountName is now optional and, if not provided, the default service account will be used (tracked by this issue).
- If resources is provided, it can have either kubernetesResource or customResource at a time, not both.
- Validation of all the podSpec and containerSpec fields:
  - If the user provides podSpec or containerSpec fields that are not supported, the Triggers webhook gives an error like:
    admission webhook "validation.webhook.triggers.tekton.dev" denied the request: validation failed: must not set the field(s): spec.template.spec.containers[0].image
  - More than one container should not be allowed.
(A hypothetical sketch of these checks is shown below.)
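A hypothetical sketch of the mutual-exclusion and single-container checks described above (names are illustrative, not the actual Triggers webhook code):

```go
package v1alpha1

import (
	"errors"
	"fmt"
)

// validateResources is an illustrative sketch of the checks listed above.
// hasKubernetesResource / hasCustomResource indicate which of the resources
// fields were set on the EventListener; containerCount is the number of
// containers supplied in the pod template.
func validateResources(hasKubernetesResource, hasCustomResource bool, containerCount int) error {
	// resources may contain either kubernetesResource or customResource, not both.
	if hasKubernetesResource && hasCustomResource {
		return errors.New("expected at most one of kubernetesResource or customResource, got both")
	}
	// The EventListener sink is expected to run as a single container.
	if containerCount > 1 {
		return fmt.Errorf("expected at most one container, got %d", containerCount)
	}
	return nil
}
```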
- The Triggers EventListener now gets serverless features by default.
- Along with Knative, the Triggers EventListener supports any CRD that satisfies the contract.
- No management of dependencies (e.g. Knative).
- e2e and unit tests (a unit-test sketch is shown below)
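As an illustration of the unit-test level, a table-driven test over the validation sketch above might look like the following (hypothetical, mirroring the checks listed in the validation section):

```go
package v1alpha1

import "testing"

// TestValidateResources exercises the illustrative validation sketch:
// setting both resources at once or more than one container must be rejected.
func TestValidateResources(t *testing.T) {
	tests := []struct {
		name           string
		hasK8s, hasCR  bool
		containerCount int
		wantErr        bool
	}{
		{name: "kubernetesResource only", hasK8s: true, containerCount: 1, wantErr: false},
		{name: "customResource only", hasCR: true, containerCount: 1, wantErr: false},
		{name: "both resources set", hasK8s: true, hasCR: true, containerCount: 1, wantErr: true},
		{name: "more than one container", hasK8s: true, containerCount: 2, wantErr: true},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			err := validateResources(tc.hasK8s, tc.hasCR, tc.containerCount)
			if (err != nil) != tc.wantErr {
				t.Errorf("validateResources() error = %v, wantErr %v", err, tc.wantErr)
			}
		})
	}
}
```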
We could achieve the above proposal based on a direct Knative dependency, but with that implementation we would have Knative vendored and no way to support other CRDs.
As per the proposal, kubernetesResource will have serviceType and spec (the WithPodSpec{} duck type), so the created Deployment/Service will get annotation/label information if it is provided as part of the EventListener.
However, there is a discussion thread here making the point that the user should have a way to specify annotations/labels for the Deployment/Service.
This is not a blocker to proceeding with the implementation, but it can be considered and addressed if a real use case arises in the future, or before moving to beta, so it is added here as an open point.