A better story for sub-routes. #3419
During the API WG discussion on Wednesday, we kicked around the prospects of:
One problem with the first is that there isn't anything to decorate with a label if you want a particular Kevin to become internal-only, which was a nice property when I was thinking about this... One problem with the second is that the names are independent of the URLs, and we enter the same problematic territory that K8s Ingress has around possibly conflicting URL definitions. This also doesn't handle the internal-only label problem, because Ingress is "lower-level" than where we'd make that decision.
/milestone Serving 0.6
/assign
@tcnghia and I discussed this a bit a week or so ago, and I wanted to surface the current thinking here. I'll start with the broad strokes, so folks can tl;dr the rest.
Sticking with a single ClusterIngress punts on a variety of migration issues that we'd have had sharding the programming under N Kevins. In principle, as structured above, the placeholder services being 1:1 with Kevin give us all of the nice properties we want without the need for our own Kevin resource.
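For illustration, here is a minimal sketch (in Go, against client-go types; the naming scheme and label key are assumptions, not the actual controller code) of the kind of per-Kevin placeholder Service discussed above: a selector-less Service that exists only to reserve the sub-route's name and to carry metadata.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makePlaceholderService sketches a placeholder K8s Service that is 1:1 with a
// "Kevin" (tagged sub-route). It has no selector: it exists only to reserve the
// name and to hang labels (e.g. a future visibility label) off of.
func makePlaceholderService(routeName, routeNamespace, tag string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      routeName + "-" + tag, // the proposed {route}-{name} scheme
			Namespace: routeNamespace,
			Labels: map[string]string{
				// Assumed label key; lets the controller list services by route.
				"serving.knative.dev/route": routeName,
			},
			// OwnerReferences back to the Route would be set here so garbage
			// collection cleans these up with the Route.
		},
		Spec: corev1.ServiceSpec{
			// No selector: traffic is actually steered by the ClusterIngress;
			// this object only reserves the name.
		},
	}
}
```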
I think this work should start with some cleanup of the Route controller, which is currently a bit messy. Currently the Route controller builds its traffic map with the default (untagged) traffic under a special "defaultName" key.
I feel like we should replace the key-space with the names of kevins; a sketch of what I mean follows.
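A hypothetical sketch of the re-keying (type names and keys are assumptions for illustration, not the actual controller code): key the map purely by sub-route (kevin) name, with the default traffic under the empty string instead of a sentinel key.

```go
package main

// RevisionTarget is a hypothetical shape, for illustration only.
type RevisionTarget struct {
	RevisionName string
	Percent      int
}

// Today (roughly): a sentinel key like "defaultName" holds the default traffic.
// Proposed: key the map by kevin (tag) names, with "" for the untagged default.
type trafficTargets map[string][]RevisionTarget

func example() trafficTargets {
	return trafficTargets{
		"":        {{RevisionName: "my-rev-00002", Percent: 100}}, // default split
		"staging": {{RevisionName: "my-rev-00003", Percent: 100}}, // tagged sub-route
	}
}
```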
One more thing about the N+1 placeholder K8s Services: in our current reconciliation loop we create them after the ClusterIngress is ready. To better use the API Server to detect an unavailable sub-route name earlier, we will need to reverse that ordering and create the placeholders first.
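A hedged sketch of that reordering (function names are illustrative; this uses the client-go Create signature of that era): create each placeholder Service before programming the ClusterIngress, so a name already taken surfaces immediately as an API-server conflict.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/kubernetes"
)

// reservePlaceholders sketches creating the N+1 placeholder Services *before*
// the ClusterIngress, so the API server reports sub-route name conflicts early.
func reservePlaceholders(kube kubernetes.Interface, svcs []*corev1.Service) error {
	for _, svc := range svcs {
		_, err := kube.CoreV1().Services(svc.Namespace).Create(svc)
		if errors.IsAlreadyExists(err) {
			// The name is taken. If we don't own the existing Service, the
			// sub-route name is unavailable and the Route should be marked
			// not-ready with a helpful condition, before any ingress is programmed.
			return err
		}
		if err != nil {
			return err
		}
	}
	return nil
}
```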
So I believe the work items for this issue are:
Part 1: Make use of the API Server to avoid subRoute name conflicts.
Part 2: Allow users to set visibility per subRoute by adding a label to the subRoute's placeholder Service (see the sketch below).
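For Part 2, the idea is that users flip a label on the placeholder Service and the Route controller translates that into visibility on the corresponding rule. A sketch, where the label key and value are assumptions for illustration, not a settled convention:

```go
package main

import corev1 "k8s.io/api/core/v1"

// visibilityLabel is an assumed key; the actual convention is whatever the
// Route controller settles on.
const visibilityLabel = "serving.knative.dev/visibility"

// isClusterLocal sketches how the controller could read a per-subRoute
// visibility setting off the placeholder Service's labels.
func isClusterLocal(svc *corev1.Service) bool {
	return svc.Labels[visibilityLabel] == "cluster-local"
}
```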
/assign @andrew-su
Thanks! 🙏
As part of these changes, I started with the refactor as suggested, but that quickly went sideways: many places rely on the map's "defaultName" key for the default route, which led to lots of different places failing. After fighting with that for a long while, I decided to defer the refactor and go with the most minimal change I need to get the functionality working. The current state of this change: a placeholder service for each tagged service, plus one service for the aggregated traffic. Some undesirable behaviour I noticed is that updating the route did not update the ClusterIngress resource to reflect the new/deleted hosts, and the services created do not get cleaned up (likely because there's no cleanup logic at the moment).
It was a typo... which caused it to attempt to create the same service twice, thus failing early before updating the ClusterIngress. Thank you, controller logs.
@andrew-su I think a missing ClusterIngress update would fail a bunch of e2e tests, so hopefully fixing the typo will fix all those e2e failures?
In the cleanup logic for the "Kevin" work we list the K8s services for a route by the label selector with the route name, but we mistakenly did this cluster-wide. This means that we'd discover services in other namespaces, and then (very likely) fail deleting the service with that name in our namespace, leading the route to never become ready. This could happen if I have Routes in parallel namespaces with the same name, but different "tags". However, @tcnghia and I saw this playing with @bbrowning's Ambassador implementation for ClusterIngress, which creates a K8s service in the ambassador namespace as part of the ClusterIngress implementation. Related to: knative#3419
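The fix described above boils down to scoping the List call. A sketch (label key and helper name assumed; era-appropriate client-go signature) of listing a Route's placeholder Services in the Route's own namespace rather than cluster-wide:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPlaceholderServices sketches the corrected cleanup lookup: restrict the
// List to the Route's namespace so that a same-named Route in a parallel
// namespace (or an ingress implementation's own Service, as with Ambassador)
// is never matched.
func listPlaceholderServices(kube kubernetes.Interface, ns, routeName string) (*corev1.ServiceList, error) {
	return kube.CoreV1().Services(ns). // namespace-scoped, not cluster-wide
		List(metav1.ListOptions{
			LabelSelector: "serving.knative.dev/route=" + routeName,
		})
}
```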
Let's say an operator changed the service's visibility label to
That should already be done, as the only time we modify the metadata is during creation.
Since we're being more specific about the visibility within each rule.
@andrew-su ClusterIngress.Visibility could be a convenient fallback when a rule is missing its own Visibility setting.
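A sketch of the fallback semantics being suggested, with assumed type names (the real ClusterIngress types may differ): a rule's own Visibility wins when set, otherwise the ClusterIngress-wide setting applies.

```go
package main

// Hypothetical shapes for illustration only.
type Visibility string

const (
	VisibilityExternalIP   Visibility = "ExternalIP"
	VisibilityClusterLocal Visibility = "ClusterLocal"
)

type Rule struct {
	Visibility Visibility // optional; empty means "inherit"
}

// effectiveVisibility applies the suggested fallback: a rule missing its own
// Visibility inherits the ClusterIngress-wide setting.
func effectiveVisibility(ingressVis Visibility, rule Rule) Visibility {
	if rule.Visibility != "" {
		return rule.Visibility
	}
	return ingressVis
}
```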
@andrew-su thanks for landing all the PRs. Generally the rite of passage for a large feature like this is to close it with an integration test PR, or else @mattmoor will reopen it right away :D. Thanks!
In what area(s)?
/area API
/area networking
Describe the feature
This is an offshoot of the `v1beta1` task force, which has been considering our `v1beta1` shape holistically. While that discussion is ongoing, this chunk of work (at the Route level) felt worthy of surfacing for broader discussion. Thanks to @evankanderson for highlighting this issue, and the task force for helping shape this proposal for consideration by the broader group.

The Problem
@evankanderson presented (recording, doc) several weeks back about the problem with the current method of handling `name:` in `Route`.

The Proposal
This proposal is fairly close to what was proposed there, but with a small change. Instead of disallowing `name:` in `spec.traffic` for Route and instantiating multiple Routes from Service, this proposes modifying Route to accomplish what it does through a new sub-resource that, for the sake of focus, I'll call "Kevin" (meet kevin).

Kevin is a new internal API between Route / ClusterIngress that encapsulates basic traffic splits.
Kevin is effectively exactly what @evankanderson proposed as the future of Route, and the net effect of this is an identical "shape" for Service in release mode; however, we preserve the ability to program the Route in the same intuitive / powerful way we have today.
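To make the shape concrete, here is a speculative sketch of a Kevin spec given the constraints below (all type and field names assumed; this is not a committed API): traffic splits with a required `revisionName:` and no `name:` field.

```go
package main

// KevinSpec is a speculative sketch of the Kevin API described here; field and
// type names are illustrative, not a committed design.
type KevinSpec struct {
	Traffic []KevinTrafficTarget `json:"traffic"`
}

// KevinTrafficTarget deliberately has no `name:` (tags live at the Route
// level) and requires `revisionName:` so every Kevin is pinned to Revisions.
type KevinTrafficTarget struct {
	RevisionName string `json:"revisionName"`
	Percent      int    `json:"percent"`
}
```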
Detailed Proposal
We will introduce `status.traffic[*].url`, which MUST be used in conjunction with `status.traffic[*].name` to determine the URL by which a particular `name:` will be accessed (a URL path MUST NOT be returned). The spec will no longer be prescriptive about how this URL is formed, and (similar to "Allow for the format of Routes of a Service to be configurable" #3306) Operators MAY allow this to be configured, but these names MUST(*) be stable.

Create the Kevin resource. Kevin is analogous to Route, but disallows `name:` on splits and requires `revisionName:` (for consistency across Kevins).

Have Route start to create Kevin resources in parallel to ClusterIngress (**), then make the switch to the Kevin URLs.

Deprecate the `{name}.{route}...` URLs (to be deleted when we release v1beta1).

(*) - This is the goal state, but we will intentionally violate this as part of rolling this out to go from `{name}.{route}` to (configurable) `{route}-{name}`.

(**) - At least Istio seems to be cool with redundant but non-conflicting VirtualService definitions. We should talk to ClusterIngress implementers about this. We might also want to consider using this transition to make a switch to non-cluster-scoped resources in anticipation of #1997, which will eliminate our need for cross-namespace ingress (cc @bbrowning since this angle wasn't discussed, but the cluster-scoped resources came up recently).
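A sketch of what the first step implies for Route's status (field placement follows the text above, i.e. `status.traffic[*].name` and `status.traffic[*].url`; the surrounding type names are assumed):

```go
package main

// TrafficTargetStatus sketches the proposed status fields; only Name and URL
// come from the proposal text, the rest is assumed for illustration.
type TrafficTargetStatus struct {
	Name string `json:"name,omitempty"`
	// URL by which this named target is accessed. Per the proposal it MUST be
	// used together with Name, MUST NOT carry a path, and MUST be stable even
	// though Operators MAY configure how it is formed.
	URL string `json:"url,omitempty"`
}

type RouteStatus struct {
	Traffic []TrafficTargetStatus `json:"traffic,omitempty"`
}
```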
cc @evankanderson @vaikas-google