[bug] network policy is not compatible with kube-proxy's config of clusterCIDR under new ippool
#6545
Comments
Hi @luckymrwang, does the cluster CIDR in the kube-proxy config match your IP pool CIDRs? Reason I'm asking is that a misconfigured kube-proxy can cause NAT to occur when Calico does not expect it, which would cause Calico policy to drop that traffic (more details can be found here).
@mgleung Thanks for your reply.
@luckymrwang gotcha. I think it makes sense then that the traffic in CIDR
This is unfortunately a limitation in Kubernetes - you could disable the kube-proxy clusterCIDR altogether, but then you lose out on the ability to bridge out-of-cluster traffic to the pod IP range in certain scenarios. There is upstream work in progress to make this better, both in the form of supporting multiple cluster CIDRs as well as improving kube-proxy "local" detection to not rely on CIDR at all (the more useful fix).
Closing this since it seems that it's a limitation of the number of cluster CIDRs that kube-proxy can be configured with. I think the correct fix should be made upstream unless anyone has any objections?
Expected Behavior
Under a network policy that allows pods in the same namespace to connect to each other and denies cross-namespace traffic, and with the namespace bound to a new ippool, a request from one pod to another pod through a clusterIP:port should be allowed.
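For concreteness, a policy of the shape described might look like the following sketch. The report names the policy netpol1 and the namespace ns1; the spec itself is illustrative, not the manifest actually used.

```yaml
# Sketch of a same-namespace-only ingress policy for ns1 (illustrative spec).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol1
  namespace: ns1
spec:
  podSelector: {}            # select every pod in ns1
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # allow ingress only from pods in the same namespace
```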
Current Behavior
Under the same network policy, with the namespace bound to a new ippool, a request from one pod to another pod through a clusterIP:port is denied.
Possible Solution
1. Change kube-proxy's clusterCIDR to contain the new ippool, but the config only supports one IPv4 CIDR (see the sketch below).
2. Set clusterCIDR to 0.0.0.0/0, but that may affect requests to services whose endpoints are in hostNetwork mode.
3. Add each node's tunl0 IP to the networkpolicy. But every node is different and the tunl0 address is created dynamically.
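For reference, a sketch of the kube-proxy setting in question, using the value from this report (the surrounding ConfigMap layout depends on the installer):

```yaml
# KubeProxyConfiguration excerpt. clusterCIDR accepts a single IPv4 CIDR
# (one per IP family in dual-stack clusters), so it cannot cover both
# 10.238.64.0/18 and the new 192.168.1.0/24 pool at the same time;
# option 2 above would replace the value with "0.0.0.0/0".
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.238.64.0/18"
```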
Steps to Reproduce (for bugs)
1. Create a new ippool 192.168.1.0/24 (see the sketch after this list).
2. Create a namespace named ns1 and add the annotation "cni.projectcalico.org/ipv4pools": "192.168.1.0/24" to it (see the sketch after this list).
3. In ns1: create a deployment named dp1 whose pod is on node1; the pod name is like dp1-xxx and its podIP is 192.168.1.1. Create a deployment named dp2 whose pod is on node2; the pod name is like dp2-xxx and its podIP is 192.168.1.2. In addition, create a service named svc2 whose selector matches pod dp2-xxx; for example, svc2's clusterIP is 10.233.56.189 and its port is 8080.
4. Create a networkpolicy (such as the one sketched under Expected Behavior) to allow pods to connect to each other within the namespace and deny cross-namespace traffic.
5. Run kubectl -n ns1 exec -it dp1-xxx curl 10.233.56.189:8080; the traffic is denied.
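A minimal sketch of the manifests for steps 1 and 2. The IPPool fields other than cidr are assumptions (the report indicates IPIP is in use via tunl0); the IPPool is applied with calicoctl or the Calico API server, the Namespace with kubectl.

```yaml
# Hypothetical IPPool for step 1; the name and the fields other than cidr are assumed.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 192.168.1.0/24
  ipipMode: Always        # assumed, consistent with the tunl0/IPIP path described under Context
  natOutgoing: true       # assumed
---
# Namespace for step 2, pinned to the new pool via the Calico annotation
# (the annotation value is a JSON list of pool CIDRs or pool names).
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
  annotations:
    cni.projectcalico.org/ipv4pools: '["192.168.1.0/24"]'
```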
Context
Reason:
1. The default ippool is 10.238.64.0/18 and the kube-proxy config of clusterCIDR is 10.238.64.0/18, which produces this rule in the nat table:
   -A KUBE-SERVICES ! -s 10.238.64.0/18 -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ
2. When running kubectl -n ns1 exec -it dp1-xxx curl 10.233.56.189:8080, the packet leaves node1. At this point the packet's src IP is 192.168.1.1, which is not in 10.238.64.0/18, so it matches the rule above and SNAT is applied, changing the packet's src to tunl0's IP 10.233.90.0 before it leaves node1.
3. On node2 the packet is evaluated against netpol1 through chains such as:
   cali-tw-calid0xxxx -m comment "cali:xxx" -m mark 0x0/0x20000 -j cali-pi-_xxxxx
   cali-pi-_xxxxx -m comment "xxxxx" -m set --match-set cali40s:xx src -j MARK --set-xmark 0x10000/0x10000
4. The ipset cali40s:xx contains pod dp1-xxx's IP 192.168.1.1 (allowed) but does not contain tunl0's IP 10.233.90.0.
5. So the packet is dropped.
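Possible solution 3 above would amount to adding per-node ipBlock rules for the tunl0 addresses that SNAT rewrites the source to, roughly as in this sketch. 10.233.90.0 is node1's tunl0 address from this report; every node's address differs and is assigned dynamically, which is why this does not scale.

```yaml
# Sketch only: the same-namespace policy extended with a per-node tunl0 exception.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol1
  namespace: ns1
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}            # pods in the same namespace
        - ipBlock:
            cidr: 10.233.90.0/32     # node1's tunl0 IP; one such entry would be needed per node
```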
Your Environment
Calico version: v3.21.4
Orchestrator version (e.g. kubernetes, mesos, rkt): kubernetes v1.20.10
Operating System and version: CentOS7
Link to your project (optional):