
06 April 2018

## Support for OpenStack

{{site.prodname}} v3.1 reintroduces support for OpenStack. Existing users can upgrade their {{site.prodname}} OpenStack clusters to v3.1 by following the documented procedure.

## Introducing GlobalNetworkSets

{{site.prodname}} now supports a new resource type: GlobalNetworkSet. A GlobalNetworkSet contains a set of CIDRs with associated labels, which can be matched by global network policies. This allows policy rules to refer to external networks, possibly consisting of thousands of CIDRs, and makes network policies more portable across clusters by introducing a label-based abstraction on top of network CIDRs. To learn more, see the GlobalNetworkSet resource definition.
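
For illustration, here is a minimal sketch of a GlobalNetworkSet and a global network policy rule that selects it. The names, labels, and CIDRs are hypothetical (the CIDRs are RFC 5737 documentation ranges):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkSet
metadata:
  name: partner-networks        # hypothetical name
  labels:
    role: partner               # label that policy rules can select on
spec:
  nets:
    - 198.51.100.0/24
    - 203.0.113.0/24
---
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-partner-ingress   # hypothetical policy
spec:
  selector: app == 'frontend'   # workloads this policy applies to
  types:
    - Ingress
  ingress:
    - action: Allow
      source:
        selector: role == 'partner'   # matches the network set above
```

Because the policy matches on the `role == 'partner'` label rather than on literal CIDRs, the same policy can be applied unchanged in another cluster that defines its own `partner-networks` set.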

## Beta support for IPVS kube-proxy

{{site.prodname}} v3.1 moves support for the IPVS kube-proxy from alpha to beta with support for pod ingress, pod egress, and host endpoint network policy. The IPVS kube-proxy is itself still beta, but promises greater scale and performance compared to the existing iptables proxy.
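
Note that IPVS mode is a kube-proxy setting rather than a {{site.prodname}} one. As a rough sketch, on clusters that configure kube-proxy via a configuration file, switching modes might look like this:

```yaml
# Example KubeProxyConfiguration; all other fields keep their defaults.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # switch from the default iptables mode to IPVS
```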

## Kubernetes IPv6 support

{{site.prodname}} v3.1 includes fixes which better support running an IPv6-based Kubernetes cluster. In {{site.prodname}} v3.1, you can now use the Kubernetes API datastore in IPv6 mode. Additionally, {{site.prodname}} now generates a /48 unique local address (ULA) prefix when no IPv6 pool is specified rather than using a fixed CIDR. This prevents multiple {{site.prodname}} clusters from sharing the same IPv6 address space. Check out the documentation on enabling IPv6 support for more information.
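
If you prefer to control the IPv6 address space yourself rather than rely on the generated ULA prefix, you can create an explicit IPv6 pool. A minimal sketch, with a hypothetical name and an example ULA prefix:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ipv6-pool               # hypothetical name
spec:
  cidr: fd6e:6a5f:1e4c::/48     # example unique local address (ULA) prefix
  natOutgoing: false
```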

## HostEndpoint support for Kubernetes API datastore

{{site.prodname}} now supports configuration of host endpoints when using the Kubernetes API datastore. This allows you to seamlessly apply network policy to Kubernetes host machines and Kubernetes pods alike using {{site.prodname}} global network policies.
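
As a sketch, a HostEndpoint protecting a node's primary interface might look like the following. The node name, interface, label, and IP (an RFC 5737 documentation address) are illustrative:

```yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0              # hypothetical name
  labels:
    role: k8s-worker            # label for global network policies to select
spec:
  node: node1                   # must match the Kubernetes node name
  interfaceName: eth0
  expectedIPs:
    - 192.0.2.10
```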

## Other changes

- The install-cni container now maintains the original mode on certificates copied from Kubernetes secrets. cni-plugin #481 (@caseydavenport)
- The install-cni container now writes the calico-kubeconfig file with mode 600 by default. This can be configured by setting the KUBECONFIG_MODE option. cni-plugin #481 (@caseydavenport)
- The {{site.prodname}} CNI plugin now expects the /var/lib/calico/nodename file to be created by {{site.nodecontainer}} by default. To disable this behavior, set nodename_file_optional: true in your CNI network configuration, as shown in the sketch after this list. cni-plugin #480 (@caseydavenport)
- Fix a bug where IPs could be assigned from disabled IP pools. libcalico-go #806 (@ozdanborne)
- Fix a bug where profiles were periodically and unnecessarily reprogrammed by kube-controllers. libcalico-go #805 (@caseydavenport)
- Fix a bug where nodes were periodically and unnecessarily processed by kube-controllers. kube-controllers #216 (@caseydavenport)
- Fix a number of race conditions and failure scenarios in IPAM block allocation and release. libcalico-go #785 (@caseydavenport)
- Improve log output around IPAM block allocation and release. libcalico-go #785 (@caseydavenport)
- The self-hosted Kubernetes manifests now set mode 400 for TLS secrets by default. calico #1725 (@caseydavenport)
- Fix a rare bug where a node could, in some circumstances, advertise /26 blocks that it did not own. calico #1712 (@caseydavenport)
- Use the networking.k8s.io API in place of the deprecated extensions/v1beta1. calico #1614 (@bcreane)
- Fix an interaction between failsafe inbound/outbound ports and do-not-track policy that resulted in failsafe ports being blocked when do-not-track policy was added. felix #1718 (@fasaxc)
- Fix a bug in ICMP validation where ipVersion was required for all ICMP rules. calicoctl #1814 (@ozdanborne)
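
For reference, a minimal sketch of where the nodename_file_optional setting lives. The CNI network configuration is typically delivered to install-cni via a ConfigMap in the self-hosted manifests; the network name and IPAM settings below are illustrative, not prescriptive:

```yaml
# Illustrative excerpt from the ConfigMap consumed by install-cni.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The CNI network configuration is installed as a JSON file on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "type": "calico",
      "nodename_file_optional": true,
      "ipam": {
        "type": "calico-ipam"
      }
    }
```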

## Limitations

- Offers only Kubernetes, OpenShift, OpenStack, and host endpoint integrations: the Mesos, DC/OS, and libnetwork orchestrators have not been tested. The latest release that supports these orchestrators is v2.6. We plan to resume support for them in a future release.

- GoBGP not supported: Setting the CALICO_NETWORKING_BACKEND environment variable to gobgp is not supported. See Configuring {{site.nodecontainer}} for more information. We plan to resume support for GoBGP in a future release.

- Route reflectors cannot be clustered: We plan to resume support for this in a future release.