- Kubernetes Windows Walkthrough - shows how to create a Kubernetes cluster on Windows.
Here are the steps to deploy a simple Kubernetes cluster:
- install acs-engine
- generate your ssh key
- generate your service principal
- edit the Kubernetes example and fill in the blank strings
- generate the template
- deploy the output azuredeploy.json and azuredeploy.parameters.json
- To enable optional network policy enforcement using Calico, you have to set the `networkPolicy` parameter during this step, as described in the network policy section below.
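Put together, the flow looks roughly like the sketch below. This is a minimal outline, not a definitive recipe: the example file name, resource group name, location, and `<dnsPrefix>` are illustrative placeholders, and the `az` commands assume Azure CLI 2.0.

```sh
# 2. generate an SSH key (skip if you already have one)
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# 3. create a service principal; note the appId and password it prints
az ad sp create-for-rbac --role Contributor

# 4./5. fill the public key and service principal values into the example
# API model, then generate the ARM templates from it
acs-engine generate examples/kubernetes.json

# 6. deploy the generated templates (written to _output/<dnsPrefix>/)
az group create --name my-k8s-rg --location westus2
az group deployment create \
  --resource-group my-k8s-rg \
  --template-file _output/<dnsPrefix>/azuredeploy.json \
  --parameters @_output/<dnsPrefix>/azuredeploy.parameters.json
```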
Temporary workaround when deploying a cluster in a custom VNET with Kubernetes 1.5.3:

- After the cluster has been created in step 6, get the id of the route table resource from the Microsoft.Network provider in your resource group. The route table resource id is of the format:
  `/subscriptions/SUBSCRIPTIONID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.Network/routeTables/ROUTETABLENAME`
- Update the properties of all subnets in the newly created VNET that are used by the Kubernetes cluster, so that they refer to the route table resource, by appending the following to the subnet properties:

```json
"routeTable": {
  "id": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/routeTables/<RouteTableResourceName>"
}
```

E.g.:

```json
"subnets": [
  {
    "name": "subnetname",
    "id": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VirtualNetworkName>/subnets/<SubnetName>",
    "properties": {
      "provisioningState": "Succeeded",
      "addressPrefix": "10.240.0.0/16",
      "routeTable": {
        "id": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/routeTables/<RouteTableResourceName>"
      }
      ...
    }
    ...
  }
]
```
Once your Kubernetes cluster has been created, you will have a resource group containing:

- 1 master accessible by SSH on port 22 or kubectl on port 443
- a set of nodes in an availability set. The nodes can be accessed through a master. See agent forwarding for an example of how to do this; a sketch follows below.
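For example, using SSH agent forwarding (a sketch; user and host names are placeholders, and the node name follows the naming shown further below):

```sh
# load your key into the local agent and forward it to the master
ssh-add ~/.ssh/id_rsa
ssh -A azureuser@MASTERFQDN

# then, from the master, hop to a node with the forwarded key
ssh azureuser@k8s-agentpool1-30179930-0
```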
The following image shows the architecture of a container service cluster with 1 master and 2 agents:
In the image above, you can see the following parts:
- Master Components - The master runs the Kubernetes scheduler, API server, and controller manager. Port 443 is exposed for remote management with the kubectl CLI.
- Nodes - the Kubernetes nodes run in an availability set. Azure load balancers are dynamically added to the cluster depending on exposed services.
- Common Components - All VMs run a kubelet, Docker, and a Proxy.
- Networking - All VMs are assigned an IP address in the 10.240.0.0/16 network. Each VM is assigned a /24 subnet as its pod CIDR, enabling one IP per pod. The proxy running on each VM implements the service network 10.0.0.0/16.
All VMs are in the same private VNET and are fully accessible to each other.
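To see these allocations for yourself, you can list each node's pod CIDR straight from the API (a quick check, not a required step):

```sh
# print node name and assigned /24 pod CIDR, one per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```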
Using the default configuration, Kubernetes allows communication between all Pods within a cluster. To ensure that Pods can only be accessed by authorized Pods, policy enforcement is needed. To enable policy enforcement using Calico, `azuredeploy.parameters.json` needs to be modified as follows:

```json
"networkPolicy": {
  "value": "calico"
}
```
This will deploy a Calico node controller to every instance of the cluster using a Kubernetes DaemonSet. After a successful deployment you should be able to see these Pods running in your cluster:
```
kubectl get pods --namespace kube-system -l k8s-app=calico-node -o wide
NAME                READY     STATUS    RESTARTS   AGE       IP             NODE
calico-node-034zh   2/2       Running   0          2h        10.240.255.5   k8s-master-30179930-0
calico-node-qmr7n   2/2       Running   0          2h        10.240.0.4     k8s-agentpool1-30179930-1
calico-node-z3p02   2/2       Running   0          2h        10.240.0.5     k8s-agentpool1-30179930-0
```
By default, Calico still allows all communication within the cluster. Using Kubernetes' NetworkPolicy API, you can define stricter policies; the Kubernetes NetworkPolicy documentation is a good resource for this.
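As an illustration, the sketch below restricts ingress to a set of pods so that only explicitly labeled pods may connect. This is a generic example, not part of the original walkthrough: the `app=web` and `access=allowed` labels are assumptions, and on clusters older than Kubernetes 1.7 the NetworkPolicy API lives under `extensions/v1beta1` rather than `networking.k8s.io/v1`.

```sh
# only pods labeled access=allowed may reach pods labeled app=web
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-labeled
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "allowed"
EOF
```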
After completing this walkthrough you will know how to:

- access the Kubernetes cluster via SSH,
- deploy a simple Docker application and expose it to the world,
- find the Kube config file and access the Kubernetes cluster remotely,
- use `kubectl exec` to run commands in a container,
- and finally access the Kubernetes dashboard.
- After successfully deploying the template, write down the master FQDN (Fully Qualified Domain Name).
  - If using PowerShell or CLI, the output parameter is in the OutputsString section named 'masterFQDN'.
  - If using the Portal, the value is shown in the Outputs section of the deployment.
- SSH to the master FQDN obtained in step 1.
- Explore your nodes and running pods:
  - to see a list of your nodes, type `kubectl get nodes`. If you want full detail of the nodes, add `-o yaml` to become `kubectl get nodes -o yaml`.
  - to see a list of running pods, type `kubectl get pods --all-namespaces`.
- Start your first Docker image by typing `kubectl run nginx --image nginx`. This will start the nginx Docker container in a pod on one of the nodes.
- Type `kubectl get pods -o yaml` to see the full details of the nginx deployment. You can see the host IP and the pod IP; the pod IP is assigned from the pod CIDR on the host. Run curl against the pod IP to see the nginx output, e.g. `curl 10.244.1.4`.
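If you would rather not scan the YAML by eye, a jsonpath query can pull the pod IP out directly (a sketch; the `run=nginx` label is the one `kubectl run` applies by default):

```sh
# grab the nginx pod's IP and curl it
POD_IP=$(kubectl get pods -l run=nginx -o jsonpath='{.items[0].status.podIP}')
curl "$POD_IP"
```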
- The next step is to expose the nginx deployment as a Kubernetes service on the private service network 10.0.0.0/16:
  - expose the service with the command `kubectl expose deployment nginx --port=80`.
  - get the service IP with `kubectl get service`.
  - run curl against the IP, e.g. `curl 10.0.105.199`.
- The final step is to expose the service to the world. This is done by changing the service type from `ClusterIP` to `LoadBalancer`:
  - edit the service: `kubectl edit svc/nginx`
  - change `type` from `ClusterIP` to `LoadBalancer` and save it. This will now cause Kubernetes to create an Azure Load Balancer with a public IP.
  - the change will take about 2-3 minutes. To watch the service change from "pending" to an external IP, type `watch 'kubectl get svc'`.
  - once you see the external IP, you can browse to it in your browser.
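If you prefer a non-interactive change, `kubectl patch` achieves the same type switch in one command (equivalent in effect to the edit above):

```sh
# switch the service type to LoadBalancer without opening an editor
kubectl patch svc nginx -p '{"spec": {"type": "LoadBalancer"}}'

# then poll until EXTERNAL-IP leaves the pending state
watch 'kubectl get svc'
```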
- The next step in this walkthrough is to show you how to remotely manage your Kubernetes cluster. First, download kubectl to your machine and put it in your path.
- The Kubernetes master contains the kube config file for remote access under the home directory at `~/.kube/config`. Download this file to your machine, set the KUBECONFIG environment variable, and run kubectl to verify you can connect to the cluster:
- Windows:

```
# MASTERFQDN is obtained in step 1
pscp -P 22 azureuser@MASTERFQDN:.kube/config .
SET KUBECONFIG=%CD%\config
kubectl get nodes
```

- OS X or Linux:

```sh
# MASTERFQDN is obtained in step 1
scp azureuser@MASTERFQDN:.kube/config .
export KUBECONFIG=`pwd`/config
kubectl get nodes
```
- The next step is to show you how to remotely run commands in a remote Docker container:
  - run `kubectl get pods` to show the name of your nginx pod.
  - using your pod name, you can run a remote command on your pod, e.g. `kubectl exec nginx-701339712-retbj date`.
  - try running a remote bash session, e.g. `kubectl exec nginx-701339712-retbj -it bash`. The following screenshot shows these commands:
- The final step of this tutorial is to show you the dashboard:
  - run `kubectl proxy` to directly connect to the proxy,
  - in your browser, browse to the dashboard,
  - browse around and explore your pods and services.
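In practice that looks like the sketch below. By default `kubectl proxy` listens on 127.0.0.1:8001; the `/ui` path was the dashboard redirect on clusters of this era and may differ on newer versions:

```sh
# start the proxy in the background, then open the dashboard in a browser
kubectl proxy &
open http://localhost:8001/ui   # macOS; use xdg-open on Linux or start on Windows
```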
Scaling your cluster up or down requires a different template and parameters than the initial create. For more details, see Scale up.
If your cluster is not reachable, you can run the following command to check for common failures.
If your Service Principal is misconfigured, none of the Kubernetes components will come up in a healthy manner. You can check whether this is the problem:
```sh
ssh -i ~/.ssh/id_rsa USER@MASTERFQDN sudo journalctl -u kubelet | grep --text autorest
```
If you see output that looks like the following, then you have not configured the Service Principal correctly. You may need to check to ensure the credentials were provided accurately, and that the configured Service Principal has read and write permissions to the target Subscription.
```
Nov 10 16:35:22 k8s-master-43D6F832-0 docker[3177]: E1110 16:35:22.840688    3201 kubelet_node_status.go:69] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: autorest#WithErrorUnlessStatusCode: POST https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db47/oauth2/token?api-version=1.0 failed with 400 Bad Request: StatusCode=400
```
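On an acs-engine cluster, the kubelet reads these credentials from the cloud provider config on each VM, which lives at `/etc/kubernetes/azure.json`. A quick way to eyeball the configured values is the sketch below (path per acs-engine's layout; it deliberately avoids printing the secret):

```sh
# compare these values against the service principal you intended to use
ssh -i ~/.ssh/id_rsa USER@MASTERFQDN \
  "sudo grep -e aadClientId -e tenantId -e subscriptionId /etc/kubernetes/azure.json"
```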
- See the documentation on how to create/configure a service principal for an ACS-Engine Kubernetes cluster.
Here are recommended links to learn more about Kubernetes:
- Kubernetes Bootcamp - shows you how to deploy, scale, update, and debug containerized applications.
- Kubernetes Userguide - provides information on running programs in an existing Kubernetes cluster.
- Kubernetes Examples - provides a number of examples on how to run real applications with Kubernetes.