The Tarantool Operator provides automation that simplifies the administration of Tarantool Cartridge-based clusters on Kubernetes.

The Operator introduces a new API version, `tarantool.io/v1alpha1`, and installs custom resources for objects of three custom types: Cluster, Role, and ReplicasetTemplate.
- Resources
- Resource ownership
- Documentation
- Deploying the Tarantool operator on minikube
- Example: key-value storage
- `Cluster` represents a single Tarantool Cartridge cluster.
- `Role` represents a Tarantool Cartridge user role.
- `ReplicasetTemplate` is a template for the StatefulSets created as members of a `Role`.
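For orientation, a Cluster object is declared like any other Kubernetes custom resource under the `tarantool.io/v1alpha1` API version. The sketch below shows only the envelope; the spec fields are omitted, since the actual schema is defined by the operator's CRDs and Helm chart:

```yaml
# Minimal envelope of a Cluster custom resource (sketch only).
# Only apiVersion/kind/metadata are shown; consult the operator's
# CRDs for the actual spec schema.
apiVersion: tarantool.io/v1alpha1
kind: Cluster
metadata:
  name: tarantool-cluster
  namespace: tarantool-app
```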
Resources managed by the deployed Operator have the following ownership hierarchy:

Resource ownership directly affects how the Kubernetes garbage collector works: if you delete a parent resource, all of its dependents are removed as well.
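The cascade described above can be illustrated with a toy model (this is not the actual Kubernetes garbage collector, and the resource names are only examples in the spirit of this document):

```python
# Toy model of owner-based cascading deletion. Child resources record
# their owner; deleting a parent recursively deletes everything it owns.
# Resource names here are illustrative, not taken from a real cluster.
owners = {
    "Role/routers": "Cluster/tarantool-cluster",
    "Role/storage": "Cluster/tarantool-cluster",
    "StatefulSet/storage-0": "Role/storage",
}

def delete(resource, owners):
    """Delete a resource and, recursively, every resource it owns."""
    removed = {resource}
    for child, owner in list(owners.items()):
        if owner == resource:
            removed |= delete(child, owners)
    for r in removed:
        owners.pop(r, None)
    return removed

print(sorted(delete("Cluster/tarantool-cluster", owners)))
# → ['Cluster/tarantool-cluster', 'Role/routers', 'Role/storage', 'StatefulSet/storage-0']
```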
The documentation is on the Tarantool official website.
- Install the required deployment utilities.

  Pick a tool to run a local Kubernetes cluster; the steps below use minikube. To install and configure a local minikube installation, create a minikube cluster:

  ```shell
  $ minikube start --memory=4096
  ```

  You will need 4 GB of RAM allocated to the minikube cluster to run the examples.

  Ensure minikube is up and running:

  ```shell
  $ minikube status
  ```

  Expected output:

  ```
  minikube
  type: Control Plane
  host: Running
  kubelet: Running
  apiserver: Running
  kubeconfig: Configured
  ```
- Build the operator image:

  ```shell
  $ make docker-build
  ```

  By default, the image is tagged as `tarantool-operator:<VERSION>`.
- Add the image to the local minikube registry:

  ```shell
  $ make push-to-minikube
  ```

  Expected output:

  ```
  minikube image load tarantool-operator:0.0.9
  ```

  NOTE: If you want to use the official Docker image of the Tarantool operator, use the Helm charts from the Tarantool Helm repository. Read more about this in the documentation.
- Install the operator:

  ```shell
  $ helm install -n tarantool-operator operator helm-charts/tarantool-operator \
      --create-namespace \
      --set image.repository=tarantool-operator \
      --set image.tag=0.0.9
  ```

  Expected output:

  ```
  NAME: operator
  LAST DEPLOYED: Wed Dec 15 22:54:13 2021
  NAMESPACE: tarantool-operator
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  ```

  Or you can use make:

  ```shell
  $ make helm-install-operator
  ```

  Ensure the operator is up:

  ```shell
  $ kubectl get pods -n tarantool-operator
  ```

  Expected output:

  ```
  NAME                                  READY   STATUS    RESTARTS   AGE
  controller-manager-778db958cf-bhw6z   1/1     Running   0          77s
  ```

  Wait for the `controller-manager-xxxxxx-xx` Pod's status to become `Running`.
`examples/kv` contains a Tarantool-based distributed key-value storage. Data is accessed via an HTTP REST API.

We assume that all commands are executed from the repository root and that the Tarantool Operator is up and running.
- Create a cluster:

  ```shell
  $ helm install -n tarantool-app cartridge-app helm-charts/tarantool-cartridge \
      --create-namespace \
      --set LuaMemoryReserveMB=256
  ```

  Expected output:

  ```
  NAME: cartridge-app
  LAST DEPLOYED: Wed Dec 15 23:50:09 2021
  NAMESPACE: tarantool-app
  STATUS: deployed
  REVISION: 1
  ```

  Or you can use make:

  ```shell
  $ make helm-install-cartridge-app
  ```

  Wait until all the cluster Pods are up (status becomes `Running`):

  ```shell
  $ kubectl -n tarantool-app get pods
  ```

  Expected output:

  ```
  NAME          READY   STATUS    RESTARTS   AGE
  routers-0-0   1/1     Running   0          6m12s
  storage-0-0   1/1     Running   0          6m12s
  storage-0-1   1/1     Running   0          6m12s
  ```
- Ensure the cluster became operational:

  ```shell
  $ kubectl -n tarantool-app describe clusters.tarantool.io/tarantool-cluster
  ```

  Wait until `Status.State` is `Ready`:

  ```
  ...
  Status:
    State:  Ready
  ...
  ```
- Access the cluster web UI:

  ```shell
  $ kubectl -n tarantool-app port-forward routers-0-0 8081:8081
  ```

  Expected output:

  ```
  Forwarding from 127.0.0.1:8081 -> 8081
  Forwarding from [::1]:8081 -> 8081
  Handling connection for 8081
  ```
- Access the key-value API:

  - Store some value:

    ```shell
    $ curl -XPOST http://localhost:8081/kv -d '{"key":"key_1", "value": "value_1"}'
    {"info":"Successfully created"}
    ```

  - Access the stored value:

    ```shell
    $ curl http://localhost:8081/kv/key_1
    "value_1"
    ```

  - Update the stored value:

    ```shell
    $ curl -XPUT http://localhost:8081/kv/key_1 -d '"new_value_1"'
    ["key_1", "new_value_1"]
    ```

  - Delete the stored value:

    ```shell
    $ curl -XDELETE http://localhost:8081/kv/key_1
    {"info":"Successfully deleted"}
    ```
- Increase the number of replica sets in the storage role.

  In the Cartridge Helm chart, edit the `helm-charts/tarantool-cartridge/values.yaml` file so that it contains:

  ```yaml
  - RoleName: storage
    ReplicaCount: 2
    ReplicaSetCount: 2
  ```

  Then run:

  ```shell
  $ helm upgrade -n tarantool-app cartridge-app helm-charts/tarantool-cartridge \
      --set LuaMemoryReserveMB=256
  ```

  This will add another storage-role replica set to the existing cluster. View the new cluster topology via the cluster web UI.
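Pod names in the listings above follow a `<RoleName>-<replicaset index>-<replica index>` pattern. Assuming that naming scheme (an inference from the `kubectl get pods` output, not operator code), the Pods expected after the upgrade can be enumerated as:

```python
# Sketch: enumerate expected Pod names for a role, assuming the
# <RoleName>-<replicaset index>-<replica index> naming seen in the
# `kubectl get pods` output above.
def expected_pods(role_name, replicaset_count, replica_count):
    return [
        f"{role_name}-{rs}-{r}"
        for rs in range(replicaset_count)
        for r in range(replica_count)
    ]

# With ReplicaSetCount: 2 and ReplicaCount: 2 for the storage role:
print(expected_pods("storage", 2, 2))
# → ['storage-0-0', 'storage-0-1', 'storage-1-0', 'storage-1-1']
```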
Read more about cluster management in the documentation.
Use `make help` to list all targets. Below are some of them:

```shell
$ make manifests
$ make docker-build
$ make test
```