---
description: Getting started with OpenG2P
---

# OpenG2P In a Box

This document describes a deployment model in which the infrastructure and components required by OpenG2P modules are set up on a single node/VM/machine. It helps you get started with OpenG2P and experience the functionality without having to meet all the resource requirements of a production-grade setup. The model is based on the V4 architecture, but is a compact version of it; the essence of V4 is preserved so that upgrading the infrastructure is easier when more hardware resources become available.

## Deployment architecture

{% embed url="https://miro.com/app/board/uXjVKEY_ZNk=/?share_link_id=892398727661" %}

{% hint style="danger" %} Do NOT use this deployment model for production/pilots. {% endhint %}

## Installation

### Prerequisites

* Machine with the following configuration:
  * 16 vCPU / 64 GB RAM / 256 GB storage
  * OS: Ubuntu 22.04
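Before proceeding, it may help to confirm that the machine actually meets these minimums. A quick, hypothetical pre-flight check (not part of the official guide) using standard Linux tools:

```shell
# Hypothetical pre-flight check: compare this machine against the stated
# minimums (16 vCPU / 64 GB RAM). Prints warnings instead of failing hard.
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
echo "vCPUs=$cpus RAM=${mem_gb}GB"
[ "$cpus" -ge 16 ] || echo "WARNING: fewer than 16 vCPUs"
[ "$mem_gb" -ge 60 ] || echo "WARNING: less than 64 GB RAM"
```

The RAM threshold is checked against 60 GB rather than 64 GB because `/proc/meminfo` reports slightly less than the nominal installed memory.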

### Base infrastructure setup

To set up the base infrastructure, log in to the machine and perform the following steps:

1. Set up the firewall. Since everything runs on a single machine, apply the K8s firewall, NFS firewall, WireGuard firewall, and LB firewall rules all on the same machine.

  2. Install Kubernetes Cluster (RKE2 Server).

3. Install the WireGuard bastion servers:

   * Run this command for each WireGuard server/channel:

     ```
     WG_MODE=k8s ./wg.sh <name for this wireguard server> <client ips subnet mask> <port> <no of peers> <subnet mask of the cluster nodes & lbs>
     ```

   * For example:

     ```
     WG_MODE=k8s ./wg.sh wireguard_app_users 10.15.0.0/16 51820 254 172.16.0.0/24
     WG_MODE=k8s ./wg.sh wireguard_sys_admins 10.16.0.0/16 51821 254 172.16.0.0/24
     ```

   * Check the logs of the servers and wait for all of them to finish startup. For example:

     ```
     kubectl -n wireguard-system logs -f wireguard-sys-admins
     ```
  4. Install NFS Server.

  5. Install Kubernetes NFS CSI Driver.

6. Set up Istio; from the kubernetes/istio directory, run the following:

   ```
   istioctl operator init
   kubectl apply -f istio-operator-no-external-lb.yaml
   kubectl apply -f istio-ef-spdy-upgrade.yaml
   ```
7. Set up TLS:

   * Create an SSL certificate for Rancher using Letsencrypt (edit the hostname below):

     ```
     certbot certonly --agree-tos --manual \
         --preferred-challenges=dns \
         -d rancher.your.org
     ```

   * Create the Rancher TLS secret (edit the certificate paths below):

     ```
     kubectl -n istio-system create secret tls tls-rancher-ingress \
         --cert /etc/letsencrypt/live/rancher.your.org/fullchain.pem \
         --key /etc/letsencrypt/live/rancher.your.org/privkey.pem
     ```

   * Create an SSL certificate for Keycloak using Letsencrypt (edit the hostname below):

     ```
     certbot certonly --agree-tos --manual \
         --preferred-challenges=dns \
         -d keycloak.your.org
     ```

   * Create the Keycloak TLS secret (edit the certificate paths below):

     ```
     kubectl -n istio-system create secret tls tls-keycloak-ingress \
         --cert /etc/letsencrypt/live/keycloak.your.org/fullchain.pem \
         --key /etc/letsencrypt/live/keycloak.your.org/privkey.pem
     ```
8. Set up DNS so that the Rancher and Keycloak hostnames point to the IP address of the node.

9. Install Rancher; from the kubernetes/rancher directory, run the following (edit the hostname below):

   ```
   RANCHER_HOSTNAME=rancher.your.org \
   TLS=true \
       ./install.sh --set replicas=1
   ```

   * Log in to Rancher using the above hostname and bootstrap the admin user according to the instructions. After successfully logging in to Rancher as admin, save the new admin user password in the local cluster, in the cattle-system namespace, under the rancher-secret secret, with key adminPassword.
10. Install Keycloak; from the kubernetes/keycloak directory, run the following (edit the hostname below):

    ```
    KEYCLOAK_HOSTNAME=keycloak.your.org \
    TLS=true \
        ./install.sh --set replicaCount=1
    ```
  11. Integrate Rancher & Keycloak.

12. Continue to use the same cluster (the local cluster) for the OpenG2P modules as well.

    * In Rancher, create a project and a namespace in which the OpenG2P modules will be installed. The rest of this guide assumes the namespace is `dev`.
    * In the Rancher -> Namespaces menu, enable "Istio Auto Injection" for the `dev` namespace.
13. Follow the Istio namespace setup:

    1. Edit and run the following to define the variables:

       ```
       export NS=dev
       export WILDCARD_HOSTNAME='*.dev.your.org'
       ```

    2. Run the following to apply the gateways:

       ```
       kubectl create ns $NS
       envsubst < istio-gateway-tls.yaml | kubectl apply -f -
       ```

    3. Create an SSL certificate using Letsencrypt for the wildcard hostname used above. Example usage:

       ```
       certbot certonly --agree-tos --manual \
           --preferred-challenges=dns \
           -d dev.your.org \
           -d *.dev.your.org
       ```

    4. Add the certificate to K8s:

       ```
       kubectl -n istio-system create secret tls tls-openg2p-$NS-ingress \
           --cert=<certificate path> \
           --key=<certificate key path>
       ```
14. Install Prometheus and Monitoring from Rancher.

  15. Install Logging and Fluentd. (TODO)

## OpenG2P modules' installation

Install OpenG2P modules via Rancher.

{% hint style="info" %} How is "In a Box" different from V4? Why should this not be used for production?

* In-a-box does not use the Nginx load balancer; HTTPS traffic terminates directly on the Istio gateway via WireGuard. However, Nginx is required in production, as described here.
* The SSL certificates are loaded on the Istio gateway, while in V4 the certificates are loaded on the Nginx server.
* The WireGuard bastion runs inside the Kubernetes cluster itself as a pod. This is not recommended in production, where WireGuard must run on a separate node.
* Only a single private access channel is enabled (via WireGuard). In production, you will typically need several channels for access control.
* In-a-box does not offer high availability, as the single node is a single point of failure.
* NFS runs inside the box. In production, NFS must run on a separate node with its own access control, allocated resources, and backups. {% endhint %}
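Note that the Letsencrypt certificates created in steps 7 and 13 expire after roughly 90 days and must be renewed. A hypothetical helper (not part of the official guide) that only prints the kubectl command to refresh a TLS secret from a renewed certificate, following the secret names and paths used above:

```shell
# Hypothetical helper: print (not run) the kubectl command that refreshes an
# Istio TLS secret from a renewed Letsencrypt certificate. The
# "create --dry-run=client -o yaml | kubectl apply" idiom updates the secret
# even if it already exists. Review the output before executing it.
tls_secret_cmd() {
  host="$1"; secret="$2"
  live="/etc/letsencrypt/live/$host"
  printf 'kubectl -n istio-system create secret tls %s --cert %s/fullchain.pem --key %s/privkey.pem --dry-run=client -o yaml | kubectl apply -f -\n' \
    "$secret" "$live" "$live"
}
tls_secret_cmd rancher.your.org tls-rancher-ingress
tls_secret_cmd keycloak.your.org tls-keycloak-ingress
```

A command built this way could be wired into a certbot `--deploy-hook` so that renewals propagate to the cluster automatically.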