add notes for Minio Gateway
streamnsight committed Sep 19, 2022
1 parent 0381ee7 commit 4430433
Showing 7 changed files with 24 additions and 5 deletions.
15 changes: 13 additions & 2 deletions README.md
@@ -150,12 +150,15 @@ To configure a MySQL as a Service instance for KubeFlow:
To use OCI Object Storage as storage for Pipeline and Pipeline Artifacts:
- Gather the `namespace` name of your tenancy and the `region` code (for example, us-ashburn-1) from the tenancy details.
Note: this ONLY works with the home region at this point, because the Minio Gateway does not support other regions for S3-compatible gateways.
- Create a bucket at the root of the tenancy (or in the compartment defined as the root for the S3 Compatibility API, which defaults to the root of the tenancy)
- Create a Customer Secret Key under your user (or a user created for this purpose), which will provide you with an Access Key and a Secret Access Key. Take note of these credentials.
- Run the `setup_object_storage.sh` script to generate the minio.env and params.env files
- Edit the kubeflow.env file with the details gathered.
- Run the `setup_object_storage.sh` script to generate the minio.env, config, and params.env files (see the sketch after this list)
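For illustration, here is a minimal sketch of these steps, assuming placeholder values and a copy-from-template workflow (the variable names come from `kubeflow.env.tmpl`, shown further below):
```bash
# Hypothetical walk-through of the steps above; all values are placeholders.
# Start from the template (assumed workflow) and fill in the details gathered:
cp kubeflow.env.tmpl kubeflow.env

# In kubeflow.env, set at least the Object Storage details
# (REGION must be the tenancy home region, per the note above):
#   export REGION="us-ashburn-1"
#   export BUCKET="xyz-kubeflow-metadata"
#   export OSNAMESPACE="mytenancynamespace"

# Generate minio.env, config and params.env from the templates:
./setup_object_storage.sh
```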
### Deploy
@@ -181,10 +184,18 @@ Note:
If deployment fails due to a wrong configuration, update the `kubeflow.env` file, source it, and re-run the related script(s). Then re-run the `kustomize build` and `kubectl apply` commands, as sketched below.
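For example, a hedged sketch of that rebuild/re-apply step, assuming the kustomization you deployed lives at the repository root (adjust the path to the overlay you actually used):
```bash
# Re-source the corrected configuration, then rebuild and re-apply the manifests.
# The kustomization path "." is an assumption; point it at the overlay you deployed.
source kubeflow.env
kustomize build . | kubectl apply -f -
```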
After this, you may still need to restart the deployments with
After this, you may still need to restart the deployments with:
```bash
kubectl rollout restart deployments -n kubeflow
# for IDCS config change, also run
kubectl rollout restart deployments -n auth
```
If you are having issues with metadata, pipelines, or artifacts, you might need to reset the database/cache.
Use the following script, which clears the MySQL databases and performs a rollout restart of all deployments:
```bash
./reset_db.sh
```
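To confirm the reset took effect, one option (not part of the repository scripts) is to watch the kubeflow deployments come back up:
```bash
# Watch the restarted pods until they are Running again (Ctrl-C to stop).
kubectl get pods -n kubeflow -w
```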
3 changes: 2 additions & 1 deletion kubeflow.env.tmpl
@@ -10,7 +10,8 @@ export IDCS_CLIENT_ID="f0xxxxxxxxxxxx"
export IDCS_CLIENT_SECRET="xxxxxxx-xxxxxxx-xxxx-xxxxx-xxxxxx"

# Object Storage info
export REGION="us-sanjose-1"
# Note: Minio Gateway only works with the home region at this point
export REGION="us-auburn-1"
export BUCKET="xyz-kubeflow-metadata"
export OSNAMESPACE="mytenancynamespace"

1 change: 1 addition & 0 deletions oci/apps/pipeline/oci-object-storage/params.env.tmpl
@@ -1,3 +1,4 @@
bucketName=${BUCKET}
# Note: Minio Gateway only works with the home region
minioServiceHost=${OSNAMESPACE}.compat.objectstorage.${REGION}.oraclecloud.com
minioServiceRegion=${REGION}
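To make the substitution concrete: with the placeholder values from `kubeflow.env.tmpl`, rendering this template yields the S3-compatible endpoint of the tenancy's home region. Using `envsubst` here is an assumption about tooling, not necessarily what `setup_object_storage.sh` actually does:
```bash
# Hypothetical rendering of params.env.tmpl with placeholder values.
export BUCKET="xyz-kubeflow-metadata"
export OSNAMESPACE="mytenancynamespace"
export REGION="us-ashburn-1"   # must be the tenancy home region
envsubst < oci/apps/pipeline/oci-object-storage/params.env.tmpl > params.env
# params.env now contains, among other lines:
#   bucketName=xyz-kubeflow-metadata
#   minioServiceHost=mytenancynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com
#   minioServiceRegion=us-ashburn-1
```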
@@ -13,6 +13,7 @@ spec:
- https://$(MINIO_HOST)
env:
- name: MINIO_HOST
# Note: this gateway setup only works with the home region
valueFrom:
configMapKeyRef:
name: pipeline-install-config
5 changes: 5 additions & 0 deletions reset_db.sh
@@ -0,0 +1,5 @@
#!/bin/bash

. ./kubeflow.env
kubectl exec -it mysql-temp -n default -- mysql -u ${DBUSER} -p${DBPASS} -h mysql -e "drop database mlpipeline; drop database metadb; drop database cachedb;"
kubectl rollout restart deployments -n kubeflow
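The script assumes a pod named `mysql-temp` running the MySQL client in the `default` namespace, with `DBUSER` and `DBPASS` set in `kubeflow.env`. If no such pod exists in your cluster (the MySQL setup steps may already create one), a minimal sketch of starting one:
```bash
# Hypothetical helper pod; only needed if mysql-temp is not already present.
kubectl run mysql-temp -n default --image=mysql:8.0 --restart=Never \
  --command -- sleep infinity
```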
2 changes: 1 addition & 1 deletion setup_mysql.sh
@@ -1,6 +1,6 @@
#!/bin/bash

. kubeflow.env
. ./kubeflow.env

if [[ -z "${REGION}" ]] ; then
echo "Region not set"
2 changes: 1 addition & 1 deletion setup_object_storage.sh
@@ -1,6 +1,6 @@
#!/bin/bash

. kubeflow.env
. ./kubeflow.env

# update params.env

