Development Helm Chart

warning

The development Helm Chart is currently not maintained and needs to be updated to deploy all required components. For now, use the Docker Compose setup instead.

For development, ORT Server provides a Helm chart that can be used to deploy it to Minikube.

Preparations

Configure Minikube

Start minikube with one node, due to the limitation that minikube mount can only mount to one node. By default, minikube uses 2 CPUs and 2 GB of memory. This might not be enough for good startup performance, or even to run ORT Server at all, so it is recommended to increase these values, for example to 4 CPUs and 8 GB.

minikube start --nodes 1 --cpus 4 --memory 8g
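
Before continuing, you can verify that the Minikube node and its components are running:

# Shows the status of the local cluster (host, kubelet, apiserver, kubeconfig)
minikube status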

Configure Helm

Add the Bitnami repository to your Helm environment:

helm repo add bitnami https://charts.bitnami.com/bitnami

Add the HashiCorp repository to your Helm environment:

helm repo add hashicorp https://helm.releases.hashicorp.com
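
Afterwards, you can refresh the local chart index so that the latest chart versions from both repositories are available; this is a standard Helm step, not specific to this chart:

# Updates the locally cached index of all configured chart repositories
helm repo update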

Delete previous database (if any)

If you have deployed the chart before, also delete the existing database content. The Bitnami Helm chart creates a specific database configuration, which Postgres' Docker entrypoint skips if a database already exists.

To do so, connect to the Minikube node with:

minikube ssh

and make sure that /data/postgres-pv is empty.
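
For example, inside the minikube ssh session (a minimal sketch; it assumes sudo is available on the node and that the directory is only used by the Postgres persistent volume):

# Inspect the Postgres data directory and remove any leftover content
ls /data/postgres-pv
sudo rm -rf /data/postgres-pv/*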

Initialize Docker engine

We need to configure the Docker CLI to connect to the Docker Engine inside Minikube. This allows:

  • Building images directly into Minikube's Docker registry
  • Listing the images in this registry (with docker images)
  • Listing the Docker containers running in Minikube's Docker Engine (with docker ps)

Run once per shell to expose the Docker Engine inside Minikube:

eval $(minikube docker-env)
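
To verify that the environment is active, the following commands should now list Minikube's containers and images instead of those of your host Docker daemon:

docker ps
docker images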

Build the required Docker images

Build the required Docker images in the cluster. See https://minikube.sigs.k8s.io/docs/handbook/pushing/ for alternatives.

First the base image for the Analyzer Worker:

workers/analyzer/docker$ DOCKER_BUILDKIT=1 docker build . -f Analyzer.Dockerfile -t ort-server-analyzer-worker-base-image:latest

Then the different images of ORT Server:

./gradlew :core:jibDockerBuild
./gradlew :orchestrator:jibDockerBuild
./gradlew :workers:analyzer:jibDockerBuild
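
You can then check that the images ended up in Minikube's Docker registry; the grep filter below is only an example and assumes the image names contain ort-server:

docker images | grep ort-server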

Mount initialization directory

Finally, mount the directory containing the Keycloak initialization script:

minikube mount scripts/docker/keycloak:/data/init-keycloak &
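
To confirm that the mount is in place (a quick check that assumes the minikube mount process above is still running in the background):

minikube ssh -- ls /data/init-keycloak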

Usage

When a dependency changes in the chart, update the version in the Chart.yaml and then run:

scripts/helm/ort-server$ helm dependency update

This downloads the chart dependencies and updates the Chart.lock file, which you should commit.

Validate the syntax of the chart:

scripts/helm/$ helm lint ort-server

View the Kubernetes configuration generated by the template engine:

scripts/helm/$ helm template ort-server ort-server

Install the chart in Kubernetes:

scripts/helm/$ helm install ort-server ./ort-server --namespace ort-server --create-namespace

Switch to the newly created namespace with kubens (https://github.com/ahmetb/kubectx/blob/master/kubens):

kubens ort-server
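
After switching, you can wait until all pods have reached the Running state before using the deployment:

kubectl get pods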

To uninstall the chart, run:

scripts/helm/$ helm uninstall ort-server

Make sure that all pods are terminated and that the persistent volume claims have been removed, since Keycloak takes some time to shut down.

For that, you can use:

watch kubectl get pods
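
To also confirm that the persistent volume claims are gone (no release-specific labels assumed; this simply lists all claims in the current namespace):

kubectl get pvc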

There is also a helm upgrade command, but it does not seem to redeploy the resources correctly. Additionally, special care must be taken because the secrets are autogenerated.

Hence, for now it is recommended to always uninstall and reinstall the chart.

(Optional) Create an archive for the chart

Create a .tgz archive of the chart for redistribution:

helm package ort-server

Access the remote services on the host

Keycloak needs to be reachable on the exact same domain and port as in the cluster to avoid running into a JWT validation error caused by a changed issuer. The same applies to Postgres, for consistency.

The solution here is to start the following command (it should not be stopped):

minikube tunnel --bind-address=127.0.0.1
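
Once the tunnel is running, the exposed services should receive an external IP on the loopback address; assuming they are of type LoadBalancer (which is what minikube tunnel routes), you can check this with:

kubectl get services --namespace ort-server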

Then add an alias to localhost in the /etc/hosts file (or your platform equivalent) for each service being exposed:

127.0.0.1 localhost ort-server-keycloak postgres artemis core rabbitmq vault

Afterward, you can configure a local ORT Server instance with the following JDBC URL:

jdbc:postgresql://postgres:5432

And you can send requests to the ORT Server using the following Keycloak URLs:

Authorization URL: http://ort-server-keycloak:8081/realms/master/protocol/openid-connect/auth
Access Token URL: http://ort-server-keycloak:8081/realms/master/protocol/openid-connect/token
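
As a quick sanity check (assuming the tunnel and the /etc/hosts entries above are in place), Keycloak's standard OIDC discovery document should be reachable:

curl http://ort-server-keycloak:8081/realms/master/.well-known/openid-configuration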

You can also send requests to the core module of ORT Server. Its Swagger UI is reachable at the following URL:

http://core:8080/swagger-ui/index.html

The RabbitMQ administration URL is available at http://rabbitmq:15672 (user=admin, password=admin).

You can access the HashiCorp Vault web UI at http://vault:8200/ui.

Troubleshooting

Pod fails with:

Warning  Failed   2m4s (x2 over 4m31s)  kubelet  Failed to pull image "docker.io/bitnami/keycloak:20.0.3-debian-11-r5": rpc error: code = Unknown desc = context deadline exceeded
Warning  Failed   2m4s (x2 over 4m31s)  kubelet  Error: ErrImagePull
Normal   BackOff  112s (x2 over 4m30s)  kubelet  Back-off pulling image "docker.io/bitnami/keycloak:20.0.3-debian-11-r5"
Warning  Failed   112s (x2 over 4m30s)  kubelet  Error: ImagePullBackOff

This can happen with large Docker images.

Connect to the cluster with minikube ssh and run:

docker pull docker.io/bitnami/keycloak:20.0.3-debian-11-r5

It should also work if you simply run docker pull on the host after having run the eval command explained above.