Cardano on Kubernetes
Advanced enterprise setup with Kubernetes
Cardano Namespace
Running Cardano on Kubernetes is easy. First, let's create a dedicated namespace for it and make it the default. In case you don't have a running Kubernetes cluster just yet, take a look at Creating a Kubernetes Cluster.
$ kubectl create namespace cardano
namespace/cardano created
$ kubectl config set-context --current --namespace=cardano
Context "gke_beaconchain_us-central1-a_cardano" modified.
Great, that is working.
Attaching Node Labels
Here we look up the external IP of the cluster node that will host the relay.
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cardano-default-pool-9bec42f2-g9p2 Ready <none> 14m v1.18.12-gke.1210 10.128.0.33 34.71.138.50 Container-Optimized OS from Google 5.4.49+ docker://19.3.9
gke-cardano-default-pool-9bec42f2-w5w3 Ready <none> 14m v1.18.12-gke.1210 10.128.0.32 35.223.44.154 Container-Optimized OS from Google 5.4.49+ docker://19.3.9
EXTERNAL_IP=34.71.138.50
NODE_PORT=30010
# The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range.
RELAY_CLUSTER_IP=10.3.240.100
BPROD_CLUSTER_IP=10.3.240.200
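These fixed cluster IPs can be pinned in the Service definitions. A minimal sketch of the relay's Service (the service name, pod label, and container port are illustrative; the actual manifests live in nix/docker/k8s/cardano-nodes.yaml):

```yaml
# Illustrative Service pinning the relay to a fixed ClusterIP
# and exposing it on the chosen node port.
apiVersion: v1
kind: Service
metadata:
  name: relay                # assumed service name
spec:
  type: NodePort
  clusterIP: 10.3.240.100    # RELAY_CLUSTER_IP
  selector:
    app: relay               # assumed pod label
  ports:
    - port: 3001             # assumed cardano-node listen port
      nodePort: 30010        # NODE_PORT
```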
We want to deploy the relay to a node with a known IP. Here we assign node type labels that we later use in node selectors.
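With kubectl this looks as follows; the nodeType label key and its values are illustrative, and the node names come from the kubectl get nodes output above:

```shell
# Label one node for the relay and the other for the block producer.
$ kubectl label nodes gke-cardano-default-pool-9bec42f2-g9p2 nodeType=relay
$ kubectl label nodes gke-cardano-default-pool-9bec42f2-w5w3 nodeType=bprod
```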
Create a Firewall Rule
Here we create a firewall rule for incoming traffic to a given node port.
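Since this cluster runs on GKE, one way to do this is with gcloud; the rule name is illustrative:

```shell
# Allow inbound TCP traffic to the relay's node port.
$ gcloud compute firewall-rules create cardano-relay \
    --direction INGRESS \
    --allow tcp:30010
```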
Relay Configuration
Next, we create a configuration map for the Relay
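One way to build such a ConfigMap is directly from the node's configuration files; the ConfigMap name is illustrative, with the standard mainnet files used as an example:

```shell
# Bundle the relay's configuration, topology, and genesis files.
$ kubectl create configmap relay-config \
    --from-file=mainnet-config.json \
    --from-file=mainnet-topology.json \
    --from-file=mainnet-byron-genesis.json \
    --from-file=mainnet-shelley-genesis.json
```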
Block Producer Configuration
Next, we create a configuration map for the Block Producer
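As with the relay, a sketch using kubectl; the ConfigMap name and the topology file name are illustrative:

```shell
# The block producer gets its own configuration and a topology
# that only points at the relay.
$ kubectl create configmap bprod-config \
    --from-file=mainnet-config.json \
    --from-file=bprod-topology.json
```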
Block Producer Keys Secret
Next, we store the Block Producer configuration keys in a Kubernetes Secret.
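A Secret keeps the keys out of the ConfigMap. The Secret name is illustrative; the file names are the usual Cardano operational key files:

```shell
# Store the KES/VRF signing keys and the operational certificate.
$ kubectl create secret generic bprod-keys \
    --from-file=kes.skey \
    --from-file=vrf.skey \
    --from-file=node.cert
```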
Deploying the Pods
Finally, we deploy the Cardano pods like this ...
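Assuming the manifests from the repository, this is a single apply:

```shell
$ kubectl apply -f nix/docker/k8s/cardano-nodes.yaml
```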
For details you may want to have a look at nix/docker/k8s/cardano-nodes.yaml.
Looking at the Relay output
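Assuming the relay pod is named relay, we can follow its log:

```shell
$ kubectl logs -f relay
```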
Looking at the Block Producer output
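Likewise for the block producer, assuming its pod is named bprod:

```shell
$ kubectl logs -f bprod
```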
Deleting Resources
To stop the nodes, we can simply delete their respective pods. The Cardano node process will shut down gracefully and continue from where it left off when we redeploy the pod.
The persistent volume is a cluster resource that survives namespace deletion.
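Assuming the pods are named relay and bprod:

```shell
# Delete the pods; the persistent volumes and chain data survive.
$ kubectl delete pod relay bprod
```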
Deleting the cluster
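When the setup is no longer needed, the whole cluster can be removed. The cluster name and zone below are taken from the kubectl context shown earlier (gke_beaconchain_us-central1-a_cardano):

```shell
# Delete the GKE cluster and all resources in it.
$ gcloud container clusters delete cardano --zone us-central1-a
```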