Cardano on Kubernetes
Advanced enterprise setup with Kubernetes
Cardano Namespace
Running Cardano on Kubernetes is easy. First, let's create a dedicated namespace for it and make it the default. In case you don't have a running Kubernetes cluster just yet, take a look at Creating a Kubernetes Cluster.
$ kubectl create namespace cardano
namespace/cardano created
$ kubectl config set-context --current --namespace=cardano
Context "gke_beaconchain_us-central1-a_cardano" modified.
Great, that is working.
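If you want to double-check that the new namespace really is the default for the current context, you can query it directly; this is just a sanity check and should print cardano.
$ kubectl config view --minify --output 'jsonpath={..namespace}'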
Attaching Node Labels
Here we look up the external IP of the node that will host our relay.
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cardano-default-pool-9bec42f2-g9p2 Ready <none> 14m v1.18.12-gke.1210 10.128.0.33 34.71.138.50 Container-Optimized OS from Google 5.4.49+ docker://19.3.9
gke-cardano-default-pool-9bec42f2-w5w3 Ready <none> 14m v1.18.12-gke.1210 10.128.0.32 35.223.44.154 Container-Optimized OS from Google 5.4.49+ docker://19.3.9
EXTERNAL_IP=34.71.138.50
NODE_PORT=30010
# The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range.
RELAY_CLUSTER_IP=10.3.240.100
BPROD_CLUSTER_IP=10.3.240.200
We want to deploy the relay to a node with a known IP. Here we assign node type labels that we later use in node selectors.
$ kubectl label nodes gke-cardano-default-pool-9bec42f2-g9p2 nodeType=relay
$ kubectl label nodes gke-cardano-default-pool-9bec42f2-w5w3 nodeType=bprod
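To verify that the labels took effect, you can filter the nodes by the nodeType label; each command should list exactly one node.
$ kubectl get nodes -l nodeType=relay
$ kubectl get nodes -l nodeType=bprod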
Create a Firewall Rule
Here we create a firewall rule for incoming traffic to a given node port.
$ gcloud compute firewall-rules create k8s-relay --allow tcp:$NODE_PORT
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
k8s-relay default INGRESS 1000 tcp:30010 False
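Should you want to inspect the rule again later, gcloud can show its full definition. This is purely optional.
$ gcloud compute firewall-rules describe k8s-relay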
Relay Configuration
Next, we create a configuration map for the Relay.
RELAY_TOPOLOGY=`cat << EOF
{
"Producers": [
{
"addr": "relays-new.cardano-mainnet.iohk.io",
"port": 3001,
"valency": 1
},
{
"addr": "$BPROD_CLUSTER_IP",
"port": 3001,
"valency": 1
}
]
}
EOF`
$ kubectl create configmap relaycfg \
--from-literal=publicIP="$EXTERNAL_IP:$NODE_PORT" \
--from-literal=customPeers="$BPROD_CLUSTER_IP:3001" \
--from-literal=topology="$RELAY_TOPOLOGY"
configmap/relaycfg created
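To confirm that the values were picked up as expected, you can describe the config map; the topology key should contain the JSON document we generated above. This step is optional.
$ kubectl describe configmap relaycfg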
Block Producer Configuration
Next, we create a configuration map for the Block Producer.
BPROD_TOPOLOGY=`cat << EOF
{
"Producers": [
{
"addr": "$RELAY_CLUSTER_IP",
"port": 3001,
"valency": 1
}
]
}
EOF`
$ kubectl create configmap bprodcfg \
--from-literal=topology="$BPROD_TOPOLOGY"
configmap/bprodcfg created
Block Producer Keys Secret
Next, we store the Block Producer's key files (KES signing key, VRF signing key, and operational certificate) in a Kubernetes Secret.
$ kubectl create secret generic nodekeys \
--from-file=./cardano/keys/pool/kes.skey \
--from-file=./cardano/keys/pool/vrf.skey \
--from-file=./cardano/keys/pool/node.cert
secret/nodekeys created
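A quick way to confirm that the key files made it into the secret, without printing their contents, is to describe it; this only lists the key names and their sizes in bytes.
$ kubectl describe secret nodekeys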
Deploying the Pods
Finally, we deploy the Cardano pods like this ...
$ kubectl apply -f nix/docker/k8s/cardano-nodes.yaml
storageclass.storage.k8s.io/cardano-standard-rwo created
persistentvolumeclaim/relay-data created
pod/relay created
service/relay-np created
service/relay-clip created
persistentvolumeclaim/bprod-data created
pod/bprod created
service/bprod-clip created
For details, you may want to have a look at nix/docker/k8s/cardano-nodes.yaml.
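The manifest itself is not reproduced in this guide. As a rough orientation only, a heavily abbreviated sketch of the relay portion could look something like the following; the actual file in the repository is more complete (block producer, storage class, persistent volume claims, volume mounts, and a real image reference), so treat this purely as an illustration of how the pieces above fit together: the nodeType label selects the target node, the relaycfg config map feeds the container environment, and the two services expose the relay on the node port and on the fixed cluster IP.
apiVersion: v1
kind: Pod
metadata:
  name: relay
  labels:
    app: relay
spec:
  nodeSelector:
    nodeType: relay                 # deploy to the node we labelled earlier
  containers:
  - name: relay
    image: <cardano-node-image>     # placeholder; see the actual manifest
    ports:
    - containerPort: 3001
    env:
    - name: CARDANO_PUBLIC_IP       # advertised address, from the config map
      valueFrom:
        configMapKeyRef:
          name: relaycfg
          key: publicIP
    - name: CARDANO_CUSTOM_PEERS
      valueFrom:
        configMapKeyRef:
          name: relaycfg
          key: customPeers
---
apiVersion: v1
kind: Service
metadata:
  name: relay-np                    # NodePort service for inbound public traffic
spec:
  type: NodePort
  selector:
    app: relay
  ports:
  - port: 3001
    targetPort: 3001
    nodePort: 30010                 # matches NODE_PORT and the firewall rule
---
apiVersion: v1
kind: Service
metadata:
  name: relay-clip                  # fixed ClusterIP, referenced by the block producer topology
spec:
  clusterIP: 10.3.240.100           # RELAY_CLUSTER_IP
  selector:
    app: relay
  ports:
  - port: 3001
    targetPort: 3001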
Looking at the Relay output
$ kubectl logs --tail=500 -f relay
Running the cardano node ...
Generating /var/cardano/config/mainnet-topology.json ...
CARDANO_CONFIG=/opt/cardano/config/mainnet-config.json
CARDANO_TOPOLOGY=/var/cardano/config/mainnet-topology.json
CARDANO_BIND_ADDR=0.0.0.0
CARDANO_PORT=3001
CARDANO_DATABASE_PATH=/opt/cardano/data
CARDANO_SOCKET_PATH=/opt/cardano/ipc/socket
CARDANO_LOG_DIR=/opt/cardano/logs
CARDANO_PUBLIC_IP=34.71.138.50:30010
CARDANO_CUSTOM_PEERS=10.3.240.200:3001
CARDANO_UPDATE_TOPOLOGY=true
CARDANO_BLOCK_PRODUCER=false
cardano-node run --config /opt/cardano/config/mainnet-config.json --topology /var/cardano/config/mainnet-topology.json --database-path /opt/cardano/data --socket-path /opt/cardano/ipc/socket --host-addr 0.0.0.0 --port 3001
Topology update: 47 * * * * root topologyUpdate
Initially waiting for 10 minutes ...
Listening on http://127.0.0.1:12798
[relay:cardano.node.networkMagic:Notice:5] [2021-02-27 14:35:06.97 UTC] NetworkMagic 764824073
[relay:cardano.node.basicInfo.protocol:Notice:5] [2021-02-27 14:35:06.97 UTC] Byron; Shelley
[relay:cardano.node.basicInfo.version:Notice:5] [2021-02-27 14:35:06.97 UTC] 1.25.1
...
[relay:cardano.node.ChainDB:Notice:35] [2021-02-27 14:35:08.22 UTC] Chain extended, new tip: 1dbc81e3196ba4ab9dcb07e1c37bb28ae1c289c0707061f28b567c2f48698d50 at slot 1
[relay:cardano.node.ChainDB:Notice:35] [2021-02-27 14:35:08.22 UTC] Chain extended, new tip: 52b7912de176ab76c233d6e08ccdece53ac1863c08cc59d3c5dec8d924d9b536 at slot 2
[relay:cardano.node.ChainDB:Notice:35] [2021-02-27 14:35:08.22 UTC] Chain extended, new tip: be06c81f4ad34d98578b67840d8e65b2aeb148469b290f6b5235e41b75d38572 at slot 3
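Once the relay has been running for a while, you can also query the chain tip from inside the pod. This assumes the image ships cardano-cli and a shell, which may not be the case for every image; the socket path is the CARDANO_SOCKET_PATH shown in the log above.
$ kubectl exec -it relay -- sh -c 'CARDANO_NODE_SOCKET_PATH=/opt/cardano/ipc/socket cardano-cli query tip --mainnet'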
Looking at the Block Producer output
$ kubectl logs --tail=500 -f bprod
Running the cardano node ...
Generating /var/cardano/config/mainnet-topology.json ...
CARDANO_CONFIG=/opt/cardano/config/mainnet-config.json
CARDANO_TOPOLOGY=/var/cardano/config/mainnet-topology.json
CARDANO_BIND_ADDR=0.0.0.0
CARDANO_PORT=3001
CARDANO_DATABASE_PATH=/opt/cardano/data
CARDANO_SOCKET_PATH=/opt/cardano/ipc/socket
CARDANO_LOG_DIR=/opt/cardano/logs
CARDANO_PUBLIC_IP=
CARDANO_CUSTOM_PEERS=
CARDANO_UPDATE_TOPOLOGY=false
CARDANO_BLOCK_PRODUCER=true
CARDANO_SHELLEY_KES_KEY=/var/cardano/config/keys/kes.skey
CARDANO_SHELLEY_VRF_KEY=/var/cardano/config/keys/vrf.skey
CARDANO_SHELLEY_OPERATIONAL_CERTIFICATE=/var/cardano/config/keys/node.cert
cardano-node run --config /opt/cardano/config/mainnet-config.json --topology /var/cardano/config/mainnet-topology.json --database-path /opt/cardano/data --socket-path /opt/cardano/ipc/socket --host-addr 0.0.0.0 --port 3001 --shelley-kes-key /var/cardano/config/keys/kes.skey --shelley-vrf-key /var/cardano/config/keys/vrf.skey --shelley-operational-certificate /var/cardano/config/keys/node.cert
Listening on http://127.0.0.1:12798
[bprod:cardano.node.networkMagic:Notice:5] [2021-02-27 14:35:06.97 UTC] NetworkMagic 764824073
[bprod:cardano.node.basicInfo.protocol:Notice:5] [2021-02-27 14:35:06.97 UTC] Byron; Shelley
[bprod:cardano.node.basicInfo.version:Notice:5] [2021-02-27 14:35:06.97 UTC] 1.25.1
...
[bprod:cardano.node.ChainDB:Notice:35] [2021-02-27 14:35:08.22 UTC] Chain extended, new tip: 1dbc81e3196ba4ab9dcb07e1c37bb28ae1c289c0707061f28b567c2f48698d50 at slot 1
[bprod:cardano.node.ChainDB:Notice:35] [2021-02-27 14:35:08.22 UTC] Chain extended, new tip: 52b7912de176ab76c233d6e08ccdece53ac1863c08cc59d3c5dec8d924d9b536 at slot 2
[bprod:cardano.node.ChainDB:Notice:35] [2021-02-27 14:35:08.22 UTC] Chain extended, new tip: be06c81f4ad34d98578b67840d8e65b2aeb148469b290f6b5235e41b75d38572 at slot 3
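If the block producer does not come up as expected, a useful first check is whether the key files from the nodekeys secret are mounted at the path the log reports; the command below assumes the pod is running.
$ kubectl exec bprod -- ls -l /var/cardano/config/keys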
Deleting Resources
To stop the nodes, we can simply delete their respective pods. The Cardano node process will shut down gracefully and continue from where it left off when we redeploy the pod.
$ kubectl delete pod,svc --all
pod "bprod" deleted
pod "relay" deleted
service "bprod-clip" deleted
service "relay-clip" deleted
service "relay-np" deleted
The persistent volumes are cluster-scoped resources that survive namespace deletion, so we delete the claims and volumes explicitly.
$ kubectl delete pvc,pv --all
persistentvolumeclaim "relay-data" deleted
persistentvolume "pvc-53819848-09bf-4bc7-9a26-25e318b1f98e" deleted
Deleting the Cluster
When you no longer need the setup, you can delete the whole cluster. The variables below are the ones used when the cluster was created.
$ gcloud container clusters delete $CLUSTER_NAME \
--zone=$CLUSTER_ZONE