Run Cardano Nodes
Run a Relay Node
Running a Cardano node is easy. But first, let's check whether your Docker environment is working.
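A minimal sanity check; the hello-world image is just an example:

```bash
# Confirm the Docker client can talk to the daemon
docker version

# Run a throw-away container to verify that images can be pulled and run
docker run --rm hello-world
```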
In case you'd like to find out more about what we've just done, take a look at docker run.
In case you don't have Docker installed just yet, take a look at Installing Docker.
Great, this is working. Now let's start the Cardano relay node.
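A sketch of the run command, assuming the nessusio/cardano-node image and the container/volume names used here; adjust these to your setup:

```bash
docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -v node-data:/opt/cardano/data \
    nessusio/cardano-node run
```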
This will run the Cardano node detached from the current terminal session, publish the default port 3001 on the host network, and write the block data to a Docker volume. Please make sure that this port is accessible for incoming connections. The above should run on Windows, macOS, and Linux, and on x86_64 or arm64 just the same. Full details about this image are given here.
A quick check of our docker stats should show that this container is indeed running.
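For example, assuming the container is named relay:

```bash
docker stats --no-stream relay
```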
The above shows the output on a Raspberry Pi 4, with aggregated CPU usage across its four ARM cores. The Pi is not very busy at the moment, and when fully synced it will be even less so.
You can look at the container's console output like this ...
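For example, following the last hundred lines (container name assumed):

```bash
docker logs --tail=100 -f relay
```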
After a little while, you should see the node find its initial peers and start syncing the blockchain.
Stopping the Container
The Cardano node likes to do a graceful shutdown with some database housekeeping before the process terminates. This image has built-in SIGINT (Ctrl+C) redirection so that you can gracefully shut down the node like this ...
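A sketch, assuming the container is named relay; the extended grace period (-t) is an arbitrary value chosen to give the node time to finish its housekeeping:

```bash
# Stop the container; the image's signal handling lets the node shut down cleanly.
# -t extends the grace period before Docker would force-kill the process.
docker stop -t 120 relay
```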
Removing the Container
It is not a good idea to forcefully remove a running container with docker rm -f, because this bypasses the graceful shutdown and will cause the node to re-validate the entire blockchain. On a fully synchronized node this may take more than 15 minutes. Instead, you'd want to stop the container first and then do a regular remove like this ...
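For example:

```bash
docker stop relay
docker rm relay
```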
Topology Updater
There is currently no active P2P module in cardano-1.25.x. Your node may call out to well-known relay nodes, but you may never have incoming connections. According to this, it is necessary to update your topology every hour. At the time of writing, the node doesn't do this on its own.
This functionality has been built into the image as well. The topology updater is triggered by CARDANO_UPDATE_TOPOLOGY=true, which will automatically call a topology update procedure once every hour.
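For example, restarting the relay with the updater enabled, after stopping and removing the previous relay container as shown above:

```bash
docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -v node-data:/opt/cardano/data \
    nessusio/cardano-node run
```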
After three hours, the network will have accepted you as a new peer. On consecutive hourly calls it will respond with ...
nice to meet you
welcome to the topology
glad you're staying with us
You can look at the output of the topology updater like this ...
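A sketch; the log location inside the container is an assumption here, so check the image documentation for the exact path:

```bash
# Show the last result reported by the topology updater (path illustrative)
docker exec -it relay cat /opt/cardano/logs/topologyUpdateResult
```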
Live View Monitoring
After looking at the console output for a while, you may wish to have all the pertinent information available on one screen, perhaps even without running an additional heavyweight monitoring process.
An excellent bash-based monitor that refreshes every few seconds is provided for us by guild-operators. The image incorporates this monitor and configures it automatically according to the node's config.
You can start the monitoring process like this ...
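A sketch, assuming the guild-operators monitor is available as gLiveView on the image's PATH:

```bash
docker exec -it relay gLiveView
```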
For details on how to execute a process within a running container, take a look at docker exec.
Command Line Interface
We can also use the image to run Cardano CLI commands.
For this to work, the node must share its IPC socket location, which can then be used in the alias definition.
Define a cardano-cli alias
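A sketch of such an alias; the node-ipc volume name and the socket path inside the image are assumptions, and the node itself must have been started with that same volume mounted at its socket location:

```bash
alias cardano-cli="docker run --rm -it \
  -v node-ipc:/opt/cardano/ipc \
  -e CARDANO_NODE_SOCKET_PATH=/opt/cardano/ipc/node.socket \
  nessusio/cardano-node cardano-cli"
```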
Run stateless CLI commands
This approach is stateless because any output files that such a CLI process might generate are lost when the process terminates, which is as soon as the command returns.
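For example, querying the chain tip needs no state at all:

```bash
cardano-cli query tip --mainnet
```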
There are two possible solutions to this problem. First, we could exec into the running container and invoke cardano-cli from within the relay container like this ...
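For example, assuming the image already sets CARDANO_NODE_SOCKET_PATH for the running node:

```bash
docker exec -it relay cardano-cli query tip --mainnet
```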
All generated output files would live in the relay container and share its lifecycle, i.e. they are removed when the container is removed.
Run stateful CLI commands
A better approach is perhaps to mount some local directory into the container like this ...
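A sketch, reusing the alias from above and mounting a host directory at an illustrative path inside the container:

```bash
alias cardano-cli="docker run --rm -it \
  -v node-ipc:/opt/cardano/ipc \
  -e CARDANO_NODE_SOCKET_PATH=/opt/cardano/ipc/node.socket \
  -v ~/cardano:/var/cardano/local \
  -w /var/cardano/local \
  nessusio/cardano-node cardano-cli"
```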
and then ...
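... generate output files that survive the command, e.g. a payment key pair that ends up in ~/cardano on the host:

```bash
cardano-cli address key-gen \
  --verification-key-file /var/cardano/local/payment.vkey \
  --signing-key-file /var/cardano/local/payment.skey
```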
Custom Configuration
In this section we define a custom config volume called cardano-relay-config. It holds the mainnet-topology.json that we define according to our needs. Note that our custom config lives in /var/cardano/config and not in the default location /opt/cardano/config.
Define the Relay Topology
The Relay connects to the World and the Block Producer
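A sketch of such a topology file; <BLOCK_PRODUCER_IP> is a placeholder for your producer's address:

```bash
cat << EOF > mainnet-topology.json
{
  "Producers": [
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2
    },
    {
      "addr": "<BLOCK_PRODUCER_IP>",
      "port": 3001,
      "valency": 1
    }
  ]
}
EOF
```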
Set up the config volume
We now copy the topology file that we generated above to the cardano-relay-config volume.
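One way to do this is via a short-lived helper container that has the volume mounted:

```bash
# Create the named volume (a no-op if it already exists)
docker volume create cardano-relay-config

# Copy the topology file from the current directory into the volume
docker run --rm \
  -v cardano-relay-config:/var/cardano/config \
  -v "$PWD":/src \
  alpine cp /src/mainnet-topology.json /var/cardano/config/
```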
Start the Relay Node
We can now start the relay again with that custom config volume in place. Of course, we could have mounted the topology file into the container directly, but the volume approach is preferable, especially when such a volume holds multiple config and key files.
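A sketch, assuming the image lets us point at the custom topology via a CARDANO_TOPOLOGY variable; check the image documentation for the exact variable names:

```bash
docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -e CARDANO_TOPOLOGY=/var/cardano/config/mainnet-topology.json \
    -v cardano-relay-config:/var/cardano/config \
    -v node-data:/opt/cardano/data \
    nessusio/cardano-node run
```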
Running a Block Producer
The block producer connects to one or more trusted relays and no other peers.
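A sketch of the producer topology; <RELAY_IP> is a placeholder for your relay's address:

```bash
cat << EOF > mainnet-prod-topology.json
{
  "Producers": [
    {
      "addr": "<RELAY_IP>",
      "port": 3001,
      "valency": 1
    }
  ]
}
EOF
```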
Set up the config volume
Similar to above, we now copy the topology to the cardano-prod-config volume. Additionally, we copy keys and certificates to that volume so that we can then reference it from the container configuration.
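Again via a short-lived helper container; the keys/ directory and its file names (kes.skey, vrf.skey, node.cert) are illustrative and assumed to sit in the current directory:

```bash
docker volume create cardano-prod-config

docker run --rm \
  -v cardano-prod-config:/var/cardano/config \
  -v "$PWD":/src \
  alpine sh -c "cp /src/mainnet-prod-topology.json /var/cardano/config/ \
             && cp -r /src/keys /var/cardano/config/keys"
```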
Start the Block Producer Node
Having completed all the steps required to generate the pool keys, we can now start the block producer like this ...
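A sketch; the CARDANO_* variable names for the producer keys are assumptions mirroring cardano-node's run options, so verify them against the image documentation:

```bash
docker run --detach \
    --name=bprod \
    -p 3001:3001 \
    -e CARDANO_BLOCK_PRODUCER=true \
    -e CARDANO_TOPOLOGY="/var/cardano/config/mainnet-prod-topology.json" \
    -e CARDANO_SHELLEY_KES_KEY="/var/cardano/config/keys/kes.skey" \
    -e CARDANO_SHELLEY_VRF_KEY="/var/cardano/config/keys/vrf.skey" \
    -e CARDANO_SHELLEY_OPERATIONAL_CERTIFICATE="/var/cardano/config/keys/node.cert" \
    -v cardano-prod-config:/var/cardano/config \
    -v bprod-data:/opt/cardano/data \
    nessusio/cardano-node run
```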
The block producer is meant to run on a different host than the relay. We do this for security and failover reasons. The block producer could connect to multiple redundant relays; if one of them goes down, the block producer would continue to receive transactions and create blocks when scheduled.
Leaderlog Schedule
For a Stake Pool Operator it is important to know when the node is scheduled to produce the next block. We definitely want to be online at that important moment and fulfill our block producing duties. There are better times to do node maintenance.
This important functionality has also been built into the nessusio/cardano-tools image.
First, we also define an alias and ping the node that we want to work with.
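A sketch, assuming the tools image exposes the cncli binary; <RELAY_IP> is a placeholder:

```bash
alias cncli="docker run --rm -it nessusio/cardano-tools cncli"

cncli ping --host <RELAY_IP> --port 3001
```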
Details about this API are here.
Syncing the database
This command connects to a remote node and synchronizes blocks to a local SQLite database.
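For example, mounting a host directory so the database persists between runs (paths illustrative):

```bash
docker run --rm -it \
  -v ~/cncli:/var/cardano/cncli \
  nessusio/cardano-tools cncli sync \
    --host <RELAY_IP> \
    --port 3001 \
    --db /var/cardano/cncli/cncli.db
```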
Running leaderlog
We can now obtain the leader schedule for our pool.
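A sketch with commonly used options; the exact flag set depends on your cncli version, so consult cncli leaderlog --help. The pool id, VRF key, and genesis file paths are placeholders:

```bash
docker run --rm -it \
  -v ~/cncli:/var/cardano/cncli \
  nessusio/cardano-tools cncli leaderlog \
    --db /var/cardano/cncli/cncli.db \
    --pool-id <POOL_ID> \
    --pool-vrf-skey /var/cardano/cncli/vrf.skey \
    --byron-genesis /var/cardano/cncli/byron-genesis.json \
    --shelley-genesis /var/cardano/cncli/shelley-genesis.json \
    --ledger-set current
```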