Growing the cluster
You can expand an existing cluster by adding new nodes after it has been started.
To do this, launch a new node with the same cluster-name and specify at least one existing node's address in metadata-client.addresses. This allows the new node to discover the metadata servers and join the cluster.
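For example, a minimal configuration for the new node might look like the sketch below. The cluster name, node name, and address are placeholders; substitute the values of your existing deployment:

cluster-name = "my-cluster"        # must match the name used by the existing nodes
node-name = "node-4"               # assumed unique name for the new node

[metadata-client]
# At least one existing node's address; host and port are placeholders.
addresses = ["http://node-1:5122"]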
If you plan to scale your cluster over time, we strongly recommend enabling snapshotting. Without it, newly added nodes may not be fully utilized by the system. See the snapshotting documentation for more details.
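As a rough sketch, and assuming the snapshot repository keys described in the snapshotting documentation ([worker.snapshots] with an object-store destination), enabling snapshotting could look like this; the bucket path and interval below are placeholder values:

[worker.snapshots]
# Placeholder object-store location for partition snapshots.
destination = "s3://my-snapshot-bucket/my-cluster"
# Assumed example: snapshot a partition roughly every 10000 new records.
snapshot-interval-num-records = 10000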
Currently, removing nodes from the cluster is not supported. Once a node joins the cluster, it cannot be removed or stopped. Support for shrinking the cluster will be added in the future.
Modifying Cluster Configuration
A Restate cluster can initially be started with a single node. Follow the cluster deployment instructions and ensure that default-log-replication is set to 1 in the configuration:
[bifrost.replicated-loglet]
# Replicate the data to 1 node. The cluster in this
# configuration cannot tolerate the failure of any nodes.
default-log-replication = 1
Later, after adding new nodes, you may want to update the cluster’s replication settings to take advantage of the additional nodes and improve fault tolerance.
You can modify the configuration using the restatectl config command.
Viewing Cluster Configuration
To check the current cluster settings, run:
restatectl config get
Example output:

⚙️ Cluster Configuration
├ Number of partitions: 8
├ Partition replication: *
└ Logs Provider: replicated
  ├ Log replication: {node: 1}
  └ Nodeset size: 0
Updating Cluster Configuration
To increase log replication to 2, run:
restatectl config set --log-replication 2
Run restatectl config set --help to see all available configuration options.
Example output:
⚙️ Cluster Configuration
├ Number of partitions: 8
├ Partition replication: *
└ Logs Provider: replicated
- ├ Log replication: {node: 1}
+ ├ Log replication: {node: 2}
  └ Nodeset size: 0
? Apply changes? (y/n) › no
Press y to confirm and apply the new configuration.
Verifying the Update
To check if the logs have been reconfigured, run:
restatectl logs list
Example output:

Log chain v5
└ Logs Provider: replicated
  ├ Log replication: {node: 2}
  └ Nodeset size: 0
 L-ID  FROM-LSN  KIND        LOGLET-ID  REPLICATION  SEQUENCER  NODESET
 0     2         Replicated  0_2        {node: 2}    N1:2       [N1, N2, N3]
 1     2         Replicated  1_2        {node: 2}    N1:2       [N1, N2, N3]
 2     2         Replicated  2_2        {node: 2}    N1:2       [N1, N2, N3]
 3     2         Replicated  3_2        {node: 2}    N1:2       [N1, N2, N3]
 4     2         Replicated  4_2        {node: 2}    N1:2       [N1, N2, N3]
 5     2         Replicated  5_2        {node: 2}    N1:2       [N1, N2, N3]
 6     2         Replicated  6_2        {node: 2}    N1:2       [N1, N2, N3]
 7     2         Replicated  7_2        {node: 2}    N1:2       [N1, N2, N3]
You may need to run restatectl logs list multiple times before all logs reflect the updated replication setting. If the update takes longer than expected, check the node logs for errors or warnings.