
Snapshots

note

This page covers configuring a Restate cluster to share partition snapshots for fast fail-over and bootstrapping new nodes. For backup of Restate nodes, see Data Backup.

Architectural overview

To understand the terminology used on this page, it might be helpful to read through the architecture reference.

caution

Snapshots are essential to support safe log trimming and also allow you to set partition replication to a subset of all cluster nodes, while still allowing for fast partition fail-over to any live node. Snapshots are also necessary to add more nodes in the future.

Restate workers can be configured to periodically publish snapshots of their partition state to a shared destination. Snapshots are not backups; rather, they allow nodes that have not previously served a partition to bootstrap a copy of its state. Without snapshots, placing a partition processor on a node that was not previously a follower would require a full replay of that partition's log, which might take a long time, or be impossible if the log has been trimmed.

Configuring Snapshots

Restate clusters should always be configured with a snapshot repository to allow nodes to efficiently share partition state, and for new nodes to be added to the cluster in the future. Restate currently supports using Amazon S3 (or an API-compatible object store) as a shared snapshot repository. To set up a snapshot destination, update your server configuration as follows:

[worker.snapshots]
destination = "s3://snapshots-bucket/cluster-prefix"
snapshot-interval-num-records = 10000

This enables automated periodic snapshots to be written to the specified bucket. You can also trigger snapshot creation manually using restatectl:

restatectl snapshots create-snapshot --partition-id <PARTITION_ID>

We recommend testing the snapshot configuration by requesting a snapshot and examining the contents of the bucket. You should see a new prefix with each partition's id, and a latest.json file pointing to the most recent snapshot.
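For example, with the destination configured above and the AWS CLI available, you can inspect the repository layout with a command along these lines (the partition id 0 is illustrative):

aws s3 ls s3://snapshots-bucket/cluster-prefix/0/

The listing should include the partition's latest.json object alongside the snapshot data it points to.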

No additional configuration is required to enable restoring snapshots. When a partition processor starts up and finds no local partition state, it will attempt to restore the latest snapshot from the repository. This allows additional partition workers to be bootstrapped efficiently.

Experimenting with snapshots without an object store

For testing purposes, you can also use the file:// protocol to publish snapshots to a local directory. This is mostly useful when experimenting with multi-node configurations on a single machine. The file provider does not support conditional updates, which makes it unsuitable for potentially contended operation.
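As a minimal sketch, assuming a local directory such as /tmp/restate-snapshots exists on the machine (the path is illustrative), the configuration has the same shape as for S3, just with a file:// URL:

[worker.snapshots]
destination = "file:///tmp/restate-snapshots"
snapshot-interval-num-records = 1000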

Object Store endpoint and access credentials

Restate supports Amazon S3 and S3-compatible object stores. Object store locations are specified as a URL whose scheme is s3:// and whose authority is the name of the bucket. Optionally, you may supply an additional path within the bucket, which will be used as a common prefix for all operations. If you need to specify a custom endpoint for an S3-compatible store, you can override the API endpoint using the aws-endpoint-url config key.

For typical server deployments in AWS using Amazon S3, you might not need to set the region or credentials at all beyond specifying the destination path: Restate's object store support uses the conventional AWS SDKs and Tools credentials discovery. We strongly recommend against putting long-lived credentials in configuration. For development, you can use short-term credentials provided by a profile.
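In that case, the snapshot configuration can be as minimal as the destination itself, with the region and credentials resolved from the instance's environment (the bucket and prefix below are illustrative):

[worker.snapshots]
destination = "s3://snapshots-bucket/cluster-prefix"
snapshot-interval-num-records = 10000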

Local development with Minio

Minio is a common target while developing locally. You can configure it as follows:

[worker.snapshots]
destination = "s3://bucket/cluster-name"
snapshot-interval-num-records = 1000
aws-region = "local"
aws-access-key-id = "minioadmin"
aws-secret-access-key = "minioadmin"
aws-endpoint-url = "http://localhost:9000"
aws-allow-http = true
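
Assuming MinIO is running locally with its default minioadmin credentials, you could create the bucket referenced above with the AWS CLI before starting Restate (the region value only needs to match the configuration; MinIO does not validate it):

AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin AWS_DEFAULT_REGION=local \
  aws --endpoint-url http://localhost:9000 s3 mb s3://bucket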

Local development with S3

Assuming you have a profile set up to assume a specific role granted access to your bucket, you can work with S3 directly using a configuration like:

[worker.snapshots]
destination = "s3://bucket/cluster-name"
snapshot-interval-num-records = 1000
aws-profile = "restate-dev"

This assumes that in your ~/.aws/config you have a profile similar to:

[profile restate-dev]
source_profile = ...
region = us-east-1
role_arn = arn:aws:iam::123456789012:role/restate-local-dev-role
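
You can sanity-check that the profile resolves credentials before starting Restate, for example with:

aws sts get-caller-identity --profile restate-dev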

Log trimming and Snapshots

In a distributed environment, the shared log is the mechanism for replicating partition state among nodes. It is therefore critical that all cluster members can read all the relevant events recorded in the log, including newly built nodes that join the cluster in the future. This requirement is at odds with an immutable log growing unboundedly. Snapshots enable log trimming - the process of removing older segments of the log.

When partition processors successfully publish a snapshot, they update their "archived" log sequence number (LSN). This reflects the position in the log at which the snapshot was taken and allows the cluster to safely trim its logs.

By default, Restate will attempt to trim logs once an hour, which you can override or disable in the server configuration:

[admin]
log-trim-check-interval = "1h"

This interval determines only how frequently the check is performed; it is not a guarantee that logs will be trimmed. Restate automatically determines the appropriate safe trim point for each partition's log.

If replicated logs are in use in a clustered environment, the log safe trim point will be determined based on the archived LSN. If a snapshot repository is not configured, then archived LSNs are not reported. Instead, the safe trim point will be determined by the smallest reported persisted LSN across all known processors for the given partition. Single-node local-only logs are also trimmed based on the partitions' persisted LSNs.

The presence of any dead nodes in a cluster will cause trimming to be suspended for all partitions, unless a snapshot repository is configured. This is because we cannot know which partitions may reside on the unreachable nodes; without snapshots to restore from, their processors would become stuck if the log were trimmed past their position before the node comes back.

When a node starts up with pre-existing partition state and finds that the partition's log has been trimmed to a point beyond the most recent locally-applied LSN, the node will attempt to download the latest snapshot from the configured repository. If a suitable snapshot is available, the processor will re-bootstrap its local state and resume applying the log.

Handling log trim gap errors

If you observe repeated Shutting partition processor down because it encountered a trim gap in the log. errors in the Restate server log, it indicates that a processor is failing to start because log records it needs are missing. To recover, ensure that a snapshot repository is correctly configured and accessible from the node reporting the errors. You can still recover even if no snapshots were taken previously, as long as at least one healthy node holds a copy of the partition data: in that case, first configure the existing node(s) to publish snapshots for the affected partition(s) to a shared destination.
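For example, once the repository is configured on a healthy node that still holds the partition's data, you can publish a snapshot for the affected partition explicitly using the same command shown earlier, and then restart the node reporting the trim gap (the partition id is illustrative):

restatectl snapshots create-snapshot --partition-id 2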

Pruning the snapshot repository

Restate does not currently support pruning older snapshots from the snapshot repository. We recommend implementing an object lifecycle policy directly in the object store to manage retention.
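
As an illustration, with Amazon S3 you could attach a lifecycle rule to the snapshot prefix using the AWS CLI. The bucket name, prefix, and 30-day retention below are assumptions to adapt to your setup; keep the retention comfortably longer than your snapshot interval so that the snapshot referenced by latest.json is never expired:

aws s3api put-bucket-lifecycle-configuration \
  --bucket snapshots-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-snapshots",
      "Status": "Enabled",
      "Filter": { "Prefix": "cluster-prefix/" },
      "Expiration": { "Days": 30 }
    }]
  }'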