The following is the default configuration. It does not include all possible configuration options, since some can be conflicting. Take a look at the configuration reference below for a full list of options. Note that configuration defaults might change across server releases; if you want to make sure you use stable values, use an explicit configuration file and pass the path via --config-path=<PATH> as described above.
Limits the number of timers held in memory. If this limit is set and exceeded, the timers farther in the future will be spilled to disk.
The memory budget for rocksdb memtables in bytes. The total is divided evenly across partitions. The divisor is defined in num-partitions-to-share-memory-budget. If this value is set, it overrides the ratio defined in rocksdb-memory-ratio.
The memory budget for rocksdb memtables as a ratio. This defines the total memory for rocksdb as a ratio of all memory available to memtables (see rocksdb-total-memtables-ratio in common). The budget is then divided evenly across partitions. The divisor is defined in num-partitions-to-share-memory-budget.
Files will be opened in “direct I/O” mode, which means that data read from or written to the disk will not be cached or buffered. The hardware buffer of the devices may however still be used. Memory-mapped files are not impacted by these parameters.
If non-zero, bigger reads are performed when doing compaction. If you’re running RocksDB on spinning disks, you should set this to at least 2MB, so that RocksDB’s compaction does sequential instead of random reads.
StatsLevel can be used to reduce statistics overhead by skipping certain types of stats in the stats collection process. Default: “except-detailed-timers”
Collect all stats, including measuring the duration of mutex operations. If obtaining the time is expensive on the platform, this can reduce scalability with more threads, especially for writes.
Defines the threshold after which queued invocations will spill to disk at the path defined in tmp-dir. In other words, this is the number of invocations that can be kept in memory before spilling to disk. This is a per-partition limit.
Configures throttling for service invocations at the node level. This throttling mechanism uses a token bucket algorithm to control the rate at which invocations can be processed, helping to prevent resource exhaustion and maintain system stability under high load. The throttling limit is shared across all partitions running on this node, providing a global rate limit for the entire node rather than a per-partition limit. When unset, no throttling is applied and invocations are processed without rate limiting.
The rate at which the tokens are replenished. Syntax: <rate>/<unit>, where <unit> is s|sec|second, m|min|minute, or h|hr|hour. The unit defaults to seconds if not specified.
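As a sketch, this is what such a rate might look like in the configuration file. The surrounding section and exact key name are assumptions, not taken from this reference:

```toml
# Hypothetical sketch: section placement and key name are illustrative.
rate = "100/s"      # replenish 100 tokens per second
# Other accepted forms per the syntax above:
# rate = "6000/min"
# rate = "100"      # unit defaults to seconds
```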
Configures rate limiting for service actions at the node level. This throttling mechanism uses a token bucket algorithm to control the rate at which actions can be processed, helping to prevent resource exhaustion and maintain system stability under high load. The throttling limit is shared across all partitions running on this node, providing a global rate limit for the entire node rather than a per-partition limit. When unset, no throttling is applied and actions are processed without rate limiting.
The rate at which the tokens are replenished. Syntax: <rate>/<unit>, where <unit> is s|sec|second, m|min|minute, or h|hr|hour. The unit defaults to seconds if not specified. See the example sketch above.
Partition store snapshotting settings. At a minimum, set destination and snapshot-interval-num-records to enable snapshotting. For a complete example, see Snapshots.
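As a minimal sketch, assuming the [worker.snapshots] section name (verify against the full configuration reference):

```toml
# Assumed section name; check the full reference for your version.
[worker.snapshots]
# Base URL for snapshots (s3:// or file://)
destination = "s3://my-bucket/cluster-snapshots"
# Create a snapshot roughly every 1,000,000 new log records
snapshot-interval-num-records = 1000000
```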
Base URL for cluster snapshots. Supports the s3:// and file:// protocol schemes. S3-compatible object stores must support ETag-based conditional writes. Default: None
Number of log records that trigger a snapshot to be created. As snapshots are created asynchronously, the actual number of new records that will trigger a snapshot will vary. The counter for the subsequent snapshot begins from the LSN at which the previous snapshot export was initiated. Only leader Partition Processors will take snapshots for a given partition. This setting does not influence explicitly requested snapshots triggered using restatectl. Default: None - automatic snapshots are disabled
The AWS configuration profile to use for S3 object store destinations. If you use named profiles in your AWS configuration, you can replace all the other settings with a single profile reference. See the [AWS documentation on profiles](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html) for more.
AWS region to use with S3 object store destinations. This may be inferred from the environment, for example the current region when running in EC2. Because of the request signing algorithm, this must have a value. For MinIO, you can generally set this to any string, such as us-east-1.
When you use Amazon S3, this is typically inferred from the region and there is no need to set it. With other object stores, you will have to provide an appropriate HTTP(S) endpoint. If not using HTTPS, also set aws-allow-http to true.
Local concurrency limit bounding the number of concurrent requests. If exceeded, the ingress will reply immediately with an appropriate status code. Default is unlimited.
Maximum number of in-flight records the sequencer can accept. Once this maximum is hit, the sequencer will induce back pressure on clients. This controls the total number of records regardless of how many batches they span. Note that this will be increased to fit the biggest batch of records being enqueued.
Maximum number of records to prefetch from log servers. The number of records bifrost will attempt to prefetch from the replicated loglet’s log-servers for every loglet reader (e.g. partition processor). Note that this mainly impacts readers that are not co-located with the loglet sequencer (i.e. partition processor followers).
Trigger to prefetch more records. When read-ahead is used (readahead-records), this value (a ratio expressed as a float) determines when readers should trigger a prefetch for another batch to fill up the buffer. For instance, if this value is 0.3, then bifrost will trigger a prefetch when 30% or more of the read-ahead slots become available (e.g. the partition processor consumed records and freed up enough slots). The higher the value, the longer bifrost will wait before it triggers the next fetch, potentially fetching more records as a result. To illustrate, if readahead-records is set to 100 and readahead-trigger-ratio is 1.0, then bifrost will prefetch up to 100 records from log-servers and will not trigger the next prefetch until the consumer has consumed 100% of this buffer. This means that bifrost will read in batches but will not do so while the consumer is still reading the previous batch. The value must be between 0 and 1; it will be clamped at 1.0.
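A hypothetical sketch making the ratio arithmetic concrete (section placement is an assumption):

```toml
# Illustrative values; section placement assumed.
readahead-records = 100
# Trigger the next prefetch once 30% of the 100 slots (i.e. 30 records)
# have been consumed and freed up.
readahead-trigger-ratio = 0.3
```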
When enabled, automatic improvement periodically checks with the loglet provider whether the loglet configuration can be improved by performing a reconfiguration. This allows the log to pick up replication property changes, apply better placement of replicas, or reconfigure for other reasons.
The memory budget for rocksdb memtables as a ratio. This defines the total memory for rocksdb as a ratio of all memory available to memtables (see rocksdb-total-memtables-ratio in common).
Auto join the metadata cluster on startup. Defines whether this node should auto join the metadata store cluster when being started for the first time.
Files will be opened in “direct I/O” mode, which means that data read from or written to the disk will not be cached or buffered. The hardware buffer of the devices may however still be used. Memory-mapped files are not impacted by these parameters.
If non-zero, bigger reads are performed when doing compaction. If you’re running RocksDB on spinning disks, you should set this to at least 2MB, so that RocksDB’s compaction does sequential instead of random reads.
StatsLevel can be used to reduce statistics overhead by skipping certain types of stats in the stats collection process. Default: “except-detailed-timers”
Collect all stats, including measuring the duration of mutex operations. If obtaining the time is expensive on the platform, this can reduce scalability with more threads, especially for writes.
The number of ticks before triggering an election. The value must be larger than raft_heartbeat_tick. It’s recommended to set raft_election_tick = 10 * raft_heartbeat_tick. Decrease this value if you want to react faster to failed leaders. Note that decreasing this value too much can lead to cluster instabilities due to falsely detecting dead leaders.
The number of ticks before sending a heartbeat. A leader sends heartbeat messages to maintain its leadership every raft_heartbeat_tick ticks. Decrease this value to send heartbeats more often.
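A sketch following the recommended 10:1 ratio between election and heartbeat ticks. The key spelling follows the names used above; the spelling in the TOML file and the section placement are assumptions:

```toml
# Illustrative values; key spelling and section placement assumed.
raft_heartbeat_tick = 2
raft_election_tick = 20   # recommended: 10 * raft_heartbeat_tick
```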
Common network configuration options for communicating with Restate cluster nodes. Note that similar keys are present in other config sections, such as in Service Client options.
The memory budget for rocksdb memtables as a ratio. This defines the total memory for rocksdb as a ratio of all memory available to the log-server (see rocksdb-total-memtables-ratio in common).
The maximum number of subcompactions to run in parallel. Setting this to 1 means no sub-compactions are allowed (i.e. only 1 thread will do the compaction). Default is 0, which maps to floor(number of CPU cores / 2).
Files will be opened in “direct I/O” mode, which means that data read from or written to the disk will not be cached or buffered. The hardware buffer of the devices may however still be used. Memory-mapped files are not impacted by these parameters.
If non-zero, bigger reads are performed when doing compaction. If you’re running RocksDB on spinning disks, you should set this to at least 2MB, so that RocksDB’s compaction does sequential instead of random reads.
StatsLevel can be used to reduce statistics overhead by skipping certain types of stats in the stats collection process. Default: “except-detailed-timers”
Collect all stats, including measuring the duration of mutex operations. If obtaining the time is expensive on the platform, this can reduce scalability with more threads, especially for writes.
[PREVIEW FEATURE] Setting the location allows Restate to form a tree-like cluster topology. The value is written in the format “region[.zone]” to assign this node to a specific region, or to a zone within a region. The values of region and zone are arbitrary, but whitespace and . are disallowed. NOTE: It’s strongly recommended to not change the node’s location string after its initial registration. Changing the location may result in data loss or data inconsistency if log-server is enabled on this node. When this value is not set, the node is considered to be in the default location. The default location means that the node is not assigned to any specific region or zone.
Examples:
- us-west — the node is in the us-west region.
- us-west.a1 — the node is in the us-west region and in the a1 zone.
- "" — [default] the node is in the default location.
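As a sketch, the location option from this entry could be set like so (placement in the file is illustrative):

```toml
# Assign this node to zone a1 within region us-west.
location = "us-west.a1"
```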
If true, then this node is allowed to automatically provision itself as a new cluster. This node must have an admin role, and a new nodes configuration will be created that includes this node. auto-provision is allowed by default in development mode and is disabled if restate-server runs with the --production flag, to prevent cluster nodes from forming their own clusters rather than forming a single cluster. Use restatectl to provision the cluster/node if automatic provisioning is disabled. This can also be explicitly disabled by setting this value to false. Default: true
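For example, a sketch of explicitly disabling automatic provisioning (key name taken from this entry; placement in the file is illustrative):

```toml
# Disable automatic provisioning; provision with restatectl instead.
auto-provision = false
```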
This location will be used to persist cluster metadata. Takes the form of a URL with s3:// as the protocol and the bucket name as the authority, plus an optional prefix specified as the path component. Example: s3://bucket/prefix
The AWS configuration profile to use for S3 object store destinations. If you use named profiles in your AWS configuration, you can replace all the other settings with a single profile reference. See the [AWS documentation on profiles](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html) for more.
AWS region to use with S3 object store destinations. This may be inferred from the environment, for example the current region when running in EC2. Because of the request signing algorithm, this must have a value. For MinIO, you can generally set this to any string, such as us-east-1.
When you use Amazon S3, this is typically inferred from the region and there is no need to set it. With other object stores, you will have to provide an appropriate HTTP(S) endpoint. If not using HTTPS, also set aws-allow-http to true.
Address to bind for the Node server. Derived from the advertised address, defaulting to 0.0.0.0:$PORT (where the port will be inferred from the URL scheme).
Number of partitions that will be provisioned during initial cluster provisioning. Partitions are the logical shards used to process messages. Cannot be higher than 65535 (you should almost never need that many partitions anyway). NOTE 1: This config entry only impacts the initial number of partitions; the value of this entry is ignored for provisioned nodes/clusters. NOTE 2: This will be renamed to default-num-partitions as of v1.3+. Default: 24
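As a sketch, using the default-num-partitions name from NOTE 2 (the key name in your server version may differ):

```toml
# Only honored during initial cluster provisioning; ignored afterwards.
# Key name assumed per NOTE 2; pre-v1.3 versions may use a different name.
default-num-partitions = 24
```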
Configures the global default replication factor to be used by the system. Note that this value only impacts the initial cluster provisioning and will not be respected after the cluster has been provisioned. To update existing clusters, use the restatectl utility.
Log filter configuration. Can be overridden by the RUST_LOG environment variable. Check the RUST_LOG documentation for more details on how to configure it.
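A sketch, assuming the option is named log-filter in the configuration file:

```toml
# Illustrative filter: warn globally, info for restate crates.
# Overridable at runtime via the RUST_LOG environment variable.
log-filter = "warn,restate=info"
```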
Address to bind for the tokio-console tracing subscriber. If unset and restate-server is built with tokio-console support, it’ll listen on 0.0.0.0:6669.
Storage high priority thread pool. This configures the restate-managed storage thread pool for performing high-priority or latency-sensitive storage tasks when the IO operation cannot be performed on in-memory caches.
Storage low priority thread pool. This configures the restate-managed storage thread pool for performing low-priority or latency-insensitive storage tasks.
The memory size used across all memtables (a ratio between 0 and 1.0). This limits how much memory memtables can consume out of the value in rocksdb-total-memory-limit. When set to 0, memtables can take all available memory, up to the value specified in rocksdb-total-memory-limit. This value will be sanitized to 1.0 if outside the valid bounds.
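A hypothetical sketch of how the two limits relate (the accepted value format for the byte limit is an assumption):

```toml
# Cap total RocksDB memory at ~4 GiB (plain byte count assumed accepted)...
rocksdb-total-memory-limit = 4294967296
# ...and let memtables use at most half of that budget.
rocksdb-total-memtables-ratio = 0.5
```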
Note that if automatic memory budgeting is enabled, it should be safe to allow rocksdb to stall if it hits the limit. However, if a rocksdb stall kicks in, it’s unlikely that the system will recover from it without intervention.
Restate uses Scarf to collect anonymous usage data to help us understand how the software is being used. You can set this flag to true to disable this collection. It can also be set with the environment variable DO_NOT_TRACK=1.
This is a shortcut to set both tracing_runtime_endpoint and tracing_services_endpoint. Specify the tracing endpoint to send traces to. Traces will be exported using OTLP gRPC through opentelemetry_otlp. To configure the sampling, please refer to the opentelemetry autoconfigure docs.
Overrides tracing_endpoint for runtime traces. Specify the tracing endpoint to send runtime traces to. Traces will be exported using OTLP gRPC through opentelemetry_otlp. To configure the sampling, please refer to the opentelemetry autoconfigure docs.
Overrides tracing_endpoint for services traces. Specify the tracing endpoint to send services traces to. Traces will be exported using OTLP gRPC through opentelemetry_otlp. To configure the sampling, please refer to the opentelemetry autoconfigure docs.
If set, an exporter will be configured to write traces to files using the Jaeger JSON format. Each trace file will start with the trace prefix. If unset, no traces will be written to file. It can be used to export traces in a structured format without configuring a Jaeger agent. To inspect the traces, open the Jaeger UI and use the Upload JSON feature to load and inspect them.
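A sketch combining the tracing options above. The key names are inferred from the field names referenced in these entries and should be treated as assumptions:

```toml
# Export both runtime and service traces over OTLP gRPC (key inferred
# from tracing_endpoint above).
tracing-endpoint = "http://localhost:4317"
# Additionally write Jaeger-JSON trace files for offline inspection
# (key name assumed for the file exporter described above).
tracing-json-path = "/var/log/restate-traces"
```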
A path to a file, such as “/var/secrets/key.pem”, which contains exactly one ed25519 private key in PEM format. Such a file can be generated with openssl genpkey -algorithm ed25519. If provided, this key will be used to attach JWTs to requests from this client which SDKs may optionally verify, proving that the caller is a particular Restate instance.This file is currently only read on client creation, but this may change in future. Parsed public keys will be logged at INFO level in the same format that SDKs expect.
Configuration for the HTTP/2 keep-alive mechanism, using PING frames. Please note: most gateways don’t propagate the HTTP/2 keep-alive between downstream and upstream hosts. In those environments, you need to make sure the gateway can detect a broken connection to the upstream deployment(s).
A URI, such as http://127.0.0.1:10001, of a server to which all invocations should be sent, with the Host header set to the deployment URI. HTTPS proxy URIs are supported, but only HTTP endpoint traffic will be proxied currently. Can be overridden by the HTTP_PROXY environment variable.
HTTP authorities, e.g. localhost, restate.dev, 127.0.0.1, that should not be proxied by the http_proxy. Ports are ignored. Subdomains are also matched. An entry “*” matches all hostnames. Can be overridden by the NO_PROXY environment variable, which supports comma-separated values.
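A hypothetical sketch of proxy configuration. The key names are inferred from http_proxy and NO_PROXY above and should be treated as assumptions:

```toml
# Send all invocations through a local proxy (key inferred from http_proxy)...
http-proxy = "http://127.0.0.1:10001"
# ...except for these authorities (ports ignored, subdomains matched;
# key inferred from the NO_PROXY environment variable).
no-proxy = ["localhost", "restate.dev", "127.0.0.1"]
```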
Sets the initial maximum of locally initiated (send) streams. This value will be overwritten by the value included in the initial SETTINGS frame received from the peer as part of a [connection preface]. Default: None. NOTE: Setting this value to None (the default) uses the recommended default value from the HTTP/2 specs.
Minimum request size to enable compression. The request size includes the total of the journal replay and its framing using the Restate service protocol, without accounting for the JSON envelope and the Base64 encoding. Default: 4MB (the default AWS Lambda limit is 6MB; 4MB roughly accounts for the +33% overhead of Base64 and the JSON envelope).
Files will be opened in “direct I/O” mode, which means that data read from or written to the disk will not be cached or buffered. The hardware buffer of the devices may however still be used. Memory-mapped files are not impacted by these parameters.
If non-zero, bigger reads are performed when doing compaction. If you’re running RocksDB on spinning disks, you should set this to at least 2MB, so that RocksDB’s compaction does sequential instead of random reads.
StatsLevel can be used to reduce statistics overhead by skipping certain types of stats in the stats collection process. Default: “except-detailed-timers”
Collect all stats, including measuring the duration of mutex operations. If obtaining the time is expensive on the platform, this can reduce scalability with more threads, especially for writes.
On every gossip interval, how many peers each node attempts to gossip with. The default is optimized for small clusters (fewer than 5 nodes). On larger clusters, if gossip overhead is noticeable, consider reducing this value to 1.
How many intervals need to pass without receiving any gossip messages before considering this node as potentially isolated/dead. This threshold is used in the case where the node can still send gossip messages but did not receive any. This can rarely happen in asymmetric network partitions. In this case, the node will advertise itself as dead in the gossip messages it sends out. Note: this threshold does not apply to a cluster that’s configured with a single node.
In addition to basic health/liveness information, the gossip protocol is used to exchange extra information about the roles hosted by this node, for instance which partitions are currently running, their configuration versions, and the durable LSN of the corresponding partition databases. This information is sent every Nth gossip message, and this setting controls the frequency of that exchange. For instance, 10 means that every 10th gossip message will contain the extra information.
Maximum journal retention duration that can be configured. When discovering a service deployment, or when modifying the journal retention using the Admin API, the given value will be clamped.Unset means no limit.
Maximum value for the max attempts setting configurable in an invocation retry policy. When discovering a service deployment with configured retry policies, or when modifying the invocation retry policy using the Admin API, the given value will be clamped. None means no limit, i.e. infinite retries are allowed.