Each event leads to an invocation, meaning a request to execute a handler. Each invocation has its own unique ID and lifecycle.
Have a look at managing invocations to learn how to manage an invocation's lifecycle.
Invoking Handlers via Kafka Events
You can invoke handlers via Kafka events by doing the following:
Develop and register an event handler
You can invoke any handler via Kafka events. The event payload will be (de)serialized as JSON.
- When invoking Virtual Object or Workflow handlers via Kafka, the key of the Kafka record will be used to determine the Virtual Object/Workflow key. The key needs to be a valid UTF-8 string. The events are delivered to the subscribed handler in the order in which they arrived on the topic partition.
- When invoking Virtual Object or Workflow shared handlers via Kafka, the key of the Kafka record will be used to determine the Virtual Object/Workflow key. The key needs to be a valid UTF-8 string. The events are delivered to the subscribed handler in parallel without ordering guarantees.
- When invoking Service handlers over Kafka, events are delivered in parallel without ordering guarantees.
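As a sketch, an event handler built with the Restate TypeScript SDK might look like the following. The service name, handler name, and payload shape are illustrative, not prescribed by Restate:

```typescript
import * as restate from "@restatedev/restate-sdk";

// A Virtual Object: when subscribed to Kafka, the record key becomes the object key.
const userEvents = restate.object({
  name: "UserEvents",
  handlers: {
    // Invoked once per Kafka record; the payload is deserialized from JSON.
    handle: async (ctx: restate.ObjectContext, event: { action: string }) => {
      // ... process the event ...
    },
  },
});

// Register the handler by serving it and registering the endpoint with Restate.
restate.endpoint().bind(userEvents).listen(9080);
```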
Configure Restate to connect to a Kafka cluster
Define the Kafka cluster that Restate needs to connect to in the Restate configuration file (restate.toml), and make sure the Restate Server uses it via restate-server --config-file restate.toml. Check the configuration docs for more details.
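A minimal cluster definition might look like the following sketch; the cluster name and broker address are placeholders:

```toml
# restate.toml
[[ingress.kafka-clusters]]
name = "my-cluster"
brokers = ["localhost:9092"]
```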
Using SASL/SSL (e.g. Confluent Kafka)
To connect to a Kafka cluster that requires SASL/SSL authentication (e.g., Confluent Kafka), you can specify the necessary parameters in the restate.toml file. Note the quotation marks around the configuration keys. For Confluent Cloud, the rest of the configuration can be copied from the Confluent Cloud “Rust client” configuration.
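A sketch of such a configuration; the broker address and credentials are placeholders:

```toml
# restate.toml
[[ingress.kafka-clusters]]
name = "confluent"
brokers = ["<BOOTSTRAP_SERVER>"]
# Note the quotation marks around the dotted configuration keys.
"security.protocol" = "SASL_SSL"
"sasl.mechanism" = "PLAIN"
"sasl.username" = "<API_KEY>"
"sasl.password" = "<API_SECRET>"
```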
Using SASL OAuth2.0 / OpenID Connect
The Kafka ingress supports SASL OAUTHBEARER authentication, enabling OAuth 2.0/OpenID Connect (OIDC) token-based connections to managed Kafka services. Configure SASL OAUTHBEARER via the additional_options field in your Kafka cluster configuration. These options are passed directly to librdkafka. For the full list of available options, see the librdkafka CONFIGURATION.md.

Common OAUTHBEARER options:

| Option | Description |
|---|---|
| security.protocol | Set to SASL_SSL for encrypted connections |
| sasl.mechanism | Set to OAUTHBEARER |
| sasl.oauthbearer.method | Set to oidc for OIDC-based token retrieval |
| sasl.oauthbearer.client.id | OAuth client ID |
| sasl.oauthbearer.client.secret | OAuth client secret |
| sasl.oauthbearer.token.endpoint.url | OAuth token endpoint URL |
| sasl.oauthbearer.scope | OAuth scope (if required by provider) |
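As an example for Confluent Cloud, a sketch of such a cluster configuration; the cluster name, broker address, client credentials, and token endpoint are placeholders:

```toml
# restate.toml
[[ingress.kafka-clusters]]
name = "confluent-oidc"
brokers = ["<BOOTSTRAP_SERVER>"]

# Passed directly to librdkafka.
[ingress.kafka-clusters.additional_options]
"security.protocol" = "SASL_SSL"
"sasl.mechanism" = "OAUTHBEARER"
"sasl.oauthbearer.method" = "oidc"
"sasl.oauthbearer.client.id" = "<CLIENT_ID>"
"sasl.oauthbearer.client.secret" = "<CLIENT_SECRET>"
"sasl.oauthbearer.token.endpoint.url" = "https://<IDENTITY_PROVIDER>/oauth2/token"
```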
Configuring Kafka clusters via environment variables
You can also configure the Kafka clusters via the RESTATE_INGRESS__KAFKA_CLUSTERS environment variable.
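A sketch of what this might look like, assuming Restate's convention of TOML-formatted values in environment variable overrides; the cluster name and broker address are placeholders:

```shell
export RESTATE_INGRESS__KAFKA_CLUSTERS='[{name = "my-cluster", brokers = ["localhost:9092"]}]'
restate-server
```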
Experimental: Improved Kafka batch ingestion
In certain scenarios, such as consuming from a Kafka topic with few partitions, the new batch ingestion can significantly improve ingestion throughput compared to the legacy implementation.

This feature is disabled by default and should be used with caution. All nodes in the cluster must be running Restate v1.6 before enabling this feature. Once enabled and data has been ingested, you cannot roll back to a version prior to Restate v1.6.

To enable the experimental Kafka batch ingestion, set the corresponding option on all nodes, either via an environment variable or in your configuration file. Contact us on Discord or Slack to test it together with us.
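A sketch of what enabling it could look like. The option name below is an assumption, not the documented key; check the release notes or the configuration reference for the actual setting:

```toml
# restate.toml -- the option name "experimental-kafka-batching" is hypothetical
[ingress]
experimental-kafka-batching = true
```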
Subscribe the event handler to the Kafka topic
Let Restate forward events from the Kafka topic to the event handler by creating a subscription. Once you’ve created a subscription, Restate immediately starts consuming events from Kafka, and the handler will be invoked for each event received. The options field is optional and accepts any configuration parameter from the librdkafka configuration.
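Creating a subscription goes through the Restate Admin API (by default on port 9070). A sketch, where the cluster, topic, service, and handler names are placeholders:

```shell
curl localhost:9070/subscriptions \
  --json '{
    "source": "kafka://my-cluster/my-topic",
    "sink": "service://MyService/handle",
    "options": { "auto.offset.reset": "earliest" }
  }'
```

The source references the Kafka cluster by the name given in restate.toml, and the sink references a registered service handler.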
Kafka connection configuration
You can pass arbitrary Kafka cluster options in the restate.toml, and those options will be applied to all the subscriptions to that cluster. For the full list of options, check the librdkafka configuration.
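For example, a sketch setting a client option on the cluster; the cluster name, broker, and option value are placeholders:

```toml
# restate.toml
[[ingress.kafka-clusters]]
name = "my-cluster"
brokers = ["localhost:9092"]
# Applied to all subscriptions on this cluster; note the quoted dotted key.
"fetch.max.bytes" = "1048576"
```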
Multiple Kafka clusters support
You can configure multiple Kafka clusters in the restate.toml file. Then, when creating the subscriptions, you refer to the specific cluster by name.
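A sketch with two clusters; names and brokers are placeholders. A subscription source such as kafka://cluster-b/my-topic then selects the cluster by its name:

```toml
# restate.toml
[[ingress.kafka-clusters]]
name = "cluster-a"
brokers = ["kafka-a:9092"]

[[ingress.kafka-clusters]]
name = "cluster-b"
brokers = ["kafka-b:9092"]
```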
Event metadata
You can access the event metadata in the handler by getting the request headers map. Each event carries within this map the following entries:

- restate.subscription.id: The subscription identifier, as shown by the Admin API.
- kafka.offset: The record offset.
- kafka.partition: The record partition.
- kafka.timestamp: The record timestamp.
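As a sketch with the TypeScript SDK, assuming the headers map is exposed via ctx.request(); the service and handler names are illustrative:

```typescript
import * as restate from "@restatedev/restate-sdk";

const eventConsumer = restate.service({
  name: "EventConsumer",
  handlers: {
    handle: async (ctx: restate.Context, event: unknown) => {
      // Kafka metadata is carried in the request headers map.
      const headers = ctx.request().headers;
      const offset = headers.get("kafka.offset");
      const partition = headers.get("kafka.partition");
      // ... use the metadata, e.g. for logging or idempotency checks ...
    },
  },
});
```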
Raw event support
Check out the serialization documentation of your SDK to learn how to receive raw events in your handler.
Managing Kafka Subscriptions
Restate can trigger handlers via Kafka events.
Create Subscriptions
Subscribe a handler to a Kafka topic. The options field is optional and accepts any librdkafka configuration parameter.
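Via the Admin API (default port 9070); the cluster, topic, service, and handler names are placeholders:

```shell
curl localhost:9070/subscriptions \
  --json '{"source": "kafka://my-cluster/my-topic", "sink": "service://MyService/handle"}'
```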
List Subscriptions
View current subscriptions:
Delete Subscriptions
Remove a subscription using its ID (starts with sub_):
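Both operations can be sketched against the Admin API (default port 9070); the subscription ID is a placeholder:

```shell
# List current subscriptions
curl localhost:9070/subscriptions

# Delete a subscription by its ID
curl -X DELETE localhost:9070/subscriptions/<SUBSCRIPTION_ID>
```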