- Build durable AI agents that recover automatically from crashes and API failures
- Integrate with existing AI SDKs like Vercel AI SDK and OpenAI Agent SDK
- Observe and debug agent executions with detailed traces
- Implement resilient human-in-the-loop workflows with approvals and timeouts
- Manage conversation history and state across multi-turn interactions
- Orchestrate multiple agents working together on complex tasks
Getting Started
A Restate AI application has two main components:

- Restate Server: The core engine that takes care of the orchestration and resiliency of your agents
- Agent Services: Your agent or AI workflow logic using the Restate SDK for durability

Run the agent
Install Restate and launch it. Then register the service, either via the UI (http://localhost:9070) or via the CLI:
Durable Execution
AI agents make multiple LLM calls and tool executions that can fail due to rate limits, network issues, or service outages. Restate uses Durable Execution to make your agents withstand failures without losing progress. The Restate SDK records the steps the agent executes in a log and replays them if the process crashes or is restarted:
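The record-and-replay idea can be sketched in a few lines. This is a conceptual model only, not the real Restate SDK: completed steps are recorded in a journal, and after a crash the recorded results are returned instead of re-executing the side effects.

```typescript
// Conceptual sketch of Durable Execution replay (not the real SDK): each
// completed step's result is recorded in a journal, and on restart the
// journal entries are replayed instead of re-running the side effects.
type Journal = Map<string, unknown>;

async function durableStep<T>(
  journal: Journal,
  name: string,
  fn: () => Promise<T>
): Promise<T> {
  if (journal.has(name)) {
    // Replay: return the recorded result without re-running the side effect.
    return journal.get(name) as T;
  }
  const result = await fn();
  journal.set(name, result); // Record the result before moving on.
  return result;
}

// A toy "agent" with two steps; `calls` tracks actual side-effect executions.
async function agentRun(journal: Journal, calls: string[]): Promise<string> {
  const weather = await durableStep(journal, "fetchWeather", async () => {
    calls.push("fetchWeather");
    return "sunny";
  });
  const reply = await durableStep(journal, "llmCall", async () => {
    calls.push("llmCall");
    return `It is ${weather} today.`;
  });
  return reply;
}
```

Running `agentRun` a second time with the same journal simulates a restart: both steps are replayed from the journal, so the side effects run exactly once.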
Creating a Durable Agent
To implement a durable agent, you can use the Restate SDK in combination with existing AI frameworks like the Vercel AI SDK. Here’s the implementation of the durable weather agent you just invoked, exposed via its run handler:
durableexecution/agent.ts
The endpoint that serves the agents of this tour over HTTP is defined in src/app.ts. The agent can now be called at http://localhost:8080/WeatherAgent/run.
The main difference compared to a standard Vercel AI agent is the use of the Restate Context at key points throughout the agent logic.
Any action with the Context is automatically recorded by the Restate Server and survives failures.
We use this for:
- Persisting LLM responses: We wrap the model with the durableCalls(ctx) middleware, so that every LLM response is saved in Restate Server and can be replayed during recovery. The middleware is provided via the package @restatedev/vercel-ai-middleware.
- Resilient tool execution: Tools can make steps durable by using Context actions. Their outcome will then be persisted for recovery and retried until they succeed. ctx.run runs an action durably, retrying it until it succeeds and persisting the result in Restate (e.g. database interactions, API calls, non-deterministic actions).
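As a mental model for ctx.run (a simplified stand-in, not the real SDK semantics), the action is retried until it succeeds, and the successful result is persisted so recovery never re-executes it:

```typescript
// Simplified stand-in for ctx.run (not the real Restate SDK): retry the
// action until it succeeds, then persist the result so it is never
// re-executed after recovery.
async function runDurably<T>(
  journal: Map<string, unknown>,
  name: string,
  action: () => Promise<T>
): Promise<T> {
  // On replay, return the persisted result instead of re-running the action.
  if (journal.has(name)) return journal.get(name) as T;
  for (;;) {
    try {
      const result = await action();
      journal.set(name, result);
      return result;
    } catch {
      // Transient failure: back off briefly and retry.
      await new Promise((r) => setTimeout(r, 10));
    }
  }
}
```

A flaky API call wrapped this way is retried until it succeeds, and a later replay returns the cached result without calling the API again.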
Try out Durable Execution
Ask for the weather in Denver. On the invocations page in the UI, click on the invocation ID of the failing invocation.
You can see that your request is retrying because the weather API is down.
To fix the problem, remove the failOnDenver line from the fetchWeather function in the utils.ts file. Once you restart the service, the workflow finishes successfully.

Observing your Agent
As you saw in the previous section, the Restate UI comes in handy when monitoring and debugging your agents. The Invocations tab shows all agent executions with detailed traces of every LLM call, tool execution, and state change:
OpenTelemetry Integration
Restate supports OpenTelemetry for exporting traces to external systems like Langfuse, Datadog, or Jaeger. Have a look at the tracing docs to set this up.
Human-in-the-Loop Agent
Many AI agents need human oversight for high-risk decisions or for gathering additional input. Restate makes it easy to pause agent execution and wait for human input. Benefits with Restate:

- If the agent crashes while waiting for human input, Restate continues waiting and recovers the promise on another process.
- If the agent runs on function-as-a-service platforms, the Restate SDK lets the function suspend while it’s waiting. Once the approval comes in, the Restate Server invokes the function again and lets it resume where it left off. This way, you don’t pay for idle waiting time (Learn more).
humanintheloop/agent.ts
You can also use awakeables outside of tools, for example, to implement human approval steps in between agent iterations.
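The awakeable pattern can be sketched with plain Promises. The id format, registry, and helper below are illustrative stand-ins, not the real ctx.awakeable() API: the agent obtains a durable promise plus an id that an external system can later use to resolve it.

```typescript
// Stand-in for an awakeable (illustrative only, not the real Restate API):
// a promise paired with an id that an external party can resolve.
function createAwakeable<T>(): {
  id: string;
  promise: Promise<T>;
  resolve: (value: T) => void;
} {
  let resolve!: (value: T) => void;
  const promise = new Promise<T>((r) => (resolve = r));
  const id = `awk_${Math.random().toString(36).slice(2)}`;
  return { id, promise, resolve };
}

// A tool that pauses until a human approves. The registry simulates handing
// the awakeable id to the outside world (e.g. printing a curl command).
async function approvalTool(
  registry: Map<string, (v: boolean) => void>
): Promise<string> {
  const awakeable = createAwakeable<boolean>();
  registry.set(awakeable.id, awakeable.resolve);
  const approved = await awakeable.promise; // the agent waits (and, with Restate, could suspend) here
  return approved ? "approved" : "rejected";
}
```

In the real SDK the promise survives crashes because Restate persists it; here the Promise only illustrates the control flow.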
Try out human approval
Start a request for a high-value claim that needs human approval.
Use /send to start the claim asynchronously, without waiting for the result.
You can restart the service to see how Restate continues waiting for the approval. Simulate approving the claim by executing the curl request that was printed in the service logs. See in the UI how the workflow resumes and finishes after the approval.
Timeouts and Escalation
Add timeouts to human approval steps to prevent workflows from hanging indefinitely. Restate persists the timer and the approval promise, so if the service crashes or is restarted, it will continue waiting with the correct remaining time.
Try it out by sending a request to the service. Restart the service and check in the UI how the process blocks for the remaining time without starting over.
humanintheloop/agent-with-timeout.ts
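Conceptually, the timeout is a race between the approval promise and a timer. Plain Promises and setTimeout stand in here for Restate's durable promises and timers, which is what makes the real version crash-safe:

```typescript
// Sketch of a timeout on a human approval step. Plain Promises stand in for
// Restate's durable promises and timers; the control flow is the same.
async function approvalWithTimeout(
  approval: Promise<boolean>,
  timeoutMs: number
): Promise<"approved" | "rejected" | "escalated"> {
  const timeout = new Promise<"timeout">((r) =>
    setTimeout(() => r("timeout"), timeoutMs)
  );
  const outcome = await Promise.race([approval, timeout]);
  if (outcome === "timeout") {
    // No answer in time: escalate instead of hanging forever.
    return "escalated";
  }
  return outcome ? "approved" : "rejected";
}
```

With Restate, both branches of the race are persisted, so a restart resumes the race with the remaining time instead of resetting the timer.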
Chat Agent with Memory
The next ingredient we need to build AI agents is the ability to maintain context and memory across multiple interactions. To implement stateful entities like chat sessions, or stateful agents, Restate provides Virtual Objects. Each Virtual Object instance maintains isolated state and is identified by a unique key. Here is an example of a Virtual Object that represents chat sessions:
chat/agent.ts
- Long-lived state: K/V state is stored permanently. It has no automatic expiry. Clear it via ctx.clear().
- Durable state changes: State changes are logged with Durable Execution, so they survive failures and are consistent with code execution.
- Queryable state: State can be inspected via the state tab in the UI.
- Built-in concurrency control: Restate’s Virtual Objects have built-in queuing and consistency guarantees per object key. Handlers either have read-write access (ObjectContext) or read-only access (shared object context).
  - Only one handler with write access can run at a time per object key, to prevent concurrent/lost writes and race conditions.
  - Handlers with read-only access can run concurrently with write-access handlers.
Try out Virtual Objects
Stateful Chat Agent: Ask the agent to do some task. Continue the conversation - the agent remembers previous context. Get the conversation history or view it in the UI.
Seeing concurrency control in action: in the chat service, the message handler is an exclusive handler, while the getHistory handler is a shared handler. Send some messages to a chat session. The UI shows how Restate queues the requests per session to ensure consistency.
Stateful Serverless Agents
You can run Virtual Objects on serverless platforms like Vercel, Modal, Cloudflare Workers, or AWS Lambda.
When a request comes in, Restate attaches the correct state to the request, so your handler can access it locally. This way, you can implement stateful, serverless agents without managing any external state store and without worrying about concurrency issues.
Agent Orchestration
As your agents grow more complex, you may want to break them down into smaller, specialized sub-workflows and sub-agents. Each of these can then be developed, deployed, and scaled independently.
Tools as sub-workflows
You can pull out complex parts of your tool logic into separate workflows. The Restate SDK gives you clients to call other Restate services durably from your agent logic. All calls are proxied via Restate, which persists the call and takes care of retries and recovery. For example, let’s implement the human approval tool as a separate service:
orchestration/sub-workflow-agent.ts
Try out sub-workflows
Start a request for a high-value claim that needs human approval.
Use /send to start the claim asynchronously, without waiting for the result.
In the UI, you can see that the agent called the workflow service and is waiting for the response.
You can see the trace of the sub-workflow in the timeline. Once you approve the claim, the workflow returns, and the agent continues.
Follow the Tour of Workflows to learn more about implementing resilient workflows with Restate.
Multi-agent Systems
Similar to sub-workflows, you can break down complex agents into multiple specialized agents. You can let your agent hand off tasks to other agents by calling them from tools:
orchestration/multi-agent.ts
Try out multi-agent systems
Start a request for a claim that needs to be analyzed by multiple agents. In the UI, you can see that the agent called the sub-agents and is waiting for their responses.
You can see the trace of the sub-agents in the timeline. Once all sub-agents return, the main agent continues and makes a decision.

Parallel Work
Now that our agents are broken down into smaller parts, let’s have a look at how to run different parts of our agent logic in parallel to speed up execution. You might have noticed that all example agents set parallelToolCalls: false in the OpenAI provider options.
This is required to ensure deterministic execution during replays.
When multiple tools execute in parallel and use the Context, the order of operations might differ between the original execution and the replay, leading to inconsistencies. Instead, start durable steps explicitly in your code and use RestatePromise.all, RestatePromise.allSettled, and RestatePromise.race to gather their results.
Parallel Tool Steps
To parallelize tool steps, implement an orchestrator tool that uses RestatePromise to run multiple steps in parallel.
Here is an insurance claim agent that runs multiple analyses in parallel:
parallelwork/parallel-tools-agent.ts
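The fan-out/gather shape of such an orchestrator tool looks roughly like this. The analysis names and results are made up for illustration; in the real agent each step would be a durable ctx.run, and the gather would use RestatePromise.all instead of Promise.all:

```typescript
// Sketch of an orchestrator tool that fans out analyses in parallel.
// Plain async functions and Promise.all stand in for durable ctx.run steps
// gathered with RestatePromise.all.
async function analyzeClaim(claim: string): Promise<string[]> {
  // Hypothetical analysis steps; each would be a ctx.run in the real service.
  const fraudCheck = async () => `fraud-score(${claim}): low`;
  const policyCheck = async () => `policy(${claim}): covered`;
  const damageEstimate = async () => `estimate(${claim}): 1200`;
  // Start all three steps concurrently, then gather the results.
  return Promise.all([fraudCheck(), policyCheck(), damageEstimate()]);
}
```

Because the steps are started explicitly by the orchestrator (rather than by the LLM), the order of Context operations stays deterministic during replay.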
If you want to allow the LLM to call multiple tools in parallel with parallelToolCalls: true, then you need to manually implement the agent tool-execution loop using RestatePromise.
Try out parallel tool steps
Start a request for a claim that needs to be analyzed by multiple tools in parallel. In the UI, you can see that the agent ran the tool steps in parallel; their traces all start at the same time.
Once all tools return, the agent continues and makes a decision.

Parallel Agents
You can use the same RestatePromise primitives to run multiple agents in parallel.
For example, you can race agents against each other and use the first result that returns, while cancelling the others.
Or you can let a main orchestrator agent combine the results of multiple specialized agents run in parallel:
parallelwork/parallel-agents.ts
Try out parallel agents
Start a request for a claim that needs to be analyzed by multiple agents in parallel. In the UI, you can see that the handler called the sub-agents in parallel.
Once all sub-agents return, the main agent makes a decision.

Error Handling
LLM calls are costly, so you can configure retry behavior in both Restate and your AI SDK to avoid infinite loops and high costs. Restate distinguishes between two types of errors:

- Transient errors: Temporary issues like network failures or rate limits. Restate automatically retries these until they succeed or the retry policy is exhausted.
- Terminal errors: Permanent failures like invalid input or business rule violations. These are not retried and cause the invocation to fail.
Retries of LLM calls
In the Vercel AI SDK, set maxRetries on generateText (default: 2) to retry calls that failed due to rate limits or transient errors.
After these retries are exhausted, the agent throws an error.
Restate then retries the invocation with exponential backoff to handle longer outages or network issues.
You can limit Restate’s retries with the maxRetryAttempts option of the durableCalls middleware:
errorhandling/fail-on-terminal-tool-agent.ts
Note that Restate’s retry attempts multiply with the maxRetries SDK attempts.
For example, with maxRetryAttempts: 3 and maxRetries: 2, a call may be attempted 6 times.
Once Restate’s retries are exhausted, the invocation fails with a TerminalError and won’t be retried further.
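The multiplication above can be made concrete with a small counting sketch. This is a toy model of the attempt accounting only; it simplifies the real retry semantics of both Restate and the Vercel AI SDK:

```typescript
// Toy model: each Restate retry attempt re-runs the SDK call, which itself
// performs up to `maxRetries` provider calls, so the counts multiply.
function maxTotalProviderCalls(
  maxRetryAttempts: number,
  maxRetries: number
): number {
  let calls = 0;
  for (let restateAttempt = 0; restateAttempt < maxRetryAttempts; restateAttempt++) {
    for (let sdkCall = 0; sdkCall < maxRetries; sdkCall++) {
      calls++; // one request to the model provider
    }
  }
  return calls;
}
```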
Tool execution errors
The Vercel AI SDK converts any error in tool execution into a message to the LLM, and the agent decides how to proceed. By default, it also does this for terminal errors thrown during tool execution. This is often desirable, as the LLM can decide to retry the tool call, use a different tool, or provide a fallback answer. However, if you want to treat terminal tool execution errors as permanent failures, you can use one of the following options.
Fail the agent on terminal tool errors
To fail the agent on terminal tool errors, rethrow the error in onStepFinish:
errorhandling/fail-on-terminal-tool-agent.ts
Stop the agent on terminal tool errors
To stop the agent on terminal tool errors and handle them after the agent finishes, you can use hasTerminalToolError in stopWhen and then inspect the steps for errors:
errorhandling/stop-on-terminal-tool-agent.ts
You can set custom retry policies for ctx.run steps in your tool executions.
Advanced patterns
Manual Agent Loop
If you need more control over the agent loop, you can implement it manually using Restate’s durable primitives. This allows you to:

- Parallelize tool calls with RestatePromise
- Implement custom stopping conditions
- Implement custom logic between steps (e.g. human approval)
- Interact with external systems between steps
- Handle errors in a custom way

This can be extended to include any custom control flow you need: persistent state, parallel tool calls, custom stopping conditions, or custom error handling.
advanced/manual-loop-agent.ts

Try it out by sending a request to the service. In the UI, you can see how the agent runs multiple iterations and calls tools.
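In outline, a manual loop alternates model calls with tool execution until a stopping condition is met. The sketch below uses a scripted stand-in model and tool (all names are illustrative); in the real service the model call would go through the durable model and the tools through the Restate Context:

```typescript
// Minimal manual agent loop with stand-in model and tools. In a real Restate
// service the model call and tool executions would be durable steps.
type ToolCall = { tool: string; input: string };
type ModelStep = { toolCalls: ToolCall[]; text?: string };

async function runAgentLoop(
  model: (history: string[]) => Promise<ModelStep>,
  tools: Record<string, (input: string) => Promise<string>>,
  maxIterations = 5
): Promise<string> {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = await model(history);
    if (step.toolCalls.length === 0) {
      // Stopping condition: the model produced a final answer.
      return step.text ?? "";
    }
    // Tool calls could also run in parallel here (e.g. via RestatePromise.all),
    // or be gated by a human approval step between iterations.
    for (const call of step.toolCalls) {
      const tool = tools[call.tool];
      if (!tool) throw new Error(`unknown tool: ${call.tool}`);
      const result = await tool(call.input);
      history.push(`${call.tool}(${call.input}) -> ${result}`);
    }
  }
  return "max iterations reached";
}
```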

Rolling back tool executions on failure
Sometimes you need to undo previous agent actions when a later step fails. Restate makes it easy to implement compensation patterns (sagas) for AI agents. Just track the rollback actions as you go, let the agent rethrow terminal tool errors, and execute the rollback actions in reverse order.
Here is an example of a travel booking agent that first reserves a hotel, flight, and car, and then either confirms them or rolls back if any step fails with a terminal error (e.g. car type not available).
Try it out by sending a booking request. This request will succeed; have a look at the invocation trace in the UI.
To see the compensation in action, ask for an SUV rental, which will cause the car booking step to fail and trigger the compensation logic. Have a look at the UI to see how the car booking fails and the flight and hotel bookings are rolled back.
Check out the sagas guide for more details.
advanced/rollback-agent.ts
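The core of the compensation pattern fits in a few lines. This is a self-contained sketch with made-up step names; in the real agent each booking and each cancellation would be a durable ctx.run step, so the rollback itself also survives crashes:

```typescript
// Saga sketch: record an undo action after each successful step, and run the
// recorded undos in reverse order when a later step throws.
type BookingStep = {
  name: string;
  book: () => Promise<void>;
  cancel: () => Promise<void>;
};

async function bookTrip(steps: BookingStep[], log: string[]): Promise<boolean> {
  const compensations: (() => Promise<void>)[] = [];
  try {
    for (const step of steps) {
      await step.book();
      log.push(`booked:${step.name}`);
      compensations.push(step.cancel); // track the rollback as we go
    }
    return true; // all bookings succeeded; confirm them
  } catch {
    // A step failed terminally: undo the completed steps in reverse order.
    for (const cancel of compensations.reverse()) {
      await cancel();
    }
    return false;
  }
}
```

If the car booking fails after hotel and flight succeeded, the flight is cancelled first and the hotel second, mirroring the order described above.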

Long-running background agents
Restate supports implementing scheduling and timer logic in your agents.
This allows you to build agents that run periodically, wait for specific times, or implement complex scheduling logic.
Agents can either be long-running or reschedule themselves for later execution. Have a look at the scheduling docs to learn more.
Streaming back intermediate results
Have a look at the pub-sub example.
Interrupting agents
Have a look at the interruptible coding agent.
Summary
Durable Execution, paired with your existing SDKs, gives your agents a powerful upgrade:

- Durable Execution: Automatic recovery from failures without losing progress
- Persistent memory: Durable conversation history and context across interactions
- Observability: Detailed traces by default across your agents and workflows
- Human-in-the-Loop: Seamless approval workflows with timeouts
- Multi-Agent Coordination: Reliable orchestration of specialized agents
- Suspensions: Cost savings on function-as-a-service platforms when agents need to wait
- Advanced Patterns: Real-time progress updates, interruptions, and long-running workflows
Next Steps
- Learn more about how to implement resilient tools with Restate in the Tour of Workflows
- Check out the other Restate AI examples on GitHub
- Sign up for Restate Cloud and start building agents without managing infrastructure