How does Restate help?
The benefits of using Restate here are:
- Automatic retries of failed tasks: LLM API downtime, timeouts, infrastructure failures, etc.
- Recovery of previous progress: after a failure, Restate restores the progress the execution made before the crash and resumes from there.
- Works with any LLM SDK (Vercel AI, LangChain, LiteLLM, etc.) and any programming language supported by Restate (TypeScript, Python, Go, etc.).
Example
Wrap each step in the chain with `ctx.run()` to ensure fault tolerance and automatic recovery. Restate uses durable execution to persist the result of each step as it completes, so if any step fails, Restate retries from that exact point without losing previous progress or re-executing completed steps.
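The sketch below shows what this pattern could look like with the TypeScript SDK, assuming the OpenAI client for the LLM calls; the prompts and the `callLLM` helper are illustrative placeholders, not the actual example code:

```typescript
import * as restate from "@restatedev/restate-sdk";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical helper: a single LLM call, reused for each step of the chain.
async function callLLM(prompt: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content ?? "";
}

const callChainingService = restate.service({
  name: "CallChainingService",
  handlers: {
    process: async (ctx: restate.Context, input: string) => {
      // Each ctx.run() journals its result once it completes, so a retry
      // resumes after the last completed step instead of starting over.
      const extracted = await ctx.run("extract", () =>
        callLLM(`Extract the key facts from: ${input}`),
      );
      const summarized = await ctx.run("summarize", () =>
        callLLM(`Summarize these facts: ${extracted}`),
      );
      const polished = await ctx.run("polish", () =>
        callLLM(`Polish this summary for publication: ${summarized}`),
      );
      return polished;
    },
  },
});

restate.endpoint().bind(callChainingService).listen(9080);
```

Because each `ctx.run()` result is journaled, a crash between the summarize and polish steps replays the journaled results on retry instead of re-calling the LLM for the earlier steps.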

Run the example
1. Requirements
- An AI SDK of your choice (e.g., OpenAI, LangChain, Pydantic AI, or LiteLLM) to make LLM calls.
- An API key for your model provider.
2. Download the example
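Assuming the example lives in the public restatedev/examples repository (the exact subdirectory varies per language, so check the repository layout):

```shell
git clone https://github.com/restatedev/examples.git
cd examples
```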
3. Start the Restate Server
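One way to run a local Restate Server, assuming you have Node.js or the standalone binary available:

```shell
# Via npx (downloads and runs the server; UI on port 9070, ingress on 8080)
npx @restatedev/restate-server
# ...or, if the binary is installed (e.g., via Homebrew):
restate-server
```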
4. Start the Service
Export the API key of your model provider as an environment variable and then start the agent. For example, for OpenAI:
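A sketch for the TypeScript version, assuming an `npm run dev` start script (the actual script name may differ; check the example's package.json):

```shell
export OPENAI_API_KEY=<your-api-key>
npm install
npm run dev
```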
5. Register the services
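You can register the service endpoint in the UI (http://localhost:9070) or with the Restate CLI. For example, assuming the service listens on the SDK's default port 9080:

```shell
restate deployments register http://localhost:9080
```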

6. Send a request
In the UI (http://localhost:9070), click on the process handler of the CallChainingService to open the playground and send a default request:
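Alternatively, you can invoke the handler over the Restate ingress from the command line; the request body below is a placeholder, so adapt it to the input the handler expects:

```shell
curl localhost:8080/CallChainingService/process \
  -H 'content-type: application/json' \
  -d '"Write a short blog post about durable execution"'
```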
7. Check the Restate UI
In the Invocations tab of the UI, you can see how the LLM is called multiple times and how each intermediate result is persisted in Restate:
