AI agents are long-running processes that combine LLMs with tools and external APIs to complete complex tasks. With Restate, you can build agents that are resilient to failures, stateful across conversations, and observable without managing complex retry logic or external state stores. In this guide, you’ll learn how to:
  • Build durable AI agents that recover automatically from crashes and API failures
  • Integrate Restate with the Google Agent Development Kit (ADK)
  • Observe and debug agent executions with detailed traces
  • Implement resilient human-in-the-loop workflows with approvals and timeouts
  • Manage conversation history and state across multi-turn interactions
  • Orchestrate multiple agents working together on complex tasks

Getting Started

A Restate AI application has two main components:
  • Restate Server: The core engine that takes care of the orchestration and resiliency of your agents
  • Agent Services: Your agent or AI workflow logic using the Restate SDK for durability
Application Structure
Restate works with how you already deploy your agents, whether that’s in Docker, on Kubernetes, or via serverless platforms (Modal, AWS Lambda…). You don’t need to run your agents in any special way. Let’s run an example locally to get a better feel for how it works.

Run the agent

Install Restate and launch it:
restate-server
Get the example:
git clone git@github.com:restatedev/ai-examples.git
cd ai-examples/google-adk/tour-of-agents
Export your Google API key and run the agent:
export GOOGLE_API_KEY=your-api-key
uv run .
Then, tell Restate where your agent is running via the UI (http://localhost:9070) or CLI:
restate deployments register http://localhost:9080
This registers a set of agents that we will be covering in this tutorial. To test your setup, invoke the weather agent, either via the UI playground by clicking on the run handler of the WeatherAgent in the overview:
Playground
Or via curl:
curl localhost:8080/WeatherAgent/run \
  --json '{"message": "What is the weather like in San Francisco?", "user_id": "user-123", "session_id": "session-123"}'
You should see the weather information printed in the terminal. Let’s have a look at what happened under the hood to make your agents resilient.

Durable Execution

AI agents make multiple LLM calls and tool executions that can fail due to rate limits, network issues, or service outages. Restate uses Durable Execution to make your agents withstand failures without losing progress. The Restate SDK records the steps the agent executes in a log and replays them if the process crashes or is restarted: Durable AI Agent Execution Durable Execution is the basis of how Restate makes your agents resilient to failures. Restate offers durable execution primitives via its SDK.

Creating a Durable Agent

To implement a durable agent, you can use the Restate SDK in combination with the Google Agent Development Kit (ADK). Here’s the implementation of the durable weather agent you just invoked:
durable_agent.py
APP_NAME = "agents"


async def get_weather(city: str) -> WeatherResponse:
    """Get the current weather for a given city."""
    #  Do one or more durable steps using the Restate context
    return await restate_context().run_typed(
        f"Get weather {city}", call_weather_api, city=city
    )


# Specify your agent in the default ADK way
agent = Agent(
    model="gemini-2.5-flash",
    name="weather_agent",
    instruction="You are a helpful agent that provides weather updates.",
    tools=[get_weather],
)

app = App(name=APP_NAME, root_agent=agent, plugins=[RestatePlugin()])
session_service = InMemorySessionService()

agent_service = restate.Service("WeatherAgent")


@agent_service.handler()
async def run(_ctx: restate.Context, req: WeatherPrompt) -> str | None:
    await get_or_create_session(session_service, APP_NAME, req.user_id, req.session_id)
    runner = Runner(app=app, session_service=session_service)
    events = runner.run_async(
        user_id=req.user_id,
        session_id=req.session_id,
        new_message=Content(role="user", parts=[Part.from_text(text=req.message)]),
    )

    final_response = None
    async for event in events:
        if event.is_final_response() and event.content and event.content.parts:
            if event.content.parts[0].text:
                final_response = event.content.parts[0].text
    return final_response
First, you implement your agent and its tools, similar to how you would do it with the Google ADK. With these three additions, you make the agent durable:
  1. Restate Plugin: Add the RestatePlugin to your Google ADK App. This enables Restate’s durability features for model calls and tool executions within the agent.
  2. Resilient Tools: Wrap tool logic in durable steps using the Restate Context. In the get_weather tool, the call to the weather API is wrapped in restate_context().run_typed, making it a durable step that Restate can retry and recover.
  3. Restate Service: To serve the agent over HTTP with Restate, you create a Restate Service and define handlers. Here, the agent logic is called from the run handler.
The endpoint that serves the agents of this tour over HTTP is defined in __main__.py. The agent can now be called at http://localhost:8080/WeatherAgent/run.
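For reference, here is a minimal sketch of what such an endpoint definition could look like. It is a simplified stand-in for the example’s __main__.py, which bundles all the services of this tour; the hypercorn setup shown here is just one way to serve the ASGI app:
import asyncio

import hypercorn
import hypercorn.asyncio
import restate

# Bundle the Restate services into a single ASGI app (the real __main__.py
# registers all the agents covered in this tour, not just the WeatherAgent)
app = restate.app(services=[agent_service])

if __name__ == "__main__":
    # Serve the endpoint on port 9080 so that the Restate Server can reach it
    config = hypercorn.Config()
    config.bind = ["0.0.0.0:9080"]
    asyncio.run(hypercorn.asyncio.serve(app, config))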
Ask for the weather in Denver:
curl localhost:8080/WeatherAgent/run \
--json '{"message": "What is the weather like in Denver?", "user_id": "user-123", "session_id": "session-234"}'
This request fails because the example simulates a weather API outage for Denver. On the Invocations page in the UI, click on the ID of the failing invocation; you can see that your request is being retried because the weather API is down:
Invocation overview
To fix the problem, remove the fail_on_denver(city) line from the call_weather_api function in the app/utils/utils.py file:
utils/utils.py
async def call_weather_api(city: str) -> WeatherResponse:
    fail_on_denver(city)
    weather_data = await fetch_weather(city)
    return parse_weather_data(weather_data)
Once you restart the service, the workflow finishes successfully.

Observing your Agent

As you saw in the previous section, the Restate UI comes in handy when monitoring and debugging your agents. The Invocations tab shows all agent executions with detailed traces of every LLM call, tool execution, and state change:
Invocation overview
Restate supports OpenTelemetry for exporting traces to external systems like Langfuse, DataDog, or Jaeger. Have a look at the tracing docs to set this up.
Now that you know how to build and debug an agent, let’s look at more advanced patterns.

Human-in-the-Loop Agent

Many AI agents need human oversight for high-risk decisions or for gathering additional input. Restate makes it easy to pause agent execution and wait for human input. The benefits with Restate:
  • If the agent crashes while waiting for human input, Restate continues waiting and recovers the promise on another process.
  • If the agent runs on function-as-a-service platforms, the Restate SDK lets the function suspend while it’s waiting. Once the approval comes in, the Restate Server invokes the function again and lets it resume where it left off. This way, you don’t pay for idle waiting time (Learn more).
Here’s a tool that asks for human approval for high-value claims:
human_approval_agent.py
async def human_approval(claim: InsuranceClaim) -> str:
    """Ask for human approval for high-value claims."""
    # Create an awakeable for human approval
    approval_id, approval_promise = restate_context().awakeable(type_hint=str)

    # Request human review
    await restate_context().run_typed(
        "Request review",
        request_human_review,
        claim=claim,
        awakeable_id=approval_id,
    )

    # Wait for human approval
    return await approval_promise
To implement human approval steps, you can use Restate’s awakeables. An awakeable is a promise that can be resolved externally via an API call by providing its ID. When you create the awakeable, you get back an ID and a promise. You can send the ID to the human approver, and then wait for the promise to be resolved.
You can also use awakeables outside of tools, for example, to implement human approval steps in between agent iterations.
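Besides resolving it over HTTP (as you will do below with curl), an awakeable can also be resolved programmatically from another Restate handler. A minimal sketch, with an illustrative ApprovalResolver service that is not part of the example:
import restate

# Illustrative service that a review UI or another system could call
approval_resolver = restate.Service("ApprovalResolver")


@approval_resolver.handler()
async def approve(ctx: restate.Context, req: dict) -> None:
    # req["awakeable_id"] is the ID that was sent to the human reviewer;
    # resolving it wakes up the agent that is waiting on the promise
    ctx.resolve_awakeable(req["awakeable_id"], req.get("decision", "approved"))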
Start a request for a high-value claim that needs human approval. Use the playground or curl with /send to start the claim asynchronously, without waiting for the result.
curl localhost:8080/HumanClaimApprovalAgent/run/send \
--json '{"message": "Process my hospital bill of 3000USD for a broken leg.", "user_id": "user-123", "session_id": "session-123"}'
You can restart the service to see how Restate continues waiting for the approval. If you wait for more than a minute, the invocation will get suspended.
Invocation overview
Simulate approving the claim by executing the curl request that was printed in the service logs, similar to:
curl localhost:8080/restate/awakeables/sign_1M28aqY6ZfuwBmRnmyP/resolve --json 'true'
See in the UI how the workflow resumes and finishes after the approval.
Invocation overview
Add timeouts to human approval steps to prevent workflows from hanging indefinitely. Restate persists the timer and the approval promise, so if the service crashes or is restarted, it will continue waiting with the correct remaining time:
human_approval_agent_with_timeout.py
# Wait for human approval for at most 3 hours to reach our SLA
match await restate.select(
    approval=approval_promise,
    timeout=restate_context().sleep(timedelta(hours=3)),
):
    case ["approval", approved]:
        return "Approved" if approved else "Rejected"
    case _:
        return "Approval timed out - Evaluate with AI"
Try it out by sending a request to the service:
curl localhost:8080/HumanClaimApprovalWithTimeoutsAgent/run/send \
--json '{"message": "Process my hospital bill of 3000USD for a broken leg.", "user_id": "user-123", "session_id": "session-123"}'
You can restart the service and check in the UI how the invocation blocks for the remaining time without starting over. You can also lower the timeout to a few seconds to see how the timeout path is taken.

Resilient workflows as tools

You can pull out complex parts of your tool logic into separate workflows. This lets you break down complex agents into smaller, reusable components that can be developed, deployed, and scaled independently. The Restate SDK gives you clients to call these workflows durably from your agent logic. All calls are proxied via Restate. Restate persists the call and takes care of retries and recovery. For example, let’s implement the human approval tool as a separate service:
sub_workflow_agent.py
# Sub-workflow service for human approval
human_approval_workflow = restate.Service("HumanApprovalWorkflow")


@human_approval_workflow.handler()
async def review(ctx: restate.Context, claim: InsuranceClaim) -> str:
    """Request human approval for a claim and wait for response."""
    # Create an awakeable that can be resolved via HTTP
    approval_id, approval_promise = ctx.awakeable(type_hint=str)

    # Request human review
    await ctx.run_typed(
        "Request review", request_human_review, claim=claim, awakeable_id=approval_id
    )

    # Wait for human approval
    return await approval_promise
This can now be called from the main agent via a service client:
sub_workflow_agent.py
async def human_approval(claim: InsuranceClaim) -> str:
    """Ask for human approval for high-value claims."""
    return await restate_context().service_call(review, claim)
These workflows have access to all Restate SDK features, including durable execution, state management, awakeables, and observability. They can be developed, deployed, and scaled independently.
Start a request for a high-value claim that needs human approval. Use /send to start the claim asynchronously, without waiting for the result.
curl localhost:8080/SubWorkflowClaimApprovalAgent/run/send \
--json '{"message": "Process my hospital bill of 3000USD for a broken leg.", "user_id": "user-123", "session_id": "session-123"}'
In the UI, you can see that the agent called the workflow service and is waiting for the response. You can see the trace of the sub-workflow in the timeline. Once you approve the claim, the workflow returns, and the agent continues.
Invocation overview
Follow the Tour of Workflows to learn more about implementing resilient workflows with Restate.

Durable Sessions

The next ingredient we need to build AI agents is the ability to maintain context and memory across multiple interactions. To implement stateful entities like chat sessions, or stateful agents, Restate provides a special service type called Virtual Objects. When you send a message to a Virtual Object, you provide a unique key that identifies the object instance (for example, a chat session ID or user ID). Each instance of a Virtual Object maintains isolated state. The handlers of the Virtual Object can read and modify the object’s state via the Restate ObjectContext.

Virtual Objects for stateful agents

With Google ADK and Restate, you can create stateful agents that maintain conversation history across multiple interactions. Here is an example of a chat agent represented as a Virtual Object that is keyed by user ID:
chat.py
APP_NAME = "agents"

agent = Agent(
    model="gemini-2.5-flash",
    name="assistant",
    instruction="You are a helpful assistant. Be concise and helpful.",
)

# Enables retries and recovery for model calls and tool executions
app = App(name=APP_NAME, root_agent=agent, plugins=[RestatePlugin()])
runner = Runner(app=app, session_service=RestateSessionService())

chat = restate.VirtualObject("Chat")


@chat.handler()
async def message(ctx: restate.ObjectContext, req: ChatMessage) -> str | None:
    events = runner.run_async(
        user_id=ctx.key(),
        session_id=req.session_id,
        new_message=Content(role="user", parts=[Part.from_text(text=req.message)]),
    )
    final_response = None
    async for event in events:
        if event.is_final_response() and event.content and event.content.parts:
            if event.content.parts[0].text:
                final_response = event.content.parts[0].text
    return final_response
This automatically persists the agent events (LLM calls, tool calls, etc.) and the conversation history in Restate. It uses Restate as the session provider for the Google ADK Agent.
Ask the agent to do some task and provide a user ID as the object key:
curl localhost:8080/Chat/user123/message \
--json '{"message": "Make a poem about durable execution.", "session_id": "session-123"}'
Continue the conversation with the same user and session ID. The agent will remember previous context:
curl localhost:8080/Chat/user123/message \
--json '{"message": "Shorten it to 2 lines.", "session_id": "session-123"}'
Go to the state tab of the UI to view the conversation history.
Virtual Objects are ideal for implementing stateful agents because they provide:
  • Long-lived state: K/V state is stored permanently. It has no automatic expiry. Clear it via ctx.clear().
  • Durable state changes: State changes are logged with Durable Execution, so they survive failures and are consistent with code execution
  • Queryable state: You can inspect the K/V state via the State tab in the UI.
Conversation State Management

Built-in concurrency control

Restate’s Virtual Objects have built-in queuing and consistency guarantees per object key. You provide the unique key when invoking the Virtual Object, for example, the user ID. When multiple requests come in for the same object key, Restate automatically queues them and ensures consistency. The semantics are as follows:
  • Handlers either have read-write access (ObjectContext) or read-only access (shared object context).
  • Only one handler with write access can run at a time per object key to prevent concurrent/lost writes or race conditions (for example message()).
  • Handlers with read-only access can run concurrently with the write-access handlers (for example get_history(); see the sketch below).
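As a hedged sketch, such a read-only handler on the Chat object defined above could look like the following. The "history" state key is illustrative; the actual example stores session data via the RestateSessionService:
import restate


@chat.handler(kind="shared")
async def get_history(ctx: restate.ObjectSharedContext) -> list:
    # Shared handlers only read state, so they can run concurrently
    # with the exclusive message() handler of the same object key
    return await ctx.get("history") or []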
Let’s send several messages concurrently to different users:
curl localhost:8080/Chat/user123/message/send --json '{"message": "make a poem about durable execution", "session_id": "session-123"}' &
curl localhost:8080/Chat/user456/message/send --json '{"message": "what are the benefits of durable execution?", "session_id": "session-567"}' &
curl localhost:8080/Chat/user789/message/send --json '{"message": "how does workflow orchestration work?", "session_id": "session-999"}' &
curl localhost:8080/Chat/user123/message/send --json '{"message": "can you make it rhyme better?", "session_id": "session-123"}' &
curl localhost:8080/Chat/user456/message/send --json '{"message": "what about fault tolerance in distributed systems?", "session_id": "session-567"}' &
curl localhost:8080/Chat/user789/message/send --json '{"message": "give me a practical example", "session_id": "session-999"}' &
curl localhost:8080/Chat/user101/message/send --json '{"message": "explain event sourcing in simple terms", "session_id": "session-123"}' &
curl localhost:8080/Chat/user202/message/send --json '{"message": "what is the difference between async and sync processing?", "session_id": "session-123"}'
The UI shows how Restate queues the requests per session to ensure consistency:
Conversation State Management
You can run Virtual Objects on serverless platforms like Modal, Render, or AWS Lambda. When the request comes in, Restate attaches the correct state to the request, so your handler can access it locally. This way, you can implement stateful, serverless agents without managing any external state store and without worrying about concurrency issues.

Virtual Objects for storing context

You can store any context information in Virtual Objects, for example, user preferences or the last agent they interacted with. Use ctx.set and ctx.get in your handler to store and retrieve state. Have a look here for more information.
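A minimal sketch of such a context-holding Virtual Object, assuming an illustrative UserProfile object and a "preferences" state key:
import restate

profile = restate.VirtualObject("UserProfile")


@profile.handler()
async def update_preferences(ctx: restate.ObjectContext, prefs: dict) -> None:
    # Persist the preferences in the object’s K/V state, keyed by the user ID
    ctx.set("preferences", prefs)


@profile.handler(kind="shared")
async def get_preferences(ctx: restate.ObjectSharedContext) -> dict:
    # Read the stored preferences; falls back to an empty dict
    return await ctx.get("preferences") or {}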

Resilient multi-agent coordination

As your agents grow more complex, you may want to break them down into smaller, specialized agents that can delegate tasks to each other. Similar to sub-workflows, you can break down complex agents into multiple specialized agents. All agents can run in the same process or be deployed independently.

Agents as tools/handoffs

If you want to share context between agents, run the agents in the same process and use handoffs or tools. You don’t need to do anything special to make this work with Restate:
multi_agent.py
APP_NAME = "agents"

# AGENTS
# Determine which specialist to use based on claim type
medical_agent = Agent(
    model="gemini-2.5-flash",
    name="medical_specialist",
    description="Reviews medical insurance claims for coverage and necessity.",
    instruction="Review medical claims for coverage and necessity. Approve/deny up to $50,000.",
)

car_agent = Agent(
    model="gemini-2.5-flash",
    name="car_specialist",
    description="Assesses car insurance claims for liability and damage.",
    instruction="Assess car claims for liability and damage. Approve/deny up to $25,000.",
)

agent = Agent(
    model="gemini-2.5-flash",
    name="intake_agent",
    instruction="Route insurance claims to the appropriate specialist",
    sub_agents=[car_agent, medical_agent],
)

# Enables retries and recovery for model calls and tool executions
app = App(name=APP_NAME, root_agent=agent, plugins=[RestatePlugin()])
runner = Runner(app=app, session_service=RestateSessionService())

agent_service = restate.VirtualObject("MultiAgentClaimApproval")


@agent_service.handler()
async def run(ctx: restate.ObjectContext, claim: InsuranceClaim) -> str | None:
    events = runner.run_async(
        user_id=ctx.key(),
        session_id=claim.session_id,
        new_message=Content(
            role="user",
            parts=[Part.from_text(text=f"Claim: {claim.model_dump_json()}")],
        ),
    )

    final_response = None
    async for event in events:
        if event.is_final_response() and event.content and event.content.parts:
            if event.content.parts[0].text:
                final_response = event.content.parts[0].text
    return final_response
The execution trace in the Restate UI will allow you to see the full chain of calls between agents and their individual steps.
Start a request for a claim that needs to be analyzed by multiple agents.
curl localhost:8080/MultiAgentClaimApproval/user123/run --json '{
    "amount": 3000,
    "category": "orthopedic",
    "date": "2024-10-01",
    "placeOfService": "General Hospital",
    "reason": "hospital bill for a broken leg",
    "sessionId": "session-123"
}'
In the UI, you can see that the agent called the sub-agents (multiple LLM calls) and is waiting for their responses. You can see the trace of the sub-agents in the timeline. Once all sub-agents return, the main agent continues and makes a decision.
Invocation overview

Remote agents as tools

If you want to run agents independently, for example to scale them separately, run them on different platforms, or have them developed by different teams, you can call them as tools via service calls. Restate proxies all calls, persists them, and guarantees that they complete successfully. Your main agent can suspend and save resources while waiting for the remote agent to finish. Restate invokes your main agent again once the remote agent returns.
multi_agent_remote.py
# Durable service call to the fraud agent; persisted and retried by Restate
async def check_fraud(claim: InsuranceClaim) -> str:
    """Analyze the probability of fraud."""
    return await restate_context().service_call(run_fraud_agent, claim)


agent = Agent(
    model="gemini-2.5-flash",
    name="ClaimApprovalCoordinator",
    instruction="You are a claim approval engine. Analyze the claim and use your tools to decide whether to approve it.",
    tools=[check_fraud, check_eligibility],
)

app = App(name=APP_NAME, root_agent=agent, plugins=[RestatePlugin()])
runner = Runner(app=app, session_service=RestateSessionService())

agent_service = restate.VirtualObject("RemoteMultiAgentClaimApproval")


@agent_service.handler()
async def run(ctx: restate.ObjectContext, claim: InsuranceClaim) -> str | None:
    events = runner.run_async(
        user_id=ctx.key(),
        session_id=claim.session_id,
        new_message=Content(
            role="user",
            parts=[Part.from_text(text=f"Claim: {claim.model_dump_json()}")],
        ),
    )

    final_response = None
    async for event in events:
        if event.is_final_response() and event.content and event.content.parts:
            if event.content.parts[0].text:
                final_response = event.content.parts[0].text
    return final_response
Note that any shared context between agents needs to be passed explicitly via the input. The execution trace in the Restate UI lets you see the full chain of calls between agents and their individual steps.
Start a request for a claim that needs to be analyzed by multiple agents.
curl localhost:8080/RemoteMultiAgentClaimApproval/user123/run --json '{
    "amount": 3000,
    "category": "orthopedic",
    "date": "2024-10-01",
    "placeOfService": "General Hospital",
    "reason": "hospital bill for a broken leg",
    "sessionId": "session-123"
}'
In the UI, you can see that the agent called the sub-agents and is waiting for their responses. You can see the trace of the sub-agents in the timeline. Once all sub-agents return, the main agent continues and makes a decision.
Invocation overview
You cannot put both agents in the same Virtual Object, because this leads to a deadlock: the main agent would block on the call to the sub-agent, while the sub-agent cannot run because only one handler can run at a time per object key.

Parallel Work

Now that our agents are broken down into smaller parts, let’s have a look at how to run different parts of our agent logic in parallel to speed up execution. Restate provides primitives that allow you to run tasks concurrently while maintaining deterministic execution during replays. Most actions on the Restate Context can be composed using restate.gather to gather their results or restate.select to wait for the first one to complete.

Parallel Tool Steps

When using the Google ADK with Restate, tool calls are executed sequentially to ensure deterministic execution during replays: if multiple tools executed in parallel and used the Restate Context, the order of operations might differ between the original execution and the replay, leading to inconsistencies. To run multiple tool steps in parallel, implement an orchestrator tool that uses Durable Execution primitives to fan the steps out. Here is an insurance claim agent tool that runs multiple analyses in parallel:
parallel_tools.py
async def calculate_metrics(claim: InsuranceClaim) -> List[str]:
    """Calculate claim metrics using parallel execution."""
    ctx = restate_object_context()

    # Run tools/steps in parallel with durable execution
    results_done = await restate.gather(
        ctx.run_typed("eligibility", check_eligibility, claim=claim),
        ctx.run_typed("cost", compare_to_standard_rates, claim=claim),
        ctx.run_typed("fraud", check_fraud, claim=claim),
    )
    return [await result for result in results_done]
Restate makes sure that all parallel tasks are retried and recovered until they succeed.
Start a request for a claim that needs to be analyzed by multiple tools in parallel.
curl localhost:8080/ParallelToolClaimAgent/user123/run --json '{
    "amount": 3000,
    "category": "orthopedic",
    "date": "2024-10-01",
    "placeOfService": "General Hospital",
    "reason": "hospital bill for a broken leg",
    "sessionId": "session-123"
}'
In the UI, you can see that the agent ran the tool steps in parallel; their traces all start at the same time. Once all tools return, the agent continues and makes a decision.
Invocation overview

Parallel Agents

You can use the same durable execution primitives to run multiple agents in parallel: for example, to race agents against each other and use the first result while cancelling the others, or to let a main orchestrator agent combine the results of multiple specialized agents running in parallel:
parallel_agents.py
@agent_service.handler()
async def run(ctx: restate.ObjectContext, claim: InsuranceClaim) -> str | None:

    # Start multiple agents in parallel with auto retries and recovery
    eligibility = ctx.service_call(run_eligibility_agent, claim)
    cost = ctx.service_call(run_rate_comparison_agent, claim)
    fraud = ctx.service_call(run_fraud_agent, claim)

    # Wait for all responses
    await restate.gather(eligibility, cost, fraud)

    # Get the results
    eligibility_result = await eligibility
    cost_result = await cost
    fraud_result = await fraud

    # Run decision agent on outputs
    prompt = f"""Decide about claim: {claim.model_dump_json()}. Assessments:
    Eligibility: {eligibility_result} Cost: {cost_result} Fraud: {fraud_result}"""

    events = runner.run_async(
        user_id=ctx.key(),
        session_id=claim.session_id,
        new_message=Content(role="user", parts=[Part.from_text(text=prompt)]),
    )

    final_response = None
    async for event in events:
        if event.is_final_response() and event.content and event.content.parts:
            if event.content.parts[0].text:
                final_response = event.content.parts[0].text
    return final_response
Start a request for a claim that needs to be analyzed by multiple agents in parallel.
curl localhost:8080/ParallelAgentClaimApproval/user123/run --json '{
    "amount": 3000,
    "category": "orthopedic",
    "date": "2024-10-01",
    "placeOfService": "General Hospital",
    "reason": "hospital bill for a broken leg",
    "sessionId": "session-123"
}'
In the UI, you can see that the handler called the sub-agents in parallel. Once all sub-agents return, the main agent makes a decision.
Invocation overview

Error Handling

LLM calls are costly, so you can configure retry behavior in both Restate and your AI SDK to avoid infinite loops and high costs. Restate distinguishes between two types of errors:
  • Transient errors: Temporary issues like network failures or rate limits. Restate automatically retries these until they succeed or the retry policy is exhausted.
  • Terminal errors: Permanent failures like invalid input or business rule violations. Restate does not retry these. The invocation fails permanently. You can catch these errors and handle them gracefully.
You can throw a terminal error via:
from restate import TerminalError

raise TerminalError("This tool is not allowed to run for this input.")
You can catch and handle terminal errors in your agent logic if needed. Many AI SDKs also have their own retry behavior for LLM calls and tool executions, so let’s look at how these interact.

Retries of LLM calls

Configure the number of retries for LLM calls when activating the Restate plugin for your ADK App:
error_handling.py
app = App(
    name=APP_NAME, root_agent=agent, plugins=[RestatePlugin(max_model_call_retries=3)]
)
By default, the runner retries ten times with an initial interval of one second. Once Restate’s retries are exhausted, the invocation fails with a TerminalError and won’t be retried further.

Tool execution errors

Restate retries tool executions by default until they succeed. For errors that should not be retried, you can raise terminal errors from within your tool implementations, and catch these terminal errors in your handler to handle them accordingly.
error_handling.py
@agent_service.handler()
async def run(_ctx: restate.Context, req: WeatherPrompt) -> str | None:
    try:
        await get_or_create_session(
            session_service, APP_NAME, req.user_id, req.session_id
        )
        runner = Runner(app=app, session_service=session_service)
        events = runner.run_async(
            user_id=req.user_id,
            session_id=req.session_id,
            new_message=Content(role="user", parts=[Part.from_text(text=req.message)]),
        )

        final_response = None
        async for event in events:
            if event.is_final_response() and event.content and event.content.parts:
                if event.content.parts[0].text:
                    final_response = event.content.parts[0].text
        return final_response
    except TerminalError as e:
        # Handle the error appropriately, e.g., log it or return a default response
        print(f"An error occurred: {e}")
        return "Sorry, I'm unable to process your request at the moment."
You can set custom retry policies for .run actions in your tool executions.
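As a hedged sketch (option names may differ slightly across SDK versions), a variant of the weather tool from earlier could cap its retries like this; once the retries are exhausted, the step fails with a TerminalError that you can handle as shown above:
from datetime import timedelta

import restate


async def get_weather_capped(city: str) -> WeatherResponse:
    """Variant of the weather tool that stops retrying the API call after a few attempts."""
    return await restate_context().run_typed(
        f"Get weather {city}",
        call_weather_api,
        # Assumption: recent Python SDK versions accept RunOptions for run_typed
        restate.RunOptions(max_attempts=3, max_retry_duration=timedelta(seconds=30)),
        city=city,
    )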

Advanced patterns

If you need more control over the agent loop, you can implement it manually using Restate’s durable primitives (a rough sketch follows this list). This allows you to:
  • Parallelize tool calls with restate.select and restate.gather
  • Implement custom stopping conditions
  • Implement custom logic between steps (e.g. human approval)
  • Interact with external systems between steps
  • Handle errors in a custom way
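A rough sketch of such a hand-rolled loop, where call_llm and execute_tool are hypothetical helpers standing in for your model and tool plumbing (only the Restate context calls are actual SDK API):
import restate

MAX_TURNS = 10

manual_agent = restate.Service("ManualAgent")


@manual_agent.handler()
async def run(ctx: restate.Context, prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    for turn in range(MAX_TURNS):  # custom stopping condition
        # Each model call is a durable step: journaled and replayed after a crash
        reply = await ctx.run_typed(f"llm call {turn}", call_llm, messages=messages)
        if not reply.get("tool_calls"):
            return reply["content"]
        messages.append(reply)
        # Execute the requested tools as durable steps; these could also be fanned
        # out with restate.gather or interleaved with awakeable-based approvals
        for i, tool_call in enumerate(reply["tool_calls"]):
            result = await ctx.run_typed(
                f"tool {turn}-{i}", execute_tool, call=tool_call
            )
            messages.append({"role": "tool", "content": result})
    return "Stopped after reaching the maximum number of turns."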
Learn more from the composable patterns guides.
Sometimes you need to undo previous agent actions when a later step fails. Restate makes it easy to implement compensation patterns (sagas) for AI agents. Just track the rollback actions as you go, let the agent raise terminal tool errors, and execute the rollback actions in reverse order. Have a look at the saga guide to learn more.
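A rough saga sketch of this pattern, where book_flight, cancel_flight, book_hotel, and cancel_hotel are hypothetical business functions:
import restate
from restate import TerminalError

travel_agent = restate.Service("TravelBookingAgent")


@travel_agent.handler()
async def book_trip(ctx: restate.Context, request: dict) -> str:
    compensations = []  # rollback actions, tracked as we go
    try:
        compensations.append(lambda: ctx.run_typed("cancel flight", cancel_flight, request=request))
        await ctx.run_typed("book flight", book_flight, request=request)

        compensations.append(lambda: ctx.run_typed("cancel hotel", cancel_hotel, request=request))
        await ctx.run_typed("book hotel", book_hotel, request=request)
        return "Trip booked"
    except TerminalError:
        # Undo the completed steps in reverse order, each as a durable step
        for compensation in reversed(compensations):
            await compensation()
        return "Booking failed - rolled back the previous steps"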
Restate supports implementing scheduling and timer logic in your agents. This allows you to build agents that run periodically, wait for specific times, or implement complex scheduling logic. Agents can either be long-running or reschedule themselves for later execution. Have a look at the scheduling docs to learn more.
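As a rough sketch of a periodic agent, using only durable steps and durable timers (run_check is a hypothetical helper that performs one round of agent work):
from datetime import timedelta

import restate

monitor = restate.VirtualObject("MonitoringAgent")


@monitor.handler()
async def start(ctx: restate.ObjectContext, topic: str) -> None:
    for i in range(24):  # run hourly for a day
        await ctx.run_typed(f"check {i}", run_check, topic=topic)
        # Durable timer: survives crashes and lets the handler suspend while waiting
        await ctx.sleep(timedelta(hours=1))
Instead of looping in one long-running invocation, the handler could also reschedule itself with a delayed call, so that each run is a separate invocation.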
For more advanced examples, have a look at the pub-sub example and the interruptible coding agent.

Summary

Durable Execution, paired with your existing SDKs, gives your agents a powerful upgrade:
  • Durable Execution: Automatic recovery from failures without losing progress
  • Persistent memory and context: Conversation history and agent context survive failures and restarts
  • Observability by default across your agents and workflows
  • Human-in-the-Loop: Seamless approval workflows with timeouts
  • Multi-Agent Coordination: Reliable orchestration of specialized agents
  • Suspensions to save costs on function-as-a-service platforms when agents need to wait
  • Advanced Patterns: Real-time progress updates, interruptions, and long-running workflows
Consult the Restate AI Agents documentation to learn more about building agents with Restate.