LiteLLM is a library that provides a unified interface to multiple model providers. You can combine it with Restate to build resilient agents tailored to your use case: you wrap model calls in ctx.run and use Restate context actions to execute tools.
Learn more about LiteLLM
- Examples: The examples that use only Restate to implement the agent logic use LiteLLM to make model calls.
- Guides: Check out the Restate Python examples to learn how to use Restate with LiteLLM.
Learn more
- Blog: AI Agents should be serverless