litellm (BerriAI/litellm) is an open-source AI project on GitHub: a Python SDK and proxy server (AI gateway) for calling 100+ LLM APIs in the OpenAI (or provider-native) format, with cost tracking, guardrails, load balancing, and logging across providers such as Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, SageMaker, Hugging Face, vLLM, and NVIDIA NIM. Its focus includes MCP and tool-calling integration, developer-centric engineering workflows, and workflow automation, making it suitable for extension, integration, and iterative delivery in real workflows.
License
Other
Stars
46,896
Homepage
https://docs.litellm.ai/docs/
Features
- Core capability: Python SDK and proxy server (AI gateway) that call 100+ LLM APIs in the OpenAI (or provider-native) format, with cost tracking, guardrails, load balancing, and logging (Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, SageMaker, Hugging Face, vLLM, NVIDIA NIM)
- Provides MCP and tool-calling integration
- Built for code generation, debugging, and engineering integration
- Supports orchestrated automation flows and scheduling
- Repository: BerriAI/litellm
- Primary language: Python
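To illustrate the core capability above, here is a minimal sketch of the OpenAI-compatible request shape that litellm accepts: the same `messages` payload is reused across providers, and only the provider-prefixed `model` string changes. The model identifiers are examples, and the actual `litellm.completion` call is left commented out because it requires the SDK installed and provider credentials in the environment.

```python
# OpenAI-format chat request that litellm's completion() accepts for any
# supported provider -- only the "model" string changes per backend.
messages = [{"role": "user", "content": "Summarize this PR in one sentence."}]

requests = [
    {"model": "gpt-4o", "messages": messages},                                # OpenAI
    {"model": "anthropic/claude-3-5-sonnet-20240620", "messages": messages},  # Anthropic
    {"model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
     "messages": messages},                                                   # AWS Bedrock
]

# With the SDK installed and provider keys set, each payload is sent the
# same way (commented out -- needs network access and credentials):
#   import litellm
#   response = litellm.completion(**requests[0])
#   print(response.choices[0].message.content)

for req in requests:
    assert set(req) == {"model", "messages"}  # identical shape across providers
```

This uniform shape is what lets a caller swap backends by editing one string rather than rewriting integration code.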
Use Cases
- Connects external systems (APIs, tools, data sources) into agent workflows
- Supports build-and-iterate AI engineering workflows for dev teams
- Enables cross-system process automation for operations efficiency
- Prototype internal AI workflows with litellm, then validate them in production-like engineering scenarios
FAQ
Teams should first define integration boundaries and call patterns, then map the repository's capabilities onto concrete interfaces, parameters, and access rules. GitHub repository: https://github.com/BerriAI/litellm. Community traction is around 46,900 stars. License: Other.
It usually works as an execution component or capability layer; common deployment fits include connecting external systems into agent workflows, supporting build-and-iterate AI engineering workflows for dev teams, and automating cross-system processes for operations efficiency.
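As a capability layer, litellm's Router can load-balance across multiple deployments registered under one public model name. The sketch below shows such a `model_list` as plain Python data; the Azure endpoint and key names are placeholders, and the Router construction is commented out since it needs the SDK installed and live credentials.

```python
import os

# Two deployments sharing the public name "gpt-4o", so litellm's Router can
# load-balance between them. The api_base value is a placeholder.
model_list = [
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "azure/gpt-4o",
            "api_base": "https://example-eastus.openai.azure.com",  # placeholder
            "api_key": os.environ.get("AZURE_API_KEY", ""),
        },
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "gpt-4o",
            "api_key": os.environ.get("OPENAI_API_KEY", ""),
        },
    },
]

# With the SDK installed (commented out -- needs credentials):
#   from litellm import Router
#   router = Router(model_list=model_list)
#   resp = router.completion(model="gpt-4o",
#                            messages=[{"role": "user", "content": "hi"}])

public_names = {d["model_name"] for d in model_list}  # callers see one name
```

Callers address the single public name and the gateway picks a deployment, which is how the load-balancing and cost-tracking features stay invisible to client code.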