litellm
LiteLLM is a Python SDK and proxy server that unifies 100+ LLM APIs behind the OpenAI request/response format. By abstracting away provider differences, it adds cross-cutting features such as cost tracking, load balancing, and guardrails.
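A minimal sketch of the unified call format, assuming `litellm` is installed and an `OPENAI_API_KEY` is set in the environment; the model string below is an illustrative choice, and swapping it for another provider's model routes the same request elsewhere.

```python
from litellm import completion

# LiteLLM exposes an OpenAI-format completion() call; changing the
# `model` string (e.g. to an Anthropic or Azure model) is the only
# edit needed to target a different provider.
response = completion(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Hello, world"}],
)

# The response follows the OpenAI schema regardless of provider.
print(response.choices[0].message.content)
```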
OpenInference provides OpenTelemetry instrumentation for AI observability. It traces LLM applications and their ecosystem components, exporting insight into their runtime behavior to any OpenTelemetry-compatible backend.
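A minimal sketch of tracing LiteLLM calls with OpenInference, assuming the `openinference-instrumentation-litellm` and `opentelemetry-sdk` packages are installed; the console exporter here stands in for whatever OpenTelemetry-compatible backend you actually use.

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from openinference.instrumentation.litellm import LiteLLMInstrumentor
from litellm import completion

# Route spans to the console; swap in an OTLP exporter for a real backend.
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Patch LiteLLM so every completion() call emits an OpenInference span.
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)

# This call is now traced automatically.
completion(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Hello, world"}],
)
```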