Athina

Athina is an evaluation framework and production monitoring platform for your LLM-powered app. It is designed to enhance the performance and reliability of AI applications through real-time monitoring, granular analytics, and plug-and-play evaluations.

Getting Started

Use Athina to log requests across all LLM providers (OpenAI, Azure, Anthropic, Cohere, Replicate, PaLM).

liteLLM provides success and failure callbacks, making it easy for you to log data depending on the status of your responses.
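A minimal sketch of wiring both hooks is shown below. Routing successful responses to Athina via success_callback is what this guide documents; sending failures to Athina through failure_callback is an assumption made here purely for illustration:

import litellm

# documented in this guide: log successful responses to Athina
litellm.success_callback = ["athina"]

# assumption for illustration: liteLLM also exposes a failure hook; whether
# Athina consumes failure events through it is not confirmed by this guide
litellm.failure_callback = ["athina"]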

Using Callbacks

First, sign up on the Athina dashboard to get an API_KEY.

Use just one line of code to instantly log your responses across all providers with Athina:

litellm.success_callback = ["athina"]

Complete code

import os
import litellm
from litellm import completion

## set env variables
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"
os.environ["OPENAI_API_KEY"] = ""

# set callback
litellm.success_callback = ["athina"]

# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}]
)
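Because the callback is set globally on litellm, the same setup logs calls routed to any other supported provider. Here is a minimal sketch with Anthropic; the claude-3-haiku-20240307 model string and the key placeholder are illustrative choices, not taken from the Athina docs:

import os
import litellm
from litellm import completion

## set env variables
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

# the same callback logs every provider routed through litellm
litellm.success_callback = ["athina"]

# anthropic call - only the model string changes
response = completion(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hi 👋 - i'm claude"}]
)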

Support & Talk with Athina Team