LiteLLM

Integrating Infron AI with LiteLLM's OpenAI-Compatible Endpoints

Account & API Keys Setup

The first step to start using Infron AI is to create an account and get your API key.


Usage - completion

import litellm

response = litellm.completion(
    model="openai/<<Model Name>>",               # `openai/` prefix tells litellm to use the OpenAI-compatible route
    api_key="<<API key>>",                       # API key for your OpenAI-compatible endpoint
    api_base="https://llm.onerouter.pro/v1",     # API base of your custom OpenAI-compatible endpoint
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
)
print(response.json())

Copy the <<Model Name>> from the model marketplace.

For example:

Usage - embedding

Usage with LiteLLM Proxy Server

  1. Modify the config.yaml

  2. Start the proxy

  3. Send Request to LiteLLM Proxy Server
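The steps above can be sketched as follows. A minimal `config.yaml` (the model name, alias, and key are placeholders):

```yaml
model_list:
  - model_name: my-model                       # alias clients will use when calling the proxy
    litellm_params:
      model: openai/<<Model Name>>             # `openai/` prefix for the OpenAI-compatible route
      api_key: <<API key>>
      api_base: https://llm.onerouter.pro/v1
```

Start the proxy with `litellm --config config.yaml`, then send standard OpenAI-format requests to the proxy (by default `http://0.0.0.0:4000`) using `my-model` as the model name.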

A response example is shown below:
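An OpenAI-compatible chat completion response has the following shape (all values here are illustrative placeholders, not real output):

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "<<Model Name>>",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hey! I'm doing well, thanks for asking."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 13, "completion_tokens": 10, "total_tokens": 23}
}
```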
