Overview
Infron provides OpenAI-compatible API endpoints, letting you use multiple AI providers through a familiar interface. You can keep your existing OpenAI client libraries, switch to Infron with a single base URL change, and retain your current tools and workflows without rewriting code.
The OpenAI-compatible API implements the same specification as the OpenAI API.
Base URL
The OpenAI-compatible API is available at the following base URL:
https://llm.onerouter.pro/v1
Authentication
The OpenAI-compatible API supports the same authentication method as the OpenAI API:
API key: Pass your Infron API key in the Authorization: Bearer <token> header.
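As an illustration, a minimal sketch of the bearer-token header using Python's requests library; INFRON_API_KEY is a placeholder environment variable name, not an official one:

```python
import os

import requests

# Every endpoint accepts the same Authorization: Bearer <token> header.
response = requests.get(
    "https://llm.onerouter.pro/v1/models",
    headers={"Authorization": f"Bearer {os.environ['INFRON_API_KEY']}"},
)
response.raise_for_status()
print(response.json())
```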
Supported endpoints
Infron supports the following OpenAI-compatible endpoints:
GET /models - List available models
POST /chat/completions - Create chat completions with support for streaming, attachments, tool calls, and structured outputs (see the streaming sketch below)
POST /embeddings - Generate vector embeddings
POST /rerank - Rerank documents by relevance to a query
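As a sketch of the chat completions endpoint with streaming enabled, assuming the official openai Python package and a hypothetical model id (use GET /models to find real ones):

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key=os.environ["INFRON_API_KEY"],  # placeholder env var name
)

# stream=True yields chunks whose deltas carry incremental content.
stream = client.chat.completions.create(
    model="openai/gpt-4o",  # hypothetical model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```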
Integration with existing tools
You can use Infron's OpenAI-compatible API with existing tools and libraries such as the OpenAI client libraries. Point your existing client at Infron's base URL and use your Infron API key for authentication.
OpenAI client libraries
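For example, a minimal configuration sketch with the official openai Python package; the model id and the INFRON_API_KEY variable are illustrative assumptions:

```python
import os

from openai import OpenAI

# Point the standard OpenAI client at Infron's base URL.
client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key=os.environ["INFRON_API_KEY"],  # your Infron API key
)

completion = client.chat.completions.create(
    model="openai/gpt-4o",  # hypothetical model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```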
List models
Retrieve a list of all available models that can be used with Infron.
Endpoint
GET /models
Example request
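A sketch of the request using the same client setup as above (INFRON_API_KEY remains a placeholder):

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key=os.environ["INFRON_API_KEY"],
)

# GET /models via the client; each entry's id can be passed as `model`.
models = client.models.list()
for model in models.data:
    print(model.id)
```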
Error handling
The API returns standard HTTP status codes and error responses:
Common error codes
400: Bad Request (invalid or missing params, CORS)
401: Invalid credentials (OAuth session expired, disabled/invalid API key)
402: Your account or API key has insufficient credits. Add more credits and retry the request.
403: Your chosen model requires moderation and your input was flagged
408: Your request timed out
429: You are being rate limited
502: Your chosen model is down or we received an invalid response from it
503: There is no available model provider that meets your routing requirements
Error response format
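The exact body is not reproduced here; assuming an OpenAI-style {"error": {...}} object, here is a sketch of inspecting a failed request with the openai Python package:

```python
import os

from openai import OpenAI, APIStatusError

client = OpenAI(
    base_url="https://llm.onerouter.pro/v1",
    api_key=os.environ["INFRON_API_KEY"],  # placeholder env var name
)

try:
    client.chat.completions.create(
        model="nonexistent/model",  # deliberately invalid to trigger an error
        messages=[{"role": "user", "content": "Hi"}],
    )
except APIStatusError as err:
    # err.status_code maps to the codes listed above; the JSON body is
    # assumed to follow the OpenAI error schema.
    print(err.status_code, err.response.json())
```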