Latency and Performance

Understanding Infron AI's performance characteristics.

Infron is designed with performance as a top priority and is heavily optimized to add as little latency as possible to your requests.

Base Latency

Under typical production conditions, Infron AI adds approximately 100ms of latency to your requests. This minimal overhead is achieved through:

  • Edge computing with Cloudflare Workers, which runs as close as possible to your application

  • Efficient caching of user and API key data at the edge (see the sketch after this list)

  • Optimized routing logic that minimizes processing time
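
To make the caching point above concrete, here is a minimal sketch of the pattern in a Cloudflare Worker; the URLs, cache key, and TTL are illustrative assumptions rather than Infron's actual implementation.

```typescript
// Minimal sketch of edge caching in a Cloudflare Worker (illustrative only).
// The cache key, upstream URL, and TTL are assumptions, not Infron's values.
export default {
  async fetch(request: Request): Promise<Response> {
    const cache = caches.default;

    // Key the cache on the API key so per-key metadata (user, plan,
    // balance state) is reused across requests hitting this edge location.
    const apiKey = request.headers.get("Authorization") ?? "";
    const cacheKey = new Request(
      "https://edge-cache.internal/keys/" + encodeURIComponent(apiKey),
    );

    let keyData = await cache.match(cacheKey);
    if (!keyData) {
      // Cold cache: one round trip to the origin database, then cache briefly.
      const origin = await fetch("https://origin.internal/key-lookup", {
        method: "POST",
        headers: { Authorization: apiKey },
      });
      keyData = new Response(origin.body, origin);
      keyData.headers.set("Cache-Control", "max-age=60");
      await cache.put(cacheKey, keyData.clone());
    }

    // With key metadata available at the edge, the request can be routed
    // to a provider without another trip to the origin database.
    return new Response("routed", { status: 200 });
  },
};
```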

Performance Considerations

Cache Warming

When Infron's edge caches are cold (typically during the first 5 minutes of operation in a new region), you may experience slightly higher latency as the caches warm up. This normalizes once the caches are populated.
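
If cold-start latency matters for your workload, one option is to send a single cheap request at application startup so the caches are warm before real traffic arrives. A minimal sketch; the base URL, endpoint, and model name are illustrative assumptions:

```typescript
// Sketch: warm the edge caches at startup with one tiny request.
// The base URL, endpoint, and model name here are assumptions.
async function warmUp(apiKey: string): Promise<void> {
  await fetch("https://api.infron.example/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "example/small-model", // a cheap model keeps warm-up cost negligible
      max_tokens: 1,                // one token is enough to populate the caches
      messages: [{ role: "user", content: "ping" }],
    }),
  }).catch(() => {
    // Warm-up is best-effort; ignore failures rather than block startup.
  });
}
```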

Credit Balance Checks

To maintain accurate billing and prevent overages, Infron AI performs additional database checks when a user's credit balance is low (in the single digits of dollars).

Infron expires caches more aggressively under these conditions to ensure proper billing, which increases latency until additional credits are added.
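
One way to stay out of this slow path is to monitor your balance programmatically. A minimal sketch, assuming a hypothetical GET /v1/credits endpoint that returns a dollar balance; this is not a documented Infron API:

```typescript
// Sketch: warn before the balance drops into the single digits, where
// Infron expires caches aggressively. The /v1/credits endpoint and its
// response shape are assumptions, not a documented Infron API.
const LOW_BALANCE_THRESHOLD = 10; // dollars; below this, expect extra checks

async function checkBalance(apiKey: string): Promise<void> {
  const res = await fetch("https://api.infron.example/v1/credits", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { balance } = (await res.json()) as { balance: number };

  if (balance < LOW_BALANCE_THRESHOLD) {
    console.warn(
      `Credit balance $${balance.toFixed(2)} is low; requests may see ` +
        `added latency until credits are topped up.`,
    );
  }
}
```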

Model Fallback

When using provider routing, if the primary model or provider fails, Infron AI will automatically try the next option. A failed initial completion adds latency to that specific request. Infron tracks provider failures and will intelligently route around unavailable providers so that this latency is not incurred on every request.
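
Conceptually, the routing loop behaves something like the sketch below; the failure window, provider ordering, and send callback are illustrative, not Infron's actual implementation.

```typescript
// Conceptual sketch of fallback routing with failure tracking.
// The recent-failure window and callback shape are assumptions.
const RECENT_FAILURE_WINDOW_MS = 30_000;
const lastFailureAt = new Map<string, number>();

async function completeWithFallback(
  providers: string[],
  send: (provider: string) => Promise<Response>,
): Promise<Response> {
  // Prefer providers with no recent failures, so the fallback penalty
  // is paid once rather than on every request.
  const ordered = [...providers].sort((a, b) => {
    const aRecent =
      Date.now() - (lastFailureAt.get(a) ?? 0) < RECENT_FAILURE_WINDOW_MS;
    const bRecent =
      Date.now() - (lastFailureAt.get(b) ?? 0) < RECENT_FAILURE_WINDOW_MS;
    return Number(aRecent) - Number(bRecent);
  });

  let lastError: unknown;
  for (const provider of ordered) {
    try {
      const res = await send(provider);
      if (res.ok) return res;
      lastFailureAt.set(provider, Date.now());
      lastError = new Error(`${provider} returned ${res.status}`);
    } catch (err) {
      lastFailureAt.set(provider, Date.now());
      lastError = err;
    }
  }
  throw lastError;
}
```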

Best Practices

To achieve optimal performance with Infron AI:

  1. Maintain Healthy Credit Balance

    • Recommended minimum balance: $50-100 to ensure smooth operation

  2. Use Provider Preferences

    • If you have specific latency requirements (whether time to first token or time to last token), provider routing features can help you balance performance and cost; see the sketch below.
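
A request using latency-oriented provider preferences might look like the following sketch; the provider object and its fields are assumptions, so consult the provider routing documentation for the actual schema.

```typescript
// Sketch: a request that asks the router to prioritize latency. The
// `provider` options shown are assumptions, not a documented schema.
const INFRON_API_KEY = "your-api-key"; // placeholder

const response = await fetch("https://api.infron.example/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${INFRON_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "example/fast-model",
    messages: [{ role: "user", content: "Hello" }],
    provider: {
      sort: "latency",       // hypothetical: prefer lowest time to first token
      allow_fallbacks: true, // hypothetical: permit rerouting on provider failure
    },
  }),
});
```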
