
Rate Limits

Understand rate limits and how to handle them gracefully.

Rate Limits

The following rate limits apply per user:

Limit                 Value     Scope
Requests per minute   100       Per user
Tokens per minute     100,000   Per user
Requests per day      10,000    Per user

Rate Limit Headers

When a request is rate limited (HTTP 429), the response includes a Retry-After header indicating how many seconds to wait before retrying:
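
The exact error shape depends on the SDK you use, but with a plain fetch call the header can be read directly from the response. A minimal sketch, assuming Node 18+ (global fetch), a placeholder endpoint URL, and an API_KEY environment variable:

check-retry-after.ts
const res = await fetch("https://api.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.API_KEY}`,
  },
  body: JSON.stringify({
    model: "Qwen/Qwen3-235B",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

if (res.status === 429) {
  // Retry-After is the number of seconds to wait before retrying
  const retryAfter = res.headers.get("Retry-After");
  console.log(`Rate limited. Retry after ${retryAfter ?? "unknown"} seconds.`);
}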

Handling Rate Limits

Implement retry logic with exponential backoff to handle rate limits gracefully:

retry-with-backoff.ts
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error.status === 429 && i < maxRetries - 1) {
        // Get retry delay from header or use exponential backoff
        const retryAfter = error.headers?.get("Retry-After");
        const delay = retryAfter
          ? parseInt(retryAfter, 10) * 1000 // Retry-After is given in seconds
          : Math.pow(2, i) * 1000; // exponential backoff: 1s, 2s, 4s, ...

        console.log(`Rate limited. Retrying in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error("Max retries exceeded");
}

// Usage
const response = await retryWithBackoff(() =>
  client.chat.completions.create({
    model: "Qwen/Qwen3-235B",
    messages: [{ role: "user", content: "Hello!" }]
  })
);

Best Practices

Follow these practices to avoid hitting rate limits:

  • Implement client-side throttling: add delays between requests to stay under the limit (see the sketch after this list).
  • Batch requests where possible: combine multiple operations into fewer API calls.
  • Cache responses: store and reuse responses for identical requests.
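
As a starting point for client-side throttling, here is a minimal sketch that spaces requests to stay under the 100 requests/minute limit. It reuses the client object from the retry example above; the 600ms spacing is derived from that limit, and a production setup would more likely use a shared rate limiter or request queue:

throttle.ts
// Enforce a minimum gap between requests so the
// 100 requests/minute limit is never exceeded
const MIN_INTERVAL_MS = 60_000 / 100; // 600ms between requests

let lastRequestAt = 0;

async function throttled<T>(fn: () => Promise<T>): Promise<T> {
  const wait = Math.max(0, lastRequestAt + MIN_INTERVAL_MS - Date.now());
  if (wait > 0) {
    // Sleep until the minimum interval has elapsed
    await new Promise(resolve => setTimeout(resolve, wait));
  }
  lastRequestAt = Date.now();
  return fn();
}

// Usage: each wrapped call is spaced at least 600ms after the previous one
const response = await throttled(() =>
  client.chat.completions.create({
    model: "Qwen/Qwen3-235B",
    messages: [{ role: "user", content: "Hello!" }]
  })
);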