AgnicPay

Quickstart

Get started with AgnicPay AI Gateway

AgnicPay provides a unified API that gives you access to 340+ AI models through a single endpoint. Get started with just a few lines of code using your preferred SDK or framework.

Looking for the full model list and pricing? Visit AI Gateway Pricing.


Using the OpenAI SDK

The fastest way to get started. AgnicPay is fully compatible with the OpenAI SDK - just change the base URL.

Installation

# Python
pip install openai
 
# Node.js
npm install openai

Python

from openai import OpenAI
 
client = OpenAI(
    base_url="https://api.agnic.ai/v1",
    api_key="agnic_tok_YOUR_TOKEN",
)
 
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)
 
print(completion.choices[0].message.content)

TypeScript / JavaScript

import OpenAI from 'openai';
 
const client = new OpenAI({
  baseURL: 'https://api.agnic.ai/v1',
  apiKey: 'agnic_tok_YOUR_TOKEN',
});
 
async function main() {
  const completion = await client.chat.completions.create({
    model: 'openai/gpt-4o',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  });
 
  console.log(completion.choices[0].message.content);
}
 
main();

Using the API Directly

Make requests directly to the AgnicPay API without any SDK.

Python (requests)

import requests

response = requests.post(
    url="https://api.agnic.ai/v1/chat/completions",
    headers={
        "Authorization": "Bearer agnic_tok_YOUR_TOKEN",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": "What is the meaning of life?"
            }
        ]
    }
)

response.raise_for_status()  # surface HTTP errors instead of a confusing KeyError
print(response.json()["choices"][0]["message"]["content"])

TypeScript (fetch)

const response = await fetch('https://api.agnic.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer agnic_tok_YOUR_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  }),
});

if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}

const data = await response.json();
console.log(data.choices[0].message.content);

cURL

curl https://api.agnic.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer agnic_tok_YOUR_TOKEN" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'

Try Different Models

Access models from all major providers with the same API:

# OpenAI
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
 
# Anthropic Claude
completion = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello!"}]
)
 
# Google Gemini
completion = client.chat.completions.create(
    model="google/gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello!"}]
)
 
# Meta Llama (Free!)
completion = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b",
    messages=[{"role": "user", "content": "Hello!"}]
)
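Because every provider sits behind the same endpoint, one small helper can fan a prompt out to several models for a side-by-side comparison. This is a sketch: `compare` is a hypothetical helper, not part of the OpenAI SDK.

```python
def compare(client, models, prompt):
    """Send the same prompt to each model; return {model_id: reply}."""
    replies = {}
    for model in models:
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        replies[model] = completion.choices[0].message.content
    return replies

# Usage (with the client configured earlier):
# compare(client, ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"], "Hello!")
```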

Free Models

Start building without spending. These models are completely free:

| Model | Provider | Context |
| --- | --- | --- |
| meta-llama/llama-3.3-70b | Meta | 128K |
| meta-llama/llama-3.1-8b | Meta | 128K |
| mistralai/mixtral-8x7b | Mistral | 32K |
| mistralai/mistral-7b | Mistral | 32K |

# Use free models for development
completion = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b",  # Free!
    messages=[{"role": "user", "content": "Write a poem about coding"}]
)

Streaming Responses

Get responses in real-time:

stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a short story"}],
    stream=True
)
 
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

For more details, see Streaming.
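If you also need the complete reply after streaming it, collect the deltas as they arrive. The sketch below simulates chunks with `SimpleNamespace` stand-ins so the accumulation pattern is visible without a network call.

```python
from types import SimpleNamespace

def collect(stream):
    """Accumulate streamed deltas into the full reply text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.content:  # deltas without text (e.g. role-only) are skipped
            parts.append(delta.content)
    return "".join(parts)

# Simulated chunks standing in for the SDK's stream objects:
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Once ", "upon ", "a ", "time.", None]
]
print(collect(fake_stream))  # Once upon a time.
```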


Response Format

A successful response looks like:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The meaning of life is a philosophical question..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 150,
    "total_tokens": 165
  }
}
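Once the JSON is decoded, the reply and token counts come straight out of `choices` and `usage`. A quick sketch using the sample response above:

```python
import json

sample = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "The meaning of life is a philosophical question..."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 15, "completion_tokens": 150, "total_tokens": 165}
}
"""

data = json.loads(sample)
reply = data["choices"][0]["message"]["content"]
usage = data["usage"]

print(reply)
# total_tokens is the sum of prompt and completion tokens
print(usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"])  # True
```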

Authentication

AgnicPay supports two authentication methods:

| Method | Format | Use Case |
| --- | --- | --- |
| API Token | agnic_tok_... | Server-side, CI/CD |
| OAuth2 Token | agnic_at_... | User-authorized apps |

# Using API Token
client = OpenAI(
    base_url="https://api.agnic.ai/v1",
    api_key="agnic_tok_YOUR_TOKEN",
)
 
# Using OAuth2 Token
client = OpenAI(
    base_url="https://api.agnic.ai/v1",
    api_key="agnic_at_YOUR_OAUTH_TOKEN",
)

Error Handling

from openai import OpenAI, APIStatusError

client = OpenAI(
    base_url="https://api.agnic.ai/v1",
    api_key="agnic_tok_YOUR_TOKEN",
)

try:
    completion = client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(completion.choices[0].message.content)
except APIStatusError as e:  # APIStatusError carries the HTTP status code
    if e.status_code == 401:
        print("Invalid API token")
    elif e.status_code == 402:
        print("Insufficient balance or spending limit exceeded")
    elif e.status_code == 429:
        print("Rate limit exceeded - try again later")
    else:
        print(f"API error: {e}")
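For 429s in particular, retrying with exponential backoff usually resolves the error on its own. `with_backoff` below is a minimal generic sketch, not part of any SDK; production code would retry only on 429/5xx responses and honour a Retry-After header when the server sends one.

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exceptions with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries: let the caller handle it
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Usage:
# with_backoff(lambda: client.chat.completions.create(
#     model="openai/gpt-4o",
#     messages=[{"role": "user", "content": "Hello!"}],
# ))
```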
