Gemini API
Google's most capable AI model for text generation, reasoning, and conversation.
Quick Start
Send a request to the Gemini API:
```bash
curl -X POST https://apiin.one/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini",
    "messages": [
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms."
      }
    ],
    "stream": false,
    "include_thoughts": true
  }'
```
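The same request can be made from Python. A minimal sketch using only the standard library, assuming the `data.choices[...]` response envelope shown under Example Response below:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with your real key
URL = "https://apiin.one/api/v1/chat/completions"

# Request body mirroring the curl example above.
payload = {
    "model": "gemini",
    "messages": [
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "stream": False,
    "include_thoughts": True,
}

def ask_gemini(payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    # Envelope assumed from the Example Response section.
    return body["data"]["choices"][0]["message"]["content"]
```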
API Endpoint
POST /api/v1/chat/completions

Send a POST request with your API key to generate content using Gemini.
Headers
- `Authorization`: Bearer YOUR_API_KEY
- `Content-Type`: application/json
Body Parameters
- `model`: model identifier, `gemini`.
- `messages`: array of message objects with `role` (`user`/`assistant`/`system`/`developer`) and `content`.
- `stream`: enable SSE streaming responses. Defaults to `true`.
- `include_thoughts`: include model reasoning/thinking steps. Defaults to `true`.
- Reasoning depth: `low` or `high`. Defaults to `high`.
- `tools`: tool definitions for function calling (OpenAI-compatible format).
- `response_format`: force structured output (e.g. `{"type": "json_object"}`).
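For example, structured output can be requested through the OpenAI-compatible `response_format` parameter. A hedged sketch of such a payload (the exact prompt and system message are illustrative):

```python
# Request payload forcing a JSON-only response, assuming the
# OpenAI-compatible `response_format` parameter described above.
payload = {
    "model": "gemini",
    "messages": [
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "List three properties of qubits as JSON."},
    ],
    "stream": False,
    "response_format": {"type": "json_object"},
}
```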
Example Request
```json
{
  "model": "gemini",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms."
    }
  ],
  "stream": false,
  "include_thoughts": true
}
```
Example Response
Successful completion response.
```json
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "chatcmpl_mno345",
    "model": "gemini-3-flash",
    "choices": [
      {
        "message": {
          "role": "assistant",
          "content": "Quantum computing uses quantum bits (qubits)..."
        }
      }
    ],
    "usage": {
      "prompt_tokens": 12,
      "completion_tokens": 156,
      "total_tokens": 168
    },
    "credits_consumed": 2,
    "task_id": "n62xxxx_chat"
  }
}
```
Use Cases
- ✓ Build AI-powered chatbots and customer support agents
- ✓ Generate and summarize text content at scale
- ✓ Create AI coding assistants and code generation tools
- ✓ Power reasoning and analysis features in your application
Error Codes
Failed requests return an error object in the following format:
```json
{
  "error": {
    "code": 400,
    "message": "Invalid parameters",
    "type": "invalid_request"
  }
}
```
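A client can surface that envelope in a readable form. A minimal sketch, where `parse_error` is a hypothetical helper (not part of the API):

```python
import json

def parse_error(body: bytes) -> str:
    """Format the error envelope shown above as a one-line string.

    Hypothetical helper for illustration; field names follow the
    documented error object (code, type, message).
    """
    err = json.loads(body.decode("utf-8")).get("error", {})
    return f'{err.get("code")} {err.get("type")}: {err.get("message")}'
```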
Frequently Asked Questions
Is the Gemini API compatible with OpenAI's format?
Yes. API in One uses the standard OpenAI chat/completions format. You can switch from OpenAI to Gemini by changing the base URL and model name — no other code changes needed.
How much does the Gemini API cost?
Each request pre-deducts 5 credits, which are then adjusted to actual token usage (pay-per-token), making it one of the most affordable LLM APIs. Free credits are included on sign-up.
Does the Gemini API support streaming?
Yes. Streaming is enabled by default via SSE (Server-Sent Events). Set stream to false for synchronous responses.
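Streamed responses arrive as SSE `data:` lines. A minimal parsing sketch; the `data: {...}` chunk shape with a `delta` field and a terminal `data: [DONE]` sentinel are assumptions based on the OpenAI-compatible streaming format, not shown on this page:

```python
import json

def iter_sse_content(lines):
    """Yield text deltas from raw SSE lines of a streamed response.

    Assumes OpenAI-style chunks: data: {"choices":[{"delta":{"content":...}}]}
    terminated by a literal data: [DONE] line.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alive lines, etc.
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```

For example, feeding it the raw lines of a streamed reply and joining the yielded pieces reconstructs the full message.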
What is the maximum context length?
Gemini 3 Flash supports up to 1M tokens of context, suitable for processing long documents and maintaining extended conversations.
Why Use Gemini Through API in One?
Ready to use Gemini?
Get Your API Key →