Grokpi Documentation

Build, Run, and Integrate Grokpi

This page gives a practical setup path, API examples, and operational notes so teams can run Grokpi and connect it to their own products.

Self-host Ready

Run with Docker Compose and keep full control of config and data.

Access Control

Use admin auth, API keys, and token pools to separate workloads.

OpenAI-style API

Use familiar endpoints for chat, image, and video generation.

Usage Visibility

Track requests, token usage, and model behavior from one console.

1) Quick Start

From the project root, run:

docker compose up -d --build
curl -s http://127.0.0.1:8080/health

Then open /login, sign in with the admin password, create an API key, and start calling the /v1/* endpoints.
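The health check may briefly fail while the containers finish booting. A small retry helper can make scripted setups more robust; the attempt count and endpoint below are illustrative, not required values:

```shell
#!/bin/sh
# Retry a command up to N times, pausing one second between attempts.
# Usage: wait_for <attempts> <command...>
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0            # command succeeded
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1                # gave up after N attempts
}

# Example: block until the health endpoint answers (adjust host/port to your setup).
# wait_for 30 curl -sf http://127.0.0.1:8080/health
```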

2) Core API Examples

List available models

curl -s http://127.0.0.1:8080/v1/models \
  -H "Authorization: Bearer <YOUR_API_KEY>"
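Because the API is OpenAI-style, the models response is typically a JSON object with a `data` array of model entries. A minimal sketch for pulling out just the model IDs, assuming that standard shape (verify against your deployment's actual response):

```shell
# Extract model IDs from an OpenAI-style /v1/models response on stdin.
extract_model_ids() {
  python3 -c '
import json, sys
body = json.load(sys.stdin)
for model in body.get("data", []):
    print(model["id"])
'
}

# Demo with a canned response; in real usage, pipe the curl output in instead.
echo '{"object":"list","data":[{"id":"grok-3-mini"},{"id":"grok-imagine-1.0"}]}' \
  | extract_model_ids
```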

Chat completion

curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
    "model": "grok-3-mini",
    "messages": [
      {"role":"user","content":"Explain token pools in 3 bullet points."}
    ]
  }'
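In an OpenAI-style response, the generated text normally lives at `choices[0].message.content`. A sketch for extracting it, assuming that standard layout:

```shell
# Pull the assistant's reply out of an OpenAI-style chat completion response.
extract_reply() {
  python3 -c '
import json, sys
body = json.load(sys.stdin)
print(body["choices"][0]["message"]["content"])
'
}

# Demo with a canned response; in real usage, pipe the curl output in instead.
echo '{"choices":[{"message":{"role":"assistant","content":"Hello!"}}]}' | extract_reply
# prints: Hello!
```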

Image generation

curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
    "model": "grok-imagine-1.0",
    "messages": [
      {"role":"user","content":"A cyberpunk city street at sunrise"}
    ],
    "image_config": {
      "aspect_ratio": "16:9"
    }
  }'

Video generation

curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
    "model": "grok-imagine-1.0-video",
    "messages": [
      {"role":"user","content":"A drone shot above rice fields with cinematic light"}
    ],
    "video_config": {
      "aspect_ratio": "16:9",
      "video_length": 8,
      "resolution_name": "480p",
      "preset": "normal"
    }
  }'
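The video_config fields shown above (aspect_ratio, video_length, resolution_name, preset) can be templated in a small helper so scripts don't hand-edit JSON. Accepted value ranges are assumptions based on the example above; check your deployment's model documentation:

```shell
# Build a video-generation payload for /v1/chat/completions.
# Field names mirror the video example above; defaults are assumptions.
build_video_payload() {
  prompt=$1; ratio=${2:-16:9}; length=${3:-8}; res=${4:-480p}
  python3 -c '
import json, sys
prompt, ratio, length, res = sys.argv[1:5]
print(json.dumps({
    "model": "grok-imagine-1.0-video",
    "messages": [{"role": "user", "content": prompt}],
    "video_config": {
        "aspect_ratio": ratio,
        "video_length": int(length),
        "resolution_name": res,
        "preset": "normal",
    },
}))
' "$prompt" "$ratio" "$length" "$res"
}

# Usage: pipe the payload straight into curl.
#   build_video_payload "A drone shot above rice fields" 16:9 8 480p \
#     | curl -s http://127.0.0.1:8080/v1/chat/completions \
#         -H "Content-Type: application/json" \
#         -H "Authorization: Bearer <YOUR_API_KEY>" \
#         -d @-
build_video_payload "A calm lake at dawn"
```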

3) Important Configuration Notes

  • Set a strong admin password in the config before exposing Grokpi publicly.
  • Use token pools and model groups to isolate heavy and light workloads.
  • Configure a proxy and FlareSolverr only when your upstream route requires them.
  • Review usage and request logs regularly to detect retry spikes and model failures.
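For the first point, one quick way to generate a random admin password before writing it into the config (requires openssl, which ships with most base images):

```shell
# Generate 32 random bytes, base64-encoded (~44 characters of output).
openssl rand -base64 32
```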