Kilo Code Now Supports GPT-5.2 (Get $30 of Model Usage for $10)
OpenAI just launched the GPT-5.2 family of models. Thanks to early access, we’re excited to share that Kilo Code already fully supports all of them.
The GPT-5.2 family of models includes GPT-5.2, GPT-5.2 Pro, and GPT-5.2 Chat.
Let’s dive deeper.
GPT-5.2 Is an Extremely Powerful Model
GPT-5.2 is the most capable model family OpenAI has released to date. It is specifically designed to execute complex, end-to-end workflows with higher consistency and reliability. For Kilo users, this translates directly into:
Stronger long-context reasoning
Think: multi-file refactoring, getting up to speed with a new repo faster, and pinpointing the root cause of a bug in minutes instead of hours.
Reliable multi-step tool use across large inputs
Let’s say you want to do a refactor that touches multiple files. We all know how hard this is to do reliably with non-SOTA models. GPT-5.2 is optimized for this specific task, so expect far fewer “why did it modify that?” moments.
Sharply improved vision + UI understanding
We often want to upload a screenshot or two and let the LLM convert that image into usable code. GPT-5.2 was optimized with this in mind. It can read and reason over charts, understand UI layouts and screenshots, and convert them into front-end components (think: React/HTML/clean CSS code).
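If you’re curious what this looks like at the API level outside of Kilo Code, here’s a minimal sketch of sending a screenshot to the model through OpenAI’s Responses API and asking for a front-end component back. The "gpt-5.2" model identifier and the image URL are placeholder assumptions for illustration.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask the model to turn a UI screenshot into a React component.
// "gpt-5.2" and the image URL are placeholders, not confirmed identifiers.
const response = await client.responses.create({
  model: "gpt-5.2",
  input: [
    {
      role: "user",
      content: [
        {
          type: "input_text",
          text: "Recreate this dashboard card as a React component with clean CSS.",
        },
        {
          type: "input_image",
          image_url: "https://example.com/dashboard-card.png",
          detail: "auto",
        },
      ],
    },
  ],
});

console.log(response.output_text); // the generated component code
```

In Kilo Code itself you simply attach the screenshot to your message and ask for the component; the extension handles the request for you.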
We created a 2D version of a 2026 World Cup game using a single prompt in GPT-5.2. Check out this video to see how it turned out:
How we made GPT-5.2 work better in Kilo Code
We tested GPT-5.2 internally for several weeks before launch and made a number of improvements so this model family works well within Kilo Code:
Implemented native tool calling, resulting in faster and more reliable performance.
Added a tool similar to OpenAI’s apply_patch, which lets the model create, update, and delete files in your codebase using structured diffs.
Migrated to OpenAI’s Responses API (see the sketch after this list).
Implemented reasoning-trace preservation between requests, for more consistent performance across turns.
Extended prompt caching (up to 24 hours) for the OpenAI provider.
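Kilo Code wires all of this up behind the scenes, but here’s a rough sketch of what a Responses API call with reasoning reuse looks like. This is illustrative rather than our actual internals, and the "gpt-5.2" model identifier is an assumption.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// First turn: ask the model to plan a multi-file refactor.
const first = await client.responses.create({
  model: "gpt-5.2", // placeholder identifier
  input: "Rename the fetchUser helper to getUser across the repo and list the files you would touch.",
  reasoning: { effort: "medium" },
});

// Follow-up turn: previous_response_id lets the API carry the earlier
// reasoning state forward instead of starting from scratch, which is
// what keeps multi-step tool use consistent across requests.
const second = await client.responses.create({
  model: "gpt-5.2",
  previous_response_id: first.id,
  input: "Now update the call sites in the test files as well.",
});

console.log(second.output_text);
```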
Get $30 worth of AI usage for $10 on your first top-up
Here’s the API pricing for the GPT-5.2 family of models:
GPT-5.2, Standard Tier: input $1.75 / million tokens (90% cache discount), output $14 / million tokens
GPT-5.2, Priority Tier: input $3.50 / million tokens (90% cache discount), output $28 / million tokens
GPT-5.2 Pro, Standard Tier: input $21 / million tokens, output $168 / million tokens
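For a sense of scale, here’s a back-of-the-envelope cost calculation for a single standard-tier GPT-5.2 request using the prices above; the token counts are invented for illustration.

```typescript
// Standard-tier GPT-5.2 prices from the list above (USD per million tokens).
const INPUT_PER_M = 1.75;
const CACHED_INPUT_PER_M = INPUT_PER_M * 0.1; // 90% cache discount
const OUTPUT_PER_M = 14;

function estimateCostUSD(freshInput: number, cachedInput: number, output: number): number {
  return (
    (freshInput / 1e6) * INPUT_PER_M +
    (cachedInput / 1e6) * CACHED_INPUT_PER_M +
    (output / 1e6) * OUTPUT_PER_M
  );
}

// Example: 20k fresh input tokens, 80k cached input tokens, 4k output tokens.
console.log(estimateCostUSD(20_000, 80_000, 4_000).toFixed(4)); // "0.1050" -> about 10.5 cents
```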
You can purchase tokens either directly through OpenAI or through our Kilo Gateway, which allows you to use your balance to access 400+ different AI models (the GPT-5.2 family of models is part of that list).
We’re currently running a promo: you’ll get an extra $20 when you top up for the first time (so if you top up $10, you’ll get $30 total).
To redeem this offer:
Sign up for a Kilo Code account and go to your profile page
On your profile page, you should see the first-top-up bonus offer.
Top up your balance using Stripe (Stripe also supports Apple Pay) or crypto.
Start using GPT-5.2 in Kilo Code!