What Is the AI Proofreader?
The AI Proofreader is a privacy-first text correction tool that runs a large language model (LLM) entirely in your web browser using WebGPU. Unlike cloud-based services, it never sends your text off your device; all processing happens locally on your GPU. The recommended model, Qwen 3 (4B), supports 119 languages and delivers high-quality proofreading with a compact ~2.5 GB download.
How to Use This AI Proofreader
- Select an AI model (Qwen 3 4B is recommended) and a correction mode — "Faithful to Original" for minimal fixes, "Balanced" for natural improvements, or "Enhance Style" for polished writing.
- Click "Load AI Model" to download the model (~2.5 GB). This only happens once; the model is cached in your browser for future visits.
- Paste or type the text you want to proofread. Long texts are automatically split into chunks and processed sequentially.
- Click "Proofread" and the AI will return the corrected text along with a detailed list of every change made.
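The chunking step mentioned above can be sketched in a few lines. This is an illustrative helper, not the tool's actual implementation: the function name and the default chunk size are assumptions, and the real splitter may use token counts rather than characters.

```javascript
// Illustrative sketch: split long text into chunks of at most `maxChars`
// characters, preferring paragraph boundaries so each chunk stays coherent.
// (Hypothetical helper; the tool's real chunking logic may differ.)
function splitIntoChunks(text, maxChars = 2000) {
  const paragraphs = text.split(/\n\n+/);
  const chunks = [];
  let current = "";
  for (const para of paragraphs) {
    // Start a new chunk if adding this paragraph would overflow.
    if (current && current.length + para.length + 2 > maxChars) {
      chunks.push(current);
      current = "";
    }
    current = current ? current + "\n\n" + para : para;
    // A single paragraph longer than maxChars is split hard.
    while (current.length > maxChars) {
      chunks.push(current.slice(0, maxChars));
      current = current.slice(maxChars);
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk is proofread in turn and the corrected chunks are joined back together in order.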
How It Works
This tool uses WebGPU, a modern browser API that provides direct access to your GPU for computation. The selected model is loaded and run locally via the WebLLM engine. For reasoning-capable models like Qwen 3, you can watch the AI's thought process in real time before the final result appears. Long texts are automatically split into manageable chunks, processed one by one, and combined — so there is no hard character limit.
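Separating the streamed "thought process" from the final answer can be sketched as below. This assumes the convention, used by Qwen 3 and similar reasoning models, of wrapping thoughts in `<think>…</think>` tags; other models may format their reasoning differently, and the function name is hypothetical.

```javascript
// Sketch: separate a reasoning model's "thinking" from its final answer.
// Assumes the Qwen 3 convention of wrapping thoughts in <think>...</think>
// tags; other models may use a different format.
function splitThinking(output) {
  const match = output.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { thinking: "", answer: output.trim() };
  return {
    thinking: match[1].trim(),               // shown live while streaming
    answer: output.replace(match[0], "").trim(), // shown as the final result
  };
}
```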
Requirements
WebGPU is supported in Chrome 113+, Edge 113+, and recent versions of other Chromium-based browsers. You need a GPU with at least 4 GB of VRAM. The initial model download ranges from 1.3 to 5 GB depending on the model you choose, and is cached for future visits.
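A page can check for WebGPU before offering to load a model. `navigator.gpu` is the standard WebGPU entry point; the helper below takes the navigator object as a parameter only so the logic is testable outside a browser, and the function name is illustrative.

```javascript
// Sketch: feature-detect WebGPU. In a real page you would pass the global
// `navigator`; a browser without WebGPU simply has no `gpu` property.
function hasWebGPU(nav) {
  return typeof nav === "object" && nav !== null && "gpu" in nav;
}

// In a real page, a stricter check would also confirm a usable adapter:
//   if (!hasWebGPU(navigator) || !(await navigator.gpu.requestAdapter())) {
//     /* show an "unsupported browser/GPU" message */
//   }
```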
Frequently Asked Questions
Is my text sent to any server?
No. The AI model runs entirely in your browser using WebGPU. Your text is processed locally on your device and never transmitted anywhere. This makes it safe for confidential documents.
Why is the first load slow?
The first time you use the tool, it needs to download the AI model and compile it for your GPU. After that, the model is cached in your browser and loads much faster on subsequent visits.
What languages are supported?
The default Qwen 3 (4B) model supports 119 languages including English, Japanese, French, German, Spanish, Chinese, Korean, and many more. Alternative models are also available — Qwen 3 (8B) offers higher accuracy for demanding tasks, while lighter models like Phi-3.5-mini and Llama 3.2 are options for systems with limited GPU memory.
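The trade-off between the models can be sketched as a simple VRAM-based picker. The thresholds here are illustrative assumptions for the sketch, not the tool's exact requirements; only the general guidance comes from the text above (at least 4 GB of VRAM overall, lighter models for limited GPU memory, Qwen 3 (8B) for demanding tasks).

```javascript
// Illustrative model picker by available VRAM in GB. Thresholds are
// assumptions, not the tool's published requirements.
function pickModel(vramGB) {
  if (vramGB >= 8) return "Qwen 3 (8B)";  // higher accuracy, more memory
  if (vramGB >= 4) return "Qwen 3 (4B)";  // recommended default
  return "Llama 3.2";                      // or Phi-3.5-mini for low-VRAM GPUs
}
```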