Browser-Based Local LLM Chat

Chat with a local LLM that runs entirely in your browser. All inference happens on your device, and no data is sent to external servers.

The model may take a few minutes to load on first use while its weights are downloaded and cached. Make sure WebGPU is enabled in your browser for optimal performance.
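
If you want to confirm WebGPU support before loading the model, a quick check in the browser console looks roughly like this (a minimal sketch using the standard `navigator.gpu` entry point; the cast avoids needing the `@webgpu/types` definitions):

```ts
// Minimal WebGPU availability check. navigator.gpu is undefined in browsers
// where WebGPU is unsupported or disabled.
async function hasWebGPU(): Promise<boolean> {
  const gpu = (navigator as any).gpu; // cast: assumes no WebGPU type defs installed
  if (!gpu) return false;
  const adapter = await gpu.requestAdapter(); // resolves to null if no suitable GPU
  return adapter !== null;
}
```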

Once the model has loaded, start a conversation with the AI assistant.

Powered by WebLLM, a high-performance in-browser LLM inference engine.
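
For reference, a minimal WebLLM setup looks roughly like the sketch below. It uses WebLLM's documented `CreateMLCEngine` and OpenAI-style chat API; the specific model ID and the prompt are illustrative, and this app's actual wiring may differ:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Download (on first use) and initialize a model. Weights are cached by the
// browser, so subsequent loads are much faster. The model ID below is one of
// WebLLM's prebuilt models, chosen here only as an example.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (report) => console.log(report.text), // load progress
});

// Chat through the OpenAI-style completions API that WebLLM exposes.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize WebGPU in one sentence." }],
});
console.log(reply.choices[0].message.content);
```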