In Browser Chatbot

IMPORTANT: this uses an Origin Trial, so it may break at any time.

You’ll need to be running the latest version of Google Chrome on a desktop.

Overview

As part of my ongoing experimentation with browser-native AI tools, I explored the new Prompt API in Chrome — a feature that lets developers run on-device AI models directly within the browser. This means no API keys, no external servers, and no data leaving the user’s device.

It’s still in early access, but already it opens up a new wave of possibilities for AI-powered web experiences that are fast, private, and portable.


What Is the Prompt API?

The Prompt API allows you to create and interact with language model sessions using simple JavaScript calls. Instead of connecting to OpenAI, Gemini, or Ollama, your app talks to Chrome’s built-in AI model.
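At its simplest, that interaction is a couple of calls: create a session, then prompt it. Here’s a minimal sketch using the API shape from the origin trial (a global `LanguageModel` object) — since this is an Origin Trial, the exact names may change:

```javascript
// Minimal sketch of the Prompt API as exposed in the origin trial.
// `LanguageModel` is only defined in Chrome builds with the trial enabled,
// so feature-detect before using it.
async function askBuiltInModel(question) {
  if (!('LanguageModel' in globalThis)) {
    throw new Error('Prompt API not supported in this browser');
  }
  const session = await LanguageModel.create(); // spins up the on-device model
  const answer = await session.prompt(question); // runs entirely locally
  session.destroy(); // free the model's resources when done
  return answer;
}
```

No API key, no endpoint URL — the browser itself is the model host.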

How This Works

  1. First, the system checks whether your browser supports the `LanguageModel` feature.
  2. If it does, the app tries to create a model session; if the model isn’t on the device yet, Chrome downloads it first.
  3. Then I set up the system prompt with the information I want the chatbot to talk about.
  4. When you send a message, the chatbot answers using your question plus the info I gave it in the system prompt.
  5. (Optional) Open dev tools and you can see the interactions never reach out to the web or hit an API for the chatbot’s responses.
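The steps above can be sketched as one setup function. This follows the origin-trial shape (`availability()`, `create()` with `initialPrompts`, and a download `monitor`); treat the names as assumptions that may shift before the API ships:

```javascript
// Sketch of steps 1-4: feature check, create/download, system prompt, chat.
async function createChatbot(siteInfo) {
  // Step 1: does this browser expose the Prompt API at all?
  if (!('LanguageModel' in globalThis)) return null;

  // Step 2: 'available' means the model is ready; 'downloadable' or
  // 'downloading' means create() will fetch it first; 'unavailable' means
  // this device can't run it.
  const availability = await LanguageModel.availability();
  if (availability === 'unavailable') return null;

  // Step 3: the system prompt carries the info the bot should answer from.
  const session = await LanguageModel.create({
    initialPrompts: [
      { role: 'system', content: `Answer questions using only this info:\n${siteInfo}` },
    ],
    monitor(m) {
      // Fires while the model downloads, so the UI can show progress.
      m.addEventListener('downloadprogress', (e) => console.log('progress', e.loaded));
    },
  });

  // Step 4: every call runs against the on-device model.
  return (message) => session.prompt(message);
}
```

A page would call `createChatbot(aboutText)` once on load and reuse the returned function for every message.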

How can this be used?

  1. Private data: since there’s no API request to another server, your questions stay on the device.
  2. Offset AI costs: your device does the thinking, so I’m not paying anything for this LLM to run.
  3. Add MCP to interact with outside services while still keeping costs down.
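Because everything runs locally, you can even stream tokens into the page with zero network round-trips. A sketch, assuming the trial’s `promptStreaming()` method (which returns an async-iterable stream of text chunks — chunk semantics may change):

```javascript
// Hypothetical streaming sketch: appends each on-device chunk to an element
// as it arrives, so the user sees the reply build up with no server involved.
async function streamReply(session, message, outputEl) {
  outputEl.textContent = '';
  const stream = session.promptStreaming(message);
  for await (const chunk of stream) {
    outputEl.textContent += chunk; // append each locally generated chunk
  }
}
```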

Here’s what the demo looks like: