GrahamTheDev
window.ai - running AI LOCALLY from DevTools! 🤯

On-device AI in the browser is here - kinda.

It is currently in Chrome Canary, which means it will be here soon(ish).

In this article I will show you how to get it running on your device, so that you can have a play with it and see what use cases you can think of.

And I will just say this: Running window.ai from DevTools without an internet connection is pretty fun, even if the results are "meh"!

Setup

Getting up and running only takes 5 minutes!

1. Download Chrome Canary

Go to the Chrome Canary site and download Chrome Canary.

2. Enable "Prompt API for Gemini Nano"

Open Chrome Canary, type "chrome://flags" into the URL bar, and press Enter.

Then, in the search box at the top, type "prompt API".

You should see "Prompt API for Gemini Nano" as the only option.

(Screenshot: the chrome://flags search showing "Prompt API for Gemini Nano" as the highlighted result.)

Switch that to "enabled".

3. Enable "Enables optimization guide on device"

While you are on the "chrome://flags" page, you need to enable a second item.

Remove your previous search and search for "optimization guide on".

You should see "Enables optimization guide on device" as your only option.

This time you want to enable it, but with the "Enabled BypassPerfRequirement" option.

4. Install Gemini Nano

Now we need to install Gemini Nano on our device.

Gemini Nano actually ships as part of a bigger component, but all we need to know about that is which component to download.

Warning: This file is 1.5 GB. It doesn't tell you that anywhere, so if you have a slow connection, pay per GB of data, or are low on storage space, you may not want to do this!

Head to: "chrome://components/".

Hit Ctrl + F and search for "Optimization Guide".

You will see an item "Optimization Guide On Device Model".

Click "Check for Update" and it will install the file.

(Screenshot: the chrome://components page showing the "Optimization Guide On Device Model" component.)

5. DONE!

Last step: Restart Chrome Canary for the changes to take effect.

And that is it! Now we can move on to using AI locally.

Using window.ai

If everything worked as expected then you should now be able to open DevTools (F12), go to the "Console" tab and start playing!

The easiest way to check is to type window. into the console and see if ai comes up as an option.

If not, go back and check you didn't miss a step!
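If you prefer to check programmatically, here is a minimal sketch. Note that this is an experimental API: checkLocalAI is my own helper, and canCreateTextSession is an assumption based on early Canary builds, so the names may well change.

```javascript
// A minimal availability check for the experimental window.ai API.
// checkLocalAI is a hypothetical helper; canCreateTextSession is assumed
// from early Canary builds and may be renamed or removed.
async function checkLocalAI() {
  if (!window.ai) {
    return "window.ai not available - recheck the flags above";
  }
  if (typeof window.ai.canCreateTextSession === "function") {
    // In early builds this resolved to "readily" once the model was downloaded.
    return await window.ai.canCreateTextSession();
  }
  return "window.ai found, but no canCreateTextSession method";
}
```

Paste it into the console and `await checkLocalAI()` to see where you stand.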

Creating our first session

Just one command is needed to start a session with our AI model.

const chatSession = await window.ai.createTextSession()

Tip: Don't forget the await. I did originally 🤦🏼‍♂️!

There is also a createGenericSession() option, but I haven't worked out what the difference is yet!

Now we can use that session to ask questions.

Sending a prompt

For this we just use the .prompt function on our chatSession object!

const result = await chatSession.prompt("hi, what is your name?")

Again, it is all async, so don't forget the await (I didn't make the same mistake twice...honest!).

Depending on the complexity of your prompt and your hardware, this can take anywhere from a few milliseconds to several seconds, but you should eventually see undefined in the console once it has finished (that is just the value of the assignment; the response itself is stored in result).
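If you are curious how long a prompt actually takes on your hardware, here is a quick sketch. timedPrompt is my own helper, not part of the API; it works with any object that exposes an async prompt() method, such as the chatSession created above.

```javascript
// Rough timing for a single prompt. timedPrompt is a hypothetical helper
// that wraps any session object with an async prompt() method.
async function timedPrompt(session, text) {
  const t0 = performance.now();
  const result = await session.prompt(text);
  console.log(`prompt took ${Math.round(performance.now() - t0)} ms`);
  return result;
}

// e.g. await timedPrompt(chatSession, "hi, what is your name?")
```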

Getting the response

Now we just have to console.log the result!

console.log(result)

And we get:

  As a large language model, I do not have a name.

Pretty underwhelming, but at least it works!

Quick and Dirty Reusable example

Obviously you don't want to have to keep sending multiple commands, so you can copy and paste this function into your console to make things easier:

  async function askLocalGPT(promptText) {
    if (!window.chatSession) {
      console.log("starting chat session")
      window.chatSession = await window.ai.createTextSession()
      console.log("chat session created")
    }

    // log the response and also return it, so the helper is reusable
    const result = await window.chatSession.prompt(promptText)
    console.log(result)
    return result
  }

And now you can just type askLocalGPT("prompt text") into your console.

I personally have that saved as a snippet in Sources > snippets for quick access when I want to play with it.

Have fun!

Is it any good?

No.

Really? It isn't any good?

I mean, it depends on the measuring stick you are using.

If you are trying to compare it to Claude or ChatGPT, it is terrible.

However for local playing and experimentation it is awesome!

Also bear in mind that each time you ask a question, it does not automatically have memory of what you asked previously.

So if you want to have a conversation where the model "remembers" what was said previously you need to feed previous questions and answers in with your new question.
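A quick and dirty way to do that is to keep the transcript yourself and prepend it to each new question. The helper names and the User:/Assistant: turn format below are my own convention, not part of the API:

```javascript
// A sketch of manual conversation memory on top of the session API shown
// above. buildPrompt and chatWithMemory are hypothetical helpers; the
// User:/Assistant: transcript format is an arbitrary convention.
const history = [];

function buildPrompt(history, question) {
  const past = history
    .map(turn => `User: ${turn.q}\nAssistant: ${turn.a}`)
    .join("\n");
  return past ? `${past}\nUser: ${question}\nAssistant:` : question;
}

async function chatWithMemory(question) {
  if (!window.chatSession) {
    window.chatSession = await window.ai.createTextSession();
  }
  const answer = await window.chatSession.prompt(buildPrompt(history, question));
  history.push({ q: question, a: answer });
  return answer;
}
```

Bear in mind the transcript counts against the context window, so for long conversations you would want to trim or summarise old turns.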

Is it fun to play with?

Yes.

The fact I can get it to work locally in my browser is pretty cool. Plus it can do simple coding questions etc.

And the beauty is no big bills! You can use the full 32k context window as often as you want without worrying about racking up a big bill by mistake.

Oh and while I said it isn't very good, it can do summaries quite well:

  askLocalGPT(
    "can you summarise this HTML for me please and explain what the page is about etc, please return a plain text response with the summary and nothing else: " +
    document.querySelector('article').textContent
  )

And with a little playing it outputs:

This article explains how to run window.ai locally in your browser using Google's large language model (LGBL).

It describes the necessary steps, including enabling the "Prompt API for Gemini Nano" and "Optimization Guide on Device Model" flags in Google Chrome Canary, installing Gemini Nano, and restarting Chrome Canary.

The article then demonstrates how to use window.ai by creating a text session, prompting the AI model, and receiving the response. It concludes by discussing the possibilities and future enhancements of window.ai.

What will you build?

I have only just scratched the surface of the new API, but I can see it being really handy for creating "custom GPTs" for your own use for now.
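For example, a "custom GPT" can be as simple as baking fixed instructions into every prompt. makeAssistant below is my own hypothetical helper, built on the experimental session API covered earlier:

```javascript
// A sketch of a "custom GPT": prepend fixed instructions to every prompt.
// makeAssistant is a hypothetical helper; window.ai.createTextSession is
// the experimental API described earlier and may change.
function makeAssistant(instructions) {
  let sessionPromise = null;
  return async function ask(question) {
    // Lazily create one session and reuse it for later questions.
    if (!sessionPromise) {
      sessionPromise = window.ai.createTextSession();
    }
    const session = await sessionPromise;
    return session.prompt(`${instructions}\n\nUser question: ${question}`);
  };
}

// e.g. const explain = makeAssistant("Explain like I am five. Be brief.");
// await explain("What is a closure in JavaScript?");
```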

In the future once AI is available in-browser for everybody, who knows what amazing things will be created.

Final thought

While I find this exciting as a developer, along with the possibilities it opens up, a large part of me dislikes, or is at least wary of, it.

People are already throwing "AI" into everything for no reason. Having it run locally on people's machines will only encourage them to use it for even stupider things!

Plus there are probably about 50 other things around security, remote AI farms, etc. etc. that are likely to make me cry in the future the more I think about it.

Top comments (15)

Best Codes

The Opera developer browser has a much better setup for this than Chrome. You can use literally over 100 AI models locally, with no internet, through Aria, Opera's built-in AI.

GPT4ALL.io and Ollama are great for running models from Hugging Face locally on Linux, Windows, or macOS.

Nice article!

GrahamTheDev

That is cool to know, I will have to check that out! 💗

Best Codes

😁 Let me know what you think!

Rasmus Schultz

Also bear in mind that each time you ask a question, it does not automatically have memory of what you asked previously.

There is an API for conversation/chat as well.

It doesn't look like there's any documentation for the JS API yet though. I'm not sure this is open source? so we might not even be able to reference the C++ code.

Based on the announcement from Google, they want us to use the API wrapper for their hosted inference, which has a built-in adapter for the JS API, which can be used with their hosted inference as a fallback.

I'd love a reply if you can find the docs or code?

GrahamTheDev

I had a good look, but didn't find anything when I was writing this.

I imagine documentation will come once it all starts to filter down towards production.

If I do find anything I will let you know! 💗

Devarshi Shimpi

Time to use it to summarise this article haha

GrahamTheDev

Already done that, so that tells me you didn't read it all! hahahahah. 💗

Lavi Yatziv

This is pretty neat. This could open a lot of doors for accessibility and language translation.

Irfannur Diah

Linux is not supported. haha

GrahamTheDev

Oh Really? That is a shame. 💗

Bob James

Is it any good?
No
Is it fun to play with?
Yes
laugh out loud on this, the story of my life 🤣🤣

GrahamTheDev

hehe, well I can't go writing articles on anything that is actually useful, it would ruin my reputation! 🤷🏼‍♂️🤣💗

Bob James

i like fun things 💛💛

Franklin Thaker

Aren't you guys concerned about setting up AI in the browser? I mean, I'm OK with this in a new browser which I don't use often, but I don't want a black box running on my machine.

GrahamTheDev

At the moment, no. It is heavily sandboxed.

Am I concerned about what security problems this may pose in the future? Yes, I imagine there will be security holes introduced as they give AI more freedom. 💗