Now that Node.js and Ollama are ready, it's time to create your first real agent — a small, self‑contained script that sends a prompt to a local model and prints back the response. This is where local‑first AI becomes more than a concept: you're now building intelligence that runs entirely on your machine.
Modern AI agents are built on simple foundations: a runtime (Node.js), a local model (via Ollama), and a bit of JavaScript to connect the two. From here, you’ll eventually add tools, memory, planning loops, and multi‑agent coordination — but it all starts with this first script.
Inside your project folder, create a new JavaScript file. This will be the entry point for your agent:
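For example, from inside your project folder:

```shell
touch agent.js
```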
Open agent.js in your editor. You'll write a short script that uses the official Ollama JavaScript client to send messages to your local model.
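The client is published on npm as the `ollama` package, so install it in your project first:

```shell
npm install ollama
```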
Add the following code to agent.js. This script sends a prompt to your model and prints the response:
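A minimal sketch of such a script using the `ollama` client is shown below. The model name (`llama3`) and the prompt are placeholder choices — use any model you've pulled. Top‑level `await` requires ES modules, so either name the file with an `.mjs` extension or set `"type": "module"` in your package.json.

```javascript
// agent.js — send a single prompt to a local model and print the reply
import ollama from 'ollama';

const response = await ollama.chat({
  model: 'llama3', // any model you've pulled with `ollama pull`
  messages: [
    { role: 'user', content: 'Explain what an AI agent is in one sentence.' },
  ],
});

console.log(response.message.content);
```

If you'd rather see tokens appear as they're generated, the client also supports `stream: true`, which returns an async iterable of partial messages you can write to stdout as they arrive.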
This is the simplest possible agent: a single prompt, a single response. But even here, you’re interacting with a fully local model — no cloud, no API keys, and no external dependencies. Everything happens on your machine.
If you're curious how the chat API works under the hood, the official documentation explains the message format: see the Ollama Chat API reference.
Run your script from the terminal:
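```shell
node agent.js
```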
If everything is set up correctly, your model will print a response after a moment. You've just built your first local AI agent — a foundational milestone in agent‑native development.
Agents become more useful when you give them context or tasks. Try modifying your script:
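One simple way to do that is to add a system message that frames the agent's role before the user prompt. The role description and task below are illustrative placeholders:

```javascript
// agent.js — give the agent context via a system message
import ollama from 'ollama';

const response = await ollama.chat({
  model: 'llama3',
  messages: [
    { role: 'system', content: 'You are a concise research assistant.' },
    { role: 'user', content: 'Summarize the benefits of local-first AI in three bullet points.' },
  ],
});

console.log(response.message.content);
```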
You can also switch models instantly by changing the model field:
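For instance, assuming you've pulled another model first (e.g. `ollama pull phi3` — the name here is just an example):

```javascript
import ollama from 'ollama';

const response = await ollama.chat({
  model: 'phi3', // was 'llama3' — everything else stays the same
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.message.content);
```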
This flexibility is one of the biggest advantages of local‑first development — you can experiment freely without worrying about API limits or cloud costs.
Your agent now has the ability to:

- send a prompt to a locally running model
- receive and print the model's response
- switch between any models you've pulled, with a one‑line change
This simple script is the foundation of every advanced agent you’ll build later — including tool‑using agents, autonomous loops, and multi‑agent systems. If you want to explore how agents are evolving across the industry, the LLM‑as‑Agents research paper is a great high‑level overview.
| Problem | Fix |
| --- | --- |
| “Cannot find module 'ollama'” | Run `npm install ollama` in your project folder |
| Model not found | Run `ollama pull llama3`, then confirm it appears in `ollama list` |
| Script hangs or is slow | Try a smaller model such as `phi` or `qwen` |
Next Step
Run a Local Agent Server →