
Build Your First Agent

Step 5 — Write a simple agent that thinks and responds.

Now that Node.js and Ollama are ready, it’s time to create your first real agent — a small script that sends a prompt to a local model and prints the response.


With Node.js installed and your models running smoothly, you're ready to create your first real AI agent — a small, self‑contained program that can think, respond, and interact with your local models. This is where local‑first AI becomes more than a concept: you’re now building intelligence that runs entirely on your machine.

Modern AI agents are built on simple foundations: a runtime (Node.js), a local model (via Ollama), and a bit of JavaScript to connect the two. From here, you’ll eventually add tools, memory, planning loops, and multi‑agent coordination — but it all starts with this first script.


Create Your Agent File

Inside your project folder, create a new JavaScript file. This will be the entry point for your agent:

touch agent.js

Open agent.js in your editor. You’ll write a short script that uses the official Ollama JavaScript client to send messages to your local model.
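Because the script uses ES‑module `import` syntax, your project’s package.json needs `"type": "module"`, and the client package must be installed with `npm install ollama`. A minimal package.json might look like this (the name and version values are just placeholders):

```json
{
  "name": "my-first-agent",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "ollama": "^0.5.0"
  }
}
```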


Write Your First Agent

Add the following code to agent.js. This script sends a prompt to your model and prints the response:

import ollama from "ollama";

const response = await ollama.chat({
  model: "llama3",
  messages: [{ role: "user", content: "Hello! What can you do?" }],
});

console.log(response.message.content);

This is the simplest possible agent: a single prompt, a single response. But even here, you’re interacting with a fully local model — no cloud, no API keys, and no external dependencies. Everything happens on your machine.
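The client can also stream tokens as they arrive by passing `stream: true` and iterating the result with `for await`. Here is a sketch of that consumption pattern using a stub async generator in place of the real call, so you can see the shape even without the server running — with the real client, the stream would come from `ollama.chat({ ..., stream: true })`:

```javascript
// Stub async generator standing in for a streaming chat call. The real
// client yields parts shaped like { message: { content: "..." } }.
async function* fakeStream(text) {
  for (const word of text.split(" ")) {
    yield { message: { content: word + " " } };
  }
}

// Consume the stream the same way you would consume the real client's
// output: print each chunk immediately, accumulate the full text.
async function printStream(stream) {
  let full = "";
  for await (const part of stream) {
    process.stdout.write(part.message.content);
    full += part.message.content;
  }
  return full.trim();
}

printStream(fakeStream("Hello from a local model")).then((full) =>
  console.log("\nFull response:", full)
);
```

Streaming matters for agents: it lets you show progress to the user (or to a supervising process) while a long generation is still running.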

If you're curious how the chat API works under the hood, the official documentation explains the message format: Ollama Chat API.


Run Your Agent

Run your script from the terminal:

node agent.js

If everything is set up correctly, your model will respond within a few seconds (the first run can take longer while the model loads into memory). You’ve just built your first local AI agent — a foundational milestone in agent‑native development.


Try a More Interesting Prompt

Agents become more useful when you give them context or tasks. Try modifying your script:

const response = await ollama.chat({
  model: "llama3",
  messages: [
    {
      role: "user",
      content: "Summarize the concept of local-first AI in one paragraph.",
    },
  ],
});
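The `messages` array is also how you give an agent memory: keep appending each turn and resend the whole history on the next call. A minimal helper sketch — `createConversation` is my own name for it, not part of the Ollama client:

```javascript
// Minimal conversation memory: hold the full message history and append
// each new turn, so the model sees prior context on every call.
function createConversation(systemPrompt) {
  const messages = systemPrompt
    ? [{ role: "system", content: systemPrompt }]
    : [];
  return {
    messages,
    addUser(content) {
      messages.push({ role: "user", content });
    },
    addAssistant(content) {
      messages.push({ role: "assistant", content });
    },
  };
}

const chat = createConversation("You are a concise assistant.");
chat.addUser("Summarize local-first AI in one sentence.");
// Pass chat.messages as the messages field of ollama.chat(...), then
// record the reply with chat.addAssistant(response.message.content).
console.log(chat.messages.length); // 2 (system + user)
```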

You can also switch models instantly by changing the model field:

model: "mistral"

This flexibility is one of the biggest advantages of local‑first development — you can experiment freely without worrying about API limits or cloud costs.
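One way to make that experimentation even faster is to read the model name from the command line instead of editing the file each time. A small convenience sketch — `pickModel` is my own helper, and whichever name you pass must already be pulled with `ollama pull`:

```javascript
// Read the model name from the CLI, falling back to a default.
// Usage: node agent.js mistral
function pickModel(argv, fallback = "llama3") {
  return argv[2] ?? fallback;
}

const model = pickModel(process.argv);
console.log(`Using model: ${model}`);
// ...then pass it along: ollama.chat({ model, messages: [...] })
```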


Understanding What You Built

Your agent now has the ability to:

  • Send structured messages to a local model
  • Receive and process responses
  • Run entirely offline
  • Be extended with tools, memory, and planning logic
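That last point — extending toward planning logic — can be sketched as a loop that keeps calling the model until a stop condition is met. `callModel` below is a stub stand-in (in a real agent it would wrap `ollama.chat`), and the `DONE` check is a deliberately naive stop condition:

```javascript
// Sketch of a planning loop: call the model, record the reply, and keep
// going until it signals completion or a step budget runs out.
async function runAgent(task, callModel, maxSteps = 3) {
  const messages = [{ role: "user", content: task }];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(messages);
    messages.push({ role: "assistant", content: reply });
    if (reply.includes("DONE")) break; // naive stop condition
    messages.push({ role: "user", content: "Continue." });
  }
  return messages;
}

// Stub model that finishes on its second call.
let calls = 0;
const stubModel = async () => (++calls < 2 ? "thinking..." : "DONE");

runAgent("Plan my day", stubModel).then((msgs) =>
  console.log(`Finished after ${msgs.length} messages`)
);
```

Swapping the stub for a real model call turns this skeleton into a genuine autonomous loop, which is exactly the structure later chapters build on.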

This simple script is the foundation of every advanced agent you’ll build later — including tool‑using agents, autonomous loops, and multi‑agent systems. If you want to explore how agents are evolving across the industry, the LLM‑as‑Agents research paper is a great high‑level overview.


Troubleshooting

“Cannot find module 'ollama'”

  • Install the client: npm install ollama
  • Ensure you’re in the correct project folder

Model not found

  • Download the model first: ollama pull llama3
  • Check installed models: ollama list

Script hangs or is slow

  • Close other apps to free CPU/GPU resources
  • Try a smaller model like phi or qwen
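If you hit these errors often, you can wrap the chat call so failures print an actionable hint before rethrowing. This is a sketch — the error‑message patterns matched here are assumptions about how failures surface, so adjust them to what you actually see:

```javascript
// Wrap a chat call so common local-setup failures give actionable hints.
// `chatFn` is whatever function performs the request, e.g. ollama.chat.
async function safeChat(chatFn, options) {
  try {
    return await chatFn(options);
  } catch (err) {
    const text = String(err);
    if (text.includes("ECONNREFUSED")) {
      console.error("Is the Ollama server running? Try: ollama serve");
    } else if (/not found/i.test(text)) {
      console.error(`Model missing? Try: ollama pull ${options.model}`);
    }
    throw err; // still surface the original error to the caller
  }
}

// Example with a stub that simulates a missing model:
safeChat(async () => {
  throw new Error("model 'llama3' not found");
}, { model: "llama3" }).catch(() => console.log("handled"));
```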

Next Step
Run a Local Agent Server →