11/09/2025

Building Your Own Local Grafana AI Assistant with Ollama, Prometheus, and Loki

 


In this guide, we’ll walk through setting up Grafana’s new LLM-powered Assistant — entirely locally — using open-source components:
Grafana OSS, Ollama for local AI models, Prometheus for metrics, and Loki + Promtail for logs.


Overview

We’ll deploy a local observability stack that includes:

  • Grafana OSS + LLM App: dashboards + the built-in AI Assistant

  • Ollama: runs Llama 3 or other open-source LLMs

  • Prometheus: metrics collection

  • Loki + Promtail: log aggregation

  • Node Exporter: host metrics

  • Grafana MCP (optional): connects Grafana data to external AI clients like Claude Desktop

All of this runs locally with docker-compose — no cloud dependencies, no API billing, and complete data privacy.


Step 1 — Clone the Project

Clone the ready-made project and enter its directory:

git clone https://github.com/dhanuka84/grafana-llm-observability.git

cd grafana-llm-observability



Step 2 — Fix Docker Root for Promtail

Check your Docker data root:

docker info --format '{{ .DockerRootDir }}'


Then create a .env file so Promtail can find the container logs:

echo "DOCKER_ROOT=$(docker info --format '{{ .DockerRootDir }}')" > .env
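
For reference, here is a minimal, illustrative sketch of how that value is typically consumed. It assumes the project's docker-compose.yml mounts the Docker data root into the Promtail container; the exact paths and file names in the repository may differ:

# docker-compose.yml (excerpt, illustrative only)
promtail:
  image: grafana/promtail:latest
  volumes:
    - ./promtail-config.yml:/etc/promtail/config.yml:ro   # Promtail config (file name is a placeholder)
    - ${DOCKER_ROOT:-/var/lib/docker}/containers:/var/lib/docker/containers:ro   # container log files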



Step 3 — Start the Stack

docker-compose up -d


This launches Grafana, Prometheus, Loki, Promtail, Node Exporter, Ollama, and optionally the MCP server.
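
To confirm the services came up, list them; each one should report an Up state:

docker-compose ps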


Step 4 — Pull a Local Model

docker-compose exec ollama ollama pull llama3


Confirm it’s available:

docker-compose exec ollama ollama list
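
You can also confirm the model is reachable over HTTP from the host, assuming the compose file publishes Ollama's API on port 11434 (the same port used in the verification step later):

curl http://localhost:11434/api/tags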



Step 5 — Configure Grafana LLM App

  1. Open Grafana → http://localhost:3000
    Login: admin / admin

  2. Go to Administration → Plugins → LLM → Configuration

Use the following settings:

  • Provider: Custom API

  • API URL: http://ollama:11434

  • API Path: /v1

  • API Key: any value (e.g. local)

  • Model (Base & Large): llama3:latest

Save → you should see
LLM provider health check succeeded!
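
If the health check fails, first verify from the host that Ollama's OpenAI-compatible endpoint is responding (Grafana itself reaches it via the ollama hostname on the Docker network, which is why the API URL above is not localhost):

curl http://localhost:11434/v1/models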



Step 6 — Check Metrics and Logs

  • Prometheus: http://localhost:9090

  • Loki: via Grafana → Explore → Loki → query {job="docker"}

Grafana auto-loads an Observability Starter dashboard that shows Prometheus scrapes and Docker logs.
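
To spot-check Prometheus from the command line, you can also query its HTTP API directly; the built-in up metric should report 1 for each healthy scrape target:

curl 'http://localhost:9090/api/v1/query?query=up'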


Step 7 — Enable and Use the AI Assistant

  1. In Grafana, go to Administration → Plugins → LLM

  2. Toggle Enable all LLM features in Grafana

  3. Refresh your browser (Ctrl + Shift + R)

Now you can:

  • Hover over any panel → ⋯ → Ask AI → Explain this panel

  • Use Explain query in Explore

  • Add a persistent LLM Chat panel:
    + Create → Dashboard → Add Visualization → “LLM Chat”


Step 8 — (Advanced) Enable MCP Server

If you’d like to integrate Grafana with external AI clients like Claude Desktop:

  1. Create a Grafana API key (Editor role)

  2. Add it to docker-compose.yml:
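
For example, here is a minimal sketch assuming the project's mcp service is based on Grafana's mcp-grafana server and reads GRAFANA_URL and GRAFANA_API_KEY environment variables; check the repository's compose file for the exact variable names it expects:

# docker-compose.yml (excerpt, illustrative only)
mcp:
  environment:
    - GRAFANA_URL=http://grafana:3000
    - GRAFANA_API_KEY=<paste-your-api-key-here>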




  3. Restart MCP:

docker-compose up -d --force-recreate mcp

  4. Visit http://localhost:8000/healthz → it should return ok

Then configure your AI client to use:

SSE URL: http://localhost:8000/sse



Step 9 — Verify Everything

# LLM test

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3:latest","messages":[{"role":"user","content":"Say hi!"}]}'


→ returns a JSON chat completion containing the model's reply (e.g. "Hi!")

# Prometheus test

curl http://localhost:9090/metrics | head
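
You can also check Loki's readiness endpoint, assuming the compose file publishes Loki on its default port 3100 (adjust the port if the repository maps it differently):

# Loki test
curl http://localhost:3100/ready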



Step 10 — Enjoy Your Offline AI Observability Assistant

You now have:

  • A full local monitoring stack

  • A private LLM Assistant powered by Ollama

  • AI-driven insights and explanations directly inside Grafana dashboards



