In this guide, we’ll walk through setting up Grafana’s new LLM-powered Assistant — entirely locally — using open-source components:
Grafana OSS, Ollama for local AI models, Prometheus for metrics, and Loki + Promtail for logs.
Overview
We’ll deploy a local observability stack that includes:
Grafana OSS for dashboards and the AI Assistant
Prometheus + Node Exporter for metrics
Loki + Promtail for container logs
Ollama for serving local AI models
Optionally, the Grafana MCP server for external AI clients
All of this runs locally with docker-compose — no cloud dependencies, no API billing, and complete data privacy.
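For orientation, here is a minimal sketch of what such a compose file wires together. It is illustrative only: the repository cloned in Step 1 ships its own docker-compose.yml, and its service names, images, and ports are the authoritative ones (the optional MCP server is omitted from this sketch).
services:
  grafana:
    image: grafana/grafana-oss
    ports: ["3000:3000"]
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
  node-exporter:
    image: prom/node-exporter
  loki:
    image: grafana/loki
    ports: ["3100:3100"]
  promtail:
    image: grafana/promtail
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]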
Step 1 — Clone the Project
Clone the ready-made project:
git clone https://github.com/dhanuka84/grafana-llm-observability.git
cd grafana-llm-observability
Step 2 — Fix Docker Root for Promtail
Check your Docker data root:
docker info --format '{{ .DockerRootDir }}'
Then create a .env file so Promtail can find container logs:
echo "DOCKER_ROOT=$(docker info --format '{{ .DockerRootDir }}')" > .env
Step 3 — Start the Stack
docker-compose up -d
This launches Grafana, Prometheus, Loki, Promtail, Node Exporter, Ollama, and optionally the MCP server.
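To confirm that everything started, list the services; each should show a running (or healthy) state:
docker-compose ps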
Step 4 — Pull a Local Model
docker-compose exec ollama ollama pull llama3
Confirm it’s available:
docker-compose exec ollama ollama list
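As an optional smoke test before touching Grafana, you can send the model a one-off prompt straight through the Ollama CLI (using the model name pulled above):
docker-compose exec ollama ollama run llama3 "Say hello in one short sentence."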
Step 5 — Configure Grafana LLM App
Open Grafana → http://localhost:3000
Login: admin / admin
Go to Administration → Plugins → LLM → Configuration
Use the following settings:
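The exact field names vary a little between versions of the LLM app, but the idea is to point its OpenAI-compatible provider at the local Ollama endpoint, along these lines (the API key is just a placeholder; Ollama does not check it):
Provider: OpenAI-compatible
API URL: http://ollama:11434 (or http://localhost:11434 if Grafana is not on the compose network)
API Key: any non-empty placeholder, e.g. ollama
Default model: llama3:latest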
Save → you should see
✅ LLM provider health check succeeded!
Step 6 — Check Metrics and Logs
Prometheus: http://localhost:9090
Loki: via Grafana → Explore → Loki → query {job="docker"}
Grafana auto-loads an Observability Starter dashboard that shows Prometheus scrapes and Docker logs.
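For example, you can narrow the Docker logs with LogQL filters right in the same Explore view (the job="docker" label comes from the repo's Promtail config; no other labels are assumed here):
{job="docker"} |= "error"
{job="docker"} |~ "(?i)warn|error"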
Step 7 — Enable and Use the AI Assistant
In Grafana, go to Administration → Plugins → LLM
Toggle Enable all LLM features in Grafana
Refresh your browser (Ctrl + Shift + R)
Now you can:
Hover over any panel → ⋯ → Ask AI → Explain this panel
Use Explain query in Explore
Add a persistent LLM Chat panel:
+ Create → Dashboard → Add Visualization → “LLM Chat”
Step 8 — (Advanced) Enable MCP Server
If you’d like to integrate Grafana with external AI clients like Claude Desktop:
Create a Grafana API key (Editor role)
Add it to docker-compose.yml:
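A sketch of what that section might look like (GRAFANA_URL and GRAFANA_API_KEY follow the mcp-grafana convention; check the repo's compose file for the exact service name and keys it expects):
  mcp:
    environment:
      - GRAFANA_URL=http://grafana:3000
      - GRAFANA_API_KEY=<your-api-key>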
Restart MCP:
docker-compose up -d --force-recreate mcp
Visit http://localhost:8000/healthz → ok
Then configure your AI client to use:
SSE URL: http://localhost:8000/sse
Step 9 — Verify Everything
# LLM test
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"llama3:latest","messages":[{"role":"user","content":"Say hi!"}]}'
→ returns a JSON completion whose message content is along the lines of "Hi!"
# Prometheus test
curl http://localhost:9090/metrics | head
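If the compose file also publishes Loki on its default port 3100 (an assumption about the repo), you can check its readiness the same way:
# Loki test
curl http://localhost:3100/ready
→ returns ready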
Step 10 — Enjoy Your Offline AI Observability Assistant
You now have:
A full local monitoring stack
A private LLM Assistant powered by Ollama
AI-driven insights and explanations directly inside Grafana dashboards