KIPP UI v1, memory updates, daily notes 2026-02-10

Commit 9e7e3bf13c (parent c5b941b487), 2026-02-10 00:29:20 -06:00
12 changed files with 48292 additions and 9 deletions

memory/2026-02-10.md (new file, 62 lines)
# 2026-02-10 — Monday Night / Tuesday Morning
## KIPP Build Session
Major KIPP infrastructure session with D J.
### Completed
- **KIPP Gitea account** — user: kipp, repo: git.letsgetnashty.com/kipp/workspace (private)
- **KIPP switched to local LLM** — llamacpp/glm-4.7-flash as primary, Claude Sonnet as fallback
- **KIPP ChromaDB memory** — collection `kipp-memory` (ccf4f5b6-a64e-45b1-bf1b-7013e15c3363), seeded 9 docs
- **Ollama URL updated** — 192.168.86.40:11434 (same machine as llama.cpp, not the old 192.168.86.137)
- **KIPP model config tuned** — maxTokens 2048, contextWindow 32768, auto-recall 1 result
- **KIPP web UI v1** — thin client at https://kippui.host.letsgetnashty.com/
  - WebSocket JSON-RPC protocol v3 working (connect handshake, chat.send, streaming)
  - Weather widget, grocery list, timers, quick actions, mic button
  - Served from kipp-ui.service on port 8080
  - Gateway at kipp.host.letsgetnashty.com (port 18789)
  - CORS handled in Caddy config
- **Meg added to USER.md** — "the boss"
- **@RohOnChain tweet analyzed** — Kelly Criterion / gabagool22 ($788K PnL), mostly legit math but misleading framing
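The v3 connect handshake the UI sends on WebSocket open can be sketched like this. Field names (`client{id,version,platform,mode}`, `auth{token}`, `minProtocol:3`) come from tonight's notes; the method name, request framing, and the concrete client values are assumptions, not the canonical OpenClaw schema.

```python
import json

def build_connect_request(token: str, request_id: int = 1) -> str:
    """Sketch of the JSON-RPC v3 connect request (framing assumed)."""
    return json.dumps({
        "id": request_id,
        "method": "connect",           # assumed method name
        "params": {
            "minProtocol": 3,          # gateway speaks protocol v3
            "client": {
                "id": "kipp-ui",       # hypothetical client identity
                "version": "1.0.0",
                "platform": "web",
                "mode": "chat",
            },
            # token is required even with dangerouslyDisableDeviceAuth: true
            "auth": {"token": token},
        },
    })
```

The client then listens for `agent` events (`stream:assistant` deltas, `stream:lifecycle` start/end) and `chat` events (`state:delta`/`final`) on the same socket.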
### Key Findings
- GLM-4 Flash is a thinking model — burns ~200 tokens reasoning before responding
- 32K context split across 4 slots = 8K effective per conversation
- D J has 30GB free RAM on llama.cpp server — could bump to 128K context
- OpenClaw WebSocket protocol: JSON-RPC v3, needs `connect` with `client{id,version,platform,mode}`, `auth{token}`, `minProtocol:3`
- Event types: `agent` (stream:assistant for deltas, stream:lifecycle for start/end), `chat` (state:delta/final)
- Caddy reverse proxy handles both services: kippui→8080, kipp→18789
- `dangerouslyDisableDeviceAuth: true` still requires auth token in connect params
### KIPP Infrastructure
- **VM:** 192.168.86.100 (wdjones user, SSH key from Case)
- **llama.cpp:** 192.168.86.40:8080 (GLM-4 Flash 30B q4, 2 GPUs 12GB+10GB, 32GB RAM)
- **Ollama:** 192.168.86.40:11434 (nomic-embed-text for embeddings)
- **ChromaDB:** 192.168.86.25:8000 (kipp-memory collection)
- **Gitea:** kipp:K1pp-H0me-2026! @ git.letsgetnashty.com/kipp/workspace
- **Telegram:** @dzclaw_kipp_bot
- **Web UI:** https://kippui.host.letsgetnashty.com/ (port 8080)
- **Gateway:** https://kipp.host.letsgetnashty.com/ (port 18789)
- **Token:** kipp-local-token-2026
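Auto-recall ties the Ollama and ChromaDB boxes together: embed the user message with nomic-embed-text, then query `kipp-memory` for the single closest doc (auto-recall is 1 result). A payload-only sketch; the endpoint shapes are assumptions based on the stock Ollama and ChromaDB HTTP APIs, and nothing here hits the network.

```python
# Endpoints from the infrastructure list above
OLLAMA = "http://192.168.86.40:11434"   # POST {OLLAMA}/api/embeddings
CHROMA = "http://192.168.86.25:8000"    # kipp-memory collection

def embed_payload(text: str) -> dict:
    """Body for Ollama's embeddings endpoint (shape assumed)."""
    return {"model": "nomic-embed-text", "prompt": text}

def recall_payload(embedding: list, n_results: int = 1) -> dict:
    """Body for a ChromaDB collection query; auto-recall = 1 result."""
    return {"query_embeddings": [embedding], "n_results": n_results}
```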
### Next (Night Shift)
- **Redesign KIPP UI** — Alexa+ inspired dashboard-first layout
  - Widget grid: weather, calendar, grocery list, smart home, photos
  - Chat hidden by default — appears on "Hey KIPP" / button press
  - D J sent Alexa Echo Show screenshot as reference
- Research Alexa+ gen AI UI patterns
- General improvements across projects
### Caddy Config (D J's reverse proxy)
```
kippui.host.letsgetnashty.com {
	reverse_proxy 192.168.86.100:8080
}

kipp.host.letsgetnashty.com {
	header Access-Control-Allow-Origin "https://kippui.host.letsgetnashty.com"
	header Access-Control-Allow-Methods "GET, POST, OPTIONS"
	header Access-Control-Allow-Headers "Content-Type, Authorization"

	# Caddyfile blocks can't be one-liners; answer preflight with 204
	@options method OPTIONS
	handle @options {
		respond 204
	}

	reverse_proxy 192.168.86.100:18789
}
```