CORE DIRECTORY // SYSTEM.USER.DIANA_ISMAIL
Labs by Diana — Experiments that ship.
Side projects that got out of hand. AI tools built for problems I kept tripping over — now live, now yours.
Diana's Digital Twin
MODULE_002
TECHNICAL_OVERVIEW
Visitors ask questions about Diana's work, background, and professional decisions — the agent responds in her voice, drawing from 13 structured context files: 9 always-injected (profile, positioning, experience, projects, experiments, agentic workflow, personal, tools, links) and 4 keyword-triggered on-demand layers (early career, legacy tools, AI team, creative coding). Mtime caching hot-reloads any edited context file without a server restart. Conversation history persists in Redis with 30-day rolling TTL, and a summarisation engine compresses older messages to keep the context window lean over long conversations.
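The keyword-triggered layering described above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: the file names, trigger words, and function names are all assumptions.

```python
# Hypothetical sketch of tiered context injection: always-on files are
# concatenated on every request, on-demand layers only when a trigger
# word appears in the user's message. All names here are illustrative.
from pathlib import Path

ALWAYS_ON = ["profile.md", "positioning.md", "experience.md"]  # + 6 more in the real set

ON_DEMAND = {
    "early_career.md": {"early career", "first job", "graduate"},
    "legacy_tools.md": {"legacy", "old stack"},
}

def build_context(user_message: str, base_dir: Path) -> str:
    """Concatenate always-on files plus any layer whose trigger appears."""
    msg = user_message.lower()
    files = list(ALWAYS_ON)
    for fname, triggers in ON_DEMAND.items():
        if any(t in msg for t in triggers):
            files.append(fname)
    return "\n\n".join((base_dir / f).read_text() for f in files)
```

The payoff of this shape is that most requests pay only for the nine base files; the on-demand layers cost tokens only when the conversation actually touches those topics.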
The Twin serves three surfaces from one backend: a standalone landing page with a terminal-style hero and glassmorphic chat widget, an embeddable widget on Labs and the Portfolio, and a Telegram bot. Cross-platform auto-pairing lets users continue a web conversation in Telegram via deep links, or move in the other direction with an OTP code; both interfaces resolve to a single shared session via Redis aliasing. The web API supports both full JSON responses and SSE streaming. Input sanitisation strips prompt injection patterns before every LLM call. Built with Python 3.11, FastAPI, AsyncOpenAI (gpt-5.4-mini / gpt-5.4-nano), Redis, and deployed on Railway via Docker.
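For the SSE path, the wire format is standard Server-Sent Events framing. A minimal sketch of how token chunks might be framed, assuming a `delta` payload shape and a `[DONE]` sentinel that are illustrative rather than the project's actual protocol:

```python
# Hypothetical SSE framing for the streaming endpoint: each token chunk
# becomes one "data:" frame, terminated by a blank line per the SSE spec.
import json
from typing import Iterable, Iterator

def sse_frames(tokens: Iterable[str]) -> Iterator[str]:
    """Wrap each token chunk in a Server-Sent Events data frame."""
    for tok in tokens:
        yield f"data: {json.dumps({'delta': tok})}\n\n"
    yield "data: [DONE]\n\n"  # sentinel so the client knows to close
```

In FastAPI, a generator like this would typically be handed to `StreamingResponse(..., media_type="text/event-stream")`, while the non-streaming path returns the full completion as one JSON body.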
PROJECT_LEARNINGS_LOG
KEY_LEARNING_01
Tiered context injection — 9 always-on context files plus 4 keyword-triggered on-demand blocks — cut token usage by ~57% compared to injecting the full knowledge base on every request, while mtime caching keeps every context file hot-reloadable without a server restart.
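The mtime hot-reload check is cheap to sketch: stat the file on each access and re-read only when the modification time differs from the cached one. The cache shape and function name below are assumptions for illustration.

```python
# Hedged sketch of mtime-based hot-reload caching: re-read a context
# file only when its modification time changes, so edits take effect
# without a server restart. Names are illustrative.
from pathlib import Path

_cache: dict[Path, tuple[float, str]] = {}

def load_context(path: Path) -> str:
    """Return cached file contents, re-reading only if mtime changed."""
    mtime = path.stat().st_mtime
    cached = _cache.get(path)
    if cached and cached[0] == mtime:
        return cached[1]  # unchanged on disk: serve from cache
    text = path.read_text()
    _cache[path] = (mtime, text)
    return text
```

One `stat()` per request per file is negligible next to an LLM call, which is what makes this a reasonable alternative to filesystem watchers or restarts.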
KEY_LEARNING_02
Cross-platform session pairing (web to Telegram and back) works through Redis aliasing — an OTP code or deep link creates a permanent alias:{telegramChatId} pointer to the shared session hash, so both interfaces read and write the same conversation history.
KEY_LEARNING_03
Input sanitisation strips control characters, role-switching keywords (system:, assistant:), and XML tag injections before every LLM call — applied at the boundary, not in-flight, paired with length limits (4000 chars web, 1000 Telegram).
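A boundary sanitiser of that shape might look like the sketch below; the exact regexes and the function signature are assumptions, not the project's implementation.

```python
# Hedged sketch of boundary sanitisation: strip control characters,
# role-switching prefixes, and XML-style tags, then enforce the
# per-surface length limit. Patterns are illustrative.
import re

CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")
ROLE_SWITCH = re.compile(r"^\s*(system|assistant)\s*:", re.IGNORECASE | re.MULTILINE)
XML_TAGS = re.compile(r"</?[a-zA-Z][^>]*>")

def sanitise(text: str, *, max_len: int = 4000) -> str:
    """Strip injection patterns at the boundary, then truncate."""
    text = CONTROL_CHARS.sub("", text)
    text = ROLE_SWITCH.sub("", text)
    text = XML_TAGS.sub("", text)
    return text[:max_len]
```

The web surface would call this with `max_len=4000` and the Telegram handler with `max_len=1000`, matching the limits above; running it once at the boundary (rather than re-scanning mid-pipeline) keeps every downstream LLM call behind the same filter.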