CORE DIRECTORY // SYSTEM.USER.DIANA_ISMAIL
Labs by Diana — Experiments that ship.
Side projects that got out of hand. AI tools built for problems I kept tripping over — now live, now yours.
AI Chat Scheduler
MODULE_004
TECHNICAL_OVERVIEW
Attendees type questions into the chat — 'show me workshops on Thursday afternoon' or 'build me a schedule that avoids the keynote clashes' — and the assistant returns recommendations and generates a downloadable itinerary.
The scheduling logic is driven by structured markers embedded in the AI's streamed output: when the model decides a schedule is ready, it emits a [GENERATE_SCHEDULE] token, which the client intercepts to trigger the PDF-generation flow — keeping UI state changes decoupled from prompt wording. There is no form-based configuration for organisers; the event data is loaded as structured context at session start. Built with Vercel AI SDK, OpenAI GPT, Next.js 16, and TypeScript 5.
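The marker-interception idea can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the marker string comes from the description above, but the function names, buffering strategy, and callback shape are all assumptions.

```typescript
// Sketch: intercept a control marker in a streamed LLM response.
// The marker is stripped from the text shown to the user, and a
// client-side callback fires (e.g. the PDF-generation flow).
const MARKER = "[GENERATE_SCHEDULE]";

interface InterceptResult {
  displayText: string; // chat text with the marker removed
  markerSeen: boolean; // whether the callback was triggered
}

async function interceptMarker(
  chunks: AsyncIterable<string>,
  onMarker: () => void,
): Promise<InterceptResult> {
  let buffer = "";
  let display = "";
  let markerSeen = false;

  for await (const chunk of chunks) {
    buffer += chunk;
    const idx = buffer.indexOf(MARKER);
    if (idx !== -1 && !markerSeen) {
      markerSeen = true;
      onMarker(); // trigger client-side behaviour, decoupled from the prompt
      buffer = buffer.slice(0, idx) + buffer.slice(idx + MARKER.length);
    }
    // Hold back a tail shorter than the marker, in case the marker
    // straddles the next chunk boundary; flush the rest for display.
    const safe = markerSeen
      ? buffer.length
      : Math.max(0, buffer.length - MARKER.length + 1);
    display += buffer.slice(0, safe);
    buffer = buffer.slice(safe);
  }
  display += buffer; // flush whatever remains at end of stream
  return { displayText: display, markerSeen };
}
```

Holding back a marker-length tail is what lets the client detect a marker even when the streaming layer splits it across two chunks.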
PROJECT_LEARNINGS_LOG
KEY_LEARNING_01
AI-emitted markers ([GENERATE_SCHEDULE], schedule_download JSON blocks) trigger client-side behaviours from within streamed responses — bridging LLM output and UI state without post-processing the full response.
KEY_LEARNING_02
Markers must appear at predictable positions in the stream; if the model embeds them mid-sentence or omits them under ambiguous prompts, the UI silently skips the action.
KEY_LEARNING_03
Schedule conflict detection with a 5-minute gap buffer was silently removed to maximise session density — a tradeoff that means a 'dense schedule' now packs sessions back-to-back with zero transition time, and leaves the two generation paths (client-side vs. API) enforcing different constraints.
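For reference, the removed check is small. A minimal sketch of a gap-buffered conflict test, using the 5-minute figure from the learning above; the session shape and field names are assumptions, not the project's actual data model.

```typescript
// Sketch: two sessions conflict if they overlap, or if the transition
// gap between them is shorter than the buffer (5 minutes, per the
// constraint that was removed).
interface Session {
  title: string;
  start: number; // minutes since midnight
  end: number;   // minutes since midnight
}

const GAP_MINUTES = 5;

function conflicts(a: Session, b: Session, gap = GAP_MINUTES): boolean {
  const [first, second] = a.start <= b.start ? [a, b] : [b, a];
  return second.start < first.end + gap;
}
```

Keeping this check behind a flag (rather than deleting it) would let both generation paths share one constraint and make 'dense schedule' an explicit mode instead of an accident of removal.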