# Why Markdoc for LLM Streaming UI

Source: DEV Community
Every AI chatbot I've built hits the same wall. The LLM writes beautiful markdown — headings, bold, lists, code blocks. Then someone asks for a chart. Or a form. Or a data table with sortable columns. Suddenly you need a component rendering layer. And every approach has tradeoffs.

That's why I built mdocUI: a streaming-first generative UI library that lets LLMs mix markdown and interactive components in one output stream.

## The Problem

### JSON blocks in markdown

Some teams embed JSON in fenced code blocks:

Here's your revenue data:

```json:chart
{"type": "bar", "labels": ["Q1", "Q2", "Q3"], "values": [120, 150, 180]}
```

This works until you're streaming. A JSON object that arrives token-by-token is invalid JSON until the closing brace lands. You either buffer the entire block (killing the streaming experience) or parse incomplete JSON (fragile).

### JSX in markdown

Here's your data:

```jsx
<Chart type="bar" labels={["Q1", "Q2", "Q3"]} values={[120, 150, 180]} />
```

Models get confused. They mix HT