I turned Markdown into a protocol for generative UI
121 points - last Thursday at 1:42 PM
There's a lot of work happening around both generative UI and code execution for AI agents. I kept wondering: how do you bring them together into a fully featured architecture? I built a prototype:
- Markdown as protocol — one stream carrying text, executable code, and data
- Streaming execution — code fences execute statement by statement as they stream in
- A mount() primitive — the agent creates React UIs with full data flow between client, server, and LLM
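As a rough sketch of the streaming-execution bullet (all names here are illustrative, not the project's actual API), a runner can execute each statement the moment it completes, before the rest of the fence has even been generated:

```typescript
// Illustrative sketch of "streaming execution": code arrives in chunks,
// and complete statements run as soon as they parse. A real implementation
// would track brackets and strings; this minimal version splits on newlines.

type Executor = (statement: string) => void;

class StreamingRunner {
  private buffer = "";

  constructor(private execute: Executor) {}

  // Feed a chunk from the model's stream.
  push(chunk: string): void {
    this.buffer += chunk;
    let newline: number;
    while ((newline = this.buffer.indexOf("\n")) !== -1) {
      const statement = this.buffer.slice(0, newline).trim();
      this.buffer = this.buffer.slice(newline + 1);
      if (statement.length > 0) this.execute(statement);
    }
  }

  // Flush any trailing statement when the stream ends.
  end(): void {
    const tail = this.buffer.trim();
    this.buffer = "";
    if (tail.length > 0) this.execute(tail);
  }
}
```

Because each statement runs before the fence is finished, UI can appear mid-stream instead of after the whole response lands.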
If you're still looking for a name, let me suggest "hyper text".
It embodies the whole idea of having data, code, and presentation in the same place.
If you're open to contributions, I already have an idea for a cascading styles system in mind.
pbkhrv - last Thursday at 7:50 PM
Very cool. I'm imagining using this with Claude Code, allowing it to wire this up to MCP or to CLI commands somehow and using that whole system as an interactive dashboard for administering a kubernetes cluster or something like that - and the hypothetical first feature request is to be able to "freeze" one of these UI snippets and save it as some sort of a "view" that I can access later. Use case: it happens to build a particularly convenient way to do a bunch of calls to kubectl, parse results and present them in some interactive way - and I'd like to reuse that same widget later without explaining/iterating on it again.
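The "freeze" idea could amount to persisting the generated fence under a name and replaying it later instead of re-prompting the model; a hypothetical sketch (none of these names come from the project):

```typescript
// Hypothetical sketch of "freezing" a generated UI snippet as a named view
// that can be reloaded in later sessions without iterating with the model.

interface SavedView {
  name: string;
  source: string;   // the code fence the agent generated
  savedAt: string;  // ISO timestamp
}

class ViewStore {
  private views = new Map<string, SavedView>();

  freeze(name: string, source: string): SavedView {
    const view = { name, source, savedAt: new Date().toISOString() };
    this.views.set(name, view);
    return view;
  }

  // Later sessions replay the saved source instead of re-prompting the model.
  load(name: string): SavedView | undefined {
    return this.views.get(name);
  }
}
```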
joelres - last Thursday at 7:41 PM
I quite like this! I've been incrementally building similar tooling for a project I've been working on, and I really appreciate the ideas here.
I think the key decision for someone implementing a flexible UI system like this is the required level of expressiveness. To me, the chief problem with having agents build custom HTML pages (as another comment suggested) is that it's far too unconstrained. I've been working with a system of pre-registered blocks and callbacks that are very constrained. I quite like this as a middle ground, though it may still be too dynamic for my use case. Will explore a bit more!
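The pre-registered-blocks approach described here can be sketched roughly like this, with illustrative names (not a real library): the model may only reference blocks registered ahead of time, so its output is constrained to a known vocabulary.

```typescript
// Sketch of the "pre-registered blocks" middle ground: the agent emits
// {block, props} references, and anything outside the registry is rejected.

type BlockRenderer = (props: Record<string, unknown>) => string;

const registry = new Map<string, BlockRenderer>();

function registerBlock(name: string, render: BlockRenderer): void {
  registry.set(name, render);
}

function renderBlock(name: string, props: Record<string, unknown>): string {
  const render = registry.get(name);
  if (!render) throw new Error(`unknown block: ${name}`);
  return render(props);
}
```

The constraint is the point: the model can compose from the vocabulary but cannot smuggle in arbitrary markup.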
realrocker - last Thursday at 8:04 PM
The streamed execution idea is novel to me, though I'm not sure what its significance is?
I have been working on something with a similar goal:
I'll say I came upon this same design pattern: turning all my chats into semantic Markdown that stays backward compatible with plain Markdown. I did:
````assistant
<Short Summary title>
gemini/3.1-pro - 20260319T050611Z
Response from the assistant
````
with a similar block for tool calling
This can be parsed semantically as part of the conversation, but it also renders as a regular Markdown code block when needed.
It helps me keep AI chats on the filesystem as valid documents, while adding some extra semantic meaning atop Markdown.
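A fence in this shape can be recovered into structured data with a small parser. This sketch assumes the field layout shown in the example above (title line, then "model - timestamp", then body); the interface is my own naming:

```typescript
// Sketch of parsing the semantic chat fence (````role / title /
// model - timestamp / body / ````) into structured data.

interface ChatTurn {
  role: string;
  title: string;
  model: string;
  timestamp: string;
  body: string;
}

function parseTurn(fence: string): ChatTurn {
  const lines = fence.split("\n");
  // Line 0 is the opening fence with the role as its info string;
  // the last line is the closing fence.
  const role = lines[0].replace(/^`+/, "").trim();
  const title = lines[1].trim();
  const [model, timestamp] = lines[2].split(" - ").map(s => s.trim());
  const body = lines.slice(3, -1).join("\n").trim();
  return { role, title, model, timestamp, body };
}
```

Because the fence is still valid Markdown, viewers that don't know the convention simply show a code block.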
theturtletalks - last Thursday at 6:11 PM
OpenUI and JSON-render are some other players in this space.
I'm building an agentic commerce chat that uses MCP-UI and want to start using these new implementations instead of MCP-UI, but I can't wrap my head around how button onClick handlers and actions work. MCP-UI allows onClick events to work since you're "hard coding" the UI from the get-go, versus relying on the AI generating nondeterministic JSON and turning it into UI that might be different on every use.
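One common answer (a hypothetical sketch, not MCP-UI's or this project's actual mechanism) is to have generated click handlers serialize a structured action back to the host instead of running arbitrary code, so the action contract stays deterministic even when the generated layout varies:

```typescript
// Hypothetical sketch: a generated button's onClick doesn't run arbitrary
// client code; it posts a structured action into the agent/host loop.

interface UIAction {
  widgetId: string;
  action: string;
  payload: Record<string, unknown>;
}

type ActionSink = (a: UIAction) => void;

// The generated UI only gets to choose which pre-agreed action to fire.
function makeClickHandler(widgetId: string, action: string, sink: ActionSink) {
  return (payload: Record<string, unknown> = {}) =>
    sink({ widgetId, action, payload });
}
```

The host (or agent) receives the `UIAction` and decides what to do with it, so nondeterministic generation only affects presentation, not behavior.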
Surac - yesterday at 3:23 PM
That's a fascinating take on the UI problem. I find myself coding less and less because there is no really easy way to build simple UIs nowadays. Languages like Go and Rust gloss over the UI question and offer no easy path. Web frameworks take the role of an emergency UI. Most of the time I still use Windows Forms for fast, easy, stateful UI forms.
mncharity - yesterday at 12:10 AM
Here's[1] the rest of the prompt which begins the video.
In an agentic loop, the model can keep calling multiple tools for each specialized artifact (like how the Claude web app renders HTML/SVG artifacts within a single turn). Models are already trained for this (I tested this approach with qwen 3.5 27B and it was able to follow Claude's lead from the previous turns).
I see potential to take over Notion's / Obsidian's business here. Imagine highly customizable notebooks people can generate on the fly, with exactly the kind of UI they need, compared to Notion's fixed blocks.
mncharity - yesterday at 12:25 AM
Brainstorming, perhaps `<<named-block-code-transclusion>>`? It goes against the grain of "eval() line-by-line", even if it's handled ASAP. But it might relax the order constraint on codegen. Especially if the UI gets complex, or rendered on a "pane off to the side".
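The transclusion idea might look like a simple pre-eval expansion pass; the `<<name>>` syntax here is just this comment's brainstorm, not anything the project defines:

```typescript
// Sketch of `<<named-block>>` transclusion: before (or instead of)
// line-by-line eval, references are expanded from previously defined
// named blocks, relaxing the order constraint on codegen.

function expandTransclusions(
  source: string,
  blocks: Record<string, string>,
): string {
  return source.replace(/<<([\w-]+)>>/g, (match, name: string) => {
    const body = blocks[name];
    if (body === undefined) return match; // leave unknown refs untouched
    return body;
  });
}
```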
iusethemouse - last Thursday at 6:00 PM
There's definitely a lot of merit to this idea, and the GIFs in the article look impressive. My strong opinion is that there's a lot more to (good) UIs than what an LLM will ever be able to bring (happy to be proven wrong in a few years…), but for utilitarian and on-the-fly UIs there's a lot of promise.
itmitica - yesterday at 9:39 AM
Interesting food for thought for the HITL.
sanjosanjo - yesterday at 9:01 PM
As someone who has always been primarily a React dev, I find this a really cool idea.
4ndrewl - last Thursday at 7:47 PM
The bots that read the instruction and yet add the emoji to the _beginning_ of the PR title though. Even bigger red flag I guess?
nthypes - last Thursday at 9:09 PM
Why not MDX?
dominotw - last Thursday at 7:51 PM
Would be nice if it wasn't just UI but other forms too, like voice narration, sounds, etc.