LLM-MD®

Build AI workflows with natural language

Like markdown for LLMs

Status: Alpha Release. The API may change significantly. Check out the llm-md git repo.
llm-md is an open-source tool that lets you design and run AI workflows using natural language and minimal syntax. It works like markdown, but instead of just formatting text, you build structured conversations with large language models.

Write AI workflows the same way you build everything else

Many AI tools feel like black boxes. You enter a prompt and hope the output makes sense. When it doesn't, there's no clear way to see what happened or how to fix it.

llm-md changes that.

It applies the idea of metacognition (thinking about thinking) to AI conversations, making the logic behind each step visible. Instead of reacting to results, you can reason through the process.

AI interactions become plain-text files you can version, share, and trace, complete with context, agent flow, and conversation history. Define roles, pass messages between agents, run shell commands, splice in code, or chain multiple models together.

llm-md works with any LLM provider. It's free and fits into development workflows with minimal setup and no vendor lock-in.

Features

Write workflows in plain text

llm-md uses .md files to define conversations. Context, user input, model responses, and agent flow all stay in one version-controllable file.

Low-friction syntax

The format is minimal and familiar. Markdown headers define agent turns. Arrows control message flow. Variables and commands are embedded with double braces.
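For illustration, a single turn might look like the sketch below. The header name and arrow form here are a guess based on the description above, not the canonical grammar; see the technical documentation for the exact format.

    # user ->
    Explain what this regular expression matches:

    {{pattern}}

Because the whole conversation is plain text, the model's reply lands in the same file, ready to diff or commit like any other source file.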

Multi-agent support

Define multiple agents and control how messages move between them. This is useful for splitting tasks or creating role clarity in complex interactions.
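As a sketch of how a handoff might read (again, the exact syntax is illustrative), the arrow names the recipient of each message:

    # researcher -> reviewer
    Draft a short answer to the question in {{question}}.

    # reviewer -> user
    Check the researcher's draft for unsupported claims and return a corrected version.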

Shell command integration

Run shell commands and include their output in your prompts. Reference real-time system data without manually copying it in.
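Assuming shell commands use the same double-brace splicing described above (an assumption, not confirmed syntax), a prompt could pull in live system data like this:

    # user ->
    Here is the current disk usage. Which directories are safe to clean up?

    {{df -h}}

The command runs at execution time, so the prompt always reflects the current state of the machine.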

Reusable context blocks

Define context inline or load it from files. Set system messages, model parameters, and shared variables once and reuse them across workflows.
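A context block might look like the following sketch; the block name and keys are illustrative stand-ins for whatever the real format defines:

    # context
    system: You are a concise technical reviewer.
    temperature: 0.2
    project_name: llm-md

Defined once, a value like project_name can then be referenced with {{project_name}} anywhere in the workflow.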

Input splicing

llm-md supports input splicing to inject external content at runtime. Pass input from a file or pipe it in through stdin, which helps with automation.
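The invocations below are hypothetical (check llm-md --help for the real interface), but they show the intended shape of file-based and stdin-based splicing:

    llm-md review.md < src/main.py
    tail -n 100 error.log | llm-md triage.md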

Streaming output

For supported providers, responses stream back as they are generated. This is useful for longer outputs or workflows where early feedback matters.

Provider-agnostic configuration

Set the provider and model in a config block or on the command line. llm-md supports OpenAI, Anthropic, Mistral, Google, and others through the same interface.
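A provider setting in a config block might look like this sketch, with illustrative key names and a placeholder model:

    # config
    provider: anthropic
    model: <model-name>

Switching providers should then be a one-line change, since the rest of the workflow stays the same.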

CLI-based workflow

Manage everything through the command line: run files, validate them, create new ones, or parse output. llm-md works in scripts or editor integrations.
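The subcommand names below are placeholders for the four operations just listed, not the documented CLI (consult llm-md --help for the real commands):

    llm-md new draft.md      # create a workflow file
    llm-md check draft.md    # validate it
    llm-md run draft.md      # execute it
    llm-md parse draft.md    # extract structured output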

Patch-based editing

Generate file patches from model suggestions. Apply them manually or integrate them into your workflow.
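Assuming the generated patch is a standard unified diff written to a file (the filename here is illustrative), it can be reviewed and applied with ordinary tools:

    git apply --check changes.patch   # dry run: verify the patch applies cleanly
    git apply changes.patch           # apply it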

Editor-agnostic

Use llm-md in any text editor. It fits cleanly into existing developer environments. No extensions or plugins required.

Additional Resources

For developers and advanced users who want to explore the API or extend llm-md, please refer to our technical documentation.

Try it out

Install with one command. Get started now:

curl -fsSL https://llm.md/install.sh | bash