Like markdown for LLMs
Write AI workflows the same way you build everything else
Many AI tools feel like black boxes. You enter a prompt and hope the output makes sense. When it doesn't, there's no clear way to see what happened or how to fix it.
llm-md changes that.
It applies the idea of metacognition (thinking about thinking) to AI conversations, making the logic behind each step visible. Instead of reacting to results, you can reason through the process.
AI interactions become plain-text files you can version, share, and trace, complete with context, agent flow, and conversation history. Define roles, pass messages between agents, run shell commands, splice in code, or chain multiple models together.
llm-md works with any LLM provider. It's free and fits into development workflows with minimal setup and no vendor lock-in.
Features
Write workflows in plain text
llm-md uses .md files to define conversations. Context, user input, model responses, and agent flow all stay in one version-controllable file.
Low-friction syntax
The format is minimal and familiar. Markdown headers define agent turns. Arrows control message flow. Variables and commands are embedded with double braces. Use @tool:operation syntax to access tools, and pipe results between operations with |> and ||>.
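To make that concrete, here is a minimal sketch of a single turn. It uses only the elements named above: a markdown header naming the agents, an arrow sending the message from one to the other, and a double-brace splice for a runtime variable. The exact header form is illustrative, not canonical; see the documentation for the precise syntax.

    # user >>> assistant
    Give me a two-sentence summary of {{topic}}.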
Multi-agent support
Define multiple agents and control how messages move between them. Chain agents sequentially with >>> or use advanced operators: fan-out messages to multiple agents with >>=, collect results from multiple agents with =>>, or create agent loops with !>>.
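As an illustration, the sketch below chains two headers so the first agent's response feeds the second. Swapping >>> for >>= in a header would fan the message out to several agents, and =>> would collect their replies. The agent names and exact header form here are assumptions, not canonical syntax.

    # researcher >>> writer
    List three open questions about the new caching layer.

    # writer >>> editor
    Turn the list above into a short status update.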
Shell command integration
Run shell commands and include their output in your prompts. Reference real-time system data without manually copying it in. Use persistent shell sessions to maintain state across commands.
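Combined with the double-brace splices described under Low-friction syntax, this lets a prompt embed live command output. A sketch, again with an illustrative header form:

    # user >>> assistant
    Here is the current disk usage:

    {{df -h}}

    Which filesystem is closest to full?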
Reusable context blocks
Define context inline or load it from files. Set system messages, model parameters, and shared variables once and reuse them across workflows. Use scoped variables (global, session, agent, local) for precise state management.
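Here is a sketch of an inline context block setting a system message and a model parameter once. Both the block delimiters and the key names are assumptions patterned on the double-brace convention above; check the documentation for the actual form.

    --{{
      system: You are a terse code reviewer.
      temperature: 0.2
    }}--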
Input splicing
llm-md supports input splicing to inject external content at runtime. Pass input from a file or pipe it in through stdin, which makes workflows easy to automate.
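Assuming the executable is named llm-md (as in the install step below) and that it accepts spliced input on stdin, scripted usage might look like the line below; treat the exact invocation as an assumption and check the CLI help for the documented form.

    # hypothetical: pipe a file in as the workflow's runtime input
    cat report.txt | llm-md summarize.md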
Streaming output
For supported providers, responses can stream back as they are generated. This is useful for longer outputs or workflows where early feedback matters.
Provider-agnostic configuration
Set the provider and model in a config block or on the command line. llm-md supports OpenAI, Anthropic, Mistral, Google, and others through the same interface. Define custom providers or use URN-based model addressing for flexible configurations.
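For example, the same workflow could be pointed at two providers from the command line. The flag name and model identifiers below are assumptions for illustration; the documented option may differ.

    # hypothetical flag and model names
    llm-md --model openai/gpt-4o workflow.md
    llm-md --model anthropic/claude-sonnet workflow.md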
CLI-based workflow
Manage everything through the command line: running files, validating them, creating new ones, or parsing output. llm-md works in scripts or editor integrations.
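A sketch of that loop, with placeholder subcommand names standing in for the operations listed above; check llm-md --help for the real ones.

    llm-md new draft.md        # create a workflow file
    llm-md validate draft.md   # check its syntax
    llm-md draft.md            # run it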
Patch-based editing
Generate file patches from model suggestions. Apply them manually or integrate them into your workflow.
Advanced tools ecosystem
Access built-in tools for knowledge management, planning, web fetching, file operations, and more. Create custom tools or integrate with external services through the Model Context Protocol (MCP).
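Tying this back to the @tool:operation and |> syntax under Low-friction syntax, a piped tool call might look like the one-line sketch below. The tool names, operation names, and argument form are invented for illustration.

    @web:fetch https://example.com/changelog |> @file:write notes.txt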
Knowledge and reasoning systems
Store and manage agent beliefs with confidence levels. Use the AI planner for goal-based problem solving or apply non-monotonic reasoning for handling exceptions and defaults.
Batch processing
Run workflows in batch mode across supported providers. Monitor progress and retrieve results for large-scale AI processing tasks.
Editor-agnostic
Use llm-md in any text editor. It fits cleanly into existing developer environments. No extensions or plugins required.
Getting started
llm-md works with simple markdown files and familiar syntax, and its tool ecosystem and agent operators give you the flexibility to build workflows that match your needs.
Additional resources
For developers and advanced users who want to explore the API or extend llm-md, please refer to our technical documentation.
Try it out
Install with one command. Get started now:
curl -fsSL https://llm.md/install.sh | bash