llm-md: Large Language Model Markdown Documentation

Hugo O'Connor

This is an alpha release. The API may change significantly.

Modern Command Usage: llm-md <command> [ <option> ... ] [<file>]

Commands:

parse
  Parse an LLM-MD file and print its AST

evaluate
  Process an LLM-MD file (default if command omitted)

run
  Alias for ’evaluate’

create
  Wizard to generate a new llm-md file

validate
  Check syntax without executing the file

update
  Update to the latest version

version
  Display the current version

help
  Show this help message

Legacy Usage: llm-md [ <option> ... ] [<file>]

Options:

-i <file-path>, --input <file-path>
  Input llm-md file to process

-p, --parse
  Only parse and print AST (don’t evaluate)

-o <file-path>, --output <file-path>
  Save output to specified file

--no-append
  Don’t append response to input file (responses are appended by default)

-d, --debug
  Show detailed error information including stack traces

--provider <provider-name>
  LLM provider to use (anthropic, openai)

--model <model-name>
  Model to use (defaults to provider’s default model)

-v, --version
  Display the current version

-u, --update
  Update to the latest version

--check-update
  Check if a new version is available

--splice <file-or-text>
  Content to splice into input tags in the LLM-MD file

--help, -h
  Show this help

Examples:

llm-md parse file.md
  Parse an LLM-MD file and print its AST

llm-md evaluate file.md
  Process an LLM-MD file with default behavior

llm-md run file.md -o output.json
  Process a file and save to JSON

llm-md create myfile.md
  Create a new file with interactive wizard

llm-md create myfile.md --template coding
  Create a new file with the coding template

llm-md validate file.md
  Check syntax without executing the file

llm-md evaluate file.md --splice content.txt
  Splice content from file into input tags

echo "hello world" | llm-md evaluate file.md
  Pipe content from stdin into input tags

llm-md version
  Display version information

Multiple single-letter switches can be combined after one -. For example, -h- is the same as -h --.

Build Instructions:

To build the executable from source:

raco pkg update -a && raco pkg install --auto --no-docs
raco exe -o llm-md main.rkt

Git repo:
https://codeberg.org/anuna/llm-md

    1 Overview

      1.1 Separation of Concerns

    2 Version History

    3 Parser Module

      3.1 Abstract Syntax Tree (AST)

        3.1.1 Core Structures

        3.1.2 Agent Elements

        3.1.3 Operations and Chains

        3.1.4 Content Structures

        3.1.5 Commands and Operations

        3.1.6 Labels and References

        3.1.7 Control Flow Structures

        3.1.8 Helper Functions

      3.2 Lexer

        3.2.1 Overview

        3.2.2 Exported Functions and Tokens

        3.2.3 Token Types

          3.2.3.1 Syntax Tokens

          3.2.3.2 Value Tokens

        3.2.4 Implementation Details

      3.3 Parser

        3.3.1 Overview

          3.3.1.1 Primary Functions

          3.3.1.2 Exceptions

          3.3.1.3 Parser Implementation Details

          3.3.1.4 Helper Functions

    4 Interpreter Module

      4.1 Interpreter

        4.1.1 Overview

        4.1.2 Main Interface

        4.1.3 Initialization and Cleanup

        4.1.4 Helper Functions

      4.2 Interpreter Types

        4.2.1 Overview

        4.2.2 Environment

        4.2.3 Agent State

        4.2.4 Execution Context

        4.2.5 Type Predicates

      4.3 Agents

        4.3.1 Overview

        4.3.2 Agent State Management

        4.3.3 Agent Interpretation

        4.3.4 Agent Chain Processing

        4.3.5 Types

      4.4 Commands

        4.4.1 Overview

        4.4.2 Command Interpretation

        4.4.3 Helper Functions

      4.5 Context

        4.5.1 Overview

        4.5.2 Core Functions

        4.5.3 Structure Types

        4.5.4 Usage Examples

        4.5.5 Contracts

        4.5.6 Error Handling

      4.6 Environment

        4.6.1 Overview

        4.6.2 Data Structures

        4.6.3 Environment Creation

        4.6.4 Variable Operations

        4.6.5 Environment Manipulation

      4.7 LLM-MD Evaluation

        4.7.1 Overview

        4.7.2 Data Structures

        4.7.3 Main Functions

        4.7.4 Constants

        4.7.5 Usage

        4.7.6 Error Handling

      4.8 Interpreter Utilities

        4.8.1 Overview

        4.8.2 Error Handling Functions

        4.8.3 Logging Functions

        4.8.4 Profiling Utilities

        4.8.5 AST Visualization

        4.8.6 Cache Management

        4.8.7 Memory Management

        4.8.8 Environment Utilities

    5 LLMs Module

      5.1 LLM Types and Structures

        5.1.1 Overview

        5.1.2 Constants

        5.1.3 Core Structures

        5.1.4 Input Validation Functions

      5.2 LLM API

        5.2.1 Overview

        5.2.2 Core Structures

        5.2.3 Creating Conversations

        5.2.4 Sending Prompts

        5.2.5 Error Handling

        5.2.6 Working with Streaming Responses

        5.2.7 Complete Example

      5.3 LLM Models Configuration

        5.3.1 Overview

        5.3.2 Configuration Tables

        5.3.3 Helper Functions

        5.3.4 Supported Models

      5.4 LLM Providers

        5.4.1 Overview

        5.4.2 Core Provider Functions

        5.4.3 Provider Configurations

        5.4.4 Enhanced Anthropic Support

        5.4.5 Provider Data Structures

        5.4.6 Provider Features

        5.4.7 Provider Rate Limits

        5.4.8 Example Usage

      5.5 Custom LLM Provider Configuration

        5.5.1 Overview

        5.5.2 API Reference

          5.5.2.1 Provider and Model Reference Functions

          5.5.2.2 Configuration Extraction Functions

          5.5.2.3 Helper Functions

        5.5.3 Request Formatting and Response Parsing

        5.5.4 Default Functions

        5.5.5 Provider Templates

        5.5.6 Rate Limits

      5.6 Enhanced LLM Provider Configuration

        5.6.1 Overview

        5.6.2 API Reference

        5.6.3 Internal Behavior

        5.6.4 Error Handling

        5.6.5 Logging

      5.7 LLM Utilities

        5.7.1 Overview

        5.7.2 URN Handling

        5.7.3 Model Capabilities

        5.7.4 Pricing and Cost Estimation

        5.7.5 Contracts

        5.7.6 Dependencies

      5.8 llm-md file creation wizard

        5.8.1 Overview

        5.8.2 Templates

        5.8.3 Functions

        5.8.4 Command-line Usage

    6 LLM-MD Grammar Specification

      6.1 Overview

      6.2 Grammar Rules

        6.2.1 Terminal Definitions

        6.2.2 Top-Level Structure

        6.2.3 Context Messages

        6.2.4 User and Agent Messages

        6.2.5 Message Content

        6.2.6 Escaped Content

        6.2.7 Links and Images

        6.2.8 Commands

        6.2.9 Variable Operations and Assignments

        6.2.10 Shell Commands

        6.2.11 Control Statements

        6.2.12 Comments

    7 License

    Citation

1 Overview

LLM-MD is a domain-specific language for defining and managing conversations with Large Language Models (LLMs). It provides a structured framework for creating, orchestrating, and analyzing complex LLM interactions through a markdown-inspired syntax.

LLM-MD combines the readability of markdown with the power of a programming language, enabling both simple conversational prototyping and sophisticated multi-agent systems development in a single, expressive format.

For user-friendly documentation, tutorials, and practical examples, please visit the llm-md user guide.

1.1 Separation of Concerns

LLM-MD provides a clean separation of concerns that makes it flexible and powerful.

This separation makes LLM-MD both accessible to beginners and powerful for experts, allowing conversations to be created, edited, and shared using standard tools while maintaining the full expressive power of a purpose-built conversational programming language.

2 Version History

3 Parser Module

3.1 Abstract Syntax Tree (AST)

3.1.1 Core Structures

struct

(struct llm-md-file (messages)
    #:extra-constructor-name make-llm-md-file)
  messages : (listof message?)
Represents an LLM-MD file containing multiple messages.

struct

(struct message ()
    #:extra-constructor-name make-message)
Base structure for all message types.

struct

(struct agent-message message (agent-chain content)
    #:extra-constructor-name make-agent-message)
  agent-chain : agent-chain?
  content : any/c
Represents a message from an agent.

struct

(struct context-message message (toml-data)
    #:extra-constructor-name make-context-message)
  toml-data : hash?
Represents a context message containing TOML data.

struct

(struct agent-chain (elements)
    #:extra-constructor-name make-agent-chain)
  elements : (listof agent-element?)
Represents a chain of agents.

3.1.2 Agent Elements

struct

(struct agent-element (agent operation label)
    #:extra-constructor-name make-agent-element)
  agent : agent-type?
  operation : (or/c operation? #f)
  label : (or/c agent-label? #f)
Represents an element in an agent chain.

struct

(struct agent (identifier)
    #:extra-constructor-name make-agent)
  identifier : agent-id?
Represents an agent in the system.

struct

(struct terminator ()
    #:extra-constructor-name make-terminator)
Represents a terminator in the agent chain.

struct

(struct agent-id (name)
    #:extra-constructor-name make-agent-id)
  name : string?
Represents an agent identifier.

3.1.3 Operations and Chains

struct

(struct operation (type)
    #:extra-constructor-name make-operation)
  type : symbol?
Represents an operation between agents.

struct

(struct fan-out-chain (source targets)
    #:extra-constructor-name make-fan-out-chain)
  source : agent-element?
  targets : (listof agent-element?)
Represents a fan-out operation chain.

struct

(struct fan-in-chain (sources target)
    #:extra-constructor-name make-fan-in-chain)
  sources : (listof agent-element?)
  target : agent-element?
Represents a fan-in operation chain.

struct

(struct sequential-chain (agents)
    #:extra-constructor-name make-sequential-chain)
  agents : (listof agent-element?)
Represents a sequential chain of agents.

3.1.4 Content Structures

struct

(struct text-content (text)
    #:extra-constructor-name make-text-content)
  text : string?
Represents plain text in message content.

struct

(struct string-text (text)
    #:extra-constructor-name make-string-text)
  text : string?
Represents quoted text in message content.

struct

(struct link (text url title)
    #:extra-constructor-name make-link)
  text : string?
  url : string?
  title : (or/c string? #f)
Represents a link in message content.

struct

(struct image (alt-text url title)
    #:extra-constructor-name make-image)
  alt-text : string?
  url : string?
  title : (or/c string? #f)
Represents an image in message content.

3.1.5 Commands and Operations

struct

(struct llm-md-command (force-modifier? content)
    #:extra-constructor-name make-llm-md-command)
  force-modifier? : boolean?
  content : command-content?
Represents an LLM-MD command.

struct

(struct shell-command (shell command)
    #:extra-constructor-name make-shell-command)
  shell : (or/c string? #f)
  command : string?
Represents a shell command.

struct

(struct variable-operation (name value)
    #:extra-constructor-name make-variable-operation)
  name : string?
  value : any/c
Represents a variable operation.

struct

(struct pending-response ()
    #:extra-constructor-name make-pending-response)
Represents a pending response placeholder.

struct

(struct comment (text)
    #:extra-constructor-name make-comment)
  text : string?
Represents a comment in the message content.

struct

(struct escaped-text (text)
    #:extra-constructor-name make-escaped-text)
  text : string?
Represents escaped text in the message content.

struct

(struct newline-content ()
    #:extra-constructor-name make-newline-content)
Represents a newline in the message content.

3.1.6 Labels and References

struct

(struct agent-label (name)
    #:extra-constructor-name make-agent-label)
  name : string?
Represents a label for an agent.

struct

(struct node-reference (identifier)
    #:extra-constructor-name make-node-reference)
  identifier : any/c
Represents a reference to a node.

struct

(struct variable (id)
    #:extra-constructor-name make-variable)
  id : any/c
Represents a variable.

3.1.7 Control Flow Structures

struct

(struct assignment (variable operator value)
    #:extra-constructor-name make-assignment)
  variable : any/c
  operator : any/c
  value : any/c
Represents a variable assignment.

struct

(struct return-statement (expression)
    #:extra-constructor-name make-return-statement)
  expression : any/c
Represents a return statement.

struct

(struct break-statement ()
    #:extra-constructor-name make-break-statement)
Represents a break statement.

struct

(struct continue-statement ()
    #:extra-constructor-name make-continue-statement)
Represents a continue statement.

3.1.8 Helper Functions

procedure

(message-content-item? x)  boolean?

  x : any/c
Checks if x is a valid message content item.

The following are considered valid message content items:
  • text-content?

  • string-text?

  • link?

  • image?

  • llm-md-command?

  • comment?

  • escaped-text?

  • newline-content?

procedure

(agent-type? x)  boolean?

  x : any/c
Checks if x is a valid agent type.

Valid agent types include:
  • agent?

  • terminator?

  • agent-chain?

  • node-reference?

procedure

(command-content? x)  boolean?

  x : any/c
Checks if x is valid command content.

Valid command content includes:
  • shell-command?

  • variable-operation?

  • assignment?

  • return-statement?

  • break-statement?

  • continue-statement?

  • pending-response?

  • comment?

  • text-content?

  • variable?

procedure

(valid-toml-data? data)  boolean?

  data : any/c
Validates that the given data is a valid TOML data structure.

A valid TOML data structure must:
  • Be a hash table (hash?)

  • Have string keys (string?)

  • Have values that are one of:
    • Strings (string?)

    • Numbers (number?)

    • Booleans (boolean?)

    • Hash tables (hash?)

    • Lists (list?)
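
Example (a sketch using literal hashes; in practice the data comes from a parsed context message):
(valid-toml-data? (hash "model" "gpt-4" "temperature" 0.7))  ; => #t
(valid-toml-data? (hash 'model "gpt-4"))                     ; => #f (keys must be strings)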

3.2 Lexer

3.2.1 Overview

This module provides a lexical analyzer for LLM Markdown, a specialized markdown format for LLM interactions.

3.2.2 Exported Functions and Tokens

procedure

(llm-md-lexer)  (-> input-port? position-token?)

Creates and returns a lexical analyzer function that processes LLM Markdown syntax. The returned function takes an input port and returns position tokens.

Here’s a basic example:
(define lexer (llm-md-lexer))
(define input-port (open-input-string "### context >>>"))
(lexer input-port) ; Returns a position-token

procedure

(un-lex tokens)  string?

  tokens : (listof position-token?)
Converts a list of position tokens back into their original text representation. This is useful for reconstructing the original input from lexed tokens.

3.2.3 Token Types
3.2.3.1 Syntax Tokens

The module defines the following syntax tokens through syntax-tokens:

value

syntax-tokens : symbol?

The following token types are included:

NEWLINE - Represents a single newline

DOUBLE-NEWLINE - Represents two consecutive newlines

CONTEXT-START - Matches "### context >>>"

MESSAGE-DELIMITER - Matches "###"

FAN-OUT - Matches ">>="

FAN-IN - Matches "=>>"

SEQUENTIAL-FLOW - Matches ">>>"

LEFT-PAREN - Matches "("

RIGHT-PAREN - Matches ")"

LINK-START - Matches "["

LINK-END - Matches "]("

IMAGE-START - Matches "!["

COMMAND-START - Matches "{{"

COMMAND-END - Matches "}}"

FORCE-MODIFIER - Matches "!!"

PENDING-RESPONSE - Matches "??"

COMMENT-START - Matches ";;"

ESCAPED-TEXT - Matches "```" (three backticks)

TERMINATOR - Matches "_"

SHELL-COMMAND-SEPARATOR - Matches "$"

COLON - Matches ":"

STRING-DELIMITER - Matches "\""

RETURN-KEYWORD - Matches "@return"

BREAK-KEYWORD - Matches "@break"

CONTINUE-KEYWORD - Matches "@continue"

EQUALS - Matches "="

CONTEXT - Represents a context marker

EOF - Represents end of file

3.2.3.2 Value Tokens

The module defines the following value tokens through value-tokens:

value

value-tokens : symbol?

The following token types are included:

TEXT - Regular text content

URI - URI/URL strings

VARIABLE-NAME - Variable names (prefixed with @)

3.2.4 Implementation Details

The lexer is implemented with parser-tools/lex and is designed to handle the full LLM Markdown syntax, including:
  • Context markers and message delimiters

  • Flow control operators (>>>, >>=, =>>)

  • Links and images

  • Commands and variables

  • Comments and escaped text

  • TOML-style configuration syntax
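
Example (a sketch of driving the lexer by hand; the token named in the comment is illustrative, and position-token-token comes from parser-tools/lex):
(require parser-tools/lex)
(define lexer (llm-md-lexer))
(define in (open-input-string "### context >>>"))
(position-token-token (lexer in)) ; e.g. a CONTEXT-START token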

3.3 Parser

3.3.1 Overview

The parser module provides functionality to parse LLM-MD files into their abstract syntax tree (AST) representation. It handles the syntactic analysis of the input and produces structured data according to the AST specification.

3.3.1.1 Primary Functions

procedure

(parse-llm-md input)  
llm-md-file?
exact-nonnegative-integer?
exact-nonnegative-integer?
  input : input-port?
Parses an LLM-MD file from the given input port.

Returns three values:
  • An llm-md-file? containing the parsed AST

  • The starting position in the input

  • The ending position in the input

If parsing fails, raises exn:fail:llm-md.
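
Example (a sketch of parsing an in-memory document and trapping parse errors):
(define (try-parse str)
  (with-handlers ([exn:fail:llm-md?
                   (λ (e) (eprintf "parse failed: ~a\n" (exn-message e)) #f)])
    (let-values ([(ast start end) (parse-llm-md (open-input-string str))])
      ast)))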

3.3.1.2 Exceptions

struct

(struct exn:fail:llm-md exn:fail:syntax (token start-pos end-pos)
    #:extra-constructor-name make-exn:fail:llm-md)
  token : any/c
  start-pos : exact-nonnegative-integer?
  end-pos : exact-nonnegative-integer?
Represents a parsing error in LLM-MD syntax.

Fields:
  • token - The token where the error occurred

  • start-pos - Starting position of the error

  • end-pos - Ending position of the error

Inherits from exn:fail:syntax, providing standard syntax error functionality.

3.3.1.3 Parser Implementation Details

The parser is implemented using parser-tools/yacc. Its grammar includes precedence rules for the operators.

3.3.1.4 Helper Functions

procedure

(extract-agents text)

  (listof (cons/c string? (or/c string? #f)))
  text : string?
Extracts agent names and optional labels from text.

Returns a list of pairs where:
  • car is the agent name

  • cdr is either the label or #f

procedure

(extract-value pos-token)  any/c

  pos-token : any/c
Extracts the value from a position token.

procedure

(parse-context-message content)  context-message?

  content : (listof any/c)
Parses TOML content into a context message.

Returns a context-message struct containing the parsed TOML data. If parsing fails, returns a context message with an empty hash.

4 Interpreter Module

4.1 Interpreter

4.1.1 Overview

The interpreter module provides functionality for executing LLM-MD files and strings, processing messages, and managing the interpretation environment.

4.1.2 Main Interface

procedure

(interpret-file path    
  [#:debug? debug?    
  #:profile? profile?])  any/c
  path : path-string?
  debug? : boolean? = #f
  profile? : boolean? = #f
Interprets an LLM-MD file from the given path.

Arguments:
  • path - Path to the LLM-MD file to interpret

  • debug? - When true, enables debug output

  • profile? - When true, enables performance profiling

procedure

(interpret-content-item _item env)  any/c

  _item : any/c
  env : environment?
Interprets a single content item within a message.

Supports these content types:
  • Text content: Plain text content

  • Links: Hyperlinks with optional titles

  • Images: Images with alt text and optional titles

  • Commands: LLM-MD commands

  • Comments: Comments in the content

  • Escaped text: Escaped text content

  • Newlines: Explicit newlines
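
Example (a minimal sketch using the AST constructor for plain text and a fresh environment):
(define env (make-environment))
(interpret-content-item (make-text-content "Hello, world") env)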

4.1.3 Initialization and Cleanup

procedure

(initialize-interpreter!)  void?

Initializes the interpreter by clearing caches and agent states. Should be called before starting interpretation.

procedure

(shutdown-interpreter!)  void?

Performs cleanup operations including cache clearing and agent state cleanup. Should be called when finished with interpretation.
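
Example (a sketch of a complete run; "conversation.md" is a hypothetical input file):
(dynamic-wind
  initialize-interpreter!
  (λ () (interpret-file "conversation.md" #:debug? #f))
  shutdown-interpreter!)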

4.1.4 Helper Functions

procedure

(process-link text url title env)  list?

  text : string?
  url : string?
  title : (or/c string? #f)
  env : environment?
Processes a link, optionally fetching its content if configured in the environment. Returns a link structure with text, URL, title, and optionally fetched content.

procedure

(fetch-url-content url-string)  (or/c string? #f)

  url-string : string?
Fetches and processes content from a URL. Returns the processed text content (the text of the <body />) or #f if fetching fails.

4.2 Interpreter Types

4.2.1 Overview

This module provides the core type definitions used throughout the interpreter implementation. These types handle environment management, agent state tracking, and execution context management.

4.2.2 Environment

struct

(struct environment (bindings parent)
    #:extra-constructor-name make-environment
    #:mutable
    #:transparent)
  bindings : hash?
  parent : (or/c environment? #f)
Represents a mutable environment that maintains variable bindings and supports lexical scoping.

Fields:
  • bindings - A hash table containing variable bindings

  • parent - Reference to parent environment or #f if this is the root environment

Example Usage:
(define root-env
  (environment (make-hash) #f))
 
(define child-env
  (environment (make-hash) root-env))
 
; Modify bindings
(hash-set! (environment-bindings root-env) 'x 42)

4.2.3 Agent State

struct

(struct agent-state (id context history last-access)
    #:extra-constructor-name make-agent-state
    #:mutable
    #:transparent)
  id : string?
  context : hash?
  history : list?
  last-access : exact-integer?
Represents the mutable state of an agent in the system.

Fields:
  • id - String identifier for the agent

  • context - Hash table containing the agent’s context

  • history - List of historical actions performed by the agent

  • last-access - Timestamp of the agent’s last access

Example Usage:
(define agent1-state
  (agent-state "agent1"
               (make-hash)
               '()
               (current-seconds)))
 
; Update context
(hash-set! (agent-state-context agent1-state)
           'status
           'active)
 
; Update history
(set-agent-state-history!
  agent1-state
  (cons 'new-action
        (agent-state-history agent1-state)))

4.2.4 Execution Context

struct

(struct execution-context (current-agent
    flow-type
    parent-context
    variables)
    #:extra-constructor-name make-execution-context
    #:mutable
    #:transparent)
  current-agent : string?
  flow-type : symbol?
  parent-context : (or/c execution-context? #f)
  variables : environment?
Represents a mutable execution context that tracks the current state of execution.

Fields:
  • current-agent - Identifier of the currently executing agent

  • flow-type - Symbol indicating the current flow control type (e.g., ’sequential, ’fan-out)

  • parent-context - Reference to parent execution context or #f if this is the root context

  • variables - Environment containing variables for this execution context

Example Usage:
(define root-context
  (execution-context
    "main-agent"
    'sequential
    #f
    (environment (make-hash) #f)))
 
(define child-context
  (execution-context
    "sub-agent"
    'fan-out
    root-context
    (environment (make-hash)
                (execution-context-variables root-context))))
 
; Update current agent
(set-execution-context-current-agent!
  child-context
  "new-agent")

4.2.5 Type Predicates

procedure

(environment? v)  boolean?

  v : any/c
Returns #t if v is an environment struct, #f otherwise.

procedure

(agent-state? v)  boolean?

  v : any/c
Returns #t if v is an agent-state struct, #f otherwise.

procedure

(execution-context? v)  boolean?

  v : any/c
Returns #t if v is an execution-context struct, #f otherwise.

4.3 Agents

4.3.1 Overview

This module provides functionality for interpreting and managing agent-based operations and their states.

4.3.2 Agent State Management

procedure

(make-agent-state id)  agent-state?

  id : string?
Creates a new agent state with the given identifier. The state includes context, history, and timestamp information.

(make-agent-state "test-agent")

procedure

(get-agent-state agent-id)  agent-state?

  agent-id : string?
Retrieves an existing agent state or creates a new one if it doesn’t exist.

procedure

(update-agent-context! agent-id key value)  void?

  agent-id : string?
  key : any/c
  value : any/c
Updates the context of an agent with the specified key-value pair.

procedure

(add-to-agent-history! agent-id entry)  void?

  agent-id : string?
  entry : any/c
Adds a new entry to the agent’s history.
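
Example (a short sketch of the state API; "researcher" is a hypothetical agent identifier):
(define st (get-agent-state "researcher"))            ; created on first access
(update-agent-context! "researcher" 'status 'active)
(add-to-agent-history! "researcher" 'activated)
(agent-state-history (get-agent-state "researcher"))  ; history now includes 'activated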

4.3.3 Agent Interpretation

procedure

(interpret-agent agent env)  string?

  agent : (or/c agent? node-reference?)
  env : environment?
Interprets an agent or agent reference, returning the agent’s identifier.

If given an agent?, activates the agent and records the activation in its history. If given a node-reference?, looks up the referenced agent in the environment.

procedure

(interpret-operation op env)  (or/c symbol? #f)

  op : (or/c operation? #f)
  env : environment?
Interprets an operation type, which must be one of 'fan-out, 'sequential, or 'fan-in. Returns #f if no operation is provided.

procedure

(interpret-label label env)  (or/c string? #f)

  label : (or/c agent-label? #f)
  env : environment?
Interprets an agent label, returning the label string or #f if no label is provided.
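
Example (a minimal sketch using the AST constructors; expected results shown in comments):
(define env (make-environment))
(interpret-agent (make-agent (make-agent-id "planner")) env)  ; => "planner"
(interpret-operation (make-operation 'sequential) env)        ; => 'sequential
(interpret-label (make-agent-label "draft") env)              ; => "draft"
(interpret-label #f env)                                      ; => #f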

4.3.4 Agent Chain Processing

procedure

(interpret-agent-element element env)  any/c

  element : agent-element?
  env : environment?
Interprets a single element in an agent chain, processing its agent, operation, and label components.

procedure

(interpret-agent-chain chain env)  any/c

  chain : agent-chain?
  env : environment?
Interprets a complete agent chain, handling different flow types:
  • 'fan-out: Processes elements in parallel

  • 'fan-in: Combines results from multiple elements

  • 'sequential: Processes elements in sequence
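
Example (a sketch of a two-agent sequential chain built from the AST constructors; the result depends on the configured agents and providers):
(define chain
  (make-agent-chain
   (list (make-agent-element (make-agent (make-agent-id "writer"))
                             (make-operation 'sequential)
                             #f)
         (make-agent-element (make-agent (make-agent-id "editor"))
                             #f
                             #f))))
(interpret-agent-chain chain (make-environment))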

4.3.5 Types

The module operates on the AST agent types (agent?, agent-chain?, agent-element?, operation?, agent-label?, node-reference?) together with the interpreter’s agent-state? and environment? structures.

4.4 Commands

4.4.1 Overview

This module provides the core interpretation functionality for LLM-MD commands and expressions.

4.4.2 Command Interpretation

procedure

(interpret-command cmd force? env)  any/c

  cmd : any/c
  force? : boolean?
  env : environment?
Interprets an LLM-MD command within the given environment.

The function handles various types of commands including:
  • Variables - Retrieves variable values from the environment

  • Shell commands - Executes system commands when force? is true

  • Variable operations - Handles variable assignments and modifications

  • Assignment statements - Sets variable values in the environment

  • Control flow statements - Handles return, break, and continue

  • Text content - Processes raw text, converting to numbers when possible

Parameters:
  • cmd - The command to interpret

  • force? - Boolean flag indicating whether to execute shell commands

  • env - The current environment context

Returns the result of executing the command, or raises an error for invalid commands.
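
Example (a sketch; per the list above the shell command runs only because force? is #t, and passing #f for the shell field is assumed to select the default shell):
(define env (make-environment))
(interpret-command (make-shell-command #f "echo hello") #t env)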

procedure

(interpret-expression expr env)  any/c

  expr : any/c
  env : environment?
Evaluates an LLM-MD expression within the given environment.

Handles various expression types including:
  • Variables - Resolves variable references

  • Text content - Processes raw text

  • Pending responses - Returns ’pending symbol

  • LLM-MD commands - Delegates to interpret-command

  • Primitive values (numbers, strings, booleans)

Parameters:
  • expr - The expression to evaluate

  • env - The current environment context

Returns the evaluated result, or raises an error for invalid expressions.
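
Example (a minimal sketch; expected results follow from the cases listed above):
(define env (make-environment))
(interpret-expression (make-pending-response) env)  ; => 'pending
(interpret-expression 42 env)                       ; => 42 (primitive values)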

procedure

(convert-to-number val)  number?

  val : any/c
Safely converts a value to a number.

Supports conversion from:
  • Numbers (returned as-is)

  • Strings (parsed as numbers)

  • Text content structures

Parameters:
  • val - The value to convert to a number

Returns the converted number value. Raises an error if conversion fails.
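
Example (expected results shown in comments):
(convert-to-number 42)                       ; => 42
(convert-to-number "3.14")                   ; => 3.14
(convert-to-number (make-text-content "7"))  ; => 7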

4.4.3 Helper Functions

procedure

(variable-name var)  string?

  var : any/c
Extracts the name from a variable structure.

Parameters:
  • var - A variable structure

Returns the variable name as a string. Raises an error if given an invalid variable structure.

4.5 Context

4.5.1 Overview

This module provides functionality for managing execution contexts in a program. Execution contexts help track the current execution state, including agent information and hierarchical relationships between different execution contexts.

4.5.2 Core Functions

procedure

(make-execution-context agent-id [parent])  execution-context?

  agent-id : string?
  parent : (or/c execution-context? #f) = #f
Creates a new execution context with the specified agent-id and optional parent context.

The context includes:
  • The agent identifier as a string

  • A flow type (always ’sequential)

  • An optional parent context

  • A new environment that inherits from the parent’s environment if present

(require context)
(define ctx (make-execution-context "agent-1"))
(execution-context? ctx)

parameter

(current-execution-context)  (or/c execution-context? #f)

(current-execution-context context)  void?
  context : (or/c execution-context? #f)
A parameter that holds the current execution context. The default value is #f. This parameter is typically managed using with-new-context rather than being set directly.

procedure

(with-new-context agent-id thunk)  any

  agent-id : string?
  thunk : (-> any)
Executes the given thunk within a new execution context created with the specified agent-id. The new context is set as the current context for the duration of the thunk’s execution and uses the previous context (if any) as its parent.

(require context)
(with-new-context "agent-1"
  (λ ()
    (define ctx (current-execution-context))
    (execution-context-current-agent ctx)))
4.5.3 Structure Types

The module relies on the following structure types defined in "types.rkt":

struct

(struct execution-context (current-agent
    flow-type
    parent-context
    variables)
    #:extra-constructor-name make-execution-context)
  current-agent : string?
  flow-type : symbol?
  parent-context : (or/c execution-context? #f)
  variables : environment?
Represents an execution context with the following fields:
  • current-agent: The identifier of the current agent

  • flow-type: The type of execution flow (always ’sequential)

  • parent-context: The parent execution context, if any

  • variables: An environment containing context variables

4.5.4 Usage Examples

Here’s an example demonstrating nested contexts:

(with-new-context "parent-agent"
  (λ ()
    (with-new-context "child-agent"
      (λ ()
        (define ctx (current-execution-context))
        (list
         (execution-context-current-agent ctx)
         (execution-context-current-agent
          (execution-context-parent-context ctx)))))))
; Returns '("child-agent" "parent-agent")
4.5.5 Contracts

All exported functions are protected by contracts.

4.5.6 Error Handling

The module includes proper error checking:
  • Both make-execution-context and with-new-context will raise an error if the agent-id is not a string

  • Contract violations will be reported for any invalid arguments to the exported functions

4.6 Environment

4.6.1 Overview

The environment module provides a structured way to manage variable bindings in the interpreter. It implements a hierarchical environment system where variables can be defined, looked up, and modified across different scopes.

4.6.2 Data Structures

struct

(struct environment (bindings parent)
    #:extra-constructor-name make-environment)
  bindings : hash?
  parent : (or/c environment? #f)
Represents an environment for variable bindings.

4.6.3 Environment Creation

procedure

(make-environment [parent])  environment?

  parent : (or/c environment? #f) = #f
Creates a new environment with an optional parent.

The new environment includes metadata like a unique ID, timestamp of creation, and a generated name that reflects its relationship to any parent environment.

(define global-env (make-environment))
(define local-env (make-environment global-env))

procedure

(make-global-environment)  environment?

Creates or returns the global environment singleton.

This function ensures there is only one global environment instance. Subsequent calls return the same instance.

(define global-env (make-global-environment))
4.6.4 Variable Operations

procedure

(lookup-variable name env)  any/c

  name : string?
  env : environment?
Looks up a variable in the environment chain, starting from the given environment and proceeding up through parent environments until the variable is found or the chain ends.

Returns the variable’s value or #f if not found.

(lookup-variable "x" my-env)

procedure

(lookup-variable/default name env default)  any/c

  name : string?
  env : environment?
  default : any/c
Looks up a variable with a default value if not found in the environment chain.

Returns the variable’s value or the provided default if not found.

(lookup-variable/default "x" my-env 0)

procedure

(set-variable! name value env)  void?

  name : string?
  value : any/c
  env : environment?
Sets a variable’s value in the environment chain.

If the variable is already defined in the environment chain, updates its value in the defining environment. Otherwise, adds the variable to the current environment.

(set-variable! "x" 42 my-env)

procedure

(define-variable! var val env)  void?

  var : string?
  val : any/c
  env : environment?
Defines a new variable in the current environment.

This always creates or updates the variable in the specified environment, regardless of whether it exists in parent environments.

(define-variable! "x" 42 my-env)

procedure

(find-defining-environment name env)  (or/c environment? #f)

  name : string?
  env : environment?
Finds the environment where a variable is defined.

Returns the environment containing the variable definition or #f if not found in the environment chain.

(define defining-env (find-defining-environment "x" my-env))

procedure

(delete-variable! name env)  boolean?

  name : string?
  env : environment?
Removes a variable from its defining environment.

Returns #t if the variable was found and deleted, #f otherwise.

(delete-variable! "x" my-env)

procedure

(variable-exists? name env)  boolean?

  name : string?
  env : environment?
Checks if a variable exists in the environment chain.

Returns #t if the variable exists, #f otherwise.

(variable-exists? "x" my-env)
4.6.5 Environment Manipulation

procedure

(extend-environment vars vals base-env)  environment?

  vars : (listof string?)
  vals : (listof any/c)
  base-env : environment?
Creates a new environment with given bindings and parent.

The number of variables must match the number of values. Returns a new environment that extends the base environment with the specified bindings.

(define extended-env (extend-environment '("x" "y") '(1 2) base-env))

procedure

(with-extended-environment base-env    
  vars    
  vals    
  thunk)  any
  base-env : environment?
  vars : (listof string?)
  vals : (listof any/c)
  thunk : (-> environment? any)
Executes a function in an extended environment.

Creates a new environment extending the base environment with the specified variables and values, then calls the given function with that environment and returns its result.

(with-extended-environment base-env '("x" "y") '(1 2)
  (lambda (env) (+ (lookup-variable "x" env) (lookup-variable "y" env))))

procedure

(get-environment-chain env)  (listof environment?)

  env : environment?
Gets a list of environments in the chain from current to root.

Returns a list starting with the given environment, followed by its parent, and so on up to the root environment.

(define env-chain (get-environment-chain my-env))

procedure

(get-all-bindings env)  (hash/c string? any/c)

  env : environment?
Gets all bindings visible from the current environment.

Returns a hash table containing all variables accessible from the given environment, with proper shadowing (variables in child environments override those with the same name in parent environments).

(define all-vars (get-all-bindings my-env))

procedure

(merge-environments! target source shadow?)  void?

  target : environment?
  source : environment?
  shadow? : boolean?
Merges all bindings from source into target environment.

When shadow? is #t, variables from the source environment will override those with the same name in the target. When #f, only variables not present in the target will be added.

(merge-environments! target-env source-env #t)

4.7 LLM-MD Evaluation

4.7.1 Overview

The evaluation module provides functionality for evaluating parsed LLM-MD abstract syntax trees (ASTs) and converting the resulting conversations to JSON format. It handles system messages from TOML context data and processes agent messages through the interpreter.

4.7.2 Data Structures

The module works with the llm-md-file AST produced by the parser, the interpreter environment, and the conversation structure defined by the LLM API module.

4.7.3 Main Functions

procedure

(evaluate-llm-md ast env)  conversation?

  ast : llm-md-file?
  env : environment?
Evaluates a parsed LLM-MD file (AST) in the provided interpreter environment.

The function:
  • Extracts the system message from TOML context data if present

  • Falls back to a default system message if none is specified

  • Processes all non-context messages through the interpreter

  • Constructs message hashes with ’role and ’content keys

The evaluation process:
  • Scans for context messages containing a "system_message" key

  • Interprets each agent message using interpret-message

  • Converts interpreter results to role/content hash tables

  • Builds a final conversation struct with messages and system prompt

procedure

(conversation->json conv)  string?

  conv : conversation?
Converts a conversation struct into a JSON string representation.

The output JSON contains:
  • messages - Array of message objects with role/content

  • system_message - The conversation’s system message

4.7.4 Constants

value

default-system-message : string?

The default system message used when no TOML context message is defined:

"You are a helpful AI assistant."

4.7.5 Usage

To use the evaluation module:

;; Import the module
(require "evaluation.rkt")
 
;; Create environment and parse input
(define env (make-environment))
(define ast (parse-llm-md input-port))
 
;; Evaluate the AST
(define conv (evaluate-llm-md ast env))
 
;; Convert to JSON if needed
(define json-str (conversation->json conv))
4.7.6 Error Handling

The module validates its inputs; invalid inputs raise errors with details about the failure.

4.8 Interpreter Utilities

4.8.1 Overview

This module provides utility functions used throughout the interpreter implementation including error handling, logging, profiling, and AST visualization.

4.8.2 Error Handling Functions

procedure

(format-error-message error include-stack?)  string?

  error : (or/c exn? string?)
  include-stack? : boolean?
Formats error messages with improved readability.

Handles specialized formatting for different error types:
  • LLM provider errors (rate limiting, authentication, context length)

  • Parser errors with line and column information

  • Generic errors

If include-stack? is true and error is an exception, the formatted message includes a stack trace.

Examples:
> (require racket/exn)
> (format-error-message "Parse error at line 10, column 5: Unexpected token" #f)

"Parse Error: at line 10, column 5\n\nUnexpected token"

> (format-error-message (exn:fail "Read error" (current-continuation-marks)) #t)

"Error: Read error\n\nStack Trace:\n..."

procedure

(display-error error include-stack?)  void?

  error : (or/c exn? string?)
  include-stack? : boolean?
Displays a formatted error message to stderr.

Example:
> (display-error "API request failed" #f)

procedure

(with-error-handling thunk debug-mode?)  any?

  thunk : thunk?
  debug-mode? : boolean?
Executes a thunk with basic error handling, controlled by the debug-mode? boolean. Errors are logged and then passed through.

Example:
> (with-error-handling (λ () (+ 1 2)) #f)

3

procedure

(colored-text text color)  string?

  text : string?
  color : symbol?
Formats text with ANSI color codes if supported by the terminal.

Supported colors: ’red, ’green, ’yellow, ’blue, ’magenta, ’cyan, ’bold.

Example:
> (colored-text "Error message" 'red)

"\e[31mError message\e[0m"

4.8.3 Logging Functions

procedure

(log-debug fmt arg ...)  void?

  fmt : string?
  arg : any/c
Logs a debug-level message using the interpreter’s logger.

Example:
> (log-debug "Processing item ~a" 42)

procedure

(log-info fmt arg ...)  void?

  fmt : string?
  arg : any/c
Logs an info-level message using the interpreter’s logger.

Example:
> (log-info "Operation completed in ~a ms" 150)

procedure

(log-error fmt arg ...)  void?

  fmt : string?
  arg : any/c
Logs an error-level message using the interpreter’s logger.

Example:
> (log-error "Failed to process: ~a" "invalid input")

procedure

(debug-print fmt arg ...)  void?

  fmt : string?
  arg : any/c
Conditionally prints debug information to current output port when debug output is enabled.

Example:
> (parameterize ([enable-debug-output? #t])
    (debug-print "Value: ~a" 42))

parameter

(enable-debug-output?)  boolean?

(enable-debug-output? enabled?)  void?
  enabled? : boolean?
 = #f

parameter

(enable-profiling?)  boolean?

(enable-profiling? enabled?)  void?
  enabled? : boolean?
 = #f
Parameters that control debug output and profiling, respectively.

4.8.4 Profiling Utilities

syntax

(with-profiling name body ...)

Executes the body expressions with profiling enabled if enable-profiling? is true. Records execution time under the given name.

Example:
> (parameterize ([enable-profiling? #t])
    (with-profiling parse-operation
      (sleep 0.1)
      'result))

'result

procedure

(print-profile-report)  void?

Prints a summary of profiling data collected through with-profiling. Shows calls, average time, maximum time, and minimum time for each operation.

Example:
> (parameterize ([enable-profiling? #t])
    (print-profile-report))

4.8.5 AST Visualization

procedure

(pretty-print-ast ast)  void?

  ast : llm-md-file?
Pretty prints an LLM-MD AST for debugging purposes.

Example:
> (pretty-print-ast some-ast)

4.8.6 Cache Management

procedure

(clear-caches!)  void?

Clears the variable and expression caches used by the interpreter.

Example:
> (clear-caches!)

procedure

(with-clean-caches thunk)  any

  thunk : (-> any)
Executes a thunk with fresh caches, clearing them before and after execution.

Example:
> (with-clean-caches
   (λ () (perform-operation)))

'result

4.8.7 Memory Management

procedure

(cleanup-agent-states!)  void?

Removes agent states that haven’t been accessed for more than an hour.

Example:
> (cleanup-agent-states!)

procedure

(touch-agent-state! state)  void?

  state : agent-state?
Updates the last access timestamp of an agent state to prevent it from being cleaned up.

Example:
> (touch-agent-state! some-agent-state)

4.8.8 Environment Utilities

procedure

(make-scope parent-env)  environment?

  parent-env : environment?
Creates a new environment with the given parent, establishing a lexical scope for variable bindings.

Example:
> (define child-scope (make-scope parent-env))

5 LLMs Module

5.1 LLM Types and Structures

Hugo O'Connor

5.1.1 Overview

This module provides core types and structures for LLM (Large Language Model) interactions, including provider configurations, model specifications, error handling, and input validation.

5.1.2 Constants

value

model-types : (listof symbol?)

List of supported model types:
  • 'text-to-text - Traditional LLMs (input: text, output: text)

  • 'text-to-image - Image generation (input: text, output: image)

  • 'image-to-text - Image understanding (input: image, output: text)

  • 'text-to-speech - TTS (input: text, output: audio)

  • 'speech-to-text - STT (input: audio, output: text)

  • 'embedding - Text embeddings (input: text, output: vector)

  • 'multimodal - Multiple input/output types

(require llm-types)
model-types
; => '(text-to-text text-to-image image-to-text
;     text-to-speech speech-to-text embedding multimodal)

value

error-types : (listof symbol?)

Standard error types for consistent error handling:
  • 'rate-limit-error - Rate limiting errors

  • 'authentication-error - Authentication failures

  • 'validation-error - Input validation errors

  • 'context-length-error - Context length exceeded

  • 'timeout-error - Request timeout errors

  • 'api-error - General API errors

value

provider-features : (listof symbol?)

Supported provider features:
  • 'streaming - Supports streaming responses

  • 'function-calling - Supports function calling

  • 'vision - Supports image input

  • 'tools - Supports tool use

  • 'fine-tuning - Supports model fine-tuning

  • 'embedding - Supports text embeddings

5.1.3 Core Structures

struct

(struct provider-config (endpoint
    api-key-env
    headers-fn
    format-request-fn
    parse-response-fn
    supported-types
    api-version
    rate-limits
    supports-streaming?
    token-counter
    features)
    #:extra-constructor-name make-provider-config)
  endpoint : string?
  api-key-env : string?
  headers-fn : (-> string? (listof string?))
  format-request-fn : (-> (listof hash?) string? string? hash?)
  parse-response-fn : (-> hash? any/c)
  supported-types : (listof symbol?)
  api-version : string?
  rate-limits : provider-rate-limits?
  supports-streaming? : boolean?
  token-counter : (-> string? exact-nonnegative-integer?)
  features : (listof symbol?)
Represents configuration for a provider’s API.

(define config
  (provider-config
    "https://api.example.com"
    "API_KEY"
    (λ (key) (list (format "Authorization: Bearer ~a" key)))
    (λ (msgs model sys-prompt)
      (hash 'messages msgs 'model model))
    (λ (response) (hash-ref response 'choices))
    '(text-to-text)
    "v1"
    (provider-rate-limits 60 250000 10)
    #t
    (λ (text) (string-length text))
    '(streaming function-calling)))

struct

(struct llm-urn (provider model)
    #:extra-constructor-name make-llm-urn)
  provider : symbol?
  model : string?
Unique identifier for LLM models.

(define urn (llm-urn 'openai "gpt-4"))
(llm-urn-provider urn) ; => 'openai
(llm-urn-model urn)    ; => "gpt-4"

struct

(struct provider-error (type message details)
    #:extra-constructor-name make-provider-error)
  type : symbol?
  message : string?
  details : hash?
Standardized error representation.

(define error
  (provider-error
    'rate-limit-error
    "Too many requests"
    (hash 'retry-after 30)))

struct

(struct model-io (input-types
    output-types
    max-input-size
    max-output-size)
    #:extra-constructor-name make-model-io)
  input-types : (listof symbol?)
  output-types : (listof symbol?)
  max-input-size : exact-positive-integer?
  max-output-size : exact-positive-integer?
Input/output specifications for models.

(define io
  (model-io
    '(text)
    '(text)
    4096
    4096))

struct

(struct model-pricing (input-price output-price currency)
    #:extra-constructor-name make-model-pricing)
  input-price : (>=/c 0)
  output-price : (>=/c 0)
  currency : symbol?
Pricing information for model usage.

(define pricing
  (model-pricing
    0.0015  ; $0.0015 per 1K input tokens
    0.002   ; $0.002 per 1K output tokens
    'USD))

struct

(struct provider-rate-limits (requests-per-minute
    tokens-per-minute
    concurrent-requests)
    #:extra-constructor-name make-provider-rate-limits)
  requests-per-minute : exact-positive-integer?
  tokens-per-minute : exact-positive-integer?
  concurrent-requests : exact-positive-integer?
Rate limiting configuration.

(define limits
  (provider-rate-limits
    60      ; 60 requests per minute
    250000  ; 250K tokens per minute
    10))    ; 10 concurrent requests

struct

(struct model-metadata (name
    provider
    capabilities
    context-window
    pricing
    io-specs
    release-date
    deprecated?
    version)
    #:extra-constructor-name make-model-metadata)
  name : string?
  provider : symbol?
  capabilities : (listof symbol?)
  context-window : exact-positive-integer?
  pricing : model-pricing?
  io-specs : model-io?
  release-date : string?
  deprecated? : boolean?
  version : string?
Comprehensive model metadata.

(define metadata
  (model-metadata
    "gpt-4"
    'openai
    '(text completion chat)
    8192
    (model-pricing 0.03 0.06 'USD)
    (model-io '(text) '(text) 8192 8192)
    "2023-03-14"
    #f
    "1.0"))
5.1.4 Input Validation Functions

procedure

(valid-text? content)  boolean?

  content : any/c
Validates text input. Returns #t if content is a non-empty string.

(valid-text? "hello")     ; => #t
(valid-text? "")          ; => #f
(valid-text? 123)         ; => #f

procedure

(valid-image? content)  boolean?

  content : any/c
Validates image input. Returns #t if content is a byte string.

(valid-image? #"image-data")  ; => #t
(valid-image? "not-bytes")    ; => #f

procedure

(valid-audio? content)  boolean?

  content : any/c
Validates audio input. Returns #t if content is a byte string.

(valid-audio? #"audio-data")  ; => #t
(valid-audio? "not-bytes")    ; => #f

procedure

(valid-embedding? content)  boolean?

  content : any/c
Validates embedding vectors. Returns #t if content is a non-empty vector.

(valid-embedding? (vector 1 2 3))  ; => #t
(valid-embedding? (vector))        ; => #f
(valid-embedding? '(1 2 3))        ; => #f

procedure

(validate-input content type)  boolean?

  content : any/c
  type : symbol?
Validates input based on type. Supports 'text, 'image, 'audio, and 'embedding types.

(validate-input "hello" 'text)           ; => #t
(validate-input #"data" 'image)          ; => #t
(validate-input (vector 1 2) 'embedding) ; => #t
(validate-input "data" 'unknown)         ; => #f

5.2 LLM API

5.2.1 Overview

This module provides functionality for creating and managing conversations with Large Language Models (LLMs) through various providers like OpenAI and Anthropic. It includes support for both standard and streaming responses, along with enhanced error handling.

5.2.2 Core Structures

struct

(struct conversation (messages system-message)
    #:extra-constructor-name make-conversation
    #:transparent)
  messages : (listof hash?)
  system-message : string?
Represents a conversation with an LLM.

Fields:
  • messages - List of message hashes, each containing ’role and ’content keys

  • system-message - System prompt for the conversation

Example:
> (define conv
    (conversation
      (list (hash 'role "user" 'content "Hello"))
      "Be helpful"))

5.2.3 Creating Conversations

procedure

(make-conversation messages system-message)  conversation?

  messages : (listof hash?)
  system-message : string?
Creates a validated conversation struct.

Parameters:
  • messages - List of message hashes, each must contain ’role and ’content keys

  • system-message - System prompt string

Raises an error if any message hash doesn’t have the required keys.

Example:
> (define conv
    (make-conversation
      (list (hash 'role "user" 'content "Hi"))
      "You are a helpful assistant."))

5.2.4 Sending Prompts

procedure

(send-prompt provider-or-urn 
  conv 
  [#:model model 
  #:streaming? streaming? 
  #:temperature temperature 
  #:max-tokens max-tokens]) 
  (or/c string? async-channel?)
  provider-or-urn : (or/c symbol? llm-urn?)
  conv : conversation?
  model : (or/c #f string?) = #f
  streaming? : boolean? = #f
  temperature : (real-in 0 1) = 0.7
  max-tokens : (or/c #f exact-positive-integer?) = #f
Sends a prompt to an LLM provider and returns the response.

Parameters:
  • provider-or-urn - Either a provider symbol (e.g., ’openai) or an LLM URN

  • conv - Conversation struct containing messages and system prompt

  • model - Optional model name to use

  • streaming? - Whether to use streaming mode if supported

  • temperature - Controls randomness (0-1)

  • max-tokens - Maximum tokens to generate

Returns:
  • If streaming? is false: a string containing the model’s response

  • If streaming? is true: an async channel that yields response chunks

Errors:
  • Provider doesn’t support streaming

  • Invalid provider or model

  • Missing API key

  • API request failures

Examples:
> (define response
    (send-prompt 'anthropic
                my-conversation
                #:model "claude-3-opus-20240229"))

"Paris is the capital of France."

> (define stream-channel
    (send-prompt 'anthropic
                my-conversation
                #:streaming? #t
                #:temperature 0.5))

#<async-channel>

procedure

(send-prompt/stream provider-or-urn    
  conv    
  [#:model model    
  #:temperature temperature    
  #:max-tokens max-tokens])  async-channel?
  provider-or-urn : (or/c symbol? llm-urn?)
  conv : conversation?
  model : (or/c #f string?) = #f
  temperature : (real-in 0 1) = 0.7
  max-tokens : (or/c #f exact-positive-integer?) = #f
Convenience function for sending streaming requests. This is equivalent to calling send-prompt with #:streaming? #t.

Returns an async channel that yields response chunks as they arrive from the provider. The final value in the channel will be eof.

Examples:
> (define channel (send-prompt/stream 'anthropic my-conversation))

#<async-channel>

> (let loop ()
    (define chunk (async-channel-get channel))
    (unless (eof-object? chunk)
      (display chunk)
      (loop)))

5.2.5 Error Handling

procedure

(format-provider-error provider error-data)  provider-error?

  provider : symbol?
  error-data : any/c
Formats provider-specific error responses into standardized error objects.

Handles different error formats from various providers (Anthropic, OpenAI, etc.) and maps them to consistent error types.

Example:
> (define error-obj
    (format-provider-error
     'anthropic
     (hash 'type "rate_limit_error"
           'message "Too many requests")))

procedure

(parse-streaming-response in    
  provider    
  out-channel)  void?
  in : input-port?
  provider : symbol?
  out-channel : async-channel?
Processes a streaming response from an LLM provider and sends chunks to a channel.

Handles the provider-specific streaming formats for:
  • Anthropic (Claude models)

  • OpenAI (GPT models)

  • Generic server-sent event streams

This function is used internally by send-prompt when streaming is enabled.

5.2.6 Working with Streaming Responses

When using streaming responses, you can process the chunks as they arrive:

(define channel
  (send-prompt/stream 'anthropic conversation))
 
;; Process chunks as they arrive
(let loop ()
  (define chunk (async-channel-get channel))
  (cond
    [(eof-object? chunk)
     ;; End of response
     (displayln "\n--- Complete ---")]
    [else
     ;; Display chunk and continue
     (display chunk)
     (flush-output)
     (loop)]))
5.2.7 Complete Example

Here’s a complete example showing how to use the module:

;; Create a conversation
(define messages
  (list (hash 'role "user"
              'content "What is the capital of France?")))
 
(define conv
  (make-conversation messages
                    "You are a helpful geography teacher."))
 
;; Option 1: Standard response
(define response
  (send-prompt 'anthropic
              conv
              #:model "claude-3-opus-20240229"))
 
(displayln response)
 
;; Option 2: Streaming response
(define channel
  (send-prompt/stream 'anthropic
                     conv
                     #:model "claude-3-opus-20240229"))
 
;; Process the streaming response
(let loop ()
  (define chunk (async-channel-get channel))
  (unless (eof-object? chunk)
    (display chunk)
    (flush-output)
    (loop)))

5.3 LLM Models Configuration

5.3.1 Overview

This module provides configurations and metadata for various Large Language Models (LLMs) and their providers. It includes context window sizes, pricing information, model capabilities, and model family groupings.

5.3.2 Configuration Tables

value

model-context-windows

 : (hash/c string? exact-nonnegative-integer?)
Hash table mapping model names to their maximum context window sizes (in tokens).

Example:
(hash-ref model-context-windows "gpt-4") ; => 8192
(hash-ref model-context-windows "claude-3-opus-20240229") ; => 200000

value

model-pricing-table : (hash/c string? model-pricing?)

Hash table containing pricing information for different models. Prices are per 1K tokens for input and output.

Example:
(define gpt4-pricing (hash-ref model-pricing-table "gpt-4"))
(model-pricing-input-price gpt4-pricing) ; => 0.03
(model-pricing-output-price gpt4-pricing) ; => 0.06
(model-pricing-currency gpt4-pricing) ; => 'USD

value

provider-models : (hash/c symbol? (listof string?))

Hash table mapping providers to their available models.

Example:
(hash-ref provider-models 'openai)
; => '("gpt-4" "gpt-4-turbo-preview" "gpt-3.5-turbo")
 
(hash-ref provider-models 'anthropic)
; => '("claude-3-opus-20240229" "claude-3-sonnet-20240229" "claude-3-haiku-20240229")

value

model-capabilities : (hash/c string? model-metadata?)

Hash table containing detailed metadata and capabilities for each model.

Example:
(define gpt4-metadata (hash-ref model-capabilities "gpt-4"))
(model-metadata-provider gpt4-metadata) ; => 'openai
(model-metadata-context-window gpt4-metadata) ; => 8192
(model-metadata-capabilities gpt4-metadata) ; => '(text-to-text)

value

model-families : (hash/c symbol? (listof string?))

Hash table grouping related models into families.

Example:
(hash-ref model-families 'gpt)
; => '("gpt-4" "gpt-4-turbo-preview" "gpt-3.5-turbo")
 
(hash-ref model-families 'claude)
; => '("claude-3-opus-20240229" "claude-3-sonnet-20240229" "claude-3-haiku-20240229")

5.3.3 Helper Functions

procedure

(get-model-context-window model-id)

  exact-nonnegative-integer?
  model-id : string?
Returns the context window size for the given model. Raises exn:fail if the model is not found.

Example:
(get-model-context-window "gpt-4") ; => 8192
(get-model-context-window "nonexistent-model") ; raises error

procedure

(get-model-pricing model-id)  model-pricing?

  model-id : string?
Returns the pricing information for the given model. Raises exn:fail if the model is not found.

Example:
(define pricing (get-model-pricing "gpt-4"))
(model-pricing-input-price pricing) ; => 0.03
(model-pricing-output-price pricing) ; => 0.06

5.3.4 Supported Models

The module currently supports the providers and models enumerated in the configuration tables above (provider-models, model-capabilities, and model-families); the sketch below shows how to list them programmatically.
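
A minimal sketch using only the provider-models table documented above:

(for ([(provider models) (in-hash provider-models)])
  (printf "~a: ~a\n" provider models))
;; prints each provider symbol followed by its list of model names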

5.4 LLM Providers

5.4.1 Overview

This module manages configurations and metadata for various Large Language Model (LLM) service providers. It provides a centralized system for handling provider-specific settings, rate limits, API configurations, and model capabilities.

5.4.2 Core Provider Functions

procedure

(get-provider-config urn)  provider-config?

  urn : (or/c llm-urn? symbol?)
Gets configuration for a provider specified by URN or symbol.

If a URN is provided, extracts the provider from it. If the provider is not found, raises an error with a descriptive message.

Examples:
> (get-provider-config 'openai)
> (get-provider-config (make-llm-urn 'openai "gpt-4"))

procedure

(get-provider-rate-limits provider)  provider-rate-limits?

  provider : symbol?
Retrieves rate limit settings for a provider.

Returns a provider-rate-limits structure containing the provider’s requests per minute, tokens per minute, and concurrent request limits.

Examples:
> (define limits (get-provider-rate-limits 'openai))
> (provider-rate-limits-requests-per-minute limits)

3500

procedure

(get-provider-features provider)  (listof symbol?)

  provider : symbol?
Gets list of features supported by a provider.

Returns an empty list if the provider is not found in the features table.

Examples:
> (get-provider-features 'openai)

'(streaming function-calling vision tools fine-tuning embedding)

> (get-provider-features 'anthropic)

'(streaming vision tools)

procedure

(get-available-models provider)  (listof string?)

  provider : symbol?
Returns list of available models for a provider.

Example:
> (get-available-models 'anthropic)

'("claude-3-opus-20240229"

  "claude-3-sonnet-20240229"

  "claude-3-haiku-20240229")

procedure

(valid-model? provider model)  boolean?

  provider : symbol?
  model : string?
Checks if a model is valid for a provider.

Returns #t if the model is in the provider’s list of available models, otherwise returns #f.

Examples:
> (valid-model? 'openai "gpt-4")

#t

> (valid-model? 'anthropic "invalid-model")

#f

5.4.3 Provider Configurations

value

provider-configs : (hash/c symbol? provider-config?)

Hash table mapping provider symbols to their configurations.

Each configuration includes the fields documented in the provider-config structure (see Provider Data Structures below).

Featured Providers:

Additional Supported Providers:

Cohere, Together, Perplexity, Nomic, AI21, Stability, DeepInfra, Ollama, LemonFox AI, Moonshot, SiliconFlow, DeepBricks, Voyage, and Novita.

5.4.4 Enhanced Anthropic Support

The Anthropic provider configuration includes specialized handling for Claude models.

5.4.5 Provider Data Structures

struct

(struct provider-config (endpoint
    api-key-env
    headers-fn
    format-request-fn
    parse-response-fn
    supported-types
    api-version
    rate-limits
    supports-streaming?
    token-counter
    features)
    #:extra-constructor-name make-provider-config)
  endpoint : string?
  api-key-env : string?
  headers-fn : (-> string? (listof string?))
  format-request-fn : (-> (listof hash?) string? string? hash?)
  parse-response-fn : (-> hash? any/c)
  supported-types : (listof symbol?)
  api-version : string?
  rate-limits : provider-rate-limits?
  supports-streaming? : boolean?
  token-counter : (-> string? exact-nonnegative-integer?)
  features : (listof symbol?)
Represents complete configuration for an LLM provider.

Fields:
  • endpoint - API endpoint URL

  • api-key-env - Environment variable name for API key

  • headers-fn - Function that generates request headers given an API key

  • format-request-fn - Function that formats messages, system prompt, and model into a request payload

  • parse-response-fn - Function that extracts content from provider responses

  • supported-types - List of model types supported by the provider

  • api-version - API version string

  • rate-limits - Rate limit configuration

  • supports-streaming? - Whether the provider supports streaming responses

  • token-counter - Function that estimates token count for a given text

  • features - List of special features supported by the provider
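
Putting the fields together, here is a minimal sketch of a hypothetical provider configuration. All values are illustrative only; the argument order follows the structure definition above.

(define example-config
  (make-provider-config
   "https://api.example.com/v1/chat"            ; endpoint
   "EXAMPLE_API_KEY"                            ; api-key-env
   (lambda (api-key)                            ; headers-fn
     (list "Content-Type: application/json"
           (string-append "Authorization: Bearer " api-key)))
   (lambda (messages system-message model)      ; format-request-fn
     (hash 'model model
           'system system-message
           'messages messages))
   (lambda (response)                           ; parse-response-fn
     (hash-ref response 'content))
   '(text-to-text)                              ; supported-types
   "2024-01-01"                                 ; api-version
   (make-provider-rate-limits 1000 100000 10)   ; rate-limits
   #t                                           ; supports-streaming?
   (lambda (text)                               ; token-counter (rough estimate)
     (quotient (string-length text) 4))
   '(streaming)))                               ; features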

struct

(struct provider-rate-limits (requests-per-minute
    tokens-per-minute
    concurrent-requests)
    #:extra-constructor-name make-provider-rate-limits)
  requests-per-minute : exact-positive-integer?
  tokens-per-minute : exact-positive-integer?
  concurrent-requests : exact-positive-integer?
Represents rate limit configuration for a provider.

Fields:
  • requests-per-minute - Maximum number of requests allowed per minute

  • tokens-per-minute - Maximum number of tokens that can be processed per minute

  • concurrent-requests - Maximum number of concurrent requests allowed

Examples:
> (define openai-limits (provider-rate-limits 3500 250000 50))
> (define anthropic-limits (provider-rate-limits 5000 300000 100))

5.4.6 Provider Features

value

provider-features-table : (hash/c symbol? (listof symbol?))

Hash table mapping providers to their supported features.

Common Features:
  • 'streaming - Support for streaming responses (incremental generation)

  • 'function-calling - Support for function calling/tool use

  • 'vision - Support for image/visual inputs

  • 'tools - Support for tool use

  • 'fine-tuning - Support for model fine-tuning

  • 'embedding - Support for generating embeddings/vectors

Examples:
> (hash-ref provider-features-table 'openai)

'(streaming function-calling vision tools fine-tuning embedding)

> (hash-ref provider-features-table 'anthropic)

'(streaming vision tools)

5.4.7 Provider Rate Limits

value

provider-rate-limits-table

 : (hash/c symbol? provider-rate-limits?)
Hash table containing rate limit configurations for each provider.

Examples:
> (hash-ref provider-rate-limits-table 'openai)
> (hash-ref provider-rate-limits-table 'anthropic)

5.4.8 Example Usage

Creating a custom request to a provider:

;; Get the Anthropic provider configuration
(define anthropic-config (get-provider-config 'anthropic))
 
;; Extract components from the config
(define endpoint (provider-config-endpoint anthropic-config))
(define headers-fn (provider-config-headers-fn anthropic-config))
(define format-request (provider-config-format-request-fn anthropic-config))
 
;; Build a request
(define messages (list (hash 'role "user" 'content "Hello, Claude!")))
(define system-message "You are Claude, a helpful AI assistant.")
(define request-data (format-request messages system-message "claude-3-opus-20240229"))
 
;; Check if provider supports streaming
(define supports-streaming? (provider-config-supports-streaming? anthropic-config))

Checking model compatibility:

;; Check if a model is valid for a provider
(define valid? (valid-model? 'anthropic "claude-3-opus-20240229"))
 
;; Get all models for a provider
(define all-models (get-available-models 'anthropic))
 
;; Get rate limits
(define rate-limits (get-provider-rate-limits 'anthropic))
(define max-rpm (provider-rate-limits-requests-per-minute rate-limits))

5.5 Custom LLM Provider Configuration

Anuna Research

5.5.1 Overview

The custom provider module enables dynamic registration of user-defined LLM providers and models. This system allows users to configure and use new providers and models without modifying the core code of LLM-MD.

5.5.2 API Reference
5.5.2.1 Provider and Model Reference Functions

procedure

(lookup-provider-by-reference ref-name)  (or/c symbol? #f)

  ref-name : symbol?
Looks up a provider symbol by its reference name. Returns the provider symbol or #f if not found.

(define provider-config
  (hash "name" "test-provider"
        "endpoint" "https://api.test.com/v1/chat"
        "api_key_env" "TEST_API_KEY"
        "reference" "my-provider"))
 
(register-custom-provider! provider-config)
(lookup-provider-by-reference 'my-provider) ; => 'test-provider

procedure

(lookup-model-by-reference ref-name)  (or/c string? #f)

  ref-name : symbol?
Looks up a model name by its reference name. Returns the model name or #f if not found.

(define model-config
  (hash "name" "test-model"
        "context_window" 16384
        "reference" "my-model"))
 
(register-custom-model! 'test-provider model-config)
(lookup-model-by-reference 'my-model) ; => "test-model"
5.5.2.2 Configuration Extraction Functions

procedure

(extract-request-parameters toml-data)  (or/c hash? #f)

  toml-data : hash?
Extracts request parameters from TOML data. Returns a hash of request parameters with symbol keys or an empty hash if not found.

(define toml-data
  (hash "request_parameters"
        (hash "temperature" 0.7
              "max_tokens" 1024
              "thinking" (hash "enabled" #t))))
 
(extract-request-parameters toml-data)
; => #hash((temperature . 0.7)
;         (max_tokens . 1024)
;         (thinking . #hash((enabled . #t))))

procedure

(extract-providers-list toml-data)  (or/c (listof hash?) #f)

  toml-data : hash?
Extracts providers list from TOML data. Returns a list of provider configurations or #f.

(define toml-data
  (hash "providers"
        (list (hash "name" "provider1"
                   "endpoint" "https://api1.test.com")
              (hash "name" "provider2"
                   "endpoint" "https://api2.test.com"))))
 
(extract-providers-list toml-data)

procedure

(extract-models-list toml-data)  (or/c (listof hash?) #f)

  toml-data : hash?
Extracts models list from TOML data. Returns a list of model configurations or #f.

(define toml-data
  (hash "models"
        (list (hash "name" "model1"
                   "context_window" 8192)
              (hash "name" "model2"
                   "context_window" 16384))))
 
(extract-models-list toml-data)
5.5.2.3 Helper Functions

procedure

(string-keys->symbol-keys h)  hash?

  h : hash?
Recursively converts all string keys in a hash table to symbol keys.

(define h (hash "temperature" 0.7
                "nested" (hash "enabled" #t)))
 
(string-keys->symbol-keys h)
; => #hash((temperature . 0.7)
;         (nested . #hash((enabled . #t))))

procedure

(get-hash-value h key default)  any/c

  h : hash?
  key : any/c
  default : any/c
Safely retrieves a value from a hash, trying both string and symbol forms of the key and returning default if neither is present.

(define h (hash 'temperature 0.7
                "max_tokens" 1024))
 
(get-hash-value h 'temperature 0.5) ; => 0.7
(get-hash-value h "max_tokens" 512) ; => 1024
(get-hash-value h 'unknown 0) ; => 0
5.5.3 Request Formatting and Response Parsing

procedure

(create-request-format-fn base-fn    
  provider-sym)  procedure?
  base-fn : procedure?
  provider-sym : symbol?
Creates a request format function that incorporates custom parameters.

(define my-base-fn
  (lambda (messages system-message model)
    (hash 'messages messages
          'model model
          'system_message system-message)))
 
(define enhanced-fn
  (create-request-format-fn my-base-fn 'test-provider))
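
The returned procedure has the same signature as base-fn. A hedged usage sketch, assuming custom parameters were previously registered for 'test-provider:

(enhanced-fn (list (hash 'role "user" 'content "Hi"))
             "Be helpful"
             "test-model")
; => the base request hash, with any registered custom parameters merged in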

procedure

(create-response-parser response-path)  procedure?

  response-path : string?
Creates a response parser function based on a path expression.

(define parser
  (create-response-parser "choices[0].message.content"))
 
(define response
  (hash 'choices
        (list (hash 'message
                   (hash 'content "Hello world")))))
 
(parser response) ; => "Hello world"
5.5.4 Default Functions

procedure

(default-headers-fn api-key)  (listof string?)

  api-key : string?
Default function for generating request headers.

(default-headers-fn "sk-1234")
; => (list "Content-Type: application/json"
;         "Authorization: Bearer sk-1234")

procedure

(default-request-format-fn messages    
  system-message    
  model)  hash?
  messages : (listof hash?)
  system-message : string?
  model : string?
Default function for formatting request payload.

(default-request-format-fn
  (list (hash 'role "user" 'content "Hi"))
  "Be helpful"
  "gpt-4")

procedure

(default-response-parse-fn response)  string?

  response : hash?
Default function for parsing response data with fallbacks for common formats.

(define response
  (hash 'choices
        (list (hash 'message
                   (hash 'content "Hello")))))
 
(default-response-parse-fn response) ; => "Hello"

procedure

(default-token-counter-fn text)  exact-nonnegative-integer?

  text : string?
Default function for counting tokens in text (simple approximation).

(default-token-counter-fn "Hello, world!") ; => 4
5.5.5 Provider Templates

The module includes built-in templates for common API patterns.

5.5.6 Rate Limits

struct

(struct provider-rate-limits (requests_per_minute
    tokens_per_minute
    concurrent_requests)
    #:extra-constructor-name make-provider-rate-limits)
  requests_per_minute : exact-nonnegative-integer?
  tokens_per_minute : exact-nonnegative-integer?
  concurrent_requests : exact-nonnegative-integer?
Structure defining provider rate limits.

(provider-rate-limits 1000 100000 20)

5.6 Enhanced LLM Provider Configuration

5.6.1 Overview

The enhanced provider module serves as a bridge between built-in and custom LLM providers, applying model-specific parameters and configurations. It provides functionality to enhance provider configurations with custom parameters while maintaining compatibility with the base provider interface.

5.6.2 API Reference

procedure

(get-enhanced-provider-config provider-or-urn 
  [model]) 
  provider-config?
  provider-or-urn : (or/c symbol? llm-urn?)
  model : (or/c string? #f) = #f
Returns an enhanced provider configuration that incorporates both base provider settings and any custom model parameters.

The function accepts either a provider symbol or an LLM URN, and optionally a model name. When using an LLM URN, the model parameter is extracted from the URN itself.

If custom parameters exist for the specified model, they are merged with the base provider configuration, enhancing the request formatter to include these parameters.

; Using a provider symbol
(get-enhanced-provider-config 'openai "gpt-4")
 
; Using an LLM URN
(get-enhanced-provider-config
  (llm-urn #:provider 'openai
           #:model "gpt-4"
           #:version "1"))
 
; Example with custom parameters
(define base-config
  (get-enhanced-provider-config 'anthropic "claude-2"))
 
; The enhanced config will include custom model parameters
; if they exist for "claude-2"
(define messages
  (list (hash 'role "user"
              'content "Hello")))
 
((provider-config-format-request-fn base-config)
 messages
 "Be helpful"
 "claude-2")
; Returns a request hash with merged custom parameters
5.6.3 Internal Behavior

The enhanced provider configuration system follows these steps:

5.6.4 Error Handling

The function will raise an error in the following cases:

5.6.5 Logging

The module implements informational logging for:

This logging helps track configuration enhancement operations during development and debugging.

5.7 LLM Utilities

5.7.1 Overview

This module provides utility functions for working with LLM (Large Language Model) URNs, model capabilities, and cost estimation. It includes validation, parsing, and analysis tools for LLM-related operations.

5.7.2 URN Handling

procedure

(validate-urn urn)  boolean?

  urn : llm-urn?
Validates an LLM URN structure. Checks for:
  • Valid llm-urn struct

  • Valid URI syntax

  • Correct "urn:llm:" prefix

  • Existing provider in provider-configs

  • Valid model for the provider (if model specified)

Example:
(validate-urn (llm-urn 'openai "gpt-4")) ; => #t
(validate-urn (llm-urn 'invalid "model")) ; => #f

procedure

(parse-llm-urn urn-string)  llm-urn?

  urn-string : string?
Parses an LLM URN string into a structured representation. Raises an error if the input is not a valid LLM URN string.

Example:
(parse-llm-urn "urn:llm:openai:gpt-4")
; => (llm-urn 'openai "gpt-4")
 
(parse-llm-urn "urn:llm:openai")
; => (llm-urn 'openai "")
 
(parse-llm-urn "invalid")
; => raises error

5.7.3 Model Capabilities

procedure

(get-model-types model)  (listof symbol?)

  model : string?
Returns a list of supported types for a specific model. Returns an empty list if the model is not found.

Example:
(get-model-types "gpt-4")
; => '(text-to-text)
 
(get-model-types "invalid-model")
; => '()

procedure

(model-supports-type? model type)  boolean?

  model : string?
  type : symbol?
Checks if a model supports a specific capability type.

Example:
(model-supports-type? "gpt-4" 'text-to-text)
; => #t
 
(model-supports-type? "gpt-4" 'invalid-type)
; => #f

5.7.4 Pricing and Cost Estimation

procedure

(get-model-pricing model)  (or/c model-pricing? #f)

  model : string?
Retrieves pricing information for a specific model. Returns #f if the model is not found.

Example:
(get-model-pricing "gpt-4")
; => (model-pricing 0.03 0.06 'USD)
 
(get-model-pricing "invalid-model")
; => #f

procedure

(estimate-cost model    
  input-tokens    
  output-tokens)  (or/c real? #f)
  model : string?
  input-tokens : exact-nonnegative-integer?
  output-tokens : exact-nonnegative-integer?
Calculates estimated cost for a request based on input and output tokens. Returns #f if pricing information is unavailable.

The cost is calculated as: (input-tokens / 1000) × input-price + (output-tokens / 1000) × output-price

Example:
(estimate-cost "gpt-4" 1000 500)
; => 0.06 ; Example cost in USD
 
(estimate-cost "invalid-model" 1000 500)
; => #f
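
For the first example, with gpt-4 priced at 0.03 per 1K input tokens and 0.06 per 1K output tokens: (1000 / 1000) × 0.03 + (500 / 1000) × 0.06 = 0.03 + 0.03 = 0.06.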

5.7.5 Contracts

All exported functions are protected by contracts.

5.7.6 Dependencies

This module requires:
  • "types.rkt" - For LLM type definitions

  • "models.rkt" - For model information

  • "providers.rkt" - For provider configurations

  • uri-old - For URI parsing

  • racket/contract - For contract definitions

  • racket/string - For string operations

5.8 llm-md file creation wizard

 (require "src/llms/wizard.rkt") package: llm-md

The wizard module provides functionality for creating new LLM-MD files, either programmatically or through an interactive command-line interface.

5.8.1 Overview

The wizard module allows users to create LLM-MD files using templates and custom configurations. It can run in interactive mode, prompting the user for configuration options, or it can create files non-interactively with specified parameters.

5.8.2 Templates

The wizard supports several built-in templates for different use cases, such as the coding and creative templates shown in the command-line examples below.

5.8.3 Functions

procedure

(create-llm-md-file file-path    
  [#:template template])  void?
  file-path : path-string?
  template : (or/c string? #f) = #f
Creates a new LLM-MD file at the specified path. If the current input port is a terminal, launches the interactive wizard. Otherwise, creates a file using the default settings or the specified template.
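
A minimal sketch (the file name is illustrative; "coding" is one of the template names used in the command-line examples below). When the input port is not a terminal, this creates the file directly:

(create-llm-md-file "project-notes.md" #:template "coding")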

procedure

(create-llm-md-file/interactive [file-path    
  #:template template])  void?
  file-path : (or/c path-string? #f) = #f
  template : (or/c string? #f) = #f
Creates a new LLM-MD file through an interactive wizard that prompts the user for:
  • File path (if not provided)

  • LLM provider

  • Model

  • System message

  • Initial user message

  • Model parameters (temperature, max tokens, thinking capabilities)

When a template is specified, provides sensible defaults for system and user messages.

procedure

(create-llm-md-file/params file-path 
  [#:provider provider 
  #:model model 
  #:system-message system-message 
  #:user-message user-message 
  #:parameters parameters 
  #:template template]) 
  void?
  file-path : path-string?
  provider : symbol? = 'anthropic
  model : (or/c string? #f) = #f
  system-message : string? = "You are a helpful assistant."
  user-message : (or/c string? #f) = #f
  parameters : (or/c hash? #f) = #f
  template : (or/c string? #f) = #f
Creates a new LLM-MD file with the specified parameters.
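
A sketch of a fully parameterised call. The file name, messages, and parameter values are illustrative, and the symbol keys in the parameters hash are an assumption:

(create-llm-md-file/params "geography.md"
                           #:provider 'anthropic
                           #:model "claude-3-opus-20240229"
                           #:system-message "You are a helpful geography teacher."
                           #:user-message "What is the capital of France?"
                           #:parameters (hash 'temperature 0.7 'max_tokens 1024))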

procedure

(get-template-defaults template-name)

  string? (or/c string? #f)
  template-name : (or/c string? #f)
Returns default system message and optional user message for the specified template.
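
Because two values are returned, bind them with define-values. A small sketch, again assuming the "coding" template:

(define-values (system-msg user-msg) (get-template-defaults "coding"))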

5.8.4 Command-line Usage

The wizard can be invoked directly from the command line:

llm-md create myfile.md

llm-md create myfile.md --template coding

llm-md create --template creative

When running interactively, the wizard provides a user-friendly interface for configuring all aspects of the LLM-MD file.

6 LLM-MD Grammar Specification

6.1 Overview

This section specifies the formal grammar for LLM-MD (Large Language Model Markdown) format.

The reference implementation is available at gitlab.com/anuna/llm-md/parser.

6.2 Grammar Rules

6.2.1 Terminal Definitions
6.2.2 Top-Level Structure

llm-md-file ::= messages

messages ::= message | message messages

message ::= context-message | user-message | agent-message

6.2.3 Context Messages

context-message ::= ### context >>> newline toml-section

toml-section ::= toml-lines

toml-lines ::= toml-line | toml-line newline toml-lines

toml-line ::= toml-tokens

toml-tokens ::= toml-token | toml-token toml-tokens

toml-token ::= text | uri | @ variable-name | newline | = | " | ( | ) | [ | ]( | : | >>> | >>= | =>> | { | } | !! | ?? | ;; | ``` | _ | $

6.2.4 User and Agent Messages

user-message ::= ### user [agent-chain] newline user-message-content

agent-message ::= ### agent-name [agent-chain] newline agent-message-content

agent-chain ::= agent-elements | agent-chain operation agent-elements

agent-elements ::= agent-element | ( agent-chain )

agent-element ::= agent-with-label | @ variable-name | ε

agent-with-label ::= text | text : text

operation ::= >>= (Fan-Out) | >>> (Sequential Flow) | =>> (Fan-In)

6.2.5 Message Content

User message content supports full LLM-MD syntax:

user-message-content ::= content-items

content-items ::= ε | content-item content-items

content-item ::= text | " text " | link | image | llm-md-command | comment | ``` escaped-content ``` | newline

Agent message content is treated as plain text:

agent-message-content ::= plain-text-items

plain-text-items ::= ε | plain-text-item plain-text-items

plain-text-item ::= text | /* All special syntax is treated as plain text */ | newline

6.2.6 Escaped Content

escaped-content ::= escaped-lines

escaped-lines ::= escaped-line | escaped-line newline | escaped-line newline escaped-lines

escaped-line ::= escaped-tokens

escaped-tokens ::= escaped-token | escaped-token escaped-tokens

escaped-token ::= text | uri | variable-name | = | " | ( | ) | [ | ]( | : | >>> | >>= | =>> | { | } | !! | ?? | ;; | _ | $

6.2.7 Links and Images

link ::= [ text ]( uri ) | [ text ]( uri text ) | [ ]( uri )

image ::= ![ text ]( uri ) | ![ text ]( uri text ) | ![ ]( uri )

6.2.8 Commands

llm-md-command ::= {{ [force-modifier] command-content }}

force-modifier ::= !!

command-content ::= control-statement | assignment | variable-operation | shell-command | ?? | comment | @ variable-name | text

6.2.9 Variable Operations and Assignments

assignment ::= @ variable-name = expression

variable-operation ::= @ variable-name expression

expression ::= text | " text " | @ variable-name | ?? | llm-md-command

6.2.10 Shell Commands

shell-command ::= $ remaining-text | @ variable-name $ remaining-text | text $ remaining-text

remaining-text ::= shell-token | shell-token remaining-text

shell-token ::= text | uri | variable-name | = | newline | " | ( | ) | [ | ]( | : | >>> | >>= | =>> | { | !! | ?? | ;; | ``` | _ | $

6.2.11 Control Statements

control-statement ::= @return expression | @break | @continue | @input

6.2.12 Comments

comment ::= ;; text newline
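
To make the productions concrete, here is a small hand-derived sketch of a file accepted by the grammar above. Agent names, configuration values, and the URI are illustrative only, and real llm-md files may follow additional conventions not captured here:

### context >>>
model = "claude-3-opus-20240229"
temperature = 0.7

### user >>> summariser
Please summarise the linked article.
[Article](https://example.com/post)
{{ @topic = "summaries" }}
;; comments are ignored

### summariser
The article argues that ...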

7 License

This documentation and the associated software are provided under the Apache License Version 2.0  (Hugo O'Connor 2024)

Citation

Hugo O'Connor. llm-md: A Domain-Specific Language for Large Language Model Conversations. Anuna Research, AR-2024-01, 2024. https://llm.md Version 0.1.0-alpha "Glossolalia"