🧪 EXPERIMENTAL AI OPERATING SYSTEM

LLMunix ALPHA

An experimental Pure Markdown Operating System research project in which everything is either an agent or a tool defined in a markdown document. Claude Code serves as the runtime engine that interprets these markdown specifications.

โš ๏ธ Experimental Research - This project is a research prototype and will remain permanently in alpha status.

Watch LLMunix in Action

See how LLMunix boots and executes intelligent tasks with adaptive behavior management

LLMunix Demo

🧠 Sentient State Architecture

Behavioral constraints evolve dynamically based on user sentiment, task context, and execution events. The system maintains modular state files that enable atomic updates and resumable execution.

🎯 Adaptive Behavior Management

System behavior adapts in real-time through evolving constraints. Priority shifts between speed and comprehensiveness, communication style adjusts to user preferences, and error tolerance adapts to task criticality.

๐Ÿ” Intelligent Memory System

Structured experience database with YAML frontmatter enables intelligent querying of past executions. Pattern recognition across experiences guides current decision-making and optimization strategies.
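For example, a single memory entry might be a markdown file whose YAML frontmatter makes it queryable by task type and outcome - a minimal sketch, using illustrative field names rather than the project's fixed schema:

---
task_type: research_briefing
outcome: success
constraints_at_completion:
  priority: comprehensiveness
  active_persona: detailed_analyst
lesson: "Batch source fetches and summarize incrementally to stay within limits."
---
Full execution notes for this run follow in the body of the file.

A QueryMemoryTool-style lookup can then filter entries on these fields and surface the recorded lessons when a similar task begins.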

๐Ÿ› ๏ธ Real Tool Integration

Maps to Claude Code's native tools with graceful degradation and intelligent error recovery, keeping real executions robust and cost-aware even when individual tools fail or hit limits.
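As a sketch of how such a mapping could be expressed in markdown (the field names and fallback wording are assumptions; only the idea of mapping to native tools with graceful degradation comes from the project description):

---
tool: WebFetcherTool
maps_to: WebFetch             # assumed native Claude Code tool
on_error: degrade gracefully  # return partial or cached content, log the limitation
cost_hint: low
---
Fetch the requested URL and return its content; if the fetch is rate-limited or
blocked, report what was retrieved and record the gap in workspace/state/history.md.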

📚 Pure Markdown Framework

Everything is defined in markdown documents - no code generation required. System behavior emerges from Claude interpreting markdown specifications as a functional operating system.
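To make the idea concrete, a hypothetical agent definition could be nothing more than a markdown file that Claude Code reads and enacts (the agent name and fields below are invented for illustration):

---
agent: SummarizerAgent
inputs: [source_text]
outputs: [workspace/summary.md]
---
Read source_text, produce a concise summary that preserves key facts and sources,
and write the result to workspace/summary.md.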

🔄 Training Data Generation

Automatic conversion of real execution experiences into fine-tuning datasets. Complete execution traces with behavioral context enable training of increasingly autonomous agents.
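Conceptually, each completed run could be flattened into one training record pairing the goal and behavioral context with the executed plan and outcome - a hypothetical sketch, not the project's actual dataset format:

---
goal: "Monitor 5 tech news sources and generate an intelligence briefing"
constraints_at_start: {priority: comprehensiveness, error_tolerance: moderate}
constraints_at_end: {priority: speed_and_clarity, error_tolerance: flexible}
steps_executed: [fetch_sources, extract_topics, rank_trends, write_briefing]
outcome: success_with_degradation   # two sources unavailable due to API limits
---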

Quick Start

1. Boot LLMunix
boot llmunix

2. Execute Intelligent Tasks
llmunix execute: "Monitor 5 tech news sources, extract trending topics, and generate intelligence briefing"
# System adapts constraints based on API limitations, maintains intelligence value through graceful degradation

llmunix execute: "Research AI safety papers - query memory for past research patterns and apply successful approaches"
# QueryMemoryTool consults past experiences, MemoryAnalysisAgent recommends optimal strategy

llmunix execute: "Urgent: analyze this legal document for risks in 10 minutes"
# System detects urgency, adapts constraints: priority='speed_and_clarity', persona='concise_assistant'

Architecture

LLMunix implements a modular state architecture with specialized files for different aspects of execution state:

workspace/state/
├── plan.md         # Execution steps and metadata
├── context.md      # Knowledge accumulation
├── variables.json  # Structured data passing
├── history.md      # Execution log
└── constraints.md  # Behavioral modifiers (sentient state)
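For illustration, plan.md might record the step list and its progress so an interrupted run can resume where it left off - a hypothetical sketch (headings and status values are assumptions):

---
goal: "Generate intelligence briefing from 5 tech news sources"
current_step: 3
status: in_progress
---
1. Fetch sources (done)
2. Extract trending topics (done)
3. Rank topics and draft briefing (in progress)
4. Write briefing to workspace/briefing.md (pending)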

Key Behavioral Modifiers (see the sketch after this list):
• user_sentiment: Detected emotional state (neutral, pleased, frustrated, stressed)
• priority: Execution focus (speed_and_clarity, comprehensiveness, cost_efficiency)
• active_persona: Communication style (concise_assistant, detailed_analyst, proactive_collaborator)
• error_tolerance: Risk acceptance level (strict, moderate, flexible)
• human_review_trigger_level: Guidance threshold (low, medium, high)
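A hypothetical constraints.md snapshot tying these modifiers together (field names and values come from the list above; the YAML layout itself is an assumption):

---
user_sentiment: stressed
priority: speed_and_clarity
active_persona: concise_assistant
error_tolerance: moderate
human_review_trigger_level: low
---
Adapted after urgency was detected in the request (compare the legal-document example under Quick Start).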