Building the Future of Intelligent Agents

Exploring adaptive AI systems through experimental frameworks and autonomous agent architectures

🧪 ALPHA EXPERIMENTS

All projects are experimental research and will remain in permanent alpha status

Experiments

├── jit-agent-poc ALPHA

Unified Qwen Architecture POC

Part of the Agent Forge framework. Demonstrates a unified approach that uses a single fine-tuned Qwen2.5-Coder-1.5B model as both Orchestrator and Translator, eliminating multi-model complexity through specialized LoRA training (see the sketch below).

Unified Model · Qwen 2.5 Coder · LoRA Fine-tuning
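As a rough illustration of the single-model idea, the sketch below loads the base Qwen2.5-Coder-1.5B model with one LoRA adapter and prompts it in its two roles through different system prompts. It assumes the Hugging Face transformers/peft stack; the adapter path and role prompts are hypothetical placeholders, not the POC's actual artifacts.

```python
# Minimal sketch: one base model + one LoRA adapter serving both agent roles.
# Assumes transformers + peft; the adapter path below is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, "path/to/unified-lora-adapter")  # hypothetical adapter

ROLES = {
    "orchestrator": "You are the Orchestrator: break the task into ordered steps.",
    "translator": "You are the Translator: convert the selected step into executable code.",
}

def run(role: str, task: str, max_new_tokens: int = 256) -> str:
    """Prompt the same unified model in a given role."""
    messages = [{"role": "system", "content": ROLES[role]},
                {"role": "user", "content": task}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

plan = run("orchestrator", "Summarize a CSV file and plot the top 5 rows.")
code = run("translator", plan)
```

Keeping a single base model and switching roles via prompts is what removes the multi-model serving overhead; only one set of weights (plus one adapter) stays in memory.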

├── jit-agent-learn ALPHA

Learning & Adaptation POC

Part of the Agent Forge framework. An extension of the JIT agent architecture focused on reinforcement learning, allowing agents to improve their performance through experience and feedback loops using Qwen models (see the sketch below).

Reinforcement Learning · Qwen Models · Self-Improvement
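To make the experience-and-feedback loop concrete, here is a minimal sketch of an experience buffer that records scored agent outputs and keeps the highest-reward episodes for a later fine-tuning pass. The reward scheme, file format, and class names are assumptions for illustration, not the POC's actual design.

```python
# Minimal sketch of an experience/feedback loop with a persistent buffer.
# The reward signal (e.g. "did the tests pass?") is a hypothetical stand-in.
import json
from dataclasses import dataclass, asdict

@dataclass
class Episode:
    task: str
    response: str
    reward: float  # e.g. 1.0 if the generated code passed its tests, 0.0 otherwise

class ExperienceBuffer:
    def __init__(self, path: str = "experience.jsonl"):
        self.path = path
        self.episodes: list[Episode] = []

    def record(self, task: str, response: str, reward: float) -> None:
        ep = Episode(task, response, reward)
        self.episodes.append(ep)
        with open(self.path, "a") as f:  # persist episodes across sessions
            f.write(json.dumps(asdict(ep)) + "\n")

    def best(self, k: int = 8) -> list[Episode]:
        """High-reward episodes that could seed the next LoRA fine-tuning pass."""
        return sorted(self.episodes, key=lambda e: e.reward, reverse=True)[:k]

buffer = ExperienceBuffer()
buffer.record("Write a function that reverses a string.",
              "def rev(s): return s[::-1]", reward=1.0)
print(buffer.best(1))
```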

└── jit-agent-memory ALPHA

Persistent Memory POC

Part of the Agent Forge framework. The memory component of the JIT agent trilogy, adding persistent memory capabilities to enable contextual awareness and long-term information retention using Qwen models (see the sketch below).

Persistent State · Qwen Models · Memory Management
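A minimal sketch of what persistent memory can look like, assuming a JSON-file store with simple keyword recall; the real POC may use embeddings, a database, or a different recall strategy.

```python
# Minimal sketch of persistent agent memory: a JSON file that survives restarts,
# plus naive keyword recall. Store format and class names are illustrative only.
import json, time
from pathlib import Path

class PersistentMemory:
    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text: str, tags: list[str] | None = None) -> None:
        self.entries.append({"text": text, "tags": tags or [], "ts": time.time()})
        self.path.write_text(json.dumps(self.entries, indent=2))  # persists across sessions

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Return the most recent entries whose text or tags mention the query."""
        q = query.lower()
        hits = [e for e in self.entries
                if q in e["text"].lower() or any(q in t.lower() for t in e["tags"])]
        return [e["text"] for e in sorted(hits, key=lambda e: e["ts"], reverse=True)[:k]]

memory = PersistentMemory()
memory.remember("User prefers TypeScript examples.", tags=["preferences"])
print(memory.recall("typescript"))
```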

About Evolving Agents Labs

We're advancing the frontier of autonomous AI through experimental frameworks and research prototypes. Our work explores early-stage concepts in adaptive agent systems; all projects remain permanently in alpha status as ongoing research experiments.

📢 Our Research Evolution

Phase 1: Evolving Agents Toolkit (EAT) - Sunset

Our first project, the Evolving Agents Toolkit (EAT), was officially discontinued in July 2025. While EAT demonstrated powerful concepts in multi-agent orchestration with a MongoDB backend, we recognized that its complex Python architecture was over-engineered for achieving adaptive agent behavior.

Phase 2: LLMunix - Simplified Evolution

EAT's concepts were dramatically simplified and reimplemented in LLMunix, a Pure Markdown Operating System that achieves the same adaptive agent goals through elegant simplicity. We moved from EAT's multi-component Python architecture with a MongoDB backend to LLMunix's pure markdown definitions interpreted by LLM runtime engines: the same adaptive capabilities with a roughly 10x simpler implementation.

Phase 3: Agent Forge JIT POCs - Current Focus

When Claude Code added markdown-defined sub-agents as an official feature, our original markdown-based agent concept was validated. We now focus on Agent Forge and the JIT POCs, exploring just-in-time compilation, hybrid architectures, and benchmarkable performance improvements over pure LLM approaches.

Adaptive Behavior Research

Experimental systems that explore how agents might modify their decision-making processes based on context and interaction patterns.

Pure Markdown Architecture

Exploring the use of markdown as a full operating system specification, enabling clean separation of behavior, state, and execution logic.
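As a rough illustration of the "markdown as specification" idea, the sketch below parses a hypothetical agent definition into behavior, state, and execution sections that a runtime could hand to an LLM. The section names and file layout are assumptions for illustration, not LLMunix's actual format.

```python
# Minimal sketch: a markdown agent definition split into Behavior / State / Execution
# sections, parsed by a tiny runtime loader. The layout here is hypothetical.
import re

AGENT_MD = """\
# Agent: summarizer

## Behavior
Summarize any input text in three bullet points.

## State
last_input: none

## Execution
Send the Behavior section plus the user input to the LLM and return its reply.
"""

def parse_agent(markdown: str) -> dict[str, str]:
    """Split a markdown agent definition into named sections."""
    sections = {}
    for match in re.finditer(r"^## (\w+)\n(.*?)(?=^## |\Z)", markdown, re.S | re.M):
        sections[match.group(1).lower()] = match.group(2).strip()
    return sections

agent = parse_agent(AGENT_MD)
print(agent["behavior"])  # the behavior text an LLM runtime would interpret
```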