Convention-based meta-skill dispatcher for Claude. 78 modules, zero context overhead for the ones you don't use.
Anthropic's skill system loads in 3 tiers. Reflex adds a 4th: convention-based module dispatch that routes in Python before the LLM sees anything.
Only the trigger word "reflex" and a one-line description. Loaded into every conversation at startup.
Routing instructions. Tells Claude to run dispatch.py and follow the protocol output. No module content yet.
dispatch.py resolves the module, extracts params, builds chains, runs resolvers — all in Python. Zero LLM tokens.
Only the matched module's instructions enter context. 77 other modules remain on disk, unread, costing nothing.
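The fourth tier can be sketched in a few lines. This is a minimal illustration, not the real dispatch.py: the `modules/<name>/MODULE.md` layout is taken from the text, while the function name and error handling are assumptions.

```python
from pathlib import Path

MODULES_DIR = Path("modules")  # convention from the text: modules/<name>/MODULE.md

def dispatch(command: str) -> str:
    """Resolve a module by folder name, pure Python, zero LLM tokens.

    Only the matched module's MODULE.md is read; every other module
    stays on disk, unread.
    """
    name, *_args = command.split()
    module_md = MODULES_DIR / name / "MODULE.md"
    if not module_md.is_file():
        raise SystemExit(f"unknown module: {name}")
    return module_md.read_text()  # the only instructions that enter context
```

The point of the sketch: routing is a filesystem lookup, so cost is independent of how many modules exist.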
A standard Claude skill puts all instructions in one SKILL.md. Reflex splits them across 78 modules and loads only what's needed.
Modules connect with + syntax. Each step writes structured JSON to disk. Downstream steps read it. The workspace is the data bus.
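A toy version of the data bus, assuming one JSON file per completed step in a `workspace/` directory (the directory name and file-naming scheme are illustrative, not the project's actual layout):

```python
import json
from pathlib import Path

WORKSPACE = Path("workspace")  # assumed: one JSON artifact per chain step

def run_chain(chain: str) -> None:
    """Split 'a+b+c' into steps; each step records where its input came from.

    The dict built per step is a stand-in for real module output. What
    matters is the shape: structured JSON on disk, read by the next step.
    """
    WORKSPACE.mkdir(exist_ok=True)
    upstream = None
    for i, step in enumerate(chain.split("+")):
        result = {"step": step, "input": upstream}
        out = WORKSPACE / f"{i:02d}-{step}.json"
        out.write_text(json.dumps(result))
        upstream = str(out)  # the downstream step reads this file
```

Because every step's output is a file, provenance falls out for free: each artifact names the artifact it consumed.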
Every claim in the final document traces back through the chain to its source. The certify module maps this explicitly.
Personas make the module system accessible without knowing the grammar. The user talks. Walt decides when to invoke modules.
The parallel architecture isn't cosmetic. It protects a type-level invariant.
If personas lived in modules/, then websearch+walt would be valid syntax — jamming a persistent behavioral overlay into a bounded chain step. The parallel directory means the composition engine never sees it. No flags. No exclusion lists. The filesystem is the type system.
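Here is the invariant as code, under the same assumptions as before (a validator like this is hypothetical; the directory layout is from the text). The composer globs only modules/, so a persona name in a chain fails by construction:

```python
from pathlib import Path

def chain_steps(chain: str, modules_dir: Path = Path("modules")) -> list[str]:
    """Validate each '+' step against the modules/ tree only.

    personas/ is never consulted, so 'websearch+walt' is rejected
    without any deny-list: walt simply isn't in the namespace.
    """
    known = {p.name for p in modules_dir.iterdir() if p.is_dir()}
    steps = chain.split("+")
    unknown = [s for s in steps if s not in known]
    if unknown:
        raise ValueError(f"not a module: {', '.join(unknown)}")
    return steps
```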
dispatch.py has never been modified to support a new module. Adding capability means adding a folder.
```shell
mkdir modules/my-module
printf '# My Module\n\nDo the thing.\n' > modules/my-module/MODULE.md
```
That's it. dispatch.py discovers it. `reflex my-module` works. No registry, no config, no engine changes.
```shell
mkdir personas/strategist
printf '# Strategist\n\nYou think in frameworks.\n' > personas/strategist/PERSONA.md
```
Same convention. persona.py discovers it. `reflex persona strategist` works. dispatch.py never knows it exists.
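Both conventions reduce to one discovery rule: a folder containing the marker file is a capability. A plausible shared helper (the function name is invented; the marker filenames are the project's):

```python
from pathlib import Path

def discover(root: Path, marker: str) -> dict[str, Path]:
    """Map capability name -> marker file for every folder that has one.

    No registry, no config: the directory listing is the registry.
    """
    return {p.parent.name: p for p in root.glob(f"*/{marker}")}

# dispatch.py and persona.py would each call it with their own convention:
#   discover(Path("modules"), "MODULE.md")
#   discover(Path("personas"), "PERSONA.md")
```

A folder without the marker file is simply invisible, which is why adding capability is `mkdir` plus one file and nothing else.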