# Cognitive Layer
The Cognitive layer is the intelligence center — it handles natural language understanding, memory, learning, and self-modification.
## Pipeline
Every natural-language request flows through this pipeline:
- Working memory assembles system state (agent health, trust scores, Hebbian weights, capabilities) within a token budget
- Episodic recall finds similar past interactions for context (top-3 by keyword-overlap cosine similarity)
- Workflow cache checks for previously successful DAG patterns (exact match, then fuzzy)
- LLM decomposer converts text into a `TaskDAG` — a directed acyclic graph of typed intents with dependencies
- Attention manager scores tasks: `urgency × relevance × deadline_factor × dependency_bonus`
- DAG executor runs independent intents in parallel, respects dependency ordering
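The last two stages — attention scoring and dependency-ordered parallel execution — can be sketched as below. This is an illustrative reading of the formula above, not ProbOS's actual API: `Task`, `score_task`, and the `dependency_bonus` weighting are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    urgency: float
    relevance: float
    deadline_factor: float
    deps: list = field(default_factory=list)

def score_task(t: Task, dependents: int) -> float:
    # Hypothetical bonus: tasks that unblock more downstream work score higher.
    dependency_bonus = 1.0 + 0.1 * dependents
    return t.urgency * t.relevance * t.deadline_factor * dependency_bonus

def execution_waves(tasks: dict) -> list:
    """Group tasks into waves: every task in a wave has all its
    dependencies satisfied, so a wave could run in parallel."""
    done, waves = set(), []
    while len(done) < len(tasks):
        wave = [n for n, t in tasks.items()
                if n not in done and all(d in done for d in t.deps)]
        if not wave:
            raise ValueError("cycle detected — not a DAG")
        waves.append(sorted(wave))
        done.update(wave)
    return waves

tasks = {
    "fetch": Task("fetch", 0.9, 1.0, 1.0),
    "parse": Task("parse", 0.8, 1.0, 1.0, deps=["fetch"]),
    "store": Task("store", 0.5, 0.9, 1.0, deps=["parse"]),
    "notify": Task("notify", 0.7, 0.8, 1.0, deps=["parse"]),
}
print(execution_waves(tasks))  # [['fetch'], ['parse'], ['notify', 'store']]
```

`store` and `notify` share one wave because both depend only on `parse` — that is the independence the DAG executor exploits.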
## Dynamic Intent Discovery
Each agent class declares structured IntentDescriptor metadata. The decomposer's system prompt is assembled at runtime from whatever agents are registered. New agent types self-integrate without any configuration changes.
This means adding a new agent type makes its intents available to the LLM automatically — no prompt editing, no routing tables, no configuration files.
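A minimal sketch of the idea, assuming a registry keyed by agent name — `IntentDescriptor`'s real fields and the registration mechanism in ProbOS may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentDescriptor:
    name: str
    description: str
    parameters: tuple = ()

REGISTRY: dict = {}  # agent name -> list of IntentDescriptor

def register_agent(agent_name: str, intents: list) -> None:
    REGISTRY[agent_name] = intents

def build_system_prompt() -> str:
    """Assemble the decomposer's system prompt from whatever is registered."""
    lines = ["You decompose requests into these intents:"]
    for agent, intents in REGISTRY.items():
        for i in intents:
            params = ", ".join(i.parameters) or "none"
            lines.append(f"- {i.name} ({agent}): {i.description} [params: {params}]")
    return "\n".join(lines)

register_agent("weather", [IntentDescriptor("get_forecast", "Fetch a forecast", ("city",))])
register_agent("files", [IntentDescriptor("search_files", "Find files by pattern", ("glob",))])
print(build_system_prompt())
```

Registering a new agent is the only step: the next prompt build picks up its intents with no routing table or prompt edit.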
## Self-Modification
When ProbOS encounters a capability gap (no agent can handle a request), it designs a new agent:
```
Capability gap detected
→ LLM generates agent code
→ CodeValidator static analysis
→ SandboxRunner isolation test
→ Probationary trust assigned
→ SystemQA smoke tests
→ BehavioralMonitor tracks post-deployment
```
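The gating logic can be sketched as a sequence of veto points, where each stage must pass before the agent earns probationary trust. This is a toy reduction: the static-analysis gate here is a bare AST check, and `deploy_generated_agent`, `FORBIDDEN_CALLS`, and the boolean stage results are illustrative assumptions, not the real CodeValidator/SandboxRunner interfaces.

```python
import ast

FORBIDDEN_CALLS = {"eval", "exec", "__import__"}  # assumed denylist

def validate_code(source: str) -> bool:
    """Static-analysis gate: reject syntax errors and forbidden calls."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                return False
    return True

def deploy_generated_agent(source: str, sandbox_ok: bool, smoke_ok: bool):
    """Run the gates in order; any failure short-circuits deployment."""
    if not validate_code(source):
        return ("rejected", "static analysis failed")
    if not sandbox_ok:
        return ("rejected", "sandbox test failed")
    if not smoke_ok:
        return ("rejected", "smoke tests failed")
    # BehavioralMonitor takes over from here, post-deployment.
    return ("deployed", "probationary")

print(deploy_generated_agent("def handle(req): return req.upper()", True, True))
print(deploy_generated_agent("eval('2+2')", True, True))
```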
Agents can also be designed collaboratively via the /design command.
## Correction Feedback Loop
Human corrections are the richest learning signal:
- CorrectionDetector identifies when the user is correcting a previous result
- AgentPatcher modifies the responsible agent
- The patched agent is hot-reloaded
- The original request is auto-retried
- Trust, Hebbian weights, and episodic memory are updated
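The detection step can be sketched with a simple heuristic: treat a message as a correction only when there is a prior result to correct and the message carries corrective cues. This is an assumed toy classifier, not ProbOS's CorrectionDetector, whose signals are certainly richer.

```python
# Assumed cue list for illustration only.
CORRECTIVE_CUES = ("no,", "that's wrong", "actually", "not what i", "should be")

def is_correction(message: str, has_previous_result: bool) -> bool:
    """Distinguish a correction from a fresh request (toy heuristic)."""
    if not has_previous_result:
        return False  # nothing exists yet to correct
    lowered = message.lower()
    return any(cue in lowered for cue in CORRECTIVE_CUES)

print(is_correction("No, I meant the weekly report", True))   # True
print(is_correction("Show me the weekly report", True))       # False
print(is_correction("Actually, use Celsius", False))          # False
```

Getting this boundary right matters: a missed correction wastes the richest learning signal, while a false positive patches an agent that did nothing wrong.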
## Dreaming
During idle periods, the dreaming engine:
- Replays recent episodes to strengthen successful pathways
- Weakens failed pathways
- Prunes dead connections
- Adjusts trust scores
- Pre-warms predictions for likely upcoming requests
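The consolidation pass over Hebbian weights can be sketched as below, assuming weights live in a dict keyed by `(source_agent, target_agent)` pathways. The learning rates and pruning threshold are made-up parameters, not ProbOS's real values.

```python
STRENGTHEN, WEAKEN, PRUNE_BELOW = 0.10, 0.15, 0.05  # assumed rates

def consolidate(weights: dict, episodes: list) -> dict:
    """Replay episodes: reinforce successful pathways, decay failed ones,
    then prune connections that have decayed to near zero."""
    out = dict(weights)
    for pathway, success in episodes:
        if pathway in out:
            out[pathway] = (min(1.0, out[pathway] + STRENGTHEN) if success
                            else max(0.0, out[pathway] - WEAKEN))
    return {p: w for p, w in out.items() if w >= PRUNE_BELOW}

weights = {("planner", "weather"): 0.6, ("planner", "files"): 0.1}
episodes = [(("planner", "weather"), True), (("planner", "files"), False)]
print(consolidate(weights, episodes))
```

After the pass, the successful `planner → weather` pathway is strengthened while the failed `planner → files` connection decays below the threshold and is pruned — the dead-connection cleanup described above.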
## Source Files
| File | Purpose |
|---|---|
| `cognitive/decomposer.py` | NL → TaskDAG + DAG executor |
| `cognitive/prompt_builder.py` | Dynamic system prompt assembly |
| `cognitive/llm_client.py` | OpenAI-compatible + mock client |
| `cognitive/cognitive_agent.py` | Instructions-first LLM agent base |
| `cognitive/working_memory.py` | Bounded context assembly |
| `cognitive/episodic.py` | ChromaDB semantic long-term memory |
| `cognitive/attention.py` | Priority scoring + focus tracking |
| `cognitive/dreaming.py` | Offline consolidation + pre-warm |
| `cognitive/workflow_cache.py` | LRU pattern cache |
| `cognitive/agent_designer.py` | LLM designs new agents |
| `cognitive/self_mod.py` | Self-modification pipeline orchestrator |
| `cognitive/code_validator.py` | Static analysis for generated code |
| `cognitive/sandbox.py` | Isolated execution for untrusted agents |
| `cognitive/skill_designer.py` | Skill template generation |
| `cognitive/skill_validator.py` | Skill safety validation |
| `cognitive/behavioral_monitor.py` | Runtime behavior tracking |
| `cognitive/feedback.py` | Human feedback → trust/Hebbian/episodic |
| `cognitive/correction_detector.py` | Distinguishes corrections from new requests |
| `cognitive/agent_patcher.py` | Hot-patches designed agent code |
| `cognitive/strategy.py` | StrategyRecommender (skill attachment) |
| `cognitive/dependency_resolver.py` | Auto-install agent dependencies (uv) |
| `cognitive/emergent_detector.py` | 5 algorithms for emergent behavior |
| `cognitive/embeddings.py` | Embedding utilities |
| `cognitive/research.py` | Web research phase for agent design |